Fake News Bubble? by Digitalis

In the last few weeks, Facebook, Twitter and Google have all taken steps to curb fake news, misinformation, and harassment on the internet, announcing plans to give users increased control over what they see online.

The impact of fake news and its amplification by the so-called social media filter bubble has been hotly debated over the past few months, following the UK EU referendum and the US presidential election.

In a recent news conference, Barack Obama referred to the issue of fake news as a ‘threat to democracy’, and analysis carried out by BuzzFeed showed how “fake election news stories outperformed real news on Facebook” in the final months of the US election campaign.

Facebook cracks down on fake news

This week both Facebook and Google have responded with measures to address the issue, making it clear they would not tolerate misinformation and would target fake news sites’ revenue sources.

Mark Zuckerberg released a statement on his Facebook page acknowledging that “more work needs to be done”, and outlined the projects currently underway.

While there is consensus that there should be a crackdown on false information on social media, deciding how content gets classified as ‘true’ is rather more complicated. And this classification needs to be done without infringing on personal opinions or restricting freedom of speech.

As Zuckerberg puts it, “The problems here are complex, both technically and philosophically”, and Facebook “doesn’t want to become the arbiters of truth” itself. As such, many of the projects detailed by Zuckerberg rely on input from Facebook’s community of users.

As well as developing “better technical systems to detect what people will flag as false before they do it themselves”, Facebook has said it is keen to listen and learn from third parties, such as fact checking organizations and journalists, and will make it easier for all users of the platform to report questionable content.

Twitter helps users fight harassment

Meanwhile, Twitter has been making some changes of its own in a bid to give its users more control over what they see.

Released last week, an update to the platform’s functionality will allow users to mute specific key words, or entire conversations, helping people to avoid spoilers and conversations they don’t want to be a part of.

But the new features are also part of a wider push by Twitter to address harassment on the platform. Alongside the new muting options, Twitter has also improved the options users have for reporting hate speech, and it says it has improved its internal tools and training to deal more effectively with the content that’s reported.

Echoing Zuckerberg, Del Harvey – who leads Twitter’s trust and safety efforts – acknowledged the complexities around implementing changes that impact who can see what on the platform: “we’re trying to be very thoughtful about the decisions we make, and make sure there aren’t unintended and negative consequences.”

Like Facebook, Twitter appears to be tackling its issues by handing more control to its users, empowering them to choose what content they see. Despite the increasingly important role both platforms play in disseminating news, neither seems keen to take on editorial responsibility any time soon!
