Facebook’s Safe Harbour for Hate Speech?

Centre for Media, Technology and Democracy

 
 
 

It won’t be news to anyone that social media platforms are not only incredibly powerful – they also don’t always wield that power in the public interest. While there are countless examples of this behaviour throughout their histories, the ongoing pandemic has put it on full display.

Facebook, Twitter, Instagram, Reddit, YouTube, and others have all helped to spread misinformation about COVID-19, leading people to believe, among many other things, that the virus was developed in a Chinese laboratory and that various remedies – natural or pharmaceutical – would protect them from it or cure it. It hasn’t helped that some of that misinformation has come directly from the President of the United States, Donald Trump, who has pushed the debate around content moderation firmly into the public dialogue.

Platform companies have long held that Trump’s posts were largely exempt from the usual content moderation rules because he held such a powerful position; it was in the public interest for people to be able to see what he was tweeting even if it broke the rules that would apply to those with less power and privilege. 

However, in late May, Twitter fact-checked one of Trump’s tweets about mail-in voting, then hid a tweet Trump sent a few days later about the growing protests in Minneapolis, flagging it for “glorifying violence.” Facebook, meanwhile, refused to do the same, with CEO Mark Zuckerberg claiming the platform shouldn’t be “the arbiter of truth.”

In response to Twitter’s actions, Trump signed a new executive order taking aim at Section 230 of the Communications Decency Act, which shields platforms from liability for content posted by third parties, even if they edit or remove it. The order also revamps how users submit complaints to the White House when they disagree with moderation decisions, and would strip liability protections from platforms whose moderation practices are deemed “deceptive.”

These developments are critical when considering the broader discussion of how to regulate social media platforms, especially in the aftermath of the pandemic.

In a recent essay, Mike Ananny, an Associate Professor of Communication and Journalism at USC Annenberg, writes about the need to understand the ideologies and financial incentives embedded in social media platforms, particularly how “their machine learning algorithms, artificial intelligence models, and recommendation systems are actually driven by their values and goals (and are not simply objective mirrors of society).” 

He provides ample evidence against platforms’ usual line of defense, showing that they are private, for-profit infrastructures rather than mere channels or broadcasters. Those infrastructural elements are “the best and most underexploited places where regulation can have the greatest impact,” and regulating them requires a full understanding of platforms’ inner workings, including “the practices, cultures, norms, and metrics of those engineers.”

These are essential observations as the conversation around what to do with the tech giants progresses. The developments of recent weeks make it clear that there needs to be a concerted regulatory response to address what gets posted on social media platforms and the broader social effects of algorithms whose primary purpose is to generate ad dollars for companies like Facebook. But that will require a deeper understanding of what’s going on behind the scenes.

Since the 2016 U.S. election, Facebook has repeatedly refused to remove certain content. Meanwhile, even as Zuckerberg claimed in recent weeks that the company was committed to free speech, the platform was still removing the accounts of Palestinian, Syrian, and Tunisian activists and journalists.

Recent research has also shown that Black users’ accounts were far more likely to be disabled by Facebook’s automated content moderation systems.

Trump’s victory in the 2016 election is partly attributed to how Facebook helped boost his message. Zeynep Tufekci, an Associate Professor at the University of North Carolina, has argued that the recent executive order should be seen through that lens: its aim is not to actually change Section 230, but to keep Facebook from taking any major action against Trump’s posts before the November 2020 election.

It’s clear that action is necessary to address the effects of tech platforms on society, and that will require a deeper understanding of the platforms, how they work, and the ideologies they help to enforce. While Facebook is starting to change its position in response to pressure from advertisers and public criticism, there need to be better tools to ensure these companies are serving the public interest, not simply their bottom lines.

 