How to stop social media from supercharging hate speech

Sherif Elsayed-Ali
Published: 24 Feb 2018, 09:03 PM
Updated: 24 Feb 2018, 09:03 PM

Donald Trump's retweets of anti-Muslim videos first circulated by an anti-immigrant, far-right British party were just the tip of the iceberg. From Myanmar to the United States, controversial posts by political leaders and public figures have sparked a growing and increasingly global debate about how social media may be facilitating the spread of hatred and discrimination.

But this discussion is only the latest round in a debate that is as old as the internet itself: who should decide the limits of freedom of expression online, and how should they do it?

We have come a long way from the days when Facebook, YouTube and Twitter were hailed as enablers of free speech and democracy. Such platforms have undeniably helped to democratise the public sphere. Individuals can amass tens of thousands of followers and earn millions of views without relying on the media, public relations agencies or governments. Activists can organise, disseminate information and mobilise more effectively than ever before.

It is a testament to the power of social media that many countries are imposing stricter controls on these platforms or even blocking access to them entirely. We must bear the positive aspects of that power in mind when we consider how best to tackle its flip side: the way social platforms can be used to spread abuse, vitriol and hatred more rapidly than ever before.

In 2017, politicians in many countries deployed social media to spread hate-filled agendas. Amnesty International's latest annual report on the state of the world's human rights documents a global rise in state-sponsored hate, chronicling the many ways governments and leaders increasingly peddle hateful rhetoric and policies that seek to demonise already marginalised groups. President Trump's transparently hateful travel ban on citizens from half a dozen Muslim-majority countries was one of the most prominent examples.

As access to social media expands worldwide, it is increasingly being used by governments to promote hateful rhetoric, to control their citizens, and to silence any opposition. From xenophobic statements by politicians against LGBTI and Roma people in Bulgaria, to anti-Rohingya propaganda posted on Facebook by senior military officers and government spokespeople in Myanmar, to the use of troll networks against government critics in the Philippines, those in power are learning how to use social media as yet another tool of repression.

These findings present many dilemmas. To what extent are social media companies such as Facebook and Twitter – which have responded only belatedly to the torrent of hate speech and "fake news" – at fault? Should governments take action? What can we do to preserve the good that social media can offer while countering its more corrosive effects?

There are no simple answers. The right to free expression protects ideas that many people find offensive, and there are many instances where racist, sexist, xenophobic or other hateful material is not prohibited under human rights law. Nevertheless, freedom of expression comes with responsibilities, and there are cases under human rights law – such as incitement to violence or child sex abuse imagery – where it can legitimately be restricted. Complexities tend to arise because the definition of "offence" is always subjective: one person's free speech is another's vicious diatribe.

Any attempt at regulation must also consider the fact that the right to be able to say things to which others – including those in positions of power – will vehemently object is one of the foundations of an open society. Take that away and you take away the free press and any kind of government accountability.

For all their potential for abuse, social media sites such as Facebook and Twitter provide a space for expression and access to information that is much freer than anything available in the past. Yet this freedom is fragile: Amnesty's research has shown, for example, that online abuse can have a silencing effect on its targets.

So what's the solution? There are three types of actions that can be taken to counter hate on social media and the internet more generally: legal enforcement, content moderation, and education.

States should have in place laws that prohibit advocacy of hatred, and take legal action only in the clearly defined cases allowed by international human rights law: specifically, when there is clear intent to incite others to discrimination, hostility or violence against a particular group.

Many governments, however, have instead threatened social media companies with strict rules on intermediary liability, under which companies can be held legally responsible for content posted on their platforms. The problem is that intermediary liability can easily be used to restrict freedom of expression and to force companies to censor their users for fear of legal consequences.

Regardless of government regulation, companies have a responsibility to avoid causing or contributing to human rights harms. Content moderation by social media companies is therefore an important part of the solution: it requires no legislation, and so does not open the door to unjustified restrictions on freedom of expression.

All major platforms have community standards and rules of conduct in place to deal with advocacy of hatred and discrimination; these could work well, so long as they do not conflict with human rights law. Making them effective will require social media companies to uphold the rules consistently and to devote sufficient resources to addressing violations. That means improving the tools users have to report abusive content, employing and training content moderators, and developing measures to identify and restrict troll networks. It also requires transparency about how often these rules and standards are violated, including information about the types of abuse and the actions taken.

Reducing the spread of hate on social media also requires education. This is perhaps the most important intervention: legal enforcement and content moderation can only treat the symptoms of online abuse. Whether through school programmes or campaigns on social media itself, the only viable long-term way to reduce racism, sexism and bigotry online is to understand and address the roots of discrimination and hate in our societies.