When Paris’s Notre-Dame cathedral caught fire on April 15, it was only a matter of minutes before conspiracy theories were swirling across social media.
Some were from far-right accounts and outlets desperate to blame the blaze on some kind of Islamist attack. As the inferno was broadcast worldwide, websites such as US-based Infowars pushed out reams of unverified and often false information, speculation and rhetoric. “The West has fallen,” tweeted right-wing documentary filmmaker Mike Cernovich.
According to the European Union’s specialist disinformation monitoring unit, these messages were then further picked up, amplified and spread by a host of Russian-linked accounts, platforms and automated trolls. They added new, often deeply implausible theories. As well as blaming Islamists, some suggested France’s “yellow vest” protesters might be responsible. Others suggested unlikely Ukrainian links to the fire, while one – completely without evidence – speculated the pope himself would call for a mosque to be built on the medieval cathedral’s ruins.
The latter claim came from the website of the Kremlin-supporting Tsargrad TV – which pushes a relentlessly nationalist pro-Russian Orthodox Church line highly supportive of President Vladimir Putin. Its sheer implausibility points to a growing trend in the ideological battles playing out on social media and the Internet, in which the truth can appear increasingly irrelevant.
How to handle this – and whether, when and how to exploit it – is a growing challenge for almost every country. In the aftermath of last weekend’s Easter bombings that killed more than 350, Sri Lanka imposed an outright – if not always effective – block on multiple social media platforms including Facebook, WhatsApp and Snapchat. That followed widespread condemnation of social media platforms after the New Zealand mosque shooting the previous month, when unedited footage streamed by the gunman was widely distributed.
How effective such restrictions are remains in question. Indeed, they may be counterproductive, simply adding to the air of crisis. In Sri Lanka, many users were able to find ways around the block, not least to reassure friends and family they were safe. Indeed, by activating a function allowing users to mark themselves unharmed after attacks and disasters, Facebook may well have contributed to the ban’s failure, whether willingly or not.
Such draconian action can also simply reinforce the narrative that a government has something to hide – particularly in Sri Lanka’s case, where warnings of an imminent attack from other intelligence services may have been ignored.
What does appear increasingly clear is that the Internet and social media platforms have become something of a cesspool for hatred and conspiracy, and that powerful forces wish to use that to their advantage. What is equally clear is that those who wish to stop them have yet to find a truly workable strategy.
The platforms’ main line of defence remains armies of human content moderators. Those who have done the job describe it as an unending nightmare of pornography, violence – including beheadings, child abuse and more – along with sexist, racist, and brutally misogynist rants. Contractors are timed to make fast decisions, and report horrific mental health problems in consequence. Without them, even more of that material would doubtless reach a wider audience – but even stopping all of it would only tackle the tip of the iceberg.
In many ways, of course, these problems are not new. Sectarian, racist, and divisive language and rumour have long been a potent tool for populists, xenophobes and anyone else seeking to foster division, fear and distrust to entrench their own control. Indeed, in many locations – for example, the Indian subcontinent – things used to be much worse.
In 1983, a much smaller attack by Tamil Tiger rebels on Sri Lankan government troops triggered massive ethnic riots against Tamils in the capital Colombo, killing hundreds and jump-starting a quarter-century-long war. India’s Sikhs suffered similar attacks the next year following the assassination of Prime Minister Indira Gandhi. Much worse communal violence was orchestrated during India’s 1947 partition by radio, leaflet and rumour, long before the advent of the Internet.
The effect of social media can be rapid – but it can also be slower and even more insidious. In private Facebook and WhatsApp groups, truth often does not matter, and conversations and memes can swiftly prove self-reinforcing. In Western politics, that increasingly includes Islamophobia, anti-Semitism, or both, often couched as part of a wider backlash against liberal multicultural elites and norms.
This is a world created by modern technology and political frustration, particularly amongst the perceived “losers” of globalisation. But it is a situation some actors in particular – Russia, the far right, and Islamist groups such as Islamic State – have learned to exploit. A study this year by the NATO Strategic Communications Centre of Excellence found that a majority of accounts on Russian social media site VK trolling against Western military activity in Eastern Europe were automated.
Such tools are increasingly for sale. A separate report by the same centre identified an increasingly lucrative black market in often illegal social media manipulation software, data and other products, allowing users to buy “likes”, comments, ads and fake personas. Many are easy to spot – but improved artificial intelligence and machine learning may soon make that more difficult.
Perhaps the best pointer to how to tackle this comes from the Christchurch mosque massacre in New Zealand. While still criticising social media firms, Prime Minister Jacinda Ardern spent most of her time on more practical responses – introducing gun control and pushing forward a more inclusive vision of the country that blunted the attacker’s message.
Whether that will be enough is another question. Because if the West ever genuinely falls, it is as likely to do so through social media-fuelled bigotry, division and deceit as through any physical attack.