In 1964, the US historian Richard Hofstadter wrote an essay identifying the “paranoid style in American politics”. In the wake of Senator Joseph McCarthy’s crusade against communism, he described the use of “heated exaggeration, suspiciousness and conspiratorial fantasy” by the radical right.
By removing videos and podcasts by Alex Jones, the rightwing radio host and founder of Infowars, Apple, Facebook and YouTube last week limited the paranoid style on their platforms. Twitter held back, deciding that Mr Jones’ hostility to “the scientifically engineered lies of the globalists and their ultimate goal of enslaving humanity” falls within its rules.
Jack Dorsey, Twitter’s chief executive, declared in March that he was “committing Twitter to help increase the collective health, openness and civility of public conversation”. Last week, he said it was the job of journalists, not his platform, to “document, validate and refute” misinformation such as Mr Jones’ claim that the mass shooting of children at a school in Sandy Hook in 2012 was “a giant hoax”.
In some ways, Mr Jones is an easy case. Not only does his outrage breach YouTube and Facebook's rules against hate speech and encouragement of violence, but he is running a business on the back of it, selling survivalist advertisements and herbal pills. The fact that Twitter is struggling to deal with him does not inspire faith in its ability to limit other abuses.
The affair, following controversies about “fake news” and how Facebook was used to influence the US presidential election, makes the dilemma facing technology platforms even more acute. They started out by declaring they were neutral conduits for other people’s communications rather than publishers with editorial oversight, but are now being pushed to take responsibility.
Social media platforms are being abused and exploited by extremists on all sides of politics. These platforms are vulnerable to being gamed by users — zealots can expand their influence by making a comment or video go viral. Those that shout the loudest and have the most aggressive “friends” often dominate.
This can degenerate into “brigading” — collusion to bully or denigrate others. A “tweetstorm” by activists against Tom Watson, deputy leader of the UK Labour party, briefly topped Twitter’s trending topics, although one analysis found that 62 per cent of the tweets came from only 1,200 users. This was a political campaign, but such tactics can be used against private individuals.
Platforms are struggling to deal with the tide of manipulation. They have become too large to monitor, let alone enforce rules on, the mass of tweets and posts. Even if they could, they would constantly be faced with having to judge what is a nefarious lie and what is a fact, or what is argument and what abuse. Publishers have always edited their material; platforms have not.
There is a liberal argument that platforms should not discriminate: they have special status under laws such as the US Communications Decency Act 1996 and wider roles than publishers. “We lean toward free expression,” Richard Allan, Facebook’s vice-president of policy, wrote last week, and Mr Jones protests that his US constitutional rights have been breached.
But if Facebook, Twitter and others are not traditional publishers, neither are they the internet or the public square. Mr Jones has broad rights to say what he wants, no matter how paranoid, but not a claim on others to host and promote it. Platforms must work out how to curb abuses they have facilitated. Responsibility has been thrust upon them, none too soon.