What should we do about online hate speech?

Tuesday 23 October, 19:30-21:00, Flemish-Dutch House deBuren, Leopoldstraat 6, 1000 Brussels
Battle of Ideas Europe

Across Europe and beyond, it is hard to avoid the growing debate surrounding hate speech. The Low Countries are no exception. The burqa ban in Belgium, the campaign against the ‘Black Pete’ tradition and the trial of Geert Wilders in the Netherlands are just a few of the issues that have triggered an avalanche of online aggression and even direct threats against the activists, politicians and opinion makers involved. In 2016, the European Commission introduced a code of conduct to counter illegal hate speech online, with Věra Jourová, the commissioner for justice, stating: ‘The internet must be a safe place, free from illegal hate speech, free from xenophobic and racist content.’ As fears grow that the unregulated spread of hate speech threatens our security, emotional wellbeing and the safety of children, tech companies are increasingly deploying ‘flaggers’ – staff who can quickly spot illegal content – and stepping up efforts to eliminate offensive and distasteful speech online.

While the term ‘hate speech’ has yet to be defined in a watertight way, either in international law or in relevant scholarship, it refers to a broad spectrum of behaviour, including hatred, incitement to hatred, abusive expression, vilification and extreme forms of prejudice and bias. Despite this confusion over definitions, those concerned about the influence of hate speech point to evidence that online hate speech increases real-life hate crimes. In Germany, one study argued that Facebook posts by the right-wing Alternative for Germany (AfD) party increased anti-refugee attacks by 13 per cent. In the US, academics say that Donald Trump’s tweets predicted hate crimes against the specific minorities he mentioned. An Amnesty International study points to the impact of online misogyny on women across the globe, with over half of female respondents saying they’d experienced stress, anxiety or panic attacks after receiving online abuse and harassment.

Freedom of speech advocates warn, however, of the dangers of an Orwellian digital dystopia in which government apparatchiks and large corporations dictate what we can read and write on the web and on social media. For example, if it’s now deemed hate speech to criticise Islam or to join a Facebook group called ‘I Hate Christians’, are commentators right to worry that the hard-won freedom to criticise religion is being undermined? The Intercept, a media organisation dedicated to ‘fearless, adversarial journalism’, worries that hate-speech laws are often used to suppress and punish left-wing viewpoints. In one instance, the German government ordered an influential left-wing website to be shut down on the grounds that it ‘stirred up’ unrest at the G20 summit in Hamburg and was used to incite violence.

Beyond these specific examples, there is concern over a wider dynamic of censorship. If countries follow Germany in fining companies up to €50 million if they persistently fail to remove illegal online content, could this lead to restrictions on free expression? To what extent should privately owned media companies like newspapers, YouTube, Twitter or Facebook become responsible for policing the public sphere? Can we really justify defending hate speech as free speech if it leads to real-life crime, or are such claims exaggerated? Will legislating against hate speech create a more tolerant society? Or will people’s intolerance simply be banned online, only to persist in everyday life?