German laws criminalizing hate speech and defamation are already some of the most restrictive in Europe. But a new bill going through the Bundestag, intended to combat hate speech, will create powerful incentives for online platforms to suppress content that is not even illegal, and inhibit the development of data-driven tools that offer more sophisticated ways of fighting extremism.
The proposed law, backed by the governing CDU/CSU/SPD grand coalition, would impose fines of €50 million (a hundredfold increase on the earlier proposal's €500,000) on Internet platforms such as Facebook, Twitter, and Google if they fail to remove offending posts within 24 hours of users flagging them.
This bill threatens algorithmic methods for combating hate speech – and for preserving free speech – by making their use legally and financially too risky. Online platforms already have content guidelines for their human moderators, which include requirements to block access to content in jurisdictions where it is clearly illegal.
Over time, through trial and error, platforms can also use algorithms to take down some prohibited content and to prioritize how items appear in search results and news feeds – for example, by ranking reputable, well-sourced content above propaganda and hate speech, so that users can make their own informed judgements about what they read online.
This type of automation is only feasible because platforms’ minimal legal liability allows a wide margin for error, which leaves open the opportunity to gradually improve these tools.
However, the threat of liability and fines would stop this kind of innovation dead in its tracks, because platforms cannot rely on algorithms to keep them on the right side of such a law.
Moreover, it is not only algorithms that require a wide margin for error. The realms of hate speech and defamation are murky, riddled with legal grey areas that arguably cover the majority of potential cases. Until this law passes, anyone accused of publishing something that violates German law can at least defend their output in court, where qualified judges decide whether it is legal or illegal.
Although the threat of such litigation is enough to frighten many users out of posting anything online that may be too controversial or critical, at least some stout-hearted individuals can insist on their day in court.
But the new law would thwart even those willing to face the courts by restricting their ability to publish online in the first place. It would force companies to decide these cases out of court, with no legal process or right of appeal for content creators. Worse, Internet platforms face a far more distorted choice than judges do: either give the content the benefit of the doubt and risk the cost of a court case and a fine of €50 million, or simply remove it and incur nothing worse than an angry customer and perhaps a tiny dip in advertising revenue.
Consider the case of the German satirist Jan Böhmermann. In March 2016, Böhmermann read out a poem he had written, Schmähkritik ("abusive criticism"), which poked fun at Turkish President Recep Tayyip Erdoğan. Erdoğan demanded that the German authorities prosecute Böhmermann under Section 103 of the German Criminal Code, which forbade insults against representatives of foreign states. Chancellor Angela Merkel even authorized the case to go ahead, but prosecutors later dropped the charges, citing lack of evidence.
Mr Böhmermann himself made clear that he was using his poem to start a public debate about defamation law, following an earlier demand by Erdoğan that German authorities suppress a music video released by the comedy show extra 3 (which went on to produce another song one year later). Although Mr Böhmermann probably did not enjoy the spectre of prosecution or the threats from Turkish nationalists, he was ultimately very successful: Section 103 was repealed last week.
But under the new law, anyone who tries to start a similar debate about online hate speech will fail, because, unlike Böhmermann's case, theirs will never be subject to due process or political scrutiny. Any content posted online to deliberately test the limits of hate speech laws – such as an especially provocative cartoon about a religion – would not trigger a Böhmermann-style public debate about whether the law goes too far, because the online platform's moderators would simply remove it. What business would risk a court battle and a €50 million fine for that?
These risks are not unique to Germany. The Council of the European Union recently backed proposals for a directive that would require all EU member states to force online platforms to promptly remove videos containing hate speech. In Britain too, Conservative politicians have talked about introducing similar regulations.
And in Austria, a court has ruled that instead of just blocking access from within Austria to posts that break Austrian law, Facebook must delete them, so that nobody anywhere else in the world can see them either. The Austrian Green Party, which brought the case, intends to appeal to the country’s highest court to impose even tougher restrictions, including a requirement to find and delete similar content.
Whether one agrees or disagrees with laws that dictate what people can say or read, it should be the job of the courts to establish when the law has been broken. Using fines to transfer this task to private companies creates deeply troubling perverse incentives: it will restrict free speech beyond what the law itself requires, while stifling the technological innovations that provide new tools in the fight against online hate and extremism. Members of the Bundestag should reject this bill, and lawmakers throughout Europe should resist all similar proposals.