On May 31, 2016, Facebook, Microsoft, Twitter and YouTube, which are involved in the European Union Internet Forum, formulated a Code of Conduct on countering illegal hate speech online. Even while declaring that they take pride in promoting and facilitating freedom of expression, these information technology (IT) companies shared the commitment of the European Union member states to tackle illegal hate speech. In an act of balancing competing interests, the IT companies set out in the Code of Conduct several public commitments, some of which are: (i) To have in place an effective process to review notifications regarding illegal hate speech on their services so that they can remove or disable access to such content; (ii) To review the notifications for removal of illegal hate speech in less than 24 hours; (iii) To educate and raise awareness with the users about the types of content not permitted under their Rules and Community Guidelines; (iv) To encourage via partnerships with civil society organisations the provision of notices and flagging of content that promotes incitement to violence and hateful conduct.
The coming together of these IT companies to formulate a code of conduct was the culmination of a series of events. On March 22, 2016, Brussels was rocked by three bomb explosions, which left 32 persons dead and over 300 injured. Immediately, the European Union Ministers for Justice and Home Affairs issued a joint statement on March 24, 2016. Among other things, this statement emphasised the need to intensify the work with IT companies, notably in the European Union Internet Forum, to counter terrorist propaganda and to develop by June 2016 a code of conduct against hate speech online. This is the genesis of the Code of Conduct formulated on May 31, 2016.
Moves to curb hate speech
As a matter of fact, the European Parliament and the Council of the European Union had already issued Directive 2000/31/EC, generally known as the “Directive on Electronic Commerce”, on June 8, 2000, whereby the member states were required to take measures to fight against incitement to hatred on grounds of race, sex, religion or nationality. The directive imposed a limited liability upon intermediary service providers: if, upon receipt of actual knowledge or awareness of illegal activities, they did not act expeditiously to remove or to disable access to the information concerned, they could be held liable.
For historical reasons, Europe has not been able to adopt a liberal approach towards hate speech like that of the United States. After the Holocaust, European countries were compelled to take preventive steps. On March 7, 1966, the International Convention on the Elimination of All Forms of Racial Discrimination was adopted. Since Article 4 of the convention requires member states to criminalise the dissemination of ideas based on racial superiority or hatred, many European nations adopted legislation aimed at curbing hate speech. Learning from past experiences, the Council of Europe issued in 1993 a Declaration and a Plan of Action on combating racism, xenophobia, anti-Semitism and intolerance. A Commission set up pursuant to the Declaration found a steep increase in the number of racist websites in Germany and in the circulation of anti-Semitic material in the Netherlands.
On December 15, 2000, the European Commission against Racism and Intolerance recommended that the Council of Europe include the suppression of hate speech, by way of a protocol, in the Convention on Cybercrime. The United States opposed this, as a consequence of which the protocol was dropped from the Convention itself. However, on November 7, 2002, the Committee of Ministers of the Council of Europe adopted an Additional Protocol. The Additional Protocol imposes liability primarily upon individuals who actually post racist content on the Internet, but limits the liability of intermediary service providers who serve as mere conduits of speech. Both the Additional Protocol and the European Directive on Electronic Commerce make it clear that intermediary service providers have no duty to monitor conduct or content. The service providers are not liable for the information that they store, provided they have no actual knowledge of the illegal activity and they act swiftly to remove or disable access to the information upon being notified.
The above developments had a direct impact upon the manner in which the European Court of Human Rights (ECHR) looked at hate speech. In 1976, the ECHR stood on the side of free speech, declaring in Handyside v. United Kingdom that freedom of expression is applicable not only to information or ideas that are favourably received or regarded as inoffensive or a matter of indifference, but also to those that offend, shock or disturb. But in 2012, the same ECHR took a U-turn in Vejdeland v. Sweden, a case involving neo-Nazis who unlawfully entered a school and placed in the students’ lockers hundreds of pamphlets condemning homosexuals, by upholding a Swedish law that prohibited criticism of homosexual behaviour.
Lack of clarity
To a large extent, there is a lack of clarity on what hate speech is. The Code of Conduct formulated on May 31, 2016 adopted the definition of “hate speech” found in Framework Decision 2008/913/JHA of November 28, 2008: all conduct publicly inciting to violence or hatred directed against a group of persons, or a member of such a group, defined by reference to race, colour, religion, descent or national or ethnic origin.
But there is still confusion at the ground level, as seen from some interesting cases. In England, a preacher named Andrew Alderson was arrested for simply reading out extracts from the Bible in public, and the owner of a cafe was directed by the police not to display verses from the Bible on a television. In Ireland, a complaint was lodged against a bishop who gave a sermon about the growing culture of godlessness in the country. A television network in Spain was fined for publicly supporting traditional family ties and criticising homosexuality. In a rather bizarre incident, a friendly conversation between the owners of a hotel and a guest practising a different religion, on the supremacy of one religion over the other, resulted in the owners being arrested and charged with a “religiously aggravated public order offence”.
Therefore, on the question of whether to allow free speech, including hate speech, or to allow only hate-free speech, it would be better to adopt the approach of former U.S. President Woodrow Wilson, who said: “The greatest freedom of speech is the greatest safety, because if a man is a fool, the best thing to do is to encourage him to advertise the fact by speaking.”
V. Ramasubramanian is a judge at the High Court of Judicature in Hyderabad
Keywords: freedom of expression, hate speech, Code of Conduct, Human Rights