Suppressing Atrocity Speech on Social Media

In its August 2018 report on violence against Rohingya and other minorities in Myanmar, the Fact Finding Mission of the Office of the High Commissioner for Human Rights noted that “the role of social media [was] significant” in fueling the atrocities. Over the course of more than four hundred pages, the report documented how Facebook was used to spread misinformation, hate speech, and incitement to violence in the lead-up to and during the violence in Myanmar. Concluding that there were reasonable grounds to believe that genocide was perpetrated against the Rohingya, the report indicated that “the Mission has no doubt that the prevalence of hate speech,” both offline and online, “contributed to increased tension and a climate in which individuals and groups may become more receptive to incitement.” The experience in Myanmar demonstrates the increasing role that social media plays in the commission of atrocities, prompting suggestions that social media companies should operate according to a human rights framework.

In response to this criticism, some social media companies, Facebook prominent among them, have increasingly acknowledged their role in these events, noting that until now they did not "take a broad enough view of [their] responsibility." 8 This essay explores whether social media companies are legally obliged to suppress hate speech and incitement posted on their platforms under international, regional, or domestic law. After concluding that there is a significant gap in legal obligations at each level, the essay considers the opportunities presented by self-regulation carried out within a human rights framework.
Responsibilities at the International, Regional, and Domestic Levels

At the international level, only international human rights law contains rules relating to hate speech and incitement that are relevant to the present discussion. Article 20(2) of the International Covenant on Civil and Political Rights stipulates that "any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law." This obligation is directed at states, requiring them to prohibit such conduct under domestic law, rather than at social media companies, which, as domestic corporate actors, are not directly bound by human rights treaties. While companies may lack international legal personality, however, recent years have seen efforts to subject companies to human rights law standards. The UN Guiding Principles on Business and Human Rights 9 detail a number of responsibilities that companies have to respect human rights in their activities. These include the responsibility to avoid contributing to adverse human rights impacts 10 and the responsibility to conduct due diligence to identify the potential human rights impacts of company activities. 11 Efforts to hold companies to human rights standards have shifted the normative environment in which companies operate. However, the choice of the term "responsibilities" throughout the UN Guiding Principles was deliberate: it denotes that human rights are a standard of expected conduct for companies, not a set of legal obligations. 12 Attempts to draft a treaty that would impose obligations directly on private companies remain at a standstill. 13 As such, in the search for a legal obligation on social media companies to suppress hate speech and incitement on their platforms, international human rights law only takes us so far.
While international human rights law does not bind companies directly, international criminal law was specifically designed to bind non-state actors. Certain regional developments are relevant here: in 2014, the African Union adopted the Malabo Protocol, 14 which will create an international criminal law section within the African Court of Justice and Human Rights. Under the Malabo Protocol, intention on the part of a company to commit a crime can be established "by proof that it was the policy of the corporation to do the act which constituted the offence." 19 Assuming that the Malabo Protocol eventually enters into force, 20 and assuming that jurisdiction can be established in a given case, 21 prosecuting social media companies for unlawful content posted on their platforms is a theoretical possibility. However, given that third parties are the ones posting the unlawful content, rather than social media companies themselves, only in very unusual circumstances would it be possible to show that the company policy was to "do the act which constituted the offence." Simply failing to remove problematic content, even with knowledge that the content is unlawful, is unlikely to qualify. As such, the Malabo Protocol does not currently offer an effective avenue for compelling social media companies to suppress hate speech and incitement on their platforms.
There have also been soft law developments at the regional level that are specifically targeted at the growing dangers posed by social media. In 2016, the European Commission presented Twitter, Facebook, YouTube, and Microsoft with a Code of Conduct on Countering Illegal Hate Speech Online. 22 The commitments in this code center around procedures for reporting and removing hate speech from social media platforms. These include a commitment to establish clear and effective processes to review reported content and an undertaking to remove illegal hate speech within twenty-four hours of it being reported. Hate speech is understood as including incitement to violence towards particular groups.
In the 2016 press release that introduced this code, the Commission stressed the importance of member states complying with their EU law obligation to criminalize hate speech within their domestic legal systems. However, with respect to social media companies, the Code of Conduct is careful to use the word "commitments" rather than "obligations." The code is meant to create a normative environment of compliance, in the same way that the UN Guiding Principles on Business and Human Rights do, but is not intended to create legally binding obligations for social media companies themselves. Subsequent developments at the EU level, while more specific, detailed, and forceful in their language, essentially continue this voluntary approach. 23 Germany has transposed the approach of the European Commission Code of Conduct into binding domestic law. The 2017 Network Enforcement Act sets out requirements for the way in which large social media companies operating in Germany must handle reports of unlawful content. Unlawful content includes, but is not limited to, hate speech and incitement to violence. 24 As with the Code of Conduct, the Network Enforcement Act creates a requirement to establish clear and effective processes for reviewing unlawful content, and an obligation to remove content that is manifestly unlawful within twenty-four hours of receiving a complaint. (For other unlawful content, the deadline is seven days.) 25 Failure to have proper procedures in place can result in financial penalties. 26 Kenya and Honduras have also taken steps to impose legal requirements on social media companies to remove hate speech and incitement to violence on their platforms. 27

As the above overview shows, initiatives at the domestic level have gone furthest in imposing legal obligations on social media companies, but these remain the exception rather than the norm. The general position at the international, regional, and domestic levels is to rely on voluntary commitments and self-regulation for the moderation of online content. As the following section discusses, this is not necessarily a problem. Given the challenges associated with state-by-state regulation, there are advantages to having companies regulate themselves. How these approaches can be adapted to acute situations, however, is an open question.

The Way Forward?
While the weaponization of social media during conflict is not new, the fact that, in Myanmar, "Facebook is the internet" 28 meant that the use of the platform to disseminate hate speech and incitement to violence was particularly prominent. Despite this, the Fact Finding Mission (FFM) did not propose that states impose legal obligations on social media companies, recommending instead that social media companies voluntarily adopt human rights law as the framework for moderating content. 29 In other words, the preferred approach is to encourage social media companies to self-regulate in removing problematic content in a way that is consistent with human rights. The UN Special Rapporteur on Freedom of Opinion and Expression 30 and civil society also support this self-regulation approach. 31 This reluctance to recommend state-imposed legal regulation is likely attributable to the fact that moving away from a self-regulation model has some significant disadvantages. First, increased legal pressure on social media companies to suppress certain types of content can lead to the over-removal of content. Because social media companies are guided by economic considerations, there is a significant risk that they will prioritize avoiding liability over the protection of free speech, and so remove more content than is warranted. 32 After Germany's Network Enforcement Act came into force, allegations circulated that social media companies were removing legitimate speech. 33 Second, domestic legal regulation would not necessarily mean that human rights would be better protected.
On the contrary, where legal regulation does exist in domestic contexts, it often contains vaguely formulated rules and compels social media companies to suppress political dissent online under the guise of suppressing hate speech and incitement. 34

If self-regulation is preferred to state-imposed regulation, how should the human rights framework shape this self-regulation? The UN Special Rapporteur on Freedom of Opinion and Expression stressed some core human rights concepts that should guide social media companies in moderating content, including legality, necessity and proportionality, nondiscrimination, transparency, and access to a remedy. One particularly innovative idea is the proposal to create social media councils. 35 These councils, composed of both social media company representatives and other relevant stakeholders, would operate as independent self-regulatory bodies for the industry. They could elaborate ethical guidelines and content moderation policies, and could act as a focal point for individual complaints, thereby providing access to a remedy for users and accountability for the companies. To gain public trust, these councils would have to work in an open and consultative manner.
A self-regulation approach to content moderation that is informed by a human rights framework, such as a system of social media councils, has much to recommend it. In promoting the independence of social media companies, it reduces the opportunities for states to use these companies to silence opposition; in focusing on transparency and access to a remedy, it may reduce the likelihood that social media companies act overly broadly in removing content, because it would require them to provide justifications for their decisions.
However, there is still work to be done to understand how these broad human rights concepts should guide social media companies in acute situations, such as that in Myanmar. How social media companies approach hate speech and incitement on their platforms in the context of stable, secure societies may differ from how they should approach such content in unstable, insecure countries where violence often lurks just below the surface. The FFM Report recommends that "all death threats and threats of harm in Myanmar should be treated as serious and immediately removed when detected." 36 Such an all-or-nothing approach may be appropriate in the Myanmar context, but less so in more stable contexts. In the latter, the companies or the councils should take time to understand the background of the speech, such as whether it was made in jest, in order to ascertain whether removal is justified. While ultimately the outcome may be the same, in that the speech is removed, the process for arriving at the decision may differ. A further recommendation of the FFM is that all social media platforms active in Myanmar establish "early warning systems for emergency escalation." 37 Again, this may be desirable for unstable countries, but such systems require a degree of data collection that may not accord with human rights principles in other contexts.

In the absence of an appropriate international law framework, and given the disadvantages of state-imposed legal regulation, self-regulation informed by human rights norms would seem to be the best option on the table. Indeed, the human rights framework is likely flexible and robust enough to accommodate different approaches in different communities.
What is needed now is further guidance on how to strike the balance between suppressing hate speech and incitement and protecting human rights, particularly in acute situations. It is here, perhaps, that social media councils may prove particularly useful.