1. Introduction
The social media platform Facebook, owned by Meta Platforms, Inc. (the platform company), has a sizeable and growing user base in India. Today, users in India constitute by far the largest share of Facebook’s users from any single country (Statista.com, 2025). Facebook’s stunning popularity in India coincides with the country’s mobile phone and telecommunications revolution. Affordable data plans and the availability of low-cost smartphones have led to the widespread use of Internet-enabled mobile phones in India, which has allowed for the exponential growth of Facebook’s user base (Jeffrey and Doron, 2013; Punathambekar and Mohan, 2019). Given its growing usership in India, and the importance of the Indian market, Facebook has a sizeable presence in the country in terms of its employees and offices. In 2021, when Facebook changed the rule governing hate speech on the platform that I discuss in this paper, it had five offices in the country employing over 300 persons.Footnote 1 Facebook’s growing presence in India has meant that the platform company’s global legal and policy norms, exemplified by its hate speech rules, have gradually been influenced by local contexts—in this case, the Indian law that regulates hate speech.
In this paper, I show how Facebook’s global norms on hate speech, which were heavily influenced by the United States First Amendment and its consequentialist understanding of the limits of free speech, have gradually shifted towards a constitutive approach to hate speech, influenced by the social, political, and legal context of countries such as India. This shift, I argue, has taken place through the institutional processes that Meta has put in place, involving a range of internal and external actors within Facebook’s polycentric model of governance. In India, the social, political, and legal context that has influenced this shift in the normative framing of Facebook’s hate speech rules, which are applied across the jurisdictions in which the platform operates, includes both a history of British colonial authorities wielding laws to suppress dissent and control public opinion, and the development of law as a response to the concerns of state authorities about political actors using hate speech to mobilise crowds along religious and other community identities to incite communal tensions and violence (Dhavan, 1987; Kaur and Mazzarella, 2009; Nair, 2013). While the instrumental use of hate speech by political actors to incite communal tensions and violence is not new in India, this strategy has taken on a distinct majoritarian dimension in contemporary India (Jaffrelot, 2021; Hansen and Roy, 2022, pp. 2–16).
As Hindu nationalist ideology has become politically dominant, and state violence has taken on blatantly anti-minority forms, the virality of hate speech on social media platforms and the movement of hate speech across different forms of media have been effectively weaponised by Hindu nationalists and their supporters to silence and subordinate minorities and all forms of political dissent (Sundaram, 2020; Nizaruddin, 2022). Today, the social and political context in India that Facebook’s policies and actions are responding to includes the “real-world” harm caused by the virality of Hindu majoritarian hate speech online. However, as I point out later in the paper, Meta’s reluctance to enforce its hate speech rules has led to a divergence between its stated policies and its practice, and has contributed to the increasing impunity that Hindu majoritarian individuals and groups enjoy for their speech-acts targeting Muslims and other minorities (Soundararajan et al., 2019; Manuvie et al., 2022; India Hate Lab, 2024). Such speech-acts, when circulated on social media platforms such as Facebook, have deadly real-world consequences, and their primary targets are minority groups and dissenting voices.
2. The “legislative process” leading up to the changes in Facebook’s content moderation policies and rules
In this section, I will briefly describe how Facebook updates its content moderation rules, with reference to the specific example of the 2021 update to its community standards on hate speech concerning the blunt distinction between “attacks on people” and “attacks on concepts and institutions.” This description is limited by the details that Facebook has made available online, which do not name the third-party actors involved. In relation to this change, Facebook has stated that those engaged in the process included “a broad range of academics and civil society organizations” (Meta, 2023b). These third-party actors included “experts in dangerous speech and atrocity prevention, human rights practitioners, social psychologists who study issues of personal identity, advocates for freedom of expression, and groups representing religious and non-religious world views” (Meta, 2023b).
Given the importance of the Indian market and the size of its user base in the country, Facebook has, over the last decade, regularly involved Indian stakeholders in closed-door consultations or “engagements” as part of the platform’s governance processes for drafting and updating its content moderation rules and policies. The 2021 shift in Facebook’s Community Standards is emblematic of an important phase in Meta’s approach to content moderation more broadly, as it grappled with increasing criticism from governments, international institutions, courts, media, and civil society of harms associated with content on Facebook. This phase began when the then Facebook Vice President of Global Policy Management, Monika Bickert, announced that Meta had updated the values on which its content moderation policies were based, a move that signalled a shift from the platform’s exclusive focus on “voice” towards including countervailing values such as privacy, dignity, safety, and authenticity (Bickert, 2019; Douek, 2019).
However, in January 2025, marking the end of this phase, Meta announced that it was reevaluating its approach to the regulation of content, including hate speech, on its platform, aligning itself with the broader politics of the second Trump administration (Kaplan, 2025). The Trump administration has repeatedly expressed its disdain for “woke” politics and for legal protections based on “protected characteristics” such as race, gender identity, and sexual orientation. Trump has also openly criticised what he perceives to be the overregulation of speech on social media platforms, especially after his account was temporarily blocked by Facebook and Twitter following his posts connected to the Capitol riots of January 2021 (Perrigo, 2021). Meta’s Chief Global Affairs Officer, Joel Kaplan, posted on the Facebook Newsroom that Meta would halt its third-party fact-checking programme in the United States and move to a “Community Notes” model instead. Kaplan’s post stated that Meta intended to “undo the mission creep” that had made its content moderation rules “too restrictive and too prone to over enforcement” (Kaplan, 2025). While the most widely reported change in Meta’s policies in January 2025 was its decision to do away with third-party fact checkers, another important, but less commented on, change has been the shift in the language of its Community Standards from “hate speech” to “hateful conduct” (Meta, 2025). One interpretation of this shift in language is that it signals a realignment of Facebook’s content moderation policies with the United States First Amendment and its consequentialist approach to the regulation of speech, which narrows the scope of recognised harms to the direct consequences that link speech to material acts of violence (Cassano, 2025). This is a marked change from the previous phase (2010 onwards), in which Meta had moved to a constitutive understanding of free speech that accommodated a wider set of “real-world” harms of speech, such as promoting a wider environment of hostility, intimidation, and exclusion, and the silencing and subordination of marginalised voices on the platform (Cassano, 2025). That approach was influenced by international human rights standards on freedom of speech and expression and by the context of hate speech in Facebook’s growing markets in the Global South.
While the developments I have described above have taken place in 2025, this paper seeks to understand a specific change to the Facebook Community Standards (FCS) or content moderation rules in 2021—one that did away with a strict distinction between “attacks on people” and “attacks on concepts and institutions”—that has received very little attention in the existing literature in the field. Building on existing work in the field (Klonick, 2018; Dvoskin, 2020; Kettemann and Schulz, 2020; Dvoskin, 2022), I will argue that this 2021 update to Facebook’s Community Standards reflects a shift in the platform’s policies from a consequentialist approach influenced by the United States First Amendment to a constitutive approach to the regulation of hate speech that is responsive to the “real-world” harm of the virality of hate speech in local contexts. This shift was a response to increasing criticism from governments, civil society, and international and regional regulatory bodies that Facebook’s content moderation policies must address the changing nature of harms on the platform, including harms caused by the virality of hate speech online.
Global platforms such as Facebook, YouTube, and X have traditionally portrayed themselves as neutral pipes or transmitters of information to avoid legal liability for the content that they host. As Julie E. Cohen (2017) has argued, these platforms have been able to successfully muster support from successive governments in the United States to ensure legal protections that have facilitated the emergence of the platform economy.Footnote 2 For instance, Section 230 of the Communications Decency Act of 1996 in the United States has facilitated the growth of platforms by providing them “safe harbor,” i.e., exempting them from the status of “publishers” and thus granting them legal immunity from liability for harms that result from third-party content that they host. Law and legal institutions, Cohen has argued, have allowed platforms to systematically appropriate the personal data of users in support of a business model based on virality, profitability, and the extraction of users’ data, while shielding them from being held accountable for the harms that result from their actions (Cohen, 2017). Platforms, on Cohen’s account, are key organisational forms of the new informational economy that have helped transform existing markets, an economy characterised by a transformation from earlier forms of industrial capitalism through increasing “propertization of intangible resources, the concurrent dematerialization and datafication of the basic factors of industrial production, and the embedding of patterns of barter and exchange within platforms” (Cohen, 2017, p. 135). In her analysis of platforms, Cohen has drawn on Manuel Castells’s formulation that we are now in a contemporary phase of capitalism, or “informational capitalism,” in which the production and accumulation of economic, cultural, and political capital are increasingly shaped by computer-based knowledge networks and technologies (Castells, 2000). Cohen’s analysis of platforms is useful for understanding how virality, which has emerged as a concern specific to the new informational economy, has led to changes in the regulation of hate speech online on Facebook.
Virality is a specific mode of transmission in which content is transmitted at great speed over large distances in a short time through key nodes. Virality involves “a social information flow process where many people simultaneously forward a specific information item, over a short period of time, within their social networks, and where the message spreads beyond their own [social] networks to different, often distant networks, resulting in a sharp acceleration in the number of people who are exposed to the message” (Nahon and Hemsley, 2013, p. 16). As Nahon and Hemsley have described, viral information flows can be identified through their S-shaped or sigmoid slow-fast-slow signature growth pattern (Nahon and Hemsley, 2013, pp. 20–2). In viral flows of information, the rate of sharing or diffusion is slow at first and then accelerates as more people on the network share the piece of content with many others, eventually reaching a tipping point or critical mass where the content reaches far into the network and becomes self-sustaining (Nahon and Hemsley, 2013). The decline in sharing and views in viral events follows a power-law distribution with a long tail (Nahon and Hemsley, 2013, pp. 20–2).Footnote 3 Virality is characterised by the concentration of a few key hubs that distribute connections to a large number of nodes of lesser importance (Parikka, 2007, pp. 97–8). Platforms such as Facebook, whose business models are sustained by providing greater connectivity to their users, have facilitated virality, resulting in forms of information cascades that have increased the longevity and reach of hate speech online (Nahon and Hemsley, 2013, p. 90; Cohen, 2017, pp. 149–50). Virality is characterised by its ability to impact users in emotive, non-cognitive, and subliminal ways, an aspect that platform companies have taken advantage of to increase user engagement with content on their platforms (Sampson, 2012; Nahon and Hemsley, 2013).
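To make the shape of this viral signature concrete, the following is a minimal illustrative sketch (my own toy model, not drawn from Nahon and Hemsley’s data; every parameter value is hypothetical) that simulates a logistic diffusion process and prints the slow-fast-slow growth pattern they describe:

```python
# A toy logistic-diffusion model of the S-shaped ("slow-fast-slow") cumulative
# growth pattern that Nahon and Hemsley (2013) describe for viral information
# flows. Illustrative only; all parameter values here are hypothetical.

N = 1_000_000        # hypothetical size of the reachable network
beta = 0.9           # hypothetical per-step transmission rate
cumulative = [100.0] # hypothetical seed audience at t = 0

for t in range(30):
    # Growth is proportional both to those already exposed and to the share
    # of the network not yet reached: this yields the slow start, the rapid
    # mid-phase acceleration, and the saturating tail of the sigmoid curve.
    new = beta * cumulative[-1] * (1 - cumulative[-1] / N)
    cumulative.append(min(N, cumulative[-1] + new))

# Per-step exposures rise to a peak and then decline; the long tail of this
# decline corresponds to the power-law signature noted above.
per_step = [b - a for a, b in zip(cumulative, cumulative[1:])]
for t, n in enumerate(per_step):
    print(f"t={t:2d}  new exposures ~ {int(n):>9,}")
```

This sketch captures only the aggregate shape of a viral event; it deliberately omits the network structure (the few key hubs noted above) that determines which items go viral in the first place.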
Within the new informational economy, platforms such as Facebook actively shape public discourse rather than merely serving as channels of communication. Over the last 10 to 15 years, there has been greater recognition of the role of social media platforms in actively shaping social opinion and public discourse. Scholars in the field have argued that, far from being neutral pipes or conductors of information, platforms, based on their own political and economic interests, intervene through their choices of design, use of algorithms, and content moderation policies, all of which impact the freedom of speech of users and the public more broadly (Lessig, 1999; Gillespie, 2015a). These scholars have argued that platforms actively govern speech across national jurisdictions through their global content moderation rules and policies (Gillespie, 2015a; Gillespie, 2015b). Platforms facilitate speech and connect users, and platform companies profit from the datafication and widespread dissemination of content through shares, retweets, likes, etc., and the associated digital advertising. This body of scholarship views social media platforms such as Facebook, which are private corporations, as important sovereign actors that influence how hate speech is defined and understood. Dominant global platforms such as Facebook, YouTube, and Twitter, through their interventions and governance practices, have had a disproportionate influence on public discourse across domestic jurisdictions. One need look no further than Elon Musk’s disproportionate influence on public discourse through his ownership of X, and the enormous political influence he wielded during his short stint in the second Trump administration. Changes to Facebook’s content moderation policies, rules, and governance mechanisms have a significant impact on public discourse and reflect the priorities and intentions of the platform company. Increased awareness of Facebook’s role in influencing public discourse has resulted in greater public scrutiny of the platform’s actions and policies by government regulators, courts, and civil society.
Changes to Facebook’s content moderation policies have been driven by the heightened public scrutiny that followed the Cambridge Analytica scandal in 2018, when Facebook’s policies on third parties’ use of the private data of the platform’s users were widely criticised. Around the same time, Facebook also faced criticism over its refusal to take effective action against “fake news” on its platform. The negative publicity around these controversies caused Meta’s market value to fall by 100 billion US dollars in 2018 (Klein, 2018). At this point, Facebook faced increasing criticism and threats of regulation from national and regional governments, courts, and international human rights bodies for the harms caused by the virality of speech on its platform. In 2018, the United Nations Independent International Fact-Finding Mission on Myanmar, chaired by Marzuki Darusman, found that Facebook was used effectively by nationalistic political parties and politicians, leading monks, academics, prominent individuals, and members of the Government in “shaping public opinion” and portraying Rohingyas and other Muslims “as an existential threat to Myanmar and to Buddhism.” According to the Mission, which was established by the United Nations Human Rights Council, this anti-minority campaign by Buddhist nationalists and their supporters in Myanmar “created a conducive environment for 2012 and 2013 anti-Muslim violence in Rakhine state and beyond, without strong opposition from the general population and enabled hardening of repressive measures against Rohingya and Kaman in Rakhine state and subsequent waves of State-led violence in 2016 and 2017” (The Independent International Fact-Finding Mission on Myanmar, 2018, Para 696, p. 166). The Mission has pointed out that there is a longer history of anti-Muslim campaigns in Myanmar that precedes Facebook, including the use of print media such as books, pamphlets, and magazines by Buddhist nationalists and their supporters (The Independent International Fact-Finding Mission on Myanmar, 2018, Paras 697–702, pp. 166–7). As Lee has argued, alongside the effective use of Facebook, other media such as the state-owned newspaper the Global New Light of Myanmar (GNLM) have been influential in amplifying anti-minority hate speech in Myanmar and have contributed to an environment “that excused and allowed nationalist hate speech, particularly against groups such as the Rohingya, and this helped provide license and encouragement for the Myanmar military’s campaign of ethnic cleansing” (Lee, 2019, p. 3214). Facebook’s alleged role in these events forms a prominent part of the case filed by The Gambia (backed by the Organisation of Islamic Cooperation) against Myanmar before the International Court of Justice in 2019, accusing Myanmar of violating the provisions of the Convention on the Prevention and Punishment of the Crime of Genocide (Rapp, 2021). Facebook has faced similar allegations in relation to rising anti-immigrant sentiment in Germany and other parts of Europe (Taub and Fisher, 2018), and to ethnic conflict and communal violence in Sri Lanka (Al Jazeera, 2020) and Ethiopia (Jackson et al., 2022).
These criticisms of Facebook coincided with a phase of the platform’s growth beyond North America and Europe into new markets such as India and other parts of the Global South. Facebook’s expansion of its global footprint was accompanied by a change in its content moderation from an ad hoc, loosely framed set of standards to one that included specific rules (Medzini, 2022, pp. 2233–40). As Facebook’s content moderation policies and rules evolved, Meta put in place a more elaborate and polycentric platform governance infrastructure to design and implement them. A polycentric model of governance involves a constellation of actors in platforms’ regulation of speech: governmental and substate actors (legislative assemblies, individual ministries, regional and local governments, police cyber cell units, regulators, national security agencies, etc.); individuals, groups, and non-governmental organisations of different persuasions and motivations (civil society groups, collectives, journalists, academics, etc.); and private actors (platform companies, advertisers, data brokers, traditional or legacy media companies, etc.) (Black, 2008, pp. 137–64; Gorwa, 2022b). In the next section, I will discuss in detail the internal processes through which Facebook changed its rules and content moderation policies within its polycentric model of governance. By describing the “legislative process” involved in Facebook’s framing of its rules, I bring this paper into conversation with existing scholarship that argues that social media platforms such as Facebook, which are private corporations, are important sovereign actors that influence how hate speech is defined and understood.
Facebook frames its content moderation policies based on discussions with multiple actors, including its internal teams and researchers, as well as external actors such as human rights activists, lawyers, and journalists. These range from formal meetings to informal closed-door discussions. In my capacity as a lawyer and researcher working in the area of hate speech regulation and free speech, I have participated in a few such discussions over the years. In December 2020, I was invited to one such “engagement,” held online and organised by members of Facebook’s Product Policy teamsFootnote 4 from Singapore, Hyderabad, and Delhi. This “engagement” was designed by members of Facebook’s Product Policy teams to discuss proposed changes to Facebook’s then-existing rule that made a blunt distinction between “attacks against people” and “attacks against concepts and institutions.” The meeting was meant to gather feedback from a select group of stakeholders, including policy experts, journalists, lawyers, academics, and human rights defenders based in, or having expertise in, the Indian context. The exact details of the meeting are confidential. It was part of a wider process that Facebook had undertaken to get feedback on the proposed change from stakeholders in different regions of the world. This process resonates with Kettemann and Schulz’s argument that Facebook considers external stakeholders or third-party actors to be representatives speaking on behalf of, or as proxies for, the platform’s wider community of users and diverse publics (Kettemann and Schulz, 2020, pp. 15–20).
This meeting related to a change Facebook was then considering to a rule in its community standards on hate speech that constituted the basis of the platform’s understanding of free speech (Meta, 2025). The rule concerned the distinction between “attacks against people” and “attacks against concepts and institutions” (such as symbols, revered figures, and community leaders). This distinction formed a fundamental basis of Facebook’s hate speech policies and drew on the consequentialist, First Amendment based understanding of free speech in United States jurisprudence. Facebook defines hateful conduct as direct attacks on people on the basis of protected characteristics such as race, caste, disability, etc. (Meta, 2025). This definition specifically excludes attacks on concepts and institutions (Meta, 2025). According to this rule, a post on the platform that said “Christians are evil” would constitute hate speech and be disallowed, whereas a post that said “Christianity is evil” or “Jesus Christ is evil” would not.
In 2021, based on feedback from both internal experts and teams and external stakeholders that form part of Facebook’s polycentric model of governance, the platform decided to do away with the blunt distinction between “attacks against people” and “attacks against concepts and institutions” by introducing a caveat that allows its content moderators to consider certain contexts in which “attacks on concepts and institutions” can constitute hate speech on the platform and should be regulated (Meta, 2025). Facebook’s Community Standards on hate speech now state that in certain cases the platform requires additional information to regulate speech-acts. These cases include those that pertain to “Content attacking concepts, institutions, ideas, practices or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic” (Meta, 2025).
In order to determine whether speech-acts that fall under this category “are likely to contribute to imminent physical harm, intimidation or discrimination,” Meta has listed several factors that its content moderators will examine. These include (a) whether such content could incite imminent violence or intimidation, (b) whether such speech has been circulated during a period of heightened tension, and (c) whether the targeted protected group has faced violence recently (Meta, 2025). Meta has stated that Facebook could, if it thought necessary, also consider whether the speaker occupied a position of authority or was a public figure (Meta, 2025). These changes have made Facebook’s hate speech rules more nuanced, broadened the scope of its understanding of the harms caused by hate speech, and brought its policies in line with international human rights law norms such as the Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence, 2013 (Rabat plan of action, 2013).Footnote 5
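The structure of the amended rule can be summarised schematically. The sketch below is my own reconstruction for exposition, based only on the publicly stated policy language cited above; the field names, the aggregation of signals, and the decision function are all hypothetical, and Meta does not disclose how its moderators or automated systems actually weigh these factors:

```python
# Schematic reconstruction of the 2021 rule change, based on the publicly
# stated policy language (Meta, 2025). A toy model for exposition only;
# Meta's actual moderation systems are not public. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    attacks_people: bool            # direct attack on people (protected characteristics)
    attacks_concepts: bool          # attack on concepts/institutions (e.g., a religion)
    likely_imminent_harm: bool      # (a) could incite imminent violence or intimidation
    heightened_tension: bool        # (b) circulated during a period of heightened tension
    recent_violence_against_group: bool  # (c) targeted group recently faced violence
    speaker_is_public_figure: bool  # optional signal: speaker authority or reach

def violates_hate_speech_rule(post: Post) -> bool:
    # Pre-2021 logic: only direct attacks on people were actionable.
    if post.attacks_people:
        return True
    # 2021 caveat: attacks on concepts/institutions become actionable when
    # contextual signals indicate likely real-world harm to the people
    # associated with the protected characteristic.
    if post.attacks_concepts:
        extra_context = [
            post.heightened_tension,
            post.recent_violence_against_group,
            post.speaker_is_public_figure,
        ]
        # Hypothetical aggregation: treat the speech-act as actionable if the
        # imminent-harm signal is present alongside at least one further
        # contextual signal. The real weighting is not disclosed.
        return post.likely_imminent_harm and any(extra_context)
    return False

# "Christianity is evil" in isolation remains non-actionable; the same attack
# circulated amid communal tension and recent violence could now be actionable.
print(violates_hate_speech_rule(Post(False, True, False, False, False, False)))  # False
print(violates_hate_speech_rule(Post(False, True, True, True, True, False)))     # True
```

The point of the sketch is the change in control flow: before 2021, the second branch did not exist, and attacks on concepts were never actionable regardless of context.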
Within Facebook’s polycentric model of governance, the team primarily responsible for changes to its content moderation rules and policies is the Product Policy Team. The Product Policy Team comprises smaller teams based on their roles, such as teams responsible for policy development, responding to incidents of violence, child safety, and terrorism-related content, as well as a stakeholder engagement team that solicits inputs from external experts and stakeholders (Dvoskin, 2020). Facebook begins the process of changing its content moderation rules by identifying a specific theme or proposed change to its community standards or policies as a priority, based on “signals” from “societal trends” that it identifies through actors involved in its governance model (Kettemann and Schulz, 2020, p. 21). Facebook’s internal stakeholders (which may include the leadership of the platform company) are then informed of these proposed changes (Kettemann and Schulz, 2020, p. 21). Once the specific themes for discussion have been identified, two types of deliberations take place. The first is termed a “Heads Up,” a discussion within the Product Policy Team in which members of the team introduce a broad topic that has been identified for discussion with Facebook’s external stakeholders (Meta, 2018). After the “Heads Up,” the platform’s internal subject matter experts related to the proposed change arrange for working groups to discuss and research the proposed changes. Based on the inputs of these working groups, the subject matter experts frame options for an update or change in the policy or community standards. These options are then sent as “policy recommendations” to the Product Policy Forum (Meta, 2018). The Product Policy Forum, which Monika Bickert, the then head of Facebook’s Policy Team, has described as “mini-legislative sessions” (Bickert, 2018), is a fortnightly meeting of 80 to 100 Facebook employees from different teams, held in English (Medzini, 2022, p. 2245).Footnote 6
The main purpose of these meetings is to propose and discuss updates to Facebook’s content moderation policies. Participants in these meetings include members from several of Facebook’s teams, including the Legal, Cybersecurity Policy, Trust and Safety, Community Operations, Counterterrorism, Diversity, Civil Rights and Human Rights, United States State and Federal Policy, and Public Policy teams (Meta, 2018). Other participants include regional Public Policy Team members, representing different regions of the world in which the platform operates (Meta, 2018). These regional Public Policy Team members, with whom I was in conversation for my research, were familiar with their own local contexts and therefore understood the local nuances of how the virality of hate speech online on the platform could lead to “real-world” harm in their regions.
As part of its efforts to make its internal content moderation processes more transparent, Facebook has started publishing updates to its community standards and content moderation policies on its “Newsroom” (Kettemann and Schulz, 2020, p. 17), a space on the company’s website where representatives of the company communicate information related to updates in their policies and rules (Meta, 2018). Meta has also made publicly accessible the slide decks prepared by the Facebook Product Policy Team for Product Policy Forum meetings from November 2018 onwards, with gaps after the onset of the COVID-19 pandemic in January 2020. In these meetings, members of various Facebook teams propose and discuss changes to the FCS and to Facebook’s advertisement and product-related policies, including changes to its News Feed ranking (Meta, 2018). While making these slide decks available to the public is a positive move, Facebook provides only brief details in them and does not identify the third-party actors involved in these processes.
An analysis of the slide decks related to this specific change in the FCS revealed two broad schools of thought on how to approach this question. The first, broadly mirroring the views of “traditional free speech advocates” in this discussion, was firmly of the view that speech and action must be distinguished. This view resonated with the consequentialist, First Amendment view of free speech in the United States, although many of the actors who supported this position were from contexts across the world where increasing majoritarianism and authoritarianism have made it difficult to express dissent. Some of these actors were concerned about protecting dissenting and minority views within religions that risk being termed blasphemous, which in contexts such as Pakistan and Bangladesh could have deadly consequences for the speaker.Footnote 7 According to this school of thought, hate speech should only be regulated when it incites, or is likely to incite, violence or other unlawful acts. This position is aligned with a consequentialist understanding of United States First Amendment jurisprudence, as reflected in the Brandenburg standard.Footnote 8 The second camp comprised those who held a constitutive view of hate speech, based on the position that, under certain circumstances and in specific contexts, hate speech can in itself constitute a harm. Those who belonged to this camp were concerned that a narrower consequentialist definition of hate speech does not adequately capture the range of speech-acts that could, under certain circumstances, dominate, subjugate, and cause harm without having to lead to a separate unlawful action or consequence (Matsuda et al., 1993; Langton, 1993; Mackinnon, 1996; Maynard and Benesch, 2019; Gelber, 2021).
As opposed to stakeholders who supported maintaining the status quo (and giving users greater “voice”), other stakeholders, including regional subject matter experts, especially those from India and Southeast Asia, argued that Facebook’s Community Standards and content moderation policies should be able to account for the nuances of local context. The views of these stakeholders were shaped by the lived experience of people in their countries, where the distinction between “attacks on people” and “attacks on concepts and institutions” ran the risk of being perceived as meaningless and hyper-technical.
In this section, I have described the internal processes and discussions that led to Facebook’s change to its content moderation policies and rules on hate speech, and the broad differences of opinion in relation to the specific change that I discuss in this paper. In the next section, I will show how the distinction between “attacks against people” and “attacks against concepts and institutions” that was at the heart of these discussions appears meaningless and hyper-technical in some circumstances, especially in contexts such as India.
3. “Attacks on people” v “attacks on concepts and institutions” and hate speech law in India
Facebook’s 2021 update to its Community Standards on Hate Speech reflects a shift in its content moderation policy. As Facebook’s user base in regions such as South and Southeast Asia grew over the years, the inability of such a blunt rule distinguishing between “attacks against people” and “attacks against concepts and institutions” to address diverse contexts outside of the United States, and its divergence from international human rights law, became more obvious. The limitations of this blunt distinction were exacerbated by the greater connectivity provided by the affordances of Facebook, which have facilitated virality and amplified the illocutionary force and perlocutionary effects of hateful content in certain contexts. The feedback that Facebook received through actors involved in its polycentric model of governance, many of whom were well-versed in the diverse contexts where Facebook has become increasingly popular, was that the platform was unable to respond to the “real-world harms” caused by content on the platform.
Facebook’s 2021 update to its Community Standards on Hate Speech was a response to increasing public criticism of the “real-world harms” caused by content on the platform, i.e., harms brought about by the amplification of hate speech online and the specific mode of transmission—virality—enabled by the material infrastructures of speech on the platform. Facing the threat of greater government regulation, fines, and negative publicity over these “real-world” harms, Facebook initiated this exercise of updating its community standards. Facebook’s decision to introduce a caveat to the blunt distinction that this rule made brought the platform’s content moderation in line with international human rights standards, as exemplified by the Rabat Plan of Action.
By changing its community standards, Facebook also brought its definition of hate speech closer to that of existing hate speech law in India, where the penal provisions that criminalise hate speech, such as Sections 196 and 299 of the Bharatiya Nyaya Sanhita, 2023 (BNS) (corresponding to Sections 153A and 295A of the Indian Penal Code, 1860 [IPC]), do not distinguish between “attacks on people” and “attacks on concepts and institutions.” In India, the history and scope of the penal laws that criminalise hate speech are controversial. Critics have pointed to the dangers of an overbroad constitutive understanding of hate speech in criminal law, which has been misused both by state authorities against minority groups to restrict dissenting views and by religious and community groups to harass artists, writers, and public figures for hurting their sentiments (Kaur and Mazzarella, 2009; George, 2016; Narrain, 2016).
However, given that Facebook’s rules do not have any penal consequences, these debates around the potentially grave criminal consequences of such a broad definition of hate speech do not apply. The historical similarities between the change in the normative framework of hate speech laws in the IPC in the 1920s and Facebook’s Community Standards in 2021 are worth emphasising, especially given that one of the key debates at the heart of both these changes was the distinction that the law makes between “attacks on people” and “attacks on concepts and institutions.”
To understand this argument, consider the origins of Section 295A, which lawmakers in colonial India enacted to fill a gap in the then-existing law and to address the public outrage over the publication of a pamphlet, Rangila Rasool (The Colourful Prophet). Rangila Rasool was a satirical account of the domestic life of the Prophet, written in the form of a booklet and printed in Lahore by a publisher called Mahashay Rajpal. Rajpal’s beliefs were inspired by the Arya Samaj, a reformist Hindu movement that in this period had waged campaigns that were part of a pattern of competitive communal politics and exclusionary nationalism between Hindus and Muslims (Stephens, 2013). These campaigns often involved attacks on the sexual morality of the members of a religion and their founding religious figures and icons (Stephens, 2013, p. 48). Rajpal was prosecuted by the Punjab Government under Section 153A (which criminalised promoting enmity between groups and corresponds to Section 196 of the BNS). After a protracted legal battle, Rajpal was acquitted of the charges under Section 153A by the Lahore High Court in 1927. Justice Dalip Singh, the judge who decided the case, held that satire that targeted the personal life of a deceased religious figure did not fall within the scope of Section 153A, and recommended that the legislature enact a separate law that would be able to include literature of this nature within its scope (Raj Paul v King Emperor, 1927, p. 591). The Lahore High Court reasoned that a satirical account of the Prophet, although it amounted to an attack against a deceased religious leader, did not amount to an attack against a community (Raj Paul v King Emperor, 1927, p. 592).Footnote 9 It is startling to see the extent to which the Lahore High Court’s distinction in 1927 between “attacks on people” and “attacks on concepts and institutions” both predates and anticipates the theme of Facebook’s 2021 update.
In 1927, Justice Singh could not have convicted Rajpal under Section 298, because the then-existing provision in the IPC that prohibited “uttering words, sounds, gestures, objects with a deliberate intent to wound religious feelings” did not cover speech in the form of print or other media. In the 1920s, as now, Muslim organisations and publics perceived this legal distinction between “attacks on people” and “attacks on concepts and institutions” to be meaningless. The controversy over Rangila Rasool led the Indian Legislative Assembly, in 1927, to introduce Section 295A, which criminalised “deliberate and malicious acts to outrage religious feelings of any class by insulting its religion or religious beliefs.” The Indian Legislative Assembly comprised both colonial officials and Indian representatives. The members of the Assembly intentionally framed Section 295A broadly, to include within its scope speech-acts that amounted to “attacks on people and institutions,” including those published and distributed through print. This change allowed state officials to respond to speech-acts that attacked members of a religious community while being disguised as insults to religions, religious institutions, and religious figures. However, some members of the Assembly criticised Section 295A as potentially allowing for the criminalisation of legitimate religious debate, academic research, and bona fide historical work (Nair, 2013, pp. 331–2). This criticism of the potential for the broad scope of Section 295A to be misused by state authorities to criminalise legitimate speech and stifle dissent has proved prescient, as I have pointed out at the beginning of this section.
The debate around the misuse of Section 295A remains relevant today, as Hindu nationalists and their supporters have mobilised different forms of media, including the affordances of Facebook and other social media platforms, to attack Muslims and other minorities. Despite these criminal provisions, “attacks on people” in the guise of “attacks on concepts and institutions” are a recurring phenomenon in contemporary India. For instance, in 2022, remarks targeting the Prophet by a prominent journalist and then spokesperson of the ruling Bharatiya Janata Party (BJP) on live television led to a massive controversy and a diplomatic crisis between the Indian government and Muslim-majority countries, including Kuwait and Qatar, and condemnation from the Organisation of Islamic Cooperation (Al Jazeera, 2022; Organisation of Islamic Cooperation General Secretariat, 2022). However, several Hindu nationalists and their supporters (many of them online) have defended the journalist’s “right to free speech,” hailing the journalist as a hero. The invocation of free speech by Hindu nationalists in this case mirrors the increasing use of tropes such as free speech and individual liberty by neo-fascist and majoritarian forces within democracies to justify hate speech against minorities, migrants, and those politically opposed to their views. In India, state agencies in BJP-ruled states have seized on the opportunity created by communal violence linked to speech-acts targeting the Prophet to bulldoze and demolish the homes of Muslims accused of taking part in the riots.Footnote 10 These instances reflect a growing state of impunity when it comes to speech-acts and violence against Muslims by Hindu nationalist groups. This impunity extends to speech-acts by Hindu nationalist individuals and groups and their supporters on Facebook.
4. Facebook’s 2021 update to its community standards on hate speech
In this section, I will show that while Meta has updated Facebook’s Community Standards on Hate Speech to address the “real-world harm” of the virality of hate speech online on the platform, Meta is reluctant to apply these rules to take stringent action against Hindu nationalist individuals, groups, and their supporters. This has led to a growing gap between Facebook’s rules as written and the platform’s willingness to implement them. It has also contributed to the overall impunity that Hindu majoritarian individuals and groups enjoy for their speech-acts targeting Muslims and other minorities.
In India, Hindu nationalists and their supporters have commonly used attacks against the Prophet and Islam to attack Muslims (Pandey, 2022). Hate speech targeting Muslims and other minorities is not new in post-independence India. However, as I have already stated in the introduction to this paper, in contemporary India, Hindu nationalist ideology has become increasingly routinised and normalised (Jaffrelot, 2021; Hansen and Roy, 2022, pp. 2–3). State violence in contemporary India, especially post-2014, has taken on blatantly anti-minority forms and has been used to reconstitute existing legal and political norms in the country (Hansen and Roy, 2022, p. 16).
Despite growing criticism of Meta’s lack of serious action against Hindu nationalists and their supporters responsible for anti-minority hate speech on Facebook, the platform company has been reluctant to take stringent action against them. One of the reasons for Meta’s inaction is that the platform company fears a backlash from the Indian government and individuals and groups ideologically aligned with the government that could impact the platform company’s business interests in the country and compromise the safety of its employees in India. Facebook’s reluctance to act against hate speech by Hindu nationalists and their supporters on its platform reveals a contradiction between the normative basis of its content moderation policies, which are targeted at curbing the “real-world” impact of the virality of hate speech online, and the platform’s unwillingness to implement these norms.
While Facebook’s 2021 update to its Community Standards on Hate Speech is part of a series of measures that Meta announced to address the “real-world” harm resulting from the virality of hate speech online on the platform, in India, there has been growing criticism of Meta’s reluctance to take proactive action to prevent Hindu nationalist hate speech on its platform. Emboldened by hateful comments made regularly by the national and state leadership of the BJP, Hindu nationalist crowds continue to mobilise both offline and online with greater impunity.
The power of these crowds was evident during the events leading up to the 2020 Delhi riots, which resulted in the death of 53 persons and injuries to hundreds of others (Lokur et al., 2020, p. 8). Of those who were killed, two-thirds were Muslims (Lokur et al., 2020, p. 65). In the period leading up to these riots, leading Hindu nationalist figures and influencersFootnote 11 used their ability to harness the virality afforded by Facebook and other platforms to mobilise Hindu nationalists and their supporters, with devastating consequences (Delhi Minorities Commission, 2020; Lokur et al., 2020).Footnote 12 In the aftermath of the 2020 Delhi riots, Facebook faced growing criticism of its response, or lack thereof, during the riots. An article in The Wall Street Journal published in October 2021 reported that a team of Facebook’s internal researchers had determined in July 2020 that inflammatory and anti-minority content targeting Muslims was common on Meta’s platforms in India, including Facebook and WhatsApp (Purnell and Horwitz, 2021). According to the report, the internal teams that had investigated the prevalence of hate speech on the platform had identified two Hindu nationalist groups with ties to the BJP as sources of anti-Muslim content, and had recommended that one of these organisations be banned from the platform for violating Facebook’s hate speech community standards (Purnell and Horwitz, 2021). However, Facebook did not take any action against this organisation (Purnell and Horwitz, 2021).
Soon after The Wall Street Journal’s news reports on Facebook’s refusal to act on hate speech by members of the BJP and its Hindu nationalist supporters, the Chairman of the Peace and Harmony Committee, a committee constituted by the Delhi Legislative Assembly to investigate the causes of the 2020 Delhi riots, held a press conference alleging that, from the complaints received by the Committee, it appeared prima facie that Facebook had colluded with vested interests during the riots (Ajit Mohan v Legislative Assembly, National Capital Territory of Delhi & Ors, 2021). The Chairperson of the Committee, Raghav Chadha, a member of the AAP (the ruling party in the state of Delhi at the time), called for an independent investigation into Facebook’s role during the riots (Ajit Mohan v Legislative Assembly, National Capital Territory of Delhi & Ors, 2021). The Committee summoned Ajit Mohan, the then Vice President and Managing Director of Facebook India, to testify before it on how Facebook India enforced its rules and policies. In summoning Mohan, the Committee stated that it had received several complaints alleging the “intentional omission and deliberate inaction on the part of social media platform Facebook to apply hate speech rules and policies which has allegedly led to serious repercussions and disruption of peace and harmony across the NCT of Delhi” (Ajit Mohan v Legislative Assembly, National Capital Territory of Delhi & Ors, 2021, Para 16). In its summons, the Committee stated that a few of the complaints it had received relied on The Wall Street Journal’s news reports on Facebook’s refusal to act on hate speech by members of the BJP and its Hindu nationalist supporters.
In his response to the Committee’s summons, the then Director of Trust and Safety, South and Central Asia, Vikram Langeh, stated that Facebook had banned “individuals and groups that proclaimed a hateful and violent mission from having a presence on its platforms” (Ajit Mohan v Legislative Assembly, National Capital Territory of Delhi & Ors, 2021, Para 17). Objecting to the Committee’s notice, Langeh called on its members to recall the summons (Ajit Mohan v Legislative Assembly, National Capital Territory of Delhi & Ors, 2021). The Committee responded that it was within its purview to summon Mohan to ensure good governance in the state of Delhi, and that Mohan’s non-compliance would be treated as a breach of privilege by the Committee (Ajit Mohan v Legislative Assembly, National Capital Territory of Delhi & Ors, 2021, Para 19). Mohan and Langeh approached the Supreme Court challenging the Committee’s powers to summon representatives of the platform. They argued that representatives of the platform had already testified before the Parliamentary Standing Committee on Communications and Information Technology, headed by Congress Member of Parliament Shashi Tharoor. The Parliamentary Standing Committee had summoned Facebook India to depose before it in September 2020 on the subject of “safeguarding citizen’s rights and prevention of misuse of social/online news media platforms,” with an emphasis on the security of women (Ajit Mohan v Legislative Assembly, National Capital Territory of Delhi & Ors, 2021, Para 13). The Facebook representatives also argued that the Delhi Legislative Assembly did not have power over “law and order” and “police,” since these were subjects under the control of the Union Home Ministry.
In its judgement, the Supreme Court declared that the Committee did have the power to summon representatives of Facebook; during these proceedings, the Delhi Legislative Assembly’s Peace and Harmony Committee reissued its summons to Facebook India. The Supreme Court clarified that the Committee had the power to summon a representative of Facebook to testify before it, but that it was not a prosecuting agency and could not compel any representative of Facebook to answer questions related to “law and order” and “police” (Ajit Mohan v Legislative Assembly, National Capital Territory of Delhi & Ors, 2021, Paras 204–205). In its judgement, the court rightly refuted the view that Facebook was a neutral platform for posting third-party information and stressed the platform’s “ability to decide which content to amplify, suggest, and elevate” (Ajit Mohan v Legislative Assembly, National Capital Territory of Delhi & Ors, 2021, Para 6). The Supreme Court’s assessment of Facebook’s active role in shaping public discourse mirrors the recognition of this role in the scholarship that I have referred to earlier in this paper.
Following the Supreme Court decision, in November 2021, the then Head of Facebook’s Public Policy Team in India, Shivnath Thukral, testified before the Committee. Although the Supreme Court had clarified that the Delhi Legislative Assembly’s Committee was not a prosecuting agency, members of the committee used the opportunity to dramatic effect, livestreaming the proceedings on YouTube (Delhi Legislative Assembly Peace and Harmony Committee, 2021). In his testimony, Thukral, citing the Supreme Court’s decision, refused to answer any question that directly pertained to the Delhi riots (Delhi Legislative Assembly Peace and Harmony Committee, 2021). The members of the Committee pressed Thukral on whether Facebook’s hate speech rules were defined based on Indian law, and whether Facebook India defined hate speech in the Indian context (Delhi Legislative Assembly Peace and Harmony Committee, 2021). Thukral stated that the FCS applied globally but took local factors into consideration (Delhi Legislative Assembly Peace and Harmony Committee, 2021). Thukral repeatedly cited the addition of caste to the protected categories in the FCS on hate speech, based on inputs from Indian civil society, as an example of how the platform’s hate speech rules accounted for the Indian context (Delhi Legislative Assembly Peace and Harmony Committee, 2021). However, in his testimony, Thukral did not address the allegations of bias levelled against Facebook, or its inaction in relation to hate speech by Hindu nationalists and their supporters on the platform (Delhi Legislative Assembly Peace and Harmony Committee, 2021).
Allegations of favouritism and of reluctance on the part of Facebook to prevent hate speech targeting Muslims and minorities by Hindu nationalist figures in India, including state and substate actors (Purnell and Horwitz, 2020; Purnell and Roy, 2020), were further fuelled by the revelations of the whistleblower and data analyst Sophie Zhang. Zhang revealed parts of Facebook’s internal communications showing that the platform had taken selective action in relation to networks of fake accounts attempting to influence the 2019 Indian Parliamentary elections (Shree, 2022). Zhang stated that despite her flagging four such networks, the platform did not act on one of them, which was run by a BJP Member of Parliament and his associates (Shree, 2022). According to Zhang, Facebook’s facilitation of the virality of content was central to its business interests, its drive to expand its user base, and its push to increase the quantity and quality of the time users spent on the platform (Shree, 2022). Zhang’s account highlights the contradiction between the stated purpose of Facebook’s content moderation policies, which evolved through the platform’s governance model to prevent “real-world” harm, and the profitability of virality on Facebook, which could be threatened if the platform were to take overt action against Hindu nationalists associated with the ruling BJP in India.
In 2020, in response to sustained pressure from citizens’ groups to conduct an independent assessment of Facebook India’s record, including around allegations of bias,Footnote 13 Meta commissioned an independent Human Rights Impact Assessment (HRIA) by the United States based law firm Foley Hoag LLP. The law firm interviewed 40 persons, including academics, journalists, and civil society stakeholders, as part of the assessment (Meta, 2022a). HRIA mechanisms are a method that the Facebook Oversight Board has recommended for the platform to understand its impact on communities. However, after conducting this exercise, Meta did not publish the full details of the HRIA, stating that it feared a backlash that could compromise the safety of its staff and of the external researchers involved in the reporting process in India (Purnell, 2022). Instead, the platform company published a limited, four-page summary of the HRIA’s results in Meta’s Human Rights Report for 2020–2021. These results did not refer to any conclusions related to the allegations of bias in content moderation on the platform in relation to India (Meta, 2022a, pp. 57–60). The only reference in this summary to problems arising from the virality of hate speech online on Meta’s platforms in India was an acknowledgement of “the potential for Meta’s platforms to be connected to salient human rights risks caused by third parties, including: restrictions of freedom expression and information; third party advocacy of hatred that incites hostility, hatred or violence; as well as violations of privacy and security of person” (Meta, 2022a, p. 59).
According to Meta’s summary of the HRIA, the platform company “faced criticism and potential reputational risks related to risks of hateful and discriminatory speech by end users” (Meta, 2022a, p. 59). The reference to “salient human rights risks caused by third parties” portrays the problem as lying with actors unconnected to the platform. However, this is not the case. Zhang’s revelations provide an insight into how Facebook’s regulation works in practice in relation to Hindu nationalist individuals, groups, and their supporters. The platform’s own political motivations, such as its business interests in India and its fears of a backlash against its Indian employees over any action taken against Hindu nationalist figures, are among the factors that have determined how its global rules on hate speech online and virality have been applied in the contemporary Indian context.
Meta’s reluctance to take proactive action against hate speech by Hindu nationalists and their supporters targeting minorities on Facebook is especially concerning given internal assessments by the platform’s researchers indicating that, along with the United States and Brazil, India is one of the countries most at risk of the “real-world” harms of hate speech. In October 2021, leaked internal Facebook documents revealed that the platform had designated India, along with Brazil and the United States, as part of “Tier Zero”, that is, the countries that the platform deemed to be of the highest priority because of a heightened risk of political violence and social instability (Newton, Reference Newton2021). Civil society groups have repeatedly argued that the impunity that Hindu nationalist individuals, groups, and their supporters enjoy on Facebook and other social media platforms has contributed to the silencing and subordination of Muslims and other minority groups in contemporary India (Soundararajan et al., Reference Soundararajan, Kumar, Nair and Greely2019; Manuvie et al., Reference Manuvie, Jalajadevi, Kahle and Khan2022; India Hate Lab, 2024). In this section of the paper, I have argued that there is a growing contradiction between Meta’s efforts to change its global rules on hate speech and the platform company’s reluctance to take effective measures to curb hate speech by Hindu nationalists and their supporters on Facebook. Although Facebook changed its content moderation rules on hate speech to address the “real-world” harm of the virality of hate speech online on its platform, based on suggestions from a range of actors including those from India, these changes do not seem to have translated into effective action on the part of the platform in curbing the virality of hate speech online. Facebook’s reluctance to act against the virality of hate speech online by Hindu nationalists and their supporters has allowed for continued impunity for such speech-acts.
5. Conclusion
Facebook’s decision to qualify the rule that distinguishes “attacks on people” from “attacks on concepts and institutions” represents a broader shift that took place from the mid-2010s to the mid-2020s in the platform’s content moderation policies, from a consequentialist, United States First Amendment-influenced position on free speech to a constitutive approach. I understand this shift to be a response to criticisms from governments, regulators, and courts of the “real-world” harm of the virality of hate speech online on the platform in local contexts. More recently, from around the mid-2020s, with the advent of the second Trump administration, Meta has announced a reversal of this policy and a move back to a narrower and supposedly more “speech protective” approach to regulating speech-acts on its platforms. Nonetheless, the processes through which the specific Facebook rule on hate speech that I have tracked in this paper was changed show how the global rules set by social media platforms such as Facebook come into conversation with specific contexts such as India, where the majority of their users are today.
In adopting a constitutive approach to hate speech, Facebook was responding to concerns around the “real-world” harm of the virality of hate speech online on the platform expressed by the third-party actors who took part in the “legislative process” leading up to the change in this rule. The 2021 update to Facebook’s Community Standards on “attacks on people” v “attacks on concepts and institutions” has allowed for a broader definition of hate speech that could address speech-acts by Hindu nationalist groups and their supporters in India that attack the Prophet and Islam as a deliberate part of their strategy to create disharmony between the majority Hindu and minority Muslim communities, to instigate violence between these communities, and to silence and subordinate Muslims. As participants in Facebook’s “legislative process” have argued, in such contexts, a strict distinction between “attacks against people” and “attacks against concepts and institutions” is meaningless and hyper-technical. This shift in Facebook’s Community Standards on hate speech aligned Facebook’s content moderation policies with international human rights principles such as the Rabat Plan of Action. In making this change, Facebook deviated from the content moderation standard followed by other social media platforms such as X (formerly Twitter), which continue to moderate speech-acts based on this distinction (X Help Center, 2023).
In this paper, I have shown that Facebook’s content moderation rules do matter, a conclusion supported by extant scholarship in the field and mirrored by the views of institutions such as the Indian Supreme Court in its judgment related to the Delhi Peace and Harmony Committee proceedings. Given that platforms such as Facebook form important nodes in the ecosystem of information capitalism within which they operate, and given their role in facilitating the virality of speech-acts including hate speech, the actions and inactions of platforms must be scrutinised. Meta’s changes to Facebook’s hate speech rules from a consequentialist to a constitutive approach to the regulation of speech-acts signalled a shift in the normative basis for the platform’s approach in the mid-2010s. However, Meta’s reluctance to take effective action against Hindu nationalists and their supporters, who have weaponised hate speech on the platform to silence and subordinate minorities and dissenting voices and to construct and consolidate majoritarian legality in India, shows the limits of the normative changes in the FCS that I have highlighted. Meta’s recent announcements that it was reverting to an earlier, United States First Amendment-centric consequentialist approach to the regulation of speech-acts, to prevent the “over enforcement” of its content moderation rules against speech-acts (Kaplan, Reference Kaplan2025), make it highly unlikely that Facebook will implement its hate speech rules to take meaningful action against the Hindu nationalists and their supporters responsible for hate speech on the platform. Meta’s recent reversal of its position on free speech indicates a re-alignment of the platform company’s norms, from a position that was in consonance with international human rights norms and accounted for the development of the law on hate speech in countries such as India, to one that has reverted to its narrower United States First Amendment origins. Meta’s realignment of its position on free speech is inspired by an ultra-libertarian concept of liberty that underpins the views of contemporary fascist formations (Baxi and Eckert, Reference Baxi and Eckert2026). Increasingly, both in the United States and in other majoritarian contexts, this ultra-libertarian concept of liberty is being applied selectively to protect speech-acts by supporters of majoritarian regimes. The weaponisation of hate speech, and of the virality of speech-acts afforded by Facebook and other platforms, by Hindu nationalists and their supporters to silence and subordinate minorities and dissenting voices is among the many contemporary practices that have contributed to the production of majoritarian legality in India.
Acknowledgements
Thanks to the co-editors of this volume, Professor Pratiksha Baxi and Professor Julia Eckert, for their generous feedback on draft versions of this paper. I would like to thank Dr Sundhya Fuchs and Lubhyathi Rangarajan for their comments on an early version of this paper. I would also like to thank the two anonymous reviewers for their suggestions and comments during the peer review process.