
11 - Dealing with Disinformation: Evaluating the Case for Amendment of Section 230 of the Communications Decency Act

Published online by Cambridge University Press:  24 August 2020

Nathaniel Persily, Stanford University, California
Joshua A. Tucker, New York University

Summary

Recent revelations surrounding Russian interference in the 2016 US presidential election and the role that "fake news" may have played in shaping voter preferences have sparked a broad conversation on how to combat the spread and influence of disinformation online. Emerging from this conversation have been a number of proposals that seek to pass legislation or promulgate regulations that would make it more difficult for disinformation to flow through the Web. To that end, these interventions will confront the long-standing legal protections provided by Section 230 of the Communications Decency Act of 1996 (CDA 230), a key legal provision that broadly shields platforms from legal liability for the actions of third-party users of their services. The present-day debates on how to address "fake news" will join the legacy of efforts to reform or eliminate the shield provided by CDA 230. This chapter seeks to address three questions given this historical background. First, would modifications to CDA 230 pave the way to an effective response to the challenges posed by disinformation online? Second, if so, should such modifications be made? Finally, how should such modifications be crafted?

Type: Chapter
Social Media and Democracy: The State of the Field, Prospects for Reform, pp. 252–285
Publisher: Cambridge University Press
Print publication year: 2020
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

Introduction

Revelations surrounding Russian interference in the 2016 US presidential election and the role that “fake news” may have played in shaping voter preferences have sparked a broad conversation among researchers, policymakers, technologists, and others on how to combat the spread and influence of disinformation online. Emerging from this conversation have been a number of legislative or regulatory proposals that would make it more difficult for disinformation to flow through the Web.

Given the fragmented nature of information creation across the Web, many of these proposals rely on the central role that online platforms such as Google, Facebook, and Twitter play in shaping the distribution of information. By creating incentives – or penalties – encouraging platforms to take a more proactive role in removing or combating disinformation, these interventions seek to leverage the unique position of these companies as potentially the most effective “least cost avoiders” in addressing the challenge posed by disinformation.

To that end, these interventions will confront the long-standing legal protections provided by Section 230 of the Communications Decency Act of 1996 (CDA 230), a key legal provision that broadly shields platforms from legal liability for the actions of third-party users of their services. For the past two decades, this provision has been seen as a major driver in the growth of online services and a cornerstone supporting free expression on the Web. Simultaneously, CDA 230 has also been argued to inhibit platform responsiveness to the harms posed by harassment, defamation, child pornography, and a host of other activities online. The present-day debates on how to address “fake news” will join the legacy of efforts to reform or eliminate the shield provided by CDA 230.

This chapter seeks to address three questions given this historical background. First, would modifications to CDA 230 pave the way to an effective response to the challenges posed by disinformation online? Second, if so, should such modifications be made? Finally, how should such modifications be crafted?

“Part 1: The Disinformation Challenge” frames the challenge posed by disinformation online, specifically the threat posed by the confluence of politically motivated actors, financially motivated media outlets, and online “troll” culture. “Part 2: How Does CDA 230 Shape Efforts to Combat Online Political Disinformation?” examines the legislative history and case law surrounding CDA 230 and evaluates its impact on contending with the challenge of online disinformation. “Part 3: Should CDA 230 Be Modified to Address Political Disinformation?” takes up the challenge of whether or not CDA 230 should be modified in light of this analysis, concluding that partial amendment focused on the techniques leveraged by disinformation campaigns is warranted.

Part 1: The Disinformation Challenge

“Fake news” has become a commonplace term for characterizing the prevalence of false or inaccurate stories circulating online, considered a symptom of the poor state of information quality throughout media and society generally. These stories were widely distributed during the 2016 US presidential election, with one survey suggesting that close to one in five US adults saw headlines claiming (falsely) that the Pope had endorsed then-candidate Donald Trump and that protestors had been paid $3,500 to disrupt a Trump rally (Silverman and Singer-Vine 2016). These stories were also considered credible, with 64 percent and 79 percent of respondents, respectively, reporting that they believed the stories to be “very or somewhat accurate” (Silverman and Singer-Vine 2016).

However, as has been observed by others, the use of “fake news” as a conceptual frame is problematic on a number of levels (Oremus 2016; Sullivan 2017; Nielsen and Graves 2017). In and of itself, the spreading of false information under the pretense of truth is, of course, not a novel phenomenon, either online or in channels of communication more generally (Barnoux 1966; Mulford 2008). Moreover, there are numerous problems with defining the contours of the “fake news” phenomenon. Should it apply only to the outright fabrication of events and assertions about reality? Or does it also include the partial presentation of information or an unfair characterization of events?

A sharper framing of the nature of the purported threat is necessary to evaluate the case for modifying or eliminating CDA 230. This chapter focuses on three major developments that have driven the post-2016 discussion around online disinformation and its regulatory response: (1) the active spreading of disinformation by governments and state-owned media, (2) financially motivated actors pushing disinformation for the purposes of obtaining advertising revenue, and (3) the activities of online “troll” communities as a vector for spreading disinformation. It narrows in specifically on campaigns of political disinformation – false information targeted to shape perceptions around some aspect of political discourse – rather than efforts that spread inaccurate stories on other topics such as corporate acquisitions or celebrity deaths (Lee 2017).

As they have been less of a primary focus in the regulatory debates around what to do with “fake news,” this chapter also excludes some other types of falsity. It does not cover the inadvertent spread of false or misleading information through the Web that does not result from a coordinated effort. Similarly, this chapter focuses on activities primarily targeted at disseminating disinformation – distinguishing it from cases in which a campaign simply attempts to amplify a point of view or spread awareness of a fact.

There is no doubt these categories are by necessity blurry at the edges; a disinformation campaign may leverage an existing misconception spreading organically, or an effort to bring attention to a certain point of view may strategically frame the truth or even shade into falsehood. Issues frequently bleed across the fuzzy boundary between “political” and “nonpolitical” discourse, such as debunked theories around the dangers of vaccines promoted by “antivax” activists. However, through this rough framework, this section seeks to offer background context around some of the activities that have provided the impetus for recent calls for legislative and regulatory action on online disinformation.

Disinformation from State Actors

Perhaps the primary trigger for calls for a regulatory response to the challenges posed by disinformation threats online has been confirmation by the intelligence community that Russian state actors engaged in an active effort to shape discourse around the 2016 US presidential election (NCCIC 2016; National Intelligence Council 2017). The 2016 Russian campaign was a multifaceted effort aimed at undermining trust in targeted political figures. This included conspiracy theories such as “Pizzagate,” which spread the notion that Democratic nominee Hillary Clinton and Clinton campaign chairman John Podesta were members of an underground sex trafficking ring (Robb 2017). Beyond efforts to spread disinformation, the campaign also included attempts to exacerbate political polarization, in one case stoking controversy around race and law enforcement between activist Black Lives Matter and Blue Lives Matter groups (Alcindor 2017; Seetharaman 2017).

The campaign also operated through a range of different channels. State-owned media outlets such as Sputnik and Russia Today were leveraged to create and disseminate disinformation widely. These more obvious channels operated alongside more subtle “grassroots” infiltration of online communities and the purchase of targeted advertising across various social media platforms. Beyond the spread of disinformation, the campaign also engaged in hacking targeted at compromising private information held by political parties and candidates on both sides of the electoral race.

Although the 2016 Russian campaign has been a widely discussed example of state-driven online disinformation, the use of these techniques is not new. Researchers have tracked similar online disinformation campaigns launched by Russia to influence political discourse throughout Central and Eastern Europe, as well as the Middle East, in recent years (Chen 2015; Lange-Ionatamishvili, Svetoka, and Geers 2015; NPR 2017; Wintour 2017).

Nor are these campaigns specific to Russia. Researchers have found that social media has been leveraged for political disinformation in a range of contexts in recent years, including efforts in Mexico, Brazil, Canada, and China (Finley 2015; Robertson et al. 2016; Woolley and Howard 2017). Such campaigns have been launched by state actors, as in the Russian case, but also by a range of independent groups (Robertson et al. 2016).

Financial Incentives for Disinformation

Recognition that politically motivated actors engaged in efforts to influence the 2016 election has emerged alongside a growing number of commentators highlighting the financial incentives driving the creation and dissemination of disinformation. Online advertising, in particular, has been seen as a motivator to create false but highly shareable content that drives monetizable page views online.

In the context of the 2016 US presidential election, businesses both within the country and abroad engaged in the creation of sites spreading disinformation through the Web. These included outlets such as “The Denver Guardian,” which spread a range of conspiracy theories, including a story connecting Clinton to the murder of an FBI agent investigating her use of a private email server that was shared millions of times across Facebook (Coler 2016). While the site was designed with the appearance of a local Colorado paper, it was in actuality operated by a Los Angeles–based entrepreneur who also ran a collection of other sites profiting from the sharing of disinformation (Coler 2016).

Outside the United States, journalists have uncovered groups of entrepreneurs in Macedonia and elsewhere profiting by selling advertisements running alongside disinformation catering to right-wing readers online (Tynan 2016; Subramanian 2017). Stories included “news” of Pope Francis endorsing then-candidate Trump and fabricated reports of the candidate slapping a protestor at a campaign rally (Subramanian 2017). These sites sometimes acted as amplifiers rather than originators of disinformation, copying content from other sites and promoting it through swarms of fake accounts on social media platforms like Twitter and Facebook (Subramanian 2017).

Discussion around financial incentives for disinformation has not been limited to the outlets producing and promoting this content online. Since many of the most prominent online platforms such as Google and Facebook are themselves reliant on advertising, critics and researchers have also underscored that the companies hosting this activity may have perverse incentives to harbor it, given that disinformation content is often widely shared and viewed (Allcott and Gentzkow 2017; Thompson 2017). For their part, these platforms have disputed this notion in numerous public statements and have taken action to restrict the placement of advertising alongside disinformation (Wingfield, Isaac, and Benner 2016; Ling 2017).

“Trolling Culture” As a Disinformation Source

Beyond the activities of politically and financially motivated actors, the participation of grassroots online “troll” culture has also been a force in facilitating political disinformation. Crowd activity – often performed anonymously – to shock and harass private citizens, public figures, and institutions for pure entertainment purposes has been a long-standing feature of social behavior on the Internet (Coleman 2015). These activities have leveraged a wide array of tactics, from the manipulation of online polls to the strategic targeting of journalists and “swatting” – false emergency reports to law enforcement aimed at bringing police officers to a targeted address (North 2017). In recent years, many of these communities have been radicalized by far-right groups to “spread white supremacist thought, Islamophobia, and misogyny through irony and knowledge of internet culture,” as researchers Alice Marwick and Rebecca Lewis have documented (Marwick and Lewis 2017; Schreckinger 2017).

In the context of the 2016 US presidential election, many of these communities were involved in coordinated campaigns to spread political disinformation. This included promoting conspiracy theories that philanthropist George Soros was engaged in a nationwide campaign to fund protests against Trump and claims that Democratic National Committee (DNC) staffer Seth Rich was assassinated as part of a cover-up connected to the 2016 leak of emails from the DNC (Dreyfuss 2017). Effectively, these campaigns drew on the efforts of volunteers: a loosely coordinated, informal coalition of overlapping “alt-right” groups. This brought together a wide range of actors, including gamer communities, users of the popular online discussion board Reddit, members of the white supremacist community Stormfront, and “alt-light” news outlets echoing some of the messages of the far-right while excluding some of its more controversial views (Marwick and Lewis 2017, p. 26). These techniques drew explicitly on earlier “trolling” efforts. As Mike Cernovich, one prominent alt-right figure involved both in earlier campaigns against feminists in the video-game industry and in the 2016 election, put it, “troll tactics” were a means with which to “build [his] brand” (Marantz 2016).

The involvement of these communities in targeted campaigns of political disinformation highlights the important point that these three sources – state-run, financially driven, and trolling – do not operate independently. Instead, numerous ties link these engines of online disinformation into an ecosystem of overlapping, occasionally cooperating groups. Notably, state-run efforts coordinated by Russia leveraged paid agents who in turn worked to infiltrate and mobilize online communities to spread political disinformation (Kosoff 2017). State-run efforts also subsidize and support a variety of financially motivated media channels to spread “fake news” and disinformation through the Web (Belford, Cvetkovska, Sekulovska, and Dojčinović 2017). These groups also operate on their own, acting independently for their own reasons to engage in the distribution of disinformation.

The (Ambiguous) Impact of Online Disinformation

While all the activities discussed in this section are well documented, it is important to recognize that, at the time of writing, their actual influence over political outcomes remains empirically unclear. While some researchers have concluded that disinformation efforts did have an impact on the 2016 US presidential election, the issue remains a matter of scholarly debate (Howard and Kollanyi 2017; Kollanyi, Bradshaw, and Neudert 2017). Given the limited visibility into the operations of various disinformation activities and into data on overall political participation on social media and other platforms, this issue is likely to remain ambiguous for some time.

If they are indeed effective, the potential risks to democratic institutions and processes seem clear. The capability of foreign powers to effectively manipulate political discourse within a country raises difficult questions about the representativeness of elected officials and the decisions they make. To the extent that much of the disinformation seen during the 2016 US presidential campaign focused on exacerbating political conflict and cementing polarization, such activities might also erode the ability of democracies to act effectively as engines for compromise between segments of society (Epstein and Graham 2007). Yet evidence on this front is ambiguous. It is unclear that the Internet is in fact increasing polarization (Boxell, Gentzkow, and Shapiro 2017). Moreover, it is unclear whether a more partisan media writ large is in turn making the public more polarized (Prior 2013).

However, regardless of whether or not they are indeed effective, these politically targeted activities – and public knowledge about them – may still pose threats to the health of democratic processes. Disinformation campaigns might accelerate erosion in public trust of institutions seen as critical to the maintenance of democracy. First, skepticism around the veracity of online information generally might also limit the influence of journalistic channels producing and distributing accurate information (Barthel and Mitchell 2017). This may hinder the ability of democracies to engage in authentic, effective deliberation and arrive at decisions considered “legitimate.”Footnote 1

Second, regardless of actual effectiveness, a broadly held perception that these disinformation campaigns do indeed have an impact may itself create distrust in the legitimacy of elected officials, particularly those supported by foreign governments and interests. This has been the case in the aftermath of the 2016 campaign, with numerous congressional inquiries and an ongoing special investigation attesting to the continued concerns of policymakers and the public as a whole.

Though ambiguity still exists, these risks and others have encouraged a live debate as to the set of responses – regulatory or otherwise – needed to combat these activities and limit their potential influence on the media and information ecosystem. These proposals confront the framework of CDA 230.

Part 2: How Does CDA 230 Shape Efforts to Combat Online Political Disinformation?

Because of its potential threat to democratic processes and institutions, policymakers and scholars have begun to propose a range of legal and regulatory responses to online political disinformation. As is the case in other contexts, attention has turned toward the central role that online platforms play in hosting and facilitating the objectionable activity.Footnote 2 One recent examination of online media and sharing behavior during the 2016 election season concluded simply that “[d]isinformation and propaganda are rooted in partisanship and are more prevalent on social media” (Faris et al. 2017). Specifically, the study found that websites receiving a disproportionate amount of attention on Facebook were also cited by independent sources and media reporting as creators and distributors of “inaccurate if not blatantly false reporting” (Faris et al. 2017, p. 15).

Across a range of issues, these platforms serve as “least cost avoiders” – the actors best positioned to manage the risk from certain activities. In the online disinformation context, platforms possess highly granular data about the activity across their services and have the ability to influence the distribution of content. This might be done algorithmically, by modifying systems of recommendation that promote certain content over others, or financially, through the barring of ads supporting certain types of content online. In the case of large platforms such as Google and Facebook, the companies are also perceived to possess the resources and technical competence to create effective systems of detection and mitigation.Footnote 3

Without the cooperation of these platforms, proposals to address political disinformation confront a recurring, more general challenge with the enforcement of policy online, namely the difficulty of identifying and pursuing the particular actors that break rules (Lessig 1999). Enforcement of existing laws broken by perpetrators of online political disinformation, and of new laws that might be passed against these types of activities, will likely be limited by the costly requirements of identifying and pursuing a disparate and continually evolving ecosystem of disinformation perpetrators.

To the extent that legislative and regulatory action will seek to shape the incentives that online platforms have to combat online political disinformation, these efforts will take place in the shadow of CDA 230, which provides strong protections shielding platforms from liability for actions taken by their users. This provision does in part inhibit the effectiveness of existing and novel legal levers that would target online political disinformation.

Two routes remain by which legal action might combat these campaigns while leaving CDA 230 untouched. The first, building on the legal precedent set by the Ninth Circuit in the Roommates.com case, would leave courts to engage in line drawing around the degree to which platforms elicit illegal disinformation activity. The second would pursue legislation and regulation targeting the platforms themselves, focusing on changing the information environment surrounding online political disinformation rather than creating liability for the acts themselves, thereby avoiding the need to amend CDA 230.

A Brief History of CDA 230

Passed in 1996, CDA 230 provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” subject to a set of exceptions for criminal, intellectual property, state, and communication privacy laws.Footnote 4

This legislation was passed in response to Stratton Oakmont v. Prodigy Services, a 1995 decision that suggested that online service providers could be held liable for the defamatory content posted by users on their platforms to the extent that they exercised editorial control over that content.Footnote 5 This decision represented an application of established common law principles around the liability of distributors and publishers. Under that framework, “distributors” exercising limited editorial control over content they distributed – such as bookstores and libraries – only faced liability for defamation if they had knowledge of the content and failed to remove it. In contrast, “publishers” exercising more active editorial control and judgment – such as newspapers and magazines – were deemed to be liable for defamatory content as if they had originally published it, regardless of knowledge.Footnote 6 Stratton Oakmont raised concerns that platforms would be unsustainable if exposed to liability for the acts of any individual user and would be deterred from taking proactive action to filter for offensive content (Reidenberg et al. 2012, pp. 5–6).

To that end, the original impetus for CDA 230, as evidenced by its caption, was to protect platforms from liability for “Good Samaritan” acts to remove offensive content (Reidenberg et al. 2012, p. 7). However, Congress also had a range of other objectives in the passage of CDA 230, including an intent to “promote the continued development of the Internet and other interactive computer services and other interactive media” and “preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”Footnote 7

Over the subsequent two decades, courts reviewing CDA 230 interpreted the doctrine to shield online platforms from liability for a broad range of acts taken by their users.Footnote 8 In 1997, the Fourth Circuit in Zeran v. America Online concluded that CDA 230 worked to shield platforms from both traditional categories of publisher and distributor liability, rejecting an argument by the plaintiff that the provision only worked to block publisher liability.Footnote 9 In 2003, the Ninth Circuit in Batzel v. Smith concluded that the phrase “interactive computer services” in CDA 230 was not limited to services providing access to the Internet, as in Zeran and earlier cases, but also included “any information service or other systems” such as a listserv.Footnote 10 Later cases also confirmed that users who were independent of an online service provider could invoke the protection of CDA 230.Footnote 11 In 2008, the Fifth Circuit in Doe v. MySpace found that CDA 230 immunity applied broadly to tort claims, not just those premised on defamation as in the Stratton Oakmont decision.Footnote 12 This broad view of CDA 230 was followed a year later by the Ninth Circuit in Barnes v. Yahoo!.Footnote 13 In that case, the Ninth Circuit clarified that the scope of CDA 230 could apply beyond causes of action sounding in tort to include any cause of action that “inherently requires the court to treat the defendant as the ‘publisher or speaker’ of content provided by another.”Footnote 14

Together, these decisions have established CDA 230 as a broad shield for online intermediaries and influence the scope of available legal options in combating disinformation.

CDA 230 Shields Platforms from Acts of Online Disinformation Committed by Users

CDA 230 and its impacts have been controversial, and the provision has remained the target of recurring efforts to modify it in various ways (Reidenberg et al. 2012, pp. 46–49).Footnote 15 In the context of online political disinformation, CDA 230 conflicts with efforts to use existing causes of action, and to create new causes of action, that would hold platforms liable for the illegal actions of their users.

There are a number of potentially applicable causes of action. Online political disinformation is often false information about an individual, and as such might give rise to the tort of defamation or libel. Cases might include activities to spread conspiracy theories such as the sex trafficking “Pizzagate” rumor discussed in the section “Disinformation from State Actors” (Robb 2017). Consistent with the cases discussed in the section “A Brief History of CDA 230,” CDA 230 would prevent an online platform that hosted such defamatory content posted by a user from itself being held liable for defamation.Footnote 16

Online political disinformation might also violate a number of other laws that are less prototypical cases for CDA 230. For instance, under federal law foreign nationals are prohibited from “[m]aking any contribution or donation of money or other thing of value, or making any expenditure, independent expenditure, or disbursement in connection with any federal, state or local election in the United States.”Footnote 17 This would cover efforts by foreign actors to interfere in US elections through the purchase of advertising spreading falsehoods about a particular candidate. Courts reviewing the illegality of advertising in circumstances beyond the election context have generally refused to impose liability on the platforms that host this material, except in certain cases applying the holding in Roommates.com discussed in the section “Option One: Court-Driven Regulation via CDA 230.”Footnote 18 Accordingly, actions by agencies like the Federal Election Commission to enforce these laws against the platforms themselves would confront the limitations of CDA 230.Footnote 19

New causes of action would likely face similar barriers. In the wake of the 2016 presidential election, California state legislators have proposed a bill that would make it illegal “for a person to knowingly and willingly make, publish or circulate on an Internet Web site … a false or deceptive statement designed to influence the vote on … (a) Any issue submitted to voters at an election, (b) Any candidate for election to public office.”Footnote 20 Beyond the range of potential First Amendment challenges to laws attempting to make creating or spreading political disinformation illegal, CDA 230 would still work to inhibit the enforceability of those rules on platforms.Footnote 21

The impact of these limitations parallels long-standing critiques of CDA 230. Critics have argued since its passage that the shield provided by CDA 230 makes online platforms less responsive and proactive than they otherwise would be in dealing with defamatory content (Reidenberg et al. 2012, p. 26; Brown-Barbour 2015). Where information about the perpetrator of the defamation is scant, victims of defamation may be left without adequate routes for recovery.Footnote 22 Similar discussions have played out in the context of enforcing laws against harassment in cyberspace.Footnote 23 This challenge may be compounded where the perpetrators of this activity are operating outside of the United States, as was the case during the 2016 US presidential election.

It should be observed that CDA 230 operates only to preclude suits against the platform seeking to hold it liable as if it were the publisher of the defamatory content. Even if it did not, the fact that many acts of political disinformation target public figures of various kinds may mean that claims like defamation and libel are relatively weak legal tools to bring to bear. For instance, even without CDA 230, a successful suit by a public figure would need to meet the standard set under New York Times Co. v. Sullivan, which requires proof of “actual malice.” This is a challenging burden that requires plaintiffs to show “sufficient evidence to permit the conclusion that the defendant in fact entertained serious doubts as to the truth of his publication.”Footnote 24

Even in light of the limitations imposed by CDA 230, it is important to underscore that the law does not in effect bar all legal or regulatory interventions that would incentivize platforms to combat online political disinformation. Two routes provide a potential basis for changing the state of play around this issue.

Option One: Court-Driven Regulation via CDA 230

Since Zeran, courts have consistently found that CDA 230 provides broad protections against online platforms being held liable for the activities of their users. However, a set of cases suggests that, under certain circumstances, courts may be willing to narrow the scope of the shield provided by CDA 230.

Fair Housing Council of San Fernando Valley v. Roommates.com concerned a claim against a website that provided a service connecting prospective renters with open apartments and rooms.Footnote 25 The plaintiffs alleged that the platform violated the federal Fair Housing Act (FHA) by eliciting information about renters’ preferences around gender, sexual orientation, and family status. The suit alleged that, by publishing these preferences and allowing users to choose renters based on these criteria, the site violated FHA provisions prohibiting discrimination by landlords and tenants on this basis. Roommates.com invoked CDA 230, arguing that holding it liable would treat the platform as the party engaging in the discriminatory activity.

The Ninth Circuit – adopting a rationale parallel to that of the Seventh Circuit – held that Roommates.com did not receive immunity from CDA 230 since it played the role of an “information content provider” (Quist 2012). The court held that, by designing a website registration process that included questions around categories like gender and sexual orientation, and by providing a service that filtered based on these preferences, Roommates.com contributed “materially to the alleged illegality of the conduct.”Footnote 26

The end result of the holding in Roommates.com is that the specific design decisions made around a website can contribute to the determination of whether or not it can claim immunity under CDA 230. Notably, the Ninth Circuit rejected a claim by the plaintiffs that the platform should be liable for discriminatory posts made by users in an open-ended, optional “Additional Comments” text field on user profiles.Footnote 27 Since Roommates.com did not solicit a specific type of content in this text field, and published the comments as written, it was not a codeveloper of the content and therefore received CDA 230 immunity for those activities.Footnote 28

The holding in Roommates.com has been inconsistently applied across jurisdictions in the years following the decision, leading some scholars to suggest that the case is not settled law and in fact has a “checkered legacy” (Goldman 2017, p. 2). The Tenth Circuit in FTC v. Accusearch adopted the reasoning in Roommates.com in 2009, finding that a site that sold illegally acquired phone records was not entitled to CDA 230 immunity because it “specifically [encouraged] development of what [was] offensive about the content.”Footnote 29 In contrast, the application of Roommates.com by the First Circuit in Doe v. Backpage in 2016 allowed a site publishing ads for prostitution to obtain CDA 230 immunity, noting that an online platform’s decisions in “structur[ing] its website and posting requirements are publisher functions entitled to section 230(c)(1) protection.”Footnote 30 Such a line of reasoning would seem to significantly limit the holding in Roommates.com.Footnote 31 These differing applications of the rule raise doubts as to whether existing legal precedent will be a stable basis on which to combat online political disinformation (Feuerman 2016).

Yet, if it is applied, the Roommates.com holding suggests that – even absent a legislative modification – the immunities provided by CDA 230 might be effectively thinned by courts assessing whether or not activities associated with campaigns of political disinformation should create liability for online platforms. At issue will be the question of whether the specific design of the platform makes it “responsible, in whole or in part, for the creation or development of” the offending content.Footnote 32 While this inquiry will turn on the particular claim, the service in question, and the type of disinformation activity, the case provides some rough guidelines around what might not receive the benefit of CDA 230 immunity.

At the most basic level, under the Roommates.com holding it is unlikely that simply providing a space through which to engage in illegal acts around political disinformation will expose the platforms themselves to liability. Creating an open-ended box for posting content did not constitute codevelopment, enabling Roommates.com to obtain the benefit of CDA 230, at least for those elements of its platform. On this count, it is likely that simply making available the means of posting – even if leveraged by trolls, bots, and foreign agents to spread disinformation – will be a point on which platforms will be able to claim immunity.

However, the outcome is more ambiguous when considering other features common to web services. To the extent that disinformation campaigns operate through advertising channels provided by the platforms, a claim might be made that companies like Facebook and Google materially contribute to the illegality. Cases applying the Roommates.com holding have at times rejected the application of CDA 230 immunity in the advertising context. Merely profiting from advertisements that promote illegal services or are themselves illegal is insufficient to make the platform liable.Footnote 33 However, a pricing arrangement that encourages the activity at issue may qualify as a material contribution. Courts have articulated a few possible scenarios here, including offering discounts on the problematic advertising and variable pricing structures that increase the profit of the platform in proportion to the value and volume of the illegal activity.Footnote 34 Variable pricing is the reality online; most large platforms rely on auction-based systems for delivering advertising, with many buyers competing to deliver their content to a given user.Footnote 35 To the extent that it can be shown that platforms systematically offer advertising that violates the law at a lower rate, there is potentially a claim of contribution that would block the application of CDA 230 immunity.

Content is also typically curated, shaped, and personalized to users through an algorithmically generated “feed.”Footnote 36 This might be grounds for an argument of codevelopment, particularly when one takes into account the fact that – in response to interest in a single piece of defamatory content – the platform may recommend further defamatory content. In this sense, it parallels the search functionality in Roommates.com, in which the provision of a mechanism that highlighted content based on discriminatory criteria was seen to facilitate the illegal activity in a way that shed immunity under CDA 230.Footnote 37 Similar claims might also be made based on issues that have emerged in the advertising targeting context, in which the algorithmic generation of targeting criteria facilitates potentially illegal activity (Angwin, Varner, and Tobin 2017).

In spite of this, Roommates.com also provides ample opportunity to reframe the curation and cocreation of content in a feed in a more charitable light. Under the holding, a “website operator who edits user-created content … retains his immunity for any illegality in the user-created content, provided that the edits are unrelated to the illegality” – examples include editing for length, correcting for spelling, or removing obscenity.Footnote 38 Arguably platforms like Facebook do the same here, simply collaging and presenting the content as posted, as opposed to actually changing its meaning or otherwise “[contributing] to the alleged illegality.”Footnote 39

It is also not so clear that simply recommending additional defamatory content based on the interest of a user in defamatory content rises to the level of “codevelopment.” In Roommates.com, the court focused on the fact that the platform used impermissibly discriminatory criteria in filtering and delivering apartment listings to users. However, in an effort to distinguish this case from other platforms that might arguably perform the same function, the court observed that the use of a “neutral [tool] to carry out what may be unlawful or illicit searches” on an “ordinary search engine” does not expose that service itself to liability.Footnote 40 This is arguably the case here; a platform like Facebook does not explicitly solicit and then filter based on defamatory content. This sets the case apart from the design in Roommates.com, in which the platform presented a set of predefined categories that themselves were characteristics it was illegal to discriminate against. Instead, on a platform like Facebook, the user effectively “searches” on a neutral tool through their browsing behavior, and the feed returns more content responsive to that behavior, regardless of the specific topic.

As this brief analysis highlights, the end result of the Roommates.com holding is ambiguous and depends a great deal on the platform in question, a critique that was voiced by the dissent in that case.Footnote 41 However, it seems clear that under certain circumstances platforms might not enjoy the benefits of CDA 230 immunity, and there appears to be at least a colorable claim that this would allow liability to be applied to the platforms for at least some of the tactics used by those driving political disinformation campaigns.

Option Two: Legislative Actions Beyond Amending CDA 230

CDA 230 is simultaneously a broad and a narrow provision. It is broad in the sense that platforms are shielded from liability for a wide range of illegal acts that their users might perpetrate. At the same time, it is narrow in the sense that it does not preclude a wide range of actions that might address campaigns of political disinformation without applying user-level liability to the platform. Three recent proposals made by researchers and policymakers focusing on this issue provide examples of the types of activities not inhibited by the framework laid down by CDA 230 and the case law interpreting the provision.

First, CDA 230 does not preclude the imposition of transparency requirements that would mandate that platforms disclose information relevant to evaluating the credibility of information. These might be interventions at the level of the user – helping to inform consumers about the provenance and verification of content. This might manifest as rules locking in a set of standards around “dispute flags” that would appear alongside posts online to signal that a story has been contested by an approved fact-checking organization, like those being experimented with at the time of writing (Kafka 2017). It might also include more extensive disclosures to particular regulators or expert research groups who might then work to enforce rules and inform the public at large (Diresta and Harris 2017). This would formalize the more ad hoc data provided by Facebook and other companies about advertising activity during the 2017 congressional hearings on this issue (Seetharaman and Wells 2017; Hwang and Woolley 2017).

Second, beyond greater transparency about the data platforms have “on hand,” CDA 230 does not preclude measures mandating that platforms require greater disclosure from their users as well. Proposals on this front have focused on the processes around online advertising, seen to be one channel for political disinformation in the 2016 US presidential election. Researchers have proposed “know your customer” requirements for online advertisers paralleling similar rules imposed in the financial sector, as well as more stringent rules around the labeling of anonymous and automated accounts (Diresta and Harris 2017, p. 82). The Honest Ads Act – bipartisan legislation originally proposed in October 2017 but seeing little subsequent action – would require that large online platforms maintain a public file of all electioneering communications beyond a certain monetary threshold.Footnote 42 This file would include a copy of the advertisement, targeting data, and information about the purchaser of the advertisement.Footnote 43 Similar approaches outside the advertising context might attempt to prevent bots and astroturfing by mandating more stringent requirements on the creation of new user accounts and profiles on a service.

Third, CDA 230 does not preclude more dramatic interventions that would change the actual flow of information through platforms. As a means of limiting the influence of online platforms in shaping public discourse, policymakers have called for a form of “net neutrality” to apply to the content layer of the Web, such that platforms like “Facebook, Google, and Amazon – like ISPs – should be ‘neutral’ in their treatment of the flow of lawful information and commerce on their platforms” (Franken 2017). Other approaches might require that algorithms take into account certain machine-readable indicators of “credibility” in promoting and ranking information.Footnote 44

All three of these approaches operate within the structure of CDA 230, enabling policymakers to address the tactics used by political disinformation campaigns without necessarily applying liability for individual acts to the platforms. This does not mean, however, that such measures are immune from challenge on other grounds. Courts have affirmed in a number of cases that algorithmic outputs are an exercise of the First Amendment rights of the platforms themselves.Footnote 45

Regulation that would shape these outputs will thereby confront these constitutional protections. Interestingly, as Tim Wu (2013) has argued, First Amendment protections will not cover the algorithmic outputs of “functional” platforms whose “involvement with information is too distant or mechanical to be speech.” These are circumstances in which CDA 230 immunity will be most likely to apply under Roommates.com because these platforms do not “materially contribute” to the offending content. To that end, the doctrines are somewhat complementary: regulations that directly shape algorithmic output will be most likely to survive First Amendment challenge precisely in circumstances where the Roommates.com doctrine is likely to block attempts to impose liability on the platform.

Assuming an intervention met these constitutional requirements, these three approaches – alongside the exception articulated in the Roommates.com case – might enable action to be taken against these threats without a modification of the underlying law.

Conclusion: Some Routes Closed, Others Remain Open

Consistent with long-standing critiques of the provision on issues such as defamation and harassment, CDA 230 may provide perverse incentives for platforms to be less proactive in combating disinformation than would be preferable. It may also make online platforms less active in gathering and sharing information about perpetrators, in a way that may hinder efforts to pursue these actors directly (Reidenberg et al. 2012).

At the same time, CDA 230 does not function as an absolute bar to action in the space. Under the holding in Roommates.com, judicial action might serve to impose liability on platforms to the extent that their specific design rises to the level of “codevelopment,” which would make them complicit in the commission of illegal acts. Furthermore, since CDA 230 narrowly applies to claims that would treat the platform as a “publisher or speaker” of content, it does not conflict with legislative interventions that would place obligations on the platforms directly. Many of the proposals that would require greater transparency, user disclosure, and modifications to underlying content algorithms continue to be open options under the structure of CDA 230.

The question of whether or not to modify CDA 230 to contend with disinformation threats therefore depends on a careful weighing. At issue is whether or not these remaining options are sufficient to meet the threat posed by political disinformation – and, relatedly, the potential practicality, benefit, and cost of modifying CDA 230 to impose individual liability more directly on the platforms themselves.

Part 3: Should CDA 230 Be Modified to Address Political Disinformation?

As public discussion around the challenge posed by online disinformation continues, there have been an increasing number of voices advocating for modification or removal of CDA 230. One recent op-ed in the Financial Times characterized the provision as a “loophole,” arguing simply that platform operators “no longer deserve the sort of blanket exemptions from liabilities that companies in every other industry incur as a cost of doing business” (Foroohar 2017). The Economist characterized the provision as an “implicit subsidy” for online platforms, arguing that “giving platforms a free pass is increasingly difficult for regulators and courts: they simply have become too important for the economy and society more generally” (The Economist 2017).

The relationship between CDA 230 and efforts to combat disinformation is complex. This section seeks to assess the argument for modification or elimination of CDA 230 by answering the following questions. First, given the status quo, are the range of possible interventions sufficient to address the threat posed by campaigns of political disinformation? Second, what would be the potential positive and negative impacts produced by such a modification? Third, practically speaking, if one were to modify CDA 230, what modification would be appropriate to address the challenge posed by political disinformation?

Are Interventions Within the CDA 230 Framework Sufficient?

As discussed in “Part 2: How Does CDA 230 Shape Efforts to Combat Online Political Disinformation?,” CDA 230 does not function as a categorical block to potential legal interventions to address the challenges of political disinformation. Its impact is considerably more specific: It limits interventions that would serve to treat the platform as the publisher or speaker of an act, applying the liability of a given user to the platform as a whole. It does not hinder a range of potential legislative actions to mandate greater transparency from the platforms, enforce more robust disclosure on the part of users, or even modify the mechanics of how information is distributed by services like Facebook or Google. Nor does CDA 230 serve to block potential actions by courts using the precedent set in Roommates.com to selectively eliminate immunity. An immediate question is whether CDA 230 is merely a distraction: Are the tools available without modifying it sufficient by themselves to contend with modern campaigns of political disinformation?

One significant challenge to regulatory or court-driven action in this space is the speed at which online disinformation campaigns are evolving. Russian political disinformation tactics have continuously incorporated new techniques, and the 2016 intervention represents the culmination of years of previous experimentation in the space (Chivvis 2017). To that end, even if a given fix is successful in limiting the influence of these tactics in the short term, it may become rapidly obsolete as perpetrators change their approach. A robust solution will need to adapt quickly as the landscape of strategies evolves; and, in that respect, it is unclear if the options discussed in “Option One: Court-Driven Regulation via CDA 230” or “Option Two: Legislative Actions Beyond Amending CDA 230” will provide a sufficiently nimble response.

For example, laws that would mandate that platforms require greater disclosure of information from users and advertisers might be quickly rendered a dead letter as perpetrators of these campaigns find new ways to mask their identity. Particularly in the case of the sophisticated disinformation campaigns of greatest concern, such as those seen in the 2016 presidential election, perpetrators may have the means to mask their involvement through corporate shells and other aliases (Lapowsky 2017). We might expect in this case that a fixed set of reporting requirements would be easily evaded.

The current focus on advertising may also be too narrow. Although measures to block perpetrators of political disinformation from advertising platforms may limit easy access to powerful tools for targeting a given message, it is important to note that these campaigns can proceed even without access to paid promotion. Well-resourced campaigns may have access to the distribution capabilities of state-run media infrastructure, informal promotion relationships existing outside a platform’s advertising tools, and “grassroots” supporters willing to spread a given message.Footnote 46 Even narrower regimes focusing on limitations around political advertising in particular would miss efforts that seek to sow conflict outside the context of an election or campaign. The Facebook events staged by Russia to stoke conflict between Black Lives Matter and Blue Lives Matter groups, for instance, may not be activities hindered by “know your customer”–style laws.Footnote 47

Intervention by the courts to selectively apply CDA 230 immunity without amendment of the underlying language seems similarly fraught. Roommates.com and its progeny depend on a highly fact-specific inquiry that turns on the precise design of an online platform. This is likely to leave untouched a number of channels through which campaigns of political disinformation might flow and remain effective. Recall that in that decision the provision of an open-ended text box for posting content was sufficiently “hands-off” that the Roommates.com service was still able to obtain CDA 230 immunity for discriminatory content posted on that portion of the site. Extending such a rule to the political context would mean that platforms might, for instance, be granted immunity for activities occurring on their freeform posting features but not for algorithmic feeds where the platform plays a more active “codevelopment” role. The outcome might be that platforms take more proactive action to combat disinformation on some portions of their services than on others, leaving the overall problem unchecked.

Platforms may be left with considerable legal ambiguity as to the bounds of what may or may not incur intermediary liability, producing inconsistent implementation of policy across platforms. Beyond the fact-specific nature of decisions based on Roommates.com, courts have also applied the rule inconsistently across jurisdictions and have in some cases articulated rationales that would seem to reject the reasoning in that case.

A second difficulty is that much remains unknown about the societal impacts of political disinformation campaigns, which makes crafting an effective response in the near term a challenge. For instance, proposals have proliferated that would require better signaling to consumers about the quality and provenance of content they encounter on online platforms (Santa Clara University 2017). Yet it remains unclear whether labels indicating when a piece of content has been challenged by a fact-checking organization are indeed effective. One recent study suggests that simple repetition of “fake news” headlines is sufficient to increase user perceptions of accuracy, even when the headlines are labeled as false or disputed (Pennycook, Cannon, and Rand 2017). There is also evidence to suggest an “implied truth” effect, in which labeling “fake news” as such modestly reduces its perceived accuracy while perversely raising the perceived accuracy of disinformation that goes untagged (Pennycook and Rand 2017). In the political context, recent experimental results suggest that, even when a political leader’s own statements are exposed as disinformation, the impact on actual voting intention may be limited (Swire et al. 2017). Further study will be needed to craft a meaningful response to these threats.

Finally, it is unclear whether legislative and judicial bodies will have the technical competence to administer these interventions effectively. Adopting a Roommates.com–based approach requires courts to play the primary role in interrogating specific design decisions and evaluating the extent to which they contribute to illegal conduct. Given that even specialists in the industry admit that the risks and complexity of managing these systems drive them to be conservative in their attempts to solve disinformation challenges, generalist courts are unlikely to do much better.Footnote 48

Crafting a legislative intervention encounters similar hurdles. Many of the proposals discussed in “Option Two: Legislative Actions Beyond Amending CDA 230” suffer from insufficient coverage, making some techniques of disinformation more difficult while leaving others open for exploitation. One option for overcoming this limitation is to expand the scope of legislation to deal with disinformation more comprehensively by, for instance, prescribing certain algorithms that would take defined elements of information quality into account or mandating regular audits of algorithmic behavior. Yet increasing the scope simultaneously expands the level of complexity, requiring legislators to engage more fully with the technical design of platforms at a detailed level. As in the judicial case, it is unclear whether the legislative process will be able to do so effectively and in a timely manner.Footnote 49

What Are the Benefits and Potential Costs of Modification?

Modification of CDA 230 overcomes many of the deficiencies that are likely to hinder a regulatory or court-driven approach to the challenges posed by political disinformation. Importantly, exposing an “interactive computer service” to liability for illegal acts taken by users modifies the overall structure of incentives by shifting the burden to the platform. In essence, rather than specifying a detailed set of actions that should be taken to address disinformation, the government would set a priority about the activity to be minimized and delegate the decisions about how to achieve that end to the platform.

Such an arrangement avoids the challenges faced by the regulatory or court-driven approaches. Platforms are able to rapidly develop and implement measures to mitigate the impact of political disinformation. Importantly, they will be able to change tactics nimbly as the landscape of these campaigns continues to evolve, ensuring a more robust bulwark against these threats moving forward. Moreover, platforms are also best situated to assess the efficacy of certain proposed solutions and shed light on the current ambiguities about the impact and behavioral mechanisms underlying disinformation. The association of financial risk with failure to address the challenge would promote investment in this research work and empirically supported interventions based on it. Finally, modification of CDA 230 would shift responsibility to the actors with the technical expertise and deep understanding of the products necessary to develop a nuanced response to political disinformation.

That being said, the broad scope of CDA 230 means that modifications will create a range of downstream effects. While amendment or wholesale removal might limit the influence of political disinformation, it may produce a range of harms that will on net make such a change unwise.

First, applying user-level liability to platforms may render these services either insolvent or overly reactive in ways that harm freedom of expression. These are in some sense the “classic” arguments against creating exceptions within the CDA 230 framework (Reidenberg et al. 2012, pp. 35–37). On one hand, liability for the acts of any one of a large pool of users may threaten the financial viability of certain platforms (Reidenberg et al. 2012, p. 36). This is particularly the case given the broad scope of “interactive computer service” under legal precedent, covering everything from a small blog or listserv to the biggest platforms operated by companies like Google and Facebook.Footnote 50 Larger platforms will have the financial resources and legal expertise to absorb this risk, while smaller businesses and services run by volunteers may not be able to manage litigation based on the acts of their users. One effect may be to further accelerate and reinforce consolidation among the largest companies as less well-resourced platforms exit the market or merge with better-positioned competitors.

On the other hand, platforms may become overly reactive to even minute threats of legal liability, favoring removal of potentially offending content by default rather than making a considered evaluation of the risk (Reidenberg et al. 2012, p. 36). Insofar as modification of CDA 230 attempts to confront the challenge of disinformation, the incentive to minimize legal risk might prompt platforms to preemptively remove content that is true and valuable but likely to be a source of controversy. These takedowns may also disproportionately reflect the interests of those most willing and able to pursue legal claims against the platforms. Ironically, the creation of platform liability for disinformation may create another route by which to suppress true information.

Second, modification of CDA 230 may render platforms substantially less transparent and participatory than they would otherwise be. One concern motivating passage of CDA 230 in the 1990s was the notion that there was no practical means by which companies could effectively monitor the massive flows of user-generated content published through their services every day.Footnote 51 However, as others have pointed out, this limitation has become less daunting with time, as advances in machine learning and processing power have made it increasingly possible to monitor and moderate content at massive scale. Indeed, many such systems are used today to detect child pornography and intellectual property violations, areas of law that were exempted from the purview of CDA 230.

To the extent that modification of CDA 230 exposes platforms to liability around the disinformation activities perpetrated by their users, it is likely that the same automated, algorithmic approach will be deployed to maximize and accelerate identification and removal of offending content.Footnote 52 Adoption of these automated methods to deal with questions of truth, falsity, and information quality may hinder other, alternative models that leverage user and community participation to filter for these criteria (Grimmelmann 2015; Rogers 2017). Wikipedia, the collaboratively edited encyclopedia, has been relatively successful in resisting the influence of “fake news” through human moderation (Rogers 2017, pp. 359–362). Research indicates that users are driven by a sense of ownership and community identity to take an active role in disputing facts and eliminating falsehoods (Rogers 2017, p. 363). The presence of automated algorithms that moderate content can erode these motivations and inhibit the contributions of volunteers.Footnote 53 In that respect, automated systems are in tension with community-driven approaches to the problem of disinformation.
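To make this dynamic concrete, the following is a minimal, purely illustrative sketch in Python (using scikit-learn) of the kind of automated “classify and remove” pipeline a liability-exposed platform might adopt. The example posts, labels, removal threshold, and the moderate() helper are hypothetical placeholders introduced here for illustration; no actual platform system is being described. The point of the sketch is that a platform facing legal risk for under-removal has an incentive to push the threshold downward, which is precisely the over-removal and displacement of community judgment discussed above.

# Illustrative sketch only: a toy automated moderation pipeline.
# All data, labels, and the threshold below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples: 1 = previously identified disinformation, 0 = benign.
posts = [
    "Candidate X secretly funded by foreign agents, documents reveal",
    "City council approves new budget for road repairs",
    "Miracle cure suppressed by officials, share before it is deleted",
    "Local library extends weekend opening hours",
]
labels = [1, 0, 1, 0]

# Convert text to features and fit a simple classifier.
vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(posts), labels)

# A platform minimizing legal risk has an incentive to lower this threshold,
# removing more borderline (and potentially true) content by default.
REMOVAL_THRESHOLD = 0.5

def moderate(post: str) -> str:
    # Score the post and remove it if the predicted probability of
    # "disinformation" exceeds the threshold.
    score = classifier.predict_proba(vectorizer.transform([post]))[0][1]
    return "remove" if score >= REMOVAL_THRESHOLD else "keep"

print(moderate("Documents reveal secret foreign funding of a candidate"))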

There are qualities of community-driven filtration of disinformation that make it preferable to automated, algorithmic approaches. Algorithms are opaque, possessing hidden biases that can be difficult for users to ascertain. Similarly, algorithms are designed and maintained by the platform, effectively delegating decisions about truth and falsity to the company operating the service. In contrast, a filtration approach that leverages community debate and moderation can take place in a more transparent manner and leaves decisions about disinformation to the users. Amending CDA 230 may incentivize platforms to minimize risk by adopting the algorithmic approach, thereby squeezing out better, more participatory options.Footnote 54

Third, as Nicholas Bramble (2012) has written, the immunity provided under CDA 230 represents a regulatory strategy to avoid data enclosure and regulatory capture in information infrastructure. Specifically, CDA 230 – and the immunity provided to platforms under Section 512 of the Digital Millennium Copyright Act – positions online intermediaries as a balancing force against the influence of network providers such as Comcast and AT&T on one hand and the influence of content providers like Disney, Viacom, and the New York Times on the other (Bramble 2012, p. 364). By limiting platform exposure to user-level liability, “network providers and content owners are no longer the sole entities to determine under what conditions user access, participation and innovation shall take place within these [online] spaces” (Bramble 2012). These exceptions also arrange financial incentives in a manner that, at least theoretically, positions online intermediaries as advocates for the communicative interests of their users against these other actors.Footnote 55 Modifying CDA 230 would rewrite, perhaps undesirably, the balance of power among the interests able to shape the broader information infrastructure. This harm may outweigh the potential benefit gained by addressing the narrower threats posed by political disinformation.

Meeting the specific challenge of political disinformation with a wholesale repeal of CDA 230 is unwarranted given the broader negative impacts that may result. To that end, the central question is one of tailoring: Precisely what kind of acts should be targeted by the crafting of an exception to CDA 230? Will the attribution of existing causes of action to the platforms be sufficient, or will new causes of action be needed?

What Modifications Are Practicable?

Particularly in the context of regulating information falsehood and quality, crafting an appropriate exception to CDA 230 is challenging. For one, as legal scholar Cass Sunstein (1992) has pointed out, “we do not know what a well-functioning marketplace of ideas would look like.” Since it is difficult to specify an ideal end state with precision, it becomes similarly difficult to identify the set of incentives that society should impose on the platforms in addressing online disinformation.

For another, there is the difficult practical choice of ascertaining precisely which causes of action, when triggered by an individual user’s conduct, should expose the platform to liability. There are not many relevant laws that make the spreading of falsehoods illegal. Defamation is one obvious tort that could be given an exception under CDA 230, given that political disinformation often concerns a specific individual’s reputation or character. Yet, as discussed in “Part I: The Disinformation Challenge,” disinformation campaigns may seek to spread falsehoods about a far broader set of topics than those concerning an individual’s reputation, and the standard for proving defamation against public figures is particularly high (Brown Barbour 2015).

Other activities that have been associated with political disinformation campaigns in the past would potentially run afoul of a host of laws, including statutes against cyberbullying and the tort of intentional infliction of emotional distress.Footnote 56 Insofar as a disinformation campaign made an effort to acquire and leak information, it might also commit an invasion of privacy or violate a state right of publicity.Footnote 57 These are all claims that could create liability for the platform if excepted from CDA 230 and, in doing so, encourage those platforms to combat perpetrators of these campaigns. Yet while exceptions built around these collateral acts might indirectly hinder the efficacy of a disinformation effort, they would still fail to address the core challenge: media manipulation and the spread of propaganda.

This scarcity of causes of action against individuals who perpetuate disinformation should come as no surprise. The First Amendment heavily limits regulation of the truth, falsity, or quality of information as content-based restrictions on speech. The Constitution “demands that content-based restrictions on speech be presumed invalid.”Footnote 58 In 2012, the Supreme Court examined the specific question of laws against false statements in United States v. Alvarez.Footnote 59 At issue in that case was the Stolen Valor Act of 2005, which made false statements about decorations awarded by the armed forces punishable by a fine or imprisonment of up to six months.Footnote 60

In evaluating the constitutionality of that law, a plurality of the Court noted that “falsity alone may not suffice to bring the speech outside the First Amendment” and rejected the argument that a government “interest in truthful discourse alone [was] sufficient to sustain a ban on speech.”Footnote 61 The Court argued for battling disinformation with counter-speech, stating that “[t]he remedy for speech that is false is speech that is true. This is the ordinary course in a free society … [Society is] not well served when the government seeks to orchestrate public discussion through content-based mandates.”Footnote 62 Facing “the most exacting scrutiny,” laws that would punish the distribution of information on the basis of its falsity must pass a very high constitutional bar to be permissible.Footnote 63

One approach may be to avoid the question of attempting to regulate against falsehood, or even political falsehood, per se. As in Alvarez, “some false statements are inevitable if there is to be an open and vigorous expression of views in public and private conversation, expression the First Amendment seeks to guarantee.”Footnote 64 The threat posed by coordinated campaigns of disinformation like those seen in the 2016 US presidential election and elsewhere is not simply that inaccurate information is being spread in the political realm. The distribution of political falsehoods is a long-standing feature of the history of US democratic institutions.Footnote 65

What is unique is that disinformation is now being spread through means that by themselves erode trust in the outcomes of democratic processes and hinder effective discourse around policy. Part of this is the perceived – and potentially actual – advantage that tools such as bots, advertisement microtargeting, the financial resources of a foreign government, and other tactics confer on actors spreading a message and influencing the public. This advantage is independent of whether the information being spread is true or false, though in the immediate context it gives rise to the dramatic sense that society is currently “counter[ing] a firehose of falsehood with a squirt gun of truth” (Paul and Courtney 2016). Rectifying that imbalance of power means better equalizing the instrumentalities of discourse. Such a goal may be more tractable politically, legally, and intellectually than defining what the threshold of “truthiness” should be and delegating its interpretation and enforcement to private actors and the government.

In short, while it may be difficult to specify concretely what an “ideal” marketplace of ideas looks like, it is more straightforward to articulate and halt what might be considered methods of unfair competition in the marketplace of ideas. Amendments to CDA 230 should be directed toward this aim rather than the broader objective of encouraging platforms to eliminate political falsehoods from the Web writ large. Adopting the latter as a goal raises the risk of exceptions to CDA 230 that are far too broad and that, at the very least, might require the creation of new causes of action of dubious legality under the First Amendment.

This approach would create exceptions to CDA 230 for a number of existing laws and potential new regulations. Portions of the Federal Election Campaign Act (FECA) would be excepted from CDA 230 to block foreign interference in political discourse.Footnote 66 FECA prohibits foreign interests from engaging in election-related spending through advertising and other electioneering communications, as they did during the 2016 US presidential election. By excepting these rules from the immunity provided by CDA 230, platforms would be liable for these acts and face incentives to minimize or eliminate this activity from their systems.

A range of rules have been proposed that would limit “microtargeting,” the use of highly granular data to target messages to users in the elections context and beyond.Footnote 67 Some ideas include regulation that would require that data brokers specializing in the collection and distribution of user data provide citizens with the ability to access the dossier compiled about them and opt out of certain uses.Footnote 68 Another proposal would require provision of “comprehensive notices … [of] data processing practices” from those engaging in the collection of voter data.Footnote 69 If implemented, these rules might be subsequently reinforced by crafting an exception to CDA 230 that would align the incentives of the platforms with the enforcement of the law. This would increase the level of transparency available to the public around what kinds of targeting are in use and would block actors unwilling to provide that transparency from the platforms.

Limited fraud exceptions to CDA 230 might also be justified by the prevalence of unlabeled bots and paid agents purporting to be genuine users for the purposes of persuasion and mobilization. Such an exception would also work to align platforms with the objective of reducing or eliminating phony websites imitating or purporting to be local news outlets, as was seen in the 2016 election cycle (Coler 2016).

Which techniques constitute “unfair competition in the marketplace of ideas” is rightly a matter of public debate, and such an open-ended inquiry may raise many concerns about the wide range of liability platforms might face. At the very least, eliminating immunity for platforms that actively support the use of these techniques to influence public discourse seems warranted and less controversial. Scholars Danielle Keats Citron and Benjamin Wittes have proposed such an approach, suggesting an amendment that would explicitly deny the limitation of liability to “Bad Samaritan” websites and other content hosts that “purposefully encourage” a defined set of illegal acts (Citron and Wittes 2017). While their focus is on issues of sex trafficking and nonconsensual pornography, a similar approach might be taken to contend with the challenges of political disinformation.

Combating disinformation would likely require a piecemeal fine-tuning of CDA 230. The incentive to engage in campaigns of political disinformation is not eliminated simply by making these efforts more challenging, and the campaigns will continue evolving as they have in the past. Although exceptions to CDA 230 will likely make responses more agile than rigid regulatory or judicial prescriptions, these campaigns are likely to find new channels through which to operate. We might also expect the impact of these campaigns to change over time. For example, a public increasingly on guard against online disinformation might become less susceptible to persuasion by such campaigns in the future. At the same time, ever-improving techniques for fabricating believable fakes in video and other media may increase the persuasive capability of these tactics over time.Footnote 70 Enabling an open-ended evolution of these exceptions permits CDA 230 to adapt as our understanding of these techniques and the risks they pose changes over time.

Conclusion: The Twilight of the Crowd?

The rise of political disinformation, and the pervasiveness of disinformation more generally, represents an unexpected market failure in the figurative online marketplace of ideas. Much of the rhetoric in the early era of social media highlighted the extent to which the organic discourse of many participants – the much-vaunted “wisdom of the crowds” – would help to weed out false information and produce a multifaceted representation of “the truth” (Surowiecki 2005; Taraborelli 2012). This was also implicit in the ideology of those designing the platforms now seen as some of the greatest sources of disinformation. As Ev Williams, cofounder of Twitter, has stated, “I thought once everyone could speak freely and exchange information and ideas, the world is automatically going to be a better place … I was wrong about that” (Streitfeld 2017).

Early successes like Wikipedia did not generalize into a broader principle that crowds could effectively and reliably filter for truth and against falsity.Footnote 71 Regardless of its causal impact on voting behavior and political perceptions, the 2016 US presidential election cycle demonstrated at the very least that concerted efforts to spread disinformation can be wildly successful at getting shared online rather than quickly weeded out. Organic filtration by the wisdom of the crowds was less robust against deliberate manipulation than originally expected (Hwang 2017).

Efforts to amend or eliminate CDA 230 should be seen in the broader context of a retreat from open, participatory approaches to the problem of disinformation. In light of the perceived absence or weakness of a robust crowd, interventions have turned toward managing disinformation through legislative or judicial action or the judgments of private platform intermediaries. Making this shift can and should raise long-standing concerns about the influence and interests of platforms in regulating expression (Pasquale 2017). It also raises even longer-standing concerns about the role of government in regulating freedom of expression (Pasquale 2017, pp. 14–15).

However, the threat from political disinformation, particularly state-supported campaigns, continues to expand worldwide. Russian efforts leveraging these techniques continue to advance, and recent developments suggest that other nations like China are experimenting with the same playbook to advance their interests (Mozur 2017). In parallel, manipulation by far-right domestic actors continues to advance in the United States (Fang and Woodhouse 2017).

In light of this, well-calibrated modification of CDA 230 may go a long way in helping to give the public and civil society a fighting chance by encouraging platforms to stabilize and balance the marketplaces of ideas they own and operate. Of particular importance is the reduction or elimination of techniques of distribution that – regardless of the truth or falsity of the messages channeled through them – erode trust in public discourse and democratic processes.

Ultimately, the end goal should not be to fully delegate responsibility around the truth value of information to the government or to the platforms. Instead, the primary objective should be the encouragement of publics that are themselves robust against the ever-evolving nature of disinformation. If the wisdom of the crowds has been less robust than was expected a decade ago, it is in part because the online spaces in which they operate have failed to create the proper circumstances under which they could succeed. Fine-tuning the bounds of CDA 230 represents one step in realizing and revitalizing this original vision.

Footnotes

1 For a review of theories of democracy based on the role of deliberation, see generally Dryzek (2002); see also Arendt (1971).

2 For a preliminary review of the role that different platforms have played in distributing disinformation, see Feingold et al. (2017).

3 This perception is frequently bolstered by projects launched by the companies themselves. These include systems like YouTube’s Content ID (www.youtube.com/watch?time_continue=2&v=9g2U12SsRns), which automates detection of IP infringement, and technologies like Perspective (www.perspectiveapi.com/), a product that automates the detection of “toxic” comments online.

4 47 U.S.C. § 230.

5 See Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995).

6 Id.

7 47 U.S.C. § 230(a).

8 For a general review of leading cases in this space, see Goldman (2017).

9 Zeran v. American Online, Inc., 129 F.3d 32 (4th Cir. 1997).

10 Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003).

11 Barrett v. Rosenthal, 9 Cal.Rptr.3d 142 (Cal. Ct. App. 2004).

12 Doe v. MySpace, Inc. 528 F.3d 413 (5th Cir. 2008).

13 Barnes v. Yahoo!, 570 F.3d 1096 (9th Cir. 2009).

14 Id. at 1102.

15 See also Allow States and Victims to Fight Online Sex Trafficking Act of 2017, Pub. L. No. 115-164, 132 Stat. 1253 (2018).

16 See, e.g., Zeran v. American Online, Inc., 129 F.3d 32 (4th Cir. 1997).

17 52 U.S.C. § 30121. See also, 11 CFR 110.20.

18 See Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12 (1st Cir. 2016) (ads for prostitution); Chicago Lawyers’ Committee for Civil Rights under the Law v. Craigslist, 519 F.3d 666 (housing ads violating the Fair Housing Act).

19 See Federal Trade Commission v. Accusearch, Inc., 570 F.3d 1187 (10th Cir. 2009) (suits by an agency to enforce federal law still subject to CDA 230 analysis).

20 A.B. 1104, 2017–18 Cal. State Leg. (Cal. 2017).

21 See California A.B. 1104 Opposition Letter, Electronic Frontier Foundation (2017), www.eff.org/document/california-ab-1104-opposition-letter. This notes a number of First Amendment challenges to this type of legislation.

22 For a review of the large literature on this topic, see Reidenberg et al. (2012), pp. 29–31.

24 New York Times Co. v. Sullivan, 376 U.S. 254 (1964).

25 Fair Housing Council of San Fernando Valley v. Roommates.com, 521 F. 3d 1157.

26 Roommates.com, 521 F.3d at 1168.

27 Id. at 1173–1175.

28 Id. at 1175.

29 Federal Trade Commission v. Accusearch, Inc., 570 F.3d 1187, 1199 (10th Cir. 2009).

30 Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 22 (1st Cir. 2016).

31 It should be noted that legislation has been passed reversing the holding in Backpage. See Allow States and Victims to Fight Online Sex Trafficking Act of 2017, Pub. L. No. 115-164, 132 Stat. 1253 (2018).

32 Roommates.com, 521 F.3d at 1174.

33 See, e.g., Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 17 (1st Cir. 2016) (CDA 230 immunity exists even when platform specifically charges for advertisements promoting prostitution).

34 See Chicago Lawyers’ Committee for Civil Rights under the Law v. Craigslist, 519 F.3d 666, 672 (applying CDA 230 in part because platform did “not offer a lower price to people who include discriminatory statements in their postings”); NPS LLC v. StubHub, Inc., 25 Mass. L. Rptr. 478 (Super. Ct. 2009) (rejected CDA 230 immunity in part because the platform’s “revenue increased in direct proportion to the price of the ticket sold,” in contrast to a newspaper, which “generally charges a fixed price”).

35 See, e.g., Google’s “About the ad auction – AdSense Help,” (https://support.google.com/adsense/answer/160525?hl=en), which explains the Google auction system; see also Facebook’s Help Center (www.facebook.com/business/help/430291176997542), which explains the Facebook auction system.

36 See Facebook’s, “Welcome to News Feed,” (https://newsfeed.fb.com/?lang=en), which reviews how the system curates content for users.

38 Fair Housing Council of San Fernando Valley v. Roommates.com, 521 F. 3d 1157, 1169.

39 Id.

40 Id.

41 Id. at 1176–1189.

42 The Honest Ads Act, S. 1989, 115th Cong. (2017).

43 Id. at §8.

44 See, e.g., Mina (2017), which discusses one initiative to develop “credibility indicators.”

45 See, e.g., Search King, Inc. v. Google Tech., Inc., No. 02–1457, 2003 WL 21464568, at *4 (W.D. Okla. May 27, 2003). See also Volokh and Falk (2012), which reviews these cases in the context of search results.

46 See the section “Part I: The Disinformation Challenge” in this chapter.

47 For examples of these kinds of tactics, see Seetharaman (2017).

48 For an example of how “an understanding of the risks of machine learning (ML) drives small-c conservatism” at companies like Facebook, see Constine (2017).

49 Cf. Metz (2015), which describes the complex infrastructure for managing Google’s code.

50 See Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003).

51 See Zeran v. America Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997): “The amount of information communicated via interactive computer services is therefore staggering … It would be impossible for service providers to screen each of their millions of postings for possible problems.”

52 See, e.g., Fake News Challenge (www.fakenewschallenge.org/), a competition to build machine learning models to assist in the detection of “fake news.”

53 See Halfaker et al. (2013), which describes how automation can create perverse effects that reduce volunteer contributions over time in the context of Wikipedia.

54 This is not to dispute that there are circumstances in which automation and bots can work productively with community-driven models. See Geiger (2017).

55 See Bramble (2012), pp. 359–361. However, scholars contest the notion that a “balance of power” really exists in practice; see Pasquale and Bracha (2007).

57 State law rights to publicity have occasionally been cast as an intellectual property claim, with plaintiffs attempting to use the exception under 47 U.S.C. § 230(e)(2). See, e.g., Cross v. Facebook, CIV 537384, 2016 WL 7785723 (Cal. Super. Ct. May 31, 2016), aff’d in part and rev’d in part, No. A148623, 2017 WL 3404767 (Cal. Ct. App. Aug. 9, 2017).

58 Ashcroft v. American Civil Liberties Union, 542 U.S. 656, 660.

59 United States v. Alvarez, 567 U.S. 709.

60 See id., at 715.

61 id., at 719, 723.

62 id., at 727.

63 id., at 724 (citing Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622, 642).

64 id., at 718.

65 See, e.g., Meacham (2013), which details political lies during the 1800 US presidential campaign.

66 See 52 U.S.C. § 30121 and 11 C.F.R. § 110.20.

67 For an overview of some of the proposals around political advertisement microtargeting, see Rubinstein (2014).

68 See Rubinstein (2014), pp. 919–921; see also Executive Office of the President (2014), which advocates for approaches that give individuals the ability to “participate in the use and distribution of his or her information after it is collected.”

70 See, e.g., Matthias Niessner’s YouTube video “Face2Face: Real-time face capture and reenactment of RGB videos (CVPR 2016 Oral),” (www.youtube.com/watch?v=ohmajJTcpNk), demonstrating the use of machine learning to create believable simulations of political leaders speaking.

71 See, e.g., Tapscott and Williams (2010), predicting the broader application of the collaborative model used by Wikipedia.

References

Alcindor, Y. (2017). Black lawmakers pressure Facebook over racially divisive Russian ads. New York Times, September 28. www.nytimes.com/2017/09/28/us/politics/facebook-russia-race-congressional-black-caucus.html
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.
Angwin, J., Varner, M., & Tobin, A. (2017). Facebook enabled advertisers to reach “Jew haters.” ProPublica, September 14. www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters
Arendt, H. (1971). Lying in politics: Reflections on the Pentagon Papers. The New York Review of Books, November 18. www.nybooks.com/articles/1971/11/18/lying-in-politics-reflections-on-the-pentagon-pape/
Barnouw, E. (1966). A Tower in Babel. New York: Oxford University Press.
Barthel, M., & Mitchell, A. (2017). Democrats, Republicans Now Split on Support for Watchdog Role. Pew Research Center report. www.journalism.org/2017/05/10/democrats-republicans-now-split-on-support-for-watchdog-role
Belford, A., Cvetkovska, S., Sekulovska, B., & Dojčinović, S. (2017). Leaked documents show Russian, Serbian attempts to meddle in Macedonia. OCCRP, June 4. www.occrp.org/en/spooksandspin/leaked-documents-show-russian-serbian-attempts-to-meddle-in-macedonia/
Boxell, L., Gentzkow, M., & Shapiro, J. M. (2017). Greater Internet use is not associated with faster growth in political polarization among US demographic groups. Proceedings of the National Academy of Sciences, 114(40), 10612–10617.
Bramble, N. W. (2012). Safe harbors and the national information infrastructure. Hastings Law Journal, 64(2), 325–384.
Brown Barbour, V. S. (2015). Losing their license to libel: Revisiting § 230 immunity. Berkeley Technology Law Journal, 30(2), 1505–1560.
Chen, A. (2015). The Agency. New York Times, June 2. www.nytimes.com/2015/06/07/magazine/the-agency.html
Chivvis, C. S. (2017). Understanding Russian “Hybrid Warfare”: And What Can Be Done about It. RAND Corporation report. www.rand.org/pubs/testimonies/CT468.html
Citron, D., & Wittes, B. (2017). The Internet will not break: Denying bad Samaritans Section 230 immunity 17. SSRN. https://papers.ssrn.com/abstract=3007720
Coleman, G. (2015). Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. London: Verso.
Coler, J. (2016). We tracked down a fake-news creator in the suburbs. Here’s what we learned. NPR.org, November 23. www.npr.org/sections/alltechconsidered/2016/11/23/503146770/npr-finds-the-head-of-a-covert-fake-news-operation-in-the-suburbs
Constine, J. (2017). Facebook security chief rants about misguided “algorithm” backlash. TechCrunch, October 7. http://social.techcrunch.com/2017/10/07/alex-stamos/
Diresta, R., & Harris, T. (2017). Why Facebook and Twitter can’t be trusted to police themselves. POLITICO Magazine, November 1. http://politi.co/2zppJMA
Dreyfuss, B. (2017). Seth Rich, conspiracy theorists, and Russiagate “Truthers.” The Nation, August 25. www.thenation.com/article/seth-rich-conspiracy-theorists-and-russiagate-truthers/
Dryzek, J. S. (2002). Deliberative Democracy and Beyond: Liberals, Critics, Contestations (1st ed.). New York: Oxford University Press.
The Economist. (2017). Internet firms’ legal immunity is under threat. The Economist, February 11. www.economist.com/news/business/21716661-platforms-have-benefited-greatly-special-legal-and-regulatory-treatment-internet-firms
Epstein, D., & Graham, J. D. (2007). Polarized politics and policy consequences. Rand Corporation occasional paper. www.rand.org/content/dam/rand/pubs/occasional_papers/2007/RAND_OP197.pdf
Executive Office of the President. (2014). Big Data: Seizing Opportunities, Preserving Values. White House Washington report. https://obamawhitehouse.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf
Fang, L., & Woodhouse, L. A. (2017). How white nationalism became normal online. The Intercept, August 25. https://theintercept.com/2017/08/25/video-how-white-nationalism-became-normal-online/
Faris, R., Roberts, H., Etling, B. et al. (2017). Partisanship, Propaganda, and Disinformation: Online Media and the 2016 US Presidential Election. Cambridge: Berkman Klein Center.
Feingold, R., Herman, L., Aravind, A. et al. (2017). Fake News & Misinformation: The Role of the Nation’s Digital Newsstands Facebook, Google, Twitter, and Reddit. Stanford Law School Law and Policy Lab report. https://www-cdn.law.stanford.edu/wp-content/uploads/2017/10/Fake-News-Misinformation-FINAL-PDF.pdf
Feuerman, M. (2016). Court-side seats? The Communications Decency Act and the potential threat to StubHub and peer-to-peer marketplaces. Boston College Law Review, 57(1), 227–260.
Finley, K. (2015). Pro-government Twitter bots try to hush Mexican activists. Wired, August 23. www.wired.com/2015/08/pro-government-twitter-bots-try-hush-mexican-activists/
Foroohar, R. (2017). Facebook’s self-policing needs an update. Financial Times, September 10. www.ft.com/content/f5d04d7e-9481-11e7-a9e6-11d2f0ebb7f0
Franken, A. (2017). We must not let Big Tech threaten our security, freedoms and democracy. The Guardian, November 8. www.theguardian.com/commentisfree/2017/nov/08/big-tech-security-freedoms-democracy-al-franken
Geiger, R. S. (2017). Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture. Big Data & Society. https://doi.org/10.1177/2053951717730735
Goldman, E. (2017). The ten most important Section 230 rulings. SSRN. https://papers.ssrn.com/abstract=3025943
Grimmelmann, J. (2015). The virtues of moderation. Yale Journal of Law and Technology, 17(1), 42–109.
Halfaker, A., Geiger, R. S., & Morgan, J. T. (2013). The rise and decline of an open collaboration system: How Wikipedia’s reaction to popularity is causing its decline. American Behavioral Scientist, 57(5), 664–688.
Howard, P., & Kollanyi, B. (2017). Social media companies must respond to the sinister reality behind fake news. The Observer, September 30. www.theguardian.com/media/2017/sep/30/social-media-companies-fake-news-us-election
Hwang, T. (2017). The madness of the crowd. Logic Magazine, March 15. https://logicmag.io/01-the-madness-of-the-crowd/
Hwang, T., & Woolley, S. (2017). The most important lesson from the dust-up over Trump’s fake Twitter followers. Slate, June 2. www.slate.com/articles/technology/future_tense/2017/06/the_lesson_of_the_dust_up_over_trump_s_fake_twitter_followers.html
Jeong, S. (2015). The Internet of Garbage. New York: Vox Media.
Kafka, P. (2017). Facebook has started to flag fake news stories. Recode, March 4. www.recode.net/2017/3/4/14816254/facebook-fake-news-disputed-trump-snopes-politifact-seattle-tribune
Klein, D. O., & Wueller, J. R. (2017). Fake news: A legal perspective. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2958790
Kollanyi, B., Bradshaw, S., & Neudert, L.-M. (2017). Social media, news and political information during the US election: Was polarizing content concentrated in swing states? Project on Computational Propaganda Data Memo No. 2017.8. http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/09/Polarizing-Content-and-Swing-States.pdf
Kosoff, M. (2017). The Russian troll farm that weaponized Facebook had American boots on the ground. The Hive, October 18. www.vanityfair.com/news/2017/10/the-russian-troll-farm-that-weaponized-facebook-had-american-boots-on-the-ground
Lange-Ionatamishvili, E., Svetoka, S., & Geers, K. (2015). Strategic communication and social media in the Russia Ukraine conflict. In Geers, K. (Ed.), Cyber War in Perspective: Russian Aggression Against Ukraine (pp. 103–111). Tallinn: NATO CCDCOE Publications.
Lapowsky, I. (2017). Russia wouldn’t need Trump’s digital team to spread fake news. Wired, July 13. www.wired.com/story/russia-trump-targeting-fake-news/
Lee, T. B. (2017). Dow Jones posts fake story claiming Google was buying Apple. Ars Technica, October 10. https://arstechnica.com/tech-policy/2017/10/dow-jones-posts-fake-story-claiming-google-was-buying-apple/
Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.
Ling, J. (2017). Google chief says Google News will “engineer” Russian propaganda out of the feed. Motherboard, November 20. https://motherboard.vice.com/en_us/article/pa39vv/eric-schmidt-says-google-news-will-delist-rt-sputnik-russia-fake-news
Marantz, A. (2016). Trolls for Trump. The New Yorker, October 31. www.newyorker.com/magazine/2016/10/31/trolls-for-trump
Marwick, A., & Lewis, R. (2017). Media Manipulation and Disinformation Online. Data & Society Research Institute report.
Meacham, J. (2013). Thomas Jefferson: The Art of Power. New York: Random House.
Metz, C. (2015). Google is 2 billion lines of code – and it’s all in one place. Wired, September 16. www.wired.com/2015/09/google-2-billion-lines-codeand-one-place/
Mina, A. X. (2017). Knight Prototype Fund supports the Credibility Working Group. MisinfoCon, June 22. https://misinfocon.com/knight-prototype-fund-supports-the-credibility-working-group-c3dcc6667569
Mozur, P. (2017). China spreads propaganda to U.S. on Facebook, a platform it bans at home. New York Times, November 8. www.nytimes.com/2017/11/08/technology/china-facebook.html
Mulford, C. (2008). Benjamin Franklin’s savage eloquence: Hoaxes from the press at Passy, 1782. Proceedings of the American Philosophical Society, 152(4), 490–530.
National Intelligence Council. (2017). Assessing Russian Activities and Intentions in Recent US Elections. Office of the Director of National Intelligence report, ICA 2017-01D.
NCCIC (National Cybersecurity and Communications Integration Center). (2016). GRIZZLY STEPPE – Russian Malicious Cyber Activity. Department of Homeland Security report, Jar-16-20296a.
Nielsen, R. K., & Graves, L. (2017). What do ordinary people think fake news is? Poor journalism and political propaganda. Columbia Journalism Review, October 24. www.cjr.org/analysis/fake-news-study.php
North, A. (2017). When a SWAT team comes to your house. New York Times, July 6. www.nytimes.com/2017/07/06/opinion/swatting-fbi.html
NPR. (2017). Russian bots tweeting calls to fire McMaster, former FBI agent says. NPR.org, August 20. www.npr.org/2017/08/20/544817844/russian-bots-tweeting-calls-to-fire-mcmaster-former-fbi-agent-says
Oremus, W. (2016). Stop calling everything “fake news.” Slate, December 6. www.slate.com/articles/technology/technology/2016/12/stop_calling_everything_fake_news.html
Pasquale, F. (2017). The automated public sphere. SSRN. https://papers.ssrn.com/abstract=3067552
Pasquale, F., & Bracha, O. (2007). Federal Search Commission? Access, fairness and accountability in the law of search. SSRN. https://papers.ssrn.com/abstract=1002453
Paul, C., & Courtney, W. (2016). Russian propaganda is pervasive, and America is behind the power curve in countering it. Rand Corporation (blog), September 12. www.rand.org/blog/2016/09/russian-propaganda-is-pervasive-and-america-is-behind.html
Pennycook, G., Cannon, T. D., & Rand, D. G. (2017). Prior exposure increases perceived accuracy of fake news. SSRN. https://papers.ssrn.com/abstract=2958246
Pennycook, G., & Rand, D. (2017). Assessing the effect of “disputed” warnings and source salience on perceptions of fake news accuracy. SSRN. https://papers.ssrn.com/abstract=3035384
Prior, M. (2013). Media and political polarization. Annual Review of Political Science, 16, 101–127.
Quist, M. D. (2012). “Plumbing the depths” of the CDA: Weighing the competing Fourth and Seventh Circuit standards of ISP immunity under Section 230 of the Communications Decency Act. George Mason Law Review, 20(1), 275–309.
Reidenberg, J. R., Debelak, J., Kovnot, J., & Miao, T. (2012). Section 230 of the Communications Decency Act: A survey of the legal literature and reform proposals. SSRN. https://papers.ssrn.com/abstract=2046230
Robb, A. (2017). Pizzagate: Anatomy of a fake news scandal. Rolling Stone, November 16. www.rollingstone.com/politics/news/pizzagate-anatomy-of-a-fake-news-scandal-w511904
Robertson, J., Riley, M., & Willis, A. (2017). How to hack an election. Bloomberg Businessweek, March 31. www.bloomberg.com/features/2016-how-to-hack-an-election/
Rogers, J. (2017). Wikipedia and intermediary immunity: Supporting sturdy crowd systems for producing reliable information. Yale Law Journal Forum, 127, 358–372.
Rolling Stone. (2011). Musician started Bon Jovi death hoax. Rolling Stone, December 28. www.rollingstone.com/music/news/musician-started-bon-jovi-death-hoax-20111228
Rubinstein, I. (2014). Voter privacy in the age of big data. SSRN. https://papers.ssrn.com/abstract=2447956
Santa Clara University. (2017). Leading news outlets establish transparency standards to help readers identify trustworthy news sources. The Trust Project, November 16. www.scu.edu/ethics/focus-areas/journalism-ethics/programs/the-trust-project/trust-project-launches-indicators/
Schreckinger, B. (2017). World war meme. POLITICO Magazine, March/April. www.politico.com/magazine/story/2017/03/memes-4chan-trump-supporters-trolls-internet-214856
Seetharaman, D. (2017). Russian-backed Facebook accounts staged events around divisive issues. Wall Street Journal, October 30. www.wsj.com/articles/russian-backed-facebook-accounts-organized-events-on-all-sides-of-polarizing-issues-1509355801
Seetharaman, D., & Wells, G. (2017). Tech giants disclose Russian activity on eve of congressional appearance. Wall Street Journal, October 30. www.wsj.com/articles/facebook-estimates-126-million-people-saw-russian-backed-content-1509401546
Silverman, C., & Singer-Vine, J. (2016). Most Americans who see fake news believe it, new survey says. BuzzFeed, December 6. www.buzzfeed.com/craigsilverman/fake-news-survey
Streitfeld, D. (2017). “The Internet is broken”: @ev is trying to salvage it. New York Times, May 20. www.nytimes.com/2017/05/20/technology/evan-williams-medium-twitter-internet.html
Subramanian, S. (2017). Inside the Macedonian fake-news complex. Wired, February 15. www.wired.com/2017/02/veles-macedonia-fake-news/
Sullivan, M. (2017). It’s time to retire the tainted term “fake news.” Washington Post, January 8. www.washingtonpost.com/lifestyle/style/its-time-to-retire-the-tainted-term-fake-news/2017/01/06/a5a7516c-d375-11e6-945a-76f69a399dd5_story.html
Sunstein, C. R. (1992). Free speech now. University of Chicago Law Review, 59, 255–316.
Surowiecki, J. (2005). The Wisdom of Crowds. New York: Anchor Books.
Swire, B., Berinsky, A., Lewandowsky, S. et al. (2017). Processing political misinformation: Comprehending the Trump phenomenon. Royal Society Open Science. http://doi.org/10.1098/rsos.160802
Tapscott, D., & Williams, A. D. (2010). Wikinomics: How Mass Collaboration Changes Everything. New York: Penguin Group.
Taraborelli, D. (2012). Seven years after Nature, pilot study compares Wikipedia favorably to other encyclopedias in three languages. Wikimedia Foundation (blog), August 2. https://blog.wikimedia.org/2012/08/02/seven-years-after-nature-pilot-study-compares-wikipedia-favorably-to-other-encyclopedias-in-three-languages/
Thompson, N. (2017). Our minds have been hijacked by our phones. Tristan Harris wants to rescue them. Wired, July 26. www.wired.com/story/our-minds-have-been-hijacked-by-our-phones-tristan-harris-wants-to-rescue-them/
Tremble, C. (2017). Wild Westworld: Section 230 of the CDA and social networks’ use of machine-learning algorithms. Fordham Law Review, 86, 825.
Tynan, D. (2016). How Facebook powers money machines for obscure political “news” sites. The Guardian, August 24. www.theguardian.com/technology/2016/aug/24/facebook-clickbait-political-news-sites-us-election-trump
Volokh, E., & Falk, D. M. (2012). First Amendment protection for search engine search results. White Paper commissioned by Google. www.volokh.com/wp-content/uploads/2012/05/SearchEngineFirstAmendment.pdf
Wingfield, N., Isaac, M., & Benner, K. (2016). Google and Facebook take aim at fake news sites. New York Times, November 14. www.nytimes.com/2016/11/15/technology/google-will-ban-websites-that-host-fake-news-from-using-its-ad-service.html
Wintour, P. (2017). Russian hackers to blame for sparking Qatar crisis, FBI inquiry finds. The Guardian, June 7. www.theguardian.com/world/2017/jun/07/russian-hackers-qatar-crisis-fbi-inquiry-saudi-arabia-uae
Woolley, S. C., & Howard, P. N. (2017). Computational propaganda worldwide: Executive summary. Project on Computational Propaganda Working Paper No. 2017.11. http://comprop.oii.ox.ac.uk/publishing/working-papers/computational-propaganda-worldwide-executive-summary/
Wu, T. (2013). Machine speech. University of Pennsylvania Law Review, 161, 1495–1533.
