Historically, local newsgatherers played a key democracy-enhancing role by keeping their communities informed about local events and holding local elected officials to account. As the market for local news has evaporated, more and more cities have become “news deserts.” Meanwhile, fewer national legacy news providers can afford to invest in the processes and expertise needed to produce high-quality news about our increasingly complex world. The true crisis of press legitimacy is the declining cultural investment in the systematic gathering of high-quality news produced by independent, transparent, and trustworthy sources.
Although scholars usually point to a handful of cultural and economic factors as undermining news quality and press credibility, various critics now identify a more covert culprit: the US Supreme Court. The Court is partly to blame for the press’s declining credibility, these critics claim, because the Court’s First Amendment decisions hinder the ability of state defamation law to hold the press accountable for defamatory falsehoods. The implication is that the press would regain much of its credibility if the Court would remove these constitutional barriers – especially the requirement that public officials and public figures demonstrate “actual malice” on the part of the press for a defamation claim to prevail. Nonetheless, as this chapter explains, the current landscape of high-profile defamation cases, and the public reaction to them, casts doubt on whether things could be so easy.
After the first 24–48 hours, a health emergency enters the maintenance phase, during which health officials provide maintenance messages that offer deeper risk explanations, promote interventions, continue to make commitments to the community, and address rumors and misinformation. Because emergencies often remain in the maintenance phase for an extended period, it is imperative that emergency risk communicators provide clear, coordinated, and consistent messages about the health risks. By communicating credible, accurate, and actionable health information, a health agency can demonstrate the Crisis and Emergency Risk Communication (CERC) principles of Be First, Be Right, Be Credible, Show Respect, Express Empathy, and Promote Action. The chapter provides practical steps for writing maintenance messages and outlines quick-response communication planning and implementation steps, such as identifying communication objectives, audiences, key messages, and channels and developing communication products/materials. It also includes key tips related to spokespeople, partner agencies, and call centers to ensure message consistency during the response. The rumor management framework is highlighted. A student case study analyzes the Mpox outbreak in Louisiana using the CERC framework. Reflection questions are included at the end of the chapter.
Neil Levy’s book Bad Beliefs defends a prima facie attractive approach to social epistemic policy – namely, an environmental approach, which prioritises the curation of a truth-conducive information environment above the inculcation of individual critical thinking abilities and epistemic virtues. However, Levy’s defence of this approach is grounded in a surprising and provocative claim about the rationality of deference. His claim is that it’s rational for people to unquestioningly defer to putative authorities, because these authorities hold expert status. As friends of the environmental approach, we try to show why it will be better for that approach not to be argumentatively grounded in this revisionist claim about when and why deference is rational. We identify both theoretical and practical problems to which this claim gives rise.
While peacekeeping operations have always been heavily dependent on host-state support and international political backing, changes in the global geopolitical and technological landscapes have presented new forms of state interference intended to influence, undermine, and impair the activities of missions on the ground. Emerging parallel security actors, notably the Wagner Group, have cast themselves as directly or implicitly in competition with the security guarantee provided by peacekeepers, while the proliferation of mis- and disinformation and growing cybersecurity vulnerabilities present novel challenges for missions’ relationships with host states and populations, operational security, and the protection of staff and their local sources. Together, these trends undermine missions’ efforts to protect civilians, operate safely, and implement long-term political settlements. This essay analyzes these trends and the dilemmas they present for in-country UN officials attempting to induce respect for international norms and implement their mandates. It describes nascent strategies taken by missions to maintain their impartiality, communicate effectively, and preserve the trust of those they are charged with protecting, and highlights early good practices for monitoring and analyzing this new operational environment, for reporting on and promoting human rights, and for operating safely.
Behind the black boxes of algorithms promoting or adding friction to posts, technical design decisions made to affect behavior, and institutions stood up to make decisions about content online, it can be easy to lose track of the heteromation involved: the humans spreading disinformation and, on the other side, moderating it or choosing not to. This is aptly illustrated by the spread of misinformation on WhatsApp during Brazil’s 2018 general elections. Since WhatsApp runs on a peer-to-peer architecture, there was no algorithm curating content according to the characteristics or demographics of users, which is how filter bubbles work on Facebook. Instead, a human infrastructure was assembled to create a pro-Bolsonaro environment on WhatsApp and spread misinformation to bolster his candidacy. In this paper, we articulate the labor executed by this human infrastructure of misinformation as heteromation.
The spread of false and misleading information, hate speech, and harassment on WhatsApp has generated concern about elections, been implicated in ethnic violence, and been linked to other disastrous events across the globe. On WhatsApp, we see the activation of what is known as hidden virality, the phenomenon whereby unvetted, insular discourse on encrypted, private platforms takes on a character of truth and remains mostly unnoticed until it causes real-world harm. In this book chapter, we discuss what factors contribute to the activation of hidden virality on WhatsApp while answering the following questions: 1) To what extent and how do WhatsApp’s sociotechnical affordances encourage the sharing of mis- and disinformation on the platform, and 2) How do WhatsApp’s users perceive and deal with mis- and disinformation daily? Our findings indicate that WhatsApp’s affordance of perceived privacy actively encourages the spread of false and offensive content on the platform, especially because users cannot report inappropriate content anonymously. Groups in which such content is prominent are tightly controlled by administrators who typically hold dominant cultural positions (e.g., they are senior and male). Users who feel hurt by false and offensive content must personally ask administrators for its removal. This is not an easy job, as it requires users to challenge dominant cultural norms, causing them stress and anxiety. Users would rather have WhatsApp take on the burden of moderating problematic content. We close the chapter by situating our findings in relation to cultural and economic power dynamics, bringing attention to the fact that unless WhatsApp takes action to reduce and prevent the real-world harm of hidden virality, its affordances of widespread accessibility and encryption will keep serving its market advantages while the burden of moderating content falls on minoritized users.
This chapter focuses on how it is possible to develop and retain false beliefs even when the relevant information we receive is not itself misleading or inaccurate. In common usage, the term misinformed refers to someone who holds false beliefs, and the most obvious source of false beliefs is inaccurate information. In some cases, however, false beliefs arise, not from inaccurate or misleading information, but rather from cognitive biases that influence the way that information is interpreted and recalled. Other cognitive biases limit the ability of new and accurate information to correct existing misconceptions. We begin the chapter by examining the role of cognitive biases and heuristics in creating misconceptions, taking as our context misconceptions commonly observed during the COVID-19 pandemic. We then explain why accurate information does not always or necessarily correct misconceptions, and in certain situations can even entrench false beliefs. Throughout the chapter, we outline strategies that information designers can use to reduce the possibility that false beliefs arise from, and persist in the face of, accurate information.
The time lag between when research is completed and when it is used in clinical practice can be as long as two decades. This chapter considers the dissemination and implementation of research findings and explores better ways to make those findings understood and used. On the one hand, we recognize the need to get new research into practice as soon as possible. On the other hand, we challenge the trend toward rapid implementation: when results are put into practice prematurely, patients may suffer unnecessary consequences of insufficiently evaluated interventions. We offer several examples of Nobel Prize-winning interventions that had unintended harmful effects unknown at the time the prize was awarded. To address these problems, we support the need for greater transparency in reporting study results, open access to clinical research data, and the application of statistical tools such as forest plots and funnel plots that might reveal data irregularities.
Reading or writing online user reviews of places like a restaurant or a hair salon is a common information practice. Through its Local Guides Platform, Google calls on users to add reviews of places directly to Google Maps, as well as to edit store hours and report fake reviews. Based on a case study of the platform, this chapter examines the governance structures that delineate the role Local Guides play in regulating the Google Maps information ecosystem and how the platform frames useful information vs. bad information. We track how the Local Guides Platform constructs a community of insiders who make Google Maps better, in opposition to the misinformation that the platform positions as an exterior threat infiltrating Google Maps’ universally beneficial global mapping project. Framing our analysis through Kuo and Marwick’s critique of the dominant misinformation paradigm, one often based on hegemonic ideals of truth and authenticity, we argue that review and moderation practices on Local Guides further standardize constructions of misinformation as the product of a small group of outlier bad actors in an otherwise convivial information ecosystem. Instead, we consider how the platform’s governance of crowdsourced moderation, paired with Google Maps’ project of creating a single, universal map, helps to homogenize narratives of space that then further normalize the limited scope of Google’s misinformation paradigm.
Democratic backsliding, the slow erosion of institutions, processes, and norms, has become more pronounced in many nations. Most scholars point to the role of parties, leaders, and institutional changes, along with the pursuit of voters through what Daniel Ziblatt has characterized as alliances with more extremist party surrogate organizations. Although insightful, the institutionalist literature offers little reflection about the growing role of social technologies in organizing and mobilizing extremist networks in ways that present many challenges to traditional party gatekeeping, institutional integrity, and other democratic principles. We present a more integrated framework that explains how digitally networked publics interact with more traditional party surrogates and electoral processes to bring once-scattered extremist factions into conservative parties. When increasingly reactionary parties gain power, they may push both institutions and communication processes in illiberal directions. We develop a model of communication as networked organization to explain how Donald Trump and the Make America Great Again (MAGA) movement rapidly transformed the Republican Party in the United States, and we point to parallel developments in other nations.
Public opinion surveys are vital for informing democratic decision-making, but responding to rapidly changing information environments and measuring beliefs within hard-to-reach communities can be challenging for traditional survey methods. This paper introduces a crowdsourced adaptive survey methodology (CSAS) that unites advances in natural language processing and adaptive algorithms to produce surveys that evolve with participant input. The CSAS method converts open-ended text provided by participants into survey items and applies a multi-armed bandit algorithm to determine which questions should be prioritized in the survey. The method’s adaptive nature allows new survey questions to be explored and imposes minimal costs in survey length. Applications in the domains of misinformation, issue salience, and local politics showcase CSAS’s ability to identify topics that might otherwise escape the notice of survey researchers. I conclude by highlighting CSAS’s potential to bridge conceptual gaps between researchers and participants in survey research.
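To make the item-prioritization step concrete, the sketch below shows how a Beta-Bernoulli Thompson-sampling bandit, one common multi-armed bandit strategy, could decide which participant-generated items to ask next. It is a minimal illustration under assumed details: the class name, the binary “informative response” reward signal, and the placeholder item texts are inventions for this example rather than specifics of the CSAS implementation.

```python
# Illustrative sketch (not the CSAS codebase): Thompson sampling over a
# growing pool of participant-generated survey items.
import random


class ItemBandit:
    """Beta-Bernoulli Thompson sampling over candidate survey items."""

    def __init__(self):
        self.alpha = {}  # per-item successes + 1 (uniform Beta(1, 1) prior)
        self.beta = {}   # per-item failures + 1

    def add_item(self, item_text):
        # Newly generated items enter the pool with the uniform prior.
        self.alpha.setdefault(item_text, 1.0)
        self.beta.setdefault(item_text, 1.0)

    def choose(self, k=2):
        # Draw a plausible value for each item from its posterior; ask the top k.
        draws = {item: random.betavariate(self.alpha[item], self.beta[item])
                 for item in self.alpha}
        return sorted(draws, key=draws.get, reverse=True)[:k]

    def update(self, item_text, reward):
        # reward = 1 if the shown item produced an informative response, else 0
        # (this reward definition is an assumption made for illustration).
        self.alpha[item_text] += reward
        self.beta[item_text] += 1 - reward


# Hypothetical round: open-ended responses already converted into items.
bandit = ItemBandit()
for text in ["Placeholder claim A", "Placeholder claim B", "Placeholder claim C"]:
    bandit.add_item(text)

for item in bandit.choose(k=2):
    bandit.update(item, reward=random.choice([0, 1]))  # simulated respondent signal
```

Because items that accumulate weak evidence are sampled less and less often, the pool can keep growing with new participant suggestions while the cost in survey length stays small, which is the trade-off the abstract describes.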
The study of dis/misinformation is currently in vogue, yet there is much ambiguity about what the problem precisely is and much confusion about the key concepts brought to bear on it. My aim in this paper is twofold. First, I will attempt to precisify the (dis/mis)information problem, roughly construing it as anything that undermines the “epistemic aim of information.” Second, I will use this precisification to provide a new grounded account of dis/misinformation. To achieve the latter, I will critically engage with three of the more popular accounts of dis/misinformation: (a) harm-based, (b) misleading-based, and (c) ignorance-based accounts. Each engagement will lead to further refinement of these key concepts, ultimately paving the way for my own account. Finally, I offer my own information hazard-based account, which distinguishes between misinformation as content, misinformation as activity, and disinformation as activity. By introducing this distinction between content and activity, I show that my account rests on firmer conceptual and ontological grounds, overcoming many of the difficulties that have plagued previous accounts, especially the problem of the proper place of intentionality in understanding dis/misinformation. This promises to add clarity to dis/misinformation research and to prove more useful in practice.
Who should decide what passes for disinformation in a liberal democracy? During the COVID-19 pandemic, a committee set up by the Dutch Ministry of Health was actively blocking disinformation. The committee comprised civil servants, communication experts, public health experts, and representatives of commercial online platforms such as Facebook, Twitter, and LinkedIn. To a large extent, vaccine hesitancy was attributed to disinformation, defined as misinformation (or data misinterpreted) with harmful intent. In this study, the question is answered by reflecting on what is needed for us to honor public reason: reasonableness, the willingness to engage in public discourse properly, and trust in the institutions of liberal democracy.
Despite its transformative and progressive 2010 Constitution, Kenya is still grappling with a hybrid democracy, displaying both authoritarian and democratic traits. Scholars attribute this status to several factors, a prominent one being the domination of the political order and the wielding of political power by a few individuals and families with historical ties to patronage networks and informal power structures. Persisting issues of electoral fraud, widespread corruption, media harassment, weak rule of law, and governance challenges further contribute to the hybrid democracy status. While the 2010 Constitution aims to restructure the state and enhance democratic institutions, the transition process remains incomplete, especially since the judiciary, in exercising judicial review, is frequently faced with the difficult task of countering democratic regression.
It is frequently argued that false and misleading claims, spread primarily on social media, are a serious problem in need of urgent response. Current strategies to address the problem – relying on fact-checks, source labeling, limits on the visibility of certain claims, and, ultimately, content removals – face two serious shortcomings: they are ineffective and biased. Consequently, it is reasonable to want to seek alternatives. This paper provides one: to address the problems with misinformation, social media platforms should abandon third-party fact-checks and rely instead on user-driven prediction markets. This solution is likely less biased and more effective than currently implemented alternatives and, therefore, constitutes a superior way of tackling misinformation.
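For readers unfamiliar with how such a market could operate mechanically, the sketch below implements one standard automated market maker, Hanson’s logarithmic market scoring rule, for a single binary “is this claim accurate?” market. The liquidity parameter, class name, and trading interface are assumptions made for illustration; the argument summarized above does not tie itself to this particular market design.

```python
# Illustrative sketch: a logarithmic market scoring rule (LMSR) market maker
# for a single contested claim with outcomes "true" and "false".
import math


class BinaryClaimMarket:
    """LMSR automated market maker over one claim."""

    def __init__(self, liquidity=50.0):
        self.b = liquidity  # higher b: deeper market, prices move more slowly
        self.shares = {"true": 0.0, "false": 0.0}

    def _cost(self, shares):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)).
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares.values()))

    def price(self, outcome):
        # The price of an outcome share doubles as the market probability of that outcome.
        exp_q = {k: math.exp(q / self.b) for k, q in self.shares.items()}
        return exp_q[outcome] / sum(exp_q.values())

    def buy(self, outcome, quantity):
        # A trade costs the change in the cost function it induces.
        cost_before = self._cost(self.shares)
        self.shares[outcome] += quantity
        return self._cost(self.shares) - cost_before


# A user who doubts the claim buys "false" shares, moving the price.
market = BinaryClaimMarket(liquidity=50.0)
paid = market.buy("false", 30.0)
print(f"paid {paid:.2f}; market probability the claim is true: {market.price('true'):.2f}")
```

The running price of the “true” share can be read as a crowd-sourced probability that the claim is accurate, which is the kind of signal a platform could surface in place of a third-party fact-check label.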
The structure of society is heavily dependent upon its means of producing and distributing information. As its methods of communication change, so does a society. In Europe, for example, the invention of the printing press created what we now call the public sphere. The public sphere, in turn, facilitated the appearance of ‘public opinion’, which made possible wholly new forms of politics and governance, including the democracies we treasure today.
State responses to the recent ‘crisis’ caused by misinformation in social media have mainly aimed to impose liability on those who facilitate its dissemination. Internet companies, especially large platforms, have deployed numerous techniques, measures and instruments to address the phenomenon. However, little has been done to assess the importance of who originates disinformation and, in particular, whether some originators of misinformation are acting contrary to their preexisting obligations to the public. My view is that it would be wrong to attribute only to social media a central or exclusive role in the new disinformation crisis that impacts the information ecosystem.
The 2024 presidential election in the USA demonstrates, with unmistakable clarity, that disinformation (intentionally false information) and misinformation (unintentionally false information disseminated in good faith) pose a real and growing existential threat to democratic self-government in the United States – and elsewhere too. Powered by social media outlets like Facebook (Meta) and Twitter (X), it is now possible to propagate empirically false information to a vast potential audience at virtually no cost. Coupled with the use of highly sophisticated algorithms that carefully target the recipients of disinformation and misinformation, voter manipulation is easier to accomplish than ever before – and frighteningly effective to boot.
The issue of mass disinformation on the Internet is a long-standing concern for policymakers, legislators, academics and the wider public. Disinformation is believed to have had a significant impact on the outcome of the 2016 US presidential election. Concern about the threat of foreign – mainly Russian – interference in the democratic process is also growing. The COVID-19 pandemic, which reached global proportions in 2020, gave new impetus to the spread of disinformation, which even put lives at risk. The problem is real and serious enough to force all parties concerned to reassess the previous European understanding of the proper regulation of freedom of expression.
The ‘marketplace of ideas’ metaphor tends to dominate US discourse about the First Amendment and free speech more generally. The metaphor is often deployed to argue that the remedy for harmful speech ought to be counterspeech, not censorship; listeners are to be trusted to sort the wheat from the chaff. This deep skepticism about the regulation of even harmful speech in the USA raises several follow-on questions, including: How will trustworthy sources of information fare in the marketplace of ideas? And how will participants know whom to trust? Both questions implicate non-regulatory, civil-society responses to mis- and disinformation.