Storytelling is an everyday information behavior that, when it goes wrong, can propagate misinformation. From accurate data to misinformed stories, what goes wrong in the process? This chapter examines the dynamics of storytelling in misinformation during the COVID-19 pandemic through three widely circulated problematic stories. Storytelling offers a framework for researching collective experiences of information as a process that is inherently based in communities, with knowledge commons that are instantiated, temporarily or permanently, by the telling and retelling of stories. To understand how difficult information is to govern in story form and through storytelling dynamics, this chapter uses storytelling theory to explore three recent cases of COVID-19 misinformation related to medicine misuse, the exploitation of vaccine hesitancy, and the aftermath of medical racism. Understanding what goes wrong with these stories may be key to public health communications that engage effectively with communities' everyday misinformation challenges.
Information hazing is the use of information to directly and indirectly harass and/or exclude newcomers. It is common in spaces with strong social cohesion where the dominant group is wary of accepting individuals who differ from the group. The tech industry and its pipeline, computer science education, are two places where the lack of diverse and varied voices has led to numerous social harms. We collected 30 syllabi from CS1 courses across the US to explore how these courses' governing documents, their syllabi, curate the computer science education knowledge commons. Our evaluation highlights areas of policy, research, and student perspectives that are out of alignment with both academic practice and industry standards. Requirements stemming from the expectation of independent assessment within the academic environment clash with the common practice of open information and collaboration in the academic integrity policies of many computer science courses. These competing priorities create opportunities for undue harm and fertile ground for the spread of misinformation, disinformation, and malinformation. These are usually the unanticipated consequences of policies written in good faith, but they still exhibit the toxic, stressful, and isolating impacts of hazing.
Misinformation has shown itself in recent years to be an incredibly complicated and thorny societal problem. While most academic work on the subject looks at the largest and scariest examples of misinformation, it largely ignores the fundamental reality that to misinform is to be an imperfect person, also known as a human. In this Introduction to the edited volume, Governing Misinformation in Everyday Knowledge Commons, the editors briefly explain the three scholarly traditions (The Everyday, Misinformation, and Governing Knowledge Commons) that form the nexus of each chapter in this tome. We also present several illustrative examples to highlight the thesis of this work: that misinformation is incredibly common, and only by addressing the context and risk around it can we create nuanced and culturally specific solutions.
Misinformation is ubiquitous in everyday life and exists on a spectrum from innocuous to harmful. Communities manage issues of credibility, trust, and information quality continuously, so as to mitigate the impact of misinformation when possible and to evolve social norms and intentional governance that delineate between problematic disinformation and little white lies. Such coproduction of governance and (mis-)information raises a complex set of ethical, economic, political, social, and technological questions that requires systematic study and careful deliberation. The Conclusion discusses key themes across the chapters in this volume, as well as connections to emergent themes from other books in this series, considering implications for future research, everyday life, and the governing knowledge commons framework.
Social media platforms such as Twitter can be considered knowledge commons, as a community of users creates and shares information through them. Although popular, Twitter is not free of problems, especially the mis/disinformation that is rampant on social media. A better understanding of how users manage day-to-day issues on social media is needed because it can help identify strategies and tools to tackle the issue. This study investigated the actions and preferences of users who found mis/disinformation problematic on Twitter. Focusing on the action arena of knowledge commons, this study explored what participants did to manage problems, what they thought others should do, and which groups they thought should take responsibility. Four hundred responses were collected through an online survey. The top actions taken by participants were unfollowing, fact-checking, and muting. The participants wanted Twitter, Inc. to ban problematic users and to provide better tools to help filter and report issues. They viewed Twitter and individual users, especially influencers, as the groups most responsible for managing Twitter problems. Differences in actions and preferences by gender and frequency of Twitter use were found. Implications for policies, system design, and research are discussed.
Lay people are often misinformed about what makes a password secure, the various types of security threats to passwords or password-protected resources, and the risks of compromising practices such as password reuse and required password expiration. Expert knowledge about password security has evolved considerably over time, but on many points, research supports general agreement among experts about best practices. Remarkably, though perhaps not surprisingly, there is a sizable gap between what experts agree on and what lay people believe and do. The knowledge gap might exist and persist because of intermediaries, namely professionals and practitioners as well as technological interfaces such as password meters and composition rules. In this chapter, we identify knowledge commons governance dilemmas that arise within and between different communities (expert, professional, lay) and examine implications for other everyday misinformation problems.
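As a hedged illustration of the expert-lay gap this abstract describes (not drawn from the chapter itself), the sketch below contrasts a classic composition-rule check, the kind many interfaces still enforce, with a crude length-and-character-pool entropy estimate closer to the reasoning behind current expert guidance, which favors length over mandatory character classes. All function names and thresholds here are illustrative assumptions.

```python
import math
import string

def passes_composition_rules(password: str) -> bool:
    """Classic composition-rule check (length 8+ with upper, lower, digit, symbol),
    the style many password interfaces still enforce."""
    return (
        len(password) >= 8
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

def estimated_entropy_bits(password: str) -> float:
    """Very rough strength estimate: length * log2(character pool size).
    Real-world meters (e.g., zxcvbn) also model dictionaries and common patterns."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

# "P@ssw0rd!" satisfies the composition rules yet follows a well-known weak pattern,
# while a long lowercase passphrase fails the rules but has far more raw entropy.
for pw in ["P@ssw0rd!", "correct horse battery staple"]:
    print(pw, passes_composition_rules(pw), round(estimated_entropy_bits(pw), 1))
```

The contrast mirrors the intermediary problem the chapter raises: an interface built around outdated rules can itself misinform lay users about which passwords are actually strong.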
The spread of false and misleading information, hate speech, and harassment on WhatsApp has generated concern about elections, been implicated in ethnic violence, and been linked to other disastrous events across the globe. On WhatsApp, we see the activation of what is known as the phenomenon of hidden virality, which characterizes how unvetted, insular discourse on encrypted, private platforms takes on a character of truth and remains mostly unnoticed until causing real-world harm. In this book chapter, we discuss what factors contribute to the activation of hidden virality on WhatsApp while answering the following questions: 1) To what extent and how do WhatsApp's sociotechnical affordances encourage the sharing of mis- and disinformation on the platform, and 2) How do WhatsApp's users perceive and deal with mis- and disinformation daily? Our findings indicate that WhatsApp's affordance of perceived privacy actively encourages the spread of false and offensive content on the platform, especially because users cannot report inappropriate content anonymously. Groups in which such content is prominent are tightly controlled by administrators who typically hold dominant cultural positions (e.g., they are senior and male). Users who feel hurt by false and offensive content must personally ask administrators for its removal. This is not an easy task, as it requires users to challenge dominant cultural norms, causing them stress and anxiety. Users would rather have WhatsApp take on the burden of moderating problematic content. We close the chapter by situating our findings in relation to cultural and economic power dynamics. We bring attention to the fact that if WhatsApp does not start to take action to reduce and prevent the real-world harm of hidden virality, its affordances of widespread accessibility and encryption will keep promoting its market advantages, while the burden of moderating content continues to fall on minoritized users.
This chapter focuses on how it is possible to develop and retain false beliefs even when the relevant information we receive is not itself misleading or inaccurate. In common usage, the term misinformed refers to someone who holds false beliefs, and the most obvious source of false beliefs is inaccurate information. In some cases, however, false beliefs arise, not from inaccurate or misleading information, but rather from cognitive biases that influence the way that information is interpreted and recalled. Other cognitive biases limit the ability of new and accurate information to correct existing misconceptions. We begin the chapter by examining the role of cognitive biases and heuristics in creating misconceptions, taking as our context misconceptions commonly observed during the COVID-19 pandemic. We then explain why accurate information does not always or necessarily correct misconceptions, and in certain situations can even entrench false beliefs. Throughout the chapter, we outline strategies that information designers can use to reduce the possibility that false beliefs arise from, and persist in the face of, accurate information.
Reading or writing online user reviews of places like a restaurant or a hair salon is a common information practice. Through its Local Guides Platform, Google calls on users to add reviews of places directly to Google Maps, as well as to edit store hours and report fake reviews. Based on a case study of the platform, this chapter examines the governance structures that delineate the role Local Guides play in regulating the Google Maps information ecosystem and how the platform frames useful information versus bad information. We track how the Local Guides Platform constructs a community of insiders who make Google Maps better, in contrast to the misinformation that the platform positions as an exterior threat infiltrating Google Maps' universally beneficial global mapping project. We frame our analysis through Kuo and Marwick's critique of the dominant misinformation paradigm, one often based on hegemonic ideals of truth and authenticity. We argue that review and moderation practices on Local Guides further standardize constructions of misinformation as the product of a small group of outlier bad actors in an otherwise convivial information ecosystem. Instead, we consider how the platform's governance of crowdsourced moderation, paired with Google Maps' project of creating a single, universal map, helps to homogenize narratives of space that then further normalize the limited scope of Google's misinformation paradigm.
This chapter explores patterns of misinformation in online conspiracy theory information worlds such as QAnon, as seen in publicly accessible Facebook pages. In particular, it examines issues related to expertise, authority, and gatekeeping; overt acts of misreading; the concept of connecting the dots, or seeing patterns and interconnections between often widely disparate phenomena; and interpretations of the symbolism in everyday life phenomena such as gestures, corporate logos, and celebrity photographs. It argues that, through its ubiquity and persistence, conspiracy theory misinformation functions as a persistent yet de-centered everyday activity, melding deeply serious interpretive acts with elements of gaming and sense-making.
Governing Misinformation in Everyday Knowledge Commons delves into the complex issue of misinformation in our daily lives. The book synthesizes three scholarly traditions (everyday life, misinformation, and governing knowledge commons) to present 10 case studies of online and offline communities tackling diverse dilemmas regarding truth and information quality. The book highlights how communities manage issues of credibility, trust, and information quality continuously, to mitigate the impact of misinformation when possible. It also explores how social norms and intentional governance evolve to distinguish between problematic disinformation and little white lies. Through a coproduction of governance and (mis-)information, the book raises a set of ethical, economic, political, social, and technological questions that require systematic study and careful deliberation. This title is also available as Open Access on Cambridge Core.