Inquiries into rural conditions are sometimes viewed as not worth pursuing in the first place. This chapter takes on “foundational myths,” or assumptions that make “rural” seem uncompelling as an investigative lens. The myth of rural hyper-simplicity suggests that rural challenges can be solved with easy, one-size-fits-all interventions, such as rural residents relocating. Conversely, the myth of rural hyper-complexity suggests that the rural United States is so vast and diverse that to pursue analyses beyond specific regions is an exercise in futility. Finally, the myth of rural immateriality suggests that “rural” is not an interesting or relevant lens, as other analytical frames (such as a population’s race or national origin) are more explanatory. The chapter addresses these myths by offering a definition of “rural” that incorporates intersectionality theory, highlights common challenges associated with population sparseness and remoteness from urban centers, and reveals rurality’s salience to law. This chapter introduces the concept of “urbanormativity,” or the treatment of rural conditions as deviant or incomprehensible. The chapter summarizes scholarly explorations of urbanormativity in rural sociology and in law-and-rurality scholarship. It seeks to establish that rural conditions are complex, but that they share common themes capable and worthy of being understood and addressed.
We often hear that there is no way out of the modern economic and political tensions that fall along geographic lines. The media regularly declares that rural America is dying and that rural voters are driven only by anger. This narrative of hopelessness centers on the role that markets have played in abandoning rural regions and populations. In Reviving Rural America, Ann M. Eisenberg analyzes the role that our society's laws and policies have played in the urban/rural divide to make the case for hope. She demonstrates how law and policy, as well as decision-makers acting on their own subjective values, have contributed to modern rural challenges. Each chapter debunks a common myth about rural people, places, and policies, helping reveal how we got to where we are now. Ultimately calling for our laws and policies to steward rural America holistically, as a collective resource for all, this book envisions an alternative, more resilient, and more just future.
Dean John Wade, who replaced the great torts scholar William Prosser on the Restatement (Second) of Torts, put the finishing touches on the defamation sections in 1977.1 Apple Computer had been founded a year before, and Microsoft two, but relatively few people owned computers yet. The twenty-four-hour news cycle was not yet a thing, and most Americans still trusted the press.2
The term “content moderation,” a holdover from the days of small bulletin-board discussion groups, is quite a bland way to describe an immensely powerful and consequential aspect of social governance. Today’s largest platforms make judgments on millions of pieces of content a day, with world-shaping consequences. And in the United States, they do so mostly unconstrained by legal requirements. One senses that “content moderation” – the preferred term in industry and in the policy community – is something of a euphemism for content regulation, a way to cope with the unease that attends the knowledge (1) that so much unchecked power has been vested in so few hands and (2) that the alternatives to this arrangement are so hard to glimpse.
This chapter addresses an underappreciated source of epistemic dysfunction in today’s media environment: true-but-unrepresentative information. Because media organizations are under tremendous competitive pressure to craft news that is in harmony with their audience’s preexisting beliefs, they have an incentive to accurately report on events and incidents that are selected, consciously or not, to support an impression that is exaggerated or ideologically convenient. Moreover, these organizations have to engage in this practice in order to survive in a hypercompetitive news environment.1
What is the role of “trusted communicators” in disseminating knowledge to the public? The trigger for this question, which is the topic of this set of chapters, is the widely shared belief that one of the most notable, and noted, consequences of the spread of the internet and social media is the collapse of sources of information that are broadly trusted across society, because the internet has eliminated the power of the traditional gatekeepers1 who identified and created trusted communicators for the public. Many commentators argue this is a troubling development because trusted communicators are needed for our society to create and maintain a common base of facts, accepted by the broader public, that is essential to a system of democratic self-governance. Absent such a common base or factual consensus, democratic politics will tend to collapse into polarized camps that cannot accept the possibility of electoral defeat (as they arguably have in recent years in the United States). I aim here to examine recent proposals to resurrect a set of trusted communicators and the gatekeeper function, and to critique them from both practical and theoretical perspectives. But before we can discuss possible “solutions” to the lack of gatekeepers and trusted communicators in the modern era, it is important to understand how those functions arose in the pre-internet era.
The laws of defamation and privacy are at once similar and dissimilar. Falsity is the hallmark of defamation – the sharing of untrue information that tends to harm the subject’s standing in their community. Truth is the hallmark of privacy – the disclosure of facts about an individual who would prefer those facts to be private. Publication of true information cannot be defamatory; spreading of false information cannot violate an individual’s privacy. Scholars of either field could surely add epicycles to that characterization – but it does useful work as a starting point of comparison.
The commercial market for local news in the United States has collapsed. Many communities lack a local paper. These “news deserts,” comprising about two-thirds of the country, have lost a range of benefits that local newspapers once provided. Foremost among these benefits was investigative reporting – local newspapers at one time played a primary role in investigating local government and commerce and then reporting the facts to the public. It is rare for someone else to pick up the slack when the newspaper disappears.
An entity – a landlord, a manufacturer, a phone company, a credit card company, an internet platform, a self-driving-car manufacturer – is making money off its customers’ activities. Some of those customers are using the entity’s services in ways that are criminal, tortious, or otherwise reprehensible. Should the entity be held responsible, legally or morally, for its role (however unintentional) in facilitating its customers’ activities? This question has famously been at the center of the debates about platform content moderation,1 but it can come up in other contexts as well.2
Coordinated campaigns of falsehoods are poisoning public discourse.1 Amidst a torrent of social-media conspiracy theories and lies – on topics as central to the nation’s wellbeing as elections and public health – scholars and jurists are turning their attention to the causes of this disinformation crisis and the potential solutions to it.
Current approaches to content moderation generally assume the continued dominance of “walled gardens”: social-media platforms that control who can use their services and how. Whether the discussion is about self-regulation, quasi-public regulation (e.g., Facebook’s Oversight Board), government regulation, tort law (including changes to Section 230), or antitrust enforcement, the assumption is that the future of social media will remain a matter of incrementally reforming a small group of giant, closed platforms. But, viewed from the perspective of the broader history of the internet, the dominance of closed platforms is an aberration. The internet initially grew around a set of open, decentralized applications, many of which remain central to its functioning today.
Political scientist and ethicist Russell Hardin observed that “trust depends on two quite different dimensions: the motivation of the potentially trusted person to attend to the truster’s interests and his or her competence to do so.”1 Our willingness to trust an actor thus generally turns on inductive reasoning: our perceptions of that actor’s motives and competence, based on our own experiences with that actor.2 Trust and distrust are also both episodic and comparative concepts, as whether we trust a particular actor depends in part on when we are asked – and to whom we are comparing them.3 And depending on our experience, distrust is sometimes wise: “[D]istrust is sometimes the only credible implication of the evidence. Indeed, distrust is sometimes not merely a rational assessment but it is also benign, in that it protects against harms rather than causing them.”4
Almost all platforms for user-generated content have written policies around what content they are and are not willing to host, even if these policies are not always public. Even platforms explicitly designed to host adult content, such as OnlyFans,1 have community guidelines. Of course, different platforms’ content policies can differ widely in multiple regards. Platforms differ on everything from what content they do and do not allow, to how vigorously they enforce their rules, to the mechanisms for enforcement itself. Nevertheless, nearly all platforms have two sets of content criteria: one set of rules setting a minimum floor for what content the platform is willing to host at all, and a more rigorous set of rules defining standards for advertising content. Many social-media platforms also have additional criteria for what content they will actively recommend to users that differ from their more general standards of what content they are willing to host at all.
In February 2021, the Australian federal government enacted the “News Media and Digital Platforms Mandatory Bargaining Code,” which requires Facebook and Google to pay domestic news outlets for linking to their websites. It was a first-of-its-kind mechanism for redistributing revenue from Big Tech platforms to legacy journalism, and it has attracted global attention from policymakers looking to halt the internet-fueled decline of the traditional news industry. Thus, the success or failure of what critics call Australia’s “link tax” has significant implications for the future of both the World Wide Web and the news industry writ large.