
Part III - Ensuring the Conditions of Agency

Published online by Cambridge University Press:  17 May 2021

Alan Rubel
Affiliation:
University of Wisconsin, Madison
Clinton Castro
Affiliation:
Florida International University
Adam Pham
Affiliation:
California Institute of Technology

Algorithms and Autonomy: The Ethics of Automated Decision Systems, pp. 97–134
Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

5 Freedom, Agency, and Information Technology

So far, we have offered arguments about what we owe to each other, and we have supported those arguments in part by relating them to some of the ways algorithmic systems and technologies can conflict with our autonomy. One of our key tasks has been to examine the various complaints and criticisms of algorithms and other information technologies in terms of wrongs rather than harms. To this end, we have argued that these systems can create circumstances that people cannot reasonably endorse and that they can deprive people of information they are owed.

In Chapter 4, we discussed two forms of agency that go beyond the mere manifestation of intentional action and its natural causes.Footnote 1 One sort is practical agency, which involves making decisions, formulating plans, and executing strategies. The other is cognitive agency, which involves exercising evaluative control over our mental attitudes, in the form of personal consideration of our beliefs and values.Footnote 2 Autonomy, meanwhile, involves something above mere agency: self-government. Our view is that autonomy requires both procedural independence (requiring that an agent be competent and that their beliefs, preferences, desires, and values be authentic) and substantive independence (requiring that an agent be supported by their social and relational circumstances).

Agency is a broader concept than autonomy, in the sense that it is possible for one to act, decide, or plan without those actions or attitudes exemplifying the relevant sorts of self-government. However, we will not take a strong position here on the metaphysical conditions distinguishing autonomy from agency, because our focus is on both cases of diminished agency and diminished autonomy. Our argument in this chapter is that both sorts of case result in a shortfall of freedom, properly understood.

We argue in Section 5.1 that freedom has two fundamental conditions: that persons be undominated by others and that they have an adequate degree of autonomy and agency. We then argue in Section 5.2 that algorithmic systems can threaten both the domination-based and the agency-based requirements, either by facilitating domination or by exploiting weaknesses in human agency. We explicate these threats as three sorts of challenges to freedom. The first are “affective challenges,” which involve the role of affective, nonconscious processes (such as fear, anger, and addiction) in human behavior and decision-making. These processes, we argue, interfere with our procedural independence, thereby threatening persons’ freedom by undermining autonomy. The second type of challenge is what we call “deliberative challenges.” These involve strategic exploitation of the fact that human cognition and decision-making are limited. These challenges also relate to our procedural independence, but they do not so much interfere with it as exploit its natural limits. A third sort of challenge, which we describe as “social challenges,” involves toxic social and relational environments. These threaten our substantive independence and, thus, our freedom. In Section 5.3, we sketch a policy agenda aimed at combating these challenges and promoting the conditions of freedom.

We have two main goals in this chapter. One is to extend our analysis to the affective, deliberative, and social challenges that algorithmic systems pose. The other is to relate our overall project to the notion of freedom. Our understandings of freedom and autonomy are closely linked, and it would be possible to consider affective, deliberative, and social challenges of algorithmic systems solely in light of autonomy. However, such an analysis would fail to connect algorithmic systems to a good that is on many views basic. Exploring these issues from the vantage point of freedom allows us to make that connection. In Chapter 6 we will draw on this conception in our discussion of epistemic paternalism.

5.1 Freedom as Undominated Self-government
5.1.1 The Forms of Freedom

The concept of freedom is often discussed in terms of one’s “negative” freedom; that is, in terms of noninterference, nonaggression, noncoercion, or in general, the absence of external constraints on one’s ability to act.Footnote 3 A person enjoys negative freedom when they are not being actively interfered with or coerced. Negative freedom is often considered primary, for a couple of reasons. One is that limitations on negative freedom are intuitively obvious; if a person is prohibited by law or physically prevented from taking an action, their freedom has been restricted. Another reason is that some shortfalls in negative freedom are especially restrictive, which makes negative freedom appear to be the most important form of freedom. For instance, if one has been kidnapped or forced at gunpoint to hand over one’s possessions, one’s current political freedoms are of secondary concern.

Yet, looking beyond the obvious importance of negative freedom to human life, the value of noninterference does not swamp the value of every other form of freedom one could value. Freedom must be understood as extending beyond the mere absence of interference in two ways. To see why, consider two cases. First, consider the case of a woman living under an oppressive political regime in which she is permitted to vote only with her husband’s permission. Supposing that her husband does, in fact, allow her to vote, she would have negative freedom. After all, no one has coerced her, interfered with her, or physically prevented her from voting. But her husband could have chosen otherwise and could have interfered. Depending on someone else’s approval to vote is a deeper sort of unfreedom lying underneath her thin, negative freedom to vote. Second, consider the case of a person who does not depend on anyone else’s approval to vote, but who requires a wheelchair for mobility. If there are stairs that they must navigate to access their polling place, and there are no ramps or alternative means of casting a ballot, they are not free to vote, despite their ample negative freedom (no one coerces them, prohibits them from voting, or physically blocks their access to the polls). They are unfree because they cannot effectively do what they wish to do, namely vote.Footnote 4

One conception of freedom that extends beyond negative freedom, corresponding to the case of the first woman, is called “republican freedom.” Beyond mere noninterference, republican freedom requires non-domination, which is to say, the absence of arbitrary power or domination. On this conception, even the possibility of interference counts as unfreedom. Noninterference that occurs at the whim of a benevolent ruler does not count as genuine freedom. For republican freedom, it is not enough that there is no interference. Rather, as Philip Pettit puts it, interference must be “robustly” absent, meaning that interference is “absent over variations in how far others are hostile or friendly.”Footnote 5 For the first woman to be free, her husband’s benevolence cannot play a role in her ability to vote; she must be able to do it without needing his permission.

The republican theory of freedom does not emphasize that the fullest sort of freedom requires having not just choices, but autonomous choices. Pettit, for instance, argues that his account “does not require that resources […] must be present over variations in your personal skills, the natural environment, or the structure of society,” because “[i]n order to choose freely between the options in a certain choice, it is enough that you actually have such assets available.”Footnote 6 The republican theory holds people to account for the choices they make, no matter why they made them or what challenges they faced leading up to the choice. Yet, if the affective and deliberative challenges to human agency are real, their effects must operate in part through their influence on our choices. To fully understand our freedom, then, we must take into consideration not just our choices, but how those choices get made.

Another conception of freedom that is distinct from negative freedom, corresponding to the case of the second voter, is “positive freedom.” It involves not only the absence of external constraints, but also the ability to effectively carry out one’s interests. While negative freedom is defined in terms of (and, to some extent, presupposes) the capacity for autonomy,Footnote 7 positive freedom requires the ongoing exercise of self-government. Republican freedom, moreover, cannot by itself guarantee positive freedom. This is because the republican account is primarily oriented toward the alleviation of subjugation, or “defenseless susceptibility to interference,”Footnote 8 rather than toward the positive development of persons’ talents and capacities. It thereby glosses over the fact that one might be undominated by all others but nonetheless have interests that one cannot pursue, because one lacks the capacities to effectuate one’s desires or realize one’s interests. For the second voter to be free, they must not only be undominated by all others; their polling station must also be accessible, allowing them actually to vote rather than merely hold the (empty) right to do so.

“Unfreedom” in the positive sense can be tricky to diagnose or discern, in part because it can become fully internalized. Consider, for instance, a third person who has been raised in an insular religious community. Suppose that in this community, people adhere to a narrow set of religious precepts, their access to information is restricted by their religious leaders, and their social roles are defined in terms of highly gendered categories. By the time the person reaches adulthood, they are likely to have internalized their community’s standards about the nature of personhood and about what counts as an adequate range of options. Unlike the first woman, this person is not externally constrained: no one stands ready to prevent them from acting according to their desires. And unlike the second person, their choices have not been undermined or frustrated by inaccessibility. Indeed, they may even possess all the material resources necessary to exit the community. Yet there is an important sense in which they lack freedom, owing to how their space of possibility has been circumscribed; their agency itself has been short-circuited in a way that they cannot repair or (possibly) even recognize. To be clear, this is a generic, hypothetical example. Gesturing at this kind of example does not justify any inference about authenticity or “false consciousness” in any real case. Rather, the example shows that freedom is not reducible to persons’ abilities to act on their preferences, as preferences can be formed in response to oppressive conditions, which are themselves antithetical to freedom.

It’s worth pausing to explain this a bit more. We have distinguished negative freedom (freedom from external constraints) and positive freedom (the ability to effectively carry out one’s interests). One might argue that these apparently distinct facets of freedom are reducible to a single conception. Gerald MacCallum, for example, argues that we can reconcile negative and positive freedom under a single, triadic conception: freedom is always a relation between (1) a person, (2) their goals (their “doings” or “becomings”), and (3) the relevant constraints.Footnote 9 From this perspective, understanding freedom as primarily being about external constraints or as primarily being about individuals’ effective capacity to act is a mistake. Rather, what matters is a person, some action they wish to take or way they would like to live, and whether there are constraints on the person’s ability to take the action or to live the way they would like. Hence, a person driven by an addiction might be free of external constraints, but unfree insofar as their addiction prevents them from acting or living in a way that comports with their higher-order desires.

But even a conception that collapses positive and negative freedom cannot adequately explain our third case. The problem there is that despite the fact that there are no external constraints, and despite the fact that the person is able to act upon their values, it is odd to say that they are free tout court. Being raised in oppressive circumstances may preclude them from developing reasonable sets of values and preferences. The limitations on freedom in this case concern what Christman calls their “quality of agency.” That is, a person who has faced unreasonable constraints on their ability to develop their own sense of value has diminished “effectiveness as an agent.”Footnote 10 More is needed to secure one’s effectiveness as an agent than the mere absence of external constraints and the ability to act upon one’s preferences. Quality of agency requires that one’s beliefs and desires be both authentic and competently acquired for an adequate range of options. It also requires social and political structures that support those beliefs and desires, and sufficient affordances to act on them. This quality-of-agency view can explain how challenges to authenticity, such as in cases of severe addiction, limit freedom. For instance, we might imagine a person who is driven by addiction but who also has enough resources and opportunities to sustain their habits in perpetuity.Footnote 11 Such a person lacks freedom in the fullest sense because their addiction limits their efficacy as an agent.

Our view of freedom aims to capture both freedom as quality of agency (which itself encompasses both negative and positive freedom) and non-domination (which encompasses republican freedom). In this sense it is as ecumenical as our conception of autonomy from Chapter 2. Specifically, we recognize both non-domination and autonomy as vital facets of freedom.Footnote 12 An agent’s freedom requires what we will call “ecological non-domination.” It is ecological in the sense that one’s freedom encompasses both facts about oneself (the quality of one’s agency) and facts about one’s environment (the absence of external constraints and of domination). One is not free without both.

Notice that it is possible to enjoy some aspects of autonomy without full republican freedom (or even negative freedom). Consider, for instance, the “Russian oligarchs”: the group of wealthy Russian kleptocrats who rapidly acquired enormous wealth in the wake of post-Soviet privatization. They enjoy unfettered positive freedom, in that they have at their disposal the means to purchase world-class football clubs,Footnote 13 record-breaking superyachts,Footnote 14 and former royal estates.Footnote 15 Yet, at the same time, their enjoyment of noninterference is not all that robust: Their personal safety depends on remaining in good favor with the regime. Mikhail Khodorkovsky’s oil company (Yukos) came to control a sizable fraction of Russia’s oil supply; yet when Khodorkovsky engaged in a power struggle with the government, he was imprisoned for nearly a decade on trumped-up fraud charges. Similarly, the chairman of the country’s biggest telecom, Vladimir Yevtushenkov, was placed under house arrest by Russian authorities under suspicion of money laundering.Footnote 16 Indeed, to some extent even the president of Russia, Vladimir Putin,Footnote 17 faces this sort of precarity: Both his wealth and safety depend on retaining political power, and this in turn requires at least minimal cooperation with the oligarchs. For him, interference is absent, but perhaps not robustly absent. That is, it may not be absent over variations in the friendliness of others.

5.1.2 The Value of Freedom

The relation between freedom and morality remains an open question. Why, that is, is freedom good or morally significant (if it is)?

As in our discussion of autonomy earlier in this book, we can understand the value of freedom by considering its boundaries. We have already seen that autonomy is not good without qualification. For example, using one’s autonomous capacities to undermine others’ freedom is bad. Consider the oligarchs just mentioned: not only does their freedom permit them to capture a sizable proportion of the wealth and national product of Russia for their personal benefit; they also have a long history of using their freedom as a tool for dominating others and diminishing their quality of agency. With this sort of example in mind, John Danaher argues that freedom is an “axiological catalyst” – that it “makes good things better and bad things worse.”Footnote 18 Yet even this seems too strong when we consider cases that raise the “paradox of choice.”Footnote 19 The paradox is that although we often believe that having more options is good, in fact having many, widely diverse, or highly nuanced choices leads to anxiety and stress rather than happiness or satisfaction. A greater range of choices is simply not always better, for either the chooser or those who depend on their choices.

These cases suggest that it will be impossible to coherently explain either freedom or autonomy as intrinsically good. Of course, we need not ignore the fact that freedom and autonomy are almost always good. An easier proposition to defend is cast in terms of unfreedom: shortfalls in freedom (that is, either shortfalls in self-government or instances of domination) are prima facie bad. This commitment is compatible with freedom being outweighed by any number of other values in any number of cases, such as when we take the freedom of oppressors to be outweighed by the value of human rights. There may be cases where the protections licensed by the assumed value of freedom are morally and politically outweighed by the public interest, such as public health mandates. The shift in emphasis from freedom to unfreedom is motivated by the fact that in practice, it is easier to discern what is objectionable (or not) about a given freedom deficit than to discern how freedom (conceived of as an intrinsic good) must be limited.

This account has substantive implications. It holds that freedom deficits are usually morally bad and that unfreedom can be thought to serve morally good purposes only in certain circumstances. Figuring out how best to promote human freedom, then, is not as simple as settling debates about which sorts of interference would be objectionable to rational actors. Unlike freedom, which can be defined in the abstract, unfreedom can only be defined in terms of the specific foibles of human agency, the specific architecture of human environments, and the specific implications of social choices. As we show in the next section, the effects of such shortfalls in freedom are well illustrated by the three challenges to freedom mentioned earlier.

5.2 Three Challenges to Freedom: Affective, Deliberative, and Social

At the start of this chapter we introduced three challenges to freedom: the affective challenge, the deliberative challenge, and the social challenge. In this section we examine each of these challenges by considering several different algorithmic systems. We argue that, to the extent that these technologies undermine either people’s autonomy or their freedom from domination without good reason, they objectionably undermine people’s freedom.

5.2.1 Affective Challenges to Autonomy

The first sort of challenge to freedom is that human behavior is driven by affective influences, such as fear, anger, resentment, or addiction, rather than by rational attitudes alone. These affective states, we will argue, can undermine the authenticity of people’s preferences and desires, threatening our freedom by threatening our autonomy.

People are so driven by affective states that it might seem difficult or impossible to clearly pinpoint any kernel of rational attitudes lying underneath. Humans, it might seem, experience affective influences “all the way down,” in the sense that our authentic attitudes cannot be distinguished in a principled way from any other attitudes we might have.Footnote 20 But in some cases, the influence can be so dramatic that it is hard to accept that the resulting behavior could be freely chosen or even independently motivated. The AAA Foundation for Traffic Safety has found, for instance, that around 4 percent of drivers each year have gotten out of their cars to angrily confront another driver, and 3 percent have purposefully run into another car for the same reason.Footnote 21 Anger is not the only source of authenticity-undermining affective influence: It is difficult to believe that a person could have an authentic desire or preference to go to an internet cafe and play a game for fifty straight hours before dying of exhaustion, yet this has happened on several occasions.Footnote 22

As B. F. Skinner’s work showed, people’s susceptibility to operant conditioning can be used by third parties (such as casinos) to reinforce certain patterns of behavior over others. Many digital platforms employ conditioning strategies in the same sort of way, engendering affective states in their users that undermine the authenticity of their choices. The targets of optimization are different, of course; mobile developers optimize for clicks, views, watch times, and so on, rather than lever pulls or revenue benchmarks. However, the underlying strategy – engendering some sort of artificial dependency – is the same.

As Nir Eyal points out, engendering artificial dependencies is a crucial element of the general strategy of keeping users “engaged.” He outlines a four-step method for effectively “Skinner boxing” a digital platform.Footnote 23 The first step involves a trigger, which draws the user’s attention to the app or platform. The quintessential modern trigger is, of course, the “push notification,” which demands at least enough attention to dismiss it. The second step of Eyal’s method involves getting users to act on the trigger. The best way of doing this is to get them to anticipate some sort of reward. The third step involves tying user behavior to these rewards while making the rewards variable, in the sense that they are sometimes highly rewarding but at other times mundane, in a way that is unpredictable. Dopamine surges whenever the brain has been conditioned to expect a reward, but without variability, the experience becomes unsatisfyingly predictable. The fourth step, finally, involves getting users to make some sort of investment in or commitment to the app, to maintain their engagement with it in the future.
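For readers who find a mechanical sketch helpful, the four steps can be rendered as a toy simulation. Everything here – the function name, the probabilities, the “rare badge” reward – is an invented assumption for illustration, not a description of any real platform:

```python
import random

def hook_cycle(sessions, reward_prob=0.3, seed=None):
    """Toy sketch of Eyal's four-step loop: trigger, action,
    variable reward, investment. Purely illustrative."""
    rng = random.Random(seed)
    investment = 0
    rare_rewards = 0
    for _ in range(sessions):
        # Step 1: trigger -- a push notification draws the user's attention.
        # Step 2: action -- the user opens the app anticipating a reward.
        # Step 3: variable reward -- unpredictably rare payoffs, the
        # variable-ratio schedule Skinner found hardest to extinguish.
        if rng.random() < reward_prob:
            rare_rewards += 1
        # Step 4: investment -- streaks and points stored in the app
        # raise the cost of leaving and prime the next trigger.
        investment += 1
    return investment, rare_rewards
```

The point of the sketch is structural: the user's accumulated investment grows deterministically with each cycle, while the reward remains stochastic, which is exactly the combination that sustains compulsive re-engagement.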

The Duolingo platform offers an illustration of how Eyal’s model can be employed. The app uses push notifications to draw the user’s attention to the app, sometimes coupled with a guilt trip. When a user has stopped using Duolingo for a period, they will receive a notification depicting the very cute Duolingo mascot in tears, stating that “Language Bird is crying,” that a failure to go back to learning a language will lead the mascot to “eat a poison loaf of bread,” and that the next email to the user will be an e-vite to Language Bird’s funeral.

Skill badges, which represent user achievements, are displayed at the end of rounds in a pleasingly unpredictable way. Finally, the platform has a number of mechanisms for investment: users’ earned skills decay over time, requiring ongoing practice, and the platform itself also contains a microtransaction market for premium avatars, allowing users to invest in the app using real money.

In several ways, then, Duolingo uses its users’ affective states to undermine their autonomy and, thus, limit their freedom. For the most part,Footnote 24 its freedom-impinging tactics do not raise anyone’s hackles, because there is a background presumption in any discussion of it that language acquisition really is valuable for its users (thereby serving their global autonomy if not quite their local autonomy). The same tactics, however, are more troubling in the context of employment, where people can be compelled to act in dangerous ways to protect their basic livelihood. Consider the practices of the ride-hailing company Uber.

Uber has come under substantial criticism on the basis of how its algorithms have negatively affected its drivers.Footnote 25 The company uses these systems for a variety of purposes: to track passengers, to anticipate market demand, and (at one point) even to identify and deceive regulators, media, and law enforcement. As we discuss in Chapter 7,Footnote 26 the company employs several practices to keep its drivers working longer than they might otherwise. Here we will discuss two sorts of practice, arguing that one serves to undermine the freedom of drivers while the other is more morally benign.

Both sorts of practice can be linked to the strategy outlined by Eyal. The first involves Uber’s user interface design choices. The driver app, for instance, is configured by default to remind drivers of their goals through push notifications, and it is also configured by default to queue another rider before the current rider has been delivered. Presumably, the motivation for Uber to employ this particular set of defaults rather than the alternative sets is that their chosen configuration is statistically more likely to induce drivers to accept more riders than the others. (In fact, it is likely that Uber has discerned the precise overall effect of this design choice through massive-scale A/B testing.)
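The kind of A/B comparison gestured at above can be sketched as follows. The acceptance probabilities for the two default configurations are made-up assumptions purely for illustration; they are not Uber data:

```python
import random

def ab_test_default(n=10_000, p_on=0.62, p_off=0.55, seed=0):
    """Toy A/B test of a UI default (auto-queue on vs. off).

    Simulates n drivers under each default and returns the
    estimated lift in ride-acceptance rate from the default.
    All probabilities are hypothetical.
    """
    rng = random.Random(seed)
    rate_on = sum(rng.random() < p_on for _ in range(n)) / n
    rate_off = sum(rng.random() < p_off for _ in range(n)) / n
    return rate_on - rate_off
```

At massive scale, even a small estimated lift of this kind is statistically detectable, which is why a platform can tune defaults with considerable precision.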

Nevertheless, Uber’s design choices do not necessarily undermine drivers’ freedom in this case. It is plausible that drivers have their own reasons for wanting such settings one way or the other: someone might want to work a shift determined by a particular monetary goal or period of time and, thus, to simply queue as many customers as fast as possible without having to manually make a choice at each possible choice point. Uber would certainly not increase the freedom of users by doing away with the choice altogether. So long as Uber does not covertly reset the defaults or design the interface in a predatory fashion, the company’s UI design choices do not strike us as an issue of serious moral concern (or even, for that matter, an issue of freedom at all).

The second practice for keeping drivers working longer than they otherwise would, however, is less apparent to the drivers and much more troubling from the standpoint of autonomy. It involves such practices as “surge pricing,” a cost multiplier that raises the price of a ride (sometimes dramatically) in a location at a time when the demand for drivers is high or the supply of drivers is low. Some of the additional revenue from this markup is passed to the drivers, so the surge-pricing system gives drivers a financial incentive to work more than they otherwise would. The surge-pricing mechanism is a familiar device in modern retail – it is just a form of dynamic pricing – but essential to Uber’s use of it is an element of uncertainty and variability that is not readily apparent to drivers at first: when drivers accept a ride at a “surge priced” rate, they do not know whether the ultimate payment for the ride will be surge priced.Footnote 27
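A minimal sketch can make the dynamic-pricing idea concrete. Uber’s actual algorithm is proprietary and far more complex; the rule, the cap, and the payout split below are hypothetical simplifications:

```python
def surge_multiplier(demand, supply, cap=3.0):
    """Toy dynamic-pricing rule: the multiplier grows with the
    demand/supply ratio and is bounded above by a cap."""
    if supply <= 0:
        return cap
    return min(cap, max(1.0, demand / supply))

def driver_payout(base_fare, demand, supply, surge_applies):
    """Model the driver-side uncertainty the chapter describes:
    whether the final payment is surge priced is a flag the driver
    does not know at acceptance time."""
    m = surge_multiplier(demand, supply) if surge_applies else 1.0
    return base_fare * m
```

Because `surge_applies` is unknown to the driver when accepting the ride, the payoff is variable in just the way an operant-conditioning schedule is, which is what links this mechanism back to the affective challenge above.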

The fact that these practices are opaque to drivers might lead one to question Uber’s motives. Our concern here, though, is the drivers’ autonomy. A recent report describes the phenomenon of “chasing the surge” at length.Footnote 28 Since 2016, Uber has employed the “boost” system, which grants automatic surge pricing to drivers who have completed a certain number of rides the previous week, in a tiered arrangement from “Bronze” to “Platinum.” One driver described his relationship with Uber in the following way:

I had some days off from work. So I was on the road. I mean, I just had coffee upon coffee, and I’m just on the road. So I ended up doing about, I thought it was over 100, but I did 94 rides … It was 94 rides in essentially 3 days … After you do 90 rides, I think it’s 90 or 95, they bump you to platinum … And then basically you’re always chasing platinum.Footnote 29

The authors of the report note that “[t]he stress of these games, be it chasing a surge or platinum, was a refrain in 50% of our interviews in surveys.”Footnote 30

What is the moral upshot of the Uber case? According to our view of the value of freedom, the verdict is mixed. The nudges Uber employs in its user interface, which for the most part serve obvious functions for its users (and which are also relatively low stakes and transparent), are unproblematic as a class. From the perspective of alleviating the domination of its users or promoting their autonomy, it is not clear that Uber could have done better.

Uber’s Skinner-boxing of its drivers to “chase platinum,” in contrast, is considerably more troubling and gives rise to a freedom-based complaint. The authenticity condition of autonomy is violated when people act from affectively compromised states; if people’s actions lack authenticity – in the sense that those actions alienate the person from their basic values – then those actions will not be autonomous. However, the boost aspect of the Uber surge-pricing system exists precisely to place users in such a state and, in so doing, urges them to “chase” overwork even at the expense of their health. The same freedom-impinging features that seem somewhat innocuous in the context of Duolingo and language acquisition become more troubling when a dangerous degree of overwork is a potential effect.

5.2.2 Deliberative Challenges to Agency

Exploiting our emotional vulnerabilities is not the only way to hack human agency. Even if we set aside the fact that human agency can be undermined by noncognitive affective states, human cognition itself is limited in ways that can be exploited to limit our quality of agency.

Our actual deliberative process is both bounded and inaccurate; the information we receive as reasoners is limited by our computational capacities and fitted to our choice environments, and our processing of the information we do get is subject to a variety of errors and biases.Footnote 31 Many of these biases – the availability bias, anchoring bias, conjunction fallacy, base-rate neglect, and so on – are well known.Footnote 32 To this end, in the next chapter we discuss problematic search suggestions and algorithmically curated news feeds, which allow people to expose themselves only to information that reinforces their existing beliefs. Here, we focus more on the boundedness of human agency.

Algorithmic systems, particularly digital platforms, introduce new sources of economic value – namely, personal and consumer datasets at big data scaleFootnote 33 – as well as new sources of risk – data breaches and other forms of exposure.Footnote 34 These systems can be so complex, both for developers and for end users, that it is impossible to live up to the Reasonable Endorsement Test in practice. Indeed, the choice environment of our digital daily lives does not live up to this moral standard: most people do not and could not read all of the fine print to which they claim to consent, and the idea of “common terms” has been stretched thin by massive swathes of boilerplate that are unreadable by humans in practice.Footnote 35 In practice, we face an avalanche of digital “pseudo-contracts” – contracts so convoluted that they assume impossible levels of human competence and thus fail to represent any actual “meeting of the minds” between the parties.Footnote 36

Our constant subjection to these “contracts” violates our procedural independence, by exploiting the limits of our epistemic competence and thereby undermining our consent. One study, for instance, found that “only one or two of every 1,000 retail software shoppers” actually access the contract “and that most of those who do access it read no more than a small portion.”Footnote 37 In 2014, Europol conducted an experiment in which they offered access to a Wi-Fi hotspot behind a contract that granted access only if the recipient signed a so-called Herod clause, in this case, to agree “to assign their first born child to us for the duration of eternity.”Footnote 38 Six people still signed the contract. In 2015, a Guardian journalist resolved to read all the terms and conditions for all his services and wound up reading 146,000 words in a single week.Footnote 39

Most modern web services now have end-user licensing agreements (EULAs), but Brainly, a peer-to-peer learning platform in which users can provide homework help to one another, offers an especially striking example of a pseudo-contract. The platform’s end-user licensing agreement is expansive. First, the company is permitted to force users into arbitration. Second, it is permitted – despite its claim to protect users with a privacy policy – to license user content to third parties, distribute it through any media, or even sell personal data outright as part of bankruptcy, even after its users have terminated their accounts. Third, it is permitted to change the terms of service at any time without notice.

Each of the components of the Brainly pseudo-contract can be seen to exploit the undermined capacity of its users to reflect on their values, motivations, and decision-making in the digital environment. The company’s ability to automatically force users into arbitration allows it to settle disputes with its users on legal terrain that is broadly unfavorable to them; its privacy policy serves as a smokescreen that obscures the extent to which it can freely trade or sell user data; its right to change its terms of service at any time prevents and discourages users from reading any one version of it and contributes to the overwhelming deluge of information users must cope with.

This pseudo-contract exploits its users’ limited quality of agency. By holding them responsible for self-government in a context where this is humanly impossible, the contract exploits their lack of understanding of their circumstances for corporate benefit. In many cases, the stakes are admittedly low: the opacity of Brainly’s EULA aside, it is difficult to argue that anyone has been harmed by having their content unknowingly open-sourced to the educationally minded public. Yet users have proven eager in the past to allow their likenesses to be sold by Instagram,Footnote 40 and (as we discuss in Chapter 8) they have more recently allowed their (and their friends’) personality profiles to be mined by Cambridge Analytica, even if they later come to regret these choices. In the context of complex algorithmic systems, epistemic competence – and therefore, freedom – is impossible to achieve.

5.2.3 Social Challenges to Freedom

So far in this section, we have argued that one way that human agency can be undermined is if its processes come to be directed by affective states rather than by conscious deliberation, violating the authenticity condition. Another way agency can be undermined, we have argued, relates to the fact that our cognitive capacities themselves are bounded, violating the competence condition. In this subsection, we explore a third way that agency can be undermined: through the influence of the affective states and erroneous deliberative processes of other people, in the context of an epistemologically toxic social environment.

Consider, for instance, YouTube’s recommendation algorithm, whose autoplay mechanism drives users toward echo chambers and other ideologically extreme and conspiratorial content. By YouTube’s own analysis, almost three-quarters of viewing time spent on the site is driven by this recommendation system as opposed to viewers’ independent actions,Footnote 41 so the operation of this system is of overriding significance in determining what content people are exposed to. When left unsupervised by humans, it fosters an unsafe environment for children: it allows bad actors to expose them to distressing parody content, such as a fake Mickey Mouse cartoon depicting eye-gougingFootnote 42 or a fake Paw Patrol episode where one of the main characters attempts suicide.Footnote 43

YouTube’s recommendation system has also arguably been a source of violent self-radicalization. The reason has at least in part to do with YouTube’s business model and with the evolution of its incentive structure. Originally, the YouTube recommendation algorithm was designed simply to maximize the number of clicks,Footnote 44 but this proved to offer content creators an incentive to post “clickbait” videos, ones that entice users to click but that do not necessarily hold their interest through the video’s duration. In light of this, YouTube changed its algorithm in 2012 to optimize more for watch time than for number of clicks so that “creators would be encouraged to make videos that users would finish, users would be more satisfied and YouTube would be able to show them more ads.”Footnote 45 The change worked – watch time increased by 50 percent each year from 2012 to 2015 – but it also conferred an advantage to those content creators who produce naturally engaging videos: conspiracy theorists.
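The incentive shift described above can be made vivid with a deliberately simplified sketch. The scoring functions, field names, and numbers below are all invented for illustration – YouTube’s actual ranking model is proprietary – but they show how changing the objective from clicks to watch time changes which kind of video wins:

```python
# Toy ranking objectives illustrating YouTube's 2012 change.
# All figures and field names are hypothetical.

def score_by_clicks(video):
    """Pre-2012-style objective: reward whatever gets clicked."""
    return video["clicks"]

def score_by_watch_time(video):
    """Post-2012-style objective: reward total minutes actually watched."""
    return video["clicks"] * video["avg_minutes_watched"]

# A clickbait video: many clicks, quickly abandoned.
clickbait = {"title": "You won't believe this!",
             "clicks": 1000, "avg_minutes_watched": 0.5}

# A conspiratorial video: fewer clicks, but riveting to its audience.
conspiracy = {"title": "The REAL story they won't tell you",
              "clicks": 400, "avg_minutes_watched": 12.0}
```

Under the old objective the clickbait video ranks higher (1,000 clicks versus 400); under the new one, the engrossing conspiratorial video does (4,800 expected minutes versus 500) – which is the advantage the paragraph above describes.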

Guillaume Chaslot, a former Google engineer who left the company over issues with the development of the YouTube algorithm, wrote a piece of web software that examines which videos follow others via the recommendation system. When the Guardian examined the one thousand top-recommended videos, it found that YouTube had an 85 percent chance of recommending a pro-Trump video rather than a pro-Clinton video. The newspaper also interviewed several conspiracy theorists, whose videos normally receive only a few hundred views but whose traffic increased dramatically right before the 2016 election, and it found that most of these content creators got their traffic from the YouTube recommender system rather than from external links.Footnote 46 These videos, which almost always have titles like “WHOA! HILLARY THINKS CAMERA’S OFF … SENDS SHOCK MESSAGE TO TRUMP” and “Irrefutable Proof: Hillary Clinton Has a Seizure Disorder!” are more engaging than they are factual. The natural result of this state of affairs is a social environment in which divisiveness and sensationalism are in themselves advantageous traits for content, quite apart from whether or not that content reflects misinformation or authoritarian aims.

Given the affective and deliberative challenges to agency that we have already discussed, we should not be surprised that sensational and conspiratorial content grounded in fear, anger, and resentment turns out to deliver greater engagement than the messy realities of legitimate news reporting. But it is worth noticing how toxic social environments can aggravate the other challenges to agency: When users who are already prone to fear-conditioning are fed an escalating diet of misinforming, radicalizing content, their autonomy is socially compromised. For such users, both the possibility of autonomy and the possibility of high-quality agency are ruled out from the start.

5.2.4 Summarizing the Challenges to Freedom

As we have seen in this section, drivers who have become caught up in the gamified ridesharing ecosystem, information consumers who have become overwhelmed by a deluge of labyrinthian pseudo-contracts, and conspiracy theorists who have become radicalized all have been made less free in a broadly similar way: either their autonomy has been undermined or their quality of agency has been diminished.Footnote 47 However, the traditional conceptions of freedom (negative, positive, republican) that we canvassed at the beginning of the chapter do not provide an adequate explanation for these shortfalls of freedom. That is because drivers, information consumers, and conspiracy theorists all seem to satisfy the traditional conceptions. They are not subject to external constraints, they are not subject to arbitrary exercises of power, and they often even have the capacity to make choices to realize their preferences. The challenges to freedom we have outlined here make clear that freedom requires autonomy, high-quality agency, and non-domination.

Taking these three challenges seriously motivates an important shift in how we think about freedom. Recall that our account of freedom is ecological non-domination, which includes quality of agency and republican freedom. That account captures the emotionally volatile, cognitively limited, and fundamentally social nature of human agency. It is also consistent with the fact that it remains possible to coherently act even under some degree of individual, psychological disunity, such as when one is internally conflicted about a course of action. Surely the authenticity of Uber drivers, the competence of overwhelmed service users, and the online radicalization of conspiracy theorists are not all-or-nothing affairs. People in those circumstances are neither fully free nor acting purely on reflex. When we reconsider human agency in light of its distinctive challenges, any plausible account of freedom must cope with this complexity.

We are finally able to make more precise the sense in which these users of technology have been made unfree: their beliefs, preferences, constraints, and values have been formed in a way that conflicts with agency. Freedom requires, in addition to non-domination and effective choice, that one’s beliefs, preferences, constraints, and values themselves have been formed without undue influence. It is only when people are both undominated and unstricken by such malformations that they can be said to be free in the fullest sense.

5.3 Ecological Non-domination, Policy, and Polestar Cases

The next question to consider is this: what kinds of moral claims does ecological freedom ground? Both the new cases we have examined in this chapter and the cases we have examined in previous ones can offer some guidance.

Preserving our autonomy – and thus, our freedom – requires managing the possible sources of compromising affective states. However, at least in the United States, policymakers have discussed the issue only where the analogy between digital and non-digital addiction is sufficiently robust.Footnote 48 Consider, for instance, “loot boxes,” which are certain economic transactions within digital games that involve a randomized reward structure. These transactions bear a clear resemblance to traditional forms of gambling, such as slot machines and lotteries, including the fact that the model is primarily funded by a tiny proportion of “whales.”Footnote 49 There is, therefore, reason to think that the “loot box” market is noxious and, thus, apt for public regulation.Footnote 50 One bill has been introduced to ban the use of loot boxes in games marketed to children, but as of 2020, there are effectively no regulations in place about them. In the European Union, in contrast, the practice has already been banned completely by the authorities in at least two member states (Belgium and the Netherlands).Footnote 51 Our account offers, on relatively minimal and ecumenical grounds, support for regulation that nudges users to be self-critical about their agency.

Preserving our quality of agency might also require designing law and policy in a way that is mindful of human cognitive limitations. Robin Kar and Margaret Radin, for instance, offer a heuristic for courts: They should imagine that all text exchanged during the formation of the contract “be converted into oral form” and then imagined to occur “in a face-to-face conversation between the relevant parties.”Footnote 52 Courts then are not obliged to accept all boilerplate as meaningful from the start. Instead, they are enabled to scrutinize whether the text genuinely conforms to “the cooperative norms that govern language use to form a contract.”Footnote 53

Finally, we might consider ways of combating toxic online social environments, which serve as threats to all sorts of freedom, republican freedom included. Platforms have implemented some light-touch solutions on their own, at least for the most egregious problems. For instance, YouTube searches for content associated with the Islamic State now attract banners promoting skepticism about the group’s aims.Footnote 54 However, broader and more general solutions remain elusive. How can would-be political radicals be nudged toward content that promotes gentler aims and methods than violence, when the business models of the platforms hosting this content depend on the engagement produced by the more radical content? The depth of this problem suggests that the engagement-centric business model itself might need to be abandoned to see progress.Footnote 55

We can also reconsider some of the polestar cases in light of the techniques we have used to address the new cases. Do Eric Loomis or the teachers have a reason to think that their freedom was undermined?

For Loomis, the argument that his freedom was wrongly or unjustly curtailed is difficult to get off the ground. It is hard to argue that the use of COMPAS somehow undermined the authenticity of Loomis’s desires, so his autonomy-based arguments would rest on claims about how the proprietary nature of the system exploited his diminished quality of agency. However, as the Wisconsin Supreme Court noted, most of the information used to generate his risk assessment report was either static or under his control, so agency-based concerns are also somewhat implausible. Now, he certainly had his freedom curtailed by the court, but it is hard to argue that he was somehow dominated by it in a way that defies sensible criminal justice; as the court confirmed, he likely received the same sentence he would have otherwise. So whatever autonomy- and agency-based affronts he was unreasonably subjected to, he did not face a morally objectionable affront to his freedom.

For the teachers from Wagner and Houston, the freedom-based argument is more persuasive. As we also discussed in Chapter 4, the EVAAS system produces results that are fragile (and, as its developers acknowledge, unreproducible) and it produces those results without any specific explanation of its method or independent oversight. And – unlike in Loomis – were the two teachers to have gained the necessary understanding of the system, they would have been able to act differently (and a lot more directly): They could have anticipated their problems and been prepared to impugn the results to future employers, or they could have mounted a comprehensible public campaign against the system. But they were denied such choices and therefore had no choice but to litigate the issue. To the extent that litigation against technology companies is prohibitively costly for individual litigants such as Teresa Wagner or Jennifer Braeuner (and, for that matter, Catherine Taylor and Carmen Arroyo), such sources of domination constitute objectionable affronts to freedom.

There are general lessons to be learned even for those who do not find themselves on the wrong side of such systems. Each of us has an individual obligation to accept that we are driven partly by affective states, boundedly competent, and highly influenced by others. We can be influenced to agree to transactions we might not otherwise accept, we are unlikely to be aware of the finer details of most of the contracts we sign, and we are especially susceptible to radical ideas – good or bad – when we encounter them from within the community in which they arose. We need not take these features of our nature to undermine the legitimacy of every agreement or transaction we make through these apps and platforms, but we must accept that these features undermine the legitimacy of some of those agreements and transactions.

5.4 Why Not Manipulation?

Manipulation is a common element of many of the cases we consider in this chapter and the next. The “blind-draw” loot boxes resemble lotteries and slot machines in all ways other than that they are digital rather than physical, luring their users to play through classic tricks of the advertising trade. Facebook’s Click-Gap metric, which we discuss in the next chapter, offers the platform’s developers the ability to invisibly curate individual News Feeds. Cambridge Analytica’s targeting of Facebook users for the purposes of political advertising, too, involved influence tailored to users’ deep tendencies. Even in cases where there does not appear to be any direct manipulator, such as the case of the YouTube recommendation system, manipulation is still involved, in virtue of involving someone who seems to have been manipulated. Considering these cases, one might wonder whether our analysis of the ethics of algorithmic systems could be subsumed under the rubric of manipulation. And if not, then what does make manipulation objectionable as a practice?

Before answering questions relating our analysis to manipulation, we would do well to specify the concept itself, at least loosely. However, there is no consensus in the philosophical literature on what exactly constitutes manipulation, nor is there a consensus on how manipulation relates to autonomy and freedom. Moreover, there are problematic implications related to each possible conception. Some view manipulation in terms of threats to rationality. Raz, for instance, understands manipulation in terms of undercutting rational decision-making; as “pervert[ing] the way that a person reaches decisions, forms preferences or adopts goals.”Footnote 56 Yet manipulators need not always threaten rationality; in some cases rational implications can themselves be used manipulatively.Footnote 57 Others, such as Anne Barnhill, Karen Yeung, and Daniel Susser, Beate Roessler, and Helen Nissenbaum, conceive of manipulation in terms of forms of deception or hiddenness that undermine people’s self-interest and autonomy.Footnote 58 This, too, seems right, but many of our key cases, such as that of Catherine Taylor (Chapter 4), reflect vulnerabilities that do not include elements of deception, hiddenness, or trickery. At times, such as in the COMPAS, EVAAS, and CrimSafe cases, the practices we scrutinize shade closer to coercion than to deception. To this end, Joel Feinberg and Allen Wood understand manipulation as lying on a “spectrum of force” between compulsion and enticement.Footnote 59 But this analysis is also incomplete: It does not cover the cases of manipulation that seem to involve deception. So none of the analyses cover all of the cases that seem to fall under the concept of manipulation.

We do not ground our moral analysis in any specific conception of manipulation. Rather, we will only aim to highlight two salient points. First, a point about method: the moral permissibility of manipulation depends on how manipulation is construed. In considering the disputes mentioned earlier about how to identify manipulation, for instance, it might seem reasonable to adopt a broad or disjunctive conception, under which either deceptive or coercive practices count as manipulative. However, this leaves unresolved what unifies the concept of manipulation, that is, what all instances of the phenomenon have in common. Moreover, since entirely avoiding both deception and coercion seems impossible in practice, this broad sort of conception does not justify categorical condemnation of manipulation.

Our second point is that manipulation is probably best understood in a non-moralized way. Instead of viewing manipulation as constitutively wrong, we should view it as identified by “objective facts about a situation that give us good reasons for condemning or approving certain things.”Footnote 60 Insofar as reasons can be good yet nonetheless be undercut or outweighed, this view about the nature of manipulation is compatible with a moral evaluation of it as prima facie or pro tanto wrong – that is, as “not always wrong” but “generally wrong.”Footnote 61 So, just as we can accept the defeasible badness of unfreedom, in light of examples where paternalism serves the public interest, here we accept the defeasibility of the badness of manipulation in light of examples where manipulation serves those interests. These examples might be few and far between, but they remain important. As we argue in the next chapter, the managers and developers of some technological systems might have obligations to influence their users in ways that are non-persuasive, deceptive, or perhaps even coercive. In these cases, such as in the case of the Click-Gap metric (discussed in the next chapter), the obligation not to manipulate simply gives way.

In general, we think that the problems with algorithmic systems are closely related to, but not reducible to, the affronts to autonomy wrought by manipulation. To this end, Marjolein Lanzing discusses the manipulative aspects of information technologies in terms of affronts to informational and decisional privacy. Yet, for her, as for us, the fundamental lesson is about affronts to autonomy, not manipulation per se: she writes that “[s]ince informational and decisional privacy protect autonomy, autonomy is under threat.”Footnote 62 From this broader perspective, it is not only clear why manipulation is morally problematic, it is clear why manipulation is wrong to the extent that it engenders or reflects an affront to autonomy.

5.5 Conclusion

Human agency and autonomy are, as we have seen, difficult to pin down. As we have argued, understanding these important ideas as they apply to actual people requires examining human behavior in depth. Here, we have conceptualized human agency in terms of three “natural challenges” to human agency, and we have argued that each of these challenges influences the nuances of a third important idea: freedom. Human freedom is fundamentally ecological: it is shaped by our capacity to act on our emotions, the limits of our cognition, and the centrality of our social relations to our decision-making.

6 Epistemic Paternalism and Social Media

In the previous chapters, we explored various ways in which algorithms can undermine our freedom or threaten our autonomy, paying particular attention to the contexts of prison sentencing and employee management (specifically, of teachers and Uber drivers). In this chapter, we turn to a different context – one we briefly touched on in the previous chapter – that of our current media environment, which is in large part shaped by algorithms. We discuss some distinctively epistemic problems that algorithms pose in that context and some paternalistic solutions they call for. Our paternalistic proposals for these problems are compatible with respect for freedom and autonomy; in fact, our freedom and autonomy demand them.

Let us begin with some reflections on our current media environment. In 1995, MIT media lab founder Nicholas Negroponte foresaw a phenomenon that we are now all familiar with: the replacement of traditional newspapers with virtual newspapers, custom-fitted to each reader’s particular taste. In his speculations, he called the virtual newspaper “the Daily Me.” Cass Sunstein elaborates on the idea of the Daily Me:

Maybe your views are left of center, and you want to read stories fitting with what you think about climate change, equality, immigration, and the rights of labor unions. Or maybe you lean to the right, and you want to see conservative perspectives on those issues, or maybe on just one or two, and on how to cut taxes and regulation, or reduce immigration. Perhaps what matters most to you are your religious convictions, and you want to read and see material with a religious slant (your own). Perhaps you want to speak to and hear from your friends, who mostly think as you do; you might hope that all of you will share the same material. What matters is that with the Daily Me, everyone could enjoy an architecture of control. Each of us would be fully in charge of what we see and hear.Footnote 1

As Negroponte anticipated, custom-fitted virtual news has become widespread and popular. This has been facilitated by the advent of “new media” – highly interactive digital technology for creating, sharing, and consuming information. New media is now pervasive, with more Americans getting their news from social media (the predominant form of new media) than traditional print newspapers.Footnote 2

In 1995, the Daily Me might have sounded like a strict improvement on traditional news. However, we now know that the architecture of control that it affords us has serious drawbacks. Consider an episode that briefly caught the nation’s attention in the summer of 2019. About a month after the Mueller Report – the Investigation into Russian Interference in the 2016 Presidential ElectionFootnote 3 – was released, Justin Amash (then a Republican member of the U.S. House of Representatives) held a town hall to explain why he thought the report was grounds for impeaching the president. At the town hall, NBC interviewed a Michigan resident who stated that she was “surprised to hear there was anything negative in the Mueller Report at all about [the President]” (emphasis added).Footnote 4 At the time, it was hard to see how anyone could think this. Yet she thought the report had exonerated the president.

When the resident learned that the report contained negative revelations about the president, it was through serendipity. Amash was the only Republican representative calling for impeachment following the release of the report, and she happened to live in his district. Had it not been for this, she likely would have continued to believe that the report had exonerated the president.

The Michigan resident is not a special case. Many people continue to believe that the Mueller Report explicitly concludes that the president and members of his campaign did nothing wrong. Moreover, the phenomenon of people being misinformed in similar ways is common. Along with the architecture of control afforded by customized news comes the danger of encapsulating ourselves in “epistemic bubbles” – epistemic structures that leave relevant sources of information out – and of walling ourselves off in “echo chambers” – epistemic structures that both leave relevant sources of information out and actively discredit those sources.Footnote 5 This is, in part, due to the automated way in which news feeds and other information-delivery systems (such as search results) are generated. Eli Pariser explains:

The new generation of Internet filters looks at the things you seem to like […] and tries to extrapolate. They are prediction engines, constantly creating and refining a theory of who you are and what you’ll do and want next. Together, these engines create a unique universe of information for each of us and […] alters the way we encounter ideas and information.Footnote 6

This is why cases like that of the Michigan resident are far from isolated: Many of us are in epistemic bubbles or echo chambers built through the cooperation of internet filters and ourselves.

Consider the user interface of YouTube, the world’s largest social media platform.Footnote 7 The site’s homepage opens to a menu loaded up with algorithmically generated options, curated to match each user’s tastes. Immediately after watching any video on the homepage, users are met with algorithmically generated suggestions based on what they have just watched. The suggestions that appear on the home page and at the conclusion of videos are, of course, chosen to keep users on the site. So users who are inclined toward, say, left-of-center politics are more likely to receive suggestions for videos supporting that worldview. Given that two-thirds of Americans get news through social media – with one in five getting news from YouTubeFootnote 8 – it is no wonder we can consume a lot of news but be mis- or underinformed about current affairs of monumental importance.

New media’s design and popularity also facilitate the mass spread of misinformation. This is not only unfortunate but dangerous. Consider the recent surge in “vaccine hesitancy” – the reluctance or refusal to vaccinate – a phenomenon that the World Health Organization now considers one of the top ten threats to global health.Footnote 9 The surge of vaccine hesitancy seems to be inseparable from the success of new media, with Facebook, Twitter, and YouTube playing a large role in spreading misinformation about vaccines.Footnote 10 Now, decades after it had been declared “eliminated,” measles has returned to the United States.Footnote 11 And in the recent COVID-19 pandemic, one early concern was that the low rates of seasonal flu vaccine would increase strain on health-care systems both by requiring more testing to differentiate COVID-19 and seasonal flu cases and by increasing strain on treatment resources.

This raises questions about the responsibility new media developers have to manage the architecture of control that its users currently enjoy. It also raises questions about the latitude that social media developers have in making alterations to their sites. On the one hand, it seems reasonable to think that developers should lead their users to consider a more diverse array of points of view, even if that is not in line with users’ immediate wishes. On the other, there seems to be something objectionably paternalistic about this: Users should be able to (at least in some sense) decide their information diets for themselves.

We will argue that there is plenty of room for epistemic paternalism online. Moreover, because the internet information environment is epistemically noxious, such epistemically paternalistic policies should be a persistent part of the internet information environment. This chapter proceeds as follows. First, we discuss an intervention that Facebook has run in hopes of curbing the spread of fake news on the site. We explain why the intervention is paternalistic and then, using the framework of this book, defend the intervention. We argue that while Facebook’s intervention is defensible, it is limited. It is an intervention that may pop some epistemic bubbles but will likely be powerless against echo chambers. We then discuss heavier-handed interventions that might be effective enough to dismantle some echo chambers, and we argue that at least some heavier-handed epistemically paternalistic interventions are permissible.

6.1 Demoting Fake News

In April 2019, Facebook announced that it would use a new metric, Click-Gap, to determine where to rank posts in its users’ News Feeds.Footnote 12 Click-Gap measures the gap between a website’s traffic from Facebook and its traffic from the internet at large, and it demotes sites with large gaps. According to Facebook, the idea is that “a disproportionate number of outbound Facebook clicks […] can be a sign that the domain is succeeding on News Feed in a way that doesn’t reflect the authority they’ve built outside it.”Footnote 13 Click-Gap attempts to identify and demote low-quality content, such as fake news, in News Feed to prevent it from going viral on the website.Footnote 14

We will argue that Click-Gap is an instance of epistemic paternalism and that it is morally permissible. We begin by explaining why we take Click-Gap to be an instance of epistemic paternalism. We then argue that it is permissible, despite its being paternalistic. It may seem strange that, in a book about the threat that algorithms can pose to autonomy, we would take this line. As we will soon argue, however, the epistemically noxious online information environment creates a need for action. And whether such action is paternalistic or executed algorithmically does not matter as such. Instead, what matters is whether these actions are consistent with respect for persons, properly understood. We will demonstrate that Click-Gap – and a host of other potential interventions – occupies this exact space. Certain paternalistic interventions, like Click-Gap, do not undermine users’ autonomy; they, in fact, support it.

Let’s begin by discussing our understanding of paternalism, as it deviates from the standard philosophical understanding of the concept. A standard conception of paternalism is as follows:

Paternalism: P (for “paternalist”) acts paternalistically toward S (for “subject”) by φ’ing (where “φ” denotes an action) iff the following conditions are met:

  • Interference: φ’ing interferes with the liberty or autonomy of S.

  • Non-Consent: P does so without the consent of S.

  • Improvement: P does so just because φ’ing will improve the welfare of S (where this includes preventing S’s welfare from diminishing), or in some way promote the interests, values, or good of S.Footnote 15

As we will soon argue, the interference condition is not met in the case of Click-Gap. Yet, as we have already noted, we take it that the intervention is an instance of paternalism. This is because we reject the interference condition.

Let us next explain why Click-Gap does not meet the interference condition. We take it that agents are autonomous when they enjoy procedural independence (i.e., when they are competent and their beliefs, preferences, desires, and values are authentic) and substantive independence (i.e., when their procedural independence is supported by their social and relational circumstances). We understand agents as free when they are undominated, autonomous, and sufficiently effective as agents. To show that Click-Gap does not meet the interference condition, then, we will show that the case does not, by the lights of these definitions, undermine freedom, autonomy, or quality of agency.

First consider autonomy. If Click-Gap is successful, it will influence the attitudes of Facebook users – some, for example, will adopt different views about vaccine safety than they otherwise would have. This influence is not enough, though, to undermine users’ autonomy. The relevant question for the purposes of assessing their autonomy is whether the attitudes the users adopt will be authentic – which is to say that those attitudes are ones that they would endorse upon critical reflection as consistent with their beliefs and values over time – not simply whether they were influenced in some way or other. And Click-Gap influences users by shielding them from content that may seem more credible than it actually is because it rose to the top of News Feed by gaming the News Feed algorithm. Presumably, the attitudes formed in the absence of such manipulation and misinformation are the sort agents can authentically endorse. Click-Gap, then, will prevent users from forming inauthentic attitudes. This is owed to the fact that people (at least typically) will desire that their beliefs be justified and accurate.

Fair enough, one might say, but what about users who do not care about the truth and want to be anti-vaxxers, come what may? Wouldn’t these – perhaps fictional – users have their autonomy undercut by Click-Gap if it changed their views? For all we’ve said, it is possible that they would. But we are skeptical that Click-Gap could have an effect on such users. Click-Gap does not respond to the content of claims made by anti-vaxxers; it simply demotes their posts. Moreover, de-emphasizing patently bad information (or misleading information) that happens to confirm antecedent views does not undermine those views per se. Rather, it mitigates unjustified confirmation. In the end, a committed anti-vaxxer isn’t likely to change their mind as a result of Click-Gap.Footnote 16 Its touch is far too light for that.

But what about a less narrowly defined group, such as committed skeptics of the medical establishment? Might they have a complaint? Again, either such skeptics are dogmatic skeptics, who will be skeptical come what may, in which case they are not likely to have their minds changed, or they are not, in which case Click-Gap will be – if anything – aiding them in their pursuit of the truth by filtering out low-quality information.

Let’s now turn to freedom. Some users may not like what Click-Gap does, but it does not dominate them, rob them of resources to effectuate their desires, or diminish their capacity to act as agents. Under the policy, users can still post what they were able to post before, follow whomever they were able to follow before, and so on. Now, Click-Gap does interfere with the freedom of purveyors of low-quality content on Facebook, but that is a separate matter (one that we will deal with later). We are not arguing that Click-Gap impedes no one’s freedom; we are arguing that the freedom it does impede is not socially valuable.

This raises a complication: Since purveyors are users, one might argue that the interference condition does apply in the case of Click-Gap. It is true that the interference condition is met in this special case, but this is beside the point. In the special case where the interference condition is met, the improvement condition (the condition that the intervention is taken because it will improve the welfare of the person whose freedom or autonomy is affected) is not met. The intervention is not taken to shield those sharing low-quality content from their own content. Rather, it is to shield potential recipients from that information.

So Click-Gap does not meet the interference and improvement conditions simultaneously. Hence, it would not be an instance of paternalism on the standard account.

But the standard account of paternalism has important limitations. The view of paternalism we wish to advance here addresses those limitations. On that conception, Click-Gap is an instance of paternalism. The reason we do not adopt the standard definition of paternalism is because the improvement condition itself is flawed. To see why, consider Smoke Alarm:

Smoke Alarm.Footnote 17 Molly is worried about the safety of her friend, Ray. Molly knows that there is no smoke alarm in Ray’s apartment and that he tends to get distracted while cooking. Molly thinks that if she were to suggest that Ray get one, he would agree. But – knowing Ray – she does not think he would actually get one. She thinks that if she were to offer to buy him one, he would refuse. She buys him one anyway, thinking that he will accept and install the already bought alarm.

Molly’s gifting a smoke alarm does not meet the interference condition. Ray adopts no attitudes from which he is alienated, and he is left free to do what he wills. Yet – intuitively – Molly acts paternalistically toward him, at least to the extent that she acts to contravene his implicit choice not to install a smoke alarm in his apartment.

Our primary reason for rejecting the standard definition is its susceptibility to counterexamples like Smoke Alarm. But there are other issues. As Shane Ryan convincingly argues, there are also issues with the non-consent and improvement conditions.Footnote 18

Contra the non-consent condition (the condition that P φ’s without the consent of S): we seem to be able to act paternalistically even when the paternalism is welcomed. Ryan drives this point home with the example of a Victorian wife who has internalized the sexist norms of her culture and wills that her husband make her important decisions for her.Footnote 19 The husband’s handling of his wife’s affairs is paternalistic, even though she consents to it.

Contra the improvement condition (the condition that P φ’s just because it will improve the welfare of S), we can act paternalistically when we fail to improve anyone’s welfare.Footnote 20 Suppose the new alarm lulls Ray into a false sense of security, which results in even more careless behavior and a cooking fire. This is not enough to show that Molly’s gesture was not paternalistic. All that seems to be required is that she thought that buying the smoke alarm would make him better off. Whether it does is beside the point.

For these reasons, we adopt the following account of paternalism from Ryan (2016):

Paternalism*: P acts paternalistically toward S by φ’ing iff the following conditions are met:

  • Insensitivity: P does so irrespective of what P believes the wishes of S might be.

  • Expected improvement: P does so just because P judges that φ’ing might or will advance S’s ends (S’s welfare, interests, values or good).Footnote 21

By this definition, Click-Gap could qualify as a paternalistic intervention, so long as it is motivated by Facebook’s judgment that improving people’s epistemic lot improves their welfare.

Now, the connection between one’s epistemic lot and one’s welfare is contested and complicated. It is certainly true that having knowledge – say, about where the lions, tigers, and coronaviruses are, and thus how to avoid them – is often conducive to welfare. But it is not clear that knowledge is always conducive to welfare. Sometimes knowledge is irrelevant to our ends: Consider Ernest Sosa’s example of knowing how far two randomly selected grains of sand in the Sahara are from one another.Footnote 22 Likewise, sometimes knowledge can be detrimental to our ends: Consider Thomas Kelly’s example of learning how a movie ends before you see it,Footnote 23 or Bernard Williams’s case of a father whose son was lost at sea but who, for his own sanity, believes – however improbably – that his son is alive somewhere.Footnote 24

For these reasons, we will couch the rest of our discussion in terms of epistemic paternalism:

Epistemic Paternalism: P acts epistemically paternalistically toward S by φ’ing iff the following conditions are met:

  • Insensitivity: P does so irrespective of what P believes the wishes of S might be.

  • Expected epistemic improvement: P does so just because P judges that φ’ing might or will make S epistemically better off.

Following Kristoffer Ahlstrom-Vij we will understand agents as epistemically better off when they undergo epistemic Pareto improvements with respect to a question that is of interest to them (where epistemic Pareto improvements are improvements along at least one epistemic dimension of evaluation without deterioration with respect to any other epistemic dimension of evaluation).Footnote 25
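The Pareto structure of this definition can be rendered schematically. In the sketch below, the dimension names and numeric scores are illustrative assumptions; Ahlstrom-Vij’s account is not committed to any particular list of dimensions or to numerical measurement, only to the ordering: better on at least one dimension, worse on none.

```python
# Schematic rendering of an epistemic Pareto improvement:
# better along at least one epistemic dimension of evaluation,
# worse along none. Dimension names and scores are illustrative.

def is_epistemic_pareto_improvement(before: dict, after: dict) -> bool:
    """True iff `after` beats `before` on some dimension and trails on none."""
    improved = any(after[d] > before[d] for d in before)
    deteriorated = any(after[d] < before[d] for d in before)
    return improved and not deteriorated


before = {"accuracy": 0.6, "justification": 0.5, "coherence": 0.7}
gain = {"accuracy": 0.8, "justification": 0.5, "coherence": 0.7}
trade = {"accuracy": 0.8, "justification": 0.4, "coherence": 0.7}

is_epistemic_pareto_improvement(before, gain)   # True: one gain, no losses
is_epistemic_pareto_improvement(before, trade)  # False: accuracy traded
                                                # against justification
```

The second case illustrates why the definition is demanding: an intervention that makes an agent more accurate at the cost of less justified beliefs does not count as making them epistemically better off on this account.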

Despite failing to meet the traditional conception of paternalism, Click-Gap is an instance of epistemic paternalism. Adam Mosseri, Vice President of Facebook’s News Feed, has stated that Click-Gap is part of an effort to make users better informed.Footnote 26 In other words, the policy is in place so as to make users epistemically better off (i.e., expected epistemic improvement is met). Mosseri takes it that Facebook has an obligation to stage interventions like Click-Gap in order to fight the spread of misinformation via Facebook products. In explaining this obligation, Mosseri states that “all of us – tech companies, media companies, newsrooms, teachers – have a responsibility to do our part in addressing it.”Footnote 27 The comparison to teachers is apt. Teachers often must be epistemically paternalistic toward their students. That is, they often must deliver information irrespective of the wishes of their students, just because it will epistemically benefit their students. Like teachers, Facebook won’t tailor its delivery of information to the exact wants of its constituency. That is, it won’t abandon Click-Gap in the face of pushback or disable it for users who do not want it. So insensitivity is met. This is, in part, due to the kind of pressure Facebook is reacting to when it takes part in interventions like Click-Gap, such as pressure from the public at large and governmentsFootnote 28 to fight the spread of fake news.

Now that we have articulated why Click-Gap is epistemically paternalistic, let’s now turn to the moral question: Is it permissible?

We will address this question by exploring it from two vantage points: those of Facebook users and of purveyors. Let’s start by looking at the intervention from the perspective of Facebook users. What claims might users have against the policy? We have already argued that the policy is not a threat to autonomy or freedom. Further, it is not plausible that the intervention will harm users. Given this – and the fact that the intervention is driven by noble aims – it is hard to see how users could reasonably reject Click-Gap.

What about the purveyors?

They might claim that, unlike users, this policy does undermine their autonomy or freedom. But it is difficult to see how much (if any) weight can be given to this claim. Start with autonomy. Purveyors are not fed any attitudes from which they could be alienated or which undermine their epistemic competency, so any claims from procedural independence will lack teeth. Claims from substantive independence will also miss the mark. Limiting persons’ ability to expose others to misinformation does nothing to undermine means of social and relational support or create impediments to one’s ability to exercise de facto control over one’s life. Hence, this kind of epistemic paternalism undermines neither psychological nor personal autonomy.

Complaints from freedom fail in similar fashion. Click-Gap does introduce constraints on effectuating the desires of purveyors. But more is needed to show that this makes Click-Gap wrong. The claim that it is wrong to place any constraint on freedom is clearly false, and one our Chapter 5 account rejects. Our account says that morally considerable freedom is quality of agency, and one’s quality of agency is not diminished by limitations on the ability to disseminate misinformation. That’s because exercising autonomy requires the ability to advance one’s interests and abide fair terms of social cooperation. Sullying the epistemic environment may advance one’s interests, but it is not consonant with fair terms of social cooperation. Further, purveyors still are left otherwise free to promote their ideas on the site. One way of doing this – posting content that does well on Facebook and not elsewhere – has been made less effective, but they are left free to promote their ideas by other means.

Finally, purveyors might say they have an interest impeded by the intervention. Given that their content is still allowed on Facebook and that Click-Gap leaves open many avenues for promoting content, the interest is only somewhat impeded. Moreover, it is not a particularly weighty interest. And even though the interest is impeded, there is an open question of whether such impediments are justified. We think they are, once we consider the reasons that speak in favor of the intervention. We turn to those next.

It is reasonable to think that Click-Gap will prevent significant harms to individuals. Policies like this have proven quite effective. Consider Facebook’s 2016 update to the Facebook Audience Network policy, which banned fake news from ads.Footnote 29 As a result of this ban, fake news shared among users fell by about 75 percent.Footnote 30 Since fake news is a driver of harmful movements, such as vaccine hesitancy, there is a strong consideration in favor of the policy. After all, those who wind up sick because of vaccine hesitancy are significantly harmed.

Harms are not the only things to consider. Policies like Click-Gap also support user autonomy. Consider the following:

A growing body of evidence demonstrates that consumers struggle to evaluate the credibility and accuracy of online content. Experimental studies find that exposure to online information that is critical of vaccination leads to stronger anti-vaccine beliefs, since individuals do not take into account the credibility of the content […]. Survey evidence […] shows that only half of low-income parents of children with special healthcare needs felt “comfortable determining the quality of health websites” […]. Since only 12% of US adults are proficient in health literacy with 36% at basic or below basic levels […], Fu et al. (2016) state that […] “low-quality antivaccine web pages […] promote compelling but unsubstantiated messages [opposing vaccination].”Footnote 31

Interventions like Click-Gap are an important element of respecting users’ autonomy. The intervention, if successful, will protect users from internalizing attitudes that would be inauthentically held.

Click-Gap is thus a policy that all parties affected could reasonably endorse. Further, Facebook must engage in policies like Click-Gap: Users who adopt unwarranted beliefs because of fake news and individuals who contract illnesses because of vaccine hesitancy have a very strong claim against Facebook’s taking a laissez-faire approach to combating fake news on its site.

Interventions like Click-Gap, then, are not only permissible; they should be common. Such interventions, however, are limited. Click-Gap might be able to pop some users’ epistemic bubbles, but it is unlikely to dismantle sturdier structures such as echo chambers. So there is good reason to look into what we may do to chip away at these structures, a topic to which we now turn.

6.2 Dismantling Echo Chambers

In fall 2017, Reddit – a social media site consisting of message boards (“subreddits”) based on interests (e.g., science, world news, gaming) – banned r/incels,Footnote 32 a message board for “incels,” people who have trouble finding romantic partners.Footnote 33 The group was banned for hosting “content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people.”Footnote 34 We take it that the ban was justified; banning a group is an intrusive step, but such intrusions can be justified when the stakes are high, such as when physical harm is threatened. Here, we would like to ask what Reddit might have done before the stakes were so high and things got so out of hand.Footnote 35 We begin with some history.

The term “incel,” which was derived from “involuntary celibate,” was coined by “Alana Boltwood” (a pseudonym the creator of the term uses to protect her offline identity) in the early 1990s as part of Alana’s Involuntary Celibacy Project, an upbeat and inclusive online support group for the romantically challenged.Footnote 36 However – as of this writing – “incel” has lost any associations with positivity or inclusiveness.

In the 2000s and early 2010s, Alana’s Involuntary Celibacy Project inspired the founding of other incel websites, several of which were dominated by conversations that were “a cocktail of misery and defeatism – all mixed with a strong shot of misogyny.”Footnote 37 Here are some representative comments from one such site, Love-Shy.com:

The bulk of my anger is over the fact that virtually all women are dishonest to the point that even they themselves believe the lies they tell.

The reality, and I make no apologies for saying this, is that the modern woman is an impossible to please, shallow, superficial creature that is only attracted to shiny things, e.g. looks and money.

By some point in the early 2010s, “incel” and “involuntary celibate” were yoked to the negative, misogynistic thread of inceldom. One turning point was a highly publicized murder spree in Isla Vista in 2014, where Elliot Rodger – a self-identified incel – murdered six college students as part of a “revolution” against women and feminism.Footnote 38 Incels are now so strongly associated with misogyny and violence that the Southern Poverty Law center describes them as part of the “online male supremacist ecosystem” and tracks them on the center’s “hate map” (a geographical map of hate groups in America).Footnote 39

An online petition that called for banning r/incels described the group as “a toxic echo chamber for its users, […] a dark corner of the internet that finally needs to be addressed.”Footnote 40 The petition details the problematic content on r/incels:

Violence against women is encouraged. Rape against women is justified and encouraged. […] Users often canonize Elliot Rodger, […] [who is] is often referred to as “Saint Elliot” with many praising his actions and claiming that they would like to follow in his path.Footnote 41

Recall that we have described echo chambers as structures that, like epistemic bubbles, leave relevant sources of information out but, unlike epistemic bubbles, actively discredit those sources.Footnote 42 Let us now ask: Was r/incels in fact an echo chamber, as the petition claims?

We think so. r/incels did not just create a space where people with similar ideas about women and feminism congregated. It actively left dissenting voices out. As a petition to have r/incels banned states, “the moderators [of r/incels] have allowed this group to become the epicenter for misogyny on Reddit, banning users that disagree with the hate speech that floods this forum.”Footnote 43

Dissent was not the only reason users were banned; some were excluded simply for being identified as women. It is difficult to run a rigorous study – it has been noted that “the community [of incels] is deeply hostile to outsiders, particularly researchers and journalists”Footnote 44 – but polls have found that r/braincels (another popular incel subreddit, which was banned in 2019) was nearly all men.Footnote 45 These demographics were maintained in part through banning; it has been reported that in r/braincels, women were banned “on sight.”Footnote 46 All the while, outsiders (derisively referred to as “normies”Footnote 47) and women (“femoids,”Footnote 48 “Stacys,”Footnote 49 and “Beckys”Footnote 50) were demeaned in the conversations they were excluded from.

This is disconcerting for many reasons, not least of which is the vulnerability to misinformation about women and dating manifest in users drawn to groups like r/incels. Consider the stories of the pseudonymous “Abe” and “John,” each of whom seems to be a typical incel.Footnote 51

Abe, 19, is a lifelong loner who claims to have once dated someone for a month. Abe turned to the internet for support. There he found a cadre of people who were happy to reinforce his belief that the problem was his looks (this is a common incel trope – that our romantic futures are determined by superficial, genetically encoded traits such as height, the strength of one’s jawline, and length of one’s forehead), and “how manipulative some women can be when seeking validation” (this reflects another trope – that women are shallow, opportunistic, and cruel).Footnote 52

One helpful way to understand this process comes from Alfano, Carter, and Cheong’s notion of technological seduction, which we first encountered in the previous chapter.Footnote 53 The core idea is that people can encounter ideas that fulfill psychological needs and are consistent with personal dispositions and be attracted – seduced – into reading, listening, and watching more related ideas. This often happens in a way that ramps up or becomes more extreme. Users become seduced into a kind of self-radicalization. They need not have had an antecedent belief in the seductive ideas to start identifying with them. The Abe example exhibits these characteristics. He came to the incel community predisposed to think he was an especially bad case and that women were cruel. The community was happy to indulge these thoughts. He then spent more time in the community and began to adopt even more fatalistic views about his dating prospects and more cynical views about women.

John, like Abe, turned to incel groups for support due to feelings of isolation. He too thinks that immutable features of his appearance have doomed him to a life of romantic isolation:

Most people will not be in my situation, so they can’t relate. They can’t comprehend someone being so ugly that they can’t get a girlfriend […] What I noticed was how similar my situation was to the other guys. I thought I was the only one in the world so inept at dating.Footnote 54

The truth, of course, is that many – if not most – people can relate to these feelings. As Beauchamp notes, “All of us have, at one point, experienced our share of rejection or loneliness.”Footnote 55 But when socially isolated young men congregate around the idea that they are uniquely a bad case, that anyone who says otherwise is an ideological foe, and when they can exclude perceived ideological foes from their universe of information (as well as anyone whom their conspiracy theories scapegoat and stereotype), the result is a toxic mix of radicalizing ideas and people vulnerable to their uptake. Note that John, too, seems to be a victim of technological seduction.

So what might we do to ameliorate this while respecting the autonomy of the members of groups like r/incels? In what follows, we investigate two epistemically paternalistic approaches: one that involves making access to alternative points of view salient, and another that involves making the barriers of the echo chamber itself more porous.

6.2.1 Access to Reasons

Cass Sunstein discusses a number of remedies to echo chambers and filter bubbles that involve providing access to reasons that speak against the ideology of the chamber or bubble.Footnote 56

One such remedy involves the introduction of an “opposing viewpoint button,” inspired by an article in which Geoffrey Fowler argues that Facebook should add such a button to News Feed. This button would, in Fowler’s words, “turn all the conservative viewpoints that you see liberal, or vice versa.”Footnote 57 This would enable users to “realize that [their] […] news might look nothing like [their] […] neighbor’s.”Footnote 58 Such a button might not make very much sense in a subreddit, but a variation of it might. Perhaps Reddit could offer dissenting groups the opportunity to have a link posted to a subreddit’s menu that would take users to a statement or page that outlines an opposing point of view.

Or, perhaps, instead of a link to a statement, subreddits could have links to deliberative domains, “spaces where people with different views can meet and exchange reasons, and have a chance to understand, at least a bit, the point of view of those who disagree with them.”Footnote 59

Whether it is via a link to a statement or a deliberative domain, both proposals involve adding an option to access reasons from the opposing side. Were either taken unilaterally and in the spirit of improving users’ epistemic lot, they would be instances of epistemic paternalism.

Now we can ask, were Reddit to explore this option of adding access to reasons – either through an opposing viewpoint button or link to a deliberative space – would it be permissible?

We think so, and we will explain why in familiar fashion. The relevant perspective from which to view the intervention is that of the denizens of r/incels. The intervention would not limit the freedom or autonomy of any of the members of the group, nor would it harm them. They are left free to have whatever discussions they please, post whatever they’d like to, and so on. If their minds are changed by the intervention it will be through the cool exchange of reasons, a process of changing their minds from which they cannot be alienated.

6.2.2 Inclusiveness

While the proposals under the banner “access to reasons” are promising and permissible, they might not go very far in addressing the issue of echo chambers. It’s likely that many users simply wouldn’t take advantage of the opportunity to use the links. And if they did, and the experience changed their mind, they would likely leave or be expelled from the echo chambers they belonged to. So, while the above proposals may help an individual user escape an echo chamber, the “access to reasons” proposals – on their own – are likely not enough.

Let us, then, explore a proposal that may offer further assistance. At the moment, the moderators of subreddits (and similar entities such as Facebook groups and so on) are free to ban users from their discussions at their discretion. We saw earlier that women were banned from r/braincels “on sight.”Footnote 60 This, clearly, does not help the community’s distorted view of women. The power to ban gives moderators the ability to form and maintain echo chambers, as it gives them the power to literally exclude certain voices from their discussions.

Another class of interventions, then, might be aimed at limiting this power. What might this look like?

Sites like Reddit could give their users some entitlement not to be excluded from subreddits strictly for belonging to a protected class, and this could be accomplished by modifying moderators’ privileges to ban users at their own discretion. The site could, for example, discourage discrimination on the basis of protected attributes by stating that it is behavior that is not allowed on the site and (partially) enforcing this by creating some kind of appeals process for bans or suspending moderators who violate the policy.

As a supplement or alternative, a similar system could be set up for bans that do not result from breaking any site-wide or explicitly stated group rules. The idea here is that groups that have moderators who want to ban ideological foes would have to at least do so openly or not at all. The hope here is that many groups would not be okay with this as an explicit policy, reducing or eliminating cases where moderators have an unofficial policy of banning ideological foes.

Anyone familiar with Reddit might object to this suggestion on practical grounds, saying that the site is too large and anonymous for this to work. There are roughly 2 million subreddits in existence, and some individual subreddits have tens of millions of users.Footnote 61 Further, users are typically anonymous, and it is very easy to make new accounts. As a practical matter, the objection goes, such a change to the site is just not feasible.

Reddit’s scale and design do present practical difficulties for these proposals, but it does not make them unworkable. There are various ways in which the proposals can be implemented at scale. For example, the site could – for practical reasons – rule that appeals can only be made by certain accounts, such as accounts that have existed for more than a certain amount of time and have been verified. And this rule could, of course, be enforced using algorithms. Penalties for frivolous appeals could also be part of the policy. The site could also consider limiting investigations by only investigating moderators when patterns of appeals appear, for example, once a moderator has racked up a certain number of appeals from verified users in a certain time period. This, too, could be managed algorithmically to make the solution scalable.
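The gating rules just described can be stated precisely enough to be automated. The sketch below is one way they could fit together; the specific thresholds (account age, appeal count, time window) and the function names are illustrative assumptions, not an actual Reddit policy or API.

```python
# Sketch of algorithmic gating for a ban-appeals process.
# All thresholds below are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

MIN_ACCOUNT_AGE = timedelta(days=90)   # only established accounts may appeal
APPEALS_TO_INVESTIGATE = 10            # appeals against one moderator ...
WINDOW = timedelta(days=30)            # ... within this window trigger review


def may_appeal(account_created: datetime, verified: bool,
               now: datetime) -> bool:
    """Only verified accounts older than the minimum age may file appeals,
    which blunts ban evasion via freshly created throwaway accounts."""
    return verified and (now - account_created) >= MIN_ACCOUNT_AGE


def moderators_to_investigate(appeals, now: datetime) -> set:
    """Given (moderator_id, timestamp) pairs from eligible appeals,
    flag moderators who accumulate a pattern of recent appeals rather
    than investigating every individual complaint."""
    recent = defaultdict(int)
    for mod_id, ts in appeals:
        if now - ts <= WINDOW:
            recent[mod_id] += 1
    return {m for m, n in recent.items() if n >= APPEALS_TO_INVESTIGATE}
```

The design choice worth noting is that nothing here adjudicates any individual ban; the algorithm only filters who may appeal and surfaces patterns for human review, keeping the scalable parts automatic and the judgment-laden parts manual.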

Assuming that the practical objection could be addressed, we can then ask: Are these policies permissible?

We think so. To show this, let’s look at the policy from the point of view of those who might have complaints about it: the moderators. What complaints might moderators have? Their autonomy isn’t being undercut, nor are they being harmed. So it does not seem that they could make complaints from autonomy or harm. However, they are being constrained in what they can do. So, perhaps, they can complain that their freedom is being encroached upon, specifically in the form of diminished quality of agency. This, it seems, is the only plausible complaint they might have, so it is the one we will explore.

But we can now return to a familiar refrain: that a course of action will limit an agent’s freedom is not enough to show that it is wrong. Legal bans on murder limit our freedom, but not wrongfully so. Limiting moderators’ privileges to ban users is, we think, a permissible constraint on their freedom. This is because the complaint from moderators that putting a check on banning limits their freedom is complicated by two factors. One factor is that their freedom to ban users limits users’ freedom to partake in the conversations they are banned from. So their claim to the freedom to ban users butts up against the freedom of the users they will ban. The other factor is that while some bans are unobjectionable – bans made in response to violations of Reddit’s site-wide ban on involuntary pornography, for example – the class of bans we are discussing here is objectionable. Users who have been banned based on their membership in a protected class can reasonably object to those bans and to a system that allows them.

6.3 Conclusion

Since much of the internet information environment is epistemically noxious, there is ample room and opportunity for epistemically paternalistic interventions such as Click-Gap, opposing-viewpoint buttons, and modifications to moderators’ privileges. Hence, many epistemically paternalistic policies can (and should) be a perennial part of the internet information environment. What should we conclude from this? First, that developers should engage in epistemic paternalism as a matter of course. Second, that our focus in evaluating epistemically relevant interventions should not be on whether such actions are epistemically paternalistic, but rather on how they relate to other values (such as well-being, autonomy, and freedom).

Footnotes

5 Freedom, Agency, and Information Technology

1 The standard theory of agency holds that agency involves intentional action and its causes. For the classic versions of this sort of account, see Davidson, Essays on Actions and Events; Goldman, Theory of Human Action; Bratman, Intention, Plans, and Practical Reason.

2 See also Hieronymi, “Two Kinds of Agency.”

3 On noninterference, see Berlin, “Two Concepts of Liberty”; Butt, Rectifying International Injustice: Principles of Compensation and Restitution between Nations; Rothbard, For a New Liberty: The Libertarian Manifesto; Hayek, The Constitution of Liberty.

4 One might object here that having stairs, while lacking ramp access or alternative means of voting, is indeed a way in which one is physically prevented from voting. That is true, and the line between negative freedom (lack of external constraints on one’s actions) and positive freedom (the ability to carry out one’s intentions) is blurry. See Cohen, “Freedom and Money.”

5 Pettit, Just Freedom, 50.

6 Pettit, 50.

7 Raz, The Morality of Freedom; Feinberg, “Autonomy.”

8 Pettit, “Freedom as Antipower,” 577.

9 MacCallum, “Negative and Positive Freedom.”

10 Christman, “Saving Positive Freedom,” 80.

11 Eric Clapton, for instance, has estimated that during the worst periods of his addiction, he was spending $16,000 per week. See “A Life in Twelve Bars.”

12 To some degree we are taking our cue from Anderson, Private Government, chapter 2.

13 BBC Staff, “Russian Businessman Buys Chelsea.”

14 Segal, “A Russian Oligarch’s $500 Million Yacht Is in the Middle of Britain’s Costliest Divorce.”

15 Cramb, “Scotland’s Most Expensive Sporting Estate Bought by Russian Vodka Billionaire.”

16 Guardian staff, “Sistema Boss Arrested in Russia on Money-Laundering Charges.”

17 Aslund, Russia’s Crony Capitalism.

18 Danaher, “Freedom in the Age of Algocracy”; Kagan “The Additive Fallacy.”

19 Schwartz, The Paradox of Choice.

20 Thanks to Sarah Worley and Dana Nelkin for drawing our attention to this point.

21 AAA Foundation for Traffic Safety, “Aggressive Driving | AAA Exchange.”

22 BBC Staff, “S Korean Dies after Games Session.”

23 Eyal, Hooked: How to Build Habit-Forming Products, 8–9.

24 However, see Lee, “Duolingo Redesigned Its Owl to Guilt-Trip You Even Harder.”

25 Rosenblat, Uberland, 98–100.

26 Rubel, Castro, and Pham, “Agency Laundering and Information Technologies.”

27 Rosenblat, Uberland, chapter 4.

28 Wells and Cullen, “The Uber Workplace in Washington, D.C.”

29 Wells and Cullen, 47.

30 Wells and Cullen, 47.

31 Simon, Models of Man: Social and Rational-Mathematical Essays on Rational Human Behavior in a Social Setting.

32 Kahneman, Knetsch, and Thaler, “Experimental Tests of the Endowment Effect and the Coase Theorem.”

33 Pham and Castro, “The Moral Limits of the Market: The Case of Consumer Scoring Data.”

34 Yaffe-Bellany, “Equifax Data-Breach Settlement”; Abrams, “Target to Pay $18.5 Million to 47 States in Security Breach Settlement.”

35 Benoliel and Becher, “The Duty to Read the Unreadable.”

36 Kar and Radin, “Pseudo-Contract & Shared Meaning Analysis,” 1140.

37 Bakos, Marotta-Wurgler, and Trossen, “Does Anyone Read the Fine Print?”

38 Fox-Brewster, “Londoners Give up Eldest Children in Public Wi-Fi Security Horror Show.”

39 Hern, “I Read All the Small Print on the Internet and It Made Me Want to Die.”

40 Arthur, “Facebook Forces Instagram Users to Allow It to Sell Their Uploaded Photos.” Instagram reversed this policy after public outcry.

41 Neal Mohan, YouTube’s Chief Product Officer, claimed this at the 2018 CES convention.

42 Orphanides, “Children’s YouTube Is Still Churning Out Blood, Suicide and Cannibalism.”

43 Maheshwari, “On YouTube Kids, Startling Videos Slip Past Filters.”

44 Roose, “The Making of a YouTube Radical.”

45 Roose.

46 Lewis, “Fiction Is Outperforming Reality.”

47 Rubel, “Privacy and Positive Intellectual Freedom,” 399–401.

48 See the Protecting Children from Abusive Games Act.

49 The name comes from casino slang, where it refers to gamblers known to bet large amounts of money. And just as casinos compete for the biggest high rollers, app developers depend heavily on those users who spend the most money: A recent Swrve survey showed that about 0.15 percent of mobile users contributed approximately half of all in-app purchases in “freemium” games.

50 Satz, Why Some Things Should Not Be for Sale: The Moral Limits of Markets; Castro and Pham, “Is the Attention Economy Noxious?”

51 Netherlands Gaming Authority, “Study into Loot Boxes: A Treasure or a Burden?”; Belgian Gaming Commission, “Loot Boxes in Three Video Games in Violation of Gambling Legislation.”

52 Kar and Radin, “Pseudo-Contract & Shared Meaning Analysis,” 1167.

53 Kar and Radin, 1167.

54 Alfano, Carter, and Cheong, “Technological Seduction and Self-Radicalization,” 25.

55 Lanier, Ten Arguments for Deleting Your Social Media Accounts Right Now.

56 Raz, The Morality of Freedom, 377–378.

57 Gorin, “Do Manipulators Always Threaten Rationality?”

58 Barnhill, “What Is Manipulation?”; Yeung, “‘Hypernudge’: Big Data as a Mode of Regulation by Design”; Susser, Roessler, and Nissenbaum, “Online Manipulation: Hidden Influences in a Digital World”; Lanzing, “‘Strongly Recommended’ Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies.”

59 Feinberg, Harm to Self: The Moral Limits of the Criminal Law, 3:189.

60 Wood, “Coercion, Manipulation, Exploitation,” 19–20.

61 Baron, “The Mens Rea and Moral Status of Manipulation,” 108.

62 Lanzing, “‘Strongly Recommended’ Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies,” 565.

6 Epistemic Paternalism and Social Media

1 Sunstein, #Republic: Divided Democracy in the Age of Social Media, 1 (emphasis added).

2 Shearer, “Social Media Outpaces Print Newspapers in the U.S. as a News Source.”

3 U.S. Department of Justice, “Report on the Investigation into Russian Interference in the 2016 Presidential Election, Volume I (‘Mueller Report’)”; U.S. Department of Justice, “Report on the Investigation into Russian Interference in the 2016 Presidential Election, Volume II (‘Mueller Report’).”

4 Caldwell and Moe, “Republican Justin Amash Stands by Position to Start Impeachment Proceedings” (emphasis added).

5 Nguyen, “Echo Chambers and Epistemic Bubbles.”

6 Pariser, The Filter Bubble, 9.

7 Kaiser and Rauchfleisch, “Unite the Right?”

8 Shearer and Matsa, “News Use across Social Media Platforms 2018.”

9 World Health Organization, “Ten Health Issues WHO Will Tackle This Year.”

10 Hussain et al., “The Anti-Vaccination Movement.”

11 Joy, “What’s Causing the 2019 Current Measles Outbreak?”

12 “News Feed is a personalized, ever-changing collection of photos, videos, links, and updates from the friends, family, businesses, and news sources you’ve connected to on Facebook.” Facebook, “News Feed.”

13 Rosen and Lyons, “Remove, Reduce, Inform.”

14 Rosen and Lyons.

15 Dworkin, “Paternalism.”

16 This is not to say that the architecture of social media sites cannot influence users in important ways. We take it that the “technological seduction” that sites like YouTube exhibit can encroach on autonomy by, for example, seeding and nurturing convictions that either cannot be endorsed upon reflection or have been seeded and nurtured through methods that agents are alienated from. See Alfano, Carter, and Cheong, “Technological Seduction and Self-Radicalization.”

17 This case is inspired by an example from Ryan, “Paternalism: An Analysis.”

18 Ryan.

19 Ryan.

20 Ryan.

21 Ryan.

22 Sosa, “For the Love of Truth?”

23 Kelly, “Epistemic Rationality as Instrumental Rationality: A Critique.”

24 Williams, “Deciding to Believe.”

25 Ahlstrom-Vij, Epistemic Paternalism.

26 Mosseri, “Working to Stop Misinformation and False News.”

27 Mosseri.

28 Germany is proposing a law to fine Facebook for advertisements containing fake news. See Olson, “Germany Wants Facebook to Pay for Fake News.”

29 Wingfield, Isaac, and Benner, “Google and Facebook Take Aim at Fake News Sites.”

30 Chiou and Tucker, “Fake News and Advertising on Social Media: A Study of the Anti-Vaccination Movement.”

31 Chiou and Tucker.

32 The URL of a subreddit begins with “reddit.com/r/.” For this reason, many subreddits, such as the subreddit for world news, are referred to as “r/worldnews.”

33 Beauchamp, “Incels: A Definition and Investigation into a Dark Internet Corner.”

34 “Update on Site-Wide Rules Regarding Violent Content.”

35 Of course, it is possible that in this particular case Reddit could not have prevented r/incels from becoming so toxic; perhaps that was inevitable. Nevertheless, we would like to explore some steps Reddit may have taken as a preventative measure.

36 Beauchamp, “Incels: A Definition and Investigation into a Dark Internet Corner.”

37 Baker, “What Happens to Men Who Can’t Have Sex.”

38 Glasstetter, “Shooting Suspect Elliot Rodger’s Misogynistic Posts Point to Motive.”

39 Janik, “‘I Laugh at the Death of Normies’: How Incels Are Celebrating the Toronto Mass Killing.”

40 Cochran, “Shut Down the Subreddit r/incels.”

41 Cochran.

42 Nguyen, “Echo Chambers and Epistemic Bubbles.”

43 Cochran, “Shut Down the Subreddit r/incels” (emphasis added).

44 Beauchamp, “Incels: A Definition and Investigation into a Dark Internet Corner.”

45 Beauchamp.

46 Beauchamp.

47 “[A]nyone who is broadly neurotypical, average-looking and of average intelligence.” See Squirrel, “A Definitive Guide to Incels.”

48 “A portmanteau of ‘female’ and ‘humanoid’ or ‘android,’ this term is used to describe women as sub-human or non-human. Some incels go further and use the term ‘Female Humanoid Organism,’ or FHO for short.” See Sonnad and Squirrell, “The Alt-Right Is Creating Its Own Dialect. Here’s the Dictionary.”

49 Women considered to be “air-headed, unintelligent, beautiful and promiscuous.” See Squirrel, “A Definitive Guide to Incels.”

50 “[T]he ‘average’ woman […] who ‘will likely die [sic] her hair green, pink, or blue after attending college’ and ‘posts provocative pictures because she needs attention’ despite being a ‘6/10.’” See Jennings, “Incels Categorize Women by Personal Style and Attractiveness.”

51 The stories of both John and Abe can be found in Beauchamp, “Incels: A Definition and Investigation into a Dark Internet Corner.”

52 Beauchamp.

53 Alfano, Carter, and Cheong, “Technological Seduction and Self-Radicalization.”

54 Beauchamp, “Incels: A Definition and Investigation into a Dark Internet Corner.”

55 Beauchamp.

56 Sunstein, #Republic: Divided Democracy in the Age of Social Media.

57 Fowler, “What If Facebook Gave Us an Opposing-Viewpoints Button?”

58 Fowler.

59 Sunstein, #Republic: Divided Democracy in the Age of Social Media.

60 Beauchamp, “Incels: A Definition and Investigation into a Dark Internet Corner.”

61 Reddit Metrics, “Top Subreddits.”
