
Part IV - The Responsibilities of Agents

Published online by Cambridge University Press:  17 May 2021

Alan Rubel
Affiliation:
University of Wisconsin, Madison
Clinton Castro
Affiliation:
Florida International University
Adam Pham
Affiliation:
California Institute of Technology

Algorithms and Autonomy: The Ethics of Automated Decision Systems, pp. 135–188
Publisher: Cambridge University Press
Print publication year: 2021
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

7 Agency Laundering and Information Technologies

There have been numerous examples of automated decision systems going wrong in consequential ways. In 2018, an automated driving system operated by Uber failed to recognize a bicyclist, whom it struck and killed.Footnote 1 In 2012, the Target Corporation received international attention when, based on predictive analytics and an automated advertising system, it sent fliers targeting women seeking prenatal products to a minor before she had revealed her pregnancy to one of her parents.Footnote 2 In 2017, the news organization ProPublica was able to use Facebook’s automated system to make an ad buy targeting users with anti-Semitic affiliations.Footnote 3 The system even suggested additional racist categories to make the ad purchase more effective. As we have discussed, COMPAS yields different results depending on the race and ethnicity of defendants.Footnote 4 A common element in these stories is that the technology itself plays an important role. The existence and use of technological systems are a key part of the explanation of the events. Whether (and how) the technologies are relevant in assessing moral responsibility is considerably more complex.

Our discussion so far has focused on how persons deserve to be treated in light of their autonomy and conditions necessary for people to act autonomously. This chapter instead considers the moral agency of those who deploy information technologies (as collectors of big data, users of algorithmic decision systems, developers of social media sites, and so on).

We will argue that a type of moral wrong that can arise in using automated decision tools is “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action, and letting that technology or process forestall others from demanding an account for bad outcomes that result. Laundering is not unique to information technologies. However, we argue that the concept of agency laundering helps us understand important moral problems in several recent cases involving algorithmic systems. The moral concerns are not merely that values are instantiated within such systems, but that intermingling moral wrongs with morally permissible processes undermines a fundamental facet of responsibility itself.

We begin, in Section 7.1, with an account of responsibility to ground our arguments. In Section 7.2, we develop our account of agency laundering and explain its moral salience. In Sections 7.3 through 7.6, we offer several case studies that allow us to apply and further explain our conception. One is Facebook’s targeted advertising system and its response to complaints that it allows users to make racist ad purchases. This is a clear case of agency laundering. Next, we consider Uber’s use of algorithmic systems in its driver-management apps and show how we can distinguish agency laundering from structurally similar cases that do not involve laundering. We then turn to public-sector uses, showing how school districts can launder agency in teacher-evaluation cases and how courts can avoid agency laundering by clarifying responsibility for decision systems. In Section 7.7, we explain how agency laundering is distinct from other concepts, especially the “responsibility gap.”Footnote 5

7.1 Agency and Responsibility

Our argument turns on the concept of responsibility. For a person to launder his or her agency requires that he or she be a moral agent in the first place, and being a moral agent requires that one be in some sense morally responsible. In this section, we first distinguish several facets of responsibility and how they relate to one another, which helps structure our understanding of agency laundering in the following section. Then, we offer a substantive account of responsibility, which will ground our understanding of the moral wrongs associated with agency laundering.

7.1.1 The Structure of Responsibility

In Punishment and Responsibility, H. L. A. Hart describes a ship captain who gets drunk, wrecks their ship, is convicted of criminal negligence, and whose employer is held financially liable for the loss of life and property.Footnote 6 Hart’s allegory and the distinctions he offers are useful in grounding our account.Footnote 7

To begin, a person might be responsible in virtue of a role. In Hart’s example, a person is responsible for a ship’s safety in virtue of the fact that they are the captain. A person’s role requires them to anticipate events in some domain and to take actions to avoid bad outcomes in that domain.Footnote 8 A ship captain should anticipate bad weather and obstacles and plot a course accordingly. Parents should anticipate their children’s needs and plan ways to address them. Financial advisors should anticipate client needs and economic forecasts and guide clients’ actions suitably. Although one’s well-defined social roles (ship captain, parent, financial advisor) may give rise to specific responsibilities, the idea of role responsibility is broad enough to encompass general obligations one has as a moral agent. So, for example, adults have a responsibility to operate heavy machinery carefully, regardless of their specialized social roles; community members have a responsibility to pay applicable taxes; and people engaged in commerce have a responsibility to bargain in good faith.Footnote 9

Second is causal responsibility or the link between an agent’s action (or omission or dispositionFootnote 10) and an event that results from it. Chris Kutz calls this explanatory responsibility, as causation generally explains an event.Footnote 11 Any explanation of the shipwreck that ignores the captain and the captain’s drinking would be inadequate.

Causal responsibility in this sense does not entail moral responsibility. That is because of the third facet, capacity responsibility, which relates to whether an agent has the requisite capacities to be responsible for an outcome. One may lack capacity responsibility due to pathology or pre-reflective, non-deliberative action. In Hart’s example, it is possible that a ship captain’s drinking was due to extreme, clinical anxiety, in which case their intoxication is something they caused, but for which they lacked the required capacity to be responsible.Footnote 12 Alternatively, one may lack capacity due to lack of access to relevant information. That is, an agent must be in a position to access certain facts about their actions and their significance in order to be retrospectively morally responsible for them.

We can sum up the structure so far as follows. For a person to be (retrospectively) morally responsible – which is to say morally liable – for some event or outcome, they must have some role responsibility (either a specific duty that attaches to a social role or a general duty as a moral agent) and they must be causally responsible for the outcome (which is to say an action of theirs is a key part of the explanation of the outcome). Moreover, they must have capacity responsibility. That is, their action must not be the result of some pathology or pre-reflective action, and they must in some sense have access to relevant information.Footnote 13 For the remainder of the chapter we use “moral responsibility” and “moral liability” interchangeably, and they will refer to this conjunction of role responsibility, causal responsibility, and capacity responsibility.

7.1.2 The Content of Responsibility

With these distinctions in mind, we can turn to the content of moral responsibility. In other words, once we have determined that an actor has some kind of role responsibility, is causally responsible for an outcome, and has the requisite capacity to be responsible, there is a further question about what this responsibility means. There are two key features of the view we endorse here. First, moral responsibility is fundamentally relational and grounded in social roles. Second, being morally responsible for some action means that one is accountable for (and should be able to provide an account of) their reasons for that action.

The view that moral responsibility is fundamentally relational owes a great deal to Peter Strawson’s seminal article, “Freedom and Resentment.”Footnote 14 Holding a person responsible by forming reactive attitudes about him or her (e.g., appreciation, admiration, disdain) is a feature of interpersonal relationships in which one regards the other as a participant. We might resent the captain for getting drunk and steering their ship onto the rocks, or we might admire them for their skill in guiding the ship to safety during a storm. However, we do not form such reactive attitudes toward entities that are not participants in relationships; resentment and admiration are not reasonable reactions to the actions of infants or machines. If an autopilot algorithm successfully steers the ship to safety, it would be appropriate to be impressed, baffled, or happy, but not to feel respect and admiration for the algorithm itself.

Despite these important insights, precisely what (if anything) justifies reactive attitudes is a further question. As Marina Oshana points out, the mere fact (if it is) that people are committed to the appropriateness of their reactive attitudes toward (some) people for (some of) their actions cannot suffice to explain why those reactions are appropriate. We do not call a person morally responsible just because others regard them as responsible. Rather, “we call a person an appropriate subject of reactive attitudes because the person is [morally] responsible.”Footnote 15

While keeping in mind the important social function of responsibility attributions, our view aligns with the constellation of views for which an agent’s moral responsibility turns on whether they are answerable or accountable for their actions. Angela Smith, for example, argues that for an agent to be morally responsible for something is for the agent to be “open, in principle, to demands for justification regarding that thing.”Footnote 16 And blame is in effect a demand that the agent “justify herself.” Oshana’s view is related. She articulates an accountability view according to which a person is responsible if, and only if, “it ought to be the case that the person account for her behavior.” Giving such an account requires a person to provide a statement of their “beliefs or intentions” for their actions. “Thus,” Oshana explains, “‘X is accountable for Y’ can be unpacked as ‘It is appropriate that X explain her intentions in doing (or being) Y’.”Footnote 17

The key insight of the accountability views is that they identify not only who is morally responsible but what that responsibility involves. Specifically, it is justifiable to ask the responsible agent to account for their actions, omissions, or dispositions. They should be able to explain their intentions, reasons, and actions in terms that other relationship participants can understand.

7.2 Agency Laundering

With our discussion of responsibility in mind, we can return to the chapter’s central argument. Using an automated process to make decisions can allow a person to distance themself from morally suspect actions by attributing the decision to the system, thereby laundering their agency. Put slightly differently, invoking the complexity or automated nature of a decision system to explain an outcome allows a party to imply that the action is something for which they are not morally responsible.

Compare money laundering.Footnote 18 Where one has such large amounts of illicit cash that spending it or placing it into legitimate financial instruments would be suspicious, one can launder it by mingling it with other, legitimate streams of income so that the illicit cash appears legal. For example, one might add the illegal cash to money received in a legal, cash-dependent business.Footnote 19 The bad thing (income from an illicit source) is hidden by the existence of some other, similar phenomenon. To be clear, we are not making an argument by analogy; decisions are not like cash. Rather, the point is that it is possible to obscure the source of responsibility for actions and make them appear unsuspicious by mingling them with other actions.

Consider a minor example (“Chair”). Suppose that Cheese State University vests department chairs with control over curriculum. A chair and several members of their department would like to get rid of phlogiston studies (“P-studies”) because they think it is unimportant. The chair could do this unilaterally by removing courses, reassigning instructors, and altering degree requirements but wants to avoid the wrath of the department phlogistologists. The chair therefore delegates curriculum decisions to a committee of people who they know want to eliminate P-studies. When P-partisans complain, the chair responds that it was the committee’s decision, though the chair knew from the beginning what that decision would be. By impaneling a committee to ensure the results the chair wanted, the chair obscures their own role in the decision. The committee appears to be the relevant power, though in fact the power remained with the chair.

There are several features of Chair to address initially. First is that the chair had legitimate institutional authority to make the decision, and if they had moved to eliminate P-studies unilaterally it would have happened. The chair’s institutional authority is a form of role responsibility for their department’s curriculum. The chair has the responsibility to anticipate educational needs, department resources, student demand, scholarly trends, and so forth, and to ensure that the department’s offerings adequately address them. And the chair’s de facto power to alter the curriculum is a form of causal responsibility; when the curriculum changes, the chair’s actions are an essential part of the explanation why.

Second, although the chair has power to make the decision, they draw in a separate body by giving the committee some degree of causal responsibility. Because the curriculum change would not occur without the committee’s work, the committee is an essential part of the explanation for the curriculum change. It is not the only cause, as it is mixed with the chair’s actions. Third, when the chair forms the committee, they imply it is neutral, would weigh evidence fairly, and might act in a way that the chair doesn’t anticipate. But that’s a ruse – ex hypothesi, the chair knows that the committee will act just as the chair wishes.

Fourth, the chair’s actions obscure their causal responsibility with respect to the curriculum. The chair is able to obscure the fact that they orchestrated the result by making the committee partially causally responsible (i.e., a key part of the explanation) for the result. Fifth, although the chair appears to fulfill their responsibility in shepherding the curriculum, appointing the committee obscures the chair’s designs to eliminate P-studies.

The following is a definition of agency laundering that incorporates these features of Chair. An agent (a) launders their agency where

(1) a is morally responsible with respect to some domain X, and

(2) a ensures that b (some process, person, or entity) has some causal responsibility with respect to X, and

(3) a ascribes (implicitly or explicitly) morally relevant qualities to b’s actions (e.g., relevance, neutrality, reliability), and

(4) in virtue of (2) and (3), a obscures the scope of their causal responsibility with respect to X, and

(5) in virtue of (4), a fails to adequately account for events within X for which they are morally responsible.

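Purely as an illustration of the definition’s conjunctive structure (a minimal sketch of our own; the field names are invented, and reducing each condition to a boolean flattens judgments that are in reality contested and matters of degree), the five conditions can be rendered as a checklist:

from dataclasses import dataclass

@dataclass
class LaunderingAssessment:
    # One judgment per condition in the definition above; illustrative only.
    responsible_in_domain: bool       # (1) a is morally responsible with respect to X
    intermediary_causal_role: bool    # (2) a ensures b has some causal responsibility in X
    ascribes_moral_qualities: bool    # (3) a ascribes morally relevant qualities to b's actions
    obscures_own_causal_role: bool    # (4) in virtue of (2) and (3), a obscures its causal role
    fails_to_account: bool            # (5) in virtue of (4), a fails to adequately account

    def is_agency_laundering(self) -> bool:
        # The definition is conjunctive: laundering requires all five conditions to hold.
        return (self.responsible_in_domain
                and self.intermediary_causal_role
                and self.ascribes_moral_qualities
                and self.obscures_own_causal_role
                and self.fails_to_account)

# Chair satisfies all five conditions; a structurally similar delegation in which
# condition (5) fails (the "Democratic Chair" variation discussed below) does not.
chair = LaunderingAssessment(True, True, True, True, True)
democratic_chair = LaunderingAssessment(True, True, True, True, False)
print(chair.is_agency_laundering(), democratic_chair.is_agency_laundering())  # True False
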
This definition only gets us so far. It sets out the structure of agency laundering, which tracks and incorporates the structure of moral responsibility from Section 7.1.1. However, it does not explain the moral problem of agency laundering itself (if there is one). That’s our next task.

There are several ways in which the chair may have acted wrongly. One possibility is that it is unjustifiable to eliminate phlogistology in any case. But let’s leave that aside and assume that it’s permissible to eliminate it based on its substance and the context. More important is that the chair’s ascription of morally relevant qualities to the committee is misleading, and they have therefore deceived people about the process involved. Regardless of whether getting rid of P-studies is justifiable, the chair’s obscuring their reasons and intentions in impaneling the committee does not appear justifiable. Others with whom the chair has a relationship have a claim to understand such an important facet of their professional lives.

A still deeper moral problem is that the chair’s action allows them to avoid the core demand of responsibility, which is to provide an account. Regardless of whether the chair is meeting their role responsibilities with respect to the curriculum, they are forestalling others’ ability to demand an account for the chair’s actions within a domain of their legitimate concern.Footnote 20 This is the defining feature of agency laundering, and it turns on the substantive account of responsibility given in Section 7.1.2. There, we explained that responsibility is first about social relations. We hold others responsible for their actions in part by forming reactive attitudes, and such reactive attitudes are key in understanding responsibility. However, our view is that moral responsibility is also a matter of whether agents are open to demands to justify their actions and whether it is appropriate for others to demand an account of their reasons and intentions.

Now we come full circle. Agency laundering involves a kind of misdirection (as in (2)–(4)). But, crucially, the misdirection undermines others’ ability to demand reasons for an agent’s actions. In other words, the laundering part of agency laundering cuts straight to the heart of what responsibility is by undermining the ability of others to ask the agent to provide an account.

Department members will be unable to ask the chair for their reasons and intentions in eliminating phlogiston studies, because the chair’s actions look like formation of a committee that (apparently) deliberated about and then eliminated the subfield. Department members would reasonably believe that all the chair has to provide is an account of delegation to the committee. But an account that focused on the committee would not be an account of the chair’s actual reasons and intentions, which are about engineering an outcome, not initiating a process to weigh things.Footnote 21

It is worth explaining the role of condition (5) a bit further. What matters about (5) is that it distinguishes cases like Chair from structurally similar cases of delegation. Consider a variation in which the chair thinks P-studies should be eliminated, and they know that there are so few P-sympathists that any full committee will have a majority of P-eliminationists. Nonetheless, the chair delegates the curriculum decision to a committee because of their commitment to inclusive, democratic department governance. As in Chair, (1)–(3) obtain. And (4) plausibly obtains, as the chair’s causal responsibility in forming the committee may obscure their causal role in deciding to review P-studies. But the key difference is that the committee formation in “Democratic Chair” is not a sham, constructed so that the chair can avoid having to account for their actions regarding the curriculum. Just as in the original example, department members will reasonably believe that the action for which the chair should provide an account is the formation of the committee. But in Democratic Chair, that is the only action for which the chair should provide an account.

So that’s the account. Let’s turn to some cases. These will help us understand how predictive, automated decision systems can launder agency.

7.3 Facebook and Anti-Semitic Advertising

In 2017, ProPublica published a report detailing an investigation into Facebook’s targeted advertising practices.Footnote 22 Using Facebook’s automated system, the ProPublica team found a user-generated category called “Jew hater” with more than 2,200 members. While two thousand Facebook users choosing to identify as “Jew hater” in their profiles seems like a lot, Facebook’s platform helpfully informed the ProPublica team that it was too small an audience for an effective ad buy. To help ProPublica find a larger audience (and hence make a better ad purchase), Facebook suggested a number of additional categories. For example, it suggested including the category “Second Amendment,” presumably because of some overlap in users’ choices of interests in their profiles. ProPublica used the platform to select other profiles displaying anti-Semitic categories, and Facebook approved ProPublica’s ad with minor changes.

Facebook’s platform also allows clients to target ads by excluding profiles based on age, geography, and racial and ethnic categories. For example, advertisers can target users in specific places and income ranges while excluding people with specific “ethnic affinities.” Many of these affinities are generated automatically, based on content users and their friends have liked or shared. In some cases it is not the category that creates a problem, but the purpose of the ad. Targeting an ad by age makes sense in some contexts (life insurance, toys) but is discriminatory in others (job recruitment).Footnote 23
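
To make the mechanics concrete, consider a hypothetical sketch (ours, not Facebook’s actual system or API; the threshold, category names, and suggestion rule are invented) of how an unreviewed pipeline can expand a too-small, user-generated category into a larger ad audience by suggesting overlapping categories:

# Hypothetical sketch; none of this reflects Facebook's real interfaces.
def suggest_expansions(seed, categories, min_audience):
    """If the seed category is too small for an effective ad buy, rank the
    other categories by how many users they share with the seed."""
    audience = categories[seed]
    if len(audience) >= min_audience:
        return []  # the seed audience is already large enough
    overlaps = {name: len(audience & users)
                for name, users in categories.items() if name != seed}
    return sorted(overlaps, key=overlaps.get, reverse=True)  # most overlap first

# Toy data: user-generated interest categories mapped to (tiny) sets of user ids.
categories = {
    "niche seed category": {1, 2, 3},
    "overlapping category": {2, 3, 4, 5, 6},
    "unrelated category": {7, 8, 9},
}
print(suggest_expansions("niche seed category", categories, min_audience=5))
# ['overlapping category', 'unrelated category']

Nothing in such a pipeline inspects what the categories mean; whatever users generate, odious categories included, can be surfaced and amplified automatically.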

When ProPublica revealed the anti-Semitic categories and other news outlets reported similarly odious categories,Footnote 24 Facebook responded by explaining that algorithms had created the categories based on user responses to target fields (e.g., answers to questions about education and hobbies). It also pledged to address the issue. But Facebook was loath to claim it had responsibility. Chief Operating Officer Sheryl Sandberg claimed in a public response that “[w]e never intended or anticipated this functionality being used this way.”Footnote 25 That is no doubt true, though Facebook wishes to both sell advertising and employ as little labor as possible to monitor how that advertising functions.

Is it agency laundering? An agent (Facebook) launders its agency where

(1) Facebook has moral responsibility with respect to targeted advertising, and

(2) Facebook ensures that its algorithmic advertising process has some causal responsibility with respect to targeted advertising on its platform, and

(3) Facebook ascribes morally relevant qualities to its algorithmic advertising process’s actions, and

(4) in virtue of (2) and (3), Facebook obscures the scope of its causal responsibility with respect to targeted advertising on its platform, and

(5) in virtue of (4), Facebook fails to adequately account for events within a domain for which it is morally responsible: specifically, the way in which its advertising platform helps target advertising to racists.

Each of these conditions appears to obtain. Certainly, Facebook has causal responsibility with respect to targeted advertising on its platform. A more difficult question is whether Facebook has role, or prospective, responsibility. The clearest sense in which they have role responsibility is that they have de jure authority over their platform, and they have a general responsibility to be good members of the broad community of people who use the platform. More specifically, they have (in our view) a specific responsibility to ensure that their platform does not make it easy for racists to find an audience to whom they can advertise.

The claim that Facebook has such specific moral responsibilities will no doubt be controversial. Others may argue that Facebook has a moral responsibility to be a mere conduit of communication among members.Footnote 26 That is unconvincing for a couple of reasons. For one, this case is about advertising. Any claims about how Facebook should structure information between end users tell us nothing about Facebook’s responsibility vis-à-vis advertisers. Moreover, Facebook already acts as if it has responsibilities with respect to both content and advertising. It has community standards, by which it judges and removes content, and it restricts certain kinds of advertising.Footnote 27 In any case, agency laundering only requires that Facebook have general responsibilities within this domain.

Facebook’s categories are generated in part by automated systems. It takes a hands-off approach, letting users generate profile information, letting an algorithm pick out characteristics from user profiles, letting advertisers peruse those categories, and letting an algorithm suggest compatible categories to build better ad target groups. Thus, Facebook ensures that an algorithmic process has causal responsibility (i.e., is a key part of the explanation) for what ads appear to whom on Facebook’s platform. That’s condition (2).

Facebook’s business model includes allowing advertisers to target groups of people narrowly and effectively. It does this in a way that avoids the labor costs associated with human approval of ad targets or human oversight of ad categories and purchases. In so doing, Facebook implies that its algorithmically generated categories and suggestions are relevant to advertisers (otherwise, advertisers would have no reason to purchase ads). And the fact that one can place ads based on those categories without oversight implies that Facebook believes (at least implicitly) that whatever ads served to whatever audience are appropriate. These are morally relevant qualities, as per our third condition. The algorithms’ causal responsibility and implication that they are appropriate obscure the scope of Facebook’s causal responsibility (condition (4)).

Finally, in automating its advertising process, Facebook is able to claim that it “never intended or anticipated this functionality being used this way.” It effectively distances itself from the fact that a system for which it is (causally and morally) responsible allows noxious (and in the case of discriminatory categories, illegal) advertising. That is, the causal responsibility of the algorithm’s suggestions deflects from Facebook’s causal responsibility in creating a platform that uses the algorithm, minimizes the labor that would be required to better monitor advertising categories, and profits from the automated system. Its attribution of morally salient characteristics (relevance, usefulness) presupposes that its optimization is consistent with Facebook’s responsibilities, though it is not.

Here is where understanding Facebook’s actions as agency laundering is a difference-maker. Conditions (1)–(4) describe several important moral features. But the crux of laundering is condition (5). The fact that Facebook is morally responsible with respect to targeted advertising means that it is appropriate to demand that Facebook provide an account of its intentions and reasons in facilitating racists in easily finding an audience to whom they can advertise. Facebook has inserted an automated procedure into its advertisement purchasing procedure, and it suggests that the algorithms are the natural object to scrutinize rather than Facebook’s reasons and intentions with respect to building a system that deploys them and lets them run with minimal supervision. In doing so, Facebook undermines the central feature of responsibility by deflecting demands for an account of Facebook’s reasons, intentions, and actions in helping racists target their advertising. Hence, the automated process is a mechanism by which Facebook launders its agency.

There are several potential rejoinders to our argument here. One might disagree about what Facebook’s responsibilities are. One might argue instead that it is advertisers and users who bear responsibility for populating Facebook’s categories with racist characteristics. Certainly, it is true that users populating categories with anti-Semitic and other racist content bear responsibility for those actions, and any advertiser targeting ads based on such categories bears responsibility for doing so. But, as in Chair, that others have acted wrongly does not tell us much about Facebook’s responsibility. One might further argue that Facebook has not laundered its agency because it has agreed to address the problem. But the fact that Facebook has indicated an intention to address these problems demonstrates that it is a problem within Facebook’s control.

A related objection concerns the degree, or the severity, of Facebook’s failure to fulfill its responsibilities (assuming that it has some). Perhaps Facebook knew of problems in how its algorithms functioned to allow malignant actions. But perhaps instead it was merely negligent.Footnote 28 This is no doubt an area others will reasonably dispute. It does not matter for our analysis of laundering, though. Facebook laundered whatever degree of agency it had. Moreover, it can launder its agency even if it meets its substantive role responsibilities. That the advertising platform afforded the opportunity to target advertising in a racist way is something for which detailed explanation of intentions, reasons, and actions is warranted.

Another potential objection is that it may well be that no particular Facebook contractor or developer acted with discriminatory intent, alleviating any potential moral responsibility any of them might have for the outcome.Footnote 29 However, Facebook’s role responsibility is not reducible to any particular individual developer within Facebook. Rather, the company’s responsibility is better understood as widely distributed across its contractors, employees, and other stakeholders. Moreover, Facebook’s platform is complex, and the consequences of its operations over time are impossible to predict. Coeckelbergh and Wackers argue that organizations deploying such complex, vulnerable systems have obligations to manage their operations not only legally, but with a certain positive “imagination” regarding systemic crises or other harms.Footnote 30 In other words, it is unjustifiable to simply let such complex systems run their course and cause harm.

There is a further, related question about whether the conception of responsibility we have outlined here is properly attributable to collectives. There is significant philosophical debate about collective responsibility, and we cannot do it justice here. But we can note two things. First, the accounts of responsibility we outline in Section 7.1 need not be limited to individual wills. Certainly, we do have reactive attitudes toward collections of people, and those targets may be apt. Further, it seems plausible to attribute reasons to groups, in which case it seems plausible that such a group may be responsible in the sense that it ought to be the case that the collective be accountable. Second, even if it is the case that a collective’s responsibility is reducible to the responsibility of its individual members, this would imply that those individuals have laundered their agency. In any case, whatever responsibility there is, Facebook’s reliance on algorithms to do work and to explain its failures is (on the conception outlined here) an instance of agency laundering.

7.4 Uber and Driver Management

Another private-sector example shows how our concept of agency laundering can distinguish between structurally similar cases. The ride-hailing company Uber has received substantial social, regulatory, and academic criticism based on its AI-driven, algorithmic systems. Uber uses such systems to map routes, track passengers, monitor drivers, anticipate demand, steer driver behavior, and (at one point) identify and deceive regulators. Many of these uses have been criticized elsewhere on the grounds that they are deceptive, unfair, opaque, or even illegal.Footnote 31 Our task here, though, is to consider whether any are instances of agency laundering and, if so, whether analyzing them as agency laundering sheds light on moral concerns with Uber’s practices.

Recall two issues that we touched on in Chapter 5 concerning how Uber uses algorithmic systems to keep its drivers working. One way is by providing reminders of individual drivers’ goals. For example, the Uber app might display a message that the driver is very close to their goal of earning $50 for their shift, which may induce them to take more riders. Similarly, Uber at times sends drivers their next ride requests before they have delivered their current rider. This creates a kind of “queue effect,” much like video platforms that keep people watching by immediately starting the next episode of a series.Footnote 32 A number of critics – including drivers – object to these practices on the grounds that they rely on nonrational mechanisms or are manipulative.Footnote 33

Another way that Uber gets drivers to keep working involves the prospect of dynamically priced fares, which allows drivers to increase their per-hour earnings by driving during high-demand/low-supply periods. When there are lots of passengers seeking rides and relatively few drivers working, Uber will charge higher (surge) prices, and drivers thus earn more. Uber’s driver app will often prompt drivers to work at times that Uber anticipates will be high demand. So it might say that, for example, New Year’s Eve will probably have surge pricing.Footnote 34 However, such prompts do not guarantee surge pricing, and drivers do not know when they accept a ride whether it will be surge-priced. In some cases, the Uber app estimates surge pricing, but fares during that period are normal, either because demand does not materialize or because enough drivers are working to offset the demand.Footnote 35
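
A toy model (ours, not Uber’s actual pricing algorithm, which is not public) illustrates why a prompted surge is not guaranteed: if the multiplier tracks the ratio of demand to supply, then the very drivers who respond to the prompt push supply up and the multiplier back toward the base fare.

# Illustrative toy model only; the rule and numbers are invented.
def surge_multiplier(ride_requests, active_drivers, base=1.0):
    """Toy rule: the fare multiplier rises with the demand/supply ratio,
    floored at the base fare."""
    if active_drivers == 0:
        return base
    return max(base, round(ride_requests / active_drivers, 2))

print(surge_multiplier(300, 150))  # 2.0  -- the surge a prompt might anticipate
print(surge_multiplier(300, 290))  # 1.03 -- many drivers have responded to the prompt
print(surge_multiplier(300, 320))  # 1.0  -- the surge has disappeared by the time they arrive

On this toy picture, the prompt is partly self-undermining: inducing drivers to chase the surge is part of what makes the surge vanish, a dynamic we return to below.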

These two cases are structurally similar: app-based mechanisms that prompt drivers to work somewhat more than they would have otherwise. Recall our argument that only surge pricing interferes with people’s valuable freedoms. Relatedly, only the surge pricing appears to be a case of agency laundering. To see this, begin by running both through our understanding of laundering.

(1) Uber has moral responsibility with respect to fielding drivers, and

(2) Uber ensures that its algorithm has some causal responsibility with respect to fielding drivers, and

(3) Uber ascribes morally relevant qualities to its app-based prompts to drivers, and

(4) In virtue of (2) and (3), Uber obscures the scope of its causal responsibility with respect to fielding drivers, and

(5) In virtue of (4), Uber fails to adequately account for events within a domain for which it is morally responsible: specifically, the way in which its interface induces driving.

Conditions (1) and (2) are clear enough. Although Uber claims to be a technology company merely connecting riders and drivers through a platform, it nonetheless plays a large role in getting people to both drive and ride. It enters into contractual relationships with drivers and riders, maintains standards for drivers and equipment, subjects drivers to background checks, adjudicates disputes, and so forth. And there is no question that its algorithms are a key part of the explanation of which drivers are driving when. Uber ascribes morally relevant features to the algorithms: that they reflect drivers’ own goals, that they are reliable, that they are based on a neutral assessment of facts on the ground (condition (3)).

The differences in the cases concern conditions (4) and (5).

Begin with the case of goal-reminders and queuing effects. It is difficult to see how Uber obscures its causal responsibility in incentivizing driving when it uses these tactics to spur drivers into taking more rides. Certainly, Uber is drawing on (or even exploiting) behavioral psychology, and behavioral psychology is an essential part of the explanation of drivers’ decisions to drive. But that fact, and the fact that Uber has set up a system in which algorithms instantiate such strategies, is not obscured.

Now consider condition (5). Here, too, it is difficult to see how Uber fails to adequately account for events within a domain for which it is morally responsible. As a provider of ride-hailing services, Uber has an interest in keeping enough drivers on the road, and it is using a straightforward tactic to promote this interest. Further, Uber has been clear about the practice. In a recent New York Times article, a spokesperson for Uber describes goal-reminding and queuing as ways to incentivize driving.Footnote 36 As far as we can tell, Uber does not launder its agency when it uses goal-reminders and queuing.

One plausible counterargument here is that, at least in extreme cases, using such tools undermines drivers’ wills so much that it obscures the scope of Uber’s causal responsibility. Perhaps the interface is sufficiently gamified that drivers exhibit hallmarks of addiction, or perhaps the quality of drivers’ wills is so degraded that decisions to drive do not count as drivers’ own. In that case, Uber’s causal responsibility would be far greater than it appears and any adequate account of Uber’s responsibility would include an explanation of how it circumvents drivers’ wills. That possibility is worth both empirical and philosophical examination. Nonetheless, at least weaker forms of nudging seem well within the range of responsible employer behavior and not cases of agency laundering.

Contrast the goal-reminders and queuing with the surge-pricing case. Uber uses machine learning techniques to predict high-demand/low-supply times and uses those predictions to prompt drivers to work. This does seem to obscure Uber’s causal responsibility in fielding drivers, per condition (4). The judgment that surge pricing is likely to occur appears to be an inference about how the world outside Uber is operating, and Uber is merely reacting to it. Indeed, in comments pertaining to the phenomenon of surge pricing, Travis Kalanick (Uber’s cofounder and former CEO) said, “We are not setting the price. The market is setting the price.”Footnote 37 In fact, Uber is causally responsible for setting up a system in which there are pay differentials (where driving at surge times is more attractive to drivers) and then using those facts to induce driving. Surge periods are not a natural feature of the world that Uber measures, but a period defined and deployed by Uber.

More important, though, is condition (5). By predicting surge pricing and signaling the likelihood of surge pricing to drivers, Uber simultaneously exploits surge pricing and makes it less likely. In other words, by using surge pricing as an inducement to drivers seeking a better wage, Uber helps ensure that supply matches demand more closely. Thus, it creates for drivers reasonable expectations of better pay and fails to meet them. Then, when the prices are not offered, Uber tells the drivers it is the market that is making the decision, not Uber. But this is a failure to account for the situation a driver finds themself in when, for example, they have driven to Times Square on New Year’s Eve under Uber’s advice that there will be a surge and then find that the surge has disappeared. As Alex Rosenblat observes, “When drivers follow this advice and find that they have been dispatched to pick up a passenger for a nonpremium-priced ride, meaning that surge pricing has disappeared, they feel tricked.”Footnote 38 It is not the market that reached out to the driver to quell the surge. It is not the market that decided how Uber’s payment system works. Rather, it is Uber that sets up a system where a driver who responds to its enticements may not get surge rates if the campaign to get drivers to an area has worked. And this is the action within its domain of moral responsibility for which Uber owes an account. Uber’s claim that it is simply the market’s doing is an inadequate account, satisfying condition (5). Thus, Uber launders its agency.

So our conception of agency laundering is sensitive enough to distinguish between different uses of algorithmic systems to influence driver behavior. The next question is whether analyzing each in terms of agency laundering adds something of value beyond simply analyzing Uber’s responsibilities to its employees. We believe that it does. The laundering analysis emphasizes the fact that use of tools (committees, bureaucracies, technologies) may be a way to simultaneously violate duties and undermine accountability. By calling it laundering, we can make clear what is happening. More importantly, our argument picks out a discrete moral infirmity, viz., eroding others’ ability to demand that Uber provide an account of its reasons and intentions. In Uber’s case, use of a tool to both predict surge pricing and induce drivers looks similar to use of other prompts. One might be tempted to think of it as a case of nudging (or perhaps of manipulation). But that would miss the fact that by tying the process to a prediction about facts on the ground, Uber can deflect attention from its own responsibility for creating a situation in which it simultaneously predicts surge pricing and makes it less likely.

7.5 VAMs and Teacher Evaluation

Facebook, Uber, and other large technology firms receive substantial attention. It would be a mistake, however, to think that agency laundering is primarily the province of the private sector. The depth and importance of agency laundering may be even greater in public agencies. To demonstrate, we will reexamine Houston Schools’ use of EVAAS. Recall that EVAAS is in practice not auditable for two interrelated reasons. First, each of its scores is so deeply interconnected with the others that the only way to recalculate one score is to recalculate them all. Second, recalculating all of the scores is a complex task and thus very costly.

Despite its shortcomings, Houston Schools defended its use of EVAAS on the grounds that it reliably measures student progress.Footnote 39 But this claim, even if true, is largely irrelevant to the question of whether the use of EVAAS is justified. Student progress and the contributions an individual teacher makes to student progress are distinct quantities. To measure one is not to measure the other. Indeed, the American Statistical Association (ASA) issued a statement in 2014 pointing out that most studies conclude that teachers have only a marginal effect on the test scores that VAMs (such as EVAAS) take as inputs.Footnote 40 So, even if EVAAS reliably measures student progress, this is a poor proxy for teacher effectiveness.
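
To illustrate the proxy problem, consider a toy simulation (ours alone; EVAAS’s actual model is proprietary and far more complex, and the parameters here are invented): if most score growth comes from factors outside the teacher’s control, a “value-added” number computed from that growth can track student progress while telling us little about the teacher.

import random

random.seed(0)

def naive_value_added(teacher_effect, n_students=25):
    """Toy model: each student's score growth is mostly cohort and background
    factors plus noise, with only a small contribution from the teacher; the
    'value-added' score is simply the class-average growth."""
    growth = [random.gauss(5.0, 4.0)    # factors outside the teacher's control
              + teacher_effect          # the (small) teacher contribution
              + random.gauss(0.0, 2.0)  # measurement noise
              for _ in range(n_students)]
    return sum(growth) / n_students

# Ranking teachers by this number largely ranks their cohorts: a teacher with a
# larger teacher_effect can easily score below one with a smaller effect.
for effect in (0.5, 1.0, 1.5):
    print(effect, round(naive_value_added(effect), 2))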

Has the school district laundered its responsibility for firing teachers? We think so.

(1) Houston Schools is morally responsible with respect to hiring, firing, and promoting teachers, and

(2) Houston Schools ensures that EVAAS has some causal responsibility in making those determinations, and

(3) Houston Schools ascribes morally relevant qualities to EVAAS, and

(4) In virtue of (2) and (3), Houston Schools obscures the scope of its causal responsibility with respect to hiring, firing, and promoting teachers, and

(5) In virtue of (4), Houston Schools fails to adequately account for events within a domain for which it is morally responsible: specifically, the “exiting” of teachers deemed ineffective through EVAAS.

In virtue of Houston Schools’ role as an employer, the first condition is met. When Houston Schools implements EVAAS to aid in personnel decisions, it meets the second condition. Houston Schools meets the third condition implicitly by using EVAAS for high-stakes decisions. It meets the third condition explicitly by invoking EVAAS’s reliability in measuring student progress as a reason in favor of using EVAAS. The fourth condition is met when Houston Schools repeatedly refers to one good thing that EVAAS does (measure student progress) to obscure the fact that Houston Schools is implementing a system in which teachers are fired based on measures for which the teachers are not responsible (recall the statement from the ASA mentioned earlier). Finally, the fifth is met because teachers who are fired on account of their EVAAS scores are given a faulty accounting of why they were fired. They are told they are being fired for being ineffective, when, given EVAAS’s flaws, this is likely not the case. Hence, understanding Houston Schools’ actions as laundering shows us that there is something going on beyond lack of transparency; the mechanism of evaluation positively misdirects those who would seek reasons for how teachers are treated. It forestalls teachers’ ability to demand an account for the school district’s actions, within a domain of their legitimate concern.

Note that Houston Schools uses EVAAS while fulfilling its public function of managing an education system. That means that the public has a collective stake in how the system functions and has an interest in the actions Houston Schools undertakes. Hence, the fact that Houston Schools’ laundering makes accountability all the more difficult matters in a way that the accountability of private firms does not; it suggests that use of EVAAS must conform to standards of public reason, rather than aligning only with the isolated wishes of the district.Footnote 41

7.6 COMPAS and Criminal Sentencing

So far, we have described agency laundering in both private-sector and public-sector cases. And in the Uber case we saw how use of algorithmic decision systems will not be agency laundering where the agent does not obscure their causal responsibility for outcomes. We will return to Loomis to demonstrate how a public entity can avoid agency laundering by making clear their moral responsibility for an outcome.

Loomis and COMPAS have been the subjects of significant criticism. However, we think that the Wisconsin Supreme Court’s opinion in the case shows how actors can avoid agency laundering in deploying algorithmic systems. Hence, the case shows that our understanding of agency laundering is not so broad as to be meaningless.

A trial court launders its agency where

(1) the trial court has moral responsibility with respect to sentencing, and

(2) the trial court ensures that COMPAS has some causal responsibility with respect to sentencing, and

(3) the trial court ascribes morally relevant qualities to COMPAS, and

(4) in virtue of (2) and (3), the trial court obscures the scope of its causal responsibility with respect to sentencing, and

(5) in virtue of (4), the trial court fails to adequately account for decisions pertaining to sentencing, specifically the decision to sentence Loomis in the maximum range.

The trial court certainly has moral responsibility with respect to sentencing. But did the trial court ensure that COMPAS had some causal responsibility with respect to sentencing? The judge referenced Loomis’s risk scores and they plausibly had an effect on sentencing. This, though, was only one of the factors the judge described. He also considered important the conduct Loomis admitted as part of the read-in charges and Loomis’s conduct while under prior supervision. Let’s interpret this as giving some degree of causal responsibility to COMPAS. It is, after all, at least plausible that the COMPAS score is a key part of the explanation for Loomis’s sentence. The court’s use of COMPAS implies that it is useful, reliable, and fair, which are morally relevant qualities, per condition (3).

The question of agency laundering in Loomis turns on conditions (4) and (5). Although the judge in the case referenced the COMPAS assessment in his decision, he also indicated that his own judgment (based on Loomis’s conduct and history) led him to a similar conclusion. There is some possibility that the judge was confabulating by ascribing his own reasons to the outcome COMPAS reached. If that’s true, it would not be that the use of COMPAS obscures the trial court’s causal responsibility. Rather, it would be that the court’s description of its reasons obscures the scope of COMPAS’s causal responsibility.

The key issue, though, is whether the trial court fails to adequately account for decisions pertaining to sentencing. Consider the following from the Wisconsin Supreme Court’s decision.

We determine that because the circuit court explained that its consideration of the COMPAS risk scores was supported by other independent factors, its use was not determinative in deciding whether Loomis could be supervised safely and effectively in the community. Therefore, the circuit court did not erroneously exercise its discretion.Footnote 42

The passage makes clear that tools like COMPAS cannot be used alone, and use of such scores has to be supported by other factors that are independent of the tool. Similarly, the court required that courts weigh all relevant factors in order to sentence an individual defendant,Footnote 43 and it prohibited trial courts from using scores to determine whether to incarcerate a person or not, to determine the length and severity of sentence, and to determine aggravating or mitigating factors in sentencing.Footnote 44 And the court required that any presentence investigation report (PSI) that uses a COMPAS report carry a number of warnings about the limitations of such reports.

The supreme court’s Loomis opinion places responsibility squarely on the trial court in using tools like COMPAS. It prohibits trial courts from relying completely on the COMPAS algorithm, and it requires trial courts to use other factors to support any use of risk assessment algorithms. Hence, the court forecloses the ability of trial courts to use algorithms as a way to distance themselves from responsibility. Thus, Loomis addresses condition (5), and it is not a case of agency laundering.Footnote 45

Deploying tools like COMPAS could certainly be a means by which courts (and others in the criminal justice system) can launder their agency. However, the Loomis decision is tailored precisely to avoid that. Hence, it appears to be a good test case for our view. It is the use of an algorithmic system where one does not launder their agency. As a result, it can demonstrate how other actors may fail in their moral responsibilities, even where their actions superficially resemble the Loomis court’s. The court did not forestall others’ ability to demand an account for its actions within a domain of their legitimate concern, but a different court (or different actor within a criminal justice system) might do so by failing to provide its own reasons for decisions.

7.7 Related Concepts and Concerns
7.7.1 The Responsibility Gap

Agency laundering can help shed light on some other concepts related to moral issues in technology. One of these is the “responsibility gap.” In a 2004 article, Andreas Matthias argued that in some cases a technological system may be sufficiently sophisticated that no person or persons are responsible for the outcomes it causes.Footnote 46 The idea is that machine learning systems may be so opaque to human developers and users that it is impossible to predict how those systems will behave. Where such systems cause harm, it may be (on Matthias’s view) a mistake to attribute responsibility to the developer, the owner, or any other person. The rules by which machine learning systems act “are not fixed during the production process, but can be changed during the operation of the machine, by the machine itself.”Footnote 47 These actions do not mesh with traditional accounts of responsibility “because nobody has enough control over the machine’s actions to be able to assume the responsibility for them.”Footnote 48 He provides several examples. One is an elevator system that, having used an AI system to adapt to use patterns over time, leaves an executive stranded and late for a meeting. Another is a machine learning system used to diagnose lung cancer that has a high false-positive rate (and causes emotional and financial stress to people diagnosed). Yet another is an AI children’s toy that, in learning to navigate a new home environment, injures a child.

There has been a great deal of discussion of the responsibility gap in the years since Matthias’s article was first published. Here, we want to illustrate how agency laundering is distinct from responsibility gaps and how it can explain where responsibility fits in the gaps.

Note first that Matthias’s conception of the responsibility gap focuses on an automated system’s causal responsibility for some outcome. In the toy case, Matthias posits that the responsibility gap pertains to the action of knocking over and injuring the child. Our account of agency laundering, however, considers a wider range of actions. Imagine that the toy manufacturer developed, marketed, and sold the toy without fully testing whether it could knock over and injure a toddler. The manufacturer would seem to have causal and role responsibility with respect to whether its toys injure children (condition (1)). It would also ensure that the toy has causal responsibility for whether it injures children (as Matthias describes the case, the child’s injury is explicable only by describing how the toy operates) (condition (2)). By selling the toy, the manufacturer attributes morally relevant qualities to the toy (age appropriateness, safety) (condition (3)). It would also be difficult to provide an adequate account of a toy manufacturer’s distribution of a toy that has the affordances (size, weight, mobility, unpredictability) to knock over a small child (condition (5)).

The question, then, is whether the manufacturer obscures the scope of its causal responsibility with respect to the injury (condition (4)). Nothing in the example (either Matthias’s version or ours) suggests that it does. However, if the manufacturer were to posit a responsibility gap (e.g., by saying that the machine learning process was opaque, and hence the manufacturer could not anticipate injury), that would fulfill condition (4) and be an instance of agency laundering. In other words, invoking the idea of a responsibility gap is a mechanism by which people may launder their agency.

There are other possibilities as well. One might set up a system expressly to avoid being held to account. Such a scenario would appear to be a form of preemptive laundering, and it would be advanced by whatever responsibility gap it creates. A different possibility is that one creates a gap between one’s actions and outcomes for good reasons but, in so doing, ensures that there will be a responsibility gap. Suppose, for example, that an agency is responsible for assessing how likely persons accused of a crime are to reoffend. To address a known problem of arbitrary assessments by human decision-makers, it deploys a system similar to COMPAS (while acknowledging biases similar to those in COMPAS). This would look like a case similar to Democratic Chair in Section 7.2. The agency’s use of the system would not be a means to avoid having to account for some other action. Rather, it is the decision to deploy the system that requires an account, and that decision is not obscured.Footnote 49

We leave open whether there are genuine cases of responsibility gaps – that’s a topic others have addressed more thoroughly than we can do here.Footnote 50 But our analysis of agency laundering requires thinking about role and causal responsibilities of people who deploy technologies like those Matthias contemplates. That forces one to consider a wider range of actions than the operations of an AI system and can help distinguish genuine responsibility gaps from responsibility obfuscation and agency laundering.

7.7.2 Bias Laundering, Masking, and Humans in the Loop

At a 2016 conference sponsored by the Society for the Advancement of Socio-Economics, Maciej Cegłowski stated that “machine learning [is] an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias.”Footnote 51 Although Cegłowski does not spell out what laundering is or why it matters morally, there do seem to be some points of similarity and difference worth noting. What Cegłowski’s comment picks out is the ability to obscure something important and deflect disapproval. So if an algorithm (e.g., for predictive policing) is built on criminal justice data, which is itself based on over-policing Black communities, the algorithm may be a mathematically neutral tool that reflects biases that already exist. The tool can then appear neutral tout court to the extent that one fails to examine the bias in the underlying data sources.

However, our understanding of agency laundering is a general account of laundering, and it is broad enough that it encompasses bias laundering. Agency laundering can obscure many different kinds of wrongs, and limiting the concept to bias laundering would fail to capture them. Likewise, there is no need to link laundering tightly with machine learning or algorithmic decision-making. As we’ve explained, any process or socio-technical system can be a mechanism for laundering. More importantly, our account explains just how laundering is related to responsibility, both structurally and morally. Finally, in our view the thing that is laundered is typically agency, and agency is typically the appropriate target of analysis.Footnote 52 One might at times act as an agent and yet launder responsibility, but in either case agency and responsibility are the things laundered. Thus, while we agree that machine learning can be a means of disclaiming responsibility, just what it means to “disclaim responsibility” and just what it is that one is responsible for are difficult questions to answer. This chapter is an attempt to answer them.

A similar concept is “masking” or the intentional use of algorithmic systems to obfuscate discrimination.Footnote 53 Barocas and Selbst describe masking as a way of using data mining to return discriminatory results while hiding whatever discriminatory intent one might have behind an information system. Certainly, masking could be part of laundering. However, the other elements of laundering – relevant role responsibilities, ascription of morally relevant qualities, tension with fundamental aspects of responsibility (viz., accountability) – are not necessarily elements of masking.

There is a third important issue related to socio-technical systems, responsibility, and moral liability. The distinction between systems with humans in and out of the loop is well established. Systems employing humans in the loop include things like automated cars that provide for human override, autonomous weapons systems that require humans to approve strikes, and content moderation in which humans help teach algorithms what content is objectionable and make decisions in cases for which automated systems are not yet adept. Control in such systems is itself a complicated concept, and there is an active area of scholarship surrounding whether (and if so, how) there can be meaningful human control even for systems that leave humans out of particular decision loops.Footnote 54

Of particular importance for our project is that having humans in the loop may itself obscure causal and moral responsibility. Ben Wagner notes that there are many purportedly automated systems that rely on humans to take an active role, fix mistakes, or replace system decisions.Footnote 55 However, he argues that the actual human role may be compromised by the design of the system. For example, the humans involved may have insufficient time to make decisions, may grow weary of or inured to a process, or may lack sufficient training and experience to make good decisions. He outlines a number of criteria important in determining whether systems are “quasi-autonomous,” such that humans in the loop “have responsibility but little agency” (or, in our usage, humans have causal but not capacity responsibility, and hence cannot be morally liable).

Madeleine Elish considers similar scenarios in which human actors have a causal role within socio-technical systems (including AI).Footnote 56 She argues that responsibility for outcomes may be misattributed to human actors within such systems, creating a kind of “moral crumple zone” that protects the system from attributions of responsibility. In our conception, humans’ causal responsibility could obscure the causal responsibility of a technical system (of course it cannot obscure the moral responsibility of a technical system, since such a system lacks capacity responsibility and hence has no moral responsibility to obscure).

The systems that Wagner and Elish envision are ones that could potentially launder agency (though not necessarily). Suppose, for example, that an autonomous vehicle has a human in the loop, but the human has too little time to respond when needed and causes an accident. That would seem to fulfill conditions (1) and (2): some entity with moral responsibility has ensured a human has causal responsibility. The questions are whether placing a human in the loop attributes morally relevant qualities to the human’s actions (efficacy, perhaps) and whether doing so obscures the causal responsibility of the larger system, thereby failing to adequately account for the moral responsibility of the larger entity. What is key for our view, though, is that the mechanism for laundering need not be technological at all; that is, humans in the loop can be a means of laundering just as well as automation itself.

7.7.3 Concerns

One potential objection to our conception of agency laundering is that it is merely a metaphor and as such does not add a great deal to our ability to analyze and evaluate the relationship between information technologies and responsibility. There are a couple of reasons to think otherwise. Our concept of laundering takes its cue from the idea of money laundering, which is of course metaphorical. Crooks do not literally wash money. Rather, they obscure its sources by mixing it with money from legitimate sources. Hence, whatever actions work to obscure the source of illicit funds also serve to launder those funds. Agency laundering, analogously, is a way of obscuring the source of morally weighty states of affairs by mixing one’s actions with technologies, procedures, or bureaucracies. Part of the value of using the laundering metaphor (for both money and agency) is that it plays a “descriptive role in helping a lay person understand” what the underlying phenomenon is.Footnote 57 That is, a metaphor can help capture the gist of a concept and, in this case, give people an intuitive grasp of the underlying concerns before following the entire argument.

Note, too, that the concept of agency laundering can help us both to make judgments in difficult cases and to explicate antecedent moral wrongs more fully. So, for example, in the Facebook case, it is unclear just what the moral wrong is in using an automated targeted advertising system that bad actors can exploit. It is plausible that Facebook did not act wrongly in developing and using such a system. However, the advertising platform is still within a domain for which Facebook has moral responsibility, and its conflation of its actions with an automated system’s actions undermines the foundation of responsibility, viz., providing an account.

Similarly, the concept of agency laundering can explain why Uber acts wrongly in some cases (surge pricing) but not in others (goal prompts, queuing).Footnote 58 Both actions are within Uber’s domain of responsibility, and both are actions where there is an open question about whether Uber infringes its substantive role responsibility. Our account of agency laundering can help evaluate what, if anything, Uber does wrong. The account may be of particular use in public-facing cases, where organizations have a remit to serve the public and derive legitimacy from public trust and support. In cases like Houston and Loomis, the possibility of socio-technical systems forestalling persons’ abilities to demand an account of organizations’ actions within a domain of legitimate concern is particularly important. Drawing on the metaphor of laundering here helps capture the content of the concept.

A further advantage of our account is that it may help in understanding what kinds of rights to explanation people have in the context of automated or algorithmic decision systems, for example in the GDPR.Footnote 59 As we discussed in Chapter 4, such a right (if there is one) is generally discussed as an individual right in the face of adverse decisions.Footnote 60 But agency laundering is a problem not just for individuals whose interests have been affected. It is also a general problem, and our arguments about laundering and forestalling others’ abilities to demand an account within areas of their legitimate interest extend further. After all, how a massive social media company helps target ads, how an international employer of drivers with millions of users and drivers induces use, how a school district evaluates and fires teachers, and how a criminal justice system wields its power are areas of general legitimate interest, regardless of whether a particular individual has a claim to an explanation of a discrete event.

7.8 Conclusion

Our goals in this chapter were, first, to explain a type of wrong that arises when agents obscure responsibility for their actions. We have outlined this type of wrong and called it “agency laundering.” Our second goal was to draw on several cases to help specify our account of agency laundering. We have argued that some of these (Facebook advertising, Uber’s surge-pricing prompts, Houston Schools’ use of EVAAS) involve laundering and that two (other Uber prompts, use of COMPAS in the Loomis case) do not. Third, we have argued that analyzing these cases in terms of agency laundering both helps us understand the cases and adds something morally. Lastly, we have distinguished agency laundering from other relevant concepts.

We have not given the final word on agency laundering here. One further question concerns the degree to which laundering must be intentional. In other words, can a person who uses a tool to make decisions launder their agency inadvertently? Inadvertent laundering appears compatible with our definition of agency laundering, though the moral importance of such laundering warrants further consideration. Another question concerns how widely the concept of agency laundering applies. A few people with whom we’ve discussed this project have asked whether large-scale social processes (e.g., political events and movements) can serve either as mechanisms or as sites of laundering. Perhaps so, though that would involve sorting through complex issues of causal responsibility and conceptual questions of capacity responsibility.

8 Democratic Obligations and Technological Threats to Legitimacy

8.1 Two New Technologies

In 2011, UCLA anthropology professor Jeff Brantingham launched PredPol, an algorithmic system for predicting and preventing crime. The idea at the heart of PredPol is that crimes follow predictable geographic patterns.Footnote 1 Thus, it may be possible to reduce crime by deploying police resources to places where predictive analytics suggests crime will occur. To facilitate this, PredPol has a graphical interface that overlays a red box (indicating a predicted crime “hotspot”) on top of a city map. This allows police to concentrate on those areas in the hopes of deterring crime.
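PredPol’s actual model is proprietary (its developers’ published work reportedly draws on self-exciting point-process models borrowed from seismology), so the following is only a minimal sketch of the general “hotspot” idea just described: score map cells by recency-weighted counts of past incident reports and flag the top-scoring cells as the “red boxes” toward which patrols are directed. The grid size, decay rate, and incident data below are invented for illustration and should not be read as PredPol’s own parameters.

```python
from collections import defaultdict

CELL = 500  # grid cell size in feet, echoing the 500-foot-square boxes discussed later

def to_cell(x_ft, y_ft):
    """Map a location (in feet) to the grid cell containing it."""
    return (x_ft // CELL, y_ft // CELL)

def hotspot_cells(reports, decay=0.9, top_k=3):
    """Score cells by recency-weighted counts of incident reports and return
    the top-scoring cells as predicted hotspots ("red boxes")."""
    scores = defaultdict(float)
    for x_ft, y_ft, days_ago in reports:
        scores[to_cell(x_ft, y_ft)] += decay ** days_ago  # recent reports count more
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Invented incident reports: (x in feet, y in feet, days since the report)
reports = [(120, 430, 1), (180, 460, 2), (150, 400, 0),
           (2600, 2900, 5), (700, 1900, 3), (140, 470, 1)]
print(hotspot_cells(reports))  # -> [(0, 0), (1, 3), (5, 5)]; the cluster near the origin ranks first
```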

There is evidence that PredPol is effective in some respects, especially in addressing property crime. And it has a degree of support. Time magazine called predictive policingFootnote 2 one of the “50 Best Inventions” of 2011.Footnote 3 Today, PredPol is a national leader in predictive policing technology and dozens of cities across the United States use it.Footnote 4

There are, however, several important criticisms of the technology. One is that its ability to predict property crimes may not translate well to predicting (much less preventing) other crimes. Another concern is that predictive policing in general simply recreates biases in policing practices and that predictive tools will be deployed in ways that harm communities of color, much in the same way that stop-and-frisk policies, pretrial detention, and sentencing do.Footnote 5

Not long after PredPol got off the ground, a different set of algorithmic systems was developing in a way that would have profound implications for electoral politics. The political data analytics firm Cambridge Analytica began building specialized personal profiles and using large datasets collected from social media to stage political influence operations at scale. While the use of profiling and data to influence politics is nothing new, Cambridge Analytica’s actions were novel in several ways. The data on which its operation was built was massive and obtained in intrusive ways (including by gaining access via friends’ permissions in Facebook). In addition, the data was collected under the imprimatur of a prestigious academic institution (Cambridge University). Finally, the kinds of information pushed to users were difficult for others to see, as they were targeted on social media. This left fewer obvious trails as it was happening than, say, television or mail ads. In parallel with the Cambridge Analytica efforts was a Russia-sponsored disinformation campaign, also making use of social media and recommendation systems. This was carried out by an organization called the Internet Research Agency (IRA).

Although PredPol and Cambridge Analytica/IRA address fundamentally different parts of modern life, they both have important implications for democratic governance. PredPol is a tool that helps structure how governments exercise their enormous power to investigate and sanction crime. The ability of the state to visit harms upon its constituents is at the heart of liberal theorists’ commitments to limit state power.Footnote 6 This concern about state-sanctioned harm is sharpened by concerns about unsanctioned state violence that is often unaccountable.Footnote 7 Cambridge Analytica, in contrast, is not primarily about how a state deploys its power. Rather, it is about the connection (or disconnection) between democratic processes and governance. Put differently, it is about the necessary conditions for democratic processes to provide grounds for governments to hold power and implement policy decisions.Footnote 8

Questions about justifying exercise of power via policing and questions about democratic processes and justifying power are, at root, about political legitimacy. So far in this book, we have examined algorithmic decision systems from three autonomy-based perspectives: in terms of what we owe autonomous agents (Chapters 3 and 4), in terms of the conditions required for people to act autonomously (Chapters 5 and 6), and in terms of the responsibilities of agents (Chapter 7). Political legitimacy is another way in which autonomy and responsibility are linked. This relationship is the basis of the current chapter, and it is important in understanding the moral salience of algorithmic systems. We will draw the connection as follows: In Section 8.2 we will outline a conception of political legitimacy. In Section 8.3 we will explain that the connection between political legitimacy and autonomy is that legitimacy is grounded in legitimating processes, which are in turn based on autonomy. Algorithmic systems – among them PredPol and the Cambridge Analytica-Facebook amalgam – can hinder that legitimation process and conflict with democratic legitimacy, as we argue in Section 8.4. We will conclude by returning to our old friends, Loomis, Wagner, and Houston Schools.

8.2 Political Legitimacy: Three Conceptions and a Hybrid View

Governments exercise enormous power over their denizens and expend vast resources promoting a range of policies (education, transportation infrastructure, defense, public health, commercial regulation, scientific research, information collection, parklands, safety and policing, and on and on). Those powers can be exerted in ways that are justifiable or unjustifiable, that are good or bad, and that are useful and not useful. The exercise of political authority can, in other words, be legitimate or not. In its broadest sense, political legitimacy refers to the justification of political authority, where “authority” just means having a certain political right to act coercively. But what constitutes such a right, and how could anyone ever acquire it? In this section, we discuss three conceptions of legitimacy – descriptive, democratic (or “will” based), and epistemic (or “normative authority” based) – and then offer our own, hybrid view based on a recent account from Fabienne Peter.

8.2.1 Legitimacy and Descriptive Criteria

One family of views distinguishes legitimate from illegitimate authority in terms of purely descriptive (or empirical), as opposed to normative, criteria. This sort of view, which finds its roots in Max Weber’s work,Footnote 9 is often associated with the social sciences. Such descriptive accounts examine how authority (understood as people recognizing a body’s ability to exert power) can be established and maintained.Footnote 10

Weber himself offered three methods of establishing and maintaining the ability to exert power: through tradition, through charisma, and through rational appeal. In other words, the continuity of traditional leadership, the transformative personal characteristics of charismatic rulers, and the inherent rationality of law can each, for better or worse, contribute to establishing and maintaining a political regime. Later theorists have refined Weber’s list, distinguishing, for example, claims of traditional authority staked on “convention” from those staked on “sacredness” and distinguishing claims of charismatic authority based on “personal ties” from those based on “personal qualities.”Footnote 11 In the context of algorithmic systems, Ari Waldman provides an account of this descriptive sense of legitimacy as “the socially constructed propriety of authority to make decisions for others.”Footnote 12 Given that “[t]he managerial ethos inside corporations operating in a permissive, neoliberal regulatory environment will twist process to serve corporate ends,” legitimacy requires that regulators “go beyond process to rebalance the structures of power.”Footnote 13

Work on this process of establishing recognition of a right to exert authority is important (and we will return to it in Section 8.3). However, it addresses different questions than those about the normative foundation of authority. The mere fact (if it is) that people are willing to recognize a government’s authority to exert power on the basis of, say, a leader’s charisma does not tell us anything at all about whether the government’s actions are justifiable. To understand this, we must also understand how a right to exert authority could be justified by moral or political principles.Footnote 14

8.2.2 Legitimacy and Democratic Criteria

One normative conception of legitimacy holds that it is a function of democratic consent. Legitimacy in this sense depends on some sort of approval of authority by a state’s citizens. This can be cashed out in a couple of different ways.

On one view, approval requires the kind of epistemic competence and non-alienation that are conditions of what we called psychological autonomy in Chapter 2. Most people never agree to be governed in this way, of course, so explicit consent cannot be the requirement for legitimacy. One might then consider hypothetical agreement as the grounds for legitimacy, arguing that a sufficient condition of legitimate authority is that reasonable persons could consent to it, or one might argue that it is a necessary condition of legitimate authority that people could consent to it. Indeed, the account of psychological autonomy from John Christman that we use as an exemplar in Chapter 2 links autonomy to legitimacy in this fashion. Christman writes that “the test for legitimacy of political principles should be the following: principles are legitimate only if the (reasonable) citizens to whom they apply would not be understandably alienated from them.”Footnote 15 What matters is not explicit agreement, but whether people would identify with political principles if given adequate opportunity to reflect on them in light of their values and the sources and history of those values.

Another view linking autonomy to legitimacy is based on the degree to which political bodies and decisions reflect constituent wills. What is important in legitimacy is that persons subject to authority have some say in how that authority is constituted and deployed. In this way, what Marina Oshana calls “political autonomy” more closely reflects her sense of personal autonomy than psychological autonomy.Footnote 16 Legitimacy in this sense turns on the degree to which political processes afford citizens the genuine opportunity to participate, rather than on whether political principles reflect their wills.

Both views of autonomy ground legitimacy in democratic will. Rawls, for instance, offers a principle of legitimacy that is defined in terms of “public reason.” On this view, political power is justified “when it is exercised in accordance with a constitution the essentials of which all citizens as free and equal may reasonably be expected to endorse in the light of principles and ideals acceptable to their common human reason.”Footnote 17 Similarly, Pettit’s account is grounded in “popular control.” He argues that political power is justified when all citizens have equal access to the prevailing system of influence over the government and that system imposes a direction welcomed by all.Footnote 18 In both cases, the account of legitimacy is premised on the notion of individuals as the “self-originating sources of valid claims” in the sense that “their claims have weight apart from being derived from duties or obligations specified by the political conception of justice, for example, from duties and obligations owed to society.”Footnote 19 What gives their claims this sort of weight is their autonomous wills.

Both Rawls’s and Pettit’s views align with descriptive accounts in that both Rawls and Pettit acknowledge that legitimacy is tied to stability. However, they diverge from descriptive accounts in maintaining that legitimacy requires that political authority meet normative criteria as well. Specifically, political authority must obey certain rules and democratic procedures. Rawls describes the ideal of political legitimacy not in terms of stability per se (i.e., in terms of a “modus vivendi”), but in terms of “stability for the right reasons,” where each citizen is compelled to obey the public constitution from their own individual perspective rather than merely out of self-interest. And Pettit describes the ideal of legitimacy in terms of passing a “tough luck” test, according to which authority is legitimate when people can think that “when public structures and policies and decisions frustrate their personal preferences, that is just tough luck.”Footnote 20 Note that these normative criteria for political legitimacy are versions of the reasonable endorsement principles we outline in Chapter 3. Since these principles index the degree of legitimacy to facts about the wills of constituents, we can, following Fabienne Peter, call them “will” principles of legitimacy.

8.2.3 Legitimacy and Epistemic Criteria

A third view of legitimacy ties it to whether systems or decisions meet epistemic criteria.

To understand this set of views, it is useful to start with what Peter calls the “normative facts” view. On this conception, what matters for legitimacy turns on whether political decisions track the truth of the matter as to what is morally right to do. That is, the normative facts view assumes something akin to moral realism, thinks that propositions about what governments should do can be true or false, and pegs legitimacy to whether such decisions indeed track those normative facts. And they do so regardless of what people’s beliefs about them are.

The problem with a normative facts view is that it leaves out the link between legitimacy and autonomy. That is, legitimate decisions need not be ratified in any sense by the people subject to them. Peter explains this as a violation of what she calls the “access constraint”: Political legitimacy, whatever it is, must “be such that it can settle political deliberation (at least temporarily),” and to do this, it “must involve the attitudes of at least some citizens.” Therefore, if normative facts are relevant to our political decision-making, it will be “through our beliefs about them.”Footnote 21

In this vein (i.e., bridging normative facts and beliefs about normative facts), Raz offers a “service” conception of legitimacy. On this view, political authority is legitimate only if it is exercised in the service of people’s underlying reasons. The “normal” justification for authority, in other words, is that authority done right does nothing more than guide people according to the reasons they already have. As Raz puts it, “[T]he normal way to establish that a person has authority over another person involves showing that the alleged subject is likely better to comply with reasons which apply to him […] if he accepts the directives of the alleged authority as authoritatively binding and tries to follow them, rather than by trying to follow the reasons which apply to him directly.”Footnote 22 Later theorists following Raz have explained further how this sort of epistemic preemption might work.

David Enoch, for instance, extends Raz’s notion of a duty as a special sort of reason, describing authority as giving rise to obligation when there is some “normative structure in the background” allowing the authority to give its subject those sorts of reasons.Footnote 23 On wildlife tours, the tour guide often serves (and expects to serve) as this sort of preemptive decision-maker. The guide is the authority on, for instance, when the group needs to leave a group of animals behind rather than continue to take pictures – because of the dangerous nature of the circumstances and the special knowledge the guide has acquired through experience over time.

In any case, when the normal justification is present, Raz argues, the duties prescribed by the authority can override even the dictates of the democratic will and thus can offer grounds for “a ready embrace of various paternalistic measures,” which would not pass democratic muster.Footnote 24 Here, we might find the justification for such policies as smoking bans, bans on trans fats, helmet laws, and mask mandates.

8.2.4 The Disjunctive Conception of Legitimacy

As Peter points out, neither the democratic nor the epistemic criteria seem on their own to fully encapsulate the normative grounds of political legitimacy.Footnote 25 If we accept what she calls a “will-based” conception of political legitimacy, exemplified by Rawls’s and Pettit’s democratic criteria, we run into what Peter calls the arbitrariness objection: that “[t]he validity of at least some practical claims depends on third-personal sources of validity,” and thus that these will-based conceptions “support undue arbitrariness in political decision-making.” We cannot hold the wills of all citizens to be politically relevant; individuals are not self-originating sources of valid claims but are instead often deeply irrational or unreasonable. Hence, democratic processes (via voting, consensus, public reason, or the like) are not sufficient on their own to justify the exercise of political authority. There are, in other words, exercises of authority that are illegitimate even if they genuinely reflect citizen will. They may be so ill-conceived and so harmful, or they may be so detrimental to democratic processes and structures themselves, that they cannot be legitimated by democratic processes.

A purely “belief-based” conception of political legitimacy (corresponding to Raz’s and Enoch’s epistemic criteria) is also inadequate. Peter argues that these accounts run into the problem of “epistemic underdetermination.” Specifically, there are few (if any) political decisions for which there is sufficient evidence to ensure that they are optimal. As Peter puts it, “[T]he epistemic circumstances of politics are such that for most political decisions, there will not be a decisive normative authority.”Footnote 26 So even if we reject democratic adjudication as the univocal source of normative authority, the epistemic circumstances are often so complex and uncertain that normative authority remains difficult or impossible to establish. The main tension is that persons’ wills and epistemic criteria both seem to be key conditions of justifiable exercise of political authority. Thus, the determinative question seems to be how to incorporate the autonomous wills and beliefs of citizens, given the need for policy that is appropriately responsive to facts.Footnote 27

To address these issues, Peter defends a hybrid account of the normative grounds of political legitimacy. This account holds that legitimacy includes “both responsiveness to normative authority and adjudication between valid, but conflicting, claims as grounds of legitimacy.” The conception itself is “disjunctive,” which is to say that legitimacy can derive from either will or belief. Moreover, it favors normative authority over democratic processes; legitimacy is tied to “how the decision reflects normative authority, when normative authority can be established, or, when normative authority cannot be established, […] how it adjudicates between the conflicting, but valid claims made by the citizens.” In other words, democratic adjudication is secondary; it is a source of legitimacy only “as a response to difficulties with establishing normative authority.”

Peter’s hybrid account of the grounds of legitimacy is on the right track, but we want to refine it slightly. We cannot define our way out of the threats to legitimacy presented by our most difficult and contentious cases simply through disjunctive addition. It is not as if the arbitrariness problem somehow disappears in cases where normative authority cannot be established. (Indeed, it seems more likely that the two problems are self-reinforcing and thus run together.) It may well be possible to balance power among parties to a conflict in some cases where normative authority is epistemically underdetermined, but we should not mistake this modus vivendi for an arrangement that is genuinely politically legitimate.

Peter’s concern with securing a secondary ground of legitimacy beyond normative authority is connected to her concern with the function of political legitimacy. She writes that “[a] plausible conception of political legitimacy should be able to settle which decisions are and which are not legitimate even under unfavourable epistemic circumstances,” and this is something that can often only be settled by the balancing of wills. Nor, she argues, can we “claim that all decisions made in the absence of decisive normative authority would be illegitimate,” because this “would itself amount to a political decision about which we could ask whether it is legitimate,” leading to a regress.

Peter considers the secondary role of democratic adjudication in the context of a well-entrenched topic of disagreement: abortion. If we grant for the sake of argument that there is no decisive normative authority on that issue, it is not clear that any political method of adjudicating the conflicting claims on the issue will ever be able to provide the sort of consolation that might soothe parties who have been disenfranchised by “tough luck,” to say nothing of providing stability for the right reasons. This seems like precisely the sort of issue that democratic adjudication can settle, but not in the “right” way.

To see the grounds and function of political legitimacy correctly, we need to keep the secondary notion of democratic adjudication separate from the primary notion of normative authority. This is because there are political decisions, systems, or regimes that might be democratically ratified, but that are illegitimate in the epistemic, normative authority sense. Thus, there will be some policies that appear legitimate in the democratic sense but are not legitimate in the normative authority sense. Our view is similar to Peter’s hybrid model, with the caveat that policies can be legitimate via a combination of normative authority and democratic will. As evidence of good policy increases (i.e., as it becomes increasingly evident that some policy, law, or action is justified via normative authority), the stringency of the democratic facet of legitimacy weakens.

8.3 Legitimating Processes

As we explained in the previous section, there are two facets to normative legitimacy. One, characterized by “will” theories generally, is vulnerable to arbitrariness. The other is characterized by normative authority, which is in turn a function of justified belief in structures, policies, and actions of states that are in fact good. This second facet is not vulnerable to arbitrariness in the way that will theories are. That is because justification is baked into the definition. Any policy, structure, or action that is both correct and justifiably believed to be correct is, ipso facto, legitimate. The primary limitation of the normative authority facet of legitimacy is epistemic; determining what is in fact the right policy or action is difficult, to say the least (cf., Section 1.5).

When we consider technological threats to legitimacy – gaming of algorithms in social media, use of algorithms in predictive policing, for example – it is tempting to move straight from a conception of legitimacy to a claim that some set of institutions, policies, or structures are themselves illegitimate, all things considered. It will no doubt be true that some political actions will fail to meet both the will conditions and the normative authority conditions for legitimacy and those actions will thus be unjustifiable exercises of authority. However, for any particular policy taking place within a complex set of social and political structures against background facts that are themselves challenging to interpret, all-things-considered judgments about legitimacy will be difficult. Hence, it is a far larger task than we can address here. Moreover, making that kind of final judgment about legitimacy itself would take us too far afield from the subject of this book, which is about autonomy. Finally, legitimacy (like autonomy) is not a binary, success concept. It is not something that an institution, polity, law, or action simply has or lacks. There is a spectrum of legitimacy, and limitations based on will or based on normative authority will often diminish legitimacy without warranting a conclusion that a policy decision crosses some floor below which it is illegitimate.

For these reasons, our focus here is on legitimation.Footnote 28 Will theories, normative authority theories, Peter’s disjunctive theory, and our hybrid interpretation of Peter’s view all rely on exercises of persons’ autonomy as a foundation for legitimacy of governments, laws, policies, and the like. That process of exercising autonomy, either through will or through justified beliefs about normative claims (or both), is the mechanism by which use of political authority is justified. And in turn, members of a polity have the responsibility to use their autonomy to understand and support (or not support) actions by their government.Footnote 29 It is that process of legitimation that algorithmic systems can disrupt. In other words, algorithmic systems may in some cases hinder people from fulfilling their responsibilities of legitimating government policies, actions, laws, and the like.

Our argument for the centrality of autonomy to legitimacy begins with the conception of legitimacy outlined earlier. For political authority to be justified, it must be based on either democratic will or normative authority (i.e., where the exercise of political authority is good and there is justified belief that it is good), or a degree of both. As normative authority increases – that is, where policy is good and there is excellent epistemic justification for it – the degree to which it must be supported by mere will decreases.

Both paths to legitimate authority are based on a process of legitimation, each of which is itself grounded in autonomy. We explained in Section 8.2.2 that will-based theories are based on some version of agreement, public reason, or hypothetical consent and that Christman and Oshana explicitly link legitimacy to their conceptions of autonomy. The idea is straightforward. For one’s agreement to a system of governance and to the actions, policies, and decisions of a government to carry force, the agreement must be based on conditions that meet the criteria of autonomy and freedom. One must be epistemically competent, and the reasoning upon which one’s agreement is based must satisfy authenticity conditions (as Christman puts it, one must not be understandably alienated from the relevant principles). One should have the room to determine whether to support political decisions by having a degree of substantive independence. Moreover, people performing their legitimating responsibilities should be free of substantial impediments to the quality of their agency. As we explained in Chapter 5, one’s agency may be impaired by affective, deliberative, and social challenges. Where the quality of their agency is sufficiently challenged in those ways, their ability to fulfill the responsibility of legitimating government policies is limited.

The link between autonomy and legitimation by normative authority is not as intuitively clear as between autonomy and will theories. Normative authority requires, first, that a policy, law, or action by a government be a good one. But the mere fact that a policy is good cannot confer legitimacy because of what Peter calls the “access constraint.” Legitimacy demands that people believe that the policy is good and that they do so for sound reasons. Put another way, the legitimation process requires that people have sufficient information and understanding to assess a policy and believe that it is justified on its merits. The ability to do that is based on autonomy. To form justified beliefs about whether a policy is good, one must be epistemically competent, have some sense of value from which one is not alienated, and be substantively independent enough that one’s ability to reason is not compromised. Finally, the quality of one’s agency should not be so diminished that it undermines the degree to which one’s beliefs and actions are indeed one’s own.

8.4 Technological Threats to Legitimacy

Before we apply our arguments about legitimacy, legitimation, and autonomy to algorithmic systems, let’s summarize the conception of legitimacy that we have developed.

  1. For a government policy, law, or action to be normatively politically legitimate requires either (a) normative authority, (b) democratic will (within certain bounds, dictated by (a)), or (c) some combination of both (a) and (b).

  2. For a policy, law, or action to be legitimate via normative authority requires that the policy, law, or action in fact be good and meet the access constraint.

  3. The access constraint functions to ensure that autonomy enters the process by having belief-forming mechanisms in place under sufficient quality of agency.

  4. For a policy, law, or action to be legitimate via democratic will requires some process of voting or public reason.

  5. For a process of voting or public reason to confer legitimacy requires that persons engaged in that process do so autonomously.

With that in mind, let’s return to PredPol, Cambridge Analytica, and the IRA.

8.4.1 PredPol, Normative Authority, and Legitimation

Our analysis of PredPol and legitimation is centered on legitimacy through normative authority. Recall that legitimacy via normative authority involves both normative claims (i.e., that some policy is in fact a good one) and the access constraint. The access constraint requires that the reason some policy is implemented is that people implementing it have epistemically sound reason to believe that the policy is in fact good. Based on these criteria, there are four key questions for determining whether a policy is legitimate based on the normative authority view: (1) Is the policy a good one? (2) Is there ample evidence that the policy is a good one? (3) Does the evidence that the policy is good drive the beliefs about the policy? and (4) Do the beliefs about the policy reflect the actual implementation of the policy?

We will consider the first two questions together, since we cannot address the question of whether the policy is good independent of evidence for it. So, is use of PredPol in fact good policy? Certainly, reduction in crime is a worthwhile objective. Hence, whether use of PredPol is good policy will turn on whether it is effective, whether it has bad consequences that offset any good ones, or whether it impinges on other claims. There is some evidence PredPol is effective. In July 2011, the Santa Cruz, CA police department piloted a crime-deterrence program using PredPol. Zach Friend, the department’s crime analyst, reported encouraging results. Burglaries that July were down 27 percent compared with July of the previous year. Speaking to the question of effectiveness, Friend added, “The worst-case scenario is that it doesn’t work and we’re no worse off” (though who the “we” refers to is unclear).Footnote 30

However, the evidence of PredPol’s efficacy is equivocal. Santa Cruz’s drop in burglaries in 2011 is not enough to conclude that PredPol was the cause; after all, cities are complex entities and a one-year drop in one category of crime does not warrant a conclusion about a single cause and tells us nothing about other crimes. There have been only a few academic studies on predictive systems like PredPol. Those that exist yield mixed results, with some showing that predictive methods have crime-reducing effects and others showing no effect at all.Footnote 31 For example, a study by the RAND Corporation testing a place-based system like PredPol found that the predictive system had no statistically significant effect on crime overall.Footnote 32

There is also reason to think that use of PredPol has negative effects that offset whatever advantages in crime reduction it confers. Several cities using predictive policing technology have found that police simply stay in the locations indicated by the algorithm. That is, they stayed in the “red box” that PredPol overlays on a map. Susan Merritt, San Francisco Police Department’s chief information officer, remarked that “[i]n L.A. I heard that many officers were only patrolling the red boxes, not other areas […] People became too focused on the boxes, and they had to come up with a slogan, ‘Think outside the box’.”Footnote 33 And there is evidence that this has deleterious effects. Patrol officers stayed in the “red boxed” areas and engaged in “intelligence gathering,” for example questioning people, running people’s records, investigating for potential (though unreported) narcotics use, and so forth. This, in turn, had the effect of slowing response times to emergencies.Footnote 34

A further consequence that offsets whatever benefits PredPol has is disproportionate use. The Los Angeles Police Department (LAPD) has used predictive methods as a reason to deploy helicopters (as opposed to patrol cars) to the 500-foot-by-500-foot boxes that denote “hot spots.” These practices outrun the design of the predictive tool for several reasons. For one, it is difficult to discern unspecified criminal activity from a helicopter. And helicopters’ deterrent effect is fleeting. Criminologist Geoffrey Alpert – a specialist in the use of helicopters in policing – says that helicopters, which are loud and obvious, will at best deter criminals while they are overhead but that criminals will simply return when the helicopters are not around.Footnote 35 Moreover, to accommodate helicopter flight paths, the search area had to be expanded well beyond the “red box,” conflicting with the purported specificity of PredPol.Footnote 36 Add to this the annoyance residents experience having loud helicopters overhead, and there are several negative consequences offsetting PredPol’s advantages.

Finally, there are potential rights impingements relevant in determining whether PredPol is good policy. Predictive models indicating a “high crime area” can be a factor in determining whether police have legal authority to stop and frisk a person.Footnote 37 Hence, people may be stopped and frisked more often in a PredPol-designated area, and such stops are more likely to be permissible under the Fourth Amendment. However, it is another question whether merely being in a high-crime area can help morally justify a stop. And because of the source of data, the patterns of high-crime areas are skewed toward neighborhoods in which residents are predominantly from racial and ethnic minorities.

So, even if use of PredPol has some positive effects on a justifiable policy goal, we are not warranted in concluding that it is good policy overall and that there is ample evidence for its being good policy. Let’s assume that the answer to the third question – whether evidence that the policy is good drives beliefs about the policy – is yes. That is not to say that the inference is correct; the access constraint merely requires a connection between evidence and belief on the assumption that the policy is good and justifiable.

The legitimation problem for use of PredPol comes with questions (3) and (4). It is possible that what evidence there is that PredPol can help reduce some types of crime grounds municipalities’ use of the technology. However, the actual implementation of PredPol does not appear to reflect the evidence of its efficacy. Even if PredPol is well suited for predicting some kinds of crimes, the evidence for that does not warrant beliefs about PredPol’s usefulness as a general law enforcement tool, much less as a reason to park police officers in red-boxed areas or to act as general investigators (and slow responses to emergencies). It likewise does not support belief that deploying helicopters to fly over red-boxed and surrounding areas is warranted. It also does not provide grounds for thinking that individual persons who happen to be in red-boxed areas are themselves engaged in criminal activity. To reiterate, the access constraint is the requirement that belief in a policy’s justification be based on the fact (if it is a fact) that the policy is a good one. Here, the fact that PredPol is effective in one area instead grounds the apparent belief that it is a good tool for intensive policing of discrete areas and conducting stops of individuals. In this way, the evidence for PredPol’s efficacy in one area actually circumvents an important legitimating process. The evidence for its efficacy in one area stands in for evidence of efficacy for the tasks in which it is actually used. This is a barrier for citizens in being able to form beliefs about whether a policy is a good one. In other words, deploying PredPol well outside the range of circumstances for which there is evidence of its efficacy undermines citizens’ abilities to fulfill their responsibility of exercising their autonomy in legitimating policy.

Here we want to be very clear about the limitations of this discussion. The mere fact that a policy is not good does not render it illegitimate. Any kind of policymaking (broadly construed) is complex, takes place under epistemic limitations, and will affect lots of people with considerable interests in the policy. Moreover, for a policy to be “good” in the strong sense necessary for it to be legitimate via normative authority is a very tall order. And the fact that use of PredPol does not seem to clear that threshold is therefore unsurprising. Use of PredPol might instead be legitimate based on the democratic, will-based view. Hence, our argument is emphatically not an attempt to make an all-things-considered judgment that using PredPol is not democratically legitimate (perhaps it is, perhaps it isn’t). Rather, as we emphasized earlier, our purpose here is to examine autonomy and the process of legitimation. And on that question, PredPol is an informative case because it precludes citizens’ abilities to legitimate predictive policing by undercutting their ability to assess its policy rationale.

8.4.2 Cambridge Analytica, the IRA, and Will-Based Authority

What are often referred to as the “Cambridge Analytica scandal” and “Russian hacking case” are in fact an interconnected, overlapping, and confusing welter of activities, organizations, tools, and technologies that grew up before and became widely known in the wake of the 2016 UK Brexit campaign and the 2016 US presidential election. The events are well documented in that there have been multiple tell-all books,Footnote 38 government investigations,Footnote 39 academic reports,Footnote 40 and news stories.Footnote 41 However, the interwoven stories are obscured by their very complexity, redactions in official reports and indictments, disinformation and deflection campaigns, and failures of public institutions to follow through with further investigations. Given the scope of the stories, we cannot possibly do them justice here. However, we can pull out a few important threads to ground our arguments about legitimation.

Cambridge Analytica is a data analytics and political consultancy firm based in London, UK. It is an offshoot of the SCL Group. In the years before the Brexit campaign and the 2016 US presidential election, the firm became interested in using the gigantic, rich datasets generated on social media platforms – particularly Facebook – to build ever more sophisticated models of voters and behavior. To pursue this work, the firm partnered with Cambridge University psychologist Aleksandr Kogan, who was already collecting social media information for his own research. Kogan had developed (along with other researchers) a personality app called “This Is Your Digital Life.” The app, which included a “lengthy psychological questionnaire,”Footnote 42 was actually only downloaded by around 270,000 users, but it ultimately harvested data from those users and from their friends, a group that might number up to 87 million.Footnote 43 This data was shared with Cambridge Analytica. On the basis of psychological profiles and data about Facebook users, Cambridge Analytica offered micro-targeted advertising driven by psychometrics. It did this first for Republican Party nomination seeker Ted Cruz.Footnote 44 When Cruz dropped out of the nomination race, the firm shifted its operation to candidate Donald Trump.

The targeted advertising and social media campaign spearheaded by Cambridge Analytica is of particular note. Christopher Wylie is a data scientist and former employee of Cambridge Analytica who eventually became a whistleblower and author of a book about the company. He describes several key features of the company’s strategies. For example, he describes a project targeting African American voters with messages designed to decrease their motivation to vote. He also describes the company’s use of its influence to create in-person meetings of angry citizens.Footnote 45 As Wylie put it in the original report published by The Guardian: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”Footnote 46

The Internet Research Agency (IRA) is a Russian state-supported organization that was established in 2013. It operates “like a sophisticated marketing agency,” has trained “over a thousand people to engage in round-the-clock influence operations,” and has targeted citizens in a variety of places, including Russia, Ukraine, and the United States.Footnote 47 In February 2018, the U.S. Department of Justice indicted the IRA and a number of Russian nationals who worked with the IRA for conspiracy to commit fraud, wire fraud, and bank fraud.Footnote 48

The charges were based in part on disinformation “active measures” carried out on social media.Footnote 49 These measures included creation of fictitious groups that engaged social media users with inflammatory and divisive content. The IRA created fake profiles that followed other IRA sites and groups, thereby increasing their footprint and visibility to real people on social media platforms. To reach larger audiences, the IRA purchased Facebook ads that were then placed in potential followers’ Facebook newsfeeds.Footnote 50 The inflammatory, divisive content promoted by the IRA sought to engage social media users from across a range of political affiliations. The U.S. Senate Select Committee on Intelligence found that the IRA’s targeting was particularly intensive toward African Americans. It writes, “By far, race and related issues were the preferred target of the information warfare campaign designed to divide the country in 2016.”Footnote 51 The IRA did this by creating pages and groups that sought to engage users whom the social media platform identified as African American and by targeting geographic areas that are predominantly African American. These include pages such as “Blacktivist” and social media posts related to police shootings and NFL player kneeling protests of police shootings.Footnote 52 Other groups the IRA targeted with incendiary messaging include groups with names that imply a range of political dispositions (“Being Patriotic,” “Stop All Immigrants,” “Secured Borders,” “Tea Party News,” “Black Matters,” “Don’t Shoot Us,” “LGBT United,” and “United Muslims of America”).Footnote 53 The IRA’s social media accounts “reached tens of millions of U.S. persons” and had “hundreds of thousands of followers.”Footnote 54

These accounts were used to organize in-person rallies, recruit activists to perform political tasks, and promote IRA content.Footnote 55 The IRA’s activity is linked to social media users being exposed to a great deal of misinformation. A study of Twitter by the Oxford Internet Institute found that in the run-up to the 2016 presidential election, “users got more misinformation, polarizing, and conspiratorial content than professionally produced news.”Footnote 56

The breadth of activities by the IRA is striking. Its activities range across social media platforms, including Facebook, YouTube, Instagram, and Twitter. It targets groups from a range of social and political perspectives. Its tactics include aiming to suppress votes, to direct voters toward third parties, and to depress turnout; encouraging secessionist movements (including in Texas and in California, which it compared to Brexit); and denigrating a range of leaders across parties.Footnote 57 The IRA also aimed both to seed news media with content generated by Russian disinformation campaigns and to undermine trust in traditional news media.Footnote 58

There is no official recognition or definitive documentation connecting Cambridge Analytica and Russian active measures. In his book, Wylie recounts Cambridge Analytica’s involvement with Russian actors, message-testing about Vladimir Putin, close connections to people involved with pro-Russia factions in Ukraine politics,Footnote 59 and involvement with Russian oil firm Lukoil.Footnote 60 SCL’s promotional materials indicate that it did work for Russia. However, SCL denies being involved in political campaigns at the behest of Russia, insisting that its work in Russia was for “private” interests.Footnote 61 It is worth noting here that the distinction between state and nominally private interests in a kleptocracy is blurry.

So the existence and nature of the connection between the IRA and Cambridge Analytica are unclear. However, what matters here is not whether there was a single group of people acting in concert to sow anger and seed disinformation in the United States and elsewhere. Rather, our focus here is on the nexus of intrusive data collection (itself possible based on Facebook’s weak privacy protections and permissive terms of service (themselves difficult to comprehend and rarely read), and the imprimatur of Cambridge University), predictive analytics using that data to better target influence, and algorithmic systems that suggest advertising to clients (see Section 7.4) and promote content to users. That is, our concern here is a socio-technical, big-data, and algorithmically aided group of systems that affect the attitudes, beliefs, dispositions, and actions of people within democratic states. Whether Cambridge Analytica entered into agreements or acted in explicit concert with the IRA is neither here nor there for our purposes in this chapter.

We should also emphasize that the effect of Cambridge Analytica’s and the IRA’s actions on the outcomes of particular elections is unclear, and no single set of events could ever be said to be the sole cause of complex phenomena like election outcomes and broad policy-making. It is disputed just how well these kinds of tactics actually work. Kogan argued that the “accuracy” of the data was exaggerated in media reports and that “[w]hat Cambridge has tried to sell is magic.”Footnote 62 At first, the Cruz campaign credited the data-driven approach with its win in the 2016 Iowa caucus,Footnote 63 but NPR reports that, as the election cycle moved forward, the campaign grew more skeptical and eventually phased out the psychological profiling after losing the South Carolina primary.Footnote 64

Nonetheless, the tactics are important from the standpoint of political legitimacy. As a team of psychologists has recently shown, there is increasing evidence for “the effectiveness of psychological targeting in the context of real-life digital mass persuasion,” and “tailoring persuasive appeals to the psychological profiles of large groups of people allowed us to influence their actual behaviors and choices.”Footnote 65 The Cambridge Analytica case, they comment, “illustrates clearly how psychological mass persuasion could be abused to manipulate people to behave in ways that are neither in their best interest nor in the best interest of society.”Footnote 66

This creates a problem of legitimacy. Whereas in the PredPol case we considered the normative authority path to legitimacy, here our concern is the will, or democratic, path. Recall that this path relies on persons’ agreement (whether tacit, hypothetical, expressed via public reason, or expressed through voting) to be governed and to the actions, policies, and decisions of a government. For citizens’ agreement to carry force, it must be based on conditions that meet the criteria of autonomy. And, again, our concern here is not an all-things-considered judgment about legitimacy, but the ability of citizens to fulfill their responsibilities of legitimation.

There are several ways in which the exploitation of data, psychological profiles, social media, and algorithmic systems creates a drag on legitimation. Consider first Christman’s test for legitimacy, which requires that political principles be such that reasonable citizens would not be understandably alienated from them.Footnote 67 The fake groups that the IRA created and advertised were promoted by fake user profiles but eventually were followed by real people, referenced real issues, and mimicked the names and language of genuine social, religious, and political groups (e.g., Stop All Immigrants, Blacktivist, LGBTQ United). The language, the titles, and the links to real issues and groups drew in people who came to express their views and wills on topics with real stakes. However, when people joined, engaged with, and thereby amplified those groups and messages (and in some cases populated in-person eventsFootnote 68), it is likely that their expressions of will were not autonomous, because they are expressions from which those people would be alienated. That is, if people were to “engage in sustained critical reflection,” which requires reflecting on how they came to their commitment, the expressions would be incompatible with their sense of self and practical identity over time.Footnote 69 And if a person earnestly engaged in political debates about policing, the rights of LGBTQ+ persons, and immigration were to reflect on the fact that Facebook groups like Stop All Immigrants, Blacktivist, and LGBTQ United were ersatz communities built by Russian agents to exploit and enrage them, they would in all likelihood be alienated from their support. In other words, the social media campaigns undermine the legitimation process by undermining the authenticity condition for autonomy.

Consider next how psychological profiling and targeting on social media work. They work, first, to engage people by keeping them scrolling on social media feeds.Footnote 70 They also seek to create sustained engagement, and one mechanism for creating such engagement is provoking strong emotions, including affinity for a cause and anger at opponents. The tactics draw people in and exploit their emotions. These tactics, for reasons we described in Section 5.2, are instances of the affective, deliberative, and social challenges to agency. They thus undermine the quality of people’s agency and thereby conflict with their autonomy. This diminishes people’s ability to perform their legitimating responsibilities.

The fake accounts also promoted falsehoods. They promoted falsehoods about the nature of the groups themselves, and they promulgated false claims more broadly. This circumvents persons’ autonomy by undermining its epistemic requirements, and it creates yet another drag on the legitimation process.

Finally, there is an issue that goes back to Feinberg’s understanding of “ideal” autonomy and to social/relational conceptions of autonomy.Footnote 71 People are not isolated, atomistic individuals. They are parts of families, communities, and social groups. Such groups are vital to autonomy both in the sense that they help people flourish and develop their autonomy and in the sense that autonomy involves the ability to act and realize one’s values within communities. The tactics of Cambridge Analytica and the IRA undermine that social dimension. They specifically seek to create ersatz communities that displace genuine ones and to rend broader communities into factions that neither trust nor communicate effectively with each other. Deep disputes are an important and ever-present facet of democratic polities. However, encouraging disparate groups to further distrust others and undermining the ability of those groups to communicate conflicts with the social facet of autonomy and, hence, with the legitimation process.

8.5 Once More Past the Pole

The issue of legitimation in the cases of PredPol, Cambridge Analytica, and the IRA is momentous. How police exercise their power to put others in harm’s way, efforts to affect an election via social profiling, and interference by a hostile nation in a US presidential election cut to the quick of democratic legitimacy. But issues of political authority and legitimation are not limited to matters at the heart of democratic procedures and at the far reaches of state-sanctioned exercise of power. They also extend to ordinary administrative tasks, among them risk assessment and teacher evaluation.

COMPAS raises issues similar to those PredPol raises on the normative authority arm of political legitimacy. It is no doubt true that efficient allocation of scarce resources for supervising people in the criminal justice system is good policy. There are substantial open questions about that system overall, of course. But if we accept the premises that the state should sanction some kinds of actions with a range of penalties and that supervision and resources (e.g., drug and alcohol treatment, job programs, counseling, housing) are appropriate responses, then assessing persons for placement within those programs would appear to be a reasonable approach. However, as we explained in Section 3.4.3, the use for which COMPAS is designed and suited is different from the use to which it is put in Loomis. Hence, even if there is evidence that COMPAS is well calibrated in assessing risk, that same evidence does not underwrite its use for sentencing purposes. As with PredPol, there is a disconnect between evidence and tool use, and that creates a drag on the epistemic facet of legitimation via normative authority.

The VAM cases are a bit simpler in that the tools themselves have substantial flaws, as we discussed in Sections 1.1 and 3.4. Evaluating teachers is a reasonable policy goal. However, the fact that the tools for doing so are so deeply flawed precludes legitimating the use of EVAAS and TVAAS via normative authority. There is not a similar impediment to legitimation via democratic will. Note two things, though. First, that some institution, government, law, or policy is legitimate – which is to say based on a justified exercise of political power – is not the same as its being morally justified. Political legitimacy is in that sense a lower bar. Second, the fact that the algorithmic systems in these cases do not place a drag on the legitimation process is not a sufficient condition to ensure legitimacy. As we made clear in Section 8.4, our concern is that some kinds of algorithmic systems hinder citizens’ abilities to exercise their responsibility to legitimate policies. VAMs do not obviously hinder fulfilling that responsibility.

8.6 Conclusion

Autonomy is foundational for political legitimacy. It grounds each of the accounts we canvassed in Section 8.2, including the hybrid account we endorse. We argued that one component of legitimacy is the process of legitimation. That is, citizens have a responsibility to exercise their autonomy to legitimate governments, policies, laws, and actions. Algorithmic systems can impede citizens from fulfilling those responsibilities. PredPol (and to an extent COMPAS) is an example of an impediment to legitimation via normative authority. Whatever evidence there is of PredPol advancing a good policy goal (and there is some), that evidence deflects attention from assessing how PredPol is actually used. That conflicts with the access condition, whereby belief in a policy’s value has to be linked to evidence for its actual value.

The cases of Cambridge Analytica and the IRA undermine the second arm of political legitimacy. They create conditions where citizens would be alienated from the source of their beliefs and attitudes; they substitute ersatz involvement for genuine involvement and generate false beliefs.

But we should reiterate here that political legitimacy goes much further than the process of legitimation. For one, there are questions about different levels or targets of legitimacy. The fact, if it is one, that policies using particular technologies create a drag on legitimation or, worse, are not legitimate all-things-considered does not tell us whether the agencies, municipalities, states, or nations using those technologies are themselves legitimate, or much else besides. It is important not to overinterpret drags on legitimation in order to draw broader conclusions about legitimacy per se. Relatedly, a legitimate authority can get things wrong and still retain its legitimacy to exercise authority. That breadth of legitimacy is why our conclusions here are limited to the ability of citizens to exercise their responsibilities.

9 Conclusions and Caveats

It is fitting that the last example we introduced in the book concerns the Internet Research Agency’s (IRA) use of social media, analytics, and recommendation systems to wage disinformation campaigns and sow anger and social discord on the ground. At first glance, it seems odd to think of that as primarily an issue of technology. Disinformation campaigns are ancient, after all; the IRA’s tactics are old wine in new bottles. That, however, is the point. What matters most is not particular features of technologies. Rather, it is how a range of technologies affect things of value in overlapping ways. The core thesis of our book is that understanding the moral salience of algorithmic decision systems requires understanding how such systems relate to an important value, viz., persons’ autonomy. Hence, the primary through line of the book is the value itself, and we have organized the book to emphasize distinct facets of autonomy, using algorithmic systems as case studies.

To review, we have argued that three broad facets of autonomy are affected by algorithmic systems. First, algorithmic systems are relevant to what we owe each other as autonomous agents. That is the focus of Chapters 3 and 4. In Chapter 3 we addressed the material conditions we owe others and argued that respecting people as autonomous demands that any algorithmic system to which they are subjected be one that they can reasonably endorse. It does not require that they value particular outcomes or that they not be made worse off by such systems. Rather, systems must either comport with agents’ own ends or be consistent with fair terms of social cooperation. We argued that whether persons can reasonably endorse a system turns on the system’s reliability, responsibility, stakes, and relative burden. Chapter 4 turned to the issue of what information we owe others. There we argued that people are owed information as a function of their practical agency (i.e., their ability to act and carry out plans in accord with their values) and as a function of their cognitive agency (i.e., their ability to exercise evaluative control over mental states, including beliefs, desires, and reactive responses). We offered several principles for information access grounded in agency.

The second connection between algorithmic systems and autonomy concerns the conditions under which people are autonomous. In Chapter 5 we considered the relationship between algorithmic systems and freedom. We explained that algorithms bear upon negative, positive, and republican freedom and offered a general account of freedom as ecological non-domination. Key to understanding that ecology is recognizing three challenges to freedom: affective challenges, deliberative challenges, and social challenges. In Chapter 6 we offered some suggestions for addressing some facets of those challenges. Specifically, we argued that a kind of epistemic paternalism is both permissible and (under some conditions) obligatory.

Chapters 7 and 8 shifted focus to the responsibilities of agents in light of the fact that they are autonomous. In Chapter 7 we argued that algorithmic systems allow agents deploying such systems to undermine a key component of responsibility, viz., providing an account of actions for which they are responsible. Specifically, we argued that complex systems create an opportunity for “agency laundering,” which involves a failure to meet one’s moral responsibility for an outcome by attributing causal responsibility to another person, group, process, or technology. Chapter 8 addressed a different facet of responsibility. Citizens within democratic states have a responsibility to exercise their autonomy in order to legitimate political authority. That is, they have a responsibility to help ensure that governments, laws, policies, and practices are justifiable. However, some kinds of algorithmic systems hinder citizens’ abilities to do that. They can do so by undermining the epistemic conditions necessary to underwrite the “normative authority” path to legitimacy or by undermining the exercise of autonomy necessary to underwrite the “democratic will” path to legitimacy.

9.1 Further Work

In one sense, that is a lot of terrain to have covered. And yet even within the scope of autonomy and algorithmic systems, there is much more work to do. Throughout the book, we pause to point out how various topics bear upon one another. There are, however, connections across the chapters that warrant more attention.

In Chapter 4 we addressed the informational components of autonomy, and we argued that people have claims to information about algorithmic systems based on practical agency, cognitive agency, and democratic agency. There is a question, though, about whether such information is a condition of people’s being able to reasonably endorse systems. That is, the precise relationship between what we owe people materially (per Chapter 3 and the Reasonable Endorsement Test) and what we owe people informationally (per the principles of informed practical and cognitive agency) is worth examining. Similar concerns arise in understanding the relationship between practical, cognitive, and democratic agency and political legitimacy. We note in Chapters 4 and 8 that the ability to exercise democratic agency is a component of the legitimating process, and we explain how that relationship functions within the normative authority and democratic will “arms” of legitimacy. But a number of questions remain. Just what kinds of processes are subject to legitimation at all? Certainly, direct actions of government agents can be legitimate or not, but what about private actions? Or what about private actors whose influence on state affairs is enormous? Moreover, what is the extent of information necessary for citizens to fulfill their legitimating responsibilities?

There are further connections to be drawn between Chapter 5’s discussion of quality of agency and other facets of autonomy. To the extent that the challenges to freedom limit people’s quality of agency (and hence their positive freedom), are they also limitations on people’s ability to reasonably endorse states of affairs? It also seems plausible that such challenges are an impediment to exercising practical, cognitive, and democratic agency. It is therefore worth exploring whether even more extensive epistemically paternalistic actions than those we outline in Chapter 6 are justifiable (or even obligatory).

We should also point out that the relevance of agency laundering may be even broader than we outline in Chapter 7. Laundering may be applicable in other cases we discuss throughout the book; for example, it would be worth considering in the contexts of background checks (Chapter 4) and predictive policing (Chapter 8). When presenting on the topic of agency laundering to academic audiences, we have often received questions about whether it could be applied to political actions (e.g., Brexit). While we cannot do justice to that question here, we can say that the use of sophisticated profiling and influence operations is a plausible mechanism for laundering. Hence, examining influence campaigns as potential sites of laundering is worthwhile.

Moving beyond the topics we’ve covered, the range of open questions is vast. Driverless vehicles, for example, raise numerous issues with respect to responsibility, human control, and worker displacement. Robots, including those that interact with humans, provide care and companionship, or displace labor, are a topic of growing philosophical and moral concern. Weapons of war raise further issues relevant to human control and responsibility.

9.2 Caveats: Baseline Issues

An important question that one might raise about this project concerns baseline comparisons. While COMPAS, EVAAS, TVAAS, PredPol, and other systems may have important problems with respect to autonomy, one might argue that, despite those issues, the technologies are better than the relevant alternatives. What matters is not whether algorithmic systems have flaws compared to some ideal, but whether they are meaningfully better than the relevant alternatives. Having a system like COMPAS assess risk may be better than relying on unaided human judgment, since humans have well-known biases, act arbitrarily, and are harder to audit.

That’s a reasonable criticism. However, it does not undercut the project, for a number of reasons. First, even if algorithmic systems are better than the systems that preceded them, it does not follow that they are justifiable. So, even if COMPAS is better than judges at determining the likelihood of reoffense, it does not follow that the use of COMPAS is itself justifiable. The space of reasonable alternatives need not be limited to some antecedent system and some novel algorithmic system. There could be better algorithms; lower-stakes algorithms; algorithms that do not impose disparate relative burdens; that respect practical, cognitive, and democratic agency; that preserve quality of agency; that do not serve to launder agency; that allow citizens to fulfill their responsibilities of legitimation; and so forth.

Second, even where technologies are better than some alternatives, they may reveal underlying moral concerns. Consider a study from October 2019.Footnote 1 Obermeyer et al. studied a machine learning system that examined health records in order to predict which patients were in “high-risk” categories and either defaulted them into a care program or referred them for screening into such a program. The study determined that Black patients identified as high risk were significantly less healthy than White patients so identified; that is, Black patients had higher numbers of chronic conditions than similarly categorized White patients. This meant that Black patients were less likely than White patients to receive appropriate care. The reason for the difference, according to the researchers, is that the machine learning algorithm reflected health-care expenditures. That is, risk levels were correlated with the amount of care and treatment that patients had received in prior years. As the authors put it, “Black patients generate lesser medical expenses, conditional on health, even when we account for specific comorbidities. As a result, accurate prediction of costs necessarily means being racially biased on health.”Footnote 2 That disparity may arise for a couple of reasons. One is that poor patients (who are disproportionately Black) face substantial barriers to receiving healthcare, even if they are insured. For example, transportation time to care facilities may be greater, and they may face greater difficulty in getting time off work. There are also social barriers, including worse treatment of Black patients by care providers and distrust of health-care providers.
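To make the proxy-label problem concrete, consider a minimal sketch in Python. It is illustrative only and uses synthetic data and made-up parameters (the group labels, the rate of chronic conditions, and the cost gap are assumptions for the example, not figures from the study); it is not the model Obermeyer et al. audited. The point is simply that when a risk score is thresholded on cost rather than on health, a group that generates lower costs at the same level of need is flagged less often, and its flagged members are sicker.

```python
# Illustrative sketch with synthetic data; not the system Obermeyer et al. studied.
# It shows how flagging "high-risk" patients by a cost proxy, rather than by health,
# under-identifies a group that generates lower costs at the same level of need.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identically distributed underlying health need.
group = rng.integers(0, 2, size=n)
chronic_conditions = rng.poisson(lam=2.0, size=n)

# Assumption for illustration: group 1 generates roughly 30% lower cost at the
# same level of need (e.g., because of barriers to accessing care).
cost = chronic_conditions * np.where(group == 1, 0.7, 1.0) + rng.normal(0.0, 0.2, n)

# Cost-based "risk score": flag the costliest 10% of patients for the care program.
flagged = cost >= np.quantile(cost, 0.90)

for g in (0, 1):
    in_group = group == g
    share_flagged = flagged[in_group].mean()
    conditions_among_flagged = chronic_conditions[in_group & flagged].mean()
    print(f"group {g}: {share_flagged:.1%} flagged; "
          f"mean chronic conditions among flagged = {conditions_among_flagged:.2f}")

# Typical output: group 1 is flagged less often, and its flagged members have more
# chronic conditions than flagged members of group 0; this mirrors the pattern the
# study reports. Thresholding on chronic_conditions directly (a health-based label)
# removes the gap.
```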

Notice, though, that the algorithm at work in the Obermeyer study was likely better than a system that did not use sophisticated technologies to identify high-risk patients and nudge them toward care programs. That is, it was likely better for Black patients as well as White patients and worse for no one; it was just more advantageous for White patients. But there is nonetheless a moral problem with the algorithm. Hence, “better than a prior baseline” should not end our inquiries. In the health system case, the study’s authors developed a different analysis that sorted patients into high-risk groups based on underlying health conditions, and it performed similarly well for Black and for White patients. Our conclusions in this book, we hope, can provide grounds for analyzing what better algorithmic systems would look like.

9.3 Bigger Pictures

A further objection to many of our arguments in the book will have to do with background conditions and structures and whether we have properly identified the most morally salient features of cases. In the case of COMPAS and Loomis, one might argue that the US criminal justice system has so many infirmities that focusing on use of a single tool is beside the point. In Loomis, one could argue that plea deals generally are unjust, in virtue of the fact that they take place in a system where alternatives to plea deals are trials with either expensive counsel or overstretched public defenders, high conviction rates for cases reaching jury verdicts, and long sentences for guilty verdicts. Plea deals to long sentences are common, in other words, because of other kinds of injustices and statutory penalties that outpace what is necessary for either deterrence or desert. Likewise, one might argue that the appropriate focus in analyzing use of VAMs to evaluate K-12 teachers is on issues such as school funding, the vast differences in resources available in wealthy and poor school districts, how those differences track race and ethnicity, the need to ensure excellent teaching, and administrative pressures on teachers generally. One might argue that background checks are a symptom of broader problems of wealth and income inequality, the austere state of healthcare for many Americans, and the fact that landlords have much greater legal power than tenants.

It is certainly the case that one can focus on different facets of problems, be they in criminal justice, education, rental markets, social media, disinformation campaigns, or anything else. But note that how criminal justice systems assess risk, allocate supervisory resources, and sentence people convicted of crimes are constitutive parts of those systems, not discrete actions. Different constitutive parts of the criminal justice system may warrant different analyses, and it is not clear that overall systems are best analyzed in a univocal way. In any case, our work here should be seen as sitting alongside other work on criminal justice, education, social media, disinformation, and so on. Systemic arguments and narrower arguments may be complementary rather than conflicting.

Finally, perhaps the biggest limitation to the project is that the sand is shifting beneath our feet. New ways of using technology continue apace, and some of the systems we discuss over the course of the book will soon have their problems resolved, change, or be replaced by new ones. But those newer algorithmic systems could also be used in ways that fail to respect autonomy, inhibit practical and cognitive agency, limit freedom, launder agency, and create drags on legitimation. That brings us full circle. What matters is not the particular technologies or the specific ways that those technologies are used. Rather, the underlying moral questions are a better anchor. Autonomy is certainly not the only moral value, and the nature, scope, and value of autonomy are contested. Moreover, the ways autonomy interacts with other values require continual reassessment. But, as we stated in Chapter 1, a rock-bottom assumption of this book is that autonomy matters – hopefully, considering autonomy helps us sort through a number of questions about technologies on the ground.

Footnotes

7 Agency Laundering and Information Technologies

1 Levin and Wong, “Self-Driving Uber Kills Arizona Woman in First Fatal Crash Involving Pedestrian.”

2 Duhigg, “How Companies Learn Your Secrets.” Note that a number of commentators believe the story makes too close a connection between predictive analytics and pregnancy-related advertising. There are reasons to send such advertising to people who are not pregnant, the advertising may have been based on criteria unrelated to pregnancy, and so on. See Harford, “Big Data.”

3 Angwin, Varner, and Tobin, “Facebook Enabled Advertisers to Reach ‘Jew Haters.’”

4 Angwin et al., “Machine Bias,” May 23, 2016.

5 Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata.”

6 Hart, Punishment and Responsibility: Essays in the Philosophy of Law, 211.

7 Our account of the structure of responsibility follows closely those articulated by Nicole Vincent and Chris Kutz. Both Vincent and Kutz recast Hart’s ship captain case to distinguish various facets of responsibility. See Vincent, “A Structured Taxonomy of Responsibility Concepts”; Kutz, “Responsibility.”

8 Antony Duff calls this “prospective” responsibility. See Duff, “Responsibility.” Here we should note that we are only discussing morally justifiable roles, where the holders of role responsibility are themselves moral agents. Hence, being assigned a role within a criminal organization, or being assigned a role when one lacks the capacity to act morally, cannot confer role responsibility in the required sense.

9 Goodin, “Responsibilities”; Goodin, “Apportioning Responsibilities”; Williams, “Responsibility as a Virtue”; Vincent, “A Structured Taxonomy of Responsibility Concepts.”

10 For the sake of simplicity, we will refer to “actions” in discussing responsibility. However, our account extends to omissions and dispositions. Note, too, that causal responsibility is complicated in overdetermination cases. But those cases do not affect our analysis here.

11 Kutz, “Responsibility,” 549.

12 This, of course, may not absolve the captain completely. See Fischer and Ravizza, Responsibility and Control: A Theory of Moral Responsibility, 49–51, for an explanation of “tracing” responsibility to prior actions.

13 There remain some controversial issues, including for example Frankfurt-style cases in which one may be responsible or not regardless of whether they do or do not know how their actions will be causally effective. But the issues in those cases turn on the link between causal responsibility and the ability to do otherwise. That does not affect our arguments.

14 Strawson, “Freedom and Resentment.”

15 Oshana, “Ascriptions of Responsibility,” 1997, 75 (emphasis added).

16 Smith, “Attributability, Answerability, and Accountability,” 577–578. Within this group of views, there is substantial debate about whether person X is responsible for Y in virtue of Y being attributable to X, of X being answerable for Y, or of X being accountable for Y. Scanlon’s view focuses on attributability (Scanlon 2008). Shoemaker distinguishes between attributability, answerability, and accountability (Shoemaker 2011). Smith (like Shoemaker) distinguishes a thing being attributable to a person and that person being responsible for it; however, she views accountability as a species of answerability. What is important for our purposes is that each of the views in this constellation recognizes that the content of responsibility claims is that responsible agents are those for whom it is appropriate, or for whom it ought to be the case, that they provide an account of their intentions, interests, and reasons for an action.

17 Oshana, “Ascriptions of Responsibility,” 1997, 77. Fischer and Ravizza provide an accountability view that bridges (a) Strawson’s attention to the social function of holding others responsible by way of reactive attitudes and (b) accountability views’ attention to reasons. Specifically, they maintain that an agent is responsible if they are an apt target of reactive attitudes. More important here, though, is that being morally responsible for actions requires that agents exercise “guidance control.” That requires that agents be at least weakly reasons-responsive, which is to say that where the agent has access to strong reasons in favor or against an action, they will act in accordance with those reasons. It also requires that the source of actions be the agent, which is to say that the reason-responsiveness is internal to the agent. Fischer and Ravizza, Responsibility and Control: A Theory of Moral Responsibility, 31–41.

18 18 U.S. Code § 1956 – Laundering of monetary instruments.

19 Other aspects of money laundering are about concealing identities of agents, for example by routing illicit money through shell corporations and bank accounts in permissive jurisdictions.

20 Two other accounts addressing causal and moral responsibility in the computing context are worth noting here. First, Daniel Dennett posits that machines may be credited with (i.e., responsible for) some tasks (e.g., Deep Blue beating Kasparov) but cannot be responsible for others (e.g., murdering Kasparov). Dennett, “When Hal Kills, Who’s to Blame? Computer Ethics.” We would argue that this difference tracks the causal/moral responsibility distinction, though that is not Dennett’s claim. Helen Nissenbaum argues that the increased use of computing systems poses a threat to accountability, based on four key barriers. These include the problem of many hands, the existence of bugs that cause computing failures, the ability to use computers as scapegoats, and the separation of system ownership from legal liability for problems. In doing so she notes that distributed causal responsibility can function to obscure responsibility and blameworthiness. See Nissenbaum, “Computing and Accountability,” 74. Our view of laundering can apply to each of the barriers she discusses but does not reduce to any of them. Consider the example of “blaming the computer” or pointing to the computer as the sole source of causal responsibility. That considered by itself would not seem to be a case of laundering, but instead just a straightforward denial of responsibility. If, instead, it included a process by which a party ensures the computer has causal responsibility, ascribes morally relevant qualities to the computer’s actions, obscures the party’s causal responsibility, and in so doing fails to adequately account for events for which the party is morally responsible, it could be laundering. In other words, merely blaming something else does not rise to laundering. Laundering is, we take it, more insidious in that it forestalls others’ abilities to demand an account of actions within domains of their legitimate concern.

21 Note that agency laundering does not require that one infringe one’s substantive role responsibilities (except to the extent that one’s role responsibility includes being transparent about one’s causal responsibility). In Chair, for example, it is plausible that the chair was fulfilling their role responsibilities with respect to the department’s curriculum. We return to this point in Section 7.4.

22 Angwin, Varner, and Tobin, “Facebook Enabled Advertisers to Reach ‘Jew Haters.’”

23 Note that Facebook has recently taken measures aimed at reducing discriminatory advertising (Levin 2019).

24 Oremus and Carey, “Facebook’s Offensive Ad Targeting Options Go Far beyond ‘Jew Haters.’”

25 Sandberg, “Last Week We Temporarily Disabled Some of Our Ads Tools.”

26 Note that this is a possible moral claim that one might make about Facebook and other media organizations. It is a distinct question from what kinds of legal rights and obligations information intermediaries have in light of (inter alia) the Telecommunications Act of 1996 (104th United States Congress), an act to promote competition and reduce regulation in order to secure lower prices and higher quality services for telecommunications consumers and encourage the rapid deployment of new telecommunications technologies; European Union, Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the internal market (Directive on electronic commerce); and European Union, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). See Keller, “The Right Tools.”

27 See, e.g., Facebook, “Community Standards Enforcement.”

28 One can frame this as a question of capacity responsibility. That is, if Facebook did not have epistemic access to the relevant information about the possibility of misuse, it would not have the necessary capacity to be morally responsible. Note here that epistemic access is not limited to actual knowledge, but the ability to garner it under reasonable conditions. Hence, Facebook’s moral responsibility will turn on the degree to which it could reasonably have known about potential for misuse. And that would define its degree of agency laundering.

One further complicating issue is mitigation. Facebook or another social media company might use its suggestion system to better understand relations among (for example) racists or purveyors of disinformation to promote anti-racist or epistemically sound information. The degree to which that would mitigate or deepen laundering is a question beyond what we can cover here. Thanks to an anonymous reviewer for making this point.

29 Binns, “Fairness in Machine Learning: Lessons from Political Philosophy.”

30 Coeckelbergh and Wackers, “Imagination, Distributed Responsibility and Vulnerable Technological Systems: The Case of Snorre A.”

31 One tool, named “Greyball,” was developed to surreptitiously ban users who Uber believed were violating the company’s terms of service. Uber eventually used Greyball to surreptitiously ban people Uber believed to be government regulators investigating whether Uber was operating illegally. See Isaac 2017.

32 Scheiber, “How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons.”

33 Scheiber; Calo and Rosenblat, “The Taking Economy: Uber, Information, and Power.”

34 Rosenblat, Uberland, 128–132.

35 Rosenblat, 98–100.

36 Scheiber, “How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons.”

37 Clark, “Uber Denies Researchers’ ‘Phantom Cars’ Map Claim.”

38 Rosenblat, Uberland, 129.

39 “Defendant’s Original Answer and Defenses,” in Houston Fed of Teachers, Local 2415 v. Houston Ind Sch Dist, 251 F. Supp. 3d.

40 Morganstein and Wasserstein, “ASA Statement on Value-Added Models,” 2.

41 Binns, “Algorithmic Accountability and Public Reason.”

42 Wisconsin v. Loomis, 881 N.W.2d paragraph 9.

43 Wisconsin v. Loomis, 881 N.W.2d paragraph 74.

44 Wisconsin v. Loomis, 881 N.W.2d paragraphs 88–98.

45 Note that Loomis demonstrates another way one can launder even while fulfilling one’s substantive role responsibilities. Imagine that the trial court had deliberated about its decision but did not explain its reasoning for the sentence. Suppose instead it merely wrote that it agreed with the COMPAS report’s assessment with no further comment. That would obscure the scope of the court’s causal responsibility and would fail to provide an adequate account of the decision. But in that case, the court would not have violated some other substantive role responsibility.

46 Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata.”

47 Matthias, 177 (emphasis in original).

48 Matthias, 177.

49 Thanks to a reviewer here for pointing out these possibilities and noting their similarities to “Chair” and “Democratic Chair.”

50 Johnson and Verdicchio, “Reframing AI Discourse.”

51 Cegłowski, “The Moral Economy of Tech.” Thanks to Suresh Venkatasubramanian for pointing us to this talk.

52 We appreciate an anonymous reviewer raising the question of whether there are cases where one maintains agency but launders accountability instead. Our sense is that any such case would involve minimizing one’s agency. In other words, accountability is the thing that is avoided, and one avoids it by laundering the degree to which one is (morally) responsible, which is in turn a function of a person’s agency in a process. Likewise, money laundering is a way to forestall accountability, and it is the laundering of some other thing (viz., money) that helps avoid the accountability.

53 Barocas and Selbst, “Big Data’s Disparate Impact,” 692–693, 712–714.

54 Santoni de Sio and van den Hoven, “Meaningful Human Control over Autonomous Systems: A Philosophical Account.”

55 Wagner, “Liable, but Not in Control?”

56 Elish, “Moral Crumple Zones.”

57 Thanks to an anonymous reviewer for this language and description. In this signaling respect, our use of a metaphor here works similarly to the “crumple zone” metaphor in Elish, discussed in Section 8.2.

58 Note that there may be other, non-laundering moral wrongs involved in goal prompts and queuing, as discussed in Section 7.5.

59 Regulation (EU) 2016/679 of the European Parliament and the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119).

60 Wachter, Mittelstadt, and Russell, “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR”; Selbst and Powles, “Meaningful Information and the Right to Explanation”; Kaminski, “The Right to Explanation, Explained.”

8 Democratic Obligations and Technological Threats to Legitimacy

1 The analogies advocates use to explain PredPol are varied and tend to liken crime to phenomena outside of complex, modern society. One analogy is with earthquakes. After an earthquake, it is likely that there will be another one in the same area, an aftershock. Similarly, on this theory, certain “place-based” crimes (such as burglary) are followed by crimes in the same area. See Goode, “Sending the Police before There’s a Crime.” Another analogy is to hunter-gatherers, in part because PredPol grew out of Brantingham’s work using computer models to understand hunter-gatherers. Brantingham states that “[c]riminals are effectively foragers … [c]hoosing what car to steal is like choosing which animal to hunt. The same decision-making processes go into both of these choices.” See Hoff, “Professor Helps Develop Predictive Policing by Using Trends to Predict, Prevent Crimes.” We are not convinced that these analogies are helpful beyond the fact that some kinds of crime exhibit geographic patterns.

2 Note that in this chapter we will use “predictive policing” to refer to “place-based” predictive policing, which focuses on forecasting where crimes will occur. This is often contrasted with “person-based” predictive policing. Person-based predictive policing raises similar concerns, but we will not focus on it here. For an excellent overview of these issues, see Ferguson, The Rise of Big Data Policing, chapter 3.

3 Grossman et al., “The 50 Best Inventions.”

4 Ferguson, The Rise of Big Data Policing, 65–67.

5 Edwards, “Predictive Policing Software Is More Accurate at Predicting Policing than Predicting Crime”; Ferguson, The Rise of Big Data Policing, 73–74.

6 Shklar, “The Liberalism of Fear.”

7 Schwartz, “After Qualified Immunity.”

8 For the purposes of this chapter we won’t draw sharp distinctions between government actions, decisions, policies, and laws. While they can be distinguished (both legally and philosophically) and different justificatory burdens may be appropriate to each, our concern here is about legitimacy and legitimation at a general level that is applicable to each.

9 Weber, Economy and Society.

10 This is a question of internal legitimacy or legitimacy within a state. There may be questions as to whether a state can justify itself internationally. See Peter, Democratic Legitimacy, chapter 1.

11 Matheson, “Weber and the Classification of Forms of Legitimacy.”

12 Waldman, “Power, Process, and Automated Decision-Making,” 614.

13 Waldman, 616.

14 For a helpful and recent overview of the topic of the grounds of political legitimacy, see Peter, “The Grounds of Political Legitimacy.”

15 Christman, The Politics of Persons: Individual Autonomy and Socio-Historical Selves, 239 (emphasis in original).

16 Oshana, Personal Autonomy in Society, 97–100.

17 Rawls, Political Liberalism, 137.

18 Pettit, Just Freedom, chapter 5.

19 Rawls, “Justice as Fairness,” 242.

20 Pettit, Just Freedom, 112.

21 Peter, “The Grounds of Political Legitimacy,” 377.

22 Raz, The Morality of Freedom, 53.

23 Enoch, “Authority and Reason-Giving,” 31.

24 Raz, The Morality of Freedom, 422.

25 Peter, “The Grounds of Political Legitimacy.”

26 For a longer discussion of this problem, see Peter, “The Epistemic Circumstances of Democracy.”

27 Estlund, Democratic Authority: A Philosophical Framework.

28 Note that “legitimation” in this normative sense is different from, though related to, the descriptive sense of legitimation in descriptive accounts of legitimacy. The descriptive sense of legitimation refers to the processes by which a governing entity establishes and maintains power, regardless of whether those processes justify that power. Descriptive legitimation may be necessary for normatively legitimate policies to function.

29 To be clear, it is neither possible nor desirable that every member of a polity understand every kind of government institution, law, and action. Rather, there are some broad states of affairs that most any member of society has a responsibility to understand and support (or not): some conception of fair representation and enfranchisement, basic human rights. Others will have responsibilities based on their communities, experiences, and expertise. The responsibility is to exercise autonomy (properly understood as social and relational) in some facets of social life.

30 Goode, “Sending the Police before There’s a Crime.”

31 Ferguson, The Rise of Big Data Policing, 72; Meijer and Wessels, “Predictive Policing: Review of Benefits and Drawbacks.”

32 Hunt, Saunders, and Hollywood, “Evaluation of the Shreveport Predictive Policing Experiment.”

33 Bond-Graham, “All Tomorrow’s Crimes”; Ferguson, The Rise of Big Data Policing, 79.

34 Ferguson, The Rise of Big Data Policing, 79; Hunt, Saunders, and Hollywood, “Evaluation of the Shreveport Predictive Policing Experiment.”

35 Mather and Winton, “LAPD Uses Its Helicopters to Stop Crimes before They Start.”

36 Ferguson, The Rise of Big Data Policing, 81; Mather and Winton, “LAPD Uses Its Helicopters to Stop Crimes before They Start.”

37 Ferguson, The Rise of Big Data Policing. For example, in Illinois v. Wardlow, 528 U.S. 119, the Supreme Court determined that simply being in a high-crime area did not suffice to justify a Terry stop, but it can be relevant in determining whether other facts are sufficiently suspicious for a stop.

38 Wylie, Mindf*ck: Cambridge Analytica and the Plot to Break America; Kaiser, Targeted: The Cambridge Analytica Whistleblower’s Inside Story of How Big Data, Trump, and Facebook Broke Democracy and How It Can Happen Again.

39 U.S. Department of Justice, “Report on the Investigation into Russian Interference in the 2016 Presidential Election, Volume I (‘Mueller Report’)”; Select Committee on Intelligence, United States Senate, “Report of the Select Committee on Intelligence, United States Senate, on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume I: Russian Efforts against Election Infrastructure with Additional Views.”

40 Howard et al., “Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States?”; DiResta et al., “The Tactics & Tropes of the Internet Research Agency.”

41 Cadwalladr and Graham-Harrison, “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach”; Kang and Frenkel, “Facebook Says Cambridge Analytica Harvested Data of up to 87 Million Users.”

42 Confessore, “Cambridge Analytica and Facebook.”

43 Kang and Frenkel, “Facebook Says Cambridge Analytica Harvested Data of up to 87 Million Users.”

44 Davies, “Ted Cruz Campaign Using Firm That Harvested Data on Millions of Unwitting Facebook Users.”

45 Wylie, Mindf*ck: Cambridge Analytica and the Plot to Break America, 127–129.

46 Cadwalladr and Graham-Harrison, “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach.”

47 DiResta et al., “The Tactics & Tropes of the Internet Research Agency,” 6.

48 U.S. v. Internet Research Agency, LLC (Indictment, February 16, 2018).

49 U.S. v. Internet Research Agency, LLC (Indictment, February 16, 2018).

50 U.S. Department of Justice, “Report on the Investigation into Russian Interference in the 2016 Presidential Election, Volume I (‘Mueller Report’),” 25.

51 Select Committee on Intelligence, United States Senate, “Report of the Select Committee on Intelligence, United States Senate, on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume II: Russia’s Use of Social Media and Additional Views,” 6.

52 Select Committee on Intelligence, United States Senate, 6–7.

53 U.S. Department of Justice, “Report on the Investigation into Russian Interference in the 2016 Presidential Election, Volume I (‘Mueller Report’),” 24–25.

54 U.S. Department of Justice, 26.

55 U.S. Department of Justice, 31–32.

56 Howard et al., “Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States?” 1.

57 DiResta et al., “The Tactics & Tropes of the Internet Research Agency,” 8–10.

58 DiResta et al., 65–66.

59 Wylie, Mindf*ck: Cambridge Analytica and the Plot to Break America, 142.

60 Wylie, 141, 155–156.

61 Hakim and Rosenberg, “Data Firm Tied to Trump Campaign Talked Business with Russians”; Wylie, Mindf*ck: Cambridge Analytica and the Plot to Break America, 139–159.

62 Weaver, “Facebook Scandal: I Am Being Used as Scapegoat – Academic Who Mined Data.”

63 Hamburger, “Cruz Campaign Credits Psychological Data and Analytics for Its Rising Success.”

64 Detrow, “What Did Cambridge Analytica Do during the 2016 Election?”

65 Matz et al., “Psychological Targeting as an Effective Approach to Digital Mass Persuasion,” 12717.

66 Matz et al., 12717.

67 Christman, The Politics of Persons: Individual Autonomy and Socio-Historical Selves, 239 (emphasis in original).

68 Wylie recounts that in summer 2014, Cambridge Analytica began creating fake Facebook and other social media pages and groups with politically charged identities that were tailored to be fed into the news feeds of susceptible users by recommendation algorithms. As people joined the groups, Cambridge Analytica would set up meetings in small spaces (in order to make the group feel big). Because of the nature of the groups, and the nature of the content fed into the groups, “[p]eople would show up and find a fellowship of anger and paranoia.” Wylie, Mindf*ck: Cambridge Analytica and the Plot to Break America, 128. These groups would, when large enough and prompted by Cambridge Analytica, meld with similar groups, creating a network of engaged, angry people who felt they were under siege. Wylie, 127–129.

69 Christman, The Politics of Persons: Individual Autonomy and Socio-Historical Selves, 155.

70 Wylie, Mindf*ck: Cambridge Analytica and the Plot to Break America, 127; Eyal, Hooked: How to Build Habit-Forming Products.

71 See Sections 2.2.2, 2.6, and 3.3.

9 Conclusions and Caveats

1 Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.”

2 Obermeyer et al., 450.
