
Part I - Some Cases, Some Ground Clearing

Published online by Cambridge University Press:  17 May 2021

Alan Rubel, University of Wisconsin, Madison
Clinton Castro, Florida International University
Adam Pham, California Institute of Technology

Algorithms and Autonomy: The Ethics of Automated Decision Systems, pp. 1–42
Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

1 Introduction

1.1 Three Cases
1.1.1 Big Bars Bad: Loomis and COMPAS

A little after 2 a.m. on February 11, 2013, Michael Vang sat in a stolen car and fired a shotgun twice into a house in La Crosse, Wisconsin. Shortly afterward, Vang and Eric Loomis crashed the car into a snowbank and fled on foot. They were soon caught, and police recovered spent shell casings, live ammunition, and the shotgun from the stolen and abandoned car. Vang pleaded no contest to operating a motor vehicle without the owner’s consent, attempting to flee or elude a traffic officer, and possession of methamphetamine. He was sentenced to ten years in prison.Footnote 1

The state of Wisconsin also charged Loomis with five crimes related to the incident. Because Loomis was a repeat offender, he would face a lengthy prison sentence if convicted. Loomis denied being involved in the shooting, and he maintained that he joined Vang in the car only after the shooting. Nonetheless, Loomis waived his right to a jury trial and pleaded guilty to two less severe charges (attempting to flee a traffic officer and operating a motor vehicle without owner consent). The plea agreement dismissed the three most severe chargesFootnote 2 but stipulated that they would be “read in,” such that the court would consider them at sentencing and would consider the underlying, alleged facts of the case to be true. In determining Loomis’s sentence, the circuit judge ordered a presentence investigative report (“PSI” or “presentence report”), which used a proprietary risk assessment tool called COMPAS, developed by Northpointe, Inc.Footnote 3

COMPAS takes as inputs a large number of data points about a defendant’s criminal behavior, history, beliefs, and job skills, and generates a series of risk scales. These include pretrial release risk (likelihood that a defendant will fail to appear in court or have a new felony arrest if released prior to trial), risk of general recidivism (whether a defendant will have subsequent, new offenses), and risk of violent recidivism.Footnote 4 Among the factors that COMPAS uses to assess these risks are current and pending charges, prior arrests, residential stability, employment status, community ties, substance abuse, criminal associates, history of violence, problems in job or educational settings, and age at first arrest.Footnote 5 Using information about these factors and a proprietary algorithm, COMPAS generates bar charts corresponding to degree of risk. According to Northpointe, “[b]ig bars, bad—little bars, good,” at least as a first gloss.Footnote 6 Users can dig deeper, though, to connect particular risk factors to relevant supervisory resources.

Loomis’s COMPAS report indicated that he presented a high risk of pretrial recidivism, general recidivism, and violent recidivism.Footnote 7 The presentence report recounted Northpointe’s warning about the limitations of COMPAS, explaining that its purpose is to identify offenders who could benefit from interventions and to identify risk factors that can be addressed during supervision.Footnote 8 Likewise, the presentence report emphasized that COMPAS scores are inappropriate to use in determining sentencing severity.Footnote 9 Nonetheless, the prosecution urged the court to use Loomis’s risk scores, and the circuit court referenced the scores at sentencing.Footnote 10 The presentence and COMPAS reports were not the only bases for the sentence: The other charges (i.e., those to which Loomis did not plead guilty) were read in, meaning that the trial court viewed those charges as a “serious, aggravating factor.”Footnote 11 The court sentenced Loomis to “within the maximum on the two charges” amounting to two consecutive prison terms, totaling sixteen and a half years.Footnote 12

1.1.2 School-wide Composite Scoring: Wagner and TVAAS

In 2010, the state of Tennessee began requiring that school systems evaluate teachers based on value-added models (VAMs). VAMs are algorithmic tools used to measure student achievement.Footnote 13 They seek to isolate and quantify teachers’ individual contributions to student progress in terms of the influence teachers have on their students’ annual standardized test scores.Footnote 14

One VAM endorsed by the state legislature is the Tennessee Value-Added Assessment System (TVAAS), a proprietary system developed by SAS, a business analytics software and services company. The TVAAS system included standardized tests for students in a variety of subjects, including algebra, English, biology, chemistry, and US history. Roughly half of teachers at the time of the case taught subjects not tested under TVAAS. Nonetheless, because of the law requiring teacher evaluation on the basis of VAMs, teachers of non-tested subjects were evaluated on the basis of a “school-wide composite score,” which is the average performance of all students on all subjects in that school. In other words, it is a score that is identical for all teachers in the school regardless of what subjects and which students they teach.

Teresa Wagner and Jennifer Braeuner taught non-tested subjects (physical education and art, respectively). From 2010 to 2013, each received excellent evaluation scores based on observations of their individual classes combined with their schools’ composite scores. In the 2013–14 school year, however, their schools’ composite scores dropped from the best possible score to the worst possible score, while their individual classroom observation scores remained excellent. The result was that Wagner’s and Braeuner’s overall individual evaluations decreased from the highest possible rating to middling. This was enough to preclude Wagner from receiving the performance bonus she had received in previous years and to make Braeuner ineligible for consideration for tenure. Moreover, each “suffered harm to her professional reputation, and experienced diminished morale and emotional distress.”Footnote 15 Nonetheless, the court determined that the teachers’ Fourteenth Amendment equal protection rights were not violated, on the grounds that use of TVAAS passed the rational basis test.Footnote 16

1.1.3 “Exiting” Teachers: Houston Fed of Teachers and EVAAS

In 2012, the Houston Independent School District (“Houston Schools”) began using a similar SAS-developed proprietary VAM (EVAAS) to evaluate teachers. Houston Schools had the “aggressive goal of ‘exiting’ 85% of teachers with ‘ineffective’ EVAAS ratings.”Footnote 17 And in the first three years using EVAAS, Houston Schools “exited” between 20 percent and 25 percent of the teachers rated ineffective. Moreover, the district court determined that the EVAAS scores were the sole basis for those actions.Footnote 18

As in Wagner, the Houston Schools court determined that the teachers’ substantive due process rights were not violated because use of EVAAS cleared the low rational basis standard.Footnote 19 However, the court determined that the teachers’ procedural due process rights were infringed. Because the system is proprietary, there was no meaningful way for teachers to ensure that their individual scores were calculated correctly. The court noted that there were apparently no mechanisms to correct basic clerical and coding errors. And where such mistakes did occur in a teacher’s score, Houston Schools refused to correct them because the correction process would disrupt the analysis. In response to a “frequently asked question,” the school district states:

Once completed, any re-analysis can only occur at the system level. What this means is that if we change information for one teacher, we would have to run the analysis for the entire district, which has two effects: one, this would be very costly for the district, as the analysis itself would have to be paid for again; and two, this re-analysis has the potential to change all other teachers’ reports (emphasis in original).Footnote 20

That last point is worth stressing. Each teacher’s individual score is dependent on all other teachers’ scores. So a mistake in one teacher’s score affects all others’ scores. As the court states, “[T]his interconnectivity means that the accuracy of one score hinges upon the accuracy of all.”Footnote 21
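To see how that kind of interconnectivity arises, consider a toy scoring scheme of our own devising (not SAS’s actual EVAAS model; the names and numbers are invented), in which each teacher’s score is their students’ average gain relative to the district-wide mean gain. Correcting a single clerical error moves the mean, and with it every other teacher’s score:

```python
# Toy illustration (not the actual EVAAS model): score each teacher's
# average student gain relative to the district-wide mean gain.
def value_added_scores(gains: dict[str, float]) -> dict[str, float]:
    district_mean = sum(gains.values()) / len(gains)
    return {teacher: gain - district_mean for teacher, gain in gains.items()}

gains = {"Ames": 4.0, "Baker": 6.0, "Cruz": 8.0}
print(value_added_scores(gains))   # {'Ames': -2.0, 'Baker': 0.0, 'Cruz': 2.0}

gains["Ames"] = 7.0                # correct one clerical error...
print(value_added_scores(gains))   # {'Ames': 0.0, 'Baker': -1.0, 'Cruz': 1.0}
                                   # ...and every other score moves too
```

Any model that normalizes individual results against a group statistic behaves this way, which is why a re-analysis for one teacher entails a re-analysis for all.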

1.1.4 So What?

Taking a step back from the specifics of the three cases, it is worth considering the impetus for decision-makers to adopt proprietary, algorithmic systems such as COMPAS, TVAAS, or EVAAS. Using sophisticated algorithms based on large datasets to help anticipate needs and better manage complex organizations like criminal justice systems and school systems makes a certain degree of sense. Human decision-makers have significant epistemic limitations, are prone to many kinds of biases, and at times act arbitrarily. And there are enormous advantages to using data-driven systems in many domains. However, such systems have substantial problems.

A best-selling book by Cathy O’Neil describes similar systems as “Weapons of Math Destruction” because they hide harms, biases, and inadequate models behind complicated and inscrutable veneers.Footnote 22 In another widely read book, mathematician Hannah Fry offers a series of cautionary tales about over- and misuse of algorithmic systems, even while being optimistic about the power of such systems to do important work.Footnote 23 In a series of articles for the news organization ProPublica, Julia Angwin and others make the case that risk assessment algorithms used in criminal justice are racially biased.Footnote 24 Others have argued that algorithmic systems are harmful, oppressive, opaque, and reflect and perpetuate discrimination.Footnote 25

Despite the growing literature on algorithmic harm, discrimination, and inscrutability, there remain several puzzles related to the cases we have described. Consider, for instance, Loomis. It is plausible that Loomis was not harmed, in that he received exactly the sentence he would have received without the PSI. After all, he had a violent criminal history; the charges in the case were related to a violent, dangerous crime; and he admitted to the underlying conduct on which the charges were based. The circuit court specifically concluded that he had been driving the car when Vang fired the shotgun, that the shooting might have killed one or more people, and that Loomis had not taken full responsibility for his role. Moreover, because he is White, and the COMPAS algorithm appears to disadvantage BlackFootnote 26 defendants (as we will discuss in Chapter 3), the judge’s use of the COMPAS report likely did not expose Loomis to racial discrimination. Nonetheless, something seems off about using COMPAS in the case, and we will argue that Loomis was wronged, regardless of whether his sentence was ultimately appropriate. But just how so is a difficult question.

Likewise, something seems off in the Wagner and Houston Schools cases, but it is not straightforward to pin down whether the teachers were wronged (and, if so, why). It is certainly true that some teachers were harmed in each case, but that is not enough to conclude that they were wronged. After all, any teacher who does not receive a bonus, becomes ineligible for tenure, or is laid off is harmed. But such harms are wrongful only if they are unwarranted. Moreover, it is an open question whether the VAMs used in those cases were either unfair or unjust. We will argue that the use of algorithmic systems in these cases is wrongful. But again, that conclusion requires substantial explanation.

Answering these questions is the central task of this book. And our central thesis is that understanding the moral salience of algorithms requires understanding how they relate to the autonomy of persons. Understanding this, in turn, requires that we address three broad issues: what we owe people as autonomous agents (Chapters 3 and 4), how we preserve the conditions under which people are free and autonomous (Chapters 5 and 6), and what the responsibilities of autonomous agents are (Chapters 7 and 8).

Before we go any further, let’s clarify our target.

1.2 What Is an Algorithm?

The academic literature and wider public discourse about the sorts of systems we have been discussing involve a constellation of concepts such as “algorithms,” “big data,” “machine learning,” and “predictive analytics.”Footnote 27 However, there is some ambiguity about these ideas and how they are related, and any discussion of emerging technologies requires some ground-clearing about the key concepts. There are nonetheless some general points of overlap in the literature. We won’t attempt to settle any taxonomical debates here once and for all, but we will fix some of the important concepts for the sake of clarity.

Among the key concepts we will use, “algorithm” is the most important, but its usage also invites confusion. At its most basic, an algorithm is just an explicit set of instructions for solving a problem. The instructions may be for a digital computer, but not necessarily so: a recipe for chocolate chip cookies, a set of instructions for operating a combination lock, and even the familiar procedure for long division are all algorithms. In contrast to this broad concept, we are considering algorithms in terms of their functional roles in complex technological systems.Footnote 28 The term “algorithm” is also ambiguous in this more specific setting. It can be used to refer either to a set of instructions for completing a specific task or to a system that is driven by such algorithms. This distinction makes a difference in patent law. Inventions built upon an abstract mathematical algorithm (such as a special mechanical process for molding synthetic rubber) can be patented, while the algorithm itself (meaning the equations used to guide the process or system) cannot.Footnote 29
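To make the broad sense vivid, here is the familiar long-division procedure rendered as a short program. This is a sketch of our own, offered purely for illustration; the same instructions can be carried out with pencil and paper, which is what makes it an algorithm in the basic sense:

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Grade-school long division, digit by digit.

    Returns (quotient, remainder). The point is that an algorithm in
    the broad sense is just an explicit, finite procedure; this one can
    be executed by hand as easily as by a computer.
    """
    quotient, remainder = 0, 0
    for digit in str(dividend):              # "bring down" one digit at a time
        remainder = remainder * 10 + int(digit)
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor
    return quotient, remainder

assert long_division(1234, 7) == (176, 2)    # since 7 * 176 + 2 == 1234
```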

Our focus here, however, is algorithms in the more applied, systematic sense. That is, we are concerned with algorithms that are incorporated into decision systems. These systems take a variety of forms. Some are parts of mechanical systems, for example, sensor systems in modern cars that activate warnings (e.g., for obstacles nearby) or control safety features (e.g., emergency brakes). Others are parts of information systems, for example, recommendation systems for videos (e.g., Netflix, YouTube), music (Spotify, Pandora), books (Amazon, Goodreads), and maps (Google Maps). Still others are incorporated into complex social structures (supply chain logistics, benefits services, law enforcement, criminal justice). These systems have become ubiquitous in our lives; everything from border security to party planning is now managed by algorithms of one sort or another. When we discuss COMPAS, EVAAS, and the Facebook News Feed in one breath, we are discussing algorithms in this broad sense. Moreover, algorithms in this sense are best understood as constitutive parts of socio-technical systems. They are not purely sets of instructions for carrying out a task, and they are not mere technological artifacts. Rather, they are used by individuals and groups and affect other individuals and groups such that they constitute an interrelated system that is both social and technological. For the remainder of the book we will refer to these kinds of systems in several ways, including “automated decision systems,” “algorithmic decision systems,” and (for the sake of terseness) simply “algorithms.”

Another key concept is “big data.” This term is often used to describe any data-mining approach to a problem using large datasets, but this glosses over much of what makes such datasets a distinctive ingredient of modern technological systems. Datasets that are “big” in the sense of big data are usually enormous and high-dimensional; often they consist of hundreds of thousands of rows and thousands of columns. However, a dataset that is merely big in this sense will not render the statistical magic often discussed under the rubric of predictive analytics. Rather, the systems and datasets that underlie algorithmic decision systems also have a number of other special properties.Footnote 30 These additional properties are often summarized in terms of the “three V’s”: volume, velocity, and variety. In other words, datasets that are big in the relevant sense are not only big in volume. They also have high velocity, meaning that they are often continuously updated or are created in real time, for example, systems offering driving route instructions that are updated to account for traffic conditions. Finally, they are diverse in variety, meaning that they encompass both data that is structured (i.e., organized in a predefined format that makes it comprehensible for analysis) and data that is unstructured (i.e., not organized in a predefined format).
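As a rough illustration of the variety point (with invented records, and using code only for concreteness), structured data supports direct computation because its format is predefined, whereas unstructured data must be processed further before any analysis:

```python
# Invented records for illustration. Structured data fits a predefined
# schema; unstructured data (here, free text) does not.
structured = [
    {"defendant_id": 1, "age_at_first_arrest": 19, "prior_arrests": 3},
    {"defendant_id": 2, "age_at_first_arrest": 31, "prior_arrests": 0},
]
unstructured = [
    "Officer's note: subject fled on foot and was located a block away.",
    "Intake interview, 34 minutes, audio attached; transcript pending.",
]

mean_priors = sum(r["prior_arrests"] for r in structured) / len(structured)
print(mean_priors)   # 1.5 -- computable because the format is predefined
```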

As with the concepts of algorithms and big data, “predictive analytics” is not defined by a well-codified set of rules, systems, or practices. At root, the term describes the application of data-mining techniques in developing predictive models, but it is more than that. Many of the model-building techniques, such as linear regression, are standard statistical methods that have been known for hundreds of years.Footnote 31 The characteristic feature of modern predictive analytics is not its use of algorithms or even the size or complexity of its datasets, but rather the analytical possibilities offered by machine learning.

Machine learning involves training computers to perform tasks according to statistical patterns and inferences rather than according to human-coded logical instructions. This approach incorporates different kinds of processes, the broadest categories of which are “supervised” and “unsupervised” learning. Supervised learning is the more straightforward and familiar of the two forms of machine learning. It involves systems that have been trained on large numbers of examples, either for classification (i.e., for classifying future examples) or for regression (i.e., for performing regression analysis). What makes the computer’s learning supervised in these cases is that both classification and regression processes involve a “supervision signal,” which is constructed from training on a set of pre-labeled examples and which defines the desired sort of output in advance. Classification, for instance, involves sorting novel examples into a known set of discrete values (e.g., determining whether a given image is of a cat, a dog, or a rabbit), given a set of pre-labeled training examples. Regression involves predicting some real-valued output (e.g., determining the value of a rental property in a complex market), given some set of examples.
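The following sketch shows both supervised tasks using the widely used scikit-learn library. It is a minimal illustration of our own, with invented features, labels, and target values, not a depiction of any system discussed in this book:

```python
# Minimal supervised-learning sketch (assumes scikit-learn is installed;
# all data here is invented for the example).
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: pre-labeled examples supply the supervision signal.
X_train = [[9.1, 0.5], [8.7, 0.3], [1.2, 7.9], [0.8, 8.4]]  # two features
labels = ["cat", "cat", "rabbit", "rabbit"]                 # known classes

classifier = LogisticRegression().fit(X_train, labels)
print(classifier.predict([[8.9, 0.4]]))  # sorts a novel example into a known class

# Regression: the desired output is a real value rather than a discrete label.
rents = [1200.0, 1150.0, 480.0, 455.0]                      # e.g., rental values
regressor = LinearRegression().fit(X_train, rents)
print(regressor.predict([[8.9, 0.4]]))   # predicts a number, not a category
```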

In contrast to supervised learning, unsupervised learning involves analysis using large numbers of examples but lacks a supervision signal. Unsupervised learning algorithms, then, are not given right answers in advance for the purposes of future prediction; rather, they are designed to somehow discern or reduce the deep structure of the (often high dimensional) dataset for explanatory purposes. This can take the form of “clustering,” in which the data is “naturally” grouped according to the distances between its data points, or “dimensionality reduction,” in which the dataset is either compressed or broken down for intuitive visualization. In recent years, these techniques have found applications in data center regulation, social media sentiment analysis, and disease analysis based on patient clustering.
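A companion sketch, again ours and with synthetic data, shows the two unsupervised techniques just described. No labels are supplied; the grouping and the compressed representation are recovered from the data alone:

```python
# Minimal unsupervised-learning sketch: no labels, no supervision signal.
# KMeans groups points by distance; PCA compresses dimensions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)),    # one "natural" group
               rng.normal(6, 1, (50, 5))])   # a second group, far away

clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # clustering
X_2d = PCA(n_components=2).fit_transform(X)  # dimensionality reduction
print(clusters[:5], X_2d.shape)              # e.g., [1 1 1 1 1] (100, 2)
```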

There is widespread recognition that there are ethical issues surrounding complex algorithmic systems and that there is a great deal of work to be done to better understand them. To some extent, concern about these issues is related to beliefs about the potential of unsupervised learning to help realize strong forms of AI.Footnote 32 The reality is more pedestrian.Footnote 33 Outside of cutting-edge AI labs such as OpenAI or DeepMind, machine learning is mainly a matter of employing familiar techniques such as classification, regression, clustering, or dimensionality reduction, at a big data scale. So rather than grappling with ghosts in machines that have not yet begun to haunt us, we aim to address the practical issues we already face.

1.3 Algorithms, Ethics, and Autonomy

We began this introduction by describing several recent legal disputes. Loomis, Wagner, and Houston Schools will be polestar cases throughout the book. But at root, this book addresses moral questions surrounding algorithmic decision systems. Whether use of COMPAS violates legal rights is a distinct (though related) question from whether it impinges on moral claims. Moreover, the proper scope of legal claims and how the law and legal systems ought to treat algorithmic systems are moral questions. Concerns about algorithmic systems have come from a range of sectors and include guidance from nongovernmental organizations, government agencies, legislators, and academics. For example, the UK’s Nuffield Foundation published a road map for research on ethical and societal implications of algorithmic systems. They argue that there are important conceptual gaps that need to be addressed by philosophical analysis. In their canvass of various sets of AI principles offered by scientific, engineering, corporate, and government groups, they find that “most of the principles proposed for AI ethics are not specific enough to be action guiding.”Footnote 34 Likewise, they point to a gap in the philosophical literature on ethics in algorithms, data, and AI.Footnote 35

Government entities have also recognized these moral concerns and the need for greater research on these issues. The US President’s National Science and Technology Council’s 2016 report, “Preparing for the Future of Artificial Intelligence,” outlined a number of ethical concerns surrounding AI and algorithmic systems.Footnote 36 While the report focuses on transparency and fairness, the issues it raises have autonomy implications as well. The Ethics Advisory Group to the European Data Protection Supervisor (EDPS-EAG) issued a report in 2018 outlining a slate of ethical concerns surrounding digital technologies, including algorithmic decision systems. In particular, the advisory group explained the importance of linking foundational values – among them autonomy, freedom, and democracy – to digital technologies. The UK parliament appointed a Lords Select Committee on Artificial Intelligence in 2017 to examine a handful of issues in the development and adoption of AI (within which they include algorithmic systems), one of which is “What are the ethical issues presented by the development and use of artificial intelligence?”Footnote 37 Among their recommendations are principles protecting “fairness and intelligibility” and prohibiting automated systems from having the power to “hurt, destroy, or deceive human beings.”Footnote 38 Members of both houses of the US Congress have introduced an Algorithmic Accountability Act that would impose requirements to create impact assessments and address bias and security issues.Footnote 39

The academic literature is also expanding in its criticism of algorithmic systems. Kirsten Martin, for instance, argues that big data generates negative externalities in the form of additional surveillance, which she calls “surveillance as pollution.”Footnote 40 Tal Zarsky argues that automated decision-making introduces both efficiency- and fairness-based problems.Footnote 41 Danielle Citron and Frank Pasquale argue for imposing auditing procedures on the process of algorithmic development on the basis of those problems.Footnote 42 Karen Yeung argues that the inoffensive practice of nudging, which classically involves only subtle forms of behavioral modification through manipulation of “choice architectures,” can be galvanized by predictive analytics to produce “hypernudging” platforms whose effects at scale wind up being radically paternalistic.Footnote 43 Cathy O’Neil groups such systems under the banner of “weapons of math destruction,”Footnote 44 arguing that they enjoy an aura of epistemic respectability that encourages us to use them beyond their actual capacities. Citron argues that addressing these problems requires nothing short of a new constitutional paradigm – a new “technological due process.”Footnote 45

Algorithmic systems (including both predictive systems and digital platforms) have come under substantial economic, political, and philosophical criticism. We agree with much of it. However, for a few reasons we do not defend any overall moral or ethical conclusion about the technologies themselves. First, the fact that they are rapidly advancing as part of an ongoing process means that the horizon for productive commentary on current technology is time-delimited. Second, we acknowledge that these technologies – whatever their life spans might be – can be employed for useful aims as well as for pernicious ones. There are few global, all-things-considered moral judgments that can be made about, for instance, the governance of Facebook’s News Feed or use of risk assessment algorithms. Third, we acknowledge that the algorithmic landscape of predictive analytics and digital platforms is here to stay in some form or other. It is possible to exert some influence on how these systems are employed and perhaps even develop new conceptions of fair play to cope with these changes, but predictive analytics and digital platforms will not be eliminated altogether.

For those reasons we aim to look beyond the particular features of the technologies as much as possible, treating the technologies themselves as case studies that are useful for making certain moral and social issues vivid and concrete, rather than as the sources of ontologically distinctive philosophical issues. In many cases, the philosophical issues have more to do with our psychological features and our social structures than with the inherent hazards of technological systems considered abstractly. Hence, what unifies the systems discussed in the book is not strictly about the technologies per se, but rather about the human values that are implicated by the designs and modes of operation of all of those socio-technical systems. All these systems, we argue, raise philosophical issues about autonomy.

Focusing on autonomy is important for several reasons. Primarily, it connects unease about algorithmic systems to a good that has deep, stable, and plausible moral value. This is a rich seam to mine in the same way that autonomy issues have been fundamentally important in other domains of applied ethics. Arguments grounded in autonomy connect concerns about algorithmic systems to an area with a broad and well-developed philosophical literature.

Moreover, by drawing out the importance of autonomy, our account can address concerns about algorithmic decision systems that are not captured by accounts that focus on fairness, harm, and bias. Algorithmic systems often reflect, harden, and create unfair structures; this is an enormous moral concern. However, that is only part of the moral importance of algorithmic systems. Loomis shows why: Loomis was plausibly wronged, but it is not clear that he was treated unfairly (at least in the sense of COMPAS treating him differently from other, similarly situated defendants), and it does not appear that he was materially harmed by use of COMPAS. Note, too, that while there are some scholars who have addressed whether certain kinds of algorithmic systems conflict with autonomy via manipulation,Footnote 46 our view is that autonomy is grounds for a much broader evaluation of algorithmic systems. Loomis, Wagner, and Houston Schools do not appear to involve manipulation in any strong sense.

Our focus on autonomy also provides a foundation for moral concerns that are often under-explained (e.g., transparency, filter bubbles). Specifically, a focus on autonomy can serve to route around at least some disputes about bias and fairness. It has become clear that there are different ways that a single system can be plausibly unfair to members of different groups.Footnote 47 Determining which facets of fairness matter the most requires considering different values. We will argue that an appeal to autonomy can help in that regard. Finally, note that our approach is consistent with other important critiques; that is, a concern about autonomy for the most part adds to rather than contradicts extant critiques.

1.4 Overview of the Book

To get at the different ways in which algorithmic systems bear upon autonomy, we have divided the book into four main parts. Part I is introductory and ground-clearing. Chapters 1 and 2 serve as introductory chapters, outlining the conceptual foundations and philosophical commitments that ground our arguments throughout the book. The primary task of Chapter 2 is providing an account of autonomy and its importance. It begins with a high-level explanation of autonomy itself and then canvasses several key conceptions of autonomy and the philosophical concerns underlying them. We then advance our ecumenical account of autonomy. Specifically, we draw on procedural, psychological accounts of autonomy as well as social accounts of personal autonomy. We argue that while they have important differences in how they explain key facets of autonomy, in practice they are substantially overlapping. Hence, we can draw on both as a foundation for the arguments we make in the book. That is because fully realized autonomy demands both procedural independence (which includes both epistemic competence conditions and authenticity conditions) and substantive independence (which includes conditions of reciprocal support and non-domination from society more broadly).

Part II builds upon our account of autonomy and our polestar cases to address the kinds of moral claims and the nature of the respect that persons are owed in virtue of their autonomy. In Chapter 3 we argue that people are owed systems of rules and practices that they can reasonably endorse. We begin the chapter with a closer consideration of VAMs used for K-12 teacher assessment. We argue that the problem with such tools cannot be reduced to concerns about their reliability or their potential for bias. Rather, teachers have a claim to incorporate their values into their lives as they see fit. And respecting teachers requires recognizing them as value-determiners, neither thwarting nor circumventing their ability to act according to those values without good reason. Moreover, as agents they are capable of abiding by fair terms of social agreement (so long as others do too), and hence “good reasons” will be reasons that they can accept as fair terms of cooperation. Teachers can endorse those reasons either as consistent with their own values or as a manifestation of fair social agreement.

We argue that VAMs fail to respect teachers as agents because they are used in a way that teachers cannot reasonably endorse, for four interrelated reasons. First is their reliability, which many have questioned. Second is that their results can be based on factors for which teachers are not morally responsible (e.g., student performance may correlate with teachers’ age, ethnicity, or gender). Third is stakes. The fact that an algorithmic system is unreliable or measures factors for which persons are not responsible is important primarily as a function of the stakes involved. Fourth is the relative burdens placed upon people subject to them. We conclude by applying our framework to our polestar cases (Loomis, Wagner, and Houston Schools).

One important, oft-cited criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure. In the EU, there is a debate about whether the General Data Protection Regulation (GDPR) contains a “right to explanation,” and if so what such a right entails. Our task in Chapter 4 is to address this informational component of algorithmic systems. We argue that information access is integral for respecting autonomy, and transparency policies should be tailored to advance autonomy.

To make this argument we distinguish two facets of agency (i.e., capacity to act). The first is practical agency, or the ability to act effectively according to one’s values. The second is what we call cognitive agency, which is the ability to exercise what Pamela Hieronymi calls “evaluative control” (i.e., the ability to control our affective states such as beliefs, desires, and attitudes). We argue that respecting autonomy requires providing persons sufficient information to exercise evaluative control and properly interpret the world and their place in it. We draw this distinction out by considering algorithmic systems used in background checks, and we apply the view to our polestar cases.

While Part II of the book considers what we owe people given that they have autonomy, Part III addresses our responsibility to secure the conditions under which people can act autonomously. Chapter 5 considers the relationship between algorithmic systems and freedom. There is substantial dispute about the concept and moral value of freedom. A key area of dispute is whether freedom is best understood in terms of negative, positive, or republican freedom. We offer an account according to which freedom is ecological and includes both republican freedom (which is to say, freedom from others’ exercise of arbitrary power) and positive freedom, properly understood (i.e., where positive freedom is a function of quality of agency). We argue that algorithmic systems in several ways conflict with ecological freedom.

Chapter 6 addresses a specific condition of autonomy from Chapter 2, viz., epistemic competence. It has been clear since the early 2000s that internet communication technologies generally, and algorithmically driven information systems in particular, create epistemically noxious environments. These include phenomena like filter bubbles and echo chambers as well as more insidious phenomena like toxic recommendation systems.Footnote 48 Nearly every media platform now employs content moderation systems to screen for the truly awful content that would make sites like YouTube, Facebook, Reddit, and others unnavigable.Footnote 49 That practice is relatively uncontroversial. We argue that there are further obligations on sites to exercise a kind of epistemic paternalism.

While Parts II and III begin from the premise that people have certain claims as agents (specifically, claims to the conditions that foster autonomy), Part IV shifts focus to the obligations of agents. Chapter 7 considers the autonomy and responsibility of those who deploy information technologies (as collectors of big data, users of algorithmic decision systems, developers of social media sites, and so on). Specifically, we argue that there is a type of wrong that arises when autonomous agents obscure responsibility for their actions, which we call “agency laundering.” At root, agency laundering involves a failure to meet one’s moral responsibility for an outcome by attributing causal responsibility to another person, group, process, or technology, and it does so by undermining a key component of responsibility itself, viz., accountability. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions and Uber driver interfaces, as well as to our polestar cases.

We then turn to the ways in which autonomy underwrites democratic governance. Political authority, which is to say the ability of a government to exercise power, may be justifiable or not. Whether it is justified and how it can come to be justified is a question of political legitimacy. In Chapter 8 we consider several views of legitimacy and argue for a hybrid version of normative legitimacy based on one recently offered by Fabienne Peter.Footnote 50 We explain that each facet of the hybrid requires a legitimation process that is itself grounded in autonomy. We argue that the autonomy view is a basis for criticism of the legitimation processes related to a predictive policing technology and to actions of Cambridge Analytica and the Internet Research Agency in recent elections. In Chapter 9 we offer some conclusions and caveats.

1.5 A Heuristic

It is worth stepping back for a moment before launching into the main substance of the book’s arguments in order to explain its purpose and approach. This book is about ethics and algorithmic decision systems, such as COMPAS, EVAAS, YouTube and other recommendation systems, Facebook Ad services, and others. Its aim is to better understand moral concerns about those systems. So we will consider questions such as the following: Is it morally justifiable for a court to use an algorithmic system such as COMPAS in determining whether, and if so for how long, to sentence a defendant to incarceration? Should school systems use algorithmic systems such as EVAAS to promote, reward, and fire teachers? Does YouTube have an obligation to better police the videos that are suggested to viewers?

These questions are multifaceted, and there are lots of different ways we might construe them. There is a meta-ethical issue of what the “should” and “justifiable” in the questions mean or refer to.Footnote 51 There are questions of legal doctrine such as whether the use of a system like COMPAS impinges upon due process rights and whether algorithmic systems violate contract terms or statutory protections. Those kinds of questions have ethical implications and will come up now and again in the book, but they are not the focus.

Instead we will focus on ethics (which we use interchangeably with “morals”). How, though, does one do that? There is no univocal recipe for answering moral questions. However, we can offer a heuristic for evaluating moral questions.Footnote 52 It is not the only way to think through problems, and it is useful in part because it shows just how difficult resolving some moral problems actually is. Nonetheless, it is a way to keep oneself in check and (perhaps more importantly) to let others understand what exactly one is doing. The hope is that following this kind of heuristic will help readers recognize what (if anything!) an ethical argument contributes and what its limitations are.

The first step is clarifying relevant concepts. In considering whether it is justifiable to use algorithmic systems in criminal justice decisions, for example, we will need to specify a number of concepts in order to make progress. An obvious one is “algorithmic system,” which we have tried to clarify in Section 1.2. Do we mean literally any algorithm, including an ink-and-paper set of sentencing guidelines? Or are we talking about only sophisticated, big data–driven systems? Or even machine learning–based systems? Another concept to clarify is “criminal justice decisions.” That could mean decisions about sentencing, supervision, early release, or something else altogether. As we will see in our discussion of COMPAS throughout the book, those issues matter. We will also spend substantial time working to clarify moral concepts. These include “autonomy,” “agency,” “fairness,” “freedom,” “legitimacy,” and others.

The next step is to get one’s empirical factsFootnote 53 straight. Of course, this is easier said than done. If we want to address the question of whether some use of algorithmic systems in criminal justice decisions is justifiable or not, we will want to know something about how such systems operate, what effects they have, and what the underlying picture of the criminal justice system looks like. We might want to know something about how people are likely to use such systems and how those systems interact with social structures like courts, prisons, parole boards, and the like. There is a world of facts surrounding any moral question, some of which are contested, some of which are unknowable, and some of which people get wrong. Regardless, any moral claim will rest on some understanding of the facts on the ground, and getting those straight, getting a clear-eyed view of which facts are tenuous and unknown, and having a sense of what we would like to know are vital in addressing moral questions.

Step three is discerning and applying the correct moral theory to the questions at hand. A few thousand years of religious and philosophical disputes have not answered the question of which moral theories are correct. It is debatable whether (and to what extent and by what metric) there has been progress, and there is an important debate about the degree of cultural convergence and divergence on fundamental questions of right and wrong. However, the entire premise of the project of debating the justifiability of anything – using animals for food, abortion, policing, capital punishment, universal suffrage, civil disobedience, mandating vaccinations, and, our topic, applying algorithmic systems in various contexts – is that there can be better and worse (i.e., more and less morally justifiable) answers. And hence, we need some basis on which to find our way on these kinds of questions.

Of course, the question of which moral theory is correct comprises its own, vast area of inquiry. Among the possibilities are consequentialism, deontology, virtue ethics, and contractarianism, each of which has myriad versions. And even once one has some set of moral values one believes to be correct, it is yet another daunting task to figure out just how to apply them. If one is a thoroughgoing consequentialist who believes that an action is right just in case that act leads to the best overall consequences, it remains an open question how to apply that standard to actions on the ground.

There are, however, a number of views of morality that are critical of the notion that there are moral principles to apply in the first place, even while accepting that actions can be more or less justified. Some of these views are forms of pragmatism, holding that the practice of theorizing is inseparable from practical consideration. Others adhere to “casuistry,” according to which one can find justification by comparing contested situations to similar “paradigmatic” cases. Still others are forms of moral particularism, according to which general moral principles (e.g., “lying is bad,” “killing is wrong,” “people should be treated equally”) carry no moral weight and provide no role in explaining why actions are right or wrong. Rather, particular acts are right or wrong, and any generalities we draw from particular acts are neither explanatory nor applicable to other cases. But even these “anti-theory” positions continue to insist that there is an account of moral justification (even if it is not one based on explanatorily prior moral principles). And so, if you are searching for the answer, as we are, to the question of whether using algorithmic systems under certain conditions is right or wrong, then you are committed at some level (and perhaps only implicitly) to there being better and worse accounts of moral justifiability on which to base such judgments. We should therefore understand “moral theory” in this third step of the heuristic (i.e., discern and apply the correct moral theory) to be quite broad. It is not only the discernment and application of moral principles but also (or, on the anti-theory views, rather) the discernment and application of the correct understanding of moral justification.

The final part of the heuristic is to apply principles of critical reflection to the concepts, empirical claims, and moral theories one has at one’s disposal. This does not imply that one can simply apply rote, bloodless, “pure” reason to problems. That would be impossible, and it would be unwarranted in light of the inevitable conceptual questions, factual lacunae, and steep burdens of determining the correct moral theories. Rather, it requires a much more limited (but no less important) set of constraints. One should follow one’s reasons where they lead. If the concepts, empirical claims, and moral theory entail that some action is impermissible, one should recognize that entailment and either accept the conclusion or look for the flaw in the premises that led one there. Another element of good reasoning is to recognize just how fallible reasoning is; in other words, one should avoid dogmatism. Relatedly, one should accept that one’s conceptual apparatus, understanding of facts, and moral theory are revisable. That’s a good thing. After all, we are bound to have mistaken beliefs and commitments.

This is a project in applied moral philosophy and it aims to use the heuristic we have outlined. It considers some questions that are legal, and it adopts some factual claims. Its contribution comes in several parts. First, it clarifies some relevant concepts, including (among others) autonomy, agency, freedom, paternalism, responsibility, and legitimacy. Second, as to the empirical facts, we largely take our cues from others, drawing on court cases and related documents, other academics’ empirical work on algorithmic systems, and journalists’ investigations on related issues. Third, we advance some claims of normative moral theory. Most of these are grounded in autonomy. However, we do not advance the view that autonomy is the only moral value relevant to analyzing and evaluating algorithmic systems. Rather, our view is that autonomy is an important value, and many moral concerns about algorithmic systems are best understood as, at bottom, issues of autonomy.

Put slightly differently, our project is based on the premise that people are, to some degree, and within important limitations, able to govern themselves. They can determine what kinds of things matter to them, decide how to use those values to order their lives, and come to agreements with others about how to navigate differences in belief about values. The capacity for autonomy, we argue, gives rise to moral claims that others respect (to some degree and within important limits) persons’ decisions about self-government. Our task, then, is to craft a set of considerations surrounding algorithmic systems based on autonomy. There are many other considerations. Some involve consequences. Some are about the law, including to what extent the law itself provides moral reasons. Yet another is virtue. There are also deeper questions of justice. Still others involve religious and quasi-religious issues. Others involve the proper scope of freedom for technologists to develop and implement systems. Still others involve trade secrets and capitalism. There is no way to adequately address all of these in a volume like this. But we submit that many potential readers will agree with our rock-bottom premise that autonomy matters. So we will start there, clarifying the concept and positing some moral principles that rest on it. On to Chapter 2.

2 Autonomy, Agency, and Responsibility

The central claim of this book is that understanding the moral salience of algorithmic systems requires understanding how they bear upon the autonomy of persons. In Parts II through IV of the book, we explain in detail several different ways in which algorithmic systems are important in relation to autonomy and agency. But before we can do that, we have the basic ground-clearing task of providing an account of autonomy and its value. The discussion in this chapter will set out the basics of our view. However, autonomy is a foundation for many of the concepts and arguments we develop in later chapters, such as responsibility, liberty, paternalism, and democratic legitimacy. We will explain these concepts in more detail in the chapters that draw on them.

We begin by setting out some basics of autonomy, including some key distinctions. Then we explain two broad categories of competing views, focusing on exemplars of each. We offer an ecumenical account of autonomy that incorporates features of both psychological autonomy and personal autonomy, and which requires both procedural and substantive independence. The chapter concludes with an explanation of some of the ways that considerations of autonomy bear upon our arguments in later chapters.

2.1 Autonomy Basics

Autonomy is at root self-government. Individuals can (to some degree and with more or less success) develop their own sense of value, make decisions about what matters most to them, and act accordingly. But just below the surface of that general statement is a rich, complex, and deeply contested set of questions about the scope, nature, and moral importance of autonomy. Our task in this chapter is to offer an account that shows our philosophical commitments, positions itself within some of the weightiest philosophical debates about autonomy, and explains where (and why) we decide not to take stands. The account we offer here is lightweight and ecumenical. It is lightweight because it takes on minimal commitments. It is ecumenical in that it is compatible with a broad range of views. This approach can go a long way in helping us understand the moral importance of algorithmic systems without having to resolve some of the deepest and most vexing disputes about the nature and scope of autonomy.

We can get a sense of the basic contours of autonomy by considering some archetypical ways that autonomy can be undermined.Footnote 1 For one, a person’s actions and preferences are not autonomous to the extent that they are the result of coercion or deceit. Where an employer gives an employee the option of working after they have clocked out or being fired, the employee’s choice to keep their job is of course their genuine preference. However, that choice is not autonomous because it is constrained in an illegitimate and coercive way. Suppose instead that the employer constrains the employee’s options through deceit. The employer lies to the employee that the company will fail if the employee does not put in extra, unpaid hours after they have clocked out. Again, the employee’s choice to do unpaid work is based on their genuine desire to keep the company afloat and retain their job. It is not, however, an autonomous choice because they have been deliberately deceived.

A more subtle type of case concerns adaptive preferences. Suppose that the employer continuously asks the employee to work extra, unpaid hours in order for the business to thrive. The employee believes that other employment options are limited and fears losing their job if they do not do the unpaid work. Although it is a burden, the employee does indeed want the business to thrive. They may consciously embrace the practice to make the burden seem manageable. This would seem to conflict with their autonomy in that they formed their preference to work extra, unpaid hours in response to illegitimate pressure from the employer and their belief that the alternatives were bad. Finally, suppose the employer comes to rely on the employee in a close working relationship, provides appropriate encouragement, and genuinely values the employee’s skill and professionalism. Indeed, the employer values it so much that they regularly promote other, similarly qualified employees to better-paid management positions so that they can keep drawing on the employee’s talents directly. The employee may come to think that the management positions are not really attractive anyway, as they involve a steep learning curve, increased time demands, and working with people they do not know. These are, in the employee’s mind, grapes that are out of reach and probably sour anyway. What the employee prefers conflicts with their autonomy in that they formed their preferences in response to their limited opportunities.Footnote 2

The key to each of these examples is that just what it means for individuals to develop their own sense of value, the conditions under which individuals’ decisions are their own, and precisely when people act according to their own preferences are themselves nuanced questions. Hence, it is not enough to say that autonomy is simply a matter of people being able to do what they want. Rather, the deeper question is this: Under what conditions are people and their choices, preferences, and values properly understood as autonomous?

The view that we will advocate here is that although autonomy of preference and choice is important, this sort of autonomy is limited. A fuller understanding of autonomy will focus on autonomy of persons, which is to say the social conditions under which a person is autonomous. Our view, though, is ecumenical in that it incorporates both psychological views and personal autonomy views. The reason is that the normative considerations substantially overlap, as we explain in Section 2.4.

2.2 Some Distinctions

With our first gloss in mind and having described several ways in which persons’ autonomy may be impinged upon, it will be useful to make a few distinctions. After doing so, we can fill out our conception.

2.2.1 Global versus Local

Autonomy can apply to a relatively narrower or wider range of circumstances. For example, a person may be autonomous with respect to local decisions. That is, they may be able to make decisions about actions with immediate effect and may be able to ensure that those decisions comport with their values. Suppose, for example, that Ali exercises substantial control over most aspects of her life and is able to do more or less as she pleases. However, suppose that her employer routinely assigns her to projects that she does not like. When Ali asks for different responsibilities, the employer ignores her. This is a case in which Ali lacks local autonomy over the circumstances of her employment, and that is true regardless of whether she could find another job relatively quickly and regardless of whether she can effectively govern other facets of her life. In other words, her global autonomy is consistent with her lack of local autonomy.Footnote 3

It is also true that a person may be locally autonomous but lack global autonomy. Suppose that Bari lacks financial resources, lives in an isolated community, and has had little opportunity to develop her talents. And suppose that her family and community expect her to fulfill a strict set of social obligations: care for her siblings and older relatives, cook for the family, and obey her husband. However, she has a great deal of latitude in how she fulfills those expectations. Bari, in this case, lacks global autonomy even while being able to exercise local autonomy. Notice that Bari may lack global autonomy either because the social expectations placed upon her are strictly enforced or because she has internalized those expectations and formed adaptive preferences favoring her actual circumstances.

Distinguishing local and global autonomy matters in two ways. First, particular cases of coercion and manipulation are wrong (when they are wrong) at least in part because they involve failures to treat people as autonomous, but they need not be so substantial as to undermine a person’s overall ability to govern their life. Second, the fact (if it is a fact) that some groups of people are not currently the subject of manipulation and coercion is not enough to ensure that their autonomy is respected globally. They may have developed values and preferences under oppressive or otherwise limiting circumstances, or their circumstances may be constrained overall.

2.2.2 Capacity, Exercise, Successful Self-government

Closely related to the distinction between global and local autonomy are a number of different meanings of autonomy that are relevant in moral and political philosophy. Joel Feinberg, for one, distinguishes the capacity to govern oneself, successful self-government, the personal ideal of self-government, and moral claims one might assert that reflect a person’s sovereignty over themself.Footnote 4

The first meaning of autonomy is the capacity for (global) self-government. There are a number of rock-bottom, baseline capacities that a person must have to self-govern. They must be able to consider the world and make rational assessments and decisions. “Rational” here means only that there is a close connection between facts and inferences, and a person’s decisions either line up with well-enough ordered assessments of facts and reasons, or they make decisions fully aware that they do not line up. The second issue of capacity is that a person must not be so profoundly damaged, and must not have had their world so dramatically circumscribed, that their understanding of it prevents well-enough ordered assessments of facts and reasons. Capacity autonomy does not require much; it is a bar low enough that adults can generally clear it. Note that “capacity” is distinct from “potential.” Infants and young children lack the (present) capacity to be autonomous, but they have the potential to develop that capacity. So the “capacity for autonomy,” we might say, refers to the current state of a person and whether they can exercise self-government at this moment so long as the situation in which they are placed is conducive to that exercise. A person held hostage has the capacity for autonomy, should they escape or be released. A disenfranchised person in an apartheid state has the capacity for autonomy, should the governing regime change.

Contrast capacity autonomy with the successful exercise of autonomy, which Feinberg calls “autonomy as condition.”Footnote 5 A person with the capacity to govern themself may be constrained by circumstances, hindered by other people, or simply fail to use their capacity. Autonomy in this sense involves several things. It includes subjecting one’s values to scrutiny, that is, exercising the capacity to self-scrutinize. It also involves engaging in some degree of self-determination, which is to say acting on one’s values, principles, beliefs, and so forth to steer one’s course. Feinberg suggests that it also involves a degree of self-legislation: adopting moral principles and holding oneself to those principles. This way of conceptualizing autonomy owes a great deal to Kant. The third (“autonomy”) formulation of the Categorical Imperative is “the idea of the will of every rational being as a will giving universal law,” and it requires that a person act according to a maxim of their will such that their will “could at the same time have as its object itself as giving universal law.”Footnote 6 Such maxims bind, and one acts autonomously in following them, in that one both sets and follows them.

Successful exercise of autonomy does not require that one invent this self-legislation from whole cloth; one will invariably adopt principles that are rooted in one’s community. The capacity to self-govern, the values an agent develops, and the ways in which they incorporate those values into their life are socially situated.Footnote 7 Moreover, developing one’s sense of what is important depends on social conditions that nurture the ability to do so.Footnote 8 Social structures may delimit the conceptions of value that are available for a person to draw upon in developing their own principles and their own sense of value. Persons’ abilities to incorporate their values into their important decisions will depend on the prevailing “social forms” and the opportunities that exist in the broader social context.Footnote 9 Nonetheless, holding oneself to one’s principles autonomously requires that one adhere to them because they are principles, not merely because others happen to adhere to them. This in turn requires authenticity, self-control, and taking moral responsibility for one’s actions.

Such conditions form the basis of a third conception: autonomy as ideal. The successful-exercise conception outlines important component parts of autonomy. However, people are not isolated, wholly self-legislating individuals. Rather, they are parts of families, communities, and other social groups, and exercising autonomy must be compatible with membership in such groups. Moreover, the elements of successfully exercising autonomy (self-determination, authenticity, self-control, integrity) can be put to bad ends. Autonomy is not the only relevant value, and it is a mistake to elevate it above the social and historical context of human life. Abstracting autonomy away from humans’ social nature altogether, Feinberg explains, ignores the fact that no one selects “his country, his language, his social community and traditions. No individual invents afresh his tools, his technology, his public institutions and procedures [sic].”Footnote 10 Moreover, all of those things (country, language, tradition, etc.) are key parts of being human, and we all become self-aware within those contexts and “as part of ongoing social processes.”Footnote 11 Hence, a person’s successful exercise of their capacity to self-govern according to their individual preferences may not be ideal, in part because it risks ignoring one’s social circumstances and may conflict with one’s responsibilities to other community members.

Lastly, we may understand autonomy as a right. Individuals with the capacity for autonomy have certain valid claims and others have correlative obligations. One is that individuals have a claim to be recognized as having the capacity to govern themselves. Related to recognition is that individuals have decisional prerogatives; where others interfere with a person’s decisions, that interference is a limitation on autonomy (though it may well be justifiable). The extent of those claims is a further question we return to throughout the book. What matters here, though, is that the notion of autonomy as a right is distinct from the notions of autonomy as a capacity, a condition, or an ideal.

2.3 The Key Split

These distinctions are important and useful, but we still owe our own account. The view we advance in this book incorporates two different conceptions of autonomy. The first focuses on the relationship between a person and their motives, intentions, values, and preferences; that is, it holds that autonomy is primarily understood in terms of how an individual’s desires, preferences, and actions relate to their history and psychology. For lack of a better term, we will call this “psychological autonomy.” A different way of understanding autonomy focuses on an individual’s social conditions, relationships to other people, status within a community and polity, and the range of options and opportunities in which one develops a sense of self and others. Following Marina Oshana, we will call this “autonomy of persons” or “personal autonomy.”Footnote 12 The boundaries between these two ways of understanding autonomy are blurry, as a person’s social circumstances are closely related to their desires, values, and preferences. Nonetheless, both facets are important in understanding the relationship between algorithmic decision systems and autonomy. We will argue that both components are morally important, and the conceptual differences between them matter less than the fact that both conceptions can underwrite similar social, moral, political, and legal claims.

To explain the two different ways of understanding autonomy, we will examine an exemplar of each.

2.3.1 Psychological Autonomy

There are different ways to characterize psychological conceptions of autonomy. Oshana emphasizes that such views are fundamentally concerned with psychological authenticity.Footnote 13 Perhaps the best, most succinct characterization of this set of views is that they are “accounts of the autonomous agent’s special relation to her own motives.”Footnote 14 Another fruitful way to understand such views is that they are procedural accounts, which is to say that autonomy does not turn on a person’s particular desires, preferences, and beliefs.Footnote 15 Rather, what matters is whether their motivations and actions stand in the right relation to their psychology.Footnote 16

An important objection to procedural views is that focusing on the individual and their psychology may not adequately account for the ways individuals are fundamentally social, embedded within relationships, embodied, and historical. Those social and relational facts are inextricable from how a person develops values, preferences, and desires. A view of autonomy whereby a person is autonomous just in case their actions comport with their higher-order preferencesFootnote 17 will miss the fact that a person’s highest-order preferences may be formed in oppressive (or otherwise severely delimited) circumstances and hence are themselves suspect.Footnote 18 We address this concern in our discussion of personal autonomy in Section 2.3.2.

Nonetheless, autonomy must have at least some procedural, psychological component, and such views need not be so narrowly constructed that they recognize only lone individuals, independent of their social relations and historical selves. Consider John Christman’s account, which places individuals’ social and historical contexts at the center of autonomy.

Christman argues that individual autonomy has two key requirements: competence and authenticity.Footnote 19 Each is necessary and both are jointly sufficient for a person to be autonomous. The competence conditions Christman describes are similar to baseline autonomy requirements in other accounts. First, a person must have some set of fundamental values and commitments and the ability to “effectively form intentions” to act on the basis of those values and commitments.Footnote 20 Second, they must have the capacity to critically reflect on their basic values and commitments, their motivations, and other facets of their decision-making abilities. In other words, autonomy requires that a person have the capacities to form, develop, and critically reflect on their values and to intentionally and effectively act in accord with those values. That’s the easy part.

More controversial, and more difficult to reckon with, is the authenticity requirement. This is a conditional requirement: a person would not be alienated from their basic values and commitments were they to “engage in sustained critical reflection” on them.Footnote 21 By “alienated,” Christman just means that the person would regard their values and commitments as incompatible with their sense of themself and their practical identity over time. As Christman puts it, those values and commitments could “not be sustained as part of an acceptable autobiographical narrative organized by her diachronic [i.e., existing over time] practical identity.”Footnote 22 For critical reflection to be adequate to ensure autonomy, it must be sustained over time, occur in a range of conditions, consider the processes by which the person came to form their values and commitments, and not be distorted by other factors.Footnote 23

There are three key features of Christman’s account that will help as we develop our own view. The first is that the view is proceduralist; it is based on the mechanisms and processes by which persons come to have values and commitments and how they incorporate those into beliefs and actions. Procedural accounts do not depend on the content of persons’ values. An advantage to procedural views is that they do not presume that any particular values, commitments, and beliefs are inconsistent with autonomy. Procedural views contrast with perfectionist views, which build into their conceptions of autonomy at least some requirements for the content of persons’ values and commitments.

Second, Christman develops his account with an eye to addressing important lines of criticism of psychological autonomy views. These criticisms are (1) that selves are decentered and historical (i.e., there is no isolated, asocial self that is cut off from the circumstances that form one’s sense of self) and (2) that values and commitments form in social contexts and in relation to other people and communities.Footnote 24 Third, Christman’s view is internalist, which is to say that the criteria for autonomy are indexed only to a person’s own psychology.

2.3.2 Personal Autonomy

Christman’s understanding, like those of Frankfurt, Dworkin, Fischer and Ravizza, and others, is that the relation between a person and their intentions (in Christman’s case, whether the competence conditions and the authenticity conditions obtain) forms sufficient conditions for a person to be autonomous. Such accounts have received substantial criticism in recent years because of their focus on individuals rather than their social connections, communities, structures, physical embodiment, emotion, and so forth. We return to some of these critiques in Section 2.6.

To capture this separate family of conceptions of autonomy, consider Marina Oshana’s thoroughly social and relational account. Oshana’s starting point is that views like Christman’s, which center on an agent’s relation to his or her values and preferences, are at root about persons’ psychologies. However, Oshana argues, people are not reducible to their psychological states, and an adequate account of personal autonomy must involve more than a person’s psychological history, competency, and authenticity.Footnote 25 Autonomy should instead be understood primarily as a characteristic of persons, and personal autonomy is inherently a social phenomenon: “Autonomy is not a phenomenon merely enhanced or lessened by [social relations]. Social relations do not just causally facilitate or impair the exercise of autonomy. Rather, appropriate social relations form an inherent part of what it means to be self-directed.”Footnote 26

Oshana’s argument draws on several cases in which agents fully accept and internalize values that subordinate their own interests: a person raised in oppressive circumstances who embraces the subservience their community demands, a person who voluntarily joins a total institution and commits to serving the institution and its hierarchy, and so on. Among Oshana’s examples is a woman who chooses and values being subservient in a marriage and being the “angel of the house.” She has no say in important facets of family financial and life decisions, and she develops none of the professional and educational skills and social capital that would allow her to change her situation. As Oshana constructs the case, the woman’s reasons for her choice are consistent with her values, she is reflective about those values, she finds the life wholly gratifying, and her values are not based on social conditions that established or reinforced a belief in her inferiority. In other words, Oshana describes a person who meets both a competence condition and an authenticity condition. Nonetheless, Oshana argues that the “angel of the house” lacks autonomy precisely because of her social status and her personal relationships, regardless of whether she is the architect of both. Similarly, Oshana argues, a person who surrenders to the strictures of a monastic religious order such that the order controls all facets of their life lacks autonomy regardless of whether doing so was a choice that comports with their deepest values.

The point of Oshana’s examples is that there are plausibly individuals who do not have the power to manage important aspects of their lives because of restrictive or oppressive social circumstances, and yet their situations may be consistent with their authentic values. Nonetheless, persons’ de facto ability to manage important facets of their lives is “tantamount to governance over their selves.”Footnote 27 And because self-governance just is autonomy, the lack of de facto power is incompatible with autonomy.

To sharpen her argument, Oshana contrasts the angel of the house case with the “would-be surrendered woman,” who has a great deal of financial, social, and educational independence, is professionally very successful, and can exercise a great deal of global and local control over the course of her life.Footnote 28 However, her values and self-conception are to live like the angel of the house, deferential to a controlling partner. In other words, her actions and motivations in navigating life are inauthentic (at least in Christman’s sense). Oshana maintains that she is nonetheless autonomous precisely because her social and relational circumstances allow her to govern herself, even if she has not been able to do so in a way that satisfies her deeper value commitments.

There are several important consequences of Oshana’s conception. One is that it de-emphasizes autonomy with respect to preferences and values and instead emphasizes autonomy of persons. On Oshana’s view a person can be autonomous with respect to their desires, preferences, and values, but still not be autonomous in a morally important sense.

Another feature is that Oshana’s view is weakly perfectionist. Her view is that autonomy requires that a person recognize themself as the person with primary authority over their life; that is a substantive, non-proceduralist value and her view is hence perfectionist. It is only weakly perfectionist because recognizing oneself as having authority over their life is a relatively nondemanding requirement. Many different conceptions of value will be compatible with it.Footnote 29

Third, related to the first consequence and perhaps most controversial, is that authenticity (in Christman’s sense, in Frankfurt’s sense, or anyone else’s) is neither necessary nor sufficient for a person to be autonomous. On Oshana’s view a person can be autonomous even if they are, upon reflection, alienated from their desires, motivations, and aspects of their character. So long as they have the latitude to change that desire and to act on it, Oshana holds that they are still autonomous.

Fourth, social conditions are key for autonomy in multiple ways. They may be causally important. Christman agrees on this point – indeed, it is hard to see how it could be otherwise. Social conditions are also at least partially constitutive of autonomy. Regardless of the connection between a person’s values and preferences and their psychology, one cannot be autonomous on Oshana’s view if others fail to foster, support, and respect one’s self-governance.

The fundamental difference between Christman’s and Oshana’s accounts concerns whether autonomy properly applies to one’s psychology (including its contents over time and the social and historic processes affecting it) or to one’s person (including one’s de facto power to determine their affairs). This difference entails that the conditions of autonomy are different on the two accounts. Whereas Christman’s view demands only competence and authenticity, which are procedural requirements, Oshana’s view also includes additional requirements that are associated with appropriately conducive social conditions and choice architectures.

The conditions for personal autonomy on Oshana’s account do have some points of contact with Christman’s. In particular, she argues that personal autonomy requires a kind of procedural independence, which in turn includes a number of competence conditions. Foremost among these is epistemic competence. A person must be self-reflective and self-aware, and they must conceive of themself as a person “who can affect the world in light of a perspective and plan for life that is of her making.”Footnote 30 Related is that a person must be rational. Again, this is not the caricature of “rational” in the sense of being coldly calculating. Rather, one must be attuned to one’s environment and understand the world around oneself. One must be able to develop, and be disposed to follow through with, plans based on one’s own ends and sense of value. Furthermore, one must be able to distinguish choices, actions, and relationships that are conducive to self-governance from those that are not.

Another condition that is compatible with Christman’s view is that agents must have self-respect. Respect involves recognizing the inherent worth of a person and treating them accordingly. To respect another person, one cannot understand that person’s value as deriving solely from their usefulness to oneself. That, in turn, demands recognizing them as autonomous and not “treating [them] in a manner that makes light of [their] autonomy.”Footnote 31 Likewise, respecting oneself demands that one understand oneself (and one’s commitments) as inherently valuable.Footnote 32 Christman’s view would account for self-respect under non-alienation.Footnote 33

What is most distinctive of Oshana’s account is that it posits conditions that go beyond an agent’s relationship to their own intentions and preferences. One of these involves an agent’s control. The view that autonomy requires an agent to have control over their actions is familiar in accounts of psychological autonomy. For example, Fischer and Ravizza argue that autonomy demands that agents exercise a kind of “guidance control,” such that the source of actions is the agent themself.Footnote 34 However, guidance control can be understood as local guidance: Is this action something over which the agent exercises guidance? Oshana’s view is that personal autonomy requires a more global control over one’s ability to determine how one lives. Can one, in other words, effectively act to advance one’s interests and satisfy one’s commitments, or do one’s social circumstances allow others to severely limit one’s ability to exercise control (should they choose to do so)? Oshana writes:

Autonomy necessitates a fairly robust variety of control of a sort that must be effective within a person’s social situation … . We cannot claim a person is autonomous if she is party to social relations or institutions that would enfeeble her ability to determine how she will live if it were the will of others that they do so.Footnote 35

It is neither possible nor necessary for autonomy that an agent be able to exercise control over all aspects of their life. Just how much control autonomy demands, though, is a vexing question. Here Oshana’s conception closely follows Joseph Raz’s view that mere choice, and control over that choice, is insufficient. Rather, autonomy requires that an agent have access to a variety of relevant, attractive options. A person with the choice of whether to eat bland food now or eat it later does not have autonomy over their diet; and a person who must constantly make decisions that affect their very survival may have a variety of options, but those options are not attractive, and the person is therefore not autonomous with respect to their life’s course.Footnote 36

The crux of Oshana’s account, and another reason it is distinctive, is her argument that autonomy demands substantive independence. For a person to be substantively independent they must have sufficient social and relational support, and they must not face social and relational impediments so substantial as to prevent them from enjoying the de facto ability to determine their life course.

Substantive independence is itself multifaceted. First, it involves social conditions that afford a person some baseline level of social and psychological security. A person with very little such security is vulnerable to arbitrary actions by others and hence will lack the de facto power necessary for global autonomy. Second, substantive independence requires that a person be able to have values and pursue interests that differ from those of people with relatively greater power and influence, and to do so without risk of reprisal. The idea here is that if others use a person’s values and interests as a reason to exact a toll on that person, then the person depends on others’ forbearance of those values and interests. And one cannot be self-governing while depending on that forbearance.

Substantive independence also requires a degree of financial self-sufficiency. That simply means that one is not subject to the control of others through financial means. And if, for example, one is dependent on a state or employer for financial support, the terms of that support or employment cannot be contingent on inappropriate conditions or subject to arbitrary termination.Footnote 37 Further, substantive independence requires that a person not be subject to misinformation that curtails their ability to exercise their agency over facets of their life.

Oshana summarizes her understanding of substantive independence by drawing on Philip Pettit’s work on republican freedom. Specifically, she argues (following Pettit) that understanding freedom as merely freedom of choice misses the mark. The idea, which we address at length in Chapter 5, is that many accounts of freedom (or liberty) focus on negative liberty or whether a person is subject to external constraints, imposed by others, which prevent them from engaging in the activities that they wish to pursue. Other accounts address positive liberty, which is to say persons’ de facto ability to engage in the activities they wish. A person might be free of others’ constraints to, for example, produce a movie yet might not have the financial or social wherewithal to actually do so. In that case they would have negative liberty but lack positive liberty. Both of those facets of liberty focus on freedom of choice.

Oshana thinks these conceptions leave out the fundamental importance of freedom of the chooser. A person may be fortunate enough that others do not interfere with their actions, and they may have resources to act more or less according to their values. However, if other people have social power to interfere with them and constrain their ability to function, or if they have to negotiate obstacles and order their life to make such interference less likely, their freedom is nonetheless constrained. Specifically, their social freedom is diminished.

Consider, for example, a series of lawsuits concerning “stop-and-frisk” policies in New York City.Footnote 38 From 2004 through 2012, New York City police conducted over 4.4 million “Terry” stops of people in the streets. Terry stops are short, informational police stops that do not rise to the level of a full search. Because they are limited in scope, the legal requirement for conducting a Terry stop is lower than the “probable cause” standard required under the Fourth Amendment of the U.S. Constitution. Specifically, police may stop people under the Terry standard if they have “specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion.”Footnote 39 Hence, police may stop persons if they have reasonable suspicion that “criminal activity may be afoot” and may conduct a brief frisk for weapons so long as it is based on a reasonable suspicion that the person is armed and dangerous. Fifty-two percent of the Terry stops conducted during this period included a frisk for weapons, though 98.5 percent of the frisks turned up no weapon. A very small percentage of the stops resulted in either arrest (6 percent) or summons (6 percent). The overwhelming majority of the persons stopped were Black (52 percent) or Hispanic (31 percent); 10 percent of the people stopped were White (the population of New York City at the time was 23 percent Black, 29 percent Hispanic, and 33 percent White). Police used some kind of physical force in 23 percent of stops of Black people, 24 percent of stops of Hispanic people, and 17 percent of stops of White people. Finally, despite the fact that the Terry standard is a low hurdle, police conducting stop-and-frisks often did not state a specific crime they suspected to be afoot (in 2004, police failed to state a specific crime for only 1 percent of stops, but by 2009 police failed to state a specific crime for 36 percent of stops).Footnote 40 Moreover, data collected from the forms that police filled out after stops indicated that many more stops were “apparently unjustified,” as officers often simply checked boxes to justify stops post hoc (e.g., checking boxes indicating persons stopped had made “furtive movements” or made “furtive movements” along with having a “suspicious bulge” in their clothing).Footnote 41 This pattern led to a series of lawsuits, and the U.S. District Court determined that the city had violated persons’ Fourth and Fourteenth Amendment rights by acting with deliberate indifference to unconstitutional police stops.Footnote 42
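To make the scale of these disparities concrete, the percentages reported above can be combined into simple stop-to-population ratios. The short Python sketch below is purely illustrative; it assumes nothing beyond the figures already cited, and the variable names and rounding are ours.

    # Illustrative only: ratio of each group's share of Terry stops
    # (NYC, 2004-2012, per the figures above) to its population share.
    stop_share = {"Black": 0.52, "Hispanic": 0.31, "White": 0.10}
    population_share = {"Black": 0.23, "Hispanic": 0.29, "White": 0.33}

    for group in stop_share:
        ratio = stop_share[group] / population_share[group]
        print(f"{group}: stopped at {ratio:.2f} times their population share")

    # Prints (rounded): Black 2.26, Hispanic 1.07, White 0.30

On these figures, Black residents were stopped at more than twice their share of the population, while White residents were stopped at less than a third of theirs.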

What is important for our purposes is how these cases illustrate Oshana’s conception of substantive independence and its relation to republican freedom. The idea is this: Considered in isolation, any particular stop-and-frisk event constitutes a relatively small imposition on freedom. It does not take long and hence in most cases will not keep one from going about one’s business. Hence, the mere fact of being subject to a stop-and-frisk does not undermine one’s substantive independence. However, stopping the analysis there omits the overweening and arbitrary nature of the New York City stop-and-frisk program in practice, at scale. Because the stops were so frequent, often failed to meet even the low Terry hurdle, and were so divorced from actual criminal conduct and from actual weapons possession, they infected the daily lives of residents (and in particular Black and Hispanic residents) with persistent exposure to arbitrary power. The ability to go about one’s life free from being stopped was, in effect, at the whim of the police. It is in that way that people’s freedom was impinged upon, and it is in that way that their substantive independence was reduced. And, hence, their personal autonomy was diminished.

2.4 Reconciling Psychological and Personal Autonomy

In the previous sections we described two categories of autonomy views. The first, which focuses on an individual’s special relationship to their values, intentions, and motivations, is exemplified by John Christman’s account. Christman’s view is procedural in that it does not index autonomy to the content of an individual’s values and motivations. The view is relational in that it understands the importance of a person’s history and social circumstances in supporting values and preferences from which a person would not be alienated. As noted in Section 2.3.1, there are numerous competing views of autonomy in this category, articulating different kinds of procedural conditions necessary for persons to be autonomous. The second category, exemplified by Marina Oshana’s view, understands psychological autonomy as an insufficient account of what matters morally, which is personal autonomy. Oshana’s view is that while elements of psychological autonomy are important, they are neither necessary nor sufficient for personal autonomy. Personal autonomy demands more, including substantive independence.

This split seems wide at first blush, one view focusing on a person’s psychology and its history, the other focusing instead on facts about a person’s place in the social, material world (and in a weakly perfectionist way at that). However, for several reasons, we need not draw a conclusion about which family of views is better supported. To begin, our project is different from the projects of Christman, Oshana, Buss, Frankfurt, Dworkin, and others. We are not aiming at a new, comprehensive account of autonomy and its importance. Rather, we begin with the premise that autonomy is important, adopt a minimal, ecumenical approach to the existing accounts, and develop an understanding of the moral salience of automated decision systems from an account built on the convergence among those views. Hence, differences in the families of views matter here only to the extent that they affect an analysis of autonomy-based moral claims.

And when we look at that set of issues, the categories of accounts of autonomy (including Christman’s and Oshana’s) converge. Both types of account can agree about many of the conditions that are important for individuals’ autonomy. Consider Oshana’s example of the woman raised in oppressive circumstances. She argues that the woman in that case lacks autonomy because the social conditions necessary for her to act with substantive independence are lacking, regardless of whether her (current, local) values and beliefs are procedurally independent. Christman agrees that the woman lacks autonomy, but he argues that this is because, in all likelihood, she would experience alienation if she were to reflect on her values and preferences in light of the oppressive processes affecting their formation.Footnote 43 What’s important for our account, though, is that Oshana and Christman agree that she lacks autonomy, that this lack results from a moral wrong, and that the lack of autonomy is a key part of the explanation for why her treatment is morally wrong.

Oshana’s would-be surrendered woman case is a bit more difficult. Oshana, recall, argues that this person is autonomous because she is procedurally and substantively independent. She has the social wherewithal to act differently than she does, regardless of whether her current actions are ones from which she would be alienated (were she to critically reflect upon them). Christman’s view is that the would-be surrendered woman lacks autonomy precisely because her actions do not meet the authenticity conditions. However, as Oshana has constructed the case, Christman would not be able to point to a moral wrong undermining her autonomy. Instead, Christman could argue that a more thoroughgoing examination of a person’s history, and of the social structures in which a person’s values are formed, is likely to uncover limitations rooted in competence conditions or authenticity conditions. However, if there are no such limitations, Christman would simply allow that she lacks autonomy.Footnote 44

What is important, though, is that Christman and Oshana’s positions about what is morally relevant are compatible. What matters to each is that the would-be surrendered woman’s social and relational circumstances are not particularly constrained and that she does experience alienation. The difference is whether that alienation is incompatible with the best-supported conception of autonomy, not whether there is a moral infirmity, and not what the source of such an infirmity might be.

Moreover, even in the relatively narrow range of cases in which Oshana and Christman would disagree about whether a person is autonomous, they can agree that there are autonomy-based wrongs. Consider a case in which a person is raised in oppressive circumstances, internalizes facets of that oppression, and comes to have values, motivations, and preferences that reflect that oppression. And suppose that even with sufficient opportunity to critically reflect on those values, motivations, and preferences and their genealogy, they would experience no alienation. Christman would have to conclude that the person is autonomous (after all, they meet his competence and authenticity conditions). Oshana would conclude that they lack autonomy only if they lack sufficient social opportunity to change their views and act accordingly; she would deny that the person is autonomous only if their current social circumstances reinforce the values, motivations, and preferences formed under limited conditions. But Christman and Oshana could agree that there is an autonomy-based wrong in the person’s limited circumstances. Christman could argue that those circumstances are unjust because such circumstances tend to lead to alienation; Oshana’s view is that the wrong is based on the fact of social limitation per se. In other words, the precise explanation of the wrong will vary. However, the views overlap on which cases involve moral wrongs and on the types of wrongs at issue, while differing in their explanations of them.

There is also the possibility of an account of autonomy that bridges psychological-authenticity, procedural accounts and social-relational accounts. Recently, Zi Lin has argued that instead of self-rule, autonomy should be understood as independence from other-rule.Footnote 45 Her idea is that psychological authenticity requires something other than non-alienation or confluence with higher-order values. Those accounts index authenticity solely to a person’s psychology (and in Christman’s view, their diachronic psychology). Such views face some well-known conceptual problems, including the problem of regress and the problem of problematic influences. Both of these, Lin argues, can be resolved by building autonomy around independence from other-rule. Hence, on Lin’s view, psychological autonomy has a relational component that is constitutive of autonomy, rather than merely causally relevant.

2.5 An Ecumenical View

There are plenty of disputes about the nature of autonomy: whether it describes a person’s relation to his or her values, motivations, and intentions or describes a person’s social and relational circumstances; whether competence and authenticity conditions are necessary or sufficient for a person to be autonomous; whether social conditions are constitutive of autonomy or merely causally relevant to whether one is autonomous. But, as we explain in the previous section, the different views can agree that certain things matter because of autonomy. That is the basis of our ecumenical view of autonomy in which we distill a number of key points about autonomy and its value from different conceptions.

To begin, any plausible view of autonomy will recognize the importance of procedural independence (though views may disagree about whether procedural independence is necessary for autonomy, sufficient for autonomy, or merely important for it). Procedural independence requires several things. One is epistemic competence, which is to say that one must be to some degree self-reflective and self-aware, and must understand oneself as able to actualize a life plan. Respecting people as autonomous demands fostering epistemic competence.

Notice, though, that affording people the circumstances in which they can exercise their epistemic competence is a facet of personal autonomy. Hence, severely or deliberately constrained information environments are ways in which an epistemically competent person will fail to have personal autonomy because they lack substantive independence.

Another component of psychological autonomy is rationality, or the ability to understand the world around oneself, to make close connections between facts and inferences, and to make decisions that line up with well-enough ordered assessments of facts and reasons. As noted, this is a low bar. Rationality is also related to personal autonomy. Constraints on a person’s ability to reason clearly can come from poor informational environments or from psychological stressors that thwart their ability to think clearly, hence inducing poor decisions.

Procedural independence also includes some degree of authenticity. Oshana argues that authenticity is not required, and this is a key difference between her and Christman. Our view, however, is not that authenticity is necessary or sufficient for autonomy. Rather, authenticity is important for two reasons. One is that authenticity is evidence of personal autonomy. The ability to reflect on one’s values and preferences and to recognize them as compatible with one’s sense of self and practical identity over time is an important test of the degree to which one’s values and preferences are one’s own, and (hence) of the degree to which one self-governs. Likewise, lack of authenticity – that is, where a person would be alienated from their preferences and values upon reflection – is defeasible evidence that one’s personal or social autonomy is compromised. Fostering authenticity is also itself morally important: individuals are morally valuable in part because they are capable of determining for themselves what is of value in their lives, and they are (hence) the source of that value.Footnote 46 In other words, structures that make it likely that individuals would experience alienation from their desires and values (in Christman’s conception, structures that make it likely that a person will have inauthentic preferences and motivations) are in all likelihood antithetical to personal autonomy. Those structures may be morally justifiable overall, but that would be in spite of their relationship to autonomy.

Beyond procedural independence and its competence and authenticity components, autonomy requires substantive independence. Whether one is personally autonomous turns on one’s circumstances and environment.Footnote 47 This includes social conditions such as choice among a range of attractive options (per Raz), control over meaningful facets of life, conditions of self-respect, relative financial independence, and so forth. In this we follow Oshana’s conception closely. There are, however, a few ways in which substantive independence is insufficient to ensure autonomy overall. A person can be substantively independent (having the financial wherewithal, not being subject to others’ arbitrary power) yet have their competence conditions undermined. Likewise, one can have local substantive independence but have been subject to conditions that make it difficult to have authentic preferences and desires (we return to this in Chapters 4, 5, and 6). Of course, those limitations may be understood in terms of more global substantive independence.

We can summarize our view and provide a foundation for the arguments in the remainder of the book with a few primary propositions.

The first set of these concerns Feinberg’s distinctions among the various meanings of “autonomy.” First, autonomy as a capacity predominantly maps onto psychological autonomy; it is entirely possible for a person to have the capacity for autonomy but be prevented from exercising it (i.e., to be denied personal autonomy). Second, successful exercise of autonomy in Feinberg’s sense demands substantive independence; however, the autonomy one exercises in that sense is psychological. Third, autonomy as ideal (which is to say individual autonomy properly integrated into a larger matrix of values, including one’s responsibilities and community concerns) reflects persons’ obligations to respect others’ psychological autonomy and to ensure the conditions of personal autonomy. Fourth, the idea that autonomy is a right should be understood in the sense that persons have claim-rights grounded in their autonomy.

The next propositions reflect the fact that both psychological and personal autonomy are important and (hence) that both procedural and substantive independence are valuable.

To begin, procedural independence is morally important, but limitations on a person’s procedural independence are important in large part because of conditions that impose such limitations (including limitations placed by other people, by organizations, by social structures, by natural causes, and so forth). Fostering conditions that promote procedural independence is the crucial part. It is similarly important that others respect the actions of those who lack procedural independence. They may have capacity autonomy, after all, even if it is unsuccessfully exercised.

Substantive independence is important beyond its role in fostering psychological autonomy. It does not require that one literally be able to do without others – business partners may need each other’s expertise and efforts, but that does not undermine their substantive independence. Rather, what matters is whether others make demands that are arbitrary, demand inauthentic or epistemically unjustifiable actions, or undermine one’s ability to act for one’s own reasons. That does not mean that any kind of compromise is antithetical to respecting autonomy. What matters is whether agreements and dependence are such that one could agree to them as part of valuable social relationships and other goods. It also matters how global such constraints are.

2.6 Objections

So far, we have considered a few different conceptions of autonomy and offered the account we will use to ground the arguments in the remainder of the book. There are, however, important critiques of autonomy, both as a concept and as a basic value. One family of criticism is that autonomy-based moral theories (and deontological theories generally) are simply mistaken about what matters. Consequentialists and virtue ethicists (among others) might argue that other values are the proper measure of moral worth. As important as those criticisms are, we won’t offer a defense here. Rather, we will simply state that a rock-bottom assumption of this project is that autonomy is morally valuable, and that it is an important enough (and rich enough) value to ground the arguments we offer throughout. If one disagrees with that assumption, this project probably won’t be persuasive.

Much more important in our view are criticisms levied against the very concept of autonomy and its value. Feminists, for instance, have critiqued autonomy-based theories as atomistic, divorced from social responsibilities, unmindful of the importance of relationships in identity formation, hyperrational, and disembodied. These criticisms have provided a compelling corrective to major strains of autonomy scholarship. The principal views that we have drawn on for our account are sensitive to these concerns and explicitly incorporate the critiques into their accounts. Indeed, the critiques are so clearly correct that any plausible contemporary account of autonomy will be explicitly relational. Nonetheless, it is worth canvassing a few of the critiques here.Footnote 48

One set of critiques Mackenzie and Stoljar characterize as “metaphysical.” Specifically, such critiques reject autonomy views on the grounds that autonomy attributions assume that agents are atomistic. That, in turn, can mean several things. One possibility is that individuals are causally independent of or isolated from others; that is, individuals are self-creating sources of values and desires. Of course, that is false; but one can discern autonomy in some sense even while recognizing that people’s understandings are caused by their social milieu.Footnote 49 A different possibility is that agents are atomistic in the sense that they are independent of social and family relationships, or that persons have intrinsic properties that do not depend on how they relate to others. Again, individuals are not like this at all. First, as a matter of empirical fact, humans are hyper-social. Second, people’s identities and values do not cohere absent relationships with others. Nonetheless, autonomy remains conceptually coherent so long as we allow that there are important ways in which an individual can form their values under competence and authenticity conditions and has some claim to avoid arbitrary interference.Footnote 50 A final way of understanding atomistic individualism is that persons are metaphysically distinct. Mackenzie and Stoljar point out that this is not so much a critique as an obvious fact.Footnote 51

A related set of critiques are based on conceptions of care. The idea is that traditional understandings of autonomy have under- or devalued women’s perspectives and traditional, gendered social roles. Hence, relationships of care have been systematically excluded from autonomy conceptions. But those roles are of fundamental human value, and ignoring their centrality is a mark against any view. Mackenzie and Stoljar point out, though, that a number of feminist scholars, such as Jennifer Nedelsky and Evelyn Fox Keller, incorporate understandings of care into conceptions of autonomy. Acting as an agent in the world – a constitutive part of autonomy – should involve interdependence, nurturing, and care. And such capacities are indeed types of competence conditions for autonomy.Footnote 52

The upshot of these critiques is that while traditional conceptions of autonomy are cramped and implausible, more capacious understandings can address some of those shortcomings. Specifically, any reasonable account of autonomy will recognize that the sources of self, meaning, and value will be intimately bound up with one’s social, community, and family relationships. We are not self-executing sources of value; rather, we are sources of value within social, relational contexts. Moreover, the competencies of autonomy will involve one’s ability to enter and foster those relations. And lastly, the procedural and substantive independence conditions (e.g., those articulated by Christman and Oshana, and upon which we draw heavily) provide an explanation for why oppressive conditions (based on gender, or race, or ethnicity, or class, or happenstance) conflict with autonomy.Footnote 53

2.7 Conclusion: Related Concepts and Moral Salience of Autonomy

In this chapter we have made some basic distinctions about autonomy and canvassed two important, representative views (each exemplifying a different family of conceptions of autonomy). We have argued that those groups of views have substantial normative overlap, and we have used that overlap to advance an ecumenical view of autonomy. That conception provides a foundation for the arguments in the rest of this book. Before turning to those arguments, though, it is worth briefly describing why autonomy matters and how it grounds other concepts that will figure into the chapters that follow.

One way that autonomy matters morally is built into our account. People are capable of determining their values, desires, and preferences, and they can use those values, desires, and preferences to guide their decisions and steer their lives. They are, in other words, a source of value. To the extent that others severely constrain individuals’ ability to form their own sense of value (e.g., by limiting their procedural or substantive independence), they stifle the degree to which individuals’ values are their own. Moreover, thwarting persons’ abilities to act on their own values by coercion, deception, or severely constrained choice architecture is a way of circumventing autonomy and an affront to persons as choosers and self-governors.

However, the moral salience of autonomy goes well beyond limitations on persons’ abilities to act on their own desires and values. Consider again the distinctions we set out at the beginning of this chapter. The ability to act on one’s own desires and values is a matter of successfully exercising one’s capacity for autonomy. That assumes that people have the capacity to be autonomous. It may not be enough, though, simply to assume people have that capacity. Rather, it is plausible that there is a social responsibility to promote that capacity. Indeed, that is an important justification for education.Footnote 54

Throughout the book, we consider different kinds of moral concerns about automated systems, all of which are rooted in autonomy. In Chapter 3, we address broader social claims regarding use of algorithmic systems. The systems we discuss in that chapter – K-12 teacher evaluation systems and criminal justice risk assessment tools – do not directly limit autonomous individuals’ choices via deception, coercion, or manipulation, and they do not obviously undermine substantive independence. Rather, we argue that autonomy is a foundation for a claim that persons be subject only to social systems that they could reasonably endorse. In Chapter 4, we make the case that autonomy includes more than the ability to act on one’s values and preferences. It also has an important informational component. That informational component is vital regardless of whether a person is able to put information into practice. Autonomy demands, in our view, the ability to exercise both practical agency (the ability to function effectively in important arenas) and cognitive agency (the ability to exercise a kind of evaluative control and understand one’s place in the world, regardless of one’s ability to affect it).

Later, we will expand our discussion of the social conditions of freedom. We have already discussed how Oshana’s account incorporates facets of Philip Pettit’s understanding of republican freedom into her understanding of personal autonomy. Our task in Chapter 5 is to draw out the autonomy- and agency-based conditions of freedom. In Chapter 6, we argue that successful exercise of autonomy may in many cases demand a degree of epistemic paternalism. That is, given the competence and authenticity requirements for autonomy, certain kinds of media entities may be permitted (or even obligated) to exert a degree of editorial control over the content posted on their platforms. In Chapter 7 we explain the moral requirements of autonomous agents. That is, autonomous persons are capable of legislating and following moral principles. That capacity creates obligations to act responsibly. Obfuscating that responsibility will in many cases be a distinct kind of wrong. Finally, in Chapter 8, we will consider how autonomy plays a crucial role in underwriting political legitimacy.

Having introduced our polestar cases, situated concerns about algorithmic systems in a broader discourse, and explained our strategy in Chapter 1, and having provided our catholic conception of autonomy here, we can begin directly addressing the autonomy-grounded moral concerns in algorithmic systems. Next stop: Chapter 3.

Footnotes

1 Introduction

1 Jungen, “Vang Gets 10 Years in Prison for Drive-by Shooting.”

2 First degree recklessly endangering safety, possession of a firearm by a felon, and possession of a short-barreled shotgun or rifle (all as party to a crime). See Wisconsin v. Loomis, 881 N.W.2d paragraph 11.

3 The tool used is part of a suite of assessment tools developed for use at various stages in the criminal justice system with different algorithms and software packages geared toward (among others) defendants who are recently incarcerated or under state supervision (COMPAS Core), persons who will soon reenter their community after incarceration (COMPAS Reentry), young people (COMPAS Youth), and general case management (Northpointe Suite Case Manager). The tool used in Loomis is COMPAS Core (which we call “COMPAS” for simplicity).

4 Northpointe, Inc., “Practitioner’s Guide to COMPAS Core,” 27–28.

5 Northpointe, Inc., 24.

6 Northpointe, Inc., 4.

7 Wisconsin v. Loomis, 881 N.W.2d paragraph 16.

8 Wisconsin v. Loomis, 881 N.W.2d paragraph 16.

9 Wisconsin v. Loomis, 881 N.W.2d paragraph 18.

10 Wisconsin v. Loomis, 881 N.W.2d paragraph 19.

11 Wisconsin v. Loomis, 881 N.W.2d paragraph 20.

12 Wisconsin v. Loomis, 881 N.W.2d paragraph 22.

13 Walsh and Dotter, “Longitudinal Analysis of the Effectiveness of DCPS Teachers.”

14 Isenberg and Hock, “Measuring School and Teacher Value Added in DC, 2011–2012 School Year.”

15 Wagner v. Haslam, 112 F. Supp. 3d.

16 112 F. Supp. 3d at 698. In reviewing government regulations under the Fourteenth Amendment’s Equal Protection Clause, courts apply increasingly stringent levels of scrutiny (and are therefore more likely to find violations of the equal protection clause) based on types of classification used and how fundamental the right affected is. Where government regulation does not use a suspect class or affect a fundamental right, it is subject to the rational basis test. This is the least stringent level of scrutiny, and requires only that the regulation be rationally related to a legitimate government purpose. This is a high bar for plaintiffs to clear. See 16B Am Jur 2d Constitutional Law §§ 847–860.

17 Houston Fed of Teachers, Local 2415 v. Houston Ind Sch Dist, 251 F. Supp. 3d at 1174.

18 Houston Fed of Teachers, Local 2415 v. Houston Ind Sch Dist, 251 F. Supp. 3d at 1175.

19 Houston Fed of Teachers, Local 2415 v. Houston Ind Sch Dist, 251 F. Supp. 3d at 1183.

20 Houston Independent School District, “EVAAS/Value-Added Frequently Asked Questions.”

21 251 F. Supp. 3d 1168, 1178.

22 O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

23 Fry, Hello World: Being Human in the Age of Algorithms.

24 Angwin et al., “Machine Bias,” May 23, 2016.

25 Citron, “Technological Due Process”; Sweeney, “Discrimination in Online Ad Delivery”; Citron and Pasquale, “The Scored Society: Due Process for Automated Predictions”; Sweeney, “Only You, Your Doctor, and Many Others May Know”; Barocas and Selbst, “Big Data’s Disparate Impact”; Calo and Rosenblat, “The Taking Economy: Uber, Information, and Power”; Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor; Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information; Noble, Algorithms of Oppression; Rosenblat, Uberland.

26 Regarding capitalization of “Black” and “White,” we are persuaded by the arguments in Appiah, “The Case for Capitalizing the ‘B’ in Black.”

27 Mittelstadt et al., “The Ethics of Algorithms.”

28 Select Committee on Artificial Intelligence, “AI in the UK: Ready, Willing and Able?” 15; Fry, Hello World: Being Human in the Age of Algorithms.

29 See Diamond v. Diehr, 450 U.S. 175 (1981).

30 Kitchin, “Big Data, New Epistemologies and Paradigm Shifts.”

31 Finlay, Predictive Analytics, Data Mining and Big Data, 3; Sloan and Warner, “Algorithms and Human Freedom.”

32 On its website, OpenAI describes its mission as “to ensure that artificial general intelligence (AGI) – by which we mean highly autonomous systems that outperform humans at most economically valuable work – benefits all of humanity.” OpenAI, “About OpenAI.” DeepMind, meanwhile, describes itself as “a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence.” DeepMind, “About DeepMind.” For a somewhat recent book-length analysis of these issues, see Bostrom, Superintelligence.

33 Marcus, “Deep Learning.”

34 Whittlestone et al., “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research,” 11.

35 Whittlestone et al., 46–47.

36 National Science and Technology Council, “Preparing for the Future of Artificial Intelligence.”

37 Select Committee on Artificial Intelligence, “AI in the UK: Ready, Willing and Able?” 12.

38 Select Committee on Artificial Intelligence, 125. Related reports and recommendations have come from the Japanese Society for Artificial Intelligence, “Ethical Guidelines”; the Association for Computing Machinery, US Public Policy Council, “Statement on Algorithmic Transparency and Accountability”; and Campolo et al., “AI Now 2017 Report.”

39 Algorithmic Accountability Act of 2019, H.R. 2231; Algorithmic Accountability Act of 2019, S. 1108.

40 Martin, “Ethical Issues in the Big Data Industry,” 75.

41 Zarsky, “The Trouble with Algorithmic Decisions.”

42 Citron and Pasquale, “The Scored Society: Due Process for Automated Predictions.”

43 Yeung, “‘Hypernudge’: Big Data as a Mode of Regulation by Design.” See also Lanier, Ten Arguments for Deleting Your Social Media Accounts Right Now.

44 O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

45 Citron, “Technological Due Process.”

46 For accounts addressing algorithmic systems and autonomy, see Yeung, “‘Hypernudge’: Big Data as a Mode of Regulation by Design”; Lanzing, “‘Strongly Recommended’ Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies”; Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation”; Danaher, “Toward an Ethics of AI Assistants”; Susser, Roessler, and Nissenbaum, “Online Manipulation: Hidden Influences in a Digital World.” These accounts most often focus on the direct effects of automated systems on individuals’ decision procedures. Certainly, such cases are important from the standpoint of autonomy. However, our aim here is to address a broader range of issues surrounding autonomy. We return to this issue in Chapter 5.

47 Binns, “Fairness in Machine Learning: Lessons from Political Philosophy”; Binns, “Algorithmic Accountability and Public Reason”; Corbett-Davies and Goel, “The Measure and Mismeasure of Fairness.”

48 Among the most odious of these are YouTube algorithmic recommendation systems that serve disturbing (including violent and sexualized) content on channels specifically designed and marketed for children. See Maheshwari, “On YouTube Kids, Startling Videos Slip Past Filters”; Orphanides, “Children’s YouTube Is Still Churning out Blood, Suicide and Cannibalism.”

49 For an overview of the enormous amount of labor, and the labor practices, involved in content moderation, see Roberts, Behind the Screen.

50 Peter, “The Grounds of Political Legitimacy.”

51 For example, one might ask whether the claim that X should happen merely expresses support for a view, or whether it seeks to say something true about the world but is mistaken because there are no such facts, or the like. We won’t take up that debate here, though we invite anyone who would like to pursue it to attend the Madison Metaethics Workshop at the University of Wisconsin-Madison each fall.

52 Here, we offer a modified version of a process Tom Regan outlines in Regan, “Introduction to Moral Reasoning.” He calls it a process for “ideal moral judgment,” which strikes us as overly optimistic. We’ll call it instead a “heuristic for better moral judgment.” That label may also be overly optimistic, even if substantially less so than Regan’s.

53 One might call these “non-moral facts,” though that has the potential to derail us. At least one of the authors is adamant that all facts are non-moral. At least one tentatively believes that moral facts are types of natural facts. And at least one has argued that there are moral facts and those facts are nonnatural. It is possible that some of these describe multiple authors and that some authors are described by more than one of these sentences. What matters for our purposes here is that there are issues about metaethics that lurk underneath the surface of any project in applied ethics. Nonetheless, we can make progress without resolving those underlying questions.

2 Autonomy, Agency, and Responsibility

1 Following Brighouse, School Choice and Social Justice, 66.

2 Note that the third and fourth cases are ones in which the person’s preferences and values are not autonomous. It is a further question whether the employee is globally autonomous, and a further question still whether the employer undermined, infringed, or failed to respect their autonomy. Our task in this chapter is to get a handle on the concept of autonomy. We take up questions about the moral demands of autonomy in subsequent chapters.

3 Meyers, Self, Society, and Personal Choice, 48.

4 Feinberg, “Autonomy,” 28.

5 Feinberg, “Autonomy,” 30.

6 Kant, Groundwork of the Metaphysics of Morals, sec. 4:432.

7 Mackenzie and Stoljar, Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self, 4.

8 Oshana, Personal Autonomy in Society, 90.

9 Raz, The Morality of Freedom; Mackenzie, “Relational Autonomy, Normative Authority and Perfectionism.”

10 Feinberg, “Autonomy,” 45.

11 Feinberg, 45.

12 Oshana, Personal Autonomy in Society, 49.

13 Oshana, 21–46.

14 Buss and Westlund, “Personal Autonomy,” sec. 2.

15 “Procedural” does not quite capture the difference, though. That is because procedural accounts are typically contrasted with “perfectionist” views, which maintain that there are certain values that are intrinsically part of autonomy. However, there are perfectionist accounts of a person’s relation to their own motives. Benson, “Taking Ownership: Authority and Voice in Autonomous Agency.”

16 There is a wide range of autonomy views with varied accounts of the relation between an agent and his or her motives and intentions. Some focus on the relationship between motivational structure and desires. Frankfurt, “Freedom of the Will and the Concept of a Person”; Dworkin, The Theory and Practice of Autonomy; Buss and Westlund, “Personal Autonomy.” Some views are internalist in that they consider only how facts internal to an agent (e.g., motives, desires, intentions) relate to one another. Others index autonomy to some external standard; Fischer and Ravizza argue that autonomy requires an agent to be responsive to reasons (which are extrinsic to a person’s psychology). Fischer and Ravizza, Responsibility and Control: A Theory of Moral Responsibility. What is important for our account, though, is that each of these accounts relies on some feature(s) of a person’s psychology as a condition for autonomy, and those features can be understood in terms of competence or authenticity.

17 Frankfurt, “Freedom of the Will and the Concept of a Person”; Dworkin, The Theory and Practice of Autonomy.

18 Meyers, “Personal Autonomy and the Paradox of Feminine Socialization”; Meyers, Self, Society, and Personal Choice.

19 Christman, The Politics of Persons: Individual Autonomy and Socio-Historical Selves.

20 Christman, 154.

21 Christman, 155.

22 Christman, 155.

23 Marilyn Friedman offers a related view, positing that an agent is autonomous when the “agent chooses or acts in accord with wants or desires that she has self-reflectively endorsed.” See Friedman, Autonomy, Gender, Politics, 5.

24 See also Sandel, Liberalism and the Limits of Justice.

25 Oshana, Personal Autonomy in Society, 46.

26 Oshana, 49 (emphasis added).

27 Oshana, 67.

28 Oshana, 64–65.

29 See also Benson, “Taking Ownership: Authority and Voice in Autonomous Agency.”

30 Oshana, Personal Autonomy in Society, 77.

31 Oshana, 81.

32 See also Hill, Jr., “The Kantian Conception of Autonomy.”

33 Christman, The Politics of Persons: Individual Autonomy and Socio-Historical Selves, 182.

34 Fischer and Ravizza, Responsibility and Control: A Theory of Moral Responsibility.

35 Oshana, Personal Autonomy in Society, 83.

36 Raz, The Morality of Freedom, 373–376.

37 Oshana, Personal Autonomy in Society, 87; see also Meyers, Self, Society, and Personal Choice, 12.

38 Floyd v. City of New York, 959 F. Supp. 2d.

39 Terry v. Ohio, 392 U.S.

40 All the statistics in this paragraph are from “Overview of Uncontested Statistics,” in Floyd v. City of New York, 959 F. Supp. 2d at 572–575.

41 Floyd v. City of New York, 959 F. Supp. 2d at 559–560.

42 Floyd v. City of New York, 959 F. Supp. 2d at 562.

43 Christman, The Politics of Persons: Individual Autonomy and Socio-Historical Selves, 172–177.

44 In his response, Christman questions whether a richer account of specific cases would reveal ways in which their subjects would be alienated. Christman, 168–169.

45 Lin, “New Perspectives on the Moral Significance of Coercion, Manipulation, and Bodily Violence,” 50–59.

46 Cf. Korsgaard et al., The Sources of Normativity.

47 Indeed, some scholars use “environmental” conditions of autonomy to refer to components that are extrinsic to one’s psychology. See Piper, “Autonomy: Normative.” See also Chapter 5, where we describe the concept of ecological freedom.

48 Here we follow Mackenzie and Stoljar, Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self, chapter 1.

49 Mackenzie and Stoljar, 8; see also Baier, Postures of the Mind: Essays on Mind and Morals.

50 Mackenzie and Stoljar, Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self, 8.

51 Mackenzie and Stoljar, 8.

52 Mackenzie and Stoljar, 9–10; see also Nedelsky, “Reconceiving Autonomy: Sources, Thoughts and Possibilities,” 7–36; Keller, Reflections on Gender and Science, chapter 5.

53 For further discussions of these and related criticisms of autonomy, and for feminist “rehabilitations” of autonomy, see Stoljar, “Feminist Perspectives on Autonomy.”

54 Brighouse, School Choice and Social Justice; Gutmann, Democratic Education.
