When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
This chapter addresses autonomy’s role in democratic governance. Political authority may be justifiable or not. Whether it is justified, and how it can come to be justified, is a question of political legitimacy, which is in turn a function of autonomy. We begin, in section 8.1, by describing two uses of technology: crime-predicting technology used to drive policing practices and social media technology used to influence elections (including by Cambridge Analytica and by the Internet Research Agency). In section 8.2 we consider several views of legitimacy and argue for a hybrid version of normative legitimacy based on one recently offered by Fabienne Peter. In section 8.3 we explain that the connection between political legitimacy and autonomy is that legitimacy is grounded in legitimating processes, which are in turn based on autonomy. Algorithmic systems—among them PredPol and the Cambridge Analytica–Facebook–Internet Research Agency amalgam—can hinder that legitimation process and conflict with democratic legitimacy, as we argue in section 8.4. We conclude by returning to several cases that serve as through-lines to the book: Loomis, Wagner, and Houston Schools.
One important criticism of algorithmic systems is that they lack transparency, whether because they are complex, protected by intellectual property, or deliberately obscured. There is a debate about whether the EU’s General Data Protection Regulation (GDPR) contains a “right to explanation.” This chapter addresses the informational component of algorithmic systems. We argue that information access is integral to respecting autonomy, and that transparency policies should be tailored to advance autonomy. We distinguish two facets of agency (i.e., capacity to act). The first is “practical agency,” or the ability to act effectively according to one’s values. The second is “cognitive agency,” which is the ability to exercise what Pamela Hieronymi calls “evaluative control.” We argue that respecting autonomy requires providing persons sufficient information to exercise evaluative control and properly interpret the world and one’s place in it. We draw this distinction out by considering algorithmic systems used in background checks, and we apply the view to key cases involving risk assessment in criminal justice decisions and K-12 teacher evaluation.
Chapter 3 takes the conception of autonomy outlined in chapter 2 and explains how it grounds moral evaluation of algorithmic systems. It begins by offering a view of what it takes to respect autonomy, and to respect persons in virtue of their autonomy, drawing on a number of different normative moral theories. The argument starts with a description of a K-12 teacher evaluation program from Washington, DC. It then considers several puzzles about the case. Next, the chapter provides an account of respecting autonomy and what that means for individuals’ moral claims. It explains how that account can help us understand the DC case, and it offers a general account of the moral requirements of algorithmic systems. Specifically, we offer the Reasonable Endorsement Test, according to which an action is morally permissible only if it would be allowed by principles that each person subject to it could reasonably endorse. The chapter applies that test to the Loomis, Houston Schools, and Wagner cases. Finally, the chapter explains why the book does not focus directly on “fairness.”
Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work… the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these case studies, the authors provide a better understanding of machine fairness and algorithmic transparency. They explain why interventions in algorithmic systems are necessary to ensure that algorithms are not used to control citizens' participation in politics and undercut democracy. This title is also available as Open Access on Cambridge Core.
The mythology of the Market is strongly evident, indicated by the corporate camouflage of existential desire beneath a wide range of constructed desires. This mythology has materialised in the personalisation of the idea of the corporation. Its functioning is revealed by the commodification of individuals within models of regulatory capitalism and by the structural embedding of debt as credit. These trends have been promoted by the digitisation of corporate function, by algorithmic profiling of individuals as consumers and by the exploitation of Big Data. This has morphed into surveillance capitalism. The non-mythological way forward would start with focusing on all stakeholders, including all citizens whom the corporation impacts. This is the reimagining of corporations on purpose-based, fiduciary principles. This in turn would require the redrafting of competition and consumer protection law, as well as shifting the control of personal data to the individual. It would also require changes to employee relations strategies.
The State has been a mythological entity throughout its history, from its sovereign phase to its present dispersed, nodal, regulatory phase. This dispersal raises important questions about the gradual disappearance of public accountability. It also points to other key issues, such as the dilution of personal responsibility, especially when considered in the context of the determinative implications of neuroscientific research. These trends are further emphasised by the increasingly avaricious, non-consensual digitisation of the State and by the threat to democratic values posed by such trends as data brokering and algorithmic friendliness. The consequential move to a non-mythological State can be produced by the reimagining of agencies as purpose-based entities which operate on existential, fiduciary principles in a manner that avoids Pettit’s republicanism. How this transition can take place is evidenced by the difference between mythological and non-mythological criminal justice, a model for which is presented.
This work will address the problems of contemporary accounts of privacy by placing them in a new context of the mythological social dynamic that has constrained the West. This dynamic has driven a trajectory of failed mythological magnitudes – Deity, State, Market and now Technology – by which we have tried to avoid existential reality rather than embrace it. This avoidance is why privacy is vulnerable to the imminent impact of the latest form of that dynamic, neuroscience: while ‘privacy’ comes from early forms of this dynamic, neuroscience is now its most powerful form and will overwhelm that sense of privacy. Privacy needs to be removed from this dynamic, reconceiving it through existential, respectful self-responsibility. It will then survive this challenge and will, counterintuitively, embrace neuroscientific benefits, including their promotion of this new privacy through the technological control of the citizen. We will examine the dynamic, how it produced present notions of privacy through a singular form of normalisation, and how it is being re-formed by the mythological algorithms of neuroscience. To disengage privacy, we will need new ethical principles and reimagined social infrastructure – law, State and Market, these best understood by reconceiving regulation. That will provide the necessary support for self-responsibility.
Any new account of privacy, especially one with the unusual combination of elements presented here, will require a conducive institutional environment. The environment proposed here is based on a new notion of regulation, in the broad sense of social control rather than the narrower sense of subsidiary legislation, and despite the successes claimed for that latter form. In this broad sense, present regulation derives from the spread of power outward from sovereign institutions: a means by which sovereign power is transformed yet still empowered, visible in such shapes as biopower and the algorithmic determinism of human behaviour. The presently dominant form of regulation, responsive regulation, is best seen as mythological, especially through the manner in which it is informed by the republicanism of Pettit. In response, the new sense of regulation as social control focuses on the reimagining of institutions and the promotion of the existential interests of the individual to centre-stage. This is the reverse of current priorities.
We need to understand neuroscience as an emanation of artificial intelligence. By this we mean that a range of methods is being used to understand not only how the brain functions but also how it might be brought to function. Such neural change will increasingly come from connecting the brain to external sources of intelligence, both artificial and human. Yet the algorithms that are driving these developments are not neutral. As the world is itself increasingly being claimed to be algorithmic, we need to see not only that algorithms – and the data they interpret – are designed but that this design carries personal and cultural presumptions. We are re-creating the world through algorithms, and that is a form of idealism which, because of that cultural frame, is mythological in the sense of the dominant social dynamic. That is, because algorithmic designs are not determined by each individual, they are technologies of subjection, willingly embraced or imposed. They are formative not only of the world but also of the individual self. This process is as evident in virtual and augmented realities as it is in clinical neuroscience.
The illegitimacy of present accounts of privacy is revealed by the manner in which normalisation has long taken place through a series of social transitions. Other historical perspectives of societal evolution have been adopted, but the mythological analysis here is distinctive. Following Christian confessionalism and pastoralism, we see the methods of governmentalizing discipline that led to the civilising of the sovereign State through the rise of the bourgeoisie; then the liberalism and neoliberalism that ultimately promoted the dominance of the Market over the State, by which the consumer has been constructed; and now the Technological ‘algorisation’ of social and individual perspective and practice. Many of the elements that have accumulated in this long process are thereby being brought to bear in technologies of the self as self-creation. Each of these regimes was founded on the distancing and camouflage of existential reality, inducing subjection to the ideas and practices promoted within these mythological magnitudes and primarily for the benefit of their respective dominant interests.
A valid new sense of privacy would need to be founded on the principles of the existential, respectful self-responsibility of all individuals, the promotion of which would need to be complemented by a reimagined State, Market and set of technological design principles. This will allow the embrace, not the denial, of the value of technological development, especially in neuroscience. In this context, each individual would have an evolving personal technology strategy with progressive/enhancement and conservative/protection elements. From that, respectful self-responsibility would require both sharing information and acquiring it, all typically under the individual’s control, including through data and algorithms that are designed and applied under their direction. The initiatives undertaken by the IEEE and MyData are moves in the right direction, but they remain prey to mythological interpretation. The principles of this new sense of privacy are then tested by application to standard and well-known privacy dilemmas, including in case law.
In developing a new ethic as a foundation for a non-mythological notion of privacy, we need first to put aside the informational ethics of Floridi, as that is founded on a conception of the individual as, ontologically, information. We demonstrate that this is a mythological position. Capurro has seen the errors of that argument in its dehumanisation of the individual. In moving forward, we examine the value of the full range of the standard ethical qualities on which our relationship with technology is said to be best based, and thereby how we should manage its intrusions into privacy. These include dignity, liberty, identity, responsibility, democratic principles, equality, human rights and the common good. However, each of these is shown ultimately to be vulnerable to a range of shortcomings. It is argued that only respectful self-responsibility – that is, responsibility to and for oneself which is respectful of others and which relies on existential values – can act as a solid ethical foundation, although these other principles can be claimed to be of secondary value. We conclude the argument here by pointing out how that principle would not fall prey to bourgeois aspirations.