Not a day goes by without a new story on the perils of technology. We hear of increasingly intelligent machines that surpass human capability, comprehension and control. We hear of tech billionaires imploring each other to abandon artificial intelligence (AI) that poses ‘profound risks to society’.Footnote 1 We hear of genetic technologies capable of altering the human genome in ways we cannot predict, which may lead us to a future two-tier humanity: those who are genetically enhanced and those who are not. How can we respond to these stories? What should we do politically?
When the well-known futurist and technology expert, Amy Webb, was recently asked by interviewer Wajahat Ali what we should do to address tech-generated harm to human society, her answer, although surely right, nevertheless seemed impossibly impracticable:Footnote 2
Webb: The challenge has always been complacency. The antidote to that is paying better attention. If you see something, say something – just like you are on the New York subway.
Ali: I’m not complacent – I’m exhausted, I’m overwhelmed! There’s … ‘income inequality’, there’s ‘climate change’ … I do not really understand this stuff. Look, I’m not a brilliant tech expert like you, I’m just an average Jane, Joe or José. Even if ‘I see something and I say something’, no one’s going to care. What hope does the average person have of changing something?
Webb: There is no one big solution – no big switch that can be flipped – but we can make a thousand incremental changes through the decisions that we are making every day. I know that everyone is exhausted, but it’s what we have to do!
What Amy Webb is describing here is a huge political challenge. Rather than being able to point to anyone in particular as the root cause, this is a problem of vast arrays of complex, multitudinous ‘everyday’ actions of individuals, groups and institutions – otherwise known as structural dynamics. So, who should take up the task of deciding which micro-incremental changes to make, and coordinate them to sufficient effect? What tools can we develop to politically address structural challenges of this sort, especially when states are so often captured by the interests of tech industries and seduced by the promise of technological efficiency and betterment?
I argue that we urgently need a radical rethink of how the substantial coordinating capabilities of state-level governance are devised and deployed.
At the core of this book is an anxiety about the failure of democratic politics to govern transformative technologies.Footnote 3 By way of example, I have chosen to focus on AI and reproductive and genetic technologies (Repro-tech) as two technologies set to dominate future societies. While there is a great deal of hype around these extraordinary scientific developments, the question remains of what sort of practical political methods could be deployed to address the harms they may bring alongside the great many advantages they promise. By way of exploring this question, I want to move beyond the usual arguments and legal devices that serve to identify tech developers, and users, as being at fault for individual acts of wrongdoing, recklessness, incompetence or negligence (important as these questions of liability are) and ask instead how we might address the broader structural dynamics intertwined with the increasing use of AI and Repro-tech. My argument will be that to take a much sharper structural perspective on these transformative technologies is a vital requirement of contemporary politics.
Book Structure
Chapter 1 The Problem of Structural Injustice: Where Nobody Is Liable, Who Is Responsible?
Before engaging with the specific questions of how AI and Repro-tech relate to structural injustice and what this might mean for tech governance, I begin with a theoretical discussion of what structural injustice is and how it operates. This is an unusual place to start a book on tech governance but I argue that it has significant value for thinking differently about the challenges of addressing tech-generated harms to society.Footnote 4
Through the process of writing this book, and also as co-editor of What is Structural Injustice? (Browne and McKeown, 2024),Footnote 5 I came to appreciate that there are a great many different interpretations of the concept of structural injustice. At the centre of these debates, however, is the late Iris Marion Young’s extraordinary text Responsibility for Justice (2011). The array of interpretations and criticisms inspired by Young’s last work is likely due, in part at least, to the fact that it was published posthumously, based on drafts and notes, with the gaps and inconsistencies inevitable in any draft. Nevertheless, the wealth of scholarship emanating from engagement with this work has taken us in many different intellectual directions, opening up new theoretical and practical opportunities to address injustice. As I shall set out, I develop my own particular interpretation of structural injustice, based in part on my exploration of Young’s work and that of her critics.
I begin by focusing on an overlooked thread in Young’s work: the untraceability of structural dynamics. Although Young did not tie this idea to technology, it is, I suggest, a valuable lens through which to consider technology’s impact on society. Not only do I see untraceability as a constant theme in Young’s account of structural injustice but, in fact, a defining feature. This argument brings with it profound implications for the sort of politics we might employ to address the structural injustices intertwined with the rise of transformative technologies, injustices which, I argue, are not catered to in current liability-based governance measures such as global AI ethics frameworks, the General Data Protection Regulation (GDPR), licensing laws or international bans on human cloning (important as these sorts of approaches are).
Put simply, ‘untraceability’ in this context means that the causes of structural injustice are too complex and convoluted to be meaningfully traced to an agent of fault. This is not an observation about the ineptitude of our political commitment but rather one of political impossibility in a given time and place. This is not to say that there will never be new social scientific and technical tools in the future that will give us new insights, but rather to emphasise the political inefficacy of attempting to trace blame where none can be meaningfully traced.
As I shall explain, because structural injustice is conceptually distinct from other sorts of injustice, to turn our political machinery, in whatever form, towards tracing culpable agents for the purpose of addressing such injustice (what Young called the ‘liability model of responsibility’),Footnote 6 is misguided. We need a different kind of political sensibility to address structural injustices.
In order to set the ground for my arguments later in this book on the structural dynamics of AI and Repro-tech, I take time in Chapter 1 to pull out the elements of Young’s work that offer a persuasive account of the untraceability of structural injustice, and I consider the ways in which, influenced by her critical engagement with the work of Hannah Arendt, Young leads us to the question: ‘where nobody is liable, who is responsible?’ This question, I argue, is important in the context of tech governance and, as such, serves as a central theme of the book.
Many intellectual disputes with Young’s work are built on the seemingly logical question that if we cannot conceivably trace responsibility for structural injustice then how can we claim that any particular individual, group or institution is politically responsible for addressing structural injustice? This is certainly an important question, and in the course of the book I shall try to explain why I think we need to look beyond liability in developing a more progressive politics around the structural dynamics of transformative technologies.
I should stress, however, that while the book is in places a dialogue with the work of Young, I do not view this as an exercise in discovering Young’s ultimate meaning. The primary intention is not to analyse, critique or defend Young’s work per se. I freely admit that my version of structural injustice is built on a purposefully selective reading of Young’s arguments, and consequently I create a much sharper version of structural injustice than may be found in Young’s work or in the work of other thinkers on structural injustice. Nevertheless, Young’s claim that remedies for structural injustice ought not to rely on ‘traceable liability’ is the direction I develop and advocate in this book because I see it as a way to address forms of injustice and harm that are often missed by a politics too heavily weighted towards liability.Footnote 7
Young’s approach was not to construct an entirely new theory of political responsibility for the unintended structural consequences of human behaviour, but rather to adopt an approach of ‘pragmatic theorizing’ (Rousseau and Gourevitch, 1997: 17). The theoretical activity of such an approach ‘is not concerned to give an account of the whole’ but rather is constituted by ‘categorizing, explaining and developing arguments that are tied to a specific practical and political problem’ (Rousseau and Gourevitch, 1997: 17). Following this approach, I focus on the relationship between structural injustice and the governance of the transformative technologies AI and Repro-tech, using the UK context as my example. As I shall discuss in later parts of the book, the UK Government boasts that it is both an ‘AI superpower’ and the world’s leader in ‘gold standard’ Repro-tech governance, thereby providing a useful backdrop against which to rethink how governance mechanisms might be oriented more directly towards the public interest with a structural focus.
Chapter 2 Artificial Intelligence and Ground Truth
We are the first generation in the history of humanity that has given machines the power to make decisions that historically could only be made by people. If we get it wrong, every generation that follows will pay a price for our mistakes.
Humanity’s increasing reliance on AI and robotics is driven by compelling narratives of efficiency, in which the human is a poor substitute for the extraordinary computational power of machine learning, the creative competences of generative AI, and the speed, accuracy and consistency of automation in so many spheres of human activity. Indeed, AI is fast becoming the core technological foundation of contemporary societies, estimated to contribute US$15.7 trillion to the global economy by 2030 (World Economic Forum, 2022).
Most thinking on how to manage the downside risks to humanity of this seismic societal shift to AI and robotics is set out in a direct fault-based relationship. To use a well-known example that caught a great deal of media attention, Elaine Herzberg tragically died when she was hit by a self-driving Uber car that failed to stop as she wheeled her bike across the road in Tempe, Arizona. Although self-driving, the car was required to have a human safety driver at all times. The ‘operator’ on that occasion, Rafaela Vasquez, had been streaming an episode of the television show The Voice and was accused of not watching the road with full attention (Wired, 2022).Footnote 8 Was Vasquez guilty of negligent homicide? This is a question of liability (although not necessarily a straightforward one). To take a wider structural perspective, however, is to ask questions such as ‘What decisions will be left for humans to make in the future, and why does it matter?’ Structural questions such as these, I suggest, must be addressed through contemporary politics and governance mechanisms.
Alongside our increasing reliance on AI to think for us, we are also steadily replacing aspects of human labour with robotic alternatives. What interests me here is not so much the usual stories of driverless cars replacing drivers or medical robots replacing doctors, but rather the broader structural dimensions of the fact that the realm of ‘jobs for humans’ is diminishing.
As Cassie Kozyrkov (2020), Chief Decision Scientist at Google, describes, in AI the truth of an algorithmic calculation is always subject to the desires of its designer, and we should not lose sight of this partiality when transferring more and more human roles to AI.
In response to rising public concern about the potential prevalence of unregulated AI-generated harm, states, international organisations and corporations have attempted to address these fears with a range of corporate, national and international forms of liability-based governance, such as the innovative EU AI Act, by far the most comprehensive political attempt to locate (or deter) those directly responsible for AI-generated harm. I argue that while such approaches are vital for combating injustice exacerbated by AI and robotics, too little thought goes into political approaches to the structural dynamics of AI’s impact on society. By way of example, I examine the UK’s current ‘pro-innovation’ approach to AI governance and explore how it fails to address the structural injustices inherent in increasing AI usage.
Chapter 3 Repro-tech and the Genetic Supermarket
What will our reproductive habits look like in the future, and why does it matter? One part of the answer, for those who can afford it, is Repro-tech. Just before his death in 2015, Carl Djerassi (leader of the team that first developed oral contraception in 1951) made a prediction.Footnote 9 He estimated that by 2050 it would be commonplace for women in the wealthier economies to undergo elective (or ‘proactive’) egg freezing to gain more control over the relationship between their fertility and their careers. Djerassi was convinced that egg freezing would not only lead to profound changes in the reproductive habits of future generations but also raise fundamental questions for humanity itself. In 2019, the technology futurist Jamie Metzl called on his audience of ‘high flying professionals’ to think about their reproductive choices:
Raise your hand if you are thinking of having a child more than ten years from now… If your hand is in the air and you are a woman, you should probably freeze your eggs. If your hand is up and you are a man, I encourage you to freeze your sperm as soon as possible. No matter how young and fertile you are … there’s a not insignificant chance you are going to conceive your children in a laboratory, so you may as well freeze your eggs and sperm now when you are at your biological peak.
Key to both Djerassi’s and Metzl’s predictions are the rising capabilities of in vitro pre-implantation genetic technologies (PGTs). Originally designed to screen for a range of genetic conditions such as sickle cell disease or Huntington’s disease, PGTs are now poised to create new markets in which prospective parents will be promised the opportunity to select the personality characteristics of their unborn children – what the political theorist Robert Nozick (1974) thought would result in a ‘genetic supermarket’. Djerassi predicted that such PGTs would become the ‘ultimate factor’ in proactive egg-freezing markets, and that science fiction would fast become fact with the use of gene-editing technologies. While these technologies are likely to remain affordable only to a minority of the world’s population, the structural consequences of genetic enhancement are likely to create a far deeper divide in human society.
Here, the sort of structural perspective discussed in Chapter 1 can help us to think beyond the individual usage of IVF, which has brought great positives to human societies, and look to the macro implications of ‘fertility insurance markets’ and ‘genetic supermarkets’, propelled by the promise of genetically crafted children at convenient life-cycle intervals and by predictions of a future fusion of AI and synthetic organisms. Unlike AI, Repro-tech has long been subject to regulation. The UK’s Human Fertilisation & Embryology Authority (HFEA), a regulatory public body created in 1990 in light of a report authored by the philosopher Mary Warnock, is widely regarded internationally as the gold standard of regulators and was the first to govern technologies as complex as gene editing and cloning. We might see some elements of promise in Warnock’s approach for a wider model of technology governance, such as her insistence on a much wider scope of professional perspectives than was usually included in technical policy decisions. Nevertheless, I chart what I see as the general demise of regulatory landscapes, in line with the dominant US-based ‘state capture’ school of thought that I discuss in more depth in Chapter 4. In particular, I consider the inherent tensions of a regulator functioning at arm’s length from politicians on the one hand while simultaneously being required to facilitate private sector growth on the other. Thinking back to the discussions of Chapter 1, it is also clear that the weight of interest in governance deliberations is largely oriented towards the tech industry and its leaders rather than towards the more macro-structural considerations that I argue we ought to weigh in the context of the public interest.
I suggest that such an exploration of the regulatory workings of state governance, which in the UK context employs some 300,000 staff with budgets amounting to over £220 billion, provides some important insights into how we might think differently about the governance of transformative technologies in the future.
Chapter 4 Putting the Public into the Public Body
In this chapter, I bring together the arguments of the previous chapters. My reading of Young’s work on structural injustice leads us to an uncomfortable political realisation: the usual tools deployed for addressing social harms and injustices – the tracing of liability in moral or legal terms – are of little use against structural injustice, which is far more complex and amorphous in shape, not least in the context of AI and Repro-tech. Young was sceptical that the state could work against structural injustices, given the degree to which states tend to be beholden to the private interests of corporations and other private actors. While I agree that state capture is widespread, I argue that it need not be so. Indeed, my view is that without the coordinating power of the state to change micro-level behaviour, we have little chance of addressing the negative structural dynamics of AI and Repro-tech, despite the extraordinary capabilities of social movements and civil society groups.
Even though no simple political solution is apparent, I argue that one essential approach is to focus specifically on the question of whose interests are at play in the governance of transformative technologies as they operate against the background conditions of structural injustice. Thinking back to the arguments made in Chapter 3 about the functioning of regulators, I argue that the macro-level coordinating powers of the state can be redeployed to address these background conditions through a direct reweighting of private and public interests within the mechanisms of governance itself. This is an alternative both to current attempts to ground tech governance in a politics of tracing fault and to leaving structural patterns outside political focus altogether. Through a radical reshaping of large-scale regulatory public body landscapes, a new form of lay-centric governance can be incorporated to deliver the sorts of decisions that a state defined by its current relationship with the tech industries cannot.
This sort of lay-centric model of public body landscape currently does not exist in governance structures, and an active case needs to be made for its creation.
Chapter 5 Conclusion: Technology is the Answer, What Was the Question?
In my concluding remarks I argue that what I have constructed in this book goes some way towards thinking more clearly about how we might address the relationship between transformative technologies (AI and Repro-tech) and structural injustice. That is to say, I have tried to begin to give a response to the ‘how’ of politics, rather than only the ‘what’.