This chapter takes stock of the current situation confronting political theory. I introduce the concept of “digital lifeworlds” and explain its relevance in the narrative of humanity. I use Max Tegmark’s distinctions among Life 1.0, 2.0, and 3.0 for guidance in locating digital lifeworlds in history. We do not know if Life 3.0 (the kind of life that designs both its culture and its physical shape, the physical shape of individuals) will ever arise. But if it does, it will arise from within digital lifeworlds – lifeworlds that already fundamentally change our lives and thus require intense scrutiny even if there is never a Life 3.0. To understand these lifeworlds, we need appropriate notions of “data,” “information,” and “knowledge,” and we need to characterize the connections among them. To that end, we enlist Fred Dretske’s understanding of knowledge in terms of the flow of information. Such a notion of knowledge allows for a broader range of knowers than the humans to whom classical analyses were limited: it includes both animals and artificially intelligent beings. I also draw on Luciano Floridi’s work on the philosophy of information for a related look at digital lifeworlds from a more detached standpoint (“infospheres populated by inforgs”).
Deepfakes are a new form of synthetic media that broke upon the world in 2017. Bringing photoshopping to video, deepfakes replace people in existing videos with someone else’s likeness. Currently, most of their reach is limited to pornography and efforts at discreditation. However, deepfake technology has many epistemic promises and perils, which concern how we fare as knowers and knowns. This chapter seeks to help set an agenda around these matters to make sure that this technology can help realize epistemic rights and epistemic justice and unleash human creativity, rather than inflict epistemic wrongs of any sort. In any event, the relevant philosophical considerations are already in view, even though the technology itself is still very much evolving. This chapter puts to use the framework of epistemic actorhood from Chapter 5.
The Amish are an unusual case of a community intensely concerned with maintaining control over how technology shapes its future. Though the community’s old-fashioned ways strike many as perplexing, in the age of AI there are good reasons why technology and its regulation should be just about as central to mainstream politics as they are to the way the Amish regulate their affairs. Technology is not neutral, as many still think, but intensely political. This also means that political philosophy and the philosophy of technology should be more closely related than they typically are: mainstream philosophy of technology has unfolded largely separately from mainstream political philosophy. The primary exception is the Marxist tradition, which has long investigated the role of technology in the dialectical unfolding of history as Marx theorizes it. We use the Marxist tradition to identify three senses in which technology is political (the foundational, enframing, and interactive senses) and argue that the Rawlsian tradition also has good reason to recognize versions of these senses. In this era, political philosophy must always also be philosophy of technology.
British writer H. G. Wells was a major advocate for a universal declaration of human rights of the kind eventually adopted in 1948. Wells paid much attention to the importance of knowledge for his era – more than found its way into the actual declaration. At this stage, an enhanced set of epistemic rights that strengthen existing human rights – as part of a fourth generation of human rights – is needed to protect epistemic actorhood in the four roles introduced in Chapter 5. Epistemic rights are already exceedingly important because of the epistemic intrusiveness of digital lifeworlds in Life 2.0, and they should include a suitably defined right to be forgotten (that is, a right to have certain information removed from easy accessibility through internet searches). If Life 3.0 does emerge, we might also need a right altogether different from anything currently acknowledged as a human right: the right to exercise human intelligence to begin with. The required argument for the validity of this right can draw on the secular meaning-of-life literature. I paint with a broad brush when it comes to the detailed content of the proposed rights, offering them manifesto-style.
Modern democracies involve structures for collective choice that periodically empower relatively few people to steer the social direction for everybody. As in all forms of governance, technology shapes how this unfolds. Political theorists have typically treated democracy as an ideal or an institutional framework, rather than considering its materiality – the manner in which democratic possibilities are to some extent shaped by the objects needed to implement them. Specialized AI changes the materiality of democracy, and not just in the sense that independently given actors now deploy different tools: AI changes how collective decision-making unfolds and what its human participants are like. This chapter reflects on the past, present, and future of democracy and embeds into these reflections an exploration of the challenges and promises of AI for democracy in this digital century. We explore specifically how to design AI to harness the public sphere, political power, and economic power for democratic purposes. Thereby, this chapter also continues the discussion from Chapter 2 by showing how technology is political in the foundational sense, and it investigates current questions about how AI could threaten or enrich the democratic processes of the present.
I introduce a distinction between “slow and relatively harmonious” and “fast and radical” scenarios for the integration of AI into human life. Regarding the “slow and relatively harmonious” scenario, I explore a set of questions about whether self-conscious artificial intelligences would be morally equivalent to humans and how it would make sense for humans to acknowledge such a moral status in machines. I do so by asking what an increase in moral status for machines would mean for the political domain. Chapter 3 explored how AI would affect the democratic process in the near future; here our concern is with a scenario further along. One question is whether there is a cognitive capacity beyond intelligence and self-consciousness that is needed for involvement in the political domain. Paying attention to what is appropriate to say about animals in that regard is useful. As far as the “fast and radical” scenario is concerned, I first explore why, philosophically, we are so dramatically unprepared to deal with an intelligence explosion, with a focus on what kind of moral status superintelligences might acknowledge in us. Finally, I attend to Tegmark’s discussion of political scenarios that could arise after an intelligence explosion and add a public-reason scenario that could offer a vision for a political context shared between humans and superintelligences.
We first explore how damaging untruth can be, especially in digital lifeworlds, where information spreads at a pace and volume unheard of in analog contexts. But misinformation and disinformation spread the same way, which amplifies the ways individuals can tell stories about themselves or have them substantiated in echo chambers in the company of likeminded people. These considerations provide support for a right to truth. However, we next see that untruth plays a significant role as an enabler of valued psychological and social dynamics. The considerations that pull in the opposite direction notwithstanding, there can therefore be no comprehensive right to truth. Contrary to a well-known Bible verse, for most people it is not the truth that sets them free anyway. It is acceptance of worldviews in likeminded company that does so (worldviews that tend to contain plenty of untruths), at least if being set free means having an orientation in the world. That there can be no comprehensive right to truth is consistent, however, with there being a right to truth in specific contexts. Still, the moral concern behind truthfulness is in this context not best captured in terms of an actual right to truth.
L’avenir est fait de la même substance que le présent. [The future is made of the same substance as the present.]
—Simone Weil
Weitermachen! [Carry on!]
—Herbert Marcuse
This book seeks to help set an agenda in a new domain of inquiry where things have been moving fast – an agenda that brings debates that have long preoccupied political thinkers into the era of AI and Big Data (and possibly the age of the singularity). Our discussions have been exploratory rather than guided by a set of theses and the need to argue for them. Some topics we covered are genuinely new, but others continue older debates – though often in ways that call for breaking down boundaries as political thought has traditionally drawn them. One point I have made throughout is that the advent of AI requires that the relationships among various traditions of political thought be reassessed. All such traditions must fully integrate the philosophy of technology. Technological advancement will continue for the time being, one way or another, if only because of geopolitical rivalry. The task for political thought is therefore to address the topics that will likely come our way and to distinguish among the various timeframes (such as Life 2.0 and Life 3.0) in which they might do so.
Foucault problematizes the relationship between knowledge and power in ways that more traditional epistemology has not, with power always already shaping what we consider knowledge. To capture the nexus between power and knowledge, he introduces the term “episteme.” The significance of an era’s episteme is easiest to see in terms of what it does to possibilities of self-knowledge, so I pay special attention to this theme by way of introducing the theoretical depth of Foucault’s notion. I then develop Foucault’s ideas further, specifically for digital lifeworlds. With this vocabulary in place, I introduce the notion of “epistemic actorhood,” which captures the place of an individual in a given episteme. Epistemic actorhood comes with four roles: individual epistemic subject, collective epistemic subject, individual epistemic object, and collective epistemic object. In terms of this place and these roles, we can then articulate the notions of an epistemic right and of epistemic justice and develop them in the context of digital lifeworlds. Digital lifeworlds engage individuals both as knowers and knowns in new ways; the framework introduced in this chapter captures this point.
“Surveillance capitalism” is a term coined by Shoshana Zuboff to draw attention to the fact that data collection has become so important for the functioning of the economy that the current stage of capitalism should be named for it. “Instrumentarian power” is a kind of power that deploys technology to obtain ever more knowledge about individuals to make their behavior predictable and thus monetizable. “Social physics” is a term used by computer scientist Alex Pentland to describe the potential of quantitative social science to put Big Data to beneficial use. The primary goal of this chapter is to discuss what it takes to secure the Enlightenment for digital lifeworlds. Since this chapter is the last in a series of chapters concerned with rights, we also discuss (and reject) the position that rights, especially human rights, are enough to articulate a promising normative vision for society. This discussion draws on insights from Horkheimer and Adorno’s Dialectic of Enlightenment, which has synergies with Zuboff’s work. Contrary to a neoliberal understanding, a strong view of democracy, as discussed in Chapter 3, is also required for a promising normative vision for society. So is a plausible theory of distributive justice, as discussed in Chapter 9.
The meaning of life and technology are not normally theorized together. But once we realize that all human activity is technologically mediated, we see that acts in pursuit of personal significance are so mediated too. This point opens the possibility that technology enters the quest for meaning the wrong way. This chapter explores what that possibility means and how to respond to it. I use as my starting point Nozick’s proposal for how to think about the meaning of life. Nozick’s account makes central the idea of “limited transcendence,” essentially folding the kind of transcendence normally involved in interaction with divinity into a finite life. Nozick’s high-altitude view does not make sufficiently clear how technology enters, but once we bring in additional ideas from Ihde and Arendt, we can see clearly how it does. Next, we turn to Wiener’s classic God & Golem, Inc. Wiener is concerned with “gadget worshippers,” people who surrender control over their lives to machines in ways that are not appropriate to what these machines can do. Working with this notion, we can throw light on how technology can enter the quest for meaning the wrong way and offer advice on how to counterbalance this challenge.
In the age of Big Data and machine learning, with their ever-expanding possibilities for data mining, the question of who is entitled to control data and to benefit from the insights that can be derived from them matters greatly for the shape of the future economy. This topic should therefore be assessed under the heading of distributive justice. There are different views on who is entitled to control data, often driven by analogies between claims to data and claims to other kinds of things that are already better understood. This chapter clarifies the value of approaching the subject of control over data in terms of a moral (rather than legal) notion of ownership. Next, drawing on the work of seventeenth-century political theorist Hugo Grotius on the freedom of the seas, and thus on the possibility of owning the high seas, I develop an account of collective ownership of collectively generated data patterns and explore several important objections. Since control over data matters enormously and is poorly understood, we should treat questions about it as genuinely open. This is a good time to bring unorthodox thinking to bear on the matter.
With the rise of far-reaching technological innovation, from artificial intelligence to Big Data, human life is increasingly unfolding in digital lifeworlds. While such developments have brought unprecedented changes to the ways we live, our political practices have failed to evolve apace. In this path-breaking work, Mathias Risse establishes a foundation for the philosophy of technology, allowing us to investigate how the digital century might alter our most basic political practices and ideas. Risse engages major concepts in political philosophy and extends them to account for problems that arise in digital lifeworlds, including AI and democracy, synthetic media and surveillance capitalism, and how AI might alter our thinking about the meaning of life. Proactive and profound, Political Theory of the Digital Age offers a systematic way of evaluating the effect of AI, allowing us to anticipate and understand how technological developments impact our political lives – before it is too late.