
The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, by Shannon Vallor. Oxford University Press, 2024. 272 pp.


Published online by Cambridge University Press:  22 December 2025

Jeffery Smith, Seattle University, USA
Onur Bakiner, Seattle University, USA

Book Review

© The Author(s), 2025. Published by Cambridge University Press on behalf of the Society for Business Ethics

The title of Shannon Vallor’s new book, The AI Mirror, powerfully conveys her deep concern with, and some optimism about, the rapid development of artificial intelligence (AI). Drawing on her earlier work in virtue ethics, and leveraging a rich body of work in literature, philosophy, and the history of ideas, Vallor raises some of the most pertinent questions that ethicists and policymakers will need to answer in the coming years about the future of AI. All of these questions are beautifully connected to her analogy of AI as a mirror, and the book is structured around seven chapters.

Vallor’s analogy of AI as a mirror is not exclusive to AI. Technology generally serves as a kind of mirror in that it reflects our shared experiences, values, and existing social relationships. “All technologies are mirrors, because all technologies are extensions of human values into the built world” (133). Vallor makes the case, however, that AI uniquely amplifies these values. Consider that one of the technologies we commonly associate with the term “AI” is machine-learning (ML) technology that utilizes large language models (LLMs) trained on large swaths of natural language data. It is inescapable that LLMs reflect our past; indeed, the fact that language is not only the medium through which we imbue the world with meaning, but also how we think about it, necessarily makes AI a conservative technology in the simple sense that the effort to replace creative human judgment and decision-making with AI replicates the very linguistic, social, and cognitive patterns that have long existed. AI is “literally built to conserve the patterns of the past and extend them into our futures” (57). This applies not just to LLMs, a technology in which machines can creatively generate text and natural-language responses to human queries, but to the entire range of ML algorithms that have enabled human-like decisions in areas such as employment, criminal sentencing, insurance underwriting, medical diagnostics, and the placement of news on social media platforms.

This conservatism is at the heart of Vallor’s basic concern with AI as a mirror. If AI primarily functions through the replication of the past, then which human communities and experiences are represented in the data on which algorithms are trained and tested? The problem here is that since there are countless ways that AI has become “the dominant surface in which we ‘see’ ourselves” (54), what is reflected to us in AI passes as—or is taken to be—objective when, in fact, what is reflected may be the experience of a limited population or interest group. Vallor writes in Chapter 5 that:

When we gaze in our AI mirrors at the words and images they generate, at the predictions and classifications they make, we are not seeing objective truths, under any definition of the “objective.” We are seeing reflections of what humans valued enough to describe or record in the data. But not all humans (133).

Vallor’s concern is that AI inferences, sometimes built and trained on parochial historical data, might detract from the rich diversity of human experience or marginalize underrepresented social groups.

Developers of AI also frame its very application; thus, its use proceeds within a narrow range of interests that users might nonetheless take to be universal. This has the potential to reaffirm a harmful idea that Vallor thinks is implicit in the business culture supporting the unprecedented level of investment in AI—namely, that AI is not merely valuable as a means but has become valuable as an end in itself. AI is slowly defining, for example, the ways we interpret human experience; one need only think of the countless ways that AI has now been pressed into service for such ends as relationship advice, companionship, religious counseling, and moral guidance (Sahota 2024). These deeply human uses of AI stand apart from the routinized uses of AI in functional areas such as human resource management or law enforcement. Vallor therefore warns us to be wary of the narrative built around the inherent “progress” offered by AI, since comparable notions of social progress have been used in startlingly similar ways to justify patterns of “resource extraction, power consolidation, hierarchical control, exclusion, and subjugation of marginalized populations” (75). She argues, following Ruha Benjamin (2019), that the racialized uses of AI in policing, criminal justice, and economic development illustrate this basic problem.

Reflecting a circumscribed past through ML technology is only one dimension of the mirroring problem for Vallor. A larger set of problems emerges from what we lose as humans as we become more reliant on the mirrors created by AI. She worries that human self-understanding and self-development are threatened.

One way in which our humanity is under threat concerns the uniqueness of human intelligence in an age of AI. “Artificial general intelligence” (AGI) brings this problem of self-understanding to the fore. The promise of AGI is that it can duplicate human learning and understanding by integrating advanced knowledge across an array of disciplines and subject domains. Vallor sees our contemporary discussion of AGI as bearing striking similarities to the machines created in Samuel Butler’s 1872 novel set in the fictional land of Erewhon. AGI is thought by some to be, not unlike the machines in Butler’s novel, a full replication of human intelligence and potentially an “existential threat to humanity” (Roose 2023).

Vallor’s technical and philosophical acumen shines through in her critical review of AGI. The problem for Vallor is not AGI’s supposed threat (she describes its actual threat as “negligible,” 162) but the way it has hijacked a necessary conversation about how human intelligence can grow alongside AI. Here her analysis liberates us from the panic caused by the rhetoric of existential calamity while also offering assurance that humans can maintain their dignity even as AI continues to develop rapidly. Machines, Vallor argues, have no “lived experience” and therefore no “mental model” of what data represent. They do not find any meaning in the statistical inferences that serve as ML “decisions” or “judgments.” Without this lived experience, machines will hallucinate and make obvious mistakes because they do not share the human concern for truth. More fundamentally, however, machines will lack an intelligence that can be responsive to a world where meanings are open-ended, dynamic, and infinitely complex. Basic tasks, such as richly interpreting past events, drawing comparisons across disparate decision-making contexts, making normative judgments in practical situations with incommensurate values, and reasoning about matters that are not straightforwardly deductive, will remain things that machines cannot do well. Knowing how to use language, for Vallor, is not simply a matter of predicting which words to place in a sequence or which terms are most likely to be associated with others. Language is, following the later Wittgenstein, an active process of being within a set of lived practices, of existing within the fabric of a shared “lifeworld.”

If we add to this the premise that language is the vehicle through which we not only speak but also perceive, reason, and deliberate, then it is not a stretch to infer that human intelligence will remain unique. While machines can “produce outputs in the form of reasons that very much mirror our own,” they do this without replacing a human process of reasoning (107, emphasis added). Vallor summarizes more sophisticated debates in the philosophy of mind and, in doing so, effectively defends the conclusion that human intelligence is not likely to be replaced by machines any time soon.

A second, more plausible way that Vallor thinks our humanity is compromised in an era of AI centers on our own moral development. This problem is much more concerning in her eyes and picks up on many claims she develops in her earlier book Technology and the Virtues (2016). The problem is not that AI is technically capable of replacing humans or rendering them unimportant; the problem is rather that AI is being developed and used by humans in a way that short-circuits the cultivation of virtues essential to our own flourishing. Instances of this effect abound. AI has begun to stand in for the detailed, case-by-case clinical judgments normally made by physicians. It can draw inferences on complex questions that would ideally call for collaboration among scientists. Algorithms have adversely impacted citizens’ commitment to political deliberation by promoting falsehoods and conspiracies. AI can replace, and thereby hinder, human creativity in visual art, literature, and music.

In these and other cases Vallor—drawing on ancients such as Mencius and Aristotle—calls for a “technomoral wisdom” on the part of engineers, computer scientists, entrepreneurs, and policymakers that ensures that moral virtues, such as honesty, justice, and civility, are integrated into their work alongside epistemic virtues like humility, perspective, flexibility, and open-mindedness. Only then will AI be designed and deployed in a way that casts off many of the vices—speed, scale, innovation, productivity, overconfidence, and social control—that underlie AI’s current expansion and normalization. Most importantly, even though AI may indeed provide us with the tools to advance the common good, the practical wisdom that Vallor has in mind resists a “naïve faith in technology to solve our problems [which,] in the absence of moral and political wisdom, will only worsen it” (179). In one of the most interesting literary references suggested by the book’s title, Vallor stresses that practical wisdom in the Roman tradition—prudentia—is depicted as a woman holding a mirror. Vallor finds value in this association of practical wisdom with a mirror because it allows Prudentia a selective view of the past from which to learn. The mirror held by Prudentia also allows a kind of self-scrutiny. So, unlike the image of Narcissus, whose self-love grew through the adoration of his own reflection in the water, a mirror can provide the surface on which we critically discern what is true and what may be good for ourselves.

Some readers will ask what Vallor’s call for practical wisdom means, given that she offers little guidance in the area of AI governance. What do we collectively need to do to arrive at a state of technomoral wisdom? Here we should be careful not to ask too much of one book. The value of Vallor’s analysis lies, above all, in its insistence that the future of responsible AI requires an awareness of our very human skills, feelings, interests, values, identities, vulnerabilities, and limitations. Each one of us does spend a considerable portion of our lives maximizing some kind of utility through calculation, optimization, and prediction, but we are much more than that—unless, of course, we reduce ourselves to input-processing, output-producing machines in a process Langdon Winner calls “reverse adaptation” (77). Recognizing human worth in ourselves and others should foreground ethical, legal, and technical AI governance models. If AI is made to serve as a “powerful amplifier” of human ability (28), all the better, but the fundamental challenge of being human in the age of AI is to build a better society for all humans—nothing less, nothing more.

Other readers will wonder whether the mirror analogy does all of the work Vallor intends. The image of Prudentia illustrates how the analogy is multifaceted and may convey positive as well as negative connotations when we think of the “AI mirror.” Vallor’s analogy is nonetheless thought-provoking in a way that helps frame and prioritize the ethical problems associated with AI. In invoking Lewis Carroll’s Through the Looking Glass in Chapter 3, Vallor reminds us that, in addition to offering a simple reflection of the here and now, mirrors create the illusion of a place in the future, as if we could walk through the mirror’s surface into a new world. One is drawn here to the rhetoric around AGI, or the prospect of human-like agency held by machines, as examples of this kind of mirroring. What we “see” in the use of AI is a projection of its potential into the future, with sometimes alarming consequences. Whatever we make of Vallor’s skepticism about whether these outcomes are likely, the very notion that AI’s use gives rise to such illusions is an effective way to think through the assumptions we are making about the technologies in question.

Ultimately, Vallor’s observation that we risk losing sight of what it means to be human in the age of AI should qualify whatever hopefulness we have in the technology. Still, while the book does not elaborate on technical fixes or regulatory models, her call for cultivating the virtues needed for humans to engage productively with AI provides a meaningful frame for those looking to moderate their relationship with AI as well as for those who will help craft the future of AI law and public policy. Business ethicists, too, will naturally walk away from Vallor’s book with renewed reasons to justify calls for greater AI safety, explainability, and transparency. But there is also a unique lesson in Vallor’s analysis that moves the public discussion beyond these now standard elements of responsible AI. Her work reminds us that the development and use of AI involves deeper, existential questions about how and when it is appropriate to deploy AI technologies in the first place (cf. Samuel 2024). Executives, engineers, and AI entrepreneurs should find inspiration in Vallor’s book as they lead productive conversations around such foundational matters.

Jeffery Smith (corresponding author) is professor and Frank Shrontz Chair in Professional Ethics in the Albers School of Business and Economics at Seattle University. His work focuses on the philosophical foundations of corporate responsibility and ethical decision-making in organizations. His articles have appeared in Business Ethics Quarterly, Ethical Theory and Moral Practice, and the Journal of Business Ethics, among other journals.

Onur Bakiner is professor of political science and director of the Technology Ethics Initiative at Seattle University. His current research lies at the intersection of AI governance, technology policy, and human rights. He has published recently in journals such as Big Data & Society and Law, Innovation and Technology. He is the author of the book Truth Commissions: Memory, Power and Legitimacy (University of Pennsylvania Press, 2015).

References

Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Polity.
Roose, Kevin. 2023. “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” New York Times, May 30. https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html.
Sahota, Neil. 2024. “How AI Companions Are Redefining Human Relationships in the Digital Age.” Forbes, July 18. https://www.forbes.com/sites/neilsahota/2024/07/18/how-ai-companions-are-redefining-human-relationships-in-the-digital-age/.
Samuel, Sigal. 2024. “Shannon Vallor Says AI Does Present an Existential Risk—But Not the One You Think.” Vox, November 21. https://www.vox.com/future-perfect/384517/shannon-vallor-data-ai-philosophy-ethics-technology-edinburgh-future-perfect-50.
Vallor, Shannon. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.