
Josh Simons: Algorithms for the People: Democracy in the Age of AI. (Princeton: Princeton University Press, 2023. Pp. 320.)


Published online by Cambridge University Press:  08 January 2024

Benjamin Gregg
University of Texas at Austin, Austin, Texas, USA


Book Review

Copyright © The Author(s), 2024. Published by Cambridge University Press on behalf of University of Notre Dame

Artificial intelligence cannot bear legal or moral responsibility for the consequences of its use. Responsibility lies with those human beings who design, develop, and deploy it, from lead engineers and project managers to corporate leadership, and from government regulators to elected representatives. Either these persons assign responsibility to themselves (or to their colleagues, associates, or employees), or responsibility is assigned to them by nonelite outsiders and laypersons.

Josh Simons envisions as much: citizens of a political community assigning responsibility through democratic participation in public-sphere decision-making about certain features of the corporate design of AI. Citizens would assign responsibility for AI-related, inequality-based social injustice that follows from ranking systems, the predictive tools employed in machine learning. The design of ranking systems—with unforeseen political ramifications whose provenance is uncertain because obscured—imposes an “artificial kind of scarcity on vast quantities of content and websites” and restricts “who is seen and heard by whom” (195). With “top-line metrics, values, and concepts built into [their] design” (195), ranking systems “bake in political choices, naturalizing the web it ceaselessly shapes” (127). They “direct citizens’ attention” and “shape the exercise of self-governance” (181) in ways that “corrupt the public sphere” by producing “filter bubbles and social division” (135).

Simons focuses on two corporations, Facebook and Google. Each designs its ranking systems to maximize advertising revenue and exercises exclusive authority over them. By distributing content via ranking, they mold social infrastructure. Ranking “determine[s] what appears where on which people's newsfeeds and search results,” in this way “shap[ing] the ideas and information that citizens engage with on a vast scale” (181) while “exercis[ing] a kind of infrastructural power” to shape “our public sphere and organize our information ecosystem” (135). Because these corporations organize their decision-making in ways that unintentionally perpetuate the marginalization of groups often excluded from equal participation, ranking too often compounds deep-seated disparities in influence across domains from housing to education, from the workforce to the criminal justice system. This pattern extends, of course, to clients of corporate ranking systems, such as a law enforcement agency that draws on them and thereby unwittingly perpetuates racial disparities in policing a city, in a context where “US discrimination law often fails to ensure that machine learning models are built to advance equality and may even block the kinds of design choices required to use machine learning to address patterns of inequality” (57).

As with AI in general, the political challenges of ranking systems are not inherent to the technology but follow from its human administration (which is why humans, not AI, bear responsibility). Machine learning systems may certainly offer positive social contributions. For example, they can empower institutions to create precise definitions of disadvantage tailored to specific decision-making scenarios. In admissions to competitive universities, they can incorporate factors like the interplay between geographical location, school attendance, and race. In hiring decisions, they can consider the interaction between sex and socioeconomic status. But, says Simons, they will do so only if a political community commits to such deployment by articulating a “positive ideal of political equality that goes beyond discrimination to establish laws and regulatory structures to govern decision-making” (78) in a wide range of institutions—from public schools to law enforcement to welfare administration—so as not to exacerbate entrenched power inequalities, and sometimes to address those inequalities directly.

Liberal democratic political community stands at the center of Simons's vision. He proposes regulating the digital infrastructures that big tech continuously pumps into the global market, and he frames such regulation as the self-emancipation of digital consumers from the seductions of products and services cleverly promoted through corporate market power. Emancipation takes the form of regulating corporations like Google and Facebook, legally and perhaps administratively but also in terms of political culture, and encumbering them with a responsibility to design ranking systems that support “healthy information architectures and thriving civic spaces” (107). Machine learning can contribute to advancing social equality among citizens, but only if the political community imposes corresponding duties on relevant institutions.

To that end, Simons reimagines the corporation as a “democratic utility,” operating under public oversight and democratic governance while facilitating collective self-governance by shaping the very parameters of self-government. The corporation so understood would design infrastructural ranking systems in new ways, with regulatory responses targeted “toward the functions of those ranking systems to a flourishing democracy, exploring the activities they support, who they affect and make vulnerable, and how best to empower citizens to design them to support a healthy public sphere and civic information architecture” (182).

Simons would also create a new platform regulator, the “AI Platforms Agency,” to construct mechanisms that enable inclusive and participatory governance in the development and assessment of machine learning systems. The agency would expand the range of persons involved in decisions about machine-learning design. The current tiny elite of technocratic experts providing “technical explanations of the inner logic of machine learning models” would be joined by three mechanisms of empowered, participatory decision-making guided by “principled justification that surface the political values built into technical choices” (185): “citizen assemblies,” to specify the obligations to be imposed on corporations; “mini-publics,” to monitor corporate machine-learning design through a platform for collecting information and synthesizing evidence, connecting corporate decision-makers with the interests of civil society stakeholders, experts, and the general public; and “citizen juries,” to render sometimes controversial judgments about individual instances of content moderation or about specific machine learning systems.

Facebook-like corporations would then design ranking systems in ways that further a Habermasian digital public square, while Google-like companies would become digital public libraries. Together they would create a common-purpose ethical framework operating as a self-organized, self-governing, common-pool resource for access to a broad range of information: a species of knowledge commons (the notion, prominently developed by Elinor Ostrom, that common resources can be managed effectively if managed by those who use them).

The first half of Simons's book displays considerable technical expertise in machine-learning design as well as personal experience in its corporate deployment, and it makes a very useful and significant contribution. But the second half fails to redeem the promise of Simons's morally compelling, radically ambitious, institutional vision of public regulation of AI design in companies like Google and Facebook. (Such redemption is no easy task, as I know from my own attempts to tap corporate potential to advance social justice, e.g., in B. Gregg, “Beyond Due Diligence: The Human Rights Corporation,” Human Rights Review 22 [2021]: 65–89.) This notion of governance of the information environment, combining openness and popular participation, ensuring both that information is freely available to everyone and that the public has a say in deliberating about how best to design ranking systems, confronts the legal rights and political privileges of private capital. While corporations are compatible with democratic communities, they are hardly in the democracy business. While popular civic participation might advance social justice in some cases, it may undermine it in others. Simons makes the empirically unwarranted assumption that participants would simply share goals and value commitments “to justify how the system they build advances shared goals” (185). He neither fleshes out his positive ideal of political equality nor explicates the standard by which to evaluate the success of “algorithms for the people,” what he calls a “flourishing democracy.” Simons might address these lacunae by developing the book's incipient notion of a knowledge commons.