A. Introduction
The explosive growth of generative artificial intelligence (GenAI) has triggered widespread debate over how we should regulate these powerful new technologies.Footnote 1 Many observers advocate for comprehensive new regulatory frameworks that treat generative AI as a sui generis phenomenon demanding novel forms of oversight.Footnote 2 But this approach misses a crucial point. We need to understand these systems as part of the ongoing evolution of algorithmic control over our information environment. Using my earlier analysis of how generative AI builds upon existing forms of algorithmic content generation as a case study,Footnote 3 I argue for a two-track regulatory strategy: maintain agnosticism about AI’s uncertain future while implementing focused interventions to address specific, documented harms in the present. This balanced approach recognizes both the revolutionary potential and evolutionary nature of generative AI, helping us avoid the twin dangers of regulatory overreach and regulatory paralysis.
Regulatory agnosticism, as advanced here, recognizes that overly broad or restrictive regulations, however well-intentioned, risk stifling innovation and chilling the development of beneficial AI applications that have yet to be conceived.Footnote 4 The history of communications technologies is replete with examples of unanticipated societal impacts and unintended regulatory consequences.Footnote 5 Given the vast potential of generative AI across domains as diverse as creative expression, scientific discovery, education, and healthcare,Footnote 6 imposing all-encompassing governance frameworks at this nascent stage could prematurely constrain valuable use cases and stunt the technology’s evolution.
Yet this agnosticism toward future applications must be tempered by a laser-like focus on addressing the known challenges and harms associated with algorithmic systems, particularly in the digital media context. Generative AI represents the latest chapter in what I have characterized as the progressive algorithmization of our information ecosystem, building upon the troubling trajectories of search engines and social media platforms.Footnote 7 These predecessor technologies have already given rise to a range of well-documented societal harms, including the viral spread of disinformation, the formation of polarized echo chambers, and the erosion of a shared public discourse.Footnote 8 Generative AI threatens to dramatically amplify these harms by enabling the automated production of persuasive content at an unprecedented scale,Footnote 9 posing grave new challenges for content authenticity, information quality, and the very integrity of the digital public sphere.Footnote 10
When we examine whether the European Union and China’s contrasting regulatory approaches offer workable solutions, we see two sharply different visions of AI governance. The EU’s AI Act represents a sweeping attempt to regulate all AI applications through a single comprehensive framework built around risk categories.Footnote 11 China, by contrast, has pursued a more focused, domain-specific approach,Footnote 12 as reflected in measures like the Interim Measures for the Management of Generative AI Services—regulations that specifically target content security and algorithmic transparency in digital media platforms. By comparing these divergent strategies and asking whether they adequately address the fundamental challenges that generative AI poses to public discourse while preserving space for beneficial innovation, we can better understand the virtues of a more targeted regulatory approach that speaks directly to the distinctive problems these technologies create for the digital public sphere.
In light of these pressing risks, the Article contends that a targeted regulatory approach narrowly focused on remedying the specific algorithmic harms associated with generative AI’s applications in digital media is more prudent than sweeping, one-size-fits-all AI legislation. This call for narrower, domain-specific regulation aligns with the understanding of generative AI as an evolutionary development in the algorithmic mediation of communication, rather than a sui generis phenomenon demanding entirely novel governance paradigms. As rapid advances in AI continue to transform the landscape of public discourse and knowledge production, we must resist the temptation of hasty, reactive legislation in favor of more nuanced, multifaceted policymaking approaches harnessing the collective wisdom of diverse stakeholders, leveraging international knowledge-sharing, and adaptively responding to the technology’s breathtaking pace of change. The path ahead requires threading an exceedingly delicate needle between the twin perils of stifling overregulation and reckless techno-optimism—a daunting but essential undertaking as we strive to imbue the development of generative AI with an abiding commitment to democratic values and the flourishing of the human spirit.
The argument proceeds in three sections. Section B examines the key challenges posed by generative AI and digital media platforms, with a particular focus on the bypass effect, echo chambers, and informational capitalism, and then assesses the limitations of the EU’s horizontal, risk-based framework embodied in the AI Act. Section C turns to China’s more targeted, iterative regulatory strategy and its treatment of media harms. Finally, Section D advances the central thesis that a domain-specific regulatory approach focused on remedying known algorithmic harms in digital media is likely to prove more effective than sweeping, all-encompassing AI governance frameworks.
B. Digital Media Harms
The rise of generative AI represents another significant transformation in our digital infrastructure and its relationship to democratic self-governance. While many commentators fixate on GenAI’s novel capabilities for automated content creation, we should understand this technology as the latest development in a longer historical process of private control over public discourse. The key analytical point is to see how GenAI amplifies and accelerates existing problems of platform power rather than introducing entirely new challenges. For the past twenty years, we have witnessed the steady transformation of the public sphere through digital intermediaries and their algorithms. GenAI does not so much create new questions as push existing ones to their logical conclusion, forcing us to confront fundamental issues about democratic legitimacy, private governance, and the future of free expression that have been brewing since the early days of social media.Footnote 13
The deployment of algorithmic systems for content moderation, recommendation, and personalization across social media and search platforms offers essential lessons for understanding GenAI’s societal implications.Footnote 14 These systems have demonstrably contributed to the fragmentation of public discourse, accelerated the spread of misinformation, and undermined trust in democratic institutions.Footnote 15 Moreover, they have generated serious concerns about privacy, surveillance, and the concentration of power in the hands of a small number of dominant technology companies.Footnote 16
My premise is, therefore, that when we consider the emergence of GenAI, we must understand it as part of the continuing transformation of our digital infrastructure and its effects on democratic discourse. The lessons we’ve learned from content moderation and recommendation algorithms—their impact on public communication and democratic deliberation—provide essential guidance for addressing the challenges that GenAI presents to constitutional democracy.Footnote 17
In what follows, I identify three central challenges that GenAI shares with existing algorithmic systems that shape public discourse: the bypass effect, echo chambers, and the commodification of information flows. Drawing on our experience with digital platforms and insights from law, media studies, and political economy, I seek to demonstrate how these challenges reflect deeper tensions in the relationship between technology, democracy, and private power.Footnote 18
My aim in this analysis is not to provide final answers, but to spark a broader conversation about what GenAI means for democratic self-governance and how we might regulate these powerful new technological systems.Footnote 19 The central question before us is how to preserve democratic values and public participation in an era of algorithmic control over speech and information. By confronting these challenges directly and experimenting with new forms of democratic oversight, we can work toward a digital public sphere that genuinely serves democratic ends.Footnote 20
I. Digital Media Platforms
In this Section, I show how generative artificial intelligence—particularly chatbots and media generators—represents the latest evolution in what I have previously termed “digital media platforms.”Footnote 21 These platforms include services like X (formerly Twitter), Google’s search infrastructure, and OpenAI’s ChatGPT. While they share some features with traditional media organizations and earlier algorithmic systems, they operate in fundamentally different ways that require us to rethink our existing frameworks for analysis and regulation.
Previously, I argued that these platforms share five foundational characteristics.Footnote 22 First, they are constituted through algorithmic processes that perform essential governance functions: content moderation,Footnote 23 personalized information delivery,Footnote 24 and media generation.Footnote 25 These algorithmic frameworks are not merely technological tools but rather represent fundamental organizing principles through which these platforms structure public discourse.Footnote 26
Second, these platforms shape information consumption through three distinct algorithmic mechanisms. Search algorithms determine what information users encounter when actively seeking knowledge, structuring the architecture of online information accessibility. Feed and recommendation algorithms, deployed across social media and content platforms, curate what information users passively encounter, personalizing their exposure based on behavioral data. Finally, generative algorithms create new information in response to user queries, synthesizing and presenting knowledge in novel ways.
Third, these platforms depend on vast data infrastructures that shape their operational character. The algorithms that constitute their operational core—whether for recommendations, moderation, or generation—require massive data repositories.Footnote 27 This dependency creates distinctive dynamics: network effects that privilege established data aggregators and raise profound questions about informational power concentration.Footnote 28
Fourth, these platforms have achieved a scope of influence that transcends traditional boundaries, operating at a global scale while remaining concentrated in the hands of a small number of corporate entities. This concentrated power has significant implications for information flow, market competition, and local information ecosystems.Footnote 29 The observation that major GenAI developers frequently overlap with dominant social media and search providers—as exemplified by Google, Facebook, and Microsoft—underscores this convergence.Footnote 30
Fifth, these platforms have assumed critical gatekeeping functions previously performed by traditional media institutions.Footnote 31 However, their gatekeeping differs fundamentally in its reliance on algorithmic governance rather than traditional editorial control. This shift represents a transformation in how societies structure and regulate information flows, with GenAI’s emergence potentially amplifying these dynamics.
When we examine the shared features of these platforms, we see how their basic structure creates recurring problems for democratic self-governance and the organization of public discourse. Through their algorithmic architecture and concentrated private power, these platforms generate three fundamental challenges that we must address. First, through what I call the bypass effect, they evade traditional mechanisms of democratic accountability. Second, they create echo chambers that fragment public discussion and undermine democratic deliberation. Third, through their business models and market dominance, they concentrate unprecedented economic and social power in the hands of a small number of private companies.
In the sections that follow, I examine how these three challenges—the bypass effect, echo chambers, and informational capitalism—manifest across different types of digital media platforms. Through this analysis, I demonstrate how GenAI systems, despite their novel capabilities, fundamentally extend and intensify these existing platform dynamics rather than introducing entirely new challenges. This framework provides a foundation for understanding both the continuities and discontinuities in how different digital media platforms shape public discourse and democratic governance.
1. The Bypass Effect
The emergence of generative AI platforms, much like the advent of digitalization and social media, heralds a dramatic shift in the control and dissemination of information. This change exemplifies what I’ve previously called the “bypass effect.”Footnote 32 In pre-digital media ecosystems, community norms and local gatekeepers—ranging from local media elites, such as newspaper editors, to public intellectuals—played a crucial role in shaping public discourse, setting standards for acceptable speech, and managing the flow of information.Footnote 33 These gatekeepers, deeply embedded in their respective communities, were instrumental in enforcing community-specific speech and information norms, including those governing insults, hate speech, and misinformation.
In prior works, I examined the impact of social media’s global influence and its disconnect from local contexts, highlighting how this shift poses a challenge to the existing political structure.Footnote 34 The digital revolution has reshaped the media landscape, transforming the role of traditional media from gatekeepers of information to gatewatchers within a more open and democratized information ecosystem.Footnote 35 Unlike the concentrated control typical of mass media, where few entities governed the distribution of content, the internet has introduced a markedly decentralized media setting. This novel environment enables broader production and distribution of information, marked by its extensive reach and lowered cost. The essence of this transformation lies in a pivotal shift: “[I]t is no longer speech itself that is scarce, but the attention of listeners.”Footnote 36
As scarcity shifts from the number of speakers to the attention of listeners, the role of mass media undergoes a transformation.Footnote 37 In this digital media environment, traditional mass media, such as TV, radio, and newspapers, though still important, constitute merely one of many influences in the sphere of public discourse.Footnote 38 This shift has transformed both how information spreads and how media shape societal narratives.Footnote 39
An inherent challenge lies in the attempt by these platforms to apply uniform speech norms across a diverse, global user base.Footnote 40 Despite efforts to tailor their enforcement to resonate with local communities and engage with local stakeholders, the inherent contradiction of this global-local dichotomy renders the mission somewhat quixotic. This tension underscores a fundamental reconfiguration in the dynamics of speech regulation, paradoxically making the power to influence speech both more dispersed—individuals can post content directly to a mass audience without requiring acceptance by traditional mediaFootnote 41—and more centralized—since the platform internet is dominated by very few corporations that are managed by a handful of individuals.Footnote 42 Both centralization and dispersion, however, bypass the effective influence of local media on public discourse.
GenAI represents a further shift in the landscape of information dissemination and public discourse. This technology stands in contrast to social media platforms which, despite their worldwide influence, maintain at least a basic framework of community standards crafted by humans.Footnote 43 On the one hand, these standards, although developed in remote headquarters and implemented through a combination of algorithms and global content moderators, still reflect human decision-making and oversight.Footnote 44 On the other hand, GenAI functions through advanced algorithms that independently create and distribute content, thereby bypassing conventional gatekeeping mechanisms altogether.
The rise of GenAI intensifies what I have called the bypass effect—the ability to circumvent traditional controls over public discourse and information flows. Throughout modern history, social elites, particularly newspaper editors and public intellectuals, served as primary gatekeepers of societal narratives. While social media platforms began to challenge this arrangement by questioning established media’s gatekeeping function and broadening participation in public discourse, they did not fundamentally displace traditional media’s role in content creation. Indeed, much of what circulates on social media continues to originate from these conventional sources.Footnote 45
With the growing spread of AI-generated content, we are witnessing a further evolution. The capacity to create content, once predominantly in the hands of local gatekeepers, is increasingly transitioning to global technology corporations and their AI systems.Footnote 46 This transition is not merely a redistribution of content creation power but also a potential diminishment of the barriers posed by language, once a significant obstacle to the globalization of content. The erosion of this linguistic barrier heralds a future where content is not only universally accessible but also universally producible.
The crux of this change lies in the training methodology of generative AI, typically characterized by the ingestion of vast, globally sourced datasets. This approach presents a significant challenge: attuning the AI to the nuances of local speech patterns and cultural contexts proves immensely difficult.Footnote 47 Consequently, the content generated by these AI systems exhibits a propensity for unpredictability, often lacking the necessary context and sensitivity to resonate with specific communities.Footnote 48 This inherent unpredictability, compounded by the “bypass effect,” raises concerns about the future of public discourse. As the influence of local norms and values in shaping public narratives diminishes, the potential risk to the cohesion and identity of local communities grows. One could hypothesize that the global nature of training data employed in LLMs, coupled with the increasing prevalence of data generated by these models themselves, ushers in the emergence of a singular, global culture.Footnote 49 Depending on one’s perspective, this concept can be interpreted as either a utopian synthesis of elements from worldwide cultures or a dystopian homogenization that erases the vibrant tapestry of local and regional diversities.
Global datasets and inherent unpredictability do not shield GenAI from cultural imperialism concerns, such as Silicon Valley elites applying US-based speech values globally in content moderation.Footnote 50 Similar to social media platforms, companies wielding generative AI must heavily moderate their models’ outputs.Footnote 51 Unmoderated LLMs risk generating harmful content, necessitating post-training moderation mechanisms.Footnote 52 In simple terms, these algorithms are akin to content moderation algorithms on platforms: they try to determine whether the content generated by the LLM violates a set of predetermined rules, and if it does, they delete it. Bots like ChatGPT, therefore, are already trained on these existing social (speech) norms and are thereby inserted into the internet culture wars, with some characterizing them as being “trained to be woke.”Footnote 53
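To make this mechanism concrete, the following is a minimal sketch of such a post-training moderation loop, assuming a toy rule set and a simple keyword matcher standing in for the trained classifiers that providers actually deploy; none of the names below describe any real provider’s API.

```python
# A minimal, hypothetical sketch of post-training output moderation as
# described above: generate a draft, check it against predetermined rules,
# and withhold (delete) it on a violation. A keyword match stands in for
# the trained classifiers real systems use; all names are illustrative.

BLOCKED_TERMS = {"example_slur", "example_threat"}  # stand-in rule set

def generate(prompt: str) -> str:
    """Stand-in for a call to the underlying LLM."""
    return f"Draft response to: {prompt}"

def violates_rules(text: str) -> bool:
    """Stand-in for a moderation classifier scoring text against the rules."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def moderated_reply(prompt: str) -> str:
    draft = generate(prompt)
    if violates_rules(draft):
        return "[response withheld by content policy]"  # the deletion step
    return draft

if __name__ == "__main__":
    print(moderated_reply("Explain the history of dandan noodles."))
```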
Generative AI content moderation is likely significantly more effective than social media content moderation: corporations such as OpenAI control both the generation of content and its moderation, which enables them to enforce their own rules carefully on the content presented to the user. This tendency towards very careful speech control is also motivated by the unclear status of generative AI under the common liability shields enjoyed by social media.Footnote 54
The growing ability to download open-source AI models and run them on personal computers, combined with the emergence of AI personal agents, presents an intriguing shift in how we interact with these technologies. These developments allow individuals to maintain greater control over both content generation and filtering, potentially addressing concerns about cultural imperialism and the centralized power of global platforms. Yet while this decentralization might appear to resolve certain problems, it fails to adequately address the fundamental challenge of the bypass effect. The erosion of shared social foundations necessary for maintaining a coherent political community persists, regardless of whether the bypassing occurs through centralized platforms or personalized AI agents running on individual machines.
In the following Subsection, I discuss the potential ramifications of such personalization—and the potential creation of ever more narrow, well-calibrated echo chambers—on the political and social fabric of our communities.
2. Echo Chambers
As we have seen, by bypassing traditional media gatekeepers,Footnote 55 digital media platforms have fundamentally altered the media landscape. This alteration has dismantled the once-common media experience that is central to the formation of a unified “public.”Footnote 56 This bypass was complemented by the shift to a personalized media experience curated by recommendation algorithms on platforms like YouTube.Footnote 57 Through algorithmic personalization, each user’s experience becomes distinct and separate, diverging from the mass media era’s collective narrative and shared information environment.Footnote 58 This fragmentation represents a significant shift from the traditional mechanisms through which a societal “public” is forged and maintained.Footnote 59
The absence of gatekeepers, combined with the personalized business models of social media and search, fosters the creation of digital echo chambers.Footnote 60 Digital echo chambers can be defined as “environments in which the opinion, political leaning, or belief of users about a topic gets reinforced due to repeated interactions with peers or sources having similar tendencies and attitudes.”Footnote 61 Selective exposure and confirmation bias, the inclination to seek out information that aligns with existing opinions, likely contribute to the formation of echo chambers on social media.Footnote 62 Research examining social networks and the influence of digital media on the formation of like-minded groups consistently reveals ideologically homogeneous social clusters.Footnote 63 Furthermore, these homophilic social formations are often linked to an increase in hate speech and sentiments against outgroups.Footnote 64 The digital public sphere is, therefore, fragmented into myriad subgroups, each confined to its echo chamber, thus diminishing the possibility of a collective conversation and a cohesive public opinion. The evolution of social media lays the foundation for grasping the wider impacts of personalized generative AI in democratic societies.
Now, with the advent of personalized GenAI,Footnote 65 we are likely to witness an even deeper fragmentation of the epistemic and social fabric that social media initiated. Unlike social media, whose social nature necessitates operating within the confines of shared platforms, personalized generative AI represents a more radical individualization of media experience. Each user can interact with a unique AI entity, tailored to their specific preferences and viewpoints.Footnote 66 We are already seeing the early stages of such developments: Anthropic’s Claude allows users to customize their AI’s communication style through preset options like “Formal” and “Explanatory,” or by providing sample content that reflects their preferred tone. Similarly, Meta’s AI Studio enables users to create personalized AI characters with distinct personalities and traits for automated interactions across platforms like Instagram and WhatsApp. These developments signal a shift toward increasingly individualized AI experiences, where each user can shape their AI’s characteristics to match their specific needs and preferences.Footnote 67
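As a rough illustration of how such preset-based personalization can be implemented, the sketch below folds a per-user style preset, and optionally a user-supplied writing sample, into the system prompt that conditions every response. The preset names and function signatures are invented for illustration and do not describe Anthropic’s or Meta’s actual implementations.

```python
# Hypothetical sketch of preset-based chatbot personalization: a per-user
# style choice is compiled into the system prompt that conditions every
# generation. Preset names and structure are illustrative only.
from typing import Optional

STYLE_PRESETS = {
    "formal": "Respond in a formal, precise register.",
    "explanatory": "Explain concepts step by step, with concrete examples.",
    "concise": "Answer in as few words as possible.",
}

def build_system_prompt(preset: str, sample_text: Optional[str] = None) -> str:
    """Compose the system prompt that will steer all of this user's chats."""
    parts = ["You are a helpful assistant.", STYLE_PRESETS[preset]]
    if sample_text:
        # Mimics products that let users supply sample content whose tone
        # the model should match.
        parts.append(f"Match the tone and style of this sample: {sample_text!r}")
    return " ".join(parts)

if __name__ == "__main__":
    print(build_system_prompt("explanatory"))
```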
This technological advancement might intensify the decline of the common public dialogue crucial for democratic participation. Essentially, the trend towards personalized generative AI doesn’t just extend the patterns set by social media; it markedly deepens them. Personalized generative AI employs algorithms to deliver customized content to each user, creating isolated experiences that diverge from a shared public narrative. This effect, akin to echo chambers already seen in social media, is amplified in generative AI. It crafts text, images, videos, and audio that resonate with individual preferences and convictions, potentially cocooning us in bespoke informational realms. Such echo chambers could, conceivably, cultivate a singular information environment tailored to one person.
GenAI surpasses social media in fostering echo chambers in more ways than one. For instance, ideologically driven social media platforms like Truth Social or Gab struggle to gain traction, largely due to the network effects inherent in the social component of these platforms.Footnote 68 GenAI, however, is not subject to such network effects post-training.Footnote 69 It’s quite feasible for smaller groups to operate their own specialized GenAI models—envision an ideological “Republican AI” versus a “Democratic AI.”Footnote 70 The training data for these models would be selectively curated to reflect each model’s ideological leanings, and content moderation algorithms could be tweaked to exclude information that contradicts their foundational ideology.Footnote 71 Thus, a conservative might receive content from the Republican AI that reinforces their beliefs, while opposing facts are filtered out. Each faction becomes more deeply embedded in its own polarized reality.
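A schematic sketch illustrates how little machinery such ideological curation would require: restrict the training corpus to approved sources, then tune the output filter to drop contradicting claims. Every source, claim, and identifier below is fabricated for illustration; no real model or dataset is described.

```python
# Hypothetical sketch of an ideologically curated GenAI pipeline, as
# envisioned above: the training corpus is restricted to approved sources,
# and the moderation layer suppresses outputs contradicting the ideology.

APPROVED_SOURCES = {"aligned-outlet.example", "party-blog.example"}
CONTRADICTING_CLAIMS = {"inconvenient fact a", "inconvenient fact b"}

def curate_corpus(documents: list[dict]) -> list[dict]:
    """Keep only training documents from ideologically approved sources."""
    return [doc for doc in documents if doc["source"] in APPROVED_SOURCES]

def ideological_filter(output: str) -> str:
    """Tweaked moderation layer: delete outputs that contradict the ideology."""
    if any(claim in output.lower() for claim in CONTRADICTING_CLAIMS):
        return "[filtered]"
    return output

if __name__ == "__main__":
    corpus = curate_corpus([
        {"source": "aligned-outlet.example", "text": "..."},
        {"source": "neutral-wire.example", "text": "..."},
    ])
    print(len(corpus))  # 1: the non-approved source is excluded
    print(ideological_filter("Report: inconvenient fact a was confirmed."))
```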
Without shared facts and experiences, citizens cannot engage in reasoned democratic debate and collective will-formation.Footnote 72 Without adequate oversight and transparency, generative AI poses a risk to the integrity of truth and the trust placed in crucial democratic institutions such as journalism. Consequently, the effects of social media in fragmenting discourse and spreading misinformation serve as a pressing caution about the potential ramifications of deploying personalized generative AI without appropriate safeguards.Footnote 73
Thus, the evolution from a unified public sphere—once the hallmark of mass media—to a splintered one through social media, and now potentially to an even more atomized one via generative AI, signals a worrying transformation in the democratic landscape. This shift poses critical challenges for the formation of a cohesive public opinion, a cornerstone of democratic theory and practice.Footnote 74
3. Informational Capitalism
As we examine how digital media platforms operate as a distinct form of media institution, we must confront informational capitalism as the third major challenge they present, alongside the bypass effect and echo chambers. This challenge connects directly to what Zuboff identifies as surveillance capitalism—a system that monetizes personal information through advertising and requires massive data collection and processing.Footnote 75 This business model transforms the basic relationships between individuals, private companies, and the state, raising fundamental questions about privacy, fairness, and democratic accountability.Footnote 76
While GenAI is still in its early stages, the corporations developing these systems are likely to face similar challenges stemming from the logic of informational capitalism. First, GenAI exhibits an even more pronounced data maximization imperative compared to social media platforms.Footnote 77 The training of their models and their continuous improvement require vast quantities of data, creating strong incentives for these companies to gather information and maintain secrecy regarding their data sources.Footnote 78 This dynamic is already discernible in the way GenAI corporations such as OpenAI, Google, and Anthropic obfuscate the origins of their training dataFootnote 79 and employ permissive user data collection and usage policies.Footnote 80 The relentless drive for data maximization stands in stark contrast to users’ privacy and data protection interests, and has already embroiled GenAI providers in legal controversies.Footnote 81
Second, GenAI platforms may ultimately embrace the logic of surveillance capitalism, depending on their chosen business model. We can already envision how targeted advertising could seamlessly integrate into chatbot interactions—imagine ChatGPT offering sponsored links to noodle makers or local Sichuan pepper producers when discussing Dandan noodles.Footnote 82 While most chatbot providers currently use freemium subscription models that don’t require constant data collection, the long-term viability of this approach remains uncertain.Footnote 83 Though the choice of subscription revenue shows an awareness of advertising’s problems,Footnote 84 the promise of personalized ad revenue may prove too tempting to resist. This could lead these platforms to adopt the same engagement-maximizing incentives we’ve seen in social media, potentially promoting addictive design patterns that undermine individual autonomy and well-being.Footnote 85
Finally, as generative AI companies do not fully internalize the social costs of their activities, they may be inclined to minimize oversight and accountability measures that do not directly enhance their profits.Footnote 86 This could result in a prioritization of commercially viable AI models at the expense of long-term ethical considerations, as exemplified by the NYT v. OpenAI lawsuit, which highlights the inherent conflict of interest that exists even at the training stage of GenAI.Footnote 87 Cybersecurity measures may be inadequate if not directly profit-enhancing, jeopardizing user data integrity and privacy.Footnote 88 Efforts to address biases in AI systems, crucial for ensuring fairness and non-discrimination, might be underemphasized unless they align with financial objectives.Footnote 89 Transparency and accountability mechanisms could also be compromised in the absence of direct financial incentives, undermining public trust and hindering democratic oversight.Footnote 90
The challenges posed by informational capitalism in the context of digital media platforms, including GenAI, are deeply interconnected with the issues of the bypass effect and echo chambers discussed earlier. The business models and incentives driving these platforms shape not only their data collection practices and content moderation policies but also the very architecture of the digital public sphere.Footnote 91 As we grapple with the societal implications of these technologies, it is crucial to recognize the complex interplay between the economic imperatives of informational capitalism, the fragmentation of the public sphere, and the transformative impact on democratic discourse.Footnote 92
II. AI Regulation and Media Harms
1. EU AI Act
The EU’s AI Act exemplifies what I call a horizontal approach to regulation, an ambitious attempt to govern all AI systems through a single, unified framework organized around risk categories.Footnote 93 While this comprehensive strategy aims to create regulatory harmony across the EU, it struggles to keep pace with the rapid evolution of generative AI and its diverse applications.Footnote 94 The Act relies heavily on regulatory sandboxes to foster innovation while asserting broad extraterritorial reach: it applies to any AI system used in the EU, regardless of where it was developed or trained.Footnote 95 Yet this horizontal approach reveals significant limitations. Most critically, it stumbles in applying broad regulatory principles to swiftly evolving technologies.Footnote 96 The governance challenges posed by platform-specific harms risk being lost in this sweeping framework.Footnote 97
The EU’s risk-based approach sorts AI systems into distinct categories based on their potential for harm.Footnote 98 This “ladder approach” creates a hierarchy of regulatory obligations.Footnote 99 Systems posing unacceptable risks to fundamental rights face outright prohibition.Footnote 100 High-risk systems must meet strict certification requirements.Footnote 101 Low-risk systems face minimal oversight.Footnote 102 Systems falling between these categories must meet transparency obligations.Footnote 103 Notably, the final text introduces a new “systemic risk” category for general-purpose AI models that could significantly affect the market.Footnote 104
The Act seeks to regulate AI systems throughout their entire lifecycle, from data collection through deployment, addressing everything from data quality to continuous risk management.Footnote 105 It pays particular attention to general-purpose AI models, recognizing their unique regulatory challenges.Footnote 106 These models are defined by their broad capabilities across diverse tasks and potential for integration into various downstream applications.Footnote 107
The EU grounds this framework in technological neutrality, focusing on AI’s uses rather than its technical specifics. This risk-based categorization aims to future-proof the law, providing regulatory certainty through clear, unified rules.Footnote 108 Yet this horizontal approach faces serious limitations.Footnote 109 The challenge of applying broad frameworks to rapidly evolving technologies becomes particularly acute with generative AI.Footnote 110 Recent amendments addressing general-purpose AI models highlight the difficulty of creating stable, comprehensive regulation in this dynamic field.Footnote 111
The Act’s risk management model faces further criticism. While it aims to promote “trustworthy AI” through risk acceptability standards, critics argue this effectively outsources crucial decisions to AI providers with inherent conflicts of interest.Footnote 112 The Act assumes clear distinctions between acceptable and unacceptable risks—an assumption that may prove unrealistic given AI’s complexity.Footnote 113
The framework’s inflexibility poses additional problems.Footnote 114 Risk classifications will require constant revision as technology evolves.Footnote 115 The Act’s broad definitions of AI and autonomy risk capturing irrelevant technologies.Footnote 116 Heavy compliance costs and legal uncertainty could accelerate market concentration, while complex requirements may stifle innovation, particularly for smaller firms.Footnote 117
2. The EU AI Act and Media Harms
These broad structural limitations become particularly acute when we examine the Act’s treatment of platform governance and digital media. As I will show, the Act’s horizontal approach reveals fundamental weaknesses in addressing the specific challenges that algorithmic systems pose to democratic discourse and public communication.
Consider three specific failures that flow directly from this structural mismatch. First, the Act’s focus on individual AI systems blinds it to the bypass effect—the way platforms systematically evade democratic accountability.Footnote 118 While the Act prohibits certain “unacceptable-risk” AI systems that enable subliminal manipulation, it demonstrates a fundamental inadequacy in addressing the more nuanced mechanisms through which digital media platforms—including generative AI—shape public discourse.Footnote 119 The bypass effect, through which these platforms evade traditional democratic accountability mechanisms, remains unaddressed within the Act’s regulatory framework. Moreover, the absence of specific content moderation obligations within its high-risk AI requirements leaves the governance of algorithmic content curation largely unregulated.
Second, the Act’s technological neutrality principle prevents it from meaningfully addressing echo chambers and information fragmentation. Despite substantial empirical evidence documenting how algorithmic systems fragment public discourse through personalization and targeting, the Act maintains a conspicuous silence on AI-powered targeted advertising and profiling.Footnote 120 The absence of transparency obligations for AI-based content personalization, coupled with insufficient provisions addressing generative AI’s potential to amplify these dynamics, represents a significant oversight in the regulatory architecture.Footnote 121
Third, the Act’s approach to data governance fails to confront the power dynamics of informational capitalism.Footnote 122 While it establishes provisions regarding generative AI training data, these requirements lack meaningful transparency regarding data collection practices, user consent, and the scale of data exploitation.Footnote 123 The limited data transparency requirements and absence of explicit GDPR compliance obligations for generative AI interfaces leave the fundamental structures of algorithmic control unchallenged.
These failures demonstrate why horizontal, technology-neutral regulation cannot effectively govern algorithmic media systems. This analysis indicates that a more targeted regulatory approach, focused specifically on digital media platforms and their documented algorithmic harms, might prove more effective than the current comprehensive framework.Footnote 124 Such an approach would enable more precise governance of generative AI’s role in our information ecosystem while maintaining regulatory agnosticism toward potential applications in less understood domains. China’s regulatory response to generative AI, to which we now turn, offers an instructive contrast, demonstrating both the potential benefits and limitations of a more targeted, domain-specific approach.
C. Chinese AI Regulation
China’s approach to AI regulation offers a striking contrast to the EU model, embodying a state-led framework driven by techno-nationalist ambitions and an emphasis on social stability.Footnote 125 This regulatory strategy simultaneously promotes AI development while tightly controlling its effects on public discourse.Footnote 126 China’s ambition to achieve global AI leadership by 2030 shapes its regulatory choices, reflecting a view of AI as essential to both economic growth and national security.Footnote 127 The result is a distinctive regulatory model that encourages innovation while maintaining strict oversight of AI’s role in the public sphere.Footnote 128
What distinguishes China’s AI governance is its vertical approach to regulation, targeting specific AI applications and risks rather than attempting to create comprehensive rules.Footnote 129 Unlike the EU’s horizontal framework, China’s strategy enables more precise, nimble interventions in response to emerging technologies.Footnote 130 This adaptability proves particularly valuable in addressing the evolving challenges posed by algorithmic systems in our information ecosystem.Footnote 131 When new issues arise, regulators can amend specific provisions without overhauling entire regulatory frameworks.
The speed of China’s regulatory response to technological change demonstrates this advantage.Footnote 132 While the EU spent years finalizing its AI Act, China has rapidly introduced and refined measures through public consultation.Footnote 133 This agility allows regulators to address emerging risks more effectively, particularly regarding algorithmic mediation of public discourse.Footnote 134
The Cyberspace Administration of China (CAC) stands at the center of this regulatory framework.Footnote 135 The CAC’s dominance in AI regulation flows from its ability to set the agenda and bring issues before the CCP Central Committee.Footnote 136 Once the Committee approves regulation, the CAC drafts the rules, drawing on expertise from think tanks affiliated with the Ministry of Industry and Information Technology and the Ministry of Science and Technology.Footnote 137 By bringing other ministries in as co-signatories, the CAC creates bureaucratic buy-in that enhances enforcement.Footnote 138
The CAC’s focus on controlling online content makes it a natural leader in early AI regulation.Footnote 139 Yet questions remain about whether it will maintain this position as AI governance expands to domains like autonomous vehicles or financial technology.Footnote 140 While the Ministry of Science and Technology played a significant role in early policies like the 2017 AI Plan and established high-level principles for AI ethics, it has stepped back from more targeted regulation.Footnote 141
China’s targeted approach to platform governance emerged clearly in its 2021 Provisions on Management of Algorithmic Recommendations.Footnote 142 These rules directly addressed the Party’s concerns about algorithms undermining its control over public discourse—what we identified earlier as the bypass effect.Footnote 143 The regulations require algorithmic recommendation providers to “uphold mainstream value orientations” and “actively transmit positive energy.”Footnote 144 They mandate platform intervention in trending topics to align with government priorities and grant users rights like disabling algorithmic recommendations.Footnote 145 Providers with “public opinion properties or social mobilization capacity” must register their algorithms.Footnote 146 These measures created a template for future algorithmic regulation.
The regulation of synthetic media followed a similar pattern. Between 2017 and 2019, the Party identified deepfakes as a threat to its information environment.Footnote 147 The resulting Provisions on Deep Synthesis Internet Information Services imposed content review obligations and labeling requirements for AI-generated content.Footnote 148 The regulations adopted Tencent’s term “deep synthesis” over the politically charged “deepfakes,” covering most forms of generative AI including image, video, voice, and text generation.Footnote 149
China’s approach incorporates technical standards and multi-tiered obligations for different actors in the AI ecosystem.Footnote 150 The National Technical Committee 260 develops standards for content labeling and security.Footnote 151 AI service providers face obligations as content producers, responsible for ensuring legal compliance, accuracy, personal information protection, and preventing misinformation.Footnote 152
The regulatory framework for generative AI in China reached a new milestone with the July 2023 Interim Measures for the Management of Generative AI Services,Footnote 153 which became the world’s first binding instrument specifically targeting generative AI systems.Footnote 154 These measures addressed the unique challenges posed by large language models like ChatGPT, establishing a dual-obligation system that treats generative AI service providers simultaneously as content producers and technical service providers.Footnote 155 This approach reflects China’s recognition that generative AI blurs traditional distinctions between content creation and service provision, requiring providers to both label AI-generated content and assume responsibility for its compliance with content security requirements. The Measures specifically target public-facing generative AI services while excluding research and industrial applications, illustrating China’s balancing act between controlling information security risks and promoting technological development.Footnote 156
China’s vertical and iterative regulatory strategy for AI is further exemplified by plans to develop a comprehensive AI Law, which was added to the State Council’s Legislative Plan in 2023.Footnote 157 This approach allows Chinese regulators to respond rapidly to emerging technologies while testing specific regulatory mechanisms before incorporating them into broader frameworks.Footnote 158 As noted above, the CAC remains the central regulatory authority, drawing expertise from various ministries to create bureaucratic buy-in that enhances enforcement.Footnote 159 Though criticized for potentially imposing excessive burdens on service providers, this regulatory model represents China’s strategic choice to decelerate AI development in service of political stability while maintaining technological advancement—an approach that, while driven by domestic concerns, may inadvertently influence global AI governance by mitigating an international “race to the bottom” in AI development.Footnote 160
This vertical and iterative regulatory strategy allows China to refine its approach as technologies evolve. Unlike the EU’s comprehensive framework, China targets specific applications and revises regulations when gaps emerge. While this creates compliance challenges, Chinese regulators view this as an acceptable trade-off in managing a rapidly changing technology.Footnote 161
Interestingly, these targeted regulations now appear to be laying groundwork for broader legislation. In June 2023, China’s State Council announced plans to draft a national Artificial Intelligence Law.Footnote 162 This suggests how targeted regulation might provide a foundation for more comprehensive frameworks, offering an alternative path to the EU’s top-down approach.
I. Chinese AI Regulations and Media Harms
When we examine China’s regulatory framework through the lens of platform governance, we see a strikingly different approach to addressing algorithmic harms in the public sphere. Unlike the EU’s sweeping framework, China effectively separates the governance of media-specific harms from broader AI development.Footnote 163 This separation proves instructive, not because of China’s particular goals for information control, but because it demonstrates how targeted platform regulation can address specific democratic challenges without requiring comprehensive governance of all AI applications.
China’s tailored regulations directly confront the three public discourse challenges we identified earlier. First, regarding the bypass effect, China’s framework explicitly addresses how algorithmic content dissemination circumvents traditional gatekeepers. The Algorithmic Recommendations provisions establish platforms as new, regulated gatekeepers by requiring intervention in content recommendations.Footnote 164 Similarly, the Deep Synthesis regulations impose specific content review obligations, treating AI service providers as content producers responsible for their algorithmic outputs.Footnote 165
Second, China tackles echo chambers through concrete mechanisms rather than general principles. For example, the Algorithmic Recommendations provisions grant users meaningful control over algorithmic recommendations, including the ability to disable them entirely.Footnote 166 These measures are complemented by the Deep Synthesis and Generative AI regulations, which require labeling of synthetic content and impose “content producer” responsibilities on AI providers.Footnote 167 While the specific content interventions mandated may serve information control ends—such as requiring adherence to “correct political direction”—the regulatory structure itself recognizes and addresses how algorithmic curation shapes public discourse.
Third, the framework confronts informational capitalism’s challenges through targeted interventions. The Chinese regulatory approach addresses algorithmic governance problems by establishing public accountability mechanisms. Requirements for algorithm registration, training data standards, and content labeling create transparency obligations that make visible the previously hidden processes of algorithmic influence. These provisions directly counter surveillance capitalism by exposing the algorithmic systems that quietly extract behavioral data and predict user actions for profit.Footnote 168 The regulations treat algorithmic systems with “public opinion properties” as digital infrastructure requiring oversight, rather than purely private technological tools.Footnote 169 Content security obligations shift responsibility onto platform operators, recognizing their role as intermediaries who shape the informational environment through their data collection practices and algorithmic curation decisions.Footnote 170 This accountability disrupts the surveillance capitalist model by making companies answerable for how they deploy personal data to influence user behavior. By prohibiting algorithmic price discrimination and anti-competitive practices,Footnote 171 the framework acknowledges how algorithmic systems can reinforce existing power asymmetries in the digital economy and limits how platforms can monetize behavioral prediction to manipulate consumers. These interventions represent an attempt at making algorithmic power subject to public oversight.
Crucially, while implementing these controls on public-facing content, China maintains regulatory agnosticism toward AI development in other domains. Non-public applications in industry, research, and education largely fall outside the scope of these measures.Footnote 172 The Measures specifically exclude from regulatory oversight research, enterprise, and industrial applications that do not directly serve the public.Footnote 173 This narrower regulatory scope aims to promote broader technology adoption and development in non-public sectors. This targeted approach avoids overburdening AI development broadly while focusing regulatory attention on documented platform harms in the areas where public interests and security face the greatest risks.
This separation proves effective because it aligns regulatory tools with specific challenges. The vertical and iterative approach allows for rapid adaptation as technologies evolve, while avoiding the inflexibility of comprehensive frameworks. By treating AI service providers as digital intermediaries with constitutional responsibilities, the regulations establish clear accountability for algorithmic harms in the public sphere. This bifurcated approach represents a conscious balancing act between the state’s information control imperatives and its technological development strategy aimed at achieving global AI leadership.Footnote 174 The regulatory leniency toward non-public applications may provide Chinese AI companies with competitive advantages over their Western counterparts in industrial domains while still maintaining targeted control over public-facing applications that might influence democratic values or create societal risks.
D. Conclusion
The generative AI media harms case study offers a powerful lens through which to examine the broader challenges of AI governance in the algorithmic age. By tracing the ways in which AI-powered content generation and curation technologies are transforming the digital public sphere, this analysis illuminates the complex interplay of technical, economic, and political forces that are shaping the trajectory of AI development and deployment. At the same time, it underscores the urgent need for governance frameworks that can effectively address the distinctive risks and harms posed by these technologies to democratic discourse and public deliberation.
A comparative analysis of the European Union and China’s divergent approaches to AI regulation yields critical insights for the path forward. The EU’s experience with the AI Act demonstrates the limitations of broad, horizontal governance frameworks that seek to regulate all AI applications through a single, comprehensive risk-based scheme. While admirably ambitious in scope, the AI Act struggles to contend with the rapid pace of technological change and the contextual specificity of AI’s societal impacts. Its reliance on rigid risk categorization and one-size-fits-all rules renders it ill-equipped to address the particular challenges that generative AI poses to public discourse, from the erosion of shared epistemic foundations to the entrenchment of echo chambers and the concentration of informational power.
China’s regulatory approach, by contrast, offers a compelling model of targeted, domain-specific governance that prioritizes agility and iterative adaptation over exhaustive scope. By focusing narrowly on the concrete harms of algorithmic content generation and curation in the digital media context, China’s regulatory interventions, from the Deep Synthesis Provisions to the Generative AI Services Measures, demonstrate the potential of a more surgical, dynamic approach to AI governance. Crucially, this strategy enables regulators to take decisive action against well-documented dangers to the public sphere while maintaining a stance of principled agnosticism toward the still-uncertain long-term trajectory of AI development in other domains.
The targeted interventions examined in this case study suggest that effective regulation of algorithmic harms in the public sphere need not depend on comprehensive AI governance frameworks. While China’s specific objectives around information control and social stability may not align with democratic values, the structural features of its approach warrant careful attention. The separation between media regulation and general AI development offers several advantages.
First, this model enables regulators to address known platform harms without waiting for consensus on broader AI governance. China’s targeted regulations tackle specific challenges like algorithmic content amplification and synthetic media while maintaining space for AI development in other domains. This avoids the paralysis that can result from attempting to create all-encompassing frameworks.
Second, the vertical approach matches regulatory tools to concrete problems rather than imposing uniform requirements across different contexts. Requirements for algorithm registration, content labeling, and user controls directly address how platforms shape public discourse. This specificity contrasts sharply with the EU’s risk categorization system, which may miss the nuanced ways algorithmic systems influence democratic communication.
Third, maintaining regulatory agnosticism toward non-public AI applications prevents overburdening beneficial innovation while still addressing documented harms. This balance proves particularly important given the rapid evolution of AI capabilities. By focusing oversight on public-facing content and platform governance, regulators can better calibrate interventions to actual rather than hypothetical risks.
The effectiveness of this separation demonstrates that societies can address algorithmic challenges to democratic discourse without adopting comprehensive AI governance frameworks prematurely. While China deploys these regulatory tools to bolster information control, the structural features of its approach—targeted platform regulation, concrete obligations for algorithmic systems shaping public discourse, and regulatory agnosticism toward other applications—offer important lessons for democratic societies grappling with these same challenges.
Taken together, the insights from the case study suggest that the path forward for AI governance is one of principled pragmatism, a middle course that balances targeted interventions to mitigate AI’s known harms with a healthy humility about the technology’s long-term societal implications. Policymakers should resist the temptation of premature or overly broad regulation, focusing instead on developing domain-specific governance frameworks through participatory, multistakeholder processes. This approach would enable more effective tailoring of regulatory interventions to the distinctive affordances and risks of AI systems in particular social contexts, from the public sphere to healthcare, criminal justice, and beyond.
At the same time, a domain-specific approach must remain grounded in a set of overarching democratic principles and values. These include a commitment to human rights and fundamental freedoms, an emphasis on transparency and accountability, and a recognition of the vital importance of public participation and deliberation in shaping the development and deployment of transformative technologies. By anchoring AI governance in these core values, we can work to ensure that the algorithmic revolution ultimately serves to empower rather than undermine democratic self-determination.
As the generative AI revolution accelerates, the case study examined in this article offers a stark reminder of both the stakes and the urgency of this governance challenge. The decisions we make in the coming years about how to regulate these technologies will shape the future of the digital public sphere—and with it, the very possibility of democratic self-governance in the algorithmic age. We must rise to this challenge with both boldness and humility, coupling decisive action to defend democracy against the dangers of unaccountable algorithmic power with a spirit of open-minded agnosticism about the transformative potential of artificial intelligence. Only by striking this balance can we hope to chart a course toward an AI-enabled future that remains accountable to the will of the people and resilient in the face of technological disruption. The road ahead is uncertain, but the destination is clear: an algorithmic society in which the power of artificial intelligence is harnessed not for domination or control, but for the flourishing of the human spirit and the vitality of democratic life.
Acknowledgements
The author thanks the editors of the German Law Journal for their dedicated and serious work.
Competing Interests
The author declares none.
Funding Statement
No specific funding has been declared in relation to this Article.