A. Introduction
We stand at a curious moment in the history of law and technology. Nations around the world are scrambling to regulate or deregulate artificial intelligence, each convinced they are in a “race”—for dominance, for values, for the future itself. Brussels votes on comprehensive AI Acts. Beijing issues the world’s first copyright ruling on AI-generated content. Washington debates whether chatbots should have First Amendment rights. The underlying premise of this volume is that this framing as a zero-sum competition fundamentally misunderstands both the nature of AI and the task before us.
The truth is more sobering and more hopeful: We are not racing against each other but experimenting together, trying to govern technologies that respect neither borders nor traditional legal categories. The real question is not who will “win” the AI race, but how we can learn from each other’s experiments fast enough to keep pace with systems that evolve by the microsecond.
This special volume of the German Law Journal brings together fifteen contributions that demonstrate why comparative law has never been more essential—or more challenging. The authors span continents and legal traditions, from Beijing to Brussels, from Silicon Valley to Sydney. What unites them is a shared recognition: AI’s challenges are genuinely novel, and no single jurisdiction has all the answers. Progress requires not just expanding our regulation but also expanding our imagination; not just rules but ongoing dialogue across legal systems.
B. The Comparative Imperative
What makes comparative law urgent in the AI context is not just intellectual curiosity but practical necessity. AI systems train on global data, operate across borders, and create effects that ripple through interconnected economies. A large language model trained in California shapes discourse in Cologne. An algorithmic pricing system tested in Shenzhen changes market dynamics in São Paulo. We govern globally networked systems with territorially bounded law—a mismatch that only comparative dialogue can begin to address.
Moreover, the sheer novelty of AI’s challenges means no legal system can afford insularity. When Christoph Busch analyzes how “custobots” break European consumer law’s assumptions about human agency, he is identifying a problem every jurisdiction will face.Footnote 1 When Weiwei Shen proposes a “reasonable robot” standard for accident liability in China, he is contributing to a global conversation about legal personhood and responsibility. These are not just national solutions but experiments that inform everyone’s regulatory future.Footnote 2
The contributions here embody different modes of comparative engagement. Some, like Tania Sourdin and Bin Li, explicitly compare governance approaches across jurisdictions, mapping how different countries layer external law with internal controls.Footnote 3 Others, like Jiyu Zhang and Xinmeng Li on fair use in AI training, work within one legal system while consciously contributing to global debates.Footnote 4 Still others, like Danny Friedmann’s analysis of “generative search,” examine phenomena that inherently cross borders, making comparison unavoidable.Footnote 5
What emerges is not convergence on a single model but productive divergence—different systems experimenting with different approaches to shared problems. The EU’s rights-based framework, China’s stability-focused approach, and the market-oriented solutions of the United States—each illuminates different facets of the governance challenge. These diverse perspectives came together at the Comparative AI Law Conference hosted at Peking University School of Transnational Law, Shenzhen in September 2024, a gathering that both produced this special volume and demonstrated how Comparative AI Law as a distinct field of inquiry is fruitful and necessary to meet common challenges. Our task is not to choose among them but to learn from all of them.
The contributions to this volume cluster around four interconnected themes—Adaptive Governance, Power and Markets, Rights and Remedies, and Concepts and Categories. Collectively, these contributions reflect the ways in which AI unsettles foundational legal concepts and compels a reassessment of established regulatory frameworks. Rather than merely introducing novel questions, AI disrupts the normative and conceptual architectures of legal systems, exposing tensions between technological affordances and traditional doctrines.
C. Adaptive Governance
Traditional regulation assumes relatively stable targets. You identify a harm, craft a rule, enforce compliance. But AI systems evolve continuously, through retraining, fine-tuning, and emergent behaviors even their creators do not anticipate. Several contributions explore how governance itself must become more adaptive.
Gilad Abiri argues that horizontal, one-size-fits-all AI laws fail because they either overregulate benign uses or miss actual harms. Instead, he proposes a flexible baseline paired with domain-specific rules targeting particular risks like disinformation in social media or bias in hiring algorithms. His analysis shows how different sectors need different regulatory intensities.Footnote 6
Tania Sourdin and Bin Li go further, mapping multi-layered governance systems across jurisdictions. They show how effective oversight combines external regulation with internal audits, AI literacy training, stakeholder panels, and contractual safeguards. Their comparative analysis reveals that countries succeeding with AI governance create entire ecosystems of accountability rather than relying on law alone.Footnote 7
Jianfeng Cao’s exploration of “value alignment as governance” represents the most radical rethinking. Rather than treating alignment as a technical problem, Cao demonstrates how China is embedding alignment throughout its regulatory stack—from constitutional principles to industry standards to specific testing protocols. He shows how different alignment techniques—RLHF versus constitutional AI—embed different values, making alignment itself a form of governance.Footnote 8
Xin Dai’s article exemplifies adaptive governance through his analysis of the EU AI Act’s social scoring ban. Rather than accepting the ban at face value, Dai dissects its over- and under-inclusiveness, showing how it catches beneficial uses while missing real harms. He proposes alternative safeguards—transparency requirements, due process rights, and sector-specific caps—that could achieve the ban’s goals more effectively. His work demonstrates how comparative analysis can improve even well-intentioned regulations.Footnote 9
D. Power and Markets
AI does not just create new products; it fundamentally restructures economic power. Several contributions examine how platforms use AI to accumulate and exercise new forms of market control.
Sang Yop Kang reveals how AI-driven dynamic pricing allows platforms to extract essentially all consumer surplus. By personalizing prices based on each user’s maximum willingness to pay, platforms convert consumer data into perfect price discrimination. Kang traces how this transforms markets from discovery mechanisms into extraction machines, proposing remedies from consent-based data rules to tax-style redistribution of algorithmic rents.Footnote 10
Ling Hu’s analysis goes further, arguing that digital platforms have evolved into “productive organizations” that blur the line between markets and firms. Through AI-powered matching, recommendation, and resource allocation, platforms do not just facilitate transactions—they manage entire economic ecosystems. Hu shows how this requires rethinking competition law, labor law, and data protection, all of which assume clear organizational boundaries that platforms systematically dissolve.Footnote 11
Danny Friedmann investigates a different power dynamic: How “generative search” makes entire industries disappear. When AI systems ingest content to answer queries directly, they do not compete with publishers—they eliminate the need for users to visit sources at all. Friedmann documents this “great disappearance act” and its companion phenomenon, algorithmic shadow-banning, which silences disfavored voices. He proposes licensing schemes, transparency mandates, and user remedies to preserve diverse information ecosystems.Footnote 12
These analyses reveal AI’s role in concentrating economic power in new ways. Unlike traditional monopolies that control production or distribution, AI-powered platforms control the interfaces through which economic life occurs. The contributions suggest this requires not just traditional antitrust but new forms of structural regulation.
E. Rights and Remedies
How do fundamental rights operate when machines make decisions about humans? Several contributions explore this fraught terrain.
The copyright analyses reveal deep tensions in applying traditional rights to AI. Jiyu Zhang and Xinmeng Li argue that AI training constitutes “non-expressive use” that should qualify as fair use under Chinese law. They demonstrate how treating training as infringement would create “licensing market failure,” blocking socially beneficial AI development.Footnote 13 But Xin Zhang, Jinghe Fan, and Danyang Guo, analyzing China’s first Artificial Intelligence Generated Content (AIGC) copyright case, map a more complex value chain requiring nuanced balancing. They propose expanding fair use while creating “whitelist” mechanisms to protect creators’ interests.Footnote 14
Emanuel V. Towfigh examines how generative AI creates what he calls “moving target” discrimination—bias that shifts faster than anti-discrimination law can respond. Current EU equality frameworks assume relatively stable protected categories and discrimination patterns. But AI systems can develop novel forms of bias through their training, creating discrimination along dimensions law does not recognize. Towfigh proposes dynamic alignment requirements and refreshed liability doctrines that can keep pace with evolving AI behaviors.Footnote 15
Yilu Zuo’s analysis of Chinese content moderation complicates simplistic narratives about AI and rights. Rather than just enhancing state control, AI creates a complex triangle of state, platform, and citizen, with new spaces for negotiation and resistance. Zuo shows how China’s “no-dispute” policy and medium-based approach create different affordances for expression than Western models assume. This comparative perspective reveals how rights operate differently in different AI governance regimes.Footnote 16
Ray Campbell offers a pragmatic vision: Using AI-powered paraprofessionals to close America’s access-to-justice gap. By combining non-lawyer expertise with narrow-domain AI, Campbell argues we could finally provide legal services to the millions who cannot afford lawyers. His proposal to license competence rather than credentials shows how AI might expand rather than restrict rights—if we are willing to rethink professional monopolies.Footnote 17
F. Concepts and Categories
Perhaps AI’s deepest challenge is how it “breaks” the categories law uses to organize itself. Several contributions explore these categorical disruptions.
Christoph Busch’s “custobots” exemplify the problem. When AI agents shop on our behalf, conducting thousands of micro-negotiations per second, who is the consumer? What is consent? How do mistake, fraud, and manipulation operate between machines? Busch proposes an “average custobot” standard and new protection mechanisms, but his analysis reveals how thoroughly AI scrambles consumer law’s basic concepts.Footnote 18
Elisabeth Paar’s examination of AI in courts shows how “human-in-the-loop” rhetoric obscures actual practice. Judges using AI decision support systems often cannot understand or explain the AI’s reasoning, reducing human oversight to rubber-stamping. Paar proposes bright-line divisions between tasks for judges and tasks for machines, but her analysis reveals the difficulty of maintaining meaningful human control over algorithmic justice.Footnote 19
Weiwei Shen’s work on humanoid robot accidents pushes furthest in rethinking legal categories. Shen proposes a “reasonable robot” standard for tort liability, manufacturer-centric strict liability for AI decisions, and even symbolic “re-training” of robots as a form of corrective justice. These proposals show how thoroughly AI forces us to reconsider basic concepts like reasonableness, causation, and remedy when applied to non-human agents.Footnote 20
These categorical challenges are not just theoretical—they determine whether existing law can govern AI at all. The contributions suggest we need not just new rules but new conceptual frameworks adequate to artificial agents.
G. Conclusion: From Race to Dialogue
The metaphor of an “AI race” implies a finish line, winners and losers, a moment when one approach proves supreme. This fundamentally misunderstands our situation. We are not racing but exploring unmapped territory, where every step forward reveals new challenges requiring collective wisdom.
The contributions to this volume demonstrate what this exploration requires: technical sophistication without technological determinism, regulatory creativity without regulatory capture, protection of human values without pretending humans are the only agents that matter anymore. Above all, they show why comparative law is not a luxury but a necessity—our best tool for learning fast enough to govern systems that evolve faster than any single legal system can adapt.
Each experiment described here—China’s value alignment requirements, Europe’s custobot protections, and the paraprofessionals of the United States—offers lessons for all of us. Some will fail, teaching us what does not work. Others will succeed in unexpected ways, revealing new possibilities. The key is maintaining open channels for this knowledge to flow across borders and legal traditions.
These contributions suggest that while each jurisdiction must craft AI governance suited to its own legal culture and values, none can afford to ignore experiments elsewhere. The bridges built through comparative scholarship serve a practical function: They reduce the cost of learning from others’ successes and failures. As algorithms trained in one jurisdiction shape outcomes in another, platforms operating globally create local harms, and technical standards set anywhere affect systems everywhere, understanding diverse regulatory approaches becomes a pragmatic necessity. We need not converge on a single model, but we do need to comprehend the range of possibilities. In that spirit, we invite readers to engage with these contributions not as foreign curiosities but as part of our common project: Ensuring that artificial intelligence serves human flourishing, wherever humans may be.
***
Acknowledgements
The editors of this Special Issue would like to thank Dean Mark Feldman for generously hosting the conference at Peking University School of Transnational Law in Shenzhen and the large and enthusiastic crew at STL for ensuring that everything ran smoothly.
Furthermore, we would like to warmly thank Julia Schönemann—not only for playing an essential part in organizing the conference at STL but also for graciously coordinating the production of this Special Issue.
Special thanks go to Klaas Eller, Co-Editor-in-Chief of the German Law Journal, who, as the editor responsible for this Special Issue, provided excellent guidance to us and went above and beyond in his support.
We also thank Emma Gilliam, Lead Executive Editor at Washington & Lee Law School, and, through her, all the student editors, for their thoughtful and diligent work on this Special Issue. The scale of this collection placed a significant burden on the entire editorial team, and we are deeply grateful for their extraordinary commitment. They went the extra mile to bring this Special Issue to life.
Last but not least, we would like to thank the German Law Journal for giving us the opportunity to present the topics and discourses of our conference in a Special Issue.
Competing Interests
The authors declare none.
Funding Statement
No specific funding has been declared for this article. Open access funding provided by Max Planck Society.