A. Introduction
Throughout history, as technological advancements have reshaped society, legal systems, particularly the law of accidents,Footnote 1 have been forced to evolve in order to address the new challenges these innovations introduce. From the invention of the automobile to the development of medical devices, the law has adapted to ensure that individuals and entities are held accountable for the harms caused by emerging technologies. These adaptations often involve reevaluating existing liability mechanisms and introducing new legal apparatuses to better fit the unique circumstances presented by novel technologies.
Humanoid robots represent the latest frontier in this ongoing process of legal adaptation. Designed to mimic human behavior, appearance, and even cognitive functions, humanoid robots are increasingly integrated into sectors ranging from healthcare and customer service to manufacturing and home assistance. Their potential to improve productivity, enhance elderly care, and create unprecedented opportunities is immense. However, they also introduce new categories of risk, as injuries caused by humanoid robots present unique challenges that current legal frameworks are ill-equipped to address.
The complexity of humanoid robot-related accidents surpasses that of previous technological innovations along three key dimensions: humanoid simulation, artificial intelligence, and hybrid control. As a result, assigning liability for accidents that occur due to robot malfunctions or erratic behavior becomes a far more complicated task. In cases of humanoid robot accidents, questions arise not only about who is liable—the manufacturer, the programmer, the owner, or the user—but also about how to assess fault when humanoid robots’ actions are controlled by both humans and algorithms. As humanoid robots become further integrated into society, China’s current accident law system is not yet fully equipped to address the unique risks posed by these technologies. In light of this, a new liability mechanism is needed to fill these gaps.
This Article critically examines these concerns by analyzing the current gaps in Chinese accident law and proposing a new liability mechanism tailored to the challenges posed by humanoid robot accidents.Footnote 2 By drawing on lessons from previous technological revolutions, such as those brought about by automobiles, this Article will explore how liability mechanisms can be adapted and restructured to ensure that victims of humanoid robot accidents receive appropriate compensation, while also encouraging innovation in the field of humanoid robotics. In doing so, it will offer a pathway for creating a forward-looking legal regime that not only responds to current challenges but is also flexible enough to accommodate future advancements in this rapidly evolving field.
This Article proceeds by first examining the technological characteristics of humanoid robots that pose unique challenges to existing legal frameworks, specifically focusing on human-body simulation, artificial intelligence, and hybrid control. It then analyzes the limitations of China’s current legal system in addressing accidents involving humanoid robots, highlighting the inadequacies of traditional fault standards, the difficulties in identifying algorithmic malfunction, and the shortcomings of current remedies. Following this, the Article presents a typological analysis of humanoid robot accidents, categorizing them based on the source of fault—human or algorithm—to clarify the applicability of existing legal principles. Finally, the Article proposes a reconstructed liability mechanism tailored to the unique challenges of humanoid robot accidents, advocating for new approaches to evidence gathering, algorithmic malfunction regulation, liability allocation, and victim remedies. It concludes by envisioning a path towards an “accidental utopia.”
B. Technological Characteristics of Humanoid Robots and Their Influence on Liability
At the 2024 World Robot Conference in Beijing, twenty-seven humanoid robots were showcased.Footnote 3 These robots are not only enhancing their ability to perform practical tasks but are also developing the capability to forge emotional connections with humans.Footnote 4 China has solidified its role as a global leader in robotics, holding more than 190,000 active patents related to robots as of July 2024—accounting for approximately two-thirds of the world’s total.Footnote 5 In the past two years, China’s Ministry of Industry and Information Technology has issued several executive orders and guidelines to promote the deployment of humanoid robots in sectors such as intelligent manufacturing, domestic services, and hazardous environment operations.Footnote 6 As a result, humanoid robots, featuring lifelike silicone skin and human-like movements, are becoming increasingly integrated into various public and private sectors in China. Humanoid robots even took to the stage during the 2025 Spring Festival Gala, China’s most-watched television show.Footnote 7
As humanoid robots become more prevalent across various sectors, their integration raises new concerns about the potential for accidents,Footnote 8 necessitating a careful examination of three critical technological characteristics that must be addressed in any legal framework, namely, human-body simulation, artificial intelligence, and hybrid control.
I. Human-body Simulation
Human-body simulation is the defining hardware feature differentiating humanoid robots from other robots. Humanoid robots are engineered with mechanical structures that closely replicate human sensory organs and limbs, facilitating interactions and operations within human-centric environments, particularly in interacting with human tools. Their stature, often approximating the average human height, further enhances their integration into human society.
Recent advancements in humanoid robotics, exemplified by models such as Engineered Arts’ Ameca, Boston Dynamics’ Atlas, Tesla’s Optimus, Figure AI’s Figure 01, and EXRobots’ Ex Robots, highlight the rapid evolution of these technologies. Ameca, for instance, demonstrates sophisticated capabilities in facial expression and vocalization, while Atlas exhibits exceptional agility and adaptability to challenging terrains. Figure 01 showcases advanced cognitive functions, including visual perception, planning, and verbal reasoning. In some tasks, humanoid robots even outperform humans. For example, using machine vision, some medical robots can match sensor-captured images against a pathology database at a resolution and precision imperceptible to the human eye, spotting differences that even experienced specialists might miss.Footnote 9
The rapid advancement of human-body simulation presents a set of legal and ethical challenges. The capacity of these machines to mimic human appearance and behavior enhances their suitability for service roles within human-centric environments, which have been shaped by centuries of human use. First, humanoid robots, by virtue of their humanoid form, can seamlessly integrate into these environments, particularly when equipped with bipedal locomotion that allows them to navigate spaces designed for human bodies. Second, the vast majority of tools and infrastructure are designed for human use. Humanoid robots can directly utilize these tools, from simple handheld devices to complex medical equipment, enhancing their utility in a variety of tasks. Third, the anthropomorphic design of humanoid robots fosters greater social acceptance and facilitates more natural human-robot interaction. Through facial expressions, gestures, and language, these robots can better understand and respond to human needs, creating a more intuitive and empathetic user experience.Footnote 10
Even within the relatively controlled environment of industrial settings, robot accidents are not unusual.Footnote 11 As humanoid robots become more prevalent in our daily lives, extending into domains such as healthcare, hospitality, and domestic settings, the legal implications of human-robot interactions are poised to expand significantly. These interactions may give rise to a new range of legal issues at the time of accidents. More critically, the nature of these accidents transforms. The replication of human-like motion, sensory perception, and even emotional interaction not only increases the probability of physical harm, as evidenced by existing industrial robot accidents, but also introduces a new dimension of psychological harm. The heightened interaction between humans and humanoid robots can trigger a wide range of emotions, including happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, and amusement,Footnote 12 in stark contrast to the more limited interactions typical between humans and traditional robots. Studies demonstrate that individuals can form genuine attachments to these humanoid robots, experiencing feelings of companionship, empathy, and even love.Footnote 13 This emotional investment transforms the human perception of robots, from a mere tool to something akin to a social actor, a transformation intensified by the robot’s apparent autonomy driven by machine learning algorithms. In other words, human-body simulation, capable of eliciting a wide spectrum of human emotions, may create novel accident scenarios.
As a result, the liability mechanism governing humanoid robots must proactively address not only the increased frequency of accidents but also the qualitatively different and potentially more severe emotional consequences stemming from their human-like design and behavior.
II. Artificial Intelligence Footnote 14
Human-shaped machines may resemble us physically through body simulation, but true humanoid robots require artificial intelligence—the ability to think like humans. Without artificial intelligence, a machine is just a machine; even in human shape, it cannot properly be called a “robot.” It is precisely because the machine simulates human intelligence that we label it a “robot.” Powered by technologies like large language models, machine learning, and data analysis, some of today’s humanoid robots exhibit intelligence comparable to, or even exceeding, our own.
From a computer science perspective, machine learning is a central enabling technology for the forms of AI most relevant to contemporary humanoid robots. To understand why, it is instructive to examine the limitations of the counterpart of machine learning algorithms—algorithms without machine learning capabilities, referred to as “traditional algorithms.” While traditional algorithms have long been the bedrock of computer programming, providing precise and predictable outcomes, they suffer from a fundamental inflexibility. Designed to operate within strictly defined parameters, traditional algorithms struggle to adapt to novel or unexpected circumstances. This rigidity is a direct consequence of their deterministic nature, where every output is strictly determined by the input and the predefined rules of the algorithm. In contrast, machine learning algorithms, through their ability to learn from data, can adapt to changing environments and perform tasks that would be prohibitively complex for traditional programming approaches.Footnote 15
Take the navigation system of a humanoid robot as an example. Traditional algorithmic approaches to robotic navigation, while effective in highly controlled environments, face significant limitations when confronted with the complexities of real-world scenarios.Footnote 16 Consider the task of guiding a humanoid robot through a bustling city intersection. While a conventional algorithm might successfully navigate a factory floor with predictable obstacles, it would struggle to account for the dynamic and unpredictable nature of urban traffic, including pedestrians, cyclists, and vehicles. This is because traditional algorithms rely on a predefined set of rules and conditions, making them ill-equipped to handle unforeseen circumstances. Such limitations can be attributed to what is known as the “knowledge acquisition bottleneck,”Footnote 17 where the algorithm’s ability to adapt to new situations is constrained by the programmer’s ability to anticipate and code for every possible eventuality. This inherent inflexibility has hindered the widespread deployment of robots in complex, real-world environments.
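For readers unfamiliar with the technical distinction, the contrast can be made concrete with a minimal sketch. The following illustrative code is invented purely for exposition (every obstacle type, rule, and number is fabricated, and no real robot works this simply): it shows a deterministic controller that fails on any case its programmer did not anticipate, alongside a toy learning-based controller that reaches a decision for unseen cases by analogy to past data.

```python
# Schematic contrast between a rule-bound ("traditional") controller and a
# data-driven one. Every obstacle type, rule, and number here is invented
# purely for illustration; no real robot works this simply.

def traditional_controller(obstacle_type):
    """Deterministic rules: the output is fixed entirely by predefined cases."""
    rules = {"wall": "stop", "pallet": "detour_left", "human": "stop"}
    if obstacle_type not in rules:
        # The "knowledge acquisition bottleneck": any situation the
        # programmer did not anticipate simply has no answer.
        raise ValueError(f"no rule for '{obstacle_type}'")
    return rules[obstacle_type]

def learned_controller(features, training_data):
    """1-nearest-neighbor over past observations: it produces a decision for
    unseen cases by analogy, but that decision depends on the data it saw."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min(training_data,
                    key=lambda example: squared_distance(example[0], features))
    return action

# Hypothetical experience: (obstacle speed, obstacle distance) -> past action.
experience = [
    ((0.0, 1.0), "detour_left"),  # slow and close: went around it
    ((2.0, 5.0), "stop"),         # fast and far: waited
]

# A cyclist was never given an explicit rule, so the rigid controller fails...
try:
    traditional_controller("cyclist")
except ValueError:
    pass

# ...while the learned controller still reaches a decision by analogy.
decision = learned_controller((1.8, 4.0), experience)
print(decision)  # resembles the fast-and-far example, so it outputs "stop"
```

The point is not the specific technique (deployed systems use far more sophisticated learned models) but that the learned controller’s output is determined by its training data rather than by rules a court could read off the source code.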
Machine learning algorithms have fundamentally changed how humanoid robots function.Footnote 18 Unlike traditional algorithms, machine learning algorithms allow humanoid robots to learn from experience, adapt to changing circumstances, and make independent decisions.Footnote 19 In a nutshell, a machine learning algorithm is both the program itself and the programmer that constantly modifies it. Each new run of the algorithm serves as a stepping stone for the next, improving through the continuous accumulation of operational data. Most importantly, machine learning algorithms introduce uncertainty into the behavior of humanoid robots. As Ryan Calo has observed, one of the important characteristics of machine learning algorithms is that they can produce innovative and unpredictable outputs, which he calls “emergence.”Footnote 20 It is precisely through “emergence” that machine learning algorithms can help humanoid robots cope with dynamic scenarios that traditional algorithms cannot handle. Of course, “emergence” is not without its limits and costs. The black-box nature of deep learning models, particularly in unsupervised and reinforcement learning settings, makes it difficult, sometimes impossible, to predict and explain the behavior of humanoid robots.Footnote 21 This lack of transparency poses profound legal and ethical implications, as it becomes increasingly challenging to assign liability for the actions of autonomous machines.
The integration of artificial intelligence into humanoid robots introduces a new layer of complexity to traditional liability mechanisms. This complexity arises from two primary sources: the multifaceted nature of AI development and the inherent unpredictability of machine learning algorithms. On the one hand, unlike traditional machines, humanoid robots often involve multiple developers and suppliers, each contributing to the hardware, software, and algorithms that underpin the robot’s functionality.Footnote 22 This distributed development process can make it challenging to pinpoint a single entity responsible for a malfunction or accident. Beyond establishing culpability, these varied scenarios highlight the broad spectrum of applicable legal principles, such as negligence, product liability, and contractual obligations, including warranties, consumer responsibilities, and liability restrictions.Footnote 23 On the other hand, machine learning algorithms can exhibit emergent behaviors, meaning that they can produce outcomes that were not explicitly programmed. This unpredictability makes it difficult to anticipate and prevent accidents, and it can also complicate the task of assigning liability after an incident. These factors combine to create significant uncertainty regarding the allocation of liability in cases involving humanoid robots. The traditional liability mechanism, which typically relies on relatively clear causal connections between a defendant’s conduct and a resulting harm, is inadequate for addressing the complex and often unpredictable nature of humanoid robot accidents.Footnote 24
III. Hybrid Control
Despite the advancements in artificial intelligence, at this stage, humanoid robots remain subject to human control. Even in the most sophisticated of these machines, human intervention remains a critical component, from the initial programming and activation to the ultimate ability to deactivate the system. This human-in-the-loop paradigm is evident in the three-step process typically involved in human-robot interaction: instruction, reception, and execution.Footnote 25 While machine learning algorithms grant robots a degree of autonomy in carrying out tasks, humans retain the overarching authority to initiate actions, modify behaviors, and ultimately terminate operations.Footnote 26 This complex interplay between human operators and autonomous algorithms in humanoid robots is another defining characteristic of humanoid robotics.
From a legal liability perspective, the determination of causality within human-algorithm hybrid control systems presents an exponential increase in complexity.Footnote 27 Questions arise regarding whether a harmful event was caused by algorithmic failure, human error, or external factors. The intricate relationship between human input, algorithmic decision-making, and the resulting outcome makes it difficult to establish a clear causal chain. Furthermore, the emergent properties of machine learning algorithms, which can produce unexpected results, further complicate the task of assigning liability.
The unpredictability inherent in the machine learning systems being built into an increasing number of humanoid robots challenges traditional notions of foreseeability and culpability. As a result, the legal system must grapple with novel questions, such as: Who is liable when a machine learning algorithm makes a decision that results in harm? How can we determine the extent to which a human operator should be held responsible for the actions of a humanoid robot? And how can we ensure that victims of harm have adequate remedies when the causes of accidents are so difficult to understand?
To sum up, the ongoing advancement of humanoid robotics, characterized by the convergence of physical embodiments and increasingly sophisticated artificial intelligence, reflects a trajectory reminiscent of the Tyrell Corporation’s famous motto in Blade Runner: “More human than human.” However, despite these advancements, the current state of humanoid robotics still represents a transitional phase. While exhibiting increasing autonomy, these machines remain subject to a complex interplay of human control and algorithmic decision-making, thereby complicating the attribution of liability for their actions in accidents.
C. Limitations of the Traditional Legal System in China
In the digital age, humans frequently encounter errors when using algorithms. A common example is a writer losing substantial work due to a software crash, despite the existence of an auto-save function. Even if the software’s auto-save feature is flawed, lawsuits against the software company are rare. This situation highlights the limitations of tort law in addressing algorithmic failures.Footnote 28 Typically, liability serves as a secondary recourse, with contractual terms, such as those found in software user agreements, governing the primary legal relationship between the user and the service provider. In practice, courts are hesitant to recognize tort claims based solely on algorithmic errors, often requiring additional elements like harm to third parties or personal injury.Footnote 29 However, the increasing prevalence of humanoid robots presents a new challenge. As robots become more integrated into human society and take on tasks traditionally performed by humans,Footnote 30 the potential for harm resulting from algorithmic failures grows significantly. When such failures cause personal injury or property damage, courts are more likely to entertain tort claims, as contract law is insufficient to deal with these accidents.
The Chinese liability mechanism, while undergoing modernization, still grapples with the complexities of emerging technologies, particularly with regard to humanoid robots. Traditional legal concepts within Chinese civil law, such as strict liability and fault-based liability, may prove insufficient to address the nuanced issues presented by accidents involving these robots. Specifically, the unique characteristics of humanoid robots generate at least three distinct challenges for the existing Chinese liability mechanism.
I. Standard of Fault
The standard of “fault” as it applies to humanoid robots is fundamentally different from that of traditional machines. In accidents involving traditional machines to which product liability does not apply, human error is typically the primary focus, with legal doctrines like negligence and emergency defense designed to address situations in which a human operator has made a mistake.Footnote 31
Unlike traditional machines, humanoid robots exhibit a degree of autonomy enabled by machine learning algorithms. This autonomy, coupled with the intricate nature of human-algorithm hybrid control, makes it challenging to pinpoint the precise cause of an accident and to establish the standard of fault. For instance, consider a scenario where a humanoid robot, while performing household chores, accidentally damages a valuable object or injures a child. In such cases, traditional liability analysis, which might focus on the operator’s level of care, may be insufficient. The robot’s actions are influenced by a multitude of factors, including the operator’s instructions,Footnote 32 the robot’s internal algorithms, and real-time data processing. As a result, if a humanoid robot causes harm, it may be difficult to determine whether the fault lies with the human operator, a flaw in the machine learning algorithm, or a combination of both. Furthermore, the appropriate boundaries of negligence in hybrid control circumstances remain uncertain.Footnote 33 Traditional Chinese accident law, primarily based on conventional product liability principles, fails to adequately address the complexities of hybrid control circumstances, particularly the shared fault and potential joint liability arising from human-algorithm hybrid control. Lastly, the intricacies of human-robot interaction, including the potential for miscommunication or misunderstanding, further complicate the issue. Human-robot interactions are not merely input-output exchanges, but rather involve a dynamic interplay of human intent, algorithmic interpretation, and real-time adaptation, making it exceptionally challenging to isolate a single, definitive causal factor. To sum up, unlike traditional machines, where the operator is typically in direct control of the machine’s actions, humanoid robots can exhibit a degree of autonomy that makes it difficult to establish the standard of fault.
II. Algorithmic Malfunction
Even if the liability mechanism works at a theoretical and textual level, its practical effectiveness in ensuring victim compensation rests upon how readily victims can establish liability in adjudication. To that end, the evidentiary mechanism for identifying algorithmic malfunction becomes another major challenge to the existing liability mechanism.
As discussed above, the increasing delegation of control to artificial intelligence systems has made the behavior of humanoid robots more unpredictable. This unpredictability is exacerbated in dynamic environments where robots must continuously adapt to changing conditions, potentially leading to unforeseen behaviors that neither the operator nor the algorithm developers could have reasonably anticipated. Indeed, in many instances, neither the human user nor the programmer who developed the algorithm can fully comprehend the automated decision-making process.Footnote 34 This is not unique to humanoid robots; it is a common characteristic of machine learning algorithms in general.
Consider the case of AlphaGo, the renowned machine learning algorithm that defeated top human Go players in 2017. Even the engineers who created AlphaGo could not fully explain the intricacies of its moves—otherwise, the engineers themselves would be the world champions. This highlights the inherent challenge of interpreting and understanding the decision-making processes of complex algorithms. If engineers struggle to explain the algorithm’s choices even in the highly controlled environment of a 19×19-grid Go board, the difficulty is magnified exponentially in the complex and unpredictable scenarios encountered by humanoid robots in real-world service settings. The Boeing 737 MAX crashes of 2018 and 2019 serve as a stark reminder of this issue.Footnote 35 The aircraft’s Maneuvering Characteristics Augmentation System (MCAS), a flight control system, was implicated in both accidents.Footnote 36 Despite extensive investigations, Boeing engineers and the official investigation teams took a long time to pinpoint and agree on the exact nature of the algorithmic malfunction that led to the crashes.Footnote 37
Humanoid robots, like modern aircraft, operate in complex environments where various factors, such as user input, environmental changes, and external influences, can impact algorithmic decision-making. Consequently, pinpointing algorithmic malfunction presents a significant challenge, hindering the establishment of a factual basis for legal evidence. This difficulty is compounded by inherent informational asymmetry: mechanical information is typically controlled by the robot’s manufacturer, while those seeking compensation for damages often face a substantial disadvantage in accessing and understanding the product’s design and operation. This disparity significantly impedes the evidence-gathering process necessary to identify algorithmic malfunction.
III. Remedies for Humanoid Robots’ Damages
The traditional tort system faces significant challenges in providing effective redress for damages caused by malfunctioning humanoid robots. Machine learning algorithms, which empower these robots to perform tasks autonomously, introduce the possibility of robots initiating harmful actions. While traditional accident law offers avenues for financial compensation by assigning legal responsibility to the robot’s developer or user,Footnote 38 it falls short when attempting to rectify the harmful behavior itself. As Mark Lemley and Bryan Casey have observed, traditional legal remedies like punitive damages or injunctions are designed to deter human behavior; however, traditional deterrents such as imprisonment or financial penalties are inapplicable to humanoid robots.Footnote 39
Furthermore, while some scholars have begun exploring legal remedies and related rules for robots,Footnote 40 comparatively little attention has been paid to psychological remedies. Unlike humans, humanoid robots lack moral agency, emotions, and the capacity to understand punishment. Yet unlike traditional machines, humanoid robots feature human-body simulation and can elicit emotional responses in human victims. Consequently, victims of harm might derive some psychological relief from physically interacting with or damaging the offending robot. These considerations present a novel challenge, further complicated by the rapid advancement of humanoid robot technology and the need to anticipate future scenarios.
D. Typological Analysis of Humanoid Robot Accidents
In Chinese law, accident liability determination continues to rely substantially on fault-based reasoning.Footnote 41 A common example is road traffic accidents.Footnote 42 Article 76 of the Road Traffic Safety Law establishes the following liability mechanism: in accidents between motor vehicles, fault-based liability applies, and if both parties are at fault, they bear responsibility proportionally; in accidents between motor vehicles and non-motor vehicles or pedestrians, a combination of presumed fault and no-fault liability applies.
However, as previously mentioned, the presence of artificial intelligence and human-machine hybrid control complicates fault determination in humanoid robot accidents. Based on the subject of the fault—human or algorithm—humanoid robot accidents can be divided into two categories: human fault and robot fault.
I. Human Fault
Under the current Chinese liability mechanism, human fault encompasses both intentional acts and negligence, including a distinct category of “gross negligence.”Footnote 43
1. Intentional Acts
Intentional acts can be further analyzed in two scenarios. The first involves the deliberate use of humanoid robots to cause harm, such as in cases of murder, assault, or property damage. These actions clearly lie outside the scope of accident law. The second scenario encompasses the infliction of deliberate least-cost harm, where an actor, confronted with a choice between two undesirable outcomes, intentionally chooses the lesser of two evils. This scenario, often likened to the trolley problem,Footnote 44 assumes the individual possesses sufficient time for reasoned decision-making. Traditional accident law addresses such situations through doctrines such as “inevitable necessity,”Footnote 45 which is also recognized within Chinese accident law.Footnote 46 Crucially, neither of these intentional acts presents novel legal challenges uniquely attributable to the use of humanoid robots.
2. Negligence
Human negligence involving humanoid robot accidents may initially appear analogous to negligence in traditional mechanical accidents, often arising from carelessness or overconfidence.Footnote 47 Recognizing the opaque nature of human cognition, accident law employs the reasonable person standard as the touchstone for evaluating negligence, or gross negligence.Footnote 48 However, the increasing sophistication of humanoid robots may elevate the standard of care expected of their users, thereby expanding the scope of what constitutes negligence. Furthermore, delegating tasks to a humanoid robot does not automatically diminish the user’s responsibility. For instance, a driver excessively reliant on autonomous driving features may be more inclined to engage in distracting activities, such as reading, sleeping, or using a mobile phone, rather than diligently monitoring the vehicle’s operation. Consequently, humanoid robots often incorporate enhanced warning systems to mitigate the risks associated with user inattention or misuse. Nevertheless, these considerations impact the apportionment of liability at the margins; they do not fundamentally alter the nature of human negligence in this context. Therefore, instances of human negligence in humanoid robot accidents remain amenable to analysis within the established framework of traditional accident law.
II. Robot Fault
Robot fault, in contrast to human fault, which stems from human fallibility, manifests in three primary forms, each rooted in the robot’s algorithmic programming and operational design.
1. Traditional Algorithmic Defects
This category encompasses traditional algorithmic flaws similar to those found in conventional robots, which are not equipped with machine learning algorithms—examples include warning system failures, unresponsive controls, user privacy breaches, and cybersecurity vulnerabilities. These traditional algorithmic defects in humanoid robots can be treated analogously to hardware defects of conventional robots. When a humanoid robot’s hardware design is flawed or fails to meet industry technical standards, principles of product liability may be applicable.Footnote 49 Consequently, no substantial modifications to existing accident law are required to address these traditional algorithmic defects.
2. Machine Learning Algorithmic Defects
This category involves defects specific to the machine learning algorithms employed by humanoid robots. These defects can manifest during the data collection process—for example, insufficient image resolution or incorrect object recognition—or during data analysis and processing—for example, miscalculation of outputs. Ultimately, these defects can lead to erroneous automated decision-making, inappropriate algorithmic intervention, and, consequently, accidents. Furthermore, humanoid robots may present systemic harm issues stemming from machine learning algorithms. For example, systemic discrimination arising from machine learning algorithms can result in differentiated safety standards for users based on factors such as income level or gender.Footnote 50 These issues fall within the scope of machine learning algorithmic defects, a type of algorithmic malfunction for which traditional accident law struggles to provide adequate solutions. Therefore, addressing these novel challenges may necessitate the development of a new liability mechanism specifically tailored to the technical features of machine learning algorithms.
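A minimal, fabricated sketch illustrates how such systemic discrimination can arise without any explicit discriminatory rule in the code: a model that learns safety margins from skewed observational data ends up applying different standards to different user groups. All groups and numbers below are invented for exposition.

```python
# Invented illustration of a machine-learning defect: a safety-distance
# model fitted on skewed data silently learns different margins for
# different user groups. All groups and numbers are fabricated.

def fit_group_margins(samples):
    """samples: list of (group, observed_safe_distance_in_meters).
    Returns the mean learned safety margin per group."""
    by_group = {}
    for group, dist in samples:
        by_group.setdefault(group, []).append(dist)
    return {g: sum(v) / len(v) for g, v in by_group.items()}

# Skewed training data: group B happened to be observed mostly in
# low-risk settings, so its recorded "safe" distances were shorter.
skewed = [("A", 1.2), ("A", 1.4), ("B", 0.6), ("B", 0.8)]
margins = fit_group_margins(skewed)
print(margins)  # group B ends up with roughly half the safety buffer of group A
```

No programmer wrote a rule treating group B differently; the differentiated safety standard is an artifact of the data, which is precisely why such defects are hard to locate after an accident.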
3. “Intentional” Algorithmic Act
This category involves machine learning algorithms that “intentionally” cause accidents.Footnote 51 The objective of such robot fault is to minimize harm or to achieve deliberate least-cost harm, but the “intent” demonstrated by the algorithm is more formally rational than human intent. Technically, algorithms differ from humans in their information processing speed. Even in a fraction of a second before an accident, an algorithm can perform rational calculations and make automated decisions.Footnote 52 In other words, it is a “decision,” rather than a “reaction.” Therefore, seemingly accidental events may, in fact, be the result of a rational choice made by the algorithm,Footnote 53 exhibiting a level of formal logic exceeding that of a human. Although the rational choices of machine learning algorithms are typically based on correlation, rather than causation,Footnote 54 in emergency situations involving the control of humanoid robots, the automated decisions made by the algorithm approach human intent. This may be understood as a functionally intentional form of automated decision-making in emergency situations, as distinguished from “intentional” automated decision-making resulting from programmer intervention or traditional algorithmic programming.
A related issue, akin to the trolley problem, is particularly salient: to avoid harm to the user, the algorithm “intentionally” harms a third party as a sacrifice.Footnote 55 In an imminent accident scenario, the humanoid robot, based on the information it has learned, faces a choice: protect the user or third parties—other persons or property? Autonomous driving manufacturers, driven by commercial interests, naturally tend to prioritize the protection of users in their algorithm design. From a business model perspective, users are direct customers, while third parties are not.Footnote 56 Moreover, considering information costs, the sensors within humanoid robots struggle to provide reliable judgments concerning dynamic external scenarios—such as the quantification, resistance, and evasive capabilities of third parties—in contrast to the relatively predictable nature of human user responses. As a result, the occurrence of a humanoid robot algorithm “intentionally” causing harm to a third party in unavoidable circumstances becomes more probable. The challenge then arises: how can a third party who sustains personal injury or property damage from such an algorithm’s “intentional” harm seek legal redress? This scenario exposes a gap in the conventional accident liability mechanism.
E. Reinventing a Liability Mechanism for Humanoid Robot Accidents
The preceding classification of humanoid robot accidents draws two distinctions: between natural persons and algorithms, and between traditional algorithms and machine learning algorithms. This typological approach serves not as an end in itself, but as a means to elucidate the layered legal issues arising from humanoid robot accidents, ultimately determining which categories are amenable to the traditional accident liability mechanism and which suggest the need for a new one.
Through this typological analysis (Fig. 1), the feasibility and limitations of traditional accident liability mechanisms in the context of humanoid robot accidents become apparent. Traditional liability mechanisms remain applicable only in situations involving human fault or traditional algorithmic defects, and even then, contingent upon the clear demarcation of responsibility between human and robot.
Figure 1. Typology of liability constellations
The traditional accident liability mechanism proves inadequate for addressing the complex scenarios involving the commingling of human and machine fault, machine learning algorithmic defects, and, especially, accidents “intentionally” caused by machine learning algorithms. Addressing these novel challenges is the central task in constructing a robust and effective liability mechanism for humanoid robot accidents. The following sections explore the development of such a mechanism, concentrating on commingled human-machine fault, machine learning algorithmic defects, accidents “intentionally” caused by these algorithms, and remedies for humanoid robot harms.
I. Establishing Evidentiary Process for Determining the Legal Facts of Hybrid Control
The foregoing classification of humanoid robot accidents, distinguishing between natural persons and algorithms, as well as traditional and machine learning algorithms, reveals the limitations of traditional accident liability mechanisms. While these frameworks adequately address scenarios involving human fault and traditional algorithmic defects, they falter when confronted with the complexities of commingled human-machine fault, machine learning algorithmic defects, and “intentional” robot actions. Without a clear understanding of who or what is at fault—human, algorithm, or a combination thereof—meaningful liability cannot be assigned. Therefore, establishing a robust evidentiary process is crucial to support the effective operation of a liability mechanism for humanoid robot accidents.
The transition from exclusive human control to human-machine hybrid control, coupled with the inherent uncertainty of machine learning algorithms in the determination of legal facts, poses a significant challenge to traditional fault-based accident liability mechanisms.Footnote 57 Distinguishing between user misoperation and algorithmic defects, for example, often proves difficult in practice. Similar challenges pervade the determination of control in human-machine hybrid control, a critical technical characteristic that cannot be circumvented in the regulation of humanoid robots.
How can we determine whether the dominant control in human-machine hybrid control at the time of the accident resided with the user or the machine learning algorithm? Even in situations where the user appears to be in control, did the machine learning algorithm faithfully execute the user’s commands? These questions present significant obstacles to factual determination for legal adjudication. For example, in disputes concerning autonomous driving accidents, the central point of contention frequently revolves around whether the proximate cause of the accident was a command originating from the user or the algorithm.Footnote 58
Therefore, regulatory authorities should implement ex ante, in situ, and ex post measures to establish the legal facts regarding the existence of machine learning algorithmic defects and whether the machine learning algorithm “intentionally” caused the accident. Ex ante, regulatory agencies should mandate that manufacturers regularly file information pertaining to the functions of their machine learning algorithms, the processes of algorithmic automated decision-making, and other relevant data.Footnote 59 In situ, the operation of humanoid robots should be monitored by a dedicated safety monitoring platform to track algorithmic operational information. Manufacturers should be incentivized to appropriately retain operational data and logs, while ensuring the protection of personal information and data security, to facilitate evidence collection in potential accidents. Analogous to established practices in civil aviation and rail transport, humanoid robots require an Event Data Recorder (EDR) system—a “black box”—which, upon triggering specific conditions, meticulously records data including speed, location, time, safety system status, algorithmic control actions, human user control actions, and other pertinent information. Ex post, regulatory agencies should initiate third-party algorithmic testing and auditing.Footnote 60 This measure is aimed not only at tracing the root cause and revealing design or execution flaws in the machine learning algorithm but also at evaluating the causal link between any such flaws and the accident.Footnote 61 Critically, victims seeking compensation for damages often face a significant disadvantage compared to manufacturers regarding access to and understanding of product design and operational information. 
This informational asymmetry can impede the fair apportionment of risk, especially in cases involving technical complexity.Footnote 62 Therefore, facilitating victims’ access to evidence for legal proceedings, especially at the ex post stage, is essential for legal remedies.
II. Regulating Algorithmic Malfunction Through the “Reasonable Person” Technical Standard
Given the inherent potential for accidents arising from the use of humanoid robots, the liability mechanism should adopt a more pragmatic objective, shifting from the idealistic goal of complete accident prevention to the more realistic aim of accident mitigation.Footnote 63 This pragmatic philosophy can be traced back to the origins of traffic accident law, notably through the work of Ralph Nader, whose book, Unsafe at Any Speed,Footnote 64 profoundly shaped the American traffic accident liability mechanism and influenced international discourse. Under Nader’s influence, the dominant adjudicative principle became that vehicle manufacturers are not obligated to produce flawless vehicles, but are bound by a duty to “exercise reasonable care in the design and manufacture of their product to minimize the personal injuries to which its users are exposed; and to not place its users in an unreasonable risk of personal injury when a collision occurs.”Footnote 65 The concepts of “reasonable care” and “unreasonable risk of personal injury” effectively integrate the standard of reasonableness into the product design, development, and manufacturing processes. This form of ex-ante regulation, framed as a safety standard, is not merely beneficial but essential for products like humanoid robots, which are capable of actively initiating harm. To effectively address humanoid robot accidents, we must consider specific technological realities, weigh the costs of technological implementation, and adopt reasonable ex-ante regulatory mechanisms to control algorithmic malfunction, thereby reducing the incidence of such accidents.
Crucially, technical standards inherently incorporate fault-based liability standards from accident law.Footnote 66 To assess user negligence, specifically, whether the user exercised reasonable care, the traditional accident liability mechanism employs the “reasonable person” standard, the baseline threshold of acceptable human conduct.Footnote 67 Analogous standards can be applied to humanoid robot algorithms. If an algorithm’s behavior surpasses the reasonable person standard—in other words, if replacing the algorithmic control with reasonable person control would not have averted the accident—this equates to fulfilling the duty of reasonable care. Consequently, in such instances, even if an accident occurs, the algorithm is not considered to have malfunctioned, and no robot liability arises. This standard for determining algorithmic malfunction can be integrated into industry technical standards, solidifying its place in the design, development, and manufacturing stages of humanoid robots through a technical safe harbor mechanism, under which compliance with specified safety and transparency standards may serve as a defense or partial defense to liability.Footnote 68 Mirroring the “reasonable care” and “unreasonable risk of personal injury” principles in the aforementioned traffic accident liability mechanism, the humanoid robot’s algorithm should be evaluated retrospectively from its behavioral effects. Algorithmic duties of care should be determined by comparison to natural person duties of care, taking into account the risk tolerance levels of different industries. If a humanoid robot algorithm fails to meet the algorithmic duty of care standards established by regulatory bodies, it must bear the corresponding risk of liability.
As these “reasonable person” technical standards gain wider acceptance, the incidence of humanoid robot algorithmic malfunctions falling outside these standards, and of the accidents they cause, will be curtailed, and the predictability of liability will be enhanced.
III. Manufacturer as the Least Cost Avoider
The “Least Cost Avoider” principle, as articulated by Guido Calabresi in The Costs of Accidents, offers a compelling framework for analyzing accident liability.Footnote 69 This theory posits that, to minimize total social costs, liability should be assigned to the party best positioned to avert the risk of accidents at the lowest cost.Footnote 70 This perspective shifts the focus of accident liability mechanisms toward maximizing social welfare. From a law and economics perspective, the “Least Cost Avoider” theory is particularly relevant when the cost of establishing legal facts is prohibitively high. In such circumstances, minimizing total social costs becomes a practically feasible and optimal objective.
In humanoid robot accidents, the excessive cost of discovering legal facts remains, under current technological and regulatory conditions, a significant challenge. Given the technical complexities inherent in humanoid robots and the potentially numerous responsible parties involved, determining liability, in no small number of cases, is both exceedingly difficult and prohibitively expensive. Applying the least cost avoider theory requires identifying, within the humanoid robot ecosystem, the participant best situated to effectively prevent such accidents at the most reasonable cost, and, post-accident, to efficiently determine and allocate responsibility. In humanoid robot accidents, the multiplicity of developers, the complexities of human-machine hybrid control, and the high attribution costs associated with machine learning algorithms render the least cost avoider approach particularly well-suited to mitigating the costs of attribution in resolving accident disputes.
The question then becomes: Who is the least cost avoider in the context of humanoid robot accidents? The relevant parties typically include the user, the owner, and the manufacturer. Each will be considered in turn.
First, the user is not the least cost avoider. Human-machine hybrid control implies that the user cannot continuously maintain complete control over all actions of the humanoid robot. Even when the user possesses some degree of control during operation, they are at an informational and technological disadvantage compared to the manufacturer. Ex ante, the user’s control is generally limited to activation, deactivation, or setting certain automated control functions. Ex post, if liability is concentrated on the user, they face significant challenges in proving algorithmic malfunction and establishing the causal link between such malfunction and the resulting harm.Footnote 71 Furthermore, data from the humanoid robot at the time of the accident may be regarded by the manufacturer as proprietary or potentially self-incriminating, and manufacturers are thus often reluctant to disclose it.Footnote 72 Such reluctance further complicates the discovery of algorithmic malfunctions.
Second, the humanoid robot owner is not the least cost avoider. Transferring all risk to the owner, requiring them to assume all risks associated with use at the time of purchase, would significantly disincentivize the acquisition of such high-risk products. This is particularly true given the current market competition between traditional assistive devices and humanoid robots. If owners bear all responsibility, they will likely opt for less risky traditional devices. Moreover, like the user, the owner typically has no control over the design, assembly, or quality control of the humanoid robot, and therefore lacks the capacity to mitigate the harm that may be caused by the robot’s autonomous behavior. Thus, even if all liability were assigned to the owner, the owner would not be capable of effectively reducing the total social cost of accidents.
This Article argues that the manufacturer should be designated the least cost avoider.Footnote 73 First, manufacturers possess the lowest information acquisition costs regarding human-machine hybrid control and algorithmic fault, facilitating the resolution of information asymmetries in accident disputes. By designating manufacturers as the least cost avoider, the law would incentivize them to use their informational advantage to identify other culpable parties within the liability chain, employing the latter’s fault as a defense to mitigate their own liability. Second, manufacturers control the technical design and modification of humanoid robots and are thus best positioned to manage potential accident risks. Assigning liability to manufacturers incentivizes them to minimize such risks, avoid accident costs, and provide compensatory and even retributive remedies.Footnote 74 This is especially so for behavioral remedies, where manufacturers are uniquely positioned to implement corrective measures. Leveraging their control over both software and hardware, they can rectify humanoid robot behavior, provide compensatory relief, and even, considering the psychological dimensions of retribution, incorporate elements of retributive justice into the design itself. Moreover, even if manufacturers cannot entirely eliminate the risks associated with certain machine learning algorithms, they retain the critical option of forgoing their use to ensure product safety.Footnote 75 Third, given the current diverse landscape of humanoid robot development, manufacturers, with deeper pockets, are better positioned than individual users or owners to distribute accident costs through collectivized pricing mechanisms and insurance schemes.Footnote 76 Manufacturers occupy a central position in the development chain; thus, the cost of procuring higher-quality components can be passed on to owners through increased sales prices.
Similarly, insurance mechanisms, such as no-fault insurance, can transfer liability risk to insurance companies and policyholders.Footnote 77 Therefore, with respect to humanoid robot accidents, the liability mechanism should designate the manufacturer as the least cost avoider, and subsequently refine the system through internal liability shifting or specific exculpatory provisions.
However, designating the manufacturer as the least cost avoider does not necessitate the imposition of strict liability. A zero-accident humanoid robot is an unrealistic expectation. While strict liability may appear superficially appealing, it could stifle manufacturers’ incentives to develop and implement machine learning algorithms, ultimately harming the industry and potentially conflicting with national industrial development policies. Therefore, while identifying the humanoid robot manufacturer as the least cost avoider, regulatory agencies should, commensurate with the risk tolerance of different industries—e.g., manufacturing, healthcare, home automation, logistics—implement technology safe harbors and related liability exemptions, thereby creating a more nuanced and adaptable accident liability mechanism.Footnote 78 If fault-based liability provides sufficient incentive for manufacturers to develop humanoid robot algorithms, and can reduce user fault without increasing the risk of additional algorithmic fault, then it can generate net safety benefits for society as a whole.
IV. Behavioral Correction and Retribution for Humanoid Robots
As previously discussed, humanoid robots, designed to mimic human appearance, movements, and even emotional expressions, elicit complex social and emotional responses. Consequently, when a humanoid robot, exhibiting human-like behaviors and seemingly making its own choices, causes harm, the victim’s experience may extend beyond the purely physical or material. The breach of trust, the sense of betrayal, and the emotional pain caused by a seemingly “sentient” being can be profound, especially when the lines of control are blurred by the hybrid control model. In such instances, traditional legal remedies, such as monetary compensation, may prove insufficient, and, in particular, China’s current legal system lacks provisions for addressing these unique psychological harms. The victim may crave not only material restitution but also a symbolic acknowledgment of the robot’s “wrongdoing,” a form of retribution that resonates with their emotional experience of betrayal by something they perceived as having a degree of agency. Although humanoid robots lack human-like moral comprehension and thus have “no soul to damn,”Footnote 79 users might still demand an emotional form of ethical justice from them.Footnote 80 As Christina Mulligan puts it, “taking revenge against wrongdoing robots, specifically, may be necessary to create psychological satisfaction in those whom robots harm.”Footnote 81
This is where the concept of behavioral correction and retribution for humanoid robots gains relevance. While the robot itself cannot experience punishment, the act of rectifying its harmful behavior, perhaps through software updates or physical modifications, coupled with symbolic acts of “punishment,” such as public deactivation, formal withdrawal from service, or even physical striking,Footnote 82 could provide a sense of closure and justice for the victim, addressing a psychological need often unmet by traditional legal remedies.
In addition, punishing robots can operate as a symbolic form of social condemnation, expressing society’s disapproval of certain behaviors in ways that go beyond purely economic sanctions. As humanoid robots enter the social sphere, they become subject to the moral expectations of that sphere. Therefore, “retribution” is not a primitive desire, but a return to the historical function of tort law: maintaining social peace and moral order.Footnote 83 Through symbolic acts of “punishment,” society conveys certain moral boundaries to the public, much as fines imposed on corporations not only compensate for losses but also publicize their violations.Footnote 84
F. Conclusion: Towards the “Accidental Utopia”
The rapid integration of humanoid robots into society necessitates a reimagined liability mechanism to address the unique risks posed by their technological complexity—humanoid simulation, artificial intelligence, and hybrid control. Traditional legal frameworks for accidents in China, rooted in fault-based liability, struggle to assign responsibility in accidents involving autonomous algorithms, emergent behaviors, and intertwined human-machine decision-making. This Article highlights three critical gaps: the inadequacy of fault standards for autonomous systems, challenges in identifying algorithmic malfunctions, and the insufficiency of traditional remedies for psychological and systemic harms.
To mitigate these challenges, a tailored liability mechanism is proposed. Key recommendations include establishing evidentiary processes to clarify fault attribution, designating the manufacturer as the least cost avoider to incentivize risk mitigation, and integrating “reasonable person” technical standards for algorithmic behavior. Additionally, behavioral correction and symbolic retribution for robots could address victims’ psychological needs.
Unless a “Goda” is developed, human physical ability will remain largely unchanged, in stark contrast to the rapidly expanding capabilities of humanoid robots. As these robots become increasingly prevalent, tasks will progressively shift from humans to humanoid robots in a world shared by both, thereby elevating the risk of humanoid robot-related accidents. In this context, a reconstructed liability mechanism seeks to establish an “accidental utopia,” where reduced accidents, fair compensation, and adaptable legal systems coexist with advanced robotic autonomy.
Acknowledgements
The author declares none.
Funding Statement
No specific funding has been declared in relation to this Article.
Competing Interests
The author declares none.
