
How to Regulate Moral Dilemmas Involving Self-Driving Cars: The 2021 German Act on Autonomous Driving, the Trolley Problem, and the Search for a Role Model

Published online by Cambridge University Press:  03 November 2023

Lando Kirchmair*
Department of Social Sciences and Public Affairs, University of the Bundeswehr Munich, Munich, Germany


With the promulgation of the Autonomous Driving Act in summer 2021, Germany took the worldwide lead on regulating self-driving cars. This Article discusses the (non-)regulation of moral dilemmas in this act. To this end, it clarifies the role of the so-called trolley problem, which influenced the report of the German Ethics Commission that paved the way for this act in particular and the relationship between philosophical reasoning, empirical studies, and the law in general. By introducing the international legal community to the (non-)regulation of moral dilemmas in the German act, the Article critically reviews the German goal, which is to serve as a European and international role model. This will be preceded by a discussion as to why self-driving cars should be allowed as well as the moral dilemmas they cause which should be regulated by the law.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence, which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
© The Author(s), 2023. Published by Cambridge University Press on behalf of the German Law Journal

A. Introduction

Mobility is a central element of human coexistence and mobility revolutions have marked decisive turning points in recent human evolution. Footnote 1 Thus, the invention of the disc wheel around 3,500 BC—a special testimony to human ingenuity without a direct role model in nature Footnote 2 —was a milestone that considerably facilitated the transportation of goods and later also of people. Footnote 3 Another great achievement was the spoked wheel, invented in the Bronze Age around 2,000 BC. Footnote 4 Fast forward to the modern age, and the patent registered by Karl Benz in 1886—the “Benz Patent-Motorwagen Nummer 1,” in plain language, the car, which was fitted with wire-spoked wheels with hard rubber tires and an internal combustion engine Footnote 5 —was another tremendous invention that, again, contributed to a mobility revolution. Footnote 6 Whereas until then, carts and carriages had to be pulled by draft animals, the so-called automobile released propulsion technology from that dependence. Currently we are witnessing the next great leap in human mobility: The “auto-auto,” as a playful German nickname Footnote 7 has it, is one designation for an autonomous, in other words, self-driving car that is both propelled and controlled without human intervention. Footnote 8

While the period between the invention of the wheel and its enhanced version—the spoked wheel—was about 1,500 years, the interval between the independence of propulsion technology and its full enhancement now—also including the automation of control technology—was ten times shorter, lasting only about 150 years. Although the invention and improvement of the wheel both represent milestones in mobility, they were presumably not accompanied by legal standards in any significant way. Beginning in the second half of the 19th century, however, technology law is credited with a process of institutional consolidation that has also affected the regulation of cars. Footnote 9 Law, thus, is playing a crucial role in the current revolution of human mobility. One might even go so far as to say that law shares responsibility for the success or failure of modern mobility technology. Footnote 10 Many important areas, such as traffic law, data protection law, liability and criminal law, as well as public law, are affected by auto-autos, and striking the right balance between regulation of and freedom for the development of the technology is no easy task. Footnote 11

The German legislature is the first worldwide that has dared to face this difficult task comprehensively. Footnote 12 Section 1a, which made the “operation of a motor vehicle by means of a highly or fully automated driving function […] permissible if the function is used as intended,” paved the way and was already inserted in the Road Traffic Act (StVG) in 2017. Footnote 13 In July 2016, an Ethics Commission was tasked to deliver a report on ethical and legal questions regarding the introduction of automated vehicles, which was also published in 2017. Footnote 14 This high-level report was the basis for the draft act on autonomous driving published in February 2021, which—for the first time worldwide Footnote 15 —aimed at a comprehensive regulation of self-driving cars on public roads. Footnote 16 The draft then entered into force, almost unchanged, as the Autonomous Driving Act on July 27, 2021. Footnote 17 Almost a year later a decree on the approval and operation of vehicles with autonomous driving functions was adopted on June 24, 2022. Footnote 18 The path for the use of highly and fully automated vehicles has, thus, already been prepared in Germany; further regulatory steps in other countries are likely to be expected. The German act holds, among many other things, in Section 1e paragraph 2, that motor vehicles with an autonomous driving function need to have a specific accident prevention system, or “System der Unfallvermeidung.” The concrete regulation of moral dilemmas in connection with self-driving cars is—needless to say—of great importance but also very controversial. This is no wonder, as many lives are at stake.

This Article focuses on the question as to how to regulate such moral dilemmas involving self-driving cars by law and first holds that self-driving cars should be allowed on the roads. If self-driving cars function as promised, allowing them follows from the positive obligation to protect life as guaranteed by Article 2 of the European Convention on Human Rights (ECHR)—see Section C. Footnote 19 Nevertheless, the most recent mobility revolution and all its promises come with a significant burden: Self-driving cars will still be involved in accidents, and the technology includes the possibility to program how these accidents should take place. For the first time in the history of traffic law, we have to make decisions about moral dilemmas, about life and death, in cold blood. This situation is unprecedented in its extent and poses a great challenge for society and for the law, as well as a veritable challenge for ethicists and lawyers—see Section D. Footnote 20 This Article argues in the following that this “possibility” must be faced by the law—not by private companies or individuals—see Section E. Footnote 21

After the stage has been set, the central interest of this Article comes into play, namely the regulation of so-called moral dilemmas, in other words, situations in which, according to all available options, comparable harm occurs—for example, either a group of two or a group of three people is killed because it is not possible to prevent both outcomes. The German act, which includes a provision on an accident-avoidance system and thereby regulates moral dilemmas to some extent, will be analyzed in order to clarify whether this act might indeed serve as a role model beyond Germany. For this purpose, the Article will also look at the report of the German Ethics Commission, whose “rules” constitute the basis of the act. To understand the position taken by the Ethics Commission, the Article revisits the so-called “trolley problem,” which prominently arose out of the debate between the Oxford philosopher Philippa Foot Footnote 22 and the American philosopher Judith Jarvis Thomson. Footnote 23 Looking back at this problem, and at related but importantly different trolley cases constructed by German criminal lawyers, we come to understand that the current discussion suffers from a conflation of hypothetical scenarios from different debates in philosophy, empirical studies, and the law—see Section F. Footnote 24 This insight is important when finally discussing the accident regulation of self-driving cars facing moral dilemmas in Europe, the US, and beyond. The positive obligation of states according to Article 2 ECHR to take measures to save lives surely includes a prominent role for the “minimize harm” principle, potentially even to the extent that human lives have to be offset—see Section G. Footnote 25 In the end, we will see that the 2021 German Act on Autonomous Driving provides some important elements for collision-avoidance systems in self-driving cars but falls short of being a role model for Europe or the US due to its reluctance to regulate moral dilemmas comprehensively.

B. What is in a Name? The Designation of Different Automation Levels

Similar to the development of the automobile—which did not happen at once but owed its existence to numerous, sometimes parallel, technological innovations—the self-driving car will not be ready for use tomorrow or the day after without a precursor. There are many developments and different levels of automation, and these are typically described in six stages according to the classification of the Society of Automotive Engineers (SAE) International, a non-profit association of automotive engineers concerned with technical standards. Footnote 26 The lowest level includes vehicles without any automation—Level 0: No driving automation. The next level comprises well-known and already approved driver-assistance systems such as the Anti-lock Braking System (ABS) and Electronic Stability Program (ESP), or more recently, lane departure warning—Level 1: Driver assistance. Partially automated vehicles promise to take over certain activities, such as independent parking or maneuvering a vehicle in a traffic jam—Level 2: Partial automation. Conditionally automated vehicles are, to a certain extent, “autonomous” but still require the possibility of human intervention—Level 3: Conditional automation. Footnote 27 Human intervention is no longer required at the next, highly automated level but remains possible—Level 4: High automation. Footnote 28 In the final stage, human drivers are not only obsolete but can no longer intervene at all—Level 5: Full driving automation. This distinction between automated and fully self-driving, in other words, potentially driverless, vehicles in road traffic is important not only from a technical perspective but also from a legal one as it entails different legal regulatory requirements.
It is worth pointing out that the classification of Level 5 no longer includes the designation—widely used and also formerly used by the SAE—of such vehicles as being “autonomous.” Footnote 29 This is to be welcomed since the term autonomy is typically used in law and philosophy for self-determination or self-governance. Footnote 30 This does not apply to self-driving cars. In the legal and philosophical sense, only those vehicles could be called autonomous which do not merely move on the basis of pre-programmed decisions without a driver but which actually make autonomous decisions, for example, with the help of machine learning. Footnote 31 Nevertheless, for the purposes of this Article, we will speak of self-driving cars throughout, without wanting to exclude lower levels of automation since moral dilemmas—albeit in a somewhat modified form—can already occur with driver-assistance systems.

C. Why Self-Driving Cars Should be Allowed

The introduction of the automobile initially claimed a comparatively large number of lives: “In the first four years after Armistice Day more Americans were killed in automobile accidents than had died in battle in France.” Footnote 32 Even today, the number of traffic fatalities is by no means insignificant, and the primary cause of accidents is clearly human error. Footnote 33 This is not only relevant for drivers but also for those otherwise involved in road traffic, such as cyclists or pedestrians, who are often affected.

Generally speaking, the law of technology is potentially both technology-preventing and technology-enabling. Footnote 34 In the light of traffic fatalities, the use of self-driving cars is promising. After all, it is quite conceivable that self-driving cars can be designed and programmed in such a way that many (fatal) accidents will actually be avoided. This is true despite the current public debate on accidents involving, for example, various Tesla models in May 2016 and April 2021, both in the US, which were caused by automated vehicles in test mode or by so-called “autopilots” that were not adequately monitored by humans. Footnote 35 A central promise that speaks in favor of the approval of functioning self-driving cars is—despite these accidents—the resulting improvement in traffic safety. To put it bluntly, self-driving cars do not speed, make phone calls, drive under the influence of alcohol or drugs, or fall asleep at the wheel. Footnote 36

Everyone’s right to life as enshrined in Article 2 ECHR, Paragraph 1, covers not only the fundamental prohibition on the state intentionally ending human life, but also the obligation to take precautionary measures to prevent dangerous situations. Footnote 37 The state, thus, has a duty to protect individuals from threats, including threats posed by other individuals. However, this duty to protect is not easy to grasp. Footnote 38 The case law of the European Court of Human Rights (ECtHR) in this regard is quite casuistic. Footnote 39 For example, it includes protective measures for individuals whose lives are threatened by environmental hazards and dangerous activities. Footnote 40 However, there are limits to this duty to protect. The state does not have to prohibit road traffic, for example, simply because it is dangerous. A certain risk is therefore part of life. Footnote 41 In any case, however, the state must take legal measures to regulate dangers emanating from road traffic. Footnote 42 The establishment of appropriate and effective traffic regulations, such as a blood alcohol limit, is, accordingly, necessary to protect individuals against particular dangers. Footnote 43 All in all, it is important to bear in mind that the state has to make great efforts to protect life: “When there is a risk of serious and lethal accidents of which the state has—or ought to have—knowledge, the state may be obliged to take and enforce reasonable precautionary measures.” Footnote 44 Insofar as self-driving cars function and fulfill their promise of significantly increased road safety, it can be assumed that the approval of self-driving cars is to be subsumed under the state’s duty to protect life under Article 2 ECHR.
Footnote 45 In this vein the German Ethics Commission also postulated in Rule 6 that “[t]he introduction of more highly automated driving systems, especially with the option of automated collision prevention, may be socially and ethically mandated if it can unlock existing potential for damage limitation.” Footnote 46

D. Why Moral Dilemmas Involving Self-Driving Cars Are New

In connection with the promise of increased safety, however, there is a great deal of uncertainty. How should self-driving cars behave when all available options cause harm? Self-driving cars will change road traffic massively and thus the street scene and everyday lives of almost everyone. Footnote 47 Therefore, the ethical and legal risks and problems associated with them must be clearly regulated.

One of the main problems is to decide how self-driving cars should be programmed for moral dilemmas, in other words, for situations in which, according to all available options for action, comparable harm occurs, for example, a group of two persons or a group of three persons is seriously injured or killed because neither scenario can be prevented. Footnote 48 This is not to say that self-driving cars will constantly be confronted with trolley-problem-like situations, which have been criticized as too fictitious for real-world regulatory problems. Footnote 49 However, the difficulty of deciding what is an acceptable probability of collision when initiating a particular maneuver, and how that probability relates to the expected harm in situations where all options involve at least a possibility of collision and harm, is a difficult ethical question that we have not had to answer in advance until the advent of self-driving cars. Furthermore, given the millions of cars and kilometers traveled, even the most extraordinary scenarios become likely, and all of them can be regulated in advance. The legal question of how to deal with such dilemmas in road traffic is rather new in that, to date, similar accident constellations have always had to be accepted as fate, so to speak, as subconscious human decisions determined the way accidents happened. Footnote 50 Reactions in a fraction of a second cannot be compared with conscious, reflected decisions. This new possibility is a gift and a burden at the same time. Self-driving cars, for instance, include the chance to save more lives; however, the price is high, as the decision has to be made that someone will die to save other lives. This is a tremendously difficult question that most legal orders have declined to answer so far, at least in a state of normalcy. Yet, ignoring the technical possibility of saving as many lives as possible also means letting people die. Either way, a solid justification is necessary.
Other scenarios, like planes hijacked by terrorists, are fundamentally different and, therefore, only of little argumentative help. Footnote 51

E. Moral Dilemmas Must be Regulated by the Law

The question as to how to make decisions on moral dilemmas involving self-driving cars must be answered by the law. The state has a legal duty to ensure that fatal risks are diminished (Article 2 ECHR). It cannot be left up to car manufacturers or private individuals to decide whether and how they program or choose the programming for self-driving cars in moral dilemmas, since companies and private individuals are not allowed to make life-or-death decisions except in emergency situations, such as emergency aid cases in which their own lives are at stake. Footnote 52 Thus, announcements such as that made by a Mercedes representative, who said that self-driving Mercedes cars will be programmed in crash scenarios to prioritize the safety of their owners over, for instance, pedestrians, must be stopped; such statements put a price tag on weighty decisions about life and death in road traffic. Footnote 53

F. How to Inform the Regulation of Moral Dilemmas Involving Self-Driving Cars?

It is the very definition of a moral dilemma that there is no easy solution. Hence, if we agree that self-driving cars can cause or might be involved in situations which constitute a moral dilemma, there is no easy answer. Current attempts to argue for specific regulations of moral dilemmas for self-driving cars might be inclined to inform decisions by referring to legal scholars, mostly criminal lawyers, who have also discussed so-called “trolley cases.” In these debates, however, the criminal lawyers were usually not concerned with a discussion of adequate laws but with the question whether it would be right to punish individuals in specific situations. Moral dilemmas have plagued many philosophers too. A famous discussion in philosophy and psychology focused on the so-called “trolley problem.” The scenarios discussed there are easily adapted to hypothesize about how self-driving cars should be programmed for accidents with unavoidable fatalities. The trolley problem and similar scenarios, however, are no easy fit for the question as to how to regulate moral dilemmas involving self-driving cars. The debate is huge and complex. Footnote 54 Nevertheless, a major reason for caution is the fact that the debate around trolley problems in philosophy originally had quite different goals than what currently seems to be at center stage. Recent empirical studies have also aimed to inform the regulation of moral dilemmas with self-driving cars. It is, however, a difficult and potentially misleading task to simply ask laypeople in studies what they think would be the right thing to do. None of these debates is meaningless when discussing how to make decisions on moral dilemmas involving self-driving cars. Yet, it is important not to conflate the different starting points and goals of these debates when we aim at informing the current regulation of moral dilemmas with self-driving cars.
This will be demonstrated taking the German Ethics Commission and the 2021 German Act on Autonomous Driving as a basis.

I. Trolley Cases in Criminal Law

Constructed cases are lawyers’ bread and butter, at least in the classroom. From this point of view, it is not surprising that the jurist Josef Kohler offered a hypothetical scenario entitled Autolenker-Fall, the “case of the car driver,” as early as 1915. Footnote 55 Kohler proposed to:

Consider the fact that a car can no longer be brought to a standstill over a short distance but that it is still possible to steer it so that instead of going straight ahead, it goes right or left. If there are now persons straight ahead, to the right and left who can no longer avoid being killed, the driver is not in a position to avoid killing people, but he can steer the car to one side or the other by moving the steering wheel. Can we punish him here for causing the death of A, whereas if the car had continued in a straight line without being steered, B or C would have perished? Footnote 56

Hereby Kohler formulated a decision problem which lawyers still ponder today. It is, however, important to understand why. As is often the case, we learn a great deal about his intentions when we closely read the question at the end of the scenario. His intention as a criminal lawyer was to discuss whether the action chosen by the car driver is punishable under criminal law, or whether emergency law—“Notrecht” in German—prevents the punishment of an individual who had no choice but to kill someone. There is another scenario, again proposed by a German criminal lawyer, which is strikingly similar to the trolley cases discussed in philosophy. In 1951, Hans Welzel described the following scenario:

On a steep mountain track, a freight car has broken loose and is hurtling down the valley at full speed towards a small station where a passenger train is currently standing. If the freight car were to continue racing along that track, it would hit the passenger train and kill a large number of people. A railroad official, seeing the disaster coming, changes the points at the last minute, which directs the freight car onto the only siding where some workers are unloading a freight car. The impact, as the official anticipated, kills three workers. Footnote 57

Due to the similarity of this scenario, the German Ethics Commission considered it to be “familiar in a legal context as the ‘trolley problem’.” Footnote 58 This is misguided, however, as it conflates important differences between the intentions of Kohler and Welzel on the one hand and the discussion of the “trolley problem” by Philippa Foot and Judith Jarvis Thomson on the other. Footnote 59 Welzel’s argument, in broad strokes, was to demonstrate that the railroad official is not culpable for having redirected the freight car. Despite their similarity to currently suggested crash scenarios involving self-driving cars, simply adopting these examples is misguided: When we discuss the ethics of self-driving cars and the question as to how to legally prescribe the programming of these cars for dilemma situations, we must not conflate this with the justification in criminal law of personal actions in terms of emergency aid or personal culpability. While in Kohler’s and Welzel’s cases alike the discussion revolves around the action of an individual and the question whether this individual—ex post—should be punished or not, regulating self-driving cars is a societal question to be answered ex ante by the law maker or an Ethics Commission. Neither the opinion of Kohler, who answered his own question straightforwardly with “certainly not,” nor that of Welzel, who also considered the railroad official not culpable, leads directly to the conclusion that self-driving cars ought to be programmed to take a different path in such scenarios. It is not criminal law which is—alone—responsible for the question as to how to decide on such moral dilemmas. Still, even though this debate is at its core about something else, it does not exclude the possibility that the solutions to such and similar scenarios discussed by many criminal lawyers might be informative for the debate on self-driving cars and moral dilemmas. Footnote 60

II. The Traditional “Trolley Problem” in Philosophy

The traditional “trolley problem” in philosophy became famous through a debate between Philippa Foot and Judith Jarvis Thomson. The Oxford philosopher Philippa Foot published an article in 1967 which had an enormous impact on philosophical debates in the following decades worldwide. Footnote 61 Her intention was to discuss the ethics of abortion by making an argument for the distinction between positive and negative rights, in contrast to the doctrine of double effect. To her, negative duties, namely what we owe to other persons in terms of non-interference, were stronger than positive duties, namely what we owe to other persons in the form of aid. Footnote 62

To illustrate her point, she, too, constructed hypothetical scenarios. One scenario, which later became famous, goes like this:

Someone is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. Footnote 63

Her scenario, too, was designed to put the driver of the tram in a conflict, namely a conflict of two negative duties, instead of one negative and one positive duty. Taking either track, the tram driver would kill an innocent person. Because both duties were negative duties, she argued that the tram driver might steer the tram to save five persons, not on account of the doctrine of double effect but because the distinction between positive and negative duties was decisive.

This argument was challenged by Judith Jarvis Thomson some ten years later. She coined the term “trolley problem, in honor of Mrs. Foot’s example.” Footnote 64 To Thomson, after having considered more hypothetical scenarios, it seemed that the distinction between positive and negative duties does not always guide us in a morally acceptable way. To her, rather, “what matters in these cases in which a threat is to be distributed is whether the agent distributes it by doing something to it, or whether he distributes it by doing something to a person.” Footnote 65 To make her point, she changed Foot’s scenario slightly. In Thomson’s case it is not the action of the driver of the tram but the action of a bystander—who might change the points in order to redirect the trolley—which we have to evaluate. Footnote 66 As Thomson supposes, after having also asked several colleagues for their opinions, most persons consider it morally acceptable for the bystander to change the points in order to save five persons. In this case, however, the bystander violated a negative duty—to not kill one person on the other track—in order to fulfil a positive duty—to aid five persons who would die if the bystander did not act. Footnote 67 This, she states, “is serious trouble for Mrs. Foot’s thesis.” Footnote 68

Judith Jarvis Thomson then goes on to discuss more scenarios, defining the “trolley problem” as the difficulty of explaining why the bystander may redirect the trolley but we must not push a fat man off a bridge to stop the trolley, killing one person—the fat man—in order to save five others. For her, “‘kill’ and ‘let die’ are too blunt to be useful tools for the solving of this problem,” Footnote 69 but an “appeal to the concept of a right” could suffice. Footnote 70 If someone must infringe a stringent right of an individual in order to get something that threatens five to threaten this individual instead, then, according to Thomson, he may not proceed. Footnote 71

The problem is that in both cases we are dealing with negative and positive rights and duties in a similar way, but morally it seems that this should not be decisive, as the bystander should redirect the trolley while the fat man should not be killed in order to save five other persons endangered by the trolley. The “trolley problem,” therefore, at least in the debate between Philippa Foot and Judith Jarvis Thomson, is not about how to solve such moral dilemmas. Footnote 72 On the contrary, the right thing to do, morally speaking, is stipulated in all of the scenarios they discuss. It is rather about the perplexity of how to explain the apparently different moral judgments in rather similar, even almost identical, scenarios. Because this is difficult, and has remained so to this day, it has been labeled a “problem.”

Hence, the important point for the current debate is that conclusions from the apparent fact that the bystander should redirect the trolley in order to save five people by killing one person should not be drawn lightly when considering the programming of self-driving cars in such and similar situations. This insight often seems to be neglected, however, at least in discussions in non-academic magazines when someone makes the case for the relevance of the “trolley problem” for moral dilemmas involving self-driving cars. Footnote 73 For this and other reasons, most moral philosophers do not consider the trolley-problem debate to be particularly useful for the discussion of the ethics of self-driving cars. Footnote 74

III. Empirical Studies on Moral Dilemmas Involving Self-Driving Cars

What is important to note for our purpose, thus, is that the hypothetical trolley cases were originally designed for quite specific purposes: In the case of Kohler and Welzel for discussing intricate issues of criminal law and culpability in emergency situations, and in the case of Foot and Thomson in order to discover moral principles or rather philosophical arguments aimed at justifying and explaining quite perplexing but nevertheless strong moral intuitions in seemingly only slightly different scenarios. The moral intuitions in all of these exercises were presupposed. In the application of such trolley cases over time and especially in relation to the ethics of self-driving cars, something changed.

Trolley cases these days seem to serve rather as an inspiration for finding out what the right thing to do would be and, thus, how self-driving cars should be programmed for moral dilemmas. This shift in the application of trolley cases was impressively demonstrated in the large “Moral Machine Experiment,” Footnote 75 which asked over two million persons online to give their opinion on various scenarios in order to find out what laypersons thought was the right thing to do in a moral dilemma situation involving self-driving cars. These scenarios included characters as diverse as children and the elderly, doctors and the homeless, and different group sizes, to name just a few examples, and to give an idea of the breadth of this and similar efforts to decipher lay moral preferences in the context of self-driving cars. Footnote 76

An important finding reported in the experiment by Edmond Awad and his colleagues, and in many similar studies, is that most people think that, given a moral dilemma, self-driving cars should be programmed to save five people, even if one person has to die as a result. Footnote 77 Yet this does not mean that we can decide on the programming of moral dilemmas involving self-driving cars on the basis of empirical studies alone. Determining the right programming is, for instance, a much more complex question than simply treating trolley cases as experiments. Despite this caveat, it would be throwing the baby out with the bathwater to ignore such experiments if they validly show a significant tendency in laypersons. Footnote 78 The criticism that morality cannot be established in an experiment like the Moral Machine Experiment Footnote 79 hinges as much upon answering the question of what public morality is as does the experiment by Awad and his colleagues itself. Is morality only to be found in the "ivory tower" of ethical theory building, or is it also connected to what the majority of laypersons consider to be the right thing to do? If the latter is to play a role, the design of studies meant to elicit morally relevant intuitions becomes crucial. Footnote 80

One example shall illustrate why too simple a study design might be problematic. In the discussion between Philippa Foot and Judith Jarvis Thomson, a striking and recurring issue was the trouble that numbers alone "won't do." They were puzzled by the finding that in one circumstance it seems morally acceptable to save five persons over one, but that if a, sometimes only slight, change in the circumstances occurs, the verdict seems to change too, and it is no longer morally acceptable to save five persons over one. To ignore this important element of the discussion, which was in fact the whole point of the discussion, and to introduce "morally acceptable" findings of so-called surveys or experiments directly into the real-life debate on how to regulate moral dilemmas caused by self-driving cars, is dangerous. Footnote 81

Having clarified potentially misinformed starting points and the conflation of various quite different debates, we will now take a look at the rules suggested by the German Ethics Commission, which form the basis for the 2021 German Act on Autonomous Driving.

IV. The Rules of the German Ethics Commission on Automated and Connected Driving

In 2017, an Ethics Commission set up by the German Federal Minister of Transport and Digital Infrastructure, including, among others, lawyers and philosophers and chaired by Udo Di Fabio, a former judge of the Federal Constitutional Court of Germany, delivered a report on "Automated and Connected Driving." Footnote 82 This report, which was published not only in German but also in English, was more than a simple report, as it came up with "[e]thical rules for automated and connected vehicular traffic." Footnote 83 Working group 1, chaired by Eric Hilgendorf, was established specifically for "[s]ituations involving unavoidable harm." Footnote 84 While the rules of the Commission deal with various topics, Footnote 85 this Article focuses on the rules concerning moral dilemmas. Rule 1 of the Commission entails the insight that "[t]echnological development obeys the principle of personal autonomy, which means that individuals enjoy freedom of action for which they themselves are responsible." Footnote 86 Rule 2 holds that "[t]he protection of individuals takes precedence over all other utilitarian considerations. The objective is to reduce the level of harm until it is completely prevented." Footnote 87 Both "rules" will very likely meet with wide acceptance. The same holds true for Rule 7, which states that "[i]n hazardous situations that prove to be unavoidable, despite all technological precautions being taken, the protection of human life enjoys top priority in a balancing of legally protected interests." Footnote 88 Hence, "damage to animals or property" must never override the protection of humans. It will be hard to find someone arguing to the contrary in relation to this rule, too. Footnote 89

Rule 8 is a caveat: “Genuine dilemmatic decisions, such as a decision between one human life and another, depend on the actual specific situation […]. They can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable.” Footnote 90 The Commission added:

It is true that a human driver would be acting unlawfully if he killed a person in an emergency to save the lives of one or more other persons, but he would not necessarily be acting culpably. Such legal judgements, made in retrospect and taking special circumstances into account, cannot be readily transformed into abstract/general ex ante appraisals and thus not into corresponding programming activities either. Footnote 91

This is a warning that the scenarios presented by the criminal lawyers above must not be taken as direct role models.

After having emphasized the uncertain grounds for the regulation of moral dilemmas, the Ethics Commission holds, in Rule 9, that "[i]n the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited." Footnote 92 Rule 9 also provides that "[i]t is also prohibited to offset victims against one another." Footnote 93 While "[g]eneral programming to reduce the number of personal injuries may be justifiable," the Commission added that "[t]hose parties involved in the generation of mobility risks must not sacrifice non-involved parties." Footnote 94 Precisely this addition to Rule 9 is crucial for our purpose.

Interestingly, the members of the Ethics Commission could not agree on a consistent explanation for this rule. Footnote 95 On the one hand, the Ethics Commission refers to a judgment by the Federal Constitutional Court of Germany concerning a hijacked airplane, Footnote 96 stating that "the sacrifice of innocent people in favor of other potential victims is impermissible, because the innocent parties would be degraded to mere instruments and deprived of the quality as a subject." Footnote 97 On the other hand, the Commission states in relation to self-driving cars that the "identity of the injured or killed parties is not yet known," which distinguishes self-driving car scenarios from the trolley cases. If the programming, furthermore, "reduced the risk to every single road user in equal measure," then "it was also in the interests of those sacrificed before they were identifiable as such in a specific situation" to program self-driving cars to "minimize the number of victims." Footnote 98 This "could thus be justified, at any rate without breaching Article 1(1) of the [German] Basic Law." Footnote 99 The confusion is made complete by the succeeding paragraph, which reads as follows:

However, the Ethics Commission refuses to infer from this that the lives of humans can be “offset” against those of other humans in emergency situations so that it could be permissible to sacrifice one person in order to save several others. It classifies the killing of or the infliction of serious injuries on persons by autonomous vehicles systems as being wrong without exception. Thus, even in an emergency, human lives must not be “offset” against each other. According to this position, the individual is to be regarded as “sacrosanct.” No obligations of solidarity must be imposed on individuals requiring them to sacrifice themselves for others, even if this is the only way to save other people. Footnote 100

The verdict in this paragraph is then called into question by the following paragraph on constellations in which "several lives are already imminently threatened." Footnote 101 Hence, in scenarios such as the one put forward by Kohler above, where a car would kill a group of three persons but a redirection might avoid killing all three and hit only one, or two, the Commission found that, despite openly admitting disagreements between the experts, minimizing harm might be permissible. Footnote 102

What is of particular importance for our purpose, however, is a discussion not only of moral principles and rules but of legal solutions. Due to technological progress, we must not merely think hypothetically about such scenarios but implement laws which regulate the programming of self-driving cars, including the moral dilemmas they might cause or be involved in. Before we enter the controversy on "offsetting" human lives, we will see what has been included in the German act.

V. The 2021 German Act on Autonomous Driving

The provision in the 2021 German Act on Autonomous Driving related to dilemma situations, Section 1e paragraph 2 No. 2, provides that motor vehicles with an autonomous driving function must have an accident-avoidance system that:

(a) [I]s designed to avoid and reduce harm,

(b) in the event of unavoidable alternative harm to different legal interests, takes into account the importance of the legal interests, with the protection of human life having the highest priority; and

(c) in the case of unavoidable alternative harm to human life, does not provide for further weighting on the basis of personal characteristics. Footnote 103

Rules 2 (lit. a) and 7 (lit. b) of the Ethics Commission were thus directly implemented in the law. However, Rule 9 of the Ethics Commission was not fully included. While the prohibition of taking personal characteristics into account was enshrined in the law, Footnote 104 there is no general legal prohibition concerning the "offsetting" of human victims against one another. Nor is there an explicit obligation to protect as many persons as possible even at the risk of directly harming fewer persons.

The 2022 Autonomous Driving Decree, which concretizes the 2021 Autonomous Driving Act, did not close this lacuna. However, in response to the aforementioned statement by the Mercedes official, a so-called "Mercedes rule" has been established as a "functional requirement for motor vehicles with autonomous driving function": Footnote 105

If, in order to avoid endangering the lives of the occupants of the motor vehicle with autonomous driving function, a collision can only be avoided by endangering the lives of other participants in the surrounding traffic or uninvolved third parties (unavoidable alternative endangerment of human life), the protection of the other participants in the surrounding traffic and uninvolved third parties must not be subordinated to the protection of the occupants of the autonomously driving motor vehicle.

Beyond these rules, no other regulation of moral dilemmas has been introduced in the law or the decree. This non-regulation is a central issue concerning the role-model potential of this regulation beyond Germany, as empirical studies, briefly referred to above, found a strong moral preference among laypersons for protecting the greater number of persons in case of moral dilemmas. Footnote 106 As we have seen, it was difficult for the German Ethics Commission to come up with a straightforward proposal, so the German legislature, maybe due to the controversy in the Ethics Commission, refrained from regulating moral dilemmas in detail. This is, furthermore, understandable considering German history, the Kantian tradition, and, last but not least, human dignity as enshrined in Article 1(1) of the German Basic Law. However, these conditions are not given, at least not explicitly or to the same extent, beyond Germany, either in the rest of Europe or internationally. In the following sections, therefore, this Article will investigate whether there might be good reasons for doubting that a so-called "utilitarian programming" of self-driving cars, that is, a programming that would protect the greatest number of persons, even at the expense of killing someone, or some smaller number of persons, in the case of moral dilemmas, could actually be a role model for European and international regulations.

G. Potential “Utilitarian Programming” of Moral Dilemmas Involving Self-Driving Cars in Europe and the US

For the sake of simplicity, the following discussion is based on a specific example of such moral dilemmas. For this purpose, a case is assumed that resembles the classical trolley case described by Foot: a car that can no longer be brought to a standstill. In the case of non-intervention, the car would steer straight into a group of five persons; only in the case of an intervention and redirection would the car avoid colliding with this group, but it would kill another person instead. The question is how to program self-driving cars for such a dilemma.

I. The Rejection of Utilitarian Programming by the Legal Literature

The position in the German legal literature is rather straightforward. According to Article 1(1) of the Basic Law of the Federal Republic of Germany, “[h]uman dignity shall be inviolable.” Most legal scholars interpret this as a strict prohibition to “offset” human lives. Footnote 107

As far as can be seen, Iris Eisenberger is the first voice in the Austrian legal literature to take a position on the potential programming of moral dilemmas caused by self-driving cars. Footnote 108 She, too, rejected a utilitarian option for action, taking into account fundamental rights and especially Article 2 ECHR, the right to life. According to her, in a case of conflict that cannot be avoided, it is inadmissible to inscribe in an algorithm the option of action according to which the smallest number of persons would be killed. Footnote 109 To her, the "offsetting" of human lives would violate human dignity in the context of the Austrian legal order as well. Footnote 110 This position meets with opposition in Austria, however. Footnote 111 Beyond Austria and Germany, it is especially empirical studies which suggest that a large number of laypersons would actually favor minimizing harm, even at the cost of killing someone innocent.

II. When the Legal Literature Faces Opposition from Empirical Studies

The silence of the German act concerning moral dilemmas faces strong opposition from empirical studies investigating the moral preferences of laypersons. Footnote 112 These studies show a strong preference for minimizing harm even to the extent that in the case of moral dilemmas, the largest number of persons possible should be saved at the cost of the smaller number of human lives. Footnote 113 Empirical studies, hence, suggest that the moral intuitions of laypersons at least feed a certain skepticism concerning the strict reading of human dignity by German legal scholars, the Ethics Commission, and, finally, the German act. This is relevant, among other things, because the programming of self-driving cars should also meet with the acceptance of most persons. Tracing these intuitions, this Article questions the position taken by the German act outlined above. This critique is fed by Article 2 ECHR, which, arguably, is the criterion for a European regulation. Article 2 ECHR, in contrast to Article 1(1) of the German Basic Law, at least does not prohibit a “utilitarian” programming of self-driving cars. Footnote 114

III. A Technological Caveat: Damage Avoidance Probability and Estimated Harm Instead of Targeted Killing

First of all, it is important to emphasize that the general discussion so far has been clouded by a conflation of different scenarios. Indeed, the starting point of many considerations is a classic scenario where the offsetting of human lives is actually more realistic than in the trolley case: A plane hijacked by terrorists and the question as to what means the state may use to save the lives of innocent persons. In this consideration, it is assumed that shooting down the hijacked plane—to save threatened persons, for example, in a crowded stadium where the plane is heading—necessarily entails the targeted killing of the innocent passengers on board. Footnote 115

In road traffic scenarios, however, the almost certain death of innocent persons is not a prerequisite for the discussion on regulating crashes with self-driving cars. In the vast majority of car accidents, we are not dealing with targeted killings, excluding terrorist activities, which might actually be prevented with well-designed self-driving cars (in this case they probably even deserve to be called autonomous). Clear predictions about the probability of a collision for certain trajectories, and about survival in case of a collision (estimated harm), are rather difficult to make. This is also relevant for the discussion of moral dilemmas, as programming might involve the killing of the smaller number of persons to save the larger number; ex ante, however, such an outcome is uncertain. Concerning crashes with self-driving cars in the case of a moral dilemma, it might thus be that, ex ante, we are not talking about purposefully killing one person in order to save two lives. Rather, we are "only" dealing with harm-avoidance probabilities, which, nevertheless, might include fatalities.

The technology for accurately predicting collisions that will affect specific persons, for determining the damage caused by such a collision, and for possible distinctions based on personal criteria such as age, profession, or origin is a distant dream, or rather nightmare. Footnote 116 The current technology of automatic object recognition is plagued by more banal mishaps. One example is the misidentification of a green apple bearing a sticker that reads "iPod," which an object-recognition system promptly classifies as an iPod manufactured by Apple. A legal evaluation of any accident-avoidance system must thus be based on the technological possibilities. Footnote 117 It seems that the old principle that "ought" implies "can" holds true for self-driving cars as well. For the regulation of self-driving cars, this means that those conflict scenarios should be discussed first which could actually occur technically on the roads in the near or medium term. For example, in a situation in which personal injury can no longer be avoided, an algorithm might well be able to detect the number of persons potentially threatened in two different scenarios. However, whether and with what probability those persons will suffer life-threatening injuries after the collision probably goes far beyond the technological achievements that can be hoped for in the near future. Crash regulation programming of self-driving cars, thus, must not be discussed predominantly against a false background. Fictitious terrorist scenarios in which innocent persons are deliberately killed are a misleading foil for collision-avoidance systems. Footnote 118 Consequently, hypothetically equating the probabilities of killing fails to recognize the difference between a terrorist scenario and a car accident. To put it bluntly, often it is not lives that are "offset," but probabilities of damage avoidance that are compared.
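The comparison of damage-avoidance probabilities can be made concrete with a minimal, purely illustrative sketch. All names, numbers, and the simple expected-harm formula below are assumptions introduced for illustration; they do not describe any actual accident-avoidance system:

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    label: str
    collision_probability: float  # ex ante estimate, between 0.0 and 1.0
    persons_at_risk: int          # number of persons on this trajectory
    injury_severity: float        # rough severity estimate, 0.0 (none) to 1.0 (fatal)

def expected_harm(t: Trajectory) -> float:
    # Ex ante, the system compares harm-avoidance probabilities rather
    # than certain deaths: every factor here is an uncertain estimate.
    return t.collision_probability * t.persons_at_risk * t.injury_severity

def least_harm(trajectories: list) -> Trajectory:
    # Pick the trajectory with the lowest expected harm.
    return min(trajectories, key=expected_harm)

swerve = Trajectory("swerve", 0.6, 1, 0.8)      # one person, lower collision chance
straight = Trajectory("straight", 0.9, 5, 0.8)  # five persons, higher collision chance
print(least_harm([swerve, straight]).label)      # prints "swerve"
```

The point of the sketch is that no term in the calculation represents a certain death: the quantity being minimized is an expectation over uncertain outcomes, which is precisely what distinguishes the road-traffic setting from the targeted-killing scenario of the hijacked plane.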

Having said that, however, it is also important that the law aimed at regulating self-driving cars is designed to speak to such cars. Their programming requires much more precise regulation than human conduct does. While human behavior can be addressed by general rules, self-driving cars require very specific programming. Footnote 119 In other words, to make a law talk to the code, we have to put numbers in the law. Footnote 120 The regulation on collision-avoidance systems in particular is not comprehensive and works with very general rules such as avoiding and reducing harm. In the end, this leaves a lot of freedom for car manufacturers in programming self-driving cars to deal with moral dilemmas. They should follow rules such as prioritizing humans over non-humans, disregarding personal characteristics, and not subordinating humans outside of self-driving cars to their passengers. But once these rules are followed, there remains only a generalized harm-reduction rule that can lead to many different outcomes.
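The residual freedom left by the act can be illustrated by treating its constraints as ordered filters over candidate maneuvers. This is a hedged sketch only: the candidate maneuvers, field names, and numbers are invented for illustration and reflect no real system:

```python
# Hypothetical candidate maneuvers; the field names and numbers are
# invented for illustration and do not describe any real system.
candidates = [
    {"name": "brake", "harms_humans": 2, "harms_property": 0},
    {"name": "swerve_left", "harms_humans": 1, "harms_property": 3},
    {"name": "swerve_right", "harms_humans": 1, "harms_property": 1},
]

def permissible(options):
    # Section 1e para. 2 No. 2 lit. b: the protection of human life has
    # the highest priority, so keep only the options endangering the
    # fewest humans.
    fewest = min(o["harms_humans"] for o in options)
    remaining = [o for o in options if o["harms_humans"] == fewest]
    # Lit. c bars any weighting by personal characteristics, and the act
    # prescribes no further tie-break: the choice among the remaining
    # options is left to the manufacturer's programming.
    return remaining

print([o["name"] for o in permissible(candidates)])  # prints ['swerve_left', 'swerve_right']
```

As the sketch shows, two maneuvers survive the statutory filters, and the law is silent on how to choose between them; that silence is exactly the regulatory gap discussed here.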

Following the example of the German Ethics Commission, a commission of experts appointed by the European Commission published a report on the "Ethics of Connected and Automated Vehicles" in September 2020. Footnote 121 With regard to the regulation of moral dilemmas, the guidelines put forward in this report, similarly to the comments made above on the probability of avoiding harm, point out that it could be difficult for self-driving cars to switch from a generally appropriate risk-minimization program to a dilemma situation. This commission therefore decided to refrain from providing a concrete solution to all possible dilemmas and emphasized the need to involve the public in solving such scenarios. Footnote 122 And yet, despite the technological caveat mentioned above, we will still face actual moral dilemmas. Footnote 123 Therefore, we must not shy away from the question as to whether the law should prescribe the minimization of harm even in the event that someone ends up being killed. Nevertheless, it is important to mind the correct context, the likelihood of the occurrence of various scenarios, and the technological possibilities, which will likely guide and limit the potential solutions.

IV. Article 2 ECHR, the “Offsetting” of Human Lives and a European-Wide Regulation of Moral Dilemmas Involving Self-Driving Cars

Insofar as—as previously claimed—the state must regulate moral dilemmas according to Article 2 ECHR, the question arises as to whether Article 2 ECHR also includes specifications for the design of the regulation. The scope of protection of Article 2 ECHR is the right to life. An interference with the scope of protection occurs through an intentional killing or a life-endangering threat. Examples include state-practiced euthanasia, a killing or disproportionate use of force in the course of official police action—for example, by neglecting a person under state supervision—or by forcing someone to engage in an activity dangerous to life. Footnote 124 These examples are obviously not directly applicable to the scenario discussed here—state vehicles excepted. However, the right to life can also be violated by omission. In this regard, too, state custody is usually a prerequisite, when, for example, the state must not let prisoners die of thirst. Footnote 125 This does not apply to our situation either. Nevertheless, it cannot be denied that a state regulation which prescribes a probability of avoiding harm in such a way that, in a moral dilemma, the group with the larger number of persons should be protected and the group with the smaller number of persons should be sacrificed, represents an encroachment on the scope of protection of Article 2 ECHR. Footnote 126 In principle, an interference with the right to life under Article 2 paragraph 2 ECHR is justified only to “(a) defend someone against unlawful violence; (b) lawfully arrest someone or prevent someone lawfully deprived of his liberty from escaping; (c) lawfully put down a riot or insurrection.” Footnote 127

In Germany, the right to life is linked to the guarantee of human dignity, and an "offsetting" of life is strictly rejected. Footnote 128 Basically, this position is strongly colored by Immanuel Kant. Footnote 129 It is questionable whether human dignity, which is also guaranteed in principle in Austrian and other European constitutions as well as in the European Convention on Human Rights, entails similarly strong specifications beyond Germany. Footnote 130 A well-known difference already lies in the fact that in Austria and several other jurisdictions, in contrast to Germany, the guarantee of human dignity is not explicitly anchored in the constitution. Despite manifold specific, concrete references, there is no "comprehensive justiciable right to a dignified life" in Austria. Footnote 131 Consequently, the situation that leads the prevailing opinion in German jurisprudence, and finally also the German act, to deny "utilitarian" solutions to moral dilemmas cannot be assumed in Austria. The same holds true for other European legal orders. Nor is human dignity explicitly enshrined in the ECHR. Footnote 132

The argumentation of the ECtHR in the Finogenov case, which includes a reference to the ruling of the German Federal Constitutional Court on the Air Security Act, is instructive for utilitarian programming. Footnote 133 The use of gas when rescuing hostages in the Dubrovka Theatre in Moscow, which was directed against the terrorists but also affected the hostages, did not constitute a violation of the right to life for the ECtHR, because the potentially lethal force was justified in view of the threat situation. The decisive factor here was the hostages' real chance of survival. Footnote 134 This is particularly relevant for the harm-avoidance probability discussed above, although it must be conceded that the measure which affected the hostages was also intended to protect them.

Moreover, it must be conceded that only a ruling by the ECtHR will probably provide clarity on how these new types of moral dilemmas, which are likely to occur in the near future, can be resolved in conformity with the Convention. After all, Article 2 ECHR was not adopted with an awareness of the new moral dilemmas that will arise as a result of self-driving cars. The potential justification of limitations to the right to life according to Article 2 paragraph 2 ECHR, such as the defense against unlawful violence, does not apply to moral dilemmas in connection with self-driving cars.

However, the ECtHR recognizes in principle, albeit in the context of exceptional circumstances, that according to Article 2 paragraph 2 ECHR, in the case of absolute necessity, the use of force resulting in death can be justified. Footnote 135 For the scenario discussed here, it makes a considerable difference whether the state bows to the perfidious logic of terrorism and agrees, in mostly hypothetical and extremely unlikely situations, to deliberately kill innocent passengers in order to ensure the survival of a larger number of other persons, or whether the state regulates road traffic, which, despite measures to increase road safety, will probably always involve dangerous situations day in and day out, in a way that minimizes risk. Distinguishing between the scenarios is important Footnote 136 because persons who use roads face a similar risk. All persons engaging in road traffic, unlike plane passengers and stadium visitors in the case of a hijacked plane being directed into a crowded stadium, could be assigned to a hazard community. Footnote 137 It may be instructive to think of all persons involved in road traffic as being behind a Rawlsian veil of ignorance Footnote 138 and then to present the options for action to these persons in such a way that they do not know whether they belong to the threatened group with fewer or more persons. Footnote 139 An ex ante preference for the larger group of persons, within the initially anonymous hazard community of all road users, could thus be justified in advance and is to be judged differently than an ex post decision in an individual case. Footnote 140 While from an ex ante perspective it is rather a statistical decision, an ex post decision makes it almost impossible to exclude personal stories from a justification of the decision.

Generally speaking, it can be stated in support of this argument that, taken to the extreme, the assertion that one life is worth the same as several would even lead to the extinction of humankind, because one life is not capable of reproducing itself. Basically, the law also knows how to differentiate. Footnote 141 Murder is punished differently than genocide. In international humanitarian law, too, human victims are considered as collateral damage in such a way that an attack, despite civilian victims, can be lawful as long as their number is in proportion to the military advantage. Footnote 142 Thus, Eric Hilgendorf also rightly suspects that "[t]he evaluation that one human life 'weighs' as much as several, indeed as infinitely many human lives, may only be sustained if such decisions remain textbook fictions." Footnote 143

Article 2 ECHR arguably neither prohibits nor prescribes a "utilitarian" programming for moral dilemmas involving self-driving cars, and the so-called "margin of appreciation" conceded by the ECtHR to the Member States might also play a role. Footnote 144 Yet a decision must be taken by the law, and a justification of this decision is necessary. Either way, an interference with Article 2 ECHR will occur. If a "utilitarian" programming were to be prohibited, the state would let the larger number of persons die even though they could have been saved. If a "utilitarian" programming were to be prescribed, the state would be responsible for the deaths of the smaller group of persons. If no programming were to be mandated, as in the German act, the decision would be left to private actors. Footnote 145 No-action programming, in other words, not redirecting the car in our scenario, would abandon the larger number of persons. Random programming would ultimately, albeit less often, still cost more lives in the aggregate than a "utilitarian" programming. Footnote 146 This would not meet the state's duty to protect according to Article 2 ECHR either; in other words, the abandonment of the larger number of persons would again be in need of justification.

The recently amended General Safety Regulation of the European Union aimed to place the EU in the avant-garde of allowing so-called level 4 self-driving cars on European roads. Footnote 147 This regulation has to stand the test of the arguments made here, too. Whether the German act, at least concerning the solution of moral dilemmas, is an ideal role model is questionable, however. Technically, the Commission Implementing Regulation (EU) 2022/1426 of 5 August 2022 lays down rules for the application of Regulation (EU) 2019/2144 of the European Parliament and of the Council as regards uniform procedures and technical specifications for the type-approval of the automated driving system (ADS) of fully automated vehicles. Footnote 148 Particularly interesting for our purpose is ANNEX II of this Regulation on performance requirements. In relation to the regulation of Dynamic Driving Tasks (DDT) under critical traffic scenarios (emergency operation), it reads as follows:

2.1 The ADS [Automated Driving System] shall be able to perform the DDT for all reasonably foreseeable critical traffic scenarios in the ODD [Operational Design Domain].

2.1.1. The ADS shall be able to detect the risk of collision with other road users, or a suddenly appearing obstacle (debris, lost load) and shall be able to automatically perform appropriate emergency operation (braking, evasive steering) to avoid reasonably foreseeable collisions and minimise risks to safety of the vehicle occupants and other road users. In the event of an unavoidable alternative risk to human life, the ADS shall not provide for any weighting on the basis of personal characteristics of humans. The protection of other human life outside the fully automated vehicle shall not be subordinated to the protection of human life inside the fully automated vehicle.

2.1.2. The vulnerability of road users involved should be taken into account by the avoidance/mitigation strategy.

These rules are in many ways similar to those of the German act presented before. For instance, in the realm of this regulation, too, it is clear that "weighting on the basis of personal characteristics of humans" is forbidden. However, in relation to the moral dilemma discussed here more extensively, the constellation of deciding between a larger and a smaller group of people, these rules remain as silent as the German act. Much hinges upon the interpretation of the passage prescribing that "reasonably foreseeable collisions" be avoided and that risks to the safety of the vehicle occupants and other road users be minimized. Depending on the interpretation, this may or may not include a situation in which minimal risk stands for directing the car toward a smaller group of persons in order to save a larger group. Footnote 149

V. The Regulation of Moral Dilemmas Involving Self-Driving Cars in the US

Self-driving cars, albeit so far mostly in test mode, are no novelty on US roads, and accidents happen accordingly. As of July 1, 2022, the Department of Motor Vehicles (DMV) of the US state of California, for instance, had received 486 Autonomous Vehicle Collision Reports. Footnote 150 The regulation of self-driving cars in the US is divided between the federal level and the individual states. Footnote 151 How individual states regulate self-driving cars varies significantly; states at the forefront of technological development likely also lead in regulating the technology. Nevertheless, moral dilemma situations, which will become a practical problem as self-driving cars become more frequent, are simply not regulated in the US at either the federal or the state level.

Take the regulation in Arizona, for example. In March 2021, the Legislature enacted HB 2813, Footnote 152 establishing standards for driverless vehicles in Arizona. The law does not distinguish between test mode and the normal operation of self-driving cars; commercial services such as passenger transportation, freight transportation, and delivery operations can thus be offered. To operate a self-driving car without a human driver, however, the operator must first submit a law enforcement interaction plan under Section 28-9602. The car must follow federal laws and standards, comply with all traffic and vehicle safety laws and, most interestingly for our purposes, according to Section 28-9602 C. 1. (B), in case of a failure of the automated driving system, "achieve a minimal risk condition." Footnote 153 This condition is defined as "a condition to which a human driver or an automated driving system may bring a vehicle in order to reduce the risk of a crash when a given trip cannot or should not be completed." Footnote 154

While these requirements are very understandable, moral dilemmas are simply not addressed. This is a serious gap, which might undermine trust in self-driving cars once they become more frequent on public roads. A regulation is therefore necessary. Car manufacturers will not make the first move. So far, they are interested in safety questions and do address extreme situations to some extent, yet they do not include moral dilemmas in their considerations. Footnote 155 Who would blame them? The nature of a dilemma is that either way the outcome involves trouble. Such negative consequences are hard to accept and are unlikely to help sell cars. Hence, if regulation does not force producers to deal with the issue, they will likely avoid facing it. So far in the US, states have taken the primary role in regulating self-driving cars. This might change, however. If a federal law were enacted in the future, Footnote 156 or if further state laws are adopted, it would be wise to include a provision on moral dilemmas. Important decisions about life and death would then be made not by car producers but by the legislator. The provision in the German act, even though very likely not as a detailed blueprint, might serve as a role model insofar as this act is the first worldwide to actually contain a regulation of moral dilemmas. Its failure to address the dilemma discussed above, however, significantly reduces the German act's role-model potential.

H. Conclusion

Decisions about life and death are always difficult, bringing us to the edge of what the law can legitimately provide, and possibly beyond. Footnote 157 In particular, moral dilemmas as discussed here represent a problem at the intersection of ethics, technology, and the law, Footnote 158 which makes regulation even more difficult. However, to lump different scenarios of moral dilemmas together under one constitutional umbrella is not a successful strategy. It is precisely the sensitive subject matter and the threat to such fundamental legal rights as well as to human life that demand an ethical and legal assessment appropriate to the situation in question. Footnote 159 Frightening terrorist scenarios, pandemic-related triage decisions, questions of organ transplantation or, precisely, the regulation of moral dilemmas arising through self-driving cars all touch upon fundamental aspects of human life but nevertheless exhibit important differences.

Self-driving cars, insofar as they increase road safety, should be allowed on European roads because of the state's duty to protect human life enshrined in Article 2 ECHR. Decisions about moral dilemmas in connection with self-driving cars should not, according to the argument in this Article, be left to car manufacturers. In principle, the 2021 German Act on Autonomous Driving is a solid role model for further European-wide regulation of self-driving cars, including collision-avoidance systems. However, since crashes involving self-driving cars mostly present themselves in advance not as decisions about life and death but as probabilities of avoiding harm, the state's duty to protect life has the consequence that, in situations in which comparable harm occurs under all available options, the protection of a larger group of persons might have to be preferred over the protection of a smaller group as a means of minimizing risks. While this might be debated, what seems clear is that for true moral dilemmas the law needs to provide a regulation. This contrasts with the partial silence of the German act concerning moral dilemmas. To put it bluntly, lives should not be "offset" against each other; rather, the probability of avoiding damage should be increased. The state's duty to protect life, its positive obligations stemming from Article 2 ECHR, might well speak in favor of preferring the protection of the larger group even where the probability of killing the persons in the smaller group borders on certainty. These considerations are relevant for future adaptations of regulations under European and US law, which will remain a great challenge owing to the sensitive issues involved and the different positions taken in these legal orders.


I thank Norbert Paulo, the participants of the Panel "Regulating Digital Technologies" at the Biennial Conference of the Standing Group on Regulatory Governance of the European Consortium of Political Research at the University of Antwerp in July 2023 and especially the organizers as well as the discussant Anne Meuwese and an anonymous reviewer for helpful comments, criticism, and discussions on earlier versions. Furthermore, I would like to thank Helen Heaney for stylistic advice, Alexander Berghaus and Jan-Hendrik Grünzel for research assistance, and the student editors of the GLJ for their valuable and detailed work in preparing the manuscript for publication.

Competing Interests

The author declares none.

Funding Statement

This research is part of the project EMERGENCY-VRD and funded by, the Digitalization and Technology Research Center of the Bundeswehr, which we gratefully acknowledge. is funded by the European Union – NextGenerationEU.


1 Some arguments made here have already been presented in German: Lando Kirchmair, Artikel 2 EMRK und die Notwendigkeit moralische Notfälle selbstfahrender Autos (europaweit) zu regulieren, 30 Journal für Rechtspolitik 50–62 (2022).

2 Veronika R. Meyer & Marcel Halbeisen, Warum gibt es in der Natur keine Räder? Nur scheinbar ein Paradox, 36 Biologie in unserer Zeit, 120, 120–23 (2006). Even if the place of invention has not yet been clarified conclusively, the area of the earliest advanced civilizations in Mesopotamia seems to be a likely candidate. Cf. Mirko Novák, Buchbesprechung von Mamoun Fansa, Stefan Baumeister (Herausgeber): Rad Und Wagen—Der Ursprung einer Innovation Wagen im Vorderen Orient und Europa, 35 Die Welt des Orients 280, 283 (2005).

3 Cf. Mamoun Fansa & Stefan Burmeister, Rad und Wagen: Der Ursprung einer Innovation Wagen im Vorderen Orient und Europa (2004) (providing a comprehensive explanation). The groundbreaking nature of this invention also becomes clear—according to Geo Chronik magazine, in the reflection of physicist Ernst Mach, who said the following in 1883: “Take away the wheel—and little will be left.” See Die 100 genialsten Erfindungen, in Geo Chronik (Sept. 17, 2018). See also Florian Klimscha, “Wheeled Vehicles” in Digital Atlas of Innovations, (stating that “first proofs of the existence of the wheel are located in the period between 4000 and 3600 BCE.”), I am grateful to Shiyanthi Thavapalan for pointing me to the Atlas.

4 See David W Anthony, The Horse, the Wheel, and Language. How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World (2007) (highlighting at 397 that two-wheeled chariots with standing drivers “were the first wheeled vehicles designed for speed, an innovation that changed land transport forever. The spoked wheel was the central element that made speed possible.” While the exact date is not important to our purpose here, it is nevertheless interesting that Anthony makes the case that [at 402] “the earliest chariots probably appeared in the steppes before 2000 BCE” [and consequently did not originate from prehistoric Near Eastern wagons, see at 399 et seq]. In general, he provides an account of how the original Indo-European speakers, and their domestication of horses and use of the wheel spread language and transformed civilization).

5 For a fascinating historical insight into the origins of the automobile, with combustion engines and electric motors competing against each other as early as the late 19th century, see Kurt Möser, Frühe Elektromobilität auf der Straße: Kultur und Technik, in Zeiten der Elektromobilität: Beiträge zur Geschichte des elektrischen Automobils (Theo Horstmann & Peter Döring eds., 2018). For more detail, see Gijs Mom, The Electric Vehicle: Technology & Expectations in the Automobile Age (2004).

6 For more on the Benz Patent Motor Car, which has been a German World Document Heritage since 2011, see particularly Werner Oswald, Mercedes -Benz -Personenwagen (2001); see also Olaf Fersen, Ein Jahrhundert Automobiltechnik: Personenwagen 10 (1986). Cf. Marco Matteucci, Richard von Frankenberg & Hans -Otto Neubauer, Geschichte Des Automobils (1995); Möser, Frühe Elektromobilität auf der Straße, supra note 5, at 22 (referring to the year 1886 as being the “big bang” of mobility). On the (pneumatic) tire, see Horst W. Stumpf, Geschichte Des Luftreifens, in Handbuch Der Reifentechnik (Horst W. Stumpf ed., 1997).

7 For the pointed moniker “Auto-Auto,” see for example, Freiheit ohne Lenkrad: Das selbststeuernde Fahrzeug verändert unser Leben, Der Spiegel (Feb. 26, 2016),

8 See Tracy Hresko Pearl, Hands Off the Wheel: The Role of Law in the Coming Extinction of Human-Driven Vehicles, 33 Harv . J. L. & Tech. 427, 428–29 (2020) (pointing to the fact that mobs were throwing stones at the newly introduced automobiles in 1904 as well as in 2018).

9 Miloš Vec, Kurze Geschichte Des Technikrechts, in Handbuch Des Technikrechts 5 (Martin Schulte & Rainer Schröder eds., 2011). For particular information regarding automobile law with further references, see id. at 33 n.228.

10 See Jessica S. Brodsky, Autonomous Vehicle Regulation: How an Uncertain Legal Landscape May Hit the Brakes on Self-Driving Cars, 31 Berkeley Tech. L. J. 851–78 (2016); see also Matthias Breitinger, Kabinett erlaubt teilautomatisiertes Fahren, ZeitOnline (Apr. 13, 2016, 3:29 PM), (quoting Former German Transport Minister Alexander Dobrindt, calling automated and connected driving “the biggest mobility revolution since the invention of the car.. .”). Cf. P. S. Sriraj, Autonomous Vehicles & Mobility Impacts on Transit & Freight: Factors Affecting Adoption, Challenges, and Opportunities, in Are We There Yet? The Myths & Realities of Autonomous Vehicles 3 (Michael A. Pagano, Kazuya Kawamura & Taylor Long eds., 2020); Stan Caldwell & Chris T. Hendrickson, Are We There Yet, & Where Is It We Need to Go? Myths & Realities of Connected & Automated Vehicles, in Are We There Yet? The Myths & Realities of Autonomous Vehicles 47 (Michael A. Pagano, Kazuya Kawamura & Taylor Long eds., 2020).

11 Eric Hilgendorf, Automatisiertes Fahren Als Herausforderung Für Ethik Und Rechtswissenschaft, in Handbuch Maschinenethik 357 (Oliver Bendel ed., 2019). Cf. Iris Eisenberger, Konrad Lachmayer & Georg Eisenberger, Autonomes Fahren und Recht (Iris Eisenberger et al. eds., 2017) (writing on Austrian law); Bernd H. Oppermann & Jutta Stender -Vorwachs et al., Autonomes Fahren: Rechtsprobleme, Rechtsfolgen, Technische Grundlagen (Bernd H. Oppermann & Jutta Stender-Vorwachs eds., 2020) (providing a comprehensive analysis of different legal disciplines primarily concerning Germany).

12 The first jurisdiction to generally allow self-driving cars was Nevada, USA, by passing the Assembly Bill No. 511–Committee on Transportation, approved by the governor on 16 June 2011.

13 Straßenverkehrsgesetz [StVG] [Autonomous Driving Act], March 5, 2003, BGBl I at 310, 919, revised March 2, 2023, BGBl 2023 I No. 56 (Ger.). Translations (here and below, unless indicated otherwise) are by the author.

14 Ethics Commʼn, Report on Automated & Connected Driving, (June 2017), For comments on the report by one of its members, see Christoph Luetge, The German Ethics Code for Automated & Connected Driving, 30 Phil. & Tech. 547, 547–58 (2017).

15 The regulation of self-driving vehicles is very dynamic. For instance, the French legislator has come up with an adaption of the Mobility Orientation Act as well as empowerments for the government which ultimately allow “delegated-driving vehicles” on public roads. Several other countries, such as Australia, China, Japan, and the United Kingdom, have developed plans to introduce self-driving cars and are adapting their laws accordingly. For an overview on the various regulatory activities in several countries, see Michael Rodi, Driving Without Driver: Autonomous Driving as a Legal Challenge (Uwe Kischel & Michael Rodi eds., 2023).

16 For an overview on the draft, see Eric Hilgendorf, Straßenverkehrsrecht Der Zukunft, 76 JuristenZeitung 444, 444–54 (2021).

17 Gesetz zum autonomen Fahren [The Autonomous Driving Act] July 12, 2021, BGBl. 2021 I, No. 48 at 3089, 3108 (Ger.) [hereafter the 2021 Autonomous Driving Act].

18 Verordnung zur Genehmigung und zum Betrieb von Kraftfahrzeugen mit autonomer Fahrfunktion in festgelegten Betriebsbereichen (Autonome-Fahrzeuge-Genehmigungs-und-Betriebs-Verordnung - AFGBV) [Decree on the approval and operation of motor vehicles with autonomous driving functions in specified operating ranges] June 24, 2022, BGBl. I at 986, revised July 20, 2023, BGBl. 2023 I No. 199 [hereafter 2022 Autonomous Driving Decree].

19 See infra Section C.

20 See infra Section D.

21 See infra Section E.

22 Philippa Foot, The Problem of Abortion & the Doctrine of the Double Effect, Oxford Rev. 5, 5–15 (1967).

23 Judith J. Thomson, The Trolley Problem, 94 Yale L.J. 1395, 1395–1415 (1985).

24 See infra Section F.

25 See infra Section G.

26 On-Road Automated Driving (ORAD) Committee, Revision of Recommended Practice J3016: Taxonomy & Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, Society of Automated Engineers (SAE) International 1, 1–35 (June 15, 2018) For an explicit inclusion of this typification, see ERTRAC Working Group, Automated Driving Roadmap, 7 ERTRAC Roadmap 1, 5 (2017)

27 One of the first internationally valid system approvals according to the UN-R157 norm for a level 3 vehicle was awarded to Mercedes, whose 2022 S-class is permitted on German roads. See German Federal Motor Vehicle and Transport Authority Press Release 49/2021, KBA erteilt erste Genehmigung zum automatisierten Fahren (Dec. 9, 2021). Subsequently, the Nevada Department of Motor Vehicles (DMV) approved the Mercedes Drive Pilot too in January 2023. See David Shepardson, Mercedes-Benz to Deploy Advanced Automated Driving System in Nevada, Reuters (Jan. 6, 2023). California followed suit in June 2023. See State of California Department of Motor Vehicles Press Release, California DMV Approves Mercedes-Benz Automated Driving System For Certain Highways and Conditions (June 8, 2023). The Honda Legend (SAE level 3 too) has already been offered for lease in a limited edition in January 2021 in Japan. See Colin Beresford, Honda Legend Sedan with Level 3 Autonomy Available for Lease in Japan, Car and Driver (March 4, 2021). Cf. on Japan Takeyoshi Imai, Legal Regulation of Autonomous Driving Technology: Current Conditions and Issues in Japan, 43 IATSS Research 263–67 (2019). On 22 June 2022, the amendment ECE/TRANS/WP.29/2022/59/Rev.1 to UN Regulation No. 157 was adopted. It entered into force in January 2023 for those state parties which decide to apply it. With this amendment the World Forum for Harmonization of Vehicle Regulations of the UN extended the maximum speed for Automated Driving Systems (ADS) for passenger cars and light duty vehicles up to 130 km/h on motorways and made automated lane changes possible.

28 See also Iris Eisenberger et al., Automatisiertes Fahren: Komplexe Regulatorische Herausforderungen, 10 ZVR 383, 384 (2016). For emphasis on these distinctions used also for legal classifications, see Bryant W. Smith, A Legal Perspective on Three Misconceptions in Vehicle Automation, in Road Vehicle Automation (Gereon Meyer & Sven Beiker eds., 2014). For the distinction for Germany in particular, see also Hilgendorf, supra note 11, at 356 (referring to the proposal by the Federal Highway Research Institute (BASt), which distinguishes between the following driving levels: “Driver only, assisted, partially automated, highly automated, and fully automated”).

29 See ORAD Committee, supra note 26, at 28, 29 (explicitly discouraging the use of the term “autonomous” in parts 7.1.1 and 7.2). Cf. Anna Haselbacher, Rechts Überholt? Zum Aktuellen Stand Des Rechtsrahmens “Automatisiertes Fahren,” 46 jusIT 127, 127–33 (2020).

30 See, e.g., John Christman, Autonomy in Moral & Political Philosophy, Stan. Encyclopedia Phil. (June 29, 2020),

31 Cf. Christian J. Gruber & Iris Eisenberger, Wenn Fahrzeuge selbst lernen: Verkehrstechnische und rechtliche Herausforderungen durch Deep Learning?, in Autonomes Fahren und Recht (Iris Eisenberger et al. eds., 2017).

32 Peter D. Norton, Fighting Traffic: The Dawn of the Motor Age in the American City 25 (2011). For comprehensive information on pedestrian protection, see for example, Matthias Kühn et al., Fußgängerschutz: Unfallgeschehen, Fahrzeuggestaltung, Testverfahren 57–58 (2007).

33 In 2019, approximately 22,800 people and in 2021 approximately 22,000 people were killed in road accidents in the EU. See Eurostat, Road safety statistics in the EU (June 20, 2023). In the USA, approximately 43,000 people were killed in road accidents in 2021. See UNECE Transport Statistics Database. In 2018, 1.35 million people worldwide were killed in road accidents, see World Health Organization (WHO), Global Status Report on Road Safety (June 17, 2018). Cf. Sirwan K. Ahmed et al., Road Traffic Accidental Injuries and Deaths: A Neglected Global Health Issue, 6 Health Science Reports e1240 (2023) (holding that "[r]oad traffic injuries are a substantial yet underserved public health issue around the world"). Regarding the causes of accidents, they state that "[t]he main contributors to traffic accidents include poor road conditions, reckless passing, drowsy driving, sleepwalking, intoxication, illness, use of mobile phones, eating and drinking in the car, inattention in the event of a street accident, and the inability of other drivers to react quickly enough to the situation."

34 See Rainer Schröder, Verfassungsrechtliche Rahmenbedingungen Des Technikrechts, in Handbuch Des Technikrechts (Martin Schulte & Rainer Schröder eds., 2011).

35 Cf. Jean - François Bonnefon, The Car That Knew Too Much: Can a Machine Be Moral? 93 (2021).

36 To the extent that self-driving cars will greatly increase road safety, yet simultaneously achieve an inclusionary effect—such that many more individuals can now use these vehicles—a greatly increased volume of traffic can be expected. See Austin Brown, Demanding a Better Transportation Future Through Automation, in Are We There Yet? The Myths & Realities of Autonomous Vehicles 30 (Michael A. Pagano et al. eds., 2020). Potential developments that make self-driving cars more ecological are not necessarily linked to the control technology becoming independent. Increased mobility, in the sense of increased traffic volume, should thus also be legally regulated to the extent that this new technology should also be sustainable. See also Lando Kirchmair & Sebastian Krempelmeier, Mit Auto-Autos gegen die Klimakrise?, Verfassungsblog (Dec. 9, 2021, 10:08 AM),

37 See William Schabas, The European Convention on Human Rights: A Commentary 126 (2015) (“The positive obligations result from the requirement that the right to life be ‘protected by law.’”); see also Janneke Gerards, Right to Life, in Theory & Practice of the European Convention on Human Rights 367 (Pieter van Dijk et al. eds., 2018). Cf. Christoph Grabenwarter & Katharina Pabel, Europäische Menschenrechtskonvention: Ein Studienbuch § 19 para. 5, § 20 para. 2 (2021); Walter Berka, Christina Binder, & Benjamin Kneihs et al., Die Grundrechte: Grund - Und Menschenrechte in Österreich 285 (2019).

38 Berka et al., supra note 37, at 287.

39 See, e.g., Öneryildiz v. Turkey (GC), App. No. 48939/99, para. 71 (Nov. 30, 2004), (regarding various areas under the jurisdiction of the member states); Asiye Genç v. Turkey, App. No. 24109/07, para. 71 (Jan. 27, 2015), (relating to criminal and private law norms); see also Ciechońska v. Poland, App. No. 19776/04, para. 66 (June 14, 2011),; Pretty v. United Kingdom, App. No. 2346/02, para. 39 (Apr. 29, 2002), (“The consistent emphasis in all the cases before the Court has been the obligation of the State to protect life.”).

40 Berka et al., supra note 37, at 285; Gerards, supra note 37, at 358, 371 (pointing to, for instance, Öneryildiz, App. No. 48939/99; Ali Murat Cevrioğlu v. Turkey, App. No. 69546/12 (Oct. 4, 2016),, concerning dangerous activities; and Budayeva ea v. Russia, App. No. 15339/02 (Mar. 20, 2008),, regarding natural disasters). See also Isabel Schübel-Pfister, Art. 2 Recht auf Leben, in Konvention zum Schutz der Menschenrechte und Grundfreiheiten: Kommentar, para. 39a (Ulrich Karpenstein & Franz C. Mayer eds., 2022).

41 Cf. generally Lando Kirchmair & Daniel-Erasmus Khan, Gibt Es Ein Recht Auf Null-Risiko? Die Risikogesellschaft Vor Dem Bundesverfassungsgericht, in Das Risiko: Gedanken Übers Und Ins Ungewisse (Helga Pelizäus & Ludwig Nieder eds., 2019).

42 See Grabenwarter & Pabel, Europäische Menschenrechtskonvention , supra note 37, §§ 20, 23 n.111 (referencing Nicolae Virgiliu Tănase v. Romania (GC), App. No. 41720/13, para. 135 (June 25, 2019),

43 Berka et al., supra note 37, at 286.

44 Gerards, supra note 37, at 372.

45 See, e.g., Pretty, App. No. 2346/02 at para. 39 (“The consistent emphasis in all the cases before the Court has been the obligation of the State to protect life.”). For the literature, see Iris Eisenberger, Das Trolley-Problem im Spannungsfeld autonomer Fahrzeuge: Lösungsstrategien grundrechtlich betrachtet, in Autonomes Fahren und Recht 101 (Iris Eisenberger et al. eds., 2017). See also Lando Kirchmair, “Autonomous Vehicles” in Encyclopedia of the Philosophy of Law and Social Philosophy (Mortimer Sellers & Stephan Kirste, eds., 2023 forthcoming).

46 Ethics Commʼn, supra note 14, at 11.

47 This is probably true despite the fact that we still do not know what the "autonomous transportation future" will look like. On competing visions as to how autonomous futures might look, see Jake Goldenfein, Deirdre K. Mulligan, Helen Nissenbaum & Wendy Ju, Through the Handoff Lens: Competing Visions of Autonomous Futures, 35 Berkeley Tech. L.J. 835, 835–910 (2020).

48 For an overview on moral dilemmas, see for example, Walter Sinnott-Armstrong, Moral Dilemmas, Encyclopedia Phil., (last visited Sept. 18, 2023); Terrance McConnell, Moral Dilemmas, Stan. Encyclopedia Phil. (July 25, 2022),

49 See, e.g., Heather M. Roff, The Folly of Trolleys: Ethical Challenges and Autonomous Vehicles, Brookings (Dec. 17, 2018). However, see Norbert Paulo, The Trolley Problem in the Ethics of Autonomous Vehicles, 73 Philosophical Quarterly 1046, 1046–66 (2023) (making the case that the trolley problem can be of relevance in the ethics of self-driving cars).

50 See also Noah J. Goodall, Machine Ethics & Automated Vehicles, in Road Vehicle Automation 95–96 (Gereon Meyer & Sven Beiker eds., 2014); Jochen Feldle, Notstandsalgorithmen 22–23 (2018).

51 For this argument, see infra note 93.

52 Cf. Pretty, App. No. 2346/02 at para. 37. See also Eisenberger, supra note 45, at 102. For a discussion on potential individual ethics settings, see Lando Kirchmair, Autonomous Vehicles: Crashes, in Encyclopedia of the Philosophy of Law and Social Philosophy (Mortimer Sellers & Stephan Kirste, eds., 2023 forthcoming).

53 See, e.g., Matthias Breitinger, Ein Mercedes-Fahrerleben ist nicht mehr wert als andere, ZeitOnline (Oct. 17, 2016, 5:14 PM),; see also Sven Nyholm, The Ethics of Crashes with Self-Driving Cars: A Roadmap, 13 Phil. Compass 1, 2–5 (2018). Cf. Bonnefon, supra note 35, at 63 (giving context to this supposedly outrageous statement).

54 See, e.g., Stijn Bruers & Johan Braeckman, A Review & Systematization of the Trolley Problem, 42 Philosophia 251, 251–69 (2014); David Edmonds, Would You Kill the Fat Man? The Trolley Problem & What Your Answer Tells Us About Right & Wrong (2014).

55 Josef Kohler, Das Notrecht, 8 Archiv für Rechts und Sozialphilosophie 411, 411–49 (1915). On this and the following subsection, see already Kirchmair, supra note 52.

56 Kohler, supra note 55, at 431. Cf. Feldle, supra note 50, at 26.

57 Hans Welzel, Zum Notstandsproblem, 63 Zeitschrift für die Gesamte Strafrechtswissenschaft 47, 47–56 (1951).

58 Ethics Commʼn , supra note 14, at 17 (citing Welzel, supra note 57, at 51).

59 On the latter, see Foot, supra note 22.

60 For an overview of the doctrinal German discussion, see Armin Engländer, Das Selbstfahrende Kraftfahrzeug und die Bewältigung dilemmatischer Situationen, 9 Zeitschriften -Informations -Service ( ZIS ) 608, 608–18 (2016). Cf. Welzel, supra note 57 (expressing the opinion that the railroad official had to redirect the car for ethical reasons).

61 See Foot, supra note 22; see also Philippa Foot, The Problem of Abortion & the Doctrine of the Double Effect, in Virtues & Vices & Other Essays in Moral Phil. (Philippa Foot ed., 2002) (1967).

62 Foot, supra note 61, at 29 (“My conclusion is that the distinction between direct and oblique intention plays only a quite subsidiary role in determining what we say in these cases, while the distinction between avoiding injury and bringing aid is very important indeed.”).

63 Foot, supra note 61, at 23.

64 Judith J. Thomson, Killing, Letting Die, and the Trolley Problem, 59 Monist 204, 206 (1976).

65 Id. at 216.

66 This scenario was presented in an article appearing some ten years after Thomson’s first response to Philippa Foot. Thomson, supra note 23, at 1397.

67 Note that the introduction of the bystander—instead of the tram driver used by Philippa Foot—was intended to show that the bystander would not be responsible for killing the five persons, as he would not be driving the tram in contrast to the driver, but most persons nevertheless considered changing the points morally acceptable.

68 Thomson, supra note 23, at 1398.

69 Id. at 1401.

70 Id. at 1406.

71 Id. at 1411.

72 See Foot, supra note 61, at 30 (“I have only tried to show that even if we reject the doctrine of double effect, we are not forced to the conclusion that the size of the evil must always be our guide.”). Thomson’s words are similarly instructive, see Thomson, supra note 64, at 217 (“[T]he thesis that killing is worse than letting die cannot be used in any simple, mechanical way in order to yield conclusions about abortion, euthanasia, and the distribution of scarce medical resources.”); see also Thomson, supra note 23, at 1414 (“[I]t is not the case that we are required to kill one in order that another person shall not kill five, or even that it is everywhere permissible for us to do this.”).

73 For an overview, see, e.g., Nyholm, supra note 53.

74 Noah J. Goodall, Away from Trolley Problems & Toward Risk Management, 30 Applied A.I. 810, 810–21 (2016); Noah J. Goodall, From Trolleys to Risk: Models for Ethical Autonomous Driving, 107 Amer. J. Pub. Health 496 (2017); Sven Nyholm & Jilles Smids, The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?, 19 Ethical Theory & Moral Prac. 1275, 1275–89 (2016); Johannes Himmelreich, Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations, 21 Ethical Theory & Moral Prac. 669, 669–84 (2018); J. C. Gerdes et al., Designing Automated Vehicles Around Human Values, in Road Vehicle Automation 6, 44 (Gereon Meyer & Sven Beiker eds., 2019); Nicholas G. Evans, Ethical Algorithms in Autonomous Vehicles: Reflections on a Workshop, in Road Vehicle Automation 7, 162 (Gereon Meyer & Sven Beiker eds., 2020). However, consider Paulo, supra note 49.

75 Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, & Iyad Rahwan, The Moral Machine Experiment, 563 Nature 59, 59–64 (2018).

76 See, e.g., Leon R. Sütfeld et al., Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure, 11 Frontiers in Behavioral Neuroscience 1, 1–13 (2017); Awad et al., supra note 75.

77 This finding also holds true despite the important criticism of the MME expressed by Yochanan E. Bigman & Kurt Gray, Life & Death Decisions of Autonomous Vehicles, 579 Nature E1, E1–E2 (2020).

78 Cf. Norbert Paulo, Leonie Möck & Lando Kirchmair, The Use and Abuse of Moral Preferences in the Ethics of Self-Driving Cars, in Experiments in Moral and Political Philosophy (Fernando Aguiar, Antonio Gaitán & Hugo Viciana eds., 2024 forthcoming).

79 See, e.g., John Harris, The Immoral Machine, 29 Cambridge Q. Healthcare Ethics 71, 71–79 (2020).

80 See, e.g., Dietmar Hübner & Lucie White, Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation, 21 Ethical Theory & Moral Pract. 685, 685–98 (2018).

81 For a nuanced and sophisticated argument as to why and how the debate on the trolley problem is of normative relevance for the regulation of self-driving cars, see Paulo, supra note 49.

82 Cf. Ethics Commʼn, supra note 14, at 8–9 (detailing the exact composition of the Commission).

83 Ethics Commʼn, supra note 14, at 10.

84 Id. at 7.

85 For an overview with commentaries of one of its members, see Luetge, supra note 14.

86 Ethics Commʼn , supra note 14, at 10.

87 Id. at 10.

88 Id. at 11.

89 However, it might not be impossible in relation to critics of speciesism.

90 Ethics Commʼn, supra note 14, at 11 (noting that this statement is in line with the general positions of Foot, supra note 61, and Thomson, supra note 69).

91 Ethics Commʼn, supra note 14, at 11 (noting that this relates to the discussion by Kohler, supra note 55, and Welzel, supra note 57).

92 See Ethics Commʼn , supra note 14, at 11 (noting that this contradicts the findings of Awad et al., supra note 75, who, contrary to Rule 9, claim to have found significant preferences in favor of various personal characteristics over others); see also Bigman & Gray, supra note 77 (criticizing such empirical findings for methodological reasons).

93 Ethics Commʼn, supra note 14, at 11.

94 Id. at 18.

95 Id.

96 Judgment of the First Senate, BVerfGE 115, 118, 1 BvR 357/05, Feb. 15, 2006 (Ger.), [hereafter Amendment to the Aviation Security Act].

97 Ethics Commʼn, supra note 14, at 18.

98 Ethics Commʼn, supra note 14, at 18 (“[D]amage to property to take precedence over personal injury, personal injury to take precedence over death, lowest possible number of persons injured or killed.”).

99 Id.

100 Id.

101 Id.

102 Id. (“[T]he Commission has not yet been able to bring its discussions to a satisfactory end, nor has it been able to reach a consensus in every respect. It thus suggests that in-depth studies be conducted.”).

103 Autonomous Driving Act, at Section 1e para. 2 No. 2 (Ger.).

104 See Rule 9, sentence 1.

105 See 2022 Autonomous Driving Decree, Annex I, 1.1. (“Kann eine Kollision zur Abwendung einer Gefährdung des Lebens der Insassen des Kraftfahrzeugs mit autonomer Fahrfunktion nur durch eine Gefährdung des Lebens anderer Teilnehmender des umgebenden Verkehrs oder unbeteiligter Dritter vermieden werden (unvermeidbare alternative Gefährdung von Menschenleben), darf der Schutz der anderen Teilnehmenden des umgebenden Verkehrs und der unbeteiligten Dritten nicht nachrangig gegenüber dem Schutz der Insassen des autonom fahrenden Kraftfahrzeugs erfolgen.”; in English: “If, in order to avert a danger to the life of the occupants of a motor vehicle with an autonomous driving function, a collision can be avoided only by endangering the life of other participants in the surrounding traffic or of uninvolved third parties (unavoidable alternative endangerment of human life), the protection of those other participants in the surrounding traffic and of the uninvolved third parties must not be subordinated to the protection of the occupants of the autonomously driving motor vehicle.”) On the “Mercedes rule,” see supra note 53.

106 See Bigman & Gray, supra note 77; see also Awad et al., supra note 75.

107 See, e.g., Jutta Stender-Vorwachs & Hans Steege, Öffentliches Recht: Grundrechtliche Implikationen Autonomen Fahrens, in Autonomes Fahren: Rechtsprobleme, Rechtsfolgen, Technische Grundlagen (Bernd H. Oppermann & Jutta Stender-Vorwachs eds., 2020). But see, critically in relation to the German legal order, Eric Hilgendorf, Recht und autonome Maschinen: Ein Problemaufriß, in Das Recht vor den Herausforderungen der modernen Technik 11, 26 (Eric Hilgendorf & Sven Hötitzsch eds., 2015). Although he, too, classifies the killing of innocent persons as unlawful, he argues for a “gradation in injustice” and thus for applying the basic idea of harm minimization here: “The killing of an individual cannot be justified by the saving of many, but we are morally as well as legally obliged to keep the number of fatalities as low as possible.”

108 See Eisenberger, supra note 45.

109 Eisenberger, supra note 45, at 105. Cf. Benjamin Kneihs, Art 2 EMRK, in Rill-Schäffer-Kommentar Bundesverfassungsrecht 8 (Benjamin Kneihs & Georg Lienbacher eds., 2001) (stating, concerning self-defense, that Art 2 (2) ECHR would legitimize killing only if this were the consequence of an absolutely necessary use of force and not in the context of targeted killing). Lethal force would come into consideration only against the aggressor themselves according to a particularly strict proportionality standard which would apply in this case.

110 Eisenberger, supra note 45, at 106.

111 In relation to Austria—although in the context of the hypothetical discussion of a hijacked plane—see Christoph Bezemek, Unschuldige Opfer Staatlichen Handelns, 15 Journal für Rechtspolitik 121, 125 (2007); see also Harald Eberhard, Recht auf Leben, in Handbuch Menschenrechte: Allgemeine Grundlagen—Grundrechte in Österreich—Entwicklungen—Rechtsschutz 80, 88 (Gregor Heißl ed., 2009) (regarding the constellation of shooting down a hijacked passenger aircraft as emergency aid and thus amenable to the proportionality test). But see Kristoffel Grechenig & Konrad Lachmayer, Zur Abwägung von Menschenleben: Gedanken zur Leistungsfähigkeit der Verfassung, 19 Journal für Rechtspolitik 35, 35–45 (2011).

112 See supra Section F.III.

113 See Awad et al., supra note 75 (explaining that in the scenario varying the group size of victims, the results show a significant preference for saving more lives, even if a third option to treat both groups equally, as tested by Bigman & Gray, supra note 77, is available). But see Jean-François Bonnefon et al., The Social Dilemma of Autonomous Vehicles, 352 Science 1573, 1574 (2016) (“Overall, participants strongly agreed that it would be more moral for AVs to sacrifice their own passengers when this sacrifice would save a greater number of lives overall.”).

114 See especially Kirchmair, supra note 1.

115 See, e.g., Bezemek, supra note 111, at 126 (explaining that this consideration opens up the possibility for the state to counter such an unlawful attack with the use of force, subject to a strict proportionality standard).

116 See Volker Erb, Automatisierte Notstandshandlungen, in Rechtsstaatliches Strafrecht: Festschrift für Ulfrid Neumann zum 70. Geburtstag 788 (Frank Saliger et al. eds., 2017). From a technological perspective, see Michael Botsch & Wolfgang Utschick, Fahrzeugsicherheit und automatisiertes Fahren: Methoden der Signalverarbeitung und des maschinellen Lernens (2020). It is important to mention that this scenario has only been brought up by Awad et al., supra note 75—as has been posited by one of its authors, Bonnefon, supra note 35, at 114–15—in order to demonstrate how dangerous it is to look purely at the moral intuitions of laypersons.

117 J. C. Gerdes & Sarah M. Thornton, Implementable Ethics for Autonomous Vehicles, in Autonomes Fahren: Technische, rechtliche und gesellschaftliche Aspekte 92 (Markus Maurer ed., 2015).

118 This argument is similar but not identical to the arguments made by those authors cited supra note 74, who—among other things—make the point that the trolley problem would be a misleading scenario for regulating the crashes of self-driving cars.

119 See, e.g., Lando Kirchmair & Norbert Paulo, Taking Ethics Seriously in AV Trajectory Planning Algorithms, 5 Nature Machine Intelligence 814, 814–15 (2023) (“We need more precise legal rules, rather than mere ethical guidelines, when designing adequate trajectory planning algorithms for AVs.”).

120 On the reverse point that code can have the force of law, see Lawrence Lessig, Code and Other Laws of Cyberspace. Version 2.0 (2006).

121 Directorate-General for Research and Innovation, Ethics of Connected & Automated Vehicles: Independent Expert Report, European Comm’n (Sept. 17, 2020) [hereafter Independent Expert Report on Road Safety, Privacy, Fairness, Explainability, and Responsibility].

122 See Independent Expert Report on Road Safety, Privacy, Fairness, Explainability, and Responsibility, supra note 121.

123 This is an important difference to the argument made by the authors cited supra note 74.

124 Berka, Binder & Kneihs, supra note 37, at 282.

125 Id. at 283.

126 Insofar as the discussion relates exclusively to a probability of avoiding harm—instead of killing—it would still be necessary to discuss, in accordance with Article 8 ECHR, the right to respect for private and family life and thus the impairment of physical integrity.

127 See, e.g., Kneihs, supra note 109 (describing the classic example: The so-called “final shot fired”).

128 See, e.g., Stender-Vorwachs & Steege, supra note 107.

129 See Immanuel Kant, Kritik der praktischen Vernunft: Grundlegung zur Metaphysik der Sitten (Wilhelm Weischedel ed., 1974); see also Hilgendorf, supra note 107, at 24 (pointing out that the legal position goes back to decisions to murder the mentally ill in the Third Reich). For a recent example of application, see Amendment to the Aviation Security Act (describing the decision of the Federal Constitutional Court of Germany concerning the Air Security Act); see also Daniel-Erasmus Khan, Der Staat im Unrecht: Luftsicherheit und Menschenwürde, in Der Staat im Recht: Festschrift für Eckart Klein zum 70. Geburtstag (Marten Breuer et al. eds., 2013). But see Reinhard Merkel, § 14 Abs. 3 Luftsicherheitsgesetz: Wann und warum darf der Staat töten?, 62 Juristen Zeitung 373 (2007). Explicitly based on this, on the question of how moral dilemmas should be classified in the programming of self-driving cars, see Stender-Vorwachs & Steege, supra note 107, at 401 (“A legal regulation that requires the algorithm to be programmed in such a way that the life of the occupant of an autonomous car takes a back seat to other vehicle occupants or pedestrians or cyclists is contrary to the guarantee of human dignity.”). Cf. Merkel, supra note 129, at 403 (describing a series of further prohibitions on potential programming that culminates in requiring vehicle manufacturers to “mandatorily program their vehicles defensively so that accident situations are virtually eliminated”).

130 In relation to Austria, see Anna Gamper, Human Dignity in Austria, in Handbook of Human Dignity in Europe (Paolo Becchi & Klaus Mathis eds., 2019); see also Benjamin Kneihs, Schutz von Leib und Leben sowie Achtung der Menschenwürde, in Grundrechte in Österreich (Detlef Merten et al. eds., 2014). With special reference to the ECHR, see Lennart von Schwichow, Die Menschenwürde in der EMRK: Mögliche Grundannahmen, ideologische Aufladung und rechtspolitische Perspektiven nach der Rechtsprechung des Europäischen Gerichtshofs für Menschenrechte (2016); Paolo Becchi et al., Handbook of Human Dignity in Europe (Paolo Becchi & Klaus Mathis eds., 2019).

131 Anna Gamper, Gibt es ein “Recht auf ein menschenwürdiges Sterben?” Zum Erkenntnis des VfGH vom 11.12.2020, G 139/2019, 3 Juristische Blätter 137, 141 (2021). Human dignity, anchored in the case law of the ECtHR on Article 3 ECHR, prohibits, according to the Austrian Constitutional Court, “a gross disregard of the person concerned as a person that impairs human dignity,” (e.g., VfSlg. 19.856/2014 with further references) but does not constitute an explicit prohibition of the “harm minimization principle,” as the Federal Constitutional Court of Germany has pronounced with regard to Section 14 para. 3 of the Aviation Security Act. This is probably also true, although the Pretty case refers to dignity and freedom as the essence of the Convention. See Pretty, App. No. 2346/02 at para. 65.

132 See Sebastian Heselhaus & Ralph Hemsley, Human Dignity & the European Convention on Human Rights, in Handbook of Human Dignity in Europe (Paolo Becchi & Klaus Mathis eds., 2018) (pointing to Additional Protocol 13 adopted in 2002 and the case law of the ECtHR, which includes many references to human dignity). Nevertheless, human dignity is not enshrined as a separate human right in the ECHR.

133 See the Aviation Security Act (Ger.).

134 See Finogenov et al. v. Russia, App. No. 18299/03, para. 227 (Dec. 20, 2011). But see Grabenwarter & Pabel, supra note 37, at § 20 para. 14, and Gerards, supra note 37, at 361 (discussing the question of causality).

135 See, e.g., Gerards, supra note 37, at 364–65; see also Bezemek, supra note 111, at 129. But see Gülec v. Turkey, App. No. 54/1997/838/1044, para. 73 (Jul. 27, 1998); Zara Isayeva v. Russia, App. No. 57950/00, paras. 180–81, 200 (Feb. 24, 2005); Medka Isayeva et al. v. Russia, App. No. 57949/00, para. 178 (Feb. 24, 2005).

136 See also Hilgendorf, supra note 107, at 25–26.

137 For more detail, see Claus Roxin, Strafrecht: Allgemeiner Teil 739 (2006); see also Hilgendorf, supra note 107, at 25 (denying a community of danger for road traffic without further justification).

138 John Rawls, A Theory of Justice 118 (1999).

139 See, e.g., Jan Gogoll & Julian F. Müller, Autonomous Cars: In Favor of a Mandatory Ethics Setting, 23 Sci. & Engʼg Ethics 681, 681–700 (2017); Derek Leben, A Rawlsian Algorithm for Autonomous Vehicles, 19 Ethics & Info. Tech. 107, 107–15 (2017); see also Grechenig & Lachmayer, supra note 111, at 42.

140 See Ethics Commʼn, supra note 14, at 18. But see supra Section F.IV.

141 This must be recalled, in particular to those voices that attest to “errors of reasoning” in arguments that, in their view, contradict the guarantee of human dignity. See, e.g., Stender-Vorwachs & Steege, supra note 107, at 402.

142 But see Andreas T. Müller, Abwägung von Menschenleben im Völkerrecht, in Verhältnismäßigkeit im Völkerrecht (Björnstjern Baade et al. eds., 2016).

143 Hilgendorf, supra note 107, at 25.

144 See, e.g., Onder Bakircioglu, The Application of the Margin of Appreciation Doctrine in Freedom of Expression & Public Morality Cases, 8 Ger. L.J. 711, 711–33 (2007).

145 See Ethics Commʼn, supra note 14, at 19 (predicting this when discussing related liability issues).

146 See Bonnefon, supra note 35, at 19; see also Awad et al., supra note 75 (explaining that the first scenarios tested, including an option to decide at random, were not met with much interest by the study participants).

147 Joshua Posaner, EU Plans to Approve Sales of Fully Self-Driving Cars, Politico (July 5, 2022).

148 See Commission Implementing Regulation 2022/1426 of 5 August 2022 laying down rules for the application of Regulation 2019/2144, Aug. 26, 2022, 2022 O.J. (L 221/1) 1 [hereinafter Regulation on Performance Requirements].

149 Regulation on Performance Requirements, at Annex II. The definitions given in Article 2 paras. 14–15 respectively, including “‘minimal risk maneuver (MRM),’ meaning a maneuver aimed at minimizing risks in traffic by stopping the vehicle in a safe condition or minimal risk condition,” and “‘minimal risk condition (MRC),’ meaning stable and stopped state of the vehicle that reduces the risk of a crash,” are not actually helpful in clarifying our query.

150 See State of Cal. Dep’t of Motor Vehicles (DMV), Autonomous Vehicle Collision Reports (Sept. 6, 2023). But see U.S. Gov’t Dep’t of Transp., Nat’l Highway Traffic Safety Admin. (2022).

151 For an overview, see Jeremy A. Carp, Autonomous Vehicles: Problems & Principles for Future Regulation, 4 Univ. Penn. J. L. & Publ. Aff. 81, 81–148 (2018).

152 H.R. 2813, 55th Legis., Reg. Sess. (Ariz. 2021).

153 Id.

154 Id.

155 Andreia Martinho et al., Ethical Issues in Focus by the Autonomous Vehicles Industry, 41 Transp. Rev. 556, 565–66 (2021) (explaining that the only company that comes close to describing what its self-driving cars would do in a moral dilemma is Nuro, a company specializing in automated passenger-less delivery technology, whose cars would prioritize humans over their cargo in extreme cases).

156 See, e.g., Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act, H.R. 3388, 115th Cong. (2017).

157 See, e.g., Khan, supra note 129 (suggesting that while the law simply cannot standardize certain situations, public officials should certainly act, even without a legal basis or even disregarding it, in order to achieve certain results).

158 See Hilgendorf, supra note 11, at 358.

159 For a new “risk ethics” in this regard, see Goodall, supra note 74; Maximilian Geisslinger et al., Autonomous Driving Ethics: From Trolley Problem to Ethics of Risk, 34 Phil. & Tech. 1033, 1033–55 (2021).