How to Regulate Moral Dilemmas Involving Self-Driving Cars: The 2021 German Act on Autonomous Driving, the Trolley Problem, and the Search for a Role Model

Abstract With the promulgation of the Autonomous Driving Act in summer 2021, Germany took the worldwide lead on regulating self-driving cars. This Article discusses the (non-)regulation of moral dilemmas in this act. To this end, it clarifies the role of the so-called trolley problem, which influenced the report of the German Ethics Commission that paved the way for this act, in particular, and the relationship between philosophical reasoning, empirical studies, and the law in general. By introducing the international legal community to the (non-)regulation of moral dilemmas in the German act, the Article critically reviews the German goal of serving as a European and international role model. This is preceded by a discussion of why self-driving cars should be allowed and why the moral dilemmas they cause should be regulated by the law.


A. Introduction
Mobility is a central element of human coexistence, and mobility revolutions have marked decisive turning points in recent human evolution.1 Thus, the invention of the disc wheel around 3,500 BC-a special testimony to human ingenuity without a direct role model in nature2-was a milestone that considerably facilitated the transportation of goods and, later, also of people.3 Another great achievement was the spoked wheel, invented in the Bronze Age around 2,000 BC.4 Fast forward to the modern age, and the patent registered by Karl Benz in 1886-the "Benz Patent-Motorwagen Nummer 1," in plain language, the car, which was fitted with wire-spoked wheels with hard rubber tires and an internal combustion engine5-was another tremendous invention that, again, contributed to a mobility revolution.6 Whereas until then carts and carriages had to be pulled by draft animals, the so-called automobile released propulsion technology from that dependence. Currently we are witnessing the next great leap in human mobility: The "auto-auto," as a funny German nickname goes,7 is one designation for an autonomous, in other words, self-driving car that is both propelled and controlled without human intervention.8 While the period between the invention of the wheel and its enhanced version-the spoked wheel-was about 1,500 years, the interval between the independence of propulsion technology and its full enhancement now-also including the automation of control technology-was ten times shorter, lasting only about 150 years. Although the invention and improvement of the wheel both represent milestones in mobility, they were presumably not accompanied by legal standards in any significant way. Beginning in the second half of the 19th century, however, technology law is credited with a process of institutional consolidation that has also affected the regulation of cars.9

Law, thus, is playing a crucial role in the current revolution of human mobility. One might even go so far as to say that law shares responsibility for the success or failure of modern mobility technology.10 Many important areas, such as traffic law, data protection law, liability and criminal law, as well as public law, are affected by auto-autos, and striking the right balance between regulation of and freedom for the development of the technology is no easy task.11 The German legislature is the first worldwide that has dared to face this difficult task comprehensively.12 Section 1a, which made the "operation of a motor vehicle by means of a highly or fully automated driving function […] permissible if the function is used as intended," paved the way and was already inserted in the Road Traffic Act (StVG) in 2017. 13 In July 2016, an Ethics Commission was tasked to deliver a report on ethical and legal questions regarding the introduction of automated vehicles, which was also published in 2017.14

This high-level report was the basis for the draft act on autonomous driving published in February 2021, which-for the first time worldwide15-aimed at a comprehensive regulation of self-driving cars on public roads.16 The draft then entered into force, almost unchanged, as the Autonomous Driving Act on July 27, 2021.17 Almost a year later, a decree on the approval and operation of vehicles with autonomous driving functions was adopted on June 24, 2022.18 The path for the use of highly and fully automated vehicles has, thus, already been prepared in Germany; further regulatory steps in other countries are likely to follow. The German act holds, among many other things, in Section 1e paragraph 2, that motor vehicles with an autonomous driving function need to have a specific accident prevention system, or "System der Unfallvermeidung." The concrete regulation of moral dilemmas in connection with self-driving cars is-needless to say-of great importance but also very controversial. This is no wonder, as many lives are at stake. This Article focuses on the question as to how to regulate such moral dilemmas involving self-driving cars by law, and it first holds that self-driving cars should be allowed on the roads. If self-driving cars work, this is a legal duty included in the positive dimension of the obligation to protect life as guaranteed by Article 2 of the European Convention on Human Rights (ECHR)-see Section C.19

Nevertheless, the most recent mobility revolution and all its promises come with a significant burden: Self-driving cars will still be involved in accidents, and the technology includes the possibility to program how these accidents should take place. For the first time in the history of traffic law, we have to make decisions about moral dilemmas, about life and death, in cold blood. This situation is unprecedented in its extent and a great challenge for society and the law alike; it constitutes a veritable challenge for ethicists and lawyers-see Section D.20 This Article argues in the following that this "possibility" must be faced by the law-not by private companies or individuals-see Section E.21

After the stage has been set, the central interest of this Article comes into play, namely the regulation of so-called moral dilemmas, in other words, situations in which, according to all available options, comparable harm occurs-for example, a group of two or a group of three people is killed because neither outcome can be prevented. The German act, which includes a provision on an accident-avoidance system and thereby regulates moral dilemmas to some extent, will be analyzed in order to clarify whether this act might indeed serve as a role model beyond Germany. For this purpose, the Article will also look at the report of the German Ethics Commission, whose "rules" constitute the basis of the act. To understand the position taken by the Ethics Commission, the Article revisits the so-called "trolley problem," which prominently arose out of the debate between the Oxford philosopher Philippa Foot22 and the American philosopher Judith Jarvis Thomson.23 Looking back at this problem, and at related but importantly different trolley cases constructed by German criminal lawyers, we come to understand that the current discussion suffers from a conflation of hypothetical scenarios from different debates in philosophy, empirical studies, and the law-see Section F.24 This insight is important when finally discussing the accident regulation of self-driving cars facing moral dilemmas in Europe, the US, and beyond. The positive obligation of states according to Article 2 ECHR to take measures to save lives surely includes a prominent role for the "minimize harm" principle, potentially even to the extent that human lives have to be offset-see Section G.25 In the end, we will see that the 2021 German Act on Autonomous Driving provides some important elements for collision-avoidance systems in self-driving cars but falls short of being a role model for Europe or the US due to its reluctance to regulate moral dilemmas comprehensively.

4 See DAVID W. ANTHONY, THE HORSE, THE WHEEL, AND LANGUAGE: HOW BRONZE-AGE RIDERS FROM THE EURASIAN STEPPES SHAPED THE MODERN WORLD (2007) (highlighting at 397 that two-wheeled chariots with standing drivers "were the first wheeled vehicles designed for speed, an innovation that changed land transport forever. The spoked wheel was the central element that made speed possible." While the exact date is not important to our purpose here, it is nevertheless interesting that Anthony makes the case that [at 402] "the earliest chariots probably appeared in the steppes before 2000 BCE" [and consequently did not originate from prehistoric Near Eastern wagons, see at 399 et seq.]. In general, he provides an account of how the original Indo-European speakers, and their domestication of horses and use of the wheel, spread language and transformed civilization).

15 The regulation of self-driving vehicles is very dynamic. For instance, the French legislator has come up with an adaptation of the Mobility Orientation Act as well as empowerments for the government which ultimately allow "delegated-driving vehicles" on public roads. Several other countries, such as Australia, China, Japan, and the United Kingdom, have developed plans to introduce self-driving cars and are adapting their laws accordingly. For an overview of the various regulatory activities in several countries, see MICHAEL RODI, DRIVING WITHOUT DRIVER: AUTONOMOUS DRIVING AS A LEGAL CHALLENGE (Uwe Kischel & Michael Rodi eds., 2023).

16 For an overview of the draft, see Eric Hilgendorf, Straßenverkehrsrecht der Zukunft, 76 JURISTENZEITUNG 444, 444-54 (2021).
B. What is in a Name? The Designation of Different Automation Levels

Similar to the development of the automobile-which did not happen at once but owed its existence to numerous, sometimes parallel, technological innovations-the self-driving car will not be ready for use tomorrow or the day after without a precursor. There are many developments and different levels of automation, and these are typically described in six stages according to the classification of the Society of Automotive Engineers (SAE) International, a non-profit association of automotive engineers concerned with technical standards.26 The lowest level includes vehicles without any automation-Level 0: No driving automation. The next level comprises well-known and already approved driver-assistance systems such as the Anti-lock Braking System (ABS) and Electronic Stability Program (ESP), or, more recently, lane departure warning-Level 1: Driver assistance. Partially automated vehicles promise to take over certain activities, such as independent parking or maneuvering a vehicle in a traffic jam-Level 2: Partial automation. Conditionally automated vehicles are, to a certain extent, "autonomous" but still require the possibility of human intervention-Level 3: Conditional automation.27 The need for human intervention is no longer given at the next, highly automated level, but intervention is still possible-Level 4: High automation.28 In the final stage, human drivers are not only obsolete but can no longer intervene at all-Level 5: Full driving automation. This distinction between automated and fully self-driving, in other words, potentially driverless, vehicles in road traffic is important not only from a technical perspective but also from a legal one, as it entails different legal regulatory requirements. It is worth pointing out that the classification of Level 5 no longer includes the designation-widely used and also formerly used by the SAE-of such vehicles as being "autonomous."29

This is to be welcomed since the term autonomy is typically used in law and philosophy for self-determination or self-governance.30 This does not apply to self-driving cars. In the legal and philosophical sense, only those vehicles could be called autonomous which do not move on the basis of pre-programmed decisions without a driver but which actually make autonomous decisions, for example, with the help of machine learning.31 Consequently, for the purposes of this Article, we will speak of self-driving cars throughout, without wanting to exclude lower levels of automation, since moral dilemmas-albeit in a somewhat modified form-can already occur with driver-assistance systems.

C. Why Self-Driving Cars Should be Allowed
The introduction of the automobile initially claimed a comparatively large number of lives: "In the first four years after Armistice Day more Americans were killed in automobile accidents than had died in battle in France."32 Even today, the number of traffic fatalities is by no means insignificant, and the primary cause of accidents is clearly human error.33 This is not only relevant for drivers but also for those otherwise involved in road traffic, such as cyclists or pedestrians, who are often affected.

29 See ORAD Committee, supra note 26, at 28, 29 (explicitly discouraging the use of the term "autonomous" in parts 7.1.1 and 7.2). Cf. Anna Haselbacher, Rechts überholt? Zum aktuellen Stand des Rechtsrahmens "Automatisiertes Fahren," 46 JUSIT 127, 127-33 (2020).
Generally speaking, the law of technology is potentially both technology-preventing and technology-enabling.34 In the light of traffic fatalities, the use of self-driving cars is promising. After all, it is quite conceivable that self-driving cars can be designed and programmed in such a way that many (fatal) accidents will actually be avoided. This is true despite the current public debate on accidents involving, for example, various Tesla models in May 2016 and April 2021, both in the US, which were caused by automated vehicles in test mode or by so-called "autopilots" that were not adequately monitored by humans.35 A central promise that speaks in favor of the approval of functioning self-driving cars is-despite these accidents-the resulting improvement in traffic safety. To put it bluntly, self-driving cars do not speed, make phone calls, drive under the influence of alcohol or drugs, or fall asleep at the wheel.36

Everyone's right to life as enshrined in Article 2, Paragraph 1 ECHR covers not only the fundamental prohibition of the state to intentionally end human life, but also the obligation to take precautionary measures to prevent dangerous situations.37 The state, thus, has a duty to protect individuals from threats, also from other individuals. However, this duty to protect is not easy to grasp.38 The case law of the European Court of Human Rights (ECtHR) in this regard is quite casuistic,39 extending, for instance, to the protection of individuals threatened by environmental hazards and dangerous activities.40 However, there are limits to this duty to protect. The state does not have to prohibit road traffic, for example, simply because it is dangerous. A certain risk is therefore part of life.41 In any case, however, the state must take legal measures to regulate dangers emanating from road traffic.42 The establishment of appropriate and effective traffic regulations, such as a blood alcohol limit, is, accordingly, necessary to protect individuals against particular dangers.43 All in all, it is important to bear in mind that the state has to make great efforts to protect life: "When there is a risk of serious and lethal accidents of which the state has-or ought to have-knowledge, the state may be obliged to take and enforce reasonable precautionary measures."44 Insofar as self-driving cars function and fulfill their promise of significantly increased road safety, it can be assumed that the approval of self-driving cars is to be subsumed under the state's duty to protect life under Article 2 ECHR.45 In this vein the German Ethics Commission also postulated in Rule 6 that "[t]he introduction of more highly automated driving systems, especially with the option of automated collision prevention, may be socially and ethically mandated if it can unlock existing potential for damage limitation."46

39 To the extent that self-driving cars will greatly increase road safety, yet simultaneously achieve an inclusionary effect-such that many more individuals can now use these vehicles-a greatly increased volume of traffic can be expected.

D. Why Moral Dilemmas Involving Self-Driving Cars Are New

In connection with the promise of increased safety, however, there is a great deal of uncertainty. How should self-driving cars behave when all available options cause harm? Self-driving cars will change road traffic massively, and thus the street scene and everyday lives of almost everyone.47 Therefore, the ethical and legal risks and problems associated with them must be clearly regulated.
One of the main problems is to decide how self-driving cars should be programmed for moral dilemmas, in other words, for situations in which, according to all available options for action, comparable harm occurs, for example, a group of two persons or a group of three persons is seriously injured or killed because neither scenario can be prevented.48 It has been suggested that self-driving cars will constantly be confronted with trolley-problem-like situations, which have in turn been criticized as too fictitious for real-world regulatory problems.49 However, deciding what is an acceptable probability of collision when initiating a particular maneuver, and how that probability relates to the expected harm in situations where all options involve at least a possibility of collision and harm, is a difficult ethical question that we have not had to answer in advance until the advent of self-driving cars. Furthermore, the millions of cars and kilometers traveled will increase the likelihood of even the most extraordinary scenarios, all of which can be regulated in advance. The legal question of how to deal with such dilemmas in road traffic is rather new in that, to date, similar accident constellations have always had to be accepted as fate, so to speak, as subconscious human decisions determined the way accidents happened.50

Reactions in a fraction of a second cannot be compared with conscious, reflected decisions. This new possibility is a gift and a burden at the same time. Self-driving cars, for instance, include the chance to save more lives; however, the price is high, as the decision has to be made that someone will die to save other lives. This is a tremendously difficult question that most legal orders have declined to answer so far, at least in a state of normalcy. Yet, ignoring the technical possibility of saving as many lives as possible also means letting people die. Either way, a solid justification is necessary. Other scenarios, like planes hijacked by terrorists, are fundamentally different and, therefore, only of little argumentative help.51

E. Moral Dilemmas Must be Regulated by the Law
The question as to how to make decisions on moral dilemmas involving self-driving cars must be answered by the law. The state has a legal duty to ensure that fatal risks are diminished (Article 2 ECHR). It cannot be left up to car manufacturers or private individuals to decide whether and how they program or choose the programming for self-driving cars in moral dilemmas, since companies or private individuals are not allowed to make life-or-death decisions except in emergency situations, like in emergency aid cases when their own lives are at stake.52 Thus, announcements such as those made by a representative of Mercedes, who said that self-driving Mercedes cars will be programmed in crash scenarios to prioritize the safety of their owners rather than, for instance, pedestrians, which puts a price tag on weighty decisions about life and death in road traffic, must be stopped.53

F. How to Inform the Regulation of Moral Dilemmas Involving Self-Driving Cars?

It is the very definition of a moral dilemma that there is no easy solution. Hence, if we agree that self-driving cars can cause or might be involved in situations which constitute a moral dilemma, there is no easy answer. Current attempts to argue for specific regulations of moral dilemmas for self-driving cars might be inclined to inform decisions by referring to legal scholars, mostly criminal lawyers, who have also discussed so-called "trolley cases." In these debates, however, the criminal lawyers were usually not concerned with a discussion of adequate laws but with the question whether it would be right to punish individuals in specific situations. Moral dilemmas have plagued many philosophers too. A famous discussion in philosophy and psychology focused on the so-called "trolley problem." The scenarios discussed there are easily adapted to hypothesize about how the outcome of accidents involving self-driving cars with unavoidable fatalities should be programmed. The trolley problem and similar scenarios, however, are no easy fit for the question as to how to regulate moral dilemmas involving self-driving cars. The debate is huge and complex.54 Nevertheless, a major reason for being cautious is the fact that the debate around trolley problems in philosophy originally had quite different goals than what currently seems to be at center stage. Recent empirical studies have also aimed to inform the regulation of moral dilemmas with self-driving cars. It is, however, a difficult and potentially misleading task to simply ask laypeople in studies what they think would be the right thing to do. None of these debates is meaningless when discussing how to make decisions on moral dilemmas involving self-driving cars. Yet, it is important not to conflate the different starting points and goals of these debates when we aim at informing the current regulation of moral dilemmas with self-driving cars. This will be demonstrated taking the German Ethics Commission and the 2021 German Act on Autonomous Driving as a basis.

49 See, e.g., Heather M. Roff, The Folly of Trolleys: Ethical Challenges and Autonomous Vehicles, BROOKINGS (Dec. 17, 2018), https://www.brookings.edu/articles/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/. However, see Norbert Paulo, The Trolley Problem in the Ethics of Autonomous Vehicles, 73(4) PHILOSOPHICAL QUARTERLY 1046, 1046-66 (2023), making the case that the trolley problem can be of relevance in the ethics of self-driving cars.

53 See, e.g., Matthias Breitinger, Ein Mercedes-Fahrerleben ist nicht mehr wert als andere, ZEIT ONLINE (Oct. 17, 2016, 5:14 PM), https://www.zeit.de/mobilitaet/2016-10/autonomes-fahren-schutz-fahrer-hersteller; see also Sven Nyholm, The Ethics of Crashes with Self-Driving Cars: A Roadmap, 13 PHIL. COMPASS 1, 2-5 (2018). Cf. BONNEFON, supra note 35, at 63 (giving context to this supposedly outrageous statement).

I. Trolley Cases in Criminal Law
Constructed cases are lawyers' bread and butter, at least in the classroom. From this point of view, it is not surprising that the jurist Josef Kohler already offered a hypothetical scenario with the title Autolenker-Fall, the "case of the car driver," in 1915.55 Kohler proposed to:

Consider the fact that a car can no longer be brought to a standstill over a short distance but that it is still possible to steer it so that instead of going straight ahead, it goes right or left. If there are now persons straight ahead, to the right and left who can no longer avoid being killed, the driver is not in a position to avoid killing people, but he can steer the car to one side or the other by moving the steering wheel. Can we punish him here for causing the death of A, whereas if the car had continued in a straight line without being steered, B or C would have perished?56

Thereby Kohler formulated a decision problem which lawyers still ponder today. It is, however, important to understand why. As often, we learn a great deal about his intentions when we closely read his question at the end of the scenario. His intention as a criminal lawyer was to discuss whether the action chosen by the car driver is punishable under criminal law, or whether the emergency law-"Notrecht" in German-prevents the punishment of an individual who had no choice but to kill someone. There is another scenario, again proposed by a German criminal lawyer, which is strikingly similar to the trolley cases discussed in philosophy. In 1951, Hans Welzel described the following scenario:

On a steep mountain track, a freight car has broken loose and is hurtling down the valley at full speed towards a small station where a passenger train is currently standing. If the freight car were to continue racing along that track, it would hit the passenger train and kill a large number of people. A railroad official, seeing the disaster coming, changes the points at the last minute, which directs the freight car onto the only siding where some workers are unloading a freight car. The impact, as the official anticipated, kills three workers.57

Due to the similarity of this scenario, the German Ethics Commission considered it to be "familiar in a legal context as the 'trolley problem'."58 This is misguided, however, as it conflates important differences between the intentions of Kohler and Welzel on the one hand and the discussion of the "trolley problem" by Philippa Foot and Judith Jarvis Thomson on the other hand.59 Welzel's argument, in broad strokes, was to demonstrate that the railroad official is not culpable for having redirected the freight car. Despite their similarity with currently suggested crash scenarios involving self-driving cars, simply adopting these examples is misguided because, when we discuss the ethics of self-driving cars and the question as to how to legally prescribe the programming of these cars for dilemma situations, we must not conflate this with the justification in criminal law of personal actions in terms of emergency aid or personal culpability. While in Kohler's and Welzel's cases alike the discussion revolves around the action of an individual and the question whether this individual-ex post-should be punished or not, regulating self-driving cars is a societal question to be answered ex ante by the law maker or an Ethics Commission. Neither the opinion of Kohler, who answered his own question straightforwardly with "certainly not," nor that of Welzel, who also considered the railroad official not culpable, must lead directly to the conclusion that self-driving cars ought to be programmed to take a different path in such scenarios. It is not criminal law which is-alone-responsible for the question as to how to decide on such moral dilemmas. And still, although this debate is at its core about something else, this does not exclude the possibility that the solutions to such and similar scenarios discussed by many criminal lawyers might be informative for the debate on self-driving cars and moral dilemmas.60

II. The Traditional "Trolley Problem" in Philosophy
The traditional "trolley problem" in philosophy became famous through a debate between Philippa Foot and Judith Jarvis Thomson. The Oxford philosopher Philippa Foot published an article in 1967 which had an enormous impact on philosophical debates worldwide in the following decades.61 Her intention was to discuss the ethics of abortion by making an argument for the distinction between positive and negative rights, in contrast to the doctrine of double effect. To her, negative duties, namely what we owe to other persons in terms of non-interference, were stronger than positive duties, namely what we owe to other persons in the form of aid.62 To illustrate her point, she, too, constructed hypothetical scenarios. One scenario, which later became famous, goes like this:

Someone is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed.63

62 Foot, supra note 61, at 29 ("My conclusion is that the distinction between direct and oblique intention plays only a quite subsidiary role in determining what we say in these cases, while the distinction between avoiding injury and bringing aid is very important indeed.").

63 Foot, supra note 61, at 23.

Her scenario, too, was designed to put the driver of the tram in a conflict, namely a conflict of two negative duties, instead of one negative and one positive duty. Taking either track, the tram driver would kill an innocent person. Because both duties were negative duties, she argued that the tram driver might steer the tram to save five persons, not because of the doctrine of double effect but because the distinction between positive and negative duties was decisive.
This argument was challenged by Judith Jarvis Thomson some ten years later. She coined the term "trolley problem," "in honor of Mrs. Foot's example."64 To Thomson, after having considered more hypothetical scenarios, it seemed that the distinction between positive and negative duties does not always guide us in a morally acceptable way. To her, rather, "what matters in these cases in which a threat is to be distributed is whether the agent distributes it by doing something to it, or whether he distributes it by doing something to a person."65 To make her point, she changed Foot's scenario slightly. In Thomson's case it is not the action of the driver of the tram but the action of a bystander-who might change the points in order to redirect the trolley-which we have to evaluate.66 As Thomson supposes, also after having asked several colleagues for their opinions, most persons consider it morally acceptable for the bystander to change the points in order to save five persons. In this case, however, the bystander violated a negative duty-to not kill one person on the other track-in order to fulfil a positive duty-to aid five persons who would die if the bystander did not act.67 This, she states, "is serious trouble for Mrs. Foot's thesis."68 Judith Jarvis Thomson then goes on to discuss more scenarios, defining the "trolley problem" as the difficulty of explaining why the bystander may redirect the trolley but we must not push a fat man off a bridge in order to stop a trolley, killing one person-the fat man-in order to save five others. For her, "'kill' and 'let die' are too blunt to be useful tools for the solving of this problem,"69 but an "appeal to the concept of a right" could suffice.70 If someone must infringe a stringent right of an individual in order to get something that threatens five to threaten this individual instead, then he may not proceed, according to Thomson.71

The problem is that in both cases we are dealing with negative and positive rights and duties in a similar way, but morally it seems that this should not be decisive, as the bystander should redirect the trolley, but the fat man should not be killed in order to save five other persons endangered by the trolley. The "trolley problem," therefore, at least in the debate between Philippa Foot and Judith Jarvis Thomson, is not about how to solve such moral dilemmas.72 On the contrary, the right thing to do, morally speaking, is stipulated in all of the scenarios discussed by them. It is rather about the perplexity of how to explain the apparently different moral judgments in rather similar, even almost identical, scenarios. Because this is difficult, and has proven to remain difficult until today, it has been labeled a "problem." Note that the introduction of the bystander-instead of the tram driver used by Philippa Foot-was intended to show that the bystander would not be responsible for killing the five persons, as he would not be driving the tram, in contrast to the driver, but most persons nevertheless considered changing the points morally acceptable.

72 See Foot, supra note 61, at 30 ("I have only tried to show that even if we reject the doctrine of double effect, we are not forced to the conclusion that the size of the evil must always be our guide."). Thomson's words are similarly instructive, see Thomson, supra note 64, at 217 ("[T]he thesis that killing is worse than letting die cannot be used in any simple, mechanical way in order to yield conclusions about abortion, euthanasia, and the distribution of scarce medical resources."); see also Thomson, supra note 23, at 1414 ("[I]t is not the case that we are required to kill one in order that another person shall not kill five, or even that it is everywhere permissible for us to do this.").
Hence, the important point for the current debate is that conclusions from the apparent fact that the bystander should redirect the trolley in order to save five people by killing one person should not be drawn lightly when considering the programming of self-driving cars in such and similar situations. This insight often seems to be neglected, however, at least in discussions in non-academic magazines when someone makes the case for the relevance of the "trolley problem" for moral dilemmas involving self-driving cars. 73 For this and other reasons, most moral philosophers do not consider the "trolley problem" debate to be particularly useful for the discussion of the ethics of self-driving cars. 74

III. Empirical Studies on Moral Dilemmas Involving Self-Driving Cars
What is important to note for our purpose, thus, is that the hypothetical trolley cases were originally designed for quite specific purposes: in the case of Kohler and Welzel, for discussing intricate issues of criminal law and culpability in emergency situations, and in the case of Foot and Thomson, for discovering moral principles, or rather philosophical arguments, aimed at justifying and explaining quite perplexing but nevertheless strong moral intuitions in seemingly only slightly different scenarios. The moral intuitions in all of these exercises were presupposed. In the application of such trolley cases over time, and especially in relation to the ethics of self-driving cars, something changed.
Trolley cases these days seem to serve rather as an inspiration for finding out what the right thing to do would be and, thus, how self-driving cars should be programmed for moral dilemmas. This was impressively demonstrated in the large "Moral Machine Experiment," 75 which asked over two million persons online to give their opinion on various scenarios in order to find out what laypersons thought was the right thing to do in a moral dilemma situation involving self-driving cars. These scenarios included characters as diverse as children and the elderly, doctors and the homeless, and different group sizes, to name just a few examples, and to give an idea of the breadth of this and similar efforts to decipher lay moral preferences in the context of self-driving cars. 76 An important finding reported in the experiment by Edmond Awad and his colleagues, and in many similar studies, is that most people think that, given a moral dilemma, self-driving cars should be programmed to save five people, even if one person has to die as a result. 77 Yet, this does not mean that we can decide on the programming of moral dilemmas involving self-driving cars on the basis of empirical studies alone. Determining the right programming is, for instance, a much more complex question than simply treating trolley cases like experiments. Despite this caveat, it would be throwing the baby out with the bathwater to ignore such experiments if they validly show a significant tendency in laypersons. 78

The criticism that morality cannot be established in an experiment like the Moral Machine Experiment 79 hinges as much upon an answer to the question of what public morality is as does the experiment by Awad and his colleagues itself. Is morality only to be found in the "ivory tower" of ethical theory building, or is it also connected to what the majority of laypersons consider to be the right thing to do? If the latter has to play a role, the study design for finding morally relevant intuitions becomes crucial. 80 Just one example shall illustrate why too simple a study design might be problematic. In the discussion between Philippa Foot and Judith Jarvis Thomson, a striking and recurring issue was the trouble that numbers alone "won't do." They were puzzled by the finding that in one circumstance it seems morally acceptable to save five persons over one, but if a-sometimes only slight-change in the circumstances occurs, the verdict seems to change too, and it is no longer morally acceptable to save five persons over one. To ignore this important element of the discussion-and in fact this was the whole point of the discussion-is to dangerously introduce "morally acceptable" findings of so-called surveys or experiments into the real-life debate on how to regulate moral dilemmas caused by self-driving cars. 81

After having clarified potentially misinformed starting points, or conflations of various quite different debates, we will take a look at the rules suggested by the German Ethics Commission, which are the basis for the 2021 German Act on Autonomous Driving.

IV. The Rules of the German Ethics Commission on Automated and Connected Driving
In 2017, an Ethics Commission set up by the German Federal Minister of Transport and Digital Infrastructure, including, among others, lawyers and philosophers, and chaired by former judge of the Federal Constitutional Court of Germany Udo Di Fabio, delivered a report on "Automated and Connected Driving." 82 This report, which was published not only in German but also in English, was more than a simple report, as it came up with "[e]thical rules for automated and connected vehicular traffic." 83 Specifically for "[s]ituations involving unavoidable harm," working group 1, chaired by Eric Hilgendorf, was established. 84 While the rules of the Commission deal with various topics, 85 this Article focuses on the rules concerning moral dilemmas. Rule 1 of the Commission entails the insight that "[t]echnological development obeys the principle of personal autonomy, which means that individuals enjoy freedom of action for which they themselves are responsible." 86 Rule 2 holds that "[t]he protection of individuals takes precedence over all other utilitarian considerations. The objective is to reduce the level of harm until it is completely prevented." 87 Both "rules" will very likely meet with wide acceptance. This similarly holds true for Rule 7, stating that "[i]n hazardous situations that prove to be unavoidable, despite all technological precautions being taken, the protection of human life enjoys top priority in a balancing of legally protected interests." 88 Hence, "damage to animals or property" must never override the protection of humans. It will be hard to find someone arguing to the contrary in relation to this rule too. 89

Rule 8 is a caveat: "Genuine dilemmatic decisions, such as a decision between one human life and another, depend on the actual specific situation […]. They can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable."
90 The Commission added: "It is true that a human driver would be acting unlawfully if he killed a person in an emergency to save the lives of one or more other persons, but he would not necessarily be acting culpably. Such legal judgements, made in retrospect and taking special circumstances into account, cannot be readily transformed into abstract/general ex ante appraisals and thus not into corresponding programming activities either." 91 This is a warning that the scenarios presented by the criminal lawyers above must not be taken as direct role models.
After having emphasized the uncertain grounds for the regulation of moral dilemmas, the Ethics Commission holds, in Rule 9, that "[i]n the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited." 92Rule 9 includes, too, that "[i]t is also prohibited to offset victims against one another." 93While the "[g]eneral programming to reduce the number of personal injuries may be justifiable," the Commission added that "[t]hose parties involved in the generation of mobility risks must not sacrifice non-involved parties." 94Precisely this addition to Rule 9 is crucial for our purpose.
Interestingly, the members of the Ethics Commission could not agree on a consistent explanation for this rule. 95 On the one hand, the Ethics Commission refers to a judgment by the Federal Constitutional Court of Germany concerning a hijacked airplane, 96 stating that "the sacrifice of innocent people in favor of other potential victims is impermissible, because the innocent parties would be degraded to mere instruments and deprived of the quality as a subject." 97 On the other hand, the Commission states in relation to self-driving cars that the "identity of the injured or killed parties is not yet known," which distinguishes self-driving car scenarios from the trolley cases. If the programming, furthermore, "reduced the risk to every single road user in equal measure," then "it was also in the interests of those sacrificed before they were identifiable as such in a specific situation" to program self-driving cars to "minimize the number of victims." 98 This "could thus be justified, at any rate without breaching Article 1(1) of the [German] Basic Law." 99 The confusion is made complete by the succeeding paragraph, which reads as follows: "However, the Ethics Commission refuses to infer from this that the lives of humans can be 'offset' against those of other humans in emergency situations so that it could be permissible to sacrifice one person in order to save several others. It classifies the killing of or the infliction of serious injuries on persons by autonomous vehicle systems as being wrong without exception. Thus, even in an emergency, human lives must not be 'offset' against each other. According to this position, the individual is to be regarded as 'sacrosanct.' No obligations of solidarity must be imposed on individuals requiring them to sacrifice themselves for others, even if this is the only way to save other people."
100 The verdict in this paragraph is then called into question by the following paragraph on constellations in which "several lives are already imminently threatened." 101 Hence, in scenarios such as the one put forward by Kohler above, where a car would kill a group of three persons but a redirection might avoid killing all three and hit only one, or two, the Commission found that-despite openly admitting disagreements between the experts-minimizing harm might be permissible in this case. 102

However, what is of particular importance for our purpose is a discussion not only of moral principles and rules but of legal solutions. Due to technological progress, we must not only think hypothetically about such scenarios but implement laws which regulate the programming of self-driving cars, including the moral dilemmas they might cause or be involved in. Before we enter the discussion of the controversy on "offsetting" human lives, we will see what has been included in the German act.

V. The 2021 German Act on Autonomous Driving
The provision related to dilemma problems of self-driving cars in the 2021 German Act on Autonomous Driving, Section 1e paragraph 2 No. 2, provides that motor vehicles with an autonomous driving function must have an accident-avoidance system that: (a) "[i]s designed to avoid and reduce harm, (b) in the event of unavoidable alternative harm to different legal interests, takes into account the importance of the legal interests, with the protection of human life having the highest priority; and (c) in the case of unavoidable alternative harm to human life, does not provide for further weighting on the basis of personal characteristics." 103 Rules 2 (lit. a)) and 7 (lit. b)) of the Ethics Commission were thus directly implemented in the law. However, Rule 9 of the Ethics Commission was not fully included. While the prohibition on taking personal characteristics into account was enshrined in the law, 104 there is no general legal prohibition concerning the "offsetting" of human victims against one another. Neither is there an explicit obligation to protect as many persons as possible, even at the risk of directly harming fewer persons.
The 2022 Autonomous Driving Decree, which concretizes the 2021 Autonomous Driving Act, did not fill this lacuna. However, in response to the aforementioned statement by the Mercedes official, a so-called "Mercedes rule" has been established as a "functional requirement for motor vehicles with autonomous driving function": 105 "If, in order to avoid endangering the lives of the occupants of the motor vehicle with autonomous driving function, a collision can only be avoided by endangering the lives of other participants in the surrounding traffic or uninvolved third parties (unavoidable alternative endangerment of human life), the protection of the other participants in the surrounding traffic and uninvolved third parties must not be subordinated to the protection of the occupants of the autonomously driving motor vehicle."
Beyond these rules, no other regulation of moral dilemmas has been introduced in the law or the decree. This non-regulation is a central issue concerning the role model potential of this regulation beyond Germany, as empirical studies, briefly referred to above, found a strong moral preference in laypersons to protect the greater number of persons in the case of moral dilemmas. 106 As we have seen, it was difficult for the German Ethics Commission to come up with a straightforward proposal, so the German legislature-maybe due to the controversy in the Ethics Commission-refrained from regulating moral dilemmas in detail. This is, furthermore, understandable considering German history, the Kantian tradition, and, last but not least, human dignity as enshrined in Article 1(1) of the German Basic Law. However, these conditions are not given-at least not explicitly or to the same extent-beyond Germany, either in the rest of Europe or internationally. In the following sections, therefore, this Article will investigate whether there might be good reasons for doubting that a so-called "utilitarian programming" of self-driving cars-a programming that, in the case of moral dilemmas, would protect the greatest number of persons, even at the expense of killing someone, or some smaller number of persons-actually could be a role model for European and international regulations.
G. Potential "Utilitarian Programming" of Moral Dilemmas Involving Self-Driving Cars in Europe and the US

For the sake of simplicity, the following discussion is based on a specific example of such moral dilemmas. For this purpose, a case is assumed which is similar to the classical trolley problem case described by Foot: a car which can no longer be brought to a standstill. In the case of non-intervention, the car would steer straight into a group of five persons; only in the case of an intervention and redirection would the car avoid colliding with this group, but it would kill another person instead. The question is how to program self-driving cars for such a dilemma.

I. The Rejection of Utilitarian Programming by the Legal Literature
The position in the German legal literature is rather straightforward. According to Article 1(1) of the Basic Law of the Federal Republic of Germany, "[h]uman dignity shall be inviolable." Most legal scholars interpret this as a strict prohibition on "offsetting" human lives. 107 As far as can be seen, Iris Eisenberger is the first voice in the Austrian legal literature to take a position on the potential programming of moral dilemmas caused by self-driving cars. 108 She, too, rejected a utilitarian option for action, taking into account fundamental rights and especially Article 2 ECHR, the right to life. According to her, in a case of conflict that cannot be avoided, it is inadmissible to inscribe in an algorithm the option of action according to which the smallest number of persons would be killed. 109 To her, the "offsetting" of human lives would violate human dignity in the context of the Austrian legal order as well. 110 This position meets with opposition in Austria, however. 111 Beyond Austria and Germany, it is especially empirical studies which suggest that a large number of laypersons would actually favor minimizing harm, even at the cost of killing someone innocent.

II. When the Legal Literature Faces Opposition from Empirical Studies
The silence of the German act concerning moral dilemmas faces strong opposition from empirical studies investigating the moral preferences of laypersons. 112 These studies show a strong preference for minimizing harm, even to the extent that, in the case of moral dilemmas, the largest number of persons possible should be saved at the cost of the smaller number of human lives. 113 Empirical studies, hence, suggest that the moral intuitions of laypersons at least feed a certain skepticism concerning the strict reading of human dignity by German legal scholars, the Ethics Commission, and, finally, the German act. This is relevant, among other things, because the programming of self-driving cars should also meet with the acceptance of most persons. Tracing these intuitions, this Article questions the position taken by the German act outlined above. This critique is fed by Article 2 ECHR, which, arguably, is the criterion for a European regulation. Article 2 ECHR, in contrast to Article 1(1) of the German Basic Law, at least does not prohibit a "utilitarian" programming of self-driving cars. 114

III. A Technological Caveat: Damage Avoidance Probability and Estimated Harm Instead of Targeted Killing
First of all, it is important to emphasize that the general discussion so far has been clouded by a conflation of different scenarios. Indeed, the starting point of many considerations is a classic scenario where the offsetting of human lives is actually more realistic than in the trolley case: a plane hijacked by terrorists and the question as to what means the state may use to save the lives of innocent persons. In this consideration, it is assumed that shooting down the hijacked plane-to save threatened persons, for example, in a crowded stadium toward which the plane is heading-necessarily entails the targeted killing of the innocent passengers on board. 115

In road traffic scenarios, however, the almost certain death of innocent persons is not a prerequisite for the discussion on regulating crashes with self-driving cars. In the case of car accidents, we are not dealing with targeted killings in the vast majority of situations, excluding terrorist activities, which might actually be prevented with well-designed self-driving cars (in this case they probably even deserve to be called autonomous). Clear predictions about the probability of a collision for certain trajectories and about survival in case of a collision (estimated harm) are rather difficult to make. This is also relevant for the discussion of moral dilemmas, as programming might involve the killing of the smaller number of persons to save the larger number; ex ante, however, such an outcome is uncertain. It might thus be that, concerning crashes with self-driving cars in the case of a moral dilemma, we are, ex ante, not talking about purposefully killing one person in order to save two lives. Rather, we are "only" dealing with the question of harm avoidance probabilities, which, nevertheless, might include fatalities.
The technology for accurately predicting collisions that will affect specific persons, for determining the damage caused by such a collision, and for possible distinctions based on personal criteria such as age, profession, origin, etc. is a distant dream-or rather-nightmare. 116 The current technology of automatic object recognition is plagued with more banal mishaps. An example is the mistaken automatic recognition of objects, such as a green apple with a sticker that reads "i pod" being immediately identified as an iPod manufactured by Apple. A legal evaluation of any accident-avoidance system must thus be based on the technological possibilities. 117 It seems that the old principle that "ought" implies "can" holds true for self-driving cars as well. For the regulation of self-driving cars, this means that those conflict scenarios should be discussed first which could actually be handled technically on the roads in the near future or in the medium term. For example, in a situation in which personal injury can no longer be avoided, an algorithm might well be able to detect the number of persons potentially threatened in two different scenarios. However, whether and with what probability those persons in the two scenarios will suffer life-threatening injuries after the collision probably already goes far beyond the technological achievements that can be hoped for in the near future. Crash regulation programming of self-driving cars, thus, must not be predominantly discussed against a false background. Fictitious terrorist scenarios in which innocent persons are deliberately killed are a misleading foil for collision-avoidance systems. 118 Consequently, hypothetically equating the probabilities of killing fails to recognize the difference between a terrorist scenario and a car accident. To put it bluntly, often it is not lives that are "offset," but the probabilities of damage avoidance that are compared.
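The comparison of damage-avoidance probabilities described above can be made concrete with a minimal sketch. All numbers, names, and the two-factor harm model below are hypothetical assumptions for illustration only; no actual collision-avoidance system is implied.

```python
# Illustrative sketch only: the probabilities, the trajectories, and the
# simple expected-harm model are assumptions, not real system parameters.

def expected_harm(trajectory):
    """Sum, over affected persons, of P(collision) x P(fatal | collision)."""
    return sum(p_collision * p_fatal for p_collision, p_fatal in trajectory)

# Each tuple: (probability of hitting this person, probability the hit is fatal).
stay_on_course = [(0.9, 0.4), (0.9, 0.4), (0.9, 0.4)]  # three persons ahead
swerve = [(0.6, 0.3)]                                  # one person to the side

# Ex ante, neither option is a "targeted killing": the system compares
# damage-avoidance probabilities, yet either option may end in a fatality.
best = min([stay_on_course, swerve], key=expected_harm)
```

The point of the sketch is that what is compared ex ante are expected harms, not identified lives, even though the chosen trajectory may still kill someone.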
Having said that, however, it is also important that the law aimed at regulating self-driving cars is designed to speak to such cars. Their programming requires much more precise regulation than human behavior does. While human behavior can be addressed by general rules, self-driving cars require very specific programming. 119 In other words, to make the law talk to the code, we have to put numbers in the law. 120 The regulation on collision-avoidance systems, in particular, is not comprehensive and works with very general rules, such as avoiding and reducing harm. In the end, this leaves a lot of freedom for car manufacturers to program how self-driving cars deal with moral dilemmas. They should follow rules such as prioritizing humans over non-humans, not distinguishing on the basis of personal characteristics, and not subordinating humans outside the self-driving car to its passengers. But where these rules are followed, there remains simply a generalized harm reduction rule that can lead to many different outcomes.
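How much freedom the statute leaves can be sketched as a filter over candidate maneuvers. The data model, the names, and the residual harm-reduction tiebreak below are hypothetical assumptions, not the act's actual wording or any manufacturer's implementation.

```python
# Hypothetical sketch of the statutory constraints; the Outcome model and the
# residual harm-minimization rule are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Outcome:
    human_harm: float      # expected harm to humans, computed WITHOUT
                           # personal characteristics (the lit. (c) ban)
    property_harm: float   # harm to animals or property (lower priority)
    occupants_spared_at_third_parties_expense: bool

def lawful(o: Outcome) -> bool:
    # The "Mercedes rule": outside parties must not be subordinated
    # to the protection of the car's own occupants.
    return not o.occupants_spared_at_third_parties_expense

def choose(options: list[Outcome]) -> Outcome:
    candidates = [o for o in options if lawful(o)]
    # Lit. (a)/(b): avoid and reduce harm, human life having highest
    # priority -- beyond that, many different programs remain compliant.
    return min(candidates, key=lambda o: (o.human_harm, o.property_harm))
```

Swapping the tiebreak or the metric behind `human_harm` yields different, equally statute-compliant programs, which is precisely the manufacturer freedom the text describes.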
Following the example of the German Ethics Commission, a commission of experts appointed by the European Commission published an "Ethics of Connected and Automated Vehicles" report in September 2020. 121 With regard to the regulation of moral dilemmas, the guidelines put forward in this report-similarly to the comments made above on the probability of avoiding harm-point out that it could be difficult for self-driving cars to switch from a generally appropriate risk minimization program to a dilemma situation. Therefore, this commission decided to refrain from providing a concrete solution to all possible dilemmas and emphasized the need to involve the public in solving such scenarios. 122 And yet, despite the technological caveat mentioned above, we will still face actual moral dilemmas. 123 Therefore, we must not shy away from the question as to whether the law should prescribe the minimization of harm even in the event that someone ends up being killed. Nevertheless, it is important to mind the correct context, the likelihood of the occurrence of various scenarios, and the technological possibilities which will likely guide and limit the potential solutions.
IV. Article 2 ECHR, the "Offsetting" of Human Lives and a European-Wide Regulation of Moral Dilemmas Involving Self-Driving Cars

Insofar as-as previously claimed-the state must regulate moral dilemmas according to Article 2 ECHR, the question arises as to whether Article 2 ECHR also includes specifications for the design of the regulation. The scope of protection of Article 2 ECHR is the right to life. An interference with the scope of protection occurs through an intentional killing or a life-endangering threat. Examples include state-practiced euthanasia, a killing or disproportionate use of force in the course of official police action-for example, by neglecting a person under state supervision-or forcing someone to engage in an activity dangerous to life. 124 These examples are obviously not directly applicable to the scenario discussed here-state vehicles excepted. However, the right to life can also be violated by omission. In this regard, too, state custody is usually a prerequisite: for example, the state must not let prisoners die of thirst. 125 This does not apply to our situation either. Nevertheless, it cannot be denied that a state regulation which prescribes a probability of avoiding harm in such a way that, in a moral dilemma, the group with the larger number of persons should be protected and the group with the smaller number of persons should be sacrificed represents an encroachment on the scope of protection of Article 2 ECHR.
126 In principle, an interference with the right to life under Article 2 paragraph 2 ECHR is justified only to "(a) defend someone against unlawful violence; (b) lawfully arrest someone or prevent someone lawfully deprived of his liberty from escaping; (c) lawfully put down a riot or insurrection." 127

In Germany, the right to life is linked to the guarantee of human dignity, and an "offsetting" of lives is strictly rejected. 128 Basically, this position is strongly colored by Immanuel Kant. 129 It is questionable whether human dignity, which is also guaranteed in principle in Austrian and other European constitutions as well as in the European Convention on Human Rights, entails similarly strong specifications beyond Germany. 130 A well-known difference is that in Austria and several other jurisdictions, in contrast to Germany, the guarantee of human dignity is not explicitly anchored in the constitution. Despite manifold, specific, concrete references, there is no "comprehensive justiciable right to a dignified life" in Austria. 131
Hence, the constitutional situation that leads the prevailing opinion in German jurisprudence, and finally also the German act, to deny "utilitarian" solutions to moral dilemmas cannot be assumed in Austria. This holds true for other European legal orders as well. Neither is such a prohibition explicitly enshrined in the ECHR. 132

The argumentation of the ECtHR in the Finogenov case is instructive for utilitarian programming, including a reference to the ruling of the Federal German Constitutional Court on the Air Security Act. 133 The use of gas when rescuing hostages in the Dubrovka Theatre in Moscow, which was directed against the terrorists but also affected the hostages, did not constitute a violation of the right to life for the ECtHR, because the potentially lethal force was justified in view of the threat situation. The decisive factor here was the hostages' real chance of survival. 134 This is particularly relevant for the probability of avoiding harm as discussed above, although it must be conceded that the measure which affected the hostages was also intended to protect them.
Moreover, it must be conceded that only a ruling by the ECtHR will probably provide clarity on how these new types of moral dilemmas, which are likely to occur in the near future, can be resolved in conformity with the Convention. After all, Article 2 ECHR was not adopted with an awareness of the new moral dilemmas that will arise as a result of self-driving cars. The potential justifications for limitations to the right to life according to Article 2 paragraph 2 ECHR, such as the defense against unlawful violence, do not apply to moral dilemmas in connection with self-driving cars.
However, the ECtHR recognizes in principle, albeit in the context of exceptional circumstances, that according to Article 2 paragraph 2 ECHR, in the case of absolute necessity, the use of force resulting in death can also be justified. 135 For the scenario discussed here, it makes a considerable difference whether the state bows to the perfidious logic of terrorism and agrees, in mostly hypothetical and extremely unlikely situations, to deliberately kill innocent passengers in order to ensure the survival of a larger number of other persons, or whether the state regulates road traffic-which, day in and day out, despite measures to increase road safety, will probably always involve dangerous situations-in a way that minimizes risk. Distinguishing between the scenarios is important 136 because persons who use roads face a similar risk. For example, all persons engaging in road traffic-unlike plane passengers and stadium visitors in the case of a hijacked plane being directed into a crowded stadium-could be assigned to a hazard community. 137 It may be instructive for this take to think of all persons involved in road traffic as being behind a Rawlsian veil of ignorance 138 and then to present the options for action to these persons in such a way that they do not know whether they belong to the threatened group with fewer or more persons. 139 A harm-minimizing programming adopted ex ante for the initially anonymous hazard community of all road users could thus be justified in advance and is to be judged differently than an ex post decision in an individual case. 140 While from an ex ante perspective it is rather a statistical decision, an ex post decision makes it almost impossible to exclude personal stories from the justification of the decision.
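The veil-of-ignorance argument can be made arithmetic. In the following sketch, the 5-versus-1 dilemma and the assumption that an anonymous road user is equally likely to stand in either position are illustrative stipulations, not empirical figures.

```python
# Illustrative arithmetic only: the 5-versus-1 dilemma and equal assignment
# probabilities behind the "veil of ignorance" are assumptions.

n_large, n_small = 5, 1
n_total = n_large + n_small

# Probability of an anonymous road user ending up in each group.
p_large = n_large / n_total   # 5/6
p_small = n_small / n_total   # 1/6

# Ex ante probability of death under each programming choice:
p_death_minimize = p_small    # harm minimization sacrifices the small group
p_death_spare_one = p_large   # sparing the one sacrifices the large group

# Behind the veil, every road user fares better under harm minimization.
assert p_death_minimize < p_death_spare_one
```

On these stipulated numbers, harm-minimizing programming lowers each anonymous road user's ex ante risk of death from 5/6 to 1/6, which is the sense in which it could be "in the interests of those sacrificed before they were identifiable as such."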
Generally speaking, it can be stated in support of this argument that, taken to the extreme, the assertion that one life is worth the same as several would even lead to the extinction of humankind, because one life is not capable of reproducing itself. Basically, the law also knows how to differentiate. 141 Murder is punished differently than genocide. Also in international humanitarian law, human victims are considered as collateral damage in such a way that an attack-despite civilian victims-can be lawful as long as their number is in proportion to the military advantage. 142 Thus, Eric Hilgendorf also rightly suspects that "[t]he evaluation that one human life 'weighs' as much as several, indeed as infinitely many human lives, may only be sustained if such decisions remain textbook fictions." 143 Article 2 ECHR arguably neither prohibits nor prescribes a "utilitarian" programming for moral dilemmas involving self-driving cars, and the so-called "margin of appreciation" conceded by the ECtHR to the Member States might also play a role. 144 Yet, a decision must be taken by the law, and a justification of this decision is necessary. Either way, there will be an interference with Article 2 ECHR. If a "utilitarian" programming were to be prohibited, the state would let the larger number of persons die even though they could have been saved. If a "utilitarian" programming were to be prescribed, the state would be responsible for the deaths of the smaller group of persons. If no programming were to be mandated, as in the German act, the decision would be left to private actors. 145 No-action programming-in other words, in our scenario, not redirecting the car-would abandon the larger number of persons. Random programming would ultimately, albeit less often, still cost more lives in the aggregate than a "utilitarian" programming. 146 This would not meet the state's duty to protect according to Article 2 ECHR either; in other words, the abandonment of the larger number of persons would again be in need of justification.
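The aggregate claim about random versus "utilitarian" programming can be made concrete with a small numerical sketch. This is a hypothetical illustration only: the policy names, the group sizes, and the `simulate` helper are assumptions made for the example and are not drawn from the act or the case law. The point is simple arithmetic: if every dilemma forces a choice between a smaller and a larger group, a "utilitarian" policy always loses the smaller group, a no-action policy always loses the larger one, and a random policy loses, on average, something in between.

```python
import random

def simulate(policy, dilemmas, seed=0):
    """Sum fatalities over hypothetical dilemmas, each given as a
    (smaller_group, larger_group) pair of group sizes."""
    rng = random.Random(seed)
    total = 0
    for smaller, larger in dilemmas:
        if policy == "utilitarian":   # always sacrifice the smaller group
            total += smaller
        elif policy == "random":      # coin flip between the two groups
            total += rng.choice((smaller, larger))
        elif policy == "no_action":   # never redirect: the larger group is hit
            total += larger
    return total

# 10,000 hypothetical dilemmas: a smaller group of 1-5 persons
# versus a strictly larger group (purely invented numbers).
rng = random.Random(42)
dilemmas = []
for _ in range(10_000):
    a = rng.randint(1, 5)
    b = rng.randint(a + 1, 10)
    dilemmas.append((a, b))

u = simulate("utilitarian", dilemmas)
r = simulate("random", dilemmas)
n = simulate("no_action", dilemmas)
assert u < r < n  # utilitarian < random < no-action in aggregate fatalities
```

Under these assumptions the ordering is guaranteed: the "utilitarian" total equals the sum of the smaller groups, the no-action total equals the sum of the larger groups, and the random total falls between them, which is exactly the aggregate comparison made in the text.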
The recently amended General Safety Regulation of the European Union aimed to place the EU in the avant-garde of allowing so-called level 4 self-driving cars on European roads. 147 This regulation, too, has to stand the test of the arguments made here. Whether the German act, at least concerning the solution of moral dilemmas, is an ideal role model is questionable, however. Technically, the Commission Implementing Regulation (EU) 2022/1426 of 5 August 2022 lays down rules for the application of Regulation (EU) 2019/2144 of the European Parliament and of the Council as regards uniform procedures and technical specifications for the type-approval of the automated driving system (ADS) of fully automated vehicles. 148 Particularly interesting for our purpose is ANNEX II of this Regulation on performance requirements. In relation to the regulation of Dynamic Driving Tasks (DDT) under critical traffic scenarios-emergency operation-it reads as follows:
2.1. The ADS [Automated Driving System] shall be able to perform the DDT for all reasonably foreseeable critical traffic scenarios in the ODD [Operational Design Domain].
2.1.1. The ADS shall be able to detect the risk of collision with other road users, or a suddenly appearing obstacle (debris, lost load) and shall be able to automatically perform appropriate emergency operation (braking, evasive steering) to avoid reasonably foreseeable collisions and minimise risks to safety of the vehicle occupants and other road users.
2.1.1.1. In the event of an unavoidable alternative risk to human life, the ADS shall not provide for any weighting on the basis of personal characteristics of humans.
2.1.1.2. The protection of other human life outside the fully automated vehicle shall not be subordinated to the protection of human life inside the fully automated vehicle.
2.1.2. The vulnerability of road users involved should be taken into account by the avoidance/mitigation strategy.
These rules are in many ways similar to the German act presented before. For instance, under this regulation, too, it is clear that "weighting on the basis of personal characteristics of humans" is forbidden. However, in relation to the moral dilemma discussed here more extensively, the constellation of deciding between a larger and a smaller group of people, these rules remain as silent as the German act. Much hinges upon the interpretation of the passage prescribing that "reasonably foreseeable collisions" be avoided and that "risks to safety of the vehicle occupants and other road users" be minimized. Depending on the interpretation, this may or may not include a situation in which minimal risk stands for directing the car against a smaller group of persons in order to save a larger group. 149

G. The Regulation of Moral Dilemmas Involving Self-Driving Cars in the US

Self-driving cars-albeit so far mostly in test mode-are no novelty on roads in the US. Accordingly, accidents happen as well. As of July 1, 2022, the Department of Motor Vehicles (DMV) in the US state of California, for instance, has received 486 Autonomous Vehicle Collision Reports. 150 The regulation of self-driving cars in the US is divided between the federal level and the individual states. 151 How individual states regulate self-driving cars varies significantly. Forerunner states in terms of technological development likely also lead the regulation of this technology. Despite that, in relation to moral dilemma situations, which will become a practical problem when self-driving cars become more frequent, there is simply no regulation in the US at either the federal or the individual state level.
Take the regulation in Arizona, for example. In March 2021, the Legislature enacted HB 2813, 152 thereby establishing standards for driverless vehicles in Arizona. The law does not distinguish between test mode and the normal operation of self-driving cars. Commercial services like passenger transportation, freight transportation, and delivery operations can thus be offered. In order to use a self-driving car without a human driver, however, the operator must first submit a law enforcement interaction plan according to Section 28-9602. The car must follow federal laws and standards, comply with all traffic and vehicle safety laws and, most interestingly for our case, according to Section 28-9602 C. 1. (B), in case of failure of the automated driving system, "achieve a minimal risk condition." 153 This condition is defined as "a condition to which a human driver or an automated driving system may bring a vehicle in order to reduce the risk of a crash when a given trip cannot or should not be completed." 154 While these requirements are very understandable, moral dilemmas are simply not addressed. This is a severe shortcoming, which might impede trust in self-driving cars once they become more frequent on public roads. Therefore, a regulation is necessary. Car manufacturers will not make the first move. So far, they are interested in safety questions and do address extreme situations to some extent. Yet, they do not include moral dilemmas in their considerations. 155 Who would blame them? The nature of a dilemma is that either way, the outcome involves trouble. Such negative consequences are hard to accept and likely do not contribute to selling cars. Hence, if regulation does not force producers to deal with this issue, they will likely avoid facing it. So far in the US, states have taken the primary role in regulating self-driving cars. This might change, however. If a federal law were enacted in the future, 156 or if further state laws are adopted, it could be wise to include a provision on moral dilemmas. Thereby, important decisions about life and death would not be made by car producers but by the legislature. The provision in the German act-even though very likely not as a detailed blueprint-might serve as a role model insofar as this act is the first worldwide to actually contain a regulation of moral dilemmas. Its failure to address the dilemma discussed above, however, significantly reduces the German act's role-model potential.

H. Conclusion
Decisions about life and death are always difficult, bringing us to the edge of what the law can legitimately provide-and possibly beyond. 157 In particular, moral dilemmas as discussed here represent a problem at the intersection of ethics, technology, and the law, 158 which makes regulation even more difficult. However, to lump different scenarios of moral dilemmas together under one constitutional umbrella is not a successful strategy. It is precisely the sensitive subject matter and the threat to fundamental legal rights as well as to human life that demand an ethical and legal assessment appropriate to the situation in question. 159 Frightening terrorist scenarios, pandemic-related triage decisions, questions of organ transplantation and, precisely, the moral dilemmas caused by self-driving cars each call for their own assessment.

