How is liability apportioned between the surgeon and the robot in robot-assisted surgery? In Switzerland, a surgeon’s criminal responsibility rests on an obligation of due diligence. It is generally assumed that due diligence and responsibility are divisible among team members and that each individual is responsible for their own actions. The principle of trust (“Vertrauensgrundsatz”) also establishes that a member of a team may trust other members to do their job. The issue discussed in this chapter is the degree of due diligence owed by surgeons who cooperate with robots. In Switzerland, the principle of trust is not applied to a robot assistant, if only because robots cannot be criminally liable themselves. Apart from complete robot failures, surgeons therefore bear the risk of patient injury from surgical robots, in order to avoid a responsibility gap in the law. However, given that surgical robots benefit patients and are becoming the expected standard of care in certain areas, the chapter argues that the principle of trust should be applied to limit the due diligence expected from a surgeon interacting with a robot, if the robot has been appropriately certified.
This chapter discusses a Norwegian criminal case (2010–12), which concerned the exploitation of a malfunctioning trading robot by two day-traders, in light of underlying narratives about the human–robot relationship. The central argument is that legal decision-making in this case is animated by two different underlying narratives about robots: In the first, the robot is seen as a real agent operating in the market. In the second, it is viewed as a mere tool in the hands of human agents. The outcome of the case depends to a large degree on which of these narratives the court chooses as the basis of its deliberation.
This chapter urges justice systems to recognize four key rights of the accused with respect to digital and machine evidence of guilt or innocence, in line with systemic goals such as accuracy, fairness, dignity, and public legitimacy: (1) front-end development and testing safeguards to minimize error and bias (based on a consensus view of algorithmic fairness); (2) meaningful and equitable pretrial access, including disclosure requirements, eliminating trade secret privileges, allowing defense testing, and defense access to potentially exculpatory technologies; (3) contestation, including a right to be heard in the development and testing process and access to experts both to review government evidence and develop defense evidence; and (4) a factfinding process that is epistemically competent but that also keeps a human in the loop, to protect equity and mercy, avoid automation complacency, and reject dehumanizing technologies. The chapter offers proposals to operationalize each right and discusses how each would apply to the various categories of digital and machine evidence discussed in Chapters 1–6.
This introduction lays out various aspects concerning robots’ entanglement with substantive law, including an all-round view of the criminal liability of humans for robots, the criminal responsibility of robots themselves, self-defense against robots, and robots as victims of crime. While Janneke de Snaijer and Marta Bo discuss specific aspects of criminal liability and exemptions therefrom in their chapter, Thomas Weigend analyzes the looming “responsibility gap” and the option of expanding the idea of corporate criminal responsibility to cover harm caused by AI devices. This is one aspect of a preventive, repressive, and long-term perspective on how criminal law can shape human–robot interaction, but also possibly an example of how the wish to regulate robots could affect criminal law itself.
Digital traces that people leave behind can be useful evidence in criminal courts. However, in many jurisdictions, the legal provisions setting the rules for the use of evidence in criminal courts were formulated long before these digital technologies existed, and there seems to be an increasing discrepancy between legal frameworks and actual practices. This chapter investigates this disconnect by analyzing the relevant legal frameworks in the EU for processing data in criminal courts, and comparing and contrasting these with actual court practices. The relevant legal frameworks are criminal and data protection law. Data protection law is mostly harmonized throughout the EU, but since criminal law is mostly national law, this chapter focuses on criminal law in the Netherlands. We conclude that existing legal frameworks do not appear to obstruct the collection of data for evidence, but that regulation on collection in criminal law and regulation on processing and analysis in data protection law are not integrated. We also characterize as remarkable the almost complete absence of regulation of automated data analysis – in contrast with the many rules for data collection.
Historically, criminal offenses were proved through witness testimony, physical evidence, confessions, and rudimentary forensic techniques such as fingerprinting. But with the dawn of the digital era, prosecutors have increasingly relied on evidence gleaned from the modern arsenal of consumer technologies, such as cell phones or automated systems. Although much has been written about prosecutors’ use of such evidence to prove a defendant’s guilt, far less attention has been given to the challenges faced by the defense in accessing, presenting, or attacking forms of proof derived from sophisticated consumer technologies. This chapter aims to fill that gap, first by presenting a taxonomy of digital proof and then by isolating the critical characteristics of such evidence. The chapter suggests that this taxonomy can support efforts to formalize and standardize a defendant’s ability to marshal defense evidence for exculpatory and adversarial purposes as readily as the government does to inculpate.
In human–robot interactions in legal proceedings, human responses to robot-generated evidence will present unique challenges to the accuracy of litigation as well as ancillary goals such as fairness and transparency, though such evidence may also enhance accuracy in other respects. The most important feature of human–robot interactions is the human tendency to anthropomorphize robots, which can generate misleading impressions and be manipulated by designing robots to make them appear more trustworthy and believable. Although robot-generated evidence may also offer unique advantages, there are concerns about the degree to which the traditional methods of testing the accuracy of evidence, particularly cross-examination, will be effective. We explore these phenomena in the autonomous vehicles context, comparing the forums of litigation, alternative dispute resolution, and the National Transportation Safety Board. We suggest that the presence of expert decision-makers might help mitigate some of the problems with human–robot interactions, though other aspects of the procedures in each of the forums still raise concerns.