The tax system incentivizes automation, even in cases where it is not otherwise efficient. This is because the vast majority of tax revenue is derived from labor income. When an AI replaces a person, the government loses a substantial amount of tax revenue, potentially hundreds of billions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once labor is capital. Robots are not good taxpayers. The solution is to change the tax system to be more neutral between AI and human workers and to limit automation's impact on tax revenue. This would be best achieved by reducing taxes on human workers and increasing corporate and capital taxes.
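To see the revenue wedge in a stylized way, consider the toy comparison sketched below in Python. Every figure is hypothetical, loosely patterned on US-style income, payroll, and corporate rates; none is taken from the chapter.

```python
# Stylized illustration of the automation tax wedge described above.
# All rates and amounts are hypothetical, for illustration only.

WAGE = 50_000.00          # annual wage of the displaced worker
INCOME_TAX_RATE = 0.15    # worker's average income tax rate
PAYROLL_TAX_RATE = 0.153  # combined employer + employee payroll tax

MACHINE_PROFIT = 50_000.00  # extra profit if a machine does the same work
CORPORATE_TAX_RATE = 0.21   # tax on that profit (ignoring depreciation
                            # deductions, which would lower it further)

# Revenue the government collects in each scenario.
human_revenue = WAGE * (INCOME_TAX_RATE + PAYROLL_TAX_RATE)
machine_revenue = MACHINE_PROFIT * CORPORATE_TAX_RATE

print(f"Tax revenue with a human worker:  ${human_revenue:,.0f}")
print(f"Tax revenue after automation:     ${machine_revenue:,.0f}")
print(f"Revenue lost per worker replaced: ${human_revenue - machine_revenue:,.0f}")
# Tax revenue with a human worker:  $15,150
# Tax revenue after automation:     $10,500
# Revenue lost per worker replaced: $4,650
```

Multiplied across millions of displaced workers, per-worker gaps of this kind are how automation can erode aggregate tax revenue on the scale the chapter describes.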
This chapter explains the need for AI legal neutrality and discusses its benefits and limitations. It then provides an overview of its application in tax, tort, intellectual property, and criminal law. Law is vitally important to the development of AI, and AI will have a transformative effect on the law, given that many legal rules are based on standards of human behavior and much of that behavior will soon be automated. As AI increasingly steps into the shoes of people, it will need to be treated more like a person, and more importantly, people will sometimes need to be treated more like AI.
This chapter defines artificial intelligence and discusses its history and evolution, explains the differences between the major types of AI (symbolic/classical and connectionist), and describes AI's most recent advances, applications, and impact. It also weighs in on the question of whether AI can "think," noting that the question matters little to regulatory efforts, which should focus on promoting behaviors that improve social outcomes.
AI has the potential to be substantially safer than people. Self-driving cars will cause accidents, but they will cause fewer accidents than people do. Because automation will result in substantial safety benefits, tort law should encourage its adoption as a means of accident prevention. Under current law, suppliers of AI tortfeasors are strictly liable for the harms their AI causes. A better system would hold them liable in negligence for harms caused by AI tortfeasors. Not only would this encourage the use of AI once it exceeds human performance, but the liability test would also focus on activity rather than design, which would be simpler to administer. More importantly, just as AI activity should be discouraged when it is less safe than a person's, human activity should be discouraged when it is less safe than an AI's. Once AI is safer than a person and automation is practicable, human tortfeasors should be held to the standard of AI behavior.
The impact of artificial inventors is only starting to be felt, but AI's rapid improvement means that it may soon outdo people at solving problems in certain areas. This should revolutionize not only research and development but also patent law. The most important requirement for being granted a patent is that an invention be nonobvious to a hypothetical skilled person, who represents an average researcher. As AI increasingly augments average researchers, it should make them more knowledgeable and sophisticated. In turn, this should raise the bar to patentability. Once inventive AI moves from augmenting to automating average researchers, it should directly represent the skilled person in obviousness determinations. As inventive AI continues to improve, this should continue to raise the bar to patentability, eventually rendering even innovative activities obvious. To a superintelligent AI, everything will be obvious.
This chapter concludes by responding to some of the controversies about artificial intelligence and possible criticisms of AI legal neutrality. It argues that AI legal neutrality is important regardless of whether AI broadly achieves superhuman performance, and that the law should not constrain AI development for protectionist reasons. It further argues that AI legal neutrality is a coherent principle for policymakers to apply, even though it allows the law to treat AI and people differently and will sometimes be at odds with other regulatory goals. Finally, it discusses some of the risks and dangers of AI and argues that these can be managed with appropriate legal frameworks.
Criminal law falls short in cases where an AI functionally commits a crime and no individual is criminally liable. This chapter explores potential solutions to this problem, with a focus on holding AI directly criminally liable where it acts autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a voluntary act. Drawing on analogies to corporate and strict criminal liability, the chapter shows that AI punishment cannot be categorically ruled out with quick theoretical arguments. AI punishment could yield general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.
AI is generating patentable inventions without any person involved who qualifies as an inventor. Yet there are no rules about whether such an invention can be patented, who or what can qualify as an inventor, and who may own the resulting patents. Some laws require that inventors be natural persons, but these predate inventive AI and were never intended to prohibit patents on AI-generated inventions. AI-generated inventions should be patentable because this will incentivize the development of inventive AI and result in more benefits for everyone. When an AI invents, the AI should be listed as the inventor, because listing a person who did not actually invent would be unfair to legitimate inventors. Finally, an AI's owner should own any patents on its output, in the same way that people own other types of machine output. The chapter proceeds to address a host of challenges that would result from AI inventorship, ranging from ownership of AI-generated inventions and displacement of human inventors to the need for consumer protection policies.
AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law.
Advances in technologies that were unimaginable a century ago have helped establish today's high standards of living. Undoubtedly, the oil and gas industry has played a pivotal role in this respect. Since the advent of the petroleum industry more than a century ago, oil and gas have powered new factories and revolutionized industries such as transportation and power generation. Liquid fuels have transformed transportation and brought communities closer together. With the invention of air travel and personal vehicles, reliance on liquid and gaseous fuels has touched the lives of virtually every person in the world.
In the 1990s, British writers began using "transparency" as a portmanteau word to describe that desirable state of organizational management and governance characterized by candor, openness, honesty, clarity, legal compliance, and full disclosure (Handy, 1990). At first, the word didn't take hold on this side of the Atlantic, perhaps because it was too vague and philosophical for American tastes in managerial buzzwords (which tend to run more to the precise and practical).
Given the rapid rate of technological innovation and a desire to be proactive in addressing potential ethical challenges that arise in contexts of innovation, engineers must learn to engage in value-sensitive design – design that is responsive to the broad range of values implicated in the research, development, and application of technologies. One widely used tool is Life Cycle Assessment (LCA). Physical products, like organisms, have a life cycle, starting with the extraction of raw materials and continuing through refining, transport, manufacturing, use, and finally end-of-life treatment and disposal. LCA is a quantitative modeling framework that can estimate the emissions that occur throughout a product's life cycle, as well as any harmful effects those emissions have on the environment and/or public health. Importantly, LCA tools allow engineers to evaluate multiple types of environmental and health impacts simultaneously and are not limited to a single endpoint or score. However, LCA is only useful to the extent that its models accurately capture the full range of values implicated in the use of a technology, and to the extent that stakeholders, from designers to decision-makers, understand and are able to communicate these values and how they are assigned. Effective LCA therefore requires good ethical training to understand these values.
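A minimal sketch in Python can make that structure concrete: emissions are summed across life-cycle stages and then weighted into separate impact categories rather than collapsed into a single score. The stages follow the list above, but every inventory figure, pollutant, and characterization factor below is hypothetical, chosen only for illustration.

```python
# Toy life-cycle assessment (LCA) sketch. All stage inventories and
# characterization factors are hypothetical, for illustration only.

# Emissions inventory per life-cycle stage (kg of pollutant per unit product).
stage_inventory = {
    "raw_material_extraction": {"CO2": 12.0, "SO2": 0.04},
    "refining":                {"CO2": 8.5,  "SO2": 0.09},
    "transport":               {"CO2": 3.2,  "SO2": 0.01},
    "manufacturing":           {"CO2": 20.0, "SO2": 0.02},
    "use":                     {"CO2": 40.0, "SO2": 0.00},
    "end_of_life":             {"CO2": 1.5,  "SO2": 0.00},
}

# Characterization factors map each pollutant onto impact categories,
# so results are reported per endpoint rather than as one collapsed score.
characterization = {
    "climate_change_kgCO2eq": {"CO2": 1.0, "SO2": 0.0},
    "acidification_kgSO2eq":  {"CO2": 0.0, "SO2": 1.0},
}

def lca_impacts(inventory, factors):
    """Sum emissions over all stages, then weight them into impact categories."""
    totals = {}
    for stage_emissions in inventory.values():
        for pollutant, mass in stage_emissions.items():
            totals[pollutant] = totals.get(pollutant, 0.0) + mass
    return {
        category: round(sum(weights.get(p, 0.0) * m for p, m in totals.items()), 3)
        for category, weights in factors.items()  # rounded for readable output
    }

print(lca_impacts(stage_inventory, characterization))
# {'climate_change_kgCO2eq': 85.2, 'acidification_kgSO2eq': 0.16}
```

In practice, the inventories and characterization factors would come from measured data and established impact-assessment methods rather than hand-entered constants, and choosing which impact categories to model at all is exactly where the value judgments discussed above enter.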
There was a time when we were all six-sigma-ing. We did so because Jack Welch had bought into the six-sigma phenomenon and he had created a phenomenally performing General Electric (GE). Then we moved along from good to great to the search for excellence to becoming great by choice to whatever superlative Jim Collins told us was the way to a company that was built to last. Then someone moved our cheese. We had no time for that because we were just one-minute managers. We smoothed earnings, incentivized employees, and created three tiers of employees – including getting rid of the bottom tier of employees, whether they deserved termination or kudos. We all wanted to be part of the Fortune 100, the Fortune Most Admired Companies, even as we were led by Fortune CEOs and CFOs of the year – many of whom ended up doing time.
Modern engineering and technology have allowed us to connect with each other and even to reach the moon. But technology has also polluted vast areas of the planet and handed authoritarian governments dangerous tools of surveillance. Engineers and other stakeholders routinely ask not only what they are capable of inventing but what they actually should invent; nuclear weapons and biotechnology are two prominent examples. But when analyzing the transformations arising from less controversial modern socio-technological tools – like the Internet, smartphones, and connected devices, which augment and define our work and social practices – two very distinct areas of responsibility become apparent. On the one hand, a question arises around the values and practices of the engineers who create the technologies: what values should guide their endeavors, and how can society promote good conduct? On the other hand, there are questions regarding the effects of people using these technologies. While engineering and design choices can either promote or hinder commendable social behavior and appropriate use, this chapter will focus on the first question.