As technology becomes more powerful, intelligent, and autonomous, its use also creates unintended consequences and ethical challenges for a vast array of stakeholders. The ethical implications of technology for society, for example, range from job losses (such as the potential loss of truck driver jobs due to automation) to lying and deception about a product, whether within a technology firm or on user-generated content platforms. The challenges around ethical technology design are so multifaceted that there is an essential need for each stakeholder to accept responsibility. Even policymakers, who are charged with providing the appropriate regulatory framework and legislation about technologies, have an obligation to learn about the pros and cons of proposed options.
There is a delicate balance associated with ethics and privacy in “enterprise continuous monitoring systems.” On the one hand, it can be critical for enterprises to continuously monitor the ethical behavior of different agents and thus facilitate enterprise risk management, as noted in the KPMG quote. In particular, continuous monitoring systems help firms monitor related internal and external agents to make sure that the agents hired or engaged by the enterprise are behaving ethically. On the other hand, such continuous monitoring systems can pose ethical and privacy risks to those being monitored and impose risks and costs on the company doing the monitoring. For example, inappropriate information can be assembled, stored, and inferred about a range of individuals. Thus, information obtained by continuous monitoring generally should follow privacy principles, which require, among other constraints, that the data be kept up-to-date and used only for the purpose for which it was originally gathered.
When the US Army established the Institute for Creative Technologies (ICT) at the University of Southern California in 1999, the vision was to push the boundaries of immersive technologies for the purpose of enhancing training and education, not only for the military but also for the rest of society. Over the past two decades great progress has been made on the technologies that support this vision. Breakthroughs in graphics, computer vision, artificial intelligence (AI), affective computing, and mixed reality have already transformed how we interact with one another, with digital media, and with the world. Yet this is in many ways only a starting point, since the application of these technologies is just beginning to be realized. The potential for making a positive impact on individuals and society is great, but there is also the possibility of misuse. This chapter describes some of the capabilities underlying the emerging field of immersive digital media; provides a couple of examples of how they can be used in a positive way; and then discusses the inherent dangers from an ethical standpoint.
Each year hundreds of new biomedical devices and therapies are developed to address unmet medical needs. However, many fail due to unforeseen challenges arising from complex ethical, regulatory, and societal issues. We propose that a number of these issues can be effectively transformed into drivers of innovation for medical solutions if ethical analysis is considered early, iteratively, and comprehensively in the research and development process.
This chapter is built around ethical distinctions. Clarity in ethical thought depends on the clarity of the distinctions we make when discussing ethical issues, and achieving clarity and consistency in ethical behavior requires understanding some basic distinctions.
In an essay about his science fiction, Isaac Asimov reflected that “it became very common … to picture robots as dangerous devices that invariably destroyed their creators.” He rejected this view and formulated the “laws of robotics,” aimed at ensuring the safety and benevolence of robotic systems. Asimov’s stories about the relationship between people and robots were only a few years old when the phrase “artificial intelligence” (AI) was used for the first time in a 1955 proposal for a study on using computers to “solve kinds of problems now reserved for humans.” Over the half-century since that study, AI has matured into sub-disciplines that have yielded a constellation of methods that enable perception, learning, reasoning, and natural language understanding.
The development and popularity of computer-mediated communication (CMC), social network sites (SNSs), and social media communication (SMC) sparked twenty-first-century ethical dilemmas (Barnes, 2003; Patching & Hirst, 2014). At the heart of social media ethical concerns are data privacy and ownership. The fallout from the Cambridge Analytica data breach on Facebook, which followed a class action settlement in 2012 over the Beacon program, offers clear evidence that lack of user consent over gathering and disseminating information is a long-standing problem (Terelli, Jr. & Splichal, 2018). Facebook appears to have made the problem worse by granting outside, third-party applications (“apps”) access to user data and by allowing users’ friends to further weaken privacy (Stratton, 2014).
Twenty-first-century innovations in technical fields designed for human consumption, and ultimately as daily-life necessities, such as personal robots, intelligent implants, driverless cars, and drones, require innovations in ethical standards, laws, and rules of ethics. Ethical issues around robots and artificial intelligence (AI), for example, present a new set of challenges arising from the new capabilities they afford. These capabilities outpace existing law and policy on ethics. Tesla and SpaceX CEO Elon Musk recently warned the governors of the United States that “robots will do everything better than us” and that “AI is a fundamental existential risk for human civilization.” He called for proactive government regulation of AI: “I think by the time we are reactive in AI regulation, it’s too late” (Domonoske, 2017).
The concerns and corporate practice of business ethics have evolved over the past sixty years. But none of the changes of the past are as great as those that will occur in the next ten years as artificial intelligence (AI) and machine learning become ubiquitous tools in American society. This chapter presents a concise history of corporate attention to business ethics over this historical period in order to identify how “next-generation business ethics” will demonstrate both continuity with and divergence from past attention to business ethics.
In recent times, both journalism and who is defined as a journalist have undergone significant change. With the growth of the internet, and the subsequent ability of anyone with a smartphone camera and a web connection to publish, the business model of journalism that had remained stable for decades has been declared broken, and the public service model of journalism is under threat. Meanwhile, a US president communicates via Twitter; Facebook Live spreads news while the mainstream media scramble to keep up.
Some of the significant features of our era include the design of large-scale systems; advances in medicine, manufacturing, and artificial intelligence (AI); the role of social media in influencing behavior and toppling governments; and the surge of online transactions that are replacing human face-to-face interactions. Most of these features have resulted from advances in technology. While spanning a variety of disciplines, these features also have two important aspects in common: the necessity for sound decision-making about the technology that is evolving, and the need to understand the ethical implications of these decisions for all stakeholders.
Numerous engineering projects create products and services that are important to society; many have explicit safety implications; some are distinguished by explicitly supporting national security. Failures and deficiencies that might be considered “routine” in some settings can in these cases directly cause injuries and lost lives, in addition to harming national security. In such a setting, decisions regarding quality, testing, reliability, and other “engineering” matters can become ethical decisions, where balancing cost and delivery schedule, for example, against marginal risks and qualities is not a sufficient basis for a decision. When operating in the context of an engineering project with such important societal implications, established engineering processes must therefore be supplemented with additional considerations and decision factors. In this chapter, long-time defense contractor executive and US National Academy of Engineering member Neil Siegel discusses specific examples of ways in which these ethical considerations manifest themselves. The chapter starts with his thesis, asserting that bad engineering risks transitioning into bad ethics under certain circumstances, which are described in the chapter. It then uses a story from the NASA manned space program to illustrate the thesis; unlike some stories, this one has a “happy ending.” The author then moves to the main aspects of the chapter, starting by explaining the behavioral, evolutionary, and situational factors that can tempt engineers into unethical behavior: how do engineers get into situations of ethical lapse? No one enters a career in engineering intending to put lives and missions at risk through ethical lapses; at the very least, this is not the path to promotion and positive career recognition. With the basis for such behavior established, the author then defines what he calls the characteristics of modern systems that create risk of ethical lapse; he identifies five specific traits of modern societal systems – systems of the sort that today’s engineers are likely to be engaged in building – as being those that can allow people to slip from bad engineering into bad ethics. These characteristics are then illustrated with examples from everyday engineering situations, such as working to ensure the reliability of the electric power grid and designing today’s automobiles. The very complexities and richness of features that distinguish many of today’s products and critical societal systems are shown to become a channel through which bad engineering can transition into bad ethics. Lastly, the chapter discusses some of the author’s ideas about how to correct these situations and guard against these temptations.
Over the last decade, I have served as the Dean of Religious Life at the University of Southern California (USC), where I oversee more than ninety student religious groups and more than fifty chaplains on campus, collectively representing all the world’s great religious traditions as well as many humanist, spiritual, and denominational perspectives. I also have the great privilege to do this work on a campus with more international students than almost any other university in the United States, in the heart of Los Angeles, the most religiously diverse city in human history (Loskota, 2015). As a result, the opportunities to think deeply about geo-religious diversity, interfaith engagement, and global ethics are unparalleled at USC (Mayhew, Rockenbach, & Bowman, 2016).
The past few years have seen a remarkable amount of attention on the long-term future of artificial intelligence (AI). Icons of science and technology such as Stephen Hawking (Cellan-Jones, 2014), Elon Musk (Musk, 2014), and Bill Gates (Gates, 2015) have expressed concern that superintelligent AI may wipe out humanity in the long run. Stuart Russell, coauthor of the most-cited textbook of AI (Russell & Norvig, 2003), recently began prolifically advocating (Dafoe & Russell, 2016) for the field of AI to take this possibility seriously. AI conferences now frequently have panels and workshops on the topic. There has been an outpouring of support from many leading AI researchers for an open letter calling for greatly increased research dedicated to ensuring that increasingly capable AI remains “robust and beneficial,” and gradually a field of “AI safety” is coming into being (Pistono & Yampolskiy, 2016; Yampolskiy, 2016, 2018; Yampolskiy & Spellchecker, 2016). Why all this attention?
This chapter is a “case study,” that is, a collection of facts organized into a story (the case) analyzed to yield one or more lessons (the study). Collecting facts is always a problem. There is no end of facts. Even a small event in the distant past may yield a surprise or two if one looks carefully enough. But the problem of collecting facts is especially severe when the facts change almost daily as the story “unfolds” in the news. One must either stop collecting on some arbitrarily chosen day or go on collecting indefinitely. I stopped collecting on October 3, 2016 (the day on which I first passed this chapter to the editor of this volume). There is undoubtedly much to be learned from the facts uncovered since then, but this chapter leaves to others the collecting and analyzing of those newer facts. The story I tell is good enough for the use I make of it here – and for future generations to consider. Increasingly, whistleblowing is being understood to be part of the professional responsibilities of an engineer.
This chapter presents reflections on next-generation ethical issues by four deans at the University of Southern California: Public Policy, Medicine, Business, and Engineering. Each of the deans was asked to reflect on some of the important ethical issues that they believe we face today or that we will face in the near future. Their responses follow.