Some of the significant features of our era include the design of large-scale systems; advances in medicine, manufacturing, and artificial intelligence (AI); the role of social media in influencing behavior and toppling governments; and the surge of online transactions that are replacing face-to-face human interactions. Most of these features have resulted from advances in technology. While spanning a variety of disciplines, these features also have two important aspects in common: the necessity for sound decision-making about the evolving technology, and the need to understand the ethical implications of these decisions for all stakeholders.
Numerous engineering projects create products and services that are important to society; many have explicit safety implications; some are distinguished by explicitly supporting national security. Failures and deficiencies that might be considered “routine” in other settings can, in these cases, directly cause injuries and lost lives, in addition to harming national security. In such a setting, decisions regarding quality, testing, reliability, and other “engineering” matters can become ethical decisions, where balancing cost and delivery schedule, for example, against marginal risks and qualities is not a sufficient basis for a decision. In the context of an engineering project with such important societal implications, established engineering processes must therefore be supplemented with additional considerations and decision factors. In this chapter, long-time defense contractor executive and US National Academy of Engineering member Neil Siegel discusses specific examples of ways in which these ethical considerations manifest themselves. The chapter starts with his thesis, asserting that bad engineering risks transitioning into bad ethics under certain circumstances, which are described in the chapter. It then uses a story from the NASA manned space program to illustrate the thesis; unlike some such stories, this one has a “happy ending.” The author then moves to the main aspects of the chapter, starting by explaining the behavioral, evolutionary, and situational factors that can tempt engineers into unethical behavior: how do engineers get into situations of ethical lapse? No one enters a career in engineering intending to put lives and missions at risk through ethical lapses; at the very least, this is not the path to promotion and positive career recognition. With the basis for such behavior established, the author then defines what he calls the characteristics of modern systems that create risk of ethical lapse; he identifies five specific traits of modern societal systems – systems of the sort that today’s engineers are likely to be engaged in building – as those that can allow people to slip from bad engineering into bad ethics. These characteristics are then illustrated with examples from everyday engineering situations, such as ensuring the reliability of the electric power grid and designing today’s automobiles. The very complexity and richness of features that distinguish many of today’s products and critical societal systems are shown to become a channel through which bad engineering can transition into bad ethics. Lastly, the chapter discusses some of the author’s ideas about how to correct these situations and guard against these temptations.
Over the last decade, I have served as the Dean of Religious Life at the University of Southern California (USC), where I oversee more than ninety student religious groups and more than fifty campus chaplains, collectively representing all the world’s great religious traditions as well as many humanist, spiritual, and denominational perspectives. I also have the great privilege to do this work on a campus with more international students than almost any other university in the United States, in the heart of Los Angeles, the most religiously diverse city in human history (Loskota, 2015). As a result, the opportunities to think deeply about geo-religious diversity, interfaith engagement, and global ethics are unparalleled at USC (Mayhew, Rockenbach, & Bowman, 2016).
The past few years have seen a remarkable amount of attention on the long-term future of artificial intelligence (AI). Icons of science and technology such as Stephen Hawking (Cellan-Jones, 2014), Elon Musk (Musk, 2014), and Bill Gates (Gates, 2015) have expressed concern that superintelligent AI may wipe out humanity in the long run. Stuart Russell, coauthor of the most-cited AI textbook (Russell & Norvig, 2003), has recently begun advocating prolifically (Dafoe & Russell, 2016) for the field to take this possibility seriously. AI conferences now frequently feature panels and workshops on the topic. There has been an outpouring of support from many leading AI researchers for an open letter calling for greatly increased research dedicated to ensuring that increasingly capable AI remains “robust and beneficial,” and a field of “AI safety” is gradually coming into being (Pistono & Yampolskiy, 2016; Yampolskiy, 2016, 2018; Yampolskiy & Spellchecker, 2016). Why all this attention?
This chapter is a “case study,” that is, a collection of facts organized into a story (the case) and analyzed to yield one or more lessons (the study). Collecting facts is always a problem. There is no end of facts. Even a small event in the distant past may yield a surprise or two if one looks carefully enough. But the problem of collecting facts is especially severe when the facts change almost daily as the story “unfolds” in the news. One must either stop collecting on some arbitrarily chosen day or go on collecting indefinitely. I stopped collecting on October 3, 2016 (the day on which I first passed this chapter to the editor of this volume). There is undoubtedly much to be learned from the facts uncovered since then, but this chapter leaves to others the collecting and analyzing of those newer facts. The story I tell is good enough for the use I make of it here – and for future generations to consider. Increasingly, whistleblowing is understood to be part of an engineer’s professional responsibilities.
This chapter presents reflections on next-generation ethical issues by four deans at the University of Southern California, representing the schools of Public Policy, Medicine, Business, and Engineering. Each dean was asked to reflect on some of the important ethical issues that they believe we face today or will face in the near future. Their responses follow.
The way people work in teams is changing. The changes are affecting what work teams look like and how those teams function. In years past, people worked for the same organization for many years, perhaps even their whole careers (see Sullivan, 1999, for a review). Because their colleagues also stayed in the same organizations for many years, they were likely to work on teams with relatively stable memberships. This has changed. People now switch employers more frequently, and they change roles within organizations more often (Miles & Snow, 1996; Rousseau & Wade-Benzoni, 1995). They are also more likely to work as independent contractors rather than as company employees, and to seek to develop a “boundaryless career,” defined as “a sequence of job opportunities that go beyond the boundaries of a single employment setting” (DeFillippi & Arthur, 1996, p. 116).
The study of cyberethics represents an evolution of computer ethics. When the computer first appeared, it was seen as a “revolutionary machine” because of the scale of its activities and its capability to “solve” certain problems with the help of sophisticated software. Attention soon focused on the disruptive potential of databases, vexing questions about software ownership, and the “hacker ethic.” Traditional moral concepts and values such as responsibility, privacy, and freedom had to be creatively adapted to this new reality (Johnson & Nissenbaum, 1995).
Engineers operate under constraints and obligations established by codes of ethics or professional responsibility, which are maintained by the professional organizations to which they belong and by state government authorities. At times, engineers view these constraints and obligations as limitations or barriers. It is important to recognize, however, that these codes can also work to the benefit of the engineers governed by their terms. The codes of ethics of professional organizations and state authorities can serve a defensive and empowering function by providing a basis for preserving engineers’ legal rights and by reducing their risk of personal liability for misconduct. Engineers should understand thoroughly the ethical obligations established by these codes and should identify the provisions they can apply in their daily practice to help establish and document their personal defenses against potential future claims of misconduct.
Examinations of whether particular actions or intentions are ethical must grapple with the questions of what the applicable ethical standards are, how people are likely to act in particular situations, and why. These are difficult questions and the subject of much inquiry. Many schools of thought and disciplines offer answers, from religious traditions to philosophy to psychology.
This chapter presents an interview with Vint Cerf, one of the fathers of the Internet. It covers his personal views on ethics, data and privacy, net neutrality, public policy, self-driving cars, and genetic codes, along with his reflections on the future.
The global engineering/construction industry is huge. In 2017, it was estimated to be an $8.8 trillion industry (Market Research Hub, 2016); the US construction industry alone was estimated at $1.2 trillion (Wilcox, 2018). Because the industry comprises myriad projects to build new facilities or to repair or upgrade existing ones, it is often a locus of bribery, fraud, and corruption. Government leaders in Panama, Brazil, and Spain have been removed from office for receiving bribes and kickbacks from projects in their countries. Engineering firms in the United States and Canada have been sanctioned for giving bribes to secure projects. These are the facts.
In an era of corporate mistrust, creating sustainable ethical corporations goes beyond implementing a governance, risk, and compliance (GRC) strategy. It requires an ongoing, intensified spotlight that makes the highest ethical standards the norm, and a ruthless intolerance of anything less. Corporations are at a tipping point, seeking to build sustainable businesses while striving to avoid a front-page scandal. They are placing greater scrutiny on values as a business enabler, on leadership accountability, and on embedding ethical decision-making as an integrated business process. The next generation of ethical systems is at our corporate doorstep. As Albert Einstein famously said, “we cannot solve problems by using the same kind of thinking we used when we created them.” Today’s workplace has an unprecedented four generations working alongside one another. Globalization and the flattened twenty-first-century economy have pivotally shifted the norms of communication, information sharing, and collaboration. Greater visibility through mass media and social media has revealed new consumer and corporate behaviors. With greater transparency at our fingertips, trust has become the new currency, as evidenced by the backlash as trust in public officials and corporate leaders steadily declines. The Edelman Trust Barometer has studied trust in four institutions since 2012: business, government, nongovernmental organizations (NGOs), and media. Its 2017 report reveals that trust has declined broadly across all four institutions and is in crisis around the world.