4 Obstacles to ethical decision-making in the perception of ethical context
I. Introduction
In Chapters 4 and 5, we examine how the construction of mental models, defined and illustrated in Chapter 2, may devolve into the formation of barriers or obstacles that ultimately prevent decision-makers from reaching ethical decisions. While mental models serve to conceptualize, focus, and shape our experiences, in so doing they sometimes cause us to ignore data and to occlude critical reflection that might be relevant or, indeed, necessary to practical decision-making. We argue that distorting mental models are the foundation or underpinning of impediments to effective ethical decision-making. Chapters 4 and 5 conceptualize ethical decision-making as a multi-stage process, and investigate the many ways in which this process is thwarted by obstacles such as bounded awareness (Bazerman and Chugh, Reference Bazerman and Chugh2006), blind spots (Moberg, Reference Moberg2006), conformity (Asch, Reference Asch1955, Reference Asch and Guetzkow1951), obedience to authority (Milgram, Reference Milgram1974), and others.
We construct mental models – and become habituated to the experiences they enable – because they embody our history and our experiences; they are inherently (tautologically) familiar; and, somehow, they benefit us. Yet, mental models, even – and sometimes, especially – those that make us feel comfortable, happy and productive in our roles as employees, managers, and leaders, can devolve into barriers to ethical decision-making when they discourage attention to the fundamental vulnerabilities of our own processes. In this chapter, we propose a broad conceptual division of the ethical decision-making processes into five steps in order more precisely to identify the interference of distorting mental models in the development of conscientious and responsible resolutions to ethical challenges. We then examine common impediments to the initial two steps of this decision-making model, which describe the process of perceiving a situation or conflict as a dilemma that calls for an ethical response. In Chapter 5, we will turn to a discussion of mental models that prevent the execution of the remaining steps of ethical deliberation, the practice of critical reflection crucial for crafting a responsible decision that maintains fidelity to consciously held, personal values.
II. Ethical decision-making as a process: A mental models approach
Under optimal conditions, we reach decisions through an ethical decision-making process. Broadly speaking, one formulation of this process is framed as follows. A decision-maker (1) becomes aware of a present issue; (2) gathers facts relevant to the ultimate decision; (3) identifies alternative solutions to the dilemma at hand; (4) considers stakeholders implicated or impacted by these alternatives; and (5) reaches a conclusion by comparing and weighing these alternatives, often guided by insights offered by philosophical theory and/or external guidance, as well as by testing solution possibilities by reflecting on their likely consequences from the perspectives provided by other mental models (Hartman and DesJardins, Reference Hartman and Desjardins2008).
Certainly, these steps may occur in a different order, depending upon the circumstances. For example, examining a dilemma from the viewpoint of impacted stakeholders might raise new facts, or bring to light previously unconsidered ethical issues that reframe the dilemma altogether. It is important to recall that other models of ethical decision-making are possible and, in addition, that this approach will not guarantee one single and absolute answer to every decision. In any given situation, it is impossible to gather all facts to assess their relevance, or to consider how one's decision might impact each and every potential stakeholder, individually. Rational decision-makers may disagree at each stage of the process, from the relevance of particular data and the relative importance of particular stakeholder groups to the application or significance of a theoretical insight or the appropriate conclusion. But this analytical approach, which conceptualizes ethical decision-making as a multi-step process, provides a helpful beginning in the development of responsible, reasonable, and ethical decision-making. Decisions that follow from such a process of thoughtful and conscientious reasoning will be more accountable and responsible, and will be more likely to be consistent with the decision-maker's deeply held values, than those that do not.
Though these five steps may appear burdensome and cumbersome when applied in a traditional, fast-paced business environment, it is their habit-forming nature through repetition and reinforcement that tends to create an ethical corporate culture. Mental models that interfere with this process influence our choices in ways of which we are not aware, and thereby subliminally induce or persuade us away from this intentional choice-making toward behaviors inconsistent with our own values (Banaji et al., Reference Banaji, Bazerman and Chugh2003). The problematic mental models discussed in the previous chapter, and explored in greater depth in the current and following chapters, create sub-optimal conditions for ethical decision-making at each phase of the process, by impeding our awareness of the ethical dimensions of our decisions, blinding us to significant facts, distorting our views of others, limiting our capacity to imagine alternative resolutions, and discouraging us from reflecting on the value commitments and likely outcomes of alternatives. To the extent that we fail to interrogate our mental models, we increase our vulnerability to problematic mindsets and miss opportunities to strengthen ways of framing experience that promote our capacity to forge ethical decisions and guide our behavior by their light.
Deciding to “do wrong,” or failing to make an ethical decision at all
To be sure, the risk of moral failure is not eliminated by the critical and intentional practice of ethical decision-making. Unethical decisions are sometimes the outcome of a conscious, deliberate, and reflective choice to “do wrong.” Bernie Madoff's admission of guilt to securities fraud, investment adviser fraud, mail fraud, wire fraud, three counts of money laundering, false statements, perjury, false filings with the U.S. Securities and Exchange Commission (SEC), and theft from an employee benefit plan offers an egregious example of such a deliberate moral failure (Department of Justice, 2009). In his statement to the court, Madoff explained,
Your Honor, for many years up until my arrest on December 11, 2008, I operated a Ponzi scheme through the investment advisory side of my business…I am actually grateful for this first opportunity to publicly speak about my crimes, for which I am so deeply sorry and ashamed. As I engaged in my fraud, I knew what I was doing was wrong, indeed criminal. I cannot adequately express how sorry I am for what I have done. I am here today to accept responsibility for my crimes by pleading guilty and, with this plea allocution, explain the means by which I carried out and concealed my fraud.…
Deliberate unethical and illegal behavior is surely responsible for a portion of corporate malfeasance. In this vein, a recent theory has gone so far as to propose that the global financial crisis was due in large part to rapid changes in corporate organizational practice that have facilitated the rise of “dark leadership,” specifically “corporate psychopaths” who were attracted to the financial sector and who ruthlessly pursued personal greed above all else (Boddy, Reference Boddy2011, Reference Boddy2010).
However, as the examples from Chapter 2 demonstrate, most unethical decisions are not the culmination of deliberately unethical choices, such as those made by Madoff, nor are they made by the small percentage of amoral psychopaths that may occupy corporate leadership positions. Instead, they result from a failure to engage in ethical deliberation. There is no evidence that the many Madoff investors who enjoyed an abnormally consistent and high rate of return, or the regulatory agencies that failed to discover the fraud, were parties to Madoff's scheme, or indeed were motivated by corrupt or predatory desires for personal profit at all costs. Yet, when these same investors neglected to question how Madoff was able to produce such returns, or when the SEC did not pursue warnings about the Ponzi scheme, they too became implicated in the ethical failure (SEC, 2009). Identifying mental models that disable or discourage ethical deliberation is crucial, as Campbell et al. (Reference Campbell, Whitehead and Finklestein2009, 1) point out, because “[t]he daunting reality is that enormously important decisions made by intelligent, responsible people with the best information and intentions are sometimes hopelessly flawed.” When narrowly framed mental models create obstacles to the ethical decision-making process, we may fail to become aware that a situation has moral dimensions, or fail to attend to data, points of view, alternative solutions, and foreseeable consequences crucial to forging an ethical response.
A mental models approach to ethical decision-making
It is important to emphasize that the dangers that certain mental models pose to ethical decision-making cannot be mitigated or overcome by imagining that we could somehow free ourselves of the need for mental models altogether. Without mental models to mediate and shape our experiences, we would be incapable of having experiences at all. As Werhane writes:
The most serious problem in applied ethics, or at least in business ethics, is not that we frame experiences; it is not that these mental models are incomplete, sometimes biased, and surely parochial. The larger problem is that most of us either individually or as managers do not realize that we are framing, disregarding data, ignoring counterevidence, or not taking into account other points of view. (Werhane, Reference Werhane2007, 404)
While all experience and reflection is conditioned by our mental models, our mental models do not determine our thoughts and perceptions. Because our mental models are “not genetically fixed or totally locked in during early experiences,” we are capable of altering, expanding, affirming, resisting, or imaginatively transforming them (Werhane et al., 2011a; Werhane and Moriarty, Reference Werhane and Moriarty2009; Werhane, Reference Werhane1999). Indeed, practicing ethics requires this capacity to shake loose the hold that a particular mental model may have upon our thinking, so that we might become attentive to our habits of mind that disable or discourage us from posing the critical questions necessary to ethical judgment. Becoming aware that we are dependent upon particular mental models in troubling ways entails the facility and courage continually to seek out information, alternate viewpoints, and theoretical frameworks that challenge this dependence. All mental models are incomplete; those models that devolve into impediments to ethical deliberation do so when our reliance upon them encourages us to lose sight of this partiality. Mental models may become distorting to the extent that they shape experience in such a way that their framing effects are rendered unavailable to critical evaluation.
Our mental models enable particular, partial modes of experience and, thereby, organize our perception of a situation in a way that makes some thoughts and practices possible, while occluding others. We adopt and construct these schemas, and allow ourselves to become habituated to them, because the orientations they provide are experienced as beneficial in some way. When mental models become distorted, it is often because they have become rigid and determinative. The partial perspective has come to be experienced as the whole picture, and the framing model is no longer in view. As we will see in the discussion of theoretical and practical strategies to discourage ethical failure in Chapters 6 and 7, to bring one's mental models into view often requires that we consider conflicting, opposed, or simply unfamiliar ways of orienting ourselves to the situation at hand.
It may help to shift from metaphors of pictures and (distorted) frames, to the metaphorical language of medicine. Paradoxically, if one considers that mental models impede healthy, critical ethical deliberation and action, their antidote requires further experimentation with mental models, which in turn may endanger the health of the ethical decision-making process if they are allowed to come to dominance. Yet, like a vaccine that spurs the body's immune system into action, exposing oneself to new, unfamiliar, and even disorienting mental models can activate ethical thinking, and foster an active relationship to the construction of experience.
Risks and promises of ethical decision-making
The medical metaphor reminds us that ethical decision-making may be an uncomfortable process to the extent that it requires that we expose ourselves to unfamiliar, even conflicting, ways of seeing the situation at hand and our own roles within it. Ethical decision-making carries further risks and vulnerabilities. It does not result in certain or precise knowledge, but in contestable claims, as others might disagree not only with our judgment about the best course of action, but with the selection of facts, points of view, and alternative solutions that we have considered in the deliberative process. As many corporate whistleblowers can testify, choosing to act ethically may alienate the decision-maker from her or his friends and colleagues, and put the decision-maker's financial, and even physical, security in jeopardy. In an extreme example of the risks of taking ethical action in the workplace, Jeffrey Wigand claimed that, in addition to losing his job and coping with media campaigns maligning his character, his family received death threats as the result of his decision to blow the whistle on his former employer, the Brown & Williamson Tobacco Company (Brenner, Reference Brenner1996). In addition to the risks to security posed by others who may disagree with or feel threatened by our ethical choices, we are also exposed to accountability for the unintended consequences of our decisions when we act under the umbrella of ethics. Ethical decision-making involves acknowledging responsibility for the outcome of what we say and do, which in turn gives rise to a duty to “evaluate the implications of our decisions, to monitor and learn from the outcomes, and to modify our actions accordingly when faced with similar challenges in the future” (Hartman and DesJardins, Reference Hartman and Desjardins2008, 56–7).
However, as the discussion of impediments to ethical decision-making processes in this chapter and the next will show, the risks that ethical action may pose to one's economic security, social status, community identity, or self-esteem must be weighed against the startling evidence that the failure to challenge entrenched mindsets in a vigorous practice of ethical deliberation has been linked to moral failures that endanger our economies, social institutions, communities, and the relative autonomy that we prize as individuals.
To be reminded of the high stakes of overcoming impediments to ethical deliberation, it is instructive to return to our discussion of the famous 1960s Milgram experiments from Chapter 3. Milgram's objective was to find out why “ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process,” one that “relatively few people have the resources needed to resist…” (Milgram, Reference Milgram1973, 76). Milgram's studies, and other psychological, sociological, and neurological investigations that followed in their wake, have contributed significantly to our comprehension of moral failure. Neera Badhwar (Reference Badhwar2009, 268) argues that the post-war generation of studies exploring the effects of authority and peer pressure on ethical behavior has led to the fundamental, and previously unknown, discovery that “we are capable of succumbing to morally trivial situational pressures.” However, the insight that the capacity for ethical action is universally vulnerable to factors unrelated to the actor's consciously and deeply held value systems is still difficult for most of us to accept, particularly about ourselves. While each of the mental schemas raised in this chapter constitutes experience in a manner that exacerbates these vulnerabilities, we begin with a discussion of mental models that constitute experience in a manner that occludes the moral dimension of situations from view, thereby thwarting the first step of ethical decision-making. We then further refine this discussion by examining ethics-impeding conceptual frameworks that disable or frustrate the second step in the ethical decision-making process: gathering information.
III. Obstacles to awareness of ethical situations
Acknowledging that a problem in our ethical decision-making structures exists is the first step in making responsible choices that reflect our personal value commitments. However, egregious moral failures in the corporate environment often follow from a managerial failure to bring ethical considerations to bear on business decisions. Consider the extent of self-preservation and denial demonstrated by Enron CEO Jeffrey Skilling and Chairman Kenneth Lay when they claimed that they did not know of the wrongdoing taking place within their firm. The judge in that case included in his jury instructions the explanation that “knowledge can be inferred if the defendant deliberately blinded himself to the existence of a fact” (US v. Lay, 2006, quoted in Heffernan, Reference Heffernan2011, 2). In the next section, we will investigate mental models that limit our ability to see facts that are right before our eyes – sometimes quite literally, as in the many examples of managers and employees who see unethical behavior take place in front of them, but do not recognize it as such. In this section, we take a step back to scrutinize the psychological biases that encourage us to obstruct or block ethics from our mental models altogether. Why might decision-makers be motivated to blind themselves not only to ethically relevant facts, but also to the relevance of ethics to their judgments of their own behavior and that of others? What psychological tendencies and contextual factors encourage us to become bystanders, rather than moral actors, in situations that call for an ethical response?
Moral self-image
When it serves our interest in self-preservation to avoid recognizing unethical behavior, whether our own or another's, we tend to overlook those flaws (Gino et al., Reference Gino, Moore, Bazerman, Kramer, Tenbrunsel and Bazerman2009; Moberg, Reference Moberg2006). This willful or motivated blindness is not so different from concluding that our own contributions to a group effort are more significant or burdensome than those of others. Studies that show a high propensity to overclaim credit when assessing one's own role in group endeavors (Bazerman and Moore, Reference Bazerman and Moore2008), for example, reveal that we are more aware of the sacrifice we have made or the time we have taken in our efforts than of the contributions of others. Similarly, we tend to justify our own behavior while judging others more harshly; we naturally understand our own motivations, while we do not have equivalent insight into the origins of others’ behavior. In fact, research demonstrates that we all believe ourselves to be moral – even those who have been shown to lie or who are convicted felons (Baumeister, Reference Baumeister, Gilbert, Fiske and Lindzey1998; Allison et al., Reference Allison, Messick and Goethals1989). In maintaining this belief structure, this mental model, we construe our experience in a manner that serves to strengthen this self-perception, and remain blind to evidence that might serve to disprove it. We develop blind spots that prevent us from assessing our experience from an ethical perspective guided by the values that we consciously hold dear.
As the paradigm for ethical decision-making outlined previously suggests, a certain type of ignorance can account for poor ethical choices; yet that ignorance can rise to the level of willful or intentional choice when it is motivated by a desire to sustain a moral self-assessment. We rationalize to ourselves, drawing on entrenched mental models or messages, that “no one will ever know,” that “no one is really going to be hurt,” or that it was someone else who was careless; we say that we are only doing what anyone else would do under the circumstances. Chance and Norton (Reference Chance and Norton2009) explain that we engage in these explanations to ourselves and to others because we do not want to be perceived as – or feel like – unethical or immoral individuals. Their research indicates that subjects routinely opt for these conclusions:
“I read Playboy for the articles.”
“I’m not selfish, I just prefer not to play the Dictator game.”
“I’ll pick the fat-free yogurt tomorrow.”
We become skilled at such rationalizations of behavior that violates our consciously held values, because the positive self-perceptions they enable have benefits. As Chance and Norton point out, psychologists link a strong moral self-assessment with a higher sense of self-worth, and this, in turn, is correlated with lower levels of depression. In short, self-deceiving justifications allow us to do what we want, while protecting us from the “psychological cost” (2009, 17) that behavior might impose on self-assessments of moral character. However, the mental models fostered by moral self-deception carry other costs. The motivation to preserve a moral self-image encourages a reliance on distorted mental models that impede ethical considerations from view. If we become habituated to believing that we are moral, regardless of what we do (or fail to do), this self-image allows us to bypass ethical decision-making.
The Jayson Blair reporting scandal at The New York Times offers an example of the distorting effects of this myth at the level of corporate culture. Over a six-month period in 2003, Blair fabricated or plagiarized more than thirty news stories (Mnookin, Reference Mnookin2004). An internal inquiry later found that “various editors and reporters [had] expressed misgivings about Mr Blair's reporting skills, maturity and behavior during his five-year journey from raw intern to reporter on national news events” (Barry et al., Reference Barry, Barstow, Glater, Liptak and Steinberg2003, para. 7). Despite the paper's reputation for a strong commitment to journalistic excellence, these misgivings went unheeded, and many of the paper's editors and Blair's colleagues were caught unaware by the exposure of his fraudulent reporting. Werhane and Moriarty (Reference Werhane and Moriarty2009) attribute this moral failure to the widespread belief within the corporation that such unethical behavior “couldn't happen here.” Acculturation to a high moral self-perception – a sense that “we are The New York Times, after all” – generated a distorted mental model, an ethical blind spot that hid Blair's dishonest behavior from view.
Blind spots and mental models
Ethical blind spots prevent us from interrogating our mental models, and therefore pose an obstacle to taking the first step toward responsible, conscientious decision-making. Moberg (Reference Moberg2006) links these blind spots to mental models through his concept of common perceptual frames. He explains that these frames can create blind spots, defined in much the same way that we define mental models: “those defects in one's perceptual field that can cloud one's judgment, lead one to erroneous conclusions, or provide insufficient triggers to appropriate action” (2006, 414). Though blind spots pose a significant hurdle when unacknowledged, Moberg optimistically explains that they can be overcome, in keeping with the metaphor on which they are based: “blind spots are similar to those that afflict drivers of motor vehicles. Once one is aware that they exist, it is possible to develop alternative interpretive and action strategies.”
An example of an ethical blind spot in connection with others occurs when a parent is told that her or his child has cheated in school. The parent's first instinct may be to deny the claim and defend the child. However, if an effort is made to review the facts involved and the evidence collected, and to consider, perhaps, the pressures exerted on the stakeholders, the correction to which Moberg refers may occur. The parent often will engage in a more conscious focus on her or his value structure and reach a different, rational conclusion. Yet, without the recognition that ethical deliberation, rather than a presumption of morality, is called for, the parent may fail to move past first instincts. What obstacles prevent us from moving past cognitive biases toward a high moral self-image, and a distorted moral image of others?
Situational factors may exacerbate motivated self-deception about ethical behavior. Using the vocabulary of frame theory, Moberg (Reference Moberg2006) argues that sound moral judgment is particularly vulnerable in work organizations. In organizational settings, we tend to partition our use of moral and competency frames. When judging ourselves, we are likely to presume a positive moral assessment and, therefore, invoke competency criteria when called upon to evaluate our behavior. He refers to this tendency as a “personal ethics blind spot.” Personal ethics blind spots are strengthened by mental models that assume that moral frames are “private,” and therefore not appropriate to workplace judgments. The resulting dominance of competency frames in the workplace can lead managers and employees to fail to trigger moral frames when confronted with situations that call for both ethically responsible and competent resolutions.
Elizabeth Doty (Reference Doty2007), an organizational consultant, conducted in-depth interviews with thirty-eight business people from a broad range of professions about their negotiation of the tension between personal values and the demands of the workplace. She writes of her own experience navigating this tension while working for a luxury hotel chain. At one point, she was asked to provide attractive female staff and low-cut costumes for an all-male corporate event. After complaining to her boss privately, the request was retracted; but it left Doty with “lingering concerns”:
I wasn't naïve. I told myself that ethical bumps in the road were part of the game of business. Our hotel managers sometimes secretly canceled guests’ discount-rate reservations on oversold nights. I myself had concocted the “right” numbers on sales forecasts, and then convinced my boss in his staff meeting that I really believed them. For four years I’d been able to persuade myself that one had to expect such practices even in first-class operations. And it almost worked this time, too; by the final night of the annual meeting, I’d nearly stopped fuming over the costume incident. I even allowed myself to feel some pride in how well the event had come off. (para. 3)
Doty describes the trade-off between her personal unease regarding the unethical behavior in which she had participated and the pride that she felt about achieving her workplace tasks with competence as a “devil's bargain.” In Moberg's terms, we might say that Doty cultivated a personal ethics blind spot by persuading herself that competency frames, rather than moral frames, were the appropriate perceptual apparatuses to utilize in the “game of business.”
Personal ethics blind spots and bystander effects
Personal ethics blind spots are significantly affected by how others behave. Social psychology studies have repeatedly demonstrated that we are less likely to act if we are surrounded by other non-actors (Hudson and Bruckman, Reference Hudson and Bruckman2004; Latané and Nina, Reference Latané and Nina1981; Darley and Latané, Reference Darley and Latané1968). Analyzing their seminal investigations into bystander inaction, Latané and Darley (Reference Latané and Darley1969) proposed that, when there are multiple non-interveners observing a critical situation, individual inaction typically does not betray “apathy.” Rather, each observer looks around at the reaction of others for assistance or confirmation in constructing his or her interpretation of the situation. “Until someone acts,” they explain, “each person sees only other non-responding bystanders, and is likely to be influenced not to act himself.” As a result, “all members may be led (or misled) by each other to define the situation as less critical than they would if alone” (p. 249). Group behavior affects individual action because we look to the behavior of others for cues to tell us which mental models to engage in a particular context. In organizational settings, this “bystander effect” serves as a deterrent to potential whistleblowers (Dozier and Miceli, Reference Dozier and Miceli1985). When one's colleagues and managers do not appear to notice wrongdoing, or identify unethical behavior as such, it may be difficult to perceive ethical problems as existing at all.
Like all of the mental models that impede ethical decision-making, the bystander model has benefits as well as costs, and does not inherently impede responsible decision-making. Making the choice to intervene in order to prevent wrongdoing or harm can be dangerous, especially when that action is taken alone. In many urban neighborhoods with high rates of gang violence, for example, police and prosecutors face high barriers in seeking cooperative witnesses due to fear of retaliation. The National Alliance of Gang Investigators Associations reports that “the mere presence of gangs in a community…creat[es] a generalized fear of intimidation that hinders witness cooperation” (Anderson, Reference Anderson2007, 1). This generalized fear is reasonable; in Los Angeles alone, 778 cases of witness intimidation were documented over a five-year period. Typically, corporate whistleblowers are not faced with threats of physical intimidation, but harassment, intimidation, and job loss are real concerns. Reviewing a random sample of 200 complaints, the National Whistleblower Center found that more than half of those who chose to speak up were fired, while most of the remaining employees reported being subject to unfair disciplinary action or other forms of harassment (Brickey, Reference Brickey2003). Another watchdog group, the Government Accountability Project, discovered that ninety percent of whistleblowers are subject to some form of retaliation in the workplace.
In addition to reasonable concerns for self-preservation, the determination that others who are qualified or better situated can be trusted to intervene effectively might be a conscientious choice in a particular context. Trust in others is a social value; indeed, trust is a necessary component of all social activity, and mutual trust is especially important to economic relationships (Werhane et al., Reference Werhane, Hartman, Moberg, Englehardt, Pritchard and Parmar2011b; Fukuyama, Reference Fukuyama1995). It would be arrogant, not to mention impossible, for each of us to claim full responsibility for all of the problems in the world, or even the workplace. Often, however, taking up the role of a bystander in a crisis is not the result of a deliberate and reasoned choice to protect oneself or to trust in others, or of an apathetic disinterest in the moral dimensions of the situation, but of a failure to perceive the situation as a crisis that calls for ethical decision-making.
The apparent inability of News Corp. Chairman Rupert Murdoch to perceive his responsibility for wrongdoing taking place within his own corporation illustrates how complex and hierarchical structures can lead the most powerful members of an organization to develop mental models in which they see themselves as mere bystanders, free from accountability for the behavior of their employees. After public outrage exploded over allegations that hacking into the cell phone messages of private citizens – including the messages of a missing teenage girl who was later found murdered – was accepted practice at one of his newspapers, Murdoch was brought before Parliament and interrogated about his claim that he was unaware of the malfeasance. Murdoch explained that he may have “lost sight” of the paper because it was “so small in the general frame of the company” (Hutton and Morales, Reference Hutton and Morales2011). Murdoch did not deem this blindness an ethical failure; instead he saw it as a justification for his assertion that, as Chairman, he bore no responsibility for the criminal activity at his paper. When asked who was responsible, Murdoch placed the blame squarely on his underlings, pointing the finger at “the people who I employed, or maybe the people they employed” (Williamson, Reference Williamson2011).
Significant evidence suggests that organizational hierarchy and organizational complexity can exacerbate this blindness to ethical problems. However, hierarchy within an organization does not necessarily impede ethical responsibility; to the contrary, a tiered division of responsibility may promote good behavior by establishing clear expectations of the rights and duties that attach to specific roles. Yet, strong role identification in a hierarchical organization risks devolution from a mental model that encourages moral responsibility to a distorting mental model that, in addition to relieving leaders like Murdoch of a sense of responsibility for their employees, also habituates employees to blindly obey superiors. Milgram (Reference Milgram1974) proposed that the high levels of obedience to unjust orders in his studies could be explained by the participants’ displacement of the responsibility of moral judgment to the experimenter. This tendency to defer moral responsibility to perceived authorities is aggravated by “chain of command” structures (Kilham and Mann, Reference Kilham and Mann1974). In the Challenger and Columbia disasters, discussed in Chapter 2, hierarchical leadership structures discouraged engineers from effectively communicating risk factors to managers, with deadly results.
Werhane and Moriarty (Reference Werhane and Moriarty2009, 9) remind us that “[m]any managers conceive of good leadership as being primarily about motivating employees to do what they want them to do,” but the truth revealed by obedience studies “is that individuals will often carry out instructions that are absurd, immoral, dangerous, or life-threatening when given by a person in authority.” As Johnson (Reference Johnson2009, 265; quoted in Tepper, Reference Tepper2010) writes: “Examine nearly any corporate scandal – AIG Insurance, Arthur Andersen, Enron, Health South, Sotheby's Auction House, Fannie Mae, and you'll find leaders who engaged in immoral behavior and encouraged their followers to do the same.” The challenge is that hierarchical leadership models may create corporate cultures that align responsibility with the competent fulfillment of role duties, but which then also can inhibit the triggering of moral frameworks. These corporate cultures may then condition employees to be passive bystanders, rather than moral actors, and ill-prepare them to perceive ethical crises or wrongdoing as problems that call upon them to intervene.
Self-sufficiency
By definition, ethical blind spots make us doubly blind; we not only fail to perceive ethical problems and situations, but we remain unaware that we have done so. Admitting that our view of the world is not only partial and subject to bias, but is deeply dependent upon the views of those around us – from peers and colleagues, to authority figures and even advertisers – is difficult. Although this admission is fundamental to the ethical decision-making process, it can also leave decision-makers with a sense of disempowerment. In his seminal study of American democracy in the early nineteenth century, Alexis de Tocqueville (Reference Tocqueville and Mayer2000, 508) observed that democratic citizens “form the habit of thinking of themselves in isolation and imagine that their destiny is in their own hands.” Today, self-sufficiency remains an exceptionally prized social value, particularly in Western societies (Markus and Kitayama, Reference Markus and Kitayama1991). The presumption of self-sufficiency allows us to believe that we are the masters of our own circumstances, fully in control of our thoughts and actions. Acknowledging that cognitive biases and situational factors directly impact our decision-making to some extent – even to any extent – can put our capacity for effective action into question, raising the concern that we may be victims of forces beyond our control. However, holding fast to the myth of self-sufficiency can be deeply ineffective and unproductive; not only do we remain vulnerable to unconscious bias and social cues, but we become complicit in our own vulnerability.
John Gaventa (Reference Gaventa1982) identified self-sufficiency preferences as an explanatory factor in his study of quiescence among certain groups of unionized coal miners in Appalachia. Despite mounting evidence that key members of the union's leadership were guilty of corruption, bribery, intimidation, the dissemination of misinformation and even murder, a few small regions continued to support the leadership against reform advocates. In fact, support of demonstrably corrupt union leaders among rank-and-file miners increased in these regions, while declining among much of the rest of the Appalachian mining community. Why did some miners maintain and deepen their support for union leaders as these same leaders were increasingly revealed to be acting against the miners’ professed interests? Gaventa's study of the power relationships between the union elites and the rank-and-file miners led him to theorize that the “sense of powerlessness” generated by the historical and present-day conditions of the mine workers had led “to a greater susceptibility to the internalization of the values, beliefs, or rules of the game of the powerful as a further adaptive response – that is, as a means of escaping the subjective condition of powerlessness, if not its objective condition” (p. 17). Without the resources to resist or exit a situation in which they had little opportunity for effective, self-directed action, the miners adapted by supporting the oppressive regime and internalizing its values, an adaptive strategy that allowed them to deny that they were victims of a campaign of manipulation and coercion.
Heffernan (Reference Heffernan2011) finds similar dynamics occurring in the wake of the discovery that a corporation had knowingly allowed its mining operation to contaminate the town of Libby, Montana with asbestos. Although a small group of locals sued the corporation, the majority of Libby residents refused to accept the truth about the contamination and actively opposed efforts to bring the extent of the harm to light. The people of Libby, Heffernan writes, are known for their stoicism: “They don't whine and they don't want to think of themselves as victims” (p. 105). However, this valorization of self-sufficiency led Libby's residents to a double tragedy. Like the Appalachian miners studied by Gaventa, they were first victimized by powerful organizations and then, in their refusal to accept the truth of their situation, they became “victims of their own blindness” (Heffernan, Reference Heffernan2011). Unable to see their dependence on others, they unwittingly became participants in their own victimhood.
Mental models that discourage us from considering whether and how we might be dependent upon others can pose a particular danger in commerce, as studies have shown that profit-centric thinking strengthens the self-sufficiency model. Vohs, Mead, and Goode (Reference Vohs, Mead and Goode2006) conducted experiments to test the psychological effects of economic motivation. They found that when subjects were prompted to think about money, they were more likely to prefer “to play alone, work alone and put more physical distance between themselves and a new acquaintance” (p. 1154). After playing games in which some participants were led to consider profit and others were not, those who had been thinking about money were less likely to provide help to a student confederate, donate to a charity, assist a stranger or select a leisure activity that involved other people. Vohs et al. term this effect a “self-sufficiency pattern,” which suggests that “money evokes a view that everyone fends for him- or herself” (p. 1156). The relationship between profit-thinking and self-sufficiency preferences need not lead us to conclude that profit motivation inherently leads to unethical decisions. However, if we remain unaware of the bias toward individualism and against interdependence, which is triggered by considerations of profit, this mental model may blind us to the ethical issues at stake in our decision-making practices, double-blinding us to the fact that our self-sufficiency is merely a presumption or preference.
Slippery slope
Blind spots to ethical issues are easier to develop when the adoption of problematic mental models occurs over time. In other words, one is less likely to produce strong cognitive conflicts if the disengagement from deeply held moral beliefs takes place over an extended period of time. Bandura (Reference Bandura and Reich1990) uses the term “gradualistic moral disengagement” to describe the strategy used by terror group leaders to socialize new recruits to violence. Rather than trying to persuade recruits to abandon the moral prohibition against the killing of civilians, terror groups tend to undermine the moral judgment of recruits gradually, first requiring only very minor criminal acts, and slowly disconnecting the recruits from all ties to non-criminal society before revealing the more extreme terror tactics of the organization. By the time new members are fully exposed to the commitment to violence, they have likely developed mental models that preserve their moral self-images by rationalizing their criminal behavior and disengaging from their previous value systems. Although we have been presenting ethical blind spots, and the moral self-assessments, bystander roles, obedience patterns, and self-sufficiency illusions that support them, as obstacles to the first step in the ethical decision-making process (awareness of an ethical issue), the failure to take this first step towards ethics may often be the culmination of a long process of moral disengagement.
Questions of adaptation to gradual shifts to a model and the failure of decision-makers to notice these gradual changes over time fall under the umbrella term “change blindness” (Gino et al., Reference Gino, Moore, Bazerman, Kramer, Tenbrunsel and Bazerman2009; Gino and Bazerman, Reference Gino and Bazerman2006). Historically, we have recognized both the inherent risks involved in gradual shifts – also referred to as a slippery slope – as well as the non-trivial implications of awareness or preparedness for these shifts in connection with integrity and value consistency. Bazerman and Chugh (Reference Bazerman, Chugh and Thompson2005) offer the example of the Arthur Andersen auditors who did not notice the ethical depths to which Enron's decisions had fallen. Other stumbling blocks are less intellectual or cognitive than they are a question of motivation and willpower. As author John Grisham explained in his novel The Rainmaker, “Every lawyer, at least once in every case, feels himself crossing a line he doesn't really mean to cross. It just happens.” Arguably, this sensation applies to decision-makers well beyond the legal profession; sometimes it is simply easier to do the wrong thing. Unfortunately, we do not always draw the lines for appropriate behavior in advance and, even when we do, they are not always crystal clear. As Grisham suggests, it is often easy to do a little thing that crosses the line, and the next time it is easier, and the next easier still. One day, you find yourself much further over your ethical line than you thought you would ever be. You may find yourself failing to see that you are called upon to engage ethics in your decision-making at all.
IV. Obstacles to gathering facts relevant to ethical considerations
If our mental models are disengaged from moral frames altogether and the first step in the process of ethical decision-making is bypassed, a likely byproduct will be a failure to take the second step: to seek out salient information. The information-gathering stage also may be impeded by problematic moral frameworks or by conceptual schemas that lack a moral dimension. When our mental models – regardless of their moral content – reassure us that our picture is complete and that we have no need to seek out additional facts, when they block relevant information from our perceptual field, or when they prompt us to disregard significant information that conflicts with their framework, they thereby distort our assessment of whether we have adequate knowledge. In this section, we examine situational and cognitive factors that are particularly threatening to the capacity to attend to, and seek out, critical information in decision-making settings.
Ideological worldviews
The term “ideologue” often is used pejoratively to refer not only to a deep conviction in the explanatory power of a particular model, theory, or system of ideas, but also to a conviction that will not admit the possibility of facts or experiences that challenge the validity of the belief system. The political theorist Hannah Arendt described ideologies as those “‘isms’ which, to the satisfaction of their adherents, can explain everything and every occurrence by deducing it from a single premise” (Arendt, 1951, cited in 1973, 468). Referring to ideological believers in the “domino theory” that drove U.S. foreign policy during much of the Cold War period, she writes, “[t]hey needed no facts, no information; they had a ‘theory,’ and all data that did not fit were denied or ignored” (Arendt, 1969, cited in 1972, 39). When we become dogmatic in our conviction that a particular theory or belief system is capable of accurately accounting for any and all real-world possibilities, the need for an open and critical fact-gathering process is blunted. Rather than examining and testing our ideas to see if they correspond to factual reality, ideologues test the validity of the facts by determining whether they fit into the picture of the world generated by their preferred system of ideas.
Typically, mental models are not ideological, in Arendt's sense of the term. Most of us invoke plural, diverse and even conflicting mental models to constitute our perceptions across and within particular domains of experience, rather than allowing a particular theoretical framework or system of ideas to dominate our worldview. Ideologues provide an example of an extreme form of mental modeling in which experiences that cannot be fitted within a preconceived worldview or theory are ignored or rejected. Their extremism serves to remind us of the dangers of mistaking mental models, which are inherently partial and incomplete, for reality itself. Consider Reverend Harold Camping's widely publicized (and just as widely dismissed) prediction that the Rapture would occur on May 21, 2011, launching an earth-bound judgment day. When the date arrived with no accompanying apocalypse, Camping admitted to being “flabbergasted” (Cane, Reference Cane2011). This was not the first time that Camping's prophesies had failed to materialize. A prediction that scheduled the end of the world for 1994 had also proven inaccurate, a disappointment that he attributed to calculation error (James, Reference James2011). Subsequent to the missed 2011 Rapture event, Camping proclaimed a new doomsday date six months down the line (McKinley, Reference McKinley2011). Evidently, even repeated exposure to facts that contradicted his ideology did not shake his conviction in its truth or in his own ability to distinguish ideology from reality.
To many of us, Camping's consistent refusal to accept that his system of ideas had been proven false appears to lack logic. However, Margaret Heffernan (Reference Heffernan2011, 57) reminds us that peculiar ideas, such as those of apocalyptic prophets, are not the only objects of ideological commitment: “[W]hen ideas are widely held, they don't stand out as much; they can even become the norm. We may not see them as ideology and we don't see their proponents as zealots. But appearances can be deceptive.” Heffernan cites Fed Chairman Alan Greenspan's zealous advocacy for market deregulation as an example of ideology hidden in plain sight. She is not alone. The financial crisis of 2008 led many commentators to charge that Greenspan's ideological commitment to deregulation had contributed to the collapse of the mortgage market (Schneiderman, Reference Schneiderman2011; Suttell, Reference Suttell2011; “Greenspan Admits…”, 2008). Testifying before Congress in October of 2008, Greenspan was pressed to consider whether his economic philosophy had prevented him from confronting and assessing important facts:
Representative Henry Waxman (CA): “You had the authority to prevent irresponsible lending practices that led to the subprime mortgage crisis. You were advised to do so by many others and now our whole economy is paying the price. Do you feel that your ideology pushed you to make decisions that you wish you had not made?”
Greenspan: “…Yes. I’ve found a flaw. I don't know how significant or permanent it is. But I’ve been very distressed by that fact…”
Waxman: “You found a flaw in the reality.”
Greenspan: “…[A] flaw in the model that I perceived as the critical functioning structure that defines how the world works, so to speak.”
Waxman: “In other words, you found that your view of the world, your ideology was not right. It was not working.”
Greenspan: “Precisely. That's precisely the reason I was shocked, because I had been going for 40 years or more with very considerable evidence that it was working exceptionally well.” (Greenspan Says…, 2008)
Under pressure, Greenspan admitted that he adhered to his ideology when faced with contrary views and conflicting evidence. Yet, to the consternation of his critics, Greenspan's “shock” and “distress” were caused by the discovery of an apparent “flaw” in his model of reality, not by his ethical failure; he did not accept responsibility for his role in promoting this model when it diverged from readily available facts, nor was it evident that he was finally willing to alter his fundamental worldview. As Heffernan (Reference Heffernan2011, 63) writes, “he held fast to his big idea. It wasn't wrong, it was just flawed” (see also Haverston, Reference Haverston2010; Crutsinger and Gordon, Reference Crutsinger and Gordon2008).
Not just ideologists: Bounded awareness
Few people are as dogmatically committed to a single worldview as Camping or Greenspan. However, the devolution of mental models into obstacles that prevent challenging, conflicting, or novel information from coming to view is not limited to ideologues. To a less extreme degree, this devolution is an ever-present risk that arises from the “bounded awareness” that characterizes human perception. Gino, Moore, and Bazerman (Reference Gino, Moore, Bazerman, Kramer, Tenbrunsel and Bazerman2009) define bounded awareness as a “systematic pattern of cognition that prevents people from noticing or focusing on useful, observable, and relevant data,” a parallel, though arguably more methodical, conception of Senge (Reference Senge1990) and Werhane's (Reference Werhane1999) analyses of mental models. Gino et al. explain that we, as humans, are bound by these patterns to make implicit choices about whether we attend to certain information in our environment and ignore other information. Because these choices are based on omissions, they necessarily embody errors (Bazerman and Chugh, Reference Bazerman, Chugh and Thompson2005), which we illustrated in Chapter 2.
Certainly, when we intentionally identify and focus on mental models as significant impediments to ethical decision-making, they lose their power to serve as obstacles to fact gathering. However, we are not always aware of the existence or embedded strength of the models, nor are we always attentive to the limits of our attention and awareness when our decision-making skills are most challenged. The most powerful illustrations of this conclusion are often found in the simplest of examples, such as the moonwalking bear or the movie perception tests (Simons, Reference Simons2010; Veenendaal, Reference Veenendaal2008; Levin and Simons, Reference Levin and Simons1997). Based on a research stream exploring inattentional blindness begun in 1992 by Mack and Rock (Reference Mack and Rock1998) and on experiments carried out by Simons and Chabris (Reference Simons and Chabris1999), the advertising company Altogether Digital created a massive awareness campaign for the City of London. The campaign involved a one-minute video of eight basketball players passing two balls; four of the players were wearing white shirts and four were wearing black shirts. The voiceover asked viewers to count the number of passes made by the team in white. On first viewing, practically no one notices a man dressed in a brown bear costume moon-walking directly through the middle of the game. We have had countless personal experiences using this particular video and the experience is replicated consistently. When directed to perform a task (counting passes), humans naturally have a tendency to seek success on that task. If something begins to distract us, we do not surrender to the distraction; instead we steel ourselves against it, focusing ever more strongly on the job at hand.
If the “distraction” happens to be, for instance, the possibility of unethical conduct, unfortunately, we may suffer from a focusing failure where the data included in our circle of vision or awareness is simply insufficient to make an effective – or ethical – decision. A recent study of inattentional blindness (Chabris et al., Reference Chabris, Weinberger, Fontaine and Simons2011) replicated the conditions of a real-world case in which a police officer was charged with perjury for claiming that he failed to see a brutal assault taking place within his line of vision as he ran by the assault in pursuit of a suspect. Not only did the study's results show that inattentional blindness provided a possible explanation for the officer's failure, but the experimenters also demonstrated that manipulating attentional load affected subjects’ perception. In one test variation, subjects were divided into two groups. All subjects were asked to run after a confederate for a fixed distance, but one group was asked to perform a complex counting task that required much attention while in pursuit while the other group was not. Half of the low-attentional load group failed to notice a staged three-person fight taking place just a few feet off of their route as they ran. In the high-attentional load group, the failure rate increased to 72 percent. When our attentional load is high, the limitations on our awareness are intensified. The test subjects, and perhaps the police officer in the case that motivated the research, failed to stop a brutal assault that took place right before their eyes because, quite simply, they were too busy to see it.
Of course, bounded awareness often enables concentrated focus (often a valued quality), in addition to inattentional blindness and focusing failures. It is neither possible nor desirable to focus equal attention on every object within our perceptual field. Selective attention allows us to adjudicate between relevant and irrelevant information, a process that often occurs below the register of conscious awareness. Returning to the basketball study, it is important to note that, if the viewer intentionally focuses on the bear – in other words, watches the sequence a second time and maintains a keen eye for the distraction – of course, the moon-walking bear is clearly visible. However, almost all of these returning viewers fail to focus during repeat screenings on the number of passes made by the team in white. Accordingly, while the returning viewer is appropriately aware of any new element that crosses her or his line of vision, this viewer often is unable to maintain sufficiently detailed focus to continue to “get the job done,” as directed by the voiceover, or even to remember the number of passes from prior screenings.
However, the positive effects of bounded awareness – the selective attention that allows us to “get the job done” – can devolve into distorted perceptions when we do not acknowledge and account for the trade-offs that concentrated focus demands from our cognition. Chugh and Bazerman (Reference Chugh and Bazerman2007) warn that the consequences of focusing failures can be severe. They offer the examples of an airplane pilot who might pay too much attention to controls and miss another plane in the air as a result, or the driver who is distracted from challenges in traffic by a ringing cell phone. Focusing failures can be exacerbated by mental models that tell us that we can multi-task, or that we have adapted to new technology; but instead we see that this same technology has created a business pace that, while both breathtaking and awe inspiring, also has left in its wake anxiety, apprehension and fear (Suri et al., Reference Suri, Lee, Manchanda and Monroe2003, 516; Wilfong, Reference Wilfong2006; Karavidas et al., Reference Karavidas, Lim and Katsikas2005; Matanda et al., Reference Matanda, Jenvey and Phillips2004). The bounded character of human awareness makes focus possible; but, when we are unwilling or unable to acknowledge that we gain focusing depth at the cost of perceptual breadth, we invite focusing failures. We believe we are seeing the “whole picture” when we are not and, as a result, we may fail to see or seek out crucial information.
Bounded awareness, bounded ethicality
Bounded awareness, when it results in inattentional blindness and focusing failures, can lower employee productivity and cause errors; it can also undermine ethical awareness. The first point may seem intuitive, but cognitive bias studies are uncovering new arenas in which the limits of human attention are responsible for performance problems in business. In the securities markets, for example, researchers have demonstrated that certain market abnormalities – underreactions to significant earnings reports, for example – can be explained by the “investor distraction hypothesis.” When investors are distracted by extraneous information on a high news day (Hirshleifer et al., Reference Hirshleifer, Lim and Hong Teoh2009), or by the thoughts of weekend plans on Fridays (Dellavigna and Pollet, Reference Dellavigna and Pollet2009), they are slower to notice and react to critical reports. When we consider that even the day of the week can have a measurable impact on the likelihood of focusing failures, it should come as little surprise that more extreme situational factors like fatigue, sleep deprivation and information overload sharply impair our cognitive capacities. However, these factors do not only make us more likely to overlook or misjudge the significance of salient information; they also discourage us from thinking about ethics altogether. Bazerman and Moore (Reference Bazerman and Moore2008) term this phenomenon “bounded ethicality.” Just as we are not aware of the unconscious processes that prevent us from seeing the moonwalking bear in the basketball game, or a crime taking place when we are distracted by a high-attentional task, we may also be unaware of having violated our own standards of ethical behavior; we may make bad decisions without realizing that we have done so.
In practice, bounded awareness and bounded ethicality are often deeply intertwined. For example, working excessive hours not only undermines the “bottom line” by damaging employee productivity; overwork and fatigue are also responsible for a broad range of unethical outcomes in the workplace, including employee health problems, impaired judgments that affect public safety, and spillover effects of employee exhaustion on non-employees (Dembe, Reference Dembe2009). This spillover effect can be seen in the harm to patients that has been attributed to overworked medical interns. A study by the Harvard Work Hours, Health and Safety Group compared the error incident rate of interns working on a traditional schedule, in which every third shift is a twenty-four-hour “on call” shift, with the error rate of interns following a less-intensive schedule. The overworked interns were found to have committed 35.9 percent more serious medical errors than their less-fatigued colleagues (Landrigan et al., Reference Landrigan, Rothschild, Cronin, Kaushal, Burdick, Katz, Lilly, Stone, Lockley, Bates and Czeisler2004).
In Chapter 2, we examined several operative mindsets that contributed to poor judgment exercised by key decision-makers at NASA, resulting in the Challenger disaster. An additional contributing factor may have been sleep deprivation and fatigue caused by overwork. Indeed, according to a committee of scientists examining the effects of sleep-related issues on public safety, fatigue problems were significantly involved in the Challenger, Chernobyl and Three Mile Island disasters (Mitler et al., Reference Mitler, Carskadon, Czeisler, Dement, Dinges and Graeber1988). What is interesting about the Columbia disaster is not merely the sleep deprivation, but that those engineers and managers had forgotten the history of the Challenger and how their “need to launch” and neglect of signs of fatigue among their team members led to that explosion. Mental models that valorize long work hours as a proxy for productivity ignore more than 100 years of research linking fatigue to decreased employee output (Spurgeon et al., 1997) and unethical working conditions (Dembe, Reference Dembe2009). Corporate cultures that encourage such mental models make their companies vulnerable to moral failures, as well as higher error rates and focusing failures.
Might it be true that when our cognitive capacities are pushed to their limits by overwork, sensory overload, cognitive dissonance or fatigue, the first ability we lose is our capacity for ethical decision-making (Heffernan, Reference Heffernan2011)? Milgram (Reference Milgram1970) proposes that the relatively lower rate of social responsibility amongst city dwellers – their demonstrably higher levels of bystander inaction in crises and lower rates of helpfulness when approached by strangers, for example – could be explained by the concept of “overload.” Overload refers to “the inability of a system to process inputs from the environment because there are too many inputs for the system to cope with, or because successive inputs come so fast that input A cannot be processed when input B is presented” (p. 1462). In response to input overloads, city dwellers gradually adopt adaptive strategies – we would say, mental models – that allow them to unconsciously filter out most of their sensory field, most importantly, the strategic norm of non-involvement.
Strategic adaptation to urban norms of non-involvement offers an explanation for the thirty-eight New Yorkers who admitted to having seen Catherine Genovese beaten to death in the street yet failed to intervene – the 1964 case that first motivated researchers to investigate bystander effects (Hudson and Bruckman, 2004). But cognitive overload, and the moral disengagement that it encourages, might also explain the failure of so many other neighbors and passers-by to notice the brutal crime taking place at all. Just as it is possible that the police officer discussed earlier failed to notice a crime taking place as he focused on pursuing a suspect, these New Yorkers might have been too overloaded to be aware of – and, as a result, too overloaded to care about – violations of their most deeply held values. In this sense, the first and second steps – and indeed, as we will see, all of the steps – of the ethical decision-making process are highly interdependent. Mental models that distort our perception of the facts – from dogmatic belief in the validity of ideologies at one extreme, to models that tell us that our competency is unaffected by Friday afternoon day-dreaming or multi-tasking, at the other – may blind us to unethical behavior, our own or others’, just as mental models that tell us that we need not consider ethics at all are likely to discourage us from seeing unexpected data or seeking out additional information.
V. Conclusion
We began the preceding analysis with the suggestion that a broad, five-step model provides a productive conceptual structure for describing the process of ethical decision-making, and proposed that many ethical failures occur when the various stages of this process are blocked, tainted, or distorted by problematic mindsets, or mental models. We then identified an array of mental models that frustrate the first two steps of responsible decision-making by rendering decision-makers blind to the ethical dimension of a choice context, or to potentially relevant data. An important feature of these distorting mindsets is that, although they tend to generate false or overly narrow interpretations, it is difficult, if not impossible, to imagine that they might be eradicated, once and for all, by even the most responsible and conscientious decision-maker. We cannot categorically deny, for example, that the role of the bystander might be a wise choice in some contexts, nor can we refuse to acknowledge that striving for self-sufficiency might, under some conditions, provide a path toward ethical action.
Recognizing the pressure on decision-makers to perceive reality in such a way as to avoid taking responsibility, and acknowledging that these perceptions are not easily dismissed as false or mistaken, does not excuse any resulting unethical decision. To the contrary, by accepting that human ethicality is vulnerable to external factors, such as sensory overload or the absence of witnesses to our behavior, and to internal biases, such as the desire to be seen by oneself and others as moral or self-sufficient, we become more likely to recognize the partiality of the conceptual apparatuses that we construct to frame our perceptions.
In the next chapter, we will turn to the final three steps in the ethical decision-making model – moving beyond awareness of context toward the critical analysis that grounds action or, to the contrary, a considered decision for inaction – in order to identify mental models that thwart their effective execution. Again, we will find that it is neither possible nor desirable to rid ourselves fully of mental models that, when left below the register of conscious awareness and out of range of critical interrogation, often become impediments to imaginative, critical reflection regarding alternative solutions, the potential impact of decision possibilities on others, and effective ethical action.
However, as we discuss in greater detail in Chapters 6 and 7, where we investigate strategies that address many of the impediments raised here, overcoming the distorting effects of ethical blind spots is not only possible but necessary if we wish to encourage effective, ethical decisions at the individual and organizational levels. Strategies that expose the boundaries of our ethical awareness and the limits of our mental models to critical examination do not guarantee the harmony of our choices and actions with our value commitments; however, when we remain blind to the bounded character of human ethical capacities, and presume our interpretative frameworks to be complete pictures of reality, we all but guarantee ethical failure.