
1 - Undertaking a New Interpretive Effort

Published online by Cambridge University Press:  23 August 2019

Bert Spector
Affiliation:
Northeastern University, Boston


Information

Type: Chapter
In: Constructing Crisis: Leaders, Crises and Claims of Urgency, pp. 1–17
Publisher: Cambridge University Press
Print publication year: 2019


Constructing Crisis is about an idea, and that is the idea of crisis. What do we mean when we use or hear that term? What suggestions or thoughts are communicated when our leaders deploy the word? And how does that deployment impact the dynamics that unfold within our business organizations, our communities, and our societies?

Most usually, the idea of crisis conjures up images of threatening events, occurrences that present a serious risk. A business crisis, for instance, may take the form of a public revelation of malfeasance on the part of key executives or a flawed product design that led to injuries or deaths. Unwelcome takeover bids and serious financial losses might also trigger what we refer to as a crisis. Will the business survive? In that way, we may understand crisis to be not just a risk but also an existential threat, placing the very future of the company in jeopardy.

We also understand the need to think about the possibility of crises that threaten our communities: an out-of-control wildfire racing toward our homes in California, or the devastating aftermath of a Category 5 hurricane on our island territory. Whole continents may come under threat in periods of great migration sparked by nature, perhaps, or war.

We may take a more conceptual approach to the idea of crisis. Perhaps it isn’t a specific social unit or institution at risk. Crises, rather, may be seen as threatening to abstractions: democracy, the rule of law, liberal capitalism, even civilization itself. There is, even in such cases, an event – perhaps an accumulation of events – that is said to trigger the threat.

The variety of events prompted in our thoughts by the use of the term “crisis” is virtually endless. Acts of nature, acts of humans, and acts of humans in response to acts of nature can all generate a crisis, we think. So can system breakdowns, global dynamics, and economic downturns. In all their variety, however, the idea of crisis suggests events that objectively and undeniably pose a threat.Footnote 1

Our common notion of what is meant by the idea of crisis carries our thought process even further. We are convinced, for example, that we live in a time – right here and right now – that is especially, even uniquely prone to these types of traumatic upheavals. And as we are buffeted by these seemingly endless disruptions, we expect our leaders to step up, navigate the tumult, and help ensure that we emerge, once the crisis has passed, stronger and more resilient than ever. It is precisely this capacity to navigate through crisis, in fact, that we suppose separates the great leaders from the middling or incompetent ones. Leadership, great leadership, is forged in crisis.

When crisis hits, those of us who are not leaders have a role to play as well. We rally together, following those heroic leaders as well as responding as a community. We know that not everyone will go along, get on board, and pull together. When US presidents declare a state of war – the country has been attacked, for example, and must respond – we tend to respond with support; national unity; and, let’s admit it, more than a bit of intolerance for naysayers.Footnote 2 And when the CEO of our corporation insists on the need for an urgent response to a threatening situation, employees receive the message loud and clear: it’s time to join together. Business organizations aren’t democracies, after all, so employees who are unwilling to contribute to the leader-defined crisis response should perhaps consider leaving. Get on the bus or get off.

It is a powerful idea, in other words, this idea of crisis. The explicit intention of Constructing Crisis is to upend these ideas, each and every one of them. Crises are no more, or less, frequent now than at any other historical moment. Crises are not events. Events have no objective meaning. Leadership is not something that is forged in response to crisis. And the leader-follower dynamic that unfolds in the face of crisis declarations can be unhealthy, even potentially dangerous.

It’s an argument built on a single assumption, that ideas have consequences. How we conceptualize a subject influences how we react and behave in relation to that subject. Given that the proposition that ideas have consequences has become the major theme of my work, it is worth exploring just how it is that ideas generally come to matter. And given that I am offering a new interpretive effort of the idea of crisis, it is also worth asking “how and why might such a reinterpretation of an idea so old and often examined as crisis have value?”

In a culture and context in which empiricism is expected and application is preferred to interpretation – that is, the context in which I work at the contemporary university and, with even greater emphasis, at a professional school of management – the governing norm is that ideas should either be grounded in data or have immediate utility. Preferably both. Otherwise, who cares?Footnote 3

And yet, the undertaking of Constructing Crisis is not grounded in data. Nor does it offer any suggestions for how to apply in an immediate way lessons to practice. So again, why should anyone care?

Just to be clear, what I am presenting here is a new interpretive effort concerning the idea of crisis, one that intends to upend traditional notions of a crisis and of the role played by leaders in responding to urgent situations. In doing so, my goal is not to offer immediate utility. I do not provide a deep dive into some big database or offer a “new-and-improved” how-to formula to help leaders negotiate urgent situations. Yet I still insist that, yes, ideas have consequences and matter both to other scholars and to practitioners.

This is, I realize, a suspicious posture to assume. Why bother, for instance, with an idea that doesn’t directly, even immediately impact practice? Just asking that question reflects a current that runs deeply throughout American thought: an ideology of usefulness.Footnote 4

The Ideology of Usefulness

Consequences and usefulness are not precisely the same notions. Consequences may be long term and indirect, with no immediate or obvious payoff. Certainly, consequences are not necessarily positive. Usefulness, on the other hand, speaks to a far more utilitarian impulse, the desire to apply thought directly to action, to make an immediate difference in practice. That difference is assumed to be positive: to repair a damaged image and to help improve performance at the individual and/or collective level.

A utilitarian impulse, although not confined to any one culture or single country, has deep and significant roots in American thought. It is a belief system that evolved within a specific temporal and historical context, so it is important to appreciate the interplay between that context and that belief system.

All too often, we take widespread and generally accepted constructs for granted, mistaking historically influenced assumptions for commonsense wisdom, or even more problematically as the “truth.” To sidestep that trap, I visit the historical forces that both shaped and amplified that profound preference for the “useful” over the “useless.”

The Dominance of Practicality

A doctrine of practicality runs deep and wide through American culture, a conviction forged on the western frontier and in the pervasive mythology that shaped Americans’ understanding of that frontier experience. The land that lay beyond settled America helped mold not just a country but also that country’s mythology. Frontier settlers engaged in a process of “perennial rebirth” as they moved westward.Footnote 5 That was the observation of Frederick Jackson Turner, the prominent frontier historian and popularizer (although not necessarily the originator) of the American frontier myth.

Myths are stories intended to help explain the world and the human condition within that world. They may be accurate, but not necessarily. Turner’s frontier myth, though highly distorted as history, communicated how entry into and settlement of the frontier became the ever-unfolding origin story of eighteenth- and nineteenth-century America and its prevailing culture. The myth went like this. The vulnerability of settlers within the wilderness demanded an immediate response from them. And they were nothing if not practical, these hardy settlers. On the frontier, the guiding question was “what works?” With self-constructed implements, the pioneer emerged as a crude farmer, clearing and fencing in land, building a home with available materials, and raising enough crops to allow for subsistence. These folks did what was necessary, nothing more or less.

Pioneer ideals focused not on what settlers should do (that is, normative values), but on what they could do (that is, practical lessons) to survive and thrive. Past traditions and previously formed heuristics were abandoned. In shedding old concepts and adapting to new realities, the frontier myth insisted, pioneers created a new country. In that way, the practical nature of a unique and exceptional American character was formed at the juncture of settlement and wilderness.

The idea that settlers left all mental baggage behind when they crossed the Mississippi River is, of course, a myth laden with much distortion.Footnote 6 As white Americans set a westward course, their preestablished beliefs and values persisted.Footnote 7 While aspiring to expand the empire ever westward, Americans maintained an abiding belief in the innocence of their project.Footnote 8 They were not conquerors, they believed; rather, settlers were taking advantage of opportunities for self and national improvement. And it was that belief that erected a profound blindness to some not-so-innocent realities. Westward settlement involved more than improvement and opportunity. It inherently required the trespassing on and ruin of occupied land and established institutions and led to the deaths of more than 50 million indigenous people, chiefly through the spread of diseases carried by those pioneers to an unprotected population.Footnote 9

Building empire by ever-westward expansion necessitated violent conquest, in particular, white conquest over Native Americans and Mexicans. Far from adapting to the “new” land, American settlers imposed upon it racial, ethnic, and gendered assumptions. Those prejudices significantly reduced contact with the native population that might have produced wise practical advice for survival based on generations of experience. Rather than shedding ideology and embracing unencumbered practicality, American pioneers carried predetermined views concerning how the world should work.Footnote 10 And yet, the myth persisted. Americans accepted the curated narrative of western settlement and, with it, adopted practicality as a foundational principle.

An Undercurrent of Anti-intellectualism

In that adoption of the myth of frontier utilitarianism, it is possible to locate a residue of anti-intellectualism. The ideology of usefulness conveyed a disposition to suspect and resent the output of an active but not immediately practical mind.Footnote 11

Consider this oration delivered at Yale in 1884, in which undergraduates were told:

The age of philosophy has passed and left few memorials of its existence. That of glory has vanished, and nothing but a painful tradition of human suffering remains. That of utility has commenced, and it requires little warmth of imagination to anticipate for it a reign lasting as time, and radiant with wonders of unveiled nature.Footnote 12

In the main, Americans took their stand with utility and its proclaimed radiant wonders rather than with fanciful ideas whose practical usefulness was not readily apparent.

Glorification of utility hardened over time into “unreflective instrumentalism”: a devaluation of thought, or at least any forms of thought “that do not promise relatively immediate practice payoffs.”Footnote 13 Pervasive instrumentality worked to erode any passion for critical reflection on the ends to which useful actions were aiming. That thinking seeped into institutions of higher learning as well.

Universities and the Embrace of Utilitarianism

In their continuous need to subdue the wilderness, American settlers relied on the “school of experience.”Footnote 14 A popular frontier maxim – “Any fool can put on his coat better than a wise man can do it for him” – captured a fundamental unease with educated and detached wisdom. In their desire to be taken as something other than remote cloisters of wise people with little to add to the pursuit of daily life, American universities adapted.Footnote 15

Although often denounced as elitist institutions that functioned in ivory-tower isolation from the “real world” – crossword puzzle clue: “place removed from reality,” answer: “ivory tower” – universities became carriers of the same ideological bias for useable truth and against useless imagining that defined the broader culture.Footnote 16 That was a trend most prevalent at professional schools such as business colleges attached to universities, but it was also apparent in liberal arts and humanities departments that proclaimed their connections to and support of the national economy.Footnote 17 The ideology of usefulness triumphed.

Usefulness expected “applied research” intended to address and solve real-world problems. This is research that, by definition, seeks to improve an existing system. To be sure, applied research can and does offer possibilities for advancement in how diseases are treated, for instance, or how to better measure transaction costs across firm boundaries. It is not designed to question the system itself.

“Flights of imagination” that might provide researchers with a less embedded view of systems and practices do not sit comfortably within the applied research paradigm.Footnote 18 The work of imagining intends to undermine accepted reality and create counter-narratives. That’s what imagining does. Yet, American universities created a climate where such acts were largely marginalized.

It should be no surprise, really, that utilitarianism came to dominate the culture of American universities. Business titans provided the lion’s share of university endowments. Nineteenth-century tycoons including Cyrus H. McCormick, John D. Rockefeller, and Marshall Field “took a particular interest in higher education.”Footnote 19 The end game of a university education bent toward teaching the skills that would repay corporate generosity and help develop the future in a way that supported industrial growth.Footnote 20

In the period roughly spanning 1870 to 1930, academic leaders fundamentally reorganized their institutions to promise upper- and upper-middle-class families that their children, mainly sons, would be well trained for entry into the world of commerce.Footnote 21 To enhance the appeal of college life to the “sons of wealth” as well as their (it was surely hoped) philanthropically inclined parents, the university experience was reshaped to be “more athletic, more masculine, and more fun.”Footnote 22 The sciences and the professions would be counted on to supplement classic studies associated with a traditional university education to fulfill the mission co-defined by business sponsors and willing administrators.

Writing in the late 1960s, Alvin Gouldner worried that utilitarianism had begun to erode the space allocated to less obviously pragmatic roads of academic inquiry. It was his concern that the ideology of the useful would do more than find a place within the academy – that it would drive out more abstract theorizing – that provoked his warning of a “coming crisis.” And it was not, in Gouldner’s telling, a new or even uniquely American phenomenon.

Taking a broader view than Turner’s frontier thesis, Gouldner traced the ideology of usefulness back to the eighteenth-century overthrow of aristocratic privileges in Western Europe. “With the growing influence of the middle class in the eighteenth century,” Gouldner wrote, “utility emerged as a dominant social standard.”Footnote 23 And, to be fair, there was much to celebrate in the glorification of middle-class achievement over aristocratic privilege that marked this transformation.

The rise of a European middle class brought with it an assumption that society’s rewards should be allocated not “on the basis of birth or of inherited social identity,” observed Gouldner, “but on the basis of talent and energy manifested in individual achievement.”Footnote 24 Usefulness became more than a philosophy. It was a value system: a deeply entrenched notion, reinforced by a newly emergent middle-class culture, of what ought to be. What individuals accomplished rather than their parental lineage became the central tenet for judgment. Utility “became a claim to respect rather than merely a basis for begrudging tolerance,” Gouldner noted.Footnote 25

Looked at with the perspective of the eighteenth-century middle class, usefulness represented a liberation from aristocratic privilege. But there was – and is – a rub. “In focusing public interest on the usefulness of the individual,” worried Gouldner, the emerging ideology favored “a side of his life that had significance not in its personal uniqueness but only in its comparability, its inferior or superior usefulness, to others.”Footnote 26 This would inevitably result in a contest over who was more and, by implication if not explicit condemnation, less useful.

Any tournament that seeks to separate winners from losers must have some metric, some ribbon to burst through at the end of the race. There could be no capacity to construct measures unless and until “usefulness” was defined. But defining usefulness leads to a residual construct, one that identifies uselessness as its counterpart. Fortunately, we have folks who are willing to do the hard work of identifying uselessness for us.

Take the Golden Fleece awards. Between 1975 and 1988, William Proxmire, the Democratic senator from Wisconsin, presented monthly “awards” – really nothing more than acts of public humiliation – to government-funded university professors said to be conducting useless research at taxpayer expense.Footnote 27 Projects investigating emotions, the relationship between sexual arousal and marijuana use, and prisoners’ motivation to escape were among the many singled out for ridicule. A renowned behavioral scientist studying biological causes of aggression sued Proxmire for damages (and won) after being mocked for researching “why rats, monkeys, and humans clench their jaws.”Footnote 28 The jeering continued for thirteen years, a testament to the powerful appeal of identifying useless activities unfolding in the ranks of university researchers.

Beyond the inane posturing of a politician, definitions of “useful” and “useless” must be amenable to some sort of comparative assessment that allows for differentiation. If the definition of usefulness is measurable – show me how you improve individual income, overall productivity, streams of innovation, and so on – so much the better. The Obama administration supported a measure of university educational effectiveness that calculated the relationship between the cost of a college degree and the impact on future earnings of each student.Footnote 29 What could be more useful than an improved return on investment?

The “B” School Experience

Professional schools of business management, which appeared in the United States concurrent with the emergence of a managerial tier asserting itself between “worker” and “owner,” bought enthusiastically and unapologetically into the ideology of usefulness.Footnote 30 In the early days, experienced executives taught business school classes: “good ole boys” who dispensed “war stories, cracker barrel wisdom, and the occasional practical pointer.”Footnote 31 Even when appended to elite, rarefied Ivy League universities – Dartmouth, the University of Pennsylvania, and Harvard initially in the United States – business colleges operated as sanctified trade schools, emphasizing the transference of competencies and skills.

Early pioneers of management theory – Frederick Winslow Taylor, Mary Parker Follett, and Elton Mayo among them – sought to “solve any problems companies and administrative organizations might possibly have.”Footnote 32 They and their students were meant to be organizational doctors, tending to the health and well-being of their institutional patients. That was the assumed payoff of their applied research: helping practitioners solve real-world problems with their specialized knowledge.

Academics at business schools can and often do act as consultants to business. Oxford University provides a helpful Internet portal that invites outsiders to seek out “experts from all disciplines” who are “available to work with you as consultants through Consulting Services at Oxford University Innovation. We also have specialized consultancy services in statistics and museums and collections, as well as free student and researcher consultancy services.”Footnote 33 See, even an institution as esteemed as Oxford generates useful knowledge that can be applied to practice.

There is a perfectly legitimate raison d’être for applying knowledge to practice: to “provide solutions to problems that are presented to us, or to legitimate solutions that have already been reached.”Footnote 34 I’m quoting here from Michael Burawoy, who added that the academic expert can and should supply “true and tested methods” and even conceptual frameworks as a way of orienting thought and legitimizing conclusions. It is a relationship that can be enacted with rigorous attention to constructed knowledge, awareness of prevailing theories, and personal (and interpersonal) integrity.

It is, however, an engagement that unfolds within parameters defined by the client. The end game of applied research and collaborative engagement is to improve organizational performance.Footnote 35 The implicit foundations of both parties as well as their relationship are not scrutinized, critiqued, or reconceptualized. Performance is defined by the organization and its agents, a definition that is not up for debate and amendment. Perhaps more troubling than that, certainly from the perspective of working to build an academic career, direct engagement with practitioners is not likely to help construct a case for tenure.

From Organizational Doctor to Rigorous Scientist

Chafing against the early trade school image, mid-twentieth-century business schools sought to embrace academic respectability, an adaptation imposed in strong measure by university-wide tenure and promotion (T&P) committees. Those committees were dominated by faculty from “across the river” in a literal sense at the Harvard Business School and a metaphorical sense elsewhere. The social and hard scientists on T&P committees imposed academic rigor and counted elite publications as the currency of career advancement. Harvard University President Derek Bok issued an especially stinging rebuke of the Harvard Business School in his 1979 annual report, demanding that professors there devote increased attention to rigorous research.Footnote 36

As a response to this intense pressure, usefulness needed to be redefined. Rather than thinking of utility in terms of training practitioners to “do” management better, T&P committees found an alternative measure. The value of business school professors would be established by positivist social science and measured by publications in elite “A” list journals and citations in equally prestigious academic work. These were the criteria applied first to the hard sciences and then to the emerging social sciences. Now, business school researchers could demonstrate their merit “by making small twists on existing academic theory or empirical work, or trying to find ‘gaps’ in the literature that can be readily resolved conceptually or empirically.”Footnote 37 Reliance on “large, survey-based data sets and hypothetico-deductive research” – that is, research that starts with a hypothesis that is amenable to either verification or falsification through the application of data – became the defining characteristics of academic rigor.Footnote 38 The researcher has no particular commitment to any outcome save one that can be demonstrated quantitatively.Footnote 39

Academic “rigor” – the need to be “precise and thorough in the development of the theory, in the design and execution of the study, and in reporting the results and drawing implications” – is, of course, vital to the functioning of university research.Footnote 40 There should be no reason to choose between research that generates hypotheses and research that meticulously tests those hypotheses. Big data analysis offers new approaches to understanding the world, from helping online dating sites engineer better matches to allowing public health agencies to identify and react quickly to emergent trends. Empirical research represents the final steps of the theory-building process, converting propositions into hypotheses with empirical indicators that can then be subjected to meticulous testing.Footnote 41

The academic table should offer an honored seat to both imaginative conceptual work and rigorous empirical testing. The worry comes when empirical research is taken as a substitute for rather than a supplement to more conceptual pursuits. There will always be a requirement for new interpretive efforts. But there’s that measurement system – publication in empirically, quantitatively oriented journals – that militates against engaging in flights of imagination.

Although the trend toward academic rigor was applauded by many, it was roundly denounced by others. Research was becoming something far less “relevant” to managers, these critics worried. “Instead of measuring themselves in terms of the competence of their graduates, or by how well their faculties understand important drivers of business performance,” complained Warren Bennis and James O’Toole, “they measure themselves almost solely by the rigor of their scientific research.”Footnote 42 That debate between applied and pure research, as intense as it often seemed, was more smoke and mirrors than substance.

In a utilitarian culture, the specific definition of usefulness can and will be contested; the underlying need to define and measure the useful and to separate out the useless is accepted. The ideology of usefulness may value ideas based on their capacity to contribute directly to practice improvement. It may, conversely, value a kind of empiricism that can be useful to fellow academic scholars in search of the best measurement instrumentation and experimental design. Either way, the trap is set. Research must be directly useful and applicable to someone. Otherwise, why bother?

To frame a response to the why bother question, I’ll start with reference to a book by seventeenth-century German physician Johann Joachim Becher titled Foolish Wisdom and Wise Foolishness.Footnote 43 I take that title to contain a plea for rethinking definitions of the useful and the useless, opening up the possibility that what is labeled as “foolish” by some might, in fact, offer a path to wisdom.

A New Interpretive Effort

The idea that Constructing Crisis wants to consider is crisis, a concept both overused and much abused. The crisis label is overused in that it is applied indiscriminately. It is abused in that it is applied incorrectly. My argument is that traditional discourse on crisis gets the idea itself wrong.

This is not a criticism of the traditional approach to crisis. Criticism invokes an examination of a construct – say, a concept, text, or argument – through a consideration of its flaws and imperfections. I could criticize a book on crisis management, for instance, by insisting that the author somehow got the recommendations for how to respond to a crisis wrong. That would be criticism. Criticism can be valuable. Wise, practical counsel is important. It is not, however, what is on offer here.

What I propose instead is a critique of how crisis is conceptualized. Critique focuses not on the particulars of a construct but on the grounds upon which the construct is built.Footnote 44 In his 1781 Critique of Pure Reason, Immanuel Kant insisted that everything must be subjected to critique, so that no institution – especially but not exclusively a religious institution – could claim exemption from fundamental scrutiny.Footnote 45

By virtue of taking a position “at the edge of established knowledge,” critique is inherently seditious of accepted wisdom.Footnote 46 Constructing Crisis offers critique with the goal of provoking a new interpretive effort concerning crisis. There will be no scripted journey intended to take leaders from the onset of a crisis event to a successful response, recovery, and improvement.Footnote 47 No solutions will be proposed, no best practices described so as to be emulated. Rather, an alternative model – a set of concepts and integrating propositions – will be offered built fundamentally on a set of ontological assumptions that veer sharply away from the traditional take on crises.Footnote 48

Ontology of the New Model

What exists? What are those things that can be said to have material being, and what is excluded from that construction? These are the central problems that occupy the discipline of ontology.Footnote 49 There is a story told about two famed theoretical physicists, Albert Einstein and Niels Bohr, engaging in an ongoing debate about the question of objective reality. Einstein once asked Bohr if he really thought that if no one were looking at the moon, it would not exist. Einstein, a believer in objective reality, said he would like to think that it did. Bohr countered by insisting that if no one were looking at the moon, it would be difficult to prove that it existed.Footnote 50

I come down firmly on the side of an objective reality: there is a moon regardless of whether anyone is looking at it. Nonetheless, I insist that ontological distinctions reside at the heart of both the dominant model of crisis and the reconstructed model presented here.

I want to be clear in what I mean to suggest, and what I mean not to suggest, in adopting an ontological viewpoint that crisis is not a thing. What I mean is captured by Alfred North Whitehead’s notion of a thing as a “fact of concrete nature.” To view crisis as a thing, we must assume that it can simply be observed as existing in the world within a finite duration of time and space apart from any interaction with or intervention by human interpretation.Footnote 51 To reference the Einstein-Bohr debate, when crisis is taken to be a thing, it is thought of as made up of the same material realness as the moon. Crisis, like the moon, is a fact of concrete nature. It is that view of crisis that I will critique.

I do not mean to contest the existence of threatening, dangerous, or potentially calamitous contingencies. Of course there are such dynamics. But dynamics are not things.

Taking crisis to be a fact of concrete nature, a material thing, lies at the heart of the crisis-as-event model. A thing is something that “can be categorized, located in time and space, and given a name.”Footnote 52 The notion that crisis fits the definition of a thing has become the “intellectual carrier” of the assumptions shared by crisis researchers and expert crisis consultants.Footnote 53

My counter-assertion insists that rather than being a thing, crisis is a claim of urgency made by a leader. There is no real meaning to the dynamics of the world absent human interpretation. Facts never speak for themselves. This is the basis of the crisis-as-claim model.

“A theoretical model starts with things or variables, or (1) units, whose interactions constitute the subject matter of attention,” wrote Robert Dubin. A model goes on to specify “the manner in which these units interact with each other, or (2) the laws of interaction among the units of the model.”Footnote 54 Constructors of theoretical models always need to ask: What is the focal unit of analysis? Why build a model on this unit rather than that one?

The crisis-as-claim model shifts analysis away from the event and to the claim (Table 1.1). That shift surfaces the blind spots in the crisis-as-event model – say, the insufficient attention traditional crisis analysis pays to power and control, or to the interests embedded in each and every claim of urgency.

Table 1.1 Shifting the unit of analysis

I am not building the crisis-as-claim model from scratch. Far from it. Several decades ago, education researchers David Berliner and Bruce Biddle warned against “manufactured” crises generated by a sense of nostalgia and confusion about an idealized past and an uncertain present. Berliner and Biddle were analyzing a manufactured crisis of school inferiority in the United States compared with Japan. The critics of American schools and their lack of “competitiveness,” the authors argued, were explicitly intending to deflect attention away from underlying problems of wealth distribution and inequality.Footnote 55 Communications scholars have adopted a social constructionist perspective to note that “crises are symbolic and subjective, not simply objective events” and that “what might be considered a crisis in one situation may not be considered a crisis in another.”Footnote 56

As these critiques suggest, the crisis-as-event model errs when it mistakes the content of claims for descriptions of objective truths. Crises are not things that await first being recognized and then being managed; they are claims that assert urgency, insisting that the social unit faces immediate and significant threat. The thing under analysis in the crisis-as-claim model is the claim itself.

So, we return to the observation that there is no such thing as a crisis. And yet we persist in thinking of crises as things: events that objectively pose real, even existential threats. That assumption is core to the crisis-as-event model. The critique offered through the crisis-as-claim model is that a crisis is not a thing. It is, rather, a label applied to a set of contingencies that may be confusing, complex, and ambiguous. To refer to these contingencies as crises requires an act of human intervention.

How and why did we come to think otherwise?

Footnotes

1 The idea of crisis can also be deployed in a more individualistic sense: mid-life crises, identity crises, developmental crises, anxiety crises, and the like. Constructing Crisis will not be examining the use and implications of the idea at this level.

2 It is true that the US Constitution reserves the right to declare war to the Congress, not the president. But as Michael Beschloss has shown, that provision is more often violated than heeded. See Beschloss, Presidents of War (New York: Crown, 2018).

3 Murray S. Davis, “That’s Interesting! Towards a Phenomenology of Sociology and a Sociology of Phenomenology,” Philosophy of the Social Sciences 1 (1971), 311.

4 The term is from Alvin Gouldner, The Coming Crisis of Western Sociology (New York: Basic Books, 1970).

5 Frederick Jackson Turner, The Frontier in American History (New York: H. Holt, 1920), p. 2.

6 I am using the Mississippi only metaphorically here. Well before the 1820s, the frontier consisted of lands east of the Mississippi: the Berkshire Mountains, Atlantic tidewaters, Shenandoah Valley, and Mohawk River, for example.

7 Patricia Limerick, The Legacy of Conquest: The Unbroken Past of the American West (New York: Norton, 1987), p. 36.

8 Paul Frymer, Building an American Empire: The Era of Territorial and Political Expansion (Princeton: Princeton University Press, 2017), p. 1.

9 Jill Lepore, These Truths: A History of the United States (New York: Norton, 2018).

10 Richard Slotkin, Regeneration through Violence: The Mythology of the American Frontier, 1600–1860 (Middletown: Wesleyan University Press, 1973); Margaret Walsh, “Women’s Place on the American Frontier,” Journal of American Studies 29 (1995), 241–255.

11 Richard Hofstadter, Anti-Intellectualism in American Life (New York: Knopf, 1966).

13 Daniel Rigney, “Three Kinds of Anti-Intellectualism: Rethinking Hofstadter,” Sociological Inquiry 61 (1991), 444.

14 Turner, Frontier in American History, 271.

15 This was a trend that worried Turner. The university must be allowed to be “left free … to explore new regions and to report what they find; for like the pioneers, they have the ideal of investigation, they seek new horizons” (Ibid., 287).

16 That clue and answer appeared in the USA Today daily crossword of September 9, 2018.

17 Alan Hughes et al., Hidden Connections: Knowledge Exchange Between the Arts and Humanities and the Private, Public, and Third Sectors (Cambridge: Arts & Humanities Research Council, 2011). I purposefully provided this citation – a survey of British universities – to suggest that the turn toward usefulness even among liberal arts schools is hardly unique to the United States.

18 Rigney, “Three Kinds of Anti-Intellectualism”; Nils Roll-Hansen, Why the Distinction between Basic (Theoretical) and Applied (Practical) Research Is Important in the Politics of Science (London: London School of Economics, 2009).

19 Daniel Wren, “American Business Philanthropy and Higher Education in the Nineteenth Century,” Business History Review 57 (1983), 324.

20 Henry Giroux, The University in Chains: Confronting the Military-Industrial-Academic Complex (Boulder: Paradigm Publishers, 2007); Mark Learmonth et al., “Promoting Scholarship that Matters: The Uselessness of Useful Research and the Usefulness of Useless Research,” British Journal of Management 23 (2011), 35–44; James March, “A Scholar’s Quest,” Journal of Management Inquiry 20 (2011), 355–357.

21 Harold Wechsler, The Qualified Student: A History of Selective College Admission in America (New York: John Wiley & Sons, 1977).

22 Mitchell Stevens, Creating a Class: College Admissions and the Education of Elites (Cambridge, MA: Harvard University Press, 2007), p. 248.

23 Gouldner, The Coming Crisis of Western Sociology, 61.

27 Occasional awards went to government agencies said to be engaged in wasteful spending unrelated to university research.

28 Robert Irion, “What Proxmire’s Golden Fleece Did for and to Science,” The Scientist (December 12, 1988).

29 Kelly Field, “Obama Plan to Tie Student Aid to College Ratings Draws Mixed Reviews,” Chronicle of Higher Education (August 22, 2013). These recommendations were not implemented.

30 Martin Parker, Shut Down the Business School (London: Pluto Press, 2018).

31 Warren Bennis and James O’Toole, “How Business Schools Lost Their Way,” Harvard Business Review 83 (2005), 98.

32 Barbara Czarniawska, Writing Management: Organization Theory as a Literary Genre (Oxford: Oxford University Press, 1999), p. 3.

34 Michael Burawoy, “For Public Sociology,” American Sociological Review 70 (2005), 9.

35 André Spicer et al., “Critical Performativity: The Unfinished Business of Critical Management Studies,” Human Relations 62 (2009), 537–560.

36 Larry Kramer, “Harvard Fights Fiercely Over the Business School,” Washington Post (June 8, 1979); Duff McDonald, The Golden Passport: Harvard Business School, the Limits of Capitalism, and the Moral Failure of the MBA Elite (New York: Harper Business, 2017).

37 Sandra Waddock, Intellectual Shamans: Management Academics Making a Difference (Cambridge: Cambridge University Press, 2015), p. 274.

38 Sverre Spoelstra and Peter Svensson, “Critical Performativity: The Happy End of Critical Management Studies?” in The Routledge Companion to Critical Management Studies (London: Routledge, 2016), p. 70.

39 There is evidence that positive empirical results – that is, results supporting an earlier hypothesis – are far more likely to be published by elite journals than negative results disproving earlier hypotheses. The hurdles for having negative results accepted by reviewers are simply higher. It is a publication bias that can lead to distortions in knowledge. See Y. A. de Vries et al., “The Cumulative Effect of Reporting and Citation Biases on the Apparent Efficacy of Treatments: The Case of Depression,” Psychological Medicine 13 (2018), 13.

40 That definition is from Devi Gnyawali and Yue Song, “Pursuit of Rigor in Research: Illustration from Coopetition Literature,” Industrial Marketing Management 57 (2016), 12.

41 Robert Dubin, Theory Building (New York: Free Press, 1978).

42 Bennis and O’Toole, “How Business Schools Lost Their Way,” 98.

43 The book, subtitled Or a Hundred or So Political and Physical, Mercantile and Mechanistic Concepts and Propositions, is referenced in Alfred Kieser, “Rhetoric and Myth in Management Fashion,” Organization 4 (1997), 49–74.

44 My definition of critique and its distinction from criticism comes from Barbara Johnson and Jacques Derrida as presented in Joan W. Scott, “History-writing as Critique,” in Manifestos for History (London: Routledge, 2007), pp. 19–38.

45 Sverre Raffnsøe, “What Is Critique? Critical Turns in the Age of Criticism,” Critical Practice Studies 18 (2017), 28–60.

46 The quote is from Raffnsøe, “What Is Critique?” 42.

47 The term “scripted journeys” is taken from Keith Grint, Management: A Sociological Introduction (Cambridge: Polity Press, 1995).

48 I take this definition of a conceptual model from Jo Rycroft-Malone and Tracey Bucknall. In that definition, “conceptual model” and “conceptual framework” are interchangeable. See Rycroft-Malone and Bucknall, “Theory, Frameworks, and Models Laying Down the Groundwork,” in Models and Frameworks for Implementing Evidence-Based Practice: Linking Evidence to Action (Somerset: John Wiley, 2011), pp. 23–50.

49 Dale Jacquette, Ontology (London: Routledge, 2002). Ontological questions quickly turn to metaphysics as a way of appreciating not just what is but also what “the most general features and relations of these things are.” Thomas Hofweber, “Logic and Ontology,” Stanford Encyclopedia of Philosophy (2017).

50 This story, and other slightly different versions, may be apocryphal. But a form of it is referenced by David Mermin, “Is the Moon There When Nobody Looks? Reality and Quantum Theory,” Physics Today 38 (1985), 38–42.

51 Alfred North Whitehead, Science and the Modern World (Cambridge: Cambridge University Press, 1925), pp. 64, 72.

52 Robin Wagner-Pacifici, What Is an Event? (Chicago: University of Chicago Press, 2017), p. 1.

53 The term “intellectual carrier” is from Mike Reed and Gibson Burrell, “Theory and Organization Studies: The Need for Contestation,” Organization Studies (2019), 39–54.

54 Dubin, Theory Building, 7–8.

55 David Berliner and Bruce Biddle, The Manufactured Crisis: Myths, Fraud, and the Attack on America’s Public Schools (Reading, MA: Addison-Wesley, 1995).

56 Brenda Berkelaar and Mohan Dutta, “A Culture-Centered Approach to Crisis Communication,” a paper presented at the annual meeting of the National Communication Association 93rd annual convention (2007). There is a steady stream of linguists and communications and discourse specialists who have tied social constructionism to claims of crisis. See Antoon De Rycker and M. D. Zuraidah, “Discourse in Crisis: Crisis in Discourse,” in Discourse and Crisis: Critical Perspectives (Amsterdam: John Benjamins, 2013), pp. 3–65; Jesper Falkheimer and Mats Heide, “Multicultural Crisis Communication: Towards a Social Constructionist Perspective,” Journal of Contingencies and Crisis Management 14 (2006), 180–189; Jesper Falkheimer and Mats Heide, “Crisis Communicators in Charge: From Plans to Improvisation,” in The Handbook of Crisis Communication (Hoboken, NJ: Wiley-Blackwell, 2010), pp. 511–526; Keith Hearit and Jeffrey Courtright, “A Social Constructionist Approach to Crisis Management: Allegations of Sudden Acceleration in the Audi 5000,” Communication Studies 54 (2003), 79–95; Loizos Heracleous, Discourse, Interpretation, Organization (Cambridge: Cambridge University Press, 2006); Dermot O’Reilly et al., “Introduction: Leadership in a Crisis-Constructing World,” Leadership 11 (2015), 387–395; Richard Vatz, “The Myth of the Rhetorical Situation,” Philosophy & Rhetoric 8 (1973), 154–161.

