
11 - The Monster and the Bank

from III - Freedom of Attention

Published online by Cambridge University Press: 30 May 2018

James Williams, University of Oxford

Type: Chapter
Information: Stand out of our Light: Freedom and Resistance in the Attention Economy, pp. 97–105
Publisher: Cambridge University Press
Print publication year: 2018
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC 4.0 (https://creativecommons.org/cclicenses/).

A perceptive and critical reader may object here that I’ve given too much airtime to the problems of the digital attention economy and not enough to its benefits. They would be quite right. This is by design. “Why?” they might ask. “Shouldn’t we make an even-handed assessment of these technologies, and fully consider their benefits along with their costs? Shouldn’t we take care not to throw out the baby with the bath water?”

No, we should not. To proceed in that way would grant the premise that it’s acceptable for our technologies to be adversarial against us to begin with. It would serve as implicit agreement that we’ll tolerate design that isn’t on our side, as long as it throws us a few consolation prizes along the way. But adversarial technology is not even worthy of the name “technology.” And I see no reason, either moral or practical, why we should be expected to tolerate it. If anything, I see good reasons for thinking it morally obligatory that we resist and reform it. Silver linings are the consolations of the disempowered, and I refuse to believe that we are in that position relative to our technologies yet.

The reader might also object, “Are any of these dynamics really new at all? Does the digital attention economy really pose a fundamentally new threat to human freedom?” To be sure, incentives to capture and hold people’s attention existed long before digital technologies arose: elements of the attention economy have been present in previous electric media, such as radio and television, and even further back we find in the word “claptrap” a nice eighteenth-century analogue of “clickbait.” It’s also true that our psychological biases get exploited all the time: when a supermarket sets prices that end in .99, when a software company buries a user-hostile stipulation in a subordinate clause on page 97 of its terms-of-service agreement, or when a newspaper requires you to call, rather than email, to cancel your subscription. The present challenge, however, is new: as I have already argued here, this persuasion is far more powerful and prevalent than ever before, its pace of change is faster than ever before, and it’s centralized in the hands of fewer people than ever before.

This is a watershed moment in the trajectory of divesting our media – which is to say, our attentional world – of the biases of print media, a trajectory that has arguably been in motion since the telegraph. But this process is exponential rather than linear, tracking as it does the rate of technological change as a whole. The fact that it can be placed on an existing trajectory makes it more important, not less, to address.

It’s also wrongheaded to say that taking action to reform the digital attention economy would be premature because we lack sufficient clarity about the precise causal relationships between particular designs and particular types of harm. We will never have the sort of “scientific” clarity about the effects of digital media that we have, say, about the effects of the consumption of different drugs. The technology is changing too fast for research to keep up, its users and their contexts are far too diverse to allow anything but the broadest generalizations as conclusions, and the relationships between people and digital technologies are far too complex to make most research of this nature feasible at all. Again, though, the assumption behind calls to “wait and see” is that there’s a scenario in which we’d be willing to accept design that is adversarial against us in the first place. To demand randomized controlled trials, or similarly rigorous modes of research, before setting out to rewire the attention economy is akin to demanding verification that the opposing army marching toward you does, indeed, have bullets in its guns.

Additionally, it’s important to be very clear about what I’m not claiming here. For one, my argument is in no way anti-technology or anti-commerce. This is no Luddite move. The perspective I take, and the suggestions I will make, are in no way incompatible with making money, nor do they constitute a “brake pedal” on technological innovation. They’re more of a “steering wheel.” Ultimately, this is a project that takes seriously the claim, and helps advance the vision, that technology design can “make the world a better place.”

Also, it’s important to reiterate that I’m not arguing our nonrational psychological biases are in themselves “bad,” nor that exploiting them via design is inherently undesirable. As I wrote earlier, doing so is inevitable, and design can greatly advance users’ interests with these dynamics, when it’s on their side. As Huxley writes in his 1962 novel Island, “we cannot argue ourselves out of our basic irrationality – we can only learn to be irrational in a reasonable way.” Or, as Hegel puts it in Philosophy of Right, “Impulses should be phases of will in a rational system.”1

Nor, of course, am I arguing that digital technologies somehow “rewire” our brains, or otherwise change the way we think on a physiological level. Additionally, I’m not arguing here that the main problem is that we’re being “manipulated” by design. Manipulation is standardly understood as “controlling the content and supply of information” in such a way that the person is not aware of the influence. This seems to me simply another way of describing what most design is.

Neither does my argument require for its moral claims the presence of addiction.2 It’s enough to simply say that when you put people in different environments, they behave differently. There are many ways in which technology can be unethical, and can even deprive us of our freedom, without being “addictive.” Those in the design community and elsewhere who adopt a default stance of defensiveness on these issues often latch on to the conceptual frame of “addiction” in order to avoid having to meaningfully engage with the implications of ethically questionable design. This may occur explicitly or implicitly (the latter often by analogy to other addiction-forming products such as alcohol, cigarettes, or sugary foods). As users, we implicitly buy into these ethically constraining frames when we use phrases such as “digital detox” or “binge watch.” It’s ironic that comparing our technologies to dependency-inducing chemicals would render us less able to hold them ethically accountable for their designs and effects – but this is precisely the case. When we do so, we give up far too much ethical ground: we help to erect a straw man argument that threatens to commandeer the wider debate about the overall alignment of technology design with human goals and values. We must not confuse clinical standards with moral standards. Whether irresistible or not, if our technologies are not on our side, then they have no place in our lives.

It’s also worth noting several pitfalls we should avoid, namely things we must not do in response to the challenges of the attention economy. For one, we must not reply that if someone doesn’t like the choices on technology’s menu, their only option is to “unplug” or “detox.” This is a pessimistic and unsustainable view of technology, and one at odds with its very purpose. We have neither reason nor obligation to accept a relationship with technology that is adversarial in nature.

We must also be vigilant about the risk of slipping into an overly moralistic mode. Metaphors of food, alcohol, or drugs are often (though not always) signals of such overmoralizing. A recent headline in the British newspaper The Independent proclaims, “Giving your Child a Smartphone is Like Giving them a Gram of Cocaine, Says Top Addiction Expert.”3 Oxford researchers Andy Przybylski and Amy Orben penned a reply to that article in The Conversation, in which they wrote,

To fully confirm The Independent’s headline … you would need to give children both a gram of cocaine and a smartphone and then compare the effects … Media reports that compare social media to drug use are ignoring evidence of positive effects, while exaggerating and generalising the evidence of negative effects. This is scaremongering – and it does not promote healthy social media use. We would not liken giving children sweets to giving children drugs, even though having sweets for every meal could have serious health consequences. We should therefore not liken social media to drugs either.4

Similarly, we must reject the impulse to ask users to “just adapt” to distraction: to bear the burdens of impossible self-regulation, to suddenly become superhuman and take on the armies of industrialized persuasion. To do so would be akin to saying, “Thousands of the world’s brightest psychologists, statisticians, and designers are now spending the majority of their waking lives figuring out how to tear down your willpower – so you just need to have more willpower.” We must also reject the related temptation to say, “Oh well, perhaps the next generation will be better adjusted to this attentional warfare by virtue of having been born into it.” That is acquiescence, not engagement.

Additionally, education is necessary – but not sufficient – for transcending this problem. Nor will “media literacy” alone lead us out of this forest. It’s slightly embarrassing to admit this, but back when I was working at Google I actually printed out the Wikipedia article titled “List of Cognitive Biases” and thumb-tacked it on the wall next to my desk. I thought that having it readily accessible might help me be less susceptible to my own cognitive limitations. Needless to say, it didn’t help at all.

Nor can we focus on addressing the negative effects the attention economy has on children to the exclusion of addressing the effects it has on adults. This is often the site of the most unrestrained and counterproductive moralizing. To be sure, there are unique developmental considerations at play when it comes to children. However, we should seek not only to protect the most vulnerable members of society, but also the most vulnerable parts of ourselves.

We also can’t expect companies to self-regulate, or voluntarily refrain from producing the full effects they’re organizationally structured and financially incentivized to produce. Above all, we must not put any stock whatsoever in the notion that advancing “mindfulness” among employees in the technology industry is in any way relevant to or supportive of reforming the dynamics of the digital attention economy. The hope, if not the expectation, that technology design will suddenly come into alignment with human well-being if only enough CEOs and product managers and user experience researchers begin to conceive of it in Eastern religious terms is as dangerous as it is futile. This merely translates the problem into a rhetorical and philosophical frame that is unconnected to the philosophical foundations of Western liberal democracy, and thus is powerless to guide it. The primary function of thinking and speaking in this way is to gesture in the direction of morality while allowing enough conceptual haze and practical ambiguity to permit the impression that one has altered one’s moral course while not actually having done so.

Perhaps most of all, we cannot put the blame for these problems on the designers of the technologies themselves. No one becomes a designer or engineer because they want to make people’s lives worse. Tony Fadell, the founder of the company Nest, has said,

I wake up in cold sweats every so often thinking, what did we bring to the world? … Did we really bring a nuclear bomb with information that can – like we see with fake news – blow up people’s brains and reprogram them? Or did we bring light to people who never had information, who can now be empowered?5

Ultimately, there is no one to blame. At “fault” are more often the emergent dynamics of complex multiagent systems than the internal decision-making dynamics of any single individual. As W. Edwards Deming said, “A bad system will beat a good person every time.”6 John Steinbeck captured well the frustration we feel when our moral psychology collides with the hard truth of organizational reality in The Grapes of Wrath, when tenant farmers are evicted by representatives of the bank:

“Sure,” cried the tenant men, “but it’s our land … We were born on it, and we got killed on it, died on it. Even if it’s no good, it’s still ours … That’s what makes ownership, not a paper with numbers on it.”

“We’re sorry. It’s not us. It’s the monster. The bank isn’t like a man.”

“Yes, but the bank is only made of men.”

“No, you’re wrong there – quite wrong there. The bank is something else than men. It happens that every man in a bank hates what the bank does, and yet the bank does it. The bank is something more than men, I tell you. It’s the monster. Men made it, but they can’t control it.”7

The bank isn’t like a man, nor is the technology company, nor is any other brand nor signifier that we might use to represent the boundary conditions of these technologies that shape our lives. There is no one to blame. Knowing this, however, presents us with a choice of two paths. Do we conjure up an image of a “monster” at whom to direct our blame, and take a path which, while psychologically rewarding, is likely to distract from the goal of enacting real change in the real world? Or do we take the second path, and look head-on at the true nature of the system, as messy and psychologically indigestible as it seems to be?

The first path would seem to lead us toward a kind of digital mythology, in which we engage in imagined relationships with personified dynamics of our informational environment, much as the ancients did with their physical and emotional environments.8 Yet if we take autonomy seriously, we cannot help but note that in Steinbeck’s example it is not the displaced farmers but rather the bankers who invoke the idea – and, we might say, the brand – of the “monster.” Similarly, in the realm of digital technology, it is less often users than companies who produce the representations that serve as the primary psychological and emotional points of connection. In fact, these brands and representations may be the elements of technology design over which users have the least control of all. What this path would entail, then, is acquiescence to a mythology that, while psychologically satisfying, would be (and in many cases already is) even more engineered than the products those representations stand for, or than the decisions that those products are designed to induce.

The second path would entail looking the “monster” in the eye, and seeing it for the complex and multifaceted environment that it is. Such an approach would be akin to what the philosopher Luciano Floridi has called “infraethics,” or attention to the infrastructural, “first-order framework of implicit expectations, attitudes, and practices that can facilitate and promote morally good decisions and actions.”9 In a sense, the perspective of infraethics views society itself as a sort of persuasive technology, with a persuasive design goal of maximizing moral actions.

None of this implies, however, that we can simply stand by and expect the attention economy to fix itself. Noble mission statements and inspirational marketing claims can neither produce nor substitute for right design. “Some of the major disasters of mankind,” writes Alfred North Whitehead, “have been produced by the narrowness of men with a good methodology.”10 Similarly, countertechnologies and calls for players in the attention economy to voluntarily reform may serve as bandages that temporarily stem some localized bleeding – but they are not the surgery, the sustainable systemic change, that is ultimately needed. Besides, they implicitly grant that first, fatal assumption we have already roundly rejected: that it’s acceptable for the technologies that shape our thinking and behavior to be in an adversarial relationship against us in the first place.

After acknowledging and avoiding these pitfalls, what route remains? The route in which we take on the task of Herbert Marcuse’s “great refusal,” which Tim Wu describes in The Attention Merchants as being “the protest against unnecessary repression, the struggle for the ultimate form of freedom – ‘to live without anxiety.’”11 The route that remains is the route in which we move urgently to assert and defend our freedom of attention.
