5 - Empires of the Mind

from I - Distraction by Design

Published online by Cambridge University Press: 30 May 2018

James Williams, University of Oxford

Stand out of our Light: Freedom and Resistance in the Attention Economy, pp. 26–40.
Publisher: Cambridge University Press. Print publication year: 2018.

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC 4.0 (https://creativecommons.org/licenses/by-nc/4.0/).

The empires of the future are the empires of the mind.

Churchill

There was once a man walking down a road, wearing a cloak to keep warm. The North Wind noticed him and said to the Sun, “Let’s see which one of us can get that man to take off his cloak. I bet I’ll surely win, for no one can resist the gales of my mighty breath!” The Sun agreed to the contest, so the North Wind went first and started blowing at the man as hard as he could. The man’s hat flew off; leaves swirled in the air all around him. He could barely take a step forward, but he clutched his cloak tightly – and no matter how hard the North Wind blew, the man’s cloak stayed on. “What? Impossible!” the North Wind said. “Well, if I have failed,” he said to the Sun, “then surely there is no hope for you.” “We shall see,” said the Sun. The Sun welled up his chest and made himself as bright as he could possibly be. The man, still walking, had to shield his eyes because the Sun’s shine was so intense. Soon the man grew so warm inside his wool cloak that he began to feel faint: he started to stagger, sweat dripping off his head into the dirt. Breathing deeply, he untied his cloak and flung it over his shoulder, all the while scanning his environs for a source of water where he could cool off. The Sun’s persuasion had won out where the North Wind’s coercion could not.

This story comes from Aesop, the Greek fabulist who lived a few hundred years before Diogenes ever trolled the streets of Corinth. Like Diogenes, Aesop was also a slave at one point in his life before eventually being freed. Aesop died in Delphi, where the famous oracle lived upon whose temple was inscribed that famous maxim “Know Thyself.” You probably know some of Aesop’s other fables – “The Tortoise and the Hare,” “The Ant and the Grasshopper,” “The Dog and its Reflection” – but “The North Wind and the Sun” is one of my favorites, because it shows us that persuasion can be just as powerful, if not more so, than coercion.1

Of all the ways humans try to influence each other, persuasion might be the most prevalent and consequential. A marriage proposal. A car dealer’s sales pitch. The temptation of Christ. A political stump speech. This book. When we consider the stories of our lives, and the stories that give our lives meaning, we find that they often turn on pivot points of persuasion. Since ancient Greece, persuasion has been understood primarily in its linguistic form, as rhetorike techne, or the art of the orator. Aristotle identified what he saw as three pillars of rhetoric – ethos, pathos, and logos – which roughly correspond to our notions of authority, emotion, and reason. And into medieval times, persuasion held a central position in education, alongside grammar and logic, as one-third of the classical trivium.

Yet all design is “persuasive” in a broad sense; it all directs our thoughts or actions in one way or another.2 There’s no such thing as a “neutral” technology. All design embodies certain goals and values; all design shapes the world in some way. A technology can no more be neutral than a government can be neutral. In fact, the cyber- in “cybernetics” and the gover- in “government” both stem from the same Greek root: kyber-, “to steer or to guide,” originally used in the context of the navigation of ships. (This nautical metaphor provides a fitting illustration of what I mean: The idea of a “neutral” rudder is an incoherent one. Certainly, a rudder held straight can help you stay the course – but it won’t guide your ship somewhere. Nor, in the same way, does any technology.)

However, some design is “persuasive” in a narrower sense than this. Some design has a form that follows directly from a specific representation of users’ thoughts or behaviors that the designer wants to change. This sort of persuasive design is by no means unique to digital technologies; humans have long designed physical environments toward such persuasive ends. Consider, for instance, the placement of escalators in shopping malls, the music in grocery stores, or the layouts of cities.3 Yet what Churchill said about physical architecture – “we shape our buildings, and afterwards, our buildings shape us” – is just as true of the information architectures in which we now spend so much of our lives.4

For most of human history, persuasive design in this narrower sense has been a more or less handicraft undertaking. It’s had the character of an art rather than a science. As a result, we haven’t worried too much about its power over us. Instead, we’ve kept an eye on coercive, as opposed to persuasive, designs. As Postman pointed out, we’ve been more attuned to the Orwellian than the Huxleyan threats to our freedom.

But now the winds have changed. While we weren’t watching, persuasion became industrialized. In the twentieth century the modern advertising industry came to maturity and began systematically applying new knowledge about human psychology and decision making. In parallel, advertising’s scope expanded beyond the mere provision of information to include the shaping of behaviors and attitudes. By the end of the twentieth century, new forms of electric media afforded advertisers new platforms and strategies for their persuasion, but the true effectiveness of their efforts was still hard to measure. Then, the internet came along and closed the feedback loop of measurement. Very quickly, an unprecedented infrastructure of analytics, experimentation, message delivery, customization, and automation emerged to enable digital advertising practices. Furthermore, networked general-purpose computers were becoming more portable and connected, and people were spending more time than ever with them. Designers began applying techniques and infrastructures developed for digital advertising to advance persuasive goals in the platforms and services themselves. The scalability and increasing profitability of digital advertising made it the default business model, and thus incentive structure, for digital platforms and services. As a result, goals and metrics that served the ends of advertising became the dominant goals and metrics in the design of digital services themselves. By and large, these metrics involved capturing the maximum amount of users’ time and attention possible. In order to win the fierce global competition for our attention, design was forced to speak to the lowest parts of us, and to exploit our cognitive vulnerabilities.

This is how the twenty-first century began: with sophisticated persuasion allying with sophisticated technology to advance the pettiest possible goals in our lives. It began with the AI behind the system that beat the world champion at the board game Go recommending videos to keep me watching YouTube longer.5

There’s no good analogue for this monopoly of the mind the forces of industrialized persuasion now hold – especially on the scale of billions of minds. Perhaps the nearest analogues are Christian adherents carrying the Bible everywhere they go, the memorization of full Homeric epics in the Greek oral tradition, the assignment of Buddhist mantras to recite all day under one’s breath, or the total propaganda machines of totalitarian states. But we must look to the religious, the mythic, the totalistic, to find any remotely appropriate comparison. We have not been primed, either by nature or habit, to notice, much less struggle against, these new persuasive forces that so deeply shape our attention, our actions, and our lives.

This problem is not new just in scale, but also in kind. The empires of the present are the empires of the mind.

On October 26, 1994, if you had fired up your 28.8k modem, double-clicked the icon for the newly released Netscape Navigator web browser, and accessed the website of Wired Magazine, you would have seen a rectangle at the top of the page. In it, tie-dye text against a black background would have asked you, “Have you ever clicked your mouse right HERE? You will.”6 Whether intended as prediction or command, this message – the first banner ad on the web – was more correct than its creators could have imagined. Digital ad spend was projected to pass $223 billion in 2017, and to continue to grow at double-digit rates until at least 2020.7 Digital advertising is by far the dominant business model for monetizing information on the internet today. Many of the most widely used platforms, such as Google, Facebook, and Twitter, are at core advertising companies. As a result, many of the world’s top software engineers, designers, analysts, and statisticians now spend their days figuring out how to direct people’s thinking and behavior toward predefined goals that may not align with their own. As Jeff Hammerbacher, Facebook’s first research scientist, remarked: “The best minds of my generation are thinking about how to make people click ads … and it sucks.”8

As a media dynamic, advertising has historically been an exception to the rule of information delivery in a given medium. It’s the newspaper ads, but not the articles; it’s the billboards, but not the street signs; it’s the TV commercials, but not the programs. In a world of information scarcity, it was useful to make these exceptions to the rule because they gave us novel information that could help us make better purchasing decisions. This has, broadly speaking, been the justification for advertising’s existence in an information-scarce world.

In the mid-twentieth century, as the modern advertising industry was coming to maturity, it started systematically applying new knowledge about human psychology and decision making. Psychologists such as Sigmund Freud had laid the groundwork for the study of unconscious thought, and in the 1970s Daniel Kahneman and Amos Tversky revealed the ways in which our automatic modes of thinking can override more rational rules of statistical prediction.9 In fact a great deal of our everyday experience consists of such automatic, nonconscious processes; our lives take place, as the researchers John Bargh and Tanya Chartrand have said, against the backdrop of an “unbearable automaticity of being.”10 On the basis of all this new knowledge about human psychology and decision making, advertising’s scope continued to expand beyond the informational to the persuasive; beyond shaping behaviors to shaping attitudes.11 And new forms of electric media were giving advertisers new avenues for their persuasion.

Yet most advertising remained faith-based. Without a comprehensive, reliable measurement infrastructure, it was impossible to study the effectiveness of one’s advertising efforts, or to know how to improve on them. As John Wanamaker, a department store owner around the beginning of the twentieth century, is reported to have said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”12 The potential for computing to revolutionize advertising measurement was recognized as early as the 1960s, when advertising agencies began experimenting with large mainframe computers. Companies such as Nielsen were also beginning to use diary and survey panel methods to understand audiences and their consumption behaviors, which marginally improved advertising intelligence by providing access to demographic data. However, these methods were laborious and expensive, and their aggregate data was useful only directionally. Measuring the actual effectiveness of ads was still largely infeasible.

The internet changed all that. Digital technology enabled a Cambrian explosion of advertising measurement. It was now possible to measure – at the level of individual users – people’s behaviors (e.g. page views), intentions (e.g. search queries), contexts (e.g. physical locations), interests (e.g. inferences from users’ browsing behavior), unique identifiers (e.g. device IDs or emails of logged-in users), and more. Also, vastly improved “benchmarking” data – information about the advertising efforts of one’s competitors – became available via market intelligence services like comScore and Hitwise. Web browsers were key in enabling this sea change of advertising measurement, not only because of their new technical affordances, but also because of the precedent they set for subsequent measurement capabilities in other contexts.
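
To make this granularity concrete, here is a minimal sketch of what a single per-user measurement event of this kind might look like. Every field name and value below is a hypothetical illustration of the categories just listed, not the schema of any real platform.

```python
# Hypothetical per-user tracking event; all names and values are invented.
event = {
    "user_id": "a1b2c3d4",                           # unique identifier (cookie, device ID, or login)
    "behavior": {"page_view": "/products/42"},       # what the user did
    "intention": {"search_query": "running shoes"},  # what the user is looking for
    "context": {"location": "Oxford, UK", "device": "mobile"},
    "interests": ["fitness", "travel"],              # inferred from browsing behavior
    "timestamp": "2017-06-01T12:00:00Z",
}
```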

In particular, the browser “cookie” – a small file delivered imperceptibly via website code to track user behavior across pages – played an essential role. In his book The Daily You, Joseph Turow writes that the cookie did “more to shape advertising – and social attention – on the web than any other invention apart from the browser itself.”13 Cookies are also emblematic, in their scope-creep, of digital advertising measurement as a whole. Initially, cookies were created to enable “shopping cart” functionality on retail websites; they were a way for the site to keep track of a user as he or she moved from page to page. Soon, however, they were being used to track people between sites, and indeed all across the web. Many groups raised privacy concerns about these scope-creeping cookies, and it soon became commonplace to speak of two main types: “first-party” cookies (cookies created by the site itself) and “third-party” cookies (cookies created by someone else). In 1997 the Internet Engineering Task Force proposed taking away third-party cookies, which sent the online advertising industry into a frenzy.14 Ultimately, though, third-party cookies became commonplace. As unique identifiers at the level of the web-browser session, cookies paved the way for unique identifiers at higher levels, such as the device and even the user. Since 2014, for instance, Google’s advertising platform has been able to track whether you visit a company’s store in person after you see their ad.15
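
To see the mechanism at its simplest, here is a minimal sketch using Python's standard http.cookies module; the cookie name and values are invented for illustration. This is a first-party cookie of the original "shopping cart" kind. The same machinery becomes a third-party tracker when the cookie is set by an ad server whose content is embedded on many different sites, so that the same identifier comes back from all of them.

```python
from http.cookies import SimpleCookie

# A first-party "shopping cart" cookie of the kind cookies were invented
# for; the name and value here are illustrative only.
cookie = SimpleCookie()
cookie["cart_id"] = "abc123"
cookie["cart_id"]["path"] = "/"
cookie["cart_id"]["max-age"] = 60 * 60 * 24  # persist for one day

# The server sends this as a Set-Cookie header; the browser then returns
# "cart_id=abc123" with every subsequent request, letting the site
# recognize the same user from page to page.
print(cookie["cart_id"].OutputString())
# e.g. cart_id=abc123; Path=/; Max-Age=86400
```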

To manage this fire hose of measurement, “analytics” systems – such as Omniture, Coremetrics, and Google Analytics – emerged to serve as unified interfaces for managing one’s advertising as well as nonadvertising data. In doing so, they helped establish the “engagement” metrics of advertising (e.g. number of clicks, impressions, or time on site) as default operational metrics for websites themselves. This effectively extended the design logic of advertising – and particularly attention-oriented advertising (as opposed to advertising that serves users’ intentions) – to the design of the entire user experience.

In previous media, advertising had largely been an exception to the rule of information delivery – but in digital media, it seemed to have broken down some essential boundary; it seemed now to have become the rule. If advertising was previously said to be “underwriting” the dominant design goals of a medium, in digital media it now seemed to be “overwriting” them with its own. It wasn’t just that the line between advertising and nonadvertising was getting blurry, as with “native advertisements” (i.e. ads that have a similar look and feel to the rest of the content) or product placements (e.g. companies paying YouTube or Instagram “influencers” to use a product). Rather, it seemed that everything was now becoming an ad.

The confluence of these trends has given us the digital “attention economy,” the environment in which digital products and services relentlessly compete to capture and exploit our attention. In the attention economy, winning means getting as many people as possible to spend as much time and attention as possible with one’s product or service. Or, as it’s often said, in the attention economy “the user is the product.”

Think about it: The attention you’re deploying in order to read this book right now (an attention for which, by the way, I’m grateful) – an attention that includes, among other things, the saccades of your eyeballs, the information flows of your executive control function, your daily stockpile of willpower, and the goals you hope reading this book will help you achieve – these and other processes you use to navigate your life are literally the object of competition among many of the technologies you use every day. There are literally billions of dollars being spent to figure out how to get you to look at one thing over another; to buy one thing over another; to care about one thing over another. This is literally the design purpose of many of the technologies you trust to guide your life every day.

Because there’s so much competition for our attention, designers inevitably have to appeal to the lowest parts of us – they have to privilege our impulses over our intentions even further – and exploit the catalog of decision-making biases that psychologists and behavioral economists have been diligently compiling over the last few decades. These biases include things like loss aversion (such as the “fear of missing out,” often abbreviated as FOMO), social comparison, the status quo bias, framing effects, anchoring effects, and countless others.16 My friend Tristan Harris has a nice phrase for this cheap exploitation of our vulnerabilities: the “race to the bottom of the brain stem.”17

Clickbait is emblematic of this petty competition for our attention. Although the word is of recent coinage, “clickbait” has already been enshrined in the Oxford English Dictionary, where it’s defined as “content whose main purpose is to attract attention and encourage visitors to click on a link to a particular web page.” You’ve no doubt come across clickbait on the web, even if you haven’t known it by name. It’s marked by certain recognizable and rage-inducing headline patterns, as seen in, for example: “23 Things Parents Should Never Apologize For,” “This One Surprising Phrase Will Make You Seem More Polite,” or “This Baby Panda Showed Up At My Door. You Won’t Believe What Happened Next.” Clickbait laser-targets our emotions: a study of 100 million articles shared on Facebook found that the most common phrases in “top-performing” headlines were phrases such as “are freaking out,” “make you cry,” and “shocked to see.” It also found that headlines which “appeal to a sense of tribal belonging” drive increased engagement, for instance those of the formulation “X things only [some group] will understand.”18

In the attention economy, this is the game all persuasive design must play – not only the writers of headlines. In fact, there’s a burgeoning industry of authors and consultants helping designers of all sorts draw on the latest research in behavioral science to punch the right buttons in our brains as effectively and reliably as possible.19

One major aim of such persuasive design is to keep users coming back to a product repeatedly, which requires the creation of habits. The closest thing to a bible for designers who want to induce habits in their users is probably Nir Eyal’s book Hooked: How to Build Habit-Forming Products. “Technologists build products meant to persuade people to do what we want them to do,” Eyal writes. “We call these people ‘users’ and even if we don’t say it aloud, we secretly wish every one of them would become fiendishly hooked to whatever we’re making.”20 In the book, Eyal gives designers a four-stage model for hooking users that consists of a trigger, an action, a variable reward, and the user’s “investment” in the product (e.g. of time or money).
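
As a rough illustration of how the four stages compose into a loop, consider the following sketch; the state keys, reward probability, and loop structure are my own illustrative assumptions, not code from Eyal's book.

```python
import random

def hook_cycle(state: dict, reward_p: float = 0.3) -> None:
    """One pass through the trigger -> action -> variable reward ->
    investment loop; all keys and numbers are illustrative assumptions."""
    state["triggers"] += 1          # 1. trigger: e.g. a push notification
    state["opens"] += 1             # 2. action: e.g. opening the feed
    if random.random() < reward_p:  # 3. variable reward: a payoff only sometimes
        state["rewards"] += 1
    state["invested"] += 1          # 4. investment: a post or follow that makes
                                    #    the next trigger more personal and potent

state = {"triggers": 0, "opens": 0, "rewards": 0, "invested": 0}
for _ in range(100):
    hook_cycle(state)
print(state)  # roughly 30 rewards for 100 opens, never predictably spaced
```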

The key element here is the variable reward. When you randomize the reward schedule for a given action, it increases the number of times a person is likely to take that action.21 This is the underlying dynamic at work behind the high engagement users have with “infinite” scrolling feeds, especially those with “pull-to-refresh” functionality, which we find in countless applications and websites today such as Facebook’s News Feed or Twitter’s Stream. It’s also used widely in all sorts of video games. In fact, this effect is often referred to as the “slot machine” effect, because it’s the foundational mechanism on which the machine gambling industry relies – and which generates for them over a billion dollars in revenue every day in the United States alone.22 Variable reward scheduling is also the engine of the compulsive, and sometimes addictive, habits of usage that many users struggle to control.23
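
The contrast between a predictable and a randomized schedule can be made precise. The sketch below compares the two under the same average payout; the five-action interval and 20 percent probability are illustrative assumptions.

```python
import random

def fixed_schedule(n_actions: int) -> list[bool]:
    """Reward exactly every 5th action: predictable, so checking tapers
    off once the pattern is learned."""
    return [(i + 1) % 5 == 0 for i in range(n_actions)]

def variable_schedule(n_actions: int, p: float = 0.2) -> list[bool]:
    """Reward about 1 in 5 actions at random (a variable-ratio schedule):
    the same average payout, but any given pull-to-refresh *might* pay
    off, which is what sustains compulsive checking."""
    return [random.random() < p for _ in range(n_actions)]
```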

Whether we’re using a slot machine or an app that’s designed to “hook” us, we’re doing the same thing; we’re “paying for the possibility of a surprise.”24 With slot machines, we pay with our money. With technologies in the attention economy, we pay with our attention. And, as with slot machines, the benefits we receive from these technologies – namely “free” products and services – are up front and immediate, whereas we pay the attentional costs in small denominations distributed over time. Rarely do we realize how costly our free things are.

Persuasive design isn’t inherently bad, of course, even when it does appeal to our psychological biases. Indeed, it can be used for our benefit. In the area of public policy, for instance, the practice of “nudging” aims to structure people’s environments in ways that help them make decisions that better promote their well-being. However, in the attention economy the incentives for persuasive design reward grabbing, and holding, our attention – keeping us looking, clicking, tapping, and scrolling. This amplifies, rather than mitigates, the challenges of self-regulation we already face in the era of information abundance.

On the opening screen of one of the first web browsers there was a notice that read, “There is no ‘top’ to the World Wide Web.”25 In other words, the web isn’t categorized hierarchically, like a directory of files – it’s decentralized, a network of nodes. One of the tragic ironies about the internet is that such a decentralized infrastructure of information management could enable the most centralized systems of attention management in human history. Today, just a few people at a handful of companies have the ability to shape what billions of human beings think and do. One person, Mark Zuckerberg, owns Facebook, which has over 2 billion users, as well as WhatsApp (1.3 billion users), Facebook Messenger (1.2 billion users), and Instagram (800 million users).26 Google and Facebook now account for 85 percent (and rising) of internet advertising’s year-over-year growth.27 And the Facebook News Feed is now the primary source of traffic for news websites.28

Alexander the Great could never have dreamed of having this amount of power. We don’t even have a good word for it yet. This isn’t a currently categorizable form of control over one’s fellow human beings. It’s more akin to a new government or religion, or even language. But even these categories feel insufficient. There aren’t even 2 billion English speakers in the world.

In 1943, in the thick of World War II, Winston Churchill traveled to Harvard to pick up an honorary degree and say a few words to a packed house. The title of his talk was “The Gift of a Common Tongue.” After lauding the fact that Britain and America shared a common language – which, he hoped, might one day serve as the basis not only for Anglo-American fraternity and solidarity, but even for a common citizenship – he gave a plug to Basic English, a simplified version of English that he hoped might one day become a global lingua franca, a “medium, albeit primitive, of intercourse and understanding.” This was the context – the prospect of giving the world a common linguistic operating system – in which he said “the empires of the future are the empires of the mind.”

The corollary of Churchill’s maxim is that the freedoms of the future are the freedoms of the mind. His future was the present we now struggle to see. Yet when the light falls on it just right, we can see the clear and urgent threat that this unprecedented system of intelligent, industrialized persuasion poses to our freedom of attention.
