
Part III - Humanity

Published online by Cambridge University Press:  24 August 2020

Edited by Kevin Werbach, University of Pennsylvania Wharton School of Business

After the Digital Tornado: Networks, Algorithms, Humanity, pp. 153–154. Publisher: Cambridge University Press. Print publication year: 2020.
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

7 Why a Commitment to Pluralism Should Limit How Humanity Is Re-Engineered

Brett Frischmann and Evan Selinger Footnote *

What does it mean to be human? What matters about being human? Can uses of technology dehumanize, and if so, how? What moral values should guide normative evaluations of the networked digital world that we’re building for ourselves, our children, and future generations? These are the highly contested questions we tackle in our recent book, Re-Engineering Humanity.

Our contribution here is to provide three conceptual tools to help people grapple with these difficult questions. First, we discuss humanity, distinguishing between what it means to be human and what matters about being human. We argue that humanity is a shared resource consisting of intergenerational ideals and commitments. Second, we discuss Robert Nozick’s famous thought experiment: the Experience Machine. The scenario raises fundamental questions about the good life and does so from an individualistic perspective. Finally, we discuss our thought experiment: the Experience Machine n.0. This scenario also raises fundamental questions about the good life, but it does so from an interconnected social perspective that emphasizes how world building engineers humanity by shaping the possible lives of others, including future generations.

We present this analysis to help readers make better sense of their own lives and the supposedly smart techno-social systems that exert ever-greater influence on how we think, act, and relate to one another and our environment. The main position we defend is that (i) different views of the good life and their underlying conceptions of what it means to be human are justifiable and (ii) a commitment to pluralism requires building worlds that allow people to pursue diverse paths towards their conceptions of flourishing. Endorsing (ii) requires being committed ethically and politically to two related ideals for freedom: freedom to be off (in the “not always on” sense) and freedom from engineered determinism. Each ideal entails positive and negative libertiesFootnote 1 that are contingent on the dynamic relationships among individuals and between individuals and techno-social environments. We’re asking for degrees of freedom engineered into our lived-in environments and thus our lives.

Introduction

Life can be understood as a role-playing game, with the earth as our experience machine. For better or worse, humans haven’t optimized the planet to give us all of the experiences we desire. But this situation can change. Our roles, desires, and capabilities are engineered and re-engineered, more than we typically appreciate.

Let’s unpack the gaming metaphor. Popular role-playing games allow players to create their own characters. In Dungeons and Dragons, players roll dice and assign points to different attributes, such as Strength, Intelligence, Wisdom, Dexterity, Constitution, and Charisma. They also select their race. If players want to stick with what they know, they can remain Human. Or, if they’re feeling more adventurous, they can try being an Elf or a Dwarf. That’s not all. Players also get to choose their alignment, such as Good, Neutral, or Evil. And they decide on a class. Fighters, Clerics, Mages, and Thieves are all popular options.

Many video games follow similar procedures. Sometimes, players are given the option of using a pre-generated character that allocates a fixed number of points to a variety of attributes. Sometimes, they can create their own with a character generation screen. In this case, players choose how to spread points among the attributes. In both Dungeons and Dragons and similar video games, players can improve their attributes, usually by performing tasks and gaining experience in the game. The built environment – the rules, story lines, maps, algorithms, and so much more – constrains and thus shapes players’ lives.
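To make the mechanics concrete, here is a minimal sketch in Python. It is not from the chapter; all names (Character, POINT_POOL, gain_experience) and the 27-point budget are illustrative assumptions. It shows what both paragraphs describe: a character starts from a fixed baseline, spends a fixed budget of points across attributes, and later improves attributes by gaining experience.

import random

# The six classic attributes and a hypothetical point-buy budget.
ATTRIBUTES = ["Strength", "Intelligence", "Wisdom", "Dexterity",
              "Constitution", "Charisma"]
POINT_POOL = 27  # assumed budget; real games vary

class Character:
    def __init__(self, race="Human", alignment="Neutral", char_class="Fighter"):
        self.race = race
        self.alignment = alignment
        self.char_class = char_class
        # Every character starts from the same baseline scores...
        self.attributes = {name: 8 for name in ATTRIBUTES}
        # ...and spends a fixed budget, here allocated at random;
        # a player would choose deliberately.
        for _ in range(POINT_POOL):
            self.attributes[random.choice(ATTRIBUTES)] += 1

    def gain_experience(self, attribute):
        # Improve an attribute by performing tasks in the game world.
        self.attributes[attribute] += 1

pc = Character(race="Elf", alignment="Good", char_class="Mage")
pc.gain_experience("Intelligence")
print(pc.race, pc.char_class, pc.attributes)

In this toy version, the fixed baseline and budget play the role of the pre-generated starting point, while the rules governing spending and improvement stand in for the built environment that constrains players’ lives.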

These fictional narratives tell us something important about how we see and, more importantly, how we imagine ourselves as human beings. Humanity is represented as a collection of basic characteristics that each person starts with. You might believe a divine spark pre-generates your “character.” Or you might believe that evolutionary biology plays this role. You could believe both of these things, or something else altogether, since you are largely a product of social conditioning. Your cultural background, your religious affiliation (or lack thereof), your political orientation, and all of the other features that make you who you are, influence which characteristics you value. You might believe that some characteristics are more essential than others. If given the opportunity, perhaps you’d allocate more “points” to autonomy than sociality, or maybe it would be the other way around.

Regardless of your perspective on how human “characters” are generated and the relative weight of various characteristics that they can be constituted by, the gaming analogy usefully emphasizes the importance of identifying basic characteristics that constrain, define, and inspire us to be more, and recognizing how the built environment shapes them. Crucially, we have opportunities to shape ourselves, and these opportunities emerge in and through our built worlds.

In Re-Engineering Humanity, we closely examine these relationships. How do we engineer ourselves through the techno-social world we build? How does techno-social engineering impact important aspects of our humanity? We do not attempt to identify everything important about being human. Instead, we provide a framework for identifying and evaluating techno-social engineering of core capabilities that distinguish humans from an idealized construct that we call “simple machines.” Other baselines are possible, for example, other species. We leave consideration of alternatives for future work. For now, keep in mind that simple machines are programmed and fully predictable. Humans generally are not. If humans consistently behave like simple machines, however, we might need to evaluate the techno-social environment.

We examine some core human capabilities in depth, including commonsensical thinking, sociality, and free will. We touch lightly on others, such as how our sensory capabilities mediate how we look at and relate to others as well as the physical environment. In an extended discussion of transformative tools, we highlight the power of imagination, language, and the collective construction of shared ideals, institutions, and reality. These core capabilities and ideals – and thus humanity – may be at risk as supposedly smart techno-social systems spread.

A world in which engineered determinism governs is a world in which fully predictable and programmable people perform rather than live their lives. Such a world would be tragic. People living in it could be described as human and still would qualify as homo sapiens. Nonetheless, they would have a thin normative status as human beings because much of what matters about being human would be lost.

Being Human and What Matters about Being Human

Many criticize technology as dehumanizing, especially in recent decades with the widespread adoption of computers, the Internet, and, more recently, smartphones. It is difficult, however, to know when a line has been crossed, when the techno-social engineering has gone too far, when something meaningful has been lost. Do we know when technology replaces or diminishes our humanity? Can we detect when this happens? To begin to answer these questions, we would have to know what constitutes our humanity, what makes us human, and what matters about being human. We need to understand and appreciate our humanity if we are to preserve, protect, and sustain it for ourselves, our children, and future generations.

Many have said that humanity can be taken away through slavery as well as its authoritarian political analogues, like totalitarianism. Psychologists label certain practices, such as confining people to pointless tasks and subjecting them to deindividuation, as dehumanizing. On this view, humanity can be lost, fully or in increments, through partial deprivations or deteriorations.

Others, however, challenge the notion that one’s humanity can ever be lost or taken away. They argue that one’s humanity persists even when it is not acknowledged or respected. The slave is and always will be human, and thus, her humanity cannot be taken away. What those who support slavery do is fail to acknowledge and respect her humanity. While this is a reasonable perspective, we don’t adopt it; the perspective doesn’t adequately distinguish (i) being human from (ii) having access to, possessing, and sharing in humanity. We agree that the slave is and always will be human, but we don’t think that means her humanity cannot be taken away. To elaborate, we’ll say more about different conceptions of being human (descriptive) and what matters about being human (normative).

But first let us be clear: We are committed to what, over time, has come to be known as the Kantian rule on human dignity: “All human beings are worthy of respect and deserve to never be treated exclusively as a means to an end.” Some might argue that our suggestion that slavery deprives the slave of her humanity means that the slave is no longer a human being and consequently no longer worthy of respect. Similarly, they might argue that since a newborn baby lacks various capabilities that we (elsewhere) identify as potentially essential components of humanity, the baby would not be a full-fledged human being worthy of respect. These arguments fundamentally misconstrue our approach. The slave and the newborn are and always will be human beings worthy of respect. The Kantian rule applies universally to all human beings, regardless of whether they have access to, possess, and/or share fully in the blessings of humanity.Footnote 2

It’s easy to believe that the meaning of “humanity” is simple, intuitive, and unproblematic. You know it when you see it. At least, it seems that way when we look in the mirror or talk to our children. We are human, and so humanity must be what we are and what those who are like us are. Unfortunately, this commonsense view lulls us into believing that what matters about being human – what’s special and important and worth protecting and cherishing – is whatever we happen to be in the present context. In short: What is = what ought to be. Our humanity is taken as a given, stable and safe, as if it’s a persistent and seemingly inevitable and natural state of affairs. Proponents of this view risk profound ignorance about history. This view obscures what the present generation has inherited from past generations. It underplays incredible cultural variations in the present generation. It also turns a blind eye to how our actions in the present affect future generations. And, frankly, it allows a lack of imagination about possible futures for humanity and the worlds we’re building.

Some would say that what it means to be human can and should be described biologically in terms of what differentiates us from other species. On this view, we identify the distinguishable, evolved characteristics and capabilities of homo sapiens, the species of humans that outlasted the rest.Footnote 3 For example, in contrast with all other known species, only homo sapiens evolved the complex cognitive and social capabilities needed for widely shared language, fiction (e.g., myth, imagined realities), and social institutions (e.g., trust and rules) that scale beyond small close-knit groups (~n = 150).Footnote 4 These basic capabilities and social infrastructure enabled humans to conquer the earth and reconstruct the environment within which we evolve.

The descriptive, biological approach has the advantage of being scientific and revealing a continuum and set of functions that relate humans with other species as well as our natural environment. This approach has its limits, however. Biology doesn’t explain everything that’s important about humanity. For example, biology can’t account for the complex and nuanced ways we relate to and evolve within our reconstructed, built environment. Biology also doesn’t fully explain the tools we choose to build and use, much less whether, how, why, or when we should engineer ourselves through our techno-social tools. To more fully understand what matters about being human, how we engineer ourselves and future generations, and how to conceptualize humanity as a shared resource, we need to move beyond evolutionary biology.

After all, we may have evolved certain capabilities that enable survival and our rise to the top of the food chain.Footnote 5 However, we don’t necessarily value all of those capabilities as central expressions of who we are. Precisely because we have evolved to the point where we can shape ourselves and our society, several philosophical questions have arisen. How should we exercise such power? What about us should we sustain and cultivate? What should we let go? Who should we aspire to be? How should we engineer ourselves? What type of society should we build and sustain?

Crucially, human beings can contemplate and act on such questions only because of the various capabilities we’ve gained through evolution and practiced, honed, developed, and sustained collectively. Evolution is necessary but not sufficient. What is also necessary, and this is both controversial and incredibly important, is our built world, engineered with our techno-social tools to sustain the basic capabilities that allow us to flourish individually and collectively within and across generations.Footnote 6

According to our approach, (1) what meaningfully distinguishes homo sapiens from other species is our capability to imagine, conceptualize, and engineer ourselves and our environment; and, (2) what matters about being human is how we exercise such power over generations to collectively produce, cultivate, and sustain shared normative conceptions of humanity.

Humanity can thus be understood as a set of ideals about who we are and aspire to be. These ideals are valuable, intangible resources, particularly when shared, acted on, and reflected in our most sacred institutions as shared commitments. Ultimately, we might say that humanity, as both a normative concept and a collectively produced and shared resource (or set of resources), stems from the answers we give to the following fundamental questions:

  • Who are we?

  • What sort of people do we aspire to be?Footnote 7

  • What values and capabilities do we possess and commit ourselves to sustain?

  • What sort of society do we want to build and sustain?

  • What obligations do we owe to past, present, and future generations? And how should such obligations shape the technological and social institutions we build and sustain?

We – as societies, as communities, as generations, as families, as individuals – answer these constitutional questions directly and indirectly through our actions and the cultures, institutions, infrastructures, and environments we build and sustain.

Suppose we describe states of human affairs as the sum of who we currently are as a group, how we see ourselves, and who we want to be.Footnote 8 This description can include a set of characteristics and capabilities,Footnote 9 some of which we possess and some of which we aspire to possess. People with different normative conceptions (or value systems) might disagree about what characteristics and capabilities ought to be in the set, how to prioritize them, and how we sustain them through social institutions.Footnote 10 Such disagreement and diversity produce and are products of different cultures.

Despite such disagreement and diversity, there are some widely shared core ideals, for example, as reflected in the Universal Declaration of Human Rights.Footnote 11 These multinational, macro-level normative commitments answer some of the most fundamental constitutional questions about who we are and aspire to be collectively. International human rights laws and institutions create a global community committed to cross-cultural standards and moral floors.Footnote 12 These and other political processes enable but do not guarantee moral progress over time and across generations and cultures.Footnote 13

Ideals don’t become permanent, true, or worth preserving just because lots of people endorse them. They may change and be supplemented with other ideals that vary by culture, as the history of slavery taught us.Footnote 14 But across cultures and generations, human beings have exercised our capabilities to conceptualize and engineer ourselves and our environment, to build and sustain a collective heritage, which we characterize as nothing less than humanity itself.

Preserving the “fundamental blessings” of humanity is the most important constitutional commitment that unites cultures across generations.Footnote 15 In his Lyceum Address of 1838, Abraham Lincoln recognized that the “fundamental blessings” passed on from generation to generation extend beyond the blessings of the Earth to include the blessings of society – the communal heritage of law, political institutions, and fundamental rights of liberty and equality.Footnote 16 Lincoln reminded his generation, as his words ought to remind us today, that the fundamental resources on which any society depends include the blessings bestowed on any present generation by the sacrifices of its ancestors. Lincoln’s speech, like the Gettysburg Address,Footnote 17 offers a powerful vision of a transgenerational social contract firmly rooted in equity. Each generation inherits a wealth of natural and communal resources. In return for this boon, it’s obligated to transmit these resources “to the latest generation that fate shall permit the world to know.”Footnote 18 This duty to transmit a legacy to the future reverberates in many cultures. Lincoln’s speech implicitly harkens back to the Athenian Ephebic Oath, by which men of ancient Athens swore to “transmit my fatherland not diminished [b]ut greater and better than before.”Footnote 19 The intergenerational moral obligation is rooted in a traditional conception of equity, akin to the repudiation of unjust enrichment. The present generation is morally bound to perform its duty to transmit because its own welfare and humanity have been enriched by access to and use of the resources passed on to it. To accept the benefits without satisfying the attendant duty would constitute enrichment at the expense of future generations.Footnote 20

Humanity, conceived of normatively as a shared set of ideals reflected in us and our built world of imagined realities, institutions, infrastructures, and environments, is at risk of deterioration by pervasive techno-social engineering. We focus on specific forms of techno-social engineering that affect the basic capabilities that enable us to ask and participate in answering fundamental questions about who we are and aspire to be, individually and collectively. Thus, we consider thinking capacities, the ability to socialize and relate to each other, free will, autonomy, and agency.

Some may disagree with our choices of capabilities to examine. They may choose to examine others. The bottom line is that across cultures and generations, humans have engineered themselves and their built environments to sustain these and other core capabilities. In our view, they are part of our shared heritage, our humanity. And again, they are at risk of being whittled away through rampant techno-social engineering driven by many different forces and logics. Taylorism has been extended and fetishized: computational power and the allure of ever more powerful intelligent control systems promise tremendous gains in efficiency and productivity along with the convenience and happiness of optimized lives. But at what cost?

Optimized Life on the Experience Machine

Over forty years ago, Robert Nozick wondered whether he or anyone else would choose to plug into a hypothetical “experience machine” that could convincingly simulate any desired experience. In the blink of an eye, the experience machine would let you take on the role of a renowned novelist, a caring father, an ascetic saint, or any one of myriad other possibilities, like rock star, brilliant scientist, or world-class athlete. Nozick’s scenario offers a choice: plug into a virtual reality machine that guarantees a “lifetime of bliss.”Footnote 21

If you were presented with the opportunity, would you choose to plug into the experience machine?

We don’t claim to know the right answer for you. People have different intuitions. This classic thought experiment prompts imagination and deliberation about one’s basic conception of a good life. Is there more to life – or better yet, the good life – than a person’s subjective experience of happiness? Or, as Nozick put it, than how one’s life feels from the inside?

For hardcore hedonists, the decision is straightforward: Plug in. It guarantees optimal happiness, the highest aggregation of moment-by-moment positive feelings. The movie The Matrix toyed with a more reflective version of the thought experiment when a character named Cipher announces that he understands he’s living in a simulation but still prefers that world to alternatives where he’d be less happy. “I know this steak doesn’t exist. I know when I put it in my mouth the Matrix is telling my brain that it is juicy and delicious. After nine years you know what I realize? Ignorance is bliss.”Footnote 22

Most people would not choose to plug in.Footnote 23 And that’s because most people are not hardcore hedonists.Footnote 24 Of course, happiness matters for most people and is an important component of a life lived well. But many other things matter too, and they are not reducible to or commensurable with happiness. Pursuing pleasure exclusively, as the ultimate end, would lead to a rather shallow life.Footnote 25

Nozick contended that it would be a mistake to plug into the experience machine.Footnote 26 His reservations revolved around a core conviction that many people share today: “[R]eality has intrinsic prudential value.”Footnote 27 No matter how realistic a simulated world feels, it lacks the features of an independent reality conducive to human flourishing. It seems the epitome of a dehumanizing environment. It’s a fully engineered reality wherein simulated people lack free will, simulated things bend to our will and desires, history has no weight, interdependence doesn’t exist, and even the laws of physics can be broken. In such a programmed world, our actions wouldn’t be meaningful. Our accomplishments, including caring for others, would be hollow. Our relationships would be fake. And at least some of our basic human capabilities would atrophy. Fortunately, we wouldn’t realize any of this while living on the machine because we’d be programmed to be oblivious to such concerns. And we’d feel really good.

There’s a related concern that plugging in would lead to an inauthentic life determined by others. A life lived on the experience machine would not be one’s own. Too much of the life lived and happiness experienced would be determined by the machine, or, to be more precise, the engineers who created the machine. This concern has weight for those who insist that the means matter when evaluating the quality of one’s life. On this view, a well-lived life requires some human agency and exercise of free will. By contrast, a fully programmed and determined life on the machine is not a well-lived life, regardless of how much pleasure the machine supplies.

It’s important to appreciate that these considerations are perfectly legitimate products of the thought experiment. By forcing someone to choose upfront whether to plug in, the thought experiment prompts deliberation. Some argue that critical resistance to plugging in is rooted in the framing effects of the narrative. For example, status quo bias may lead people to prefer what they currently experience over the unknown life on a machine.Footnote 28 Or people can question whether the experience machine will actually work: What happens when the power goes out? Will the unplugged be able to take advantage of those who are plugged in? These arguments are valid, but too easily taken as the final word. Framing effects, cognitive biases, and other related concerns can and should be dealt with by engaging the thought experiment thoughtfully, rather than simply dismissing it. Behavioral psychology offers excellent grounds for criticizing and improving thought experiments but does not provide answers to the underlying normative questions.

Others, such as economist Richard Layard and philosopher Joseph Mendola, criticize the thought experiment for being “wildly unrealistic” and pumping unfair intuitions with “unfamiliar gadgetry which invokes our fear of the unfamiliar.”Footnote 29 These criticisms might have had purchase a few decades ago. But they seem wildly exaggerated today. Modern technology and culture make the experience machine scenario reasonably familiar and sufficiently realistic for people to consider. Regardless, these objections to the thought experiment hardly provide a better explanation for people’s preferences than the claim that reality has intrinsic prudential value.

In our view, a significant reason why many people would choose not to plug into the experience machine is that they believe in free will and value having it. To plug into the experience machine would be to accept engineered determinism and know, even if only at the moment of deliberation about whether to plug in, that any subsequent experience of free will would be illusory. Many people highly value free will. Arguably, it is one of the fundamental blessings of humanity, collectively cultivated and sustained across generations through the cultures, institutions, infrastructures, and environments we build and sustain.

But what if the promise of optimal happiness on the machine makes the thought experiment too outlandish? The utopian allure might exacerbate the framing effects, or it might trigger doubts, fears, or even guilt.

Suppose we adjust the thought experiment so that life on or off the machine would be the same in terms of aggregate happiness.Footnote 30 Suppose a person’s life on the experience machine is assumed to be identical to the life they’d live off the machine.Footnote 31 If the life Jane experiences on and off the machine is identical from her subjective perspective, hedonists argue that Jane would fare equally well on and off the machine. Her well-being – the goodness of her life – is the same. And so, they argue, she should be indifferent to choosing one life over the other. She should be willing to flip a coin.

What about you? Would you be willing to flip a coin? Heads would mean you plug in. And tails would mean you do not plug in. Keep in mind that life on and off the machine is guaranteed to be identical.

If you are truly indifferent, you should be willing to flip a coin and accept the outcome either way. Perhaps some people would do so. We suspect most people would not actually be willing to flip a coin. Further, it’s hard to imagine that anyone would choose to plug into the experience machine if it promised an identical life. Most people would choose not to plug in. Some might argue that unwillingness to flip a coin should be construed as an indication of irrational biases, whether the status quo bias or some irrational fear that fights the hypothetical. But we think a much better explanation is that they would not be indifferent to the two lives. They would choose to maintain the status quo and for good reason. They value free will, or, at least, the illusion of free will. In other words, they would take a pragmatic approach and wager in favor of free will rather than plug into the experience machine, which would eliminate free will, and, at best, offer an engineered illusion of it.Footnote 32 Thus, what people fear about the experience machine is precisely what Nozick highlighted: The loss of authenticity in a world of engineered determinism.

Let’s flip the scenario around. Suppose you are informed that you’re currently plugged into an experience machine and that you’ve been selected for a one-time opportunity to unplug and experience an authentic life. Unlike Cipher in The Matrix, your life unplugged is guaranteed to be identical to the plugged-in life you’re currently living.

What would you choose if you were given the opportunity to (1) unplug, (2) stay plugged in, or (3) flip a coin such that heads would mean you unplug and tails would mean you stay plugged in?

It’s a little more difficult to predict what people would do, although for somewhat surprising reasons. We don’t think many people would choose to flip a coin. Most people are not hardcore hedonists; they wouldn’t be indifferent to the two lives (worlds). Initially, one might think people would unplug and choose an authentic life. But most people would not do so.Footnote 33 Some have suggested that this demonstrates a cognitive bias infecting the thought experiment. A better explanation is that in the reality we know and experience, most people would take a pragmatic approach and wager in favor of free will, meaning their existing belief in free will. In a sense, this is a reaction that fights the hypothetical. But, importantly, it only fights the hypothetical claim about our reality being fully engineered and determined. Absent a high level of proof that our current reality is fully determined (naturally or by engineering), wagering in favor of free will remains the best strategy.

The experience machine thought experiment usefully highlights competing value systems and prompts deliberation about what matters to you about your life. Hardcore hedonists would choose to plug in; most other folks would not. There are many reasons people might choose not to plug in. But an important reason is that a programmed world governed by engineered determinism is not conducive to human flourishing. At the most basic level, life on the experience machine doesn’t jibe with how most people answer any of the basic constitutional questions. We don’t see ourselves, our lives, or our reality as fully programmed or determined, even though we understand that much of what we do and who we are is the product of various factors outside of our control. Neither do we aspire to such a fate.

Do we aspire to build such a world for our children and future generations? Notably, this is a moral question outside the scope of Nozick’s thought experiment. To see why, consider the importance of the cord and plug. The decision about whether or not to plug into the experience machine served a few purposes:

  • It kept the thought experiment focused on an individual decision about an individual life.

  • It prompted deliberation by an individual about that individual’s conception of a good life for that individual, which is often referred to as the individual’s well-being or welfare.

  • It eliminated concerns about paternalism, social engineering, or social welfare.

  • It implied the existence of multiple worlds – at least, worlds on and off the machine.

The myopia of the thought experiment limits its relevance, however, when we turn our attention to social arrangements and world building. Yet that is where we must go if we are to answer the fundamental questions about the world we’re building for posterity.

Experience Machine n.0 – Life in the Machine/World

Let’s eliminate the plug. Consider a different version of the experience machine, which we refer to as the Experience Machine n.0. Imagine a ubiquitous smart techno-social environment that spans the Earth and optimizes the planet to provide human inhabitants optimal happiness. Rather than ask you whether you’d plug yourself in, we’d like for you to consider whether such a machine/world should be built.

If you were given the ultimate decision-making authority, would you build such a machine/world?

Moving from a decision about whether (i) to plug yourself into the machine to (ii) building a machine/world for all humans complicates the analysis and shifts the focus from individual well-being to broader moral and social concerns. To make the situation less abstract, let’s be clear about what we mean by optimal happiness. Assume that the Experience Machine n.0 supplies all human beings on Earth with maximum happiness, measured moment-by-moment for each individual and aggregated over a lifetime, at minimal social cost. (Our arguments hold if we replace happiness with pleasure, positive affect, or other positive mental states.)
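As an illustrative formalization only (ours, not the chapter’s; the symbols h, C, and λ are assumptions introduced here), the optimization criterion can be written as choosing the techno-social arrangement π that maximizes aggregate lifetime happiness net of social cost:

\[ \pi^{*} = \arg\max_{\pi} \; \sum_{i=1}^{N} \int_{0}^{T_i} h_i^{\pi}(t)\, dt \;-\; \lambda\, C(\pi) \]

where h_i^π(t) is individual i’s moment-by-moment happiness under arrangement π over a lifetime of length T_i, C(π) is the social cost of building and running the machine/world, and λ prices that cost against happiness.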

We must admit that this sounds pretty darn good. Optimal happiness appears to be an end worth pursuing. But what about the means? Do they matter?

What could the Experience Machine n.0 look like, and how might we build such a machine/world? Extrapolating from the present to the near future, we envision that the Experience Machine n.0 would comprise interconnected sensor networks and data-driven automation of socio-technical systems around, about, on, and in human beings. Imagine that within the next few decades, the following occurs. Large multinational companiesFootnote 34 gradually build and connect smart techno-social environments that actually deliver on their promises. The scope of deployment expands to the point where there is seamless interconnection and integration across all environments within which humans live. The normative agenda executed throughout all this construction and deployment is optimal efficiency, productivity, and happiness.

Guided by the optimization criterion of maximum happiness at minimal social cost, the Experience Machine n.0 necessarily would engineer human beings. After all, human beings are inefficient and costly to sustain. Optimization would entail minimization of various costs associated with humans being human. For example, our bodily and mental engagement with the physical world entails logistical, navigational, and various other transaction costs. Outsourcing to intelligent control systems would minimize these costs. Making decisions, experimenting, and learning (among other mental processes) are also costly endeavors. Again, the optimizing logic would press toward minimizing and potentially eliminating these costs. Interacting with one another, coordinating behaviors, developing relationships, and many other aspects of interdependent human relations entail transaction costs to be minimized.

Finally, there is a subtler type of techno-social engineering prevalent in the Experience Machine n.0. Since what makes us happy is in large part contingent on environmental conditions and experiences, and since those factors are fully determined within the machine/world, optimization also would entail engineered tastes, beliefs, preferences, and other factors that feed into our affective feelings of pleasure.

Bluntly, the cheapest way to make billions of human beings perfectly happy – particularly when using the sorts of technological means we’re imagining – is to set the bar very low. In this case, the techno-social system need only meet or even barely surpass expectations. As hedonists know and often are prone to emphasize, people adapt their beliefs, preferences, and expectations to their conditions and, subsequently, their corresponding happiness levels typically adjust.Footnote 35 So, the goal might very well be to shape beliefs, preferences, and expectations in a manner that makes supplying happiness as cheap and easy as possible. At the end of the day, optimal happiness would boil down to satiation of engineered will.

There are many possibilities, however. Perhaps machine learning would settle on different equilibria. We can imagine extremely dystopian science fiction scenarios, such as WALL-E, where humans are dumb, satiated fools. But we also can imagine scenarios where the machine/world manufactures higher happiness aggregates through different types of programs. John Stuart Mill famously argued that it is “better to be Socrates dissatisfied than a fool satisfied.”Footnote 36 Perhaps the Experience Machine n.0 would produce a world filled with happy sages. Cheap engineered bliss need not exclude higher pleasures of the sort Mill defended. Higher pleasures often are cultivated and learned, and cultivation and learning entail costs.Footnote 37 On one hand, minimizing these costs might lead to outsourcing or pushing toward engineering wills satiated with lower pleasures. On the other hand, cultivating certain sets of tastes that correspond to higher pleasures might be worth the cost if they produce even more net pleasure. Who knows? Frankly, it’s impossible to know exactly how the optimization would work out. But that’s really beside the point. One way or another, techno-social systems would determine whether and what to cultivate, and who and what we are. What makes humans happy would be predetermined rather than produced through the exercise of free will, experimentation, and self-authorship.

Regardless of the scenario, our thought experiment raises a fundamental normative question: Would maximizing human happiness at minimal social cost through the Experience Machine n.0 justify forcing everyone to live a fully determined life?

We strongly suspect few people would answer in the affirmative, at least on reflection.Footnote 38 Think about what it would mean. Building the Experience Machine n.0 is functionally equivalent to forcing everyone to plug into Nozick’s Experience Machine. Even the most hardcore hedonist would hesitate before imposing a machine life on others. Such paternalism conflicts directly with longstanding ideals concerning free will, autonomy, and agency, which are shared by many cultures and widely regarded as fundamental blessings of humanity.

Yet hesitation does not mean rejection. We posed this thought experiment to Peter Singer, a renowned philosopher and incredibly thoughtful and generous person, and after some discussion and deliberation, he replied: “I’m a sufficiently hardcore hedonist to think that democracy is a means to an end rather than an end in itself. If we can really imagine achieving optimum happiness for all sentient beings, forever, that would be a greater value than democracy, which may be better than any other system of government, but so far hasn’t got anywhere close to producing optimum happiness for all sentient beings.” Optimal happiness for everyone on Earth is an end worth considering carefully, particularly in light of how much misery and suffering exists in our world.

The Experience Machine n.0 poses a threat to the liberal democratic ideal that people should be generally free to choose their own ends with minimal interference. Commitments to forms of this ideal have proved to be the best way for diverse people to live together, have real opportunities to experiment with different ways of living, and determine, over time and with the accrual of experience, which ways to live well. A world that makes it very hard or impossible to opt out of Experience Machine n.0 would violate this ideal by interfering too strongly with our capacity to freely set ends for ourselves. Such interference is morally wrong and should be politically resisted. Although different in some respects, the spread of hedonism through neo-Taylorism amped up by interconnected and ubiquitous smart devices functions much like an authoritarian government imposing a mandatory state religion.

But if longstanding ideals are socially constructed and thus contingent on techno-social engineering systems, would building the Experience Machine n.0 be defensible so long as it was done gradually? Deployed and integrated incrementally over decades, people could be gradually prepared for and conditioned to accept this brave new world. Engineered beliefs could pave the slippery sloped path to the Experience Machine n.0. This is why we’ve emphasized the importance of asking these questions at a time when it’s still possible to recognize and evaluate the path we’re on.

Some would argue that any resistance people currently have toward building the machine/world is itself the product of techno-social engineering of beliefs, preferences, and values by past generations. This is true. The present generation has inherited the fundamental blessings of the past, including shared ideals about who we are and who we aspire to be. But this point doesn’t undermine the inquiry or make it any easier to answer the difficult and pressing questions highlighted by our thought experiment and more thoroughly presented throughout our book.

Finally, some will argue that like Nozick’s version, the Experience Machine n.0 thought experiment is wildly unrealistic and nothing more than an unfair intuition pump. They also will argue that the scenario is scary and framed in a manner that unfairly triggers psychological biases that distort rational analysis. These are fair points. But they are easily overstated and taken too far. Again, we use the thought experiment to prompt deliberation, analysis, and discussion. It is not a theoretical proof or empirical experiment.

While we truly hope our imagined machine/world is wildly unrealistic, there are various reasons to believe the slippery sloped path we’re on is headed in that direction. Regardless of whether we reach the end-state, we need to think more deeply about the world we’re building, both because of what it means for our children and future generations and because of how it affects us as we proceed down the path.

Twenty-first-century techno-social engineering deeply affects how we think, feel, and interact with one another. Outsourcing so many of these functions to techno-social systems can’t and shouldn’t be assumed to be in our interest, neutral, or a mere natural extension of ourselves. We need to be aware of atrophying capabilities, mind control, and the gradual loss of human dignity as more aspects of our lives are determined by smart techno-social systems.Footnote 39

We are not fully predictable and programmable machines. In all likelihood, we never will be. But that is no reason to become complacent. Much of what matters about being human can be lost in partial deprivations as we march down the slippery sloped path we’re on.

Pluralism as a Constraint on Techno-Social Engineering of Humanity

The transition from Experience Machine to Experience Machine n.0 marked a significant shift in focus, from evaluating individual well-being to evaluating humanity in a social world. We hope that on reflection, most people, including hardcore hedonists, would decline to build Experience Machine n.0 because they recognize that there is more that matters about being human than subjective feelings of happiness.

We can build much better worlds that are conducive to human flourishing and a plurality of values.Footnote 40 Free will and agency matter, above and beyond how those capabilities cash out in terms of subjectively experienced well-being. But in the worst case, even if most people say they would choose to build the Experience Machine n.0, that empirical attestation of their values does not provide sufficient justification. This is where liberalism and paternalism clash.Footnote 41 A confounding difficulty with liberalism and relying on individuals’ existing preferences and beliefs is that such preferences and beliefs are contingent on the techno-social environment within which they’ve lived their lives. It’s hard to fully credit preferences and beliefs that significantly discount the constitutional value of free will and agency. After all, deferring to individuals’ choices only makes sense if the individuals themselves have free will and practical agency, and if they do, then it seems morally wrong to deny the same capabilities to future generations, to deny them access to the fundamental blessings of humanity passed on to the present generation.

In the end, a commitment to pluralism demands freedoms engineered into our built environments. Accordingly, one of the most important constitutional questions of the twenty-first century is how to sustain the freedom to be off, to be free from techno-social engineering, to live and develop within underdetermined techno-social environments. A precursor to implementing any potential responses is recognizing the threat that engineered determinism poses to humanity. The siren call of data-driven, supposedly smart techno-social systems promises cheap bliss – a world of optimal happiness – but it ignores the carnage of humanity lost.

8 Caveat Usor: Surveillance Capitalism as Epistemic Inequality

Shoshana Zuboff

Somebody take me home
While I still believe
While the pines are still the pines
And there’s something left of me
Philip Roebuck
“Somebody Take Me Home”

The world suffers under a dictatorship of no alternatives. Although ideas all by themselves are powerless to overthrow this dictatorship, we cannot overthrow it without ideas.

Roberto Unger, The Dictatorship of No Alternatives

I. Who Knows?

On August 9, 2011, the New York Times reported that the Spanish Data Protection Agency had chosen to champion the claims of ninety ordinary citizens who were determined to preserve inherited meaning for a world bent on change at the speed of light.Footnote 1 In the name of these citizens’ “right to be forgotten,” the Agency ordered Google to stop indexing contested links pertaining to their pasts.

Each had a unique complaint. One had been terrorized by her former husband and didn’t want him to find her address. A middle-aged woman was embarrassed by an old arrest from her days as a university student. An attorney, Mario Costeja González, had suffered the foreclosure of his home years earlier. Although the matter had long been resolved, a Google search of his name continued to deliver links to the foreclosure notice, which, he argued, damaged his reputation.

The Agency concluded that citizens had the right to request the removal of links, and it ordered Google to stop indexing the information and to remove existing links to the original sources. Google had unilaterally undertaken to change the rules of the information life cycle when it decided to crawl, index, and make accessible personal details across the World Wide Web without asking anyone’s permission. As a result, information that would normally age had been transformed into a state of perpetual youth and highlighted in the foreground of each person’s digital identity. After all, the Spanish Data Protection Agency reasoned, not all information is worthy of immortality. Some information should be forgotten, because that is only human.

Unsurprisingly, Google challenged the Agency’s order before the Spanish High Court, which selected Mario Costeja González’s case for referral to the Court of Justice of the European Union. On May 13, 2014, after lengthy and dramatic deliberations, the Court of Justice announced its decision to assert the “right to be forgotten” as a fundamental principle of European law.Footnote 2

From the beginning, the case was framed in a peculiar way, pitting “privacy” against an indeterminate, open-ended “right to know.” As one expert told the Times: “Europe sees the need to balance freedom of speech and the right to know against a person’s right to privacy or dignity.”Footnote 3 Three years later, when the Court of Justice ruled in favor of Costeja González and his right to be forgotten, Google’s then-CEO Eric Schmidt repeated that odd juxtaposition. Speaking to his company’s shareholders, he characterized the Court’s decision as a “balance that was struck wrong” in the “collision between a right to be forgotten and a right to know.”Footnote 4

In fact, there was no “balance” that was “struck wrong.” The Court’s decision was not one of “balancing” two conflicting goods, but rather one of redistributing a single good. The conceptual problem here is that the “right to be forgotten” does not stand in opposition to a “right to know.” Rather, it is a “right to know.” The distinction is critical, because it lifts the veil on a political contest over a new domain of fundamental rights: epistemic rights. Such rights confer inalienable entitlements to learning and to knowing. Epistemic rights are the cause of which privacy is the effect. This political contest has been obfuscated to the point of invisibility, despite the fact that its outcome will define the moral and political milieu of our information civilization.

The distribution of epistemic rights determines the degree of epistemic inequality, defined as unequal access to learning imposed by hidden mechanisms of information capture, production, analysis, and control. It is best exemplified in the fast-growing abyss between what people can know and what can be known about them. The new axis of epistemic equality/inequality does not reflect what we can earn but rather what we can learn. It represents a focal shift from ownership of “the means of production” to ownership of “the production of meaning.”

Epistemic equality depends on epistemic justice, the scope of which is delineated by three essential questions that reflect the nested dilemmas of knowledge, authority, and power: What is the distribution of knowledge? What are the sources of authority that legitimate the distribution of knowledge? What is the power that sustains that authority? Put simply, “Who knows?” “Who decides who knows?” “Who decides who decides who knows?” The answers to these questions determine a society’s progress toward epistemic equality.

From this vantage point, one may observe that the real decision faced by the Court of Justice was not one of balancing “privacy” against a “right to know,” but rather one about the just distribution of epistemic rights. It asked, “Who has the right to know about one’s past, and to what degree?” The Court judged whether an inalienable right to learn and to know about an individual’s past adhered primarily to the individual or to Google. What dismayed Schmidt was that the Court rejected Google’s self-authorized claim to a totality of epistemic rights and instead distributed this “right to know about one’s past” in a new pattern.

The Court’s decision answered the three essential questions. First, it privileged the individual with the right to learning and knowledge about one’s own past. Second, it created a juridical “right to be forgotten” that stands as the source of legitimate authority. Third, it was to be Europe’s democratic institutions and their power to govern through the rule of law that would sustain this authority – not the private power of a corporation.

The primacy of epistemic rights as the cause of privacy was implied in Justice William O. Douglas’s 1967 dissenting opinion in the Fourth Amendment case, Warden v. Hayden:

Privacy involves the choice of the individual to disclose or to reveal what he believes, what he thinks, what he possesses…. The individual should have the freedom to select for himself the time and circumstances when he will share his secrets with others and decide the extent of that sharing.Footnote 5

In Douglas’s formulation, privacy is contingent on the individual’s sovereign right to self-knowledge, a right that confers the ability to choose whether to disclose or withhold such knowledge, to what degree, and for what purpose. This choice has been an elemental right throughout most of human history. By “elemental,” I mean to mark a distinction between, on the one hand, tacit rights that are given under the conditions of human existence and, on the other, juridical rights codified in law.

Others have addressed this distinction, and linguistic philosopher John Searle’s “pragmatic considerations of the formulation of rights” are useful here.Footnote 6 Searle argues that elemental conditions of existence are crystallized as formal human rights only at that moment in history when they come under systematic threat. For example, the ability to speak is an elemental right born of an elemental condition. The right to “freedom of expression” is a juridical right, which emerged only when society evolved to such a degree of political complexity that the freedom to express oneself came under threat. Searle observes that speech is not more central to human life than breathing or being able to move one’s body. No one has declared a “right to breathe” or a “right to bodily movement” because these elemental rights have not come under attack and therefore do not require legal codification. What counts as a fundamental human right, Searle argues, is both “historically contingent” and “pragmatic.”

As is the case with all elemental rights, many epistemic rights have not yet been codified into law for the simple reason that it has not been necessary to do so. The epistemic “right to be forgotten,” for example, has always existed as an irreducible element of human experience. In a preliterate world no one needed a legal right to be forgotten when the primary record of the past was memory. One lives, one ages, and memories age too. The past is hazy, fragmented, dispersed, prismatic, and then drifts into some combination of oblivion, stories, and myth.

In traditional society, the past was preserved in ritual; in the modern era, it is preserved in technology. The printing press and widespread literacy, photography, voice recording – each made it easier to convey one’s past to the next generation, chipping away at oblivion. With these inventions, elemental epistemic rights began their migration toward formal codification. For example, Justice Brandeis was moved to formalize the right to privacy, motivated in part by the invasiveness of the newspaperman’s camera, which bestowed an independent life on anyone’s face, far from the elemental rights of the subject before she was framed in the camera’s lens. As if in anticipation of Searle, Brandeis wrote:

That the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection. Political, social, and economic changes entail the recognition of new rights, and the common law, in its eternal youth, grows to meet the demands of society.Footnote 7

In retrospect, Brandeis’s indignation aimed at the journalist’s camera recalls the last stages of an infinitely long age of epistemic innocence that ended decisively over the last three decades, as the “digital tornado” abruptly transformed the conditions of existence for many twenty-first-century humans. In 1986, just 1 percent of the world’s capacity to store information was in digital format; by 2000, 25 percent. The year 2002 was the tipping point, when more information was stored on digital than on analogue storage devices. By 2007, digitalization had exploded to 97 percent, and by 2020 the shift was largely complete.Footnote 8 Information scholar Martin Hilbert and his colleagues observe that even the foundational elements of civilization, including “language, cultural assets, traditions, institutions, rules, and laws … are currently being digitized, and for the first time, explicitly put into visible code,” then returned to society through the filter of “intelligent algorithms” deployed to govern a rapidly multiplying range of commercial, governmental, and social functions.Footnote 9

Google Inc. was a product of, a driving force in, and a beneficiary of this sudden sweeping change. The digitalization of everything produced vast new information territories, planetary in scope, and once celebrated by Eric Schmidt as “the world’s largest ungoverned space.”Footnote 10 By 2002, Google was well on its way to elaborating a new economic logic that I have called surveillance capitalism, whose novel imperatives compelled it to hunt and capture ever more dimensions of once-private experience as raw material for newly invented processes of datafication, production, and sales. Google was not alone. The vast lawless regions of the digital became the landscape in which companies and governments ruthlessly battle for information dominance, reenacting earlier epochs of invasion, conquest, and empire building in the physical world.

In order to achieve their objectives, the leading surveillance capitalists sought to establish unrivaled dominance over the totality of the world’s information now rendered in digital format.Footnote 11 Their complex supply chains require hyperscale operations capable of storing and processing vast data flows. Surveillance capital has built most of the world’s largest computer networks, data centers, populations of servers, undersea transmission cables, advanced microchips, and frontier machine intelligence, igniting an arms race for the 10,000 or so specialists on the planet who know how to coax knowledge from these vast new data continents.Footnote 12

With Google in the lead, the top surveillance capitalists seek to control labor markets in critical expertise including data science and animal research, elbowing out competitors such as start-ups, universities, high schools, municipalities, state and federal government agencies, established corporations in other industries, and less wealthy countries. In 2016, 57 percent of American computer science Ph.D. graduates took jobs in industry, while only 11 percent became tenure-track faculty. With so few teaching faculty, colleges and universities have had to ration computer science enrollments, which has significantly disrupted the transfer of knowledge between generations. It's not just an American problem. In Britain, university administrators contemplate a "missing generation" of data scientists. A Canadian scientist laments, "the power, the expertise, the data are all concentrated in the hands of a few companies."Footnote 13

Under these unprecedented conditions, elemental epistemic rights can no longer be taken for granted. It’s not that such rights are eliminated. Rather, they fall to hidden powers – commandeered, stolen, redistributed, cornered, and hoarded.

Nearly six centuries ago the printing press put the written word into the hands of ordinary people, bypassing the priesthood, rescuing the prayers, and delivering spiritual communion directly into the hands of the prayerful. It was perhaps the first great event in the annals of technological disintermediation, removing the "middleman" in favor of a direct line to the end consumer.

The Internet was welcomed as an equally fearsome force of empowerment: the ultimate disintermediator, amplifying Gutenberg's revolution as it liberated information from the old institutions and distributed it directly to the people. Thanks to the mighty powers of the digital, corporations would no longer decide the music people buy, the news they read, the knowledge they access, or the goods and services they enjoy.

Celebration distracted from a parallel development that moved in stealth just beyond the sightlines as the Internet became a Trojan horse for a novel economics that would eventually infiltrate every aspect of people's lives. The shooting star of disintermediation quickly faded, leaving in its wake a secretive new middleman, surveillance capitalism, which quietly re-mediated the relationship to all things digital. The result is that the Internet is not ungoverned after all. Rather, it is owned and operated by this dark economic logic and wholly subject to its iron laws.

The digital century was to have been democracy's Golden Age. Instead, many societies enter the third decade of the twenty-first century marked by an entirely new form of social inequality that threatens to remake the social order as it unmakes democracy. A new age of epistemic inequality has dawned, in which individuals' inalienable rights to learn and know about their own lives must be codified in law if they are to exist at all.

Unequal knowledge about us produces unequal power over us, turning epistemic inequality into a critical zone of social contest in our time. Twentieth-century industrial society was based on a division of labor, and it followed that the struggle for economic equality would shape the politics of that age. Our digital century shifts society’s coordinates from a division of labor to a division of learning, and it follows that the struggle for epistemic equality will shape the politics of this age.

In redistributing the “right to be forgotten” to individuals, the European Court of Justice declared that Google’s was not to be the last word on the human or the digital future. It asserted that decisive authority must rest with the people, their laws, and their democratic institutions, even in the face of a great private power. It was to be the beginning, not the end, of a bitter struggle over the fundamental rights that will define the digital future.

The remainder of this chapter explores the iron laws of this private power, their consequences for people and the democratic polis, the historical significance of this struggle, and its remedies.

II. What Is Surveillance Capitalism?

It has long been understood that capitalism evolves by claiming things that exist outside of the market dynamic and turning them into market commodities for sale and purchase. In historian Karl Polanyi’s 1944 grand narrative of the “great transformation” to a self-regulating market economy, he described the origins of this translation process in three astonishing and crucial mental inventions that he called “commodity fictions.” The first was that human life could be subordinated to market dynamics and reborn as “labor” to be bought and sold. The second was that nature could be translated into the market and reborn as “land” or “real estate.” The third was that exchange could be reborn as “money.”Footnote 14

Surveillance capitalism originates in an even more startling mental invention, declaring private human experience as free raw material for translation into production and sales. Once private human experience is claimed for the market, it is translated into behavioral data for computational production.

Early on, it was discovered that, unknown to users, even data freely given harbor rich predictive signals, a surplus that is more than what is required for service improvement. It isn’t only what you post online, but whether you use exclamation points or the color saturation of your photos; not just where you walk but the stoop of your shoulders; not just the identity of your face but the emotional states conveyed by your “micro-expressions”; not just what you like but the pattern of likes across engagements. Soon this behavioral surplus was covertly hunted and captured across virtual and real worlds, accessible to the always-on ubiquitous digital architecture that I call “Big Other” and claimed as proprietary data.

Behavioral surplus is ultimately conveyed through complex supply chains of devices, tracking and monitoring software, and ecosystems of apps and companies that specialize in niche data flows captured in secret.Footnote 15 Data flows empty into surveillance capitalists' computational factories, called "artificial intelligence," where they are manufactured into behavioral predictions. A leaked 2018 Facebook document provides some insight into factory operations.Footnote 16 Facebook's "prediction engine" is built on a machine intelligence platform called "FBLearner Flow," which the company describes as its "AI backbone" and the key to "personalized experiences" that deliver "the most relevant content." The machine learning system "ingests trillions of data points every day, trains thousands of models – either offline or in real time – and then deploys them to the server fleet for live predictions." The company explains that, "since its inception, more than a million models have been trained, and our prediction service has grown to make more than 6 million predictions per second."Footnote 17
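For readers who want to see the shape of such a pipeline, the following deliberately toy Python sketch mirrors the pattern the leaked document describes: behavioral events are ingested, a model is trained offline, and the fitted model is deployed to answer live prediction queries. Every feature name and data point here is invented for illustration; nothing reflects Facebook's actual code, features, or infrastructure.

```python
# Hypothetical sketch of a behavioral prediction pipeline:
# ingest events -> train a model offline -> serve live predictions.
# All data and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1. Ingest: each row is one user's behavioral surplus for a day
#    (e.g., session minutes, likes given, late-night activity, photo saturation).
X = rng.random((10_000, 4))
# Synthetic "ground truth": tomorrow's engagement depends on today's behavior.
y = (0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 10_000)) > 0.6

# 2. Train offline: one of the "thousands of models" a platform might maintain.
model = LogisticRegression().fit(X, y)

# 3. Deploy: answer live queries (at platform scale, millions per second).
def predict_engagement(features):
    """Return the predicted probability that this user engages tomorrow."""
    return model.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]

print(predict_engagement([0.9, 0.1, 0.7, 0.4]))  # high activity -> high probability
```

Even this toy makes the asymmetry plain: the "user" supplies the rows, while only the owner of the model reads the predictions.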

Finally, these prediction products are rapidly swept up into the life of the market, traded in newly constituted marketplaces for behavioral predictions: human futures markets. These markets link surveillance capitalists to business customers with a keen interest in the future behavior of current and potential “users” or consumers. Certainty in human affairs is the lifeblood of these markets, where surveillance capitalists compete on the quality of their predictions, which are about individuals but are not for individuals. Surveillance capitalists have grown immensely wealthy from these trading operations, as many companies are eager to lay bets on future human behavior.

Surveillance capitalism was invented as the solution to a financial emergency in the teeth of the dot-com bust, when the fledgling company called Google faced the loss of investor confidence. As pressure mounted, Google's leaders decided to boost ad revenue by using their exclusive access to user data logs, in combination with their already substantial analytical capabilities and computational power, to fabricate predictions of user click-through rates, regarded as a signal of an ad's relevance.

Operationally, this meant that Google would both repurpose its growing cache of behavioral data, now put to work as a behavioral surplus, and develop methods to aggressively seek new sources of behavioral surplus. According to its own scientists’ accounts, the company developed new methods of surplus hunt and capture that were prized for their ability to find data that users intentionally opted to keep private and to infer extensive personal information that users did not or would not provide. This surplus would be analyzed for predictive patterns that could match a specific ad with a specific user. These new operations institutionalized a new logic of accumulation derived from the social relations of the one-way mirror: surveillance. Its mechanisms and methods are carefully engineered to produce user ignorance through obfuscation, undetectability, indecipherability, and misdirection. Success relies on bypassing individual awareness and thus overriding the individual’s rights to decide the privacy of one’s experience and one’s future course of action. Surveillance is essential to the DNA of this market form.
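The core move, matching a specific ad with a specific user on the basis of behavioral surplus, can likewise be suggested with a hypothetical sketch: score every candidate ad against a surplus-derived user profile and serve the one with the highest predicted click-through rate. The features and weights below are fabricated; a production system would learn them from billions of logged clicks.

```python
# Hypothetical sketch: choose the ad with the highest predicted
# click-through rate (CTR) for a given user profile.
# Features and weights are fabricated for illustration.
import math

# Surplus-derived user profile (invented features, scaled 0..1).
user = {"likes_sports": 0.9, "night_owl": 0.2, "recent_travel": 0.7}

# Per-ad weights that a real system would learn from logged clicks.
ads = {
    "running_shoes": {"likes_sports": 1.5, "night_owl": 0.0, "recent_travel": 0.2},
    "sleep_aid":     {"likes_sports": 0.0, "night_owl": 2.0, "recent_travel": 0.0},
    "hotel_deal":    {"likes_sports": 0.1, "night_owl": 0.1, "recent_travel": 1.8},
}

def predicted_ctr(profile, weights, bias=-2.0):
    """Toy logistic model: weighted feature sum squashed to a probability."""
    score = bias + sum(profile[f] * w for f, w in weights.items())
    return 1.0 / (1.0 + math.exp(-score))

best = max(ads, key=lambda ad: predicted_ctr(user, ads[ad]))
print(best, {ad: round(predicted_ctr(user, w), 3) for ad, w in ads.items()})
```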

The elements of this economic logic were discovered, invented, elaborated, and deployed at Google from 2000 to 2004, while held in the strictest secrecy. Only when Google went public in 2004 did the world learn that during that period its revenues increased by 3,590%.Footnote 18 This increase represents the “surveillance dividend,” which raised the bar for attracting investors to the new Internet domain. The shift in the use of behavioral surplus was an historic turning point: the game-changing zero-cost asset that could be diverted from service improvement toward a genuine market exchange. This was not an exchange with “users” but rather with other companies that learned how to profit from low risk bets on users’ future behavior.

Surveillance capitalism migrated to Facebook with Google-turned-Facebook executive Sheryl Sandberg and quickly became the default model of information capitalism, attached to nearly every Internet company, start-up, and app. Like an invasive species with no natural predators, its financial prowess quickly overwhelmed the networked sphere, grossly disfiguring the earlier dream of digital technology as an empowering and emancipatory force.

While online advertisers were the dominant players in the early history of the new futures markets, surveillance capitalism is no more restricted to online ad targeting than mass production was restricted to the fabrication of the Model T. Today any actor with an interest in monetizing probabilistic information about behavior can pay to play in a range of human futures markets where behavioral predictions are bought and sold.

In a world of highly commoditized products and services, companies now turn to the surveillance dividend as the source of higher margins. The result is whole new ecosystems of behavioral surplus suppliers, as companies from every sector seek ways to participate in the unilateral dispossession of private experience. Surveillance capitalism now spreads across the "normal" economy, not only in traditionally information-intensive sectors such as insurance and finance but also in healthcare, retail, education, real estate development, and automobiles, to name but a few.

One poignant illustration of these new facts is found in the birthplace of mass production, the Ford Motor Company. One hundred years ago, pioneer capitalists like Ford bent to the revolutionary task of making things at a price that people could afford and shaping a new century of mass consumption. Henry Ford was proud to author the definition of "mass production" for the Encyclopedia Britannica, the Google of his day, describing it as "a productive organization that delivers in quantities a useful commodity of standard material, workmanship, and design, at minimum cost."Footnote 19 Ford understood that the mass production revolution was the result, not the cause, of a new era of demand in US society – farmers and shopkeepers who wanted automobiles too, but at a price they could afford. "The necessary, precedent condition of mass production is a capacity, latent or developed, of mass consumption," he wrote. "The two go together and in the latter may be traced the reasons for the former."Footnote 20 In Ford's cosmos, demand and supply were twinborn, with customers and workers forever linked in a cycle of production and sales that combined low-cost goods with consumption-worthy wages immortalized by the five-dollar day.

Had Henry Ford been listening to the Freakonomics Radio podcast in November 2018, he would have learned that his authoritative rendition of supply and demand had been relegated to the dustbin of history by his own distant successor, Jim Hackett. In response to the sustained slump in global auto sales, the company is already cutting thousands of jobs and eliminating models in pursuit of a price-earnings ratio befitting a high-tech data company: Hackett wants Ford to become more like Facebook and Google. In this vision, the work of making and selling cars gives way to proprietary monetizable data flows – a "transportation operating system."Footnote 21 Hackett wants Henry's company to collect data from the "100 million people … that are sitting in Ford blue-oval vehicles … . We have as much data in the future coming from vehicles, or from users in those vehicles, or from cities talking to those vehicles, as the other competitors that you and I would be talking about [like Facebook and Google] that have monetizable attraction."

Once customers are reinvented as data sources, it’s easy for Hackett to imagine the next step in which the data that stream from vehicles in real time are combined with Ford’s financing data, where, he says, “we already know … what people make … we know where they work; we know if they’re married. We know how long they’ve lived in their house.” Hackett concludes, “And that’s the leverage we’ve got here with the data … I think our higher purpose is that the smart vehicle and the smart world have an interaction in the future that’s much bigger than it was in the past.”Footnote 22 As one industry analyst put it, Ford “could make a fortune monetizing data. They won’t need engineers, factories, or dealers to do it. It’s almost pure profit.”Footnote 23

This is where we live now: a world in which nearly every product or service that begins with the word "smart," "personalized," or "connected," from cars to "digital assistants" to devices, appliances, and more, is a supply-chain interface for the unobstructed flow of behavioral surplus. The growth of these connections continuously increases what military strategists call the digital "attack surface," where behavioral surplus is relentlessly tracked, hunted, coaxed, and captured. What began as a solution to a financial emergency is now a burgeoning surveillance-based economic order: a surveillance economy. The dominant market form shifts under our gaze: once profits from products, then from services, then from speculation, and now from surveillance.

Surveillance capitalism can no longer be defined as a specific group of corporations, nor can it be conflated with the digital technologies on which it depends. While it is impossible to imagine surveillance capitalism without the digital, it is easy to imagine the digital without surveillance capitalism. The point cannot be emphasized enough: Surveillance capitalism is not technology. Digital technologies can take many forms and have many effects, depending on the social and economic logics that bring them to life. Surveillance capitalism relies on data-gathering devices like computers, phones, sensors, microphones, and cameras. It deploys machine intelligence and platforms. It expresses itself in algorithms. But it is not the same as any of those. Just as an X-ray reveals bone and muscle but not the soft tissue that binds them, technology is the bone and muscle here, while surveillance capitalism is the soft tissue that binds the elements and directs them into action. It is the shadow that falls over the digital, the hidden pattern that explains how this once emancipatory project transformed people and society into raw material for others' economic gain, as the Internet itself falls to the ownership and operations of surveillance capital.

The phrase “surveillance capitalism” is not arbitrary. Surveillance capitalism’s operations are designed for the social relations of the one-way mirror. They know everything about users, but users know nothing about them. Surveillance is baked into the DNA of this logic of accumulation, because without it the surveillance dividend as expressed in the revenues, profits, and market capitalization that mark the first two decades of the twenty-first century would have been impossible to achieve.

III. Surveillance Capitalism’s Economic Imperatives

Surveillance capitalists sell certainty. They compete in human futures markets on the quality of their prediction products, which aim to guarantee outcomes or at least the ever-improving approximation to such guarantees. These guarantees have value, but in the novel logic of surveillance capitalism, their value is a function of markets that bear no organic reciprocities with their populations, now repurposed as the source of unlimited raw material supplies. The competitive dynamics thus set into motion reveal key economic imperatives, and it is these imperatives that compel epistemic inequality, setting surveillance capitalism on a collision course with democracy itself.

First, because predictions must improve in the direction of something like certainty, surplus extraction must move in the direction of something like totality. Machine learning for behavioral prediction wants data in volume, and thus economies of scale in data production lay the foundation for all operations.

Once competition for prediction products intensifies, volume is not enough. Surveillance capitalists are compelled to search out ever more predictive sources of behavioral surplus. Machine learning needs volume but also variety: economies of scale but also economies of scope. This realization helped drive the "mobile revolution," sending users into the real world armed with cameras, gyroscopes, and microphones packed inside their smart new pocket-size computers, the ubiquitous interface that conveys surplus supplies to the AI hub. In the competition for scope, surveillance capitalists want your home and what you say and do within its walls.Footnote 24 They want your car, your medical conditions, and the shows you stream; your location as well as all the streets and buildings in your path and all the behavior of all the people in your city.Footnote 25 They want your voice and what you eat and what you buy; your children's play time and their schooling; your brainwaves and your bloodstream.Footnote 26 Nothing is exempt.Footnote 27
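The pull toward both scale and scope has a simple statistical logic, which the following illustrative sketch makes visible on synthetic data: a predictor trained on a single behavioral signal is reliably beaten by one trained on many varied signals. The data and numbers are invented; only the gradient they reveal matters.

```python
# Illustrative only: why behavioral prediction rewards variety (scope)
# of data, not just volume (scale). All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20_000
X = rng.random((n, 6))  # six varied behavioral signals per person
# The outcome depends on several signals at once (location, voice, purchases...).
y = (X[:, 0] + 0.8 * X[:, 3] + 0.6 * X[:, 5]) > 1.2

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

narrow = LogisticRegression().fit(Xtr[:, :1], ytr)  # scale without scope
broad = LogisticRegression().fit(Xtr, ytr)          # scale plus scope

print("one signal :", narrow.score(Xte[:, :1], yte))  # noticeably lower accuracy
print("six signals:", broad.score(Xte, yte))          # close to perfect
```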

With continued competitive intensification, surveillance capitalists discovered that the most predictive data come from intervening in behavior to tune, herd, and modify action in the direction of commercial objectives. Data scientists describe this as the shift from monitoring to actuation, in which a critical mass of knowledge about a machine system enables the remote control of that system. Now people become targets for remote control, as a third imperative, economies of action, emerges as an arena of intense experimentation. Here are the practical grounds on which unequal knowledge morphs into unequal power. Epistemic inequality widens to include the distance between what people can do and what can be done to them. "We are learning how to write the music," one data scientist explained, "and then we let the music make them dance."Footnote 28
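In engineering terms, the shift from monitoring to actuation is a feedback-control loop: observe behavior, try an intervention, and keep whatever moves the target metric. The toy epsilon-greedy loop below captures the shape of such tuning; the "nudges" and their response rates are entirely invented.

```python
# Toy sketch of "economies of action": a feedback loop that learns which
# nudge most reliably steers behavior toward a commercial goal.
# Nudges and user response rates are entirely invented.
import random

random.seed(42)
nudges = ["scarcity_banner", "social_proof", "discount_popup"]
true_response = {"scarcity_banner": 0.05, "social_proof": 0.12, "discount_popup": 0.08}

counts = {n: 0 for n in nudges}
wins = {n: 0 for n in nudges}

for _ in range(50_000):
    # Explore occasionally; otherwise exploit the best-performing nudge so far.
    if random.random() < 0.1:
        nudge = random.choice(nudges)
    else:
        nudge = max(nudges, key=lambda n: wins[n] / counts[n] if counts[n] else 0.0)
    counts[nudge] += 1
    # "Actuation": deploy the nudge and observe whether behavior shifted.
    if random.random() < true_response[nudge]:
        wins[nudge] += 1

print(counts)  # the loop converges on the most effective nudge
```

Note that the loop never needs to know why a nudge works: radical indifference, in miniature.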

How shall we understand this new power “to make them dance?” Unlike twentieth-century totalitarianism, it does not employ soldiers and henchmen to threaten terror and murder. It arrives with a cappuccino, not a gun. It is a new instrumentarian power that works its will through Big Other’s medium of ubiquitous digital instrumentation to manipulate subliminal cues, psychologically target communications, impose choice architectures, trigger social comparison dynamics, and levy rewards and punishments – all of it aimed at remotely tuning, herding, and modifying human behavior in the direction of profitable outcomes and always engineered to preserve users’ ignorance.

Although he did not name it, the visionary of ubiquitous computing, Mark Weiser, foresaw the immensity of instrumentarian power as a totalizing social project. He did so in a way that suggests both its utter lack of precedent and the danger of confounding it with what has gone before: “[H]undreds of computers in every room, all capable of sensing people near them and linked by high-speed networks, have the potential to make totalitarianism up to now seem like sheerest anarchy.”Footnote 29 In fact, all those computers are not the means to a digital totalitarianism. They are, as I think Weiser sensed, the foundation of an unprecedented power that can reshape society in unprecedented ways. If instrumentarian power can make totalitarianism look like anarchy, then what might it have in store for this century?

While all power yearns toward totality, instrumentarian power's specific purposes and methods are not only distinct from totalitarianism's but are in many ways its precise opposite. Instrumentarian power has no principle to instruct, no interest in the reformation of the soul. There is no aim toward spiritual salvation, no ideology against which to judge human action. Totalitarianism was a political project that converged with economics to overwhelm society. Instrumentarianism is a market project that converges with the digital to achieve its own unique brand of social domination. Totalitarianism operated through the means of violence, but instrumentarian power operates through the means of behavioral modification.

Instrumentarianism’s specific “viewpoint of observation” was forged in the controversial intellectual domain of “radical behaviorism.” Thanks to Big Other’s capabilities, instrumentarian power reduces human experience to measurable, observable behavior, while remaining profoundly, infinitely, and radically indifferent to the meanings and motives of its targets. Radical indifference produces observation without witness. Instead of an intimate violent political religion, Big Other’s way of knowing yields the remote but inescapable presence of impenetrably complex machine systems and the interests that author them, carrying individuals on a fast-moving current to the fulfilment of others’ ends. Trained on measurable action, Big Other cares only about observing behavior and ensuring that it is continuously accessible to its ever-evolving operations of monitoring, datafication, calculation, actuation, and monetization.

Instrumentarianism’s radical indifference is operationalized in Big Other’s dehumanized methods of evaluation that produce equivalence without equality by reducing individuals to the lowest common denominator of sameness – organisms among organisms. There is no need for mass submission to social norms, no loss of self to the collective induced by terror and compulsion, no inducements of acceptance and belonging as a reward for bending to the group. All of that is superseded by a market-based digital order that thrives within things and bodies, transforming volition into reinforcement and action into conditioned response. Thanks to Big Other’s capabilities to know and to do, instrumentarian power aims for a condition of certainty without terror in the form of guaranteed outcomes. In the execution of economies of action, Big Other transforms “natural selection” into the “unnatural selection” of variation and reinforcement authored by market players and the competition for surveillance revenues.

The paradox is that because instrumentarianism does not claim bodies for some grotesque regime of pain and murder, many are prone to undervalue its effects and lower their guard. Under the regime of instrumentarian power, the mental agency and self-possession of autonomous human action are gradually submerged beneath a new kind of automaticity: a lived routine of stimulus-response-reinforcement that operates outside of awareness and is aggregated as statistical phenomena: the comings and goings of mere organisms.

The challenges associated with successful economies of action have become a critical experimental zone for the elaboration of instrumentarian power. It is likely that much of this experimentation is invisible to the public, but at least some hides in plain sight, where one can observe how knowledge tips into power, epistemic inequality into epistemic injustice.

For example, Facebook conducted "massive-scale contagion experiments," the results of which were published in 2012 and 2014. The first aimed to subliminally induce voting in the lead-up to the 2010 US midterm elections. The second aimed to influence users to feel "happy" or "sad." From these experiments the researchers concluded that (1) it is possible to deploy subliminal cues on Facebook pages to alter real-world behavior and emotions, and (2) it is possible to accomplish such remote behavioral and affective modification with undetectable methods designed to bypass human awareness. Indeed, the very first paragraph of the 2014 research article on emotional contagion celebrates these findings: "Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness."Footnote 30

Facebook provides yet another observation point for the development of economies of action. In May 2017, three years after the publication of the contagion studies, The Australian broke the story on a confidential twenty-three-page Facebook document written by two Facebook executives and aimed at the company’s Australian and New Zealand advertisers. The report depicted the corporation’s systems for gathering “psychological insights” on 6.4 million high school and tertiary students as well as young Australians and New Zealanders already in the workforce. The Facebook document detailed the many ways in which the corporation uses its stores of behavioral surplus to simulate and predict individual and group affective patterns in order to pinpoint the exact moment at which a young person needs a “confidence boost” and is therefore most vulnerable to a specific configuration of advertising cues and nudges: “By monitoring posts, pictures, interactions, and Internet activity, Facebook can work out when young people feel ‘stressed,’ ‘defeated,’ ‘overwhelmed,’ ‘anxious,’ ‘nervous,’ ‘stupid,’ ‘silly,’ ‘useless,’ and a ‘failure.’”Footnote 31

The report reveals Facebook’s interest in leveraging this affective surplus for the pivot from monitoring to actuation. It boasts detailed information on “mood shifts” among young people based on “internal Facebook data,” and it claims that not only can Facebook’s prediction products “detect sentiment,” they can also predict how emotions are communicated at different points during the week. These data are then used to match each emotional phase with appropriate ad messaging for the maximum probability of guaranteed sales. “Anticipatory emotions are more likely to be expressed early in the week,” the analysis counsels, “while reflective emotions increase on the weekend. Monday–Thursday is about building confidence; the weekend is for broadcasting achievements.” The young adults of Australia’s and New Zealand’s cities and towns had no reason to suspect that their fears and fantasies were being routinely exploited for commercial result at the precise moment of their greatest vulnerability. (“NEED A CONFIDENCE BOOST? CLICK HERE! BUY THIS BLACK LEATHER JACKET NOW! FREE OVERNIGHT DELIVERY!”)

Facebook publicly denied these practices, but former Facebook product manager Antonio Garcia-Martinez, the author of Chaos Monkeys, a useful account of Silicon Valley, described in the Guardian the routine application of such practices and accused the corporation of “lying through their teeth.” He concluded: “The hard reality is that Facebook will never try to limit such use of their data unless the public uproar reaches such a crescendo as to be un-mutable.”Footnote 32 It is Facebook that knows. It decides who knows. It decides who decides.

The public's intolerable knowledge disadvantage is deepened by surveillance capitalists' perfection of mass communications as gaslighting. Indeed, these firms have long mastered the tactical arts of disinformation and fake news, paving the way for social complacency toward the crisis of truthfulness and social trust that has engulfed public communications. A few examples are illustrative. On April 30, 2019, Mark Zuckerberg made a dramatic announcement at the company's annual developer conference, declaring: "The future is private."Footnote 33 A few weeks later, a Facebook litigator appeared before a federal district judge in California to thwart a user lawsuit over privacy invasion, arguing that the very act of using Facebook negates any reasonable expectation of privacy "as a matter of law."Footnote 34 While leaked internal documents describe the firm's sophisticated methods for accruing granular psychological insights for targeting and triggering individuals, in early 2020 its Vice President of Public Policy told a public forum: "We don't do surveillance capitalism, that by definition is surreptitious; we work hard to be transparent."Footnote 35 In May 2019, Google CEO Sundar Pichai wrote in the New York Times of his corporation's commitment to the principle that "privacy cannot be a luxury good."Footnote 36 Five months later Google contractors were observed offering $5 gift cards to homeless people of color in an Atlanta park in return for a facial scan.Footnote 37 And while Amazon cracked down on employees for violating the company's privacy by publicly discussing its policies and practices, the corporation aggressively strengthened the one-way mirror, marketing its collection of surveillance-as-a-service connected devices and appliances based on the Alexa voice recognition system. Its latest suite of Internet-enabled devices was characterized by the Seattle Times as "a sweeping vision of automation, entertainment, ubiquitous surveillance and commerce permeating nearly every aspect of life."Footnote 38

Facebook’s denial of psychological targeting practices invites even more scrutiny in light of the leaked 2018 company document, which described its “AI Hub.”Footnote 39 That report also indicated that the company’s extraordinary data flows and computational production are dedicated to meeting its corporate customers’ “core business challenges” with procedures that link prediction, microtargeting, intervention, and behavior modification. For example, a Facebook service called “loyalty prediction” was touted for its ability to plumb proprietary behavioral surplus in order to predict which individuals are “at risk” of shifting their brand allegiance. This knowledge alerts advertisers to intervene promptly with targeted messages designed to stabilize loyalty just in time to alter the course of the future.
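Mechanically, "loyalty prediction" is churn modeling: score each user's risk of defection from behavioral signals and fire an intervention when the score crosses a threshold. The following is a minimal hypothetical sketch, with invented features, weights, and threshold; it is not Facebook's service, only the general technique.

```python
# Hypothetical sketch of "loyalty prediction": score churn risk from
# behavioral surplus, then intervene "just in time" to stabilize loyalty.
# Features, weights, and the threshold are invented for illustration.

def churn_risk(days_since_visit, rival_brand_views, sentiment):
    """Toy linear risk score clipped to [0, 1]; a real system would learn this."""
    score = 0.03 * days_since_visit + 0.05 * rival_brand_views - 0.2 * sentiment
    return max(0.0, min(1.0, score))

users = [
    {"id": "u1", "days_since_visit": 2,  "rival_brand_views": 0, "sentiment": 0.8},
    {"id": "u2", "days_since_visit": 21, "rival_brand_views": 9, "sentiment": -0.4},
]

for u in users:
    risk = churn_risk(u["days_since_visit"], u["rival_brand_views"], u["sentiment"])
    if risk > 0.5:  # flagged "at risk" of shifting brand allegiance
        print(f"{u['id']}: send targeted retention message (risk {risk:.2f})")
```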

Google's experimentation with economies of action moved boldly into the real world with the augmented reality game Pokémon Go. The project had been incubated at Google for many years, led by John Hanke, an early inventor of satellite mapping and leader of Google's mapping operations, including Google Earth and Street View, both critical sources of surplus data supplies. Later Hanke headed his own augmented reality shop inside Google, Niantic Labs, where Pokémon Go was developed and then spun off from the company just in time to go to market, with Hanke as its head and Google as its principal investor.Footnote 40

Pokémon Go brought the emerging science of remote population tuning and herding to the real world: real streets, real towns, real cities. It added the rewards and punishments of gamification to the methods of subliminal cueing and the manipulation of social comparison dynamics in order to bypass users’ awareness of the situational facts: families and friends were engaged in a search game without knowing that it was they who were being searched and gamed. Players were, in fact, unwitting pawns in a hidden higher-order game that aimed to provide “footfall” to fee-paying establishments as they vied for real-world consumer visits in exactly the same way that online advertisers pay for the virtual visits of clicks and engagement. Niantic used immense caches of data collected from game players’ devices in order to apply the incentives and reinforcements of gamification for the sake of herding players to the real-world business customers in its futures markets, from McDonald’s and Starbucks to Joe’s Pizza.Footnote 41
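The "game within the game" can be rendered as a simple placement problem: drop an in-game reward wherever it maximizes expected fee-paying footfall, weighting candidate sites by what each sponsor pays. The sites, fees, player counts, and lure rate below are all invented for illustration; Niantic's actual mechanics are not public.

```python
# Toy sketch of gamified herding: place an in-game reward at the site that
# maximizes expected fee-paying footfall. All values are invented.

candidate_sites = [
    {"name": "Joe's Pizza",   "fee_per_visit": 0.50, "players_nearby": 120},
    {"name": "MegaBurger",    "fee_per_visit": 0.15, "players_nearby": 900},
    {"name": "City Fountain", "fee_per_visit": 0.00, "players_nearby": 400},  # unsponsored
]

LURE_RATE = 0.3  # assumed fraction of nearby players a reward draws in

def expected_fees(site):
    """Expected sponsor revenue if a reward is placed at this site."""
    return site["fee_per_visit"] * site["players_nearby"] * LURE_RATE

best = max(candidate_sites, key=expected_fees)
print(f"Place reward at {best['name']} -> {expected_fees(best):.2f} expected fees")
```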

These escalating zones of experimentation and their practical success suggest a disturbing conclusion: the competitive necessity of economies of action means that surveillance capitalists must use all means available to supplant autonomous action with heteronomous behavior. Human awareness is a threat to surveillance revenues because the mobilization of awareness endangers the larger project of behavior modification. Philosophers recognize “self-regulation,” “self-determination,” and “autonomy” as expressions of “freedom of will,” and a flourishing research literature illuminates the antecedents, conditions, consequences, and challenges of human self-regulation as a universal need. The capacity for self-determination is understood as an essential foundation for behaviors associated with critical human capabilities such as empathy, volition, reflection, personal development, authenticity, integrity, learning, goal accomplishment, impulse control, creativity, and the sustenance of intimate relationships. “Implicit in this process is a self that sets goals and standards, is aware of its own thoughts and behaviors, and has the capacity to change them. Indeed, some theorists have suggested that the primary purpose of self-awareness is to enable self-regulation.” Every threat to human autonomy begins with an assault on awareness, “tearing down our capacity to regulate our thoughts, emotions, and desires.”Footnote 42

The salience of self-awareness as a bulwark against self-regulatory failure is also highlighted in recent research on "susceptibility to persuasion," which concludes that "the ability to premeditate" is the single most important determinant of one's ability to resist persuasion.Footnote 43 People who harness self-awareness to think through the consequences of their actions are more disposed to chart their own course and are thus significantly less vulnerable to persuasion techniques. Self-awareness also figures in the second-highest-ranking protection from susceptibility to persuasion: commitment. Those who consciously commit to a course of action or set of principles are less likely to be persuaded to do something that violates their commitment.

In one sense, there is nothing remarkable in observing that capitalists would prefer individuals who submit to arrangements that advantage capital. It would be incorrect, however, to conclude that today’s surveillance capitalists simply represent more of the same. The structural requirements of economies of action turn the means of behavioral modification into an engine of growth. At no other time in history have the wealthiest private corporations had at their disposal a pervasive global architecture of ubiquitous computation able unilaterally to amass unparalleled concentrations of information about individuals, groups, and populations sufficient to mobilize the pivot from knowledge about behavior to the actuation of commercially desirable behavior. In other words, when we climb the mountain of the division of learning and peek into the fortress, we see a frontier operation run by geniuses and funded by vast capital outlays that is furiously dedicated to knowing everything about us and pivoting that knowledge to the remote control of people. These are unprecedented conditions that bestow an unprecedented instrumentarian power on private capital.

IV. Information Warfare

While democracy slept, epistemic inequality was produced, institutionalized, and protected by an unequal power that annuls the possibility of conflict by denying the right of combat. In this case, denial is achieved through Big Other’s hidden techno-economic systems that steal, know, and shape human behavior for the sake of others’ gain. These capabilities depend on the evasion of human awareness. This entails the denial of the epistemic rights that confer individual sovereignty over self/knowledge. Instrumentarian power is the guarantor of epistemic inequality, the hammer of epistemic injustice, and the usurper of epistemic rights.

Because one’s self and all the selves are meant to sleepwalk peacefully through this known unknown, sudden news from behind the veil can have an electrifying effect, if only for a while. This helps to explain the force with which the story of Cambridge Analytica broke on the world in March 2018, when Chris Wylie, the young mastermind-turned-whistleblower, unleashed a torrent of information on that company’s secret efforts to predict and influence individual voting behavior, quickly riveting the world on the small political analytics firm and the giant source of its data: Facebook. There are many unanswered questions about the legality of Cambridge Analytica’s complex subterfuge, its actual political impact, and its relationship with Facebook. My interest here is restricted to how it replicated surveillance capitalism’s ordinary practices, and the implications of that fact.Footnote 44

Academic researchers had already demonstrated the predictive power of behavioral surplus culled from Facebook pages, the insights into human personality that it can yield, the resulting opportunities for behavioral manipulation and modification, and the commercial value of such methods. Wylie recounted his fascination with these studies, especially as they might be pivoted from commercial to political outcomes.Footnote 45 Through a complicated chain of events, it was Wylie who persuaded Cambridge Analytica’s owner, the secretive software billionaire and active enemy of democracy Robert Mercer, and his operatives, including anti-democracy’s dark theorist Steve Bannon, to use Facebook data to advance Mercer’s political aims.

Cambridge Analytica’s operations followed the surveillance capitalist playbook. They were designed to produce ignorance through secrecy and the careful evasion of individual awareness: “We exploited Facebook to harvest millions of people’s profiles,” Wylie admitted, “and built models to exploit what we knew about them and target their inner demons.”Footnote 46 The objective was “behavioral micro-targeting … influencing voters based not on their demographics but on their personalities.”Footnote 47 “I think it’s worse than bullying, because people don’t necessarily know it’s being done to them,” Wylie reflects. “At least bullying respects the agency of people because they know … if you do not respect the agency of people, anything that you’re doing after that point is not conducive to a democracy. And fundamentally, information warfare is not conducive to democracy.”Footnote 48

Wylie describes Cambridge Analytica’s operations as “information warfare,” correctly acknowledging that this form of shadow warfare originates in significant asymmetries of knowledge and the power produced by such knowledge. In other words, information warfare exploits epistemic inequality, while its effects intensify epistemic injustice.

However, the Cambridge Analytica narrative suggests an even more disturbing conclusion. The political firm was only able to operate as an information warrior because the conditions for successful warfare and its weapons had already been developed by surveillance capital. Surveillance capitalist operations like Facebook, Google, Amazon, Microsoft, and countless others are best understood as examples of information-warfare-for-profit. These firms are information mercenaries that leverage unprecedented asymmetries of knowledge/power for the sake of revenues, which, in turn, fund their continued dominance and the intensification of epistemic inequality.

Consider how a small firm such as Cambridge Analytica was able to enter the fray of information war. The so-called political consultancy functioned as a parasite buried in the host of Facebook's vast behavioral data supply chains, while adapting its host's foundational mechanisms and methods: secret data capture, extraction of behavioral surplus, predictive computational analysis, and behavioral microtargeting in the service of economies of action.

Cambridge Analytica channeled these methods and mechanisms, merely pivoting the surveillance capitalist machinery from commercial markets in human futures toward guaranteed outcomes in the political sphere. Its strategies of secret invasion and hidden conquest were the same standard operating procedures to which billions of innocent "users" are subjected each day. What better description of the unsavory treatment of the 6.4 million young people in Australia and New Zealand whose social anxieties were extracted and manipulated for profit than to say that we "built models to exploit what we knew about them and target their inner demons"? What more apt reflection on "loyalty prediction" interventions based on trillions of data points, or on Pokémon Go's manipulative game within the game, than "I think it's worse than bullying, because people don't necessarily know it's being done to them"?

It is worth noting that it was Google's Eric Schmidt who first pried open this Pandora's box, transferring surveillance capitalism's core mechanisms of behavioral microtargeting to the Obama presidential campaigns, where Wylie enjoyed some of his early training under Obama's Director of Targeting.Footnote 49 In little over a decade, Schmidt's innovations have become the envy of every enemy of democracy, well within reach of the plutocrat's wallet or the more modest budgets of other non-state actors. Indeed, information warfare is widely assumed to originate in the State for purposes of political, cultural, and/or military destabilization, just as we once considered behavioral modification or surveillance to be projects of the State. But recent theories of information warfare have begun to recognize the growing ease with which non-state actors, such as Robert Mercer or ISIS, can undertake information warfare.Footnote 50

What has not yet been adequately recognized is that surveillance capitalism has already institutionalized information warfare as a market project. It is only on the strength of this construction that states and non-state actors alike can succeed as information warriors. Such operations exist as parasites on the larger surveillance capitalist body. A simple set of distinctions framed by US Naval Academy professor and cyber-security expert Martin Libicki is useful here, as it helps to describe the contributions of the surveillance capitalist host, which delivers triple nourishment by providing what Libicki identifies as (1) the conditions, (2) the weapons, and (3) the opportunity to wage information war.Footnote 51

Conditions

Surveillance capitalism's economic imperatives increase the range of societal vulnerabilities open to parasitic exploitation. Libicki observes that US companies lead the world in the collection and processing of personal information. Pervasive datafication and connectivity, largely driven by surveillance capital, substantially increase society's "attack surface," leaving it more vulnerable to a range of assault capabilities. Libicki asks: "[W]hy collect what can be stolen?"Footnote 52

Weapons

The surveillance capitalism host also provides the “weapons” (data, methods, and mechanisms) necessary to exploit the vulnerabilities that it creates. “Ultimately it has been the evolution of the information economy that has provided the means by which hostile others can run a pervasive harassment campaign,” Libicki acknowledges.Footnote 53 He cites “data-mining techniques” that construct “realistic simulations of individuals, indeed perhaps of most of a population … integrating data streams with enormous cloud-based storage, powerful processing, and a dash of artificial intelligence.” Such simulations, he notes, “may be used to test every individual’s reaction to events,” including “advertising, political campaigns, and psychological operations, and even to guess what might go viral through person-to-person interactions,” just as we saw in the case of Facebook’s contagion experiments.Footnote 54

Libicki catalogues some of these weapons, which are already essential methods in surveillance capitalism’s arsenal: “exquisite psychological operations,” “messages tailored to one person at a time,” and data-mining able to characterize individuals precisely enough for “crafting the message most likely to resonate with them.” These achievements permit the “optimization” of “psychological operations,” “new conduits for persuasion,” and “the manipulation of fear,” all of which are amply on display in surveillance capital’s expanding zones of experimentation as it learns to translate knowledge into power.Footnote 55

Opportunity

Libicki notes that information warfare unfolds in an atmosphere of "deep secrets" protected by "a dense fog of ambiguity."Footnote 56 These conditions are structurally enabled. First, they reproduce and extend the asymmetries of knowledge and power already compelled by economic imperatives. Second, as long as the surveillance capitalist host operates from the perspective of radical indifference, it is like a Cyclops whose single line of sight leaves it blind to everything but its prey. Parasitic operations succeed because they fall on the blind sides of radical indifference. This means that parasites can persist unchallenged for long periods. It is difficult to find them, to disable them, and to confirm their destruction. Such was the case with Cambridge Analytica, which fed off illegitimately collected Facebook data that were illegitimately sold for nefarious purposes – all of it secreted in the shadow of Facebook's single eye. Radical indifference creates a void where social reciprocities once thrived. Surveillance capitalists cannot fill this void, because doing so would violate the logic of accumulation on which everything depends. The rogue forces of disinformation grasp this fact more crisply than anyone else, as they cleverly exploit the Cyclops of radical indifference and escalate the perversion of information in an open society.Footnote 57

Surveillance capitalism’s antidemocratic and antiegalitarian consequences are best described as a market-driven coup from above. It is not a coup d’état in the classic sense but rather a coup de gens: an overthrow of the people concealed in the technological Trojan horse that is Big Other. On the strength of its audacious annexation of human experience, this coup achieves exclusive concentrations of knowledge and power that undermine democracy at root and crown. It poisons democracy at its roots by usurping the epistemic rights that confer individual sovereignty over self/knowledge, thus weakening self-determination and undermining human autonomy without which democratic society is unimaginable. It poisons democracy from above by imposing a new axis of epistemic inequality that now threatens to remake society while unmaking the structure and function of democratic institutions. Surveillance capital wages a quiet information war for epistemic hegemony and the power over human behavior that it promises, thus channeling capitalism’s adversarial bloodline not toward groups like workers or consumers who are defined by their economic function, but rather toward the widest possible category of people: “users.” This broad target of all people engaged in all forms of life is as all-encompassing as the economic imperatives that compel surveillance capitalism toward societal domination. It bears a single message: CAVEAT USOR.

V. The Poisoned Crown: The Division of Learning in Society

When the young Emile Durkheim wrote The Division of Labor in Society, a treatise that would become a foundational text of modern sociology, the title itself was controversial. Why this was the case is relevant to our predicament today. Because the transformation that we witness in our time echoes many of the century-old observations in Durkheim’s seminal work, a few key points are reviewed here.

The division of labor had been understood as a critical means of achieving labor productivity through the specialization of tasks. Adam Smith memorably wrote about this new principle of industrial organization in his description of a pin factory, and the division of labor remained a topic of economic discourse and controversy throughout the nineteenth century. Durkheim recognized labor productivity as an economic imperative of industrial capitalism that would drive the division of labor to its most extreme application, but that was not what held his fascination.

Instead, Durkheim trained his sights on the social transformation already gathering around him, observing that "specialization" was gaining "influence" in politics, administration, the judiciary, science, and the arts. He concluded that the division of labor was no longer quarantined in the industrial workplace. Instead it had burst through those factory walls to become the central organizing principle of industrial society: "Whatever opinion one has about the division of labor," Durkheim wrote, "everyone knows that it exists, and is more and more becoming one of the fundamental bases of the social order."Footnote 58

Economic imperatives predictably mandated the division of labor in production, but what was the purpose of the division of labor in society? This was the question that motivated Durkheim’s analysis, and his century-old conclusions are relevant for us now. He argued that the division of labor accounts for the interdependencies and reciprocities that link the many diverse members of a modern industrial society in a larger prospect of solidarity. This new principle of social order was an essential response to the breakdown of traditional communities as the old sources of meaning that had reliably bonded people across space and time melted away. What would hold society together in the absence of the rules and rituals of place, clan, and kin? Durkheim’s answer was “the division of labor.” Society’s need for a coherent new source of meaning and structure was the cause, and the effect was an ordering principle that enabled and sustained a healthy modern community. The reciprocities of the division of labor would breed mutual need, interdependence, and respect, all of which imbue this new ordering principle with moral force. As the young social theorist explained:

The most remarkable effect of the division of labor is not that it increases output of functions divided, but that it renders them solidary. Its role … is not simply to embellish or ameliorate existing societies, but to render societies possible which, without it, would not exist …. It passes far beyond purely economic interests, for it consists in the establishment of a social and moral order sui generis.Footnote 59

Durkheim’s vision was neither sterile nor naive. He recognized that things can take a dark turn and often do, resulting in what he called an “abnormal division of labor” (sometimes translated as “pathological”) that produces social distance, injustice, and discord in place of reciprocity and interdependence. In this context, Durkheim singled out the destructive effects of inequality on the division of labor in society, especially what he viewed as the most dangerous source of inequality: extreme asymmetries of power that make “conflict itself impossible” by “refusing to admit the right of combat.” Such pathologies can only be cured by a politics that asserts the people’s right to contest, confront, and prevail in the face of unequal and illegitimate power over society. In the late nineteenth and most of the twentieth centuries, that contest was defined by economic inequality and led by labor and other social movements that asserted rights to economic justice through new institutional constructions: unions, collective bargaining, public education.

But now it is a division of learning that follows the same migratory path from the economic to the social domain once traveled by the division of labor. The progress of digitalization and information intensification began in the offices and factories of the 1980s, when workplaces mobilized around the new questions concerning knowledge, authority, and power, thus drawing labor and capital into a novel and poorly understood crisis of epistemic equality.Footnote 60

Forty years later it is possible to see that the labor crisis of the late twentieth century was an early phase of a longer struggle over the division of learning in society that would engulf twenty-first century societies as the dilemmas of knowledge, authority, and power broke through the boundaries of the economic sphere to overwhelm and finally saturate everyday life. Now the division of learning “passes far beyond purely economic interests,” as it establishes the basis for a new social order and its moral content. But scientists warn that the world’s capacity to produce information has substantially exceeded its ability to process and store information.Footnote 61 Information is digital, but its volume exceeds our ability to discern its meaning.

As the solution to this problem, Martin Hilbert counsels, “The only option we have left to make sense of all the data is to fight fire with fire,” using “artificially intelligent computers” to “sift through the vast amounts of information … Facebook, Amazon, and Google have promised to … create value out of vast amounts of data through intelligent computational analysis.”Footnote 62 The rise of surveillance capitalism, however, necessarily turns Hilbert’s advice into a damning vision of social pathology. Although he does not mean to, Hilbert’s suggestion merely confirms the self-authorized epistemic dominance of the surveillance capitalists and the institutionalization of epistemic inequality as the division of learning in society is bent to the commercial interests of private surveillance capital.

Surveillance capitalism’s command of the division of learning in society begins with the problem of what may be called “the two texts.” The first is the public-facing text, familiar and celebrated for the universe of information and connection that it brings to our fingertips. We are its authors and its readers. Google Search codifies the informational content of the World Wide Web. Facebook’s News Feed binds the social network. Much of this public-facing text is composed of what we inscribe on its pages: posts, blogs, videos, photos, conversations, music, stories, observations, “likes,” tweets, and all the great massing hubbub of lives captured and communicated.

Under the regime of surveillance capitalism, however, the first text does not stand alone; it trails a shadow close behind. The first text, full of promise, actually functions as the supply operation for this second shadow text. Everything that is contributed to the first text, no matter how trivial or fleeting, becomes a target for surplus extraction. That surplus fills the pages of the shadow text, hidden from view and “read only” for surveillance capitalists.Footnote 63 In this text, private experience is dragooned as raw material to be accumulated and analyzed as means to others’ market ends. The shadow text conceals more about us than we can know about ourselves, exemplified in Facebook’s ingestion and calculation of trillions of behavioral data points each day. Worse still, it is nearly impossible to refrain from contributing to this vast concentration of shadow knowledge, as Big Other feeds on the normal and necessary routines of daily life.

Finally, shadow knowledge ricochets back into lives, morphing into the instrumentarian power to shape what is seen, learned, and done. As Frank Pasquale describes Google: “The decisions at the Googleplex are made behind closed doors … the power to include, exclude, and rank is the power to ensure which public impressions become permanent and which remain fleeting … Despite their claims of objectivity and neutrality, they are constantly making value-laden, controversial decisions. They help create the world they claim to merely ‘show’ us.”Footnote 64 When it comes to the shadow text, we are the objects of its narratives from whose lessons we are excluded. As the source from which all the treasure flows, the shadow text is about us, but it is not for us. Instead it is created, maintained, and exploited outside our awareness for others’ profit.

Just as Durkheim warned his society a century ago of an abnormal division of labor, we now enter the third decade of the twenty-first century with our societies already disfigured by a division of learning that drifts into pathology marked by epistemic inequality and injustice at the hands of the unprecedented asymmetries of knowledge institutionalized in the shadow text. The pathology does not stop here. Asymmetries of knowledge feed the progress of instrumentarian power as exclusive knowledge is translated through the networked layer of digital instrumentation to produce new capabilities of actuation at scale – influencing, tuning, herding, and modifying human behavior toward others’ commercial ends. The division of learning is thus both the ascendant principle of social order in the twenty-first century and already hostage to surveillance capital’s privileged position, empowered by its ownership of the texts and its exclusive command of analysis and prediction capabilities.

More than thirty years ago, the legal scholar Spiros Simitis published a remarkable essay on the theme of privacy in an information society. Simitis grasped early on that the already visible trends in public and private “information processing” harbored threats to society that transcended narrow conceptions of privacy and data ownership. “[P]ersonal information is increasingly used to enforce standards of behavior,” he wrote. “Information processing is developing, therefore, into an essential element of long-term strategies of manipulation intended to mold and adjust individual conduct.”Footnote 65 Simitis argued that these trends were incompatible not only with privacy but with the very possibility of democracy, which depends on a reservoir of individual proficiencies associated with autonomous moral judgment and self-determination.

Building on Simitis’s work, Paul Schwartz warned in 1989 that computerization would transform the delicate balance of rights and obligations upon which privacy law depends: “Today the enormous amounts of personal data available in computers threaten the individual in a way that renders obsolete much of the previous legal protection.” Most important, Schwartz foresaw that the scale of the still emerging epistemic crisis would impose risks that exceed the scope of privacy law. “The danger that the computer poses is to human autonomy,” he warned. “The more that is known about a person, the easier it is to control him. Insuring the liberty that nourishes democracy requires a structuring of societal use of information and even permitting some concealment of information.”Footnote 66

Both Simitis and Schwartz sensed the ascent of the division of learning as the axial principle of a new computational societal milieu, but they could not have anticipated the rise of surveillance capitalism and its consequences. While the explosive growth of information territories shifts a crucial axis of the social order from a twentieth-century division of labor to a twenty-first century division of learning, it is surveillance capitalists who command the field and unilaterally lay claim to a disproportionate share of the epistemic rights that shape the division of learning in society.

Instead of the long anticipated explosion of democratization, the competitive struggle over surveillance revenues has dragged our societies into a regressive “pre-Gutenberg” pattern, in which a pathological division of learning is captured by private capital, presided over by a narrow priesthood of privately employed computational specialists, their privately owned machines, and the economic interests for whose sake they learn. This epistemic violence runs free of law, of market restraints, and of organic reciprocities with its communities, which are no longer required as sources of customers or employees but rather as a passive unwitting cornucopia of raw material for production and sales.

The result is best understood as the unauthorized privatization of the division of learning in society. Just as Durkheim warned of the subversion of the division of labor by the powerful forces of industrial capital a century ago, today’s successful prosecution of information warfare aimed at citizen-users by surveillance capital now exerts private power over the definitive principle of social order in our time. Epistemic inequality is enshrined as the signature deformation of this epoch as the pathologies of the division of learning infect the societal superstructure.

Here is democracy’s poisoned crown: As things currently stand, it is the surveillance capitalist corporations that know. It is the market form that decides. It is surveillance capital that decides who decides. Experts in the disciplines associated with machine intelligence know this, although they have little grasp of its implications as the signal of the defining axis of social inequality in our time. One data scientist writes: “Whoever has the best algorithms and the most data wins …. Google with its head start and larger market share, knows better what you want … whoever learns fastest wins.”Footnote 67 In 2018, the New York Times reported that Google CEO Sundar Pichai had located his office on the same floor as the company’s AI research lab, noting it as a trend among many CEOs – a literal take on the concentration of knowledge and power.Footnote 68

Here is the paradox in which citizens are caught: Democracy is the target of this epistemic poison, and its only antidote.

VI. Remedies

Despite the variation of motives among the ninety Spanish citizens, their collective assertion of a “right to be forgotten” announced a new twenty-first-century contest over once elemental epistemic rights now under global assault from private surveillance capital. The European Court of Justice’s decision on the “right to be forgotten,” so often reduced to the legal and technical considerations related to the deletion or delinking of personal data, was in fact a key inflection point at which elemental epistemic rights successfully sought the protection of democratic institutions as they undertook the long migration toward law. It was the Court of Justice that wrote an early chapter in what is now an epoch-defining struggle to claw back epistemic rights from the powerful forces of surveillance capital and its determination to assert authority over what can be learned and known. There is evidence that citizens and lawmakers around the world are finally picking up the pen, as a new wave of public “techlash,” legislative initiatives, and regulatory actions begins to take shape.

The past offers good counsel at this key juncture. In his Pulitzer prize-winning history, Prophets of Regulation, Thomas McCraw recounts the phases and distinct purposes of regulatory regimes in the US: the 1870s and the initial period of industrialization; the early twentieth century, especially 1900–1916; the 1930s and the New Deal; and the onset of deindustrialization during the 1970s and 1980s. The challenges of each era brought distinct forms of law and regulatory leadership. At the dawn of the twentieth century it was the muckrakers and progressives who defined the regulatory paradigm. Later, the lawyers dominated. It was only the past few decades that saw the economists as framers of the regulatory vision.

McCraw observes that this “economists’ hour” will certainly end and wonders what will follow. In considering the arc of this history, he finds clues, noting that concerns for justice and fairness have generally overshadowed the more narrow aims of economic growth in the construction of regulatory regimes. “Regulation,” he writes, “is best understood as a political settlement.”Footnote 69

This perspective suggests that the chartering frameworks of a digital future compatible with the principles of a democratic society are most likely to be defined and led by champions of democracy. The key principles here must be: (1) the redistribution of epistemic rights to the sovereign individual, (2) under the authority of the rule of law, and (3) sustained by the power of democratic institutions and their governance. Elected officials, citizens, and specialists can seize this opportunity, allied in the knowledge that despite its failures and shortcomings, democracy is the one idea to emerge from the long human story that enshrines the people’s right to govern themselves and asserts the ideal of the sovereign individual as the single most powerful bulwark against tyranny.

McCraw delivers a warning with his observations, however, and it is significant for us now. The historical record shows that regulators failed when they were unable “to frame strategies appropriate to the particular industries they were regulating.”Footnote 70 The lesson is that today’s new legislative and regulatory challenges will not be met effectively without a clear grasp of surveillance capitalism as a novel economic logic defined by distinct economic imperatives and the specific practices and consequences that they compel. Twenty-first century solutions to our twenty-first century challenges may build on existing paradigms of privacy and antitrust but will also have to move beyond and even transform those paradigms, as we learn how to interrupt and outlaw surveillance capitalism’s key mechanisms, methods, and markets. As the European Commissioner for Competition Margrethe Vestager recently put it: “One of the things I have learned from surveillance capitalism … is [that] it’s not you searching Google, it is Google searching you. And that gives a very good idea about not only what you want to buy but also what you think. So we have indeed a lot to do.”Footnote 71

The prospects of a new regulatory paradigm are improved with a clear grasp of the forces that have impeded its emergence during the first two decades of the twenty-first century. Lawmakers have been reluctant to challenge surveillance capitalism for many reasons.Footnote 72 Among these was an unwritten policy of “surveillance exceptionalism” forged in the aftermath of the September 11 terrorist attacks, when the government’s concerns shifted from online privacy protections to a new zeal for “total information awareness.” In that political environment, the fledgling surveillance capabilities emerging from Silicon Valley appeared to hold great promise. Another reason has been the unparalleled lobbying infrastructure pioneered by Google and later joined by Facebook and others. A third is the value of behavioral microtargeting to political campaigns.Footnote 73

As a new wave of public mobilization and lawmaking gathers force, doing “a lot” will require overcoming these old impediments. It also means confronting surveillance capitalism’s strategic propaganda campaigns and its mass communications tactics based on gaslighting and mendacity. Strategies and tactics have been designed as defensive fortifications intended to undermine and intimidate lawmakers and citizens alike, confounding judgment and freezing action. What follows are three examples of strategic propaganda that have received relatively little scrutiny compared to the damage they do.

The Innovation Defense

Surveillance capitalist leaders vigorously portray democracy as the enemy of innovation. Facebook’s Head of Global Policy and Communications, Sir Nick Clegg, warned in 2019 that any restrictions resulting from “tech-lash” risked making it “almost impossible for tech to innovate properly,” invoking the threat of Chinese ascendance as the price the West would pay for law. “I can predict that … we will have tech domination from a country with wholly different sets of values,” he insisted.Footnote 74

Clegg was only repeating what surveillance capitalist leaders had been proselytizing for years. In 2010, Mark Zuckerberg announced that privacy was no longer a “social norm,” celebrating Facebook’s explosive new “privacy policies” that publicly displayed personal information by default as evidence of his determination to innovate rather than “be trapped” by conventions or law. “We decided that these would be the social norms now and we just went for it.”Footnote 75 In 2011, former Google CEO Eric Schmidt warned that government overreach would foolishly constrain innovation: “We’ll move much faster than any government.”Footnote 76 That year Google founder Larry Page complained that “old institutions like the law” impede the firm’s freedom to “build really great things.”Footnote 77 All of this rhetoric is actually a hand-me-down from another era when Gilded Age barons, whom we now call “robbers,” insisted that there was no need for law when one had the “law of evolution,” the “laws of capital,” and the “laws of industrial society.” As historian David Nasaw put it, the millionaires preached “democracy had its limits, beyond which voters and their elected representatives dared not trespass lest economic calamity befall the nation.”Footnote 78

George Orwell observed that the rhetorical silences and blatant contradictions of power are designed so that “to see what is in front of one’s nose needs a constant struggle.”Footnote 79 The tech companies’ innovation rhetoric helped to suppress criticism from users and their lawmakers for many years, despite what was in front of their noses. Facebook and Google were regarded as innovative companies that sometimes make dreadful mistakes at the expense of privacy. Since then the picture has sharpened and we are getting better at seeing what’s in front of our collective nose. It is now possible to recognize that what were once regarded as mistakes – Google Glass, Gmail scanning, Street View’s theft of private data, Facebook’s Beacon program, its sale of private information to developers, and more – were, in fact, the innovations.

The Freedom Defense

Lawmakers also have been held back in their work by confusion about the relationship between knowledge and freedom. Surveillance capitalists are no different from other capitalists in demanding freedom from any sort of constraint. They insist on the “freedom to” launch every novel practice while aggressively asserting the necessity of their “freedom from” law and regulation. This classic pattern reflects two bedrock assumptions about capitalism made by its own theorists: The first is that markets are intrinsically unknowable. The second is that the ignorance produced by this lack of knowledge requires wide-ranging freedom of action for market actors.

The notion that ignorance and freedom are twinborn characteristics of capitalism is rooted in the conditions of life before the advent of modern systems of communication and transportation, let alone global digital networks, the Internet, or the ubiquitous architectures of Big Other. Until the last few moments of the human story, life was necessarily local, and the “whole” was necessarily invisible to the “part.”

Adam Smith’s famous metaphor of the “invisible hand” drew on these enduring realities of human life. Each individual, Smith reasoned, employs his capital locally in pursuit of immediate comforts and necessities. Each one attends to “his own security … his own gain … led by an invisible hand to promote an end which was no part of his intention.” That end is the efficient employ of capital in the broader market: the wealth of nations. The individual actions that produce efficient markets add up to a staggeringly complex pattern, a mystery that no one person or entity could hope to know or understand, let alone to direct: “The statesman, who should attempt to direct private people in what manner they ought to employ their capitals, would … assume an authority which could safely be trusted, not only to no single person, but to no council or senate whatever.”Footnote 80

The neoliberal economist Friedrich Hayek, whose work laid the foundation for the market-privileging economic policies of the past half century, drew the most basic tenets of his arguments from Smith’s assumptions about the whole and the part. “Adam Smith,” Hayek wrote, “was the first to perceive that we have stumbled upon methods of ordering human economic cooperation that exceed the limits of our knowledge and perception. His ‘invisible hand’ had perhaps better have been described as an invisible or unsurveyable pattern.”Footnote 81

In Hayek’s framing, the mystery of the market is that a great many people can behave effectively while remaining ignorant of the whole. Individuals not only can choose freely, but they must freely choose their own pursuits because there is no alternative, no source of total knowledge or conscious control to guide them. “Human design” is impossible, Hayek says, because the relevant information flows are “beyond the span of the control of any one mind.” The market dynamic makes it possible for people to operate in ignorance without “anyone having to tell them what to do.”Footnote 82

When it comes to surveillance capitalist operations, the classic quid pro quo of freedom for ignorance is shattered. The “market” is no longer invisible, certainly not in the way that Smith or Hayek imagined. The competitive struggle among surveillance capitalists produces the compulsion toward totality. Total information tends toward certainty and the promise of guaranteed outcomes. These operations mean that the supply and demand of human futures markets are rendered in infinite detail. Surveillance capitalism aims to replace mystery with certainty as it substitutes datafication, behavioral modification, and prediction for the old “unsurveyable pattern.”

The result is a fundamental reversal of the classic ideal of the “market” as intrinsically unknowable. Now the market is visible. As the head of Facebook’s data science team once reflected: “This is the first time the world has seen this scale and quality of data about human communication. For the first time, we have a microscope that … lets us examine social behavior at a very fine level that we’ve never been able to see before.”Footnote 83 A top Facebook engineer put it more succinctly: “We are trying to map out the graph of everything in the world and how it relates to each other.”Footnote 84 The same objectives are echoed in the other leading surveillance capitalist firms. As Google’s Eric Schmidt observed in 2010: “You give us more information about you, about your friends, and we can improve the quality of our searches. We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”Footnote 85 Microsoft’s Satya Nadella understands all physical and institutional spaces, people, and social relationships as indexable and searchable: all of it subject to machine reasoning, pattern recognition, prediction, preemption, interruption, and modification.Footnote 86

Although there is nothing unusual about the prospect of capitalist enterprises seeking every kind of knowledge advantage in a competitive marketplace, the surveillance capitalist capabilities that translate ignorance into knowledge are unprecedented because they rely on the one resource that distinguishes the surveillance capitalists from traditional utopianists: the financial and intellectual capital that permits the actual transformation of the world, materialized in the continuously expanding architectures of Big Other. More astonishing still is that surveillance capital derives from the dispossession of human experience, operationalized in its unilateral and pervasive programs of rendering private experience as computational data.

This new condition unravels the economic justification for the triumph of raw capitalism: its free markets, free-market actors, and self-regulating enterprises. It suggests that surveillance capitalists mastered the rhetoric and political genius of the neoliberal ideological defense while pursuing a novel logic of accumulation that belies the most fundamental postulates of the capitalist worldview. It’s not just that the cards have been reshuffled; the rules of the game have been transformed into something that is both unprecedented and unimaginable outside the digital milieu and the vast resources of wealth and scientific prowess that the surveillance capitalists bring to the table. Surveillance capitalism’s command and control of the division of learning in society is the signature feature that breaks with the old justifications of the invisible hand and its entitlements. The combination of knowledge and freedom works to accelerate the asymmetry of power between surveillance capitalists and the societies in which they operate. This cycle will be broken only when we acknowledge as citizens, as lawmakers, as societies, and indeed as a civilization that surveillance capitalists know too much to qualify for freedom.

The Success Defense

A third propaganda strategy is the argument that the financial success of the leading surveillance capitalist firms reflects the real value they bring to people. In this view, financial success is prima facie evidence that no laws are required. But demand-side data from a range of research conducted over the last decade and a half suggest a more disturbing picture. In forty-six of the forty-eight most prominent opinion surveys on the subject of privacy administered in the US and Europe between 2008 and 2016, substantial majorities support measures for enhanced privacy and user control over personal data. (Only two early surveys were somewhat less conclusive, because so many participants said they did not understand how or what personal information was being gathered.)

By 2008 it was well established that the more information people have about “Internet privacy practices,” the more they are “very concerned” about privacy.Footnote 87 A major 2009 survey found that when people were informed of the ways that companies gather data for targeted online ads, more than 73 percent rejected such advertising.Footnote 88 A substantial 2015 survey found 91 percent of respondents disagreed that the collection of personal information “without my knowing” was a fair trade-off for a price discount.Footnote 89 Fifty-five percent disagreed that it was even a fair exchange for improved services. By late 2019 the disjuncture was even more pronounced: An important survey from the Pew Research Center reported that 81 percent of Americans believe the potential risks of companies’ data collection outweigh the benefits, compared to 66 percent who felt that way about government data collection.Footnote 90 A similar Swedish study in 2020 found that 55 percent of Swedes were most concerned about data collection by private companies, compared to 11 percent concerned about government data collection.Footnote 91

The surveillance capitalist firms typically dismiss these results, pointing instead to users’ actual behavior and the spectacular revenues it produces as justification for the status quo. Recall former Google CEO Eric Schmidt’s infamous 2009 privacy brushoff: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”Footnote 92 Scholars have called the gap between attitudes and behavior “the privacy paradox,” but there really is no paradox here, only the predictable consequence of the pitched battle between supply and demand expressed in the difference between what surveillance capitalism imposes on people and what they really want.Footnote 93

The data suggest that surveillance capitalism is best understood as a market failure that would not survive a genuinely competitive commercial environment. Few people who get a glimpse of surveillance capitalism’s hidden operations actually want to be their target. Most want an alternative path to the digital future, one that will fulfill their needs without compromising privacy and usurping epistemic rights. This is one of those disjunctures in economic history, when nearly everyone wants something that they cannot have, just as early twentieth-century farmers and shopkeepers wanted automobiles, but at a price they could afford. Instead of a close alignment of supply and demand, people use surveillance capitalism’s services because they have no comparable alternatives and because they are ignorant of its shadow operations and their consequences. Corporate success is best understood as the result of coercion and obfuscation, practices that are only sustainable when they are conducted in secret.

Surveillance capitalism has thrived in the absence of law and regulation. Rather than mourn this state of affairs, we may regard the lack of prior action as a positive. Democracy has not failed to rein in this rogue capitalism; it has simply not yet tried. And further, democratic societies have successfully confronted destructive forms of raw capitalism in the past, asserting new laws that tethered capitalism to the needs of people and democratic values. Democracy moderated some of the excesses of early industrialization. It ended the Gilded Age. It mitigated the destruction of the Great Depression. It built a strong post-war society. It protected earth, creatures, water, air, consumers, and workers.

According to Lawrence Friedman’s history of American law in the twentieth century, the appetite for new law and regulation in the 1930s came from decades of anger, frustration, outrage, and helplessness at the growing scale and complexity of the industrial behemoths.Footnote 94 Only law was up to the task of tethering the giant industrial corporations to the needs of a democratic society. The swell of survey data, the gradual awakening to surveillance capitalism’s mass communication tactics, and the drumbeat of novel legislative and regulatory discussions and initiatives appear to point in a similar direction. The question remains: What is to be done? What kinds of new law and regulation are likely to be effective? Will it be comprehensive privacy legislation? Will it be an antitrust approach, as many counsel? McCraw’s warning suggests that we need new economic, legal, and collective action paradigms born of a close understanding of surveillance capitalism’s economic imperatives and foundational mechanisms.

Privacy and antitrust law are vital, but there is reason to believe that neither will be wholly adequate to this new challenge. An example is privacy law’s call for “data ownership” and related data rights. Such formulations legitimate the original sin that is the theft of human experience for rendition into data in the first instance. All discussions that begin with existing data flows serve to institutionalize that theft. Negotiating data ownership is like negotiating how many hours a day a seven-year-old should be allowed to work in a factory, rather than contesting the fundamental legitimacy of child labor. Data rights also fail to reckon with the realities of behavioral surplus. Even if “users” achieve “ownership” of the data that they provided to a company, they will not achieve “ownership” of the behavioral surplus that floods the shadow text, the predictions gleaned from it, or the fate of those predictions in markets that trade in human futures. Finally, data ownership is a recipe for a new epistemic underclass, in which economically disadvantaged individuals, families, and groups sell their data in the same way that one might sell one’s organs in an illicit market. The Google contractors who induced homeless people of color to “sell” their faces for $5 offered a portent of that bleak future.

The prospect of “breaking up” the largest surveillance capitalist firms also fails to reckon with the actual mechanisms of this economic logic. Surveillance capitalists achieve scale by cornering behavioral surplus supplies and driving up the value chain for more predictive forms of surplus. If there are monopolies here, they are monopolies of behavioral surplus supplies, of scientific labor, and of material infrastructures for predictive analytics. These features do not correspond neatly to conventional monopoly criteria, nor do they reflect the conventional categories of “consumer harms” that most antitrust laws are designed to combat.

It is necessary to rethink the meaning of “size” and “monopoly” when a company with relatively few employees but huge capital reserves can corner data flows from large domains of human experience (e.g., “search” or “social networking”) while simultaneously cornering the capabilities to compute those data flows. Assistant Attorney General Makan Delrahim offered an initial analysis of such distinctions in 2019, noting: “Broadly speaking, in some digital markets, the competition is for user attention or clicks. If we see the commercial dynamics of Internet search, for example, in terms of the Yellow Pages that were delivered to our doors a generation ago, we cannot properly assess practices and transactions that create, enhance, or entrench market power – and in some cases monopoly power.”Footnote 95 Breaking up the largest surveillance capitalists – Google, Facebook, Microsoft, and Amazon – can address important anti-competitive problems, but without measures tailored to the actual mechanisms of surveillance capitalism, it will not prevent the emergence of smaller and more efficient surveillance capitalist firms, while opening the field for new surveillance capitalist competitors.

The most efficient legislative and regulatory strategies would aim to disrupt the surveillance dividend by cutting off the raw material supplies and financial incentives that sustain it. In other words, it means legislative and regulatory strategies that interrupt and in some cases outlaw surveillance capitalism’s mechanisms of supply and demand. Such measures would create space for alternative citizen-based and commercial action, building ecosystems that realign with individual needs and democratic practice.

Supply

The relentless expansion of extractive supply chain operations is only likely to be constrained when the legal and regulatory focus shifts from data ownership and management to the originating processes of datafication: the secret theft of private human experience as free raw material for rendition into data. The boundary in dispute must move upstream, from contest over data as property to the codification of epistemic rights that link individual sovereignty to inalienable rights of self/knowledge and establish a rights-based moat around private experience.

This codification of epistemic rights to self/knowledge would interrupt data supply chains by safeguarding the boundaries of human experience before they come under assault from the forces of datafication. It assigns the choice to turn any aspect of one’s life into data as a right that adheres to individuals and communities by virtue of law sustained by democratic governance and as an extension of elemental rights now explicated and translated into juridical rights. This means, for example, that companies cannot claim the right to your face as you walk down the street, or use your face as free raw material for analysis, or fabricate, own, or sell any computational products that derive from your face and its depths of personal information. Such epistemic rights can be understood as the cornerstone of individual freedom under twenty-first-century conditions of existence.

The conversation on epistemic rights has already begun. In the absence of a comprehensive epistemic right to self/knowledge, for example, legal scholars, practitioners, and neural scientists have begun to frame epistemic rights claims to “freedom of thought.” They cite the sanctity of the “forum internum,” the interior space of human awareness and thought, and the need for codified individual rights that protect this domain of human experience from unwanted intrusion, theft, and manipulation.Footnote 96 Some of this work is a direct response to surveillance capitalism’s unrelenting drive to eliminate every barrier to the datafication of human experience, including recent breakthroughs in translating brain signals into speech.Footnote 97 Columbia University is home to a “neurorights initiative,”Footnote 98 the OECD issued formal recommendations for “responsible innovation in neurotechnology,”Footnote 99 and Amnesty International issued a path-breaking report on the human rights implications of the “surveillance business model.”Footnote 100 As contests over epistemic rights multiply, there is a strong likelihood that surveillance assets will be reinterpreted as toxic “fruit of the poisonous tree” that can only be acquired at the price of fundamental epistemic rights.

Demand

The opportunity on the demand side is to disrupt or eliminate the financial incentives that sustain the surveillance dividend. This can be accomplished with sanctions that outlaw the trade in human futures. This is not a radical prospect. For example, societies outlaw markets that trade in human organs, babies, and slaves. In each case, such markets are recognized as morally repugnant operations that produce predictably violent consequences and violate democratic principles. Human futures markets can be shown to produce equally predictable outcomes that challenge human freedoms, violate epistemic rights, and undermine democracy.

***

Consider the Aware Home, a collaboration between computer scientists and engineers in the year 2000, intended as a “living laboratory” for the study of “ubiquitous computing.”Footnote 101 The project envisioned a “human-home symbiosis” in which animate and inanimate processes would be captured by an elaborate network of “context aware sensors” embedded in the house and by wearable computers worn by the home’s occupants. The Aware Home information system was designed as a simple closed loop, controlled entirely by the home’s occupants. Because the house would be “constantly monitoring the occupants’ whereabouts and activities … even tracing its inhabitants’ medical conditions,” the engineers concluded, “there is a clear need to give the occupants knowledge and control of the distribution of this information.” All the information was to be stored on the occupants’ wearable computers “to insure the privacy of an individual’s information.”

In 2017 University of London scholars published a detailed analysis of a single “smart home” device, the Google-owned Nest thermostat.Footnote 102 They determined that the purchase of a single Nest thermostat entails the need to review nearly 1,000 so-called “contracts,” each with its own burdensome and audacious terms of service for third-party data sharing.Footnote 103 Should the customer refuse to agree to Nest’s stipulations, the Terms of Service indicate that the functionality and security of the thermostat may be deeply compromised, no longer supported by the necessary updates meant to ensure its reliability and safety. The consequences can range from frozen pipes to failed smoke alarms to an easily hacked internal home system.Footnote 104

Today we might mourn the innocence of the Aware Home but, like a message in a bottle from a bygone age, it tells us something important. Once we were the subjects of our lives; now we are their objects. We once looked to the digital future as a human future – a place we could call home. The Aware Home is testimony to what we have lost and what we can find again: the rights to know, to decide who knows, and to decide who decides who knows about our lives: individual epistemic sovereignty, law, and democracy. Such rights and principles have been and remain the only possible grounds for human freedom, a functional democratic society, and an information civilization founded on equality and justice.

9 The Siren Song: Algorithmic Governance by Blockchain

Kevin Werbach

A mysterious new technology emerges … its effects become profound; and later, many people wonder why its powerful promise wasn’t more obvious from the start. What technology am I talking about? Personal computers in 1975, the Internet in 1993, and – I believe – Bitcoin in 2014.Footnote 1

[L]et them bind thee in the swift ship, hand and foot, upright in the mast-stead, and from the mast let rope-ends be tied, that with delight thou mayest hear the voice of the Sirens. And if thou shalt beseech thy company and bid them to loose thee, then let them bind thee with yet more bonds.Footnote 2

A central theme in internet history since the 1990s is the rise of algorithmic power, enabled through the self-restraint of human governments.Footnote 3 Digital platforms were born weak and clumsy. Governments could have stamped them out to enforce traditional territorial boundaries and regulatory categories. They chose not to.Footnote 4 Once the digital tornado was unleashed, however, its path was not easily directed. Fledgling innovators in need of protection developed into dominant platforms that transformed many aspects of the world for the better, but also created serious harms through pervasive data collection and automated decision-making. The threats arose from the very attributes that made these digital systems so appealing.

The cycle is repeating itself. Another broad-based technological shift promises huge gains in both efficiency and freedom by replacing established points of control with open decentralized mechanisms. Startups spin visions of overwhelming established industries and surmounting government-established controls. And once again, a great challenge is how to restrain their own penchant for algorithmic overreach.

This time, the candidate technology is blockchain, and the broader phenomenon of “distributed ledger” systems.Footnote 5 Blockchain technology is still relatively immature. There is significant uncertainty about how it will develop in the future, and whether it will achieve anything like its promised level of impact. Already, however, blockchain and its related phenomenon, cryptocurrencies, have captured the imagination of technologists, entrepreneurs, business executives, and governments around the world. The driver for this activity is the belief that blockchain can foster an “internet of value”Footnote 6 – a new internet that overcomes the intermediation and centralized control that are increasingly prominent in the current digital environment.Footnote 7

The Next Wave?

Like the internet, blockchain and cryptocurrencies are stimulating dramatic levels of investment, startup activity, and media attention, as well as creating massive disruption of industries and passionate visions of societal transformation.Footnote 8 As with the internet, this excitement often gets ahead of reality. The internet economy recovered from the dotcom crash of the early 2000s to realize its potential through the growth of social media, cloud computing, and mobile connectivity. The crypto economy seems likely to experience a similar trajectory over time. To succeed at scale, however, blockchain-based networks and services will need to address the problem of governance. Immutability, the mechanism that allows these systems to generate trust without central authorities, also creates inherent weaknesses that sometimes turn into catastrophic failures.

The Blockchain Phenomenon

For centuries, ledgers have been the foundation for the accounting and record-keeping around which societies are organized.Footnote 9 However, they have always been centralized: controlled by one or more entities with power over the recording and approval of transactions. Even when there are multiple copies of information, one must either be designated as the master or there must be a reconciliation process to redress any inconsistencies. Blockchain offers a decentralized alternative. Each party to a transaction can control its own information, while still trusting the information it sees from others.

Someone, or a group of people, using the pseudonym Satoshi Nakamoto kicked off the blockchain phenomenon on October 31, 2008 with the distribution on an internet mailing list of a short whitepaper titled Bitcoin: A Peer-to-Peer Electronic Cash System.Footnote 10 As extraordinary a breakthrough as it represented, there were virtually no technical advances in the paper. Instead, Nakamoto cleverly combined concepts from several streams of academic research and hobbyist tinkering, and then applied them to create the first workable form of private digital cash.Footnote 11 The Bitcoin network, based on voluntary participation and open-source software, launched in January 2009. Other cryptocurrencies followed. Many added additional functionality and expanded the technology beyond financial applications. A blockchain ledger can reliably record anything. Even more exciting, the ledger can function as a global distributed computer, which operates reliably without anyone in charge. Blockchain technology thus promises to eliminate inefficient intermediaries and overcome interorganizational trust gaps in an extraordinary range of contexts, from supply chain management to digital collectibles to the internet of things to property transfers.Footnote 12

Although designed for functions such as payments and decentralized software applications, cryptocurrencies have so far found their most active use in speculative trading as a financial asset class. The price of bitcoin fluctuated for several years and then skyrocketed during 2017. At its peak in December 2017, the aggregate value of bitcoin in circulation exceeded $200 billion, and the overall cryptocurrency market was more than triple that.Footnote 13 Thousands of startups around the world began developing blockchain-based technologies, many of them issuing digital “tokens” to fund their networks. Most of the world’s major financial services and industrial firms began to explore potential applications, and virtually all of the leading enterprise information technology services vendors developed substantial blockchain practices.Footnote 14

For those who lived through the dotcom bubble of the late 1990s, the parallels are striking. Projects with little more than a whitepaper raised tens of millions of dollars from investors around the world. Companies saw their value skyrocket overnight, without any real customer adoption. Experts talked of a new economy in which old metrics were no longer useful, and established industry leaders were soon swept away. And, as with the dotcom bubble of 1998–99, the 2017 cryptocurrency bubble was quickly followed by a brutal “crypto winter,” in which prices plummeted and many projects were abandoned.Footnote 15

Despite overexuberant claims and widespread illicit activity, however, blockchain technology itself, like the internet, is no fraud. It represents an immature but foundational development whose impacts will unfold over time. Where the internet lowered costs of transferring information, blockchain lowers costs of transferring value.Footnote 16 The impacts of this shift will be broad. Secure value exchange is not just a property of banking and payments; it is a basic building block of markets and society. Standing behind the money and security is a deeper property of trust.Footnote 17

Blockchain as a Trust-Based Technology

Blockchain is fundamentally a trust-based technology.Footnote 18 Although Bitcoin relies on blockchain architecture as its foundation for digital currency, blockchain technology itself has been applied to a broad range of other applications. The unifying attribute of these applications is that they require a network of participants to preserve the integrity of shared information. If digital assets on the network cannot be trusted, for example, they are of little value. The distinctive attribute of the blockchain approach is that it expands trust in the system as a whole by minimizing trust in specific authorities or intermediaries that may prove fallible.Footnote 19 Investor and LinkedIn co-founder Reid Hoffman cleverly calls this “trustless trust.”Footnote 20 The key technical arrangement is known as consensus: All participants must converge on, and receive verifiable assurances of, the exact state of the network, without any enforceable formal agreements.

Bitcoin, for example, uses a system called proof of work to avoid the need to trust a bank or intermediary to verify payments. It establishes a competition every ten minutes to validate chunks of transactions (referred to as blocks) and earn a reward (in bitcoin). The winner is effectively selected at random; however, the amount of computer processing power each Bitcoin validator, known as a “miner,” brings to bear increases its likelihood of winning. Bitcoin miners therefore spend tens of millions of dollars per day on hardware and electricity to increase their chances. The purpose of the proof of work system is twofold: to incentivize participation (on the part of the miners) and to constrain behavior (on the part of anyone who might undermine the integrity of the system). It also enhances the security of the system as a whole: An attacker must compete against the computational power of the rest of the network combined.
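To make the mining competition concrete, here is a minimal Python sketch of a proof-of-work puzzle. It is a toy under stated assumptions: the `mine` function, the string encoding of the block, and the leading-zeros difficulty rule are invented simplifications of Bitcoin’s actual block format and target arithmetic.

```python
import hashlib

def mine(block_data, difficulty=4):
    # Toy proof of work: search for a nonce whose SHA-256 digest begins
    # with `difficulty` hex zeros. Each extra zero multiplies the expected
    # search effort by 16, which is how difficulty constrains attackers.
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice pays bob 1 BTC")
print(nonce, digest)  # finding the nonce is costly; checking it takes one hash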
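The asymmetry in the final comment is the point of the design: solving the puzzle requires brute-force search, while any node can verify a claimed solution with a single hash.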

Thus, even if any participant in Bitcoin’s proof of work system is selfishly motivated to steal from the network, none has the power to do so. Moreover, the network is “censorship-resistant,” meaning that no transaction can easily be altered or removed. There is no master control point on which everything depends. Anyone around the world can become a Bitcoin node by running some open-source software, and the network functions as long as there is enough mining activity to guarantee security.

Bitcoin’s proof of work system is the most well-established blockchain consensus mechanism. Since the network launched in 2009, no one has successfully undermined it to alter the transaction ledger or spend the same coin twice.Footnote 21 However, it is not the only possible approach. Bitcoin’s success sparked an explosion of research and experimentation with approaches making different fundamental tradeoffs among scalability, security, and decentralization. Other prominent blockchain networks include Ethereum, Ripple, EOS, Dash, Monero, and ZCash. There is also ongoing work to address the inherent scalability and functionality limitations in Bitcoin’s design. And in recent years, enterprises and governments have begun to implement permissioned blockchain networks that, unlike Bitcoin, are limited to authorized participants.Footnote 22

The other important innovation of blockchain systems is the smart contract.Footnote 23 Smart contracts are securely self-executing software code that runs on a blockchain network. Essentially, smart contracts allow a blockchain application to function as a parallel distributed computer, in which every machine running the application provably does so in exactly the same way. Smart contracts are the foundation of much of blockchain technology’s functionality. They are broader than legal contracts, in that they can – within limits of performance scalability – encode anything that can be written into a computer program. From a conceptual and doctrinal perspective, however, they are simply contracts.Footnote 24 They allocate rights and responsibilities among parties who voluntarily bind themselves into enforceable commitments. Contracts are a powerful means of generating trust because they backstop voluntary human commitments with formalized legal enforcement embodying the power of the state. Smart contracts are designed to offer a similar kind of confidence backed by the integrity of the blockchain ledger. Which is to say, blockchain is a legal or regulatory technology.Footnote 25 It is a method of governance.Footnote 26
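As a conceptual sketch only – production smart contracts are written in on-chain languages such as Solidity and replicated across every node, and the `Escrow` class and its method names below are invented for illustration – the self-executing quality can be suggested in Python:

```python
from dataclasses import dataclass

@dataclass
class Escrow:
    # Conceptual sketch: a contract as a deterministic state machine whose
    # rules, once deployed, no single party can change.
    buyer: str
    seller: str
    amount: int
    delivered: bool = False
    released: bool = False

    def confirm_delivery(self, caller):
        if caller != self.buyer:
            raise PermissionError("only the buyer can confirm delivery")
        self.delivered = True

    def release_funds(self):
        # The code, not a court or an intermediary, enforces the condition.
        if not self.delivered:
            raise RuntimeError("funds locked until delivery is confirmed")
        self.released = True
        return f"{self.amount} paid to {self.seller}"

contract = Escrow(buyer="alice", seller="bob", amount=100)
contract.confirm_delivery("alice")
print(contract.release_funds())  # "100 paid to bob"
```

The point of the sketch is that the release condition is enforced by code rather than by legal process; on a real blockchain, the recorded state of such a program would also be protected by the consensus mechanism described above.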

However, to the extent blockchain is a governance technology, it is immature, without the flexibility or capacity to correct for errors or unforeseen situations. In order to garner broader trust and move past its current limited applications, blockchain governance must become more robust.

Lashed to the Mast: The Two Sides of Immutability

In Homer’s The Odyssey, the hero Odysseus encounters sirens, mermaids who lure sailors to their deaths with their enchanting song.Footnote 27 Odysseus is curious about the content of their songs, but he knows that if he hears them, he will not be able to resist plunging into the ocean. So he orders his men to lash him to the mast of his ship. He further orders them to fill their ears with wax, so that if he later urges them to untie him, they will not hear his pleas. Odysseus thus empowers himself to hear the music that no mortal man can survive. He does so, ironically, by radically disempowering himself and his sailors at the critical moment.

The same strategy lies at the heart of the blockchain’s capability to decentralize trust. In the blockchain context, this strategy is known as immutability. Immutability is a great strength of blockchain-based governance systems, but also potentially a catastrophic weakness.

Blockchain Immutability

Immutability on a blockchain means that once a transaction has been incorporated into a validated block and added to the ledger, it cannot be altered.Footnote 28 This kind of guarantee is quite difficult to achieve in digital systems, whose records are naturally ephemeral and encoded in the universal language of binary ones and zeros. In the words of computer scientist and smart contracts pioneer Nick Szabo: “Typical computers are computational etch-a-sketch, while blockchains are computational amber.”Footnote 29 Blockchain systems enforce immutability by making every piece of information reflect the consensus agreement of a network of computers. Changing even the smallest fact means convincing a large percentage of the network to reconsider its settled transaction history. The algorithms and cryptography of the consensus system are designed to make that exceedingly difficult.
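A simplified sketch can show why tampering is detectable. Real blocks commit to far more than this (Merkle roots of transactions, timestamps, nonces), but the chaining principle is the same; the function and the transaction strings below are illustrative only.

```python
import hashlib

def block_hash(prev_hash, data):
    # Each block's hash commits to its predecessor, chaining the history.
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

# Build a three-block chain.
h0 = block_hash("genesis", "alice pays bob 5")
h1 = block_hash(h0, "bob pays carol 2")
h2 = block_hash(h1, "carol pays dan 1")

# Tampering with the first block changes its hash, which changes every
# descendant hash, so the forgery is immediately detectable.
h0_forged = block_hash("genesis", "alice pays bob 500")
assert h0_forged != h0
assert block_hash(h0_forged, "bob pays carol 2") != h1
```

Because every block’s hash incorporates its predecessor’s, rewriting an old record forces an attacker to redo the work for that block and every block after it, in competition with the honest network.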

From an internet policy perspective, immutability seems to put things backwards. The internet regulation debate is fundamentally about freedom. Decentralized global networks make it easier for people to engage in conduct that some would like to prevent, whether that involves dissidents challenging authoritarian regimes or consumers accessing media they didn’t pay for. As only became clear over time, those networks also concentrate power in digital platforms whose freedom of action is difficult to shackle under conventional mechanisms of antitrust, contract, or privacy protection. Governments responded to the first concern through a variety of mechanisms; their ability to put the platform power and surveillance capitalism genies back in the bottle is yet to be seen.

Like the internet, blockchain systems are often described as technologies of freedom, but in their core functioning they are just the opposite. What makes a blockchain trustworthy is precisely that it restricts freedom to diverge from the consensus state of the ledger. This characteristic is important for security. Transactions involving scarce or valuable assets would not be trustworthy if someone could easily alter the ledger. Beyond that, however, immutability is blockchain’s most significant contribution to governance. It is also the property that creates the most significant risks of catastrophic failure.Footnote 30

Immutability poses a novel set of legal and regulatory challenges. For the most part, cyberlaw is concerned with the plasticity of digital systems. Software can be coded to arbitrage around legal rules. Information can be combined and analyzed to create challenges not present at the outset, such as data aggregation to undermine privacy protections. The challenge has been to tie down actors and systems to particular jurisdictions or classifications. Immutability creates a different problem. The illegitimacy or harm of certain actions may be well-established, but no one may have the ability to do anything about it.

Immutability as a Means of Trust

Immutability is essential to blockchain technology in several ways. It is a proxy for the basic security of the network. If you know that information you see on a blockchain is immutable, you can rely on it. Even more significant, immutability is implicit in blockchain’s approach to trust. If any actor had the power to change the ledger retrospectively, everyone else would need to trust that actor not to do so in secret or illegitimate ways. This is true whether the empowered entity is a thief, a validator, an intermediary, or a government. A blockchain network must be immutable to be censorship-resistant, because a censor is a government agent that demands changes to the information recorded. Thus, the decentralized model of blockchain trust depends on immutability.

Satoshi Nakamoto emphasized this point in the original Bitcoin whitepaper. In the centralized financial system, he or she or they pointed out: “[C]ompletely non-reversible transactions are not really possible, since financial institutions cannot avoid mediating disputes.”Footnote 31 This need for dispute resolution puts power in the hands of governments and intermediaries. And thus, as Nakamoto continued: “With the possibility of reversal, the need for trust [in particular entities] spreads.”Footnote 32 In order to separate generalized trust in transactions from trust in specific fallible actors, Bitcoin had to ensure that records on the ledger could not be reversed.

Immutability is not a precise concept.Footnote 33 In particular, it does not mean changing the ledger is categorically precluded.Footnote 34 For Bitcoin and similar blockchain networks, immutability is a statistical property. The more time that has passed since a block was validated, the less likely it is to be altered.Footnote 35 However, the integrity of the network can never be established absolutely; there is always some minuscule possibility that an attacker has successfully altered the chain.Footnote 36 Some other blockchain systems provide for “finality,” which after a certain time prohibits changes to a validated block.Footnote 37 Even then, however, the ledger is not truly immutable.Footnote 38 And public blockchains are always potentially vulnerable to “51% attacks” if someone can obtain a majority of the total validating power in the network.Footnote 39 There has never been a successful 51 percent attack against Bitcoin, but there have been several against less-valuable cryptocurrencies.Footnote 40
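The statistical character of immutability can be quantified. The sketch below implements the simplified gambler’s-ruin result from the calculations in the Bitcoin whitepaper; the fuller analysis there also models the attacker’s Poisson-distributed progress, which this omits.

```python
def catch_up_probability(q, z):
    # Probability that an attacker controlling fraction q of the mining
    # power ever overtakes the honest chain from z blocks behind:
    # 1 if q >= 1/2, otherwise (q/p)**z, per the whitepaper's calculations.
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

for z in (1, 6, 12):
    print(z, catch_up_probability(0.10, z))
# With 10% of the mining power: ~0.111 at 1 block deep, ~1.9e-6 at 6,
# ~3.5e-12 at 12 -- old blocks become astronomically hard to rewrite.
```

This is why recipients of large payments conventionally wait for several confirmations: each additional block drives the probability of reversal toward zero without ever making it exactly zero.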

There are also situations in which changing the status of validated blocks may be desirable. Because blockchain networks are decentralized, every node can independently propose a new block to add to the existing chain. The consensus process is designed to ensure that the network continually converges to a single valid chain. When some percentage of nodes on a blockchain network choose to follow a different path than the rest of the network, it is called a fork.Footnote 41 This may occur for mundane reasons. For example, developers may upgrade a network’s software with new features that are not backward-compatible with the earlier version. Those nodes running the non-upgraded software will remain on a different blockchain from everyone else, although if all goes well, that chain will quickly die out. Sometimes a fork is necessary to fix problems with the network, as when denial-of-service attacks were grinding the Ethereum network to a halt in late 2016.Footnote 42 A successful fork, however, can effectively reverse or alter prior transactions, thus undermining immutability.
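In a simplified form – real networks compare cumulative proof of work rather than raw chain length, and the `resolve_fork` function below is an illustrative stand-in – the convergence rule that lets most forks die out looks like this:

```python
def resolve_fork(chain_a, chain_b):
    # Simplified fork-choice rule: follow the longer chain. Bitcoin nodes
    # actually follow the chain with the most cumulative proof of work,
    # but the convergence effect is the same.
    return chain_a if len(chain_a) >= len(chain_b) else chain_b

upgraded = ["b1", "b2", "b3", "b4"]    # chain extended by upgraded nodes
legacy = ["b1", "b2", "b3x"]           # minority branch from old software
print(resolve_fork(upgraded, legacy))  # the network converges on the longer chain
```

A deliberate fork succeeds only when enough participants keep extending the minority branch that it survives this rule, which is why forks are as much a social decision as a technical one.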

The imperfection of blockchain immutability corresponds to the imperfection of trust. Trust is not the same as certainty. No one would say they trusted that 2 + 2 = 4, or that a heavy object dropped from a height will fall toward the ground. Neither unshakable confidence in an outcome nor a rational calculus that drives reliance is equivalent to trust. Trust is a human quality, and as such, it requires some modicum of vulnerability.Footnote 43 It is the willingness to commit even though there is some residual risk in doing so. What makes trust valuable is that it goes beyond certainty. A trustworthy counterparty allows one to dispense with cumbersome verification or self-help enforcement, greatly enhancing the scope and efficiency of transactions.Footnote 44

Trustworthy systems must therefore balance the necessary confidence to inspire action with the acknowledgement of imperfection. A thick bank vault may nonetheless be cracked, just as blockchain immutability may in some circumstances be undermined. Moreover, trust expands with experience or relationships. A system that is foolproof on paper may not be in practice. The design of Bitcoin published in 2008 convinced a small number of early adopters, but it was only after years of secure operation that more mainstream users were willing to trust their money to the seemingly strange decentralized system. Just as validated blocks on a public blockchain become more trustworthy over time, the entire blockchain becomes more trustworthy with successful experience.

Immutability as Commitment Device

Another way to think of blockchain immutability is as a kind of commitment device. Economists define a commitment device as "an arrangement entered into by an agent who restricts his or her future choice set by making certain choices more expensive, perhaps infinitely expensive."Footnote 45 Commitment devices bridge the gap between our present and future selves. Odysseus rationally knew ahead of time that he should not heed the call of the sirens. However, he also realized that, in that nonrational moment, he would be powerless to resist. So he prospectively deprived himself not only of the capability to act, but also of the capability to belay his earlier order to his crew.

The need for commitment devices is not limited to mythical sirens. It comes up virtually any time we envision our future selves. Many of us have an easier time resisting the prospect of ice cream tomorrow than the Ben & Jerry's in front of us right now. In addition, behavioral economists have identified several cognitive biases that make actual behavior in the future diverge from rational expectations in the present.Footnote 46 Most notably, people tend to discount benefits hyperbolically. According to textbooks, the net present value of a future benefit declines exponentially over time at the relevant discount rate. In practice, most people overvalue near-term benefits and strongly undervalue those arriving far in the future.Footnote 47 Just as they have a hard time imagining the beneficial results of compound interest, they fail to properly appreciate even large far-off gains.
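The divergence between the textbook and behavioral patterns is easy to see side by side. In the sketch below, the discount and hyperbolic-decay rates are arbitrary illustrative values; what matters is the shape of the curves, not the numbers.

```python
def exponential_discount(t: float, r: float = 0.05) -> float:
    """Textbook model: present value falls geometrically with delay t."""
    return 1 / (1 + r) ** t

def hyperbolic_discount(t: float, k: float = 0.5) -> float:
    """Behavioral pattern: steep discounting up close, nearly flat far out."""
    return 1 / (1 + k * t)

for t in (0, 1, 5, 30):
    print(t, round(exponential_discount(t), 3), round(hyperbolic_discount(t), 3))
```

The hyperbolic curve collapses between today and tomorrow but barely distinguishes year twenty from year thirty, which is precisely the myopia a commitment device is built to counteract.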

A commitment device allows us to bind our future selves to our present rational calculus. Yale University economists Gharad Bryan, Dean Karlan, and Scott Nelson give a simple example of a runner about to embark on a ten-mile training session.Footnote 48 She wants to run the whole way, but she knows that at some point she will become tired and likely slow to a walk. So she signs a contract agreeing to pay a friend $1,000 if she fails to run the whole way. The committed payment makes the walking option considerably less desirable.

Those who commit transactions to a blockchain do so with the knowledge that they are not easily reversible. As Satoshi Nakamoto explained in the Bitcoin whitepaper, they are choosing nonreversibility ahead of time to avoid the trust-inducing processes of mediation and dispute resolution that will seem appealing in the future. Their commitment is necessary if the blockchain itself is to be trusted.

Credible commitments are essential to any bargaining relationship.Footnote 49 If I tell you I won't pay more than $100, your willingness to agree to my terms depends on your assessment of my credibility. Contractual arrangements, in particular, depend on the ability of the parties to convince one another that their commitments are credible. If you do not believe I will deliver the products you are paying for, you will not enter into such a contract with me. The game theorist Thomas Schelling, whose account economist Oliver Williamson later elaborated, analogized credible commitments to the exchange of hostages in primitive societies.Footnote 50 The hostages were part of the agreement process. Each side could be confident in the other's performance, because a party that breached would forfeit the hostages it had given. Such gruesome mechanisms seemed necessary in the absence of legal dispute resolution mechanisms. As Williamson explains, we no longer require human hostages because we assume "efficacious rules of law regarding contract disputes are in place and that these are applied by the courts in an informed, sophisticated, and low-cost way."Footnote 51

The philosopher Jon Elster, in Ulysses and the Sirens, points out that commitment devices turn the rational actor model of neoclassical economics against itself.Footnote 52 Credible commitments are necessary for the contractual process of market exchange; otherwise, counterparties would breach agreements at will. However, carrying out the threats that back those commitments often requires behaving in a way that would otherwise be irrational. At the later moment when parties have already made relationship-specific investments in a contract, for example, consenting to an unjustified reduction in price would be preferable to walking away entirely. In an extreme case, Thomas Schelling famously applied game theory to the doctrine of mutually assured destruction in the Cold War. The United States and the Soviet Union avoided nuclear war by committing themselves to retaliation that would end life on Earth in the event of an attack. Humans, Elster concludes, are "imperfectly rational creatures able to deal strategically with their own myopia."Footnote 53

Things Go Wrong

Serious problems emerge when the imperfect rationality implicit in credible commitments is implemented through the perfectly rational vehicle of computers executing smart contracts on a blockchain. The dark side to immutability is that even invalid or illegitimate transactions cannot easily be reversed. Immutability creates the potential for catastrophic failures with no clear means of remediation.

Three examples illustrate the problems with blockchain immutability: The DAO hack, the Parity wallet bug, and the abortive SegWit2x fork.

The DAO Hack (2016)

In June 2016, approximately $50 million in Ether cryptocurrency was extracted from the DAO, a decentralized crowdfunding application.Footnote 54 The DAO was a set of smart contracts on the Ethereum network that allowed individuals who purchased tokens to vote "yes" or "no" on financing a given project.Footnote 55 The more money a user put into the DAO, the more votes the user would receive, and correspondingly the greater the user's share of income from successful projects. The fund raised 11.5 million Ether through its initial crowd sale, worth approximately $150 million at the time and representing nearly 15 percent of all Ether in circulation.Footnote 56 Before it ever began funding projects, however, the DAO was undermined by a catastrophic hack.

Someone took advantage of a vulnerability in the smart contract code that governed a narrow component of the fund's payment structure.Footnote 57 By repeatedly executing the same withdrawal function before the contract could update its internal balances, the hacker was able to drain roughly one-third of the pool of committed investment currency into a private "child DAO." Thankfully, the system included a failsafe that prohibited fund withdrawals from the system for thirty days. During that period, the Ethereum Foundation produced a software upgrade that forked the entire blockchain to a new state in which the stolen funds were returned to their rightful owners.Footnote 58 However, the fork was controversial. It essentially broke the immutability of the blockchain in order to reverse the theft. Most members of the community considered this a worthwhile tradeoff. The price of Ether recovered from the uncertainty the DAO hack generated, and then climbed dramatically the following year. Many viewed the Ethereum Foundation's willingness to act as a comforting example of effective governance.Footnote 59
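The vulnerability followed what is now called a reentrancy pattern. The sketch below is a toy Python model of that pattern, not the DAO's actual Solidity code, and the account names and balances are invented: because the fund pays out before updating its ledger, a malicious recipient can call back into the withdrawal function and be paid repeatedly.

```python
class VulnerableFund:
    """Toy model of the DAO flaw: funds are sent *before* the ledger
    is updated, leaving a window for recursive withdrawals."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, user, receive):
        amount = self.balances.get(user, 0)
        if amount > 0:
            receive(amount)           # external call happens first...
            self.balances[user] = 0   # ...the ledger is zeroed too late

fund = VulnerableFund({"attacker": 100, "honest users": 200})
drained = []

def malicious_receive(amount):
    drained.append(amount)
    if sum(drained) < 300:            # re-enter until the pool is empty
        fund.withdraw("attacker", malicious_receive)

fund.withdraw("attacker", malicious_receive)
print(sum(drained))  # 300: triple the attacker's actual balance
```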

Others were not convinced. Immutability, they argued, was the essence of blockchain decentralization. If the Ethereum Foundation could convince most network nodes to roll back $50 million of transactions once, it could do so again. Perhaps the next time would be less clearly a case of theft. Perhaps it would be a controversial move that advantaged the Foundation’s leadership over the rest of the community. And given the disruption involved in implementing a hard fork, it made no sense to take this tack every time someone exploited a smart contract bug. Where was the line to determine when immutability should be broken? While these opponents were a minority and couldn’t prevent the hard fork, they could do something else. They started mining the other side of the fork, the chain in which the DAO funds were still in possession of the hacker.Footnote 60 This fork, labeled Ethereum Classic (ETC), continues today to exist in parallel to the main Ethereum (ETH) blockchain.Footnote 61

The ETC objection to the DAO fork centered around credible commitments. Why trust the blockchain if it can be forked whenever something goes wrong? A noncredible commitment is worth nothing, or worse. When financial institutions in the 2000s realized they were "too big to fail" and would be bailed out by the government if their bets failed to pay off, their appetite for risk grew to the unsustainable levels that precipitated the global financial crisis of 2008. When several Central and Eastern European governments experienced hyperinflation in the years after World War I, and increasingly vigorous monetary policy initiatives failed to stop it, they mandated convertibility of their currencies into gold.Footnote 62 The gold standard made it impossible to debase the currency too far. By the 1970s, when countries were more stable and central banks more sophisticated, the gold standard and its limiting tether to physical assets were no longer needed.

The ideal credible commitment is strong enough to promote the desired behavior, but weak enough to be overcome through appropriate mechanisms when absolutely necessary. The ad hoc nature of the response to the DAO hack, and the fact that most of those connected with the DAO were also associated with the Ethereum Foundation, created skepticism about the need to break immutability.

The Parity Wallet Bug (2017)

In November 2017, Parity Technologies, an Ethereum-based blockchain developer, suffered a critical security vulnerability that affected certain users of the company's wallet software for storing cryptocurrency.Footnote 63 An update introduced a bug that could have allowed a malicious user to take control of a large number of Parity's "multisignature" wallets. A user found the flaw and, allegedly to prevent theft, deleted the smart contract involved.Footnote 64 Unfortunately, this made it impossible for anyone to access the relevant wallets. As a result, more than $280 million worth of Ether was frozen.Footnote 65 While the Ether was still immutably recorded on the Ethereum blockchain, it was simply inaccessible. Like the DAO, Parity had close ties to the Ethereum Foundation. Gavin Wood, its CEO, was a co-founder and the chief technologist of Ethereum, and a large portion of the frozen Ether was associated with Parity's own token offering for a blockchain interoperability project called Polkadot. A hard fork to restore the trapped Ether would seem like a bailout for insiders. Other solutions met with similar skepticism.Footnote 66 As of summer 2018, the funds remained trapped.

Unlike the DAO hack, the Parity wallet bug had no villain.Footnote 67 The cryptocurrency was apparently rendered inaccessible by accident. Yet the impact was similar. Legitimate users who relied on the immutability of the blockchain lost their money as a consequence of that very immutability. There was no mechanism to alter undesirable transactions after the fact, even when a transaction – locking every user permanently out of their wallets – produced benefits for no one.

Parity wallet users had good reason to trust the firm's software with their cryptocurrency. Parity's leaders were highly respected technologists who were intimately involved in the creation of Ethereum. Gavin Wood, in fact, was the primary creator of the Solidity programming language used for Ethereum smart contracts. One would not expect his company to commit a relatively elementary Solidity coding error. And one would certainly not expect it to leave the flaw in place for months after being told about it.Footnote 68 Yet the reality is that individuals and companies are fallible. Trusting Parity was as reasonable as trusting the banks that imploded during the 2008 financial crisis. The difference was that, thanks to a combination of government-mandated insurance and operational mechanisms, no one would ever find their money "permanently stuck" in a bank's savings account with no recourse.

Trust is a double-edged sword. Users trust Parity because its software operates on an immutable blockchain. However, they don’t necessarily trust Parity enough to implement a hard fork to restore its frozen Ether. The second requires trust in specific human organizations, which is exactly what the blockchain’s immutability was designed to overcome.

The SegWit2x Battle (2017)

For a number of years, there has been a contentious technical debate among leading Bitcoin developers about how to scale the network. Bitcoin can process a theoretical maximum of roughly seven transactions per second, thousands of times fewer than centralized payment-processing systems can handle. As the price of Bitcoin rose and transaction activity increased, the network began to slow down even further. Some developers believed the solution was to change the protocol to increase the amount of data processed in each block. However, that would require a hard fork. It would represent the first substantial step away from the basic architecture that Satoshi Nakamoto outlined in 2008, which is the basis for the Bitcoin network's remarkable run of secure, uninterrupted operation. Other developers felt that different mechanisms could address the scalability challenge without changing the core protocol, or that rock-solid security was simply more important than handling more transactions.
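The seven-transaction ceiling follows from simple arithmetic over the protocol's parameters. The figures below are rough conventional estimates rather than exact protocol constants, but they show where the number comes from:

```python
block_size_bytes = 1_000_000  # Bitcoin's original 1 MB block-size cap
avg_tx_bytes = 250            # rough average size of one transaction
block_interval_s = 600        # ten-minute target spacing between blocks

tps = block_size_bytes / avg_tx_bytes / block_interval_s
print(round(tps, 1))          # roughly 6.7 transactions per second
```

Doubling the block size, as the 2x proposal contemplated, doubles this ceiling, which is why the block-size parameter became the focal point of the scaling debate.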

In spring 2017, a compromise was brokered among major Bitcoin-related companies to implement two competing scalability proposals.Footnote 69 The first, SegWit, could go into effect prior to a hard fork.Footnote 70 It provided a foundation for scaling Bitcoin without disturbing the core protocol. The second, referred to as 2x, was a doubling of the block size, to be implemented in a hard fork later in the year. The SegWit implementation proceeded smoothly. As the date for the 2x hard fork approached, however, controversy reemerged. Critics labeled the compromise, known as the New York Agreement, an illegitimate back-room deal and a corporate takeover of Bitcoin.Footnote 71 And it began to seem likely that, as with Ethereum Classic, some network nodes would continue mining the original, small block-size chain even after the fork. That led to speculation about which chain deserved to carry forward the "Bitcoin" name and its BTC ticker symbol on exchanges.Footnote 72 The hard fork was ultimately abandoned.Footnote 73

The SegWit2x battle, unlike the prior two examples, didn't deprive anyone of their cryptocurrency. It involved neither theft nor buggy code. Yet it provoked a similar sense of existential crisis over the essence of Bitcoin. Does immutability mean it must be next to impossible to change the basic properties of a blockchain network, in addition to the transaction records it stores? Removing human intervention from every commitment by means of a software-implemented commitment device seems well and good, but software is created by humans too, and humans can never fully anticipate the needs of the future. At some point, there will be a need to evolve the system if it is to remain trustworthy. Yet the upgrade process itself opens the Pandora's box that immutability was supposed to seal shut.

Be Careful About Your Commitments

Political theorist Kenneth Shepsle distinguishes two forms of commitment device: Motivational and imperative.Footnote 74 The first involves commitments that are incentive compatible. That is to say, at the time the device operates, the person involved rationally desires to comply. The second form of commitment device requires coercion, because otherwise the person involved would not follow through on the commitment. Blockchain systems employ both. Consensus systems like proof of work create economic incentives for accurate validation of the ledger. In cryptocurrency circles, this approach is known as cryptoeconomics.Footnote 75 The blockchain is immutable because the costs of breaking it exceed the returns. The immutability of smart contracts, by contrast, is imperative. The victims of the DAO hack or the Parity wallet bug were strongly incentivized to overturn the outputs of the smart contracts. They simply lacked the power to do so.
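The motivational form can be reduced to a back-of-the-envelope inequality: breaking immutability is deterred so long as the expected gain falls short of the attack's costs, including the damage a successful attack does to the attacker's own holdings. The figures in this sketch are invented purely for illustration.

```python
def attack_is_rational(expected_gain, hash_power_cost,
                       attacker_holdings, expected_price_drop):
    """Toy cryptoeconomic test: rewriting the ledger pays only if the
    gain exceeds the cost of majority hash power plus the loss on the
    attacker's own coins when the attack destroys confidence."""
    return expected_gain > hash_power_cost + attacker_holdings * expected_price_drop

# A $5M double-spend against $15M of hardware and energy costs, plus a
# 50% crash on $10M of holdings, is irrational: the ledger stays put.
print(attack_is_rational(5e6, 15e6, 10e6, 0.5))  # False
```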

If, instead of approaching the beautiful sirens, Odysseus saw his boat heading directly for dangerous rocks, his cries to his men to turn the rudder would be futile. His commitment device would be operating beyond its intended scope, leading to disaster. As the three examples described earlier illustrate, the same issue appears in the blockchain context. Smart contracts cannot necessarily distinguish the scenarios for which immutability was designed from those where it causes harm. There are two fundamental reasons. First, contracts of any consequence are generally incomplete; that is to say, they do not precisely specify outcomes for every possible scenario.Footnote 76 Second, smart contracts magnify this incompleteness: they can only express their terms in sharp-edged software code, eliminating the interpretive discretion of human judges and juries.Footnote 77

The strong immutability of blockchain systems therefore creates significant opportunities for dramatic failures that undermine trust rather than cementing it. As Shepsle concludes: “[W]e should … not be too precipitous in our admiration of commitment and our condemnation of discretion.”Footnote 78 To avoid causing significant harm, blockchain-based solutions must do more than enforce immutability; they must incorporate regimes of governance to temper its excesses.Footnote 79

Blockchain Governance by Design

Blockchain is a governance technology. Consensus algorithms shape how users of networks behave. Through affirmative incentives and cryptographically enforced limits on certain actions, these systems combat hostile conduct and promote cooperative behavior. They establish and enforce rules for “good order and workable arrangements,” which is how the Nobel Prize-winning economist Oliver Williamson defines governance.Footnote 80 Governance provides a framework for establishing accountability, roles, and decision-making authority in an organization.

Digital governance is not a new phenomenon.Footnote 81 Software code, as Lawrence Lessig famously declared and many others have elaborated since, can function as a kind of law, with its own affordances and limitations.Footnote 82 Software-based systems can serve as alternatives to the state, markets, firms, and relational contracting as means of governing relationships. Facebook's newsfeed algorithms, YouTube's ContentID system for digital rights management, and Uber's mobile application are examples of digital systems that constitute and shape communities. However, these communities are centralized. The operators of the network control the algorithms and adapt them to serve, ultimately, their own interests. Blockchain instead maintains the possibility of decentralized digital governance. By disempowering intermediaries and network operators, it promises both greater efficiency and greater fairness. Nick Szabo, one of the original developers of the idea of smart contracts, describes this property as social scalability.Footnote 83 A blockchain-based system can, it is claimed, avoid the human biases, imperfections, and inefficiencies that make it difficult for communities to scale without rigid hierarchy.Footnote 84

From Commitments to Institutions

Blockchain governance epitomizes a broader challenge in our increasingly connected and digitized world. There is a growing gap between rule definition and rule execution. The terms of a smart contract must be specified entirely ex ante. A conventional legal contract, by contrast, is subject to relational development, the potential for mutual modification, and ex post judicial dispute resolution.Footnote 85 The case for smart contract modification can be analogized to human intervention in artificial intelligence technology. Machine learning systems produce outputs based on statistical analysis that cannot easily be traced back to their inputs, opening the door for hidden biases to creep in.Footnote 86 To avoid this issue, there is a growing consensus that humans must remain in the loop to ensure the machines avoid bias and unforeseen outputs.Footnote 87 Blockchain-based systems need something similar. The hard problem is how to reincorporate humans without forfeiting the benefits of decentralization and automation that blockchain systems promote.

In the wake of the controversies of 2016–17, prominent new blockchain networks such as Tezos, Decred, and Dfinity touted their "on-chain" governance mechanisms.Footnote 88 With these systems, proposals, such as an increase in block size, can be decided by a vote of token holders, with one coin in the relevant cryptocurrency equal to one vote. The will of the majority is automatically implemented on the blockchain network.
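Mechanically, coin voting is a weighted tally. The following minimal sketch, with invented balances and a made-up vote, shows both how it works and why the questions that follow arise: stake, not headcount, decides, and abstaining stake simply drops out.

```python
def tally_onchain_vote(token_balances, votes):
    """Toy on-chain governance vote: one coin, one vote."""
    weight = {"yes": 0.0, "no": 0.0}
    for holder, choice in votes.items():
        weight[choice] += token_balances[holder]
    winner = max(weight, key=weight.get)
    return winner, weight

balances = {"whale": 600, "developer": 50, "user1": 20, "user2": 20}
votes = {"whale": "yes", "developer": "no", "user1": "no"}  # user2 abstains
print(tally_onchain_vote(balances, votes))  # ('yes', {'yes': 600.0, 'no': 70.0})
```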

On-chain governance is a promising area of experimentation, although it raises a host of questions.Footnote 89 For example, are those holding a majority of the cryptocurrency always the ones who should decide the fate of the network? Or what happens when, as in real-world elections, a substantial percentage of voters do not participate or lack full understanding of the issues? How might those with a vested interest manipulate the vote? Even if effective, however, on-chain governance systems are at best only one piece of the solution. Just as every possible scenario cannot be coded into smart contracts, every desirable governance action cannot be coded into a self-executing election. On-chain mechanisms cannot completely solve the problem of blockchain governance because they rely on the same immutability that generates it.

To address the governance gap, blockchain systems need credible commitments that are not absolute. This is a well-established concept. Structures that marry the security of credible commitments with the flexibility of human governance are known as institutions. The economic historian Douglass North, the great theorist of institutionalism, defined institutions as “humanly devised constraints that structure political, economic, and social interaction.”Footnote 90 Institutions are voluntarily adopted constraints; that is to say, they are commitment devices.Footnote 91 As North described, the development of both public and private institutions was the defining factor in the establishment of the complex global economy. Effective institutions fused the trustworthiness of family and community ties with the social scalability needed for modern society.

Most institutions, however, are centralized. A court system or a stock market can facilitate trustworthy transactions between strangers, but those strangers must accept their authority. Is this level of trust attainable within a decentralized network? The communities around blockchain networks can effectively govern, as when the Ethereum Foundation shepherded support for the hard fork that reverted the theft of funds from the DAO. The process was somewhat chaotic, but many different interests in the community had the opportunity to be heard, several alternatives were thoroughly vetted, and in the end, network nodes voted with their software whether to adopt the proposed hard fork.

However, this leads to a conundrum identified by Oxford economic sociologist Vili Lehdonvirta.Footnote 92 The theoretical problem with this practical success story is that it was a triumph of conventional governance. Respected leaders in the community debated solutions, took input, and converged on a response. As Lehdonvirta points out, this human-centric process contrasted with the vision of a decentralized, machine-centric blockchain. If trusted parties are going to make the rules anyway, he asks, who needs a blockchain? Lehdonvirta effectively rebuts the overheated claims that blockchain represents a "paradigm shift in the very idea of economic organization."Footnote 93 As incidents such as the DAO hack, the Parity wallet bug, and the SegWit2x battle illustrate, effective consensus on immutable distributed ledgers does not resolve the hard problems of governance. In some ways, it accentuates them.

Blockchain decentralization enthusiasts sound notes strikingly similar to those of the cyberlibertarians of the 1990s. As their poet laureate, Electronic Frontier Foundation co-founder John Perry Barlow, declared: "We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity."Footnote 94 In this new world of cyberspace, he continued, governments "have no moral right to rule us nor do [they] possess any methods of enforcement we have true reason to fear."Footnote 95 We know how that story turned out. The internet has indeed been a radically empowering force. Yet many are still "coerced into silence or conformity" by governments that have found ways to overcome the internet's decentralization (such as China's Great Firewall) and, surprisingly, by the privately operated platforms such as Facebook and Google that now dominate cyberspace and its communities.

If the blockchain economy is to replicate the successes of the internet while avoiding some of its failings, governance is critical. In fact, the scope of governance must be expanded beyond its traditional domains. Here again, a comparison with internet law and policy proves enlightening.

Pervasive Governance

The internet gave birth to what Shoshana Zuboff calls surveillance capitalism:Footnote 96 A global economy built increasingly on the collection, aggregation, analysis, and utilization of data related to the behaviors and intentions of individuals. "Privacy protection" online became an increasingly quaint response to the totalizing nature of information platforms. In response, privacy advocates turned to a correspondingly comprehensive approach known as privacy by design.Footnote 97

Privacy by design takes the position that privacy protections cannot simply be added on to technical systems. They must be built in from their inception.Footnote 98 In other words, privacy by design means more than just raising the bar for protection of personal information. As former Ontario, Canada, Information and Privacy Commissioner Ann Cavoukian explains: “Privacy must be incorporated into networked data systems and technologies, by default. Privacy must become integral to organizational priorities, project objectives, design processes, and planning operations. Privacy must be embedded into every standard, protocol and process that touches our lives.”Footnote 99 The implementation of this vision in legislation and business practice has left something to be desired, but the premise is sound.

Something similar, call it governance by design, should be incorporated into the development and oversight of blockchain-based systems.Footnote 100 Given the structure of blockchains, governance cannot be an afterthought. Neither can it be limited to formalized voting on changes to network algorithms. Voting structures insufficiently address the diversity of governance challenges that can arise, as highlighted by the three examples provided earlier.

In the blockchain context, governance by design means recognizing that perfect immutability creates systems with unacceptable fragility. They work well until they don't, and then they have no good means to recover. Advocates of strong immutability see an inherent tradeoff in which flexibility for human decision-making undermines decentralization.Footnote 101 However, if we want systems that can resolve unexpected problems smoothly, we must trust someone to resolve them.

Incorporating governance by design principles, rather than bolt-on governance functionalities, counters this tradeoff. As Cavoukian argues in the analogous context of privacy by design: “Privacy is often positioned in a zero-sum manner as having to compete with other legitimate interests, design objectives, and technical capabilities, in a given domain. Privacy by Design rejects taking such an approach – it embraces legitimate non-privacy objectives and accommodates them, in an innovative positive-sum manner.”Footnote 102 Governance by design can have a similar effect by incorporating governance as a baseline function at every level, not a “get out of jail free” override.

In her work on common-pool resource systems, Nobel Prize-winner Elinor Ostrom emphasizes that governance is polycentric and hierarchical.Footnote 103 Multiple governments, as well as private mechanisms, may shape the management of a resource or community. Ostrom describes three levels of rule: Operational, collective-choice, and constitutional-choice.Footnote 104 Operational governance addresses the day-to-day issues that directly affect a given system. Collective-choice governance determines two things: Who can take certain operational actions and who can change operational rules. Constitutional-choice governance determines who has the authority to change collective-choice rules. A system that works for mundane problems will not necessarily address unusual situations that require extraordinary override. And a system for addressing particular crises will fail to resolve fundamental disagreements about the direction of the community.

A starting point for thinking about governance by design in a blockchain context would be to recognize four hierarchical domains:

1. Consensus. Analogous to Ostrom's operational rules, the consensus algorithms of a blockchain network promote honest verification and agreement on the status of the ledger. In the normal mode of day-to-day operation, the dynamics of the consensus mechanism determine the attributes of the blockchain network. Discussions of blockchain technology as "governance by code" or a new "Lex Cryptographica"Footnote 105 generally focus on the consensus layer, which is where transactions are designed to be immutable.

2. Override. When immutability produces problematic results, as in the case of the DAO hack, override governance offers a means to reverse immutability by establishing decision-making power at the outset. This is analogous to the first sense of Ostrom's collective-choice rules, in that it defines who has decision-making power in such situations. The Ethereum community struggled in responding to the hack because it was not clear who should be part of the decision-making process, or how a consensus of decision-makers should be implemented.Footnote 106

3. Rule Change. Bitcoin's SegWit2x fight concerned a general property of the network: The size of blocks. As in the second sense of Ostrom's collective-choice rules, governance here requires a means of determining who sets policy for the network. In the SegWit2x case, groups in the community such as exchanges, miners, users, and core developers had differing views. There was no good mechanism to resolve those differences, because the network lacked sufficient governance structures and norms.

4. Community Governance. Ostrom's constitutional-choice layer is about who judges the judges: How the entities empowered to participate in governance and change the rules are constituted. This is often a blind spot in blockchain networks. For example, the launch of Tezos was delayed when the organization developing the software came into conflict with the foundation designed to oversee the network, after the project raised over $200 million in a token offering.Footnote 107 The irony that a system designed to automate rule-change governance struggled at community governance was not lost on observers.

This high-level framework is just a starting point for blockchain governance by design.Footnote 108 There will be many practical decisions to make in any network. While governance and decentralization are not fundamentally in conflict, there is room for different workable tradeoffs depending on the goals of the network and the culture of its community. The different ways the Bitcoin and Ethereum communities addressed the SegWit2x hard fork and the DAO hack, respectively, illustrate that both processes and norms play a role in resolving governance questions in decentralized systems.

The final important factor that Ostrom’s polycentric framing emphasizes is that private self-governance and public oversight through sovereign governments are not necessarily in conflict. Her classic study of common-pool resources, Governing the Commons, identifies several cases in which the state facilitated private ordering and the creation of community-based institutions.Footnote 109 The developers of blockchain networks often begin with a strong resistance to government involvement, just like the pioneers of the internet economy. However, as became clear in the development of the internet, governments can do much more than ban or allow technological innovation.Footnote 110 As just one example, the need for strong government-issued network neutrality rules became a rallying cry for advocates of the open internet, as a check on the power of broadband access providers.Footnote 111 There are similar calls today for the state to intervene in order to break the stranglehold of large digital platforms such as Google, Amazon, and Facebook.Footnote 112 We should not ignore the ways in which government might contribute to the health of the blockchain economy.

Conclusion

At this early stage in blockchain development, the adoption path of the technology is quite uncertain. Despite the spike in the price of cryptocurrencies, usage for payments, Bitcoin's original purpose, remains limited.Footnote 113 Many enterprise blockchain pilots built on specialized cryptocurrency models have failed to see the rapid adoption their boosters predicted.Footnote 114 However, blockchain technology itself will continue to see investment and development because it addresses fundamental challenges in organizational recordkeeping and the need for interorganizational trust.Footnote 115 Further, there are major applications of the approach, such as trading markets in cryptoassets, that seem poised for continued growth even if they do not disrupt traditional markets.Footnote 116 Nonetheless, it is far from certain that any blockchain network will achieve the scope and influence of Google, Facebook, Amazon, Tencent, and Alibaba, let alone realize the grand visions of societal disruption that boosters promulgate.

The importance of blockchain governance, however, does not depend on any particular story of blockchain adoption. Blockchain has proved to be a governance technology that seeks to balance on the knife edge of freedom and constraints. That challenge is as old as civilization. In working to overcome this challenge, we can learn from the ways that blockchain networks try – or don’t try – to resolve the implicit tensions of immutability. Both theory and practice must play a role. There is no shortcut to designing governance mechanisms, watching how they operate in practice, and iterating based on their shortcomings.

Appropriately, that is also the lesson of Odysseus’ encounter. Odysseus has himself tied to the mast so that he, alone, can hear the song of the sirens in safety. What do they sing that is so tempting? The sirens offer a shortcut to knowledge: “For lo, we know all things, all the travail that in wide Troy-land the Argives and Trojans bare by the gods’ designs, yea, and we know all that shall hereafter be upon the fruitful earth.”Footnote 117 The seductive appeal of the sirens is the promise of wisdom without experience, just as the seductive appeal of the blockchain is trust through cryptography and economic incentives without human governance. Believing too strongly in either leads to disaster. Finding the proper balance is the road to valuable insight.


36 Sundar Pichai, “Google’s Sundar Pichai: Privacy Should Not Be a Luxury Good,” New York Times, May 22, 2019, sec. Opinion, https://www.nytimes.com/2019/05/07/opinion/google-sundar-pichai-privacy.html.

37 Ginger Adams Otis and Nancy Dillon, “City Worker Saw Homeless People Lined Up to Get $5 Gift Card for Face Scan Uploaded to Google,” Nydailynews.com, October 3, 2019, https://www.nydailynews.com/news/national/ny-witness-saw-homeless-people-selling-face-scans-google-five-dollars-20191004-j6z2vonllnerpiuakt6wrp6l44-story.html.

38 Shirin Ghaffary, “Amazon Threatened to Fire Employees Who Spoke Out against Its Environmental Policies,” Vox, January 2, 2020, https://www.vox.com/recode/2020/1/2/21046886/amazon-climate-change-fired-activists-sustainability-walkout-pledge-carbon-emissions-activism; Benjamin Romano, “Amazon Rolls out New Devices amid Swirl of Privacy Questions.”

39 Biddle, “Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document.”

40 Dyani Sabin, “The Secret History of ‘Pokemon GO’ as Told by the Game’s Creator,” Inverse, February 28, 2017, https://www.inverse.com/article/28485-pokemon-go-secret-history-google-maps-ingress-john-hanke-updates.

41 For a full discussion of John Hanke, Pokémon Go and Niantic Labs, see Zuboff, The Age of Surveillance Capitalism, pp. 309–19.

42 Dylan D. Wagner and Todd F. Heatherton, “Self-Regulation and Its Failure: The Seven Deadly Threats to Self-Regulation,” in APA Handbook of Personality and Social Psychology (Washington, DC: American Psychological Association, 2015), pp. 805–42, https://pdfs.semanticscholar.org/2e62/15047e3a296184c3698f3553255ffabd46c7.pdf; William M. Kelley, Dylan D. Wagner, and Todd F. Heatherton, “In Search of a Human Self-Regulation System,” Annual Review of Neuroscience 38, no. 1 (2015): 389–411, https://doi.org/10.1146/annurev-neuro-071013-014243.

43 David Modic and Ross J. Anderson, “We Will Make You Like Our Research: The Development of a Susceptibility-to-Persuasion Scale,” SSRN (Rochester, NY: Social Science Research Network, April 28, 2014), https://papers.ssrn.com/abstract=2446971; Mahesh Gopinath and Prashanth U. Nyer, “The Influence of Public Commitment on the Attitude Change Process: The Effects of Attitude Certainty, PFC and SNI,” SSRN, 2007, https://doi.org/10.2139/ssrn.1010562.

44 For a full account of the Cambridge Analytica story, see Zuboff, The Age of Surveillance Capitalism, pp. 278–82, 482–3.

45 Zuboff, The Age of Surveillance Capitalism, pp. 272–78.

46 Andy Kroll, “Cloak and Data: The Real Story behind Cambridge Analytica’s Rise and Fall,” Mother Jones (blog), March 2018, https://www.motherjones.com/politics/2018/03/cloak-and-data-cambridge-analytica-robert-mercer/.

47 Kroll.

48 Carole Cadwalladr, “‘I Made Steve Bannon’s Psychological Warfare Tool’: Meet the Data War Whistleblower,” Guardian, March 18, 2018, sec. News, http://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump.

49 Kroll, “Cloak and Data.”

50 Frederik Zuiderveen Borgesius et al., “Online Political Microtargeting: Promises and Threats for Democracy,” SSRN (Rochester, NY: Social Science Research Network, February 9, 2018), https://papers.ssrn.com/abstract=3128787.

51 Martin C. Libicki, “The Convergence of Information Warfare,” Strategic Studies Quarterly, 2017: 49–65; for other relevant discussions, see Gary P. Corn and Robert Taylor, “Sovereignty in the Age of Cyber,” AJIL Unbound 111 (2017): 207–12, https://doi.org/10.1017/aju.2017.57; Duncan Hollis, “The Influence of War; The War for Influence,” Temple International & Comparative Law Journal 32, no. 1 (2018): 31–46; Herbert Lin, “The Existential Threat from Cyber-Enabled Information Warfare,” Bulletin of the Atomic Scientists 75, no. 4 (July 4, 2019): 187–96, https://doi.org/10.1080/00963402.2019.1629574.

52 Martin C. Libicki, “The Convergence of Information Warfare,” 51.

53 Libicki, 63.

54 Libicki, 51–52.

55 Libicki, 53–54.

56 Libicki, 55–56.

57 For a detailed discussion, see Zuboff, The Age of Surveillance Capitalism, pp. 504–12.

58 Emile Durkheim, The Division of Labor in Society (New York, NY: Free Press, 1964), p. 41.

59 Durkheim, pp. 60–61.

60 Zuboff, In the Age of the Smart Machine.

61 Hilbert (2012).

62 Viktor Mayer-Schönberger and Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work, and Think (Boston, MA: Houghton Mifflin Harcourt, 2013), p. 9.

63 Harvard legal scholar John Palfrey observed the “read only” nature of electronic surveillance in his wonderful 2008 essay, John Palfrey, “The Public and the Private at the United States Border with Cyberspace,” Mississippi Law Journal 78 (2008): 241–94, see especially p. 249.

64 Frank Pasquale, The Black Box Society (Cambridge, MA: Harvard University Press, 2015), pp. 60–61.

65 Spiros Simitis, “Reviewing Privacy in an Information Society,” University of Pennsylvania Law Review 135, no. 3 (1987): 710, https://doi.org/10.2307/3312079.

66 Paul M. Schwartz, “The Computer in German and American Constitutional Law: Towards an American Right of Informational Self-Determination,” American Journal of Comparative Law 37 (1989): 676.

67 Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (New York, NY: Basic Books, 2015), pp. 12–13.

68 Cade Metz, “Why A.I. Researchers at Google Got Desks Next to the Boss,” New York Times, February 19, 2018, sec. Technology, https://www.nytimes.com/2018/02/19/technology/ai-researchers-desks-boss.html.

69 Thomas K. McCraw, Prophets of Regulation: Charles Francis Adams; Louis D. Brandeis; James M. Landis; Alfred E. Kahn (Cambridge, MA: Belknap Press: An Imprint of Harvard University Press, 1986), l. 3990.

70 McCraw, l. 4037.

71 Natasha Lomas, “Europe’s Recharged Antitrust Chief Makes Her Five-Year Pitch to Be Digital EVP,” TechCrunch (blog), October 8, 2019, http://social.techcrunch.com/2019/10/08/europes-recharged-antitrust-chief-makes-her-five-year-pitch-to-be-digital-evp/.

72 Zuboff, The Age of Surveillance Capitalism, chapter 4.

73 See ibid., pp. 112–27.

74 Natasha Lomas, “Facebook Makes Another Push to Shape and Define Its Own Oversight,” TechCrunch (blog), June 24, 2019, http://social.techcrunch.com/2019/06/24/facebook-makes-another-push-to-shape-and-define-its-own-oversight/.

75 Bobbie Johnson, “Privacy No Longer a Social Norm, Says Facebook Founder,” Guardian, January 10, 2010, sec. Technology, https://www.theguardian.com/technology/2010/jan/11/facebook-privacy.

76 Pascal-Emmanuel Gobry, “Eric Schmidt to World Leaders at EG8: Don’t Regulate Us, or Else,” Business Insider, May 24, 2011, http://www.businessinsider.com/eric-schmidt-google-eg8-2011-5.

77 Jay Yarow, “Google CEO Larry Page Wants a Totally Separate World Where Tech Companies Can Conduct Experiments on People,” Business Insider, May 16, 2013, http://www.businessinsider.com/google-ceo-larry-page-wants-a-place-for-experiments-2013-5.

78 David Nasaw, “Gilded Age Gospels,” in Ruling America: A History of Wealth and Power in a Democracy, eds. Steve Fraser and Gary Gerstle (Cambridge, MA: Harvard University Press, 2005), pp. 124–5, 148.

79 Sonia Orwell and Ian Angus, In Front of Your Nose 1945–1950: The Collected Essays, Journalism, and Letters of George Orwell, vol. 4 (New York, NY: Harcourt, Brace & World, Inc., 1968), p. 125.

80 Adam Smith, The Wealth of Nations, ed. Edwin Cannan (New York, NY: Modern Library, 1994), p. 485.

81 Friedrich August von Hayek, The Collected Works of Friedrich August Hayek, ed. William Warren Bartley (Chicago, IL: University of Chicago Press, 1988), p. 14.

82 Friedrich Hayek, “The Use of Knowledge in Society,” in Individualism and Economic Order (Chicago, IL: University of Chicago Press, 1980). See the discussion on pp. 85–89.

83 Tom Simonite, “What Facebook Knows,” MIT Technology Review, June 13, 2012, https://www.technologyreview.com/s/428150/what-facebook-knows/.

84 Ashlee Vance, “Facebook: The Making of 1 Billion Users,” Bloomberg.com, October 4, 2012, http://www.bloomberg.com/news/articles/2012-10-04/facebook-the-making-of-1-billion-users.

85 Derek Thompson, “Google’s CEO: ‘The Laws Are Written by Lobbyists,’” Atlantic, October 1, 2010, https://www.theatlantic.com/technology/archive/2010/10/googles-ceo-the-laws-are-written-by-lobbyists/63908/.

86 Shoshana Zuboff, “The Road to Digital Serfdom? The Visible Hand of Surveillance Capitalism,” Promarket, February 22, 2019, https://promarket.org/road-to-digital-serfdom-surveillance-capitalism-visible-hand/.

87 Chris Jay Hoofnagle and Jennifer King, “Research Report: What Californians Understand About Privacy Offline,” SSRN, May 15, 2008 (Rochester, NY: Social Science Research Network), http://papers.ssrn.com/abstract=1133075.

88 Joseph Turow et al., “Americans Reject Tailored Advertising and Three Activities That Enable It” (Annenberg School for Communication, September 29, 2009), http://papers.ssrn.com/abstract=1478214.

89 Joseph Turow, Michael Hennessy, and Nora Draper, “The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation,” Survey Results (Philadelphia, PA: Annenberg School for Communication, June 2015), https://www.asc.upenn.edu/news-events/publications/tradeoff-fallacy-how-marketers-are-misrepresenting-american-consumers-and.

90 Brooke Auxier et al., “Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information,” Pew Research Center: Internet, Science & Tech (blog), November 15, 2019, https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/.

91 Markus Lahtinen, “Big Tech Greater Threat to Privacy than Big Brother,” Lund University School of Economics and Management, January 23, 2020, https://lusem.lu.se/news/big-tech-greater-threat-to-privacy-than-big-brother.

92 Richard Esguerra, “Google CEO Eric Schmidt Dismisses the Importance of Privacy,” Electronic Frontier Foundation, 10 December 2009, https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmidt-dismisses-privacy.

93 Susanne Barth and Menno D. T. de Jong, “The Privacy Paradox—Investigating Discrepancies between Expressed Privacy Concerns and Actual Online Behavior: A Systematic Literature Review,” Telematics and Informatics 34, no. 7 (November 2017): 1038–58, https://doi.org/10.1016/j.tele.2017.04.013.

94 Lawrence M. Friedman, American Law in the 20th Century (New Haven, CT: Yale University Press, 2004).

95 Makan Delrahim, “Assistant Attorney General Makan Delrahim Delivers Remarks for the Antitrust New Frontiers Conference,” United States Department of Justice, June 11, 2019, https://www.justice.gov/opa/speech/assistant-attorney-general-makan-delrahim-delivers-remarks-antitrust-new-frontiers.

96 Examples include Susie Alegre, “Using Freedom of Thought to Limit ‘Surveillance Capitalism’?” Doughty Street Chambers (blog), July 3, 2019, https://insights.doughtystreet.co.uk/post/102fn86/using-freedom-of-thought-to-limit-surveillance-capitalism; Susie Alegre, “Time to Think about Freedom of Thought,” International Law Bulletin, November 2017, https://doughty-street-chambers.newsweaver.com/International/rn8wy1p800g?a=1&p=2047237&t=174031; Marcello Ienca and Roberto Andorno, “Towards New Human Rights in the Age of Neuroscience and Neurotechnology,” Life Sciences, Society and Policy 13 (April 26, 2017), https://doi.org/10.1186/s40504-017-0050-1.

97 Antonio Regalado, “Facebook Is Funding Brain Experiments to Create a Device That Reads Your Mind,” MIT Technology Review, July 30, 2019, https://www.technologyreview.com/s/614034/facebook-is-funding-brain-experiments-to-create-a-device-that-reads-your-mind/; Sigal Samuel, “Facebook Is Building Tech to Read Your Mind. The Ethical Implications Are Staggering.,” Vox, August 5, 2019, https://www.vox.com/future-perfect/2019/8/5/20750259/facebook-ai-mind-reading-brain-computer-interface; David A. Moses et al., “Real-Time Decoding of Question-and-Answer Speech Dialogue Using Human Cortical Activity,” Nature Communications 10, no. 1 (July 30, 2019): 1–14, https://doi.org/10.1038/s41467-019-10994-4; Sigal Samuel, “Brain-Reading Tech Is Coming. The Law Is Not Ready to Protect Us,” Vox, August 30, 2019, https://www.vox.com/2019/8/30/20835137/facebook-zuckerberg-elon-musk-brain-mind-reading-neuroethics; “Surge in US ‘Brain-Reading’ Patents,” BBC News, May 7, 2015, sec. Technology, http://www.bbc.com/news/technology-32623063; Anderson Cooper, “What Is ‘Brain Hacking’? Tech Insiders on Why You Should Care,” CBS News, April 9, 2017, https://www.cbsnews.com/news/brain-hacking-tech-insiders-60-minutes/; Christopher N. Cascio, Christin Scholz, and Emily B. Falk, “Social Influence and the Brain: Persuasion, Susceptibility to Influence and Retransmission,” Current Opinion in Behavioral Sciences 3 (June 2015): 51–57, https://doi.org/10.1016/j.cobeha.2015.01.007; Kiyoto Kasai et al., “The Future of Real-World Neuroscience: Imaging Techniques to Assess Active Brains in Social Environments,” Neuroscience Research 90 (January 2015): 65–71, https://doi.org/10.1016/j.neures.2014.11.007; “Brain-Connected Computers,” The Week, July 28, 2017.

98 “NeuroRights Initiative,” accessed February 23, 2020, https://nri.ntc.columbia.edu/.

99 “New Frontiers of the Mind: Enabling Responsible Innovation in Neurotechnology,” The OECD Forum Network, December 19, 2019, http://www.oecd-forum.org/users/338762-david-winickoff/posts/57641-new-frontiers-of-the-mind-enabling-responsible-innovation-in-neurotechnology.

100 Amnesty International, “Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights” (2019), https://amnestyusa.org/wp-content/uploads/2019/11/Surveillance-Giants-Embargo-21-Nov-0001-GMT-FINAL-report.pdf.

101 Cory D. Kidd et al., “The Aware Home: A Living Laboratory for Ubiquitous Computing Research,” in Proceedings of the Second International Workshop on Cooperative Buildings, Integrating Information, Organization, and Architecture, CoBuild ’99 (London: Springer-Verlag, 1999), pp. 191–98, http://dl.acm.org/citation.cfm?id=645969.674887.

102 Ron Amadeo, “Nest Is Done as a Standalone Alphabet Company, Merges with Google,” Ars Technica, February 7, 2018, https://arstechnica.com/gadgets/2018/02/nest-is-done-as-a-standalone-alphabet-company-merges-with-google/; Leo Kelion, “Google-Nest Merger Raises Privacy Issues,” BBC News, February 8, 2018, sec. Technology, http://www.bbc.com/news/technology-42989073.

103 Guido Noto La Diega, “Contracting for the ‘Internet of Things’: Looking into the Nest,” Research Paper (London: Queen Mary University of London, School of Law, 2016); Robin Kar and Margaret Radin, “Pseudo-Contract & Shared Meaning Analysis,” SSRN (November 16, 2017), https://papers.ssrn.com/abstract=3083129.

104 Grant Hernandez, Orlando Arias, Daniel Buentello, and Yier Jin, “Smart Nest Thermostat: A Smart Spy in Your Home,” Black Hat USA (2014), https://www.blackhat.com/docs/us-14/materials/us-14-Jin-Smart-Nest-Thermostat-A-Smart-Spy-In-Your-Home-WP.pdf.

9 The Siren Song: Algorithmic Governance by Blockchain

1 Marc Andreessen, Why Bitcoin Matters, N.Y. Times DealBook (Jan. 21, 2014, 11:54 AM), https://dealbook.nytimes.com/2014/01/21/why-bitcoin-matters/.

2 Homer, The Odyssey, Book XII (Trans. S.H. Butcher and L. Lang, 1937).

3 See, e.g., John Danaher, The Threat of Algocracy: Reality, Resistance and Accommodation, 29 Phil. & Tech. 245 (2016). Most of the contributions to this volume concern the challenge of algorithmic power in one form or another.

4 See Kevin Werbach, The Federal Computer Commission, 84 N.C. L. Rev. 1 (2005).

5 See Kevin Werbach, The Blockchain and the New Architecture of Trust 14 (2018) [hereinafter Werbach, New Architecture]. I use blockchain here as a generic term for the collection of cryptocurrency, blockchain, and distributed ledger technologies. Not all blockchain networks have an integral cryptocurrency, and not all cryptocurrencies use a data structure involving chains of transaction blocks. What they share are common properties such as decentralization (no one entity can control the status of the ledger) and immutability (transactions once made are, ideally, impossible to alter).
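To make the immutability property concrete, here is a minimal hash-chain sketch in Python. It is an illustration of the general idea only (the function and field names are invented for this example; real blockchains add Merkle trees, timestamps, and consensus machinery on top): because each block commits to the hash of its predecessor, altering any historical entry invalidates every block that follows.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {"prev_hash": prev_hash, "transactions": transactions}

# Build a three-block chain; each block points at its predecessor's hash.
genesis = make_block("0" * 64, ["alice pays bob 5"])
b1 = make_block(block_hash(genesis), ["bob pays carol 2"])
b2 = make_block(block_hash(b1), ["carol pays dave 1"])
chain = [genesis, b1, b2]

def verify(chain: list) -> bool:
    # Every block must commit to the hash of the block before it.
    return all(chain[i + 1]["prev_hash"] == block_hash(chain[i])
               for i in range(len(chain) - 1))

print(verify(chain))                               # True
genesis["transactions"][0] = "alice pays bob 500"  # rewrite history
print(verify(chain))                               # False: the tamper is detected
```

This is also why alterations are detectable rather than strictly impossible: a party controlling enough of the network could still rebuild the chain from the altered block forward.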

6 Although now widely used, the “internet of value” phrase is most widely associated with the blockchain payments firm Ripple. See, e.g., Shanna Leonard, The Internet of Value: What It Means and How It Benefits Everyone, Ripple Insights, June 21, 2017, https://ripple.com/insights/the-internet-of-value-what-it-means-and-how-it-benefits-everyone/.

7 See Steven Johnson, Beyond the Bitcoin Bubble, N.Y. Times Mag., Jan. 16, 2018, available at https://www.nytimes.com/2018/01/16/magazine/beyond-the-bitcoin-bubble.html.

8 See, e.g., Don Tapscott and Alex Tapscott, Blockchain Revolution (2016).

9 See Quinn DuPont and Bill Maurer, Ledgers and Law in the Blockchain, King’s Rev., June 23, 2016; Douglas Allen, The Institutional Revolution: Measurement and the Economic Emergence of the Modern World (2011).

10 Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System, https://bitcoin.org/bitcoin.pdf.

11 See Arvind Narayanan and Jeremy Clark, Bitcoin’s Academic Pedigree, 60 Comms. ACM 36 (2017).

12 See Werbach, New Architecture, supra note 5, at 82–3.

13 See Stan Higgins, $600 Billion: Cryptocurrency Market Cap Sets New Record, Coindesk (Dec. 18, 2017, 6:50 PM UTC), https://www.coindesk.com/600-billion-cryptocurrency-market-cap-sets-new-record/; Historical Snapshot – December 31, 2017, CoinMarketCap (2017) (valuing the market cap of Bitcoin at approximately $221 billion). These numbers somewhat overstate the actual value of available bitcoin, as they ignore the substantial amount of the cryptocurrency that has been stolen or for which the cryptographic keys have been lost.

14 See Werbach, New Architecture, supra note 5, at 84.

15 See Paul Vigna, Bitcoin Is in the Dumps, Spreading Gloom Over Crypto World, Wall St. J., March 19, 2019, https://www.wsj.com/articles/bitcoin-is-in-the-dumps-spreading-gloom-over-crypto-world-11552927208.

16 See Christian Catalini and Joshua Gans, Some Simple Economics of the Blockchain, Rotman School of Mgmt. Working Paper No. 2874598 (2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2874598; Marco Iansiti and Karim R. Lakhani, The Truth about Blockchain, 95 Harv. Bus. Rev. 118 (Jan./Feb. 2017).

17 See Werbach, New Architecture, supra note 5, at 84–6.

18 See id.; Sinclair Davidson et al., Blockchains and the Economic Institutions of Capitalism, 14 J. Institutional Econ. 639 (2018).

19 See Kevin Werbach, Trust, but Verify: Why the Blockchain Needs the Law, 32 Berkeley Tech. L.J. 489 (2018) [hereinafter Werbach, Trust but Verify]; Werbach, New Architecture, supra note 5, at 28–30.

20 Werbach, Trust but Verify, supra note 19, at 79–81.

21 This does not mean no bitcoin has been stolen. Cryptocurrencies are bearer instruments. Whoever controls a private cryptographic key effectively owns the currency associated with it. The wallets and exchanges where most users store their cryptocurrency are separate from the decentralized consensus ledger itself. Those centralized systems can be hacked, or keys can be stolen through other means. An estimated $1.2 billion of cryptocurrency was stolen between the beginning of 2017 and May 2018. See Gertrude Chavez-Dreyfuss, About $1.2 Billion in Cryptocurrency Stolen Since 2017: Cybercrime Group, Reuters (May 24, 2018, 10:59 AM), https://www.reuters.com/article/us-crypto-currency-crime/about-1-2-billion-in-cryptocurrency-stolen-since-2017-cybercrime-group-idUSKCN1IP2LU.
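The bearer-instrument point can be sketched with elliptic-curve signatures. The example below uses the third-party Python ecdsa package and the secp256k1 curve that Bitcoin uses; it is a toy illustration of the primitive, not wallet software, and omits address derivation and transaction formats entirely.

```python
# pip install ecdsa  (a third-party package, used here purely for illustration)
from ecdsa import SigningKey, SECP256k1, BadSignatureError

# The private key is the only credential; whoever holds it can spend.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

spend = b"transfer 1 BTC to Alice"
signature = private_key.sign(spend)

# The network checks the signature against the public key alone; there is
# no issuer to call and no account recovery if the key is lost or stolen.
print(public_key.verify(signature, spend))  # True

try:
    public_key.verify(signature, b"transfer 1 BTC to Mallory")
except BadSignatureError:
    print("altered message rejected")
```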

22 See UK Government Chief Scientific Advisor, Distributed Ledger Technology: Beyond Block Chain (2015), https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/492972/gs-16-1-distributed-ledger-technology.pdf; Tim Swanson, Consensus-as-a-Service: A Brief Report on the Emergence of Permissioned, Distributed Ledger Systems (Apr. 6, 2015), http://www.ofnumbers.com/wp-content/uploads/2015/04/Permissioned-distributed-ledgers.pdf.

23 See Kevin Werbach and Nicolas Cornell, Contracts Ex Machina, 67 Duke L.J. 314 (2017); Nick Szabo, Formalizing and Securing Relationships on Public Networks, 2 First Monday (1997), http://ojphi.org/ojs/index.php/fm/article/view/548.

24 See Werbach and Cornell, supra note 23.

25 See Lawrence Lessig, Code and Other Laws of Cyberspace (2nd revised ed. 2006); Werbach, New Architecture, supra note 5, at 189; Primavera de Filippi and Aaron Wright, Blockchain and the Law: The Rule of Code (2018).

26 See Rachel O’Dwyer, Code != Law: Explorations of the Blockchain as a Mode of Algorithmic Governance (2018), https://www.academia.edu/34734732/Code_Law_Explorations_of_the_Blockchain_as_a_Mode_of_Algorithmic_Governance; Sinclair Davidson, Primavera De Filippi, and Jason Potts, Economics of Blockchain (March 8, 2016), http://dx.doi.org/10.2139/ssrn.2744751; Marcella Atzori, Blockchain Technology and Decentralized Governance: Is the State Still Necessary? (2015), http://nzz-files-prod.s3-website-eu-west-1.amazonaws.com/files/9/3/1/blockchain+Is+the+State+Still+Necessary_1.18689931.pdf.

27 See The Odyssey, supra note 2.

28 See Nick Szabo, Money, Blockchains, and Social Scalability, Unenumerated (Feb. 9, 2017), http://unenumerated.blogspot.com/2017/02/money-blockchains-and-social-scalability.html (“To say that data is post-unforgeable or immutable means that it can’t be undetectably altered after being committed to the blockchain.”); Angela Walch, The Path of the Blockchain Lexicon (and the Law), 36 Rev. Banking & Fin. L. 713 (2016–2017); Marc Pilkington, Blockchain Technology: Principles & Applications, in Research Handbook on Digital Transformations 15 (F. Xavier Olleros & Majlinda Zhegu eds., 2016).

29 Szabo, supra note 28.

30 There are also plenty of ill-intentioned parties that use blockchain technology to facilitate illegal or unethical activity. See, e.g., Kevin Werbach, What the Russia Hack Indictments Reveal about Bitcoin, N.Y. Times, July 22, 2018, https://www.nytimes.com/2018/07/22/opinion/russia-hacking-indictments-bitcoin.html. These issues lie beyond the scope of this paper.

31 Nakamoto, supra note 10.

33 See Walch, supra note 28.

34 See id. at 738–41; Werbach, Trust but Verify, supra note 19; Gideon Greenspan, The Blockchain Immutability Myth, Coindesk (May 9, 2017), http://www.coindesk.com/blockchain-immutability-myth/.

35 See Tim Ferriss, The Quiet Master of Cryptocurrency—Nick Szabo, Tim Ferriss Show (June 4, 2017), https://tim.blog/2017/06/04/nick-szabo/.

36 Of course, this is true of centralized financial networks as well.

37 See Vitalik Buterin, On Settlement Finality, Ethereum Blog (May 9, 2016), https://blog.ethereum.org/2016/05/09/on-settlement-finality/.

38 The consensus algorithms that provide for finality also generally trade off some other property, such as security or decentralization, for the guarantee that prior blocks may not be changed.

39 For proof of work systems like Bitcoin, this means a majority of computation devoted to mining.
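A toy proof-of-work loop shows what "computation devoted to mining" means in practice. This is a simplified sketch (real Bitcoin mining applies double SHA-256 to an 80-byte block header against a dynamically adjusted target): the miner grinds through nonces until the block hash falls below a difficulty threshold.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    # Find a nonce whose hash has `difficulty_bits` leading zero bits.
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each added difficulty bit doubles the expected work; at 20 bits this
# takes roughly a million hash attempts on average.
print("nonce found:", mine(b"toy block header", difficulty_bits=20))
```

Whoever performs the majority of this guessing work determines, in expectation, which chain of blocks grows fastest, which is why a majority of computation translates into control over consensus.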

40 See Daniel Oberhaus, Cryptocurrency Miners Are Sabotaging Blockchains for Their Personal Gain, Motherboard (May 25, 2018, 11:00 AM), https://motherboard.vice.com/en_us/article/a3a38e/what-is-a-51-percent-attack-silicon-valley-bitcoin-gold-verge-monacoin-cryptocurrency. The lower the total value of a cryptocurrency, the fewer resources it makes economic sense to spend on mining, which means the costs of a successful 51 percent attack are also lower.
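A back-of-the-envelope calculation makes the economics explicit. All figures below are hypothetical, chosen only to show that the cost of renting a bare majority of hash power scales linearly with the size of the network being attacked.

```python
def rental_cost_51(network_hashrate_ghs: float, hours: float,
                   usd_per_ghs_hour: float) -> float:
    # Cost of renting just over half the network's hash power.
    return 0.51 * network_hashrate_ghs * hours * usd_per_ghs_hour

# A large chain (100 million GH/s) vs. a small one (50,000 GH/s),
# at a hypothetical rental price of $0.001 per GH/s-hour:
print(rental_cost_51(100_000_000, 1, 0.001))  # 51000.0 -> $51,000 per hour
print(rental_cost_51(50_000, 1, 0.001))       # 25.5    -> $25.50 per hour
```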

41 A soft fork means both paths are compatible. There is still one consensus chain of blocks, but some network nodes have access to different features than the others. A hard fork means there are two incompatible chains after a certain point.
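The compatibility distinction can be modeled in a few lines. The sketch below is deliberately stylized, using a block size cap as a stand-in for any consensus rule: a soft fork tightens the rules, so blocks valid under the new rules remain valid to un-upgraded nodes, whereas a hard fork produces blocks that old nodes reject, splitting the chain.

```python
MAX_SIZE_OLD = 1_000_000   # original consensus rule: blocks up to 1 MB
MAX_SIZE_SOFT = 500_000    # soft fork: tighter rule, a subset of the old one
MAX_SIZE_HARD = 2_000_000  # hard fork: looser rule, incompatible with the old one

def old_node_accepts(block_size: int) -> bool:
    return block_size <= MAX_SIZE_OLD

soft_fork_block = 400_000    # valid under the tighter soft-fork rule
hard_fork_block = 1_500_000  # valid only under the looser hard-fork rule

print(old_node_accepts(soft_fork_block))  # True: one chain persists
print(old_node_accepts(hard_fork_block))  # False: old nodes split off
```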

42 See Thomas Jay Rush, Defeating the Ethereum DDOS Attacks, Medium (Feb. 12, 2017), https://medium.com/@tjayrush/defeating-the-ethereum-ddos-attacks-d3d773a9a063.

43 See Werbach, New Architecture, supra note 5, at 77.

44 See Francis Fukuyama, Trust: The Social Virtues and the Creation of Prosperity (1995); Niklas Luhmann, Trust and Power (1979).

45 Gharad Bryan et al., Commitment Devices, 2 Ann. Rev. Econ. 671–98 (2010). The authors include two additional conditions for precision. The agent must value the commitment effect itself, as opposed to, for example, merely paying now for a benefit later. And the commitment must not be for some strategic purpose, such as deterring action that might invoke it or influencing someone else.

47 See Richard Thaler, Some Empirical Evidence on Dynamic Inconsistency, 8 Econ. Lett. 201 (1981).

48 See id. at 674.

49 See Thomas Schelling, An Essay on Bargaining, 46 Am. Econ. Rev. 281–306 (1956).

50 See Oliver Williamson, Credible Commitments: Using Hostages to Support Exchange, 73 Am. Econ. Rev. 519, 519 (1983).

51 Id. at 520.

52 See Jon Elster, Ulysses and the Sirens: A Theory of Imperfect Rationality, 16 Soc. Sci. Info. 469 (1977). Elster uses the Roman name Ulysses instead of the Greek Odysseus.

53 Id. at 502.

54 See Klint Finley, A $50 Million Hack Just Showed that the DAO Was All Too Human, Wired (June 18, 2016), https://www.wired.com/2016/06/50-million-hack-just-showed-dao-human/.

55 DAO is an abbreviation for decentralized autonomous organization, a concept of a corporation governed through self-executing smart contracts. See Vitalik Buterin, DAOs, DACs, DAs and More: An Incomplete Terminology Guide, Ethereum Blog (May 6, 2014), https://blog.ethereum.org/2014/05/06/daos-dacs-das-and-more-an-incomplete-terminology-guide/. The DAO was styled as the first real-world implementation of the concept, which cryptocurrency advocates such as Ethereum founder Vitalik Buterin had developed a few years earlier.
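A stylized sketch of the core idea behind a DAO: token-weighted votes tallied and executed by code, with no officers or directors in the loop. The class and method names below are invented for illustration; the real DAO ran as Ethereum smart contracts with far more machinery (and, as the next note recounts, a fatal bug).

```python
from collections import defaultdict

class ToyDAO:
    """Token-weighted governance: the code tallies votes and decides."""

    def __init__(self, token_balances: dict):
        self.balances = token_balances
        self.votes = defaultdict(int)

    def vote(self, holder: str, proposal: str) -> None:
        # Voting power is proportional to token holdings.
        self.votes[proposal] += self.balances.get(holder, 0)

    def passes(self, proposal: str) -> bool:
        # A proposal passes when backed by a majority of all tokens.
        return self.votes[proposal] > sum(self.balances.values()) / 2

dao = ToyDAO({"alice": 60, "bob": 30, "carol": 10})
dao.vote("alice", "fund project X")
print(dao.passes("fund project X"))  # True: 60 of 100 tokens in favor
```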

56 See Toshendra Kumar Sharma, Details of the DAO Hacking in Ethereum in 2016, Blockchain Council (Aug. 20, 2017), https://www.blockchain-council.org/blockchain/details-of-the-dao-hacking-in-ethereum-in-2016/.

58 See E. J. Spode, The Great Cryptocurrency Heist, Aeon (Feb. 14, 2017), https://aeon.co/essays/trust-the-inside-story-of-the-rise-and-fall-of-ethereum.

59 See Nathaniel Popper, Move Over, Bitcoin. Ether Is the Digital Currency of the Moment, N.Y. Times DealBook, June 19, 2017, https://www.nytimes.com/2017/06/19/business/dealbook/ethereum-bitcoin-digital-currency.html.

60 See David Morris, The Bizarre Fallout of Ethereum’s Epic Fail, Fortune (Sept. 4, 2016), http://fortune.com/2016/09/04/ethereum-fall-out.

61 The two chains are identical up to the moment of the fork, then diverge. All holders of Ether at the time saw their holdings double, although Ethereum Classic coins are worth substantially less than those of the more popular Ethereum.

62 See Thomas J. Sargent, The Ends of Four Big Inflations, in Inflation: Causes and Effects (R.E. Hall, ed., 1982).

63 See Parity Technologies, Security Alert (November 8, 2017), https://paritytech.io/security-alert-2/.

64 See Jordan Pearson, Ethereum Wallet Company Knew about Critical Flaw That Let a User Lock Up Millions, Motherboard (Nov. 15, 2017, 2:21pm), https://motherboard.vice.com/en_us/article/d3djwj/ethereum-wallet-parity-knew-about-critical-flaw-that-let-user-devops199-lock-up-millions.

65 See Ryan Browne, “Accidental” Bug May Have Frozen $280 Million Worth of Digital Coin Ether in a Cryptocurrency Wallet, CNBC (Nov. 8, 2017, 6:42 AM), https://www.cnbc.com/2017/11/08/accidental-bug-may-have-frozen-280-worth-of-ether-on-parity-wallet.html.

66 See Rachel Rose O’Leary, The New Last-Ditch Effort to Unfreeze a $260 Million Ethereum Fortune, Coindesk (Apr. 18, 2018, 4:00 UTC), https://www.coindesk.com/new-last-ditch-effort-unfreeze-260-million-ethereum-fortune/.

67 There is some question whether the user, Devops199, offered a truthful account, or was actually behaving maliciously. Even if deletion of the smart contract was malicious, however, it was not theft, because no one obtained access to the trapped funds.

68 See O’Leary, supra note 66.

69 See Laura Shin, Will This Battle for the Soul of Bitcoin Destroy It?, Forbes (Oct. 23, 2017, 1:35 pm), https://www.forbes.com/sites/laurashin/2017/10/23/will-this-battle-for-the-soul-of-bitcoin-destroy-it/#42adb77f3d3c.

70 The term is short for “segregated witness.” In Bitcoin, the “witness” is the signature data that authorizes a transaction; SegWit segregates that data from the base transaction structure, freeing up block capacity without raising the nominal block size limit.

71 See Shin, supra note 69.

73 See Paul Vigna, Bitcoin Dodges Split that Threatened Its Surging Price, Wall St. J. (Nov. 8, 2017, 3:25 pm), https://www.wsj.com/articles/bitcoin-dodges-split-that-threatened-its-surging-price-1510172701/. The price of Bitcoin spiked to new highs when the hard fork was called off, indicating that uncertainty about its outcome was a substantial overhang for investors. See Evelyn Cheng, Bitcoin Hits Record High after Developers Call Off Plans to Split Digital Currency, CNBC (Nov. 8, 2017, 12:40 pm), https://www.cnbc.com/2017/11/08/bitcoin-surges-11-percent-to-record-above-7800-after-developers-call-off-plans-to-split-digital-currency.html.

74 See Kenneth A. Shepsle, Institutions and the Problem of Government Commitment, in Social Theory for a Changing Society, pp. 245, 247 (Pierre Bourdieu and James S. Coleman, eds., 1991).

75 See Werbach, New Architecture, supra note 5, at 47–8.

76 See Werbach and Cornell, supra note 23; Oliver D. Hart, Incomplete Contracts and the Theory of the Firm, 4 J. L. Econ. & Org. 119–39 (1988).

77 See Werbach and Cornell, supra note 23.

78 Shepsle, supra note 74, at 249.

79 Permissioned blockchain and distributed ledger networks do not face quite the same challenges. Because participants are identified and authorized, traditional mechanisms of consortium governance can be applied. While not always effective, such arrangements only require the agreement of a relatively small group of organizations to make governance decisions for the network.

80 Oliver Williamson, The Economics of Governance, 95 Amer. Econ. Rev. 1, 1 (2005).

81 See, e.g., Lisa Welchman, Managing Chaos: Digital Governance by Design (2015).

82 See Lessig, supra note 25.

83 See Szabo, supra note 28.

85 See Werbach and Cornell, supra note 23.

86 See Andrew Selbst and Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 Fordham L. Rev. 1085 (2018).

87 See Rachel Courtland, Bias Detectives: The Researchers Striving to Make Algorithms Fair, Nature (June 20, 2018), https://www.nature.com/articles/d41586-018-05469-3; Frank Pasquale, Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society, 78 Ohio St. L.J. 1243 (2017).

88 See Werbach, New Architecture, supra note 5, at 217.

89 See Vitalik Buterin, Notes on Blockchain Governance, Vitalik Buterin’s Website (Dec. 17, 2017), https://vitalik.ca/general/2017/12/17/voting.html.

90 Douglass North, Institutions, 5 J. Econ. Persp. 97, 97 (1991).

91 See Douglass North, Institutions and Credible Commitment, 149 J. Instit. & Theoretical Econ. 11 (1993).

92 See Vili Lehdonvirta, The Blockchain Paradox: Why Distributed Ledger Technologies May Do Little to Transform the Economy, Oxford Internet Institute (Nov. 21, 2016), https://www.oii.ox.ac.uk/blog/the-blockchain-paradox-why-distributed-ledger-technologies-may-do-little-to-transform-the-economy/.

93 Id. The quote, which appears in the introduction to the online transcript of Lehdonvirta’s talk, is from Seth Bannon, The Tao of “The DAO” or: How the Autonomous Corporation Is Already Here, Techcrunch (May 16, 2016), https://techcrunch.com/2016/05/16/the-tao-of-the-dao-or-how-the-autonomous-corporation-is-already-here/.

94 John Perry Barlow, A Declaration of the Independence of Cyberspace, Electronic Frontier Foundation (Feb. 8, 1996), https://www.eff.org/cyberspace-independence.

96 See Shoshana Zuboff, Big Other: Surveillance Capitalism and the Prospects of an Information Civilization, 30 J. Info. Tech. 75–89 (2015); Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019).

97 It is now widely accepted by policymakers in a variety of contexts, most notably in the European General Data Protection Regulation, which entered into force in 2018. See Lee Bygrave, Data Protection by Design and by Default: Deciphering the EU’s Legislative Requirements, 4 Oslo L. Rev. 105 (2017).

98 See Privacy-Enhancing Technologies: The Path to Anonymity, Vol. 1 (Aug. 1995), http://www.ontla.on.ca/library/repository/mon/10000/184530.pdf; Peter Schaar, Privacy by Design, 3 Identity in Info. Soc. 267 (2010).

99 Ann Cavoukian, Privacy by Design: The 7 Foundational Principles, IAB.org (2009), https://www.iab.org/wp-content/IAB-uploads/2011/03/fred_carter.pdf. Cavoukian was the first to develop the concept of privacy by design.

100 A few other authors employ the same term, but generally in different ways than I do here. Deirdre Mulligan and Kenneth Bamberger use “governance-by-design” to refer to efforts to promote values or implement regulatory mandates through manipulation of technological systems. Deirdre K. Mulligan and Kenneth Bamberger, Saving Governance-by-Design, 106 Calif. L. Rev. 697 (2018). Primavera de Filippi uses “governance by design” for self-governance models in which “rules are embedded directly into the underlying technology of the platforms they use to operate.” Rachel O’Dwyer, Commons Governance and Law with Primavera De Filippi, Commons Transition (July 31, 2015, 10:55 AM), http://commonstransition.org/commons-centric-law-and-governance-with-primavera-de-filippi/. Embedding governance structures into code, such as through the on-chain governance technology of systems such as Tezos, is a subset of what I propose here. Governance by design means systematically embedding governance into all relevant processes, whether implemented in software, in regularized procedures for human discussion, or in decisional structures.

101 See Jordan Pearson, The Ethereum Hard Fork Spawned a Shaky Rebellion, Motherboard (July 27, 2016, 5:55pm), https://www.vice.com/en_us/article/z43qb4/the-ethereum-hard-fork-spawned-a-shaky-rebellion-ethereum-classic-etc-eth.

102 Cavoukian, supra note 99.

103 See Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (1990).

104 See id. at 52.

105 De Filippi and Wright, supra note 25.

106 On-chain governance systems are one approach to this challenge. See supra notes 88–89 and associated text. Another is to structure decentralized arbitration mechanisms that leverage the incentive mechanisms of cryptocurrencies themselves. See Werbach, New Architecture, supra note 5, at 215–16 (describing Augur’s “computational juries” as an example).

107 See Paul Vigna, Tezos Raised $232 Million in a Hot Coin Offering, Then a Fight Broke Out, Wall St. J. (Oct. 18, 2017, 12:07 AM), https://www.wsj.com/articles/tezos-raised-232-million-in-a-hot-coin-offering-then-a-fight-broke-out-1508354704.

108 As Mulligan and Bamberger point out, embedding policy prescriptions directly into technological systems can create new problems. Governance by design may fail to address specific cases in a nuanced way, deprecate important human rights values, or reduce transparency of the regulatory process. These considerations should be taken into account in designing blockchain governance mechanisms. They offer a series of recommendations to ameliorate the dangers of governance by design, which are consistent with the layered, polycentric approach proposed here. See Mulligan and Bamberger, supra note 100.

109 See Ostrom, supra note 103, at 212.

110 See Kevin Werbach, The Song Remains the Same: What Cyberlaw Might Teach the Next Internet Economy, 69 Fla. L. Rev. 887 (2017).

111 See id. at 915–16.

112 See, e.g., Jonathan T. Taplin, Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy (2017); Lina M. Khan, Amazon’s Antitrust Paradox, 126 Yale L.J. 710 (2017).

113 See Paul Vigna, People Love Talking about Bitcoin More Than Using It, Wall St. J. (Apr. 12, 2017, 5:30 AM), https://www.wsj.com/articles/people-love-talking-about-bitcoin-more-than-using-it-1491989403.

114 See Olga Kharif, Blockchain, Once Seen as a Corporate Cure-All, Suffers Slowdown, Bloomberg (July 31, 2018), https://www.bloomberg.com/news/articles/2018-07-31/blockchain-once-seen-as-a-corporate-cure-all-suffers-slowdown.

115 See Werbach, New Architecture, supra note 5, at 226, 236–7.

116 See Kevin Werbach, Blockchain Isn’t a Revolution, Medium (June 18, 2018), https://medium.com/s/story/blockchain-isnt-a-revolution-it-s-two-big-innovations-and-one-promising-idea-988fca6b0fca; Shawn Tully, The NYSE’s Owner Wants to Bring Bitcoin to Your 401(k). Are Crypto Credit Cards Next?, Fortune (Aug. 14, 2018), http://fortune.com/longform/nyse-owner-bitcoin-exchange-startup/.

117 The Odyssey, supra note 2 (Book XII).

References

Berlin, Isaiah (1969). Two Concepts of Liberty, in Berlin, I., Four Essays on Liberty, London: Oxford University Press. New ed. in Berlin 2002.
Berlin, Isaiah (1978). From Hope and Fear Set Free, in Berlin, I., Concepts and Categories: Philosophical Essays, ed. Hardy, H., London: Hogarth Press; Oxford: Oxford University Press, 1980. Reprinted in Berlin 2002.
Bramble, Ben (2016). The Experience Machine. Philosophy Compass, 11(3), 136–45.
Bronsteen, John; Buccafusco, Christopher; and Masur, Jonathan S. (2014). Happiness and the Law. Chicago, IL: University of Chicago Press.
Carter, Ian (2016). Positive and Negative Liberty, The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta.
Crisp, Roger (2006). Hedonism Reconsidered. Philosophy and Phenomenological Research, 73(3), 619–45.
De Brigard, Felipe (2010). If You Like It, Does It Matter if It’s Real? Philosophical Psychology, 23(1), 43–57.
Foucault, Michel (1994). The Order of Things: An Archaeology of the Human Sciences. New York: Vintage Books.
Frischmann, Brett M. (2005). Some Thoughts on Shortsightedness and Intergenerational Equity. Loyola University Chicago Law Journal, 36(2), 457–67.
Grau, Christopher (ed.) (2005). Philosophers Explore the Matrix. New York: Oxford University Press.
Harari, Yuval Noah (2014). Sapiens: A Brief History of Humankind. New York: Vintage.
Jaffa, Harry V. (1959). Crisis of the House Divided: An Interpretation of the Issues in the Lincoln-Douglas Debates. Chicago, IL: University of Chicago Press.
Kolber, Adam J. (1994). Mental Statism and the Experience Machine. Bard Journal of Social Sciences, 3, 10–17.
Layard, Richard (2005). Happiness: Lessons from a New Science. New York: Penguin Group.
Lincoln, Abraham (1838, January 27). The Perpetuation of Our Political Institutions, Address Before the Young Men’s Lyceum of Springfield, Illinois. Retrieved from http://Federalistpatriot.us/histdocs/Lincolnlyceum.html.
Mendola, Joseph (2006). Intuitive Hedonism. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 128(2), 441–77.
Mill, John S. (1962). Utilitarianism: On Liberty. London: Collins.
Morris, Glenn. For the Next Seven Generations: Indigenous Americans and Communalism. Fellowship for Intentional Community. Retrieved from www.ic.org/wiki/next-seven-generations-indigenous-americans-communalism/ (last visited January 6, 2005).
Nozick, Robert (2013). Anarchy, State, and Utopia. New York: Basic Books.
Nussbaum, Martha (2007). The Capabilities Approach and Ethical Cosmopolitanism: A Response to Noah Feldman. Yale Law Journal Pocket Part, 117, 123–29.
Nussbaum, Martha (2011). Creating Capabilities: The Human Development Approach. Cambridge, MA: Harvard University Press.
Nussbaum, Martha and Sen, Amartya K. (2004). The Quality of Life. Oxford: Oxford University Press.
Rachels, James (2014). The Challenge of Cultural Relativism. In Rachels, Stuart (ed.), Elements of Moral Philosophy (pp. 15–32). New York: McGraw-Hill Education.
Sen, Amartya K. (1985). Commodities and Capabilities. Amsterdam: North-Holland.
Sen, Amartya K. (2001). Development as Freedom. Oxford: Oxford University Press.
Sen, Amartya K. (2004). Rationality and Freedom. Cambridge, MA: Harvard University Press.
Sen, Amartya K. (2005). Human Rights and Capabilities. Journal of Human Development, 6(2), 151–66.
Sen, Amartya K. (2011). Development as Freedom. New York: Anchor Books.
Smith, Basil (2011). Can We Test the Experience Machine? Ethical Perspectives, 18(1), 29–51.
Swift, Fletcher H. (1947). The Athenian Ephebic Oath of Allegiance in American Schools and Colleges. Los Angeles, CA: University of California Press.
United Nations. Universal Declaration of Human Rights. Retrieved from www.un.org/en/universal-declaration-human-rights/.
Weijers, Dan (2011). A Review and Assessment of the Experience Machine Objection to Hedonism. Retrieved from www.danweijers.com/pdf/A%20Review%20and%20Assessment%20of%20the%20Experience%20Machine%20Objection%20to%20Hedonism%20-%20Dan%20Weijers.pdf.
Weijers, Dan (2014). Nozick’s Experience Machine Is Dead, Long Live the Experience Machine. Philosophical Psychology, 27(4), 513–35.
Wolf, Susan R. (2015). The Variety of Values: Essays on Morality, Meaning, and Love. New York: Oxford University Press.
