New technologies are offering companies, politicians, and others unprecedented opportunities to manipulate us. Sometimes we are given the illusion of power, of freedom, through choice, yet the game is rigged, pushing us in specific directions that lead to less wealth, worse health, and weaker democracy. In Manipulation, nudge theory pioneer and New York Times bestselling author Cass Sunstein offers a new definition of manipulation for the digital age, explains why it is wrong, and shows what we can do about it. He reveals how manipulation compromises freedom and personal agency, while threatening to reduce our well-being; he explains the difference between manipulation and unobjectionable forms of influence, including 'nudges'; and he lifts the lid on online manipulation and manipulation by artificial intelligence, algorithms, and generative AI, as well as threats posed by deepfakes, social media, and 'dark patterns,' which can trick people into giving up time and money. Drawing on decades of groundbreaking research in behavioral science, this landmark book outlines steps we can take to counteract manipulation in our daily lives and offers guidance to protect consumers, investors, and workers.
The current method used by the US Government to calculate benefits and costs does not accurately measure the monetary value of some regulations. The problem is that the method fails to recognize the possibility that individual valuations, reflecting judgments in a relatively isolated, uncoordinated situation, might be significantly different from individual valuations in a situation of coordination. For example, people might be willing to pay $X for a good, supposing that other people have that good, but might be willing to pay $Y to abolish that good, supposing that no one will have that good. Or people might be willing to pay $X to protect members of an endangered species in their individual capacity, but far more than $X for the same purpose, assuming that many others are paying as well; one reason may be that an individual expenditure seems futile. We sketch, identify, and explain this unmeasured value, which we define as coordination value, meant as an umbrella concept to cover several categories of cases in which individual valuation measured in the uncoordinated state might be inadequate. Changing the methodology of benefit–cost analysis to consider coordination value would present serious empirical challenges, but would eliminate the estimation error.
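To make the gap concrete, here is a minimal sketch in my own notation (not the authors'): suppose each of n eligible people would pay x for a good when valuing it in isolation, but x' > x when assured that everyone else is contributing as well.

```latex
% Illustrative sketch (hypothetical notation), aggregating over n people:
%   B_measured : benefit as measured by the current, uncoordinated method
%   B_coord    : benefit if valuations were elicited under coordination
\[
B_{\text{measured}} = n x, \qquad B_{\text{coord}} = n x', \qquad x' > x
\]
% The unmeasured coordination value is the estimation error the abstract flags:
\[
\underbrace{B_{\text{coord}} - B_{\text{measured}}}_{\text{coordination value}} = n\,(x' - x) > 0
\]
```

On these assumptions, a regulation whose cost lies between nx and nx' would fail the current benefit–cost test even though it raises welfare.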
Do people like financial nudges? To answer that question we conducted a pre-registered survey presenting people with 36 hypothetical scenarios describing financial interventions. We varied levels of transparency (i.e., explaining how the interventions worked), framing (interventions framed in terms of spending or saving), and ‘System’ (interventions could target either System 1 or System 2). Participants were a random sample of 2,100 people drawn from a representative Australian population. All financial interventions were tested across six dependent variables: approval, benefit, ethics, manipulation, the likelihood of use, and the likelihood of use if the intervention were to be proposed by a bank. Results indicate that people generally approve of financial interventions, rating them as neutral to positive across all dependent variables (except for manipulation, which was reverse coded). We find effects of framing and System: people have strong and significant preferences for System 2 interventions and for interventions framed in terms of savings. Transparency was not found to have a significant impact on how people rate financial interventions, and financial interventions continue to be rated positively regardless of the messenger. Looking at demographics, we find that participants who were female, younger, living in metro areas, and earning higher incomes were most likely to favor financial interventions, an effect that is especially strong for those aged under 45. We discuss the implications of these results as applied to the financial sector.
In 1921, John Maynard Keynes and Frank Knight independently insisted on the importance of making a distinction between uncertainty and risk. Keynes referred to matters about which ‘there is no scientific basis on which to form any calculable probability whatever’. Knight claimed that ‘Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated’. Knightian uncertainty exists when people cannot assign probabilities to imaginable outcomes. People might know that a course of action might produce bad outcomes A, B, C, D, and E, without knowing much or anything about the probability of each. Contrary to a standard view in economics, Knightian uncertainty is real, and it poses challenging and unresolved issues for decision theory and regulatory practice. It bears on many problems, potentially including those raised by artificial intelligence. It is tempting to seek to eliminate the worst-case scenario, and thus to adopt the maximin rule, which might seem to be the appropriate approach under Knightian uncertainty. But serious problems arise if eliminating the worst-case scenario would (1) impose high risks and costs, (2) eliminate large benefits or potential ‘miracles’, or (3) create uncertain risks.
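For readers who want the rule pinned down, here is a minimal sketch of maximin (the action names and payoffs are hypothetical, invented purely for illustration): with no probabilities available, the rule ranks actions by their worst possible outcome and picks the least bad.

```python
# Minimal sketch of the maximin rule under Knightian uncertainty.
# No probabilities are attached to outcomes; payoffs are hypothetical.
possible_payoffs = {
    "eliminate_worst_case": [-10, -5, 0],    # caps losses, forgoes big gains
    "do_nothing":           [-100, 20, 50],  # admits a catastrophic outcome
    "moderate_step":        [-30, 0, 30],
}

def maximin(options):
    """Pick the action whose worst-case payoff is highest."""
    return max(options, key=lambda action: min(options[action]))

print(maximin(possible_payoffs))  # -> 'eliminate_worst_case'
```

The sketch also exposes the abstract's worry: maximin here forgoes the potential ‘miracle’ payoff of 50, precisely the kind of tradeoff that makes the rule problematic.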
Many moral judgments are rooted in the outrage heuristic. In making such judgments about certain personal injury cases, people’s judgments are both predictable and widely shared. With respect to outrage and punitive intent (each rated on a bounded scale of one to six), the judgments of one group of six or twelve people nicely predict the judgments of other groups of six or twelve people. Moreover, outrage judgments are highly predictive of punitive intentions. Because of their use of the outrage heuristic, people are intuitive retributivists. People care about deterrence, but they do not think in terms of optimal deterrence. Because outrage is category-specific, those who use the outrage heuristic are likely to produce patterns that they would themselves reject, if only they were to see them. Because people are intuitive retributivists, they reject some of the most common and central understandings in economic and utilitarian theory. To the extent that a system of criminal justice depends on the moral psychology of ordinary people, it is likely to operate on the basis of the outrage heuristic and will, from the utilitarian point of view, end up making serious and systematic errors.
Both Republican and Democratic administrations make regulatory and funding decisions with close reference to benefit–cost analysis (BCA). With respect to regulation, there has been a great deal of academic discussion of BCA and its limits, but almost no attention has been paid to the role of BCA in government funding. That is a serious gap, not least in connection with climate-related risks, such as wildfire, drought, extreme heat, and flooding. Office of Management and Budget (OMB) Circular A-94 sets out guidelines for the BCA required when people are applying to many federal discretionary grant programs. Through Circular A-94, OMB has long required applicants to demonstrate that the benefits of their projects would exceed the costs. But under Circular A-94 as it stood for many years, efficiency-based BCA could produce results that fail to maximize welfare and that are also highly inequitable. The 2023 revision of Circular A-94 focuses more directly on welfare and equity, which are now, not uncontroversially, being brought directly into policy. At the same time, the new Circular A-94 raises fresh questions about how best to promote welfare, and to consider equity, in practice. This article explains the economic foundations for promoting welfare through distributional weighting, and shows how the old BCA guidance fell short. It then offers recommendations on how to operationalize distributional weighting on the ground, specifically for government spending programs and for BCA more broadly.
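To make the mechanics concrete, here is a minimal sketch of distributional weighting in a standard isoelastic form from the BCA literature (the incomes, benefits, reference income, and elasticity below are illustrative assumptions, not values from Circular A-94): each person's net benefit is scaled by a weight that rises as income falls.

```python
# Minimal sketch of distributionally weighted benefit-cost analysis.
# Standard isoelastic weights: w_i = (y_ref / y_i) ** eta, where eta is the
# elasticity of the marginal utility of income. All numbers are hypothetical.

ETA = 1.4        # illustrative elasticity (common estimates are roughly 1-2)
Y_REF = 60_000   # illustrative reference income, e.g., the median

people = [
    {"income": 20_000, "net_benefit": -100},   # poorer person bears a cost
    {"income": 200_000, "net_benefit": 150},   # richer person gains
]

def weighted_net_benefit(people, y_ref=Y_REF, eta=ETA):
    return sum(p["net_benefit"] * (y_ref / p["income"]) ** eta for p in people)

unweighted = sum(p["net_benefit"] for p in people)
print(unweighted)                              # 50: passes efficiency-based BCA
print(round(weighted_net_benefit(people), 1))  # negative: fails once weighted
```

The same project can pass unweighted BCA and fail weighted BCA, which is the tension between efficiency and welfare that the article explores.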
Some regulations not only reduce human deaths, injuries, and illnesses; they also protect nonhuman animals. Regulatory Impact Analyses, required by prevailing executive orders, usually do not disclose or explore benefits or costs with respect to nonhuman animals, even when those benefits or costs are significant. This is an inexcusable gap. If a regulation prevents dogs, horses, or cats from being killed or hurt, the benefits should be specified and quantified. This proposition holds even if those benefits are in some sense incidental to the main goal of the regulation. At the same time, turning the relevant benefits into monetary equivalents raises serious challenges, akin to those raised by the valuation of statistical children.
People buy some goods that they do not enjoy and wish did not exist. They might even be willing to pay a great deal for such goods, whether the currency involves time, commitment, or money. One reason involves signaling to others; so long as the good exists, nonconsumption might give an unwanted signal to friends or colleagues. Another reason involves self-signaling; so long as the good exists, nonconsumption might give an unwanted signal to an agent about himself or herself. Yet another reason involves a combination of network effects and status competition; nonconsumption might deprive people of the benefits of participating in a network and thus cause them to lose relative position. With respect to real-world goods (including activities) of this kind, there is typically heterogeneity in relevant populations, with some people deriving positive utility from goods to which other people are indifferent, or which other people deplore. Efforts to measure people’s willingness to pay for goods of this kind will suggest a welfare gain, and possibly a substantial one, even though the existence of such goods produces a welfare loss, and possibly a substantial one. Collective action, private or public, is necessary to eliminate goods that people consume but wish did not exist. Legal responses here might be contemplated when someone successfully maneuvers people into a situation in which they are incentivized to act against their interests, by consuming a product or engaging in an activity they do not enjoy, in order to avoid offering an unwanted signal. Prohibitions on waiving certain rights might be justified in this way; some restrictions on uses of social media, especially by young people, might be similarly justified.
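A minimal payoff sketch of the signaling and network logic (the structure and numbers are mine, purely illustrative): let consuming the good cost c > 0 per person, and let the signaling or positional penalty from unilaterally abstaining be s > c.

```latex
% Illustrative per-person payoffs under hypothetical parameters s > c > 0:
%   everyone consumes:               -c   (say, -1)
%   you alone stop consuming:        -s   (say, -3)
%   the good is abolished for all:    0
\[
-s \;<\; -c \;<\; 0
\]
% Universal consumption is an equilibrium (unilateral deviation loses s - c),
% yet everyone is better off in the world where the good does not exist.
```

On these assumptions, no individual can profitably stop consuming, which is why the abstract concludes that collective action, private or public, is needed to eliminate such goods.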
Why are take-up rates incomplete or low when the relevant opportunities are unambiguously advantageous to people who are eligible for them? How can public officials promote higher take-up of opportunities? All over the world, these are challenges of the first order. There are three primary barriers to take-up: learning costs, compliance costs, and psychological costs. These costs lower the net expected benefit of opportunities and reduce participation in otherwise advantageous programs. Fully rational agents would consider these costs in their take-up decisions, and in light of behavioral biases, such costs loom especially large and may seem prohibitive. Experimental and other evidence suggests methods for reducing the barriers to take-up and the effects of behavioral biases. Use of such methods has the potential to significantly increase access to a wide range of opportunities that would increase individual well-being and social welfare.
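One way to make the cost logic explicit (my formalization, not the article's): a person takes up a program only if its expected benefit exceeds the sum of the three costs, with behavioral biases acting as a multiplier on those costs.

```latex
% Hypothetical formalization of the take-up decision:
%   B    = expected benefit of the program
%   c_L  = learning costs, c_C = compliance costs, c_P = psychological costs
%   beta = bias multiplier (beta = 1 for a fully rational agent)
\[
\text{take up} \iff B > \beta\,(c_L + c_C + c_P), \qquad \beta \ge 1
\]
% When beta > 1, even an unambiguously advantageous program
% (B > c_L + c_C + c_P) can go unclaimed.
```

On this reading, the interventions the article surveys work either by shrinking the costs in the parentheses or by shrinking beta.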
Chater & Loewenstein, superb and distinguished social scientists, have misfired. Their complaint is baseless: in the real world of policymaking, behavioral science is mostly being used to reform systems, not to alter individual behavior. Nor is there empirical support for the proposition that interventions aimed at helping individuals render systemic reform less likely.
In important contexts, people prefer option A to option B when they evaluate the two separately, but prefer option B to option A when they evaluate the two jointly. In consumer behavior, politics, and law, such preference reversals present serious puzzles about rationality and behavioral biases. They are often a product of the pervasive problem of evaluability. Some important characteristics of options are difficult or impossible to assess in separate evaluation, and hence choosers disregard or downplay them; those characteristics are much easier to assess in joint evaluation, where they might be decisive. But in joint evaluation, certain characteristics of options may receive excessive weight, because they do not much affect people’s actual experience or because the particular contrast between joint options distorts people’s judgments. In joint as well as separate evaluation, people are subject to manipulation, though for different reasons. It follows that neither mode of evaluation is reliable. The appropriate approach will vary depending on the goal of the task: increasing consumer welfare, preventing discrimination, achieving optimal deterrence, or something else. Under appropriate circumstances, global evaluation would be much better, but it is often not feasible.
People are often reluctant to make decisions by calculating the costs and benefits of alternative courses of action in particular cases. Knowing, in addition, that they may err, people and institutions often resort to second-order strategies for reducing the burdens of, and risk of error in, first-order decisions. They make a second-order decision when they choose one from among such possible strategies. They adopt rules or presumptions; they create standards; they delegate authority to others; they take small steps; they pick rather than choose. Some of these strategies impose high costs before decision but low costs at the time of ultimate decision; others impose low costs both before and at the time of ultimate decision; still others impose low costs before decision while exporting to others the high costs at the time of decision. Second-order decisions also raise political, legal, and ethical issues.
When should government mandate labels? When would mandatory labels have desirable consequences for social welfare? How can those consequences be measured? When would labels do more good than harm?
A great deal of theoretical work explores the possibility that algorithms may be biased in one or another respect. But for purposes of law and policy, some of the most important empirical research finds exactly the opposite. In the context of bail decisions, an algorithm designed to predict flight risk does much better than human judges, in large part because the latter place an excessive emphasis on the current offense. Current Offense Bias, as we might call it, is best seen as a cousin of “availability bias,” a well-known source of mistaken probability judgments. The broader lesson is that well-designed algorithms should be able to avoid cognitive biases of many kinds. Existing research on bail decisions also casts a new light on how to think about the risk that algorithms will discriminate on the basis of race (or other factors). Algorithms can easily be designed so as to avoid taking account of race (or other factors). They can also be constrained so as to produce whatever kind of racial balance is sought, and thus to reveal tradeoffs among various social values.
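To make the last point concrete, that algorithms can be constrained to produce whatever kind of racial (or other) balance is sought, here is a minimal sketch (the scores, groups, and target rate are hypothetical, not the bail study's actual model): given predicted flight-risk scores, group-specific cutoffs can be chosen so that release rates match exactly.

```python
# Minimal sketch: constrain an algorithm to equalize release rates by group.
# Scores and groups are hypothetical, not drawn from the bail research.
import numpy as np

rng = np.random.default_rng(0)
scores = {g: rng.uniform(0.0, 1.0, 1000) for g in ("group_a", "group_b")}

def equal_release_cutoffs(scores_by_group, release_rate=0.7):
    """Per-group score cutoff yielding the same release rate in each group.

    Lower predicted flight risk means release; the cutoff is the in-group
    score quantile corresponding to the target release rate.
    """
    return {g: float(np.quantile(s, release_rate))
            for g, s in scores_by_group.items()}

for group, cutoff in equal_release_cutoffs(scores).items():
    rate = float((scores[group] <= cutoff).mean())
    print(group, round(cutoff, 3), round(rate, 3))  # equal rates by design
```

Any such constraint trades predicted risk against the chosen balance, which makes the tradeoffs among social values explicit rather than hidden.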
We live in a period in which liberal ideas about personal autonomy are under considerable pressure. Can poems be liberal? Baudelaire’s Enivrez-Vous captures something essential about the most appealing forms of liberalism, and about its underlying spirit (captured, in different ways, by John Stuart Mill, Walt Whitman, and Bob Dylan as well): its insistence on freedom of choice, on the diversity of tastes and preferences, and on human agency. The poem is liberal in its exuberance: its pleasure in its own edginess, its defiance, its sheer rebelliousness, its sense of mischief, its implicit laughter, its love of life and what it has to offer. It is the opposite of dutiful. It is far more exuberant than Mill’s On Liberty, but it is exuberant in the same way. It tells us something important about autonomy and freedom.
People are frequently exposed to competing evidence about climate change. We examined how new information alters people’s beliefs. We find that people who doubt that man-made climate change is occurring, and who do not favor an international agreement to reduce greenhouse gas emissions, show a form of asymmetrical updating: they change their beliefs in response to unexpected good news (suggesting that average temperature rise is likely to be less than previously thought) and fail to change their beliefs in response to unexpected bad news (suggesting that average temperature rise is likely to be greater than previously thought). By contrast, people who strongly believe that man-made climate change is occurring, and who favor an international agreement, show the opposite asymmetry: they change their beliefs far more in response to unexpected bad news than in response to unexpected good news. The results suggest that exposure to varied scientific evidence about climate change may increase polarization within a population due to asymmetrical updating. We explore the implications of these findings for how people will update their beliefs upon receiving new evidence about climate change, and for belief updating in other domains.
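A minimal simulation of this mechanism (the linear update rule and every parameter are my own illustration, not the study's model): give each camp a higher learning rate for news that points in its preferred direction, and repeated exposure to the same mixed evidence pushes the two estimates apart.

```python
# Minimal sketch of asymmetric updating producing polarization.
# Beliefs are estimates of average temperature rise (deg C); the linear
# update rule and all parameters are illustrative assumptions.

def update(belief, evidence, rate):
    """Shift belief toward the evidence by fraction `rate`."""
    return belief + rate * (evidence - belief)

GOOD_NEWS, BAD_NEWS = 1.0, 5.0  # hypothetical low/high projections
skeptic = believer = 3.0        # identical starting estimates

for _ in range(50):             # repeated exposure to the same mixed evidence
    for evidence in (GOOD_NEWS, BAD_NEWS):
        bad = evidence > 3.0
        skeptic = update(skeptic, evidence, 0.05 if bad else 0.30)
        believer = update(believer, evidence, 0.30 if bad else 0.05)

print(round(skeptic, 2), round(believer, 2))  # the two estimates diverge
```

Starting from the same prior, the skeptic settles near the low projection and the believer near the high one, which is the polarization pattern the abstract reports.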
To make a decision about a decision (whether to delegate, whether to take a small step, whether to pick, whether to opt), people need information. They may not be able to obtain all of the information they need, which is one reason that they might make a second-order decision. But they need to know something, which makes it natural to assume that it is good to obtain information. But when, exactly, is information good? There is a great deal of information that people have no interest in receiving. It has no value for them. It clutters the mind. It is boring. In addition, there is a great deal of information that people want not to receive. It is unpleasant. It is painful. In some cases, people do not want to know, in the sense that they have no particular motivation to find out. They will not take active steps to learn. In others, they want not to know, in the sense that they have a particular motivation not to find out. They decide to avoid learning, and they take active steps to do so.