ABSTRACT. Decisions under uncertainty depend not only on the degree of uncertainty but also on its source, as illustrated by Ellsberg's observation of ambiguity aversion. In this article we propose the comparative ignorance hypothesis, according to which ambiguity aversion is produced by a comparison with less ambiguous events or with more knowledgeable individuals. This hypothesis is supported in a series of studies showing that ambiguity aversion, present in a comparative context in which a person evaluates both clear and vague prospects, seems to disappear in a noncomparative context in which a person evaluates only one of these prospects in isolation.
INTRODUCTION
One of the fundamental problems of modern decision theory is the analysis of decisions under ignorance or ambiguity, where the probabilities of potential outcomes are neither specified in advance nor readily assessed on the basis of the available evidence. This issue was addressed by Knight [1921], who distinguished between measurable uncertainty or risk, which can be represented by precise probabilities, and unmeasurable uncertainty, which cannot. Furthermore, he suggested that entrepreneurs are compensated for bearing unmeasurable uncertainty as opposed to risk. Contemporaneously, Keynes [1921] distinguished between probability, representing the balance of evidence in favor of a particular proposition, and the weight of evidence, representing the quantity of evidence supporting that balance. He then asked, “If two probabilities are equal in degree, ought we, in choosing our course of action, to prefer that one which is based on a greater body of knowledge?” [p. 313].
Economists pervasively explain widespread risk aversion by the realistic assumption that people generally have diminishing marginal utility of wealth. Indeed, diminishing marginal utility of wealth probably explains much of our aversion to large-scale financial risk: We dislike vast uncertainty in lifetime wealth because the marginal value of a dollar when we are poor is higher than when we are rich.
Within the expected-utility framework, the concavity of the utility-of-wealth function is not only sufficient to explain risk aversion - it is also necessary: Diminishing marginal utility of wealth is the sole explanation for risk aversion. Unfortunately, it is an utterly implausible explanation for appreciable risk aversion, except when the stakes are very large. Any utility-of-wealth function that does not predict absurdly severe risk aversion over very large stakes predicts negligible risk aversion over modest stakes.
Arrow (1971, p. 100) shows that an expected-utility maximizer with a differentiable utility function will always want to take a sufficiently small stake in any positive-expected-value bet. That is, expected-utility maximizers are arbitrarily close to risk neutral when stakes are arbitrarily small. Although most economists understand this formal limit result, fewer appreciate that the approximate risk-neutrality prediction holds not just for very small stakes but for quite sizable and economically important stakes as well. Diminishing marginal utility of wealth is not a plausible explanation of people's aversion to risk on the scale of $10, $100, $1,000, or even more.
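The scale of this near-risk-neutrality is easy to check numerically. The sketch below is an illustration, not Arrow's derivation: it computes the risk premium a log-utility expected-utility maximizer attaches to a fair 50/50 gamble of ±$100, at an assumed wealth of $20,000 (both figures chosen for illustration).

```python
import math

def risk_premium(wealth, stake, utility, inverse_utility):
    """Amount an expected-utility maximizer would pay to avoid a
    fair 50/50 gamble of +/- stake at the given wealth level."""
    expected_u = 0.5 * utility(wealth + stake) + 0.5 * utility(wealth - stake)
    certainty_equivalent = inverse_utility(expected_u)
    return wealth - certainty_equivalent

# Log utility (constant relative risk aversion of 1) at $20,000 of wealth:
premium = risk_premium(20_000, 100, math.log, math.exp)
# premium comes to about $0.25: effectively risk neutral over a $100 stake
```

At this wealth level the agent would pay only about a quarter to shed the $100 gamble, so any appreciable aversion to modest-stakes risk must come from something other than the curvature of the utility-of-wealth function.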
ABSTRACT. We contrast the rational theory of choice in the form of expected utility theory with descriptive psychological analysis in the form of prospect theory, using problems involving the choice between political candidates and public referendum issues. The results showed that the assumptions underlying the classical theory of risky choice are systematically violated in the manner predicted by prospect theory. In particular, our respondents exhibited risk aversion in the domain of gains, risk seeking in the domain of losses, and a greater sensitivity to losses than to gains. This is consistent with the advantage of the incumbent under normal conditions and the potential advantage of the challenger in bad times. The results further show how a shift in the reference point could lead to reversals of preferences in the evaluation of political and economic options, contrary to the assumption of invariance. Finally, we contrast the normative and descriptive analyses of uncertainty in choice and address the rationality of voting.
The assumption of individual rationality plays a central role in the social sciences, especially in economics and political science. Indeed, it is commonly assumed that most if not all economic and political agents obey the maxims of consistency and coherence leading to the maximization of utility. This notion has been captured by several models that constitute the rational theory of choice, including the expected utility model for decision making under risk, the riskless theory of choice among commodity bundles, and the Bayesian theory for the updating of belief.
ABSTRACT. We discuss the cognitive and the psychophysical determinants of choice in risky and riskless contexts. The psychophysics of value induce risk aversion in the domain of gains and risk seeking in the domain of losses. The psychophysics of chance induce overweighting of sure things and of improbable events, relative to events of moderate probability. Decision problems can be described or framed in multiple ways that give rise to different preferences, contrary to the invariance criterion of rational choice. The process of mental accounting, in which people organize the outcomes of transactions, explains some anomalies of consumer behavior. In particular, the acceptability of an option can depend on whether a negative outcome is evaluated as a cost or as an uncompensated loss. The relation between decision values and experience values is discussed.
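The first two claims of the abstract can be illustrated with the standard prospect-theory value function. The exponent and loss-aversion parameters below are the Tversky and Kahneman (1992) median estimates, assumed here for illustration rather than taken from this article.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave over gains, convex and
    steeper over losses (parameters: Tversky & Kahneman, 1992)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Risk aversion for gains: a sure $50 is valued above a 50/50 shot at $100.
assert value(50) > 0.5 * value(100)
# Risk seeking for losses: the 50/50 loss gamble beats a sure $50 loss.
assert 0.5 * value(-100) > value(-50)
# Loss aversion: a $100 loss looms larger than a $100 gain.
assert abs(value(-100)) > value(100)
```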
Making decisions is like speaking prose - people do it all the time, knowingly or unknowingly. It is hardly surprising, then, that the topic of decision making is shared by many disciplines, from mathematics and statistics, through economics and political science, to sociology and psychology. The study of decisions addresses both normative and descriptive questions. The normative analysis is concerned with the nature of rationality and the logic of decision making. The descriptive analysis, in contrast, is concerned with people's beliefs and preferences as they are, not as they should be. The tension between normative and descriptive considerations characterizes much of the study of judgment and choice.
ABSTRACT. Decision theory distinguishes between risky prospects, where the probabilities associated with the possible outcomes are assumed to be known, and uncertain prospects, where these probabilities are not assumed to be known. Studies of choice between risky prospects have suggested a nonlinear transformation of the probability scale that overweights low probabilities and underweights moderate and high probabilities. The present article extends this notion from risk to uncertainty by invoking the principle of bounded subadditivity: An event has greater impact when it turns impossibility into possibility, or possibility into certainty, than when it merely makes a possibility more or less likely. A series of studies provides support for this principle in decision under both risk and uncertainty and shows that people are less sensitive to uncertainty than to risk. Finally, the article discusses the relationship between probability judgments and decision weights and distinguishes relative sensitivity from ambiguity aversion.
Decisions are generally made without definite knowledge of their consequences. The decisions to invest in the stock market, to undergo a medical operation, or to go to court are generally made without knowing in advance whether the market will go up, the operation will be successful, or the court will decide in one's favor. Decision under uncertainty, therefore, calls for an evaluation of two attributes: the desirability of possible outcomes and their likelihood of occurrence. Indeed, much of the study of decision making is concerned with the assessment of these values and the manner in which they are - or should be - combined.
ABSTRACT. Preference can be inferred from direct choice between options or from a matching procedure in which the decision maker adjusts one option to match another. Studies of preferences between two-dimensional options (e.g., public policies, job applicants, benefit plans) show that the more prominent dimension looms larger in choice than in matching. Thus, choice is more lexicographic than matching. This finding is viewed as an instance of a general principle of compatibility: The weighting of inputs is enhanced by their compatibility with the output. To account for such effects, we develop a hierarchy of models in which the trade-off between attributes is contingent on the nature of the response. The simplest theory of this type, called the contingent weighting model, is applied to the analysis of various compatibility effects, including the choice-matching discrepancy and the preference-reversal phenomenon. These results raise both conceptual and practical questions concerning the nature, the meaning and the assessment of preference.
The relation of preference between acts or options is the key element of decision theory that provides the basis for the measurement of utility or value. In axiomatic treatments of decision theory, the concept of preference appears as an abstract relation that is given an empirical interpretation through specific methods of elicitation, such as choice and matching. In choice the decision maker selects an option from an offered set of two or more alternatives. In matching the decision maker is required to set the value of some variable in order to achieve an equivalence between options (e.g., what chance to win $750 is as attractive as 1 chance in 10 to win $2,500?).
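As a point of reference for the matching question in the parenthesis, the risk-neutral answer simply equates expected values; the interest of the matching procedure lies in how far elicited matches deviate from this benchmark.

```python
# Risk-neutral benchmark for the matching question quoted above:
# find the chance p such that a p chance of winning $750 has the
# same expected value as 1 chance in 10 of winning $2,500.
p = (0.1 * 2500) / 750
# p works out to 1/3
```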
ABSTRACT. Eliciting people's values is a central pursuit in many areas of the social sciences, including survey research, attitude research, economics, and behavioral decision theory. These disciplines differ considerably in the core assumptions they make about the nature of the values that are available for elicitation. These assumptions lead to very different methodological concerns and interpretations, as well as to different risks of reading too much or too little into people's responses. The analysis here characterizes these assumptions and the research paradigms based on them. It also offers an account of how they arise, rooted in the psychological and sociological contexts within which different researchers function.
Taken all together, how would you say things are these days - would you say that you are very happy, pretty happy, or not too happy?
National Opinion Research Center (NORC), 1978
Think about the last time during the past month that you were tired easily. Suppose that it had been possible to pay a sum of money to have eliminated being tired easily immediately that one time. What sum of money would you have been willing to pay?
Dickie, Gerking, McClelland, & Schulze, 1987, p. 19 (Appendix 1)
ABSTRACT. Community standards of fairness for the setting of prices and wages were elicited by telephone surveys. In customer or labor markets, it is acceptable for a firm to raise prices (or cut wages) when profits are threatened and to maintain prices when costs diminish. It is unfair to exploit shifts in demand by raising prices or cutting wages. Several market anomalies are explained by assuming that these standards of fairness influence the behavior of firms.
Just as it is often useful to neglect friction in elementary mechanics, there may be good reasons to assume that firms seek their maximal profit as if they were subject only to legal and budgetary constraints. However, the patterns of sluggish or incomplete adjustment often observed in markets suggest that some additional constraints are operative. Several authors have used a notion of fairness to explain why many employers do not cut wages during periods of high unemployment (George Akerlof, 1979; Robert Solow, 1980). Arthur Okun (1981) went further in arguing that fairness also alters the outcomes in what he called customer markets - characterized by suppliers who are perceived as making their own pricing decisions, have some monopoly power (if only because search is costly), and often have repeat business with their clientele. Like labor markets, customer markets also sometimes fail to clear:
… firms in the sports and entertainment industries offer their customers tickets at standard prices for events that clearly generate excess demand.
In their article on Prospect Theory, Kahneman and Tversky (1979) introduced a nonlinear transformation of probabilities, p → w(p) (or π(p) in the 1979 notation), which is also called a probability weighting function. The purpose of the transformation was to explain several key expected utility violations, including the classical paradoxes of Allais (1953). Taking each violation as an independent constraint on w(p), they composed a conjecture about its shape - a conjecture that has been refined but not substantially altered by later work (Camerer and Ho 1994, Wu and Gonzalez 1996a).
Although its empirical picture has come into focus, the weighting function has remained a somewhat tricky object to analyze - at least in comparison with utility functions. A glance at Figure 4.1, displaying some recent estimates, reveals the nature of the problem. In the figure, the x-axis represents probability of an outcome, and the y-axis the weight associated with that probability. Unlike utility functions, in which the deviation from linearity has an essentially one-dimensional character (i.e., concavity), here we see both concavity and convexity. Curiously, the function is asymmetrical, with the convex region being about twice as large as the concave region. Overall, it does not look like a shape that one would draw unless compelled by strong empirical evidence.
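One widely used one-parameter form with this inverse-S shape is the Tversky and Kahneman (1992) weighting function. The sketch below uses their median estimate γ = 0.61 and illustrates the qualitative shape discussed above; it is not the set of estimates plotted in Figure 4.1.

```python
def w(p, gamma=0.61):
    """One-parameter inverse-S weighting function (Tversky & Kahneman,
    1992); gamma = 0.61 is their median estimate for gains."""
    pg = p ** gamma
    return pg / (pg + (1 - p) ** gamma) ** (1 / gamma)

# Low probabilities are overweighted, moderate and high ones underweighted:
assert w(0.01) > 0.01   # w(0.01) is roughly 0.055
assert w(0.50) < 0.50   # the crossover point w(p) = p lies below one half
assert w(0.90) < 0.90   # w(0.90) is roughly 0.71
```

With this parameterization the crossover falls near p = 0.3, so the convex (underweighting) region is indeed roughly twice the size of the concave one, matching the asymmetry noted above.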
ABSTRACT. The equity premium puzzle refers to the empirical fact that stocks have outperformed bonds over the last century by a surprisingly large margin. We offer a new explanation based on two behavioral concepts. First, investors are assumed to be “loss averse,” meaning that they are distinctly more sensitive to losses than to gains. Second, even long-term investors are assumed to evaluate their portfolios frequently. We dub this combination “myopic loss aversion.” Using simulations, we find that the size of the equity premium is consistent with the previously estimated parameters of prospect theory if investors evaluate their portfolios annually.
INTRODUCTION
There is an enormous discrepancy between the returns on stocks and fixed income securities. Since 1926 the annual real return on stocks has been about 7 percent, while the real return on treasury bills has been less than 1 percent. As demonstrated by Mehra and Prescott [1985], the combination of a high equity premium, a low risk-free rate, and smooth consumption is difficult to explain with plausible levels of investor risk aversion. Mehra and Prescott estimate that investors would have to have coefficients of relative risk aversion in excess of 30 to explain the historical equity premium, whereas previous estimates and theoretical arguments suggest that the actual figure is close to 1.0. We are left with a pair of questions: why is the equity premium so large, or why is anyone willing to hold bonds?
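A back-of-the-envelope version of the argument can be sketched as follows. Assuming normally distributed annual stock returns with the historical 7 percent mean and an assumed 20 percent standard deviation, an investor who weights losses by the Tversky-Kahneman (1992) loss-aversion coefficient and evaluates returns one year at a time finds stocks barely competitive with bills.

```python
from math import erf, exp, pi, sqrt

LAMBDA = 2.25  # loss-aversion coefficient (Tversky & Kahneman, 1992)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def loss_averse_value(mu, sigma):
    """Expected value of a normal return with losses weighted LAMBDA."""
    z = mu / sigma
    expected_gain = mu * norm_cdf(z) + sigma * norm_pdf(z)  # E[max(r, 0)]
    expected_loss = mu - expected_gain                      # E[min(r, 0)]
    return expected_gain + LAMBDA * expected_loss

stocks = loss_averse_value(0.07, 0.20)  # 7% mean, assumed 20% std deviation
bills = 0.01                            # the (nearly) riskless 1% return
# stocks evaluates to roughly 0.008, just below 0.01: seen one year at a
# time, the loss-averse investor is content to hold bills.
```

This is only an illustrative calculation, not the paper's simulation, but it conveys why frequent evaluation combined with loss aversion can reconcile a large equity premium with a willingness to hold bonds.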
ABSTRACT. Two consumer strategies for the purchase of multiple items from a product class are contrasted. In one strategy (simultaneous choices/ sequential consumption), the consumer buys several items on one shopping trip and consumes the items over several consumption occasions. In the other strategy (sequential choices/sequential consumption), the consumer buys one item at a time, just before each consumption occasion. The first strategy is posited to yield more variety seeking than the second. The greater variety seeking is attributed to forces operating in the simultaneous choices/sequential consumption strategy, including uncertainty about future preferences and a desire to simplify the decision. Evidence from three studies, two involving real products and choices, is consistent with these conjectures. The implications and limitations of the results are discussed.
Consumption of products often is separated temporally from the decision to buy those products. Hence, when making a purchase decision, consumers must predict their preferences at the time of consumption (Kahneman and Snell, in press; March 1978). The decision is complicated further if consumers want to avoid going to the store before each consumption occasion and decide to buy several items in a category for a number of occasions. For example, in one shopping trip a consumer might purchase a week's supply of yogurt. The research reported here examines the strategies consumers use when making multiple purchases in a product category for future consumption. The behavior of consumers who make multiple purchases in a product class for several consumption occasions is compared with that of consumers who purchase one item at a time before each consumption occasion.
There are many opportunities to observe turbulent flows in our everyday surroundings, whether it be smoke from a chimney, water in a river or waterfall, or the buffeting of a strong wind. In observing a waterfall, we immediately see that the flow is unsteady, irregular, seemingly random and chaotic, and surely the motion of every eddy or droplet is unpredictable. In the plume formed by a solid rocket motor (see Fig. 1.1), turbulent motions of many scales can be observed, from eddies and bulges comparable in size to the width of the plume, to the smallest scales the camera can resolve. The features mentioned in these two examples are common to all turbulent flows.
More detailed and careful observations can be made in laboratory experiments. Figure 1.2 shows planar images of a turbulent jet at two different Reynolds numbers. Again, the concentration fields are irregular, and a large range of length scales can be observed.
As implied by the above discussion, an essential feature of turbulent flows is that the fluid velocity field varies significantly and irregularly in both position and time. The velocity field (which is properly introduced in Section 2.1) is denoted by U(x, t), where x is the position and t is time.
Figure 1.3 shows the time history U1(t) of the axial component of velocity measured on the centerline of a turbulent jet (similar to that shown in Fig. 1.2).
From vector calculus we are familiar with scalars and vectors. A scalar has a single value, which is the same in any coordinate system. A vector has a magnitude and a direction, and (in any given coordinate system) it has three components. With Cartesian tensors, we can represent not only scalars and vectors, but also quantities with more directions associated with them. Specifically, an Nth-order tensor (N ≥ 0) has N directions associated with it, and (in a given Cartesian coordinate system) it has 3N components. A zeroth-order tensor is a scalar, and a first-order tensor is a vector. Before defining higher-order tensors, we briefly review the representation of vectors in Cartesian coordinates.
Cartesian coordinates and vectors
Fluid flows (and other phenomena in classical mechanics) take place in the three-dimensional, Euclidean, physical space. As sketched in Fig. A.1, let E denote a Cartesian coordinate system in physical space. This is defined by the position of the origin O, and by the directions of the three mutually perpendicular axes. The unit vectors in the three coordinate directions are denoted by e1, e2, and e3. We write ei to refer to any one of these, with the understanding that the suffix i (or any other suffix) takes the value 1, 2, or 3.
The basic properties of the unit vectors ei are succinctly expressed in terms of the Kronecker delta δij.
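These properties are easy to verify numerically. The sketch below (illustrative, using NumPy) checks that the dot products of the unit vectors reproduce the Kronecker delta, and that contracting a second-order tensor with δij recovers its trace.

```python
import numpy as np

# Unit vectors e1, e2, e3 of a Cartesian coordinate system, stored as rows.
e = np.eye(3)

# Their dot products reproduce the Kronecker delta: e_i . e_j = delta_ij
# (1 if i equals j, 0 otherwise).
delta = np.array([[e[i] @ e[j] for j in range(3)] for i in range(3)])
assert np.array_equal(delta, np.eye(3))

# A second-order tensor has 3**2 = 9 components; contracting it with
# delta_ij sums the diagonal components, i.e., gives the trace a_ii.
a = np.arange(9.0).reshape(3, 3)
assert np.einsum('ij,ij->', delta, a) == np.trace(a)
```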
The most commonly studied turbulent free shear flows are jets, wakes, and mixing layers. As the name ‘free’ implies, these flows are remote from walls, and the turbulent flow arises because of mean-velocity differences.
We begin by examining the round jet. By combining experimental observations (Section 5.1) with the Reynolds equations (Section 5.2), a good deal can be learned, not only about the round jet, but also about the behavior of turbulent flows in general. In Section 5.3, we study the turbulent kinetic energy in the round jet, and the important processes of production and dissipation of energy. Other self-similar free shear flows are briefly described in Section 5.4; and further observations about the behavior of free shear flows are made in Section 5.5.
The round jet: experimental observations
A description of the flow
We have already encountered the round jet in Chapter 1, for example, Figs. 1.1–1.4. The ideal experimental configuration and the coordinate system employed are shown in Fig. 5.1. A Newtonian fluid steadily flows through a nozzle of diameter d, which produces (approximately) a flat-topped velocity profile, with velocity UJ. The jet from the nozzle flows into an ambient of the same fluid, which is at rest at infinity. The flow is statistically stationary and axisymmetric. Hence statistics depend on the axial and radial coordinates (x and r), but are independent of time and of the circumferential coordinate, θ.
The mean velocity 〈U(x, t)〉 and the Reynolds stresses 〈uiuj〉 are the first and second moments of the Eulerian PDF of velocity f(V; x, t) (Eq. (3.153)). In PDF methods, a model transport equation is solved for a PDF such as f(V; x, t).
The exact transport equation for f(V; x, t) is derived from the Navier–Stokes equations in Appendix H, and discussed in Section 12.1. In this equation, all convective transport is in closed form – in contrast to the term ∂〈uiuj〉/∂xj in the mean-momentum equation, and ∂〈uiujuk〉/∂xk in the Reynolds-stress equation. A closed model equation for the PDF – based on the generalized Langevin model (GLM) – is given in Section 12.2, and it is shown how this is closely related to models for the pressure–rate-of-strain tensor, ℛij.
Central to PDF methods are stochastic Lagrangian models, which involve new concepts and require additional mathematical tools. The necessary background on diffusion processes and stochastic differential equations is given in Appendix J. The simplest stochastic Lagrangian model is the Langevin equation, which provides a model for the velocity following a fluid particle. This model is introduced and examined in Section 12.3.
A closure cannot be based on the PDF of velocity alone, because this PDF contains no information on the turbulence timescale. One way to obtain closure is to supplement the PDF equation with the model dissipation equation. A superior way, described in Section 12.5, is to consider the joint PDF of velocity and a turbulence frequency.
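As a concrete illustration of a stochastic Lagrangian model, the sketch below integrates a one-component simplified Langevin model with the Euler–Maruyama scheme. The model constant C0 and the values of k, ε, and the time step are assumed for illustration, and the long-time variance is compared with the Ornstein–Uhlenbeck stationary result b²/(2a).

```python
import random
from math import sqrt

random.seed(1)

# Euler-Maruyama integration of a one-component simplified Langevin model:
#   dU = -(1/2 + 3*C0/4) * (eps/k) * (U - U_mean) dt + sqrt(C0*eps) dW.
# C0, k, eps, and dt below are assumed values for illustration only.
C0 = 2.1
k, eps = 1.0, 1.0
U_mean = 0.0
dt = 1.0e-3

a = (0.5 + 0.75 * C0) * eps / k      # drift rate
b2 = C0 * eps                        # squared diffusion coefficient

U = 0.0
samples = []
for _ in range(500_000):
    U += -a * (U - U_mean) * dt + sqrt(b2 * dt) * random.gauss(0.0, 1.0)
    samples.append(U)

# After the initial transient, U behaves as an Ornstein-Uhlenbeck process
# whose stationary variance is b2 / (2*a), about 0.506 for these values.
tail = samples[100_000:]
var = sum(u * u for u in tail) / len(tail)
```

With k and ε held fixed, this is nothing more than an Ornstein–Uhlenbeck process, but it conveys the key idea: the PDF of velocity evolves through an ensemble of stochastic particles rather than through a closure for the moment equations.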