This chapter explores how common challenges facing long-term care systems across the world have given rise to common trends in the development of long-term care service delivery: a focus on improving integration, the shift from residential care to home- and community-based care, the growing role of the private sector in care provision, and the emergence of digital technologies with transformative potential. Recent developments in five countries (Germany, Japan, Sweden, Norway, and Romania) are used to exemplify these trends and to distil overarching lessons for strengthening long-term care service delivery.
This chapter introduces the vanishing trial phenomenon – the emphasis on settlement and plea bargains and the decline of the judicial verdict. This phenomenon began in common law systems and coincided with the rise of alternative dispute resolution (ADR). ADR has been promulgated through a variety of legal constructs, including national laws and transnational directives. However, to date, it is often the case that neither the normative values of adjudication nor the fundamental values of ADR (such as dialogue and relation building) prevail. In their stead, especially in common law countries, there is a drive for efficiency in both courts and mediation sessions. Efficiency has, to a large extent, become synonymous with settlement, and the means by which settlement is reached receive little to no notice. Judges, in this setting, are expected to manage cases until they settle – though, as our research shows, some judges have more ambitious horizons for their role, offering new insights into possible future trajectories. As methods to replace the judicial role are under experimentation, the value and place of the judicial role have reached a critical crossroads.
This chapter looks at the most recent climate science and starkly sets out the severity of the problems ahead. It gives the reader all the knowledge needed to broadly understand the critical issues of our day from a technical perspective, including systems of production and consumption for energy and food, biodiversity loss, pollution (including plastics), disease threats and population levels. It then looks at ways in which we can technically transition to a sustainable way of living.
In a series of laboratory experiments, we explore the impact of different market features (the level of information, search costs, and the level of commitment) on agents’ behavior and on the outcome of decentralized matching markets. In our experiments, subjects on each side of the market actively search for a partner, make proposals, and are free to accept or reject any proposal received at any time throughout the game. Our results suggest that a low information level does not affect the stability or the efficiency of the final outcome, although it boosts market activity, unless coupled with search costs. Search costs have a significant negative impact on stability and on market activity. Finally, commitment harms stability slightly but acts as a disciplinary device for market activity and is associated with higher efficiency levels of the final outcome.
Corporate boards, expert panels, parliaments, cabinets, and even nations all take important decisions as a group. Selecting an efficient decision rule to aggregate individual opinions is paramount to the decision quality of these groups. In our experiment we measure revealed preferences over and efficiency of several important decision rules. Our results show that: (1) the efficiency of the theoretically optimal rule is not as robust as simple majority voting, and efficiency rankings in the lab can differ from theory; (2) participation constraints often hinder implementation of more efficient mechanisms; (3) these constraints are relaxed if the less efficient mechanism is risky; (4) participation preferences appear to be driven by realized rather than theoretical payoffs of the decision rules. These findings highlight the difficulty of relying on theory alone to predict which mechanism is better and more acceptable to participants in practice.
We study the distributional preferences of Americans during 2013–2016, a period of social and economic upheaval. We decompose preferences into two qualitatively different tradeoffs—fair-mindedness versus self-interest, and equality versus efficiency—and measure both at the individual level in a large and diverse sample. Although Americans are heterogeneous in terms of both fair-mindedness and equality-efficiency orientation, we find that the individual-level preferences in 2013 are highly predictive of those in 2016. Subjects that experienced an increase in household income became more self-interested, and those who voted for Democratic presidential candidates in both 2012 and 2016 became more equality-oriented.
We devise an experiment to explore the effect of different degrees of bargaining power on the design and the selection of contracts in a hidden-information context. In our benchmark case, each principal is matched with one agent of unknown type. In our second treatment, a principal can select one of three agents, while in a third treatment an agent may choose between the contract menus offered by two principals. We first show theoretically how different ratios of principals and agents affect outcomes and efficiency. Informational asymmetries generate inefficiency. In an environment where principals compete against each other to hire agents, these inefficiencies may disappear, but they are insensitive to the number of principals. In contrast, when agents compete to be hired, efficiency improves dramatically, and it increases in the relative number of agents because competition reduces the agents’ informational monopoly power. However, this environment also generates a high inequality level and is characterized by multiple equilibria. In general, there is a fairly high degree of correspondence between the theoretical predictions and the contract menus actually chosen in each treatment. There is, however, a tendency to choose more ‘generous’ (and more efficient) contract menus over time. We find that competition leads to a substantially higher probability of trade, and that, overall, competition between agents generates the most efficient outcomes.
This paper studies the effect of social relations on convergence to the efficient equilibrium in 2 × 2 coordination games from an experimental perspective. We employ a 2 × 2 factorial design in which we explore two different games with asymmetric payoffs and two matching protocols: “friends” versus “strangers”. In the first game, payoffs to the worse-off player are the same in the two equilibria, whereas in the second game, this player receives lower payoffs in the efficient equilibrium. Surprisingly, the results show that “strangers” coordinate on the efficient equilibrium more frequently than “friends” in both games. Network measures such as in-degree, out-degree and betweenness are all positively correlated with playing the strategy that leads to the efficient outcome, but clustering is not. In addition, ‘envy’ helps explain the failure to converge to the efficient outcome.
There is mixed evidence on whether subjects coordinate on the efficient equilibrium in experimental stag hunt games under complete information. A design that generates an anomalously high level of coordination, Rankin et al. (Games Econ Behav 32(2):315–337, 2000), varies payoffs each period in repeated play rather than holding them constant. These payoff “perturbations” are eerily similar to those used to motivate the theory of global games, except that the theory operates under incomplete information. Interestingly, that equilibrium selection concept is known to coincide with risk dominance rather than payoff dominance. Thus, in theory, a small change in experimental design should produce a different equilibrium outcome. We examine this prediction in two treatments. In one, we use public signals to match Rankin et al. (2000)’s design; in the other, we use private signals to match the canonical example of global games theory. We find little difference between the treatments: in both cases, subject play approaches payoff dominance. Our literature review reveals that this result may have more to do with the idiosyncrasies of our complete information framework than with the superiority of payoff dominance as an equilibrium selection principle.
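To make the two selection criteria concrete, the sketch below uses purely hypothetical stag hunt payoffs (not those of Rankin et al. (2000) or of our treatments) and checks risk dominance by comparing the Nash products of deviation losses at the two pure equilibria, in the spirit of Harsanyi and Selten.

```python
# Hypothetical stag hunt (illustrative payoffs only).
# Strategies: 0 = Stag, 1 = Hare; payoffs[(row, col)] = (row player, column player)
payoffs = {
    (0, 0): (4, 4), (0, 1): (1, 3),
    (1, 0): (3, 1), (1, 1): (3, 3),
}

def nash_product(eq, other):
    """Product of both players' deviation losses at the pure equilibrium (eq, eq)."""
    loss_row = payoffs[(eq, eq)][0] - payoffs[(other, eq)][0]  # row player deviates
    loss_col = payoffs[(eq, eq)][1] - payoffs[(eq, other)][1]  # column player deviates
    return loss_row * loss_col

stag, hare = nash_product(0, 1), nash_product(1, 0)
print("Payoff-dominant equilibrium: (Stag, Stag)")
print("Risk-dominant equilibrium:  ", "(Stag, Stag)" if stag > hare else "(Hare, Hare)")
# With these payoffs the Nash products are 1 vs. 4, so (Hare, Hare) risk-dominates
# even though (Stag, Stag) payoff-dominates, which is exactly the tension discussed above.
```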
An inequality game is an asymmetric 2 × 2 coordination game in which player 1 earns a substantially higher payoff than player 2 except in the inefficient Nash equilibrium (NE). The two players may have either common or conflicting interests over the two NE. This paper studies a redistribution scheme that allows the players to voluntarily transfer their payoffs after the play of an inequality game. We find that the redistribution scheme induces positive transfers from player 1 to player 2 in both common- and conflicting-interest games, and is particularly effective in increasing efficient coordination and reducing coordination failures in conflicting-interest games. We explain these findings by considering reciprocity by player 1 in response to the sacrifice made by player 2 in achieving efficient coordination in conflicting-interest games.
In this paper, we study the behavior of individuals when facing two different, but incentive-wise identical, institutions. We pair the first price auction with an equivalent lottery. Once a subject is assigned a value for the auctioned object, the first price auction can be modeled as a lottery in which the individual faces a given probability of winning a certain payoff. This setup allows us to explore to what extent misperception of the probability of winning in the auction is responsible for bidders in a first price auction bidding above the risk neutral Nash equilibrium prediction. The first result we obtain is that individuals, even though facing the same choice over probability/payoff pairs, behave differently depending on the type of choice they are called to make. When facing an auction, subjects with high values tend to bid significantly above the bid they choose in the corresponding lottery environment. We further find that in both the lottery and the auction environments, subjects tend to bid in excess of the bid predicted by the risk neutral model, at least for values in the intermediate range. Finally, we find that the difference between lottery behavior and auction behavior is substantially, but not totally, eliminated by showing subjects the probability of winning the auction.
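For orientation, and under textbook benchmark assumptions rather than our exact experimental parameters, the lottery representation can be written explicitly. With $n$ risk-neutral bidders holding i.i.d. values $v_i \sim U[0,1]$, the symmetric risk-neutral Nash equilibrium bid is

$$b^*(v) = \frac{n-1}{n}\,v,$$

and a bid $b$ submitted against rivals who follow $b^*$ wins with probability

$$\Pr(\text{win}\mid b) = \left(\frac{n\,b}{n-1}\right)^{n-1}, \qquad 0 \le b \le \frac{n-1}{n},$$

so, from the bidder's point of view, the auction is a lottery that pays $v-b$ with this probability and zero otherwise.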
We experimentally investigate in the laboratory prominent mechanisms that are employed in school choice programs to assign students to public schools, and we study how individual behavior is influenced by preference intensities and risk aversion. Our main results show that (a) the Gale–Shapley mechanism is more robust to changes in cardinal preferences than the Boston mechanism, independently of whether individuals can submit a complete or only a restricted ranking of the schools, and (b) subjects with a higher degree of risk aversion are more likely to play “safer” strategies under the Gale–Shapley but not under the Boston mechanism. Both results have important implications for enrollment planning and for the protection that risk-averse agents may seek.
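For readers unfamiliar with the mechanism, Gale–Shapley is built on deferred acceptance. The following is a minimal sketch of the student-proposing version with illustrative preferences and capacities; it is not the implementation used in any particular school choice program.

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs: dict student -> list of schools, most preferred first
    school_prefs:  dict school  -> list of students, most preferred first
    capacity:      dict school  -> number of seats
    """
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # next school each student proposes to
    held = {s: [] for s in school_prefs}            # tentatively held students per school
    unmatched = list(student_prefs)

    while unmatched:
        st = unmatched.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                                # exhausted own list; stays unmatched
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        held[school].sort(key=lambda x: rank[school][x])
        if len(held[school]) > capacity[school]:
            unmatched.append(held[school].pop())    # reject the worst-ranked proposer
    return held

# Illustrative example with hypothetical preferences:
students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
schools = {"A": ["s2", "s1", "s3"], "B": ["s1", "s3", "s2"]}
print(deferred_acceptance(students, schools, {"A": 1, "B": 2}))
# {'A': ['s2'], 'B': ['s1', 's3']}
```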
When agents, due to incomplete preferences, fail to have well-defined marginal valuations for goods, a great many government policies will maximize social welfare or achieve efficiency. Welfare economics then becomes useless as a practical guide to decision-making. For example, the values agents assign to increases in a public good will be discretely smaller than the values they assign to decreases. For society as a whole, a large valuation gap will form and a wide range of quantities of the public good will therefore qualify as optimal. Applied welfare economics and cost–benefit analysis bypass this obstacle by paying attention only to agents’ smallest valuations, thus slanting policymaking against public goods. The multiplicity of preferences that agents view as reasonable also neuters Pareto efficiency as a policy guide: virtually any policy change is likely to harm some of the preferences agents deem reasonable.
The danger to democratic norms aside, this chapter demonstrates that state government is also a needless source of additional regulation, additional taxation, and inefficient duplication of functions – in short, a waste of taxpayer money and a pointless burden on the citizenry. Yet, many of the specific functions currently performed by state governments are essential. The abolition of state government would therefore require the redistribution of those necessary functions between the national government and the local governments. This chapter demonstrates that such a redistribution would be administratively workable. To show this, it formulates general criteria for deciding which functions should go where and offers illustrations of how those criteria might be applied to specific functions in practice.
Chapter 3 discusses the fundamentals of backscatter radio communications, analyzes the RFID backscatter channel, its major limitations and mitigation approaches, and presents recent advances including novel RFID quadrature backscatter modulation techniques.
Randomized response (RR) is a well-known method for measuring sensitive behavior. Yet this method is not often applied because (i) its lower efficiency and the resulting need for larger sample sizes make applications of RR costly; (ii) despite its privacy-protection mechanism, the RR design may not be followed by every respondent; and (iii) it is incorrectly believed that RR yields estimates only of aggregate-level behavior and that these estimates cannot be linked to individual-level covariates. This paper addresses the efficiency problem by applying item randomized-response (IRR) models to the analysis of multivariate RR data. In these models, a person parameter is estimated based on multiple measures of the sensitive behavior under study, which allows for more powerful analyses of individual differences than are available from univariate RR data. Response behavior that does not follow the RR design is accommodated by introducing mixture components into the IRR models, with one component consisting of respondents who answer truthfully and another consisting of respondents who do not provide truthful responses. An analysis of data from two large-scale Dutch surveys conducted among recipients of invalidity insurance benefits shows that the willingness of a respondent to answer truthfully is related to the respondent's educational level and the perceived clarity of the instructions. A person is more willing to comply when the expected benefits of noncompliance are minor and social control is strong.
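For readers new to RR, the efficiency cost in (i) is easiest to see in the classic Warner design, a much simpler setting than the IRR mixture models used in this paper: the randomization that protects privacy inflates the variance of the prevalence estimate by the factor $(2p-1)^{-2}$. A minimal sketch with made-up numbers:

```python
import numpy as np

def warner_estimate(yes_count, n, p):
    """Prevalence estimate under Warner's (1965) randomized-response design.

    Each respondent answers the sensitive statement with probability p and
    its complement with probability 1 - p, so P(yes) = p*pi + (1 - p)*(1 - pi).
    """
    lam = yes_count / n                              # observed proportion of 'yes'
    pi_hat = (lam - (1 - p)) / (2 * p - 1)           # requires p != 0.5
    se_hat = np.sqrt(lam * (1 - lam) / (n * (2 * p - 1) ** 2))
    return pi_hat, se_hat

# 400 'yes' answers out of 1,000 respondents, randomization probability p = 0.7
print(warner_estimate(400, 1000, 0.7))               # about (0.25, 0.039)
```

The standard error of roughly 0.039 is more than twice what a direct question asked of the same sample would give, which is exactly the sample-size penalty referred to above.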
Data in social and behavioral sciences typically possess heavy tails. Structural equation modeling is commonly used in analyzing interrelations among variables of such data. Classical methods for structural equation modeling fit a proposed model to the sample covariance matrix, which can lead to very inefficient parameter estimates. By fitting a structural model to a robust covariance matrix for data with heavy tails, one generally gets more efficient parameter estimates. Because many robust procedures are available, we propose using the empirical efficiency of a set of invariant parameter estimates in identifying an optimal robust procedure. Within the class of elliptical distributions, analytical results show that the robust procedure leading to the most efficient parameter estimates also yields a most powerful test statistic. Examples illustrate the merit of the proposed procedure. The relevance of this procedure to data analysis in a broader context is noted.
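As a rough illustration of the underlying idea (not of the specific robust procedures compared in the paper), one can replace the sample covariance with a robust estimate before fitting the structural model. The snippet below uses simulated heavy-tailed data, with the minimum covariance determinant estimator standing in for whichever robust procedure proves most efficient.

```python
import numpy as np
from sklearn.covariance import MinCovDet

# Simulate heavy-tailed data (a multivariate t_3-like sample).
rng = np.random.default_rng(0)
z = rng.standard_normal((500, 4))
x = z * np.sqrt(3 / rng.chisquare(3, size=(500, 1)))

s_classical = np.cov(x, rowvar=False)                      # ordinary sample covariance
s_robust = MinCovDet(random_state=0).fit(x).covariance_    # one robust alternative

# A structural equation model would then be fitted to s_robust rather than
# s_classical, with the robust procedure chosen by its empirical efficiency.
```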
Asymptotic distribution theory for Brogden's form of the biserial correlation coefficient is derived, and large-sample estimates of its standard error are obtained. Its efficiency relative to the biserial correlation coefficient is examined. Other modifications of the statistic are evaluated and, on the basis of these results, recommendations for the choice of an estimator of biserial correlation are presented.
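For context, the classical biserial coefficient against which Brogden's modification is compared can be written (assuming a normal latent variable underlying the dichotomy) as

$$r_b \;=\; \frac{\bar X_1 - \bar X_0}{s_X}\cdot\frac{p\,q}{\varphi(z)},$$

where $\bar X_1$ and $\bar X_0$ are the means of the continuous variable in the two groups, $p$ and $q = 1-p$ are the group proportions, $s_X$ is the standard deviation of the continuous variable, and $\varphi(z)$ is the standard normal density at the quantile separating proportions $p$ and $q$.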
Factor scores are naturally predicted by means of their conditional expectation given the indicators y. Under normality this expectation is linear in y, but in general it is an unknown function of y. We discuss how, under nonnormality, factor scores can be predicted more precisely by a quadratic function of y.
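For concreteness, under the mean-zero linear factor model $y = \Lambda\xi + \varepsilon$ with $\xi \sim N(0,\Phi)$ and $\varepsilon \sim N(0,\Psi)$ independent, this conditional expectation is the familiar linear (regression) predictor

$$E(\xi \mid y) \;=\; \Phi\Lambda'\,(\Lambda\Phi\Lambda' + \Psi)^{-1}\,y,$$

whereas without normality $E(\xi \mid y)$ need not be linear in $y$, which is what leaves room for a quadratic predictor to do better.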
Methods for the analysis of one-factor randomized groups designs with ordered treatments are well established, but they do not apply in the case of more complex experiments. This article describes ordered treatment methods based on maximum-likelihood and robust estimation that apply to designs with clustered data, including those with a vector of covariates. The contrast coefficients proposed for the ordered treatment estimates yield higher power than those advocated by Abelson and Tukey; the proposed robust estimation method is shown (using theory and simulation) to yield both high power and robustness to outliers. Extensions for nonmonotonic alternatives are easily obtained.
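For reference, the Abelson–Tukey maximin contrast coefficients mentioned above are commonly written, for $k$ ordered groups, as

$$c_i \;=\; \sqrt{(i-1)\Bigl(1-\tfrac{i-1}{k}\Bigr)} \;-\; \sqrt{i\Bigl(1-\tfrac{i}{k}\Bigr)}, \qquad i = 1,\dots,k,$$

which for $k = 4$ gives approximately $(-0.87, -0.13, 0.13, 0.87)$ up to scale.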