Systematic reviews are often characterized as being inherently replicable, but several studies have challenged this claim. The objective of this study was to investigate the variation in results following independent replication of literature searches and meta-analyses of systematic reviews. We included 10 systematic reviews of the effects of health interventions published in November 2020. Two information specialists repeated the original database search strategies. Two experienced review authors screened full-text articles, extracted data, and calculated the results for the first reported meta-analysis. All replicators were initially blinded to the results of the original review. A meta-analysis was considered not ‘fully replicable’ if the original and replicated summary estimate or confidence interval width differed by more than 10%, and meaningfully different if the direction or statistical significance of the summary estimate differed. The difference between the number of records retrieved by the original reviewers and the information specialists exceeded 10% in 25/43 (58%) searches for the first replicator and 21/43 (49%) searches for the second. Eight meta-analyses (80%, 95% CI: 49–96) were initially classified as not fully replicable. After screening and data discrepancies were addressed, the number of meta-analyses classified as not fully replicable decreased to five (50%, 95% CI: 24–76). Differences were classified as meaningful in one blinded replication (10%, 95% CI: 1–40) and in none of the unblinded replications (0%, 95% CI: 0–28). The results of systematic review processes were not always consistent when their reported methods were repeated. However, these inconsistencies seldom affected summary estimates from meta-analyses in a meaningful way.
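As a rough sketch of the 10% threshold rule described in this abstract, the classification can be expressed as follows; the function name and the numbers in the example are invented for illustration and are not taken from the study.

def classify_replication(orig_est, rep_est, orig_ci, rep_ci, tol=0.10):
    # A replication counts as 'not fully replicable' if the summary estimate
    # or the confidence-interval width differs from the original by more
    # than tol (10%), per the rule stated in the abstract.
    est_diff = abs(rep_est - orig_est) / abs(orig_est)
    orig_width = orig_ci[1] - orig_ci[0]
    rep_width = rep_ci[1] - rep_ci[0]
    width_diff = abs(rep_width - orig_width) / orig_width
    if est_diff > tol or width_diff > tol:
        return "not fully replicable"
    return "fully replicable"

# Hypothetical numbers, for illustration only:
print(classify_replication(0.80, 0.86, (0.65, 0.95), (0.70, 1.02)))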
Inattentive survey respondents are a growing concern for social scientists who rely on online surveys for their research. While inattentiveness has been well documented in lower quality sample sources, there is less understanding of how common the phenomenon is in high-quality surveys. We document the presence of a small percentage of respondents in Cooperative Election Study surveys who pass quality control measures but still exhibit inattentive behavior. We show that these respondents may affect public opinion estimates for small subgroups. Finally, we present the results from an experiment testing whether inattentive respondents can be encouraged to pay more attention, but we find that such an intervention fails.
Arend Lijphart's Patterns of Democracy, like most of his work, elicited fierce scientific debate. This article replicates some of the analyses proposed in its second edition (published in 2012) in the light of the critiques received by the first edition (published in 1999). It primarily examines the relationship between institutional setup and interest group representation, disentangling the effect of consensualism from that of corporatism on issues such as macroeconomic performance and governance capabilities. The article further deepens our understanding of the complex causal mechanisms connecting these variables, proposing a more sophisticated empirical investigation that emphasises selection effects and conjunctural causation.
This paper presents a review and synthesis of resources available to social entrepreneurs considering social franchising as an option for scale. We identified 20 publications produced by organizations supporting social franchising and four peer-reviewed journal manuscripts. Commonalities and differences between social and commercial franchising are discussed, with a focus on capacities and considerations needed to undertake social franchising. Based on our synthesis, we propose a seven-stage approach to guide social entrepreneurs in considering this option and to inform future research on social franchising as one potential mechanism for scaling impact.
Over the last decade, the field of political science has been exposed to two concomitant developments: a surge of Big Data (BD) and a growing demand for transparency. To date, however, we do not know the extent to which these two developments are compatible with one another. The purpose of this article is to assess, empirically, the extent to which BD political science (broadly defined) adheres to established norms of transparency in the discipline. To address this question, we develop an original dataset of 1555 articles drawn from the Web of Science database covering the period 2008–2019. In doing so, we also provide an assessment of the current level of transparency in empirical political science and quantitative political science in general. We find that articles using Big Data are significantly less likely than other, more traditional works of political science to share replication files. Our study also illustrates some of the promises and challenges associated with extracting data from Web of Science and similar databases.
Do economic sanctions negatively affect democracy and human rights in targeted countries? Although often intended to improve these outcomes, their record of doing so has historically been mixed at best. Most canonical studies cover the 1980s–1990s, but sanctions practice has since undergone major innovations following debates on humanitarian harm. Given this move toward ‘targeted’ sanctions, it stands to reason that sanctions may today be achieving their intended purposes. I take up policy and methodological innovations to re-examine the effects of Western sanctions seeking to improve democracy and human rights from 1990 to 2021. I find that negative effects persist, offering an important update to the empirical literature. Beyond this contribution, I present a template for replicating and extending country-year research in international relations (IR).
In this rejoinder, we provide a historical overview of the emerging critiques of the L2 Motivational Self System and examine the structural and conceptual factors that have perpetuated these unresolved issues. As our analysis shows, a core concern is that the L2 Motivational Self System lacks clear falsifiability criteria, making it difficult to evaluate or revise in light of contradictory evidence. Despite numerous inconsistent or null findings, there appears to be no threshold at which core assumptions are reconsidered. We argue that advancing the field requires a renewed commitment to falsifiability, where constructs are subjected to empirical scrutiny and can, in principle, be shown to be wrong. Beyond technical matters, we acknowledge the emotional and professional challenges involved in confronting evidence that undermines familiar frameworks. We advocate for a shift toward greater theoretical precision, methodological transparency, and openness to critique.
Assembling datasets is crucial for advancing social science research, but researchers who construct datasets often face difficult decisions with little guidance. Once public, these datasets are sometimes used without proper consideration of their creators’ choices and how these affect the validity of inferences. To support both data creators and data users, we discuss the strengths, limitations, and implications of various data collection methodologies and strategies, showing how seemingly trivial methodological differences can significantly impact conclusions. The lessons we distill build on the process of constructing three cross-national datasets on education systems. Despite their common focus, these datasets differ in the dimensions they measure, as well as their definitions of key concepts, coding thresholds and other assumptions, types of coders, and sources. From these lessons, we develop and propose general guidelines for dataset creators and users aimed at enhancing transparency, replicability, and valid inferences in the social sciences.
A valid model is one in which the inferences drawn from it are true. Many factors can threaten the validity of a model, including imprecise or inaccurate measurements, bias in study design or in sampling, and misspecification of the model itself.
A key way to validate a model is to replicate its findings. The best method of replication is collecting new data. However, when that is not possible, a replication can be performed by dividing the sample using a split-group, jackknife, or bootstrap method. Of these three methods, split-group is the strongest but requires a dataset large enough to split the sample. The bootstrap is the weakest method of replication, but it produces more valid confidence intervals than a simple model.
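As a hedged illustration of the weakest of these internal-replication methods, the sketch below computes a percentile bootstrap confidence interval for a regression slope; the data and all names are invented for the example, not drawn from any particular study.

import numpy as np

rng = np.random.default_rng(0)

# Invented data: outcome y depends linearly on predictor x plus noise.
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# Bootstrap: refit the model on rows resampled with replacement and
# collect the slope estimate from each resample.
slopes = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    slope, _intercept = np.polyfit(x[idx], y[idx], 1)
    slopes.append(slope)

# Percentile bootstrap 95% confidence interval for the slope.
lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"95% bootstrap CI for the slope: [{lo:.2f}, {hi:.2f}]")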
The studies in Shafir (1993, Memory & Cognition 21, 546–556) examined the impact of decision frames (choosing vs. rejecting) on decision-making. Our replication—Chandrashekar et al. (2021, Judgment and Decision Making 16, 36–56)—revealed mixed results with only partial support for the original findings, concluding that only 2 of the 8 scenarios were successfully replicated. Our data from an exploratory extension suggested a pattern in support of an alternative theoretical mechanism aligning with Wedell’s (1997, Memory & Cognition 25, 873–887) accentuation hypothesis. Shafir and Cheek’s (2024) commentary criticized our approach to replication, questioned the value and importance of direct close replications overall, and shared their views regarding the theory and scope of the phenomenon, together with new information about what they consider the necessary steps to empirically test the phenomenon. In our response, we clarify misunderstandings and address the empirical findings shared in the commentary. We discuss and defend the value and importance of direct replications and the necessity of full transparency regarding theoretical assumptions and the process of empirical investigation. Finally, we call for broader implementation of open science: conducting more direct close replications; sharing all protocols, materials, data, and code; and implementing outcome-blind reviewing and Registered Reports. These steps would allow for stronger theoretical and empirical foundations and a more credible and robust psychological science.
In a close replication study of Darcy et al. (2016), Huensch (2024) reported a lack of clear relationships between inhibitory control (IC) and phonological processing, contrary to the initial findings. Given the general unreliability of response-time differences, which are often the basis of IC measures and could potentially mask small effects, we performed secondary analyses on Huensch’s (2024) open data set to investigate (a) the extent to which the reliability of IC measures could be improved using model-based approaches (Hui & Wu, 2024), (b) the correlations between the different IC tasks, and (c) their predictive power for phonological processing, based on the more reliable indices. Results showed that model-based approaches generally improved reliability, raising the Stroop and Simon tasks in particular to acceptable levels. Yet correlations between IC tasks remained low, and partial correlation and hierarchical regression analyses still failed to reveal significant relationships between IC and phonological processing, further confirming Huensch’s (2024) findings.
I explore and defend the unusual view that the replacement of matter taking place in the human body undermines egoistic reasons, and that we therefore have little or no basis for long-term egoistic concern. I begin by arguing that you should not have egoistic concern for a replica, i.e. a person resulting from a complete and sudden replacement of matter. I then argue that when it comes to egoistic concern, replication is not relevantly different from the slower and more gradual form of replacement found in human metabolism: if the former undermines egoistic reasons, so does the latter. I grant that the resulting view is, in some respects, hard to accept, but I conclude that we should at least treat it as a serious possibility.
This is a master's-level overview of the mathematical concepts needed to fully grasp the art of derivatives pricing, and a must-have for anyone considering a career in quantitative finance in industry or academia. Starting from the foundations of probability, this textbook allows students with limited technical background to build a solid knowledge of the most important principles. It offers a unique compromise between intuition and mathematics, even when discussing abstract ideas such as change of measure. Mathematical concepts are introduced initially using toy examples, before moving on to finance cases, both in discrete and continuous time. Throughout, numerical applications and simulations illuminate the analytical results. The end-of-chapter exercises test students' understanding, with solved exercises at the end of each part to aid self-study. Additional resources are available online, including slides, code and an interactive app.
We introduce derivative securities and ask ourselves how to determine their price from a financial perspective. We discover that the cashflow of zero-coupon bonds and forward contracts can be artificially replicated by adopting a static trading strategy featuring primary assets. With almost no math, we obtain the central result that every product whose payoff is a linear function of the future price of tradeable products can be priced without relying on any model. The story is different for products whose payoff is a non-linear function of the future price of assets, such as European calls and puts. In such cases, pricing by replication may still be possible, but it is more complex: it requires a model and features a dynamic replicating strategy, evolving through time. We use the law of one price to give a clear interpretation to the no-arbitrage price of derivatives. We conclude the chapter with the general expression of a derivative’s price, given by the risk-neutral expectation of its payoff discounted at the risk-free rate. The purpose of the book is to introduce all the concepts needed to understand why and when this result holds, and how it can be evaluated in practice.
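For orientation, the general pricing result mentioned at the end of this chapter is usually written as follows; the notation here is ours, chosen only to fix ideas, with r a constant risk-free rate, T the maturity, X the payoff at maturity, and Q the risk-neutral measure:

\[
V_0 = e^{-rT}\,\mathbb{E}^{\mathbb{Q}}\!\left[ X \right].
\]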
We study the application of the Cox–Ross–Rubinstein model to pricing financial contracts. The determination of the “fair price” consists in looking for an adapted self-financing trading strategy replicating the payoff of the product at hand, and determining the amount needed to launch this procedure. We observe that the mathematical expression of this price takes the form of the conditional expectation of the payoff discounted at the risk-free rate, provided that one considers a specific set of probabilities when computing the expectation. This amounts to computing the expectation under a special probability measure (called the risk-neutral measure) equivalent to – but different from – the physical probability measure. We show that the risk-neutral measure has the specific property that the price processes of assets paying no cashflows are martingales when discounted at the risk-free rate. We illustrate using zero-coupon bonds, forward contracts, and European options that the price found by computing the risk-neutral expectation indeed enables us to start a self-financing strategy that replicates the payoff of those products on a binomial tree.
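A minimal sketch of Cox–Ross–Rubinstein pricing of a European call by backward induction is given below; the parameter values are illustrative only and are not taken from the chapter.

import math

# Illustrative inputs: spot, strike, risk-free rate, volatility, maturity, steps.
S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.20, 1.0, 200

dt = T / n
u = math.exp(sigma * math.sqrt(dt))       # up factor
d = 1.0 / u                               # down factor
q = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability

# Terminal payoffs of the call at each node of the binomial tree.
values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]

# Backward induction: discounted risk-neutral expectation at each node.
for step in range(n, 0, -1):
    values = [math.exp(-r * dt) * (q * values[j + 1] + (1 - q) * values[j])
              for j in range(step)]

# Converges to the Black–Scholes value (about 10.45) for these inputs.
print(f"CRR call price: {values[0]:.4f}")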
We derive a deterministic equation whose solution yields the expression of the no-arbitrage price of a derivative security in continuous time. In contrast to Chapter 11, where the latter is found via a risk-neutral expectation, we adopt a no-arbitrage argument as in Chapter 15. To this end, we look for a trading strategy that would (i) be self-financing, (ii) comply with the evolution of a function which only depends on time and on the current price of the underlying asset, and (iii) replicate the derivative’s payoff. Solving this problem yields (i) a partial differential equation (PDE) whose solution is the price function and (ii) the analytical expression of the replicating strategy, something that we failed to obtain in Chapter 11. We show that the prices of zero-coupon bonds, forward contracts, and European call and put options computed using risk-neutral expectations all satisfy the PDE. The price of a specific product is determined by picking the price function that complies with its payoff. The Feynman–Kac theorem justifies that the price found using the risk-neutral expectation approach in Chapter 11 coincides with the no-arbitrage expression obtained by following a replication argument.
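For reference, in the standard setting with a constant risk-free rate r and volatility σ (our notation, not necessarily the book's), the PDE in question is the Black–Scholes equation

\[
\frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0,
\]

solved backwards from the terminal condition that V(T, S) equals the derivative's payoff.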
Recently, the replicability of social science research has received close examination, with discussions revolving around the degree of success in replicating experimental results. We lend insight to the replication discussion by examining the quality of replication studies. We examine how even a seemingly minor protocol deviation in the experimental process (Camerer et al. in Science 351(6280):1433–1436, 2016. https://doi.org/10.1126/science.aaf0918), the removal of common information, can lead to a finding of “non-replication” of the results from the original study (Chen and Chen in Am Econ Rev 101(6):2562–2589, 2011). Our analysis of the data from the original study, its replication, and a series of new experiments shows that, with common information, we obtain the original result in Chen and Chen (2011), whereas without common information, we obtain the null result in Camerer et al. (2016). We use these findings to propose a set of procedural recommendations to increase the quality of replications of laboratory experiments in the social sciences.
We replicate the strategy-method experiment by Fischbacher et al. (Econ. Lett. 71:397–404, 2001) developed to measure attitudes towards cooperation in a one-shot public goods game. We collected data from 160 students at four different universities across urban and rural Russia. Using the classification proposed by Fischbacher et al. (2001), we find that the distribution of types is very similar across the four locations. The share of conditional cooperators in our Russian subject pools is comparable to the one found by Fischbacher et al. in a Swiss subject pool. However, the distribution of the other types differs from the one found in Switzerland.
Are female hurricanes more deadly? In this chapter we demonstrate multiverse analysis using analytical inputs from many scholars in a high-profile empirical debate. In results from more than 10,000 model specifications, only 12 percent of estimates are statistically significant and 99 percent are smaller in magnitude than what the original authors reported. Multiverse analysis shows that some published findings are extremely weak and nonrobust.
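As a schematic illustration of what a multiverse of specifications looks like in practice, the sketch below enumerates a tiny grid of analytic choices on invented data and records the estimate from each combination; none of the variables, choices, or numbers correspond to the chapter's actual models.

import itertools
import numpy as np

rng = np.random.default_rng(1)

# Invented dataset: outcome, a focal predictor, and two optional controls.
n = 500
focal = rng.normal(size=n)
ctrl1 = rng.normal(size=n)
ctrl2 = rng.normal(size=n)
outcome = 0.05 * focal + 0.5 * ctrl1 + rng.normal(size=n)

# Analytic choices defining the multiverse: which controls to include,
# and whether to trim outliers on the outcome.
control_sets = [[], [ctrl1], [ctrl2], [ctrl1, ctrl2]]
outlier_rules = [None, 2.5]  # None = keep all rows, 2.5 = drop |z| > 2.5

estimates = []
for controls, z_cut in itertools.product(control_sets, outlier_rules):
    keep = np.ones(n, dtype=bool)
    if z_cut is not None:
        z = (outcome - outcome.mean()) / outcome.std()
        keep = np.abs(z) <= z_cut
    X = np.column_stack([np.ones(keep.sum()), focal[keep]] + [c[keep] for c in controls])
    beta, *_ = np.linalg.lstsq(X, outcome[keep], rcond=None)
    estimates.append(beta[1])  # coefficient on the focal predictor

print(f"{len(estimates)} specifications; "
      f"median estimate {np.median(estimates):.3f}, "
      f"range [{min(estimates):.3f}, {max(estimates):.3f}]")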
Multiverse analysis is not simply a computational method but also a philosophy of science. In this chapter we explore its core tenets and historical foundations. We discuss the foundational principle of transparency in the history of science and argue that multiverse analysis brings social science back into alignment with this core founding ideal. We make connections between this framework and multiverse concepts developed in cosmology and quantum physics.