A comprehensive class of models is proposed that can be used for continuous, binary, ordered categorical, and count type responses. The difficulty of items is described by difficulty functions, which replace the item difficulty parameters typically used in item response models. These functions crucially determine the response distribution and make the models very flexible with regard to the range of distributions that are covered. The model class contains several widely used models, such as the binary Rasch model and the graded response model, as special cases, allows for simplifications, and offers a distribution-free alternative for count type items. A major strength of the models is that they can be used for mixed item formats, when different types of items are combined to measure abilities or attitudes. This is an immediate consequence of the comprehensive modeling approach, which lets difficulty functions adapt automatically to the response distribution. Basic properties of the model class are shown, and several real data sets are used to illustrate the flexibility of the models.
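For orientation, the binary Rasch model contained in the class as a special case can be written in standard notation (illustrative here, not taken from the paper) as

$$P(X_{pi}=1 \mid \theta_p) = \frac{\exp(\theta_p - \delta_i)}{1 + \exp(\theta_p - \delta_i)},$$

where $\theta_p$ is the ability of person $p$ and $\delta_i$ the scalar difficulty of item $i$; the proposed class replaces the scalar $\delta_i$ by an item-specific difficulty function, so the shape of the response distribution is no longer fixed in advance.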
The connection between populism and democracy is widely researched. Most of the literature focuses on populist actors (e.g., parties, leaders, and governments) as it examines the intricacies of this relationship. Some of the resulting takeaways have become firmly embedded in scholarship and are currently considered accepted knowledge across the discipline. Scholars have only recently started focusing on the individual-level relationship between populism and democracy. As a result, our knowledge remains limited and is often based on the assumption that what holds for populist actors also will hold for populist citizens. The first part of this article briefly reviews the state of the art on the individual-level relationship between populism and democracy. Drawing from this review, we identify several theoretical and empirical gaps and limitations in the literature that future research should address. We conclude that contemporary scholarship has made important contributions, but more nuanced and targeted research is necessary to comprehensively understand the intricacies of the populism–democracy relationship at the individual level.
This paper examines the impact of different ways of inducing discounting in alternating-offer bargaining games in the lab. We follow the framework of Ochs and Roth (Am Econ Rev, pp. 355–384, 1989) and test whether the model's predictions find support in the data under three different discounting implementations: the shrinking-pie procedure, the effective-discounting procedure, and the bargaining-delay procedure. We find no sensitivity to the number of periods in any of the three procedures. We find mixed evidence for the effect of changing the discount factor in the effective-discounting procedure and the shrinking-pie procedure, and the magnitudes of these effects are small. Furthermore, there was more disagreement in both the effective-discounting and bargaining-delay procedures than in the shrinking-pie procedure.
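As a point of reference for how discount factors enter the theoretical predictions (a textbook result for the infinite-horizon game, not a computation from the paper, whose design is finite-horizon), in the alternating-offer game with common discount factor $\delta$ the subgame-perfect equilibrium awards the proposer the share

$$x^* = \frac{1}{1+\delta},$$

so the predicted division approaches an even split as $\delta \rightarrow 1$; the experiments test whether comparable comparative statics hold under the three implementations.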
Prior to discussing and challenging two criticisms of coefficient $\alpha$, the well-known lower bound to test-score reliability, we discuss classical test theory and the theory of coefficient $\alpha$. The first criticism expressed in the psychometrics literature is that coefficient $\alpha$ is only useful when the model of essential $\tau$-equivalence is consistent with the item-score data. Because this model is highly restrictive, coefficient $\alpha$ is smaller than test-score reliability, and critics conclude that one should not use it. We argue that lower bounds are useful when they assess product quality features, such as a test score's reliability. The second criticism is that coefficient $\alpha$ incorrectly ignores correlated errors. If correlated errors were to enter the computation of coefficient $\alpha$, theoretical values of coefficient $\alpha$ could be greater than the test-score reliability. Because quality measures that are systematically too high are undesirable, critics dismiss coefficient $\alpha$. We argue that introducing correlated errors is inconsistent with the derivation of the lower bound theorem and that the properties of coefficient $\alpha$ remain intact when data contain correlated errors.
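For reference, coefficient $\alpha$ for a test score $X = \sum_{i=1}^{k} Y_i$ composed of $k$ item scores has the standard form

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),$$

and under classical test theory without correlated errors it satisfies $\alpha \le \rho_{XX'}$, the lower bound property the article defends.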
This work offers a comprehensive approach to understanding the phenomena underlying vesicular exocytosis, a process involved in vital functions of living organisms such as neuronal and neuroendocrine signaling. The kinetics of release of most neuromediators that modulate these functions in various ways can be efficiently monitored using single-cell amperometry (SCA). Indeed, SCA at ultramicro- or nanoelectrodes provides the necessary temporal, flux, and nanoscale resolution to accurately report on the shape and intensity of single exocytotic spikes. Rather than characterizing amperometric spikes using standard descriptive parameters (e.g., amplitude and half-width), however, this study summarizes a modeling approach based on the underlying biology and physical chemistry of single exocytotic events. This approach provides deeper insights into the intravesicular phenomena that control vesicular release dynamics. The ensuing model's intrinsic parsimony makes it computationally efficient and easy to use, enabling the processing of large amperometric traces to gain statistically significant insights.
The United Nations (UN) as well as specialized UN agencies are turning to behavioral science. The UN clearly states that behavioral sciences should be included in its work to achieve its goals. In that, it follows the World Bank, which devoted its World Development Report 2015, “Mind, Society, and Behavior,” to behavioral insights in order to promote development. Let me stress from the outset that I deem this development necessary. It is highly promising that more realistic behavioral assumptions and insights underpin policies of international organizations (IOs), their member states, and international law. Still, when behavioral insights, especially nudges, are used, careful consideration of relevant scientific and normative limitations is needed to uphold the legitimacy and accountability of those regulatory tools. Nudges and other behavioral interventions have been used nationally around the world, and a lively discussion on their ethical and legal limitations has ensued at the national level.
Widespread Internet “piracy” continues to fuel the debate about business models impervious to copyright infringement. We studied the displacement effects of “piracy” on sales in the book industry. We conducted a year-long, large-scale field experiment: in the treatment group, we removed unauthorised copies appearing on the Internet and observed the sales data, whereas in the control group, we simply observed sales. We were able to substantially curb the unauthorised distribution, which resulted in a small, positive effect on sales. While a classical analysis found this effect not to be significantly different from zero, a Bayesian approach using previous “piracy” studies to generate a prior led to the conclusion that protection from piracy resulted in a significant sales boost of about 9 per cent.
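To see schematically how an informative prior can sharpen an individually noisy estimate (an illustration of the general mechanism, not the authors' exact specification), combining a prior effect estimate $\mu_0$ with variance $\tau^2$ and an experimental estimate $\hat{x}$ with variance $\sigma^2$ yields the posterior mean

$$\mathbb{E}[\mu \mid \hat{x}] = \frac{\hat{x}/\sigma^2 + \mu_0/\tau^2}{1/\sigma^2 + 1/\tau^2},$$

so a prior built from earlier “piracy” studies pulls the estimate toward previous findings while tightening the credible interval around it.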
The critical reactions of Bentler (2009, doi:10.1007/s11336-008-9100-1), Green and Yang (2009a, doi:10.1007/s11336-008-9098-4; 2009b, doi:10.1007/s11336-008-9099-3), and Revelle and Zinbarg (2009, doi:10.1007/s11336-008-9102-z) to Sijtsma’s (2009, doi:10.1007/s11336-008-9101-0) paper on Cronbach’s alpha are addressed. The dissemination of psychometric knowledge among substantive researchers is discussed.
In 2018, the birth of the world’s first ‘CRISPR Babies’ left the global community in disbelief. This was the catalyst for an international moratorium on Heritable Human Genome Editing (‘HHGE’). For the first time, the international community was prompted to consider a pathway forward to regulate HHGE. In light of the evolving maturity of Clustered Regularly Interspaced Short Palindromic Repeats (‘CRISPR’) as a biotechnology, it is timely to evaluate the Australian federal legal and regulatory frameworks governing human genome editing. The response to HHGE must carefully balance the need to prevent unethical applications against the progress of research to improve and refine the technology. This article argues that Australia’s federal legislative regime must be reviewed to ensure it has the necessary capabilities to effectively regulate HHGE. It applies three schools of thought that offer an instructive theoretical lens for understanding how Australian law has responded to advancements in technology. In addition, an analysis of the governing federal legislation reveals three regulatory gaps: complexity, operational ambiguity, and inconsistent legislative objectives. Together, these gaps may be indicative of a legislative and regulatory landscape that is no longer fit for purpose.
This work poses and partially explores an astrobiological hypothesis: might polymeric sulfur- and phosphorus-based oxides form heteropolymers in the acidic cloud decks of Venus’s atmosphere? Following an introduction to the emerging field of computational astrobiology, we demonstrate the use of quantum chemical methods to evaluate basic properties of a hypothetical carbon-free heteropolymer that might be sourced from feedstock in the Venusian atmosphere. Our modeling indicates that R-substituted polyphosphoric sulfonic ester polymers may form via multiple thermodynamically favorable pathways and exhibit sufficient kinetic stability to persist in the Venusian clouds. Their thermodynamic stability compares favorably to that of polypeptides, whose formation is slightly thermodynamically unfavored relative to amino acids under most known abiotic conditions. We propose a combined approach of vibrational spectroscopy and mass spectrometry to search for related materials in Venus’s atmosphere but note that none of the currently planned missions are well suited for their detection. While predicted ultraviolet–visible spectra suggest that the studied polymers are unlikely candidates for Venus’s unidentified UV absorbers, the broader possibility of sulfuric acid–based chemistry supporting alternative biochemistries challenges traditional carbon-centric models of life. We argue that such unconventional lines of inquiry are warranted in the search for life beyond Earth.
The Ising model is one of the most widely analyzed graphical models in network psychometrics. However, popular approaches to parameter estimation and structure selection for the Ising model cannot naturally express uncertainty about the estimated parameters or selected structures. To address this issue, this paper offers an objective Bayesian approach to parameter estimation and structure selection for the Ising model. Our methods build on a continuous spike-and-slab approach. We show that our methods consistently select the correct structure and provide a new objective method to set the spike-and-slab hyperparameters. To circumvent the exploration of the complete structure space, which is too large in practical situations, we propose a novel approach that first screens for promising edges and then explores only the space instantiated by these edges. We apply the proposed methods to estimate the network of depression and alcohol use disorder symptoms from the symptom scores of over 26,000 subjects.
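In the parameterization common in network psychometrics (standard background, not reproduced from the paper), the Ising model for $p$ binary variables $\mathbf{x} \in \{-1, 1\}^p$ is

$$P(\mathbf{x}) = \frac{1}{Z}\exp\left(\sum_{i}\mu_i x_i + \sum_{i<j}\sigma_{ij}\, x_i x_j\right),$$

where the $\mu_i$ are main effects, the $\sigma_{ij}$ are pairwise interactions whose nonzero pattern defines the network structure, and $Z$ normalizes over all $2^p$ configurations. A continuous spike-and-slab formulation typically places on each $\sigma_{ij}$ a mixture such as $(1-\gamma_{ij})\,N(0, v_0) + \gamma_{ij}\,N(0, v_1)$ with $v_0 \ll v_1$, where the edge indicator $\gamma_{ij}$ expresses structure uncertainty; the paper's specific, objective hyperparameter choices are its contribution and are not reproduced here.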
Several measures of agreement, such as the Perreault–Leigh coefficient, the $\textsc{AC}_1$, and the recent coefficient of van Oest, are based on explicit models of how judges make their ratings. To handle such measures of agreement under a common umbrella, we propose a class of models called guessing models, which contains most models of how judges make their ratings. Every guessing model has an associated measure of agreement we call the knowledge coefficient. Under certain assumptions on the guessing models, the knowledge coefficient will be equal to the multi-rater Cohen’s kappa, Fleiss’ kappa, the Brennan–Prediger coefficient, or other less-established measures of agreement. We provide several sample estimators of the knowledge coefficient, valid under varying assumptions, and their asymptotic distributions. After a sensitivity analysis and a simulation study of confidence intervals, we find that the Brennan–Prediger coefficient typically outperforms the others, with much better coverage under unfavorable circumstances.
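Two of the benchmark coefficients mentioned here have simple closed forms: with observed agreement $p_o$ and chance agreement $p_e$, coefficients of the kappa family take the form

$$\kappa = \frac{p_o - p_e}{1 - p_e},$$

where Cohen’s kappa estimates $p_e$ from the raters’ marginal distributions, while the Brennan–Prediger coefficient fixes $p_e = 1/q$ for $q$ rating categories.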
This paper studies correction for chance in coefficients that are linear functions of the observed proportion of agreement. The paper unifies and extends various results on correction for chance in the literature. A specific class of coefficients is used to illustrate the results derived in this paper. Coefficients in this class, e.g. the simple matching coefficient and the Dice/Sørensen coefficient, become equivalent after correction for chance, irrespective of what expectation is used. The coefficients become either Cohen’s kappa, Scott’s pi, Mak’s rho, Goodman and Kruskal’s lambda, or Hamann’s eta, depending on what expectation is considered appropriate. Both a multicategorical generalization and a multivariate generalization are discussed.
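The correction for chance referred to here is the standard transform

$$S^* = \frac{S - E(S)}{1 - E(S)},$$

applied to a coefficient $S$, where $E(S)$ is the value of $S$ expected under chance alone; because the transform cancels linear rescalings, coefficients that are linear functions of the observed proportion of agreement collapse to the same corrected value, and the choice of expectation $E(S)$ determines which classical coefficient results.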
Alan Strudler’s “Lying about Reservation Prices in Business Negotiation: A Qualified Defense” challenges a number of claims I make in a prior essay, “A Lie Is a Lie: The Ethics of Lying in Business Negotiations.” Here, I examine Strudler’s critique and seek to refute his various arguments—in particular, those based on assumption of risk and the signalling value of reservation price lies.
Categorical marginal models (CMMs) are flexible tools for modelling dependent or clustered categorical data, when the dependencies themselves are not of interest. A major limitation of maximum likelihood (ML) estimation of CMMs is that the size of the contingency table increases exponentially with the number of variables, so even for a moderate number of variables, say between 10 and 20, ML estimation can become computationally infeasible. An alternative method, which retains the optimal asymptotic efficiency of ML, is maximum empirical likelihood (MEL) estimation. However, we show that MEL tends to break down for large, sparse contingency tables. As a solution, we propose a new method, which we call maximum augmented empirical likelihood (MAEL) estimation and which involves augmentation of the empirical likelihood support with a number of well-chosen cells. Simulation results show good finite sample performance for very large contingency tables.
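In generic form (a standard formulation, not the authors' specific notation), empirical likelihood estimates parameters $\theta$ by solving

$$\max_{p_1,\dots,p_n} \sum_{i=1}^{n} \log p_i \quad \text{subject to} \quad p_i \ge 0, \quad \sum_{i=1}^{n} p_i = 1, \quad \sum_{i=1}^{n} p_i\, g(x_i; \theta) = 0,$$

where the $p_i$ are weights on the observed data points and $g$ encodes the model constraints; in a large, sparse contingency table most cells receive no empirical support, which is the breakdown MAEL addresses by augmenting the support with well-chosen additional cells.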
The Anatolian hieroglyph SARMA and its variants were employed during the Late Bronze and Iron Ages to invoke the god Sarruma and as theophoric elements pointing to the same god in personal names. In this paper, these SARMA signs are analysed in order to understand the chronological development of the signs, to challenge the use of ligatures, phonetic indicators, and phonetic complements with the sign, to determine the precise semantic value of the sign and whether a phonetic value can be confidently identified or dismissed, and finally to investigate how scribes creatively engaged with the sign in various usages and how readers interacted with the sign and its component elements. It will be argued that an increasingly complex phonetic conceptualisation of the sign grew alongside its semantic value, and that Iron Age scribes creatively juxtaposed signs and other graphic elements to evoke memories of the Hittite past and divine legitimation.
Rapid advances in psychology and technology open opportunities and present challenges beyond familiar forms of educational assessment and measurement. Viewing assessment through the perspectives of complex adaptive sociocognitive systems and argumentation helps us extend the concepts and methods of educational measurement to new forms of assessment, such as those involving interaction in simulation environments and automated evaluation of performances. I summarize key ideas for doing so and point to the roles of measurement models and their relation to sociocognitive systems and assessment arguments. A game-based learning assessment, SimCityEDU: Pollution Challenge!, is used to illustrate these ideas.