Multidimensional forced-choice (MFC) tests are increasing in popularity, but their construction is complex. The Thurstonian item response model (Thurstonian IRT model) is most often used to score MFC tests that contain dominance items. Currently, in a frequentist framework, information about the latent traits in the Thurstonian IRT model is computed for binary outcomes of pairwise comparisons, but this approach neglects stochastic dependencies. In this manuscript, it is shown how to estimate Fisher information on the block level. A simulation study showed that the observed and expected standard errors based on the block information were similarly accurate. When local dependencies for block sizes $>2$ were neglected, the standard errors were underestimated, except with the maximum a posteriori estimator. It is shown how the multidimensional block information can be summarized for test construction. A simulation study and an empirical application showed small differences between the block information summaries depending on the outcome considered. Thus, block information can aid the construction of reliable MFC tests.
According to J. S. Mill’s liberty principle, the only legitimate justification for restricting the freedom of competent adults is to prevent harm to others. However, this is ambiguous between two interpretations. The harm causation version (Brown, 1972) has it that only conduct that is itself harmful is liable to interference. In contrast, the general prevention of harm version (Lyons, 1979) allows interference with conduct that does not itself cause harm, such as refusals to assist others, so long as this interference prevents harm from occurring.
Mark Tunick (2024) has recently offered new arguments for the harm causation interpretation, suggesting that only this can explain Mill’s resistance to legal interference with prostitutes. This paper challenges Tunick’s arguments. First, I show that Mill does not clearly restrict interference to the proximate causes of harm. While he prefers interference to focus on the clients, rather than singling out the prostitutes, he is prepared to countenance interference with the prostitutes as well. Further, his preference for focusing on the clients is explicable, even if not required by the liberty principle.
Researchers in the field of network psychometrics often focus on the estimation of Gaussian graphical models (GGMs)—an undirected network model of partial correlations—between observed variables of cross-sectional data or single-subject time-series data. This assumes that all variables are measured without measurement error, which may be implausible. In addition, cross-sectional data cannot distinguish between within-subject and between-subject effects. This paper provides a general framework that extends GGM modeling with latent variables, including relationships over time. These relationships can be estimated from time-series data or panel data featuring at least three waves of measurement. The model takes the form of a graphical vector-autoregression model between latent variables and is termed the ts-lvgvar when estimated from time-series data and the panel-lvgvar when estimated from panel data. These methods have been implemented in the software package psychonetrics, which is exemplified in two empirical examples, one using time-series data and one using panel data, and evaluated in two large-scale simulation studies. The paper concludes with a discussion on ergodicity and generalizability. Although within-subject effects may in principle be separated from between-subject effects, the interpretation of these results rests on the intensity and the time interval of measurement and on the plausibility of the assumption of stationarity.
In this paper, we show that for some structural equation models (SEM), the classical chi-square goodness-of-fit test is unable to detect the presence of nonlinear terms in the model. As an example, we consider a regression model with latent variables and interaction terms. Not only does the model test have zero power against this type of misspecification, but the theoretical (chi-square) distribution of the test is not even distorted when severe interaction term misspecification is present in the postulated model. We explain this phenomenon by exploiting results on asymptotic robustness in structural equation models. The importance of this paper is to warn against the conclusion that if a proposed linear model fits the data well according to the chi-square goodness-of-fit test, then the underlying model is indeed linear; it will be shown that the underlying model may, in fact, be severely nonlinear. In addition, the present paper shows that such insensitivity to nonlinear terms is only a particular instance of a more general problem, namely, the incapacity of the classical chi-square goodness-of-fit test to detect deviations from zero correlation among exogenous regressors (whether observable or latent) when the structural part of the model is just saturated.
In this paper I argue for a specific and highly challenging form of empathy involved in caring for young children – empathy that is an active and normally temporally extended exploration of the target subject’s complex and dynamic emotional life, guided by an epistemic aim of psychological understanding. I further argue that engagement in this empathetic work is liable to disable the caregiver’s normal emotional functioning in a way that can give rise to a sense of self-alienation. I end the paper by identifying three ways in which engagement in this special form of empathetic activity can also serve to enrich the caregiver’s life, or contribute to her flourishing.
The work in this paper introduces finite mixture models that can be used to simultaneously cluster the rows and columns of two-mode ordinal categorical response data, such as those resulting from Likert scale responses. We use the popular proportional odds parameterisation and propose models which provide insights into major patterns in the data. Model-fitting is performed using the EM algorithm, and a fuzzy allocation of rows and columns to corresponding clusters is obtained. The clustering ability of the models is evaluated in a simulation study and demonstrated using two real data sets.
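For reference, the proportional odds parameterisation the abstract mentions maps a set of increasing cutpoints and a location effect to ordinal category probabilities. The following is a minimal sketch; the function name and the interpretation of `mu` as a combined row-cluster/column-cluster effect are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prop_odds_probs(cutpoints, mu):
    """Category probabilities under a cumulative-logit (proportional odds) model:
    P(Y <= k) = logistic(theta_k - mu), where `mu` is a location effect
    (e.g., a hypothetical combined row- and column-cluster effect)."""
    theta = np.asarray(cutpoints, dtype=float)      # increasing thresholds
    cum = 1.0 / (1.0 + np.exp(-(theta - mu)))       # cumulative probabilities
    cum = np.concatenate(([0.0], cum, [1.0]))
    return np.diff(cum)                             # P(Y = k) for each category
```

With q − 1 cutpoints this returns q nonnegative probabilities summing to one; shifting `mu` moves mass toward lower or higher response categories, which is the sense in which a single effect per cluster drives the whole ordinal distribution.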
Network analysis of ESM data has become popular in clinical psychology. In this approach, discrete-time (DT) vector auto-regressive (VAR) models define the network structure with centrality measures used to identify intervention targets. However, VAR models suffer from time-interval dependency. Continuous-time (CT) models have been suggested as an alternative but require a conceptual shift, implying that DT-VAR parameters reflect total rather than direct effects. In this paper, we propose and illustrate a CT network approach using CT-VAR models. We define a new network representation and develop centrality measures which inform intervention targeting. This methodology is illustrated with an ESM dataset.
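The time-interval dependency of DT-VAR parameters can be seen from the standard relation between a continuous-time drift matrix A and the lagged-effect matrix implied at interval Δt, namely expm(A·Δt). Below is a minimal numerical illustration with a hypothetical 2-process drift matrix; the helper computes the matrix exponential via eigendecomposition (valid here because the example matrix has distinct real eigenvalues).

```python
import numpy as np

def expm_via_eig(A, dt):
    """expm(A * dt) via eigendecomposition (assumes diagonalizable A)."""
    w, V = np.linalg.eig(A * dt)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

# Hypothetical continuous-time drift matrix for two processes.
A = np.array([[-0.6, 0.3],
              [0.2, -0.8]])

phi_1 = expm_via_eig(A, 1.0)  # lagged effects a DT-VAR would recover at interval 1
phi_2 = expm_via_eig(A, 2.0)  # different lagged effects at interval 2
```

The two lagged-effect matrices differ even though the underlying dynamics are identical, and phi_2 equals phi_1 composed with itself; this is why DT-VAR parameters are best read as total effects over the sampling interval rather than direct effects.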
We define an involution on the elliptic space of tempered unipotent representations of inner twists of a split simple $p$-adic group $G$ and investigate its behaviour with respect to restrictions to reductive quotients of maximal compact open subgroups. In particular, we formulate a precise conjecture about the relation with a version of Lusztig's nonabelian Fourier transform on the space of unipotent representations of the (possibly disconnected) reductive quotients of maximal compact subgroups. We give evidence for the conjecture, including proofs for ${\mathsf {SL}}_n$ and ${\mathsf {PGL}}_n$.
When President Trump took office on January 20, 2025, he issued numerous executive orders, among them one that suspended the admissions of refugees into the United States. This executive order includes carveouts for refugees whose admission may be in the national interest of the United States, and notes that it is the policy of the United States “to admit only those refugees who can fully and appropriately assimilate into the United States.”1 A little over two weeks later, President Trump issued a second executive order entitled, “Addressing the Egregious Actions of the Republic of South Africa[.]”2 This order directed the Secretaries of State and Homeland Security to “prioritize humanitarian relief, including admission and resettlement through the United States Refugee Admissions Program, for Afrikaners in South Africa who are victims of unjust racial discrimination.”3 The “government-sponsored race-based discrimination” in question includes what the order describes as “countless government policies designed to dismantle equal opportunity in employment, education, and business[,]” including a recent law that, again, according to the order, “enable[s] the government of South Africa to seize ethnic minority Afrikaners’ agricultural property without compensation.”4 The executive order targeting South Africa also halts all other aid or assistance to the country from the United States because South Africa has “taken aggressive positions towards the United States and its allies, including accusing Israel, not Hamas, of genocide in the International Court of Justice[.]”5
Factor analysis is a well-known method for describing the covariance structure among a set of manifest variables through a limited number of unobserved factors. When the observed variables are collected at various occasions on the same statistical units, the data have a three-way structure and standard factor analysis may fail. To overcome these limitations, three-way models, such as the Parafac model, can be adopted. The Parafac model is often seen as an extension of principal component analysis able to discover unique latent components. The structural version, i.e., as a reparameterization of the covariance matrix, has also been formulated but rarely investigated. In this article, such a formulation is studied by discussing under what conditions factor uniqueness is preserved. It is shown that, under mild conditions, such a property holds even if the specific factors are assumed to be within-variable, or within-occasion, correlated and the model is modified to become scale invariant.
I experimentally investigate the relation of endowment origin, cognitive abilities (as measured by the Cognitive Reflection Test, CRT), and co-operation in a one-shot linear public goods game. The results show that subjects’ contributions depend on an interplay of cognitive abilities and endowment origin. A house money effect exists only for subjects with low CRT scores. They contribute more when income was allocated to them and less when income was obtained by effort. In contrast, subjects with high CRT scores contribute the same amount independent of income type. The findings have implications for redistribution, team production, and experimental designs.
Dual scaling (DS) is a multivariate exploratory method equivalent to correspondence analysis when analysing contingency tables. However, for the analysis of rating data, different proposals appear in the DS and correspondence analysis literature. It is shown here that a peculiarity of the DS method can be exploited to detect differences in response styles. Response styles occur when respondents use rating scales differently for reasons not related to the questions, often biasing results. A spline-based constrained version of DS is devised which can detect the presence of four prominent types of response styles, and is extended to allow for multiple response styles. An alternating nonnegative least squares algorithm is devised for estimating the parameters. The new method is appraised both by simulation studies and an empirical application.
Manipulating matter by strong coupling to the vacuum field has attracted intense interest over the last decade. In particular, vibrational strong coupling (VSC) has shown great potential for modifying ground state properties in solution chemistry and biochemical processes. In this work, the effect of VSC of water on the melting behaviour of ds-DNA, an important biophysical process, is explored. Several experimental conditions, including the concentration of ds-DNA, cavity profile, solution environment, as well as thermal annealing treatment, were tested. No significant effect of VSC was observed for the melting behaviour of the ds-DNA sequence used. This demonstrates yet again the robustness of ds-DNA to outside perturbations. Our work also provides a general protocol to probe the effects of VSC on biological systems inside microfluidic Fabry–Perot cavities and should be beneficial to better understand and harness this phenomenon.
DNA helicases are molecular motors that use the energy from nucleotide hydrolysis to move along DNA, promoting the unwinding or rewinding of the double helix. Here, we use magnetic and optical tweezers to track the motion of three helicases, gp41, RecQ, and RecG, while they unwind or rewind a DNA hairpin. Their activity is characterized by measuring the helicase velocity and diffusivity under different force and ATP conditions. We use a continuous-time random walk framework that allows us to compute the mean helicase displacement and its fluctuations analytically. Fitting the model to the measured helicase velocity and diffusivity allows us to determine the main states and transitions in the helicase mechanochemical cycle. A general feature for all helicases is the need to incorporate an off-pathway pausing state to reproduce the data, raising the question of whether pauses play a regulatory role. Diffusivity measurements also lead to estimations of the thermodynamic uncertainty factor related to the motor efficiency. Assuming a tight mechanochemical coupling, we find that the RecG helicase reaches a high efficiency when operating uphill, whereas the unwinding gp41 and RecQ helicases display much lower efficiencies. Incorporating the analysis of fluctuations allows for better characterization of the activity of molecular machines, which represents an advance in the field.
The discourse on State immunity has traditionally focused on its application in judicial proceedings. However, in recent years scholars have begun to address whether the law on State immunity also protects foreign States against measures taken against their property by the territorial State's executive and/or legislative organs. This question has been raised following unilateral sanctions regimes freezing property of foreign States. It has gained renewed attention in the context of the ‘immobilization’ of around €300 billion of the Central Bank of Russia's assets as a reaction to the invasion of Ukraine by the Russian Federation. In addition, there are recent suggestions to subject these sovereign assets to further steps, including confiscation, the generation of investment returns or taxing windfall profits accruing to the entities holding the assets. This article revisits the various conceptions of the law on State immunity to address the question of whether a principle of State immunity against non-judicial measures of constraint exists. Based on a review of existing State practice and opinio juris, it argues that customary international law does provide for State immunity in this context. However, the article further contends that the content of the norm should be construed differently than in relation to judicial proceedings, recognizing the weight of public policy concerns of the territorial State.
In this paper, the notion of Markov move from algebraic statistics is used to analyze the weighted kappa indices in rater agreement problems. In particular, the problem of the maximum kappa and its dependence on the choice of the weighting schemes are discussed. The Markov moves are also used in a simulated annealing algorithm to actually find the configuration of maximum agreement.
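The search for a maximum-agreement configuration via simulated annealing can be sketched generically as follows. This is a minimal illustration, not the authors' algorithm: `neighbor` stands in for applying a random Markov move, and `score` for the weighted kappa index being maximized; the toy objective at the end is purely illustrative.

```python
import math
import random

def simulated_annealing(state, neighbor, score, steps=10000, t0=1.0, cooling=0.999):
    """Maximize `score` by local search: `neighbor` proposes a random move
    (a stand-in here for a Markov move on contingency tables), and worse
    moves are accepted with a temperature-dependent probability."""
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur)
        delta = score(cand) - score(cur)
        if delta >= 0 or random.random() < math.exp(delta / t):
            cur = cand
            if score(cur) > score(best):
                best = cur
        t *= cooling  # geometric cooling schedule
    return best

# Toy usage: maximize -x**2 over the integers, moving by +/-1 steps.
random.seed(0)
best = simulated_annealing(10, lambda x: x + random.choice([-1, 1]), lambda x: -x * x)
```

In the rater-agreement setting, the state would be a contingency table, the neighborhood would be generated by Markov moves preserving the table's margins, and the score would be the chosen weighted kappa.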
From the late ninth to the mid-seventh century BCE, the Urartian kings expanded their polity from the Euphrates to Lake Urmia. In this context, the question of Urartian legitimacy and how it was achieved is a key issue. Previous research has suggested that rulers primarily used visual representations to appeal to different segments of society, but this article explores how royal legitimacy was also pursued through religious rituals and festivals, starting from the so-called co-regency of Išpuini and Minua (ca 820–810 BCE). By focussing on these rituals, which possibly reached a broader audience than visual representations, this study seeks to understand the roles of performance and religion in the early formation of the Urartian state.
Given a surface $\Sigma$ equipped with a set P of marked points, we consider the triangulations of $\Sigma$ with vertex set P. The flip-graph of $\Sigma$ is the graph whose vertices are these triangulations, and whose edges correspond to flipping arcs in these triangulations. The flip-graph of a surface appears in the study of moduli spaces and mapping class groups. We consider the number of geodesics in the flip-graph of $\Sigma$ between two triangulations as a function of their distance. We show that this number grows exponentially provided the surface has enough topology, and that in the remaining cases the growth is polynomial.
Given a positive definite covariance matrix $\widehat{\Sigma}$ of dimension n, we approximate it with a covariance of the form $HH^\top + D$, where H has a prescribed number $k<n$ of columns and $D>0$ is diagonal. The quality of the approximation is gauged by the I-divergence between the zero mean normal laws with covariances $\widehat{\Sigma}$ and $HH^\top + D$, respectively. To determine a pair (H, D) that minimizes the I-divergence we construct, by lifting the minimization into a larger space, an iterative alternating minimization algorithm (AML) à la Csiszár–Tusnády.
As it turns out, the proper choice of the enlarged space is crucial for optimization. The convergence of the algorithm is studied, with special attention given to the case where D is singular. The theoretical properties of the AML are compared to those of the popular EM algorithm for exploratory factor analysis. Inspired by the ECME (a Newton–Raphson variation on EM), we develop a similar variant of AML, called ACML, and in a few numerical experiments, we compare the performances of the four algorithms.
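The I-divergence between two zero-mean normal laws is the Gaussian Kullback–Leibler divergence, which has a closed form in the two covariance matrices. A minimal sketch of the criterion being minimized (the function name is illustrative):

```python
import numpy as np

def i_divergence(sigma1, sigma2):
    """I-divergence (Kullback-Leibler divergence) between the zero-mean
    normal laws N(0, sigma1) and N(0, sigma2):
    0.5 * (tr(sigma2^{-1} sigma1) - n + log det sigma2 - log det sigma1)."""
    n = sigma1.shape[0]
    _, logdet1 = np.linalg.slogdet(sigma1)
    _, logdet2 = np.linalg.slogdet(sigma2)
    trace_term = np.trace(np.linalg.solve(sigma2, sigma1))
    return 0.5 * (trace_term - n + logdet2 - logdet1)
```

In the setting above, `sigma1` would play the role of the given $\widehat{\Sigma}$ and `sigma2` the fitted $HH^\top + D$; the divergence is zero exactly when the two covariances coincide.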
Fix $\alpha >0$. Then by Fejér's theorem $(\alpha (\log n)^{A}\,\mathrm {mod}\,1)_{n\geq 1}$ is uniformly distributed if and only if $A>1$. We sharpen this by showing that all correlation functions, and hence the gap distribution, are Poissonian provided $A>1$. This is the first example of a deterministic sequence modulo $1$ whose gap distribution and all of whose correlations are proven to be Poissonian. The range of $A$ is optimal and complements a result of Marklof and Strömbergsson who found the limiting gap distribution of $(\log (n)\, \mathrm {mod}\,1)$, which is necessarily not Poissonian.