In April 2016, then Prime Minister Malcolm Turnbull confirmed the existence of Australia’s offensive cyber capability. The capability is said to comprise both a coordinating Information Warfare Division inside the Australian Army and a dedicated cyberoffensive capability inside the Australian Signals Directorate, and its unveiling was a watershed in Australian defence policy. Yet whilst the literature has briefly examined whether Australia’s cyberoffensive capability is congruous with international law, no such analysis under Australia’s domestic laws has been undertaken. This paper seeks to partially address this gap in the research by focusing on whether the Australian Defence Force could legally launch cyberattacks against domestic targets under Commonwealth call-out powers.
In psychometrics, the canonical use of conditional likelihoods is for the Rasch model in measurement. Whilst not disputing the utility of conditional likelihoods in measurement, we examine a broader class of problems in psychometrics that can be addressed via conditional likelihoods. Specifically, we consider cluster-level endogeneity, where the standard assumption that observed explanatory variables are independent of latent variables is violated. Here, “cluster” refers to the entity characterized by latent variables or random effects, such as individuals in measurement models or schools in multilevel models, and “unit” refers to the elementary entity, such as an item in measurement. Cluster-level endogeneity problems can arise in a number of settings, including unobserved confounding of causal effects, measurement error, retrospective sampling, informative cluster sizes, missing data, and heteroskedasticity. Severely inconsistent estimation can result if these challenges are ignored.
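As a point of reference for the canonical case, a minimal sketch of the Rasch conditional likelihood (notation ours, not the paper’s): for item difficulties $b_1, \dots, b_k$ and a binary response pattern $\mathbf{x} = (x_1, \dots, x_k)$ with raw score $r = \sum_i x_i$, conditioning on $r$ eliminates the person parameter $\theta$:

$$P(\mathbf{x} \mid r) \;=\; \frac{\exp\!\big(-\sum_{i} x_i b_i\big)}{\gamma_r(b_1, \dots, b_k)}, \qquad \gamma_r \;=\; \sum_{\mathbf{y}:\,\sum_i y_i = r} \exp\!\Big(-\sum_i y_i b_i\Big),$$

so the conditional likelihood depends only on the item parameters. This elimination of the cluster-level latent variable is the property exploited for the broader endogeneity problems described above.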
In November 1980, a twenty-nine-year-old contract worker in Hammond, Louisiana, by the name of Stephen K. Clark was arrested and charged with criminal mischief for painting a thirty-foot mural of Mickey Mouse “making an obscene gesture to Iran” on the side of Sunflower supermarket. According to the store's manager, Clark had been hired to give the store a fresh coat of yellow paint before going wildly off script. The Hammond city prosecutor told the press that Clark would face jail time if convicted for his renegade painting, which, alongside the enormous image of Mickey, featured a word balloon proclaiming “We're fed up. Hey Iran!”1 This was no one-off use of Mickey's likeness to send a message to Iran. At the dawn of the 1980s, the image and sentiment Clark felt compelled to share had become curiously popular across the United States, appearing in surprising places all over the country.
Mark 2.21 uses unusual terminology in describing the ‘patch of an unfulled rag’ (ἐπίβλημα ῥάκους ἀγνάφου) as well as in relation to what happens when the patch subsequently fails (αἴρει τὸ πλήρωμα ἀπ’ αὐτοῦ τὸ καινὸν τοῦ παλαιοῦ καὶ χεῖρον σχίσμα γίνεται). While Matthew largely repeats Mark’s version verbatim (with only minor changes), Luke appears to make substantive changes to the ‘parable’. Several scholars have suggested that Luke lacked an understanding of the facts and rendered the situation entirely improbable. However, if one takes account of terminology associated with fulling processes in antiquity, recently illuminated by archaeologically grounded studies of ancient fulleries, Luke’s version emerges as a plausible interpretation of his predecessor’s and, in the other direction, certain interpretive possibilities in Mark’s account become legible.
Intensive longitudinal (IL) data are increasingly prevalent in psychological science, coinciding with technological advancements that make it simple to deploy study designs such as daily diary and ecological momentary assessments. IL data are characterized by a rapid rate of data collection (1+ collections per day) over a period of time, allowing for the capture of the dynamics that underlie psychological and behavioral processes. One powerful framework for analyzing IL data is state-space modeling, where observed variables are treated as measurements of underlying states (i.e., latent variables) that change together over time. However, state-space modeling has typically relied on continuous measurements, whereas psychological data often come in the form of ordinal measurements such as Likert scale items. In this manuscript, we develop a general estimation approach for state-space models with ordinal measurements, specifically focusing on a graded response model for Likert scale items. We evaluate the performance of our model and estimator against that of the commonly used “linear approximation” model, which treats ordinal measurements as though they are continuous. We find that our model yields unbiased estimates of the state dynamics, while the linear approximation yields strongly biased estimates. Finally, we develop an approximate standard error, termed slice standard errors, and show that these approximate standard errors are more liberal (i.e., smaller) than the true standard errors, with a consistent bias.
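For context, a minimal sketch of a graded response measurement model for a single Likert item, assuming a logistic link between the latent state and the cumulative category probabilities; the function name and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def graded_response_probs(theta, a, b):
    """Category probabilities for a graded response item.

    theta : latent state value
    a     : item discrimination
    b     : ordered thresholds, shape (K-1,) for K ordinal categories
    """
    # Cumulative probabilities P(Y >= k | theta) for k = 1, ..., K-1
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b, dtype=float))))
    # Pad with P(Y >= lowest category) = 1 and P(Y >= K) = 0, then difference
    cum = np.concatenate(([1.0], cum, [0.0]))
    return cum[:-1] - cum[1:]

# Example: a 5-point Likert item measured at a latent state of 0.5
print(graded_response_probs(0.5, a=1.2, b=[-1.5, -0.5, 0.5, 1.5]))
```

In a state-space setting, `theta` would be the value of a latent state at a given measurement occasion, with the state itself evolving over time according to the dynamic model.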
This commentary reflects on the articles included in the Psychometrika Special Issue on Network Psychometrics in Action. The contributions to the special issue are related to several possible future paths for research in this area. These include the development of models to analyze and represent interventions, improvement in exploratory and inferential techniques in network psychometrics, the articulation of psychometric theories in addition to psychometric models, and extensions of network modeling to novel data sources. Finally, network psychometrics is part of a larger movement in psychology that revolves around the analysis of human beings as complex systems, and it is timely that psychometricians begin extending their rich modeling tradition to improve the analysis of systems in psychology.
We propose the prenet (product-based elastic net), a novel penalization method for factor analysis models. The penalty is based on the product of a pair of elements in each row of the loading matrix. The prenet not only shrinks some of the factor loadings toward exactly zero but also enhances the simplicity of the loading matrix, which plays an important role in the interpretation of the common factors. In particular, with a large amount of prenet penalization, the estimated loading matrix possesses a perfect simple structure, which is known to be desirable in terms of the simplicity of the loading matrix. Furthermore, the perfect simple structure estimation via the proposed penalization turns out to be a generalization of the k-means clustering of variables. On the other hand, a mild amount of penalization approximates a loading matrix estimated by the quartimin rotation, one of the most commonly used oblique rotation techniques. Simulation studies compare the performance of our proposed penalization with that of existing methods under a variety of settings. The usefulness of the perfect simple structure estimation via our proposed procedure is presented through various real data applications.
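A minimal sketch of what a product-based elastic-net penalty on the loading matrix might look like; the exact weighting of the absolute and squared product terms below is our assumption, not the paper’s definition:

```python
import numpy as np

def prenet_penalty(L, rho=1.0, gamma=0.5):
    """Illustrative product-based elastic-net penalty on a p x m loading matrix L.

    Applies an elastic-net-style term to the product of every pair of loadings
    within the same row; the gamma weighting is an assumption for illustration.
    """
    p, m = L.shape
    total = 0.0
    for j in range(m):
        for k in range(j + 1, m):
            prod = L[:, j] * L[:, k]  # row-wise products for factor pair (j, k)
            total += np.sum(gamma * np.abs(prod) + 0.5 * (1.0 - gamma) * prod ** 2)
    return rho * total

# A loading matrix with perfect simple structure incurs zero penalty
L_simple = np.array([[0.8, 0.0], [0.7, 0.0], [0.0, 0.9], [0.0, 0.6]])
print(prenet_penalty(L_simple))  # 0.0
```

The example illustrates the key property described in the abstract: a loading matrix in which each variable loads on only one factor makes every within-row product zero, so heavy penalization pushes the solution toward a perfect simple structure.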
The article examines the role of transnational peer review in shaping financial market regulation in Australia in pursuit of financial stability. Transnational regulatory networks have become an important source of standards and enforcement practices in financial regulation. In the aftermath of the financial crises of the 2000s, global initiatives to strengthen financial supervision have reinforced peer review mechanisms to monitor the national implementation of transnational standards. Through such peer review, regulatory networks can influence domestic rules and practices, as well as the exercise of discretion by national regulatory authorities. The article studies the interaction between transnational peer review and regulatory choices in Australian financial supervision through three case studies. Notwithstanding concerns in the literature about the efficacy and legitimacy of regulatory networks, the case studies demonstrate the scope for productive dialogue between the transnational and national levels in making regulatory choices.
Factor analysis (FA) procedures can be classified into three types (Adachi in WIREs Comput Stat, https://onlinelibrary.wiley.com/doi/abs/10.1002/wics.1458, 2019): latent variable FA (LVFA), matrix decomposition FA (MDFA), and a variant of the latter (Stegeman in Comput Stat Data Anal 99:189–203, 2016), named completely decomposed FA (CDFA) on the basis of the theorems proved in this paper. We revisit those procedures from the perspective of the Comprehensive FA (CompFA) model, in which a multivariate observation is decomposed into common factor, specific factor, and error parts. These three parts are separated in MDFA and CDFA, while in LVFA the specific factor and error parts are not separated; instead, their sum, called a unique factor, is considered. We show that the assumptions in the CompFA model are satisfied by the CDFA solution, but not completely by the MDFA one. We then examine how the CompFA model parameters are estimated in the FA procedures. The study shows that all parameters can be recovered well in CDFA, while the sum of the parameters for the specific factor and error parts is approximated by the LVFA estimate of the unique factor parameter and by the MDFA estimate of the specific factor parameter. More detailed results are given through our subdivision of the CompFA model according to whether the error part is uncorrelated among variables or not.
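In symbols, the decomposition described above can be written (notation ours, with a diagonal scaling of the specific factors assumed for illustration) as

$$\mathbf{x} \;=\; \boldsymbol{\Lambda}\mathbf{f} \;+\; \boldsymbol{\Psi}\mathbf{s} \;+\; \mathbf{e}, \qquad \mathbf{u} \;=\; \boldsymbol{\Psi}\mathbf{s} + \mathbf{e},$$

with common factor part $\boldsymbol{\Lambda}\mathbf{f}$, specific factor part $\boldsymbol{\Psi}\mathbf{s}$, and error part $\mathbf{e}$; MDFA and CDFA keep the three parts separate, whereas LVFA works only with the unique factor $\mathbf{u}$.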
Graph-based causal models are a flexible tool for causal inference from observational data. In this paper, we develop a comprehensive framework to define, identify, and estimate a broad class of causal quantities in linearly parametrized graph-based models. The proposed method extends the literature, which mainly focuses on causal effects on the mean level and the variance of an outcome variable. For example, we show how to compute the probability that an outcome variable realizes within a target range of values given an intervention, a causal quantity we refer to as the probability of treatment success. We link graph-based causal quantities defined via the do-operator to parameters of the model-implied distribution of the observed variables using so-called causal effect functions. Based on these causal effect functions, we propose estimators for causal quantities and show that these estimators are consistent and converge at a rate of $N^{-1/2}$ under standard assumptions. Thus, causal quantities can be estimated based on sample sizes that are typically available in the social and behavioral sciences. In the case of maximum likelihood estimation, the estimators are asymptotically efficient. We illustrate the proposed method with an example based on empirical data, placing special emphasis on the difference between the interventional and conditional distribution.
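To make the probability of treatment success concrete, here is a minimal sketch in a toy linear Gaussian model with a single edge; the function name, parameter values, and the one-equation model are ours, chosen only to illustrate the idea of evaluating an interventional distribution over a target range:

```python
from scipy.stats import norm

def prob_treatment_success(beta, sigma, x, lower, upper):
    """P(lower <= Y <= upper | do(X = x)) in the toy model
    Y = beta * X + eps, eps ~ N(0, sigma^2).

    Illustrative only; the paper's framework covers general
    linearly parametrized graph-based models.
    """
    mu = beta * x  # interventional mean of Y under do(X = x)
    return norm.cdf(upper, mu, sigma) - norm.cdf(lower, mu, sigma)

# Example: effect beta = 0.5, residual sd 1.0, intervention do(X = 2)
print(prob_treatment_success(0.5, 1.0, x=2.0, lower=0.0, upper=2.0))
```

In the general case the interventional mean and variance of the outcome would be derived from the graph and its path coefficients rather than from a single regression equation.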
Offline volunteering faced new challenges during the COVID-19 pandemic. Using a survey experiment with 1207 student participants, we test the impact of informing subjects about blood donation urgency (shortage information), and of providing information about measures taken to reduce SARS-CoV-2 transmission at blood donation centers (hygiene information), on their inclination to donate during and after the COVID-19 lockdown. The results show that shortage information increases non-donors’ extensive-margin willingness to donate by 15 percentage points (pp), on average, and increases the willingness to donate quickly for all respondents. Hygiene information, however, reduces prior donors’ intention to donate again by 8 pp, on average, and reduces the willingness of non-donors to donate quickly.
The asymptotic posterior normality (APN) of the latent variable vector in an item response theory (IRT) model is a crucial argument in IRT modeling approaches. In the case of a single latent trait and under general assumptions, Chang and Stout (Psychometrika, 58(1):37–52, 1993) proved the APN for a broad class of latent trait models for binary items. Under the same setup, they also showed the consistency of the latent trait’s maximum likelihood estimator (MLE). Since then, several modeling approaches have been developed that consider multivariate latent traits and assume their APN, a conjecture which has not been proved so far. We fill this theoretical gap by extending the results of Chang and Stout to multivariate latent traits. Further, we discuss the existence and consistency of MLEs, maximum a posteriori, and expected a posteriori estimators for the latent traits under the same broad class of latent trait models.
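Informally, and in the unidimensional case already covered by Chang and Stout, APN states that as the number of items $n$ grows, the posterior of the latent trait concentrates around the MLE with curvature given by the test information (our notation):

$$\theta \mid X_1, \dots, X_n \;\overset{\text{approx.}}{\sim}\; \mathcal{N}\!\big(\hat{\theta}_n,\; I_n(\hat{\theta}_n)^{-1}\big), \qquad I_n(\theta) = \sum_{i=1}^{n} I_i(\theta),$$

where $\hat{\theta}_n$ is the MLE and $I_i$ the information contributed by item $i$; the paper extends this result to latent trait vectors.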
The longitudinal process that leads to university student dropout in STEM subjects can be described by referring to (a) inter-individual differences (e.g., cognitive abilities) as well as (b) intra-individual changes (e.g., affective states), (c) (unobserved) heterogeneity of trajectories, and (d) time-dependent variables. Large dynamic latent variable model frameworks for intensive longitudinal data (ILD) have been proposed that are (partially) capable of simultaneously separating these complex data structures (e.g., DLCA; Asparouhov et al. in Struct Equ Model 24:257–269, 2017; DSEM; Asparouhov et al. in Struct Equ Model 25:359–388, 2018; NDLC-SEM; Kelava and Brandt in Struct Equ Model 26:509–528, 2019). From a methodological perspective, forecasting in dynamic frameworks that allows for real-time inferences on latent or observed variables based on ongoing data collection has not been an extensive research topic. From a practical perspective, there has been no empirical study on student dropout in math that integrates ILD, dynamic frameworks, and forecasting of critical states of the individuals allowing for real-time interventions. In this paper, we show how Bayesian forecasting of multivariate intra-individual variables and time-dependent class membership of individuals (affective states) can be performed in these dynamic frameworks using a Forward Filtering Backward Sampling method. To illustrate our approach, we apply the proposed forecasting method to ILD from a large university student dropout study in math, with multivariate observations collected over 50 measurement occasions from multiple students ($N = 122$). More specifically, we forecast emotions and behavior related to dropout. This allows us to predict emerging critical dynamic states (e.g., critical stress levels or pre-decisional states) 8 weeks before the actual dropout occurs.
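To illustrate the algorithm class, a minimal Forward Filtering Backward Sampling sketch for a toy univariate linear Gaussian state-space model; the model, function name, and parameter values are ours for illustration and are far simpler than the multivariate dynamic mixture frameworks used in the paper:

```python
import numpy as np

def ffbs(y, phi, q, r, m0=0.0, P0=1.0, rng=None):
    """Forward Filtering Backward Sampling for the toy model
        x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)
        y_t = x_t + v_t,            v_t ~ N(0, r)
    Returns one posterior draw of the full latent state trajectory.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    m, P = np.zeros(T), np.zeros(T)

    # Forward pass: Kalman filter
    m_prev, P_prev = m0, P0
    for t in range(T):
        m_pred, P_pred = phi * m_prev, phi**2 * P_prev + q
        K = P_pred / (P_pred + r)
        m[t] = m_pred + K * (y[t] - m_pred)
        P[t] = (1.0 - K) * P_pred
        m_prev, P_prev = m[t], P[t]

    # Backward pass: sample states given all observations
    x = np.zeros(T)
    x[-1] = rng.normal(m[-1], np.sqrt(P[-1]))
    for t in range(T - 2, -1, -1):
        S = phi**2 * P[t] + q
        J = phi * P[t] / S
        mean = m[t] + J * (x[t + 1] - phi * m[t])
        var = P[t] - J * phi * P[t]
        x[t] = rng.normal(mean, np.sqrt(var))
    return x

# Example: one draw of a latent trajectory over 50 noisy measurement occasions
rng = np.random.default_rng(1)
latent = np.cumsum(rng.normal(size=50)) * 0.1
draw = ffbs(latent + rng.normal(scale=0.5, size=50), phi=0.9, q=0.1, r=0.25, rng=rng)
```

Forecasting then amounts to running the forward recursion ahead of the last observed occasion, which is what allows inferences about upcoming critical states while data collection is still ongoing.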
MTVE is an open-source software tool (citeware) that can be applied in laboratory and online experiments to implement video communication. The tool enables researchers to gather video data from these experiments in a form that allows the videos to be used later for automatic analysis with machine learning techniques. The browser-based tool comes with a simple user interface and can be easily integrated into z-Tree, oTree, and other experimental or survey tools. It gives experimenters control over several communication parameters (e.g., number of participants, resolution), produces high-quality video data, and circumvents the Cocktail Party Problem (i.e., the problem of separating speakers solely based on audio input) by producing separate files. Using one of the recommended Voice-to-Text AIs, experimenters can transcribe the individual files. MTVE can then merge these individual transcriptions into one conversation.
It is shown that psychometric test reliability, based on any true-score model with randomly sampled items and uncorrelated errors, converges to 1 with probability 1 as the test length goes to infinity, assuming some general regularity conditions. The asymptotic rate of convergence is given by the Spearman–Brown formula, and this result does not require that the items be parallel, or latently unidimensional, or even finite dimensional. Simulations with the 2-parameter logistic item response theory model reveal that the reliability of short multidimensional tests can be positively biased, meaning that applying the Spearman–Brown formula in these cases would lead to overprediction of the reliability that results from lengthening a test. However, constructors of short tests generally aim for tests that measure just one attribute, so the bias problem may have little practical relevance. For short unidimensional tests under the 2-parameter logistic model, reliability is almost unbiased, meaning that application of the Spearman–Brown formula in these cases of greater practical utility leads to predictions that are approximately unbiased.
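For reference, the Spearman–Brown formula referred to above: if a test with reliability $\rho$ is lengthened by a factor $k$ with comparable items, the predicted reliability is

$$\rho_k \;=\; \frac{k\,\rho}{1 + (k-1)\,\rho},$$

which tends to 1 as $k \rightarrow \infty$, consistent with the convergence result stated in the abstract.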
This paper introduces the Bradley–Terry regression trunk model, a novel probabilistic approach for the analysis of preference data expressed through paired-comparison rankings. In some cases, it may be reasonable to assume that the preferences expressed by individuals depend on their characteristics. Within the framework of tree-based partitioning, we specify a tree-based model that estimates the joint effects of subject-specific covariates over and above their main effects. We therefore combine a tree-based model and the log-linear Bradley–Terry model, using the outcome of the comparisons as the response variable. The proposed model provides a solution for discovering interaction effects when no a priori hypotheses are available. It produces a small tree, called a trunk, that represents a fair compromise between a simple interpretation of the interaction effects and an easy-to-read partition of judges based on their characteristics and the preferences they have expressed. We present an application on a real dataset following two different approaches, and a simulation study to test the model’s performance. The simulations show that the quality of the model’s performance increases as the number of rankings and objects increases. In addition, the performance is considerably amplified when the judges’ characteristics have a high impact on their choices.
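For readers unfamiliar with the underlying model, the standard Bradley–Terry formulation (worth parameters and notation are generic, not specific to this paper) gives the probability that object $i$ is preferred to object $j$ as

$$P(i \succ j) \;=\; \frac{\pi_i}{\pi_i + \pi_j} \;=\; \frac{\exp(\lambda_i)}{\exp(\lambda_i) + \exp(\lambda_j)}, \qquad \lambda_i = \log \pi_i.$$

In a regression extension such as the one described above, the worth parameters can vary with subject-specific covariates, with the trunk capturing their interaction effects.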
Many of the models that have been proposed for response data share the assumptions that define the monotone homogeneity (MH) model. Observable properties implied by the MH model allow these assumptions to be tested. For binary response data, the most restrictive of these properties is called conditional association (CA). All of the other properties considered can be viewed as incomplete tests of CA that alleviate the practical limitations encountered when assessing the MH model assumptions using CA. It is found that assessing the MH model assumptions with an incomplete test of CA, rather than with CA itself, is generally associated with a substantial loss of information. We also examine the sensitivity of the observable properties to model violation and discuss the implications of the results. It is argued that more research is needed on the extent to which the assumptions and the model specifications influence the inferences made from response data.
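As usually defined (our notation), CA requires that for every split of the item response vector into disjoint subvectors $(\mathbf{X}_1, \mathbf{X}_2)$, every pair of nondecreasing functions $f, g$, and every function $h$,

$$\operatorname{Cov}\!\big(f(\mathbf{X}_1),\, g(\mathbf{X}_1) \;\big|\; h(\mathbf{X}_2)\big) \;\ge\; 0,$$

a condition that is demanding to check in full, which is why the incomplete tests discussed above are attractive in practice.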
A shadow-test approach to the calibration of field-test items embedded in adaptive testing is presented. The objective function used in the shadow-test model selects both the operational and field-test items adaptively, using a Bayesian version of the criterion of $D_s$-optimality. The constraint set for the model can be used to hide the field-test items completely in the content of the test as well as to deal with such practical issues as random control of their exposure rates. The approach runs on efficient implementations of the Gibbs sampler for the real-time updating of the ability and field-test parameters. Optimal settings for the proposed algorithms were found and used to demonstrate item calibration with smaller-than-traditional sample sizes in runtimes fully comparable with conventional adaptive testing.
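For context, a common way to write the $D_s$-optimality criterion (our notation; the paper uses a Bayesian version of it): if the information matrix is partitioned over the field-test parameters of interest and the remaining (nuisance) parameters,

$$\mathbf{I} \;=\; \begin{pmatrix} \mathbf{I}_{11} & \mathbf{I}_{12} \\ \mathbf{I}_{21} & \mathbf{I}_{22} \end{pmatrix}, \qquad \text{maximize} \quad \det\!\big(\mathbf{I}_{11} - \mathbf{I}_{12}\mathbf{I}_{22}^{-1}\mathbf{I}_{21}\big),$$

so that item selection targets the precision of the field-test parameters while accounting for uncertainty in the other parameters.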
This article examines the consequences of Australia’s Foreign Relations (State and Territory Arrangements) Act 2020 (Cth) (‘Foreign Relations Act’) for international law. It argues that the arrangements entered into by state, territory and local governments to which the Foreign Relations Act applies can be relevant to international law in three ways. First, they may relate indirectly to Australia’s international legal obligations. Second, they may be a means by which Australian subnational governments claim a role for themselves in governance on global issues. Third, as an exercise of diplomacy, they influence the relations Australia maintains with other nations and the way in which it participates in the international system. As the states and territories in particular become more assertive, including on international issues such as climate change, giving the Commonwealth complete control over such arrangements may impact Australia’s relationship with international law.