We provide an assessment of the Infinity Two fusion pilot plant (FPP) baseline plasma physics design. Infinity Two is a four-field period, aspect ratio $A = 10$, quasi-isodynamic stellarator with improved confinement appealing to a max-$J$ approach, elevated plasma density and high magnetic fields ($\langle B\rangle = 9$ T). Here $J$ denotes the second adiabatic invariant. At the envisioned operating point ($800$ MW deuterium-tritium (DT) fusion), the configuration has robust magnetic surfaces based on magnetohydrodynamic (MHD) equilibrium calculations and is stable to both local and global MHD instabilities. The configuration has excellent confinement properties, with small neoclassical transport and low bootstrap current ($|I_{\mathrm{bootstrap}}| \sim 2$ kA). Calculations of collisional alpha-particle confinement in a DT FPP scenario show small energy losses to the first wall (${<}1.5\,\%$) and stable energetic particle/Alfvén eigenmodes at high ion density. Low turbulent transport is achieved through a combination of density profile control consistent with pellet fueling and reduced stiffness of turbulent transport via three-dimensional shaping. Transport simulations with the T3D-GX-SFINCS code suite, with self-consistent turbulent and neoclassical transport, predict that the DT fusion power $P_{\mathrm{fus}} = 800$ MW operating point is attainable with high fusion gain ($Q=40$) at volume-averaged electron densities $n_e \approx 2 \times 10^{20}$ m$^{-3}$, below the Sudo density limit. Additional transport calculations show that an ignited ($Q=\infty$) solution is available at slightly higher density ($2.2 \times 10^{20}$ m$^{-3}$) with $P_{\mathrm{fus}} = 1.5$ GW. The magnetic configuration is defined by a magnetic coil set with sufficient room for an island divertor, and for shielding and blanket solutions with tritium breeding ratios (TBR) above unity. An optimistic estimate for the gas-cooled solid breeder design, a helium-cooled pebble bed, is TBR $\sim 1.3$. Infinity Two satisfies the physics requirements of a stellarator fusion pilot plant.
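For orientation, using standard definitions rather than details taken from the paper: with fusion gain defined as $Q = P_{\mathrm{fus}}/P_{\mathrm{aux}}$, the $P_{\mathrm{fus}} = 800$ MW, $Q = 40$ operating point corresponds to roughly $20$ MW of external heating, with the DT alpha particles supplying about one fifth of the fusion power ($\approx 160$ MW) as self-heating. The Sudo density limit mentioned above is commonly quoted as the scaling
$$ n_{\mathrm{Sudo}}\,[10^{20}\,\mathrm{m}^{-3}] \simeq 0.25\,\sqrt{\frac{P\,[\mathrm{MW}]\,B\,[\mathrm{T}]}{a^{2}\,[\mathrm{m}^{2}]\,R\,[\mathrm{m}]}}, $$
where $P$ is the absorbed heating power and $a$ and $R$ are the minor and major radii; the coefficient and the geometry used for Infinity Two are not specified here.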
Transport characteristics and predicted confinement are shown for the Infinity Two fusion pilot plant baseline plasma physics design, a high-field stellarator concept developed using modern optimization techniques. Transport predictions are made using high-fidelity nonlinear gyrokinetic turbulence simulations along with drift-kinetic neoclassical simulations. A pellet-fuelled scenario is proposed that sustains an edge density gradient to substantially reduce ion temperature gradient turbulence. Trapped electron mode turbulence is minimized through the quasi-isodynamic configuration, which has been optimized for maximum-$J$. A baseline operating point with deuterium–tritium fusion power $P_{\mathrm{fus,DT}} = 800$ MW and high fusion gain $Q_{\mathrm{fus}} = 40$ is demonstrated, respecting the Sudo density limit and magnetohydrodynamic stability limits. Additional higher-power operating points are also predicted, including a fully ignited ($Q_{\mathrm{fus}} = \infty$) case with $P_{\mathrm{fus,DT}} = 1.5$ GW. Pellet ablation calculations indicate it is plausible to fuel and sustain the desired density profile. Impurity transport calculations indicate that turbulent fluxes dominate neoclassical fluxes deep into the core, and it is predicted that impurity peaking will be smaller than assumed in the transport simulations. A path to access the large radiation fraction needed to satisfy exhaust requirements while sustaining core performance is also discussed.
Safety villages are interventions that aim to boost children's knowledge and behaviour regarding risk-taking behaviours and their consequences via an experiential learning approach. In safety villages, children experience scenarios involving risks that resemble real-life situations. We investigated the extent to which desirable learning outcomes from a single-session safety village visit are visible outside the safety village context. In a well-powered, quasi-experimental, preregistered field study, we compared students (aged 11–13) who received experiential safety education to a control group of students who had not yet received the education on three important learning outcomes: knowledge-application, risk-taking behaviour, and general risk-taking tendencies. Data were collected outside of the safety village environment, before or after the visit, and without explicit reminders of the visit. Results show that students who received experiential safety education outperformed those who had not yet received it on knowledge-application and showed reduced risk-taking behaviours. We found no differences in general risk-taking tendencies. These results show that a single visit to a safety village can reduce the taking of risks that were experienced in the village, but not general risk-taking tendencies. Theoretical and policy implications are discussed.
Bounds are established for log odds ratios (log cross-product ratios) involving pairs of items for item response models. First, expressions for bounds on log odds ratios are provided for one-dimensional item response models in general. Then, explicit bounds are obtained for the Rasch model and the two-parameter logistic (2PL) model. Results are also illustrated through an example from a study of model-checking procedures. The bounds obtained can provide an elementary basis for assessment of goodness of fit of these models.
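For reference (standard definitions, not the paper's notation): for a pair of dichotomous items $i$ and $j$, the log odds ratio in question is
$$ \lambda_{ij} = \log\frac{P(X_i=1,X_j=1)\,P(X_i=0,X_j=0)}{P(X_i=1,X_j=0)\,P(X_i=0,X_j=1)}, $$
where, under local independence, each joint manifest probability is obtained by integrating the product of the item response functions over the ability distribution, e.g. $P(X_i=1,X_j=1)=\int P_i(\theta)P_j(\theta)\,dF(\theta)$. The bounds referred to above constrain the possible values of $\lambda_{ij}$ under the Rasch and 2PL models.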
The problem of characterizing the manifest probabilities of a latent trait model is considered. The item characteristic curve is transformed to the item passing-odds curve and a corresponding transformation is made on the distribution of ability. This results in a useful expression for the manifest probabilities of any latent trait model. The result is then applied to give a characterization of the Rasch model as a log-linear model for a $2^J$ contingency table. Partial results are also obtained for other models. The question of the identifiability of “guessing” parameters is also discussed.
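To make the characterization concrete (a sketch in standard notation, not the paper's): under local independence, the manifest probability of a response pattern $x=(x_1,\dots,x_J)$ is $P(X=x)=\int\prod_j P_j(\theta)^{x_j}\{1-P_j(\theta)\}^{1-x_j}\,dF(\theta)$. For the Rasch model, $P_j(\theta)=e^{\theta-b_j}/(1+e^{\theta-b_j})$, so the passing odds factorize as $P_j(\theta)/\{1-P_j(\theta)\}=e^{\theta}e^{-b_j}$, and
$$ \log P(X=x) = -\sum_j x_j b_j + \gamma\Big(\sum_j x_j\Big), $$
where $\gamma$ depends on the response pattern only through the total score. This is a log-linear model for the $2^J$ contingency table with item main effects plus total-score terms, the type of characterization referred to above.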
The Dutch Identity is a useful way to reexpress the basic equations of item response models that relate the manifest probabilities to the item response functions (IRFs) and the latent trait distribution. The identity may be exploited in several ways. For example: (a) to suggest how item response models behave for large numbers of items: they are approximate submodels of second-order loglinear models for $2^J$ tables; (b) to suggest new ways to assess the dimensionality of the latent trait: principal components analysis of matrices composed of second-order interactions from loglinear models; (c) to give insight into the structure of latent class models; and (d) to illuminate the problem of identifying the IRFs and the latent trait distribution from sample data.
We give an account of Classical Test Theory (CTT) in terms of the more fundamental ideas of Item Response Theory (IRT). This approach views classical test theory as a very general version of IRT, and the commonly used IRT models as detailed elaborations of CTT for special purposes. We then use this approach to CTT to derive some general results regarding the prediction of the true score of a test from an observed score on that test, as well as from an observed score on a different test. This leads us to a new view of linking tests that were not developed to be linked to each other. In addition, we propose true-score prediction analogues of the Dorans and Holland measures of the population sensitivity of test linking functions. We illustrate the accuracy of the first-order theory using simulated data from the Rasch model, and illustrate the effect of population differences using a set of real data.
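For background (a classical result, not one claimed by the paper): the standard linear predictor of the true score $T$ from an observed score $x$ on the same test is Kelley's formula,
$$ \hat{T} = \rho_{XX'}\,x + (1-\rho_{XX'})\,\mu_X, $$
where $\rho_{XX'}$ is the reliability of the test and $\mu_X$ is the population mean of the observed scores; the first-order results summarized above concern true-score predictions of this kind, including prediction from a score on a different test.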
Item response theory (IRT) models are now in common use for the analysis of dichotomous item responses. This paper examines the sampling theory foundations for statistical inference in these models. The discussion includes: some history on the “stochastic subject” versus the random sampling interpretations of the probability in IRT models; the relationship between three versions of maximum likelihood estimation for IRT models; estimating θ versus estimating θ-predictors; IRT models and loglinear models; the identifiability of IRT models; and the role of robustness and Bayesian statistics from the sampling theory perspective.
This study reveals the usefulness of multiple correlation techniques in estimating the relative importance of different aspects of a tracking task in the operator's tracking behavior. The technique is applied to a compensatory tracking task with a position control.
For the case of the one-element Markov learning model in which the guessing parameter p is assumed known, the efficiency of Bower's method-of-moments estimator c+ for the learning parameter c is found. It seems that although c+ is not efficient, for practical values of c it is not very inefficient. If p and c are small, ĉ, the maximum likelihood estimate, and c+ have approximately the same first term when expanded in powers of p.
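For context, a sketch of the standard one-element (all-or-none) model in the notation above; identifying Bower's c+ with the total-errors moment estimator is an assumption here, not something stated in the abstract. An item starts in an unlearned state and moves to the learned state with probability c after each trial; before learning, a response is correct only by guessing, with probability p. Then
$$ P(\text{error on trial } n) = (1-p)(1-c)^{\,n-1}, \qquad E[\text{total errors}] = \frac{1-p}{c}, $$
so a natural moment estimator of c equates the observed mean number of total errors to $(1-p)/c$.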
The Non-Equivalent groups with Anchor Test (NEAT) design involves missing data that are missing by design. Three nonlinear observed-score equating methods used with a NEAT design are frequency estimation equipercentile equating (FEEE), chain equipercentile equating (CEE), and item response theory observed-score equating (IRT OSE). These three methods each make different assumptions about the missing data in the NEAT design. The FEEE method assumes that the conditional distribution of the test score given the anchor test score is the same in the two examinee groups. The CEE method assumes that the equipercentile functions equating the test score to the anchor test score are the same in the two examinee groups. The IRT OSE method assumes that the IRT model employed fits the data adequately and that the items in the tests and the anchor test do not exhibit differential item functioning across the two examinee groups. This paper first describes the missing-data assumptions of the three equating methods. It then describes how the missing data in the NEAT design can be filled in in a manner that is consistent with the assumptions made by each of these equating methods. Implications for equating are also discussed.
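To make the chain assumption concrete (standard notation, not the paper's): writing $F_{X,P}$ and $H_{A,P}$ for the cumulative distributions of the test score $X$ and the anchor score $A$ in group $P$, and $H_{A,Q}$ and $G_{Y,Q}$ for those of $A$ and $Y$ in group $Q$, the chained equipercentile function is the composition
$$ e_{\mathrm{CE}}(x) = G_{Y,Q}^{-1}\Big(H_{A,Q}\big(H_{A,P}^{-1}(F_{X,P}(x))\big)\Big), $$
that is, $X$ is first equated to $A$ in group $P$ and the result is then equated to $Y$ in group $Q$; the CEE assumption stated above is that each of these links would be the same in either group.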
The problem of deciding whether a set of mental test data is consistent with any one of a large class of item response models is considered. The “classical” assumption of local independence is weakened to a new condition, local nonnegative dependence (LND). Necessary and sufficient conditions are derived for a LND item response model to fit a set of data. This leads to a condition that a set of data must satisfy if it is to be representable by any item response model that assumes both local independence and monotone item characteristic curves. An example is given to show that LND is strictly weaker than local independence. Thus rejection of LND models implies rejection of all item response models that assume local independence for a given set of data.
Although behavioral mechanisms in the association among depression, anxiety, and cancer are plausible, few studies have empirically studied mediation by health behaviors. We aimed to examine the mediating role of several health behaviors in the associations among depression, anxiety, and the incidence of various cancer types (overall, breast, prostate, lung, colorectal, smoking-related, and alcohol-related cancers).
Methods
Two-stage individual participant data meta-analyses were performed based on 18 cohorts within the Psychosocial Factors and Cancer Incidence consortium that had a measure of depression or anxiety (N = 319 613, cancer incidence = 25 803). Health behaviors included smoking, physical inactivity, alcohol use, body mass index (BMI), sedentary behavior, and sleep duration and quality. In stage one, path-specific regression estimates were obtained in each cohort. In stage two, cohort-specific estimates were pooled using random-effects multivariate meta-analysis, and natural indirect effects (i.e. mediating effects) were calculated as hazard ratios (HRs).
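As an illustration of the stage-two pooling (shown here in its univariate form; the analysis above uses a multivariate random-effects model): cohort-specific estimates $\hat{\theta}_k$ (e.g. log hazard ratios for a given path) with within-cohort variances $s_k^2$ are combined as
$$ \hat{\theta} = \frac{\sum_k w_k\hat{\theta}_k}{\sum_k w_k}, \qquad w_k = \frac{1}{s_k^2+\hat{\tau}^2}, $$
where $\hat{\tau}^2$ is an estimate of the between-cohort heterogeneity; the pooled path estimates are then used to compute the natural indirect effects reported below as hazard ratios.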
Results
Smoking (HRs range 1.04–1.10) and physical inactivity (HRs range 1.01–1.02) significantly mediated the associations among depression, anxiety, and lung cancer. Smoking was also a mediator for smoking-related cancers (HRs range 1.03–1.06). There was mediation by health behaviors, especially smoking, physical inactivity, alcohol use, and a higher BMI, in the associations among depression, anxiety, and overall cancer or other types of cancer, but effects were small (HRs generally below 1.01).
Conclusions
Smoking constitutes a mediating pathway linking depression and anxiety to lung cancer and smoking-related cancers. Our findings underline the importance of smoking cessation interventions for persons with depression or anxiety.
Understanding how sustainable preference change can be achieved is of both scientific and practical importance. Recent work shows that merely responding or not responding to objects during go/no-go training can influence preferences for these objects right after the training, when people choose with a time limit. Here we examined whether and how such immediate preference change in fast choices affects choices made without a time limit one week later. In two preregistered experiments, participants responded to go food items and withheld responses toward no-go food items during a go/no-go training. Immediately after the training, they made consumption choices for half of the items (with a time limit in Experiment 1; without a time limit in Experiment 2). One week later, participants chose again (without a time limit in both experiments). Half of the choices had been presented immediately after the training (repeated choices), while the other half had not (new choices). Participants preferred go over no-go items both immediately after the training and one week later. Furthermore, the effect was observed for both repeated and new choices after one week, revealing a direct effect of mere (non)responses on preferences one week later. Exploratory analyses revealed that the effect after one week is related to the memory of stimulus-response contingencies immediately after the training, and that this memory is impaired by making choices. These findings show that mere action versus inaction can directly induce preference change that lasts for at least one week, and that memory of stimulus-response contingencies may play a crucial role in this effect.
Food decisions are driven by differences in the value of choice alternatives, such that high-value items are preferred over low-value items. However, recent research has demonstrated that by implementing Cue-Approach Training (CAT), the odds of choosing low-value items over high-value items can be increased. This effect was explained by increased attention to the low-value items induced by CAT. Our goal was to replicate the original findings and to address the question of the underlying mechanism by employing eye-tracking while participants made their choices. During CAT, participants were presented with images of food items and were instructed to respond quickly to some of them when an auditory cue was presented (cued items) and to withhold responses when no cue was presented (uncued items). Next, participants made choices between two food items that differed in whether they were cued during CAT (cued versus uncued) and in pre-training value (high versus low). As predicted, results showed that participants were more likely to select a low-value food item over a high-value food item for consumption when the low-value item had been cued than when it had not been cued. Importantly, and against our hypothesis, there was no significant increase in gaze time for low-value cued items compared to low-value uncued items. Participants did spend more time fixating on the chosen item compared to the unchosen alternative, thus replicating previous work in this domain. The present research thus establishes the robustness of CAT as a means of facilitating choices of low-value over high-value food, but could not demonstrate that this increased preference was due to increased attention to cued low-value items. It therefore raises the question of how CAT may increase choices for low-value options.
The present research aimed to test the role of mood in the Iowa Gambling Task (IGT; Bechara et al., 1994). In the IGT, participants can win or lose money by picking cards from four different decks. They have to learn by experience that two decks are overall advantageous and two decks are overall disadvantageous. Previous studies have shown that at an early stage of this card game, players begin to display a tendency towards the advantageous decks. Subsequent research suggested that at this stage, people base their decisions on conscious gut feelings (Wagar & Dixon, 2006). Based on empirical evidence for the relation between mood and cognitive processing styles, we expected and consistently found that, compared to a negative mood state, reported and induced positive mood states increased this early tendency towards advantageous decks. Our results provide support for the idea that a positive mood causes stronger reliance on affective signals in decision-making than a negative mood.
Democratic cooperation is a particularly complex type of arrangement that requires attendant institutions to ensure that the problems inherent in collective action do not subvert the public good. It is perhaps due to this complexity that historians, political scientists, and others generally associate the birth of democracy with the emergence of so-called states and center it geographically in the “West,” where it then diffused to the rest of the world. We argue that the archaeological record of the American Southeast provides a case to examine the emergence of democratic institutions and to highlight the distinctive ways in which such long-lived institutions were—and continue to be—expressed by Native Americans. Our research at the Cold Springs site in northern Georgia, USA, provides important insight into the earliest documented council houses in the American Southeast. We present new radiocarbon dating of these structures along with dates for the associated early platform mounds that place their use as early as cal AD 500. This new dating makes the institution of the Muskogean council, whose active participants have always included both men and women, at least 1,500 years old, and therefore one of the most enduring and inclusive democratic institutions in world history.
Back pain is one of the largest drivers of workplace injury and lost productivity in industries around the world. Back injuries are one of the leading causes of days away from work, accounting for 38.5% of such days across all occupations and 43% for manual laborers. While the cause of back pain varies across occupations, for materiel movers it is often caused by repetitive lifting with poor form. To reduce these injuries, the Aerial Porter Exoskeleton (APEx) was created. The APEx uses a hip-mounted, powered exoskeleton attached to an adjustable vest. An onboard computer monitors the user's configuration to determine when to activate assistance. Lift form is assisted by a novel lumbar brace mounted on the sides of the hips. Properly worn, the APEx holds the user upright while providing additional hip torque through a lift. This was tested by having participants complete a lifting test with the exoskeleton worn in the "on" configuration, compared with not wearing the exoskeleton. The APEx has been shown to deliver 30 Nm of torque in lab testing. The activity-recognition algorithm has also been shown to be accurate in 95% of tested conditions. When worn by subjects, testing has shown average peak reductions of 14.9% in heart rate (BPM), 8% in VO2 consumption, and an 8% reduction in perceived effort, favoring the APEx.
The H2020/PERISCOPE project, including 32 partners from European universities and agencies, began on 1 November 2020 and will last 36 months. The overarching objectives of PERISCOPE are to map and analyse the unintended impacts of the COVID-19 outbreak; develop solutions and guidance for policymakers and health authorities on how to mitigate the impact of the outbreak; enhance Europe's preparedness for future similar events; and reflect on future multi-level governance in health as well as other domains affected by the outbreak. During this session we will report on early lessons learnt from the mapping and assessment of the impacts of the COVID-19 outbreak on mental health at national and subnational level in the EU with respect to individuals, communities and societies, and we will comment on their comparability. The aim is to explore differences between countries in the occurrence of mental ill health, especially the impact on vulnerable groups, and how this is related to exposure to SARS-CoV-2, differences in policies over time, and effects on the economy. We will reflect on the short- and long-term consequences for mental health and health inequalities, report on the ongoing development of holistic policy guidelines for health and other authorities, and report on the analysis of multi-level governance at local, regional and national level, at the member state–EU level, and at the EU–global governance level. PERISCOPE will continue collecting data and updating a common data “Atlas”, which will enable the consortium to engage in modelling and experiments to provide “continuous nowcasting” of the outbreak.
Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible by laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves, which will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves contains most information in the 2–4 kHz frequency band, which is outside of the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to full third-generation detectors at a fraction of the cost. Such sensitivity changes the expected detection rate of post-merger remnants from approximately one per few decades with two A+ detectors to a few per year, and potentially allows for the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica.