Introduction
Celiac disease (CD), an autoimmune disorder triggered by gluten, affects about one percent of the population. Only around one-third of cases are diagnosed, leaving the majority unaware of their condition. Untreated CD can damage the gut lining, resulting in malnutrition, anemia, and osteoporosis. Our primary goal was to identify at-risk groups and assess the cost-effectiveness of active case finding in primary care.
Methods
Our methodology involved systematic reviews and meta-analyses focusing on the accuracy of CD risk factors (chronic conditions and symptoms) and diagnostic tests (serological and genetic). Prediction models, based on identified risk factors, were developed for identifying individuals who would benefit from CD testing in routine primary care. Additionally, an online survey gauged individuals’ preferences regarding diagnostic certainty before initiating a gluten-free diet. This information informed the development of economic models evaluating the cost-effectiveness of various active case finding strategies.
Results
Individuals with dermatitis herpetiformis, a family history of CD, migraine, anemia, type 1 diabetes, osteoporosis, or chronic liver disease had a one-and-a-half to two times higher risk of having CD. IgA tTG and EMA demonstrated good diagnostic accuracy. Genetic tests showed high sensitivity but low specificity. Survey results indicated substantial variation in the degree of certainty respondents wanted from a blood test before initiating a gluten-free diet. Cost-effectiveness analyses showed that, in adults, IgA tTG at a one percent pre-test probability (equivalent to population screening) was the most cost effective. Among non-population screening strategies, IgA EMA plus HLA was most cost effective. There was substantial uncertainty in the economic model results.
Conclusions
While population-based screening with IgA tTG appears the most cost effective in adults, implementation decisions should not rely solely on economic analyses. Future research should explore whether population-based CD screening meets UK National Screening Committee criteria; a long-term randomized controlled trial of screening strategies is also needed.
Robust schemes in regression are adapted to mean and covariance structure analysis, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first- and second-order moments, from the structural model. A simple weighting function is adopted because of its flexibility with changing dimensions. The weight matrix is obtained from an adaptive use of residuals. Test statistics and standard error estimators are given, based on iteratively reweighted least squares. The method reduces to a standard distribution-free methodology if all cases are equally weighted. Examples demonstrate the value of the robust procedure.
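A minimal sketch of the case-weighting idea, assuming a Huber-type weight function applied to Mahalanobis distances; the function name, tuning constant, and the simple normalisation are illustrative choices, not the paper's exact estimator (which also refits the structural model itself by iteratively reweighted least squares and applies a consistency correction to the covariance).

```python
import numpy as np

def robust_mean_cov(X, c=2.5, tol=1e-6, max_iter=100):
    """Iteratively reweighted estimate of mean and covariance.

    Cases far from the current fit (large Mahalanobis distance) are
    down-weighted with a Huber-type weight; illustrative sketch only.
    """
    mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(max_iter):
        Xc = X - mu
        d = np.sqrt(np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc))
        w = np.minimum(1.0, c / np.maximum(d, 1e-12))   # Huber-type weights
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mu_new
        S_new = (w[:, None] * Xc).T @ Xc / w.sum()      # no consistency correction here
        if np.abs(mu_new - mu).max() < tol and np.abs(S_new - S).max() < tol:
            mu, S = mu_new, S_new
            break
        mu, S = mu_new, S_new
    return mu, S

# A structural equation model would then be fitted to the reweighted S.
```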
Current practice in factor analysis typically involves analysis of correlation rather than covariance matrices. We study whether the standard z-statistic that evaluates whether a factor loading is statistically necessary is correctly applied in such situations and more generally when the variables being analyzed are arbitrarily rescaled. Effects of rescaling on estimated standard errors of factor loading estimates, and the consequent effect on z-statistics, are studied in three variants of the classical exploratory factor model under canonical, raw varimax, and normal varimax solutions. For models with analytical solutions we find that some of the standard errors as well as their estimates are scale equivariant, while others are invariant. For a model in which an analytical solution does not exist, we use an example to illustrate that neither the factor loading estimates nor the standard error estimates possess scale equivariance or invariance, implying that different conclusions could be obtained with different scalings. Together with the prior findings on parameter estimates, these results provide new guidance for a key statistical aspect of factor analysis.
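The paper's question concerns standard errors and z-statistics, which the quick check below does not touch; it only illustrates, on simulated data with a one-factor model (so rotation is not an issue), the scale equivariance of the loading estimates themselves. All names and numbers are illustrative.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, p = 2000, 5
f = rng.normal(size=(n, 1))
Lam = np.array([[0.8], [0.7], [0.6], [0.5], [0.4]])
X = f @ Lam.T + 0.5 * rng.normal(size=(n, p))          # one-factor data

D = np.diag([1.0, 10.0, 0.1, 2.0, 5.0])                # arbitrary rescaling of variables
fa_raw = FactorAnalysis(n_components=1).fit(X)
fa_scaled = FactorAnalysis(n_components=1).fit(X @ D)

# Up to sign, the loadings from the rescaled data should equal the
# original loadings multiplied by the scaling constants.
print(fa_raw.components_.ravel() * np.diag(D))
print(fa_scaled.components_.ravel())
```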
Indefinite symmetric matrices that are estimates of positive-definite population matrices occur in a variety of contexts such as correlation matrices computed from pairwise present missing data and multinormal based methods for discretized variables. This note describes a methodology for scaling selected off-diagonal rows and columns of such a matrix to achieve positive definiteness. As a contrast to recently developed ridge procedures, the proposed method does not need variables to contain measurement errors. When minimum trace factor analysis is used to implement the theory, only correlations that are associated with Heywood cases are shrunk.
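A deliberately simple stand-in for the scaling idea, assuming the problematic rows/columns are already known; the paper selects them and determines the amount of shrinkage via minimum trace factor analysis, which is not reproduced here.

```python
import numpy as np

def scale_to_pd(R, idx, step=0.99, tol=1e-10, max_iter=1000):
    """Repeatedly shrink the off-diagonal entries in the rows/columns listed
    in `idx` until the matrix is positive (semi)definite."""
    R = R.copy()
    mask = np.zeros_like(R, dtype=bool)
    mask[idx, :] = True
    mask[:, idx] = True
    np.fill_diagonal(mask, False)
    for _ in range(max_iter):
        if np.linalg.eigvalsh(R).min() >= -tol:
            break
        R[mask] *= step
    return R

# Example: an indefinite matrix of the kind produced by pairwise deletion.
R = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, -0.4],
              [0.7, -0.4, 1.0]])
print(np.linalg.eigvalsh(R))                        # smallest eigenvalue < 0
print(np.linalg.eigvalsh(scale_to_pd(R, idx=[2])))  # all eigenvalues >= 0
```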
Data in social and behavioral sciences typically possess heavy tails. Structural equation modeling is commonly used in analyzing interrelations among variables of such data. Classical methods for structural equation modeling fit a proposed model to the sample covariance matrix, which can lead to very inefficient parameter estimates. By fitting a structural model to a robust covariance matrix for data with heavy tails, one generally gets more efficient parameter estimates. Because many robust procedures are available, we propose using the empirical efficiency of a set of invariant parameter estimates in identifying an optimal robust procedure. Within the class of elliptical distributions, analytical results show that the robust procedure leading to the most efficient parameter estimates also yields a most powerful test statistic. Examples illustrate the merit of the proposed procedure. The relevance of this procedure to data analysis in a broader context is noted.
This paper studies the asymptotic distributions of three reliability coefficient estimates: Sample coefficient alpha, the reliability estimate of a composite score following a factor analysis, and the estimate of the maximal reliability of a linear combination of item scores following a factor analysis. Results indicate that the asymptotic distribution for each of the coefficient estimates, obtained based on a normal sampling distribution, is still valid within a large class of nonnormal distributions. Therefore, a formula for calculating the standard error of the sample coefficient alpha, recently obtained by van Zyl, Neudecker and Nel, applies to other reliability coefficients and can still be used even with skewed and kurtotic data such as are typical in the social and behavioral sciences.
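For reference, the sample coefficient alpha itself is straightforward to compute; the sketch below gives the point estimate only and does not reproduce the asymptotic standard error formula of van Zyl, Neudecker and Nel discussed above.

```python
import numpy as np

def coefficient_alpha(X):
    """Sample coefficient alpha for an n x k matrix of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```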
A test for linear trend among a set of eigenvalues of a correlation matrix is developed. As a technical implementation of Cattell's scree test, this is a generalization of Anderson's test for the equality of eigenvalues, and extends Bentler and Yuan's work on linear trends in eigenvalues of a covariance matrix. The power of the minimum χ² and maximum likelihood ratio tests is compared. Examples show that the linear trend hypothesis is more realistic than the standard hypothesis of equality of eigenvalues, and that the hypothesis is compatible with standard decisions on the number of factors or components to retain in data analysis.
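As a purely descriptive companion to the formal tests, one can fit a least-squares line to the smaller eigenvalues and inspect the residuals; the minimum χ² and likelihood ratio statistics developed in the paper are not reproduced in this sketch.

```python
import numpy as np

def eigen_linear_trend(R, q):
    """Fit lambda_j = a + b*j by least squares to the q smallest eigenvalues
    of the correlation matrix R (descriptive check of the linear-trend idea)."""
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
    tail = lam[-q:]
    j = np.arange(1, q + 1)
    b, a = np.polyfit(j, tail, 1)                # slope, intercept
    resid = tail - (a + b * j)
    return a, b, resid
```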
Data in social and behavioral sciences are often hierarchically organized though seldom normal, yet normal theory based inference procedures are routinely used for analyzing multilevel models. Based on this observation, simple adjustments to normal theory based results are proposed to minimize the consequences of violating normality assumptions. For characterizing the distribution of parameter estimates, sandwich-type covariance matrices are derived. Standard errors based on these covariance matrices remain consistent under distributional violations. Implications of various covariance estimators are also discussed. For evaluating the quality of a multilevel model, a rescaled statistic is given for both the hierarchical linear model and the hierarchical structural equation model. The rescaled statistic, improving the likelihood ratio statistic by estimating one extra parameter, approaches the same mean as its reference distribution. A simulation study with a 2-level factor model implies that the rescaled statistic is preferable.
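The multilevel derivations do not fit in a short sketch, but the sandwich construction itself can be shown in the simplest single-level regression setting; the function below is an illustrative HC0-type estimator without small-sample corrections, not the paper's multilevel estimator.

```python
import numpy as np

def ols_sandwich(X, y):
    """OLS estimates with a sandwich ("bread-meat-bread") covariance matrix:
    (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}, consistent under heteroscedasticity."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = (X * (e**2)[:, None]).T @ X
    return beta, bread @ meat @ bread
```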
Since data in social and behavioral sciences are often hierarchically organized, special statistical procedures for covariance structure models have been developed to reflect such hierarchical structures. Most of these developments are based on a multivariate normality assumption, which may not be realistic for practical data. It is of interest to know whether normal theory-based inference can still be valid when the distributional condition is violated. Various interesting results have been obtained for conventional covariance structure analysis based on the class of elliptical distributions. This paper shows that similar results still hold for 2-level covariance structure models. Specifically, when both the level-1 (within cluster) and level-2 (between cluster) random components follow the same elliptical distribution, the rescaled statistic recently developed by Yuan and Bentler asymptotically follows a chi-square distribution. When level-1 and level-2 have different elliptical distributions, an additional rescaled statistic can be constructed that also asymptotically follows a chi-square distribution. Our results provide a rationale for applying these rescaled statistics to general non-normal distributions, and also provide insight into issues related to level-1 and level-2 sample sizes.
Mean comparisons are of great importance in the application of statistics. Procedures for mean comparison with manifest variables have been well studied. However, few rigorous studies have been conducted on mean comparisons with latent variables, although the methodology has been widely used and documented. This paper studies the statistics commonly used in latent variable mean modeling and compares them with the parallel manifest variable statistics. Our results indicate that, under certain conditions, the likelihood ratio and Wald statistics used for latent mean comparisons do not always have greater power than the Hotelling T² statistic used for manifest mean comparisons. The noncentrality parameter corresponding to the T² statistic can be much greater than those corresponding to the likelihood ratio and Wald statistics, which we find to be different from those provided in the literature. Under a fixed alternative hypothesis, our results also indicate that the likelihood ratio statistic can be stochastically much greater than the corresponding Wald statistic. The robustness of each statistic is also explored when the model is misspecified or when data are nonnormally distributed. Recommendations and advice are provided for the use of each statistic.
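For the manifest-variable benchmark, the one-sample Hotelling T² statistic can be computed directly; this sketch assumes complete, independent observations and uses the standard F transformation.

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, mu0):
    """One-sample Hotelling T^2 test of H0: E[X] = mu0."""
    n, p = X.shape
    diff = X.mean(axis=0) - mu0
    S = np.cov(X, rowvar=False)
    t2 = n * diff @ np.linalg.solve(S, diff)
    f_stat = (n - p) / (p * (n - 1)) * t2        # ~ F(p, n - p) under H0
    return t2, f_stat, stats.f.sf(f_stat, p, n - p)
```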
Factor analysis is regularly used for analyzing survey data. Missing data, data with outliers and consequently nonnormal data are very common for data obtained through questionnaires. Based on covariance matrix estimates for such nonstandard samples, a unified approach for factor analysis is developed. By generalizing the approach of maximum likelihood under constraints, statistical properties of the estimates for factor loadings and error variances are obtained. A rescaled Bartlett-corrected statistic is proposed for evaluating the number of factors. Equivariance and invariance of parameter estimates and their standard errors for canonical, varimax, and normalized varimax rotations are discussed. Numerical results illustrate the sensitivity of classical methods and advantages of the proposed procedures.
Recent research has shown the potential of speleothem δ13C to record a range of environmental processes. Here, we report on 230Th-dated stalagmite δ13C records for southwest Sulawesi, Indonesia, over the last 40,000 yr to investigate the relationship between tropical vegetation productivity and atmospheric methane concentrations. We demonstrate that the Sulawesi stalagmite δ13C record is driven by changes in vegetation productivity and soil respiration and explore the link between soil respiration and tropical methane emissions using HadCM3 and the Sheffield Dynamic Global Vegetation Model. The model indicates that changes in soil respiration are primarily driven by changes in temperature and CO2, in line with our interpretation of stalagmite δ13C. In turn, modelled methane emissions are driven by soil respiration, providing a mechanism that links methane to stalagmite δ13C. This relationship is particularly strong during the last glaciation, indicating a key role for the tropics in controlling atmospheric methane when emissions from high-latitude boreal wetlands were suppressed. With further investigation, the link between δ13C in stalagmites and tropical methane could provide a low-latitude proxy complementary to polar ice core records to improve our understanding of the glacial–interglacial methane budget.
Previous research has shown that non-Māori-speaking New Zealanders have extensive latent knowledge of Māori, despite not being able to speak it. This knowledge plausibly derives from a memory store of Māori forms (Oh et al., 2020; Panther et al., 2023). Modelling suggests that this ‘proto-lexicon’ includes not only Māori words but also word-parts; however, this suggestion has not yet been tested experimentally.
We present the results of a new experiment in which non-Māori-speaking New Zealanders and non-New Zealanders were asked to segment a range of Māori words into parts. We show that the degree to which the segmentations of non-Māori speakers correlate with the segmentations of two fluent speakers of Māori is stronger among New Zealanders than non-New Zealanders. This research adds to the growing evidence that, even in a largely ‘monolingual’ population, latent bilingualism can arise through long-term exposure to a second language.
Based on an original US survey, this article argues that, on average, US conservatives today feel substantially cooler toward Latin American countries than liberals do. They also desire massively tougher Mexico border policies and much less foreign aid than liberals do. Averages can hide substantial differences within groups, however. Not all liberals and conservatives are alike, and their differences shape attitudes toward Latin America. For instance, our survey reveals that libertarians and economic conservatives oppose foreign aid to places like Haiti out of a belief in the Protestant ethic of self-help and opposition to income redistribution. Communitarians and economic liberals, by contrast, are more supportive of foreign aid to Haiti. Cultural conservatives fear the impact of Mexican immigration on Christian values and a WASP American national identity more than cultural liberals do. But race and racism continue to divide Americans the most consistently in their attitudes and policy preferences toward Latin America. The policy implications of ideologically divided public opinion for US immigration reform are also addressed.
Radiocarbon (14C) ages cannot provide absolutely dated chronologies for archaeological or paleoenvironmental studies directly but must be converted to calendar age equivalents using a calibration curve compensating for fluctuations in atmospheric 14C concentration. Although calibration curves are constructed from independently dated archives, they invariably require revision as new data become available and our understanding of the Earth system improves. In this volume the international 14C calibration curves for both the Northern and Southern Hemispheres, as well as for the ocean surface layer, have been updated to include a wealth of new data and extended to 55,000 cal BP. Based on tree rings, IntCal20 now extends as a fully atmospheric record to ca. 13,900 cal BP. For the older part of the timescale, IntCal20 comprises statistically integrated evidence from floating tree-ring chronologies, lacustrine and marine sediments, speleothems, and corals. We utilized improved evaluation of the timescales and location-variable 14C offsets from the atmosphere (reservoir age, dead carbon fraction) for each dataset. New statistical methods have refined the structure of the calibration curves while maintaining a robust treatment of uncertainties in the 14C ages, the calendar ages, and other corrections. The inclusion of modeled marine reservoir ages derived from a three-dimensional ocean circulation model has allowed us to apply more appropriate reservoir corrections to the marine 14C data rather than the previous use of constant regional offsets from the atmosphere. Here we provide an overview of the new and revised datasets and the associated methods used for the construction of the IntCal20 curve and explore potential regional offsets for tree-ring data. We discuss the main differences with respect to the previous calibration curve, IntCal13, and some of the implications for archaeology and geosciences ranging from the recent past to the time of the extinction of the Neanderthals.
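For orientation, calibrating a single radiocarbon determination against a curve such as IntCal20 amounts to evaluating a Gaussian likelihood along the curve and normalising over calendar age; the sketch below assumes the curve has already been read into arrays (the variable names and layout are illustrative, and established calibration software should be used in practice).

```python
import numpy as np

def calibrate(c14_age, c14_err, cal_bp, curve_c14, curve_err):
    """Probability density over calendar ages (cal BP) for one 14C age.

    cal_bp must be an increasing grid of calendar ages, with the curve's
    14C ages and 1-sigma errors given at the same grid points.
    """
    sigma2 = c14_err**2 + curve_err**2
    like = np.exp(-0.5 * (c14_age - curve_c14)**2 / sigma2) / np.sqrt(sigma2)
    return like / np.trapz(like, cal_bp)         # normalise to a density
```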
A well-dated δ18O record in a stalagmite from a cave in the Klamath Mountains, Oregon, with a sampling interval of 50 yr, indicates that the climate of this region cooled essentially synchronously with Younger Dryas climate change elsewhere in the Northern Hemisphere. The δ18O record also indicates significant century-scale temperature variability during the early Holocene. The δ13C record suggests increasing biomass over the cave through the last deglaciation, with century-scale variability but with little detectable response of vegetation to Younger Dryas cooling.
The final effort of the CLIMAP project was a study of the last interglaciation, a time of minimum ice volume some 122,000 yr ago coincident with the Substage 5e oxygen isotopic minimum. Based on detailed oxygen isotope analyses and biotic census counts in 52 cores across the world ocean, last interglacial sea-surface temperatures (SST) were compared with those today. There are small SST departures in the mid-latitude North Atlantic (warmer) and the Gulf of Mexico (cooler). The eastern boundary currents of the South Atlantic and Pacific oceans are marked by large SST anomalies in individual cores, but their interpretations are precluded by no-analog problems and by discordancies among estimates from different biotic groups. In general, the last interglacial ocean was not significantly different from the modern ocean. The relative sequencing of ice decay versus oceanic warming on the Stage 6/5 oxygen isotopic transition and of ice growth versus oceanic cooling on the Stage 5e/5d transition was also studied. In most of the Southern Hemisphere, the oceanic response marked by the biotic census counts preceded (led) the global ice-volume response marked by the oxygen-isotope signal by several thousand years. The reverse pattern is evident in the North Atlantic Ocean and the Gulf of Mexico, where the oceanic response lagged that of global ice volume by several thousand years. As a result, the very warm temperatures associated with the last interglaciation were regionally diachronous by several thousand years. These regional lead-lag relationships agree with those observed on other transitions and in long-term phase relationships; they cannot be explained simply as artifacts of bioturbational translations of the original signals.
To benefit from the many advantages of organic semiconductors, such as flexibility, transparency, and small thickness, electronic devices should be made entirely from organic materials. This means that, in addition to organic LEDs, organic solar cells, and organic sensors, we need organic transistors to amplify, process, and control signals and electrical power. The standard lateral organic field-effect transistor (OFET) does not offer the necessary performance for many of these applications. One promising candidate for solving this problem is the vertical organic field-effect transistor (VOFET). In addition to the altered electrode structure, the VOFET has one additional part compared to the OFET – the source insulator. However, the influence of the material used and of the size and geometry of this insulator on the behavior of the transistor has not yet been examined. We investigate key parameters of the VOFET with different source-insulator materials and geometries. We also present transmission electron microscopy (TEM) images of the edge area. Additionally, we investigate the charge transport in such devices using drift-diffusion simulations and the concept of a vertical organic light-emitting transistor (VOLET). The VOLET is a VOFET with an embedded OLED; it allows tracking of the local current density by measuring the light intensity distribution.
We show that the insulator material and thickness have only a small influence on the performance, while the insulator geometry – mainly the overlap of the insulator into the channel – has a strong impact. By tuning this overlap, on/off ratios of 9 × 10⁵ are possible without contact doping.