Network meta-analysis allows the synthesis of relative effects from several treatments. Two broad approaches are available to synthesize the data: arm-synthesis and contrast-synthesis, with several models that can be fitted within each. Limited evaluations comparing these approaches are available. We re-analyzed 118 networks of interventions with binary outcomes using three contrast-synthesis models (CSM; one fitted in a frequentist framework and two in a Bayesian framework) and two arm-synthesis models (ASM; both fitted in a Bayesian framework). We compared the estimated log odds ratios, their standard errors, ranking measures and the between-trial heterogeneity using the different models and investigated if differences in the results were modified by network characteristics. In general, we observed good agreement with respect to the odds ratios, their standard errors and the ranking metrics between the two Bayesian CSMs. However, differences were observed when comparing the frequentist CSM and the ASMs to each other and to the Bayesian CSMs. The network characteristics that we investigated, which represented the connectedness of the networks and rareness of events, were associated with the differences observed between models, but no single factor was associated with the differences across all of the metrics. In conclusion, we found that different models used to synthesize evidence in a network meta-analysis (NMA) can yield different estimates of odds ratios and standard errors that can impact the final ranking of the treatment options compared.
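All of the models compared above synthesize (log) odds ratios estimated from arm-level event counts. As a point of reference, here is a minimal sketch of the trial-level log odds ratio and its large-sample standard error, the basic building block of contrast-synthesis (the counts are illustrative; continuity corrections for zero cells, which matter for the rare-event networks discussed above, are omitted):

```python
import math

def log_odds_ratio(events_t, no_events_t, events_c, no_events_c):
    """Trial-level log odds ratio and its large-sample standard error.
    Zero cells would require a continuity correction, omitted here."""
    lor = math.log((events_t * no_events_c) / (no_events_t * events_c))
    se = math.sqrt(1 / events_t + 1 / no_events_t
                   + 1 / events_c + 1 / no_events_c)
    return lor, se

# e.g. 12/50 events under treatment vs 6/50 under control
lor, se = log_odds_ratio(12, 38, 6, 44)
```

A contrast-synthesis model pools these trial-level contrasts across the network, while an arm-synthesis model works directly from the arm-level counts; differences between the two can therefore emerge exactly where this normal approximation is poor.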
The effect of dietary FODMAPs (fermentable oligo-, di- and mono-saccharides and polyols) in healthy adults is poorly documented. This study compared specific effects of low and moderate FODMAP intake (relative to typical intake) on the faecal microbiome, participant-reported outcomes and gastrointestinal physiology. In a single-blind cross-over study, 25 healthy participants were randomised to one of two provided diets, ‘low’ (LFD) <4 g/d or ‘moderate’ (MFD) 14-18 g/d, for 3 weeks each, with ≥2-week washout between. Endpoints were assessed in the last week of each diet. The faecal bacterial/archaeal and fungal communities were characterised by 16S rRNA and ITS2 profiling, and by metagenomic sequencing, in the 18 participants from whom high-quality DNA was extracted. There were no differences in gastrointestinal or behavioural symptoms (fatigue, depression, anxiety), or in faecal characteristics and biochemistry (including short-chain fatty acids). Mean colonic transit time (telemetry) was 23 (95% confidence interval: 15, 30) h with the MFD compared with 34 (24, 44) h with the LFD (n=12; p=0.009). Fungal diversity (richness) increased in response to the MFD, but bacterial richness was reduced, coincident with expansion of the relative abundances of Bifidobacterium, Anaerostipes, and Eubacterium. Metagenomic analysis showed expansion of polyol-utilising Bifidobacterium and Anaerostipes with the MFD. In conclusion, short-term alterations of FODMAP intake are not associated with symptomatic, stool or behavioural manifestations in healthy adults, but remarkable shifts within the bacterial and fungal populations were observed. These findings emphasise the need to assess all microbial domains and their interrelationships quantitatively to improve understanding of the consequences of diet on gut function.
Glacial geomorphic processes can be mapped as a network of vertical and longitudinal connections between process domains in the glacier system, which can stretch from sources in continental interiors to sinks in the oceans, and through which ice, water and debris are transferred or stored. Domains can be defined structurally by their position within a flow system from areas of accumulation through to areas of ablation, but the functional or process-related connection of domains is better defined by geographic and temporal patterns in factors such as temperature that control glacier geomorphic processes. The idea of connectivity has long been important in glacier research, but without much explicit reference to connectivity science or terminology. Debris transport pathways, sediment stores, sediment budgets, and transfers of energy, water and debris through glaciers are fundamental to how glacial geomorphic systems work. There is a clear opportunity for glacial geomorphology to engage more with connectivity theory, as other areas of geomorphology have done, and for connectivity theory to be applied more explicitly to glacial environments.
Covering both theory and experiment, this text describes the behaviour of homogeneous and density-stratified fluids over and around topography. Its presentation is suitable for advanced undergraduate and graduate students in fluid mechanics, as well as for practising scientists, engineers, and researchers. Using laboratory experiments and illustrations to further understanding, the author explores topics ranging from the classical hydraulics of single-layer flow to more complex situations involving stratified flows over two- and three-dimensional topography, including complex terrain. A particular focus is placed on applications to the atmosphere and ocean, including discussions of downslope windstorms, and of oceanic flow over continental shelves and slopes. This new edition has been restructured to make it more digestible, and updated to cover significant developments in areas such as exchange flows, gravity currents, waves in stratified fluids, stability, and applications to the atmosphere and ocean.
We develop a model that relates self-control to cooperation patterns in social dilemmas, and we test the model in a laboratory public goods experiment. As predicted, we find a robust association between stronger self-control and higher levels of cooperation, and the association is at its strongest when the decision maker’s risk aversion is low and the cooperation levels of others are high. We interpret the pattern as evidence for the notion that individuals may experience an impulse to act in self-interest—and that cooperative behavior benefits from self-control. Free-riders differ from other contributor types only in their tendency not to have identified a self-control conflict in the first place.
Data from a risky choice experiment are used to estimate a fully parametric stochastic model of risky choice. As is usual with such analyses, Expected Utility Theory is rejected in favour of a form of Rank Dependent Theory. Then an estimate of the risk aversion parameter is deduced for each subject, and this is used to construct a measure of the “closeness to indifference” of each subject in each choice problem. This measure is then used as an explanatory variable in a random effects model of decision time, with other explanatory variables being the complexity of the problem, the financial incentives, and the amount of experience accumulated at the time of performing the task. The most interesting finding is that significantly more effort is allocated to problems in which subjects are close to indifference. This presents us with another reason (in addition to statistical information considerations) why such tasks should play a prominent role in experiments.
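The paper's measure is built from each subject's estimated Rank Dependent parameters; the sketch below uses plain expected utility under CRRA for simplicity, so the function names, the functional form, and the lack of probability weighting are all simplifying assumptions, not the paper's specification. The idea is the same: for a given risk aversion parameter, a choice problem whose two options yield nearly equal utility is one where the subject is close to indifference.

```python
import math

def crra_u(x, r):
    """CRRA utility (illustrative stand-in for the estimated model)."""
    return math.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

def closeness_to_indifference(lottery_a, lottery_b, r):
    """Absolute expected-utility difference between two lotteries
    (lists of (probability, payoff) pairs) for a subject with risk
    aversion r; values near zero mean the subject is close to
    indifferent.  In practice the scale would need normalizing,
    since utility is only defined up to an affine transformation."""
    eu = lambda lot: sum(p * crra_u(x, r) for p, x in lot)
    return abs(eu(lottery_a) - eu(lottery_b))
```

A risk-neutral subject (r = 0) is exactly indifferent between a sure 10 and a 50:50 chance of 20 or 0, so the measure is zero there and grows as r moves away from zero.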
We present an experiment where subjects sequentially receive signals about the true state of the world and need to form beliefs about which one is true, with payoffs related to reported beliefs. We attempt to control for risk aversion using the Offerman et al. (Rev Econ Stud 76(4):1461–1489, 2009) technique. Against the baseline of Bayesian updating, we test for belief adjustment underreaction and overreaction and model the decision making process of the agent as a double hurdle model where agents with inferential expectations first decide whether to adjust their beliefs and then, if so, decide by how much. We also test the effects of increased inattention and complexity on belief updating. We find evidence for periods of belief inertia interspersed with belief adjustment. This is due to a combination of random belief adjustment; state-dependent belief adjustment, with many subjects requiring considerable evidence to change their beliefs; and quasi-Bayesian belief adjustment, with aggregate insufficient belief adjustment when a belief change does occur. Inattention, like complexity, makes subjects less likely to adjust their stated beliefs, while inattention additionally discourages full adjustment.
Experiments frequently use a random incentive system (RIS), where only tasks that are randomly selected at the end of the experiment are for real. The most common type pays every subject one out of her multiple tasks (within-subjects randomization). Recently, another type has become popular, where a subset of subjects is randomly selected, and only these subjects receive one real payment (between-subjects randomization). In earlier tests with simple, static tasks, RISs performed well. The present study investigates RISs in a more complex, dynamic choice experiment. We find that between-subjects randomization reduces risk aversion. While within-subjects randomization delivers unbiased measurements of risk aversion, it does not eliminate carry-over effects from previous tasks. Both types generate an increase in subjects’ error rates. These results suggest that caution is warranted when applying RISs to more complex and dynamic tasks.
The classical trinity of tests is used to check for the presence of a tremble in economic experiments in which the response variable is binary. A tremble is said to occur when an agent makes a decision completely at random, without regard to the values taken by the explanatory variables. The properties of the tests are discussed, and an extension of the methodology is used to test for the presence of a tremble in binary panel data from a well-known economic experiment.
We consider a dictator game experiment in which dictators perform a sequence of giving tasks and taking tasks. The data are used to estimate the parameters of a Stone–Geary utility function over own-payoff and other’s payoff. The econometric model incorporates zero observations (e.g. zero-giving or zero-taking) by applying the Kuhn–Tucker theorem and treating zeros as corner solutions in the dictator’s constrained optimisation problem. The method of maximum simulated likelihood (MSL) is used for estimation. We find that selfishness is significantly lower in taking tasks than in giving tasks, and we attribute this difference to the “cold prickle of taking”.
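The corner-solution logic can be sketched for the giving tasks (taking tasks extend the budget set and are omitted here). Assuming a log form of the Stone–Geary utility and unit prices — both illustrative choices, not the paper's exact specification — the Kuhn–Tucker interior solution is simply clipped to the feasible range, so zero-giving appears as a corner:

```python
def dictator_giving(m, beta, gamma_s=0.0, gamma_o=0.0):
    """Optimal amount passed to the other player under a Stone-Geary
    utility U = beta*ln(x_s - gamma_s) + (1 - beta)*ln(x_o - gamma_o),
    where the dictator splits endowment m as x_s + x_o = m.
    The interior solution is clipped to [0, m]; a clipped value of 0
    is the zero-giving corner treated via Kuhn-Tucker in the paper."""
    interior = gamma_o + (1 - beta) * (m - gamma_s - gamma_o)
    return min(max(interior, 0.0), m)
```

For example, an even split arises at beta = 0.5 with zero subsistence parameters, while a sufficiently selfish parameterization drives the unconstrained optimum negative and the observed choice to the zero-giving corner.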
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD and BPD patients primarily lie on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
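At its core, a polygenic risk score of the kind used above is a weighted sum over genotyped variants. The following is a minimal sketch (variant selection, weight shrinkage, and ancestry adjustment, all essential in practice, are omitted):

```python
def polygenic_risk_score(allele_counts, effect_sizes):
    """PRS for one individual: the sum of risk-allele counts (0/1/2 at
    each variant) weighted by per-allele effect sizes estimated in a
    discovery GWAS.  Illustrative only; real pipelines also prune or
    shrink the weights and adjust for ancestry."""
    return sum(g * w for g, w in zip(allele_counts, effect_sizes))
```

Scores computed this way in a target cohort can then be tested for their ability to separate diagnostic groups, as in the BPD-versus-MDD analyses described above.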
A new observable consequence of the property of invariant item ordering is presented, which holds under Mokken’s double monotonicity model for dichotomous data. The observable consequence is an invariant ordering of the item-total regressions. Kendall’s measure of concordance W and a weighted version of this measure are proposed as measures for this property. Karabatsos and Sheu proposed a Bayesian procedure (Appl. Psychol. Meas. 28:110–125, 2004), which can be used to determine whether the property of an invariant ordering of the item-total regressions should be rejected for a set of items. An example is presented to illustrate the application of the procedures to empirical data.
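Kendall's W measures the agreement of m "judges" (here, score groups) on the ordering of n items: W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the items' rank sums from their mean. A minimal sketch, without the tie correction or the weighted variant proposed in the paper:

```python
def kendalls_w(matrix):
    """Kendall's coefficient of concordance W for an m x n matrix:
    m 'judges' rating n items.  W = 1 means all judges order the
    items identically; W = 0 means no agreement.  Tie correction
    of the denominator is omitted for brevity."""
    m, n = len(matrix), len(matrix[0])

    def ranks(row):
        # average ranks (1-based), ties share the mean rank
        order = sorted(range(n), key=lambda j: row[j])
        r = [0.0] * n
        i = 0
        while i < n:
            j = i
            while j + 1 < n and row[order[j + 1]] == row[order[i]]:
                j += 1
            for k in range(i, j + 1):
                r[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return r

    ranked = [ranks(row) for row in matrix]
    R = [sum(row[j] for row in ranked) for j in range(n)]
    mean_R = m * (n + 1) / 2
    S = sum((Rj - mean_R) ** 2 for Rj in R)
    return 12 * S / (m ** 2 * (n ** 3 - n))
```

Under an invariant ordering of the item-total regressions, the rows (regressions evaluated at different total-score values) should rank the items identically, giving W close to 1.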
Correspondence analysis can be described as a technique which decomposes the departure from independence in a two-way contingency table. In this paper a form of correspondence analysis is proposed which decomposes the departure from the quasi-independence model. This form seems to be a good alternative to ordinary correspondence analysis in cases where the use of the latter is either impossible or not recommended, for example, in the case of missing data or structural zeros. It is shown that Nora's reconstitution of order zero, a procedure well-known in the French literature, is formally identical to our correspondence analysis of incomplete tables. Therefore, reconstitution of order zero can also be interpreted as providing a decomposition of the residuals from the quasi-independence model. Furthermore, correspondence analysis of incomplete tables can be performed using existing programs for ordinary correspondence analysis.
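The "departure from independence" that ordinary correspondence analysis decomposes can be made concrete: the squared singular values of the standardized residual matrix sum to the total inertia, chi-square divided by n. A minimal sketch of that baseline (the paper's contribution replaces the independence model here by the quasi-independence model, which this sketch does not cover):

```python
import numpy as np

def ca_inertia(table):
    """Total inertia of a two-way contingency table (chi-square / n)
    and its decomposition over CA dimensions.  The singular values of
    the standardized residual matrix S carry the departure from
    independence that correspondence analysis decomposes."""
    N = np.asarray(table, dtype=float)
    P = N / N.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    E = np.outer(r, c)                   # expected proportions under independence
    S = (P - E) / np.sqrt(E)             # standardized residuals
    sv = np.linalg.svd(S, compute_uv=False)
    return float((sv ** 2).sum()), sv ** 2
```

A table that already satisfies independence has zero inertia; a perfectly diagonal 2x2 table has inertia 1, all of it on the first dimension.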
Performance measures, conditionalized on the last error and other events, have been of central concern in the development of absorbing Markov-chain models for learning and problem solving. Expressions for response probabilities and expected latencies conditional on the occurrence of the last error and other events are derived using matrix methods.
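The matrix methods in question rest on the fundamental matrix of an absorbing chain: with Q the transient-to-transient transition block, N = (I − Q)⁻¹ gives expected visit counts, and its row sums give expected steps to absorption. A minimal unconditional sketch (the paper's contribution is the harder conditional case, e.g. conditioning on the last error, which this does not cover):

```python
import numpy as np

def expected_steps_to_absorption(Q):
    """Expected number of steps to absorption from each transient
    state of an absorbing Markov chain, via the fundamental matrix
    N = (I - Q)^{-1}; entry N[i, j] is the expected number of visits
    to transient state j starting from i."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    N = np.linalg.inv(np.eye(n) - Q)
    return N @ np.ones(n)
```

For a single transient state that persists with probability 0.5 per trial, the expected time to absorption is 1 / (1 − 0.5) = 2 trials.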
Randomized response (RR) is a well-known method for measuring sensitive behavior. Yet this method is not often applied because: (i) of its lower efficiency and the resulting need for larger sample sizes which make applications of RR costly; (ii) despite its privacy-protection mechanism the RR design may not be followed by every respondent; and (iii) the incorrect belief that RR yields estimates only of aggregate-level behavior but that these estimates cannot be linked to individual-level covariates. This paper addresses the efficiency problem by applying item randomized-response (IRR) models for the analysis of multivariate RR data. In these models, a person parameter is estimated based on multiple measures of a sensitive behavior under study which allow for more powerful analyses of individual differences than available from univariate RR data. Response behavior that does not follow the RR design is approached by introducing mixture components in the IRR models with one component consisting of respondents who answer truthfully and another component consisting of respondents who do not provide truthful responses. An analysis of data from two large-scale Dutch surveys conducted among recipients of invalidity insurance benefits shows that the willingness of a respondent to answer truthfully is related to the educational level of the respondents and the perceived clarity of the instructions. A person is more willing to comply when the expected benefits of noncompliance are minor and social control is strong.
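The aggregate-level logic of RR can be illustrated with the original Warner design, where the respondent answers the sensitive question with probability p and its negation otherwise, so P(yes) = p·π + (1 − p)(1 − π). This is a deliberately simple baseline, not the multivariate IRR model developed in the paper:

```python
def warner_estimate(yes_frac, p):
    """Moment estimator of the sensitive-trait prevalence pi under
    Warner's randomized-response design.  yes_frac is the observed
    fraction of 'yes' answers; p is the (known) probability that the
    randomizing device directs the respondent to the sensitive
    question rather than its negation.  Requires p != 0.5."""
    return (yes_frac - (1 - p)) / (2 * p - 1)
```

The division by (2p − 1) is the source of the efficiency loss cited above: the closer p is to 0.5 (strong privacy protection), the more the sampling noise in yes_frac is inflated, which is what motivates pooling multiple RR items in an IRR model.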
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores, such as the restscore, a single item score, and in some cases the total score. In this study, we show that manifest monotonicity can be tested by means of the order-constrained statistical inference framework. We propose a procedure that uses this framework to determine whether manifest monotonicity should be rejected for specific items. This approach provides a likelihood ratio test for which the p-value can be approximated through simulation. A simulation study is presented that evaluates the Type I error rate and power of the test, and the procedure is applied to empirical data.
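The descriptive core of a manifest monotonicity check is simple: group respondents by restscore, compute the item's proportion-correct per group, and look for adjacent decreases. The sketch below counts raw violations only; the paper's contribution is the order-constrained likelihood ratio test that judges whether such decreases exceed sampling noise, which this sketch does not implement:

```python
from collections import defaultdict

def manifest_monotonicity_violations(item_scores, rest_scores):
    """Proportion correct on one dichotomous item within each
    restscore group (sorted by restscore), plus the number of
    adjacent decreases, i.e. descriptive violations of manifest
    monotonicity."""
    groups = defaultdict(list)
    for x, r in zip(item_scores, rest_scores):
        groups[r].append(x)
    props = [sum(v) / len(v) for _, v in sorted(groups.items())]
    violations = sum(1 for a, b in zip(props, props[1:]) if b < a)
    return props, violations
```

A monotone pattern yields zero violations; any adjacent drop in the proportions would then be handed to the formal test described above.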
In van der Heijden and de Leeuw (1985) it was proposed to use loglinear analysis to detect interactions in a multiway contingency table, and to explore the form of these interactions with correspondence analysis. After performing the exploratory phase of the analysis, we will show here how the results found in this phase can be used for confirmation.
Loglinear analysis and correspondence analysis provide us with two different methods for the decomposition of contingency tables. In this paper we will show that there are cases in which these two techniques can be used to complement each other. More specifically, we will show that often correspondence analysis can be viewed as providing a decomposition of the difference between two matrices, each following a specific loglinear model. Therefore, in these cases the correspondence analysis solution can be interpreted in terms of the difference between these loglinear models. A generalization of correspondence analysis, recently proposed by Escofier, will also be discussed. With this decomposition, which includes classical correspondence analysis as a special case, it is possible to use correspondence analysis as a complement to loglinear analysis in more instances than those described for classical correspondence analysis. In this context correspondence analysis is used for the decomposition of the residuals of specific restricted loglinear models.
A dynamic factor model is proposed for the analysis of multivariate nonstationary time series in the time domain. The nonstationarity in the series is represented by a linear time-dependent mean function. This mild form of nonstationarity is often relevant when analyzing socio-economic time series encountered in practice. Through the use of an extended version of Molenaar's stationary dynamic factor analysis method, the effect of nonstationarity on the latent factor series is incorporated in the dynamic nonstationary factor model (DNFM). It is shown that the estimation of the unknown parameters in this model can be easily carried out by reformulating the DNFM as a covariance structure model and adopting the ML algorithm proposed by Jöreskog. Furthermore, an empirical example is given to demonstrate the usefulness of the proposed DNFM and the analysis.