The purpose of this study was to examine the impact of acoustic filtering and modality on speech-in-noise recognition for Spanish-English late bilinguals (who were exposed to English after their 5th birthday) and English monolinguals. All speech perception testing was conducted in English. Speech reception thresholds (SRTs) were estimated at 50% recognition accuracy in an open-set sentence recognition task in the presence of speech-shaped noise (SSN) in both low-pass and no-filter conditions. Consonant recognition was assessed in a closed-set identification task in SSN in four conditions: low-pass and no-filter stimuli presented in auditory-only (AO) and audiovisual (AV) modalities. Results indicated that monolinguals outperformed late bilinguals in all conditions. Late bilinguals and monolinguals were similarly impacted by acoustic filtering. Some data indicated that monolinguals may be more adept at integrating auditory and visual cues than late bilinguals. Theoretical and practical implications are discussed.
We present a multidimensional data analysis framework for the analysis of ordinal response variables. Underlying the ordinal variables, we assume a continuous latent variable, leading to cumulative logit models. The framework includes unsupervised methods, when no predictor variables are available, and supervised methods, when predictor variables are available. We distinguish between dominance variables and proximity variables, where dominance variables are analyzed using inner product models, whereas the proximity variables are analyzed using distance models. An expectation–majorization–minimization algorithm is derived for estimation of the parameters of the models. We illustrate our methodology with three empirical data sets highlighting the advantages of the proposed framework. A simulation study is conducted to evaluate the performance of the algorithm.
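The cumulative logit link assumed for the latent variable can be made concrete with a short sketch. This is an illustrative reconstruction, not the authors' implementation; the function name and example values are ours.

```python
import numpy as np

def cumulative_logit_probs(eta, thresholds):
    """Category probabilities for one observation under a cumulative logit
    (proportional odds) model: P(Y <= c) = logistic(thresholds[c] - eta),
    where eta is the value of the continuous latent variable.
    `thresholds` must be strictly increasing."""
    thresholds = np.asarray(thresholds, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-(thresholds - eta)))  # P(Y <= c) at each cutpoint
    cum = np.concatenate(([0.0], cum, [1.0]))        # boundary probabilities
    return np.diff(cum)                              # P(Y = c) per category

probs = cumulative_logit_probs(eta=0.3, thresholds=[-1.0, 0.5, 2.0])
```

With C − 1 = 3 cutpoints this yields probabilities for C = 4 ordered categories that sum to one; a supervised variant would let eta depend on predictor variables.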
The EUnetHTA Core Model® is well-established in the HTA community. Some recommendations of corresponding guidance documents leave room for alternative methodological choices. Considering the new HTA regulation (HTAR), we aimed to identify needs for concretization (NCs) in EUnetHTA guidance and provide indicative methodological options.
Methods
We carried out a qualitative document analysis and a structured group discussion. Twenty-two EUnetHTA documents were screened using transparent criteria. Identified NCs were classified into topics according to the PRISMA statement and presented to Austrian HTA practitioners (n = 11) during a structured group discussion. Participants rated the importance of each NC. To identify potential solutions, selected key handbooks for generic (Cochrane) and HTA-specific (IQWIG/NICE) evidence synthesis were systematically reviewed, and matching content was charted against the NCs.
Results
Thirty-two topics with varying numbers of NCs were identified, twenty-six during the screening process, and six from the group discussion. Most of the topics related to evidence synthesis methods (nine topics), evidence eligibility criteria (nine topics), risk of bias (three topics), and certainty assessment (three topics). Other topics related to information sources, search strategy, data collection process, data items, effect measures, and reporting bias. One or more methodological approaches and recommendations could be identified for each identified topic from the included methodological handbooks.
Conclusions
Our analysis identified a need for concretization in some EUnetHTA guidelines. The structured overview of methodological options may support HTA practitioners in adapting and applying the guidelines to the national and local practical context.
Sometime after 1992, I first learned that the High Court of Australia had discovered that the Australian Constitution contained something that sounded very much like a freedom of speech guarantee. And the reasoning that supported that discovery sounded like the philosophy of Alexander Meiklejohn which I had been teaching in a seminar on Free Speech for several years.
Although Meiklejohn was talking about the United States Constitution, he was not emphasising the words of the First Amendment thereto. Drawing upon pre-Bill of Rights commitments recorded in various historical documents, Meiklejohn's view was that the framers of the United States Constitution had made a covenant with each other to build a democracy in which the people were both the governors and the governed. Freedom of speech, according to Meiklejohn, was necessary to make a democracy, and that was all that freedom of speech was designed to do.
In this paper, we reconsider the merits of unfolding solutions based on loss functions involving a normalization on the variance per subject. In the literature, solutions based on Stress-2 are often diagnosed to be degenerate in the majority of cases. Here, the focus lies on two frequently occurring types of degeneracies. The first type typically locates some subject points far away from a compact cluster of the other points. In the second type of solution, the object points lie on a circle. In this paper, we argue that these degenerate solutions are well fitting and informative. To reveal the information, we introduce mixtures of plots based on the ideal point model of unfolding, the vector model, and on the signed distance model. In addition to a different representation, we provide a new iterative majorization algorithm to optimize the average squared correlation between the distances in the configuration and the transformed data per individual. It is shown that this approach is equivalent to minimizing Kruskal’s Stress-2.
Multidimensional unfolding methods suffer from the degeneracy problem in almost all circumstances. Most degeneracies are easily recognized: the solutions are perfect but trivial, characterized by approximately equal distances between points from different sets. A definition of an absolutely degenerate solution is proposed, which makes clear that these solutions only occur when an intercept is present in the transformation function. Many solutions for the degeneracy problem have been proposed and tested, but with little success so far. In this paper, we offer a substantial modification of an approach initiated by Kruskal and Carroll, who introduced a normalization factor based on the variance in the usual least squares loss function. Heiser (1981, unpublished thesis) showed that the normalization factor proposed by Kruskal and Carroll was not strong enough to avoid degeneracies. The factor proposed in the present paper, based on the coefficient of variation, discourages or penalizes nonmetric transformations of the proximities with small variation, so that the procedure steers away from solutions with small variation in the interpoint distances. An algorithm is described for minimizing the adjusted loss function, based on iterative majorization. The results of a simulation study are discussed, in which the optimal range of the penalty parameters is determined. Two empirical data sets are analyzed by our method, clearly showing the benefits of the proposed loss function.
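Kruskal's Stress-2 and the coefficient of variation that drives the proposed penalty can be written down compactly. The sketch below uses our own names and conventions and is not the authors' algorithm; it only shows the two quantities the discussion turns on.

```python
import numpy as np

def stress2(disparities, distances):
    """Kruskal's Stress-2 for one subject: residual sum of squares between
    transformed proximities (disparities) and configuration distances,
    normalized by the variance-type term sum((d - mean(d))^2)."""
    dh = np.asarray(disparities, float)
    d = np.asarray(distances, float)
    return np.sqrt(np.sum((dh - d) ** 2) / np.sum((d - d.mean()) ** 2))

def coefficient_of_variation(values):
    """std/mean of the transformed proximities; the penalty discussed above
    discourages transformations for which this quantity is small."""
    v = np.asarray(values, float)
    return v.std() / v.mean()
```

A degenerate solution with near-equal interpoint distances makes both the Stress-2 normalizer and the coefficient of variation small, which is why penalizing small variation steers the procedure away from such configurations.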
Non-specialist mental health interventions serve as a potential solution to reduce the mental healthcare gap in low- and middle-income countries, such as Sri Lanka. However, contextual factors often influence their effective implementation, reflecting a research-to-practice gap. This study, using a qualitative, participatory approach with local mental health workers (n = 9) and potential service users (n = 11), identifies anticipated barriers and facilitators to implementing these interventions while also exploring alternative strategies for reducing the mental healthcare gap in this context. Perceived barriers include concerns about effectiveness, acceptance and feasibility in the implementation of non-specialist mental health interventions (theme 1). The participants’ overall perception that these interventions are a beneficial strategy for reducing the mental healthcare gap was identified as a facilitating factor for implementation (theme 2). Further facilitators relate to important non-specialist characteristics (theme 3), including desirable traits and occupational backgrounds that may aid in increasing the acceptance of this cadre. Other suggestions relate to facilitating the reach, intervention acceptance and feasibility (theme 4). This study offers valuable insights to enhance the implementation process of non-specialist mental health interventions in low- and middle-income countries such as Sri Lanka.
The influence of symmetry-breaking effects of ridge-type roughness on secondary currents in turbulent channel flow is investigated using direct numerical simulations. The ridges have triangular cross-section, which is systematically varied from isosceles to right-angled triangle, introducing an imbalance to the slopes of the ridges’ lateral surfaces while the streamwise homogeneity of the surfaces is maintained. In all cases, secondary current vortices are produced, but asymmetric ridge cross-sections break the symmetry of these vortices. As a result of the asymmetry-induced misalignment and imbalance in the secondary current vortices, net spanwise flow emerges. The magnitude of the spanwise flow increases with the slope ratio of the ridge lateral surfaces and significantly modifies the mean flow topology, leading to the merging of critical points in the case of the right-angled triangular ridge shape. Within the cavities, the net spanwise flow is accompanied by a non-zero mean spanwise pressure gradient, while from the perspective of the outer flow, the scalene ridge surfaces have a similar effect as a wall that is slowly moving in the spanwise direction. Overall, the present results suggest the existence of a special type of Prandtl's secondary currents of the second kind, namely those that result in net spanwise flow.
Background: We performed a network meta-analysis of randomized controlled trials to assess the comparative effectiveness of available pharmacological prophylaxis for migraines. Methods: We searched MEDLINE, EMBASE, Web of Science, Scopus, PsycINFO and Cochrane CENTRAL up to October 2023 for trials that: (1) enrolled adults diagnosed with chronic migraine, and (2) randomized them to any prophylactic medication vs. another medication or placebo. We performed a random-effects frequentist network meta-analysis for patient-important outcomes. Results: We included 193 randomized trials. Compared to placebo, CGRP monoclonal antibodies (mean difference [MD] -1.7, 95% CI: -1.1 to -2.2), injection of botulinum toxin (MD -1.8, 95% CI: -0.7 to -2.9), calcium channel blockers (MD -1.8, 95% CI: -0.5 to -3.0), beta-blockers (MD -1.4, 95% CI: -0.2 to -2.6), and anticonvulsants (MD -1.1, 95% CI: -0.4 to -1.8) were among the most effective treatments in reducing the average number of headache days per month. Anticonvulsants (risk ratio [RR] 2.3, 95% CI: 1.8 to 3.0), calcium channel blockers (RR 1.8, 95% CI: 1.1 to 3.1), and tricyclic antidepressants (RR 2.3, 95% CI: 1.3 to 3.8) showed the highest risk of discontinuation due to adverse events. Conclusions: Our findings suggest that CGRP inhibitors, botulinum toxin, and beta-blockers may provide the greatest benefit and tolerability for reducing the frequency of migraine headaches.
The betweenness centrality of a graph vertex measures how often this vertex is visited on shortest paths between other vertices of the graph. In the analysis of many real-world graphs or networks, the betweenness centrality of a vertex is used as an indicator for its relative importance in the network. In particular, it is among the most popular tools in social network analysis. In recent years, a growing number of real-world networks have been modeled as temporal graphs instead of conventional (static) graphs. In a temporal graph, we have a fixed set of vertices and there is a finite discrete set of time steps, and every edge might be present only at some time steps. While shortest paths are straightforward to define in static graphs, temporal paths can be considered “optimal” with respect to many different criteria, including length, arrival time, and overall travel time (shortest, foremost, and fastest paths). This leads to different concepts of temporal betweenness centrality, posing new challenges on the algorithmic side. We provide a systematic study of temporal betweenness variants based on various concepts of optimal temporal paths.
Computing the betweenness centrality for vertices in a graph is closely related to counting the number of optimal paths between vertex pairs. While in static graphs computing the number of shortest paths is easily doable in polynomial time, we show that counting foremost and fastest paths is computationally intractable (#P-hard), and hence, the computation of the corresponding temporal betweenness values is intractable as well. For shortest paths and two selected special cases of foremost paths, we devise polynomial-time algorithms for temporal betweenness computation. Moreover, we also explore the distinction between strict (ascending time labels) and non-strict (non-descending time labels) time labels in temporal paths. In our experiments with established real-world temporal networks, we demonstrate the practical effectiveness of our algorithms, compare the various betweenness concepts, and derive recommendations on their practical use.
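While counting foremost paths is #P-hard as noted above, computing the foremost arrival time itself is straightforward. The sketch below is a generic illustration, not the paper's algorithm: it computes earliest-arrival times with a single pass over the time-ordered edge list, under non-strict (non-descending) label semantics, assuming undirected edges, a source available from time 0, and distinct time stamps (equal-time edge chains would need per-timestep handling).

```python
from math import inf

def foremost_arrival(edges, source, n):
    """Earliest ('foremost') arrival times from `source` to all n vertices of an
    undirected temporal graph given as (u, v, t) triples, where {u, v} is
    present at time step t. Non-strict semantics; assumes distinct time stamps."""
    arrival = [inf] * n
    arrival[source] = 0
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        # we may traverse {u, v} at time t if we reached one endpoint by time t
        if arrival[u] <= t and t < arrival[v]:
            arrival[v] = t
        if arrival[v] <= t and t < arrival[u]:
            arrival[u] = t
    return arrival

# vertex 2 is reached at time 2 via 0 -> 1 -> 2, earlier than the direct edge at t = 5
times = foremost_arrival([(0, 1, 1), (1, 2, 2), (0, 2, 5)], source=0, n=3)
```

Strict semantics would replace `arrival[u] <= t` with `arrival[u] < t`; the harder counting problems concern the *number* of such optimal paths, which betweenness computation requires.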
OBJECTIVES/GOALS: Increases in anxiety and depression during adolescence may be related to increased biological reactivity to negative social feedback (i.e., social threat sensitivity). Our goal was to identify biomarkers of social threat sensitivity, which may provide unique etiological insight to inform early detection and intervention efforts. METHODS/STUDY POPULATION: Adolescents aged 12-14 (N = 84; 55% female; 80% White; 69% annual family income <$70,000) were recruited. Youth viewed a series of happy, neutral, and angry faces while eye-tracking and electroencephalogram (EEG) data were recorded to capture cognitive and neural markers of sensitivity to social threat (i.e., an angry face). Fixation time and time to disengage from angry faces were derived from eye-tracking, and event-related potentials were derived from EEG, which index rapid attention capture (P1), attention selection and discrimination (N170), and cognitive control (N2). Adolescents also completed a social stress task and provided salivary cortisol samples to assess endocrine reactivity. Social anxiety and depressive symptoms were self-reported concurrently and one year later. RESULTS/ANTICIPATED RESULTS: Latency to disengage from threatening faces was associated with lower N2 amplitudes (indexing poor cognitive control; r = -.24, p = .03) and higher concurrent social anxiety (r = .28, p = .01). Higher N170 amplitudes, reflecting attentional selection and discrimination in favor of threatening faces, predicted increases in depressive symptoms one year later (b = .88, p = .02). No other neurophysiological measures were associated with each other or with concurrent or prospective symptomatology. DISCUSSION/SIGNIFICANCE: Eye-tracking and EEG measures indexing difficulty disengaging from social threat and poor cognitive control may be biomarkers of social anxiety, which could be utilized as novel intervention targets.
High N170 amplitudes to social threat, derived from EEG, may have clinical utility as a susceptibility/risk biomarker for depressive symptoms.
n-3 fatty acid consumption during pregnancy is recommended for optimal pregnancy outcomes and offspring health. We examined characteristics associated with self-reported fish or n-3 supplement intake.
Design:
Pooled pregnancy cohort studies.
Setting:
Cohorts participating in the Environmental influences on Child Health Outcomes (ECHO) consortium with births from 1999 to 2020.
Participants:
A total of 10 800 pregnant women in twenty-three cohorts with food frequency data on fish consumption; 12 646 from thirty-five cohorts with information on supplement use.
Results:
Overall, 24·6 % reported consuming fish never or less than once per month, 40·1 % less than once a week, 22·1 % 1–2 times per week and 13·2 % more than twice per week. The relative risk (RR) of ever (v. never) consuming fish was higher in participants who were older (1·14, 95 % CI 1·10, 1·18 for 35–40 v. <29 years), were other than non-Hispanic White (1·13, 95 % CI 1·08, 1·18 for non-Hispanic Black; 1·05, 95 % CI 1·01, 1·10 for non-Hispanic Asian; 1·06, 95 % CI 1·02, 1·10 for Hispanic) or used tobacco (1·04, 95 % CI 1·01, 1·08). The RR was lower in those with overweight v. healthy weight (0·97, 95 % CI 0·95, 1·0). Only 16·2 % reported n-3 supplement use, which was more common among individuals with a higher age and education, a lower BMI, and fish consumption (RR 1·5, 95 % CI 1·23, 1·82 for twice-weekly v. never).
Conclusions:
One-quarter of participants in this large nationwide dataset rarely or never consumed fish during pregnancy, and n-3 supplement use was uncommon, even among those who did not consume fish.
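The relative risks reported above follow the standard risk-ratio definition; a minimal sketch (our own helper, not the study's code):

```python
def relative_risk(events_a, total_a, events_b, total_b):
    """Risk ratio: probability of the outcome in group A divided by the
    probability of the outcome in reference group B."""
    return (events_a / total_a) / (events_b / total_b)

# e.g., outcome in 60 of 100 exposed vs. 40 of 100 reference participants
rr = relative_risk(60, 100, 40, 100)
```

An RR above 1 (e.g., 1·14 for age 35–40 v. <29 years) indicates the characteristic is associated with a higher probability of ever consuming fish.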
Delirium is a potential emergency with serious consequences. Little attention has been paid to residents of nursing homes, although they are at extreme risk of developing delirium. Health care professionals (HCPs) such as nurses and general practitioners are assumed to have limited knowledge about delirium in nursing homes.
Objectives:
The German project DeliA (delirium in nursing homes) comprises three sub-studies and two reviews. The sub-studies have the following objectives: (1) to determine the prevalence of delirium and its sub-types in German nursing homes; (2) to describe and assess the quality of delirium care practices (prevention, diagnosis, therapy) of HCPs in nursing homes; and (3) to develop a Technology Enhanced Learning (TEL) intervention to increase the delirium-specific knowledge of HCPs in nursing homes. The reviews aim to (a) summarize the prevalence of delirium reported in international studies and (b) find out how, why and in what context education for HCPs in nursing homes works.
Methods:
A systematic review of the reported prevalence of delirium in nursing homes will be conducted (a). The prevalence study (1) will assess delirium and its proposed associated factors in at least 50 nursing homes using validated measurements. Medication schedules of participating residents will be analyzed to determine their potential to induce delirium. To describe current practice, process-oriented semi-structured guided interviews will be conducted with 30 representatives of the (nursing home) medical service and the nursing service of nursing homes (2). As a theoretical basis for the TEL, a realist review will be conducted to understand the active ingredients of educational interventions and to develop an initial program theory (b). The curriculum for the proposed TEL will be developed based on a synthesis of existing curricula and evaluated by experts in a Delphi process for relevance, comprehensiveness, and content. A final feasibility study will assess the potential increase in knowledge about delirium among HCPs (n = 50) in nursing homes (3).
Expected Results:
It is expected that the project and the dissemination of its findings will raise awareness among HCPs and the public about delirium in nursing homes. The developed TEL and its underlying program theory will be further tested.
Financial risk protection from high costs for care is a main goal of health systems. Health system characteristics typically associated with universal health coverage and financial risk protection, such as financial redistribution between insureds, are inherent to, e.g. social health insurance (SHI) but missing in private health insurance (PHI). This study provides evidence on financial protection in PHI for the case of Germany's dual insurance system of PHI and SHI, where PHI covers 11% of the population. Linked survey and claims data of PHI insureds (n = 3105) and population-wide household budget data (n = 42,226) are used to compute the prevalence of catastrophic health expenditures (CHE), i.e. the share of households whose out-of-pocket payments either exceed 40% of their capacity-to-pay or push them (further) into poverty. Despite comparatively high out-of-pocket payments, CHE is low in German PHI. It only affects the poor. Key to low financial burden seems to be the restriction of PHI to a small, overall wealthy group. Protection for the worse-off is provided through special mandatorily offered tariffs. In sum, Germany's dual health insurance system provides close-to-universal coverage. Future studies should further investigate the effect of premiums on financial burden, especially when linked to utilisation.
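The catastrophic-expenditure definition used above can be made concrete. The sketch below encodes the 40% capacity-to-pay rule and a simple impoverishment check; the variable names and the exact poverty comparison are our choices, and operationalizations vary across studies.

```python
def is_catastrophic(oop, capacity_to_pay, total_spending, poverty_line,
                    threshold=0.40):
    """A household has catastrophic health expenditure (CHE) if out-of-pocket
    payments exceed `threshold` of its capacity-to-pay, or if subtracting
    them pushes remaining spending below the poverty line."""
    exceeds_capacity = oop > threshold * capacity_to_pay
    impoverishing = (total_spending - oop) < poverty_line
    return exceeds_capacity or impoverishing

# illustrative household: 500 in out-of-pocket payments against 1000 capacity-to-pay
flag = is_catastrophic(oop=500, capacity_to_pay=1000,
                       total_spending=2500, poverty_line=800)
```

The CHE prevalence reported in the study is then simply the share of households for which such a flag is true.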
Status hierarchies are ubiquitous across cultures and have been over deep time. Position in hierarchies shows important links with fitness outcomes. Consequently, humans should possess psychological adaptations for navigating the adaptive challenges posed by living in hierarchically organised groups. One hypothesised adaptation functions to assess, track, and store the status impacts of different acts, characteristics and events in order to guide hierarchy navigation. Although this status-impact assessment system is expected to be universal, there are several ways in which differences in assessment accuracy could arise. This variation may link to broader individual difference constructs. In a preregistered study with samples from India (N = 815) and the USA (N = 822), we sought to examine how individual differences in the accuracy of status-impact assessments covary with status motivations and personality. In both countries, greater overall status-impact assessment accuracy was associated with higher status motivations, as well as higher standing on two broad personality constructs: Honesty–Humility and Conscientiousness. These findings help map broad personality constructs onto variation in the functioning of specific cognitive mechanisms and contribute to an evolutionary understanding of individual differences.
The starting point for many analyses of European state development is the historical fragmentation of territorial authority. The dominant bellicist explanation for state formation argues that this fragmentation was an unintended consequence of imperial collapse, and that warfare in the early modern era overcame fragmentation by winnowing out small polities and consolidating strong states. Using new data on papal conflict and religious institutions, I show instead that political fragmentation was the outcome of deliberate choices, that it is closely associated with papal conflict, and that political fragmentation persisted for longer than the bellicist explanations would predict. The medieval Catholic Church deliberately and effectively splintered political power in Europe by forming temporal alliances, funding proxy wars, launching crusades, and advancing ideology to ensure its autonomy and power. The roots of European state formation are thus more religious, older, and intentional than often assumed.
This study aims to (i) describe the (evidence-based) reimbursement process of hospital individual services, (ii) evaluate the accordance between evidence-based recommendations and reimbursement decision of individual services and (iii) elaborate potential aspects that play a role in the decision-making process in Austria.
Methods
The reimbursement process is described based on selected relevant sources such as official documents. Evidence-based recommendations and subsequent reimbursement decisions for the annual maintenance of the hospital individual service catalogue in Austria between 2008 and 2020 were analyzed using a mixed methods approach, encompassing descriptive statistics and a focus group with Austrian decision-makers.
Results
One hundred and eighteen evidence-based recommendations were analyzed. There were 93 (78.8%) negative and 25 (21.2%) positive evidence-based recommendations. In total, 107 out of 118 evidence-based recommendations (90.1%) did not lead to a deviating reimbursement decision. We identified six aspects that may have played a role in the decision-making process for the annual maintenance of the hospital individual service catalogue, with clinical evidence being the most notable. Further aspects included quality assurance/organizational aspects (i.e., structural quality assurance), costs (if comparable to already existing medical services, not: cost-effectiveness), procedural aspects (e.g., if certain criteria for adoption have not been met formally through the proposals), “other countries” (i.e., taking into account how other countries decided) and situational aspects (such as the COVID-19 pandemic).
Conclusions
There is good accordance between evidence-based recommendations and reimbursement decisions regarding hospital individual services in Austria. Beyond clinical evidence, organizational aspects seem to be considered often with regard to quality assurance but costs do not appear to play a major role. The Austrian system has mechanisms in place that can restrict widespread adoption of novel hospital individual services with uncertain clinical benefits. Future studies could investigate how well these mechanisms work and how they compare to other health systems in Europe.
We document statistically significant relations between mutual fund betas and past market returns driven by fund feedback trading. Against this backdrop, evidence of “artificial” market timing emerges when standard market timing regressions are estimated across periods that span time variation in fund systematic risk levels, as is typical. Artificial timing significantly explains the inverse relation between timing model estimates of market timing and stock selectivity. A fund’s feedback trading relates to its past performance and remains significant after accounting for trading on momentum. Fund flows suggest that investors value feedback trading, which helps hedge downside risk during bear markets.
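For readers unfamiliar with "standard market timing regressions", one widely used form is the Treynor–Mazuy specification, sketched below on simulated inputs. The abstract does not state the exact model used, so treat this as a generic illustration with names of our choosing.

```python
import numpy as np

def treynor_mazuy(fund_excess, market_excess):
    """Fit r_fund = alpha + beta * r_mkt + gamma * r_mkt**2 by least squares.
    A positive gamma is conventionally read as market-timing ability."""
    m = np.asarray(market_excess, float)
    X = np.column_stack([np.ones_like(m), m, m ** 2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(fund_excess, float), rcond=None)
    return tuple(coef)  # (alpha, beta, gamma)

m = np.linspace(-0.10, 0.10, 60)       # simulated market excess returns
r = 0.002 + 1.1 * m + 0.5 * m ** 2     # noise-free fund returns for the demo
alpha, beta, gamma = treynor_mazuy(r, m)
```

The artificial-timing point above is that if a fund's beta itself varies with past market returns, such regressions can produce spurious gamma estimates even absent genuine timing skill.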
Whereas streamwise effective slope ($ES_{x}$) is accepted as a key topographical parameter in the context of rough-wall turbulent flows, the significance of its spanwise counterpart ($ES_{y}$) remains largely unexplored. Here, the response of turbulent channel flow over irregular, three-dimensional rough walls with systematically varied values of $ES_{y}$ is studied using direct numerical simulation. All simulations were performed at a fixed friction Reynolds number 395, corresponding to a viscous-scaled roughness height $k^{+}\approx 65.8$ (where $k$ is the mean peak-to-valley height). A surface generation algorithm is used to synthesise a set of ten irregular surfaces with specified $ES_{y}$ for three different values of $ES_{x}$. All surfaces share a common mean peak-to-valley height and are near-Gaussian, which allows this study to focus on the impact of varying $ES_{y}$, since roughness amplitude, skewness and $ES_{x}$ can be eliminated simultaneously as parameters. Based on an analysis of first- and second-order velocity statistics, as well as turbulence co-spectra and the fractional contribution of pressure and viscous drag, the study shows that $ES_{y}$ can strongly affect the roughness drag penalty – particularly for low-$ES_{x}$ surfaces. A secondary observation is that particular low-$ES_{y}$ surfaces in this study can lead to diminished levels of outer-layer similarity in both mean flow and turbulence statistics, which is attributed to insufficient scale separation between the outer length scale and the in-plane spanwise roughness wavelength.
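The effective slope parameters $ES_{x}$ and $ES_{y}$ referred to above are the mean absolute surface gradients in each direction; a minimal numerical sketch (our own helper, assuming a uniformly sampled height map):

```python
import numpy as np

def effective_slope(height, dx, axis=0):
    """Effective slope of a roughness height map along `axis`:
    ES = mean(|dh/dx|), the mean absolute gradient in that direction.
    By our convention, axis=0 gives ES_x and axis=1 gives ES_y."""
    dhdx = np.gradient(np.asarray(height, float), dx, axis=axis)
    return float(np.mean(np.abs(dhdx)))

# a plane tilted only in x has ES_x equal to its slope and ES_y = 0
x = np.linspace(0.0, 1.0, 101)
h = np.tile(0.3 * x, (50, 1)).T   # rows vary in x; constant along y
es_x = effective_slope(h, x[1] - x[0], axis=0)
```

Holding amplitude, skewness and $ES_{x}$ fixed while sweeping $ES_{y}$, as done in the study, isolates the spanwise gradient statistics as the varied parameter.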