In 2023 the Supreme Court of Mauritius cited human rights and public health arguments to strike down a colonial-era law criminalizing consensual same-sex sex. The parliament of Singapore recently did the same through legislative means. Are these aberrations or a shifting global consensus? This article documents a remarkable international legal shift regarding LGBTQ+ sexuality. Analysis of laws from 194 countries across multiple years demonstrates a clear, ongoing trend toward decriminalization globally. Whereas most countries criminalized same-sex sexuality in the 1980s, two-thirds of countries now do not criminalize it under law. Additionally, 28 criminalizing countries in 2024 demonstrate a de facto policy of non-enforcement, a milestone towards legal change that all of the countries that have fully decriminalized since 2017 have taken. This has important public health effects, with health law lessons for an era of multiple pandemics. But amidst this trend, the reverse is occurring in some countries, with a counter-trend toward deeper, harsher criminalization of LGBTQ+ sexuality. Case studies of Angola, Singapore, India, Botswana, Mauritius, Cook Islands, Gabon, and Antigua and Barbuda show many politically and legally viable pathways to decriminalization and highlight actors in the executive, legislative, and judicial arenas of government and civil society engaged in legal change.
England's primary care service for psychological therapy (Improving Access to Psychological Therapies [IAPT]) treats anxiety and depression, with a target recovery rate of 50%. Identifying the characteristics of patients who achieve recovery may assist in optimizing future treatment. This naturalistic cohort study investigated pre-therapy characteristics as predictors of recovery and improvement after IAPT therapy.
Methods
In a cohort of patients attending an IAPT service in South London, we recruited 263 participants and conducted a baseline interview to gather extensive pre-therapy characteristics. Bayesian prediction models and variable selection were used to identify baseline variables prognostic of good clinical outcomes. Recovery (primary outcome) was defined using IAPT service-defined score thresholds for both depression (Patient Health Questionnaire [PHQ-9]) and anxiety (Generalized Anxiety Disorder [GAD-7]). Depression and anxiety outcomes were also evaluated as standalone (PHQ-9/GAD-7) scores after therapy. Prediction model performance metrics were estimated using cross-validation.
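The threshold-based recovery definition described above can be sketched as a small function. The cut-offs used here (PHQ-9 ≥ 10, GAD-7 ≥ 8) are the commonly cited IAPT caseness thresholds and are an assumption for illustration, not necessarily the exact values used by this service:

```python
def is_case(phq9: int, gad7: int, phq9_cut: int = 10, gad7_cut: int = 8) -> bool:
    """A patient is 'at caseness' if either score meets its clinical threshold.

    The cut-offs (PHQ-9 >= 10, GAD-7 >= 8) are assumed, commonly cited IAPT
    values, not confirmed from the study itself.
    """
    return phq9 >= phq9_cut or gad7 >= gad7_cut


def recovered(pre: tuple, post: tuple) -> bool:
    """IAPT-style recovery: at caseness before therapy, below caseness after."""
    return is_case(*pre) and not is_case(*post)


# Example: moderate depression pre-therapy, subclinical scores post-therapy.
print(recovered((14, 9), (6, 4)))   # True
print(recovered((5, 4), (4, 3)))    # False: never at caseness, cannot 'recover'
```

Note that a patient who starts below both thresholds cannot count as "recovered" under this definition, which is why recovery rates are typically reported only for patients at caseness at baseline.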
Results
Predictor variables explained 26% (recovery), 37% (depression), and 31% (anxiety) of the variance in outcomes, respectively. Variables prognostic of recovery were lower pre-treatment depression severity and not meeting criteria for obsessive compulsive disorder. Post-therapy depression and anxiety severity scores were predicted by lower symptom severity and higher ratings of health-related quality of life (EuroQol questionnaire [EQ5D]) at baseline.
Conclusion
Almost a third of the variance in clinical outcomes was explained by pre-treatment symptom severity scores. These constructs benefit from being rapidly accessible in healthcare services. If replicated in external samples, the early identification of patients who are less likely to recover may facilitate earlier triage to alternative interventions.
During the coronavirus disease 2019 pandemic, mathematical modeling has been widely used to understand epidemiological burden, trends, and transmission dynamics, to facilitate policy decisions, and, to a lesser extent, to evaluate infection prevention and control (IPC) measures. This review highlights the added value of combining conventional epidemiology and modeling approaches to address the complexity of healthcare-associated infections (HAI) and antimicrobial resistance. It demonstrates how epidemiological surveillance data and modeling can be used to infer transmission dynamics in healthcare settings and to forecast healthcare impact; how modeling can improve the validity of interpretation of epidemiological surveillance data; how modeling can estimate the impact of IPC interventions; and how modeling can guide IPC and antimicrobial treatment and stewardship decision-making. There are several priority areas for expanding the use of modeling in healthcare epidemiology and IPC. Importantly, modeling should be viewed as complementary to conventional healthcare epidemiological approaches, and this requires collaboration and active coordination between IPC, healthcare epidemiology, and mathematical modeling groups.
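To make the idea of inferring transmission dynamics in healthcare settings concrete, the following is a deliberately minimal deterministic model of pathogen spread on a single ward, with admissions, discharges, and cross-transmission. All parameter values (transmission rate, discharge rate, fraction of admissions already colonized) are illustrative assumptions, not estimates from any study:

```python
# Minimal deterministic ward-transmission sketch: S = susceptible patients,
# C = colonized patients. Each day, colonized patients transmit at rate beta,
# a fraction of patients is discharged, and the ward is refilled with new
# admissions, some of whom arrive already colonized ("importation").
def simulate_ward(days=100, beds=20, beta=0.08, discharge=0.1, import_frac=0.05):
    S, C = beds - 1.0, 1.0          # start with one colonized patient
    history = []
    for _ in range(days):
        new_col = beta * S * C / beds       # cross-transmission this day
        out_S, out_C = discharge * S, discharge * C
        admitted = out_S + out_C            # ward kept at full occupancy
        S += -new_col - out_S + admitted * (1 - import_frac)
        C += new_col - out_C + admitted * import_frac
        history.append(C)
    return history


prevalence = simulate_ward()
print(round(prevalence[-1], 2))  # colonized count after 100 days
```

Even this toy model shows a qualitative point the review emphasizes: ward prevalence reflects both within-ward transmission and importation pressure, so surveillance counts alone cannot separate the two without a model.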
How was trust created and reinforced between the inhabitants of medieval and early modern cities? And how did the social foundations of trusting relationships change over time? Current research highlights the role of kinship, neighbourhood, and associations, particularly guilds, in creating ‘relationships of trust’ and social capital in the face of high levels of migration, mortality, and economic volatility, but tells us little about their relative importance or how they developed. We uncover a profound shift in the contribution of family and guilds to trust networks among the middling and elite of one of Europe's major cities, London, over three centuries, from the 1330s to the 1680s. We examine almost 15,000 networks of sureties created to secure orphans’ inheritances to measure the presence of trusting relationships connected by guild membership, family, and place. We find a profound increase in the role of kinship – a re-embedding of trust within the family – and a decline in the importance of shared guild membership in connecting Londoners who secured orphans’ inheritances together. These developments indicate a profound transformation in the social fabric of urban society.
OBJECTIVES/GOALS: Clinical indicators predictive of venous thromboembolism (VTE) in trauma patients at multiple time points are not well outlined, particularly at time of discharge. We aimed to describe and predict inpatient and post-discharge risk factors of VTE after trauma using a multivariable regression model and best-in-class machine learning (ML) models. METHODS/STUDY POPULATION: In a prospective, case-cohort study, all trauma patients (pts) who arrived as level 1 or 2 trauma activations from June 2018 to February 2020 were considered for study inclusion. A subset of pts who developed incident, first-time VTE and those who did not develop VTE within 90 days of discharge were identified. VTE were confirmed either by imaging or at autopsy during inpatient stay or post-discharge. Outcomes were defined as the development of symptomatic VTE (DVT and/or PE) within 90 days of discharge. A multivariable Cox regression model and the best of a set of 5 different ML models (support-vector machine, random forest, naive Bayes, logistic regression, neural network) were used to predict VTE, with models applied a) at 24 hours of injury date or b) on day of patient discharge. RESULTS/ANTICIPATED RESULTS: Among 393 trauma pts (ISS=12.0, hospital LOS=4.0 days, age=48 years, 71% male, 96% with blunt mechanism, mortality 2.8%), 36 developed inpatient VTE and 36 developed VTE after discharge. In a weighted, multivariable Cox model, any type of surgery by day 1, increased age per 10 years, and BMI per 5 points were predictors of overall symptomatic VTE (C-stat 0.738). Prophylactic IVC filter placement (4.40), increased patient age per 10 years, and BMI per 5 points were predictors of post-discharge symptomatic VTE (C-stat 0.698). A neural network ML model predicted VTE by day 1 with accuracy and AUC of 0.82 and 0.76, with performance exceeding that of a Cox model.
A naive Bayes ML model predicted VTE at discharge with accuracy and AUC of 0.81 and 0.77, with performance exceeding that of a Cox model. DISCUSSION/SIGNIFICANCE: The rate of inpatient and post-discharge VTE remains high. Limitations: single-institution study, limited number of patients, internal validation only, and use of a limited number of ML models. We developed and internally validated an ML-based tool. Future work will focus on external validation and expansion of ML techniques.
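The "best in class" selection described above can be sketched as fitting several classifier families on the same data and ranking them by cross-validated AUC. The data below are synthetic and the features are placeholders, not the study's actual trauma variables; this is a sketch of the comparison workflow, not the authors' pipeline:

```python
# Compare several classifier families by 5-fold cross-validated AUC on
# synthetic, imbalanced data (VTE-like outcomes are rare events).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.85], random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm": SVC(probability=True, random_state=0),
    "neural_net": MLPClassifier(max_iter=2000, random_state=0),
}
aucs = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
        for name, m in models.items()}
best = max(aucs, key=aucs.get)
print(best, round(aucs[best], 2))
```

Selecting the winner on the same folds used for comparison is optimistic; as the abstract's limitations note, external validation is the real test.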
There is uncertainty about factors associated with involuntary in-patient psychiatric care. Understanding these factors would help in reducing coercion in psychiatry.
Aims
To explore variables associated with involuntary care in the largest published database of involuntary admissions.
Method
We identified 166 102 public mental health hospital admissions over 5 years in New South Wales, Australia. Demographic, clinical and episode-of-care variables were examined in an exploratory, multivariable logistic regression.
Results
A total of 54% of eligible admissions included involuntary care. The strongest associations with involuntary care were referral from the legal system (odds ratio 4.98, 95% CI 4.61–5.38), and psychosis (odds ratio 4.48, 95% CI 4.31–4.64) or organic mental disorder (odds ratio 4.40, 95% CI 3.85–5.03). There were moderately strong associations between involuntary treatment and substance use disorder (odds ratio 2.68, 95% CI 2.56–2.81) or affective disorder (odds ratio 2.06, 95% CI 1.99–2.14); comorbid cannabis and amphetamine use disorders (odds ratio 1.65, 95% CI 1.57–1.74); unmarried status (odds ratio 1.62, 95% CI 1.49–1.76) and being born in Asia (odds ratio 1.42, 95% CI 1.35–1.50), Africa or the Middle East (odds ratio 1.32, 95% CI 1.24–1.40). Involuntary care was less likely for people aged >75 years (odds ratio 0.68, 95% CI 0.62–0.74), with comorbid personality disorder (odds ratio 0.90, 95% CI 0.87–0.94) or with private health insurance (odds ratio 0.89, 95% CI 0.86–0.93).
Conclusions
This research strengthens the evidence linking diagnostic, socioeconomic and cultural factors to involuntary treatment. Targeted interventions are needed to reduce involuntary admissions in disadvantaged groups.
Due to decades of structural and institutional racism, minoritized individuals in the US are more likely to live in low socioeconomic neighborhoods, which may underlie the observed greater risk for neurocognitive impairment as they age. However, these relationships have not been examined among people aging with HIV. We aimed to investigate neurocognitive disparities among middle- and older-aged Latino and non-Latino White people living with HIV (PWH), and whether neighborhood socioeconomic deprivation may partially mediate these relationships.
Participants and Methods:
Participants were 372 adults ages 40-85 living in southern California, including 186 Latinos (94 PWH, 92 without HIV) and 186 non-Latino (NL) Whites (94 PWH, 92 without HIV) age-matched to the Latino group (for the overall cohort: Age M=57.0, SD=9.1, Education: M=12.7, SD=3.9, 38% female; for the group of PWH: 66% AIDS, 88% on antiretroviral therapy [ART]; 98% undetectable plasma RNA [among those on ART]). Participants completed psychiatric and neuromedical evaluations and neuropsychological tests of verbal fluency, learning and memory in person or remotely. Neuropsychological results were converted to demographically-unadjusted global scaled scores for our primary outcome. A neighborhood socioeconomic deprivation variable (SESDep) was generated for census tracts in San Diego County using American Community Survey 2013-2017 data. Principal components analysis was used to create one measure from nine variables comprising educational (% with high school diploma), occupational (% unemployed), economic (rent to income ratio, % in poverty, % female-headed households with dependent children, % with no car, % on public assistance), and housing (% rented housing, % crowded rooms) factors. Census tract SESDep values were averaged for a 1km radius buffer around participants’ home addresses.
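The PCA-based index construction described above reduces several standardized census indicators to a single deprivation score per tract. The sketch below mirrors that step with random placeholder numbers; the variable names follow the text, but the data are not ACS data:

```python
# Build a one-dimensional deprivation index from nine census indicators:
# standardize each indicator, then take the first principal component.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

indicators = ["pct_no_hs_diploma", "pct_unemployed", "rent_to_income",
              "pct_poverty", "pct_female_headed_hh", "pct_no_car",
              "pct_public_assistance", "pct_rented", "pct_crowded"]
rng = np.random.default_rng(0)
tracts = rng.random((600, len(indicators)))        # one row per census tract

z = StandardScaler().fit_transform(tracts)         # put indicators on one scale
ses_dep = PCA(n_components=1).fit_transform(z).ravel()  # first component
print(ses_dep.shape)  # one deprivation score per tract
```

Standardizing first matters because the indicators have different natural scales (ratios vs. percentages); without it the component would be dominated by whichever variable has the largest variance.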
Results:
Univariable analyses (independent samples t-tests and Chi-square tests) indicated Latinos were more likely to be female and had fewer years of formal education than NL-Whites (ps<.05). Latino PWH had higher nadir CD4 than White PWH (p=.02). Separate multivariable regression models in the overall sample, controlling for demographics and HIV status, showed Latinos had significantly lower global scaled scores than Whites (b=-0.59; 95%CI=-1.13, -0.06; p=.03) and lived in more deprived neighborhoods (b=0.62; 95%CI=0.36, 0.88; p<.001). More SES deprivation was significantly associated with worse global neurocognition in an unadjusted linear regression (b=-0.55; 95%CI=-0.82, -0.28; p<.001), but similar analyses controlling for demographics and HIV status showed SESDep was not significantly related to global scaled scores (b=-0.11; 95%CI=-0.36, 0.14; p=.40). Exploratory analyses examined primary language (i.e., English vs Spanish) as a marker of Hispanic heterogeneity and its association with neurocognition and SESDep. Controlling for demographics and HIV status, both English-speaking (b=0.33; 95%CI=0.01, 0.64; p=.04) and Spanish-speaking Latinos (b=0.88; 95%CI=0.58, 1.18; p<.001) lived in significantly greater SESDep neighborhoods than Whites, with SESDep greater for Spanish-speakers than English-speakers (p<.001). However, only English-speaking Latinos had significantly lower neurocognition than Whites (b=-0.91; 95%CI=-1.57, -0.26; p<.01; Spanish-speakers: b=-0.27; 95%CI=-0.93, 0.38; p=.41).
Conclusions:
Among our sample of diverse older adults living with and without HIV, English-speaking Latinos showed worse neurocognition than Whites. Though SES neighborhood deprivation was worse among Latinos (particularly Spanish-speakers) it was not associated with neurocognitive scores after adjusting for demographics. Further studies investigating other neighborhood characteristics and more nuanced markers of Hispanic heterogeneity (e.g., acculturation) are warranted to understand factors underlying aging and HIV-related neurocognitive disparities among diverse older adults.
Patients diagnosed with coronavirus disease 2019 (COVID-19) aerosolize severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) through respiratory efforts, potentially exposing and infecting healthcare personnel (HCP). To prevent transmission of SARS-CoV-2, HCP have been required to wear personal protective equipment (PPE) during patient care. Early in the COVID-19 pandemic, face shields were used as one approach to controlling HCP exposure to SARS-CoV-2, including as eye protection.
Methods:
An MS2 bacteriophage was used as a surrogate for SARS-CoV-2 and was aerosolized using a coughing machine. A simulated HCP wearing a disposable plastic face shield was placed 0.41 m (16 inches) away from the coughing machine. The aerosolized virus was sampled using SKC BioSamplers on the inside (near the mouth of the simulated HCP) and the outside of the face shield. The aerosolized virus collected by the SKC BioSamplers was analyzed using a viability assay. Optical particle counters (OPCs) were placed next to the biosamplers to measure the particle concentration.
Results:
There was a statistically significant reduction (P < .0006) in viable virus concentration on the inside of the face shield compared to the outside of the face shield. The particle concentration was significantly lower on the inside of the face shield compared to the outside of the face shield for 12 of the 16 particle sizes measured (P < .05).
Conclusions:
Reductions in virus and particle concentrations were observed on the inside of the face shield; however, viable virus was still measured on the inside of the face shield, in the breathing zone of the HCP. Therefore, other exposure control methods need to be used to prevent transmission from aerosolized virus.
Over the last 20 years, finite element analysis (FEA) has become a standard analysis tool for metal joining processes. When FEA tools are combined with design of experiments (DOE) methodologies, academic research has shown the potential for virtual DOE to allow for the rapid analysis of manufacturing parameters and their influence on final formed products. However, within the domain of bulk-metal joining, FEA tools are rarely used in industrial applications, limiting DOE trials to physical testing that is constrained by financial costs and time.
This research explores the suitability of an FEA-based DOE to predict the complex behaviour during bulk-metal joining processes through a case study on the staking of spherical bearings. For the two DOE outputs of pushout strength and post-stake torque, the FEA-based DOE error did not exceed ±1.2% and ±1.5 Nm, respectively, far surpassing what was previously possible with analytically derived closed-form solutions. The outcomes of this case study demonstrate the potential for FEA-based DOE to provide an inexpensive, methodical, and scalable solution for modelling bulk-metal joining processes.
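A virtual DOE of the kind described above starts from a run table enumerating factor-level combinations, with each row driving one FEA simulation. The sketch below builds a minimal full-factorial table; the factor names and levels are illustrative assumptions about a bearing-staking process, not the study's actual setup:

```python
# Enumerate a full-factorial DOE run table: every combination of factor
# levels becomes one (virtual) FEA run.
from itertools import product

factors = {
    "punch_force_kN":  [20, 30, 40],
    "punch_angle_deg": [30, 45],
    "friction_coeff":  [0.08, 0.12],
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 3 * 2 * 2 = 12 FEA runs
for run in runs[:2]:
    print(run)
```

Because the run count multiplies across factors, fractional-factorial or response-surface designs are often substituted when each FEA run is expensive; the full factorial is shown here only for clarity.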
The purpose of this investigation was to expand upon the limited existing research examining the test–retest reliability, cross-sectional validity and longitudinal validity of a sample of bioelectrical impedance analysis (BIA) devices as compared with a laboratory four-compartment (4C) model. Seventy-three healthy participants aged 19–50 years were assessed by each of fifteen BIA devices, with resulting body fat percentage estimates compared with a 4C model utilising air displacement plethysmography, dual-energy X-ray absorptiometry and bioimpedance spectroscopy. A subset of thirty-seven participants returned for a second visit 12–16 weeks later and were included in an analysis of longitudinal validity. The sample of devices included fourteen consumer-grade and one research-grade model in a variety of configurations: hand-to-hand, foot-to-foot and bilateral hand-to-foot (octapolar). BIA devices demonstrated high reliability, with precision error ranging from 0·0 to 0·49 %. Cross-sectional validity varied, with constant error relative to the 4C model ranging from −3·5 (sd 4·1) % to 11·7 (sd 4·7) %, standard error of the estimate values of 3·1–7·5 % and Lin’s concordance correlation coefficients (CCC) of 0·48–0·94. For longitudinal validity, constant error ranged from −0·4 (sd 2·1) % to 1·3 (sd 2·7) %, with standard error of the estimate values of 1·7–2·6 % and Lin’s CCC of 0·37–0·78. While performance varied widely across the sample investigated, select models of BIA devices (particularly octapolar and select foot-to-foot devices) may hold potential utility for the tracking of body composition over time, particularly in contexts in which the purchase or use of a research-grade device is infeasible.
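Lin's concordance correlation coefficient, the agreement metric reported above, penalizes both poor correlation and systematic bias between a device and the criterion. The following is a hedged reimplementation from the standard formula (not the authors' code), with hypothetical body-fat percentages:

```python
# Lin's CCC: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2).
# Equals 1 only for perfect agreement on the identity line; an offset or
# slope error lowers it even when Pearson correlation is perfect.
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)


bia = [18.2, 25.1, 30.4, 22.8, 27.5]      # hypothetical BIA body-fat %
four_c = [20.0, 26.3, 32.1, 23.5, 29.0]   # hypothetical 4C model values
print(round(lins_ccc(bia, four_c), 3))
```

This distinction explains why a device can track rankings well yet score a modest CCC: a constant error of several percentage points, like those reported above, drags the coefficient down.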
An important tradition of thinking about divine and human agency takes divine transcendence as key to reconciling human freedom with God’s universal causality. Proponents of this tradition, who often claim Aquinas as their inspiration, sometimes maintain that this “transcendence approach” (TA) offers a third way that defies classification as libertarian or compatibilist. I argue that, carefully defined, libertarianism and compatibilism are mutually exclusive and exhaustive options for those affirming free will, but that it is an open question whether a view that affirms free will alongside God’s universal causality is compatibilist or libertarian. I then consider the interesting strategies employed by proponents of TA and argue that they leave it unclear whether TA is a libertarian or compatibilist view. Finally, I argue that a consistent deployment of these strategies results in a version of TA that is clearly libertarian, but which maintains a strong view of divine sovereignty and providence.
To describe the epidemiology of patients with nonintestinal carbapenem-resistant Enterobacterales (CRE) colonization and to compare clinical outcomes of these patients to those with CRE infection.
Design:
A secondary analysis of Consortium on Resistance Against Carbapenems in Klebsiella and other Enterobacteriaceae 2 (CRACKLE-2), a prospective observational cohort.
Setting:
A total of 49 US short-term acute-care hospitals.
Patients:
Patients hospitalized with CRE isolated from clinical cultures, April 30, 2016, through August 31, 2017.
Methods:
We described characteristics of patients in CRACKLE-2 with nonintestinal CRE colonization and assessed the impact of site of colonization on clinical outcomes. We then compared outcomes of patients defined as having nonintestinal CRE colonization to all those defined as having infection. The primary outcome was a desirability of outcome ranking (DOOR) at 30 days. Secondary outcomes were 30-day mortality and 90-day readmission.
Results:
Of 547 patients with nonintestinal CRE colonization, 275 (50%) were from the urinary tract, 201 (37%) were from the respiratory tract, and 71 (13%) were from a wound. Patients with urinary tract colonization were more likely to have a more desirable clinical outcome at 30 days than those with respiratory tract colonization, with a DOOR probability of better outcome of 61% (95% confidence interval [CI], 53%–71%). When compared to 255 patients with CRE infection, patients with CRE colonization had a similar overall clinical outcome, as well as 30-day mortality and 90-day readmission rates when analyzed in aggregate or by culture site. Sensitivity analyses demonstrated similar results using different definitions of infection.
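The DOOR "probability of better outcome" reported above is a pairwise comparison: the chance that a randomly chosen patient from one group has a more desirable outcome rank than one from the other group, with ties counted as half. The sketch below uses invented ranks (1 = best), not CRACKLE-2 data:

```python
# Pairwise DOOR probability: P(A better than B) + 0.5 * P(tie), computed
# over all cross-group pairs of outcome ranks (lower rank = better outcome).
from itertools import product

def door_probability(a_ranks, b_ranks):
    wins = sum(1.0 if a < b else 0.5 if a == b else 0.0
               for a, b in product(a_ranks, b_ranks))
    return wins / (len(a_ranks) * len(b_ranks))


urinary = [1, 1, 2, 2, 3]        # hypothetical outcome ranks
respiratory = [2, 3, 3, 4, 4]
print(round(door_probability(urinary, respiratory), 2))  # 0.88
```

A value of 0.5 means the groups are indistinguishable; the 61% reported above means a patient with urinary tract colonization had somewhat better odds of a more desirable 30-day outcome than one with respiratory tract colonization.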
Conclusions:
Patients with nonintestinal CRE colonization had outcomes similar to those with CRE infection. Clinical outcomes may be influenced more by culture site than classification as “colonized” or “infected.”
Schmid and Mullins present what they call ‘the Aloneness Argument’ for the inconsistency of four theses from classical theism: the doctrine of divine simplicity, the doctrine of divine omniscience, the claim that God is free to create or not to create, and the claim that it is possible that God and nothing but God exist. We deny that they have shown an inconsistency between these theses. We maintain that, depending on how certain premises are interpreted, one or another premise is false. We also offer a positive proposal regarding a simple God's knowledge that he is alone in a world where he doesn't create anything.
The present study reports the validity of multiple assessment methods for tracking changes in body composition over time and quantifies the influence of unstandardised pre-assessment procedures. Resistance-trained males underwent 6 weeks of structured resistance training alongside a hyperenergetic diet, with four total body composition evaluations. Pre-intervention, body composition was estimated in standardised (i.e. overnight fasted and rested) and unstandardised (i.e. no control over pre-assessment activities) conditions within a single day. The same assessments were repeated post-intervention, and body composition changes were estimated from all possible combinations of pre-intervention and post-intervention data. Assessment methods included dual-energy X-ray absorptiometry (DXA), air displacement plethysmography, three-dimensional optical imaging, single- and multi-frequency bioelectrical impedance analysis, bioimpedance spectroscopy and multi-component models. Data were analysed using equivalence testing, Bland–Altman analysis, Friedman tests and validity metrics. Most methods demonstrated meaningful errors when unstandardised conditions were present pre- and/or post-intervention, resulting in blunted or exaggerated changes relative to true body composition changes. However, some methods – particularly DXA and select digital anthropometry techniques – were more robust to a lack of standardisation. In standardised conditions, methods exhibiting the highest overall agreement with the four-component model were other multi-component models, select bioimpedance technologies, DXA and select digital anthropometry techniques. Although specific methods varied, the present study broadly demonstrates the importance of controlling and documenting standardisation procedures prior to body composition assessments across distinct assessment technologies, particularly for longitudinal investigations. 
Additionally, there are meaningful differences in the ability of common methods to track longitudinal body composition changes.
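The Bland–Altman analysis mentioned above summarizes method agreement as a mean bias and 95% limits of agreement over paired differences. A minimal sketch, with hypothetical paired body-fat estimates rather than the study's data:

```python
# Bland-Altman summary: mean difference (bias) and 95% limits of
# agreement (bias +/- 1.96 * SD of the differences).
import numpy as np

def bland_altman(method, criterion):
    method = np.asarray(method, float)
    criterion = np.asarray(criterion, float)
    diff = method - criterion
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd


dxa = [14.9, 21.2, 26.8, 19.5, 24.1]      # hypothetical field-method %
four_c = [15.4, 20.8, 27.5, 19.0, 25.2]   # hypothetical 4C criterion %
bias, lo, hi = bland_altman(dxa, four_c)
print(f"bias {bias:.2f}%, LoA [{lo:.2f}, {hi:.2f}]")
```

For longitudinal use, the same analysis is typically applied to change scores rather than single time points, since a constant bias cancels when differencing.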
This chapter discusses the German experience with supervisory codetermination, in which shareholder and employee representatives share governance of large corporations. After discussing the basic features of the system, we examine how it has been viewed by American corporate law scholars as an anomaly that could only arise through legislative fiat. In fact, the German system was born of consensual agreement at a time when labor and capital had roughly equal bargaining power, and only later became enshrined in law. We then discuss recent studies that evaluate how well codetermination serves the needs of various corporate constituents, including employees, creditors, and shareholders, and the role it played in Germany's relatively rapid recovery from the global financial crisis. In the end, the success of the German system serves as an empirical rejoinder to the hypothetical arguments used by law and economics scholars to justify the exclusive shareholder franchise, as well as a sort of proof of concept of the shared governance model.
This chapter critically examines the claim that shareholders have a homogeneous interest in wealth maximization. This claim, which is central to several arguments for the exclusive shareholder franchise, is not accurate. Shareholder preferences diverge along a number of dimensions, including those in or out of a control group; those with differential voting power; those involved in vote buying or voting trusts; hedged shareholders; management, employee, and pension fund shareholders; sovereign wealth funds; and corporate social responsibility investors. Even shareholders with shared goals of wealth maximization may have differing timelines, risk tolerances, and ways of defining “wealth.” Thus, actual shareholders have a range of preferences far too great to support many of the arguments that rely on their shared agreement.
This chapter provides an introduction to the basic structure and themes of the book. We begin with a description of the basic features of a corporation. We then discuss the intellectual foundations of corporate governance, including an overview of the doctrine of shareholder primacy and the view that a corporation is merely a nexus of contracts. We begin to catalog some of the cracks in these foundations, focusing on the shortcomings of the long-standing arguments for the exclusive shareholder franchise. Next, we make clear that our criticisms of shareholder primacy and the exclusive shareholder franchise do not question, but indeed make extensive use of, the basic principles of standard economics and social choice theory. In other words, both our critique and our positive theory come from within the very tradition that gave rise to the original arguments for shareholder control. We conclude the chapter with a detailed plan for the rest of the book.