Celiac disease (CD), an autoimmune disorder triggered by gluten, affects about one percent of the population. Only one-third of those affected receive a diagnosis, leaving the majority unaware of their condition. Untreated CD can damage the gut lining, resulting in malnutrition, anemia, and osteoporosis. Our primary goal was to identify at-risk groups and assess the cost-effectiveness of active case finding in primary care.
Methods
Our methodology involved systematic reviews and meta-analyses focusing on the accuracy of CD risk factors (chronic conditions and symptoms) and diagnostic tests (serological and genetic). Prediction models, based on identified risk factors, were developed for identifying individuals who would benefit from CD testing in routine primary care. Additionally, an online survey gauged individuals’ preferences regarding diagnostic certainty before initiating a gluten-free diet. This information informed the development of economic models evaluating the cost-effectiveness of various active case finding strategies.
Results
Individuals with dermatitis herpetiformis, a family history of CD, migraine, anemia, type 1 diabetes, osteoporosis, or chronic liver disease had a one-and-a-half to two times higher risk of having CD. IgA tTG and EMA demonstrated good diagnostic accuracy. Genetic tests showed high sensitivity but low specificity. Survey results indicated substantial variation in the certainty individuals wanted from a blood test before initiating a gluten-free diet. Cost-effectiveness analyses showed that, in adults, IgA tTG at a one percent pre-test probability (equivalent to population screening) was the most cost-effective strategy. Among non-population screening strategies, IgA EMA plus HLA was most cost-effective. There was substantial uncertainty in the economic model results.
Conclusions
While population-based screening with IgA tTG appears the most cost-effective strategy in adults, implementation decisions should not rely solely on economic analyses. Future research should explore whether population-based CD screening meets UK National Screening Committee criteria; this would require a long-term randomized controlled trial of screening strategies.
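The role of pre-test probability in these results can be made concrete with Bayes' theorem: at a one percent pre-test probability, even an accurate serological test produces several false positives per true positive. A minimal sketch, using illustrative sensitivity and specificity values rather than the study's pooled estimates:

```python
def post_test_probability(pre, sens, spec):
    """P(disease | positive test), from pre-test probability via Bayes."""
    true_pos = pre * sens                    # P(disease and test positive)
    false_pos = (1.0 - pre) * (1.0 - spec)   # P(no disease and test positive)
    return true_pos / (true_pos + false_pos)

# At a one percent pre-test probability (population screening), even an
# accurate serological test has a modest positive predictive value:
ppv_screening = post_test_probability(0.01, 0.90, 0.95)   # ≈ 0.15
# In a higher-risk group (hypothetical 5% pre-test probability):
ppv_risk_group = post_test_probability(0.05, 0.90, 0.95)  # ≈ 0.49
```

This is why case-finding strategies that first raise the pre-test probability (risk factors, or HLA testing) can change which serological strategy is most cost-effective.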
Most patients with long-term conditions (LTC) receive regular blood tests to monitor disease progression and response to treatment and to detect complications. There is currently no robust evidence to inform recommendations on monitoring. Creating this evidence base is challenging because the benefits and harms of testing are dependent on what is done in response to the test results.
Methods
We identified a list of commonly used tests. We defined a series of filtering questions to determine whether there was evidence to support the rationale of monitoring, such as “Can the general practitioner do anything in response to an abnormal test result?” Through a series of rapid reviews we identified evidence to answer each question. The evidence was presented at a consensus meeting where clinicians and patients voted for inclusion, exclusion, or further analysis. A process evaluation was performed alongside this. Further analyses were performed using routinely collected healthcare data and by performing incidence analyses, emulating randomized controlled trials (RCTs), and modeling disease progression.
Results
We tested this methodology on three common LTCs: chronic kidney disease (CKD), type 2 diabetes mellitus (T2DM), and hypertension. We found sufficient evidence to include hemoglobin A1C and estimated glomerular filtration rate (eGFR) for monitoring patients with T2DM; hemoglobin and eGFR for patients with CKD; and eGFR for patients with hypertension. The consensus panel excluded four tests, while 10 tests were selected for further analysis. The emulated RCTs will investigate the effect of regular monitoring with certain tests on health outcomes among routinely monitored patients. In addition, we will investigate the signal-to-noise ratio of each test over time using a modeling approach.
Conclusions
The cost effectiveness of the evidence-based testing panels needs to be tested in clinical practice. We are currently developing an intervention package and are planning to run a feasibility trial. This program of work has the potential to change how LTCs are monitored in primary care, ultimately improving patient outcomes and leading to more efficient use of healthcare resources.
In this section, we describe psychiatric disorders in the perinatal period, with a focus on understanding the triggering of mood and psychotic disorders by childbirth (postpartum psychosis and postnatal depression). We define perinatal disorders and the perinatal period, and the issues around how these episodes of illness are dealt with in the current diagnostic classification systems – the World Health Organization’s International Classification of Diseases, 11th edition (ICD-11) and the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5) [1, 2].
Laboratory studies of choice and decision making with real monetary rewards typically use smaller rewards than those common in real life. When laboratory rewards are large, they are almost always hypothetical. To apply laboratory results meaningfully to real-life situations, it is important to know the extent to which choices among hypothetical rewards correspond to choices among real rewards, and whether variation in the magnitude of hypothetical rewards affects behavior in meaningful ways. The present study compared real and hypothetical monetary rewards in two experiments. In Experiment 1, participants played a temporal discounting game that incorporates the logic of a repeated prisoner's dilemma (PD) game versus tit-for-tat; choice of one alternative ("defection" in PD terminology) resulted in a small, immediate reward; choice of the other alternative ("cooperation" in PD terminology) resulted in a larger reward delayed until the following trial. The larger delayed reward was greater for half of the groups than for the other half. Rewards also differed in type across groups: multiples of real nickels, hypothetical nickels, or hypothetical hundred-dollar bills. All groups significantly increased choice of the larger delayed reward over the 40 trials of the experiment. Over the last 10 trials, cooperation was significantly higher when the difference between larger and smaller hypothetical rewards was greater. Reward type (real or hypothetical) made no significant difference in cooperation on most measures. In Experiment 2, real and hypothetical rewards were compared in social discounting: the decrease in a reward's value to the giver as the social distance to the receiver increases. Social discount rates were well described by a hyperbolic function. Discounting rates for real and hypothetical rewards did not significantly differ. These results add to the evidence that results of experiments with hypothetical rewards apply validly to everyday life.
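The hyperbolic function referred to above is commonly written v = V / (1 + kN), where V is the undiscounted reward, N is the social distance to the receiver, and k is the fitted discount rate. A minimal sketch of estimating k by grid-search least squares; the crossover values below are made up for illustration, not the study's data:

```python
def hyperbolic(V, k, N):
    """Discounted value of an undiscounted reward V at social distance N."""
    return V / (1.0 + k * N)

def fit_k(observations, V, k_grid):
    """Grid-search least-squares estimate of the discount rate k."""
    def sse(k):
        return sum((v - hyperbolic(V, k, N)) ** 2 for N, v in observations)
    return min(k_grid, key=sse)

# Hypothetical crossover points: (social distance N, indifference value v)
# for a $100 reward, generated to resemble hyperbolic discounting.
data = [(1, 83), (2, 71), (5, 50), (10, 33), (20, 20), (50, 9), (100, 5)]
k_hat = fit_k(data, V=100.0, k_grid=[i / 1000.0 for i in range(1, 501)])
```

Comparing fitted k values between real-reward and hypothetical-reward groups is one way the "did not significantly differ" conclusion can be operationalized.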
Best practice requires the treating physician to understand the needs and hopes of his/her patient, particularly in relation to pregnancy and childbirth preferences. This is even more necessary for women with Severe Mental Illness (SMI) because of the complicated decisions they face balancing the need to continue medication in pregnancy to prevent relapse against any possible harm to the foetus. Objectives: To explore what women themselves view as most important when discussing pregnancy and childbirth with psychiatrists and what barriers there are to a) having a meaningful conversation and b) achieving optimum outcomes. Qualitative methods were used to analyse the data from in-depth interviews with 21 women, recruited from a South London NHS organisation (76%) and the UK’s national bipolar charity (24%). The views of 25 health professionals, including 19 psychiatrists, were also collected and analysed. Results: Many themes emerged but principally women wanted: information, continuity of care, better training for health professionals, to co-produce a detailed care plan, access to a Mother and Baby Unit, peer support and more research on medications in pregnancy. Conclusions: This study highlighted the importance of understanding women’s needs and fears and giving them the necessary information to make the difficult decisions that face them. Such understanding is likely to lead to more positive therapeutic relationships and better long-term outcomes.
This report describes a cluster of patients infected by Serratia marcescens in a metropolitan neonatal intensive care unit (NICU) and a package of infection control interventions that enabled rapid, effective termination of the outbreak.
Design:
Cross-sectional analytical study using whole-genome sequencing (WGS) for phylogenetic cluster analysis and identification of virulence and resistance genes.
Setting:
NICU in a metropolitan tertiary-care hospital in Sydney, Australia.
Patients:
All neonates admitted to the level 2 and level 3 neonatal unit.
Interventions:
Active inpatient and environmental screening for Serratia marcescens isolates with WGS analysis for identification of resistance genes as well as cluster relatedness between isolates. Planning and implementation of a targeted, multifaceted infection control intervention.
Results:
A cluster of 10 neonates colonized or infected with Serratia marcescens was identified in a metropolitan NICU. Two initial cases involved devastating intracranial infections with brain abscesses, highlighting the virulence of this organism. A targeted and comprehensive infection control intervention guided by WGS findings enabled termination of the outbreak within 15 days of onset. WGS demonstrated phylogenetic linkage across the cluster and genomic unrelatedness of later strains identified in the neonatal unit and elsewhere.
Conclusions:
A comprehensive, multipronged infection control package incorporating close stakeholder engagement, frequent microbiological patient screening, environmental screening, enhanced cleaning, optimization of hand hygiene, and healthcare worker education was paramount to the prompt control of Serratia marcescens transmission in this neonatal outbreak. WGS was instrumental in establishing relatedness between isolates and identifying possible transmission pathways in an outbreak setting.
Psychiatric mother and baby units (MBUs) are recommended for severe perinatal mental illness, but effectiveness compared with other forms of acute care remains unknown.
Aims
We hypothesised that women admitted to MBUs would be less likely to be readmitted to acute care in the 12 months following discharge, compared with women admitted to non-MBU acute care (generic psychiatric wards or crisis resolution teams (CRTs)).
Method
Quasi-experimental cohort study of women accessing acute psychiatric care up to 1 year postpartum in 42 healthcare organisations across England and Wales. Primary outcome was readmission within 12 months post-discharge. Propensity scores were used to account for systematic differences between MBU and non-MBU participants. Secondary outcomes included assessment of cost-effectiveness, experience of services, unmet needs, perceived bonding, observed mother–infant interaction quality and safeguarding outcome.
Results
Of 279 women, 108 (39%) received MBU care, 62 (22%) generic ward care and 109 (39%) CRT care only. The MBU group (n = 105) had similar readmission rates to the non-MBU group (n = 158) (aOR = 0.95, 95% CI 0.86–1.04, P = 0.29; an absolute difference of −5%, 95% CI −14 to 4%). Service satisfaction was significantly higher among women accessing MBUs compared with non-MBUs; no significant differences were observed for any other secondary outcomes.
Conclusions
We found no significant differences in rates of readmission, but MBU advantage might have been masked by residual confounders; readmission will also depend on quality of care after discharge and type of illness. Future studies should attempt to identify the effective ingredients of specialist perinatal in-patient and community care to improve outcomes.
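The propensity score adjustment used in this study can be illustrated with inverse probability of treatment weighting (IPTW), one common way to account for systematic differences between non-randomized groups: each participant is weighted by the inverse of the probability of receiving the care they actually received, so the weighted groups resemble each other on measured covariates. A toy sketch with hypothetical propensity scores and binary outcomes, not the study's actual model:

```python
def iptw_effect(records):
    """Weighted difference in outcome means (treated minus control).

    records: iterable of (treated 0/1, P(treated | covariates), outcome).
    """
    sums = {1: [0.0, 0.0], 0: [0.0, 0.0]}  # group -> [weighted outcome, weight]
    for treated, ps, outcome in records:
        # Weight by inverse probability of the treatment actually received.
        w = 1.0 / ps if treated else 1.0 / (1.0 - ps)
        sums[treated][0] += w * outcome
        sums[treated][1] += w
    return sums[1][0] / sums[1][1] - sums[0][0] / sums[0][1]

# Hypothetical records: (MBU care 0/1, propensity score, readmitted 0/1)
data = [(1, 0.8, 1), (1, 0.6, 0), (1, 0.4, 1),
        (0, 0.8, 1), (0, 0.5, 0), (0, 0.2, 0)]
effect = iptw_effect(data)
```

In practice the propensity scores themselves are estimated (e.g. by logistic regression on baseline covariates), and only measured confounders are balanced, which is why the abstract cautions about residual confounding.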
To determine the impact of an inpatient stewardship intervention targeting fluoroquinolone use on inpatient and postdischarge Clostridioides difficile infection (CDI).
Design:
We used an interrupted time series study design to evaluate the rate of hospital-onset CDI (HO-CDI), postdischarge CDI (PD-CDI) within 12 weeks, and inpatient fluoroquinolone use from 2 years prior to 1 year after a stewardship intervention.
Setting:
An academic healthcare system with 4 hospitals.
Patients:
All inpatients hospitalized between January 2017 and September 2020, excluding those discharged from locations caring for oncology, bone marrow transplant, or solid-organ transplant patients.
Intervention:
Introduction of electronic order sets designed to reduce inpatient fluoroquinolone prescribing.
Results:
Among 163,117 admissions, there were 683 cases of HO-CDI and 1,104 cases of PD-CDI. In the context of a 2% month-to-month decline starting in the preintervention period (P < .01), we observed a 21% reduction in fluoroquinolone days of therapy per 1,000 patient days after the intervention (level change, P < .05). HO-CDI rates were stable throughout the study period. In contrast, we detected a change in the trend of PD-CDI rates from a stable monthly rate in the preintervention period to a monthly decrease of 2.5% in the postintervention period (P < .01).
Conclusions:
Our systemwide intervention reduced inpatient fluoroquinolone use immediately, but not HO-CDI. However, a downward trend in PD-CDI occurred. Relying on outcome measures limited to the inpatient setting may not reflect the full impact of inpatient stewardship efforts.
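The level and trend changes reported above come from segmented (interrupted time series) regression, where an indicator for the postintervention period captures the immediate level change and an interaction with time captures the slope change. A minimal ordinary-least-squares sketch on synthetic monthly log rates; the published analysis used negative binomial models, and the data below are constructed to mirror the reported pattern, not taken from the study:

```python
import math

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    v = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for c in range(p):                       # forward elimination with pivoting
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        v[c], v[piv] = v[piv], v[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            v[r] -= f * v[c]
    beta = [0.0] * p
    for r in reversed(range(p)):
        beta[r] = (v[r] - sum(A[r][k] * beta[k]
                              for k in range(r + 1, p))) / A[r][r]
    return beta

t0 = 24  # intervention starts at month 24
# Synthetic monthly rates: flat preintervention, then an immediate 21% drop
# followed by a 2.5%-per-month decline.
rates = [10.0] * t0 + [10.0 * 0.79 * 0.975 ** m for m in range(12)]
# Design: intercept, time, post indicator, months since intervention.
X = [[1.0, float(t), float(t >= t0), float(max(t - t0, 0))]
     for t in range(len(rates))]
beta = ols(X, [math.log(r) for r in rates])
level_change = math.exp(beta[2]) - 1.0   # immediate level change, ≈ -0.21
trend_change = math.exp(beta[3]) - 1.0   # slope change per month, ≈ -0.025
```

On a log scale the exponentiated coefficients read directly as percentage changes, which is how results like "21% level change" and "2.5% monthly decrease" are reported.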
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
Aims
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Method
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Results
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
Conclusions
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
Background: Effective inpatient stewardship initiatives can improve antibiotic prescribing, but their impact on outcomes like Clostridioides difficile infections (CDIs) is less apparent. However, the effect of inpatient stewardship efforts may extend to the postdischarge setting. We evaluated whether an intervention targeting inpatient fluoroquinolone (FQ) use in a large healthcare system reduced the incidence of postdischarge CDI. Methods: In August 2019, 4 acute-care hospitals in a large healthcare system replaced standalone FQ orders with order sets containing decision support. Order sets redirected prescribers to syndrome order sets that prioritize alternative antibiotics. Monthly patient days (PDs) and antibiotic days of therapy (DOT) administered for FQs and NHSN-defined broad-spectrum hospital-onset (BS-HO) antibiotics were calculated using patient encounter data for the 23 months before and 13 months after the intervention (COVID-19 admissions in the previous 7 months). We evaluated hospital-onset CDI (HO-CDI) per 1,000 PD (defined as any positive test after hospital day 3) and 12-week postdischarge CDI (PDC-CDI) per 100 discharges (any positive test within the healthcare system <12 weeks after discharge). Interrupted time-series analysis using generalized estimating equation models with a negative binomial link function was conducted; a sensitivity analysis with Medicare case-mix index (CMI) adjustment was also performed to control for differences after the start of the COVID-19 pandemic. Results: Among 163,117 admissions, there were 683 HO-CDIs and 1,009 PDC-CDIs. Overall, FQ DOT per 1,000 PD decreased by 21% immediately after the intervention (level change; P < .05) and decreased at a consistent rate throughout the entire study period (−2% per month; P < .01) (Fig. 1). There was a nonsignificant 5% increase in BS-HO antibiotic use immediately after the intervention and a continued increase in use thereafter (0.3% per month; P = .37).
HO-CDI rates were stable throughout the study period, with a nonsignificant level-change decrease of 10% after the intervention. In contrast, there was a reversal in the trend in PDC-CDI rates, from a 0.4% per month increase in the preintervention period to a 3% per month decrease in the postintervention period (P < .01). A sensitivity analysis with adjustment for facility-specific CMI produced similar results but with wider confidence intervals, as did an analysis with a distinct COVID-19 time point. Conclusion: Our systemwide intervention using order sets with decision support reduced inpatient FQ use by 21%. The intervention did not significantly reduce HO-CDI but significantly decreased the incidence of CDI within 12 weeks after discharge. Relying on outcome measures limited to the inpatient setting may not reflect the full impact of inpatient stewardship efforts, and incorporating postdischarge outcomes, such as CDI, should increasingly be considered.
To determine the effect of an electronic medical record (EMR) nudge at reducing total and inappropriate orders testing for hospital-onset Clostridioides difficile infection (HO-CDI).
Design:
An interrupted time series analysis of HO-CDI orders 2 years before and 2 years after the implementation of an EMR intervention designed to reduce inappropriate HO-CDI testing. Orders for C. difficile testing were considered inappropriate if the patient had received a laxative or stool softener in the previous 24 hours.
Setting:
Four hospitals in an academic healthcare network.
Patients:
All patients with a C. difficile order after hospital day 3.
Intervention:
Orders for C. difficile testing in patients who had received a laxative or stool softener within the previous 24 hours triggered an EMR alert defaulting to cancellation of the order ("nudge").
Results:
Of the 17,694 HO-CDI orders, 7% were inappropriate (8% preintervention vs 6% postintervention; P < .001). Monthly HO-CDI orders decreased by 21% postintervention (level-change rate ratio [RR], 0.79; 95% confidence interval [CI], 0.73–0.86), and the rate continued to decrease (postintervention trend-change RR, 0.99; 95% CI, 0.98–1.00). The intervention was not associated with a level change in inappropriate HO-CDI orders (RR, 0.80; 95% CI, 0.61–1.05), but the postintervention inappropriate order rate decreased over time (RR, 0.95; 95% CI, 0.93–0.97).
Conclusion:
An EMR nudge to minimize inappropriate ordering for C. difficile was effective at reducing HO-CDI orders, and likely contributed to decreasing the inappropriate HO-CDI order rate after the intervention.
A proportion of ex-military personnel who develop mental health and social problems end up in the Criminal Justice System. A government review called for better understanding of pathways to offending among ex-military personnel, to improve services and reduce reoffending. We utilised data linkage with criminal records to examine patterns of offending among military personnel after they leave service, together with the associated risk factors (including mental health and alcohol problems) and socio-economic protective factors.
Method
Questionnaire data from a cohort study of 13 856 randomly selected UK military personnel were linked with national criminal records to examine changes in the rates of offending after leaving service.
Results
All types of offending increased after leaving service, with violent offending being the most prevalent. Offending was predicted by mental health and alcohol problems: probable PTSD, symptoms of common mental disorder and aggressive behaviour (verbal, property and threatened or actual physical aggression). Reduced risk of offending was associated with post-service socio-economic factors: absence of debt, stable housing and relationship satisfaction. These factors were associated with a reduced risk of offending in the presence of mental health risk factors.
Conclusions
After leaving service, ex-military personnel are more likely to commit violent offences than other types of offence. Mental health and alcohol problems are associated with increased risk of post-service offending, and socio-economic stability is associated with reduced risk of offending among military veterans with these problems. Efforts to reduce post-service offending should encompass management of socio-economic risk factors as well as mental health.
Using data from the Stockholm Stock Exchange (SSE), we study the value added by (as distinct from the abnormal returns to) analysts’ recommendations. Recommending brokers’ clients trade profitably around positive recommendations at the expense of other brokers’ clients. Significant profits come from transactions before recommendation dates. Value added is greatest for upgrades to large caps, and largely insignificant for downgrades and recommendations of small caps, despite high abnormal returns. Brokers making profitable recommendations generate abnormally high commission income, recouping much of their clients’ abnormal profits, and their abnormal commission income varies in line with the abnormal profits for their clients.
To identify potential participants for clinical trials, electronic health records (EHRs) are searched at potential sites. As an alternative, we investigated using medical devices used for real-time diagnostic decisions for trial enrollment.
Methods:
To project cohorts for a trial in acute coronary syndromes (ACS), we used electrocardiograph-based algorithms that identify ACS or ST-elevation myocardial infarction (STEMI) and prompt clinicians to offer patients trial enrollment. We searched six hospitals' electrocardiograph systems for electrocardiograms (ECGs) meeting the planned trial's enrollment criterion: ECGs with STEMI or > 75% probability of ACS by the acute cardiac ischemia time-insensitive predictive instrument (ACI-TIPI). We revised the ACI-TIPI regression to require only data available directly from the electrocardiograph (the e-ACI-TIPI), using the same data as the original ACI-TIPI (development set n = 3,453; test set n = 2,315). We also tested both instruments on data from emergency department electrocardiographs from across the US (n = 8,556). We then used the ACI-TIPI and e-ACI-TIPI to identify potential cohorts for the ACS trial and compared performance to cohorts identified from EHR data at the hospitals.
Results:
Receiver-operating characteristic (ROC) curve areas on the test set were excellent (0.89 for ACI-TIPI and 0.84 for e-ACI-TIPI), as was calibration. On the national electrocardiographic database, ROC areas were 0.78 and 0.69, respectively, with very good calibration. When tested for detection of patients with > 75% ACS probability, both electrocardiograph-based methods identified eligible patients well, and better than did EHRs.
Conclusion:
Using data from medical devices such as electrocardiographs may provide accurate projections of available cohorts for clinical trials.
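The ROC areas reported above can be computed without plotting a curve: the area equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann–Whitney interpretation). A minimal sketch with illustrative predicted probabilities and outcomes, not data from this study:

```python
def roc_auc(scores, labels):
    """ROC curve area via pairwise comparisons; ties count one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative predicted ACS probabilities and true outcomes (1 = ACS):
scores = [0.95, 0.80, 0.80, 0.60, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0,    1,    0]
auc = roc_auc(scores, labels)   # 0.71875 for this toy data
```

A cohort-projection criterion such as "score > 0.75" is then just a threshold applied to these scores; the ROC area summarizes how well the score separates eligible from ineligible patients across all thresholds.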
UK veterans suffering from a psychological or psychiatric illness as a consequence of service in the Second World War were entitled to a war pension. Their case files, which include regular medical assessments, are a valuable resource to investigate the nature, distribution and duration of symptoms.
Methods
A standardised form was used to collect data from pension records of a random sample of 500 UK army veterans from the first presentation in the 1940s until 1980. Data were also gathered from 50 civilians and 54 emergency responders with a pension for post-traumatic illness following air-raids.
Results
The 10 most common symptoms reported by veterans were anxiety, depression, sleep problems, headache, irritability/anger, tremor/shaking, difficulty completing tasks, poor concentration, repeated fears and avoidance of social contact. Nine of the 10 were widely distributed across the veteran population when symptoms were ranked by the number of subjects who reported them. Nine symptoms persisted significantly longer in the veteran sample than in emergency responders. These included seven of the most common symptoms, together with two others: muscle pain and restlessness. The persistence of these symptoms in the veteran group suggests a post-traumatic illness linked to lengthy overseas service in combat units.
Conclusions
The nature and duration of symptoms exhibited by veterans may be associated with their experience of heightened risks. Exposure to severe or prolonged trauma seems to be associated with chronic multi-symptom illness, symptoms of post-traumatic stress and somatic expressions of pain that may delay or complicate the recovery process.
The application of digital monitoring biomarkers in health, wellness, and disease management is reviewed. Harnessing the near-limitless capacity of these approaches in the managed healthcare continuum will benefit from a systems-based architecture that presents data quality, quantity, and ease of capture within a decision-making dashboard.
Methods
A framework was developed which stratifies key components and advances the concept of contextualized biomarkers. The framework codifies how direct, indirect, composite, and contextualized composite data can drive innovation for the application of digital biomarkers in healthcare.
Results
The de novo framework implies consideration of physiological, behavioral, and environmental factors in the context of biomarker capture and analysis. Application in disease and wellness is highlighted, and incorporation in clinical feedback loops and closed-loop systems is illustrated.
Conclusions
The study of contextualized biomarkers has the potential to offer rich and insightful data for clinical decision making. Moreover, advancement of the field will benefit from innovation at the intersection of medicine, engineering, and science. Technological developments in this dynamic field will thus fuel its logical evolution, guided by inputs from patients, physicians, healthcare providers, end-payors, actuaries, medical device manufacturers, and drug companies.
Patient-reported outcome measures (PROMs) provide a way to measure the impact of a disease and its associated treatments on the quality of life from the patients’ perspective. The aim of this review was to identify PROMs that have been developed and/or validated in patients with carotid artery disease (CAD) undergoing revascularization, and to assess their psychometric properties and examine suitability for research and clinical use.
METHODS:
Eight electronic databases including MEDLINE and CINAHL were searched from inception to May 2015 and updated in the MEDLINE database to February 2017. A two-stage search approach was used to identify studies reporting the development and/or validation of relevant PROMs in patients with CAD undergoing revascularization. Supplementary citation searching and hand-searching reference lists of included studies were also undertaken. The Consensus-based standards for the selection of health measurement instruments (COSMIN) and Oxford criteria were used to assess the methodological quality of the included studies, and the psychometric properties of the PROMs were evaluated using established assessment criteria.
RESULTS:
Six PROMs, reported in five studies, were identified: the 36-Item Short Form Health Survey (SF-36), EuroQol-5-Dimension scale (EQ-5D), Hospital Anxiety and Depression Scale (HADS), Dizziness Handicap Inventory (DHI), a quality of life for CAD scale by Ivanova 2015, and a disease-specific PROM designed by Stolker 2010. The rigour of the psychometric assessment of the PROMs was variable, with most studies attempting to assess only a single psychometric criterion. No study reported evidence on criterion validity or test–retest reliability. The overall psychometric evaluation of all included PROMs was rated as poor.
CONCLUSIONS:
This review highlighted a lack of well validated PROMs for patients undergoing carotid artery revascularization. As a result, the development and validation of a new PROM for this patient population is warranted, to provide data that can supplement traditional clinical outcomes (stroke > 30 days post-procedure, myocardial infarction, and death) and capture changes in patients' health status and quality of life to help inform treatment decisions.
Women with bipolar disorder are at increased risk of having a severe episode of illness associated with childbirth.
Aims
To explore the factors that influence the decision-making of women with bipolar disorder regarding pregnancy and childbirth.
Method
Qualitative study with a purposive sample of women with bipolar disorder considering pregnancy, or currently or previously pregnant, supplemented by data from an online forum. Data were analysed using thematic analysis.
Results
Twenty-one women with bipolar disorder from an NHS organisation were interviewed, and data were used from 50 women's comments on the online forum of the UK's national bipolar charity. The centrality of motherhood, social and economic contextual factors, stigma and fear were major themes. Within these themes, new findings included women considering an elective Caesarean section in an attempt to avoid the deleterious effects of a long labour and loss of sleep, or seeking to avoid the risks of pregnancy altogether by means of adoption or surrogacy.
Conclusions
This study highlights the information needs of women with bipolar disorder, both pre-conception and when childbearing, and the need for improved training for all health professionals working with women with bipolar disorder of childbearing age to reduce stigmatising attitudes and increase knowledge of the evidence base on treatment in the perinatal period.
Using survey data, we analyze institutional investors’ expectations about the future performance of fund managers and the impact of those expectations on asset allocation decisions. We find that institutional investors allocate funds mainly on the basis of fund managers’ past performance and of investment consultants’ recommendations, but not because they extrapolate their expectations from these. This suggests that institutional investors base their investment decisions on the most defensible variables at their disposal and supports the existence of agency considerations in their decision making.