Bathing intensive care unit (ICU) patients with chlorhexidine gluconate (CHG) decreases healthcare-associated infections (HAIs). The optimal method of CHG bathing remains undefined.
Methods:
Prospective crossover study comparing daily CHG bathing with 2% CHG-impregnated cloths versus 4% CHG solution. In phase 1, from January 2020 through March 2020, 1 ICU utilized 2% cloths, while the other ICU utilized 4% solution. After an interruption caused by the coronavirus disease 2019 pandemic, in phase 2, from July 2020 through September 2020, the unit CHG bathing assignments were reversed. Swabs of patients’ arms and legs were collected 3 times weekly to measure skin microbial colonization and CHG concentration. Other outcomes included HAIs, adverse reactions, and skin tolerability.
Results:
In total, 411 assessments occurred after baths with 2% cloths and 425 assessments occurred after baths with 4% solution. Average microbial colonization was 691 (interquartile range, 0–30) colony-forming units per square centimeter (CFU/cm²) for patients bathed with 2% cloths, 1,627 (0–265) CFU/cm² for 4% solution, and 8,519 (10–1,130) CFU/cm² for patients who did not have a CHG bath (P < .001). Average CHG skin concentration (parts per million) was 1,300.4 (100–2,000) for 2% cloths, 307.2 (30–200) for 4% solution, and 32.8 (0–20) for patients without a recorded CHG bath. Both CHG bathing methods were well tolerated. Although the study was underpowered for this outcome, no difference in HAI rates was noted between groups.
Conclusions:
Both CHG bathing methods resulted in a significant decrease in microbial skin colonization; 2% CHG cloths were associated with greater skin CHG concentration and fewer recovered organisms.
History of prior mental disorder, particularly post-traumatic stress disorder (PTSD), increases risk for PTSD following subsequent trauma exposure. However, limited research has examined differences associated with specific prior mental disorders among people with PTSD.
Aims
The current study examined whether different prior mental disorders were associated with meaningful differences among individuals presenting to a specialist service for severe earthquake-related distress following the Canterbury earthquakes (N = 177).
Method
Two sets of comparisons were made: between participants with no history of prior disorder and participants with history of any prior disorder; and between participants with history of prior PTSD and those with history of other prior disorders. Comparisons were made in relation to sociodemographic factors, earthquake exposure, peri-traumatic distress, life events and current psychological functioning.
Results
Participants with any prior mental disorder had more current disorders than those with no prior disorder. Among participants with history of any prior disorder, those with prior PTSD reported more life events in the past 5 years than those with other prior disorders.
Conclusions
Findings suggest a history of any prior mental disorder contributes to increased clinical complexity, but not increased PTSD severity, among people with PTSD seeking treatment. Although post-disaster screening efforts should include those with prior mental disorders, it should also be recognised that those with no prior disorder are at risk of developing equally severe PTSD.
Being married may protect late-life cognition. Less is known about living arrangement among unmarried adults and mechanisms such as brain health (BH) and cognitive reserve (CR) across race and ethnicity or sex/gender. The current study examines (1) associations between marital status, BH, and CR among diverse older adults and (2) whether one’s living arrangement is linked to BH and CR among unmarried adults.
Method:
Cross-sectional data come from the Washington Heights-Inwood Columbia Aging Project (N = 778, 41% Hispanic, 33% non-Hispanic Black, 25% non-Hispanic White; 64% women). Magnetic resonance imaging (MRI) markers of BH included cortical thickness in Alzheimer’s disease signature regions and hippocampal, gray matter, and white matter hyperintensity volumes. CR was residual variance in an episodic memory composite after partialing out MRI markers. Exploratory analyses stratified by race and ethnicity and sex/gender and included potential mediators.
Results:
Marital status was associated with CR, but not BH. Those who were previously married (i.e., divorced, widowed, or separated) had lower CR than their married counterparts in the full sample, among White and Hispanic subgroups, and among women. Never-married women also had lower CR than married women. These findings were independent of age, education, physical health, and household income. Among never-married individuals, living with others was negatively linked to BH.
Conclusions:
Marriage may protect late-life cognition via CR. Findings also highlight differential effects across race and ethnicity and sex/gender. Marital status could be considered when assessing the risk of cognitive impairment during routine screenings.
Female patients using indwelling urinary catheters (IUCs) are disproportionately at risk for developing catheter-associated urinary tract infections (CAUTIs) compared to males. Female external urine wicking devices (FEUWDs) have emerged as potential alternatives to IUCs for incontinence management.
Objectives:
To assess the clinical risks and benefits of FEUWDs as alternatives to IUCs.
Methods:
Ovid MEDLINE, Embase, Scopus, Web of Science Core Collection, CINAHL Complete, and ClinicalTrials.gov were searched from inception to July 10, 2023. Included studies used FEUWDs as an intervention and reported measures of urinary tract infections and secondary outcomes related to incontinence management.
Results:
Of 2,580 returned records, 50 were systematically reviewed. Meta-analyses assessed rates of indwelling CAUTIs and IUC utilization. Following FEUWD implementation, IUC utilization rates decreased 14% (RR = 0.86, 95% CI = [0.76, 0.97]) and indwelling CAUTI rates nonsignificantly decreased up to 32% (IRR = 0.68, 95% CI = [0.39, 1.17]). Limited only to studies that described protocols for implementation, the incidence rate of indwelling CAUTIs decreased significantly up to 54% (IRR = 0.46, 95% CI = [0.32, 0.66]). Secondary outcomes were reported less routinely.
Conclusions:
Overall, FEUWDs nonsignificantly reduced indwelling CAUTI rates, though reductions were significant among studies describing FEUWD implementation protocols. We recommend developing standard definitions for consistent reporting of non-indwelling CAUTI complications such as FEUWD-associated UTIs, skin injuries, and mobility-related complications.
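The effect sizes above (e.g., IRR = 0.68, 95% CI = [0.39, 1.17]) are ratio measures whose confidence intervals are conventionally constructed on the log scale. As a minimal sketch of how a single incidence rate ratio and its Wald 95% CI are computed — the review's pooled estimates come from meta-analytic models, and the counts below are hypothetical, not taken from any included study:

```python
import math

def rate_ratio_ci(events_a, time_a, events_b, time_b, z=1.96):
    """Incidence rate ratio (group A vs group B) with a Wald 95% CI
    computed on the log scale -- the standard large-sample approach."""
    irr = (events_a / time_a) / (events_b / time_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(irr) - z * se_log)
    hi = math.exp(math.log(irr) + z * se_log)
    return irr, lo, hi

# Hypothetical counts: 20 CAUTIs over 10,000 catheter-days after
# FEUWD implementation vs 30 over 10,000 catheter-days before.
irr, lo, hi = rate_ratio_ci(20, 10_000, 30, 10_000)
```

An interval that crosses 1.0, as in the pooled IRR above, corresponds to a nonsignificant reduction at the .05 level.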
Individuals with Down syndrome (DS) experience intellectual disability, such that measures of cognitive and adaptive functioning are near the normative floor upon evaluation. Individuals with DS are also at increased risk for Alzheimer's disease (AD) beginning around age 40, and test performances and adaptive ratings at the normative floor make it difficult to detect change in cognition and functioning. This study first assessed the range of raw intelligence scores and raw adaptive functioning scores of individuals with DS at the normative floor. Next, we assessed whether those raw intelligence scores were predictive of raw adaptive functioning scores and, by association, whether they may be meaningful when assessing change in individuals with a lower baseline of cognitive functioning.
Participants and Methods:
Participants were selected from a cohort of 117 adults with DS in a longitudinal study examining AD risk. Participants (n = 96; M = 40.9 years old, SD = 10.67; 57.3% female) were selected if they had both a completed measure of IQ (Kaufman Brief Intelligence Test, Second Edition; KBIT-2) and informant ratings of adaptive functioning (Vineland Adaptive Behavior Scales; VABS-II). Multiple regression was conducted predicting VABS-II total raw score from KBIT-2 total raw score, while controlling for age.
Results:
A slight majority (57.3%) of the sample had a standardized IQ score of 40, with the majority (95.7%) having a standardized score at or below 60. Additionally, 85.3% of the sample had a standard VABS-II score at or below 60. Within the normative floor for the KBIT-2 (IQ = 40), there was a normal distribution and substantial range of both KBIT-2 raw scores (M = 31.19, SD = 13.19, range: 2 to 41) and VABS-II raw scores (M = 406.33, SD = 84.91, range: 198 to 569). Using the full sample, age significantly predicted raw VABS-II scores (β = −.283, p = .008). When KBIT-2 raw scores were included in the model, age was no longer an independently significant predictor. KBIT-2 raw scores significantly predicted raw VABS-II scores (β = .689, p < .001). Age alone accounted for 8.0% of the variance in VABS-II raw scores, and KBIT-2 raw scores accounted for an additional 43.8% of the variance. This relationship was maintained when the sample was reduced to individuals at the normative floor (n = 51), where KBIT-2 raw scores accounted for 23.7% of the variance in raw VABS-II scores (β = .549, p < .001).
Conclusions:
The results indicate that meaningful variability exists among raw intelligence test performances that may be masked by scores at the normative floor. Further, the variability in raw intelligence scores is associated with variability in adaptive functioning, such that lower intelligence scores are associated with lower ratings of adaptive functioning. Considering this relationship would be masked by a reduction of range due to norming, these findings indicate that raw test performances and adaptive functioning ratings may have value when monitoring change in adults with DS at risk for AD.
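The incremental-variance figures reported above (8.0% for age alone, plus 43.8% for raw IQ) correspond to the change in R² between nested regression models, which for a single added predictor equals its squared semipartial correlation. A minimal pure-Python sketch of that computation, using small illustrative data rather than the study's:

```python
def mean(v):
    return sum(v) / len(v)

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def residualize(y, x):
    """Residuals of y after simple linear regression on x."""
    mx, my = mean(x), mean(y)
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return [c - (my + b * (a - mx)) for a, c in zip(x, y)]

def incremental_r2(y, x1, x2):
    """R^2 for y ~ x1, and the increase in R^2 from adding x2
    (the squared semipartial correlation of x2 given x1)."""
    r2_x1 = corr(y, x1) ** 2
    sr = corr(y, residualize(x2, x1))  # semipartial of x2 beyond x1
    return r2_x1, sr ** 2

# Illustrative data (hypothetical, not the study's raw scores):
age  = [25, 30, 35, 40, 45, 50, 55, 60]
kbit = [38, 35, 30, 33, 26, 24, 20, 18]
vabs = [480, 460, 430, 445, 400, 390, 360, 340]
r2_age, delta_r2 = incremental_r2(vabs, age, kbit)
```

The sum `r2_age + delta_r2` is the R² of the full two-predictor model, so the increment can be read directly as "variance explained beyond age."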
Non-Hispanic Black older adults experience a disproportionate burden of Alzheimer’s Disease and related dementias (ADRD) risk compared to non-Hispanic White older adults. It is necessary to identify mechanisms that may be contributing to inequities in cognitive aging. Psychosocial stressors that disproportionately affect Black adults (e.g., discrimination) have the potential to impact brain health through stress pathways. The brain’s white matter, which appears to be particularly important for ADRD risk among Black older adults, may be uniquely vulnerable to stress-related physiological dysfunction. To further understand whether and how discrimination can affect ADRD risk, this study aimed to examine associations between multiple forms of racial discrimination and white matter integrity, operationalized through diffusion tensor imaging.
Participants and Methods:
Cross-sectional data were obtained from 190 non-Hispanic Black residents aged 65+ without dementia in northern Manhattan. Racial discrimination was self-reported using the Everyday Discrimination and Major Experiences of Lifetime Discrimination scales. Example items from the Everyday Discrimination Scale include: “You are treated with less respect than other people”; “You are called names or insulted.” Example items from the Major Experiences of Lifetime Discrimination Scale include: “At any time in your life, have you ever been unfairly fired from a job?”; “Have you ever been unfairly denied a bank loan?” Racial discrimination was operationalized as experiences attributed to “race” or “skin color.” White matter integrity was assessed using fractional anisotropy (FA) via diffusion tensor imaging. Multivariable regression models evaluated the unique effects of everyday and major experiences of lifetime racial discrimination on mean FA in the whole brain and specific regions. Initial models controlled for age, sex/gender, intracranial volume, and white matter hyperintensities. Subsequent models additionally controlled for socioeconomic and health factors to consider potential confounders or mediators of the relationship between discrimination and white matter integrity.
Results:
Major experiences of lifetime discrimination were negatively associated with mean FA within the left cingulum cingulate gyrus and the right inferior fronto-occipital fasciculus. These associations persisted when controlling for additional covariates (i.e., education, depression, and cardiovascular diseases). In contrast, major experiences of lifetime discrimination were positively associated with mean FA within the right superior longitudinal fasciculus (temporal part). This association was attenuated when controlling for additional covariates. Everyday racial discrimination was not associated with mean FA in any regions.
Conclusions:
These results extend prior work linking racial discrimination to brain health and provide evidence for both risk and resilience among Black older adults. Major experiences of lifetime racial discrimination, a proxy for institutional racism, may have a stronger effect on white matter integrity than everyday racial discrimination, a proxy for interpersonal racism. Educational opportunities and cardiovascular risk factors may represent mediators between racial discrimination and white matter integrity. White matter integrity within specific brain regions may be a mechanism through which racially patterned social stressors contribute to racial disparities in ADRD. Future research should characterize within-group heterogeneity in order to identify factors that promote resilience among Black older adults.
Cross-sectional studies have shown that the COVID-19 pandemic has had a significant impact on the mental health of healthcare staff. However, it is less well understood how working over the long term in successive COVID-19 waves affects staff well-being.
Aims
To identify subpopulations within the health and social care staff workforce with differentiated trajectories of mental health symptoms during phases of the COVID-19 pandemic.
Method
The COVID-19 Staff Wellbeing Survey assessed health and social care staff well-being within an area of the UK at four time points, separated by 3-month intervals, spanning November 2020 to August 2021.
Results
Growth mixture models were performed on the depression, anxiety and post-traumatic stress disorder longitudinal data. Two-class solutions provided the best fit for all models. The vast majority of the workforce were best represented by the low-symptom class trajectory, whereby symptoms were consistently below the clinical cut-off for moderate-to-severe symptoms. A sizeable minority (13–16%) were categorised as being in the high-symptom class, a group whose symptom levels remained in the moderate-to-severe range throughout the peaks and troughs of the pandemic. In the depression, anxiety and post-traumatic stress disorder models, the high-symptom class perceived communication from their organisation to be less effective than the low-symptom class did.
Conclusions
This research identified a group of health service staff who reported persistently high mental health symptoms during the pandemic. This group of staff may have particular needs for the provision of well-being support services.
Children with fragile X syndrome (FXS) often avoid eye contact, a behavior that is potentially related to hyperarousal. Prior studies, however, have focused on between-person associations rather than coupling of within-person changes in gaze behaviors and arousal. In addition, there is debate about whether prompts to maintain eye contact are beneficial for individuals with FXS. In a study of young females (ages 6–16), we used eye tracking to assess gaze behavior and pupil dilation during social interactions in a group with FXS (n = 32) and a developmentally similar comparison group (n = 23). Participants engaged in semi-structured conversations with a female examiner during blocks with and without verbal prompts to maintain eye contact. We identified a social–behavioral and psychophysiological profile that is specific to females with FXS; this group exhibited lower mean levels of eye contact, significantly increased mean pupil dilation during conversations that included prompts to maintain eye contact, and showed stronger positive coupling between eye contact and pupil dilation. Our findings strengthen support for the perspective that gaze aversion in FXS reflects negative reinforcement of social avoidance behavior. We also found that behavioral skills training may improve eye contact, but maintaining eye contact appears to be physiologically taxing for females with FXS.
Pain following surgery for cardiac disease is ubiquitous, and optimal management is important. Despite this, there is large practice variation. To address this, the Paediatric Acute Care Cardiology Collaborative undertook the effort to create this clinical practice guideline.
Methods:
A panel of experts consisting of paediatric cardiologists, advanced practice practitioners, pharmacists, a paediatric cardiothoracic surgeon, and a paediatric cardiac anaesthesiologist was convened. The literature was searched for relevant articles, and Collaborative sites submitted centre-specific protocols for postoperative pain management. Using the modified Delphi technique, recommendations were generated and put through iterative Delphi rounds to achieve consensus.
Results:
In total, 60 recommendations achieved consensus and are included in this guideline. They address guideline use, pain assessment, general considerations, preoperative considerations, intraoperative considerations, regional anaesthesia, opioids, opioid-sparing strategies, non-opioid medications, non-pharmaceutical pain management, and discharge considerations.
Conclusions:
Postoperative pain among children following cardiac surgery is currently an area of significant practice variability despite a large body of literature and the presence of centre-specific protocols. Central to the recommendations included in this guideline is the concept that ideal pain management begins with preoperative counselling and continues through to patient discharge. Overall, the quality of evidence supporting recommendations is low. There is ongoing need for research in this area, particularly in paediatric populations.
In daycare centres, the close contact of children with other children and employees favours the transmission of infections. The majority of children <6 years attend daycare programmes in Germany, but the role of daycare centres in the SARS-CoV-2 pandemic is unclear. We investigated the transmission risk in daycare centres and the spread of SARS-CoV-2 to associated households. Thirty daycare groups with at least one recent laboratory-confirmed SARS-CoV-2 case were enrolled in the study (10/2020–06/2021). Close contact persons within daycare and households were examined over a 12-day period (repeated SARS-CoV-2 PCR tests, genetic sequencing of viruses, symptom diary). Households were interviewed to gain comprehensive information on each outbreak. We determined primary cases for all daycare groups. The number of secondary cases varied considerably between daycare groups. The pooled secondary attack rate (SAR) across all 30 daycare centres was 9.6%. The SAR tended to be higher when the Alpha variant was detected (15.9% vs. 5.1% with evidence of wild type). The household SAR was 53.3%. Exposed daycare children were less likely to become infected with SARS-CoV-2 than employees (7.7% vs. 15.5%). Containment measures in daycare programmes are critical to reduce SARS-CoV-2 transmission, especially to avoid spread to associated households.
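The secondary attack rates reported above are simple proportions: secondary cases divided by exposed close contacts. A minimal sketch with illustrative counts (hypothetical, not the study's raw data):

```python
def secondary_attack_rate(secondary_cases, exposed_contacts):
    """Secondary attack rate: share of exposed close contacts
    who become laboratory-confirmed cases."""
    return secondary_cases / exposed_contacts

# Illustrative counts: pooling contacts across groups, 24 secondary
# cases among 250 exposed daycare contacts.
sar = secondary_attack_rate(24, 250)  # 0.096, i.e., 9.6%
```

Comparing SARs across settings (daycare vs. household) or variants is then a comparison of two such proportions.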
Anorexia nervosa (AN) is a psychiatric disorder with complex etiology, with a significant portion of disease risk imparted by genetics. Traditional genome-wide association studies (GWAS) produce principal evidence for the association of genetic variants with disease. Transcriptomic imputation (TI) allows for the translation of those variants into regulatory mechanisms, which can then be used to assess the functional outcome of genetically regulated gene expression (GReX) in a broader setting through the use of phenome-wide association studies (pheWASs) in large and diverse clinical biobank populations with electronic health record phenotypes.
Methods
Here, we applied TI using S-PrediXcan to translate the most recent PGC-ED AN GWAS findings into AN-GReX. For significant genes, we imputed AN-GReX in the Mount Sinai BioMe™ Biobank and performed pheWASs on over 2000 outcomes to test the clinical consequences of aberrant expression of these genes. We performed a secondary analysis to assess the impact of body mass index (BMI) and sex on AN-GReX clinical associations.
Results
Our S-PrediXcan analysis identified 53 genes associated with AN, including what is, to our knowledge, the first genetic association of AN with the major histocompatibility complex. AN-GReX was associated with autoimmune, metabolic, and gastrointestinal diagnoses in our biobank cohort, as well as measures of cholesterol, medications, substance use, and pain. Additionally, our analyses showed moderation of AN-GReX associations with measures of cholesterol and substance use by BMI, and moderation of AN-GReX associations with celiac disease by sex.
Conclusions
Our BMI-stratified results suggest potential functional mechanisms for AN-associated genes that warrant further investigation.
Throughout the coronavirus disease 2019 (COVID-19) pandemic, health and social care workers have faced unprecedented professional demands, all of which are likely to have placed considerable strain on their psychological well-being.
Aims
To measure the national prevalence of mental health symptoms within healthcare staff, and identify individual and organisational predictors of well-being.
Method
The COVID-19 Staff Wellbeing Survey is a longitudinal online survey of psychological well-being among health and social care staff in Northern Ireland. The survey included four time points separated by 3-month intervals; time 1 (November 2020; n = 3834) and time 2 (February 2021; n = 2898) results are presented here. At time 2, 84% of respondents had received at least one dose of a COVID-19 vaccine. The survey included four validated psychological well-being questionnaires (depression, anxiety, post-traumatic stress and insomnia), as well as demographic and organisational measures.
Results
At time 1 and 2, a high proportion of staff reported moderate-to-severe symptoms of depression (30–36%), anxiety (26–27%), post-traumatic stress (30–32%) and insomnia (27–28%); overall, significance tests and effect size data suggested psychological well-being was generally stable between November 2020 and February 2021 for health and social care staff. Multiple linear regression models indicated that perceptions of less effective communication within their organisation predicted greater levels of anxiety, depression, post-traumatic stress and insomnia.
Conclusions
This study highlights the need to offer psychological support to all health and social care staff, and to communicate with staff regularly, frequently and clearly regarding COVID-19 to help protect staff psychological well-being.
Dietary advice about the potential health risks of unhealthy foods or diets has historically been communicated in terms of nutrients. Recent evidence has shown that the processing of food itself is independently associated with harmful health outcomes, particularly for a new category of foods described as ‘ultra-processed’. Dietary guidelines (DG) are a key policy tool to translate and communicate nutrition research; however, there is little research exploring whether and how the harms of food processing are communicated and rationalised in dietary advice.
Design:
Nineteen publicly available DG were thematically analysed to explore: (1) the diversity of terms used to refer to processed foods and (2) the underlying explanations and rationales provided to reduce consumption of processed foods.
Setting:
International.
Participants:
Sample of national dietary guidelines.
Results:
Seventeen different descriptive terms were used to refer to processed foods, with many countries using a wide variety of terms within their DG. Six rationales to reduce consumption of processed foods were identified, which were grouped into four overarching domains: harmful outcomes (disease risk, environmental risk); food quality (food quality, nutrient content); diet quality; and food environment.
Conclusion:
The rationales provided to reduce the consumption of processed foods reflect upstream and downstream determinants of health. However, the persistence of nutrient-based rationales indicates that most DG do not apply an upstream understanding of the issues with ultra-processing. Further, the diversity of terms and foods referenced in DG suggests that the concept of ultra-processing is subject to multiple interpretations.
The innovation of rapid influenza polymerase chain reaction (XT-PCR) has allowed quick, highly sensitive test results. Consequently, physicians can differentiate influenza from other respiratory illnesses and rapidly initiate treatment. We examined the effect of implementing XT-PCR on antimicrobial use, admission rates, and length of stay at a tertiary healthcare system.
We conducted a comparative retrospective study to quantify the impact of coronavirus disease 2019 (COVID-19) on patient safety. We found a statistically significant increase in central-line–associated bloodstream infections and blood culture contamination rates during the pandemic. Increased length of stay and mortality were also observed during the COVID-19 pandemic.
Background: The clinical picture of influenza-like illness can mimic bacterial pneumonia, and empiric treatment is often initiated with antibacterial agents. Molecular testing such as polymerase chain reaction (PCR) is often used to diagnose influenza. However, traditional PCR tests have a slow turnaround time and cannot deliver results soon enough to influence clinical decision making. The Detroit Medical Center (DMC) implemented the Xpert Flu test for all patients presenting with influenza-like illness (ILI). We evaluated antibacterial use after implementation of the rapid influenza PCR Xpert Flu. Methods: We conducted a retrospective study comparing all pediatric and adult patients tested using traditional RT-PCR during the 2017–2018 flu season to patients tested using the rapid influenza Xpert Flu during the 2018–2019 flu season in a tertiary-care hospital in Detroit, Michigan. These patients were further divided into 3 groups: not admitted (NA), admitted to an acute-care floor (ACF), or admitted to an intensive care unit (ICU). The groups were then compared with respect to the percentage of antibacterial use after traditional RT-PCR versus rapid influenza Xpert Flu testing during their hospital visit for ILI. The χ2 test was used for statistical analyses. Results: In total, 20,923 patients presented with influenza-like illness during the study period: 26.6% (n = 5,569) had the rapid influenza Xpert Flu and 73.4% (n = 15,354) had traditional RT-PCR. For a comparison of the number of patients in the 3 groups (NA, ACF, and ICU) and the type of influenza PCR performed among these patients, please refer to Table 1. When comparing antibacterial use in the NA group, the proportions of patients who received antibacterial agents in the traditional RT-PCR group versus the rapid influenza Xpert Flu group were 24.4% (n = 695) versus 3.9% (n = 450), respectively (P < .0001).
In the ACF group, the proportions of patients who received antibacterial agents in the traditional RT-PCR group versus the rapid influenza Xpert Flu group were 62.3% (n = 1,406) versus 27.7% (n = 994), respectively (P < .001). In the ICU group, the proportions of patients who received antibacterials in the traditional RT-PCR group versus the rapid influenza Xpert Flu group were 80.3% (n = 382) versus 38.3% (n = 204), respectively (P < .0001). Conclusions: With rising antimicrobial resistance and increasing influenza morbidity and mortality, rapid diagnostics not only can help diagnose influenza faster but also can reduce inappropriate antimicrobial use.
Background: Influenza causes a high burden of disease in the United States, with an estimated 960,000 hospitalizations in the 2017–2018 flu season. Traditional flu diagnostic polymerase chain reaction (PCR) tests have a longer (24 hours or more) turnaround time that may lead to an increase in unnecessary inpatient admissions during peak influenza season. A new point-of-care rapid PCR assay, Xpert Flu, is an FDA-approved PCR test with a significantly shorter turnaround time (2 hours). The present study sought to understand the impact of implementing the new Xpert Flu test on the rate of inpatient admissions. Methods: A retrospective study was conducted to compare rates of inpatient admissions in patients tested with traditional flu PCR during the 2017–2018 flu season and the rapid flu PCR during the 2018–2019 flu season in a tertiary-care center in the greater Detroit area. The center has 1 pediatric hospital (hospital A) and 3 adult hospitals (hospitals B, C, and D). Patients with influenza-like illness who presented to all 4 hospitals during 2 consecutive influenza seasons were analyzed. Results: In total, 20,923 patients were tested with either the rapid flu PCR or the traditional flu PCR. Among these, 14,124 patients (67.2%) were discharged from the emergency department and 6,844 (32.7%) were admitted. There was a significant decrease in inpatient admissions in the rapid flu PCR group compared to the traditional flu PCR group across all hospitals (26.6% vs 49.56%, respectively; P < .001). As expected, a significant proportion of influenza testing was performed in the pediatric hospital (n = 10,513; 50.2%).
A greater reduction (a 30% decrease in the rapid flu PCR group compared to the traditional flu PCR group) was observed in inpatient admissions in the pediatric hospital (Table 1). Conclusions: Rapid molecular influenza testing can significantly decrease inpatient admissions in a busy tertiary-care hospital, which can indirectly improve quality of care through better bed availability and less time spent in a private room under droplet precautions. Finally, this testing method may also lead to lower healthcare costs.
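The group comparisons in the two abstracts above rely on the χ2 test applied to 2×2 tables of counts. A minimal sketch of the Pearson statistic for such a table (without continuity correction), using illustrative counts rather than the studies' raw denominators:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 count table
    [[a, b], [c, d]], no continuity correction (df = 1)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative 2x2: antibacterials given / withheld by PCR type
# (hypothetical counts, not the studies' data).
stat = chi2_2x2(240, 760, 40, 960)
significant = stat > 3.84  # chi-square critical value, alpha = .05, df = 1
```

In practice one would use a library routine (e.g., a contingency-table test in SciPy) that also returns the P value, but the statistic itself is this closed form.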
In this chapter, the authors trace out the “natural history” of an intensely collaborative multisited comparison, which was distinct from many other comparative research projects because research at each site was carried out by a PhD-level anthropologist who was involved in the scientific development of the project rather than only in the implementation of a centrally directed project. The chapter draws on their experiences with this once-in-a-lifetime opportunity, a large, US National Institutes of Health–funded multisite project, to discuss ways in which that comparative research could have been even more powerful, pitfalls that future comparative research should strive to avoid, recommended best practices, and what the authors call “minimum adequate” approaches to comparative ethnography.
Species distribution models (SDMs) are statistical tools used to develop continuous predictions of species occurrence. ‘Integrated SDMs’ (ISDMs) are an elaboration of this approach with potential advantages that allow for the dual use of opportunistically collected presence-only data and site-occupancy data from planned surveys. These models also account for survey bias and imperfect detection through the use of a hierarchical modelling framework that separately estimates the species–environment response and detection process. This is particularly helpful for conservation applications and predictions for rare species, where data are often limited and prediction errors may have significant management consequences. Despite this potential importance, ISDMs remain largely untested under a variety of scenarios. We performed an exploration of key modelling decisions and assumptions on an ISDM using the endangered Baird’s tapir (Tapirus bairdii) as a test species. We found that site area had the strongest effect on the magnitude of population estimates and underlying intensity surface and was driven by estimates of model intercepts. Selecting a site area that accounted for the individual movements of the species within an average home range led to population estimates that coincided with expert estimates. ISDMs that do not account for the individual movements of species will likely lead to less accurate estimates of species intensity (number of individuals per unit area) and thus overall population estimates. This bias could be severe and highly detrimental to conservation actions if uninformed ISDMs are used to estimate global populations of threatened and data-deficient species, particularly those that lack natural history and movement information. 
However, the ISDM was consistently the most accurate model compared to other approaches, which demonstrates the importance of this new modelling framework and the ability to combine opportunistic data with systematic survey data. Thus, we recommend researchers use ISDMs with conservative movement information when estimating population sizes of rare and data-deficient species. ISDMs could be improved by using a similar parameterization to spatial capture–recapture models that explicitly incorporate animal movement as a model parameter, which would further remove the need for spatial subsampling prior to implementation.