Complications following the Fontan procedure include prolonged pleural drainage and readmission for effusions. To address these complications, a post-Fontan management pathway was implemented with primary goals of reducing chest tube duration/reinsertion rates and decreasing hospital length of stay and readmissions.
Methods:
Fontan patients were identified by retrospective chart review (2017–2019) to obtain baseline data for chest tube duration/reinsertion rates, hospital length of stay, and readmission rates for effusion. A post-Fontan management pathway was implemented (2020–2021) utilising post-operative vasopressin, nasal cannula oxygen until chest tube removal, and discharge regimen of three times daily diuretics, sildenafil, and afterload reducing medications. Patients were followed to evaluate primary outcomes.
Results:
The pre- and post-pathway groups were similar in single ventricle morphology, demographics, and pre-operative haemodynamics. Forty-three and 36 patients were included in the pre- and post-pathway cohorts, respectively. There were statistically significant reductions in chest tube duration (8 vs. 5 days, p ≤ 0.001), chest tube output on post-operative day 4 (20.4 vs. 9.9 mL/kg/day, p = 0.003), and hospital readmission rates for effusion (13 [30%] vs. 3 [8%], p = 0.02) compared to baseline. There was an absolute reduction in hospital length of stay (11 vs. 9.5 days, p = 0.052). When combining average cost savings for the Fontan hospitalisations, readmissions for effusion, and cardiac catheterisations within 6 months of Fontan completion, there was a $325,144 total cost savings for 36 patients following pathway implementation.
Conclusion:
Implementation of a post-Fontan management pathway resulted in significant reductions in chest tube duration and output, and readmission rates for effusion in the perioperative period.
Understanding the factors contributing to optimal cognitive function throughout the aging process is essential to better understand successful cognitive aging. Processing speed is an age-sensitive cognitive domain that usually declines early in the aging process; however, this cognitive skill is essential for other cognitive tasks and everyday functioning. Evaluating brain network interactions in cognitively healthy older adults can help us understand how variations in brain characteristics affect cognitive functioning. Functional connections among groups of brain areas give insight into the brain’s organization, and the cognitive effects of aging may relate to this large-scale organization. To follow up on our prior work, we sought to replicate our findings regarding network segregation’s relationship with processing speed. To address possible influences of node location or network membership, we replicated the analysis across 4 different node sets.
Participants and Methods:
Data were acquired as part of a multi-center study of cognitively normal individuals aged 85+, the McKnight Brain Aging Registry (MBAR). For this analysis, we included 146 community-dwelling, cognitively unimpaired older adults, ages 85–99, who had undergone structural and BOLD resting-state MRI scans and a battery of neuropsychological tests. Exploratory factor analysis identified the processing speed factor of interest. We preprocessed BOLD scans using the fmriprep, Ciftify, and XCPEngine pipelines. We used 4 different connectivity-based parcellations: 1) MBAR data used to define nodes and the Power (2011) atlas used to determine node network membership; 2) younger adult data (Chan 2014) used to define nodes and the Power (2011) atlas used to determine node network membership; 3) older adult data from a different study (Han 2018) used to define nodes and the Power (2011) atlas used to determine node network membership; and 4) MBAR data used to define nodes and MBAR-based community detection used to determine node network membership.
Segregation (the balance of within-network and between-network connections) was measured within the association system and three well-characterized networks: the Default Mode Network (DMN), Cingulo-Opercular Network (CON), and Fronto-Parietal Network (FPN). Correlations between processing speed and the association system and each network were computed for all 4 node sets.
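Segregation as described above, the balance of within- versus between-network connections, is commonly computed as (mean within-network connectivity minus mean between-network connectivity) divided by mean within-network connectivity. A minimal sketch, assuming a symmetric node-by-node correlation matrix and a vector of network labels; the function name and toy data are illustrative, not from the study:

```python
import numpy as np

def network_segregation(conn, labels):
    """Segregation = (mean within-network r - mean between-network r)
    / mean within-network r, over unique off-diagonal node pairs."""
    conn = np.asarray(conn, dtype=float)
    labels = np.asarray(labels)
    iu = np.triu_indices(conn.shape[0], k=1)   # unique node pairs, no diagonal
    same = labels[iu[0]] == labels[iu[1]]      # mask of within-network pairs
    w = conn[iu][same].mean()                  # mean within-network connectivity
    b = conn[iu][~same].mean()                 # mean between-network connectivity
    return (w - b) / w

# toy example: two 3-node networks with stronger within-network connections
labels = np.array([0, 0, 0, 1, 1, 1])
conn = np.full((6, 6), 0.1)                    # weak between-network edges
for i in range(6):
    for j in range(6):
        if labels[i] == labels[j]:
            conn[i, j] = 0.6                   # strong within-network edges
np.fill_diagonal(conn, 1.0)
print(network_segregation(conn, labels))       # positive: segregated networks
```

A value near 1 indicates highly segregated networks; values near 0 indicate little distinction between within- and between-network connectivity.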
Results:
We replicated prior work: segregation of the cortical association system and of the FPN and DMN each had a consistent relationship with processing speed across all node sets (association system range of correlations: r = .294 to .342; FPN: r = .254 to .272; DMN: r = .263 to .273). Additionally, compared with parcellations created from older adults, the parcellation based on younger individuals showed attenuated and less robust findings (association system r = .263, FPN r = .255, DMN r = .263).
Conclusions:
This study shows that network segregation in the oldest-old brain is closely linked with processing speed and that this relationship is replicable across node sets created from varied datasets. This work adds to the growing body of knowledge about age-related dedifferentiation by demonstrating the replicability and consistency of the finding that an essential cognitive skill, processing speed, is associated with differentiated functional networks even in very old individuals experiencing successful cognitive aging.
To evaluate the construct validity of the NIH Toolbox Cognitive Battery (NIH TB-CB) in the healthy oldest-old (85+ years old).
Method:
Our sample from the McKnight Brain Aging Registry consists of 179 individuals, 85 to 99 years of age, screened for memory, neurological, and psychiatric disorders. Using methods from previous research on a sample of adults aged 85 and older, we conducted confirmatory factor analyses on models of the NIH TB-CB and same-domain standard neuropsychological measures. We hypothesized the five-factor model (Reading, Vocabulary, Memory, Working Memory, and Executive/Speed) would have the best fit, consistent with younger populations. We assessed convergent and discriminant validity. We also evaluated demographic and computer-use predictors of NIH TB-CB composite scores.
Results:
Findings suggest the six-factor model (Vocabulary, Reading, Memory, Working Memory, Executive, and Speed) had a better fit than alternative models. NIH TB-CB tests had good convergent and discriminant validity, though tests in the executive functioning domain had high inter-correlations with other cognitive domains. Computer use was strongly associated with higher NIH TB-CB overall and fluid cognition composite scores.
Conclusion:
The NIH TB-CB is a valid assessment for the oldest-old samples, with relatively weak validity in the domain of executive functioning. Computer use’s impact on composite scores could be due to the executive demands of learning to use a tablet. Strong relationships of executive function with other cognitive domains could be due to cognitive dedifferentiation. Overall, the NIH TB-CB could be useful for testing cognition in the oldest-old and the impact of aging on cognition in older populations.
Several Miscanthus species are cultivated in the U.S. Midwest and Northeast, and feral populations can displace the native plant community and potentially negatively affect ecosystem processes. The monetary cost of eradicating feral Miscanthus populations is unknown, but quantifying eradication costs will inform decisions on whether eradication is a feasible goal and should be considered when totaling the economic damage of invasive species. We managed experimental populations of eulaliagrass (Miscanthus sinensis Andersson) and the giant Miscanthus hybrid (Miscanthus × giganteus J.M. Greef & Deuter ex Hodkinson & Renvoize) in three floodplain forest and three old field sites in central Illinois with the goal of eradication. We recorded the time invested in eradication efforts and tracked survival of Miscanthus plants over a 5-yr period, then estimated the costs associated with eradicating these Miscanthus populations. Finally, we used these estimates to predict the total monetary costs of eradicating existing M. sinensis populations reported on EDDMapS. Miscanthus populations in the old field sites were harder to eradicate, resulting in an average of 290% greater estimated eradication costs compared with the floodplain forest sites. However, the cost and time needed to eradicate Miscanthus populations were similar between Miscanthus species. On-site eradication costs ranged from $390 to $3,316 per site (or $1.3 to $11 m−2) in the old field sites, compared with only $85 to $547 (or $0.92 to $1.82 m−2) to eradicate populations within the floodplain forests, with labor comprising the largest share of these costs. Using our M. sinensis eradication cost estimates in Illinois, we predict that the potential costs to eradicate populations reported on EDDMapS would range from $10 to $37 million, with a median predicted cost of $22 million. 
The monetary costs of eradicating feral Miscanthus populations should be weighed against the benefits of cultivating these species to provide a comprehensive picture of the relative costs and benefits of adding these species to our landscapes.
Understanding the distribution and geometry of faults and fractures is critical for predicting both subsurface permeability architecture and the integrity of geological fluid barriers, particularly in rocks with low primary porosity and permeability. While fracture patterns in relatively competent, weathering-resistant (therefore often well-exposed) rocks are generally well studied in outcrop, the role of mechanically weak layers in defining fracture patterns is frequently overlooked or under-represented. Here we show that rock composition, specifically clay and silicate minerals versus carbonate content, exerts a strong control on fault and fracture propagation and bed-containment within a mechanically layered, Cretaceous carbonate sequence at Canyon Lake Gorge, Texas. We find that relatively incompetent, clay-rich layers limit fault and fracture propagation, and cause bed-containment of fractures in more competent beds. In our results, no clear relationships exist between mechanical layer thickness and fracture abundance. These results are important for understanding the relative importance of composition versus bed thickness on fracture abundance in the subsurface, and for predicting fracture-controlled fluid flow pathways, seals and fracture connectivity across beds with variable compositions, thicknesses and competences.
Effective nutrition policies require timely, accurate individual dietary consumption data; collection of such information has been hampered by the cost and complexity of dietary surveys and the lag in producing results. The objective of this work was to assess the accuracy and cost-effectiveness of a streamlined, tablet-based dietary data collection platform for 24-hour individual dietary recalls (24HR) administered using the INDDEX24 platform v. a pen-and-paper interview (PAPI) questionnaire, with a weighed food record (WFR) as a benchmark. This cross-sectional comparative study included women 18–49 years old from rural Burkina Faso (n 116 INDDEX24; n 115 PAPI). A WFR was conducted; the following day, a 24HR was administered by different interviewers. Food consumption data were converted into nutrient intakes. Validity of 24HR estimates of nutrient and food group consumption was based on comparison with the WFR using equivalence tests (group level) and percentages of participants within ranges of percentage error (individual level). Both modalities performed comparably in estimating consumption of macro- and micronutrients, food groups and quantities (the modalities’ divergence from the WFR was not significantly different). Accuracy of both modalities was acceptable (equivalence to WFR significant at P < 0·05) at the group level for macronutrients, less so for micronutrients and individual-level consumption (percentage within ±20 % of WFR: 17–45 % for macronutrients, 5–17 % for micronutrients). INDDEX24 was more cost-effective than PAPI based on the superior accuracy of a composite nutrient intake measure (but not gram amount or item count) due to lower time and personnel costs. INDDEX24 for 24HR dietary surveys linked to dietary reference data shows comparable accuracy to PAPI at lower cost.
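The group-level equivalence testing described above can be illustrated with a two one-sided tests (TOST) procedure on paired differences between instruments. A minimal sketch with made-up numbers and a large-sample normal approximation; the study's actual margins and test details may differ:

```python
import math
from statistics import mean, stdev, NormalDist

def tost_paired(diffs, margin):
    """Two one-sided tests (TOST): is the mean paired difference within
    +/- margin? Returns the larger one-sided p-value (normal approximation);
    a value below alpha supports equivalence at that margin."""
    n = len(diffs)
    m = mean(diffs)
    se = stdev(diffs) / math.sqrt(n)
    nd = NormalDist()
    p_lower = 1 - nd.cdf((m + margin) / se)  # H0: mean difference <= -margin
    p_upper = nd.cdf((m - margin) / se)      # H0: mean difference >= +margin
    return max(p_lower, p_upper)

# per-participant differences (recall minus weighed record) for one nutrient,
# with an equivalence margin of 1 unit -- toy numbers, not study data
diffs = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, 0.1, -0.3, 0.2, 0.0] * 3
p = tost_paired(diffs, margin=1.0)
print(p < 0.05)  # small differences within the margin: equivalence supported
```

Unlike an ordinary t-test, a non-significant TOST result does not establish equivalence; only rejecting both one-sided nulls does.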
The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to ∼5 yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of ∼162 h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of 0.24 mJy beam−1 and angular resolution of 12–20 arcseconds. There are 113 fields, each of which was observed for a 12 min integration time, with between 5 and 13 repeats and cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified.
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
Aims
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Method
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Results
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
Conclusions
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
A novel paediatric disease, multi-system inflammatory syndrome in children, emerged during the coronavirus disease 2019 (COVID-19) pandemic.
Objectives:
To describe the short-term evolution of cardiac complications and associated risk factors in patients with multi-system inflammatory syndrome in children.
Methods:
Retrospective single-centre study of confirmed multi-system inflammatory syndrome in children treated from 29 March, 2020 to 1 September, 2020. Cardiac complications during the acute phase were defined as decreased systolic function, coronary artery abnormalities, pericardial effusion, or mitral and/or tricuspid valve regurgitation. Patients with or without cardiac complications were compared using chi-square, Fisher’s exact, and Wilcoxon rank-sum tests.
Results:
Thirty-nine children with median (interquartile range) age 7.8 (3.6–12.7) years were included. Nineteen (49%) patients developed cardiac complications including systolic dysfunction (33%), valvular regurgitation (31%), coronary artery abnormalities (18%), and pericardial effusion (5%). At the time of the most recent follow-up, at a median (interquartile range) of 49 (26–61) days, cardiac complications resolved in 16/19 (84%) patients. Two patients had persistent mild systolic dysfunction and one patient had persistent coronary artery abnormality. Children with cardiac complications were more likely to have higher N-terminal B-type natriuretic peptide (p = 0.01), higher white blood cell count (p = 0.01), higher neutrophil count (p = 0.02), severe lymphopenia (p = 0.05), use of milrinone (p = 0.03), and intensive care requirement (p = 0.04).
Conclusion:
Patients with multi-system inflammatory syndrome in children had a high rate of cardiac complications in the acute phase, with associated inflammatory markers. Although cardiac complications resolved in 84% of patients, further long-term studies are needed to assess if the cardiac abnormalities (transient or persistent) are associated with major cardiac events.
An open-label extension study (NCT02873208) evaluated the long-term tolerability, safety, and efficacy of combination olanzapine/samidorphan (OLZ/SAM) treatment in patients with schizophrenia. This qualitative substudy explored perceptions of benefit, burden, and satisfaction with previous medications and OLZ/SAM.
Methods
Semi-structured interviews (60 minutes; audio-recorded) were conducted. Interviewer sensitivity training, senior interviewer oversight, and a list of common medications to aid recall supported data collection. Interview transcripts were content coded and analyzed (NVivo v11.0).
Results
All 41 patients reported a lifetime burden with schizophrenia adversely impacting employment, relationships, emotional health, social activities, and daily tasks. Hospitalization for schizophrenia management was another reported aspect of disease burden. Although most (n=32) patients reported previous medication benefits, side effects affecting physical, emotional/behavioral, and cognitive functioning were reported by all (n=41). Following OLZ/SAM treatment, 39/41 patients (95%) reported improvements in symptoms including hallucinations, paranoia, depression, sleep, and concentration. Furthermore, patients described improvements in self-esteem, social activities, relationships, and daily activities. Twenty-three patients (56%) reported side effects attributed to OLZ/SAM; lack of energy (n=12 [29%]) and dry mouth (n=5 [12%]) were most common. Twenty-four (59%) patients were “very satisfied” with OLZ/SAM; most (n=35 [85%]) preferred to continue OLZ/SAM vs switching to another medication. As most substudy patients (n=40; 98%) completed the extension study, satisfied patients may be overrepresented in this analysis.
Conclusion
This qualitative interview approach provided valuable insight into patients’ experiences with previous medications and OLZ/SAM. Overall, most patients reported treatment satisfaction and improvements in symptoms, function, and health-related quality of life with OLZ/SAM.
This SHEA white paper identifies knowledge gaps and challenges in healthcare epidemiology research related to coronavirus disease 2019 (COVID-19) with a focus on core principles of healthcare epidemiology. These gaps, revealed during the worst phases of the COVID-19 pandemic, are described in 10 sections: epidemiology, outbreak investigation, surveillance, isolation precaution practices, personal protective equipment (PPE), environmental contamination and disinfection, drug and supply shortages, antimicrobial stewardship, healthcare personnel (HCP) occupational safety, and return to work policies. Each section highlights three critical healthcare epidemiology research questions, with detailed descriptions provided in supplementary materials. This research agenda calls for translational studies, from laboratory-based basic science research to well-designed, large-scale studies and health outcomes research. Research gaps and challenges related to nursing homes and social disparities are included. Collaboration across disciplines, areas of expertise, and diverse geographic locations will be critical.
This study examined longitudinal associations between performance on the Rey–Osterrieth Complex Figure–Developmental Scoring System (ROCF-DSS) at 8 years of age and academic outcomes at 16 years of age in 133 children with dextro-transposition of the great arteries (d-TGA).
Method:
The ROCF-DSS was administered at the age of 8 and the Wechsler Individual Achievement Test, First and Second Edition (WIAT/WIAT-II) at the ages of 8 and 16, respectively. ROCF-DSS protocols were classified by Organization (Organized/Disorganized) and Style (Part-oriented/Holistic). Two-way univariate (ROCF-DSS Organization × Style) ANCOVAs were computed with 16-year academic outcomes as the dependent variables and socioeconomic status (SES) as the covariate.
Results:
The Organization × Style interaction was not statistically significant. However, ROCF-DSS Organization at 8 years was significantly associated with Reading, Math, Associative, and Assembled academic skills at 16 years, with better organization predicting better academic performance.
Conclusions:
Performance on the ROCF-DSS, a complex visual-spatial problem-solving task, in children with d-TGA can forecast academic performance in both reading and mathematics nearly a decade later. These findings may have implications for identifying risk in children with other medical and neurodevelopmental disorders affecting brain development.
Goosegrass control options in bermudagrass are limited. Topramezone is one option that offers excellent control of mature goosegrass, but application to bermudagrass results in unacceptable symptoms of bleaching and necrosis typical of hydroxyphenylpyruvate dioxygenase inhibitors. Previous research has shown that adding chelated iron reduced the phytotoxicity of topramezone without reducing the efficacy of the herbicide, resulting in safening when applied to bermudagrass. Our objective was to examine additional iron sources to determine whether similar safening effects occur with other sources. Field trials were conducted in the summers of 2016 to 2018 (Auburn University). Mixtures of topramezone and methylated seed oil were combined with six different commercial iron sources, including sodium ferric ethylenediamine di-o-hydroxyphenyl-acetate (FeEDDHA), ferrous diethylenetriamine pentaacetic acid (FeDTPA), iron citrate, FeSO4, and a combination of iron oxide/sucrate/sulfate, some of which contained nitrogen. Bermudagrass necrosis and bleaching symptoms were visually rated on a 0% to 100% scale. Reflectance (normalized difference vegetation index) and clipping yield measurements were also collected. Application of FeDTPA and FeSO4 reduced symptoms of bleaching and necrosis when applied with topramezone. Other treatments that contained nitrogen did not reduce injury but did reduce bermudagrass recovery time following the appearance of necrosis. Inclusion of small amounts of nitrogen often negated the safening effects of FeSO4. The iron oxide/sucrate/sulfate product had no effect on bleaching or necrosis. Data suggest that the iron source had a differential effect on bleaching and necrosis reduction when applied in combination with topramezone to bermudagrass. Overall, FeSO4 and FeDTPA safened topramezone the most on bermudagrass.
POST goosegrass and other grassy weed control in bermudagrass is problematic. Fewer herbicides that can control goosegrass are available due to regulatory pressure and herbicide resistance. Alternative herbicide options that offer effective control are needed. Previous research demonstrates that topramezone controls goosegrass, crabgrass, and other weed species; however, injury to bermudagrass may be unacceptable. The objective of this research was to evaluate the safening potential of topramezone combinations with different additives on bermudagrass. Field trials were conducted at Auburn University during summer and fall from 2015 to 2018 and 2017 to 2018, respectively. Treatments included topramezone mixtures and methylated seed oil applied in combination with five different additives: triclopyr, green turf pigment, green turf paint, ammonium sulfate, and chelated iron. Bermudagrass bleaching and necrosis symptoms were visually rated. Normalized-difference vegetative index measurements and clipping yield data were also collected. Topramezone plus chelated iron, as well as topramezone plus triclopyr, reduced bleaching potential the best; however, the combination of topramezone plus triclopyr resulted in necrosis that outweighed reductions in bleaching. Masking agents such as green turf paint and green turf pigment were ineffective in reducing injury when applied with topramezone. The combination of topramezone plus ammonium sulfate should be avoided because of the high level of necrosis. Topramezone-associated bleaching symptoms were transient and lasted 7 to 14 d on average. Findings from this research suggest that chelated iron added to topramezone and methylated seed oil mixtures acted as a safener on bermudagrass.
Radiocarbon (14C) ages cannot provide absolutely dated chronologies for archaeological or paleoenvironmental studies directly but must be converted to calendar age equivalents using a calibration curve compensating for fluctuations in atmospheric 14C concentration. Although calibration curves are constructed from independently dated archives, they invariably require revision as new data become available and our understanding of the Earth system improves. In this volume the international 14C calibration curves for both the Northern and Southern Hemispheres, as well as for the ocean surface layer, have been updated to include a wealth of new data and extended to 55,000 cal BP. Based on tree rings, IntCal20 now extends as a fully atmospheric record to ca. 13,900 cal BP. For the older part of the timescale, IntCal20 comprises statistically integrated evidence from floating tree-ring chronologies, lacustrine and marine sediments, speleothems, and corals. We utilized improved evaluation of the timescales and location-variable 14C offsets from the atmosphere (reservoir age, dead carbon fraction) for each dataset. New statistical methods have refined the structure of the calibration curves while maintaining a robust treatment of uncertainties in the 14C ages, the calendar ages and other corrections. The inclusion of modeled marine reservoir ages derived from a three-dimensional ocean circulation model has allowed us to apply more appropriate reservoir corrections to the marine 14C data rather than the previous use of constant regional offsets from the atmosphere. Here we provide an overview of the new and revised datasets and the associated methods used for the construction of the IntCal20 curve and explore potential regional offsets for tree-ring data.
We discuss the main differences with respect to the previous calibration curve, IntCal13, and some of the implications for archaeology and geosciences ranging from the recent past to the time of the extinction of the Neanderthals.
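At its simplest, the conversion described above maps a measured 14C age to a calendar-age equivalent by looking it up against the calibration curve. The sketch below uses a made-up, monotonic toy curve for illustration only; real calibration against IntCal20 propagates measurement uncertainty and handles the curve's non-monotonic wiggles by returning probability distributions over calendar age rather than point estimates:

```python
import bisect

# Toy calibration lookup: (calendar age BP, 14C age BP) pairs, ordered by
# calendar age. These values are illustrative, not real IntCal20 data.
curve = [(1000, 1050), (1100, 1120), (1200, 1260), (1300, 1340)]

def calibrate(c14_age):
    """Linear interpolation from a 14C age back to a calendar age,
    assuming the toy curve is strictly monotonic over this interval."""
    cal, c14 = zip(*curve)
    i = bisect.bisect_left(c14, c14_age)
    if i == 0 or i == len(c14):
        raise ValueError("14C age outside toy curve range")
    frac = (c14_age - c14[i - 1]) / (c14[i] - c14[i - 1])
    return cal[i - 1] + frac * (cal[i] - cal[i - 1])

print(calibrate(1190))  # midway between 1120 and 1260 on the 14C axis
```

Where the real curve reverses slope (a wiggle), one 14C age corresponds to several calendar-age intervals, which is why practical tools report calibrated ranges with probabilities.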
To determine the Final ICU Need in the 24 hours prior to ICU discharge for children with cardiac disease by utilising a single-centre survey.
Methods:
A cross-sectional survey was utilised to determine Final ICU Need, which was categorised as “Cardiovascular”, “Respiratory”, “Feeding”, “Sedation”, “Systems Issue”, or “Other” for each encounter. Survey responses were obtained from attending physicians who discharged children (≤18 years of age with ICU length of stay >24 hours) from the Cardiac ICU between April 2016 and July 2018.
Measurements and results:
Survey response rate was 99% (n = 1073), with 667 encounters eligible for analysis. “Cardiovascular” (61%) and “Respiratory” (26%) were the most frequently chosen Final ICU Needs. From a multivariable mixed effects logistic regression model fitted to “Cardiovascular” and “Respiratory”, operations with significantly reduced odds of having “Cardiovascular” Final ICU Need included Glenn palliation (p = 0.003), total anomalous pulmonary venous connection repair (p = 0.024), truncus arteriosus repair (p = 0.044), and vascular ring repair (p < 0.001). Short lengths of stay (<7.9 days) had significantly higher odds of “Cardiovascular” Final ICU Need (p < 0.001). “Cardiovascular” and “Respiratory” Final ICU Needs were also associated with provider and ICU discharge season.
Conclusions:
Final ICU Need is a novel metric to identify variations in Cardiac ICU utilisation and clinical trajectories. Final ICU Need was significantly influenced by benchmark operation, length of stay, provider, and season. Future applications of Final ICU Need include targeting quality and research initiatives, calibrating provider and family expectations, and identifying provider-level variability in care processes and mental models.
The Pueblo population of Chaco Canyon during the Bonito Phase (AD 800–1130) employed agricultural strategies and water-management systems to enhance food cultivation in this unpredictable environment. Scepticism concerning the timing and effectiveness of this system, however, remains common. Using optically stimulated luminescence dating of sediments and LiDAR imaging, the authors located Bonito Phase canal features at the far west end of the canyon. Additional ED-XRF and strontium isotope (87Sr/86Sr) analyses confirm the diversion of waters from multiple sources during Chaco’s occupation. The extent of this water-management system raises new questions about social organisation and the role of ritual in facilitating responses to environmental unpredictability.