Patients with posttraumatic stress disorder (PTSD) exhibit smaller volumes in commonly reported brain regions, including the amygdala and hippocampus, which are associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis using whole-brain statistical maps with neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables including PTSD (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and cerebellum, with the largest effect in the left cerebellum (Hedges’ g = 0.22, corrected p = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, corrected p = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed that PTSD severity was negatively associated with GM volumes within the cerebellum (corrected p = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (corrected p = .001).
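The effect sizes reported above are Hedges’ g values. As a minimal sketch (in Python, with hypothetical input numbers, not data from the study), g is Cohen’s d scaled by a small-sample correction factor:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: standardized mean difference (Cohen's d) scaled by
    the small-sample correction factor J = 1 - 3 / (4*df - 1)."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (mean1 - mean2) / pooled_sd
    return (1 - 3 / (4 * df - 1)) * d

# Hypothetical regional GM volumes (mL): controls vs. PTSD patients
g = hedges_g(4.10, 0.40, 2198, 4.02, 0.41, 1309)
```

In a meta-analysis like the one described above, per-cohort g values would then be pooled (e.g., with inverse-variance weights); that pooling step is omitted here.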
Conclusions
PTSD patients exhibited widespread regional differences in brain volumes, where greater regional deficits appeared to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
The law and corpus linguistics movement shares many of the commitments of experimental jurisprudence. Both are concerned with testing intuitions about legal concepts through the lens of empirical evidence gathered through experimentation. Though often discussed in the context of a given case or legal problem, linguistic evidence from legal corpora can help provide content to otherwise indeterminate concepts in the law.
Using language evidence from linguistic corpora, we can begin to have more meaningful conversations about what concepts like ordinary meaning, ambiguity, and speech community might actually mean and make progress on the boundaries of these concepts and their implications for legal interpretation. And, because corpora are constructed from linguistic utterances made in natural linguistic settings, they can provide an important check and means of triangulation for experimental jurisprudence claims that are often premised on survey data.
In response to the COVID-19 pandemic, we implemented a plasma coordination center within two months to support transfusions for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain constraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
Understanding characteristics of healthcare personnel (HCP) with SARS-CoV-2 infection supports the development and prioritization of interventions to protect this important workforce. We report detailed characteristics of HCP who tested positive for SARS-CoV-2 from April 20, 2020 through December 31, 2021.
Methods:
CDC collaborated with Emerging Infections Program sites in 10 states to interview HCP with SARS-CoV-2 infection (case-HCP) about their demographics, underlying medical conditions, healthcare roles, exposures, personal protective equipment (PPE) use, and COVID-19 vaccination status. We grouped case-HCP by healthcare role. To describe residential social vulnerability, we merged geocoded HCP residential addresses with CDC/ATSDR Social Vulnerability Index (SVI) values at the census tract level. We defined highest and lowest SVI quartiles as high and low social vulnerability, respectively.
Results:
Our analysis included 7,531 case-HCP. Most case-HCP with roles as certified nursing assistant (CNA) (444, 61.3%), medical assistant (252, 65.3%), or home healthcare worker (HHW) (225, 59.5%) reported their race and ethnicity as either non-Hispanic Black or Hispanic. More than one third of HHWs (166, 45.2%), CNAs (283, 41.7%), and medical assistants (138, 37.9%) reported a residential address in the high social vulnerability category. The proportion of case-HCP who reported using recommended PPE at all times when caring for patients with COVID-19 was lowest among HHWs compared with other roles.
Conclusions:
To mitigate SARS-CoV-2 infection risk in healthcare settings, infection prevention and control interventions should be specific to HCP roles and educational backgrounds. Additional interventions are needed to address high social vulnerability among HHWs, CNAs, and medical assistants.
Clinical outcomes of repetitive transcranial magnetic stimulation (rTMS) for treatment-resistant depression (TRD) vary widely, and no mood rating scale is standard for assessing rTMS outcome. It remains unclear whether rTMS is as efficacious in older adults with late-life depression (LLD) as in younger adults with major depressive disorder (MDD). This study examined the effect of age on outcomes of rTMS treatment of adults with TRD. Self-report and observer mood ratings were measured weekly in 687 subjects ages 16–100 years undergoing rTMS treatment using the Inventory of Depressive Symptomatology 30-item Self-Report (IDS-SR), Patient Health Questionnaire 9-item (PHQ), Profile of Mood States 30-item, and Hamilton Depression Rating Scale 17-item (HDRS). All rating scales detected significant improvement with treatment; response and remission rates varied by scale but not by age (response/remission ≥ 60: 38%–57%/25%–33%; <60: 32%–49%/18%–25%). Proportional hazards models showed early improvement predicted later improvement across ages, though early improvements in PHQ and HDRS were more predictive of remission in those < 60 years (relative to those ≥ 60), and greater baseline IDS burden was more predictive of non-remission in those ≥ 60 years (relative to those < 60). These results indicate no significant effect of age on treatment outcomes of rTMS for TRD, though rating instruments may differ in assessment of symptom burden between younger and older adults during treatment.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now emphasis to expand beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. Diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
Results:
The mean (SD) age of the sample was 74.34 (7.54), 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses comprising GFAP and the above covariates showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as with higher CDR Sum of Boxes scores (p<0.001).
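The AUC values above summarize how well the predicted probabilities from the logistic models separate the two diagnostic groups. A minimal sketch of that computation (with hypothetical scores, not study data), using the Mann-Whitney formulation of the AUC:

```python
def auc_from_scores(scores, labels):
    """AUC = probability that a randomly chosen case (label 1) receives
    a higher predicted probability than a randomly chosen control
    (label 0); ties count as 0.5 (Mann-Whitney U formulation)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities: 3 impaired, 3 unimpaired
auc = auc_from_scores([0.9, 0.7, 0.4, 0.6, 0.3, 0.2],
                      [1, 1, 1, 0, 0, 0])
```

An AUC of 0.5 indicates chance-level discrimination and 1.0 perfect separation, which is how the 0.74–0.76 figures above should be read.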
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had similar accuracy in detecting those with cognitive impairment compared with p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible approach than current in vivo measures to Alzheimer’s disease (AD) detection, management, and the study of disease mechanisms. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) White participants, and 20 (44.4%) APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75), and discrimination strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005) but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for the study of disease mechanisms.
Anterior temporal lobectomy is a common surgical approach for medication-resistant temporal lobe epilepsy (TLE). Prior studies have shown inconsistent findings regarding the utility of presurgical intracarotid sodium amobarbital testing (IAT; also known as Wada test) and neuroimaging in predicting postoperative seizure control. In the present study, we evaluated the predictive utility of IAT, as well as structural magnetic resonance imaging (MRI) and positron emission tomography (PET), on long-term (3-years) seizure outcome following surgery for TLE.
Participants and Methods:
Patients consisted of 107 adults (mean age=38.6, SD=12.2; mean education=13.3 years, SD=2.0; female=47.7%; White=100%) with TLE (mean epilepsy duration =23.0 years, SD=15.7; left TLE surgery=50.5%). We examined whether demographic, clinical (side of resection, resection type [selective vs. non-selective], hemisphere of language dominance, epilepsy duration), and presurgical studies (normal vs. abnormal MRI, normal vs. abnormal PET, correctly lateralizing vs. incorrectly lateralizing IAT) were associated with absolute (cross-sectional) seizure outcome (i.e., freedom vs. recurrence) with a series of chi-squared and t-tests. Additionally, we determined whether presurgical evaluations predicted time to seizure recurrence (longitudinal outcome) over a three-year period with univariate Cox regression models, and we compared survival curves with Mantel-Cox (log rank) tests.
Results:
Demographic and clinical variables (including type [selective vs. whole lobectomy] and side of resection) were not associated with seizure outcome, and no associations were found among the presurgical variables. Presurgical MRI was not associated with cross-sectional (OR=1.5, p=.557, 95% CI=0.4-5.7) or longitudinal (HR=1.2, p=.641, 95% CI=0.4-3.9) seizure outcome. A normal PET scan (OR=4.8, p=.045, 95% CI=1.0-24.3) and IAT incorrectly lateralizing to the seizure focus (OR=3.9, p=.018, 95% CI=1.2-12.9) were associated with higher odds of seizure recurrence. Furthermore, a normal PET scan (HR=3.6, p=.028, 95% CI=1.0-13.5) and incorrectly lateralized IAT (HR=2.8, p=.012, 95% CI=1.2-7.0) predicted earlier seizure recurrence within three years of TLE surgery. Log rank tests indicated that survival functions differed significantly between patients with normal versus abnormal PET and with incorrectly versus correctly lateralizing IAT, with seizure relapse occurring on average five and seven months earlier, respectively.
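The survival curves compared above with log-rank tests are Kaplan-Meier estimates. A minimal sketch of the estimator (with hypothetical follow-up data, not the study's), where an "event" is seizure recurrence and censored patients simply leave the risk set:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times:  follow-up time per patient (e.g., months to recurrence
            or to last contact)
    events: 1 = seizure recurrence observed, 0 = censored
    Returns (time, survival probability) pairs at each event time.
    """
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        n = sum(1 for ti in times if ti >= t)
        if d > 0:
            surv *= 1 - d / n
            curve.append((t, surv))
    return curve

# Hypothetical cohort: recurrences at 5 and 7 months, two censored
curve = kaplan_meier([5, 7, 12, 36], [1, 1, 0, 0])
```

A log-rank test then compares such curves between groups (e.g., normal vs. abnormal PET) by contrasting observed and expected event counts at each event time; that test is omitted here for brevity.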
Conclusions:
Presurgical normal PET scan and incorrectly lateralizing IAT were associated with increased risk of post-surgical seizure recurrence and shorter time-to-seizure relapse.
Medical surge events require effective coordination between multiple partners. Unfortunately, the information technology (IT) systems currently used for information-sharing by emergency responders and managers in the United States are insufficient to coordinate with health care providers, particularly during large-scale regional incidents. The numerous innovations adopted for the COVID-19 response and continuing advances in IT systems for emergency management and health care information-sharing suggest a more promising future. This article describes: (1) several IT systems and data platforms currently used for information-sharing, operational coordination, patient tracking, and resource-sharing between emergency management and health care providers at the regional level in the US; and (2) barriers and opportunities for using these systems and platforms to improve regional health care information-sharing and coordination during a large-scale medical surge event. The article concludes with a statement about the need for a comprehensive landscape analysis of the component systems in this IT ecosystem.
We present new data from the debris-rich basal ice layers of the NEEM ice core (NW Greenland). Using mineralogical observations, SEM imagery, geochemical data from silicates (meteoric 10Be, εNd, 87Sr/86Sr) and organic material (C/N, δ13C), we characterize the source material, succession of previous glaciations and deglaciations and the paleoecological conditions during ice-free episodes. Meteoric 10Be data and grain features indicate that the ice sheet interacted with paleosols and eroded fresh bedrock, leading to mixing in these debris-rich ice layers. Our analysis also identifies four successive stages in NW Greenland: (1) initial preglacial conditions, (2) glacial advance 1, (3) glacial retreat and interglacial conditions and (4) glacial advance 2 (current ice-sheet development). C/N and δ13C data suggest that deglacial environments favored the development of tundra and taiga ecosystems. These two successive glacial fluctuations observed at NEEM are consistent with those identified from the Camp Century core basal sediments over the last 3 Ma. Further inland, GRIP and GISP2 summit sites have remained glaciated more continuously than the western margin, with less intense ice-substratum interactions than those observed at NEEM.
To describe strategies used to recruit and retain young adults in nutrition, physical activity and/or obesity intervention studies, and quantify the success and efficiency of these strategies.
Design:
A systematic review was conducted. The search included six electronic databases to identify randomised controlled trials (RCT) published up to 6 December 2019 that evaluated nutrition, physical activity and/or obesity interventions in young adults (17–35 years). Recruitment was considered successful if the pre-determined sample size goal was met. Retention was considered acceptable if ≥80 % retained for ≤6-month follow-up or ≥70 % for >6-month follow-up.
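The retention criterion above is a simple threshold rule; a minimal sketch of how it could be applied programmatically (the function name and interface are illustrative, not from the review):

```python
def retention_adequate(retained_fraction, followup_months):
    """Review's retention criterion: >=80% retained when follow-up is
    6 months or less, >=70% when follow-up exceeds 6 months."""
    threshold = 0.80 if followup_months <= 6 else 0.70
    return retained_fraction >= threshold

# A study retaining 75% of participants at 12-month follow-up passes,
# but the same retention at 6 months would not.
```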
Results:
From 21 582 manuscripts identified, 107 RCT were included. Universities were the most common recruitment setting, used in eighty-four studies (79 %). Less than half (46 %) of the studies provided sufficient information to evaluate whether individual recruitment strategies met sample size goals; of these, 77 % successfully achieved recruitment targets. Reporting for retention was slightly better, with 69 % of studies providing sufficient information to determine whether individual retention strategies achieved adequate retention rates. Of these, 65 % had adequate retention.
Conclusions:
This review highlights poor reporting of recruitment and retention information across trials. Findings may not be applicable outside a university setting. Guidance on how to improve reporting practices to optimise recruitment and retention strategies within young adults could assist researchers in improving outcomes.
Friedrich Hayek’s business cycle theory withered throughout the 1930s as he admitted that its underlying model of Böhm-Bawerkian roundaboutness was incomplete and inadequate. In 1934, Hayek started a two-volume book on capital theory, completing only one volume in 1941. Curiously, Hayek ([1941] 2009) cites John Hicks’s (1939) Value and Capital but not the financial measure of roundaboutness that Hicks suggested as a substitute for Böhm-Bawerkian roundaboutness. In 1967, in “The Hayek Story,” Hicks criticized the theory’s inexplicable lags. Hayek maintained his view that consumption was sticky and responded to Hicks with a mound-of-honey analogy. Nevertheless, Hayek held that his business cycle theory was fundamentally correct and continued to hope that others might someday discover a capital structure theory to undergird it. Toward fulfilling Hayek’s hope, we suggest augmenting the canonical stages of production with a sequestered-capital stage in which products are invented, productized, and inventoried prior to launch, uncoordinated by observable prices.
To develop a pediatric research agenda focused on pediatric healthcare-associated infections and antimicrobial stewardship topics that will yield the highest impact on child health.
Participants:
The study included 26 geographically diverse adult and pediatric infectious diseases clinicians with expertise in healthcare-associated infection prevention and/or antimicrobial stewardship (topic identification and ranking of priorities), as well as members of the Division of Healthcare Quality and Promotion at the Centers for Disease Control and Prevention (topic identification).
Methods:
Using a modified Delphi approach, expert recommendations were generated through an iterative process for identifying pediatric research priorities in healthcare-associated infection prevention and antimicrobial stewardship. The multistep, 7-month process included a literature review, interactive teleconferences, web-based surveys, and 2 in-person meetings.
Results:
A final list of 12 high-priority research topics was generated across the 2 domains. High-priority healthcare-associated infection topics included judicious testing for Clostridioides difficile infection, chlorhexidine (CHG) bathing, measuring and preventing hospital-onset bloodstream infection rates, surgical site infection prevention, and surveillance and prevention of multidrug-resistant gram-negative rod infections. Antimicrobial stewardship topics included β-lactam allergy de-labeling, judicious use of perioperative antibiotics, intravenous to oral conversion of antimicrobial therapy, developing a patient-level “harm index” for antibiotic exposure, and benchmarking and/or peer comparison of antibiotic use for common inpatient conditions.
Conclusions:
We identified 6 healthcare-associated infection topics and 6 antimicrobial stewardship topics as potentially high-impact targets for pediatric research.
This is the first report on the association between trauma exposure and depression from the Advancing Understanding of RecOvery afteR traumA (AURORA) multisite longitudinal study of adverse post-traumatic neuropsychiatric sequelae (APNS) among participants seeking emergency department (ED) treatment in the aftermath of a traumatic life experience.
Methods
We focus on participants presenting at EDs after a motor vehicle collision (MVC), which characterizes most AURORA participants, and examine associations of participant socio-demographics and MVC characteristics with 8-week depression as mediated through peritraumatic symptoms and 2-week depression.
Results
Eight-week depression prevalence was relatively high (27.8%) and associated with several MVC characteristics (being a passenger v. driver; injuries to other people). Peritraumatic distress was associated with 2-week but not 8-week depression. Most of these associations held when controlling for peritraumatic symptoms and, to a lesser degree, depressive symptoms at 2 weeks post-trauma.
Conclusions
These observations, coupled with substantial variation in the relative strength of the mediating pathways across predictors, raise the possibility of diverse and potentially complex underlying biological and psychological processes that remain to be elucidated. More in-depth analyses of the rich and evolving AURORA database may identify new targets for intervention and new tools for risk-based stratification following trauma exposure.
We study the dynamics of cash-and-carry arbitrage using the U.S. crude oil market. Sizable arbitrage-related inventory movements occur at the New York Mercantile Exchange (NYMEX) futures contract delivery point but not at other storage locations, where instead, operational factors explain most inventory changes. We add to the theory-of-storage literature by introducing two new features. First, due to arbitrageurs contracting ahead, inventories respond to not only contemporaneous but also lagged futures spreads. Second, storage-capacity limits can impede cash-and-carry arbitrage, leading to the persistence of unexploited arbitrage opportunities. Our findings suggest that arbitrage-induced inventory movements are, on average, price stabilizing.
The emphasis on team science in clinical and translational research increases the importance of collaborative biostatisticians (CBs) in healthcare. Adequate training and development of CBs ensure appropriate conduct of robust and meaningful research and, therefore, should be considered as a high-priority focus for biostatistics groups. Comprehensive training enhances clinical and translational research by facilitating more productive and efficient collaborations. While many graduate programs in Biostatistics and Epidemiology include training in research collaboration, it is often limited in scope and duration. Therefore, additional training is often required once a CB is hired into a full-time position. This article presents a comprehensive CB training strategy that can be adapted to any collaborative biostatistics group. This strategy follows a roadmap of the biostatistics collaboration process, which is also presented. A TIE approach (Teach the necessary skills, monitor the Implementation of these skills, and Evaluate the proficiency of these skills) was developed to support the adoption of key principles. The training strategy also incorporates a “train the trainer” approach to enable CBs who have successfully completed training to train new staff or faculty.
We implemented universal severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) testing of patients undergoing surgical procedures as a means to conserve personal protective equipment (PPE). The rate of asymptomatic coronavirus disease 2019 (COVID-19) was <0.5%, which suggests that early local public health interventions were successful. Although our protocol was resource intensive, it prevented exposures to healthcare team members.
In 2010, South Africa (SA) hosted the Fédération Internationale de Football Association (FIFA) World Cup (soccer). Emergency Medical Services (EMS) used the SA mass gathering medicine (MGM) resource model to predict resource allocation. This study analyzed data from the World Cup and compared them with the resource allocation predicted by the SA mass gathering model.
Methods:
Prospectively, data were collected from patient contacts at 9 venues across the Western Cape province of South Africa. Required resources were based on the number of patients seeking basic life support (BLS), intermediate life support (ILS), and advanced life support (ALS). Overall patient presentation rates (PPRs) and transport to hospital rates (TTHRs) were also calculated.
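PPR and TTHR are conventionally expressed per 10,000 attendees in the mass-gathering literature; a minimal sketch of that calculation (the per-10,000 scaling and the example figures are assumptions for illustration, not values from this study):

```python
def presentation_rates(patients, transports, attendance):
    """Patient presentation rate (PPR) and transport-to-hospital rate
    (TTHR), both expressed per 10,000 attendees."""
    ppr = 10_000 * patients / attendance
    tthr = 10_000 * transports / attendance
    return ppr, tthr

# Hypothetical venue: 200 patient contacts, 20 transports, 100,000 fans
ppr, tthr = presentation_rates(200, 20, 100_000)
```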
Results:
BLS services were required for 78.4% (n = 1279) of patients and were consistently overestimated using the SA mass gathering model. ILS services were required for 14.0% (n = 228) of patients, and ALS services for 3.1% (n = 51). Both ILS and ALS services, as well as TTHR, were underestimated at smaller venues.
Conclusions:
The MGM predictive model overestimated BLS requirements and inconsistently predicted ILS and ALS requirements. MGM resource models, which rely heavily on predicted attendance levels, have inherent limitations that may be mitigated by incorporating research-based outcomes.
Deriving ecological and evolutionary descriptions of, and implications from, faunal assemblage patterns is commonly addressed by observation and a variety of exploratory techniques (scaling and clustering), along with qualitative evaluations of species occurrences and relative abundances. We argue that interpretations of faunal patterns, especially those documented by the fossil record, should be based upon the composition and structure of entire communities to provide strong conclusions and replicable results.
As an example, we use benthic foraminiferal data at high resolution (1–2 cm, corresponding to 300–1400 yr) over a section spanning about 20 kyr across the beginning of the Paleocene–Eocene thermal maximum (PETM). The PETM was an episode of rapid global warming about 55.5 Ma, associated with ocean acidification, lowered open-ocean productivity, and deoxygenation, and marked by severe turnover in benthic foraminiferal assemblages. Here we provide a stand-alone approach applicable to any dynamic faunal system, perturbation detection analysis (PDA), to recognize and identify community disruption evidenced as either positive growth or negative decline, and we use this approach to obtain new information on foraminiferal communities before, during, and after the initiation of the PETM.
We conclude that the late Paleocene benthic foraminiferal community (FCOM1) was in a growth stage of increasing diversity, suggestive of favorable environmental conditions. This stage continued through the initial changes at the onset of the PETM, when disruption through environmental stress led to the community's termination. A second community (FCOM2) formed with declining diversity and high variability, showing a lack of adaptation to the changing conditions. Knowledge of total assemblage status under both adverse and advantageous conditions is necessary but is not provided by methods that rely on analysis of single samples: individual samples cannot be used to recognize disruptive changes in a community's structure, but these are easily identified using PDA.