Abacs approximating the product-moment correlation for both explicit and implicit selection are presented. These abacs give accuracy to within .01 of the corresponding analytic estimate.
Coastal wetlands are hotspots of carbon sequestration, and their conservation and restoration can help to mitigate climate change. However, there remains uncertainty about when and where coastal wetland restoration can most effectively act as natural climate solutions (NCS). Here, we synthesize current understanding to illustrate the requirements for coastal wetland restoration to benefit climate, and discuss potential paths forward that address key uncertainties impeding implementation. To be effective as NCS, coastal wetland restoration projects must accrue climate cooling benefits that would not occur without management action (additionality), be implementable (feasibility) and persist over management-relevant timeframes (permanence). Several issues add uncertainty to understanding whether these minimum requirements are met. First, coastal wetlands serve as both a landscape source and sink of carbon for other habitats, increasing uncertainty in additionality. Second, coastal wetlands can potentially migrate outside of project footprints as they respond to sea-level rise, increasing uncertainty in permanence. To address these first two issues, a system-wide approach may be necessary, rather than basing cooling benefits only on changes that occur within project boundaries. Third, the need for NCS to function over management-relevant decadal timescales means methane responses may need to be included in coastal wetland restoration planning and monitoring. Finally, there is uncertainty about how much data are required to justify restoration action. We summarize the minimum data required to make a binary decision on whether there is a net cooling benefit from a management action, noting that these data are more readily available than the data required to quantify the magnitude of cooling benefits for carbon crediting purposes.
By reducing uncertainty, coastal wetland restoration can be implemented at the scale required to significantly contribute to addressing the current climate crisis.
Researchers increasingly rely on aggregations of radiocarbon dates from archaeological sites as proxies for past human populations. This approach has been critiqued on several grounds, including the assumptions that material is deposited, preserved, and sampled in proportion to past population size. However, various attempts to quantitatively assess the approach suggest there may be some validity in assuming date counts reflect relative population size. To add to this conversation, here we conduct a preliminary analysis coupling estimates of ethnographic population density with late Holocene radiocarbon dates across all counties in California. Results show that counts of late Holocene radiocarbon-dated archaeological sites increase significantly as a function of ethnographic population density. This trend is robust across varying sampling windows over the last 5000 BP, though the majority of variation in dated-site counts remains unexplained by population density. Outliers reveal how departures from the central trend may be influenced by regional differences in research traditions, development-driven contract work, organic preservation, and landscape taphonomy. Overall, this exercise provides some support for the “dates-as-data” approach and offers insights into the conditions where the underlying assumptions may or may not hold.
Knowledge of sex differences in risk factors for posttraumatic stress disorder (PTSD) can contribute to the development of refined preventive interventions. Therefore, the aim of this study was to examine whether women and men differ in their vulnerability to risk factors for PTSD.
Methods
As part of the longitudinal AURORA study, 2924 patients seeking emergency department (ED) treatment in the acute aftermath of trauma provided self-report assessments of pre-, peri-, and post-traumatic risk factors, as well as 3-month PTSD severity. We systematically examined sex-dependent effects of 16 risk factors that have previously been hypothesized to show different associations with PTSD severity in women and men.
Results
Women reported higher PTSD severity at 3 months post-trauma. Z-score comparisons indicated that for five of the 16 examined risk factors, the association with 3-month PTSD severity was stronger in men than in women. In multivariable models, interaction effects with sex were observed for pre-traumatic anxiety symptoms and acute dissociative symptoms; both showed stronger associations with PTSD in men than in women. Subgroup analyses suggested trauma type-conditional effects.
Conclusions
Our findings indicate mechanisms to which men might be particularly vulnerable, demonstrating that known PTSD risk factors might behave differently in women and men. Analyses did not identify any risk factors to which women were more vulnerable than men, pointing toward further mechanisms to explain women's higher PTSD risk. Our study illustrates the need for a more systematic examination of sex differences in contributors to PTSD severity after trauma, which may inform refined preventive interventions.
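The Z-score comparisons of association strength described in the results above are typically carried out with Fisher's r-to-z transformation for two independent correlations. A minimal sketch of that test, using hypothetical correlations and sample sizes rather than the study's values:

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Compare two independent Pearson correlations via Fisher's r-to-z.

    Returns the z statistic for H0: rho1 == rho2; positive values mean
    the first correlation is the stronger one.
    """
    z1, z2 = math.atanh(r1), math.atanh(r2)            # r-to-z transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))    # SE of the difference
    return (z1 - z2) / se

# Hypothetical example: a risk factor correlating r = 0.45 with PTSD
# severity in men (n = 900) but r = 0.30 in women (n = 2000).
z = fisher_z_test(0.45, 900, 0.30, 2000)
```

A |z| above 1.96 corresponds to a two-sided difference at the 5% level; the toy values here give z ≈ 4.4.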
Several hypotheses may explain the association between substance use, posttraumatic stress disorder (PTSD), and depression. However, few studies have utilized a large multisite dataset to understand this complex relationship. Our study assessed the relationship between alcohol and cannabis use trajectories and PTSD and depression symptoms across 3 months in recently trauma-exposed civilians.
Methods
In total, 1618 (1037 female) participants provided self-report data on past 30-day alcohol and cannabis use and PTSD and depression symptoms during their emergency department (baseline) visit. We reassessed participants' substance use and clinical symptoms at 2, 8, and 12 weeks posttrauma. Latent class mixture modeling determined alcohol and cannabis use trajectories in the sample. Changes in PTSD and depression symptoms were assessed across alcohol and cannabis use trajectories via a mixed-model repeated-measures analysis of variance.
Results
Three trajectory classes (low, high, and increasing use) provided the best model fit for alcohol and cannabis use. The low alcohol use class exhibited lower PTSD symptoms at baseline than the high use class; the low cannabis use class exhibited lower PTSD and depression symptoms at baseline than the high and increasing use classes; these symptoms greatly increased at week 8 and declined at week 12. Participants who were already using alcohol and cannabis exhibited greater PTSD and depression symptoms at baseline, which increased at week 8 and decreased at week 12.
Conclusions
Our findings suggest that alcohol and cannabis use trajectories are associated with the intensity of posttrauma psychopathology. These findings could potentially inform the timing of therapeutic strategies.
The current assays to confirm herbicide resistance can be time- and labor-intensive (dose–response) or require a specialized skill set and technical equipment (genetic sequencing). Stakeholders could benefit from a rapid assay to confirm herbicide-resistant weeds to ensure sustainable crop production. Because protoporphyrinogen oxidase (PPO)-inhibiting herbicides rapidly interfere with chlorophyll production/integrity, we propose a new, rapid assay utilizing spectral reflectance to confirm resistance. Leaf disks were excised from two PPO-inhibiting herbicide-resistant (target-site [TSR] and non–target site [NTSR]) and herbicide-susceptible redroot pigweed (Amaranthus retroflexus L.) populations and placed into a 24-well plate containing different concentrations (0 to 10 mM) of fomesafen for 48 h. A multispectral sensor captured images from the red (668 nm), green (560 nm), blue (475 nm), and red edge (717 nm) wavebands after the 48-h incubation period. The green leaf index (GLI) was utilized to determine spectral reflectance ratios of the treated leaf disks. Clear differences in spectral reflectance were observed in the red edge waveband for all populations treated with the 10 mM concentration in the dose–response assays. Differences in spectral reflectance were observed for the NTSR population compared with the TSR and susceptible populations treated with the 10 mM concentration in the green waveband and the GLI in the dose–response assay. Leaf disks from the aforementioned A. retroflexus populations and two additional susceptible populations were subjected to a similar assay with the discriminating concentration (10 mM). Spectral reflectance differed between the PPO-inhibiting herbicide-resistant and herbicide-susceptible populations in the red, blue, and green wavebands. Spectral reflectance was not distinctive between the populations in the red edge waveband or the GLI. The results provide a basis for rapidly (∼48 h) detecting PPO-inhibiting herbicide-resistant A. retroflexus via spectral reflectance. Discrimination between TSR and NTSR populations was possible only in the dose–response assay, but the assay still has utility in distinguishing herbicide-resistant plants from herbicide-susceptible plants.
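The green leaf index used in the leaf-disk assays is a standard chlorophyll-sensitive ratio of the green, red, and blue reflectance bands, GLI = (2G − R − B)/(2G + R + B). A minimal sketch of the computation, with illustrative reflectance values rather than the study's measurements:

```python
def green_leaf_index(red, green, blue):
    """Green Leaf Index (GLI) from per-band reflectance values.

    GLI = (2*G - R - B) / (2*G + R + B); values range from -1 to 1,
    with healthy green tissue giving positive values and bleached or
    necrotic tissue drifting toward zero or below.
    """
    denom = 2 * green + red + blue
    if denom == 0:
        return 0.0  # avoid division by zero on an all-dark pixel
    return (2 * green - red - blue) / denom

# Hypothetical reflectance values for two leaf disks after 48 h:
healthy = green_leaf_index(red=0.08, green=0.18, blue=0.05)  # still green
injured = green_leaf_index(red=0.20, green=0.12, blue=0.10)  # bleached by herbicide
```

A susceptible disk losing chlorophyll integrity would thus show a lower GLI than a resistant disk at the same herbicide concentration, which is the contrast the assay exploits.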
Glufosinate is an effective postemergence herbicide, and overreliance on this herbicide for weed control is likely to increase and select for glufosinate-resistant weeds. Common assays to confirm herbicide resistance are dose–response and molecular sequencing techniques; both can require significant time, labor, unique technical equipment, and a specialized skill set to perform. As an alternative, we propose an image-based approach that uses a relatively inexpensive multispectral sensor designed for unmanned aerial vehicles to measure and quantify surface reflectance from glufosinate-treated leaf disks. Leaf disks were excised from glufosinate-resistant and glufosinate-susceptible corn (Zea mays L.), cotton (Gossypium hirsutum L.), and soybean [Glycine max (L.) Merr.] varieties and placed into a 24-well plate containing eight different concentrations (0 to 10 mM) of glufosinate for 48 h. Multispectral images were collected after the 48-h incubation period across five discrete wave bands: blue (475 to 507 nm), green (560 to 587 nm), red (668 to 682 nm), red edge (717 to 729 nm), and near infrared (842 to 899 nm). The green leaf index (GLI; a metric to measure chlorophyll content) was utilized to determine relationships between measured reflectance from the tested wave bands from the treated leaf disks and the glufosinate concentration. Clear differences in spectral reflectance were observed between the corn, cotton, and soybean leaf disks of the glufosinate-resistant and glufosinate-susceptible varieties at the 10 mM concentration for select wave bands and GLI. Leaf disks from two additional glufosinate-resistant and glufosinate-susceptible varieties of each crop were subjected to a similar assay with two concentrations: 0 and 10 mM. No differences in spectral reflectance were observed for the corn and soybean varieties in any wave band or the GLI. The leaf disks of the glufosinate-resistant and glufosinate-susceptible cotton varieties were spectrally distinct in the green, blue, and red edge wave bands. The results provide a basis for rapidly detecting glufosinate-resistant plants via spectral reflectance. Future research will need to determine the glufosinate concentrations, useful wave bands, and susceptible/resistant thresholds for weeds that evolve resistance.
Posttraumatic stress symptoms (PTSS) are common following traumatic stress exposure (TSE). Identification of individuals at risk of PTSS in the early aftermath of TSE is important to enable targeted administration of preventive interventions. In this study, we used baseline survey data from two prospective cohort studies to identify the most influential predictors of substantial PTSS.
Methods
Self-identifying black and white American women and men (n = 1546) presenting to one of 16 emergency departments (EDs) within 24 h of motor vehicle collision (MVC) TSE were enrolled. Individuals with substantial PTSS (⩾33, Impact of Events Scale – Revised) 6 months after MVC were identified via follow-up questionnaire. Sociodemographic, pain, general health, event, and psychological/cognitive characteristics were collected in the ED and used in prediction modeling. Ensemble learning methods and Monte Carlo cross-validation were used for feature selection and to determine prediction accuracy. External validation was performed on a hold-out sample (30% of total sample).
Results
Twenty-five percent (n = 394) of individuals reported PTSS 6 months following MVC. Regularized linear regression was the top performing learning method. The top 30 factors together showed good reliability in predicting PTSS in the external sample (Area under the curve = 0.79 ± 0.002). Top predictors included acute pain severity, recovery expectations, socioeconomic status, self-reported race, and psychological symptoms.
Conclusions
These analyses add to a growing literature indicating that influential predictors of PTSS can be identified and risk for future PTSS estimated from characteristics easily available/assessable at the time of ED presentation following TSE.
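The area under the curve reported above (0.79) has a direct probabilistic reading: it is the chance that a randomly chosen participant who developed substantial PTSS was assigned a higher predicted risk than a randomly chosen participant who did not, with ties counted as one half. A minimal pure-Python sketch of that definition, using toy scores rather than AURORA data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via its probabilistic definition:
    the fraction of (positive, negative) pairs in which the positive
    case receives the higher score, counting ties as one half.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted risks: participants who developed substantial
# PTSS vs. those who did not (toy values, not study data).
ptss = [0.9, 0.8, 0.75, 0.6]
no_ptss = [0.7, 0.5, 0.4, 0.3, 0.2]
result = auc(ptss, no_ptss)
```

This O(n·m) pair-counting form is the clearest statement of the metric; production code would use a rank-based formulation for large samples.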
Defining key barriers to the development of a well-trained clinical research professional (CRP) workforce is an essential first step in identifying solutions for successful CRP onboarding, training, and competency development, which will enhance quality across the clinical and translational research enterprise. This study aimed to summarize barriers and best practices at academic medical centers related to effective CRP onboarding, training, and professional development; to identify challenges with the assessment of and mentoring for CRP competency growth; and to describe opportunities to improve training and professionalization for the CRP career pathway.
Materials/Methods:
Qualitative data from a series of Un-Meeting breakout sessions and open-text survey questions were analyzed to explore the complex issues involved when developing high-quality onboarding and continuing education opportunities for CRPs at academic medical centers.
Results:
Results suggest there are several barriers to training the CRP workforce, including balancing foundational onboarding with role-based training, managing logistical challenges and institutional contexts, identifying/enlisting institutional champions, assessing competency, and providing high-quality mentorship. Several of these themes are interrelated. Two universal threads present throughout all themes are the need for effective communication and the need to improve professionalization of the CRP career pathway.
Conclusion:
Few institutions have solved all the issues related to training a competent and adaptable CRP workforce, although some have addressed one or more. We applied a socio-technical lens to illustrate our findings and the need for NCATS-funded academic medical centers to work collaboratively within and across institutions to overcome training barriers and support a vital, well-qualified workforce, and we present several exemplars from the field to help attain this goal.
Capacity development is critical to long-term conservation success, yet we lack a robust and rigorous understanding of how well its effects are being evaluated. A comprehensive summary of who is monitoring and evaluating capacity development interventions, what is being evaluated and how, would help in the development of evidence-based guidance to inform design and implementation decisions for future capacity development interventions and evaluations of their effectiveness. We built an evidence map by reviewing peer-reviewed and grey literature published since 2000, to identify case studies evaluating capacity development interventions in biodiversity conservation and natural resource management. We used inductive and deductive approaches to develop a coding strategy for studies that met our criteria, extracting data on the type of capacity development intervention, evaluation methods, data and analysis types, categories of outputs and outcomes assessed, and whether the study had a clear causal model and/or used a systems approach. We found that almost all studies assessed multiple outcome types: most frequent was change in knowledge, followed by behaviour, then attitude. Few studies evaluated conservation outcomes. Less than half included an explicit causal model linking interventions to expected outcomes. Half of the studies considered external factors that could influence the efficacy of the capacity development intervention, and few used an explicit systems approach. We used framework synthesis to situate our evidence map within the broader literature on capacity development evaluation. Our evidence map (including a visual heat map) highlights areas of low and high representation in investment in research on the evaluation of capacity development.
Across Eurasia, horse transport transformed ancient societies. Although evidence for chariotry is well dated, the origins of horse riding are less clear. Techniques to distinguish chariotry from riding in archaeological samples rely on elements not typically recovered from many steppe contexts. Here, the authors examine horse remains of Mongolia's Deer Stone-Khirigsuur (DSK) Complex, comparing them with ancient and modern East Asian horses used for both types of transport. DSK horses demonstrate unique dentition damage that could result from steppe chariotry, but may also indicate riding with a shallow rein angle at a fast gait. A key role for chariots in Late Bronze Age Mongolia helps explain the trajectory of horse use in early East Asia.
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
Aims
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Method
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Results
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
Conclusions
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
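The polygenic scores (PGS) used in these analyses are, in their basic form, weighted sums of effect-allele dosages, with per-variant weights taken from GWAS summary statistics. A minimal sketch with hypothetical toy values (not real GWAS effect sizes):

```python
def polygenic_score(dosages, weights):
    """Polygenic score as a weighted sum of effect-allele dosages.

    dosages: per-variant counts of the effect allele (0, 1, or 2);
    weights: per-variant effect sizes (betas) from GWAS summary statistics.
    """
    if len(dosages) != len(weights):
        raise ValueError("one weight per variant required")
    return sum(d * w for d, w in zip(dosages, weights))

# Hypothetical three-variant individual (toy weights, not real betas):
score = polygenic_score(dosages=[2, 0, 1], weights=[0.10, -0.05, 0.20])
```

In practice scores are standardized within a cohort before regression against outcomes such as age at onset, so the reported betas (e.g. −0.34 years per standard deviation of PGS) describe relative, not absolute, liability.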
Non-alcoholic fatty liver disease (NAFLD) is an increasingly common cause of chronic liver disease that accompanies obesity and the metabolic syndrome. Excess fructose consumption can initiate or exacerbate NAFLD, in part as a consequence of impaired hepatic fructose metabolism. Preclinical data suggest that fructose-induced alterations in the gut microbiome, increased gut permeability, and endotoxemia play an important role in NAFLD, but human studies are sparse. The present study aimed to determine whether two weeks of excess fructose consumption significantly alters gut microbiota or permeability in humans.
Methods:
We performed a pilot double-blind, cross-over, metabolic unit study in 10 subjects with obesity (body mass index [BMI] 30–40 kg/m²). Each arm provided 75 g of either fructose or glucose added to subjects’ individual diets for 14 days, substituted isocalorically for complex carbohydrates, with a 19-day wash-out period between arms. Fructose intake in the fructose arm averaged 20.1% of calories. Outcome measures included fecal microbiota distribution, fecal metabolites, intestinal permeability, markers of endotoxemia, and plasma metabolites.
Results:
Routine blood, uric acid, liver function, and lipid measurements were unaffected by the fructose intervention. The fecal microbiome (including Akkermansia muciniphila), fecal metabolites, gut permeability, indices of endotoxemia, gut damage or inflammation, and plasma metabolites were essentially unchanged by either intervention.
Conclusions:
In contrast to rodent preclinical findings, excess fructose did not alter the gut microbiome, metabolome, or permeability, nor did it induce endotoxemia, in humans with obesity fed fructose for 14 days in amounts known to enhance NAFLD.
This is the first report on the association between trauma exposure and depression from the Advancing Understanding of RecOvery afteR traumA (AURORA) multisite longitudinal study of adverse post-traumatic neuropsychiatric sequelae (APNS) among participants seeking emergency department (ED) treatment in the aftermath of a traumatic life experience.
Methods
We focus on participants presenting at EDs after a motor vehicle collision (MVC), which characterizes most AURORA participants, and examine associations of participant socio-demographics and MVC characteristics with 8-week depression as mediated through peritraumatic symptoms and 2-week depression.
Results
Eight-week depression prevalence was relatively high (27.8%) and associated with several MVC characteristics (being passenger v. driver; injuries to other people). Peritraumatic distress was associated with 2-week but not 8-week depression. Most of these associations held when controlling for peritraumatic symptoms and, to a lesser degree, depressive symptoms at 2-weeks post-trauma.
Conclusions
These observations, coupled with substantial variation in the relative strength of the mediating pathways across predictors, raise the possibility of diverse and potentially complex underlying biological and psychological processes that remain to be elucidated in more in-depth analyses of the rich and evolving AURORA database, with the aim of finding new targets for intervention and new tools for risk-based stratification following trauma exposure.
Analysis of a recent surge of Morsnevbreen, Svalbard, is used to test predictions of the enthalpy balance theory of surging. High-resolution time series of velocities, ice thickness and crevasse distribution allow key elements of the enthalpy (internal energy) budget to be quantified for different stages of the surge cycle. During quiescence (1936–1990), velocities were very low, and geothermal heat slowly built up enthalpy at the bed. Measurable mass transfer and frictional heating began in 1990–2010, then positive frictional heating–velocity feedbacks caused gradual acceleration from 2010 to 2015. Rapid acceleration occurred in summer 2016, when extensive crevassing and positive air temperatures allowed significant surface-to-bed drainage. The surge front reached the terminus in October 2016, coincident with a drop in velocities. Ice plumes in the fjord are interpreted as discharge of large volumes of supercooled water from the bed. Surge termination was prolonged, however, indicating persistence of an inefficient drainage system. The observations closely match predictions of the theory, particularly the build-up of enthalpy from geothermal and frictional heat, and surface meltwater, and the concomitant changes in ice-surface elevation and velocity. Additional characteristics of the surge reflect spatial processes not represented in the model, but can be explained with respect to enthalpy gradients.
The sternocleidomastoid can be used as a pedicled flap in head and neck reconstruction. It has previously been associated with high complication rates, likely due in part to the variable nature of its blood supply.
Objective
To provide clinicians with an up-to-date review of clinical outcomes of sternocleidomastoid flap surgery in head and neck reconstruction, integrated with a review of vascular anatomical studies of the sternocleidomastoid.
Methods
A literature search of the Medline and Web of Science databases was conducted. Complications were analysed for each study. The trend in success rates was analysed by date of the study.
Results
Reported complication rates have improved over time. The preservation of two vascular pedicles rather than one may have contributed to improved outcomes.
Conclusion
The sternocleidomastoid flap is a versatile option for patients where prolonged free flap surgery is inappropriate. Modern vascular imaging techniques could optimise pre-operative planning.
Clinical Enterobacteriaceae isolates with a colistin minimum inhibitory concentration (MIC) ≥4 mg/L from a United States hospital were screened for the mcr-1 gene using real-time polymerase chain reaction (RT-PCR) and confirmed by whole-genome sequencing. Four colistin-resistant Escherichia coli isolates contained mcr-1. Two isolates belonged to the same sequence type (ST-632). All subjects had prior international travel and antimicrobial exposure.
The concept of nobility in the middle ages is the focus of this volume. Embracing regions as diverse as England (before and after the Norman Conquest), Italy, the Iberian peninsula, France, Norway, Poland, Portugal, and the Romano-German empire, it ranges over the whole medieval period from the fifth to the early sixteenth century. The articles confront many of the central issues about the origins and nature of ‘nobility’, its relationship with the late Roman world, its acquisition and exercise of power, its association with military obligation, and its gradual ‘pacification’ and transformation into a more or less willing instrument of royal government (indeed, the symbiotic relationship between royal, or imperial, and noble power is a recurring theme). Other ideas historically linked to the concept of nobility and discussed here are ‘nobility’ itself; the distinction between nobility of birth and nobility of character; chivalry; violence and its effects; and noblewomen as co-progenitors and transmitters of nobility of blood.
Dr ANNE DUGGAN teaches in the Department of History at King's College London.
Chemical constituents trapped within glacial ice provide a unique record of climate, as well as repositories for biological material such as pollen grains, fungal spores, viruses, bacteria and dissolved organic carbon. Past research suggests that the veins of polycrystalline ice may provide a liquid microenvironment for active microbial metabolism fueled by concentrated impurities in the veins. Despite these claims, no direct measurements of impurity concentration in ice veins have been made. Using micro-Raman spectroscopy, we show that sulfate and nitrate concentrations in the veins of glacial ice from Greenland (Greenland Ice Sheet Project 2) and Antarctic (Newall Glacier and a Dominion Range glacier) core samples were 10⁴ and 10⁵ times greater than the concentrations measured in melted (bulk) core water. Methanesulfonate was not found in the veins, consistent with its presence as particulate matter within the ice. The measured vein concentration of molecular anions implies a highly acidic (pH < 3) vein environment with high ionic strength (mM–M). We estimate that the vein volume provides 16.7 and 576 km³ of habitable space within the Greenland and Antarctic ice sheets, respectively, which could support the metabolism of organisms that are capable of growing in cold, high ionic strength solutions with low pH.
Over the past 30 years, the number of US doctoral anthropology graduates has increased by about 70%, but there has not been a corresponding increase in the availability of new faculty positions. Consequently, doctoral degree-holding archaeologists face more competition than ever before when applying for faculty positions. Here we examine where US and Canadian anthropological archaeology faculty originate and where they ultimately end up teaching. Using data derived from the 2014–2015 AnthroGuide, we rank doctoral programs whose graduates in archaeology have been most successful in the academic job market; identify long-term and ongoing trends in doctoral programs; and discuss gender division in academic archaeology in the US and Canada. We conclude that success in obtaining a faculty position upon graduation is predicated in large part on where one attends graduate school.