To assess the prevalence of obesity and investigate changes in body mass index in children with CHD compared to age-matched healthy controls in Southwestern Ontario.
Methods:
The body mass index z-scores of 1259 children (aged 2–18) with CHD were compared with those of 2037 healthy controls. The body mass index z-scores of children who presented to our paediatric cardiology outpatient clinic from 2018 to 2021 were compared with previously collected data from 2008 to 2010. A longitudinal analysis of patients with data in both cohorts was also completed.
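Body mass index z-scores of this kind are conventionally derived from age- and sex-specific growth references using the LMS method. The sketch below illustrates that calculation only; the reference L, M, and S values shown, and the assumption that an LMS-based reference was used in this study, are hypothetical.

```python
# Illustrative sketch: computing a BMI z-score with the LMS method,
# the approach underlying WHO/CDC growth references.
# The reference L, M, S values below are hypothetical placeholders;
# the abstract does not state which growth reference was used.
import math

def bmi_z_score(bmi: float, L: float, M: float, S: float) -> float:
    """Return the BMI z-score for an observed BMI given reference L, M, S."""
    if L == 0:
        return math.log(bmi / M) / S
    return ((bmi / M) ** L - 1) / (L * S)

# Example: hypothetical reference values for a given age and sex.
print(bmi_z_score(bmi=18.5, L=-1.6, M=16.0, S=0.11))  # positive z -> above reference median
```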
Results:
In total, 21.4% of patients with CHD and 26.6% of healthy controls were found to be overweight or obese (p < 0.001). The 2018–2021 cohort of CHD patients and controls had significantly higher body mass index z-scores compared to the 2008–2010 cohort (p < 0.001). Longitudinal analysis showed that body mass index z-scores significantly increased over time for CHD patients with data in both cohorts (2018–2021: M = 0.59, SD = 1.26; 2008–2010: M = −0.04, SD = 1.05; p < 0.001).
Conclusion:
The prevalence of obesity in all children, irrespective of CHD, is rising. The coexistence of obesity and CHD may pose additional cardiovascular risks and complications.
Foliar-applied postemergence applications of glufosinate are often made to glufosinate-resistant crops to provide nonselective weed control without significant crop injury. Rainfall, air temperature, solar radiation, and relative humidity near the time of application have been reported to affect glufosinate efficacy. However, previous research may not have captured the full range of weather variability to which glufosinate may be exposed prior to or following application. Additionally, climate models suggest more extreme weather will become the norm, further expanding the range of weather conditions to which glufosinate can be exposed. The objective of this research was to quantify the probability of successful weed control (efficacy ≥85%) with glufosinate applied to some key weed species across a broad range of weather conditions. A database of >10,000 North American herbicide evaluation trials was used in this study. The database was filtered to include treatments with a single POST application of glufosinate applied to waterhemp [Amaranthus tuberculatus (Moq.) J. D. Sauer], morningglory species (Ipomoea spp.), and/or giant foxtail (Setaria faberi Herrm.) <15 cm in height. These species were chosen because they are well represented in the database and are listed as common and troublesome weed species in both corn (Zea mays L.) and soybean [Glycine max (L.) Merr.] (Van Wychen 2020, 2022). Individual random forest models were created. Low rainfall (≤20 mm) over the five days prior to glufosinate application was detrimental to the probability of successful control of A. tuberculatus and S. faberi. Lower relative humidity (≤70%) and solar radiation (≤23 MJ m-2 day-1) on the day of application reduced the probability of successful weed control in most cases. Additionally, the probability of successful control decreased for all species when average air temperature over the first five days after application was ≤25 C. As climate continues to change and become more variable, the risk of unacceptable control of several common weed species with glufosinate is likely to increase.
Foliar-applied postemergence herbicides are a critical component of corn (Zea mays L.) and soybean [Glycine max (L.) Merr.] weed management programs in North America. Rainfall and air temperature around the time of application may affect the efficacy of herbicides applied postemergence in corn or soybean production fields. However, previous research utilized a limited number of site-years and may not capture the range of rainfall and air temperatures that these herbicides are exposed to throughout North America. The objective of this research was to model the probability of achieving successful weed control (≥85%) with commonly applied postemergence herbicides across a broad range of environments. A large database of more than 10,000 individual herbicide evaluation field trials conducted throughout North America was used in this study. The database was filtered to include only trials with a single postemergence application of fomesafen, glyphosate, mesotrione, or fomesafen + glyphosate. Waterhemp [Amaranthus tuberculatus (Moq.) Sauer], morningglory species (Ipomoea spp.), and giant foxtail (Setaria faberi Herrm.) were the weeds of focus. Separate random forest models were created for each weed species by herbicide combination. The probability of successful weed control deteriorated when the average air temperature within the first 10 d after application was <19 or >25 C for most of the herbicide by weed species models. Additionally, drier conditions before postemergence herbicide application reduced the probability of successful control for several of the herbicide by weed species models. As air temperatures increase and rainfall becomes more variable, weed control with many of the commonly used postemergence herbicides is likely to become less reliable.
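The two herbicide abstracts above describe the same general modeling approach: a large trial database is filtered, weed control is coded as successful when efficacy is ≥85%, and random forest models estimate the probability of success from weather covariates around the application date. The sketch below illustrates that approach; the file name, column names, and model settings are hypothetical and not drawn from either study.

```python
# Minimal sketch of the random-forest approach described above:
# classify whether weed control was "successful" (efficacy >= 85%)
# from weather covariates around the application date.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

trials = pd.read_csv("herbicide_trials.csv")            # one row per trial treatment
trials["success"] = (trials["efficacy_pct"] >= 85).astype(int)

features = ["rain_5d_before_mm", "air_temp_10d_after_c",
            "rel_humidity_pct", "solar_rad_mj_m2_day"]
X_train, X_test, y_train, y_test = train_test_split(
    trials[features], trials["success"], test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

# Predicted probability of successful control for held-out trials.
print(rf.predict_proba(X_test)[:, 1][:5])
```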
The goal of this article is to outline a new account of the virtue of patience. To help build the account, we focus on five important issues pertaining to patience: (i) goals and time, (ii) emotion, (iii) continence versus virtue, (iv) motivation, and (v) good ends. The heart of the resulting account is that patience is a cross-situational and stable disposition to react, both internally and externally, to slower than desired progress toward goal achievement with a reasonable level of calmness. The article ends by applying the account to better understand the vices associated with patience.
Age-Related Antibiotic Prescribing Trends of Clostridioides difficile Incident Cases within Davidson County, Tennessee, 2012–2020
Michael Norris, MSN, Priscilla Pineda, MPH, Malakai Miller, MPH, Raquel Villegas, PhD, MS
Background: Clostridioides difficile infection (CDI) is one of the most common healthcare-associated infections in the United States. Antibiotic use is considered a predisposing factor for CDI. The State of Tennessee collaborates with the CDC as part of an ongoing Emerging Infections Program (EIP). We sought to better understand the impact of antimicrobial use prior to the incident date of CDI within defined age groups in Davidson County, Tennessee.
Methods: Surveillance data from the years 2012–2020 were examined for all positive CDI cases within Davidson County. A positive CDI case was defined as a laboratory-confirmed case in a person ≥1 year old living in Davidson County, Tennessee. Antibiotic use was assessed in the 12 weeks prior to CDI. Trends of overall antibiotic use, including the top five antibiotics prescribed within our defined age groups, were examined. Analyses were performed using SAS version 9.4. Only fully abstracted cases are included in the study.
Results: Among 7,346 positive CDI incident cases identified between 2012 and 2020, 5,467 (74.4%) received antibiotics in the 12 weeks prior to a confirmed infection. The annual proportion of cases with prior antibiotic prescription from 2012 to 2020 (77.0%, 76.7%, 74.3%, 80.7%, 76.3%, 75.1%, 73.7%, 74.8%, and 71.5%) has decreased since 2015. The prevalence of antibiotic use by age group (1–18 years, 19–44 years, 45–64 years, 65–74 years, and 75+ years) was 53.4%, 68.8%, 74.5%, 79.2%, and 83.1%, respectively. The five most prescribed antibiotics were ceftriaxone (11.1%), followed by vancomycin IV (10.9%), ciprofloxacin (10.2%), metronidazole (9.1%), and piperacillin (8.6%). Cases in the 45–64 years age group were more likely to be prescribed vancomycin IV, ciprofloxacin, metronidazole, and piperacillin-tazobactam compared to other age groups (p < 0.0001). There was no statistically significant association between ceftriaxone prescription and our defined age groups.
Conclusion: In this study, almost three quarters of the CDI cases had received antimicrobial therapy in the 12 weeks prior to infection. Since antibiotic prescription is a potentially modifiable risk factor for CDI, a more in-depth study, combined with implementation of antibiotic stewardship programs in all settings, would help reduce the risk of CDI as a complication of antibiotic use.
How was trust created and reinforced between the inhabitants of medieval and early modern cities? And how did the social foundations of trusting relationships change over time? Current research highlights the role of kinship, neighbourhood, and associations, particularly guilds, in creating ‘relationships of trust’ and social capital in the face of high levels of migration, mortality, and economic volatility, but tells us little about their relative importance or how they developed. We uncover a profound shift in the contribution of family and guilds to trust networks among the middling and elite of one of Europe's major cities, London, over three centuries, from the 1330s to the 1680s. We examine almost 15,000 networks of sureties created to secure orphans’ inheritances to measure the presence of trusting relationships connected by guild membership, family, and place. We find a marked increase in the role of kinship – a re-embedding of trust within the family – and a decline in the importance of shared guild membership in connecting the Londoners who jointly secured orphans’ inheritances. These developments indicate a profound transformation in the social fabric of urban society.
This study identified 26 late invasive primary surgical site infections (IP-SSI) within 4–12 months of transplantation among 2073 solid organ transplant (SOT) recipients at Duke University Hospital over the period 2015–2019. Thoracic organ transplants accounted for 25 of the late IP-SSIs. Surveillance for late IP-SSI should be maintained for at least one year following transplant.
Sleep pattern alteration is a core feature of bipolar disorder (BD), often challenging to treat and affecting clinical outcomes. Suvorexant, a hypnotic agent that decreases wakefulness, has shown promising results in treating primary insomnia. To date, data on its use in BD are lacking. This study evaluated the efficacy and tolerability of adjunctive suvorexant for treatment-resistant insomnia in BD patients.
Methods
Thirty-six BD outpatients (19 BDI, 69.4% female, 48.9 [±15.2] years) were randomized for 1 week to double-blind suvorexant (10–20 mg/day) versus placebo. Then, all subjects who completed the randomized phase were offered open suvorexant for 3 months. Subjective total sleep time (sTST) and objective total sleep time (oTST) were assessed.
Results
During the randomized controlled trial (RCT) phase, an overall increase in oTST emerged, which was statistically significant for the Cole–Kripke algorithm (p = 0.035). The comparison between the suvorexant and placebo groups was limited by significant differences between measurements at baseline. During the open phase, no significant improvement was detected in either sTST or oTST. No adverse events or major intolerances were reported.
Discussion
Efficacy results are inconsistent. During the RCT phase, only a small increase in oTST emerged, while during the open phase no significant improvement was detected. Although this is the first study of suvorexant in BD-related insomnia, the small sample and the high rate of dropouts limit the generalizability of these findings. Larger studies are needed to assess suvorexant in treating BD-related insomnia.
Although the link between alcohol involvement and behavioral phenotypes (e.g. impulsivity, negative affect, executive function [EF]) is well-established, the directionality of these associations, specificity to stages of alcohol involvement, and extent of shared genetic liability remain unclear. We estimate longitudinal associations between transitions among alcohol milestones, behavioral phenotypes, and indices of genetic risk.
Methods
Data came from the Collaborative Study on the Genetics of Alcoholism (n = 3681; ages 11–36). Alcohol transitions (first: drink, intoxication, alcohol use disorder [AUD] symptom, AUD diagnosis), internalizing, and externalizing phenotypes came from the Semi-Structured Assessment for the Genetics of Alcoholism. EF was measured with the Tower of London and Visual Span Tasks. Polygenic scores (PGS) were computed for alcohol-related and behavioral phenotypes. Cox models estimated associations among PGS, behavior, and alcohol milestones.
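As a rough illustration of the Cox models described above, the sketch below fits a proportional-hazards model relating a polygenic score and a behavioral phenotype to time until an alcohol milestone using the lifelines package. This is a minimal sketch under stated assumptions: the data file, column names, and covariates are hypothetical and are not the study's variables.

```python
# Sketch of a Cox proportional-hazards model relating a polygenic score and a
# behavioral phenotype to time until an alcohol milestone (e.g., first AUD symptom).
# All file and column names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("alcohol_transitions.csv")
# Expected columns (hypothetical): age_at_event_or_censor, event_observed (0/1),
# drinks_per_week_pgs, conduct_symptoms, sex.

cph = CoxPHFitter()
cph.fit(
    df[["age_at_event_or_censor", "event_observed",
        "drinks_per_week_pgs", "conduct_symptoms", "sex"]],
    duration_col="age_at_event_or_censor",
    event_col="event_observed",
)
cph.print_summary()  # the exp(coef) column gives hazard ratios (HR)
```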
Results
Externalizing phenotypes (e.g. conduct disorder symptoms) were associated with future initiation and drinking problems (hazard ratio (HR)⩾1.16). Internalizing (e.g. social anxiety) was associated with hazards for progression from first drink to severe AUD (HR⩾1.55). Initiation and AUD were associated with increased hazards for later depressive symptoms and suicidal ideation (HR⩾1.38), and initiation was associated with increased hazards for future conduct symptoms (HR = 1.60). EF was not associated with alcohol transitions. Drinks per week PGS was linked with increased hazards for alcohol transitions (HR⩾1.06). Problematic alcohol use PGS increased hazards for suicidal ideation (HR = 1.20).
Conclusions
Behavioral markers of addiction vulnerability precede and follow alcohol transitions, highlighting dynamic, bidirectional relationships between behavior and emerging addiction.
The Black Hebrew Israelite movement claims that African Americans are descendants of the Ancient Israelites and has slowly become a significant force in African American religion. This Element provides a general overview of the BHI movement, its diverse histories, ideologies, and practices. The Element shows how different factions and trends have taken the forefront at different periods over its 140-year history, leading to the current situation where diverse iterations of the movement exist alongside each other, sharing some core concepts while differing widely. In particular, the questions of how and why BHI has become a potent and attractive movement in recent years are addressed, arguing that it fulfils a specific religious need concerning identity and teleology, and represents a new and persistent form of Abrahamic religion.
Little and Meng (L&M) (2023) question the prevailing narrative of widespread democratic backsliding by showing that various objective indicators of democracy are flat over time. However, because recent democratic decline is concentrated in democracies, the objective indicators can accurately test for backsliding only if they can track democratic quality within democracies. This response article shows that they cannot, for conceptual and empirical reasons. The indicators generally can distinguish democracies from autocracies but are blind to variation in quality within democracies. L&M, therefore, are showing that one form of variation in democracy is stagnant but are systematically missing the very type of variation that has most informed current warnings about backsliding.
Children with CHD or born very preterm are at risk for brain dysmaturation and poor neurodevelopmental outcomes. Yet, studies have primarily investigated neurodevelopmental outcomes of these groups separately.
Objective:
To compare neurodevelopmental outcomes and parent behaviour ratings of children born at term with CHD to children born very preterm.
Methods:
A clinical research sample of 181 children (CHD [n = 81]; very preterm [≤32 weeks; n = 100]) was assessed at 18 months.
Results:
Children with CHD and children born very preterm did not differ on Bayley-III cognitive, language, or motor composite scores, or on expressive or receptive language, or on fine motor scaled scores. Children with CHD had lower gross motor scaled scores compared to children born very preterm (p = 0.047). More children with CHD had impaired scores (<70 SS) on language composite (17%), expressive language (16%), and gross motor (14%) indices compared to children born very preterm (6%; 7%; 3%; ps < 0.05). No group differences were found on behaviours rated by parents on the Child Behaviour Checklist (1.5–5 years) or the proportion of children with scores above the clinical cutoff. English as a first language was associated with higher cognitive (p = 0.004) and language composite scores (p < 0.001). Lower median household income and English as a second language were associated with higher total behaviour problems (ps < 0.05).
Conclusions:
Children with CHD were more likely to display language and motor impairment compared to children born very preterm at 18 months. Outcomes were associated with language spoken in the home and household income.
Former professional American football players have a high relative risk for neurodegenerative diseases like chronic traumatic encephalopathy (CTE). Interpreting low cognitive test scores in this population is occasionally complicated by performance on validity testing. Neuroimaging biomarkers may help inform whether a neurodegenerative disease is present in these situations. We report three cases of retired professional American football players who completed comprehensive neuropsychological testing, but “failed” performance validity tests, and underwent multimodal neuroimaging (structural MRI, Aβ-PET, and tau-PET).
Participants and Methods:
Three cases were identified from the Focused Neuroimaging for the Neurodegenerative Disease Chronic Traumatic Encephalopathy (FIND-CTE) study, an ongoing multimodal imaging study of retired National Football League players with complaints of progressive cognitive decline conducted at Boston University and the UCSF Memory and Aging Center. Participants were relatively young (age range 55-65), had 16 or more years of education, and two identified as Black/African American. Raw neuropsychological test scores were converted to demographically-adjusted z-scores. Testing included standalone (Test of Memory Malingering; TOMM) and embedded (reliable digit span, RDS) performance validity measures. Validity cutoffs were TOMM Trial 2 < 45 and RDS < 7. Structural MRIs were interpreted by trained neurologists. Aβ-PET with Florbetapir was used to quantify cortical Aβ deposition as global Centiloids (0 = mean cortical signal for a young, cognitively normal, Aβ negative individual in their 20s, 100 = mean cortical signal for a patient with mild-to-moderate Alzheimer’s disease dementia). Tau-PET was performed with MK-6240 and first quantified as a standardized uptake value ratio (SUVR) map. The SUVR map was then converted to a w-score map representing signal intensity relative to a sample of demographically-matched healthy controls.
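w-scores of the kind reported below are typically computed as covariate-adjusted z-scores: the observed SUVR is compared with the value predicted for a demographically matched control and scaled by the residual standard deviation of the control regression. The sketch below is a minimal illustration under that assumption; the data file, covariates, and column names are hypothetical, not the study's pipeline.

```python
# Minimal sketch of a w-score: a covariate-adjusted z-score expressing how far a
# patient's tau-PET SUVR lies from the value predicted for matched controls.
# File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

controls = pd.read_csv("control_suvr.csv")       # columns: suvr, age, sex (0/1)
model = smf.ols("suvr ~ age + sex", data=controls).fit()
resid_sd = np.sqrt(model.mse_resid)              # residual SD of the control fit

def w_score(suvr: float, age: float, sex: int) -> float:
    expected = model.predict(pd.DataFrame({"age": [age], "sex": [sex]})).iloc[0]
    return (suvr - expected) / resid_sd

print(w_score(suvr=1.35, age=60, sex=1))
```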
Results:
All three cases performed in the average range on a word reading-based estimate of premorbid intellect. Contribution of Alzheimer’s disease pathology was ruled out in each case based on Centiloid quantifications < 0. All cases scored below cutoff on TOMM Trial 2 (Case #1=43, Case #2=42, Case #3=19) and Case #3 also scored well below the RDS cutoff (2). Each case had multiple cognitive scores below expectations (z < -2.0), most consistently in the memory, executive function, and processing speed domains. For Case #1, MRI revealed mild atrophy in dorsal fronto-parietal and medial temporal lobe (MTL) regions and mild periventricular white matter disease. Tau-PET showed MTL tau burden modestly elevated relative to controls (regional w-score=0.59, 72nd%ile). For Case #2, MRI revealed cortical atrophy, mild hippocampal atrophy, and a microhemorrhage, with no evidence of meaningful tau-PET signal. For Case #3, MRI showed cortical atrophy and severe white matter disease, and tau-PET revealed significantly elevated MTL tau burden relative to controls (w-score=1.90, 97th%ile) as well as focal high signal in the dorsal frontal lobe (overall frontal region w-score=0.64, 74th%ile).
Conclusions:
Low scores on performance validity tests complicate the interpretation of the severity of cognitive deficits, but do not negate the presence of true cognitive impairment or an underlying neurodegenerative disease. In the rapidly developing era of biomarkers, neuroimaging tools can supplement neuropsychological testing to help inform whether cognitive or behavioral changes are related to a neurodegenerative disease.
Traumatic brain injury (TBI) and concussion are associated with increased dementia risk. Accurate TBI/concussion exposure estimates are largely unknown for less common neurodegenerative conditions like frontotemporal dementia (FTD). We evaluated lifetime TBI and concussion frequency in patients diagnosed with a range of FTD spectrum conditions and related prior head trauma to cavum septum pellucidum (CSP) characteristics observable on MRI.
Participants and Methods:
We administered the Ohio State University TBI Identification and Boston University Head Impact Exposure Assessment to 108 patients (age 69.5 ± 8.0, 35% female, 93% white or unknown race) diagnosed at the UCSF Memory and Aging Center with one of the following FTD or related conditions: behavioral variant frontotemporal dementia (N=39), semantic variant primary progressive aphasia (N=16), nonfluent variant PPA (N=23), corticobasal syndrome (N=14), or progressive supranuclear palsy (N=16). Data were also obtained from 217 controls (“HC”; age 76.8 ± 8.0, 53% female, 91% white or unknown race). CSP characteristics were defined based on width or “grade” (0-1 vs. 2+) and length of anterior-posterior separation (millimeters). We first describe frequency of any and multiple (2+) prior TBI based on different but commonly used definitions: TBI with loss of consciousness (LOC), TBI with LOC or posttraumatic amnesia (LOC/PTA), TBI with LOC/PTA or other symptoms like dizziness, nausea, “seeing stars,” etc. (“concussion”). TBI/concussion frequency was then compared between FTD and HC using chi-square. Associations between TBI/concussion and CSP characteristics were analyzed with chi-square (CSP grade) and Mann-Whitney U tests (CSP length). We explored sex differences due to typically higher rates of TBI among males.
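A short sketch of the group comparisons described above, using a chi-square test for categorical frequencies (e.g., prior concussion by group) and a Mann-Whitney U test for CSP length. The counts and measurements shown are hypothetical placeholders, not study data.

```python
# Sketch of the comparisons described above: chi-square for categorical frequencies
# and Mann-Whitney U for CSP anterior-posterior length. Values are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# 2x2 table: rows = FTD vs. HC, columns = concussion history yes/no (illustrative counts).
table = np.array([[54, 54],
                  [96, 121]])
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi:.3f}")

# CSP length (mm) in each group (illustrative values).
csp_ftd = np.array([0, 2, 4, 6, 1, 0, 3])
csp_hc = np.array([0, 1, 2, 0, 0, 5, 1, 2])
u, p_u = mannwhitneyu(csp_ftd, csp_hc)
print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.3f}")
```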
Results:
History of any TBI with LOC (FTD=20.0%, HC=19.2%), TBI with LOC/PTA (FTD=32.2%, HC=31.5%), and concussion (FTD=50.0%, HC=44.3%) was common but not different between study groups (p’s>.4). In both FTD and HC, prior TBI/concussion was nominally more frequent in males but not significantly greater than in females. Frequency of repeat TBI/concussion (2+) also did not differ significantly between FTD and HC (repeat TBI with LOC: 6.7% vs. 3.3%, TBI with LOC/PTA: 12.2% vs. 10.3%, concussion: 30.2% vs. 28.7%; p’s>.2). Prior TBI/concussion was not significantly related to CSP grade or length in the total sample or within the FTD or HC groups.
Conclusions:
TBI/concussion rates depend heavily on the symptom definition used for classifying prior injury. Lifetime symptomatic TBI/concussion is common but has an unclear impact on risk for FTD-related diagnoses. Larger samples are needed to appropriately evaluate sex differences, to evaluate whether TBI/concussion rates differ between specific FTD phenotypes, and to understand the rates and effects of more extensive repetitive head trauma (symptomatic and asymptomatic) in patients with FTD.
Inconsistent relationships between subjective and objective performance have been found across various clinical groups. Discrepancies in these relationships across studies have been attributed to various factors such as patient characteristics (e.g., level of insight associated with cognitive impairment) and test characteristics (e.g., using too few measures to assess different cognitive domains). Although performance and symptom invalidity are common in clinical and research settings and have the potential to impact responding on testing, previous studies have not explored the role of performance and symptom invalidity on relationships between objective and subjective performance. Therefore, the current study examined the impact of invalidity on performance and symptom validity tests (PVTs and SVTs, respectively) on the relationship between subjective and objective cognitive functioning.
Participants and Methods:
Data were obtained from 299 Veterans (77.6% male, mean age of 48.8 years (SD = 13.5)) assessed in a VA medical center epilepsy monitoring unit from 2008-2018. Participants completed a measure of subjective functioning (i.e., the Patient Competency Rating Scale), PVTs (i.e., Word Memory Test, Test of Memory Malingering, Reliable Digit Span), SVTs (i.e., Minnesota Multiphasic Personality Inventory-2-Restructured Form Response Bias Scale, Structured Inventory of Malingered Symptomatology), and neuropsychological measures assessing objective cognitive performance (e.g., Trail Making Test parts A and B). Pearson correlations were conducted between subjective functioning and objective cognitive performance in the following groups: 1.) PVT and SVT valid, 2.) PVT and SVT invalid, 3.) PVT-only invalid, 4.) SVT-only invalid. Using Fisher’s r-to-z transformation, tests for the differences between correlation coefficients were then conducted between the PVT and SVT valid vs. PVT and SVT invalid groups, and the PVT-only invalid vs. SVT-only invalid groups.
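Fisher's r-to-z transformation compares two independent correlation coefficients by converting each r to z = arctanh(r) and testing the difference against a standard normal distribution. A minimal sketch follows; the example r values and sample sizes are illustrative, not the study's group statistics.

```python
# Fisher r-to-z test for the difference between two independent correlations,
# as used to compare validity groups above. Example values are illustrative.
import math
from scipy.stats import norm

def compare_correlations(r1: float, n1: int, r2: float, n2: int):
    """Return (z statistic, two-tailed p) for H0: rho1 == rho2."""
    z1, z2 = math.atanh(r1), math.atanh(r2)            # Fisher r-to-z transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

print(compare_correlations(r1=0.31, n1=150, r2=0.13, n2=60))
```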
Results:
Participants with fully valid PVT and SVT performances demonstrated generally stronger relationships between subjective and objective scores (r’s = .058 to .310) compared to participants with both invalid PVT and SVT scores (r’s = -.033 to .132). However, the only significant difference in the strengths of correlations between the groups was found on Trail Making Test Part B (p = .034). In separate exploratory analyses due to low group size, those with invalid PVT scores only (fully valid SVT) demonstrated generally stronger relationships between subjective and objective scores (r’s = -.101 to .741) compared to participants with invalid SVT scores only (fully valid PVT; r’s = -.088 to .024). However, the only significant difference in the strengths of correlations between the groups was found on Trail Making Test Part A (p = .028).
Conclusions:
The present study suggests that at least some of the discrepancies in previous studies between subjective and objective cognitive performance may be related to performance and symptom validity. Specifically, very weak relationships between objective and subjective performance were found in participants who only failed SVTs, whereas relationships were stronger in those who only failed PVTs. Therefore, findings suggest that including measures of PVTs and SVTs in future studies investigating relationships between subjective and objective cognitive performance is critical to ensuring accuracy of conclusions that are drawn.
We explore electoral explanations for U.S. governors’ willingness to commute death sentences in their state. Across descriptive tests and pre-registered regression specifications, we find little evidence that election timing or term limits affect either the probability of commuting death sentences or the proportion of such sentences governors might commute. However, we do find evidence that governors are more likely to commute sentences – and commute sentences for a higher proportion of defendants – during the “lame duck” period after their successor’s election but before their inauguration.
The original architects of the representational theory of measurement interpreted their formalism operationally and explicitly acknowledged that some aspects of their representations are conventional. We argue that the conventional elements of the representations afforded by the theory require careful scrutiny as one moves toward a more metaphysically robust interpretation by showing that there is a sense in which the very number system one uses to represent a physical quantity such as mass or length is conventional. This result undermines inferences which impute structure from the numerical representational structure to the quantity it is used to represent.
Several hypotheses may explain the association between substance use, posttraumatic stress disorder (PTSD), and depression. However, few studies have utilized a large multisite dataset to understand this complex relationship. Our study assessed the relationship between alcohol and cannabis use trajectories and PTSD and depression symptoms across 3 months in recently trauma-exposed civilians.
Methods
In total, 1618 (1037 female) participants provided self-report data on past 30-day alcohol and cannabis use and PTSD and depression symptoms during their emergency department (baseline) visit. We reassessed participants' substance use and clinical symptoms 2, 8, and 12 weeks posttrauma. Latent class mixture modeling determined alcohol and cannabis use trajectories in the sample. Changes in PTSD and depression symptoms were assessed across alcohol and cannabis use trajectories via a mixed-model repeated-measures analysis of variance.
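A minimal sketch of a mixed-model repeated-measures analysis of the kind described above, fit with statsmodels: symptom severity modeled by trajectory class, assessment week, and their interaction, with a random intercept per participant. The data layout and column names are hypothetical placeholders, not the study's dataset.

```python
# Minimal sketch: mixed-model repeated-measures analysis of PTSD symptom severity
# across substance-use trajectory classes and assessment weeks, with a random
# intercept per participant. File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("symptoms_long.csv")
# Expected columns (hypothetical): participant_id, week (0, 2, 8, 12),
# trajectory_class ("low", "high", "increasing"), ptsd_severity.

model = smf.mixedlm(
    "ptsd_severity ~ C(trajectory_class) * C(week)",
    data=long_df,
    groups=long_df["participant_id"],
).fit()
print(model.summary())
```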
Results
Three trajectory classes (low, high, and increasing use) provided the best model fit for alcohol and cannabis use. The low alcohol use class exhibited lower PTSD symptoms at baseline than the high use class; the low cannabis use class exhibited lower PTSD and depression symptoms at baseline than the high and increasing use classes; these symptoms greatly increased at week 8 and declined at week 12. Participants already using alcohol and cannabis exhibited greater PTSD and depression symptoms at baseline, which increased at week 8 and decreased at week 12.
Conclusions
Our findings suggest that alcohol and cannabis use trajectories are associated with the intensity of posttrauma psychopathology. These findings could potentially inform the timing of therapeutic strategies.
Despite their long-held reputation as controlled affairs, autocratic elections continue to surprise. Recent years have seen turnovers in power or unexpected opposition gains in such far-flung places as Guinea-Bissau, Venezuela, Malaysia, and Bhutan. Of course, in many other dictatorships, ruling parties resoundingly win their elections and only increase their aura of dominance. Yet even when turnover is unlikely, elections can loom over autocratic politics like a squall line on the horizon. For instance, many observers claim that Putin’s planned reelection in 2024 makes him less willing to accept defeat in Ukraine. How do we make sense of the inner workings and surprising outcomes of these elections?
Children with congenital heart disease (CHD) can face neurodevelopmental, psychological, and behavioural difficulties beginning in infancy and continuing through adulthood. Despite overall improvements in medical care and a growing focus on neurodevelopmental screening and evaluation in recent years, neurodevelopmental disabilities, delays, and deficits remain a concern. The Cardiac Neurodevelopmental Outcome Collaborative was founded in 2016 with the goal of improving neurodevelopmental outcomes for individuals with CHD and paediatric heart disease. This paper describes the establishment of a centralised clinical data registry to standardise data collection across member institutions of the Cardiac Neurodevelopmental Outcome Collaborative. The goal of this registry is to foster collaboration for large, multi-centre research and quality improvement initiatives that will benefit individuals with CHD and their families and improve their quality of life. We describe the components of the registry, initial research projects proposed using data from the registry, and lessons learned in the development of the registry.