Multicenter clinical trials are essential for evaluating interventions but often face significant challenges in study design, site coordination, participant recruitment, and regulatory compliance. To address these issues, the National Institutes of Health’s National Center for Advancing Translational Sciences established the Trial Innovation Network (TIN). The TIN offers a scientific consultation process that gives investigators access to clinical trial and disease experts, who offer input and recommendations throughout the trial’s duration at no cost to investigators. This approach aims to improve trial design, accelerate implementation, foster interdisciplinary teamwork, and spur innovations that enhance multicenter trial quality and efficiency. The TIN leverages resources of the Clinical and Translational Science Awards (CTSA) program, complementing local capabilities at the investigator’s institution. The Initial Consultation process focuses on the study’s scientific premise, design, site development, recruitment and retention strategies, funding feasibility, and other support areas. As of June 1, 2024, the TIN has provided 431 Initial Consultations to increase efficiency and accelerate trial implementation by delivering customized support and tailored recommendations. Across a range of clinical trials, the TIN has developed standardized, streamlined, and adaptable processes. We describe these processes, provide operational metrics, and include a set of lessons learned for consideration by other trial support and innovation networks.
To investigate the effect of replacing a proportion of a perennial ryegrass (PRG) silage diet with press cake on productivity and enteric methane (CH4) emissions in late-lactation and non-lactating spring-calving dairy cows, a study was undertaken in which control cows (n = 21) were offered PRG silage, while treatment cows (n = 21) were offered a diet consisting of 60% PRG press cake and 40% of the same PRG silage. Although treatment cows had higher group-average dry matter intakes (DMI) and produced more enteric CH4, carbon dioxide (CO2), milk solids, protein, and fat- and protein-corrected milk yield (FPCM) in late lactation, the magnitude of the difference between treatment and control cows varied from week to week (P < 0.050). When enteric CH4 per kg of milk yield, milk solids and FPCM was considered, there was no significant difference between treatment and control. Absolute enteric CH4 was higher for cows fed press cake during the non-lactating period, but this tended to vary from week to week. Similarly, CO2 (P < 0.001) and hydrogen (H2; P = 0.023) differed from week to week between cows offered press cake and cows offered PRG silage in the non-lactating period. Although there was no significant effect of diet on body weight (BW) and body condition score (BCS), when enteric CH4 was expressed on a per kg BW basis, cows offered press cake tended to produce more enteric CH4 in both late lactation and the dry period.
New technologies and disruptions related to coronavirus disease 2019 (COVID-19) have led to the expansion of decentralized approaches to clinical trials. Remote tools and methods hold promise for increasing trial efficiency and reducing burdens and barriers by facilitating participation outside of traditional clinical settings and taking studies directly to participants. The Trial Innovation Network, established in 2016 by the National Center for Advancing Translational Sciences to address critical roadblocks in clinical research and accelerate the translational research process, has consulted on over 400 research study proposals to date. Its recommendations for decentralized approaches have included eConsent, participant-informed study design, remote intervention, study task reminders, social media recruitment, and return of results for participants. Some clinical trial elements have worked well when decentralized, while others, including remote recruitment and patient monitoring, need further refinement and assessment to determine their value. Partially decentralized, or “hybrid”, trials offer a first step to optimizing remote methods. Decentralized processes demonstrate potential to improve urban–rural diversity, but their impact on the inclusion of racially and ethnically marginalized populations requires further study. To optimize inclusive participation in decentralized clinical trials, efforts must be made to build trust among marginalized communities and to ensure access to remote technology.
Risk of suicide-related behaviors is elevated among military personnel transitioning to civilian life. An earlier report showed that high-risk U.S. Army soldiers could be identified shortly before this transition with a machine learning model that included predictors from administrative systems, self-report surveys, and geospatial data. Based on this result, a Veterans Affairs and Army initiative was launched to evaluate a suicide-prevention intervention for high-risk transitioning soldiers. To make targeting practical, though, a streamlined model and risk calculator were needed that used only a short series of self-report survey questions.
Methods
We revised the original model in a sample of n = 8335 observations from participants in the Study to Assess Risk and Resilience in Servicemembers-Longitudinal Study (STARRS-LS) who participated in one of three Army STARRS 2011–2014 baseline surveys while in service and in one or more subsequent panel surveys (LS1: 2016–2018, LS2: 2018–2019) after leaving service. We trained ensemble machine learning models with constrained numbers of item-level survey predictors in a 70% training sample. The outcome was self-reported post-transition suicide attempts (SA). The models were validated in the 30% test sample.
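As a rough illustration of the constrained-predictor workflow described above (a minimal sketch with simulated placeholder data, not the authors’ STARRS-LS code or their specific ensemble library), the following Python fragment selects a small fixed number of item-level predictors, fits a stacked ensemble in a 70% training sample, and evaluates ROC-AUC in the 30% test sample:

    # Minimal sketch of a constrained-predictor ensemble; data are simulated placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8335, 100))        # stand-in for item-level survey predictors
    y = rng.binomial(1, 0.01, size=8335)    # stand-in for 12-month post-transition SA

    # 70% training / 30% test split, stratified because the outcome is rare
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)

    # Constrain the model to a small number of predictors (17, as in the best model above)
    model = make_pipeline(
        SelectKBest(mutual_info_classif, k=17),
        StackingClassifier(
            estimators=[("gb", GradientBoostingClassifier(random_state=0)),
                        ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
            final_estimator=LogisticRegression(max_iter=1000)))
    model.fit(X_tr, y_tr)

    # Validate in the held-out 30% test sample
    print("test ROC-AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 2))

Details such as survey weighting, repeated observations per respondent, and the specific ensemble library used by the authors are omitted from this sketch.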
Results
Twelve-month post-transition SA prevalence was 1.0% (s.e. = 0.1). The best constrained model, with only 17 predictors, had a test sample ROC-AUC of 0.85 (s.e. = 0.03). The 10–30% of respondents with the highest predicted risk included 44.9–92.5% of 12-month SAs.
Conclusions
An accurate SA risk calculator based on a short self-report survey can target transitioning soldiers shortly before leaving service for intervention to prevent post-transition SA.
Only a limited number of patients with major depressive disorder (MDD) respond to a first course of antidepressant medication (ADM). We investigated the feasibility of creating a baseline model to predict which patients beginning ADM treatment in the US Veterans Health Administration (VHA) would be among the responders.
Methods
A 2018–2020 national sample of n = 660 VHA patients receiving ADM treatment for MDD completed an extensive baseline self-report assessment near the beginning of treatment and a 3-month self-report follow-up assessment. Using baseline self-report data along with administrative and geospatial data, an ensemble machine learning method was used to develop a model for 3-month treatment response defined by the Quick Inventory of Depression Symptomatology Self-Report and a modified Sheehan Disability Scale. The model was developed in a 70% training sample and tested in the remaining 30% test sample.
Results
In total, 35.7% of patients responded to treatment. The prediction model had an area under the ROC curve (s.e.) of 0.66 (0.04) in the test sample. A strong gradient in probability (s.e.) of treatment response was found across three subsamples of the test sample using training sample thresholds for high [45.6% (5.5)], intermediate [34.5% (7.6)], and low [11.1% (4.9)] probabilities of response. Baseline symptom severity, comorbidity, treatment characteristics (expectations, history, and aspects of current treatment), and protective/resilience factors were the most important predictors.
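To illustrate the threshold-based stratification reported above (a sketch with simulated values, not the study data), probability cut-points can be fixed in the training sample and then applied unchanged to the test sample:

    # Sketch: stratify test-sample patients into low/intermediate/high predicted-response
    # groups using tertile cut-points derived in the training sample. Values are simulated.
    import numpy as np

    rng = np.random.default_rng(1)
    p_train = rng.beta(2, 4, size=462)            # predicted response probabilities, training sample
    p_test = rng.beta(2, 4, size=198)             # predicted response probabilities, test sample
    y_test = rng.binomial(1, p_test)              # simulated observed 3-month response

    lo_cut, hi_cut = np.quantile(p_train, [1/3, 2/3])   # thresholds fixed in the training sample
    stratum = np.digitize(p_test, [lo_cut, hi_cut])     # 0 = low, 1 = intermediate, 2 = high

    for label, s in zip(["low", "intermediate", "high"], [0, 1, 2]):
        print(label, "stratum observed response rate:", round(y_test[stratum == s].mean(), 3))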
Conclusions
Although these results are promising, parallel models to predict response to alternative treatments based on data collected before initiating treatment would be needed for such models to help guide treatment selection.
Fewer than half of patients with major depressive disorder (MDD) respond to psychotherapy. Pre-emptively informing patients of their likelihood of responding could be useful as part of a patient-centered treatment decision-support plan.
Methods
This prospective observational study examined a national sample of 807 patients beginning psychotherapy for MDD at the Veterans Health Administration. Patients completed a self-report survey at baseline and at 3-month follow-up (data collected 2018–2020). We developed a machine learning (ML) model to predict psychotherapy response at 3 months using baseline survey, administrative, and geospatial variables in a 70% training sample. Model performance was then evaluated in the 30% test sample.
Results
32.0% of patients responded to treatment after 3 months. The best ML model had an AUC (SE) of 0.652 (0.038) in the test sample. Among the one-third of patients ranked by the model as most likely to respond, 50.0% in the test sample responded to psychotherapy. In comparison, among the remaining two-thirds of patients, <25% responded to psychotherapy. The model selected 43 predictors, of which nearly all were self-report variables.
Conclusions
Patients with MDD could pre-emptively be informed of their likelihood of responding to psychotherapy using a prediction tool based on self-report data. This tool could meaningfully help patients and providers in shared decision-making, although parallel information about the likelihood of responding to alternative treatments would be needed to inform decision-making across multiple treatments.
In this era of spatially resolved observations of planet-forming disks with the Atacama Large Millimeter Array (ALMA) and large ground-based telescopes such as the Very Large Telescope (VLT), Keck, and Subaru, we still lack statistically relevant information on the quantity and composition of the material that is building the planets, such as the total disk gas mass, the ice content of dust, and the state of water in planetesimals. SPace Infrared telescope for Cosmology and Astrophysics (SPICA) is an infrared space mission concept developed jointly by the Japan Aerospace Exploration Agency (JAXA) and the European Space Agency (ESA) to address these questions. The key unique capabilities of SPICA that enable this research are (1) the wide spectral coverage of $10{-}220\,\mu\mathrm{m}$, (2) the high line detection sensitivity of $(1{-}2) \times 10^{-19}\,\mathrm{W\,m}^{-2}$ with $R \sim 2\,000{-}5\,000$ in the far-IR (SAFARI) and $10^{-20}\,\mathrm{W\,m}^{-2}$ with $R \sim 29\,000$ in the mid-IR (SPICA Mid-infrared Instrument (SMI), spectrally resolving line profiles), (3) the high far-IR continuum sensitivity of 0.45 mJy (SAFARI), and (4) the observing efficiency for point-source surveys. This paper details how mid- to far-infrared spectra will be unique in measuring the gas masses and water/ice content of disks and how these quantities evolve during the planet-forming period. These observations will clarify the crucial transition when disks exhaust their primordial gas and further planet formation requires secondary gas produced from planetesimals. High-spectral-resolution mid-IR spectroscopy is also unique for determining the location of the snowline dividing the rocky and icy mass reservoirs within the disk and how this divide evolves during the build-up of planetary systems. Infrared spectroscopy (mid- to far-IR) of key solid-state bands is crucial for assessing whether extensive radial mixing, which is part of our Solar System history, is a general process occurring in most planetary systems and whether extrasolar planetesimals are similar to our Solar System comets/asteroids. We demonstrate that the SPICA mission concept would allow us to achieve these ambitious science goals through large surveys of several hundred disks within $\sim\!2.5$ months of observing time.
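For orientation (a standard relation rather than a figure taken from the mission documents), the velocity resolution corresponding to a resolving power $R$ is $\Delta v = c/R$, so SMI at $R \sim 29\,000$ reaches $\Delta v \approx 3\times10^{5}\,\mathrm{km\,s^{-1}}/29\,000 \approx 10\,\mathrm{km\,s^{-1}}$, sufficient to resolve Keplerian line profiles in the inner disk, whereas the far-IR resolving power of $R \sim 2\,000{-}5\,000$ corresponds to $\Delta v \approx 60{-}150\,\mathrm{km\,s^{-1}}$ and therefore yields integrated line fluxes rather than resolved profiles.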
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
Aims
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Method
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Results
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
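As a minimal sketch of the kind of polygenic-score regression summarized above (hypothetical column names and simulated data; the published analyses additionally adjusted for cohort and genetic ancestry), the coefficient on a standardized PGS is read as years of AAO per standard deviation of the score:

    # Sketch: regress age at onset (AAO) on a standardized polygenic score (PGS).
    # Data and column names are placeholders, not the study cohorts.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 1000
    df = pd.DataFrame({
        "aao": rng.normal(25, 8, n),          # age at onset, years
        "pgs_scz": rng.normal(0, 1, n),       # schizophrenia PGS
        "sex": rng.integers(0, 2, n),
        "pc1": rng.normal(0, 1, n),           # ancestry principal component
    })
    df["pgs_scz"] = (df["pgs_scz"] - df["pgs_scz"].mean()) / df["pgs_scz"].std()

    fit = smf.ols("aao ~ pgs_scz + sex + pc1", data=df).fit()
    print("beta (years per s.d. of PGS):", round(fit.params["pgs_scz"], 2),
          "s.e.:", round(fit.bse["pgs_scz"], 2))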
Conclusions
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
We have adapted the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) Science Pipelines to process data from the Gravitational-wave Optical Transient Observer (GOTO) prototype. In this paper, we describe how we used the LSST Science Pipelines to conduct forced photometry measurements on nightly GOTO data. By comparing the photometry measurements of sources taken on multiple nights, we find that the precision of our photometry is typically better than 20 mmag for sources brighter than 16 mag. We also compare our photometry measurements against colour-corrected Panoramic Survey Telescope and Rapid Response System photometry and find that the two agree to within 10 mmag (1$\sigma$) for bright (i.e., $\sim 14{\rm th}\,\mathrm{mag}$) sources and to within 200 mmag for faint (i.e., $\sim 18{\rm th}\,\mathrm{mag}$) sources. Additionally, we compare our results to those obtained by GOTO’s own in-house pipeline, gotophoto, and obtain similar results. Based on repeatability measurements, we measure a $5\sigma$ L-band survey depth of between 19 and 20 magnitudes, depending on observing conditions. We assess, using repeated observations of non-varying standard Sloan Digital Sky Survey stars, the accuracy of our uncertainties, which we find are typically overestimated by roughly a factor of two for bright sources (i.e., $< 15{\rm th}\,\mathrm{mag}$), but slightly underestimated (by roughly a factor of 1.25) for fainter sources ($> 17{\rm th}\,\mathrm{mag}$). Finally, we present lightcurves for a selection of variable sources and compare them to those obtained with the Zwicky Transient Facility and Gaia. Although the LSST Science Pipelines are still undergoing active development, our results show that they are already delivering robust forced photometry measurements from GOTO data.
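The repeatability statistic used above can be illustrated as the per-source scatter of repeated forced-photometry measurements about their mean (a sketch with simulated magnitudes, not GOTO data or the actual pipeline code):

    # Sketch: per-source photometric repeatability, in millimagnitudes, from nightly measurements.
    import numpy as np

    rng = np.random.default_rng(3)
    n_sources, n_nights = 500, 20
    true_mag = rng.uniform(13, 19, n_sources)
    mags = true_mag[:, None] + rng.normal(0, 0.02, (n_sources, n_nights))  # simulated photometry

    mean_mag = mags.mean(axis=1)
    rms_mmag = 1000 * mags.std(axis=1, ddof=1)    # scatter about the per-source mean

    bright = mean_mag < 16
    print("median repeatability, m < 16: ", round(np.median(rms_mmag[bright])), "mmag")
    print("median repeatability, m >= 16:", round(np.median(rms_mmag[~bright])), "mmag")

Comparing this empirical scatter with the pipeline-reported uncertainties is what reveals the factor-of-two overestimate for bright sources noted above.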
The past few decades have seen the burgeoning of wide-field, high-cadence surveys, the most formidable of which will be the Legacy Survey of Space and Time (LSST) to be conducted by the Vera C. Rubin Observatory. So new is the field of systematic time-domain survey astronomy, however, that major scientific insights will continue to be obtained using smaller, more flexible systems than the LSST. One such example is the Gravitational-wave Optical Transient Observer (GOTO), whose primary science objective is the optical follow-up of gravitational wave events. The amount and rate of data production by GOTO and other wide-area, high-cadence surveys present a significant challenge to data processing pipelines, which need to operate in near-real time to fully exploit the time domain. In this study, we adapt the Rubin Observatory LSST Science Pipelines to process GOTO data, thereby exploring the feasibility of using this ‘off-the-shelf’ pipeline to process data from other wide-area, high-cadence surveys. In this paper, we describe how we use the LSST Science Pipelines to process raw GOTO frames to ultimately produce calibrated coadded images and photometric source catalogues. After comparing the measured astrometry and photometry to those of matched sources from PanSTARRS DR1, we find that measured source positions are typically accurate to subpixel levels, and that measured L-band photometries are accurate to $\sim50$ mmag at $m_L\sim16$ and $\sim200$ mmag at $m_L\sim18$. These values compare favourably to those obtained using GOTO’s primary, in-house pipeline, gotophoto, in spite of both pipelines having undergone further development and improvement beyond the implementations used in this study. Finally, we release a generic ‘obs package’ that others can build upon, should they wish to use the LSST Science Pipelines to process data from other facilities.
Objective:
To describe epidemiologic and genomic characteristics of a severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreak in a large skilled-nursing facility (SNF), and the strategies that controlled transmission.
Design, setting, and participants:
This cohort study was conducted during March 22–May 4, 2020, among all staff and residents at a 780-bed SNF in San Francisco, California.
Methods:
Contact tracing and symptom screening guided targeted testing of staff and residents; respiratory specimens were also collected through serial point prevalence surveys (PPSs) in units with confirmed cases. Cases were confirmed by real-time reverse transcription–polymerase chain reaction testing for SARS-CoV-2, and whole-genome sequencing (WGS) was used to characterize viral isolate lineages and relatedness. Infection prevention and control (IPC) interventions included restricting from work any staff who had close contact with a confirmed case; restricting movement between units; implementing surgical face masking facility-wide; and using recommended PPE (ie, isolation gown, gloves, N95 respirator, and eye protection) for clinical interactions in units with confirmed cases.
Results:
Of 725 staff and residents tested through targeted testing and serial PPSs, 21 (3%) were SARS-CoV-2 positive: 16 (76%) staff and 5 (24%) residents. Fifteen cases (71%) were linked to a single unit. Targeted testing identified 17 cases (81%), and PPSs identified 4 cases (19%). Most cases (71%) were identified before IPC interventions could be implemented. WGS was performed on SARS-CoV-2 isolates from 4 staff and 4 residents: 5 were of Santa Clara County lineage and the 3 others were distinct lineages.
Conclusions:
Early implementation of targeted testing, serial PPSs, and multimodal IPC interventions limited SARS-CoV-2 transmission within the SNF.
We describe here efforts to create and study magnetized electron–positron pair plasmas, the existence of which in astrophysical environments is well-established. Laboratory incarnations of such systems are becoming ever more possible due to novel approaches and techniques in plasma, beam and laser physics. Traditional magnetized plasmas studied to date, both in nature and in the laboratory, exhibit a host of different wave types, many of which are generically unstable and evolve into turbulence or violent instabilities. This complexity and the instability of these waves stem to a large degree from the difference in mass between the positively and the negatively charged species: the ions and the electrons. The mass symmetry of pair plasmas, on the other hand, results in unique behaviour, a topic that has been intensively studied theoretically and numerically for decades, but experimental studies are still in the early stages of development. A levitated dipole device is now under construction to study magnetized low-energy, short-Debye-length electron–positron plasmas; this experiment, as well as a stellarator device that is in the planning stage, will be fuelled by a reactor-based positron source and make use of state-of-the-art positron cooling and storage techniques. Relativistic pair plasmas with very different parameters will be created using pair production resulting from intense laser–matter interactions and will be confined in a high-field mirror configuration. We highlight the differences between and similarities among these approaches, and discuss the unique physics insights that can be gained by these studies.
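For reference, the Debye length invoked above is the textbook screening scale; for a quasineutral pair plasma in which electrons and positrons share a common density $n$ and temperature $T$, it is $\lambda_{\mathrm{D}} = \sqrt{\varepsilon_0 k_{\mathrm{B}} T / (2 n e^{2})}$, and the ‘short-Debye-length’ condition means the confined plasma spans many $\lambda_{\mathrm{D}}$, so that collective plasma behaviour, rather than single-particle dynamics, governs the system.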
A national need is to prepare for and respond to accidental or intentional disasters categorized as chemical, biological, radiological, nuclear, or explosive (CBRNE). These incidents require specific subject-matter expertise, yet have commonalities. We identify 7 core elements comprising CBRNE science that require integration for effective preparedness planning and public health and medical response and recovery. These core elements are (1) basic and clinical sciences, (2) modeling and systems management, (3) planning, (4) response and incident management, (5) recovery and resilience, (6) lessons learned, and (7) continuous improvement. A key feature is the ability of relevant subject matter experts to integrate information into response operations. We propose the CBRNE medical operations science support expert as a professional who (1) understands that CBRNE incidents require an integrated systems approach, (2) understands the key functions and contributions of CBRNE science practitioners, (3) helps direct strategic and tactical CBRNE planning and responses through first-hand experience, and (4) provides advice to senior decision-makers managing response activities. Recognition of both CBRNE science as a distinct competency and the establishment of the CBRNE medical operations science support expert informs the public of the enormous progress made, broadcasts opportunities for new talent, and enhances the sophistication and analytic expertise of senior managers planning for and responding to CBRNE incidents.
An experiment was carried out to examine the effects of offering beef steers grass silage (GS) as the sole forage, lupins/triticale silage (LTS) as the sole forage, a mixture of LTS and GS at a ratio of 70:30 on a dry matter (DM) basis, vetch/barley silage (VBS) as the sole forage, or a mixture of VBS and GS at a ratio of 70:30 on a DM basis, giving a total of five silage diets. Each of the five silage diets was supplemented with either 2 or 5 kg of concentrates/head/day in a 5 × 2 factorial design to evaluate the five silages at two levels of concentrate intake and to examine possible interactions between silage type and concentrate intake. A total of 80 beef steers were used in the 122-day experiment. The GS was well preserved, while the whole-crop cereal/legume silages had high ammonia-nitrogen (N) concentrations, low lactic acid concentrations and low butyric acid concentrations. For GS, LTS, LTS/GS, VBS and VBS/GS, respectively, silage DM intakes were 6.5, 7.0, 7.2, 6.1 and 6.6 (s.e.d. 0.55) kg/day and live weight gains were 0.94, 0.72, 0.63, 0.65 and 0.73 (s.e.d. 0.076) kg/day. Silage type did not affect carcass fatness, the colour or tenderness of meat, or the fatty acid composition of the intramuscular fat in the longissimus dorsi muscle.
An experiment was carried out to examine the effects of offering beef cattle five silage diets. These were perennial ryegrass silage (PRGS) as the sole forage, tall fescue/perennial ryegrass silage (FGS) as the sole forage, and PRGS in a 50:50 ratio on a dry matter (DM) basis with lupin/triticale silage (LTS), lupin/wheat silage (LWS) or pea/oat silage (POS). Each of the five silage diets was supplemented with either 4 or 7 kg of concentrates/head/day in a five silages × two concentrate intakes factorial design. A total of 90 cattle were used in the 121-day experiment. The grass silages were of medium digestibility and were well preserved. The legume/cereal silages had high ammonia-N, high acetic acid, low lactic acid, low butyric acid and low digestible organic matter concentrations (542, 562 and 502 g/kg DM for LTS, LWS and POS, respectively). Silage treatment did not significantly affect liveweight gain, carcass gain, carcass characteristics, the instrumental assessment of meat quality or the fatty acid composition of the M. longissimus dorsi muscle. In view of the low yields of the legume/cereal crops, it is concluded that the inclusion of spring-sown legume/cereal silages in the diets of beef cattle is unlikely to be advantageous.
Prenatal adversity shapes child neurodevelopment and risk for later mental health problems. The quality of the early care environment can buffer some of the negative effects of prenatal adversity on child development. Retrospective studies, in adult samples, highlight epigenetic modifications as sentinel markers of the quality of the early care environment; however, comparable data from pediatric cohorts are lacking. Participants were drawn from the Maternal Adversity Vulnerability and Neurodevelopment (MAVAN) study, a longitudinal cohort with measures of infant attachment, infant development, and child mental health. Children provided buccal epithelial samples (mean age = 6.99, SD = 1.33 years, n = 226), which were used for analyses of genome-wide DNA methylation and genetic variation. We used a series of linear models to describe the association between infant attachment and (a) measures of child outcome and (b) DNA methylation across the genome. Paired genetic data was used to determine the genetic contribution to DNA methylation at attachment-associated sites. Infant attachment style was associated with infant cognitive development (Mental Development Index) and behavior (Behavior Rating Scale) assessed with the Bayley Scales of Infant Development at 36 months. Infant attachment style moderated the effects of prenatal adversity on Behavior Rating Scale scores at 36 months. Infant attachment was also significantly associated with a principal component that accounted for 11.9% of the variation in genome-wide DNA methylation. These effects were most apparent when comparing children with a secure versus a disorganized attachment style and most pronounced in females. The availability of paired genetic data revealed that DNA methylation at approximately half of all infant attachment-associated sites was best explained by considering both infant attachment and child genetic variation. This study provides further evidence that infant attachment can buffer some of the negative effects of early adversity on measures of infant behavior. We also highlight the interplay between infant attachment and child genotype in shaping variation in DNA methylation. Such findings provide preliminary evidence for a molecular signature of infant attachment and may help inform attachment-focused early intervention programs.
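A minimal sketch of the site-by-site linear modelling described above (simulated methylation values and a single binary attachment contrast; the published analysis used genome-wide array data, additional covariates, and paired genotypes) might look like:

    # Sketch: per-CpG linear models of DNA methylation on infant attachment classification.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n_children, n_sites = 226, 1000
    beta_vals = rng.beta(2, 5, size=(n_children, n_sites))   # simulated methylation beta values
    disorganized = rng.integers(0, 2, n_children)            # secure (0) vs disorganized (1)
    sex = rng.integers(0, 2, n_children)

    X = sm.add_constant(np.column_stack([disorganized, sex]))
    pvals = np.empty(n_sites)
    for j in range(n_sites):
        pvals[j] = sm.OLS(beta_vals[:, j], X).fit().pvalues[1]   # attachment term at site j

    print("sites with p < 1e-3 before multiple-testing correction:", int((pvals < 1e-3).sum()))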
Early detection of karyotype abnormalities, including aneuploidy, could aid producers in identifying animals which, for example, would not be suitable candidate parents. Genome-wide genetic marker data in the form of single nucleotide polymorphisms (SNPs) are now being routinely generated on animals. The objective of the present study was to describe the statistics that could be generated from the allele intensity values of such SNP data to diagnose karyotype abnormalities; of particular interest was whether detection of aneuploidy was possible with both commonly used genotyping platforms in agricultural species, namely the Applied Biosystems™ Axiom™ and the Illumina platform. The hypothesis was tested using a case study of a set of dizygotic X-chromosome monosomy 53,X sheep twins. Genome-wide SNP data were available from the Illumina platform (11 082 autosomal and 191 X-chromosome SNPs) on 1848 male and 8954 female sheep, and from the Axiom™ platform (11 128 autosomal and 68 X-chromosome SNPs) on 383 female sheep. Genotype allele intensity values, either as their original raw values or transformed to the logarithm intensity ratio (LRR), were used to accurately diagnose two dizygotic (i.e. fraternal) twin 53,X sheep, both of which received their single X chromosome from their sire. This is the first reported case of 53,X dizygotic twins in any species. Relative to the mean X-chromosome SNP genotype allele intensity values of normal females, the mean allele intensity value of SNP genotypes on the X chromosome of the two females monosomic for the X chromosome was 7.45 to 12.4 standard deviations lower, and both animals were easily detectable using either the Axiom™ or Illumina genotype platform; the next lowest mean allele intensity value of a female was 4.71 or 3.3 standard deviations below the population mean, depending on the platform used. Both 53,X females could also be detected based on the genotype LRR, although this was more easily achieved when comparing the mean LRR of the X chromosome of each female to the mean LRR of her respective autosomes. On autopsy, the ovaries of the two sheep were small for their age and evidence of prior ovulation was not appreciated. In both sheep, the density of primordial follicles in the ovarian cortex was lower than normally found in ovine ovaries and primary follicle development was not observed. Mammary gland development was very limited. Results substantiate previous studies in other species showing that aneuploidy can be readily detected using SNP genotype allele intensity values that are generally already available, and the approach proposed in the present study was agnostic to genotype platform.
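The screening statistic described above reduces to a z-score of each animal’s mean X-chromosome allele intensity against the female population, which is platform-agnostic; a minimal sketch with simulated intensities (not the study genotypes) is:

    # Sketch: flag putative X-chromosome monosomy from mean X-chromosome allele intensities.
    # A 53,X animal carries roughly half the expected intensity, giving a strongly negative z-score.
    import numpy as np

    rng = np.random.default_rng(5)
    females = rng.normal(1.0, 0.03, 8954)       # simulated normal (54,XX) female intensities
    monosomic = np.array([0.55, 0.57])          # two hypothetical 53,X animals
    x_intensity = np.concatenate([females, monosomic])

    z = (x_intensity - females.mean()) / females.std(ddof=1)
    flagged = np.where(z < -5)[0]               # conservative cut-off; tune per platform
    print("putative X-monosomy animals:", flagged, "z-scores:", np.round(z[flagged], 1))
    # The same logic applies to the log R ratio, LRR = log2(observed/expected intensity),
    # compared between each animal's X chromosome and its autosomes.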
Many medications administered to patients with schizophrenia possess anticholinergic properties. When aggregated, pharmacological treatments may result in a considerable anticholinergic burden. The extent to which anticholinergic burden has a deleterious effect on cognition and impairs ability to participate in and benefit from psychosocial treatments is unknown.
Method
Seventy patients were followed for approximately 3 years. The MATRICS Consensus Cognitive Battery (MCCB) was administered at baseline. Anticholinergic burden was measured with the Anticholinergic Cognitive Burden (ACB) scale. Ability to benefit from psychosocial programmes was measured using the DUNDRUM-3 Programme Completion Scale (D-3) at baseline and follow-up. Psychiatric symptoms were measured using the Positive and Negative Syndrome Scale (PANSS). Total antipsychotic dose was measured using chlorpromazine equivalents. Functioning was measured using the Social and Occupational Functioning Assessment Scale (SOFAS).
Results
Mediation analysis found that the influence of anticholinergic burden on the ability to participate in and benefit from psychosocial programmes was completely mediated by the MCCB. For every 1-unit increase on the ACB scale, DUNDRUM-3 change scores decreased by 0.27 points. This relationship appears specific to anticholinergic burden and not to total antipsychotic dose. Moreover, the mediation appears to be specific to cognition and not to psychopathology. Baseline functioning also acted as a mediator, but only when the MCCB was not controlled for.
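A minimal product-of-coefficients sketch of the mediation reported above (ACB → MCCB → DUNDRUM-3 change, with simulated data and without the covariates or bootstrapped confidence intervals a full analysis would use):

    # Sketch: simple mediation of anticholinergic burden (ACB) on DUNDRUM-3 change via cognition (MCCB).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 70
    df = pd.DataFrame({"acb": rng.integers(0, 9, n).astype(float)})
    df["mccb"] = 40 - 2.0 * df["acb"] + rng.normal(0, 5, n)        # mediator: cognition
    df["d3_change"] = 0.05 * df["mccb"] + rng.normal(0, 1, n)      # outcome: programme completion change

    a = smf.ols("mccb ~ acb", data=df).fit().params["acb"]         # path a: ACB -> MCCB
    fit_b = smf.ols("d3_change ~ mccb + acb", data=df).fit()
    b, c_prime = fit_b.params["mccb"], fit_b.params["acb"]         # path b and direct effect c'
    print("indirect effect a*b:", round(a * b, 2), " direct effect c':", round(c_prime, 2))

Complete mediation corresponds to an indirect effect a*b that accounts for the total effect while the direct path c' is no longer significant.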
Conclusions
Anticholinergic burden has a significant impact on patients’ ability to participate in and benefit from psychosocial treatment programmes. Physicians need to be mindful of the cumulative effect that medications can have on patient cognition, functional capacity and ability to benefit from psychosocial treatments.
Despite documented increases in emergency department (ED) mental health (MH) presentations, there are inconsistent findings on the characteristics of patients with repeat presentations to pediatric EDs (PEDs) for MH concerns. Our study sought to explore the characteristics of MH patients with repeat PED visits and determine predictors of return visits, of earlier repeat visits, and of more frequent repeat visits.
Methods
We examined data collected prospectively in a clinical database looking at MH presentations to a crisis intervention program housed within a PED from October 2006 to December 2011. Predictive models based on demographic and clinical variables were constructed using logistic, Cox, and negative binomial regression.
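As a sketch of the three model families named above (logistic for any repeat visit, Cox for time to first return, negative binomial for the number of repeat visits), with simulated placeholder variables rather than the clinical database:

    # Sketch: the three regression families used to model repeat PED mental health visits.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 2900
    df = pd.DataFrame({
        "female": rng.integers(0, 2, n),
        "psychotropic_meds": rng.integers(0, 2, n),
        "repeat_visit": rng.integers(0, 2, n),                      # any repeat visit (0/1)
        "days_to_return": rng.integers(1, 365, n).astype(float),    # time to return or censoring
        "n_repeats": rng.poisson(0.8, n),                           # count of repeat visits
    })

    logit = smf.logit("repeat_visit ~ female + psychotropic_meds", data=df).fit(disp=False)
    cox = smf.phreg("days_to_return ~ female + psychotropic_meds", data=df,
                    status=df["repeat_visit"].values).fit()
    negbin = smf.glm("n_repeats ~ female + psychotropic_meds", data=df,
                     family=sm.families.NegativeBinomial()).fit()
    print(np.exp(logit.params), np.exp(cox.params), np.exp(negbin.params), sep="\n")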
Results
A total of 4,080 presentations to the PED were made by 2,900 children and youth. Repeat visits accounted for almost half (45.8%) of all presentations. Multivariable analysis identified five variables that independently predicted greater odds of having repeat presentations, greater risk of earlier repeat presentations, and greater risk of frequent repeat presentations. The five variables were: female sex, living in the metropolitan community close to the PED, being in the care of child protective services, taking psychotropic medications, and presenting with an actionable need in the area of mood disturbances.
Conclusions
Repeat visits account for a large portion of all MH presentations to the PED. Furthermore, several patient characteristics are significant predictors of repeat PED use and of repeating use sooner and more frequently. Further research is needed to examine interventions targeting this patient group to ensure appropriate MH patient management.
Clinical databases in congenital and paediatric cardiac care provide a foundation for quality improvement, research, policy evaluations and public reporting. Structured audits verifying data integrity allow database users to be confident in these endeavours. We report on the initial audit of the Pediatric Cardiac Critical Care Consortium (PC4) clinical registry.
Materials and methods
Participants reviewed the entire registry to determine key fields for audit, and defined major and minor discrepancies for the audited variables. In-person audits at the eight initial participating centres were conducted during a 12-month period. The data coordinating centre randomly selected intensive care encounters for review at each site. The audit consisted of source data verification and blinded chart abstraction, comparing findings by the auditors with those entered in the database. We also assessed completeness and timeliness of case submission. Quantitative evaluation of completeness, accuracy, and timeliness of case submission is reported.
Results
We audited 434 encounters and 29,476 data fields. The aggregate overall accuracy was 99.1%, and the major discrepancy rate was 0.62%. Across hospitals, the overall accuracy ranged from 96.3 to 99.5%, and the major discrepancy rate ranged from 0.3 to 0.9%; seven of the eight hospitals submitted >90% of cases within 1 month of hospital discharge. There was no evidence for selective case omission.
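As a small worked check of how the aggregate rates above translate into audited fields (using only the numbers quoted in this abstract):

    # Worked check: convert the aggregate audit rates into approximate field counts.
    fields_audited = 29_476
    overall_accuracy = 0.991
    major_discrepancy_rate = 0.0062

    total_discrepancies = round(fields_audited * (1 - overall_accuracy))    # about 265 fields
    major_discrepancies = round(fields_audited * major_discrepancy_rate)    # about 183 fields
    print(total_discrepancies, major_discrepancies)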
Conclusions
Based on a rigorous audit process, data submitted to the PC4 clinical registry appear complete, accurate, and timely. The collaborative will maintain ongoing efforts to verify the integrity of the data to promote science that advances quality improvement efforts.