This study aimed to refine the content of a new patient-reported outcome (PRO) measure via cognitive interviewing techniques to assess the unique presentation of depressive symptoms in older adults with cancer (OACs).
Methods
OACs (≥70 years) with a history of a depressive disorder were administered a draft measure of the Older Adults with Cancer – Depression (OAC-D) Scale, then participated in a semi-structured cognitive interview to provide feedback on the appropriateness, comprehensibility, and overall acceptability of the measure. Interviews were audio-recorded and transcribed, and qualitative methods guided revision of scale content and structure.
Results
OACs (N = 10) with a range of cancer diagnoses completed cognitive interviews. Participants felt that the draft measure took a reasonable amount of time to answer and was easily understandable. They favored having item prompts and response anchors repeated with each item for ease of completion, and they helped identify phrasing and wording of key terms consistent with the authors’ intended constructs. From this feedback, a revised version of the OAC-D was created.
Significance of results
The OAC-D Scale is the first PRO developed specifically for use with OACs. The use of expert and patient input and rigorous cognitive interviewing methods provides a conceptually accurate means of assessing the unique symptom experience of OACs with depression.
A modification of stimulus sampling theory is presented. The restriction that each stimulus element is conditioned to one and only one response is replaced with the notion of a scale of conditioning for each element. This variation provides a context in which such variables as reward magnitude and motivation can be viewed as determiners of behavior. Some experimental results on multiple response problems also have a natural interpretation in terms of these ideas.
A theory for discrimination learning which incorporates the concept of an observing response is presented. The theory is developed in detail for experimental procedures in which two stimuli are employed and two responses are available to the subject. Applications of the model to cases involving probabilistic and nonprobabilistic schedules of reinforcement are considered; some predictions are derived and compared with experimental results.
A model for the acquisition of responses in an anticipatory rote serial learning situation is presented. The model is developed in detail for the case of a long intertrial interval and employed to fit data where the list length is varied from 8 to 18 words. Application of the model to the case of a short intertrial interval is considered; some predictions are derived and checked against experimental data.
Unsupervised classification is becoming an increasingly common method to objectively identify coherent structures within both observed and modelled climate data. However, in most applications using this method, the user must choose the number of classes into which the data are to be sorted in advance. Typically, a combination of statistical methods and expertise is used to choose the appropriate number of classes for a given study; however, it may not be possible to identify a single “optimal” number of classes. In this work, we present a heuristic method, the ensemble difference criterion, for unambiguously determining the maximum number of classes supported by model data ensembles. This method requires robustness in the class definition between simulated ensembles of the system of interest. For demonstration, we apply this to the clustering of Southern Ocean potential temperatures in a CMIP6 climate model, and show that the data supports between four and seven classes of a Gaussian mixture model.
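The ensemble difference criterion itself is specific to this work, but the underlying robustness check — that a k-class Gaussian mixture yields consistent class definitions across independent ensemble members — can be sketched as follows. This is a hypothetical illustration on synthetic one-dimensional "temperature" data, not the authors' code; the function names, tolerance, and cluster parameters are invented.

```python
# Minimal sketch of class-robustness across ensemble members, assuming a
# 1-D Gaussian mixture; synthetic stand-in for ocean temperature data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def simulate_member():
    # three well-separated "water-mass" temperature clusters
    return np.concatenate([rng.normal(m, 0.3, 200) for m in (1.0, 4.0, 8.0)])[:, None]

def fitted_means(data, k):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    return np.sort(gm.means_.ravel())

k = 3
means_a = fitted_means(simulate_member(), k)
means_b = fitted_means(simulate_member(), k)

# classes count as "robust" at this k if matched class means agree
# between ensemble members to within a chosen tolerance
robust = np.max(np.abs(means_a - means_b)) < 0.5
print(robust)
```

In the paper's setting the same comparison would be repeated over increasing k; the largest k at which classes remain robust across ensemble members is the maximum supported by the data.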
In 2016, the National Center for Advancing Translational Science launched the Trial Innovation Network (TIN) to address barriers to efficient and informative multicenter trials. The TIN provides a national platform, working in partnership with 60+ Clinical and Translational Science Award (CTSA) hubs across the country to support the design and conduct of successful multicenter trials. A dedicated Hub Liaison Team (HLT) was established within each CTSA to facilitate connection between the hubs and the newly launched Trial and Recruitment Innovation Centers. Each HLT serves as an expert intermediary, connecting CTSA Hub investigators with TIN support, and connecting TIN research teams with potential multicenter trial site investigators. The cross-consortium Liaison Team network was developed during the first TIN funding cycle, and it is now a mature national network at the cutting edge of team science in clinical and translational research. The CTSA-based HLT structures and the external network structure have been developed in collaborative and iterative ways, with methods for shared learning and continuous process improvement. In this paper, we review the structure, function, and development of the Liaison Team network, discuss lessons learned during the first TIN funding cycle, and outline a path toward further network maturity.
Volume 2 of The Cambridge History of Global Migrations presents an authoritative overview of the various continuities and changes in migration and globalization from the 1800s to the present day. Despite revolutionary changes in communication technologies, the growing accessibility of long-distance travel, and globalization across major economies, the rise of nation-states empowered immigration regulation and bureaucratic capacities for enforcement that curtailed migration. One major theme worldwide across the post-1800 centuries was the differentiation between “skilled” and “unskilled” workers, often considered through a racialized lens; it emerged as the primary divide between greater rights of immigration and citizenship for the former, and confinement to temporary or unauthorized migrant status for the latter. Through thirty-one chapters, this volume further evaluates the long global history of migration; and it shows that despite the increased disciplinary systems, the primacy of migration remains and continues to shape political, economic, and social landscapes around the world.
Childhood trauma and adversity are common across societies and have strong associations with physical and psychiatric morbidity throughout the life-course. One possible mechanism through which childhood trauma may predispose individuals to poor psychiatric outcomes is via associations with brain structure. This study aimed to elucidate the associations between childhood trauma and brain structure across two large, independent community cohorts.
Methods
The two samples comprised (i) a subsample of Generation Scotland (n=1,024) and (ii) individuals from UK Biobank (n=27,202), giving a combined n=28,226 for mega-analysis. MRI scans were processed using FreeSurfer, providing cortical, subcortical, and global brain metrics. Regression models were used to determine associations of childhood trauma measures with brain metrics and psychiatric phenotypes.
Results
Childhood trauma was associated with lifetime depression across cohorts (OR 1.06 GS, 1.23 UKB), and was related to early onset and recurrent course within both samples. There was evidence for associations between childhood trauma and structural brain metrics, including reduced global brain volume and reduced cortical surface area, with the largest effects in the frontal (β=−0.0385, SE=0.0048, p(FDR)=5.43×10⁻¹⁵) and parietal lobes (β=−0.0387, SE=0.005, p(FDR)=1.56×10⁻¹⁴). At a regional level, the ventral diencephalon (VDc) displayed significant associations with childhood trauma measures across both cohorts and at mega-analysis (β=−0.0232, SE=0.0039, p(FDR)=2.91×10⁻⁸). There were also associations with reduced hippocampus, thalamus, and nucleus accumbens volumes.
Discussion
Associations between childhood trauma and reduced global and regional brain volumes were found across two independent UK cohorts and at mega-analysis, providing robust evidence for a lasting effect of childhood adversity on brain structure.
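The p(FDR) values reported above reflect false discovery rate correction across the many regional tests. Assuming the standard Benjamini–Hochberg adjustment (the abstract does not name the procedure), the computation can be sketched as:

```python
# Benjamini-Hochberg adjusted p-values (a common FDR correction);
# illustrative sketch, not the study's analysis code.
import numpy as np

def bh_adjust(pvals):
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # raw BH values: p_(i) * m / rank_i over the sorted p-values
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downward
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.clip(adj, 0.0, 1.0)
    out = np.empty(m)
    out[order] = adj
    return out

print(bh_adjust([0.005, 0.02, 0.03, 0.04]))
```

A region is then declared significant when its adjusted p-value falls below the chosen FDR threshold.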
Most farm assurance schemes in the UK aim, at least in part, to provide assurances to consumers and retailers of compliance with welfare standards. Inclusion of welfare outcome assessments in the relevant inspection procedures provides a mechanism to improve animal welfare within assurance schemes. In this study, taking laying hens as an example, we describe a process for dealing with the practical difficulties of achieving this in two UK schemes: Freedom Food and Soil Association. The key challenges arise from selecting the most appropriate measures, defining sampling strategies that are feasible and robust, ensuring assessors can deliver a consistent evaluation, and establishing a mechanism to achieve positive change. After a consultation exercise and pilot study, five measures (feather cover, cleanliness, aggressive behaviour, management of sick or injured birds, and beak trimming) were included within the inspection procedures of the schemes. The chosen sampling strategy of assessing 50 birds without handling provided reasonable certainty at a scheme level but less certainty at an individual farm level. Despite the inherent limitations of a time- and cost-sensitive certification assessment, the approach adopted provides a foundation for welfare improvement: it can highlight areas of concern requiring attention, enable schemes to promote the use of outcome scoring as a management tool, promote the timely dissemination of relevant technical information, and increase the scrutiny of standards important for the welfare of the birds.
This paper describes a case example where initiatives from private assurance schemes, scientists, charities, government and egg companies have improved the welfare of UK cage-free laying hens. The RSPCA and Soil Association farm assurance schemes introduced formal welfare outcome assessment into their annual audits of laying-hen farms in 2011. Feather loss was assessed on 50 birds from each flock on a three-point scale for two body regions: Head and Neck (HN) and Back and Vent (BV). In support of the observations, assessors were trained in feedback techniques designed to encourage change in farmer behaviour to improve welfare. In addition, during Year 2 farmers were asked about changes they had made, and intended to make on their farms. During 2011-2013 there were also wider industry initiatives to improve feather cover. Data were analysed from 830 and 743 farms in Year 1 and Year 2, respectively. From Year 1 to Year 2 there was a significant reduction in the prevalence of feather loss from 31.8% (9.6% severe) to 20.8% (6% severe) for the HN region, and from 33.1% (12.6% severe) to 22.7% (8.3% severe) for BV. Fifty-nine percent of 662 farmers reported they had made changes on their farms during Year 1 to improve bird welfare. For such a substantial industry change, attributing causation to specific initiatives is difficult; however, this is the first study to demonstrate the value to animal welfare of certification schemes monitoring the effectiveness of their own and other industry-led interventions to guide future policy.
Background: Poorly-defined cases (PDCs) of focal epilepsy are cases with no or subtle MRI abnormalities, or with abnormalities extending beyond the lesion visible on MRI. Here, we evaluated the utility of Arterial Spin Labeling (ASL) MRI perfusion in PDCs of pediatric focal epilepsy. Methods: ASL MRI was obtained in 25 consecutive children presenting with poorly-defined focal epilepsy (20 MRI-positive, 5 MRI-negative). Qualitative visual inspection and quantitative analysis with asymmetry and Z-score maps were used to detect perfusion abnormalities. ASL results were compared to the hypothesized epileptogenic zone (EZ) derived from other clinical/imaging data and the resection zone in patients with Engel I/II outcome and >18-month follow-up. Results: Qualitative analysis revealed perfusion abnormalities in 17/25 total cases (68%), 17/20 MRI-positive cases (85%), and none of the MRI-negative cases. Quantitative analysis confirmed all cases with abnormalities on qualitative analysis, but found 1 additional true-positive and 4 false-positives. Concordance with the surgically-proven EZ was found in 10/11 cases qualitatively (sensitivity=91%, specificity=50%), and 11/11 cases quantitatively (sensitivity=100%, specificity=23%). Conclusions: ASL perfusion may support the hypothesized EZ, but has limited localization benefit in MRI-negative cases. Nevertheless, owing to its non-invasiveness and ease of acquisition, ASL could be a useful addition to the pre-surgical MRI evaluation of pediatric focal epilepsy.
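The asymmetry and Z-score maps used in the quantitative analysis compare each voxel's perfusion against the mirrored hemisphere and against a normative mean, respectively. A hypothetical illustration (not the study's pipeline; the array names and normative values are invented):

```python
# Sketch of left-right asymmetry and Z-score maps of the kind used to
# flag perfusion abnormalities in ASL cerebral blood flow (CBF) images.
import numpy as np

def asymmetry_map(cbf, axis=0):
    # flip along the left-right axis to compare each voxel with its mirror
    mirrored = np.flip(cbf, axis=axis)
    return (cbf - mirrored) / ((cbf + mirrored) / 2.0)

def z_map(cbf, control_mean, control_sd):
    # voxelwise Z-scores against a normative (control) mean/SD map
    return (cbf - control_mean) / control_sd

cbf = np.array([[60.0, 55.0],
                [50.0, 40.0]])   # toy 2x2 "CBF" values
print(asymmetry_map(cbf, axis=1))
```

In practice, voxels exceeding a chosen asymmetry or |Z| threshold would be flagged as perfusion abnormalities for review.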
Background: Repetitive sub-concussive head impacts have been associated with changes in brain architecture and neurological symptoms. In this study, we examined the association between repetitive sub-concussive impacts, impact burden, and blood-brain barrier (BBB) integrity in university football players. Methods: 59 university football players were followed over the 2019 season. Athletes with diagnosed concussion, and those sustaining impacts that alerted a sideline impact monitor (relayed by ferroelectric helmet sensors), underwent dynamic contrast-enhanced MRI (DCE-MRI) within one week of injury/alert and 4 weeks following the initial incident. Results: Helmets recorded 2,648 impacts over 48 cumulative hours. Eight concussions occurred during the 2019 season (2.82 per 1,000 activity hours). On average, athletes with a diagnosed concussion had 55.3 impacts to the front sensor, compared to 14.1 in non-concussed athletes. Athletes who consented to DCE-MRI (n=5) had 10.78% BBB disruption (BBB-D) within a week of concussion/alert, and 6.77% BBB-D at 4 weeks. Conclusions: This is the first study to quantify BBB integrity relative to head impact burden. This preliminary study highlights the potential of impact-detecting helmets to provide relevant impact characteristics and offers a foundation for future work on the neurological consequences of repetitive sub-concussive impacts.
Background: Negative body image predicts many adverse outcomes. The current study prospectively examined patterns of body esteem development in early adolescence and identified predictors of developmental subtypes. Methods: 328 girls and 429 boys reported annually across a 4-year period (Mage at baseline = 11.14, SD = 0.35) on body esteem, appearance ideal internalization, perceived sociocultural pressures, appearance comparisons, appearance-related teasing, self-esteem, positive and negative affect, and dietary restraint. We performed latent class growth analyses to identify the most common trajectories of body esteem development and examine risk and protective factors for body image development. Results: Three developmental subgroups were identified: (a) high body esteem (39.1%); (b) moderate body esteem (46.1%); and (c) low body esteem (14.8%). Body esteem was stable within the low trajectory and there were minor fluctuations in the high and moderate trajectories. Greater appearance-related teasing, lower self-esteem, less positive affect, and higher dietary restraint predicted the low trajectory, whereas higher self-esteem and lower dietary restraint best predicted the high trajectory. Conclusions: Low body esteem appears to be largely stable from age 11 years. Prevention programming may be enhanced by incorporating components to address transdiagnostic resilience factors such as self-esteem and positive affect.
Introduction: The New Brunswick Trauma Registry is a database of injury admissions from eight hospitals throughout the province. The data track individuals in-hospital. By linking this information with vital statistics, we can observe outcomes post-discharge and model health outcomes for participants. We want to know how outcomes for trauma patients compare with the general population post-discharge. Methods: Using data from 2014-15, we followed over 2,100 trauma registry observations for one year and tracked mortality rate per 1,000 people by age group. We also compared the outcomes of this group to all Discharge Abstract Database (DAD) entries in the province (circa 7,500 total). We tracked mortality in-hospital, at six months, and at one year after discharge. We truncated age into groups aged 40-64, 65-84, and 85 or older. Results: In-hospital mortality among those in the trauma registry is approximately 20 per 1,000 people for those aged 40-64, 50 per 1,000 people for those aged 65-84, and 150 per 1,000 people aged 85 or older. For the oldest age group this is in line with the expected population mortality rate; for the younger two groups these estimates are approximately 2-4 times higher than expected mortality. The mortality at six-month follow-up for both of the younger groups remains higher than expected. At one-year follow-up, the mortality for the 65-84 age group returns to the expected population baseline, but remains higher for those aged 40-64. Injury is the cause of death for nearly 50% of those who die in hospital. After discharge, neoplasms and heart disease are the most common causes of death. Trends from the DAD are similar, with lower mortality overall. Of note, cardiac causes of death account for nearly as many deaths in the 6 months after the injury in the 40-64 age group as the injury itself. Conclusion: Mortality rates remain high upon discharge, for up to a year later in some age groups, and the causes of death are not injury-related.
Some evidence suggests that the injury could have been related to the eventual cause of death (e.g., dementia), but questions remain about the possibility that trauma-mitigating care increases the risk of mortality from comorbidities. For example, cardiac death, which is largely preventable, is a significant cause of death in the 40-64 age group after discharge. Including an assessment of Framingham risk factors as part of the patient's rehabilitation prescription may reduce mortality.
Introduction: Buprenorphine/naloxone (buprenorphine) has proven to be a life-saving intervention amidst the ongoing opioid epidemic in Canada. Research has shown benefits to initiating buprenorphine from the emergency department (ED), including improved treatment retention, systemic health care savings, and fewer drug-related visits to the ED. Despite this, there has been little to no uptake of this evidence-based practice in our department. This qualitative study aimed to determine the local barriers and potential solutions to initiating buprenorphine in the ED and to gain an understanding of physician attitudes and behaviours regarding harm reduction care and opioid use disorder management. Methods: ED physicians at a midsize Atlantic hospital were recruited by convenience sampling to participate in semi-structured, privately conducted interviews. Audio recordings were transcribed verbatim, and de-identified transcripts were uploaded to NVivo 12 Plus for concept-driven and inductive coding; a hierarchy of open, axial, and selective coding was employed. Transcripts were independently reviewed by a local qualitative research expert, and themes were compared for similarity to limit bias. Interview saturation was reached after 7 interviews. Results: Emergent themes included a narrow scope of harm reduction care that primarily focused on abstinence-based therapies, and a multitude of biases: feelings of deception, fear of diversion, a sense that buprenorphine induction was too time-consuming for the ED, and differentiating patients with opioid use disorder from 'medically ill' patients. Several barriers and proposed solutions to initiating buprenorphine from the ED were elicited, including lack of training and need for formal education, poor familiarity with buprenorphine, the need for an algorithm and community bridge program, and formal supports such as an addictions consult team for the ED.
Conclusion: This study identified several opportunities for improved care for patients with addictions presenting to our ED. Future education will focus on harm reduction care, specifically strategies for managing patients who wish to continue using substances, and on addressing the biases elicited and dispelling common myths. A locally informed buprenorphine pathway will be developed. In the future, this study may be used to advocate for improved formal supports for our department, including an addictions consult team.
Introduction: Determining fluid status prior to resuscitation provides a more accurate guide for appropriate fluid administration in the setting of undifferentiated hypotension. Emergency Department (ED) point of care ultrasound (PoCUS) has been proposed as a potential non-invasive, rapid, repeatable investigation to ascertain inferior vena cava (IVC) characteristics. Our goal was to determine the feasibility of using PoCUS to measure IVC size and collapsibility. Methods: This was a planned secondary analysis of data from a prospective multicentre international study investigating PoCUS in ED patients with undifferentiated hypotension. We prospectively collected data on IVC size and collapsibility using a standard data collection form in 6 centres. The primary outcome was the proportion of patients with a clinically useful (determinate) scan, defined as a clearly visible intrahepatic IVC, measurable for size and collapse. Descriptive statistics are provided. Results: A total of 138 scans were attempted on 138 patients; 45.7% were women and the median age was 58 years. Overall, 129 scans (93.5%; 95% CI 87.9 to 96.7%) were determinate. 131 (94.9%; 89.7 to 97.7%) were determinate for IVC size, and 131 (94.9%; 89.7 to 97.7%) were determinate for collapsibility. Conclusion: In this analysis of 138 ED patients with undifferentiated hypotension, the vast majority of PoCUS scans to investigate IVC characteristics were determinate. Future work should include analysis of the value of IVC size and collapsibility in determining fluid status in this group.
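The determinate-scan proportions above are reported with 95% confidence intervals. The abstract does not state which interval method was used; a Wilson score interval, one common choice for binomial proportions, gives values close to those reported:

```python
# Wilson score interval for a binomial proportion; illustrative sketch,
# not necessarily the method used in the paper.
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for successes/n."""
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom, (center + margin) / denom

lo, hi = wilson_ci(129, 138)  # determinate scans overall
print(f"{129/138:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

This yields roughly 88.1% to 96.5%, near the reported 87.9 to 96.7% (the small discrepancy suggests the authors used a different interval method, such as Clopper-Pearson).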
Introduction: Patients presenting to the emergency department (ED) with hypotension have a high mortality rate and require careful yet rapid resuscitation. The use of cardiac point of care ultrasound (PoCUS) in the ED has progressed beyond the basic indications of detecting pericardial fluid and activity in cardiac arrest. We examined whether finding left ventricular dysfunction (LVD) on emergency physician-performed PoCUS reliably predicts the presence of cardiogenic shock in hypotensive ED patients. Methods: We prospectively collected PoCUS findings performed in 135 ED patients with undifferentiated hypotension as part of an international study. Patients with clearly identified etiologies for hypotension were excluded, along with other specific presumptive diagnoses. LVD was defined as identification of a generally hypodynamic LV in the setting of shock. PoCUS findings were collected using a standardized protocol and data collection form. All scans were performed by PoCUS-trained emergency physicians. Final shock type was defined as cardiogenic or non-cardiogenic by independent specialist blinded chart review. Results: All 135 patients had complete follow-up. Median age was 56 years, and 53% of patients were male. Disease prevalence for cardiogenic shock was 12%, and the mortality rate was 24%. The presence of LVD on PoCUS had a sensitivity of 62.50% (95% CI 35.43% to 84.80%), specificity of 94.12% (88.26% to 97.60%), positive LR of 10.62 (4.71 to 23.95), negative LR of 0.40 (0.21 to 0.75), and accuracy of 90.37% (84.10% to 94.77%) for detecting cardiogenic shock. Conclusion: Detecting left ventricular dysfunction on PoCUS in the ED may be useful in confirming the underlying shock type as cardiogenic in otherwise undifferentiated hypotensive patients.
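The reported sensitivity, specificity, and likelihood ratios are consistent with a 2×2 table of TP=10, FN=6, FP=7, TN=112; these counts are back-calculated from the percentages here for illustration and are not stated in the abstract. The core diagnostic-accuracy calculations:

```python
# Diagnostic test statistics from a 2x2 table; counts are a hypothetical
# reconstruction consistent with the abstract's reported percentages.
tp, fn, fp, tn = 10, 6, 7, 112

sensitivity = tp / (tp + fn)                     # TP among all cardiogenic
specificity = tn / (tn + fp)                     # TN among all non-cardiogenic
lr_pos = sensitivity / (1 - specificity)         # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity         # negative likelihood ratio
accuracy = (tp + tn) / (tp + fn + fp + tn)

print(f"sens={sensitivity:.2%} spec={specificity:.2%} "
      f"LR+={lr_pos:.2f} LR-={lr_neg:.2f} acc={accuracy:.2%}")
```

These reproduce the abstract's point estimates (62.50%, 94.12%, 10.62, 0.40, 90.37%); the confidence intervals would require an interval method on top of these counts.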