Acute stroke treatments are highly time-sensitive, with geographical disparities affecting access to care. This study examined the impact of driving distance to the nearest comprehensive stroke center (CSC) and rurality on the use of thrombectomy or thrombolysis in Ontario, Canada.
Methods:
This retrospective cohort study used administrative data to identify adults hospitalized with acute ischemic stroke between 2017 and 2022. Driving time from patients’ residences to the nearest CSC was calculated using the Ontario Road Network File and postal codes. Rurality was categorized using postal codes. Multivariable logistic regression, adjusted for baseline differences, estimated the association between driving distance and treatment with thrombectomy (primary outcome) or thrombolysis (secondary outcome). Driving time was modeled as a continuous variable using restricted cubic splines.
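The methods model driving time with restricted cubic splines inside a logistic regression. As a rough illustration of the spline part only, here is a minimal NumPy sketch of a Harrell-style restricted cubic spline basis; the knot locations below are hypothetical, since the abstract does not report the study's knots.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted (natural) cubic spline basis, Harrell's parameterization.

    Returns columns [x, C_1(x), ..., C_{k-2}(x)].  The fitted curve is
    cubic between knots but constrained to be linear beyond the boundary
    knots, which stabilizes estimates in the sparse tails of driving time.
    """
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    d = (t[-1] - t[0]) ** 2  # normalization so coefficients stay comparable

    def pos3(u):
        # truncated cube: (u)_+^3
        return np.clip(u, 0.0, None) ** 3

    cols = [x]
    for j in range(k - 2):
        cj = (pos3(x - t[j])
              - pos3(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
              + pos3(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2])) / d
        cols.append(cj)
    return np.column_stack(cols)

# Hypothetical knots (minutes of driving time); chosen for illustration,
# not taken from the study.
X = rcs_basis(np.linspace(0, 300, 61), [10, 60, 120, 240])
```

The resulting columns would enter the logistic model alongside the adjustment covariates; the aORs at 120 and 240 minutes reported in the results are read off the fitted spline curve.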
Results:
Data from 57,678 patients (median age 74 years, IQR 64–83) were analyzed. Increased driving time was negatively associated with thrombectomy in a nonlinear fashion. Patients living 120 minutes from a CSC were 20% less likely to receive thrombectomy (adjusted odds ratio [aOR] 0.80, 95% CI 0.62–1.04), and those 240 minutes away were 60% less likely (aOR 0.41, 95% CI 0.28–0.60). Driving time did not affect thrombolysis rates, even at 240 minutes (aOR 1.0, 95% CI 0.70–1.42). Thrombectomy use was similar in medium urban areas (aOR 0.80, 95% CI 0.56–1.16) and small towns (aOR 0.78, 95% CI 0.57–1.06) compared to large urban areas.
Conclusion:
Thrombolysis access is equitable across Ontario, but thrombectomy access decreases with increased driving distance to CSCs. A multifaceted approach, combining healthcare policy innovation and infrastructure development, is necessary for equitable thrombectomy delivery.
At least 200 billion black soldier fly (Hermetia illucens) larvae (BSFL) are reared each year as food and feed, and the insect farming industry is projected to grow rapidly. Despite interest by consumers, producers, and legislators, no empirical evidence exists to guide producers in practicing humane – or instantaneous – slaughter for these novel mini-livestock. BSFL may be slaughtered via freezing, boiling, grinding, or other methods; however, standard operating procedures (SOPs) and equipment design may affect the likelihood of instantaneous death using these methods. We tested how larval body size and particle size plate hole diameter affect the likelihood of instantaneous death for black soldier fly larvae that are slaughtered using a standard meat grinder. Larval body size did not affect the likelihood of instantaneous death for larvae that are 106–175 mg in mass. However, particle size plate hole diameter had a significant effect on the likelihood of instantaneous death, with only 54% of larvae experiencing an instant death when using the largest particle size plate (12-mm hole diameter) compared to 84% using the smallest particle size plate (2.55 mm). Nevertheless, a higher percentage of instantaneous death (up to 99%) could be achieved by reducing the proportion of larvae that become stuck in the machine. We conclude by outlining specific recommendations to support producers in achieving a 99% instantaneous death rate through SOPs to be used with similarly designed machines. We also develop a protocol for producers who wish to test their own grinding SOPs.
Stroke outcomes research requires risk-adjustment for stroke severity, but this measure is often unavailable. The Passive Surveillance Stroke SeVerity (PaSSV) score is an administrative data-based stroke severity measure that was developed in Ontario, Canada. We assessed the geographical and temporal external validity of PaSSV in British Columbia (BC), Nova Scotia (NS) and Ontario, Canada.
Methods:
We used linked administrative data in each province to identify adult patients with ischemic stroke or intracerebral hemorrhage between 2014 and 2019 and calculated their PaSSV score. We used Cox proportional hazards models to evaluate the association between the PaSSV score and the hazard of death over 30 days and the cause-specific hazard of admission to long-term care over 365 days. We assessed the models’ discriminative values using Uno’s c-statistic, comparing models with versus without PaSSV.
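The discriminative value of the models is summarized with a survival c-statistic. A full IPCW-weighted implementation of Uno's version is beyond a short sketch, so the snippet below computes the simpler Harrell-style concordance index in plain NumPy to illustrate what such a statistic measures; treat it as a conceptual stand-in, not the study's metric.

```python
import numpy as np

def concordance_index(time, event, score):
    """Harrell-style c-statistic for right-censored survival data.

    `score` is a risk score (higher = more at risk).  A pair (i, j) is
    comparable when the earlier time is an observed event; it is
    concordant when the subject who failed earlier has the higher risk
    score.  Ties in score count 0.5.  Since a higher PaSSV means milder
    stroke, a risk score here could be, e.g., the negative of PaSSV.
    """
    time = np.asarray(time, float)
    event = np.asarray(event, bool)
    score = np.asarray(score, float)
    num = den = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(len(time)):
            if time[j] > time[i]:          # j outlived i -> comparable pair
                den += 1
                if score[i] > score[j]:
                    num += 1
                elif score[i] == score[j]:
                    num += 0.5
    return num / den
```

A value of 0.5 is chance-level discrimination; the improvement reported when adding PaSSV corresponds to a higher c-statistic than the model without it.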
Results:
We included 86,142 patients (n = 18,387 in BC, n = 65,082 in Ontario, n = 2,673 in NS). The mean and median PaSSV were similar across provinces. A higher PaSSV score, representing lower stroke severity, was associated with a lower hazard of death (hazard ratio and 95% confidence intervals 0.70 [0.68, 0.71] in BC, 0.69 [0.68, 0.69] in Ontario, 0.72 [0.68, 0.75] in NS) and admission to long-term care (0.77 [0.76, 0.79] in BC, 0.84 [0.83, 0.85] in Ontario, 0.86 [0.79, 0.93] in NS). Including PaSSV in the multivariable models increased the c-statistics compared to models without this variable.
Conclusion:
PaSSV has geographical and temporal validity, making it useful for risk-adjustment in stroke outcomes research, including in multi-jurisdiction analyses.
In the heart of the boreal forest in 1949, trappers gathered at a spring meeting in Wabowden, Manitoba, to discuss many items of business, including wolf predation on beavers. Recent debate and disagreement had broken out among the trappers regarding whether wolves actually killed beavers. One trapper stated wolves ‘harassed’ a beaver colony so extensively that he had to fell trees into the water to ensure the colony's survival. Some trappers remained sceptical and unconvinced. The debate was put to a lively and emphatic end when a trapper walked into the spring meeting and presented a bushel sack stuffed with wolf scats containing beaver fur (Nash 1951). The proof was in the poop!
Surprisingly, our understanding of wolf predation on beavers has progressed relatively little since 1949. Most attempts to study wolf predation on beavers followed an approach akin to the Manitoba trappers: collecting and examining wolf scats. By doing this, researchers in many areas across North America and Eurasia concluded, like the trappers, that beavers were important prey for wolves during the ice-free season. However, wolf–beaver dynamics received little attention beyond this, largely because (1) most wolf predation research was focused on wolf–ungulate interactions and predation on smaller alternate prey was not a priority (Gable et al 2018c), and (2) rigorously studying wolf predation during spring to autumn in forested ecosystems with dense vegetation was a monumental, and often impossible, task prior to GPS collar technology. Of course, many researchers and biologists had interesting ideas or hypotheses about wolf–beaver interactions, but most were based on anecdotal observations, indirect evidence or conjecture (Gable et al 2018c). None the less, these ideas were compelling and relevant. Some suggested dense beaver populations increased wolf pup survival (Benson et al 2013) and, in turn, wolf pack and population size (Andersone 1999; Barber-Meyer et al 2016). Others posited that dense beaver populations reduced wolf predation on ungulate prey (Forbes and Theberge 1996) while some claimed it increased predation (Andersone and Ozoliņš 2004; Latham et al 2013). Still others suspected wolves changed ecosystems by altering the ecosystem engineering behaviour of beavers (Peterson et al 2014). Clearly, wolf–beaver dynamics needed to be studied in more detail.
Clinical trial processes are unnecessarily inefficient and costly, slowing the translation of medical discoveries into treatments for people living with disease. To reduce redundancies and inefficiencies, a group of clinical trial experts developed a framework for clinical trial site readiness based on existing trial site qualifications from sponsors. The site readiness practices are encompassed within six domains: research team, infrastructure, study management, data collection and management, quality oversight, and ethics and safety. Implementation of this framework for clinical trial sites would reduce inefficiencies in trial conduct and help prepare new sites to enter the clinical trials enterprise, with the potential to improve the reach of clinical trials to underserved communities. Moreover, the framework holds benefits for trial sponsors, contract research organizations, trade associations, trial participants, and the public. For novice sites considering future trials, we provide a framework for site preparation and the engagement of stakeholders. For experienced sites, the framework can be used to assess current practices and inform and engage sponsors, staff, and participants. Details in the supplementary materials provide easy access to key regulatory documents and resources. Invited perspective articles provide greater depth from a systems, DEIA (diversity, equity, inclusion, and accessibility) and decentralized trials perspective.
Situated within the public will and political will framework, this paper explores frames to address the social issue of gender pay inequity. Specifically, the authors examine whether demographic characteristics affect perceived acceptability of different frames describing gender pay inequity and perceptions of this social issue. First, the authors identified 26 terms used to discuss gender pay inequity; this list was narrowed to 12, representing four categories. Next, the authors solicited sentiment reactions to those frames and perceptions of gender pay inequity. Taken together, the results indicated that although respondents had consistently positive reactions to the frames fair pay, equal pay, and pay fairness, perceptions varied across demographic groups. The biggest effects were consistently for political party-related variables. One frame, strategic compensation practices, emerged as a value-neutral frame that could potentially be used to reframe the issue and re-engage business and political stakeholders who do not perceive gender pay inequity as problematic.
The megalithic pillar sites found around Lake Turkana, Kenya, are monumental cemeteries built approximately 5000 years ago. Their construction coincides with the spread of pastoralism into the region during a period of profound climate change. Early work at the Jarigole pillar site suggested that these places were secondary burial grounds. Subsequent excavations at other pillar sites, however, have revealed planned mortuary cavities for predominantly primary burials, challenging the idea that all pillar sites belonged to a single ‘Jarigole mortuary tradition’. Here, the authors report new findings from the Jarigole site that resolve long-standing questions about eastern Africa's earliest monuments and provide insight into the social lives, and deaths, of the region's first pastoralists.
The optimal preoperative therapy regimen for resectable retroperitoneal sarcoma (RPS) remains unclear. This study compares the impact of preoperative radiation, chemoradiation and chemotherapy on overall survival (OS) in RPS patients.
Materials and Methods:
The National Cancer Database (NCDB) was queried for patients with non-metastatic, resectable RPS (2006–15). The primary endpoint was OS, evaluated by Kaplan–Meier method, log-rank test, Cox multivariable analysis and propensity score matching.
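Propensity score matching, listed in the methods, pairs each patient in one treatment arm with a similar patient in the comparator arm. The sketch below is a generic greedy 1:1 nearest-neighbour matcher on precomputed propensity scores; the study's actual matching algorithm and caliper are not reported in the abstract, so both are assumptions here.

```python
import numpy as np

def greedy_match(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour propensity-score matching.

    Each treated unit is matched to the closest unused control whose
    score differs by at most `caliper`; treated units with no control
    inside the caliper are dropped.  Returns (treated_idx, control_idx)
    pairs.  Simplified stand-in for the matching used in the study.
    """
    control = np.asarray(ps_control, dtype=float)
    used = np.zeros(len(control), dtype=bool)
    pairs = []
    # Matching in descending score order is a common heuristic: units in
    # the sparse upper tail get first pick of controls.
    for i in np.argsort(ps_treated)[::-1]:
        dist = np.abs(control - ps_treated[i])
        dist[used] = np.inf                 # each control is used at most once
        j = int(np.argmin(dist))
        if dist[j] <= caliper:
            pairs.append((int(i), j))
            used[j] = True
    return pairs
```

Balance on baseline covariates would then be checked within the matched pairs before comparing survival, as in the 199 and 128 matched-pair analyses reported below.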
Results:
A total of 1,253 patients met the inclusion criteria, with 210 patients (17%) receiving chemoradiation, 850 patients (68%) receiving radiation and 193 patients (15%) receiving chemotherapy. On Cox multivariable analysis, when compared to preoperative chemoradiation, preoperative radiation was not associated with improved OS (hazard ratio [HR] 0·98, 95% CI 0·76–1·25, p = 0·84), while preoperative chemotherapy was associated with worse OS (HR 1·64, 95% CI 1·24–2·18, p < 0·001). Similar findings were observed in 199 and 128 matched pairs for preoperative radiation and chemotherapy, respectively, when compared to preoperative chemoradiation.
Findings:
Our study suggested an OS benefit in using preoperative chemoradiation compared to chemotherapy alone, but OS outcomes were comparable between preoperative chemoradiation and radiation alone.
We report on the case of a 15-year-old young person with a known diagnosis of autism presenting with a rapid and acute regression in functional abilities, decline in expressive speech and bizarre posturing. The symptoms first started during lockdown (April 2020) with anxiety related to school work, followed by urinary incontinence, insomnia, muttering to self and incongruent smiling. Initial medical investigations, including MRI, lumbar puncture and 24-hour EEG, were inconclusive, so she was referred to Paediatric Liaison for assessment.
Objectives
We demonstrate the value of a child psychiatry liaison service being involved with young people in an acute medical hospital.
Methods
This young person had a thorough psychiatric assessment.
Results
Daily psychiatric assessments and reviews were undertaken with the young person, her parent, social care, the wider community team, her school and the paediatric inpatient ward in order to expand the understanding of the young person and develop a case formulation. She was started on oral olanzapine 2.5 mg, which was gradually increased to 10 mg OD with minimal improvement.
Conclusions
Childhood Disintegrative Disorder (CDD, or Heller’s Syndrome) is a rare pervasive developmental disorder presenting as a loss of previously acquired skills after at least two years of normal development. Despite its no longer being included in DSM-5, it is important for psychiatrists to have a working knowledge of CDD and to consider other differentials when assessing young people.
We describe the baseline characteristics and complications of individuals with influenza in the US FDA’s Sentinel System by antiviral treatment timing.
Design:
Retrospective cohort design.
Patients:
Individuals aged ≥6 months with outpatient diagnoses of influenza in June 2014–July 2017, 3 influenza seasons.
Methods:
We identified the comorbidities, vaccination history, influenza testing, and outpatient antiviral dispensings of individuals with influenza using administrative claims data from 13 data partners including the Centers for Medicare and Medicaid Services, integrated delivery systems, and commercial health plans. We assessed complications within 30 days: hospitalization, oxygen use, mechanical ventilation, critical care, ECMO, and death.
Results:
There were 1,090,333 influenza diagnoses in 2014–2015; 1,005,240 in 2016–2017; and 578,548 in 2017–2018. Between 49% and 55% of patients were dispensed outpatient treatment within 5 days. In all periods >80% of treated individuals received treatment on the day of diagnosis. Those treated on days 1–5 after diagnosis had higher prevalences of diabetes, chronic obstructive pulmonary disease, asthma, and obesity compared to those treated on the day of diagnosis or not treated at all. They also had higher rates of hospitalization, oxygen use, and critical care. In 2014–2015, among those aged ≥65 years, the rates of hospitalization were 45 per 1,000 diagnoses among those treated on day 0; 74 per 1,000 among those treated on days 1–5; and 50 per 1,000 among those who were untreated.
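The hospitalization figures above are expressed per 1,000 diagnoses, so they can be compared as crude rate ratios. A trivial sketch follows; the counts below are hypothetical, chosen only to reproduce the reported ≥65-year rates.

```python
def per_1000(events, denominator):
    """Crude event rate per 1,000 influenza diagnoses."""
    return 1000.0 * events / denominator

# Hypothetical counts reproducing the reported rates among those aged >= 65
day0 = per_1000(450, 10_000)   # treated day 0:    45 per 1,000
late = per_1000(740, 10_000)   # treated days 1-5: 74 per 1,000
untx = per_1000(500, 10_000)   # untreated:        50 per 1,000
ratio = late / day0            # crude rate ratio, roughly 1.64
```

The higher crude rate among late-treated patients cannot be read causally: as the abstract notes, that group also carried more comorbidity, i.e. confounding by indication.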
Conclusions:
In a large, national analysis, approximately half of people diagnosed with influenza in the outpatient setting were treated with antiviral medications. Delays in outpatient dispensed treatment were associated with higher prevalence of comorbidities and higher rates of complication.
A stated goal of language documentation is to make language resources available for use in language revitalization. This chapter identifies some limitations and challenges of working with language documentation materials, particularly legacy (historical) documents and resources in digital language archives. It then suggests ways that language documenters can make their work more useful for revitalization purposes. It identifies often-ignored areas that documentation should target, such as family language, everyday usage and young people’s speech, and suggests further contextual information and metadata that should be included. Language revitalizers can also adopt the methods, practices and tools of language documenters and should be encouraged to document the processes, decision-making, events, successes and failures of their work so that they and others can learn from them. The capsules present technical advice on making audio and video language documentation recordings; a community-based research model for field methods courses on revitalization; and outcomes of a pilot study on Alznerish conducted during a field school in Poland, with methodological proposals for short-term studies.
Induction chemotherapy (iC) followed by concurrent chemoradiation has been shown to improve overall survival (OS) for locally advanced pancreatic cancer (LAPC). However, the survival benefit of stereotactic body radiation therapy (SBRT) versus conventionally fractionated radiation therapy (CFRT) following iC remains unclear.
Materials and methods:
The National Cancer Database (NCDB) was queried for primary stage III, cT4N0-1M0 LAPC (2004–15). Kaplan–Meier analysis, Cox proportional hazards method and propensity score matching were used.
Results:
Among 872 patients, 738 patients underwent CFRT and 134 patients received SBRT. Median follow-up was 24·3 and 22·9 months for the CFRT and SBRT cohorts, respectively. The use of SBRT showed improved survival in both the multivariate analysis (hazard ratio 0·78, p = 0·025) and the 120 propensity-matched pairs (median OS 18·1 versus 15·9 months, p = 0·004) compared to CFRT.
Findings:
This NCDB analysis suggests a survival benefit with the use of SBRT versus CFRT following iC for LAPC.
This National Cancer Database (NCDB) analysis was performed to evaluate the outcomes of adjuvant chemotherapy (AC) versus observation for resected pancreatic adenocarcinoma treated with neoadjuvant therapy (NT).
Materials and methods:
The NCDB was queried for primary stages I–II cT1-3N0-1M0 resected pancreatic adenocarcinoma treated with NT (2004–2015). Baseline patient, tumour and treatment characteristics were extracted. The primary end point was overall survival (OS). With a 6-month conditional landmark, Kaplan–Meier analysis, multivariable Cox proportional hazards method and 1:1 propensity score matching were used to analyse the data.
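The Kaplan–Meier estimator named in the methods can be sketched in a few lines of NumPy; this is a minimal illustration of the estimator itself, not the study's analysis code.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival curve S(t) from right-censored data.

    At each distinct event time t_i, S is multiplied by (1 - d_i / n_i),
    where d_i is the number of events at t_i and n_i the number still at
    risk just before t_i.  Returns (event_times, survival).
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    order = np.argsort(time)
    time, event = time[order], event[order]

    times, surv = [], []
    s = 1.0
    n_at_risk = len(time)
    i = 0
    while i < len(time):
        t = time[i]
        d = 0  # events at this time
        c = 0  # everyone (events + censored) leaving the risk set here
        while i < len(time) and time[i] == t:
            d += int(event[i])
            c += 1
            i += 1
        if d:
            s *= 1.0 - d / n_at_risk
            times.append(t)
            surv.append(s)
        n_at_risk -= c
    return np.array(times), np.array(surv)
```

Median OS, as reported in the results, is the smallest time at which the estimated survival curve drops to 0.5 or below.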
Results:
A total of 1,737 eligible patients were identified, of which 1,247 underwent post-operative observation compared to 490 with AC. The overall median follow-up was 34·7 months. The addition of AC showed improved survival on the multivariate analysis (HR 0·78, p < 0·001). AC remained statistically significant for improved OS, with a median OS of 26·3 months versus 22·3 months and 2-year OS of 63·9% versus 52·9% for the observation cohort (p < 0·001). Treatment interaction analysis showed OS benefit of AC for patients with smaller tumours.
Findings:
Our findings suggest a survival benefit for AC compared to observation following NT and surgery for resectable pancreatic adenocarcinoma, especially in patients with smaller tumours.
The authors aim to demonstrate that the current drive-through testing model at a health district improved on a previous testing protocol in certain parameters, and to provide the methodology of the current model for other coronavirus disease (COVID-19) testing sites to potentially emulate.
Methods:
Initially, a small drive-through site was constructed at a converted tuberculosis clinic, but due to an increase in testing needs, an expanded point of screening and testing (POST) system was developed in an event center parking lot to administer tests to a higher volume of patients.
Results:
An average of 51.1 patients was tested each day (2.0 tests per personnel in personal protective equipment [PPE] per hour) at the initial tuberculosis clinic drive-through site, which increased to 217.8 patients tested each day (5.9 tests per personnel in PPE per hour) with the new drive-through POST system (P < 0.001). Mean testing time was 3.4 minutes and the total time on-site averaged 14.4 minutes.
Conclusions:
This POST drive-through system serves as an efficient, safe, and adaptable model for high volume COVID-19 nasopharyngeal swabbing that the authors recommend other COVID-19 testing sites nationwide consider adopting for their own use.
Site-selectivity analysis of drilling predation traces may provide useful behavioral information concerning a predator interacting with its prey. However, traditional approaches exclude some spatial information (i.e., oversimplified trace position) and are dependent on the scale of analysis (e.g., arbitrary grid system used to divide the prey skeleton into sectors). Here we introduce the spatial point pattern analysis of traces (SPPAT), an approach for visualizing and quantifying the distribution of traces on shelled invertebrate prey, which includes improved collection of spatial information inherent to drillhole location (morphometric-based estimation), improved visualization of spatial trends (kernel density and hotspot mapping), and distance-based statistics for hypothesis testing (K-, L-, and pair correlation functions). We illustrate the SPPAT approach through case studies of fossil samples, modern beach-collected samples, and laboratory feeding trials of naticid gastropod predation on bivalve prey. Overall results show that kernel density and hotspot maps enable visualization of subtle variations in regions of the shell with higher density of predation traces, which can be combined with the maximum clustering distance metric to generate hypotheses on predatory behavior and anti-predatory responses of prey across time and geographic space. Distance-based statistics also capture the major features in the distribution of traces across the prey skeleton, including aggregated and segregated clusters, likely associated with different combinations of two modes of drilling predation, edge and wall drilling. The SPPAT approach is transferable to other paleoecologic and taphonomic data such as encrustation and bioerosion, allowing for standardized investigation of a wide range of biotic interactions.
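For readers unfamiliar with the distance-based statistics named above, Ripley's K can be sketched in a few lines of NumPy. The version below ignores edge correction; SPPAT as described uses properly edge-corrected K-, L-, and pair correlation functions, so this is illustration only, and the trace coordinates are simulated rather than real data.

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive Ripley's K-function (no edge correction).

    K(r) estimates, scaled by intensity, the expected number of
    additional traces within distance r of a typical trace.  Under
    complete spatial randomness K(r) = pi * r^2; values above that
    suggest aggregation (clustering), values below suggest segregation.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # pairwise Euclidean distances between all trace locations
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)  # exclude self-pairs
    return np.array([area * (d <= r).sum() / (n * (n - 1)) for r in radii])

# Simulated tight cluster of 50 hypothetical drillhole coordinates
rng = np.random.default_rng(0)
cluster = rng.normal(0.5, 0.01, size=(50, 2))
K = ripley_k(cluster, [0.05, 0.1, 0.2], area=1.0)
```

Comparing the empirical K(r) against the pi*r^2 envelope is how aggregated versus segregated clusters of traces, like those attributed to edge versus wall drilling, would be flagged.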
In 2011, the FDA published guidelines regarding the prescribing of citalopram and escitalopram following publication of evidence showing prolongation of the QT period at therapeutic doses. This paper looked at the impact of these guidelines on the prescribing practices of clinicians in one centre. It showed that clinicians have changed practices in accordance with the guidelines for citalopram, but no clear patterns were seen for escitalopram or when looking individually at the specific guidelines for patients over 60 years of age. There was no evidence of increased concordance by clinicians with the guidelines in patients taking other QT-prolonging drugs who are at additional risk. Overall, the guidelines have made an impact on practice, but this is partial and 2% of all patients still remain on regimens that do not fit the guidelines. The possible reasons for this are explored.
OCD is a condition seen often in Community Mental Health Teams in England. It is treated with medication and psychology. We wanted to assess what co-morbidities were present in our OCD patients, with which medications they were being treated, and whether patients had received psychological treatment. On assessment, it is clear that a very large number of the OCD patients in our cohort are complex patients who have not responded to first-line treatment, such as SSRIs or basic psychology, and who suffer from co-morbidities. Treatment of these patients, while oriented towards the achievement of recovery, is also relatively complex and long term.
Perinatal mental healthcare in Canada is characterized by under-diagnosis and under-treatment. Approaches to mental health screening can influence pregnant women’s uptake of treatment services.
Objective
To determine the acceptability of mental health screening in Canadian pregnant women.
This cross-sectional survey used the Barriers and Facilitators of Mental Health Screening Survey. The study included pregnant women who read/spoke English. The survey was administered via computer-tablet to women recruited from prenatal classes and maternity clinics in Alberta. Analyses included descriptive statistics and multivariable regression.
Respondents (n=459, 92% participation) were largely 25-34 years old (89%), Caucasian (83%), and partnered (95%). Almost two-thirds of women indicated they expected to be asked about mental health, with 35% reporting their provider asked. The majority (99.8%) indicated that they could be honest with their provider about their mental health if asked and 99.3% of those asked reported they were comfortable with screening. Women indicated a strong preference for routine screening, but identified sporadic assessment as threatening. Women were more likely to report screening as positive if: 1) they had been treated previously for depression/anxiety; or 2) they identified barriers to screening as: a) feeling worried that their concerns were unimportant to their provider; or b) feeling that their provider did not have time to talk about mental health. Women were less likely to report screening as positive if they expected their provider to ask about their mental health.
Findings confirm that routine prenatal mental health assessment is acceptable to women. Results will inform decision-making regarding routine perinatal mental healthcare.
Cognitive behavioural therapy (CBT) is an evidence-based psychotherapy and one of the most widely used treatments for mental health problems. It is generally acknowledged that supervision improves the quality of treatment although systematic descriptions and empirical evaluation of supervision have been sparse. Moreover, there are relatively few valid and reliable instruments to evaluate supervision. Based on a comprehensive review of the supervision literature, six competency domains were identified to cover the scope of CBT supervision: Theory, Focus, Learning strategy, Techniques, Structure, and Interpersonal style. The Moeller, Moerch, Rosenberg Supervision Scale (MMRSS) was developed to evaluate supervisor performance within each of these domains after observation of supervision. The present study examined the psychometric properties of the MMRSS (inter-rater reliability and construct validity), the clinical utility, and satisfaction when using MMRSS to evaluate CBT supervision. CBT supervisors (n = 8) were recruited for the study and provided videos of group supervision. A total of 21 videos were rated using the MMRSS and the Supervisory Competency Scale (SCS) by two independent raters. Supervisees and supervisors completed a satisfaction questionnaire to capture their experience of using the MMRSS during supervision of supervision. The MMRSS showed acceptable internal consistency and validity. Several domains in MMRSS (Structure, Learning strategy, and Interpersonal style) correlated significantly with the corresponding domains in the SCS for cognitive supervision. Preliminary results indicate that the MMRSS may be a valid and clinically useful tool to evaluate CBT supervision, although further systematic evaluation is needed.
Key learning aims
(1) To understand that empirically founded evaluation of cognitive behavioural supervision is essential for good training.
(2) To argue that a modern view of supervision places an emphasis on learning principles.
(3) To describe the Moeller, Moerch, Rosenberg Supervision Scale (MMRSS) and the scale’s preliminary psychometric properties.
(4) To describe the supervisors’ and supervisees’ reported satisfaction using the MMRSS.
Women are more likely to be admitted to nursing home after stroke than men. Differences in patient characteristics and outcomes by sex after institutionalization are less understood. We examined sex differences in the characteristics and care needs of patients admitted to nursing home following stroke and their subsequent survival.
Methods:
We identified patients with stroke newly admitted to nursing home between April 2011 and March 2016 in Ontario, Canada, with follow-up until March 2018 using linked administrative data. We calculated prevalence ratios and 95% confidence intervals (CIs) for the primary outcomes of dependence for activities of daily living, cognitive impairment, frailty, health instability, and symptoms of depression or pain, comparing women to men. The secondary outcome was all-cause mortality.
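The prevalence ratios in the methods compare the proportion of women with each characteristic to the proportion of men. For a crude (unadjusted) version, the ratio and its log-scale 95% CI can be computed directly from counts; the counts in the example are hypothetical, and the study's estimates come from modelling rather than this closed form.

```python
import math

def prevalence_ratio(a1, n1, a0, n0):
    """Crude prevalence ratio (group 1 vs. group 0) with a 95% CI.

    a1/n1: affected / total among women; a0/n0: among men.  The CI is
    built on the log scale with SE(log PR) =
    sqrt(1/a1 - 1/n1 + 1/a0 - 1/n0), the standard risk-ratio formula.
    """
    pr = (a1 / n1) / (a0 / n0)
    se = math.sqrt(1 / a1 - 1 / n1 + 1 / a0 - 1 / n0)
    lo = pr * math.exp(-1.96 * se)
    hi = pr * math.exp(1.96 * se)
    return pr, lo, hi

# Hypothetical counts: 600/1,000 frail women vs. 500/1,000 frail men
pr, lo, hi = prevalence_ratio(600, 1000, 500, 1000)
```

A PR above 1 with a CI excluding 1, like the 1.14 reported for frailty, indicates the characteristic is more prevalent among women than men.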
Results:
Among 4831 patients, 60.9% were women. Compared to men, women were older (median age [interquartile range, IQR]: 84 [78, 89] vs. 80 [71, 86]), more likely to be frail (prevalence ratio 1.14, 95% CI [1.08, 1.19]), have unstable health (1.45 [1.28, 1.66]), and experience symptoms of depression (1.25 [1.11, 1.40]) or pain (1.21 [1.13, 1.30]), and less likely to have aggressive behaviors (0.87 [0.80, 0.94]). Overall median survival was 2.9 years. In a propensity-score-matched cohort, women had lower mortality than men (hazard ratio 0.85, 95% CI [0.77, 0.94]), but in the age-stratified survival analysis, the survival advantage in women was limited to those aged 75 years and older.
Conclusions:
Despite lower subsequent mortality, women admitted to nursing home after stroke required more care than men. Pain and depression are two treatable symptoms that disproportionately affect women.