Space – whether physical or virtual, individual or shared – can have an important impact on learning. It can bring people together; it can encourage exploration, collaboration and discussion; it can also frame an unspoken message of exclusion, disconnectedness and disengagement.
(Elkington, 2019, 3)
As pedagogy has moved away from knowledge transmission through lectures towards more independent forms of learning and group work, recognition of the importance of social and informal learning spaces on campuses has grown (Bennett, 2009; Cox and Benson Marshall, 2021). Students need places to study, and their residential accommodation is often not well designed to support this. To acknowledge the importance of physical spaces is also to recognise the physical and emotional dimensions of learning (Cox, 2017). While the digital might have been expected to reduce the importance of space, we know that the digital in fact has a strong material dimension (Gourlay and Oliver, 2018). Devices are physical objects that need to be carried, handled and charged, and their role in learning is shaped by the physical and social contexts of their use. And while the digital has reduced the need for libraries to be warehouses for books, it has opened up the potential for libraries to offer multiple types of space supporting different forms of learning. Librarians have demonstrated growing wisdom about how to design library spaces that enable learning, giving them a potentially influential role in reconceptualising university space as a whole. The way libraries have been redesigned offers a model for reconfiguring the whole estate around supporting different types of learning, rather than treating the lecture theatre as the only place where learning happens, or attending to the material environment while neglecting the feel of spaces.
Globally, there is seasonal variation in tuberculosis (TB) incidence, yet the biological, behavioural and social factors driving TB seasonality differ across countries. Understanding season-specific risk factors particular to the UK could help shape future decision-making for TB control. We conducted a time-series analysis using data from 152,424 UK TB notifications between 2000 and 2018. Notifications were aggregated by year, month and socio-demographic covariates, and negative binomial regression models were fitted to the aggregate data. For each covariate, we calculated the size of the seasonal effect as the incidence risk ratio (IRR) for the peak versus the trough months within the year, and the timing of the peak, whilst accounting for the overall trend. There was strong evidence for seasonality (p < 0.0001), with an IRR of 1.27 (95% CI 1.23–1.30). The peak was estimated to occur at the beginning of May. Significant differences in seasonal amplitude were identified across age groups, ethnicity, site of disease, latitude and, for those born abroad, time since entry to the UK. The smaller amplitude in older adults, and the greater amplitude among South Asians and people who had recently entered the UK, may indicate a role for latent TB reactivation and vitamin D deficiency in driving seasonality.
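As an illustration of the modelling approach described above, the sketch below fits a negative binomial regression with annual harmonic (sine/cosine) terms to synthetic monthly counts and derives a peak-versus-trough IRR. The harmonic parameterisation, variable names and data are assumptions for illustration, not the study's actual model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = pd.date_range("2000-01", "2018-12", freq="MS")
t = np.arange(len(months))                       # time index for the trend
theta = 2 * np.pi * (months.month - 1) / 12      # month as an angle

# Simulate monthly counts with a slow trend and a modest seasonal cycle.
mu = np.exp(6.0 + 0.001 * t + 0.12 * np.cos(theta) + 0.05 * np.sin(theta))
y = rng.negative_binomial(n=50, p=50 / (50 + mu))

X = sm.add_constant(pd.DataFrame({"trend": t,
                                  "cos1": np.cos(theta),
                                  "sin1": np.sin(theta)}))
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()

# Peak-to-trough IRR implied by the harmonic terms, and the peak's timing.
amp = np.hypot(fit.params["cos1"], fit.params["sin1"])
irr = np.exp(2 * amp)                            # exp(+amp) / exp(-amp)
peak_month = (np.arctan2(fit.params["sin1"], fit.params["cos1"])
              % (2 * np.pi)) / (2 * np.pi) * 12 + 1
print(f"IRR (peak vs trough): {irr:.2f}, peak near month {peak_month:.1f}")
```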
Despite the rapid implementation of digital health services during the COVID-19 pandemic, there is a paucity of research on the suitability of remote consulting for people with intellectual disabilities and their carers, particularly for neuropsychiatric reviews.
Aim
This study examines when routine remote neuropsychiatric consulting is suitable for this population.
Method
A survey was conducted of people with intellectual disabilities and their carers, examining their preference between face-to-face and video consultations for ongoing neuropsychiatric reviews within a rural countywide intellectual disability service in Cornwall, England (population: 538 000). The survey was sent to all adults with intellectual disabilities open to the service on 30 July 2022, and closed on 30 September 2022. Participants were asked to respond to 11 items predesigned and co-produced by clinicians and experts by experience. Everyone on the service caseload was of White ethnicity, reflecting the ethnic demographics of Cornwall. Responses received without consent were excluded from the study dataset.
Results
Of 271 eligible participants, 119 responses were received; 104 respondents consented to having their anonymised data used for research analysis. Preferences did not differ significantly by age or gender. There was no statistically significant difference between preference for the reintroduction of face-to-face appointments (52.0%) and preference for video consultations (48.0%). For those preferring a face-to-face appointment, travel distance (>10 miles) to the clinical setting was important but did not outweigh the perceived benefits.
Conclusions
This study offers insights into the factors that influence preferences about what type of neuropsychiatric appointment is most suitable for people with intellectual disabilities.
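For intuition about why a 52.0% versus 48.0% split is not statistically significant, the sketch below runs an exact binomial test against a 50/50 null; the assumed denominator of 100 respondents expressing a preference is illustrative, not the study's exact figure.

```python
from scipy.stats import binomtest

# 52 of an assumed 100 respondents preferring face-to-face (the 52.0% /
# 48.0% split from the abstract; the denominator is illustrative).
result = binomtest(k=52, n=100, p=0.5)
print(f"p = {result.pvalue:.2f}")   # ~0.76: no evidence against a 50/50 split
```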
OBJECTIVES/GOALS: Contingency management (CM) procedures yield measurable reductions in cocaine use. This poster describes a trial aimed at using CM as a vehicle to demonstrate the biopsychosocial health benefits of reduced use, rather than total abstinence, the currently accepted metric of treatment efficacy. METHODS/STUDY POPULATION: In this 12-week, randomized controlled trial, CM was used to reduce cocaine use and evaluate associated improvements in cardiovascular, immune, and psychosocial well-being. Adults aged 18 and older who sought treatment for cocaine use (N=127) were randomized in a 1:1:1 ratio into three groups: High Value ($55) or Low Value ($13) CM incentives for cocaine-negative urine samples, or a non-contingent control group. They completed outpatient sessions three days per week across the 12-week intervention period, totaling 36 clinic visits, plus four post-treatment follow-up visits. During each visit, participants provided observed urine samples and completed several assays of biopsychosocial health. RESULTS/ANTICIPATED RESULTS: Preliminary findings from generalized linear mixed-effects modeling demonstrate the feasibility of the CM platform. Abstinence rates were significantly greater in the High Value group (47% negative; OR = 2.80; p = 0.01) relative to the Low Value (23% negative) and Control (24% negative) groups. In the planned primary analysis, the level of cocaine use reduction based on cocaine-negative urine samples will serve as the primary predictor of cardiovascular (e.g., endothelin-1 levels), immune (e.g., IL-10 levels), and psychosocial (e.g., Addiction Severity Index) outcomes using results from the fitted models. DISCUSSION/SIGNIFICANCE: This research will advance the field by prospectively and comprehensively demonstrating the beneficial effects of reduced cocaine use. These outcomes can, in turn, support the adoption of reduced cocaine use as a viable alternative endpoint in cocaine treatment trials.
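A sketch of how repeated cocaine-negative urine results across visits might be modelled is below. The study used generalized linear mixed-effects modeling; here a GEE with an exchangeable working correlation stands in for it, since statsmodels offers no simple binary GLMM. All data, per-visit probabilities, and names are simulated illustrations, not trial data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
p_neg = {"control": 0.24, "low": 0.23, "high": 0.47}   # per-visit probabilities
rows = []
for pid in range(127):                                 # ~1:1:1 allocation
    group = ["control", "low", "high"][pid % 3]
    for visit in range(36):                            # 36 clinic visits
        rows.append({"pid": pid, "group": group,
                     "negative": int(rng.random() < p_neg[group])})
df = pd.DataFrame(rows)

# GEE with an exchangeable working correlation for within-person dependence.
fit = smf.gee("negative ~ C(group, Treatment('control'))", groups="pid",
              data=df, family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
or_high = np.exp(fit.params["C(group, Treatment('control'))[T.high]"])
print(f"Odds ratio, High Value vs control: {or_high:.2f}")
```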
Clinicians can play an integral role in the ultimate determination of defendants’ criminal responsibility, given that information gleaned from mental state at the time of the offense (MSO) evaluations influences judges’ and jurors’ decision-making about a particular case. Such evaluations are particularly complicated due to their retrospective nature, the lack of a standardized assessment approach, and variability in criminal responsibility statutes across jurisdictions and time. Yet several legal, clinical, and contextual factors appear to impact clinicians’ decision-making when tasked with these evaluations. In this chapter, we examine the existing literature regarding MSO evaluation referrals, including combined evaluations, to help inform practitioners’ expectations. Next, we review critical components of an MSO evaluation and identify challenges for clinical decision-making. Then we discuss forensic report writing and testifying, as informed by the literature regarding best practices. Lastly, we suggest how the field reliability of mental state evaluations might be improved through research and policy.
To evaluate the impact of a menu box delivery service tailored to the long-day care (LDC) setting on menu compliance with recommendations, children’s diet quality and dietary intake while in care.
Design:
A cluster randomised controlled trial in LDC centres randomly assigned to an intervention (menu box delivery) or comparison (menu planning training) group. The primary outcomes were child food provision and dietary intake. Secondary outcomes included menu compliance and process evaluation, including acceptability, fidelity and menu cost (per child, per day).
Setting:
South Australian LDC centres.
Participants:
Eight LDC centres (n 224 children) provided data.
Results:
No differences were observed in vegetable serves/d between intervention and comparison centres, for either provision (intervention, 0·9, inter-quartile range (IQR) 0·7–1·2; comparison, 0·8, IQR 0·5–1·3) or consumption (intervention, 0·5, IQR 0·2–0·8; comparison, 0·5, IQR 0·3–0·9). Child food provision and dietary intake were similar across the two groups for all food groups (P > 0·05). At follow-up, all intervention centres met menu planning guidelines for vegetables, whereas only one comparison centre did. Intervention centre directors found the menu box delivery more acceptable than cooks did. The intervention cost AUD$2·34 more per child, per day than the comparison (intervention, AUD$4·62, 95 % CI $4·58, $4·67; comparison, AUD$2·28, 95 % CI $2·27, $2·30).
Conclusions:
Menu compliance can be improved via a menu box delivery service, which delivered impacts on child food provision and dietary intake equivalent to an online training programme. Further exploration of cooks’ acceptability and cost is essential before scaling up to implementation.
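The sketch below illustrates the kind of median/IQR comparison reported in the results above, using a Mann-Whitney U test on synthetic serves/day data. The abstract does not name its test, and a full cluster-randomised analysis would also need to account for centre-level clustering; everything here is an assumed illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
intervention = rng.gamma(shape=2.0, scale=0.45, size=112)  # synthetic serves/d
comparison = rng.gamma(shape=2.0, scale=0.40, size=112)

for name, x in [("intervention", intervention), ("comparison", comparison)]:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    print(f"{name}: median {med:.1f}, IQR {q1:.1f}-{q3:.1f}")

u, p = stats.mannwhitneyu(intervention, comparison)
print(f"Mann-Whitney U = {u:.0f}, p = {p:.2f}")
```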
To inform a package of initiatives to increase children’s vegetable intake while in long day care (LDC) by evaluating the independent and combined effects of three initiatives targeting food provision, the mealtime environment and the curriculum.
Design:
Using the Multiphase Optimisation Strategy (MOST) framework, a 12-week, eight-condition (n 7 intervention, n 1 control) randomised factorial experiment was conducted. Children’s dietary intake data were measured pre- and post-initiative implementation using the weighed plate waste method (1× meal and 2× between-meal snacks). Vegetable intake (g/d) was calculated from vegetable provision and waste. The optimal combination of initiatives was determined using a linear mixed-effects model comparing between-group vegetable intake at follow-up, while considering initiative fidelity and acceptability.
Setting:
LDC centres in metropolitan Adelaide, South Australia.
Participants:
32 centres, 276 staff and 1039 children aged 2–5 years.
Results:
There were no statistically significant differences between any of the intervention groups and the control group for vegetable intake (all P > 0·05). The curriculum plus mealtime environment group consumed 26·7 g more vegetables/child/day than control (ratio of geometric means 3·29 (95 % CI 0·96, 11·27), P = 0·06). Completion rates for the curriculum (> 93 %) and mealtime environment (61 %) initiatives were high, and acceptability was good (4/5 would recommend), compared with the food provision initiative (0–50 % completed the menu assessment; 3/5 would recommend).
Conclusion:
A programme targeting the curriculum and mealtime environment in LDC may be useful for increasing children’s vegetable intake. The effectiveness of this optimised package should now be determined in a randomised controlled trial, as per the evaluation phase of the MOST framework.
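Since the trial above reports a ratio of geometric means from a linear mixed-effects model, the sketch below shows one common way to obtain such a ratio: fit a mixed model to log-transformed intake with a random intercept per centre, then exponentiate the arm coefficient. The data, the collapse of the eight conditions into two arms, and all names are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for centre in range(32):
    arm = "intervention" if centre < 28 else "control"  # 7:1 conditions, collapsed
    centre_effect = rng.normal(0, 0.3)                  # random intercept
    for child in range(30):
        log_intake = (2.5 + (0.4 if arm == "intervention" else 0.0)
                      + centre_effect + rng.normal(0, 0.8))
        rows.append({"centre": centre, "arm": arm, "log_intake": log_intake})
df = pd.DataFrame(rows)

fit = smf.mixedlm("log_intake ~ C(arm, Treatment('control'))",
                  data=df, groups=df["centre"]).fit()
beta = fit.params["C(arm, Treatment('control'))[T.intervention]"]
print(f"Ratio of geometric means: {np.exp(beta):.2f}")
```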
COVID-19 has markedly impacted the provision of neurodevelopmental care. In response, the Cardiac Neurodevelopmental Outcome Collaborative established a Task Force to assess the telehealth practices of cardiac neurodevelopmental programmes during COVID-19, including adaptation of services, test protocols and interventions, and perceived obstacles, disparities, successes, and training needs.
Study Design:
A 47-item online survey was sent to 42 Cardiac Neurodevelopmental Outcome Collaborative member sites across North America within a 3-week timeframe (22 July to 11 August 2020) to collect cross-sectional data on practices.
Results:
Of the 30 participating sites (71.4% response rate), all were providing at least some clinical services at the time of the survey and 24 sites (80%) reported using telehealth. All but one of these sites were offering new telehealth services in response to COVID-19, with the most striking change being the capacity to offer new intervention services for children and their caregivers. Only a third of sites were able to carry out standardised, performance-based, neurodevelopmental testing with children and adolescents using telehealth, and none had completed comparable testing with infants and toddlers. Barriers associated with language, child ability, and access to technology were identified as contributing to disparities in telehealth access.
Conclusions:
Telehealth has enabled continuation of at least some cardiac neurodevelopmental services during COVID-19, despite the challenges experienced by providers, children, families, and health systems. The Cardiac Neurodevelopmental Outcome Collaborative provides a unique platform for sharing challenges and successes across sites, as we continue to shape an evidence-based, efficient, and consistent approach to the care of individuals with congenital heart disease (CHD).
The purpose of this retrospective study was to evaluate safety and efficacy end points of a postoperative antibiotic prophylaxis protocol in liver transplant (LT) patients, which was revised to limit antibiotic use.
Methods:
In the routine antibiotics (RA) group, patients routinely received prophylactic antibiotics for around 3 days postoperatively for a variety of rationales; in the limited antibiotics (LA) group, patients received postoperative antibiotics only for the treatment of secondary peritonitis. Patients were included if they were aged 18 years or older and underwent liver transplant between January 2016 and September 2019. In total, 216 patients remained after exclusions: 118 in the RA group and 98 in the LA group.
Results:
We detected a significant difference in the primary end point of postoperative antibiotic days of therapy: the median was 2 days in the RA group and 0 days in the LA group (P < 0.005). Significantly fewer patients in the RA group than in the LA group received only intraoperative antibiotics: 42 (35.6%) versus 76 (73.5%) (P < 0.005). There was no significant difference in secondary or safety outcomes, including surgical site infections.
Conclusions:
This study provides evidence that limiting the duration of prophylactic antibiotics postoperatively and treating most patients with only intraoperative antibiotics is safe.
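To illustrate the two comparisons reported above, the sketch below applies a Mann-Whitney U test to simulated per-patient days of therapy and a chi-square test to the intraoperative-only counts taken from the abstract. The study does not name its tests; these are common choices, and the per-patient values are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
dot_ra = rng.poisson(2.2, size=118)   # simulated days of therapy, RA group
dot_la = rng.poisson(0.4, size=98)    # simulated days of therapy, LA group
u, p = stats.mannwhitneyu(dot_ra, dot_la)
print(f"median RA {np.median(dot_ra):.0f} vs LA {np.median(dot_la):.0f}, p = {p:.3g}")

# Intraoperative-only antibiotics: 42/118 (RA) vs 76/98 (LA), per the abstract.
table = np.array([[42, 118 - 42], [76, 98 - 76]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p:.3g}")
```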
The 2020 update of the Canadian Stroke Best Practice Recommendations (CSBPR) for the Secondary Prevention of Stroke includes current evidence-based recommendations and expert opinions intended for use by clinicians across a broad range of settings. They provide guidance for the prevention of ischemic stroke recurrence through the identification and management of modifiable vascular risk factors. Recommendations address triage, diagnostic testing, lifestyle behaviors, vaping, hypertension, hyperlipidemia, diabetes, atrial fibrillation, other cardiac conditions, antiplatelet and anticoagulant therapies, and carotid and vertebral artery disease. This update of the previous 2017 guideline contains several new or revised recommendations. Recommendations regarding triage and initial assessment of acute transient ischemic attack (TIA) and minor stroke have been simplified, and selected aspects of the etiological stroke workup are revised. Updated treatment recommendations based on new evidence have been made for dual antiplatelet therapy for TIA and minor stroke; anticoagulant therapy for atrial fibrillation; embolic strokes of undetermined source; low-density lipoprotein lowering; hypertriglyceridemia; diabetes treatment; and patent foramen ovale management. A new section has been added to provide practical guidance regarding temporary interruption of antithrombotic therapy for surgical procedures. Cancer-associated ischemic stroke is addressed. A section on virtual care delivery of secondary stroke prevention services is included to highlight a shifting paradigm of care delivery made more urgent by the global pandemic. In addition, where appropriate, sex differences as they pertain to treatments have been addressed. The CSBPR include supporting materials such as implementation resources to facilitate the adoption of evidence into practice, and performance measures to enable monitoring of uptake and effectiveness of recommendations.
To prioritise and refine a set of evidence-informed statements into advice messages to promote vegetable liking in early childhood, and to determine applicability for dissemination of advice to relevant audiences.
Design:
A nominal group technique (NGT) workshop and a Delphi survey were conducted to prioritise and achieve consensus (≥70 % agreement) on thirty evidence-informed maternal (perinatal and lactation stage), infant (complementary feeding stage) and early years (family diet stage) vegetable-related advice messages. Messages were validated via triangulation analysis against the strength of evidence from an umbrella review of strategies to increase children’s vegetable liking and the gaps in advice identified by a desktop review of vegetable feeding advice.
Setting:
Australia.
Participants:
A purposeful sample of key stakeholders (NGT workshop, n 8 experts; Delphi survey, n 23 end users).
Results:
Participant consensus identified the most highly ranked priority messages as those associated with the strategies of ‘in-utero exposure’ (perinatal and lactation, n 56 points) and ‘vegetable variety’ (complementary feeding, n 97 points; family diet, n 139 points). Triangulation revealed two strategies (‘repeated exposure’ and ‘variety’) and their associated advice messages as suitable for policy and practice, twelve messages suitable for research and four for the food industry.
Conclusions:
Supported by national and state feeding guideline documents and resources, advice messages relating to ‘repeated exposure’ and ‘variety’ to increase vegetable liking can be communicated to families and caregivers by healthcare practitioners. The food industry provides a vehicle for advice promotion and product development. Additional research, where stronger evidence is needed, could further inform strategies for policy and practice and for food industry application.
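A minimal sketch of the consensus rule used above (≥70 % agreement across respondents) is below; the ratings and message names are placeholders, not the study's items.

```python
import pandas as pd

ratings = pd.DataFrame({           # 1 = respondent agreed with the message
    "repeated_exposure": [1, 1, 1, 1, 1, 0, 1, 1, 1, 1],
    "vegetable_variety": [1, 1, 0, 1, 1, 1, 1, 1, 0, 1],
    "pressure_to_eat":   [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
})
agreement = ratings.mean() * 100              # % agreement per message
print(agreement[agreement >= 70].round(0).to_string())
```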
Optical tracking systems typically trade off astrometric precision against field of view. In this work, we showcase a networked approach to optical tracking, using very wide field-of-view imagers with relatively low astrometric precision, applied to the scheduled OSIRIS-REx slingshot manoeuvre around Earth on 22 Sep 2017. As part of a trajectory designed to take OSIRIS-REx to NEO 101955 Bennu, this flyby event was viewed from 13 remote sensors spread across Australia and New Zealand to enable triangulation of the observations. Each observatory in this portable network was constructed to be as lightweight and portable as possible, with hardware based on the successful design of the Desert Fireball Network. Over a 4-h collection window, we gathered 15 439 images of the night sky in the predicted direction of the OSIRIS-REx spacecraft. Using a specially developed streak detection and orbit determination data pipeline, we detected 2 090 line-of-sight observations. Our fitted orbit was determined to be within about 10 km of orbital telemetry along the observed 109 262 km length of the OSIRIS-REx trajectory, demonstrating the capability of a networked approach to Space Surveillance and Tracking.
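The sketch below shows the geometric core of such a networked approach: a least-squares triangulation of a point from several station line-of-sight rays. Station positions and pointing vectors are synthetic; the actual pipeline works in an Earth-centred frame and fits a full orbit rather than a single point.

```python
import numpy as np

def triangulate(stations, directions):
    """Least-squares nearest point to a set of 3D rays.

    stations:   (n, 3) ray origins
    directions: (n, 3) unit pointing vectors
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(stations, directions):
        proj = np.eye(3) - np.outer(d, d)   # projector onto plane normal to d
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

target = np.array([7000.0, 1200.0, 3400.0])               # km, synthetic
stations = np.array([[0.0, 0, 0], [800, 0, 0], [0, 900, 100]])
directions = target - stations
directions = directions / np.linalg.norm(directions, axis=1, keepdims=True)
print(triangulate(stations, directions))                  # recovers ~target
```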
Chinese and Korean protests over “revisionist” Japanese histories of World War II are well known. The impact of contested Chinese and US histories of the Korean War on US-China relations today has received less attention. More broadly, there has been little research systematically exploring how history textbook controversies matter for international relations. This article experimentally manipulates nation (US/China), source (in-group/out-group textbooks), and valence (positive/negative historical narratives) to measure their impact on beliefs about the past, emotions, collective self-esteem, and threat perception in present-day US-China relations. A 2 × 2 × 2 design exposed randomized groups of Chinese and US university students to fictional high school history textbook accounts of the Korean War. Findings reveal significant effects of nation, source, and valence, and suggest that the “historical relevance” of a shared past to national identities in the present has a dramatic impact on how historical controversies affect threat perception.
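As an illustration of analysing a 2 × 2 × 2 between-subjects design like the one described, the sketch below runs a three-way factorial ANOVA on simulated threat-perception scores; the cell sizes, effect sizes and outcome variable are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
rows = []
for nation in ("US", "China"):
    for source in ("ingroup", "outgroup"):
        for valence in ("positive", "negative"):
            shift = 0.5 * (valence == "negative") + 0.3 * (source == "outgroup")
            for _ in range(40):                       # 40 students per cell
                rows.append({"nation": nation, "source": source,
                             "valence": valence,
                             "threat": rng.normal(3.0 + shift, 1.0)})
df = pd.DataFrame(rows)

model = smf.ols("threat ~ nation * source * valence", data=df).fit()
print(anova_lm(model, typ=2))      # main effects and all interactions
```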
This study used administrative health data to describe emergency department (ED) visits by residents of assisted living and nursing home facilities in the Vancouver Coastal Health region, British Columbia. We compared ED visit rates, the distribution of visits per resident, and ED dispositions of the assisted living and nursing home populations over a 3-year period (2005–2008). There were 13,051 individuals in our study population. Visit rates (95% confidence interval) were 124.8 (118.1–131.7) and 64.1 (62.9–65.3) visits per 100 resident-years in assisted living and nursing home facilities, respectively. A smaller proportion of ED visits by assisted living residents resulted in hospital admission compared to nursing home residents (45% vs. 48%, p < .01). The ED visit rate among assisted living residents is significantly higher than that among nursing home residents. Future research is needed into the underlying causes of this finding.
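The rate estimates above take the form of events per 100 resident-years with a confidence interval; the sketch below computes such a rate with an exact Poisson interval. The event and person-time counts are round illustrative values, not the study's data.

```python
from scipy import stats

def rate_per_100(events, person_years, alpha=0.05):
    """Crude rate per 100 person-years with an exact Poisson 95% CI."""
    lo = stats.chi2.ppf(alpha / 2, 2 * events) / 2
    hi = stats.chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = 100 / person_years
    return events * scale, lo * scale, hi * scale

rate, lo, hi = rate_per_100(events=1300, person_years=1040)
print(f"{rate:.1f} (95% CI {lo:.1f}-{hi:.1f}) visits per 100 resident-years")
```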
This study examined how nursing home ownership and organizational characteristics relate to emergency department (ED) transfer rates. The sample comprised a retrospective cohort of nursing home residents in the Vancouver Coastal Health region (n = 13,140). Rates of ED transfers were compared across nursing home ownership types. Administrative data were further linked to survey-derived data on facility organizational characteristics for exploratory analysis. Crude ED transfer rates (transfers/100 resident-years) were 69, 70, and 51 in for-profit, non-profit, and publicly owned facilities, respectively. Controlling for sex and age, public ownership was associated with lower ED transfer rates compared to for-profit and non-profit ownership. Higher total direct-care nursing hours per resident-day and the presence of allied health staff – both disproportionately found in publicly owned facilities – were associated with lower transfer rates. A number of other facility organizational characteristics, unrelated to ownership, were also associated with transfer rates.
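One standard way to compare transfer rates across ownership types while controlling for sex and age, as described above, is Poisson regression with a log person-time offset; the sketch below assumes that approach (the abstract does not name its model) and runs it on simulated data with ownership effects roughly echoing the abstract's direction.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 2000
own = rng.choice(["for_profit", "non_profit", "public"], size=n)
age = rng.normal(84, 7, size=n)
female = rng.integers(0, 2, size=n)
years = rng.uniform(0.5, 3.0, size=n)                  # follow-up per resident
base = {"for_profit": 0.69, "non_profit": 0.70, "public": 0.51}
mu = np.array([base[o] for o in own]) * years
df = pd.DataFrame({"transfers": rng.poisson(mu), "own": own, "age": age,
                   "female": female, "log_years": np.log(years)})

fit = smf.glm("transfers ~ C(own, Treatment('public')) + age + female",
              data=df, family=sm.families.Poisson(),
              offset=df["log_years"]).fit()
print(np.exp(fit.params.filter(like="C(own")).round(2))   # rate ratios vs public
```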
Long-term care (LTC) patients are often sent to emergency departments (EDs) by ambulance. In this novel extended care paramedic (ECP) program, specially trained paramedics manage LTC patients on site. The objective of this pilot study was to describe the dispatch and disposition of LTC patients treated by ECPs and emergency paramedics.
Methods:
Data were collected from consecutive calls to 15 participating LTC facilities for 3 months. Dispatch determinants, transport rates, and relapse rates were described for LTC patients attended by ECPs or emergency paramedics. ECP involvement in end-of-life care was identified.
Results:
Of 238 eligible calls, 140 (59%) were attended by an ECP and 98 (41%) by emergency paramedics. Although the top three determinants were the same in each group, the overall distribution of dispatch determinants and acuity differed. In the ECP cohort, 98 of 140 (70%) were treated and released, 33 of 140 (24%) had “facilitated transfer” arranged by an ECP, and 9 of 140 (6%) were immediately transported to the ED by ambulance. In the emergency paramedic cohort, 77 of 98 (79%) were immediately transported to the ED and 21 of 98 (21%) were not transported. In the ECP group, 6 of the 98 (6%) patients treated and released triggered a 911 call within 48 hours for a related clinical reason, whereas none of the 21 patients not transported by emergency paramedics relapsed.
Conclusion:
ECP involvement in LTC calls was found to reduce transports to the ED, with a low rate of relapse. These pilot data generated hypotheses for future study, including identifying the populations appropriate for ECP care and analysis of when nontransport is appropriate and safe.
Hospitalization of nursing home residents can be futile as well as costly, and now evidence indicates that treating nursing home residents in place produces better outcomes for some conditions. We examined facility organizational characteristics that previous research showed are associated with potentially avoidable hospital transfers and with better care quality. Accordingly, we conducted a cross-sectional survey of nursing home directors of care in Vancouver Coastal Health, a large health region in British Columbia. The survey addressed staffing levels and organization, physician access, end-of-life care, and factors influencing facility-to-hospital transfers. Many of the modifiable organizational characteristics associated in the literature with potentially avoidable hospital transfers and better care quality are present in nursing homes in British Columbia. However, their presence is not universal, and some features, especially the organization of physician care and end-of-life planning and services, are particularly lacking.
Background and purpose: Research is increasingly important in radiation therapy, but radiation therapists (or therapy radiographers; RTs) are relatively new to research and may have difficulty defining research topics. Our aim was to identify the group interests of Australian RTs and to focus their research priorities. Although not measured, an additional aim was to make RTs more aware of the relevance of RT research.
Materials and methods: An Australia-wide Delphi process was used, examining the problems related to patient care, working with colleagues, and radiotherapy in general that RTs experienced in their daily work. In an initial study, 374 problems were identified. These were translated into 53 research areas, which were prioritised in the second stage of the study. Agreement between groups was analysed using a hierarchical cluster procedure and post hoc Scheffé multiple comparisons.
Results: There were three groups of responders with varying degrees of research interest. Groups agreed on the high importance (p > 0.01) of the technical aspects of radiation therapy, such as image guidance, intensity-modulated radiation therapy (IMRT) and patient positioning. There was significant disagreement (p < 0.001 to p = 0.023) between groups on the importance of patient care research.
Conclusions: The strong interest in technical research is consistent with the rapid influx of technology, particularly in imaging. The disagreement on patient-related research may be of concern. The list of potential research areas specific to radiation therapy will be useful for new RT researchers to consider.
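A sketch of the kind of hierarchical cluster procedure named in the methods above is below: responders are grouped by the similarity of their priority ratings and the tree is cut into three clusters, mirroring the three responder groups reported. The ratings are simulated placeholders, not the survey data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
# 60 responders rating 53 research areas (1-5 importance), drawn from three
# latent interest profiles.
profiles = rng.uniform(1, 5, size=(3, 53))
ratings = np.vstack([np.clip(profiles[i % 3] + rng.normal(0, 0.6, 53), 1, 5)
                     for i in range(60)])

Z = linkage(ratings, method="ward")                 # Ward hierarchical clustering
labels = fcluster(Z, t=3, criterion="maxclust")     # cut into three groups
print(np.bincount(labels)[1:])                      # responders per cluster
```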
Objectives: Nuclear medicine has changed rapidly as a result of technological developments. Very little is reported on the effects these developments may have on technologist productivity. This study aims to determine whether advances have created a workplace where more patient studies can be performed with fewer technologists. We report the level of change in automation, and in the time taken by nuclear medicine technologists to perform routine tasks, resulting from technological development over the past decade.
Methods: A systematic review was conducted using Embase.com, Medline, INSPEC, and CINAHL. Two authors reviewed each article for eligibility. Technological developments in routine areas over the past decade were reviewed. The resultant automation or time effects on data acquisition, data processing, and image processing were summarized.
Results: Sixteen articles were included, in the areas of myocardial perfusion, information technology, and positron emission tomography (PET). Gamma camera design has halved the acquisition time for myocardial perfusion studies; automated analysis requires little manual intervention; and information technologies and filmless departments are more efficient. Developments in PET have reduced acquisition time to almost one-fifth.
Conclusions: Substantial efficiencies have occurred over the decade, thereby increasing productivity, but whether staffing levels remain appropriate for safe, high-quality practice is unclear. Future staffing adequacy is of concern given anticipated increases in service needs.