The Mental Health Bill 2025 proposes to remove autism and learning disability from the scope of Section 3 of the Mental Health Act 1983 (MHA). The present article is a professional and carer consensus statement that raises concerns and identifies probable unintended consequences should this proposal become law. Our concerns relate to the lack of a clear mandate for such proposals, conceptual inconsistency with respect to other conditions that might give rise to a need for detention, and the inconsistency of applying such changes to Part II of the MHA but not Part III. If the proposed changes become law, we anticipate that detentions would instead occur under the less safeguarded Deprivation of Liberty Safeguards framework, and that unmanaged risks would lead to behavioural consequences resulting in more autistic people, or people with a learning disability, being sent to prison. Additionally, there is a concern that the proposed definitional breadth of autism and learning disability creates a risk that people with other conditions may unintentionally become impossible to detain. We strongly urge the UK Parliament to amend this portion of the Bill before it becomes law.
Mild cognitive impairment with Lewy bodies (MCI-LB) may be identified prospectively based on the presence of cognitive impairment and several core clinical features (visual hallucinations, cognitive fluctuations, parkinsonism, and REM sleep behavior disorder). MCI-LB may vary in its presenting features, which may reflect differences in underlying pathological pattern, severity, or comorbidity.
We aimed to assess how clinical features of MCI-LB accumulate over time, and whether this is associated with the rate of cognitive decline.
Methods
In this cohort study, 74 individuals with MCI-LB prospectively underwent repeated annual cognitive and clinical assessments for up to nine years. Relationships between clinical features (the number of core features present and the specific features present) and cognitive change on the Addenbrooke’s Cognitive Examination–Revised (ACE-R) were examined with time-varying mixed models. The accumulation of core clinical features over time was examined with a multi-state Markov model.
Results
When an individual with MCI-LB endorsed more clinical features, they typically experienced a faster cognitive decline (ACE-R Score Difference β = −1.1 [−1.7 to −0.5]), specifically when experiencing visual hallucinations (β = −2.1 [−3.5 to −0.8]) or cognitive fluctuations (β = −3.4 [−4.8 to −2.1]).
Individuals with MCI-LB typically acquired more clinical features with the passage of time (25.5% [20.0–32.0%] one-year probability), limiting the prognostic utility of baseline-only features.
Conclusions
The clinical presentation of MCI-LB may evolve over time. The accumulation of more clinical features of Lewy body disease, in particular visual hallucinations and cognitive fluctuations, may be associated with a worse prognosis in clinical settings.
The aim of this study was to determine whether there was a significant change in cardiac [123I]-metaiodobenzylguanidine uptake between baseline and follow-up in individuals with mild cognitive impairment with Lewy bodies (MCI-LB) who had normal baseline scans. Eight participants with a diagnosis of probable MCI-LB and a normal baseline scan consented to a follow-up scan between 2 and 4 years after baseline. All eight repeat scans remained normal; however, in three cases uptake decreased by more than 10%. The mean change in uptake between baseline and repeat was −5.2% (range: −23.8% to +7.0%). The interpolated mean annual change in uptake was −1.6%.
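The reported annualized figure can be reproduced by simple linear interpolation of the total change over the scan interval. A minimal sketch, noting that the per-participant intervals are not reported in the abstract; the mean interval of 3.25 years used below is a hypothetical value (within the stated 2- to 4-year range) that reproduces the reported numbers:

```python
# Annualizing a percentage change in cardiac MIBG uptake between two scans.
# Assumption: a hypothetical mean scan interval of 3.25 years, chosen because
# it lies within the reported 2-4 year range and reproduces the reported
# mean total change (-5.2%) and mean annual change (-1.6%).

def annualized_change(total_change_pct: float, interval_years: float) -> float:
    """Linear interpolation of uptake change per year."""
    return total_change_pct / interval_years

mean_change = -5.2            # reported mean change, baseline to repeat (%)
assumed_mean_interval = 3.25  # hypothetical mean interval (years)

print(round(annualized_change(mean_change, assumed_mean_interval), 1))  # -1.6
```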
In order to study the structure and temperature distribution within high-mass star-forming clumps, we employed the Australia Telescope Compact Array to image the $\mathrm{NH}_3$ (J,K) = (1,1) through (6,6) and the (2,1) inversion transitions, the $\mathrm{H}_2\mathrm{O}$ $6_{16}$-$5_{23}$ maser line at 22.23508 GHz, several $\mathrm{CH}_3\mathrm{OH}$ lines, and hydrogen and helium recombination lines. In addition, 22- and 24-GHz radio continuum emission was imaged.
The $\mathrm{NH}_3$ lines probe the optical depth and gas temperature of compact structures within the clumps. The $\mathrm{H}_2\mathrm{O}$ maser pinpoints the location of shocked gas associated with star formation. The recombination lines and the continuum emission trace the ionised gas associated with hot OB stars. The paper describes the data and presents sample images and spectra towards selected clumps. The technique for estimating gas temperature from $\mathrm{NH}_3$ line ratios is described. The data show widespread hyperfine intensity anomalies in the $\mathrm{NH}_3$ (1,1) images, an indicator of non-LTE $\mathrm{NH}_3$ excitation. We also identify several new $\mathrm{NH}_3$ (3,3) masers associated with shocked gas. Towards AGAL328.809+00.632, the $\mathrm{H}_2\mathrm{O}$ $6_{16}$-$5_{23}$ line, normally seen as a maser, is instead seen as a thermally excited absorption feature against a strong background continuum. The data products are described in detail.
Access to psychedelic drugs is liberalizing, yet responses are highly unpredictable. It is therefore imperative that we improve our ability to predict the nature of the acute psychedelic experience to improve safety and optimize potential therapeutic outcomes. This study sought to validate the ‘Imperial Psychedelic Predictor Scale’ (IPPS), a short, widely applicable, prospective measure intended to be predictive of salient dimensions of the psychedelic experience.
Methods
Using four independent datasets in which the IPPS was completed prospectively – two online surveys of ‘naturalistic’ use (N = 741, N = 836) and two controlled administration datasets (N = 30, N = 28) – we conducted factor analysis, regression, and correlation analyses to assess the construct, predictive, and convergent validity of the IPPS.
Results
Our approach produced a 9-item scale with good internal consistency (Cronbach's α = 0.8) containing three factors: set, rapport, and intention. The IPPS was significantly predictive of ‘mystical’, ‘challenging’, and ‘emotional breakthrough’ experiences. In one controlled administration dataset (N = 28), multiple regression found that set and rapport explained 40% of the variance in mystical experience, and simple regression found that set explained 16% of the variance in challenging experience. In the other (N = 30), rapport was related to emotional breakthrough, explaining 9% of the variance.
Conclusions
Together, these data suggest that the IPPS is predictive of relevant acute features of the psychedelic experience in a broad range of contexts. We hope that this brief 9-item scale will be widely adopted for improved knowledge of psychedelic preparedness in controlled settings and beyond.
Operative cancellations adversely affect patient health and impose resource strain on the healthcare system. Here, our objective was to describe neurosurgical cancellations at five Canadian academic institutions.
Methods:
The Canadian Neurosurgery Research Collaborative performed a retrospective cohort study capturing neurosurgical procedure cancellation data at five Canadian academic centres, during the period between January 1, 2014 and December 31, 2018. Demographics, procedure type, reason for cancellation, admission status and case acuity were collected. Cancellation rates were compared on the basis of demographic data, procedural data and between centres.
Results:
Overall, 7,734 cancellations were captured across the five sites. The mean age of the aggregate cohort was 57.1 ± 17.2 years. The overall procedure cancellation rate was 18.2%. The five-year neurosurgical operative cancellation rate differed between Centres 1 and 2 (Centre 1: 25.9%; Centre 2: 13.0%, p = 0.008). Female patients less frequently experienced procedural cancellation. Elective, outpatient and spine procedures were more often cancelled. Reasons for cancellation included surgeon-related factors (28.2%), cancellation for a higher-acuity case (23.9%), patient condition (17.2%), other factors (17.0%), resource availability (7.0%), the operating room running late (6.4%) and anaesthesia-related factors (0.3%). When clustered, the reason for cancellation was patient-related in 17.2%, staffing-related in 28.5% and operational or resource-related in 54.3% of cases.
Conclusions:
Neurosurgical operative cancellations were common and most often related to operational or resource-related factors. Elective, outpatient and spine procedures were more often cancelled. These findings highlight areas for optimizing efficiency and targeted quality improvement initiatives.
This chapter introduces the reader to the history of facial recognition technology (FRT) and its development from the perspective of science and technology studies. Beginning with the traditionally accepted origins of FRT in 1964–1965, developed by Woody Bledsoe, Charles Bisson, and Helen Wolf Chan in the United States, Simon Taylor discusses how FRT builds on earlier applications in mug shot profiling, imaging, biometrics, and statistical categorisation. Grounded in the history of science and technology, the chapter demonstrates how critical aspects of FRT infrastructure are aided by scientific and cultural innovations from different times and locations: that is, mugshots in eighteenth-century France; mathematical analysis of caste in nineteenth-century British India; and innovations by Chinese closed-circuit television companies and computer vision start-ups conducting bio-security experiments on farm animals. This helps to understand FRT development beyond the United States-centred narrative. The aim is to deconstruct the historical data, mathematical, and digital materials that act as ‘back-stage elements’ of FRT and are not so easily located in infrastructure yet continue to shape uses today. Taylor’s analysis lays a foundation for the kinds of frameworks that can better help regulate and govern FRT as a means for power over populations in the following chapters.
Precolumbian Maya graffiti is challenging to document because it is complex, multilayered, and difficult to see with the naked eye. In the Maya Lowlands, precolumbian graffiti occurs as etched palimpsests on parts of substructures such as stucco walls of residences, palaces, and temples that are frequently only accessible through dark and narrow tunnel excavations. Experienced iconographers or epigraphers with advanced drawing skills are the most qualified researchers to accurately record, analyze, and interpret precolumbian Maya graffiti. Because these scholars have a vast knowledge of conventions and styles from multiple time periods and sites, they are less likely to document the complex and seemingly chaotic incisions incorrectly. But as with many specialists in Maya archaeology, iconographers and epigraphers are not always available to collaborate in the field. This raises the question, how might an archaeologist without advanced training in iconography accurately record graffiti in subterranean excavations? Advances in digital applications of archaeological field recording have opened new avenues for documenting graffiti. One of these is Reflectance Transformation Imaging (RTI), a method that uses a moving light source and photography in order to visualize, interact with, and analyze a three-dimensional object in a two-dimensional image. With practice, RTI images can easily be produced in the field and later shared with specialists for the purposes of analysis and interpretation. Performed on a series of 20 unique graffiti from the Maya archaeological site of Holtun (two examples are presented here), RTI shows promise as a viable technique for documenting and preserving graffiti as cultural heritage.
The adsorption of 13C-labeled benzene on imogolite has been studied on samples which had been evacuated and then heated, below their decomposition point, to remove water. After adsorption of labeled benzene, the samples were studied by nuclear magnetic resonance using non-spinning techniques. The results show that benzene can occupy more than one pore type and that water does not displace benzene from the intra-tube pores at atmospheric pressure. A further finding is that there are at least two types of adsorbed benzene in the so-called inter-tube pores, one of which is more rigidly held than that in the intra-tube pores. The presence of disordered material at the edges of pores could also play a role in altering the pore mouth, thereby creating new types of pores. Moreover, where two tubes do not pack properly, space might be created in which an adsorbed molecule may bind more tightly than expected in a conventional pore.
This essay recovers the newspaper writings of the Omaha journalist Susette Bright Eyes La Flesche as the first Indigenous woman to publish about the 1890 Wounded Knee Massacre. Her eyewitness accounts challenge mainstream histories of the massacre that focus largely on frontier violence and Indigenous death by rewriting Wounded Knee as a place of Indigenous resilience and of an Indigenous community bound together by the rights and responsibilities of kinship. By prioritizing the stories of surviving Indigenous women and girls, Bright Eyes's reporting speaks to and becomes a precedent for ongoing acts and discourses of Indigenous activism, feminism, resurgence, and self-determination.
COVID-19 has significantly impacted society for over 2.5 years, and Long COVID is concerning for its long-term impact on the healthcare system. Research on cognitive and emotional functioning in Long COVID remains limited, but two recent studies (Whiteside et al., 2022a, 2022b) examined cognitive and emotional functioning in Long COVID patients approximately 6 months post-diagnosis. The studies found limited cognitive deficits but significant depression and anxiety, which in turn were the best predictors of low-average cognitive scores. Further, the mean Personality Assessment Inventory (PAI) profile showed the highest mean elevations on the somatic preoccupation (SOM) and depression (DEP) scales. To further explore personality functioning in Long COVID, this study compared the PAI profiles of Long COVID patients with those of a potentially similar group with post-concussion syndrome (PCS), which has been shown to have a strong psychological component.
Participants and Methods:
Participants included 44 consecutive outpatients (Mean age = 47.89, SD = 13.05, 84% Female, 75% Caucasian) referred from a Long COVID clinic with cognitive complaints related to COVID, while the comparison group of PCS patients included 50 consecutive referrals (Mean age = 38.82, SD = 16.24, 52% Female, 90% Caucasian) related to cognitive complaints attributed to PCS. A series of t-tests between the 2 groups was conducted on the PAI validity, clinical, interpersonal, and treatment consideration scales. PAI clinical subscales were also compared. To control for multiple comparisons, p < .01 was utilized and effect sizes were compared.
Results:
The results demonstrated that both Long COVID (SOM M = 68.66, SD = 12.56; DEP M = 63.39, SD = 12.70) and PCS groups (SOM M = 65.28, SD = 12.06; DEP M = 70.32, SD = 16.15) displayed the highest mean elevations on the PAI SOM and DEP scales, but no statistically significant differences in mean scale elevations between Long COVID and PCS groups on SOM (t(92) = 1.33, p = .80) and DEP (t(92) = -2.11, p = .097). However, results demonstrated statistically significant differences on the paranoia scale (PAR; t(92) = -3.27, p = .009), antisocial features scale (ANT; t(92) = -2.22, p = .01), stress scale (STR; t(90) = -3.51, p = .006) and suicidal ideation scale (SUI; t(92) = -2.73, p = .000) of the PAI. Specifically, the mean scores for the PCS group were higher across the paranoia (M = 57.30), antisocial features (M = 52.24), stress (M = 58.44), and suicidal ideation (M = 57.82) scales of the PAI than for the Long COVID group. While these patterns of reporting differed between groups, mean scores for both groups were in the normal range.
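The group comparisons above can be reproduced from the summary statistics alone. A minimal sketch of a pooled two-sample t-test using the reported SOM means and SDs, with the group sizes (n = 44 Long COVID, n = 50 PCS) taken from the Methods; this is an illustration of the computation, not the authors' exact analysis:

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Student's two-sample t statistic with pooled variance."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

# Reported SOM summary statistics: Long COVID (n = 44) vs. PCS (n = 50).
t, df = pooled_t(68.66, 12.56, 44, 65.28, 12.06, 50)
print(f"t({df}) = {t:.2f}")  # matches the reported SOM t statistic of 1.33
```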
Conclusions:
Results support the similarities in emotional/personality functioning across Long COVID and PCS patients and the importance of evaluating psychological functioning in these samples as a standard part of neuropsychological evaluations. Further, the results suggest that psychological treatment strategies utilized with PCS patients may be helpful for Long COVID patients, but more research is needed.
Inductive reasoning training has been found to be particularly effective at improving inductive reasoning, with some evidence of improved everyday functioning and driving. Telehealth may be useful for increasing access to, reducing time and travel burdens of, and reducing the need for physical spaces for cognitive training. On the other hand, telehealth increases technology burden. The present study investigated the feasibility and effectiveness of implementing an inductive reasoning training program, designed to mimic the inductive reasoning arm used in a large multi-site clinical trial (Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE)), via telehealth (using Zoom and Canvas as delivery platforms).
Participants and Methods:
31 older adult participants (mean age = 71.2, range = 65-85; mean education = 15.5 years, range = 13-18; 64.5% female; 87.1% White) received 10 sessions of telehealth-delivered inductive reasoning training over 5 weeks. Comparison groups (inductive-reasoning-trained and no-contact controls) were culled from the in-person ACTIVE trial via propensity matching. All participants completed three pretest and posttest inductive reasoning measures (Word Series, Letter Series, Letter Sets), as well as a posttest measure assessing participant perceptions of the telehealth intervention. In addition, at the end of each of the ten training sessions, participants received a final inductive reasoning assessment.
Results:
Telehealth participants provided high levels of endorsement, suggesting that the telehealth training program was useful, reliable, easy to use and interact with, and employed a usable interface. Participants were generally satisfied with the training program. With regard to performance, telehealth participants demonstrated greater gains than untrained controls on Letter Series [F(1, 116) = 9.81, p = 0.002, partial eta-squared = 0.084] and Letter Sets [F(1, 116) = 8.69, p = 0.004, partial eta-squared = 0.074], but did not differ in improvement on Word Series [F(1, 116) = 1.145, p = 0.287, partial eta-squared = 0.010]. Furthermore, telehealth participants evinced inductive reasoning gains similar to those of matched in-person inductive-reasoning-trained participants on Letter Series [F(1, 116) = 1.24, p = 0.226, partial eta-squared = 0.01] and Letter Sets [F(1, 116) = 1.29, p = 0.259, partial eta-squared = 0.01], but demonstrated smaller gains in Word Series performance [F(1, 116) = 25.681, p < 0.001, partial eta-squared = 0.181]. On the end-of-session reasoning tests, telehealth-trained participants showed a similar general pattern of improvement across the ten training sessions and did not differ significantly from in-person-trained comparison participants.
Conclusions:
Cognitive training via telehealth evinced gains similar to those of its in-person counterpart across nearly all measures. However, the telehealth training platform also posed substantial challenges. Despite these challenges, participants reported perceiving increased competence with computer use, peripherals (mouse, trackpad), and videoconferencing. These may be ancillary benefits of such training and may be maximized if more age-friendly learning management systems are investigated. Overall, this study suggests that telehealth delivery may be a viable form of inductive reasoning training, and future studies could increase performance gains by optimizing the online training platform for older adults.
Chapter 9 presents Ratzinger’s theology of creation, in which he emphasizes the ontological goodness of the created world and promotes an integral vision of human ecology.
Little is known about when youth may be at greatest risk for attempting suicide, which is critically important information for the parents, caregivers, and professionals who care for youth at risk. This study used adolescent and parent reports, and a case-crossover, within-subject design to identify 24-hour warning signs (WS) for suicide attempts.
Methods
Adolescents (N = 1094, ages 13 to 18) with one or more suicide risk factors were enrolled and invited to complete bi-weekly, 8–10-item text message surveys for 18 months. Adolescents who reported a suicide attempt (survey item) were invited to participate in an interview regarding their thoughts, feelings/emotions, and behaviors/events during the 24 hours prior to their attempt (case period) and during a prior 24-hour period (control period). Their parents participated in an interview regarding the adolescents’ behaviors/events during these same periods. Adolescent-only, or adolescent and parent, interviews were completed for 105 adolescents (81.9% female; 66.7% White, 19.0% Black, 14.3% other).
Results
Both parent and adolescent reports of suicidal communications and withdrawal from social and other activities differentiated case and control periods. Adolescent reports also identified feelings (self-hate, emotional pain, rush of feelings, lower levels of rage toward others), cognitions (suicidal rumination, perceived burdensomeness, anger/hostility), and serious conflict with parents as WS in multi-variable models.
Conclusions
This study identified 24-hour WS in the domains of cognitions, feelings, and behaviors/events, providing an evidence base for the dissemination of information about signs of proximal risk for adolescent suicide attempts.
A canonical genealogy of artificial intelligence must include technologies and data built with, for and from animals. Animal identification using forms of electronic monitoring and digital management began in the 1970s. Early data innovations comprised RFID tags and transponders, which were followed by digital imaging and computer vision. Computer vision was initially applied in the 1980s in agribusiness to identify meat products and to classify biosecurity data for animal health, yet it is now interlaced in subtler ways with commercial pattern recognition systems that monitor and track people in public spaces. As such, this paper explores a set of managerial projects in Australian agriculture connected to computer vision and machine learning tools that contribute to dual use. Herein, ‘the cattle crush’ is positioned as a pivotal space in which animal bodies are interrogated by AI imaging, digitisation and data transformation with forms of computational and statistical analysis. By disentangling the kludge of numbering, imaging and classifying within precision agriculture, the paper highlights a computational transference of techniques between species, institutional settings and domains that is relevant to regulatory considerations for AI development. The paper posits that a significant sector of data innovation – concerning uses on animals – may tend to evade the level of regulatory and ethical scrutiny afforded to human spaces and settings, and as such may allow optimisation of these systems beyond our recognition.
Background: Reflexive urine culturing, a strategy wherein urine cultures are performed only on samples with pyuria, is increasingly being used to reduce unnecessary urine cultures, healthcare costs, and inappropriate antibiotics. To support implementation of a reflexive urine-culture order for pediatric patients aged <18 years, we assessed the proportion of urine cultures that would be avoided with reflexive urine culturing, and we calculated the sensitivity and negative predictive value (NPV) of the ≥10 white blood cells (WBC) per high-powered field (HPF) threshold for diagnosing urinary tract infection (UTI) in patients aged <18 years who presented to the pediatric emergency department (ED). Methods: A retrospective review was performed of patients aged <18 years with a urine culture obtained from January to May 2022 in an urban, tertiary-care pediatric ED. A positive urine culture was defined as ≥50,000 CFU/mL for catheterized specimens and ≥100,000 CFU/mL for clean-catch or unspecified specimens. Pyuria was defined as ≥10 WBC/HPF. ‘True UTI’ was defined as a positive urine culture with a consistent clinical presentation (eg, fever or dysuria). Sensitivity, specificity, and NPV were calculated using the pyuria threshold of ≥10 WBC/HPF compared to the gold standard of a ‘true UTI.’ Results: During the study period, 658 patients aged <18 years had urine cultures sent, of which 46 (7%) were positive. In all, 407 urine cultures (61.9%) were obtained by clean catch, 233 (35.4%) by urethral catheterization, 2 (0.3%) by Foley catheter, and 16 (2.4%) by an unspecified method. Among the 46 positive cultures, 32 (69.6%) had ≥10 WBC/HPF, and 55 (9.0%) of the 612 negative cultures had ≥10 WBC/HPF. Of the 14 patients with positive urine cultures without pyuria, 8 had a contaminated sample or asymptomatic bacteriuria, 3 had urologic abnormalities, and 3 were infants aged <3 months.
Of the 14 patients, 3 (21.4%) had a consistent clinical presentation for UTI and were treated with antibiotics: 2 were infants aged <3 months and 1 had urologic abnormalities. Using the ≥10 WBC/HPF threshold compared to ‘true UTI,’ sensitivity was 91.4%, specificity was 91.5%, positive predictive value was 36%, and NPV was 99.5%. Sensitivity and NPV increased to 100% when infants aged <3 months and urologic patients with positive urine culture were excluded. We estimated a cost saving of ~$200,000 had reflexive testing been in place. Conclusions: A reflexive urine culture for specimens with ≥10 WBC/HPF would have reduced the number of urine cultures substantially because 571 (86.8%) of 658 urine cultures would not have been performed. To prevent missed diagnoses of UTI, infants aged <3 months and children with urologic abnormalities should be excluded from this diagnostic stewardship intervention.
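The diagnostic accuracy figures can be reconstructed from the counts reported above. A minimal sketch, assuming a total of 35 'true UTIs' (the 32 pyuria-positive cases plus the 3 culture-positive, pyuria-negative patients treated for UTI) — a figure implied by, but not stated in, the abstract, since 32/35 reproduces the reported 91.4% sensitivity:

```python
# Reconstructing diagnostic accuracy for the >=10 WBC/HPF pyuria threshold
# from the abstract's counts. Assumption (not stated explicitly): all 32
# pyuria-positive cultures were 'true UTIs', giving 35 true UTIs in total.

total_cultures = 658
true_utis = 35          # assumed: implied by sensitivity 91.4% = 32/35
tp = 32                 # true UTIs with pyuria (>=10 WBC/HPF)
fn = true_utis - tp     # the 3 treated patients without pyuria
fp = 55                 # culture-negative samples with pyuria
tn = total_cultures - true_utis - fp

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
npv = tn / (tn + fn)

print(f"sensitivity = {sensitivity:.1%}")  # 91.4%, as reported
print(f"NPV = {npv:.1%}")                  # 99.5%, as reported
print(f"specificity = {specificity:.1%}")  # ~91.2%, close to the reported 91.5%
```

The small gap on specificity presumably reflects rounding or a slightly different denominator in the original analysis.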
Many factors affect patient outcome after congenital heart surgery, including the complexity of the heart disease, pre-operative status, patient-specific factors (prematurity, nutritional status and/or the presence of comorbid conditions or genetic syndromes), and post-operative residual lesions. The Residual Lesion Score is a novel tool for assessing whether specific residual cardiac lesions after surgery have a measurable impact on outcome. The goal is to understand which residual lesions can be tolerated and which should be addressed prior to leaving the operating room. The Residual Lesion Score study is a large multicentre prospective study designed to evaluate the association of the Residual Lesion Score with outcomes in infants undergoing surgery for CHD. This Pediatric Heart Network and National Heart, Lung, and Blood Institute-funded study prospectively enrolled 1,149 infants undergoing 5 different congenital cardiac surgical repairs at 17 surgical centres. Given the contribution of echocardiographic measurements in assigning the Residual Lesion Score, the study made use of a centralised core lab in addition to site review of all data. The data collection plan was designed with the added goal of collecting image-quality information in a way that would permit us to improve our understanding of the reproducibility, variability, and feasibility of the echocardiographic measurements being made. There were significant challenges along the way, including the coordination, de-identification, storage, and interpretation of very large quantities of imaging data. This necessitated the development of new infrastructure and technology, as well as the use of novel statistical methods. The study was successfully completed, but the size and complexity of the population being studied and of the data being extracted required more technologic and human resources than expected, which impacted the length and cost of conducting the study.
This paper outlines the process of designing and executing this complex protocol, some of the barriers to implementation and lessons to be considered in the design of future studies.
This paper considers the structure and priorities of the Carthaginian state in its imperial endeavours both in North Africa and across the Mediterranean, focusing especially on the well-documented period of the Punic Wars (264–146 BC). It suggests that Carthaginian constitutional structures, in particular the split between civil shofetim (‘judges’) and military rabbim (‘generals’), shaped the strategic outlook and marginal bellicosity of the city, making it less competitive against its primary peer rival in the Western Mediterranean, Rome.