Background: Stroke survivors have a higher risk of depression and suicide, but how hospitalization for major depression modifies the risk of suicide after stroke is not well known. Methods: We conducted a population-based matched cohort study of adults hospitalized with first-ever stroke between 2008 and 2017, matched 1:1 to the general Ontario population on age, sex, neighbourhood-level income, rurality, and comorbidities. Patients with major depression or deliberate self-harm prior to the index event were excluded from both groups. We used cause-specific proportional hazards models to evaluate the association between stroke and suicide (defined as deliberate self-harm or death by suicide) and an interaction term to assess effect modification of the stroke–suicide association by depression. Results: We included 64,719 matched pairs of patients with and without stroke (45.5% female, mean age 71.4 years). Compared with matched controls, stroke survivors had a higher rate of suicide (11.1 vs. 3.2; HR 2.87 [2.35–3.51]). Depression was associated with a higher rate of suicide in both groups (HR 13.8 [8.82–21.61]). The interaction between stroke and depression was not significant (P for interaction = 0.51). Conclusions: Hospitalization for depression does not modify the rate of suicide after stroke, suggesting the need to better understand the pathways leading to suicide after stroke.
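The effect-modification analysis described above can be sketched, under the usual cause-specific proportional hazards setup, as a model with a stroke × depression interaction term (the notation below is generic, not taken from the study):

```latex
h_{\text{suicide}}(t \mid X) = h_{0}(t)\,
  \exp\bigl(\beta_{1}\,\mathrm{stroke} + \beta_{2}\,\mathrm{depression}
  + \beta_{3}\,\mathrm{stroke}\times\mathrm{depression}\bigr)
```

A non-significant test of $\beta_{3}$ (here, P = 0.51) corresponds to the reported absence of effect modification, while $\exp(\beta_{1})$ and $\exp(\beta_{2})$ correspond to the hazard ratios for stroke and depression.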
Tape rolls are often used for multiple patients despite manufacturers' recommendations for single-patient use. We developed a survey to query health care personnel about their tape-use practices and beliefs, and uncovered behaviors that put patients at risk for hospital-acquired infections due to tape use.
This study aimed to evaluate the effect of supplementation with a marine-based rumen buffer (Lithothamnium calcareum) on rumen health, milk yield and composition, and behavioural and metabolic parameters of dairy cows. Thirty-six lactating multiparous Holstein cows averaging 39 kg/d of milk and 64 d in milk were used. The experiment was conducted over 60 d using two groups: a control group (CON; n = 18) supplemented with sodium bicarbonate at 1.1% of dry matter, and a treatment group (LITHO; n = 18) that received Lithothamnium calcareum at 0.5% of dry matter. Each group was fed daily with the buffer mixed into a total mixed ration containing 29.28% starch. Ruminal fluid was collected weekly to evaluate pH and volatile fatty acids. Feeding behaviour data were obtained through automatic feeders, while overall behavioural data were obtained using monitoring collars. Milk yield was recorded daily and adjusted for fat and energy. Milk samples were taken once weekly for analysis of fat, protein, lactose, and total solids. Blood samples were collected weekly for metabolic analysis, and faecal samples were collected weekly to evaluate pH and starch concentrations. The LITHO group produced more fat- and energy-corrected milk (P ≤ 0.01) and had higher percentages of fat and total solids (P < 0.05) than the CON group. Feeding behaviour data showed increased eating time (P ≤ 0.01) in the LITHO group but a higher eating rate (P < 0.01) in the CON group. Animals in the LITHO group had lower faecal pH (P < 0.05). The treatment did not affect dry matter intake, animal behaviour, ruminal acid–base balance, or faecal starch. In summary, Lithothamnium calcareum supplementation at 0.5% of dry matter improved milk yield, milk composition and, presumably, feed conversion efficiency.
The ongoing Russian–Ukrainian war has been linked to mental health problems in the Ukrainian general population. To date, however, scarce research has examined the mental health of psychosocial support workers (PSWs) in Ukraine who have a burdensome workload in the context of ongoing conflict. This study aimed to examine the prevalence and correlates of burnout, posttraumatic stress disorder (PTSD), and suicidal ideation (SI) in PSWs in Ukraine during the Russian–Ukrainian war.
Methods:
One hundred seventy-eight PSWs in Ukraine completed a survey assessing war exposure, mental health, and psychosocial characteristics.
Results:
A total of 59.6% of PSWs screened positive for burnout, 38.2% for PTSD, and 10.7% for current SI. Greater distress from witnessing war-related destruction, lower optimism, lower presence of meaning in life, and lower levels of close social relationships were associated with greater odds of burnout. Lower presence of meaning in life was associated with greater odds of SI.
Conclusions:
Results of this study highlight the mental health challenges faced by PSWs in Ukraine during the ongoing Russian–Ukrainian war. They further suggest that interventions to foster meaning in life and promote social connectedness may “help the helpers” during this ongoing conflict.
The U.S. Department of Veterans Affairs is actively transitioning away from a disease-centric model of healthcare to one that prioritizes disease prevention and the promotion of overall health and well-being. Described as Whole Health, this initiative aims to provide personalized, values-centered care that optimizes physical, behavioral, spiritual, and socioeconomic well-being. To inform this initiative, we analyzed cross-sectional data from a nationally representative sample of primarily older U.S. military veterans to estimate levels of well-being across these domains and to identify their sociodemographic, military, and potentially modifiable health and psychosocial correlates. Results revealed that, overall, veterans reported high domain-specific well-being (average scores ranging from 6.7 to 8.3 out of 10), with the highest levels in the socioeconomic domain and the lowest in the physical domain. Several modifiable factors, including purpose in life, resilience, and social support, were strongly associated with the examined well-being domains. Interventions targeting these constructs may help promote well-being among U.S. veterans.
Pediatric medical devices lag behind adult devices due to economic barriers, smaller patient populations, changing anatomy and physiology of patients, regulatory hurdles, and especially difficulties in executing clinical trials. We investigated the requirements, challenges, associated timeline, and costs of conducting a multi-site pivotal clinical trial for a Class II pediatric physiologic monitoring device.
Methods:
This case study focused on the negotiation of clinical trial agreements (CTAs), budgets, and Institutional Review Board (IRB) processing times for a pediatric device trial. We identified key factors contributing to delays in clinical trial execution and potential best practices to expedite the process while maintaining safety, ethics, and efficacy.
Results:
The total time from site contact to first patient enrollment averaged 14 months. CTA and budget negotiations were the most time-consuming processes, averaging nearly 10 and 9 months, respectively. Reliance and local IRB processing also contributed significantly to the timeline, overall adding an average of 6.5 months across institutions. Nearly half of all costs were devoted to regulatory oversight. The COVID-19 pandemic caused significant slowdowns and delays at multiple institutions during study enrollment. Despite these pandemic-induced delays, it is important to note that the issues and themes highlighted remain relevant and have post-pandemic applicability.
Conclusions:
Our case study results underscore the importance of establishing efficient and standardized processing of CTAs, budget negotiations, and use of reliance IRBs to expedite clinical trial execution for pediatric devices. The findings also highlight the need for a national clinical trials network to streamline the clinical trial process.
U.S. military veterans are on average 20 years older than non-veterans and have elevated rates of certain health conditions. While negative aging stereotypes have been linked to increased risk for various health conditions, little is known about the prevalence and correlates of these stereotypes in this population. Using data from a nationally representative sample of 4,069 U.S. veterans surveyed between November 2019 and March 2020, we examined (1) the current prevalence of negative aging stereotypes related to physical, mental, and cognitive health and (2) the sociodemographic, health, and psychosocial factors associated with these stereotypes. Multivariable regression and relative weight analyses were conducted to identify independent correlates of negative aging stereotypes. Results revealed that 82.3%, 71.1%, and 30.0% of veterans endorsed negative aging stereotypes related to physical, cognitive, and emotional health, respectively. Older age (36.6% relative variance explained), grit (23.6%), and optimism (17.5%) explained the majority of the variance in negative aging stereotypes related to physical aging; grit (46.6%), openness to experience (31.5%), and older age (15.1%) in stereotypes related to cognitive aging; and emotional stability (28.8%), purpose in life (28.8%), and grit (25.3%) in stereotypes related to emotional aging. This study provides an up-to-date characterization of the prevalence and correlates of negative aging stereotypes in U.S. veterans. Results underscore the importance of targeting key correlates of negative aging stereotypes, such as lower grit, as part of efforts to promote health and functioning in this population.
Face-to-face administration is the “gold standard” for both research and clinical cognitive assessments. However, many factors may impede or prevent face-to-face assessments, including distance to the clinic, limited mobility, impaired eyesight, or lack of transportation. The COVID-19 pandemic further widened gaps in access to care and clinical research participation. Alternatives to face-to-face assessments may provide an opportunity to alleviate the burden caused by both the COVID-19 pandemic and longer-standing social inequities. The objectives of this study were to develop and assess the feasibility of telephone- and video-administered versions of the Uniform Data Set (UDS) v3 cognitive battery for use by NIH-funded Alzheimer’s Disease Research Centers (ADRCs) and other research programs.
Participants and Methods:
Ninety-three individuals (M age: 72.8 years; education: 15.6 years; 72% female; 84% White) enrolled in our ADRC were included. Their most recent adjudicated cognitive status was normal cognition (N=44), MCI (N=35), mild dementia (N=11), or other (N=3). They completed portions of the UDSv3 cognitive battery, plus the RAVLT, by either telephone or video format within approximately 6 months (M = 151 days) of their annual in-person visit, where they completed the same cognitive assessments in person. Some measures were substituted (Oral Trails for TMT; Blind MoCA for MoCA) to allow for telephone administration. Participants also answered questions about the pleasantness of, difficulty level of, and preference for each administration mode. Cognitive testers provided ratings of the perceived validity of the assessment. Participants’ cognitive status was adjudicated by a group of cognitive experts blinded to the most recent in-person cognitive status.
Results:
When results from the video and telephone modalities were combined, the remote assessment was rated as pleasant as the in-person assessment by 74% of participants, and 75% rated the difficulty of completing the remote cognitive assessment as the same as that of in-person testing. Overall perceived validity of the testing session, as determined by the cognitive assessors (video = 92%; phone = 87.5%), was good. There was generally good concordance between test scores obtained remotely and in person (r = .3–.8; p < .05), regardless of whether they were administered by telephone or video, though individual test correlations differed slightly by mode. Substituted measures also generally correlated well, with the exception of TMT-A and OTMT-A (p > .05). Agreement between cognitive status adjudicated remotely and cognitive status based on in-person data was generally high (78%), with slightly better concordance for video/in-person (82%) than for phone/in-person (76%).
Conclusions:
This pilot study provided support for the use of telephone- and video-administered cognitive assessments using the UDSv3 among individuals with normal cognitive function and some degree of cognitive impairment. Participants found the experience similarly pleasant and no more difficult than in-person assessment. Test scores obtained remotely correlated well with those obtained in person, with some variability across individual tests. Adjudication of cognitive status did not differ significantly whether it was based on data obtained remotely or in person. The study was limited by its small sample size, large test–retest window, and lack of randomization to test-modality order. Efforts are currently underway to more fully validate this battery of tests for remote assessment. Funded by: P30 AG072947 & P30 AG049638-05S1
Our limited knowledge of the climate prevailing over Europe during former glaciations is the main obstacle to reconstructing the past evolution of ice coverage over the Alps by numerical modelling. To address this challenge, we perform a two-step modelling approach: first, a regional climate model is used to downscale, at high resolution, the time-slice simulations of a global earth system model, yielding climate snapshots for the Last Glacial Maximum (LGM) and Marine Isotope Stage 4 (MIS4). Second, we combine these snapshots with a climate signal proxy to build a transient climate over the last glacial period and force the Parallel Ice Sheet Model to simulate the dynamical evolution of glaciers in the Alps. The results show that the extent of the modelled glaciers during the LGM agrees with several independent key geological imprints, including moraine-based maximal reconstructed glacial extents, known ice transfluences, and trajectories of erratic boulders of known origin and deposition. Our results highlight the benefit of multiphysical, coupled climate and glacier transient modelling over simpler approaches in reconstructing paleo-glacier fluctuations in agreement with the traces they have left on the landscape.
Neuropsychiatric symptoms (NPS) are common during the course of neurocognitive disorders. NPS have been previously reported in early and late stages of Alzheimer’s Disease. However, our understanding of NPS in high-risk states for dementia such as mild cognitive impairment (MCI) and major depressive disorder (MDD) is poor.
Objectives
To compare the frequency and factor structure of neuropsychiatric symptoms among individuals with Mild Cognitive Impairment (MCI), Major Depressive Disorder (MDD) in remission, and comorbid MCI and MDD (in remission) (MCI-D).
Methods
We used baseline data from the Prevention of Alzheimer’s Dementia with Cognitive Remediation Plus Transcranial Direct Current Stimulation in Mild Cognitive Impairment and Depression (PACt-MD) study, a multicenter trial across five academic sites in Toronto, Canada (clinical trial No. NCT0238667). We used ANOVA or the χ2 test to compare the frequency of NPS across groups, and performed factor analysis of Neuropsychiatric Inventory Questionnaire (NPI-Q) items within each of the three groups.
Results
We included 374 participants with a mean age of 72.0 years (SD = 6.3). In the overall sample, at least one NPS was present in 64.2% of participants, and 36.1% had NPS of at least moderate severity. Depression (54%, P < 0.001) and apathy (28.7%, P = 0.002) were more prevalent in the MCI-D group than in the MCI and MDD groups. In the factor analysis, NPS grouped differently in the MCI, MDD, and MCI-D groups. A “psychotic” factor emerged in the MCI and MCI-D groups, but not in MDD. Night-time behaviors and disinhibition grouped differently across all three groups.
Conclusions
Prevalence of NPS seems higher in persons with MCI-D as compared to those with only MCI or MDD. The factor structure of NPS differed between MCI, MDD, and MCI-D groups. Future studies should investigate the association of NPS factors with cognition, function, and illness biomarkers.
Background: Saccade and pupil responses are potential neurodegenerative disease biomarkers due to overlap between oculomotor circuitry and disease-affected areas. Instruction-based tasks have previously been examined as biomarker sources, but they are arduous for patients with limited cognitive abilities; additionally, few studies have evaluated multiple neurodegenerative pathologies concurrently. Methods: The Ontario Neurodegenerative Disease Research Initiative recruited individuals with Alzheimer’s disease (AD), mild cognitive impairment (MCI), amyotrophic lateral sclerosis (ALS), frontotemporal dementia, progressive supranuclear palsy, or Parkinson’s disease (PD). Patients (n=274, age 40-86) and healthy controls (n=101, age 55-86) viewed 10 minutes of frequently changing video clips without instruction while their eyes were tracked. We evaluated differences in saccade and pupil parameters (e.g. saccade frequency and amplitude, pupil size, responses to clip changes) between groups. Results: Preliminary data indicate low-level behavioural alterations in multiple disease cohorts: increased centre bias, lower overall saccade rate, and reduced saccade amplitude. After clip changes, patient groups generally demonstrated lower saccade rates but higher microsaccade rates, to varying degrees. Additionally, pupil responses were blunted (AD, MCI, ALS) or exaggerated (PD). Conclusions: This task may generate behavioural biomarkers even in cognitively impaired populations. Future work should explore the possible effects of factors such as medication and disease stage.
Lower perceived purpose in life (PIL) has been linked to a broad range of adverse physical, mental, and cognitive outcomes. However, limited research has examined factors associated with PIL that can be targeted in prevention and treatment efforts in aging populations at heightened risk of adverse outcomes. Using data from predominantly older US veterans, we sought to identify important correlates of PIL.
Methods:
Cross-sectional data were analyzed from the 2019–2020 National Health and Resilience in Veterans Study, which surveyed a nationally representative sample of 4069 US military veterans (mean age = 62.2 years). Elastic net and relative importance analyses were conducted to identify the sociodemographic, military, health, and psychosocial variables most strongly associated with PIL.
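The elastic net analysis described above fits a penalized regression of PIL on the candidate variables; a generic form of the objective, in standard notation (not taken from the study), is:

```latex
\hat{\beta} = \arg\min_{\beta}\;
  \frac{1}{2n}\,\lVert y - X\beta \rVert_2^2
  + \lambda\!\left( \alpha\,\lVert \beta \rVert_1
  + \frac{1-\alpha}{2}\,\lVert \beta \rVert_2^2 \right)
```

The $\ell_1$ term drives sparse variable selection (here, retaining 10 of 39 candidates), while the $\ell_2$ term stabilizes estimates when predictors are correlated.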
Results:
Of the 39 variables entered into an elastic net analysis, 10 were identified as significant correlates of PIL. In order of magnitude, these were resilience (18.7% relative variance explained [RVE]), optimism (12.1%), depressive symptoms (11.3%), community integration (10.7%), gratitude (10.2%), loneliness (9.8%), received social support (8.6%), conscientiousness (8.5%), openness to experience (5.4%), and intrinsic religiosity (4.7%).
Conclusions:
Several modifiable psychosocial factors emerged as significant correlates of PIL in US military veterans. Interventions designed to target these factors may help increase PIL and mitigate risk for adverse health outcomes in this population.
This systematic literature review aimed to provide an overview of the characteristics and methods used in studies applying the disability-adjusted life years (DALY) concept to infectious diseases within European Union (EU)/European Economic Area (EEA)/European Free Trade Association (EFTA) countries and the United Kingdom. Electronic databases and grey literature were searched for articles reporting the assessment of DALYs and their components. We considered studies in which researchers performed DALY calculations using primary epidemiological data input sources. We screened 3053 studies, of which 2948 were excluded; 105 studies met our inclusion criteria. Of these, 22 were multi-country and 83 were single-country studies, of which 46 were from the Netherlands. Food- and water-borne diseases were the most frequently studied infectious diseases. Between 2015 and 2022, the number of burden of infectious disease studies published was 1.6 times higher than between 2000 and 2014. Almost all studies (97%) estimated DALYs using the incidence- and pathogen-based approach and without social weighting functions; however, there was less methodological consensus with regard to the disability weights and life tables that were applied. The number of burden of infectious disease studies undertaken across Europe has increased over time. Development and use of guidelines will promote the conduct of burden of infectious disease studies and facilitate comparability of their results.
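The incidence-based DALY calculation used by almost all of the reviewed studies can be sketched as follows; the function names and the illustrative numbers are hypothetical, not drawn from any reviewed study:

```python
def yll(deaths, residual_life_expectancy):
    """Years of Life Lost: premature deaths x remaining life expectancy at age of death."""
    return deaths * residual_life_expectancy

def yld(incident_cases, disability_weight, duration_years):
    """Years Lived with Disability (incidence-based approach, no social weighting)."""
    return incident_cases * disability_weight * duration_years

def daly(deaths, residual_life_expectancy, incident_cases, disability_weight, duration_years):
    """DALY = YLL + YLD, without age weighting or time discounting."""
    return (yll(deaths, residual_life_expectancy)
            + yld(incident_cases, disability_weight, duration_years))

# Hypothetical outbreak: 10 deaths losing 30 years each, plus 200 incident cases
# with disability weight 0.1 lasting half a year on average.
print(daly(10, 30.0, 200, 0.1, 0.5))  # 310.0
```

The choices of disability weight and life table, identified above as the main sources of methodological divergence, enter through the `disability_weight` and `residual_life_expectancy` inputs.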
Economists have for decades recommended that carbon dioxide and other greenhouse gases be taxed – or otherwise priced – to provide incentives for their reduction. The USA does not have a federal carbon tax; however, many state and federal programs to reduce carbon emissions effectively price carbon – for example, through cap-and-trade systems or regulations. There are also programs that subsidize reductions in carbon emissions. At the 2022 meetings of the American Economic Association, the Society for Benefit-Cost Analysis brought together five well-known economists – Joe Aldy, Dallas Burtraw, Carolyn Fischer, Meredith Fowlie, and Rob Williams – to discuss how the USA does, in fact, price carbon and how it could price carbon. Maureen Cropper chaired the panel. This paper summarizes their remarks.
Seabirds are highly threatened, including by fisheries bycatch. An accurate understanding of the offshore distribution of seabirds is crucial to addressing this threat. Tracking technologies have revolutionised insights into seabird distributions, but tracking data may contain a variety of biases. We tracked two threatened seabirds (Salvin’s Albatross Thalassarche salvini, n = 60, and Black Petrel Procellaria parkinsoni, n = 46) from their breeding colonies in Aotearoa (New Zealand) to their non-breeding grounds in South America, including Peru, while simultaneously completing seven surveys in Peruvian waters. We then used species distribution models to predict occurrence and distribution using either data source alone and both data sources combined. Results showed seasonal differences between estimates of occurrence and distribution when using the data sources independently. Combining data resulted in more balanced insights into occurrence and distributions, and reduced uncertainty. Most notably, both species were predicted to occur in Peruvian waters during all four annual quarters: in the northern Humboldt upwelling system for Salvin’s Albatross and in northern continental shelf waters for Black Petrels. Our results highlight that relying on a single data source may introduce biases into distribution estimates. Our tracking data might have contained ontogenetic and/or colony-related biases (e.g. only breeding adults from one colony were tracked), while our survey data might have contained spatiotemporal biases (e.g. surveys were limited to waters <200 nm from the coast). We recommend combining data sources wherever possible to refine predictions of species distributions, which will ultimately improve fisheries bycatch management through a better spatiotemporal understanding of risks.
Mental health and psychosocial support (MHPSS) staff in humanitarian settings have limited access to clinical supervision and are at high risk of experiencing burnout. We previously piloted an online, peer-supervision program for MHPSS professionals working with displaced Rohingya (Bangladesh) and Syrian (Turkey and Northwest Syria) communities. Pilot evaluations demonstrated that online, peer-supervision is feasible, low-cost, and acceptable to MHPSS practitioners in humanitarian settings.
Objectives
This project will determine the impact of online supervision on i) the wellbeing and burnout levels of local MHPSS practitioners, and ii) practitioner technical skills to improve beneficiary perceived service satisfaction, acceptability, and appropriateness.
Methods
MHPSS practitioners in two contexts (Bangladesh and Turkey/Northwest Syria) will participate in 90-minute group-based online supervision sessions, fortnightly for six months. Sessions will be run on Zoom and co-facilitated by MHPSS practitioners and in-country research assistants. A quasi-experimental multiple-baseline design will enable a quantitative comparison of practitioner and beneficiary outcomes between control periods (12 months) and the intervention. Outcomes will be assessed with the Kessler-6, the Harvard Trauma Questionnaire, the Copenhagen Burnout Inventory, and the Client Satisfaction Questionnaire-8.
Results
A total of 80 MHPSS practitioners will complete 24 monthly online assessments from May 2022. Concurrently, 1920 people receiving MHPSS services will be randomly selected for post-session interviews (24 per practitioner).
Conclusions
This study will determine the impact of an online, peer-supervision program for MHPSS practitioners in humanitarian settings. Results from the baseline assessments, pilot evaluation, and theory of change model will be presented.
Health services research (HSR) is affected by a widespread problem related to service terminology, including non-commensurability (using different units of analysis for comparisons) and terminological unclarity due to ambiguity and vagueness of terms. The aim of this study was to identify the magnitude of terminological bias in health and social services research and health economics by applying an international classification system.
Methods
This study, which was part of the PECUNIA project, followed an ontoterminology approach (disambiguation of technical and scientific terms using a taxonomy and a glossary of terms). A listing of 56 types of health and social services relevant to mental health was compiled from a systematic review of the literature and feedback provided by 29 experts in six European countries. The disambiguation of terms was performed using an ontology-based classification of services (Description and Evaluation of Services and DirectoriEs – DESDE) and its glossary of terms. The analysis focused on the commensurability and clarity of definitions according to the reference classification system. Interrater reliability was analysed using the κ statistic.
Results
The disambiguation revealed that only 13 (23%) of the 56 service terms selected were accurate. Six terms (11%) were confusing, as they did not correspond to services as defined in the reference classification system (non-commensurability bias); 27 (48%) did not include a clear definition of the target population for which the service was intended; and the definition of the type of service was unclear in 59% of the terms (15 were ambiguous and 11 vague). The κ analyses showed significant agreement on the unit of analysis and the assignment of DESDE codes, and very high agreement on the definition of the target population.
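The interrater reliability analysis above uses the κ statistic; a minimal sketch of Cohen's κ for two raters follows (the labels and data are illustrative, not the PECUNIA codings):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters coding the same items."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    m1, m2 = Counter(rater1), Counter(rater2)
    p_e = sum(m1[label] * m2[label] for label in m1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of four services by two raters:
print(cohens_kappa(["A", "A", "B", "B"], ["A", "A", "B", "A"]))  # 0.5
```

κ corrects raw agreement for agreement expected by chance, so perfectly agreeing raters score 1.0 while chance-level agreement scores 0.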
Conclusions
Service terminology is a source of systematic bias in health services research, and certainly in mental healthcare, and the magnitude of the problem is substantial. This finding has major implications for the international comparability of resource use in health economics, quality, and equality research. The approach presented in this paper contributes to differentiating between services by taking into account key features such as target population, care setting, main activities, and the type and number of professionals, among others. It also supports financial incentives for effective health promotion and disease prevention. A detailed analysis of services in terms of cost measurement for economic evaluations reveals the necessity and usefulness of defining services using a coding system and taxonomical criteria rather than ‘text-based descriptions’.
Infectious pandemics have had a significant negative impact on economies and health-care systems around the world repeatedly throughout history. Patients with advanced age are commonly disproportionately affected by pandemics. Health-care providers for older patients may be the first to recognize emerging infectious emergencies and play a critical role for older patients during infectious threats. This chapter outlines historical infectious outbreaks, epidemics, and pandemics and their impact on older patients. The chapter further outlines the risks of pandemics to older patients, describes key response strategies, and guides preparedness of the geriatric care provider for future infectious public health emergencies.
Background: Eye movements reveal neurodegenerative disease processes due to overlap between oculomotor circuitry and disease-affected areas. Characterizing oculomotor behaviour in the context of cognitive function may enhance disease diagnosis and monitoring. We therefore aimed to quantify cognitive impairment in neurodegenerative disease using saccade behaviour and neuropsychology. Methods: The Ontario Neurodegenerative Disease Research Initiative recruited individuals with neurodegenerative disease: Alzheimer’s disease, mild cognitive impairment, amyotrophic lateral sclerosis, frontotemporal dementia, Parkinson’s disease, or cerebrovascular disease. Patients (n=450, age 40-87) and healthy controls (n=149, age 42-87) completed a randomly interleaved pro- and anti-saccade task (IPAST) while their eyes were tracked. We explored the relationships of saccade parameters (e.g. task errors, reaction times) to one another and to cognitive domain-specific neuropsychological test scores (e.g. executive function, memory). Results: Task performance worsened with cognitive impairment across multiple diseases. Distinct subsets of saccade parameters were interrelated and differentially related to neuropsychology-based cognitive domain scores (e.g. antisaccade errors and reaction times were associated with executive function). Conclusions: IPAST detects global cognitive impairment across neurodegenerative diseases. Distinct subsets of parameters associate with one another and with different cognitive domains, suggesting partially distinct underlying circuitry. This may have implications for the use of IPAST as a cognitive screening tool in neurodegenerative disease.
In this Research Communication we investigate the motivations of Brazilian dairy farmers to adopt automated behaviour recording and analysis systems (ABRS) and their attitudes towards the alerts that these systems issue. Thirty-eight farmers participated in the study, divided into two groups: ABRS users (USERS, n = 16) and non-users (NON-USERS, n = 22). Farmers in the USERS group completed a semi-structured interview conducted by telephone, and their answers were transcribed and coded; farmers in the NON-USERS group answered an online questionnaire. Descriptive analysis was applied to the coded answers. Most farmers were young individuals under 40 years of age, with undergraduate or graduate degrees, who had recently started their productive activities after a family succession process. Herd size varied, with an overall average of approximately 100 cows. Oestrus detection and cow health monitoring were the main reasons given for investing in this technology, and cost was the most important factor that prevented farmers from purchasing ABRS. All farmers in the USERS group affirmed that they observed the target cows after receiving a health or an oestrus alert. Farmers believed they were able to intervene in the evolution of the animals' health status, as the alerts gave a window of three to four days before the onset of clinical signs of disease, allowing treatment to start earlier. The alerts issued by the monitoring systems helped farmers to reduce the number of cows to be observed and to identify pre-clinically sick and oestrous animals more easily. Difficulties in illness detection and the lack of definite protocols impaired decision making and early treatment, although farmers believed ABRS improved the farm's routine and reproductive rates.