A growing number of studies use “real” effort designs for laboratory experiments, in which subjects complete an actual task to exert effort, rather than a stylized effort design in which subjects simply choose an effort level from a predefined set. The commonly argued reason for real effort is that it makes results more generalizable and field relevant. We investigate how to model effort provision, first providing a clear theoretical account of effort costs. We then empirically examine claims about the differences between real and stylized effort. Key to our examination is comparing the two modes of effort provision while holding effort costs constant, a point overlooked in many past examinations. In our data, when controlling for effort costs, we find no differences in behavior between real and stylized effort. Given the importance of effort costs and the lack of a generally accepted way to include them in real effort designs, we provide a simple add-on that any researcher can use to incorporate a theoretically appropriate and controlled cost of effort even in a real effort setting. We also discuss ways to better model effort costs in experiments, whether real or stylized, to improve inference on research questions.
There is increasing recognition of cognitive and pathological heterogeneity in early-stage Alzheimer’s disease and other dementias. Data-driven approaches have demonstrated cognitive heterogeneity in those with mild cognitive impairment (MCI), but few studies have examined this heterogeneity and its association with progression to MCI/dementia in cognitively unimpaired (CU) older adults. We identified cluster-derived subgroups of CU participants based on comprehensive neuropsychological data and compared baseline characteristics and rates of progression to MCI/dementia or a Dementia Rating Scale (DRS) score of <129 across subgroups.
Participants and Methods:
A hierarchical cluster analysis was conducted using 11 baseline neuropsychological test scores from 365 CU participants in the UCSD Shiley-Marcos Alzheimer’s Disease Research Center (age M=71.93 years, SD=7.51; 55.9% women; 15.6% Hispanic/Latino/a/x/e). A discriminant function analysis was then conducted to test whether the individual neuropsychological scores predicted cluster-group membership. Cox regressions examined the risk of progression to consensus diagnosis of MCI or dementia, or to DRS score <129, by cluster group.
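As an illustration of this style of analysis, the sketch below runs a hierarchical cluster analysis (Ward linkage, a common choice for profile data) on synthetic standardized test scores. The data, the two group profiles, and the cluster count are hypothetical and are not drawn from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 60 synthetic participants x 4 standardized test scores (z-scores),
# drawn from two artificial profiles: "average" and "low-memory"
average = rng.normal(0.0, 0.5, size=(30, 4))
low_memory = rng.normal([0.0, 0.0, -1.5, -1.5], 0.5, size=(30, 4))
scores = np.vstack([average, low_memory])

# Ward linkage on Euclidean distances; cut the dendrogram at 2 clusters
Z = linkage(scores, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")

counts = np.bincount(labels)[1:]  # participants per cluster
print(sorted(counts.tolist()))
```

In practice the number of clusters is chosen from the dendrogram or fit statistics rather than fixed in advance, and cluster membership can then be fed into downstream models such as the Cox regressions described above.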
Results:
Cluster analysis identified 5 groups: All-Average (n=139), Low-Visuospatial (n=46), Low-Executive (n=51), Low-Memory/Language (n=83), and Low-All Domains (n=46). The discriminant function analysis using the neuropsychological measures to predict group membership into these 5 clusters correctly classified 85.2% of the participants. Subgroups had unique demographic and clinical characteristics. Relative to the All-Average group, the Low-Visuospatial (hazard ratio [HR] 2.39, 95% CI [1.03, 5.56], p=.044), Low-Memory/Language (HR 4.37, 95% CI [2.24, 8.51], p<.001), and Low-All Domains (HR 7.21, 95% CI [3.59, 14.48], p<.001) groups had greater risk of progression to MCI/dementia. The Low-Executive group was also twice as likely to progress to MCI/dementia compared to the All-Average group, but did not statistically differ (HR 2.03, 95% CI [0.88, 4.70], p=.096). A similar pattern of results was found for progression to DRS score <129, with the Low-Executive (HR 2.82, 95% CI [1.26, 6.29], p=.012), Low-Memory/Language (HR 3.70, 95% CI [1.80, 7.56], p<.001) and Low-All Domains (HR 5.79, 95% CI [2.74, 12.27], p<.001) groups at greater risk of progression to a DRS score <129 than the All-Average group. The Low-Visuospatial group was also twice as likely to progress to DRS <129 compared to the All-Average group, but did not statistically differ (HR 2.02, 95% CI [0.80, 5.06], p=.135).
Conclusions:
Our results add to a growing literature documenting heterogeneity in the earliest cognitive and pathological presentations associated with Alzheimer’s disease and related disorders. Participants with subtle memory/language, executive, and visuospatial weaknesses all declined at faster rates than the All-Average group, suggesting that there are multiple pathways and/or unique subtle cognitive decline profiles that ultimately lead to a diagnosis of MCI/dementia. These results have important implications for early identification of individuals at risk for MCI/dementia. Given that the same classification approach may not be optimal for everyone, determining profiles of subtle cognitive difficulties in CU individuals and implementing neuropsychological test batteries that assess multiple cognitive domains may be a key step towards an individualized approach to early detection and fewer missed opportunities for early intervention.
The hazard ratio (HR) is a commonly used summary statistic when comparing time to event (TTE) data between trial arms, but assumes the presence of proportional hazards (PH). Non-proportional hazards (NPH) are increasingly common in NICE technology appraisals (TAs) due to an abundance of novel cancer treatments, which have differing mechanisms of action compared with traditional chemotherapies. The goal of this study is to understand how pharmaceutical companies, evidence review groups (ERGs) and appraisal committees (ACs) test for PH and report clinical effectiveness in the context of NPH.
Methods
A thematic analysis of NICE TAs concerning novel cancer treatments published between 1 January 2020 and 31 December 2021 was undertaken. Data on PH testing and clinical effectiveness reporting for overall survival (OS) and progression-free survival (PFS) were obtained from company submissions, ERG reports, and final appraisal determinations (FADs).
Results
NPH were present for OS or PFS in 28/40 appraisals, with log-cumulative hazard plots the most common testing methodology (40/40), supplemented by Schoenfeld residuals (20/40) and/or other statistical methods (6/40). In the context of NPH, the HR was ubiquitously reported by companies, inconsistently critiqued by ERGs (10/28), and commonly reported in FADs (23/28).
Conclusions
There is inconsistency in the PH testing methodology used in TAs. ERGs are inconsistent in critiquing use of the HR in the context of NPH, and even when critiqued, the HR remains a commonly reported outcome measure in FADs. Other measures of clinical effectiveness should be considered, along with guidance on clinical effectiveness reporting when NPH are present.
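To make the most common testing method concrete, here is a minimal numpy sketch (an illustration under assumed synthetic data, not taken from any appraisal) of the log-cumulative-hazard check: under PH, the log cumulative hazard curves of two arms are parallel, separated vertically by log(HR).

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard estimate at each observed event time."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = len(times)
    t_grid, H, h = [], [], 0.0
    for t, d in zip(times, events):
        if d:  # observed event; censored cases only shrink the risk set
            h += 1.0 / at_risk
            t_grid.append(t)
            H.append(h)
        at_risk -= 1
    return np.array(t_grid), np.array(H)

rng = np.random.default_rng(1)
control = rng.exponential(1.0, 500)  # baseline hazard 1.0
treated = rng.exponential(2.0, 500)  # hazard 0.5, so true HR = 0.5
events = np.ones(500, dtype=bool)    # toy example: no censoring

t_c, H_c = nelson_aalen(control, events)
t_t, H_t = nelson_aalen(treated, events)

# Under PH, log H_control(t) - log H_treated(t) is flat at -log(HR) = log 2
grid = np.linspace(0.3, 1.5, 5)
diff = np.log(np.interp(grid, t_c, H_c)) - np.log(np.interp(grid, t_t, H_t))
print(np.round(diff, 2))  # roughly constant, near log(2)
```

A curve that is clearly non-parallel (crossing, diverging) signals NPH, in which case a single HR can misrepresent the treatment effect; Schoenfeld residual tests formalize the same idea.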
We aimed to understand practice nurses’ perceptions about how they engage with parents during consultations concerning the measles, mumps and rubella (MMR) vaccine.
Background:
The incidence of measles is increasing globally. Immunisation is recognised as the most significant intervention to influence global health in modern times, although many factors are known to adversely affect immunisation uptake. Practice nurses are key members of the primary care team responsible for delivering immunisation. However, little is known about how practice nurses perceive this role.
Methods:
Semi-structured interviews were undertaken with 15 practice nurses in England using a qualitative descriptive approach. Diversity in years of experience and geographical practice settings was sought. These interviews were recorded, transcribed verbatim and open-coded using qualitative content analysis to manage, analyse and identify themes.
Findings:
Three themes were derived from the data: ‘engaging with parents’, ‘the informed practice nurse’ and ‘dealing with parental concerns: strategies to promote MMR uptake’. During their consultations, practice nurses encountered parents who held strong opinions about the MMR vaccine and perceived this to be related to the parents’ socio-demographic background. Practice nurses sought to provide parents with tailored and accurate sources of information to inform their immunisation decision-making about the MMR vaccine.
Ecological models acknowledge the importance of human-environment interactions in understanding and changing behavior. These models incorporate multiple levels of influence on behavior, including policy, community, organizational, social, and individual. Studies applying ecological models to explore health behavior correlates have tended to identify determinants at the individual level, with fewer exploring correlates at the social, physical, and policy levels. While primarily developed to explain human behavior, some ecological models have been further developed to inform interventions to change human behavior, often paired with theories such as social cognitive theory, organizational theory, and behavioral choice theory. Evidence syntheses indicate that ecological models are seldom used to inform intervention design, with the majority focusing on just one or two levels of the model. Most interventions applying ecological models to target child and adolescent health behaviors have reported small effect sizes, while child obesity prevention initiatives targeting factors at multiple levels of influence have shown larger effects. Future research should focus on developing interventions targeting all levels of ecological models, using interventions based on ecological models to change the behavior of whole communities, using ecological models within a systems framework, and exploring how they can assist with the scaling up of interventions to improve population reach.
Innovation Concept: Global health fieldwork is valuable for Canadian residents, but is often trainee-organized, short-term, unsupervised, and lacking in preparation and debriefing. In contrast, we have developed a Certificate Program that will be offered to University of Toronto (UofT) emergency medicine (EM) trainees in their final year of residency. This 6-month Program will complement the Transition to Practice stage for residents interested in becoming leaders in global health emergency medicine (GHEM). Methods: We completed a multi-phase needs assessment to inform the structure and content of a GHEM Certificate Program. Phase 1 consisted of 9 interviews with Program Directors (PDs), Assistant PDs, and past fellows from existing global health fellowships in Canada and the USA to understand program structure, curriculum, fieldwork and funding. In Phase 2 we interviewed 4 PDs and fellows from UofT fellowship programs to understand local administrative structures. In Phase 3 we collected feedback from 5 UofT residents and 7 faculty with experience in global health to assess interest in a local GHEM Program. All interview data were reviewed, and best practices and lessons learned from key stakeholders were summarized into a proposed outline for a 6-month GHEM Certificate Program. Curriculum, Tool, or Material: The Program will comprise 1) 3 months of preparatory work in Toronto followed by 2) 3 months of fieldwork in Addis Ababa, Ethiopia. Fieldwork will coincide with activities under the Toronto-Addis Ababa Academic Collaboration in Emergency Medicine (TAAAC-EM). The GHEM trainee's work will support TAAAC-EM activities. Preparatory months will include training in specific competencies (POCUS, teaching, tropical medicine, QI) and meetings between the trainee and a UofT mentor to design an academic project. During fieldwork, the trainee will do EM teaching (75% of time) and complete their academic project (25% of time).
A UofT supervisor will accompany, orient and supervise the trainee for their first 2 weeks in Addis. Throughout fieldwork, the trainee will be required to debrief with their UofT mentor weekly for academic and clinical mentoring. One AAU faculty member will be identified as a local supervisor and will participate in all evaluations of the trainee during fieldwork. Conclusion: This Program will launch with a call for applications in July 2021, expecting the first trainee to complete the Program in 2022-23. We anticipate that this Program will increase the number of Canadian EM trainees committed to global health projects and partnerships throughout their career.
Objectives: Research has shown that analyzing intrusion errors generated on verbal learning and memory measures is helpful for distinguishing between the memory disorders associated with Alzheimer’s disease (AD) and other neurological disorders, including Huntington’s disease (HD). Moreover, preliminary evidence suggests that certain clinical populations may be prone to exhibit different types of intrusion errors. Methods: We examined the prevalence of two new California Verbal Learning Test-3 (CVLT-3) intrusion subtypes – across-trial novel intrusions and across/within trial repeated intrusions – in individuals with AD or HD. We hypothesized that the encoding/storage impairment associated with medial-temporal involvement in AD would result in a greater number of novel intrusions on the delayed recall trials of the CVLT-3, whereas the executive dysfunction associated with subcortical-frontal involvement in HD would result in a greater number of repeated intrusions across trials. Results: The AD group generated significantly more across-trial novel intrusions than across/within trial repeated intrusions on the delayed cued-recall trials, whereas the HD group showed the opposite pattern on the delayed free-recall trials. Conclusions: These new intrusion subtypes, combined with traditional memory analyses (e.g., recall versus recognition performance), promise to enhance our ability to distinguish between the memory disorders associated with primarily medial-temporal versus subcortical-frontal involvement.
Distributed models and a good knowledge of the catchment studied are required to assess mitigation measures for nitrogen (N) pollution. A set of alternative scenarios (change of crop management practices and different strategies of landscape management, especially different sizes and distribution of set-aside areas) were simulated with a fully distributed model in a small agricultural catchment. The results show that current practices are close to complying with current regulations, which results in a limited effect of the implementation of best crop management practices. The location of set-aside zones is more important than their size in decreasing nitrate fluxes in stream water. The most efficient location is the lower parts of hillslopes, combining the dilution effect due to the decrease of N input per unit of land and the interception of nitrate transferred by sub-surface flows. The main process responsible for the interception effect is probably uptake by grassland and retention in soils since the denitrification load tends to decrease proportionally to N input and, for the scenarios considered, is lower in the interception scenarios than in the corresponding dilution zones.
Objectives: Although subjective cognitive complaints (SCC) are an integral component of the diagnostic criteria for mild cognitive impairment (MCI), previous findings indicate they may not accurately reflect cognitive ability. Within the Alzheimer’s Disease Neuroimaging Initiative, we investigated longitudinal change in the discrepancy between self- and informant-reported SCC across empirically derived subtypes of MCI and normal control (NC) participants. Methods: Data were obtained for 353 MCI participants and 122 “robust” NC participants. Participants were classified into three subtypes at baseline via cluster analysis: amnestic MCI, mixed MCI, and cluster-derived normal (CDN), a presumptive false-positive group who performed within normal limits on neuropsychological testing. SCC at baseline and two annual follow-up visits were assessed via the Everyday Cognition Questionnaire (ECog), and discrepancy scores between self- and informant-report were calculated. Analysis of change was conducted using analysis of covariance. Results: The amnestic and mixed MCI subtypes demonstrated increasing ECog discrepancy scores over time. This was driven by an increase in informant-reported SCC, which corresponded to participants’ objective cognitive decline, despite stable self-reported SCC. Increasing unawareness was associated with cerebrospinal fluid Alzheimer’s disease biomarker positivity and progression to Alzheimer’s disease. In contrast, CDN and NC groups over-reported cognitive difficulty and demonstrated normal cognition at all time points. Conclusions: MCI participants’ discrepancy scores indicate progressive underappreciation of their evolving cognitive deficits. Consistent over-reporting in the CDN and NC groups despite normal objective cognition suggests that self-reported SCC do not predict impending cognitive decline. Results demonstrate that self-reported SCC become increasingly misleading as objective cognitive impairment becomes more pronounced. 
(JINS, 2018, 24, 842–853)
Objectives: The third edition of the California Verbal Learning Test (CVLT-3) includes a new index termed List A versus Novel/Unrelated recognition discriminability (RD) on the Yes/No Recognition trial. Whereas the Total RD index incorporates false positive (FP) errors associated with all distractors (including List B and semantically related items), the new List A versus Novel/Unrelated RD index incorporates only FP errors associated with novel, semantically unrelated distractors. Thus, in minimizing levels of source and semantic interference, the List A versus Novel/Unrelated RD index may yield purer assessments of yes/no recognition memory independent of vulnerability to source memory difficulties or semantic confusion, both of which are often seen in individuals with primarily frontal-system dysfunction (e.g., early Huntington’s disease [HD]). Methods: We compared the performance of individuals with Alzheimer’s disease (AD) and HD in mild and moderate stages of dementia on CVLT-3 indices of Total RD and List A versus Novel/Unrelated RD. Results: Although AD and HD subgroups exhibited deficits on both RD indices relative to healthy comparison groups, those with HD generally outperformed those with AD, and group differences were more robust on List A versus Novel/Unrelated RD than on Total RD. Conclusions: Our findings highlight the clinical utility of the new CVLT-3 List A versus Novel/Unrelated RD index, which (a) maximally assesses yes/no recognition memory independent of source and semantic interference; and (b) provides a greater differentiation between individuals whose memory disorder is primarily at the encoding/storage level (e.g., as in AD) versus at the retrieval level (e.g., as in early HD). (JINS, 2018, 24, 833–841)
Footprints in Time: The Longitudinal Study of Indigenous Children (LSIC) is a national study of 1759 Australian Aboriginal and Torres Strait Islander children living across urban, regional and remote areas of Australia. The study is in its 11th wave of annual data collection, having collected extensive data on topics including birth and early life influences, parental health and well-being, identity, cultural engagement, language use, housing, racism, school engagement and academic achievement, and social and emotional well-being. The current paper reviews a selection of major findings from Footprints in Time relating to the developmental origins of health and disease for Australian Aboriginal and Torres Strait Islander peoples. Opportunities for new researchers to conduct further research utilizing the LSIC data set are also presented.
The anabolic potential of a dietary protein is determined by its ability to elicit postprandial rises in circulating essential amino acids and insulin. Minimal data exist regarding the bioavailability and insulinotropic effects of non-animal-derived protein sources. Mycoprotein is a sustainable and rich source of non-animal-derived dietary protein. We investigated the impact of mycoprotein ingestion, in a dose–response manner, on acute postprandial hyperaminoacidaemia and hyperinsulinaemia. In all, twelve healthy young men completed five experimental trials in a randomised, single-blind, cross-over design. During each trial, volunteers consumed a test drink containing either 20 g milk protein (MLK20) or a mass matched (not protein matched due to the fibre content) bolus of mycoprotein (20 g; MYC20), a protein matched bolus of mycoprotein (40 g; MYC40), 60 g (MYC60) or 80 g (MYC80) mycoprotein. Circulating amino acid, insulin and uric acid concentrations, and clinical chemistry profiles, were assessed in arterialised venous blood samples during a 4-h postprandial period. Mycoprotein ingestion resulted in slower but more sustained hyperinsulinaemia and hyperaminoacidaemia compared with milk when protein matched, with overall bioavailability equivalent between conditions (P>0·05). Increasing the dose of mycoprotein amplified these effects, with some evidence of a plateau at 60–80 g. Peak postprandial leucine concentrations were 201 (sem 24) (30 min), 118 (sem 10) (90 min), 150 (sem 14) (90 min), 173 (sem 23) (45 min) and 201 (sem 21) (90 min) µmol/l for MLK20, MYC20, MYC40, MYC60 and MYC80, respectively. Mycoprotein represents a bioavailable and insulinotropic dietary protein source. Consequently, mycoprotein may be a useful source of dietary protein to stimulate muscle protein synthesis rates.
Introduction/Innovation Concept: Demand for training in global health emergency medicine (EM) practice and education across Canada is high and increasing. For faculty with advanced global health EM training, EM departments have not traditionally recognized global health as an academic niche warranting support. To address these unmet needs, expert faculty at the University of Toronto (UT) established the Global Health Emergency Medicine (GHEM) organization to provide both quality training opportunities for residents and an academic home for faculty in the field of global health EM. Methods: Six faculty with training and experience in global health EM founded GHEM in 2010 at a UT teaching hospital, supported by the leadership of the ED chief and head of the Divisions of EM. This initial critical mass of faculty formed a governing body, seed funding was granted from the affiliated hospital practice plan and a five-year strategic academic plan was developed. Curriculum, Tool, or Material: GHEM has flourished at UT with growing membership and increasing academic outputs. Five governing members and 9 general faculty members currently run 18 projects engaging over 60 faculty and residents. Formal partnerships have been developed with institutions in Ethiopia, Congo and Malawi, supported by five granting agencies. Fifteen publications have been authored to date with multiple additional manuscripts currently in review. Nineteen FRCP and CCFP-EM residents have been mentored in global health clinical practice, research and education. Finally, GHEM’s activities have become a leading recruitment tool for both EM postgraduate training programs and the EM department. Conclusion: GHEM is the first academic EM organization in Canada to meet the ever-growing demand for quality global health EM training and to harness and support existing expertise among faculty. 
The productivity from this collaborative framework has established global health EM at UT as a relevant and sustainable academic career. GHEM serves as a model for other faculty and institutions looking to move global health EM practice from the realm of ‘hobby’ to recognized academic endeavor, with proven academic benefits conferring to faculty, trainees and the institution.
Objectives: We examined florbetapir positron emission tomography (PET) amyloid scans across stages of preclinical Alzheimer’s disease (AD) in cortical, allocortical, and subcortical regions. Stages were characterized using empirically defined methods. Methods: A total of 312 cognitively normal Alzheimer’s Disease Neuroimaging Initiative participants completed a neuropsychological assessment and florbetapir PET scan. Participants were classified into stages of preclinical AD using (1) a novel approach based on the number of abnormal biomarkers/cognitive markers each individual possessed, and (2) National Institute on Aging and the Alzheimer’s Association (NIA-AA) criteria. Preclinical AD groups were compared to one another and to a mild cognitive impairment (MCI) sample on florbetapir standardized uptake value ratios (SUVRs) in cortical and allocortical/subcortical regions of interest (ROIs). Results: Amyloid deposition increased across stages of preclinical AD in all cortical ROIs, with SUVRs in the later stages reaching levels seen in MCI. Several subcortical areas showed a pattern of results similar to the cortical regions; however, SUVRs in the hippocampus, pallidum, and thalamus largely did not differ across stages of preclinical AD. Conclusions: Substantial amyloid accumulation in cortical areas has already occurred before one meets criteria for a clinical diagnosis. Potential explanations for the unexpected pattern of results in some allocortical/subcortical ROIs include lack of correspondence between (1) cerebrospinal fluid and florbetapir PET measures of amyloid, or between (2) subcortical florbetapir PET SUVRs and underlying neuropathology. Findings support the utility of our novel method for staging preclinical AD. By combining imaging biomarkers with detailed cognitive assessment to better characterize preclinical AD, we can advance our understanding of who is at risk for future progression. (JINS, 2016, 22, 978–990)
Fetal alcohol spectrum disorder (FASD) is increasingly recognized as a growing public health issue worldwide. Although more research is needed on both the diagnosis and treatment of FASD, and a broader and more culturally diverse range of services are needed to support those who suffer from FASD and their families, both research and practice for FASD raise significant ethical issues. In response, from the point of view of both research and clinical neuroethics, we provide a framework that emphasizes the need to maximize benefits and minimize harm, promote justice, and foster respect for persons within a global context.
The marketing of infant/child milk-based formulas (MF) contributes to suboptimal breast-feeding and adversely affects child and maternal health outcomes globally. However, little is known about recent changes in MF markets. The present study describes contemporary trends and patterns of MF sales at the global, regional and country levels.
Design
Descriptive statistics of trends and patterns in MF sales volume per infant/child for the years 2008–2013 and projections to 2018, using industry-sourced data.
Setting
Eighty countries categorized by country income bracket, for developing countries by region, and in countries with the largest infant/child populations.
Subjects
MF categories included total (for ages 0–36 months), infant (0–6 months), follow-up (7–12 months), toddler (13–36 months) and special (0–6 months).
Results
In 2008–2013 world total MF sales grew by 40·8 % from 5·5 to 7·8 kg per infant/child/year, a figure predicted to increase to 10·8 kg by 2018. Growth was most rapid in East Asia particularly in China, Indonesia, Thailand and Vietnam and was led by the infant and follow-up formula categories. Sales volume per infant/child was positively associated with country income level although with wide variability between countries.
Conclusions
A global infant and young child feeding (IYCF) transition towards diets higher in MF is underway and is expected to continue apace. The observed increase in MF sales raises serious concern for global child and maternal health, particularly in East Asia, and calls into question the efficacy of current regulatory regimes designed to protect and promote optimal IYCF. The observed changes have not been captured by existing IYCF monitoring systems.
This study investigated the incidence of, and risk to staff groups of sustaining, needlestick injuries (NSIs) in the National University Hospital (NUH), Singapore. A retrospective cohort review of incident NSI cases was undertaken to determine the injury rate, causation, and epidemiological profile of such injuries. The risk of sustaining a recurrent NSI by occupation and location was analysed using the Cox proportional hazards model. There were 244 NSI cases among 5957 employees in NUH in 2014, giving an incidence rate of 4·1/100 healthcare workers (HCWs) per year. The incidence rate was highest for doctors, at 21·3, compared with 2·7 for nurses; 40·6% of injuries occurred in wards, and 32·8% in operating theatres. There were 27 cases of repeated NSIs. The estimated cost of NSIs in NUH ranged from US$ 109 800 to US$ 563 152 in 2014. We conclude that creating a workplace environment in which top priority is given to preventing NSIs among HCWs is essential to address the high incidence of reported NSIs. The data collected will be of value in informing the design of prevention programmes to further reduce the risk of NSIs in HCWs.
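The headline rate follows directly from the counts reported above; a one-line check in Python:

```python
# 244 NSIs among 5957 employees in one year, per 100 healthcare workers
cases, employees = 244, 5957
rate_per_100 = 100 * cases / employees
print(round(rate_per_100, 1))  # 4.1
```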
Human factors certification criteria are being developed for large civil aircraft with the objective of reducing the incidence of design-induced error on the flight deck. Many formal error identification techniques exist that were developed in non-aviation contexts, but none has been validated for this purpose. This paper describes a new human error identification technique (HET – human error template) designed specifically as a diagnostic tool for the identification of design-induced error on the flight deck. HET is benchmarked against three existing techniques (SHERPA – systematic human error reduction and prediction approach; human error HAZOP – hazard and operability study; and HEIST – human error in systems tool). HET outperforms all three existing techniques in a validation study comparing predicted errors with actual errors reported during an approach and landing task in a modern, highly automated commercial aircraft. It is concluded that HET should provide a useful adjunct to the proposed human factors certification process.
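Benchmarking exercises of this kind score each technique's predicted errors against those actually observed. The sketch below shows one common way to do this, a sensitivity index taking the mean of the hit rate and the correct-rejection rate; both the formula choice and the tallies are illustrative assumptions, not figures from the paper.

```python
def sensitivity_index(hits, misses, false_alarms, correct_rejections):
    """Mean of hit rate and correct-rejection rate; ranges from 0 to 1,
    where 1 means every observed error was predicted and no error was
    predicted that did not occur."""
    hit_rate = hits / (hits + misses)
    cr_rate = correct_rejections / (correct_rejections + false_alarms)
    return (hit_rate + cr_rate) / 2

# Hypothetical tallies for two techniques applied to the same task
print(round(sensitivity_index(18, 2, 5, 25), 2))   # 0.87
print(round(sensitivity_index(12, 8, 10, 20), 2))  # 0.63
```

A technique with many hits but also many false alarms scores lower than one with fewer hits and few false alarms, which is why validation studies report both components rather than hit rate alone.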