The roles and responsibilities of the public health emergency preparedness (PHEP) and response workforce have changed since the last iteration of competencies developed in 2010. This project aims to identify current competencies (i.e., knowledge, skills, and abilities) for the PHEP workforce, as well as for all public health staff who may contribute to a response.
Methods
Five focus groups with members of the PHEP workforce across the US explored their experiences with workforce needs in preparedness and response activities. Focus group transcripts were thematically analyzed using qualitative methods to identify key competencies needed in the workforce.
Results
The focus groups revealed 7 domains: attitudes and motivations; collaboration; communications; data collection and analysis; preparedness and response; leadership and management; and public health foundations. Equity and social justice was identified as a cross-cutting theme across all domains.
Conclusions
Broad validation of competencies through ongoing engagement with the PHEP practice and academic communities is necessary. Competencies can be used to inform the design of PHEP educational programs and PHEP program development. Implementation of an up-to-date, validated competency model can help the workforce better prepare for and respond to disasters and emergencies.
Rates of childhood mental health problems are increasing in the UK. Early identification of childhood mental health problems is challenging but critical to children’s future psychosocial development. This is particularly important for children with social care contact because earlier identification can facilitate earlier intervention. Clinical prediction tools could improve these early intervention efforts.
Aims
Characterise a novel cohort consisting of children in social care and develop effective machine learning models for prediction of childhood mental health problems.
Method
We used linked, de-identified data from the Secure Anonymised Information Linkage Databank to create a cohort of 26 820 children in Wales, UK, receiving social care services. Integrating health, social care and education data, we developed several machine learning models aimed at predicting childhood mental health problems. We assessed the performance, interpretability and fairness of these models.
Results
Risk factors strongly associated with childhood mental health problems included age, substance misuse and being a looked after child. The best-performing model, a gradient boosting classifier, achieved an area under the receiver operating characteristic curve of 0.75 (95% CI 0.73–0.78). Assessments of algorithmic fairness showed potential biases within these models.
Conclusions
Machine learning performance on this prediction task was promising. Predictive performance in social care settings can be bolstered by linking diverse routinely collected datasets, making available a range of heterogeneous risk factors relating to clinical, social and environmental exposures.
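As a rough illustration of the modelling step described above, the sketch below trains a scikit-learn gradient boosting classifier and bootstraps a 95% confidence interval for the AUROC. The data are simulated placeholders (the SAIL Databank cohort is not publicly available) and the feature names are hypothetical; this is not the authors' actual pipeline.

```python
# Illustrative sketch only: simulated stand-in data, not the SAIL cohort.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features, e.g. age, substance misuse flag, looked-after status.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print(f"AUROC: {roc_auc_score(y_te, scores):.2f}")

# Bootstrap a 95% CI for the AUROC, mirroring the interval reported above.
idx = np.arange(len(y_te))
boot = []
for _ in range(1000):
    s = rng.choice(idx, size=len(idx), replace=True)
    if len(np.unique(y_te[s])) == 2:  # both classes must be present
        boot.append(roc_auc_score(y_te[s], scores[s]))
print(f"95% CI: {np.percentile(boot, 2.5):.2f}-{np.percentile(boot, 97.5):.2f}")
```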
Hearing consists of peripheral components (outer and middle ear, cochlea) and the central auditory system (cochlear nuclei to the auditory cortex). Speech perception relies on peripheral hearing abilities (i.e., pure-tone thresholds), central auditory processing (CAP), and cognitive functioning. Specifically, working memory, executive function, attention, and verbal functioning allow for speech understanding. As a result, CAP measures are also influenced by peripheral hearing sensitivity and cognitive functioning, and assessing CAP deficits can be difficult because of these complex interactions. Prior work has shown that persons living with HIV (PWH) are at higher risk for sensorineural hearing loss compared to persons living without HIV (PWOH), after adjusting for age, sex, and noise exposure. Further, HIV is a risk factor for cognitive impairment, one example being Alzheimer's disease (AD) and its precursor, Mild Cognitive Impairment (MCI), with auditory dysfunction occurring in earlier stages of AD. Therefore, the purpose of this study was to evaluate: 1) peripheral hearing sensitivity and CAP in PWH and PWOH; and 2) the association between cognitive function measures and CAP in PWH and PWOH.
Participants and Methods:
Participants included 59 PWH (39 men and 20 women, mean age=66.7 years [SD=4.4 years]) and 27 PWOH (13 men and 14 women, mean age=71.9 years [SD=7.1 years]). Participants completed a standard neuropsychological battery assessing the domains of learning, recall, executive function, working memory, verbal fluency, processing speed and motor function. Raw scores were transformed to demographically corrected domain T-scores. Cognitive function was normal for 39 (66.1%) PWH and 16 (59.3%) PWOH while 43 (72.9%) PWH and 17 (63.0%) PWOH were determined to have MCI. Participants with dementia were excluded. Participants also completed a hearing assessment, which included pure-tone thresholds (a peripheral hearing measure) and dichotic digits testing (DDT, a CAP measure). Pure-tone air-conduction thresholds were obtained at octave frequencies from 0.25 through 8 kHz, including 3 and 6 kHz. A pure-tone average (PTA) was calculated from the 0.5, 1, 2, and 4 kHz thresholds for each ear. The DDT involves the presentation of digits from 1 to 10 (excluding 7): two different digits are presented to one ear while two other digits are simultaneously presented to the opposite ear. The DDT outcome is percent correct.
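For readers unfamiliar with the four-frequency PTA used above, a minimal sketch of the arithmetic (the audiogram values here are hypothetical):

```python
# Four-frequency pure-tone average: mean of the 0.5, 1, 2 and 4 kHz
# air-conduction thresholds for one ear (values in dB HL).
def pure_tone_average(thresholds_db_hl):
    """thresholds_db_hl maps frequency in kHz to threshold in dB HL."""
    return sum(thresholds_db_hl[f] for f in (0.5, 1, 2, 4)) / 4

# Hypothetical right-ear audiogram across the tested frequencies.
right_ear = {0.25: 10, 0.5: 15, 1: 20, 2: 25, 3: 30, 4: 35, 6: 40, 8: 45}
print(pure_tone_average(right_ear))  # (15 + 20 + 25 + 35) / 4 = 23.75
```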
Results:
PWH had slightly lower (i.e., better) mean PTAs in both ears compared to PWOH, but this was not statistically significant. Conversely, PWH had lower percent correct DDT results compared to PWOH, but this difference was also not statistically significant. Participants with impairment in verbal fluency, executive functioning, and working memory had significantly worse DDT results by approximately 10%, but only for right ear data.
Conclusions:
PWH in our sample had better hearing than PWOH, which can be explained by PWH having a lower mean age. PWH had poorer DDT results, however, indicative of CAP deficits rather than peripheral hearing problems. Poor right ear DDT was associated with impairments specifically in frontal-based cognitive processes with an executive component.
To determine the reliability of teleneuropsychological (TNP) assessments compared to in-person assessments (IPA) in people with HIV (PWH) and without HIV (HIV−).
Methods:
Participants included 80 PWH (Mage = 58.7, SDage = 11.0) and 23 HIV− (Mage = 61.9, SDage = 16.7). Participants completed two comprehensive in-person neuropsychological assessments followed by one TNP assessment during the COVID-19 pandemic (March–December 2020). The neuropsychological tests included: Hopkins Verbal Learning Test-Revised (HVLT-R Total and Delayed Recall), Controlled Oral Word Association Test (COWAT; FAS-English or PMR-Spanish), Animal Fluency, Action (Verb) Fluency, Wechsler Adult Intelligence Scale 3rd Edition (WAIS-III) Symbol Search and Letter Number Sequencing, Stroop Color and Word Test, Paced Auditory Serial Addition Test (Channel 1), and Boston Naming Test. Total raw scores and sub-scores were used in analyses. In the total sample and by HIV status, test-retest reliability and performance-level differences were evaluated between the two consecutive IPAs (IPA1 and IPA2), and between the mean in-person scores (IPA-M) and the TNP assessment.
Results:
There were statistically significant test-retest correlations between IPA1 and IPA2 (r or ρ = .603–.883, ps < .001), and between IPA-M and TNP (r or ρ = .622–.958, ps < .001). In the total sample, significantly lower test-retest scores were found between IPA-M and TNP on the COWAT (PMR), Stroop Color and Word Test, WAIS-III Letter Number Sequencing, and HVLT-R Total Recall (ps < .05). Results were similar in PWH only.
Conclusions:
This study demonstrates the reliability of TNP assessments in PWH and HIV− participants. TNP assessments are a promising way to improve access to traditional neuropsychological services and to maintain ongoing clinical research studies during the COVID-19 pandemic.
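The test-retest analysis described above can be sketched as follows; the scores are simulated and SciPy is assumed. In practice the choice between Pearson r and Spearman ρ would depend on the distribution of each test's scores.

```python
# Sketch of test-retest reliability and performance-level comparisons
# on simulated scores (not the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ipa1 = rng.normal(50, 10, size=80)        # first in-person assessment
ipa2 = ipa1 + rng.normal(0, 5, size=80)   # second in-person assessment
ipa_m = (ipa1 + ipa2) / 2                 # mean in-person score (IPA-M)
tnp = ipa_m + rng.normal(0, 6, size=80)   # tele-assessment score

r, p = stats.pearsonr(ipa1, ipa2)
print(f"IPA1 vs IPA2: r = {r:.3f}, p = {p:.3g}")
rho, p = stats.spearmanr(ipa_m, tnp)
print(f"IPA-M vs TNP: rho = {rho:.3f}, p = {p:.3g}")

# Paired test for performance-level differences between IPA-M and TNP.
t, p = stats.ttest_rel(ipa_m, tnp)
print(f"Paired t-test: t = {t:.3f}, p = {p:.3g}")
```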
This comprehensive collection examines a broad spectrum of Islamic governance during colonial and postcolonial eras. The book pays special attention to the ongoing battles over the codification of Islamic education, religious authority, law and practice while outlining the similarities and differences in British, French and Portuguese colonial rule in Islamic regions. Using a shared conceptual framework, the contributors to this volume analyze the nature of regulation in different historical periods and geographical areas. From Africa and the Middle East to Asia and Europe, 'Colonial and Post-Colonial Governance of Islam' opens up new vistas for research in Islamic studies.
To develop a staff training intervention for agitation in people with severe dementia approaching end of life and residing in nursing homes (NHs), and to test feasibility, acceptability, and whether a trial is warranted.
Design:
Feasibility study with pre- and post-intervention data collection, qualitative interviews, and focus groups.
Setting:
Three NHs in South East England with dementia units, diverse in terms of size, ownership status, and location.
Participants:
Residents with a dementia diagnosis or scoring ≥2 on the Noticeable Problems Checklist and rated as “severe” on the Clinical Dementia Rating Scale; family carers; and staff (healthcare assistants and nurses).
Intervention:
Manualized training focusing on agitation in severe dementia, delivered by nonclinical psychology graduates and underpinned by a palliative care framework.
Measurements:
Main outcomes were feasibility of recruitment, data collection, follow-up, and intervention acceptability. We collected resident, family carer, and staff demographics. Staff provided data on residents’ agitation, pain, quality of life, and service receipt. Staff reported their sense of competence in dementia care. Family carers reported on satisfaction with end-of-life care. In qualitative interviews, we explored staff and family carers’ views on the intervention.
Results:
The target three NHs participated: 28 (49%) of eligible residents, 53 (74%) of eligible staff, and 11 (85%) of eligible family carers consented. Eighty-four percent of staff attended ≥3 sessions, and we achieved 93% follow-up. We were able to complete quantitative interviews. Staff and family carers reported that the intervention and its delivery were acceptable and helpful.
Conclusions:
The intervention was feasible and acceptable, indicating that a larger trial of effectiveness may be warranted.
Excess body fat is associated with the production of pro-inflammatory molecules from dysfunctional adipose tissue, resulting in systemic inflammation. Inflammation stimulates expression of the iron regulatory hormone hepcidin, resulting in elevated serum ferritin and iron overload in metabolic tissues. Hepcidin-driven iron maldistribution may be implicated in the development of metabolic diseases such as Type 2 diabetes and CVD. The aim of this study was to investigate the effect of body fat and the associated inflammation on markers of iron homeostasis.
Analyses were based on data from the cross-sectional National Adult Nutrition Survey (2008–2010) (www.iuna.net). Percentage body fat (BF%) of participants (n = 1211) was measured by a Tanita BC420MA device. Participants were classified as healthy, overweight or obese based on age- and gender-specific BF% ranges. Serum ferritin and serum hepcidin were measured using immunoturbidimetric immunoassays. ANCOVA with Bonferroni post hoc tests (p < 0.05) was used to compare anthropometric parameters, biochemical markers of iron status and inflammation, and nutrient intakes between BF% groups. Predictors of serum hepcidin and serum ferritin were determined using linear regression analysis.
In the population, 42% were classified as healthy, 33% as overfat and 25% as obese. Serum hepcidin was significantly elevated in obese participants (8.42 ± 4.2 ng/ml) compared to their healthy counterparts (6.49 ± 3.9 ng/ml) (p < 0.001). Significantly higher serum ferritin was observed in obese (223 ± 170 ng/ml) and overfat males (166 ± 120 ng/ml) compared to healthy males (135 ± 91 ng/ml) (p < 0.001). A significant percentage of overfat (20%) and obese (32%) participants were at severe risk of iron overload compared to healthy participants (8%) (p < 0.001). No significant differences in dietary iron intakes were observed between BF% groups. Linear regression analysis indicated that BF% was a significant (p < 0.001) predictor of hepcidin in males (β = 0.327) and females (β = 0.226). IL-6 (β = 0.317, p < 0.001) and TNFα (β = 0.229, p < 0.001) were the strongest inflammatory predictors of hepcidin in females only. In males, leptin was a positive predictor (β = 0.159, p = 0.003) of hepcidin, while adiponectin displayed a negative predictive relationship (β = -0.145, p = 0.001).
Our results indicate that excessive adiposity is associated with elevated serum ferritin and hepcidin independent of dietary intake. Cytokines are a potential driver of hepcidin in females, with adipose-derived hormones seeming to have the greater effect in males. These results may help to elucidate the relationship between obesity and dysregulated iron metabolism. Further research is required to investigate the metabolic effects of hepcidin-induced iron overload in those with excess body fat.
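Standardized betas like those reported above are conventionally obtained by z-scoring the outcome and predictors before fitting ordinary least squares. A minimal sketch on simulated data follows; the variable names mirror the abstract but the values are fake, and statsmodels is assumed rather than the package used by the authors.

```python
# Simulated illustration of standardized regression coefficients.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "bf_pct": rng.normal(30, 8, 500),    # body fat %
    "il6":    rng.normal(2, 1, 500),     # IL-6
    "leptin": rng.normal(10, 4, 500),    # leptin
})
df["hepcidin"] = 0.3 * df["bf_pct"] + 0.2 * df["il6"] + rng.normal(0, 5, 500)

z = (df - df.mean()) / df.std()          # z-score outcome and predictors
X = sm.add_constant(z[["bf_pct", "il6", "leptin"]])
fit = sm.OLS(z["hepcidin"], X).fit()
print(fit.params.round(3))               # standardized betas
print(fit.pvalues.round(4))
```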
Current dietary recommendations encourage increased fibre and reduced sugar consumption. In the UK, specific targets and benchmarks have been established for the sugar content of some foods but not for fibre. National Food Consumption Surveys provide comprehensive information on all foods consumed by representative population samples. The Irish national food surveys, completed by the Irish Universities Nutrition Alliance (IUNA), capture dietary data at brand level, with all details as gathered on pack entered into a discrete but inter-linked database, the Irish National Food Ingredient Database (INFID). The aim of this study was to profile the carbohydrate quality of a convenience sub-sample of packaged foods as eaten by Irish children during the National Children's Food Survey II (2017/2018), as entered into INFID.
Materials and Methods:
All on-pack details from 385 available foods in the categories ‘white breads and rolls’; ‘brown breads and rolls’; ‘other breads and scones’; ‘ready to eat breakfast cereals (RTEBC)’; ‘biscuits’; and ‘cakes, buns and pastries’ were entered into INFID and quality control was completed. The carbohydrate profile of the products was assessed with respect to fibre labelling criteria and UK sugar guidelines and targets. SPSS Version 25 was used for all analyses.
Results:
Although 56% (n = 210) of all products entered were eligible to make a ‘source of’ or ‘high’ fibre claim, only 20% (n = 78) made such a claim. Of these, 46% stated ‘high fibre’ and 32% ‘source of fibre’, predominantly in the ‘brown breads and rolls’ and ‘RTEBC’ groups. When compared to UK Department of Health guidance for ‘low’, ‘medium’ and ‘high’ sugar, 65% of all products examined (n = 250) were either ‘low’ or ‘medium’ sugar. Comparison of median sugar contents with Public Health England sugar reformulation targets revealed different responses in each category, with all categories other than foods deemed ‘morning goods’ yet to meet the 2020 target of a 20% reduction in sugar content.
Discussion:
This small pilot study of a convenience sample of foods suggests that, for some of the limited number of foods examined, challenges remain in reducing sugar and increasing fibre contents. Strategies such as reformulation, changes in portion size, flexibility in labelling and/or a shift in sales portfolios could be considered, but only alongside technological and safety considerations. Further research to broaden this analysis and to link nutrient levels as listed on pack with actual consumption patterns could help ensure all recent initiatives, including reformulation, are recognised.
Being physically active is associated with fundamental health benefits and assists with the maintenance of normal weight in children. The current World Health Organization recommendation is for children to accumulate 60 minutes of physical activity (PA) per day to obtain such benefits. Conversely, time spent in sedentary behaviours, including watching screens (ST), is positively associated with the risk of overweight and obesity in young people. The aim of this research was to estimate the PA levels and ST usage of Irish children and to examine the relationship with body fat.
This analysis was based on data collected from a nationally representative sample of Irish children aged 5–12 years (n = 591, 50% female) from the National Children's Food Consumption Survey II (www.iuna.net). The Child/Youth Physical Activity Questionnaires (C-PAQ/Y-PAQ) were used to measure PA and ST in 5–8 and 9–12-year-olds, respectively. Both questionnaires were self-administered recall instruments that assessed the frequency and duration of activities participated in over the previous 7-day period. The MET minutes (metabolic cost of the activity multiplied by the duration in minutes) of the PAs were calculated per child. Percentage body fat (%BF) was measured by a Tanita BC420MA device and participants were classified into categories based on their %BF, age and gender. Independent t-tests and ANOVA (post hoc Dunnett T3) were used to assess differences between genders and %BF categories.
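The MET-minutes calculation defined above is a simple weighted sum; a sketch with illustrative MET values (the actual compendium values used for C-PAQ/Y-PAQ scoring may differ):

```python
# MET minutes = metabolic cost (MET value) of each reported activity
# multiplied by its duration in minutes, summed per child.
activities = [
    ("walking",  3.5, 30),   # (activity, MET value, minutes)
    ("football", 7.0, 45),
    ("cycling",  6.0, 20),
]
met_minutes = sum(met * minutes for _, met, minutes in activities)
print(met_minutes)  # 3.5*30 + 7.0*45 + 6.0*20 = 540.0
```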
Overall, children spent 93 mins/d being physically active, with 69% meeting the > 1 hr recommendation. There was a significant difference in time spent undertaking PA between boys (99 mins/d) and girls (88 mins/d) (p = 0.020). Children spent 107 mins/d watching screens, with 68% meeting the < 2 hr guidance. Girls spent significantly less time watching screens (89 mins/d) than boys (124 mins/d) (p ≤ 0.001). Children with a normal %BF accumulated more PA MET mins/d compared to those classified as obese, a difference that was significant in the total population (p = 0.007) and for boys (p ≤ 0.001), but not for girls (p = 0.929).
This preliminary analysis indicates that a high proportion of Irish children are meeting the PA and ST recommendations, with boys being more physically active and spending more time watching screens compared to girls. However, results should be interpreted with caution as PA and ST usage were self-reported by participants. The association between PA MET minutes and %BF suggests that advice to encourage PA participation to combat excess adiposity in Irish children is justified. Future work should examine the role of other potential determinants of obesity in this cohort.
The Food Safety Authority of Ireland (FSAI) have set target maximum daily salt intakes for children (4–6 y: 3 g, 7–10 y: 5 g, 11–14 y: 6 g) while the European Food Safety Authority (EFSA) have set Adequate Intakes (AI) for potassium of 1100 mg/d, 1800 mg/d and 2700 mg/d for children of the same respective age groups. An individual's sodium to potassium (Na:K) intake ratio is an important predictor of hypertension and the World Health Organization (WHO) recommend a Na:K intake ratio of ≤ 1.0 mmol/mmol for both adults and children. Although the morbidities associated with hypertension may not be seen until adulthood, blood pressure in childhood has a significant association with blood pressure in adulthood. Therefore, estimation of Na:K intake ratios (best measured by urinary excretion) in children may predict their susceptibility to hypertension-related diseases in later life. The aim of this study was to estimate the sodium and potassium intake and mean molar Na:K intake ratio of Irish children and to assess compliance with dietary guidance.
Morning spot urine samples were collected from 572 children aged 5–12 years (95% of the total sample) as part of the nationally representative Irish National Children's Food Survey II (2017–2018) (NCFSII; www.iuna.net). Samples were transported, processed and stored using best practice procedures. Urinary excretion of sodium and potassium was measured using a Randox RX Daytona analyser and corrected using gender- and age-specific 24-hour urine volume estimates derived from Australian children. SPSS Version 25 was used for all analyses.
Mean 24-hour urinary sodium excretion was 2018 mg/d, equivalent to an average salt excretion of 5.0 g/d, exceeding the FSAI maximum target intake for all age groups except 11–12 year olds. Mean 24-hour urinary potassium excretion was 1411 mg/d, with mean intakes below the EFSA AI for all age groups with the exception of 5–6 year olds. The mean molar Na:K ratio of Irish children was 2.8 for boys and 3.4 for girls. Only 5% of Irish children met the WHO recommendation of a Na:K ratio ≤ 1.0 mmol/mmol.
The high intakes of sodium and low intakes of potassium reported in this study result in low compliance with the WHO recommendation of a Na:K ratio ≤ 1.0 mmol/mmol. This may lead to a higher risk of hypertension and related morbidities in later life. Based on these findings, dietary interventions to combat hypertension-related diseases (such as lowering sodium and increasing potassium intakes) should be implemented from childhood.
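The conversions behind these figures are straightforward: salt is estimated as sodium × 2.5, and the molar ratio divides each excretion by its molar mass (≈23.0 g/mol for sodium, ≈39.1 g/mol for potassium). A sketch using the pooled means reported above; note the pooled-mean ratio (≈2.4) differs from the gender-specific means of 2.8 and 3.4.

```python
# Convert mean 24-h urinary excretion (mg/d) to a salt equivalent and
# a molar Na:K ratio.
NA_MOLAR_MASS = 22.99  # g/mol
K_MOLAR_MASS = 39.10   # g/mol

sodium_mg, potassium_mg = 2018, 1411   # pooled means from the abstract
na_mmol = sodium_mg / NA_MOLAR_MASS    # ~87.8 mmol
k_mmol = potassium_mg / K_MOLAR_MASS   # ~36.1 mmol
print(f"Salt equivalent: {sodium_mg * 2.5 / 1000:.1f} g/d")  # ~5.0 g/d
print(f"Na:K molar ratio: {na_mmol / k_mmol:.1f}")           # ~2.4
```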
Under Regulation (EC) No 1924/2006, the use of nutrition and health claims is permitted; however, foods that are high in fat, sugars and salt are advised not to carry such claims, as foods promoted with claims may influence consumer food choice. The use of nutrient profiles has been proposed as a means of preventing such claims from masking the overall nutritional status of a product. Ready to eat breakfast cereals (RTEBC) often display nutrition claims whilst also contributing significantly to total sugar and energy intake. The aim of this study was to profile a variety of RTEBC and compare nutrient composition and claim information between nutrient profile categories.
The Irish National Food Ingredient database (INFID) is a record of brand specific information from food labels collected during the Irish national food surveys. A convenience sub-sample of RTEBC as eaten by Irish children during the National Children's Food Survey 2 (2017/2018) were selected (n = 102). Nutrient profile (NP) scores were calculated using the UK Nutrient Profiling Model (FSA). NP scores were calculated based on a set of negative macronutrient indicators (energy, saturated fat, total sugars and sodium) minus positive indicators (protein, fibre, “fruit, vegetables and nuts”) present per 100 g. Foods scoring four points or more were classified as “less healthy”.
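As a rough illustration of the scoring arithmetic, the sketch below implements a simplified version of the FSA model using commonly cited per-100 g band values; the official guidance contains additional rules and exact band boundaries, so treat this as an approximation rather than the scoring used in the study.

```python
# Simplified UK (FSA) nutrient profile score, per 100 g of food.
def points(value, step, cap):
    """Points = number of 'step'-sized thresholds exceeded, capped."""
    return min(cap, int(value / step))

def np_score(energy_kj, sat_fat_g, sugars_g, sodium_mg,
             fvn_points, fibre_g, protein_g):
    # 'A' (negative) points: energy, saturated fat, total sugars, sodium.
    a = (points(energy_kj, 335, 10) + points(sat_fat_g, 1, 10)
         + points(sugars_g, 4.5, 10) + points(sodium_mg, 90, 10))
    # 'C' (positive) points: fruit/veg/nuts, fibre, protein.
    c = fvn_points + points(fibre_g, 0.9, 5) + points(protein_g, 1.6, 5)
    if a >= 11 and fvn_points < 5:   # protein points withheld in this case
        c -= points(protein_g, 1.6, 5)
    return a - c

# Hypothetical sugary breakfast cereal, per 100 g.
score = np_score(energy_kj=1600, sat_fat_g=1.5, sugars_g=24,
                 sodium_mg=250, fvn_points=0, fibre_g=5.7, protein_g=8)
print(score, "less healthy" if score >= 4 else "healthy")  # 7 less healthy
```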
More than half of RTEBC were classed “less healthy” (53%), with a median NP score of 8.0, while “healthy” RTEBC scored significantly lower at -0.0 (p < 0.001). “Healthy” RTEBC had a median sugar content of 13.4 g/100 g compared to 24 g/100 g in the “less healthy” group (p < 0.001). “Healthy” RTEBC had a higher fibre content of 8.8 g/100 g compared to 5.72 g/100 g in the “less healthy” group (p = 0.001), with 35% of “healthy” and 28% of “less healthy” RTEBC making a substantiated “high in fibre” claim. Micronutrient contents of all RTEBC were similar, with only iron significantly higher in “healthy” (13.3 mg/100 g) compared to “less healthy” (9.5 mg/100 g) RTEBC (p = 0.02). The prevalence of substantiated micronutrient-related claims was the same between “healthy” and “less healthy” RTEBC.
“Healthy” and “less healthy” RTEBC display similar micronutrient profiles, with most of the nutrition claims on both pertaining to micronutrient and fibre content, potentially overshadowing the macronutrient contribution of the cereals. This analysis shows the ability of nutrient profiling to distinguish products by macronutrient profile; however, it also identifies the complexity of application with respect to micronutrient content. Further research is required to investigate the contribution of the profiled RTEBC to total nutrient intakes.
Objectives: Studies of neurocognitively elite older adults, termed SuperAgers, have identified clinical predictors and neurobiological indicators of resilience against age-related neurocognitive decline. Despite rising rates of older persons living with HIV (PLWH), SuperAging (SA) in PLWH remains undefined. We aimed to establish neuropsychological criteria for SA in PLWH and examined clinically relevant correlates of SA. Methods: 734 PLWH and 123 HIV-uninfected participants between 50 and 64 years of age underwent neuropsychological and neuromedical evaluations. SA was defined as demographically corrected (i.e., sex, race/ethnicity, education) global neurocognitive performance within normal range for 25-year-olds. Remaining participants were labeled cognitively normal (CN) or impaired (CI) based on actual age. Chi-square and analysis of variance tests examined HIV group differences on neurocognitive status and demographics. Within PLWH, neurocognitive status differences were tested on HIV disease characteristics, medical comorbidities, and everyday functioning. Multinomial logistic regression explored independent predictors of neurocognitive status. Results: Neurocognitive status rates and demographic characteristics differed between PLWH (SA=17%; CN=38%; CI=45%) and HIV-uninfected participants (SA=35%; CN=55%; CI=11%). In PLWH, neurocognitive groups were comparable on demographic and HIV disease characteristics. Younger age, higher verbal IQ, absence of diabetes, fewer depressive symptoms, and lifetime cannabis use disorder increased likelihood of SA. SA participants reported greater independence in everyday functioning, employment, and health-related quality of life than non-SA participants. Conclusions: Despite the combined neurological risk of aging and HIV, youthful neurocognitive performance is possible for older PLWH. SA relates to improved real-world functioning and may be better explained by cognitive reserve and maintenance of cardiometabolic and mental health than by HIV disease severity. Future research investigating biomarker and lifestyle (e.g., physical activity) correlates of SA may help identify modifiable neuroprotective factors against HIV-related neurobiological aging. (JINS, 2019, 25, 507–519)
To better understand the barriers and facilitators that contribute to antibiotic overuse in long-term care, and to use this information to inform an evidence- and theory-informed program.
Methods
Information on barriers and facilitators associated with the assessment and management of urinary tract infections was identified from a mixed-methods survey and from focus groups with stakeholders working in long-term care. Each barrier or facilitator was mapped to corresponding determinants of behavior change, as described by the theoretical domains framework (TDF). The Rx for Change database was used to identify strategies to address the key determinants of behavior change.
Results
In total, 19 distinct barriers and facilitators were mapped to 8 domains from the TDF: knowledge, skills, environmental context and resources, professional role or identity, beliefs about consequences, social influences, emotions, and reinforcements. The assessment of barriers and facilitators informed the need for a multifaceted approach with the inclusion of strategies (1) to establish buy-in for the changes; (2) to align organizational policies and procedures; (3) to provide education and ongoing coaching support to staff; (4) to provide information and education to residents and families; (5) to establish process surveillance with feedback to staff; and (6) to deliver reminders.
Conclusions
The use of a stepped approach was valuable to ensure that locally relevant barriers and facilitators to practice change were addressed in the development of a regional program to help long-term care facilities minimize antibiotic prescribing for asymptomatic bacteriuria. This stepped approach provides considerable opportunity to advance the design and impact of antimicrobial stewardship programs.
The Florida Department of Health in Miami-Dade County (DOH-Miami-Dade) investigated 106 reported carbon monoxide (CO) exposures over a 9-day timeframe after Hurricane Irma. This report evaluates risk factors for CO poisoning and the importance of heightened surveillance following natural disasters.
Methods
Data on CO poisoning cases from September 9 to 18, 2017 were extracted from Merlin, the Florida Department of Health Surveillance System. Medical records were obtained and follow-up interviews were conducted to collect data on the confirmed CO poisoning cases. Data were analyzed using SAS v9.4.
Results
Ninety-one of the 106 people exposed to CO met the case definition for CO poisoning: 64 confirmed, 7 probable, and 20 suspect cases. Eighty-eight percent of the affected individuals were evaluated in emergency departments and 11.7% received hyperbaric oxygen treatment. The most frequently reported symptoms included headache (53.3%), dizziness (50.7%), and nausea (46.7%). Three patients died as a result of their exposure to CO.
Conclusions
After Hurricane Irma, the DOH-Miami-Dade investigated numerous cases of CO exposure. By understanding who is most likely to be affected by CO and the impact of generator placement on people’s health, education efforts can be tailored to the populations most at risk, and further CO exposures and related mortality following natural disasters can be reduced. (Disaster Med Public Health Preparedness. 2019;13:94–96)
Patients who experience a Transient Ischaemic Attack (TIA) should be assessed and treated in a specialist clinic to reduce the risk of further TIA or stroke, but referrals are often delayed. We aimed to identify published studies describing pathways for emergency assessment and referral of patients with suspected TIA at first medical contact: primary care, ambulance services, or the emergency department.
METHODS:
We conducted a scoping literature review. We searched four databases (PubMed, CINAHL, Web of Science, Scopus). We screened studies for eligibility. We extracted and analysed data to describe setting, assessment and referral processes reported in primary research on referral of suspected TIA patients directly to specialist outpatient services.
RESULTS:
We identified eight studies in nine papers from five countries: 1/9 was a randomized trial, 6/9 were before-and-after designs, and 2/9 were descriptive accounts. Five pathways were used by family doctors and three by Emergency Department (ED) physicians; none were used by paramedics. Clinicians identified TIA patients using a checklist incorporating the ABCD2 tool to describe risk of further stroke, an online decision support tool, or clinical judgement. They referred patients to a specialist clinic, either directly or via a telephone helpline. Anti-platelet medication was often given, usually aspirin unless contraindicated. Some patients underwent neurological and blood tests before referral and discharge. Five studies reported reduced incidence of stroke at 90 days, from a predicted rate of 6–10 percent to an actual rate of 1.2–2.1 percent. Between 44 percent and 83 percent of suspected TIA cases in these studies were directly referred to stroke clinics through the pathways.
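For reference, the ABCD2 tool mentioned above is a simple additive score. A sketch using the standard published weights (age ≥60 = 1; blood pressure ≥140/90 = 1; unilateral weakness = 2 or speech disturbance without weakness = 1; duration ≥60 min = 2 or 10–59 min = 1; diabetes = 1):

```python
# ABCD2 score for stroke risk after TIA (0-3 low, 4-5 moderate, 6-7 high).
def abcd2(age, sbp, dbp, weakness, speech_only, duration_min, diabetes):
    score = 1 if age >= 60 else 0
    score += 1 if (sbp >= 140 or dbp >= 90) else 0
    score += 2 if weakness else (1 if speech_only else 0)
    score += 2 if duration_min >= 60 else (1 if duration_min >= 10 else 0)
    score += 1 if diabetes else 0
    return score

# Hypothetical patient: 72 years old, raised BP, speech disturbance only,
# symptoms lasting 20 minutes, no diabetes.
print(abcd2(age=72, sbp=150, dbp=85, weakness=False,
            speech_only=True, duration_min=20, diabetes=False))  # 4
```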
CONCLUSIONS:
Research literature has focused on assessment and referral by family doctors and ED physicians to reduce hospitalization of TIA patients. No pathways for paramedic use were reported. Since many suspected TIA patients present to ambulance services, effective pre-hospital assessment and referral pathways are needed. We will use review results to develop a paramedic referral pathway to test in a feasibility trial.
Transient Ischaemic Attack (TIA) is a neurologic event with symptom resolution within 24 hours. Early specialist assessment of TIA reduces risk of stroke and death. National United Kingdom (UK) guidelines recommend patients with TIA are seen in specialist clinics within 24 hours (high risk) or seven days (low risk).
We aimed to develop a complex intervention for patients with low risk TIA presenting to the emergency ambulance service. The intervention is being tested in the TIER feasibility trial, in line with Medical Research Council (MRC) guidance on staged development and evaluation of complex interventions.
METHODS:
We conducted three interrelated activities to produce the TIER intervention:
• Survey of UK Ambulance Services (n = 13) to gather information about TIA pathways already in use
• Scoping review of literature describing prehospital care of patients with TIA
• Synthesis of data and definition of intervention by specialist panel of: paramedics; Emergency Department (ED) and stroke consultants; service users; ambulance service managers.
RESULTS:
The panel used results to define the TIER intervention, to include:
1. Protocol for paramedics to assess patients presenting with TIA and identify and refer low risk patients for prompt (< 7 days) specialist review at TIA clinic
2. Patient Group Directive and information pack to allow paramedic administration of aspirin to patients left at home with referral to TIA clinic
3. Referral process via ambulance control room
4. Training package for paramedics
5. Agreement with TIA clinic service provider including rapid review of referred patients
CONCLUSIONS:
We followed MRC guidance to develop a clinical intervention for assessment and referral of low risk TIA patients attended by emergency ambulance paramedics. We are testing the feasibility of implementing and evaluating this intervention in the TIER feasibility trial, which may lead to a fully powered multicentre randomized controlled trial (RCT) if predefined progression criteria are met.
Objectives: Human immunodeficiency virus (HIV) disproportionately affects Hispanics/Latinos in the United States, yet little is known about neurocognitive impairment (NCI) in this group. We compared the rates of NCI in large well-characterized samples of HIV-infected (HIV+) Latinos and (non-Latino) Whites, and examined HIV-associated NCI among subgroups of Latinos. Methods: Participants included English-speaking HIV+ adults assessed at six U.S. medical centers (194 Latinos, 600 Whites). For the overall group, age: M=42.65 years, SD=8.93; 86% male; education: M=13.17, SD=2.73; 54% had acquired immunodeficiency syndrome. NCI was assessed with a comprehensive test battery with normative corrections for age, education and gender. Covariates examined included HIV-disease characteristics, comorbidities, and genetic ancestry. Results: Compared with Whites, Latinos had higher rates of global NCI (42% vs. 54%), and domain NCI in executive function, learning, recall, working memory, and processing speed. Latinos also fared worse than Whites on current and historical HIV-disease characteristics, and nadir CD4 partially mediated ethnic differences in NCI. Yet, Latinos continued to have more global NCI [odds ratio (OR)=1.59; 95% confidence interval (CI)=1.13–2.23; p<.01] after adjusting for significant covariates. Higher rates of global NCI were observed with Puerto Rican (n=60; 71%) versus Mexican (n=79; 44%) origin/descent; this disparity persisted in models adjusting for significant covariates (OR=2.40; CI=1.11–5.29; p=.03). Conclusions: HIV+ Latinos, especially those of Puerto Rican (vs. Mexican) origin/descent, had increased rates of NCI compared with Whites. Differences in rates of NCI were not completely explained by worse HIV-disease characteristics, neurocognitive comorbidities, or genetic ancestry. Future studies should explore culturally relevant psychosocial, biomedical, and genetic factors that might explain these disparities and inform the development of targeted interventions. (JINS, 2018, 24, 163–175)
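The odds ratios and confidence intervals reported above follow from exponentiating logistic regression coefficients (OR = e^β, with CI endpoints e^(β ± 1.96·SE)). A minimal sketch on simulated data, with a binary model standing in for the study's multinomial regression and statsmodels assumed:

```python
# Simulated illustration of ORs and 95% CIs from a logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=(500, 2))   # hypothetical covariates
y = (x[:, 0] + rng.logistic(size=500) > 0).astype(int)

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(np.exp(fit.params).round(2))      # odds ratios
print(np.exp(fit.conf_int()).round(2))  # 95% CIs on the OR scale
```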
There are few reported efforts to define universal disaster response performance measures. Careful examination of responses to past disasters can inform the development of such measures. As a first step toward this goal, we conducted a literature review to identify key factors in responses to 3 recent events with significant loss of human life and economic impact: the 2003 Bam, Iran, earthquake; the 2004 Indian Ocean tsunami; and the 2010 Haiti earthquake. Using the PubMed (National Library of Medicine, Bethesda, MD) database, we identified 710 articles and retained 124 after applying inclusion and exclusion criteria. Seventy-two articles pertained to the Haiti earthquake, 38 to the Indian Ocean tsunami, and 14 to the Bam earthquake. On the basis of this review, we developed an organizational framework for disaster response performance measurement with 5 key disaster response categories: (1) personnel, (2) supplies and equipment, (3) transportation, (4) timeliness and efficiency, and (5) interagency cooperation. Under each of these, and again informed by the literature, we identified subcategories and specific items that could be developed into standardized performance measures. The validity and comprehensiveness of these measures can be tested by applying them to other recent and future disaster responses, after which standardized performance measures can be developed through a consensus process. (Disaster Med Public Health Preparedness. 2017;11:505–509)