According to International Union for Conservation of Nature (IUCN) guidelines, all species must be assessed against all criteria during the Red Listing process. For organismal groups that are diverse and understudied, assessors face considerable challenges in assembling evidence because definitions of key terms used in the guidelines are difficult to apply. Challenges also arise from uncertainty in population sizes (Criteria A, C, D) and distributions (Criteria A2/3/4c, B). Lichens, which are often small, difficult to identify, or overlooked during biodiversity inventories, are one such group for which specific difficulties arise in applying Red List criteria. Here, we offer approaches and examples that address challenges in completing Red List assessments for lichens in a rapidly changing arena of data availability and analysis strategies. While assessors still contend with far-from-perfect information about individual species, we propose practical solutions for completing robust assessments given the currently available knowledge of individual lichen life histories.
Blood-based biomarkers represent a scalable and accessible approach to the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for detecting tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. The diagnostic accuracy of plasma GFAP was compared with that of plasma p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
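To make the analytic pipeline concrete, here is a minimal sketch of the logistic regression and ROC steps in Python. It assumes a tidy data file and illustrative column names (gfap_z, impaired, apoe_e4, and so on); these are hypothetical stand-ins, not the study's actual variables or data.

```python
# Hedged sketch of the analysis described above. All file and column
# names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("plasma_biomarkers.csv")  # hypothetical input file

# Binary logistic regression: cognitively impaired (1) vs. unimpaired (0),
# with z-scored GFAP and the covariates named in the abstract.
model = smf.logit(
    "impaired ~ gfap_z + age + sex + race + education + apoe_e4", data=df
).fit()
print(model.summary())  # odds ratio for GFAP = exp(coefficient)

# AUC computed from the model's predicted probabilities, mirroring the
# ROC analyses used to compare GFAP against p-tau181 and NfL.
auc = roc_auc_score(df["impaired"], model.predict(df))
print(f"GFAP model AUC = {auc:.2f}")
```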
Results:
The mean (SD) age of the sample was 74.34 (7.54) years, 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses comprising GFAP and the above covariates showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as with higher CDR Sum of Boxes scores (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP detected cognitive impairment with accuracy similar to that of p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
In this controlled study, we found that exposure to ultraviolet-C (UV-C) radiation arrested the growth of selected pathogenic enteric and nonfermenting Gram-negative rods. Further studies are needed to confirm the clinical efficacy of UV-C terminal disinfection and to determine optimal implementation strategies.
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for Alzheimer’s disease (AD) detection, management, and the study of disease mechanisms. Given their novelty, these plasma biomarkers must be validated against postmortem neuropathological outcomes. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria, and the NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association of GFAP with autopsy-confirmed AD status and with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
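As an illustration of the ordinal step, the sketch below regresses Braak stage on log-transformed GFAP with the stated covariates, using statsmodels' ordinal (proportional odds) model. File and column names are hypothetical, and covariates are assumed to be numerically coded.

```python
# Hedged sketch of the ordinal logistic regression described above.
# All file and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("gfap_autopsy.csv")  # hypothetical input file
df["log_gfap"] = np.log(df["gfap"])   # GFAP measures were log-transformed

# Proportional-odds model: Braak stage (ordinal) on log GFAP plus the
# covariates named in the abstract (assumed numerically coded).
exog = df[["log_gfap", "sex", "age_at_death", "years_draw_to_death", "apoe_e4"]]
result = OrderedModel(df["braak_stage"], exog, distr="logit").fit(method="bfgs")
print(result.summary())
print("OR per unit log GFAP:", np.exp(result.params["log_gfap"]))
```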
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females; 41 (91.1%) participants were White, and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed that plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75), and discrimination strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not to CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed in any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for the study of disease mechanisms.
The Eighth World Congress of Pediatric Cardiology and Cardiac Surgery (WCPCCS) will be held in Washington DC, USA, from Saturday, 26 August 2023 to Friday, 1 September 2023, inclusive. The Eighth World Congress of Pediatric Cardiology and Cardiac Surgery will be the largest and most comprehensive scientific meeting dedicated to paediatric and congenital cardiac care ever held. At the time of writing, the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery has 5,037 registered attendees (and rising) from 117 countries, a truly diverse and international faculty of over 925 individuals from 89 countries, over 2,000 individual abstracts and poster presenters from 101 countries, and a Best Abstract Competition featuring 153 oral abstracts from 34 countries. For information about the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery, please visit www.WCPCCS2023.org. The purpose of this manuscript is to review the activities related to global health and advocacy that will occur at the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery.
Acknowledging the need for urgent change, we wanted to take the opportunity to bring a common voice to the global community and issue the Washington DC WCPCCS Call to Action on Addressing the Global Burden of Pediatric and Congenital Heart Diseases. A copy of this Washington DC WCPCCS Call to Action is provided in the Appendix of this manuscript. This Washington DC WCPCCS Call to Action is an initiative aimed at increasing awareness of the global burden, promoting the development of sustainable care systems, and improving access to high-quality and equitable healthcare for children with heart disease as well as adults with congenital heart disease worldwide.
Recent large-scale disasters have exposed the interconnected nature of modern societies, exacerbating the risk of cascading impacts. Examining elements of community health status, such as the social determinants of health and perceived health status, and how they relate to disaster resilience can illuminate alternative actions for cost-effective disaster prevention and management. Moreover, agricultural communities are essential to food security and provide a working example of the importance of mitigation in the escalation of crises. To that aim, this research examines perceptions of the relationship between disaster resilience and determinants of health, including health status. Participants also reported their views on perceived vulnerable groups in their community and proposed design characteristics of more effective community disaster plans.
These elements were investigated in a small agricultural community in Western Australia previously exposed to bushfires. A questionnaire based on health elements from the Social Determinants of Health described by the World Health Organization (WHO) was used, and responses were compared with quantitative data describing the community’s health status. A mixed-methods approach combining qualitative (semi-structured interview) and quantitative (closed questions using a Likert scale) tools was undertaken with a small group of community members.
It was found that community connection and social capital were perceived to provide knowledge and support that enhanced individual disaster risk awareness and preparedness and improved an individual’s disaster resilience. Stress and social exclusion within a community were perceived to decrease an individual’s resilience to disaster. Disaster resilience was reported to be a function of good physical and mental health. For effective disaster planning, community partnership in the development, education, and testing of plans, together with robust communication, was described as essential to community emergency plans.
The syndromes subsumed under the general umbrella term of impulse control disorders (ICDs), namely punding, compulsive disorders, and the dopamine dysregulation syndrome (DDS), all share the common theme of an overwhelming need to perform some activity. The actions are generally closer in nature to addictive disorders, being ego-syntonic, and less like true impulsive disorders, which patients may try to resist [1]. Punding represents a need to perform senseless activities repeatedly, such as folding and refolding clothes in a drawer for hours at a time, polishing pennies, or pulling weeds from a lawn or threads from a rug. The more common ICDs include gambling disorder, compulsive sexual disorder, consumerism, and hobbyism, but they may also include strikingly unusual activities that are extraordinarily narrow in their focus. The DDS appears to be a form of drug-addictive behavior, similar to that seen with the usual addictive drugs.
Response to lithium in patients with bipolar disorder is associated with clinical and transdiagnostic genetic factors. Combining these variables might help clinicians better predict which patients will respond to lithium treatment.
Aims
To use a combination of transdiagnostic genetic and clinical factors to predict lithium response in patients with bipolar disorder.
Method
This study utilised genetic and clinical data (n = 1034) collected as part of the International Consortium on Lithium Genetics (ConLi+Gen) project. Polygenic risk scores (PRS) were computed for schizophrenia and major depressive disorder and then combined with clinical variables using a cross-validated machine-learning regression approach. Unimodal, multimodal, and genetically stratified models were trained and validated using ridge, elastic net, and random forest regression on 692 patients with bipolar disorder from ten study sites using leave-site-out cross-validation. All models were then tested on an independent test set of 342 patients. The best-performing models were then tested in a classification framework.
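For readers unfamiliar with leave-site-out validation, the sketch below shows the general shape of such a scheme using ridge regression in scikit-learn: each fold holds out one entire study site so that performance reflects generalisation to unseen sites. The data loading and feature names are hypothetical, and this is a simplified sketch rather than the consortium's actual pipeline.

```python
# Hedged sketch of leave-site-out cross-validation with ridge regression.
# File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

df = pd.read_csv("conligen_training.csv")  # hypothetical: 692 training patients

X = df.drop(columns=["lithium_response", "site"])  # clinical + PRS predictors
y = df["lithium_response"]
groups = df["site"]  # ten study sites

# Each fold holds out all patients from one site, so scores estimate
# how well the model generalises to a site it has never seen.
scores = cross_val_score(
    Ridge(alpha=1.0), X, y, groups=groups, cv=LeaveOneGroupOut(), scoring="r2"
)
print(f"Mean leave-site-out R^2: {scores.mean():.3f}")
```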
Results
The best-performing linear model explained 5.1% (P = 0.0001) of the variance in lithium response and was composed of clinical variables, PRS variables, and interaction terms between them. The best-performing non-linear model used only clinical variables and explained 8.1% (P = 0.0001) of the variance in lithium response. A priori genomic stratification improved non-linear model performance to 13.7% (P = 0.0001) and improved the binary classification of lithium response. This model stratified patients based on their meta-polygenic loadings for major depressive disorder and schizophrenia and was then trained using clinical data.
Conclusions
Using PRS to first stratify patients genetically and then train machine-learning models with clinical predictors led to large improvements in lithium response prediction. When used with other PRS and biological markers in the future, this approach may help inform which patients are most likely to respond to lithium treatment.
Public support for the implementation of personalised medicine policies (PMPs) within routine care is important owing to the high financial costs involved and the potential for redirection of resources from other services.
Aims
We aimed to determine the attributes of a PMP most likely to elicit public support for implementation. We also aimed to determine whether such support differed between a depression PMP and one for cystic fibrosis.
Method
In a discrete-choice experiment, paired vignettes illustrating both the current model of care (CMoC) and a hypothetical PMP for either depression or cystic fibrosis were presented to a representative sample of the UK public (n = 2804). Each vignette integrated varying attributes, including anticipated therapeutic benefit over CMoC, and the annual cost to the taxpayer. Respondents were invited to express their preference for either the PMP or CMoC within each pair.
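One conventional way to analyse such paired-choice data is a logit model on the choice indicator, with the vignette attributes as predictors; the sketch below illustrates this under assumed (hypothetical) variable names, and is not necessarily the model specification used in the study.

```python
# Hedged sketch of a choice model for the paired vignettes. Each row is
# one respondent-vignette pair; chose_pmp = 1 if the PMP was preferred
# over current care. All names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

choices = pd.read_csv("dce_responses.csv")  # hypothetical long-format data

# Coefficients show how each attribute (cost, anticipated benefit, scope,
# service reduction, condition) shifts support for the PMP over CMoC.
model = smf.logit(
    "chose_pmp ~ annual_cost + benefit_over_cmoc + prop_patients_benefiting"
    " + service_reduction + C(condition)",
    data=choices,
).fit()
print(model.summary())
```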
Results
The financial cost was the most important attribute influencing public support for PMPs. Respondents favoured PMP implementation where it benefited a higher proportion of patients or was anticipated to be more effective than CMoC. A reduction in services for non-eligible patients reduced the likelihood of support for PMPs. Respondents were more willing to fund PMPs for cystic fibrosis than for depression.
Conclusions
Cost is a significant factor in the public's support for PMPs, but essential caveats, such as protection for services available to PMP-ineligible patients, may also apply. Further research should explore the factors contributing to condition-specific nuances in public support for PMPs.
Disaster impact databases are important resources for informing research, policy, and decision making. Therefore, understanding the databases’ underpinning data collection methodologies, how they differ, and the quality indicators of the data recorded is essential to ensuring that their use as reference points is valid.
Methods:
The Australian Disaster Resilience Knowledge Hub (AIDRKH) is an open-source platform supported by government to inform disaster management practice. A comparative descriptive review of the Disaster Mapper (hosted at AIDRKH) and the international Emergency Events Database (EM-DAT) was undertaken to identify differences in how Australian disasters are captured and measured.
Results:
The results show substantial variation in the identification and classification of disasters across hazard impacts and hazard types, as well as a lack of data structure for the systematic reporting of contextual and impact variables.
Conclusions:
These differences may have implications for reporting, academic analysis, and thus knowledge management informing disaster prevention and response policy or plans. Consistency in reporting methods based on international classification standards is recommended to improve the validity and usefulness of this Australian database.
A recent genome-wide association study (GWAS) identified 12 independent loci significantly associated with attention-deficit/hyperactivity disorder (ADHD). Polygenic risk scores (PRS) derived from the GWAS can be used to assess genetic overlap between ADHD and other traits. Using ADHD samples from several international sites, we derived PRS for ADHD from the recent GWAS to test whether genetic variants that contribute to ADHD also influence two cognitive functions that show strong association with ADHD: attention regulation and response inhibition, captured by reaction time variability (RTV) and commission errors (CE), respectively.
Methods
The discovery GWAS included 19 099 ADHD cases and 34 194 control participants. The combined target sample included 845 people with ADHD (age: 8–40 years). RTV and CE were available from reaction time and response inhibition tasks. ADHD PRS were calculated from the GWAS using a leave-one-study-out approach. Regression analyses were run to investigate whether ADHD PRS were associated with CE and RTV. Results across sites were combined via random effect meta-analyses.
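To illustrate the final pooling step, here is a self-contained DerSimonian-Laird random-effects meta-analysis in plain NumPy; the per-site estimates fed to it are invented numbers for demonstration, not the study's results.

```python
# Hedged sketch of a DerSimonian-Laird random-effects meta-analysis,
# a standard way to pool per-site regression estimates.
import numpy as np

def random_effects_meta(betas, ses):
    """Pool per-site effect estimates with DerSimonian-Laird weights."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                        # inverse-variance (fixed) weights
    beta_fe = np.sum(w * betas) / np.sum(w)
    q = np.sum(w * (betas - beta_fe) ** 2)  # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(betas) - 1)) / c)  # between-site variance
    w_re = 1.0 / (ses**2 + tau2)            # random-effects weights
    beta_re = np.sum(w_re * betas) / np.sum(w_re)
    return beta_re, np.sqrt(1.0 / np.sum(w_re))

# Invented per-site ADHD-PRS -> RTV estimates, for demonstration only:
beta, se = random_effects_meta([0.10, 0.05, 0.12], [0.05, 0.06, 0.07])
print(f"pooled beta = {beta:.3f} (SE = {se:.3f})")
```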
Results
When the studies were combined in meta-analyses, results were significant for RTV (R2 = 0.011, β = 0.088, p = 0.02) but not for CE (R2 = 0.011, β = 0.013, p = 0.732). No significant association was found between ADHD PRS and RTV or CE in any individual sample (p > 0.10).
Conclusions
We detected a significant association between PRS for ADHD and RTV (but not CE) in individuals with ADHD, suggesting that common genetic risk variants for ADHD influence attention regulation.
Background:
Certain nursing home (NH) resident care tasks carry a higher risk of multidrug-resistant organism (MDRO) transfer to healthcare personnel (HCP), which can result in transmission to residents if HCP fail to perform recommended infection prevention practices. However, data on HCP–resident interactions are limited and do not account for intrafacility practice variation. Understanding differences in interactions, by HCP role and unit, is important for informing MDRO prevention strategies in NHs.
Methods:
In 2019, we conducted serial intercept interviews at 20 NHs in 7 states; each HCP was interviewed 6–7 times over the duration of a unit’s dayshift. The next day, staff on a second unit within the facility were interviewed during the dayshift. HCP on 38 units were interviewed to identify HCP–resident care patterns. All unit staff were eligible for interviews, including certified nursing assistants (CNAs), nurses, physical or occupational therapists, physicians, midlevel practitioners, and respiratory therapists. HCP were asked to list which residents they had cared for (within resident rooms or common areas) since the prior interview. Respondents selected from 14 care tasks. We classified units into 1 of 4 types: long-term, mixed, short stay or rehabilitation, or ventilator or skilled nursing. Interactions were classified based on the risk of HCP contamination after task performance. We compared the proportions of interactions associated with each HCP role and performed clustered linear regression to determine the effect of unit type and HCP role on the number of unique task types performed per interaction (see the sketch following this abstract).
Results:
The intercept interviews described 7,050 interactions and 13,843 care tasks. Except in ventilator or skilled nursing units, CNAs had the greatest proportion of care interactions (interfacility range, 50%–60%) (Fig. 1). In ventilator and skilled nursing units, interactions were shared evenly between CNAs and nurses (43% and 47%, respectively). On average, CNAs in ventilator and skilled nursing units performed the most unique task types (2.5 task types per interaction, Fig. 2) compared with other unit types (P < .05). Compared with CNAs, most other HCP types had significantly fewer task types (0.6–1.4 task types per interaction, P < .001). Across all facilities, 45.6% of interactions included tasks that carried a higher risk of HCP contamination (eg, transferring, wound and device care, Fig. 3).
Conclusions:
Focusing infection prevention education efforts on CNAs may be most efficient for preventing MDRO transmission within NHs because CNAs have the most HCP–resident interactions and complete more tasks per visit. Studies of HCP–resident interactions are critical to improving our understanding of transmission mechanisms and to targeting MDRO prevention interventions.
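As a sketch of the clustered regression named in the Methods, the snippet below regresses unique task types per interaction on unit type and HCP role with facility-clustered standard errors; the data file and column names are hypothetical placeholders.

```python
# Hedged sketch of a clustered linear regression for task types per
# interaction. All file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("intercept_interviews.csv")  # hypothetical interaction-level data

# OLS of unique task types per interaction on unit type and HCP role,
# with standard errors clustered by facility to respect the sampling design.
model = smf.ols("n_task_types ~ C(unit_type) + C(hcp_role)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["facility_id"]}
)
print(model.summary())
```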
Funding: Centers for Disease Control and Prevention (grant no. U01CK000555-01-00)
Disclosures: Scott Fridkin, consulting fee, vaccine industry (spouse)
Implementation of genome-scale sequencing in clinical care faces significant challenges: the technology is highly dimensional with many kinds of potential results, results interpretation and delivery require expertise and coordination across multiple medical specialties, clinical utility may be uncertain, and there may be broader familial or societal implications beyond the individual participant. Transdisciplinary consortia and collaborative team science are well poised to address these challenges. However, the complex web of organizational, institutional, physical, environmental, technologic, and other political and societal factors that influence the effectiveness of consortia is understudied. We describe our experience working in the Clinical Sequencing Evidence-Generating Research (CSER) consortium, a multi-institutional translational genomics consortium.
Methods:
A key aspect of the CSER consortium was the juxtaposition of site-specific measures with the need to identify consensus measures related to clinical utility and to create a core set of harmonized measures. During this harmonization process, we sought to minimize participant burden, accommodate project-specific choices, and use validated measures that allow data sharing.
Results:
Identifying platforms to ensure swift communication between teams and to manage materials and data was essential to our harmonization efforts. Funding agencies can help consortia by clarifying key study design elements across projects during the proposal preparation phase and by providing a framework for sharing data across participating projects.
Conclusions:
In summary, time and resources must be devoted to developing and implementing collaborative practices as preparatory work at the beginning of project timelines to improve the effectiveness of research consortia.
We prove that the continuous function $\hat{\Omega} \colon 2^\omega \to \mathbb{R}$ defined via $X \mapsto \sum_{n} 2^{-K(X \upharpoonright n)}$ for all $X \in 2^\omega$ is differentiable exactly at the Martin-Löf random reals, with the derivative having value 0; that it is nowhere monotonic; and that $\int_0^1 \hat{\Omega}(X)\,\mathrm{d}X$ is a left-c.e. $wtt$-complete real having effective Hausdorff dimension $1/2$.
We further investigate the algorithmic properties of $\hat{\Omega}$. For example, we show that the maximal value of $\hat{\Omega}$ must be random, that the minimal value must be Turing complete, and that $\hat{\Omega}(X) \oplus X \geq_T \emptyset'$ for every $X$. We also obtain some machine-dependent results, including that for every $\varepsilon > 0$ there is a universal machine $V$ such that $\hat{\Omega}_V$ maps every real $X$ having effective Hausdorff dimension greater than $\varepsilon$ to a real of effective Hausdorff dimension 0 with the property that $X \leq_{tt} \hat{\Omega}_V(X)$; and that there is a real $X$ and a universal machine $V$ such that $\hat{\Omega}_V(X)$ is rational.
To evaluate the health status and quality of life of young patients who had cone reconstruction for Ebstein anomaly.
Methods:
We reviewed all patients who had cone reconstruction from 2007 to 2016 at our institution. Prospective surveys were mailed to all eligible patients. Quality of life was assessed using the PedsQL 4.0 Generic Core Scales, including four domains: physical, emotional, social, and school functioning.
Results:
Of 116 eligible patients, 72 (62%) responded. About 96% reported their health as excellent or good, and 52% were symptom-free. Only 37% of patients were taking any medications, the most common of which was aspirin (30%). Only 19% had been hospitalised for cardiac reasons following cone reconstruction. The average self-reported quality of life was 85.3/100, whereas the average parent proxy-reported quality of life was 81.8/100. There was no difference by self-report or parent proxy-report in quality of life between cone reconstruction patients and healthy children; however, quality of life was significantly better compared with children with other chronic health conditions. By self-report and parent proxy-report, 15.1% and 16.7% of patients, respectively, were deemed “at risk” for reduced quality of life. Socially, 63/64 (98%) patients over 5 years old were either full-time students or working full-time.
Conclusion:
Children with Ebstein anomaly who have undergone cone reconstruction have excellent quality of life, comparable with that of healthy peers and significantly better than that of children with other chronic health conditions. Families of children with Ebstein anomaly can expect excellent quality of life, long-term health status, and social functioning following cone reconstruction.
This study profiles climate change as an emerging disaster risk in Oceania. It examines what evidence exists to support decision-making and profiles the nature, type, and potential human and economic impact of climate change and disaster risk in the region.
Aim:
To evaluate perceptions of climate change and disaster risk in the Oceania region.
Methods:
Thirty individual interviews with participants from 9 different countries were conducted. All of the participants were engaged in disaster management in the Oceania region as researchers, practitioners in emergency management, disaster health care and policy managers, or academics. Data collection was conducted between April and November 2017. Thematic analysis using narrative inquiry was conducted to gather first-hand insights into participants’ perceptions of current and emerging threats and to propose improvements in risk management practice to capture, monitor, and control disaster risk.
Results:
Interviewees who viewed climate change as a risk or hazard described a breadth of impacts. Hazards identified included climate variability and climate-related disasters, climate issues in island areas and loss of land mass, trans-nation migration, and increased transportation risk due to rising sea levels. These emerging risks reflect both the geographical location of countries in Oceania, where loss of land mass due to rising oceans has previously been reported, and climate change-driven migration of island populations.
Discussion:
Climate change was perceived as a significant contemporary and future risk, and as an influencing factor on other risks in the Oceania region.
The rationale for undertaking this study was to investigate how characteristics of population health relate to and affect disaster risk, resilience, vulnerability, impact, and recovery. The multi-disciplinary environment that contextualizes disaster practice can influence determinants of health. Robust health determinants, or the lack thereof, may influence the outcomes of disaster events affecting an individual or a community.
Aim:
To investigate how the social determinants of health inform community perceptions of disaster risk.
Methods:
Community perception of disaster risk in reference to the social determinants of health was assessed in this study. Individual interviews were conducted with participants who were all permanent residents of one community. Thematic analysis was conducted using narrative inquiry to gather first-hand insights into their perceptions of how characteristics of population health relate to and affect an individual’s disaster risk.
Results:
Analysis demonstrated commonality among interviewees in their perceptions of how the social determinants of health, by determinant type, influence individual disaster risk. Interviewees perceived a strong correlation between low community connection and vulnerability to disaster risk. Specific populations thought to have low community connection were perceived to be socially isolated, resulting in low knowledge or awareness of surrounding disaster risks and of how to prepare for and respond to disasters. In addition, these populations had reduced access to communication and support in times of need.
Discussion:
The importance of a strong social community connection was a feature of this research. Further research on how health determinants can enable disaster risk awareness and disaster risk communication is warranted.