Background
Patients with schizophrenia have a significantly elevated risk of mortality. Clozapine is effective for treatment-resistant schizophrenia, but its use is limited by side-effects. Understanding its association with mortality risk is crucial.
Aims
To investigate the associations of clozapine with all-cause and cause-specific mortality risk in patients with schizophrenia.
Method
In this 18-year population-based cohort study, we retrieved electronic health records of patients with schizophrenia from all public hospitals in Hong Kong. Clozapine users (ClozUs) were patients who initiated clozapine treatment between 2003 and 2012, with the index date set at clozapine initiation. Comparators were non-clozapine antipsychotic users (Non-ClozUs) with the same diagnosis who had never received a clozapine prescription, propensity score matched 1:2 on demographic characteristics and physical and psychiatric comorbidities. ClozUs were further classified according to continuation of clozapine use and co-prescription of other antipsychotics (polypharmacy). Accelerated failure time (AFT) models were used to estimate the risk of all-cause and cause-specific mortality (i.e. suicide, cardiovascular disease, infection and cancer).
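As an illustration of the modelling pipeline described above, the following is a minimal sketch in Python, assuming a pandas DataFrame `df` with hypothetical columns (`clozapine`, `time_to_event`, `event`, and baseline covariates); the sklearn and lifelines calls are one common way to implement propensity matching and a Weibull AFT model, not the authors' actual code.

```python
# Hedged sketch: propensity-score matching followed by a Weibull AFT model.
# All column names are hypothetical; matching here is simplified (with
# replacement) and is not the study's exact matching algorithm.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import WeibullAFTFitter

covs = ["age", "sex", "physical_comorbidity", "psychiatric_comorbidity"]

# 1. Propensity score for clozapine initiation
ps_model = LogisticRegression(max_iter=1000).fit(df[covs], df["clozapine"])
df["ps"] = ps_model.predict_proba(df[covs])[:, 1]

# 2. 1:2 nearest-neighbour matching on the propensity score
treated = df[df["clozapine"] == 1]
controls = df[df["clozapine"] == 0]
nn = NearestNeighbors(n_neighbors=2).fit(controls[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, controls.iloc[idx.ravel()]])

# 3. Weibull accelerated failure time model on the matched cohort.
# In the AFT output, exp(coef) is the acceleration factor (time ratio):
# values > 1 mean the event (e.g. suicide mortality) is delayed.
aft = WeibullAFTFitter()
aft.fit(matched[["time_to_event", "event", "clozapine"] + covs],
        duration_col="time_to_event", event_col="event")
aft.print_summary()
```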
Results
This study included 9,456 individuals (mean (s.d.) age at the index date: 39.13 (12.92) years; 50.73% female; median (interquartile range) follow-up: 12.37 (9.78–15.22) years), comprising 2,020 continuous ClozUs, 1,132 discontinuous ClozUs, 4,326 continuous Non-ClozUs and 1,978 discontinuous Non-ClozUs. Adjusted AFT models showed that continuous ClozUs had a lower risk of suicide mortality (acceleration factor 3.01; 99% CI 1.41–6.44) than continuous Non-ClozUs. Continuous ClozUs with co-prescription of other antipsychotics exhibited lower risks of suicide mortality (acceleration factor 3.67; 99% CI 1.41–9.60) and all-cause mortality (acceleration factor 1.42; 99% CI 1.07–1.88) than continuous Non-ClozUs. No associations were found between clozapine and the other cause-specific mortalities.
Conclusions
These results add to the existing evidence on the effectiveness of clozapine, particularly its anti-suicide effects, and emphasise the need for continuous clozapine use for suitable patients and the possible benefit of clozapine polypharmacy.
A daily prompt to offer vaccination to inpatients awaiting transfer to rehabilitation increased uptake of SARS-CoV-2 (OR 5.64, 95% CI 3.3–10.15; P < 0.001) and influenza (OR 3.80, 95% CI 2.45–6.06; P < 0.001) vaccination. Compared with baseline, the intervention was associated with a reduced incidence of viral respiratory infection during the subsequent rehabilitation admission.
Objective:
To identify risk factors for central line-associated bloodstream infections (CLABSI) in pediatric intensive care settings in an era with a high focus on prevention measures.
Design:
Matched, case–control study.
Setting:
Quaternary children’s hospital.
Patients:
Cases had a CLABSI during an intensive care unit (ICU) stay between January 1, 2015 and December 31, 2020. Controls were matched 4:1 by ICU and admission date and did not develop a CLABSI.
Methods:
Multivariable, mixed-effects logistic regression.
Results:
129 cases were matched to 516 controls. Central venous catheter (CVC) maintenance bundle compliance was >70%. Independent CLABSI risk factors included administration of a continuous non-opioid sedative (adjusted odds ratio (aOR) 2.96, 95% CI [1.16, 7.52], P = 0.023); number of days with one or more CVC in place (aOR 1.42 per 10 days [1.16, 1.74], P = 0.001); and the combination of a chronic CVC with administration of parenteral nutrition (aOR 4.82 [1.38, 16.9], P = 0.014). Variables independently associated with lower odds of CLABSI included CVC location in an upper extremity (aOR 0.16 [0.05, 0.55], P = 0.004); non-tunneled CVC (aOR 0.17 [0.04, 0.63], P = 0.008); presence of an endotracheal tube (aOR 0.21 [0.08, 0.60], P = 0.004); presence of a Foley catheter (aOR 0.30 [0.13, 0.68], P = 0.004); transport to radiology (aOR 0.31 [0.10, 0.94], P = 0.039); continuous neuromuscular blockade (aOR 0.29 [0.10, 0.86], P = 0.025); and administration of histamine H2-blocking medications (aOR 0.17 [0.06, 0.48], P = 0.001).
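As a sketch of how adjusted odds ratios like those above might be obtained, the snippet below fits a mixed-effects logistic regression with a random intercept for each matched set, using statsmodels' variational Bayes fitter; column names are hypothetical and this is one plausible implementation, not the study's code.

```python
# Hedged sketch: mixed-effects logistic regression for a matched
# case-control design. `clabsi` is 1 for cases, 0 for matched controls;
# `match_set` identifies each case-control group. Columns are hypothetical.
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

formula = ("clabsi ~ nonopioid_sedative + cvc_days_per10 "
           "+ chronic_cvc_pn + upper_extremity_cvc")
vc = {"match_set": "0 + C(match_set)"}   # random intercept per matched set

model = BinomialBayesMixedGLM.from_formula(formula, vc, data=df)
result = model.fit_vb()                  # variational Bayes approximation
print(result.summary())
# Exponentiating the posterior means of the fixed effects gives
# approximate adjusted odds ratios (aORs).
```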
Conclusions:
Pediatric intensive care patients with chronic CVCs receiving parenteral nutrition, those on non-opioid sedative infusions, and those with more central line days are at increased risk for CLABSI despite current prevention measures.
Background: Per the Centers for Disease Control and Prevention, health equity means that all people have a fair and just opportunity to attain their highest level of health. Limited evidence exists on disparities in health equity and healthcare-associated infections (HAIs), with no evidence on language or primary insurance payor. While reviewing quality metrics, a disparity signal in central line-associated bloodstream infections (CLABSIs) prompted a multidisciplinary deep dive, with iterative analyses to understand potential inequities and identify improvement opportunities. Methods: CLABSI data were stratified and analyzed for evidence of disparity by race/ethnicity, primary insurance payor, and preferred spoken language using an internal methodology. Subsequent analyses included a root cause analysis (RCA), a case mix index (CMI) analysis, analysis of CLABSI Kamishibai card (K-card) rounding to monitor maintenance bundle reliability, and a comparison of the distribution of central venous catheter (CVC) line days to K-card audits [Figure 1]. Chi-square tests were used to test for significant differences in categorical variables in the RCA and K-card analyses. ANOVA was used to compare CMI between demographic groups. Multiple logistic regression was used to compare K-card compliance rates by demographic group. Results: When CLABSI rates were stratified by primary payor, pairwise comparisons indicated that patients with a public payor had a significantly higher rate of CLABSI than those with a private payor (p = 0.02) [Figure 2A]. The RCA revealed that, compared with patients with private payors, those with public payors had significantly higher rates of overdue needleless connector changes (p = 0.03) and more daily CVC entries (p = 0.05), while patients preferring a language other than English (p = 0.02) were significantly more likely to have CVC contamination events. CMI analyses of CLABSI cases did not show patient acuity to vary significantly between demographic groups. Bivariate analysis of K-card data revealed minor differences in reliability with the 7 Core Maintenance Bundle Elements by demographics; after adjusting for all demographics and accounting for unit, pairwise comparisons indicated that public payors had significantly higher compliance than international payors [Figure 2B]. We found no major differences in the demographic distribution of CVC line days compared with K-card audits, suggesting that we representatively audit maintenance bundle process measures. Conclusions: Our review of health equity in CLABSI events ultimately led to subsequent questions requiring analysis of other data sets. Using an exploratory approach and assembling a multidisciplinary team to identify potential drivers of identified disparities adds value to health equity analyses. This is the first description of HAI disparity data beyond race and ethnicity and can assist other institutions in evaluating healthcare disparities and HAIs.
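For readers unfamiliar with the pairwise testing used above, a toy version of the payor comparison might look like the following; the counts are invented for illustration and are not the study's data.

```python
# Toy chi-square comparison of CLABSI events by payor group.
# Counts are illustrative only (events vs. line-days without an event).
from scipy.stats import chi2_contingency

#         CLABSI  no-CLABSI line-days
table = [[30, 14970],   # public payor
         [12, 14988]]   # private payor
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```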
The current research framework recommends using biomarkers to further understand Alzheimer’s disease (AD) pathogenesis, including contributing factors such as cerebrovascular disease. In longitudinal studies of people with neuropathological examination after death, baseline loneliness was associated with lower cognition, faster cognitive decline, and future AD risk, independent of AD pathology. Examining memory impairment alongside AD and cerebrovascular biomarkers could aid risk reduction efforts earlier in the life course and among populations with greater exposure to loneliness. We hypothesized that loneliness is associated with amyloid, vascular, and neurodegeneration biomarkers and with worse memory, and that loneliness increases susceptibility to biomarker-related memory impairment.
Participants and Methods:
A subset of cognitively unimpaired older adults with available amyloid PET, vascular MRI (white matter hyperintensity volume, WMH), structural MRI (cortical thickness in AD signature regions), neuropsychological testing (memory factor score), dichotomized loneliness data (one item from the CES-D), and relevant medical data was drawn from the community-based Washington Heights-Inwood Columbia Aging Project (WHICAP; n=169). Covariates included age (81±6 years), sex (63% women), race/ethnicity (49/31/20% Non-Hispanic Black/Non-Hispanic White/Hispanic), education (13±4 years), and APOE-e4 carrier status (32%). General linear models in the overall sample, and stratified by race and ethnicity, tested the association between loneliness and AD and cerebrovascular biomarkers, between loneliness and memory, and the interaction of loneliness and biomarkers on memory, adjusting for covariates.
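A minimal sketch of the interaction analysis described above, with hypothetical column names and ordinary least squares standing in for the general linear model:

```python
# Hedged sketch: does the biomarker-memory association differ by
# loneliness status? The `lonely:cortical_thickness` interaction term
# carries that test. Column names are hypothetical.
import statsmodels.formula.api as smf

fit = smf.ols(
    "memory ~ lonely * cortical_thickness + age + education + apoe4",
    data=df,
).fit()
print(fit.summary())
```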
Results:
Loneliness was endorsed by 18% of participants; it was marginally associated with older age (2.1 [-0.2, 4.4], p=0.08), more common in those with untreated diabetes (13% vs. 0.1% in lonely vs. non-lonely participants, p=0.001), and associated with lower cortical thickness (-0.05 [-0.09, -0.02], p=0.01) and lower memory (-0.3 [-0.6, -0.001], p=0.05). In Non-Hispanic White participants, loneliness was associated with greater WMH volume (0.5 [0.07, 0.82], p=0.03), while in Hispanic participants, loneliness was associated with lower cortical thickness (-0.16 [-0.24, -0.08], p=0.0006). In Non-Hispanic Black participants, loneliness was associated with lower memory (-13 [-26, -0.5], p=0.05), and the association between lower cortical thickness and lower memory was stronger in those who endorsed loneliness (5 [0.2, 10], p=0.05). In Hispanic participants, loneliness was associated with higher memory (13 [4, 22], p=0.009), but the association between higher amyloid burden and lower memory was stronger in those who endorsed loneliness (-12 [-20, -4], p=0.006); further, loneliness was marginally associated with lower memory (-0.7 [-1.4, 0.1], p=0.09), independent of WMH.
Conclusions:
Associations between loneliness and cerebrovascular burden or general neurodegeneration may relate to health-seeking behavior (captured here as diabetes treatment status), but the relationship may be more complex for amyloid. The finding that loneliness increased susceptibility to amyloid- and neurodegeneration-related, but not cerebrovascular-related, memory impairment may suggest that domains beyond memory should also be considered. Future work should be longitudinal to disentangle the effects of loneliness from related constructs such as depression and anxiety, incorporate other AD biomarkers such as hyperphosphorylated tau, and build biological mechanisms (e.g., stress, inflammation) into models of loneliness and AD pathogenesis. Older adults from all backgrounds may be susceptible to loneliness, which was associated with lower memory; culturally humble, social support-based interventions may reduce the risk of cognitive impairment.
Background: Pediatric patients often require central venous catheters (CVCs) for a variety of clinical indications, including medication administration, parenteral nutrition, and venous blood sampling. Patients with CVCs are at risk for central-line–associated bloodstream infections (CLABSI). These hospital-acquired infections are often preventable and may lead to increased morbidity and mortality. Clinicians at a 477-bed, freestanding pediatric academic hospital completed a quality improvement project to identify factors that place pediatric patients at increased risk for CLABSI and to outline strategies aimed at CLABSI reduction for our highest-risk patients. Methods: Project leaders completed a literature review to evaluate current research on the topic and then assembled a project team. The team completed a retrospective analysis and categorization of CLABSI cases and reviewed internal CLABSI root-cause analysis data. The group then completed a case–control analysis to identify risk factors in patients with CVCs who developed CLABSIs, compared with patients with CVCs who did not. Following this analysis, the team created a CLABSI risk-factor tool for use by bedside nurses. This tool described patients with CLABSI risk factors and outlined best practices for CLABSI prevention. Results: Based on the literature review, root-cause analysis data, and retrospective CLABSI case review over the period from 2017 to 2021, an initial list of 9 potential CLABSI risk factors was compiled. A case–control analysis was performed comparing 97 CLABSI cases with 103 matched controls. Univariate, multivariate, and additional covariate analyses were employed to identify 3 factors placing pediatric patients at increased risk for CLABSI. These included (1) multiple enteral devices (ie, 2 or more devices among gastrostomy tube, jejunostomy tube, gastrojejunostomy tube, ostomy, and peritoneal drain); (2) multiple CVC entries (ie, CVC used for medications and venous sampling); and (3) long-term CVC plus parenteral nutrition (CVC in place for >21 days and receiving parenteral nutrition and/or intralipids). Conclusions: Pediatric patients with central venous access are vulnerable to CLABSI, and certain patients may be at increased risk. Frontline clinicians may be able to identify these patients and adopt best practices to prevent infection. A tool for use by bedside nurses can be a useful adjunct to existing CLABSI prevention practices.
Although dieting is increasingly prevalent in the general population, weight loss (WL) practices could be a risk factor for weight gain (WG) in normal-weight (NW) individuals. The aim of the present work was to systematically review all studies implicating diet restriction in body weight (BW) evolution in NW people. The literature search was registered in PROSPERO (CRD42021281442) and was performed in three databases from April 2021 to June 2022 for articles involving healthy NW adults. From a total of 1487 records initially identified, eighteen were included in the systematic review. Of the eight interventional dieting studies, only one found a higher BW after weight recovery, but 75% of them highlighted metabolic adaptations in response to WL that favour weight regain and persist during/after BW recovery. Eight of the ten observational studies showed a relationship between dieting and greater later WG, and a meta-analysis of the observational study results indicated that ‘dieters’ have a higher BW than ‘non-dieters’. However, given the high methodological heterogeneity and the publication bias of the studies, this result should be interpreted with caution. Moreover, the term ‘diet’ was poorly described, and we observed large heterogeneity in the methods used to assess dieting status. The present results suggest that dieting could be a major risk factor for long-term WG in NW individuals. There is, however, a real need for prospective randomised controlled studies specifically assessing the relationship between diet-induced WL and subsequent weight in this population.
Modelling loss reserve data in run-off triangles is challenging due to the complex but unknown dynamics of the claim/loss process. Popular loss reserve models describe the mean process through development year, accident year, and calendar year effects using analysis of variance and covariance (ANCOVA) models. We propose to include in the mean function the persistence terms of the conditional autoregressive range model, to capture the persistence of claims across development years. In the ANCOVA model, we adopt linear trends for the accident and calendar year effects and a quadratic trend for the development year effect. To enhance model flexibility, we investigate linear and log-transformed mean functions and four distributions with positive support, namely generalised beta type 2, generalised gamma, Weibull, and exponential extension. The proposed models are implemented in the user-friendly Bayesian package Stan running in the R environment. Results show that the models with a log-transformed mean function and persistence terms provide better model fits. Lastly, the best model is applied to forecast the partial loss reserve and the calendar year reserve for three years.
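On one reading of this description, the log-transformed mean for accident year $i$ and development year $j$ might take a form along the following lines (a hedged sketch; the authors' exact parameterisation of the trends and persistence terms may differ):

$$
\log \mu_{i,j} \;=\; \beta_0 + \beta_a\, i + \beta_c\,(i+j) + \beta_{d,1}\, j + \beta_{d,2}\, j^2 \;+\; \sum_{k=1}^{p} \phi_k \log y_{i,j-k},
$$

where $i$ and $i+j$ carry the linear accident and calendar year trends, the quadratic in $j$ is the development year effect, and the lagged claims $y_{i,j-k}$ with coefficients $\phi_k$ are the conditional-autoregressive-range persistence terms.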
Objective:
To describe the evolution of respiratory antibiotic prescribing during the coronavirus disease 2019 (COVID-19) pandemic across 3 large hospitals that maintained antimicrobial stewardship services throughout the pandemic.
Design:
Retrospective interrupted time-series analysis.
Setting:
Medical and intensive care units (ICUs) from 3 hospitals within a Canadian epicenter for COVID-19.
Methods:
Interrupted time-series analysis was used to analyze rates of respiratory antibiotic utilization, measured in days of therapy per 1,000 patient days (DOT/1,000 PD), in medical units and ICUs. Each of the first 3 waves of the pandemic was compared with baseline.
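One standard way to implement such a segmented regression is a Poisson model with a log patient-days offset and indicator terms for each wave; the sketch below uses hypothetical column names and is not the study's code.

```python
# Hedged sketch: interrupted time-series Poisson regression for monthly
# respiratory-antibiotic days of therapy (DOT), with patient-days as an
# exposure offset. `wave1`..`wave3` are 0/1 indicators for each period.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

fit = smf.glm(
    "dot ~ time + wave1 + wave2 + wave3",
    data=monthly,                          # one row per month and unit
    family=sm.families.Poisson(),
    offset=np.log(monthly["patient_days"]),
).fit()
print(np.exp(fit.params))  # rate ratios vs. baseline, e.g. the wave-1 RR
```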
Results:
Within the medical units, use of respiratory antibiotics increased during the first wave of the pandemic (rate ratio [RR], 1.76; 95% CI, 1.38–2.25) but returned to baseline in waves 2 and 3 despite more COVID-19 admissions. In the ICUs, use of respiratory antibiotics increased in wave 1 (RR, 1.30; 95% CI, 1.16–1.46) and wave 2 (RR, 1.21; 95% CI, 1.11–1.33) and returned to baseline in the third wave, which had the most COVID-19 admissions.
Conclusions:
After an initial surge in respiratory antibiotic prescribing, we observed a normalization of prescribing trends at 3 large hospitals over the course of the COVID-19 pandemic. This trend may reflect the timely generation of new research and of guidelines developed with frontline clinicians, allowing findings to be actively applied to clinical practice.
Objective:
To investigate a cluster of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections in employees working on 1 floor of a hospital administration building.
Methods:
Contact tracing was performed to identify potential exposures, and all employees were tested for SARS-CoV-2. Whole-genome sequencing was performed to determine the relatedness of SARS-CoV-2 samples from infected personnel and from control cases in the healthcare system with coronavirus disease 2019 (COVID-19) during the same period. Carbon dioxide levels were measured during a workday to assess adequacy of ventilation; readings >800 parts per million (ppm) were considered an indication of suboptimal ventilation. To assess the potential for airborne transmission, DNA-barcoded aerosols were released, and real-time polymerase chain reaction was used to quantify particles recovered from air samples in multiple locations.
Results:
Between December 22, 2020, and January 8, 2021, 17 coworkers tested positive for SARS-CoV-2, including 13 symptomatic and 4 asymptomatic individuals. Of the 5 cluster SARS-CoV-2 samples sequenced, 3 were genetically related, but these employees denied higher-risk contacts with one another. None of the sequences from the cluster were genetically related to the 17 control sequences of SARS-CoV-2. Carbon dioxide levels increased during a workday but never exceeded 800 ppm. DNA-barcoded aerosol particles were dispersed from the sites of release to locations throughout the floor; 20% of air samples had >1 log10 particles.
Conclusions:
In a hospital administration building outbreak, sequencing of SARS-CoV-2 confirmed transmission among coworkers. Transmission occurred despite the absence of higher-risk exposures and in a setting with adequate ventilation based on monitoring of carbon dioxide levels.
Understanding the core statistical properties and data features of mortality data is fundamental to the development of machine learning methods for demographic and actuarial applications of mortality projection. The study of statistical features in such data forms the basis for classification, regression and forecasting tasks. In particular, understanding the key statistical structure of such data can improve accuracy in mortality projection and forecasting when constructing life tables. The ability to accurately forecast mortality is critical for the study of demography, life insurance product design and pricing, pension planning and insurance-based decision risk management. Though many stylised facts of mortality data have been discussed in the literature, we provide evidence for a novel statistical feature that is pervasive in national-level mortality data yet has so far remained unexplored. Specifically, we demonstrate, first, strong evidence for the existence of long memory features in mortality data and, second, that such long memory structures display multifractality, a statistical feature that can act as a discriminator of mortality dynamics by age, gender and country. To achieve this, we first outline the way in which we choose to represent the persistence of long memory from an estimator perspective. We make a natural link between a class of long memory features and an attribute of stochastic processes based on fractional Brownian motion. This allows us to use well-established estimators of the Hurst exponent to robustly and accurately study the long memory features of mortality data. We then introduce to mortality analysis the notion, from data science, of multifractality, which allows us to study the long memory persistence features of mortality data on different timescales. We demonstrate its accuracy for sample sizes commensurate with national-level age term structure historical mortality records. A series of synthetic studies as well as a comprehensive analysis of real mortality death count data are presented to demonstrate the pervasiveness of long memory structures in mortality data; both mono-fractal and multifractal functional features are verified to be present as stylised facts of national-level mortality data for most countries and most age groups by gender. We conclude by demonstrating how such features can be used in kernel clustering and mortality model forecasting to improve these actuarial applications.
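To make the estimator concrete, the following self-contained sketch implements the classical rescaled-range (R/S) estimate of the Hurst exponent, one of the well-established estimators alluded to above; H near 0.5 indicates no long memory, while H > 0.5 indicates long-range persistence.

```python
# Self-contained sketch: Hurst exponent via rescaled-range (R/S) analysis.
# Illustrative only; more robust estimators (e.g. DFA, wavelet-based)
# exist and may be preferable for short national-level mortality series.
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of series x by log-log R/S regression."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()      # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                vals.append(r / s)         # rescaled range for this chunk
        if vals:
            sizes.append(size)
            rs.append(np.mean(vals))
        size *= 2
    # slope of log(R/S) against log(window size) estimates H
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(4096)))  # about 0.5 for white noise
```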
Several recent reports have raised concern that infected coworkers may be an important source of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) acquisition by healthcare personnel. In a suspected outbreak among emergency department personnel, sequencing of SARS-CoV-2 confirmed transmission among coworkers. The suspected 6-person outbreak comprised 2 distinct transmission clusters and 1 unrelated infection.
Mortality projection and forecasting of life expectancy are two important aspects of the study of demography and life insurance modelling. We demonstrate in this work the existence of long memory in mortality data. Models incorporating a long memory structure provide a new approach to enhancing the accuracy and reliability of mortality forecasts, which can in turn improve the understanding of mortality. Novel mortality models are developed by extending the Lee–Carter (LC) model for death counts to incorporate a long memory time series structure. To link our extensions to existing actuarial work, we detail the relationship between the classical models of death counts developed under a Generalised Linear Model (GLM) formulation and the extensions we propose, which are developed under an extension to the GLM framework known in the time series literature as Generalised Linear Autoregressive Moving Average (GLARMA) regression models. Bayesian inference is applied to estimate the model parameters. The Deviance Information Criterion (DIC) is evaluated to select between different LC extensions of our proposed models in terms of both in-sample fit and out-of-sample forecast performance. Furthermore, we compare our new models against existing model structures proposed in the literature when applied to death count data sets from 16 countries, stratified by gender and age group. Estimates of mortality rates are then used to calculate life expectancies when constructing life tables. Comparing different life expectancy estimates, the results show that the LC model without the long memory component may underestimate life expectancy, while the long memory extensions reduce this effect. In summary, it is valuable to investigate how the long memory feature in mortality influences life expectancies in the construction of life tables.
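For reference, the classical Lee–Carter structure being extended is standard; one generic way to write it for death counts, with a GLARMA-type term sketched in (not the paper's exact specification), is

$$
\log m_{x,t} = a_x + b_x k_t + \varepsilon_{x,t}, \qquad D_{x,t} \sim \mathrm{Poisson}\!\big(E_{x,t}\, e^{\,a_x + b_x k_t + W_t}\big),
$$

where $m_{x,t}$ is the central death rate at age $x$ in year $t$, $D_{x,t}$ and $E_{x,t}$ are the death count and exposure, and $W_t$ is a serially dependent term through which long memory dynamics can enter the linear predictor.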
Background: In the United States, carbapenemases are rarely the cause of carbapenem resistance in Pseudomonas aeruginosa. Detection of carbapenemase production (CP) in carbapenem-resistant P. aeruginosa (CRPA) is critical for preventing its spread, but many isolates must be tested to detect a single CP-CRPA. The CDC evaluates CRPA for CP through (1) the Antibiotic Resistance Laboratory Network (ARLN), in which CRPA are submitted from participating clinical laboratories to public health laboratories for carbapenemase testing and antimicrobial susceptibility testing (AST), and (2) laboratory- and population-based surveillance for CRPA in 8 sites through the Emerging Infections Program (EIP). Objective: We used data from the ARLN and the EIP to identify AST phenotypes that can help detect CP-CRPA. Methods: We defined CRPA as P. aeruginosa resistant to meropenem, imipenem, or doripenem, and we defined CP-CRPA as CRPA with molecular identification of carbapenemase genes (blaKPC, blaIMP, blaNDM, or blaVIM). We applied CLSI break points to 2018 ARLN CRPA AST data to categorize isolates as resistant, intermediate, or susceptible, and we evaluated the sensitivity and specificity of AST phenotypes to detect CP among CRPA; isolates that were intermediate or resistant were called nonsusceptible. Using EIP data, we assessed the proportion of isolates tested for a given drug in clinical laboratories, and we applied the definitions to evaluate performance and the number needed to test to identify a CP-CRPA. Results: Only 203 of 6,444 CRPA isolates (3%) tested through the ARLN were CP-CRPA, harboring blaVIM (n = 123), blaKPC (n = 53), blaIMP (n = 16), or blaNDM (n = 13) genes. The definitions with the best performance were resistance to ≥1 carbapenem AND (1) nonsusceptibility to ceftazidime (sensitivity, 93%; specificity, 61%) (Table 1) or (2) nonsusceptibility to cefepime (sensitivity, 83%; specificity, 53%). Most isolates not identified by definition 2 were sequence type 111 from a single-state blaVIM CP-CRPA outbreak. Among 4,209 CRPA isolates identified through the EIP, 80% had clinical laboratory AST data for ceftazidime and 96% had clinical laboratory AST data for cefepime. Of 967 CRPA isolates that underwent molecular testing at the CDC, 7 were CP-CRPA; both definitions would have detected all 7. Based on EIP data, the number needed to test to identify 1 CP-CRPA would decrease from 135 to 42 using definition 1 and to 50 using definition 2. Conclusions: AST-based definitions combining carbapenem resistance with ceftazidime or cefepime nonsusceptibility would rarely miss a CP-CRPA and would reduce the number needed to test to identify a CP-CRPA by >60%. These definitions could be considered for use in laboratories to decrease the testing burden of detecting CP-CRPA.
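To make the screening arithmetic concrete, the short calculation below applies the reported definition-1 sensitivity and specificity to the ARLN counts quoted above; the abstract's 135-to-42 figures come instead from the EIP data, where CP-CRPA prevalence is lower, so the absolute numbers differ.

```python
# Worked number-needed-to-test (NNT) arithmetic using the ARLN figures:
# 203 CP-CRPA among 6,444 CRPA; definition 1 sensitivity 93%, specificity 61%.
cp, crpa = 203, 6444
sens, spec = 0.93, 0.61

tp = sens * cp                   # carbapenemase producers flagged
fp = (1 - spec) * (crpa - cp)    # non-producers flagged anyway

print(f"baseline NNT (test all CRPA): {crpa / cp:.0f}")
print(f"screened NNT (test flagged only): {(tp + fp) / tp:.0f}")
```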
Funding: None
Disclosures: In the presentation we will discuss the drug combination aztreonam-avibactam and acknowledge that this drug combination is not currently FDA approved.
The COVID-19 pandemic has placed unprecedented demands on health systems, where hospitals have become overwhelmed with patients amidst limited resources. Disaster response and resource allocation during such crises present multiple challenges. A breakdown in communication and organization can lead to unnecessary disruptions and adverse events. The Federal Emergency Management Agency (FEMA) promotes the use of an incident command system (ICS) model during large-scale disasters, and we hope that an institutional disaster plan and ICS will help to mitigate these lapses. In this article, we describe the alignment of an emergency department (ED) specific Forward Command structure with the hospital ICS and address the challenges specific to the ED. Key components of this ICS include a hospital-wide incident command or Joint Operations Center (JOC) and an ED Forward Command. This type of structure leads to a shared mental model with division of responsibilities that allows institutional adaptations to changing environments and maintenance of specific roles for optimal coordination and communication. We present this as a model that can be applied to other hospital EDs around the country to help structure the response to the COVID-19 pandemic while remaining generalizable to other disaster situations.
Objective:
To outline features of the neurologic examination that can be performed virtually through telemedicine platforms (the virtual neurologic examination [VNE]) and to provide guidance for rapidly pivoting in-person clinical assessments to virtual visits during the COVID-19 pandemic and beyond.
Methods:
The full neurologic examination is described with attention to components that can be performed virtually.
Results:
A screening VNE is outlined that can be performed on a wide variety of patients, along with detailed descriptions of virtual examination maneuvers for specific scenarios (cognitive testing, neuromuscular and movement disorder examinations).
Conclusions:
During the COVID-19 pandemic, rapid adoption of virtual medicine will be critical to providing ongoing and timely neurological care. Familiarity with and mastery of the VNE will be essential for neurologists, and this article outlines a practical approach to its implementation.
The existence of long memory in mortality data improves our understanding of the features of mortality data and provides a new approach to establishing mortality models. The finding of long-memory phenomena in mortality data motivates us to develop new mortality models by extending the Lee–Carter (LC) model to death counts and incorporating a long-memory model structure. Furthermore, no identification issues arise in the proposed model class; hence, the constraints that cause many computational issues in LC models are removed. The models are applied to death count data sets from three countries, stratified by gender. Bayesian inference with various selection criteria is applied to perform model parameter estimation and mortality rate forecasting. Results show that the multivariate long-memory mortality model with a long-memory cohort effect outperforms the multivariate extended LC cohort model in both in-sample fitting and out-of-sample forecasting. Increasing the accuracy of mortality rate forecasts and improving the projection of life expectancy are important considerations for insurance companies and governments, since misleading predictions may result in insufficient funds for retirement and pension plans.
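For context, the identification issue referred to above is the well-known invariance of the Lee–Carter predictor: for any constants $c \neq 0$ and $d$,

$$
a_x + b_x k_t \;=\; \big(a_x + d\, b_x\big) + \frac{b_x}{c}\,\big(c\,(k_t - d)\big),
$$

so the classical model only becomes estimable under constraints such as $\sum_x b_x = 1$ and $\sum_t k_t = 0$; these are the constraints the proposed model class dispenses with.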
Every year disasters grow larger and more severe, and health organizations struggle to respond with the services needed to keep public health systems running. Making decisions with limited health information can negatively affect response activities and increase morbidity and mortality. An overarching challenge is getting the right health information to the right health service personnel at the right time. As responding agencies engage with social media (eg, Twitter, Facebook) to communicate with the public, new opportunities emerge to leverage this non-traditional information for improved situational awareness. Transforming these big data depends on computational methods to process and filter content for the health information categories relevant to health responders. To enable a more health-focused approach to social media analysis during disasters, 2 major research challenges should be addressed: (1) advancing methodologies to extract information relevant to health services and creating dynamic knowledge bases that address both the global and US disaster contexts, and (2) expanding social media research in disaster informatics to focus on health response activities. There is a lack of attention to health-focused social media research beyond epidemiologic surveillance. Future research will require approaches that address the challenges of domain-aware, multilingual language understanding in artificial intelligence for disaster health information extraction. New research will need to focus on the primary goal of health providers, whose priority is to get the right health information to the right medical and public health service personnel at the right time.
Disasters, such as cyclones, create conditions that increase the risk of infectious disease outbreaks. Epidemic forecasts can be valuable for targeting the highest-risk populations before an outbreak. The two main barriers to routine use of real-time forecasts are scientific and operational. First, accuracy may be limited by the availability of data and by the uncertainty associated with the inherently stochastic processes that determine when and where outbreaks happen and spread. Second, even when forecasts are produced, appropriate channels of communication may be lacking, preventing their use in decision making.
In April 2019, only six weeks after Cyclone Idai devastated Mozambique’s central region and sparked a cholera outbreak, Cyclone Kenneth severely damaged northern areas of the country. By June 10, a total of 267 cases of cholera had been confirmed, prompting a vaccination campaign. Prior to Kenneth’s landfall, a team of academic researchers, humanitarian responders, and health agencies developed a simple model to forecast the areas at highest risk of a cholera outbreak. The model created risk indices for each district using combinations of four metrics: (1) flooding data; (2) previous annual cholera incidence; (3) sensitivity of previous outbreaks to the El Niño-Southern Oscillation cycle; and (4) a diffusion (gravity) model to simulate the movement of infected travelers. As information on cases became available, the risk model was continuously updated. A web-based tool was produced that identified the highest-risk populations prior to the cyclone and the at-risk districts following the start of the outbreak.
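A hedged sketch of a composite district risk index along the lines described, combining flooding, past incidence, El Niño sensitivity, and a gravity-model term for infected travelers; the weights, functional forms, and column names are hypothetical, as the operational model's exact form is not given in this summary.

```python
# Illustrative composite risk index for cholera by district.
# `df` is a pandas DataFrame with hypothetical columns `cases`, `pop`,
# `flood`, `past_incidence`, `enso_sens`; `dist` is a district-by-district
# distance matrix. Not the collaboration's actual model.
import numpy as np

def gravity_term(cases, pop, dist, alpha=1.0, beta=2.0):
    """Infected-traveler pressure on district i:
    sum_j cases_j * pop_i^alpha / dist_ij^beta (self-distance excluded)."""
    d = np.where(dist > 0, dist, np.inf)        # zero diagonal -> no self-term
    return (pop[:, None] ** alpha / d ** beta) @ cases

def risk_index(df, dist, w=(0.3, 0.3, 0.2, 0.2)):
    g = gravity_term(df["cases"].to_numpy(), df["pop"].to_numpy(), dist)
    parts = np.column_stack(
        [df["flood"], df["past_incidence"], df["enso_sens"], g])
    z = (parts - parts.mean(0)) / parts.std(0)  # standardise each metric
    return z @ np.array(w)                      # weighted composite risk
```

As cases are confirmed, re-running `risk_index` with updated `df["cases"]` mimics the continuous updating described above.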
Prior to Kenneth’s arrival, the model, using the metrics of previous incidence, projected flooding, and El Niño sensitivity, accurately predicted the areas at highest risk for cholera. Despite this success, not all data were available at the scale at which the vaccination campaign took place, limiting the model’s utility, and the extent to which the forecasts were used remains unclear. Here, the science behind these forecasts and the organizational structure of this collaborative effort are discussed. The barriers to the routine use of forecasts in crisis settings are highlighted, as is the potential for flexible teams to rapidly produce actionable insights for decision making using simple modeling tools, both before and during an outbreak.