This paper presents an artificial neural network (ANN)-based nonlinear model predictive visual servoing method for mobile robots. The ANN model is developed for state prediction to mitigate the unknown dynamics and parameter uncertainty of the physics-based (PB) model. To enhance both model generalization and control accuracy, a two-stage ANN training process is proposed. In the pretraining stage, highly diversified data covering broad operating ranges are generated by a PB kinematics model and used to train an initial ANN model. In the second stage, test data collected from the actual system, which are limited in both diversity and volume, are employed to further fine-tune the ANN weights. Path-following experiments are conducted to compare the effects of various ANN models on nonlinear model predictive control and visual servoing performance. The results confirm that the pretraining stage is necessary for improving model generalization: without pretraining (i.e., with a model trained only on the test data), the robot fails to follow the entire track. Fine-tuning the weights with the captured data further improves tracking accuracy by 0.07–0.15 cm on average.
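As a rough illustration of the two-stage training scheme described above, the sketch below pretrains a small feedforward state predictor on data generated by a unicycle kinematics model, then notes where fine-tuning on captured data would follow. The network shape, hyperparameters, and variable names are assumptions for illustration, not details taken from the paper.

```python
# Illustrative two-stage training sketch; architecture and hyperparameters
# are assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn

class StatePredictor(nn.Module):
    """Predicts the next robot state from the current state and control input."""
    def __init__(self, state_dim=3, control_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + control_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, control):
        return self.net(torch.cat([state, control], dim=-1))

def train(model, states, controls, next_states, epochs, lr):
    """One training stage: minimise one-step prediction error."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(states, controls), next_states)
        loss.backward()
        opt.step()
    return model

def kinematics_step(state, control, dt=0.1):
    """PB unicycle model: state = (x, y, theta), control = (v, omega)."""
    x, y, th = state.unbind(-1)
    v, w = control.unbind(-1)
    return torch.stack([x + v * torch.cos(th) * dt,
                        y + v * torch.sin(th) * dt,
                        th + w * dt], dim=-1)

# Stage 1: pretrain on diverse simulated trajectories (broad operating range).
states = torch.rand(1024, 3) * 2 - 1
controls = torch.rand(1024, 2) * 2 - 1
model = train(StatePredictor(), states, controls,
              kinematics_step(states, controls), epochs=200, lr=1e-3)
# Stage 2: fine-tune on the smaller captured dataset at a lower learning rate,
# e.g. train(model, real_s, real_u, real_s_next, epochs=50, lr=1e-4).
```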
Operative cancellations adversely affect patient health and impose resource strain on the healthcare system. Here, our objective was to describe neurosurgical cancellations at five Canadian academic institutions.
Methods:
The Canadian Neurosurgery Research Collaborative performed a retrospective cohort study capturing neurosurgical procedure cancellation data at five Canadian academic centres between January 1, 2014, and December 31, 2018. Demographics, procedure type, reason for cancellation, admission status and case acuity were collected. Cancellation rates were compared across demographic groups, procedure types and centres.
Results:
Overall, 7,734 cancellations were captured across five sites. Mean age of the aggregate cohort was 57.1 ± 17.2 years. The overall procedure cancellation rate was 18.2%. The five-year neurosurgical operative cancellation rate differed between Centres 1 and 2 (Centre 1: 25.9%; Centre 2: 13.0%; p = 0.008). Female patients less frequently experienced procedural cancellation. Elective, outpatient and spine procedures were more often cancelled. Reasons for cancellation included surgeon-related factors (28.2%), cancellation for a higher-acuity case (23.9%), patient condition (17.2%), other factors (17.0%), resource availability (7.0%), operating room running late (6.4%) and anaesthesia-related factors (0.3%). When clustered, the reason for cancellation was patient-related in 17.2%, staffing-related in 28.5% and operational or resource-related in 54.3% of cases.
Conclusions:
Neurosurgical operative cancellations were common and most often related to operational or resource-related factors. Elective, outpatient and spine procedures were more often cancelled. These findings highlight areas for optimizing efficiency and targeted quality improvement initiatives.
Face-to-face administration is the “gold standard” for both research and clinical cognitive assessments. However, many factors may impede or prevent face-to-face assessments, including distance to the clinic and limitations in mobility, eyesight, or transportation. The COVID-19 pandemic further widened gaps in access to care and clinical research participation. Alternatives to face-to-face assessments may provide an opportunity to alleviate the burden caused by both the COVID-19 pandemic and longer-standing social inequities. The objectives of this study were to develop and assess the feasibility of a telephone- and video-administered version of the Uniform Data Set (UDS) v3 cognitive battery for use by NIH-funded Alzheimer’s Disease Research Centers (ADRCs) and other research programs.
Participants and Methods:
Ninety-three individuals (M age: 72.8 years; education: 15.6 years; 72% female; 84% White) enrolled in our ADRC were included. Their most recent adjudicated cognitive status was normal cognition (N = 44), MCI (N = 35), mild dementia (N = 11) or other (N = 3). They completed portions of the UDSv3 cognitive battery, plus the RAVLT, by either telephone or video within approximately 6 months (M = 151 days) of their annual in-person visit, where they completed the same cognitive assessments in person. Some measures were substituted (Oral Trails for TMT; Blind MoCA for MoCA) to allow for phone administration. Participants also answered questions about the pleasantness, difficulty level, and preference for administration mode. Cognitive testers provided ratings of perceived validity of the assessment. Participants’ cognitive status was adjudicated by a group of cognitive experts blinded to the most recent in-person cognitive status.
Results:
When results from the video and phone modalities were combined, the remote assessments were rated as pleasant as the in-person assessment by 74% of participants, and 75% rated the difficulty of completing the remote cognitive assessment the same as in-person testing. Overall perceived validity of the testing session, as determined by cognitive assessors (video = 92%; phone = 87.5%), was good. There was generally good concordance between test scores obtained remotely and in person (r = .3–.8; p < .05), regardless of whether they were administered by phone or video, though individual test correlations differed slightly by mode. Substituted measures also generally correlated well, with the exception of TMT-A and OTMT-A (p > .05). Agreement between adjudicated cognitive status obtained remotely and cognitive status based on in-person data was generally high (78%), with slightly better concordance between video/in-person (82%) than phone/in-person (76%).
Conclusions:
This pilot study provided support for the use of telephone- and video-administered cognitive assessments using the UDSv3 among individuals with normal cognitive function and some degree of cognitive impairment. Participants found the experience similarly pleasant and no more difficult than in-person assessment. Test scores obtained remotely correlated well with those obtained in person, with some variability across individual tests. Adjudication of cognitive status did not differ significantly whether it was based on data obtained remotely or in person. The study was limited by its small sample size, large test-retest window, and lack of randomization to test-modality order. Efforts are currently underway to more fully validate this battery of tests for remote assessment. Funded by: P30 AG072947 & P30 AG049638-05S1
Neuronal dysfunction of the locus coeruleus (LC), the primary producer of norepinephrine, has been identified as a biomarker of early Alzheimer's disease (AD) pathophysiology. Norepinephrine has been implicated in attentional control, and its reduced cortical circulation in AD may be associated with selective attentional difficulties. Additionally, greater pupil dilation indicates greater effort needed to perform a cognitive task, and greater compensatory effort to perform the digit span task has been found in individuals at risk for AD. In this study, we examined associations between a neuroimaging biomarker of the LC and pupil dilation during the Stroop task as a sensitive measure of attentional control.
Participants and Methods:
Sixty-four older adults without dementia were recruited from the San Diego community (mean [SD] age = 74.3 [6.3] years; 39 cognitively unimpaired and 25 with mild cognitive impairment). All participants underwent magnetic resonance imaging of the LC and completed a computerized Stroop task that included 36 incongruent trials (e.g., GREEN presented in red ink), 36 congruent trials (e.g., GREEN presented in green ink), and 32 neutral trials (e.g., LEGAL presented in green ink) in a randomized presentation. Mean pupil dilation for each trial (change relative to baseline at the start of each trial) was measured at 30 Hz using the Tobii X2-30 system (Tobii, Stockholm, Sweden) and averaged within each Stroop condition. Paired t-tests assessed differences in mean pupil dilation between the incongruent and congruent Stroop conditions. Iteratively re-weighted least squares regression was used to assess the association between a rostral LC contrast ratio measure derived from manually marked ROIs and mean pupil dilation during incongruent trials divided by congruent trials, adjusting for age, sex, and education. Follow-up analyses also assessed the association of these variables with mean reaction time (RT) for incongruent trials divided by congruent trials.
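The robust regression step described above can be sketched with statsmodels, whose RLM estimator fits by iteratively re-weighted least squares. The data and column names below are synthetic placeholders for the study's variables, not the actual analysis pipeline.

```python
# Sketch of an iteratively re-weighted least squares (robust) regression;
# all data below are synthetic stand-ins for the study's measurements.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 64
df = pd.DataFrame({
    "dilation_ratio": rng.normal(1.0, 0.2, n),  # incongruent/congruent pupil dilation
    "lc_contrast": rng.normal(0.10, 0.02, n),   # rostral LC contrast ratio
    "age": rng.normal(74, 6, n),
    "sex": rng.integers(0, 2, n),
    "education": rng.normal(16, 2, n),
})

X = sm.add_constant(df[["lc_contrast", "age", "sex", "education"]])
# RLM iterates weighted least squares fits, down-weighting outlying points.
fit = sm.RLM(df["dilation_ratio"], X, M=sm.robust.norms.HuberT()).fit()
print(fit.summary())
```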
Results:
Mean pupil dilation significantly differed across conditions (t = 3.74, mean difference = .13, 95% CI [.06, .20]), such that dilation was higher during the incongruent condition (mean [SD] dilation = .18 [.38] mm) than the congruent condition (mean [SD] dilation = .05 [.35] mm). A significant association was observed between pupil dilation and LC contrast ratio, such that increased mean dilation during incongruent trials relative to congruent trials was observed at lower levels of LC contrast ratio (i.e., lower LC integrity; r = -.37, 95% CI [-.55, -.13]). This association was not observed for mean dilation during only congruent trials (r = -.08, 95% CI [-.31, .18]). Additionally, neither LC contrast ratio (r = .24, 95% CI [-.02, .46]) nor mean incongruent/congruent pupil dilation (r = .14, 95% CI [-.13, .37]) was associated with incongruent/congruent RT.
Conclusions:
Findings suggest that increased pupil dilation during a demanding attentional task is indicative of increased compensatory effort needed to achieve the same level of performance for individuals with reduced LC biomarker integrity. Pupillometry assessment offers a low-cost, non-invasive, and scalable biomarker of LC dysfunction that may be indicative of preclinical AD.
The locus coeruleus (LC) plays a key role in cognitive processes such as attention, executive function, and memory. The LC has been identified as an early site of tau accumulation in Alzheimer’s disease (AD). LC neurons are thought to survive, albeit with limited functionality, until later stages of the disease, though how exactly this limited functionality impacts cognition through the course of AD is still poorly understood. We investigated the interactive effects of an imaging biomarker of the LC and AD-related cerebrospinal fluid (CSF) biomarkers on attention, executive function, and memory.
Participants and Methods:
We recruited 67 older adults from the San Diego community (mean age = 74.52 years; 38 cognitively normal, 23 with mild cognitive impairment, and 6 with probable AD). Participants underwent LC-sensitive magnetic resonance imaging (MRI) to obtain a measure of LC signal relative to surrounding tissue, with lower LC signal possibly indicating limited functionality. Participants also underwent a lumbar puncture to obtain CSF measurements of amyloid-beta 42 (Aβ42) and phosphorylated tau (p-tau). We calculated the p-tau/Aβ42 ratio, which is positively correlated with AD progression. Finally, participants were administered a comprehensive neuropsychological battery, and cognitive composites were created for attention (Digit Symbol, Digit Span Forward, Trails A), executive function (Digit Span Backward, Trails B, Color-Word Inhibition Switching), and two measures of verbal memory [learning (CVLT List A 1-5, Logical Memory Immediate Recall) and delay (CVLT Long Free Recall, Logical Memory Delayed Recall)]. Four multiple linear regressions modeled the relationship of each composite with age, gender, education, p-tau/Aβ42, average LC contrast, and the interaction between average LC contrast and p-tau/Aβ42. For models that were statistically significant, additional regressions were run to determine which segment of the LC (caudal, middle, rostral) contributed to the relationship.
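As a hedged sketch of one of the four regression models described above, the snippet below fits an interaction between average LC contrast and the p-tau/Aβ42 ratio alongside the covariates; the data and variable names are synthetic placeholders, not the study's.

```python
# Sketch of a composite ~ covariates + LC x p-tau/Abeta42 interaction model;
# data are synthetic, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 67
df = pd.DataFrame({
    "attention": rng.normal(0, 1, n),          # attention composite (z-score)
    "age": rng.normal(74.5, 7, n),
    "gender": rng.integers(0, 2, n),
    "education": rng.normal(16, 2, n),
    "ptau_ab42": rng.lognormal(-1.5, 0.5, n),  # p-tau/Abeta42 ratio
    "lc_contrast": rng.normal(0.10, 0.02, n),  # average LC contrast
})

# "lc_contrast * ptau_ab42" expands to both main effects plus their interaction.
fit = smf.ols("attention ~ age + gender + education + lc_contrast * ptau_ab42",
              data=df).fit()
print(fit.summary())
```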
Results:
Our model predicted attention (p = .001, R² = .298) with main effects of average LC signal and p-tau/Aβ42 and an LC by p-tau/Aβ42 interaction. Follow-up regressions revealed that each LC segment contributes to this relationship. Our model predicted executive function (p = .006, R² = .262) with a main effect of average LC signal and an LC by p-tau/Aβ42 interaction. Follow-up regressions revealed that this relationship was limited to the caudal and middle LC. Our models predicted both verbal learning (p < .001, R² = .512) and delayed memory (p < .001, R² = .364), both with main effects of gender and education. Follow-up regressions revealed that the rostral LC signal interacts with p-tau/Aβ42 to predict both verbal learning and delayed memory. For all interactions, those with low p-tau/Aβ42 exhibited a positive relationship between LC signal and cognition, whereas those with higher p-tau/Aβ42 showed a negative relationship.
Conclusions:
MR-assessed LC signal relates to attention, executive function, and verbal learning and memory in a manner that depends on CSF levels of p-tau and Aβ42. The relationship between LC signal and cognition is positive at low levels and negative at higher levels of p-tau/Aβ42. If lower LC signal indicates reduced integrity, these findings imply that MR-assessed LC signal may be a more meaningful marker of AD progression in earlier stages of the disease. Alternatively, this measure may capture a different underlying mechanism depending on tau and amyloid biomarker status.
Are difficult decisions best made after a momentary diversion of thought? Previous research addressing this important question has yielded dozens of experiments in which participants were asked to choose the best of several options (e.g., cars or apartments) either after conscious deliberation, or after a momentary diversion of thought induced by an unrelated task. The results of these studies were mixed. Some found that participants who had first performed the unrelated task were more likely to choose the best option, whereas others found no evidence for this so-called unconscious thought advantage (UTA). The current study examined two accounts of this inconsistency in previous findings. According to the reliability account, the UTA does not exist and previous reports of this effect concern nothing but spurious effects obtained with an unreliable paradigm. In contrast, the moderator account proposes that the UTA is a real effect that occurs only when certain conditions are met in the choice task. To test these accounts, we conducted a meta-analysis and a large-scale replication study (N = 399) that met the conditions deemed optimal for replicating the UTA. Consistent with the reliability account, the large-scale replication study yielded no evidence for the UTA, and the meta-analysis showed that previous reports of the UTA were confined to underpowered studies that used relatively small sample sizes. Furthermore, the results of the large-scale study also dispelled the recent suggestion that the UTA might be gender-specific. Accordingly, we conclude that there exists no reliable support for the claim that a momentary diversion of thought leads to better decision making than a period of deliberation.
The coronavirus disease 2019 (COVID-19) pandemic has resulted in shortages of personal protective equipment (PPE), underscoring the urgent need for simple, efficient, and inexpensive methods to decontaminate masks and respirators exposed to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). We hypothesized that methylene blue (MB) photochemical treatment, which has various clinical applications, could decontaminate PPE contaminated with coronavirus.
Design:
The study had 2 arms: (1) inoculation of PPE with coronaviruses followed by MB with light (MBL) decontamination treatment, and (2) treatment of PPE with 5 cycles of MBL decontamination to determine whether PPE performance was maintained.
Methods:
MBL treatment was used to inactivate coronaviruses on 3 N95 filtering facepiece respirator (FFR) and 2 medical mask models. We inoculated FFR and medical mask materials with 3 coronaviruses, including SARS-CoV-2, and we treated them with 10 µM MB and exposed them to 50,000 lux of white light or 12,500 lux of red light for 30 minutes. In parallel, integrity was assessed after 5 cycles of decontamination using multiple US and international test methods, and the process was compared with the FDA-authorized vaporized hydrogen peroxide plus ozone (VHP+O3) decontamination method.
Results:
Overall, MBL robustly and consistently inactivated all 3 coronaviruses with 99.8% to >99.9% virus inactivation across all FFRs and medical masks tested. FFR and medical mask integrity was maintained after 5 cycles of MBL treatment, whereas 1 FFR model failed after 5 cycles of VHP+O3.
Conclusions:
MBL treatment decontaminated respirators and masks by inactivating 3 tested coronaviruses without compromising integrity through 5 cycles of decontamination. MBL decontamination is effective and low cost and does not require specialized equipment, making it applicable in low- to high-resource settings.
The efficacy of a specialized pediatric cardiac rapid response team is unknown. We hypothesized that a specialized cardiac rapid response team would facilitate team-wide communication between the cardiac stepdown unit and cardiac intensive care unit (ICU) teams and improve patient care.
Materials and methods:
A specialized pediatric cardiac rapid response team was implemented in June 2015. All pediatric cardiac rapid response team activations and outcomes from implementation through December 2018 were reviewed. Cardiac arrests and unplanned transfers to the cardiac ICU were indexed to 1000 patient-days to account for inpatient volume trends and evaluated over time.
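Indexing event counts to 1000 patient-days reduces to a simple rate conversion, sketched below with illustrative numbers rather than the study's data.

```python
def rate_per_1000_patient_days(events: int, patient_days: float) -> float:
    """Index an event count to 1000 patient-days of exposure."""
    return 1000 * events / patient_days

# Hypothetical example: 12 unplanned transfers over 1690 patient-days.
print(round(rate_per_1000_patient_days(12, 1690), 1))  # 7.1
```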
Results:
There were 202 cardiac rapid response team activations in 108 unique patients during the study period. After implementation of the pediatric cardiac rapid response team, unplanned transfers from the cardiac stepdown unit to the cardiac ICU decreased from 16.8 to 7.1 transfers per 1000 patient-days (p = 0.012). The stepdown unit cardiac arrest rate decreased from 1.2 to 0.0 arrests per 1000 patient-days (p = 0.015). There was one death on the cardiac stepdown unit in the 5 years since the implementation of the cardiac rapid response team, compared to four deaths in the previous 5 years.
Conclusions:
A reduction in unplanned cardiac ICU transfers, cardiac arrests, and mortality on the cardiac stepdown unit has been observed since the implementation of a specialized pediatric cardiac rapid response team. A specialized cardiac rapid response team may improve communication and empower the interdisciplinary care team to escalate care for patients experiencing clinical decline.
An intermediate-depth (1751 m) ice core was drilled at the South Pole between 2014 and 2016 using the newly designed US Intermediate Depth Drill. The South Pole ice core is the highest-resolution interior East Antarctic ice core record that extends into the glacial period. The methods used at the South Pole to handle and log the drilled ice, the procedures used to safely retrograde the ice back to the National Science Foundation Ice Core Facility (NSF-ICF), and the methods used to process and sample the ice at the NSF-ICF are described. The South Pole ice core exhibited minimal brittle ice, which was likely due to site characteristics and, to a lesser extent, to drill technology and core handling procedures.
Reward Deficiency Syndrome (RDS) is an umbrella term for all drug and nondrug addictive behaviors due to a dopamine deficiency (“hypodopaminergia”). There is an opioid-overdose epidemic in the USA, which may result in or worsen RDS. A paradigm shift is needed to combat a system that is not working. This shift involves the recognition of dopamine homeostasis as the ultimate treatment goal for RDS, achieved via precision, genetically guided KB220 variants, an approach called Precision Behavioral Management (PBM). Recognition of RDS as an endophenotype and an umbrella term in the future DSM-6, following the Research Domain Criteria (RDoC), would assist in shifting this paradigm.
We investigate the spatial distribution, spectral properties and temporal variability of primary producers (e.g. communities of microbial mats and mosses) throughout the Fryxell basin of Taylor Valley, Antarctica, using high-resolution multispectral remote-sensing data. Our results suggest that photosynthetic communities can be readily detected throughout the Fryxell basin based on their unique near-infrared spectral signatures. Observed intra- and inter-annual variability in spectral signatures is consistent with short-term variations in mat distribution, hydration and photosynthetic activity. Spectral unmixing is also implemented to estimate mat abundance, with the most densely vegetated regions observed from orbit correlating spatially with some of the most productive regions of the Fryxell basin. Our work establishes remote sensing as a valuable tool for studying these ecological communities in the McMurdo Dry Valleys and demonstrates how future scientific investigations and the management of specially protected areas could benefit from these tools and techniques.
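Linear spectral unmixing of the kind described above can be sketched as a non-negative least squares solve for per-pixel endmember abundances; the endmember spectra and pixel values below are synthetic placeholders, not the study's data.

```python
# Linear spectral unmixing sketch with invented spectra (4 bands, 2 endmembers).
import numpy as np
from scipy.optimize import nnls

# Columns: assumed endmember spectra (e.g., microbial mat vs. bare substrate)
# sampled at the sensor's band wavelengths; rows are spectral bands.
endmembers = np.array([
    [0.10, 0.30],
    [0.15, 0.28],
    [0.45, 0.25],   # strong near-infrared response of photosynthetic mats
    [0.50, 0.22],
])
pixel = np.array([0.24, 0.29, 0.33, 0.33])  # observed mixed-pixel spectrum

# Non-negative least squares yields physically plausible (>= 0) abundances.
abundances, residual = nnls(endmembers, pixel)
abundances /= abundances.sum()              # normalise to fractional abundance
print(abundances)
```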
Chromosome 22q11.2 deletion syndrome (22q11DS) is associated with high rates of psychiatric disorders, including schizophrenia in up to 30% of individuals with the syndrome. Despite this, we know relatively little about trajectories and predictors of persistence of psychiatric disorders from middle childhood to early adulthood. Accordingly, we followed youth over four timepoints, every 3 years, to assess long-term trajectories of attention-deficit hyperactivity disorder (ADHD), anxiety, mood, and psychosis-spectrum disorders (PSDs), as well as medication usage.
Methods
Eighty-seven youth with 22q11DS and 65 controls between the ages of 9 and 15 years at the first timepoint (T1; mean age 11.88 ± 2.1 years) were followed for 9 years (mean age of 21.22 ± 2.01 years at T4). Baseline cognitive, clinical, and familial predictors of persistence were identified for each class of psychiatric disorders.
Results
Baseline age and parent-rated hyperactivity scores predicted ADHD persistence [area under curve (AUC) = 0.81]. The presence of family conflict predicted persistence of anxiety disorders (ADs) whereas parent ratings of child internalizing symptoms predicted persistence of both anxiety and mood disorders (MDs) (AUC = 0.84 and 0.83, respectively). Baseline prodromal symptoms predicted persistent and emergent PSDs (AUC = 0.83). Parent-reported use of anti-depressants/anxiolytics increased significantly from T1 to T4.
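As an illustration of how such baseline predictors are typically combined and summarised by an area under the curve, the sketch below fits a logistic regression and computes the AUC; the predictors and labels are synthetic stand-ins, not the study's data.

```python
# Logistic-regression + AUC sketch; data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 87
X = np.column_stack([
    rng.normal(11.9, 2.1, n),       # baseline age (years)
    rng.normal(60, 10, n),          # parent-rated hyperactivity score
])
persisted = rng.integers(0, 2, n)   # ADHD persistence at T4 (placeholder labels)

clf = LogisticRegression().fit(X, persisted)
auc = roc_auc_score(persisted, clf.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")
```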
Conclusions
Psychiatric, behavioral, and cognitive functioning during late childhood and early adolescence successfully predicted children with 22q11DS who were at highest risk for persistent psychiatric illness in young adulthood. These findings emphasize the critical importance of early assessments and interventions in youth with 22q11DS.
Studies were conducted for 3 yr to evaluate herbicides and herbicide combinations for triazine-resistant smooth pigweed (TR-AMACH) control in field corn. Of the PRE treatments, combinations of atrazine plus acetochlor, metolachlor plus dicamba, and atrazine plus alachlor provided the most complete control of this weed (77 to 81%). The best early postemergence (EP) combination was pendimethalin plus atrazine plus dicamba (93% control). Pyridate plus atrazine applied POST provided a four-site average of 98% control. The most effective sequential herbicide treatments consisted of either metolachlor or pendimethalin PRE followed by POST treatments containing either pyridate, thifensulfuron, bromoxynil, or dicamba.
Chylothorax after paediatric cardiac surgery incurs significant morbidity; however, a detailed understanding that does not rely on single-centre or administrative data is lacking. Using a multicentre cohort of patients treated in cardiac ICUs, we described the contemporary clinical epidemiology of postoperative chylothorax and evaluated variation in rates among centres.
Methods
This was a retrospective cohort study using prospectively collected clinical data from the Pediatric Cardiac Critical Care Consortium registry. All postoperative paediatric cardiac surgical patients admitted from October 2013 to September 2015 were included. Risk factors for chylothorax and associations with outcomes were evaluated using multivariable logistic or linear regression models, as appropriate, accounting for within-centre clustering using generalised estimating equations.
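A minimal sketch of a clustered logistic model of this form: generalised estimating equations grouped by centre with an exchangeable working correlation. The variables and data below are hypothetical, not registry data.

```python
# GEE logistic regression sketch with within-centre clustering; synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "chylothorax": rng.integers(0, 2, n),
    "age_lt_1yr": rng.integers(0, 2, n),
    "single_ventricle": rng.integers(0, 2, n),
    "bypass_minutes": rng.normal(120, 40, n),
    "centre": rng.integers(0, 15, n),        # 15 hypothetical centres
})

# The exchangeable working correlation accounts for clustering within centres.
fit = smf.gee(
    "chylothorax ~ age_lt_1yr + single_ventricle + bypass_minutes",
    groups="centre",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(fit.summary())
```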
Results
A total of 4864 surgical hospitalisations from 15 centres were included. Chylothorax occurred in 3.8% (n=185) of hospitalisations. Case-mix-adjusted chylothorax rates varied from 1.5 to 7.6% and were not associated with centre volume. Independent risk factors for chylothorax included age <1 year, non-Caucasian race, single-ventricle physiology, extracardiac anomalies, longer cardiopulmonary bypass time, and thrombosis associated with an upper-extremity central venous line (all p<0.05). Chylothorax was associated with significantly longer duration of postoperative mechanical ventilation, cardiac ICU and hospital length of stay, and higher in-hospital mortality (all p<0.001).
Conclusions
Chylothorax after cardiac surgery in children is associated with significant morbidity and mortality. A five-fold variation in chylothorax rates was observed across centres. Future investigations should identify centres most adept at preventing and managing chylothorax and disseminate best practices.
The limitations of current and immediate-future single-frequency, single-polarization, space-borne SARs for winter sea-ice mapping are quantitatively examined, and improvements are suggested by combining frequencies and polarizations. Ice-type maps are generated using multi-channel, air-borne SAR observations of winter sea ice in the Beaufort Sea to identify six ice conditions: (1) multi-year sea ice; (2) compressed first-year ice; (3) first-year rubble and ridges; (4) first-year rough ice; (5) first-year smooth ice; and (6) first-year thin ice. At a single polarization, C-band (λ = 5.6 cm) and L-band (λ = 24 cm) frequencies yield classification accuracies of 67% and 71%, respectively, because C-band confuses multi-year ice with the compressed, rough, thick first-year ice surrounding multi-year ice floes, and L-band confuses multi-year ice with deformed first-year ice. Combining C- and L-band improves classification accuracy by 20%. Adding a second polarization at one frequency improves classification accuracy by only 10–14% and separates thin ice from calm open water. Under similar winter-ice conditions, ERS-1 (Cvv) and Radarsat (CHH) would overestimate the multi-year ice fraction by 15% but correctly map the spatial variability of ice thickness; J-ERS-1 (LHH) would perform poorly; and J-ERS-1 combined with ERS-1 or Radarsat would yield reliable estimates of the old, thick first-year and thin-ice fractions, and of the spatial distribution of ridges. With two polarizations, future single-frequency space-borne SARs could improve our current capability to discriminate thinner ice types.
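To illustrate the reported gain from combining frequencies, the sketch below scores a generic supervised classifier on synthetic single- and dual-channel backscatter features; it is not the classification method used in the study, and all values are invented.

```python
# Single- vs. combined-channel ice-type classification sketch; synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 600
ice_type = rng.integers(0, 6, n)             # the six winter ice conditions
c_vv = rng.normal(-12, 3, n) + ice_type      # C-band VV backscatter (dB)
l_hh = rng.normal(-18, 4, n) + 2 * ice_type  # L-band HH backscatter (dB)

for name, X in [("C-band only", c_vv[:, None]),
                ("L-band only", l_hh[:, None]),
                ("C + L combined", np.column_stack([c_vv, l_hh]))]:
    acc = cross_val_score(RandomForestClassifier(random_state=0),
                          X, ice_type, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```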
Control of early-emerging weeds is essential to protect the yield potential of maize. An understanding of the physiological changes that occur as a result of weed interference is required to address variability in yield loss across sites and years. Field trials were conducted at the University of Guelph (UG), the Ohio State University (OSU), and Colorado State University (CSU) during 2009 and 2010. There were six treatments (season-long weedy and weed-free, and weed control at the 1st-, 3rd-, 5th-, and 10th-leaf-tip stages of maize development) and 20 individual plants per plot were harvested at maturity. We hypothesized that, as weed control was delayed, weed interference in the early stages of maize development would increase plant-to-plant variability in plant dry-matter accumulation, which would result in a reduction of grain yield at maturity. The onset of the critical period for weed control (CPWC) occurred on average between the third and fifth leaf tip stages of development (i.e., V1 to V3, respectively). Rate of yield loss following the onset of the CPWC ranged from 0.05 Mg ha−1 d−1 at UG 2009 to 0.22 Mg ha−1 d−1 at CSU 2010 (i.e., 0.5 and 1.6% d−1, respectively). On average, reductions in kernel number per plant accounted for approximately 65% of the decline in grain yield as weed control was delayed. Biomass partitioning to the grain was stable through early weed removal treatments, increased and peaked at the 10th-leaf-tip time of control, and decreased in the season-long weedy treatment. Plant-to-plant variability in dry matter at maturity and incidence of bareness increased as weed control was delayed. As weed control was delayed, the contribution of plant-to-plant variability at maturity to the overall yield loss was small, relative to the decline of mean plant dry matter.
Hospital evacuations that occur during, or as a result of, infrastructure outages are complicated and demanding. Loss of infrastructure services can initiate a chain of events with corresponding management challenges. This report describes a modeling case study of the 2001 evacuation of the Memorial Hermann Hospital in Houston, Texas (USA). The study uses a model designed to track such cascading events following loss of infrastructure services and to identify the staff, resources, and operational adaptations required to sustain patient care and/or conduct an evacuation. The model is based on the assumption that a hospital’s primary mission is to provide necessary medical care to all of its patients, even when critical infrastructure services to the hospital and surrounding areas are disrupted. Model logic evaluates the hospital’s ability to provide an adequate level of care for all of its patients throughout a period of disruption. If hospital resources are insufficient to provide such care, the model recommends an evacuation. Model features also provide information to support evacuation and resource allocation decisions for optimizing care over the entire population of patients. This report documents the application of the model to a scenario designed to resemble the 2001 evacuation of the Memorial Hermann Hospital, demonstrating the model’s ability to recreate the timeline of an actual evacuation. The model is also applied to scenarios demonstrating how its output can inform evacuation planning activities and timing.
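The model's core decision logic, comparing projected care demand against degraded staff and infrastructure capacity and recommending evacuation when demand cannot be met, can be caricatured in a few lines. All fields and thresholds below are illustrative assumptions, not elements of the actual model.

```python
# Toy evacuation-decision sketch; fields and numbers are invented.
from dataclasses import dataclass

@dataclass
class Hospital:
    patients: int
    staff_hours_available: float           # staff-hours deliverable during outage
    staff_hours_needed_per_patient: float  # care demand per patient over outage
    backup_power_hours: float              # generator endurance

def recommend_evacuation(h: Hospital, outage_hours: float) -> bool:
    """Recommend evacuation if care demand exceeds degraded capacity."""
    demand = h.patients * h.staff_hours_needed_per_patient
    staff_ok = h.staff_hours_available >= demand
    power_ok = h.backup_power_hours >= outage_hours
    return not (staff_ok and power_ok)

# A 48-hour outage exhausting 12 hours of backup power forces evacuation.
print(recommend_evacuation(Hospital(180, 400.0, 3.0, 12.0), outage_hours=48.0))
```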
Vugrin ED, Verzi SJ, Finley PD, Turnquist MA, Griffin AR, Ricci KA, Wyte-Lake T. Modeling Evacuation of a Hospital without Electric Power. Prehosp Disaster Med. 2015;30(3):1-9.