Antidepressants are effective for depression, but most of the supporting trials excluded individuals with comorbid physical conditions.
Aims
To assess antidepressants’ efficacy and tolerability in individuals with depression and comorbid physical conditions.
Methods
Systematic review and network meta-analysis of randomised controlled trials (RCTs). Co-primary outcomes were efficacy on depressive symptoms and tolerability (participants dropping out because of adverse events). Bias was assessed with the Cochrane Risk-of-Bias 2 tool and certainty of estimates with the Confidence in Network Meta-Analysis approach. A study protocol was registered in advance (https://osf.io/9cjhe/).
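As a purely illustrative aside, the short Python sketch below computes the two outcome metrics used here: a bias-corrected standardised mean difference (Hedges' g) for efficacy and a relative risk for dropout due to adverse events, from hypothetical arm-level summaries. None of the numbers or function names come from the review itself.

```python
# Minimal sketch of the review's two outcome metrics. All data are hypothetical.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardised mean difference (drug vs placebo)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                      # Cohen's d on the pooled SD
    j = 1 - 3 / (4 * (n1 + n2) - 9)         # small-sample correction factor
    return j * d

def relative_risk(events_t, n_t, events_c, n_c):
    """Risk of dropout due to adverse events, drug vs placebo."""
    return (events_t / n_t) / (events_c / n_c)

# Hypothetical arm-level data: lower depression scores favour the drug,
# so a negative SMD indicates efficacy; RR > 1 indicates worse tolerability.
print(hedges_g(m1=12.1, sd1=6.0, n1=60, m2=16.3, sd2=6.4, n2=58))
print(relative_risk(events_t=9, n_t=60, events_c=3, n_c=58))
```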
Results
Of the 115 included RCTs, 104 contributed to efficacy (7714 participants) and 82 to tolerability (6083 participants). The mean age was 55.7 years and 51.9% of participants were female. Neurological and cardiocirculatory conditions were the most represented (26.1% and 18.3% of RCTs, respectively). The following antidepressants were more effective than placebo: imipramine, nortriptyline, amitriptyline, desipramine, sertraline, paroxetine, citalopram, fluoxetine, escitalopram, mianserin, mirtazapine and agomelatine, with standardised mean differences ranging from −1.01 (imipramine) to −0.34 (escitalopram). Sertraline and paroxetine were effective for the largest number of ICD-11 disease subgroups (four out of seven). In terms of tolerability, sertraline, imipramine and nortriptyline were less tolerated than placebo, with relative risks ranging from 1.47 (sertraline) to 3.41 (nortriptyline). For both outcomes, certainty of evidence was ‘low’ or ‘very low’ for most comparisons.
Conclusion
Antidepressants are effective in individuals with comorbid physical conditions, although tolerability is a relevant concern. Selective serotonin reuptake inhibitors (SSRIs) have the best benefit–risk profile, making them suitable as first-line treatments, while tricyclics are highly effective but less tolerated than SSRIs and placebo.
The negative role of malnutrition in patients with Crohn’s disease is known; however, many coexisting disease-related factors could cause misinterpretation of the real culprit. This study aimed to describe the role of malnutrition using a novel methodology, entropy balancing. This was a retrospective analysis of consecutive patients undergoing elective major surgery for Crohn’s disease, preoperatively screened following the European Society for Clinical Nutrition guidelines. Two-step entropy balancing was applied to the group of malnourished patients to obtain an equal cohort at null or low risk of malnutrition. The first reweighting homogenised the cohorts for non-modifiable confounding factors. The second reweighting matched the two groups for modifiable nutritional factors, assuming successful treatment of malnutrition. The entropy balancing was evaluated using the d-value. Postoperative results are reported as mean difference or OR, with a 95 % CI. Of the 183 patients, 69 (37·7 %) were at moderate/high risk for malnutrition. The malnourished patients had lower BMI (d = 1·000), Hb (d = 0·715) and serum albumin (d = 0·981), and a higher lymphocyte count (d = 0·124), Charlson Comorbidity Index (d = 0·257), American Society of Anesthesiologists score (d = 0·327) and Harvey-Bradshaw score (d = 0·696). Protective loop ileostomy was more frequently performed (d = 0·648) in the malnourished group. After the first reweighting, malnourished patients experienced a prolonged length of stay (mean difference = 1·9; 0·11, 3·71 days), a higher overall complication rate (OR 4·42; 1·39, 13·97) and a higher comprehensive complication index score (mean difference = 8·9; 2·2, 15·7). After the second reweighting, the postoperative course of the two groups was comparable. Entropy balancing showed the independent role of preoperative malnutrition and the possible advantages obtainable from a prehabilitation programme in Crohn’s disease patients awaiting surgery.
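For readers unfamiliar with the method, here is a minimal sketch of entropy balancing restricted to first moments, on synthetic data; the study’s actual two-step procedure balanced the specific non-modifiable and nutritional factors listed above, and all names and values below are hypothetical.

```python
# Minimal sketch of entropy balancing (first moments only), assuming
# hypothetical covariate data. Weights for the comparison group are
# chosen so its weighted covariate means match the malnourished group.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X_control = rng.normal(size=(114, 3))          # e.g. BMI, albumin, Hb (standardised)
target = np.array([-0.8, -0.6, -0.5])          # malnourished group's covariate means

def dual(lam):
    # Convex dual of the entropy-balancing problem: its gradient is the
    # weighted covariate mean minus the target, so the optimum balances means.
    return np.log(np.exp((X_control - target) @ lam).sum())

res = minimize(dual, x0=np.zeros(3), method="BFGS")
w = np.exp((X_control - target) @ res.x)
w /= w.sum()                                   # normalised balancing weights
print(X_control.T @ w)                         # ~= target after reweighting
```

Minimising this convex dual is equivalent to finding the weights closest to uniform (in entropy terms) that satisfy the mean constraints, which is the core idea behind the reweighting steps described in the abstract.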
This article contributes to the growing historical literature on the ‘first globalization’ (1815–1913) and income inequality in countries that exported agricultural products. International market integration is expected to increase the demand for exports and therefore their prices. We estimate the effects of increased prices from international market integration on national welfare and income inequality between and within regions in three major exporters of agricultural products (British India, Colonial Indonesia, and the United States), using the prices of eleven key primary commodities. Market integration significantly increased aggregate welfare, but the gains were unevenly distributed. Producing regions gained up to 6% of their GDP. Since the regions that made the greatest welfare gains were also the poorest in their countries, market integration mitigated inequality between regions. Within the southern United States and Java, plantation owners obtained most of the gains, causing a substantial increase in inequality between persons.
The story told in this chapter is that of two major waves of liberalization and globalization, occurring in the second halves of the nineteenth and twentieth centuries. The Iberian economies participated in both waves, but in a way different from the core European economies. During the first globalization boom, despite its smaller domestic market, Portugal was more protectionist than Spain, which probably discouraged export competitiveness in international markets, while it showed greater dynamism in pushing labour abroad and pulling in capital lending from abroad. During the second globalization, Portugal was slightly more trade-friendly and more integrated into international labour markets than its Iberian neighbour, as expected for an economy with a small domestic market and a robust global migration network. Finally, after a process of fast industrialization and welfare convergence towards the more prosperous parts of Europe, both Iberian countries have recently enjoyed, within the scope of European institutions, more balanced growth and active participation in the international economy, at least until the financial crisis of 2008.
Airport emergencies are rare but potentially catastrophic; system preparedness is therefore crucial. Airport emergency plans mandate regular emergency drills, including full-scale exercises, to train and test the entire rescue organization.
Objective:
This report describes a full-scale simulation at Bologna International Airport, Italy, in October 2022, involving local Emergency Medical Services (EMS) resources.
Methods:
A full-scale aeroplane crash was simulated on the airport grounds, activating the airport emergency plan and requiring the intervention of supplementary resources (ambulances, medical cars, and other emergency vehicles).
Results:
Twenty-seven simulated patients were evaluated by EMS: START triage assessment was correct for 81.48% of patients, while 11.11% were over-triaged and 7.41% were under-triaged. All patients were transported to area hospitals. The simulation ended 2 hours and 28 minutes after the initial alarm.
Conclusion:
The response times indicated good system preparedness. Triage was accurate in more than 80% of simulated patients. The availability of a trauma centre within 6 kilometres allowed a share of patients to be transported directly from the scene without affecting transportation times. Areas for improvement were identified in communication among the different agencies and in the movement of ambulances on the airport runway without guidance from airport personnel.
Real-time data on individuals’ locations may provide significant opportunities for managing emergency situations. In the case of outbreaks, for example, besides informing on the proximity of people and thereby supporting contact-tracing activities, location data can be used to understand spatial heterogeneity in virus transmission. However, individuals’ reluctance to share their data, demonstrated by the low penetration rate of contact-tracing apps in several countries during the coronavirus disease 2019 (COVID-19) pandemic, has reopened the discussion among scientists and practitioners on the factors and conditions that lead citizens to share their positioning data. Following the Antecedents → Privacy Concerns → Outcomes (APCO) model, and based on Privacy Calculus and Reasoned Action Theories, this study investigates the factors that lead university students to share their location data with public institutions during outbreaks. To this end, an explanatory survey was conducted in Italy during the second wave of COVID-19, collecting 245 questionnaire responses. Structural equation modeling was used to jointly investigate the roles of trust, perceived benefit, and perceived risk as determinants of the intention to share location data during outbreaks. Results show that respondents’ trust in public institutions, the perceived benefits, and the perceived risk are significant predictors of the intention to disclose personal tracking data to public institutions. The latter two factors affect university students’ willingness to share data more than trust does, prompting public institutions to rethink how they launch and manage the adoption process for these technological applications.
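As a rough, hedged illustration only: the full structural equation model with latent constructs used in the study cannot be reproduced in a few lines, but the single core path equation can be mimicked with an OLS regression on synthetic composite scores. Every variable name and coefficient below is invented for the example, not taken from the survey.

```python
# Simplified stand-in for the core path: intention ~ trust + benefit + risk.
# All data are synthetic; the study itself fitted a full SEM, not this OLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 245                                          # mirrors the sample size only
trust = rng.normal(size=n)
benefit = 0.4 * trust + rng.normal(size=n)       # benefit partly shaped by trust
risk = -0.2 * trust + rng.normal(size=n)
intention = 0.15 * trust + 0.5 * benefit - 0.4 * risk + rng.normal(size=n)

X = sm.add_constant(np.column_stack([trust, benefit, risk]))
fit = sm.OLS(intention, X).fit()
print(fit.summary(xname=["const", "trust", "benefit", "risk"]))
```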
In the field of neurocognitive disorders, the prospect of new disease-modifying therapies increases the importance of etiological diagnosis. The prescription of cerebrospinal fluid (CSF) analysis and imaging biomarkers is common clinical practice but is often driven more by personal expertise and the local availability of diagnostic tools than by evidence of efficacy and cost-effectiveness. This leads to widely heterogeneous dementia care across Europe. Therefore, a European initiative is currently being conducted to establish a consensus for biomarker-based diagnosis of patients with mild cognitive impairment (MCI) and mild dementia.
Participants and Methods:
Since November 2020, a European multidisciplinary task force of 22 experts from 11 scientific societies has been defining a diagnostic workflow for the efficient use of biomarkers. A Delphi consensus procedure was used to bridge gaps in the scientific evidence on biomarker prioritization. The project was conducted in two phases. In phase 1, we conducted a literature review on the accuracy of imaging, CSF, neurophysiological and blood biomarkers in predicting clinical progression or in defining the underlying etiology of the main neurocognitive disorders. This evidence was provided to support the panelists’ decisions. In phase 2, a modified Delphi procedure was implemented, and consensus was reached at a threshold of 70% agreement, or 50%+1 when a question required rediscussion.
Results:
In phase 1, 167 of 2,200 screened papers provided validated measures of biomarker diagnostic accuracy against a gold standard or in predicting progression or conversion of MCI to the dementia stage (MRI, CSF, FDG-PET, DaT imaging, amyloid-PET, tau-PET, myocardial MIBG scintigraphy, and EEG). During phase 2, panelists agreed on the clinical setting of the workflow, its stage of application, and the patient age window. The workflow is patient-centered and features three levels of assessment (W1–W3): W1 defines eleven clinical profiles based on the integrated results of neuropsychology, MRI atrophy patterns, and blood tests; W2 assigns first-line biomarkers according to the W1 profile and the clinical suspicion; and W3 suggests second-line biomarkers when the results of first-line biomarkers are inconsistent with the diagnostic hypothesis, uninformative, or inconclusive. CSF biomarkers are first-line when Alzheimer’s disease (AD) is suspected and when inconsistent neuropsychological and MRI findings hinder a clear diagnostic hypothesis; dopamine SPECT/PET is first-line for profiles suggesting the Lewy body spectrum. FDG-PET is first-line for clinical profiles suggesting frontotemporal lobar degeneration or motor tauopathies, followed by CSF biomarkers when atypical metabolic patterns make an underlying AD etiology conceivable.
Conclusions:
The workflow will promote consistency in the diagnosis of neurocognitive disorders across countries and a rational use of resources. The initiative has some limitations, mainly linked to the Delphi procedure (e.g., the kickoff questions were framed by the moderators, answers depend on the composition of the Delphi panel, subtle phrasing of the questions may steer answers, and the 70% threshold for convergence is conventional). Nevertheless, the diagnostic workflow should help clinicians achieve an early and sustainable etiological diagnosis and enable the use of disease-modifying drugs as soon as they become available.
As Heintz & Scott-Phillips rightly argue, pragmatics has too commonly been considered a mere supplement to linguistic communication. Their aim of reorienting the study of cognitive pragmatics as the foundation of many distinctive features of human behavior finds an echo in the neuropsychological literature on tool use, in which the investigation of semantic dementia challenges the classical dissociation between semantics and pragmatics.
Several in-person and remote delivery formats of cognitive-behavioural therapy (CBT) for panic disorder are available, but up-to-date and comprehensive evidence on their comparative efficacy and acceptability is lacking. Our aim was to evaluate the comparative efficacy and acceptability of all CBT delivery formats for the treatment of panic disorder. To answer this question, we performed a systematic review and network meta-analysis of randomised controlled trials. We searched MEDLINE, Embase, PsycINFO, and CENTRAL, from inception to 1 January 2022. Pairwise and network meta-analyses were conducted using a random-effects model. Confidence in the evidence was assessed using Confidence in Network Meta-Analysis (CINeMA). The protocol was published in a peer-reviewed journal and in PROSPERO. We found a total of 74 trials with 6699 participants. The evidence suggests that face-to-face group CBT [standardised mean difference (s.m.d.) −0.47, 95% confidence interval (CI) −0.87 to −0.07; CINeMA: moderate], face-to-face individual CBT (s.m.d. −0.43, 95% CI −0.70 to −0.15; CINeMA: moderate), and guided self-help (s.m.d. −0.42, 95% CI −0.77 to −0.07; CINeMA: low) are superior to treatment as usual in terms of efficacy, whereas unguided self-help is not (s.m.d. −0.21, 95% CI −0.58 to 0.16; CINeMA: low). In terms of acceptability (i.e. all-cause discontinuation from the trial), CBT delivery formats did not differ significantly from each other. Our findings clearly indicate no efficacy differences between CBT delivered as guided self-help and CBT delivered face-to-face in individual or group format for the treatment of panic disorder. No CBT delivery format reached high confidence in the evidence on CINeMA evaluation.
Atrial fibrillation (AF) is the most important cause of embolic stroke of undetermined source (ESUS). The implantable loop recorder (ILR) has demonstrated the highest sensitivity for detecting it. This registry was created to confirm the high prevalence of AF in patients after ESUS and to verify the possible benefits of ILR use on clinical outcomes such as transient ischaemic attack (TIA)/stroke recurrence and death.
Methods:
A total of 278 patients diagnosed with ESUS and admitted to the Stroke Unit of “Molinette” Hospital between 2011 and 2016 underwent ILR implantation if they had at least one risk factor for AF. A total of 165 patients admitted to other departments in the same center for the same pathology, without ILR, represented the control group. We used propensity score matching to select 132 patients from each group (matched on age, sex, CHA2DS2-VASc, and HAS-BLED baseline characteristics).
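A minimal sketch of the matching step is shown below, under the assumption of a greedy 1:1 nearest-neighbour match on the propensity score; the registry’s actual procedure (which yielded 132 pairs) may have used calipers or other criteria, and all data below are synthetic.

```python
# Minimal sketch of 1:1 nearest-neighbour propensity-score matching,
# assuming hypothetical data for the four matching covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(443, 4))                    # age, sex, CHA2DS2-VASc, HAS-BLED
treated = np.r_[np.ones(278), np.zeros(165)].astype(bool)  # ILR vs control

# Propensity score: probability of being in the ILR group given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Greedy 1:1 matching without replacement on the propensity score.
controls = list(np.flatnonzero(~treated))
pairs = []
for t in np.flatnonzero(treated):
    j = min(controls, key=lambda c: abs(ps[t] - ps[c]))
    pairs.append((t, j))
    controls.remove(j)
    if not controls:
        break
print(f"{len(pairs)} matched pairs")             # capped by the smaller group
```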
Results:
The detection rate of AF episodes was significantly higher in the ILR group (p < 0.001). No significant protective role of ILR for clinical endpoints was found on univariate analysis, although a trend towards significance was noted for the composite outcome of death and recurrent ischaemic events (OR 0.52, CI 0.26–1.04, p = 0.06). On best-subsets multivariate analysis, a protective role of ILR was found for death (OR 0.4, CI 0.17–0.94, p = 0.03) and for the composite outcome (OR 0.41, CI 0.19–0.87, p = 0.02).
Conclusion:
Our statistical models identified a clinically relevant benefit of ILR monitoring, evidenced by a trend towards fewer deaths and TIA/stroke recurrences and by the protective role of ILR in the multivariate models.
Mass-casualty incidents (MCIs) and disasters are characterized by highly heterogeneous effects and may pose serious logistic challenges that can hamper emergency rescue operations.
The main objective of this study was to identify the most frequent logistic challenges (“red flags”) observed in a series of Italian disasters using a problem-based approach, and to verify whether the 80-20 rule of the Pareto principle holds.
Methods:
A series of 138 major events from 1944 through 2020, each with a Disaster Severity Score (DSS) ≥ 4 and five or more victims, was analyzed for the presence of twelve pre-determined red flags.
A Pareto graph was built from the most frequently observed red flags, and possible correlations between the number of red flags and the components of the DSS were investigated.
Results:
Eight of twelve red flags were needed to cover 80% of the events, thus violating the 80-20 rule; the number of red flags showed a low positive correlation with most components of the DSS. The Pareto analysis showed that the most frequently observed red flags were: potential hazards; a casualty nest area larger than 2.5 km²; more than 50 victims; an evacuation noria longer than 20 km; more than five casualty nests; need for extrication; complex access to victims; and complex nest development.
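The Pareto check described above can be made concrete with a short sketch: sort red-flag frequencies and count how many flags are needed to cover 80% of observations. The counts below are hypothetical placeholders, not the study’s data.

```python
# Pareto-type check: how many red flags account for 80% of observations?
# Flag counts are hypothetical placeholders.
import numpy as np

flags = {
    "potential hazards": 70, "nest area > 2.5 km2": 62, "victims > 50": 55,
    "evacuation noria > 20 km": 50, "nests > 5": 44, "need for extrication": 40,
    "complex access": 35, "complex nest development": 30,
    "flag 9": 12, "flag 10": 9, "flag 11": 6, "flag 12": 3,
}
counts = np.sort(np.array(list(flags.values())))[::-1]   # descending frequency
cum_share = np.cumsum(counts) / counts.sum()
k = int(np.searchsorted(cum_share, 0.80) + 1)            # flags needed for 80%
print(f"{k} of {len(counts)} flags cover 80% of observations")
# The 80-20 rule would predict k of about 2-3 out of 12; the study observed 8.
```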
Conclusions:
Logistic problems observed in MCIs and disaster scenarios do not follow the 80-20 Pareto rule; this demands careful and early evaluation of the different logistic red flags so that the rescue response can be tailored appropriately.
Refreezing of meltwater in firn is a major component of the Greenland ice sheet’s mass budget, but in situ observations are rare. Here, we compare the firn density and total ice layer thickness in the upper 15 m of 19 new and 27 previously published firn cores drilled at 15 locations in southwest Greenland (1850–2360 m a.s.l.) between 1989 and 2019. At all sites, ice layer thickness covaries with density over time and space. At the two sites with the earliest observations (1989 and 1998), bulk density in the top 15 m increased by 15–18% over 28 and 21 years, respectively. However, following the extreme melt in 2012, elevation-detrended density from 30 cores across all sites decreased by 15 kg m⁻³ a⁻¹ in the top 3.75 m between 2013 and 2019. In contrast, density at the lowest elevation site shows no trend. Thus, a temporary build-up of firn pore space and meltwater infiltration capacity is possible despite the long-term increase in Greenland ice sheet melting.
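A minimal sketch of the ‘elevation-detrended’ trend idea, on synthetic core data: regress density on elevation, then fit a time trend to the residuals. All values below are invented for illustration.

```python
# Elevation-detrended density trend on synthetic firn-core data.
import numpy as np

rng = np.random.default_rng(3)
n = 30
elev = rng.uniform(1850, 2360, n)                    # core site elevation (m a.s.l.)
year = rng.integers(2013, 2020, n)                   # drilling year (2013-2019)
dens = 450 + 0.08 * (elev - 1850) - 15 * (year - 2013) + rng.normal(0, 10, n)

b1, b0 = np.polyfit(elev, dens, 1)                   # density vs elevation fit
resid = dens - (b0 + b1 * elev)                      # elevation-detrended density
trend, _ = np.polyfit(year, resid, 1)                # time trend of the residuals
print(f"detrended density trend: {trend:.1f} kg m^-3 a^-1")   # ~ -15 by design
```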
Psychotherapies are the treatment of choice for panic disorder, but which should be considered the first-line treatment is yet to be substantiated by evidence.
Aims
To examine the most effective and accepted psychotherapy for the acute phase of panic disorder with or without agoraphobia via a network meta-analysis.
Method
We conducted a systematic review and network meta-analysis of randomised controlled trials (RCTs) to examine the most effective and accepted psychotherapy for the acute phase of panic disorder. We searched MEDLINE, Embase, PsycInfo and CENTRAL, from inception to 1 January 2021, for RCTs. Cochrane and PRISMA guidelines were followed. Pairwise and network meta-analyses were conducted using a random-effects model. Confidence in the evidence was assessed using Confidence in Network Meta-Analysis (CINeMA). The protocol was published in a peer-reviewed journal and in PROSPERO (CRD42020206258).
Results
We included 136 RCTs in the systematic review. Taking into consideration efficacy (7352 participants), acceptability (6862 participants) and the CINeMA confidence-in-evidence appraisal, the best interventions in comparison with treatment as usual (TAU) were cognitive–behavioural therapy (CBT) (for efficacy: standardised mean difference (s.m.d.) = −0.67, 95% CI −0.95 to −0.39; CINeMA: moderate; for acceptability: relative risk (RR) = 1.21, 95% CI 0.94 to 1.56; CINeMA: moderate) and short-term psychodynamic therapy (for efficacy: s.m.d. = −0.61, 95% CI −1.15 to −0.07; CINeMA: low; for acceptability: RR = 0.92, 95% CI 0.54–1.54; CINeMA: moderate). After removing RCTs at high risk of bias, only CBT remained more efficacious than TAU.
Conclusions
CBT and short-term psychodynamic therapy are reasonable first-line choices. Studies with high risk of bias tend to inflate the overall efficacy of treatments. Results from this systematic review and network meta-analysis should inform clinicians and guidelines.
This chapter reviews dramatic changes in the European economy between 1700 and 1870. The period saw a rapid population increase, slow structural transformation away from agriculture, and a gradual spread of modern economic growth across the continent. We discuss the proximate causes of these phenomena – namely, technical progress and the adaptation of English technology, growing integration of markets and upsurge of trade, and institutional modernization and the birth of the modern state.
The dispatch of Advanced Life Support (ALS) teams in Emergency Medical Services (EMS) remains a little-studied aspect of prehospital emergency logistics. In 2015, the dispatch algorithm of the Emilia Est Emergency Operation Centre (EE-EOC) was revised: for high-priority interventions managed by teams with Immediate Life Support (ILS) skills, the dispatch of ALS teams changed from primary to secondary, based on the triage performed by the vehicles already dispatched.
Objectives:
This study aimed to evaluate the effects of the change on the appropriateness of ALS team interventions and their employment time, and to compare the sensitivity and specificity of the algorithm before and after implementation.
Design:
This was a retrospective before-after observational study.
Settings and Participants:
Primary dispatches managed by EE-EOC involving ambulances and/or ALS teams were included. Two groups were created on the basis of the years of intervention (2013-2014 versus 2017-2018).
Intervention:
A switch from primary to secondary dispatch of ALS teams in case of high-priority dispatches managed by ILS teams was implemented.
Outcomes:
Appropriateness of ALS team intervention, total task time of ALS vehicles, and sensitivity and specificity of the algorithm were reviewed.
Results:
The study included 242,501 emergency calls that generated 56,567 red-code dispatches. The new algorithm significantly increased the global sensitivity and specificity of the system in recognising a potential need for ALS intervention, as well as the specificity of primary ALS dispatch. The appropriateness of ALS interventions increased significantly, while the total ALS tasking time per day and the number of critical dispatches without an ALS team available were reduced.
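As a hedged illustration of the before-after comparison, the sketch below computes the sensitivity and specificity of a dispatch decision cross-tabulated against actual ALS need; the counts are hypothetical, not the EE-EOC figures.

```python
# Sensitivity/specificity of a dispatch decision vs actual ALS need.
# All counts are hypothetical placeholders.
def sens_spec(tp, fn, tn, fp):
    """tp/fn/tn/fp: dispatch decisions cross-tabulated against actual ALS need."""
    sensitivity = tp / (tp + fn)   # ALS sent when ALS was actually needed
    specificity = tn / (tn + fp)   # ALS withheld when it was not needed
    return sensitivity, specificity

before = sens_spec(tp=900, fn=300, tn=20000, fp=4000)
after = sens_spec(tp=1000, fn=200, tn=22000, fp=2000)
print(f"before: sens={before[0]:.2f}, spec={before[1]:.2f}")
print(f"after:  sens={after[0]:.2f}, spec={after[1]:.2f}")
```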
Conclusion:
The revision of the dispatch criteria and the extension of two-tiered dispatch for ALS teams significantly increased the appropriateness of ALS interventions and reduced both the global tasking time and the number of high-priority dispatches without an ALS team available.
This paper examines price convergence and changes in the efficiency of wheat markets, covering the period from the mid-fourteenth to the early twentieth century and most of Europe. The analysis is based on a new data set of prices from almost 600 markets. Unlike previous research, we find that convergence was a predominantly pre-modern phenomenon. It started in the late fifteenth century, advanced rapidly until the beginning of the seventeenth century when it temporarily stalled, resumed after the Thirty Years’ War, and accelerated after the Napoleonic Wars in response to trade liberalization. From the late 1840s, convergence petered out and turned into divergence after 1875 as policy decisions dominated technological change. Our results point to the ‘Little Divergence’ between North-Western Europe and the rest of the continent as starting about 1600. Long-term improvements in market efficiency began in the early sixteenth century, with advances over time being as uneven as in price convergence. We trace this to differential institutional change and the non-synchronous spread of modern media and systems of information transmission that affected the ability of merchants to react to news.
Sierra Leone is one of the least developed low-income countries (LICs), slowly recovering from the effects of a devastating civil war and an Ebola outbreak. The health care system is characterized by a chronic shortage of skilled human resources, equipment, and essential medicines. The referral system is weak and vulnerable, with 75% of the country having insufficient access to essential health care. Consequently, Sierra Leone has the highest maternal and child mortality rates in the world. This manuscript describes the implementation of the National Emergency Medical Service (NEMS), a project aiming to create the first prehospital emergency medical system in the country. In 2017, a joint venture of Doctors with Africa (CUAMM), the Veneto Region, and the Research Center in Emergency and Disaster Medicine (CRIMEDIM) was formed to support the Ministry of Health and Sanitation (MOHS) in designing and managing the NEMS system, one of the very few structured, fully equipped, and free-of-charge prehospital services on the African continent. The NEMS design was the result of an in-depth research phase that included a preliminary assessment, a literature review, and consultations with key stakeholders and managers of similar systems in other African countries. On May 27, 2019, after a six-month period in which all the districts had been progressively trained and made operational, the NEMS became operational at the national level. By the end of March 2020, the NEMS operation center (OC) and the 81 ambulances dispatched on the ground had handled a total of 36,814 emergency calls, 35,493 missions, and 31,036 referrals.
Coercive treatment comprises a broad range of practices, ranging from implicit or explicit pressure to accept certain treatment to the use of forced practices such as involuntary admission, seclusion and restraint. Coercion is common in mental health services.
Aims
To evaluate the strength and credibility of evidence on the efficacy of interventions to reduce coercive treatment in mental health services. Protocol registration: https://doi.org/10.17605/OSF.IO/S76T3.
Method
Systematic literature searches were conducted in MEDLINE, Cochrane Central, PsycINFO, CINAHL, Campbell Collaboration, and Epistemonikos from January 2010 to January 2020 for meta-analyses of randomised studies. Summary effects were recalculated using a common metric and random-effects models. We assessed between-study heterogeneity, predictive intervals, publication bias, small-study effects, and whether the observed number of positive studies was greater than expected by chance. On the basis of these calculations, the strength of associations was classified using quantitative umbrella-review criteria, and the credibility of evidence was assessed using the GRADE approach.
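As an illustration of the recalculation step, the sketch below pools hypothetical log relative risks with a DerSimonian-Laird random-effects model, one standard choice of random-effects estimator; the study RRs and standard errors are invented.

```python
# DerSimonian-Laird random-effects pooling of log relative risks.
# All per-study estimates are hypothetical.
import numpy as np

rr = np.array([0.70, 0.85, 0.60, 0.95])          # hypothetical study RRs
se = np.array([0.10, 0.12, 0.15, 0.20])          # SEs of log(RR)

y, v = np.log(rr), se**2
w = 1 / v                                        # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2) # Cochran's Q
df = len(y) - 1
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - df) / c)                    # between-study variance
w_re = 1 / (v + tau2)                            # random-effects weights
pooled = np.sum(w_re * y) / w_re.sum()
se_pooled = np.sqrt(1 / w_re.sum())
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled RR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```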
Results
A total of 23 primary studies (19 conducted in European countries and 4 in the USA) enrolling 8554 participants were included. The efficacy of staff training in reducing the use of restraint was supported by the most robust evidence (relative risk RR = 0.74, 95% CI 0.62–0.87; suggestive association, GRADE: moderate), followed by the evidence on the efficacy of shared decision-making interventions in reducing involuntary admissions of adults with severe mental illness (RR = 0.75, 95% CI 0.60–0.92; weak association, GRADE: moderate) and by the evidence on integrated care interventions (RR = 0.66, 95% CI 0.46–0.95; weak association, GRADE: low). By contrast, community treatment orders and adherence therapy had no effect on involuntary admission rates.
Conclusions
Different levels of evidence indicate the benefit of staff training, shared decision-making interventions and integrated care interventions to reduce coercive treatment in mental health services. These different levels of evidence should be considered in the development of policy, clinical and implementation initiatives to reduce coercive practices in mental healthcare, and should lead to further studies in both high- and low-income countries to improve the strength and credibility of the evidence base.
An unprecedented wave of patients with acute respiratory failure due to coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), hit emergency departments (EDs) in Lombardy starting in the second half of February 2020. This study describes the direct and indirect impacts of the SARS-CoV-2 outbreak on the ED of an urban major hospital.
Methods:
Data regarding all patients diagnosed with COVID-19 presenting from February 1 to March 31, 2020, were prospectively collected, while data regarding non-COVID patients presenting within the same period in 2019 were retrospectively retrieved.
Results:
ED attendance dropped by 37% in 2020. Two-thirds of this reduction occurred early, after the identification of the first autochthonous COVID-19 case in Lombardy but before lockdown measures were enforced. Hospital admissions of non-COVID patients fell by 26%. During the peak of COVID-19 attendance, compared with the same period in 2019, the ED faced an extraordinary increase in patients needing oxygen (+239%) or noninvasive ventilation (+725%), in transfers to the intensive care unit (+57%), and in in-hospital mortality (+309%).
Conclusions:
The COVID-19 outbreak caused an unprecedented upsurge in respiratory failure cases and mortality. Fear of contagion triggered a spontaneous, marked reduction in ED attendance and, presumably, an as-yet-unknown number of missed or delayed diagnoses for conditions other than COVID-19.