
Stakeholder Perspectives on Evaluating Emergency Medical Teams Deployments

Published online by Cambridge University Press:  05 January 2026

Tiffany Yeung*
Affiliation:
Department of Disease Control, London School of Hygiene & Tropical Medicine, London, UK
Daniel Bausch
Affiliation:
Department of Disease Control, London School of Hygiene & Tropical Medicine, London, UK; Centre for Infectious Disease Emergency Response, National University of Singapore, 21 Lower Kent Ridge Rd, Singapore 119077
Arlinda Cerga Pashoja
Affiliation:
St Mary’s University—Strawberry Hill Campus, UK
Joanna Schellenberg
Affiliation:
Department of Disease Control, London School of Hygiene & Tropical Medicine, London, UK
*
Corresponding author: Tiffany Yeung; Email: tiffany.yeung@lshtm.ac.uk

Abstract

Objective

A standardized framework for evaluating Emergency Medical Teams (EMT) deployments is currently lacking. This study aimed to identify evaluation practices and elucidate stakeholder perspectives on evaluating EMT deployments.

Methods

Qualitative interviews were conducted with seventeen participants from all World Health Organization regions, including EMT members, researchers, funders, EMT deploying organizations, and host governments. Thematic analysis using Braun and Clarke’s 6-step process was applied to generate data-driven codes and themes.

Results

Participants generally agreed on the importance of evaluating EMT deployments and sharing lessons learned to establish best practices. Participants recommended that evaluations be carried out externally for objectivity, incorporating both qualitative and quantitative data. They highlighted that voices of local stakeholders are essential but often overlooked. Participants identified evaluation areas which could be used to develop a comprehensive evaluation framework, which included leadership, partner coordination, information management and planning, health operations and technical expertise, operations support and logistics, and finance and administration.

Conclusions

Stakeholders generally recognized the value of establishing a standardized evaluation framework for EMT deployments to enable sharing of best practices and learning for improvement. Further research should prioritize identifying evaluation priorities, with next steps being piloting in both training and deployment settings.

Information

Type
Original Research
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of Society for Disaster Medicine and Public Health, Inc

Introduction

An Emergency Medical Team (EMT) is a group of health professionals that provides urgent medical care during sudden-onset disasters.1 The World Health Organization (WHO) established the EMT Initiative in 2014 to set standards for healthcare services provided by EMTs, aiming to enhance the quality and effectiveness of their response.

Over the past 2 decades, there has been growing concern regarding the accountability of EMT services, emphasizing the need for teams to deliver timely, sustainable, and high-quality care integrated with local health systems.2–5 Before the establishment of the WHO EMT Initiative, EMT activities were criticized for delays, lack of adaptation to local health systems, service quality, self-sufficiency, and cost, as well as political motives behind international aid.6–8 Reluctance to share information that might reveal gaps in performance, with publications only increasing after the 2010 Haiti earthquake, has further constrained evidence-based policy and operational improvements in the field.5,8–13

The WHO EMT Initiative currently lacks a standardized, comprehensive evaluation framework, relying only on a defined minimum data set (MDS) and daily situation reports that often focus on clinical data.14,15 Having a unified approach could facilitate meaningful comparisons, ensure that the right capacities are deployed where they are most needed, and improve accountability.12,16–18

As a step toward the development of a standardized framework, we conducted a study to identify stakeholders’ evaluation practices and elucidate stakeholder perspectives on evaluating EMT deployments.

Methods

The reporting in this paper adheres to the COnsolidated criteria for REporting Qualitative research (COREQ) checklist.19

Study Design

This was a descriptive cross-sectional qualitative study of EMT members and stakeholders. The study used an adaptation of action research,20 whereby the knowledge produced is expected to be used to improve EMT processes and practices.21 The study focused on the three most common types of disasters that the United Nations Disaster Assessment and Coordination teams have addressed since 1993: floods, tropical cyclones, and earthquakes and related tsunamis.22

With the support of existing literature, an open-ended interview guide was developed (Appendices 1 and 2), allowing participants to provide broad and comprehensive responses regarding their perspectives on current EMT evaluations and what elements might be included in a standard evaluation framework. There were 2 versions of the interview guide with minor differences, 1 for aid-providing participants, and 1 for aid-receiving participants.

Reflexivity and Positionality

Lead author TY conducted and oversaw the interviews. Acknowledging the potential challenges of trust-building as an external researcher, TY adopted a neutral stance, which appeared to facilitate candid participant responses. TY’s prior involvement in an EMT training and research project—supporting EMTs through the WHO classification process—informed the study’s conception. While TY possessed contextual knowledge of EMT operations, she had no direct field experience, which enabled her to elicit detailed explanations from participants.

Pilot Testing of Interview Guide

To identify any ambiguities or flow issues, the interview guide was pilot tested with 6 participants who were purposefully recruited based on their experience working in or alongside EMTs as host organizations. Participants for the pilot test had the same inclusion criteria as the study participants, but were not included in the subsequent study and data analysis. Based on the pilot testing, the interview guide was adjusted for question order and syntax to allow for better understanding, and some questions were combined or separated for clarity.

Sampling and Recruitment

Participants were selected through purposive and snowball sampling based on their roles and experience working in or with EMTs. This study approached participants by email, providing information about the research and the participants’ information sheet, and offered an opportunity to ask about the research before requesting consent. To help assure candid input, this study emphasized to participants—both in writing and verbally—that the investigators were unaffiliated with any EMT or EMT-sponsoring organization, and that the participant identities would remain anonymous. This study encouraged open discussion by focusing on hypothetical ideals and examples.

Interviews

All interviews were held one-to-one online by a single investigator (TY) using Microsoft Teams (Microsoft Corporation, Redmond, Washington, United States), through a business account provided by the London School of Hygiene & Tropical Medicine (LSHTM), protected and in line with the General Data Protection Regulation of the European Union. At the request of the participants, 3 interviews were conducted in other languages—1 each in French, Spanish, and Mandarin—the former 2 translated by doctoral students briefed by study lead TY, and the third by TY herself. TY recorded the interviews once participants provided consent, initially saving them on an LSHTM Microsoft OneDrive account, then transcribed them into English and deleted the recordings once transcription was complete. When appropriate and useful, field notes made during the interviews relating to interviewing style were used to revise the interview guide, for example, regarding questions that required clarification or additional prompts. Two questions were combined, 1 question was reordered to be asked earlier, and 2 questions had changes in wording.

No repeat interviews were carried out, and transcripts were not routinely returned to participants for comments or correction, although 2 participants were contacted by email to clarify or further elaborate certain points, such as to clarify the name of a training mentioned. Participants were not asked to give feedback directly on the findings of the interviews, but the results of the interviews were considered in the development of a Delphi method questionnaire for a planned further study, in which participants were invited to participate.

Data Analysis

The data coding was completed in NVivo 12 Pro (Lumivero, Denver, Colorado, United States), generating codes, then themes, from the data. Thematic analysis of the interviews followed the 6 steps outlined by Braun and Clarke23: (1) familiarization with the data, (2) generation of codes, (3) combining codes into themes, (4) reviewing themes, (5) determining the significance of themes, and (6) reporting of findings. A line-by-line approach was used because a line is considered the most basic element of raw data that can be assessed in a meaningful way.24 The study was not designed for statistical comparisons, and participant numbers were too small to make statistical treatment of differences between groups meaningful; therefore, such quantitative analyses were not conducted. To minimize bias during analysis, the study prioritized participants’ perspectives and incorporated relevant insights from grey literature. To ensure reliability and trustworthiness, the research results were transformed into a framework for a future Delphi method study on EMT deployment evaluation (in preparation), for which participants were invited to rate the suitability of questions and themes, as well as suggest others not identified from the interviews.

Ethical Approval

Ethical approval for the research was granted via LSHTM Ethics Online (Ref: 29517).

Results

Participants

Between September 2023 and January 2024, 37 individuals or groups were invited to participate, of whom 17 (46%) accepted, representing a range of demographic characteristics (Table 1). Of the 20 invitees who did not accept, one cited being too busy; the remaining 19 provided no reason, either never responding to inquiries or ceasing to respond after initial exchanges. Interviews lasted an average of 59 minutes (range: 53–76 minutes). Although data saturation was reached after around 13 interviews, all planned interviews were nevertheless carried out to capture perspectives from each type of stakeholder.

Table 1. Demographics of participants

Note.

* Other than one participant in “Deploying governments or funders” and one participant from “Host governments,” all participants are EMT members.

Interview data and themes

The results of the interviews allowed grouping into the following themes and technical areas:

  1. General thoughts on evaluations

    1. Barriers to conducting evaluation

    2. How evaluations should be conducted

    3. Disseminating the results of evaluation

  2. Important areas for a successful deployment

    • Leadership

    • Partner coordination

    • Information management and planning

    • Health operations and technical expertise

    • Operations support and logistics

    • Finance and administration

  3. Adapting the evaluation

    • Difference in evaluating pre- and post-COVID-19 pandemic

    • Difference in evaluating different disaster types

Below, the authors elaborate on these themes and provide representative examples in the words of the participants:

I. General thoughts on evaluations

The participants reported no major differences in perspectives between aid providers and aid receivers, although host governments generally had more concerns regarding the logistics of EMT arrival in their countries. This may be because the aid-receiving participants are from countries that have extensive experience both receiving EMTs from abroad and deploying their own EMTs internationally.

Participants felt that conducting evaluations is important, that they should be part of the compulsory minimum standards for EMTs, and should use a common set of standards. Illustrative quotes include the following:

“… the diversity of reporting is so huge … if we have a consolidated way to report, we can make a multi-team analysis, which could bring a really strong added value in the evaluation of the impact.”—Interviewee 4

“I think we need a better approach to what an evaluation is. I think the work you do (i.e. referring to this study) is very important …”—Interviewee 5

Barriers to conducting evaluation

A common barrier noted was unfamiliarity with how to conduct an EMT evaluation, in part because many of the potential indicators cited by participants were qualitative in nature, and thus not easily measured or summarized. Given the dynamic nature of deployments, participants explained that many changes in practice occurred organically, and hence the rationale behind such changes was not recorded. Participants also reflected that there is no time during deployments to conduct an evaluation, and hence no expectation or customary practice to do so:

“I think it can be quite time-consuming, and it demands resources to do it, but it also demands like organisational reflection and to be a bit humble …”—Interviewee 6

How evaluations should be conducted

Participants suggested methods for conducting evaluations. They emphasized that, to ensure all relevant stakeholder voices are heard and to evaluate comprehensively, evaluations should include not only EMT members but also those on the receiving end, including communities and patients, as well as local partners working with the EMT during the deployment. Some participants mentioned that, rather than having an EMT evaluate itself, they hired external facilitators to carry out the evaluations as objectively as possible. They felt it important to have both qualitative and quantitative measurements for a comprehensive evaluation:

“You need to be interviewing people and getting the EMT network to ensure that there is a lesson learned procedure … after every big operation, when there are deployments, where you have key informant interviews, with the government, the Ministry of Health, the EMT coordinator and the individual EMTs ….”—Interviewee 3

“So, what we created are basically ‘Yes or No’ questions. It’s not a scale; it’s meant to be ‘this is the standard, are you following it?’ … Because why did we oversimplify like that? A) we wanted to get done, B) we can compare … so simple is better for our purposes.”—Interviewee 9

Some participants pointed out that teams need a system to make sure the lessons learnt are not only recorded but put into practice by establishing and fostering a learning culture. Because evaluation is often treated as a requirement of funders or sponsoring organizations, rather than an opportunity to identify lessons learnt and improve, it is frequently reduced to a tick-box exercise with no impact:

“I think it’s just creating that time and bringing that learning culture. To me that would be number 1.”—Interviewee 7

Disseminating the results of evaluation

Participants recommended that EMTs should take more initiative to share their deployment processes so others can learn:

“… the sad part in the EMT space is the lack of published data, the lack of published information. And the challenge we’ve got is the journals will often only publish stuff that is quantitative in nature … the field craft and the way we do our business; we need the opportunity to be able to publish some of those descriptive studies to help teams grow and develop their capabilities …”—Interviewee 1

Participants expressed that, after each disaster where EMTs deployed, a neutral party should have a strong role in organizing collective action, such as forums or panels to discuss the deployments, and summarizing lessons learned, instead of EMTs simply submitting a report that was perceived as not subsequently used. Participants suggested that EMTs could gather annually to discuss the deployments that happened during that year and collectively identify areas for improvement. This could be a preferred option for teams not accustomed to producing written publications:

“I think in the EMT global meetings or regional meetings, they should have a … lessons learned panel session or something like that … What the EMT Initiative should do is organise a debriefing, or some type of evaluation. People can … share with other people and see what they can learn.”—Interviewee 2

II. Important areas for a successful deployment

When asked for their EMT’s evaluation examples or templates, many participants replied that these were private documents and therefore could not be shared, while others stated that no standard template existed and that an ad hoc evaluation or team debrief was usually conducted after each deployment. However, many participants mentioned conducting an After Action Review (AAR), and one mentioned using the WHO AAR as an evaluation framework. Through these responses, the study identified the following evaluation themes, corresponding to the pillars proposed in the WHO AAR:

Leadership

Participants mentioned that good leadership is a key component of success, and this requires specific training. Leaders should be selected by the organization and should have the relevant deployment experience:

“… the leadership … should be strong enough and sometimes you send people, and they must identify who will be the leader. So, leadership, people working with each other, there is a system, harmony between them, I think this is a success.”—Interviewee 8

Partner coordination

Participants deemed coordination and effective communication between the EMT and all stakeholders in the field to be among the most crucial factors in the success of a deployment. The WHO EMT Initiative and WHO EMT Coordination Cell play a particularly important and unique role in EMT deployments since they can access and mobilize resources. As part of a United Nations agency, they can negotiate between the host government and various EMTs to facilitate partnerships:

“… because the WHO will coordinate between different countries … And I think the only people who can do that is the WHO team.”—Interviewee 8

The EMT’s coordination and relationship with the host government are also important, since a positive attitude toward the EMT can support and facilitate the deployment. Being well connected to and supported by local communities and health agencies was viewed as critical to being accepted by local patients, as well as easing access to local resources and other partnerships, owing to the local partners’ familiarity with the local context:

“I think the best deployments happen when there’s the political will from the host government. They’re able to facilitate entry. They’re able to facilitate medicines, kit, supplies to get in very quickly.”—Interviewee 7

“So, demonstrating that we were humble, demonstrating we were there to amplify and support their own response and to not take over was critical to being accepted.”—Interviewee 1

Information management and planning

Good organization and planning of documentation are important to ensure that every decision made is recorded, and to allow smooth handover back to the local health authorities, as well as to maintain continuity of services:

“From all the patient records, daily reports, customer satisfaction surveys, we also have a copy of the exit report, list of donations and list of patients, that we shared, both soft and hard copy, to the Ministry of Health on the last day of our mission.”—Interviewee 14

Health operations and technical expertise

In line with the WHO EMT Initiative, participants acknowledged that one of the most important functions in sudden-onset disaster deployments is for EMTs to act as a surge capacity while the local health system recovers. In other words, the EMT should fill gaps in health needs and support local medical infrastructure:

“…we are not only thinking about the patients, but we are also thinking about the medical doctors, nurses, and the physical structure of the health system facilities and so on.”—Interviewee 11

Participants mentioned a variety of clinical measurements that would be valuable for evaluation:

“In terms of evaluating the success of an intervention, I would also look at, this would be very difficult, but … how many people did the EMT actually give consultations and treatment to, compared to the number of people that were affected by flooding or earthquakes or fires or whatever it could be.”—Interviewee 7

Other than the health operations themselves, the sustainability of the deployment should also be considered, including where the EMT deployment should fit into existing systems. Hence, engagement with local stakeholders is important to ensure sustainability:

“… when you arrive somewhere, you arrive before someone and after someone. It means that when you arrive, you need to link your activity to what exists. And when you plan to leave, you need to link your activity to what will remain after your departure …”—Interviewee 4

Operations support and logistics

The logistics of a deployment are also a key area; it is important to bring only what is needed, depending on the disaster type and with local needs and practices in mind:

“… there is the minimum requirement of drugs and medication you need to take with you. I think this needs to be reviewed each time because sometimes we go and many we don’t use because of the situation.”—Interviewee 8

Finance and administration

Lastly, participants mentioned that the monetary costs of a deployment can be one of the quantitative metrics used to compare deployments:

“Lastly, the allocation of resources, how much did we use, the costs of the supplies, calculate the final costs, including that of members, patients, supplies, for coordination, and the whole timeline.”—Interviewee 15

III. Adapting the evaluation

Difference in evaluating pre- and post-COVID-19 pandemic

Participants’ general view was that although the COVID-19 pandemic prompted changes in how deployments were conceived and implemented during its first 2 years, it did not change how deployments should be evaluated. Instead, greater consideration was given to aspects already implemented, such as infection prevention and control measures, preparedness for compound disasters, and staff mental and physical well-being during deployment.

Difference in evaluating different disaster types

Participants’ general view was that evaluation of deployments would be similar in floods, tropical cyclones, and earthquakes and related tsunamis:

“With a flood … people tend to leave, or people die. So, it’s dead body management and primary healthcare…And tsunamis would be the same [as floods] because most people either just die or they fled.”—Interviewee 3

Limitations

The following limitations were identified in the study:

  1. Local communities that have experienced EMT support are a key stakeholder group that was not reached for this research, due to the lack of resources to access this population. Their views and priorities regarding EMT responses, and thus evaluation, may differ considerably from those of the stakeholders interviewed. Further research should be conducted to include this essential group.

  2. Participants were skewed toward upper-middle- and high-income countries, and thus possibly divergent views from low- and lower-middle-income countries may not be represented. Again, further research should endeavor to include perspectives from lower-resource regions.

  3. The study was unable to interview members of the WHO EMT Initiative due to their competing commitments. As a leading agency on EMTs, their views are perhaps the most pertinent to this subject matter and should be sought and incorporated in future work related to EMT evaluation.

  4. Fewer than half of those invited agreed to be interviewed—a common challenge when trying to get time from undoubtedly busy professionals participating out of their own interest, with no compensation.

  5. This research focused on the three most common types of disasters that the United Nations Disaster Assessment and Coordination teams have addressed since 1993: floods, tropical cyclones, and earthquakes and related tsunamis,22 and did not cover human-caused disasters such as armed conflicts, since they present different challenges for EMTs.25

Discussion

Climate change is increasing the frequency and unpredictability of disasters, with an estimated 299.4 million people needing humanitarian assistance in 2024,26 a trend that is unfortunately unlikely to abate any time soon. Although not a sustainable long-term solution, EMTs will continue to be a crucial component of disaster mitigation. However, their efficacy can only be optimized through a systematic process of evaluation, learning, and adaptation.27

I. General thoughts on evaluations

Participants agreed that evaluation enables EMTs to learn from deployments, compare between them, and make evidence-based decisions to improve impact, resource allocation, and sustainability.28,29 It is thus important to establish standardized evaluation themes to enable EMTs to share openly their evaluation tools and findings.30–33

Although few teams have standardized evaluations, there was consensus that evaluations should be conducted by an external third party to maintain objectivity; combine both qualitative and quantitative data; and include aid recipients, particularly patients, community members, and local partners. These views are in line with past publications and the WHO Practical Guide to Evaluation.2,29,34–36 Past evaluations often overlooked voices from local stakeholders and aid recipients, who may perceive unmet needs differently and whose inclusion would enhance consideration of the local context.5,33,37,38

Participants noted common barriers to evaluating deployments that were consistent with published reports, including fragile healthcare infrastructures resulting in limited baseline data,28,39 the typically complex and chaotic nature of sudden-onset disasters limiting the time EMT personnel can allocate to data collection,40 and reliance on volunteer medical professionals who may lack the experience or capacity to conduct evaluations or publish findings.

One objective of the WHO EMT 2030 Strategy is to “Strengthen information systems, evidence, and research,” including standardizing evaluation practices and developing an integrated knowledge and information management system to facilitate evaluations.41 This study responds to the Strategy’s call for standardized processes for monitoring, evaluation, and reporting of EMT activities, and for tools and approaches informed by evidence. The WHO Practical Guide to Evaluation emphasizes the importance of clear evaluation objectives.35 Hence, aligning with the WHO EMT Initiative’s goals, EMTs should be evaluated on whether they can bridge health gaps after a sudden-onset disaster. A standardized evaluation could be made compulsory for all deployed EMTs and integrated with the MDS and EMT exit forms to produce a more comprehensive picture of a deployment and the disaster response. Although no widely accepted evaluation framework for EMT deployments currently exists, many participants mentioned the WHO AAR, which was developed as part of the International Health Regulations Monitoring and Evaluation Framework to evaluate nationwide emergency response and designed to identify best practices, highlight gaps, and capture lessons learned. Rather than measuring performance against benchmarks, it provides an opportunity for stakeholders to collaboratively identify opportunities for strengthening their capabilities. Thus, it can serve as a reasonable starting point for the development of a standardized evaluation tool.42

II. Important areas for a successful deployment

Key indicators and aspects reported in the published literature, many of them also cited by our study participants, can be incorporated into a standardized EMT evaluation. These include clear leadership, defined roles, competency-based training, preparedness for challenges, and strong partnerships to ensure an integrated response and avoid duplicating services.30,43–46 Information management, standardized reporting, and understanding local needs enable targeted deployments and sustainable operations,47–49 while technical expertise must cover trauma, primary care, mental health, and public health education, often involving local personnel to strengthen capacity.17,50,51 Logistical readiness, self-sufficiency, rapid deployment, and locally sourced supplies are essential, alongside quality assurance through standardized forms such as the MDS.17,27,45,52 Cost-effectiveness studies are limited,27 and knowledge sharing between experienced and less-experienced teams—especially involving Global South EMTs—remains inadequate despite their significant field experience.8,53

III. Adapting the evaluation

This evaluation framework is designed to be adaptable to different contexts and EMT types, using only the questions relevant to a specific deployment. While disaster types may lead to varied patient profiles and timing of life-saving interventions, dictating differences in the specialized personnel or equipment required, the evaluation methods remain consistent.44 Climate change is increasing the frequency and unpredictability of events like floods, earthquakes, and tropical cyclones, which disproportionately affect lower socioeconomic groups and widen disparities in disaster response capacities.54 Despite criticisms of their sustainability, EMTs remain vital for urgent responses where infrastructure is damaged. Recognition of this has led to increased investment in strengthening local and regional emergency capacities as more sustainable long-term solutions.27,55

Conclusions

Participants generally recognized the value of establishing a standardized evaluation framework for EMT deployments. They suggested that such a framework should combine qualitative and quantitative data collection methods and should include the perspectives of all stakeholders, including patients who received aid from EMTs. Further research is needed to explore aid recipient perspectives and identify evaluation priorities; the next steps are to develop an appropriate framework and pilot test it in training and deployment settings.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/dmp.2025.10295.

Author contribution

TY conceived the research idea, performed the data collection and analysis, and wrote the manuscript. DGB, ACP, and JS provided critical ideas in the conception and analysis stages and read and edited the full manuscript. All authors read and approved the final manuscript.

Competing interests

None.

References

1. United Nations Office for Disaster Risk Reduction. Definition: Disaster. PreventionWeb. 2017. Accessed July 7, 2025. https://www.preventionweb.net/terminology/disaster#:~:text=A%20sudden%2Donset%20disaster%20is,critical%20infrastructure%20failure%2C%20transport%20accident
2. Tan YSA, von Schreeb J. Humanitarian assistance and accountability: what are we really talking about? Prehosp Disaster Med. 2015;30(3):264-270. doi:10.1017/S1049023X15000254
3. Abdelmagid N, Checchi F, Garry S, et al. Defining, measuring and interpreting the appropriateness of humanitarian assistance. J Int Humanit Action. 2019;4(1):14. doi:10.1186/s41018-019-0062-y
4. Bryant C. Evaluation and accountability in emergency relief. In: Ebrahim A, Weisband E, eds. Global Accountabilities: Participation, Pluralism, and Public Ethics. Cambridge University Press; 2007:168-192. doi:10.1017/CBO9780511490903.012
5. Jarrett P, Fozdar Y, Abdelmagid N, et al. Healthcare governance during humanitarian responses: a survey of current practice among international humanitarian actors. Confl Health. 2021;15(1):25. doi:10.1186/s13031-021-00355-8
6. World Health Organization/PAHO. Guidelines for the use of foreign field hospitals in the aftermath of sudden-impact disaster. Prehosp Disaster Med. 2003;18(4):278-290. doi:10.1017/S1049023X00001229
7. Moradian MJ, Ardalan A, Nejati A, et al. Importance of site selection for stockpiling field hospitals for upcoming disasters. Bull Emerg Trauma. 2016;4(3):124-125.
8. von Schreeb J, Riddez L, Samnegård H, et al. Foreign field hospitals in the recent sudden-onset disasters in Iran, Haiti, Indonesia, and Pakistan. Prehosp Disaster Med. 2008;23(2):144-151; discussion 152-153. doi:10.1017/s1049023x00005768
9. Amat Camacho N, Karki K, Subedi S, et al. International emergency medical teams in the aftermath of the 2015 Nepal earthquake. Prehosp Disaster Med. 2019;34(3):260-264. doi:10.1017/s1049023x19004291
10. Brolin K, Hawajri O, von Schreeb J. Foreign medical teams in the Philippines after Typhoon Haiyan 2013—who were they, when did they arrive and what did they do? PLoS Curr. 2015;7. doi:10.1371/currents.dis.0cadd59590724486bffe9a0340b3e718
11. Redmond AD, Mardel S, Taithe B, et al. A qualitative and quantitative study of the surgical and rehabilitation response to the earthquake in Haiti, January 2010. Prehosp Disaster Med. 2011;26(6):449-456. doi:10.1017/s1049023x12000088
12. Gerdin M, Wladis A, von Schreeb J. Foreign field hospitals after the 2010 Haiti earthquake: how good were we? Emerg Med J. 2013;30(1):e8. doi:10.1136/emermed-2011-200717
13. Wolff E, Shankiti I, Salio F, et al. The response by international emergency medical teams following the Beirut Harbor explosion in 2020—who were they, when did they arrive, what did they do, and were they needed? Prehosp Disaster Med. 2022;37(4):529-534. doi:10.1017/s1049023x22000784
14. Emergency Medical Team Coordination Cell. Situation Report. World Health Organization. 2024. Accessed July 7, 2025. https://extranet.who.int/emt/sites/default/files/EMTCC_SItrep.pdf
15. World Health Organization. Emergency Medical Team Exit Report. 2024. Accessed July 7, 2025. https://extranet.who.int/emt/forms
16. Jafar AJ, Norton I, Lecky F, et al. A literature review of medical record keeping by foreign medical teams in sudden onset disasters. Prehosp Disaster Med. 2015;30(2):216-222. doi:10.1017/s1049023x15000102
17. Kwak YH, Shin SD, Kim KS, et al. Experience of a Korean disaster medical assistance team in Sri Lanka after the South Asia tsunami. J Korean Med Sci. 2006;21(1):143-150. doi:10.3346/jkms.2006.21.1.143
18. Peiris S, Buenaventura J, Zagaria N. Is registration of foreign medical teams needed for disaster response? Findings from the response to Typhoon Haiyan. Western Pac Surveill Response J. 2015;6(Suppl 1):29-33. doi:10.5365/wpsar.2015.6.2.Hyn_014
19. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349-357. doi:10.1093/intqhc/mzm042
20. Polit DF, Beck CT. Essentials of Nursing Research: Appraising Evidence for Nursing Practice. 9th ed. Wolters Kluwer Health; 2018.
21. Meyer J. Using qualitative methods in health related action research. BMJ. 2000;320(7228):178-181. doi:10.1136/bmj.320.7228.178
22. United Nations Disaster Assessment and Coordination. Missions in 2022. 2023. Accessed July 7, 2025. https://www.unocha.org/sites/unocha/files/2022_UNDAC_deployments_0.pdf
23. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77-101. doi:10.1191/1478088706qp063oa
24. Boyatzis RE. Transforming Qualitative Information: Thematic Analysis and Code Development. SAGE; 1998.
25. Severin PN, Jacobson PA. Types of disasters. In: Goodhue CJ, Blake N, eds. Nursing Management of Pediatric Disaster. Springer International Publishing; 2020:85-197. doi:10.1007/978-3-030-43428-1_5
26. United Nations Office for the Coordination of Humanitarian Affairs. Global Humanitarian Overview 2024. 2023. Accessed July 7, 2025. https://www.un-ilibrary.org/content/books/9789213587089
27. Bartolucci A, Walter D, Redmond T. Comparative review on the cost-effectiveness analysis of relief teams’ deployment to sudden-onset disasters. Prehosp Disaster Med. 2019;34(4):415-421. doi:10.1017/S1049023X19004540
28. Shalash A, Abu-Rmeileh NME, Kelly D, Elmusharaf K. The need for standardised methods of data collection, sharing of data and agency coordination in humanitarian settings. BMJ Glob Health. 2022;7:e007249. doi:10.1136/bmjgh-2021-007249
29. Daftary RK, Cruz AT, Reaves EJ, et al. Making disaster care count: consensus formulation of measures of effectiveness for natural disaster acute phase medical response. Prehosp Disaster Med. 2014;29(5):461-467. doi:10.1017/s1049023x14000922
30. Aitken P, Leggat PA, Robertson AG, et al. Leadership and use of standards by Australian disaster medical assistance teams: results of a national survey of team members. Prehosp Disaster Med. 2012;27(2):142-147. doi:10.1017/s1049023x12000489
31. Dhungana N, Cornish F. Beyond performance and protocols: early responders’ experiences of multiple accountability demands in the response to the 2015 Nepal earthquake. Disasters. 2021;45(1):224-248. doi:10.1111/disa.12425
32. Doocy S, Lyles E, Tappis H, et al. Effectiveness of humanitarian health interventions: a systematic review of literature published between 2013 and 2021. BMJ Open. 2023;13(7):e068267. doi:10.1136/bmjopen-2022-068267
33. Rossignoli CM, Giani A, Di Iacovo F, et al. Enhancing participatory evaluation in a humanitarian aid project. Evaluation. 2017;23(2):134-151. doi:10.1177/1356389017700207
34. Pérouse de Montclos MA. Humanitarian action in developing countries: who evaluates who? Eval Program Plann. 2012;35(1):154-160. doi:10.1016/j.evalprogplan.2010.11.005
35. World Health Organization. Practical Guide to Evaluation for Programme Managers and Evaluation Staff. Accessed July 7, 2025. https://www.who.int/publications/i/item/WHO-DGO-EVL-2023.3
36. Merry SE. The Seductions of Quantification: Measuring Human Rights, Gender Violence, and Sex Trafficking. University of Chicago Press; 2016: chapter 1. doi:10.7208/chicago/9780226261317.001.0001
37. Laperrière H. Evaluations, international agencies and censorship: a field doer’s viewpoint. Evaluation. 2014;20(3):296-310. doi:10.1177/1356389014540705
38. Rubin M, Heuvelmans JH, Tomic-Cica A, et al. Health-related relief in the former Yugoslavia: needs, demands, and supplies. Prehosp Disaster Med. 2000;15(1):1-11. doi:10.1017/S1049023X00024870
39. Kubo T, Salio F, Koido Y. Breakthrough on health data collection in disasters—knowledge arises in Asia spread to the world. In: Chan EYY, Shaw R, eds. Public Health and Disasters: Health Emergency and Disaster Risk Management in Asia. Springer Singapore; 2020:299-312. doi:10.1007/978-981-15-0924-7_19
40. Jafar AJN. Disaster documentation: improving medical information-sharing in sudden-onset disaster scenarios. Third World Q. 2020;41(2):321-339. doi:10.1080/01436597.2019.1650263
41. World Health Organization. Emergency Medical Teams 2030 Strategy. 2023. Accessed July 7, 2025. https://www.who.int/publications/b/69538
42. World Health Organization. Guidance for After Action Review (AAR). 2019. Accessed July 7, 2025. https://www.who.int/publications/i/item/WHO-WHE-CPI-2019.4
43. Lane DA. Medical support to Sri Lanka in the wake of tsunamis: planning considerations and lessons learned. Mil Med. 2006;171(10 Suppl 1):19-23. doi:10.7205/milmed.171.1s.19
44. Pearce A, Mark P, Gray N, et al. Responding to the Boxing Day tsunami disaster in Aceh, Indonesia: Western and South Australian contributions. Emerg Med Australas. 2006;18(1):86-92. doi:10.1111/j.1742-6723.2006.00810.x
45. Redmond AD, Watson S, Nightingale P. The south Manchester Accident Rescue Team and the earthquake in Iran, June 1990. BMJ. 1991;302(6791):1521-1523. doi:10.1136/bmj.302.6791.1521
46. Hamilton ARL, Södergård B, Liverani M. The role of emergency medical teams in disaster response: a summary of the literature. Nat Hazards. 2022;110(3):1417-1426. doi:10.1007/s11069-021-05031-x
47. Yang J, Chen J, Liu H, et al. The Chinese national emergency medical rescue team response to the Sichuan Lushan earthquake. Nat Hazards. 2013;69(3):2263-2268. doi:10.1007/s11069-013-0758-z
48. Yamashita K, Natsukawa T, Kubo T, et al. Vulnerability of pregnant women after a disaster: experiences after the Kumamoto Earthquake in Japan. Prehosp Disaster Med. 2019;34(5):569-571. doi:10.1017/s1049023x1900476x
49. Savage E, Christian MD, Smith S, et al. The Canadian Armed Forces medical response to Typhoon Haiyan. Can J Surg. 2015;58(3 Suppl 3):S146-S152. doi:10.1503/cjs.013514
50. Chauhan A, Chopra BK. Deployment of Medical Relief Teams of the Indian Army in the aftermath of the Nepal Earthquake: lessons learned. Disaster Med Public Health Prep. 2017;11(3):394-398. doi:10.1017/dmp.2016.146
51. Kuday AD, Özcan T, Çalışkan C, et al. Challenges faced by medical rescue teams during disaster response: a systematic review study. Disaster Med Public Health Prep. 2023;17:e548. doi:10.1017/dmp.2023.217
52. Arbon P. Applying lessons learned to the Haiti Earthquake response. Australas Emerg Care. 2010;13(1):4-6. doi:10.1016/j.aenj.2010.02.003
53. McPherson M, Counahan M, Hall JL. Responding to Typhoon Haiyan in the Philippines. Western Pac Surveill Response J. 2015;6(Suppl 1):1-4. doi:10.5365/wpsar.2015.6.4.Hyn_026
54. International Science Council. Picturing the future of complex, cascading climate risks. 2021. Accessed July 7, 2025. https://council.science/blog/picturing-the-future-of-complex-cascading-climate-risks/
55. Casey ST, Noste E, Cook AT, et al. Localizing health emergency preparedness and response: emergency medical team development and operations in Pacific island countries and areas. Western Pac Surveill Response J. 2023;14(6):4. doi:10.5365/wpsar.2023.14.6.1021

Table 1. Demographics of participants
