
Rapid qualitative evidence syntheses (rQES) in health technology assessment: experiences, challenges, and lessons

Published online by Cambridge University Press: 09 October 2020

Umair Majid*
Affiliation:
Institute of Health Policy, Management, and Evaluation, University of Toronto, Ontario, Canada; Division of Clinical Decision-Making and Health Care, Toronto General Hospital Research Institute, University Health Network, Toronto, Ontario, Canada
Laura Weeks
Affiliation:
Canadian Agency for Drugs and Technologies in Health (CADTH), Ottawa, Ontario, Canada
Author for correspondence: Umair Majid, E-mail: umair.majid@mail.utoronto.ca

Abstract

Healthcare decision makers are increasingly demanding that health technology assessment (HTA) be patient focused and consider data about patients' perspectives on, and experiences with, health technologies in their everyday lives. Such data are typically generated through qualitative research, and in HTA the typical approach is to synthesize primary qualitative research through the conduct of a qualitative evidence synthesis (QES). Abbreviated HTA timelines often do not allow for the full 6–12 months it may take to complete a QES, which has prompted the Canadian Agency for Drugs and Technologies in Health (CADTH) to explore the concept of "rapid qualitative evidence synthesis" (rQES). In this paper, we describe our experiences conducting three rQES at CADTH and reflect on the challenges faced, successes, and lessons learned. Given the limited methodological guidance available for this work, our aim is to provide insight for researchers who may contemplate rQES. We suggest several lessons, including strategies to iteratively develop research questions and searches for eligible studies, the use of search filters and limits, and the use of a single reviewer experienced in qualitative research throughout the review process. We acknowledge that there is room for debate, though we believe rQES is a laudable goal and that it is possible to produce a quality, relevant, and useful product, even under restricted timelines. That said, it is vital to recognize what is lost in the name of rapidity. We intend our paper to advance the necessary debate about when rQES may, and may not, be appropriate, and to enable productive discussions around methodological development.

Type
Article Commentary
Creative Commons
CC BY-NC-ND 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
Copyright © The Author(s), 2020. Published by Cambridge University Press

Background

To better understand the value of health interventions or technologies, decision makers increasingly want to understand their feasibility and acceptability to patients, and whether they have an impact on patient-important outcomes (Reference Lewin and Glenton1). In the field of health technology assessment (HTA), it is recognized that data about patients' perspectives and experiences help ensure HTA remains patient focused, and additionally aid the interpretation of economic and clinical data through consideration of how patients use health technologies in their everyday lives and how they interact with the healthcare system. In this way, HTA decisions can be informed by the values, beliefs, perspectives, experiences, and preferences of the patients who are the intended users of a technology.

Related data are most commonly generated from the exploration, description, and examination of patients' perspectives and experiences using the methods of qualitative research. In HTA, the typical approach is to conduct rigorous syntheses of primary qualitative research—referred to as qualitative evidence syntheses (QES) (Reference Lewin, Booth, Glenton, Munthe-Kaas, Rashidian and Wainwright2)—that document patients' perspectives and experiences surrounding health technologies of interest. QES typically take a variant form of the traditional systematic review design used to assess intervention effects, with methodological changes made to account for qualitative rather than quantitative data. Depending on the specific methodology, the aim of a QES is either to describe and aggregate findings across multiple primary studies, or to synthesize findings to produce a new, integrative interpretation (Reference Toye, Seers, Allcock, Briggs, Carr and Barker3). QES are becoming more common, as evidenced by the near doubling of published QES over the past two decades (Reference Hannes and Macaitis4), likely indicating an emerging need, greater capacity, and increased interest in engaging in QES work.

Since 2016, for example, within the Medical Device and Clinical Interventions portfolio at the Canadian Agency for Drugs and Technologies in Health (CADTH), a QES to explore patients' perspectives and experiences has been a typical component of most HTAs. These QES involve a peer-reviewed systematic search of multiple relevant databases, followed by screening, selection, data extraction, critical appraisal, and data synthesis by at least two reviewers. While these QES can take between 6 and 12 months to produce, CADTH has received increasing requests to conduct QES under shorter timelines to meet a range of decision-making needs. Such requests have led CADTH to explore the concept of "rapid qualitative evidence synthesis" (rQES), building on its long-established Rapid Response Service, which typically addresses questions related to clinical benefits, clinical harms, and cost-effectiveness (5).

While a sizable amount of guidance is available for rapid reviews of clinical research (e.g., Reference Tricco, Langlois and Straus6), an immediate challenge is the limited guidance on how to conduct rQES. A recent scoping review found increasing examples of, but no methodological guidance on, conducting rQES (Reference Campbell, Weeks, Booth, Kaunelis and Smith7). Since the publication of this scoping review, examples of rQES have continued to be published in both HTA and non-HTA contexts (e.g., 8;9). Further, Healthcare Improvement Scotland has released a guide for conducting rQES in the context of HTA, indicating increasing interest within the HTA community in producing this work within shorter timelines (10).

While not, to our knowledge, explored conceptually in depth, it stands to reason that the rapid synthesis of qualitative literature requires a different approach than the rapid synthesis of clinical literature, given fundamental differences in epistemology. It is well acknowledged that QES researchers need to dedicate long periods of time to investigating themes, exploring unanticipated ideas, problematizing theoretical directions, and comparing and contrasting findings across participants, studies, and contexts. As with rapid reviews of clinical research, abbreviated timelines are therefore expected to be in tension with the ability to conduct comprehensive and fully rigorous work. What is unclear, however, is whether the methodological decisions typically applied within rapid reviews of clinical research have similar implications for the credibility of rQES, and similar implications for decision making.

With limited guidance on how to conduct rQES, and pressure to respond to requests for decision support under short timelines, CADTH has, for some rQES, slightly modified the processes and methods of its standard Rapid Response Service (11). The CADTH experience has thus given rise to an opportunity to explore how certain procedural and methodological decisions within CADTH rapid responses may affect the production and rigor of rQES. We examine our experiences conducting three rQES at CADTH and, through those examples, describe our reflections on the challenges faced, the successes, and the lessons learned, as a means of providing some guidance to researchers who may consider conducting rQES and thereby enhancing the quality of any resultant rQES. We emphasize that there is no one definition of, or approach to, conducting rQES. In this article, we describe an approach used at CADTH for three projects and reflect on it to generate lessons learned. Our goal is not to provide a comprehensive guide to conducting rQES; this type of evidence synthesis is, in our opinion, too nascent to allow for such guidance, given the variety of time and resource constraints and decision-making needs rQES may be intended to support. Rather, we explore the challenges we have faced in making certain methodological decisions in rQES and share these by way of lessons that CADTH has taken forward for the conduct of future rQES.

We first describe the three rQES case studies we selected to anchor our reflections and experiences, and use these case studies to draw common lessons across multiple topics and policy contexts. We then discuss our experiences, reflections, and lessons learned, to aid researchers who may conduct rQES in HTA. We conclude by commenting on the limitations of this paper and inviting discussion to further explore best practices for this nascent field.

Case studies

We anchor our reflections in three rQES conducted by the same author, which allowed us to compare methodological and procedural choices across different topics and contexts. As with all CADTH rapid response reports, the research questions were developed in response to an active request from a Canadian decision maker. Once a request is received by CADTH, research questions are refined through a collaborative process involving one or more phone calls and emails as required. The refinement process is intended to clearly establish the policy or practice issue the decision maker is facing and to develop appropriate research questions to help address that issue. The topics and research questions were as follows:

  • Cardiac monitoring: How do patients experience, make decisions around, and live with outpatient cardiac monitors for the diagnosis of stroke, atrial fibrillation, and/or heart failure? (Reference Majid and Visintini12)

  • Prescription drug monitoring programs: How do healthcare providers who prescribe or dispense monitored drugs use prescription drug monitoring programs? (Reference Majid and Frey13)

  • Opioid agonist treatments: How do people with opioid use disorders, and their healthcare providers, understand, communicate, and make decisions related to opioid agonist treatments? (Reference Majid and Loshak14)

The full rapid response reports describing the context, methods, and results are available on CADTH's Web site (www.cadth.ca). Once research questions were refined in collaboration with decision makers, we (independent of the decision maker) identified congruent selection criteria and conducted literature searches with a qualitative search filter on a combination of the following databases: PubMed, Medline, CINAHL, the Cochrane Library, and the University of York Centre for Reviews and Dissemination (CRD) databases. In addition, the search was limited to the past 5 years (cardiac monitoring) or 10 years (prescription drug monitoring programs and opioid agonist treatments) and to studies published in English. A single author (UM), with experience in qualitative research and with guidance from LW, screened all retrieved citations against the selection criteria in two stages—title and abstract screening, then full-text screening—to identify a set of included studies (nine for cardiac monitoring, eighteen for prescription drug monitoring programs, and twenty-nine for opioid agonist treatments). Quality appraisal was conducted by UM using the Quality of Reporting Tool (QuaRT) as a guide (Reference Carroll, Booth and Cooper15), followed by extraction of descriptive study details. UM used a qualitative meta-synthesis approach to guide data analysis. Decision makers were engaged again when a final report, developed using a standardized CADTH template, was delivered to them via email following an internal review, reference check, and formatting process. The methodological decisions that deviated from a standard QES were therefore: searching fewer databases, applying date and language limits to the search strategy, using an abbreviated quality appraisal tool, and using a single reviewer for the conduct of screening, appraisal, extraction, and analysis. In total, each rQES was conducted over the course of 4 weeks, taking approximately 50 h of reviewer time after the literature search was executed. Table 1 summarizes the differences between a typical QES approach and how we conducted rQES at CADTH for the three case studies.

Table 1. Summary of Differences in a Typical QES and Three rQES Conducted at CADTH

a These steps refer to a typical qualitative meta-synthesis approach (e.g., Reference Sandelowski and Barroso16), which are followed for full QES at CADTH when timelines allow, acknowledging there is wide variation in approaches based on epistemology, resources, audience, and type of data (Reference Booth, Noyes, Flemming, Gerhardus, Wahlster and van der Wilt17).

b These experiences represent what we have done in the three rQES at CADTH.

We engaged in focused discussion, critique, and debate based on our mutual experiences conducting the three rQES. Debate occurred through email and focused telephone conversations, and included reflection on memos and notes taken throughout the conduct of the three rQES. For each step described in Table 1, we considered the distinct research questions, the relevant and eligible literature, decision-maker goals and needs, and the final product of each of the three rQES, to draw out our perceptions of what went well and what did not, with an eye to sharing reflections and lessons we will take forward to the conduct of future rQES.

Lessons and experiences conducting rapid qualitative evidence syntheses

A summary of the experiences, perspectives, and lessons outlined in this paper is provided in Table 2.

Table 2. Summary of Experiences, Perspectives, and Lessons from Conducting Three rQES at CADTH

Primary study identification

Formulating a research question

Qualitative research is emergent and iterative, and we believe rQES should maintain these characteristics. In our experience, iteration at the research question formulation stage is important to ensure that enough literature is retrieved to respond to a research question, but not so much that the volume becomes unmanageable in a rapid context. Further, qualitative research questions should reflect the nuances of the retrieved body of literature. For example, our initial research questions for the cardiac monitoring rQES were constrained to a particular intervention (i.e., implantable cardiac monitors) and one population (i.e., patients with cryptogenic stroke). After searching the literature, however, we determined that few studies (i.e., three) looked at this population and intervention specifically. Given the small number of studies available to inform the research question, we considered which other medical conditions might involve experiences similar to monitoring for cryptogenic stroke. Through discussions with decision makers, we consequently expanded the population scope to include heart failure and atrial fibrillation. With this broadened focus, we increased the total number of included studies from three to nine, thereby incorporating wider dimensions of the phenomenon and ensuring that the HTA decision was supported by more relevant evidence.

Conversely, our experience with the opioid agonist treatment rQES required us to narrow the focus through this iterative process, to ensure the included literature was manageable in the rapid context. An initial search identified close to 100 eligible articles, which we deemed impossible for a single reviewer to synthesize appropriately in 5 weeks. As such, we narrowed the focus to issues related to access and adherence, which were the primary issues of interest for the decision maker. At the same time, we acknowledged in the report that narrowing the focus in this way may have removed important insight on the topic that could have informed policy perspectives. We commented on unexplored directions in a limitations section, and in an appendix listed citations to each potentially relevant article that was not included in the analysis, so that decision makers retained the opportunity to explore these areas to better inform their policy decisions. In both cases, building the research question through an iterative process helped ensure that a relevant and manageable data set was available to inform the analysis.

Identifying relevant research to answer the research questions

Generally, database searching aims to retrieve literature that comprehensively represents the body of evidence relevant to a research question. In QES, the methods and approaches to literature searching mirror those employed in quantitative evidence syntheses (Reference Tong, Flemming, McInnes, Oliver and Craig18), with some distinctions in terms of which databases are searched and how the search strategy is constructed. These distinctions arise from differences in how quantitative and qualitative research are indexed and in what databases. In particular, qualitative research studies have been found to be poorly indexed in databases (Reference DeJean, Giacomini, Simeonov and Smith19), and there is limited empirical guidance on retrieving and reporting database searching for QES (Reference Booth20), which complicates the search for and retrieval of relevant studies in a comprehensive and representative manner.

Due to the poor indexing of qualitative research, database searches may retrieve an unmanageable number of citations. In addition, search filters for qualitative research may not have adequate specificity, which further increases the time reviewers need to screen citations (Reference Shaw, Booth, Sutton, Miller, Smith, Young and Dixon-Woods21). As such, there is a need to balance the intended comprehensiveness of database searching with the time constraints of any given QES, and in particular of rQES. In our rQES work, we have adopted a more rapid approach to database searching that includes limiting the number of databases searched, the use of date limits (i.e., the last 5 or 10 years), the use of language limits (i.e., English only), and the use of a CADTH qualitative search filter. In our experience, approximately 1,000 hits is the most that can be screened for an rQES within a 5-week time frame. Again, iteration is important when developing a search strategy: moving iteratively between question formulation and literature searching can help ensure a relevant and manageable body of included literature. For topics where relevant qualitative research may be limited (e.g., cardiac monitoring), more liberal limits may be appropriate, for example, a 10- versus 5-year date limit. For more intensely researched topics (e.g., opioid agonist treatments), narrower limits—such as a 5-year date limit and country limits—may be applied.
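To make this concrete, the following is a minimal sketch, in Python with Biopython's Entrez module, of how a date-limited, English-only PubMed search with a simple qualitative filter might be scripted. The topic string and filter terms are illustrative placeholders only; they are not the CADTH qualitative search filter, and in practice a Research Information Scientist would design the actual strategy.

```python
# Minimal sketch of a date- and language-limited PubMed search with a
# simple qualitative-study filter. Illustrative only: the filter terms
# below are common qualitative-research keywords, NOT the CADTH filter.
from Bio import Entrez  # pip install biopython

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

topic = '("opioid agonist treatment" OR buprenorphine OR methadone)'
qualitative_filter = (
    '(qualitative[Title/Abstract] OR interview*[Title/Abstract] '
    'OR "focus group*"[Title/Abstract] OR ethnograph*[Title/Abstract])'
)
query = f"{topic} AND {qualitative_filter} AND english[Language]"

# 10-year date limit, mirroring the broader limit used for heavily
# researched topics; a 5-year limit would simply raise mindate.
handle = Entrez.esearch(
    db="pubmed", term=query,
    datetype="pdat", mindate="2010", maxdate="2019",
    retmax=1000,  # ~1,000 hits is the manageable screening ceiling noted above
)
result = Entrez.read(handle)
handle.close()

print(f"{result['Count']} records matched; retrieved {len(result['IdList'])} PMIDs")
```

Inspecting the hit count at this stage supports the iteration described above: if the count is far above the manageable ceiling, the topic or limits can be narrowed before screening begins.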

Screening of identified studies

Initial and full-text screening stages

In QES, at least two reviewers perform title/abstract and full-text screening and verify each other's screening results. Multiple-reviewer screening is the most widely adopted practice in evidence syntheses because it reduces the risk of missing relevant papers (Reference Waffenschmidt, Knelangen, Sieben, Bühn and Pieper22). However, it requires at least twice the amount of time and attention, which may not be available within rQES timelines.

In each of our rQES, UM screened titles/abstracts and full texts without verification by another reviewer. This single-screener approach substantially reduced the time needed to screen and select eligible articles. By reducing the time spent on these activities, more time became available for analyzing, writing, and reporting the findings in a way that is useful and appropriate for the HTA questions.

Due to the time constraints of rQES, a single-screener approach appears to be appropriate. However, for the purposes of transparency and accountability, we detailed the screening methods, procedures, and tools used in the final reports. If reviewers are concerned that their screening approach may result in missing relevant papers, we suggest two solutions. First, reviewers can employ pilot screening to explore a small subset of hits and modify the eligibility criteria accordingly to improve the sensitivity of their strategy. Second, reviewers may consider discussing their screening with another researcher without full verification; in this way, a researcher peripheral to the rQES can help navigate screening challenges.
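As an illustration of the first suggestion, the sketch below shows one way a pilot screen over a small random subset of hits might be structured, with each decision logged alongside a reason to support the transparency reporting described above. The file name, field names, and keyword criteria are all hypothetical, and the keyword cues serve only as a triage aid for refining eligibility criteria, not as a substitute for reviewer judgment.

```python
# Sketch of a pilot-screening pass with a logged decision trail.
# File, field names, and criteria are hypothetical illustrations.
import csv
import random

def screen(record, criteria):
    """Apply simple keyword-based eligibility cues to one citation.

    Returns (decision, reason) so every exclusion is documented,
    supporting the transparency reporting described above.
    """
    text = (record["title"] + " " + record["abstract"]).lower()
    if not any(term in text for term in criteria["population"]):
        return "exclude", "population terms absent"
    if not any(term in text for term in criteria["design"]):
        return "exclude", "no qualitative design cue"
    return "include", "meets pilot criteria"

criteria = {
    "population": ["opioid", "methadone", "buprenorphine"],
    "design": ["qualitative", "interview", "focus group"],
}

with open("retrieved_citations.csv", newline="") as f:
    records = list(csv.DictReader(f))
pilot = random.sample(records, min(50, len(records)))  # small pilot subset

with open("pilot_screen_log.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["id", "decision", "reason"])
    for rec in pilot:
        decision, reason = screen(rec, criteria)
        writer.writerow([rec["id"], decision, reason])
# Reviewing the logged exclusions informs refinements to the criteria
# before the full single-reviewer screen proceeds.
```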

We also suggest reviewers reflect on the implications of missing an eligible article through the screening process, and on what the impact may be on the rQES results and their utility for decision makers. To our knowledge, there has been no research on the implications of not including all relevant published qualitative studies in a final QES, in particular in relation to missing an article through the use of a single rather than a multiple screener approach. In our view, missing an eligible article may not have the same implications for HTA decision making in a QES as in a clinical or economic review, although the implications of both likely depend on many factors, such as study design and quality. The "correct" number of studies to analyze in a QES or rQES is informed by considerations of saturation with regard to the topic, population, and available literature. For this reason, we contend that if data saturation appears to have been achieved, concerns over the use of a single reviewer and the consequent potential of missing a relevant article may be at least somewhat relieved.

In our experience, reviewing a high number of studies did not always meaningfully change our interpretations, although it could offer greater depth and clarity for subgroups characterized by, for example, ethnicity, demography, and sexual orientation. As one example, in a large QES conducted at CADTH on the incentives and disincentives women face in participating in cervical cancer screening, initial database searching and screening identified 108 eligible articles. Regular alerts identified an additional 9 eligible articles, which were subsequently incorporated into the analysis, for a total of 117 included studies. Overall, the author team agreed that the nine additional studies found through the alert process did not substantially change the final themes, categories, and conclusions (Reference Majid, Kandasamy, Arora and Vanstone23), as saturation had largely already been achieved. The newly identified studies did, however, offer greater depth and clarity for specific subgroups of women characterized by ethnicity, demography, and sexual orientation.
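One way to operationalize this saturation reasoning is to track how many new codes each successive study contributes and flag when recently added studies contribute none, echoing the cervical cancer screening example above, where nine late-identified studies left the themes essentially unchanged. The sketch below uses hypothetical studies, codes, and threshold.

```python
# Sketch: a simple code-accrual check as a crude saturation signal.
# per_study_codes would come from the coding process; values here are
# hypothetical.
per_study_codes = {
    "study_01": {"stigma", "access", "trust"},
    "study_02": {"access", "cost"},
    "study_03": {"trust", "family support"},
    "study_04": {"stigma", "access"},   # contributes no new codes
    "study_05": {"cost", "trust"},      # contributes no new codes
}

seen, accrual = set(), []
for study, codes in per_study_codes.items():
    new = codes - seen
    seen |= codes
    accrual.append((study, len(new)))
    print(f"{study}: {len(new)} new code(s)")

# Crude signal: the last k studies added no new codes at all.
k = 2
if len(accrual) >= k and all(n == 0 for _, n in accrual[-k:]):
    print(f"No new codes in the last {k} studies; "
          "saturation may have been reached for this data set.")
```

A plateau in code accrual is, of course, only one indicator; depth for specific subgroups may still be thin even when no wholly new codes emerge.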

When considering the implications of missing an eligible article through the use of a single versus multiple screeners, it may help to contemplate the notion of data saturation, and whether a rich analysis of all relevant subgroups is important or a higher-level descriptive outline of the issues relevant to the policy question is sufficient. Notwithstanding, these reflections should be documented in the limitations section of the final rQES report for transparency.

Quality appraisal

Similar to screening, quality appraisal in QES usually involves two appraisers who verify each other's adjudications and discuss challenges, issues, and problems with the appraisal process. This step can be time-consuming, especially when there is a considerable number of articles to appraise. In our experience, we have shortened the time spent on quality appraisal by using one reviewer to appraise all eligible studies and by using an abbreviated tool to guide the appraisal process: the Quality of Reporting Tool (QuaRT) (Reference Carroll, Booth and Cooper15).

While a range of tools have been developed to guide quality appraisal of qualitative research, some more comprehensive and others briefer, we used QuaRT in the three rQES described here. QuaRT comprises four items that assess quality based on how authors have described the methodological details of four commonly reported study characteristics: research question and study design, selection of participants, methods of data collection, and methods of data analysis (Reference Carroll, Booth and Cooper15). We chose QuaRT for two reasons. First, it is a brief tool that saves time in a rapid review context. Second, it helps address the scholarly discussion concerning discrepancies between the methodological details reported in a qualitative manuscript and the study's actual design and conduct. Some scholars have asserted that quality appraisal of qualitative research may seek to evaluate the design and conduct of a study but in actuality assesses how the manuscript reports methodological details (Reference Majid and Vanstone24). Because QuaRT focuses on four methodological characteristics that, in our experience, are almost always present in qualitative manuscripts, we believe the tool reduces the ambiguity appraisers may experience during this step. In our final reports, we summarized the findings of quality appraisal narratively in the text, highlighting major points for consideration, and also depicted the strengths and limitations of each study in a table at the end of each report.
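As a sketch of how QuaRT-guided judgments might be recorded in a structured form that can feed both the narrative summary and the end-of-report table, consider the following. The four items are those named by Carroll, Booth, and Cooper; the example study, rating labels, and helper function are hypothetical.

```python
# Sketch: recording QuaRT-guided appraisals in a structured form so the
# narrative summary and the end-of-report table draw on the same data.
# The four items follow QuaRT; the study and judgments are hypothetical.
QUART_ITEMS = (
    "research question and study design",
    "selection of participants",
    "methods of data collection",
    "methods of data analysis",
)

def appraise(study_id, judgments, notes=""):
    """judgments: mapping of each QuaRT item to 'adequate'/'inadequate'."""
    missing = [item for item in QUART_ITEMS if item not in judgments]
    if missing:
        raise ValueError(f"unappraised items: {missing}")
    return {"study": study_id, **judgments, "notes": notes}

record = appraise(
    "Smith 2017",  # hypothetical study
    {
        "research question and study design": "adequate",
        "selection of participants": "inadequate",
        "methods of data collection": "adequate",
        "methods of data analysis": "adequate",
    },
    notes="Sampling approach not described beyond 'purposive'.",
)
print(record)
```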

It is important to note, however, that reviewers using QuaRT, or any tool for appraising qualitative research, should have some exposure to the methodological issues involved in conducting qualitative studies. In our context, UM had considerable experience with the principles of qualitative research design and conduct, which facilitated a rigorous appraisal of the included qualitative manuscripts. For reviewers with little or no experience in qualitative research, however, using a tool such as QuaRT may lead to considerable ambiguity about how quality influences the overall findings.

Inclusion, synthesis, and reporting

Extracting descriptive (study and patient characteristics) data

In this step, descriptive data about included studies and their participants are extracted from eligible articles. Descriptive data extraction is a step distinct from the synthesis of findings; it aims to clarify the data set and inform priorities for synthesis and for writing the findings. In this way, data extraction provides the foundation for data synthesis. For our three rQES, UM conducted descriptive data extraction alongside quality appraisal. Descriptive study details included first author, publication year, country where the research was conducted, study objectives, study design or analytic approach, study setting, sample size, inclusion criteria, and data collection method(s). Descriptive participant characteristics included age range in years, percentage of male participants, and the type of health technology used in each study. If a study did not report one or more of these details, we identified the characteristic as NR (not reported). Because a single reviewer used a standardized process for descriptive data extraction, and because this step was performed concurrently with quality appraisal, considerable time was saved that could then be used to ensure that the synthesized findings were well described and relevant to the policy questions. We suggest that the extraction of descriptive data be conducted by a single reviewer in rQES, as the time savings can be substantial with limited impact on overall review quality.
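A minimal sketch of such a standardized extraction template, written as a Python dataclass with "NR" defaults for unreported details, is shown below. The field names follow the list above; the example values are hypothetical.

```python
# Sketch: descriptive data extraction template with "NR" (not reported)
# defaults, mirroring the study and participant details listed above.
from dataclasses import dataclass, asdict

NR = "NR"  # marker for details a study does not report

@dataclass
class StudyRecord:
    first_author: str
    publication_year: str = NR
    country: str = NR
    objectives: str = NR
    design_or_analytic_approach: str = NR
    setting: str = NR
    sample_size: str = NR
    inclusion_criteria: str = NR
    data_collection_methods: str = NR
    age_range_years: str = NR
    percent_male: str = NR
    health_technology: str = NR

# Hypothetical example: unreported fields simply keep the NR default.
rec = StudyRecord(
    first_author="Smith",
    publication_year="2017",
    country="Canada",
    design_or_analytic_approach="grounded theory",
    data_collection_methods="semi-structured interviews",
    health_technology="implantable cardiac monitor",
)
print(asdict(rec))
```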

Synthesizing and writing the findings

In most qualitative projects, researchers work collaboratively through data extraction, analysis, and writing. Collaboration is especially pertinent for QES, where multiple investigators contribute through co-reflection and co-iteration. There are many benefits to collaborative research, such as researcher triangulation, improved depth of the findings, and the discovery of theoretical nuances through collective knowledge building. However, collaborative research takes more time and resources than may be available to researchers conducting rQES.

As we used only one reviewer in our three rQES, the relationship between the researcher and the research was different from that in a more typical QES or in primary qualitative research. In particular, the emphasis was less on the interactions, relationships, and communications between researchers, and more on the internal cognitive processes and knowledge structures of the single reviewer. UM used strategies such as reflexivity and personal verification to improve the rigor of his work. The researcher's role in this case was not to manage the various conceptualizations and interpretations that emerge from collaborative dialogue, but to pursue theoretical insights that came from individual cognitive processes. The advantage of this approach is that it takes less time, making it more appropriate for a rapid review context.

In our experience, synthesizing and writing happened somewhat concurrently and iteratively. For each rQES, UM first conducted initial coding on five included articles. UM then compiled a Word document containing all codes developed from the five articles, including a schema of these codes and a set of broad themes and their definitions. A discussion with a second researcher (LW) followed, to probe the emergent themes and seek alternative organizations and explanations before the schema was used to code the remaining studies. It is important to note that, to save time, UM examined each article only once; the Word document served as the primary resource for focused coding and for writing narrative summaries of themes.

For writing the findings, UM reflected on the research questions and data, and then determined which emergent theme was most relevant to the question and captured the most relevant data. UM then used this theme as the anchor for writing the entire results section and the descriptions of each theme. Through this approach, the reporting of findings remained aggregative, descriptive, and applied to the specific policy issue, which we have found to be the most practical and feasible in the context of rQES. Generally, we aim to discuss the most frequent codes and concepts present in the data, because they have been not only the easiest to capture but also often the most relevant to the research questions. We acknowledge this to be in contrast to the more interpretive nature of full QES, where conceptual power, theory, and moral implications may be important considerations.
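The frequency-led prioritization described here can be sketched simply: tally how often each code appears across the included studies and surface the most frequent as candidate anchors for the results narrative, to be weighed against relevance to the policy question. The studies and codes below are hypothetical.

```python
# Sketch: surfacing the most frequent codes across studies as candidate
# anchor themes for writing. Studies and codes are hypothetical.
from collections import Counter

codes_by_study = {
    "study_01": ["access barriers", "stigma", "trust in provider"],
    "study_02": ["access barriers", "dosing flexibility"],
    "study_03": ["stigma", "access barriers", "trust in provider"],
    "study_04": ["dosing flexibility", "access barriers"],
}

frequency = Counter(code for codes in codes_by_study.values() for code in codes)

# Most frequent codes first: candidate anchors for the narrative, to be
# weighed against relevance to the policy question before choosing a theme.
for code, count in frequency.most_common():
    print(f"{code}: {count} occurrence(s) across studies")
```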

Once a narrative description of the first theme was produced, UM repeated the same process for the other themes. As UM wrote the summary for each theme, he reviewed and modified the summaries of the other themes to better align codes and concepts and to improve the relevance of the findings. Due to the richness of qualitative data and the time constraints of rQES, we have not been able to include all possible themes or concepts in our final reports: choices must be made to respect rapid timelines. Our choices of which themes to prioritize were informed by their relevance to the overarching research question and policy questions, and by the quantity and quality of the available data. Upfront collaboration and discussion with the decision maker, allowing a thorough understanding of the decision the rQES is intended to support, help inform choices about which emergent themes to prioritize in reporting. In our rQES, we balanced keeping decision makers' policy questions at the forefront of the analysis with maintaining an open mind to unanticipated and emergent themes, ideas, and concepts that could enhance interpretation of the topic, as is typical of any rigorous qualitative research.

Limitations of this paper

The lessons and experiences outlined in this paper derive from limited experience, specifically the conduct of three distinct rQES at CADTH. A secondary objective of this paper, therefore, is to raise awareness and initiate discussion of appropriate methods for conducting rQES. We acknowledge that, given the range of purposes, timelines, and available expertise, there will not be one "right" way to conduct rQES. Instead, rQES methods should be adapted to the particular context in which a review is being undertaken, such that the most efficient and robust approach is followed within the given constraints. While this article draws on three case examples, as HTA researchers and methodologists we have published multiple QES and rQES for different HTA agencies (e.g., 24–30). Reflecting on the three case examples, we believe our collective experience can help guide researchers producing rQES in making decisions about the most appropriate approach to support decision making.

While outside the scope of this paper, an important consideration we did not address is when it may be inappropriate to conduct an rQES at all, acknowledging the inherent compromise in rigor and the resultant limitations. We take as a starting point that rQES, as we describe it here, is a laudable goal and that it is possible to produce a quality, relevant, and useful product even under restricted timelines. We suspect others might disagree and may instead believe that the compromises in rigor are too great, and that they do the discipline of qualitative research a disservice. We respect this perspective, although in an applied discipline such as HTA, CADTH has chosen to privilege providing analyses that respond to qualitative questions, while acknowledging the limitations, rather than leaving this important information out of the decision-making process. For those who may also wish to engage in rQES, we suggest some preliminary considerations to help guide decisions as to when rQES may or may not be appropriate. First, we suggest that rQES be considered only when there is a direct request from a decision maker; to ensure relevance and enhance quality in HTA, such a request is essential to avoid research waste and to inform the adaptation of rQES methods that balance rigor and timeliness. Further, we suggest that some aspects of the review process are essential to ensure a quality product. For example, the involvement of a Research Information Scientist or Medical Librarian is essential to optimize the sensitivity and specificity of a search strategy such that the records returned for screening are relevant to the topic. Additionally, and as described above, a reviewer with experience in qualitative research is essential to ensure a meaningful critical appraisal and a coherent analysis. Finally, rQES must be feasible. While timelines vary across published examples of rQES (Reference Campbell, Weeks, Booth, Kaunelis and Smith7), from less than 1 month to up to 6 months, we suggest that anything shorter than the CADTH timelines (4 weeks, with 50 h of reviewer time after search strategy execution) may be unfeasible. If less time is available, a different approach to addressing decision-maker needs may be required. Again, we invite discussion as to when rQES may or may not be appropriate, and suggest the considerations in this paper as a starting point.

Discussion

Qualitative research seeks depth, explanation, and exploration of phenomena, and typically takes considerable time for researchers to become appropriately immersed in the data and develop a credible, relevant, and rigorous analysis. Unfortunately, healthcare decision makers do not always have the time to wait for such analyses, and this situation has given rise to the rQES. Faced with the alternative of leaving important qualitative information out of decision making, CADTH has begun to explore appropriate research methods for producing rQES. In this article, we describe our experience conducting three rQES at CADTH as a means of identifying lessons learned and, ultimately, providing some guidance to researchers who may be contemplating rQES for their own purposes. We intend our reflections and lessons learned to add to the limited published guidance on the conduct of rQES. Our reflections here largely mirror the only published guidance of which we are aware, published by Healthcare Improvement Scotland in 2019, to which CADTH also contributed. Collectively, these highlight the need to tailor methodological choices to available time, resources, and expertise, acknowledging again that there is no one "right" way to conduct rQES. For example, both sets of guidance suggest engagement with the end user during the research question formulation stage, and tailoring the scope of a review to ensure a manageable set of useful and relevant studies. Recommended analytic approaches vary between the sets of guidance, however, which appears to reflect stronger traditions of framework synthesis in the United Kingdom and of meta-synthesis in Canada.

Overall, we reflect that an aggregative, descriptive, and applied approach to rQES is feasible and can be rigorous. Since HTA is an applied field, a descriptive approach may be appropriate because more attention can be placed on describing, identifying, and elaborating the implications of evidence for HTA questions, and less on the more interpretive dimensions of qualitative inquiry that may be less relevant to decision makers and more time-consuming to generate.

In all cases, it remains important to adapt the level of analysis to the unique needs of the research and policy questions at hand. It is equally vital to recognize what is lost and gained through the methodological decisions that must be made in the name of rapidity. There are real challenges to keeping rQES "qualitative" when review steps are conducted rapidly: reviewers may lose the ability to be iterative and emergent in a way that is grounded in the data. For a field that at times must argue for its relevance at decision-making tables, it is perhaps even more important for those conducting rQES to consider and transparently report the implications of methodological decisions, so as not to leave the impression that qualitative research can be done quickly without compromising rigor.

We are aware that for some researchers with years of experience conducting qualitative research, rQES may invoke feelings of discomfort and unease, and impressions of "low quality," because foundational steps that exemplify rigorous qualitative research are often skipped or shortened. Indeed, we experience the same discomfort, arising, for example, from the lack of opportunity to explore connections between concepts and themes, conduct comparative analyses, elaborate analytic ideas more fully, or explore alternative explanations for observations. As with the more established traditions of rapid clinical reviews, however, we have privileged providing decision makers with some evidence regarding patients' perspectives and experiences while acknowledging the shortcomings of the rapid approach, in particular a tradeoff between rigor and relevance. In response to increasing requests from decision makers, CADTH continues to produce rQES and continues to keep reflections on methodological rigor at the fore. As appropriate, different methods for searching (e.g., adapting search filters, cluster searching), screening (e.g., automation), quality appraisal (e.g., concept-based), and reporting are being or will be tried, to assess their impact on the balance of rigor and timeliness. We invite discussion of the methods used by others who have followed a rapid approach, and collaboration to explore best practices moving forward.

Acknowledgements

We would like to acknowledge the information scientists at CADTH who developed and conducted the search strategy for each of the three rQES discussed in this paper: Sarah Visintini, Hannah Loshak, and Nina Frey.

Financial support

CADTH receives funding from Canada's federal, provincial, and territorial governments, with the exception of Quebec. CADTH contributed in-kind support in relation to LW's time to prepare this paper. No further funding was received to prepare this paper.

Conflict of interest

Umair Majid was engaged as an independent contractor responsible for the conduct of the three rapid response reports described in this manuscript. Dr. Laura Weeks is employed as a Manager of Scientific Affairs at CADTH.

References

Lewin S, Glenton C. Are we entering a new era for qualitative research? Using qualitative evidence to support guidance and guideline development by the World Health Organization. Int J Equity Health. 2018;17:126.
Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: Introduction to the series. Implement Sci. 2018;13(Suppl 1):2.
Toye F, Seers K, Allcock N, Briggs M, Carr E, Barker K. Meta-ethnography 25 years on: Challenges and insights for synthesising a large number of qualitative studies. BMC Med Res Methodol. 2014;14:114.
Hannes K, Macaitis K. A move to more systematic and transparent approaches in qualitative evidence synthesis: Update on a review of published papers. Qual Res. 2012;12:402–42.
Canadian Agency for Drugs and Technologies in Health (CADTH). About the Rapid Response Service. Programs and Services. 2019. Available from: https://www.cadth.ca/about-cadth/what-we-do/products-services/rapid-response-service (Accessed November 2, 2019).
Tricco AC, Langlois E, Straus SE, World Health Organization. Rapid reviews to strengthen health policy and systems: A practical guide. Geneva, Switzerland: World Health Organization; 2017.
Campbell F, Weeks L, Booth A, Kaunelis D, Smith A. A scoping review found increasing examples of rapid qualitative evidence syntheses and no methodological guidance. J Clin Epidemiol. 2019;115:160–71.
Houghton C, Meskell P, Delaney H, Smalle M, Glenton C, Booth A, et al. Barriers and facilitators to healthcare workers' adherence with infection prevention and control (IPC) guidelines for respiratory infectious diseases: A rapid qualitative evidence synthesis. Cochrane Database Syst Rev. 2020;4.
Smith A, Farrah K. Biopsy for adults with suspected skin cancer: A rapid qualitative review. Canadian Agency for Drugs and Technologies in Health (CADTH) Rapid Response Review. 2019. Available from: https://cadth.ca/sites/default/files/pdf/htis/2019/RC1215%20Perspectives%20on%20Biposy%20Final.pdf
NHS Scotland. A guide to conducting rapid qualitative evidence synthesis for health technology assessment. Healthcare Improvement Scotland. 2019. Available from: https://htai.org/wp-content/uploads/2019/11/Rapid-qualitative-evidence-synthesis-guide.pdf
Canadian Agency for Drugs and Technologies in Health (CADTH). Summary with critical appraisal process. 2015. Available from: https://www.cadth.ca/sites/default/files/external_rr_l2_l2_5_process.pdf (Accessed April 5, 2019).
Majid U, Visintini S. Patients' experiences with cardiac monitors for stroke: A rapid qualitative review. Canadian Agency for Drugs and Technologies in Health (CADTH) Rapid Response Review. 2018. Available from: https://cadth.ca/patients-experiences-cardiac-monitors-stroke-atrial-fibrillation-and-heart-failure-rapid-qualitative
Majid U, Frey N. The perspectives of prescribers and dispensers on prescription drug monitoring programs: A rapid qualitative review. Canadian Agency for Drugs and Technologies in Health (CADTH) Rapid Response Review. 2019. Available from: https://cadth.ca/prescription-drug-monitoring-programs-rapid-qualitative-review
Majid U, Loshak H. Patients' and providers' experiences with opioid agonist treatments for opioid use disorders: A rapid qualitative review. Canadian Agency for Drugs and Technologies in Health (CADTH) Rapid Response Review. 2019. Available from: https://cadth.ca/buprenorphine-opioid-use-disorders-rapid-qualitative-review-0
Carroll C, Booth A, Cooper K. A worked example of "best fit" framework synthesis: A systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11:29.
Sandelowski M, Barroso J. Creating metasummaries of qualitative findings. Nurs Res. 2003;52:226–33.
Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, et al. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches. J Clin Epidemiol. 2018;99:41–52.
Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12:181.
DeJean D, Giacomini M, Simeonov D, Smith A. Finding qualitative research evidence for health technology assessment. Qual Health Res. 2016;26:1307–17.
Booth A. Searching for qualitative research for inclusion in systematic reviews: A structured methodological review. Syst Rev. 2016;5:74.
Shaw RL, Booth A, Sutton AJ, Miller T, Smith JA, Young B, et al. Finding qualitative research: An evaluation of search strategies. BMC Med Res Methodol. 2004;4:5.
Waffenschmidt S, Knelangen M, Sieben W, Bühn S, Pieper D. Single screening versus conventional double screening for study selection in systematic reviews: A methodological systematic review. BMC Med Res Methodol. 2019;19:132.
Majid U, Kandasamy S, Arora N, Vanstone M, et al. HPV testing for primary cervical cancer screening: A health technology assessment—patients' perspectives and experiences review. Canadian Agency for Drugs and Technologies in Health (CADTH) HTA Review. 2019. p. 105–29. Available from: https://www.cadth.ca/sites/default/files/ou-tr/op0530-hpv-testing-for-pcc-report.pdf
Majid U, Vanstone M. Appraising qualitative research for evidence syntheses: A compendium of quality appraisal tools. Qual Health Res. 2018;28:2115–31.
Vanstone M, Cernat A, Majid U, Trivedi F, De Freitas C. Perspectives of pregnant people and clinicians on noninvasive prenatal testing: A systematic review and qualitative meta-synthesis. Ont Health Technol Assess Ser. 2019;19:1.
Kandasamy S, Khalid AF, Majid U, Vanstone M. Prostate cancer patient perspectives on the use of information in treatment decision-making: A systematic review and qualitative meta-synthesis. Ont Health Technol Assess Ser. 2017;17:1.
Cernat A, De Freitas C, Majid U, Higgins C, Vanstone M. Facilitating informed choice about non-invasive prenatal testing (NIPT): A systematic review and qualitative meta-synthesis of women's experiences. BMC Pregnancy Childbirth. 2019;19:15.
Weeks L, Garland S, Moulton K, Kaunelis, et al. Patient experience and preferences. In: DNA mismatch repair deficiency tumour testing for patients with colorectal cancer: A health technology assessment [Internet]. Ottawa, ON: Canadian Agency for Drugs and Technologies in Health (CADTH); 2016. Available from: http://www.ncbi.nlm.nih.gov/books/NBK384771/ PubMed PMID: 27631047.