Increasingly, health technology assessment (HTA) organizations are involving patients, members of the public, or both, in some aspect of their HTA processes (1–8). Patient and public involvement (PPI) includes a range of strategies used across the HTA and decision-making process with the goal of informing (e.g., information broadcasts, Web site), consulting (e.g., survey, focus group) or actively engaging with patients or members of the public for the purpose of research, policy, or program development (e.g., citizen jury, advisory committee participation) (9–12).
Although there may be increased awareness of PPI among HTA organizations, published evaluations of PPI initiatives remain relatively scarce (12). Where such evaluations do exist, most focus on evaluating the impact of PPI (1;2;4), as opposed to the process (13), and tend to be limited to a single HTA topic or program (1;2;4;14). In two surveys of member agencies of the International Network of Agencies for Health Technology Assessment (INAHTA) in 2005 (15) and 2010 (16), eight and six agencies, respectively, reported having evaluated their consumer involvement activities, although the definition of evaluation was broad and included limited activities such as noting the type of input, the number of submissions received, and the influence of input on HTA quality and relevance. The Patient and Citizen Involvement Group of Health Technology Assessment International (HTAi) leads an initiative to document and share good practice examples of PPI in HTA (17). This initiative has helped document a variety of strategies used across eleven international HTA programs and share perceptions of best practice. While it is possible that some descriptions are out of date, only three of the eleven programs that contributed good practice examples also reported evaluating their strategies or measuring their impact.
The lack of published evaluations of PPI initiatives in HTA limits the refinement of theory to guide best practices. Whom to include in HTA, how best to recruit, consult, involve, or engage them, and what support to provide remain largely unanswered questions (4;18–20). We conducted a survey of international HTA agencies to better understand whether and how HTA programs are evaluating their PPI strategies and with what results, as well as perceived facilitators and barriers to evaluation.
A questionnaire (Supplemental File 1) was developed and informed by studies on the impact of PPI on health and social care research, and their stakeholders (18;21;22). Questions sought information on: (i) HTA organizations, including the structure and jurisdiction; (ii) how patients and members of the public are involved in HTA processes (e.g., topic identification, appraising evidence) and HTA decision making (e.g., participation on committees, making recommendations); (iii) whether and how PPI strategies have been evaluated, and if so how the information was used and disseminated; (iv) lessons learned including facilitators and challenges to PPI evaluations; and (v) specific details about completed evaluations including the objectives, methods, and results.
In March 2016, we sent email invitations: (i) through the INAHTA Secretariat to its 52 members; (ii) to contacts within the eleven HTA organizations who responded to the HTAi initiative to collect good practice examples of PPI in HTA (one of which is not an INAHTA member); and, (iii) to seven personal contacts of the authors (one of which was not contacted through either of the two prior methods). In total, we directly invited fifty-four individual organizations to participate, and additionally pursued snowball sampling by encouraging participants to forward the questionnaire to their contacts who might be doing this type of work.
The invitation introduced the purpose of our survey, requested that only one questionnaire per organization be completed, and asked the recipient to forward the questionnaire to “the appropriate person” within their organization for completion. The questionnaire took approximately 20 to 30 minutes to complete and was hosted on the online SurveyMonkey platform (www.surveymonkey.com). Before distribution, a draft questionnaire was pilot tested by a staff member at the Canadian Agency for Drugs and Technologies in Health (CADTH) who was not involved in its development, and the resultant feedback was used to revise the questionnaire.
Potential respondents were given the option to request a structured telephone interview instead of completing the online questionnaire. During the two interviews that took place, a study investigator covered the same content as was in the online questionnaire, took detailed notes and developed a written account of the participant's responses, which became part of the same dataset as responses submitted online. Similarly, we asked online participants for permission to contact them to clarify any submitted responses, if necessary, and four participants were contacted for follow up.
Two investigators (J.P. and A.H.) reviewed all responses for clarity, completeness and analysis. Frequencies of responses were calculated for close-ended questions, and responses to open-ended questions were summarized narratively.
This survey did not require formal approval from a research ethics board, as the focus was on HTA organization procedures and not the individuals completing the questionnaire (23), although we followed ethical practices for survey research. We provided sufficient information about the purpose and process of the survey to enable an informed decision to participate or not; indicated that any information provided would be used in a peer-reviewed journal publication, such that submission of a completed questionnaire implied informed consent for that purpose; and explicitly informed participants that their information would be shared publicly and that their organization and related information would be identifiable.
We received fifteen completed questionnaires (15/54 = 27.8 percent response rate), among which twelve countries were represented. Two organizations each were from the United Kingdom (National Institute for Health and Care Excellence [NICE], Scottish Medicines Consortium [SMC]), the Netherlands (Zorginstituut Nederland [ZiNL], Netherlands Organisation for Health Research and Development [ZonMw]), and Taiwan (Center for Drug Evaluation [CDE], Clinical Effectiveness Group). The additional respondents were based in Canada (Canadian Agency for Drugs and Technologies in Health [CADTH]), Colombia (Instituto de Evaluación Tecnológica en Salud [IETS]), France (Haute Autorité de Santé [HAS]), Germany (Gemeinsamer Bundesausschuss [G-BA]), Italy (Agenzia Nazionale per i Servizi Sanitari Regionali [AGENAS]), Luxembourg (Cellule d'expertise médicale [CEM]), Poland (Agency for Health Technology Assessment in Poland [AHTAPol]), Romania (Center of Health Care Quality and Control), and Sweden (Swedish Agency for Health Technology Assessment and Assessment of Social Services [SBU]). Table 1 describes participating HTA organizations, their stated reasons for involving patients and members of the public in HTA, and details of when in the HTA process patients and members of the public are involved.
Table 1. Description of Participating HTA Organizations
Eleven respondents (73.3 percent) reported involving both patients and members of the public in the HTA process or HTA decision making, while three (20.0 percent) reported involving patients only. One organization (6.7 percent) reported not involving patients or members of the public and indicated PPI would be considered in future cases if a commissioned request required it. Organizations reported involving patients for a range of activities spanning both HTA processes and HTA decision making including, in order of the HTA process, participating in a working group or committee to provide opinions and perspectives (n = 10; 66.7 percent), identifying topics for assessment (n = 4; 26.7 percent), refining the scope of assessments (n = 7; 46.7 percent), identifying clinical outcomes (n = 5; 33.3 percent), reviewing protocols (n = 3; 20.0 percent), collecting data (n = 3; 20 percent), analyzing data (n = 1; 6.7 percent), writing reports (n = 1; 6.7 percent), reviewing draft reports (n = 7; 46.7 percent), appraising evidence (n = 5; 33.3 percent), making recommendations (n = 3; 20.0 percent), and helping to disseminate results (n = 8; 53.3 percent).
Members of the public were similarly reported to participate across the HTA process, including: participating in a working group or committee (n = 9; 60.0 percent), identifying topics for assessment (n = 3; 20.0 percent), refining the scope of assessments (n = 4; 26.7 percent), identifying clinical outcomes (n = 2; 13.3 percent), reviewing research protocols (n = 3; 20.0 percent), analyzing data (n = 1; 6.7 percent), writing reports (n = 1; 6.7 percent), reviewing draft reports (n = 7; 46.7 percent), appraising evidence (n = 4; 26.7 percent), making recommendations (n = 4; 26.7 percent), and helping to disseminate results (n = 2; 13.3 percent).
Evaluation of Patient and Public Involvement
Types and Frequency of Evaluation Activities
As outlined in Table 1, of the fourteen respondents who conduct PPI activities, seven organizations (50.0 percent) responded that they evaluate, or have evaluated, those activities and two (14.3 percent) reported that they planned to start the evaluation process for upcoming HTAs. Of the remaining five organizations, one commented that they have not conducted any evaluation to date due to a lack of resources, but otherwise no specific reasons were reported for not evaluating PPI activities.
Of the seven organizations that have conducted some evaluation work, three (42.9 percent) reported having conducted evaluations of participant satisfaction, process evaluations, and impact evaluations; one (14.3 percent) reported conducting both process evaluations and impact evaluations; one (14.3 percent) reported conducting process evaluations only; and two (28.6 percent) reported conducting evaluations of participant satisfaction only. The frequency of evaluation varied across the respondents, the type of evaluation conducted, and the type of PPI activity.
Application and Dissemination of Evaluation Results
Five of the seven organizations who have conducted evaluation work responded to an open-ended question about how results of their evaluation activities are used within their programs. All five commented that results are used to inform changes to PPI activities with the overall goal of improving activities. Some examples are to ensure that patients’ perspectives are captured efficiently and reliably, identify education and training needs of participants (e.g., patients, patient groups, HTA staff), identify and address issues raised by participants in the process, and help to direct strategic priorities and plan for PPI activities for the upcoming year. HAS and IETS reported that the results of their evaluation activities, in this case a survey, are summarized and used to identify any issues or concerns. Relevant feedback is then generated and used to refine particular processes.
At NICE, evaluation results are used to guide the level and type of support provided by their Patient Involvement Program. At the SMC, continuous evaluation informs annual work planning for PPI activities under the direction of the PIN Advisory Group. As required, formal recommendations are prepared for the SMC Executive, who to date have enacted all such recommendations. CADTH commented that in addition to shaping process change over time, evaluation results have been used to illustrate the value of PPI activities both internally and externally.
Four organizations reported how evaluation results are shared with other organizations: all four reported sharing the results at conferences (n = 4; 57.1 percent), three reported publishing them on their Web site (n = 3; 42.9 percent), and two reported publishing them in a newsletter (n = 2; 28.6 percent).
Changes Made to Patient and Public Involvement Activities
Five organizations described specific changes made to their PPI activities as a result of their evaluations, with varying levels of detail provided. IETS commented generally that following their evaluation, channels of communication have been improved with patients gaining access to HTA results at various and appropriate stages. At HAS, conducting and reflecting on evaluation results motivated their intention to develop documents to clarify the intent of PPI activities, and guidance and tools to address specific issues, for example, desirable qualities for a patient representative and how to encourage participation among patient representatives on committees.
SMC identified numerous changes as a result of continuous evaluation, many focused on patient groups. Changes include streamlining the submission process, providing a document that summarizes background information on the technology under review, increased transparency around conflict of interest declarations, training and education, standardizing presentations at committee meetings, and changes to the embargoed decision-making process to give patient groups advance notice. In addition, SMC's evaluation work motivated them to increase public awareness of their HTA activities, develop a proactive approach to identify patient and caregiver representatives for each HTA, and establish Patient and Clinician Engagement mentors.
Past evaluations at NICE (24;25) prompted changes in response to a key finding that patient expert members found committee participation daunting. Accordingly, several changes to committee structure were implemented, including ensuring that support is provided before committee meetings, having the committee chair personally greet patient experts at meetings where possible, having a lay member sit next to them, and updating the patient expert submission form to improve clarity, provide guidance on completion, and distinguish forms for patient organizations from those for individual patients.
A 2012 evaluation of the patient input process into the Common Drug Review at CADTH (26) revealed that, at the time, CADTH's PPI program was equivalent to, or more evolved than, that of most other HTA programs, and also identified several gaps. Resultant recommendations included ensuring that stakeholders are aligned on the purpose, value, and credibility of incorporating patient input and that CADTH learn from and apply methods used by their international counterparts. Accordingly, CADTH implemented several changes including developing information sessions and awareness strategies, hiring a dedicated staff member, holding training sessions, and developing a process for individual patients and caregivers to provide input when a patient group does not exist. In addition, the CADTH Patient Community Liaison Forum was formed and annual stakeholder sessions were established to better understand patient groups’ needs.
Lessons Learned from Evaluation of Patient and Public Involvement Activities
Challenges Faced during the Evaluation Process
Six organizations responded to an open-ended question about challenges faced during evaluation. Several were identified, with many repeated across organizations. Identified challenges include achieving stakeholder buy-in, managing conflicting stakeholder opinions, managing expectations of patients and caregiver representatives, resistance to change and resource constraints for both the evaluation itself and implementation of any resultant recommendations. Additionally, variation in HTA processes across different programs (e.g., drugs, medical devices, medical procedures, rapid HTA, health economics) was identified as a challenge to developing an overall evaluation strategy. A further challenge results from the variation in the goals for PPI for different stakeholders, for example patient groups, researchers, or committee members. Each stakeholder group may experience HTA involvement differently and have different interpretations of success.
Facilitators to the Evaluation Process
Three respondents (CADTH, NICE, SMC) identified specific facilitators to evaluation, two of which focused on having sufficient resources to conduct evaluation activities and implement any recommendations, and on the support of senior leadership to embrace any changes. NICE stated that their evaluations work best when patients are involved on the evaluation team, including patient groups, patient experts and lay members, for example, to help design a questionnaire, interpret results, and define recommendations and implementation plans. Similarly at the SMC, the PIN Advisory Group, which includes representation from public partners and patient and caregiver organizations, helps to ensure understanding of current experiences with SMC processes and advise on improvement initiatives that are both feasible and acceptable.
Insights on Evaluation of Patient and Public Involvement Activities
NICE and HAS shared further insights through an open-ended question on the evaluation of PPI activities. HAS advocated for sharing among HTA organizations, for example of PPI satisfaction questionnaires, challenges faced during the evaluation process, and cases where the inclusion of patient perspectives was helpful. NICE recommended setting explicit objectives and developing an evaluation process at the same time as PPI activities are established. They also suggested that patients and patient groups be involved in designing and executing the evaluation process and applying the learnings in practice. Furthermore, NICE recommended that proposed changes be implemented using a phased approach, so that they remain manageable, and that expectations be managed as to what can be achieved or changed.
Future Plans for Evaluation of Patient and Public Involvement Activities
Six respondents described their future plans, each indicating an ongoing commitment to evaluation. At SMC, the PIN Advisory Group ensures a continuous focus on developing and strengthening PPI. NICE reported a current and ongoing evaluation of PPI across the organization (broader than HTA), and CADTH similarly reported a current evaluation as part of a requirement for formal evaluation every 5 years. In addition, HAS reported a current initiative to both develop and evaluate a process for PPI in rapid HTA. Finally, while ZonMw and SBU reported not yet having conducted formal evaluations, they are currently planning future evaluations of their PPI activities.
A primary goal of this survey was to identify approaches used by HTA organizations to evaluate their PPI initiatives, including perceived facilitators and barriers to evaluation. We obtained responses from fifteen organizations from twelve countries, representing a 27.8 percent response rate. Consistent with the findings of recent reviews (8;11;12), patients (14/15) and members of the public (10/15) are involved in a wide range of HTA processes conducted by the organizations in our sample.
The results reveal that evaluation of PPI activities is occurring across a small but diverse set of the responding organizations. Seven of the responding organizations reported having conducted evaluations, including evaluations of participant satisfaction, process evaluations, or impact evaluations. These results signal that HTA organizations are conducting evaluation activities more broadly than represented in the published literature, which has focused predominantly on evaluating and describing the impact of PPI (1;2;4;12). Due to our small sample size and low response rate (27.8 percent), however, the proportion of HTA organizations that both conduct and evaluate PPI activities remains unclear.
Approaches to evaluating PPI appear to vary widely, from extensive approaches, such as interviews or document reviews, to more streamlined ones, such as regular surveying of participants. Regardless of the intensity of the strategy, a focus on evaluation is particularly notable in light of the considerable workload of HTA organizations, with many competing deadlines and finite resources. It is encouraging in this context to observe priority being given to evaluation activities, which ultimately aim to enhance efficiency and effectiveness.
Importantly, respondents outlined many specific changes, spanning a wide range of issues, that followed from their evaluation activities. Conducting evaluations and implementing resultant recommendations appears to have positively affected both the experience of participating in HTA from the perspective of patients and members of the public and the quality of patient and caregiver input into HTA. In specific instances, evaluation activities were also reported to increase awareness of PPI initiatives in HTA, and thereby facilitate the proactive recruitment of future participants, and to illustrate the value of PPI both internally to an HTA organization and externally to stakeholders.
Through this survey, we were able to elicit insights and perspectives on the evaluation process, which should be of value to those planning this sort of work in the future. Specifically, challenges noted by the respondents included both general issues related to the evaluative process (e.g., achieving stakeholder buy-in, managing conflicting opinions, resistance to change) and general methodological issues (e.g., how to define success with wide ranging goals, how to compare PPI in the context of rapid versus full HTA). Facilitators included provision of adequate resources to both conduct evaluations as well as implement any recommendations, in addition to the support of senior leadership and participation of patients and members of the public in the evaluative process. While no respondent explicitly commented that methodological development needs to occur, this seems implied in the elements stated as challenging the evaluation process.
The reported facilitators and challenges with evaluation of PPI activities in HTA are not unlike those reported in the broader evaluation literature. A foundation of the evaluation literature relates to the development and documentation of program theories, for example through the use of logic models or theories of change. A program theory should outline the inputs, activities, outputs, and short- and long-term outcomes intended for a program. A logic model, for example, can help make explicit the expected relationships between these program elements (27). Logic models can be useful to help design programs, facilitate accountability to a stated plan, and also guide program evaluation. Critical to the development of a program theory is the involvement of all relevant stakeholders to ensure buy-in regarding inputs, activities, and program goals, in particular how program goals will be measured to define success. Stakeholder involvement should persist throughout the evaluation cycle, including data collection, analysis, and the development and implementation of recommendations.
Many of these concepts were mentioned implicitly or explicitly by respondents to our survey, although without reference to formal program evaluation theory or methods. For example, NICE recommended that organizations develop explicit objectives for PPI at the outset, and ideally also develop an evaluation plan at the same time as the PPI activities are established. NICE also remarked that their evaluations are more productive when patients or members of the public are part of the evaluation team. These reflections speak to the need to plan PPI programs and evaluation activities simultaneously, and also to be specific in terms of stated goals and how those goals should be measured. They also suggest that developing a greater understanding of evaluation theory and methods could be an important step forward for organizations engaged in PPI.
Of note, most organizations in our sample reported multiple reasons for implementing PPI activities. While it is widely acknowledged that PPI activities are grounded in a broad set of goals, including enhancing the relevance of assessments, strengthening the evidentiary contribution, complementing clinical and researcher expertise, and enhancing the openness and inclusiveness of the decision process (1;12), these broad-ranging objectives may complicate evaluation (20), as each objective would require its own set of anticipated and measurable outcomes.
Two challenges follow. First, broad agreement among stakeholders is required regarding how to evaluate whether often vaguely articulated objectives have been achieved. Second, tailored approaches might be needed to collect data against which to measure success for each distinct objective. What is important is that the goals for involving patients and members of the public are prespecified and measurable, that an evaluation plan gathers data targeted to those goals, and that there is consensus among relevant stakeholders regarding how to define success. The concept of evaluability assessment might also be relevant as a precursor to evaluation. Evaluability assessments could be used to ensure that PPI programs are ready for evaluation, with sufficient logic or theory to support committed resources and activities resulting in the achievement of measurable objectives, and with sufficient stakeholder buy-in to both conduct an evaluation and implement resultant recommendations (28).
The completeness of this survey is limited, as it is based on only fifteen responses from twelve countries, representing seven organizations that conduct and evaluate PPI activities. While we cannot be certain of the extent, it is likely that some organizations that evaluate their PPI activities did not respond to our questionnaire. Furthermore, the maximum number of responses received from any one country was two, and in most cases there was one response per country or organization. Given the possibility that different PPI or evaluation strategies are used by different programs or groups within a given country or organization, there is further reason to believe that our results do not reflect all experiences with evaluation. While we attempted to contact respondents to clarify submitted responses, in the end we spoke directly with seven of the fifteen. We therefore did not verify reported data from eight represented HTA organizations, which raises the potential for inaccurate or out-of-date data, especially given the evolving nature of PPI activities.
Finally, in our questioning relating to evaluation strategies, we did not specifically ask whether there were any differences in approach when evaluating patient involvement as compared with involving members of the public, primarily to avoid adding further questions to an already long questionnaire. It is likely that evaluation experience will broaden over the coming years, and we hope this report may serve to encourage evaluation among those who have not yet established a process.
Our survey identified international HTA organizations that have developed and conducted initiatives to evaluate their PPI activities. A range of strategies was described, spanning the evaluation of process, impact, and satisfaction, at varying levels of time and resource requirements. Few explicit references to evaluation theory were noted, although respondents appear to acknowledge established facilitators of program evaluation, including the need for explicit, measurable objectives and the inclusion of a range of stakeholders, including patients and members of the public, on evaluation teams.
There is a continued interest in the evaluation of PPI activities through HTAi, and a recently published book (29) focused on patient involvement in HTA contains a chapter on this topic, with a proposed evaluation framework. It will be important for HTA organizations to share their approaches and experiences with evaluation and perhaps to test this framework.
CONFLICTS OF INTEREST
The authors have nothing to disclose.
REFERENCES
J Abelson, Y Bombard, FP Gauvin, D Simeonov, S Boesveld. Assessing the impacts of citizen deliberations on the health technology process. Int J Technol Assess Health Care. 2013;29:282–289.
S Berglas, L Jutai, G MacKean, L Weeks. Patients' perspectives can be integrated in health technology assessments: An exploratory analysis of CADTH Common Drug Review. Res Involve Engag. 2016;7:2.
I Cleemput, W Christiaens, L Kohn, C Leonard, F Daue, A Denis. Acceptability and perceived benefits and risks of public and patient involvement in health care policy: A Delphi survey in Belgian stakeholders. Value Health. 2015;18:477–483.
MT Dipankui, MP Gagnon, M Desmartis, F Legare, F Piron, J Gagnon, et al. Evaluation of patient involvement in a health technology assessment. Int J Technol Assess Health Care. 2015;31:166–170.
S Wortley, J Wale, D Grainger, P Murphy. Moving beyond the rhetoric of patient input in health technology assessment deliberations. Aust Health Rev. 2016 [Epub ahead of print].
E Lopes, J Street, D Carter, T Merlin. Involving patients in health technology funding decisions: Stakeholder perspectives on processes used in Australia. Health Expect. 2016;19:331–344.
D Menon, T Stafinski. Role of patient and public participation in health technology assessment and coverage decisions. Expert Rev Pharmacoecon Outcomes Res. 2011;11:75–89.
D Hailey, S Werko, R Bakri, A Cameron, B Gohlen, S Myles, et al. Involvement of consumers in health technology assessment activities by INAHTA agencies. Int J Technol Assess Health Care. 2013;29:79–83.
FP Gauvin, J Abelson, M Giacomini, J Eyles, JN Lavis. “It all depends”: Conceptualizing public involvement in the context of health technology assessment agencies. Soc Sci Med. 2010;70:1518–1526.
MP Gagnon, M Desmartis, D Lepage-Savary, J Gagnon, M St-Pierre, M Rhainds, et al. Introducing patients' and the public's perspectives to health technology assessment: A systematic review of international experiences. Int J Technol Assess Health Care. 2011;27:31–42.
JA Whitty. An international survey of the public engagement practices of health technology assessment organizations. Value Health. 2013;16:155–163.
J Abelson, F Wagner, D DeJean, S Boesveld, FP Gauvin, S Bean, et al. Public and patient involvement in health technology assessment: A framework for action. Int J Technol Assess Health Care. 2016;32:256–264.
E Lopes, J Street, D Carter, T Merlin. Involving patients in health technology funding decisions: Stakeholder perspectives on processes used in Australia. Health Expect. 2016;19:331–344.
S Oliver, R Milne, J Bradburn, P Buchanan, L Kerridge, T Walley, et al. Involving consumers in a needs-led research programme: A pilot project. Health Expect. 2001;4:18–28.
D Hailey, M Nordwall. Survey on the involvement of consumers in health technology assessment programs. Int J Technol Assess Health Care. 2006;22:497–499.
S Staniszewska, J Brett, C Mockford, R Barber. The GRIPP checklist: Strengthening the quality of patient and public involvement reporting in research. Int J Technol Assess Health Care. 2011;27:391–399.
S Staniszewska. Patient and public involvement in health services and health research: A brief overview of evidence, policy and activity. J Res Nurs. 2009;14:295–298.
J Brett, S Staniszewska, C Mockford, S Herron-Marx, J Hughes, C Tysall, et al. Mapping the impact of patient and public involvement on health and social care research: A systematic review. Health Expect. 2014;17:637–650.
J Brett, S Staniszewska, C Mockford, S Herron-Marx, J Hughes, C Tysall, et al. A systematic review of the impact of patient and public involvement on service users, researchers and communities. Patient. 2014;7:387–395.
MQ Patton. Utilization-focused evaluation: The new century text. 3rd ed. Thousand Oaks, CA: Sage Publishing; 1997.
J Hare, T Guetterman. Evaluability assessment: Clarifying organizational support and data availability. J Multidiscip Eval. 2014;10:9–25.
K Facey, HP Hansen, ANV Single, eds. Patient involvement in health technology assessment. Singapore: Springer ADIS; 2017.