Patient and public involvement (PPI) in Health Technology Assessment (HTA) and health research has become increasingly common internationally. Interest and activity have grown, with projects in the European Union, Australia, Canada, and other countries focusing on different aspects of PPI (Reference Abma1;Reference Boote, Telford and Cooper3;Reference Coupland, Maher and Enriquez6;11;Reference McKenzie and Hanley14;18;Reference Shea, Santesso and Qualman20;Reference Staley21;24). There is general agreement on the need for more patient-focused HTA methods, and several HTA agencies and HTA researchers are reviewing ways to incorporate patients' or, more generally, the public's perspectives into their methods (Reference Gagnon, Desmartis and Lepage-Savary9). However, the need for evidence through robust evaluation has also been emphasized to convince a broader constituency of the HTA community about the impacts of PPI (Reference Gauvin, Abelson and Giacomini10;Reference Staniszewska22). In the United Kingdom, the Director General of NHS Research and Chief Medical Officer has recently stated that involvement should be the norm, not the exception, in health research (including HTA), although progress is still needed to implement this vision. The overall aims of involvement are to enhance the quality, relevance, and appropriateness of research and to contribute to the broader democratization of research, through participatory forms of involvement that encourage partnership in research (Reference Brett, Staniszewska and Mockford4;Reference Coupland, Maher and Enriquez6). Within the United Kingdom, considerable effort has been focused on developing an infrastructure to operationalize the policy commitment to PPI, through the work of organizations such as INVOLVE (12) and the Research Design Services (19), which enable researchers to embed patient and public involvement into their work. Together, the policy focus and the infrastructure in the United Kingdom and in other countries have created a supportive context in which PPI activity has flourished.
While PPI activity has continued to expand, there has been relatively little scrutiny of the difference it makes. We recently undertook two systematic reviews: PIRICOM (Patient and Public Involvement in Research: Impact, Conceptualisation, Outcomes and Measurements) (Reference Brett, Staniszewska and Mockford4), which examined the impact of patient and public involvement on health and social care research, and PAPIRIS (Patient And Public Involvement Review on the Impact on healthcare Services) (Reference Mockford, Staniszewska and Griffiths15), which focused on the impact of patient and public involvement on health service provision, evaluation, and delivery. Both reviews examined the conceptualization, definition, measurement, impact, and outcomes of patient and public involvement in their respective areas. The full results of each review are reported elsewhere (Reference Brett, Staniszewska and Mockford4;Reference Mockford, Staniszewska and Griffiths15). INVOLVE has also recently conducted a structured review of the impact of user involvement in research (Reference Staley21).
While the results of PIRICOM (Reference Brett, Staniszewska and Mockford4), PAPIRIS (Reference Mockford, Staniszewska and Griffiths15), and the INVOLVE study (Reference Staley21) have been helpful in identifying a range of impacts attributed to patient and public involvement, it has become clear that the underlying evidence base is relatively weak, primarily because of poor reporting, which makes it difficult to assess the impact of involvement. Problems with reporting in health and social care research are not unusual, so difficulties in reporting patient and public involvement research are not exceptional. In health research, there is evidence that many papers lack clarity, transparency, and completeness in describing how the authors actually carried out the research (Reference Begg, Cho and Eastwood2;Reference Moher, Hopewell and Schultz16;Reference Moher, Schulz and Altman17). Poor reporting can cause a range of difficulties. If readers do not have sufficient details concerning a study, they are left with an incomplete picture, which makes appraisal very difficult and has implications for judgments of reliability and the interpretation of results (Reference Moher, Hopewell and Schultz16;Reference Moher, Schulz and Altman17). In addition, there are moral and ethical imperatives for the good reporting of research which cannot be ignored. The lack of adequate reporting in studies prompted the creation of the EQUATOR Network, an international initiative that seeks to enhance the reliability and value of medical research by promoting transparent and accurate reporting of research studies (7;Reference Makela and Stein13). Recognition that randomized controlled trials were often reported poorly fuelled the development of the original CONSORT (CONsolidated Standards of Reporting Trials) statement in 1996, its revision 5 years later (Reference Moher, Schulz and Altman17), and the recently published update, CONSORT 2010 (Reference Moher, Hopewell and Schultz16). The most recent statement aims to assist authors in writing reports of randomized controlled trials, editors and peer reviewers in reviewing manuscripts for publication, and readers in critically appraising published articles. While the EQUATOR Network has been very helpful in enhancing the quality of research reporting, there is little similar guidance for researchers reporting patient and public involvement, beyond more general guidance for reporting the different types of qualitative and quantitative research and recently published guidelines for appraising the quality and impact of user involvement in research, which rely on good quality reporting (Reference Wright, Foster, Amir, Elliott and Wilson26). This paper addresses this deficit by reporting the findings of a follow-up study, which synthesized the key issues from the PIRICOM and PAPIRIS systematic reviews and proposed the GRIPP checklist (Guidance for Reporting Involvement of Patients and Public) to enhance the quality of PPI reporting, and so enable quality assessment, in HTA and in health research more broadly. The GRIPP checklist is the first attempt to comprehensively identify the important elements of good reporting for PPI, based on systematic reviews of the PPI evidence base.
The aim of the GRIPP checklist is to help researchers and service users improve the quality, content, detail, consistency, transparency, and completeness of their PPI reporting, thus strengthening the PPI evidence base and ultimately enabling a clearer understanding of what forms of PPI work, for whom, why, and in what circumstances.
The aim of this study was to develop the rationale and content of the GRIPP checklist.
METHODS
The PIRICOM (http://www.ukcrc.org/publications/reports/) and PAPIRIS studies both used the York Centre for Reviews and Dissemination Guidance (5) for undertaking systematic reviews. The PIRICOM study recruited three service users to the advisory board, who were involved in developing the study aims, analysis, interpretation, writing, and dissemination. An expert seminar with twenty-four service users and researchers in the field was held to assist with the interpretation of the evidence base. The full findings are reported elsewhere (Reference Brett, Staniszewska and Mockford4). The PAPIRIS study was also guided by an advisory group, consisting of twelve leading experts in patient and public involvement and systematic reviews, including two lay members. The advisory group assisted at all key stages, specifically the protocol, data retrieval, and results stages. The full findings are reported elsewhere (Reference Mockford, Staniszewska and Griffiths15).
As the PIRICOM and PAPIRIS reviews progressed, several issues relating to the poor quality of reporting were identified. A separate study, following on from PIRICOM and PAPIRIS, was undertaken, in which a narrative synthesis of each key issue was carried out, comparing the type and nature of the issues across both reviews, to enable an interrogation of each issue by S.S., J.B., C.M., and R.B. The following section describes these key issues and provides the underpinning rationale for the GRIPP checklist presented in Table 1. While the key issues are concerned with reporting, some have implications for how the future PPI evidence base should evolve, particularly in relation to the way in which PPI impact should be evaluated and measured (Reference Staniszewska22). These implications are briefly discussed in the paper.
Table 1. The GRIPP Checklist
Note. This checklist is designed for studies that have included some form of patient and public involvement in research. The aim of the GRIPP checklist is to assist authors in writing their PPI papers and reports, editors and peer reviewers in reviewing manuscripts for publication, and readers in critically appraising published articles and reports.
RESULTS
Conceptualizing or Theorizing PPI
Good reporting is ideally based on clear concepts and definitions. While there are some helpful definitions of involvement, the conceptualization or theorization of PPI has generally been poor. There have been some attempts to develop conceptual or theoretical frameworks, but there is no overall conceptual model of PPI impact that captures the essence of the concept and has been empirically tested. Such models can be very helpful because they can provide a blueprint for evaluation, identifying key areas for assessment. The often poor reporting of definitions and the lack of an agreed conceptual framework or model for PPI were problematic when assessing the conceptual equivalence of studies, that is, the extent to which they were examining the same concept (Reference Brett, Staniszewska and Mockford4;Reference Mockford, Staniszewska and Griffiths15). The lack of conceptual frameworks or models also made it difficult to assess the impact of PPI. Both the PIRICOM and PAPIRIS reviews adopted a broad approach, regarding impact as the reporting of any information concerned with the difference patient and public involvement has made to any aspect of research, researchers, patients and the public, users, organizations, or service delivery, development, or evaluation, to ensure all potentially relevant information was included. The lack of a clear conceptual or theoretical underpinning for the reporting of impact may explain why studies have adopted a variety of approaches, epitomizing the phrase “letting a thousand flowers bloom.” While this provides a fascinating insight into the richness and diversity of patient and public involvement, it hinders consistent and more formal evaluation of impact.
Reporting of Methods
Studies of patient and public involvement varied enormously in the data provided about the PPI activity, the level of PPI adopted (consultative, collaborative, or user-led), the stages at which the activity occurred (whether one stage or multiple stages), and the research design used. A key difficulty was that methods were often described in very little detail and inconsistently, with varying information reported across studies. It was rare to have a full description of participants, both service users and researchers. In addition, authors rarely identified the level or levels of involvement (consultation, collaboration, or user-led) they used, leaving it to the reader to form an assessment. The stages at which PPI occurred in a study were also poorly reported, and it was often unclear whether PPI had occurred at one point or throughout a study.
Content Validity of Studies
A key difficulty with many studies of PPI was that they rarely assessed the content validity of their investigation. Content validity is an assessment of the extent to which a study or an instrument captures all the relevant dimensions or aspects of a concept (Reference Streiner and Norman23). Ideally, the assessment of content validity would be based on some conceptual or theoretical understanding of a concept. For example, when Ware and Sherbourne (Reference Ware and Sherbourne25) developed the Short-Form 36 (SF-36) questionnaire, they stated that health comprised eight concepts: physical function, role physical, bodily pain, mental health, role emotional, social function, general health, and vitality. Thus, to ensure content validity, any generic patient-reported outcome measure (PROM) should measure all eight concepts. This conceptual framework provides guidance on which dimensions of health all generic patient-reported outcome measures should include, and offers a basis for the assessment of content validity. While content validity is a standard part of evaluating PROMs, the concept could also be transferred to studies of patient and public involvement. PPI studies rarely report the extent to which they have considered all the potentially relevant dimensions or aspects of PPI, which is particularly important when evaluating impact. Using the concept of content validity in the field of PPI could enable more consistent reporting of all relevant aspects of a study and so enhance future evaluation of the full spectrum of possible impacts.
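To illustrate how this idea might be operationalized, the minimal Python sketch below checks the coverage of a hypothetical PPI study against a candidate set of impact dimensions. The dimension names are illustrative assumptions drawn loosely from the broad groupings discussed later in this paper, not an agreed PPI framework.

```python
# Minimal sketch (not from the GRIPP paper): checking the "content validity"
# of a hypothetical PPI impact study by comparing the dimensions it reports
# against a candidate set of relevant dimensions. All dimension names here
# are illustrative assumptions, not an agreed framework.

CANDIDATE_DIMENSIONS = {
    "impact on research design",
    "impact on recruitment",
    "impact on analysis and interpretation",
    "impact on dissemination",
    "impact on service users",
    "impact on researchers",
}

def content_coverage(reported_dimensions: set[str]) -> float:
    """Return the proportion of candidate dimensions a study reports on."""
    covered = CANDIDATE_DIMENSIONS & reported_dimensions
    return len(covered) / len(CANDIDATE_DIMENSIONS)

# Example: a study that only reports impacts on recruitment and dissemination
study_reports = {"impact on recruitment", "impact on dissemination"}
print(f"Coverage: {content_coverage(study_reports):.0%}")  # -> Coverage: 33%
```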
Reporting of Context and Process
The PIRICOM review, in particular, through collaboration with patients and the public, identified the importance of context and process in the interpretation of PPI impact (Reference Brett, Staniszewska and Mockford4). Context refers to the conditions required for PPI to have an impact; for example, appropriate support and training, funding, positive attitudes toward PPI, and time allocation might be important in a particular situation. Process refers to the methods used to undertake the involvement (see Figure 1), such as the level of involvement and the stages of the research process where involvement occurs. While some studies did describe context and process information, in varying detail, it was rarely linked to any interpretation of impact. The importance of context and process suggests that PPI should be viewed as a complex intervention that requires multi-layered reporting.
Figure 1. The complexity of PPI impact evaluation.
The Status of Impact
While PIRICOM and PAPIRIS identified a wide range of studies that reported on various PPI activities, relatively few papers included an evaluation of the impact of involvement as a primary aim, possibly because studies focused on assessing the effectiveness of an intervention. This may indicate the low status accorded to PPI and the difference it makes and may explain why less emphasis has been placed on consistent reporting.
Locating the Evidence of Impact
In searching for and selecting papers for the PIRICOM and PAPIRIS reviews, it became apparent that there were some significant challenges in developing a sensitive and specific search strategy that identified relevant papers while excluding irrelevant ones. A key difficulty was the lack of MeSH (Medical Subject Headings) terms for PPI, which, if they existed, could offer an efficient way of finding relevant literature. Instead, authors have used a wide variety of keywords, often inconsistently, which limited the utility of keyword searching and made the development of a search string very complex. A further difficulty related to using title and abstract information as a way of identifying relevant studies, a common approach in reviewing and helpful in sorting through large numbers of studies. Sifting PPI studies by title and abstract was not a useful approach, as information about impact was not always mentioned in the title or abstract but could appear in another part of the paper, such as the results or discussion. As a result, entire papers had to be read for a robust assessment of relevance, which significantly increased the time needed to review papers. Such difficulties have important implications for future syntheses of evidence, as the same difficulties will remain unless more emphasis is placed on appropriate reporting in the title and abstract and on the appropriate use of keywords.
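The practical consequence of this keyword variability is that search strings have to be assembled from long lists of synonyms. The sketch below shows how such a Boolean query might be built; the term lists are purely illustrative and are not the search strategy used in PIRICOM or PAPIRIS.

```python
# Minimal sketch of how a Boolean search string for PPI might be assembled
# from synonym lists. The terms below are illustrative examples only, not
# the search strategy used in PIRICOM or PAPIRIS.

ppi_terms = [
    "patient and public involvement",
    "consumer involvement",
    "user involvement",
    "service user involvement",
    "lay involvement",
]
impact_terms = ["impact", "effect", "outcome", "influence"]

def or_block(terms):
    """Join quoted terms with OR, wrapped in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

search_string = f"{or_block(ppi_terms)} AND {or_block(impact_terms)}"
print(search_string)
# ("patient and public involvement" OR "consumer involvement" OR ...) AND ("impact" OR ...)
```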
Locating Impact Information in a Paper
In addition to difficulties locating and selecting relevant papers, the PIRICOM and PAPIRIS studies also highlighted important difficulties in locating information about impact within papers. Information about impact could appear anywhere in a paper, and there was no consistent place where authors reported impact that a reviewer could rely on during data extraction. Rarely was there a full and consistent appraisal of impact data, with information about impact appearing in all the appropriate sections of a paper (for example, as an aim, in the methods section, or in the results, discussion, and conclusion) to allow a full appraisal of the difference PPI made in the study. More commonly, information about impact appeared as a one-off comment that could occur anywhere in the paper. This inconsistency in reporting meant that papers had to be read in full for the purposes of data extraction, a very inefficient way of identifying and extracting relevant information, which normally relies on key data appearing in expected parts of the paper.
In addition to the difficulties of finding the evidence, there was a range of issues that inhibited the appraisal of information about impact and that relate to the nature of the PPI evidence base more broadly, including the poor conceptualization and theorization of impact. These issues need to be addressed as part of strengthening this evidence base, alongside the development of better reporting, and are briefly considered in the conclusion.
The Level of Detail About Impact
Both the PIRICOM and PAPIRIS reviews attempted to extract data on impact, or the difference PPI made to research or to a health service. Any form of data was sought, including qualitative descriptions or quantitative measurements. Of the studies that considered impact, most reported it using short descriptions. These were often very limited, lacking in detail, and without a consistent structure. Descriptions of impact usually took the form of one or two lines of text and were not always explicitly described as a PPI impact, meaning that the researcher had to judge whether the data represented impact. During data extraction, longer sections of text reporting impact were like “nuggets of gold.” In the absence of formal and structured forms of reporting, the most helpful types of report in the PAPIRIS study were the small number of case studies, often contained in longer reports, which were time-consuming to read and not always peer-reviewed.
While helpful in providing some insight and enabling the identification of broad groupings of impact (impact on research, researchers, service users, community, policy, funders, journals), overall the level of reporting was inadequate if the reader's aim was to understand properly the difference that patient and public involvement had made, to use this information to develop a more coherent conceptual or theoretical model of impact, or to use such information to develop a bank of items for the development of an instrument that includes all relevant dimensions of impact. In many ways, the nature of current patient and public involvement reporting could be compared with a scenario where a researcher reports the findings of a randomized controlled trial (RCT) by stating that “a particular intervention has worked and patients feel better,” and there is no further information. Such poor reporting would obviously impede a rigorous assessment of the effectiveness of an intervention. Such difficulties in RCT reporting prompted the development of CONSORT guidance to help improve the quality of reporting (Reference Moher, Hopewell and Schultz16;Reference Moher, Schulz and Altman17).
Positive and Negative Impacts
In reporting the impact of PPI, most studies report positive impacts, with negative impacts more rarely considered. This may reflect an implicit assumption that PPI is a good and worthwhile activity, or that negative impacts have been politically more difficult to report. To ensure content validity, papers should report both positive and negative impacts, and also evidence of no impact, to enable a comprehensive understanding of the full breadth of PPI impact in a particular study. Such a comprehensive approach to reporting positive and negative impacts is particularly important for the development of robust instruments to measure impact.
Quality of PPI
Current approaches to assessing the quality of research focus on evaluating the study design but do not usually include PPI as an element of quality. There was no formal way of evaluating the quality of PPI, or the quality of PPI impact reporting, when the PIRICOM and PAPIRIS studies were conducted. In many respects, this might reflect the lack of critical evaluation which pervades much of the PPI evidence base. More recently, guidance on evaluating the quality of PPI has emerged and may help strengthen quality assessment in this area (Reference Wright, Foster, Amir, Elliott and Wilson26). This guidance aims to help readers assess the quality of published studies, researchers to develop effective strategies for user engagement, and funding bodies to establish principles of effective user involvement. It relies on good quality reporting, which the GRIPP checklist will seek to encourage.
Capture and Measurement of PPI
The main way in which PPI impact is currently represented is through short descriptions. No standard formats exist for describing or capturing these impacts, so they tend to vary in content, structure, and presentation. While other areas, such as patient experiences or patient-reported outcome measures, have developed instruments which, with varying degrees of success, measure the concept of interest, PPI does not have a pool of robust, well-developed instruments to measure its impact (Reference Ware and Sherbourne25). Robust measurement of the extent of PPI impact could provide additional information enabling a greater understanding of what works, for whom, and in what circumstances. The application of psychometrically derived methods of measurement has much to offer PPI in developing robust instruments to measure impact, context, and process.
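As a purely illustrative sketch of what one such psychometric check might look like, the code below computes Cronbach's alpha, a standard measure of internal consistency, for invented responses to a hypothetical five-item PPI impact scale. Neither the items nor the data come from any existing instrument.

```python
# Minimal sketch (illustrative only): Cronbach's alpha, a standard psychometric
# check of internal consistency, applied to made-up responses to a hypothetical
# five-item PPI impact scale. The data and items are invented for illustration.

from statistics import pvariance

def cronbach_alpha(responses: list[list[float]]) -> float:
    """responses: one row per respondent, one column per item."""
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # columns of item scores
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical ratings (1-5) from six respondents on five items
data = [
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 4, 5, 4],
    [3, 2, 3, 3, 2],
]
print(f"alpha = {cronbach_alpha(data):.2f}")
```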
Economic Aspects
In exploring the impacts of patient and public involvement, it is important to consider all potential impacts, including economic impacts, particularly if forming a judgment about whether a particular involvement activity is cost-effective. In the PIRICOM and PAPIRIS systematic reviews, there was no evidence of any economic modeling of costs or benefits, with only a very small number of papers mentioning the costs of particular patient and public involvement activities. It is important that, in future theorizing of involvement, economic impacts are considered alongside other forms of impact as part of a broader development of the patient and public involvement evidence base.
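As a minimal sketch of the kind of simple cost accounting that could feed such future economic work, the code below tallies the direct costs of some hypothetical PPI activities. All activities and figures are invented for illustration; the reviews themselves found no economic modeling to draw on.

```python
# Minimal sketch of simple cost accounting for PPI activity, using entirely
# hypothetical figures. This only illustrates the kind of bookkeeping that
# could feed future economic consideration of PPI; it is not from the reviews.

activities = [
    # (description, service users, hours each, payment per hour, travel per person)
    ("advisory group meetings (x4)", 3, 8.0, 20.0, 15.0),
    ("review of participant materials", 3, 2.0, 20.0, 0.0),
    ("dissemination event", 2, 4.0, 20.0, 15.0),
]

total = 0.0
for name, people, hours, rate, travel in activities:
    cost = people * (hours * rate + travel)
    total += cost
    print(f"{name}: {cost:.2f}")

print(f"Total PPI cost (hypothetical): {total:.2f}")
```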
A Linked Body of Research
The PIRICOM (Reference Brett, Staniszewska and Mockford4) and PAPIRIS (Reference Mockford, Staniszewska and Griffiths15) systematic reviews both identified a large number of diverse studies that have examined different aspects of patient and public involvement. However, this diversity poses several difficulties when attempting to synthesize different types of information about patient and public involvement, including impact. Many studies simply report the results of a project, without linking the study to the much broader body of work or clearly outlining how the work moves thinking forward conceptually or methodologically. We risk reinventing the wheel many more times unless greater effort is placed on developing a coherently linked area of work. This presents a significant challenge for researchers: stating explicitly how their study adds to the body of knowledge, so that conceptual and methodological developments are clearly visible.
Reflective Evaluation and Interpretation
The ability to critique and to reflectively evaluate and interpret research underpins the development of conceptual and methodological thinking. While such reflective thinking is present in the field of PPI, it tends to be found in opinion or reflective articles and less often in studies reporting the results of a project trying to establish the impact of PPI. For example, when studies presented their findings in relation to impact, the short descriptions (which formed the main way in which such data were captured) were rarely accompanied by any critical reflection. There was rarely any attempt to interpret or explain any relationship with context and process in evaluating impact. As a result, there is an important need to develop this critical reflective capacity in the reporting of impact, to enable the reader to critically assess a study and judge its merit and contribution to our broader understanding of patient and public involvement.
Development of the GRIPP Checklist
This paper has reported the narrative synthesis of the key issues that emerged from the PIRICOM and PAPIRIS systematic reviews and that were primarily concerned with the poor quality of reporting within PPI studies. To develop the GRIPP checklist in this follow-up study, the research team carefully considered each issue in relation to several criteria: (i) whether the information was important to report within a paper that included some level of PPI, (ii) whether it would contribute to enhancing the evidence base of PPI reporting more generally, and (iii) where the information should be reported to create greater transparency and so enhance the ease of future synthesis.
Criterion (iii) was useful for considering where information about an aspect of PPI should be reported within the structure of a paper, to enhance the quality of reporting. These deliberations helped to structure the checklist according to the key sections usually expected within a paper. The aim was to create a checklist that was logically structured and could be easily used by authors in writing their PPI papers and reports, editors and peer reviewers in reviewing manuscripts for publication, and readers in critically appraising published articles and reports.
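To make this structure concrete, the sketch below represents a GRIPP-style checklist as structured data organized by the usual sections of a paper, and checks a draft against it. The items shown are paraphrased from the issues discussed in this paper and are illustrative only; they are not the published GRIPP checklist, which is given in Table 1.

```python
# Illustrative sketch only: a GRIPP-style reporting checklist represented as
# structured data, organized by the usual sections of a paper as described in
# the text. The items below are paraphrased from issues discussed in this
# paper, NOT the published GRIPP checklist (see Table 1 for the actual items).

CHECKLIST = {
    "abstract": [
        "PPI is mentioned so the study can be found in searches",
    ],
    "introduction": [
        "definition/conceptualization of PPI used in the study",
        "aim of the PPI (including evaluation of impact, if any)",
    ],
    "methods": [
        "level of involvement (consultation, collaboration, user-led)",
        "stages of the research at which PPI occurred",
        "context: support, training, funding, attitudes, time",
        "process: how the involvement was carried out",
    ],
    "results": [
        "positive impacts, negative impacts, and evidence of no impact",
        "economic aspects, where relevant",
    ],
    "discussion": [
        "critical reflection on PPI, linked to context and process",
        "how the study adds to the wider PPI evidence base",
    ],
}

def unreported_items(section: str, reported: set[str]) -> list[str]:
    """Return checklist items in a section that a draft does not yet report."""
    return [item for item in CHECKLIST[section] if item not in reported]

# Example: a draft methods section that only reports the level of involvement
draft = {"level of involvement (consultation, collaboration, user-led)"}
for missing in unreported_items("methods", draft):
    print("Missing:", missing)
```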
DISCUSSION
This follow-up study has presented a synthesis of the key issues from the PIRICOM (Reference Brett, Staniszewska and Mockford4) and PAPIRIS (Reference Mockford, Staniszewska and Griffiths15) systematic reviews in relation to the reporting of patient and public involvement. If we are to understand the aims, methods, processes, and impact of PPI in HTA and health research more generally, it is important that there is a significant improvement in the quality of PPI reporting. This applies to a range of outputs, including peer-reviewed papers, case studies, and HTA reports. There have been recent suggestions by Facey et al. (Reference Facey, Boivin, Gracia, Hansen and Lo Scalzo8) that all HTA reports include a section on patient issues.
At present, the PPI evidence base is like an iceberg, only partly visible within the literature, with much information hidden, either not reported or poorly reported. However, the absence of this information does not mean an absence of activity or impact within individual studies. The GRIPP checklist represents the first international attempt to develop a checklist that provides robust guidance to enhance the quality of PPI reporting. It can be used by researchers, reviewers, editors, service users, and policy makers with any paper that has attempted to include PPI, particularly those that have attempted to evaluate the impact of PPI. The GRIPP checklist has been registered with EQUATOR (http://www.equator-network.org/), and a process of consensus development will start shortly, in collaboration with EQUATOR and other key international stakeholders. This process will develop international consensus and further refine and test the GRIPP checklist to ensure its robustness.
The GRIPP checklist is relevant for studies that primarily report the results of their PPI activity. For other studies where the PPI component is a secondary aim, for example, clinical studies that have evaluated an intervention and have included PPI, authors will need to consider the utility of the checklist items for their own work. Ideally, authors should be encouraged to publish a separate PPI methods paper written using the GRIPP checklist, as this would provide a significant contribution to the broader evidence base. However, we recognize this is not always possible, and we hope to provide firmer guidance in the future on the GRIPP checklist items that are vital, as opposed to optional, for studies where PPI is a secondary or tertiary aim. This deliberation will form part of the next phase of consensus development with EQUATOR and other stakeholders.
Strengths and Weaknesses
While the GRIPP checklist presented in this paper is based on two systematic reviews undertaken using robust methods, it has some limitations. A key limitation is the lack of international input to date. While PIRICOM included international studies, PAPIRIS focused only on UK-based studies, as it was concerned with health and social care services in the UK. The next stage of this work will include international collaboration to ensure a broad view is taken in developing consensus and in ensuring the robustness of the GRIPP checklist. However, as PIRICOM included international studies, we anticipate that the GRIPP checklist has international relevance, although this will require further testing.
Implications for Policy Makers
The development of a stronger PPI evidence base, through better quality reporting, will enable policy makers to form a clearer understanding of the impact or difference that PPI makes to research. Such evidence may be vital in times of fiscal constraint, when philosophical arguments supporting PPI on the basis of societal good are harder to defend. A strengthened PPI evidence base will also help policy makers recognize the validity and relevance of PPI as an activity that strengthens the quality of research and so ultimately underpins evidence-based policy making.
CONCLUSIONS
The GRIPP checklist represents the first attempt to provide guidance that can be used internationally to enhance the quality of PPI reporting and so strengthen the future PPI evidence base. This will enable a wider range of audiences, including researchers, service users, and policy makers, to better understand the impact, or difference, PPI can make to research. Elements of better reporting depend on future developments in the PPI evidence base. For example, it is important for future researchers to consider the importance of context and process in the interpretation and reporting of impact; this has obvious implications for the design and data collection stages of a study. Another key area where the PPI evidence base needs enhancement, to enable better reporting, is the development of robust instruments to quantitatively measure the impact of patient and public involvement (Reference Staniszewska22). This will enable researchers, policy makers, and others to form better judgments about where PPI has the greatest impact, the extent of that impact, and its nature. There is also a need for greater clarity in the conceptualization, definition, and theorization of PPI to ensure studies have conceptual equivalence, and so are comparing the same concept. Such changes require a paradigm shift in PPI, moving it from an area that relies on case studies and narrative data to one that builds on these data by embracing more quantitative forms of evidence, with known properties of reliability and validity, that demonstrate the extent of impact more effectively. This requires a change in the nature of the studies undertaken, and a need for funders to support the development of robust quantitative instruments that measure impact. Such changes in the nature of the PPI evidence base, together with better quality reporting facilitated by the GRIPP checklist, will reveal the hidden iceberg of PPI evidence, enabling more effective future evaluation of what forms of PPI work, for whom, why, and in what circumstances.
CONTACT INFORMATION
Sophie Staniszewska, DPhil (Oxon) (sophie.staniszewska@warwick.ac.uk), Jo Brett, MSc, MA (J.Brett@warwick.ac.uk), Carole Mockford, DPhil (Oxon) (C.Mockford@warwick.ac.uk), Royal College of Nursing Research Institute, School of Health and Social Studies, University of Warwick, CV4 7AL Warwick, UK
Rosemary Barber, MSc (rosemary.barber@sheffield.ac.uk), University of Sheffield, School of Health and Related Research, Regent Court, 30 Regent Street, Sheffield S1 4DA, UK
CONFLICT OF INTEREST
All authors report they have no potential conflicts of interest.