
Audit, Feedback, and Behaviour Change

Published online by Cambridge University Press:  24 June 2025

Noah Ivers
Affiliation:
Women's College Hospital Research Institute, Toronto
Robbie Foy
Affiliation:
University of Leeds

Summary

Using audit to identify where improvement is needed and providing feedback to healthcare professionals to encourage behaviour change is an important healthcare improvement strategy. In this Element, the authors review the evidence base for using audit and feedback to support improvement, summarising its historical origins, the theories that guide it, and the evidence that supports it. Finally, the authors review limitations and risks with the approach, and outline opportunities for future research. This title is also available as open access on Cambridge Core.

Information

Type: Element
Online ISBN: 9781009604697
Publisher: Cambridge University Press
Print publication: 17 July 2025
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (CC BY-NC-ND 4.0) https://creativecommons.org/cclicenses/

Audit, Feedback, and Behaviour Change

1 Introduction

Gaps between what is supposed to happen and what does happen are common in clinical practice.Reference Steel, Bachmann and Maisey1–Reference Fleming-Dutra, Hersh and Shapiro6 Widely used internationally, audit and feedback (A&F) is a major improvement strategy for addressing this problem. It involves careful measurement of clinical performance against standards and feedback to encourage change where needed or maintain high performance. Clinical audit, which can help in identifying performance against standards and in prioritising areas for improvement, has been increasingly used as part of clinical governance, accountability, and regulatory approaches in many countries from the 1980s onwards. Combining audit data with feedback to clinicians, particularly when based on behavioural theories, is a potentially powerful way to stimulate change in professional behaviour and perhaps also in the microsystems (the individual care units) in which clinicians work (see the Element on clinical microsystems and team coachingReference Harrison, Finn, Godfrey, Dixon-Woods, Brown and Marjanovic7).

The philosophy underpinning A&F is sound, but designing and implementing effective A&F models that maximise improvement while minimising unintended consequences can be challenging. Advances in research, theory, and methodology (increasingly taken forward within the field of implementation scienceReference Wilson, Kislov, Dixon-Woods, Brown and Marjanovic8) are now offering promising new directions. This Element provides an overview of current knowledge on what makes A&F effective, with an emphasis on understanding theory through practical examples. The implications for future research and development within the field are considered.

2 What Is Audit and Feedback?

As well as introducing some definitions of A&F – what it is and what it isn’t – we explain some fundamental theories that underpin it, including control theory.Reference Carver and Scheier9 We also describe the more recently developed clinical performance feedback intervention theory (CP-FIT),Reference Brown, Gude and Blakeman10 which builds on control theory as well as goal-setting theoryReference Locke and Latham11 and feedback intervention theory.Reference Kluger and DeNisi12

2.1 Defining Audit and Feedback

A range of terms are used in the literature and in healthcare in overlapping ways to describe strategies with similarities to A&F, including ‘scorecards’, ‘dashboards’, ‘practice reports’, ‘benchmarking’, and more. The National Institute for Health and Care Excellence (NICE) describes clinical audit as follows:

A quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change. Aspects of the structure, processes, and outcomes of care are selected and systematically evaluated against specific criteria. Where indicated, changes are implemented at an individual, team, or service level and further monitoring is used to confirm improvement in healthcare delivery.13

The authors of the 2025 Cochrane Review on A&F offer a similar definition:

[P]roviding healthcare professionals and/or organisations with a summary of clinical performance over time on objectively measured quality indicators and is a foundational component of many quality improvement activities.Reference Ivers, Yogasingam and Lacroix14

One useful way of thinking about clinical audit is as a cyclical process involving five steps:

  1. Preparing for audit (engaging stakeholders and identifying topics of interest)

  2. Selecting criteria (determining what to measure and how to measure it)

  3. Measuring performance (assessing the criteria against standards)

  4. Making improvements (directing clinicians to respond to the measures)

  5. Sustaining improvements (repeating the measures to ensure high performance).Reference Benjamin15
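To make the cyclical logic concrete, the steps above can be sketched in Python. This is an illustrative sketch only: the criteria, standards, and improvement effects are invented for the example and are not drawn from the Element.

```python
# Hypothetical sketch of the clinical audit cycle: measure performance
# against standards, act on gaps, and re-measure until standards are met.

def run_audit_cycle(measure, improve, standards, max_rounds=5):
    """Steps 3-5 of the cycle: measure, compare with standards, improve,
    and repeat; steps 1-2 (preparation, criteria selection) are assumed done."""
    for rounds in range(1, max_rounds + 1):
        performance = measure()                          # step 3: measure performance
        gaps = {c: v for c, v in performance.items()
                if v < standards[c]}                     # compare against standards
        if not gaps:
            return rounds, performance                   # standards met
        improve(gaps)                                    # step 4: make improvements
    return max_rounds, performance                       # step 5: repeated measurement

# Invented example: compliance rates for two audit criteria that
# improve by 15 percentage points each time feedback prompts action.
state = {"foot checks": 0.62, "HbA1c recorded": 0.80}

def measure():
    return dict(state)

def improve(gaps):
    for criterion in gaps:
        state[criterion] = min(1.0, state[criterion] + 0.15)

rounds, final = run_audit_cycle(
    measure, improve,
    standards={"foot checks": 0.9, "HbA1c recorded": 0.9},
)
# After three rounds, both criteria meet the 90% standard.
```

The loop structure mirrors why the cycle matters: a single measurement without re-audit (step 5) gives no way to confirm that improvement has actually occurred or been sustained.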

To promote improvement, A&F may draw on a wide range of behaviour change techniques: the 2025 Cochrane review of A&F identified 48 unique techniques in trials testing A&F interventions. Understood in this way, A&F can be seen as a complex, multifaceted implementation strategy for recommended, evidence-based professional practices. It identifies gaps between desired and actual care delivered over time, promotes reflection and encourages appropriate changes in practice, and enables upgrades to the knowledge, skills, or resources needed to close these gaps.

It is useful to distinguish A&F from other similar activities and interventions used to improve quality in healthcare. One common source of confusion is the difference between audit and research. A helpful way of thinking about this is to see research as seeking to generate new knowledge, whereas audit alone does not: ‘research is concerned with discovering the right thing to do whereas audit is intended to make sure that the thing is done right’.Reference Benjamin15

Audit and feedback is also different from clinical decision support, which is meant to prompt a specific decision, for a specific patient, during a clinical interaction. Audit and feedback does not prompt specific clinical decisions in real time;Reference Brown, Peek and Buchan16 it may provide patient-specific information, but not usually at the point of care. The goal of A&F is to give clinicians insight into their patterns of practice, usually through summary measures of care over time, rather than to guide individual decisions. The sort of reflection and planning for change that A&F aims to bring about is best done when the patient is not immediately in front of the clinician. That said, the lines between clinical decision support and A&F can become blurred,Reference Ivers, Brown and Grimshaw17 for example as the use of electronic medical record data evolves to give aggregate views of achievement of guidelines along with nuanced, patient-specific recommendations.Reference Brown, Balatsoukas and Williams18

Audit and feedback as an implementation strategy is not the same as the many types of feedback delivered in educational settings. A separate literature examines best practices for feedback during skill development and simulation-based training (for further details, see the Element on simulation as an improvement techniqueReference Brazil, Purdy, Bajaj, Dixon-Woods, Brown and Marjanovic19).

Audit and feedback is also different from relay of clinical information, in which a message is sent about an action needed for a particular patient. Relay of clinical information is ad hoc in terms of which patients are included and typically depends on patient-specific advice produced by another health professional. Similarly, biofeedback, which measures biological parameters (e.g. data from a patient’s own blood glucose level or blood pressure measurement device) and sends immediate data to the patient (and sometimes also their clinician) to prompt behaviour change, is also not A&F.

Usually A&F is delivered to individual clinicians, but it is sometimes directed at clinical teams or leaders of healthcare organisations, depending on what is being measured and who could be responsible for improvements. Audit and feedback may involve processes similar to those used for governance reporting, accountability, or pay for performance, but the ethos of A&F is to support health professionals in their continuous efforts to improve patient care. It seeks to leverage implicit rather than explicit incentives and motivations for improvement.

2.2 Understanding How Audit and Feedback Works

Audit and feedback is based on two key assumptions: (i) relevant aspects of quality of care can be accurately and rapidly measured; (ii) healthcare professionals, once made aware of the gap between desired and actual performance in these areas, will seek to make improvements. Several strands of theory underlie these assumptions.

Psychological theories of self-regulation and behaviour change, such as control theory, are relevant to A&F.Reference Carver and Scheier9 Control theory involves a feedback loop detecting and reducing discrepancies between actual and desired performance in motivated individuals. Common elements of relevant theories include a step in which the person who gets the feedback determines whether they are satisfied with their performance as measured in the audit. If so, they continue with their day. If not, they are expected to both scrutinise the data and consider how improvement might be made. Another crucial aspect of A&F emphasised by control theory is its iterative, cyclical nature. One-time-only feedback could easily be deprioritised given other demands on time, while repeated feedback demonstrating ongoing gaps in care is more difficult to ignore, especially if peers are improving.

Clinical performance feedback intervention theoryReference Brown, Gude and Blakeman10 was developed from a meta-synthesis of qualitative studies examining A&F. Building upon control theory,Reference Carver and Scheier9 goal-setting theory,Reference Locke and Latham11 and feedback intervention theory,Reference Kluger and DeNisi12 CP-FIT involves a cycle of goal-setting, audit, and feedback. It also considers necessary precursors to change, including those relating to perception, acceptance (and possible verification), and intention to change, while considering both individual and organisational responses that could enable clinical performance improvement. In so doing, CP-FIT incorporates important elements of behavioural science, for example noting that any change requires the actor(s) to have the capability, the opportunity, and the motivation to enact the desired behaviour(s).

Clinical performance feedback intervention theory puts forward potential explanations of how a feedback intervention might interact with the recipient and contextual factors to predict how well the recipient moves through the improvement cycle. Ultimately, it gives three hypotheses to explain these relationships:

  • Healthcare professionals and organisations have finite capacity to engage with feedback.

  • These parties have strong beliefs about how patient care should be provided that influence their interactions with feedback.

  • Feedback that directly supports clinical behaviours is most effective.

Clinical performance feedback intervention theory also highlights that feedback can have unintended outcomes. These can include encouraging people to manipulate data to make it appear that certain targets are being achieved (gaming) or pursuing action for patients in areas measured at the price of other aspects of care that should be higher priority (tunnel vision).

3 A Brief History of Audit and Feedback

3.1 Influences on the Development of Audit and Feedback

One of the earliest pioneers of clinical audit was Florence Nightingale.Reference Fee and Garofalo21 In 1854, she and her nursing team arrived to care for British soldiers fighting in the Crimean War and rapidly identified problems at the Scutari Barracks. Soldiers, poorly cared for in appalling hygienic conditions with shortages of medicines and other essential supplies, were much more likely to die from infectious diseases, such as typhus or cholera, than from battlefield injuries. Nightingale organised care, instigated strict standards of hygiene, and brought in resources such as fresh food and linen. Crucially, with her background in mathematics, she kept meticulous records and was able to demonstrate a fall in mortality rates from 40% to 2%.Reference Fee and Garofalo21

Calls for systematic, scaled-up scrutiny of patient outcomes came with the work of the US orthopaedic surgeon Ernest Codman at the start of the twentieth century.Reference Codman22 Codman gathered data from the case histories of patients following surgery to monitor surgical outcomes and identify errors in their care. He published his own results with a frankness that would now expose him to litigation, listing complications under headings such as ‘Errors Due to Lack of Judgment’ and ‘Errors Due to Lack of Technical Skill’.Reference Donabedian23 He conceived and adopted the then controversial ‘end results idea’ that every hospital should follow every patient it treats long enough to determine and understand outcomes of care. He was instrumental in establishing the American College of Surgeons, and his work prompted the first proposal for standardised monitoring of surgical outcomes in the UK.Reference Hey Groves24

The US epidemiologist Paul Lembcke published a seminal medical audit in 1956, demonstrating that the feedback of data on major pelvic surgery – comparing performance amongst surgeons in one hospital – reduced the number of operations assessed as unjustified.Reference Lembcke25 His selection and design of audit criteria satisfied six principles that are still broadly in use today:

  • Objectivity

  • Verifiability

  • Uniformity (e.g. independent of factors such as type of hospital)

  • Specificity

  • Pertinence (i.e. relevant to the aim of improving outcomes)

  • Acceptability.

Further advances in conceptualising and measuring quality of care included Avedis Donabedian’s influential 1966 model outlining the framework of structure (all factors that affect the context of healthcare delivery), process (the actions that make up the delivery of healthcare), and outcomes (the effects of healthcare on patients and populations).Reference Donabedian26

3.2 The Evolution of Audit and Feedback

Audit and feedback emerged in a variable and often disjointed fashion, frequently led by self-organising groups of clinicians.27 In the UK, a systematic, policy-driven framework for measuring performance and providing feedback aimed at changing practice only emerged after a series of failures and scandals in the 1990s, including the paediatric cardiac surgery programme at Bristol Royal Infirmary.28 These events knocked public confidence in the NHS and led to recognition of the need to better monitor and improve quality and safety of care. Clinical governance was introduced as part of the response, with the aim of ensuring, among other things, that national standards for clinical care were set, met, and monitored.Reference Scally and Donaldson29 This approach united previous clinical audit and effectiveness initiatives with more explicit procedures for managing and dealing with clinical risk and poor performance. Later developments included the establishment and growth of national clinical audits, including those addressing healthcare priorities such as cancer, diabetes, and heart disease, and the development of supporting central structures, including the National Clinical Audit and Patient Outcomes Programme. Use of data is also a prominent feature of performance management schemes, such as the Quality and Outcomes Framework, which remunerates general practices according to levels of achievement for annually reviewed indicators.Reference Roland and Guthrie30

The increasing use of data to monitor and drive improvements in care has followed a similar path in many other countries, dating back to the 1980s in the Netherlands and the 1990s across Germany and France,Reference Jamtvedt, Flottorp, Ivers, Busse, Klazinga and Panteli31 with parallel developments in North America and Australasia.Reference Evans, Scott, Johnson, Cameron and McNeil32 In the United States, for example, the Veterans Health Administration (VHA), the largest integrated healthcare system, underwent significant reforms in the mid-1990s to tackle problems relating to poor-quality healthcare.Reference Perlin, Kolodner and Roswell33 Alongside structural and organisational changes, the reforms included a strong emphasis on measurement and active management of quality, with clear lines of accountability and a supporting information infrastructure. Within five years, patients from the VHA were receiving higher-quality care compared with a national sample across a broad range of measures, with the greatest differences in areas where the VHA had actively monitored performance.Reference Asch, McGlynn and Hogan34

Supporting infrastructure, or the lack of it, has been the main barrier to the establishment of A&F in low- and middle-income countries. A particular limitation has been the availability of health record systems with wide population coverage.Reference Djellouli, Shawar and Mwaba35 However, the number of trials evaluating A&F and different ways of delivering it in low- and middle-income countries is growing. Further examples of A&F from across different countries are included in Table 1.

Table 1 Examples of A&F across different countriesReference Jamtvedt, Flottorp, Ivers, Busse, Klazinga and Panteli31

| Country | Care area | Provider organisations | Audited information (data sources) | Type of feedback |
|---|---|---|---|---|
| AustraliaReference Evans, Scott, Johnson, Cameron and McNeil32, 36 | Clinical registries covering multiple domains | Mainly hospital specialisms | Combinations of processes of care and health outcomes from electronic records and patient-reported outcome measures (PROMs) | Risk-adjusted reports in varying formats |
| CanadaReference Glazier, Hutchison, Kopp and Dobell37 | Many aspects of primary care | Family physicians in the province of Ontario | Administrative data sources | Updated online semi-annually, aggregated with peer comparison information and generic improvement recommendations |
| Finland | Prevention, acute, chronic care | Primary health centres | Electronic patient records | Feedback report and webpage for potential exchange between health centres |
| Germany | 30 acute care areas | Inpatient care | Specifically documented quality assurance data, administrative data | Report with comparison to national average performance provided to hospital; mandatory, with many indicators also publicly reported |
| Ireland | Six different audits for secondary and tertiary care | Hospitals | Hospital records | Benchmark report, comparing with similar providers |
| Italy | Many aspects of primary care | GPs | Administrative data sources | GPs mandated to join a primary care team to collaborate and share information; goal to promote teamwork and create a culture of quality, not to be punitive |
| The Netherlands | 23 different treatments | Hospitals, medical teams | Hospital records | Regular (monthly) feedback to providers, usually combined with plan-do-study-act (PDSA) cycles; indicators selected yearly together with scientific associations, hospital organisations, and patients |
Adapted from a monograph from the European Observatory on Health Systems and PoliciesReference Jamtvedt, Flottorp, Ivers, Busse, Klazinga and Panteli31 and other sources.

4 Approach in Action

The international expansion of audit programmes has been facilitated partly by the growing availability of routinely collected data, supported by health information technologies, which has increased both the efficiency of audit and its capacity for scale-up. The growth of A&F has been accompanied by research interest, typically focused on improving understanding of how and when A&F works, and how to enhance its effects.

4.1 Decisions to Make When Planning to Use Audit and Feedback as an Improvement Strategy

Many decisions need to be made when developing an A&F intervention, including the following:

  • Which quality indicators should be the focus?

  • Which healthcare professionals should receive it, and when should the information be delivered?

  • How should the information be delivered to best stimulate the desired actions in response?

  • What can be done to encourage healthcare professionals to engage in the process?

This section describes a variety of ways in which A&F has been operationalised and covers some of the tensions to be balanced.

The ability of A&F to improve care depends on whether healthcare professionals engage with and respond to the data, so how recipients perceive an A&F initiative is important to its success.Reference Wagner, Durbin and Barnsley38,Reference Ivers, Taljaard and Giannakeas39 One challenge is that, because A&F initiatives are naturally limited to areas that can be objectively measured, they may or may not align with the priorities of patientsReference Ivers and Maybee40 or indeed what clinicians consider most important. Within the realm of what is measurable, there remain many decisions to make: adding more quality indicators may please some stakeholders and overwhelm others, while removing a quality indicator may signal that clinical efforts in that area are no longer deemed important.

How A&F initiatives are organised can be consequential. In large-scale audits, tension can arise between the goals of system leaders (those responsible for large populations and/or budgets spanning multiple organisations) and local leaders (those responsible for patient care in a given clinical practice), even when everyone at each of these ‘levels’ is seeking improvement on the same quality indicators. This tension is particularly likely to arise when the A&F initiative is delivered to, rather than created with, healthcare professionals. One way to balance top-down goals and bottom-up engagement is through local champions or opinion leaders. This can enhance the credibility and trustworthiness of the feedback, facilitate engagement,Reference Sykes, Rosenberg-Yunger and Quigley41 and enable collective or individual action planning in response to the data. However, these personalised approaches are more difficult to scale, requiring more personnel, time, and funds, and the return on such investments may be uncertain.Reference Desveaux, Ivers and Devotta42

How the data are collected also matters for professional engagement with A&F. With increasing digitisation of healthcare, A&F can now leverage routinely collected data from administrative records or from structured data elements in clinical records. This type of ‘secondary’ data use, in which data collected for one purpose (i.e. administration or clinical care) are re-purposed for monitoring, can allow for rapid measurement of quality indicators at scale. But the efficiency can come at a cost: secondary data use may lack the nuance required to assess the appropriateness of many aspects of care, in contrast to careful chart review. Secondary use of health records as the basis of A&F could therefore undermine engagement, since without clinical granularity, professionals may lack confidence in the relevance and credibility of the data.

How data are presented can also have an impact. For example, interactive dashboards that aim to enable healthcare professionals to tailor how they engage with their practice data are becoming more common. While some feedback recipients may prefer this approach, the benefits – and return on investment – remain uncertain.Reference Daneman, Lee and Bai43 Regardless of whether feedback is delivered through a dashboard or static report, A&F designers must grapple with how to encourage improvement given that discussing the data with peers may help with developing or sharing effective action plans. In some clinical contexts, there remains a culture of judgement and fear, rather than one of learning and growth. Contexts lacking in psychological safety (an environment in which individuals feel they can speak up, make mistakes, and share ideas without fearing negative consequences) may prioritise confidentiality of audit data, even though open sharing, transparency, and discussions with others can be a fruitful way to move from data to action.Reference Desveaux, Ivers and Devotta42

A further consideration is that A&F is unlikely to be useful if there is little room for improvement. Where there is very substantial need for improvement, more intensive interventions than just A&F alone may be required. Audit and feedback may therefore be used alongside other co-interventions that seek to enable behaviour change in response to the data – for example educational outreach as well as A&F. Again, however, approaches that use co-interventions are likely to involve greater cost.

4.2 Evidence and Best Practice Suggestions for Audit and Feedback

There is a substantial evidence base underpinning the use of A&F to improve clinical practice, with further suggestions around how to optimise its design and impact drawn from theory and wider disciplinary perspectives.

4.2.1 Evidence

Audit and feedback has featured in hundreds of randomised trials. Overall, the evidence suggests that A&F tends to have small to moderate effects on patient care, although these can translate into substantial population-level impacts. A 2012 Cochrane Review of A&F, which included 140 randomised trials published up to 2011, found that A&F had modest effects on processes of patient care, leading to a median 4.3% absolute improvement (interquartile range 0.5–16%) in compliance with recommended practice. The 2025 update of the Cochrane ReviewReference Ivers, Yogasingam and Lacroix14 included 292 trials published up to 2020. Meta-analysis (which combines the results of multiple studies) of 177 trials comparing A&F versus control found a mean absolute increase in desired practice of 6.2% (95% confidence interval (CI) 4.1–8.2). Effects were greater when performance was lower at baseline. The analyses indicated that effect sizes achieved in trials testing A&F have slightly increased over the past decade, perhaps due to changes in how A&F is designed and implemented. Meta-regressions (a statistical technique used to analyse effect sizes) found that greater A&F effects were achieved when

  • giving data to individuals, rather than at team level

  • comparing performance to top-performing peers or a benchmark

  • involving a local champion with whom the recipient had a relationship

  • using interactive methods rather than didactic or written format to give feedback

  • using facilitation to support engagement

  • developing action plans to improve performance.

The meta-regressions did not find significant effects for the number of indicators in the audit, the inclusion of a comparison to the average performance of all peers, or the co-development of action plans by recipients. Although repeated delivery of feedback might be expected to reinforce change, the meta-regressions indicated that it was associated with lower effect sizes.
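As a concrete illustration of what an absolute improvement such as the review’s 6.2% means, the sketch below computes the risk difference and a Wald (normal-approximation) 95% CI for a single invented two-arm trial. Note that the pooled Cochrane estimate comes from meta-analysis across many trials, not from arithmetic like this on any one study; the trial numbers here are hypothetical.

```python
from math import sqrt

def risk_difference_ci(events_fb, n_fb, events_ctl, n_ctl, z=1.96):
    """Absolute difference in compliance proportions between a feedback
    arm and a control arm, with a Wald 95% CI (illustration only)."""
    p_fb, p_ctl = events_fb / n_fb, events_ctl / n_ctl
    diff = p_fb - p_ctl
    # Standard error of a difference in independent proportions.
    se = sqrt(p_fb * (1 - p_fb) / n_fb + p_ctl * (1 - p_ctl) / n_ctl)
    return diff, (diff - z * se, diff + z * se)

# Invented trial: 620/1000 patients receive recommended care in the
# feedback arm versus 558/1000 in the control arm.
diff, (lo, hi) = risk_difference_ci(620, 1000, 558, 1000)
# diff is 0.062: a 6.2 percentage-point absolute improvement.
```

The Wald interval is used here only because it is simple; real analyses of trial data would typically use more robust methods and, in a review, would pool across studies.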

4.2.2 Best Practice Suggestions

These findings, along with some of the tensions described in Section 4.1, are largely reflected in Brehaut et al.’s 15 suggestions for optimising the effects of practice feedback,Reference Brehaut, Colquhoun and Eva44 which fall into four categories:

  • Nature of the desired action

  • Nature of the data available for feedback

  • Feedback display

  • Delivery of the feedback intervention (Table 2).

Table 2 Suggestions for optimising performance feedback

Suggestions for designers of practice feedback, with examples of implementation strategies:

Nature of the desired action

  • Actions that are consistent with established goals and priorities: ensure that recipients think that the actions needed to improve performance are important compared with other competing priorities.

  • Actions that can improve and are under the recipient’s control: measure baseline performance before providing feedback, and ensure that actions for improvement are seen as feasible by recipients.

  • Specific actions: consider using the AACTT frameworkReference Presseau, McCleary and Lorencatto45 to specify the action required (‘what’ needs to be done), the actor(s) performing the action (‘who’), the context in which the action is taken (‘where’), the targeted individuals or population the action is taken for or with (‘whom’), and the required timing (period and duration) of the action (‘when’).

Nature of the data available for feedback

  • Multiple instances of feedback: use regular instead of one-off feedback to encourage continuous improvement.

  • Feedback as soon as possible, at a frequency informed by the number of new patient cases: increase or decrease the interval of feedback for outcomes with many patient cases.

  • Individual rather than general data: provide practitioner-specific rather than hospital-specific data.

  • Comparators that reinforce desired behaviour change: choose one comparator to represent a target, rather than several comparators.

Feedback display

  • Visual display and summary message closely linked: put a declarative summary message near the graphical or numerical data supporting it.

  • Feedback given in more than one way: present key messages both textually and numerically, and provide graphic elements that mirror key recommendations.

  • Feedback designed to be easy for recipients to understand: reduce cognitive load by minimising the effort required to process information, including prioritising key messages, reducing the amount of data presented, improving readability, and reducing visual clutter.

Delivery of the feedback intervention

  • Address possible barriers to feedback use: assess possible barriers before feedback is provided, and incorporate feedback into a care pathway and/or routine workflows.

  • Provide short, actionable messages followed by optional detail: allow recipients who only have time for the main messages to focus just on these; other recipients may wish to ‘drill down’ to check the credibility of the feedback and understand their data.

  • Address credibility of the information: ensure that feedback comes from a trusted local champion or colleague, increase the transparency of data sources, and disclose any conflicts of interest.

  • Prevent defensive reactions to feedback: guide reflection, include positive messaging along with negative, and focus on what could be better in the future (‘feedforward’).

  • Construct feedback through social interaction: allow users to respond to feedback, enable dialogue with peers, and provide facilitated conversations/coaching about the feedback.
Adapted from Brehaut et al.Reference Brehaut, Colquhoun and Eva44
Nature of the Desired Action

The theories underlying A&F all start from the assumption that recipients want to improve and achieve high performance in the area of interest; recipients are therefore more likely to act if the nature of the desired action fits with their priorities. Furthermore, A&F should recommend specific actions that the recipient can perform and that will improve the ‘score’ on the feedback. Since the energy of healthcare professionals is finite, these characteristics are thought to increase the likelihood that they will engage.

Nature of the Data Available for Feedback

To be seen as valid by professionals, feedback should include a credible and desirable comparator to act as a target. It should ideally include data about individual-level performance rather than only aggregated group-level data. The feedback data should be available without long data lags and delivered at a frequency informed by the time needed to measure a change in practice. These characteristics are thought to increase the likelihood that recipients will perceive the data as a potentially accurate and fair reflection of their practice and, therefore, that any ‘gaps’ between desired and actual care are more likely to be seen as worthy of attention.

Feedback Display

Usability is important for A&F intervention design. Design thinking recognises the cognitive limitations of users; it strives to understand users as they truly are, rather than as we would like them to be. For A&F, cognitive load should be minimised, with graphics and data presented as simply as possible. Users should be offered multiple ways of reaching the same conclusion (e.g. a graph and a written summary), and the summary message about performance should be placed near the relevant data. The idea is that these features allow the user to understand the key messages quickly and decide from there whether to engage.

Delivery of the Feedback Intervention

The challenges of moving from data to action should be recognised, as should the value of fitting A&F into routine workflows. Brief, actionable messages should be presented at the outset of the feedback, with additional detail (e.g. about data completeness) available for recipients who wish to verify the data and check their credibility. The source of the feedback should be credible and influential, and the intervention should incorporate components that help manage possible defensive reactions to negative feedback. Dialogue with peers or trained facilitators to guide action in response to the data is recommended. Including more of these features helps users move from engagement to implementation of desired clinical actions.

4.3 Examples of Audit and Feedback in Action

Examples of A&F from different contexts, linked to suggestions for effective feedback summarised in Table 2,Reference Brehaut, Colquhoun and Eva44 are given next.

4.3.1 Implementation of Guideline Recommendations in Primary Care

In a pragmatic cluster-randomised trial (a type of trial where groups are randomly assigned to different interventions) involving 144 general practices in West Yorkshire, England,Reference Willis, Collinson and Glidewell46 practices were randomly assigned to a multifaceted package of strategies to support implementation of guideline recommendations. This included multiple embedded behaviour change techniques within A&F, educational outreach, and computerised support, with content tailored to promote adherence.

The implementation package used quarterly practice-specific feedback reports presenting achievement ranked by practice and compared over time, using remotely gathered, individualised practice data. The reports prompted recall of clinical goals, highlighted consequences of changing – or not changing – practice, suggested strategies for change, and encouraged goal-setting and reflection on progress towards goals. Reports also contained evidence-based clinical messages, responses to common queries, and action-planning templates. Practices received computerised search tools to identify relevant patients for review.

The implementation package was effective for targeting prescribing behaviours within the control of clinicians, reducing high-risk prescribing from 7.2% to 5.2% (a relative reduction of 27.7%) and saving healthcare costs through fewer gastrointestinal bleeds, which can be caused by high-risk prescriptions. Factors that contributed towards success included the use of comparators to reinforce change (e.g. clinicians’ personal achievement compared with the top quartile of performers) and positive messaging to manage defensive reactions to feedback. The implementation package had no effect on more complex targets that additionally required patient engagement (e.g. diabetes control). This is compatible with the good practice suggestion that feedback is generally more effective at changing behaviours directly under the recipients’ control.

The feedback component of this implementation package was adapted and scaled up further. It was shown to be effective and acceptable to clinicians in reversing a rising trend in opioid prescribing for non-cancer pain amongst 316 general practices.Reference Wood, Foy and Willis47,Reference Alderson, Farragher and Willis48 This corresponded to 15,000 fewer patients prescribed opioids and £0.9 million saved in prescribing costs. The biggest reduction was in the over-75s, a population at higher risk of opioid-related falls and death.

4.3.2 Control Charts to Reduce Major Adverse Events in Digestive Tract Surgery

In this trial, 40 hospital surgical departments across France were randomly allocated to the monitoring of outcomes using control charts with regular feedback on indicators or to usual care only.Reference Duclos, Chollet and Pascal49

The intervention involved prospective monitoring of outcomes using control charts, provided in quarterly sets, with regular feedback on indicators (see the Element on statistical process controlReference Mohammed, Dixon-Woods, Brown and Marjanovic50). A key feature of control charts is that they track variability in key performance indicators over time and provide visual feedback on both positive and negative trends, so they can prompt investigation of the causes of worsening performance and remedial action.Reference Duclos, Chollet and Pascal49 To facilitate implementation of the programme, study champion partnerships were established at each site; members of the surgical team were trained to conduct team meetings and were asked to display posters in operating rooms, maintain a logbook, and devise an improvement plan.
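
The logic of a control chart for event proportions (a p-chart) can be sketched briefly. The following is an illustrative sketch only, not the software used in the trial: the quarterly case and event counts are hypothetical, and the conventional three-sigma binomial limits are assumed.

```python
# Illustrative p-chart for a quarterly major-adverse-event rate.
# All counts are hypothetical; limits are the conventional 3-sigma binomial limits.
import math

cases = [120, 135, 128, 140, 150, 145]   # operations per quarter (hypothetical)
events = [14, 16, 12, 11, 9, 8]          # major adverse events per quarter (hypothetical)

p_bar = sum(events) / sum(cases)          # centre line: overall event proportion

for quarter, (n, x) in enumerate(zip(cases, events), start=1):
    p = x / n
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)   # binomial standard error for this quarter
    ucl = p_bar + 3 * sigma                      # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)            # lower control limit, floored at zero
    verdict = "investigate special cause" if (p > ucl or p < lcl) else "common-cause variation"
    print(f"Q{quarter}: rate={p:.3f}, limits=({lcl:.3f}, {ucl:.3f}) -> {verdict}")
```

A point above the upper limit would prompt investigation into worsening performance, while a sustained run below the centre line signals improvement worth reinforcing.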

After the introduction of the control chart, the absolute risk of a major adverse event was reduced by 0.9% in intervention compared with control hospitals. Risks of patient deaths and intensive care stays were also lower in intervention hospitals.

Factors contributing to success in this case include the following:

  • The long-term provision of repeated feedback

  • The delivery of feedback in multiple ways

  • The organisation of local-level leadership, which encouraged reflection and action in response to feedback.

4.3.3 Antibiotic Stewardship in Hospitals

Twenty-five hospitals providing primary care to rural populations in two Chinese provinces were randomly allocated to an antibiotic stewardship programme or to usual care.Reference Wei, Zhang and Walley51 The intervention aimed to reduce inappropriate antibiotic prescribing for children diagnosed with upper respiratory tract infections by targeting providers and caregivers. It included clinician guidelines and training on appropriate prescribing, prescribing peer review, and brief caregiver education.

Peer review was integrated within routine monthly administrative meetings, during which doctors’ antibiotic prescribing rates were assessed using routine data from electronic health records or randomly sampled paper prescriptions. Doctors within the top 10% of antibiotic prescribers were identified and asked to provide reasons for not following the guidelines. By six months, the antibiotic prescribing rate for children with upper respiratory tract infections had fallen from 82% to 40% in the intervention group and from 75% to 70% in the usual care group. After adjusting for baseline prescribing rates and potential confounders, the stewardship programme was estimated as delivering an absolute reduction of 29% in antibiotic prescribing.
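
A crude, unadjusted difference-in-differences can be computed from the reported percentages; note that the published 29% figure came from a model adjusting for baseline prescribing rates and confounders, so it differs from this back-of-envelope estimate.

```python
# Unadjusted difference-in-differences from the reported prescribing rates.
# The trial's published 29% estimate was model-adjusted, so it differs from this.
intervention_change = 82 - 40   # percentage-point fall in intervention hospitals
control_change = 75 - 70        # percentage-point fall in usual-care hospitals
unadjusted_did = intervention_change - control_change
print(unadjusted_did)  # 37 percentage points, before any adjustment
```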

Factors contributing to success include the following:

  • The specificity of the feedback on antibiotic prescribing for upper respiratory tract infections in children

  • The feedback taking place through social interaction through peer review

  • The multifaceted intervention addressing wider barriers to change, including parental expectations for antibiotics.

4.3.4 Paediatric Hospital Care

Eight rural Kenyan district hospitals were randomised to an intensive multifaceted intervention to improve paediatric care or to a less-intensive control intervention.Reference Ayieko, Ntoburi and Wagai52 The intensive intervention included face-to-face feedback, along with evidence-based guidelines, training, job aids, local facilitation, and supervision. The control intervention included written feedback with guidelines, didactic training, and job aids (such as structured admission records and dosing charts).

Indicators reflected standards defined by clinical guidelines and focused on assessment, treatment, and supportive care for pneumonia, malaria, and diarrhoea, as well as further preventive care (e.g. immunisation). After 18 months, adherence to recommended processes of care was generally higher across the intervention than control hospitals, including once-daily gentamicin (89.2% versus 74.4%, respectively), loading doses of quinine (91.9% versus 66.7%) and adequate prescriptions of intravenous fluids for severe dehydration (67.2% versus 40.6%). The proportion of children receiving inappropriate doses of drugs in intervention hospitals was also lower.

Factors contributing to success include the following:

  • The recommendation of specific actions

  • Providing feedback through face-to-face social interaction

  • Local facilitation with problem-solving and action planning.

5 Critiques of the Audit and Feedback Approach

Cumulative meta-analysis of the A&F trials included in the 2012 Cochrane Review suggested that the effect size and associated confidence intervals stabilised in 2003, after 51 comparisons from 30 trials; the field of research was therefore described as ‘stagnant’.Reference Ivers, Grimshaw and Jamtvedt53 However, more recent literature has recognised the need to shift focus from asking whether A&F can improve practice towards asking how the effects of A&F interventions can be optimised. The discussion that follows summarises some of the issues that need to be resolved in moving the field forward.Reference Foy, Skrypak and Alderson54

5.1 Common Limitations in How Audit and Feedback Is Implemented

Even in trials, the best practices summarised in Brehaut et al.Reference Brehaut, Colquhoun and Eva44 are not always used. Those responsible for conducting A&F ‘in the wild’ may not be aware of these best practices, or real-world timelines and budgets may mean that fully implementing them is not possible. From a pragmatic perspective, trade-offs are always necessary, but it can be hard to know how much can be traded off before the modest benefit of A&F is lost altogether. A further challenge is that the interacting elements in the A&F process – feedback providers, feedback report characteristics, and feedback recipients within a specific context – are multiple and complex, and so may be difficult to control and predict.

One key problem is that, in practice, established clinical audit programmes rightly invest considerable time and effort in ensuring the accuracy – and hence credibility – of audit data. Sometimes, however, the focus on the quality of data is not matched by attention to the whole audit cycle, so good practice in design and delivery of A&F programmes is not applied.Reference Willis, Wood and Brehaut55 This results in suboptimal implementation, poor clinician engagement, and missed opportunities for improvement.Reference Desveaux, Rosenberg-Yunger and Ivers56 For example, an analysis of UK national clinical audit programmesReference Willis, Wood and Brehaut55 identified scope for the following:

  • Strengthening feedback (such as improving the targeting of feedback recipients)

  • Reducing the cognitive burden of complex feedback

  • Specifying recommendations for action

  • Demonstrating benefits of participation to feedback recipients.

Further opportunities worth exploring include improving the alignment of A&F with wider improvement drives, such as aligning audits more closely with national guidance, standards, and incentives.

A consistently recurring theme in A&F initiatives is lack of clinician engagement. When faced with data on performance, three common reactions may be observed: ‘my patients are different’; ‘the data are inadequate’; and ‘I don’t know how to do better’.Reference Ivers, Barnsley and Upshur57 These reactions may reflect limitations in the perceived credibility, relevance, and usability of feedback. As we have described, however, well-designed A&F interventions will guide clinicians to understand and verify the data and support them to make plans for improvement. Nonetheless, pushing an ever-growing list of tasks on to the (virtual) desks of healthcare professionals may not be a sustainable approach for improvement.

When A&F is viewed by healthcare professionals as a threat to their identity, rather than as a tool to help them achieve their professional goals, there is the potential to worsen burnout or lead to disengagement from quality efforts. Kluger and DeNisi,Reference Kluger and DeNisi12 in their feedback intervention theory published in 1996, emphasised this risk when feedback is perceived as an overall judgement of the ‘person’ rather than a specific assessment of the ‘task’. ‘My patients are different’ and ‘the data are inadequate’ are commonly heard and may reflect efforts to resolve the dissonance created by highlighting gaps between desired and actual performance. However, professional distrust in the validity of the data may sometimes be appropriate; it can often be difficult to account for differences in patient characteristics using the available data in the audit. Sometimes, too, large sets of quality indicators are provided, summarising a wide range of care, using only aggregated data. It is easy to understand how receiving dozens or hundreds of quality indicators without patient-specific information could be seen as unactionable or overwhelming. It is possible that conducting and communicating casemix adjustment in better ways and demonstrating ‘fair’ comparisons to ‘like peers’, as well as highlighting key, personalised messages stemming from the data, could lead to more positive responses to feedback.Reference Desveaux, Rosenberg-Yunger and Ivers56

A further consideration is that A&F often seeks to encourage changes in practice that could benefit entire populations, while many clinicians think primarily about trying their best, one patient at a time.Reference Ivers, Barnsley and Upshur57 Facilitation along with A&F may be a useful way to address this challenge, but is not standard practice.Reference Sykes, Rosenberg-Yunger and Quigley41 It is possible that as education and skill-building in improvement are increasingly offered as part of healthcare professional education, this perceived tension between patient-specific clinical tasks and improvement work will diminish. It is likely that local social pressures, norms, and culture related to audit (i.e. supportive versus punitive) interact with recipient factors, but sometimes A&F interventions proceed without careful consideration of these factors.

The literature has also identified unintended consequences of A&F. For example, Catlow et al. showed that feedback to endoscopists led to gaming in order to be seen to be achieving targets.Reference Catlow, Bhardwaj-Gosling and Sharp58 However, ‘balance measures’ are rarely included in A&F initiatives. Beyond gaming, another form of unintended consequence is the potential for aspects of care not readily measurable to become devalued. The lack of patient engagement observed in most A&F initiatives may exacerbate this issue. Patients may expect their care to follow measurable guideline recommendations but may also hope that their clinicians truly care about them as a person. Indeed, patients usually want their clinicians to be more than just technicians; unless A&F can find ways to increasingly focus on areas like compassion and shared decision-making, it may limit the (cognitive) space available for improvement on such central elements of healthcare.

5.1.1 Implementation Laboratories

When planning to use A&F, an opportunity to embed scientific activities arises, specifically for the following purposes:

  • Optimising the effects of the A&F initiative to achieve organisational goals

  • Rigorously evaluating so that others conducting A&F can learn from the experience.

There is considerable potential for ‘implementation laboratories’ to both drive improvement at scale and generate robust evidence.Reference Grimshaw, Ivers and Linklater59 Implementation laboratories were proposed as involving embedded, sequential head-to-head trials within an established improvement initiative. These trials would test different ways of enhancing improvement interventions. In the case of A&F, changes to feedback identified as more effective than the current standard would become the new standard, while those that are not would be discarded. The resulting incremental changes could accumulate to deliver larger improvements. This approach fits well with – and, in some ways, helps to formalise – the goals of a learning health system or organisation (for further details, see the Element on learning health systemsReference Foley, Horwitz, Dixon-Woods, Brown and Marjanovic60).

5.2 Limitations in the Current Evidence Base

Better understanding of when A&F is likely to help may be supported by the following:

  • Designing A&F interventions based on an explicit theoretical rationale rather than ‘ISLAGIATT’ (‘It Seemed Like a Good Idea at the Time’) or post hoc rationalisation (Martin Eccles, personal communication, 2025)

  • Using process evaluations alongside randomised trials to illuminate the processes and mechanisms of action in addition to outcomes

  • Improving the reporting of evaluative research.

5.2.1 The Links between Design, Contextual Influences, and Behaviour Change Theory

Improving the understanding of key drivers of clinical behaviour may offer promising avenues for the future development of A&F. For example, in one study, ‘nudges’ based on behaviour change theory effectively prompted clinicians to engage more with their clinical data.Reference Vaisson, Witteman and Chipenda-Dansokho61 Relatedly, changes to the environment and choice architecture that simplify desired responses may help increase the effects of A&F. Linking A&F to required activities (such as those related to certification or accreditation) is another option for increasing engagement. Alternatively, combining public reporting of quality indicators with A&F may be a way to increase professionals’ attention to specific areas requiring improvement.

Understanding the organisational contexts in which A&F occurs is crucial. Clinical performance feedback intervention theory (Section 2.2) views feedback as both a range of behaviour change techniques and an intervention cycle that sits within broader systemic influences. For this reason, an approach that seeks to optimise the design of A&F within its context of use – accounting for the social, cultural, and behavioural dimensions of feedback-driven improvement – may be fruitful. Much existing research has focused on the psychological mechanisms governing behavioural response to feedback at the individual level (i.e. change in personal professional practice). Less is known about the optimal process for dissemination of feedback within an organisation, the internal processes for absorption, action planning, and organisational learning, and how effective organisational responses are enacted. Case studies using behaviour change models such as the theoretical domains framework provide some insight into these processes, but healthcare organisations could benefit from further support for local dissemination, action planning, and integration of feedback from audit within local improvement mechanisms.Reference Sykes, O’Halloran and Mahon62

5.2.2 Economic Perspectives on Audit and Feedback

A systematic review,Reference Moore, Guertin and Tardif63 including 35 studies in which the cost-effectiveness of A&F was assessed, found that the majority showed the potential for reducing costs and improving care. Ultimately, since A&F is usually a low-cost intervention – especially if the audit is automated based on electronic data – the cost-effectiveness will depend on the cost of the targeted clinical behaviours, and the effects of changes in those processes on costly patient outcomes. This is a key area for future research and development, given the problems associated with optimising the impact of feedback from audit, as well as the role that increasing implementation of electronic patient records and other health information technologies may play in facilitating/automating data aggregation, analysis, and dissemination.
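
The arithmetic behind this argument can be illustrated with a simple worked example; every number below is hypothetical and chosen only to show how a low-cost, automated feedback programme with a modest effect can still be cost-saving when the targeted outcome is expensive.

```python
# Hypothetical worked example of A&F cost-effectiveness arithmetic.
# All figures are assumptions for illustration, not data from any study.
practices = 300                      # practices receiving automated feedback
cost_per_practice = 250.0            # assumed annual cost of feedback reports (GBP)
patients_per_practice = 8000         # assumed registered patients per practice
absolute_risk_reduction = 0.0005     # assumed fewer adverse events per patient
cost_per_event = 2500.0              # assumed cost of one adverse event (GBP)

total_cost = practices * cost_per_practice
events_avoided = practices * patients_per_practice * absolute_risk_reduction
savings = events_avoided * cost_per_event
net_saving = savings - total_cost
print(f"events avoided: {events_avoided:.0f}, net saving: GBP {net_saving:,.0f}")
```

Under these assumptions the programme pays for itself many times over, which is why the cost of the feedback itself matters far less than the cost of the behaviours and outcomes it targets.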

Effective use of feedback offers potential advantages over other improvement approaches (such as educational outreach visits or inspections) in terms of reach and cost-effectiveness, particularly given the scope to enhance impact on patient care within existing resources and systems. Increases in efficiency in data collection through routine electronic data aggregation offer opportunities for low-cost feedback interventions at scale. Similarly, although national audit programmes may appear to be relatively costly, even modest effects can potentially be cost-effective if audit programmes build in efficiencies. The cost-effectiveness of A&F interventions is also related to the sustainment of effects, but this is a key area requiring further study.

5.2.3 Equity Considerations

Usually, A&F interventions are selected when there is a well-accepted clinical recommendation that is inadequately implemented (e.g. underuse or overuse). Certain populations may be more affected than others by gaps in implementation. Consider, for example, evidence regarding the overuse of opioids in general and the undertreatment of pain in patients presenting with sickle cell crisis. An A&F intervention that encourages the de-prescribing of opioids could worsen inequity if it does not give nuanced messages and data about the needs of specific patient groups.

The tendency to apply generic A&F at scale, rather than feedback that is highly personalised or locally adapted, may result in different responses across recipients. This could inadvertently risk worsening inequalities in care if only well-resourced and higher performing clinicians and clinical teams are able to engage with the A&F and use it to improve. Further research is needed to understand the nature of this risk and how it might be addressed. Ideally, A&F interventions should both ‘shift the bell curve’ (move the entire population of healthcare professionals closer to desired practice) and ‘tighten the bell curve’ (reduce inappropriate variations in care). How best to use A&F to achieve this dual aim, and to avoid any unintentional exacerbation of inequities, should be a key priority for future research.

5.2.4 Evidence for Selecting Co-interventions

It is very common for A&F to be used alongside other interventions, also known as co-interventions. The 2025 Cochrane Review summarised the effects of combining A&F with various co-interventions, including educational outreach, showing that, in general, this co-intervention amplifies the effects of A&F.Reference Ivers, Yogasingam and Lacroix14 This combination seems most likely to be helpful when knowledge barriers that can be addressed through education are a key challenge in responding to feedback. Decisions about which co-interventions to select should be based on both empirical evidence and the potential for synergy in addressing the underlying determinants of behaviour, but further research is needed to understand when, why, and how to select co-interventions.

6 Conclusions

Audit and feedback is a popular improvement strategy that improves patient care by reviewing clinical performance against explicit standards and using clinician-focused feedback to direct action towards areas not meeting those standards. Audit and feedback generally has modest effects on patient care but, aggregated across large numbers of clinicians and patients, can have an important population health impact, particularly when repeated audit cycles deliver cumulative change. The growing availability of routinely collected healthcare data offers potentially exciting opportunities for improved efficiency and effectiveness of A&F. An increasing body of evidence and theory is available to inform the design and implementation of A&F interventions. Identifying the desired behaviour change sought amongst feedback recipients, ensuring that the intervention components enable those actions, and monitoring for potential unintended consequences are all likely to be important. When planning new A&F initiatives, learning from, and applying, existing evidence is important, while a scientific approach to learning how to optimise the intervention will help to move the field forward. It is also useful to recognise when A&F is less suitable – for example, when performance is already near-optimal, when it is not possible to develop distinct, credible, and measurable standards of care, when the approach may detract attention from unmeasured aspects of care, or when the issue of concern is not within clinicians’ control.

7 Further Reading

Contributors

Noah Ivers led the initial drafting of this Element. Both authors contributed different sections and made critical revisions. Both authors have approved the final version.

Conflicts of Interest

RF receives grant funding from the National Institute for Health and Care Research and chairs the Implementation Strategy Group for the National Institute for Health and Care Excellence. NI declares no relevant conflicts of interest.

Acknowledgements

We have worked with many colleagues from around the world for more than a decade in our efforts to advance the understanding of A&F; we thank them for their contributions to our thinking, which continues to evolve as we collaboratively learn. We also thank the peer reviewers for their insightful comments and recommendations to improve this Element. A list of peer reviewers is published at www.cambridge.org/IQ-peer-reviewers.

Funding

This Element was funded by THIS Institute (The Healthcare Improvement Studies Institute, www.thisinstitute.cam.ac.uk). THIS Institute is strengthening the evidence base for improving the quality and safety of healthcare. THIS Institute is supported by a grant to the University of Cambridge from the Health Foundation – an independent charity committed to bringing about better health and healthcare for people in the UK.

About the Authors

Noah Ivers is a family physician at Women’s College Hospital Academic Family Health Team and Professor in the Department of Family and Community Medicine at the University of Toronto. He uses a range of research methods in the quest to understand how to improve quality of healthcare and is best known for his work to advance how audit and feedback is used to improve health professional practice.

Robbie Foy is Professor of Primary Care at the Leeds Institute of Health Sciences and a general practitioner in Leeds, United Kingdom. His field of implementation research aims to inform policy decisions about how best to use resources to improve the uptake of research findings by evaluating approaches to change professional and organisational behaviour.

Creative Commons License

The online version of this work is published under a Creative Commons licence called CC-BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0). It means that you’re free to reuse this work. In fact, we encourage it. We just ask that you acknowledge THIS Institute as the creator, you don’t distribute a modified version without our permission, and you don’t sell it or use it for any activity that generates revenue without our permission. Ultimately, we want our work to have impact. So if you’ve got a use in mind but you’re not sure it’s allowed, just ask us at enquiries@thisinstitute.cam.ac.uk.

The printed version is subject to statutory exceptions and to the provisions of relevant licensing agreements, so you will need written permission from Cambridge University Press to reproduce any part of it.

All versions of this work may contain content reproduced under licence from third parties. You must obtain permission to reproduce this content from these third parties directly.

Improving Quality and Safety in Healthcare

Editors-in-Chief

  • Mary Dixon-Woods

  • THIS Institute (The Healthcare Improvement Studies Institute)

  • Mary is Director of THIS Institute and is the Health Foundation Professor of Healthcare Improvement Studies in the Department of Public Health and Primary Care at the University of Cambridge. Mary leads a programme of research focused on healthcare improvement, healthcare ethics, and methodological innovation in studying healthcare.

  • Graham Martin

  • THIS Institute (The Healthcare Improvement Studies Institute)

  • Graham is Director of Research at THIS Institute, leading applied research programmes and contributing to the institute’s strategy and development. His research interests are in the organisation and delivery of healthcare, and particularly the role of professionals, managers, and patients and the public in efforts at organisational change.

Executive Editor

  • Katrina Brown

  • THIS Institute (The Healthcare Improvement Studies Institute)

  • Katrina was Communications Manager at THIS Institute, providing editorial expertise to maximise the impact of THIS Institute’s research findings. She managed the project to produce the series until 2023.

Editorial Team

  • Sonja Marjanovic

  • RAND Europe

  • Sonja is Director of RAND Europe’s healthcare innovation, industry, and policy research. Her work provides decision-makers with evidence and insights to support innovation and improvement in healthcare systems, and to support the translation of innovation into societal benefits for healthcare services and population health.

  • Tom Ling

  • RAND Europe

  • Tom is Head of Evaluation at RAND Europe and President of the European Evaluation Society, leading evaluations and applied research focused on the key challenges facing health services. His current health portfolio includes evaluations of the innovation landscape, quality improvement, communities of practice, patient flow, and service transformation.

  • Ellen Perry

  • THIS Institute (The Healthcare Improvement Studies Institute)

  • Ellen supported the production of the series during 2020–21.

  • Gemma Petley

  • THIS Institute (The Healthcare Improvement Studies Institute)

  • Gemma is Senior Communications and Editorial Manager at THIS Institute, responsible for overseeing the production and maximising the impact of the series.

  • Claire Dipple

  • THIS Institute (The Healthcare Improvement Studies Institute)

  • Claire is Editorial Project Manager at THIS Institute, responsible for editing and project-managing the series.

About the Series

  • The past decade has seen enormous growth in both activity and research on improvement in healthcare. This series offers a comprehensive and authoritative set of overviews of the different improvement approaches available, exploring the thinking behind them, examining evidence for each approach, and identifying areas of debate.


References

Steel, N, Bachmann, M, Maisey, S, et al. Self reported receipt of care consistent with 32 quality indicators: National population survey of adults aged 50 or more in England. BMJ 2008; 337: a957. https://doi.org/10.1136/bmj.a957.
Levine, DM, Linder, JA, and Landon, BE. The quality of outpatient care delivered to adults in the United States, 2002 to 2013. JAMA Intern Med 2016; 176(12): 1778–90. https://doi.org/10.1001/jamainternmed.2016.6217.
Zeitlin, J, Manktelow, BN, Piedvache, A, et al. Use of evidence-based practices to improve survival without severe morbidity for very preterm infants: Results from the EPICE population-based cohort. BMJ 2016; 354: i2976. https://doi.org/10.1136/bmj.i2976.
Glasziou, P, Straus, S, Brownlee, S, et al. Evidence for underuse of effective medical services around the world. Lancet 2017; 390(10090): 169–77. https://doi.org/10.1016/S0140-6736(16)30946-1.
Brownlee, S, Chalkidou, K, Doust, J, et al. Evidence for overuse of medical services around the world. Lancet 2017; 390: 156–68. https://doi.org/10.1016/S0140-6736(16)32585-5.
Fleming-Dutra, KE, Hersh, AL, Shapiro, DJ, et al. Prevalence of inappropriate antibiotic prescriptions among US ambulatory care visits, 2010–2011. JAMA 2016; 315(17): 1864–73. https://doi.org/10.1001/jama.2016.4151.
Harrison, S, Finn, R, Godfrey, MM, et al. Clinical Microsystems and Team Coaching. In Dixon-Woods, M, Brown, K, Marjanovic, S, et al., editors. Elements of Improving Quality and Safety in Healthcare. Cambridge: Cambridge University Press; 2025.
Wilson, P, and Kislov, R. Implementation Science. In Dixon-Woods, M, Brown, K, Marjanovic, S, et al., editors. Elements of Improving Quality and Safety in Healthcare. Cambridge: Cambridge University Press; 2022. https://doi.org/10.1017/9781009237055.
Carver, CS, and Scheier, MF. Control theory: A useful conceptual framework for personality – social, clinical, and health psychology. Psychol Bull 1982; 92(1): 111–35. https://doi.org/10.1037/0033-2909.92.1.111.
Brown, B, Gude, WT, Blakeman, T, et al. Clinical performance feedback intervention theory (CP-FIT): A new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci 2019; 14(1): 40. https://doi.org/10.1186/s13012-019-0883-5.
Locke, EA, and Latham, GP. Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. Am Psychol 2002; 57: 705–17. https://doi.org/10.1037/0003-066X.57.9.705.
Kluger, A, and DeNisi, A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull 1996; 119: 254–84. https://doi.org/10.1037/0033-2909.119.2.254.
Ivers, N, Yogasingam, S, Lacroix, M, et al. Audit and feedback: Effects on professional practice. Cochrane Database Syst Rev 2025; 3: CD000259. https://doi.org/10.1002/14651858.CD000259.pub4.
Benjamin, A. Audit: How to do it in practice. BMJ 2008; 336: 1241. https://doi.org/10.1136/bmj.39527.628322.AD.
Brown, B, Peek, N, and Buchan, I. The case for conceptual and computable cross-fertilization between audit and feedback and clinical decision support. Stud Health Technol Inform 2015; 216: 419–23.
Ivers, N, Brown, B, and Grimshaw, J. Clinical Performance Feedback and Decision Support. In Improving Patient Care. Chichester: John Wiley & Sons; 2020, 235–51. https://doi.org/10.1002/9781119488620.ch13.
Brown, B, Balatsoukas, P, Williams, R, et al. Multi-method laboratory user evaluation of an actionable clinical performance information system: Implications for usability and patient safety. J Biomed Inform 2018; 77: 62–80. https://doi.org/10.1016/j.jbi.2017.11.008.
Brazil, V, Purdy, E, and Bajaj, K. Simulation as an Improvement Technique. In Dixon-Woods, M, Brown, K, Marjanovic, S, et al., editors. Elements of Improving Quality and Safety in Healthcare. Cambridge: Cambridge University Press; 2023.
Davis, DA, Mazmanian, PE, Fordis, M, et al. Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA 2006; 296(9): 1094–102. https://doi.org/10.1001/jama.296.9.1094.
Fee, E, and Garofalo, ME. Florence Nightingale and the Crimean War. Am J Public Health 2010; 100: 1591. https://doi.org/10.2105/AJPH.2009.188607.
Codman, E. A Study in Hospital Efficiency: As Demonstrated by the Case Report of the First Five Years of a Private Hospital. Boston: Thomas Todd; 1920.
Donabedian, A. The end results of health care: Ernest Codman’s contribution to quality assessment and beyond. The Milbank Quarterly 1989; 67: 233–56.
Hey Groves, E. Surgical statistics: A plea for a uniform registration of operation results. BMJ 1908; 2: 1008–9.
Lembcke, PA. Medical auditing by scientific methods: Illustrated by major female pelvic surgery. JAMA 1956; 162: 646–55.
Donabedian, A. Evaluating the quality of medical care. The Milbank Memorial Fund Quarterly 1966; 44: Suppl: 166–206. https://doi.org/10.2307/3348969.
Secretaries of State for Health, Wales, Northern Ireland and Scotland. Working for Patients. London: HMSO; 1989.
Bristol Royal Infirmary Inquiry. Learning from Bristol. London: Stationery Office; 2001.
Scally, G, and Donaldson, LJ. Clinical governance and the drive for quality improvement in the new NHS in England. BMJ 1998; 317: 61–5. https://doi.org/10.1136/bmj.317.7150.61.
Roland, M, and Guthrie, B. Quality and outcomes framework: What have we learnt? BMJ 2016; 354: i4060. https://doi.org/10.1136/bmj.i4060.
Jamtvedt, G, Flottorp, S, and Ivers, N. Audit and Feedback as a Quality Strategy. In Busse, R, Klazinga, N, Panteli, D, et al., editors. Improving Healthcare Quality in Europe: Characteristics, Effectiveness and Implementation of Different Strategies. Copenhagen: European Observatory on Health Systems and Policies; 2019 (Health Policy Series, No. 53). www.ncbi.nlm.nih.gov/books/NBK549284/.
Evans, SM, Scott, IA, Johnson, NP, Cameron, PA, and McNeil, JJ. Development of clinical-quality registries in Australia: The way forward. Med J Aust 2011; 194(7): 360–3. https://doi.org/10.5694/j.1326-5377.2011.tb03007.x.
Perlin, JB, Kolodner, RM, and Roswell, RH. The Veterans Health Administration: Quality, value, accountability, and information as transforming strategies for patient-centered care. Am J Manag Care 2004; 10: 828–36.
Asch, SM, McGlynn, EA, Hogan, MM, et al. Comparison of quality of care for patients in the Veterans Health Administration and patients in a national sample. Ann Intern Med 2004; 141: 938–45. https://doi.org/10.7326/0003-4819-141-12-200412210-00010.
Djellouli, N, Shawar, YR, Mwaba, K, et al. Effectiveness of a multi-country implementation-focused network on quality of care: Delivery of interventions and processes for improved maternal, newborn and child health outcomes. PLOS Glob Public Health 2024; 4(3): e0001751. https://doi.org/10.1371/journal.pgph.0001751.
European Observatory on Health Systems and Policies; 2019 (Health Policy Series, No. 53). https://eurohealthobservatory.who.int/publications/all-publications.
Glazier, RH, Hutchison, B, Kopp, A, and Dobell, G. Primary care practice reports: Administrative data profiles for identifying and prioritizing areas for quality improvement. Healthc Q 2015; 18(1): 7–10. https://doi.org/10.12927/hcq.2015.24251.
Wagner, DJ, Durbin, J, Barnsley, J, et al. Beyond quality improvement: Exploring why primary care teams engage in a voluntary audit and feedback program. BMC Health Serv Res 2017; 17(1): 803. https://doi.org/10.1186/s12913-017-2765-3.
Ivers, NM, Taljaard, M, Giannakeas, V, et al. Effectiveness of confidential reports to physicians on their prescribing of antipsychotic medications in nursing homes. Implement Sci Commun 2020; 1: 30. https://doi.org/10.1186/s43058-020-00013-9.
Ivers, NM, and Maybee, A. Ontario healthcare implementation laboratory team: Engaging patients to select measures for a primary care audit and feedback initiative. CMAJ 2018; 190(Suppl): S42–S43. https://doi.org/10.1503/cmaj.180334.
Sykes, M, Rosenberg-Yunger, ZRS, Quigley, M, et al. Exploring the content and delivery of feedback facilitation co-interventions: A systematic review. Implement Sci 2024; 19(1): 37. https://doi.org/10.1186/s13012-024-01365-9.
Desveaux, L, Ivers, NM, Devotta, K, et al. Unpacking the intention to action gap: A qualitative study understanding how physicians engage with audit and feedback. Implement Sci 2021; 16(1): 19. https://doi.org/10.1186/s13012-021-01088-1.
Daneman, N, Lee, SM, Bai, H, et al. Population-wide peer comparison audit and feedback to reduce antibiotic initiation and duration in long-term care facilities with embedded randomized controlled trial. Clin Infect Dis 2021; 73(6): e1296–e304. https://doi.org/10.1093/cid/ciab256.
Brehaut, JC, Colquhoun, HL, Eva, KW, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med 2016; 164(6): 435–41. https://doi.org/10.7326/M15-2248.
Presseau, J, McCleary, N, Lorencatto, F, et al. Action, actor, context, target, time (AACTT): A framework for specifying behaviour. Implement Sci 2019; 14: 102. https://doi.org/10.1186/s13012-019-0951-x.
Willis, TA, Collinson, M, Glidewell, L, et al. An adaptable implementation package targeting evidence-based indicators in primary care: A pragmatic cluster-randomised evaluation. PLoS Med 2020; 17(2): e1003045. https://doi.org/10.1371/journal.pmed.1003045.
Wood, S, Foy, R, Willis, T, et al. General practice responses to opioid prescribing feedback: A qualitative process evaluation. Br J Gen Pract 2021; 71(711): e788–e96. https://doi.org/10.3399/BJGP.2020.1117.
Alderson, SL, Farragher, TM, Willis, TA, et al. The effects of an evidence and theory-informed feedback intervention on opioid prescribing for non-cancer pain in primary care: A controlled interrupted time series analysis. PLoS Med 2021; 18(10): e1003796. https://doi.org/10.1371/journal.pmed.1003796.
Duclos, A, Chollet, F, Pascal, L, et al. Effect of monitoring surgical outcomes using control charts to reduce major adverse events in patients: Cluster randomised trial. BMJ 2020; 371: m3840. https://doi.org/10.1136/bmj.m3840.
Mohammed, MA. Statistical Process Control. In Dixon-Woods, M, Brown, K, Marjanovic, S, et al., editors. Elements of Improving Quality and Safety in Healthcare. Cambridge: Cambridge University Press; 2024.
Wei, X, Zhang, Z, Walley, JD, et al. Effect of a training and educational intervention for physicians and caregivers on antibiotic prescribing for upper respiratory tract infections in children at primary care facilities in rural China: A cluster-randomised controlled trial. Lancet Glob Health 2017; 5(12): e1258–e67. https://doi.org/10.1016/S2214-109X(17)30383-2.
Ayieko, P, Ntoburi, S, Wagai, J, et al. A multifaceted intervention to implement guidelines and improve admission paediatric care in Kenyan district hospitals: A cluster randomised trial. PLoS Med 2011; 8(4): e1001018. https://doi.org/10.1371/journal.pmed.1001018.
Ivers, NM, Grimshaw, JM, Jamtvedt, G, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med 2014; 29(11): 1534–41. https://doi.org/10.1007/s11606-014-2913-y.
Foy, R, Skrypak, M, Alderson, S, et al. Revitalising audit and feedback to improve patient care. BMJ 2020; 368: m213. https://doi.org/10.1136/bmj.m213.
Willis, TA, Wood, S, Brehaut, J, et al. Opportunities to improve the impact of two national clinical audit programmes: A theory-guided analysis. Implement Sci Commun 2022; 3(1): 32. https://doi.org/10.1186/s43058-022-00275-5.
Desveaux, L, Rosenberg-Yunger, ZRS, and Ivers, N. You can lead clinicians to water, but you can’t make them drink: The role of tailoring in clinical performance feedback to improve care quality. BMJ Qual Saf 2023; 32(2): 76–80. https://doi.org/10.1136/bmjqs-2022-015149.
Ivers, N, Barnsley, J, Upshur, R, et al. ‘My approach to this job is … one person at a time’: Perceived discordance between population-level quality targets and patient-centred care. Can Fam Physician 2014; 60(3): 258–66.
Catlow, J, Bhardwaj-Gosling, R, Sharp, L, et al. Using a dark logic model to explore adverse effects in audit and feedback: A qualitative study of gaming in colonoscopy. BMJ Qual Saf 2022; 31: 704–15. https://doi.org/10.1136/bmjqs-2021-013588.
Grimshaw, JM, Ivers, N, Linklater, S, et al. Reinvigorating stagnant science: Implementation laboratories and a meta-laboratory to efficiently advance the science of audit and feedback. BMJ Qual Saf 2019; 28(5): 416–23. https://doi.org/10.1136/bmjqs-2018-008355.
Foley, T, and Horwitz, L. Learning Health Systems. In Dixon-Woods, M, Brown, K, Marjanovic, S, et al., editors. Elements of Improving Quality and Safety in Healthcare. Cambridge: Cambridge University Press; 2025. https://doi.org/10.1017/9781009325912.
Vaisson, G, Witteman, HO, Chipenda-Dansokho, S, et al. Testing e-mail content to encourage physicians to access an audit and feedback tool: Factorial randomized experiment. Curr Oncol 2019; 26(3): 205–16. https://doi.org/10.3747/co.26.4829.
Sykes, M, O’Halloran, E, Mahon, L, et al. Enhancing national audit through addressing the quality improvement capabilities of feedback recipients: A multi-phase intervention development study. Pilot Feasibility Stud 2022; 8(1): 143. https://doi.org/10.1186/s40814-022-01099-9.
Moore, L, Guertin, JR, Tardif, PA, et al. Economic evaluations of audit and feedback interventions: A systematic review. BMJ Qual Saf 2022; 31(10): 754–67. https://doi.org/10.1136/bmjqs-2022-014727.
Table 1 Examples of A&F across different countries [31]

Adapted from a monograph from the European Observatory on Health Systems and Policies [31] and other sources.
Table 2 Suggestions for optimising performance feedback

Adapted from Brehaut et al. [44]
