Introduction
Recent debates on the ‘how’ of behavioural public policy (BPP) rightly focus on understanding mechanisms for behaviour change (Grüne-Yanoff, 2016; Strassheim, 2021) and on the heterogeneity of treatment effects from a given behaviour change intervention (Bryan et al., 2021). We are increasingly nuanced in specifying the target audience and target behaviour (Dewies and Reisch, 2025), understanding them in granular detail across geography, culture and time (Michie et al., 2009; Schimmelpfennig and Muthukrishna, 2023), and applying advanced computational social science methods (Veltri, 2023). But something is missing from these debates.
To enhance our field’s capacity to deliver significant, appropriate and even transformational behaviour change, we must also consider qualitative methods. In this Perspective, I argue that qualitative methods can improve the identification of behaviour change goals, the design of interventions and the analysis of their effectiveness. Qualitative methods are diverse in their approaches and methodological foundations, but share elements that will appeal to BPP scholars: they embrace messiness and complexity, they emphasise richness and detail, and they strive to make sense of human behaviour and experience (Bryman, 2016). Qualitative methods are also flexible in their application: they can be incorporated into both inductive and deductive research designs. Common approaches to data gathering include interviews and focus group discussions, observation and ethnography; analysis techniques include coding and thematic analysis, discourse analysis and process tracing.
We know from previous literature that policy evaluations and experiments can benefit from qualitative insights. There is now a well-established critique of the RCT as the ‘gold standard’: it does not open the black box of causal effects; because it privileges quantitative measurement, it may overlook outcomes that are best measured qualitatively; and its emphasis on counterfactual analysis crowds out a more qualitative, factual analysis of what actually happened, which can be particularly relevant in the much less tightly controlled environment of field experiments (Hesse-Biber, 2013; White, 2013). Further, policy makers want varied evidence and analyses in evaluations of what works (van Bavel and Dessart, 2018; Varazzani et al., 2023).
Despite these now familiar arguments, most behavioural science studies default to a narrow range of approaches and data: online survey experiments to test for changes in intentions and attitudes, and field experiments to test for changes in behaviour and policy outcomes. A majority of submissions to the Behavioural Public Policy journal present quantitative indicators and statistical analysis only (Galizzi, 2024). With this may come the statistical rigour of large-n analyses and precision around the results of hypothesis tests. But there are lost opportunities too if qualitative insights are forgone.
I aim to address this gap with an overview of why qualitative methods should be a valued part of the BPP toolkit, and how they can be applied, with a focus on combining them with familiar quantitative methods. In the next section, I identify ways in which qualitative approaches can improve the design of BPP, through the dual routes of better understanding both the challenge and the solutions. I provide illustrative examples drawn from studies of health, active travel and anti-social behaviour to show how qualitative methods can be used in BPP research. I then review common barriers to working with qualitative methods and how they might be countered. Finally, I lay out an agenda for action, with distinct recommendations for researchers, reviewers and editors, policy makers, and higher education, policy and funding institutions.
What can qualitative approaches offer?
Qualitative methods and data can be incorporated into any stage of the policy analysis cycle (identifying the problem, designing a solution, testing and evaluating it) and any stage of a specific intervention (before, during and after). In the early stages, qualitative data can help interrogate the assumptions researchers and policy makers bring to the table, for example by delving into the lived experiences of the target group. What is the problem we expect to see, and does the data confirm that it is indeed the most significant barrier to the desired behaviour change and policy outcome? Once an intervention is underway, we will want to know: how is it implemented in practice, and how do target individuals engage with it, if at all? Qualitative data and analysis can answer these questions and more, facilitating a better understanding of the problem and of when solutions work well. Together, these add up to more appropriate policy design and implementation, and either factor gives the intervention a higher chance of success (see Figure 1).

Figure 1. How qualitative methods can enhance behavioural public policy.
Better understanding of the problem
It is not always clear what the drivers of a target behaviour are, which barriers to behaviour change matter most, and which levers are most likely to shift behaviour. Getting the diagnosis right is a prerequisite for getting the policy design right (Lunn, 2020). The more transformative we hope to be, the more methods we will need to understand the problem and the precursors to behaviour change (Krpan, 2024). This is asking quite a lot; it is tricky to identify a problem accurately. For example, in their work on social policy, Hall et al. (2014) argue that assumptions about preferences and decision making in the context of poverty can be flawed or inadequately understood by academics and policy makers. This can contribute to less effective policy design and implementation in fields as varied as banking, healthy food choices and housing.
Qualitative inquiry can offer a deeper understanding of the problem, of the context, and of the presence of multiple challenges or decision-making hurdles that may require either sequenced or simultaneous action. This is illustrated in a qualitative meta-synthesis of housing choice voucher programmes in the US by Graves (2019). The study documents the cognitive load that a violent neighbourhood places on individuals: relocation decisions can be significantly affected by the desire to escape violence, which can take precedence over economic factors. Households receiving a housing voucher were not able to exercise the full ‘freedom of choice’ that programme designers intended. This finding helps explain why evaluations of housing choice voucher programmes have often yielded unexpected results. It implies that such programmes should include neighbourhood safety outcomes alongside employment and schooling outcomes and, more generally, that policy design and expectations should be consistent with lived realities. Qualitative approaches, with their emphasis on people’s experiences and the rationale behind citizen preferences and choices, can help ensure that policy is indeed consistent with lived realities (van Bavel and Dessart, 2018).
Better evidence on solutions
Heterogeneity of treatment effects
There will be diversity within and across any given target population, and heterogeneity of treatment effects is widely understood and embraced in the BPP field. However, by relying only on quantitative data, we may assume that the most important sources of heterogeneity are already known to us (these are the indicators we set out to collect data on). Qualitative inquiry allows for the discovery of new factors, and of overlaps between multiple factors, that can determine how well an intervention actually works. Qualitative data gathered during an intervention or in a follow-up phase can be triangulated with statistical outcome data to contextualise and/or corroborate a statistical finding, shedding new light on potential sources of heterogeneity underlying average treatment effects.
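To make this triangulation step concrete, the sketch below (in Python) joins coded interview themes to trial outcomes and checks whether treated-versus-control differences vary across a theme-defined subgroup. The participant data, column names and the theme label time_pressure are entirely hypothetical; this is a minimal illustration under those assumptions, not a full moderation analysis.

```python
# A minimal sketch (hypothetical data and column names) of triangulating
# coded interview themes with trial outcome data.
import pandas as pd

# Trial outcomes: one row per participant (hypothetical fields).
outcomes = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "treated":        [1, 1, 1, 0, 0, 0],
    "outcome":        [0.9, 0.1, 0.8, 0.4, 0.3, 0.5],
})

# Follow-up interviews: themes assigned during qualitative coding,
# e.g. a barrier ("time_pressure") discovered only through interviews.
themes = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "time_pressure":  [False, True, False, True, False, False],
})

merged = outcomes.merge(themes, on="participant_id")

# Compare treated-vs-control means within each theme-defined subgroup.
# With real data this would feed a proper moderation analysis; here it
# simply surfaces whether the qualitatively discovered factor tracks
# the outcome data.
for flag, group in merged.groupby("time_pressure"):
    diff = (group.loc[group.treated == 1, "outcome"].mean()
            - group.loc[group.treated == 0, "outcome"].mean())
    print(f"time_pressure={flag}: treated-control difference = {diff:.2f}")
```

In this toy example, the treated-minus-control difference is positive only for participants who did not report time pressure, which is the kind of pattern that would prompt further quantitative and qualitative follow-up.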
Acceptability
Acceptability is something that might be assumed when nudging; after all, we maintain (through the ‘libertarian’ part of libertarian paternalism) the right to choose one’s own way. But it is entirely possible to overestimate the acceptability of an intervention that retains freedom of choice. This is evident from studies that identify backlash effects of nudge interventions on actual and intended behaviour (de Jonge et al., 2018; Dewies et al., 2021; Sprengholz et al., 2022). Where an intervention is more top-down and mandates change, qualitative insights can be instructive for understanding levels of acceptability, what these imply for (present or future) compliance, and the reasoning behind them (Dewaelheyns et al., 2025).
Qualitative inquiry that investigates how people feel about different policy intervention approaches can do better at uncovering what a ‘5 out of 10’ or a ‘neither approve nor disapprove’ response to a survey question actually means. Does it mean that the respondent is genuinely indifferent? Or that they understand what the intervention is trying to do but have enough reservations that they would not take it up or comply themselves? Could it even engender a sense of frustration or irritation, which might push behavioural intentions against the desired direction? With closed survey questions alone, using pre-defined response options, we cannot derive this level of nuance. Data collection on public attitudes and beliefs that allows researchers to probe, follow up and understand such ‘middle ground’ responses can generate a richer understanding of how individuals might approach and respond to a behaviour change intervention.
Unintended consequences and spillovers that could lead to null or negative treatment effects
Most BPP begins with a desired outcome, for example reduced carbon emissions, better recycling, less smoking or more exercise. We know what the intended changes look like, and we set about capturing them with one or more statistical indicators, and perhaps with levels of outputs and outcomes to shed light on an entire theory of change. So far, so good, but it would of course be an error to assume this is the whole picture. Any intervention can have unintended consequences, and the problem is worse when those consequences are also unanticipated: we may not know to gather data on them, leaving a blind spot in our analysis of what worked and why. This is not necessarily a problem of poor design, since every context is different, and even tried and tested interventions may collide with unexpected issues in other places or times.
Qualitative approaches make a virtue of open-ended inquiry. Asking questions that allow individuals to talk about their lived experiences can lead to unexpected discoveries. Observing participants at the time and place where they interact with a given choice architecture might yield insights that we did not know to look for. Ex ante, these insights might change the intervention design; ex post, they can help explain take-up, engagement and usage, or their absence. This is not about identifying where trials have gone wrong, although it could help with that objective. Rather, such qualitative data gathering can help researchers plan and undertake process evaluations, and understand the intensity of treatment received and the contextual or implementation issues that may have unexpectedly interacted with the intervention (Saunders et al., 2005). Qualitative data can also draw attention to potential spillovers, positive or negative. Negative spillovers might explain why an intervention’s effects were recorded as zero in the quantitative data, despite the intervention being delivered as intended.
Anyone designing an experiment (at least for the second time) will know to build in checks and measures to establish whether a null result can be argued to be a true null effect. This is a pragmatic and scientific approach to interpreting one possible result, even if it is not the anticipated one. Yet qualitative approaches are under-utilised at this stage, where they could help with understanding a whole range of eventualities: whether a null result is due to the survey or trial not quite operating as planned, whether the treatment was interpreted as intended, whether the significance and salience of other factors outweighed the treatment, or whether there were pockets of positive treatment effects that were not clearly visible in the statistical analysis of average treatment effects.
Some examples of qualitative approaches to understanding behaviours and behaviour change
In Table 1, I provide three illustrative examples of studies that have applied qualitative methods to contextualise a behaviour change problem, better understand statistical results and theoretical frameworks, and enhance understanding of unintended consequences. The first case incorporates qualitative data and analysis in a randomised field experiment on health behaviour change; the second showcases interviews and photo elicitation methods as part of a longitudinal study on active travel; and the third uses interviews to design and evaluate a campaign against anti-social behaviour, uncovering unintended consequences. These examples highlight that qualitative approaches can add value as part of a mixed methods approach (Johnson et al., 2007), combining with quantitative research methods in diverse ways (Creswell, 2003).
Table 1. Illustrative examples integrating qualitative methods into behavioural public policy

Potential critiques and barriers to undertaking qualitative inquiry
The BPP toolkit is always evolving, and yet qualitative approaches give behavioural researchers pause, and may even seem a daunting prospect compared with more familiar ways of working. In the following, I review common critiques and concerns, and offer counter-arguments for researchers seeking to persuade sceptical colleagues or reviewers.
‘Qualitative data gathering is expensive’
• Qualitative approaches prioritise in-depth and rich exploration of data and do not require sample sizes of the magnitude required for statistical analysis. Using the concept of saturation in the data (Guest et al., 2006) means that sample sizes in the low double digits are normal, or even in the single digits if multiple forms of qualitative data gathering are used for triangulation (such as interviews alongside observation); see the sketch after this list for one practical way of monitoring saturation.
• Online methods for recruitment and data gathering are now easy to use, widely accepted, and offer cost savings relative to in-person events.
• Researcher time will need to be costed in, perhaps for second or third coders and for iterative development of thematic analysis, but for a team effort this is unlikely to exceed the time budgeted for statistical data cleaning and analysis.
• Preventing design and implementation flaws by drawing on qualitative data can prove cost-effective for BPP overall.
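As a rough illustration of how saturation can be monitored in practice, the Python sketch below counts how many new codes each successive interview contributes; a run of interviews adding nothing new is one practical signal that saturation may have been reached. The code labels are invented for the example.

```python
# A minimal sketch of monitoring saturation: track how many new codes
# each successive interview contributes. The codes here are hypothetical.
interview_codes = [
    {"cost", "distrust", "habit"},          # interview 1
    {"cost", "time_pressure"},              # interview 2
    {"habit", "distrust", "social_norms"},  # interview 3
    {"cost", "habit"},                      # interview 4 (no new codes)
    {"social_norms", "time_pressure"},      # interview 5 (no new codes)
]

seen: set[str] = set()
for i, codes in enumerate(interview_codes, start=1):
    new = codes - seen       # codes not encountered in earlier interviews
    seen |= codes
    print(f"Interview {i}: {len(new)} new code(s) -> total {len(seen)}")

# Successive interviews yielding no new codes is one practical signal
# that thematic saturation may have been reached (Guest et al., 2006).
```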
‘Qualitative approaches are too time-consuming’
• Qualitative data gathering can be incorporated into intervention and survey design phases, into survey data collection via open text boxes, and run alongside field experiments to gather data on implementation and participant experiences. While follow-up activities might extend the overall data collection phase, the benefits of having post-intervention data often outweigh the time cost.
• Analysis may require more than one researcher but can be run efficiently using conventional software (e.g., NVivo) as well as AI tools that support thematic analysis (De Paoli, 2024; Turobov et al., 2024).
‘Qualitative findings will not be seen as robust’
• Different criteria of validity and rigour need to be applied, for example when evaluating whether a sample is large enough, or when asking questions about researcher subjectivity and the replicability of analyses. After all, qualitative data answer different questions from those answered by quantitative data, so findings should be assessed for robustness and reach in different ways.
• Thematic analysis of interview or focus group transcripts lends itself well to counts and to comparisons of how different researchers coded the same data; one simple agreement statistic is sketched after this list. AI tools offer further opportunities to test the reproducibility of thematic findings and to explore alternative interpretations and codings of text data.
• Qualitative approaches can fit well with open science principles. Exploratory work can be included in pre-registration, just as statistical pre-analysis plans may flag scenarios in which exploratory analysis will be undertaken. Importantly, qualitative inquiry is no different from statistical analysis when it comes to researchers specifying whether confirmatory or exploratory analysis was undertaken. In fact, it may offer more opportunity for transparent reflexivity and for documenting the process of sense-making (e.g., with abductive thematic analysis).
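One common way of quantifying inter-coder agreement is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The Python sketch below computes it from scratch for two coders labelling the same ten interview excerpts; the coder labels are hypothetical.

```python
# A minimal sketch of checking inter-coder agreement on thematic codes
# using Cohen's kappa. Two (hypothetical) researchers code the same
# ten interview excerpts with one label each.
from collections import Counter

coder_a = ["cost", "habit", "cost", "distrust", "habit",
           "cost", "distrust", "habit", "cost", "habit"]
coder_b = ["cost", "habit", "distrust", "distrust", "habit",
           "cost", "cost", "habit", "cost", "habit"]

n = len(coder_a)
# Proportion of excerpts on which the two coders agree.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Agreement expected by chance, from each coder's label frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```

With these toy labels, raw agreement is 0.80 and kappa is roughly 0.69; reporting both, alongside a discussion of where and why coders disagreed, is one way of demonstrating rigour to a quantitatively minded reviewer.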
An agenda for action
This article does not pit quantitative methods against qualitative ones; it is a call for more studies to draw on an optimal combination of both. I believe the number of researchers in the BPP field who either conduct qualitative work or are intellectually curious about the work done by others exceeds the number of published studies using qualitative or mixed methods research designs. This is a gap the BPP scholarly community can and should address to enrich our research and policy-facing activities. In closing, I offer some suggestions for smarter use of qualitative methods in BPP research.
First, as researchers, we need to incorporate qualitative methods more fully within our research. This requires planning and foresight to secure appropriate time, expertise and resources in research designs and grant applications. Qualitative methods could be applied at an early stage to support intervention design and feasibility assessment, through observation during intervention phases, or at a later stage alongside follow-up activities. Qualitative data could be gathered through interviews, observation or open-ended survey questions. Thematic analysis of these data can be both theory-driven (specified beforehand) and data-driven (allowing new themes to emerge). These activities fit neatly with the field’s growing emphasis on understanding causal mechanisms and on scaling up across policy settings. Much of this practice will be familiar to researchers working with policy partners, and part of our task now is to ensure that good practice in applied settings is equally reflected in academic work and publications.
Second, as reviewers and editors, we need to recognise the merit of qualitative data and analysis, and appreciate the different ways of appraising validity and rigour with qualitative datasets and findings (Cartwright and Igudia, 2024). Researchers are more likely to invest effort in preparing qualitative analyses for journal submission if they believe their work will receive a fair and open-minded reading, and an informed peer review.
Third, policy makers have an important role to play as collaborators in and co-designers of qualitative inquiry. They are especially well placed to generate and signal demand for qualitative data and analysis by seeking, facilitating and funding BPP research that incorporates and values qualitative insights.
Fourth, responsibility also rests with higher education institutions, grant-making bodies, policy institutions and scholarly communities to fund mixed methods and qualitative research, facilitate more interdisciplinary training, incentivise collaboration across fields, and bring together researchers with complementary expertise on BPP challenges.
A fundamental change in our openness towards relatively unfamiliar data and methods might seem an ambitious ask. The approach suggested here breaks that change down into discrete, feasible steps, each requiring behaviour change from a different stakeholder. These actions can enhance both the demand for and the supply of robust qualitative work in BPP research. If authors expand their research designs, if grant bodies support those designs, if reviewers apply appropriate standards of robustness, if editors value qualitative and mixed methods submissions and nurture reviewer pools with more diverse methodological expertise, and particularly if all of this happens together, then we will see a growth in mixed methods research in BPP. If members of the BPP community serving in these different roles act in unison, progress is possible.