
Benefit-Cost Analysis of Public Safety: Facing the Methodological Challenges

Published online by Cambridge University Press:  30 November 2017

Maria Ponomarenko
Adjunct Professor of Law, New York University School of Law, New York, NY 10012, USA
Barry Friedman*
Jacob D. Fuchsberg Professor of Law, New York University School of Law, New York, NY 10012, USA


Although more than 100 billion dollars is spent each year on policing, we know very little about what works, and still less about whether the benefits of various policing policies and practices outweigh the costs. In particular, although there has been some important work done to assess the effects of various practices, and even to monetize some of the benefits of reducing crime, there has been virtually no attention paid to the other side of the benefit-cost equation: the social costs that particular policing practices potentially can impose. In February 2017, the Policing Project at NYU School of Law held a conference aimed at jumpstarting the use of benefit-cost analysis to assess policing practices, and to begin to tackle the many methodological challenges to doing so. Here, we provide an overview of the existing literature, identify the serious gaps that remain, and sketch out a research agenda for moving forward.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence, which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© Society for Benefit-Cost Analysis 2017

1 Introduction

In the past few years we have seen notable – perhaps unprecedented – turmoil around policing. Events in Ferguson, Missouri following the death of Michael Brown raised sharp concern about the militarization of the police (Beavers & Shank, 2014; Szoldra, 2014). Other police shootings have spawned protests, movements such as Black Lives Matter, and calls to re-examine the use of force. Disclosures about police use of various surveillance technologies – from drones to domain awareness systems – have prompted calls for oversight boards and privacy assessments. Litigation around stop-and-frisk in major metropolitan areas – such as Floyd v. City of New York – brought societal awareness of the practice, and engendered sharp debate (Floyd, 2014).

This turmoil has had its predictable backlash. Black Lives Matter was met with Blue Lives Matter. Police officials warned of a “Ferguson effect” in which police officers under sharp scrutiny engage in “de-policing.” Attorney General of the United States, Jeff Sessions, ordered a review of existing U.S. Department of Justice efforts around policing reform, and made clear his view that to curb gun violence what is needed is more policing and less scrutiny of the police (Kaczynski, 2017).

This may, on its face, seem like a classic political struggle. Debates about police and policing raise, in the minds of many, a tension between safety and security, on the one hand, and liberties on the other. People also point to fissures between those who want more policing, and those who object to how they are being policed – typically poor communities, communities of color, and other marginalized groups. Participants in these debates properly raise basic questions of fairness and equity, as well as public safety.

But these heated debates may reflect equally a shortfall of research. Much of the debate that is occurring over these policing practices is happening in the absence of clear facts, and certainly in the absence of good and reliable science. We spend over 100 billion dollars a year on public safety in this country, and we know far too little about what works (US Bureau of Justice Statistics, 2013; Fackler, Henrichson, Janszky & Neusteter, 2017, this issue). Even where we know what works – and we discuss below some of the important work in this area – we often know far too little about whether those measures are good policy.

In this fraught space, there are three questions with which we should all be concerned. First, are the measures that we take to keep us safe efficacious? Second, even if they are efficacious, are the benefits we are achieving reasonable in light of the costs, particularly social costs, that are incurred? And finally, even if the benefits are worth it, are there distributive costs about which we should be concerned, which is to say, are the costs of policing falling inordinately or inappropriately on some segment of society?

Elsewhere in government, on matters on par with public safety, we employ some form of benefit-cost analysis (BCA) to assess our policies and practices (e.g., Executive Order 12866, 1993; Executive Order 13563, 2011; Hahn & Sunstein, 2002). Benefit-cost analysis is a valuable tool because it asks for a comprehensive assessment of all the costs and benefits of a policy, including alternatives. Yet, as a 2014 report from the Vera Institute of Justice pointed out, BCA is not widely used in the criminal justice field (Matthies & Chiu, 2014).Footnote 1

Observing the serious gap in scholarship in this area, the Policing Project at New York University School of Law – with support from the Laura and John Arnold Foundation – organized a convening of scholars to tackle these issues. Criminologists were an important part of the group, but given the gaps in knowledge in this area, we widened the circle to include methodologists from a variety of fields, including environmental, political, health, and economic sciences. In order to establish something approaching a level playing field, we circulated a literature review, developed a series of germane hypotheticals, and offered up writing prompts. All the participants wrote thought pieces addressing the set of questions we consider here. The Journal of Benefit-Cost Analysis offered us this symposium to present a sample of that good work; the full Conference Report, and the entire set of thought pieces, are available on the Policing Project website.

In this Foreword, we seek to accomplish three things. We begin, in Part I, by summarizing the state of the field, providing a sampling of the work that has been done to date to quantify the costs of crime, as well as the efficacy of some techniques designed to enhance public safety. Despite this work, there remain critical questions that have not been tackled – or tackled adequately – in the literature. We turn to these in Part II, singling out two in particular. First, there has been virtually no work done to incorporate into BCA of policing the social costs imposed by a particular policing practice itself. Second, too often ignored are the notable distributive costs associated with policing, which is to say that even if certain policing measures are effective, the cost of pursuing them often falls unequally in society. After indicating the importance of incorporating these features into any BCA of policing practices, we turn – in Part III – to some of the most difficult challenges to conducting BCA of policing practices along the lines we urge. It is enormously difficult to construct field experiments to assess accurately the benefits of policing tactics; randomized controlled trials (RCTs) and even quasi-experiments are not always readily available or ethically appropriate. And many of the social costs that have been omitted from prior studies are not easily susceptible to monetization. Still, in Part III we offer tangible suggestions and takeaways from the conference and its papers, many of which were about facing these challenges. We discuss techniques for taking advantage of various discontinuities to analyze the benefits of policing approaches, and we explore how contingent valuation and other approaches can be used to monetize intangible costs.

Two critical lessons emerged from this collection of papers and the conference discussions. First, and foremost, it is imperative that researchers turn their attention to conducting comprehensive BCA of policing practices. Public safety is foundational; it is essential that it be assured, and in a way that is most attentive to the competing benefits and costs. Second, although there are indeed challenges, they can be addressed, as they have been in other areas in which BCA has proven an invaluable tool in social planning.

2 Defining the Terrain

There is a body of important social scientific work in the public safety and policing field, a small proportion of which we cite below. For an overview, one might look at the excellent papers in Welsh, Farrington and Sherman (2001). The efforts we hope to advance here build on the shoulders of all those who have been leaders in this field for so long. We have been privileged to learn from these scholars. The gaps we point to in Part II – although notable – should in no way detract from the work that has been done to date.

Both before and at the conference, in order to make our work concrete, we relied upon a series of hypotheticals as prompts for discussion. The hypotheticals mirror the kinds of questions and difficulties that public safety officials face on a daily basis. We will use two of those hypotheticals to frame our discussions here.

The first hypothetical reflected the sorts of challenges that departments often face in deciding whether to acquire and deploy new technologies, such as surveillance technologies. We used the question of whether the fictional city of Westville should spend the money to acquire license plate reader (LPR) technology. LPRs can be stationary or mounted on the rear of patrol cars. They use optical character recognition to scan in license plates, looking for “hits” in law enforcement databases, such as one including the plates of stolen vehicles. Although LPR technology originally was focused on auto theft, its use has been expanded to tracking down suspects with outstanding warrants, to the enforcement of municipal fines (such as unpaid parking tickets), and even to tracking down those who owe municipal taxes. License plate data also can be stored in databases and used subsequently in criminal investigations – for example by providing police with a list of cars near the scene of a crime.

The question, of course, is whether the costs of LPRs justify the benefits. The units can be expensive, and there also are data storage costs. The enforcement benefits are those described above. Yet with some of those benefits come additional costs, to privacy. There are complicated questions of how long data should be stored, and on what basis they should be accessed. Can resident data be stored indefinitely, and used to track the travels of vehicles?

This scenario is hardly hypothetical. Jurisdictions have been acquiring LPRs at a rapid pace: a 2014 RAND report estimated that 70% of departments were using LPRs, with 85% expressing interest (Gierlack, LaTourrette, Anderson, Mayer & Zmud, 2014). In addition, LPRs have been experiencing a sort of mission creep from their early days as a tool to address auto theft, to much broader uses today. Some departments build and store databases of license plate location information for use in criminal investigations; others use them for enforcement of outstanding traffic tickets. Mission creep has been accompanied by data creep, as more jurisdictions have begun to store the data for investigative purposes, or to contract with third-party vendors like Vigilant Solutions to share their data.Footnote 2

The question is whether the costs of acquiring and utilizing LPRs justify the benefits, and it is here that we begin our brief review of the state of the field.

2.1 What works?

The first question in any BCA is whether the intervention is efficacious. Does it yield any benefits? If not, there is no sense in pursuing the question further.Footnote 3

Excellent work is being done across a range of domains to assess the effects of policing policies and practices. Chief among them at present is work on hot-spot policing. Study after study seems to confirm that flooding criminogenic hot spots with police reduces crime, seemingly with minimal displacement (Ariel, Weinborn & Sherman, 2016; Weisburd & Green, 1995). Other policing interventions also have been the subject of examination, including stop-and-frisk (e.g., Rosenfeld & Fornango, 2014; Weisburd, Wooditch, Weisburd & Yang, 2016), consent searches (e.g., Lichtenberg & Smith, 2001), and closed-circuit television cameras (e.g., Caplan, Kennedy & Petrossian, 2011; Welsh & Farrington, 2009). Outside of the policing space, researchers have assessed a variety of public safety initiatives and diversion programs aimed at reducing crime without incarceration. (For a recent meta-analysis, see Wilson & Hoge, 2013.)

Even acknowledging all the work being done, however, there still are wide domains in policing practices, tactics, and technologies about which we know very little. An instructive example is LPRs. Two early studies assessing the value of LPRs were largely inconclusive (Lum, Hibdon, Cave, Koper & Merola, 2011; Taylor, Koper & Woods, 2011). One of the studies showed that LPRs can substantially increase the rate at which officers recover stolen vehicles, but neither study found there to be any reduction in vehicle thefts as a result. As we discuss in greater detail in Part III, even these results are uncertain, given some of the methodological complications that these studies faced. For many other technologies and tactics, our knowledge is more limited still. When it comes to policing, it is difficult to argue that even this first question of efficacy – on which there has been by far the most work done to date – has been satisfactorily addressed.

2.2 Is it worth it?

Even if a tactic or technology works, the next question is whether it is worth it. Answering this question in BCA terms requires putting both the costs and benefits of a particular practice on a comparable scale – typically expressed in monetary terms. In short, one must quantify both the costs and benefits of a particular tactic and then compare them.
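The basic arithmetic of this comparison can be sketched in a few lines of code. The sketch below computes a net benefit and a benefit-to-cost ratio for a hypothetical LPR deployment; every dollar figure is invented for illustration, and the omission of social costs is deliberate, since that omission is exactly the gap this Foreword discusses.

```python
# Illustrative net-benefit calculation for a hypothetical policing
# intervention. All figures are invented for demonstration only.

def net_benefit(benefits: dict, costs: dict) -> float:
    """Return total monetized benefits minus total monetized costs."""
    return sum(benefits.values()) - sum(costs.values())

def benefit_cost_ratio(benefits: dict, costs: dict) -> float:
    """Return the ratio of total monetized benefits to total costs."""
    return sum(benefits.values()) / sum(costs.values())

# Hypothetical annual figures (USD) for an LPR deployment.
benefits = {
    "recovered_vehicles": 420_000,
    "cleared_warrants": 150_000,
}
costs = {
    "equipment": 180_000,
    "data_storage": 60_000,
    "officer_time": 90_000,
    # Social costs (privacy, community trust) are omitted here -- the very
    # gap the authors identify. A complete BCA would monetize them too.
}

print(net_benefit(benefits, costs))                   # 240000
print(round(benefit_cost_ratio(benefits, costs), 2))  # 1.73
```

A decision-maker relying on this calculation would see a comfortably positive ratio; whether it survives the addition of monetized social costs is the open question.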

2.2.1 Valuing the benefits

One of the principal goals of policing is to drive down crime. In assessing the benefits of public safety interventions, one must first have some way of monetizing the cost of crime (and thus the benefit of bringing it down).

A fairly extensive literature – in both economics and criminology – attempts to measure the costs of crime to society, both in the aggregate and on a per-crime basis (e.g., McCollister, French & Fang, 2010; Logan, Walker & Hoyt, 2012; Moore & Shepherd, 2006). These studies include estimates of tangible costs (e.g., medical bills, property damage, neighborhood property values, and security expenditures) as well as intangible costs (e.g., pain and suffering, fear of crime, and reduced quality of life).

The studies take a variety of approaches, which, as we discuss in Part III, could be used to value other costs and benefits as well. First, there are studies that take a “bottom up” approach, attempting to measure the actual costs incurred by the victims of crime through an “accounting” method (e.g., adding up tangible costs incurred by crime victims, including lost wages, medical expenses, and lost property) and by looking to civil jury awards as a proxy for intangible costs incurred (e.g., Cohen & Miller, 1998).Footnote 4 Some of the studies used a combination of both approaches to come up with possible estimates for the cost of various crimes (Cohen, 1988; Post, Mezey, Maxwell & Wibert, 2002). The principal shortcoming of direct or “bottom up” approaches is that they understate the total cost of each crime to society: crime victims are not the only ones who are harmed – family, friends, and community members also bear indirect costs of crime.
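The “accounting” logic of the bottom-up approach reduces to simple addition: sum a victim’s tangible losses, then add a jury-award proxy for intangible harm. The figures below are hypothetical placeholders, not values from any study cited above.

```python
# "Bottom-up" accounting sketch for the per-crime cost of a hypothetical
# assault. All dollar figures are invented placeholders.
tangible = {
    "lost_wages": 3_200,   # wages lost while recovering
    "medical": 8_500,      # medical bills
    "property": 1_100,     # damaged or stolen property
}
# Civil jury awards serve as a proxy for pain and suffering (the
# Cohen-and-Miller-style method described above).
jury_award_proxy_intangible = 25_000

per_crime_cost = sum(tangible.values()) + jury_award_proxy_intangible
print(per_crime_cost)  # 37800
```

Note what the sum leaves out: indirect costs borne by family, friends, and the surrounding community, which is precisely why bottom-up totals understate the social cost of crime.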

Top-down approaches provide a more comprehensive measure of the total cost of each crime to society by estimating both the costs to victims and the costs to those around them. Top-down studies typically rely on contingent valuation, a survey-based technique that can be used to estimate intangible costs by eliciting information about tradeoffs. For example, the tradeoff might be how much money individuals would be willing to pay to reduce the risk of a particular crime, such as an assault, or how much they would have to be paid in order to accept a slightly greater risk of that same crime occurring (see, e.g., Ludwig & Cook, 2001; Cohen, Rust, Steen & Tidd, 2004; Nagin, Piquero, Scott & Steinberg, 2006).Footnote 5 Because contingent valuation incorporates both the direct and indirect costs of crime, including costs to potential victims and society at large (not just the victim), it typically produces much larger estimates for the cost of particular crimes than the bottom-up approach.
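The arithmetic that converts survey responses into a per-crime cost is worth making explicit. In the sketch below, respondents state what they would pay for a 10% reduction in assaults; aggregating willingness to pay across the jurisdiction and dividing by the number of crimes avoided yields an implied cost per crime. All numbers are invented for illustration and do not come from any study cited above.

```python
# Hypothetical contingent-valuation calculation, following the
# willingness-to-pay (WTP) logic described in the text. Invented numbers.

mean_wtp = 120.0          # mean household WTP for a 10% reduction in assaults
households = 100_000      # households surveyed/represented in the jurisdiction
assaults_per_year = 2_000
reduction = 0.10          # the risk reduction respondents were asked to value

crimes_avoided = assaults_per_year * reduction    # 200 assaults avoided
total_wtp = mean_wtp * households                 # $12,000,000 aggregate WTP
cost_per_assault = total_wtp / crimes_avoided     # implied cost per assault
print(cost_per_assault)  # 60000.0
```

Because the aggregate WTP is spread over everyone in the community, not just victims, the implied per-crime figure embeds the indirect costs that bottom-up accounting misses.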

Still another top-down approach is hedonic pricing, a regression-based technique that attempts to tease out the costs of crime by looking at individuals’ actual market behavior. One study, for example, compared property values between neighborhoods with different crime rates, while controlling for other potential factors like square footage, school quality, transportation access, and the like, to determine what individuals are willing to pay to live in an area with lower crime (Tita, Petras & Greenbaum, 2006). Hedonic pricing has the advantage of relying on real market transactions for valuation. In practice, however, hedonic estimates tend to understate crime costs: they are constrained by ability to pay, they ignore costs to those who are not party to the market transaction, and they suffer from data and methodological shortcomings that often make it difficult to isolate the effect of crime from confounding factors such as poor schools or unhealthy drinking water. Moreover, even at its best, hedonic pricing can produce estimates only for broad categories of crime, like violent or property crime, or for the overall cost of crime generally (e.g., Linden & Rockoff, 2008).
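A stripped-down version of such a regression is sketched below on synthetic data: log home price is regressed on a neighborhood crime rate plus controls, and the crime coefficient approximates the percentage price discount per unit of crime, i.e., an implicit market valuation of safety. The data, variables, and coefficients are all fabricated for illustration; real hedonic studies confront exactly the confounding problems noted above.

```python
# Hedonic-pricing sketch on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
crime_rate = rng.uniform(0, 50, n)     # crimes per 1,000 residents
sqft = rng.normal(1_800, 400, n)       # control: square footage
school_q = rng.normal(0, 1, n)         # control: school quality index

# Synthetic "true" model: each extra crime per 1,000 residents lowers
# price by roughly 0.4%, holding the controls fixed.
log_price = (12.0 - 0.004 * crime_rate + 0.0003 * sqft
             + 0.05 * school_q + rng.normal(0, 0.1, n))

# Ordinary least squares via the normal equations (intercept + 3 regressors).
X = np.column_stack([np.ones(n), crime_rate, sqft, school_q])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(f"estimated crime coefficient: {beta[1]:.4f}")  # close to -0.004
```

The recovered coefficient is near the planted value only because the synthetic controls are independent of crime; in observational data, omitted neighborhood characteristics correlated with crime are what bias hedonic estimates.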

Before turning to the cost side of the equation, it is important to note that crime reduction is not the only benefit from public safety measures. Crime rates may remain stable, and yet certain other measures may make people in a community feel safer – such as better lighting in particular venues – and as a consequence use public spaces more, or live with less fear. In the realm of public safety BCA, there has been much less attention given to these other benefits.Footnote 6 Part of the problem, of course, is that they are more difficult to quantify, a problem we take up in Part III, C.

2.2.2 Weighing benefits against costs

Once the benefits have been assessed, they must be weighed against the costs. Here too, there are bodies of literature that undertake this task.

Much of the scholarship is outside the field of policing. Studies have looked at a variety of interventions – ranging from youth diversion to restorative justice – and demonstrated the value of pursuing alternatives to incarceration (e.g., Sherman et al., 2015; Shapland et al., 2008; Wilson & Hoge, 2013).

But some of this literature goes right to the heart of policing. For example, in a study sponsored by the RAND Corporation, Paul Heaton estimated the value of adding an additional police officer by comparing the budgetary cost of an additional officer with the expected benefits generated by those officers in terms of reduced crime, and concluded that adding additional officers generates substantial social benefits (Heaton, 2010). A COPS Office-funded study likewise compared the budgetary costs of adding additional officers to the expected benefits of reduced crime, concluding that spending $4.4 billion on hiring additional officers nationally had produced a net benefit of $2.9 to $3.4 billion (Evans & Owens, 2007). Others have looked more closely at the costs and benefits of specific interventions. Ariel et al. (2016), for example, found that hot-spot policing can have positive net benefits, with benefit-to-cost ratios of up to 26:1. Another study found that sobriety checkpoints have a net benefit of $46 per stop (Miller, Galbraith & Lawrence, 1998).

3 What is left unattended?

The problem with many of these policing studies is that they are radically incomplete if we really want to know whether a particular practice, tactic, or technology is worth the cost. In Part I we pointed to areas in which more could be done. But the difficulty runs much deeper. We simply are failing to address some of the most critical things we need to know.

In order to highlight what is neglected all too often in assessing policing practices, we relied for our conference upon a second hypothetical, involving the use of “proactive” traffic stops in another fictional city, Ridgemont. In many jurisdictions in the United States, police departments have turned to frequent traffic stops in an attempt to fight crime. Officers are deployed to high-crime areas, and instructed to use aggressive traffic enforcement as a way of searching for guns or contraband. Officers often will say that with many traffic laws on the books, it is easy to find violations. Once violators are stopped, police then can seek – and often obtain – “consent” to search the vehicle. Despite the frequency of this technique, there is very little hard data on its effectiveness. And here too there are costs. There is concern that these stops alienate residents, impinge on community trust in the police, and ultimately limit the extent to which communities are willing to cooperate with the police in crime fighting.

Like our LPR example, this example is not hypothetical at all. Along with police shootings, proactive police stops are one of the most controversial issues in policing today. These proactive measures – whether pedestrian stop–question–frisk or the aggressive use of traffic enforcement – have been the subject of lawsuits and presidential debates.

Proactive enforcement has been a widely used law enforcement tool, particularly when it comes to drug enforcement and gun interdiction, and it is easy to see why. Often these are crimes for which it is difficult for officers to develop sufficient cause to go after someone directly. On the other hand, virtually every driver on the road is guilty of violating at least some traffic ordinance (e.g., drifting across a lane marker, changing lanes without signaling, not coming to a full stop at a stop sign, just to name a few). That means it is easy to find a basis for stopping someone and inquiring further. Under existing Supreme Court precedent, these sorts of “pretextual” stops are entirely lawful – so long as police have a reasonable basis to suspect that some violation has occurred, they can stop someone even if the real reason for doing so is to investigate an unrelated crime.Footnote 7 (Courts also have said that police are free to approach people on the street and ask questions, so long as the people stopped are free to walk away.)

Here too, there have been some very preliminary attempts to assess efficacy, mostly in the context of stop-and-frisk and foot patrols. (We know of no extant study that assesses traffic stops themselves.) The results of these studies have been mixed. Although some have found modest effects on violent crimes – like robbery – others have not (e.g., Rosenfeld & Fornango, 2014; Weisburd et al., 2016; Ratcliffe, Taniguchi, Groff & Wood, 2011).

But what the traffic stop hypothetical highlights quite vividly is the gaps in the social science around policing, such as the social costs to which we now turn.

3.1 Social costs

Although many studies attempt to estimate the social costs of crime (and thus the benefits of particular police practices), what is commonly missing from many studies in this area is an assessment of the social costs (and sometimes unintended social benefits) of the policing practice itself. In addition to looking at efficacy, most extant policing studies take account of the tangible costs of policing, be it the additional officer, the surveillance device, or the cost of data storage. But they ignore entirely or fail to monetize in any helpful way the intangible, but still very serious, social costs.Footnote 8 In a recent working paper, Cohen (2016) discusses a number of areas where the literature needs to improve, including: (1) the uncertainty of monetary estimates, (2) alternative policing strategies, (3) the social costs of crime, and (4) the impact of income disparities on estimates. And in this issue, Fackler et al. (2017) discuss how, curiously, most analysis of policing to date has been benefit driven.

Consider the intangible costs at stake in this example. Individuals may experience a variety of psychological and dignitary harms. They may believe the stops are unjustified or racially motivated, which itself can impose an independent harm. There may be downstream costs as well, for example if someone is caught with a small amount of marijuana that later is used to turn a subsequent misdemeanor charge into jail time. If police make frequent use of stops in a particular neighborhood, this can potentially erode the community’s trust in the police. That itself is a cost – as is the cost of losing residents’ cooperation in combatting crime. These costs are hardly imaginary: the broad public debate about these practices has brought to light intense feelings about their extent. Whatever they are, they are not zero. They likely are substantial. We simply have failed to monetize them or bring them into BCA. As a consequence, the analysis inevitably is biased in favor of proactive policing.

A small number of studies have tried to monetize these sorts of costs. Merola, Lum, Cave and Hibdon (2014), for example, tried to get at some of the social costs of LPRs, such as privacy and community trust, by using a community survey. The authors found that the public’s views about the use of these devices depended in part on the purposes for which they were deployed. Respondents were more tolerant of LPR use to catch car thieves than to enforce unpaid parking tickets. This provides a cautionary tale for law enforcement about the need to consider these costs. Another study – a 1998 paper by Ted Miller et al. – tried to assess a fuller range of costs and benefits in the context of sobriety checkpoints. In addition to estimating the various benefits of checkpoints, like reduced accident rates, the authors considered the costs imposed on drivers who were stopped but were not in fact under the influence. Using wage data to estimate the value of drivers’ and passengers’ time, the authors estimated that a checkpoint imposes approximately $1 in lost-time costs per vehicle. The authors also acknowledged that checkpoints might impose dignitary or “freedom costs” on drivers, but did not attempt to put a dollar figure on these costs. Instead, the authors aggregated all of the other costs and benefits – which showed a net benefit of $46 per stop. They concluded that each driver would have to demand $46 in dignitary compensation per stop in order for the costs and benefits to even out (which seemed unlikely).
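The break-even logic of the checkpoint study can be expressed directly: the unmonetized dignitary cost at which the intervention stops paying off equals whatever net benefit remains after all monetized items are tallied. Of the figures below, only the $1 lost-time cost and the $46 net benefit come from the study as summarized above; the gross-benefit figure is a placeholder chosen so the numbers net out.

```python
# Break-even calculation in the spirit of the Miller et al. checkpoint
# study. Component figures other than the $1 lost-time cost and $46 net
# benefit are placeholders, not values from the paper.

def breakeven_unmonetized_cost(monetized_benefits: float,
                               monetized_costs: float) -> float:
    """Per-stop unmonetized cost at which net benefit falls to zero."""
    return monetized_benefits - monetized_costs

# Placeholder gross benefit ($47/stop) minus the $1 lost-time cost
# leaves the paper's reported $46 net benefit per stop.
net = breakeven_unmonetized_cost(monetized_benefits=47.0,
                                 monetized_costs=1.0)
print(net)  # 46.0 -- the dignitary "price" per stop at which
            # checkpoints would no longer pass a BCA test.
```

Framing the unmonetized cost as a break-even threshold, rather than forcing a dollar estimate, is one pragmatic way to handle intangibles that resist monetization.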

Still, these examples are outliers. For the most part, social scientists working in this area have made no particular effort to monetize the costs imposed on those who are being policed – although many acknowledge that these costs exist and may be substantial. This stands in sharp contrast with other fields, such as environmental research (Morgenstern, Pizer & Shih, 2001; Pearce, Atkinson & Mourato, 2006) and health research (e.g., Murray et al., 2012), where intangible costs are routinely incorporated into BCAs.

3.2 Distributive effects

Equally absent from the body of social science literature we are discussing is a sufficiently rigorous analysis of the distributive aspect of policing.

Although some policing falls relatively evenly on the population – for example, when the government collects telephone metadata in bulk from every phone user, or sets up generalized security measures at airports – most tactics are not so universal. Policing typically falls more heavily on lower income and otherwise marginalized communities, with noted racial effects. That is true of the sort of proactive policing techniques we are discussing here. It is even likely true of various forms of surveillance, like LPRs, given that police cars are not uniformly distributed throughout cities.

To be clear, there is a large and growing body of literature that establishes the fact of racial disparity (e.g., Gelman, Fagan & Kiss, 2007; Knowles, Persico & Todd, 2001). And there is some very interesting literature on why it occurs and on its effects (e.g., Swencionis & Goff, 2017).

But these sorts of insights have not been incorporated into the BCA framework in assessing the social costs of policing practices. The literature is not bereft of ideas here, though it is sparse. Mark Cohen (2017, this issue) in particular has thought a great deal about how to price the racial effects of policing. In his paper for this symposium, he highlights a number of potential harms that accrue when policing tactics are deployed (or are perceived as being deployed) in racially disparate ways.

Modern social BCA should provide estimates of the net social benefits of a policing practice and, in addition, provide estimates of important distributional impacts when they might be expected to affect a decision. The impact of policing on minorities is a prime example.

4 Tackling the hard questions

At this juncture, the overarching point should be clear: We are not doing enough BCA of policing practices, and the work that is being done is incomplete in some essential ways. As we said, one goal here is to help fashion a research agenda.

Still, conducting BCA of policing practices presents its challenges. These challenges are not unique to policing – indeed, many of the ideas we share here are drawn from scholars in fields other than criminology that employ BCA. In some cases, however, either the challenge is particularly acute when it comes to policing, or the need to address it is especially urgent.

In this Part, we identify the hard questions and gesture at answers.

4.1 Who is the audience and what is within the frame?

A threshold question in any BCA – and as it turns out, a particularly challenging one in this context – is which costs and benefits to count. To see the problem, consider again the LPR hypothetical. LPRs have budgetary costs, including the costs of purchasing the equipment, storing the license plate “reads,” and officer time spent operating the units. But deploying LPRs also can produce a range of downstream benefits and costs. If an LPR flags a stolen vehicle, the driver may be arrested and eventually sent to prison. That means the government will need to pay the costs of trial (or time spent reaching a plea bargain), incarceration, and parole. The driver also will incur a variety of costs, including loss of liberty and employment. His or her family members may lose much-needed income. Children may start to do worse at school.

In short, downstream costs can be quite diverse, and sometimes attenuated, and one must think about whether to, and where to, draw lines. For example, there has been some – though not enough – thinking about the extent to which the cost imposed on someone convicted of a crime should count in such analyses (Cohen, 2000). Cohen includes the loss of an offender’s productivity in a BCA insofar as it imposes a cost on society, but excludes the offender’s loss of liberty, because at least in theory, that is intended to balance out the “gain” to the offender from committing the crime in the first place. As Cohen makes clear, BCA “is not a value-free concept but instead involves definitions and explicit boundaries to determine whose costs and benefits matter” (page 276).

4.1.1 Whose perspective matters?

In tackling this question, the first step is to decide from whose perspective the BCA should be conducted. In their symposium essay, Fackler, Henrichson, Janszky, and Neusteter suggest three possible perspectives to consider – the chief’s perspective, the municipal perspective, and the perspective of society, broadly defined (Fackler et al., 2017, this issue).

Police chiefs are likely to adopt the narrowest frame for a public safety BCA (see generally Fackler et al., 2017, this issue). Police chiefs and sheriffs primarily are responsible for promoting public safety, and thus their incentive may be to maximize the gains to public safety given the various constraints they face. In deciding whether to use LPRs, for example, the chief will want to know how much the units will cost, how they will affect staffing decisions, and whether they will help address automobile theft or other crime. The chief might even think about the effect that LPRs are likely to have on community trust: if residents are unhappy about the units and become less willing to cooperate, the department may have a harder time meeting its public safety goals. Chiefs are much less likely to focus on downstream costs, like incarceration, which are paid for by other government actors and therefore may simply be off their radar. Similarly, chiefs may have a tendency to ignore spillover costs into other jurisdictions. Subsidies they receive from other units of government, such as the federal government, also may not figure fully into their calculus. In short, one can see why a chief’s BCA horizon can be quite narrow.

In contrast, the social planner perspective – looking at the benefits and costs to society writ large – adopts the broadest possible frame and asks whether a particular practice is cost-beneficial for society as a whole. A social planner approach includes a full accounting of all of the costs and benefits of a policy or practice, whether they are incurred by a government entity or by private citizens, and whether they are incurred by entities within the jurisdiction, in another part of the city or county, or even in another state or country. Although giving standing to “outsiders” can be controversial, a social BCA should address the question through sensitivity analysis that shows benefits and costs with outsiders included and excluded.

Although in practice many of the decisions described throughout this report are made by police chiefs and the people who work for them, this does not necessarily mean that the chief’s perspective is the right one to adopt in conducting a public safety BCA. Federal administrative agencies, for example, are required to adopt a social planner perspective when they conduct BCAs of economic or environmental regulations, even though the agencies themselves internalize only a small portion of those costs (e.g., Executive Order 12866, 1993; Executive Order 13563, 2011). As Fackler et al. point out, there is some sense in this: police officials are tasked with bringing down crime because society values public safety. To the extent that one includes the social benefits of policing, one ought to include the social costs as well. Moreover, a social benefit-cost analysis should include benefits and costs that are realized by entities beyond the department but still within society.

4.1.2 What should be included within the analysis?

Even from a social planner perspective, however, it still is not clear whether all of the potential downstream costs and benefits should be attributed to the particular policing practice in question. In deciding whether to use LPRs, for example, should a jurisdiction try to account for the prison costs that legislators have imposed for automobile theft, irrespective of the method of detection? Doing so would mean that, all else being equal, LPRs might be cost-beneficial in one jurisdiction but not in another due simply to differences in incarceration costs or sentencing regimes. In addition, even if LPRs are not used, some offenders still may be apprehended by other means, incurring the same costs and benefits. The right question may be what the marginal costs would be of using LPRs as opposed to some other detection tool.

What is clear is that in conducting BCAs of policing practices, it is essential to consider the alternatives. The right set of alternatives will depend on the goals that the department hopes to accomplish. If the goal is to address a particular crime problem – like automobile theft – alternatives to LPRs may include stepping up patrols in affected areas, or working with residents to improve security measures for their vehicles. If the goal is to improve overall public safety, the range of alternatives may be quite a bit broader. Defining the alternatives can help focus attention on the costs and benefits that are the most important to measure. Of course, it is fundamentally infeasible to measure everything in a BCA, but with a properly defined set of alternatives, policy makers can determine ex ante what difficult social costs and benefits are likely to be significant factors worthy of closer attention (Fackler et al., 2017, this issue).
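To make the marginal framing concrete, the comparison of alternatives can be sketched in a few lines of arithmetic. All figures below are invented for illustration; nothing here reflects actual program costs:

```python
# Marginal framing sketch: compare LPRs against the next-best detection tool,
# not against doing nothing. All figures are hypothetical annual dollar values.

def net_benefit(benefits, costs):
    """Net social benefit of an option, in dollars."""
    return benefits - costs

lpr = net_benefit(benefits=400_000, costs=250_000)            # LPR program
extra_patrols = net_benefit(benefits=350_000, costs=230_000)  # alternative tactic

# The decision-relevant quantity is the difference between the options,
# not either option's net benefit taken alone.
marginal_gain_from_lpr = lpr - extra_patrols
print(marginal_gain_from_lpr)  # 30000
```

The point of the sketch is that a tool with a positive net benefit can still be the wrong choice if an alternative delivers more; what matters is the margin between well-defined alternatives.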

4.2 Measuring the effects

The next step in assessing the costs and benefits of a policing practice, as we have said, is identifying the effects that the tactic, technology, or practice has. This too presents a number of challenges, which are evident in the LPR hypothetical used above.

4.2.1 Defining the causal mechanism

In designing a study of LPRs, it is important to consider not only the outcomes that must be measured, but also the means by which those outcomes will be achieved. One of the goals of using LPRs is to bring down auto theft. But how do LPRs achieve this result? One possible mechanism is that LPRs make it easier to detect stolen cars, which can improve clearance rates and get repeat offenders off the street. Another possible mechanism is that deploying LPRs may have some deterrent effect on theft because people will know that they are more likely to get caught (Ayres & Levitt, 1998).

One challenge of studying policing practices is that policing agencies often are insufficiently attentive to the causal mechanism of their own work. Yet, the supposed causal mechanism has implications for study design. If the goal is deterrence, people need to know that LPRs are being used (or at least to be able to observe over time that officers have become particularly adept at recovering stolen vehicles). A “blind” study, in which community members do not know about the units, would not work in this case. Similarly, if the idea is that LPRs will reduce theft by helping police catch suspects, one would need to run the study long enough to see these sorts of effects take hold (Carr, 2017, this issue).

4.2.2 Challenges with RCTs

Randomized controlled trials (RCTs) are currently understood to be the “gold standard” of research design. In conducting an RCT, researchers identify a group of people or a set of neighborhoods, and then randomly assign some of them to receive the treatment (i.e., the policing practice or technology one wants to assess), and the rest to be the control. If the only difference between the two groups is that one is subjected to the policing practice while the other is not, then researchers can attribute any changes that result to the practice at issue. In the case of LPRs, for example, an RCT might involve deploying units in only half of the neighborhoods that have experienced problems with automobile theft (or where stolen vehicles typically are found). If LPRs already have been deployed, the department could temporarily turn units off in some neighborhoods and see if there is any effect on crime.
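As a rough sketch of how such a design might look, the fragment below randomizes hypothetical tracts into treatment and control groups and computes a simple difference in mean outcomes. All tract names and theft counts are fabricated for illustration:

```python
import random

# Hypothetical study: randomly assign tracts with auto-theft problems to
# receive LPR units (treatment) or not (control). Names are illustrative.
neighborhoods = [f"tract_{i}" for i in range(20)]

random.seed(42)  # fixed seed so the assignment is reproducible
shuffled = random.sample(neighborhoods, k=len(neighborhoods))
treatment = set(shuffled[: len(shuffled) // 2])
control = set(shuffled[len(shuffled) // 2 :])

def estimate_effect(post_period_thefts):
    """Difference in mean post-period theft counts, treatment minus control.

    `post_period_thefts` maps neighborhood -> theft count after deployment.
    A negative value suggests LPRs reduced theft. A real study would add
    standard errors, pre-period baselines, and a follow-up window long
    enough for the hypothesized causal mechanism to operate.
    """
    t = [post_period_thefts[n] for n in treatment]
    c = [post_period_thefts[n] for n in control]
    return sum(t) / len(t) - sum(c) / len(c)

# Fabricated outcome data: treated tracts see slightly fewer thefts.
fake_counts = {n: (8 if n in treatment else 10) for n in neighborhoods}
print(estimate_effect(fake_counts))  # -2.0 under these made-up counts
```

Because assignment is random, any systematic difference between the groups can (in expectation) be attributed to the treatment; the practical obstacles discussed below concern whether such an assignment is ever feasible or ethical in policing.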

Unfortunately, as the authors in this symposium acknowledge, RCTs are not always feasible when it comes to policing (see Carr, 2017, this issue; Geller, 2017, this issue; Goldstein, 2017, this issue). Designing an RCT around a policing practice can present real ethical or political concerns. The assumption is that the practice being studied can reduce crime or make a neighborhood safer. Although it would be useful for the sake of study to use the practice or technology in some random locations, people reasonably would be troubled to learn that some communities got the treatment and others did not. Police chiefs understandably are reluctant to allow potentially useful tactics to be employed only in some places. As Rebecca Goldstein points out in this issue, “[t]he key research design challenge [for] RCTs evaluating policing policy is that they require the police to treat similarly situated high-crime neighborhoods differently for no other reason than those neighborhoods’ randomization into either the treatment or control group” (Goldstein, 2017, this issue).

Randomized controlled trials also can be very expensive to run. In order to make sure that the only difference between the treatment and control groups is the policing practice at issue, researchers need to monitor the implementation process carefully. Depending on what is being tested, departments may need to alter staffing or deployment practices to accommodate the study. This can be costly and disruptive.

The combination of ethical issues, political concerns, and cost can diminish the value of RCTs by limiting the amount of time they can be run. Researchers may only be permitted to run a trial for a month or two, which may not be long enough to see any real effects. For example, as Carr points out in this issue, Lum et al. (2011) were only able to run their LPR study for 30 days. Officers used the units only briefly during each shift, and got a total of just 10 hits related to automobile theft. This likely was not enough time to see any effects on crime under any causal theory (Carr, 2017, this issue).

When it comes to policing, it also may be difficult in practice to run an RCT that lives up to the technique’s gold-standard reputation. Outside of a laboratory setting, it is difficult to control all of the possible factors, or variables, that could affect the outcome. Altering policing practices in one area could inadvertently affect things in other neighborhoods as well. For example, people may learn that the department has purchased LPR units, but not know precisely where they are used – and thus avoid stealing cars anywhere in the city, including in control neighborhoods.

Finally, as Geller (2017, this issue) points out in her symposium essay, it may not always be clear from an RCT why a particular intervention is having the effect that it is. In the case of proactive stops, for example, effects may depend both on the quantity of stops conducted and on their quality – including the level of suspicion that officers had prior to initiating the encounter, and the manner in which the stop is conducted. This makes it difficult to reach meaningful conclusions from the results. One could try to design a study that isolates each of the possible mechanisms, but that would make it still more expensive (and it is not clear that a police department would ever go along).

4.2.3 Alternatives to RCTs

While recognizing the great potential value of RCTs, it still is possible to learn a great deal about policing practices by employing a variety of alternative research designs in clever ways. Symposium authors suggest a variety of alternatives that often can be implemented at a much lower cost, and without raising some of the same ethical concerns.

Many of the techniques described take advantage of discontinuities – either across time or space (or often both) – to isolate the effects of a particular policing policy or practice from all of the other factors that may affect crime rates in a particular neighborhood. As Jillian Carr (2017, this issue) and Rebecca Goldstein (2017, this issue) both note, researchers could approximate an RCT by taking advantage of a rollout (or a rollback, as the case may be). Often when a police department deploys a new practice or technology, it does so gradually, starting with one area before moving on to others. Researchers can use a rollout to assess whether anything changes in the neighborhoods that first get the technology – using comparable areas in other parts of the city as the control. Or, researchers could look for discontinuities in historical data. Geller points out, for example, that policing practices may differ across precinct or jurisdictional boundaries even though neighborhoods on each side of the line may in fact be quite similar (2017, this issue). Carr notes that the neighborhoods themselves need not even be that similar, so long as there is reason to think that they are subject to the same broad economic and social trends (2017, this issue). Occasionally, researchers may be able to take advantage of natural experiments – exogenous shocks that change policing practices or staffing levels for reasons unrelated to changes in crime. A number of studies have taken advantage of the fact that departments sometimes deploy additional officers to a particular neighborhood in response to a terrorist threat (as opposed to stepped-up criminal activity), and have used these temporary spikes in police activity to assess the degree to which officer presence affects the rates of traditional violent and property crimes (e.g., Draca, Machin & Witt, 2011).
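The difference-in-differences logic these rollout designs rely on can be sketched in a few lines. All numbers below are fabricated; a real analysis would use panel data over many periods with clustered standard errors:

```python
# Difference-in-differences sketch for a staggered LPR rollout.
# Mean monthly auto thefts per tract, before and after the wave-1 rollout.
# Both dictionaries are fabricated for illustration.
early_adopters = {"before": 12.0, "after": 9.0}   # got LPRs in wave 1
later_adopters = {"before": 11.0, "after": 10.5}  # comparable tracts, no LPRs yet

def diff_in_diff(treated, comparison):
    """DiD estimate: change in treated tracts minus change in comparison tracts.

    Differencing out the comparison-group trend removes citywide shocks
    (seasonal patterns, gas prices, economic conditions) that affect both
    groups alike, leaving the change attributable to the rollout.
    """
    return (treated["after"] - treated["before"]) - (
        comparison["after"] - comparison["before"]
    )

print(diff_in_diff(early_adopters, later_adopters))  # -2.5
```

Under these made-up numbers, thefts fell by 3.0 in treated tracts but 0.5 of that decline was a shared trend, so the rollout is credited with a reduction of 2.5 thefts per tract per month. The design's key assumption is that the two groups would have trended in parallel absent the rollout.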

There are, of course, a variety of challenges to using each of these techniques, a set of concerns elaborated by Goldstein in this issue. Rollouts, for example, are rarely conducted in a random fashion. Departments often deploy new technologies first in neighborhoods where they are needed most. Researchers looking to use historical data may find that data are missing or incomplete. Any number of factors can affect crime rates in a particular neighborhood, many of which cities simply do not track. For example, a dispute between rival gangs could lead to an uptick in crime, or a streetlight could go out, suddenly making a particular corner more dangerous than usual. Spillover effects may corrupt the results of the quasi-experiment (Goldstein, 2017, this issue). Still, it often is possible to learn something – and across multiple studies to start to get a picture of policing’s effects.

Finally, as Goldstein reminds us, absent the ability to use an RCT or quasi-experiment, retrospective nonexperimental study is still available. It suffers from a number of familiar difficulties, from simultaneous policy changes that are difficult to untangle from the treatment being studied, to the concern that not all possibly confounding variables have been included in a regression analysis (Goldstein, 2017, this issue). Nonetheless, if we are going to learn something from new policing policies and tactics, this may be the best available option in some cases.

4.3 Valuing intangibles

The next challenge to conducting BCAs around policing is figuring out how to put a value on the benefits and costs – many of which are intangible and thus not easily expressed in dollar terms. The difficulty is apparent when one picks apart the various costs that may be incurred as a result of Ridgemont’s proactive traffic stops. These include dignitary and psychological harms, as well as racial discrimination harms (whether perceived or real). In communities where such stops are frequent, individuals may engage in various avoidance behaviors to reduce the risk of being stopped, which also is a cost. And as Fackler et al. and Geller both point out in their symposium essays, there may be a significant cost in terms of community trust, which in turn can affect people’s willingness to cooperate with the police (Fackler et al., 2017, this issue; Geller, 2017, this issue).

Still, analysts can use a variety of tools to try to put a dollar figure on intangible benefits and costs.

4.3.1 Borrowing values from other contexts

Sometimes it may be possible to find similar activities or situations for which we do know the cost – or at least can more easily assess it.

One place to look for estimates is jury awards. Juries often are asked to put a dollar figure on intangibles like pain and suffering when deciding how much a defendant should have to pay to compensate a plaintiff for his or her injuries (Cohen & Miller, 2003). A number of researchers have used jury awards to come up with estimates for the intangible cost of various crimes to society, like homicide or assault.

This approach is not without its problems. As Cohen notes, jury awards represent what it would take to compensate someone for a particular injury, which is not necessarily the same as what that individual or others in society would be willing to pay in order to avoid the harm in the first place. And, of course, juries are not spending their own money but someone else’s, which may make them a poor gauge of the true price of certain injuries. Jury awards also vary widely and are susceptible to a range of biases, including both case selection and the age and demographics of the people involved (e.g., Anwar, Bayer & Hjalmarsson, 2012, 2014; Grossman et al., 2016). One also should be careful about presuming that situations are analogous when they may in fact differ: someone who is assaulted by another member of the public may experience a different set of harms than someone who is assaulted by a police officer – even if the physical injuries are the same. Still, in the absence of alternatives, jury awards may offer at least a plausible estimate of the underlying costs.

4.3.2 Looking to people’s revealed preferences

Sometimes it may be possible to estimate certain values by looking at decisions that people actually make about how to spend their money or time, that is, revealed preferences. For example, some people pay to enroll in TSA precheck, which cuts down on the time and intrusiveness of airport security. Others pay more for housing in order to live closer to work. As noted above, some drivers may avoid certain roads where stops are frequent.

Looking at these sorts of decisions can provide at least some estimate of the value that people put on their privacy or time. On the other hand, one concern with these estimates is that people may not be able to afford to pay for things like time or privacy, even if they in fact value them a great deal.

4.3.3 Contingent valuation

Another approach is to use contingent valuation – a survey-based technique that involves posing a variety of hypotheticals and asking respondents how much they would value a particular outcome. By varying the hypotheticals – for example, the probability of being stopped, or the description of what happens during a typical stop – researchers can begin to tease out the costs of different encounters. Two symposium papers – by Mark Cohen and by Richard Carson and Jordan Louviere – suggest a number of strategies for conducting contingent valuation surveys in the policing context (and address some of the challenges inherent in this approach). Carson and Louviere (2017, this issue) in particular explain developments in the field of contingent valuation since the historic NOAA report co-authored by two Nobel laureates, accepting the value of the contingent valuation approach under carefully specified conditions (Arrow et al., 1993). Carson and Louviere talk about various increasingly sophisticated surveying techniques – often accompanied by visual and verbal depictions so that participants can make informed choices – that are designed to elicit preferences, and put a willingness to pay (WTP) measure on those choices as a means of monetization. At the conference, participants put forward a number of additional ideas for how one could develop and implement a study (or studies) using contingent valuation that could go a significant way toward putting a dollar value on some of the intangible costs and benefits of policing.

In designing a contingent valuation survey, it is important to ensure that the stories respondents are told are plausible – and that they involve real tradeoffs that a person could imagine making. Because people often pay and receive money from the government through taxes, surveys often ask how much respondents would be willing to see their taxes go up (or down) in exchange for various changes in policy.
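One common way to convert such yes/no tax-referendum responses into a dollar figure is a Turnbull-style lower-bound estimate of mean willingness to pay; the technique itself is not discussed in the symposium papers and is offered here only as an illustration, with fabricated response shares:

```python
def turnbull_lower_bound(bids, yes_share):
    """Turnbull-style lower-bound mean WTP from single-bounded CV responses.

    `bids` are the tax amounts posed (ascending); `yes_share[i]` is the
    fraction of respondents who said yes at bids[i]. Shares are assumed
    already monotone non-increasing (real data may need pooling first).
    Each respondent is credited only the largest bid below their WTP,
    so the estimate is deliberately conservative.
    """
    points = [0.0] + list(bids)
    survival = [1.0] + list(yes_share) + [0.0]
    return sum(
        points[j] * (survival[j] - survival[j + 1]) for j in range(len(points))
    )

# Fabricated survey: share of respondents willing to pay a $5, $10, or $20
# tax increase for a hypothetical policing reform.
print(turnbull_lower_bound([5, 10, 20], [0.8, 0.5, 0.2]))  # 8.5
```

Under these made-up shares, the conservative estimate is that the average respondent values the reform at no less than $8.50. A full analysis would model WTP parametrically and segment respondents along the dimensions discussed below.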

In asking these questions, however, one must be careful not to confuse the value one places on a practice as a citizen with the value one places on it as the potential target of the practice. When asked about stop-and-frisk, for example, someone who is unlikely to ever be stopped will respond based on the value they attach as a citizen to reducing (or increasing) the number of coercive encounters. Individuals who expect to be stopped will respond based on how they think the proposed policy will affect them personally. For this reason, it is important to survey individuals in different groups to ensure each of these perspectives is taken into account.

A number of other factors also may affect people’s responses. For example, some people may have strong views about policing in general that could swamp their more fine-grained valuations of the specific practice at issue. Someone who thinks police budgets are too high may not be willing to pay higher taxes for de-escalation training – even though they generally think that such training would be a good idea. In addition, people’s responses can vary based on their familiarity with the practice at issue. Someone who has been stopped frequently by the police may give a different answer than someone for whom the question is more abstract. In designing a survey, researchers may need to include some preliminary questions that get at these baseline views and experiences – and then potentially tailor the scenarios presented to various respondents.

All of these are merely cautions, which themselves can be leveraged in survey design. One could learn a lot by doing a large-scale study, and segmenting out the groups along some of the variables discussed, in order to get a better sense of how precisely to price a practice.9

4.3.4 When valuation is very hard

Finally, it may be possible to include certain intangible costs or benefits in a BCA without having to assign a precise dollar value to them.

In some cases, decision makers can anticipate certain types of harms and price out measures to avoid them. Say, for example, that a chief is concerned that a certain amount of racial profiling may follow from a decision to engage in proactive stops. The cost of that profiling is serious but difficult to price. The chief can reduce the risk of profiling by implementing a strong bias-free policing policy and carefully tracking demographic data on police encounters. These alternatives might be priced more easily.

Researchers also can employ breakeven analysis, which involves adding up all of the costs and benefits that can be measured, and then asking what the intangible values would need to be in order to tilt the decision one way or another. Breakeven analysis works best when one side of the cost-benefit equation is much easier to quantify than the other; otherwise, one ends up with intangibles on both sides of the equation. Obviously, with regard to traffic stops, neither side is particularly easy. But for some police practices, we know what the costs might be, or can reasonably price the benefits, which makes this a promising approach.
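The breakeven calculation itself is simple arithmetic. The sketch below, with wholly hypothetical figures, asks how large the unmeasured intangible cost per stop would have to be to flip the decision:

```python
# Breakeven sketch: how large would the unmeasured intangible cost per stop
# have to be to exhaust the measured net benefit? All figures hypothetical.

measured_benefits = 500_000.0   # e.g., monetized value of crimes averted per year
measured_costs = 320_000.0      # equipment, officer time, downstream system costs
n_stops = 6_000                 # stops the practice generates per year

# The practice breaks even when intangible costs consume the net measured benefit.
breakeven_cost_per_stop = (measured_benefits - measured_costs) / n_stops
print(round(breakeven_cost_per_stop, 2))  # 30.0
```

Decision makers can then ask the more tractable question of whether the dignitary, privacy, and trust harms of a single stop plausibly exceed that figure, without having to price those harms exactly.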

5 Conclusion

As we indicated at the outset, the conference conversations and the papers by participants – both those published in this issue and additional ones on the Policing Project website – made two things very clear. First, it is remarkable, to say the least, given the controversy and uncertainty surrounding various policing practices, that BCA has not been employed significantly in this area. It can and should be. Policing policies, technologies, and tactics should be subjected to careful BCA. Second, although there are challenges to doing so, these challenges can be overcome with careful methodology. Collaboration between researchers and police officials can lead to clever research designs that can assist in developing a better understanding of what works and what does not. Also, with the proper resources, we can do a reasonable job of assessing and quantifying the intangible costs of policing, which all too often have been ignored to date. It is our hope that this conference, and these papers, lead a generation of scholars to do BCA work on policing practices, and that funding is forthcoming – both from government entities and private philanthropy – to see that this work is done.


1 Although the norm in the world is to refer to cost-benefit analysis, or CBA, we follow the lead of this journal, and refer to BCA. The reason should be apparent: if there are no discernible benefits, costs are irrelevant, and so one starts with benefits (Aos, 2015). That said, if quantifying the benefits is difficult, but the costs are measurable, a form of breakeven analysis may be the way to go (Aos, 2015).

2 Vigilant Solutions runs the National Vehicle Location Service (NVLS), which as of 2014 aggregated nearly 2 billion license plate records into a searchable database shared with cooperating law enforcement agencies.

3 See supra note 1.

4 See also infra Part IV(B).

5 See infra Part IV(B).

6 This is certainly not to say there has been no discussion. For example, see the OECD’s Summary and Conclusions of the Round Table on Security, Risk Perception and Cost-Benefit Analysis, held in December 2008 in Paris, France.

7 See Whren v. United States, 517 U.S. 806 (1996). The Court did say in Whren that if “pretextual” stops are conducted in racially discriminatory ways, they could be challenged under the Equal Protection Clause, but in practice the sort of intentional discrimination that is required is difficult to prove.

8 We are not the first to notice this shortfall. In a symposium in Criminology and Public Policy, Black et al. offered a powerful critique of the shortfalls in the existing BCA literature around policing and criminal justice (Black, Solow & Taylor, 2015).

9 In addition to using hypothetical scenarios, it also is possible to survey the public following real encounters. These are typically referred to as “contact surveys.” For example, one could take advantage of the fact that some people now receive traffic tickets by mail (e.g., from a red light camera), whereas others receive tickets from officers during a traffic stop. The difference in how people feel about their tickets in the two scenarios can give some sense of the cost of the stop itself.


Anwar, Shamena, Bayer, Patrick & Hjalmarsson, Randi (2012). The Impact of Jury Race in Criminal Trials. The Quarterly Journal of Economics, 127(2), 1017–1055.
Anwar, Shamena, Bayer, Patrick & Hjalmarsson, Randi (2014). The Role of Age in Jury Selection and Trial Outcomes. The Journal of Law and Economics, 57(4), 1001–1030.
Aos, Steve (2015). What Is the Bottom Line? Criminology and Public Policy, 14(4), 633–638.
Ariel, Barak, Weinborn, Cristobal & Sherman, Lawrence W. (2016). ‘Soft’ Policing at Hot Spots – Do Police Community Support Officers Work? A Randomized Controlled Trial. Journal of Experimental Criminology, 12(3), 277–317.
Arrow, Kenneth J., Solow, Robert, Portney, Paul R., Leamer, Edward E., Radner, Roy & Schuman, Howard (1993). Report of the NOAA Panel on Contingent Valuation. Federal Register, 58(10), 4601–4614.
Ayres, Ian & Levitt, Steven D. (1998). Measuring Positive Externalities from Unobservable Victim Precaution: An Empirical Analysis of Lojack. The Quarterly Journal of Economics, 113(1), 43–77.
Beavers, Elizabeth & Shank, Michael (2014). Get the Military Off of Main Street: Ferguson Shows the Risks of Militarized Policing. New York Times.
Black, Dan A., Solow, Robert M. & Taylor, Lowell J. (2015). Comments on Domínguez and Raphael. Criminology and Public Policy, 14(4), 639–646.
Caplan, Joel M., Kennedy, Leslie W. & Petrossian, Gohar (2011). Police-Monitored CCTV Cameras in Newark, NJ: A Quasi-Experimental Test of Crime Deterrence. Journal of Experimental Criminology, 7(3), 255–274.
Carr, Jillian (2017). Estimating the Effects of Police Technology Using Quasi-Experimental Methods. Journal of Benefit-Cost Analysis, 8(3), 360–368.
Carson, Richard & Louviere, Jordan (2017). Estimation of Broad-Scale Tradeoffs in Community Policing Policies. Journal of Benefit-Cost Analysis, 8(3), 385–398.
Cohen, Mark A. (1988). Pain, Suffering, and Jury Awards: A Study of the Cost of Crime to Victims. Law and Society Review, 537–555.
Cohen, Mark A. (2000). Measuring the Costs and Benefits of Crime and Justice. Washington, DC: National Institute of Justice.
Cohen, Mark A. (2016). The ‘Cost of Crime’ and Benefit-Cost Analysis of Criminal Justice Policy: Understanding and Improving upon the State-of-the-Art. Working paper.
Cohen, Mark A. (2017). The Social Cost of a Racially Targeted Police Encounter. Journal of Benefit-Cost Analysis, 8(3), 369–384.
Cohen, Mark A. & Miller, Ted R. (1998). The Cost of Mental Health Care for Victims of Crime. Journal of Interpersonal Violence, 13(1), 93–110.
Cohen, Mark A. & Miller, Ted R. (2003). ‘Willingness to Award’ Nonmonetary Damages and the Implied Value of Life from Jury Awards. International Review of Law and Economics, 23(2), 165–181.
Cohen, Mark A., Rust, Roland T., Steen, Sara & Tidd, Simon T. (2004). Willingness-to-Pay for Crime Control Programs. Criminology, 42(1), 89–110.
Draca, Mirko, Machin, Stephen & Witt, Robert (2011). Panic on the Streets of London: Police, Crime and the July 2005 Terror Attacks. American Economic Review, 101(5), 2157–2181.
Evans, William N. & Owens, Emily G. (2007). COPS and Crime. Journal of Public Economics, 91(1), 181–201.
Executive Order 12866, 58 F.R. 51735 (1993).
Executive Order 13563, 76 F.R. 3821 (2011).
Fackler, Ryan K., Henrichson, Christian, Janszky, Elizabeth G. & Neusteter, Rebecca (2017). Closing the Gap: The Need for Inclusive Benefit-Cost Analysis in Policing. Journal of Benefit-Cost Analysis, 8(3), 330–338.
Floyd v. City of New York, 302 F.R.D. 69 (S.D.N.Y. 2014).
Geller, Amanda (2017). Benefit-Cost Analysis in Policing Research: Assessing Crime-Control Benefits of Proactive Enforcement Practices. Journal of Benefit-Cost Analysis, 8(3), 339–347.
Geller, Amanda, Fagan, Jeffrey, Tyler, Tom & Link, Bruce G. (2014). Aggressive Policing and the Mental Health of Young Urban Men. American Journal of Public Health, 104(12), 2321–2327.
Gelman, Andrew, Fagan, Jeffrey & Kiss, Alex (2007). An Analysis of the New York City Police Department’s ‘Stop-and-Frisk’ Policy in the Context of Claims of Racial Bias. Journal of the American Statistical Association, 102(476), 813823.CrossRefGoogle Scholar
Gierlack, Keith, Tom LaTourrette, Shara, Anderson, James M., Mayer, Lauren A. & Zmud, Johanna (2014). License Plate Readers for Law Enforcement: Opportunities and Obstacles. RAND Corporation.Google Scholar
Goldstein, Rebecca (2017). Evaluating Proactive Police Units: A Case Study of Retrospective Benefit-Cost Analysis with Nonexperimental Data. Journal Benefit-Cost Analysis, 8(3), 348359.CrossRefGoogle Scholar
Grossman, Guy, Gazal-Ayal, Oren, Pimentel, Samuel D. & Weinstein, Jeremy M. (2016). Descriptive Representation and Judicial Outcomes in Multiethnic Societies. American Journal of Political Science, 60(1), 4469.CrossRefGoogle Scholar
Hahn, Robert W. & Sunstein, Cass R. (2002). A New Executive Order for Improving Federal Regulation? Deeper and Wider Cost-Benefit Analysis. University of Pennsylvania Law Review, 150(5), 14891552.CrossRefGoogle Scholar
Heaton, Paul (2010). Hidden in Plain Sight: What Cost-of-Crime Research Can Tell Us about Investing in Police. RAND Corporation.Google Scholar
Kaczynski, Andrew (2017). Attorney General Jeff Sessions: Consent Decrees ‘can reduce morale of the police officers’. CNN, Scholar
Knowles, John, Persico, Nicola & Todd, Petra (2001). Racial Bias in Motor Vehicle Searches: Theory & Evidence. Journal of Political Economy, 109(1), 203229.CrossRefGoogle Scholar
Lichtenberg, Illya D. & Smith, Alisa (2001). Testing the Effectiveness of Consent Searches as a Law Enforcement Tool. Criminal Justice Studies, 14(1), 95111.Google Scholar
Linden, Leigh & Rockoff, Jonah E. (2008). Estimates of the Impact of Crime Risk on Property Values from Megan’s Laws. American Economic Review, 98(3), 11031127.CrossRefGoogle Scholar
Logan, T. K., Walker, Robert & Hoyt, William (2012). The Economic Costs of Partner Violence and the Cost-Benefit of Civil Protective Orders. Journal of Interpersonal Violence, 27(6), 11371154.CrossRefGoogle ScholarPubMed
Ludwig, Jens & Cook, Philip J. (2001). The Benefits of Reducing Gun violence: Evidence from Contingent-Valuation Survey Data. Journal of Risk and Uncertainty, 22(3), 207226.CrossRefGoogle Scholar
Lum, Cynthia, Hibdon, Julie, Cave, Breanne, Koper, Christopher S. & Merola, Linda (2011). License Plate Reader (LPR) Police Patrols in Crime Hot Spots: an Experimental Evaluation in Two Adjacent Jurisdictions. Journal of Experimental Criminology, 7(4), 321345.CrossRefGoogle Scholar
Matthies, Carl & Chiu, Tina (2014). Putting a Value on Crime Analysts. Vera Institute of Justice.Google Scholar
McCollister, Kathryn E., French, Michael T. & Fang, Hai (2010). The Cost of Crime to Society: New Crime-Specific Estimates for Policy and Program Evaluation. Drug and Alcohol Dependence, 108(1), 98109.CrossRefGoogle ScholarPubMed
Merola, Linda, Lum, Cynthia, Cave, Breanne & Hibdon, Julie (2014). Community Support for License Plate Recognition. Policing: An International Journal of Police Strategies & Management, 37(1), 3051.CrossRefGoogle Scholar
Miller, Ted R., Galbraith, Maury S. & Lawrence, Bruce A. (1998). Costs and Benefits of a Community Sobriety Checkpoint Program. Journal of Studies on Alcohol, 59(4), 462468.CrossRefGoogle ScholarPubMed
Moore, Simon & Shepherd, Jonathan P. (2006). The Cost of Fear: Shadow Pricing the Intangible Costs of Crime. Applied Economics, 38(3), 293300.CrossRefGoogle Scholar
Morgenstern, Richard D., Pizer, William A. & Shih, Jhih-Shyang (2001). The Cost of Environmental Protection. The Review of Economics and Statistics, 83(4), 732738.CrossRefGoogle Scholar
Murray, Christopher J. L., Vos, Theo, Lozano, Rafael, Naghavi, Mohsen, Flaxman, Abraham D., Michaud, Catherine & Ezzati, Majid et al. (2012). Disability-Adjusted Life Years (DALYs) for 291 Diseases & Injuries in 21 regions, 1990–2010: A Systematic Analysis for the Global Burden of Disease Study 2010. The Lancet, 380(9859), 21972223.CrossRefGoogle ScholarPubMed
Nagin, Daniel S., Piquero, Alex R., Scott, Elizabeth S. & Steinberg, Laurence (2006). Public Preferences for Rehabilitation Versus Incarceration of Juvenile Offenders: Evidence from a Contingent Valuation Survey. Criminology & Public Policy, 5(4), 627651.CrossRefGoogle Scholar
OECD(2009). Security, Risk Perception & Cost-Benefit Analysis: Summary and Conclusions (2009): Discussion Paper No. 2009-6 OECD/ITF Round Table on Security, Risk Perception and Cost-Benefit Analysis, International Transport Forum, 11–12 December 2008.Google Scholar
Pearce, David, Atkinson, Giles & Mourato, Susana (2006). “Cost-Benefit Analysis and the Environment: Recent Developments”. Organisation for Economic Co-operation and Development. Scholar
Post, Lori A., Mezey, Nancy J., Maxwell, Christopher & Wibert, Wilma Novalés (2002). The Rape Tax: Tangible and Intangible Costs of Sexual Violence. Journal of Interpersonal Violence, 17(7), 773782.CrossRefGoogle Scholar
Ratcliffe, Jerry H., Taniguchi, Travis, Groff, Elizabeth R. & Wood, Jennifer D. (2011). The Philadelphia Foot Patrol Experiment: A Randomized Controlled Trial of Police Patrol Effectiveness in Violent Crime Hotspots. Criminology, 49(3), 795831.CrossRefGoogle Scholar
Rosenfeld, Richard & Fornango, Robert (2014). The Impact of Police Stops on Precinct Robbery & Burglary Rates in New York City, 2003–2010. Justice Quarterly, 31(1), 96122.CrossRefGoogle Scholar
Shapland, Joanna, Atkinson, Anne, Atkinson, Helen, Dignan, James, Edwards, Lucy, Hibbert, Jeremy, Howes, Marie, Johnstone, Jennifer, Robinson, Gwen & Sorsby, Angela (2008). Does Restorative Justice Affect Reconviction? The Fourth Report from the Evaluation of Three Schemes. Ministry of Justice Research Series, 10(8), 176.Google Scholar
Sherman, Lawrence W., Strang, Heather, Barnes, Geoffrey, Woods, Daniel J., Bennett, Sarah, Inkpen, Nova, Newbury-Birch, Dorothy, Rossner, Meredith, Angel, Caroline, Mearns, Malcolm & Slothower, Molly (2015). Twelve Experiments in Restorative Justice: the Jerry Lee Program of Randomized Trials of Restorative Justice Conferences. Journal of Experimental Criminology, 11(4), 501540.CrossRefGoogle Scholar
Jillian, Swencionis & Goff, Phillip (Forthcoming 2017). The Psychological Science of Racial Bias and Policing. Psychology, Public Policy, and Law, 20.Google Scholar
Szoldra, Paul (2014). This is the Terrifying Result of Police Militarization. Business Insider, Scholar
Taylor, Bruce, Koper, Christopher & Woods, Daniel (2011). Combating Auto Theft in Arizona: A Randomized Experiment with License Plate Recognition Technology. Final Report to the National Institute of Justice. Washington, DC: Police Executive Research Forum.Google Scholar
Tita, George E., Petras, Tricia L. & Greenbaum, Robert T. (2006). Crime and Residential Choice: A Neighborhood Level Analysis of the Impact of Crime on Housing Prices. Journal of Quantitative Criminology, 22, 299317.CrossRefGoogle Scholar
US Bureau of Justice Statistics (2013). Employment and Expenditures. Washington, DC: United States Department of Justice;  Office of Justice Programs. ScholarPubMed
Weisburd, David & Green, Lorraine (1995). Policing Drug Hot Spots: The Jersey City Drug Market Analysis Experiment. Justice Quarterly, 12(4), 711735.CrossRefGoogle Scholar
Weisburd, David, Wooditch, Alese, Weisburd, Sarit & Yang, Sue-Ming (2016). Do Stop, Question, and Frisk Practices Deter Crime? Criminology & Public Policy, 15(1), 3135.CrossRefGoogle Scholar
Welsh, Brandon C. & Farrington, David P. (2009). Public area CCTV and Crime Prevention: An Updated Systematic Review and Meta-Analysis. Justice Quarterly, 26(4), 716745.CrossRefGoogle Scholar
Welsh, Brandon C., Farrington, David P. & Sherman, Lawrence W. (2001). Costs and Benefits of Preventing Crime. Westview Press.Google Scholar
Whrenv. United States (1996) 517 U.S. 806, 819.Google Scholar
Wilson, Holly A. & Hoge, Robert D. (2013). The Effect of Youth Diversion Programs on Recidivism: A Meta-Analytic Review. Criminal Justice and Behavior, 40(5), 497518.CrossRefGoogle Scholar