  • Print publication year: 2013
  • Online publication date: June 2014

9 - Using Regression Discontinuity Designs in Crime Research

Summary

INTRODUCTION

Some of the most interesting and important questions in criminology are causal in nature: Does neighborhood disadvantage influence an individual’s risk of criminal involvement? Does drug treatment work? Do changes in the threat of punishment deter crime? Distinguishing between causation and mere correlation is central to developing, testing, and refining our theories about the determinants of criminal behavior, most of which build on basic facts about how different constructs are causally related to one another. The distinction between statistical correlations and causal relationships is also of more than just academic interest, as most policy decisions hinge on causal questions. Getting the wrong answers to these questions leads to misguided policies that divert resources away from more effective alternatives and sometimes even impose direct harm on society.

Although everyone in empirical criminology recognizes the challenges to valid causal inference created by the threat of omitted variables, the field remains divided about the most constructive way to proceed. One camp is often viewed as having adopted a purist, “randomized-trial-or-bust” perspective – a group that Sampson (2010) calls “randomistas.” Another camp consists of those we would call “research pluralists,” who are happy to consider findings from any sort of research design on a case-by-case basis.
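The chapter’s subject, the regression discontinuity design, exploits settings where treatment is assigned by whether a running variable crosses a known cutoff. As a minimal illustrative sketch (the risk-score setup, effect size, and bandwidth below are all hypothetical, not taken from the chapter), the following Python snippet simulates cutoff-based assignment and recovers the treatment effect with a local linear fit on each side of the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: offenders whose risk score reaches a cutoff are
# assigned to a program; the score otherwise affects the outcome smoothly.
n = 5000
score = rng.uniform(0, 100, n)        # running variable
cutoff = 50.0
treated = score >= cutoff             # deterministic assignment rule
true_effect = -5.0                    # hypothetical treatment effect
outcome = 20 + 0.3 * score + true_effect * treated + rng.normal(0, 3, n)

# Local linear regression on each side of the cutoff within a bandwidth,
# comparing the two fitted values at the cutoff itself.  The bandwidth is
# ad hoc here; Imbens and Kalyanaraman (2009) derive a data-driven choice.
h = 5.0
left = (score >= cutoff - h) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + h)
fit_left = np.polyfit(score[left], outcome[left], 1)
fit_right = np.polyfit(score[right], outcome[right], 1)
rd_estimate = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
print(f"RD estimate of the treatment effect: {rd_estimate:.2f}")
```

A naive comparison of all treated against all untreated units would confound the program’s effect with the score’s own influence on the outcome; the jump at the cutoff isolates the causal effect for individuals near the threshold.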

Experimental Criminology
  • Online ISBN: 9781139424776
  • Book DOI: https://doi.org/10.1017/CBO9781139424776
References
Angrist, Joshua D., and Pischke, Jörn-Steffen. 2009. Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton, NJ: Princeton University Press.
Becker, Gary. 1968. “Crime and Punishment: An Economic Approach.” Journal of Political Economy 76(2): 169–217.
Berk, Richard. 2010. “Recent Perspectives on the Regression Discontinuity Design.” In Handbook of Quantitative Criminology, edited by Piquero, A. and Weisburd, D., pp. 563–79. New York: Springer.
Berk, Richard, Barnes, Geoffrey, Ahlman, Lindsay, and Kurtz, Ellen. 2010. “When Second Best Is Good Enough: A Comparison between a True Experiment and a Regression Discontinuity Quasi-Experiment.” Journal of Experimental Criminology 6(2): 191–208.
Berk, Richard A., and Leeuw, Jan de. 1999. “An Evaluation of California’s Inmate Classification System Using a Generalized Regression Discontinuity Design.” Journal of the American Statistical Association 94(448): 1045–52.
Berk, Richard A., and Rauma, David. 1983. “Capitalizing on Nonrandom Assignment to Treatments: A Regression-Discontinuity Evaluation of a Crime-Control Program.” Journal of the American Statistical Association 78(381): 21–7.
Chen, Keith M., and Shapiro, Jesse. 2007. “Do Harsher Prison Conditions Reduce Recidivism? A Discontinuity-based Approach.” American Law and Economics Review 9(1): 1–29.
Cook, Thomas D., and Wong, Vivian C. 2008. “Empirical Tests of the Validity of the Regression Discontinuity Design.” Annales d’Economie et de Statistique (91/92): 127–50.
Gerber, Alan S., Green, Donald P., and Kaplan, Edward H. 2004. “The Illusion of Learning from Observational Research.” In Problems and Methods in the Study of Politics, edited by Shapiro, Ian, Smith, Rogers M., and Masoud, Tarek E., pp. 251–73. New York: Cambridge University Press.
Heckman, James, and Hotz, V. Joseph. 1989. “Choosing among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The Case of Manpower Training.” Journal of the American Statistical Association 84(408): 862–74.
Heckman, James, LaLonde, Robert, and Smith, Jeff. 1999. “The Economics and Econometrics of Active Labor Market Programs.” In Handbook of Labor Economics, edited by Ashenfelter, O. and Card, D., pp. 1865–2097. Philadelphia: Elsevier.
Hjalmarsson, Randi. 2009a. “Juvenile Jails: A Path to the Straight and Narrow or Hardened Criminality?” Journal of Law and Economics 52(4): 779–809.
Hjalmarsson, Randi. 2009b. “Crime and Expected Punishment: Changes in Perceptions at the Age of Criminal Majority.” American Law and Economics Review 11(1): 209–48.
Holland, Paul. 1988. “Causal Inference, Path Analysis, and Recursive Structural Equations Models.” Sociological Methodology 18: 449–84.
Imbens, Guido, and Kalyanaraman, Karthik. 2009. “Optimal Bandwidth Choice for the Regression Discontinuity Estimator.” NBER Working Paper No. 14726. Cambridge: National Bureau of Economic Research.
Imbens, Guido, and Lemieux, Thomas. 2008. “Regression Discontinuity Designs: A Guide to Practice.” Journal of Econometrics 142(2): 615–35.
LaLonde, Robert. 1986. “Evaluating the Econometric Evaluations of Training Programs.” American Economic Review 76(4): 604–20.
Lee, David, and Lemieux, Thomas. 2010. “Regression Discontinuity Designs in Economics.” Journal of Economic Literature 48(2): 281–355.
Lee, David, and McCrary, Justin. 2005. “Crime, Punishment, and Myopia.” NBER Working Paper No. 11491. Cambridge: National Bureau of Economic Research.
Lerman, Amy E. 2009. “The People Prisons Make: Effects of Incarceration on Criminal Psychology.” In Do Prisons Make Us Safer? The Benefits and Costs of the Prison Boom, edited by Raphael, Steven and Stoll, Michael A., pp. 152–76. New York: Russell Sage Foundation.
Ludwig, Jens, and Miller, Douglas. 2007. “Does Head Start Improve Children’s Life Chances? Evidence from a Regression Discontinuity Design.” Quarterly Journal of Economics 122(1): 159–208.
Marie, Olivier, Walmsley, Rachel, and Moreton, Karen. 2011. “The Effect of Early Release of Prisoners on Home Detention Curfew on Recidivism.” United Kingdom Ministry of Justice Report.
McCrary, Justin. 2008. “Manipulation of the Running Variable in the Regression Discontinuity Design: A Density Test.” Journal of Econometrics 142(2): 698–714.
McKenzie, David, Gibson, John, and Stillman, Steven. 2010. “How Important Is Selection? Experimental versus Nonexperimental Measures of Income Gains from Migration.” Journal of the European Economic Association 8(4): 913–45.
Rauma, David, and Berk, Richard A. 1987. “Remuneration and Recidivism: The Long-term Impact of Unemployment Compensation on Ex-offenders.” Journal of Quantitative Criminology 3(1): 3–27.
Rubin, Donald. 1977. “Assignment to a Treatment Group on the Basis of a Covariate.” Journal of Educational Statistics 2(1): 1–26.
Sampson, Robert. 2010. “Gold Standard Myths: Observations on the Experimental Turn in Quantitative Criminology.” Journal of Quantitative Criminology 26(4): 489–500.
Smith, Jeffrey, and Todd, Petra. 2005. “Does Matching Overcome LaLonde’s Critique of Nonexperimental Methods?” Journal of Econometrics 125(1–2): 305–53.
Thistlethwaite, Donald L., and Campbell, Donald T. 1960. “Regression-Discontinuity Analysis: An Alternative to the Ex-Post Facto Experiment.” Journal of Educational Psychology 51: 309–17.
Wilde, Elizabeth T., and Hollister, R. 2007. “How Close Is Close Enough? Testing Nonexperimental Estimates of Impact against Experimental Estimates of Impact with Education Test Scores as Outcomes.” Journal of Policy Analysis and Management 26: 455–77.