
4 - Causal Inference and the Design and Analysis of Experiments

from Part II - Methods

Published online by Cambridge University Press:  27 July 2017

Oliver James, University of Exeter
Sebastian R. Jilke, Rutgers University, New Jersey
Gregg G. Van Ryzin, Rutgers University, New Jersey
Type: Chapter
In: Experiments in Public Management Research: Challenges and Contributions, pp. 59–88
Publisher: Cambridge University Press
Print publication year: 2017


