On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning

Published online by Cambridge University Press:  23 July 2020

Justin B. Biddle*
Affiliation:
School of Public Policy, Georgia Institute of Technology Atlanta, GA, USA

Abstract

Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning (ML) systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems requires human decisions that involve tradeoffs that reflect values. In many cases, these decisions have significant—and, in some cases, disparate—downstream impacts on human lives. After examining an influential court decision regarding the use of proprietary recidivism-prediction algorithms in criminal sentencing, Wisconsin v. Loomis, the paper provides three recommendations for the use of ML in penal systems.

Type
Article
Copyright
© The Author(s) 2020. Published by Canadian Journal of Philosophy


References

Alexander, Michelle. 2012. The New Jim Crow: Mass Incarceration in the Age of Colorblindness. Rev. ed. New York: The New Press.
Angwin, Julia, Larson, Jeff, Mattu, Surya, and Kirchner, Lauren. 2016. “Machine Bias.” ProPublica, May 23. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Biddle, Justin B. 2013. “State of the Field: Transient Underdetermination and Values in Science.” Studies in History and Philosophy of Science 44: 124–33.
Biddle, Justin B. 2016. “Inductive Risk, Epistemic Risk, and Overdiagnosis of Disease.” Perspectives on Science 24 (2): 192–205.
Biddle, Justin B. 2018. “‘Antiscience Zealotry’? Values, Epistemic Risk, and the GMO Debate.” Philosophy of Science 85: 360–79.
Biddle, Justin B. 2020. “Epistemic Risks in Cancer Screening: Implications for Ethics and Policy.” Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 79: 101200.
Biddle, Justin B., and Kukla, Rebecca. 2017. “The Geography of Epistemic Risk.” In Exploring Inductive Risk: Case Studies of Values in Science, edited by Elliott, K. and Richards, T., 215–37. Oxford: Oxford University Press.
Biddle, Justin, and Winsberg, Eric. 2010. “Value Judgements and the Estimation of Uncertainty in Climate Modeling.” In New Waves in Philosophy of Science, edited by Magnus, P. D. and Busch, J., 172–97. Basingstoke, England: Palgrave Macmillan.
Bolukbasi, Tolga, Chang, Kai-Wei, Zou, James Y., Saligrama, Venkatesh, and Kalai, Adam T. 2016. “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings.” In Advances in Neural Information Processing Systems 29, edited by Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., 4349–57.
Brown, Matthew. 2013. “Values in Science beyond Underdetermination and Inductive Risk.” Philosophy of Science 80 (5): 829–39.
Brown, Matthew. 2020. Science and Moral Imagination: A New Ideal for Values in Science. Pittsburgh: University of Pittsburgh Press.
Buolamwini, Joy, and Gebru, Timnit. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81: 1–15.
Chouldechova, Alexandra. 2017. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5 (2). https://doi.org/10.1089/big.2016.0047.
Corbett-Davies, Sam, Pierson, Emma, Feller, Avi, and Goel, Sharad. 2016. “A Computer Program Used for Bail and Sentencing Decisions Was Labeled Biased against Blacks. It’s Actually Not That Clear.” Washington Post, October 17. https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas.
Desai, Devin, and Kroll, Joshua. 2018. “Trust but Verify: A Guide to Algorithms and the Law.” Harvard Journal of Law and Technology 31 (1).
Dieterich, William, Mendoza, Christina, and Brennan, Tim. 2016. “COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity.” Northpointe Inc. Research Department. http://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf.
Douglas, Heather. 2000. “Inductive Risk and Values in Science.” Philosophy of Science 67 (4): 559–79.
Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press.
Douglas, Heather. 2017. “Why Inductive Risk Requires Values in Science.” In Current Controversies in Values and Science, edited by Elliott, Kevin and Steel, Daniel, 81–93. New York: Routledge.
Eaglin, Jessica. 2017. “Constructing Recidivism Risk.” Emory Law Journal 67: 59–122.
Ehteshami Bejnordi, Babak, Veta, Mitko, van Diest, Paul Johannes, van Ginneken, Bram, Karssemeijer, Nico, Litjens, Geert, van der Laak, Jeroen A. W. M., et al. 2017. “Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer.” JAMA 318 (22): 2199–210. https://doi.org/10.1001/jama.2017.14585.
Elliott, Kevin. 2011. Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research. New York: Oxford University Press.
Elliott, Kevin. 2020. “A Taxonomy of Transparency in Science.” Canadian Journal of Philosophy.
Elliott, Kevin, and Resnik, David. 2014. “Science, Policy, and the Transparency of Values.” Environmental Health Perspectives 122 (7): 647–50.
Giere, Ronald. 1988. Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.
Gillespie, Tarleton. 2014. “The Relevance of Algorithms.” In Media Technologies, edited by Gillespie, Tarleton, Boczkowski, Pablo, and Foot, Kirsten, 167–94. Cambridge, MA: MIT Press.
Haenssle, H. A., Fink, C., Schneiderbauer, R., Toberer, F., Buhl, T., Blum, A., Kalloo, A., Ben Hadj Hassen, A., et al., and the Reader Study Level-I and Level-II Groups. 2018. “Man against Machine: Diagnostic Performance of a Deep Learning Convolutional Neural Network for Dermoscopic Melanoma Recognition in Comparison to 58 Dermatologists.” Annals of Oncology 29 (8): 1836–42. https://doi.org/10.1093/annonc/mdy166.
Harcourt, Bernard. 2015. “Risk as a Proxy for Race: The Dangers of Risk Assessment.” Federal Sentencing Reporter 27 (4): 237–43.
Harris, Grant T., Rice, Marnie E., Quinsey, Vernon L., and Cormier, Catherine A. 2015. Violent Offenders: Appraising and Managing Risk. 3rd ed. Washington, DC: American Psychological Association.
Havstad, Joyce. 2020. “Archaic Hominin Genetics and Amplified Inductive Risk.” Canadian Journal of Philosophy.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 2019. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems. 1st ed. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html.
Intemann, Kristen. 2015. “Distinguishing between Legitimate and Illegitimate Values in Climate Modeling.” European Journal for Philosophy of Science 5 (2): 217–32.
Kaminski, Margot E. 2019. “The Right to Explanation, Explained.” Berkeley Technology Law Journal 34 (1). http://dx.doi.org/10.2139/ssrn.3196985.
Kehl, Danielle, Guo, Priscilla, and Kessler, Samuel. 2017. Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing. Responsive Communities Initiative, Berkman Klein Center for Internet and Society, Harvard Law School. http://nrs.harvard.edu/urn-3:HUL.InstRepos:33746041.
Kitcher, Philip. 2001. Science, Truth, and Democracy. New York: Oxford University Press.
Larson, Jeff, Mattu, Surya, Kirchner, Lauren, and Angwin, Julia. 2016. “How We Analyzed the COMPAS Recidivism Algorithm.” ProPublica, May 23. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
Lashbrook, Angela. 2018. “AI-Driven Dermatology Could Leave Dark-Skinned Patients Behind.” The Atlantic, August 16. https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619.
LeCun, Y., Bengio, Y., and Hinton, G. 2015. “Deep Learning.” Nature 521 (7553): 436–44.
Longino, Helen. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton, NJ: Princeton University Press.
Longino, Helen. 2002. The Fate of Knowledge. Princeton, NJ: Princeton University Press.
Morley, Jessica, Floridi, Luciano, Kinsey, Libby, and Elhalal, Anat. 2019. “From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices.” Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00165-5.
Northpointe. 2012. COMPAS Risk and Need Assessment System: Selected Questions Posed by Inquiring Agencies. http://www.northpointeinc.com/files/downloads/FAQ_Document.pdf.
Okruhlik, Kathleen. 1994. “Gender and the Biological Sciences.” Biology and Society 20: 21–42.
O’Neil, Cathy. 2016. Weapons of Math Destruction. New York: Crown.
Parker, W. 2006. “Understanding Pluralism in Climate Modeling.” Foundations of Science 11: 349–68.
Perrot, Patrick. 2017. “What about AI in Criminal Intelligence? From Predictive Policing to AI Perspectives.” European Police Science and Research Bulletin 16: 65–76.
Petrov, Christo. 2019. “Big Data Statistics 2019.” TechJury (blog). https://techjury.net/stats-about/big-data-statistics.
Potochnik, Angela. 2012. “Feminist Implications of Model-Based Science.” Studies in History and Philosophy of Science 43: 383–89.
The Royal Society. 2017. Machine Learning: The Power and Promise of Computers That Learn by Example. https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf.
Rudner, Richard. 1953. “The Scientist Qua Scientist Makes Value Judgments.” Philosophy of Science 20 (1): 1–6.
Schroeder, Andrew. 2020. “Values in Science: Ethical vs. Political Approaches.” Canadian Journal of Philosophy.
Shueh, Jason. 2016. “White House Challenges Artificial Intelligence Experts to Reduce Incarceration Rates.” Government Technology, June 7. https://www.govtech.com/computing/White-House-Challenges-Artificial-Intelligence-Experts-to-Reduce-Incarceration-Rates.html.
Singh, Jatinder, Walden, Ian, Crowcroft, Jon, and Bacon, Jean. 2016. “Responsibility and Machine Learning: Part of a Process.” SSRN, October 27. https://ssrn.com/abstract=2860048 or http://dx.doi.org/10.2139/ssrn.2860048.
Solomon, Miriam. 2001. Social Empiricism. Cambridge, MA: MIT Press.
Starr, Sonja B. 2015. “The New Profiling: Why Punishing Based on Poverty and Identity Is Unconstitutional and Wrong.” Federal Sentencing Reporter 27 (4): 229–36.
State v. Loomis. 2016. Supreme Court of Wisconsin, 881 N.W. 2d 749. https://casetext.com/case/state-v-loomis-22.
Sullivan, Emily. 2019. “Understanding from Machine Learning Models.” British Journal for the Philosophy of Science. axz035. https://doi.org/10.1093/bjps/axz035.
Tashea, Jason. 2019. “France Bans Publishing of Judicial Analytics and Prompts Criminal Penalty.” ABA Journal, June 7. http://www.abajournal.com/news/article/france-bans-and-creates-criminal-penalty-for-judicial-analytics.
Turek, Matt. 2018. “Explainable Artificial Intelligence (XAI).” DARPA. https://www.darpa.mil/program/explainable-artificial-intelligence.
Verma, Sahil, and Rubin, Julia. 2018. “Fairness Definitions Explained.” In Proceedings of the International Workshop on Software Fairness. New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3194770.3194776.
Wilholt, Torsten. 2009. “Bias and Values in Scientific Research.” Studies in History and Philosophy of Science 40: 92–101.
Wilholt, Torsten. 2013. “Epistemic Trust in Science.” British Journal for the Philosophy of Science 64: 233–53.