
Hammer or Measuring Tape? Artificial Intelligence and Justice in Healthcare

Published online by Cambridge University Press: 16 May 2023

Jan-Hendrik Heinrichs*
Affiliation:
Institute for Neuroscience and Medicine 7: Brain and Behaviour, Forschungszentrum Jülich GmbH, Jülich, Germany; Institute of Philosophy, RWTH Aachen University, Aachen, Germany

Abstract

Artificial intelligence (AI) is a powerful tool for several healthcare tasks, and AI tools are well suited to optimizing predictive models in medicine. Ethical debates about AI’s extension of the predictive power of medical models suggest a need to adapt core principles of medical ethics. This article demonstrates that a popular interpretation of the principle of justice in healthcare needs amendment given the effect of AI on decision-making. The procedural approach to justice, exemplified by Norman Daniels and James Sabin’s accountability for reasonableness conception, needs amendment because, as research into algorithmic fairness shows, it is insufficiently sensitive to the differential effects that seemingly just principles have on different groups of people. The same line of research also generates methods to quantify these differential effects and make them amenable to correction. Thus, improving the principle of justice requires combining procedures for selecting just criteria and principles with algorithmic tools that measure the real impact these criteria and principles have. In this article, the author shows that algorithmic tools do not merely raise issues of justice but can also be used to mitigate them, by informing us about the real effects that particular distributional principles and criteria would produce.
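To make concrete how algorithmic tools can measure the differential effects the abstract refers to, here is a minimal sketch. It computes group-wise selection rates and a demographic-parity gap for a set of allocation decisions; all data and names are illustrative assumptions, not drawn from the article:

```python
import numpy as np

def selection_rates(decisions, groups):
    """Share of positive allocation decisions per group."""
    return {g: float(decisions[groups == g].mean()) for g in np.unique(groups)}

def parity_gap(decisions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: 1 = resource allocated, 0 = not allocated (illustrative only).
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(selection_rates(decisions, groups))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(decisions, groups))       # 0.5
```

A seemingly neutral allocation rule that produces a large gap of this kind is exactly the sort of differential effect that, the article argues, purely procedural accounts of justice tend to miss.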

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press


References

Notes

1. Díaz V, Viceconti M, Stroetmann K, Kalra D. Roadmap for the Digital Patient – DISCIPULUS. Bonn: Empirica; 2013.

2. Topol, EJ. High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine 2019;25(1):44–56. There is a much-repeated justification for investing heavily in AI systems in healthcare allegedly stemming from Topol’s article. Here is an example of the justification: “Healthcare systems across the globe are struggling with increasing costs and worsening outcomes.” (Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: A mapping review. Social Science & Medicine 2020;260:113172). But this overstates what Topol actually shows. Topol explicitly claims that these trends (increasing costs and worsening outcomes) are specific to the United States healthcare system. Here is the original line: “The first is a failed business model, with increasing expenditures and jobs allocated to healthcare, but with deteriorating key outcomes, including reduced life expectancy and high infant, childhood, and maternal mortality in the United States.” Topol refers to two sources to substantiate his claim, one being a comparison of child mortality in the United States with that in 19 other OECD countries, the other an article with the telling title “Link between health spending and life expectancy: US is an outlier”. He does go on to claim that this issue is not limited to the United States, but he does not provide any evidence for any other national health system.

3. Chancellor, S, De Choudhury, M. Methods in predictive techniques for mental health status on social media: A critical review. npj Digital Medicine 2020;3(1):43. While Chancellor and De Choudhury identify several methodological shortcomings in present studies detecting mental health status from social media postings, they also point out directions to compensate for these shortcomings and thus generate diagnostically valuable tools.

4. Heinrichs, B, Eickhoff, SB. Your evidence? Machine learning algorithms for medical diagnosis and prediction. Human Brain Mapping 2020;41(6):1435–44. doi:10.1002/hbm.24886.

5. Murdoch, B. Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics 2021;22(1):122.

6. Vogt, H, Hofmann, B, Getz, L. The new holism: P4 systems medicine and the medicalization of health and life itself. Medicine, Health Care and Philosophy 2016;19(2):307–23.

7. Laacke, S, Mueller, R, Schomerus, G, Salloch, S. Artificial intelligence, social media and depression. A new concept of health-related digital autonomy. The American Journal of Bioethics 2021;21(7):4–20.

8. Beauchamp, TL, Childress, JF. Principles of biomedical ethics. 8th ed. Oxford, New York: Oxford University Press; 2019.

9. Chen, IY, Szolovits, P, Ghassemi, M. Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics 2019;21(2):E167–79.

10. Obermeyer, Z, Powers, B, Vogeli, C, Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366(6464):447–53.

11. Oliva, J. Dosing discrimination: Regulating PDMP risk scores. California Law Review 2022;110(1):47–115.

12. Kilby AE. Algorithmic fairness in predicting opioid use disorder using machine learning. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. New York: Association for Computing Machinery; 2021:272.

13. Röösli, E, Bozkurt, S, Hernandez-Boussard, T. Peeking into a black box, the fairness and generalizability of a MIMIC-III benchmarking model. Scientific Data 2022;9(1):24.

14. See note 11, Oliva 2022.

15. See note 10, Obermeyer et al. 2019, at 447–53.

16. See Braun, M, Hummel, P, Beck, S, Dabrock, P. Primer on an ethics of AI-based decision support systems in the clinic. Journal of Medical Ethics 2021;47(12):e3.

17. Cowden Hindash, AH, Lujan, C, Howard, M, O’Donovan, A, Richards, A, Neylan, TC, et al. Gender differences in threat biases: Trauma type matters in posttraumatic stress disorder. Journal of Traumatic Stress 2019;32(5):701–11.

18. Gutmann, A, Thompson, D. Deliberating about bioethics. Hastings Center Report 1997;27(3):38–41.

19. Fleck, LM. Just Caring: Health Care Rationing and Democratic Deliberation. Oxford, New York: Oxford University Press; 2009.

20. Daniels N, Sabin JE. Setting Limits Fairly: Can We Learn to Share Medical Resources? Oxford: Oxford University Press; 2002. In prior work, Daniels transferred Rawls’s conception of fair equality of opportunity to the healthcare field. Accordingly, justice in healthcare does not aim at equality of opportunity sans phrase. Individual opportunity depends on several factors, many of which clearly lie outside the healthcare field (e.g., education), and others that might even allow for inequality (e.g., talent). Rather, justice in healthcare aims at eliminating specific undeserved restrictions on persons’ opportunities and chances, namely those that result from disease and disability (Daniels, N. Just Health Care. Cambridge: Cambridge University Press; 1985). This aim of reducing health-related restrictions on opportunity applies equally across all members of a society.

21. Badano, G. If You’re a Rawlsian, How Come You’re So Close to Utilitarianism and Intuitionism? A Critique of Daniels’s Accountability for Reasonableness. Health Care Analysis 2018;26(1):1–16.

22. See note 20, Daniels, Sabin 2002.

23. See note 20, Daniels, Sabin 2002, at 45.

24. Friedman, A. Beyond accountability for reasonableness. Bioethics 2008;22(2):101–12.

25. Lauridsen, S, Lippert-Rasmussen, K. Legitimate allocation of public healthcare: Beyond accountability for reasonableness. Public Health Ethics 2009;2(1):59–69.

26. Ford, A. Accountability for reasonableness: The relevance, or not, of exceptionality in resource allocation. Medicine, Health Care and Philosophy 2015;18(2):217–27.

27. Rid, A. Justice and procedure: How does “accountability for reasonableness” result in fair limit-setting decisions? Journal of Medical Ethics 2009;35(1):12–6.

28. Ashcroft, R. Fair process and the redundancy of bioethics: A polemic. Public Health Ethics 2008;1(1):3–9.

29. Hasman, A, Holm, S. Accountability for reasonableness: Opening the black box of process. Health Care Analysis 2005;13(4):261–73.

30. Landwehr, C. Procedural justice and democratic institutional design in health-care priority-setting. Contemporary Political Theory 2013;12(4):296–317.

31. Rawls, J. A Theory of Justice. Cambridge, MA: Belknap Press; 1971.

32. Scanlon, T. What We Owe to Each Other. Cambridge, MA: Belknap Press; 1998.

33. See note 20, Daniels, Sabin 2002.

34. See note 24, Friedman 2008, at 104.

35. See note 24, Friedman 2008, at 105.

36. See note 24, Friedman 2008, at 105.

37. See note 24, Friedman 2008, at 107.

38. See note 21, Badano 2018, at 1–16.

39. See note 21, Badano 2018, at 11.

40. See note 20, Daniels, Sabin 2002, at 45, emphasis added.

41. Friedman, B, Nissenbaum, H. Bias in computer systems. ACM Transactions on Information Systems 1996;14(3):330–47.

42. Corbett-Davies S, Goel S. The measure and mismeasure of fairness: A critical review of fair machine learning. 2018. https://doi.org/10.48550/arXiv.1808.00023.

43. Fazelpour, S, Danks, D. Algorithmic bias: Senses, sources, solutions. Philosophy Compass 2021;16(8):e12760.

44. Beauchamp, TL, Childress, JF. Principles of biomedical ethics. 5th ed. Oxford, New York: Oxford University Press; 2001.

45. See note 8, Beauchamp, Childress 2019, at 286.

46. Zimmermann, A, Lee-Stronach, C. Proceed with caution. Canadian Journal of Philosophy 2021;52(1):1–20.

47. See note 46, Zimmermann, Lee-Stronach 2021, at 2.

48. Wong, P-H. Democratizing algorithmic fairness. Philosophy & Technology 2020;33(2):225–44.

49. Kleinberg J, Mullainathan S, Raghavan M. Inherent trade-offs in the fair determination of risk scores. 2016. https://doi.org/10.48550/arXiv.1609.05807.
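The inherent trade-off Kleinberg and colleagues prove can be illustrated with a closely related identity due to Chouldechova: for a binary classifier, FPR = p/(1 − p) · (1 − PPV)/PPV · TPR, where p is a group’s base rate. The following sketch (toy numbers, not taken from the cited paper) shows that holding predictive value and sensitivity equal across groups with different base rates forces unequal false-positive rates:

```python
def implied_fpr(ppv: float, tpr: float, base_rate: float) -> float:
    """False-positive rate implied by Chouldechova's identity:
    FPR = p/(1-p) * (1-PPV)/PPV * TPR, with p the group's base rate."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

# Equal positive predictive value and sensitivity in both groups...
ppv, tpr = 0.8, 0.7
for group, base_rate in [("A", 0.5), ("B", 0.2)]:
    print(group, round(implied_fpr(ppv, tpr, base_rate), 3))
# A 0.175
# B 0.044   ...still yields unequal false-positive rates.
```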

50. Christian B. The Alignment Problem: Machine Learning and Human Values. New York: Norton; 2020.

51. See note 48, Wong 2020, at 325.

52. See note 48, Wong 2020, at 325.

53. See note 48, Wong 2020, at 325.

54. See note 46, Zimmermann, Lee-Stronach 2021, at 15.

55. Benjamin, R. Assessing risk, automating racism. Science 2019;366(6464):421–22.

56. See note 11, Oliva 2022, at 200.

57. See note 12, Kilby 2021.

58. See note 13, Röösli et al. 2022, at 9.

59. Canali S. Big data, epistemology and causality: Knowledge in and knowledge out in EXPOsOMICS. Big Data & Society 2016;3(2):2053951716669530.

60. Pearl, J. Causality. 2nd ed. Cambridge: Cambridge University Press; 2009.

61. von Kügelgen J, Karimi A-H, Bhatt U, Valera I, Weller A, Schölkopf B. On the fairness of causal algorithmic recourse. 2020. https://doi.org/10.48550/arXiv.2010.06529.

62. See note 13, Röösli et al. 2022, at 24.

63. Hernandez-Boussard, T, Bozkurt, S, Ioannidis, JPA, Shah, NH. MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care. Journal of the American Medical Informatics Association 2020;27(12):2011–5.

64. See note 13, Röösli et al. 2022, at 24.

65. The What-If Tool; available at https://pair-code.github.io/what-if-tool/. [Last accessed 04/10/2023]

66. The AI Fairness 360 tool; available at https://aif360.mybluemix.net/. [Last accessed 04/10/2023]
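For illustration, here is a minimal sketch of the kind of check such tools provide, using the AI Fairness 360 Python package on toy data (the column names and group encoding are illustrative assumptions, not from the article):

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy decision data: 'sex' is the protected attribute (0/1),
# 'approved' the binary allocation outcome.
df = pd.DataFrame({"sex":      [0, 0, 0, 1, 1, 1],
                   "approved": [1, 0, 0, 1, 1, 1]})

dataset = BinaryLabelDataset(df=df,
                             label_names=["approved"],
                             protected_attribute_names=["sex"])
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{"sex": 0}],
                                  privileged_groups=[{"sex": 1}])

print(metric.disparate_impact())               # ~0.33
print(metric.statistical_parity_difference())  # ~-0.67
```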

67. Collins, GS, Moons, KGM. Reporting of artificial intelligence prediction models. The Lancet 2019;393(10181):1577–9.

68. Using machine learning algorithms as epistemic tools to detect bias and injustice was arguably pioneered by Garg, N, Schiebinger, L, Jurafsky, D, Zou, J. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences 2018;115(16):E3635–44. doi:10.1073/pnas.1720347115. Garg and colleagues identify bias in word associations using machine learning tools. A similar use for the detection of discrimination has been suggested by Heinrichs B. Discrimination in the age of artificial intelligence. AI & Society 2021;37(1):143–54. doi:10.1007/s00146-021-01192-2.
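The core of Garg and colleagues’ method is to compare embedding similarities between attribute words (e.g., gendered pronouns) and target words (e.g., occupations). A self-contained sketch of the idea with fabricated toy vectors (real analyses use trained embeddings such as word2vec or GloVe):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_bias(target, group_a, group_b):
    """Mean cosine similarity of a target vector to group A's
    attribute vectors minus its mean similarity to group B's."""
    return (np.mean([cosine(target, g) for g in group_a])
            - np.mean([cosine(target, g) for g in group_b]))

rng = np.random.default_rng(0)
# Fabricated stand-ins for trained word embeddings (50-dimensional).
she, her = rng.normal(size=50), rng.normal(size=50)
he, him = rng.normal(size=50), rng.normal(size=50)
nurse = she + 0.1 * rng.normal(size=50)  # constructed to lean toward 'she'

print(association_bias(nurse, [she, her], [he, him]))  # > 0: 'female'-leaning
```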

69. Henrich, J, Heine, SJ, Norenzayan, A. The weirdest people in the world? The Behavioral and Brain Sciences 2010;33(2–3):61–83; discussion: 83–135. doi:10.1017/s0140525x0999152x.

70. See note 24, Friedman 2008, at 101–12.

71. See note 28, Ashcroft 2008, at 3–9.

72. See note 48, Wong 2020, at 225–44.