
Reflection Machines: Supporting Effective Human Oversight Over Medical Decision Support Systems

Published online by Cambridge University Press:  10 January 2023

Pim Haselager*
Affiliation:
Donders Institute for Brain, Cognition and Behaviour, Department of AI, Radboud University, Nijmegen, The Netherlands
Hanna Schraffenberger
Affiliation:
Information Science, iHub, Radboud University, Nijmegen, The Netherlands
Serge Thill
Affiliation:
Donders Institute for Brain, Cognition and Behaviour, Department of AI, Radboud University, Nijmegen, The Netherlands
Simon Fischer
Affiliation:
Donders Institute for Brain, Cognition and Behaviour, Department of AI, Radboud University, Nijmegen, The Netherlands
Pablo Lanillos
Affiliation:
Donders Institute for Brain, Cognition and Behaviour, Department of AI, Radboud University, Nijmegen, The Netherlands
Sebastiaan van de Groes
Affiliation:
Health Sciences, Radboud UMC, Nijmegen, The Netherlands
Miranda van Hooff
Affiliation:
Health Sciences, Radboud UMC, Nijmegen, The Netherlands St Maartenskliniek, Nijmegen, The Netherlands
*
*Corresponding author. Email: pim.haselager@donders.ru.nl

Abstract

Human decisions are increasingly supported by decision support systems (DSS). Humans are required to remain “on the loop” by monitoring and approving/rejecting machine recommendations. However, use of DSS can lead to overreliance on machines, reducing human oversight. This paper proposes “reflection machines” (RM) to increase meaningful human control. An RM provides a medical expert not with suggestions for a decision, but with questions that stimulate reflection about decisions. It can refer to data points or suggest counterarguments that are less compatible with the planned decision. RMs think against the proposed decision in order to increase human resistance to automation complacency. Building on preliminary research, this paper will (1) make a case for deriving a set of design requirements for RMs from EU regulations, (2) suggest how RMs could support decision-making, (3) describe how a prototype of an RM could be applied in the medical domain of chronic low back pain, and (4) highlight the importance of exploring an RM’s functionality and the experiences of users working with it.

Information

Type
Research Article
Creative Commons
Creative Commons License (CC BY)
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

Figure 1 A screenshot from the NDT-CLBP as presented to the physician during the patient’s visit (Voorspelling voor patient: Prediction for patient; CPP: Combined Physical and Psychological program; Chirurgie: surgery; Geen interventie: no intervention). The bars represent the likelihood that a specific patient will be a “responder” or a “non-responder” for each of the specified treatments. A responder is defined as a patient achieving a patient-acceptable symptom state, and a non-responder as one with severe disability and persistence of LBP.27


Figure 2 Simplified framework for a joint human–machine (DSS & RM) decision-making process. Case information and the DSS recommendation suggest that the physician proceed with option 1 (e.g., no surgical intervention), but upon RM questioning the physician may switch to option 2 (e.g., surgery) as the final decision, or reaffirm option 1 with greater trust based on increased reasoning.