
Proceed with Caution

Published online by Cambridge University Press: 29 July 2021

Annette Zimmermann*
Affiliation:
Department of Philosophy, University of York, York, United Kingdom; Carr Center for Human Rights Policy, Harvard University, Cambridge, MA, USA
Chad Lee-Stronach
Affiliation:
Department of Philosophy and Religion, Northeastern University, Boston, MA, USA
*Corresponding author. Email: annette.zimmermann@york.ac.uk

Abstract

Increasingly, the decision-makers in private and public institutions are not humans but predictive algorithmic systems. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Canadian Journal of Philosophy

Figure 1. Structural injustice (S) is an unobserved confounding variable in an algorithmic system comprising data inputs about an individual (I) and a model (M) that optimises the accuracy of its predictions (P) with respect to a target property (T).
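To make the caption's causal claim concrete, the following is a minimal, hypothetical sketch (not from the article) of the confounding structure Figure 1 describes. The variable names S, I, T, and P follow the figure; the "ability" variable and all numerical parameters are illustrative assumptions. The point is that a model fitted purely to maximise predictive accuracy, without ever observing S, still reproduces S's influence in its predictions.

```python
# Minimal illustrative simulation (assumed, not from the article):
# an unobserved confounder S influences both the data inputs I and the
# measured target T, so a model M trained only on (I, T) propagates
# S's influence into its predictions P.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

S = rng.binomial(1, 0.5, n)        # structural injustice (never seen by M)
ability = rng.normal(0, 1, n)      # a legitimate basis for the decision
I = ability - 0.8 * S + rng.normal(0, 0.5, n)  # inputs depressed when S = 1
T = ability - 0.8 * S + rng.normal(0, 0.5, n)  # measured target is, too

# M: ordinary least squares of T on I, fitted without access to S
slope, intercept = np.polyfit(I, T, deg=1)
P = intercept + slope * I          # predictions

# Predictions differ systematically across S groups, mirroring the
# structural disadvantage, even though expected ability is identical.
print("mean prediction | S=0:", P[S == 0].mean())
print("mean prediction | S=1:", P[S == 1].mean())
print("mean ability    | S=0:", ability[S == 0].mean())
print("mean ability    | S=1:", ability[S == 1].mean())
```

Running this shows roughly equal mean ability in both groups but a markedly lower mean prediction where S = 1: under these assumed parameters, accuracy-optimising prediction alone cannot distinguish relevantly similar cases once structural injustice confounds both inputs and target.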