Accepted manuscript

On Values in Fairness Optimization with Machine Learning

Published online by Cambridge University Press: 02 September 2025

Heather Champion*
Affiliation:
Department of Philosophy, University of Western Ontario; Rotman Institute of Philosophy, London, Ontario

Abstract


Statistical criteria of fairness, though controversial, bring attention to the multiobjective nature of many predictive modelling problems. In this paper, I consider how epistemic and non-epistemic values impact the design of machine learning algorithms that optimize for more than one normative goal. I focus on a major design choice between biased search strategies that directly incorporate priorities for various objectives into an optimization procedure, and unbiased search strategies that do not. I argue that both reliably generate Pareto optimal solutions such that various other values are relevant to making a rational choice between them.
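To make the contrast in the abstract concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of the two search strategies it describes for a two-objective problem such as accuracy versus a statistical fairness criterion: a "biased" search that folds priorities for the objectives into a single weighted score before optimizing, and an "unbiased" search that returns the whole Pareto front and defers the trade-off. The candidate scores, weights, and function names here are illustrative assumptions only.

```python
# Hypothetical sketch: biased (weighted scalarization) vs. unbiased (Pareto front) search.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate models, each scored on two objectives to maximize:
# column 0 = predictive accuracy, column 1 = a group-fairness score.
candidates = rng.uniform(0.0, 1.0, size=(200, 2))

def weighted_scalarization(scores, weights):
    """'Biased' search: priorities for the objectives are built into a single
    weighted objective, and the best candidate under that weighting is chosen."""
    return int(np.argmax(scores @ np.asarray(weights)))

def pareto_front(scores):
    """'Unbiased' search: return every non-dominated candidate, leaving the
    trade-off between objectives to be settled after optimization."""
    front = []
    for i, s in enumerate(scores):
        dominated = np.any(np.all(scores >= s, axis=1) & np.any(scores > s, axis=1))
        if not dominated:
            front.append(i)
    return front

best = weighted_scalarization(candidates, weights=[0.7, 0.3])  # priorities fixed up front
front = pareto_front(candidates)                               # priorities deferred

print("scalarized pick:", candidates[best])
print("Pareto front size:", len(front))
# With strictly positive weights, the scalarized pick is itself Pareto optimal,
# which illustrates the abstract's point that both strategies reliably yield
# Pareto optimal solutions; they differ in when value judgments enter the process.
```

In this toy setting, the design choice is visible in where the weights appear: the scalarized search encodes them inside the optimization procedure, while the Pareto-front search leaves them out and hands a set of non-dominated options to a later, value-laden choice.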

Information

Type
Contributed Paper
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the Philosophy of Science Association