
Shaping an adaptive approach to address the ambiguity of fairness in AI: Theory, framework, and illustrations

Published online by Cambridge University Press:  16 May 2025

Swaptik Chowdhury*
Affiliation: RAND Corporation, Santa Monica, CA, USA
Lisa Klautzer
Affiliation: Tezo Analytics, Los Angeles, CA, USA
*Corresponding author: Swaptik Chowdhury; Email: swaptikchowdhury16@gmail.com

Abstract

The adoption of AI is pervasive, often operating behind the scenes and influencing decisions without our explicit awareness. It affects many aspects of our lives, from personalized recommendations to crucial determinations like hiring decisions or credit approvals. Yet AI algorithms are often opaque, even to their developers, which raises concerns about fairness. The biases inherent in our data further complicate matters, as current AI systems often lack moral or logical judgment, relying solely on predictive outputs derived from learned data patterns. Efforts to address fairness in AI models face significant challenges, as different definitions of fairness can lead to conflicting outcomes. Despite attempts to mitigate biases and optimize fairness criteria, a universal and satisfactory solution remains elusive. The multidimensional nature of fairness, with its roots in philosophy and evolving concepts in organizational justice, underscores the complexity of the task. Technology is inherently political, shaped by various societal factors and human biases. Recognizing this, stakeholders must engage in nuanced discussions about the types of fairness relevant in specific contexts and the potential trade-offs involved. As in other spheres of decision-making, navigating trade-offs is inevitable, requiring a flexible approach informed by diverse perspectives.

This study acknowledges that achieving fairness in AI is not about prescribing a singular definition or solution but about adapting to evolving needs and values. Embracing ambiguity and tension in decision-making can lead to more inclusive outcomes. The study adopts an interdisciplinary examination of application-specific and consensus-driven frameworks to consider fairness in AI. By evaluating factors such as application nuances, procedural frameworks, and stakeholder dynamics, it demonstrates the framework's broad applicability for understanding and operationalizing fairness by way of two illustrations.

Information

Type
Research Article
Creative Commons
CC BY-ND
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NoDerivatives licence (http://creativecommons.org/licenses/by-nd/4.0), which permits re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press.

Figure 1. Fairness in AI framework.


Figure 2. Stakeholder relationship.