
What’s Wrong with Automated Influence

Published online by Cambridge University Press:  24 September 2021

Claire Benn*
Affiliation:
Australian National University, Canberra, Australia
Seth Lazar
Affiliation:
Australian National University, Canberra, Australia
*Corresponding author. Email: cmabenn@gmail.com

Abstract

Automated Influence is the use of Artificial Intelligence (AI) to collect, integrate, and analyse people’s data in order to deliver targeted interventions that shape their behaviour. We consider three central objections against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case how a structural version of that objection has more purchase than its interactional counterpart. By rejecting the interactional focus of “AI Ethics” in favour of a more structural, political philosophy of AI, we show that the real problem with Automated Influence is the crisis of legitimacy that it precipitates.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Canadian Journal of Philosophy