
Locating Values in the Space of Possibilities

Published online by Cambridge University Press:  29 October 2024

Sara Aronowitz*
Affiliation:
University of Toronto, Canada

Abstract

Where do values live in thought? A straightforward answer is that we (or our brains) make decisions using explicit value representations which are our values. Recent work applying reinforcement learning to decision-making and planning suggests that, more specifically, we may represent both the instrumental expected value of actions as well as the intrinsic reward of outcomes. In this paper, I argue that identifying value with either of these representations is incomplete. For agents such as humans and other animals, there is another place where reward can be located in thought: the division of the space of possibilities or “state space.”

Information

Type
Article
Creative Commons
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of the Philosophy of Science Association

Figure 1. A schematic of influences on state space representations. Note that these nodes depict synchronically distinct lines of influence on the state space representation, but they are likely interconnected at least diachronically: the current reward function influences future personal/historical factors, and both of these can influence cultural and social factors, for instance by causing the agent to seek out a different subcultural group.