Static-Dynamic Hybridity in Dynamical Models of Cognition

Published online by Cambridge University Press:  12 January 2022

Naftali Weinberger
Affiliation:
Ludwig-Maximilians-Universität München, Munich Center for Mathematical Philosophy, Ludwigstr 31, München 80539, Germany
Colin Allen*
Affiliation:
1101 Cathedral of Learning, Pittsburgh, PA, US
Abstract

Dynamical models of cognition have played a central role in recent cognitive science. In this paper, we consider a common strategy by which dynamical models describe their target systems neither as purely static nor as purely dynamic, but rather using a hybrid approach. This hybridity reveals how dynamical models involve representational choices that are important for understanding the relationship between dynamical and non-dynamical representations of a system.

Information

Type
Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the Philosophy of Science Association
Figure 1: Dynamic causal model for the Watt governor. Solid arrows indicate causal links; dashed arrows (labeled “I”) indicate integration links (see text for details).

Figure 2: Reproduced from Phattanasri et al. 2007, 388. The axes ${y_1}$, ${y_2}$, and ${y_3}$ refer to the states of neurons 1, 2, and 3, respectively. The system was strobed at the end of each smell and reinforcement signal. Panel (A) uses ovals to divide strobed regions into distinct clusters. Panel (B) depicts a trajectory through the state space in a trial involving multiple changes of environment. Panel (C) provides a finite state machine representation of how the system responds to signals, with $ \uparrow $ and $ \downarrow $ denoting the two different “smells” and with $+$ and $-$ denoting positive and negative reinforcement signals. [Figure copyright © 2007 by International Society for Adaptive Behavior. Reprinted by permission of SAGE Publications, Ltd.]