
Explananda and explanantia in deep neural network models of neurological network functions

Published online by Cambridge University Press:  06 December 2023

Mihnea Moldoveanu*
Affiliation:
Desautels Centre for Integrative Thinking, Rotman School of Management, University of Toronto, Toronto, ON, Canada mihnea.moldoveanu@rotman.utoronto.ca https://www.rotman.utoronto.ca/FacultyAndResearch/Faculty/FacultyBios/Moldoveanu

Abstract

Depending on what we mean by "explanation," challenges to the explanatory depth and reach of deep neural network models of visual and other forms of intelligent behavior may require revisions both to the elementary building blocks of neural nets (the explananda) and to the ways in which experimental environments and training protocols are engineered (the explanantia). The two paths assume and imply sharply different conceptions of how an explanation explains and of the explanatory function of models.

Information

Type: Open Peer Commentary

Copyright © The Author(s), 2023. Published by Cambridge University Press
