Gilead et al. propose that the brain embodies a hierarchy of abstraction processes whose levels differ not only in what they represent (their inputs), but also in how they impose constraints on their interactions with other representations. In other words, they propose a structured system of symbolic mental representations that allows for various distinct algorithmic operations. They further argue that because predictive processing (PP) neglects the importance of symbolic processing and its functional heterogeneity, it cannot account for the mind's representational diversity and combinatorial nature.
We agree with the authors that any all-encompassing theory of brain function needs to accommodate the brain's representational diversity. We argue, however, that in hierarchical PP this is captured by deep and less contextualized generative models encoded in the neural system, whose acquisition and update are based on a unifying inferential principle (Friston et al. 2017a; Melloni et al. 2019; Snyder et al. 2015). The generative model probabilistically represents our beliefs about the hidden states that give rise to our sensory experiences, in order to minimize surprise (Friston et al. 2017a). Central to our point, the model can be ascribed key characteristics that make it suitable to explain the mind's representational diversity (Melloni et al. 2019).
First, the generative model can represent the probabilities of both discrete and continuous events (Friston et al. 2017a). In the former case, where events in the world are categorical (e.g., we can be either dead or alive but not both at the same time), the corresponding beliefs would be represented as a probability distribution over a finite set of states. In contrast, continuous events (e.g., a moving object at a particular place in the visual field) would be encoded as a probability density over a continuous state space. The kinds of generative models suited for the higher-level abstract representations introduced in the article, such as categories, would thus rely on discrete states of the world that can be described with symbolic or semantic labels.
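The discrete/continuous distinction can be made concrete with a minimal sketch (the states and numbers are purely illustrative, not drawn from any fitted model):

```python
import math

# Discrete belief: a categorical distribution over a finite set of states.
belief_discrete = {"alive": 0.95, "dead": 0.05}
assert abs(sum(belief_discrete.values()) - 1.0) < 1e-9

# Continuous belief: a probability density, here a Gaussian over an object's
# horizontal position in the visual field (degrees of eccentricity).
def gaussian_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# The density peaks at the believed location (the mean).
density_at_mean = gaussian_pdf(5.0, mean=5.0, sd=1.0)
```

The discrete belief assigns a probability to each symbolic label directly; the continuous belief is only queried pointwise, since individual points carry zero probability mass.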
The second relevant feature is that the generative model can be expressed as a hierarchical Bayesian graph with nodes and edges, where the nodes represent the hidden states and the connections stand for the conditional dependencies between them (Friston et al. 2017a). This structure can accommodate the network organization of the hierarchy of abstractions postulated by Gilead et al.
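As a rough sketch of such a graph, hidden states can be held as nodes and conditional dependencies as directed edges, so that each abstract state constrains the states below it (the particular states chosen here are hypothetical examples):

```python
# Directed graph: parent state -> list of child states it constrains.
# Labels are illustrative placeholders for hidden states at each level.
graph = {
    "category:animal": ["object:dog"],                    # abstract level
    "object:dog": ["feature:fur", "feature:bark"],        # intermediate level
    "feature:fur": [],                                    # sensory-level leaves
    "feature:bark": [],
}

def ancestors(node, graph):
    """Return every higher-level state that (directly or indirectly) constrains `node`."""
    parents = [p for p, children in graph.items() if node in children]
    result = set(parents)
    for p in parents:
        result |= ancestors(p, graph)
    return result
```

Calling `ancestors("feature:bark", graph)` traces the chain of conditional dependencies up the hierarchy, mirroring how an abstract category contextualizes lower-level predictions.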
Third, the generative model can be a deep, temporally nested hierarchy (Friston et al. 2018), allowing the mental simulation of possible future states during decision making (i.e., mental time travel). This third property also gives the neural system the capability of making predictions at multiple scales of abstraction.
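A toy version of this temporal nesting: a slow level selects a goal that persists over several steps, while a fast level predicts one observation per step, so rolling out the hierarchy amounts to mentally simulating the future (the policies and transitions below are invented for illustration):

```python
# Slow level: a goal unfolds into an ordered sequence of actions.
slow_policy = {"get_coffee": ["stand", "walk", "pour"]}

# Fast level: each action predicts the observation expected to follow it.
fast_transitions = {
    "stand": "at_desk",
    "walk": "in_kitchen",
    "pour": "cup_full",
}

def simulate(goal):
    """Roll out the expected observation sequence for a candidate goal."""
    return [fast_transitions[action] for action in slow_policy[goal]]
```

Comparing such rollouts across candidate goals is one way a deep temporal model could support prospective decision making at more than one timescale.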
Finally, the generative model can take a factorial form, in which diverse causes are represented as independent and separate states that can be brought together (e.g., through convergent connectivity) to explain the sensory input at hand and/or produce a new representational outcome (Friston & Buzsáki 2016). An advantage of factorizing the generative model is that it reduces combinatorial complexity, as the system does not need to explicitly code for all possible combinations of states. For instance, a visual event can contain any arbitrary combination of objects (a "what" attribute), their location ("where"), and a timestamp ("when"). The brain could factorize those attributes such that they can be put together to represent every possible event (Auksztulewicz et al. 2018; Friston & Buzsáki 2016). A similar scheme can be devised for the case of higher-order representations, in which predicators and other abstracta could be separately represented in the graph, and the result of their combination can be obtained via convergence of the associated nodes.
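The combinatorial saving from factorization can be counted directly. In this hypothetical example with three "what", three "where", and two "when" values, a joint code needs a state per conjunction, whereas a factorized code needs only a state per factor value:

```python
from itertools import product

# Factorized attributes of a visual event (values are illustrative).
what = ["dog", "cat", "car"]
where = ["left", "center", "right"]
when = ["t1", "t2"]

# A non-factorized model must encode every conjunction explicitly.
joint_states = list(product(what, where, when))
n_joint = len(joint_states)  # 3 * 3 * 2 = 18 states

# A factorized model encodes each factor separately and combines on demand.
n_factorized = len(what) + len(where) + len(when)  # 3 + 3 + 2 = 8 states

# Any event remains recoverable by convergence of one state per factor.
event = ("dog", "center", "t1")
assert event in joint_states
```

The gap widens multiplicatively as factors or values are added, which is the sense in which factorization tames combinatorial complexity.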
All in all, if we take the described properties into account, it becomes evident that the generative model can place structural constraints that impact the kind of neuronal computations allowed by the representational system (Melloni et al. 2019). How this structure is acquired and implemented in the brain, however, remains a question to be addressed (the structure learning problem; see Griffiths et al. 2010; Melloni et al. 2019). Regardless, it can still be argued that PP, at the computational level, can accommodate symbolic representations: these are embedded in the generative model.
But is accounting for representational diversity enough for PP to become a unifying theory of the brain? We argue that it is not, as PP still neglects the mind's diversity at the experiential level. The theory fails to explain why and how the processing of different representations "feels" the way it does. In our daily life, clear-cut qualia discontinuities appear between modality-specific representations that differ in their sensory input. For example, "seeing" a dog does not evoke the same subjective experience as "hearing" it bark. How this experiential distinction can be explained in terms of neural processing is ignored by current PP theories. Along the same lines, PP fails to differentiate between conscious and unconscious events. A hierarchy of abstract representations is also evident in the case of unconscious predictions carried out automatically during perception, such as pattern completion processes (Schwiedrzik & Freiwald 2017). Why those automatic predictions do not reach consciousness is not yet articulated by PP. To state the problem in broader terms, why certain processes are accompanied by consciousness and others are not, how differences in qualia translate to differences at the neural processing level, and how subjective experience arises from neural signals are currently not spelled out by PP models. Of note, diversity at the experiential level cannot be tackled by mapping different qualia to distinct cortical areas or brain networks. A map by itself does not explain why or how experiential diversity comes about (Poeppel 2012).
To conclude, although we agree with the authors that the mind's representational diversity poses challenges to PP, we propose that it is at the level of experience that PP fails, not at the level of functional diversity, which is within reach of hierarchical and deep temporal generative models.