I can be puzzled by my own action. In this circumstance, I can't directly introspect my reasons, but I can infer plausible beliefs and desires from observing my actions. This is called rationalization. Cushman's central argument is that rationalization has a particular benefit: From rationalizing my actions, I learn true and useful information. The puzzling actions were generated by mechanisms that are not accessible to introspection, but are adaptive, serving my actual interests by increasing my capacity to survive and prosper. Thus through rationalization of my unplanned actions, I could come to make better future plans.
We have three questions about this idea.
When faced with another person's puzzling action, observers only sometimes rationalize it, assuming that the action was an efficient way to achieve the person's goals given her costs (Gershman et al. 2016). On other occasions, observers may explain puzzling behavior by writing it off as irrational. Indeed, observers too often infer that people who chose unfamiliar actions don't just have different beliefs and desires, but are fundamentally irrational (Kennedy & Pronin 2008). Just as we can explain away others’ actions as irrational, we can do the same for ourselves; I could interpret my own puzzling action as the result of a habit, an emotional aversion, or a descriptive norm (“I always do that”; “it just feels bad”; “it's what we do”). So, our first question for Cushman is: Can we predict when observers do, and when they don't, rationalize (their own, or others’) puzzling actions?
Perhaps we rationalize when the benefits of doing so are the greatest, but when is that? The target article provides surprisingly little empirical evidence of when and how the promised benefits of rationalization actually accrue. Cushman proposes that rationalization lets me learn useful new information, which I can incorporate into better subsequent plans. This idea ought to make testable predictions: Training or enhancing rationalization should cause improved planning; limiting or preventing rationalization should impair planning; and people who more frequently rationalize their past actions should therefore make better plans in future. Some phenomena do seem consistent with these predictions. For example, cognitive behavioral therapy, which may be a means of enhancing rationalization, is an effective strategy for reducing self-harm and improving problem solving (Hawton et al. 2016); and people who can give more distinct and differentiated descriptions of their emotions appear to cope better with unexpectedly negative or threatening events (Kashdan et al. 2010; Pond et al. 2012; Zaki et al. 2013). In general, we would like Cushman to translate his framework into more specific, falsifiable predictions.
On the other hand, even if some rationalization can generate some benefits for planning, more rationalization probably doesn't always generate more benefits (Trapnell & Campbell 1999). Too much rationalization is called rumination: Persistently chewing the cud of one's past actions is associated with depression and anxiety, not particularly good planning (Nolen-Hoeksema 2000). In sum, our second question for Cushman is: Under what circumstances do people who rationalize their actions actually make better plans? And how should we measure “better plans” to test this idea?
There may be a link here to the concept of “resource rationality” (Griffiths et al. 2015). The principle of resource rationality is to achieve the best cognitive outcome, counting the computational costs. A resource-rational creature would thus rationalize specifically when the benefits outweigh the costs of more extensive computation; for example, where situations have complex and contingent payoff structures; where they involve close relationships of high value; around highly consequential decisions made under uncertainty; in the wake of conflict with significant others; or when following instincts, learned associations, or norms leads to a large error signal. It would be valuable to understand empirically whether these are the cases where most rationalization does indeed occur. The idea of resource rationality also links quite naturally to the possibility of pathological over-rationalization: This would occur when rationalization resources are deployed beyond the point at which their marginal benefit for future planning exceeds their cognitive cost.
Finally, we feel the target article may underestimate the importance of the interpersonal functions of rationalization, as well as how tight the linkage is between inter- and intra-personal ones. Cushman considers one way in which rationalization operates in interpersonal interaction: when I offer an exculpatory but false explanation of harmful actions. In this context, since I know my explanation is false, the only reason I might actually acquire the rationalized beliefs and desires is to achieve more effective deception of others (Trivers 2000). We suspect that this example is too narrow, missing much of the proximal motive and the ultimate benefit of rationalization.
Like Cushman, we expect that human minds contain multiple mechanisms vying for control of behavior. In our view, such competition occurs not just between separate modules with different functions. There can also be conflict within any one system, for example, between competing desires, fears, norms, and expectations. Inner conflicts arise because people are uncertain about the present, about the future, about the value of key rewards, and so on.
Our third question is therefore: Do the benefits of rationalizing one's own actions actually arise mostly from simplifying the account of our actions that we share with other people? When a person's actions actually emerge from inner conflict and competition, the resulting sequence of behaviors could be quite confusing to third-party observers trying to infer a unitary set of beliefs and desires. Rendering our own actions seemingly rational and comprehensible could have many interpersonal benefits, including signaling our value as a future partner or persuading others to subscribe to joint plans (Mercier & Sperber 2011).
In short, the target article stimulates further questions: about when rationalization is, and is not, likely to occur, and about where to expect, and how to measure, its benefits.