In previous sections, we saw that implicit measures of prejudice were not consistent predictors of behavior, a conclusion in line with meta-analyses documenting relatively weak associations between implicit measures and behavior (Greenwald et al., Reference Greenwald, Poehlman and Uhlmann2009, Reference Greenwald, Banaji and Nosek2015; Kurdi et al., Reference Kurdi, Seitchik and Axt2018; Oswald et al., Reference Oswald, Mitchell and Blanton2015). Taken too literally, implicit bias scores can lead to labeling people as prejudiced even when they do not behave in prejudiced ways. Some observers have asked whether such mismatches instead reflect base-rate knowledge.
For example, if African Americans are overrepresented in crime statistics, quicker associations of Black faces with negative words such as “jail” may not reflect racial bias and instead may result simply from knowledge of culturally disseminated information.
Communicating such limitations of implicit measures to the general public is important for avoiding misperceptions and misguided decisions, but such communication has not always occurred. For example, some companies have used implicit measures to deem employees implicitly racist and therefore ineligible for hiring or in need of “diversity training”. Yet, as we shall see, there is little evidence that such anti-implicit-bias training programs have the desired effects.
We will also see that the new technologies developed over the past twenty years to measure implicit bias have not always performed as hoped. Low reliability, weak correlations among different measures of implicit bias, model misspecification, random and systematic method error, and the properties of some measures’ metrics and scoring algorithms all give reason for hesitation. This evidence has prompted a refined understanding of what implicit bias measures mean, how they function, and what value they hold (Mitchell & Tetlock, Reference Mitchell, Tetlock, Lilienfeld and Waldman2017). In light of all this, it will be valuable for future research to establish exactly what implicit bias measures assess, how implicit bias scores should be interpreted, and whether and when implicit bias affects consequential behaviors.
Neuroscience research has provided valuable evidence regarding a central assumption of much research on implicit bias: that the associations yielding implicit bias scores are attributable to stable knowledge structures that are activated in certain situations and thereby influence behavior. The predictive processing framework in neuroscience challenges this assumption (Hutchinson & Barrett, Reference Hutchinson and Barrett2019). On this view, there are no stable knowledge structures; instead, internal representations are constructed dynamically and in a context-sensitive way. The low reliability of implicit bias measures should therefore be unsurprising: it is exactly what one would expect when representations are constructed in real time. Principles of neural design also suggest that implicit attitudes and the behaviors associated with them will be affected by stress and, more generally, by the metabolic or energetic state of the body. Factors that affect that state, such as sleep and the ingestion of caffeine, sugar, or nicotine, might therefore affect assessments of implicit bias.
These ideas from neuroscience invite reconsideration of some traditional perspectives on implicit bias and its measurement.