Implicit Bias & Philosophy

This week, I’m talking about implicit bias over at The Brains Blog. I’m including my portion of the discussion below.

1.  The Implicit Association Test (IAT)

The implicit association test (IAT) is one way to measure implicitly biased behavior. In the IAT, “participants […] are asked to rapidly categorize two [kinds of stimuli] (black vs. white [faces]) [into one of] two attributes (‘good’ vs. ‘bad’). Differences in response latency (and sometimes differences in error-rates) are then treated as a measure of the association between the target [stimuli] and the target attribute” (Huebner 2016). Likewise, changes in response latencies and error-rates resulting from experimental interventions are treated as experimentally manipulated changes in associations.
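The latency-based measure described above can be sketched in a few lines of code. This is a minimal illustration, not the official scoring algorithm: it computes a simplified version of the conventional D-score (difference in mean response latencies between the two pairing conditions, scaled by the pooled standard deviation), and all of the reaction-time data below are hypothetical.

```python
# Simplified IAT-style score: difference in mean response latencies
# between two pairing conditions, divided by the pooled standard
# deviation. Latencies below are hypothetical, for illustration only.
from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Larger positive scores mean slower responses in the incongruent
    condition, which is read as a stronger implicit association."""
    pooled = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled

# Hypothetical trial latencies (milliseconds)
congruent = [650, 700, 620, 680, 640]
incongruent = [800, 760, 820, 790, 770]

print(round(iat_d_score(congruent, incongruent), 2))
```

On this toy data, responses are slower in the incongruent condition, so the score comes out positive; equal latencies in both conditions would yield a score near zero.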

2.  The Effect Of Philosophy

As philosophers, we are in the business of arguments and their propositions, not associations. So we might wonder whether we can use arguments to intervene on our implicitly biased behavior. It turns out that we can, even if the findings are not always statistically significant and the effect sizes are often small.

Some think that this effect of arguments on IAT performance falsifies the idea that implicitly biased behavior is realized by associations (Mandelbaum 2016). The idea is that propositions are fundamentally different from associations, so associations cannot be modified by propositions. So if an argument's propositions can change participants' implicitly biased behavior, as measured by the IAT, then implicit biases might "not [be] predicated on [associations] but [rather] unconscious propositionally structured beliefs" (Mandelbaum 2016, bracketed text and italics added).

But there is some reason to think that such falsification relies on oversimplification. After all, many processes are involved in our behavior, implicitly biased or otherwise. So many processes need to be accounted for when trying to measure the effect of an intervention on our implicitly biased behavior: for example, participants' concern about discrimination, their motivation to respond without prejudice (Plant & Devine 1998), and their personal awareness of bias. What happens when we control for these variables? In many cases, we find that the effects of argument-like interventions on implicitly biased behavior are actually explained by changes in participants' concern(s), motivation(s), and/or awareness, not by changes in associations (Devine, Forscher, Austin, & Cox 2012; Conrey, Sherman, Gawronski, Hugenberg, & Groom 2005).
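The logic of "controlling for" these other processes can be illustrated with simulated data. The sketch below is purely hypothetical (the variable names, effect sizes, and data are all made up, and it is not the statistical model used in the cited studies): it constructs a toy dataset in which an argument-like intervention changes IAT scores only by raising participants' motivation, and shows that the intervention's apparent effect disappears once motivation is included as a covariate in an ordinary least-squares regression.

```python
# Toy illustration of controlling for a mediating variable: simulated
# data in which an intervention changes IAT scores only via motivation
# to respond without prejudice. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
intervention = rng.integers(0, 2, n).astype(float)      # 0 = control, 1 = argument
motivation = 2.0 * intervention + rng.normal(0, 1, n)   # intervention raises motivation
iat_change = 1.5 * motivation + rng.normal(0, 1, n)     # motivation drives IAT change

def ols(y, *predictors):
    """Return OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Without the control, the intervention looks directly effective...
b_naive = ols(iat_change, intervention)[1]
# ...but controlling for motivation, its direct effect vanishes.
b_controlled = ols(iat_change, intervention, motivation)[1]
print(round(b_naive, 2), round(b_controlled, 2))
```

The naive coefficient on the intervention is large, while the coefficient with motivation included hovers near zero: the pattern one would expect if, as the cited studies suggest, the intervention works through motivation rather than through changed associations.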


Conrey, F. R., Sherman, J. W., Gawronski, B., Hugenberg, K., & Groom, C. J. (2005). Separating Multiple Processes in Implicit Social Cognition: The Quad Model of Implicit Task Performance. Journal of Personality and Social Psychology, 89(4), 469–487.

Devine, P. G., Forscher, P. S., Austin, A. J., & Cox, W. T. L. (2012). Long-term reduction in implicit race bias: A prejudice habit-breaking intervention. Journal of Experimental Social Psychology, 48(6), 1267–1278.

Huebner, B. (2016). Implicit Bias, Reinforcement Learning, and Scaffolded Moral Cognition. In Implicit Bias and Philosophy, Vol 1.

Mandelbaum, E. (2016). Attitude, Inference, Association: On the Propositional Structure of Implicit Bias. Noûs, 50(3), 629–658.

Plant, E. A., & Devine, P. G. (1998). Internal and external motivation to respond without prejudice. Journal of Personality and Social Psychology, 75(3), 811–832.

Published by

Nick Byrd

Nick is a cognitive scientist at Florida State University studying reasoning, wellbeing, and willpower.