Wednesday, January 11, 2017

Scientific Method (Let evidence drive the conclusions) vs. Ideologue (Let the conclusions drive the evidence)

The Implicit Association Test (IAT) burst onto the scene in the late nineties and was a godsend to those who wished to believe that America is an institutionally racist society. Gramsci's cultural hegemony machine of media, elite universities, and entertainment has seized on the IAT as a strong pillar supporting its Critical Theory, Critical Race Theory, Gender Theory, Postmodernist, and reform-Marxist positions.

The IAT has, in the common discourse and especially among journalists, become explicitly associated with discrimination and bigotry, but the evidence against it has been building for a number of years. In the past couple of years I have noticed more and more academics being careful, when interviewed by NPR journalists, to distance themselves from the journalists' characterization of the IAT as a measure of bigotry and discrimination.

There have long been major questions about the IAT. Does it really measure implicit bigotry, or does it measure lived experience? Do IAT scores predict discriminatory behavior? If they do, what is the effect size? The evidence for these necessary conditions has always been weak, and it has just taken a further significant hit from A Meta-Analysis of Change in Implicit Bias by Patrick Forscher, Calvin K. Lai, Jordan R. Axt, and Brian Nosek. While I am leery of meta-analyses, I attach some significance to the findings because one of the authors (Nosek) was also one of the three co-developers of the Implicit Association Test.

The new meta-analysis is reported in Can We Really Measure Implicit Bias? Maybe Not by Tom Bartlett.
But the link between unconscious bias, as measured by the test, and biased behavior has long been debated among scholars, and a new analysis casts doubt on the supposed connection.

Researchers from the University of Wisconsin at Madison, Harvard, and the University of Virginia examined 499 studies over 20 years involving 80,859 participants that used the IAT and other, similar measures. They discovered two things: One is that the correlation between implicit bias and discriminatory behavior appears weaker than previously thought. They also conclude that there is very little evidence that changes in implicit bias have anything to do with changes in a person’s behavior. These findings, they write, "produce a challenge for this area of research."

That’s putting it mildly. "When you actually look at the evidence we collected, there’s not necessarily strong evidence for the conclusions people have drawn," says Patrick Forscher, a co-author of the paper, which is currently under review at Psychological Bulletin. The finding that changes in implicit bias don’t lead to changes in behavior, Forscher says, "should be stunning."
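For readers curious about the mechanics behind a finding like this, here is a minimal sketch of how per-study correlations between an implicit measure and a behavioral outcome are typically pooled in a meta-analysis. The (r, n) pairs are made-up illustrative values, and this is not the authors' actual procedure, which is more elaborate.

    # Sketch: pool study-level correlations via the Fisher z-transform,
    # weighting each study by (n - 3), the approximate inverse variance of z.
    # The (r, n) pairs below are hypothetical, not values from the paper.
    import math

    studies = [(0.24, 120), (0.05, 340), (0.15, 90), (-0.02, 210)]

    def pooled_correlation(studies):
        num, den = 0.0, 0.0
        for r, n in studies:
            z = math.atanh(r)      # Fisher z-transform of the correlation
            w = n - 3              # approximate inverse-variance weight
            num += w * z
            den += w
        return math.tanh(num / den)  # back-transform the weighted mean z

    print(f"Pooled r ≈ {pooled_correlation(studies):.3f}")
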
The article then goes on to report the back-and-forth between one of the main IAT debunkers and the other two developers of the test. Among the key issues are replicability and predictive accuracy.
He [Blanton] drew a graph illustrating how high IQ scores tend to predict achievement, a claim backed up by reams of data. In contrast, the IAT — a sort of IQ test for bias — doesn’t reveal whether a person will tend to act in a biased manner, nor are the scores on the test consistent over time. It’s possible to be labeled "moderately biased" on your first test and "slightly biased" on the next. And even within those categories the numbers fluctuate in a way that, Blanton contends, undermines the test’s value. "The IAT isn’t even predicting the IAT two weeks later," Blanton says. "How can a test predict behavior if it can’t even predict itself?"
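Blanton's point about the test not predicting itself has a textbook basis: under classical test theory, an observed test-criterion correlation is attenuated by the reliabilities of both measures, so low test-retest reliability caps how strongly any measure can correlate with behavior. A rough sketch, using hypothetical reliability values rather than published IAT figures:

    # Classical test theory: r_observed = r_true * sqrt(rel_test * rel_criterion).
    # A measure that correlates weakly with itself over time therefore cannot
    # correlate strongly with anything else. All numbers below are hypothetical.
    import math

    def max_observed_validity(test_retest_reliability,
                              criterion_reliability=1.0,
                              true_correlation=1.0):
        """Upper bound on the observed test-criterion correlation."""
        return true_correlation * math.sqrt(
            test_retest_reliability * criterion_reliability)

    for rel in (0.9, 0.7, 0.5):  # hypothetical test-retest reliabilities
        print(f"test-retest r = {rel:.1f} -> observed validity at most "
              f"{max_observed_validity(rel):.2f}")
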
Given the following, it almost seems like much ado about nothing. Or much ado about a research gravy train that those with a vested interest don't want to stop.
What’s striking, though, is how, in some respects, their conclusions about the IAT don’t seem all that far apart. Greenwald [one of the three developers and a defender of IAT] acknowledges that a person’s score can vary significantly, depending on when the test is taken, and he doesn’t think it’s reliable enough to be used to, say, select bias-free juries. "We do not regard the IAT as diagnosing something that inevitably results in racist or prejudicial behavior," he says.

Everyone agrees that the statistical effect linking bias to behavior is slight. They only disagree about how slight. Blanton’s 2013 meta-analysis found less of a link than a 2009 meta-analysis by Banaji and Greenwald. Blanton sees the correlation as so small as to be trivial. Banaji and Greenwald, in a 2015 paper, argue that "statistically small effects" can have "societally large effects."

The new analysis seems to bolster Blanton’s less-sanguine take. It found that the correlation between implicit bias and behavior was even smaller than what Blanton had reported. That came as a surprise, the researchers write.
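The disagreement over "how slight" can be made concrete. The sketch below uses a purely hypothetical correlation of 0.10 (not a figure from any of the meta-analyses) to show both framings: variance explained, which looks trivial, and Rosenthal and Rubin's binomial effect size display, which is the kind of reasoning behind the "societally large effects" argument.

    # Two readings of the same hypothetical correlation.
    # Blanton-style reading: r^2, the share of variance in behavior explained.
    # Greenwald/Banaji-style reading: a small r still shifts outcome rates
    # (binomial effect size display), which adds up over many decisions.
    r = 0.10                   # hypothetical IAT-behavior correlation
    n_decisions = 1_000_000    # hypothetical number of decisions society-wide

    variance_explained = r ** 2
    high_rate, low_rate = 0.5 + r / 2, 0.5 - r / 2
    extra_outcomes = (high_rate - low_rate) * n_decisions

    print(f"variance explained: {variance_explained:.1%}")
    print(f"BESD outcome rates: {high_rate:.0%} vs {low_rate:.0%}")
    print(f"difference across {n_decisions:,} decisions: {extra_outcomes:,.0f}")
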
So, the IAT varies widely for an individual over time, it is not usable for practical purposes, it is not diagnostic of anything, it is not predictive of racist behavior, and independent reviewers have found at best a trivially small statistical linkage to behavior. Other than that, it is worth talking about (/sarc).

I like this final quote about bias from the IAT critic, Blanton: "It is such an important problem that it deserves a stronger science," he says.
