I recently read a paper on confirmation bias: the tendency to focus on information that corroborates our theories and to dismiss information that contradicts them. It explained a general problem that I had noticed with many scientific studies (particularly in the field of software development) but had never been able to pin down.
The problem was neatly stated in a ChaosEngine forum post looking for an explanation of a strange behaviour common among experienced game programmers:
they have an opinion on every facet of the game, they have probably worked on every part of it too, and they just seem to know more than anyone else. Yet there are just some things that are clearly ‘wrong’ and it doesn’t take an expert to spot it, but they will be adamant that their opinion or choice is the correct one.
I suspected that I recognised this problem in game development through familiarity, rather than it being a trait peculiar to experienced game programmers. I had previously researched anchoring, a specific form of confirmation bias more usually associated with estimation, for an article I wrote on the USP. Retracing that research led me to the paper on confirmation bias.
The paper is a thorough treatment of the various forms of the bias that have been identified by experiment over the years. It concludes by suggesting that the bias is not always a problem, since there are situations in which it is advantageous. Should we want to counter its effects, however, the evidence points to some behaviours that might help and some that won't.
Many behaviours that seem like they should help don't. We might imagine that by trying not to favour our own opinion on a subject we could protect ourselves from bias. Unfortunately, the evidence suggests that bias creeps into even the most tentatively held hypothesis. Doing wider research is also unlikely to help, since it is our weighing of the evidence that is in question.
Ironically, the behaviours most likely to help are those often associated with weakness. Holding multiple competing theories simultaneously might be considered woolly-minded, but it has been shown to reduce bias. Abandoning a theory on finding contradictory evidence might be considered defeatist, but adjusting the theory or otherwise explaining away the evidence is a classic symptom of confirmation bias at work.
For a while now I have been checking up on the practices expounded by our industry gurus. In every case I have examined so far, the positive conclusions have rested on flimsy evidence. I had hoped this was just bad luck, or a symptom of the relative immaturity of the field. But reading this paper has somewhat shaken my belief in the scientific method in general.