A "Formula" to Account for Resistance to Scientific Consensus

By Michael Edmonds 14/04/2015


In some areas of science there is very little resistance to the scientific consensus. Very few people will challenge the consensus that water flows downhill because of gravity, or that objects are different colours because they absorb/reflect different wavelengths of light. However, in other areas of science — for example, climate change, alternative health treatments, and immunisation — the corresponding consensus in the scientific community meets more resistance. Lately, I’ve been thinking about what factors make this difference, and have come up with the following “formula” as an explanation:

R is proportional to C x A x (NS/S)

where

R = resistance to scientific consensus

C = complexity of the system involved

A = how negatively the individual is affected if the consensus is accepted

NS = exposure to information not supporting the scientific consensus

S = exposure to information supporting the scientific consensus
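To make the idea concrete, here is a minimal sketch of the formula in Python. All the inputs are hypothetical, unitless scores I've invented for illustration — the post doesn't specify any scale — so the result is only meaningful for comparing scenarios against each other, not as an absolute value.

```python
def resistance(complexity, negative_impact, exposure_against, exposure_for):
    """Sketch of the post's formula: R is proportional to C x A x (NS/S).

    complexity      -- C, complexity of the system involved
    negative_impact -- A, how negatively the individual is affected
    exposure_against-- NS, exposure to information not supporting the consensus
    exposure_for    -- S, exposure to information supporting the consensus
    """
    if exposure_for <= 0:
        raise ValueError("exposure_for must be positive: S is a denominator")
    return complexity * negative_impact * (exposure_against / exposure_for)

# Two made-up scenarios on an arbitrary 1-10 scale:
# "water flows downhill" -- simple system, no personal cost, little contrary information
gravity = resistance(complexity=1, negative_impact=1, exposure_against=1, exposure_for=10)
# a contested topic -- complex system, perceived personal cost, lots of contrary information
contested = resistance(complexity=8, negative_impact=7, exposure_against=5, exposure_for=5)
assert contested > gravity
```

Note that because the terms are multiplied, a near-zero value for any one factor (e.g. no perceived personal cost) drives predicted resistance toward zero regardless of the others — which may itself be the kind of flaw worth discussing.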


Thoughts? I think I already see at least one flaw in this formulaic approach, but I’d be interested to see what others think. Are there other factors I’ve missed?


3 Responses to “A "Formula" to Account for Resistance to Scientific Consensus”

  • Nice Michael.
    Having published several papers on flaws in the first consensus definition of a disease, I think we need a formula which grades the strength of the consensus. Consensus Strength = f x (NpT/NnT) x (NQSf/NQSa) where NpT is the number of positive trials (experiments), NnT the number of negative trials, NQSf number of qualified (their area) scientists for and NQSa against. f is the fudge factor because Einstein had one.

  • I wonder if you need to factor in what Terry Pratchett would call the narrativium of the alternative thesis vs the consensus thesis. It’s part of the C of the equation. If C can be simplified into a compelling, accessible story that fits with people’s preconceptions, it’s more likely to be accepted. If it’s hard to explain, or uncertain, or novel, it’s more likely to be rejected. Exposure to NS or S isn’t just a matter of encountering it, it’s whether the person accepts it and integrates it into their picture of the world, or rejects it as lies.

  • I don’t think complexity, negative consequences, and exposure to information are enough to fully explain it. My first thought is that negative consequences isn’t quite the right term, as people act on what they *perceive* would be the negative consequences. In the case of vaccines, for example, these perceptions are often incorrect.

    I also think you need to account for whether or not the view being rejected opposes an existing view. For multiple reasons, it’s easier to accept something you already agree with or are sympathetic towards. It will probably have a higher prior probability in your eyes, and we’re all inherently more motivated to seek out information we already agree with and to critically scrutinise information we disagree with more heavily. These things are hard to quantify though, even if it’s just in an imaginary way 🙂