
“Trust science, not scientists,” says the title of an article written by virologist Professor Vincent Racaniello.

It’s a great line.

Vincent Racaniello was writing in the context of the continued saga over proposals of a link between the mouse XMRV virus (xenotropic murine leukemia virus-related virus) and chronic fatigue syndrome (CFS). My interest here is not in this particular saga,[1] intriguing as it might be, but in the wider issue of whom or what you trust.

He closes his article writing,

There are many lessons to be learned from XMRV, but an important one is that science progresses not from the work of a single investigator, but from the collective efforts of many laboratories. XMRV reminds us to trust science, not scientists.

One useful way to view new research papers is as arguments for a case that has yet to be heard by a jury of their peers – the in-house peer review of the research journal notwithstanding.[2]

It’s a cautiousness and a willingness to critique, rather than accept at face value – science’s community-based sanity filter. This applies to all users of science.

Scientists themselves – of course.

Advocates of a whole spectrum of positions and organisations – including some commercial advertising – commonly present just one or two papers or scientists (or doctors) as ‘evidence’ for their position. There’s an obvious flaw in that, right? That a few individuals present contrary views, or garble the science (innocently or otherwise), doesn’t make the science wrong. Readers of those advocates should also take note – that’s us consumers, you and I.

Journalists presenting each research article as definitive on its own, before it is reviewed by its peers, can lead to a ping-pong effect, with each new report seemingly countering the last, when in practice an improved understanding is emerging as different conflicting portions of the issue at hand are addressed. Sometimes the path isn’t very linear, nor at times very clear – that comes with hindsight – but it usually gets there over time. Reporting each research paper in isolation can also lead to media reports that are just plain wrong.

With that in mind, scientists should consider whether it is best to present their work to the public as an argument for a case yet to be confirmed, rather than as a conclusion, tempting as the latter might be.

Using recent well-known examples, science writer David Dobbs tweeted that comparing how the arsenic-life and faster-than-light neutrino work were presented was instructive – the former presented essentially as a conclusion and the latter as a hypothesis to be confirmed.

Among other sources, I suggest readers dip into what former chemist and philosopher Janet Stemwedel has written in her Scientific American blog articles. For example, there is Drawing the line between science and pseudo-science and Evaluating scientific claims (or, do we have to take the scientist’s word for it?). If you can’t already tell from the titles, Janet writes about ethical and philosophical issues in science. I recommend her blog and writing to anyone interested in sorting the wheat from the chaff of science – or if you just like mulling things over.

Footnotes

Some time elapsed between writing this article, revisiting it, and publishing it in this briefer, revised form.

1. But for those who are following the XMRV–CFS saga, his penultimate paragraph is a very strong statement about the lead researcher’s (Judy Mikovits’) intended future direction of investigating possible roles of gammaretroviruses in CFS:

[…] pursuing the CFS-gammaretrovirus hypothesis is a disservice to those with CFS, and detracts from efforts to solve the disease. There are no data to support such an association, and to suggest that a lab contaminant, XMRV, has pointed the way to a bona fide etiologic agent seems implausible.

For those not used to formal scientific discourse, this is typically polite but firm.

CFS is also known (by some) as myalgic encephalomyelitis (ME).

2. In-house peer review cannot realistically hope to address all possible criticism; that comes after the paper is published, in the form of further research papers, review papers, and so on.


Other articles on Code for Life:

When the abstract or conclusions aren’t accurate or enough

Monkey business, or is my uncle also my Dad?

Conspiring against science

Three kinds of knowledge about science and journalism

Reproducible research and computational biology

Why (some) people don’t trust science