Trust science, not scientists

By Grant Jacobs 03/11/2011


Trust science, not scientists, says the title of an article written by virologist Professor Vincent Racaniello.

It’s a great line.

Vincent Racaniello was writing in the context of the continued saga over proposals of a link between the mouse XMRV virus (xenotropic murine leukemia virus-related virus) and chronic fatigue syndrome (CFS). My interest here is not in this particular saga,[1] intriguing as it might be, but in the wider issue of whom or what to trust.

He closes his article writing,

There are many lessons to be learned from XMRV, but an important one is that science progresses not from the work of a single investigator, but from the collective efforts of many laboratories. XMRV reminds us to trust science, not scientists.

One useful way to view a new research paper is as an argument for a case that has yet to be heard by a jury of its peers – the in-house peer review of the research journal notwithstanding.[2]

It’s a cautiousness and a willingness to critique rather than accept at face value: science’s community-based sanity filter. This applies to all users of science.

Scientists themselves – of course.

Advocates of a whole spectrum of positions and organisations – including some commercial advertising – commonly present just one or two papers or scientists (or doctors) as ‘evidence’ for their position. There’s an obvious flaw in that, right? That a few individuals present contrary views, or garble the science (innocently or otherwise), doesn’t make the science wrong. Readers of those advocates should also take note – that’s us consumers, you and I.

Journalists presenting each research article as definitive on its own, before it has been reviewed by its peers, can lead to a ping-pong effect, with each new report seemingly countering the last, when in practice an improved understanding is emerging as different, conflicting portions of the issue at hand are addressed. Sometimes the path isn’t very linear, nor at times very clear – that comes with hindsight – but it usually gets there over time. Reporting each research paper in isolation can also lead to media reports that are just plain wrong.

With that in mind, scientists should consider whether it is best to present their work to the public as an argument for a case yet to be confirmed, rather than as a conclusion, tempting as the latter might be.

Using recent well-known examples, science writer David Dodds tweeted that comparing how the arsenic-life and faster-than-light-neutrino work were presented was instructive – the former being presented essentially as a conclusion and the latter as a hypothesis to be confirmed.

Among other sources, I suggest readers dip into what former chemist and philosopher Janet Stemwedel has written in her Scientific American blog articles. For example, there is Drawing the line between science and pseudo-science and Evaluating scientific claims (or, do we have to take the scientist’s word for it?). If you can’t already tell from the titles, Janet writes about ethical and philosophical issues in science. I recommend her blog and writing to anyone interested in sorting the wheat from the chaff of science – or if you just like mulling things over.

Footnotes

Some time elapsed between writing this article, revisiting it, and publishing it in a briefer, revised form.

[1] But for those who are following the XMRV-CFS saga, his penultimate paragraph is a very strong statement about the lead researcher’s (Judy Mikovits’) intended future direction of investigating possible roles of gammaretroviruses in CFS:

[…] pursuing the CFS-gammaretrovirus hypothesis is a disservice to those with CFS, and detracts from efforts to solve the disease. There are no data to support such an association, and to suggest that a lab contaminant, XMRV, has pointed the way to a bona fide etiologic agent seems implausible.

For those not used to formal scientific discourse, this is typically polite but firm.

CFS is also known (by some) as myalgic encephalomyelitis (ME).

[2] In-house peer review cannot realistically hope to address all possible criticism; that comes after the paper is published, in the form of further research papers, review papers, and so on.


Other articles on Code for Life:

When the abstract or conclusions aren’t accurate or enough

Monkey business, or is my uncle also my Dad?

Conspiring against science

Three kinds of knowledge about science and journalism

Reproducible research and computational biology

Why (some) people don’t trust science


6 Responses to “Trust science, not scientists”

  • So how do you square that Racaniello with this one?

    “In my view the CDC paper should not have been published without a proper positive control, eg patient samples known to contain XMRV. If I had reviewed the CDC paper that’s what I would have asked for.” Professor Racaniello

    http://www.forums.aboutmecfs.org/content.php?187-Dr-Mikovits-and-Dr-Racaniello-on-XMRV

    That Racaniello agrees that all papers using unvalidated assays should be pulled. That will include the ViPDx assays used in the blood working group. Yes, Lombardi conducted that study and not Mikovits at the WPI.

    This is of course ignoring the proven issues with that study. The controls could not have been declared negative as PBMCs were not sent out. Some collection tubes were in the CDC lab with the 22Rv1 used to spike analytical controls (don’t confuse that with diagnostic validation), and Lo’s team used the wrong assay from Lo et al., not the one that worked. Is there much point continuing?

    So the Lipkin study should be cancelled. Neither the CDC, Lo or WPI/Lombardi labs have validated assays. VP62/XMRV has now been shown to have no relationship to the viruses discovered in Lombardi et al. and confirmed in Lo et al.

  • Nice post Grant… my wife (not a scientist) gets very frustrated with “ping-pong.” E.g. she wants to know whether it’s good to take aspirin as a preventative measure or not… one week it will prevent X, the next, make Y more likely… The frustration is with science as well as scientists, and this is built, I think, on the great success of science having resulted in high expectations of it giving more and more definitive answers. Whilst the point is taken about reporting an “argument yet to be confirmed,” unfortunately the cat is out of the bag, and such reports are less likely to be picked up in the broad media than “conclusions.” The science community is not without guilt in this context – vis-à-vis the lack of negative trial reporting.

  • kiwiski,

    I’m sorry – I’m confused as to your point. You say the frustration is with science as well, but then go back to talking about individual results, to my reading.

    The science community reports negative trials, as scientific publications; the media may choose not to cover them, however – the newspaper editor (TV producer, etc.) might consider them of less interest, for example. The science community doesn’t make that call.

    From the point of view of the science community it’s ‘reporting the results of a trial’ – that’s not about whether they were negative or positive, just that you did a trial and these were the outcomes.

    When you’re funded to do a trial you definitely want to publish regardless of the outcome. A scientist’s career needs publications and trials are big efforts. (I’m writing about scientists at universities or research institutes.)

    You can quibble that negative results don’t get the same profile in the research literature. In my experience (and generalising), the size [read: ‘strength’] of the trial and how important the treatment being trialled is factor into how likely a high-profile publication will result (i.e. regardless of whether the outcomes are positive for the treatment or not).

    Regardless of this, a point here is that researchers who specialise in the issue will see the lower-profile publications, but the media may not necessarily. I think people outside of science don’t appreciate just how big the scientific literature is; the media report on a tiny fraction of it.

  • Jacob,

    Excuse the slow reply. My article is about the wider implications of the phrase Racaniello headlined his article with, not the specifics of the XMRV-CFS saga. Wouldn’t the thing to do be to just ask Racaniello? I can’t speak for him, after all.
