Statistically illiterate….

By Siouxsie Wiles 19/08/2012


‘Statistically illiterate’ is how scientist and blogger Stephen Curry describes those who support impact factors, used to rank scientific journals. The impact factor, calculated annually, reflects the mean number of citations to articles published in a journal in the two preceding years.
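For anyone hazy on the arithmetic, here's a rough sketch of that calculation in Python, with invented numbers (the real counts come from Thomson Reuters' citation database):

    # Sketch of a 2011 impact factor calculation - the numbers here are made up for illustration
    citations_in_2011_to_2009_2010_papers = 1200   # citations received in 2011 by items published in 2009-2010
    citable_items_published_2009_2010 = 400        # articles and reviews the journal published in 2009-2010
    impact_factor_2011 = citations_in_2011_to_2009_2010_papers / float(citable_items_published_2009_2010)
    print("2011 impact factor: %.1f" % impact_factor_2011)   # 3.0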

He has a point. As Stephen describes:

….typically only 15% of the papers in a journal account for half the total citations. Therefore only this minority of the articles has more than the average number of citations denoted by the journal impact factor. Take a moment to think about what that means: the vast majority of the journal’s papers — fully 85% — have fewer citations than the average. The impact factor is a statistically indefensible indicator of journal performance; it flatters to deceive, distributing credit that has been earned by only a small fraction of its published papers.
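To see why a heavily skewed citation distribution makes the mean so misleading, here's a toy simulation in Python – the distribution and the numbers are invented, not real journal data:

    import random

    # Toy example: a heavy-tailed 'citation count' per paper, so a handful of papers
    # collect most of the citations (invented data, for illustration only)
    random.seed(1)
    citations = [int(random.paretovariate(1.5)) - 1 for _ in range(1000)]

    mean = sum(citations) / float(len(citations))
    below_mean = sum(c < mean for c in citations) / float(len(citations))

    print("mean citations per paper: %.1f" % mean)
    print("papers cited less than the mean: %.0f%%" % (100 * below_mean))

Because a few heavily cited papers drag the average up, most papers in the simulated ‘journal’ sit below the mean – which is exactly Stephen’s point.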

In light of this, Stephen has issued a call to arms of sorts, suggesting a smear campaign to embarrass those who trumpet impact factors. So, he says, you can consider yourself statistically illiterate if you:

– include journal impact factors in the list of publications in your CV
– are judging grant or promotion applications and find yourself scanning the applicant’s publications, checking off the impact factors
– publish a journal that trumpets its impact factor in adverts or emails
– see someone else using impact factors and make no attempt at correction

I’m wondering if we need a boycott like the one started against Elsevier?!

Oh, and speaking of statistical illiteracy, the terrible use of statistics by biologists has long irritated me, so a friend and I have been working on a smartphone app to help with this. It’s in the final stages of development, so watch this space!


5 Responses to “Statistically illiterate….”

  • The nonsense of impact factors is being extended to grading university researchers with a view to sacking or demoting them.

    At least one such worker has been sacked for complaining about the policy, presumably on the grounds of bringing the statistically illiterate into disrepute.

    http://www.dcscience.net/?p=5388

  • Absolutely

The JIF was invented by Eugene Garfield, the founder of the Institute for Scientific Information (they sell publication databases and are now part of Thomson Reuters), to identify target markets for their citation index. A high JIF indicates that a group of authors tend to publish in the same journal and cite each other a lot – and therefore that ISI should include that journal in their citation database if they want the widest possible market for their service.

The JIF of a journal is the average number of citations received per paper published in that journal during the two preceding years. It has nothing to do with the quality of the paper or the quality of the journal.

JIF is also very strongly correlated with discipline. Journals from fields like biology and medicine, which involve a lot of meta-analysis or the gathering of observational data from many papers, have a high JIF. Maths journals have very long-lived papers (much older than the two-year window) with fewer references, and therefore have a low JIF.

    Absolutely we should get rid of the JIF. It is distorting publication practices and personal assessments of scientists all over the world.

What do you expect? We now live in a society supposedly better educated than ever before, where the general population can be persuaded that time-specific data on an individual (National Standards results) can be aggregated to provide meaningful information on the longitudinal performance of a corporate body (a school).

If the government (educated or otherwise) can be seen to treat statistics with contempt, what hope for the rest of the population, professionals included?

So, putting impact factors aside, what is the best way to rate journals, or indeed, is there a need to rate journals at all?

Playing devil’s advocate, however: if you have a journal with an impact factor of 5 and one with an impact factor of 2, then surely this demonstrates that, overall, the one with the higher impact factor gains more citations. Even if most of these citations relate to only 15% of the papers, it still demonstrates that this journal is selecting papers which are of more interest to other researchers.
    The problem arises when papers end up in journals because of the “old boys’ network” and not because of the quality of the research.

I think the bigger problem with impact factors is that they form a positive feedback system – a couple of good papers by prominent authors could start a feedback loop that inflates the impact factor. If this were the business world, an editor would probably “buy” the papers of a few prominent authors to boost the impact factor.
