In a post a few weeks ago, I looked at the total published output of the CRIs since 1993. Now I want to look at citations to CRI papers, using two citation measures. The first is the two-year impact factor, a measure often used to rank journals. The impact factor of a CRI in 2008, for example, is the average number of citations received in 2008 by papers published by authors at that CRI in 2006 and 2007. The second measure is the five-year impact factor: the 2008 five-year impact factor is the average number of citations received in 2008 by papers published between 2003 and 2007.
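To make the two definitions concrete, here is a minimal sketch of the calculation in Python. The paper records are invented purely for illustration (they are not the Thomson Reuters data used below), and the function name and data layout are my own.

```python
# Each paper is (publication_year, {citing_year: citation_count}).
# These records are made up for illustration only.
papers = [
    (2006, {2007: 3, 2008: 5}),
    (2007, {2008: 2}),
    (2003, {2008: 1}),
    (2005, {2008: 4}),
]

def impact_factor(papers, year, window):
    """Mean citations received in `year` by papers published in the
    preceding `window` years (window=2 gives the classic impact factor,
    window=5 the five-year variant)."""
    counts = [cites.get(year, 0) for pub_year, cites in papers
              if year - window <= pub_year <= year - 1]
    return sum(counts) / len(counts) if counts else 0.0

two_year = impact_factor(papers, 2008, 2)   # averages over 2006-2007 papers
five_year = impact_factor(papers, 2008, 5)  # averages over 2003-2007 papers
```

With the toy data above, the two 2008 papers from 2006–2007 received 5 and 2 citations (two-year impact factor 3.5), while all four papers from 2003–2007 received 5, 2, 1 and 4 (five-year impact factor 3.0).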
Now, the analysis I am going to give below is somewhat naive. I should really be breaking down the citations by subject area (as pointed out by Crikey Creek’s Daniel Collins in a comment last year). This is important because citation rates differ considerably between disciplines; unfortunately I haven’t had the time to do this, except in a few special cases such as my own Institute. Thus, differences in impact factor between Institutes will partly depend on the areas in which they work, and changes in those differences over time may reflect shifts in focus within Institutes rather than changes in the impact of the research conducted.
Why do citation rates differ between disciplines? At least part of the difference comes from the degree of empiricism within a discipline. Medical science frequently makes use of the aggregation of meta-data from many studies, some of which may be too small to have statistical significance on their own. So if your small study suggests that smoking is a risk factor for diabetes, it will be important to cite as many other studies of smoking and diabetes as possible to give your reader context. Mathematics, on the other hand, relies on mathematical proof. To prove the Riemann hypothesis, you may only need to cite a handful of papers that contain results you rely on in your proof. You hardly need to cite every paper on the Riemann hypothesis that has appeared in print. Not surprisingly, journals in medical science typically have much higher impact factors than mathematics journals.
On to the results. First, I have plotted the CRI (two-year) impact factor from 1995 to 2008 (on the right) against the New Zealand impact factor as calculated from the Thomson Reuters database. We note that both data series show large increases over this period. However, in 1995 the CRIs trail New Zealand as a whole, whereas by 2008 the CRIs lead New Zealand. The data is sufficiently noisy, though, that one can’t assert with much confidence that the CRIs are significantly different from the rest of the country.
With the five-year impact factor, however, the trend seems clearer: the five-year impact factor of the CRIs is below that of New Zealand as a whole at the end of the 1990s, but by the mid-2000s it surpasses that of the rest of the country. In other words, CRI citations per paper have grown faster than those of New Zealand as a whole. As I mentioned above, there could be a number of explanations for this. For example, I wonder if it could reflect a diversification of research activities at universities, where disciplines with lower impact factors have started publishing more, perhaps as a result of the Performance Based Research Fund.
Unfortunately, without breaking down citations by discipline we can’t really tell whether this reflects a genuine increase in relative impact by CRI researchers. The data does suggest, however, that such a breakdown would be a worthwhile exercise: why has CRI impact surpassed that of the rest of New Zealand in the last decade?