A measure of science

By Shaun Hendy 26/09/2009

As a theoretical physicist and applied mathematician, I’m interested in using numbers to describe all sorts of phenomena. And as a researcher in the MacDiarmid Institute, I’m also interested in innovation. So for me, it’s natural to try to study innovation quantitatively. One of the goals of this blog will be to look at science innovation using tools developed to study complex systems, drawing on quantitative data sources and statistics.

What is already out there? In a New Zealand context, MoRST publishes an RS&T scorecard and also commissions a national bibliometric report from time to time – there is one due out this year. The Ministry of Education also recently published a bibliometric analysis of the universities in order to assess the impact of the Performance-Based Research Fund. Similarly, the Marsden Fund commissioned a bibliometric study to look at the impact of scientific papers produced as a result of its funding.

A bibliometric study counts or analyses the scientific journal articles that scientists are publishing. It can give information on the subject areas scientists are working in, and can provide an assessment of the impact that those scientists are having in their field. One way to assess impact is through citations – i.e. looking at where and how often a particular journal article is being referenced by later journal articles. The value of bibliometric studies is controversial, particularly when they are used to rank individual scientists who are competing for funding. Nonetheless, as journals are the most important forum for communication of scientific ideas and results, bibliometrics is here to stay.
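To make the idea of citation-based impact concrete, here is a toy sketch (the citation counts are invented, and real bibliometric studies are far more sophisticated) that computes a total citation count and an h-index – the largest h such that a researcher has h papers with at least h citations each – from a list of per-paper citation counts:

```python
# Invented citation data for one researcher: citations received by each paper.
citations = [42, 17, 9, 6, 5, 3, 1, 0]

total_citations = sum(citations)

def h_index(cites):
    """Largest h such that there are h papers with >= h citations each."""
    ranked = sorted(cites, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(total_citations)      # 83
print(h_index(citations))   # 5
```

Even this toy example hints at why such metrics are controversial: a single heavily cited paper and a long tail of uncited ones can produce the same summary number as a steady record of moderately cited work.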

Another measure that is frequently used is the number of patents produced by a country. Patents are principally produced by researchers in the private sector, so they complement scientific publications, which are mainly authored by researchers in the public sector. MoRST’s RS&T scorecard has some interesting information on the patents produced by New Zealand. Further information can be obtained from national patent offices: New Zealand’s Intellectual Property Office has a searchable database of patents. Counting patents has its drawbacks too: the value of an individual patent is difficult to assess.

The OECD is another organisation that monitors scientific performance. Its studies are interesting because they put New Zealand’s scientific output into an international context. The OECD reviewed the New Zealand innovation system in 2007 — in this document you will find a large amount of financial data: business expenditure on research and development, dollars spent on basic versus targeted research, and so on. In fact, much of the quantitative discussion on innovation focuses on the dollars spent.

Another piece of the puzzle is provided by Statistics New Zealand, which publishes a report every two years on the number of scientists and researchers in New Zealand. The Ministry of Education also tracks the number and subject areas of advanced degrees (such as PhDs) granted at universities in New Zealand.

What will I add to these sources? There is a wealth of data available, but it is held in diverse locations. One thing I’ll try to do is pull some of this information together. For example, I’ll look at how the number of scientific papers and dollars per researcher has changed over the past 20 years. I’ll also try to use new tools for looking at the data, particularly some of the methods that have been developed recently for studying complex systems. New Zealand’s innovation system is, if nothing else, complex.

2 Responses to “A measure of science”

  • One other complication with using publications as metrics is interdisciplinarity. The nature of research and publishing today means that impact factors for the biological sciences are much higher than for the earth sciences, for example. This bias is also reflected in the high-profile journals Nature, Science and PNAS. Yet even within a discipline, the number of pubs or citations can reflect repetition of work and preferential citation of colleagues, rather than the significance of novel research.

    On top of all that lie the challenges of measuring non-traditional science communication: popular science articles, interviews, gray literature, and… brace yourselves… blogging.

  • Yes, I agree – sadly, mathematics has one of the lowest impact factors 🙁 . As for how to measure blogging, maybe we’ll be quoting our Google PageRank in our PBRF portfolios in 2012!
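    (For the curious: at its core, PageRank is just repeated averaging over a link graph. Here is a minimal sketch on an invented three-blog network – the blog names, links and damping factor are illustrative only, and this says nothing about Google’s actual implementation:)

    ```python
    # Minimal PageRank by power iteration on a toy link graph.
    # Blog names and link structure are invented for illustration.
    links = {
        "blog_a": ["blog_b", "blog_c"],  # blog_a links to blog_b and blog_c
        "blog_b": ["blog_c"],
        "blog_c": ["blog_a"],
    }
    d = 0.85                             # standard damping factor
    n = len(links)
    rank = {page: 1.0 / n for page in links}  # start from a uniform ranking

    for _ in range(50):                  # iterate until the ranks settle
        new_rank = {}
        for page in links:
            # Each page linking here passes on its rank, split evenly
            # among all of its outgoing links.
            incoming = sum(rank[p] / len(outs)
                           for p, outs in links.items() if page in outs)
            new_rank[page] = (1 - d) / n + d * incoming
        rank = new_rank
    ```

    On this toy graph, blog_c ends up ranked above blog_b because it collects links from both of the others – being cited by well-cited pages is what counts, not raw link tallies. Which is, of course, exactly the logic of citation analysis.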