Most of us know anecdotally that print media on occasion present immunisation information incorrectly, but without hard numbers it is difficult to pin down how often, and when.
A recent research article examining New Zealand newspapers puts numbers to the errors.
Helen Petousis-Harris led a team that surveyed immunisation statements in articles printed in four national New Zealand newspapers over six years (2002-2007).
They found that the proportion of unsubstantiated statements about the meningococcal immunisation programme increased as public interest increased, and fell again as interest waned.
Overall they attribute this to a reliance on ‘he said, she said’ journalism, with the fallacies this brings, writing (pages 525-6):
[…] For new vaccines, prior to introduction, the coverage is less emotive and more scientifically substantiated, focusing on the potential of the vaccine to prevent disease. […]
Following the routine use of a vaccine the proportion of unsubstantiated reporting increases, including fallacious arguments. These arguments are used by both supportive and anti-immunisation spokespeople, although more commonly by those opposed to immunisation. Most arguments proposed by anti-immunisation lobbyists have been demonstrated to be fallacious.4,20-22 Apparent here is the failure of the press at times to verify both the material and the credibility of their sources. […]
Most media ‘balance’ given to immunisation relies on ‘he said, she said’ arguments using quotes from opposing spokespersons with a failure to verify the scientific validity of both the material and the source.
It’s tempting to think that what is happening is that as public interest increases, some journalists and editors increasingly ‘present the other side’ (which is not necessarily an appropriate thing to do) rather than more thoroughly assessing and critiquing the arguments presented by those interviewed.
A problem with ‘he said, she said’ is that in matters of fact it is not people’s opinions that matter, but the strength of the evidence backing their statements.
Considerable space in the article is devoted to discussing errors of fact and logic, concisely introducing the errors examined.
I feel it’s a pity that this article is not open-access. The discussion deserves to be more widely read by journalists, editors and the wider sphere of science writers. (Including science bloggers.)
In addition to introducing the types of errors, examples are given, with sources cited. These ‘worked examples’ would be an excellent aid for those writing the articles to gain a ‘feel’ for what to be alert to.
Of the 360 articles with vaccine-related information, they observed that 21% (76 articles) contained ’some factually unsubstantiated information about vaccine efficacy or safety.’
Articles about new vaccines, or vaccines not in routine use, were observed to be accurate, rare cases excepted. By contrast, one-third of articles about the MMR vaccine and one-quarter of articles about the New Zealand-specific meningococcal B vaccine contained erroneous information.*
Their survey is based on a database of articles derived from a commercial media watch service containing words associated with vaccines and vaccine-preventable disease.
They consider both errors of fact and errors of logic under a taxonomy of errors: errors are identified, marked,** and the number of lines in each category counted for each article. Statements both for and against vaccination are assessed; what is scored is the accuracy of the statement, not its viewpoint.
About the cartoon
Perhaps I shouldn’t pollute a serious article with a cartoon like that, but I like a little fun… This joke runs deeper than the surface: each of the twitter accounts in the cartoon can be found on twitter. (Yes, I had a hunch and just had to check for myself.) hanneloreEC’s next tweet after the one in the cartoon reads ‘I’ve got 500 surgical masks, twenty gallons of hand sanitizer, and a flare gun. As ready for swine flu as I’ll ever be.’ Now that’s being prepared.
If you want to check these twitter accounts for yourself, enter http://twitter.com/#!/ followed by the twitter name (skeeve37, hanneloreec, etc.).
* I don’t know if these differences are statistically significant.
** You might argue that they ought to have counted the number of statements, rather than the number of lines, in scoring the articles. It seems to me that this would depend on your objective. If the idea is to assess how much ‘space’ is spent in different states (correct, or fallacious in various ways) in an article, then counting lines seems reasonable, assuming all lines are of the same length, as they typically are in print newspaper articles. This would capture whether the writer spent more space emphasising a particular aspect compared to others, something counting the number of statements might overlook.
Petousis-Harris, H., Goodyear-Smith, F., Kameshwar, K., & Turner, N. (2010). Fact or fallacy? Immunisation arguments in the New Zealand print media. Australian and New Zealand Journal of Public Health, 34(5), 521-526. DOI: 10.1111/j.1753-6405.2010.00601.x