Science Literacy Test Initial Follow-up

By Darcy Cowan 23/01/2013


Yesterday’s post about the Science Literacy test has gotten some good responses.

Thought I’d put up a couple of initial thoughts and some feedback on the testing:

First off it’s becoming clear that some of the questions are ambiguously worded. This is especially obvious in the results for questions 12 and 14.

Question 12, which asks respondents to categorize sources, is worded in such a way that it is not clear whether the question refers to the story extract itself or to the sources used within the extract. As a result, respondents incorrectly answer “Primary” (which would be correct for the sources used in the extract) when the intended answer is “Tertiary” (correct for the story extract itself, and therefore the correct answer for the question).

The other one that people are obviously getting wrong because of the wording (including myself) is question 14. This question asks what element of a study design is not a strength of the study.

This implies that you are to critique the design as it is actually presented, not as it could have been, so people choose the option that is “least wrong”. Rewording the question to make clear that it is asking about a design that could have been used but wasn’t, or that could have made the study better, or even restricting the answer options to just the study elements present in the background information, would probably bring the score for this question up.

Interestingly, there are a few questions nobody has gotten wrong, indicating they may be a little too easy (though perhaps the sample size is still too small: 45 responders so far).

The first question “Which of the following is a valid scientific argument?” has a 100% responder correctness score. As does Question 16 on the proportions of house building materials and question 20 on the rat population. Question 27 “Which of the following actions is a valid scientific course of action?” also has a 100% score.
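The caveat about sample size can be made concrete with the statistical “rule of three”: when all n respondents answer a question correctly, an approximate 95% upper bound on the true error rate is 3/n, so a perfect score on a small sample doesn’t rule out a meaningfully harder question. A minimal sketch (the respondent count of 45 is from the post; everything else is illustrative):

```python
def rule_of_three_upper_bound(n):
    """Approximate 95% upper bound on the true error rate
    when all n respondents answered correctly (rule of three)."""
    return 3 / n

# With 45 responders and zero wrong answers, the true error rate
# could still plausibly be around 6-7%.
print(rule_of_three_upper_bound(45))  # about 0.067
```

So the 100%-correct questions may well be easy, but a larger pool of responders would be needed before drawing that conclusion firmly.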

So, great stuff so far. As I mentioned, about 45 people have taken part and things are already shaping up nicely. Spread the word and let’s see how many people we can get. If possible, it would be nice to get constructive criticism on the question wording, like the points above, that can be fed back to the original test designers.

Thanks for the interest so far and keep it up!


Filed under: Sciblogs, Science Tagged: Education, Science and Society, Science Literacy


10 Responses to “Science Literacy Test Initial Follow-up”

  • I found it interesting that the questions were a little biased. It’s hard to explain (obviously, I’m rambling a bit), but I think if you know what scibloggers often blog about e.g. pharmaceutical vs. quack, climate change deniers etc. you can kind of guess which answer is going to be right. For example in question 5, the ‘journalists’ choice (b) seems like an obvious, if deserved, jab at journalists. We know you weren’t going to make that one the right answer! There were more like this.

    But… maybe that just means that reading sciblogs is increasing my scientific literacy?!

    I liked doing the quiz, and was pleased with my score. I got both of the controversially worded questions noted above wrong. So that makes my score even better if I take those two out :)

  • Interesting, Claire.
    I mean, none of the scibloggers had any input into the question wording, so you might be getting some science literacy beneath all the jabs?

    But it does raise a question I’ve been worried about, namely that the pool of test takers is going to be biased towards those who are fairly science literate to begin with, and familiar with some of the targets in the questions to boot.

    Feel free to pass it on to less well-read friends if you can, and we might see what the non-sciblogs population thinks.

    Thanks for taking part!

  • I took the test, got 28/28. But I did fudge one of the answers.

    In one question you asked whether I’d most trust an article because of where it was published, the research team, peer review, or some other option. I picked peer review. But, knowing that lags in Econ can sometimes be 2 years +, I actually often pick by reputation. So much is moving to online working papers because getting stuff out takes so freaking long… you’d be two years behind the curve if you waited for the journals.

    So while peer review is best, I do sometimes trust stuff simply because I know the person’s work. If I have to weigh up a working paper by somebody whose stuff is always solid, against a published piece by somebody known to get fragile results through specification searches, I’ll often trust the working paper.

  • Yeah, I think there might be a couple of questions like that where you know what the “right” answer is – but also know that’s not what happens :)

  • Back to the original authors?
    That wasn’t the plan; I suspect I’d need to collect some demographic data for that to be useful. I just figured it could be a fun thing, possibly available as a teaching aid.
    Feedback to the initial authors on the questions was also in my mind.

  • Ok, I’m not too worried about only getting 26 out of 28, since I agree the wording of question 12 is ambiguous … I thought we were to look at where the media had got hold of its information… and that was a primary source.

    It would be nice to know if any journalists are taking this quiz.

  • @Maggy, yeah with 12 – it’s really only obvious what the question was about when you see the answer. I’m also grateful to the people who are coming at this with a non-science background who can point out the assumed parts of the questions that you might not really notice if you are already familiar with the subject material.

    I’ve seen some quite low scores and it would be nice to get the perspective of those people.
