Given the audience of Sciblogs.co.nz, I doubt I’ll be going out on a limb by saying that science communication and outreach is important. The issue I’ve been grappling with recently is just how effective, and how important, the outreach we actually do is.
Let me set the scene. In Wellington* we have a huge number (circa 70) of different organisations that are in some way (or have been) involved in science outreach and communication programmes. They cross the spectrum from non-profits to CRIs, universities and community groups, and the list is still growing – please get in touch if you know of any omissions! Most of these operate independently from one another, most struggle daily for funding and people to continue their work, and all approach the issue of science outreach and communication in their own way.
So, put yourself in the shoes of a potential sponsor for a moment (or a volunteer with time wanting to help out). With limited money, people and time, try to decide which one(s) you should support. Part of it will come down, of course, to personal interest – for example, if you’re a Porirua-based company you might be more interested in supporting Porirua-focussed programmes. Yet once you have whittled down the list by preference, there are often still tens of programmes to choose from. If your goal (like mine) is to promote general science literacy across the board, then most of them seem like great causes worthy of your support.
But what if you dig a little deeper? How could you compare the ‘efficacy’ of something like a Royal Society sponsored lecture that attracts 200 people to a bi-weekly meeting of parents and kids akin to what is currently happening at The Clinic at Ngaio school (check it out here – it looks awesome!)?
Here’s my list of factors so far (again it’s incomplete and suggestions are encouraged):
- How many times has it occurred over the last 3 months? (n)
- How many people does it attract (on average) per session? (p)
- How much does it cost to run – to both the participants and organisers? (C)
- How long does each session take? (h)
- What is involved in each session? (is it simply a talk? Or is there creativity involved? Or hands-on demonstrations?) (Q)
- What audience is it reaching? (d)
Taking these variables (and any others you feel are relevant), how would you choose to combine them to give some ‘measure’ of a particular activity? My first instinct is to combine them as score = (n·p·h·Q)/(C·d) **.
Personally, I would take d as the average decile value of the 5 schools closest to the location of the event (as a point to start from – this is hardly a robust metric!), and Q as 1 for a talk, 2 for a hands-on demo, 3 for an actual experiment and 4 for the creation and design of something. Finally, I would estimate C on a scale from 1 to 10, with 1 being a free event to arrange and 10 being something that requires attendance fees or large degrees of sponsorship to occur. Interestingly, this particular formulation scores community efforts that encourage face-time and hands-on experimentation higher than more traditional outreach methods such as lectures (which is hardly surprising, as that’s the way I constructed it to act!).
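To make the comparison concrete, here is a minimal sketch of the score = (n·p·h·Q)/(C·d) formula in Python. The two example activities and every number plugged into them are entirely invented for illustration – they are not measurements of any real programme.

```python
# Sketch of the outreach scoring formula: score = (n*p*h*Q) / (C*d).
# All example numbers below are hypothetical, purely for illustration.

def outreach_score(n, p, h, Q, C, d):
    """n: sessions in the last 3 months, p: average attendance per session,
    h: hours per session, Q: quality rating (1 talk .. 4 design/creation),
    C: cost rating (1 free .. 10 fees/major sponsorship),
    d: average decile of the 5 nearest schools."""
    return (n * p * h * Q) / (C * d)

# Hypothetical one-off public lecture: big audience, one short talk.
lecture = outreach_score(n=1, p=200, h=1, Q=1, C=8, d=9)

# Hypothetical bi-weekly hands-on community group: small but frequent.
clinic = outreach_score(n=6, p=15, h=2, Q=4, C=2, d=5)

print(round(lecture, 1), round(clinic, 1))  # → 2.8 72.0
```

Even with made-up inputs, the shape of the formula is visible: frequency, hands-on quality and low cost dominate raw audience size.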
So would you agree? Or is this approach simply too coarse grained to have any meaningful comparative value? If so, how then would you choose what to support?
UPDATE: NZCER (NZ Council for Educational Research) has also collated a bunch of their science education research here for anyone that’s interested in NZ specific publications. They can be a bit lengthy, but many of them contain some valuable insight into how, when and where we might be best placed in applying our science outreach/education efforts for the maximum social effect.
* the majority of my work has a strong Wellington focus simply because of the amount of time I have available to dedicate to outreach. I dearly hope that getting detailed information about one location might generate some insight about how to proceed in other areas of the country at some stage in the future.
** Yes, I apologise for committing the cardinal sin of putting a formula in a blog post. It’s also laughably simplistic to expect an accurate comparison from such a simple exercise, and there are plenty of ways it could be expanded. For instance, the Q and h terms are linked and should suffer from diminishing returns. There’s also the issue of personal weighting of particular variables (my big issues are obviously demographic (d) and quality (Q)), but it may still be an interesting way to start considering this problem.
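One possible way to capture the diminishing returns on the linked h and Q terms – my own assumption, not something from the post – is to damp their product with a logarithm, so each extra hour or step up in quality adds less than the one before:

```python
# Variant of the score with diminishing returns on the linked h*Q term.
# Using log(1 + h*Q) in place of h*Q is an illustrative assumption,
# not a fitted or recommended model.
import math

def outreach_score_damped(n, p, h, Q, C, d):
    return (n * p * math.log1p(h * Q)) / (C * d)

# Because log1p is concave, going from 1 to 2 hours gains more
# than going from 2 to 3 hours, all else being equal.
gain_1_to_2 = outreach_score_damped(1, 1, 2, 1, 1, 1) - outreach_score_damped(1, 1, 1, 1, 1, 1)
gain_2_to_3 = outreach_score_damped(1, 1, 3, 1, 1, 1) - outreach_score_damped(1, 1, 2, 1, 1, 1)
print(gain_1_to_2 > gain_2_to_3)  # → True
```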