SciBlogs

Archive October 2012

Looking at New Zealand’s IT professionals aimee whitcroft Oct 25

No Comments

Are you one of the many denizens of the industry that is ‘IT’? How did you become one?  What do your loved ones think of it?  What sort of hours do you work? Etc etc etc

The Institute of IT Professionals, IITP, has constructed a survey which, it says, is “the most comprehensive research project ever completed on the New Zealand IT profession!”. Of course, it’s looking for people to actually take the survey.

It takes about 20 minutes, and they’ve sweetened the pot with a couple of ThinkGeek vouchers*. Why is it being done, you ask? Well, the IITP hopes the results will help it better serve its members, represent their interests and opinions** and advise government on policy and whatnot, and add to our national stats. No doubt there will be other benefits, too, as there often are from having some good (hopefully) data :)

Of course, I haven’t yet done the survey***, as I’m still trying to figure out whether, working on website content and in science comms as I do, I even _count_ as an ‘ICT Professional’…

Update: I’m _already_ embroiled in a debate/conversation about how one defines who is or is not an IT or ICT (different things) professional. What are your thoughts?

—–

* Ah, how they know us. Or some of us, at least :P

** The example of the Patents Bill is an…interesting one (the Bill has passed its second reading, and more on the subject here).

*** Which is why there are no possibly impertinent comments on the questions and survey style (from a market research background point of view) :P

Shiny sexy data centre pr0n aimee whitcroft Oct 18

No Comments

Yep, folks, you heard me right. Data centre pr0n.

I’ve had a couple of friends* send me this link today, from Google.  The page it directs one to will give you lots and lots of shiny data centre photos grouped around the tech, the people and the places. Of course, explanations of the photos are also included, and there’s hours and hours of fun here. SO MANY PIPES.

 

I also recommend the Story of Send, released earlier this year. The very cute little HTML5 adventure takes one on a tour of how Google handles your emails through its data centres and beyond.  Also featuring photos and videos and so forth. And some really, really interesting numbers, such as the amount of energy a Google search can take (see below). Yep, both sites are an ad for how cool they are, but it’s still interesting and fun, and a great way, I think, of showing how a company can communicate with people without the use of dry press releases.

Enjoy!

 

And DCs like this are particularly interesting for those of us in New Zealand, for example, where nothing even CLOSE to this in size exists, let alone lots of them…

—–

* You guys know me so well :P

Engaging early- and mid-career scientists aimee whitcroft Oct 18

1 Comment

Those involved in New Zealand’s science scene may have heard of Stratus, ‘a network of emerging and early career University of Auckland researchers’, which was launched in 2008.

Well, there’s a new kid on the block – allow me to introduce WEMCR, or Wellington Early- and Mid-career Researchers.

WEMCR is currently led by 5 researchers from across Wellington’s science-related organisations – so far, a museum, two CRIs, an independent research organisation, and a university. It also has the support of luminaries such as Shaun Hendy (who, it has just been announced, is one of 11 people elected to be Fellows of the Royal Society of New Zealand!), so it can’t be an entirely crackpot initiative :P

The group was formed in response to a point consistently raised earlier this year at the NZAS conference, themed “Do Emerging Scientists Have a Future in New Zealand?”: that support networks and organisations are of enormous importance for scientists who are just starting out, or beginning to gather steam.

It’s not surprising, really, that networks would be this important – in just about any field of endeavour, the primacy of networks has been realised: just think of all of those horrifying ‘networking’* events the business community puts on!  Julia Lane, in her talk on Measuring the value of science, talked about networks of scientists (either in or between organisations) as the engines of innovation**!

But what counts as an EMCR, you ask? Well, the group says:

The Australian Academy of Science considers an EMCR to be a researcher who is under 15 years post-PhD (or other research higher degree) irrespective of their professional appointment.

The group, which will aim to provide a support network and also to identify and (where possible) address issues which affect early- and mid-career scientists, is having a launch party, and you’re all invited.

Said event will be held at 6pm on Tuesday 6th November, at the Southern Cross.  The Listener columnist Dr Rebecca Priestley will be there to talk about her experience as a researcher, and of course it’ll be an opportunity for everyone to meet each other and chat. Possible subjects include horror stories, problems, their solutions, and, of course, what people want from WEMCR itself. Suggestions I’ve seen include advocacy, workshops on funding sources and applications, and social events and networking opportunities.

It’s free, but you still need to register at http://wemcr.eventbrite.co.nz. So far, nearly 30 people have registered, which is a good sign!

A personal observation: I think networks which work across organisations, rather than just within them, are a great idea.  Less siloing leads to better support, more shared knowledge and collaboration, and less of a fiefdom/territorial approach. So good onya :)

—–

At present there isn’t a Facebook page or website, although the group’s leaders say they would like to have one in time. They also hope the launch event will shine a light on what people want, including how they want to engage.

There IS a twitter account: @wtn_emcr

—–

* To be clear – I’m a fan of networks, and of meeting new people, and talking and stuff and whatnot. Hell, I’m so enthusiastic about it that I now know a range of people here and overseas, and am often asked to help out with finding people for XY situation. But I think a bunch of people standing around with glazed eyes and bad wine, or talking to each other ONLY to see what sort of future utility they can extract from each other, is not cool at all, and certainly doesn’t foster real networks. Actually, there’s a whole bunch of writing about this out there, as people start to say that sometimes, just sometimes, helping someone out shouldn’t be because you’re going to expect something from _them_ one day :)

** Or knowledge.  Both work, and both generate the same outcome – innovation and new knowledge :)

 

Of bikes and buses aimee whitcroft Oct 16

11 Comments

Bill’s post today on Wellington buses, and why he chooses to drive his car instead, is a timely one.

While his post looks primarily at the time he saves by not using public transport, I thought I’d focus on something else: its cost*.

Now, don’t get me wrong – I’m a big fan of public transport.  I think driving a car (especially one carrying only one person) to and from work every day isn’t exactly the most environmentally-conscious thing one can do. Especially when reams and reams of people are doing just that.

However, part of the payoff for people taking public transport is that it’s supposed to be a _better_ travel option, in more ways than just the environmental. Something which is, very sadly, not the case for Wellington buses. Just looking out of my window at a car park full of cars shows that…

I live in the CBD, about 5 kilometres from where I work. On the few days when Wellington isn’t being buffeted by mad winds, I can brave mad drivers who think bicycle lanes are some sort of optional gap for them to use, and cycle to work. Walking is another option, but takes, well, almost an hour.

Happily, I also have a car which I bring in occasionally.  What prompts me to do so? The buses.

There is only one bus an hour (each way) between 9:24 and 4:44.  The two directions are offset such that any errand one runs during the day has to take either less than fifteen minutes or at least an hour and a quarter. In practice this makes daytime errands impossible, and means that, should an appointment or some such thing require my presence in town in the morning or evening, I _have_ to use some option other than the bus. The bus is also, most of the time, either late (sometimes substantially) or early, meaning I then have to wait until the next one comes.  Sometimes, a scheduled bus service doesn’t rock up at ALL.

It’s also incredibly expensive. That 10 kilometre a day round trip ends up costing me on the order of $120 a month, if not more.  I think that’s extortionate.

So, I’ve bought a motorbike :) It gives me the ability to run errands, pop back into town if necessary (as well as get out to work!), and runs on something very close to fumes.

Let’s look at costs, then.

Bike:

Brand new motorbike: $2500

Insurance: $300

Motorbike learner’s license: variable, but around $350 (includes lesson time)

Gear: again variable, and my partner had a bunch of stuff I can use, so my costs have been $200 for boots, and $200 for a jacket.

Total: $3 550

Travel:

In 2012, there will have been 261 weekdays in total. Minus 20 for holidays, and, say, 6 over Christmas. That’s 235 weekdays.

So, at 10 km a day, that’s 2 350 km (say).

My fuel tank is 10.3 l, and the bike can travel, comfortably, 300 kilometres on that.

So, a year’s worth of weekday trips to and from work uses 7.83 tanks of petrol, or 80.7 litres of fuel.

At current fuel prices (October 2012), a litre of 91 costs 217.9 c.

Total: $175.81

Total costs:

Let’s say I sell my bike in a year. I’m told that, should I fail to damage it badly (or at all, rather), it should still be worth a solid $1 800-2 000. Let’s say $1 900.

UPDATE: I forgot rego, at $407.17 a year.

This means that the cost of my commute, over a year, will be $175.81 + $3 550 − $1 900

Total: $1 825.81 (UPDATE: $2 232.98)

Let’s take away the cost of gear, though, and the learner’s license (they’re sunk costs, basically, and don’t recur each year).

Total: $1 075.81 (UPDATE: $1 482.98)

 

Bus:

$2.66 (with Snapper, or else it’s $3.50!) each way, each day.

Over the same 261 weekdays as above (before subtracting holidays). Note that I’m not counting the cost of the Snapper card itself, replacing it, the fees that they charge to top up, or the cost of the occasional cash trip.

Total: $1 388.52
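For anyone who wants to poke at these numbers themselves, the arithmetic above fits in a few lines of Python. All figures are the ones quoted in this post (NZD, October 2012) – tweak to taste:

```python
# A minimal sketch of the commute-cost arithmetic from this post.

FUEL_PRICE = 2.179            # $/litre for 91, October 2012
TANK_LITRES = 10.3
KM_PER_TANK = 300.0
COMMUTE_DAYS = 261 - 20 - 6   # weekdays minus holidays and Christmas
KM_PER_DAY = 10

def bike_fuel_cost():
    """Fuel for a year of weekday commutes."""
    litres = COMMUTE_DAYS * KM_PER_DAY / KM_PER_TANK * TANK_LITRES
    return litres * FUEL_PRICE

def bike_first_year(purchase=2500, insurance=300, licence=350,
                    gear=400, rego=407.17, resale=1900):
    """First-year cost, assuming the bike sells for $1 900 at year's end."""
    return bike_fuel_cost() + purchase + insurance + licence + gear + rego - resale

def bus_year(fare_each_way=2.66, days=261):
    """A year of Snapper fares, both ways."""
    return fare_each_way * 2 * days

print(round(bike_fuel_cost(), 2))   # 175.81
print(round(bus_year(), 2))         # 1388.52
```

Swap in your own fares, fuel prices and distances and see how your commute stacks up.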

UPDATE: AND that big saving includes ownership of an actual object, in the form of a motorbike… If I don’t sell it, and just keep using it, my yearly costs are just $900** or so, as opposed to the bus’s almost $1 400… It’s eye-popping.

So.  There you have it. In return for giving me a whole bunch of inconvenience, GO Wellington also expects me to pay several hundred dollars more each year.

Yeah right.

—–

As for Auckland buses? I can’t comment, although this piece on Stuff certainly did. For the record, I think calling a bus a ‘loser cruiser’, and the people who use them ‘the great unwashed’, was unbelievably rude and condescending. It’s also inaccurate.

—–

* And don’t even get me started on the costs of trying to travel through New Zealand without flying on an airplane.  It’s like they WANT people to use cars over anything else.

** I originally stated $475 – the new number includes rego***. I have not included maintenance costs as they’re difficult to know, and the bike is brand new, meaning it’s under warranty.

*** Does anyone know why rego for a bike is significantly more than for a car? Is it because they use less petrol?

It’s Ada Lovelace Day! aimee whitcroft Oct 16

4 Comments

Today, October 16th, is Ada Lovelace Day.

Ada Lovelace, for those who didn’t already know (and you all do, right? * wink *), also called Augusta Ada King, Countess of Lovelace, is one of the shining stars in mathematics and computer history.

Ada Lovelace, a woodcut graphic by Colin Adams based on the original watercolor by Alfred Edward Chalon. Donated by the Ada Initiative to Wikimedia Commons.

Yep, you heard right – a _girl_ was incredibly good at maths :P

She was born and lived during the 1800s, and is known primarily for her work on Babbage’s Analytical Engine*.

Which means that, in some circles at least, she is also considered to have been the world’s first computer programmer.

Anyway, today is her day!

There’s a lovely page on the website Finding Ada devoted to the day, and I encourage you to go check it out. Christchurch City Libraries has also put out this great list of nonfiction books about female scientists.

And a fun challenge – what’s YOUR favourite story about women in STEM (science, technology, engineering, maths)?  Who has inspired you?

—–

Related posts

Greetings, 2012ers (in which I talk about the Ada Initiative, and going to a barcamp held by them)

—–

* I’ve seen a giant replica of its predecessor, the Difference Engine, at the Computer History Museum in Mountain View, California. Huge, and amazing.**

** The Difference Engine was never actually built during Babbage’s lifetime, and the question remains whether it would have been possible to do so (even if anyone had tried), given milling technology at the time.  Now, anyone can make their own using 3d printing, and plans are afoot to construct a working version.

Adapting to Mars aimee whitcroft Oct 12

No Comments

The subject of Mars is a popular one. Not only in science fiction but, of course, in real life, too.


Curiosity’s presence on the red planet at the moment – you can look at interactive panoramas of Mars! – has been generating a great deal of international media interest, and there has been talk of trying to get real actual human beans out there by the 2030s, if not before.


There are, of course, a number of challenges, many of which relate to the fragile state of our brains, bodies and minds.

We’ve had projects such as KiwiMars simulate life (and science!) in a ‘Martian’ environment. The MARS-500 project shut people in a mocked-up spacecraft for 520 days to see whether all those movies about space crews cracking would actually happen*. Examples, basically, abound of people thinking Very Seriously about a human presence on Mars.

Another issue, though, apart from the food and stress and travel (WAY worse than airplanes and LAX) and everything else, is sleep.

Or, more precisely, our circadian rhythms – the biological ‘clock’ upon which our bodies run. Earth’s day is on the order of 24 hours long, and we’ve adapted to that. Well, mostly – our innate clock is about 24 hours 12 minutes. And it’s the progression on earth from day into night, and back again, which helps us keep our clocks in check.  If you’re wondering why we would care about that: circadian rhythms are intimately involved in a number of very important processes, including metabolism, sleep** and mood…

However, the length of a day on Mars is slightly longer – 24 hours 39 minutes.  Not a huge difference, sure, but apparently our bodies have great difficulty adjusting to this difference on a day to day basis.
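To get a feel for the size of the problem, here’s a back-of-the-envelope sketch (mine, not the paper’s) of how fast an unadjusted body clock falls behind Mars local time:

```python
# How quickly does an unadjusted ~24 h 12 min human body clock fall out of
# phase with the ~24 h 39 min Martian sol? Rough numbers, for illustration.

BODY_CLOCK_H = 24 + 12 / 60   # innate human rhythm, ~24 h 12 min
MARS_SOL_H = 24 + 39 / 60     # Martian day, ~24 h 39 min

def drift_after(sols):
    """Cumulative mismatch, in hours, between body clock and Mars local time."""
    return (MARS_SOL_H - BODY_CLOCK_H) * sols

print(round(drift_after(1) * 60))   # ~27 minutes out after a single sol
print(round(drift_after(10), 1))    # ~4.5 hours out after ten sols
```

So within a couple of weeks you’d effectively be working the night shift – which is why nudging the clock along every single sol matters.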

But retraining our body clocks isn’t impossible.  And, indeed, the fix is remarkably low tech – it’s a combination of blue light and caffeine.

A paper recently published in the journal SLEEP details a study, run during the gruelling Phoenix Mars Lander mission in 2008, looking at ways to help circadian rhythms readjust. In addition to the training all of the crew received – how to use caffeine properly, how to properly arrange a dark, comfy bedroom, and how to be OK with not being superhuman – 19 crew members volunteered for a little bit more.

These crew members wore an ‘actigraph’ watch, which helped to monitor their sleep patterns, and kept a work and sleep diary.  In exchange, they were given panels of blue LEDs at their workstations.  This short-wavelength light, it would seem, made a difference, with 87% of the 19 synchronizing to the Martian clock.

Phoenix Mars Lander crew member Morten Bo Madsen sitting next to his workstation blue LED panel. Madsen worked on the robotic arm camera for the mission. Credit: University of Arizona

Pretty cool, and a wonderful example of how, with a bit of thought and cleverness, a very simple and cheap solution can have a marked effect :)

Of course, all I can think of is ‘Hooray!  One step closer to the 25 hour day!’…

 

Reference

Barger LK, Sullivan JP, Vincent AS, Fiedler ER, McKenna LM, Flynn-Evans EE, Gilliland K, Sipes WE, Smith PH, Brainard GC, & Lockley SW (2012). Learning to Live on a Mars Day: Fatigue Countermeasures during the Phoenix Mars Lander Mission. Sleep, 35 (10), 1423-35 PMID: 23024441

 

 

* I’m paraphrasing significantly.  More accurately, it was a ‘psychosocial experiment’, involving only men, which looked at a range of different situations. It was actually really, really cool :)

** Apparently, blind people also often suffer from sleep disorders because they don’t perceive the light/dark difference of day/night. And their body clocks adjust to something closer to the Martian day!

Measuring the value of science aimee whitcroft Oct 11

7 Comments

UPDATE: Please see the end of this piece, where I give examples of possible ‘good’ metrics

The subject of innovation/science/tech/R&D, and its contribution to a country’s economy and society, appears to be top of mind for policymakers, funding bodies and scientific bodies all over the world.

In New Zealand alone, MSI has set a target of doubling the value of science and innovation for New Zealand. Of course, this raises the question – how would one even begin to measure that?

Enter the very complex, very fraught, and very interesting world of science metrics. Metrics are used in all policy domains to help policymakers make decisions, and many fields have well-established, useful metrics.

Not so science and science policy, where investment decisions are made without sound, meaningful data and metrics which encompass the full value created by science.

Yesterday, I was privileged to attend a lecture at MSI by renowned economist Dr Julia Lane, on loan briefly from the US. In her talk, titled ‘Measuring the value of science‘, she discussed exactly this issue.

Amongst other achievements, Julia developed and led NSF’s Science of Science & Innovation Policy (SciSIP) program, which was begun after policymakers in the US realised that they had ‘no idea how to make science investments’ (in an evidence-led fashion), and after a clarion call from Jack Marburger (1) for better science benchmarks (metrics).

Julia talked for almost an hour and a half, and there’s a tonne of additional reading material on this subject, so I’m not going to attempt to summarise it all. Instead, I’m just going to pull out some of the main points of her lecture, and comment where I think appropriate. I’ll reference where I can, and you’ll find further reading links at the bottom of this post, too.

The challenge

The current data infrastructure is inadequate for making sound, evidence-based decisions about science, science investment and so forth. At the moment, science funding is something akin to a dark art, based on anecdote or poor metrics (eg. bibliometrics, but more on that shortly) rather than sound, empirical data and metrics.

We also don’t understand how science investment and its products interact with other aspects of societies and mechanisms. It’s one thing to wave one’s hands and say ‘a miracle happened’, but quite another to actually track how Grant X rippled through society, providing both economic and other benefits (see ‘Further notes’ at end), over a time period of years to decades.

Part of the difficulty here is that the relationship of science to innovation is non-linear, and its inputs and outputs/outcomes are extremely complex, over hugely varying time frames, amongst very complicated networks of people (from teams to organisations to political systems both national and international). There’s also the challenge of conveying the results of any analysis of all of this to the public and to policymakers. Hell, even terms such as science, technology, innovation and R&D have different definitions depending on who you speak to.

Current metrics

There are a range of metrics used, of which probably the most well-known and widely-used are bibliometrics. These look at citations, the publications produced by a researcher, and the journals in which said publications were, well, published. Ask most researchers how they feel about their entire corpus being measured this narrowly, and the dreaded Journal Impact Factor, and you’ll likely be treated to a groan and some variation on * headdesk * (or the FFFFUUUUU meme).

And that’s the thing – bibliometrics weren’t developed to measure science. They were developed to help us understand the corpus of publications (certainly very useful), but are terrible for measuring science as a whole. They’re slow, narrow, secretive, open to gaming, and have a host of other issues (I’ll not go into that, as that’s the subject of lots of research out there. Just go have a look).
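As an aside, the metrics themselves are often trivially simple, which is part of the problem. The h-index, for example (a researcher has index h if h of their papers have at least h citations each), boils down to a few lines – here with invented citation counts:

```python
# The h-index: the largest h such that h of a researcher's papers have at
# least h citations each. Citation counts below are made up for illustration.

def h_index(citations):
    h = 0
    # Rank papers from most- to least-cited; h is the last rank at which
    # the paper's citation count still meets or exceeds its rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # 3 – there aren't four papers with >= 4 cites
```

One number to summarise a whole career – you can see why researchers groan.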

Herein a cautionary tale about metrics: what you measure is what you get.

A lesson learned by Heinz, by Sears, by Dun & Bradstreet, and no doubt many more, is this: what you measure, and how, influences what is produced. Think of it as the observer effect writ large.

And so it is with measuring science using bibliometrics. Recent research has shown that falsification, fraud and misconduct in scientific publishing is on the increase, while other research has suggested that the situation is far, far worse (2), (3). And all because of something very simple – we’ve created perverse incentives. Rather than rewarding researchers for the creation and transmission of new scientific ideas and knowledge, we’re rewarding them for citations, producing published papers and, often, poor-quality research.

Further, the world’s moved on – published papers are no longer the only activities which support the creation and transmission of scientific ideas and knowledge. Other, more modern approaches include blogs, mentoring, prototyping, perhaps even the production of YouTube videos (4), training students, and so on.

Burden

The current system also places an enormous burden on researchers, who spend vast proportions of their time (42% of the time of PIs (Principal Investigators, or lead scientists) in the US, for example (5)) on administrative tasks.

To make matters worse, it’s work for which they’re not trained, at which they’re crap, which they hate, and which wastes very valuable time which could actually be spent on, well, the research for which they’re being paid.

How to even begin fixing this?

It’s not an easy task. Mammoth doesn’t even begin to describe it. But there ARE ways which Julia and her colleagues have been working on which can address this.

There are two particularly important cornerstones.
The first is the development of an intellectually coherent framework for understanding how people, grants, institutions and scientific ‘products’ influence each other.


Science measurement conceptual framework, as presented by the inimitable Julia Lane. White arrow added by me, with Julia’s agreement, to show that grants/products affect each other. Credit: me

 

Of key importance here is the realisation that science is done by people, so the core of any framework should have human beings. Not citations, note. Or grants. Or anything else that couldn’t also be called a meat popsicle.

Grants are the intervention, and funding (and measurement) is what affects the behaviour of the researchers and how networks are formed, grow and survive. The interest should be in describing the products (a range of different sorts of products, from publications to lives saved) that they produce, which could be YEARS after the initial intervention.

This is a longitudinal, interactive process. And of course, institutions form the context, as their facilities, cultures and structures play into what can be done.

The second is that a reliable, joined-up data infrastructure (DI) will be necessary* to provide all the information the framework has indicated is important in the measurement of science and scientific outcomes/products.

Key characteristics of the infrastructure are that it should be:

  • Timely (i.e. information is up to date)
  • Generalizable and replicable
    • The underlying source data should be OPEN and generally AVAILABLE, not “black box”. This also allows it to be reused for different purposes. It also means it can be generalised, replicated and validated, and can therefore be seen as actually useful.
    • Oh yeah, also – open means it can be continually improved by the community. A Good Thing :)
  • Low cost, high quality, and reducing the burden of reporting on the scientific community

Here, thankfully, technology is on our side. As are all sorts of researchers in social science, economics, computer science, and so forth, all of whom are keen to develop something, internationally, which works :)

Some (there are more!) of the sorts of technology we could use to build this sort of DI include:

  • Visualisation
  • Topic modelling, graph theory, etc.
    • For example, topic modelling can be used to better understand which scientific topics (by looking at papers, reports, grant applications etc) occur in groups, not only bloody useful in and of itself, but also great for things like apportioning where grant funding went.
  • Social networks
  • Automatic scraping of databases (so PIs etc don’t need to waste time on manual reporting of their work, outputs, its impact, and other such tiresome and time-consuming tasks)
  • Big Data (in this context)
    • Disambiguated data on individuals
    • Automatic data creation (just have researchers VALIDATE it, not create it)
    • New text mining techniques
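To give a flavour of the topic-modelling idea, here’s a deliberately tiny stand-in – real systems use proper techniques (LDA and friends), and the ‘grant abstracts’ below are invented, but it shows how co-occurring terms start to cluster:

```python
# A toy stand-in for topic modelling: count which terms co-occur across a
# few invented 'grant abstracts'. Real systems use LDA etc.; this just
# shows the flavour of grouping documents by shared vocabulary.

from collections import Counter
from itertools import combinations

abstracts = [
    "graphene sensors for water quality monitoring",
    "graphene battery electrode fabrication",
    "water quality modelling in urban catchments",
]
STOPWORDS = {"for", "in", "the", "of", "a"}

def term_pairs(text):
    """All unordered pairs of content words in one abstract."""
    words = sorted(set(w for w in text.split() if w not in STOPWORDS))
    return combinations(words, 2)

cooccur = Counter(pair for a in abstracts for pair in term_pairs(a))

print(cooccur[("quality", "water")])     # 2 – 'water quality' spans two abstracts
print(cooccur[("battery", "graphene")])  # 1
```

Scale that up over thousands of real abstracts and papers, and you start to see which topics hang together – and where grant money actually went.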

Even better, all of these lovely outputs can be pulled together, as Julia and co have done, into an API. Which is open, and allows any individual/organisation/funding body/etc to pull out and slice and/or aggregate all of the information in whichever ways make sense for them. Networks of science. Publications and patents by region. How a grant ended in the education of students, and their forward progress in science. Anything you can dream of, really, as long as the data’s there.

Of course, there is a problem here, and no doubt sharp readers will have noticed it.

The data itself.

There’s data all over the place, but it’s fragmented and isn’t always open, accessible or of useful quality. Examples include HR databases (organisations), patents (patent offices), publications (journals), publications/presentations (organisational libraries), grants (funding bodies), online profile databases for CVs and so on. The systems used to keep the data may be proprietary, meaning they don’t talk to anything else. Their metrics may be different, again making it difficult to merge them. All SORTS of problems abound.

But they’re fixable, as systems such as Brazil’s decade-plus-old Lattes system show – one of the cleanest databases on earth, it contains records for over a million researchers, in over 4,000 institutions in Brazil.

There are challenges, certainly, but funding, willpower and buy-in from funders and researchers means databases can be opened up, cleaned up, and made interoperable. It’s the whole POINT of federated data infrastructure work :)

Other examples where this sort of work is being done include STAR METRICS, R&D Dashboard, COMETS, Patent Network Dataverse, and others still in development.

And so?

A fair question. So, a system is built. It takes lots of lovely data and puts it together, allowing one to track all sorts of things. Where a grant’s money goes. How it ripples out through society in the form of training for students, publications, public science communication, patents and links with business, commercialisation and more.

But how does that actually help a bunch of people sitting around a table trying to decide which projects to fund? Or how to demonstrate that a given piece of science has value?

Well, there’s no easy answer. Each group will be looking for different characteristics and outcomes. They’ll have, in other words, different criteria for measuring the worth of a project and its projected value.

But any such group won’t be trying to assemble relevant information from scratch, from piecemeal sources which may be distorting.

Instead, they’ll have the information they need to develop real, scientific metrics. Which will help us all to develop a system which fosters the creation and transmission of knowledge, of real, GOOD science.

‘Tis a consummation devoutly to be wished.

And thank you, Julia.  You inspired me.  I’m so thrilled so see that there are people out there applying sense to science measurement, and an understanding and love of the subject matter :)

—–

An opportunity for NZ

The US, while having oodles more money than NZ, is also very big, very complicated, and still hasn’t properly cracked this yet. I think NZ government, policymakers, and science bodies should see this as an opportunity – we’re smaller, less complicated, and there’s no shortage of people here and overseas who’re willing to help and share expertise. We already have systems like Koha, and researchers like Shaun Hendy (who looks at innovation networks in NZ, amongst other things), and all of those who attend conferences such as Living Data.

I’m sorry. I know this one was long :) And I _still_ feel like I left a tonne out, sigh.

—–

Further thoughts:

On the subject of government funding of science, Julia had some choice things to say. Firstly – a point discussed more widely in a blog post by John Pickering – government should not be funding research which has a strong economic return. That is the business of private enterprise. Rather, government should be funding work which has strong public value, if not (possibly not yet) strong direct dollar value. Such as ARPANET in the US. Additionally, some basic research WILL fail, and it’s good to let people know about that – it’s of value to both industry and science, to prevent people reinventing circular objects.**

Julia was also quite vehement about the fact that any government (which should fund a mix of basic and applied science) which expects short-term economic results, especially to some sort of dollar value, is missing the point. Government and policy makers shouldn’t be interested in short-term return on science investment, but rather in which areas are growing nationally and internationally, and what’s happening generally. Of course, the identification of ‘hot’ areas is interesting, as it’s something of an endogenous process (more funding in an area makes it hotter, in many cases, while hotter areas get more funding. Hello positive feedback loop).
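That feedback loop is easy to caricature in a few lines of Python – the dynamics and numbers below are entirely made up, just to show how quickly such a loop runs away:

```python
# A caricature of the endogenous funding/'hotness' loop described above:
# made-up dynamics, purely illustrative.

def funding_loop(funding=1.0, hotness=1.0, steps=5, gain=0.2):
    history = []
    for _ in range(steps):
        hotness += gain * funding   # more funding makes the area hotter...
        funding += gain * hotness   # ...and hotter areas attract more funding
        history.append(round(funding, 2))
    return history

print(funding_loop())   # funding grows faster with every step
```

Each step feeds the next, so the growth compounds – the signature of a positive feedback loop (and, potentially, of a bubble).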

Additionally, science does not only have direct economic benefits, and any attempt to quantify it purely in those terms may well underestimate science’s true value. Instead, its public value should also be taken into account. These outcomes are public, nonsubstitutable and oriented to future generations, and capture elements such as competitiveness, equity, security, infrastructure and environment (6).

As to whether you can know for sure something’s not a bubble? You can never know if something’s a bubble, or just about to revive after looking dead (eg. graph theory). Any time you make a decision you may be wrong, but you may as well make it with SOME evidence, which is what this framework and DI are designed for.

Examples of better metrics

/waiting on permission to show something here. Keep an eye out for updates

UPDATE: Here y’are!

Examples of high quality metrics linked to the appropriate scientific outcomes. Cummings & Kiesler 2007 (7). Click to enlarge if necessary

—–

References and further reading

(1) Marburger, J. (2005). Wanted: Better Benchmarks. Science 20 May 2005: 308 (5725), 1087. doi:10.1126/science.1114801

(2) Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Med, 2(8), e124. doi:10.1371/journal.pmed.0020124

(3) Young, S., & Karr, A. (2011). Deming, data and observational studies. Significance, 8(3), 116–120. doi:10.1111/j.1740-9713.2011.00506.x

(4) Lane, J. (2010). Let’s make science metrics more scientific. Nature 464, 488-489 (25 March 2010). doi:10.1038/464488a

(5) Lane, J., Bertuzzi, S. (2011). Measuring the results of science investments. Science 11 February 2011: 331 (6018), 678-680. doi:10.1126/science.1201865

(6) Lane, J. (2009). Assessing the impact of science funding. Science 5 June 2009: 324 (5932), 1273-1275. doi:10.1126/science.1175335

(7) Cummings, J., & Kiesler, S. (2007). Coordination costs and project outcomes in multi-university collaborations. Research Policy, 36(10), 1620-1634. Table seen in an as-yet-unpublished chapter of "Next generation metrics: harnessing multidimensional indicators of scholarly performance", edited by Blaise Cronin and Cassidy Sugimoto, MIT Press

See also:

The Science of Science Policy: A Federal Research Roadmap.

TED – Derek Sivers: How to Start a Movement

Science and the economy: Julia Lane on Radio New Zealand’s Nine to Noon programme, 10 Oct 2012

—–

* This year seems to be The Year I Hear A Lot About Federated Data Infrastructure Systems (did I mention I went to the awesome Living Data conference?) :P
** There’s a growing movement to divorce publication of papers from publication of results. The publication system is biased towards papers with positive results, not negative. Publishing ALL results, and openly, would mean everyone had access to said results, and would again save a HUGE amount of wasted time as people unknowingly repeat each others’ research. It also means advances could be made much more quickly, by the secondary analyses of different sets of data.

*** Consummation as in end. As in result. Not death. Although, of course, anything of this sort will be iterative. Poetic license, m’kay?! :P

No, actually, everyone is NOT entitled to their opinion aimee whitcroft Oct 09

6 Comments

The question of opinion is becoming an increasingly vexed issue all over the world, and in all kinds of disciplines.

While no one (well, no one in their right minds) would say that one isn’t entitled to the opinion that purple is a waaaaaay prettier colour than, say, orange, the same is not true when the question is about something with a demonstrable basis in fact.

Like, for example, science.

Today, I came across a fantastic article, 'No, you're not entitled to your opinion', by Patrick Stokes, Lecturer in Philosophy at Deakin University in Australia. In it, he writes about the conversation he tries to have each year with his new students, about how, well, they're not. Or, as he says:

“I’m sure you’ve heard the expression ‘everyone is entitled to their opinion.’ Perhaps you’ve even said it yourself, maybe to head off an argument or bring one to a close. Well, as soon as you walk into this room, it’s no longer true. You are not entitled to your opinion. You are only entitled to what you can argue for.”

Couldn’t agree more.

He then goes on to explain the difference between what Plato first distinguished as opinion (or common belief) and certain knowledge. There’s no point in arguing about the first, which will include such elevated subjects as whether korma is better than vindaloo, or whether or not Led Zeppelin kicks the arse of every single rock band before or since.

The problem, however, is when one gets to believing one is entitled to opinions about the second: actual facts. Knowledge. DATA. Here, anyone is entitled to what they think only if they can back it up.

Otherwise, we get the situation which has been arising increasingly all over the Western world (I can't speak for other parts of the world): amateurs or, worse, people who know _nothing_ about a given subject, feel that their opinion on it is nonetheless just as valid as an expert's.

Sadly, examples abound. Vaccination. Homeopathy. Actually, most ‘alternative’ medicine. Climate change. The list goes on…

And this is where the media often makes the matter worse. In the interests of what they call ‘balance’, they will often put up the opinions of the factually wrong against those of the factually correct, giving both equal airtime and credibility.*

This sort of 'balance' should be there for matters of actual opinion – whether her dress at the Emmys was better than his, for example.

When it comes to matters where facts and data are required, this type of balance actually becomes bias, and not in favour of the facts. The media needs, desperately, to learn how to distinguish gumpf from truth.  If they can’t learn to do this better, they’ll only further undermine their credibility, and damage the ability of the societies they serve to make educated choices.

Anyway, read the rest of Stokes' article – it looks at the concept of entitlement and at what happened recently in Australia between ABC's Mediawatch, WIN-TV and the completely disingenuously named 'Australian Vaccination Network', and is well worth the read!

—–

* The cause of much tearing-out of hair both personally now, and professionally in my previous job at the SMC**.

** Gotta say, kudos to the SMC on their ongoing battle to introduce the press to the idea of real balance when it comes to science – y’all do good work :)

Lots of land still up for grabs on largest LEGO set aimee whitcroft Oct 04

No Comments

In around June this year, LEGO and Google Australia (who do the maps) released Build, a WebGL LEGO simulator.

Touting itself as the world's largest LEGO set, it allows you to build, using LEGO bricks, anything you want on a Google map. The map's been divided up into small squares, and each plot is a 32×32 grid.

Currently, only New Zealand and Australia have been opened up, but I imagine that more of the globe will be, should there be enough interest.

Build on New Zealand. Notice all that empty space…

All you need is the Chrome browser, and a Google account. And the willingness to build.

There’s still tonnes and tonnes of space available in both New Zealand and Australia, and because space is allocated on a first come first served basis, New Zealanders could occupy Australia!

There’s also, obviously, lots of space in our surrounding EEZ, although some zooming-in suggests people have been having a bit of fun building boats and islands and whatnot. Still, oodles and oodles of space.

In order to choose a plot, one has a couple of choices.

Firstly, you can simply hit the 'Build' button you'll see on the homepage, and it'll assign you a plot. Currently, the default appears to be somewhere on Australia's eastern coast, or in the surrounding waters.

If, however, you’d rather choose your spot, you can either zoom in until you find the right place and hit ‘build here’, or you can actually type in the address on which you’d like to build, and start from there.

There are a few house rules, mostly around the fact that creations should be one’s own and one shouldn’t be an arse, or preachy, or gross, or anything like that. Summed up, basically, as ‘don’t be a d**k’. Totally sensible, I think.

What Building looks like.

There are 10 colours to build with already, but should you be one for mods, a hack exists which allows you to build with dayglo pink bricks (awesome). You also get 12 shapes with which to play, which makes for 132 blocks (including pink) with which to go wild.
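For the record, that brick count checks out. A quick (admittedly trivial) sanity check:

```python
# 10 standard colours, plus the dayglo-pink hack, times 12 shapes.
colours = 10 + 1  # including pink
shapes = 12
print(colours * shapes)  # -> 132
```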

Creations include a giant banana, medieval castle, dinosaur, Pacman ghost and much, much more – there are already thousands of these built or under construction.

The area around the Franz Josef glacier is actually pretty built up…

Why the Minecraft Masses haven't glommed onto this, I don't know, but they should. They _totally_ should. As well as everyone else, of course :)

Let the land grabs begin!

—–

UPDATE: Sadly, the pieces provided do not allow me to build a bear :(

Further update: Below, a video showing the construction of a T-Rex (the vid previously posted here didn’t want to work)

YouTube Preview Image

Furthest update: My first build

More fraud behind paper retractions than you might have thought aimee whitcroft Oct 02

6 Comments

A subject that’s come up in discussion with my friend a couple of times recently has been the increase in retractions of scientific papers from journals. I’ve always staunchly defended allegations this might be due to naughty scientists.

I am  now having to make my own retraction about that.

According to research (as yet unretracted) in Nature last year, the number of retractions has increased more than tenfold in the last decade, to more than 300 a year. At the time, the article talked about how difficult it was to analyse what was causing this increase.

 

Retractions. Source: http://blogs.nature.com/news/files/retraction%20numbers%20pic%20blog.png

 

Today, famed science writer and scientific tattoo collector* Carl Zimmer published an article in the Washington Post with a very upsetting piece of news: while it’s generally been thought these retractions were due to error, it now turns out more may be due to misconduct and fraud than many of us had thought.

Which makes me want to go and kick something. Possibly a disingenuous scientist. But more on why I'm so angry later in this rant.

A new study, published in PNAS, looked a little deeper than previous studies. Examining the 2,047 retracted papers related to the biomedical and life sciences in PubMed, the authors found that misconduct (of which fraud was a major component)** was responsible for fully 67.4% of the retractions whose cause could be determined.
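For the curious, the paper's own category percentages (quoted in the footnote marked ** below) do add up to that headline figure. A quick back-of-the-envelope check; note the per-category paper counts here are my own rounding of the percentages against the 2,047 total, not figures from the paper itself:

```python
# Misconduct categories quoted from the PNAS paper's abstract, as
# percentages of the 2,047 retracted papers examined.
total_retractions = 2047
categories = {
    "fraud or suspected fraud": 43.4,
    "duplicate publication": 14.2,
    "plagiarism": 9.8,
}

misconduct_pct = sum(categories.values())
print(f"misconduct share: {misconduct_pct:.1f}%")  # -> 67.4%

for name, pct in categories.items():
    # Approximate number of papers in each category (my rounding).
    print(f"{name}: ~{round(total_retractions * pct / 100)} papers")
```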

That’s appalling.

Now, one must remember that this still accounts for a very small percentage of the papers submitted. Quoting from Zimmer’s article:

Dr. Benjamin G. Druss, a professor of health policy at Emory University, said he found the statistics in the paper to be sound but added that they “need to be kept in perspective.” Only about one in 10,000 papers in PubMed have been officially retracted, he noted. By contrast, 112,908 papers have had published corrections.

I can’t read the paper (hello paywall), so I can’t say whether the total increase of retractions is in line with the increase in paper publication over the years (i.e. is the proportion of retraction increasing too?). However, the authors do stipulate that the number of retractions due to fraud has increased tenfold since 1975. UPDATE: having now read the paper (gonna protect my source), it would appear that “research publications: retractions for fraud or suspected fraud as a percentage of total articles have increased nearly 10-fold since 1975″. So, the abstract could have been a bit clearer, then :)

While the percentage of retractions when put against publications is still very small, this development is extremely worrying.

It’s being postulated by some that the increasing pressure on scientists to ‘Publish or Perish’ is pushing them too far – where a published paper can mean the difference between tenure and unemployment, suddenly the temptation to cheat can become unbearable. Thankfully, the problem’s been noticed, and there are already tonnes of people talking about how broken the current publishing system is, and about what could be done to fix it (eagerly opposed, as a rule, by the journals).

The Publish or Perish culture is also extremely unfair to scientists who, for example, work very practically, or who work in organisations which focus on applied work rather than publishable research.

Finally, however, and possibly most scarily – this simply fuels anti-science sentiment and propaganda. Those out there who believe that scientists lie and twist the truth in the ongoing battle for research grants are going to seize upon this as proof positive that they’re right. Science, and scientists, cannot be trusted. Something which, to be sure, is demonstrably untrue, but for which even the smallest numbers will be triumphantly used.

So. For shame, to the scientists who cheat. You do yourselves, your work and your science a great disservice. But for shame, too, to the systems which encourage these scientists to do so.

Both need a long, hard look.

Further update:

The full paper also shows some other interesting numbers: amongst these, that journal impact factor shows "a highly significant correlation with the number of retractions for fraud or suspected fraud".

Additionally, below are the numbers by country of origin for each retraction type. These graphs would, I think, have been more useful had they included information about how many papers in total each country had published, allowing us to see each country's proportional representation, but yes. Still interesting.

 

Fang et al (2012). Misconduct accounts for the majority of retracted scientific publications. PNAS. Click to enlarge.

—–

Related posts:

The (threat) challenge to science publishing

Geopolitics and science activity: 30 years’ worth

—–

* Well, he collects pictures of them. I have no idea whether he collects them personally :P

** The breakdown of this misconduct is as follows: “fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%)” (quotation from paper abstract)
