SciBlogs

Open Access publishing shouldn’t be this hard Fabiana Kubke Apr 05


We put a man on the moon about half a century ago yet we still haven’t solved the problem of access to the scientific literature.

“moonstruck” CC-BY Adnan Islam on Flickr

I was invited to speak at the New Zealand Association of Scientists meeting this year. The theme was “Science and Society” and I was asked to speak about Open Access from that perspective.

The timing was really good. Lincoln University published their Open Access Policy last year, Waikato University released their Open Access mandate a couple of weeks ago, and the University of Auckland is examining their position around Open Access. New Zealand is catching up.

I opened my talk by referring to the New Zealand Education Act, which outlines the role of universities:

…a university is characterised by a wide diversity of teaching and research, especially at a higher level, that maintains, advances, disseminates, and assists the application of, knowledge, develops intellectual independence, and promotes community learning
[New Zealand Education Act (1989), Section 162.4.b.iii] (emphasis mine)

I argued that those values could best be met by making research outputs available under Open Access as defined by the Budapest Open Access Initiative, that is, not limited to “access” but, equally importantly, allowing re-use.

After summarising the elements of the Creative Commons licences that can support Open Access publishing, I invited the audience to have an open conversation with their communities of practice to examine the value each places on how we share the results of our work.

My position is that the more broadly we disseminate our findings the more likely we are to achieve the goals set out by the NZ Education Act: to maintain, advance and assist the application of knowledge, develop intellectual independence and promote community learning. I am also of the position that this is what should be rewarded in academic circles. I think that, as a community, we should move away from looking for value in the branding of the research article (i.e., where it is published) and focus instead on measuring the actual quality and impact of the research within and outside the academic community.

How do we measure quality and impact?

CC-BY aussiegall on Flickr

At times I feel we have become lazy. We often stick to using the impact factor as a proxy for quality instead of interrogating the research outputs themselves to understand their contribution and impact. The impact factor may be an easy metric – but it does not in any way measure the quality or impact of an individual article, let alone of the researchers who authored it. It is just an easy way out, a number we can quickly look at so we can tick the right box. As a metric it is easy, quick and objective. As a metric of the value of an individual piece of work it is also useless and, because of that, it inevitably lacks fairness in research assessment.
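To see why, here is a toy example (all the citation counts below are made up): citation distributions are heavily skewed, so a handful of highly cited papers can carry a whole journal while most of its articles sit far below the average.

import statistics

# Made-up citation counts for ten articles over a two-year window.
citations = [0, 0, 0, 1, 1, 2, 3, 5, 8, 80]

# A journal-level "impact factor" is essentially this mean.
impact_factor = sum(citations) / len(citations)

print(impact_factor)                 # 10.0 – looks impressive
print(statistics.median(citations))  # 1.5 – where most articles actually sit

The journal-level number tells you about the outlier, not about the article in front of you.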

What does this have to do with OA?

By the end of the conference I couldn’t shake the thought that the barriers to Open Access may not be financial, and that the costs of publication fees may be the least of our problems. (This issue of cost just keeps coming up.) I can’t help but wonder if the cost of Open Access might just be a red herring that lets us avoid the real (and bigger) issue: quality assessment. Open Access may help our articles have a wider reach but, except for a few titles, Open Access journals are not recognisable brands. If we are forced to stop looking at the “journal brand” we will be forced to assess individual articles for their intrinsic value and impact. And, although that may lead to better, more valid assessment, it is also a big and difficult job.

A lot of what was said today at the conference revolved around the value of New Zealand science (and scientists) to society and the importance of science communication. We spoke about the importance of evidence-based policy, the need to be the critic and conscience of society, and the challenges of working with the public to build trust in scientific evidence despite its uncertainties. We expect politicians and society to do the hard job of making decisions based on evidence. I couldn’t help but ask whether we, as a community of scientists, can live up to those standards.

Can we ditch the bad and easy for the good and hard?

We put a man on the moon. The issues around open access and research assessment must surely be easier to solve. Are we ready to put our money where our mouth is?

 

Open Access Week 2014 Fabiana Kubke Oct 25


What do brain machine interfaces and Open Science have in common?

They are two examples of concepts that I never thought I would get to see materialised in my lifetime. I was wrong.


Kiwi Open Access Logo by the University of Auckland, Libraries and Learning Services is licensed under a Creative Commons Attribution 3.0 Unported License.

I had heard of the idea of Open Access when the Public Library of Science was about to launch (or was in its early infancy). It was around that time that I moved to New Zealand, where I was not able to go to conferences as frequently as I had in the USA, and couldn’t afford an internet connection at home. Email communication (especially when limited to work hours) does not promote the same kind of chitter-chatter you might have as you wait in the queue for your coffee – and so my work moved along, somewhat oblivious to what was going to become a big focus for me later on: Open Science.

About 6 years after moving to New Zealand things changed. Over a coffee with Nat Torkington, I became aware of some examples of people working in science embracing a more open attitude. This conversation had a big impact on me. Someone I had never met before described to me a whole different way of doing science. It resonated (strongly) because what he described were the ideals I had at the start of my journey; ideals that had slowly been eroded by the demands of the system around me. By 2009 I had found a strong group of people internationally who were working to make this happen, and who inspired me to try to do something locally. And the rest is history.

What resonated with me about “Open Science” is the notion that knowledge is not ours to keep – that it belongs in the public domain where it can be a driver for change. I went to a free-of-fees university and we fought hard to keep it that way. Knowledge was a right and sharing knowledge was our duty. I moved along my career in parallel with shrinking funding pots and a trend towards academic commodification. The publish-or-perish mentality, the fear of being back-stabbed if one shares too early or too often, the idea of the research article placed in the “well-branded” journal, and the “paper” as a measure of one’s worth as a scientist all conspire to keep us from exploring open collaborative spaces. The world I walked into around 2009 was seeking to do away with all this nonsense. I have tried to listen and learn as much as I can; sometimes I have even dared to put in my 2 cents or ask questions.

How to make it happen?

CC-BY Mariano Kamp on Flickr

The biggest hurdle I have found is that I don’t do my work in isolation. As much as I might want to embrace Open Science, when the work is collaborative I am not the one who makes the final call. In a country as small as New Zealand it is difficult to find critical mass at the intersection of my research interests (and knowledge) and the desire to work in the open. If you want to collaborate with the best, you may not be able to be picky about a shared ethos. This is particularly true for those struggling to build a career and secure a permanent position, for whom the advice of those at the hiring table will always sound loudest.

The reward system seems at times to be stuck in a place where incentives are (at all levels) stacked against Open Science; “rewards” are distributed at the “researcher” level. Open Research is about a solution to a problem, not about someone’s career advancement (although that should come as a side-effect). It is not surprising, then, how little value is placed on whether one’s science can be replicated or re-used. Once the paper is out and the bean drops in the jar, our work is done. I doubt that staffing committees or those evaluating us will even care to pull those research outputs and read them to assess their value – if they did, we would not need things like Impact Factors, the h-index and the rest. And here is the irony – we struggle to brand our papers to satisfy a rewards system that will never look beyond the title. At the same time, those who care about the content and want to reuse it are limited by whichever restrictions we chose to impose at the time of publishing.

So what do we do?

I think we need to be sensitive to the struggle of those who might want to embrace open science but are trying to negotiate the assessment requirements of their careers. Perhaps getting more people who embrace these principles onto university staffing and research committees might at least provide the opportunity to ask the right questions about “value”, and at the right time. If we can get more open-minded stances at the hiring level, that will go far in changing people’s attitudes at the bench.

I, for one, find myself in a relatively good position. My continuation was approved a few weeks ago, so I won’t need to face the staffing committee except for promotion. A change in title might be nice – but it is not a deal-breaker, like tenure. I have tried to open my workflow in the past, learned enough from the experience, and will keep trying until I get it right. I am slowly seeing the shift in my colleagues’ attitudes – less rolling of eyes, a bit more curiosity. For now, let’s call that progress.

I have come to meet in person many of those who have inspired me through online discussions since 2009, and they have always provided useful advice but, more importantly, support. Turning my workflow to “Open” has been as hard as I anticipated. I have failed more than I have succeeded, but I have always learned something from the experience. And one question keeps me going:

What did the public give you the money for?

Science gone bad Fabiana Kubke Oct 05


or the day after the sting

I got the embargoed copy of the Science Magazine article on peer review in Open Access earlier this week, which gave me a chance to read it with tranquility. I have to say I really liked it. It was a cool sting, and it exposed many of the flaws in the peer review system. And it did that quite well. There was a high rate of acceptance of a piece of work that did not deserve to see the light. I also immediately reacted to the fact that the sting had only used Open Access journals – cognizant of how that could be misconstrued as a failure of Open Access, detracting from the real issue, which is peer review.

I had enough time to write a blog post, and was lucky enough to be able to link to Michael Eisen’s take on the issue before I posted, so I did not need to get into the nitty gritty of why the sting had to be taken for nothing more than what it was – an anecdotal set of events. Because what it was not is a scientific study.

One of the things I found valuable about the sting (or at least my take-home message) was that there is enough information out there to help researchers navigate the Open Access publishing landscape they are so scared of, and that it provided some information on how to choose good journals. The excuse that there are too many predatory journals – used to justify not publishing in Open Access – is now weaker. It also provided all of us with an opportunity to reflect on the failures of peer review and the value of the traditional publication system.

Or so I thought.

Then the embargo was lifted, and I have been picking up brain bits spilled over Twitter, blogs and other social media as the tsunami of exploding heads started. And as the morning alarm clocks went off and the sun rose in different time zones, new waves of brain bits came along.

By now, I could look at the entire ‘special issue’ and what else was in it.  Here  is where I see the problem.

There were lots of articles talking about science communication. I could not find one (please correct me if I am wrong!) that took on the sting to refocus the discussion in the right direction (that is, on peer review), or that reflected on how Science and the AAAS behind it measure up against the issues they so readily seemed to criticise.

I never liked the AAAS – or rather, I began disliking it after I got my first invitation to join in the late 1980s. It seemed that all I needed to do to become a member was send them cash. There was no reason to do that – since, without requiring anyone to endorse me as a “proper scientist”, I could not see what that membership said about me other than having the ability to write a check. I was already doing that with the New York Times, and if I couldn’t put that down in my CV, then neither could I put down my membership with the AAAS. Nothing gained, nothing lost, move on.

What I didn’t know back then was that that first letter would be the first in a long (long!) series of identical invitations that would periodically arrive in my mailbox, where they would be quickly disposed of in the rubbish bin in the corner of the room. I am sure one would be able to find plenty of those in the world’s landfills.

“The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment.” (Stone, R., & Jasny, B.)

Wut? Let’s apply the same logic to the AAAS membership – would we consider that predatory behaviour too?

Let’s move on to peer review.

Moving back to the sting. Yes, they sent a lot of articles out. The article in Science seems to me to be delivered from a very high horse, and one with no legs to stand on. Their N is large (perhaps not large enough, but that is beside the point). Because to each journal they sent just one (n=1) hoax paper (singular, not plural). I may ask – had they sent, say, 10 hoax papers to each journal, would each journal have accepted all 10, only 5, or perhaps only 1? Because that makes a difference at the individual journal level. If we are going to accept that such an n=1 is enough to draw any informed conclusion about whether a journal is predatory or not, then, well, arsenic life. ‘Nuff said.
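To make that concrete, here is a minimal sketch (in Python, with a made-up 30% acceptance rate for a hypothetical journal) of why one submission per journal tells us almost nothing about that journal – even though the aggregate, across hundreds of journals, can still be informative:

import random

def estimated_acceptance_rate(true_rate, n_submissions):
    # Count how many of n hoax submissions get accepted, then divide.
    accepted = sum(random.random() < true_rate for _ in range(n_submissions))
    return accepted / n_submissions

random.seed(1)
true_rate = 0.3  # assumption: this journal accepts 30% of junk papers

# With n=1 the estimate can only ever be 0.0 or 1.0 – never anything near 0.3.
print([estimated_acceptance_rate(true_rate, 1) for _ in range(5)])
print([estimated_acceptance_rate(true_rate, 10) for _ in range(5)])

A single Bernoulli trial per journal can brand an occasionally sloppy journal as predatory, or let a thoroughly predatory one off the hook.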

Let’s take a second look at the arsenic paper: n=1. The arsenic paper was so bad that poor Michael Eisen’s head exploded because readers of his blog actually believed he had sent it in as a hoax – I even caught myself doing a double-take when I started reading his blog post (but I kept on reading!). That’ll teach him for being such a convincing writer.

So, if n=1 is enough, does that mean Science magazine is ready to add its name to the list of journals that don’t meet the mark? I could not find, in their issue, any reflection on that (please correct me if I am wrong!).

… and to open access

But the bigger issue in my view was what appears to be Science’s position on Open Access. Now, Science is not Nature: Science is the flagship journal of the AAAS, which says it is an organisation “advancing science, serving society”. Here are some of their mission bullet points:

Enhance communication among scientists, engineers, and the public;
Promote and defend the integrity of science and its use;
Foster education in science and technology for everyone;
Increase public engagement with science and technology.

How is any of this better served by having their flagship magazine behind a paywall?

Can they support, with scientific data, the claim that having their flagship journal behind a paywall helps achieve any of those goals? Now those are data I would love to see. Because the biased criticism of Open Access in their “special issue” (please correct me if I am wrong!) seems to suggest so. Now, if they can’t provide a scientific argument as to why we should give them so much money to be members or to access their publication, then how are they any different from the “cottage industry” they seem so ready to criticize? Is preying on libraries or readers less bad than preying on authors? If I purchase a “pay per view” article and don’t like it, or it does not contain the data promised by the abstract, do I get my money back? Or do these paywalled journals just take the money and run? Because, as much as I dislike the predatory open access journals, at least they are putting the papers out there so that we can all crowdsource on how much crap they are.

Do I find an issue with their bringing to the attention of their readership the troubled state of the publishing industry? No.

Do I find an issue with some of the articles in the special issue focusing on some of the naughty players in the Open Access landscape? No.

What I do have a problem with is the apparent lack of reflection on Science’s and the AAAS’ own practices (please correct me if I am wrong!).

There was an opportunity to step up, and that opportunity was missed. Science might have a shiny coat of wool decorated with double digit impact factors, but I am not buying it.

I am sticking with the New York Times.

(Full disclosure: I am an academic editor for PLOS ONE and PeerJ and the Chair of the Advisory Panel of Creative Commons Aotearoa New Zealand. The views expressed here are purely my own.)

[Updated Oct 5 1:19 to add missing link]

Predatoromics of science communication Fabiana Kubke Oct 04


CC-BY mjtmail (tiggy) on Flickr

The week ends with a series of articles in Science that make you roll your eyes. These articles explore different aspects of the landscape of science communication, exposing how broken the system can be at times. The increased pressure to publish scientific results to satisfy some assessors’ need to count beans has not come without a heavy demand on the scientific community, which inevitably becomes involved through free editorial and peer review services. For every paper that is published, a number of other scientists take time out of their daily work to contribute to the decision of whether the article should be published, in principle by assessing its scientific rigor and quality. In many cases, unless the article is accepted by the first journal it is submitted to, this cycle is repeated. Over. And over. Again. The manuscript is submitted to a new journal, handled by a new editor and most probably reviewed by a new set of peers, iterated as many times as needed until a journal takes the paper in. And then comes the back and forth of the revision process, with modifications to the original article suggested or required through peer review, until eventually the manuscript is published. Somewhere. Number of beans = n+1. Good on ya!

But what is the cost?

CC-BY Jessica M Cross on Flickr

There just doesn’t seem to be enough time to go through this process with the level of rigor it promises to deliver. The rise of multidisciplinary research means it is unlikely that a single reviewer can assess the entirety of a manuscript. The feedback we get as editors (or provide as reviewers) can often be incomplete and miss fundamental scientific flaws. There is pressure to publish, and to publish a lot, and to do that (and still have something to publish about) we are tempted to minimise the amount of time we spend in the publication cycle. Marcia McNutt says it in a nutshell [1]:

For science professionals, time is a very precious commodity.

It is then not surprising that the exhaustion of the scientific community would be exploited with the ‘fast food’ equivalent of scientific communication.

The vitality of the scientific meeting has given rise to a troubling cottage industry: meetings held more for profit than enlightenment  [2]

The same applies to some so-called scientific journals. These “predatory” practices, as they have come to be known, are exhausting.

Science published today the description of a carefully planned sting. John Bohannon created a spoof paper that he sent to a long list of Open Access journals [3]. The paper should have been rejected had anyone cared enough to assess the quality of the science and base their decision on that. Instead, the manuscript made it through and was accepted by a number of journals (98 journals rejected it, 157 accepted it). That the paper got accepted by more than one journal did not come as a surprise, but where it got interesting for me was when he compared the accepting journals against Beall’s predatory journal list. Jeff Beall helps collate a list of predatory Open Access journals, which at least saves us from having to do even more research when trying to decide where to publish our results or what conferences we might want to attend.

Like Batman, Beall is mistrusted by many of those he aims to protect. “What he’s doing is extremely valuable,” says Paul Ginsparg, a physicist at Cornell University who founded arXiv, the preprint server that has become a key publishing platform for many areas of physics. “But he’s a little bit too trigger-happy.” [3]

What Bohannon’s experiment showed was that 82% of the publishers from Beall’s list that received the spoof paper accepted it for publication. There is no excuse for falling prey to these journals and conferences. “I didn’t know” just won’t cut it for much longer.

As Michael Eisen discusses, even though Bohannon used open access journals for his experiment, this lack of rigour seems to ignore paywalls, impact factors and journal prestige. Which raises the following question:

If the system is so broken, costs so much money in subscriptions and publication fees, and sucks so much out of our productive time – then why on earth should we bother?

Don’t get me wrong – sharing our findings is important. But does it all really have to be peer reviewed from the start? Take Mat Todd’s approach, for example, from the Open Source Malaria project. All the science is out there as soon as it comes out of the pipette tip. When I asked him how this changed the way his research cycle worked this is what he said:

We have been focusing on the data and getting the project going, so we have not rushed to get the paper out. The paper is crucial but it is not the be-all and end-all. The process has been reversed: we first share the data and all the details of the project as it’s going, then when we have finished the project we move to publishing.

Right. Isn’t this what we should all be doing? I didn’t see Mat Todd’s world collapse. There is plenty of opportunity to provide peer review on the project as it moves forward. There is no incentive to write the paper immediately, because the information is already out there. There is no need to take up the time of journal editors and reviewers, because the format of the project lends itself to peer review from anyone who is interested in helping get this right.

PeerJ offers a preprint publication service:

“By using this service, authors establish precedent; they can solicit feedback, and they can work on revisions of their manuscript. Once they are ready, they can submit their PrePrint manuscript into the peer reviewed PeerJ journal (although it is not a requirement to do so)”

F1000 Research does something similar:

“F1000Research publishes all submitted research articles rapidly […] making the new research findings open for scrutiny by all who want to read them. This publication then triggers a structured process of post-publication peer review […]”

So yes, you can put your manuscript out there and let peers review it at their leisure, when they actually care and have the time and focus to do a good job. There is really no hurry to move the manuscript to the peer-reviewed journal (PeerJ or any other) because you have already communicated your results, so you might as well go get an experiment done. And if, as a reviewer, you want credit for your contribution, you can go to Publons, where you can write your review; if the community thinks you are providing valuable feedback, you will be properly rewarded in the form of a DOI. Try to get that kind of recognition from most journals.

But let’s say you are too busy actually getting science done – then you always have FigShare.

“…a repository where users can make all of their research outputs available in a citable, shareable and discoverable manner.”

Because, let’s be honest, other than the bean counters, who else really cares enough about what we publish to justify the amount of nonsense that goes with it?

According to ImpactStory, only 20% of the items indexed by Web of Science in 2010 received 4 or more PubMed Central citations. So, 4 citations in almost 3 years puts you in the top 20%.

So my question is: Is this nonsense really worth our time?

CC-BY aussiegall on Flickr

[1] McNutt, M. (2013). Improving Scientific Communication. Science, 342(6154), 13. doi:10.1126/science.1246449

[2] Stone, R., & Jasny, B. (2013). Scientific Discourse: Buckling at the Seams. Science, 342(6154), 56–57. doi:10.1126/science.342.6154.56

[3] Bohannon, J. (2013). Who’s Afraid of Peer Review? Science, 342(6154), 60–65. doi:10.1126/science.342.6154.60

ASAP Awards Finalists announced Fabiana Kubke Oct 02


(Cross-posted from Mind the Brain)

Earlier this year, nominations opened for the Accelerating Science Awards Program (ASAP). Backed by major sponsors like Google, PLOS and the Wellcome Trust, and a number of other organisations, this award seeks to “build awareness and encourage the use of scientific research — published through Open Access — in transformative ways.” From their website:

The Accelerating Science Award Program (ASAP) recognizes individuals who have applied scientific research – published through Open Access – to innovate in any field and benefit society.

The list of finalists is impressive, as is the work they have been doing taking advantage of Open Access research results. I am sure the judges did not have an easy job. How does one choose the winners?

In the end, this has been the promise of Open Access: that once the information is put out there it will be used beyond its original purpose, in innovative ways. From cell phone apps that help diagnose HIV in low-income communities, to mobile phones used as microscopes in education, to helping cure malaria, the finalists are a group of people the Open Access movement should feel proud of. They represent everything we believed could be achieved when the barriers to scientific information were lowered to just an internet connection.

The finalists have exploited Open Access in a variety of ways, and I was pleased to see a few familiar names in the finalists list. I spoke to three of the finalists, and you can read what Mat Todd, Daniel Mietchen and Mark Costello had to say elsewhere.

One of the finalists is Mat Todd from the University of Sydney, whose work I have stalked for a while now. Mat has been working on an open source approach to drug discovery for malaria. His approach goes against everything we are always told: that unless one patents one’s discovery there is no chance the findings will be commercialised to market a pharmaceutical product. For the naysayers out there, take a second look here.

A different approach to fighting disease was taken by Nikita Pant Pai, Caroline Vadnais, Roni Deli-Houssein and Sushmita Shivkumar, who tackled HIV. They developed a smartphone app to help circumvent the need to go to a clinic to get an HIV test, avoiding the possible discrimination that may come with it. But with home testing, what was needed was a way to provide people with the information and support that would normally be provided face to face. Smartphones are increasingly becoming a tool that healthcare is exploring and exploiting. The hope is that HIV infection rates could be reduced by diminishing the number of infected people who are unaware of their condition.

What happens when different researchers from different parts of the world use different names for the same species? This is an issue that Mark Costello came across – and decided to do something about. He became part of the WoRMS project, a database that collects knowledge about individual species. The site receives about 90,000 visitors per month. The data in the WoRMS database is curated and available under CC-BY. You can read more about Mark Costello here.

We’ve all heard about ecotourism. For it to work, it needs to go hand in hand with conservation. But how do you calculate the value (in terms of revenue) that you can put on a species based on ecotourism? This is what Ralf Buckley, Guy Castley, Clare Morrison, Alexa Mossaz, Fernanda de Vasconcellos Pegas, Clay Alan Simpkins and Rochelle Steven decided to work out. Using freely available data, they were able to calculate to what extent the populations of threatened species depend on money that comes from ecotourism. This provides local organisations with the information they need to meet their conservation targets within a viable revenue model.

Many research papers are rich in multimedia – but often these multimedia files are published in the “supplementary” section of the article (yes – the part we don’t tend to pay much attention to!). These multimedia files, when published under open access, offer the opportunity to be exploited in broader contexts, such as illustrating Wikipedia pages. That is what Daniel Mietchen, Raphael Wimmer and Nils Dagsson Moskopp set out to do. They created a bot called the Open Access Media Importer (OAMI) that harvests the multimedia files from articles in PubMed Central. The bot also uploads these files to Wikimedia Commons, where they now illustrate more than 135 Wikipedia pages. You can read more about it here.

Saber Iftekhar Khan, Eva Schmid and Oliver Hoeller were nominated for developing a lightweight microscope that uses the camera of a smartphone. The microscope is relatively small, and many of its parts are printed on a 3D printer. For teaching purposes it has two advantages. Firstly, it is mobile, which means you can go hiking with your class and discover the world that lives beyond your eyesight. Secondly, because the image of the specimen is seen through the camera function on your phone or iPod, several students can look at an image at the same time, which, as anyone who teaches knows, is a major plus. To do this with standard microscopes would cost a lot of money in specialised cameras and monitors. Being able to do it at relatively low cost can offer students a way of engaging with science that may be completely different from what they were offered before.

Three top awards will be announced at the beginning of Open Access Week on October 21st. Good luck to all!

Failure to replicate, spoiled milk and unspilled beans Fabiana Kubke Sep 06


Try entering “failure to replicate” in a Google search (or better still, let me do that for you) and you will find no shortage of hits. You can even find a reproducibility initiative. Nature has a whole set of articles on the topic. If you live in New Zealand you have probably not escaped the news coverage of the botulism bacteria that never was, and you might be among those puzzled about how a lab test could be so “wrong”.

Yet, for scientists working in labs, this issue is commonplace.

Most scientists will acknowledge that reproducing someone else’s published results isn’t always easy. Most will also acknowledge that they would receive little recognition for replicating someone else’s results. They may even add that the barriers to publishing negative results are too high. The bottom line is that there is little incentive to encourage replication, more so in a narrowing and highly competitive funding ecosystem.

However, some kind of replication happens almost daily in our labs as we adopt techniques described by others and adapt them to our own studies. A lot of time and money can be wasted when the original article does not provide enough detail on the materials and methods. Sometimes authors (consciously or unconsciously) do not explicitly articulate domain-specific tacit knowledge about their procedures, something which may not be easy to resolve. But in other cases, articles simply lack enough detail about the specific reagents used in an experiment, like a catalog number, and this is something we may be able to fix more easily.
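As a purely hypothetical illustration (the fields and the catalog and lot numbers below are mine, not from any paper), the difference is between a description another lab can act on and one they cannot:

# Hypothetical illustration – these values are made up for the example.
ambiguous = {
    "reagent": "anti-GFP antibody",
    "vendor": "Abcam",
}

unambiguous = {
    "reagent": "anti-GFP antibody",
    "vendor": "Abcam",
    "catalog_number": "ab12345",  # made-up catalog number
    "lot_number": "GR0000-1",     # made-up lot number
    "dilution": "1:1000",
}

# Only the second description lets another lab order the exact same reagent.

Only the second description removes the guesswork from replication.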

Making the experiment’s reagents explicit should be quite straightforward, but apparently it is not, at least according to a new study published in PeerJ*. Vasilevsky and her colleagues surveyed articles in a number of journals and from different disciplines and recorded how well the raw materials used in the experiments were documented. In other words, could anyone, relying solely on the information provided in the article, be sure they would be buying the exact same chemical?

Simple enough? Yeah, right.

What their data exposed was a rather sad state of affairs. Based on their sample they concluded that the reporting of “unique identifiers” for laboratory materials is rather poor: they could unambiguously identify only 56% of the resources. Overall, just a little over half of the articles don’t give enough information for proper replication. Look:

[Figure: proportion of uniquely identifiable resources (Vasilevsky et al., 2013)]

But not all research papers are created equal. A breakdown by research discipline and by type of resource shows that some areas and some types of reagent do better than others. Papers in immunology, for example, tend to report better than papers in neuroscience.

So, could the journals for immunology be of better quality or have higher standards than the journals for neuroscience?

The authors probably knew we would ask that, and they beat us to the punch.

(Note: Apparently, the IF does not seem to matter when it comes to the quality of reporting on materials**.)

What I found particularly interesting was that whether a journal had good reporting guidelines didn’t seem to make much of a difference. It appears the problem is more deeply rooted, seeping through the submission, peer review and editorial process. How come neither authors, reviewers nor editors are making sure that the reporting guidelines are followed? (Which, in my opinion, defeats the purpose of having them there in the first place!)

[Figure: resource reporting and journal guidelines (Vasilevsky et al., 2013)]

I am not sure I myself perform much above average (I must confess I am too scared to look!). As authors we may be somewhat blind to how well (or not) we articulate our findings because we are too embedded in the work, missing things that may be obvious to others. Peer reviewers and editors tend to pick up on our blind spots much better than we do. Yet apparently a lot still does not get picked up. Peer reviewers may not be picking up on these reporting issues because they make assumptions based on what is standard in their particular field of work. Editors may not detect what is missing because they rely on the peer-review process to identify reporting shortcomings, especially when the work is outside their field of expertise. But while I can see how not getting it right can happen, I also see the need to get it right.

While I think all journals should have clear guidelines for reporting materials (the authors developed a set of guidelines that can be found here), Vasilevsky and her colleagues showed that having them in place is not necessarily enough. Checklists similar to those put out by Nature [pdf] to help authors, reviewers and editors might help minimise the problem.

I would, of course, love to see this study replicated. In the meantime I might give a go at playing with the data.

*Disclosure: I am an academic editor, author and reviewer for PeerJ and obtained early access to this article.

** no, I will not go down this rabbit hole

Vasilevsky et al. (2013), On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ 1:e148; DOI 10.7717/peerj.148

Brain Hype Fabiana Kubke Aug 31


“Successful human-to-human brain interface” screamed the headlines – and so there I was clicking my way around the internet to read about it.

Those who know me also know that this is the kind of stuff that makes me tick, ever since learning about the pioneering work of Miguel Nicolelis. A bit over a decade ago I first heard of him, a Brazilian scientist working at Duke University in the department where I spent a short tenure before moving to New Zealand. What I heard at the time was that he was attempting to extract signals from a brain and use them to control a robotic arm. I was quite puzzled by the proposition: I had been trained with the idea that each neuron in the brain is important and responsible for taking care of a specific bit of information, so I thought I’d never get to see the idea succeed within my lifetime.

CC-BY Bistrosavage on Flickr

Nicolelis’ paradigm was relatively straightforward. He was to record the activity of a small area of the brain while the animal moved its arm, and identify what was going on in the brain during different arm movements. Activity combination A means arm up, combination B arm down, and so on. He would then use this code to program a robotic arm so that it moved up when combination A was sent to it, down when combination B was sent, and so on. The third step was to connect the actual live brain to the robotic arm, and have the monkey learn that it had the power to move the arm itself.
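In spirit – and only in spirit; this is a toy sketch with simulated numbers, not Nicolelis’ actual pipeline – the decoding step amounts to fitting a map from firing rates to arm movement, and then driving the arm from brain activity alone:

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 10, 200

# Simulated recordings: firing rates and the arm velocities they accompany.
hidden_code = rng.normal(size=(n_neurons, 2))
firing_rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
arm_velocity = firing_rates @ hidden_code

# Steps 1-2: fit a linear decoder from recorded activity to recorded movement.
decoder, *_ = np.linalg.lstsq(firing_rates, arm_velocity, rcond=None)

# Step 3: from now on, brain activity alone produces a movement command.
new_activity = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
command = new_activity @ decoder  # what would be sent to the robotic arm
print(command)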

What puzzled me at the time (and the reason I thought his experiment couldn’t work) was that he was going to attempt to do this by recording the activity of what I could best describe as only a handful of neurons, and with rather limited control over the choice of those neurons. I figured this was not going to give him enough (or even the right) information to guide the movement of the robotic arm. But I was still really attracted to the idea. Not only did I love his deliberate imagination and how he was thinking outside the box, but also, if he was successful, it would mean I’d have to start thinking about how the brain works in a completely different way.

It was not long before word came out that he had done it. He had managed to extract enough code from the brain activity that was going on during arm movements to program the robotic arm, and soon enough he had the monkey control the arm directly. And then something even more interesting (at least to me) happened – the monkey learned that he could move the robotic arm without having to move his own arm. In other words, the monkey had ‘mapped’ the robotic arm into his brain as if it were his own. And that meant it was time to revisit how I thought brains worked.

CC-BY-NC-ND Photo Extremist on Flickr

I followed his work, and then in 2010 got a chance to have a chat with him at SciFoo. It was there that he told me how he was doing similar experiments but playing with avatars instead of real-life robotic arms, how he saw this technology being used to build exoskeletons to provide mobility to paralyzed patients, and how he thought he was close to getting a brain-to-brain interface in rats.

A brain to brain interface?

Well, if the first set of experiments had challenged my thinking I was up for a new intellectual journey. Although by now I had learned my lesson.

I finally got to see the published results of these experiments earlier this year. Again, the proposition was straightforward. Have a rat learn a task in one room, collect the code, send that information to a second rat elsewhere, and see if the second rat has been able to capture the learning. You can read more about this experiment from Mo Costandi here.

So when I heard the news about human to human brain interfaces, I inevitably got excited.

But then….

The paradigm of this preliminary study (which has not been published in a peer-reviewed journal) is simple. One person plays a video game, imagining he pushes a firing button at the right time, while a second person elsewhere actually needs to push the firing button for the game. The activity from the brain of the first person (this time recorded from the scalp surface) is transmitted to the brain of the second person through a magnetic coil (a device that is becoming commonly used to stimulate or inhibit specific parts of the brain).

But is this really a brain to brain interface?

Although the brain code of the first subject ‘imagining’ moving the finger was extracted (much like the Nicolelis group did a decade ago), there is nothing about that code that is ‘decoded’ by the subject pressing the button. That magnetic coils can be used to elicit movement is not new. What part of the body moves depends on where on the head the coil is placed, and the type of zapping that is sent through the coil. So, reading their description of the experiment, it seems that the signal being sent is a simple on/off to the coil, not a motor code in itself. The response from the second subject does not seem to require decoding that signal – it is rather a response to a specific stimulation (not too unlike the kick we give when someone tests our knee-jerk reflex, or closing our eyelids when someone shines a bright light in our eyes).
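Put crudely in code (the threshold is made up, and this is my reading of the setup rather than their actual signal processing), the link reduces to a one-bit trigger:

# A deliberately crude sketch of my reading of the setup (threshold made up):
# whatever structure the sender's EEG has, the receiver's coil only gets on/off.
def coil_should_fire(imagined_movement_signal, threshold=0.8):
    # One bit: fire the coil, or don't.
    return imagined_movement_signal > threshold

for signal in (0.2, 0.5, 0.9):
    print(signal, "->", "zap (finger moves)" if coil_should_fire(signal) else "nothing")

Nothing about the receiver’s movement depends on the content of the sender’s brain activity – only on whether the trigger fired.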

I am also uncertain how much the second subject knows about the experiment, and I can’t help but wonder how much of the movement is self-generated in response to the firing of the coil. Any awake participant whose finger is placed on top of a keyboard key, and who has a piece of metal on their head, wouldn’t take too long to figure out how the experiment is meant to run.

There are a few comments (here and here, for example) from readers identifying these weaknesses, and even Nicolelis himself is quoted as saying it is too early to declare victory.

Which brings me back to the title of this post.

There is nothing wrong with sharing the group’s progress. In fact I think it is great, and I wish more of us were doing this. But I am less clear about what is so novel here, and what it contributes to our understanding of how the brain works, to justify the hype.

This is a missed opportunity. There is value in their press release: here is a group that is sharing preliminary data in a very open way. This in itself is the news, because this is good for science. This should have been the hype.

Did you know?

CC-BY-NC by baboon on Flickr

  • In 1978 a machine-to-brain interface (says Wikipedia) was successfully tested in a blind patient. Apparently progress was hindered by the patient needing to be connected to a large mainframe computer
  • By 2006 a patient was able to operate a computer mouse and prosthetic hand using a brain machine interface that recorded brain activity using electrodes placed inside the brain. Watch the video.
  • In 2009, using brain activity recorded from surface scalp electrodes to control a computer text editor, a scientist was able to send a tweet

References

  • Carmena, J. M., Lebedev, M. A., Crist, R. E., O’Doherty, J. E., Santucci, D. M., Dimitrov, D. F., … Nicolelis, M. A. L. (2003). Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates. PLoS Biol, 1(2), e42. doi:10.1371/journal.pbio.0000042
  • Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., & Nicolelis, M. A. L. (2013). A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information. Scientific Reports, 3. doi:10.1038/srep01319
  • O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. L. (2011). Active tactile exploration using a brain-machine-brain interface. Nature, 479(7372), 228–231. doi:10.1038/nature10489

[Open Science Sunday] Lincoln University’s Open Access Policy is out Fabiana Kubke Jul 28


New Zealand has its first Open Access Policy thanks to Lincoln University. We have been lagging behind in the OA landscape when it comes to tertiary institutions, and Lincoln’s position is a great step.

From their website:

Lincoln University takes the position that if public funding has supported the creation of research or other content then it’s reasonable to make it publicly accessible. So our new Open Access Policy endorses making this content openly and freely available as the preferred option.

That the public should have access to the outputs of the work they fund through their taxes has been a compelling argument behind other international policies. A similar position statement was made in the Tasman Declaration. New Zealand’s NZGOAL, released in 2010, provides a similar framework for State Service Agencies, but tertiary institutions are not included in the framework despite receiving substantial public funding in several forms. It has then been up to the individual universities to decide whether the principles of NZGOAL are adopted. Lincoln University has taken a leadership role for the tertiary sector, and I am hopeful that other NZ institutions will follow their lead.

CC-BY-NC-SA by biblioteekje on Flickr

I have often been asked where the funds to pay for Open Access publishing will come from, at least in relation to the publication of research articles. What we sometimes seem to forget is that we are already paying these costs through the portion of our grant overheads that goes towards library costs for access to and re-use of copyrighted material. In many instances, too, the charges for publication of, say, a colour figure can equal or exceed what it would cost to publish the same article in an Open Access journal. The maths just don’t work for me.

What we also sometimes forget is that most publishers will allow the posting of the peer-reviewed version of the author’s manuscript in their institutional repository. Why researchers aren’t doing this more widely is not very clear.

And here is where Lincoln strikes a nice balance: posting in the institutional repository (aka Green Open Access) comes at no extra financial cost to the individual researcher. It will be interesting to see how the policy is implemented at Lincoln.

But is it enough?

It is a great start.

One of the issues with the Open Access discussion is that copyright (and the resulting licence to reuse) does not always feature prominently in the conversation. I (personally) consider that fronting a fee to a journal to make a paper open access when I still need to transfer the copyright to the journal is a waste of money. There is not much added value between the version of the manuscript that I can place in the repository and the final journal version (other than perhaps aesthetics). I am happy, however, to pay an OA fee when it comes attached to a Creative Commons licence that allows reuse, including commercial re-use, because that is where the true value of Open Access lies. Lincoln University takes a good step by encouraging the use of Creative Commons licences – but in their absence the articles should still be made free to view through the institutional repository.

How is NZ doing in OA?

The articles that are deposited in institutional repositories in New Zealand can be found through nzresearch.org.nz. Today’s search returned 14,273 journal articles. It is unfortunate that the great majority of them (13,986, or about 98%) are “all rights reserved” and only 232 allow commercial reuse. If we really want our research to drive innovation, then we should be doing better.

So where to next?

Lincoln University has taken a great first step, and hopefully the other NZ research institutions will follow. I am also hoping we will start to see a similar move from NZ funding agencies encouraging researchers to adopt the principles of NZGOAL or to place Open Access mandates on their funded research.

Perhaps next time a funding body or organisation asks you to donate money for research to help cure a condition, you might ask them if they have an Open Access policy.

Internet birdfest Fabiana Kubke Jun 01


A few days ago I got an email from a colleague of mine pointing me to a video about birds of paradise. I am happy I went and looked, because it is quite amazing. There is no question why this group of birds stands apart from the others – they are not only beautiful to watch; their behaviour, too, is quite amazing. Watch:

There are other birds that I find absolutely amazing. The lyrebird, for example, incorporates into its song sounds that it hears as it goes about life. There are two types of song-learning birds (songbirds). Some will learn to imitate a song from an adult tutor as they are growing up, and pretty much sing that song as adults. Others can continue to incorporate elements into their song as adults. The lyrebird falls into this last group. But what I find amazing about the lyrebird is not that it incorporates new song elements, but that some of those sounds are not “natural” sounds. Watch:

Lyrebird

Another amazing bird is the New Caledonian crow. A while back, Gavin Hunt (now at the University of Auckland) found that these birds were able to manufacture tools in the wild. They modify leaves and twigs from local plants to make different types of tools, which they then use to get food. This finding spurred a large body of work on bird intelligence. Watch:

And if you are interested in where these wonderful animals all came from, there is a fantastic blog post by Ed Yong over at National Geographic. Read:

The changing science of just-about-birds and not-quite-birds
(HT @BjornBrembs)

I am sure there is a screenplay somewhere in there, inspired by Tron and involving cats chasing birds in cyberspace. But I shall leave that for someone more creative than me.

[Open] Science Sunday – 19-5-13 Fabiana Kubke May 19


2012 was a really interesting year for Open Research.

The year started with a boycott of Elsevier (The Cost of Knowledge), soon followed in May by a petition at We The People in the US asking the US government to “Require free access over the Internet to scientific journal articles arising from taxpayer-funded research.” By June, The Royal Society had published a paper on “science as an open enterprise” [pdf], saying:

The opportunities of intelligently open research data are exemplified in a number of areas of science. With these experiences as a guide, this report argues that it is timely to accelerate and coordinate change, but in ways that are adapted to the diversity of the scientific enterprise and the interests of: scientists, their institutions, those that fund, publish and use their work and the public.

The Finch report had a large share of the media coverage [pdf]:

Our key conclusion, therefore, is that a clear policy direction should be set to support the publication of research results in open access or hybrid journals funded by APCs. A clear policy direction of that kind from Government, the Funding Councils and the Research Councils would have a major effect in stimulating, guiding and accelerating the shift to open access.

By July the UK government announced the support for the Open Access recommendations from the Finch Report to ensure:

Walk-in rights for the general public, so they can have free access to global research publications owned by members of the UK Publishers’ Association, via public libraries. [and] Extending the licensing of access enjoyed by universities to high technology businesses for a modest charge.

The Research Councils UK joined in by publishing a policy on OA (recently updated) that required [pdf]:

Where the RCUK OA block grant is used to pay Article Processing Charges for a paper, the paper must be made Open Access immediately at the time of online publication, using the Creative Commons Attribution (CC BY) licence.

Open Access Definition Cards and Buttons

CC-BY-NC-SA Jen Waller on Flickr

By the time Open Access Week came around, there was plenty to discuss. The discussion of Open Access put a stronger emphasis on the re-use licences under which work is published. It also included some earlier analysis showing that publishing in Open Access has benefits for whole economies:

adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in The Netherlands and EUR 480 million in the UK.

And in November, the New Zealand Open Source Awards recognised Open Science for the first time too.

2013 promises not to fall behind

This year offers good opportunities to celebrate local and international advocates of Open Science.

The Obama administration not only responded to last year’s petition by issuing a memorandum geared towards making federally funded research adopt open access policies, but is now also seeking “Outstanding Open Science Champions of Change”. Nominations for this close on May 14, 2013. Simultaneously, the Public Library of Science, Google and the Wellcome Trust, together with a number of allies, are sponsoring the “Accelerating Science Award Program”, which seeks to recognise and reward individuals, groups or projects that have used Open Access scientific works in innovative ways. The deadline for this award is June 15.

Last year Peter Griffin  wrote:

The policy shift in the UK will open up access to the work of New Zealand scientists by default as New Zealanders are regularly co-authors on papers paid for by UK Research Councils funds. But hopefully it will also lead to some introspection about our own open access policies here.

There was some reflection at the NZAU Open Research Conference, which led to the Tasman Declaration (which I encourage you to sign), and those of us who were involved are hoping good things will come out of it. While that work continues, I will be revisiting the nominations for last year’s Open Science category of the NZ Open Source Awards to make my nominations for the two awards mentioned above.

I certainly look forward to this year – I will continue to work closely with Creative Commons Aotearoa New Zealand and with NZ AU Open Research to make things happen, and continue to put in my 2 cents as an Academic Editor for PLOS ONE and PeerJ.

There is no question that the voice of Open Access is now loud and clear – and over the last year it has also become a voice that is not only being heard, but that is also generating the kinds of responses that will lead to real change.
