“Successful human-to-human brain interface” screamed the headlines – and so there I was clicking my way around the internet to read about it.
Those who know me also know that this is the kind of stuff that makes me tick, ever since I learned about the pioneering work of Miguel Nicolelis. A bit over a decade ago I first heard of him, a Brazilian scientist working at Duke University in the department where I spent a short tenure before moving to New Zealand. What I heard at the time was that he was attempting to extract signals from a brain and use them to control a robotic arm. I was quite puzzled by the proposition; I had been trained with the idea that each neuron in the brain is important and responsible for a specific bit of information, so I thought I’d never get to see the idea succeed within my lifetime.
Nicolelis’ paradigm was relatively straightforward. He would record the activity of a small area of the brain while the animal moved its arm, and identify what was going on in the brain during different arm movements. Activity combination A means arm up, combination B arm down, etc. He would then use this code to program a robotic arm so that it moved up when combination A was sent to it, down when combination B was sent, and so on. The third step was to connect the actual live brain to the robotic arm, and have the monkey learn that it had the power to move the arm itself.
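The real decoding is of course far more sophisticated, but the core of the idea can be sketched as a simple lookup from recorded activity patterns to arm commands (the pattern names and commands below are purely illustrative, not anything from the actual experiments):

```python
# Illustrative sketch of the decode-then-drive idea:
# map recorded brain-activity patterns to robotic-arm commands.
# Pattern names and commands are hypothetical placeholders.

# Steps 1-2: identify the patterns, then program the arm with the mapping
activity_to_command = {
    "combination_A": "arm_up",
    "combination_B": "arm_down",
}

def drive_robotic_arm(recorded_activity):
    """Translate a decoded activity pattern into an arm command."""
    # Unrecognized activity leaves the arm where it is.
    return activity_to_command.get(recorded_activity, "hold_still")

# Step 3: the live brain now drives the arm through the same mapping
print(drive_robotic_arm("combination_A"))  # arm_up
print(drive_robotic_arm("combination_B"))  # arm_down
```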
What puzzled me at the time (and the reason that I thought his experiment couldn’t work) was that he was going to attempt to do this by recording the activity from what I could best describe as only a handful of neurons, and with rather limited control over the choice of those neurons. I figured this was not going to give him enough (or even the right) information to guide the movement of the robotic arm. But I was still really attracted to the idea. Not only did I love his deliberate imagination and how he was thinking outside the box, but also, if he was successful, it would mean I’d have to start thinking about how the brain works in a completely different way.
It was not long before word came out that he had done it. He had managed to extract enough code from the brain activity that was going on during arm movements to program the robotic arm, and soon enough he had the monkey control the arm directly. And then something even more interesting (at least to me) happened – the monkey learned that it could move the robotic arm without having to move its own arm. In other words, the monkey had ‘mapped’ the robotic arm into its brain as if it were its own. And that meant it was time to revisit how I thought brains worked.
I followed his work, and then in 2010 got a chance to have a chat with him at SciFoo. It was there that he told me how he was doing similar experiments but playing with avatars instead of real-life robotic arms, how he saw this technology being used to build exoskeletons to provide mobility to paralyzed patients, and how he thought he was close to getting a brain to brain interface in rats.
A brain to brain interface?
Well, if the first set of experiments had challenged my thinking I was up for a new intellectual journey. Although by now I had learned my lesson.
I finally got to see the published results of this experiment earlier this year. Again, the proposition was straightforward. Have a rat learn a task in one room, collect the code, send that information to a second rat elsewhere, and see if the second rat had been able to capture the learning. You can read more about this experiment from Mo Costandi here.
So when I heard the news about human to human brain interfaces, I inevitably got excited.
The paradigm of this preliminary study (which has not been published in a peer-reviewed journal) is simple. One person plays a video game, imagining pushing the firing button at the right time, while a second person elsewhere actually needs to push the firing button for the game. The activity from the brain of the first person (this time recorded from the scalp surface) is transmitted to the brain of the second person through a magnetic coil (a device that is becoming commonly used to stimulate or inhibit specific parts of the brain).
But is this really a brain to brain interface?
Although the brain code of the first subject ‘imagining’ moving the finger was extracted (much like the Nicolelis group did back a decade ago), there is nothing about that code that is ‘decoded’ by the subject pressing the button. That magnetic coils can be used to elicit movement is not new. Which part of the body moves depends on where on top of the head the coil is placed, and on the type of zapping that is sent through the coil. So, reading their description of the experiment, it seems that the signal being sent is a simple on/off trigger to the coil, not a motor code in itself. The second subject does not seem to need to decode that signal – rather, he is responding to a specific stimulation (not too unlike the kicking we do when someone tests our knee jerk reflex, or closing our eyelids when someone shines a bright light at our eyes).
I am also uncertain of how much the second subject knows about the experiment, and I can’t help but wonder how much of the movement is self-generated in response to the firing of the coil. Any awake participant whose finger is resting on a keyboard key, and who has a piece of metal on their head, wouldn’t take too long to figure out how the experiment is meant to run.
Which brings me back to the title of this post.
There is nothing wrong with sharing the group’s progress. In fact I think it is great, and I wish more of us were doing this. But I am less clear about what is so novel, and what it contributes to our understanding of how the brain works, to justify the hype.
This is a missed opportunity. There is value in their press release: here is a group that is sharing preliminary data in a very open way. This in itself is the news, because this is good for science. This should have been the hype.
Did you know?
- In 1978 a machine to brain interface (says Wikipedia) was successfully tested in a blind patient. Apparently progress was hindered by the patient needing to be connected to a large mainframe computer.
- By 2006 a patient was able to operate a computer mouse and prosthetic hand using a brain machine interface that recorded brain activity using electrodes placed inside the brain. Watch the video.
- In 2009, using brain activity recorded from surface scalp electrodes to control a computer text editor, a scientist was able to send a tweet.
- Carmena, J. M., Lebedev, M. A., Crist, R. E., O’Doherty, J. E., Santucci, D. M., Dimitrov, D. F., … Nicolelis, M. A. L. (2003). Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates. PLoS Biol, 1(2), e42. doi:10.1371/journal.pbio.0000042
- Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., & Nicolelis, M. A. L. (2013). A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information. Scientific Reports, 3. doi:10.1038/srep01319
- O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. L. (2011). Active tactile exploration using a brain-machine-brain interface. Nature, 479(7372), 228–231. doi:10.1038/nature10489