At the CogX event I attended a few weeks back, I went to a session by Thomas Reardon, CEO of CTRL-labs. He was talking about neural interfaces and the future of control.
This was a fascinating talk, absolutely cutting edge stuff. When I think about the potential of tech and how it can enable or progress human development, this is the stuff that's going to make it happen.
Thomas gave some really great context for his talk. He said that their research was not focused on the brain itself. They found that when you try to determine which part of the brain is responsible for moving different muscles and body parts, it's not as clear cut as it may seem. Different parts of the brain activate, and you can't localise the activity precisely enough.
So when it comes to creating an ‘interface’ for the brain, they chose not to go near the brain at all. They didn’t want to plug an electrode or connection into a brain and hope it achieves what they want.
Instead, they found that the electrical impulse sent to a muscle can be measured. You can wear a wristband or other attachment to measure electrical activity, and determine exactly when the brain is sending a signal to a muscle for it to be moved. Effectively that signal is a kind of binary – muscle activated / muscle not activated.
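That binary idea can be sketched in a few lines of code. This is purely illustrative – the sample values and threshold are invented, and real EMG processing involves filtering and calibration – but it shows the core notion of turning a measured signal into "activated / not activated":

```python
# Toy sketch: turning a raw EMG-style signal into a binary
# "muscle activated / muscle not activated" stream via a threshold.
# The readings and threshold are made up for illustration only.

def to_activation(samples, threshold=0.5):
    """Map each raw sample's magnitude to True (activated) or False."""
    return [abs(s) > threshold for s in samples]

raw = [0.02, 0.1, 0.9, 1.2, 0.7, 0.05]  # pretend wristband readings
print(to_activation(raw))  # [False, False, True, True, True, False]
```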
What they've done with that knowledge is create, as an example, a VR simulation of your hand. By wearing the wristband on your arm, you can control what the hand is doing in the VR simulation by 'willing' it. The brain sends a signal to the arm to move the muscles in your hand, and the exact same movement is replicated in the VR simulation.
They took this further and asked people with missing fingers or partial limbs to wear the band and see what they could do in the environment. Those people could still control the VR hand as if it were their own. Imagine that. Imagine not having a hand, but by wearing a device you can mimic in a virtual environment the exact movements you want your hand to make.
Thomas highlighted this further by asking someone in the audience to drink from a glass of water. He explained that this 'simple' act is actually incredibly complicated. You are automatically adjusting for multiple factors. Lifting the glass. Moving the glass to your mouth. Drinking from it. Readjusting your grip as the water level drops. Placing it back down so it doesn't smash. And it's all done with very little active cognition. Meaning, I'm not actually thinking about the act of drinking water, I'm just doing it.
What this highlighted was that the complexity of human movement isn't something that can be easily achieved with robots. Through a VR simulation, though, those movements can be captured and reproduced.
Additionally, in the VR environment the force of your virtual grip can be mimicked, i.e. if you wanted to create a tight fist and exert a lot of force, that would be replicated too.
An unexpected result was that the programme could identify the individual wearing the wristband. It recognised the electrical impulses as unique to that person. I think that's pretty fascinating. It essentially means we have another kind of unique identifier in our electrical impulses.
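The identification idea is conceptually similar to other biometric matching: each person's activity yields a characteristic pattern, and a new reading is matched against enrolled profiles. A minimal nearest-neighbour sketch, with entirely invented profiles and feature vectors (real systems learn features from far richer data), might look like:

```python
# Toy sketch of identification from signal patterns: match a new
# feature vector to the closest enrolled profile. The names, vectors
# and distance measure are assumptions for illustration only.
import math

profiles = {
    "alice": [0.8, 0.2, 0.5],
    "bob":   [0.1, 0.9, 0.4],
}

def identify(reading):
    """Return the enrolled name whose profile is nearest (Euclidean)."""
    return min(profiles, key=lambda name: math.dist(profiles[name], reading))

print(identify([0.75, 0.25, 0.5]))  # -> alice
```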
The applications of this kind of tech are most readily understood for people with motor conditions such as ALS or cerebral palsy, or those who have lost limbs. Imagine having a condition like muscle tremors that you can't voluntarily stop, but in a virtual environment you experience none of those involuntary movements.
Thomas was also a very good speaker. He clearly knew his subject, was well studied in neuroscience, and gave a lot of clarity about what he was and was not seeking to research and therefore manipulate.