We wrote this post before, but it somehow disappeared. So this version will be a bit shorter.
Elon Musk's company Neuralink is developing a system to safely and rapidly implant electrodes in the human brain. A small hole is made in the skull and very thin, flexible threads are 'sewn' into the brain tissue underneath. This system is a major improvement over the original method, which involved rigid needles that would cut through the neural tissue as the monkey moved around. The brain is not very stiff.
We saw the first example of this in the late '90s, when the Brazilian scientist Miguel Nicolelis managed to let a monkey control a robot arm using only its brain. It took a while for the monkey to get the hang of it: at first it also moved its real arms, but eventually it could control all three arms (its own two plus the robot arm) separately.
This milestone taught us a lot, because it turned out that you could read out the arm position using only 16-64 electrodes (each picking up multiple neurons). That says a lot about how the brain encodes things, because the regions that are ultimately active when you move an arm contain billions of neurons.
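As a toy illustration of why a few dozen electrodes can suffice: if each electrode's firing rate varies roughly linearly with arm position, a plain linear readout already recovers the position well. Everything below (electrode count, tuning, noise level) is invented for illustration; this is not the decoder used in the actual experiments.

```python
# Toy sketch: decoding 2-D arm position from a few dozen electrodes
# with a linear readout. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_electrodes = 32    # within the 16-64 range mentioned above
n_samples = 500

# Pretend each electrode's firing rate depends linearly on arm position (x, y)
true_tuning = rng.normal(size=(n_electrodes, 2))
positions = rng.uniform(-1, 1, size=(n_samples, 2))   # arm positions
rates = positions @ true_tuning.T + 0.1 * rng.normal(size=(n_samples, n_electrodes))

# Fit the decoder by least squares: positions ≈ rates @ W
W, *_ = np.linalg.lstsq(rates, positions, rcond=None)

decoded = rates @ W
err = np.abs(decoded - positions).mean()
print(f"mean absolute decoding error: {err:.3f}")   # small compared to the ±1 range
```

The point of the sketch is that the redundancy across many tuned neurons lets a handful of noisy channels pin down a low-dimensional quantity like arm position.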
Elon Musk hopes to increase the "bandwidth" between humans and computers through the neural implants. He fears that artificially intelligent systems will overpower humans, and that the only way to stay on top is to somehow enter into a symbiosis with them. We are already semi-android, because we use our mobile devices continuously and depend so much on digital systems.
We have studied neuroscience for a decade, highly focused on understanding what the brain actually does, how it works, so to say. This hopefully gives us an interesting perspective on the risks and opportunities of the Neuralink technology.
Every brain is different. We all develop in our own unique way: due to genetics, factors during pregnancy (such as air pollution and alcohol), and early development, every brain starts out with different sensitivities. So one cannot expect to plug into a brain and read out data as if it were an Ethernet connection. For every Neuralink connector, the subject needs to teach the system the relationship between what the connector reads and what the subject thinks or says.
In fact, it is highly unlikely a Neuralink system can read from regions that are not sensory or motor, i.e. anything beyond what you hear, feel, or see, or which action you wish to execute right now. This is because, whereas the primary senses are mapped quite predictably onto specific cortex regions, more abstract concepts can be almost anywhere in the associative areas. They will certainly be in different places for different people, and it is likely to take a lot of time to train the Neuralink to recognize them.
The difficulty of mapping more abstract concepts also becomes an issue when you try to connect two people through their Neuralinks. One can imagine connecting the Broca's area of one person, which drives speech production, to the Wernicke's area of another, which interprets speech; at that point each could know what the other is saying. Still, this would be very invasive (two Neuralinks each). AirPods would be a cheaper solution.
No homunculus and brain plasticity
To understand how to think about the possibilities of Neuralink, we need to understand a bit more about the brain. An important aspect of how it functions is that it does not know what it does, or what it is: it's just a bunch of neurons that actually compete with each other to be useful. A neuron that gets few inputs will become more sensitive and grow until it finds itself in regular use. This 'plasticity' is the reason why our brains survive all the small damage we do to them every day. The brain also has stem cells that will repair damage for as long as you have them. A temporary change in environment can trigger significant changes in how colors are encoded. If you start driving an Uber, your brain will adapt to facilitate navigation. Every time you learn something, your brain basically rewires itself.
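The "a neuron that gets few inputs becomes more sensitive" part can be caricatured in a few lines: a gain that grows whenever the neuron fires below some target rate. This is a crude stand-in for homeostatic plasticity, with arbitrary numbers, not a biophysical model.

```python
# Toy sketch of homeostatic plasticity: an under-used neuron scales its
# sensitivity (gain) up until it finds itself in regular use.
target_rate = 5.0   # the firing rate the neuron "wants" (arbitrary units)
gain = 1.0          # the neuron's sensitivity to its input
lr = 0.05           # adaptation speed

weak_input = 1.0    # this neuron gets only weak input
for _ in range(500):
    rate = gain * weak_input
    gain += lr * (target_rate - rate)   # under-driven -> grow more sensitive

print(f"final gain: {gain:.2f}")   # settles where the rate matches the target
```

The same loop run with a strong input would settle on a small gain: the neuron regulates itself toward being useful, with no homunculus telling it where its input comes from.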
If you add Neuralink input to a brain, the neurons will not know where the signal comes from. If you put the electrodes in a color-recognition region, the subject will likely see colors, or rather, experience them. There are no senses for pain or touch inside the brain, so for the brain the activity of those neurons can have only one explanation: there's a color out there.
If you add a Neuralink to Wernicke's area (which we use for the interpretation of speech), you are likely to have the experience of someone saying something specific. You may even hear a particular person (someone you know, or yourself) say it.
It may turn out that if you input activity into an associative region, and the subject can control it, you end up with the option of adding senses. When the subject asks for it, the system inputs signals into the auditory cortex that signify the state of a server, or whether there is someone behind them. This would then come with an associated sound (which can be anything) that the subject learns to recognize. For that particular purpose, however, a backward-facing radar feeding into a headset would do as well. The take-home message is that any input will simply be integrated into the experience: the neurons in the targeted region will work with whatever is put in. This leads to a possible problem.
Ignoring input may be impossible
The way our brain works, a region that is active silences other regions. This mechanism fails in epileptic patients, who have runaway activity in cortical regions during an attack. The inhibition is both local and lateral, i.e. between the left and right brain halves. You can interpret it as a kind of sending state of a brain region, in which it determines the activity elsewhere. The sending region is always the one with the most activity, so any region with high activity (allowed through by local inhibition) will dominate all the others. It is easy to see that if you put enough power into a Neuralink, you will dominate the brain of the subject. There are other, more interesting regions to connect a Neuralink to that could enable even stronger control, but the awareness of a person receiving sufficiently strong Neuralink input will be filled with the modality where the Neuralink is planted. So there is a potential for abuse.
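The "most active region dominates" mechanism can be sketched as a toy winner-take-all network: each region's activity inhibits the others, and with strong enough lateral inhibition only the initially strongest region survives. The numbers are arbitrary; this illustrates the dynamic, not actual cortex.

```python
# Toy winner-take-all via lateral inhibition: four competing "regions",
# each driven by its own activity and suppressed by everyone else's.
import numpy as np

activity = np.array([0.50, 0.60, 0.55, 0.50])  # region 1 starts slightly ahead
inhibition = 2.0                               # strong lateral inhibition

for _ in range(200):
    others = activity.sum() - activity          # total activity elsewhere
    drive = activity - inhibition * others      # self-drive minus inhibition
    activity = np.clip(activity + 0.1 * drive, 0.0, 1.0)

print(activity)   # only the initially strongest region remains active
```

In this caricature, a Neuralink pumping enough current into one region is just another way of handing that region the head start, after which the competition does the rest.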
Knowing the incredible connectivity of the brain and its plasticity, one can also imagine another use of Neuralink, which is simply to use the compute power of a person's brain, like a deep learning neural network of sorts. Plug two Neuralinks into one brain, send with one and read with the other. The person in question may have strange experiences, and for it to work their awareness will be dominated by the induced activity. This is not very likely to work, and it would at the very least mean serious discomfort for the subject.
So we think that expecting Neuralink to increase the bandwidth between man and machine is optimistic. Its ability to receive control signals has been proven, and this can lead to human-controlled exoskeleton cyborgs. Maybe the sensory input will also become sophisticated enough that the subject can hear and sense (while also controlling the sensing devices), much like we imagine ourselves pointing our ears and nose and squinting our eyes; that could become scanning frequencies, peering into the infrared, or checking server statuses and other flags.
Beyond our attentional capacity, that is, our ability to comprehend a situation and place the input signals in their appropriate context, it is unlikely that a direct input into the brain would enhance things, unless it became abusive: think of a rogue AI using Neuralink to control humans, or a lab using captives with Neuralinks to do difficult computations. The development of this technology will be a blessing for paraplegics and can enhance the human ability to deal with complex environments, but it also has some potential for being abused.