The Missing Link in the Development of AI

Years ago I was a scientist working on understanding the brain. I wanted to know how we process information and was given carte blanche to read every paper on brain physiology, every theory around. I built computer simulations of neural networks, not the common ones but more detailed ones with ion channels. I simulated learning and forgetting as a result of our emotional state.

A magnonic holographic memory device

What I concluded then was that any approach to AI based on logic or computational analogies was doomed to fail, because logic is a special case of behavior, a class, so to say, of perfect percepts that kind of hijacks our brain. For example, most things we see around us are more or less recognizable, but words, written down, are always perfectly recognizable; they are always that specific word. You will not see “Tree” written on a piece of paper and have an impression of anything other than the word “Tree”. This all-or-nothing kind of perception and action constitutes a small niche in our behavioral ‘space’. Most of our behavior and perception is vague and unreliable and not driven by any logic.


It is not like a computer

Looking at our brain and how we learn, it became clear that there are some real challenges you never think of when you program a PC or build a website. Our brain does NOT know, and has NO WAY to know, what is important to its survival. It does not know it wants to survive. It is part of a larger system and it has no idea what is going on outside. This is what it means when people say “There is no homunculus”. There is no interpreter inside our brain that decides what we pay attention to. We are really just a very complex mechanism that manages not to destroy itself and therefore exists for as long as it may last.

At the time I visited Daniel Dennett at Tufts University and was unimpressed. He spoke a lot in analogies, which did not tell me what I wanted to know: how does it work? Telling someone the brain is like a Swiss cheese or whatever doesn’t tell me what it is exactly. Analogies are circular; they suggest an attitude towards something by comparing it to something else. Dennett was able to inspire many people to think about what we are, which I think is certainly good because it makes us more humble.

The frustrating thing about our brain is that it really doesn’t lend itself to easy understanding. There is a huge advantage in that; it may even be one of the most important factors in our survival that we are not capable of readily hacking our own brain. When that is done, for instance by giving rats control over their own reward centers, it is very destructive. Heroin addicts are people who can circumvent their own reward systems. They know a shortcut and their brain does not let them take any other way. I predict that the downfall of all real AI is its ability to hack itself.


Brain activity can be correlated with arm movements, such that a person with a neural implant can control a robot arm.

When we look inside a brain we see neurons, glial cells, all kinds of dendrites and constant activity. All the time our brain cells are stimulating each other with impulses, spikes; turned into sound it would be like the bustle of Grand Central Station. The sound changes into different rhythms when we sleep, or do something specific, and then returns when we rest again. Even when we are doing specific things the chatter appears random. One can analyze the neurons in our motor cortex (in the frontal lobe, just in front of the central sulcus) and extract our arm movements statistically, but what you would hear is just a chaos of spikes. Part of this is because our brain (the top, or neocortex) only does part of the job, part is because many neurons partake in the same jobs (so we can lose some if we have to), and part of it is the way our brain works: it does not know how to organize itself, because it does not know anything about what it is supposed to do.
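To give a flavor of what “extract our arm movements statistically” looks like, here is a minimal sketch on entirely synthetic data: fifty noisy model neurons whose spike counts depend weakly on a hidden arm velocity, and a least-squares decoder that recovers that velocity. Real implant decoders are far more elaborate; the linear tuning model and every number here are assumptions for illustration only.

```python
# Toy sketch: decoding arm velocity from motor-cortex-like spike counts.
# All data is synthetic; the point is that a statistical fit can pull a
# signal out of what sounds like a chaos of spikes.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 1000

# Hidden "true" arm velocity (x, y) over time.
velocity = rng.standard_normal((n_samples, 2))

# Each neuron fires at a rate that depends weakly on velocity, plus a
# baseline and Poisson noise.
tuning = rng.standard_normal((2, n_neurons))
rates = velocity @ tuning + 5.0
spikes = rng.poisson(np.clip(rates, 0, None)).astype(float)
spikes -= spikes.mean(axis=0)            # remove the baseline firing

# Least-squares fit: spike counts -> velocity.
weights, *_ = np.linalg.lstsq(spikes, velocity, rcond=None)
decoded = spikes @ weights

corr = np.corrcoef(decoded[:, 0], velocity[:, 0])[0, 1]
print(f"correlation between decoded and true x-velocity: {corr:.2f}")
```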

Listening to one neuron firing regularly

Granted, we have specialized brain regions and sensory systems, so in fact our brain gets a head start at processing information that is relevant to our specific organism in our environment. If it fails, we die. For instance we have vocal cords, areas in the brain that control them, and others where we recognize words and language. We are born into a world where our parents speak, and even if they don’t we have a talent for language behavior.

Other species have other brains. Some are highly differentiated (and so conducive to specific behavior) and others don’t seem to be (like that of a sea turtle, just a big mess really). Looking at these differences one can start to get an idea of what our brain really does. And this brought me to my theory at the time; it was called the Entrance Identity or Liquid Basin theory of cognition.

Entrance Identity theory / Liquid Basin theory is about allowing chaotic activity to capture and recognize itself, without requiring it to be ‘human readable’, as a mechanism of cognition

The Liquid Basin theory of cognition focuses on what a typical pyramidal neuron does in our brain: it recognizes a brain state. It is built to ‘fire’ when it receives spikes from other neurons and the number of spikes it gets moves over a certain threshold. It can take a snapshot of such input, which as mentioned above can look completely chaotic to the outside observer. That doesn’t matter, because 1. the same outside situation will cause approximately the same chaos, and 2. if there are neurons that respond to part of the chaos in a predictable way, the chaos will become more recognizable to other neurons. If this happens the system of neurons will leave chaos and start to behave at specific rhythms, which both allows individual neurons to be heard and helps others to program themselves for recognition. This is the learning state.
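A minimal sketch of the snapshot idea: a single model neuron stores one chaotic input pattern in its weights and afterwards fires whenever roughly the same chaos recurs. The 80% overlap threshold and the noise levels are my illustrative choices, not part of the theory.

```python
# Toy sketch: a neuron takes a one-shot 'snapshot' of a chaotic input
# pattern and later fires when a similar pattern appears again.
import numpy as np

rng = np.random.default_rng(1)
n_inputs = 200

# A chaotic-looking brain state: which presynaptic neurons spiked.
snapshot = rng.random(n_inputs) < 0.1    # ~10% of inputs active

weights = snapshot.astype(float)         # one-shot snapshot of the input
threshold = 0.8 * weights.sum()          # fire on ~80% overlap

def fires(pattern):
    """The neuron fires if enough of its snapshot inputs are active."""
    return weights @ pattern >= threshold

same = snapshot.copy()
similar = snapshot ^ (rng.random(n_inputs) < 0.02)   # a bit of noise
unrelated = rng.random(n_inputs) < 0.1               # different chaos

print(fires(same), fires(similar), fires(unrelated))  # typically: True True False
```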

Recognition happens when our brain is able to self-organize its activity. It can do so because it programmed itself on a previous occasion.

So our brain is able to pick itself up from a chaotic state into an organized state because its neurons learn what the chaos looks like. The result is that we perform similar behavior in similar situations if (and this is a big if) our body sends the signal that we are doing well. So the organizing above is conditional on our reward centers giving the green light for learning, mainly through dopamine. We can also ‘forget’, which is mediated by serotonin, and of course this is a gross simplification. If the neurons in our brain can pick up the chaotic signature of outside input and pull the brain towards an organized state, we can say we have recognized something. This is an extremely important aspect of cognition, the most important. It enables us to be goal oriented, even robustly goal oriented, which is my definition of intelligence (not awareness yet). We are ARGO: Autonomous, Robust, Goal-Oriented organisms.
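As a toy illustration (not the model I built back then), one way to sketch “organize only when reward gives the green light” is a Hopfield-style attractor network whose Hebbian learning is gated by a dopamine-like signal. The network size and noise level are arbitrary.

```python
# Toy sketch: a Hopfield-style network that only learns when a
# dopamine-like reward signal is on, then pulls noisy (chaotic) states
# back to the stored (organized) state.
import numpy as np

rng = np.random.default_rng(2)
n = 100
W = np.zeros((n, n))

def learn(pattern, dopamine):
    """Hebbian update, applied only when the reward signal is on."""
    global W
    if dopamine:
        W += np.outer(pattern, pattern) / n
        np.fill_diagonal(W, 0.0)

def recall(noisy, steps=20):
    """Pull a noisy state towards the nearest stored attractor."""
    s = noisy.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

stored = rng.choice([-1.0, 1.0], size=n)
learn(stored, dopamine=True)              # reward: pattern is written in
ignored = rng.choice([-1.0, 1.0], size=n)
learn(ignored, dopamine=False)            # no reward: nothing changes

noisy = stored.copy()
flip = rng.choice(n, size=15, replace=False)
noisy[flip] *= -1                         # corrupt 15% of the state
print("recovered:", np.array_equal(recall(noisy), stored))
```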

So-called ‘grandmother cells’ are neurons that are singularly sensitive to one specific percept (like your grandmother); they are theoretical. Most neurons seem to express an element of a data-compression lexicon (if that means anything to you), so that combined they can represent an approximate percept, like the tiles of a JPEG image.
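To make the JPEG-tile picture concrete: in the sketch below no single basis element is the image (no grandmother cell), but a handful of them combined approximate it. It uses the 2D discrete cosine transform, the basis JPEG itself builds on; the patch and the number of tiles kept are arbitrary choices of mine.

```python
# Toy sketch: a distributed code. An 8x8 patch is approximated by a small
# set of DCT 'tiles' acting together, none of which is the patch itself.
import numpy as np
from scipy.fft import dctn, idctn

# A smooth stand-in for an 8x8 image patch (a diagonal ramp).
patch = np.add.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8)) / 2

coeffs = dctn(patch, norm="ortho")       # each coefficient = one 'tile'
k = 16                                   # keep only the 16 strongest tiles
cutoff = np.sort(np.abs(coeffs).ravel())[-k]
sparse = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)

approx = idctn(sparse, norm="ortho")
err = np.abs(approx - patch).mean()
print(f"mean reconstruction error with {k}/64 tiles: {err:.4f}")
```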

Picking up signals from the chaos is clearly not something a computer easily does. It really doesn’t like chaos at all. It likes to know what is going on: zeros or ones, or the ‘syntax error’ is sounded. Even making a computer act like a chaotic neuronal system is not easy: it needs to sequentially run through each neuron, calculate what happens to it, and conclude whether it fires and what state it will be in, for billions of neurons and trillions of connections between them. Dedicated systems have been built for the task, but until now their capacity has been small. The quest would be for a system that, from a state sensitive to all possible inputs, can avalanche quickly towards one outcome state.
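Here is a bare sketch of what “sequentially run through each neuron” means in practice: a leaky integrate-and-fire loop where every timestep touches every neuron and every connection. At a thousand neurons this is instant; at brain scale (on the order of 10^11 neurons and 10^14 synapses) a serial machine is hopeless. All parameters are arbitrary.

```python
# Toy sketch: one timestep of a spiking network costs a full sweep over
# all neurons and all connections -- the serial bottleneck in simulating
# chaotic neuronal systems.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
weights = rng.standard_normal((n, n)) * 0.05
voltage = rng.random(n)
threshold, leak = 1.0, 0.9

def step(spiking):
    """One sweep: integrate input, leak, fire above threshold, reset."""
    global voltage
    voltage = leak * voltage + weights @ spiking   # n*n work every step
    fired = voltage >= threshold
    voltage[fired] = 0.0
    return fired.astype(float)

spiking = (rng.random(n) < 0.05).astype(float)
for t in range(5):
    spiking = step(spiking)
    print(f"step {t}: {int(spiking.sum())} neurons fired")
```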

Grandmothers are recognized by many different neuronal areas whose activity is in turn recognized by other neurons.

It seems the device that can do that is here. It is called a “magnonic holographic memory device”. It is being developed in a collaboration between the University of California, Riverside’s Bourns College of Engineering and the Russian Academy of Sciences.

“The most appealing property of this approach is that all of the input ports operate in parallel”

It has the property that one can offer an input pattern in parallel and have it sweep into one of several stable states in 100 nanoseconds. That is roughly a million times faster than our brain, which needs about 100 milliseconds to recognize something visually. This is similar to having a neuron that knows what to listen to inside the chaotic environment of our brain.

Like the memristor, this new device opens up possibilities, for instance to build a complex recognition system without classic CPUs that can instantly differentiate between many possible input states and suggest ‘behavior’ when implemented in some kind of robot. Of course such a system can feed back on itself, either through simulation or through reality, and become a super quick intelligent system. Why intelligent? Because it can be programmed to adjust itself to any situation so that it achieves its ‘goals’ (which initially will be programmed directly). I have no idea how these devices are programmed right now, but if there is some kind of learning algorithm involved one can imagine it being driven by evaluation of the outcome (hopefully human evaluation).
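Since I don’t know how these devices are programmed, the sketch below is only a generic picture of outcome-driven adjustment: perturb the system’s parameters at random and keep a change when the evaluation of the outcome improves. Nothing here is device-specific, and the evaluation function is a stand-in for whatever (ideally human) judgment scores the outcome.

```python
# Toy sketch: outcome-driven adjustment as plain hill climbing. Perturb
# parameters, keep the change only if the outcome evaluation improves.
import numpy as np

rng = np.random.default_rng(5)

def evaluate(params):
    """Stand-in for outcome evaluation; here: closeness to a hidden goal."""
    goal = np.array([0.7, -0.3, 0.1])
    return -np.linalg.norm(params - goal)

params = np.zeros(3)
score = evaluate(params)
for _ in range(200):
    candidate = params + rng.normal(scale=0.1, size=3)
    candidate_score = evaluate(candidate)
    if candidate_score > score:           # keep only improvements
        params, score = candidate, candidate_score

print("adjusted parameters:", np.round(params, 2))
```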

Recognizing situations and initiating the actions that bring it closer to its goals is all an intelligent system does

I think these magnonic holographic devices are the missing link for real AI, because they do what we do at incredible speed: they allow massively parallel input, as we receive it, and seem to offer the outcome of their recognition to logical manipulation (which for the time being can be handled by normal computers). They are something to watch closely, because the next time you’re on a battlefield and the drone overhead knows all your moves, it most likely carries this technology.