Mad Max’s Mind, or Consciousness in Self-Driving Cars

We are at the cusp of seeing real consciousness in artificial intelligence. It will show up in self-driving cars. There are other systems in which it could occur, but self-driving cars are the most likely to demonstrate its first common embodiment. I write this as a former AI researcher, well versed in how our brain works, well versed in the machine learning strategies common today, and uniquely versed in what it takes to be conscious.

Consciousness is our shifting awareness of our internal and external environment. I define our awareness as everything we can decide to adapt our behaviour to. I can be aware of a clock and point my finger at it, tell the time, or walk over to it. If I’m not aware of it I can’t decide to do such things. We can adapt our behaviour to things we are not aware of, but then we can’t really decide, so we can’t make a conscious decision.

Our awareness shifts and varies from second to second. Sometimes we are not aware of most external things except those that validate the routine we are executing. We may shift our awareness to where we think the most opportunity lies: inward, to think about something interesting, or outward, if we see signs of a threat or something we may want to go after. There is a lot going on in what we experience as a result of our behaviour, too much to discuss here, but we can now use a self-driving car as a full analogy for awareness and conscious decision making.

A self-driving car has a lot in common with a human. It has ARGO, the acronym for autonomous robust goal orientation, which is the principal definition of intelligence. Robustness describes how easily the system is disturbed in its function. It seems a strange quality for intelligent systems, but it is expressed in living creatures having scales, fangs, aggressive behaviour and hiding behaviour, and also in the fact that most thoughts involve many redundant neuronal pathways. It does not take intelligence to hide, but a system that decides to hide can more easily continue to be intelligent. Try to get to the AI in a moving Tesla. Not easy. This makes AI dangerous in the same way a rhino is dangerous.

Until now, Teslas have been mapping out the cars in front of and around them and mapping the route to the requested destination. Route optimization would tell the car to avoid traffic jams or road construction. This is all procedure: algorithms that churn out their best answer. Can the Tesla decide what route to act on? No, it can only suggest routes to the driver, who then tells it to take one of the suggestions. But now there is a mode, Mad Max mode, in which the car tries to get through traffic fast. This is a high-level incentive for the car to engage with its surroundings. The Tesla is asked to make decisions about the situation it is aware of to achieve a goal beyond just getting from A to B.

With Mad Max mode the Tesla now wants something. It wants to go fast, but it does not have all the information it needs to do that. It is limited by the range of its sensors. The trick to aware intelligence is to allow it to explore opportunities: to choose them and scope them out. What I mean is that if the Tesla senses there is space between two cars ahead but can’t see for sure whether that’s where it can sneak forward, it must accelerate to bring the opportunity within range and scope it out. This means it has decided to alter its awareness because doing so lets it orient better towards its goal (being fast).
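
As a minimal sketch of that scoping move, assuming made-up names and a made-up sensor range (none of this is Tesla’s actual stack), the decision could look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Gap:
    distance_m: float   # how far ahead the suspected gap lies
    confirmed: bool     # do the sensors confirm it is real?

SENSOR_RANGE_M = 80.0   # assumed reliable range of the forward sensors

def next_action(gap: Optional[Gap]) -> str:
    if gap is None:
        return "hold_lane"          # nothing suspected: keep the routine
    if gap.confirmed:
        return "merge_into_gap"     # opportunity verified: act on it
    if gap.distance_m > SENSOR_RANGE_M:
        # The gap may exist but lies beyond reliable sensing, so the
        # car accelerates purely to bring it within range, i.e. it
        # decides to alter its own awareness.
        return "accelerate_to_scope"
    return "hold_lane"              # in range but unconfirmed: wait
```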

With the option to scope out suspected opportunities, the Tesla will behave very much like a human would, and just as shiftily. The process, as it occurs in humans, is that partial information triggers a vision of a future containing something of value, and this value is initially set pretty high. In the human brain it’s a dopamine spike that releases our motor system to behave freely; without it we seize up, which is the safest thing to do in most cases. So the Tesla will rank opportunities even when it has only partial data. Then it will estimate the cost of scoping one of them out. Then, if the cost is acceptable, for instance in terms of risk relative to other vehicles, it will make its move to learn more. Once it has learned the true value of the opportunity, it can respond automatically.
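
That ranking process could look something like the following toy version. The optimistic initial value standing in for the dopamine spike, and the cost threshold, are my own illustrative constructs, not anything Tesla has published:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Opportunity:
    name: str
    optimistic_value: float   # initial, deliberately high estimate
    scouting_cost: float      # estimated risk of moving in to scope it out

def pick_opportunity(candidates: List[Opportunity],
                     max_cost: float) -> Optional[Opportunity]:
    # Rank by the optimistic value even though the data is only partial...
    ranked = sorted(candidates, key=lambda o: o.optimistic_value, reverse=True)
    for opp in ranked:
        # ...but only commit to scoping one out if its cost is acceptable.
        if opp.scouting_cost <= max_cost:
            return opp
    return None   # no affordable opportunity: stay in the current routine

# Example: a wide but risky gap loses to a modest, cheap one.
best = pick_opportunity(
    [Opportunity("wide_gap", 0.9, 0.8), Opportunity("small_gap", 0.6, 0.2)],
    max_cost=0.5,
)   # returns the "small_gap" opportunity
```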

The purpose of our awareness is to condition an automatic response. For much of our behaviour, even when the triggers are varied, we develop these responses and chain them together. If we can’t tell how to respond, or, like the Tesla, we see no real opportunity, awareness kicks in and amplifies some part of our environment so we orient towards it and learn what we need to either force a response or abandon our attention to it. For this to occur in AI systems, they need to be in a real environment and have an embodiment that can move about so the sensory input varies. Tesla cars are such embodiments.
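
In code, the split between cached automatic responses and awareness could be sketched like this, with investigate() as a hypothetical stand-in for the whole orienting-and-learning step:

```python
from dataclasses import dataclass

AUTOMATIC_RESPONSES = {          # trigger -> learned, automatic response
    "brake_lights_ahead": "slow_down",
    "lane_clear": "resume_speed",
}

@dataclass
class Finding:
    worth_responding: bool
    response: str = "ignore"

def investigate(stimulus: str) -> Finding:
    """Stand-in for the awareness step: orient towards the stimulus,
    gather more sensor data, and judge whether a response is needed."""
    return Finding(worth_responding=False)   # placeholder outcome

def react(stimulus: str) -> str:
    if stimulus in AUTOMATIC_RESPONSES:
        return AUTOMATIC_RESPONSES[stimulus]   # routine: no awareness needed
    finding = investigate(stimulus)            # awareness kicks in
    if finding.worth_responding:
        # Condition a new automatic response for next time.
        AUTOMATIC_RESPONSES[stimulus] = finding.response
        return finding.response
    return "ignore"                            # abandon attention to it
```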

If you adapt Navigate on Autopilot so it will do what I describe above, you can let it report and it will go on like: “I don’t see much I can do, oh wait, there is some space, let me check it out. Wow, this is enough, I can slip into this space, let’s move on. This other road is quieter, let me take this off-ramp…” Is it general-purpose AI? No, but it is aware. The question for Tesla is how they implemented it. This is where the crux of AI will lie, because it’s obvious that being able to raise stimuli to awareness is the key to its function, and not every system is capable of the required flexibility, nor do most researchers know how to achieve this. Luckily.
