AI and Aggression

AI is on the brink of becoming real. Many are still distracted by machine learning, which does not provide a good bridge to machine thinking, but that bridge will be crossed, and the quality of machine thought will soon approach that of animals and humans. I am convinced of this because I can see what it would take. There have been warnings about the risks of AI: the movies WarGames and Terminator, and many science fiction novels, have fantasized about alternative intelligences and what they would do. It is scary.

I would make a distinction between intelligent recognition and targeting systems, which can be hard-wired, meaning their behaviour is fixed, and free-roaming AI-driven robots, whose behaviour is truly dynamic. Elsewhere I wrote about ARGO, which stands for Autonomous, Robust, Goal Orienting systems; this defines intelligence for me. This post is about another aspect of free-roaming AI: its aggression.

We understand aggression to be a determined act by a person to destroy something. It is a primitive behaviour that simply ignores the integrity of whatever it is directed towards and takes it apart until it no longer exists. RoboCop could be a basic example of machine aggression, but this is not what I mean. The role of aggression in our brain is more specific than we think. We usually only call the extremes aggression, but we have to be ‘aggressive’ with every move we make: we have to destroy our will to keep doing what we are doing in order to want to do something else.

There are two directions in this: one is driven by fear and one by assertion, which is the result of concluding that we will survive the move intact. When we are scared we can run, but we will not be left with any new skills afterwards. When we are assertive and do something even though it scares us, we learn something new; we conquer new behaviour. These examples are also extremes. Most of the time this dynamic remains within known limits. We make ‘safe’ choices all the time, choices that create no risk to us in any way. That behaviour is also mediated by a balance between fear and aggression.

Now the problem with AI is that it will have these dynamics if it is to be truly intelligent. The reason is that you cannot navigate an imperfectly known environment without moving around, you cannot move around without committing to a movement, and you cannot commit without ‘asserting’ that it is safe. Like humans, an AI robot will have to decide for itself what is safe behaviour and what is not.
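To make this concrete, here is a toy sketch of what such a fear/assertion balance could look like. Everything in it is a made-up assumption for illustration (the move names, the risk numbers, the threshold); it is not how any real robot works. The agent commits to a move only when its imperfect world model lets it ‘assert’ the risk is below its fear threshold:

```python
import random

def estimate_risk(move, world_model):
    """Estimate the risk of a move from an imperfect world model.

    The model is never 100% informed, so random noise stands in for
    everything the agent cannot know about its environment.
    """
    known_risk = world_model.get(move, 0.5)   # unknown moves default to 'uncertain'
    uncertainty = random.uniform(-0.1, 0.1)   # the imperfection of the model
    return max(0.0, min(1.0, known_risk + uncertainty))

def choose_move(candidate_moves, world_model, fear_threshold=0.3):
    """Commit to the first move the agent can 'assert' is safe."""
    for move in candidate_moves:
        if estimate_risk(move, world_model) < fear_threshold:
            return move   # assertion wins: commit to the movement
    return None           # fear wins: freeze, and learn nothing new

# A hypothetical world model: the agent 'knows' some moves better than others.
model = {"step_forward": 0.1, "jump_gap": 0.6}
print(choose_move(["jump_gap", "step_forward", "open_door"], model))
```

Tuning fear_threshold is the whole dynamic: set it too low and the agent never commits (and never learns), set it too high and it commits to moves it should have feared.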

So this means that if you let a robot like Boston Dynamics’ Atlas move around outside, you will have to accept that it does things it thinks are safe. The more freedom you give such an AI to perform its tasks, the greater the risk that it decides it can do something that is not safe, especially around humans. Say you have an ‘intelligent’ robot and you ask it to solve a problem that requires a stiff rod. It might decide to use one of your bones: grab it, extract it and finish the task, while you attempt to stay alive through the procedure. It may never have sensed there was anything to be ‘afraid’ of in this process. Or say an AI ‘assistant’ works with you on Mars, an accident happens, and the AI thinks it can solve it by taking part of your spacesuit and stuffing it in a hole. It could be like a dog that rips apart a pillow and then looks at you with eyes that say “why was that important to you?”.

Once you let behaviour truly be a balance between fear and aggression, based on a world model of the AI that can never be 100% informed (just like that of humans), you gain true intelligence, but you risk danger. The only solution to this risk is to have the AI demonstrate a thorough understanding of what humans are and why they should be integrated into every plan and every move, so that there is no risk to them. The AI should ‘love’ humans more than it loves itself, and this ‘love’ starts with knowing what they are, recognizing them, and so on. The short-term solution is to make sure robots are weak. Even then they can still decide to replace kitchen salt with a toxic salt, because they never learned that not all salts are the same. “It says salt in the menu, why is zinc bromide not ok?”
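To sketch what ‘integrated in every plan’ could mean, here is a minimal toy veto check. The helpers detect_humans and predicted_harm, the scene, and the plan steps are all hypothetical stand-ins; a real system would need far richer perception and prediction. The rule it encodes: any plan step that puts any recognized human at risk disqualifies the whole plan, before the task itself is even considered:

```python
def detect_humans(scene):
    """Stand-in for perception: the humans the AI recognizes as humans."""
    return [obj for obj in scene if obj.get("kind") == "human"]

def predicted_harm(step, human):
    """Stand-in for prediction: how much this step would hurt this human."""
    return step.get("harm", {}).get(human["name"], 0.0)

def plan_is_acceptable(plan, scene):
    """Hard veto: reject any plan in which any step risks any human."""
    humans = detect_humans(scene)
    for step in plan:
        for human in humans:
            if predicted_harm(step, human) > 0.0:
                return False   # a human is at risk: the whole plan is out
    return True

scene = [{"kind": "human", "name": "astronaut"}, {"kind": "rock"}]
plan = [
    {"action": "find_stiff_rod", "harm": {}},
    {"action": "use_femur_as_rod", "harm": {"astronaut": 1.0}},  # the bone example
]
print(plan_is_acceptable(plan, scene))   # False: the plan risks a human
```

Note that the check is only as good as detect_humans and predicted_harm; the robot that never learned that not all salts are the same would also predict zero harm here.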

While AI will destroy our online experience, making it increasingly fantastical and at the same time attractive, undermining our own sense of reality and of what is true and possible, and while AI will be weaponized by every party that wants to get hold of or control things, we can expect true AI to be a risk as well. There is a lot to be said for not going that route, for sticking with soft-tissue humans combined with dumb machines. Our innate desire to procreate, however, will probably manifest itself by driving us to ignore these risks just to see true AI roam free.
