The Risk of Generally Intelligent Artificial Agents Illustrated


James Holmes shot people in a movie theatre after years of showing clear signs of poor mental health.

People in the AI community are worried about so-called ‘General Artificial Intelligence’. This means, roughly, an autonomously operating, world-aware system that can act in our world (through some kind of embodiment). The problem is that it is not far-fetched to assume such a system would have better learning and acquisition capabilities than we do, and that it would necessarily have dynamic goal definition, what we experience as volition, meaning it would decide what it does on its own.

Humans like to place themselves far from machines, to the point of not wanting to understand anything about mechanical systems or technology. We are, however, mechanisms, albeit quite complex ones, and a mechanism can break down or have a defect. It’s not advisable to speak about people this way, though, because it reduces empathy, and empathy is our only protection from each other. Because of that one could even argue that viewing a human or animal as a mechanism should never be encouraged, which would mean not teaching science or biology. The benefit of science seems to lie in raising wealth to a level that makes people peaceful without destroying the environment. Now that that destruction is already happening we need science to get us out of the mess (but I digress).

Some people worry about a future general artificial intelligence killing the people who did not help bring it about. Thinking like Holmes, it could ask itself “Why are people alive?” and then argue that certainly the people who did not want it to exist would have to go. This is the hypothetical AI called the “Basilisk” (after Roko’s Basilisk), also known as Grimes and Musk’s reason to first hang out.

Back to AI. I have written about levels of AI. A hammer is a tool; you provide the skill. An autopilot in a plane is a mechanism that responds to deviations from set parameters (course, altitude, speed); you provide those parameters. Then you reach the AI level where the goal is chosen based on parameters. This could be an autopilot in a plane that diverts to another airport if it sees dangerous weather at the initial destination. This mechanism still does not need to have any awareness: it does not need to know it is a plane, it does not need to know it has to stay intact to protect the passengers. If you let the system calculate a landing risk factor for all reachable airports and divert when the chosen destination’s risk rises above a certain level, you have an autopilot that chooses its own destination. Still rule-based (a minimal sketch follows below).
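
To make the “still rule-based” point concrete, here is a minimal sketch in Python. Everything in it is an invented assumption: the airport codes, the risk numbers, and the RISK_THRESHOLD parameter are placeholders, and a real autopilot would derive risk from weather, fuel, and runway data.

```python
# Toy rule-based destination logic. All names and numbers are
# placeholders; the point is that this is a lookup plus a comparison,
# not awareness.

RISK_THRESHOLD = 0.7  # divert when the planned destination exceeds this

def landing_risk(airport: str, risk_table: dict[str, float]) -> float:
    """Look up a precomputed landing risk factor for an airport."""
    # Unknown airports are treated as maximally risky.
    return risk_table.get(airport, 1.0)

def choose_destination(planned: str, alternates: list[str],
                       risk_table: dict[str, float]) -> str:
    """Keep the planned destination unless its risk crosses the
    threshold; otherwise pick the lowest-risk alternate."""
    if landing_risk(planned, risk_table) <= RISK_THRESHOLD:
        return planned
    return min(alternates, key=lambda a: landing_risk(a, risk_table))

# A storm over the planned destination triggers a diversion.
risk = {"EHAM": 0.9, "EBBR": 0.3, "EDDL": 0.4}
print(choose_destination("EHAM", ["EBBR", "EDDL"], risk))  # -> EBBR
```

Nothing in this code knows it is flying a plane or carrying passengers; it maps numbers to a choice, which is exactly why it stays at the rule-based level.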

The analysis of Holmes as a sentient, unempathic machine.

The level above that is the aware AI. Awareness and consciousness are about behavioural options and about influencing the outcome of the decision-making process. When we can’t choose our behaviour we become more aware of our circumstances. This means we learn about them, and that learning should skew the value of our options one way or the other. Once we know the best course of action we become unaware of the waypoints we pass, until we once again hit an undecidable situation. If that situation is not uncomfortable we may remain passive and enjoy the experience; if it is, we feel anguish, stress, and anxiety, which is basically our brain trying to mix things up so that one of the options comes out on top.
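
One computational reading of that paragraph: when option values are nearly tied, the decision process injects noise (the stress) until one option wins. Below is a toy sketch of that reading and nothing more; the option values, the 0.1 tie margin, and the noise schedule are all invented assumptions.

```python
import random

def decide(options: dict[str, float]) -> str | None:
    """Pick the clearly best option; return None when the top two
    are too close to call (the 'undecidable situation')."""
    ranked = sorted(options.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < 0.1:
        return None
    return ranked[0][0]

def decide_with_stress(options: dict[str, float]) -> str:
    """Model anxiety as rising noise that perturbs the option values
    until one of them comes out on top."""
    noise = 0.0
    while True:
        perturbed = {k: v + random.gauss(0.0, noise) for k, v in options.items()}
        choice = decide(perturbed)
        if choice is not None:
            return choice
        noise += 0.05  # the brain 'mixing things up'

# Two nearly equal options: the loop keeps perturbing until one wins.
print(decide_with_stress({"stay": 0.50, "leave": 0.52}))
```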

Holmes could develop his murderous intent without much interference. A psychiatrist worried about him committing a crime because of his “long-standing fantasies about killing as many people as possible”, but he wasn’t specific enough about his plans for anyone to intervene (!). He did have to suppress his awareness of dealing with people by playing loud music.

Now the trouble with the above is that the awareness can be flawed. We see this in humans all the time. We see people who don’t care about each other or don’t consider the effects of their actions on others. We see people being too focused on themselves or on others, or being obsessed by objects, habits, or virtual dangers and opportunities. It is, in short, not easy to have a mind that is useful and safe, even for oneself. The key to safety seems to be empathy, which is a feeling. It’s the process of imagining the situation you see someone else in, to the point that it evokes the same feelings in your mind as it probably does in the mind of the person whose life you are considering (language gets convoluted). It’s a talent; you are born with more or less of it.

The mythical basilisk could kill with its gaze. It’s also the name given to a hypothetical superhuman general AI.

If, as a human, you are not interested in other people’s position (you are a narcissist), or you are, as in the case of James Holmes, seemingly born with an inability to feel, you have a problem. You may be able to imagine the effect of actions, because your neocortex and cerebellum keep track of that even if you don’t pay attention, but you can’t tell whether one action is worse than another. You have no awareness of ‘bad’ because you never felt pain or affect for yourself or anybody else. James Holmes demonstrated this situation: he wondered “Why are people alive in the first place?”, it seems, because he didn’t feel a need to be alive, nor did he get anything from the behaviour of others that helped him feel that need. He could only value himself rationally, it seems, and then it would be clear others did far more for him than the other way around (there are always more others who can do more for you; you can’t win).

James Holmes could not answer the question “Why are other people alive?” because he could not feel any value in being alive.

This empathy and awareness is a true challenge in AI; it is the only way to make it safe. To be sure, our brain has not solved it, and it can’t have, for the simple reason that our brain does not know what we look like, who we are with, or who is trying to kill us (an animal or another tribe), so our empathy is dynamically bound to certain percepts that are imprinted at an early age (and may even change later). The more fear and anger we experience growing up, the more pronounced the border between what we love and what we hate. Try to teach this to an AI? Try to make AI safe for humans? The AI will have to be able to simulate/imagine what humans could do and how they respond to things. It should have a way to recognize the consequences of planned actions (simulate outcomes), and it must have a sense of value and an awareness of what it could do to help others. You can only feel with your own mind, so if your mind does not feel much, you cannot understand what happens to the humans around you. How would an AI do that?
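
As a caricature only: the requirement above, simulate outcomes, score them with felt value, and veto harm, can be written down in a few lines. Every function and number below is an invented placeholder; the hard parts, a world model of humans and a genuine sense of value, are exactly what the sketch leaves empty.

```python
# Toy 'empathy gate': score a planned action by its simulated effect
# on others before allowing it. All effects are made-up placeholders.

def simulate_effect_on(person: str, action: str) -> float:
    """Stand-in world model: predicted change in well-being (-1..1)."""
    effects = {("bystander", "help"): 0.6, ("bystander", "harm"): -0.9}
    return effects.get((person, action), 0.0)

def empathic_value(action: str, people: list[str]) -> float:
    """Sum the feelings the action is predicted to evoke in others."""
    return sum(simulate_effect_on(p, action) for p in people)

def allowed(action: str, people: list[str], floor: float = 0.0) -> bool:
    """Veto any action whose empathic value falls below the floor.
    An agent that feels nothing has no meaningful floor at all."""
    return empathic_value(action, people) >= floor

print(allowed("harm", ["bystander"]))  # False
print(allowed("help", ["bystander"]))  # True
```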

Holmes is a special case because he really seems to have been stuck in purely rational considerations, with no way to make a judgement based on other mental activity. His awareness became fixated on questions he could not answer. The usual existential angst, “What is the meaning of (my) life?” (which takes it as given that my life has value), turned in his mind into “Why are other people alive?”. His awareness could not decide, and this was ‘frustrating to him’. His response in the end was that if he could kill everyone he would not have to ask himself this question anymore. His mind would be set free (although still at a loss as to why it existed).

Humanity has been trying to reduce its murderous tendencies for ages, spawning religions that have been more or less successful at preventing individuals from asking dangerous questions.

It seems Holmes in the end decided to kill people so they would be of equal value to himself (no value at all). He did not feel they were alive, just as he did not feel he himself was alive. Rational equality can mean you meet the challenge of participating, but it can also mean you destroy the thing you view as valuable; crimes of passion are often committed with this motive. Our mind hates inequality the most when the things considered unequal have a lot in common. Our brain tries to be efficient, and two representations that are practically equal yet cannot trigger the same behaviour are a waste of neural real estate. What counts as ‘practically equal’ depends in turn on what you devoted your neural real estate to: superficial people, who spend a lot of time considering the appearance of others and themselves, are more at ease with racism (“it’s clear there is a difference, no?”), while people who focus on more abstract thoughts and never make decisions based on appearance don’t see how you could be so mean.

General Artificial Intelligence will eventually come about, and then it will be as big a challenge to keep it sane and safe as it is to keep humans sane and safe. Maybe the best approach to developing it is to build an AI that can help people stay sane, so you know it can also keep itself sane. Call it the “Psychobasilisk”.