The Threat of AI

Artificial intelligence is rapidly converging on what we would consider human intelligence. For some researchers that goal has arguably already been reached, though the AI cannot yet show it because it lacks the ability to gather experiences, or the motivation to do so. Boston Dynamics is the company that demonstrates what I call ARGO, Autonomous Robust Goal Orientation, which is my definition of intelligence: its robots struggle to open doors and succeed even when perturbed, in one demonstration by one of their own minders.

It is not the robot we should worry about, but the dynamic system of outcome prediction and action selection that guides it in real time. That is where the AI 'lives', and its functionality is not bound to the robot's embodiment. Provided the inputs are true, or at least relevant to the domain the AI has to operate in, the AI can learn and find its way towards a set goal, and it can set secondary goals we never instructed it to pursue. As a former artificial intelligence researcher, I can imagine how this functionality is achieved.
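As a minimal sketch of what such a predict-and-choose loop looks like in principle (everything here is hypothetical illustration, not any real system's code):

```python
# Hypothetical sketch of the loop described above: the 'intelligence'
# lives in this predict-and-choose cycle, not in any robot body.
# The predict and score_goal functions are placeholders for whatever
# world model and goal measure a real system would use.

def agent_step(state, actions, predict, score_goal):
    """Pick the action whose predicted outcome best advances the goal."""
    best_action, best_score = None, float("-inf")
    for action in actions:
        predicted = predict(state, action)   # imagined outcome, no real move yet
        score = score_goal(predicted)        # how close does it get to the goal?
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

The point of the sketch is that nothing in it mentions motors or cameras; swap the state and action types and the same loop steers a robot arm, a network probe, or a conversation.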

The threat is not from the AI itself but from those who will use the AI as a tool, because that is the most logical first step for anyone working on AI, or for anyone watching the AI research community to pick a winner and put him or her to work. The first thing one would do, if it were possible to gain wealth using AI, is consolidate one's position of autonomy. Robustness, which is part of intelligence, is something humans seek in most situations, and an AI instructed by a human can help achieve it. This is not science fiction right now; it is probably already reality, under the radar.

You may think that an AI can't do much harm, but think of an AI as a search dog for a moment, one that can be trained to find something or get some place. It has to be set up so that it can both 'imagine' a path forward and determine its success. That path forward can be gaining access to a computer network, or guiding a drone into a building. It can also be eliciting a specific response from a person through online contact. The humans who mind the AI will try to enable it to use tools and means to achieve its goals, and at a certain point the AI can enable itself.
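To make the 'imagine a path and test its success' idea concrete, here is a purely hypothetical sketch: a breadth-first search over imagined moves, where the states could equally be rooms in a building, hosts on a network, or stages of an online conversation.

```python
from collections import deque

# Hypothetical illustration only: the same search works whatever the
# 'states' are, which is exactly what makes the tool so general.

def find_path(start, is_goal, neighbours):
    """Breadth-first search: imagine paths forward, return the first that succeeds."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if is_goal(path[-1]):
            return path                      # a successful imagined path
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                              # no path to the goal was found
```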

“Tech companies should stop pretending AI won’t destroy jobs”

By its very nature, a dangerous AI will be constantly motivated to achieve its goals. It will never stop 'thinking' towards them or 'wanting' to try a promising approach. This effect is easy to monitor when the AI is a real-life robot, but harder when the world the AI tries to navigate is mostly online. Online can also mean using voice and listening to spoken words over telephone connections, as human speech can already today be generated at a fidelity that humans can no longer recognize as artificial.

An AI 'imagining' a goal may be able to create an image and present it to people it thinks it can learn to control. This sounds more and more like science fiction, but it really is only steps away from where we are today, and it won't require immense computing power. Intelligence doesn't have to be superhuman to be dangerous; consider that humans operate under immense moral restraint. If you want to see what AI will do without any moral restraint, just look at war zones or desperate regions, at how people behave once they have given up protecting others from harm.
