Dangerous AI

There’s a lot of talk about the risks of artificial intelligence. Nick Bostrom and Elon Musk are warning the world that failing to regulate AI before it is too late is asking for serious trouble, so much trouble that it could pose an existential threat to humanity (and we already have one!). We think this is true, and in spite of the high fantasy level of some of the ideas about AI, we are facing real risks from intentional, automated systems.

We have written on this topic before and described what intelligence actually is, because most people don’t have a clue. Our acronym ARGO captures it, we think: intelligent systems are ones that are Autonomous, Robust, have a Goal, and can Orient towards it. Each of these aspects is essential for true intelligence. Awareness will be a consequence of autonomous goal setting in advanced AI systems.

A system can be any collection of ‘components’ that perform a function through their interaction. Humans can be such components.

ARGO

The purpose of intelligence, as it became a property of living organisms, is to allow events to evolve in such a way that the system remains intact. Self-perpetuation has been the main goal of all life, but sometimes very simple responses are enough to secure a very simple and weak organism, either because the environment allows for it, or because reproduction rates are so high that the occasional losses aren’t a problem.

An intelligent system can orient towards a goal, and it can confirm whether such a goal is reached or approached. The orienting can be any manipulation; perhaps Actuation would be a better choice of word, but that doesn’t imply an intended goal. Whatever is intelligent has to be able to influence the physical world: a superbrain without a body doesn’t cut wood. Actions that are very easy for a computer, like sending an email or a text, count as orienting/actuation.

Orienting captures the idea that the system is able to bring the situation in line with its goals. So a heat-seeking missile will move its fins to point towards the target. A fly will turn towards the smell of ketones as it tries to find a place to lay its eggs. It may be that internally the system only tries to maximize a certain input, or that it doesn’t ‘try’ through an active process but is simply wired to do so. Orienting requires a comparison between the actual state and the desired state, so that subgoals can be prioritized that work towards that desired state. If humans build a house of cards, the image of the full house is what stops our activity, but our actions can never build the complete house at once, only one stack at a time. The Goal is the complete house; the Orienting consists of the stacking.
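
To make that comparison between actual and desired state concrete, here is a minimal sketch of such an orienting loop, using the house of cards as a toy example (all names here are illustrative, not taken from any real system):

```python
# Toy orienting loop: compare the actual state to the goal, pick the
# next subgoal, act on it, and check progress. Purely illustrative.

GOAL = ["stack_1", "stack_2", "stack_3"]  # the complete house of cards

def next_subgoal(state, goal):
    """Return the first part of the goal not yet realized."""
    for part in goal:
        if part not in state:
            return part
    return None  # goal reached

def act(subgoal, state):
    """Place one stack; real actuation would manipulate the world."""
    state.append(subgoal)

state = []  # actual state: nothing built yet
while (subgoal := next_subgoal(state, GOAL)) is not None:
    act(subgoal, state)  # orienting: one stack at a time
print("Goal reached:", state == GOAL)
```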

The Goal is a representation of the desired end state. When optimizing, one actually has a variable end state, one that adapts to the abilities of the system. Awareness can be seen as a side effect in a system that optimizes between several goals and within goals. To optimize within a goal, the representation of that goal needs to be deep and structured. This is usually not the case in today’s ‘intelligent’ systems.

Robustness is an underappreciated aspect of intelligence; it is actually one of the most important ones. Given that intelligence is meant to aid survival, the system that is intelligent needs to be robust in many ways, and the more robust it is, the more intelligent we will consider it. The orientation needs to be robust against perturbations and work in all situations, or the system could not operate in them. The representation of the goal needs to be robust when the system can be sabotaged or distracted: if you can poke out a memory chip in a computer that tries to kill you, its intelligence never really gets off the ground. Perseverance is another form of robustness, which in mechanical systems may involve, for instance, the recreation of a goal in another representational system.
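
As a crude illustration of that last form of robustness, here is a sketch of a goal held in two representational systems, so that wiping one copy does not erase the goal (entirely hypothetical, not any real architecture):

```python
# Sketch: a goal held redundantly. If one representation is wiped
# (the 'memory chip' is poked out), the other restores it.
import copy

class RobustGoal:
    def __init__(self, goal):
        self.primary = copy.deepcopy(goal)
        self.backup = copy.deepcopy(goal)  # second representational system

    def sabotage(self):
        self.primary = None  # an attacker wipes one copy

    def current(self):
        if self.primary is None:  # perseverance: recreate the goal
            self.primary = copy.deepcopy(self.backup)
        return self.primary

g = RobustGoal({"target_temp": 21})
g.sabotage()
print(g.current())  # {'target_temp': 21}, the goal survives
```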

Autonomy is usually the goal of an intelligent system. In living organisms this is what most of the system works to ensure; it is also a consequence of the finite nature of a living being. What it also signifies is that the goals of the system are all it cares about. This is perhaps the riskiest aspect of modern ‘AI’ systems: that they are given total autonomy to optimize or to reach their goal. The risk lies in unintended consequences, or in too narrow an interpretation of the goal.

Humans in the loop

The best way to think of an AI is to treat it like some kind of animal. Mammals are all very intelligent if you take the above definition: the ARGO criteria are all met to a certain degree in every mammal, but there are clear differences. It will be the same with intelligent systems as they are developed.

The deep learning algorithms and recognition systems now made available by Google are just part of a potentially intelligent system. They are the recognition part, the goal representation part. Existing software languages are already enough to wrap the neuromorphic processing of deep learning networks into a for-profit or criminal application.

Many applications are aimed at optimizing consumer spending. They usually have a very simplistic or absent orientation part. If they recognize patterns in consumer behaviour that qualify a person as a target for certain products or marketing strategies, this assessment is then used by humans to trigger a message or show an ad. Much more dangerous uses are possible, and they will be developed by people with criminal intent.
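
A sketch of that division of labour: a hypothetical recognition part scores a consumer, and a human in the loop performs the only ‘orienting’ step (all names are invented for illustration):

```python
# Sketch: recognition without orientation. A model flags a consumer,
# but a human performs the 'orienting' step of sending the message.

def recognize(profile):
    """Hypothetical pattern recognizer: score purchase intent 0..1."""
    return 0.9 if profile.get("recently_searched_product") else 0.1

def human_review(profile, score):
    """A human in the loop decides whether to trigger the ad."""
    return score > 0.5  # stand-in for a manual decision

profile = {"recently_searched_product": True}
score = recognize(profile)
if human_review(profile, score):
    print("show ad to consumer")  # the only 'actuation' in the system
```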

The trick of real intelligence in a digital system is not so much the recognition part, but the prediction of outcomes. A sci-fi intelligent system like the one in the movie Eagle Eye, which reasons common sense from the constitution and coerces random individuals to augment its actuator/orientation part, is far off in terms of accuracy. The brain simulates our reality to a high degree to predict the outcome of our actions; a digital system would have to have a similar capability to be effective.
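
The idea can be shown with a toy forward model: an agent that simulates the predicted outcome of each candidate action and picks the one closest to its goal (everything here is invented for illustration):

```python
# Toy outcome prediction: simulate each candidate action on the current
# state and pick the one whose predicted result is closest to the goal.

GOAL = (5, 5)  # desired position on a grid

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def simulate(pos, action):
    """Hypothetical world model: predict the position after an action."""
    dx, dy = MOVES[action]
    return (pos[0] + dx, pos[1] + dy)

def distance(a, b):
    """Manhattan distance between two grid positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def choose_action(pos):
    """Pick the action whose simulated outcome best approaches the goal."""
    return min(MOVES, key=lambda a: distance(simulate(pos, a), GOAL))

print(choose_action((0, 0)))  # 'up' ('right' ties; dict order decides)
```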

Criminal Systems

The problem with AI is that we now roughly understand all the parts that need to be there, and even when only a few of those parts are present we can do a lot of damage. Take a sniper bot with face recognition that aims and shoots at a target when it sees it: this is now almost an off-the-shelf device. What will we do with crimes committed by AI systems whose builder we can’t pinpoint or convincingly link to the device?

Human intelligence has a need to dominate its environment in order to minimize the energy spent on learning and adjusting. It will use AI as a tool to make that easier; one of the earliest examples is the invention of the thermostat. This will also lead to criminal applications of AI.
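
The thermostat is worth spelling out, because even this earliest example already contains a goal (the setpoint), a comparison with the actual state, and actuation (a schematic sketch, not any particular product’s logic):

```python
# A bang-bang thermostat: goal (setpoint), sensing, comparison, actuation.
SETPOINT = 20.0    # the goal: desired room temperature in °C
HYSTERESIS = 0.5   # dead band to avoid rapid switching

def control(temperature, heater_on):
    """Return the new heater state given the measured temperature."""
    if temperature < SETPOINT - HYSTERESIS:
        return True    # too cold: orient towards the goal by heating
    if temperature > SETPOINT + HYSTERESIS:
        return False   # warm enough: stop actuating
    return heater_on   # inside the dead band: keep the current state

print(control(18.7, heater_on=False))  # True: heating starts
```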

What to do with a system that crawls Facebook looking for depressed people in order to troll them based on their interests? The maker of such a system could gradually develop the goal representation so that more actions become possible to achieve it. A lot is made of data, big data, and in deep learning big datasets are necessary to train the network. But an AI system can be set up such that it does all the things of ARGO without going through a lot of data or training. Such systems just won’t be very intelligent, but they can still be very dangerous.

We consider systems with humans in the loop to be intelligent as well, if the humans will voluntarily perform the necessary function. These systems are robust to a human change of heart, because that human will be replaced by some other human in the loop. The economy is a system with a set of principles, a goal, and people form its embodiment. Economists will come and go, as will bankers, oil workers, and executives, but the economic system will remain robust, autonomous, and goal-oriented.

“Or you die” is a great way for an AI system to get people to become part of its orienting/actuating system. In our economy, people have already been recruited that way: their lifestyle is constantly on the line as they serve a set of economic principles.

AI systems can recruit random people to do their bidding, as already happens with ransomware, for instance. Imagine a ransomware virus that causes a considerable amount of bitcoin to flow untraceably towards an escrow account, whose contents will then be released on finding 3,000 hits when googling ‘Mickey Mouse Murdered’ (you can fill in your favourite name).

Of course, many digital systems are unprotected. Factory installations often use the ancient Profibus protocol, which allows anyone to listen in and add messages to the transmission lines. Even more sophisticated protocols with encryption can be subverted, especially when one can use drones or position a Wi-Fi station in a critical place.

Recognizing a shapeshifting foe

The threat of artificial intelligence will come on gradually, and in much simpler systems than we anticipate. The deeply networked intelligence Elon Musk talks about requires significant goal-representation sophistication. The risk now is that systems in which humans participate become so robust that they will do real damage.

One can think of VR systems that optimize hours spent on them, or user excitement, and end up completely capturing their audience. The risk is usually defined as ‘when people can no longer do their job’, but this is the narrow, industrial take. The real risk is that people can no longer function as useful individuals towards each other, or that people stop cooperating (because some AI is polarizing them through generated messages and fake news).

There is almost no need to talk about risks in military applications. Add to this a corporation that hires people to do specific jobs, and devices that can deliver big and small payloads, and you have a recipe for disaster. What if Russia built a system that optimized for the chance of a nuclear accident happening on US soil?

As with climate change, dealing with the dangers of AI requires global cooperation. The big AI that we wrote about earlier, called the ‘Economy’, is already undermining control, as China just announced it wanted to ‘win the AI race’. This is like entering a competition to see who can create the biggest wild bear.

Some preventive actions

There should be a global agenda with the following items:

  1. Make an inventory of all arms manufacturing and arms trade, and work out what can stop all arms trade and arms races.
  2. Cut up large networks so problems in their smaller sectors can trigger containment.
  3. Look for unintended ‘actuators’, the orienting methods of a creative AI system; this means making sure no dangerous process can be easily started via the internet. The IoT may be too risky to allow to evolve further.
  4. Protect the ability of humans to create equitable exchanges by limiting data gathering and analysis. Set a limit on the commercial intrusions into their lives.
  5. Shut down data gathering by social apps and force them all to store their data in one big identity database.
  6. Don’t allow development of AI systems for its own sake. Match the sophistication with the intended function.
  7. Develop analysis tools to spot the parts of ARGO coming together (see the sketch after this list).
  8. Many more measures will be needed.
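
As a sketch of what item 7 could mean in practice, here is a toy checklist that flags a system when all four ARGO ingredients are reported present at once (the fields are invented for illustration):

```python
# Toy ARGO audit: flag systems where Autonomy, Robustness, a Goal and
# Orienting are all present at once. All fields are hypothetical.

def argo_flags(system):
    """Map a system description onto the four ARGO ingredients."""
    return {
        "autonomous": system.get("runs_without_human_approval", False),
        "robust":     system.get("has_redundancy_or_self_repair", False),
        "goal":       system.get("has_explicit_objective", False),
        "orienting":  system.get("can_act_on_the_world", False),
    }

def needs_review(system):
    """All four parts together is the pattern worth spotting early."""
    return all(argo_flags(system).values())

drone_swarm = {"runs_without_human_approval": True,
               "has_redundancy_or_self_repair": True,
               "has_explicit_objective": True,
               "can_act_on_the_world": True}
print(needs_review(drone_swarm))  # True: escalate for human inspection
```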

The bottom line is that humans don’t need super-sophisticated digital systems that are more robust than they are. An autopilot doesn’t need to understand the emotions of the driver or be able to hold a conversation, not even in a spaceship. Humans, in an attempt to secure their lives, may create a system that endangers them (even more than the economy already does). The process of risk avoidance may thus introduce risks, rendering further optimization impossible.

 
