There’s a lot of talk about the risks of artificial intelligence. Nick Bostrom and Elon Musk are warning the world that not regulating AI before it is too late is asking for serious trouble, so much trouble that it could be an existential threat to humanity (and we already have one!). We think this is true, and in spite of the high fantasy level of some of the ideas about AI, we are facing real risks from intentional automated systems.
We have written on this topic before and described what intelligence actually is, because most people don’t have a clue. Our acronym ARGO captures it: Autonomous Robust Goal Orientation. The A can also stand for Aware, in which case the system can set its own goals to meet its needs, just as we can eat a banana or a hamburger so as not to feel hungry. Most digital systems are simply goal oriented and not robust. Robustness is an important part of intelligence: if a killer robot falls apart at the first blow of a baseball bat, or loses focus, it will not seem very intelligent. Some weapon systems of today are very robust and goal oriented, and thus in our view intelligent.
The ARGO acronym hides two other complex aspects: the goal and the orienting. A goal may seem a straightforward thing, like a picture of a target. But how does the system know it has reached it? It would have to be able to see a picture. If there’s no room for such a complex recognition system, one needs to do some measurements, like a GPS position for a location. The way a GPS position is represented, as a set of numbers, is however highly specialized, and thus the intelligence of the system is not very robust. Compare this to a human reconnaissance officer, who can work his GPS and compass, look at the sun, the stars, tree bark, and won’t stop when he gets hit in the face by a tree branch. For living systems robustness is in itself a goal. Humans want to survive, and much of our brain power is devoted to that goal.
The last part, the orienting, is where the intelligent system knows how to change in order to move closer to the goal. This change can be a movement, but it can also be the production of a sound or other signal. It can even be a change to the system itself. For this, the mechanisms to make the change need to be in place and working. We can apply the ARGO definition to any object or system. You can take a thermostat and say it has a goal, but it’s hard wired and not very robust, so not that intelligent. Or you can take a cruise missile, which is goal oriented, can find its targets through different methods, and can handle attempts to destroy it. You immediately get a sense of danger with the cruise missile that you didn’t get with the thermostat.
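To make the thermostat end of the spectrum concrete, here is a minimal sketch (hypothetical code, not from any real device) of the goal and orienting parts of ARGO in their most primitive form: the goal is a hard-wired number, and the orienting is a single fixed rule.

```python
# A toy thermostat as the most primitive ARGO system (illustrative sketch):
# the goal is hard-wired and the orienting is one rule, so there is no
# robustness at all -- change the sensor format and it is helpless.
class Thermostat:
    def __init__(self, goal_temp):
        self.goal = goal_temp     # Goal: a fixed setpoint, nothing more
        self.heater_on = False

    def orient(self, measured_temp):
        # Orienting: the only change the system can make to move toward
        # its goal is toggling the heater.
        self.heater_on = measured_temp < self.goal
        return self.heater_on

t = Thermostat(goal_temp=20.0)
print(t.orient(18.0))  # True: below the goal, heater turns on
print(t.orient(21.0))  # False: goal reached, heater off
```

Everything this system “knows” is in those two lines of logic, which is exactly why it feels harmless compared to the cruise missile.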
But we are surrounded by very mundane systems that are today becoming increasingly intelligent. Facebook is an example of how the information we feed a site, which ends up in Zuckerberg’s databases, is mined and used by algorithms that then change the way the pages look and what content they show, in order to achieve a goal, which is to give us a ‘better’ user experience. Facebook is intelligent in the application of rules, which we can for now still assume to have been made by people. But what if Facebook used a genetic algorithm to come up with new rules and methods? A genetic algorithm is a way to create many variations of something and allow the success of the variations to determine the reuse of features of those variations. It works a lot like natural evolution. This could result in Facebook changing the way it uses your personal data so that you click more ads, with nobody really understanding what the changes are or why they work.
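The genetic algorithm idea fits in a few lines. This is a toy sketch (our own illustration, nothing to do with Facebook’s actual systems): a “rule set” is just a bit string, fitness stands in for whatever the goal metric is, and successful variants get their features recombined.

```python
import random

# Toy genetic algorithm (illustrative only): variations compete, and the
# success of a variant determines whether its features are reused.
def fitness(rules):
    return sum(rules)  # stand-in for a goal metric like "ads clicked"

def evolve(length=20, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # success determines reuse
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]          # recombine features of parents
            if rng.random() < 0.1:
                child[rng.randrange(length)] ^= 1  # occasional mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # converges to (or near) the maximum of 20
```

Note that nothing in the loop explains *why* the winning rule set works; the algorithm only knows that it scored well, which is exactly the opacity the paragraph above worries about.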
There are many of these simple optimization algorithms in use in lots of places already; think of logistics planning, work schedule generation etc. For decades these have been developed further and have become more sophisticated. These kinds of systems are intelligent at a low level. Recently deep learning returned to the scene, mainly because computers have become more powerful. The way deep learning works is by imitating brain processes on a statistical level. There are still some steps to go before we will see true brain emulation (which is good).
Deep learning strengthens the goal part of the ARGO definition, because it allows for recognition and classification of highly complex stimuli. It can do so autonomously, so you can let it loose on any dataset you think has structure. Pictures, scenes and sounds can all be classified and recognized once a deep learning network has been trained. Google now offers a trained network for recognizing objects in pictures, and you can build applications on top of it (as a paid service, because running them is computationally quite heavy).
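The train-on-examples principle behind all of this can be shown at its absolute smallest. This sketch is a single artificial neuron, nowhere near “deep” learning, but the same idea: feed it labelled examples, let it adjust its weights, then ask it to classify stimuli it has never seen.

```python
import math
import random

# A single-neuron classifier (minimal sketch of the learning principle,
# not of any real deep learning system): learn to separate two clusters
# of 2-D points from labelled examples.
def train(samples, labels, epochs=200, lr=0.5, seed=1):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            # Sigmoid activation gives a probability for class 1.
            p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y
            # Nudge the weights to reduce the error (gradient descent).
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def classify(point, w, b):
    x1, x2 = point
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Toy data: points near (0, 0) are class 0, points near (3, 3) are class 1.
data = [(0.1, 0.2), (0.3, 0.1), (2.9, 3.1), (3.2, 2.8)]
labels = [0, 0, 1, 1]
w, b = train(data, labels)
print(classify((0.2, 0.0), w, b))  # 0: recognizes an unseen class-0 point
print(classify((3.0, 3.0), w, b))  # 1: recognizes an unseen class-1 point
```

A deep network stacks many layers of such units, which is what lets it handle pictures and sounds instead of pairs of numbers, but the mechanism of learning from labelled examples is the same.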
If we reach a point where we develop truly brain-like learning algorithms, a lot will have happened. But only one research group or investor has to reach that goal and apply the power to his or her own benefit to start unhinging the world. It’s fine as long as the people involved like to compete in the wider economy and use AI there. But it becomes another story if the AI is used to hunt for legal loopholes around the world, or tries to find weak people to exploit, or learns to write extortion letters. The problem you give a deep learning network doesn’t have to be understood: you offer it pictures of grannies, tell it which ones can be easily robbed, and the network may learn to recognize new ones. One remark, as someone who worked in AI for years: training a network is still an art form. But for many human tasks the distinctions needed are not hard to automate.
AI is a field that triggers the imagination, and humans tend to project more sentience than is actually present. Humans can make a person out of a hurricane; it seems to be one way of dealing with our environment. This can lead to romantic interpretations of even simple AI systems (take the Tamagotchi rage for instance). It also makes it hard for people to recognize and respond to actual AI when it doesn’t take the form of an openly communicating human. The ARGO definition can even apply to systems, theories or philosophies humans use today, like economics, because the humans that apply such theories or philosophies are much like a substrate that obeys rules even when they hurt others (see the superhuman AI called Economics).
It seems almost necessary to build a particularly malign AI to show how dangerous it is, while in fact (and that’s part of what this blogpost is about) we are already knee deep in systems that influence our behaviour, which have goals and which are increasingly out of human control. What if a political party decided to let its soundbites be dictated by a system that mined historic political speech and its effects, taking current Facebook/Twitter comments as input, by region? So if Trump goes to North Carolina he gets a printout of all the political memes there, the top five that will endear him to the hearts of the majority. He doesn’t think, he just steps on the podium and goes “Well, those migrants, they could rob a bank” (because of a terrible bank robbery, just days before his speech, that the AI integrated). Of course this is already part of political campaign organization, which already uses sophisticated data mining and focus group measurements, in a sense building an AI to do the job of touching the hearts of voters.