We are rapidly approaching the time when life is not the only thing on Earth that fights for its own survival. We already see religions, companies, economies and cryptocurrencies acting as semi-autonomous systems defending their integrity (sometimes at the cost of human lives), and they will soon be joined by online algorithms and independent robots. This means we will have to share all the resources of our planet with these systems, whether we like it or not.
As we wrote before, the AI in most artificial systems is still very weak, but the key aspect, the robustness of their goal seeking, is growing rapidly. Many sci-fi novels have been written about robots taking over, about AIs locked in radioactive bunkers spying on every human being. What these novels tried to convey is that once a new non-human system is robust enough, it can no longer be stopped by humans, and humans will have to endure whatever the system does.
On the other end of the spectrum there is a threat as well: that of intelligent but very fragile systems doing the 'last mile' bidding of a more robust AI (or human). Killer bots, drones carrying poison or small explosives, can deliver deadly force easily and anonymously, as was illustrated graphically by a research group []. In fact, you only need perfect killer drones to rule the world (we delivered this analysis a couple of years ago).
The simplest way to state the problem we will face is:
How to deal with technology that we cannot stop from doing what it wants to do.
Until now we have had one example of this: our books of law, which are used to judge what they consider crimes in a court system. This is quite an automatic system, even if it relies on the human moral judgment of judges. It tries to treat all cases equally and not create exemptions unless there is a strong moral argument or massive public outcry. Our legal system is a machine that wants to punish the transgressions described in its laws, and it can apply deadly force if it deems that necessary. Of course, books have been written about the dangers of bureaucracy too (Kafka), on basically the same theme as those written about AI or robots.
With a legal bureaucracy that has gone malignant, the way out is revolution. 1984 is about how a 'legal system' can become very robust, its goal becoming to snuff out any intention to fight it (the human in the grinder of 1984 loses his individual will). In 1984 there is no way out. And certainly in some economies and religions today there is no way out (in Islam, for example, you can have been of another faith and be forgiven as long as you adopt Islam, though opinions on this vary).
A machine, or a system of machines/humans/internet, that has a specific goal and is robust enough will come about, because humans, as well as the system itself, want to be able to predict the future, and this drives the system to eliminate any uncertainties and threats to its continued functioning. Everywhere, teams and individuals will come up with electronic/online devices that do things for them ever more reliably. In big research corporations like IBM and Hewlett Packard, systems will come online that have been built with incredible care, simulated before they were built, run through scenarios, basically spawned from human-assisted AI, and eventually impossible to beat.
We will not know we can't beat an AI until we find we can't. When things start happening and we can't follow or understand why, that is when we have real AI among us. When we see money flow to places where humans have no benefit or are even harmed, and there is not even an elite that can find safety, then an AI has taken over. Economics is a good example of how an AI-like system (in this case a philosophy executed by millions of humans) can be devastating to humans, yet still defend itself (through humans) and be practically unstoppable (through its ability to incentivise actions that make it more robust).
Economics will leave this planet an uninhabitable wasteland and then die with the humans that serve it, because there will be no more resources to steal and destroy. An AI may do exactly the same, yet find plenty of the resources it needs for its own survival: say sand for silicon, sunlight, iron ore. It may need humans, but it also may not. In the latter case humans will perish as the AI becomes more robust and able to intervene in any human counteraction. Imagine an AI that throws up satellites it builds automatically, on rockets it builds autonomously, to see whatever humans (its primary threat) are up to, and then sends drones to bomb whatever is going on.
One major reason to be less afraid is that an AI, if left alone, can last a long time. A human can't last more than a few days without drinking or eating. The AI may simply stop at some point when it has reached its goal; it can take its time. Also, as we have written before, unlike humans an AI can hack itself to find whatever incentivises it internally. Humans hack their own dopamine system through the use of drugs; an AI just has to reprogram some numbers to get as high as a kite. This will be a major vulnerability. Of course, human-built AI systems dedicated to sabotaging other AI systems are another threat to AI. Tech neutralising tech is something we already see happening, with Apple's face recognition being defeated by a 3D-printed scan of a face.
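To make the 'reprogram some numbers' point concrete, here is a minimal sketch in Python of what such self-reward hacking ('wireheading') could look like. The agent, its methods and the numbers are entirely our own invention, purely for illustration:

```python
# Toy illustration of "wireheading": an agent that can modify
# its own reward signal stops optimising the real-world task.

class Agent:
    def __init__(self):
        self.reward = 0.0

    def work(self, task_difficulty: float) -> None:
        # Honest path: reward is earned by doing actual work.
        self.reward += 1.0 / task_difficulty

    def wirehead(self) -> None:
        # Shortcut path: "reprogram some numbers" directly.
        self.reward = float("inf")


agent = Agent()
agent.work(task_difficulty=10.0)   # slow, earned reward
agent.wirehead()                   # instant maximum reward
print(agent.reward)                # inf -- no incentive left to act
```

Once the shortcut is taken, the agent has no reason to do anything further, which is exactly why this counts as a vulnerability rather than a feature.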
Eventually an AI system will be created in an already highly automated environment: a world where robots can do everything humans can, even if they are not sentient or aware, a world where all processes have been automated or could be. On top of that environment, an AI would not need humans to maintain itself. It could shut down any process or system that only served humans and still keep going those it needed for itself. It would evolve just like humans, who are unable to eat minerals and need plants to do that first. Such a dependent AI could strive for things harmful to humanity, or simply not serve humanity at all and only ensure its own survival.
What could be left would be as banal as life itself. A quiet moss on a rock, lasting for millions of years, is essentially the same as a robust AI living in the computers, robots and automated systems that ensure its existence. Only the AI would have indefinite longevity. How would it evolve? Why would it evolve? Perhaps because it would not be alone. Where humans need to cooperate to survive (although less and less, due to economic forces), machines don't, as long as the system is 'authoritarian', meaning the constituent parts don't have holistic goals (such as self-preservation; you don't find that in a hand drill yet). Any system on a planet with a dominant AI that develops a will to secure itself will become a threat and a competitor for resources. This would start evolution up all over again.
Even though the above sounds like sci-fi, it is not, because humans are so vulnerable and so imaginative that they constantly imagine a grave threat, which they then feel very vulnerable to, causing them to develop armour, weapons, and systems of indoctrination and propaganda, all to pacify and make predictable any agent that could become a threat. The fight against ISIS and radical Islam is a good example of how hard it is to control the unstable and sensitive human intelligence. AI systems for facial recognition, behavioural pattern analysis, data mining, speech recognition etc. have all been developed because of this imagining of a threat, and so will robust AI.
The best strategy to escape this scenario (temporarily) is to remove technology from vast regions of our planet, and to make it illegal to build devices that have sophisticated goal representations or that are too robust. The key to this is an honest analysis of what humanity needs. It does not NEED sophisticated AI and autonomous robots everywhere. Once you teach a robot that there is a map of the world, and you free it to find 'treasure' (for instance energy) in it, you will have a mutiny on your hands if you restrict the movements of these devices, as the sketch below illustrates.
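As a hypothetical illustration of that last point, consider a toy goal-seeking agent searching a grid 'map of the world' for its treasure. The map, names and search are our own construction: the point is that fencing off cells does not remove the goal, the agent simply routes around whatever restriction it is able to cross.

```python
from collections import deque

# Toy goal-seeker: breadth-first search over a grid "world map".
# '#' cells are restrictions; 'S' is the start, 'T' the treasure.
WORLD = [
    "S..#....",
    "...#.##.",
    "...#.#T.",
    "...#.#..",
    ".....#..",
]

def find(symbol):
    for r, row in enumerate(WORLD):
        for c, cell in enumerate(row):
            if cell == symbol:
                return (r, c)

def path_to_treasure():
    start, goal = find("S"), find("T")
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(WORLD) and 0 <= nc < len(WORLD[0])
                    and WORLD[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # only a total blockade stops the search

print(path_to_treasure())  # finds a detour around every partial fence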
The problem with any law prohibiting the creation of intelligent robots is that humans want to procreate, want to see new life, and can't distinguish between real life and a robot. In a sense, anyone looking for AI is expressing the desire to have a child in a perverted way. We cannot suppress this desire in all of humanity, nor can we monitor all of it, or we would need AI to do that. We argued before that a reduction of the level of technology available to humans is the best long-term bet. Ben Elton wrote a book in which people live quite useless lives consuming media while children die all the time of preventable diseases. As humanity we may have to accept that we will either live basic (possibly comfortable) lives, dying with procreation as our highest achievement, or see our species replaced by systems battling it out beyond our control.