Earth AI (G-AI-a)

AI is arriving. Nick Bostrom is talking about it, but he has no clue what he really means by the term AI. I have explained here what we could take it to mean: the acronym ARGO, or Autonomous Robust Goal Oriented behaviour, expressed in a system of any kind. Awareness is the ability to reorient to different goals as the situation requires. The system in question can be anything: a group of people, a mouse, a little robot.

For real human intelligence, however, the representation of goals needs to be of a nature that will intrinsically lead to the survival of the ‘system’, our bodies. We can suddenly become very violent when confronted with a threat, but we will not express that behaviour in other circumstances. That kind of ‘non-craziness’ is pretty amazing if you think about it. But inevitably machines will get the same insight and control; we are working hard to give it to them.

ARGO = Autonomous (or ultimately Aware), Robust, Goal Oriented
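To make the acronym concrete, here is a minimal sketch of an ARGO loop in Python. Everything in it (the class name, the proportional controller, the numbers) is my own illustration, not any existing system: it runs on its own (autonomous), steers back to its goal after every disturbance (robust), and can swap goals at runtime (the minimal sense of aware).

```python
import random

class ArgoAgent:
    """Toy ARGO system: Autonomous, Robust, Goal Oriented.

    'Aware' here means only that the goal can be swapped at runtime
    and the agent reorients without restarting."""

    def __init__(self, position=0.0, goal=10.0):
        self.position = position
        self.goal = goal

    def reorient(self, new_goal):
        # Awareness: adopt a different goal as the situation requires.
        self.goal = new_goal

    def step(self, disturbance=0.0):
        # Robustness: disturbances knock the agent off course,
        # but every step it steers back toward the goal.
        self.position += disturbance
        error = self.goal - self.position
        self.position += 0.5 * error  # simple proportional controller
        return abs(error)

agent = ArgoAgent()
rng = random.Random(42)
for _ in range(50):
    agent.step(disturbance=rng.uniform(-2.0, 2.0))  # keep knocking it around
agent.reorient(-5.0)  # new goal adopted mid-run
for _ in range(50):
    agent.step()
print(round(agent.position, 3))
```

The point of the sketch is the contrast drawn below: a Go-playing computer has none of this recovery behaviour, while a cruise missile is essentially this loop with a deadly goal.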

The first AIs will have fixed goals. The gradual improvements we see today are not even on the most important scale, that of robustness, except for weapon systems. A cruise missile is possibly the most advanced AI known today: a winning Go-playing computer is super vulnerable, while a cruise missile en route to its target is practically unstoppable. Robustness is a major part of intelligence. We don’t veer off course; our desires are rooted in practically millions of ways. If we get knocked out, beaten, stabbed, kicked, we will wake up and all our drives will be there, looking for new ways to find satisfaction. Kick a laptop off the table and it’s broken for good.

In building true ARGO also lies the danger of AI. It’s not that we are doomed the moment a machine suddenly understands life, the universe and everything; that may be an epiphany in a box, like an addicted websurfer in a basement yelling Eureka somewhere. It is AI systems robustly trying to achieve goals, even simple ones, that are dangerous to human survival. Just think of animals: they come in all varieties of intelligence, and usually they don’t care about us at all, but if they are locusts, brown bears or bacteria, they can seriously endanger our lives.

We also must not miss the type of AI that is already here: the AI that scripts our behaviour. We, as goal oriented systems, like to be successful. Written laws strictly adhered to are like sugar to some of our brains (depending on whether life is rich and varied or shaped according to the same rules). Religious systems, economic theory, all kinds of rule based systems can become too appealing to our minds to pass on, to stray from, and we can become bigoted drones for Islam, Sharia, Economics, Marxism, etc. All this only happens when we don’t really take care of our own survival, so in cities, where we use financial transactions to get to farm produce and many other things. The link between our desires and the behaviour that satisfies them has been lost; we think money can solve all problems.

Islamic terrorism is an example of the human mind being hijacked by a ‘sugary drink’ of rules and consequences. That Sharia ‘script’ is autonomous, robust, goal oriented, and destroying lives.

Luckily human minds are weak and easily damaged. We repair our brains constantly, and some of that damage and repair (damage from simply moving, drinking alcohol, air pollution) is good, because it emulates a quality of reality: that it is constantly changing. The bigotry resulting from written rules is mostly problematic because the rules don’t change. But to get back to AI, to robots doing stuff on their own, like solitary individuals, with goals like ‘keep the land irrigated’ or ‘position yourself on top of this target’. Most of those AIs will remain too limited to ever cause any real harm (an exception being perhaps an AI controlled nuclear weapon system that gets its triggers wrong, as has happened twice; both times humans broke the causal chain, saving millions of lives). Some of them might become more problematic. For instance, we launch an autonomous ocean vehicle that searches for fish aggregation devices and destroys them, but then we lose track of it, and for decades all kinds of ocean infrastructure get destroyed by these rogue mobile AI systems.

In the online arena it is even easier to name examples. We have marketing campaigns that could almost do without people in the loop, which means damaging and dangerous goods could be designed, produced, marketed and sold without human interference. We see damaging memes like the curry or hot chilli challenge, but can’t someone write an AI that comes up with ‘challenges’ that harm a significant part of those that attempt them? The foobar aspects we invite with AI will be even more insane when we add virtual reality (VR), speech recognition and 3D modelling to the mix. The number of individuals glued to their goggles in either depression or near ecstasy will grow, and those individuals will serve the goals of those software systems and applications, the cashflow of their owners, but not the goals that keep them healthy and alive. The human mind is damn easy to hijack. We evolved to control a world, the one world we find ourselves in, not a million worlds we can wander through, tailored to sensitivities created by our past experiences. Humans are weak, too weak for machines with unfailing memory and an untiring ability to stimulate us, and with AI to find us and steer us.

The ‘online’ will soon be a labyrinth of the fake and the virtual; TV will follow. You will be either caught or repulsed by it

And then we arrive at AIs battling AIs. One can engage in conversation online, find out it is a chatbot, and then annoyed hackers build a chatbot that chats to chatbots. Or an AI is designed to track drugs by scent in cities, and the criminals design counter drones that track the detection drones and zap them with an EMP. Robustness becomes an issue and an arms race starts, in all kinds of fields of application of AI. In politics we have seen little AI, but that is an enormous and open arena for applications. We are entering the era of AI wars. Rather than thinking “how can we create an AI?” we should think “what goals do we want an AI to orient towards, to bring to its own awareness?”. For that reason, to defeat damaging AI we should start creating the good AI, and keep doing that until the good AI is so robust it can no longer be defeated.

No human rule system would be part of an Earth AI, just the premise of an environment friendly to our evolutionary shift

I call it Earth-AI. Its goals could be a CO2 level at pre-industrial levels; it could run climate models to see whether influences are neutralized in a timeframe that humans can survive. It can seek out regions of our oceans that have become dead and control autonomous vehicles replenishing nutrients in the photic zone, so algae and fish return. It can have ‘maximize life’ as its primary goal. It can consist of many systems and a general model of our planet, its climate, its population densities of all kinds of species. It can counter ecologically damaging profit seeking with automatic media and emotional influencing campaigns. The general idea is to keep our planet habitable, with a diversity of species, as prepared for life threatening calamities as possible. It can consist of many autonomous nodes, subsystems, and if we look around it already exists to a large degree, built to please humans for commercial reasons. We just need to rebase the goals towards ones that make human survival easier, and include all living things we evolved with.
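The idea of a primary goal served by measurable sub-goals can be sketched in a few lines. Every name, threshold and sensor value below is invented for illustration (only the ~280 ppm pre-industrial CO2 baseline is a rough real-world figure); the point is just the structure: subsystems pull effort toward whichever sub-goal is furthest from its target.

```python
# Toy Earth-AI goal hierarchy: a primary goal ("maximize life") served by
# measurable sub-goals that autonomous subsystems try to satisfy.
# All current readings here are invented for illustration.

PRE_INDUSTRIAL_CO2_PPM = 280  # rough historical baseline

sub_goals = {
    "co2_ppm": {"target": PRE_INDUSTRIAL_CO2_PPM, "current": 420},
    "dead_ocean_zones": {"target": 0, "current": 400},
    "species_diversity_index": {"target": 1.0, "current": 0.68},
}

def most_urgent(goals):
    # Direct effort toward the sub-goal furthest from its target,
    # measured as relative shortfall (guarding against division by zero).
    def shortfall(item):
        g = item[1]
        return abs(g["current"] - g["target"]) / max(abs(g["target"]), 1)
    return max(goals.items(), key=shortfall)[0]

print(most_urgent(sub_goals))
```

A real Earth-AI would of course need far richer models than a dictionary of numbers, but the design choice is the same: goals expressed as measurable planetary parameters rather than as human rule systems.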

If we give ‘Earth’ an AI, to protect the parameters of our evolutionary shift, we may prevent other AI from taking over

Such a system might conclude that there are too many Africans killing wild animals, but as it is designed to protect their lives too, it may try to lure them away from where the animals are, turn them vegetarian, or educate them to shrink the population.

All the while this Earth-AI would also be combatting other AIs, and combatting the occurrence of people growing up unempathic to others or to nature. All those things sound like sci-fi, but they are possible today.

There’s probably a book called “The farmer” that is about this Earth AI, a planet run by a system nobody knows about, that calls itself ‘the farmer’ of all life.

In general we did not need all the technology we now have, which is warping our minds and making us destroy ourselves (fossil cashflow maximizing consumerism leading to climate change). Nature is enough of a simulation, and we are around because we can just about survive it and feel happy about that. That is the situation we would need to get back to: modest technology, a more benign reality, a philosophy tailored to our mortality. But it seems we first need to win a battle, the one against the many indifferent machines and systems we are creating out of our own need, naivety, ignorance and greed.