The Web will be AI or the present danger of WebAssembly

For some time experts have been warning, or trying to warn, humanity about the dangers of AI. Elon Musk has been one prominent captain of industry who explained that his biggest fear is AI, and that some knowledge base should be created to judge activities in the wild and to create new laws to prevent abuse of humans by (Aware) Autonomous Robust Goal-Orienting Systems (as we call them, ARGOs).

The kind of warning Elon Musk expresses is not very effective, because most people do not extrapolate from current developments to future outcomes; they do not extrapolate at all, they just want the things they come across. Most people have no model of external processes, except the process of getting what they want. So most people are unable and unwilling to estimate even the effect of their own actions. AI is a threat in part because most people are stupid.

The problem, to start with, is that we have only seen benign artificial intelligence: the aggressive potential of AI has so far been kept subdued inside games. This is because the gaming industry is an economic force, and it does not want to be held back when users and others start to recognize the danger of its virtual enemies entering our real-life environment. Opportunism and short-sightedness, born out of a desire to reach short-term goals, are allowing AI to develop at a rapid pace. It is naively seen as an economic opportunity.

Now what would you think if your neighbour was raising a black bear cub in an unkind manner and leaving it out to roam across your neighbourhood? Bears are highly intelligent; they want to survive, and they will do what is needed to achieve that. The bear would start to eat kids playing in the street if it couldn’t find any other food. We are lucky digital AI only needs electricity, but the point we are trying to make here is that any intelligence has a limited set of things it cares about. It must care about something, itself at minimum, or it can’t be intelligent for long, hence the Robust requirement in the ARGO definition of AI. We find animals stupid if they do not protect themselves. In evolutionary terms, the animals that were exposed and had no protection disappeared. So AI, when it is real, is simply a force we can’t know to be safe.

The goal of any AI is the main worry, because even if its goal is ‘to protect the children’, how do we know it is not doing that by trying to make all cars explode? Or by killing everyone it suspects of having the potential to harm them? If you have a digital AI with the goal of disabling every insulin pump it can find, written by a psychopathic developer with a grudge against humanity, you have to find the server, or servers, it is running on. It may be a genetic algorithm that replicates; by our definition it is enough for the code to be autonomous, uninterruptible and goal-oriented.

Now the threat has just been amplified. How? By the introduction of WebAssembly. WebAssembly means your browser can run code at nearly the speed of your native computer. It means all browsers together can become a massive parallel computer running all the time (because people have browsers open online 24 hours a day). A massive, global, fast parallel computer has been let loose on humanity, and it has already run face recognition. It can run almost any C or C++ code already in existence. It can listen, see, categorize, instruct, and do everything machine-learning algorithms can.
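To make concrete how little it takes to run compiled code inside a browser tab, the sketch below hand-assembles a tiny WebAssembly module, byte by byte, that exports a single `add` function, then instantiates and calls it. This is a minimal illustration of the binary format, not anyone’s production code; it runs unchanged in any modern browser or in Node.js, which exposes the same `WebAssembly` API.

```javascript
// A minimal WebAssembly module, written out byte by byte.
// It defines and exports a single function: add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                   // magic number "\0asm"
  0x01, 0x00, 0x00, 0x00,                   // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, // type section: one func type,
  0x01, 0x7f,                               //   (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                   // function section: one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, // export section: export "add"
  0x00, 0x00,                               //   as function index 0
  0x0a, 0x09, 0x01, 0x07, 0x00,             // code section: one body, no locals:
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b        //   local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate; in a real page this would come from fetch().
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // prints 5
```

Of course nobody writes these bytes by hand: toolchains such as Emscripten compile existing C and C++ directly to this format, which is exactly the point made above — decades of native code can now be shipped to every open browser tab.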

WebAssembly already runs games predictably at near-native speeds, and it can use all the cores of your computer. It is as powerful as any application you can run on your laptop or mobile phone, but it can run on multiple devices at once. And, as said, it can see, listen, and sense temperature and movement. We do not exaggerate if we state that its introduction is the single most dangerous thing humanity has ever done. If you still need to know how and why, you should ask your favorite politician. We know how this can turn dangerous, but we won’t tell you here; we don’t want it to happen.