One AI to Rule the World


Writing about AI is dangerous: it may inspire someone to build a system that solves problems we don’t want solved (such as how to build a small nuclear bomb), even though problem solving is not yet a strong suit of current deep learning algorithms.

Protecting against dangerous AI is almost impossible, because AI does not need to be highly advanced to be dangerous. It does not even need to be AI; it can be a product of AI: an evolved algorithm or data-processing method that, once developed, can do damage as part of a sensory or targeting system, for example.

How do we deal with this risk? The answer seems to lie in a system that can observe the world, a system that ‘loves’ humans. This sounds like science fiction, but to protect anything an AI needs to know what it is, what it looks like, and what it can do, although this also depends on the AI’s capabilities.

For example, if humans want to be protected from an AI that puts out forest fires, the AI needs to understand that humans need water too. Putting out a forest fire with a fleet of water-carrying drones cannot be allowed to deplete or pollute the bodies of water on which many people depend.
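
To make that constraint concrete, here is a minimal Python sketch of how such a drone planner might treat human water demand as a hard limit rather than just another term to trade off. The class names, figures, and the greedy allocation are hypothetical illustrations, not part of any real system.

    from dataclasses import dataclass

    @dataclass
    class WaterSource:
        name: str
        volume_m3: float          # water currently available
        human_demand_m3: float    # volume that must stay reserved for people

    def usable_volume(source: WaterSource) -> float:
        """Water the drones may take without touching the human reserve."""
        return max(0.0, source.volume_m3 - source.human_demand_m3)

    def plan_withdrawals(sources: list[WaterSource], needed_m3: float) -> dict[str, float]:
        """Greedily assign withdrawals, never dipping below any human reserve."""
        plan: dict[str, float] = {}
        remaining = needed_m3
        for src in sorted(sources, key=usable_volume, reverse=True):
            take = min(usable_volume(src), remaining)
            if take > 0:
                plan[src.name] = take
                remaining -= take
            if remaining <= 0:
                break
        if remaining > 0:
            # The fire cannot be fought from these sources without harming people;
            # the planner must escalate instead of violating the constraint.
            plan["shortfall_m3"] = remaining
        return plan

    if __name__ == "__main__":
        sources = [
            WaterSource("village reservoir", volume_m3=5000, human_demand_m3=4500),
            WaterSource("river intake", volume_m3=20000, human_demand_m3=8000),
        ]
        print(plan_withdrawals(sources, needed_m3=10000))

The point of the sketch is only that the human reserve is enforced before any optimization happens; a planner that merely penalizes overuse could still trade people’s water away for a better firefighting score.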

It is a comforting thought that we are not there yet; no Skynet is operational, but something like it may soon be, in some places. You only need a radar station and a fleet of armed drones that converge on and shoot at anything within a certain range. That is no great technical challenge; we have already seen the pieces. There are even self-aiming rifles for human use these days: every shot a bullseye.

The solution I have in mind is uncommon, and I have not seen it proposed elsewhere: create an AI that spans the globe. Not physically, even though it will live on servers distributed around the world, but in its reach; it will primarily be of use to others, a global segmented cortex. It will receive information and make it available, in a form that makes sense to deep-learning-style algorithms. That will be its initial functionality.
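
As a rough illustration only, the sketch below imagines one “segment” of such a cortex in Python: it receives observations, stores them as fixed-size vectors, and answers similarity queries that downstream learning systems could consume. The hash-based encoder, the CortexSegment class, and every other detail are assumptions made for the example, not a description of the actual design.

    import hashlib
    import math

    DIM = 64  # vector size, chosen arbitrarily for the sketch

    def embed(text: str) -> list[float]:
        """Stand-in for a learned encoder: hash words into a fixed-size vector."""
        vec = [0.0] * DIM
        for word in text.lower().split():
            h = int(hashlib.sha256(word.encode()).hexdigest(), 16)
            vec[h % DIM] += 1.0
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0
        return [x / norm for x in vec]

    class CortexSegment:
        """One regional node: receives information and makes it queryable."""

        def __init__(self) -> None:
            self.memory: list[tuple[str, list[float]]] = []

        def receive(self, observation: str) -> None:
            self.memory.append((observation, embed(observation)))

        def query(self, question: str, top_k: int = 3) -> list[str]:
            q = embed(question)
            scored = sorted(
                self.memory,
                key=lambda item: sum(a * b for a, b in zip(q, item[1])),
                reverse=True,
            )
            return [text for text, _ in scored[:top_k]]

    if __name__ == "__main__":
        segment = CortexSegment()
        segment.receive("wildfire reported near the northern reservoir")
        segment.receive("rainfall levels normal in the coastal region")
        print(segment.query("fire near reservoir"))

In a real system the toy encoder would be replaced by a learned one, and many such segments around the world would share what they ingest; the sketch only shows the receive-and-make-available loop.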

The style of processing I have in mind is not that of modern LSTM networks, which are too rigid, but I have not yet managed to demonstrate the principle, so I cannot write about it here. There is also the problem that the world does not respond to AI processing running on servers, so the system cannot have real goals or take real action on its own. There is a way around this problem, though, and I will try to find out whether it is feasible. To be continued.