Artificial intelligence is a topic of interest for many people. It is a potential threat, because it may shape our world into one we can no longer control, one in which humanity may perish. Nick Bostrom is one of the most vocal speakers on so-called ‘Superintelligence’, even though his analysis does not surpass that of many cheap sci-fi thrillers.
Humans are pretty stupid. Need proof? Look at our lack of control over fossil fuel use.
Humans are, in spite of their complex makeup, not very complex in their behaviour. We eat, drink, sleep, and fabricate things we enjoy. Our ability to learn and change our behaviour is slow and limited, and the older we grow, the less flexible we become (not that we ever demonstrate infinite variety). Our means of communication are laughably inefficient. We already had to design a set of rules we want each of us to obey, the rules of law and economics, because waiting until people figure things out for themselves is just a waste of time (according to the ‘bosses’ of this system). We have to be aware that although our intelligence is complex and intricate, it does not take the same amount of complexity to outsmart us. Take for instance a system that recognizes patterns of fraud in financial transactions.
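To make the point concrete, a system that outsmarts human reviewers at this task does not need anything like human complexity. Here is a minimal sketch; the rules, thresholds and field names are hypothetical, for illustration only, not any real fraud system:

```python
# Minimal sketch: a fraud "detector" far simpler than the humans it outsmarts.
# The rules, thresholds and transaction fields are hypothetical.

def looks_fraudulent(tx, history):
    """Flag a transaction using a few crude rules over past amounts."""
    if not history:
        return False
    avg = sum(history) / len(history)
    # Rule 1: amount wildly out of line with this account's habits.
    if tx["amount"] > 10 * avg:
        return True
    # Rule 2: rapid-fire activity (many transactions per minute).
    if tx.get("per_minute", 0) > 5:
        return True
    return False

history = [20.0, 35.0, 18.0, 42.0]
print(looks_fraudulent({"amount": 900.0}, history))  # large spike -> True
print(looks_fraudulent({"amount": 25.0}, history))   # normal -> False
```

A handful of thresholds like these scans millions of transactions per second, something no amount of human intelligence can match.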
The economic system is an AI in the sense that we can’t control it, and it learns to ward off any attack on its reign using us humans
Our defensive instinct causes us to make others known and vulnerable, to ourselves and to whoever can access the information we gather about them (see big data, privacy and eugenics)
The major change to our automated, economically directed environment is the introduction of very powerful, low-energy processor chips using a new electronic component: the memristor. A memristor is a programmable resistor, a component that can hold an analog, varying resistance. Compare it to a bit in our DRAM: such a bit is a 0-or-1 state (implemented as charge on a capacitor), while a memristor does two things differently. First, it can hold an arbitrary state between the common extremes of 0 and 1, and second, it doesn’t need power to maintain that state. A third thing can be added: a chip based on memristor technology can contain an entire computer architecture, does not need to be silicon based, and can even be three dimensional.
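The contrast between the two storage cells can be caricatured in a toy model. This is an illustration of the behavioural difference described above, not device physics:

```python
# Toy model of the contrast described above: a DRAM cell forgets its binary
# state when power is removed; a memristor retains an analog state with no
# power at all. Illustration only, not device physics.

class DRAMBit:
    def __init__(self):
        self.state = 0           # only 0 or 1
    def write(self, value):
        self.state = 1 if value else 0
    def power_loss(self):
        self.state = 0           # charge leaks away: state is gone

class Memristor:
    def __init__(self):
        self.state = 0.0         # any value in [0.0, 1.0]
    def write(self, value):
        self.state = min(max(float(value), 0.0), 1.0)
    def power_loss(self):
        pass                     # non-volatile: nothing happens

bit, mem = DRAMBit(), Memristor()
bit.write(1)
mem.write(0.37)                  # an intermediate, analog state
bit.power_loss()
mem.power_loss()
print(bit.state)   # 0    -- lost
print(mem.state)   # 0.37 -- retained
```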
A memristor is a device that adds persistence of state to common electronic systems, so that they can model aspects of reality without cost
What we have here is a new device category: low or no power, high processing capacity, and far faster than what we know today (mainly because the memory can sit where the processing occurs, not in a separate chip). We are also dealing with a much less transparent type of design, one in which we cannot easily see what is going on, because it is going on inside a 3D layered chip. The state of a memristor can symbolize anything, just like a bit, but it is analog, so multi-bit if we want, and that state does not disappear: it persists independently until it is changed or erased. This means that if you have a memristor-based drone that seeks a target, it can be powered down, and when it is powered up again it will continue exactly where it left off. It may need so little power that it can run on stray radio-wave energy, solar or a nuclear battery, in which case it could never stop.
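The "continue exactly where it left off" property can be sketched in software by writing every state change through to non-volatile storage, the way a memristor cell would simply retain it. The class and field names below are hypothetical:

```python
# Sketch of the persistence property described above: state held in
# non-volatile storage survives a power cycle. Names are hypothetical.

import json
import os
import tempfile

class PersistentSeeker:
    """A toy agent whose entire working state lives in one dict that is
    written through to non-volatile storage on every update."""
    def __init__(self, path):
        self.path = path
        if os.path.exists(path):
            with open(path) as f:
                self.state = json.load(f)   # resume after power-up
        else:
            self.state = {"position": 0, "target": 10}
    def step(self):
        self.state["position"] += 1
        with open(self.path, "w") as f:
            json.dump(self.state, f)        # persists like a memristor cell

path = os.path.join(tempfile.mkdtemp(), "state.json")
a = PersistentSeeker(path)
for _ in range(4):
    a.step()
# "Power down": the running object is discarded entirely...
del a
# ...and a fresh power-up resumes at position 4, not 0.
b = PersistentSeeker(path)
print(b.state["position"])   # 4
```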
Hewlett-Packard thought memristors were a great idea and came up with a cloud-based machine concept with all the advances mentioned above. But they backtracked on this idea: they will not use memristors as soon as they announced. One reason may be that they decided it would cut too hard into their business of selling independent chips of all kinds, or that they could not convince their creditors to let them make devices that would kill companies like Intel and AMD overnight. This is the economic argument. The other could be that the memristor would make computers too powerful, too autonomous, putting too much power in too little space. In the hands of a hacking public, the potential for crime and mayhem (which we already see in the malware and online-hacking sphere) would become uncontrollable. We don’t want electronics that can run complex simulations of reality like we can, out there, to be used as tools of power or, as Bostrom would expect, protecting their own existence even if someone defined that existence as stupidly as “make paperclips out of all available iron”, resulting in a mountain of paperclips on a stone-age planet.
We can make the mistake of creating autonomous systems that can persist against our will
Maybe the time has come for science to stop sharing all knowledge equally, simply because knowledge is power, and power needs to be controlled or it will become destructive. We can really use autonomous systems with low maintenance and power requirements as we restore life to our planet, but we don’t want them to become a weapon in the hands of some megalomaniac or overly self-protecting group (like, for instance, bankers), thereby creating a polarized world paralyzed by a perpetual conflict (like Israel/Palestine).
Overly self-protecting groups of people will use any means they can get their hands on
The problem with AI is that it does not have to be something we can talk to or see. We can’t talk to snakes and bears, but they can make good use of our protein. Comparing an AI to an animal also makes it easier to understand that it may not have any interest in our existence. We think we need to live, but a wolf or shark doesn’t see it that way. This means simple autonomous systems can become a risk if we can’t figure out where and how they are implemented. As I wrote before, ARGO is intelligence: Aware, Robust, Goal-Orienting systems. Even without the Aware (or Autonomous), a robust system that strives toward a goal may cause problems if we can’t control it or don’t know where it exists. For this reason we should ban encryption, even while our defensive instincts are pushing to make all communication, even within computers, encrypted.
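The Robust and Goal-Orienting parts of ARGO need no awareness at all; a feedback loop is enough. A minimal sketch, with a hypothetical correction gain of 0.5:

```python
# Toy "Robust Goal-Orienting" loop: no awareness, just a goal and error
# correction. The disturbances and the 0.5 gain are hypothetical.

def pursue(goal, state, disturbances):
    """Drive state toward goal despite a sequence of outside pushes."""
    for push in disturbances:
        state += push                    # the world pushes the system around
        state += (goal - state) * 0.5    # ...and it corrects toward its goal
    return state

# Knocked around three times, then left alone: it still closes in on 10.
final = pursue(10.0, 0.0, [3, -5, 2, 0, 0, 0, 0, 0])
print(round(final, 2))   # 9.96
```

Such a loop keeps striving whatever we do to it, which is exactly what makes it a problem if we can't locate or switch it off.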
A sci-fi future that can happen, will happen. Sci-fi futures should be treated as security risk profiles to be avoided
Our basic drive to avoid decomposition and rising entropy may result in the creation of ordered systems that are so much better at it that we won’t matter and won’t be able to control them. The irony is that an immortal AI has nothing much to do, so the planet could become a quiet place until some unanticipated event destroys the AI. It is only the principle of survival that made us the apex predator, and we may lose that position.
The best strategy forward is to cut off paths to futures we should fear. This may mean forced scientific regression.
Many, including Elon Musk and Bostrom, warn us of the risks of AI, and although I feel their imagination is running low on insight, they have a major point in worrying that we may construct an enemy. We are good at that; after all, we have a century of happily destructive fossil-fuel use that is about to tie a noose around our neck. We won’t know: we are generally too stupid to understand the implications of the large systems we create. Better to focus on what a simple human being needs, occupy his or her time with challenges within his or her abilities (just as our consumer economy challenges us to all kinds of harmless stuff that has to meet one important criterion: it can’t make you economically independent!), but make it so that the least amount of technology or technological understanding is needed to perpetuate the lifestyle and culture. Let’s only use technology to get us to that reality.