Computers are great tools and have become ubiquitous in our society. Until now they have mostly been deployed in the open, unshielded from external tampering, largely because most people lack the expertise to protect them. If we want an analogy, it is that we still live mostly in a world without fences or barbed wire. Our lands used to be more open and less fenced off, but economic forces conspired to put a price on every square meter. What was a common good, used by those best capable of using it, is now hoarded for speculation, housing, farmland.
There is another aspect to the ability to fence something off: inside the fence, we are less able to control what goes on. In the case of a computer, we will be less able to control what happens inside it, or inside its abstract equivalent, a worker thread in the cloud or an Internet of Things device that accepts tasks. This too has an analogy, namely intelligence. We may not notice it, but a large part of what makes us intelligent hinges on the fact that we evade or prevent disruption, that we are able to pick up where we left off even if we are distracted. A wildebeest would not last very long if it did not have a tough hide to keep out the claws of lions.
Nothing can exist if it does not protect its integrity and/or strategy
My definition of intelligence is ARGO: Autonomous, Robust, Goal Orientation. The Robust part is very important: it means not only that you can’t break into the substrate (a processor of some kind), but also that the system has multiple ways to continue if one is sabotaged. A good example is the cruise missile, which can navigate by stars, GPS, maps and compass to get to its target. Jam the GPS and it keeps flying (not sure if that’s possible, by the way). To qualify as AI you only need those aspects. Goal orientation may seem simple, but a system that only tries to hold a set parameter has no real goal, no real representation of what it is trying to achieve. It can ‘orient’, but only in one way. There is usually an easy way around such systems.
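The Robust part of ARGO can be sketched in a few lines: a goal-directed system with redundant channels, where sabotaging one channel does not stop it. The navigation sources and their ordering below are purely illustrative, not a real guidance design.

```python
# A minimal sketch of robustness through redundancy: try each channel
# in turn, falling back when one is jammed or sabotaged.
def navigate(sources):
    """sources: list of (name, read_position) pairs, in order of preference."""
    for name, read_position in sources:
        try:
            return name, read_position()
        except RuntimeError:
            continue  # this channel is unavailable; fall back to the next
    raise RuntimeError("all navigation channels lost")

def jammed():
    raise RuntimeError("signal jammed")

# GPS is jammed, but the system keeps going on star navigation.
sources = [("gps", jammed), ("stars", lambda: (52.0, 4.3))]
print(navigate(sources))
```

A system with only one channel, or one that merely holds a set parameter, fails the moment that single path is disrupted; redundancy is what makes the goal orientation robust.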
Cryptography is the digital fence: the computer can now protect itself by adding time to the cost of accessing its data, sometimes more time than our universe has left. This is possible because hard computational problems exist, problems where the only way to a solution is trial and error, and the space of possible solutions is ginormous. Cryptography does another thing, which is to introduce one-way transformations: you can make a hash (a random-looking condensed string) of a text, but you can never deduce the text from that hash. It’s like a digital valve. It has introduced time into computation. Where from a computational standpoint you can’t tell whether your computations run forward or backward, now, to our eyes, you can clearly see that going from a text to a hash is easy, and the other way around is impossible. This is what blockchains are about.
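The digital valve can be demonstrated with a standard hash function. The sketch below uses Python’s built-in SHA-256; the sample texts and the tiny candidate list are illustrative only, and a real input space is far too large to enumerate.

```python
# One-way transformation: hashing is cheap forward, and the only way
# back is trial and error over the space of possible inputs.
import hashlib

def digest(text: str) -> str:
    """Condense arbitrary text into a fixed-size hash string."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Forward direction: instant.
h = digest("attack at dawn")

# Reverse direction: nothing in the hash reveals the text. An attacker
# can only guess candidate inputs and compare digests.
def brute_force(target: str, candidates: list[str]):
    for guess in candidates:
        if digest(guess) == target:
            return guess
    return None  # search space exhausted without a match
```

With a 256-bit hash there are 2^256 possible digests, so exhaustive guessing takes longer than the universe has left; that asymmetry is the arrow of time the paragraph describes.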
AI is a threat to all of us, but not in the way most people imagine, as a thing roaming about the internets. This is because of cryptography. Maybe not today, but certainly as cryptography becomes the fence system of cyberspace. As more and more data and systems become unreachable without explicit permission, we will find that any AI that roams open data sources, or tries to manipulate industrial or health care systems through the net, runs into walls it can’t break.
The economy, or rather those still strongly advocating economic thought and practice, does not like this very much; it wants us to share as much of our lives as possible, because this makes us easier to direct and exploit. Many startups are trying to mine that vulnerability to make us engage with paid services, and because of its economic value many politicians are allowing the collection of all kinds of data on us. The fact is that unless data is encrypted there is no way to secure it from eavesdroppers; that is the whole reason encryption was developed. If we are truly valuable, there are computer viruses for sale and a raft of other technologies to break through our weak (and economically weakened) protective barriers. The more we become aware of this, the more we will embrace encryption.
As algorithms and AI evolve they will increasingly be used to make their owners rich. Reasoned the other way around: if you have an intelligent drone that can autonomously break in and steal valuable things, someone will start using it. The risks of AI range from the petty to that of triggering a possible global thermonuclear war. Nobody has restricted AI to any substrate, so it might as well be an intelligent chatbot that directs real people to achieve its goals. The mess AI will create ranges from the Godzilla killer bot to undetected manipulations of data that can have serious repercussions.
It seems the best strategy against automated systems empowering immoral people, or against a robot that wants something that threatens human lives, is to start using the digital fence of cryptography, especially around industrial systems that are now very weakly protected. Once controlled or autonomous robots that can invade homes and factories undetected become prolific, we will be at war with them until we (imho) reduce the availability of the supporting technology. Naturally the dominating individuals will try to keep the biggest weapon they have handy, so eradicating lethal and perhaps overpowering AI will not happen. We are wise to make the digital realm more of an obstacle course, and to keep track of AI developments as if it were weapons technology.