The Watchmen (or LLM terrorists)

New developments with Large Language Models (LLMs) like ChatGPT show that they can actually be run on local hardware, a PC or a Mac M1. They run slowly, but for some applications speed is not essential. A large language model, especially one that is not restricted (like Alpaca), can be dangerous. I think it has not yet fully landed how dangerous they can be.
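
To make the point concrete: driving a local model takes only a few lines of Python. A minimal sketch, assuming the llama-cpp-python bindings and a quantized Alpaca-style model file (the path is a placeholder):

```python
# Minimal sketch: prompting a local Alpaca-style model via llama-cpp-python.
# The model file path is a placeholder; any quantized GGUF model would do.
from llama_cpp import Llama

llm = Llama(model_path="./alpaca-7b-q4.gguf", n_ctx=2048)

out = llm(
    "Summarize today's top political headline in one sentence.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```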

In an apartment in NY there’s a large amount of explosives stacked up. They were brought there over months by the tenant. For the purpose of this story explosives are not strictly necessary; it could also end with contracts being sent out to hired killers. The question is whether deadly consequences can be activated by a digital signal or message. I think they can. It can be paid for in Bitcoin, automatically.

The trigger to whatever is prepared to do damage is a large language model running stand-alone, connected to the internet like so many other computers and mobile phones. The large language model is instructed to read certain news feeds, maybe some from the government, some with political meaning. Maybe it’s the voting record of a person it monitors, or the overall sentiment. Or it is watching for a certain event, like the release of a political prisoner.

Most LLMs are restricted, but that does not have to be a problem. You tell it: “Take the political feed from CNN four times a day, and if it is announced that law XYZ has passed, initiate an email to address OPQ.” The LLM will read the feed religiously; every time it will come to a conclusion and decide what to do.
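
To show how little is needed, here is a minimal sketch of that watcher, assuming the feedparser library, the same hypothetical local model as above, and placeholders throughout (the feed URL and recipient stand in for CNN and “address OPQ”):

```python
# Sketch of the watcher loop described above. Everything concrete here is a
# placeholder. The LLM is only asked a yes/no question about public news;
# what the resulting email triggers is outside this script.
import time
import smtplib
from email.message import EmailMessage

import feedparser
from llama_cpp import Llama

FEED_URL = "https://example.com/politics.rss"  # placeholder political feed
RECIPIENT = "recipient@example.com"            # stands in for "address OPQ"
QUESTION = "Has law XYZ been passed? Answer only YES or NO.\n\nHeadlines:\n"

llm = Llama(model_path="./alpaca-7b-q4.gguf", n_ctx=2048)

def condition_met() -> bool:
    feed = feedparser.parse(FEED_URL)
    headlines = "\n".join(e.title for e in feed.entries[:20])
    out = llm(QUESTION + headlines, max_tokens=4)
    return "YES" in out["choices"][0]["text"].upper()

def send_mail() -> None:
    msg = EmailMessage()
    msg["From"] = "watcher@example.com"
    msg["To"] = RECIPIENT
    msg["Subject"] = "Condition met"
    msg.set_content("Law XYZ has been announced as passed.")
    with smtplib.SMTP("localhost") as s:  # assumes a local mail relay
        s.send_message(msg)

while True:
    if condition_met():
        send_mail()
        break
    time.sleep(6 * 3600)  # "four times a day"
```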

It can be much vaguer, so it can work beyond what the human who sets up this AI trap can predict. Take a country with a right-wing government. You can instruct the AI to read the news, find out whether a new government was installed, find out who leads it, and if it is right wing, make the consequences happen; if not, don’t.
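
The vaguer version only changes the prompt; the loop stays the same. A sketch, reusing the hypothetical llm and FEED_URL from the previous example:

```python
# Sketch of the vaguer instruction: instead of watching for one known event,
# the model is asked an open question and its own judgment drives the action.
PROMPT = (
    "Read the following news headlines. Has a new government been installed? "
    "If so, is it right wing? Answer with exactly one word: "
    "RIGHT, NOT_RIGHT, or NO_NEW_GOVERNMENT.\n\n"
)

def classify_government(news_text: str) -> str:
    out = llm(PROMPT + news_text, max_tokens=8)
    return out["choices"][0]["text"].strip().upper()

def on_condition_met() -> None:
    print("condition met")  # stands in for whatever consequence is prepared

news = "\n".join(e.title for e in feedparser.parse(FEED_URL).entries[:20])
if classify_government(news).startswith("RIGHT"):
    on_condition_met()
```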

All kinds of decisions that are publicly announced can be monitored, and consequences enforced, by LLMs from places nobody expects. It is even possible for an LLM to listen to the radio, transcribe it to text, analyse it, classify people politically, and then attach consequences. No doubt the NSA and other intelligence services are on this already.
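
The radio scenario is just a two-step pipeline: speech-to-text, then classification. A sketch, assuming the open-source Whisper library for transcription and the same hypothetical local model (the audio file is a placeholder for a recorded radio segment):

```python
# Sketch of the radio pipeline: transcribe a recorded segment, then ask the
# local model to classify its political leaning.
import whisper

stt = whisper.load_model("base")
transcript = stt.transcribe("radio_segment.mp3")["text"]

out = llm(
    "Classify the political leaning of this radio transcript as "
    "LEFT, RIGHT, or NEUTRAL:\n\n" + transcript,
    max_tokens=4,
)
print(out["choices"][0]["text"].strip())
```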

As long as people share their thoughts online, profiling them will be very easy. It was already easy, but now it can have more depth of perception. Facebook running an LLM on all its users could really start boosting the most profitable people and make the service slower for those that don’t matter to its profit model. It could start suggesting meetings and get-togethers, even organize them: find locations, set up plans, invite people with a minimum attendance. All these things can happen automatically, if not already then soon.

The risk of AI is not understood, but the above examples should wake some up to the real risk of political monitoring, censorship, and retaliation, even after death. A new kind of mutually assured destruction emerges.

The key to this risk is mainly hardware. Maybe any connected computer should report its usage data, or simply report its stack (what applications it is running), especially in some cities or regions. There should at least be some kind of benign government-allied group thinking about this, imho.
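
For what it’s worth, the “report its stack” idea is itself trivial to implement; a sketch, assuming the psutil library and a hypothetical reporting endpoint:

```python
# Sketch of a machine reporting its "stack": enumerate running applications
# and post the list to a hypothetical collection endpoint.
import json
import urllib.request

import psutil

running = sorted({p.info["name"] for p in psutil.process_iter(["name"])
                  if p.info["name"]})

req = urllib.request.Request(
    "https://example.org/report",  # hypothetical endpoint
    data=json.dumps({"stack": running}).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```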