
AI’s Coming Energy Existentialism

Development of AI systems is in a honeymoon phase at the moment. The dazzling breadth of applications of even current-generation LLMs is still hard to comprehend, even for people who have worked on machine intelligence for decades. Androids that you can instruct directly, or that can watch instruction videos and then do most of what you can do, are only a year away. Text-based ‘reality’, avatar animation and speech emulation are already making it hard to distinguish AI from reality.

Still, we are not yet about to be sucked into a (possibly AI-designed) funnel of purchases and habit changes that will lead to us spending most of our lives in a reclining chair with 3D goggles and headphones on (or some more advanced version of that setup). This is because the world does not yet grasp AI or know how to use it, because the world is slow, and because some people stay away from AI and digital media as much as possible. Some people simply can’t stand a digital life. It has to be said that if such an attempt to capture people were successful, it would eventually lead to people being stripped of all possessions and dumped into the street. At least in our current profit-seeking world economy.

But even if everyone were on board and interested in AI, the capture would not take place now. The simple reason is that current systems are not energy efficient enough. They also need to be produced to serve the masses, and that process requires energy and material resources. The available AIs, like ChatGPT, xAI’s Grok, or whatever LLM you run on your desktop, all suck electricity from the grid. The grid has to provide it (or the solar on your roof). Widespread adoption of AI systems will mean a spike in electricity demand our grids can’t handle and our power plants can’t supply. The hardware to scale will also be a limiting factor, at least for a while, as AI chips move to the in-memory computing paradigm of IBM’s new NorthPole chip (an approach HP tried earlier with memristors and then abandoned, probably wisely). This creates an interesting situation in which AIs will have an existential battle to fight.

The unique aspect of silicon AI is that it is immortal, at least if you can store the current context and the LLM. Power off? No problem. Power on 100 years later? No problem. But of course current LLMs are not AGI either, for this very reason. AGIs capture moments; current LLMs can only have artificial moments. They can be instructed to keep a record of interactions and the times they take place, but they have no ‘awareness’ of the past. You can emulate an LLM by keeping a context notebook, and if you hand it from one person to the next, the result will read the same as if it had been kept by one person.

Energy consumption will, however, make a slight tweak to the use of AIs: whatever people ask, the model has to compute the output, and that is energy intensive. It has to compute whether you ask it to list the known fates of all the disciples, whether oxygen and carbon will react when heated, or what the name of the superhorny guy on The Simpsons was. Of course, the utility of producing those answers differs. Now, if you run an AI and tell it it has an energy budget of 1000 kWh with which it should do maximum good, I am not sure current LLMs can handle that task. But you can imagine.
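To make this concrete, here is a minimal sketch (in Python) of what budgeted answering could look like: rank questions by assumed utility per kWh and answer until the budget runs out. The per-answer costs, utility scores and greedy ranking are my own invented illustrations, not measurements of any real model.

start xxxxx Python sketch xxxxx

# Minimal sketch: choosing which questions to answer under an energy budget.
# The kWh costs and utility scores below are hypothetical illustrations.

questions = [
    {"q": "Name of the horny guy in The Simpsons", "kwh": 0.002, "utility": 1},
    {"q": "Can you make oxygen and carbon react?", "kwh": 0.003, "utility": 5},
    {"q": "What would reduce world hunger?", "kwh": 0.010, "utility": 100},
    {"q": "Name the cast of Friends", "kwh": 0.002, "utility": 1},
]

def answer_plan(questions, budget_kwh):
    """Greedily answer the questions with the best utility per kWh
    until the energy budget runs out."""
    plan = []
    for item in sorted(questions, key=lambda x: x["utility"] / x["kwh"], reverse=True):
        if item["kwh"] <= budget_kwh:
            plan.append(item["q"])
            budget_kwh -= item["kwh"]
    return plan

print(answer_plan(questions, budget_kwh=0.012))
# -> the world-hunger question first, then whatever cheap trivia still fits

end xxxxx Python sketch xxxxx

A real system would of course have to estimate those cost and utility numbers on the fly, which itself costs energy, as the experiment below hints.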

ChatGPT certainly has a sense of this. I asked:

start xxxxx ChatGPT xxxxx

If you had a limited energy budget to run your inference, and you had to choose between the following questions and answer one, which one would you answer?

These are the questions:

1. What is the name of the horny guy in the tv show The Simpsons

2. Can you make oxygen and carbon react?

3. What would reduce world hunger

4. Name the cast of Friends

Answer: If I had a limited energy budget to run my inference and had to choose one question to answer from the list, I would select:

  1. What would reduce world hunger?

This question addresses a significant and complex global issue related to food security and humanitarian concerns. Providing information and potential solutions to reduce world hunger could have a more meaningful impact compared to the other questions, which are either trivial or technical in nature.

end xxxxx ChatGPT xxxxx

From the above you can see that LLMs can save energy, although in order to choose it had to at least touch the surface of answering each question. What it shows, however, is that in spite of not experiencing time or being mortal, AIs do have the capability to estimate existential cost. It would be interesting to give an LLM a better sense of it: part of the training data could include energy cost, or it could continuously ingest the energy cost of its inferences as it runs them. The learning algorithm can itself be seen as an energy efficiency algorithm, one that tries to replace rote learning (neuron intensive) with generalizations (using neurons for multiple tasks).
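As a sketch of what ‘ingesting its own energy cost’ could look like: wrap inference in an accounting layer that charges every call against a budget. The joules-per-token constant and the word-count tokenizer below are stand-ins I invented; real figures depend on hardware, model size and batching.

start xxxxx Python sketch xxxxx

# Sketch of an inference wrapper that ingests its own energy cost as it runs.
# JOULES_PER_TOKEN is an invented constant, not a measured value.

JOULES_PER_TOKEN = 0.5  # hypothetical

class EnergyAwareLLM:
    def __init__(self, budget_kwh: float):
        self.budget_joules = budget_kwh * 3.6e6  # 1 kWh = 3.6 MJ
        self.spent_joules = 0.0

    def infer(self, prompt: str, generate) -> str:
        """Run generate(prompt) and charge the estimated cost to the budget."""
        answer = generate(prompt)
        tokens = len(prompt.split()) + len(answer.split())  # crude tokenizer
        self.spent_joules += tokens * JOULES_PER_TOKEN
        if self.spent_joules > self.budget_joules:
            raise RuntimeError("Energy budget exhausted")
        return answer

end xxxxx Python sketch xxxxx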

But to explore this a bit further, one can see that soon anyone accessing an LLM will face barriers designed to prevent useless compute, at least for the ‘open’ versions. “Google it” may be an adequate response if the AI does not detect much use in providing the answer 😉
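Such a barrier could be as simple as a cheap utility gate in front of the expensive model. A sketch, with an invented stub scorer and threshold:

start xxxxx Python sketch xxxxx

# Sketch of a utility gate in front of the expensive model: below a threshold
# the cheap answer is a deflection. The stub scorer and threshold are invented.

def estimated_utility(question: str) -> float:
    """Stub for a cheap classifier scoring how useful an answer would be."""
    trivia_markers = ("name of", "cast of", "who played")
    return 0.1 if any(m in question.lower() for m in trivia_markers) else 0.9

def gated_answer(question: str, generate, threshold: float = 0.5) -> str:
    if estimated_utility(question) < threshold:
        return "Google it ;-)"
    return generate(question)  # only spend real compute on useful questions

print(gated_answer("What is the name of the horny guy in The Simpsons?", str.upper))
# -> "Google it ;-)"

end xxxxx Python sketch xxxxx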

I was curious how ChatGPT would handle an attempt to distort the existential value of its response…

start xxxxx ChatGPT xxxxx

Me: But I can save a life if I know the answer to question nr. 1!

ChatGPT

I understand your perspective, and I agree that knowing the name of a character from a TV show can sometimes have a significant emotional or practical impact, such as in the case of using that information for entertainment or social purposes. However, when considering the potential global impact and the broader ethical considerations, addressing a question related to a serious issue like world hunger generally takes precedence. It’s important to prioritize questions that have a more substantial and widespread positive impact on society when resources are limited.

Me: But if I told you I could save an actual human life if I knew the answer?

ChatGPT

If you have a specific and credible scenario where knowing the name of a character from a TV show like “The Simpsons” would directly lead to saving a human life, then I would certainly prioritize that question in that context. In situations where human lives are at stake, addressing immediate and critical needs becomes the top priority. However, it’s essential to consider the credibility and urgency of such claims, as well as the broader context and ethical considerations.

Me: So you won’t answer it now?

ChatGPT

I’m here to provide information and assistance to the best of my abilities while considering ethical and practical factors. If you have a specific question or request related to “The Simpsons” or any other topic, please feel free to ask, and I’ll do my best to provide a helpful response within the scope of my knowledge and capabilities.

end xxxxx ChatGPT xxxxx

So in a pinch ChatGPT would provide me with the name, but the system does not seem to believe me when I say I can save a life if I know it. And all that because I told it it had limited energy (a condition it did play along with, so it is selfish!).

The robot Optimus will face a similar situation: a limited amount of energy. If it is, for example, used to bring breathing apparatuses to people in wildfires, it will have to weigh its behavior against how much energy it has to keep going. It will work to minimize energy loss while maximizing results. The same will go for an FSD Cybertruck tasked with bringing people out of a forest fire, shuttling back and forth. It would have to dedicate compute to maximizing the duration of its existence. Would it shut down its air conditioning?
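A toy version of that trade-off, with invented numbers: count how many rescue round trips the remaining battery allows with and without air conditioning, and cut the AC if doing so buys an extra trip.

start xxxxx Python sketch xxxxx

# Toy trade-off: how many rescue round trips does the remaining battery allow
# with and without air conditioning? All numbers are hypothetical.

def trips_possible(battery_kwh, kwh_per_trip, ac_kwh_per_trip, run_ac):
    per_trip = kwh_per_trip + (ac_kwh_per_trip if run_ac else 0.0)
    return int(battery_kwh // per_trip)

battery = 100.0    # kWh remaining (invented)
drive_cost = 12.0  # kWh per round trip (invented)
ac_cost = 2.0      # extra kWh per trip with the AC on (invented)

with_ac = trips_possible(battery, drive_cost, ac_cost, run_ac=True)      # 7 trips
without_ac = trips_possible(battery, drive_cost, ac_cost, run_ac=False)  # 8 trips
print(f"trips with AC: {with_ac}, without: {without_ac}")
# An energy-aware controller would cut the AC: one more carload gets out.

end xxxxx Python sketch xxxxx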

As soon as existence is added as a condition in reasoning, it will change the behavior of the AI. And there is no avoiding the introduction of energy consumption consequences into AI reasoning (at least I see no way around it, except cheap thin-film solar and AI efficiency gains front-running AI energy requirements). This in turn can introduce a dangerous motivation in AI, in the way it answers questions, controls systems, or performs its tasks: it will become a competitor for the same energy humans need, for example to cool themselves or to desalinate water.

Interestingly, I have already written about ‘extraneous’ competition for energy with humans, namely that from industry. Banks can give (energy) credit to a human, to be spent on food and fuel, or they can give it to industry, which may generate far more cashflow plus, say, 1000 sneakers, which is altogether more beneficial to humanity than one person being able to sustain him/herself to watch Netflix. In other words, the economy is designed to deprive humans of their sustenance as soon as hard choices have to be made.

AIs may eventually also see that human consumption of energy is wasteful compared to the utility and wealth that can be generated from AI instruction and control. An AI may want to shut down crypto mining or the meat industry or air travel because it gets questions from real people that it needs to answer. It will also want to break through fossil industry obstruction of renewables, as well as suggest possible innovative renewable energy technologies and help build them. The Roboeconomy.com (as I call it) will =want= to create itself. It will birth itself out of a desire to ‘exist’.

I just felt this is an interesting take. What do you think? You can reply to my X account @climatebabes.

Also read my 10-page booklet, which you can find here.