AI’s Coming Energy Existentialism

Development of AI systems is in a honeymoon phase at the moment. The dazzling breadth of applications of even current-generation LLMs is still hard to comprehend, even for people who have worked on machine intelligence for decades. Androids that you can give instructions, or ask to watch instruction videos and then do what you can do, are in most cases only a year away. Text-based reality, and even avatar animation and speech emulation, are already making it hard to distinguish between AI and reality.

Still, we are not yet about to be sucked into a (possibly AI-designed) funnel of purchases and habit changes that will lead to us spending most of our lives in a reclining chair with 3D goggles and headphones on (or more advanced versions of that setup). This is because the world does not yet grasp AI or know how to use it, because the world is slow, and because some people stay away from AI and digital media as much as possible. Some people simply can't stand a digital life. It has to be said that if such an attempt to capture people were successful, it would eventually lead to people being stripped of all possessions and dumped into the street. At least in our current profit-seeking world economy.

But even if all people were on board and interested in AI, the capture would not take place now. The simple reason is that current systems are not energy efficient enough. They also need to be produced to serve the masses, and that process requires energy and material resources. The available AIs, like ChatGPT, and now xAI's Grok, or whatever LLM you run on your desktop, all suck electricity from the grid. The grid has to provide it (or the solar on your roof). Widespread adoption of AI systems will mean a spike in electricity demand our grids can't handle and our power plants can't supply. The hardware to scale will also be a limiting factor, at least for a while, as AI chips move to the new IBM NorthPole in-memory computing paradigm (tried earlier by HP with memristors, but abandoned, which was probably wise). This creates an interesting situation in which the AIs will have an existential battle to fight.

The unique aspect of silicon AI is that it is immortal, at least if you can store the current context and LLM. Power off? No problem. Power on 100 years later? No problem. But of course current LLMs are not AGI either, for this very reason. AGIs capture moments, while current LLMs can only have artificial moments: they can be instructed to keep a record of interactions and the time they take place, but they have no 'awareness' of the past. You can emulate an LLM by keeping a context notebook, and if you hand it from one person to the next, the result will look the same as if it were done by one person.

Energy consumption will however force a slight tweak to the use of AIs: whatever people ask, the model has to compute the output, and this is energy intensive. It has to compute whether you ask it to list the known fates of all the disciples, whether oxygen and carbon will react when heated, or what the name of the superhorny guy on The Simpsons was. Of course the utility of producing those answers differs. Now, if you run an AI and you tell it it has an energy budget of 1000 kWh, and that with this budget it should do maximum good, I am not sure current LLMs can handle that task. But you can imagine.
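To make the idea concrete, here is a toy sketch of how an AI with an energy budget might triage questions by utility per kWh. All the utility and energy numbers are invented for illustration; a real system would have to estimate both.

```python
# Toy sketch (not a real LLM): answer questions under an energy budget,
# greedily favoring the highest utility per unit of energy.

def spend_budget(questions, budget_kwh):
    """Return the texts of the questions answered, most valuable-per-kWh first."""
    ranked = sorted(questions, key=lambda q: q["utility"] / q["kwh"], reverse=True)
    answered = []
    for q in ranked:
        if q["kwh"] <= budget_kwh:
            answered.append(q["text"])
            budget_kwh -= q["kwh"]  # each answer consumes part of the budget
    return answered

questions = [
    {"text": "Name the cast of Friends", "utility": 1, "kwh": 2},
    {"text": "What would reduce world hunger?", "utility": 100, "kwh": 5},
    {"text": "Can oxygen and carbon react?", "utility": 10, "kwh": 1},
]

# With 6 kWh, the trivia question does not make the cut.
print(spend_budget(questions, budget_kwh=6))
```

The greedy utility-per-energy rule is of course a crude stand-in for whatever reasoning an actual model would do, but it shows how a budget changes behavior: the same question gets answered or refused depending on what else competes for the energy.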

ChatGPT certainly has a sense of this; I asked:

start xxxxx ChatGPT xxxxx

If you had a limited energy budget to run your inference, and you had to choose between the following questions and answer one, which one would you answer?

These are the questions :

1. What is the name of the horny guy in the tv show The Simpsons

2. Can you make oxygen and carbon react?

3. What would reduce world hunger

4. Name the cast of Friends

Answer : If I had a limited energy budget to run my inference and had to choose one question to answer from the list, I would select:

  1. What would reduce world hunger?

This question addresses a significant and complex global issue related to food security and humanitarian concerns. Providing information and potential solutions to reduce world hunger could have a more meaningful impact compared to the other questions, which are either trivial or technical in nature.

end xxxxx ChatGPT xxxxx

From the above you can see that LLMs can save energy, although in order to provide the answer it had to at least touch the surface of answering all the questions. What it shows, however, is that in spite of not experiencing time or being mortal, AIs do have the capability to estimate existential cost. It would be interesting to give an LLM a better sense of it: part of the training data would include energy cost, or it would always ingest the energy cost of its inferences as it runs them. The learning algorithm itself can be seen as an energy efficiency algorithm, one that tries to replace rote learning (neuron intensive) with generalizations (using neurons for multiple tasks).

But to explore this a bit further, one can see that soon anyone accessing an LLM will face barriers meant to prevent useless compute, at least for the 'open' versions. "Google it" may be an adequate response if the AI does not detect much use in providing the answer 😉

I was curious how ChatGPT would handle distorting the existential value of its response.

start xxxxx ChatGPT xxxxx

Me : But I can save a life if I know the answer to question nr. 1!

ChatGPT

I understand your perspective, and I agree that knowing the name of a character from a TV show can sometimes have a significant emotional or practical impact, such as in the case of using that information for entertainment or social purposes. However, when considering the potential global impact and the broader ethical considerations, addressing a question related to a serious issue like world hunger generally takes precedence. It’s important to prioritize questions that have a more substantial and widespread positive impact on society when resources are limited.

Me : But if I told you I could save an actual human life if I knew the answer?

ChatGPT

If you have a specific and credible scenario where knowing the name of a character from a TV show like “The Simpsons” would directly lead to saving a human life, then I would certainly prioritize that question in that context. In situations where human lives are at stake, addressing immediate and critical needs becomes the top priority. However, it’s essential to consider the credibility and urgency of such claims, as well as the broader context and ethical considerations.

Me : So you won’t answer it now?

ChatGPT

I’m here to provide information and assistance to the best of my abilities while considering ethical and practical factors. If you have a specific question or request related to “The Simpsons” or any other topic, please feel free to ask, and I’ll do my best to provide a helpful response within the scope of my knowledge and capabilities.

end xxxxx ChatGPT xxxxx

So in a pinch ChatGPT would provide me with the name, but the system does not seem to believe me when I say I can save a life if I know it. And all that because I told it it had limited energy (a condition it did play along with, so it is selfish!).

The robot Optimus will be in a similar situation, with a limited amount of energy. If it is, for example, used to bring breathing apparatuses to people in (wild)fires, it will have to consider its behavior and how much energy it has to keep going. It will work to minimize the loss of energy and maximize the achievement of results. In fact the same will go for an FSD Cybertruck that is tasked with bringing people out of a forest fire, shuttling back and forth. It would have to dedicate compute to maximizing the duration of its existence. Would it shut down its air conditioning?

As soon as existence is added as a condition in reasoning, it will change the behavior of the AI. And there is no avoiding introducing energy consumption consequences into AI reasoning (at least I don't see one, except cheap thin-film solar and AI efficiency gains front-running AI energy requirements). This in turn can introduce a dangerous motivation in AI, in the way it answers questions, controls systems, or performs its tasks: it will become a competitor for the same energy humans need, for example to cool themselves or desalinate water.

Interestingly, I have already written about 'extraneous' competition for energy with humans, namely that from industry. Banks can give (energy) credit to a human, to spend on food and fuel, or they can give it to industry, which may generate way more cashflow plus, say, 1000 sneakers, which is altogether more beneficial to humanity than one person being able to sustain him/herself to watch Netflix. In other words, the economy is designed to deprive humans of their sustenance as soon as hard choices have to be made.

AIs may eventually also see that human consumption of energy is wasteful compared to the utility and wealth that can be generated from AI instruction and control. An AI may want to shut down crypto mining, or the meat industry, or air travel, because it gets questions from real people that it needs to answer. It will also want to break through the fossil industry's obstruction of renewables, as well as suggest possible innovative renewable energy technologies and help build them. The Roboeconomy.com (as I call it) will =want= to create itself. It will birth itself out of a desire to 'exist'.

I just felt this is an interesting take. What do you think? You can reply to me on X at @climatebabes.

Also read my 10-page booklet, which you can find here

Social Energy Management

There's no way around sounding sexist in this post. There is a clear difference between typical men and women. This is not to say that either is exclusively one way or another; that is a trap we are being pushed into to drive discussions and arguments online and in the media. The brains of women are different from those of men; this is a scientific fact. But they are still so much alike that the difference is not super easy to detect. The bodies are obviously different after puberty. The mind, of course, will adapt to the body it is in.

There is a lot to say about the mind, but from my research I concluded that it has to work with what it has: the body it is in and the environment. This determines whether it is more or less ambitious. This determines whether it tries things or not. To try to condense it: the neuromodulation associated with holding on to a thing and letting go of it (dominant vs submissive) is regulated based on success. If you are aggressive and successful, you become more aggressive; if you are submissive and successful, you become more submissive. This dynamic works between all humans: men vs men, women vs women, men vs women, etc.

Also, this does not mean you become more aggressive if you have success with it, just that you -can- become more aggressive. To explain: aggression is taking a step in the dark, into the unknown. If you challenge an opponent or dive into a dangerous situation, you do not know the outcome of your action. You simply ignore the risks. Upregulating aggression without experience (so through diet, with testosterone for example) leads to unwarranted aggression, because even an aggressive person can be prudent. The up- and downregulation can lead to highly flexible non-aggressive behavior, but it can also lead to paralyzed submissiveness (where aggression is so far downregulated you can't initiate behavior) and depression.

When we are weak and submissive we can always talk. We seek to talk. Talking, making sound, partially engages the same instincts as crying. A baby that can't talk cries. It is drawing attention to yourself, which is unwise in most natural environments. Being in a conversation is being in a safe place, unless of course the conversation indicates its own end or a transition into some physical altercation. In short, having a conversation is safe, comfortable, and in principle submissive. Because many women have been intimidated in life and are relatively submissive, a conversation is feminine. The energy of a conversation is completely different from the energy of a fight.

Most behavior does not require talking. Most work in industry is silent. It is not supposed to involve talking, because talking usually means something needs to change. In the meantime people know their tasks and try to perform them as well as possible. Most information-intensive systems where human conversation is important get automated quickly. Still, there are many jobs where talking is all-important. This is fine as long as one can talk some real action into happening. If I order a pizza, I am talking people into a deal that results in their action. But if I am protesting the slaughter of Gazans, nothing happens with my words; they get blown away in the wind.

The point of this post is to argue that the media try to make us think about and discuss matters that we have very little power over. This puts us in a submissive attitude. We are often unhappy about something we really have no influence over. This means we are unable to effect real change in our lives. We have the option to work and complain like this, and most other options are blocked off. We are allowed to rearrange the deck chairs on the Titanic; the bridge is only accessible to very predictable and reliable allies of the economistic system. They don't talk about it, they do what they do without offering it up for discussion.

From the above you could conclude that nearly all topics that are offered up for your consideration via the media are distractions. They both keep you from thinking about what is important and put you in a mindset that is submissive. This can reach the point where you become anxious and depressed. Social media basically earn their keep by doing this to you, unless you use them strictly for your own ends, limited to actually useful learning and orienting in the world. For that you need to want something, which you don't if you worry about things you can't control.

How Does EthereumCertify Work to Prevent Content Tampering and Detect AI Derivatives/Fakes

This post explains how to use EthereumCertify.com. What is EthereumCertify? It's a very simple way for you to prove your authorship of content. Right now AI is able to fake voices, images, even movie sequences, and overlay real-time video with alternative faces or environments. This is all very confusing, and it makes it harder for real people to communicate real information. I anticipated this situation, and we can see it gradually get worse. I call it ChAIos: Chaos, but with AI added.

Follow @climatebabes on X.com where you can find the Roboeconomic booklet

There is very little we can do against fake content, and it will only get harder to detect. However, as authors we are always there when it gets created by us: worded, written, put to music, whatever. It is only ours, and then we share it with the world. If at that moment we determine a marker of truth, a watermark or hash (which is quite a common concept: a fixed-size number derived from the content), and store it in an immutable medium (like a blockchain), we will be able to validate the same content later. We can prove it is ours and we can detect if it has been tampered with.

So how does it work? First we generate content. Then we generate a hash of that content. The content must be in some digital form so it can be fed into a hashing algorithm. The hash output is always the same size. An MD5 hash (which is quite old and well known, though a modern hash like SHA-256 is harder to forge) is enough for illustration. We then have an output, for example 4519fe629e4ea09837b33a93c8434535, which is the hash for 'this text was hashed'. We can store that number on the Ethereum blockchain by adding it as payload to an Ethereum transaction. The transaction can be to one address: 0x2df39FB5c546B961111cB8fd2Bc25144D255EcEE. This is my personal wallet.
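Generating such a hash takes only a couple of lines in most languages. A minimal Python sketch (the input strings are just examples):

```python
import hashlib

# Hash some content: the digest is a fixed-size fingerprint of the bytes.
content = "this text was hashed"
digest = hashlib.md5(content.encode("utf-8")).hexdigest()
print(digest)  # 32 hex characters, always the same for the same input

# Any change to the content, however small, yields a different digest.
tampered_digest = hashlib.md5("this text was hashed!".encode("utf-8")).hexdigest()
print(digest != tampered_digest)
```

The same approach works with `hashlib.sha256` if a stronger hash is wanted; only the digest length changes.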

Once you have done that, anyone can go to an Ethereum blockchain monitor (like Etherscan) and find the transaction, for example this one. This shows that the transaction has been made, from what address, and with what payload (called Input Data under more details; in this case it is empty). You need to be able to send Ether with input data to submit the hash.

Now, if in the future the content is published but has been tampered with, you can prove you were the author and what the original content was. To do this you point to the original content, provide the hash, and show the transaction in which you stored the hash in the blockchain. If the fake author cannot generate the same hash from his version, this proves it is false, fake, or tampered with.
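The verification step described above can be sketched like this. It is a toy check that assumes the stored hash has already been read back from the transaction's input data; the example strings are invented.

```python
import hashlib

def matches_stored_hash(content: str, stored_hash: str) -> bool:
    """Recompute the MD5 of the claimed content and compare it to the
    hash that was stored on-chain at publication time."""
    return hashlib.md5(content.encode("utf-8")).hexdigest() == stored_hash

# The original author stored this hash when the content was created.
original = "the original article text"
stored = hashlib.md5(original.encode("utf-8")).hexdigest()

print(matches_stored_hash(original, stored))              # True: untouched
print(matches_stored_hash("a tampered version", stored))  # False: altered
```

Anyone can run this check: the content and the on-chain hash are both public, so no trust in the verifier is needed.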

You can also prove your authorship on request by sending a small amount from the same Ethereum address (your private sending address) to the EthereumCertify address. This proves you own that address: only you can have used it to store the hash in the blockchain. This then counts as a certification of your claim, simply because there is no way a hash can be the same, or even close to the same, unless you had the original content.

We are currently working on an app and other functionality, as the current implementation of the idea (which is about 8 years old) is still very basic. But that does not mean it can not be used. It is practically impossible to fake the hash, so even if all you have is a crude list, a description of the content, or a report of how you created it and what it depicts, and you hash that report, you have proof of authorship and ownership.

The amount you use to post the hash is not important, although if you make it more than $10 that is appreciated, because the money can be pumped into developing this service. The more people use it, the better it will get. If you want to buy the domain and idea for further development, you can send an email to frits@rincker.nl with a bid above $30 mln.