I seem to talk to AIs a lot these days. They've mostly replaced Google as a search engine for me. When I want to find sources of information, I use several AIs and narrow down my search much more quickly than using the standard search engines. Recently, as I was reading some articles on AI, I realized that the implications were so diverse that one could really let their imagination run away with it. So, I thought I'd ask some of the mainstream AIs what they "thought," and here I'm summarizing and, in some cases, including the word-for-word responses to the questions I had.
This is kind of random, I know, but hey, it is what it is.
Diverse Perceptions of AI
The first thing that came to mind is that when you talk to ten different people about what AI is, they may give you ten different answers:
- Robot Overlords
- Magic Black Box
- Smart Machines
- Computer Programs
- Chat Boxes
- Self-Driving Cars
- The Terminator
- Siri, Alexa
- Pattern Matching Machine
- Prediction Machine
- Automation Tool
- Next-Gen Solver of all the World's Problems
- Job Eliminator
- Threat to Human Existence
- ...and on and on.
So, what is the reality? Do these LLMs think? Do any AIs think? What about these robots that look human? There are YouTube videos of machines doing all kinds of things that humans can do plus much more—they're stronger, faster, tougher... smarter???
The Reality Behind AI and LLMs
I have to admit that I am not one to believe that we are on the verge of an AI superbrain taking over. I grew up on Star Trek and Lost in Space (remember that one?). I ate those episodes up and let my imagination fly with the possibilities. As I got more into physics in my youth, I realized that the physics we knew then and know today, the natural limits and order imposed upon everything we know of in the universe, simply makes some things very, very difficult if not impossible.
We often like to imagine that the weird things happening at the subatomic scale can easily be projected up to the scale we function at as human beings. That projection is easy to make with imagination and creativity, but in practice it simply doesn't hold.
We like to think of incredible extra dimensions and fantastic parallel universes, but in reality, the mathematics describing anything beyond the four dimensions of general relativity (discussions around the cosmological constant notwithstanding) only becomes relevant at scales infinitesimally smaller than anything we can picture, far beyond human interaction or even classical physical interaction.
The point is that yes, the universe is incredible, and we continue to discover new incredible things both at cosmological and at quantum levels, but we do tend to let our imaginations run just a bit wild sometimes—good or bad.
How Much Time
Thinking about this stuff can really make your brain hurt. No pain, no gain, right? Well, just consider the thought that there might be a fundamentally shortest interval of time, a tiny quantum of time: the Planck time.
Okay, there are theories that say nay: it is just that at such small scales other effects become dominant, and time kind of goes berserk at that point. Or is it that time is a consequence rather than a dimension in itself?
Well, okay, so I've obviously digressed. For more discussions on this matter, both philosophical and physical, have a look at some of these:
- A Debate Over the Physics of Time - Quanta Magazine
- Dissertation on Time in Theoretical Physics - Imperial College London
- Time, Metaphysics of - Routledge Encyclopedia of Philosophy
- Arrow of Time - Internet Encyclopedia of Philosophy
- Quantum Physics and Time - Nature
- Time Might Be a Mirage Created by Quantum Physics - Live Science
So even a subject as seemingly simple as time, studied for centuries, remains as enigmatic as ever. Imagination fodder.
Media, Misconceptions, and AI
So yes, we can find very esoteric subjects with implications most people just can't imagine, but creative books, movies, and now computer programs help us imagine them by presenting the material in easy-to-digest ways.
AI fits the bill.
I can't count the number of conspiracy theories and end-of-the-world movies built around some technical or scientific "development." Is it any wonder, then, that in this disinformation age of AI-driven marketing and factory-mode movies that feed what we're hungry for, taking our very nature and turning it against ourselves, we tend to go a bit overboard?
Given the nature of AI as we perceive it historically, it is somewhat unfortunate that we call the LLMs AI at all.
AI evokes sentience and consciousness, and that is not what LLMs are. The label conflates tools with concepts, lumping a wide range of technologies together, from simple computer algorithms to hypothetical artificial general intelligence (AGI).
This amplifies fear and distrust, further fueled by movies like Terminator or Ex Machina that portray AI as a threat. But LLMs are no more a threat than your laptop computer.
Understanding LLMs vs. AI Applications
AI applications such as medical diagnosis or climate modeling/simulation have very explicit and narrow goals quite different from LLMs, and putting them into the same category as LLMs is misleading, to say the least.
So why do we keep calling it AI? Well, it is historical, it is sexy, it is attention-grabbing; it implies there is more there than meets the eye.
The reality is that the more you use LLMs, the more you realize how little "AI" there really is. Pattern matching, tokenization, probability, statistical analysis, correlation, transformation, contextualization, deep learning, machine learning, natural language processing, neural networks, parameters, training data, fine-tuning, loss functions, backpropagation, gradient descent, inference, context, prompts, sampling, temperature...
These do not add up to sentience, consciousness, or even general intelligence.
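To see how mechanical the machinery behind those buzzwords really is, here is a toy sketch of temperature-scaled next-token sampling, one of the terms listed above. The tokens and scores are invented for illustration; a real model has tens of thousands of tokens and billions of parameters, but the "choice" of the next word is still just a draw from a probability distribution.

```python
import math
import random

# Made-up model scores (logits) for three candidate next tokens.
logits = {"cat": 2.0, "dog": 1.0, "quasar": -1.0}

def sample_next(logits, temperature=1.0):
    # Softmax with temperature: lower T sharpens the distribution
    # (more deterministic), higher T flattens it (more "creative").
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())                      # subtract max for stability
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}
    # Draw one token according to its probability.
    r, cum = random.random(), 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # guard against floating-point rounding

print(sample_next(logits, temperature=0.7))
```

Probability, exponentials, and a random draw: no sentience required.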
So let's all get back to reality.
Energy Consumption: The Not So Hidden Cost of AI
AI and energy have been discussed a lot lately... but I thought I'd chat with our local AI buddy about it.
ChatGPT provided me with both average and worst-case numbers for energy usage per query and per year by ChatGPT; check the accuracy for yourself.
Under the worst-case scenario, including overhead costs:
- Annual Energy Consumption: Approximately 547,500,000 kWh (547.5 GWh).
- Annual Carbon Footprint: Approximately 219,000,000 kg of CO₂ (219,000 metric tons), assuming an average global carbon intensity of 0.4 kg CO₂ per kWh.
This is the worst case today, but as we move toward a future of more intense LLM use, we will be approaching this number.
This energy is roughly equivalent to the annual electricity consumption of 50,000 average U.S. households.
The carbon footprint is comparable to the emissions from driving approximately 875 million kilometers (544 million miles) in a typical gasoline car.
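The equivalences above are simple arithmetic, so here is a quick back-of-envelope check. The inputs are the figures quoted above plus two assumed conversion factors (roughly 10,950 kWh per average U.S. household per year and 0.25 kg CO₂ per km for a typical gasoline car), not measured data:

```python
# Back-of-envelope check of the worst-case ChatGPT energy figures.
annual_kwh = 547_500_000       # 547.5 GWh/year, worst case incl. overhead
carbon_intensity = 0.4         # kg CO2 per kWh (global average assumption)
household_kwh = 10_950         # assumed average U.S. household usage per year
car_kg_per_km = 0.25           # assumed typical gasoline car emissions

annual_co2_kg = annual_kwh * carbon_intensity
households = annual_kwh / household_kwh
car_km = annual_co2_kg / car_kg_per_km

print(f"CO2: ~{annual_co2_kg / 1e6:.0f} thousand metric tons")  # ~219,000 t
print(f"Households: ~{households:,.0f}")                         # ~50,000
print(f"Car-km: ~{car_km / 1e6:.0f} million km")                 # ~876 million
```

The household and driving equivalents quoted in the text fall straight out of these assumed factors.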
Comparing AI's Energy Use to the Human Brain
The human brain consumes about 175 kWh per year, I'm told, equivalent to a 20 W bulb running continuously.
By my own estimate, I must use at least 1,000 kWh per year and have a carbon footprint of 365 kg. And I would say that most of the benefit to myself is simply speeding up access to information. Now, the monetary cost of this would be in the range of €200—and I pay €20/month for the premium ChatGPT. So assuming they get a nice discount since they are huge, they are still making a few bucks on my queries so far—but my usage rates are increasing rapidly.
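The brain figure and the cost estimate above also check out with simple arithmetic. The €0.20/kWh electricity price is an assumption chosen to match the ~€200 figure, not a quoted rate:

```python
# Brain power draw vs. my rough personal AI energy estimate.
brain_watts = 20
brain_kwh_per_year = brain_watts * 24 * 365 / 1000   # continuous 20 W draw

my_ai_kwh = 1_000              # the article's rough personal estimate
price_eur_per_kwh = 0.20       # assumed electricity price, for illustration
my_cost_eur = my_ai_kwh * price_eur_per_kwh

print(f"Brain: ~{brain_kwh_per_year:.0f} kWh/year")   # ~175 kWh
print(f"My AI use: ~EUR {my_cost_eur:.0f}/year")      # ~EUR 200
```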
Right, this is not a study in any shape or form—but some interesting numbers and food for thought.
Scaling Challenges and Future Projections
Yes, data center technology and investment will be huge in the coming years, and that is nothing new. But I believe the estimates are low (hey, just a feeling as I watch the usage patterns). ChatGPT alone expects usage to grow 5x in the next year; that would put us at around 25% of the worst-case number within a year.
Keep in mind that free users make up about 60 to 70% of the user base. In this context, I am not sure how ChatGPT can be making money... but hey, government funding, Microsoft funding... well, let's see...
Once we consider the integration of IoT and other devices acting as proxies or as end users in themselves (utilizing APIs, machine-to-machine interactions, subLLMs, and so on), we can expect tens of billions of effective users within the next 10 years. In fact, at a projected 90% annual growth over the next 5 years, which doesn't seem totally out of this world, we would have about 23.5 billion effective users of LLMs. The number of queries per user is also likely to rise as integration becomes more pronounced and implicit in how people function in their everyday activities, both personal and work. The estimate is that queries per user will grow at a rate of 20-30% annually.
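The 23.5 billion figure is plain compound growth. A base of roughly 0.95 billion effective users today is inferred by working backwards from that figure; the 90% annual user growth and 20-30% query growth are the projections discussed above:

```python
# Compound-growth sketch behind the "23.5 billion effective users" figure.
base_users = 0.95e9       # assumed current effective users (inferred)
user_growth = 1.90        # 90% growth per year
years = 5

users = base_users * user_growth ** years
print(f"Effective users in {years} years: {users / 1e9:.1f} billion")  # ~23.5

# Queries per user compounding at 20-30% annually, on top of user growth:
query_growth = 1.25       # midpoint assumption
total_query_factor = (user_growth * query_growth) ** years
print(f"Total query volume multiplier: ~{total_query_factor:.0f}x")
```

Stacking the two growth rates multiplies total query volume by roughly 75x over five years, which is what makes the infrastructure question below so pressing.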
Potential Infrastructure Strain
So what? Well, yeah—something is going to bust—it is not hard to see how it might be difficult for the infrastructure to keep up.
Then we also turn our thoughts to audio, graphics/photo, and video production, presentations, and so on: the integration into office suites such as Microsoft Office, into email, CRMs, etc. These systems will generate thousands of queries per day and more. Growth projections there are in the 30% per annum range.
These will strain the computational capacity even further, but beyond that, the transmission of these high-bandwidth elements will strain the networks beyond their current limits.
All the various smart connected devices, such as cars, acting as access points, sharing data, ingesting and processing sensory data: there will be plenty of room for innovation.
Innovation and more innovation ...
The GPU innovations of today will not be enough; methodology will have to change to accommodate the sheer scale. Algorithms and analysis methodology, not just hardware, will drive these capabilities.
We will need a lot more energy even if we get a lot more efficient.
Cheap energy will make really large-scale LLM global growth possible. Without that, users may be pushed out or restricted... it is not hard to imagine what that would imply.
Of course, many companies and people are working on ways to maintain the momentum, and most of us will not be privy to developments that may already be on the drafting tables today.
Decentralization and Specialized Models
Cloud-based LLMs are not likely to be the norm in the future—in fact, subLLMs residing in smaller data centers or even a couple of servers are going to be key to continued high growth in this market.
Even mobile devices will be able to run LLMs of sorts—maybe not so Large at that point :)
As new use cases are brought forward and the market continues to evolve, adoption and usage patterns will also change with more focus on smaller, more focused, more efficient models.
Individual companies will be utilizing these smaller, more focused, and specialized LLMs in their business with greater autonomy as these algorithms are wrapped in more user-friendly interfaces.
LLMs that tune and train other LLMs will continue to rise, and this will help turn black-box LLM ecosystems into more transparent and understandable ones.
Conclusion: Fun Times Ahead
Regardless, there will be fun times ahead...
At Arion Networks, we have been working on testing existing algorithms of LLMs that do not require computationally intensive GPUs to perform specialized automated functions for network operators.
More to come.
Additional Reading
- New Trends in LLM Architecture - Data Science Central
- ChatGPT's Energy Emergency - Tom's Guide
- ChatGPT's Power Consumption - Heise
- Generative AI Models Energy Consumption - Designboom
- AI Power Demand Increase - Goldman Sachs
- Carbon Footprint of ChatGPT - Piktochart
- AI and Energy - Nature
- Environment and Sustainability from ChatGPT Perspective - Taylor & Francis
Note: All statistics and projections are based on estimates and should be verified for accuracy.