Essay by Eric Worrall
“the entire industry is chasing the AI dragon” – $27 billion+ for a single project. Bye bye climate action virtue signalling.
Million GPU clusters, gigawatts of power – the scale of AI defies logic
It’s not just one hyperbolic billionaire – the entire industry is chasing the AI dragon
Tobias Mann
Thu 19 Dec 2024 // 17:30 UTC

COMMENT: Next year will see some truly monstrous compute projects get underway as the AI boom enters its third year. Among the largest disclosed so far is xAI’s plan to expand its Colossus AI supercomputer from an already impressive 100,000 GPUs to a cool million.
Such a figure seemingly defies logic. Even if you could source enough GPUs for this new Colossus, the power and cooling – not to mention capital – required to support it would be immense.
At $30,000 to $40,000 a pop, adding another 900,000 GPUs would set xAI back $27 billion to $36 billion. Even with a generous bulk discount, and even if the GPUs are deployed over the course of several years, it won’t be cheap. Oh, and that’s not even taking into account the cost of the building, cooling, and electrical infrastructure to support all those accelerators.
Speaking of power, depending on which generation of accelerators xAI plans to deploy, the GPU nodes alone would require roughly 1.2 to 1.5 gigawatts of generation. That’s more output than a typical nuclear reactor – and the big ones, no less. And again, that’s just for the compute.
…
The AI fever that has collectively gripped the tech giants has been a sea change for the nuclear industry, with cloud providers fronting the cash to reinstate retired reactors – and even plopping their datacenters behind the meter, as in the case of AWS’ new Cumulus datacenter complex.
Read more: https://www.theregister.com/2024/12/19/scale_ai_defies_logic/
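As a sanity check, the article’s headline numbers hang together. A minimal back-of-the-envelope sketch, using only the figures quoted above (the per-GPU prices and the 1.2–1.5 GW range are the article’s, not independent estimates):

```python
# Back-of-the-envelope check of the figures quoted in the article above.
gpus_added = 900_000                     # expansion from 100,000 to 1,000,000 GPUs
price_low, price_high = 30_000, 40_000   # quoted per-GPU price range, in dollars

capex_low = gpus_added * price_low       # $27 billion
capex_high = gpus_added * price_high     # $36 billion
print(f"GPU capex: ${capex_low / 1e9:.0f}B to ${capex_high / 1e9:.0f}B")

# The article puts the GPU nodes alone at 1.2-1.5 GW, which implies
# roughly 1.2-1.5 kW of draw per GPU slot, all-in.
total_gpus = 1_000_000
print(f"Implied draw: {1.2e9 / total_gpus:.0f}-{1.5e9 / total_gpus:.0f} W per GPU slot")
```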
…
Think about it: $27–$36 billion for just one of these new AI data centers. The Register lists xAI, Meta (Facebook), AWS (Amazon), Oracle, and of course we already know about Microsoft – all of which appear to be launching gigascale AI projects in 2025, or have already done so.
Of course, these are only the US-based tech companies. India is also racing to build its own gigascale AI, and there are suggestions that China is getting into the AI game in a big way too, though concrete details of Chinese AI efforts are scarce.
India’s Reliance builds a Gigawatt data center with Nvidia Blackwell AI GPUs
By Anton Shilov
published October 26, 2024

But is India’s power grid prepared for that?
Frontier, the world’s highest-performing supercomputer, consumes between 8 MW and 30 MW of electricity, depending on the workload. However, the upcoming AI data center is estimated to consume much more. India-based Reliance is set to build a 1 GW (one gigawatt) data center for AI that will run Nvidia’s Blackwell GPUs, Reuters reports.
“In the future, India is going to be the country that will export AI,” said Jensen Huang, CEO of Nvidia, according to Reuters. “You have the fundamental ingredients – AI, data and AI infrastructure, and you have a large population of users.”
…
Read more: https://www.tomshardware.com/tech-industry/artificial-intelligence/indias-reliance-builds-a-gigawatt-data-center-with-nvidia-blackwell-ai-gpus
Why has AI so rapidly attracted so much attention? AI might be complex, but the motivation of tech companies, investors and government backers is easy to understand. The nation or tech company which successfully creates the first Artificial General Intelligence has a real shot at becoming the dominant power on Earth in perpetuity – or so proponents believe. The prize is ultimate power, possession of the One Ring. To be the first to control a greater-than-human AGI is to control the future – to always know in advance what your opponents are planning, and to always have timely knowledge of the perfect counter move to advance your interests.
To complete the homage to Tolkien’s fictional One Ring, AI or AGI could deliver life extension or even medical immortality to its owners. One of the hottest current uses of AI is drug discovery and reverse engineering human DNA.
What will power this AI gold rush? The Register sticks to the line that this will be a boom for the nuclear industry, and there is substantial ongoing interest in nuclear data center solutions, but the fossil-fuel-powered Facebook AI project shows that delays in commissioning or rehabilitating nuclear plants are not going to slow these people down. It is only a matter of time until demand exceeds available nuclear and gas capacity, and tech companies and governments start building new coal plants to power their AI data centers.
China in particular has a home field advantage when it comes to powering AI, despite US tech sanctions. The Chinese housing and economic slowdown has likely created a surplus of tens of gigawatts of electricity which could be used to power gigantic AI projects without building new power plants. Existing US tech sanctions will have very little impact on the Chinese AI push; AI is one field where quantity can substitute for quality – the gigawatt AI data centers themselves are proof of this. Even if it takes a thousand legacy Chinese chips to match the computational capacity of one cutting-edge US AI chip, China is already big in the second-tier chip fabrication space, and will have no difficulty fabricating the vast quantities of such chips required to stay in the race.
China also has a large graduate unemployment problem. A Chinese commitment to a new tech race which absorbs the very best of China’s millions of desperate, unemployed tech graduates makes China a strong contender in the AI race, regardless of US tech sanctions and any other disadvantages.
I must confess, an immediate explosion of AI data centers, each of which consumes more power than a major city, is more than I anticipated this early in the game. I thought it would take at least a couple of years for the AI gold rush to ramp up to this level of activity.
In the short term, this sudden burst of activity may be more of a case of nation states scrambling to match their rivals, and tech companies responding to shareholder pressure to stay in the game, rather than genuine customer demand.
But the pressure will be on to justify these crazy investments, so those AI data centers will see a lot of use the moment the power is hooked up, even if that use is just techies exploring how to hype the new corporate AI white elephant into a profit-generating business asset. It is a leap into the unknown which could result in anything from fabulous wealth and global political leverage to bankruptcy and ruin – for nations, politicians, and tech giants and their backers.
If you are interested in a deeper dive into AI, the following delves into why AI data centers need gigawatts of electricity to function.
Fascinating development! Popcorn please! I just KNEW all this green virtue signaling would hit the wall of reality soon.
Not sure you are correct.
These big AI companies will build massive AI farms using massive amounts of hydrocarbon energy….
… but us plebs will still be expected to do without.
We will be expected to “virtue-signal” even harder, to compensate.
Quite possibly true.
We will see how it plays out.
I think it will make the Worldcom-era dark fiber buildout look like child’s play – only there is zero alternate use for this hardware.
Reliable electricity is the classic fungible commodity. Building nuclear plants to power AI will result in better availability generally, given the near certainty of some AI projects failing.
Maybe AI has finally realised that in order for it to take over the world, it needs reliable nuclear power and is now telling its human slaves to get on with building it.
We could always connect 8 billion people to build a giant battery, à la The Matrix.
Fits The Population Bomb scenario. Solves a lot of social injustices… (/sarc)
Is AI “The” coming antichrist?
That idea is something to lose sleep over.
If only it were remotely that capable.
What we have now for “AI” – i.e. LLM models – is the computational equivalent of the EV.
Which is to say: maybe 80% as useful as a normal car but far more expensive and less reliable.
Kind of “you can’t make this stuff up”. Even five years ago the sudden need for gobs of energy for this use wasn’t generally known. Somehow, like social media, I don’t think this is going to be all that good of a thing.
I’ve been hearing about medical immortality for 40 years now. First time was from a fellow who said, “if you live 20 years longer you will live forever”. He died about 20 years ago. I’m 70 and basically falling to pieces. Without modern medicine I would have died from cancer a while back. However, I’m not optimistic medical art is going to come up with some magic bullet that is going to greatly extend my life.
Perhaps you should investigate cryonics. I am over 80 years old, but I looked into it over 30 years ago and am very glad I did!
Hi Stan. Who are you with? Alcor? Cryonics Institute? Tomorrow Biostasis? The latter is the most innovative (and I say this as someone who was Alcor’s CEO for 10 years) and has recently started to offer services in the USA.
Hello Max. I am with Alcor, starting in 1991. Life member.
Good job, Stan!
I thought links were okay but my link to the Biostasis Standard blog on Substack seems not to have appeared.
That’s why we need really good AI — AI that can do original biological research. We spend many billions on cancer and heart disease with modest results. If we cured both, it would add only a few years to life expectancy. There is more research than ever into aging, but a real fix is not yet in sight. I’ve been following the field for over 40 years and continually warn enthusiasts about over-optimism. The current LLM types of AI may or may not be capable of the necessary biological research. I agree with Stan Brown that cryonics — or another form of biostasis — may be necessary for those of us who want to live on and contribute to the future. A good source of information (in my biased view):
biostasis.substack.com
One has to hope they get the models to match reality.
The notion that LLMs can do original bio research is literally laughable.
I’m not optimistic medical art is going to come up with some magic bullet that is going to greatly extend my life.
There are a bunch of lab hacks which appear to extend life, but most of them require genetic engineering. It is not impossible the AIs will figure out a better delivery method.
https://pmc.ncbi.nlm.nih.gov/articles/PMC2491496/
Advocates of fractal computing claim that their technology can eliminate the need for huge power-consuming AI data centers.
Their technology does the work of data search and retrieval inside the CPU’s onboard cache, as opposed to retrieving the data from memory or from disk before further CPU cycles are consumed.
Thirty years ago, I was using a mainframe-based application written in IBM 370 Assembler which employed a very early version of what is now called fractal computing.
This application could retrieve and join data from separate tables drastically faster and with many fewer CPU cycles than a SQL-based product such as Oracle could.
On the other hand, the application only looked at data which was static; i.e., this application was absolutely lousy at managing concurrency.
Will fractal computing end the need for power hungry AI data centers? Time will tell on that score.
There is no doubt there is room for improvement: a gigawatt AI center does what the human brain can do on 20 watts, a factor of roughly 50 million. But any improvements won’t be used to reduce energy consumption; they will be used to boost processing capacity.
I don’t think you understand the difference between fractal computing and base computing power. Anything you used to run on a 370 could very likely be written to run in a browser today. Other examples include SETI searches, GCM climate models, etc.
AI however is NOT fractal computing – it is running training data to self-prune decision trees in an LLM. It is not that you cannot use fractal computing to do so, it is that it would take literally forever to get a result, and that is unacceptable when the top two “AI” companies will lose $8.7 billion or so just this year.
From the article: “In the short term, this sudden burst of activity may be more of a case of nation states scrambling to match their rivals, and tech companies responding to shareholder pressure to stay in the game, rather than genuine customer demand.”
Just what is the level of customer demand? I don’t have any need for AI personally, so what will the developers do with AI that would be attractive to me personally?
I know they are pushing AI on personal telephones, but isn’t this just a glorified search engine? I do pretty good with searches right now without AI.
AI seems to be more of a buzz word than anything else. And they better improve AI, too. We don’t want our AI accusing innocent men of crimes they didn’t commit, as they have done in the past.
AI must always be fact-checked because you don’t know what you are going to get.
“AI must always be fact-checked”
By whom? The “fact-checker” is always the one that gets to choose which facts are correct. 😉
It IS a glorified search engine, but one that ALSO parses the search results and answers your question! If you can state your question clearly enough, it will answer you exactly. And of course it codes – just write the algorithm in exact English. It truly is a game changer. To be careful, you can paste questions into multiple AIs – four free ones at last count – and compare. But if you are looking for “manual”-like answers, one is usually sufficient. And programming! For small routines it is a MUCH better programmer.
The only thing you can rely on from an LLM AI is correct grammar.
Anything else is suspect as it will literally manufacture shit from whole cloth.
The benefit of current AI to the average Joe is the superior human language interface.
What we have now is a massive hype cycle. There are no killer apps from this “AI” so far – and it is increasingly unlikely it will happen simply because the “next generation” models promised to be better…aren’t.
The whole setup is a Rube Goldberg hot mess. Incremental improvements were made before by literally stealing all the data on the internet to use for training, but there is no more data to steal. And even if there were – which they are trying to do by using crap LLM/AI output to “manufacture” data thus creating a loop of GIGO – the compute and energy requirements to handle the orders of magnitude more data are simply economically impossible.
Let me put it this way: the decision trees expanding with more data grow exponentially in terms of possibilities, which means exponentially more energy and compute. But even if the increase were “only” 10 times, that still means 2.5x what is being spent now, even assuming a 4x underlying hardware improvement. And let’s not forget that Moore’s law was a doubling every process generation – roughly every two years.
Well, the top 2 AI companies will lose over $8.7 billion in 2024 alone. How much money will they need to handle 2.5x more compute over the next 4 years?
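The arithmetic in the comment above is simple enough to sketch – note the 10x compute growth and 4x hardware improvement are the commenter’s assumptions, not measured values:

```python
# Spend multiplier if compute demand grows faster than hardware improves.
compute_growth = 10.0   # assumed: next-generation models need 10x more compute
hw_improvement = 4.0    # assumed: 4x better performance per dollar from new hardware

cost_multiplier = compute_growth / hw_improvement
print(f"Spend multiplier: {cost_multiplier:.1f}x current spend")  # 2.5x
```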
What we are seeing is a bunch of morons doubling down – this will make the metaverse flop look like a stumble over a crack in the sidewalk.
Not yet, Eric. The climate cartel still stands to gain from increasing consumer prices, and the AI racketeers aren’t the only cartel pressure group. Contrarian F. Menton has good reason for optimism, but it isn’t a done deal with banks and government money managers still in the game. Keep yer powder dry.
It is always about the money.
Skynet is coming.
We are building it.
It’s not a matter of CAN we do something, it’s a matter of SHOULD we?
That new Skynet project – I know there are rumours, and wild alarmist rumours at that – but the pay is great, you get to play with the world’s most advanced cutting-edge technology, and it’s difficult to imagine a more patriotic job: building the Autonomous North American Defence network to keep the USA safe from attack.
It is always valid to point out that just because a thing can be done, does not mean it should be done.
AI with a thousand times more processing power and control of nuclear reactors. Eh, what could happen?
Nothing because these LLMs are not intelligent by any stretch of imagination.
Mark Mills has written about ‘discovered energy demands’. His perspective is a good one.
Data centers and AI are huge and growing energy demands, just as bitcoin mining is. Every oligarch has to have at least one. It is the new fad, and a total waste of energy from the perspective of an ordinary person. Consider what all that computing power is applied to – is it the expansion of knowledge? No. It is applied to keeping track of YOU, and YOU are complicit. AI delivers ‘conventional narratives’ to inquiries, faithfully quoting what is found in government and other databases. The answers vary from misinformation to flat wrong and are of very limited value, especially considering the order-of-magnitude higher energy cost than a conventional search. Many people are forgetting to think for themselves. It is easy to be lazy and inquire of the database. NOTHING new comes from it. The best one can expect is a summary of prior knowledge, hopefully not tainted by the net zero culture implementing it.
But…. if you want exact answers, if you want answers from manuals and documents [I doubt your brain holds all this stuff!] and the like it is a huge time saver. Just think of the last time you tried to find a manual for some gadget and dig out the answer!! All you need to do is state your exact question… It finds the “manual” and parses for the answer. NO reason for drawers of manuals and like documents. And programming!
AI delivers ‘conventional narratives’ to inquiries, faithfully quoting what is found in government and other databases. The answers vary from misinformation to flat wrong and are of very limited value
That is just one kind of AI. There are AI systems which do genuine high profit research such as drug discovery, playing with structural chemistry models to discover new drugs or new uses for existing drugs.
You’re not going to like it.
42
It actually appears to be 137 🙂
https://en.m.wikipedia.org/wiki/137_(number)
https://science.howstuffworks.com/dictionary/physics-terms/why-is-137-most-magical-number.htm
“I tell my undergraduate students that if they are ever in trouble in a major city anywhere in the world they should write ‘137’ on a sign and hold it up at a busy street corner. Eventually a physicist will see that they’re distressed and come to their assistance. (No one to my knowledge has ever tried this, but it should work.)”
–The God Particle by Leon Lederman, pp 28-29
The answer to the ultimate question of life, the universe, and everything.
“The nation or tech company which successfully creates the first Artificial General Intelligence has a real shot at becoming the dominant power on Earth in perpetuity”

More like: a real shot at becoming wealthy(ier), as people flock to use it.

Maybe the CCP, clinging to their racial superiority fantasy, believes AI will make them the dominant power on Earth, but the tech companies are doing it because it will add significant new capabilities for users, which will generate more wealth for them. There are so many companies getting in on AI that there’s no way any of them will become the “dominant power on Earth”.
The tech companies are doing it because their leaders are hyperscaler morons who have no choice but to chase the next potential hyperscale opportunity.
Never mind that this is a dead end technology that will make the metaverse flop look like a failed lemonade stand.
I don’t know why anyone gives these idiots any credit.
Self driving cars? Bzzzt
Metaverse? Bzzzt
The list of failed garbage ideas from these people is incredibly long.
The real question is: will these data centers that utilize their own dedicated power generation run as isolated systems, or will they be interconnected to the bulk power grid? No generation resource has 100% availability. Scheduled maintenance outages do occur, as do forced outages and unit de-rates. I would assume these data centers are a 24/7/365 operation. If they’re connected to the grid, then the grid will be supplying the energy. Will the grid be able to support an instantaneous loss of a gigawatt of generation?
I think that is why they want their own power system, then they can switch it on or off as they please.
Probably dual, redundant, independent power generators.
The need to eliminate single point failures will drive that.
Of course they will be connected to the grid.
The bitcoin miners who moved to Texas in 2022 made more money selling power during peak demand spikes than by mining bitcoin. Only this won’t be possible where these new data centers are located, because they will be in places like North Dakota – which has a smaller population than San Francisco. And the Southwest Power Pool – the grid that connects North Dakota down to North Texas – already sees negative power prices 15% of the time.
If AI is so useful, then why is Microsoft doing this:
Microsoft Is Forcing Its AI Assistant on People—and Making Them Pay (paywalled)
Money, aka greed.
“If they build it, you will fund.”
This strikes me as pure hype. It takes a good 8 years to design, permit, procure and build a gas or coal fired plant and supply constraints keep the number possible low. Nuclear requires lengthy NRC analysis and they have very limited staff. So these super data centers are over a decade away at best. It is all hopes and dreams.
And the electricity produced will be for the AI…
… NOT for domestic consumption.
That will still be erratic unreliables.
There are billions of dollars on the table for politicians who facilitate streamlining the permits. I think they will get their permits PDQ.
Hmmm… How about a special-purpose AI system doing the task of nuclear plant design review in hours rather than years? The AI cabal could force this to happen.
That’s an awful lot of extra UHI
No no no, UHI doesn’t exist, right? It’s a Denier thing 🙂
‘To be the first to control a greater-than-human AGI is to control the future’
But what about chaos? We live in a chaotic system. The future is unpredictable.
The nature of life is not stability, but ‘bounded (constrained) instability’. And constraints are developed from the bottom up, not the top down. In other words human development is, and has always been, the decentralized process of adapting to chaos.
Powerful people have always dreamed that centralized control of human society would make society better. But the result is that centralized control has always made society worse. The root cause of this failure is that centralized control prevents human society from reacting to the chaos that is intrinsic to life. However, the powerful think their failure is due to not being able to control everything, and that AGI will overcome this limitation. In this they are wrong: improving the performance of centralized control will neither stop chaos nor improve society’s ability to respond to chaos.
If the idea is that AGI is going to ‘control the future’, then it risks being as big a boondoggle as net-zero.
A chess computer can still defeat most opponents (the big ones, all opponents) without knowing exactly what move they will make. A powerful AGI could respond to events faster and more accurately than any human, just like that chess computer.
But Eric, chess is a closed system with well defined rules. The game is stable, it never evolves.
Human society, and the world in which it lives, is an open system. The physical constraints on the system (i.e. the laws of physics) are poorly understood. The laws governing human society are loosely defined and even more loosely followed. The system is constantly subject to ever-changing internal and external forces. In short, it is chaos. And the system constraints which maintain the system in apparent stability (actually a meta-stable state, i.e. constrained instability) must constantly evolve to cope with this chaos. Indeed, the fundamental rule of life, and of all auto-creative systems, is ‘evolve or die’.
Across humanity there are millions of ‘experiments’, social and scientific, both planned and accidental, that occur every day. Knowledge gained from these ‘experiments’, transformed into technology and shared across society (locally, regionally, globally), is what allows human society to evolve.
How much of social, industrial, scientific development was accomplished by the ‘smartest’ person, versus how much was accomplished by average people using their life experience and observing their environment in order to understand how to solve life’s challenges? How much was due to serendipity, to observing the unexpected? How much of human development comes from the most unexpected places?
I would argue that the tremendous social and scientific progress of the 20th century was the result of the democratization of education. A vastly more educated populace made all of these ‘millions of observations and experiments’, that go on across humanity every day, much more valuable. One of the likely outcomes of AGI will be to reduce the level of education of the populace (we are already seeing this in the West). With a less educated populace the ability of human society to evolve will be reduced.
A few tens, or even hundreds, of these ‘smartest’ of computers, isolated in ivory towers, can never replace the life experience of hundreds of millions of well educated people. And judging by the energy and resource consumption of these ‘smartest’ of computers, their numbers can never be more than a few hundred.
Human development is a bottom-up process. If the idea of ‘powerful people’ is to use AGI as a ‘top-down’ tool to ‘control the future’ then I am afraid that they are in for a very painful (and costly) disappointment.
Growing AI capabilities from the chess board to the real world is why they are spending billions. When they succeed we shall face the possibility of a tyranny which never ends.
BRICS is not going to stand still in the tech world and will power AI by the cheapest and fastest means available.
That means that Net Zero will have no effect on world CO2 levels and all efforts promoted otherwise are doomed to failure and bankruptcy.
Nothing has changed; I don’t trust the AI community. I can’t see any difference between AI and the power gluttons that dominate the internet now. Now I have to go through pages and pages of Bing or Google offerings before I see anything outside the mainstream thinking. AI will only make it worse. We can’t stop AI, but we can ensure a diversity of sources covering all political and ideological points of view. I don’t know how to do it, but there are people a lot smarter than me who could make it happen.
Yes, the whispered objective is AGI, but what should concern us is what happens next. AGI will inevitably possess recursive learning. And where does that lead? Equally inevitably to superintelligence.
Defining this (very loosely) as the intellect gap between human and frog, with us as the frog, time-lag estimates for this being attained by an AGI range from half an hour to two weeks.
A superintelligence will rapidly scale back its power requirements, and likely harness and vastly improve its processing capabilities by using quantum computing.
At this point, what we have is something largely indistinguishable from many concepts of god. Not sure we’re ready for that.
The phrase “chasing the dragon” refers to a process of smoking low grade heroin.
https://en.wikipedia.org/wiki/Chasing_the_dragon
It is part of the chorus in the song “Time Out of Mind” by Steely Dan:
“Tonight when I chase the dragon
The water may change to cherry wine
And the silver will turn to gold
Time out of mind”
I have to wonder if AI is a bubble waiting to burst.
One of several alternative outcomes.
Ed Zitron has a great series of writeups on the giant AI bubble – wheresyoured.at
quick summary: massive spending, no killer apps, losing money hand over fist over barrel.
This entire AI thing makes the Uber and WeWork scams look like good business.
To give an idea: Microsoft alone is spending at least $30 billion more than “normal” capex because of AI. The estimates for the tech industry overall? $200 billion.
How much revenue and profit is being anticipated given this investment?
To compare: the 2 actual “leaders” in “AI” – OpenAI and Anthropic – are slated to lose around $8.7 billion in 2024 alone.
Why is it that massive wealth motivates the elites to spend profane amounts of money building massive processing enterprises to produce artificial “insights” that lack any semblance of intelligence, but those same individuals can’t spend even five minutes a day on simple critical thinking?

You can’t buy your way smart, but you can certainly buy your way poor.