By David Hodges
The global competition around artificial intelligence is usually framed as a race for chips, models, and computing capacity. That is only part of the story. The binding constraint is increasingly electrical power. In the U.S., data centers already account for roughly 4% to 5% of electricity use, and EPRI projects that figure could rise to 9% to 17% by 2030 as AI deployment accelerates.
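For a sense of what those shares mean in energy terms, here is a minimal back-of-envelope sketch. It assumes total U.S. electricity consumption of roughly 4,100 terawatt-hours per year, an illustrative figure not cited in this piece:

```python
# Back-of-envelope: convert the cited shares of U.S. electricity use
# into terawatt-hours. Total U.S. consumption of ~4,100 TWh/yr is an
# assumption for illustration, not a figure from the article.
US_TOTAL_TWH = 4100

for label, (lo, hi) in [("today (4% to 5%)", (0.04, 0.05)),
                        ("EPRI 2030 projection (9% to 17%)", (0.09, 0.17))]:
    print(f"{label}: {US_TOTAL_TWH * lo:.0f} to {US_TOTAL_TWH * hi:.0f} TWh/yr")
```

Under that assumption, today's share works out to roughly 160 to 200 terawatt-hours a year, close to the 2024 estimates discussed below, while the top of EPRI's 2030 range implies several times that.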
That reality should change how we talk about AI leadership.
AI is not just a software race. It is an infrastructure race. The countries that can most quickly permit, build, connect, and operate reliable power infrastructure will have a decisive advantage in the next phase of digital growth. Every breakthrough in artificial intelligence still depends on physical systems: generation, substations, transmission, cooling, backup architecture, and the industrial work required to make it all run.
The scale of demand is no longer hypothetical. EPRI estimates U.S. data center electricity use in 2024 at roughly 177 to 192 terawatt-hours, with the potential to climb significantly by the end of the decade. New AI-oriented campuses are no longer 20- or 40-megawatt facilities; increasingly they fall in the 100-megawatt to 1-gigawatt range. That is no longer a conventional commercial load. It is grid-shaping infrastructure.
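To translate those facility sizes into annual energy, a rough sketch, assuming near-continuous operation at a 90% load factor (an illustrative assumption; the article does not specify utilization):

```python
# Rough annual energy for campuses at the sizes named above, assuming
# near-continuous operation at a 90% load factor (an illustrative
# assumption; the article does not specify utilization).
HOURS_PER_YEAR = 8760
LOAD_FACTOR = 0.9

for mw in (20, 40, 100, 1000):
    twh = mw * HOURS_PER_YEAR * LOAD_FACTOR / 1e6  # MW * h = MWh; 1e6 MWh = 1 TWh
    print(f"{mw:>5} MW campus: ~{twh:.1f} TWh/yr")
```

On that arithmetic, a single 1-gigawatt campus would draw on the order of 8 terawatt-hours a year, a visible slice of the 177-to-192 terawatt-hour national total cited above.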
Meanwhile, the grid is already under pressure. Lawrence Berkeley National Laboratory reports that nearly 2,300 gigawatts of generation and storage capacity were seeking interconnection at the end of 2024. These queues are a reminder that demand growth does not automatically translate into available power. A project is not “powered” because someone wants to build it; it is powered when generation, transmission, interconnection, and on-site systems align on schedule.
That is why reliability matters as much as supply. AI infrastructure does not tolerate instability. These facilities require high-density power, continuous uptime, advanced cooling, and redundant systems that perform exactly as designed. For many projects, the question is not just whether enough electrons exist on paper, but whether reliable, dispatchable power can be delivered when the facility needs to go live.
This is where the policy conversation often misses the mark. The debate still centers on semiconductor manufacturing and model development. Those are critical issues, but the economic winners in AI will also be determined by permitting speed, grid upgrades, transmission buildout, generation strategy, and the industrial workforce capable of executing complex facilities under hard deadlines.
A practical energy mix will be required. Renewables will continue to expand and should remain part of the buildout, but the operational profile of AI infrastructure means firm, dependable power will remain essential. In the near term, that points to a larger role for natural gas, hybrid systems, grid modernization, and, in some cases, on-site or dedicated power solutions.
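One way to see why firm capacity matters for this operational profile: a facility drawing power around the clock needs far more nameplate capacity from variable sources than its load alone suggests. A minimal sketch, using an assumed 500-megawatt load and illustrative capacity factors, none of which come from the article:

```python
# Illustrative only: nameplate capacity needed for a constant 500 MW
# load to break even on annual energy, by source. The 500 MW load and
# the capacity factors below are assumptions, not figures from the article.
LOAD_MW = 500

capacity_factors = {
    "solar (assumed ~25% CF)": 0.25,
    "onshore wind (assumed ~35% CF)": 0.35,
    "combined-cycle gas (assumed ~85% CF)": 0.85,
}

for source, cf in capacity_factors.items():
    # Energy parity only; actually firming a 24/7 load with variable
    # sources would require storage or backup beyond this figure.
    print(f"{source}: ~{LOAD_MW / cf:,.0f} MW nameplate")
```

Even on this energy-only view, variable sources must be heavily overbuilt to match a round-the-clock load, and the sketch ignores the storage needed to cover hours when they are not producing.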
There is also a broader local dimension. Communities are right to ask what large data center clusters mean for land use, water demand, local grids, and long-term planning. The answer is not to stop building, but to plan better, site smarter, invest earlier, and be clear-eyed about what this infrastructure requires. Done well, these projects can expand the tax base, create jobs, and strengthen regional energy systems. Done poorly, they create friction, delay, and distrust.
The AI race will not be decided by algorithms alone. It will be decided by whether we can build the physical backbone on which advanced computing depends: power plants, transmission, substations, cooling systems, and the teams that bring it all online.
The next chapter of AI will be measured in gigawatts.
David Hodges is Chief Operating Officer of LP Energy Services Group, an industrial services company supporting energy, infrastructure, and mission-critical facilities across North America.
This article was originally published by RealClearEnergy and made available via RealClearWire.