By Christopher Monckton of Brenchley

Douglas Pollock will be known to many readers here as a regular and popular speaker at Heartland conferences. After several years researching the effect of unreliables on electricity grids the world over, Douglas has discovered a truly fascinating scientific result.
He had been looking at nations such as Britain, whose government has gone further than any other towards reducing the economy to third-world status by its unhinged nut-zero policies. As a direct result of this fatuity, Britain now suffers the highest electricity prices in the world.
The manufacturing industries in which we once led the world have died or gone overseas to Communist-led China, India and Russia. Manufacturing now accounts for just 8% of Britain’s already-imploding GDP. The workshop of the world has become its workhouse.
Industries large and small are going to the wall at a record rate, wrecked by the endless hikes in electricity prices whose root cause is the enforced and pointless shuttering of long-amortized and perfectly viable coal-fired power stations that used to produce electricity at only $30 per MWh, and their replacement with wind and solar subsidy farms producing intermittent and unreliable electrical power at anything up to $11,500 per MWh.
What is more, this disastrous industrial and economic collapse has been deliberately precipitated by a once-Conservative “government” that has long abandoned the no-nonsense economic realism and free-market ideals of Margaret Thatcher and Ronald Reagan.
Curiously, though, the crazed infliction of pig-ugly, wildlife-wrecking, landscape-lacerating windmills on the British people is not reducing our electricity-driven CO2 emissions.
More and more windmills and solar panels are industrializing and destroying our formerly green and pleasant land. Yet the fraction of the nation’s electrical power contributed by unreliables stubbornly remains at just below 25%. Douglas Pollock wondered why.
He consulted widely among the ranking experts on grid management, but no one had any idea why grids such as those of Germany and the UK, whose installed unreliables capacity is so much greater than 25% of total generation, are incapable of getting their mean annual contribution from wind power, in particular, above 25%. True, on some days wind can generate about two-thirds of Britain’s electricity. But on average – a la larga, as they say in the casinos of Puerto Rico – the contribution of wind and solar is stuck at 25% of total grid generation.
So Douglas scratched his head and thought about it. After a good deal of research and a lot more thinking, he discovered what was wrong. It was a subtle but devastating error that none of the whinnying enviro-zomb advocates of unreliables had noticed.
Douglas’ argument is a beautifully simple and simply beautiful instance of the logical application of mathematical principles to derive a crucially-important but unexpected and hitherto wholly overlooked result. Read it slowly and carefully. Admire its elegant and irrefutable simplicity.
Let H be the mean hourly demand met by a given electricity grid, in MWh/h. Let R be the average fraction of nameplate capacity actually generated by renewables – their mean capacity factor. Then the minimum installed nameplate capacity C of renewables that would be required to meet the hourly demand H is equal to H / R.
It follows that the minimum installed nameplate capacity N < C of renewables required to generate the fraction f of total grid generation actually contributed by renewables – the renewables fraction – is equal to f C, which is also f H / R ex ante.
Now here comes the magic. The renewables fraction f, of course, reaches its maximum fmax where hourly demand H is equal to N. In that event, N is equal to H ex hypothesi and also to fmax H / R ex ante, whereupon H is equal to fmax H / R.
Since dividing both sides by H shows that fmax / R is equal to 1, fmax is necessarily equal to R.
And that’s it. In plain English, the maximum possible fraction of total grid generation contributable by unreliables turns out to be equal to the average fraction of the nameplate capacity of those unreliables that is realistically achievable under real-world conditions.
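For readers who prefer to see the algebra run, here is a minimal numerical sketch of the result (the demand and capacity-factor figures are illustrative round numbers, not real grid data):

```python
# Minimal sketch of the Pollock-limit algebra with illustrative numbers.
H = 30_000.0   # mean hourly demand, MWh/h (hypothetical round figure)
R = 0.25       # mean capacity factor of the renewables fleet

C = H / R      # minimum nameplate capacity to meet all of H: 120,000 MW

def nameplate_for_fraction(f):
    """Minimum nameplate N needed for renewables to supply fraction f."""
    return f * H / R

# f reaches its maximum where the nameplate N equals demand H:
#   f_max * H / R = H  =>  f_max = R
f_max = R
print(C, nameplate_for_fraction(f_max), f_max)   # 120000.0 30000.0 0.25
```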
For onshore wind, that capacity factor R is a depressingly low 25%. For offshore wind, one might get 30%. The reason is that a lot of the time the wind is not blowing at all, and some of the time the wind is blowing too much to allow safe rotation of the turbines.
What Douglas Pollock’s brilliant and, at first blush, unexpected result means is that the miserably low capacity factor R is in fact also the fundamental limit fmax on the contribution that unreliables can make to the grid without prohibitively expensive and logistically unachievable large-scale static-battery backup.
That means that wind and solar power cannot contribute more than about a quarter of total electricity demand on the grid, unless there is battery backup. However, as Professor Michaux’s 1,000-page 2021 paper for the Geological Survey of Finland has established, there are nothing like enough techno-metals to provide battery backup of the entire grid worldwide.
Just for the first 15-year generation of static-battery backup for the global grid, the Professor calculates that one would need the equivalent of 67,000 years’ worth of total current annual production of vanadium, to name but one of the scarce techno-metals that would be required in prodigious quantities. In another 15 years, another 67,000 years’ production will be needed, for batteries are short-lived, as anyone with a cell-phone knows to his cost. So battery backup is simply not an option on a global scale, even if it were affordable.
Now consider just how devastating is Douglas Pollock’s brilliant result for the climate-Communist narrative. First, it is simple. Even a zitty teenager in high school can understand it. Secondly, it shows that even if global warming were a problem rather than a net benefit there is absolutely nothing we can realistically do about it, except sit back and enjoy the sunshine. Thirdly, it shows that the climate Communists, in placing all their eggs in the electricity basket, have a basket-case on their hands.
For the imminent, enforced replacement of gasoline-powered autos by electric buggies will not only impose an enormous extra loading on the grid – for which most grids are wholly unprepared – but, since the batteries add 30% to the weight of the typical buggy compared with a real auto, the entire transport sector will be squandering 30% more energy than it does now. And that energy is supposed to come from the already overloaded grid, powered by unreliables that can only deliver a quarter of total grid capacity in any event.
It gets worse. In the UK, the “government”, in its final thrust to destroy the British economy, is ordering every household with a perfectly good oil-fired boiler to tear it out in two years’ time and replace it with a ground-source or air-source heat pump, which will deliver far less heat at far greater cost. And where is the electricity for the heat pumps going to come from? From the grid, that’s where.
The bottom line is that, because vastly more electricity than now would be needed to achieve nut zero, and because the Pollock limit means only about a quarter of grid electricity can be delivered by unreliables, the net effect of attempts at nut zero will be to increase global emissions significantly, because, as Douglas has decisively proven, nut zero – even if it were at all desirable, which it is not – is impossible.
Nut zero, then, is a striking instance of Monckton’s Law, which states that any attempt by governments to interfere in the free market in pursuit of some political objective or another will tend to bring about a result that is precisely the opposite of that which was – however piously – intended.
Just as electric vehicles are the gateway to no vehicles at all, the unreliables are the gateway to no energy at all. They’re not intended to work. They’re not intended to be practical. They’re intended to let things limp along until traditional sources of energy are decommissioned, dismantled, and destroyed beyond recall. Then it will suddenly be discovered that the unreliables are problematic and the joyless proles will just have to shiver in the dark.
What is wrong with the following example, which produces an f > R?
Call Nf the renewable nameplate fraction being generated. Over 5 hours Nf is 1, 0.75, 0.5, 0.25, 0 respectively for each hour.
Average Nf for the period of 5 hours is 2.5/5 = 0.5.
So for this period R = 0.5.
H is constant over the 5 hours.
Install renewable nameplate capacity 4H.
Then for the first four hours at least H is generated. For the first three hours excess generation is thrown away. For the last hour zero renewable power is generated.
Then total grid generation including non renewable over period of 5 hours is 5H.
Renewable generation is 4H.
f = 0.8 > R (=0.5)
Should clarify that Nf is the renewable nameplate capacity fraction actually being generated.
Should maybe also say that renewable power being generated at any time is installed renewable capacity (4H) times Nf at that time.
So for five-hour period the renewable power being generated is 1x4H, 0.75x4H, 0.5x4H, 0.25x4H, 0x4H for each hour respectively, i.e. 4H, 3H, 2H, H, 0 respectively for each hour.
Renewable generation in excess of H is thrown away. So actual renewable contribution to grid for each hour is H, H, H, H, 0 respectively for each hour. So total renewable generation over period is 4H.
Also I said “H is constant over the 5 hours”. I should have said “Demand is constant at H over the 5 hours” (so it is always at the average value).
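The five-hour example can be run directly (a short sketch that simply reproduces the numbers above):

```python
# Reproducing the five-hour counterexample with the numbers above.
H = 1.0                            # constant demand each hour (arbitrary units)
nameplate = 4 * H                  # installed renewable nameplate capacity
Nf = [1.0, 0.75, 0.5, 0.25, 0.0]   # nameplate fraction generated, hour by hour

raw = [nameplate * x for x in Nf]  # 4H, 3H, 2H, H, 0
used = [min(g, H) for g in raw]    # generation above demand is thrown away

R = sum(Nf) / len(Nf)              # mean capacity factor = 0.5
f = sum(used) / (len(Nf) * H)      # renewables fraction of total grid output
print(R, f)                        # 0.5 0.8 -> f > R, via overbuild plus waste
```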
Mr Chambers has failed to understand the head posting. The Pollock limit is the maximum fraction of total grid output generated by unreliables without throwing electricity wastefully away.
I had looked in the head posting for a constraint that renewables generating excess power over demand is prohibited but didn’t find it. I’ve just looked again and still didn’t find it.
Call this the no-waste constraint.
Suppose:
– the no-waste constraint applies;
– the mean grid demand is H;
– the minimum grid demand is Hmin;
– the renewable installed nameplate capacity is N;
– the average fraction of nameplate capacity actually generated by renewables is R.
N <= Hmin must be true, as otherwise there is the possibility that the wind is blowing or the sun is shining when demand is less than instantaneous renewables output and the no-waste constraint is broken.
The average renewables output is NR.
The average renewables fraction of total output f = NR/H.
N <= Hmin implies f <= Hmin·R/H.
The Pollock limit is R, so the no-waste constraint actually implies a limit Hmin·R/H which is lower than the Pollock limit (assuming Hmin < H, which is true if demand is not constant).
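As a numerical sketch of that bound (illustrative figures only, not data from any real grid):

```python
# Sketch of the no-waste bound with illustrative numbers.
H = 30.0     # mean grid demand, GW (hypothetical)
Hmin = 20.0  # minimum grid demand, GW (hypothetical)
R = 0.25     # mean renewables capacity factor

N = Hmin     # largest nameplate that can never exceed instantaneous demand
f = N * R / H
print(f)     # 0.1667 -- below the Pollock limit R = 0.25
```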
—————————————-
Is the no-waste constraint realistic?
It is not realistic if either (A) it is considered acceptable to discard excess output generation from renewables or (B) it is possible to throttle the output from renewable generators.
(A) is acceptable or not depending on how you want to evaluate the cost/benefit effects of adding more renewable generators to increase the fraction of grid generation produced by renewables. Personally I think the current effort to maximize this fraction is crazy and extremely harmful but I expect the people pushing this agenda and whom you want to convince would have no problem swallowing (A).
(B) is a technical issue. The throttle control might be coarse-grained (such as switching off a wind-turbine or a solar panel) or fine-grained (reducing power by some smaller fraction of available power). I’m not knowledgeable about such things but I would expect there is at least some level of coarse-grained control.
If the constraint is relaxed then the Hmin.R/H limit or the Pollock limit can easily be breached.
Read the head posting again. The following appears, immediately after the calculations:
“What Douglas Pollock’s brilliant and, at first blush, unexpected result means is that the miserably low capacity factor R is in fact also the fundamental limit fmax on the contribution that unreliables can make to the grid without prohibitively expensive and logistically unachievable large-scale static-battery backup.”
This point is actually made twice in the article. If there were no battery backup, the excess generation would of course be expensively wasted.
Thanks, yes, I read the parts excluding battery backup (for good reasons).
However, whether excess generation is considered to be wasted or not depends on how the costs/benefits are assessed. Excess generation combined with curtailment can be used to increase the renewables fraction of total grid output (“curtailment is the deliberate reduction in output below what could have been produced in order to balance energy supply and demand or due to transmission constraints” – Wikipedia).
Examples of a case being made for exploitation of excess generation combined with curtailment (not saying I agree with doing this):
Curtailment of low-cost renewables a cost-effective alternative to ‘seasonal’ energy storage
Minnesota study finds it cheaper to curtail solar than to add storage
Overbuild solar: it’s getting so cheap curtailment won’t matter
Analysis of Energy Curtailment and Capacity Overinstallation to Maximize Wind Turbine Profit Considering Electricity Price–Wind Correlation
Study: Wind Power Curtailment More Cost-Efficient Than Storage
I think your conclusion is interesting, Christopher, but slightly flawed. If you add more windmills, C also increases, but the conclusion must be that this is pretty much useless, as it is intermittent.
On another point, the cost of a grid-scale battery with sufficient capacity is estimated at between £3 trillion and £6 trillion. It might last 15 years, but would be so dangerous that it would be very unwise to build it unless it were greatly distributed around the country. The required capacity is 28.8 terawatt-hours (28,800 GWh), for the UK’s 40 GW of grid capacity.
Bankruptcy is the obvious outcome for everyone and for the country. We would also need 160 GW more nameplate capacity of windmills; they would be everywhere!
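A quick sanity check of those figures (a sketch that simply takes the quoted numbers at face value):

```python
# Sanity-checking the grid-battery figures quoted above.
capacity_twh = 28.8                 # proposed battery capacity, TWh
grid_gw = 40.0                      # UK grid capacity, GW
cost_low, cost_high = 3e12, 6e12    # the quoted £3-6 trillion estimate

hours = capacity_twh * 1000 / grid_gw    # 720 h = 30 days at full 40 GW draw
kwh = capacity_twh * 1e9                 # capacity in kWh
print(hours / 24, cost_low / kwh, cost_high / kwh)
# -> 30.0 days of storage, at roughly £104-208 per kWh installed
```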
No. C is carefully defined as the minimum installed nameplate capacity of renewables that would be required to meet the hourly demand H; C is equal to H / R, for R the average fraction of nameplate capacity actually generated by renewables (their mean capacity or load factor). Adding more unreliables does not, therefore, increase C.
Forget the maths: the wind doesn’t blow above the ~15 knots a turbine needs more than 25% of the time on land.
Without storage, this will always be the limiting factor.
You are correct; solar PV is only able to generate useful power about 25% of the time (which of course has been known for a long time). Adding extra generation to cover the other 75% will not work without storage: the excess above demand will just be wasted.
25% is indeed the capacity factor for onshore wind in the UK. In countries with more or steadier wind or both, the capacity factor may be higher. But, as Mr Pollock has shown, whatever the capacity factor is for a particular species of unreliables or for a particular territory, that capacity factor is the Pollock limit. Any capacity installed beyond that limit will be wasteful, costly and destabilizing.
My “zitty teenager” days ended (just under) 40 years ago, but I’m having problems with the details of the argument, and the rhetoric, used here.
OK, so for a given time period, one hour in the example used, then:
R = [the average power output (in GW, say) of “renewables” for that time period] divided by [the “nameplate” capacity (in GW) of those “renewables”]
and
f = [the average power output (in GW) of “renewables”] divided by [the average “Demand” (in GW) of the entire grid]
So if you calculate the f and R values for a series of (hourly) time periods, then it should be “impossible” to find an f value greater than R (or an R value less than f) … Hmmmmmmmm …
One of my favourite science quotes :
“If it disagrees with experiment, it’s wrong. In that simple statement, is the key to science. It doesn’t make any difference how beautiful your guess is. It doesn’t make any difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it’s wrong. That’s all there is to it.” — Richard P. Feynman
Half-hourly data for the Great Britain electricity grid from the end of November 2022 to the beginning of January 2023, with “capacity factor” data from the ET 6.1 Excel file on the BEIS’s “Energy Trends: UK renewables” webpage (direct link, using “GB = England + Scotland + Wales”), is shown in the attached graph.
NB : In my graph “Nameplate f[raction] = NP_f = R” and “Demand f[raction] = Dem_f = f”
The main claim, that “R >= f” effectively, appears to be incorrect (for “Wind in GB”, at least).
It is unclear whether this comes “directly” from Douglas Pollock’s original argument or CMOB’s “interpretation” of it.
To check if the “integration time” affected the results I did a similar set of calculations for the daily “sums” — in “GWh per day” rather than “average GW per 30-minute time period” — for the last 13 months and got the following graph.
Note that the minimum difference is around -3% to -4% (instead of down to around -15% for the 30-minute resolution data), and that for the period since 24/11/2022 only one day had a negative difference value (instead of the dozen or so “dips below zero” in the OP’s graph).
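For anyone wanting to repeat the exercise, the calculation itself is straightforward. A sketch only: the file name and column names below are hypothetical placeholders for whatever BEIS / National Grid data you download:

```python
import pandas as pd

# Sketch of the check described above; CSV file and column names are
# hypothetical placeholders for the downloaded grid and capacity data.
df = pd.read_csv("gb_grid_half_hourly.csv", parse_dates=["timestamp"])

df["R"] = df["wind_output_gw"] / df["wind_nameplate_gw"]  # "NP_f" in the graph
df["f"] = df["wind_output_gw"] / df["demand_gw"]          # "Dem_f" in the graph

# Compare f against R at several integration periods.
for window in ["1D", "30D", "365D"]:
    avg = df.set_index("timestamp")[["R", "f"]].resample(window).mean()
    share = (avg["f"] > avg["R"]).mean()
    print(window, f"periods with f > R: {share:.1%}")
```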
Mark BLR has made the same error as many others here. He has not noticed that I had written: “What Douglas Pollock’s brilliant and, at first blush, unexpected result means is that the miserably low capacity factor R is in fact also the fundamental limit fmax on the contribution that unreliables can make to the grid without prohibitively expensive and logistically unachievable large-scale static-battery backup.”
Without that backup, surplus generation is expensively wasted.
In periods of high wind, the capacity factor of wind turbines transiently increases, just as it falls to zero when there is no wind. One must, therefore, take the mean hourly demand over a sufficiently long period to be properly representative.
I look forward to reading your articles here on WUWT, and usually agree with your (scientific) points of view, but in this case I fear you have “fallen in love with your theory”.
You may need to take a step back and ask yourself exactly why so “many” of us have made (exactly ?) the same “error” after reading the ATL article.
My personal focus is on the GB electricity grid.
Other people will focus on grids closer to home (Texas / ERCOT, California, South Australia, Germany, …) but to the best of my knowledge the GB grid doesn’t (yet) include any of that “large-scale static-battery backup” you mention.
Elements like the Minety (150 MW / 266 MWh) and Pillswood (98 MW / 196 MWh) batteries are sized (2 hours max) for “frequency stabilisation” functions, they are not suitable for “grid-scale backup” operations.
I repeat, to the best of my knowledge the GB grid doesn’t currently include any such “backup”.
Please provide links if my “assumptions” are incorrect.
In my browser the “Ctrl-f” keyboard shortcut brings up a “Search in webpage …” box.
Entering “wasted” for this page reveals that the word doesn’t appear at all in the ATL article; it can be found for the first time in a “Reply” you made (to user “chadb”) stating:
– – – – –
Actual quotes from the ATL article follow.
My data for the GB grid shows that “R” values greater than 60% are “realistically achievable under real-world conditions” … without “battery backup” systems in place.
Again, the GB grid doesn’t include any such “battery backup”.
The ATL article clearly states that under those conditions wind (and solar) power “cannot” contribute “more than about a quarter of total electricity demand”.
My actual empirical data shows both 30-minute and 24-hour periods when “R” for the GB grid exceeds 60% … without “battery backup” systems in place.
– – – – –
After some thought, an alternative way of looking at the actual empirical data.
None of the “limits” to the f value mentioned in the ATL article are correct for the GB grid (at least).
Please quantify just how long that “sufficiently long period” needs to be.
I’m confused by your confusion. Your focus on short time frames is inappropriate. See the following direct quote from the article per Monckton:
“True, on some days wind can generate about two-thirds of Britain’s electricity. But on average – a la larga, as they say in the casinos of Puerto Rico – the contribution of wind and solar is stuck at 25% of total grid generation.”
OK, mea culpa, I got so distracted by the “wind and solar power cannot contribute more than about a quarter of total electricity demand on the grid, unless there is battery backup” claim that I missed that.
In the post you replied to I was asking CMoB to quantify what “a sufficiently long period” meant, i.e. to provide an actual concrete number.
He has not replied (yet), but maybe you can provide the answer instead.
Does “a la larga” mean the Pollock limit (fmax = R) only applies for 1-month averages (and longer)?
For 3-month (quarterly) averages?
For 12-month (annual) averages?
For some other “integration period”? … One that is not specified in the ATL article …
I suspect that some of the confusion here regards just how the “capacity factor” is calculated. Here, from the EIA Electric Power Annual 2021, are the capacity factors of the major sources of US energy.
2021 US Capacity Factors
Coal: 49.1%
Gas—Combined Cycle: 55.0%
Gas—Gas Turbine: 11.7%
Gas—Steam Turbine: 12.5%
Gas—Internal Combustion: 18.2%
Geothermal: 69.8%
Hydroelectric: 36.0%
Nuclear: 92.7%
Biomass: 63.2%
Solar—Photovoltaic: 24.4%
Solar—Thermal: 20.5%
Wind: 34.4%
Wood: 59.9%
Nuclear is first, 92.7%, followed by geothermal, 69.8%.
Hmmm … I suspect that folks can see the issues …
w.
PS—I think grid-scale solar and wind are an expensive and tragic joke that has no place in modern generation. But that’s a separate question.
Regarding solar PV, there are two separate issues that cause low CF:
1—the artificially high standard max power rating that is disconnected from actual use conditions
2—any generation above demand which is wasted
#1 is constant
#2 increases as more PV is connected to a grid
Prior to the 2020 election, some voices in the US nuclear industry were promoting a long-term power generation mix for America of one-third nuclear, one-third natural gas, and one-third wind & solar.
The theory behind this position was that nuclear would handle mostly continuous baseload demand; and that natural gas would deal with the daily and weekly ups and downs of wind & solar’s intermittency and relatively low capacity factors.
This position was in large part a reflection of the reality that public policy decision makers were forcing the adoption of wind & solar regardless of what the long term costs of that policy would entail, and that the damage wind and solar could inflict on the the grid could be minimized if its market penetration could be held to just one-third of the nation’s total generation capacity.
After the 2020 election, with climate activists now in full control of the government, the 1/3rd-1/3rd-1/3rd generation mix proposal has become untenable simply for the fact that the Biden Administration is bent on eliminating gas-fired generation altogether.
For the next two decades, we are all in for a wild ride as the process of Net Zero transition continues to gain momentum.
Thank you for the information. Yes, we can see the issues. It is unclear why politicians cannot.
Willis is right that there are many definitions of “capacity factor”. In accordance with the usual scientific practice, therefore, the head posting states which definition it is using. For instance, “the average fraction of the nameplate capacity of … [un]reliables that is realistically achievable under real-world conditions”. That fraction will, of course, vary depending on average weather in each country, as previously explained.
Thanks, Christopher. My problem with that definition is twofold. First, it’s not the usual definition, as we can see from the EIA figures.
Second, and more importantly, I have no idea how we’d measure your definition. The problem is the “realistically achievable” part of the definition. My example comparing Norwegian and US hydropower capacity factors in this thread highlights the issue. The difficulty is that in addition to varying based on the average weather in each country, what is “realistically achievable” depends on the choices of the folks managing the grid in each country, and which sources they choose to use at every instant …
Best regards,
w.
Willis, your problem is with the EIA’s definition, whose bizarreness you have rightly highlighted. Douglas Pollock’s definition is much more widely used in the industry. And of course it is possible to determine the mean annual national capacity factor for, say, wind power by reference to meteorological records of wind speeds, and, increasingly, by hard experience. In Britain, the mean national capacity factor for onshore wind power is about 25%; for offshore wind, more like 30%.
As previously discussed, there can be quite considerable variations either side of these values: but Douglas Pollock’s result – which is quite unexpected, and is simply not known among governments, grid operators or generators – is a very useful benchmark.
It tells us, for instance, that there is no point in adding any more wind or solar power to the UK grid, unless battery backup is added too, at crippling cost. For we have already installed capacity well in excess of the Pollock limit.
What is also remarkably useful about Douglas Pollock’s insight is that it is simple enough to be comprehensible. Reading some of the inspissate calculations done by some of the commenters shows how difficult things are without a simple benchmark to give grid operators some idea of how much (or, rather, how little) unreliables generation they can install without either battery backup or capacity payments, both of which are very costly.
Douglas has consulted very widely in the industry over recent years, and several of those whom he consulted had noticed that, after a certain point, their additional installations of wind and solar power were not increasing the fraction of total generation contributed by those species. But they did not know why. Now we know why.
Thanks as always, Christopher. I find the Pollock Limit to be a most fascinating insight.
And as with many such limits, what is of the most interest is why and how some places are able to exceed the limit.
That’s why I find Ireland to be an interesting case. Not only are they exceeding the Pollock Limit, but every year since exceeding it they’ve exceeded it even further.
Now, Ireland doesn’t pay wind farms for curtailments (over-production of more than the grid needs). But it does pay for constraints (more energy than the grid can transmit). Are these the reason that Ireland can exceed the Pollock Limit?
I don’t know. But it is exactly these kinds of cases that can either point to ways to exceed the limit or alternatively show that those ways are uneconomical.
There’s an interesting analysis of this Irish situation entitled “Value of demand flexibility for managing wind energy constraint and curtailment” … worth a read.
My thanks to you as always for raising interesting issues … and a gentle suggestion for you, that you treat those raising questions about your work with some modicum of respect …
Your friend as always,
w.
Willis Eschenbach asks whether EirGrid, the Irish national grid authority, makes what the industry calls “capacity payments” to wind and solar generators to switch off at times of high wind, strong sun or low demand.
EirGrid has in the past made capacity payments, and still does so to some extent. Its preferred tactic, however, is simply to order the unreliables generators to shut down at their cost whenever necessary. In 2019 these compulsory shutdowns cost unreliables generators some $60 million, which, scaled up by UK/Irish population, would be $850 million – about treble the UK’s capacity payments in that year. See the picture?
The reason for these very heavy costs is that, in Ireland as in the UK, the Pollock limit has been exceeded. If the grid authorities had known about this limit, they would have been able to discourage the connection of still more costly and wasteful surplus unreliables capacity unless and until cripplingly expensive matching backup battery storage had been installed.
And if you want to preach, Willis, do it privately. By now, as an editor of this site, you ought to know that there are several climate-Communist commenters here, some of them paid, whose objective is to divert any argument that might prove fatal to the Party Line. There are several others who think they are good mathematicians or physicists who, in reality, lack the intuitive ability to comprehend theoretical results and will vent their frustration by often childish attacks on the authors of posts here.
I have no patience with these trolls. If commenters raise genuine questions and do so with reasonable politeness, I respond in kind, and kindly. If they play what in Yorkshire we call “silly-b*ggers” I do not take prisoners.
Likewise, I deal very firmly with those who, even after further explanation, deliberately misrepresent the head posting. For instance, several commenters have taken a single sentence from the head posting as suggesting (which is entirely fatuous if one thinks about it even for an instant) that I had said it was physically impossible to generate more wind or solar power than the respective national capacity limits.
Indeed, one commenter even went so far as to say I had said nothing at all about battery backup, even though at least a quarter of the head posting was devoted to it.
What is useful about these discussions is that they show how difficult it is to explain theoretical concepts, however elementary, even to those with a scientific background who are in reality unable to think conceptually. For not all of those who have misunderstood the head posting have done so wilfully. They have misunderstood it because, like most scientists, they do not have the kind of mind that is at ease going from the concrete to the abstract and back without getting lost.
Douglas Pollock, who has read and enjoyed the entire thread, has been able to make a few tweaks to his paper, particularly in definitions of terms, so that as far as possible any reviewer who may have the same blind spot when it comes to comprehending the underlying meaning of a sequence of abstract equations will be able to get the point, which I think you have now seen.
If Douglas’ result proves to be correct, then the next question is how much wind power backed up by static batteries will cost. We are quietly working with a geometallurgist of more than usual competence who has been studying that question for many years.
To ensure continuity of supply on the UK grid, for instance, one would need at least three months’ battery storage to cover the long and quite frequent solstitial periods when there is little or no wind. However, there are not enough known or foreseeable techno-metal reserves to make batteries on this scale.
Therefore, Douglas’ result means that nut zero is not only unnecessary but also unaffordable and unattainable.
It is also useful to look at the ratio between average demand and peak demand. That provides an overall limit to the system capacity factor. If some demand is met by baseload generation at higher capacity factors, the rest of the generation must necessarily operate at lower capacity factors. This feature, coupled with undispatchable randomness, is why wind and nuclear are effectively incompatible at higher levels of either.
It doesnot add up: “It is also useful to look at the ratio between average demand and peak demand. That provides an overall limit to the system capacity factor. If some demand is met by baseload generation at higher capacity factors, the rest of the generation must necessarily operate at lower capacity factors. This feature coupled with undispatchable randomness is why wind and nuclear are effectively incompatible ……. .”
This is a key point — if a portion of total power demand is being met by one type of generation at higher capacity factors, then the remainder of the generation fleet must necessarily operate at lower capacity factors.
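A toy calculation makes the squeeze visible (illustrative round numbers, not data from any real grid):

```python
# Toy numbers: how baseload at high capacity factor squeezes the rest.
avg_demand = 32.0    # GW, hypothetical annual average
peak_demand = 47.0   # GW, hypothetical peak (fleet must be sized for this)

# No fleet sized for the peak can average better than avg/peak overall:
system_cf_limit = avg_demand / peak_demand           # ~0.68

# Let baseload serve half of average demand at a 90% capacity factor:
b, cf_base = 0.5, 0.90
baseload_capacity = avg_demand * b / cf_base         # ~17.8 GW
residual_capacity = peak_demand - baseload_capacity  # ~29.2 GW for the rest
residual_cf = avg_demand * (1 - b) / residual_capacity
print(system_cf_limit, residual_cf)                  # ~0.68 overall, ~0.55 rest
```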
In a power marketplace where non-dispatchable wind & solar are being given priority access to the grid — and where there is little or nothing in the way of battery backup storage — those dispatchable resources still attached to the grid must bear the operational burdens and the cost burdens of dealing with wind and solar’s intermittency.
Nuclear generation and gas-fired generation are both dispatchable sources of electricity. But as things stand today, gas-fired generation is technically, operationally, and economically better at it than is the current generation of nuclear power plants.
From a technical and operational perspective, the oncoming small modular reactor (SMR) nuclear technologies are intended to be much more suitable as variable energy resources in coping with a power grid heavily penetrated by non-dispatchable wind and solar.
These SMRs will be sold to public policy decision makers and to utility executives as being ‘dispatchable emission-free resources (DEFRs)’ in NYISO parlance. The sales pitch will be that an SMR power plant will be much more capable than massive banks of batteries in reliably backing up wind & solar 24/7/365 in a variety of operational conditions.
But what about cost? Nuclear fuel is roughly ten percent of the cost of running a nuclear power plant. The bulk of the life-cycle cost of a nuclear plant lies in capital cost recovery, in plant maintenance, and in day-to-day plant operations which require the presence of well-trained and well-paid personnel to perform.
Those designing the oncoming SMRs are working to address all facets of the cost issues of nuclear, with primary focus on reducing capital costs. The other cost factors are also being worked on to some extent. But with nuclear, one can only go so far in addressing those other cost factors.
Will an SMR-based power plant be as effective as a gas-fired power plant in coping with wind & solar’s intermittency? I think the SMRs can be equally effective from a technical and an operational perspective. However, from a total lifecycle cost perspective, I don’t think the SMR’s could be as competitive in that role as gas-fired generation.
In the context that public policy decision makers want emission free electricity, the main advantage the oncoming SMRs have over gas-fired backup as a dispatchable variable energy resource is that these will be emission free. If it weren’t for the low-carbon and zero carbon mandates being pushed by America’s politicians, new-build nuclear would not be under consideration in the US. (Nor would wind and solar backed by batteries for that matter.)
Repeating what I’ve stated in earlier comments:
What is not being said by anyone of real prominence in the nuclear industry is that the impacts of wind & solar in driving up the future price of electricity will certainly be one of the major factors making the relatively high upfront capital costs of nuclear more acceptable in the power generation marketplace.
My warning to all nuclear power advocates on that score is this:
A strategy of depending upon future increases in the price of electricity in order to make nuclear competitive with natural gas — as opposed to pursuing diligent, rigorous, and tightly-focused efforts at keeping nuclear’s capital costs under control — would be a major mistake, one which could prove fatal to a 2020’s nuclear renaissance in America.
My first back-of-the-envelope on SMRs considered two variables, the financing-cost interest rate and average utilisation, producing a table of cost per MWh for the capital investment, assuming a constant plant life. Whilst fuel cost and maintenance cost may show some variation with operating regime, they are relatively small beer in comparison. Load-following a grid at 60% utilisation will multiply cost by at least 5/3. Providing wind infill while other generation provides baseload probably takes cost up by a factor of 3 or more.
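A minimal reconstruction of such a table (a sketch: the $5,000/kW figure, the 40-year life and the rates are illustrative assumptions, not vendor data):

```python
# Back-of-the-envelope capital cost per MWh vs financing rate and utilisation.
capex = 5_000.0   # $/kW overnight capital cost (illustrative assumption)
life = 40         # assumed plant life, years

for rate in (0.04, 0.07, 0.10):
    crf = rate / (1 - (1 + rate) ** -life)    # capital recovery factor per year
    for util in (1.00, 0.60, 0.33):
        mwh_per_kw_yr = 8.760 * util          # MWh generated per kW per year
        cost = capex * crf / mwh_per_kw_yr    # $/MWh, capital recovery alone
        print(f"rate {rate:.0%}, utilisation {util:.0%}: ${cost:6.0f}/MWh")
# At any given rate, 60% utilisation costs 1/0.6 = 5/3 as much per MWh as 100%.
```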
That’s probably a good estimate as first back-of-the-envelope estimates go.
That said, the politicians and the climate activists who are pushing wind & solar backed by batteries don’t particularly care about the true costs of their Net Zero vision. At least they don’t care about wind & solar’s true costs.
The senior executives of most power utilities don’t care about wind & solar’s true costs either. Their job is to make a profit for their utility; and if they play the game smartly, they can use an ‘asset churn’ type of strategy in pursuing renewable projects in order to enhance the bottom line.
However, when it comes to nuclear, the attitudes of the politicians and the public policy decision makers are different. The attitudes of utility executives towards nuclear are different as well.
In theory, a future energy marketplace in which the price of electricity is two or three times higher than it is today ought to enable a much more friendly market environment where nuclear’s high upfront capital costs can be justified.
It won’t be nearly that simple for those promoting a specific nuclear project. The predicted capital and operational costs of any proposed nuclear construction project will come under exceptionally intense scrutiny. Especially so after the cost & schedule overrun debacles of the VC Summer and Vogtle 3 & 4 projects.
In contrast with what the promoters of wind and solar projects are now expected to produce as evidence for their cost & schedule projections, those who are promoting nuclear power must deliver cost & schedule estimates which are strongly backed by highly detailed, highly credible basis-of-estimate information and data.
More than that, the very first SMR construction project out of the gate here in the US must fully deliver on its cost & schedule commitments. And it must do so in every phase of the project from beginning to end. If the first SMR project in the US blows its cost & schedule estimate, the adverse consequences for the future of new-build nuclear power in the US will be severe.
At any rate, one way or another, the grand wind & solar experiment will be going forward here in America regardless of how accurate predictions of steeply rising costs for electricity are eventually proven to be.
Proposing molten salt reactors as the energy answer is exciting, much better than the current effort to replace “fossil fuels” with sunshine and breezes.
MSR technology, via Bill Gates’ first commercial operation, is supposed to debut prior to 2030. I hope so.
Accelerated testing methods for the corrosion-resistant materials required will have had to satisfy regulators and investors. Such testing over a 5-year or, more likely, 10-year period seems necessary to find the “best” materials, but the “best” materials need to last 40+ years to be commercial. I see comments about “cladding” for MSR corrosion resistance, but I suspect there are patent-protection issues that will discourage cooperation and extend timeframes.
The best hope we have for phasing in nuclear technology from 2030-2040 is NuScale’s small scale modular reactors, while molten salt fast neutron reactors are being perfected IMHO. I’m not by any stretch an expert, just a seriously interested old man who wants to see the beginning of the new generation nuclear renaissance before I kick off!
TerraPower will run tests with depleted uranium, which is not used in fission, to determine which materials can hold molten salt without being damaged by corrosion
https://www.researchgate.net/publication/333245378_Status_of_Metallic_Structural_Materials_for_Molten_Salt_Reactors
2018:
Hastelloy N has not been qualified for use in nuclear construction, and significant additional characterization would be required for Code qualification. …
… It is recommended that a systematic development program be initiated to develop new nickel alloys that contain a fine, stable dispersion of intermetallic particles to trap helium at the interface between the matrix and particle, and with increased solid-solution strengthening from addition of refractory elements.
With support from computational materials science tools, a speculative time frame for a down-selection program, using 20-30 kg heats, is about four to five years….
My expectation is that the NuScale SMR design will be the first to reach commercial operation in the US, in early 2029 if the current schedule is met. The NuScale design is a light water reactor derivative using legacy LWR/PWR nuclear technology which has been loaded into a smaller package. In comparison, the Natrium reactor design and its associated reactor development project is more ambitious both technologically and programmatically.
In addition, the NuScale team has been working on their SMR design since the mid-2000’s. The founders of the company recognized early on that the energy marketplace in the US was no longer conducive to building the large unitary 1,100 MWe power reactors. What was needed instead was scalability in response to slower growth in demand for electricity, a more cost-effective approach to fabricating QA-compliant systems and components, a much reduced emergency response planning zone, and the need to tightly manage every facet of the design, development, and construction of their SMR concept in a highly disciplined way, thus avoiding unforeseen regulatory roadblocks as the project moves forward.
TerraPower claims they can get their first Natrium reactor into commercial operation by 2028.
After looking at their published high level schedule, I don’t believe this claim. Too much work covering too many areas of technology development & demonstration, supplier qualification & mobilization, regulatory review & permitting, and project planning & scheduling still remains to be done before the Natrium design can go into commercial operation. For an SMR design as ambitious as the Natrium, six years simply isn’t enough time to get all this work done. Given how ambitious the TerraPower project actually is, 2032 or even 2035 seems to me to be a more realistic target date.
Here are illustrations of the NuScale SMR module design and the SMR module containment building.
The reactor core and the steam generator are contained in one 76-foot high 77 MWe SMR module. These modules reside in a common pool of water inside the same containment building. For a 12-module power plant, two containment buildings carry six modules each. For a 6-module power plant, the two containment buildings carry three modules each. The emergency planning zone for a NuScale plant extends only to the plant fence.
Each SMR module serves one turbine generator. Each reactor-turbine combination can be brought online or shut down independently of the others. Each SMR module can be refueled independently from the others while the others are still operating. The NuScale design uses half-height conventional fuel rods. Any future improvements in the nuclear fuel technology used in the legacy LWR/PWR reactor fleet can also be applied to the NuScale SMR design. Enough room is allocated on the plant site for sixty years of dry cask spent fuel storage.
This month, NuScale submitted another application to the NRC for Standard Design Approval (SDA) of their standard six-module SMR plant. The revised application uprates the previous 50 MWe design to 77 MWe. Based on the NRC’s comments made on a preliminary draft of the SDA application submitted in the fall of 2022, I’m expecting NRC review and approval of this latest submission to take fourteen to sixteen months once all required revisions to the application are complete.
In the meantime, other work needed to mobilize the component supplier base can continue with little or no risk to the 2029 project completion date from obstacles directly related to regulatory oversight and compliance. My opinion remains that NuScale is well ahead of its SMR competitors in doing all the work needed to get an SMR-based power plant into production operation in the United States.
What kinds of obstacles could prevent that first SMR plant from being built?
Opposition from anti-nuclear activists isn’t the most important obstacle to new-build nuclear in the US. Keeping nuclear’s capital costs under control is the most difficult challenge the nuclear construction industry now faces. The target for most SMR projects is to keep capital cost to $5,000 per kw or lower. The high rate of inflation which is affecting all component suppliers in the industrial supply chain is now the greatest threat to the successful deployment of SMR technology in this country.
Nuclear projects are different from wind & solar projects because the people who make the energy policy decisions and the energy system procurement decisions don’t particularly care what wind & solar costs. Wind & solar subsidies will flow regardless. However, they do care what nuclear power costs. If the nominal capital cost for a new-build SMR plant rises too much above $5,000 per kw, then here in the US, we will be seeing the postponement or outright cancellation of many if not all of the SMR projects now in the pipeline.
Time will tell what happens.
My friend in London informed me he is now paying $0.65 per kWh (I pay about $0.10 here in rural Northern PA). The unintended consequences (widely reported, except by CNN) are that it now costs more to “fill up” your EV than your ICE car, and heat pumps now become much more expensive than gas alternatives. These facts are why the only obvious course now is to ban the better and cheaper alternatives to green electricity. Otherwise, the free market will destroy their mirage.
Lord Monckton, if you are briefing a senior Conservative MP next week on the Pollock limit, would it be possible please for you also to point out that for offshore wind to produce P GW of dispatchable/reliable power it will require an installed capacity of 7.5P GW (over 7 times more)?
My calculation is as follows:
Suppose we want P GW of power to be “dispatchable”, meaning always available “on demand”.
Let us start with P GW of installed wind turbine power and calculate the extra installed capacity required to produce P GW of dispatchable power.
Now the capacity factor of offshore wind turbines is 33% (onshore is less), so the average power over a year supplied by the wind turbines is 0.33P GW, and consequently the missing 0.67P GW will, on average, have to come from storage.
The efficiencies for hydrogen storage are:
Electrolysis: 60%
Compression: 87%
Electricity generation: 60%
So overall efficiency = 60% × 87% × 60% = 31%
So the amount of excess power required to produce the missing 0.67P GW is 0.67P/0.31 = 2.16P.
Since the capacity factor is 33%, this means we will need 2.16P/0.33 = 6.55P GW of additional installed wind power to provide the needed 0.67P of dispatchable power.
Hence a total of P + 6.55P = 7.5P of installed wind turbine capacity is required to provide P GW of dispatchable power.
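The whole calculation as a short script, for anyone who wants to check it (using only the figures above):

```python
# Checking the offshore-wind-plus-hydrogen-storage arithmetic above.
cf = 0.33                           # offshore wind capacity factor
eff = round(0.60 * 0.87 * 0.60, 2)  # electrolysis x compression x generation = 0.31

missing = 1 - cf                    # 0.67P must come back out of storage
excess = missing / eff              # 2.16P of surplus generation needed
extra_nameplate = excess / cf       # 6.55P of additional installed capacity
total = 1 + extra_nameplate         # ~7.5P total per P GW of firm power
print(round(excess, 2), round(extra_nameplate, 2), round(total, 1))
# -> 2.16 6.55 7.5
```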
This is of course if you agree with my calculation!
Thank you.
Mr Brown’s calculation is certainly intriguing. Does he know whether the inefficiencies to which he draws attention are already taken into account in statements of the nameplate capacity of wind farms?
Lord Monckton, Many thanks for reading my post and for your question.
The “nameplate capacity” (or I call it the “installed capacity”) of a wind turbine is the maximum capacity (output) possible from a wind turbine in ideal conditions. To quote Wikipedia :
“Nameplate capacity, also known as the rated capacity, nominal capacity, installed capacity, or maximum effect, is the intended full-load sustained output of a facility such as a power station,[1][2] electric generator, a chemical plant,[3] fuel plant, mine,[4] metal refinery,[5] and many others.
Nameplate capacity is the theoretical output registered with authorities for classifying the unit. For intermittent power sources, such as wind and solar, nameplate power is the source’s output under ideal conditions, such as maximum usable wind or high sun on a clear summer day.
Capacity factor measures the ratio of actual output over an extended period to nameplate capacity. Power plants with an output consistently near their nameplate capacity have a high capacity factor.”
https://en.wikipedia.org/wiki/Nameplate_capacity
For wind farm developers this is not only the highest figure for the wind farm’s output – which they would like to quote – but it is the only figure they can accurately quote as the actual (real) output is variable depending on the amount of wind during the period under examination.
The UK BEIS’s UK Energy in Brief 2022 (data for 2021), p. 33:
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1094025/UK_Energy_in_Brief_2022.pdf
This shows onshore (29.2 TWh) + offshore (35.5 TWh) wind produced a total of 64.7 TWh of energy during 2021. This is an average of 7.4 GW over the year.
Wikipedia shows that the UK had 25.7 GW of (installed/nameplate) wind capacity in 2021. So the capacity (load) factor is 7.4 GW/25.7 GW = 29%.
In fact Wikipedia itself puts the figure at 29.3% for 2021.
https://en.wikipedia.org/wiki/Wind_power_in_the_United_Kingdom
So I do not think the “nameplate capacity of wind farms” takes into account the capacity factor; the inefficiency of wind means that the average output from a wind turbine is only around 30% of the quoted installed/nameplate capacity.
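The arithmetic as a one-screen check (using only the figures just quoted):

```python
# Checking the 2021 UK wind capacity-factor arithmetic above.
energy_twh = 64.7      # 29.2 TWh onshore + 35.5 TWh offshore, 2021
nameplate_gw = 25.7    # installed UK wind capacity, 2021 (Wikipedia)

avg_gw = energy_twh * 1000 / 8760        # average output over the year
print(round(avg_gw, 1))                  # 7.4 GW
print(round(avg_gw / nameplate_gw, 3))   # 0.287 -> ~29% capacity factor
```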
I believe my calculation is correct and explains why there is no attempt anywhere in the world to use hydrogen as a store of energy at grid-scale to counter the intermittency of renewables.
I look forward to your comments and, if you agree with my calculation, please explain it to the senior Conservative MP you are meeting next week.
Thank you.
https://www.statista.com/statistics/536065/finland-share-of-electricity-produced-from-renewable-energy/
https://aaltodoc.aalto.fi/bitstream/handle/123456789/21929/isbn9789526069999.pdf?sequence=4
Probably too late to matter: I have been repeatedly pulled back to this post, not by my initial, too-quick reading of it, but by attacks against it by other respected posters and by commentary (with chorus) on those attacks.
Apologies are offered, if somehow warranted, for my poor understanding and perhaps a momentary misrepresentation of my own of this post in comments. Following directions, finally, I have read it all carefully and believe that I can appreciate it. As well, I think that I appreciate how those attackers are incorrect, and how unworthy it is for some of them to be so engaged for this long without a retraction or correction.
Thanks, Christopher Monckton, for an excellent puzzle and useful exercise. I am unable to engage in any activity that would help to verify or refute the postulate you’ve presented. It is an interesting idea, and I hope that Douglas Pollock will present some of his work in a way that I can read it, and hopefully with a little better attention than I initially gave your presentation. I hope to have the pleasure of studying your future posts of the like.
Congratulations.