A couple of days ago (January 11, apparently shortly after midnight) on Watts Up With That, Christopher Monckton published a piece that ran under the headline “The Final Nail in the Coffin Of ‘Renewable’ Energy.” The piece contained a short and apparently elegant mathematical proof — which Monckton attributes to a guy named Douglas Pollock — of a proposition that Monckton stated as follows:
In plain English, the maximum possible fraction of total grid generation contributable by unreliables turns out to be equal to the average fraction of the nameplate capacity of those reliables [sic — should be “unreliables”?] that is realistically achievable under real-world conditions.
Monckton (and Pollock) thus seem to be saying that if (for example) a wind turbine system can only generate about 35% of nameplate capacity under realistic real-world conditions, then it's futile to build any more wind turbines once wind reaches 35% of grid output, because that 35% penetration is a mathematical limit that cannot be exceeded.
My immediate reaction was that that couldn’t possibly be right. I was planning to write a comment over there pointing out what I thought was the flaw. But before I got around to it there were 300 or so comments on the post, which had to a sad degree degenerated into a name-calling match between Monckton and some adversaries. So rather than writing a long comment there that would then be buried at the bottom of all of that, I decided to write a post here, which may or may not then get cross-posted at WUWT (it’s up to them).
This matter illustrates why, when I dabble in math in my posts, I try to stick to simple arithmetic. Not that mathematical proofs aren’t fun — I have done more than a few in my day — but it’s very easy to make an implicit assumption that you don’t recognize and end up with a result that does not really support the conclusion you think it does.
First, here’s what I think is the flaw: Monckton/Pollock, perhaps without realizing it (or maybe, because they think it’s too ridiculous to even consider), have assumed that there would be no “overbuilding” of intermittent generation capacity. By overbuilding, I mean building so many generators that when the wind and sun are at full strength the system produces more electricity than the demand, which electricity then has to be discarded or wasted. (You often see the term “curtailed.”)
But unfortunately, overbuilding is very much on the table as a way to get more wind and solar input into the system and, supposedly, reduce the use of fossil fuels. As an example, in a post on July 30, 2022 I compiled some statistics for the country of Germany that had been put out by the U.S. Energy Information Administration for the year 2020. According to that data, in 2020 Germany had average electricity usage of about 57 GW, and peak usage of about 100 GW. But it had wind turbines with "nameplate capacity" of 62 GW and solar panels with nameplate capacity of about 54 GW, for a total between the two of 116 GW. So when the wind and sun are both producing at full strength and usage is average, Germany has more than twice the electricity it needs just from the wind and sun even if everything else is turned off. They need to "curtail," or alternatively, as I understand it, wholesale power prices go negative and they have to pay Poland to take the excess power off their hands. And yet, in the effort to get to the imaginary "all renewable" future, Germany continues to build more wind turbines and solar panels. So, as ridiculous as it may seem, overbuilding is actually occurring in the real world, and more of it is coming.
According to the German Umwelt Bundesamt (Federal Environmental Agency), Germany got 41% of its electricity from renewables in 2021. That well exceeds the “average fraction of nameplate capacity” of the wind and solar generators that is “realistically achievable” (however that may be defined), which is around 30% averaged between the two of them. The difference, I believe, is a result of the overbuilding. Thus the case of Germany demonstrates that overbuilding can, and in the real world does, lead to exceeding what Monckton calls the “Pollock limit.”
To illustrate how this works, let me introduce some math. However, in accordance with my practice, I will avoid fancy proofs and stick to simple arithmetic.
Consider an electricity system with a constant 1 GW demand, to be supplied to the extent possible with wind turbines. Assume that the wind turbines operate at 50% of nameplate capacity averaged over the course of the year. In this location, it turns out that the weather is such that the wind blows at full strength 25% of the time, half strength 50% of the time, and not at all the remaining 25% of the time. You build 1 GW of nameplate capacity of wind turbines to exactly match demand when the wind is at full strength. Over the course of the year, you get from the wind turbines all of your demanded electricity 25% of the time, half of it 50% of the time, and none the remaining 25% of the time, which as stated comes to an average of 50% over the course of the year. The penetration of wind power on the grid at 50% is equal to the capacity factor of the wind turbines at 50%, and thus is exactly at the “Pollock limit.”
Can you get more than the 50% grid penetration from wind production, even though the turbines only produce at 50% of nameplate capacity? Yes — by overbuilding. You can double the amount of wind turbines. Then, in the 25% of the year when the wind blows at full strength, you will get double the electricity you need, and will have to discard or “curtail” half of the production. In the 50% of the time when the wind blows at half strength, you will get exactly the amount of electricity you need. And in the remaining 25% of the time when the wind does not blow at all, you get nothing. Averaged over the course of the year, even though the wind turbines only operate at an average of 50% of capacity, you have gotten 75% of your electricity from the wind system, at the cost of doubling the size of the system and throwing away 25% of the electricity produced. And you still have no electricity 25% of the time.
But suppose you want to get nearer to all of your electricity from the wind. Too bad — you have maxed out. If you again double the amount of wind turbines, you will have 4x the amount of electricity you need during the 25% of the time when the wind is at full strength, 2x the amount you need when it is at half strength, and still nothing when the wind is calm. In other words, under these assumptions, with a 2x overbuild, you have maxed out your percent of electricity from wind at 75%. The 75% is well more than the 50% “Pollock limit” of these assumptions, and happens to correspond exactly to the amount of time when there is any usable wind at all. In simple terms, overbuilding can get you above the “Pollock limit,” but no amount of overbuilding can solve the problem of complete calms. For solar, the same principle applies to nights.
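The arithmetic of the example above is easy to check mechanically. Here is a short Python sketch (my own illustration, using the three-state wind regime and constant 1 GW demand assumed in the example):

```python
# Three-state wind regime from the example: (probability, fraction of nameplate output).
REGIME = [(0.25, 1.0),   # full-strength wind 25% of the time
          (0.50, 0.5),   # half-strength wind 50% of the time
          (0.25, 0.0)]   # dead calm the remaining 25% of the time

def penetration(overbuild):
    """Share of a constant demand (normalized to 1) met by wind, with
    nameplate capacity equal to overbuild times demand. Output above
    demand is curtailed, hence the min()."""
    return sum(p * min(1.0, overbuild * frac) for p, frac in REGIME)

for x in (1, 2, 4):
    print(f"{x}x build-out: {penetration(x):.0%} of demand met by wind")
# 1x gives 50% (the capacity factor, i.e. the "Pollock limit"); 2x gives 75%;
# 4x still gives 75%, because no overbuild covers the dead-calm hours.
```

Doubling the build-out raises penetration from 50% to 75%, and further doubling gains nothing, matching the reasoning in the text.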
So how do you determine what is the maximum percentage limit of wind generation on a grid when overbuilding is allowed? Think about this a little, and maybe do a couple of more simple examples in your head, and you will realize that the following is true: with overbuilding and curtailment allowed without limit, the maximum penetration of renewables on a grid is 1 minus the percent of time when the wind does not blow and/or sun does not shine sufficiently to generate any electricity at all. Where all the renewables are wind, if the wind is completely calm (or at least so light that the wind turbines don't turn) 10% of the time, then the maximum penetration of wind onto the grid is 90% (i.e., 1 minus 10%). As long as there is even slight generation from the wind, a theoretical massive overbuild could turn that into enough supply to meet demand. Suppose that another 10% of the time the wind only blows sufficiently to generate 1% of nameplate capacity (with all other times generating higher percentages). Then you can still get to the 90% theoretical limit with a 100x overbuild. Even if the wind generates only 0.1% of nameplate capacity for a substantial amount of time, you can still hit the 90% theoretical maximum with a 1000x overbuild. But you can never cover that last 10% when the wind is completely calm, because any number, no matter how large, times zero, equals zero.
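The "1 minus the calm time" rule can also be demonstrated with a few lines of Python. The weather regime here is my own hypothetical: dead calm 10% of the time, a feeble 1% of nameplate another 10% of the time, and full output the remaining 80%:

```python
def penetration(regime, overbuild):
    """Fraction of a constant demand (normalized to 1) met by wind.

    regime: list of (probability, fraction-of-nameplate-output) weather states.
    Output above demand is curtailed, hence the min()."""
    return sum(p * min(1.0, overbuild * frac) for p, frac in regime)

# Hypothetical regime: calm 10% of the time, 1% of nameplate 10% of the
# time, full output the remaining 80% of the time.
regime = [(0.10, 0.00), (0.10, 0.01), (0.80, 1.00)]

print(f"{penetration(regime, 1):.1%}")    # a 1x build-out falls short
print(f"{penetration(regime, 100):.1%}")  # a 100x overbuild turns the feeble
# 1%-output hours into full coverage, reaching the 90% ceiling; but no
# multiple of zero output covers the dead-calm 10% of hours.
```

However large the overbuild factor, the result never exceeds 1 minus the probability of the zero-output state.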
So that’s my contribution to the math of this matter. Now a few thoughts on what has occurred over at WUWT. I’m going to quote in full the words in which Monckton states the Pollock proof, highlighting where I think the flaw lies:
Let H be the mean hourly demand met by a given electricity grid, in MWh/h. Let R be the average fraction of nameplate capacity actually generated by renewables – their mean capacity factor. Then the minimum installed nameplate capacity C of renewables that would be required to meet the hourly demand H is equal to H/R.
It follows that the minimum installed nameplate capacity N < C of renewables required to generate the fraction f of total grid generation actually contributed by renewables – the renewables fraction – is equal to fC, which is also fH/R ex ante.
Now here comes the magic. The renewables fraction f, of course, reaches its maximum fmax where hourly demand H is equal to N. In that event, N is equal to H ex hypothesi and also to fmax H/R ex ante, whereupon H is equal to fmax H/R.
Since dividing both sides by H shows fmax/R is equal to 1, fmax is necessarily equal to R.
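For clarity, here is my own restatement of that argument in conventional notation. The third line is my reconstruction of where, as I read it, the no-overbuild assumption enters:

```latex
\begin{aligned}
C &= H/R        && \text{nameplate whose average output } RC \text{ just meets demand } H \\
N &= fC = fH/R  && \text{nameplate supplying the renewables fraction } f \text{ of } H \\
N &\le H        && \text{the implicit no-overbuild constraint} \\
N &= H \;\Rightarrow\; f_{\max}H/R = H \;\Rightarrow\; f_{\max} = R
\end{aligned}
```

Relax the third line, that is, allow N > H with curtailment of the excess, and the conclusion that the maximum renewables fraction equals the capacity factor no longer follows.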
I think that in the bolded phrase “the minimum installed nameplate capacity N < C of renewables” Monckton has assumed that no overbuilding is allowed. It is far from 100% clear, and I have difficulty parsing the sentence. I would agree that if no overbuilding is allowed then the conclusion follows that the maximum possible fraction of grid penetration by the renewables would equal the average fraction of nameplate capacity at which the renewables produce averaged over the year. The maximum would occur, at least as one example, when average electricity demand is constant over the year and the nameplate capacity of the renewables was equal to that constant level of demand. If, on the other hand, demand fluctuated over the year, then there would be times of peak demand when even full nameplate capacity of production could not fulfill demand, and therefore the level of grid penetration by the renewables would fall below their average capacity factor.
Unfortunately, Monckton did not state in his conclusion (quoted in italics way back at the beginning of this post) that the conclusion only applied in a case where there was an assumption of no overbuilding. Several commenters at WUWT (e.g., chadb, Joe Born, “it does not add up”) weighed in to provide examples from places like Texas and the UK where grid penetration could go above Monckton’s “Pollock limit” with overbuilding. Instead of simply recognizing that a small modification to his conclusion was in order, Monckton then launched into a sad round of name-calling. For example, from a comment time-stamped January 11 at 2:40 PM, replying to Joe Born, Monckton calls Born “incompetent,” “idiotic,” “stupid,” a “nitwit,” says he used a “half-witted word salad,” and should “get his kindergarten mistress to read to him.”
Over the course of many, many comments replying to others, I think that Monckton ultimately concedes that his result only applies to a situation where overbuilding is not allowed. He calls such overbuilding “wasteful” and “foolish,” with which I would certainly agree. However, many governments are currently heading down that path. Germany is already there, and proceeding farther and farther by the day. The UK is already there as well, or at least very close. California and New York are not far behind. So I don’t think that we can just dismiss overbuilding cases as so foolish that no one would ever do it.
For any readers who are interested in a deep dive on this subject, I highly recommend Ken Gregory’s definitive August 2022 study titled “The Cost of Net Zero Electrification of the U.S.A.” Gregory explicitly considers paths to “net zero” in the face of the random intermittency of the renewables. Options considered by Gregory include batteries, overbuilding, and carbon capture and storage. For what it’s worth, Gregory finds that within certain ranges and at certain assumed prices, overbuilding is a superior alternative to batteries for increasing grid penetration of renewables — which is not saying a lot, but is saying that overbuilding, while it may be insane, is less insane than other options that seemingly everyone is talking about as if they make sense.
I should say that I have reviewed Mr. Gregory’s study extensively, and I have not found a flaw. That does not mean that there aren’t any. The same goes for my own simple arithmetic above in this post, which could contain flaws as well. If any reader discovers any such, I encourage you to point them out, and I hope that I will accept the criticism with a good spirit, and make any corrections that are appropriate.
Meanwhile, I have long followed Lord Monckton’s work, and respected much of it, and I am saddened to see him go somewhat over the edge on this one. To the extent that my comments here may appear critical, they are offered in the spirit of trying to get the right answer, and hopefully of friendship and cooperation.
As someone said: "At midnight local time, it's a very long distance to where the sun is shining."
Also, the sun ceases to produce useful energy long before evening peak usage time.
I remember PG&E’s chief engineer making that point ca 1976 when showing a typical daily demand curve for PG&E. Cal PUC and Cal ISO have been pushing for west facing solar panels to get more solar power in late afternoon.
The economics may not be so good. The installation of one- or two-axis panels that follow the sun across the sky is rarely justified, especially as they need to be spaced out to avoid shading each other. Sunny deserts only.
Good thing there aren't any sunny deserts in CA then… The day may come when environmentalists start looking at the damage massive panel installations can do in the desert.
Inside the Arctic Circle the sun gets to shine continuously in summer. In midwinter you have to travel south to the Arctic Circle to see the sun at all at midday.
The elliptical nature of the earth's orbit contributes to some unevenness in the time of the middle of the night. At the winter solstice my local sunset was 4 p.m. while sunrise was 8:20 a.m. The time of sunrise barely moved, while sunset got as late as 4:20 p.m.
Like most things climate-change related, it's the starting premise that causes the problems, e.g. whether to overbuild or not.
Like defining emissions of the colourless and odourless gas CO2 as dangerous pollution.
So many other things could be done to better the next 100 years than reducing emissions and constraining the energy mix.
Overbuilding may solve the ‘Pollock Limit’. BUT it doesn’t solve the three fundamental renewable problems:
I disagree with Point 3. As long as they provide power, it can be regulated electronically to help stabilize the grid. In principle, at least. I don’t know of any commercial products available now.
Look at this for some consideration. Not to say it is reasonable but perhaps it is doable.
But will it be able to do a black start after all the grid goes down?
Without help from synchronous sources from elsewhere?
If not, it is flirting with danger that could lead to deaths.
As in premeditated murder, which has severe criminal penalties.
National Grid have been experimenting.
Last year, I think it was, a lightning strike disrupted a section of the UK grid mostly supplied by off-shore turbines. The drop in frequency was noted by the control systems and the wind turbines shut down automatically.
Grid management always have a similar capacity fossil fuel station on spinning reserve for this reason and that re-connected automatically as well.
The next thing to happen was a continued drop in frequency because the spinning reserve was incapable of managing the load. The whole section of the grid shut down. Quite a sizeable area of the North East of the UK.
Several reasons emerged to explain this. The turbines have less spinning mass than a similar steam turbine/alternator, so have less ride-through capacity.
Another reason was that grid management could only cater for the generation they knew about. There were many (too many, apparently) solar panels and small wind turbine arrays generating into the system that they had not previously known of.
The whole event took about a couple of seconds to evolve.
It was August 2019. Read about it here
The basic timeline
Overbuilding does not solve the Pollock limit: it breaches the Pollock limit, whereupon one must either have cripplingly expensive battery backup or waste some or all of the additional capacity, either by very costly capacity payments or by the wasted-asset cost of disconnect orders. The Pollock limit simply tells the grid authority when those additional costs are likely to be incurred, so that they can decide whether they want to inflict them upon customers.
Is there anything at all that makes you believe their care in the least?
I agree about not only the increased costs of overbuilding but also the wasted energy from overproduction. The other problem is the cost of conventional backup, which is only occasionally used and thus even more costly and inefficient. And let's be frank about it: Germany, with its massive overbuilding investments in renewables, has hardly been able to cut its emissions at all. So: all costs and no gain. The same in the UK,
that’s sadly burned off square miles of Canadian forest …
Vieirae is right on all points. The virtue of the Pollock limit is to reveal to all concerned the limit on the useful penetration of a weather-dependent renewable species in a given national grid. As it becomes better known, there will be more debate about the absurd proposals (in the UK, for instance) vastly to expand both onshore and offshore wind generation.
The Pollock limit is not just a snapshot limit. It reflects long term impacts as well. More overbuild means more replacement costs over time – if that can be afforded. If it can’t be afforded then you will see a fallback toward the Pollock limit.
More overbuild means less demand for coal, oil, and gas, which drives prices for those commodities up, increasing backup costs, meaning less and less of it available, which then leads to more and more blackouts from lack of wind and sun. A direct result of trying to breach the Pollock limit.
Once you reach that Pollock limit you simply can't escape from the long-term consequences of trying to exceed it. It isn't just a physical limit to be argued over but also an economic limit which will be reached sooner or later.
In practice what happens is that increased renewables drive off baseload forms of generation because of the need for rapid ramping infill generation. In the UK wind has driven out coal and nuclear, but not gas. It has also led to rising dependence on the flexibility of interconnected grids.
Rud Istvan is right on all three points. On the third point, the UK, now well above the national Pollock limits for both wind and solar, is now experiencing such difficulties in stabilizing the grid that the stabilization cost was $1.2 billion last year.
10% grid penetration? How old are your numbers?
A useful refinement of the Pollock limit. In the end the conclusion is much the same. One can derive some improvements from exceeding that limit, but soon the costs outweigh the benefit and ultimately there is complete failure.
Best to regard the Pollock limit as the point at which returns begin to diminish rather than presenting it as an absolute limit.
from the beginning.
Mr Wilde is, of course, correct. My original article made it quite clear, twice, that if one generated above the Pollock limit one would need battery backup. I did not need to add that without cripplingly expensive battery backup one would need either capacity-constraint payments or the wasted-asset costs of disconnect orders (EIRgrid uses those a great deal).
I would argue that the real threshold is the limit set by fast-responding generation (e.g. hydro). When that is exceeded, any sort of uncontrolled intermittent generation will need some form of short-term energy storage to prevent grid destabilization. The failure mode for overloading a transmission line is the endpoints pulling out of synchronism with each other.
An excellent point. As the penetration of weather-dependent unreliables, including hydro, increases, the costs of attempting to stabilize the grid rise very rapidly: $1.2 billion in the UK alone last year.
The limits of hydro as a solution have been discussed most elegantly by Roger Andrews here
The practicalities for the Norwegians have been that the interconnectors open them up to the problems of capacity shortage at the other end of the line, and in particular the importation of high shortage pricing. They are now minded to shelve further interconnection.
Based on a second- or third-hand comment, the hydro capacity to handle swings in wind generation was maxed out in the US by 2009. IMHO, handling the short-term variations of renewable generation should be handled by the providers and not the customers.
Battery backup is always a very expensive way to go. The economic use of batteries (or even much less costly pumped storage) is in providing grid stabilisation services for second to second and minute to minute fluctuations in the frequency, and for small amounts of diurnal arbitrage between low demand overnight or as the result of excess solar and peak rush hour demand.
Storage turnover must be frequent to be economic, which precludes providing coverage for periods of Dunkelflaute, much less interseasonal requirements.
Otherwise it is cheaper to spill or curtail, and provide backup from flexible dispatchable generation such as hydro if available or gas. Less flexible generation such as coal was used historically, relying on a combination of low cost and scheduling to provide longer runs for individual units interspersed with longer turndowns and seasonal shutdowns.
Of course, unless the underlying renewables generation is genuinely very cheap it may make no economic sense at all. The evidence we have from the UK grid is that wind has been remunerated at above market average price, at least until governments contrived to make gas expensive by limiting its production.
From the very beginning, costs of unreliables outweigh the benefit.
It rather seems, at least in general, that “the point at which returns begin to diminish” is where the contract is signed.
I’ll repeat what I said in the previous article.
It has been known for decades that overbuilding makes renewable energy very, very expensive. But it is clear that we can get about 40% of our energy from renewables and we should not discard that option. Europe is tremendously deficient in energy. Renewable energy is generated locally, while fossil fuels are for most European countries almost totally imported. Much of it is from unreliable countries.
In Australia the body AEMO invents high penetration scenarios then implements them because there is little opposition to them.
AEMO clearly say that they limit their plans to fit the policies of net zero by 2050 and the changes set by the Paris Agreement. Searches of some of their many-page reports find no occurrences of the terms CO2 or fossil fuel.
Therefore, no official body exists to question AEMO plans for 75% penetration. No official economic studies, including fossil-fuel costs and the extra costs of enforced non-optimum operation, seem to exist.
Christopher, emphasis on the Pollock Limit is most valuable as a marker that rational people can ask about when and why it is exceeded.
Javier, of course these matters have been known for decades, but by people working in the field. They are not known by more than a couple of our politicians, most of whom, without understanding, rubber-stamp reports from the zealous at AEMO, with little idea of the economic and social damage.
In a better world, responsible engineers at AEMO should be pressing alarm buttons. There used to be ways to take the badges of recalcitrant engineers.
We had a good chewing over the work of Blakers for AEMO on Australia here
It turned out that he made lots of assumptions (some of which he might not even have been aware of) that were unrealistic and led to gross underestimation of the effects of intermittency, in addition to downplaying the costs of storage.
“…Therefore, no official body exists to question AEMO plans for 75% penetration. No official economic studies including fossil fuel costs and extra costs from enforced non-optimum operation, seem to exist….”
These folk (link below) have been trying to alert AEMO to their flawed reports since 2017. AEMO know their scenario assumptions are flawed, but being a government agency they have no courage to go against the government NZE narrative!
Electricity System Model (modelling.energy)
Thank you, sincerely, for this comment about Red Vector etc and the many publications of which I was unaware.
It is too easy for retired observers like me, on the sidelines, to miss important work.
Your maths don't indicate the reality of what is happening. It is not just in calm periods that wind turbines don't produce electrical power; in high winds and severe storms they don't either, and those are the conditions that cause the damage to gearboxes, bearings and blades. It is hard to get real data on these breakdowns, as operators try to hide it, but it seems all suffer these problems inside ten years, and many in even five years of operation.
You would rarely see all turbines spinning in most wind farms so this makes the name plate argument a bit theoretical. If you can’t use the electrical power and can’t transmit it to another country it has to be disconnected to avoid over frequency and over voltage issues.
It would not be known what percentage of solar cells could be inoperative or damaged or underperforming in solar farms again making the name plate percentage something that is not realistic.
So, regardless of the maths, intermittency for various reasons makes them unable to reliably run a power grid.
Another point few seem to appreciate is that from the moment you place the blades up on the tower and they start to turn, they will inevitably become unbalanced due to impact from ice, flying debris, rain pitting, lightning strikes, bird droppings etc. Aircraft propellers have to be checked and rebalanced every 2000 hours, helicopters rebalanced in pairs, and turbine blades in jet aircraft and power stations have to be similarly treated.
There is no worse place to put these things than up on an open ridge, where the lightning from thunderstorms and severe winds under downbursts can strike them.
In addition, the wind does not blow with equal intensity and frequency everywhere on the planet. I expect there will be some mathematical reduction in the rated capacity of each succeeding wind tower array deployed on successively less suitable locations for wind.
Kenji, whilst that is true, some places, such as around the equator, have less wind strength much of the time, whilst all parts of the globe can get severe winds regardless of location, because the pressure patterns determine the weather.
My understanding is that nameplate is exactly what the name implies: the maximum rated output under a set standard, stamped on the device. Whether some of the turbines aren't spinning or some of the solar panels are inoperative isn't relevant. Not delivering to the grid for any reason simply reduces the capacity factor of the "farm"/unit, which is typically determined and reported on an annual basis. I read about different definitions, but I suspect that is just part of the intentional deception that is such a large part of the W&S hoax.
Maybe Monckton's response, and those of some other regular bloggers on WUWT, show varying degrees of this condition: "Atychiphobia is the fear of being wrong, or rather the fear of being told we're wrong." Also, when an article's author presumes to give an untainted summary of comments (I'm thinking of a previous author I have crossed statistical swords with), WUWT needs an independent panel of experts (like peer review) to summarise the pros and cons, considering only serious comments (i.e. excluding personal attacks and "smart-arse" comments). Therefore I appreciate your approach of being willing to be corrected on finer points or even major flaws, as we all should be. After all, the search for the best outcomes should be a "team sport".
Menton is wrong and Monckton right. Monckton had made it explicit, twice, in the original article that if one exceeded the Pollock limit one would need battery backup. Menton, like one or two commenters on my original article, chose to cite a single sentence taken from my original post, then to restate it in terms explicitly contradicted twice in the original posting, and then to say he thought I was wrong. What neither he nor his publisher did, before calling me names, was to get in touch with me for clarification. That was a failure no less of common sense than of courtesy.
Ercot does it already without battery backup.
I should add that colleague review and, despite its flaws, peer review have the benefit that flaws or unrecognised caveats get picked up before one presents one's pet theories to the world. That can save much embarrassment and reduce symptoms of atychiphobia.
It seems not to actually work that well, at least in some fields such as politics, sociology, economics, and climate.
Does the pseudonymous “steveshowmethedata” find anything wrong in Mr Pollock’s result? If so, what? If not, what is all this waffle about atychiphobia? Is he, perhaps, atychiphobic?
I was commenting on the name-calling that Menton documented, which is all too common on WUWT. My experience in peer-reviewed journals is that, in letters to the editors or invited published comments, authors are required to professionally and politely defend against, but also accede to when justified, commentators' pointing out of flaws or unacknowledged caveats, and to thank them for their contribution. We all suffer from atychiphobia; it's part of the human condition, but we must fight its impulses and not "play the man instead of the ball" or "shoot the messenger", to mix a couple of metaphors. Also, a key point on article construction that an editor would have picked up, if doing their job, is that you did not put the crucial caveat about batteries/overbuild up front in your take-home message, where it should have been, as Menton rightly pointed out.
Excellent, very clear and I think completely correct. I too found Christopher’s piece difficult to follow, and the clarification and examples of the results of increasing the percentage of intermittent generation in a grid make it very clear what the limitations are.
On overbuilding, Francis is correct to cite the UK. The UK has demand of somewhere between 28 and 45 GW. It has an installed base of 25 GW wind and 15 GW solar.
I haven’t done the math on it, though anyone inclined to could get the basic data required from here:
My impression is that it works exactly as Francis suggests. The installed base of 40 GW of renewables never falls to zero, but in late November it was pretty close. It's the lack of wind and sun that imposes the limit on what proportion of the demand can be generated by renewables. In late November there were periods when wind generated less than 1 GW, and solar generates nothing after about 3 p.m.
The constraint payments are quite high, and seem to work as Francis suggests:
In June 2022, according to another source, they were £332 million for that month alone:
What is happening is exactly what Francis describes: wind and solar together with other sources produce more than the grid can handle. And this is because the UK already has overbuilt, if you add together all the sources of generation, nuclear, CCGT, wind, solar – and the bits of coal and biomass.
This must indeed cause a limit to the percentage you can deliver from intermittent sources, and seeking to raise that limit by overbuilding must indeed be subject to diminishing returns as Francis argues.
The interesting thing going forward is the plan to raise demand – it will at least double, with EVs and heat pumps. And at the same time to move mainly to wind and solar for generation.
They are going to run bang into Francis’ points. One, that huge overbuilding will be required to come anywhere near their objectives for the percent of supply from wind and solar. Two, that no matter how much you overbuild, when there is a blocking high between November and February, overbuilding will not save you. Three, that in the summer months the corollary of overbuilding will be that the more you overbuild to raise the percent of renewables on average, the higher and more unaffordable the constraint payments will be.
It is truly crazy. The policy is essentially to build far more generating capacity than you can use, and then pay the operators not to generate, while at the same time having to build a parallel network to cover the times when, despite generating far more power on average than you can use, the renewables generate close to nothing for lengthy periods.
Their answer to overproduction and constraint is even crazier: it's to generate hydrogen using electrolysis….
So long as ruinables generators are being paid for curtailment, overbuilding will always deliver an economical return to the generator … the public, not so much.
That becomes unaffordable as wind penetration increases.
And an economical return to the investment banker who provided a low-interest loan on the basis that those curtailment payments will continue for the length of the power purchase agreement. Those curtailment payments are going to be around for more than the next 20 years. Not good.
I am sorry that Michel found my original article difficult to follow. I had thought the equations were elementary. And I had twice explicitly stated that if one generated more than the Pollock limit then one would need battery backup, which is cripplingly expensive. Menton chose to ignore both those statements and instead to take a single sentence out of context, paraphrase it in a manner explicitly contradicted by those two statements, and then say he thought I was wrong.
I have submitted a follow-up piece containing ploddingly careful definitions of the terms in the equations. But those who are anxious to find fault where none exists will no doubt continue to be tendentious.
Defining each term on a separate line, rather than running them together, one after the other, is always easier to follow.
I have followed that structure in my reply to Mr Menton, which will appear shortly at Manhattan Contrarian and may also appear here in due course.
I have also made the definitions a great deal more specific than in my original article, so as to avoid the sometimes wilful misunderstandings that seem to have arisen among some commenters.
A few years back I looked at the official Iowa website on electricity demand, generation, supply types, etc. At that time coal was more than 80% of generation capacity and actual wind generation was less than 10% of demand, so there were few problems.
One especially interesting bit of information was the distribution of wind capacity factor vs wind nameplate over the course of each year. The average “coldest six months’” capacity factor was fairly high, somewhere over 45%, but during the “warmest six months’” it was only 16%, giving a yearly average of, if I remember correctly, 32%.
That means that, if the expectation was for wind to replace all that coal generation, more than a 6X overbuild would be necessary in order to (supposedly) supply enough electricity during warm weather (ignoring the roughly twice that much needed to keep the imaginary storage filled). The yearly average seemed especially meaningless.
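The overbuild arithmetic here can be made explicit. A minimal sketch, using the capacity factors as recalled by the commenter (16% warm-season, 32% yearly average) rather than verified Iowa data:

```python
# Nameplate capacity needed per GW of average demand is simply the
# reciprocal of the capacity factor (storage losses ignored).

def required_overbuild(capacity_factor):
    return 1.0 / capacity_factor

print(required_overbuild(0.16))  # warm-season CF: ~6.25x nameplate needed
print(required_overbuild(0.32))  # yearly-average CF: only ~3.1x, which is
                                 # why sizing on the yearly average misleads
```

The gap between the two numbers is the commenter's point: a system sized on the annual average would be badly short in the low-capacity-factor season.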
Would the Pollock factor be the 16%, or the 32% average over a year's time?
I think the basic point which both pieces are making is correct. The thing that I found difficult in your own piece was the relationship of the algebra to the physical phenomena.
Probably the clearest way to approach this rigorously is from observed intermittency, and then calculate through to average loading. It would be rather laborious, but the BMRS site lets you download csv files for any period
The argument is quite a simple one in summary terms. It rests on the fact that the pattern of intermittency will not change with increased build. Regardless of how much you build there will still be night and still be blocking highs in winter and summer, and these will take supply from renewables close to zero, no matter how much you have installed.
So as the overbuild rises, what happens is that overproduction in the summer months will rise by a lot, some of the time. The percentage of time when renewables meet demand 100% will also rise. But the percentage of time when they totally or mostly fail won’t change all that much. You can see this in the early winter 2022-3, when renewable supply fell below 1GW, from an installed base of 40GW.
Even had the installed base been 80 or 120GW it would not have been much practical help, going from 1GW to 2 or 3 GW in late November is not much of an improvement. The argument is, overbuilding leads to diminishing returns.
It's difficult to see how to do a rigorous treatment of this, but it's what needs doing. The problem is that you can get the data on generation and supply, and this gives quantified intermittency. But in bmreports this is actually how demand was met, by fuel type, and demand varies by time of day and by season. So you have to control for that in some way to show the real impact of intermittency and what it would take to deal with it.
My non-rigorous impression is that there is no way to get there, though I don't know what the limit is. The UK currently has an installed base of wind and solar roughly equal to average demand: demand is about 40GW on average, and the installed base is 40GW. That seems to be resulting in coverage of about 30-35% of demand on average, maybe less, with about 60% coming from CCGT.
Question is, if you doubled the installed base to 80 or 120GW, and if demand stayed the same, and if the pattern of intermittency stayed the same, how much more supply would you get from wind and solar? The argument that there would be diminishing returns is very persuasive, but to make it watertight it needs to be applied to the real world data from bmreports.
I’m afraid I no longer have the time, energy or expertise to do it, but that is what needs doing.
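michel's proposed experiment can at least be sketched with a model duration curve in place of the real BMRS data. Assuming g(x) = x² for the wind fleet (an average capacity factor of about a third, roughly the figure quoted above) and a flat 40GW demand – both simplifications – the useful, non-curtailed supply is the integral of min(C·g(x), D):

```python
# Toy model of diminishing returns to overbuilding: useful supply is
# capped at demand D wherever the duration curve C*g(x) exceeds it.

def useful_supply(C, D, n=2, steps=200_000):
    h = 1.0 / steps
    return sum(min(C * ((i + 0.5) * h) ** n, D) for i in range(steps)) * h

D = 40.0  # GW, flat demand (a simplification)
for C in (40.0, 80.0, 120.0):
    u = useful_supply(C, D)
    print(f"{C:.0f} GW installed -> {u:.1f} GW average useful supply")
```

Under these assumptions, doubling installed capacity from 40 to 80GW raises useful output from about 13.3 to about 21.1GW – well short of doubling – and tripling it yields only about 24.6GW, which is exactly the diminishing-returns pattern michel describes.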
Michel nicely illustrates the difficulty in doing precise capacity-vs-demand-vs-weather calculations. One can make such calculations enormously complex. But the beauty of the Pollock limit is that it is simple. Most grid authorities have a very good idea, by now, of the actual capacity factors of each weather-dependent renewable species in their territory. Now that the Pollock limit is available to them, they will know that very considerable cost and waste will arise if they allow either wind or solar to exceed its Pollock limit, which is its known capacity factor.
Maybe what’s needed (though I have no idea how to calculate it) is an intermittency factor or parameter? If the performance varies between 50% and 10%, this will make a difference. If the duration of the low spells is longer or shorter, and the distribution has a larger or smaller standard deviation, that too will make a difference.
I think that is probably what “It doesnot add up” is doing below, though I am having trouble following the math he or she is using.
It’s been a while….
In the UK, industrial demand for electricity has fallen by 20% since 2000, but capacity has risen by over 20GW to serve this significantly lower demand. That is the effect of unreliables, and it can only get worse with 40GW of offshore wind in the pipeline.
Should have added: the more unreliables, the greater the costs of generation for everyone.
I have already done those sums. They raise serious questions about the viability of storage solutions, partly because the size of surpluses is hugely variable, and therefore it is never going to be economic to use them all. When surpluses are small they are also infrequent, which implies storage utilisation is low. The overall surpluses are the areas under the duration curves.
I’m sure you have the data to do a better job, but I calculated rough trade-offs based on ERCOT data in a Naptown Numbers piece.
It is instructive to consider some basic mathematics. Let g(x) be a capacity factor duration function, defining the proportion of capacity that is produced less than x proportion of the time. It is analytically convenient to consider functions g(x) of the form g(x)=x^n. These have the property that integral g(x) from 0 to 1 is 1/(n+1), and is the average capacity factor. Thus for n=1, g(x)=x, and average capacity factor is 1/2, or 50%; for n=2 g(x)=x^2, and average capacity factor is 1/3 or ~33%, etc. They are actually quite good approximations to real world duration curves for fleets of wind farms, although they need scaling by a fudge factor that reflects that wind is never consistent enough to achieve full capacity across a fleet of farms, but tops out at around 80% of capacity – see the recent wind record in the UK at 20.918GW plus some curtailment cf capacity of 28.5GW.
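As a quick sanity check (my own sketch, not part of the comment), the claim that g(x) = x^n gives an average capacity factor of 1/(n+1) can be verified numerically:

```python
# Midpoint-rule integration of the assumed duration function g(x) = x**n
# over [0, 1]; the result should approach 1/(n + 1).

def avg_capacity_factor(n, steps=100_000):
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** n for i in range(steps)) * h

for n in (1, 2, 3):
    print(n, round(avg_capacity_factor(n), 4))  # -> 0.5, 0.3333, 0.25
```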
For a capacity of C, the duration curve is simply Cg(x). For demand we should also properly consider duration curves, recognising the several hours of low demand overnight, rising towards a plateau during much of the day before the early evening peak, subsiding back again to overnight levels, amplified by the weekday/weekend and seasonal factors. For simplicity I will look at convolution with baseload and peak levels of demand, respectively B and P. If the proportion of hours that are baseload is α the average demand is αB + (1-α)P. A fuller analysis looks at a full convolution integral with the different levels of demand. We shall assume that there is no correlation between wind output and time of day which is reasonable for the UK, but not true in e.g. Australia where it tends to be windier at night – or seasonally (whereas in the UK it tends to be windier on average in winter). B and P can also be thought of as low season and high season demand rather than merely diurnal fluctuations.
Following Pollock’s principles we have that all curtailment is avoided when C<=B. For C>B we expect curtailment. The expected rate of curtailment is Cg(x)-B over the interval between x:g(x)=B/C (or x=(B/C)^(1/n) ) and x=1, and the total curtailment is the integral of Cg(x)-B over that interval. Integrating the function we get
which evaluates to C-B at the upper limit and C(B/C)^((n+1)/n)/(n+1)-B(B/C)^(1/n) at the lower limit, and this level of curtailment applies the proportion α of the time. While C<=P that is the source of curtailment. For C>P we have a similar analysis applying to the peak hours, applying for (1-α) as a proportion of the time. The analysis can be extended to more levels of demand in an obvious manner, and in the limit as a convolution.
For n=1: we get curtailment of
α(C-B)(1-B/C)/2 for C>B, plus a further (1-α)(C-P)(1-P/C)/2 for C>P
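A numerical check (mine, not the commenter's) of the n = 1 curtailment expression against direct integration of max(Cx − B, 0) over [0, 1]:

```python
def curtailment_numeric(C, B, steps=200_000):
    # Midpoint-rule integration of the curtailed output max(C*x - B, 0).
    h = 1.0 / steps
    return sum(max(C * (i + 0.5) * h - B, 0.0) for i in range(steps)) * h

def curtailment_analytic(C, B):
    # The closed form quoted above (per unit of the time-fraction alpha).
    return (C - B) * (1 - B / C) / 2

for C, B in [(1.5, 1.0), (2.0, 1.0), (3.0, 2.0)]:
    print(C, B, round(curtailment_analytic(C, B), 4),
          round(curtailment_numeric(C, B), 4))
```

The two agree to the precision of the integration, which supports the closed form; the P term is checked the same way with P in place of B.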
Marginal curtailment for an increase in capacity is found by differentiating with respect to C.
α(1-(B^2/C^2))/2, plus a further (1-α)(1-(P^2/C^2))/2 for C>P.
Marginal production is ½, and the useful production U is ½ less marginal curtailment. The effective cost of marginal capacity is then the LCOE multiplied by 1/(2U) to allow for the wastage. Capacity additions are asymptotically useless as B/C and P/C tend towards 0 with increasing C. However, for values of C only slightly above B, marginal curtailment is small and occurs only in baseload hours.

If we set C equal to average demand – so that total generation by wind is C/2, equal to the average capacity factor multiplied by average demand, the Pollock criterion – we find that B<C<P, so we are only concerned with curtailment during baseload hours, but that curtailment is positive. There are no obvious grounds for stating that it is optimal.

However, we have the basis for a cost curve relative to levels of grid penetration (though we should add in the other costs caused elsewhere in the system to make a proper tally: by the time there is sufficient capacity to cause curtailment there are already plenty of costs for extra transmission and grid stabilisation, and costs imposed on other generators, as I have outlined already). An optimised grid would see the marginal long-run costs of different kinds of generation equal, if you can define those long-run costs. In practice, the optimisation would need to be risk-based, reflecting probabilities for many different parameters.
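The marginal-curtailment derivative can likewise be checked by finite differences. A small sketch of my own for the baseload term with n = 1:

```python
def curtailment(C, B):
    # Total curtailment for n = 1 (baseload term, per unit of alpha).
    return (C - B) * (1 - B / C) / 2

def marginal_curtailment(C, B):
    # The quoted derivative of curtailment with respect to capacity C.
    return (1 - B ** 2 / C ** 2) / 2

C, B, eps = 2.0, 1.0, 1e-6
finite_diff = (curtailment(C + eps, B) - curtailment(C - eps, B)) / (2 * eps)
print(round(finite_diff, 6), round(marginal_curtailment(C, B), 6))  # both 0.375
```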
We can replace B in our formulae by a demand function D(t) where 0<t<1 (year), and integrate over t to provide a convolution.
Higher values of n (including fractional values) produce more complicated expressions for curtailment, but for given C, curtailment occurs less of the time, and there is more headroom to increase capacity at little curtailment penalty. In practice, empirical capacity factor duration functions may have an interval of low to zero output and never attain a 100% capacity factor at the fleet level. Modifying the form of g(x) accordingly may provide more realistic answers, but the underlying analysis gives a feel for how the variables interact to influence the tolerable level of wind generation. Comparison with empirically determined surplus duration curves such as these
goes some way to validating the approach. In turn, they highlight the problems of trying to create an economically rational storage system as an alternative to dispatchable backup. That is perhaps the real issue here, especially when combined with the economics of storage turnover.
Correction: where I wrote "which evaluates to C-B at the upper limit" above, it should read "which evaluates to C/(n+1)-B at the upper limit".
This chart summarises the end result for the 50% capacity factor case (n=1, g(x)=x) under different demand scenarios. Demand is assumed to be 50% Baseload and 50% Peak, at a given multiple of baseload.
It is clear that approaching 60% average penetration the costs start escalating rapidly. If we considered a case where g(x)= 0.5x, which would imply an average capacity factor of 25%, the basic shape of these curves would not alter, but we would require double the capacity (i.e. the x-axis would be multiplied by 2).
That’s a very useful calculation, which nicely illustrates how the cost increases as the Pollock limit is breached.
Illustrative chart for g(x)=x^2 (implying an average capacity factor of 1/3rd) with demand at 70% of wind capacity, shown by the yellow and blue shaded generation sources.
The x-axis can be thought of as the probability that the wind output is less than the corresponding y-axis value, or for a constant demand, the proportion of time that the wind output is less than the y-axis value. The blue area then represents demand met by wind, the yellow area is the balance of demand met from other sources, and the red area represents the surplus generation. At levels of demand >= wind capacity all the wind is used with no curtailment – the yellow area expands and the red area shrinks and disappears as demand is increased. At lower levels of demand the red area increases and the blue area decreases. The ratio of the red area to the area under the parabola (the sum of the red and blue areas, equal to a third of nominal wind capacity) gives the proportion of surplus generation relative to total wind generation potential.
The point where the demand and generation curves intersect is where generation is 70% of capacity, at sqrt(70%) on the x-axis. If we call the level of demand D (for demand less than wind capacity C), then the red area is given by C/3 - D + (2/3)D^(3/2)/sqrt(C), which compares with (C-D)(1-D/C)/2 for the case of g(x)=x. At low levels of curtailment (D->C) there is little difference between the two.
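The comparison can be checked numerically. A sketch of my own, deriving the surplus ("red") areas by integrating max(g(x) − D, 0) with capacity normalised to C = 1:

```python
def surplus(D, n, steps=200_000):
    # Midpoint-rule integration of max(x**n - D, 0) over [0, 1], C = 1.
    h = 1.0 / steps
    return sum(max(((i + 0.5) * h) ** n - D, 0.0) for i in range(steps)) * h

def surplus_n2_closed(D):
    # Closed form for g(x) = x**2 with C = 1: 1/3 - D + (2/3)*D**1.5
    return 1 / 3 - D + (2 / 3) * D ** 1.5

def surplus_n1_closed(D):
    # Closed form for g(x) = x with C = 1: (1 - D)**2 / 2
    return (1 - D) ** 2 / 2

for D in (0.5, 0.7, 0.9, 0.99):
    print(D, round(surplus_n2_closed(D), 5), round(surplus_n1_closed(D), 5))
```

Both surpluses vanish quadratically as D approaches the capacity, which is the "little difference" at low curtailment noted above.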
According to Our World in Data, the 2021 share of electricity by source was: coal 29%, wind 20%, gas 15%, nuclear 12%, solar 9%, bioenergy 8%, oil 4%, hydropower 3%.
However, wind, for instance, supplied only 9% of total primary energy consumption, and Germany's per-capita energy consumption has fallen to the same level as in 1969. It will be interesting to see where it goes from here.
As noted above averages don’t acknowledge the inherent intermittency and unreliability of wind and solar.
Correct: 20+9+8+3 = 40% renewable, but only 29% wind and solar, and seriously overbuilt. Pollock alive and well.
Germany raises price caps for 2023 onshore wind, solar tenders (renewablesnow.com)
Germany’s Federal Network Agency has increased the price caps for the auctions for onshore wind and rooftop solar energy in 2023 by the maximum 25% in a bid to revive the waning interest in tendered capacities.
The maximum price in the tenders for onshore wind energy next year is set at EUR 0.0735 (USD 0.078) per kWh and for rooftop solar systems, it is EUR 0.1125 per kWh, the Federal Network Agency (BNetzA) said on Tuesday.
When considering storage losses and charging limitations, the period defining storage requirements extends over as much as 12 weeks. For this longer period, the cost-optimal storage needs to be large enough to supply 36 TWh of electricity, which is about three times larger than the energy deficit of the scarcest two weeks.
Thanks for this post which summarises things very clearly.
I got a derisory response for pointing out that real-world data for the UK falsified the theory. Things turned ugly later. That’s the sort of behaviour usually displayed by the climateers: it’s a shame to see it on WUWT.
The real-world data for the UK do not falsify Mr Pollock’s result. If one installs renewables capacity above the Pollock limit, one must either install battery backup (to which a substantial portion of the original head posting was devoted) or make expensive capacity-constraint payments, or make disconnect orders.
Monckton, in his own words, from the original discussion:
1. The miserably low capacity factor R is in fact also the fundamental limit fmax on the contribution that unreliables can make to the grid without prohibitively expensive and logistically unachievable large-scale static-battery backup.
2. That limit is equal to their capacity factor (also sometimes known as the load factor). If one were to add more unreliables to the grid, one would be wasting surplus electricity.
3. Calculation of the capacity factor must be performed with some care for a particular grid. In the UK, Professor Gordon Hughes of Edinburgh University is the ranking expert. He reckons it is about 24% of nameplate capacity for onshore wind in the UK, and perhaps 30-35% for offshore wind.
4. In practice, then, installing significantly more unreliables capacity than the Pollock limit will be wasteful, and the more that limit is exceeded the more wasteful the outcome.
5. Beyond the Pollock limit, adding renewables to the grid is pointless, wasteful, expensive and destabilizing.
A most helpful summary from Mr Sandberg. One wonders why so many here do not want to get the point.
The difficulty I am having with this is in the first point. The capacity factor will limit the contribution to a maximum which is a simple matter of how much in total the wind installations produce.
If you have an installed parc of 25GW (which the UK seems to have) and a capacity factor of 30% then you cannot expect to get more than an average of 15GW for the whole year.
That seems clear.
But will you even get this much, in usable form? Surely it depends on a matter of fact, the behavior of the wind? This is what controls how the power comes in. Isn’t that the key variable?
The UK total generation in this example is 15 x 24 x 365 = 131,400GWh – the average output in GW times the number of hours in the year.
But it makes a big difference how this generation is distributed. It may be very sharply peaked with lots of days with 20 or 24GW and a long tail, with lots of days of less than 5GW. Or it might be quite flat, with most days being around 15GW plus or minus 2GW.
Surely which it is, how the generation is distributed, is what determines the effect of adding more of it? Adding more wind will have very different effects depending on this.
So I can see why capacity factor imposes an upper limit, in fact that seems rather obvious. What I cannot see is that capacity factor, rather than the distribution of the power generated with a given capacity factor throughout the year, is the determining factor.
In fact, imagine a case where the winds are light but almost totally uniform. You would get a very low capacity factor, but every GW of capacity you add would have the same effect, and you could get to a very high percentage of renewables at very little cost. On the other hand in a very variable climate, like the UK, you may get a higher capacity factor, but run into diminishing returns as you try to raise the percentage of wind. For the reasons in the pieces under discussion.
Maybe I am missing something but it seems like the distribution of the intermittent generation is the important parameter which limits the usefulness of adding additional intermittent capacity.
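michel's point can be illustrated with a deliberately artificial example (the numbers are invented for illustration, not real fleet data): two fleets with the same 25GW nameplate and the same 30% capacity factor, but very different output distributions.

```python
def usable(outputs, demand):
    # Average usable output over equally likely output levels; anything
    # above demand is curtailed (no storage assumed).
    return sum(min(o, demand) for o in outputs) / len(outputs)

demand = 10.0  # GW
steady = [7.5] * 10              # always at 30% of 25 GW nameplate
peaky = [25.0] * 3 + [0.0] * 7   # same 30% average, but feast or famine

print(usable(steady, demand))  # 7.5 GW: every watt is usable
print(usable(peaky, demand))   # 3.0 GW: most output arrives when unusable
```

Same capacity factor, hence the same Pollock limit, yet the steady fleet delivers 75% of demand with no curtailment while the peaky one delivers 30% with heavy curtailment: the distribution, not the capacity factor alone, sets the economics.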
I think you’re right.
The best that can be said of the Pollock limit is the empirical observation that it roughly marks the point of diminishing grid-power returns to installed unreliables capacity. The chart nearby illustrates this for the case of 2018 ERCOT wind power. But that’s a result of wind and demand distributions, and those don’t enter into Lord Monckton’s equations. One can readily conceive of distributions for which the point of diminishing returns would be well above the Pollock limit.
Moreover, it’s instructive to set aside the distribution question and look at what the marginal costs of wind power are without batteries or other storage. We have to bear the capital cost of almost 100% backup by, say, gas plants no matter what the unreliables penetration is, so we’ll ignore that fixed cost. That is, we’ll consider that money already spent, so it won’t come into play in deciding whether to add more wind. And for the sake of this discussion we’ll assume that our reliable plants’ only variable cost is fuel.
Now suppose that dividing wind farms’ average annual (pre-curtailment) output into the annual debt service imposed by their (let’s assume all-debt) financing comes to $30/MWh. And suppose that at current prices gas costs $22 per generated megawatt-hour. This would mean that adding more wind capacity wouldn’t make sense even at penetrations well below the Pollock limit. So if their greater real cost hasn’t yet stopped us from installing unreliables it’s hard to see why giving a name to the point of diminishing returns will change things.
As the chart nearby illustrates, moreover, prices varied within a higher range during the spring, summer, and fall of ’22, and for much of that range it would have made sense to increase penetration beyond the Pollock limit. So that limit doesn’t seem to have much chance of being the “final nail in the coffin of ‘renewable’ energy.”
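One simple way to frame the marginal economics in the comment above (the dollar figures are the commenter's hypotheticals, not market data): a marginal wind MWh is worth the gas fuel it displaces on the fraction of output the grid actually absorbs, and costs its share of debt service regardless.

```python
# Net value of a marginal wind MWh = avoided gas fuel on the absorbed
# fraction, minus debt service per (pre-curtailment) MWh.

def net_value_per_mwh(gas_fuel=22.0, wind_capital=30.0, curtailed=0.0):
    return gas_fuel * (1 - curtailed) - wind_capital

print(net_value_per_mwh())                 # -8.0 even with zero curtailment
print(net_value_per_mwh(curtailed=0.25))   # -13.5 once a quarter is curtailed
```

Under these assumed prices the marginal wind MWh is uneconomic even before any curtailment, and curtailment only deepens the loss, which is the comment's point.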
Glad you wrote this as I had the same thought. In fact overbuilding is well recognized. A recent California study estimated they could get 85% from renewables with 300% overbuilding. In Virginia Dominion Energy’s net zero plan involves a similar amount of overbuilding.
But overbuilding is indeed constrained by night and clouds for solar and low to no wind for wind. My understanding is that solar only works about 8 hours a sunny day so the no juice fraction is very large. Given wind generators require sustained speeds over 8 mph or so the no wind power fraction may also be well over 10% in many places.
Of course batteries are not included in any of these calcs.
Yes, overbuilding happens. It is precisely for that reason that Douglas Pollock did his research. He wanted to know whether one could determine the point beyond which installation of further capacity of weather-dependent renewables on a national grid would require either battery backup (mentioned explicitly twice in the original head posting), or capacity-constraint payments, or, as EirGrid does, disconnect orders, with the waste of capital that that entails. All those options are cripplingly expensive. The Pollock limit gives a clear indication that there is a limit – and in some countries quite a low limit – above which such expensive waste is bound to occur.
Australian Weekly Wind Power Generation Data – 9 January 2023 To 15 January 2023 | PA Pundits – International
Here is a link to a guy in Australia who records all the wind power produced every day in Australia. A former electrical and electronics instructor in the RAAF, he is well versed in all this, and when you see the actual power delivered to the grid compared with nameplate capacity you know how bad it is.
Most grateful to R.K. for this link, which shows that in Australia the capacity factor for wind power is 30%. That, then, is the Pollock limit. Build more than that and one must either have a lot more battery backup than Australia has or pay capacity-constraint payments, in default of which capital-wasting do-not-generate orders must be issued.
Overbuilding is like overbetting at Vegas. The house odds on Blackjack might only be .4%, but if you double your bet every time you lose expecting to win back your losses, you will be broke soon.
The house odds at Blackjack vary quite a bit depending on the rules. For many years the casinos of New Jersey allowed the early-surrender rule, which gave the player following an ideal card-counting strategy a 1.2% edge over the house. On my visits to New York as a lad, I used to rent a limo to take me to Atlantic City, spend the day there messing about at the tables and then get the limo back to NY with a few hundred dollars in my pocket. One had to play for smallish stakes or the pit-bosses took too much of an interest.
Eventually a greedy card-counter went too far, and the casinos lobbied the gaming control commission to let them ditch the early-surrender rule. The last time I was able to take advantage of that rule was in a casino within sight of the Pyramids in Egypt some years ago.
Good post. About overbuilding, it sure blows a hole in the “cheaper” argument for wind and solar over coal or gas or nuclear. For the reliable sources, the Levelized Cost of Electricity (LCOE) comes with 24/7 operation. The only overbuilding is for the system to accommodate downtimes.
For the “cheaper than fossil fuels” renewables, LCOE ignores the cost of backup, storage, or any overbuilding which results in curtailment.
Add: over-maintenance, over-repair, and over-replacement to the “overbuild” argument … and you’ve got some seriously unaffordable energy generation there. “Renewable” my arse.
and the cost for the rest of the grid to make the wind and solar useable?
Again—the “nameplate rating” of a PV module is an artificial test designed so that it can be measured indoors in artificial sunlight. It has very little connection with how it will operate when installed as part of a larger PV system.
Because the “nameplate rating” is at 25°C cell temperature, modules will invariably operate at higher temperatures outdoors and the power output is lower.
If the “capacity factor” of a system equals PV power out divided by the standard power at 25°C times the number of modules, it will be a low number.
In other words, the Pollock Limit for solar PV depends on what you use for the maximum power!
Francis, it’s incredible that you failed to understand the main thrust of the article. The argument isn’t that overbuilding doesn’t happen or can’t happen; it’s that it should not happen, because it’s wasteful.
Monckton is excited, as I am, that Pollock’s tool should help regulators and legislators understand the economic limits of “unreliables”.
In defense of Monckton’s name-calling: most of the comments attacking Monckton and Pollock resulted from a simple lack of critical reading and critical thinking, which is the same thing you did. I feel his pain; it must be extremely frustrating.
On the other hand, your explaining the consequences of overbuilding was brilliant and very helpful. Thank you.
In a video, I think this one
Prof Simon Michaux mentioned a number of interesting political aspects.
One is that various vociferous critics have attempted to debunk his report. All were stymied by not being able to find fault with either his data or his calculations, but that didn’t stop the hate, derision, or denial.
Another was that he made a presentation to the UK parliament. He was asked, so what is the solution? His reply was essentially “there isn’t one, you can’t get there from here.”
He was then told that, therefore, the UK Parliament would not pay any attention to his work.
The point being, a little thing such as a Pollock limit, or the more detailed discussion in this article, is not going to make a flyspeck’s worth of difference in activist, political, or Masters-of-the-World circles.
I am not as pessimistic as AndyHce about whether the Pollock limit will make a difference. I shall be briefing a senior Conservative MP on the Pollock limit next week, and he will direct questions to the relevant agencies, and he will require them to provide reasoned responses, which will be copied to me. Once the Pollock limit becomes better known, grid operators will be able to use it as a very simple guide. They already know the mean annual capacity factors for the weather-dependent renewable species on their grids; but, until now, what they did not know was that those capacity factors are the Pollock limits above which additional generating capacity will either be wasted or have to go to costly battery backup.
Good luck! You’ll need it. But if you’re successful and Pollock is accepted worldwide, you will prevent $trillions of waste.
Lord Monckton, if you are briefing a senior Conservative MP next week on the Pollock limit, would it be possible please for you also to point out that for off-shore wind to produce P GW of dispatchable/reliable energy it will require an installed capacity of 7.5P GW (or 10P GW according to Leslie MacMillan, who believes my efficiency for burning hydrogen to produce electricity is too high), as per the calculation I posted below yesterday at 10:40am.
This is of course if you agree with my calculation!
Agree, wind and solar is a perfect storm of campaign fund seeking liberal/progressive politicians, willfully uninformed voters, climate studies grant seeking universities, and talented snake oil persons like Al Gore and Elon Musk. Some consider it a hoax, but it’s actually fraud IMHO.
I am most grateful to Mr Sandberg for calling out Mr Menton on his having misunderstood – or, at any rate, misstated – the main thrust of Mr Pollock’s research. Mr Menton first of all did what one or two disreputable commenters on the original head posting had done: he cited, out of context, a single sentence from that posting; then he restated it in terms that were explicitly contradicted twice in the head posting, though not in that particular out-of-context sentence; then he drew the conclusion that he thought I was wrong.
There is little hope of regulators understanding the economic limits of “unreliables” if they refuse to acknowledge that bids in 2012 money that do not have to be honoured are not representative of costs today in money of today. We need to start by acknowledging that construction costs have not been falling precipitously, turbine manufacturers are unprofitable, raw material prices are rising, and financing costs are escalating sharply. Getting them to understand that output which cannot be used immediately has a negative value (because it must be disposed of), with the consequence that output which can be used becomes more expensive, goes beyond their capabilities entirely.
So many of them seem to believe that making hydrogen, or some equally useless process, will solve all the problems that are presented here.
“It doesn’t add up” makes a number of excellent points. However, there is some hope that, because the Pollock limit is simple, the regulators may be brought to understand it.
I suspect H L Mencken and Upton Sinclair knew the answer to that.
I know we have all asked before. But if it is so feasible, why hasn’t someone done it on a smaller scale? And I don’t mean a simulated, de-industrialized country set-up where all your heavy industry is outsourced to a country burning coal and all the products shipped to you by freighters burning heavy oil or some other kind of fossil fuel. Come on, let’s see it!
If somewhere like Vancouver Island could be proven to operate solely on wind & solar, then the preferred “100% renewables” proposition could be proven in real-world conditions.
Vancouver Island is 456 km (283 mi) in length, 100 km (62 mi) in width at its widest point, and 32,100 km2 (12,400 sq mi) in total area. It has one of the warmest climates in Canada, mild enough in a few areas to grow Mediterranean crops such as olives and lemons. The population of Vancouver Island was 864,864 as of 2021; nearly half of that population (~400,000) live in the metropolitan area of Greater Victoria, the capital city of British Columbia.
I live in California, home to Topaz, Ivanpah and Solyndra. $billions wasted. Solar isn’t a hoax, it’s fraud IMHO.
This is not all intermittent generation with weather-constrained CFs. Only 28.8% is from intermittents.
South Australia leads the world in intermittent generation penetration as a semi-isolated network. It has enough rooftops to run the entire lunchtime load in spring. Over a year it averages 70% of generation from intermittents.
The average demand in SA is around 1.8GW. The State has 2GW of solar capacity and 2.4GW of wind capacity, so it is already at more than 100% overbuild.
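The overbuild arithmetic in those figures can be checked in a couple of lines. A minimal sketch, using the rounded GW figures quoted above:

```python
# South Australia figures quoted above (rounded)
avg_demand_gw = 1.8   # average demand
solar_gw = 2.0        # installed solar capacity
wind_gw = 2.4         # installed wind capacity

nameplate_gw = solar_gw + wind_gw  # 4.4 GW of intermittent nameplate
overbuild = (nameplate_gw - avg_demand_gw) / avg_demand_gw

print(f"nameplate = {nameplate_gw:.1f} GW")
print(f"overbuild vs average demand = {overbuild:.0%}")  # 144%, i.e. more than 100%
```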
Like Germany, SA is a huge Ponzi because SA relies on Victoria acting as a 600MW battery of infinite capacity, the same way Germany uses other countries. The low-cost intermittent power sent from Germany to Norway might actually benefit Norway by conserving perched water in the hydro network there.
If the interconnector to Victoria goes down, the wind generators voluntarily curtail because the FCAS charges go through the roof and the wind generators are exposed to about 50% of that cost.
Germany is achieving 29% intermittent penetration by leeching off the neighbouring countries. Germany is destroying the economics of all European countries, with the possible exception of Norway, by exporting its intermittency. Neighbouring countries with coal generators should simply refuse to take intermittent power from Germany because it is eroding the economics of reliable generators, the same way South Australia has exported its high costs to Victoria through its intermittency.
This is just a quick selection of what South Australia’s “duck” curve looks like. At lunchtime on 5th November just 87MW was coming from grid generators and 1.5GW from rooftops. The grid solar curtails because the price is always negative under these conditions.
I don’t think SA can be counted as a semi-isolated network when it has flexibility over its interconnectors of ±600–700MW out of demand that averages about 1.8GW and peaks at around 3GW. That’s much more interconnection relative to demand than most places.
Germany can export a similar proportion of its average demand as South Australia. That is how they get their 28% without massive overbuild. When the wind is blowing they flood Europe with cheap intermittent generation.
I don’t think so. This graphic shows German interconnector flows in 2021. Relative to its Nordic neighbours and to levels of demand its interconnection is small.
Agree 100%. The good news is Australia is the best possible location for solar: lots of underproductive land, clear skies, and an excellent latitude. The bad news is all these ideals encourage overbuilding. Australia is in the sweepstakes for the worst-grid award. Germany and the UK have a new contender for the honors. The Biden/Harris/Pelosi/Schumer cabal have their dream inflation-inflaming Act that should be able to double the cost of electricity in the next few years here in the U.S. But only a few individual States would be contenders. Here in my Cali, if it wasn’t for cheap hydro from Oregon/Washington and affordable nuclear from Arizona, we would be alone in first place.
Any roof owner in mainland Australia can make their own power. Tasmania is a bit lean on sunlight in winter. Those on Bass strait might do OK.
As grid power prices increase, there will be a point where roof owners can make their own power at lower cost than the grid.
Wind and solar have little benefit of scale and very high transmission costs. It is more economic to make your own than to use a high-cost grid.
except for nightfall.
Going off grid becomes expensive when you have to cope with long periods of unfriendly weather. Storage runs out fast. You might prefer a 1,000 litre diesel tank and a genny.
Australia is where somewhat more reasonable heads in the government tried to get rooftop solar off the grid because it causes such a major control and reliability problem.
The only way to know the optimal build-out of renewables, nukes, coal plants, combustion turbines, or any other source of electrical energy, is to maintain free unhampered markets for energy production and dispatch, as noted here:
As for the Pollock limit on renewable generation, it is not valid for at least two reasons, as noted here:
Frank from NoVA is wrong. National grid authorities have a pretty good idea, by now, of the capacity factors of their individual renewable species. What they did not know, until now, was that those capacity factors are also the Pollock limits. Above them, generation will either have to go to expensive battery backup or be prevented by capacity-constraint payments, or the capital cost of the excess generating capacity of weather-dependent renewable species will be wasted by disconnect orders.
It’s not complicated.
And Monckton of Brenchley is missing the two points I made above.
First, given the extent to which governments around the world have already forced and/or subsidized the implementation of renewables, Mr. Pollock’s limit, ‘R’, which is defined as the average capacity utilization of existing renewables, is already likely well beyond the actual points where renewables inflict excess costs on end-users.
Second, if anyone actually pays attention to the derivation of Mr. Pollock’s limit, ‘f_max = R’, it should be clear that any ‘proof’ that a) depends on applying an average renewable capacity utilization factor to a specific (read bizarre) case where total system load equals renewable nameplate capacity and b) stipulates, without any evidence, that the ratio of renewable to total generation, f_max, reaches a maximum under this same load condition is simply incorrect.
Frank’s first point is sound, but does not impugn the Pollock limit. Above that limit, the already large costs of inefficient, intermittent, environmentally destructive, low-energy-density unreliables become crippling. The UK grid is a notable example.
Frank’s second point is, with respect, misconceived. Taking total system load as equal to renewable nameplate capacity so as to derive f_max is an entirely standard technique in logic and in mathematics known since the days of the medieval schoolmen as reductio ad absurdum. The fact of the apparent absurdity does not render the argument or the conclusion inappropriate.
Given that my first point is ‘sound’, it does impugn the Pollock limit, ‘R’, which among other shortcomings, changes with the degree of ‘overbuilding’ (apparently the word of the day) and a zillion other factors, including fuel costs for conventional generation.
Re. your comment on my second point, I have no problem with your ‘taking total system load as equal to renewable nameplate capacity.’ What I have a problem with are your assumptions that at that particular level of system load ‘R’ is pertinent and that ‘f’, the ratio of renewable generation to total generation is equal to ‘f_max’. Deriving conclusions from unsupported assumptions is a logical fallacy.
Again, I’m totally on board with the concept that the ongoing political drive to implement renewable energy will result in economic, if not societal, disaster if not headed off.
I’m also certain that most grid operators and the portion of rate payers who are sentient enough to read their utility bills are already aware of this.
We can only stop the degradation of the energy grid on the basis of real economic analysis today and by challenging and overturning the unscientific basis of climate alarmism asap.
What we don’t need to do is to make the alarmists’ cause any easier by fronting an easily dismissed ‘proof’.
On the first point, capacity factors are well known within the various national grids, and the Pollock limits are of course unaffected by any change in capacity arising from breaching the limits.
On the second point, f is not assumed to be equal to f_max: instead, f_max is the maximum value of f, which is expressed in the relevant equation.
Mr Pollock’s proof is not at all easy to dismiss, for it is correct. Frank from NoVA has not really made the effort to understand it, which is why he keeps misrepresenting elements of it.
You’re being evasive:
‘[C]apacity factors are well known within the various national grids, and the Pollock limits are of course unaffected by any change in capacity arising from breaching the limits.’
Capacity utilization varies over time and under varying conditions. Yes, one can calculate ‘R’ over some period of time and conditions, but to then treat it as an invariable constant within that period that applies to a very specific condition of system load, i.e., ‘H = N’, is a logical fallacy. By the way, if we were strictly considering solar renewables, what would it say about this ‘proof’ if that load condition typically occurred at night, when we know solar dispatch is zero?
Re. the second point, you assume that f = f_max GIVEN H = N. This is an unsupported assumption.
This word “overbuilding” does not have a very clear definition. In some ways, everything is overbuilt. The road past my house gets little traffic — half a dozen cars a day, not even a minute’s worth total that I can see, overbuilt by a factor of 2000 if you want to look at averages. Tires use very little of the road — another factor of 10 overbuilding.
My computer doesn’t mine crypto currencies, so it seldom runs anywhere near capacity — 1000 times overbuilt?
My truck’s engine very seldom generates anywhere near full power — even when scooting up on-ramps.
Fossil fuel power stations are overbuilt so some can go offline for maintenance.
Saturn V rockets — probably not overbuilt much, but there must have been some margin built in.
Is there anything that is not overbuilt for some circumstances?
Scarecrow Repair is of course right that generating capacity is overbuilt, typically by about 15% of total grid capacity. However, the overbuild of renewables is already egregious in many European countries, and the Pollock limit is a very simple way of determining the maximum installed capacity above which there will be costly excess generation.
Overbuilding is not only allowed, it is encouraged, and it creates a new market.
Neither Mr Pollock nor I had ever suggested that overbuilding was not allowed. Mr Pollock has shown with a simple and elegant mathematical argument how a grid operator can determine the penetration of wind or solar power above which any additional installed capacity constitutes overbuilding, which must either be matched by cripplingly costly battery backup or be dealt with by prohibitively costly capacity-constraint payments or by disconnect orders, which waste capital.
you know he’s wrong
Monckton celebrated too early
Apparently, Steve, so did you 🙂
For once, SM, I agree with you. Whenever someone says AGW science is over, it is real, I know they are wrong.
But Monckton did not say AGW science is over. He said the final nail has been driven into the coffin of so-called “renewable” energy as a major – rather than a minor and even then wickedly costly – contribution to net zero targets.
Can jtom provide any rational argument against Mr Pollock’s result?
Mr Mosher is, as usual, wrong. Mr Pollock’s result is above his pay-grade.
Well done Francis.
It should be obvious that if there is some non-zero availability of wind power that is always available, then you can have continuous power from that source (wind) by building whatever capacity is required to meet all demand when the wind is at its minimum.
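The sizing rule described above can be put into numbers. A minimal sketch, using purely illustrative figures (a 5% minimum wind availability and a 10 GW peak demand are assumptions, not values from the thread):

```python
def nameplate_for_firm_supply(peak_demand_gw: float, min_availability: float) -> float:
    """Nameplate capacity needed so that wind output meets peak demand
    even when the wind is at its minimum fraction of nameplate."""
    return peak_demand_gw / min_availability

# Illustrative numbers only: 5% minimum availability, 10 GW peak demand
print(nameplate_for_firm_supply(10, 0.05))  # 200.0 GW of nameplate, a 20x overbuild
```

Whenever the wind rises above that minimum, everything beyond demand is surplus, which is where the constraint payments mentioned in the reply come in.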
Then, as soon as wind rises above that minimum, gobs and gobs and gobs of constraint payments are required.
And the Pollock limit shows in the simplest fashion the point beyond which penetration by weather-dependent renewable species becomes prohibitively costly and wasteful.
“Averaged over the course of the year, even though the wind turbines only operate at an average of 50% of capacity, you have gotten 75% of your electricity from the wind system, at the cost of doubling the size of the system and throwing away 25% of the electricity produced. And you still have no electricity 25% of the time.”
No, not throwing away 25% of the electricity. Store it and get back, let’s say, 8% of the total. 17% left to be supplied by other renewables. Problem solved.
lgl: You probably should have included the /sarc based on several of the comments. Some people actually believe that grid-scale battery storage at $500/kWh for more than four (4) hours is economical.
Store as ammonia.
There’s a very long road before that is practical.
Then you’ll need to store enough to turn back into a month’s worth of electricity demand, perhaps more.
In sub-Saharan Africa?
There’s not a very long road before that is practical anywhere. Plants are being built.
Having been exposed to an industrial-level ammonia leak in Wyoming back in the ’80s, which I remember like yesterday, I’m not too excited about ammonia. The answer to excess wind and solar production isn’t making hydrogen, ammonia, or fairy dust. The answer is to end the low-density, mineral-intensive nonsense and save our nickels and dimes until 2030, when we can start buying NuScale small modular reactors.
And spread nuclear waste even wider. Better to wait for fusion then.
I guess I’m old enough to remember some of the foibles of the Soviet Union. If a factory was useful as an advertisement, in many cases it was assigned higher and higher production requirements whether they were used or not. Look at our productivity and production! Not exactly the most economical way of running production.
Paying for curtailment of production is somewhat akin to the Soviet Union’s running of its economy. This practice only encourages ever more installation of productive capacity whether it is needed or not. There is no economic signal to restrict production past an economic point. I can envision a countryside literally covered in windmills and solar panels, most of which are not being used for anything other than farming curtailment payments.
Government run economies NEVER work in the long run. Inefficiencies force the populace to change the politicians in one form or fashion.
No matter how large is the installed capacity of wind turbines there is always the chance that their output will be zero or next to zero. So to obtain reliable power, storage using “excess” wind power is required.
Batteries are far too expensive and I understand there isn’t even sufficient mineral mining capacity anyway to build all the necessary batteries. The other suggestion is to produce and store hydrogen from electrolysis using “excess” wind turbine electricity and then to use the hydrogen to generate electricity. I have the following calculation for the amount of installed wind turbine capacity required:
Suppose we want P GW of power to be “dispatchable”, meaning always available “on demand”.
Let us start with P GW of installed wind turbine power and calculate the extra installed capacity required to produce P GW of dispatchable power.
Now the capacity factor of offshore wind turbines is 33% (onshore is less), so the average power over a year supplied by a wind turbine is 0.33P GW, and consequently we will require 0.67P GW from storage.
The efficiencies are :
Electrolysis : 60%
Compression : 87%
Electricity generation : 60%
So overall efficiency = 60% x 87% x 60% = 31%
So the amount of excess power required to produce the missing 0.67P GW is 0.67P/0.31 = 2.16P.
Since the capacity factor is 33%, this means we will need 2.16P/0.33 = 6.55P GW of additional installed wind power to provide the needed 0.67P of dispatchable power.
Hence a total of P + 6.55P = 7.55P ≈ 7.5P of installed wind turbine capacity is required to provide P GW of dispatchable power.
You would think that the necessity of building and maintaining over 7 times as much installed capacity as the required dispatchable power (more for onshore wind and even more for solar) would be the nail in the coffin, except that we are going to be forced to accept intermittency.
I did some calculations for the UK here:
Re burning of hydrogen to generate electricity:
Generation of electricity from any heat combustion process cannot be 60% efficient. Are you talking about some kind of fuel-cell? 40% is more like it.
I would multiply 0.6 x 0.87 x 0.4 to get ≈ 0.21 and therefore 0.67P/0.21 ≈ 3.2P
3.2P/0.33 ≈ 9.7P additional installed wind
so P + 9.7P ≈ 10.7P to provide P GW of dispatchable power. A 10-fold overbuild, near enough to a nice round number.
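The two versions of this calculation differ only in the assumed efficiency of turning stored hydrogen back into electricity, so they can be sketched with one function. This uses the capacity factor and chain efficiencies quoted in the comments above, with no intermediate rounding:

```python
def overbuild_factor(capacity_factor: float, gen_eff: float,
                     electrolysis_eff: float = 0.60,
                     compression_eff: float = 0.87) -> float:
    """Total installed wind capacity, in multiples of P, needed to deliver
    P GW of dispatchable power when the shortfall is bridged via hydrogen."""
    round_trip = electrolysis_eff * compression_eff * gen_eff
    shortfall = 1.0 - capacity_factor            # average power missing, in units of P
    extra_avg = shortfall / round_trip           # average power that must feed storage
    extra_capacity = extra_avg / capacity_factor # installed capacity yielding that average
    return 1.0 + extra_capacity

print(f"{overbuild_factor(0.33, 0.60):.1f}")  # ≈ 7.5  (60% generation efficiency)
print(f"{overbuild_factor(0.33, 0.40):.1f}")  # ≈ 10.7 (40% generation efficiency)
```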
And don’t forget that hydrogen tends to leak out of everywhere and has to be replenished during storage unless it’s used right away. So there is a steady demand to replace lost storage.
And the cost of storage is much higher than the cost or the turbines. Totally unworkable.
An argument for diminishing returns on your renewable investment after a certain point.
The main problem with renewables, in general, is that they don’t fit our current lifestyles, which are highly regulated. We tend to have breakfast, lunch and dinner at regulated times, and we begin and finish our daily jobs at regulated times, whereas the weather, especially the sun and wind, does not conform to such regulated times, which is why ‘renewables’ are often called ‘unreliables’.
The solution to the problem is how to adapt to an unreliable supply of energy in an efficient way which does not reduce our standard of living. Creating a huge oversupply of energy from wind and solar is obviously very inefficient if we cannot make use of that excess energy, for example by pumping water back up to a hydro dam.
A possible solution, which theoretically could increase everyone’s living standard, would be to use that excess energy to operate fully-robotic factories which do not require any human presence. Whenever there is a shortage of energy supplies from wind or solar, the supplies to one or more of the robotic factories in the area would be temporarily suspended so there would be sufficient power available for people’s homes, EVs, and non-robotic industries.
Robotic factories are of course more efficient. Robots do not require wages. During very windy and sunny times, the fully-robotic factories would operate 24 hours a day. Frequent shutdowns of energy supplies could still result in the factories operating more than 12 hours a day, providing more affordable goods for everyone.
I guess I’m an optimist. (wink)
In the spirit of oversimplified models…
Consider a grid with 100 GWh of demand every day from 7am to 7pm, and 50 GWh from 7pm to 7am. Now put in solar panels with 100% output from 7am to 7pm, and off completely from 7pm to 7am. How many panels would you put in? Enough to provide 100 GWh. In this case you have no curtailment, and you provide 67% of demand with a 50% capacity factor. The panels were not designed to match load, but they happen to.
This is the other problem in the Pollock limit. It assumes the only number that matters for the grid is the average demand.
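The toy model above can be run as a quick check that penetration exceeds the capacity factor when generation happens to match the load shape. A minimal sketch using the numbers from the comment:

```python
# 24-hour toy grid: 100 GWh of demand over 7am-7pm, 50 GWh over 7pm-7am.
# Solar runs at full output 7am-7pm and is off at night; nameplate is sized
# so daytime output exactly equals daytime demand (no curtailment).
day_demand_gwh = 100.0
night_demand_gwh = 50.0
total_demand_gwh = day_demand_gwh + night_demand_gwh

solar_output_gwh = day_demand_gwh  # panels sized to daytime demand
curtailed_gwh = 0.0                # output never exceeds demand

penetration = solar_output_gwh / total_demand_gwh
capacity_factor = 12 / 24          # full output for 12 of 24 hours

print(f"penetration = {penetration:.1%}")        # 66.7% of demand served by solar
print(f"capacity factor = {capacity_factor:.0%}")  # 50%
```

With zero curtailment and penetration well above the 50% capacity factor, the toy grid sits comfortably past the claimed limit, which is the point of the comment.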