By Richard D. Patton
When examining penetrations of wind power, there seem to be two types of papers. In one type, wind penetration up to 20% is difficult but doable; in the other, wind penetrations of 50-60% are quite easy, and renewable penetrations of up to 90-100% are claimed.
As an example of the first type, consider the Western Wind and Solar Integration Study performed by GE under contract to NREL (National Renewable Energy Lab). One of their conclusions is the following [1]:
“There appears to be minimal stress on system operations at up to 20% wind penetration. Beyond that point, the system’s operational flexibility is stretched, particularly if the rest of WECC is also aggressively pursuing renewables.”
The GE/NREL paper invites skepticism because they freely admit that their reserves are stretched and that interconnections provide extra reserves. They take up a 20/20 case – 20% wind in both the area and its interconnections. They also examine the case of 30% wind in an area, but not the 30/30 case, where both the area and its interconnections have 30% wind penetration. This is the actual case, if wind is to supply 30% everywhere. Thus, 20% is doable but 30% is much more doubtful. Note that the 30% case also had 5% penetration from solar energy, for a total of 35%.
For an example of a paper that shows high wind penetration, consider Examining the Feasibility of Converting New York State’s All-purpose Energy Infrastructure to One Using Wind, Water and Sunlight [2]. This paper has 50% wind power, and its only sources of dispatchable power are hydroelectric and concentrating solar power (CSP), and these only amount to 15.5% of the power. There is no fossil fuel energy in the system anywhere, and there is no biomass energy in the system.
The methodology for this paper is given in another paper: A Monte Carlo approach to generator portfolio planning and carbon emissions assessments of systems with large penetrations of variable renewables [3]. That method treats the hourly wind speed as a random variable drawn from the mean and standard deviation of the wind speed for that date, combined with a cyclic allowance for the fact that the wind is stronger at night than during the day. It does so to simplify the problem and reduce the number of variables, so that the design becomes manageable. Let’s take a look at this Monte Carlo approach. The authors state:
“The Monte Carlo simulation relies on the assumption that meteorological processes, demand forecasts and forced outages can be approximated as Markov chain processes.”
The authors then build all of their analysis on this assumption. The question then becomes: can wind speed be approximated as a Markov chain process? The reason for asking is that if you ask a math or statistics professor what the most common error students make, they will tell you it is failing to verify their assumptions. If your assumption is not true, all of your work is wasted.
In order to understand what might be wrong with the assumption, we turn to Introduction to Markov Chains by Anders Tolver [4]. His first example is of a child being taken to daycare if he is well or kept home if he is ill. This yields a series of well/ill states depending on whether the child went to daycare that day. Here is what the author has to say about that series:
“It is clear that many random processes from real life do not satisfy the assumption imposed by a Markov chain. When we want to guess whether a child will be ready for daycare tomorrow, it probably influences our prediction whether the child has only been ill today or whether it has been ill for the past 5 days. This suggests that a Markov chain might not be a reasonable mathematical model to describe the health state of a child.”
A memoryless (independent) random process never looks back, so its past cannot predict its future. In statistical terms, this means that the expected value of its autocorrelation is zero whenever the time shift is non-zero. When the time shift is zero, the autocorrelation is 1.0 by definition.
I tried several autocorrelations of NREL wind data, using the Tower M2 dataset because it was the most accessible. The autocorrelation for a 1-hour time shift of the wind measured hourly at the NREL M2 tower was 0.787 in January 2018; in June it was 0.735. This illustrates that wind speed does not behave as an independent, memoryless random process. Since successive wind speeds are strongly correlated, simulations that treat them as memoryless draws are not valid.
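The autocorrelations quoted above are straightforward to compute. Below is a minimal sketch using numpy, with synthetic series standing in for the M2 tower data (the function name and the illustrative series are my own, not from the article or NREL):

```python
import numpy as np

def lag_autocorrelation(x, lag=1):
    """Sample autocorrelation of a series at the given time shift."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# A persistent, slowly varying series (like real hourly wind) has a
# lag-1 autocorrelation near 1 ...
t = np.linspace(0.0, 2.0 * np.pi, 24)
persistent = 5.0 + 3.0 * np.sin(t)
print(round(lag_autocorrelation(persistent), 2))

# ... while independent random draws have autocorrelation near zero.
rng = np.random.default_rng(0)
white = rng.normal(5.0, 3.0, size=10_000)
print(round(lag_autocorrelation(white), 2))
```

Running this on real M2 hourly data instead of the synthetic series gives the 0.7-0.8 values cited above.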
The significance of this can be seen in the graph below. Both the orange line and the blue line have the same mean and standard deviation. The orange line represents a day on which strong winds slowed down at midday and has an autocorrelation of 0.91; the blue line has an autocorrelation of -0.076 (nearly zero).

Note that the random wind cannot model whether the wind is persistent or not because it has no memory. It is as likely to be high or low any time of the day.
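The orange/blue contrast can be reproduced numerically: two series with identical mean and standard deviation, one persistent and one not. This is only an illustrative sketch with made-up numbers (the cosine profile and the seed are my choices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Lag-1 autocorrelation via the correlation of the series with itself shifted.
def lag1(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# A "persistent" day: strong wind easing toward midday (like the orange line).
hours = np.arange(24)
persistent = 8.0 + 4.0 * np.cos(2.0 * np.pi * hours / 24)

# Shuffling the same 24 values keeps the mean and standard deviation
# exactly the same but destroys the persistence (like the blue line).
shuffled = rng.permutation(persistent)

print(round(lag1(persistent), 2))
print(round(lag1(shuffled), 2))
```

Mean and standard deviation cannot distinguish the two series; only the autocorrelation can.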
When multiple independent random series are averaged together, the result converges to the common mean and its variance converges toward zero. Here is an example.

As the number of independent random series averaged together increases, the deviations decrease: the lows become higher and the highs become lower. Translated to wind farms, this mirrors the insistence that problems can be fixed by adding wind farms at different locations. If each wind farm is represented by a different random series, the averaging smooths out the result and makes the system appear more reliable than it actually is. The effects of storms and calms are lost. The intermittency is lost, because as more wind farms are added, the modeled power output converges to a constant function. This is not true in practice.
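The smoothing effect of averaging independent series can be demonstrated directly. A small sketch (the farm count, mean, and standard deviation are arbitrary illustrative values):

```python
import numpy as np

def combined_std(n_farms, hours=8760, mean=10.0, std=4.0, seed=1):
    """Standard deviation of the average output of n independent,
    statistically identical 'wind farms'."""
    rng = np.random.default_rng(seed)
    farms = rng.normal(mean, std, size=(n_farms, hours))
    return farms.mean(axis=0).std()

# The spread of the fleet average falls like 1/sqrt(N): the modeled fleet
# looks ever steadier, even though each farm is as variable as before.
for n in (1, 4, 16, 64):
    print(n, round(combined_std(n), 2))
```

With 64 independent farms the modeled fleet output varies about an eighth as much as a single farm, which is exactly the artificial steadiness described above.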
This conclusion is not changed if the analyst makes an effort to include the correlation between the winds at different locations. The Monte Carlo simulation is still treating everything as a Markov chain. To repeat Dr. Tolver, “It is clear that many random processes from real life do not satisfy the assumption imposed by a Markov chain” [4]. Wind is one of those random processes that do NOT satisfy that assumption.
Any paper such as those cited that uses this methodology is faulty and gives a very unrealistic view of the value of wind power. This is because the technique itself tends to smooth out the flow of power from wind so that it approaches a constant value as more wind farms are added. This is an artifact of faulty math and does not exist in the real world.
Mark Z. Jacobson, the lead author of the paper on making New York 100% renewable, is a professor of Civil and Environmental Engineering and director of the Atmosphere/Energy program at Stanford University. According to his CV [5]:
“In 2009, he coauthored a plan, featured on the cover of Scientific American, to power the world for all purposes with wind, water, and sunlight (WWS). In 2010, he appeared in a TED debate rated as the sixth all-time science and technology TED talk. In 2011, he cofounded The Solutions Project, a non-profit that combines science, business, and culture to educate the public about science based 100% clean-energy roadmaps for 100% of the people. From 2011-2015, his group developed individual WWS energy plans for each of the 50 United States, and by 2017, for 139 countries of the world.
The individual state roadmaps were the primary scientific justifications for California and Hawaii laws to transition to 100% clean, renewable electricity by 2045, Vermont to transition to 75% by 2032, and New York to transition to 50% by 2030. They were also the primary scientific justifications behind United States House Resolution H.Res. 540, House Bill H.R. 3314, House Bill H.R. 3671, Senate Resolution S.Res. 632, Senate Bill S.987, and the platforms of three presidential candidates and a major political party in 2016, all calling for 100% clean, renewable energy in the U.S.”
It seems likely that all of the high penetration models are influenced by him and possess similar methodology. Since they are all faulty, at best it can be said that high wind penetrations might be possible. As of today, they are unproven and unlikely, given experiences in places like Germany and South Australia.
It also seems likely that the plans proposed and passed to reduce the carbon footprint in places like California and New York will fail badly and drive up utility bills. They are based on this faulty methodology and hence are very unlikely to succeed.
References
1) Western Wind and Solar Integration Study, prepared for NREL by GE, p 162
https://www.nrel.gov/docs/fy10osti/47434.pdf
2) Jacobson, Mark Z., et al., Examining the Feasibility of Converting New York State’s All-purpose Energy Infrastructure to One Using Wind, Water and Sunlight. Energy Policy, Vol. 57 (2013), p. 589. www.elsevier.com/locate/enpol.
3) Hart, Elaine K. and Jacobson, Mark Z., A Monte Carlo approach to generator portfolio planning and carbon emissions assessments of systems with large penetrations of variable renewables. Renewable Energy, Vol. 36, Issue 8, August 2011, p. 2281.
4) Tolver, Anders, An Introduction to Markov Chains, p. 8, http://web.math.ku.dk/noter/filer/stoknoter.pdf
5) Mark Z. Jacobson Curriculum Vitae, http://stanford.edu/group/efmh/jacobson/
Even if the wind is random and independently distributed, don’t you still have to build and maintain lots of turbines? There is a trade-off between having enough turbines all over the place to smooth out the power generation and the cost of building and maintaining the extra turbines.
Yes exactly, and at the risk of re-stating the obvious if regions A and B are required to cover the shortfall in adjacent regions C and D then regions A and B have to have enough capacity to cover the demand in all four regions A, B, C, and D at the same time. That’s a hell of an overbuild (and someone else in these comments has pointed out that even an infinite overbuild cannot cover for the certain eventuality of no wind anywhere).
“Mark Z. Jacobson, the lead author of the paper on making New York 100% renewable”
Several years ago I took a deep dive on this subject to assuage my own curiosity, looking at the only large set of real numbers I could find:
https://energy-charts.de/energy.htm
Based on the numbers available at the time, up to early 2016, it was clear that Germany would repeatedly see periods of 3-6 weeks where it would require hugely more solar, wind, and storage resources if it would power itself with a domestic “renewable” infrastructure. Vastly more. A later look at energy import/export also confirmed that Western Europe could not likely do much to solve regional temporal challenges via long-distance undersea and overland interconnects. Not to mention the security implications of a national economy which hangs from an undersea thread.
About this time Jacobson was telling the world that 100% renewable was a piece of engineering cake and I called him out on it, and suggested he look at the real numbers from Germany. He would not hear of it.
My takeaway was that snake-oil salesmen make the money and at some point the engineers take the fall.
Switzerland and Germany were traditional examples of grid stability.
Today Swiss Grid has already dropped the concept of public service – meaning, we’ll do the best we can with what we have; service is no longer guaranteed.
The actual situation is well described and documented in this post:
http://notrickszone.com/2019/01/26/the-green-energies-of-instability-swiss-power-grid-requires-200-fold-more-intervention-than-8-years-ago/
Our outages and breaks learning curve progresses fast.
To take a simple example, let’s build a grid with 30% wind generation on average. Allowing for an average 30% capacity factor, that means that when the wind blows strongly, wind could theoretically supply 100% of demand, so all other generators must be idled. But on occasions when the wind doesn’t blow, backup generators must be available to supply 100% of peak demand. We now have a grid with 100% wind nameplate capacity and 100% deliverable other generation. Double the investment.
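The overbuild arithmetic in this comment can be checked in a few lines (numbers normalized so that average demand equals 1):

```python
# A sketch of the overbuild arithmetic: 30% of energy from wind at a
# 30% capacity factor means wind nameplate equals average demand.
avg_demand = 1.0          # normalize average demand to 1
wind_share = 0.30         # wind supplies 30% of energy on average
capacity_factor = 0.30    # average wind output / nameplate capacity

wind_nameplate = wind_share * avg_demand / capacity_factor
backup_nameplate = 1.0    # dispatchable plant still has to cover calm spells

print(wind_nameplate)                     # 1.0 x average demand in wind alone
print(wind_nameplate + backup_nameplate)  # 2.0 x: double the installed capacity
```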
The Australian experience has been that large high pressure systems can stall over all of south east Australia. The grid there has a nominal wind generation capacity of 5,500 MW but on occasion wind only supplies 200 MW in a grid that requires 22,000 MW on average with a peak of 33,000 MW.
The alternative is storage. Batteries have yet to prove themselves (South Australia installed the “world’s biggest battery” courtesy of Elon Musk, rated at 100 MW with about 129 MWh of storage – roughly one hour at full output). Hydro is the most effective storage, and wind can be paired effectively with it where hydro is available. In the Australian grid, hydro supplies about 2,000 MW on average, with a high of about 5,000 MW.
But in the world’s wind test bed of Sth Australia, there is no hydro. Wind nameplate capacity is 1900 MW and total demand varies from 700-2000 MW with summer peaks up to 3000 MW. Their other sources of power are solar and gas. They survive because of a 650 MW interconnector to the far larger Victorian market that has coal, gas and hydro. But Victoria has plans to go to 50% “renewables” and is already experiencing shortages in peak times.
Jay got thrown out before he had to make the phone call to Dan. In any case once Hazelwood went Dan was in the same boat as Jay. Now Dan has to call both Bill and Gladys to keep the lights on.
It appears they will go on building these things a long way into the future and keep wondering why power prices keep going up and reliability keeps coming down. If they left the city and went out into the country for a road trip they would get a better appreciation of why wind generators are a waste of money.
Victoria was using over 2GW of Snowy hydro and 500MW from Tasmania via Basslink during the heatwave. Even Snowy 2 isn’t going to help if they go to 50% wind.
Testing my new emergency generator tomorrow; the next house will have full propane standby generator capability. Dependable power is too important to leave to anyone who either believes in renewables or is required to comply with a renewable mandate.
I made my living for a good number of years doing Monte Carlo simulations of objects going through the atmosphere. If my assumptions about the variation of the atmospheric parameters had been as wrong as the ones these authors used, there would have been some notably bad things going on.
(Of course, there are efforts to improve things so that simpler prediction methods such as these can be used.)
I had to revise my initial hack at the work because randomizing some things based on a normal distribution using the mean and standard deviation does not result in the right answer in the real world.
This is the problem with all of these climate studies. There is no check on whether the answer is correct. Even if the things done seem logical, that does not mean that they are correct in the real world.
Finally, some light shines down on the madness.
Without energy storage, wind-generated energy will never be reliable enough to exist without nuclear or fossil fuel backup power. It is just not economically viable to build batteries big enough to store a reasonable amount of backup power using existing technology (though maybe someday it will be). Until then, wind is at best intermittent and at worst completely unreliable. You just cannot build a nation’s power grid on the premise that “everything will work just fine”.
Say you build more and bigger wind farms… It takes one big event (like a hurricane or severe winter storm) to knock out hundreds of turbines, possibly spread over a huge area. It takes one big high- or low-pressure system becoming stationary to reduce wind output to 1/10th of normal for days or even weeks. Because wind power is so remote, you end up building lots of feeders into trunks to move the power about – and it takes one trunk failure to cut off half or more of your power, which becomes more likely as those trunks stretch farther from the cities that use the power. I have been arguing for years that you cannot just assume that wind is available 100 miles away when it’s quiet nearby – it depends on the weather system.
Can wind supply some of the load? Yes, but at a higher cost. No one seems to realize that wind turbines wear out and need to be taken down, replaced, and something done with the old parts. Add the costs of replacing and recycling (or scrapping) the last generation of wind turbines, add the cost of batteries to even out the load a little and of backup power sources for long-term disruptions, and the totals are ridiculous. Just build a reliable nuclear power plant and be done with it.
Just imagine a long cold winter spell where it’s cloudy and the wind is quiet for a week. Heating needs are peaking, solar is almost dead, and wind turbines can barely rotate. Now imagine that spell is a few hundred miles across, and that neighboring power grids are already stretched because of the cold. This is a very real possibility. What do we do then? Let Kansas freeze? Or Ohio, or…
Modern society needs dependable power grids and sources. The source needs to be as close to the users as feasible. There needs to be enough capacity to allow for maintenance and the unexpected. Wind turbines fail at all of these missions. Wind is a wonderful niche power source where the main grid will not reach. Leave it at that.
Kill the Wind Warts! They are ugly, destructive to birds and bats, and expensive. Only the rich pushing their use and making huge profits off the subsidies are winning.
“… they are unproven and unlikely, given experiences in places like Germany and South Australia. It also seems likely that the plans proposed and passed to reduce the carbon footprint in places like California and New York will fail badly and drive up utility bills …”.
=====================================
That is the purpose of these ridiculous contraptions viz. to reduce demand or euphemistically “demand response” through ever higher prices.
Empirical evidence trumps computer models every time — mad professors or your own eyes, who are you going to believe⸮
WHAT IS GRID-CONNECTED WIND POWER REALLY WORTH? [GERMAN EXAMPLE]
Wind power is intermittent and non-dispatchable and therefore should be valued much lower than the reliable, dispatchable power typically available from conventional electric power sources such as fossil fuels, hydro and nuclear.
In practice, one should assume the need for almost 100% conventional backup for wind power (in the absence of a hypothetical grid-scale “super-battery”, which does not exist in practical reality). When wind dies, typically on very hot or very cold days, the amount of wind power generated approaches zero.
Capacity Factor equals (total actual energy output)/(total energy output at rated capacity, assuming 100% utilization). The Capacity Factor of wind power in Germany is about 28%*. However, Capacity Factor is not a true measure of the actual usefulness of grid-connected wind power.
The true factor that reflects the intermittency of wind power is the Substitution Capacity*, which is about 5% in Germany – a large grid with a large wind power component. Substitution Capacity is the amount of dispatchable (conventional) power you can permanently retire when you add more wind power to the grid. In Germany they have to add ~20 units of wind power to replace 1 unit of dispatchable power. This is extremely uneconomic.
I SUGGEST THAT THE SUBSTITUTION CAPACITY OF ~5% IS A REASONABLE FIRST APPROXIMATION FOR WHAT WIND POWER IS REALLY WORTH – that is 1/20th of the value of reliable, dispatchable power from conventional sources. Anything above that 5% requires spinning conventional backup, which makes the remaining wind power redundant and essentially worthless.
This is a before-coffee first-approximation of the subject. Improvements are welcomed, provided they are well-researched and logical.
Regards, Allan
____________________________________________________________
NOTES (Edited):
The E.On Netz Wind Report 2005 (1) provides excellent information. See Figure 7 re Substitution Capacity.
Sadly, green energy is not green and produces little useful energy – intermittency and the lack of practical energy storage are the fatal flaws.
Germany has calculated that it needs 95% spinning backup of conventional energy (e.g. natural gas turbines) to support their wind power schemes – it would make much more economic sense to just scrap the wind power and use the gas turbines.
Driving up energy costs just increases winter mortality, which especially targets the elderly and the poor. Excess Winter Deaths in the UK this year totaled over 50,000 – half the annual average of 100,000 in the USA, which has FIVE times the population of the UK.
When politicians fool with energy policy, real people suffer and die. Most politicians are so scientifically illiterate they should not even opine on energy, let alone set policy.
Posterity will judge this climate/ green energy nonsense harshly, as the most costly and foolish scam in human history.
____________________________________________________________
REFERENCES:
1. “E.On Netz Wind Report 2005” at
http://www.wind-watch.org/documents/wp-content/uploads/eonwindreport2005.pdf
2. DEBATE ON THE KYOTO ACCORD
PEGG, reprinted in edited form at their request by several other professional journals, THE GLOBE AND MAIL and LA PRESSE in translation, by Baliunas, Patterson and MacRae, November 2002.
http://www.friendsofscience.org/assets/documents/KyotoAPEGA2002REV1.pdf
This is what we KNEW in 2002:
[excerpts from our Rebuttal in the APEGA debate]
“The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.”
“Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
The real situation is that winds are driven by low and high pressure regions which can vary in size from a few miles across up to that of a large country. When a large system sits overhead, which it can do for days or even weeks, with no pressure difference to drive airflow the wind output for that entire region is zero.
We’ve had this situation in the UK many times, where winds are near nil for two or three weeks at a stretch. This casts a totally different light on the need for backup capacity compared to a purely random wind distribution, in which so long a total outage would be a statistically rare event.
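This difference can be made concrete with a toy simulation: independent hourly draws versus a persistent (autocorrelated) process with the same mean and variance. All parameters below are illustrative assumptions, not UK data:

```python
import numpy as np

def longest_lull(speeds, threshold):
    """Longest consecutive run of hours below the given wind speed."""
    longest = run = 0
    for below in speeds < threshold:
        run = run + 1 if below else 0
        longest = max(longest, run)
    return longest

rng = np.random.default_rng(7)
hours = 24 * 365

# Independent hourly draws: the "purely random wind" assumption.
iid = rng.normal(8.0, 3.0, size=hours)

# A persistent AR(1) process with the same long-run mean and variance:
# each hour remembers the last, so calms cluster into long spells.
phi = 0.99
ar = np.empty(hours)
ar[0] = 8.0
for t in range(1, hours):
    ar[t] = 8.0 + phi * (ar[t - 1] - 8.0) + rng.normal(0.0, 3.0 * np.sqrt(1 - phi**2))

print("iid longest lull (hours):", longest_lull(iid, 5.0))
print("persistent longest lull (hours):", longest_lull(ar, 5.0))
```

The independent model produces only short lulls; the persistent model, with identical hour-by-hour statistics, produces calm spells many times longer, which is what drives real backup requirements.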
Every 20 years or so you have to replace all the major parts of all the turbines…… at the top of a 300ft tower.
No problemo for an academic at a University desk.
Why bother with models when you have real working examples of higher wind penetration in real life?
Eire in 2015 had 23% of electricity from wind and is almost entirely powered by gas/wind generation…
UK was 15% in 2017… same as Germany’s onshore turbines.
Spain had 23% of wind power in 2017.
And Australia produces on average 32% of installed wind capacity in a year but lefties simply don’t do marginal analysis-
https://anero.id/energy/wind-energy/2019/january
they seem to ignore seasonal variation completely-
https://anero.id/energy/wind-energy/2018/june
Notice the virtual zeros but all they see is a lovely hump.
https://anero.id/energy/wind-energy/2018/april
No matter – just grab the headline 32% figure and ooh and aah over it. It’s as if I take my car in under warranty because it’s hard to get going some days, suddenly decelerates to a crawl, or lurches forward and takes off with a mind of its own, and Griff the mechanic tells me, “No worries, it’s all fine, mate – you’re achieving the average kms a year for that model.”
Ireland depends on being able to dump surplus wind on the UK (and curtails it to ensure that the grid remains stable with sufficient inertia) – and then depends on imports when the wind isn’t blowing. The balancing flows are up to +/-1GW. You can see these modes of operation in the period of Storm Ophelia in this chart (although the Moyle interconnector was limited to 250MW at that time, so it was +/-750MW):
Peak demand was about 5.5GW, so a swing of 1.5-2GW is a substantial fraction of demand.
Spain is also significantly dependent on interconnectors to nuclear powered France.
You can’t view those grids as isolated instances.
griff likes to pretend that each nation’s grid is independent when he makes these absurd claims.
He’s been corrected over and over again.
In doing an assessment of the capabilities of wind generation, surely the output should be used, not the wind speed, as the two do not have a linear relationship. From what I’ve read, power output goes as the cube of wind speed, so a small wind-speed deviation produces a much larger power deviation?
Perhaps that might convince these modellers that their conclusions are wrong. It would also be useful if they actually looked at real installations to see how poorly they work.
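The cube law the commenter mentions is easy to quantify. A short sketch (valid roughly between cut-in and rated speed; the function name is my own):

```python
def wind_power_ratio(v, v_ref):
    """Relative power at wind speed v versus v_ref, using the cube law
    (valid roughly between cut-in and rated speed)."""
    return (v / v_ref) ** 3

# A 10% drop in wind speed cuts power by about 27% ...
print(round(1 - wind_power_ratio(9.0, 10.0), 3))   # 0.271
# ... and a 20% drop cuts power by about 49%.
print(round(1 - wind_power_ratio(8.0, 10.0), 3))   # 0.488
```

So modest errors in a simulated wind-speed series become much larger errors in simulated power output.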
So who was saying it did? You start out explaining that they subtract a diurnal cycle, so at the least you need to do the same before looking at the autocorrelation of the residuals. What you are currently doing is a straw man.
A very simple approach is to aggregate into 24 hour periods. I have presented data on that basis in this thread: it shows a large disconnect between modelled and real data.
GB National Grid status 24 January 2019 at 11.35.
Demand 45.71 GW
Wind 0.26 GW, or 0.57% of demand. Installed capacity per RenewableUK is “Operational Capacity (MW): 20,687.505”, i.e. 20.68 GW. So all the turbines installed on and off shore managed just over 1% of installed capacity. Wow.
Most of the demand was met by dispatchable energy, including coal at 6.89 GW. Of course the UK Government has plans to shut down all coal plants by 2025. Nuclear provided 6.11 GW, and the Government is struggling to replace our nuclear power stations. So beyond 2025 it would appear that a return to paraffin lighting and wood stoves will be required.
“the UK Government have plans to shut down all coal plants by 2025”
I suspect they will convert them all to burning wood pellets, as they have done at Drax.
I just had a look at the M2 tower data (which you forgot to link in the article).
https://midcdmz.nrel.gov/nwtc_m2/
Here is their wind speed plot for 26/12/18 to 26/1/19 at 80m

Of course it’s auto-correlated. Now subtract that cycle and try again. Also remember that by taking hourly averages you’re reducing the variance and messing with the stats.
You do not say explicitly what data you were using but it seems to be a monthly average of hourly rates as I showed above. Again you are reducing the variance and leaving an auto-correlated cycle.
You need to be as critical (or more so) of your own methods as you are of the papers you wish to challenge (see Feynman).
This is an important question and you may have valid points. You need to apply valid methods to support that. It seems here you are diving straight in with the bias confirmation since you are opposed to wind generation and not seeing the serious flaws in your processing.
Correction: I selected a monthly period, but that plot is for a single day, 26 Dec 2018.
Everything Jacobson writes about wind and solar can be ignored. His models have been thoroughly torn apart by the community of serious modelers.
The GE/NREL model studied instantaneous penetration, not annual energy penetration.
A system that can only handle 20% of power from wind and solar at a given moment (instantaneous penetration), will get something less than 10% of energy from those resources over the course of the year (annual energy penetration).
Transmission planners talk about the first type of penetration. Politicians and activists talk about the second. Because they rarely speak to each other, they use the same word to mean two very different things.
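One way to see the gap between the two definitions is a toy model in which wind output is curtailed whenever it exceeds a fixed instantaneous cap. Everything here (the load shape, the wind distribution) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
hours = 8760

# A simple daily load cycle around a mean of 100 (arbitrary units).
load = 100.0 + 20.0 * np.sin(2.0 * np.pi * np.arange(hours) / 24)

# A skewed, highly variable wind output (gamma-distributed, mean 30).
wind = rng.gamma(shape=2.0, scale=15.0, size=hours)

cap = 0.20 * load                    # 20% instantaneous penetration limit
delivered = np.minimum(wind, cap)    # anything above the cap is curtailed

annual_share = delivered.sum() / load.sum()
print(round(annual_share, 3))        # well below the 20% instantaneous cap
```

Because the wind is often below the cap and is curtailed whenever it is above it, the annual energy share always comes in below the instantaneous limit.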
An interesting question: can anyone here think of a climatological time-series that is not auto-correlated?
Our local Melbourne TV invested 15 minutes to show parents how to make schoolday lunches for their kids.
In the good old days they would include some kids in the show to road test the theoretical advice.
Not these days, when advice from authority reigns.
Even more interesting would be footage showing pupil after pupil throwing lunches into bins, unopened and untasted, as tuck shop volunteers see daily.
I could not resist this quick mention of how domination by authority – even by self-appointed authority – is eating away at the order of society.
In just the way that leads to multiple yellow vests on French streets. Geoff.
From the ‘Monte Carlo’ paper:
“Wind speed realizations are produced using an algorithm that includes treatments of the temporal and geographic correlations and any diurnal character present in the wind speed dataset.”
I don’t think this analysis’s representation of the paper’s methods matches the description in the paper.
In more detail:
“Realizations of the random variable are generated using the Cholesky decomposition of the correlation matrix built from site-specific mean daily wind speed data. This approach maintains any geographical correlations due to proximity or weather phenomena that are present in the wind dataset. Diurnal character is imposed by a function, αi(t), which gives the monthly-averaged ratio of the wind speed in the hour of the day corresponding to the tth time step to the mean daily wind speed. Hourly wind speeds, vi(t), are generated from the mean daily wind speed realizations and αi(t), as well as a term that considers the deviation from αi(t − 1) in the prior time step to preserve temporal correlations, and a random variable term”
Using the Cholesky decomposition of the spatial correlation matrix simply ensures that the spatial correlations of the generated data match the real-world correlation matrix over whatever limited time span of data it was estimated from; however, it uses random numbers to create the generated daily data with no temporal correlation. Note that the spatial correlation matrix is based on simultaneous spatial data, and thus does not capture the effect of weather systems moving from one place to the next – a temporal correlation effect.
Intra day data is derived by some curve fitting that is a bit like fitting splines, but allows some variation around the curve, while giving some appearance of continuity. It simply implies that if the previous day had a high wind total, then it will be assumed to be windier than average at 1 a.m. on the current day, with the wind dying down over the day to match the random number for the current day. Much depends on the assumed windiness at midnight on an average day: if this is low, then the discontinuity between a windy random number and a low wind random number will appear much less, and the diurnal pattern dominates the generated data at the intra day level.
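For readers who want to see the mechanics, here is a sketch of the Cholesky step described above. The 3-site correlation matrix is invented for illustration; the point is that the spatial correlation is reproduced while day-to-day memory is absent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed spatial correlation between three nearby sites (illustrative numbers).
corr = np.array([
    [1.0, 0.8, 0.5],
    [0.8, 1.0, 0.6],
    [0.5, 0.6, 1.0],
])
L = np.linalg.cholesky(corr)

# Each day is an independent draw: the spatial correlation is reproduced,
# but day t has no memory of day t-1 (no weather systems moving through).
days = 5000
z = rng.standard_normal((days, 3))
daily = z @ L.T                      # correlated across sites, iid across days

sample_corr = np.corrcoef(daily, rowvar=False)
print(np.round(sample_corr, 2))      # close to the target matrix

lag1 = np.corrcoef(daily[:-1, 0], daily[1:, 0])[0, 1]
print(round(lag1, 2))                # near zero: no temporal correlation
```

The generated data pass the spatial-correlation check while failing precisely the temporal-persistence behaviour that matters for storage and backup sizing.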
The random walk of the daily data will not emulate real world behaviour: seasonal variations, and periods of lull or protracted storms, or seasons or years with well below or above average levels of windiness are not properly modelled by these methods. It is precisely these anomalies that give rise to real difficulties in modelling high renewables scenarios, and to grossly inadequate estimates of storage or backup requirements.
There is no adequate substitute for collecting long runs (many years) of real data, and scaling them for the assumed installed capacity to begin to give a flavour of how these systems perform in practice.