An animated analysis of the IPCC AR5 graph shows 'IPCC analysis methodology and computer models are seriously flawed'

This post made me think of this poem, The Arrow and the Song. The arrows are the forecasts, and the song is the IPCC report – Anthony

I shot an arrow into the air,

It fell to earth, I knew not where;

For, so swiftly it flew, the sight

Could not follow it in its flight.

I breathed a song into the air,

It fell to earth, I knew not where;

For who has sight so keen and strong,

That it can follow the flight of song?

– Henry Wadsworth Longfellow

Guest Post by Ira Glickstein.

The animated graphic is based on Figure 1-4 from the recently leaked IPCC AR5 draft document. This one chart is all we need to prove, without a doubt, that IPCC analysis methodology and computer models are seriously flawed. They have way over-estimated the extent of Global Warming ever since the IPCC first started issuing Assessment Reports in 1990, continuing through the fourth report issued in 2007.

When actual observations over a period of up to 22 years substantially contradict predictions based on a given climate theory, that theory must be greatly modified or completely discarded.

IPCC AR5 draft Figure 1-4 with animated central Global Warming predictions from FAR (1990), SAR (1996), TAR (2001), and AR4 (2007).

IPCC SHOT FOUR “ARROWS” – ALL HIT WAY TOO HIGH FOR 2012

The animation shows arrows representing the central estimates of how much the IPCC officially predicted the Earth surface temperature “anomaly” would increase from 1990 to 2012. The estimates are from the First Assessment Report (FAR-1990), the Second (SAR-1996), the Third (TAR-2001), and the Fourth (AR4-2007). Each arrow is aimed at the center of its corresponding colored “whisker” at the right edge of the base figure.

The circle at the tail of each arrow indicates the Global temperature in the year the given assessment report was issued. The first head on each arrow represents the central IPCC prediction for 2012. They all mispredict warming from 1990 to 2012 by a factor of two to three. The dashed line and second arrow head represents the central IPCC predictions for 2015.

Actual Global Warming from 1990 to 2012 (indicated by black bars in the base graphic) varies from year to year. However, net warming between 1990 and 2012 is in the range of 0.12 to 0.16˚C (indicated by the black arrow in the animation). The central predictions from the four reports (indicated by the colored arrows in the animation) range from 0.3˚C to 0.5˚C, about two to five times the actual measured net warming.

The colored bands in the base IPCC graphic indicate the 90% range of uncertainty above and below the central predictions calculated by the IPCC when they issued the assessment reports. 90% certainty means there is only one chance in ten the actual observations will fall outside the colored bands.

The IPCC has issued four reports, so, given 90% certainty for each report, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.
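As a quick check on the arithmetic above, here is a minimal Python sketch. The 90% figure comes from the post; treating the four misses as statistically independent is the assumption at issue in several of the comments further down.

```python
# Minimal sketch of the arithmetic above. Each report's 90% band is taken to
# have a 10% chance of missing the observations; IF the four misses were
# statistically independent, the joint probability would be 0.1**4.
p_miss_one_report = 1.0 - 0.90   # 10% chance a single report's band misses
n_reports = 4                    # FAR, SAR, TAR, AR4
p_all_four_miss = p_miss_one_report ** n_reports
print(f"P(all four miss, assuming independence) = {p_all_four_miss:.6f}")
# -> 0.000100, i.e. one chance in 10,000. If the reports share a common
# systematic error, the misses are correlated and the true probability is
# larger (see the "prosecutor's fallacy" discussion in the comments below).
```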

Thus, the IPCC predictions of warming to 2012 are high by a factor of several compared with what was actually observed! Although the analysts and modelers claimed their predictions were 90% certain, it is now clear they fell far short of that mark with each and every prediction.

IPCC PREDICTIONS FOR 2015 – AND IRA’S

The colored bands extend to 2015 as do the central prediction arrows in the animation. The arrow heads at the ends of the dashed portion indicate IPCC central predictions for the Global temperature “anomaly” for 2015. My black arrow, from the actual 1990 Global temperature “anomaly” to the actual 2012 temperature “anomaly” also extends out to 2015, and let that be my prediction for 2015:

  • IPCC FAR Prediction for 2015: 0.88˚C (1.2 to 0.56)
  • IPCC SAR Prediction for 2015: 0.64˚C (0.75 to 0.52)
  • IPCC TAR Prediction for 2015: 0.77˚C (0.98 to 0.55)
  • IPCC AR4 Prediction for 2015: 0.79˚C (0.96 to 0.61)
  • Ira Glickstein’s Central Prediction for 2015: 0.46˚C

Please note that the temperature “anomaly” for 1990 is 0.28˚C, so that amount must be subtracted from the above estimates to calculate the amount of warming predicted for the period from 1990 to 2015.
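For readers who want to redo that subtraction, here is a minimal Python sketch using the central 2015 values listed above; the 0.28˚C baseline is from the post and the labels are just for display.

```python
# Subtracting the 1990 anomaly (0.28 C, per the post) from each central 2015
# prediction listed above gives the implied warming over 1990-2015.
anomaly_1990 = 0.28
central_2015 = {
    "FAR (1990)": 0.88,
    "SAR (1996)": 0.64,
    "TAR (2001)": 0.77,
    "AR4 (2007)": 0.79,
    "Ira Glickstein": 0.46,
}
for source, anomaly in central_2015.items():
    print(f"{source}: predicted 1990-2015 warming = {anomaly - anomaly_1990:.2f} C")
```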

IF THEORY DIFFERS FROM OBSERVATIONS, THE THEORY IS WRONG

As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!

Global temperature observations over the more than two decades since the First IPCC Assessment Report demonstrate that the IPCC climate theory, and models based on that theory, are wrong. Therefore, they must be greatly modified or completely discarded. Looking at the scattershot “arrows” in the graphic, the IPCC has neither learned much from its misguided theories and flawed models nor improved them over the past two decades, so I cannot hold out much hope for the final version of Assessment Report #5 (AR5).

Keep in mind that the final AR5 is scheduled to be issued in 2013. It is uncertain if Figure 1-4, the most honest IPCC effort of which I am aware, will survive through the final cut. We shall see.

Ira Glickstein

Comments

Andyj
December 20, 2012 4:25 am

Camburn,
Maybe you are missing something about LazyTeenager.
The arrows do fly straight and true. Published and predicted. Between the date a prediction is released based on theory and the observations that test it, only one straight line matters: the temperature line. The most recent release is the most ridiculous.

Bill Illis
December 20, 2012 4:55 am

One can also add the IPCC AR5 multi-model means to the projections. They would have had access to temperatures up until 2010, so that is when the projections start. AR5 is almost the same as AR4; there is very little difference.
The Climate Explorer has recently added a nice summary download page for the AR5 multi-model means. I use the RCP 6.0 scenario, which is the most realistic in terms of where we are going with GHGs. Be sure to set the base period to 1961 to 1990 in order to be able to compare to HadCRUT temperatures, for example (everyone is using different base periods now, so one has to be careful that they are all comparable – someone post this comment over at Skeptical Science since they do not seem to get this idea).
http://climexp.knmi.nl/cmip5_indices.cgi?id=someone@somewhere
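A minimal sketch of the re-baselining step Bill Illis describes, in Python; the series below is made up purely for illustration and is not an actual Climate Explorer download.

```python
import numpy as np

# Hypothetical annual anomaly series (degrees C) standing in for a Climate
# Explorer multi-model mean; the numbers are made up for illustration only.
years = np.arange(1950, 2016)
model_anomaly = 0.012 * (years - 1950) + 0.1 * np.sin(years / 5.0)

# Re-baseline to the 1961-1990 mean so the series is directly comparable to
# HadCRUT-style anomalies, which are reported relative to 1961-1990.
in_base_period = (years >= 1961) & (years <= 1990)
rebaselined = model_anomaly - model_anomaly[in_base_period].mean()

print(f"mean over 1961-1990 after re-baselining: {rebaselined[in_base_period].mean():.3f} C")
```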

Bill Illis
December 20, 2012 4:58 am

Sorry, I should have added that the Climate Explorer’s dataset starts in 1860 (when I think it is actually 1861 – there is a small bug somewhere – just move forward one year).

Frank K.
December 20, 2012 5:50 am

E.M.Smith says:
December 19, 2012 at 9:04 pm
They are about to miss even more (further?)
http://rt.com/news/russia-freeze-cold-temperature-379/
Hi E.M. Smith – I also pointed to this story yesterday in another thread (I saw the story first at Instapundit).
What I find interesting is that CAGW devotees appear to believe that the mean temperature of the Earth is slowly increasing over time, which can be expressed simply as:
T_earth(t) = T_cagw(t) + T_stf(t)
where t is time, T_cagw(t) is the slow increase in mean temperature due to “global warming”, with a time scale on the order of multiple decades, and T_stf(t) represents “short term fluctuations” due to ENSO, volcanoes, weather “noise”, and other natural variations. What I don’t understand is this: if multidecade-scale “global warming,” as expressed above, really exists, we should NOT be breaking low-temperature records established many decades ago across broad regions like Russia. It will be interesting to see if more low-temperature records are broken as we move into winter 2013…
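Frank K.’s decomposition can be written out as a toy calculation; every number below is invented purely to illustrate the trend-plus-fluctuation split and is not taken from any dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1950, 2013)                       # years

# Toy version of the decomposition above; all magnitudes are invented.
T_cagw = 0.015 * (t - 1950)                     # slow multidecadal warming term
T_stf = rng.normal(0.0, 0.25, size=t.size)      # short-term fluctuations (ENSO, volcanoes, weather)
T_earth = T_cagw + T_stf

# Whether old cold records can still be broken depends on how large T_stf is
# relative to the accumulated T_cagw; regionally (e.g. a Russian winter) the
# fluctuation term is far larger than this global toy value.
print("coldest simulated year:", int(t[np.argmin(T_earth)]))
```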

lemiere jacques
December 20, 2012 5:52 am

Well, I am not sure the debate is framed right; I am more interested in comparing the shape of the curve.
Clearly no single model is able to fit the data.
Who can explain why they use so many models? What is the meaning of that? Why is it called uncertainty?

TLM
December 20, 2012 6:48 am

The last entry for the “Observed” data set is 2011, not 2012. Also, the graph does not say which data set “Observed” is. I suspect HadCRUT3 or 4, as the HadCRUT set has been their preferred one for all previous reports.
Data for the year so far suggest that 2012 will be warmer than 2011 but actually only about the same as 2009. That means the two dots will be at the bottom end of the green shaded area (TAR), and the upper end of their error bars is likely to sneak into the orange AR4 range. Of course the IPCC will say that, because the single Observed data point for 2012 could fall within the bottom of the AR4 predicted range, it is “consistent” with their forecast. Of course they will ignore the fact that the trend in the data is clearly flat compared with the predicted upward trend.
That will, of course, not stop Tamino claiming that he has “pre-bunked” this argument by removing the effect of the dominant La Niña during the period and then stating that the climate would have warmed. That translates to me as “if the climate had not cooled then it would be warmer than it is now”. The problem for Tamino is that ENSO is not a “cycle” where the warm and cool spells cancel out; it is a random fluctuation and can have a negative or positive trend of its own. Just because ENSO has biased cool in the last 10 years does not mean that it will bias warm to an equal extent in the future and that temperatures will somehow “catch up” through a series of El Niños. They might, or they might not; it is a random fluctuation, and it will now take a series of quite monster El Niños to cancel out the last few La Niñas.
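TLM’s point that a random ENSO contribution need not “cancel out” over a decade can be illustrated with a toy Monte Carlo; the 0.1˚C fluctuation size below is an arbitrary assumption, not an estimate of real ENSO amplitude.

```python
import numpy as np

rng = np.random.default_rng(1)

# Treat the ENSO contribution to each year's global anomaly as a random
# fluctuation (0.1 C standard deviation, an arbitrary choice) and look at the
# linear trend it induces over many simulated 10-year windows.
n_trials, n_years = 10_000, 10
enso = rng.normal(0.0, 0.1, size=(n_trials, n_years))
slopes = np.polyfit(np.arange(n_years), enso.T, 1)[0]   # C per year, one per trial

print("fraction of decades where ENSO alone gives a cooling trend:",
      round((slopes < 0).mean(), 3))
# Roughly half: a random fluctuation imposes no obligation to "catch up" later.
```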

Andy W
December 20, 2012 6:49 am

So Lazy, are you going to try and show us how the models actually got it right? I’d love to see the twisted mathematics you’re going to employ to convince us. Perhaps you could use Hansen’s A, B, and C scenarios he once touted 😉
As others have commented here, we should be looking at the BAU predictions the models have made as that is the scenario we are currently living in (in fact, I believe our evil ‘SeeOhToo’ emissions are higher than the BAU scenarios). I’d REALLY love to see you try and reconcile those predictions with the real-world temps!
Over to you Lazy…

December 20, 2012 6:59 am

fhhaynie
You said: “That still would not explain a probable future downward trend in global temperature.”
As you know, there is no forecast of a near-term downturn in temperature in the purview of mainstream science. Certainly I don’t know of such a forecast, and I am therefore confident that many others who read your contribution will likewise be unaware.
I went to your website and found nothing that led me to judge that such a decline is likely.
The great thing about WUWT’s (specifically Anthony Watts’s) determined light-moderation stance is that, within reason, everybody has a chance to have their say. The heretic, the dissenter, the lone true voice in the crowd, the voice of orthodoxy, the honestly mistaken and the outright crackpot all get heard.
It’s embarrassing to have crackpots interjecting in a discussion. It would be even more embarrassing to exclude honest, possibly even correct viewpoints by wrongly judging them to be crackpot.
With respect, no matter how correct you might actually be, when you allude to a forecast not supported by conventional science, if you don’t give a citation then the reader has little choice but to include you among the crackpots. From visiting your blog, this would be an unfair characterisation of you.
I therefore ask you to always include a citation to your calculations about your expected temperature decline with every post you make that alludes to it, no matter how much you feel we ‘ought’ to know it.
Sincerely,
Leo Morgan

Reply to  Leo Morgan
December 20, 2012 7:42 am

Leo,
That probable downturn may not occur in my lifetime, but it will happen. We will have another ice age. Also, consider the probability, on a short-term basis, that the last sixteen years of no temperature rise is the top of a temperature cycle that is following a 200-year cycle of solar activity. Time will tell and reveal the true crackpots.

RACookPE1978
Editor
December 20, 2012 7:18 am

Andy W says:
December 20, 2012 at 6:49 am
(replying to LazyTeenager)
So Lazy, are you going to try and show us how the models actually got it right? I’d love to see the twisted mathematics you’re going to employ to convince us.

I think you have that wrong. I really don’t even care anymore “how” his precious models may have accidentally got it right.
Your question actually needs to be: “So Lazy, are you going to try and show us which of the models actually got it right?”
See, we still have not seen ANY of the 23-some-odd “officially acceptable models” actually produce even ONE single model run (of the many thousands they supposedly average to get their results) that has “reproduced reality” and predicts/projects/outputs/calculates ANY single 16-year steady temperature period during ANY part of the 225 years between 1975 and 2200.
It’s not that the “CAGW modelers” need to produce hundreds (or thousands) of model runs that lie right down the middle of the real-world temperatures: clearly there are error bands, and the global circulation models will be slightly different each run. Nobody anywhere questions that.
They cannot even produce ONE run of ONE model that fits inside the error band of ONE standard deviation.
But for the IPCC to claim “certainty” (more than 3 standard deviations; of what outputs? from what sample set? using what “data”?) that their GCM models are correct 100 years in the future, when not even ONE result of 23 models x 1000 runs/model falls inside the 16 years of real-world measurements between 1996 and 2012, is ludicrous!
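The test RACookPE1978 describes (does a run contain any 16-year stretch with an essentially flat trend?) could be coded roughly as follows; the “run” below is synthetic, and the flatness threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a single model run over 1975-2200: a steady warming
# trend plus year-to-year noise (both magnitudes invented).
years = np.arange(1975, 2201)
run = 0.02 * (years - 1975) + rng.normal(0.0, 0.12, size=years.size)

window = 16              # length of the "steady temperature" period in question
flat_threshold = 0.005   # C per year; arbitrary definition of "flat"

flat_starts = []
for i in range(years.size - window + 1):
    slope = np.polyfit(np.arange(window), run[i:i + window], 1)[0]
    if abs(slope) < flat_threshold:
        flat_starts.append(int(years[i]))

print(f"{len(flat_starts)} sixteen-year windows with |trend| < {flat_threshold} C/yr")
```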

December 20, 2012 7:24 am

Beth Cooper says: November 5, 2011 at 11:16 pm
Oh the rate of warmin’s slowin’
And the skepticism’s growin’
And the snow it keeps on snowin’
And the data it is showin’
Which way the wind is blowin….

G. Karst
December 20, 2012 7:53 am

E.M.Smith says:
December 19, 2012 at 9:04 pm
They are about to miss even more (further?)
http://rt.com/news/russia-freeze-cold-temperature-379/
Russia is enduring its harshest winter in over 70 years, with temperatures plunging as low as -50 degrees Celsius. Dozens of people have already died, and almost 150 have been hospitalized.
The country has not witnessed such a long cold spell since 1938, meteorologists said, with temperatures 10 to 15 degrees lower than the seasonal norm all over Russia.

It only makes logical sense: most of the world’s warming happened in the northern latitudes, so it shouldn’t be a surprise when cooling is realized in this same locale. Unfortunately, these same areas are the global breadbaskets. GK

December 20, 2012 8:04 am

Lance Wallace says:
December 20, 2012 at 1:59 am
Ira– As Tokyoboy (9 PM above) and Roger Knights (9:43) point out, picking the middle point of each set of IPCC projections is not correct. . . .

To which Ira responded:

[Lance Wallace, Tokyoboy, and Roger Knights: Of course you are correct that, had I chosen the “business as usual” scenario predictions which correspond to the actual rise in CO2, my animated arrows would have had a higher slope and the separation of the IPCC from reality would have been greater. I used the central IPCC predictions (which correspond to the centers of the colored “whiskers” at the right of the chart) to avoid being accused of “cherry picking”. In other words, if the IPCC is off the mark based on my central predictions, they would have been even more off the mark had I used “business as usual”. Ira]

However, Lance Wallace mis-reported what my criticism was, which was quite different and which must be addressed:

Roger Knights says:
December 19, 2012 at 11:02 pm
There’s an error in the chart. The oval labeled “2012″ should read “2011,” and the heading “1990 to 2012″ should read “1990 thru 2011″. The last year, shown by vertical bars or dots on the chart, is 2011, not 2012. (2012 will be somewhere between 2010 and 2011.)

[Roger Knights: Thanks, you are correct about the oval. I should have moved it and the arrow heads to the right by one year. Please see my embedded reply to Werner Brozek (December 19, 2012 at 8:43 pm) that I used 2012 instead of 2011 “… with the hope that, when the official AR5 is released in 2013, they will include an updated version of this Figure 1-4 with 2012 observed data. Please notice that I drew my black arrow through the higher of the two black temperature observations for 2011, which kind of allows for 2012 being a bit warmer than 2011. – Ira]

TimC
December 20, 2012 8:44 am

Dr Glickstein – many thanks for your comment immediately following mine above at 7:35 pm.
I quite agree that if you take truly random, independent events such as throwing dice, the probability of throwing a given number N times in a row will be 1/(6^N).
However, what I have problems with is where you say “If a prediction based on a given theory and associated computer model is supposed to be 90% certain, the probability it is wrong is one in ten. If the same theory and computer model is run again several years later, the chance that both are wrong is one in ten times ten …”.
The same theory and model imply the same result if you use the same starting and boundary conditions. Even with different starting conditions, I don’t think you can regard any two runs as truly independent – so I personally have doubts that the probabilities can simply be multiplied in the way you suggest (one in ten times ten, etc.).
But I will be happy to be corrected, if my grasp of probability theory here is wrong …!

mpainter
December 20, 2012 9:08 am

Don’t anyone hold their breath, waiting for LT to respond. He never does. His strength is that he doesn’t mind being wrong. Nonetheless, he serves a good purpose in parroting the dubious scientists who brought us AGW, and so exposing their dubious science to public inspection.

mikerossander
December 20, 2012 11:40 am

Tim’s critique about the “prosecutor’s fallacy” (Dec 19 7:35 pm) is correct (and the rebuttal unfortunately is not). Four incorrect predictions, each with 90% confidence (and therefore a 10% chance of being wrong), do not lead to a 1 in 10,000 chance of all four being wrong. The fallacy is that the predictions are not independent events – that is, they are not separate throws of the dice.
If, for example, the 10% uncertainty includes some component of systemic error and that systemic error is propagated through all four trials, the calculated error considering all four trials may still be as high as the original 10%.
To go back to the rebuttal’s dice example, there is a one in six chance of rolling a “1” and a one in 6^4 chance of rolling four “1”s in a row if you have no prior knowledge or reason to suspect that the dice are unevenly balanced. Once you have four “1”s in a row, you have competing hypotheses, however – a) that you’re really unlucky or b) that the dice are skewed. Now you need to assess the probability of systemic error and recalculate. That is, given that you know that trial A was exceeded, what is the probability that trial B will be exceeded.
Unless you pick the extremes of either 0 or 100% component of skewing, the final properly-multiplied error of all four reports considered as a unit will be less than one in ten but substantially greater than one in ten thousand.
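mikerossander’s argument about shared (systemic) error can be made concrete with a small Monte Carlo; the split between shared and independent error components below is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims = 200_000
z90 = 1.645   # |error| beyond this falls outside a symmetric 90% band

# Each report's (standardized) error = shared systematic part + independent part.
# shared_frac is the fraction of the error variance common to all four reports.
for shared_frac in (0.0, 0.5, 0.9):
    shared = rng.normal(0.0, np.sqrt(shared_frac), size=n_sims)
    independent = rng.normal(0.0, np.sqrt(1.0 - shared_frac), size=(n_sims, 4))
    errors = shared[:, None] + independent           # unit total variance per report
    p_all_miss = (np.abs(errors) > z90).all(axis=1).mean()
    print(f"shared fraction {shared_frac:.1f}: P(all four bands miss) ~ {p_all_miss:.5f}")
# With no shared error this is ~1e-4; with a large shared component it is
# orders of magnitude bigger, which is mikerossander's point.
```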

Andy W
December 20, 2012 12:50 pm

RACookPE1978 says:
December 20, 2012 at 7:18
You’re absolutely right, RACookPE1978. No matter how many times they run the models, the results are always duff.
We still haven’t heard from LT 🙂

December 20, 2012 1:51 pm

Let’s be generous and say SAR got it right. Doesn’t that still mean the GCMs that produce high forecasts have been proven inappropriate? Doesn’t all this still mean that the “C” part of CAGW has fallen off the table?
Even if the aerosol component in the prior GCMs is considered wrong, to account for the discrepancy, doesn’t this mean that the science is not settled?
Connolly says the SAR, at least, is correct, but doesn’t concede that the Catastrophic part has been invalidated by time.

December 20, 2012 2:17 pm

Gunga Din says:
December 19, 2012 at 7:21 pm
“Have any warmests ever admitted even that, that the models need improvement? Let alone admit they’ve been just plain wrong? Yet they still insist we take immediate action based on the past flawed models.”
Well, NASA doesn’t admit that they are wrong, just that some of the answers weren’t right 🙂
http://icp.giss.nasa.gov/research/ppa/2002/mcgraw/
http://icp.giss.nasa.gov/research/ppa/2001/mconk/

pete
December 20, 2012 2:30 pm

RACookPE1978 almost gets to the issue.
For any of these projections to be valid, they need not only to reproduce the forward temperature but also to get the components of the projection correct. If they get the temperature correct but CO2, water vapour, ENSO, clouds, aerosols, TSI etc. are wrong, then the model isn’t correct at all; it got the temperature correct by pure chance. You can do this with virtually any ensemble of models you like.
So when the IPCC puts together these ensembles they are trying to hide the fact that their underlying models have zero predictive power from the get go. Not only do they not have a single model that can be run and produce any kind of predictive output, they don’t have a single model that can be run to get even a hindcast of temperature correct with all of the underlying variables also being correct.
The temperature analysis here is a good starting point, but if it is also taken to a component analysis of the models then it will be quickly shown that they are rubbish.

Lance Wallace
December 20, 2012 6:21 pm

Ira, I think you are still not grappling with the main point here. The point, as rgb and many others have said, is that this graph is NOT showing a range of predictions with a “best” value somewhere in the middle and uncertainties around the best value shown by colored bands. That is what the IPCC wants people to think! When you accept that, as you implicitly do by picking the central estimate, you are now open to the IPCC response (e.g., see Connelly) that at least the actual values are within the uncertainty. But these values are not even close to the uncertainty if you use reasonable uncertainty values enclosing the ACTUAL SCENARIO that ensued following the IPCC projection. That is, one would see four lines (probably lying close to the upper boundaries of each band of colors), with NARROW bands associated with each line, and the measured temperatures would lie far outside those narrow bands. This would give the IPCC no wiggle room.
Roger Knights, I did not “mis-report” what you said. I quoted your response and gave the time of 9:43 PM. That post of yours simply quoted Tokyoboy and said “it would be a nice addition.” You made two posts and it is the second one you are thinking of.

Gail Combs
December 20, 2012 7:28 pm

pete says: December 20, 2012 at 2:30 pm
…The temperature analysis here is a good starting point, but if it is also taken to a component analysis of the models then it will be quickly shown that they are rubbish.
____________________________________
Yes, this chart alone shows the premise upon which the models are built is rubbish. They put in airplane contrails, but they ignore clouds, and water vapour is bundled in with CO2 as a “feedback”!
And it is not like they do not have any real world data either.

Parameterization of atmospheric long-wave emissivity in a mountainous site for all sky conditions
J. Herrero and M. J. Polo
Received: 14 February 2012 – Accepted: 11 March 2012 – Published: 21 March 2012
ABSTRACT
Long-wave radiation is an important component of the energy balance of the Earth’s surface. The downward component, emitted by the clouds and aerosols in the atmosphere, is rarely measured, and is still not well understood. In mountainous areas, the models existing for its estimation through the emissivity of the atmosphere do not give good results, and worse still in the presence of clouds….. This study analyzes separately three significant atmospheric states related to cloud cover, which were also deduced from the screen-level meteorological data. Clear and totally overcast skies are accurately represented by the new parametric expressions, while the intermediate situations corresponding to partly clouded skies, concentrate most of the dispersion in the measurements and, hence, the error in the simulation. Thus, the modeling of atmospheric emissivity is greatly improved thanks to the use of different equations for each atmospheric state.
——–
Introduction Long-wave radiation has an outstanding role in most of the environmental processes that take place near the Earth’s surface (e.g., Philipona, 2004). Radiation exchanges at wavelengths longer than 4 μm between the Earth and the atmosphere above are due to the thermal emissivity of the surface and atmospheric objects, typically clouds, water vapor and carbon dioxide. This component of the radiation balance is responsible for the cooling of the Earth’s surface, as it closely equals the shortwave radiation absorbed from the sun. The modeling of the energy balance, and, hence, of the long-wave radiation balance at the surface, is necessary for many different meteorological and hydrological problems, e.g., forecast of frost and fog, estimation of heat budget from the sea (Dera, 1992), simulation of evaporation from soil and canopy, or simulation of the ice and snow cover melt (Armstrong and Brun, 2008)….
Downward long-wave radiation is difficult to calculate with analytical methods, as they require detailed measurements of the atmospheric profiles of temperature, humidity, pressure, and the radiative properties of atmospheric constituents (Alados et al., 1986; Lhomme et al., 2007). To overcome this problem, atmospheric emissivity and temperature profile are usually parameterized from screen level values of meteorological variables. The use of near surface level data is justified since most incoming long-wave radiation comes from the lowest layers of the atmosphere (Ohmura, 2001).
…the effect of clouds and stratification on atmospheric emissivity is highly dependent on regional factors which may lead to the need for local expressions (e.g., Alados et al., 1986; Barbaro et al., 2010)… on environmental processes, especially if snow is present. As existing measurements are scarce (e.g., Iziomon et al., 2003; Sicart et al., 2006), a correct parameterization of downward long-wave irradiance under all sky conditions is essential for these areas….
Conclusions
The long-wave measurements recorded in a weather station at an altitude of 2500 m in a Mediterranean climate are not correctly estimated by the existing models and frequently used parameterizations. These measurements show a very low atmospheric emissivity for long-wave radiation values with clear skies (up to 0.5) and a great facility for reaching the theoretical maximum value of 1 with cloudy skies.….

unha
December 20, 2012 7:39 pm

The problem with any model is as follows: with every iteration, the error tends to grow. When one runs a model through thousands of iterations, errors accumulate.
Simply put: when my model makes a 90%-good prediction of the temperature for day one, what will it do for day two, assuming the same skill of the model? 0.9*0.9?
And on day three? 0.9*0.9*0.9?
Anyone tried 0.9^100?
It is about 2.7 x 10^-5.
And the models do much more than cycle through 100 iterations.
Please, do not consider models as if they were experiments. They are not. Discard models.
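For what it is worth, the arithmetic in this comment checks out; here is the calculation, keeping in mind that multiplicative compounding of per-step skill is the commenter’s assumption, not a property of any particular model.

```python
# Per-step "skill" of 0.9, compounded multiplicatively over n steps
# (the commenter's assumption about how model error accumulates).
skill = 0.9
for n in (1, 2, 3, 100):
    print(f"0.9**{n} = {skill ** n:.3g}")
# 0.9**100 is about 2.66e-05, i.e. roughly 27 chances in a million.
```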

mpainter
December 20, 2012 8:55 pm

Ira Glickstein, PhD says:
December 20, 2012 at 7:38 pm

The very first sentence of Chapter 1 of the leaked AR5 says:
Since the fourth Assessment Report (AR4) of the IPCC, the scientific knowledge derived from observations, theoretical evidence, and modelling studies has continued to increase and to further strengthen the basis for HUMAN ACTIVITIES being the PRIMARY driver in climate change. At the same time, the capabilities of the observational and modelling tools have continued to improve. [EMPHASIS mine]
====================================
It looks as though they intend to brazen it out. Is any more proof needed that the IPCC reports are the vehicle of a particularist agenda?

davidmhoffer
December 20, 2012 8:56 pm

Ira Glickstein;
It seems to me that the opposite conclusion has been increased and strengthened, namely that the IPCC-supported Climate Theory and models derived from that theory were wrong to start with (like “loaded” dice) and, after four tries, are still wrong.
>>>>>>>>>>>>>>>>>>
Ch11 of AR5 is about the models and shorter term (a few decades) predictions. There’s a section on initialization as a technique to make the models more accurate, in which they make the most astounding (to me anyway) statement:
************
“While there is high agreement that the initialization consistently improves several aspects of climate (like North Atlantic SST with more than 75% of the models agreeing on the improvement signal), there is also high agreement that it can consistently degrade others (like the equatorial Pacific temperatures).”
************
How much more obvious could it be? They adjust to make one part more accurate and it makes another part worse. They don’t even seem to consider that this is an indication that the models contain one or more fatal flaws which render them incapable of producing an accurate result. It is direct evidence that the things the model gets right, it gets right for the wrong reasons.