This post made me think of this poem, The Arrow and the Song. The arrows are the forecasts, and the song is the IPCC report – Anthony
I shot an arrow into the air,
It fell to earth, I knew not where;
For, so swiftly it flew, the sight
Could not follow it in its flight.
I breathed a song into the air,
It fell to earth, I knew not where;
For who has sight so keen and strong,
That it can follow the flight of song?
– Henry Wadsworth Longfellow
Guest Post by Ira Glickstein.
The animated graphic is based on Figure 1-4 from the recently leaked IPCC AR5 draft document. This one chart is all we need to prove, without a doubt, that IPCC analysis methodology and computer models are seriously flawed. They have way over-estimated the extent of Global Warming since the IPCC first started issuing Assessment Reports in 1990, and continuing through the fourth report issued in 2007.
When actual observations over a period of up to 22 years substantially contradict predictions based on a given climate theory, that theory must be greatly modified or completely discarded.

IPCC SHOT FOUR “ARROWS” – ALL HIT WAY TOO HIGH FOR 2012
The animation shows arrows representing the central estimates of how much the IPCC officially predicted the Earth surface temperature “anomaly” would increase from 1990 to 2012. The estimates are from the First Assessment Report (FAR-1990), the Second (SAR-1996), the Third (TAR-2001), and the Fourth (AR4-2007). Each arrow is aimed at the center of its corresponding colored “whisker” at the right edge of the base figure.
The circle at the tail of each arrow indicates the Global temperature in the year the given assessment report was issued. The first head on each arrow represents the central IPCC prediction for 2012. They all over-predict warming from 1990 to 2012 by a factor of roughly two to four. The dashed line and second arrow head represent the central IPCC predictions for 2015.
Actual Global Warming from 1990 to 2012 (indicated by black bars in the base graphic) varies from year to year. However, net warming between 1990 and 2012 is in the range of 0.12 to 0.16˚C (indicated by the black arrow in the animation). The central predictions from the four reports (indicated by the colored arrows in the animation) range from 0.3˚C to 0.5˚C, which is roughly two to four times greater than actual measured net warming.
The colored bands in the base IPCC graphic indicate the 90% range of uncertainty above and below the central predictions calculated by the IPCC when they issued the assessment reports. 90% certainty means there is only one chance in ten the actual observations will fall outside the colored bands.
The IPCC has issued four reports, so, given 90% certainty for each report, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.
Thus, the IPCC predictions for 2012 are high by multiples of what they thought they were predicting! Although the analysts and modelers claimed their predictions were 90% certain, it is now clear they were far from that mark with each and every prediction.
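For readers who want to check the arithmetic, here is a back-of-envelope sketch using only the figures quoted above. Note that the one-in-10,000 figure rests on treating the four reports as independent trials, an assumption challenged in the comments below:

```python
# Quick check of the figures quoted in this post.
predicted = (0.3, 0.5)    # range of central 1990-2012 warming predictions, deg C
observed = (0.12, 0.16)   # actual net 1990-2012 warming, deg C

lo = predicted[0] / observed[1]   # most charitable pairing
hi = predicted[1] / observed[0]   # least charitable pairing
print(f"predictions exceed observations by {lo:.1f}x to {hi:.1f}x")  # 1.9x to 4.2x

# The one-in-10,000 figure assumes four independent 90%-confident predictions:
print(f"naive probability of four misses: {0.1 ** 4:.4f}")  # 0.0001 = 1 in 10,000
```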
IPCC PREDICTIONS FOR 2015 – AND IRA’S
The colored bands extend to 2015, as do the central prediction arrows in the animation. The arrow heads at the ends of the dashed portions indicate the IPCC central predictions for the Global temperature “anomaly” for 2015. My black arrow, from the actual 1990 Global temperature “anomaly” to the actual 2012 temperature “anomaly”, also extends out to 2015; let that be my prediction for 2015:
- IPCC FAR Prediction for 2015: 0.88˚C (1.2 to 0.56)
- IPCC SAR Prediction for 2015: 0.64˚C (0.75 to 0.52)
- IPCC TAR Prediction for 2015: 0.77˚C (0.98 to 0.55)
- IPCC AR4 Prediction for 2015: 0.79˚C (0.96 to 0.61)
- Ira Glickstein’s Central Prediction for 2015: 0.46˚C
Please note that the temperature “anomaly” for 1990 is 0.28˚C, so that amount must be subtracted from the above estimates to calculate the amount of warming predicted for the period from 1990 to 2015.
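Carrying out that subtraction on the central values listed above gives the implied 1990-to-2015 warming for each report (a quick sketch using only the numbers quoted in this post):

```python
# Subtract the 1990 anomaly baseline from each central 2015 prediction
# to get the implied 1990-2015 warming, using the values listed above.
baseline_1990 = 0.28  # observed 1990 anomaly, deg C
predictions_2015 = {"FAR": 0.88, "SAR": 0.64, "TAR": 0.77, "AR4": 0.79, "Ira": 0.46}

for report, anomaly in predictions_2015.items():
    print(f"{report}: implied 1990-2015 warming = {anomaly - baseline_1990:.2f} C")
# FAR: 0.60, SAR: 0.36, TAR: 0.49, AR4: 0.51, Ira: 0.18
```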
IF THEORY DIFFERS FROM OBSERVATIONS, THE THEORY IS WRONG
As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!
Global temperature observations over the more than two decades since the First IPCC Assessment Report demonstrate that the IPCC climate theory, and the models based on that theory, are wrong. Therefore, they must be greatly modified or completely discarded. The scattershot “arrows” in the graphic show that the IPCC has neither learned much from its misguided theories and flawed models nor improved them over the past two decades, so I cannot hold out much hope for the final version of Assessment Report #5 (AR5).
Keep in mind that the final AR5 is scheduled to be issued in 2013. It is uncertain whether Figure 1-4, the most honest IPCC effort of which I am aware, will survive the final cut. We shall see.
Ira Glickstein
Camburn,
Maybe you are missing something about LazyTeenager.
The arrows do fly straight and true, as published and predicted. But between the date a prediction is released, based on theory, and its test against observation, only one straight line matters: the temperature line. The most recent release is the most ridiculous.
One can also add the IPCC AR5 multimodel means to the projections as well. They would have had access to temperatures up until 2010 so that is when the projections start. AR5 is almost the same as AR4, there is very little difference.
The Climate Explorer has recently added a nice summary download page for the AR5 multi-model means. I use the RCP 6.0 scenario, which is the most realistic in terms of where we are going with GHGs. Be sure to set the base period to 1961 to 1990 in order to be able to compare to HadCRUT temperatures, for example. (Everyone is using different base periods now, so one has to be careful that they are all comparable – someone post this comment over at Skeptical Science, since they do not seem to get this idea.)
http://climexp.knmi.nl/cmip5_indices.cgi?id=someone@somewhere
Sorry, I should have added that the Climate Explorer’s dataset starts in 1860 (when I think it is actually 1861 – there is a small bug somewhere – just move forward one year).
E.M.Smith says:
December 19, 2012 at 9:04 pm
They are about to miss even more (further?)
http://rt.com/news/russia-freeze-cold-temperature-379/
Hi E.M. Smith – I also pointed to this story yesterday in another thread (I saw the story first at Instapundit).
What I find interesting is that CAGW devotees appear to believe that the mean temperature of the Earth is slowly increasing over time, which can be expressed simply as:
T_earth(t) = T_cagw(t) + T_stf(t)
where t is time, T_cagw(t) is the slow increase in mean temperature due to “global warming”, with a time scale on the order of multiple decades, and T_stf(t) is the sum of “short term fluctuations” due to ENSO, volcanoes, weather “noise”, and other natural variations. What I don’t understand is this: if multidecade-scale “global warming,” as expressed above, does exist, we should NOT be breaking low-temperature records established many decades ago across broad regions like Russia. It will be interesting to see if more low-temperature records are broken as we move into winter 2013…
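A toy simulation of that decomposition (all numbers illustrative, not fitted to any real data) shows the pattern one would expect if the trend term were real – new record lows become rare once the trend dominates the fluctuations:

```python
import random

# Toy version of T_earth(t) = T_cagw(t) + T_stf(t): a slow linear trend
# plus zero-mean Gaussian short-term fluctuations. Illustrative numbers only.
random.seed(1)
trend_per_year = 0.02   # assumed slow warming, deg C per year
stf_sigma = 0.5         # assumed size of short-term fluctuations, deg C

record_low = float("inf")
for year in range(100):
    t_earth = trend_per_year * year + random.gauss(0.0, stf_sigma)
    if t_earth < record_low:
        record_low = t_earth
        print(f"year {year:3d}: new record low {t_earth:+.2f} C")
# Record lows cluster in the early years; under a genuine multidecadal trend,
# breaking decades-old cold records should become increasingly unlikely.
```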
Well, I am not sure the debate is framed right; I am more interested in comparing the shape of the curve.
Clearly no single model is able to fit the data.
Can anyone explain why they use so many models? What is the meaning of that? And why is it called uncertainty?
The last entry for the “Observed” data set is 2011, not 2012. Also, the graph does not say which data set “Observed” is. I suspect HadCRUT3 or 4, as the HadCRUT set has been their preferred one for all previous reports.
Data for the year so far suggests that 2012 will be warmer than 2011, but actually only about the same as 2009. That means the two dots will be at the bottom end of the green shaded area (TAR), and the upper end of their error bars is likely to sneak into the orange AR4 range. Of course the IPCC will say that, because the single Observed data point for 2012 could have fallen within the bottom of the AR4 predicted range, it is “consistent” with their forecast. And of course they will ignore the fact that the trend in the data is clearly flat compared with the predicted upward trend.
That will, of course, not stop Tamino claiming that he has “pre-bunked” this argument by removing the effect of the dominant La Nina during the period, and then stating that the climate would have warmed. That translates to me as “if the climate had not cooled then it would be warmer than it is now”. The problem for Tamino is that ENSO is not a “cycle” where the warm and cool spells cancel out, it is a random fluctuation and can have a negative or positive trend of its own. Just because ENSO has biased cool in the last 10 years does not mean that it will bias warm to an equal extent in the future and that temperatures will somehow “catch up” through the effect of a series of El Ninos. They might, or they might not, it is a random fluctuation and it will now take a series of quite monster El Ninos to cancel out the last few La Ninas.
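That point about random fluctuations carrying their own trend is easy to illustrate numerically. A minimal sketch (treating each year’s ENSO state as an independent coin flip, which is of course a caricature):

```python
import random

# If each year's ENSO state were an independent random draw, many decades
# would show a strong net bias that nothing obliges later years to cancel.
random.seed(42)
trials = 10_000
biased = 0
for _ in range(trials):
    decade = [random.choice((-1, 1)) for _ in range(10)]  # -1 La Nina, +1 El Nino
    if abs(sum(decade)) >= 4:  # at least a 7-3 split either way
        biased += 1
print(f"{biased / trials:.0%} of simulated decades carry a strong net ENSO bias")
# Roughly a third do -- independent fluctuations owe the future no 'catch-up'.
```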
So Lazy, are you going to try and show us how the models actually got it right? I’d love to see the twisted mathematics you’re going to employ to convince us. Perhaps you could use Hansen’s A, B, and C scenarios he once touted 😉
As others have commented here, we should be looking at the BAU predictions the models have made as that is the scenario we are currently living in (in fact, I believe our evil ‘SeeOhToo’ emissions are higher than the BAU scenarios). I’d REALLY love to see you try and reconcile those predictions with the real-world temps!
Over to you Lazy…
fhhaynie,
You said: “That still would not explain a probable future downward trend in global temperature.”
As you know, there is no forecast of a near-term downturn in temperature in the purview of mainstream science. Certainly I don’t know of such a forecast, and I am therefore confident that many others who read your contribution will likewise be unaware.
I went to your website and found nothing that led me to judge that such a decline is likely.
The great thing about WUWT’s (specifically Anthony Watts’s) determined light-moderation stance is that, within reason, everybody has a chance to have their say. The heretic, the dissenter, the lone true voice in the crowd, the voice of orthodoxy, the honestly mistaken and the outright crackpot all get heard.
It’s embarrassing to have crackpots interjecting in a discussion. It would be even more embarrassing to exclude honest, possibly even correct viewpoints by wrongly judging them to be crackpot.
With respect, no matter how correct you might actually be, when you allude to a forecast not supported by conventional science and don’t give a citation, the reader has little choice but to include you among the crackpots. Having visited your blog, I believe this would be an unfair characterisation of you.
I therefore ask you to always include a citation to your calculations about your expected temperature decline with every post you make that alludes to it, no matter how much you feel we ‘ought’ to know it.
Sincerely,
Leo Morgan
Leo,
That probable downturn may not occur in my lifetime, but it will happen. We will have another ice age. Also, consider the probability, on a short-term basis, that the last sixteen years of no temperature rise is the top of a temperature cycle that is following a 200-year cycle of solar activity. Time will tell and reveal the true crackpots.
Andy W says:
December 20, 2012 at 6:49 am
(replying to LazyTeenager)
So Lazy, are you going to try and show us how the models actually got it right? I’d love to see the twisted mathematics you’re going to employ to convince us.
I think you have that wrong. I really don’t even care anymore “how” his precious models may have accidentally got it right.
Your question actually needs to be: “So Lazy, are you going to try and show us which of the models actually got it right?”
See, we still have not seen ANY of the 23-some-odd “officially acceptable models” actually produce even ONE single model run (of the many thousands they supposedly average to get their results) that has “reproduced reality” and predicts/projects/outputs/calculates ANY single 16-year steady-temperature period during ANY part of the 225 years between 1975 and 2200.
It’s not that the “CAGW modelers” need to produce hundreds (or thousands) of model runs that lie right down the middle of the real-world temperatures: clearly there are error bands, and the global circulation models will be slightly different each run. Nobody anywhere questions that.
They cannot even produce ONE run of ONE model that fits inside the error band of ONE standard deviation.
But for the IPCC to claim “certainty” (more than 3 standard deviations – of what outputs? from what sample set? using what “data”?) that their GCM models are correct 100 years into the future, when not even ONE result of 23 models × 1000 runs/model is inside the 16 years of real-world measurements between 1996 and 2012, is ludicrous!
It only makes logical sense: most of the world’s warming happened in the northern latitudes, so it shouldn’t be a surprise when cooling is realized in this same locale. Unfortunately, these same areas are the global breadbaskets. GK
To which Ira responded:
However, Lance Wallace mis-reported what my criticism was, which was quite different and which must be addressed:
[Roger Knights: Thanks, you are correct about the oval. I should have moved it and the arrow heads to the right by one year. Please see my embedded reply to Werner Brozek (December 19, 2012 at 8:43 pm) that I used 2012 instead of 2011 “… with the hope that, when the official AR5 is released in 2013, they will include an updated version of this Figure 1-4 with 2012 observed data. Please notice that I drew my black arrow through the higher of the two black temperature observations for 2011, which kind of allows for 2012 being a bit warmer than 2011.“ – Ira]
Dr Glickstein – many thanks for your comment immediately following mine above at 7:35 pm.
I quite agree that if you take truly random events such as throwing dice, the probability of throwing the same number N times will be 1/(6^N).
However, what I have problems with is where you say “If a prediction based on a given theory and associated computer model is supposed to be 90% certain, the probability it is wrong is one in ten. If the same theory and computer model is run again several years later, the chance that both are wrong is one in ten times ten …”.
The same theory and model imply the same result, if you use the same starting and boundary conditions. Even with different starting conditions I don’t think you can regard any two runs as truly random – so I personally doubt that the probabilities can simply be multiplied in the way you suggest (one in ten times ten, etc.).
But I will be happy to be corrected, if my grasp of probability theory here is wrong …!
Don’t anyone hold their breath, waiting for LT to respond. He never does. His strength is that he doesn’t mind being wrong. Nonetheless, he serves a good purpose in parroting the dubious scientists who brought us AGW, and so exposing their dubious science to public inspection.
Tim’s critique about the “prosecutor’s fallacy” (Dec 19 7:35 pm) is correct (and the rebuttal unfortunately is not). Four incorrect predictions, each with a 90% confidence (and therefore, a 10% chance of being wrong), does not lead to a 1 in 10,000 chance of all four being wrong. The fallacy is that the predictions are not independent events – that is, they are not separate throws of the dice.
If, for example, the 10% uncertainty includes some component of systematic error, and that systematic error is propagated through all four trials, the calculated error considering all four trials may still be as high as the original 10%.
To go back to the rebuttal’s dice example, there is a one in six chance of rolling a “1” and a one in 6^4 chance of rolling four “1”s in a row if you have no prior knowledge or reason to suspect that the dice are unevenly balanced. Once you have four “1”s in a row, however, you have competing hypotheses – a) that you’re really unlucky or b) that the dice are skewed. Now you need to assess the probability of systematic error and recalculate. That is, given that you know that trial A was exceeded, what is the probability that trial B will be exceeded?
Unless you pick the extremes of either 0 or 100% component of skewing, the final properly-multiplied error of all four reports considered as a unit will be less than one in ten but substantially greater than one in ten thousand.
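To put rough numbers on that, here is a toy Bayesian version of the dice argument. The 5% prior and the “biased models always miss” simplification are purely illustrative assumptions, not estimates:

```python
# Toy model: with prior probability q the whole family of models shares a
# systematic bias and always misses; otherwise the four misses are independent.
miss_if_independent = 0.10                 # one 90%-confident prediction missing
p_independent = miss_if_independent ** 4   # 1 in 10,000

q = 0.05                                   # assumed prior probability of shared bias
p_four_misses = q * 1.0 + (1 - q) * p_independent
print(f"P(four misses) = {p_four_misses:.4f}")   # ~0.05: between 1/10 and 1/10,000

# Having observed four misses, update the belief that the models are biased:
posterior_biased = q / p_four_misses
print(f"P(shared bias | four misses) = {posterior_biased:.3f}")  # ~0.998
```

Even a small prior weight on a shared systematic error dominates the calculation once all four predictions have missed in the same direction.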
RACookPE1978 says:
December 20, 2012 at 7:18
You’re absolutely right, RACookPE1978. No matter how many times they run the models, the results are always duff.
We still haven’t heard from LT 🙂
Let’s be generous and say SAR got it right. Doesn’t that still mean the GCMs that produce high forecasts have been proven inappropriate? Doesn’t all this still mean that the “C” part of CAGW has fallen off the table?
Even if the aerosol component in the prior GCMs is considered wrong, to account for the discrepancy, doesn’t this mean that the science is not settled?
Connolly says the SAR, at least, is correct, but doesn’t concede the Catastrophic part has been invalidated by time.
Gunga Din says:
December 19, 2012 at 7:21 pm
“Have any warmests ever admitted even that, that the models need improvement? Let alone admit they’ve been just plain wrong? Yet they still insist we take immediate action based on the past flawed models.”
Well, NASA doesn’t admit that they are wrong, just that some of the answers weren’t right 🙂
http://icp.giss.nasa.gov/research/ppa/2002/mcgraw/
http://icp.giss.nasa.gov/research/ppa/2001/mconk/
RACookPE1978 almost gets to the issue.
For any of these projections to be valid, they need not only to reproduce the forward temperature but also to get the components of the projection correct. If they get the temperature correct but CO2, water vapour, ENSO, clouds, aerosols, TSI, etc. are wrong, then the model isn’t correct at all; it has got the temperature correct by pure chance. You can do this with virtually any ensemble of models you like.
So when the IPCC puts together these ensembles they are trying to hide the fact that their underlying models have zero predictive power from the get go. Not only do they not have a single model that can be run and produce any kind of predictive output, they don’t have a single model that can be run to get even a hindcast of temperature correct with all of the underlying variables also being correct.
The temperature analysis here is a good starting point, but if it is also taken to a component analysis of the models then it will be quickly shown that they are rubbish.
Ira, I think you are still not grappling with the main point here. The point, as rgb and many others have said, is that this graph is NOT showing a range of predictions with a “best” value somewhere in the middle and uncertainties around the best value shown by colored bands. That is what the IPCC wants people to think! When you accept that, as you implicitly do by picking the central estimate, you are now open to the IPCC response (e.g., see Connelly) that at least the actual values are within the uncertainty. But these values are not even close to the uncertainty if you use reasonable uncertainty values enclosing the ACTUAL SCENARIO that ensued following the IPCC projection. That is, one would see four lines (probably lying close to the upper boundaries of each band of colors), with NARROW bands associated with each line, and the measured temperatures would lie far outside those narrow bands. This would give the IPCC no wiggle room.
Roger Knights, I did not “mis-report” what you said. I quoted your response and gave the time of 9:43 PM. That post of yours simply quoted Tokyoboy and said “it would be a nice addition.” You made two posts and it is the second one you are thinking of.
pete says: December 20, 2012 at 2:30 pm
…The temperature analysis here is a good starting point, but if it is also taken to a component analysis of the models then it will be quickly shown that they are rubbish.
____________________________________
Yes, this chart alone shows that the premise upon which the models are built is rubbish. They put in airplane contrails, but they ignore clouds, and water vapour is bundled into CO2 as a “feedback”!
And it is not as if they do not have any real-world data, either.
TimC (and mikerossander): You are correct that I was considering the ideal case where each run of the climate model could be analogized to a throw of a die, where each throw is totally independent (if the die is fair). Thus, my claim of ten times ten times ten times ten equals only one chance in 10,000 of four runs being wrong does not apply to IPCC runs of their climate models. (And when I say “run” I understand that the results published in each Assessment Report are the combined results of several models using different assumptions as to the rate of increase in atmospheric CO2, etc.)
Of course, if you run the same set of climate models with the same data, you will always get the same results. If the statistical certainty of one run is 90%, the chance of one run or multiple runs being wrong remains only one in ten.
However, in the case of the four IPCC Assessment Reports, each individual climate model and each set of climate models was somewhat different. Also, each run made use of some previously unknown data, because the runs were made five or six years apart. So, while not independent, the subsequent runs were not totally identical either. Therefore, if the supposed statistical certainty of each run was 90%, meaning the probability of a single run being wrong is one in ten, the odds against all four runs being wrong are definitely longer than ten to one, but not as long as 10,000 to one.
Could the four IPCC Assessment Reports be like throwing four different dice and all coming up as “1”? As before, if all four dice are fair, the probability of all coming up as “1” would be one in six times six times six times six, that is, one in 1,296. However, if I threw four different dice and got that result, I would wonder if the dice were actually fair. Wouldn’t you? I would strongly suspect that all four dice were “loaded” to favor a certain result, namely that they would come up as “1”.
We have four sets of IPCC climate models, over a period of up to 22 years, predicting far greater Global Warming than has actually been observed. I think there is a very high probability that all of those sets of IPCC climate models are “loaded” to favor high Climate Sensitivity to CO2 and other forcing factors that are under some control by humans. Given the fact that Global Warming has slowed down or stalled for a decade or two despite the continued rapid increase in CO2 levels, I think there is also a very high probability that the Climate Sensitivity to CO2 has been way over-estimated and that actual temperatures are mainly driven by natural cycles of the Earth and Sun that are not under human control.
The very first sentence of Chapter 1 of the leaked AR5 says:
“Since the fourth Assessment Report (AR4) of the IPCC, the scientific knowledge derived from observations, theoretical evidence, and modelling studies has continued to increase and to further strengthen the basis for human activities being the primary driver in climate change. At the same time, the capabilities of the observational and modelling tools have continued to improve.”
It seems to me that the basis for the opposite conclusion has increased and strengthened, namely that the IPCC-supported Climate Theory, and the models derived from that theory, were wrong to start with (like “loaded” dice) and, after four tries, are still wrong. IPCC researchers were closest to the truth with the SAR, the lowest of the four central predictions, but the subsequent reports, the TAR and AR4, demonstrate that the IPCC researchers have definitely not improved their modelling tools. Thus, the draft AR5 starts with two false sentences in its very first paragraph.
Ira
The problem with any model is as follows: with every iteration, the error tends to grow. When one runs a model through thousands of iterations, the errors accumulate.
Simply put: if my model makes a 90%-good prediction of the temperature for day one, what will it do for day two, assuming the same skill? 0.9 × 0.9?
And on day three? 0.9 × 0.9 × 0.9?
Has anyone tried 0.9^100?
It is about 2.7 × 10⁻⁵ (26 × 10⁻⁶).
And the models do much more than cycle through 100 iterations.
Please, do not treat models as if they were experiments. They are not. Discard the models.
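The arithmetic in that comment is easy to reproduce, for what the underlying assumption is worth (that per-step skill simply multiplies, which real models may or may not obey):

```python
# Compounding-skill arithmetic from the comment above: if each step retains
# 90% of the previous step's skill, skill after n steps is 0.9 ** n.
skill_per_step = 0.9
for n in (1, 2, 3, 10, 100):
    print(f"after {n:>3} steps: {skill_per_step ** n:.2e}")
# after 100 steps: 2.66e-05 -- the ~26 x 10^-6 figure quoted above
```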
Ira Glickstein, PhD says:
December 20, 2012 at 7:38 pm
The very first sentence of Chapter 1 of the leaked AR5 says:
Since the fourth Assessment Report (AR4) of the IPCC, the scientific knowledge derived from observations, theoretical evidence, and modelling studies has continued to increase and to further strengthen the basis for HUMAN ACTIVITIES being the PRIMARY driver in climate change. At the same time, the capabilities of the observational and modelling tools have continued to improve. [EMPHASIS mine]
====================================
It looks as though they intend to brazen it out. Is any more proof needed that the IPCC reports are the vehicle of a particularist agenda?
Ira Glickstein;
It seems to me that the opposite conclusion has been increased and strengthened, namely that the IPCC-supported Climate Theory and models derived from that theory were wrong to start with (like “loaded” dice) and, after four tries, are still wrong.
>>>>>>>>>>>>>>>>>>
Ch11 of AR5 is about the models and shorter term (a few decades) predictions. There’s a section on initialization as a technique to make the models more accurate, in which they make the most astounding (to me anyway) statement:
************
“While there is high agreement that the initialization consistently improves several aspects of climate (like North Atlantic SST, with more than 75% of the models agreeing on the improvement signal), there is also high agreement that it can consistently degrade others (like the equatorial Pacific temperatures).”
************
How much more obvious could it be? They adjust to make one part more accurate, and it makes another part worse. They don’t even seem to consider that this is an indication that the models contain one or more fatal flaws which render them incapable of producing an accurate result. It is direct evidence that the things the model gets right, it gets right for the wrong reasons.