Submitted by Dr. Clive Best
The first IPCC report in 1990 chaired by Prof. Houghton made a prediction for a rise in global temperatures of 1.1 degrees C from 1990 until 2030. This prediction can now be compared with the actual data as measured up to now (May 2011).
These results have been derived as described below. You can see the results here
http://clivebest.com/blog/?p=2208
regards
Dr Clive Best

The IPCC and climate science are crying out loud that human-induced CO2 is the cause of the temperature rise, and policymakers are going around spending millions and imposing taxes on that basis. All that was based on the so-called models. It’s bloody stupid to come and give academic bullshit about model projections being conditional on other factors. Go tell the world first that CO2 is not a demon, not to trust the models to make predictions on CO2 alone, and to stop all talk of “carbon mitigation”. Because, in the real world, policies affecting millions and costing trillions are being made based upon “carbon” being a pollutant and being the only reason for CAGW, due to these bloody models you modellers idolise. Get real, you bloody modellers. You are the culprits here who have caused this situation today, when people are suffering due to your modelling.
The biggest issue here is that figure 2 is still being shown when the text makes clear that figure 1 is the correct figure to show. Figure 2 is misleading.
Look fellows, I am all in favor of comparing the forecasts to the observations. But it has to be done in a much more technically competent way. Lucia has been doing it for some time.
1. Put the right chart in this post.
2. Understand your opponent’s BEST argument and defeat that.
3. I believe these are energy-balance models being looked at. That’s not really sensible, since they have no uncertainty. Use GCMs, look at what Lucia does, attack the BEST argument, and don’t misrepresent.
I think what you will see is that the models look like they are systematically a little hot.
steven mosher says:
June 9, 2011 at 10:35 pm
The biggest issue here is that figure 2 is still being shown when the text makes clear that figure 1 is the correct figure to show. Figure 2 is misleading.
As you asked: “what’s the speed of dark?” Many people [e.g. Mr Feht] simply don’t get it.
1. The sun: The IPCC can make recommendations about the solar forcing that the 20 or so models use. In AR4 some groups used a straight line going forward, basically a “high” number. Some groups tried to use a nominal 11-year cycle. So if you had time and money enough to study this parametrically, you would.
2. Volcanoes: actually, in AR5 I think they are going to do some kind of tests around this. But the point is this: in the case presented above, the observations had a big plunge down. Not predictable. Of course the temperature recovers, but the start of your curve is pegged lower.
The reasons you would make the prediction with no volcanoes are simple: 1. You can’t predict them. 2. The effect doesn’t last long. 3. You can’t count on them to keep things cool. BUT if you are interested in TESTING a prediction, then having a volcano in the observations makes a fair test harder, ESPECIALLY if the eruption happened at either end of the test period and you’re looking at trends.
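Mosher’s point about a volcanic dip at the start of a test window can be checked numerically. A minimal sketch with made-up numbers (not real temperature data): fit an ordinary-least-squares trend to a flat series, then to the same series with a short 0.3°C cooling at the start.

```python
def ols_slope(xs, ys):
    """Ordinary-least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1990, 2030))      # a 40-year test window
flat = [0.0] * len(years)            # no underlying trend at all

dipped = flat[:]
dipped[0] -= 0.3                     # a brief volcanic-style dip
dipped[1] -= 0.3                     # in the first two years

slope_flat = ols_slope(years, flat)
slope_dipped = ols_slope(years, dipped)

# The dip pegs the start of the curve low, so the fitted trend
# comes out positive even though nothing actually warmed.
print(slope_flat, slope_dipped)
```

The second slope comes out positive, which is the sense in which a volcano at the start of the window makes a fair trend test harder.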
Upon reading this analogy, the very first thing that entered my mind was: what about the wife’s salary?
I feel that this immediate questioning of the given scenario *is* the scientific approach, and these days is only to be found on the ‘skeptical’ side of the debate. This is opposed to what I would categorize as the static or linear or assumptive approach that Mr. Mosher appears to have offered, and IMHO is the hallmark of AGW thinking.
So what I see here is the immediate dismissal of all possible variables that may impact the conclusion; I call it input smoothing, a trademark of AGW. The result is graphs that plot wild-assed guesses far into the future. Graphs that by definition get less accurate as they move forward because of the introduction of countless new variables along the way. Not to mention the fact that the function itself is corrupt because inputs were dismissed as irrelevant or insignificant, or were just not thought of yet.
Besides the wife’s salary (a dismissed variable), what if I get a raise? What if I get a second job? What if we hit the lottery? What if we get a smaller house and our expenses are reduced? Negative feedbacks, etc. Or we might buy an electric car and blow a bigger hole in the budget (an ultra-positive feedback in the given scenario).
Fortunately for Steve, I will not let the old lady address any possible implied sexism in the comment! (“down honey! no computer tonight!“) 😉
The base message is: if you model something and your model doesn’t prove predictive, your model did not contain enough factors.
Did volcanoes happen? Then volcanoes are part of reality and should be quantifiably predicted in the model, because they will affect the (real) output the model is emulating. And since we can’t predict volcanoes, we can’t predict their effect on the climate, and thus we can’t predict the climate. That is just one unmodelled factor that can be identified. It cannot be pretended (as modern climatology does) that an incomplete abstraction is representative, predictive, or justified if it is inherently lacking identified factors.
It might sound ridiculous when you put it that way, but it’s no less ridiculous than saying “if you take out all these chaotic factors, then we get a predictive model.”
And that is exactly why “they can’t predict the weather, so why could they predict the climate?” is a valid argument.
If people want to know what was projected for BAU…
Somebody should write and ask for it
Steven Mosher says “Most NOTABLY the forecast did not foresee or take into account any volcanic eruptions.”
Hansen et al., Section 4.2, discusses stratospheric aerosols. There are no volcanoes in Scenario A (described as an extreme case) and some assumptions of El Chichon-scale events in Scenarios B and C.
Figure 2 suggests that the resulting attenuation in radiative forcing is short-lived. Ongoing differences between the scenarios are probably then due to accumulation of warming from assumed positive feedback.
In the climate prediction game, reporters have a habit of pointing to a particular “realisation” and claiming skill. At the same time, they distance themselves from other realisations on the argument that they were “CONDITIONAL” on unforeseen events.
But a fortunate prediction as of today can just as easily be shown to be wrong in a few years’ time when those pesky “CONDITIONS” come along to spoil the party. So claims of skill are, at best, tentative and short-lived.
I roll a die and I suggest six possible “realisations”. When the outcome is a ‘2’, do I claim credit for being a good forecaster, while playing down the other five predictions because they were CONDITIONAL on events which did not happen? To have claimed that credit, I would have just set myself up for a fall on the next few rolls of the die.
The question is what is the purpose and value of this type of exercise?
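The die analogy above can be run as a tiny simulation (a sketch, nothing from the post itself): always crediting whichever of the six conditional “realisations” happened to match makes the forecaster look perfect, while each individual forecast verifies only about one time in six.

```python
import random

random.seed(42)
rolls = [random.randint(1, 6) for _ in range(10_000)]

# Issue six "conditional" forecasts per roll: forecast f verifies when the roll is f.
# One of the six always matches, so always taking credit for the matching one
# yields a perfect-looking apparent skill...
apparent_skill = sum(1 for r in rolls if r in range(1, 7)) / len(rolls)

# ...while any single forecast only verifies about one time in six.
per_forecast_skill = {f: sum(1 for r in rolls if r == f) / len(rolls)
                      for f in range(1, 7)}

print(apparent_skill)        # 1.0
print(per_forecast_skill)    # each value near 1/6
```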
The other problem with this post is this.
So what is this saying? The BAU scenario in 1990 (see page 333 of the FAR) is pretty extreme.
CO2 was projected UNDER BAU to reach 400 ppm by 2010. We are below that.
Methane was projected under BAU to be substantially higher than it actually turned out to be.
So basically the reason why the projections are HIGH is that the emission scenario was HIGH. In reality we haven’t put out as much CO2 or methane as THIS scenario contemplated. So, of course, it’s high.
The other thing you can see is THIS: 1990-2030 is 40 years. Their low-end projection for a HIGH emissions scenario (BAU is HIGH) is 0.7°C in 40 years. That’s about 0.17°C per decade, which is just about at the center of the current projections.
The point is you can’t project emissions very well. You can only build scenarios, and then conclude: we cannot afford to pump that much (BAU) GHGs into the air.
So, go have a look at the actual FAR. If you want to be critical of the models, you’d better do things right.
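The per-decade arithmetic in the comment above is easy to verify (a trivial sketch of the numbers as quoted from the FAR):

```python
low_end_total = 0.7        # degrees C, low end of the BAU range, 1990-2030
years = 2030 - 1990        # a 40-year window
rate_per_decade = low_end_total / (years / 10)
print(rate_per_decade)     # 0.175 degrees C per decade, the "~0.17" quoted above
```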
Anthony
Is it possible to add Figure 1, or a link to Figure 1, to this post?
Can you also provide the CO2 emission assumptions behind each of the three model predictions?
This will go someway to allowing people to consider the points raised by Steven Mosher. Alternatively, Steven could provide this data along with his comments.
Perhaps Steven will tell us what he estimates the forcing/cooling caused by Krakatoa to be, and what end-of-19th-century temperatures would have looked like but for that event.
Thanks.
Back then, there were no cars.
It would have been much more helpful (or less misleading) if you had shown Fig 1 of Best’s link, where he corrects for different data offsets and which supports his conclusion:
Conclusions
Following a gradual rise of about 0.2 degrees from 1990 to 2000, global temperatures have stopped increasing and have actually fallen slightly. The only IPCC prediction which remains consistent with the current data is the lower prediction of a 0.7 degree rise from 1990 to 2030. The “Best” IPCC estimate and the higher 1.5 degree rise are ruled out by the data.
As it stands, the figure you present is being highlighted on Andrew Bolt’s website, where people are claiming this shows that the IPCC has it all wrong:
More spin and idiocy from the data denying lefty trolls is expected.
The data deniers will be in full scream today.
Always good for a laugh.
I’ve linked to similar charts for some time on this and other blogs. All they can do is reflexively deny that Hansen’s and others’ models continue to diverge from reality.
So, data deniers, please provide some more merriment. More expressions of blind faith welcome.
Keith of Canberra (Reply)
Fri 10 Jun 11 (06:58am)
Isn’t life funny.
We get told that the science is settled, the models are verified, the data is not fiddled, snow will soon be a rare event, and there is a consensus.
When we say ‘but this isn’t like the lab, and simple lab approaches miss the point and the complexity’, we get vilified, called deniers, and they make videos of child deniers exploding in bloody tatters.
Then when the original predictions are out by a mile, Steve says ‘this isn’t like the lab, and simple lab approaches miss the point and the complexity.’
Maybe, just maybe, we were right all along.
Linear projections for chaotic systems are silly from the outset anyway, even more so as Earth’s climate is governed by an unknown number of self-regulating factors. Proof of this is the simple fact that we still exist, because otherwise this cozy planet would have turned into a blazing, smothering, hostile-to-life, Venus-like Hell long since.
This little-noticed fact alone should be sufficient to completely disqualify and dismiss the linear IPCC projections.
This page does the job: http://www.realclimate.org/index.php/archives/2011/01/2010-updates-to-model-data-comparisons/ .
Dr. Best is actually looking at the Hansen predictions of 1988. On those, RealClimate comes to a comparable conclusion:
—
As stated last year, the Scenario B in that paper is running a little high compared with the actual forcings growth (by about 10%) (and high compared to A1B), and the old GISS model had a climate sensitivity that was a little higher (4.2ºC for a doubling of CO2) than the best estimate (~3ºC).
—
The trends for the period 1984 to 2010 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.27+/-0.05ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3, the trends are 0.19+/-0.05 and 0.18+/-0.04ºC/dec (note that the GISTEMP met-station index has 0.23+/-0.06ºC/dec and has 2010 as a clear record high).
As before, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world. Repeating the calculation from last year, assuming (again, a little recklessly) that the 27 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.27*0.9) * 0.19=~ 3.3 ºC.
—
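The closing calculation in the quoted RealClimate passage is just a linear rescaling of the model’s sensitivity by the ratio of observed to (forcing-corrected) projected trend. A sketch of the arithmetic exactly as quoted:

```python
model_sensitivity = 4.2    # degrees C per CO2 doubling, old GISS model
scenario_b_trend = 0.27    # degrees C/decade, Scenario B, 1984-2010
forcing_correction = 0.9   # Scenario B forcings ran about 10% high
observed_trend = 0.19      # degrees C/decade, GISTEMP

# Assume (as the quote does, "a little recklessly") that the trend scales
# linearly with sensitivity and forcing:
implied = model_sensitivity / (scenario_b_trend * forcing_correction) * observed_trend
print(round(implied, 1))   # 3.3
```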
steven mosher says:
June 9, 2011 at 11:40 pm
The other problem with this post is this.
“Based on the IPCC Business as Usual scenarios, the energy-balance upwelling diffusion model with best judgement parameters yields estimates of global warming from pre-industrial times (taken to be 1765) to the year 2030 between 1.3°C and 2.8°C, with a best estimate of 2.0°C. This corresponds to a predicted rise from 1990 of 0.7-1.5°C, with a best estimate of 1.1°C.”
Prediction: 1990 to 2030 –> 0.7-1.5 degrees C
So what is this saying? The BAU scenario in 1990 (see page 333 of the FAR) is pretty extreme.
CO2 was projected UNDER BAU to reach 400 ppm by 2010. We are below that.
Methane was projected under BAU to be substantially higher than it actually turned out to be.
So basically the reason why the projections are HIGH is that the emission scenario was HIGH. In reality we haven’t put out as much CO2 or methane as THIS scenario contemplated. So, of course, it’s high.
The other thing you can see is THIS: 1990-2030 is 40 years. Their low-end projection for a HIGH emissions scenario (BAU is HIGH) is 0.7°C in 40 years. That’s about 0.17°C per decade, which is just about at the center of the current projections.
The point is you can’t project emissions very well. You can only build scenarios, and then conclude: we cannot afford to pump that much (BAU) GHGs into the air.
So, go have a look at the actual FAR. If you want to be critical of the models, you’d better do things right.
——————————————————————————————————————–
Well, to be precise, the rise in temperature from 1765 until today exactly follows the curve describing Earth’s climate’s recovery from the Little Ice Age.
Therefore, NO rise in temperature can clearly and doubtlessly be attributed to ANY rise in CO2 in Earth’s atmosphere whatsoever, be it man-made or caused by natural sources.
So, go have a look at the actual FACTS. If you want to be critical of the comments of other posters you better do things right.
“So basically the reason why the projections are HIGH is that the emission scenario was HIGH. In reality we haven’t put out as much CO2 or methane as THIS scenario contemplated. So, of course, it’s high.”
There was NO reduction in CO2 emissions. CO2 emissions are in mass units, not in ppm. Another very important sceptical point regarding AGW was (and is) that the consensus CO2 mass balance in the atmosphere is wrong. We are still far from estimating all natural and anthropogenic inputs/outputs with any accuracy or certainty. We don’t know how anthropogenic CO2 emissions impact atmospheric CO2 concentration. To know that, one would have to know all inputs/outputs.
We can emit a lot of CO2 and atmospheric CO2 concentration can still decrease, because natural fluxes are overwhelming.
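The mass-balance point can be sketched with toy bookkeeping. All the flux numbers below are illustrative, not measured values; the only real figure is the widely used conversion of roughly 2.13 GtC per ppm of atmospheric CO2. The sign of the concentration change depends on the net of all fluxes, not on the anthropogenic term alone:

```python
GTC_PER_PPM = 2.13   # commonly used conversion: gigatonnes of carbon per ppm of CO2

def ppm_change(anthropogenic_gtc, natural_sources_gtc, natural_sinks_gtc):
    """Change in atmospheric CO2 (ppm) from a crude one-box mass balance."""
    net_gtc = anthropogenic_gtc + natural_sources_gtc - natural_sinks_gtc
    return net_gtc / GTC_PER_PPM

# Illustrative numbers only: even with sizeable human emissions, the sign of
# the concentration change is set by the NET of all fluxes.
print(ppm_change(9, 210, 220))   # negative: sinks outpace everything else
print(ppm_change(9, 210, 215))   # positive: net flux into the atmosphere
```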
The temperature readings don’t meet the projections because Kyoto was a great agreement, the IPCC is doing a fantastic job, and it should get more money and campaign longer and louder. Obvious, isn’t it? /SARC OFF.
Steve Mosher,
I don’t think we want to count on a dearth of volcanoes keeping us warm.
What’s the speed of dark?
I don’t know, but the speed of darkness is a good Flogging Molly album.
So many words to defend the indefensible, Messrs. Mosher and Svalgaard.
Any prediction, however conditional (especially if it is used as a pretext to suck billions of dollars out of taxpayers’ veins) is only as good as it is true.
IPCC predictions are demonstrably false. You can discuss the question of WHY they are false to your heart’s content — but all your rich allusions to nothing in particular, learned provisos, subtilizations and self-serving obfuscations won’t HIDE THE DECLINE.
Climate Change 2007: The Physical Science Basis
Summary for Policymakers
by Vincent Gray
The absence of any form of validation still applies today to all computer models of the climate, and the IPCC wriggles out of it by outlawing the use of the word “prediction” in all its publications. It should be emphasised that the IPCC do not make “predictions”, but provide only “projections”. It is the politicians and the activists who convert these, wrongly, into “predictions”, not the scientists.
An unfortunate result of this deficiency is that without a validation process there cannot be any scientific or practical measure of accuracy. There is therefore no justified claim for the reliability of any of the “projections”.
http://www.pensee-unique.fr/GrayCritique.pdf
RR Kampen,
Reality trumps your models: click [chart by Bill Illis]
Kampen’s claimed 3.3° rise per 2xCO2 is as preposterous as Hansen’s Texas Sharpshooter predictions [shoot holes in a barn door, then draw a circle around them and claim, “Bullseye!”].
Also interesting is that the climate model projections that seem to be the most accurate are the ones showing a scenario where GHG emissions stop and/or actually decrease.
In the FAR, the most accurate temperature projection was the one where emissions declined by 2.0% per year starting in 1990. It’s bang on for 2010.
Hansen’s Scenario C as well, where emissions growth stops in the year 2000, is still too high but is the closest projection. We see this time and time again.
CO2 emissions, of course, are not declining by 2.0% per year since 1990 but have been rising at 2.2% per year since 1990 (and at 3.0% per year since 2000).
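Taking the growth rates quoted above at face value, the gap between the declining-emissions scenario and the observed path compounds quickly over 1990-2010 (a sketch, using only the rates quoted in the comment):

```python
years = 2010 - 1990   # 20 years

declining = (1 - 0.020) ** years   # FAR scenario: emissions falling 2.0%/yr from 1990
observed = (1 + 0.022) ** years    # quoted actual growth: +2.2%/yr since 1990

print(round(declining, 2))             # ~0.67x the 1990 emission rate
print(round(observed, 2))              # ~1.55x the 1990 emission rate
print(round(observed / declining, 1))  # observed roughly 2.3x the scenario path
```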