On their Die kalte Sonne website, Professor Fritz Vahrenholt and Dr. Sebastian Lüning have put up this guest post by Prof. Jan-Erik Solheim (Oslo) on Hansen’s 1988 forecast, showing that Hansen was, and is, way off the mark. h/t to Pierre Gosselin of No Tricks Zone and WUWT reader tips.

Figure 1: Temperature forecast by Hansen’s group from 1988. The scenarios are a 1.5% annual increase in CO2 emissions (blue), a constant increase in CO2 emissions (green), and stagnant CO2 emissions (red). In reality, CO2 emissions have increased by as much as 2.5% per year, which would correspond to a scenario above the blue curve. The black curve is the actually measured temperature (running 5-year mean). Hansen’s model overestimates the temperature by 0.9°C, which is a whopping 150% off. Figure supplemented from Hansen et al. (1988).
One of the most important publications on “dangerous anthropogenic climate change” is that of James Hansen and colleagues from 1988, published in the Journal of Geophysical Research. The title of the paper is “Global Climate Changes as Forecast by Goddard Institute for Space Studies Three-Dimensional Model.”
In this publication, Hansen and colleagues present the GISS Model II, with which they simulate climate change as a result of changes in the concentrations of atmospheric trace gases and particulates (aerosols). The scientists consider three scenarios:
A: increase in CO2 emissions by 1.5% per year
B: constant increase in CO2 emissions after 2000
C: no increase in CO2 emissions after 2000
CO2 emissions have in fact increased by about 2.5 percent per year since 2000, so according to the Hansen paper we would expect a temperature rise even stronger than in scenario A. Figure 1 shows the three Hansen scenarios together with the actually measured global temperature curve. The arrow protruding beyond scenario A represents the temperature value that the Hansen team would have predicted on the basis of a CO2 increase of 2.5%. According to Hansen’s forecast, the temperature should have risen by 1.5°C compared with the level of the 1970s. In truth, however, the temperature has increased by only 0.6°C.
It is apparent that the temperature prediction modeled by the Hansen group in 1988 misses the mark by about 150%. It is extremely regrettable that precisely this type of modeling is still regarded by our politicians as reliable climate prediction.
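For readers who want to check the arithmetic behind the 150% figure, here is a minimal sketch (my own addition; the 1.5°C and 0.6°C values are those quoted above, read off Figure 1):

```python
# Back-of-envelope check of the "150% wrong" claim, using the values
# quoted in the text above (read off Figure 1).
predicted_rise = 1.5  # deg C, extrapolating scenario A to 2.5%/yr emissions growth
observed_rise = 0.6   # deg C, actually measured rise since the 1970s level

overshoot = predicted_rise - observed_rise         # 0.9 deg C
percent_error = 100.0 * overshoot / observed_rise  # 150%
print(f"overshoot: {overshoot:.1f} C, relative error: {percent_error:.0f}%")
```

The 0.9°C overshoot relative to the 0.6°C observed rise is where the 150% comes from.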
At least I think you’re right.
Phil.,
“Well we’re discussing Hansen’s paper not ‘climate models’ and he explicitly uses a model that has an equilibrium climate sensitivity of 4.2ºC/doubling of CO2.”
So we’re both in agreement that Hansen (1988) overestimated future warming. Done.
But, he said it best himself in Section 6.1: “Forecast temperature trends for time scales of a few decades or less are not very sensitive to the model’s equilibrium climate sensitivity. Therefore climate sensitivity would have to be much smaller than 4.2ºC, say 1.5-2ºC, in order to modify our conclusions significantly.”
It would be interesting to take Hansen’s formula and adjust the forcing downwards until his curve coincided with the actual observations.
Russ R. – WRT methane emissions, I would suggest you start by googling “trends in methane emissions”, and reading some of the considerable work in this area. Numerous papers and presentations discuss this, including Wuebbles et al 2000 (http://www.atmosresearch.com/NCGG2a%202002.pdf), noting that anthropogenic sources have not increased much in the last 20 years or so.
In regards to the estimation of emissions, well, if Hansen were studying economics you might have a point. But he isn’t – he studies climate. The relevant question is whether the model correctly ties forcings to climate state. If so, then the model is useful, allowing us to judge the results of our actions.
“But you haven’t at all addressed the CO2 problem at all. How could CO2 emissions have increased at a greater rate than assumed, but forcings remain in line with what was projected?”
They haven’t! CO2 forcing for this period as per Scenarios A and B was estimated at ~0.6W/m^2, and according to figures you can look at on http://www.esrl.noaa.gov/gmd/aggi/ it was actually ~0.55W/m^2, slightly lower. Adding in trace gases, slightly lower stratospheric aerosols, etc. (http://tinyurl.com/8xw3dtp is directly relevant to this paper), our total forcings have been ~16% lower than Scenario B.
Meaning that despite Hansen not being an economist, his mid-line, most likely projections for forcings are actually fairly close to what has actually occurred over the past 25 years.
“Look, I’m not yet arguing that Hansen’s projections were good or bad compared to what’s been observed since 1988. I’m saying that I haven’t yet seen anyone here present an honest, fair analysis in order to come to an objective conclusion.”
I would have to strongly disagree. See http://www.realclimate.org/index.php/archives/2007/05/hansens-1988-projections/ for just that evaluation, and http://www.atmos-chem-phys-discuss.net/11/22545/2011/acpd-11-22545-2011.pdf for forcing histories.
Slightly different topic, side note, while I would agree that there won’t be a single climate sensitivity that applies to all climate regimes, ~3C/doubling is the value supported by the evidence for our current state.
—
But finally – this thread really centers on Solheim harping on a 25-year-old paper, using inappropriate strawman arguments. Arguments that I feel have been shown to be unjustified – Hansen’s 1988 sensitivity was too high (the state of the art back then), and hence shows overestimates, but even with that his model does a good job of predicting regional temperature anomaly distributions 25 years out. Comparing observations to scenario A, which did not happen, is just plain silly.
Wouldn’t it be nice if the discussion could instead center on current work, current data? Rather than nit-picking quarter-century-old works whose authors clearly stated they were works in progress?
old engineer,
My apologies, your method for extrapolating CO2 growth appears to be equivalent to Hansen’s. (Though it’s now apparent that Hansen’s own method differs somewhat from how it was summarized in the abstract… though not by enough for me to nit-pick.)
As for the other GHGs in the analysis, you might find this helpful: http://www.realclimate.org/data/H88_scenarios.dat
It’s the concentration projections from Hansen (1988) for each of the 5 GHGs (CO2, N2O, CH4, CFC11 and CFC12), for each of the 3 scenarios, for each of the 93 years from 1958 to 2050.
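If you want to work with that file programmatically, here’s a minimal parsing sketch. I’m assuming whitespace-delimited numeric rows with possible header lines; inspect the file and adjust as needed:

```python
# Fetch the H88 scenario file and keep only the rows that parse as numbers,
# skipping any header or comment lines. Column layout is an assumption.
import urllib.request

url = "http://www.realclimate.org/data/H88_scenarios.dat"
rows = []
with urllib.request.urlopen(url) as fh:
    for raw in fh.read().decode("utf-8", errors="replace").splitlines():
        try:
            rows.append([float(p) for p in raw.split()])
        except ValueError:
            continue  # non-numeric line (header, comment, blank)
print(f"parsed {len(rows)} numeric rows")
```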
FWIW, Hansen’s CO2 projections for 2011 were:
– Scenario A: 393.74 ppmv
– Scenario B: 390.99 ppmv
– Scenario C: 367.81 ppmv
Why his numbers differ from those you get by following the method he spelled out in Appendix B is entirely beyond me.
What’s desperately needed is a magisterial review-type paper that addresses itself to clearing away the misconceptions, identifying the points in dispute, evaluating the outcomes assuming one grants certain in-dispute points, etc. Then everyone can refer to it and use it as a take-off point.
But who’ll peer-review it? Maybe it should just be posted and then modified in light of comments posted below it (or defended against them, where warranted).
KR:
Several points:
1. It doesn’t trouble me at all if Hansen wasn’t accurate on his estimates for emissions (Step 1). He’s a scientist, not a clairvoyant. I’m entirely happy to work from whatever emissions assumptions he laid out. My concern is that from a given assumption about emissions, you’re granting him a pass on projecting atmospheric concentrations (Step 2). I don’t consider this an immaterial step (and he’s made it a bit complicated by being inconsistent in how he made assumptions… sometimes talking about changes in annual concentration increments and at other times talking about actual increases in emissions). For Step 3 (calculating forcing from a given atmospheric composition) I’m happy to take this part at face value since the math appears uncontroversial. (Myhre et al. (1998)) But in the end, you’re trying to account for all of the remaining divergence in Step 4 (temperature variance for a given change in forcing), and pinning it all on a simple little overestimate of climate sensitivity. I’ll refer you back to what Hansen himself said about sensitivity in Section 6.1.
2. “In the absence of any mechanism to explain a long-term decrease in natural sources such as wetlands, the answer appears to lie with anthropogenic emissions.” To me that sounds like “We can’t account for the gap, so we’ll just assume that less methane was emitted.” In other words… there’s no data to allocate the variance between Step 1 (lower-than-assumed emissions) and Step 2 (lower concentrations given assumed emissions), but you’re arbitrarily attributing all of the gap to Step 1. In the end, the distinction may very well prove trivial… but until there’s some evidence, we can’t just assume away the difference.
3. I agree with you that Gavin did a very respectable job of evaluating Hansen (1988) in his 2007 post on RC. However, that was 5 years ago, and you’ll note that Scenarios B & C have diverged since then while observations have tracked much closer to Scenario C.
4. Thank you for sending the AGGI link… this is excellent. I’ll refer it to others in future when such questions arise.
5. Minor point: As for my CO2 emissions question… you replied: “CO2 forcing for this period as per Scenarios A and B was estimated at ~0.6W/m^2, and according to figures you can look at on http://www.esrl.noaa.gov/gmd/aggi/ it was actually ~0.55W/m^2, slightly lower”.
That doesn’t quite jibe with Hansen’s projected numbers for 2011:
– Scenario A: 393.74 ppmv
– Scenario B: 390.99 ppmv
– Scenario C: 367.81 ppmv
(http://www.realclimate.org/data/H88_scenarios.dat)
and observed average concentration for 2011:
– Mauna Loa: 391.57 ppmv
(ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_annmean_mlo.txt)
It looks to me like CO2 came in higher than Scenario B but lower than A, which is still a very good estimate. Nonetheless, it’s a long way from Scenario C. (A rough forcing cross-check of these numbers appears at the end of this comment.)
6. “Wouldn’t it be nice if the discussion could instead center on current work, current data? Rather than nit-picking quarter-century old works that the authors of clearly stated were works in progress?”
That would be nice indeed, except that “temperature records of at least 17 years in length are required for identifying human effects on global‐mean tropospheric temperature.” – Santer et al. (2011).
(Sorry, I couldn’t resist.)
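P.S. The forcing cross-check promised under point 5: plugging the 2011 concentrations above into the simplified Myhre et al. (1998) expression dF = 5.35 ln(C/C0) gives a rough consistency check. A minimal sketch, assuming a common 1988 baseline of roughly 351.6 ppmv (approximately the Mauna Loa annual mean; Hansen’s per-scenario 1988 values may differ slightly):

```python
import math

def co2_forcing(c_now, c_ref):
    """Simplified Myhre et al. (1998) CO2 forcing expression, in W/m^2."""
    return 5.35 * math.log(c_now / c_ref)

c_1988 = 351.6  # ppmv, approximate Mauna Loa 1988 annual mean (assumed common baseline)
for label, c_2011 in [("Scenario A", 393.74), ("Scenario B", 390.99),
                      ("Scenario C", 367.81), ("Mauna Loa (obs)", 391.57)]:
    print(f"{label}: dF(1988->2011) = {co2_forcing(c_2011, c_1988):.2f} W/m^2")
```

That puts scenarios A and B near 0.6 W/m^2 and observations near 0.58 W/m^2, broadly consistent with the ~0.55–0.6 W/m^2 figures KR and I have been trading above.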
KR says:
June 15, 2012 at 9:50 am
Hansen did use a 4.2°C per doubling sensitivity – now thought to be too high, with ~3°C the current estimate. That resulted in a slight overestimate of warming, with the model showing an overestimate of ~20% when run with actual forcings.
Don’t throw misinformation about like that, please. The model we are talking about is GISS GCM Model II, right? Developed in the 1970s / early 1980s and described in Hansen 1983, which is referred to as paper 1 in Hansen 1988.
However, the NASA GISS GCM Model II page explicitly says: “Historical versions of Model II (e.g., the computer code used in the 1988 simulation runs) are not currently available.” They do have an improved, value-added version of the same name available to download, though. This means there are unknown and (publicly) undocumented differences between the computational climate model used by Hansen et al. in the eighties and the current, publicly available version, developed and maintained by the Columbia University EdGCM project.
Therefore it is impossible to run the model with actual (or any other) forcings, at least not the version that was used to produce Hansen’s 1988 predictions. It may be possible for someone with access to NASA code archives (provided they exist), but the original model is certainly not publicly available for independent, third-party checking.
This means your claim that the model shows an overestimate of ~20% is not a scientific proposition, just hot air that defies verification attempts. The proper term for it is journalism, which used to have nothing to do with science in a pre-postnormal epoch.
@KR:
So, Hansen’s busted *theoretical predictions* don’t matter, because someone’s thought up some *theoretical reasons* that *might* partly explain why they failed. And that means we can talk as if they weren’t busted at all?
ON VERIFICATION OF AGW
Now comes the moment of verification and truth: testing the theory back against protocol experience to establish its validity. If it is not a trivial theory, it suggests the existence of unknown facts which can be verified by further experiment. An expedition may go to Africa to watch an eclipse and find out if starlight really does bend relatively as it passes the edge of the sun. After a Maxwell and his theory of electro-magnetism come a Hertz looking for radio waves and a Marconi building a radio set. If the theoretical predictions do not fit in with observable facts (http://bit.ly/JPvWx1), then the theorist (Hansen) has to forget his disappointment and start all over again. This is the stern discipline which keeps science sound and rigorously honest.
Note that CO2 emission growth rate since 1980s is 1.84% (http://bit.ly/P1dXaB), which is greater than the 1.5% for Scenario A of Hansen et al.
Phil. says:
June 15, 2012 at 6:39 pm
“Why would one pretend that instead of using the actual value he shows in Fig 3?”
While reading the graph is already sufficient to show scenario B was claiming about 1 degree Celsius rise over that period, digging around elsewhere to get beyond the paywalled paper link finds http://www.realclimate.org/data/scen_ABC_temp.data showing scenario B as going from 0.121 degrees Celsius on the temperature anomaly scale in 1979 to 1.065 degrees in 2012. That is +0.944 degrees 1979->2012. Slightly different years like 1980->2012 also give similar results for his prediction.
The overall curved black line at http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_May_2012.png shows under 0.3 degrees Celsius meaningful temperature rise over that timeframe. If particularly generous, pretend up to 0.4 degrees.
The claim that Hansen merely erred by assuming a climate sensitivity of 4.2 degrees Celsius per doubling, and that his results fit the 3 degrees used by warmists now, doesn’t work.
Observations were not 3/4.2 or 71% of his scenario B prediction. 0.3 / 0.944 is not remotely close to that, nor is even 0.4 / 0.944. Observed temperature increase was <=~ 32% to 42% of his scenario B prediction at most.
The preceding would already be more like <=~ 1.3 to 1.8 degrees Celsius climate sensitivity per doubling* (see the arithmetic sketch at the end of this comment).
* BUT that is if doing the warmist fallacy of neglecting the warming component from natural sources, falsely pretending there was zero effect from rise in the AMO/PDO meanwhile, etc.; among other examples, natural factors are well illustrated in
http://earthobservatory.nasa.gov/Features/ArcticIce/Images/arctic_temp_trends_rt.gif
Without that dishonest fallacy, concluded climate sensitivity is less.
And all of the preceding also just assumes scenario B for the sake of argument, to highlight how even that pretense wouldn't actually save their bacon.
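For anyone who wants to reproduce the scaling arithmetic above, a minimal sketch (note that linearly rescaling the sensitivity like this is itself only a crude first-order approximation):

```python
# Rescale Hansen's assumed sensitivity by the ratio of observed to
# predicted warming for scenario B (crude first-order approximation).
hansen_sensitivity = 4.2          # deg C per CO2 doubling, as used in the 1988 model
predicted_rise_B = 0.944          # deg C, scenario B 1979->2012 per scen_ABC_temp.data
for observed_rise in (0.3, 0.4):  # deg C, the generous range read off the UAH plot
    ratio = observed_rise / predicted_rise_B
    print(f"observed/predicted = {ratio:.0%} -> "
          f"implied sensitivity ~ {hansen_sensitivity * ratio:.1f} C/doubling")
```

The two cases give roughly 32% and 42% of the scenario B prediction, hence the ~1.3 to ~1.8 C/doubling figures above.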
I remember vividly the build up to the Montreal protocol and it was the ability of CFC’s to destroy ozone that was the motivating factor.
This is the quote of the stated purpose:
“Recognizing that worldwide emissions of certain substances, including ST, can significantly deplete and otherwise modify the ozone layer in a manner that is likely to result in adverse effects on human health and the environment, … Determined to protect the ozone layer by taking precautionary measures to control equitably total global emissions of substances that deplete it, with the ultimate objective of their elimination on the basis of developments in scientific knowledge … Acknowledging that special provision, including ST is required to meet the needs of developing countries…”
It worked too. CFC’s have a half-life of between 60 and 650 years, so almost all of those made are still in the biosphere.
http://www.ciesin.org/docs/003-006/fig1.gif
So how come the global warming argument wasn’t used? Why didn’t they state that CFC’s were six or more orders of magnitude better GHG’s than CO2?
If you think about it: if it was getting rid of CFC production that caused the leveling off of temperature in the face of rising CO2, would it be better to remove CO2 from the biosphere than to stop generating CO2?
Girma:
You wrote: “Note that CO2 emission growth rate since 1980s is 1.84% (http://bit.ly/P1dXaB), which is greater than the 1.5% for Scenario A of Hansen et al.”
Not quite. The figures you cite from CDIAC are only the fossil-fuel contribution to CO2 emissions. There’s still additional CO2 from cement production and a land-use component.
Try this: http://www.tyndall.ac.uk/global-carbon-budget-2010#Jump to Data.
Using these more comprehensive data I calculated a 1.33% compounded annual increase from 1988 to 2010.
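For anyone who wants to redo that with their preferred dataset, the compounding arithmetic is just a compound-annual-growth-rate calculation. A minimal sketch; the endpoint totals below are illustrative placeholders, not the actual Tyndall numbers:

```python
# Compound annual growth rate of total CO2 emissions between two years.
# NOTE: e_1988 and e_2010 are placeholder totals (fossil fuel + cement +
# land use), not the actual Global Carbon Budget values; plug in the real ones.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1.0 / years) - 1.0

e_1988, e_2010 = 7.5, 10.0  # GtC/yr, illustrative placeholders
rate = cagr(e_1988, e_2010, 2010 - 1988)
print(f"compounded annual increase: {rate:.2%}")
```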
Russ R:
Thanks for information. I was looking for Hansen’s 2011 CO2 values to see if I really understood what he was doing. I think the reason my 2011 CO2 value didn’t agree with Hansens was that I used a different value for the starting yearly increase in CO2. I used the actual average annual increase over the period from 1958 to 1981 from this website:
ftp://ftp.cmdl.noaa.gov/ccg/co2/trends/co2_annmean_mlo.txt
which may, or may not, be the data that Hansen used back in the 80’s. I came up with 1.05 ppm. From the 1988 paper’s Introduction, Hansen may have used a value close to 1.5 ppm (the quote is “…with current annual increments of about 1.5 ppmv…”). When I plug 1.5 into my spreadsheet I get 395.7 for the 2011 concentration, compared to the value you give of 393.74 ppm. That’s close enough for me to think I have his methodology correct; I just don’t have the exact numbers he used for the 1981 concentration and annual increment.
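For anyone who wants to replay that spreadsheet calculation in code, here is a minimal sketch of the increment-compounding scheme as I understand it. The 1981 baseline and starting increment are assumptions on my part, not necessarily Hansen’s exact Appendix B inputs:

```python
# Scenario-A-style CO2 extrapolation: the annual concentration increment
# itself grows by 1.5% per year. Baseline and starting increment are
# assumed values, not Hansen's exact Appendix B inputs.
def extrapolate_co2(c_start, inc_start, growth, years):
    c, inc = c_start, inc_start
    for _ in range(years):
        c += inc
        inc *= 1.0 + growth
    return c

c_2011 = extrapolate_co2(c_start=340.1,  # ppmv, ~Mauna Loa 1981 annual mean
                         inc_start=1.5,  # ppmv/yr, "current annual increments"
                         growth=0.015,   # increments grow 1.5%/yr (scenario A)
                         years=30)       # 1981 -> 2011
print(f"extrapolated 2011 CO2: {c_2011:.1f} ppmv")
```

With those inputs it lands around 396 ppmv, within a couple of ppmv of both my spreadsheet’s 395.7 and Hansen’s 393.74, which again suggests the methodology is right and only the exact inputs differ.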
After spending the afternoon rereading his paper, I think there is a great deal to learn in a revisit to all of the papers that culminated in the 1988 paper. The 1988 paper, at the end of section 4.1, states: “The forcing for any other scenario of atmospheric trace gases can be compared to these three cases by computing DeltaT0(t) with the formulas provided in Appendix B.”
I must agree with those that said that the post that started this thread is essentially a puff piece. What is needed is a real revisit with the forcings recalculated for the actual trends of the GHG’s Hansen used.
Phil. says:
June 16, 2012 at 10:28 am
Bill Tuttle says:
June 16, 2012 at 9:42 am: “Did Hansen get it right with Scenario A? No. Did he get it right with Scenario B? No. Did he get it right with Scenario C? No.”
Did he do what he intended, i.e. to bracket the future conditions? Yes.
No, he *failed* to establish a bracket, Phil — kim2ooo June 15, 2012 at 9:38 am: “Temperatures are lower than Hansen forecast they would be if humans disappeared off the planet twelve years ago.”
http://stevengoddard.wordpress.com/2012/06/15/clarifying-hansens-scenarios-worse-than-it-seems/
Try not to make such a fool of yourself.
Physician, heal thyself.
An earlier comment pointed out that Hansen’s paper states that a significantly reduced climate sensitivity (i.e. 1.5C/CO2 doubling or smaller) is needed (they used 4.2 C/CO2 doubling) to make a significant impact on the predicted delta T over just a few decades. We now have data that shows this.
But it’s worse than we thought:
1. The surface temperature record is assumed to be error-free. This is not true. There is arguably a 50% contribution to the measured delta T due to UHI alone. This halves the delta T that can be attributed to trace gas increases.
2. The GISS surface temperature record has been corrupted/adjusted since this paper was published. This invalidates the control runs and all of the parameter fitting that was used to initiate the model. This also raises the question about which version of corrections of whose surface temperatures should be used.
3. The surface temperature record anomalies from 1988 to present are assumed to have negligible contributions from natural variability. Since the 1970’s thru 1990’s was the positive half of a 50 – 60 year natural climate cycle, some of the observed temperature anomalies are due to natural variations. This reduces the delta T that can be attributed to trace gas increases.
4. The 1988 paper has predictions of the mid-troposphere hotspot becoming very pronounced, whereas measurements show it to be nonexistent still. This points to a fundamental flaw in the climate model.
Based on these issues alone, any agreement between predicted T anomalies from this paper and observed temperatures should be attributed to a fortuitous cascade of compensating errors, also known as dumb luck.
old engineer:
“I must agree with those that said that the post that started this thread is essentially a puff piece. ”
I’m similarly in agreement. I’d never heard of Prof. (emeritus) Solheim prior to this piece but his “analysis” here doesn’t make a very good first impression.
“What is needed is a real revisit with the forcings recalculated for the actual trends of the GHG’s Hansen used.”
See here: http://www.esrl.noaa.gov/gmd/aggi/ (KR kindly linked to this above.)
Anyway, this has been a fun thread. I’d say the lower half of it is a good deal more reasoned and less polarized than the top half. And I can happily say, I’ve learned a fair bit along the way.
Phil. says:
June 16, 2012 at 8:01 am
Gunga Din says:
June 15, 2012 at 7:24 pm
dana1981 says:
June 15, 2012 at 6:16 pm
For the record, despite Solheim’s poor analysis, it is true that observed temps have been closest to Scenario C, while emissions have been closest to Scenario B. What this tells you is indeed that Hansen’s model was “wrong” – meaning its sensitivity was too high.
==============================================================
ME: The Wizard of COz was rubbing his crystal ball based on CO2 emissions, not all emissions. He and his model were, and continue to be, just plain wrong. (That little dot at the end of the sentence is a PERIOD!)
===============================================================
PHIL: Nope, another one who can’t read! It was based on all emissions as has been pointed out several times in this thread.
================================================================
ME: I can read (really!) but I hadn’t read some of what’s been said. Apologies.
But we both agree that his model was, indeed, “wrong”. By 150% or 60%? It doesn’t really matter. Either way it’s not trustworthy. My main beef is that policies, very expensive policies both in lost dollars and freedoms, have been made based on this and other faulty predictions and “postdictions”. Example, CO2 is ruled a pollutant because the Wizard of COz said it was.
“To bracket future conditions” – as Orwellian doublespeak as it gets.
KR says:
June 15, 2012 at 9:50 am
“However, the NASA GISS GCM Model II page explicitly says: Historical versions of Model II (e.g., the computer code used in the 1988 simulation runs) are not currently available.. They do have an improved, value-added version by the same name available to download though. It means there are unknown and (publicly) undocumented differences between the computational climate model used by Hansen et al. in the eighties and the current, publicly available version, developed and maintained by the Columbia University EdGCM project.”
I just checked out Model II. Man, I thought Model E was bad… yikes! I encourage everyone with a scientific programming background to check out the Model II source code at the links provided in KR’s post. Count all the GOTOs. And documentation… uh… what documentation? What differential equations? What numerical methods? NASA can do much better than this…
Gunga Din says:
June 17, 2012 at 1:33 pm
Phil. says:
June 16, 2012 at 8:01 am
Gunga Din says:
June 15, 2012 at 7:24 pm
dana1981 says:
June 15, 2012 at 6:16 pm
For the record, despite Solheim’s poor analysis, it is true that observed temps have been closest to Scenario C, while emissions have been closest to Scenario B. What this tells you is indeed that Hansen’s model was “wrong” – meaning its sensitivity was too high.
==============================================================
ME: The Wizard of COz was rubbing his crystal ball based on CO2 emissions, not all emissions. He and his model were, and continue to be, just plain wrong. (That little dot at the end of the sentence is a PERIOD!)
===============================================================
PHIL: Nope, another one who can’t read! It was based on all emissions as has been pointed out several times in this thread.
================================================================
ME: I can read (really!) but I hadn’t read some of what’s been said. Apologies.
So you asserted that the model was based on CO2 only without any facts to back it up, you should apologize for such misleading statements!
But we both agree that his model was, indeed, “wrong”. By 150% or 60%? It doesn’t really matter.
No we don’t; the “By 150% or 60%?” was based on ridiculous mis-statements which had no basis in fact! What is true is that the model used a very good estimate for the upcoming emissions over the next 25 years; the sensitivity used a value which, although reasonable at the time, has proved to be slightly high.
Question to dana1981:
In your “rebuttal” you showed that the forcing from the greenhouse gases/CFCs that have contributed to the recent warming was around 0.7 W/m^2 over the last ~22 years, which you showed was higher than Scenario C:
http://www.skepticalscience.com/pics/SolheimForcings.jpg
You then showed a graph that depicted the average of surface temperature stations vs. Hansen’s forecast:
http://www.skepticalscience.com/pics/HansenSolheim.jpg
The temperatures are lower than Scenario C, yet we have an alleged higher energy imbalance than what was depicted in Scenario C? Something is not adding up.
It seems that your analysis unintentionally confirmed that Dr. Hansen DID overestimate Climate Sensitivity quite substantially.
And, to repeat – while CO2 has progressed roughly as both scenarios A and B projected, we have not gone through Scenario A, due primarily to CFC reductions and a rather lower than expected amount of methane.
At this point the most interesting aspect of climate science is what the next excuse will be.
On the plus side, we can predict with 100% certainty that we’ll be told we need to spend trillions of dollars to reduce CO2 emissions, no matter what temperatures actually do.
It was interesting to see a comment by our little buddy dana1981 that was dead center, re: “an amateur like me”. As best as I can tell, amateurish pretty much fits dana like a glove.
Anthony, as one who learned a long time ago to ignore name-calling, the whole denialist label isn’t something that bothers me that much. I think you are right to point out its use and remind people that there is an obvious agenda behind it, but I’d argue that deleting comments because they use the term isn’t necessary. When dana uses it he reminds everyone not only how mean-spirited and hateful an individual he is, but what lack of credibility he brings. He might as well be a walking, talking billboard for what Skeptical Science is really about.
REPLY: Well you see, Dana Nuccitelli is rather immature (he’s a kid who rides a scooter) in his emotional view of the issue. He complained that commenters and contributors on WUWT were referring to Skeptical Science with the abbreviation “SS”, due to the Nazi connotation it carried, so I made it a policy not to use that abbreviation. I asked him not to use “denier” anymore, but he’s so full of hatred he can’t help himself. So, I just don’t have much sympathy for somebody who makes demands but won’t reciprocate – Anthony
Why didn’t they state that CFC’s were six or more orders of magnitude better GHG’s than CO2?
Well, obviously they needed to save that for 2012 when temperatures hadn’t increased and they needed to explain why. Clearly you are not a climate scientist.