I’m still rather tired from my trip, so I don’t have the energy to get into a detailed read and analysis of this document, which was posted on the IPCC website just 14 hours ago. This is the first time I’ve seen it, though others may know of it.
But, I’m sure WUWT readers will have some insight and we can look at it in more detail tomorrow.
WUWT reader Alan writes in an email:
Searching around the internet just now, I chanced upon an IPCC document, listed as being posted 14 hours ago. Curiously, however, it isn’t a recent document at all, but rather an IPCC pdf from 1990! I know the date from cross-checking and finding it mentioned in the Bulletin of the Atomic Scientists, June 1992. Anyway, its title is Detection of the Greenhouse Effect in the Observations, and it deals with the conditions needed to confirm that global warming is due to a human-induced enhanced greenhouse effect. In other words, the document admits that these conditions have not yet been established but, in Section 8.4, When Will The Greenhouse Effect be Detected, stipulates what MUST occur in the future in order to diagnose a human cause.
The document is here: http://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_chapter_08.pdf
Alan
In case the document disappears, I’ve also loaded it onto WUWT here:
ipcc_far_wg_I_chapter_08 (PDF)
In my very brief scan, I found this section most interesting:
Note that in 1988, two* years earlier, in his testimony before the US Senate, Dr. James Hansen said this in his opening remarks:
Source: http://image.guardian.co.uk/sys-files/Environment/documents/2008/06/23/ClimateChangeHearing1988.pdf
Mind you, this is only 10 years after the fiercely cold North American Winter of 77-78 in which ideas of another ice age were being bandied about in scientific and media circles.
Maybe, giving the benefit of the doubt, they are talking about different things, but there seems to be a profound confidence gap between Dr. Hansen’s testimony and that of IPCC Working Group 1 on the ability to discern “global warming” in the surface temperature record. The disparity is all the more striking given the similarity of wording.
I’ll leave the rest in the hands of our capable readers for further discussion.
* Note: I made a mistake, originally saying 1992 in the title, which was the year the BAS report mentioned the 1990 IPCC FAR document. Corrected. – Anthony
Stephan Pruett .. “Are you aware of any models that accurately predict 1990 to present? If so, could you give us the reference?”
The relevant IPCC projections for 1990-2010 are here. The scenario closest to the actual outcome was A1F1, which gives a rise of 0.32C from 1990 to 2010. Spot on. Hope this helps.
Chris Hanley – judging the veracity of century-scale projections after the first decade is a novel technique. Hope it works out for you …..
Professor Bob Ryan says:
March 18, 2011 at 1:27 am
“…it is almost impossible to establish any meaningful correlation between co2 and temperature. Both processes are non-stationary with the first having one unit root and the second two. Inducing stationarity in both series brings a close to zero correlation on all lags. Correlation does not imply causality but the absence of correlation is a tough problem to surmount for those who do believe there is an empirical relationship.”
You grasp the essence of the problem well. But I have a quibble about the double unit-root attribution to the temperature series. If one stays away from UHI-corrupted data and uses century-long records, the powerful role of natural multidecadal oscillations becomes unmistakable. Inadequately long, corrupted records give misleading indications of nonstationarity and unit-root behavior. I’ve been cross-spectrum analyzing the monthly and annual ROC of Mauna Loa CO2 observations against a variety of temperature series, including UAH anomalies, and have yet to find a case where CO2 leads the temperature series. Gene Zeien’s recollection is exactly backwards from the results I obtain for the intra-annual relationship: coherence is very low and temperature leads CO2 by ~90 degrees (i.e., ~3 months). The lagging behavior of CO2 at all time scales is indeed a tough problem for AGWers to surmount.
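The intra-annual lead-lag relationship described here can be illustrated with a minimal numpy sketch. This is a simplified cross-correlation stand-in for the cross-spectrum analysis being described, run on synthetic monthly series rather than the actual Mauna Loa or UAH data; the 3-month lag is built into the toy data, so the example only shows the mechanics of detecting which series leads.

```python
import numpy as np

def best_lag(x, y, max_lag):
    """Return the lag (in samples) maximizing correlation between x and y.
    Positive result k means x leads y by k samples (x[t] matches y[t+k])."""
    def corr_at(k):
        if k >= 0:
            a, b = x[: len(x) - k], y[k:]
        else:
            a, b = x[-k:], y[: len(y) + k]
        return np.corrcoef(a, b)[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr_at)

# Synthetic monthly series: temperature as an annual cycle, and a
# CO2 rate-of-change series that is the same cycle delayed by 3 months.
t = np.arange(240)
temp = np.sin(2 * np.pi * t / 12)
co2_roc = np.sin(2 * np.pi * (t - 3) / 12)
print(best_lag(temp, co2_roc, 6))  # 3 -- temperature leads by 3 months
```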
Steven Mosher says:
March 18, 2011 at 12:48 pm
It seems to me that very few have even comprehended what they said in the 1990 document.
What’s there to comprehend, but that it’s gearing itself up to produce propaganda masquerading as real science? They don’t even bother hiding the fact. Why should they? They simply carry on regardless because this is supported by big business and government interests. The whitewashes show this.
If there had been any honesty allowed the IPCC would have been dismantled by the corruption of the following 1995 report:
http://www.congregator.net/articles/majordeception.html
A Major Deception on Global Warming
by Frederick Seitz
Wall Street Journal, June 12, 1996
The following passages are examples of those included in the approved report but deleted from the supposedly peer-reviewed published version:
“None of the studies cited above has shown clear evidence that we can attribute the observed [climate] changes to the specific cause of increases in greenhouse gases.”
“No study to date has positively attributed all or part [of the climate change observed to date] to anthropogenic [man-made] causes.”
“Any claims of positive detection of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced.”
“Whatever the intent was of those who made these significant changes, their effect is to deceive policy makers and the public into believing that the scientific evidence shows human activities are causing global warming.”
======
From that point on, the IPCC geared itself up to creating the data required of it, the purpose for which it had been created. That is simply a fact, proving it is not at all interested in real science but in pushing someone’s agenda on a global scale, with ‘science’ deemed the method of choice for obtaining the desired control. And it’s been extremely successful on all fronts.
The only way to effectively counteract that is in the spread of knowledge about the GlobalWarmingCon. To counteract disinformation with information.
I’m having a tough time arguing against CAGW these days; I keep cracking up laughing. The poor believers are so naked these days it’s comical. I find many of them will agree we also need to ban the spread of dihydrogen monoxide before it’s too late.
Soon I shall politely invite my local politicians to publish the science upon which they base the idiotic social policies they are pushing; after all, the debate is over, eh?
The IPCC, much like the CRU emails, is a gift that keeps on giving.
Mr. Watts, after reading the comments to this point, your hope that “…WUWT readers will have some insight…” seems unfounded.
Phil Clarke at 3:57 pm:
“…Chris Hanley – judging the veracity of century-scale projections after the first decade is a novel technique. Hope it works out for you …..”.
Hah! that’s cute.
Mr Clarke, you are doing precisely that with your claim “….as we now know, in time the observations nearly always end up in the high end of IPCC projection ranges…” (6:33 am).
If you were not referring to 1990 IPCC projections of temperature and sea level (which so far are proving to be grossly above observations), what projections were you referring to?
If you were referring to the first decade of this century, the observed temperature increase is virtually 0°C.
Chris,
Temperature, sea level and CO2, as described in the paper I linked to above.
cheers
Sky: thanks for your comment. I have been principally focused on the post-1959 data, looking for dependency in short-run temperature volatility. Your results, if they hold up, are very interesting indeed, and tend to support one conclusion of Soares’ paper. It’s not easy in this field, I know, but I do hope you can get your work published.
Note to Steven Mosher: your comment to Chris on multi-pattern optimal fingerprinting (I think that is what you meant) was intended as a put-down, but perhaps claims more than the method would support. As Hasselmann (1997) says: ‘the term attribution can still be interpreted only in the limited sense of consistency, since the possibility cannot be excluded that the retrieved signal can be explained by other forcing mechanisms not considered in the analysis’.
Phil Clarke,
Hansen’s forecasts, as well as the IPCC’s predictions, are off by more than 50%. You are listening to someone’s spin and not looking at the actual forecasts made.
For example, here are the IPCC AR4 predictions made from 2003 versus today. Not so accurate so far.
http://img195.imageshack.us/img195/1584/ar41979mmmeanvshadcrut3.png
And while one cannot actually find the model predictions made in FAR and SAR (I have a database of the TAR multi-model predictions, and they are actually the worst made yet), here is a chart showing how close the IPCC FAR, SAR and TAR forecasts are to reality.
http://img80.imageshack.us/img80/7416/ipccpredictions1.png
And Hansen’s 1988 predictions? You must be kidding, right? Here they are, if you have never looked at the actual numbers before.
http://www.realclimate.org/data/scen_ABC_temp.data
Bill Illis -“I have a database of the TAR multi-model predictions and they are actually the worst made yet”
Bill – which of these assertions is false?
1. The TAR models projected an increase under scenario A1F1 of 0.32C 1990-2010.
2. The actual trend increase was indeed 0.32C (HADCRUT)
3. A1F1 is the scenario closest to actual emissions over the period.
Hansen’s 1988 model was programmed with a climate sensitivity that is now estimated to be too high; the fact that his projections are tracking high, after several decades of accuracy, confirms that the IPCC estimate of climate sensitivity is likely correct.
The period since 2003 is way too short to be a useful measure of predictions; even so, the observations are well within the 95% range of the model runs (the grey band).
Your second chart appears to be an update of IPCC AR4 Figure TS.26, except the IPCC plots smoothed annual values while the ‘update’ switches to monthly values, which obviously show much greater variance. Given the fuss made around here about splicing different values into the same plot, this seems an odd move. This presumably explains why it shows temperatures plunging in 2010, when plotting the annual average, as used in the original IPCC chart, would show the second or third warmest year in the record. Naughty.
The IPCC chapter from which the graph is taken has this:
“Previous IPCC projections of future climate changes can now be compared to recent observations, increasing confidence in short-term projections and the underlying physical understanding of committed climate change over a few decades. Projections for 1990 to 2005 carried out for the FAR and the SAR suggested global mean temperature increases of about 0.3°C and 0.15°C per decade, respectively.[10] The difference between the two was due primarily to the inclusion of aerosol cooling effects in the SAR, whereas there was no quantitative basis for doing so in the FAR. Projections given in the TAR were similar to those of the SAR. These results are comparable to observed values of about 0.2°C per decade, as shown in Figure TS.26, providing broad confidence in such short-term projections.”
cheers!
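The comparison in points 1-2 above amounts to fitting a least-squares trend to annual anomalies and quoting it as a total rise over 1990-2010. A minimal sketch of that calculation, using synthetic anomalies built around an assumed 0.016 C/yr trend rather than the actual HADCRUT series:

```python
import numpy as np

# Synthetic stand-in anomalies, NOT HADCRUT data: a linear trend plus noise.
years = np.arange(1990, 2011)
rng = np.random.default_rng(0)
anoms = 0.016 * (years - 1990) + rng.normal(0.0, 0.05, years.size)

slope = np.polyfit(years, anoms, 1)[0]   # least-squares trend, deg C per year
rise = slope * (2010 - 1990)             # total trend rise over the 20 years
print(round(rise, 2))
```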
Re Phil Clarke:
‘What we do see, every time his name comes up, is an amazing ad hominem display of bile:’
I share your distaste for bile, ad hominem or any other variety. However, Hansen’s world view is hardly a model of restraint…
From eadavison.com
A recent book by one Keith Farnish includes:
Civilization has created the perfect conditions for a terrible tragedy on the kind of scale never seen before in the history of humanity.
Farnish proposes preventing this tragedy by:
… removing grazing domesticated animals, razing cities to the ground, blowing up dams …
And if governments won’t co-operate:
… if …removing a sea wall or a dam will have a net beneficial effect on the natural environment then, however you go about it – explosives, technical sabotage or manual destruction – the removal would be a constructive action.
Unasked, Hansen offered his view:
Keith Farnish has it right: time has practically run out, and the ‘system’ is the problem.
Tough to defend a man who supports blowing things up in pursuit of saving the planet.
Phil Clarke says:
March 19, 2011 at 8:31 am
The cut-off period for submission of TAR model runs was early 2000. It was published in 2001. So all of your comparisons to 1990 are not really valid. They had all the actual temperature data up to 1998 or 1999 or so to work with and, thus, the 1990 to 2000 number is a “hindcast” not a projection.
Secondly, the numbers you linked to are, in fact, 10-year averages – the average of 2000-2010 for example, not the year 2010.
So, we can go back and put the IPCC TAR climate model projections on the same baseline as Hadcrut3 today (which turns out to be the same baseline the IPCC TAR used, 1961-1990), and what do we get?
http://img21.imageshack.us/img21/8867/ipcctarvshad3feb11.png
One has to be careful with the IPCC because they are very careful in making sure it is hard to check their projections.
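The baseline alignment described here is mechanically simple: express each series as anomalies relative to its own mean over a common base period. A sketch of that operation, on a synthetic series rather than actual model output or Hadcrut3 data:

```python
import numpy as np

def rebaseline(years, values, base=(1961, 1990)):
    """Return the series as anomalies relative to its own mean over the
    base period, so model output and observations share a common zero."""
    years = np.asarray(years)
    values = np.asarray(values, dtype=float)
    mask = (years >= base[0]) & (years <= base[1])
    return values - values[mask].mean()

# Toy demo with a synthetic linear series (not real data):
yrs = np.arange(1950, 2011)
series = 0.01 * (yrs - 1950) + 14.0     # absolute temperatures
anoms = rebaseline(yrs, series)         # base-period mean is now ~0
```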
Bit of a stretch, Ted. Hansen provided a single sentence for the ‘blurb’ of a book, saying he agrees with its central thesis. In one passage of the book the author discusses the ethics of ‘direct action’. Therefore Hansen is in favour of blowing up dams. Thanks, got that. Now find me a direct quote where Hansen endorses violence.
One expects this kind of desperate nonsense from Joanne Nova, but disappointing to find it repeated on a ‘science’ site. Anyone want to discuss any of the science in Hansen’s papers?
Phil Clarke asks “Anyone want to discuss any of the science in Hansen’s papers?” Why not? First up, Hansen’s staffers Schmidt, Ruedy, Miller and Lacis (JGR 2010 draft) found that “With a straightforward scheme for allocating overlaps, we find that water vapour is the dominant contributor (c50% of the effect) [of atmospheric long-wave absorbers], followed by clouds (c25%) and then CO2 with c20%”. That is a rather striking departure from the “science” in IPCC AR4 WG1 Chapters 2 and 9, which find with better than 90% certainty that all GHGs (excluding water vapour) account for “most” (i.e., more than 50%) of humans’ “substantial warming influence on climate” (p. 671). So within 3 years of AR4, Hansen’s team has reduced the role of CO2, by far the largest contributor to the GHGs in WG1, to a mere 20%.
However, Schmidt & co go on to say: “In a doubled CO2 scenario, this allocation is essentially unchanged, even though the magnitude of the total greenhouse effect is significantly larger than the initial radiative forcing, underscoring the importance of feedbacks from water vapour and clouds to climate sensitivity.” So now the story is that GHGs cause feedbacks! Where is the evidence for that?
The truth is that Hansen’s team has never published any econometric analysis establishing their attributions of temperature change to changes in atmospheric GHGs, especially CO2, for the very good reason that they cannot. Why do they never test their models against the data AND report the results? Again, because multivariate regressions of the relative roles of CO2 and water vapour find no role for CO2 in raising temperature and therefore none for raising water vapour.
However simple chemistry tells us that combustion of hydrocarbon fuels releases both CO2 and H2O, in the ratios of 2:1, but the RF of the latter is at least 50% higher than for CO2. Why do the IPCC, Hansen, and Schmidt NEVER publish the formulae for that combustion? The formulae prove that H2O is not a feedback but a direct outcome from hydrocarbon combustion.
Now tell us what’s wrong with getting more water along with CO2 and with temperature increases of even the IPCC’s 3 oC by 2100, given that all crops everywhere have higher yields when it is warmer, wetter, AND there’s more CO2.
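The combustion arithmetic invoked here can be checked with basic stoichiometry. The sketch below uses methane, the simplest hydrocarbon, where the balanced equation gives two moles of water per mole of CO2; the ratios differ for heavier fuels, so this is an illustration of the calculation, not a general result.

```python
# Stoichiometry for methane:  CH4 + 2 O2 -> CO2 + 2 H2O
# i.e. 2 mol H2O per 1 mol CO2; exact ratios depend on the fuel burned.
M_CO2, M_H2O = 44.01, 18.02        # molar masses, g/mol
co2_mass = 1 * M_CO2               # g CO2 per mole of CH4 burned
h2o_mass = 2 * M_H2O               # g H2O per mole of CH4 burned
h2o_fraction = h2o_mass / (co2_mass + h2o_mass)
print(round(h2o_fraction, 2))  # 0.45 -- for methane, well over a third by mass
```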
Bill Illis @1:21 pm:
“…So all of your comparisons to 1990 are not really valid. They had all the actual temperature data up to 1998 or 1999 or so to work with and, thus, the 1990 to 2000 number is a “hindcast” not a projection…”.
The IPCC projections/predictions/forecasts seem to be a moving target.
They are ‘projections’ of data much of which has already been observed, for instance from 1990 to 2000 the temperature had already risen 0.2°C.
My posts are taking an age to appear so this is probably my last contribution to this thread. I’ll confine myself to a few errors of fact.
Firstly, thanks to Bill for confirming that the TAR 1990-2010 temperature trend projections were spot on. It is not the case, though, that the models are tuned using historical trend data; as Rahmstorf et al. (2007) state:
‘Although published in 2001, these model projections are essentially independent from the observed climate data since 1990: Climate models are physics-based models developed over many years that are not “tuned” to reproduce the most recent temperatures,…”
Secondly, the chart of HADCRUT monthly temperatures versus ensemble AR4 projections does not make a lot of sense. The individual model runs do show a lot of stochastic variability (aka weather), but this gets averaged out in the combination process. So one can cherry-pick a monthly anomaly of 0.2 in a La Nina period, say, and claim this proves the models are ‘out’, when the more appropriate comparison, the smoothed annual average for the corresponding year, would be more than 0.5C. Unconvincing.
Tim, you’ve managed to discuss ‘Hansen’s’ science without mentioning a single paper of his, which rather makes my point. You’ve also confused the constituent contributions to the greenhouse effect with the consequent warming after a single component is increased. The water vapour feedback has been observed and measured and is as predicted by the models, (Dessler et al 2008 http://geotest.tamu.edu/userfiles/216/Dessler2008b.pdf ). Attribution is discussed at RC here http://www.realclimate.org/index.php/archives/2006/10/attribution-of-20th-century-climate-change-to-cosub2sub/ and there is a rich literature on model-data comparisons. You could start with the CMIP project.
Bye for now.
Phil Clarke says:
March 19, 2011 at 8:31 am
3. A1F1 is the scenario closest to actual emissions over the period.
Hansen’s 1988 model was programmed with a climate sensitivity that is now estimated to be too high; the fact that his projections are tracking high, after several decades of accuracy, confirms that the IPCC estimate of climate sensitivity is likely correct.
—————–
The A1F1 emissions numbers are irrelevant. It is the concentration that stays in the air that is relevant, and that is tracking A1B. Plants and oceans are absorbing half of the emissions, so using emissions as the metric is on the wrong track. This is especially true since the Bern carbon model used by the IPCC is also based on the wrong assumptions about ocean and plant absorption. A1B is the best-matching scenario, and that is why the IPCC quotes it most often.
The fact that Hansen’s 1988 climate model used 4.2C per doubling is also irrelevant. Actual temperatures are tracking below even the predictions made for the scenario in which GHGs stopped increasing in 2000. Over a short time period, and incorporating the 35-year lag built into Hansen’s model, the difference between 3.0C and 4.2C is tiny. It is the theory that is wrong.
http://img69.imageshack.us/img69/5460/sscenariob.png
“The trends are probably most useful to think about, and for the period 1984 to 2009 (the 1984 date chosen because that is when these projections started), scenario B has a trend of 0.26+/-0.05 ºC/dec (95% uncertainties, no correction for auto-correlation). For the GISTEMP and HadCRUT3 data (assuming that the 2009 estimate is ok), the trends are 0.19+/-0.05 ºC/dec (note that the GISTEMP met-station index has 0.21+/-0.06 ºC/dec). Corrections for auto-correlation would make the uncertainties larger, but as it stands, the difference between the trends is just about significant.
Thus, it seems that the Hansen et al ‘B’ projection is likely running a little warm compared to the real world, but assuming (a little recklessly) that the 26 yr trend scales linearly with the sensitivity and the forcing, we could use this mismatch to estimate a sensitivity for the real world. That would give us 4.2/(0.26*0.9) * 0.19=~ 3.4 ºC. Of course, the error bars are quite large (I estimate about +/-1ºC due to uncertainty in the true underlying trends and the true forcings), but it’s interesting to note that the best estimate sensitivity deduced from this projection, is very close to what we think in any case. For reference, the trends in the AR4 models for the same period have a range 0.21+/-0.16 ºC/dec (95%). Note too, that the Hansen et al projection had very clear skill compared to a null hypothesis of no further warming”
Realclimate update to model-data comparisons.
“Given that the Scenario B radiative forcing was too high by about 5% and its projected surface air warming rate was 0.26°C per decade, we can then make a rough estimate regarding what its climate sensitivity for 2xCO2 should have been:
dT/dF = (4.2°C * [0.20/0.26])/0.95 = 3.4°C warming for 2xCO2
In other words, the reason Hansen’s global temperature projections were too high was primarily because his climate model had a climate sensitivity that was too high. Had the sensitivity been 3.4°C for a 2xCO2, and had Hansen decreased the radiative forcing in Scenario B slightly, he would have correctly projected the ensuing global surface air temperature increase.
The argument “Hansen’s projections were too high” is thus not an argument against anthropogenic global warming or the accuracy of climate models, but rather an argument against climate sensitivity being as high as 4.2°C for 2xCO2, but it’s also an argument for climate sensitivity being around 3.4°C for 2xCO2. This is within the range of climate sensitivity values in the IPCC report, and is even a bit above the widely accepted value of 3°C for 2xCO2.”
Skeptical Science. http://www.skepticalscience.com/Hansen-1988-prediction-advanced.htm
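The rescaling quoted above is simple arithmetic and easy to verify: scale the model’s sensitivity by the ratio of observed to projected warming rates, then correct for the forcing overshoot.

```python
# Reproduce the rescaling arithmetic quoted from Skeptical Science above.
modeled_sensitivity = 4.2   # deg C per 2xCO2, Hansen's 1988 model
observed_rate = 0.20        # deg C per decade, observed
projected_rate = 0.26       # deg C per decade, Scenario B projection
forcing_factor = 0.95       # Scenario B forcing ran about 5% high

implied = modeled_sensitivity * (observed_rate / projected_rate) / forcing_factor
print(round(implied, 1))  # 3.4
```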
‘Bit of a stretch, Ted. Hansen provided a single sentence for the ‘blurb’ of a book, saying he agrees with its central thesis.’
Hmm… Phil has a point but there is more to be said.
Regarding ‘discussion of the science’.
That was where I started when the CRU email hack/leak (I hope I live long enough to discover which) woke me to what is going on. I set about, within the limits of my expertise (I’m a mediocre chemist and a passably competent statistician), judging the merits of the AGW case. After perhaps a couple of hundred hours, I found myself no wiser; in fact, scarcely better informed.
‘Tell me your sect and I will anticipate your argument.’
Given that the science is ultimately intended to persuade a voting public, many of whom would have difficulty spelling thermodynamics, it seemed that the only way – or at any rate my only way – to decide the issue was to examine the motivations of the principal players, both individuals and organisations. This examination would at least be intelligible to the layman.
The results, of which the Hansen quote is a very small example, are at eadavison.com.
If Phil or anyone else cares to offer a critique of the entire argument it would be appreciated.
Regarding ‘bit of a stretch’.
It behoves a man of extreme intelligence, education and, above all, influence, to consider his public utterances and their potential consequences more carefully than the rest of us.
Either Hansen had read the book and knew exactly what he was advocating, in which case my argument is no stretch, or he didn’t read enough of it to appreciate its ramifications, in which case he is prepared to endorse a ‘central thesis’, Direct Action, without qualification of, or even reference to, its obvious potential for violence. Hardly the conduct of a man of mature judgement.
OK Phil, let’s go back to Hansen, Lacis, Ruedy, Sato and Wilson (1993), How Sensitive Is the World’s Climate? It’s a splendid production: great photos, innumerable graphs with curves in sync, but not a single regression report. Two of its authors (Lacis and Ruedy) co-authored the 2010 paper by Schmidt et al. that I discussed above.
Hansen et al 1993 make this statement: “If the amount of CO2 in the air increases, the atmosphere becomes more opaque, temporarily reducing thermal emission to space”. Phil you must admit that is the core belief of Hansen et al, so I hate to tell you there is no evidence to support that statement.
Let’s try it on data from NOAA (NREL) for New York/JFK from 1960 to 2006. The R2 (0.001) and t (1.01) statistics inform us there is no statistically significant relationship between atmospheric CO2 and opacity of the air.
But the next set of results shows that changes in atmospheric CO2 and opacity levels also have no impact on annual mean minimum temperatures, whereas changes in atmospheric water vapor “H2O” (aka precipitable water vapor at NOAA) have an enormously powerful impact (t = 9.64; R2 = 0.69). Funnily enough, Team Hansen has NEVER, in their thousands of papers decrying the CO2 emissions from hydrocarbon combustion, studied the chemical formula for that combustion, which indicates that at least a third by mass of the total emissions is H2O.
The next results show that sky opacity is strongly associated with changes in H2O (t = 2.12), and again not at all with changing atmospheric CO2 (t = 0.35).
Finally, if WUWT had the space I could show very similar results across the whole of the USA, from Hawaii to Alaska to California and New Orleans.
Phil, where are you based, and I will do it for you, or you can access the data yourself from
http://rredc.nrel.gov/solar/old-data/nsrdb/1961-90/dsf/data and http://rredc.nrel.gov/solar/old-data/nsrdb/1991-2005/statistics/data
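The simple-regression diagnostics quoted above (slope t-statistic and R2) can be sketched in a few lines of numpy. The data below are synthetic stand-ins, not the NREL station records, so the numbers only illustrate the mechanics: a trending predictor regressed against an unrelated response should yield a t-statistic well below ~2 and a near-zero R2.

```python
import numpy as np

def ols_stats(x, y):
    """Slope, slope t-statistic and R^2 for a simple regression y = a + b*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    ss_res = resid @ resid
    r2 = 1.0 - ss_res / ((y - y.mean()) ** 2).sum()
    se_slope = np.sqrt((ss_res / (n - 2)) / ((x - x.mean()) ** 2).sum())
    return beta[1], beta[1] / se_slope, r2

# Synthetic example: a CO2-like trending predictor vs. pure noise.
rng = np.random.default_rng(42)
co2_like = np.linspace(315, 380, 47)     # trending predictor, 47 "years"
noise_y = rng.normal(0.0, 1.0, 47)       # response unrelated to predictor
slope, t_stat, r2 = ols_stats(co2_like, noise_y)
print(round(r2, 3))
```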
Well, that’s climate science for you. There is an old saying “those who can, do. Those who can’t, teach”. In this context it should read “those who can do real science do it, those who can’t, teach climate change”.
Some of the results I mention here are in my paper now up at http://www.lavoisier.com.au
To conclude: none of Team Hansen’s myriad papers do the basic statistical work needed to qualify them as scientific.
Ted Davison says:
March 20, 2011 at 12:59 pm
Whew! Your site is the best sequencing of actions and thought processes of the Hokey Team et al. I’ve seen yet. Pretty hard to stomach at one go, though.
Recommended to all:
Anthropogenic Global Bias
Re post from Brian H
Thanks. I do hope to see my arguments subjected to criticism. (I was myself surprised at what turned up once I’d started collating.)