Dr. David Whitehouse of the GWPF expounds on the “prime statistic of global warming” graph and its failure, as first reported here.
The Leaked AR5 Report And Global Temperature
Whatever one’s view about the leaking of the draft IPCC AR5 report, it does make fascinating reading, and given the public scrutiny it is now receiving, it will be interesting to see what parts of it are changed when the final report is released in a year or so.
One part of it that should be changed is the section on global surface temperature data and its interpretation.
The analysis of global combined land and ocean surface temperature in AR5 is inadequate for what it admits is seen as the prime statistic of global warming. It is highly selective in the references it quotes and in the time periods it uses, which obscures important, albeit inconvenient, aspects of the temperature data. It is also poorly drafted, often making a strong assertion and then, somewhat later, qualifying if not contradicting it by admitting its statistical insignificance. This leaves the door open for selective and incomplete quoting.
In Chapter 2 the report says that the AR4 report in 2007 said that the rate of change of global temperature in the most recent 50 years is double that of the past 100 years. This is not true and is an example of blatant cherry-picking. Why choose the past 100 and the past 50 years? If you go back to the start of the instrumental era of global temperature measurements, about 1880 (the accuracy of the data is not as good as in later years, but there is no reason to dismiss it as AR5 does), then of the 0.8 – 0.9 deg C of warming seen since then, 0.5 deg C, i.e. most of it, occurred prior to 1940, when anthropogenic effects were minimal (according to the IPCC AR4).
AR5 admits that, of the warmest years on record, the “top ten or so years are statistically indistinguishable from one another.” This is sloppy. The “or so” is significant and should be replaced with a more accurate statement. Despite the admitted statistical indistinguishability of the past ten years (at least), AR5 then goes on to say that 2005 and 2010 “effectively” tied for the warmest years! There is no mention of the contribution to global temperature made by the El Nino in those years!
It is in its treatment of the recent global temperature standstill that AR5 is at its most unevenhanded. It says that much attention has been focused on the “apparent flattening in Hadcrut3 trends,” and it says that “similar length phases of no warming exist in all observational records and in climate model simulations.”
No, they haven’t. The IPCC says that the time when anthropogenic influence dominated began between 1960 and 1980. AR5 takes 1979 – 2011, the period when temperatures started rising after a 40-year standstill, as its period for analysis. What is obvious from the data is that the past 16 years of no global temperature increase are unusual and are not an “apparent flattening.” It is a total flattening for 16 years (as AR5 confusingly admits later on), just over half the duration of the recent warming spell. Flat periods have existed before, but they occurred in the era when mankind’s influence was not significant. The 16-year flatness since mankind became the prime climatic influence has been the cause of much discussion in the peer-reviewed literature, something that AR5 does not reflect.
AR5 goes on to say that with the introduction of Hadcrut4 (and its inclusion of high-latitude northern hemisphere data) there is now a warming trend. No, there is not. Look at the Hadcrut4 data and, as the GWPF has demonstrated, it is warmer than Hadcrut3, but it is also flatter for the past 15 years. AR5 also adds that “all products show a warming trend since 1998.” That this is not the case seems to be something that AR5 concedes a little later in the report, when it admits that none of the warming trends it quotes are statistically significant!
Referenced And Dismissed
Consider AR5’s summary: “It is virtually certain that global near surface temperatures have increased. Globally averaged near-surface combined land and ocean temperatures, according to several independent analyses, are consistent in exhibiting warming since 1901, much of which has occurred since 1979.”
Nobody doubts that the world has warmed since 1901. But why choose 1901, and how much of the warming is natural and how much anthropogenic? As we have seen, that last claim is wrong.
AR5 says: “Super-imposed upon the long-term changes are short-term climatic variations, so warming is not monotonic and trend estimates at decadal or shorter timescales tend to be dominated by short-term variations.”
So since 1979 we have had about 16 years of warming and 16 years of temperature standstill. Which is the short-term natural variation? The warming or the standstill?
AR5 says: “A rise in global average surface temperatures is the best-known indicator of climate change. Although each year and even decade is not always warmer than the last, global surface temperatures have warmed substantially since 1900.” Nobody, of whatever “skeptical” persuasion, would disagree with that.
I can’t help but conclude that the pages of the GWPF contain a better analysis than is present in AR5, which is a mess written from a point of view that wants to reference the recent standstill in global temperatures but not impartially consider its implications.
The unacknowledged (in AR5) problem of the global temperature standstill of the past 16 years is well shown in its fig 1.4, seen at the head of this article. It shows the actual global temperature against the projections made by previous IPCC reports. It is obvious that none of the IPCC projections were any good. The inclusion of the 2012 data, which I hope will be in the 2013 report, will make the comparison between real and predicted temperatures appear even starker.
In summary, the global temperature standstill of the past 16 years is a real effect that in any realistic and thorough analysis of the scientific literature is seen to be a significant problem for climate science; indeed, it may currently be the biggest problem in climate science. To have it swept under the carpet through a selective use of data, reference material and timescales is not going to advance its understanding, and is also a disservice to science.
Feedback: david.whitehouse@thegwpf.org
There is only one group dumber than the climate change scientist modelers and forecasters, and that is any public policy leader who takes these pathetic failures and looks the other way to do the wrong thing with the public trust. It is worse than taking bribes and looking the other way for personal gain. This cheats a generation with a pack of lies that is decidedly exposed. The cheats prefer fully cloaked lies as a rule.
“it will be interesting to see what parts of it are changed when the final report is released in a year or so.”
Easy: anything that contradicts the CO2 meme.
Thanks to David Whitehouse of the Global Warming Policy Foundation for a clearly written critique of the IPCC AR5 second order draft. Critiques like this will enable the authors of AR5 to achieve a much more respectable product, if they have a mind to preserve their credibility, that is.
Thanks again to Alec Rawls for his “liberation” of this draft for public perusal and comment. By this commendable act, he has served the stated policy of the IPCC, which publicly espouses a transparent process of writing these reports, a policy with which everyone agrees, almost.
Quote: “the rate of change global temperature in the most recent 50 years is double that of the past 100 years. This is not true and is an example of blatant cherry-picking.”
It seems that the numbers about the rate of change are approximately correct. 0.5C in the first 100 years and 0.4 in the last 50 means roughly the same change in the last 50 years as the 100 years before.
Aside: Among other interpretations, the data suggests a rapidly decreasing sensitivity to CO2 forcing.
The authors are caught on the horns of a dilemma. If they ignore the flattening in the past 15-16 years, even the MSM won’t be able to take them seriously. If they look at it head-on, they may badly damage the whole rationale for the IPCC and the monetary support of thousands of rent-seekers who in turn bolster the IPCC process. Damned if they do, damned if they don’t.
As David states, the IPCC chapter on temperature is a mess and it is at odds with the published peer-reviewed literature. It is very selective, excluding numerous published empirical analyses.
The reason why the climate has presented periods of cooling and steady temperature since 1850 is that there are significant natural climate oscillations working at multiple scales, such as oscillations with periods of about 9.1 yr, 10-11 yr, 20 yr and 60 yr, plus an upward trend which is only partially produced by anthropogenic warming. In particular, the steady temperature since 2000 was caused by the cooling phase of both the quasi 20 yr and 60 yr oscillations, which compensated for the anthropogenic warming during the same period.
The climate models used by the IPCC simply do not reproduce any of these oscillations, probably because they are missing astronomical forcings. In fact, these oscillations are synchronous with astronomical oscillations.
All of the above is extremely well demonstrated in my peer-reviewed papers.
My updated model is here
http://people.duke.edu/~ns2002/#astronomical_model_1
http://people.duke.edu/~ns2002/scafetta-forecast.png
a summary of my research is here
http://people.duke.edu/~ns2002/#astronomical_model
My model agrees much better with the temperature than the IPCC models and is correctly forecasting climate trends since 2000.
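For readers unfamiliar with the form of such empirical models, the toy sketch below shows a "sum of oscillations plus trend" construction of the kind described in this comment. The periods follow the ones named above, but the amplitudes, phases and trend are invented purely for illustration; this is not the fitted model at the links.

import numpy as np

t = np.arange(1850, 2051, 1.0)                       # years
periods = [9.1, 10.4, 20.0, 60.0]                    # years, as listed in the comment
amps = [0.03, 0.03, 0.04, 0.11]                      # deg C, purely illustrative
phases = [0.0, 1.0, 2.0, 3.0]                        # radians, arbitrary

trend = 0.004 * (t - 1850)                           # illustrative upward trend, deg C
oscillations = sum(a * np.cos(2 * np.pi * (t - 1850) / p + ph)
                   for a, p, ph in zip(amps, periods, phases))
model = trend + oscillations                         # toy anomaly series

print("modeled anomaly change 2000->2010:",
      round(model[t == 2010][0] - model[t == 2000][0], 3), "deg C")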
I say scrap the IPCC and their assessment reports. What use are they? Does anyone know what benefits have arisen from the first four assessment reports? Have any lives been saved? How many people, if any, are healthier or have better living conditions as a result of the assessment reports? Surely the money wasted on the IPCC could be better spent?
Does anyone know who decides what “0” is?
Also, I was wondering why the graph made it seem the projections aren’t far off, until I realized they based the temperature on -0.2 degrees instead of 0 degrees.
David Whitehouse:
Yes! Well said. Thank you.
Everybody and especially “Policymakers” needs to read your assessment.
Richard
First sentence should read “David Whitehouse” not “David White.”
[Thanks, fixed. — mod.]
“Which is the short-term natural variation? The warming or the standstill?”
This part could be explored. Essentially it means “whatever we have not accounted for”. Any part of warming or cooling effects they later might account for are then not considered to be part of Natural Variability.
If they want to use that term then they need to use it in every explanation, explaining for each cherry-pick that they don’t know how much of the warming or cooling was due to Natural Variability.
This figure puzzled, and continues to puzzle, me. I assume that the colored shaded areas are the two-sigma boundaries, although that is by no means clear — and how in the world would one compute sigma? These are model predictions — are these related to the spread in results integrated out 1, 5, 10 years? If so, this figure is openly unbelievable, suggesting that system state and non-Markovian history dominate the actual supposed forcing the further out you get — otherwise the error ranges should saturate at the natural variance around some concrete prediction. Are they related to the spread in outcomes between completely different models, or outcomes associated with uncertainties in internal model parameters leading to different outcomes? How can one interpret this graph in terms of complementary error functions, cumulative distribution functions, the usual apparatus of statistics to talk about confidence or probability without a full Bayesian analysis of the uncertainty in each contributing parameter? Such an analysis would strongly reduce confidence in everything, of course, in a nonlinear multivariate chaotic model — to the point where its predictions were utterly insignificant as the confidence interval was amplified to being too large to constrain any possible future trajectory.
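If the shading really is an ensemble-spread envelope, the most charitable construction would be something like the toy sketch below, which treats the per-year spread across a set of model runs as a two-sigma band. The runs array is synthetic and purely illustrative; nothing in the figure caption confirms that this is how the band was actually built.

import numpy as np

rng = np.random.default_rng(0)
n_models, n_years = 20, 33                       # e.g. 1990 onwards, purely illustrative
years = np.arange(1990, 1990 + n_years)
# fake ensemble: a small common trend plus model-to-model noise
runs = 0.02 * (years - 1990) + rng.normal(0.0, 0.1, size=(n_models, n_years))

ensemble_mean = runs.mean(axis=0)                # per-year multi-model mean
ensemble_sd = runs.std(axis=0, ddof=1)           # per-year inter-model spread

lower = ensemble_mean - 2.0 * ensemble_sd        # "two-sigma" envelope, if that is
upper = ensemble_mean + 2.0 * ensemble_sd        # really what the shading means

for y, lo, hi in zip(years[:3], lower[:3], upper[:3]):
    print(f"{y}: [{lo:+.2f}, {hi:+.2f}] deg C")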
But let’s imagine that a miracle happened and there is a completely defensible interpretation of the colored shaded regions as 95% confidence levels for some sort of presumed future forcing by CO_2, business as usual, and so on.
Next, we need to look at the “error bars” on the annual global temperature data — what do they mean? Are they computed from the variance of the contributing data? Are they assessed based on the “theoretical” uncertainties in the measurement process? Did somebody look at the averages and go “gee, I think the probable error in these is around this big”? Or worse, did they go “if we don’t have any error bars at all this becomes unbelievable, so let’s draw some, and let’s make sure they are big enough not to falsify our hypotheses”?
I rather think they are the latter — for one thing, they are all the same size! According to this figure, annual global temperature is known to within 0.075 C, period, for every year from 1990 to the present. Looking at the temperature spread among the “observed” sources, this is puzzling. For one thing, even treating the samples as “independent and identically distributed” which they almost certainly are not — one of the sources might well have a much larger variance than the others — their mutual distribution does not look particularly Gaussian, although how could one tell with only 36 samples.
Then there is the usual problem with picking their starting year. The temperature in 1990 was, as it happens, almost identical to the temperature in 1980 (according to the UAH LTT, anyway). Move the starting point of their shaded curves back to 1980, and the entire argument is over! And they lose. At least we should be grateful that they didn’t start the curves at the nadir caused by Mount Pinatubo cooling, I suppose. Starting in 1990 means that the 1998 El Nino bump and subsequent flat remain in their predicted range. Starting in 1980 puts the present so far out of their predicted range — even allowing for the limits on this range continuing to grow linearly without bound instead of saturating the way any sort of sensible climate prediction would for it to have any meaning anyway — that the present completely falsifies all of these models.
Since nobody in the game seems to have heard of jackknifing and other sorts of data processing that might eliminate or at least honestly assess errors associated with bias from the starting year, 1980 is at least as good as 1990, and given that the rate of CO_2 increase across this entire interval has been steady to (if anything) increasing, one has to assume that the obvious extrapolation of these models, had they been started back in 1980, is a lower bound of what is expected from them.
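A start-year sensitivity check of the sort alluded to here is easy to sketch: refit the linear trend once per candidate start year and see how far the slope moves. The series below is a synthetic stand-in, not the UAH data.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2013)
# fake anomaly series: weak trend plus year-to-year noise, for illustration only
anom = 0.01 * (years - 1979) + rng.normal(0.0, 0.12, size=years.size)

for start in (1979, 1980, 1990, 1997):
    mask = years >= start
    slope, intercept = np.polyfit(years[mask], anom[mask], 1)   # deg C per year
    print(f"start {start}: trend = {slope * 10:+.3f} deg C/decade")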
And then there is the grey shading. What the f*ck is up with that? It is unlabelled. It dips with Mt. Pinatubo. It grows so that it is around twice the spread between all of the models combined. It is so broad that it would take at least another decade of flat temperatures to falsify it. Is it something that is supposed to have some meaning? Is it a pretty color somebody added to the figure to get better contrast for the shaded colors? All in all, this is a nearly completely meaningless figure. Perhaps there is sufficient explanation in a caption somewhere, but it looks to me like an open affront to the discipline of statistical analysis.
Nevertheless, if we are as generous as possible and interpret the boundaries as two-sigma confidence intervals and the temperature error estimates as honest one sigma estimates, the data suffices to reject FAR at 95% confidence, reject SAR at 95% confidence, reject AR4 at 95% confidence (filling in the expected 0.075 C in error bar at the end), and leave TAR barely alive on the lower edge of the 95% range. Only the undefined vast grey area survives.
Mind you, this too is a bullshit result, because the climate does not vary like these curves, ever, over any sort of respectable range. But they do not bode well for the CAGW hypothesis even in the AR5 report, even choosing 1990 instead of 1980 as the start year! And I’m not even thinking of addressing the curves if one uses (say) 1997 as the start year, because then the temperature curve is basically flat and moving the vertex of the colored wedges to that year instantly falsifies all of the models, even more badly than starting in 1980.
Their problem is that none of the model wedges allow for zero growth in temperature as a statistically permissible outcome. The lower edges of the colored regions are strictly ascending for all AR’s, with TAR being the only one with a sufficiently small slope on the lower bound that it CAN embrace any reasonable segment of the data when the data turns flat. It is this slope that sets the fundamental confidence level in the climate sensitivity contributing to the models since the temperature has turned remarkably flat post 1980, except for changes associated with strong, discrete modulation produced by specific drivers that have nothing to do with CO_2 per se — Pinatubo, the 1997-1998 ENSO, the 2010 correlated ENSO/NAO phase change. One has to argue that CO_2 is somehow shifting the centroid of the range of end stage outcomes from events of this sort to the systematically warmer side, but that is not evident in the 33+ year data.
Bottom line is that UAH suggests a warming rate of ~0.1 C/decade, across the entire dataset, no special choice of start or end points. This linear fit has little confidence, given the noise in the data (natural variability) and the fact that most of the warming is associated with a single discrete event — the 1997-1998 ENSO max. UAH LTT was flat as a pancake from 1979-1997, jumped by 0.3C from 1995 to 1999 and has been flat as a pancake from 1997 to the present (slight overlap in ranges because it is pointless to talk about annual temperature in this context as if it is a sharp value as far as “climate” is concerned). That’s two intervals of over fifteen years each with no warming, and one single sharp event that produced all of the observed warming.
That all by itself falsifies all of the climate models put together, unless they exhibit exactly this sort of behavior.
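The flat-step-flat decomposition described above can be illustrated with a toy series (again synthetic, not the UAH LTT record): one overall least-squares trend, the two sub-interval trends, and the mean shift across the 1997-98 step.

import numpy as np

years = np.arange(1979, 2013)
anom = np.where(years <= 1997, 0.0, 0.3)         # idealized flat-step-flat series

def decadal_trend(y, t):
    return np.polyfit(t, y, 1)[0] * 10           # deg C per decade

print("full record :", round(decadal_trend(anom, years), 3), "deg C/decade")
print("1979-1997   :", round(decadal_trend(anom[years <= 1997], years[years <= 1997]), 3), "deg C/decade")
print("1998-2012   :", round(decadal_trend(anom[years >= 1998], years[years >= 1998]), 3), "deg C/decade")
print("step across 1997-98:", round(anom[years >= 1998].mean() - anom[years <= 1997].mean(), 2), "deg C")

A single 0.3 deg C step between two flat segments yields a full-record trend of roughly 0.13 deg C/decade even though both sub-intervals have zero trend, which is the point being made about the ~0.1 C/decade figure.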
rgb
NASA is already tentatively testing other hypotheses of the global temperature variability.
Jean Dickey of NASA’s Jet Propulsion Laboratory, Pasadena:
“One possibility is the movements of Earth’s core (where Earth’s magnetic field originates) might disturb Earth’s magnetic shielding of charged-particle (i.e., cosmic ray) fluxes that have been hypothesized to affect the formation of clouds. This could affect how much of the sun’s energy is reflected back to space and how much is absorbed by our planet. Other possibilities are that some other core process could be having a more indirect effect on climate, or that an external (e.g. solar) process affects the core and climate simultaneously.”
http://www.nasa.gov/topics/earth/features/earth20110309.html
If my calculations are correct then there may be more to JPL’s hypothesis than what Dr. Dickey proposes. My finding is condensed here:
http://www.vukcevic.talktalk.net/EarthNV.htm with the strong correlations: Arctic-Temperature and Sun-Antarctica
D.J. Hawkins has it right, but not all of it. The real horns of the dilemma the IPCC faces are the certainties it has claimed in previous reports; things like “very likely” meaning 90% probability. Now the recent data shows that these certainties have little basis in science. So the IPCC has to somehow walk a fine line by still claiming that these certainties exist, while at the same time trying to make them agree with the current data. It was this difficulty that caused Alec Rawls to leak the report in the first place. The statements which he succeeded in getting into Chapter 7 made the conclusions in the SPM wrong.
Incidentally, I wonder what logic the IPCC uses for claiming that they can summarize the report, and come to firm conclusions, while the science in the report is not yet complete. It seems to me that this is putting the cart before the horse.
This graph has come in for considerable discussion in the last few days, but as Steve McIntyre is fond of saying, one should keep one’s eye on the pea. Or, alternatively, don’t let the IPCC choose the ground rules.
The graph purports to show upper and lower limits of the IPCC estimates of future temperatures. But remember, as the IPCC constantly admonishes us, these are not predictions, they are projections. What does that mean? It means that the IPCC makes certain assumptions (scenarios) about the CO2 rate of increase, population gain, economic growth, etc. Each different assumption leads to a different estimate of future temperatures. But some years later we can see which assumption was closest to the truth. That means that the IPCC estimate based on that assumption is the one we should take as their best estimate.
How does this change the graph? Well, looking at the first assessment report, one assumption was the “Business-as-usual” scenario. This gave the highest estimate, the top of the gold area in the graph above. Considering the nearly-perfect exponential growth of CO2 since then, we have probably been following the business-as-usual scenario pretty closely. So the entire rest of the gold area should be wiped out, and only the estimate based on the actual winning scenario should be shown.
This approach could be extended to the next three assessments. In most cases, I suspect, the actual scenario will be found to produce one of the higher estimates. The resultant graph would show four lines, rather than areas, probably all close to the top of their respective areas. Or if one wanted to allow for uncertainty, perhaps the lines would be bounded by uncertainty estimates, but they would be much narrower than the present lower limits, which, remember, are based on scenarios that we now know to be untrue.
This more correct analysis of the IPCC estimates would remove much wiggle room; as it is, they can say that their estimates may be high but at least the lower bound of the estimates is in the ballpark. The proper analysis would show they not only are far out in left field, they aren’t even in the same park!
“There is only one group dumber than the climate change scientist modelers and forecasters and public policy leaders that take these pathetic failures and look the other way to do the wrong thing – the public who pays for it in the first place and who continues to do so.”
there, fixed that for ya.
Courtland:
Quote: “the rate of change global temperature in the most recent 50 years is double that of the past 100 years. This is not true and is an example of blatant cherry-picking.”
It seems that the numbers about the rate of change are approximately correct. 0.5C in the first 100 years and 0.4 in the last 50 means roughly the same change in the last 50 years as the 100 years before.
Aside: Among other interpretations, the data suggests a rapidly decreasing sensitivity to CO2 forcing.
________________
I think the quotation is correct. The past 100-year period (0.8-0.9 degrees of warming) also encompasses the most recent 50 years (0.4 degrees). Thus, the rates in each 50-year period are approximately the same.
On the other hand, the 100 years beginning 1880 are roughly 1/2 of the overall rise; but then the 1979-1998 rise is a rate of close to 2 degrees per century.
End points are critical and lead to cherry-picking, which is one of the points of the discussion, along with the flatness of the recent 15-16 years.
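The arithmetic behind that reading, using the figures quoted above (taken as given, not re-derived from any dataset), is simple:

warming_100yr = 0.8       # deg C over the past 100 years (quoted range 0.8-0.9)
warming_last50 = 0.4      # deg C over the most recent 50 years (quoted)
warming_first50 = warming_100yr - warming_last50

rate_century = warming_100yr / 10.0    # deg C per decade over the full century
rate_first50 = warming_first50 / 5.0   # deg C per decade, first half
rate_last50 = warming_last50 / 5.0     # deg C per decade, second half

print(f"century     : {rate_century:.3f} deg C/decade")
print(f"first 50 yr : {rate_first50:.3f} deg C/decade")
print(f"last 50 yr  : {rate_last50:.3f} deg C/decade")
# With the lower quoted figure (0.8) the two halves come out equal at 0.08 deg C/decade;
# with 0.9 they are 0.10 vs 0.08 deg C/decade, still nothing like a doubling.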
rgbatduke
Yes, the error bars on the data plots are conspicuously mysterious. I agree that they could be meant as a “cosmetic” to fuzz over the discrepancy between model forecasts and observations. I deem the IPCC authors fully capable of such subterfuge. They have done much worse than that.
The whole IPCC ‘reporting’ has become a mishmash of facts, half-facts, truths and untruths, half explained, unexplained, real and ‘virtual’ information! Even a single lead author (or editor) cannot present a single written explanation of the ‘state of the science’, because to do so would require that he/she make the honest statement that little of what they have done/achieved has been proven even half right! They obviously cannot and would not write this, as it would destroy the need for their very existence!
I don’t believe a truthful scientist/reviewer could write anything other than ‘Listen, folks, we got a lot of it wrong, we don’t have the data or facts to substantiate our earlier claims, and all previous bets are off until such times as the data/facts are fully available !’
The above analysis is based on collected temperature data which has been “adjusted”, almost always upward, to the extent that ~50% of the reported temperature anomaly is in the adjustments, rather than in the data.
Not only do we not know what the global average surface temperature is, we also don’t know what the anomaly is to any degree of certainty. In light of the expenditure of more than $100 billion, that is unconscionable.
rgbatduke says: December 18, 2012 at 10:40 am
……………
grey shading. What the f*ck is up with that?
In their previous report (2007), the grey colour was something called the ‘post-SRES’ range: http://www.ipcc.ch/graphics/syr/fig3-1.jpg
rgbatduke says:
December 18, 2012 at 10:40 am
The grey shading is described in the figure caption. As I read it, it is basically an uncertainty/error-bar ‘extension’ of the model projection boundaries, based on the Hadcrut4 dataset(?). Though it does seem a bit weird…
Look carefully at Box 2.2, Table 1, on page 2-21. This states that the least-squares fit of the mean change per decade of Hadcrut4 from 1901 to 1950 IS EQUAL (at 0.107/decade) to the rate of change from 1951 to 2011, i.e. the rate of change pre-1950 equals the rate of change post-1950. The graphic is on page 2-153.
That is not consistent with, and I could not find, a reference to the rate of change doubling in the 2007 report as noted above (I may have missed it).
However, simply demonstrating that the rate of change is identical before and after 1950 means this table and the graphic do not match the written conclusions in the report text.
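That comparison is straightforward to reproduce once one has the annual Hadcrut4 series in hand; a minimal sketch (with a placeholder series, not the real data) looks like this:

import numpy as np

years = np.arange(1901, 2012)
anom = np.zeros_like(years, dtype=float)   # placeholder: substitute the real annual anomalies

def trend_per_decade(start, end):
    mask = (years >= start) & (years <= end)
    return np.polyfit(years[mask], anom[mask], 1)[0] * 10   # least-squares slope, per decade

print("1901-1950:", round(trend_per_decade(1901, 1950), 3), "per decade")
print("1951-2011:", round(trend_per_decade(1951, 2011), 3), "per decade")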
vukcevic says:
December 18, 2012 at 10:57 am
I am wrong; SRES is related to CO2.
rgb–
Your post was not up when I began writing mine, but I think you are probably right in suspecting that the grey area is added to give the IPCC a chance to say the actual temperature change is not outside their uncertainty intervals. However, if the graph were properly put together, the lower colored areas corresponding to failed or untrue scenarios would not appear, and any grey areas of assumed uncertainty would be symmetrical around the single lines corresponding to the one scenario that reality picked as the right one. These lines would probably cluster near the upper areas and one could see clearly the departure from reality.
I thought about doing the actual work and finding which scenario in the SAR, TAR, AR4 etc was the one we should use, but it got to be somewhat complicated (i.e., AR4 made some estimates about world population being either 6 billion or 12 billion in 2100–which one is right?) so I gave up. But the important point is not to accept the IPCC version of this graph.