Is there any statistical evidence that global temperatures have changed since 1997?
Guest post by Clive Best
The UK Met Office seem determined to stand by their claim made in response to the David Rose article in the Mail on Sunday:
‘The linear trend from August 1997 (in the middle of an exceptionally strong El Nino) to August 2012 (coming at the tail end of a double-dip La Nina) is about 0.03°C/decade, amounting to a temperature increase of 0.05°C over that period.’
Several of us have been asking, via their blog, for statistical evidence that this trend is actually distinguishable from flat.
Dave Brittan has done a sterling job replying on behalf of the Met Office, and he eventually crafted a complex answer to the question of whether the above statement makes statistical sense.
“The first is measurement uncertainty associated with basic measurement error and uncertain biases in the observations. These are included in the HadCRUT4 ensemble, and when computing linear trends in global temperatures from August 1997 to August 2012 these give a trend of 0.034 ± 0.011 °C per decade (95% confidence interval) for the observed portion of the earth.”
I questioned this statement because I think the quoted error is about a factor of 10 smaller than it should be. After waiting 36 hours with my post still in moderation, and with no other posts being accepted, I now presume that this is their last word on the matter.
Frustrated by the lack of response, I decided to do the analysis myself – see the post here:
http://clivebest.com/blog/?p=4237

@JJ “Your analysis does not correct for autocorrelation, and one standard deviation only points at a 68% CI. Go two wide for ~95%. Correct for both (or perhaps just either), and the error bands will include zero and a swath of cooling territory.”
I fully agree. The HadCRUT3 data are actually in negative territory. Yes, it really should be 1 sigma = 68% CL.
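For anyone who wants to check this for themselves, here is a minimal R sketch of the standard lag-1 autocorrelation adjustment to a trend's confidence interval (in the style of Santer et al.). The series below is synthetic red noise standing in for the 181 monthly anomalies from August 1997 to August 2012; the AR coefficient and noise scale are illustrative guesses, not values fitted to HadCRUT4.

```r
# Sketch only: inflate the naive OLS trend error for lag-1 autocorrelation.
set.seed(1)
n    <- 181                                    # months, Aug 1997 - Aug 2012
t    <- (seq_len(n) - 1) / 120                 # time in decades
anom <- 0.003 * t + 0.1 * as.numeric(arima.sim(list(ar = 0.6), n))  # stand-in data

fit  <- lm(anom ~ t)                           # ordinary least-squares trend
se   <- summary(fit)$coefficients["t", "Std. Error"]

r1     <- acf(resid(fit), lag.max = 1, plot = FALSE)$acf[2]  # lag-1 autocorrelation
n.eff  <- n * (1 - r1) / (1 + r1)              # effective sample size
se.adj <- se * sqrt(n / n.eff)                 # widened standard error

cat(sprintf("trend = %.3f +/- %.3f degC/decade (95%%, AR(1)-adjusted)\n",
            coef(fit)[["t"]], 1.96 * se.adj))
```

With the strong month-to-month correlation typical of temperature anomalies, this adjustment widens the naive interval several-fold, which is one reason the Met Office's ±0.011 looks far too tight.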
Time to see what I have learned. I am currently in the second week of my first stats course, so excuse my ignorance. Correlation coefficients run from -1 to +1: weaker correlations are closer to 0, stronger correlations closer to -1 or +1. The correlation coefficient for HadCRUT4 from August 1997 to August 2012 is 0.12. Looking at the scatter plot of temperature anomalies, they are all over the place with many outliers. How can this be deemed reliable with such a weak correlation?
What’s that quote from Jaws? “We’re gonna need a bigger boat.”
C’mon Met Office! Place an order for a new supercomputer and you may get the results you crave.
@Eric You are quite correct! The correlation with a straight line is very poor and the measurements fluctuate from month to month. You might get a better fit with a high-order polynomial, but at the end of the day all you can say from these data is that there has been no net temperature change over the last 16 years.
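To illustrate the point with made-up numbers rather than the real series: when a series is essentially flat scatter, r comes out near zero and r², the fraction of variance the straight line explains, is negligible.

```r
# Toy example: near-flat "anomalies" plus monthly scatter give a tiny correlation.
set.seed(7)
t <- 1:181                                 # months
y <- 0.0002 * t + rnorm(181, sd = 0.15)    # almost no trend, lots of noise
r <- cor(t, y)                             # Pearson correlation coefficient
c(r = r, r.squared = r^2)                  # r^2 = fraction of variance explained
```

An r of 0.12, as Eric quotes, means the fit accounts for barely 1.4% of the monthly variance.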
Great! I was reading that and wondering what approach you would use. I was thinking you were leading toward an “it’s well within the spread” … and then, like a crime-scene drama, it wasn’t the butler as we had all expected; out of a cupboard, apparently long dead and buried, pops (you need to use a Belgian accent): “To emphasize this point, I now show exactly the same analysis done for HADCRUT3 which up to a few months ago was the flagship IPCC data as used in AR4.”
Having met some of the Met Office researchers, I have to say that they are quite reasonable people. Indeed, I sat next to someone who turned out to be a big-wig reading all kinds of sceptic blogs.
However, there is a gulf in understanding and common sense between those at the “grindstone” of the models and those like John Hirst, who, if reports of his presentations are accurate, seems not to have a clue and is making it all up as he goes along.
So I do not believe this recent “It’s warming” nonsense is endorsed by the run-of-the-mill Met Office staff. Instead it appears to be PR spin from senior staff, who are taking the good work of their underlings and discrediting the whole organisation by their shamefully partisan approach.
In an area like this, claims of accuracy to two decimal places should really raise a flag. The idea that ‘statistics’ can give you better accuracy than the instruments you are using to measure the effect in the first place is ‘interesting’.
A trend of 0.034 ± 0.011 °C per decade could, in other words, be a third out either way. That is rather a large error factor, especially when great and bold claims are made on the back of its ‘value’.
I think this has something to do with science, even though it is OT, so I decided to post it.
I’m not so sure it is off topic. People who like numerology will note that the warming trend of the latter half of the 20th century fairly accurately matched not only a burst of high solar activity but the cumulative numbers from nuclear testing, with a 5-10 year lag. Nuclear testing more or less stopped in 1998, and gee, since then the temperatures have been essentially flat. Just one more confounding correlation in a sea of correlations that may or may not have the slightest causal association.
I can think of several ways, though, for nuclear testing to affect climate — the production of dust, aerosols, and ozone (especially the latter), the transport of water vapor into the stratosphere, the distribution of radioactive particulates that can affect cloud nucleation rates. Some of these one would guess to be cooling, some warming, but sadly, when one considers the energy and detailed nature of the tests it is difficult to reconcile numbers in a way that makes sense (I’ve tried a couple of times) largely because the vast bulk of the later tests were underground and it is difficult to see how the underground tests would affect the atmosphere and climate so much — unless the climate effects were as noted lagged by a decade or more, or were heterodyning with other e.g. solar or GHG influences.
But a pretty, interesting display either way. I lived through most of this — I was born as the Apple II explosion occurred on March 29, 1955, in Operation Teapot. Of course it isn’t THAT unlikely to be born on a day with a bomb, given around 2000 bomb tests over fifty years, but still…
rgb
Oh, and as always, a nice job Clive although as you note, anybody who actually uses stats in some constructive way can glance at the data and reject the notion that there is any meaningful linear trend. Doing a fit is really quite unnecessary.
Kudos also to the people who noted that fitting a linear trend in the first place is sheer numerology, given that the climate is hardly linear on any timescale (yes, that even includes you, Olson). Another way of assessing the “meaning” in the data would be to determine the parameters of its noise and write a short program to simulate the random generation of “simulated temperature” series with zero trend but the same general autocorrelation function, a weighted Markov process as it were. Generate a few thousand of these, fit linear trends through them, and compute the probability of getting a warming or cooling stretch out of pure noise that has the same empirical autocorrelation as the general climate signal portrayed, and you’ll have a good idea of how much “meaning” there is in any proposed linear trend. Which is absolutely none, as I said, from a mere eyeballing. It is certain that one would get linear trends that are not zero from every simulation run, and the distribution of slopes would be (I’m guessing) nearly Gaussian and quite wide. One cannot possibly reject the null hypothesis of no warming from this data, in other words — not at the 95% confidence level, not at the 5% confidence level (both of which terms are meaningless and not really what p-values mean, but that’s the way it goes).
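For the curious, here is a rough R sketch of that exercise. The AR coefficient and noise scale are illustrative guesses rather than values actually fitted to HadCRUT4, so treat the output as a demonstration of the method, not a result.

```r
# Generate zero-trend red-noise series, fit a line to each, and see what
# "trends" pure noise produces over a 15-year window.
set.seed(42)
n.months <- 181                              # Aug 1997 - Aug 2012
t.dec    <- (seq_len(n.months) - 1) / 120    # time in decades

slopes <- replicate(5000, {
  y <- 0.05 * as.numeric(arima.sim(list(ar = 0.9), n.months))  # zero-trend noise
  unname(coef(lm(y ~ t.dec))[2])             # fitted slope, degC/decade
})

quantile(slopes, c(0.025, 0.975))   # spread of slopes that noise alone generates
mean(abs(slopes) >= 0.034)          # fraction at least as large as the quoted trend
```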
I do wish that folks would contemplate using R instead of Excel for doing stats, though. Excel is fine for doing dumb/simple linear regression — maybe — but not so good for actually playing with the data and learning something a bit deeper from it, and not so good at all for exploring the world of nonlinear fits or predictive models. What is the autocorrelation function for the temperature, for example? Sadly, I have no time to mess with it myself (yet) but perhaps one day.
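In that spirit, the sample ACF itself is a one-liner in R; `anom` below is a synthetic placeholder series, so just read the real monthly anomalies in its place.

```r
# Sample autocorrelation function of a monthly series, out to four years.
anom <- as.numeric(arima.sim(list(ar = 0.8), 181))  # placeholder series
acf(anom, lag.max = 48)
```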
rgb
My understanding is that over 97.5% of the climate system’s heat content resides in the oceans, and that the oceans have been warming faster in the last decade than in the previous two decades – that’s according to Levitus 2012, and is consistent with rising global sea level and the accelerating loss of Arctic sea ice, Greenland ice mass and Antarctic ice mass. So, it seems that global warming is accelerating rather than slowing down, which is probably what we would expect as our greenhouse gas emissions are rising faster than ever.
Icarus62 – You must have found Kevin T.’s missing heat, something he can’t even do. The ocean buoys can’t find any major increase in heat; the ice loss in the Arctic is driven more by wind and storms than by temperature. Whatever Greenland has lost, it isn’t much: at the current pace, it will take over 5,000 years for it to lose half its ice. And the Antarctic just set a record for area covered. Yes, I know you are talking volume, but volume hasn’t been measured for very long; the GRACE satellites don’t have many years of data to go through.
As for the rising ocean, it isn’t rising very fast: roughly a foot a century.
And to top it off, the Met Office just decided that we haven’t had any “warming” in over fifteen years. This from the organisation which once said British children won’t know what snow is.
Nice cherry-picking; go sell cherries elsewhere. We like the whole kit and caboodle.
Icarus62:
Please clarify a rumour.
On another thread HenryP said Stephanthedenier is also Gary Lance. Gary Lance did not deny this.
Is he also you, please?
Richard
REPLY: False, by their IP address, they are on different continents – Anthony
@rgbatduke
This is an original idea I have never heard before! The late 20th-century warming ran from around 1960 until around 1998, so it does more or less coincide with nuclear testing. However, if you believe the data, there was a previous warming spell lasting from 1910 till about 1940, before nuclear weapons. So I think there is likely another underlying cause.
If we assume there is an oscillation superimposed on an underlying trend entirely due to CO2, then we get a logarithmic dependency of roughly 2.5 ln(CO2(year)/CO2(1750)) °C. This gives a total warming at a doubling of CO2 of 1.7 °C – no big deal. Of course, natural variations may dominate anyway.
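To make the arithmetic explicit (the 395 ppm below is just an illustrative present-day concentration):

```r
# The logarithmic fit above, evaluated at a doubling and at roughly today's CO2.
2.5 * log(2)          # = 1.73 degC per doubling, the ~1.7 degC quoted
2.5 * log(395 / 280)  # ~ 0.86 degC attributable to CO2 so far, on this fit
```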
The margins of error quoted by the Met Office are fictitious. The temperature record is derived from a survey, although many might think it better described as a dog’s breakfast. The error limits calculated by the Met Office assume a perfect sample. This is akin to those polling companies who survey a thousand respondents and then claim that the margin of error for their survey is ±3% in the worst case. Those claims are likewise crap. Try ±5% or 6% … they conveniently forget about SURVEY ERROR, one of the first things I learnt about when I got into survey analysis over 40 years ago.
I’ve said something like this before, but I think it bears repeating. If you have to coax, massage and enhance the temperature figures over decades looking for a signal of catastrophic warming, then you can essentially drop the C, A, G and possibly even the W off CAGW. If a tsunami is coming, there is no need to wade out into the water a few hours beforehand with a yardstick to plot millimetres of sea rise to detect it early (I suppose sea level goes down first). Just wait and you will be washed away. Possibly you could make a note or two in a waterproof paper notebook before it’s too late, if the science is so important. What we have is a 250-year recovery from the LIA, and it seems to be a bit more than 1C since 1750. If we have another 0.5C by 2100, I could move across the river in Ottawa, Ontario to the Gatineau Hills (I would only have to move upwards about 110 metres) or 45 miles north with no change in elevation where the fishing is good. This adaptation is a bitch.
In poking around in the literature of climatology, I’ve found that eccentricities in the language of climatology sometime obscure methodological errors in the research of climatologists. Translation of terms and phrases in the literature of climatology into technical English sometimes serves to reveal these errors.
At http://www.aos.wisc.edu/~sco/normals.html, the University of Wisconsin asserts that a “climatic normal” is “the arithmetic average of a climate event such as temperature over a prescribed 30-year interval”; translated into technical English, this phrase states that the period of a climatological event ends 30 years after it begins, while the outcome of this event is the arithmetic average, over the period of the event, of an instantaneous meteorological observable. The climate unfolds, then, as a sequence of statistically independent events, each lasting 30 years. The climate “changes” if and only if the outcome differs between adjacent events. For global warming climatology, the outcome of each event is the arithmetic average, over the period of this event, of the numerical values of a global surface temperature time series such as the HadCRUT4.
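Read literally, this definition reduces a 130-year record to a handful of numbers. A small R sketch with placeholder annual anomalies makes the point:

```r
# Carve an annual series into consecutive 30-year blocks and average each:
# one "outcome" per climatological event, on the U of W definition.
set.seed(3)
years <- 1880:2009
temp  <- rnorm(length(years), sd = 0.2)   # placeholder annual anomalies
block <- (years - min(years)) %/% 30      # 30-year event index
tapply(temp, block, mean)                 # 130 years yield only 4-5 "events"
```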
In reviewing Anthony’s paper, I note that (a) the paper does not present the arithmetic average of the values of the HadCRUT4 over the period of an event but rather presents unaveraged values, and (b) the values that are presented extend over only half of the period of one event. As there are no events, no evidence is presented that is pertinent to climatology, and one cannot reach statistically significant conclusions from this non-existent evidence.
Terry,
Anthony is innocent here – it is my post, and it is intended simply to refute the UK Met Office claim that there has been “global warming” of 0.03 °C/decade from 1997 to today. Their model actually predicted 0.2 °C/decade. The clear evidence, based on a 10-year average, is that nothing has changed.
The best prediction is that the climate is changing by 0.00 °C/decade:
“we have been unable to find a scientific forecast to support the currently widespread belief in “global warming.” Climate is complex and there is much uncertainty about causal relationships and data. Prior research on forecasting suggests that in such situations a naïve (no change) forecast would be superior to current predictions” – GLOBAL WARMING: FORECASTS BY SCIENTISTS VERSUS SCIENTIFIC FORECASTS by Kesten C. Green and J. Scott Armstrong
“Nuclear testing more or less stopped in 1998, and gee, since then the temperatures have been essentially flat.”
Interesting. The most powerful nuclear device, the Tsar Bomba, released 420 PJ when it exploded. If all of this energy had gone into heating the atmosphere, it would have increased its average temperature by 4.2×10^17 J / (5×10^21 J/K) = 8.4×10^-5 K. About 2000 nuclear bombs have been exploded so far; assuming all of them were as powerful as the Tsar Bomba gives roughly 0.168 K.
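The same arithmetic in script form, using the 420 PJ figure above (which corresponds to the device’s 100 Mt design yield; as actually tested it released roughly half that):

```r
# Back-of-envelope: total bomb-test energy versus atmospheric heat capacity.
E.bomb  <- 4.2e17           # J, the 420 PJ figure for Tsar Bomba
C.atmos <- 5e21             # J/K, rough heat capacity of the whole atmosphere
E.bomb / C.atmos            # 8.4e-5 K per blast of that size
2000 * E.bomb / C.atmos     # 0.168 K if all ~2000 tests were that large
```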
Terry Oldberg:
I write to ask two questions concerning your post of October 26, 2012 at 10:39 am.
Before I ask I think I need to state that my questions are genuine and have no meaning other than their overt meaning.
My questions are:
1.
This thread concerns the Met Office response to the empirical fact that there has been no discernible global temperature change for at least 15 years. Are you suggesting that there has to be no rise in global temperature for at least 30 years before supporters of the AGW hypothesis will agree that global warming has stopped?
2.
If the answer to my Question 1 is affirmative, then what does that indicate about Hansen’s testimony to the US Senate in 1987, when there were at most 15 years of global warming prior to it?
Richard
Gary Pearse said,
“If we have another 0.5C by 2100, I could move across the river in Ottawa, Ontario to the Gatineau Hills (I would only have to move upwards about 110 metres) or 45 miles north with no change in elevation where the fishing is good. This adaptation is a bitch.”
That’s all very well for you but here am I in a very flat/cold part of the UK and I’ll have to move up to somewhere in Yorkshire – very pretty, but there are Yorkshire people there and they don’t like strangers!!
I believe the time interval involved in this article is too short to establish any sort of trend, except to say temperatures were relatively constant over the interval in question. I believe a span of years going back to at least 1880 would be required to see any real trend. One site, the Climatic Research Unit at the University of East Anglia, UK, shows a mere 0.8 °C rise since 1880, and that data seems to hint there may be a cyclical process involved. As this is the site involved in the Climategate scandal, one might assume this to be a worst-case estimate of climate change.
I personally believe that an increase on the order of 3 to 5 degrees C since 1880 would be required to justify the current state of climate change alarm.
http://www.cru.uea.ac.uk/cru/info/warming/
Spector:
The period since 1880 contains only 3 to 4 independent climatological events of 30 years each. A sample of size 3 to 4 is apt to be highly unrepresentative of the underlying population. Thus, it would be a mistake to generalize from the record of this period.
Terry Oldberg says, October 26, 2012 at 10:39 am
If we interpret this definition literally, we get a plot like this …
http://postimage.org/image/4puutknlj/full
… which just seems to be displaced 15 years to the right of numerological reality.
It doesn’t really matter what you plot, 30 years is just as meaningless as 30 months. Given oceanic turnover time I doubt you’d find any convincing evidence of an AGW signal in the temperature record if we had 3 centuries of reliable data. But even on this arbitrary 30 year basis, what is there to be alarmed about? The rate of change of ‘climate normal’ is clearly decelerating at present – just another wiggle that could go anywhere.
clivebest:
Thank you for correcting my mistake in confusing you with Anthony!
My critique of your paper’s conclusions is unaffected by my mistake in identifying the paper’s author. In brief, the 15-year period since 1997 can contain no climatological event, for, according to the University of Wisconsin, the period of such an event is 30 years. Thus, no climatological conclusions may be based on the HadCRUT4 global temperature time series in this period. The UK Met Office’s claim of 0.03 °C/decade of global warming is falsified by the same consideration.
So we agree then. It makes no sense for the Met Office to state that over the last 16 years the world has warmed at 0.03 °C/decade. Since their GCM actually predicted 0.2 °C/decade, presumably they need 30 years to validate it. In that case, can we please wait another 30 years before closing down ALL our coal-fired power stations in the UK? It would be a shame to commit economic suicide based on a computer simulation.
clivebest:
One alternative to economic suicide would be litigation.
RE: Terry Oldberg (October 26, 2012 at 6:08 pm)
“. . . it would be a mistake to generalize from the record of this period.”
Given what I believe to be our current understanding of climatology, it appears to me that these observations over the last 150 years of modern instrumentation are largely at the noise level. As we have only increased the amount of CO2 in the atmosphere by a half-doubling (from 280 ppm), the net effect of this is most likely about 0.5 °C (based on MODTRAN calculations). I think the higher sensitivities for CO2 (degrees per doubling) indicated by the IPCC may be the result of taking the steepest slope of the observed data and attributing it all to CO2 plus an assumed, dangerously high, positive climatic feedback mechanism. The flat-line data of the last ten years are inexplicable under that assumption.
richardscourtney:
Thank you for giving me the opportunity to clarify. My answers to your questions of Oct. 26, 2012 at 11:25 am follow.
1. The phrasing of your question suggests a lack of grasp of the significance of the idea of an “event” for the scientific method of investigation. A scientific investigation centers on a complete set of statistically independent events, the so-called “statistical population.” A sample drawn from the underlying population provides the sole basis for falsification of the claims that follow from the investigation. In a diligent search, I have been unable to find the events or statistical population that underlie the claims made by the IPCC in AR4. It seems to me that IPCC climate “science” is a pseudo-science, for the claims of a science are falsifiable, but in the absence of a statistical population the claims made by the IPCC are not.
According to the University of Wisconsin (U of W), a climatological event lasts 30 years. The U of W implies that, in the investigation of global warming, the outcome of an event is the arithmetic average of the global temperature over the 30-year period of this event. The numerical values of a global temperature time series such as the HadCRUT4 over half the period of an event (15 years) are irrelevant, for events are discrete and countable.
2. In a Web search, I was unable to find a link to Hansen’s 1987 testimony to the U.S. Senate. If you know of a link, please provide same so I can respond.