Met Office Statistics Questioned

Is there any statistical evidence that global temperatures have changed since 1997 ?

Guest post by Clive Best

The UK Met Office seem determined to stand by their claim made in response to the David Rose article in the Mail on Sunday:

‘The linear trend from August 1997 (in the middle of an exceptionally strong El Nino) to August 2012 (coming at the tail end of a double-dip La Nina) is about 0.03°C/decade, amounting to a temperature increase of 0.05°C over that period.’

Several of us have been requesting, via their blog, statistical evidence that this trend is actually indistinguishable from flat.

Dave Brittan has done a sterling job in replying on behalf of the Met Office, but he eventually crafted a complex answer as to whether the above statement made statistical sense.

“The first is measurement uncertainty associated with basic measurement error and uncertain biases in the observations. These are included in the HadCRUT4 ensemble, and when computing linear trends in global temperatures from August 1997 to August 2012 these give a trend of 0.034 ± 0.011 °C per decade (95% confidence interval) for the observed portion of the earth.”

I questioned this statement because I think their quoted error is about a factor of 10 smaller than it should be. After waiting 36 hours with my post still in moderation, and with no other posts being accepted, I now presume that this is their last word on the matter.

Frustrated by the lack of response, I decided to do the analysis myself; see the post here:

http://clivebest.com/blog/?p=4237
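For readers who want to check the arithmetic, here is a minimal sketch (in Python, on a synthetic monthly series, not HadCRUT4 itself) of an OLS trend with a naive 95% confidence interval, and the same interval widened by the standard AR(1) effective-sample-size correction. The trend, the autocorrelation, and the noise level are all illustrative assumptions, but they show how strong month-to-month autocorrelation can inflate the uncertainty on a 15-year trend by several times.

```python
import numpy as np

def trend_with_ci(y, months_per_decade=120, adjust_ar1=True):
    """OLS trend (per decade) with a 95% CI, optionally widened for
    lag-1 autocorrelation via the effective-sample-size rule."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # Standard error of the OLS slope
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    if adjust_ar1:
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
        r1 = max(r1, 0.0)
        n_eff = n * (1 - r1) / (1 + r1)        # effective sample size
        se *= np.sqrt((n - 2) / max(n_eff - 2, 1.0))
    return slope * months_per_decade, 1.96 * se * months_per_decade

# Synthetic monthly series: Aug 1997 .. Aug 2012 is 181 months.
# Underlying trend 0.03 C/decade plus AR(1) noise (phi = 0.6, assumed).
rng = np.random.default_rng(0)
n = 181
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
y = (0.03 / 120) * np.arange(n) + noise

naive = trend_with_ci(y, adjust_ar1=False)
adjusted = trend_with_ci(y, adjust_ar1=True)
print(f"naive:     {naive[0]:+.3f} ± {naive[1]:.3f} °C/decade")
print(f"AR(1)-adj: {adjusted[0]:+.3f} ± {adjusted[1]:.3f} °C/decade")
```

The slope itself is unchanged by the correction; only the interval widens, which is exactly the point at issue with the Met Office's ±0.011 figure.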

October 27, 2012 1:55 am

Terry Oldberg:
Thank you for your answer to me at October 26, 2012 at 11:00 pm.
If I understand your answer to my first question correctly, you are saying that climate science as promulgated by the IPCC is meaningless junk. Although I would not go so far as to say that, I tend to agree.
In the light of your answer, any specific response from you to my two questions would be irrelevant, so I am content to leave it at that.
Thank you for your reply.
Richard

Reply to  richardscourtney
October 27, 2012 9:21 am

richardscourtney:
A more precise way of stating my conclusion is to say that the methodology of the IPCC’s investigation is not scientific but rather is dogmatic. People who reach the opposite conclusion do so by inserting into their arguments one or another false premise disguised as a truth. Among these people, a favorite stratagem is to argue over the magnitude of the equilibrium climate sensitivity. Another favorite is to conflate the IPCC-style “evaluation” of a model with the statistical validation of that model.

icarus62
October 27, 2012 3:10 am

What UKMO should be saying is that there is no evidence of a decline in the rate of global surface warming, which is currently running at around 0.17°C per decade (30-year trend). This is to be expected as the TOA energy imbalance is still substantial at around 0.6W/m². We would need to reduce atmospheric CO₂ by 50ppm to eliminate this energy imbalance and halt global warming. What are the chances of doing that?
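As a rough check on the numbers in this comment, the simplified CO2 forcing expression ΔF = 5.35 ln(C/C0) from Myhre et al. (1998) can be used to estimate the forcing removed by a 50 ppm cut. The 390 ppm starting concentration below is my assumption for circa 2012; the result comes out close to the stated 0.6 W/m² imbalance.

```python
import math

# Simplified CO2 radiative-forcing expression (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)   [W/m^2]
C_now = 390.0           # approx. atmospheric CO2 in ppm circa 2012 (assumed)
C_target = C_now - 50   # the 50 ppm reduction proposed in the comment

dF = 5.35 * math.log(C_now / C_target)
print(f"Forcing removed by a 50 ppm cut: {dF:.2f} W/m^2")  # about 0.73 W/m^2
```

So the 50 ppm figure is at least self-consistent with an imbalance of roughly 0.6–0.7 W/m², whatever one makes of the underlying claim.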

Reply to  icarus62
October 27, 2012 10:01 am

icarus62
Your argument seems to rest on the assumption that the equilibrium climate sensitivity (TECS) is of a positive magnitude. It would follow that any increase in the CO2 concentration warmed our planet. However, as I’ve pointed out on many occasions, the equilibrium temperature is not an observable, with the consequence that when a claim is made about the magnitude of TECS this claim is insusceptible to being tested. As it is insusceptible to being tested, this claim is unscientific by the definition of “scientific.” In order for claims about the global surface temperature to be made susceptible to testing and thus scientific, some institution has to describe the underlying statistical population, but this has not yet happened. Institutions seem to fancy their current ability to con us into thinking that when they pontificate about the climate they are speaking as scientists.

AJB
October 27, 2012 10:12 am

icarus62 says, October 27, 2012 at 3:10 am

What UKMO should be saying is that there is no evidence of a decline in the rate of global surface warming

That is simply not true. See: http://postimage.org/image/4puutknlj/full. The rate of warming on a 30-year trend basis has been declining rapidly since 2007 or so. If you wish to examine the rate of warming, then plot the rate of warming, not the temperature outcome.
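AJB's suggestion to plot the rate of warming itself can be sketched as a trailing 30-year OLS slope recomputed for each end year. The annual series below is invented purely to illustrate the method (steady warming that levels off after 2000); it is not HadCRUT data.

```python
import numpy as np

def rolling_trend(years, temps, window=30):
    """Slope (°C/decade) of an OLS fit over each trailing
    `window`-year span: the 'rate of warming' AJB suggests plotting."""
    out_years, out_rates = [], []
    for end in range(window, len(years) + 1):
        t = years[end - window:end]
        y = temps[end - window:end]
        slope = np.polyfit(t, y, 1)[0]
        out_years.append(years[end - 1])
        out_rates.append(slope * 10)   # per year -> per decade
    return np.array(out_years), np.array(out_rates)

# Illustrative annual anomalies: 0.15 °C/decade before 2000,
# then nearly flat (0.02 °C/decade) afterwards.
years = np.arange(1950, 2013)
temps = np.where(years < 2000,
                 0.015 * (years - 1950),
                 0.015 * 50 + 0.002 * (years - 2000))

yr, rate = rolling_trend(years, temps)
print(f"30-yr trend ending 2000: {rate[yr == 2000][0]:.3f} °C/decade")
print(f"30-yr trend ending 2012: {rate[yr == 2012][0]:.3f} °C/decade")
```

On such a series the trailing 30-year trend visibly declines after the break, even though the temperature itself never falls, which is the distinction being argued here.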

icarus62
October 27, 2012 1:08 pm

AJB: I disagree.

icarus62
October 27, 2012 1:13 pm

AJB: I suppose you could squint and say that the warming rate in HADCRUT4 has been declining slightly, but I doubt if it’s statistically significant.

icarus62
October 27, 2012 1:19 pm

Terry Oldberg: I don’t see how your comment follows from what I said above. If the planet is out of energy balance (more energy being received than radiated away) then it will inevitably warm up – that’s just fundamental physics. Climate sensitivity doesn’t come into it. Agreed?

Reply to  icarus62
October 27, 2012 9:51 pm

icarus62:
I know of no principle of physics which states that if Earth is currently receiving more energy than is being radiated away then Earth will inevitably warm up. If you know of one, please inform me of same.

Tad
October 27, 2012 4:32 pm

I’m not even convinced that least squares is the be-all and end-all of fitting methods. It’s fine under most circumstances, but maybe someone more knowledgeable than I am can comment on its applicability here. Isn’t there something about autocorrelation in time series that calls for other fitting methods? I’m not saying LS is wrong; I’m just wondering whether the error bounds are actually even wider due to issues like autocorrelation.
Also, a more robust fitting method is sometimes considered appropriate, usually because of outliers. I don’t see any obvious outliers here, so this is probably not an issue.
I guess I’m just bringing this up to remind folks that LS is not sacred or unique as a fitting method, and to ask whether there are other statistical issues that need to be considered; I really don’t know.
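Both of Tad's concerns are real ones. The robustness point can be illustrated with the Theil-Sen estimator (the median of the slopes over all point pairs), which a single outlier barely moves, whereas it visibly drags the OLS slope. The data below are synthetic, with one large outlier injected deliberately.

```python
import itertools
import numpy as np

def theil_sen(t, y):
    """Theil-Sen estimator: median of slopes over all point pairs.
    Robust to outliers, unlike ordinary least squares."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in itertools.combinations(range(len(t)), 2)]
    return np.median(slopes)

rng = np.random.default_rng(1)
t = np.arange(50, dtype=float)
y = 0.02 * t + rng.normal(0, 0.1, 50)   # true slope 0.02 (assumed)
y_out = y.copy()
y_out[5] += 3.0                         # inject a single large outlier

ols_clean = np.polyfit(t, y, 1)[0]
ols_out = np.polyfit(t, y_out, 1)[0]
ts_out = theil_sen(t, y_out)
print(f"OLS, clean data:      {ols_clean:.4f}")
print(f"OLS, with outlier:    {ols_out:.4f}")
print(f"Theil-Sen w/ outlier: {ts_out:.4f}")
```

As Tad notes, the temperature series at issue shows no obvious outliers, so for that data autocorrelation (which widens the error bounds rather than biasing the slope) is likely the more important of the two issues.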

Reply to  Tad
October 27, 2012 10:44 pm

Tad:
Your skepticism regarding least sum of the squared errors is well founded.
Least sum of the squared errors is an intuitive rule of thumb that is often used in the construction of a model. Viewed from a logical perspective, a model is a procedure for making inferences. On each occasion on which an inference is made, there are generally many candidate inferences that could be made. Thus, the builder of a model is persistently faced with the necessity of selecting the one correct inference from the many candidates.
By tradition, model builders discriminate the one correct inference from the many possibilities through the use of the intuitive rules of thumb that are known as “heuristics.” Least sum of the squared errors is an example of one of them.
However, the method of heuristics has a logical shortcoming: On each occasion in which a particular heuristic identifies a particular inference as the one correct inference, a different heuristic identifies a different inference as the one correct inference. In this way, the method of heuristics violates Aristotle’s law of non-contradiction. Thus, though use of the method of heuristics is traditional it is also illogical.
Violations of non-contradiction may be avoided through replacement of the method of heuristics by optimization of the selected inferences. I’ve provided an introduction to this topic in the series of three articles that are published at the blog Climate, Etc under the title of “The Principles of Reasoning.”

AJB
October 27, 2012 4:47 pm

icarus62 says, October 27, 2012 at 1:13 pm
What source data have you used for this (the actual file name and columns) and how did you arrive at the value plotted on this graph?

AJB
October 27, 2012 6:41 pm

icarus62 says October 27, 2012 at 1:13 pm
Using your graph of only 10 values, you call a 15% decline in a mere 8 years statistically insignificant (despite it being based on ludicrous 30-year trailing averages). Amazing.

Nick Kermode
October 28, 2012 9:01 pm

Hi Clive, this graph does not show what the headline says. It actually shows the anomaly in degrees, not tenths of a degree. If you were showing the data in the suggested metric you would remove the decimal point from the axis; otherwise the values read as fractions of a tenth of a degree.

October 29, 2012 2:34 am

Hi Nick,
You are right. The headline says tenths of a degree while the y-axis scale is in degrees. The graph above was taken from the original Mail on Sunday article. What is actually plotted is the temperature anomaly against the 1961-1990 average. However, I assume David Rose just rounded it to 14 degrees to make it easier for the public to understand. My fit to the anomaly data can be seen here and the fit to Hadcrut3 is here.

Carter
October 31, 2012 1:52 pm

[snip]