The Met Office responds to Doug Keenan's statistical significance issue

Bishop Hill reports that Doug Keenan’s article about statistical significance in the temperature records seems to have had a response from the Met Office.

WUWT readers may recall our story here: Uh oh, the Met Office has set the cat amongst the pigeons:

===========================================

The Parliamentary Question that started this was put by Lord Donoughue on 8 November 2012. The Question is as follows.

To ask Her Majesty’s Government … whether they consider a rise in global temperature of 0.8 degrees Celsius since 1880 to be significant. [HL3050]

The Answer claimed that “the temperature rise since about 1880 is statistically significant”. This means that the temperature rise could not be reasonably attributed to natural random variation — i.e. global warming is real. 

The issue here is the claim that “the temperature rise since about 1880 is statistically significant”, which was made by the Met Office in response to the original Question (HL3050). The basis for that claim has now been effectively acknowledged to be untenable. Possibly there is some other basis for the claim, but that seems extremely implausible: the claim does not seem to have any valid basis.

=============================================

The Met Office website text is here, and there is a blog post here.



133 Comments
May 31, 2013 10:27 am

All the response says is that, statistically, there are other possible reasons for the temperature rise than the one proposed for CAGW CO2. It does NOT invalidate the CAGW hypothesis. Since the IPCC et al. claim there are real-world theories and observations for the CO2-as-demon narrative, CAGW lives still: while multiple causes could be responsible, investigation has narrowed the suspects down to one.
It is as if a murder case were underway in court, and the defense attorney has asked the detective whether Colonel Mustard could have done the crime and not the Professor in the box. The detective says yes, the Professor could have done it (motive, opportunity, fingerprints on the gun), but the Colonel not only had all those things but was seen by three policemen and a nun pulling the trigger and kicking the body.
The admission means "unprecedented" and "unique" cannot stand, but the villain charged still looks best for the crime.

Scott Scarborough
May 31, 2013 10:30 am

I read the introduction. It does not say what I expected it to say. Their arguments are lame. The best model they could come up with, to show that Keenan's choice of model is also not perfect, does not seem to produce results any better than Keenan's (it even seems worse) – by their own numbers! I think that they are counting on people not having the slightest idea of what they are talking about.

Dodgy Geezer
May 31, 2013 10:30 am

From the response:
“…A wide range of observed climate indicators continue to show changes that are consistent with a globally warming world, and our understanding of how the climate system responds to rising greenhouse gas levels….”
The Met Office are arguing that one statistical approach is much like another, and that no statistic actually proves anything, but that the rising temperatures since 1880 (which everyone accepts) are 'consistent' with the theory that we're all going to fry.
When I leave my house in the morning and get in my car, those actions are ‘consistent’ with the theory that I’m going to rob a bank downtown. So I wonder why I don’t get arrested…?

arthur4563
May 31, 2013 10:33 am

"Statistical significance" is a trap for the unwary, since many think that statistical significance means a significant change per se. That's not at all true. Statistical significance is normally calculated to test the null hypothesis, which in this case signifies zero or no "true" warming.
Given enough data points, even minuscule, totally insignificant warming can be declared statistically significant. I see this misunderstanding constantly. If one claims a 0.5 degree increase, for example, then the proper statistical test is whether there has been that much change, with results reported at the .01 and .05 significance levels.
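Arthur4563's point about sample size can be sketched numerically. The snippet below is purely illustrative Python (nothing from the Met Office exchange; the trend and noise values are invented): a trend of 0.001 degrees per step is practically negligible next to the noise, yet with enough data points an ordinary least-squares test declares it statistically significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_value_for_trend(n, slope=0.001, noise_sd=0.2):
    """p-value of an OLS trend fit to a tiny trend buried in noise."""
    t = np.arange(n)
    y = slope * t + rng.normal(0.0, noise_sd, n)
    return stats.linregress(t, y).pvalue

# Same minuscule trend; only the number of observations changes.
print(p_value_for_trend(50))     # short record: typically not "significant"
print(p_value_for_trend(50000))  # long record: overwhelmingly "significant"
```

The effect size never changes; only the record length does, which is exactly why "statistically significant" must not be read as "practically significant".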

Mike jarosz
May 31, 2013 10:36 am

Because we said so. That’s why. Debate is over, We have consensus. Move along.

Jacob
May 31, 2013 10:38 am

Wow. The Met Office's blog post was painful to read, not for its crushing arguments but for its arm-waving and logically fallacious arguments. As a professional meteorologist myself, I would expect far better from a national meteorological organization. If they are going to lie, at least lie well, not like a bloviating bloke.

Scott Scarborough
May 31, 2013 10:40 am

I thought that they were going to address the reason why they chose a statistical model that fits the data 1,000 times worse than others that are available. That is the real question.

Editor
May 31, 2013 10:41 am

Prof Slingo's paper, although well researched, does not at any point prove that CO2 is the reason why the climate has warmed. She and everyone else involved in it have not put forward a satisfactory explanation as to why there has been no global warming for the last 16 years despite the headlining 400 ppm atmospheric CO2.

Disko Troop
May 31, 2013 10:46 am

Smoke and mirrors.

Scott Scarborough
May 31, 2013 10:46 am

Maybe there is an answer as to why they chose the statistical model that they did. Maybe it is better to choose first-order models than third-order models, for example; I don't know. That is what I wanted from them: an explanation, not arm-waving.

May 31, 2013 10:50 am

'Thus, the Met Office does not use one of these statistical models to assess global temperature change in relation to natural variability. In fact, work undertaken at the Met Office on the detection of climate change in observational data is predominantly based on the application of formal detection and attribution methods. These methods combine observational evidence with physical knowledge of the climate (in the form of general circulation models) and its response to external forcing agents, and have a solid foundation in statistics. These methods allow physical knowledge to be taken into account when assessing a changing climate and are discussed at length in Chapter 9 of the Contribution of Working Group I to IPCC AR4.'
Er… so they do use statistical models, just different ones, plus their limited understanding of forcing agents.

Claude Harvey
May 31, 2013 10:54 am

When clear and concise questions and accusations are answered with “word blizzards”, even the fly on the wall knows who’s blowing smoke.

Latitude
May 31, 2013 10:58 am

whether they consider a rise in global temperature of 0.8 degrees Celsius since 1880 to be significant.
======
..and this is where everyone has lost the argument
rise? from what……..NORMAL
You’ve let the crooks define normal…………..

g3ellis
May 31, 2013 10:59 am

It is still carp… when you take a sine wave and add 1, then say everything above 0 is significant and blame it on CO2 instead of the +1 you added…. sigh. Totally a biased model.

clipe
May 31, 2013 11:16 am

“Those are my principles, and if you don’t like them… well, I have others.”
Groucho Marx.

Steven Mosher
May 31, 2013 11:20 am

Keenan's statistical model is physically wrong.
When you analyze data you choose a model. Picking a model that is physically wrong (for example, a random walk for temperature) can get you a better fit, but it's a mistake.
A good example would be people who look at ice melt in September and fit that data with a linear trend. Well, before we even start we know this model is physically wrong.
How do we know that? Well, at some time in the future your model will predict negative ice area. So a linear model might be useful for communicating the loss rate, but you know that it's physically wrong, so you should not hang anything too heavy on it. That is, if that choice of model leads to stupid conclusions, that's a good hint the model is misleading, regardless of how well it "fits" the data.
Put another way: Keenan chose a model that fit the data better. That model says there is no warming. But looking at the data we know it has warmed. Looking at the Thames we know it isn't frozen. Looking at the sea level we know it has gone up. We know the LIA was cooler. Plants know it. Animals know it. Ice knows it. What this means is that Keenan has chosen the wrong model. There are an infinite number of models that fit the data as well as or better than his model. Fitting the data "better" is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan's is not.
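The "fits better but is physically wrong" point can be illustrated with a toy simulation (my own sketch, not Mosher's analysis or Keenan's): a driftless random walk contains no true trend at all, yet a straight line routinely "explains" a large share of its variance, so goodness of fit alone cannot vindicate a model.

```python
import numpy as np

rng = np.random.default_rng(0)

def r2_of_trend_fit(n=130):
    """R^2 of a straight-line fit to one driftless random walk.

    n=130 is roughly the 1880-2010 span, one step per year; the step
    size (0.1) is an arbitrary illustrative value.
    """
    walk = np.cumsum(rng.normal(0.0, 0.1, n))
    t = np.arange(n)
    return np.corrcoef(t, walk)[0, 1] ** 2

# Average over many realizations: the "fit" looks impressive even
# though the true trend is exactly zero by construction.
r2s = [r2_of_trend_fit() for _ in range(500)]
print(f"mean R^2 of trend fits to trendless walks: {np.mean(r2s):.2f}")
```

A model with zero real trend can thus out-fit a physically motivated one on any single realization, which is why fit statistics alone cannot settle the model-choice argument either way.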

Lance Wallace
May 31, 2013 11:25 am

In Slingo’s defense (God, did I just say that?), or more correctly in defense of the 5 or 6 persons she credits who probably wrote the entire response, she (they) make the point that surface temperature is only one of 11 “indicators” (ice coverage, specific humidity, tropospheric and stratospheric temperature, etc.) that the Met Office uses to study climate change. In fact, the pdf refers to a rather nice collection of 50 or so datasets on all these indicators that were made available in 2010. I for one found the collection to be quite useful, although the datasets need to be updated to 2013. It is also possible (likely?) that these datasets are cherry-picked, leaving out inconvenient ones. So buyer beware. Here are the datasets:
http://www.metoffice.gov.uk/hadobs/indicators/11keyindicators.html

Rud Istvan
May 31, 2013 11:30 am

The important information is that Slingo replied at all. Keenan’s post obviously stung, and there must be additional powerful politics at work behind the scenes.
The UK government is very far out on a limb. A WSJ article about clear-cutting North Carolina to feed wood pellets to Drax at a subsidized cost increase of £600 million is not sitting well with the Sierra Club and WWF. The Met has already acknowledged the pause, and revised its interim forecast to no change until near the end of the decade. And it blew it by asserting the just-past miserable winter/spring was due to global warming. Clear loss of credibility.
Now if only we could begin to see an equivalent climb-down in the US, as opposed to OBumer tweets about Cook's nonsense, proving not only poor judgement about the quality of information but detachment from the real world's current state of play. Keystone XL being exhibit 1.

Dodgy Geezer
May 31, 2013 11:40 am

They say that they don't depend only on stats; they depend on 'a deep understanding of the climate system' and 'complex models'.
The trouble is that the hypothesis that CO2 drives everything is just a hypothesis, and the models that they use have obviously failed, as can be seen from their outputs. When you ask about these, they justify the CO2 hypothesis and the models by referring to the stats, saying that the models and hypothesis MUST be right because there is statistically significant warming going on.
This is a common bureaucratic circular-argument trick. It needs to be exposed for what it is…

Hal Javert
May 31, 2013 11:43 am

Steven Mosher says: May 31, 2013 at 11:20 am
There are an infinite number of models that fit the data as well or better than [Keenan’s] model. fitting the data “better” is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan’s is not.
==================================================
Hmmm; interesting that Mosher requires this rigor of Keenan's model, but not the Met's (or IPPC's…).

Richard M
May 31, 2013 11:45 am

How to avoid addressing the issue in one easy lesson. This silly response from the Met Office shows their true colors. They decided not to address the issue of statistical significance at all, but instead talked around the issue. FAIL.

Hal Javert
May 31, 2013 11:46 am

IPPC = IPCC

USDOTguy
May 31, 2013 11:53 am

Pardon me if someone has already said this, but I think we need to clarify the meaning of "statistical significance" in the context of regression analysis. The purpose of doing a regression is to test a hypothesis. In this case, the hypothesis is that warming since 1880 exceeds normal climate fluctuations. It appears that the warming is in fact "not statistically significant". That means the hypothesis that warming exceeds normal variability cannot be accepted. There is no proof that the warming is "unprecedented", or even unusual. We need more data. A longer time series would be best.
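One way to make "exceeds normal climate fluctuations" concrete is a Monte Carlo test: simulate many series from a stand-in natural-variability null model and ask how often they produce a trend as large as the observed one. The sketch below is purely illustrative Python; the AR(1) parameters and the 0.007-per-step trend are invented numbers, not estimates from any real temperature record, and this is not the Met Office's detection-and-attribution method.

```python
import numpy as np

rng = np.random.default_rng(1)

def trend(y):
    """OLS slope of a series against its time index."""
    t = np.arange(len(y))
    return np.polyfit(t, y, 1)[0]

def simulate_ar1(n, phi=0.6, sigma=0.1):
    """One realization of a stand-in AR(1) 'natural variability' null."""
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = phi * y[i - 1] + rng.normal(0.0, sigma)
    return y

n = 130  # roughly a 1880-2010 span, one step per year
observed = 0.007 * np.arange(n) + simulate_ar1(n)  # ~0.9 deg built in

# Fraction of purely natural realizations whose trend matches or
# exceeds the observed one: a Monte Carlo p-value under this null.
null_trends = np.array([trend(simulate_ar1(n)) for _ in range(2000)])
p = np.mean(np.abs(null_trends) >= abs(trend(observed)))
print(f"Monte Carlo p-value under the AR(1) null: {p:.3f}")
```

The verdict depends entirely on the null model chosen, which is the crux of the Keenan/Met Office dispute: with a different (e.g. random-walk) null, the same observed trend can come out "not significant".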

Mindert Eiting
May 31, 2013 11:57 am

Arthur4563, spot on. This is a well-known problem with the Fisher procedure. Significance depends on effect size and sample size. For samples of 'infinite' size, each null hypothesis should be rejected at each significance level, whatever the result may be. A null hypothesis postulating an exact null is trivially false. Perhaps we have forgotten that Fisher devised his procedure for making simple decisions about experiments. Here is a simple question: if we took the temperature record of the past century and erroneously turned it upside down, would we get the same significance level while testing the null hypothesis of no change?

Matthew R Marler
May 31, 2013 12:00 pm

Steven Mosher: Keenan’s statistical model is physically wrong.
That’s one possibility.
The basic problem is that, after observing what seems like a change, there is no longer any way to formulate what would have been the null hypothesis a priori, that is, a reasonable expectation of what would have happened absent the hypothesized cause of the change.
Thinking back to the Little Ice Age, and hypothesizing before the rise that CO2 might or might not cause an increase in temp, what would the null hypothesis of negligible CO2 effect look like? Stationary independent year-on-year mean changes? Stationary red noise? Non-stationary chaos? All we can say now is that, for some of the possible null hypotheses that might have been chosen, the change in temperature is compatible with no effect of CO2; but for other null hypotheses that might have been chosen, the change in temperature is not compatible with no effect of CO2.
Looking forward, a reasonable null hypothesis is that from 1950 onward the spectral density of the mean temperature time series is unchanged from what it was before 1950. The problem, as everyone knows, is that there are not enough data for a sufficiently precise estimate of the prior spectral density function.
