The Met Office responds to Doug Keenan's statistical significance issue

Bishop Hill reports that Doug Keenan’s article about statistical significance in the temperature records seems to have had a response from the Met Office.

WUWT readers may recall our story here: Uh oh, the Met Office has set the cat amongst the pigeons:

===========================================

The Parliamentary Question that started this was put by Lord Donoughue on 8 November 2012. The Question is as follows.

To ask Her Majesty’s Government … whether they consider a rise in global temperature of 0.8 degrees Celsius since 1880 to be significant. [HL3050]

The Answer claimed that “the temperature rise since about 1880 is statistically significant”. This means that the temperature rise could not be reasonably attributed to natural random variation — i.e. global warming is real. 

The issue here is the claim that “the temperature rise since about 1880 is statistically significant”, which was made by the Met Office in response to the original Question (HL3050). The basis for that claim has now been effectively acknowledged to be untenable. Possibly there is some other basis for the claim, but that seems extremely implausible: the claim does not seem to have any valid basis.

=============================================

The Met Office website text is here and there is a blog post here.

Doug Proctor

All the response says is that there are other reasons, statistically, for the temperature rise than that proposed for CAGW CO2. It does NOT invalidate the CAGW hypothesis. Since the IPCC et al claim there are real-world theories and observations for the CO2-as-demon narrative, CAGW lives still: while multiple causes could be responsible, investigation has narrowed the suspects down to one.
It is as if a murder case were underway in court, and the defense attorney has asked the detective if Colonel Mustard could have done the crime and not the Professor in the box. The detective says yes, the Professor could have done it (motive, opportunity, fingerprints on the gun), but the Colonel not only had all those things but was seen by three policemen and a nun pulling the trigger and kicking the body.
The admission means unprecedented and unique cannot stand, but the villain charged still looks best for the crime.

Scott Scarborough

I read the introduction. It does not say what I expected it to say. Their arguments are lame. The best model they could come up with to show that Keenan’s choice of model is also not perfect does not seem to produce results any better (it even seems worse) than Keenan’s – by their own numbers! I think that they are counting on people not having the slightest idea of what they are talking about.

Dodgy Geezer

From the response:
“…A wide range of observed climate indicators continue to show changes that are consistent with a globally warming world, and our understanding of how the climate system responds to rising greenhouse gas levels….”
The Met Office are arguing that one statistical approach is much like another, and no statistic actually proves anything. But that the rising temperatures since 1880 (which everyone accepts) are ‘consistent’ with the theory that we’re all going to fry.
When I leave my house in the morning and get in my car, those actions are ‘consistent’ with the theory that I’m going to rob a bank downtown. So I wonder why I don’t get arrested…?

Arthur4563

“Statistical significance” is a trap for the unwary, since many think that statistical significance means a significant change, per se. That’s not at all true. Statistical significance is normally calculated to test the null hypothesis, which in this case signifies zero or no “true” warming.
Given enough data points, even minuscule, totally insignificant warming can be declared statistically significant. I see this misunderstanding constantly. If one claims a 0.5 degree increase, for example, then the proper statistical test is whether there has been that much change, with results at the .01 and .05 levels of confidence provided.
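The sample-size point above can be illustrated with a quick simulation. The numbers below (a 0.0005-degree-per-step trend buried in unit-variance noise, 100,000 points) are arbitrary choices for illustration, not anything from the temperature record:

```python
import math
import random

def ols_slope_t(y):
    """OLS slope of y against its index, plus the slope's t-statistic."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    slope = sxy / sxx
    resid = [v - ybar - slope * (i - xbar) for i, v in enumerate(y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope, slope / se

random.seed(1)
tiny = 0.0005  # a practically meaningless warming per step
y = [tiny * i + random.gauss(0, 1) for i in range(100_000)]
slope, t = ols_slope_t(y)
# With this many points the t-statistic is enormous: "statistically
# significant" says nothing about whether the change is large enough to matter.
```

The same trend with only a hundred points would be nowhere near significant; only the sample size changed.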

Mike jarosz

Because we said so. That’s why. Debate is over, We have consensus. Move along.

Jacob

Wow. The Met Office’s blog post was painful to read, not for its crushing arguments, but rather for its use of arm-waving and logically fallacious arguments. As a professional meteorologist myself, I would expect far better from a national meteorological organization. If they are going to lie, at least lie well, not like a bloviating bloke.

Scott Scarborough

I thought that they were going to address the reason why they chose a statistical model that fits the data 1000 times worse than others that are available. That is the real question.

Prof Slingo’s paper, although well researched, does not at any point prove that CO2 is the reason why the climate has warmed. She and everyone else involved in it have not put forward a satisfactory explanation as to why there has been no GW for the last 16 years despite the headlining 400ppm atmospheric CO2.

Disko Troop

Smoke and mirrors.

Scott Scarborough

Maybe there is an answer as to why they chose the statistical model that they did. Maybe it is better to choose first order models than third order models, for example – I don’t know. That is what I wanted from them…an explanation, not arm waving.

‘Thus, the Met Office does not use one of these statistical models to assess global temperature change in relation to natural variability. In fact, work undertaken at the Met Office on the detection of climate change in observational data is predominantly based on the application of formal detection and attribution methods. These methods combine observational evidence with physical knowledge of the climate (in the form of general circulation models) and its response to external forcing agents, and have a solid foundation in statistics. These methods allow physical knowledge to be taken into account when assessing a changing climate and are discussed at length in Chapter 9 of the Contribution of Working Group I to IPCC AR4.’
Er…so they do use statistical models, just other ones and the limited understanding of forcing agents.

Claude Harvey

When clear and concise questions and accusations are answered with “word blizzards”, even the fly on the wall knows who’s blowing smoke.

Latitude

whether they consider a rise in global temperature of 0.8 degrees Celsius since 1880 to be significant.
======
..and this is where everyone has lost the argument
rise? from what……..NORMAL
You’ve let the crooks define normal…………..

g3ellis

It is still carp… when you take a sine wave and add 1, then say everything above 0 is significant and blame it on CO2 instead of where you added +1…. sigh. Totally a biased model.

clipe

“Those are my principles, and if you don’t like them… well, I have others.”
Groucho Marx.

Steven Mosher

Keenan’s statistical model is physically wrong.
When you analyze data you choose a model. Picking a model that is physically wrong (for example, a random walk for temperature) can get you a better fit, but it’s a mistake.
A good example would be people that look at ice melt in September and fit that data with a linear trend. Well, before we even start we know this model is physically wrong.
How do we know that? Well, at some time in the future your model will predict negative ice area. So a linear model might be useful for communicating the loss rate, but you know that it’s physically wrong, so you should not hang anything too heavy on it. That is, if that choice of models leads to stupid conclusions, that’s a good hint the model is misleading, regardless of how well it “fits” the data.
Put another way: Keenan chose a model that fit the data better. That model says there is no warming. But looking at the data we know it has warmed. Looking at the Thames we know it isn’t frozen. Looking at the sea level we know it has gone up. We know the LIA was cooler. Plants know it. Animals know it. Ice knows it. What this means is that Keenan has chosen the wrong model. There are an infinite number of models that fit the data as well or better than his model. Fitting the data “better” is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan’s is not.
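The negative-ice-area objection above can be made concrete with a toy linear fit. The series below (a made-up September extent declining by a fixed amount per year) is purely illustrative, not real ice data:

```python
# Hypothetical September ice extents, million km^2, with a steady decline.
years = list(range(1979, 2013))
extent = [7.0 - 0.08 * (yr - 1979) for yr in years]  # invented numbers

# Least-squares line through the points (exact here, since the data are linear).
n = len(years)
xbar = sum(years) / n
ybar = sum(extent) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, extent)) / \
        sum((x - xbar) ** 2 for x in years)
intercept = ybar - slope * xbar

# Extrapolated far enough, the line crosses zero and then goes negative:
zero_year = -intercept / slope
# zero_year ≈ 2066.5; beyond that the "model" predicts negative ice area,
# which is the sense in which a linear fit is physically unrealistic.
```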

Lance Wallace

In Slingo’s defense (God, did I just say that?), or more correctly in defense of the 5 or 6 persons she credits who probably wrote the entire response, she (they) make the point that surface temperature is only one of 11 “indicators” (ice coverage, specific humidity, tropospheric and stratospheric temperature, etc.) that the Met Office uses to study climate change. In fact, the pdf refers to a rather nice collection of 50 or so datasets on all these indicators that were made available in 2010. I for one found the collection to be quite useful, although the datasets need to be updated to 2013. It is also possible (likely?) that these datasets are cherry-picked, leaving out inconvenient ones. So buyer beware. Here are the datasets:
http://www.metoffice.gov.uk/hadobs/indicators/11keyindicators.html

Rud Istvan

The important information is that Slingo replied at all. Keenan’s post obviously stung, and there must be additional powerful politics at work behind the scenes.
The UK gov is very far out on a limb. WSJ article about clear cutting North Carolina to feed wood pellets to Drax at a subsidized cost increase of £600 million is not sitting well with the Sierra Club and WWF. Met already acknowledged the pause, and revised interim forecast to no change until near end of decade. And blew it by asserting the just past miserable winter/spring was due to global warming. Clear loss of credibility.
Now if only we could begin to see equivalent climb down in the US, as opposed to OBumer tweets about Cook’s nonsense, proving not only poor judgement about quality of information but detachment from the real world’s current state of play. Keystone XL being exhibit 1.

Dodgy Geezer

They say that they don’t only depend on stats, they depend on ‘a deep understanding of the climate system, and ‘complex models’.
The trouble is that the hypothesis that CO2 drives everything is just a hypothesis, and the models that they use have obviously failed, as can be seen from their outputs. When you ask about these, they justify the CO2 hypothesis and the models by referring to the stats – saying that the models and hypothesis MUST be right, because there is statistically significant warming going on.
This is a common bureaucratic circular argument trick. It needs to be exposed for what it is…

Hal Javert

Steven Mosher says: @ May 31, 2013 at 11:20 am
There are an infinite number of models that fit the data as well or better than [Keenan’s] model. fitting the data “better” is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan’s is not.
==================================================
Hmmm; interesting that Mosher requires this rigor of Keenan’s model, but not the Met’s (or IPPC’s…).

Richard M

How to avoid addressing the issue in one easy lesson. This silly response from the MET Office shows their true colors. They decided not to address the issue of statistical significance at all, but instead talk around the issue. FAIL.

Hal Javert

IPPC = IPCC

USDOTguy

Pardon me if someone has already said this, but I think we need to clarify the meaning of “statistical significance” in the context of regression analysis. The purpose of doing a regression is to test a hypothesis. In this case, the hypothesis is that warming since 1880 exceeds normal climate fluctuations. It appears that the warming is in fact “not statistically significant”. That means the hypothesis that warming exceeds normal variability must be rejected. There is no proof that, in fact, the warming is “unprecedented”, or even unusual. We need more data. A longer time series would be best.

Mindert Eiting

Arthur4563, spot on. This is a well known problem with the Fisher procedure. Significance depends on effect size and sample size. For samples of ‘infinite’ size each null hypothesis should be rejected at each significance level, whatever the result may be. A null hypothesis postulating an exact null is trivially false. Perhaps we have forgotten that Fisher devised his procedure for making simple decisions about experiments. Here is a simple question: if we took the temperature record of the past century and erroneously turned it on its head, would we get the same significance level while testing the null hypothesis of no change?

Matthew R Marler

Steven Mosher: Keenan’s statistical model is physically wrong.
That’s one possibility.
The basic problem is that, after observing what seems like a change, there is no longer any way to formulate what would have been the null hypothesis a priori, that is, a reasonable expectation of what would have happened absent the hypothesized cause of the change.
Thinking back to the Little Ice Age, and hypothesizing before the rise that CO2 might or might not cause an increase in temp, what would the null hypothesis of negligible CO2 effect look like? Stationary independent year-on-year mean changes? Stationary red noise? Non-stationary chaos? All we can say now is that, for some of the possible null hypotheses that might have been chosen, the change in temperature is compatible with no effect of CO2; but for other null hypotheses that might have been chosen, the change in temperature is not compatible with no effect of CO2.
Looking forward, a reasonable null hypothesis is that from 1950 onward the spectral density of the mean temperature time series is unchanged from what it was before 1950. The problem, as everyone knows, is that there are not enough data for a sufficiently precise estimate of the prior spectral density function.

Allan M

Steven Mosher says: @ May 31, 2013 at 11:20 am
There are an infinite number of models that fit the data as well or better than [Keenan’s] model. fitting the data “better” is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan’s is not.
I think that is Keenan’s point. It isn’t physically realistic, but it still works better than the Mutt Office’s.

Myrrh

Latitude says:
May 31, 2013 at 10:58 am
whether they consider a rise in global temperature of 0.8 degrees Celsius since 1880 to be significant.
======
..and this is where everyone has lost the argument
rise? from what……..NORMAL
You’ve let the crooks define normal…………..

Start of the MWP? The Roman Warm? The Optimum?
The temps 137 million years ago before the big plunge into cold in the Cretaceous?
According to this: http://www.telegraph.co.uk/science/dinosaurs/7624014/Dinosaurs-died-from-sudden-temperature-drop-not-comet-strike-scientists-claim.html

My eyes glaze over when assaulted with management speak. Unlike circumlocution which may have a nugget contained within, the dark arts of management and politics have evolved to have no nuggets. There is nothing, just a confluence of words designed so that when it goes belly up no one can be held responsible as no one actually said anything. Of course we all know this sad state of affairs exists because no one has the balls to admit they released the Kraken and he is behind all these nasty weather events.

Theo Goodwin

Steven Mosher says:
May 31, 2013 at 11:20 am
“Keenan’s statistical model is physically wrong.

There are an infinite number of models that fit the data as well or better than his model. fitting the data “better” is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan’s is not.”
Though your appreciation of empirical science has improved recently, you blunder once again here. You contrast “fitting the data better” with “being physically realistic.” Fine, but you seem blithely unaware that there are critical relationships between “the data” and the “physical realism” of the model.
The most important of those relationships is that the data is the ultimate evidence for the physical model (actually, physical theory). The data is used to select the model. You cannot say that the statistical model fits the data but conflicts with physical reality. That is the same thing as saying that the statistical model fits the evidence for the physical theory but conflicts with the physical theory. Nonsense.
You continue to make the fundamental mistake of climate modelers. You believe that you can rationally discuss the “physical reality inside the computer model” apart from the physical evidence for the model (physical theory). It cannot be done.
Well, it cannot be done in science where falsification and observational evidence rule. It is commonly done in metaphysics. Read Charles Sanders Peirce’s “The Fixation of Belief,” (1877). Peirce was a Pragmatist, though a scientific Pragmatist like W. V. Quine, and you might find him a congenial thinker.

Bill Marsh

What I got from the response was, lots of arm waving and a further admission that none of the statistical models are of much value, but, we KNOW CO2 is responsible for any warming, we just know.

Stephen Rasey

@ Steven Mosher 11:20 am (I guess I’ll pile on…)
You say a “linear model might be useful for communicating the loss rate” but since it will inevitably lead to negative areas, it is wrong, regardless of how it fits the data. I have no argument with that. But that’s about all.
Then you say, Keenan chose a model that fit the data better. That model says there is no warming. But looking at the data we know it has warmed.
This is a contortionist’s Red Herring.
First off, Keenan’s model isn’t wrong because it leads to physically impossible scenarios and violates boundary conditions. You say it is wrong because by “looking” at the data it doesn’t support the conclusion you prefer, not because it leads to impossible physics. A failure in logic. A failure in simile. A sleight of hand with predicate.
Second. Keenan’s model does NOT say “there IS NO warming.” It only says that natural variability is such that what appears to be warming in the data could be statistical noise from a non-warming system. The null hypothesis (temperature change is natural) cannot be rejected with Keenan’s model. You should know the difference. If I flip a coin 10 times with 7 heads and 3 tails, I cannot reject the idea that it is a fair coin.
Third. All models are “wrong”. Some models are wronger than others. (Asimov: The Relativity of Wrong) It ill serves a scientific argument to say a model is wrong without offering a model that better fits the data and better fits the boundary conditions, or is easier to work with acceptable increases in error.
Fourth. Given the resources involved, how do you hold Keenan’s model to a higher standard than the MET’s or IPCC’s?
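The coin-flip example above can be checked exactly with a binomial tail sum; this is textbook probability, not anything specific to Keenan’s model:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) when X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_one_sided = binom_tail(10, 7)    # 176/1024 ≈ 0.172
p_two_sided = 2 * p_one_sided      # ≈ 0.344
# Neither comes anywhere near 0.05, so 7 heads in 10 flips gives no
# grounds to reject the fair-coin null hypothesis.
```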

So basically the MET office is saying “if you don’t like those bananas, it’s OK as we’ve got plenty of other bananas!”

M Courtney

Just read Slingo’s paper. It is illogical.

It should be noted that the Met Office does not rely solely on statistical models in its detection and attribution of climate change.

But Panel 5 of fig 2 is entirely based on statistical models. The upshot is ascribed to man’s effect because of models.
If the effect of heat capture by spectroscopic attributes of CO2 led to that then… well, we would understand all the feedbacks in the oceans, atmosphere, mankind’s economy and all the unknowns.
It is reasonable to make that assumption but she ought to acknowledge it is an assumption and not put in bold that other data backs it up when the other data does not support that. The observations support the warming of the world. OK. But not the cause of the warming of the world.
And Keenan’s model says that it could be a random walk that leads to the warming of the world. To say it is the work of man relies on other models.
This is a circular argument.
Whoops.

Steven Mosher says:
May 31, 2013 at 11:20 am
“Put another way. Keenan chose a model that fit the data better. That model says there is no warming. ”
Really? As I read Mr. Keenan’s argument, he seems to have taken the observed warming as a given and proceeded to argue that proper statistical analysis shows it to be “not significant”, i.e. not demonstrably outside the bounds of natural random variation. The question then resolves down to what constitutes the “proper” statistical analytic procedure. From my perspective that question is about as “settled” as climate science in general. From observing comment threads here and elsewhere over the years, I’ve seen commenters too numerous to count, with varying degrees of at least claimed statistical expertise, arguing steadfastly that their preferred methodology is the gold standard of statistical mathematics. They have been, almost universally, met by other comments arguing quite contrary positions. I personally lack the level of expertise to fully evaluate these arguments, mostly because I haven’t been inclined to invest my time in exploring a field that I consider to be mostly dubious. I admit statistical analysis is, potentially at least, a valuable and necessary tool for modern science and modern life, but in its current practice it seems to be used less to reveal hidden truths than to obfuscate them. “Statisticians” have become like lawyers, willing to offer analyses that support whatever agenda they or their clients are pushing. I have become like the people in the village in the story of “The Little Boy Who Cried Wolf”. Having been lied to so many times in the past, I have lost the capacity to respond appropriately when something that may actually be true is presented.
Truthfully, I did not find Mr. Keenan’s analysis to be entirely compelling, but I would say the same for Ms. Slingo’s counter argument and the length and detail included in it at least explains why the Met Office had to wait for the 6th iteration of the question before they were willing to respond.

kadaka (KD Knoebel)

From Steven Mosher on May 31, 2013 at 11:20 am:

A good example would be people that look at ice melt in september and fit that data with a linear trend. Well, before we even start we know this model is physically wrong.
How do we know that? well at some time in the future your model will predict negative ice area.

Errrrrrrrrr!!!
As you have not yet fit the data, you should not know a linear fit would have a negative trend. Eyeballing it as having a negative linear trend is preliminary fitting.
Any negative trend linear line will eventually cross zero and indicate negative something. Likewise any positive trend linear line can indicate there was a negative something in the past. Thus by your reckoning I’d have to say any linear trend except zero, dead flat, must be physically wrong thus shouldn’t be used.
Except linear models are used all the time, successfully. I’m sorry to explode your worldview like this, but most people can accept there is no negative ice area, once the line crosses the zero it means there is zero ice area, no matter how far the line drops.
Also, you already are certain the trend is negative. You say we should know “before we even start” the linear fit is wrong because it will be negative. Therefore, you have shown bias, you EXPECT a negative trend, before you have even begun the curve fitting, before examining the data.
Minus ten points for House Slytherin.

Lars P.

Steven Mosher says:
May 31, 2013 at 11:20 am
A good example would be people that look at ice melt in september and fit that data with a linear trend. Well, before we even start we know this model is physically wrong.
How do we know that? well at some time in the future your model will predict negative ice area. So a linear model might be useful for communicating the loss rate, but you know that its physically wrong, so you should not hang anything too heavy on it. That is, if that choice of models leads to stupid conclusions, thats a good hint the model is misleading, regardless of how well it “fits” the data.
Put another way. Keenan chose a model that fit the data better. That model says there is no warming. But looking at the data we know it has warmed. Looking at the Thames we know it isnt frozen. Looking at the sea level we know it has gone up. We know the LIA was cooler. plants know it. animals know it. ice knows it. What this means is that Keenan has chosen the wrong model.

Your post has been answered by many, so I will comment only parts I see missing in the answers.
You give a good example of starting by measuring the ice melt in September, then you talk about the temperature rise since the LIA?
You talk about plants and animals knowing of the warming since the LIA but forget about wine grapes growing at higher latitudes than now during the MWP.
Your logic seems very twisted to me towards what you may want to achieve, not towards a clear logic conclusion.

Rud Istvan
Here is what is happening in the real world the Met office have to operate in.
http://wattsupwiththat.com/2013/05/08/the-curious-case-of-rising-co2-and-falling-temperatures/
A sharp drop in temperatures in Britain over the last decade is not easy to explain to British MPs, so Julia Slingo is under pressure internationally and locally to explain exactly what is happening.
tonyb

Chas

“Put another way. Keenan chose a model ………That model says there is no warming.”
-Not true Prof Mosh 🙁

Nick Stokes

Stephen Rasey says: May 31, 2013 at 12:34 pm
“First off, Keenan’s model isn’t wrong because it leads to physically impossible scenarios and violates boundary conditions.”

Yes it is. A random walk is unbounded and has no fixed reference point. If it were to apply, at some stage (not too far away), all life would be extinguished (with probability 1). If it applied in the past, we would not be here. It is physically impossible because there is not the energy available for that unbounded behaviour.
“Second. Keenan’s model does NOT say “there IS NO Warming.” It only says that natural variability is such as it is that what appears to be a warming in the data could be statistical noise from a non-warming system.”
I haven’t seen any numbers that actually say that. Do you have any in mind?
But I wouldn’t be surprised. The model is a random walk with three (3) orders of autocorrelation. A highly autocorrelated random walk does proceed in near straight lines for quite some time. Steps are repeated with little change. The only really random choice is the initial direction. It’s not hard for it to emulate a trend.
“Fourth. Given the resources involved, how do you hold Keenan’s model to a higher standard than the MET’s or IPCC’s?”
Being physically possible is a pretty basic standard. The Met’s is, Keenan’s isn’t. That’s not applying a higher standard.

Matthew R Marler

Nick Stokes: A random walk is unbounded and has no fixed reference point.
Oh Brother. Ever since Einstein, Brownian motion has been successfully used to model lots of processes. The fact that the support for the normal distribution is infinite has never prevented it from being a successful math model for finite measurements.

Nullius in Verba

“If it were to apply, at some stage (not too far away), all life would be extinguished (with probability 1). If it applied in the past, we would not be here.”
This seems to me to be mere objection for objection’s sake. The model is a local approximation. Fitting a linear trend doesn’t mean you think the process will stay linear indefinitely far into the past and future; it means you’re implicitly representing a curve with a power series and trying to estimate only the constant and linear terms, because you don’t have enough data to estimate any of the higher-order terms.
Similarly, the use of an integrated model doesn’t mean that the process is actually unbounded, only that there is a root of the characteristic equation close enough to the unit circle that, given the length of data you have, it is effectively indistinguishable from one; the integrated model is then a good approximation, and the statistics are more reliable if you make that approximation. It’s like taking a short segment of a curve and approximating it as straight, because you don’t have enough data to say otherwise.
ARIMA models are so useful for approximating physical processes because they can represent the discretely sampled output of differential equations. ARIMA(3,1,0) says that the temperature in each year is the accumulation of the heat that is added or subtracted in each year, and that this latter figure can be well-approximated as a second-order differential equation. Using an integrated model is effectively saying that the forces pushing it back to the equilibrium are smaller than the ‘weather’ noise being added or subtracted from year to year, drowning them out. So over the short term, you can get a good fit by ignoring them.
And in any case, Doug’s point was not that ARIMA(3,1,0) is the “right” model, but that saying the rise is significant because it doesn’t fit ARIMA(1,0,0), (which is what the Met Office initially did), is logically invalid. There are lots more driftless models that would need to be excluded before you could conclude there was a drift, and the ARIMA(3,1,0) one was simply an example.
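As a rough illustration of that last point, a driftless integrated series will routinely look “significantly” trending to an ordinary trend test. The AR coefficients below are invented purely for illustration (they are not Keenan’s fitted values); the point is only that, under a simple OLS trend test, well over half of the driftless runs come out “significant”:

```python
import math
import random

def ols_t(y):
    """t-statistic of the OLS slope of y against its index."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (v - ybar) for i, v in enumerate(y)) / sxx
    resid = [v - ybar - slope * (i - xbar) for i, v in enumerate(y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope / se

def driftless_arima310(n, phi, rng):
    """Levels are the running sum of zero-mean AR(3) increments: ARIMA(3,1,0), no drift."""
    d = [0.0, 0.0, 0.0]
    for _ in range(n - 3):
        d.append(phi[0]*d[-1] + phi[1]*d[-2] + phi[2]*d[-3] + rng.gauss(0, 1))
    out, s = [], 0.0
    for x in d:
        s += x
        out.append(s)
    return out

phi = (0.4, 0.2, 0.1)   # made-up AR coefficients, for illustration only
rng = random.Random(0)
hits = sum(abs(ols_t(driftless_arima310(130, phi, rng))) > 1.96
           for _ in range(200))
# Most of the 200 driftless runs show a "significant" trend, so a
# significance result from a trend test cannot by itself exclude this
# kind of natural variation.
```

This is the classic spurious-regression behaviour of integrated series, and it is why the choice of null model matters so much to the significance claim.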

Jordan

Mosher ” First and foremost the model has to be physically realistic.”
On that basis, the evidence suggests the models which anticipated a tropospheric hotspot are not physically realistic. (I know these are physical simulations and not the statistical models being discussed here.)
Recall how Santer and a bunch of other researchers/co-authors went looking for it. Can’t blame them for trying – if they had confirmed the “vertical amplification” of temperature change embodied in the hotspot, it would surely have secured their places in history. They couldn’t.

Greg Goodman

Steven Mosher says:
“Keenan’s statistical model is physically wrong.
When you analyze data you choose a model. picking a model that is physically wrong ( for example a random walk for temperature) can get you a better fit, but it’s a mistake.
A good example would be people that look at ice melt in september and fit that data with a linear trend. Well, before we even start we know this model is physically wrong.”
I have the same impression; you have a valid point. But if it was that simple to refute, why did it take the Met. Office over six months and a probably unprecedented five-times refusal to answer an official parliamentary question?
Keenan is no fool. Neither do I think he is out to deceive. What he was trying to do (and has succeeded) was to force the Met. Office to state that it all depends upon the validity of the model you choose. It seems it is this that they were so steadfastly resisting.
The corollary is clear: how appropriate or valid is the model that the Met. Office chose to use?
They know they are one move away from checkmate and that’s why they are stalling.

Duster

Hmm, Julia Slingo says, after a discussion of various statistical models and their fit to empirical measures over a portion of the 19th and 20th centuries, that:
“These results have no bearing on our understanding of the climate system or of its response to human influences such as greenhouse gas emissions and so the Met Office does not base its assessment of climate change over the instrumental record on the use of these statistical models.”
Why didn’t they just say that models don’t count and that results should be handled wearing nitrile gloves in the first place?

Nullius in Verba says: May 31, 2013 at 1:59 pm
“The model is a local approximation, like fitting a linear trend doesn’t mean you think it will stay linear indefinitely far into past and future, but that you’re implicitly representing a curve with a power series and trying to estimate only the constant and linear terms, because you don’t have enough data to estimate any of the higher-order terms.”

The idea of testing for significant trend, or increase, is to see if something has changed. There wasn’t a trend before, now there is. It doesn’t suggest that there has always been a trend, or always will be.
But the purpose of Keenan’s analysis has been to suggest that nothing has changed. It’s just random variation like we’ve always had.
But random walk variation can’t have been the regular state of affairs. So if you want to adopt it as a local model, you need an idea of when it became a random walk and why.

Nick Stokes – You addressed Stephen Rasey’s “First off, Keenan’s model isn’t wrong because it leads to physically impossible scenarios and violates boundary conditions” incorrectly. He wasn’t saying that Keenan’s model wasn’t wrong, he was criticising Steven Mosher for using an illogical argument instead of that reason.

Chas

“It is physically impossible because there is not the energy available for that unbounded behaviour.” -Nick
Errr?
-The oceans contain around 10^24 grams of water
The average temp is around 4 C whilst the surface temp is around 17 C
If the oceans slowly became well mixed over the next 1000 years (~3×10^10 seconds),
how many watts of forcing would it take to balance this out?
-A speed up or slow down in the downward heat transfer into the oceans would seem quite capable of generating very unbounded behaviour.
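Chas’s back-of-envelope can be made explicit. The constants below (the ocean mass as given above, the specific heat of water, Earth’s surface area) are standard round figures, and 1000 years is about 3.16×10^10 seconds:

```python
mass_g = 1e24             # ocean mass, grams (Chas's figure)
c_water = 4.18            # specific heat of water, J/(g*K)
dT = 17 - 4               # surface minus mean ocean temperature, K
seconds = 1000 * 3.156e7  # 1000 years in seconds (~3.16e10)
earth_area = 5.1e14       # Earth's surface area, m^2

energy = mass_g * c_water * dT   # ~5.4e25 J to mix the warmth through
power = energy / seconds         # ~1.7e15 W sustained over the millennium
forcing = power / earth_area     # ~3.4 W/m^2, globally averaged
```

So on these round numbers the ocean heat reservoir corresponds to a few watts per square metre sustained for a thousand years, which is the scale of Chas’s point.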

Nick Stokes

Greg Goodman says: May 31, 2013 at 2:30 pm
“a probably unprecedented five-times refusal to answer an official parliamentary question?”

That’s simply untrue. The MO did not refuse to answer any questions. In fact, what is being discussed here is their answer to the second question, which was posed as a follow-up to the first.
What the Met Office does seem to have been reluctant to do is to undertake a calculation prescribed by Doug Keenan. That is not refusing to divulge facts – it is resisting doing something that they don’t think is appropriate to do. Q’s in the HoL are not usually used as a management tool.
But it’s not even that Keenan wanted the answer. He wanted the Met to put their name to his calc. And they didn’t want to, probably because they thought it would be misused. As it has been.

Nick Stokes

Chas says: May 31, 2013 at 3:04 pm
“-A speed up or slow down in the downward heat transfer into the oceans would seem quite capable of generating very unbounded behaviour.”

No, unbounded is unbounded. As in boiling. As in white hot.
As I said above, if you want to say that it’s just natural variation and nothing has changed, then you have to propose something that would work without changing.

HAS

To repeat what I said at Bishop Hill:-
This issue becomes a lot simpler to resolve if you remember that the test of a model is its utility. So you need to state your purpose if you are going to judge one model against another. If goodness-of-fit to a time series is your only interest then you will make one judgement, but if you are (for instance) interested in skill at forecasting you will likely make another.