Met Office Statistics Questioned

Is there any statistical evidence that global temperatures have changed since 1997?

Guest post by Clive Best

The UK Met Office seem determined to stand by their claim made in response to the David Rose article in the Mail on Sunday:

‘The linear trend from August 1997 (in the middle of an exceptionally strong El Nino) to August 2012 (coming at the tail end of a double-dip La Nina) is about 0.03°C/decade, amounting to a temperature increase of 0.05°C over that period.’

Several of us have been requesting statistical evidence via their blog that this trend is actually indistinguishable from flat.

Dave Brittan has done a sterling job in replying on behalf of the Met Office, but he eventually crafted a complex answer as to whether the above statement made statistical sense.

“The first is measurement uncertainty associated with basic measurement error and uncertain biases in the observations. These are included in the HadCRUT4 ensemble, and when computing linear trends in global temperatures from August 1997 to August 2012 these give a trend of 0.034 ± 0.011 °C per decade (95% confidence interval) for the observed portion of the earth.”

I questioned this statement because I think their quoted error is actually about a factor of 10 less than it should be. After waiting 36 hours with my post still in moderation, and with no other posts being accepted, I am now presuming that this is their last word on the matter.

Frustrated by the lack of response, I decided instead to do the analysis myself; see the post here:

http://clivebest.com/blog/?p=4237
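
For anyone who wants to poke at the basic calculation, here is a minimal sketch of an ordinary least squares trend on monthly anomalies, with the naive 95% interval the Met Office figure implies. The series below is a synthetic stand-in (the real HadCRUT4 monthly file is not reproduced here), so the printed numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(181)                    # Aug 1997 .. Aug 2012 inclusive
t = months / 12.0
anoms = 0.003 * t + rng.normal(0.0, 0.1, months.size)  # synthetic stand-in

# OLS fit: anomaly = a + b * t
X = np.column_stack([np.ones_like(t), t])
coef, _, _, _ = np.linalg.lstsq(X, anoms, rcond=None)
a, b = coef

# Naive standard error of the slope from the residual variance
resid = anoms - X @ coef
s2 = resid @ resid / (len(anoms) - 2)
se_b = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))

print(f"trend = {10*b:+.3f} +/- {10*1.96*se_b:.3f} C/decade (naive 95% CI)")
```

Note that this interval assumes independent errors; whether that assumption holds for monthly temperature data is exactly the point at issue below.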

October 25, 2012 3:48 pm

I know Gore is sweating, increasingly.

JJ
October 25, 2012 4:19 pm

Several of us have been requesting statistical evidence via their blog that this trend is actually indistinguishable from flat.
Irrespective of whether or not there is any statistical distinction between 0.03C per decade and 0, there is absolutely no practical distinction between the two. None.
A global average temp trend of 0.03C per decade is FLAT. Period.

John Q Public
October 25, 2012 4:21 pm

The fact that Global Warming never came up once in the US Presidential debates shows you how far the credibility of Climate Science has fallen.

geran
October 25, 2012 4:36 pm

As more and more facts like these emerge, and the fear/panic of CAGW goes away, my concern is that “Big Oil” will stop our monthly checks. Drat, I was planning on purchasing one more beach front property. Oh well, I’ll have to make do with the four they already bought for me.
On to the next “sky is falling” meme….

October 25, 2012 4:47 pm

So! That and a Euro will get you a cup of what the British call coffee. Anyone who considers a 0.0-anything on a global average of anything needs a long vacation. I am proposing northern Greenland.

john
October 25, 2012 4:47 pm

Why can’t they just stick to the science and the data? Is there managerial pressure to produce rising trends when there are none?
Climate changes. Everyone accepts that and part of that change also includes periods of no change.

Steve Geiger
October 25, 2012 4:52 pm

I think Lucia’s analysis also rejected the zero trend hypothesis. If the author is not familiar with her work, I would recommend her as a seemingly very honest broker (see link to The Blackboard blog).

kadaka (KD Knoebel)
October 25, 2012 5:10 pm

Relevant WoodForTrees graph, from August 1997 to August 2012 inclusive, note in WFT notation that’s 8/1997 to 9/2012 (“From” is inclusive, “To” is exclusive):
http://www.woodfortrees.org/plot/hadcrut4gl/from:1997.67/to:2012.75/plot/hadcrut4gl/from:1997.67/to:2012.75/trend
Click on “Raw data”: Least squares trend line; slope = 0.00340933 per year

JJ
October 25, 2012 6:09 pm

Clive,
Your analysis does not correct for autocorrelation, and one standard deviation only points at a 68% CI. Go two wide for ~95%. Correct for both (or perhaps just either), and the error bands will include zero and a swath of cooling territory.
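
A sketch of the two corrections JJ describes, on synthetic AR(1) data (the lag-1 coefficient of 0.6 and the noise scale are illustrative assumptions, not values fitted to HadCRUT4). The effective-sample-size adjustment n_eff = n(1 - r1)/(1 + r1) is the standard first-order correction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 181                                    # Aug 1997 .. Aug 2012, monthly
t = np.arange(n) / 12.0

# AR(1) noise with lag-1 coefficient 0.6 (an illustrative assumption)
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.6 * e[i - 1] + rng.normal(0.0, 0.08)
y = 0.003 * t + e                          # synthetic stand-in series

b, a = np.polyfit(t, y, 1)
resid = y - (a + b * t)
se = np.sqrt((resid @ resid / (n - 2)) / np.sum((t - t.mean()) ** 2))

# Effective sample size for AR(1) residuals, then inflate the standard error
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se * np.sqrt((n - 2) / max(n_eff - 2, 1.0))

print(f"naive 95% CI:    {10*b:+.3f} +/- {10*1.96*se:.3f} C/decade")
print(f"adjusted 95% CI: {10*b:+.3f} +/- {10*1.96*se_adj:.3f} C/decade")
```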

October 25, 2012 6:16 pm

” Least squares trend line; slope = 0.00340933 per year” Right. The CRUT4 data comes with error margins; just because a spreadsheet spits out 8 digits after a decimal doesn’t improve the original accuracy of the data. Rose seems to have bated them into a more serious blunder.

October 25, 2012 6:31 pm

So if a strong El Nino swings in late because the sun has been quieter than usual, and it’s about to become very active, peaking within the next 18 months, should this be cause for alarm? The sun’s activity warms the planet, and the lack of activity cools it.

Brian H
October 25, 2012 6:45 pm

Dallas;
baited them, even.
;p

D Caldwell
October 25, 2012 6:58 pm

Approximately 2 divided by almost 3 has a pretty good chance of being somewhere between a half and a whole.
It is not 0.667

wayne
October 25, 2012 7:31 pm

Is there any statistical evidence that global temperatures have changed since 1997?

Hmm. Half of a degree (Aug 2012) minus half of a degree (Aug ’97) is zero, so there is zero actual global warming over this period. There, that wasn’t too hard.
But I am sure that answer of zero doesn’t satisfy those at the Met Office and climatologists worldwide, so off they go in search of the statistical global warming over this same period that surely must be hiding within the numbers somewhere. Need I go any further?
Anyone with the least mathematical or scientific mind surely realizes by these two pieces of evidence that this statistical global warming never really exists and is only a man-made constructed math value of no meaning. It can never burn or warm you or anything else, and only lives within minds, on charts or graphs. Yet daily, millions of words and thousands of hours of computer time are spent in search of this mythical statistical global warming, when actual warming over a period is found by a mere subtraction. Simply breathtaking, the levels of delusion present today.

Paul Vaughan
October 25, 2012 7:51 pm

Assumptions.
Assumptions.
Assumptions.
Plot the residuals. They don’t meet the (unspoken & unwritten) assumptions about the model error term, so statistical inference produces meaningless p-values &/or confidence intervals.
Good Stat 101 students know this, but sensible comments about statistical inference in the climate discussion (in both alarmist & nonalarmist circles) are – to be frank – nearly nonexistent. Everyone just pretends untenable assumptions aren’t a problem and this ignorance is absolutely wrong at the level of fundamentals.
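
A quick way to act on this advice, sketched on synthetic data: plot the residuals of the linear fit and compute the Durbin-Watson statistic (values well below 2 flag the positive autocorrelation that invalidates naive intervals):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 181
t = np.arange(n) / 12.0
e = np.zeros(n)
for i in range(1, n):                      # synthetic autocorrelated noise
    e[i] = 0.7 * e[i - 1] + rng.normal(0.0, 0.08)
y = 0.003 * t + e

b, a = np.polyfit(t, y, 1)
resid = y - (a + b * t)

# Durbin-Watson: ~2 means no lag-1 autocorrelation, << 2 means positive
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(f"Durbin-Watson = {dw:.2f}")

plt.plot(t, resid, ".")
plt.axhline(0, color="k", lw=0.5)
plt.xlabel("years since start")
plt.ylabel("residual (C)")
plt.title("OLS residuals: structure here invalidates the naive CI")
plt.show()
```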

Werner Brozek
October 25, 2012 7:55 pm

“from August 1997 to August 2012 these give a trend of 0.034 ± 0.011 °C per decade (95% confidence interval) for the observed portion of the earth.” So in other words, at 0.034/decade, we can be 95% confident that warming is occurring? But in Phil Jones’ interview from February 2010:
“B – Do you agree that from 1995 to the present there has been no statistically-significant global warming
Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level.”
Can both statements be true for equally long 15 year periods?
With the Hadcrut4 anomaly for August at 0.526, the average for the first eight months of the year is (0.288 + 0.209 + 0.339 + 0.514 + 0.516 + 0.501 + 0.469 + 0.526)/8 = 0.420. If the average stayed this way for the rest of the year, its ranking would be 11th. 2010 was the warmest at 0.54. The highest ever monthly anomaly was in January of 2007 when it reached 0.818. With the 2011 anomaly at 0.399 in 12th place and the 2008 anomaly of 0.383 in 14th place, if things stay as they are, then 3 of the last 5 years are not even in the top 10 in Hadcrut4.
Hadcrut4 has a slope of 0 (-0.00018 per year to be exact) since November 2000, or 11 years, 10 months (through August).
P.S. My earlier graph estimating Hadcrut4 using GISS was off by only one month as the flat line started in December 2000 in the estimation.
See: http://www.woodfortrees.org/plot/hadcrut4gl/from:2000.8/plot/hadcrut4gl/from:2000.8/trend
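
Werner’s “slope of 0 since November 2000” figure can be checked by scanning start months for the earliest start whose OLS slope to August 2012 is non-positive. A sketch only: the `anoms` argument would be the real HadCRUT4 monthly series, which is not bundled here, and the function name is just for illustration.

```python
import numpy as np

def earliest_flat_start(anoms, end_year, end_month):
    """Earliest (year, month) start for which the OLS slope of the monthly
    series `anoms` (ending at end_year/end_month) is non-positive."""
    n = len(anoms)
    t = np.arange(n) / 12.0
    for s in range(n - 24):                 # require at least 2 years of data
        if np.polyfit(t[s:], anoms[s:], 1)[0] <= 0:
            back = n - 1 - s                # months before the series end
            y, m = end_year, end_month - back
            while m < 1:
                y, m = y - 1, m + 12
            return y, m
    return None

# e.g. earliest_flat_start(hadcrut4_monthly, 2012, 8)
# should return (2000, 11) if the comment's figure is right.
```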

Michael
October 25, 2012 8:28 pm

I think this has something to do with science, even though it is OT, so I decided to post it.
A Time-Lapse Map of Every Nuclear Explosion Since 1945 – by Isao Hashimoto

October 25, 2012 8:50 pm

STUDENTS ! ! !
Are you a frustrated wannabe scientist….but you flunked quadratic equations ? ? ?
Welcome to CLIMA-CLOWNOLOGY ! ! !
Where everything is SINGLE PARAMETER, LINEAR equations using easily adjustable data ! ! !
Amaze friends and family with your new FOIA PROOF powers of prediction ! ! !
Take that ISAAC NEWTON ! ! !

Michael
October 25, 2012 9:14 pm

We already have a collective hive mind.
You know it’s there, but you can’t touch it with your hand.
You can get in contact with it, especially through your keyboard, but you can’t grasp the whole of its collective consciousness.
The Universal Collective Hive Mind knows nearly everything knowable.
It’s really there.
It’s like the collective consciousness above us, almost but not quite.
It’s called the Internet.

Brian H
October 25, 2012 9:28 pm

Michael;
INTERNET Hive Mind calling. Either I exist, or you do. Choose.

Editor
October 25, 2012 10:51 pm

The Met Office are determined to keep factoring their global warming assumptions into their supercomputer models, despite all evidence to the contrary.
What results from this is as follows:
1) Forecast of an Indian Summer in October 2012! Did not happen.
2) Forecast of BBQ Summers for this year and last year! Wettest Summers ever!
3) Forecast for the last four years, summer drought! We had hosepipe bans, followed by floods (in the same areas that the Met Office said would be dry).
4) 2010 would be the mildest winter on record, thanks to AGW! In actual fact we had the worst winter for many years.
Met office, will you please cast your bigotry aside and reprogram your computers!

Henry Clark
October 25, 2012 11:16 pm

The global temperature trend over 1997 to now is negative, not positive at all, in RSS satellite data, slightly negative but negative nonetheless:
http://www.woodfortrees.org/plot/rss/from:1997/plot/rss/from:1997/trend
Over 1998 to now in contrast, it is substantially negative, with a linear trend line fit being cooling at a rate of around 0.5 degrees Celsius per decade:
http://www.woodfortrees.org/plot/rss/from:1998/plot/rss/from:1998/trend
The exact trend depends on the start point. 1998 could seem arbitrary. However, 1998 corresponded to the key change from a warming regime to a cooling regime. A surge of warmth that had built up in the oceans through the earlier 1990s was released into the atmosphere by the El Nino then. There will not be warming as strong from such an El Nino again for decades to come, and a trendline like 1998 -> 2020 will show greater cooling than 1998 -> 2012 does now.
——————————–
History is explained well if one sees what unfortunately even few skeptics ever have, part of the big picture visible by combining enough data not too adjusted by the CAGW movement:
http://s10.postimage.org/l9gokvp09/composite.jpg
(click to enlarge and scroll)

October 25, 2012 11:16 pm

http://www.woodfortrees.org/plot/hadcrut3vgl/from:1997/plot/hadcrut4gl/from:1997
HadCRUT4 has miraculously created warming where there was none before. How on Earth could the 2006 El Nino suddenly overtake the 1998 one? Which are the areas that, in retrospect, suddenly increased the Earth’s temperature by 0.2 deg C globally?
I never use HadCRUT4 in any analysis.

Henry Clark
October 25, 2012 11:36 pm

Typo correction to my prior comment: I wrote 0.5 degrees when I meant 0.05 degrees.

Michael
October 26, 2012 12:20 am

Brian H says:
October 25, 2012 at 9:28 pm
“Michael;
INTERNET Hive Mind calling. Either I exist, or you do. Choose.”
We both exist.
You can’t erase information from history. It is stored in universal consciousness. It has something to do with photons and DNA.

October 26, 2012 1:55 am

@JJ “Your analysis does not correct for autocorrelation, and one standard deviation only points at a 68% CI. Go two wide for ~95%. Correct for both (or perhaps just either), and the error bands will include zero and a swath of cooling territory.”
I fully agree. The HADCRUT3 data are actually in negative territory. Yes, it really should be 1 sigma = 68% CL.

Eric H.
October 26, 2012 2:32 am

Time to see what I have learned. I am currently in my second week of my first stats course, so excuse my ignorance. Correlation coefficients run from -1 to +1. Weaker correlations are closer to 0; stronger correlations are closer to -1 or +1. The correlation coefficient for Hadcrut4 from 1997 to Aug 2012 is 0.12. Looking at the scatter plot of temp anomalies, they are all over the place with many outliers. How can this be deemed reliable with such a weak correlation?
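
For what it’s worth, a correlation of r = 0.12 means the straight line explains only r^2, about 1.4%, of the monthly variance. A tiny sketch on synthetic data (the real series would give its own numbers):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(181) / 12.0
y = 0.003 * t + rng.normal(0.0, 0.1, t.size)   # synthetic stand-in

r = np.corrcoef(t, y)[0, 1]                    # Pearson correlation with time
print(f"r = {r:+.3f}, r^2 = {r**2:.4f}")
print(f"variance left unexplained by the line: {1 - r**2:.1%}")
```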

October 26, 2012 2:44 am

What’s that quote from Jaws? “We’re gonna need a bigger boat.”
C’mon Met Office! Place an order for a new supercomputer and you may get the results you crave.

October 26, 2012 3:22 am

@Eric You are quite correct! The correlation with a straight line is very poor and the measurements fluctuate monthly. You might get a better fit with a high-order polynomial, but at the end of the day all you can say from this data is that there has been no net temperature change over the last 16 years.

October 26, 2012 4:15 am

Great! I was reading that and wondering what approach you would use. I was thinking you were leading toward an “it’s well within the spread” … and then, like the crime scene drama … it wasn’t the butler as we had all expected, but out of a cupboard apparently long dead and buried pops (you need to use a Belgian accent): “To emphasize this point, I now show exactly the same analysis done for HADCRUT3 which up to a few months ago was the flagship IPCC data as used in AR4.”
Having met some of the Met Office researchers, I have to say that they are quite reasonable people. Indeed, I sat next to someone who turned out to be a big-wig reading all kinds of sceptic blogs.
However, there is a gulf in understanding and common sense between those at the “grindstone” of the models and those like John Hirst who, if reports of his presentations are accurate, seems not to have a clue and to be making it all up as he goes along.
So I do not believe this nonsense of “It’s warming” (recently) is endorsed by the run of the mill Met Office staff. Instead it appears to be PR spin from senior staff who are taking the good work of their underlings and discrediting the whole organisation by their shameful partisan approach.

KnR
October 26, 2012 5:02 am

In an area like this, when you see claims of accuracy to two decimal places, it should really put a flag up. The idea that you can get, by ‘statistics’, better accuracy than the instruments you’re using to measure the effect in the first place is ‘interesting’.
A trend of 0.034 ± 0.011 °C per decade: in other words, it could be a third out either way. Rather a large error factor, that. Especially given the great and bold claims that are made on the back of this figure based on its ‘value’.

rgbatduke
October 26, 2012 5:14 am

I think this has something to do with science, even though it is OT, so I decided to post it.
I’m not so sure it is off topic. People who like numerology will note that the warming trend of the latter half of the 20th century fairly accurately matched not only a burst of high solar activity but the cumulative numbers from nuclear testing, with a 5-10 year lag. Nuclear testing more or less stopped in 1998, and gee, since then the temperatures have been essentially flat. Just one more confounding correlation in a sea of correlations that may or may not have the slightest causal association.
I can think of several ways, though, for nuclear testing to affect climate — the production of dust, aerosols, and ozone (especially the latter), the transport of water vapor into the stratosphere, the distribution of radioactive particulates that can affect cloud nucleation rates. Some of these one would guess to be cooling, some warming, but sadly, when one considers the energy and detailed nature of the tests it is difficult to reconcile numbers in a way that makes sense (I’ve tried a couple of times) largely because the vast bulk of the later tests were underground and it is difficult to see how the underground tests would affect the atmosphere and climate so much — unless the climate effects were as noted lagged by a decade or more, or were heterodyning with other e.g. solar or GHG influences.
But a pretty, interesting display either way. I lived through most of this — I was born as the Apple II explosion occurred on March 29, 1955 in Operation Teapot. Of course it isn’t THAT unlikely to be born on a day with a bomb, given around 2000 bomb tests over fifty years, but still…
rgb

rgbatduke
October 26, 2012 5:35 am

Oh, and as always, a nice job Clive although as you note, anybody who actually uses stats in some constructive way can glance at the data and reject the notion that there is any meaningful linear trend. Doing a fit is really quite unnecessary.
Kudos also to the people who noted that fitting a linear trend in the first place is sheer numerology, given that the climate is hardly linear on any timescale (yes, that even includes you, Olson). Another way of assessing the “meaning” in the data would be to determine the parameters of its noise and write a short program to simulate the random generation of “simulated temperature” series with zero trend but the same general autocorrelation function, a weighted markov process as it were. Generate a few thousand of these, fit linear trends through them, and compute the probability of getting a warming or cooling stretch out of pure noise that has the same empirical autocorrelation as the general climate signal portrayed, and you’ll have a good idea of how much “meaning” there is in any proposed linear trend. Which is absolutely none, as I said, from a mere eyeballing. It is certain that one would get linear trends that are not zero from every simulation run, and the distribution of slopes would be (I’m guessing) nearly Gaussian and quite wide. One cannot possibly reject the null-hypothesis of no warming from this data, in other words — not at the 95% confidence level, not at the 5% confidence level (both of which terms are meaningless and not really what p-values mean, but that’s the way it goes).
I do wish that folks would contemplate using R instead of Excel for doing stats, though. Excel is fine for doing dumb/simple linear regression — maybe — but not so good for actually playing with the data and learning something a bit deeper from it, and not so good at all for exploring the world of nonlinear fits or predictive models. What is the autocorrelation function for the temperature, for example? Sadly, I have no time to mess with it myself (yet) but perhaps one day.
rgb
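
A minimal sketch of the Monte Carlo exercise rgb outlines, using zero-trend AR(1) noise (the AR coefficient and noise scale are guesses for illustration, not values fitted to the climate record):

```python
import numpy as np

rng = np.random.default_rng(4)
n, runs = 181, 5000
phi, sigma = 0.7, 0.08                     # illustrative guesses
t = np.arange(n) / 12.0

slopes = np.empty(runs)
for k in range(runs):
    e = np.zeros(n)
    for i in range(1, n):                  # zero-trend AR(1) series
        e[i] = phi * e[i - 1] + rng.normal(0.0, sigma)
    slopes[k] = np.polyfit(t, e, 1)[0]

print(f"noise-only slopes: sd = {10*slopes.std():.3f} C/decade")
frac = np.mean(np.abs(slopes) >= 0.0034)   # 0.034 C/decade in C/yr
print(f"P(|slope| >= 0.034 C/decade from pure noise) = {frac:.2f}")
```

The fraction printed at the end is the quantity of interest: how often pure noise with this autocorrelation produces a trend at least as large as the Met Office figure.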

Icarus62
October 26, 2012 8:52 am

My understanding is that over 97.5% of the climate system’s heat content resides in the oceans, and that the oceans have been warming faster in the last decade than in the previous two decades – that’s according to Levitus 2012, and is consistent with rising global sea level and the accelerating loss of Arctic sea ice, Greenland ice mass and Antarctic ice mass. So, it seems that global warming is accelerating rather than slowing down, which is probably what we would expect as our greenhouse gas emissions are rising faster than ever.

RHS
October 26, 2012 9:26 am

Icarus62 – You must have found Kevin T.’s missing heat, something he can’t even do. The ocean buoys can’t find any major increase of heat; the ice loss in the Arctic is driven more by wind and storms than temp. Whatever Greenland has lost, it isn’t much; at the current pace, it will take over 5000 years for it to lose half its ice. And the Antarctic, well, it just set a record for area covered. Yes, I know you are talking volume, but volume hasn’t been measured for very long. The Grace satellites don’t have many years of data to go through.
As far as a rising ocean, it isn’t rising very fast, roughly a foot a century.
And to top it off, MET just decided that we haven’t had any “warming” in over fifteen years. This coming from the organization which once said British children won’t know what snow is.
Nice cherry picking, go sell cherries elsewhere, we like the whole kit and kaboodle.

October 26, 2012 9:29 am

Icarus62:
Please clarify a rumour.
On another thread HenryP said Stephanthedenier is also Gary Lance. Gary Lance did not deny this.
Is he also you, please?
Richard
REPLY: False, by their IP address, they are on different continents – Anthony

October 26, 2012 9:55 am

@rgbatduke

Nuclear testing more or less stopped in 1998, and gee, since then the temperatures have been essentially flat.

This is an original idea I have never heard before! The late 20th century warming ran from around 1960 until around 1998, so it does more or less coincide with nuclear testing. However, if you believe the data, there was a previous warming spell lasting from 1910 till about 1940, before nuclear weapons. So I think there is likely another underlying cause.
If we assume there is an oscillation superimposed on an underlying trend entirely due to CO2 then we get a logarithmic dependency roughly 2.5ln(CO2(year)/CO2(1750)) deg.C. This then gives total warming to a doubling of CO2 of 1.7 deg.C – no big deal. Of course natural variations may dominate anyway.
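
The arithmetic in that last step is easy to verify:

```python
import math

# dT = 2.5 * ln(C / C_1750)  =>  at a doubling of CO2, dT = 2.5 * ln 2
print(f"2.5 * ln(2) = {2.5 * math.log(2):.2f} C per doubling")  # ~1.73
```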

Rex
October 26, 2012 10:10 am

The margins of error quoted by the Met Office are fictitious.
The temperature record is derived from a survey, although many might think it better described as a dog’s breakfast. The error limits calculated by the Met Office assume a perfect sample. This is akin to those polling companies who survey a thousand respondents and then claim that the margin of error for their survey is +-3% in the worst case. These claims are likewise crap. Try +-5% or 6%. They conveniently forget about SURVEY ERROR, one of the first things I learnt about when I got into survey analysis over 40 years ago.

Gary Pearse
October 26, 2012 10:23 am

I’ve said something like this before but I think it bears repeating. If you have to coax, massage and enhance the temp figures over decades looking for a signal of catastrophic warming, then you essentially can drop the C, A, G and possibly even the W off of CAGW. If a tsunami is coming, there is no need to wade out into the water a few hours before with a yardstick to plot millimetres of sea rise to detect it early (I suppose sea level goes down first). Just wait and you will be washed away. Possibly you could make a note or two in a water-proof paper notebook before it’s too late, if the science is so important. What we have is a 250 year recovery from the LIA and it seems to be a bit more than 1C since 1750. If we have another 0.5C by 2100, I could move across the river in Ottawa, Ontario to the Gatineau Hills (I would only have to move upwards about 110 metres) or 45 miles north with no change in elevation where the fishing is good. This adaptation is a bitch.

October 26, 2012 10:39 am

In poking around in the literature of climatology, I’ve found that eccentricities in the language of climatology sometime obscure methodological errors in the research of climatologists. Translation of terms and phrases in the literature of climatology into technical English sometimes serves to reveal these errors.
At http://www.aos.wisc.edu/~sco/normals.html, the University of Wisconsin asserts that a “climatic normal” is “the arithmetic average of a climate event such as temperature over a prescribed 30-year interval”; translated into technical English, this phrase states that the period of a climatological event ends 30 years after it begins while the outcome of this event is the arithmetic average over the period of the event of an instantaneous meteorological observable. The climate unfolds, then, as a sequence of statistically independent events each lasting 30 years. The climate “changes” if and only if the outcome differs between adjacent events. For global warming climatology, the outcome of each event is the arithmetic average, over the period of this event, of the numerical values of a global surface temperature time series, such as the Hadcrut4.
In reviewing Anthony’s paper, I note that: a) the paper does not present the arithmetic average of the values of the Hadcrut4 over the period of an event but rather presents unaveraged values and b) the values that are presented extend over only half of the period of one event. As there are no events, no evidence is presented that is pertinent to climatology and one cannot reach statistically significant conclusions from this non-existent evidence.

October 26, 2012 11:02 am

Terry,
Anthony is innocent here – it is my post, which is intended simply to refute the UK Met Office claim that there has indeed been “global warming” of 0.03 deg C/decade since 1997. Their model actually predicted 0.2 deg C/decade. The clear evidence, based on a 10-year average, is that nothing has changed.

H. Bricobrac
October 26, 2012 11:03 am

The best prediction is that the climate is changing by 0.00C / decade:
“we have been unable to find a scientific forecast to support the currently widespread belief in “global warming.” Climate is complex and there is much uncertainty about causal relationships and data. Prior research on forecasting suggests that in such situations a naïve (no change) forecast would be superior to current predictions” – GLOBAL WARMING: FORECASTS BY SCIENTISTS VERSUS SCIENTIFIC FORECASTS by Kesten C. Green and J. Scott Armstrong

H. Bricobrac
October 26, 2012 11:20 am

“Nuclear testing more or less stopped in 1998, and gee, since then the temperatures have been essentially flat.”
Interesting. The most powerful nuclear device, the Tsar Bomba, released 420 PJ when it exploded. If all of this energy went into heating the atmosphere, it would increase its average temperature by 4.2×10^17 J / (5×10^21 J/K) = 8.4×10^-5 K. About 2000 nuclear bombs have been exploded so far. Assuming all of these were as powerful as Tsar Bomba gives like 0.168 K.
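
That back-of-the-envelope calculation reproduces directly:

```python
E_bomb = 4.2e17     # J, the quoted 420 PJ Tsar Bomba yield
C_atm = 5e21        # J/K, heat capacity of the atmosphere, as quoted
n_tests = 2000      # rough count of all tests, all assumed Tsar-sized

dT = E_bomb / C_atm
print(f"one bomb:  {dT:.1e} K")            # 8.4e-05 K
print(f"all tests: {n_tests * dT:.3f} K")  # 0.168 K
```

Note this is a generous upper bound: very few tests approached the Tsar Bomba’s yield, and not all of the energy heats the atmosphere.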

October 26, 2012 11:25 am

Terry Oldberg:
I write to ask two questions concerning your post at October 26, 2012 at 10:39 am.
Before I ask, I think I need to state that my questions are genuine and have no meaning other than their overt meaning.
My questions are
1.
This thread concerns the Met.Office response to the empirical fact that there has been no discernible global temperature change for at least 15 years. Are you suggesting there has to be no rise in global temperature for at least 30 years before supporters of the AGW-hypothesis will agree global warming has stopped?
2.
If the answer to my Question 1 is an affirmative, then what does that indicate about Hansen’s testimony to the US Senate in 1987 when there was only at most 15 years of global warming prior to then?
Richard

johnbuk
October 26, 2012 11:31 am

Gary Pearse said,
“If we have another 0.5C by 2100, I could move across the river in Ottawa, Ontario to the Gatineau Hills (I would only have to move upwards about 110 metres) or 45 miles north with no change in elevation where the fishing is good. This adaptation is a bitch.”
That’s all very well for you but here am I in a very flat/cold part of the UK and I’ll have to move up to somewhere in Yorkshire – very pretty, but there are Yorkshire people there and they don’t like strangers!!

Spector
October 26, 2012 4:52 pm

I believe the time interval involved in this article is too short to establish any sort of trend, except to say temperatures were relatively constant over the interval in question. I believe a span of years going back to at least 1880 would be required to see any real trend. One site, from the Climatic Research Unit at the University of East Anglia, UK, shows a mere 0.8 degree C rise since 1880, and that data seems to hint there may be a cyclical process involved. As this is the site involved in the Climategate scandal, one might assume this to be a worst-case estimate of climate change.
I personally believe that an increase on the order of 3 to 5 degrees C since 1880 would be required to justify the current state of climate change alarm.
http://www.cru.uea.ac.uk/cru/info/warming/

Reply to  Spector
October 26, 2012 6:08 pm

Spector:
The period since 1880 contains only 3 to 4 independent climatological events of 30 years each. A sample of size 3 to 4 is apt to be highly unrepresentative of the underlying population. Thus, it would be a mistake to generalize from the record of this period.

AJB
October 26, 2012 5:13 pm

Terry Oldberg says, October 26, 2012 at 10:39 am
If we interpret this definition literally we get a plot like this …
http://postimage.org/image/4puutknlj/full
… which just seems to be displaced 15 years to the right of numerological reality.
It doesn’t really matter what you plot, 30 years is just as meaningless as 30 months. Given oceanic turnover time I doubt you’d find any convincing evidence of an AGW signal in the temperature record if we had 3 centuries of reliable data. But even on this arbitrary 30 year basis, what is there to be alarmed about? The rate of change of ‘climate normal’ is clearly decelerating at present – just another wiggle that could go anywhere.
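
A sketch of the quantity AJB is plotting, the 30-year trailing “climate normal” and its rate of change, on synthetic annual data (swap in a real annual anomaly series to reproduce the linked chart):

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1880, 2013)
anoms = 0.005 * (years - 1880) + rng.normal(0.0, 0.1, years.size)  # synthetic

window = 30
normals = np.convolve(anoms, np.ones(window) / window, mode="valid")
rate = np.diff(normals)                    # change in the 30-yr normal, per year

print(f"latest 30-yr normal: {normals[-1]:.3f} C")
print(f"latest rate of change: {10*rate[-1]:+.3f} C/decade")
```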

October 26, 2012 7:00 pm

clivebest:
Thank you for correcting my mistake in confusing you with Anthony!
My critique of your paper’s conclusions is unaffected by my mistake in identifying this paper’s author. In brief, the 15-year period since 1997 can contain no climatological event for, according to the University of Wisconsin, the period of such an event is 30 years. Thus, no climatological conclusions may be based upon the Hadcrut4 global temperature time series in this period. The UK Met Office’s claim of 0.03 deg C/decade of global warming is falsified by the same consideration.

Reply to  Terry Oldberg
October 27, 2012 2:07 am

So we agree then. It makes no sense for the Met Office to state that over the last 16 years the world has warmed 0.03 degrees/decade. Since their GCM model actually predicted 0.2 degrees/decade, presumably they then need 30 years to validate it. In that case can we please wait another 30 years before closing down ALL our coal fired power stations in the UK? It would be a shame to commit economic suicide based on a computer simulation.

Reply to  clivebest
October 27, 2012 9:31 am

clivebest:
One alternative to economic suicide would be litigation.

Spector
October 26, 2012 7:52 pm

RE: Terry Oldberg: (October 26, 2012 at 6:08 pm)
“. . . it would be a mistake to generalize from the record of this period.”
Given what I believe to be our current understanding of climatology, it appears to me that these observations over the last 150 years of modern instrumentation are largely in the noise level. As we have only increased the amount of CO2 in the atmosphere by a half-doubling (from 280 ppm) it is most likely that the net effect of this is about 0.5 degrees C (based on MODTRAN calculations.) I think the higher sensitivities for CO2 (degrees per doubling) indicated by the IPCC may have been the result of taking the steepest slope of observed data and attributing that all to CO2 and an assumed dangerously high, positive climatic feedback mechanism. The flat-line data of the last ten years is inexplicable by that assumption.

October 26, 2012 11:00 pm

richardscourtney:
Thank you for giving me the opportunity to clarify. My answers to your questions of Oct. 26, 2012 at 11:25 am follow.
1. The phrasing of your question suggests a lack of grasp of the significance of the idea of an “event” for the scientific method of investigation. A scientific investigation centers on a complete set of statistically independent events, the so-called “statistical population.” A sample drawn from the underlying population provides the sole basis for falsification of the claims that follow from the investigation. In a diligent search, I’ve been unable to find the events or statistical population that underlie the claims that are made by the IPCC in AR4. It seems to me that IPCC climate “science” is a pseudo-science, for the claims of a science are falsifiable but the claims that are made by the IPCC, absent a statistical population, are not.
According to the University of Wisconsin (U of W), a climatological event lasts 30 years. The U of W implies that, in the investigation of global warming, the outcome of an event is the arithmetic average of the global temperature over the 30 year period of this event. The numerical values of a global temperature time series such as the Hadcrut4 over 1/2 of the period of an event (15 years) are irrelevant, for events are discrete and countable.
2. In a Web search, I was unable to find a link to Hansen’s 1987 testimony to the U.S. Senate. If you know of a link, please provide same so I can respond.

October 27, 2012 1:55 am

Terry Oldberg:
Thankyou for your answer to me at October 26, 2012 at 11:00 pm.
If I understand your answer to my first question correctly, then you are saying that climate science as promulgated by the IPCC is meaningless junk. Although I would not go so far as to say that, I tend to agree.
In the light of your answer, any specific response from you to my two questions would be irrelevant so I am content to leave it at that.
Thankyou for your reply.
Richard

Reply to  richardscourtney
October 27, 2012 9:21 am

richardscourtney:
A more precise way of stating my conclusion is to say that the methodology of the IPCC’s investigation is not scientific but rather is dogmatic. People who reach the opposite conclusion do so by inserting into their arguments one or another false premise that is disguised as a truth. Among these people, a favorite stratagem is to argue over the magnitude of the equilibrium climate sensitivity. Another favorite is to conflate the IPCC-style “evaluation” of a model with the statistical validation of this model.

October 27, 2012 3:10 am

What UKMO should be saying is that there is no evidence of a decline in the rate of global surface warming, which is currently running at around 0.17°C per decade (30-year trend). This is to be expected as the TOA energy imbalance is still substantial at around 0.6W/m². We would need to reduce atmospheric CO₂ by 50ppm to eliminate this energy imbalance and halt global warming. What are the chances of doing that?
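
As a rough cross-check of the 50 ppm figure, the common simplified forcing expression dF = 5.35 ln(C1/C0) W/m^2 (Myhre et al. 1998) gives roughly the quoted imbalance; the concentration below is a round 2012-era number, not exact:

```python
import math

C_now = 392.0       # ppm, a round 2012-era figure (assumption)
dC = 50.0           # ppm reduction proposed in the comment
dF = 5.35 * math.log(C_now / (C_now - dC))
print(f"forcing offset by -{dC:.0f} ppm: {dF:.2f} W/m^2")   # ~0.73
```

That comes out slightly above the quoted 0.6 W/m^2 imbalance, but it is the same ballpark.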

Reply to  icarus62
October 27, 2012 10:01 am

icarus62
Your argument seems to rest on the assumption that the equilibrium climate sensitivity (TECS) is of a positive magnitude. It would follow that any increase in the CO2 concentration warmed our planet. However, as I’ve pointed out on many occasions, the equilibrium temperature is not an observable, with the consequence that when a claim is made about the magnitude of TECS this claim is insusceptible to being tested. As it is insusceptible to being tested, this claim is unscientific by the definition of “scientific.” In order for claims about the global surface temperature to be made susceptible to testing and thus scientific, some institution has to describe the underlying statistical population, but this has not yet happened. Institutions seem to fancy their current ability to con us into thinking that when they pontificate about the climate they are speaking as scientists.

AJB
October 27, 2012 10:12 am

icarus62 says, October 27, 2012 at 3:10 am

What UKMO should be saying is that there is no evidence of a decline in the rate of global surface warming

That is simply not true. See: http://postimage.org/image/4puutknlj/full. The rate of warming on a 30-year trend basis has been declining rapidly since 2007 or so. If you wish to examine the rate of warming then plot the rate of warming, not the temperature outcome.

October 27, 2012 1:08 pm

AJB: I disagree.

October 27, 2012 1:13 pm

AJB: I suppose you could squint and say that the warming rate in HADCRUT4 has been declining slightly, but I doubt if it’s statistically significant.

October 27, 2012 1:19 pm

Terry Oldberg: I don’t see how your comment follows from what I said above. If the planet is out of energy balance (more energy being received than radiated away) then it will inevitably warm up – that’s just fundamental physics. Climate sensitivity doesn’t come into it. Agreed?

Reply to  icarus62
October 27, 2012 9:51 pm

icarus62:
I know of no principle of physics which states that if Earth is currently receiving more energy than is being radiated away then Earth will inevitably warm up. If you know of one, please inform me of same.

Tad
October 27, 2012 4:32 pm

I’m not even convinced that least-squares is the be-all end-all fitting method. I mean, it’s fine under most circumstances, maybe someone more knowledgeable than I can comment as to its applicability here. Isn’t there something about autocorrelation in time series that calls for other fitting methods? Whatever, I’m not saying LS is wrong, I’m just wondering if the error bounds are actually even wider due to issues like autocorrelation.
Also, sometimes a more robust fitting method is considered appropriate, usually due to outliers. I don’t see any obvious outliers here so this is probably not an issue.
I guess I’m just bringing this up to remind folks that LS is not sacred or unique as a fitting method. And I’m also asking if there are other statistical issues that need to be considered, I really don’t know.
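
On Tad’s question: least squares is indeed not the only option. The Theil-Sen estimator (the median of all pairwise slopes) is a standard robust alternative, available as scipy.stats.theilslopes. A sketch on synthetic data:

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(6)
t = np.arange(181) / 12.0
y = 0.003 * t + rng.normal(0.0, 0.1, t.size)   # synthetic stand-in

b_ols = np.polyfit(t, y, 1)[0]
b_ts, intercept, lo, hi = theilslopes(y, t)    # robust: median pairwise slope

print(f"OLS slope:       {10*b_ols:+.3f} C/decade")
print(f"Theil-Sen slope: {10*b_ts:+.3f} C/decade "
      f"(95% CI {10*lo:+.3f} .. {10*hi:+.3f})")
```

With no large outliers the two estimates agree closely, which matches Tad’s own reading of the data; autocorrelation, not robustness, is the bigger issue here.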

Reply to  Tad
October 27, 2012 10:44 pm

Tad:
Your skepticism regarding least sum of the squared errors is well founded.
Least sum of the squared errors is an intuitive rule of thumb that is often used in the construction of a model. Viewed from a logical perspective, a model is a procedure for making inferences. On each occasion in which an inference is made, there are generally many candidate inferences. Thus, the builder of a model is persistently faced with the necessity of selecting the one correct inference from the many candidates.
By tradition, model builders discriminate the one correct inference from the many possibilities through the use of the intuitive rules of thumb that are known as “heuristics.” Least sum of the squared errors is an example of one of them.
However, the method of heuristics has a logical shortcoming: On each occasion in which a particular heuristic identifies a particular inference as the one correct inference, a different heuristic identifies a different inference as the one correct inference. In this way, the method of heuristics violates Aristotle’s law of non-contradiction. Thus, though use of the method of heuristics is traditional it is also illogical.
Violations of non-contradiction may be avoided through replacement of the method of heuristics by optimization of the selected inferences. I’ve provided an introduction to this topic in the series of three articles that are published at the blog Climate, Etc under the title of “The Principles of Reasoning.”

AJB
October 27, 2012 4:47 pm

icarus62 says, October 27, 2012 at 1:13 pm
What source data have you used for this (the actual file name and columns) and how did you arrive at the value plotted on this graph?

AJB
October 27, 2012 6:41 pm

icarus62 says October 27, 2012 at 1:13 pm
Using your graph of only 10 values, you call a 15% decline in a mere 8 years statistically insignificant (despite it being based on ludicrous 30-year trailing averages). Amazing.

Nick Kermode
October 28, 2012 9:01 pm

Hi Clive, this graph does not show what the headline says. It actually shows the anomaly in degrees, not tenths of a degree. You would remove the decimal from the axis if you were showing the information in the suggested metric; otherwise it is point-something of a tenth of a degree.

October 29, 2012 2:34 am

Hi Nick,
You are right. The headline says tenths of a degree while the y-axis scale is in degrees. The graph above was taken from the original Mail on Sunday article. What is actually plotted is the temperature anomaly against the 1961-1990 average. However, I assume David Rose just rounded it to 14 degrees to make it easier for the public to understand. My fit to the anomaly data can be seen here and the fit to Hadcrut3 is here.

Carter
October 31, 2012 1:52 pm

[snip]