By Steve Goddard
Dr. John Christy recently wrote an excellent piece “Is Jim Hansen’s Global Temperature Skillful?” which highlighted how poorly Dr. Hansen’s past predictions are doing.
This post raises questions about GISS claims of record 2010 temperatures. The most recent GISS graph below shows nearly constant warming from 1965 to the present, with 2010 almost 0.1°C warmer than the actual warmest year of 1998.
HadCrut disagrees. They show temperatures flat over the past decade, and 2010 about 0.1°C cooler than the warmest year, 1998.
Looking more closely, the normalised plot below shows trends from Jan 1998 to the present for GISS, HadCrut, UAH and RSS.
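For readers who want to reproduce this kind of comparison, here is a minimal sketch of what a woodfortrees-style "normalise" plus linear trend amounts to. The anomaly values below are made up for illustration; they are not the actual GISS, HadCrut, UAH or RSS series, and min-max rescaling is assumed as the definition of "normalise".

```python
import numpy as np

def normalise(series):
    """Rescale a series to the range [0, 1] (min-max normalisation)."""
    s = np.asarray(series, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def linear_trend(series):
    """Least-squares slope of the series per time step."""
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]

# Made-up monthly anomalies from Jan 1998 onward (NOT real data)
anoms = [0.60, 0.55, 0.40, 0.35, 0.45, 0.50, 0.42, 0.48]
print(linear_trend(normalise(anoms)))  # slope per month of the rescaled series
```

Normalising rescales each series to a common 0-to-1 range so their shapes can be compared on one plot; it does not change the sign of the fitted slope.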
GISS shows much more warming than anybody else during that period. Hansen claims:
The difference of +0.08°C compared with 2005, the prior warmest year, is large enough that 2010 is likely, but not certain, to be the warmest year in the GISS record.
The discrepancy with the other data sources is larger than Hansen's claimed 0.08°C record margin. Is it a record temperature, or is it good old fashioned bad data?
Either way, it is still far below Hansen’s projected temperatures for 2010. This is not pretty science.

Hansen made temperature forecasts which have proven too high. Now his “measured” temperature data is pushing higher than everyone else. Would you accept the other team’s coach doing double duty as the referee? In what other profession would people accept this sort of conflict of interest?



Dikran Marsupial
The problem is that GISS Arctic numbers are too high.
Dikran Marsupial
When Hansen runs to the press talking about a “record global temperature” he is talking about a single data point, which is not supported by other sources.
jeez
the point you are making about long term trend between GISS and CRU is valid:
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1958/to:2010/offset/trend/plot/gistemp/from:1958/to:2010/offset/trend
And the point Steven Goddard is making about the short term is valid.
But they are not the same point. The point Steven Goddard is making is lost in the averaging of the point you are making.
Despite HadCrut’s best efforts to push their data up, as of June they hadn’t managed to get 2010 up to 1998 levels yet.
http://www.youtube.com/watch?v=GaqLhTpMtZg
stevengoddard says:
August 18, 2010 at 5:32 am
You can recalculate a flawed data set as many times as you want, and come up with the same answer. Garbage in, garbage out.
Agreed. And not because Steven Goddard said it.
jeez,
Hansen is crowing about 0.08 degrees. Throwing coarse data trends of much lower resolution into the argument is meaningless in that context.
Can’t have it both ways. Sorry.
Dikran Marsupial says:
August 18, 2010 at 6:12 am
hence Hansen has a valid point in noting a possible record year in GISSTemp, even though it isn’t as likely to be a record in the other datasets.
Actually this is the problem. James Hansen is on an island. His temperature data set is on an island. And his science is on an island, since he references himself to validate it.
The name of this article is:
Is Hansen’s Recent Temperature Data Consistent?
Alexej Buergin says:
August 18, 2010 at 12:32 am
” EFS_Junior says:
August 17, 2010 at 3:46 pm
the global temperature record (any global temperature record) will not produce the sharp peaks …”
Nonsense. All you need is to take the monthly UAH data, copy them to Excel and make a graph. You will be surprised how sharp the peaks in 1998 and 2010 are. They remind me of a Bowie knife. A newly sharpened Bowie knife.
(But of course you will change the scale of the ordinate, use -1000°C to +1000°C and get a straight line).
____________________________________________________________
Taken totally out of context, SOP for the WUWT AAGW readership.
Go back and reread what I did say in reference to frequency domain FFT’s.
That’s where you separate the wheat from the chaff, as it were, in terms of identifying true cyclic behaviors, which appear as narrow-banded, large-amplitude spikes (ideally these spikes would be confined to a single frequency, if the record is long enough and the frequency bins match the frequencies of the truly cyclic behaviors).
And no, you will not see an FFT with narrow banded large amplitude spikes using any global mean temperature record. Ergo the dataset contains no artifacts of true cyclic behavior as the diurnal and annual cyclic behaviors at a point have been averaged out per the title of global mean temperature record. D’oh!
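EFS_Junior's FFT point can be illustrated with synthetic data (toy series, not a real temperature record): a station-like series that still contains an annual cycle produces a narrow, large-amplitude spectral spike, while a series with the cycle averaged out, as in a global mean anomaly record, does not.

```python
import numpy as np

n_months = 600  # 50 years of monthly data
t = np.arange(n_months)
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.2, n_months)

with_cycle = np.sin(2 * np.pi * t / 12) + noise  # annual cycle still present
without_cycle = noise                            # cycle averaged out, noise only

def peak_to_median(series):
    """Ratio of the largest spectral amplitude to the median amplitude."""
    amp = np.abs(np.fft.rfft(series))[1:]  # drop the zero-frequency (mean) term
    return amp.max() / np.median(amp)

print(peak_to_median(with_cycle))     # large: a narrow spike at 1/12 cycles/month
print(peak_to_median(without_cycle))  # small: no true cyclic component to find
```

The peak-to-median ratio is just one crude way to quantify "narrow-banded large-amplitude spike"; the qualitative contrast is the point.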
Amino Acids in Meteorites says:
August 18, 2010 at 7:02 am
“And the point Steven Goddard is making about the short term is valid.”
No, short term trends are unstable – they mostly just tell you what ENSO is doing and very little about the presence of a long term trend. Now if Steve Goddard can show me an interesting short term cooling trend that (like the long term trend) is statistically significant, then he would have a point. Note that statistically insignificant just means “not enough data to be sure either way”, in which case the trend shouldn’t be used as evidence. Sadly it doesn’t seem likely that this will happen, as Steven apparently didn’t realise that his peak-to-peak method introduces a cooling bias into the analysis, because the 1998 El-Nino was a much stronger one than the one just passed. As I said, some self-skepticism is in order.
If Hansen starts making assertions on the basis of short term trends, I’ll happily criticise him for it, but it is unlikely to happen as I would have thought he knows better than that.
Dikran Marsupial
Are you joking? Hansen’s claim of a record temperature is based on a short term trend cherry picked to coincide with El Nino.
Steven Goddard wrote:
“The name of this article is : Is Hansen’s Recent Temperature Data Consistent?”
You chose to use short term trends with which to make your argument. I have pointed out that short term trends are unstable and dominated by ENSO, especially when you pick 1998 (one of the strongest El-Ninos on record) as your start date. Now if you were a competent scientist, you would go off and perform some experiments to quantify that bias and demonstrate (if you can) that it doesn’t affect your argument (or at least admit that you have not tested to see if there is a bias). As I said, some self-skepticism is in order.
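Dikran Marsupial's bias argument is easy to demonstrate with a toy calculation. The series below is entirely synthetic, with noise omitted for clarity; the 0.017°C/year slope and 0.25°C spike are assumptions for illustration, not measured values.

```python
import numpy as np

years = np.arange(1979, 2011)
true_slope = 0.017                       # assumed underlying warming, degC/year
temps = true_slope * (years - years[0])  # steady warming, no noise
temps[years == 1998] += 0.25             # strong El Nino spike in 1998

full_slope = np.polyfit(years, temps, 1)[0]               # 1979-2010 window
mask = years >= 1998
short_slope = np.polyfit(years[mask], temps[mask], 1)[0]  # 1998-2010 window

print(full_slope)   # close to the assumed 0.017
print(short_slope)  # pulled well below it by starting at the 1998 spike
```

Starting the window at the spike puts the anomalous year at the point of maximum leverage, so the fitted short-term slope comes out roughly half the underlying one even though the warming rate never changed.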
Hansen’s standard 1.7-2.0 trend is based on cherry picking a start date of 1975. The long term GISS trend is 0.75.
stevengoddard says:
August 18, 2010 at 9:07 am
“Are you joking? Hansen’s claim of a record temperature is based on a short term trend cherry picked to coincide with El Nino.”
(i) an estimate of mean global temperature is not a short term trend, it is an average. There is a difference; if you are not aware of it, I suggest you leave the statistical analysis to those who are, at least until you have worked on your basic skills.
(ii) it ought to be obvious to anyone with an ounce of common sense that records usually occur due to a combination of factors, so you will probably find the majority of record warm years happen in El-Nino years. You are basically saying that nobody is allowed to talk about record high years because they tend to be El-Nino years. Well, duh!
If you are interested in record years, a much more reliable measure (non-cherry-pickable as far as I can see; correct me if I am wrong) is to see how recently the ten warmest years occurred (use a figure other than 10 if you like). Or look at warmest decades, perhaps (no, that is only one point on a decadal time series ;o).
Hansen is just pointing out that this is likely to be a record year according to GISTEMP. No big deal; it is only really of interest to those arguing we are heading for global cooling, and there don’t seem to be many of those about.
“stevengoddard says:
August 18, 2010 at 9:14 am
Hansen’s standard 1.7-2.0 trend is based on cherry picking a start date of 1975. The long term GISS trend is 0.75.”
There is a family of methods in statistics known as “change point detection”; demonstrate that 1975 does not coincide with a statistical change point. You will find a very simple changepoint analysis here. Come up with a better one if you like, but if 1975 coincides with a genuine statistical changepoint, your accusation of cherry-picking is baseless.
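For the curious, here is roughly what a "very simple changepoint analysis" can look like: scan every candidate break position, fit a straight line to each side, and keep the break that minimises the total squared error. The series below is synthetic (flat until 1975, then a shift and a warming slope), chosen only to show the method recovering a known break; it is not real temperature data.

```python
import numpy as np

def best_changepoint(x, y, min_seg=5):
    """Crude changepoint search: try every break position, fit a straight
    line to each segment, and return the break minimising total squared error."""
    best_x, best_rss = None, np.inf
    for k in range(min_seg, len(x) - min_seg):
        rss = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            rss += float(np.sum((ys - np.polyval(coef, xs)) ** 2))
        if rss < best_rss:
            best_x, best_rss = x[k], rss
    return best_x

# Synthetic series: flat until 1975, then a step up plus a warming slope
years = np.arange(1940, 2011).astype(float)
temps = np.where(years < 1975, 0.0, 0.1 + 0.017 * (years - 1975))
print(best_changepoint(years, temps))  # -> 1975.0, the built-in break
```

Real changepoint methods add noise handling and significance testing on top of this, but the core idea is the same: a break is "genuine" when splitting there reduces the residual error far more than splitting anywhere else.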
“Is Hansen’s Recent Temperature Data Consistent?”
My conclusion is that it is not consistent once data from the coasts surrounding the pole were interpolated towards the centre. This shows up very well when comparing GISS with both satellite data sets from 1979. The higher temperatures guessed for the Arctic change the way GISS behaves compared with the satellites.
Compared with UAH
http://www.woodfortrees.org/plot/gistemp/from:1979/normalise/plot/uah/from:1979/normalise
Compared with RSS
http://www.woodfortrees.org/plot/gistemp/from:1979/normalise/plot/rss/from:1979/normalise
Pre-2000, both matched GIStemp much more closely than in the period after, with the odd blip. After 2000, El Ninos and La Ninas show a warmer anomaly for GIStemp compared with RSS and UAH. GIStemp shows an overall warmer bias over the past decade due to the change in SST source and the recently interpolated polar data. With no data point(s) in the centre of the Arctic, interpolated data has little meaning.
http://en.wikipedia.org/wiki/Interpolation
This is the key
“In the mathematical subfield of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points.”
There isn’t a discrete set of known data points for use in constructing new data points, because no data above 80N is used in the Arctic. In climate science, interpolation shouldn’t occur into different climate zones, as these behave very differently.
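The distinction the commenter is drawing can be shown in a couple of lines of NumPy (the latitudes and anomaly values below are hypothetical): inside the range of known points you get genuine interpolation, but beyond the last station a routine like np.interp simply clamps to the nearest known value, which silently assumes the pole behaves like the 80N coast.

```python
import numpy as np

lat = np.array([60.0, 65.0, 70.0, 75.0, 80.0])  # hypothetical station latitudes
anom = np.array([0.5, 0.7, 0.9, 1.2, 1.6])      # hypothetical anomalies there

# Within the data range: genuine interpolation between known points
print(np.interp(72.5, lat, anom))  # -> 1.05, halfway between 0.9 and 1.2

# Beyond 80N there are no stations; np.interp clamps to the last known value
print(np.interp(90.0, lat, anom))  # -> 1.6, an assumption, not a measurement
```

Whatever scheme GISS actually uses is more elaborate than this, but the same limitation applies: any value produced poleward of the data is extrapolation, constrained by assumption rather than measurement.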
Dikran Marsupial
Do you think that plummeting SSTs are not going to drag 2010 down before the year is over?
No matter what happens in 2010, it will be far below Hansen’s forecasts.
stevengoddard says:
August 18, 2010 at 9:36 am
“Do you think that plummeting SSTs are not going to drag 2010 down before the year is over?”
Do I detect a trend here? On a previous thread, I pointed out an error in Steven’s post, where he incorrectly implied that the 72 hour prediction horizon for weather forecasting has some bearing on climate modelling. It doesn’t: weather forecasting requires accurate knowledge of initial conditions (the lack of which is what causes the short prediction horizon), while climate modelling doesn’t, as climate models work by simulating weather rather than forecasting it. On that thread Steven neither addressed that criticism nor acknowledged it, and simply blustered along trying to ignore it. On this thread I pointed out that his short term trend analysis is (a) unstable and (b) biased due to starting the analysis window at an unusually strong El-Nino. Again, rather than address the criticism or acknowledge it, he instead blusters on about supposed errors in Hansen’s analysis rather than have a good look at his own.
It is rather a short term trend, as it is only two threads, but it is two out of two at the moment, and I don’t think Steven is giving much of an advert for skepticism.
Dikran Marsupial
When I do “change point detection,” Tamino accuses me of cherry picking. Different standards apply for the other side.
Anyway, the trend from 1910 to 1940 was nearly identical to that of the last 30 years.
http://www.cru.uea.ac.uk/cru/info/warming/
There is no justifiable reason not to include older data.
stevengoddard says:
August 18, 2010 at 10:01 am
“When I do “change point detection,” Tamino accuses me of cherry picking. Different standards apply for the other side.”
Having a go at Tamino is no defense of your argument; it is possible for both of you to be wrong. I am getting a bit bored with your inability to deal with specific criticisms; it isn’t science, and it certainly isn’t skepticism. If you were to take the time to address the criticism properly, it would be you that gained most from it.
“Anyway, the trend from 1910 to 1940 was nearly identical to the last 30 years.
http://www.cru.uea.ac.uk/cru/info/warming/”
Yep, it did something rather different between 1940(ish) and 1975(ish), didn’t it? Guess what: that is what change point detection is for.
“There is no justifiable reason not to include older data.”
Yes there is if there is a physical reason to make a distinction between the two periods, for instance a reduction in aerosols. Climate forcings have not been constant since the start of the record, which is why it is perfectly reasonable to consider different sub-periods if there is a distinct difference in forcings.
Dikran Marsupial
You appear intent on dragging the conversation away from the topic of this article. I wonder why?
“You appear intent on dragging the conversation away from the topic of this article. I wonder why?”
Because there is a flaw in your analysis, and that flaw affects the soundness of your conclusions regarding the topic of the article.
It is called “science”, you make an assertion and explain your working, then others look at your analysis. If it is sound, they accept your conclusions. If there is a flaw they point it out and give you an opportunity to correct it before deciding whether to accept the conclusions or not. The ball is in your court, but the more you evade discussing the flaw, the less inclined I am to accept your conclusion.
EFS_Junior says:
August 18, 2010 at 8:30 am
Nobody here says that cyclical change happens with absolute regularity; the problem was to find a simple rule for how best to calculate a trend. If you want the trend of roughly the last decade, it may make no sense to use exactly a decade. It is better to calculate from peak temperature to peak temperature (El Niño to El Niño). And to find a useful estimate for the time between El Niños, the mean of the last century is good enough; we need neither Fourier nor Wolfram.
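The peak-to-peak rule described above amounts to a one-line calculation (the peak anomaly values here are hypothetical, not real data):

```python
def peak_to_peak_trend(year1, anom1, year2, anom2):
    """Trend in degC per decade between two El Nino peak anomalies."""
    return (anom2 - anom1) / (year2 - year1) * 10.0

# Hypothetical peak anomalies at the 1998 and 2010 El Nino peaks
print(peak_to_peak_trend(1998, 0.55, 2010, 0.50))  # negative: a cooling estimate
```

As Dikran Marsupial notes elsewhere in the thread, if the first peak is a stronger El Niño than the second, this estimator is biased towards cooling, so the two peaks need to be of comparable strength for the rule to be fair.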
You use a computer to estimate with 95% probability that minimum ice extent will be between 4.25 and 5.25. I can do that in 10 seconds just by looking at the curves (but of course I get a mean of 5).
The strong El Niño and the weak El Niño:
That is what Bill Illis wrote about that over at the Blackboard:
Bill Illis (Comment#47733) July 2nd, 2010 at 1:25 pm
The 2009-10 El Nino should be influencing the June 2010 TLT anomaly by about 0.08C above the trend.
The 1997-98 El Nino should have influenced the June 1998 TLT anomaly by about 0.10C above trend.
So June 1998 (0.562C) versus June 2010 (0.436C) should be close to comparable as far as the ENSO is concerned.
Dikran Marsupial
You are an interesting character.