Global Temperature Update–September 2012

Guest post by Paul Homewood

The HADCRUT data has now been released for September, so we can have a look at the latest figures for the four main global temperature datasets. I have now switched to HADCRUT4, although the Hadley Centre are still producing numbers for HADCRUT3.

                           RSS    HADCRUT4    UAH    GISS
September 2012 anomaly     0.38     0.52      0.34   0.60
Change from last month    +0.12    -0.01     +0.14  +0.03
12-month running average   0.16     0.42      0.11   0.50
Average 2002-11            0.26     0.47      0.19   0.55

Global Temperature Anomalies – Degrees Centigrade

The pattern is similar across all datasets, with September temperatures above both long term and 12 month averages, although, interestingly, both satellite sets have picked up a bigger spike than the other two. We are currently seeing the lagged effect on temperature from the mild El Nino, which began in April and has now pretty much fizzled out, as can be seen below. Purely thinking aloud, but is this an indication that atmospheric warming is slower to dissipate than surface warming?

[Chart: Multivariate ENSO Index (MEI) – source link below]

http://www.esrl.noaa.gov/psd/enso/mei/

My guess is that temperatures will settle back slightly by the end of the year. If ENSO conditions remain fairly neutral over the next few months, we should get a good indication of underlying temperatures for the first time in a while.

The following graphs show 12-month running averages for each set. As I mentioned before, we often get fixated on calendar-year figures, which obviously change a good deal from year to year. It therefore seems much more sensible to look at 12-month averages on a monthly basis, rather than wait till December.
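For anyone who wants to reproduce the calculation, here is a minimal Python sketch of a trailing 12-month average; the anomaly values in it are placeholders for illustration only, not any of the actual series above.

[sourcecode]
# Minimal sketch: trailing 12-month mean of monthly anomalies, updated each
# month rather than once a year. The numbers below are illustrative
# placeholders, not real HADCRUT/GISS/RSS/UAH values.
import numpy as np

anomalies = np.array([0.30, 0.25, 0.35, 0.50, 0.52, 0.50, 0.47,
                      0.53, 0.52, 0.45, 0.40, 0.38, 0.29, 0.21])

window = np.ones(12) / 12
running = np.convolve(anomalies, window, mode="valid")
# running[-1] is the mean of the latest 12 months; each new month's figure
# simply shifts the window forward by one.
print(np.round(running, 3))
[/sourcecode]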

[Graphs: 12-month running averages for each of the four datasets]

In all cases, the 12 month averages are lower than they were at the start of the year.

Finally, I mentioned last month that UAH had just brought out a new Version 5.5, which corrected for spurious warming from the Aqua satellite. (Roy Spencer has the full technical stuff here). The new version is now incorporated and backdated in my analysis above. I have also plotted the difference between the old and new versions below.

[Graph: difference between the old and new UAH versions]

As can be seen, the divergence really started to be noticeable towards the end of last year, and has steadily grown wider over the last few months.

Remember that all monthly updates can be accessed on my “Global Temperature Updates” page, at

http://notalotofpeopleknowthat.wordpress.com/category/global-temperature-updates/

Sources

http://nsstc.uah.edu/climate/index.html

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

http://www.remss.com/data/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_3.txt

http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html#regional_series

58 Comments
Dennis Nikols, P. Geo
November 1, 2012 10:29 am

I am still, after all this time, not convinced that averages like this are not being given far too much attention. It seems to me that many others are attaching a far greater importance to it than is deserved or truly useful. No matter what arguments I have seen, none demonstrate any connection to any place or region. If it isn’t affecting you or me, then I think the claimed usefulness is bogus.

Tim Ball
November 1, 2012 10:42 am

Phil Jones claimed an increase of 0.6°C in 120+ years in the 2001 IPCC Report and it became, with the hockey stick, a major part of the proof of human induced warming. The error range of ±0.2°C was overlooked.
The numbers presented here show that there is a 0.28°C difference between HadCRUT4 and UAH and a 0.36°C difference between GISS and UAH in just 9 years.
All this with temperatures taken to 0.5°C.
These numbers are the modern equivalent of the medieval argument about number of angels on the head of a pin.

Crispin in Battambang
November 1, 2012 10:59 am

The claim that there has been no (statistically significant) warming since 1997 is borne out. If there has been any, it is not detectable. I can’t see any reason to get worried about cooling yet, either. The mystery is why there have been so few major storms hitting the USA in recent years and why Sandy did not develop into one of the powerful monsters that have hit the same region in the past.
WUWT?

MIke (UK)
November 1, 2012 11:03 am

My local weather station here in the English Midlands has us running at -0.4 degrees Centigrade so far this year against the thirty-year average.
Facts and figures here: http://bws.users.netlink.co.uk/

November 1, 2012 11:04 am

Twas the cooling that caused Sandy, mostly.
I blame you and the satellites for not picking it up. If I can see it why cannot you? Try looking at maxima.

Kelvin Vaughan
November 1, 2012 11:29 am

The September maximum was 3 degrees colder than last year in the UK, the minimum was 1.7 degrees colder, probably due to all the wet cloudy weather we have had.

Graeme M
November 1, 2012 12:14 pm

I know the use of anomalies has been explained before but I have to admit to not ‘getting’ it. What exactly are anomalies? Are they the excursion of the daily average above a baseline average? I don’t really see how that tells us a lot about daily temperatures. For example, if there IS some sort of warming trend that encourages slightly higher daily maxima OR minima, that would cause the daily average to increase, but it would not necessarily mean that temps overall have increased, surely?
That is, if the only effect was a short spike in daily temps (eg at 4 AM) but the rest of the day was largely normal, we’d still see a difference in the anomaly, wouldn’t we? What do the daily actuals show when plotted over time? Is it possible to see the daily temp range for specific long term stations plotted against the same day for, say, 100 years? If we don’t have the data to show temp ranges hourly for each day, then how can we really say what is happening?
I am not discounting the concept, I just don’t quite see that it is really telling us anything about climate…

MiCro
November 1, 2012 12:19 pm

Dennis Nikols, P. Geo says:
“I am still, after all this time, not convinced that averages like this are not being given far to much attention. ”
I became interested in how much the temp drops every night.
Here are 60 years (1950-2010) of the Northern Hemisphere difference between how much today’s temp goes up and how much it drops tonight.
http://www.science20.com/files/images/1950-2010%20D100_0.jpg
Based on the NCDC’s Summary of the Day data set (~110m samples).

Werner Brozek
November 1, 2012 12:48 pm

2012 in Perspective so far on Six Data Sets
For each data set below, compare the highest anomaly recorded so far in 2012 with the all-time record so far. There is no comparison.

With the UAH anomaly for September at 0.34, the average for the first nine months of the year is (-0.13 -0.13 + 0.05 + 0.23 + 0.18 + 0.24 + 0.13 + 0.20 + 0.34)/9 = 0.123. If the average stayed this way for the rest of the year, its ranking would be 10th. 1998 was the warmest at 0.42. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. With the adjustments, the 2010 value is 0.026 lower than 1998 instead of 0.014 as was the case before.
With the GISS anomaly for September at 0.60, the average for the first nine months of the year is (0.32 + 0.36 + 0.45 + 0.55 + 0.67 + 0.55 + 0.46 + 0.57 + 0.60)/9 = 0.503. This would rank 10th if it stayed this way. 2010 was the warmest at 0.63. The highest ever monthly anomalies were in March of 2002 and January of 2007 when it reached 0.88.
With the Hadcrut3 anomaly for September at 0.520, the average for the first nine months of the year is (0.217 + 0.194 + 0.305 + 0.481 + 0.475 + 0.477 + 0.446 + 0.513 + 0.520)/9 = 0.403. This would rank 10th if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less.
With the sea surface anomaly for September at 0.453, the average for the first nine months of the year is (0.203 + 0.230 + 0.241 + 0.292 + 0.339 + 0.352 + 0.385 + 0.440 + 0.453)/9 = 0.326. This would rank 10th if it stayed this way. 1998 was the warmest at 0.451. The highest ever monthly anomaly was in August of 1998 when it reached 0.555.
With the RSS anomaly for September at 0.383, the average for the first nine months of the year is (-0.059 -0.122 + 0.072 + 0.331 + 0.232 + 0.338 + 0.291 + 0.255 + 0.383)/9 = 0.191. If the average stayed this way for the rest of the year, its ranking would be 11th. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857.
With the Hadcrut4 anomaly for September at 0.524, the average for the first nine months of the year is (0.288 + 0.209 + 0.339 + 0.514 + 0.516 + 0.501 + 0.469 + 0.529 + 0.524)/9 = 0.432. If the average stayed this way for the rest of the year, its ranking would be virtually tied for 10th. 2010 was the warmest at 0.54. The highest ever monthly anomaly was in January of 2007 when it reached 0.818. The 2011 anomaly at 0.399 puts 2011 in 12th place and the 2008 anomaly of 0.383 puts 2008 in 14th place.
On all six of the above data sets, a record is out of reach.
On all data sets, the periods over which the slope is at least very slightly negative range from 11 years and 9 months to 15 years and 9 months, but note *
1. UAH: (*New update not on woodfortrees yet)
2. GISS: since January 2001 or 11 years, 9 months (goes to September)
3. Combination of 4 global temperatures: since November 2000 or 11 years, 10 months (goes to August)
4. HadCrut3: since March 1997 or 15 years, 6 months (goes to August)
5. Sea surface temperatures: since February 1997 or 15 years, 8 months (goes to September)
6. RSS: since January 1997 or 15 years, 9 months (goes to September)
RSS is 189/204 or 92.6% of the way to Santer’s 17 years.
7. Hadcrut4: since December 2000 or 11 years, 10 months (goes to September.)
See the graph below to show it all.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1997.16/trend/plot/gistemp/from:2001.0/trend/plot/rss/from:1997.0/trend/plot/wti/from:2000.8/trend/plot/hadsst2gl/from:1997.08/trend/plot/hadcrut4gl/from:2000.9/trend
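For anyone wanting to reproduce this kind of figure without woodfortrees, here is a rough Python sketch: it fits an ordinary least-squares line from every candidate start month to the end of the series and reports the earliest start that still gives a non-positive slope. The series in the sketch is a random placeholder, not one of the datasets above.

[sourcecode]
# Rough sketch of the "longest period with a zero or negative slope"
# calculation. 'anoms' is a random placeholder series; substitute a real
# monthly anomaly series to reproduce the figures above.
import numpy as np

def earliest_flat_start(anoms, min_months=24):
    """Earliest start index whose OLS slope through to the end is <= 0."""
    t = np.arange(len(anoms))
    for start in range(len(anoms) - min_months):
        slope = np.polyfit(t[start:], anoms[start:], 1)[0]
        if slope <= 0:
            return start
    return None

anoms = np.cumsum(np.random.default_rng(0).normal(0, 0.1, 240))  # placeholder
print(earliest_flat_start(anoms))
[/sourcecode]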

Rosco
November 1, 2012 1:08 pm

Thank you Tim Ball !
The oft quoted 2 decimal point accuracy of the supposed global anomaly is ridiculous – if I did this sort of thing – and I did cause I’m a precision freak – I’d fail the test I was sitting for.
I’ve seen many reporting stations’ equipment in many parts of Queensland, and to give an accuracy of +/- 0.5 degrees C is a guess at best.
How the hell does anyone take thousands of readings with at best 0.5 degree accuracy and come up with a global average of 0.55 or whatever ??
It is simply inconceivable that this is “science” !!

Neil Jordan
November 1, 2012 1:22 pm

Perhaps global wine production makes better thermometers than trees. There was a small article in this morning’s LA Times business section titled “Wine levels worldwide shrinking to 37-year low” that I traced to this link:
http://www.goerie.com/article/20121101/BUSINESS05/311019911/Wine-levels-worldwide-shrinking-to-37-year-low
“Uncooperative weather has damaged grapes worldwide, causing global wine production to shrivel 6.1 percent to its lowest point since 1975, according to a Paris trade group.”

November 1, 2012 1:49 pm

re: Dr. Tim Ball, 1Nov12 at 10:42 am: — “These numbers are the modern equivalent of the medieval argument about number of angels on the head of a pin.”
I agree. Your point is fundamental. Maybe viral marketing would be a way to communicate the essence of your point in a very brief message. Just a thought …

Nigel Harris
November 1, 2012 2:22 pm

How the hell does anyone take thousands of readings with at best 0.5 degree accuracy and come up with a global average of 0.55 or whatever ??
Independent errors add in quadrature, so the error of a sum grows only as the square root of the sample size and the error of the mean shrinks as 1/sqrt(N). So if you measure 1000 data points with individual errors of 0.5, the error on the mean is 0.5 / sqrt(1000), or 0.016.
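A quick simulation of that arithmetic (hypothetical readings with a 0.5-degree random error, assuming purely independent errors with no systematic bias or autocorrelation):

[sourcecode]
# Sketch of the sqrt(N) argument: average 1000 independent readings, each
# with a random error of standard deviation 0.5, and look at how far the
# mean strays. Independent, unbiased errors are assumed.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
means = rng.normal(loc=0.0, scale=0.5, size=(5000, n)).mean(axis=1)
print(np.std(means))      # empirically ~0.016
print(0.5 / np.sqrt(n))   # 0.0158, the figure quoted above
[/sourcecode]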

P. Solar
November 1, 2012 2:42 pm

Paul Homewood says: “The reason for using anomalies is that temperature changes can be measured, as opposed to absolute ones.”
The main reason for using anomalies is that it takes out the “average” seasonal variation. This leaves a (mostly) non-seasonal record. Filters would do this better, but a filter needs a window of data to work on and thus cannot run up to the end of the data. (I’m not sure how your plots run up to the end of the year; you probably have not centred the data and thus have a 6-month offset in your results.) Anomalies are often preferred since they do run up to the last date available.
Paul Homewood says: ” It therefore seems much more sensible to look at 12 month averages on a monthly basis, rather than wait till December.”
That is a good approach. It would be even better if you used a proper filter instead of a runny mean. Runny means distort the data as much as they filter it. Just look at the amount of sub-annual detail you have on something that you ran a 12m filter on.
Here’s why:
http://oi41.tinypic.com/nevxon.jpg
runny means are crappy filters and let through large amounts of what you intended to get rid of. Worse, every second lobe is in fact negative. So not only does it get through, it gets inverted !!
NOT A LOT OF PEOPLE KNOW THAT 😉
Here is the UAH TLT data with a clean gaussian filter.
http://i46.tinypic.com/dy0oyb.png
Even the 3m gaussian is smoother than the 12m runny mean. If you look at the 12m filter there is no visible detail on a scale of less than a year.
Since these data are already deseasonalised by being anomalies, a light filter should be enough if it is a proper filter.
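For what it's worth, the negative lobes described above can be checked numerically. The Python sketch below compares the zero-phase frequency response of a 12-month running mean with that of a Gaussian kernel; the sigma value is just an illustrative choice.

[sourcecode]
# Sketch: frequency response of a 12-point running mean (a Dirichlet/sinc-like
# shape whose lobes dip below zero, i.e. some sub-annual frequencies pass
# through with inverted sign) versus a Gaussian kernel (never negative).
import numpy as np

f = np.linspace(1e-6, 0.5, 1000)          # frequency in cycles per month
N = 12
H_box = np.sin(np.pi * f * N) / (N * np.sin(np.pi * f))   # running mean
sigma = 3.0                               # months, illustrative width
H_gauss = np.exp(-2 * (np.pi * f * sigma) ** 2)           # Gaussian kernel

print(round(H_box.min(), 3))    # about -0.22: negative lobes
print(round(H_gauss.min(), 3))  # >= 0: no sign inversion
[/sourcecode]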

P. Solar
November 1, 2012 2:45 pm

If WordPress does not mangle it, here is a simple awk script that will run a gaussian filter:
[sourcecode]
#!/bin/awk -f
# pass input through 3 sigma gaussian filter where sigma, if not given, is 2 data points wide
# usage : ./gauss.awk filename <sigma=2> <scale_factor=1>
# optional scale_factor simply scales the output
# use OFMT="%6.4f"
# sigma can be compared to the period of the -3dB point of the filter
# result is centred, ie not shift. dataset shortened by half window each end
# check whether data is continuous !!
BEGIN { OFMT="%6.4f"
# ARGV[1]=filename; argv[0] is script name, hence ARGC>=1
pi= 3.14159265359811668006
if ( ARGC >3 ) {scaleby=ARGV[3];ARGV[3]=""} else {scaleby=1};
if ( ARGC >2 ) {sigma=ARGV[2];ARGV[2]=""} else {sigma=2};
print "filtering "ARGV[1]" with gaussian of sigma= ",sigma
root2pi_sigma=sqrt(2*pi)*sigma;
two_sig_sqr=2.0*sigma*sigma;
gw=3*sigma-1; # gauss is approx zero at 3 sigma, use 3 sig window
# eg. window=2*gw+1 : 5 pts for sigma=1; 11 pts for sigma=2; 17 pts for sigma=3
# calculate normalised gaussian coeffs
for (tot_wt=j=0;j<=gw;j++) {tot_wt+=gwt[-j]=gwt[j]=exp(-j*j/two_sig_sqr)/ root2pi_sigma};
tot_wt=2*tot_wt-gwt[0];
tot_wt/=scaleby;
for (j=-gw;j<=gw;j++) {gwt[j]/=tot_wt};
# strip off last .xxx part of file name
# improve this (doesn’t work with paths like ../filename)
split(ARGV[1],fn,".");
basename=fn[1]
gsfile=basename"-gauss"sigma".dat";
print "# ",gsfile >gsfile;
ln=-1;
}
($0 !~ /^#/)&&($0 != ""){
xdata[++ln]=$1;
ydata[ln]=$2;
if (ln>2*gw)
{
gauss=0
for (j=-2*gw;j<=0;j++) {gauss+=ydata[ln+j]*gwt[j+gw]}
print NR,xdata[ln-gw],gauss
print xdata[ln-gw],gauss >> gsfile;
}
else
{
# print $1,$2;
}
}
END {
print "#gausssian window width = "gw+gw+1",done"
print "#output file = "gsfile
}
[/sourcecode]
[Looks like WordPress mangled it. Sorry. — mod.]

stephen richards
November 1, 2012 2:48 pm

Joe Bastardi estimated that the UK Met Office HadCrut 3/4 was about 0.2°C higher than the satellite values because of the averaging period. That makes the UAH, RSS and CRU anomalies about the same.

P. Solar
November 1, 2012 3:38 pm

[Looks like WordPress mangled it. Sorry. — mod.]
No, it looks alright. If you hover over the code text, some flash gadgets pop up and you can click on “view source”. This seems to be an intact copy just scanning by eye.
awk (or gawk or nawk…) is available on all major platforms, so anyone capable of using a computer beyond just reading MSN and Facebook should be able to use it.

Taphonomic
November 1, 2012 3:46 pm

Rosco says:
“How the hell does anyone take thousands of readings with at best 0.5 degree accuracy and come up with a global average of 0.55 or whatever ??”
Simple, plug the temperatures into a calculator and divide by the number of observations.
Using a basic calculator, you can get precision to 8 to 10 decimal places by completely ignoring the concept of “significant figures”.

D.I.
November 1, 2012 4:07 pm

Well now, to me (as a layman), this Climastrology thing is no more than a joke. After looking at these so-called ‘Graphs’, with no data points, I see the scam that they are using.
Anyone who has used an oscilloscope knows that you can expand or compress the Amplitude (y axis) and the Timebase (x axis) to suit your needs. Think about it: stretch the amplitude and you can make 0.00000001 degree centigrade look scary; compress the timebase, scarier still; reverse the controls, drop the amplitude, expand the timebase, and there is nothing to see but a wavy or straight line (depending on background hum).
Well, I hope anyone in Electronics can see where I’m coming from on this Climate ‘Scam’.
Can we start up a scare story of how much a voltage variation of 5v on the Grid will cause a Catastrophe, or 0.5v, or 0.05v? You know where I’m coming from on this B.S.
Electronic Engineers can become the new World Saviours, jump on the band wagon now.
Why not? Everyone else is. SHOW YOUR GRAPHS.

Crispin in Battambang
November 1, 2012 5:28 pm

@Rosco says:
>How the hell does anyone take thousands of readings with at best 0.5 degree accuracy and come up with a global average of 0.55 or whatever ??
As my brother the historian used to say, it is not quite as simple [to dismiss it] as that. The precision cannot be improved, that is true, so there are error bars of a fixed size above and below. However, the confidence about where the middle point is located can be improved by having a larger number of readings. The location is the accuracy, the error bars are the precision. They are different. The location of the middle point can be assigned a confidence level (like 95%). If the deviation is less than the precision, there is no statistically significant difference between readings. Basically, that is what the ‘no warming in 16 years’ message contains.

Brian H
November 1, 2012 8:58 pm

Crispin in Battambang says:
November 1, 2012 at 10:59 am
The claim that there has been no (statistically significant) warming since 1997 is borne out. If there has been any, it is not detectable. I can’t see any reason to get worried about cooling yet, either. The mystery is why there have been so few major storms hitting the USA in recent years and why Sandy did not develop into one of the powerful monsters that have hit the same region in the past.
WUWT?

It’s AGC – Anthropogenic Global Calming. Over the long term, humans have been moderating the weather and climate. Sandy is a failure of that process, probably caused by excessive pollution controls, and reductions in CO2 output. Such policies should be reversed, forthwith! Bring back that old time AGC!!

Smoking Frog
November 2, 2012 1:00 am

Tim Ball says:
November 1, 2012 at 10:42 am
These numbers are the modern equivalent of the medieval argument about number of angels on the head of a pin.
There was no medieval argument about the number of angels on the head of a pin. It was invented in the 1800s as either sarcasm or slander (I don’t know which) against the Catholic Church.

P. Solar
November 2, 2012 1:03 am

Paul Homewood says: “If you compare the current 12-month averages with Dec 1979, you get an increase in temps of :-”
With the level of uncertainty and the month-to-month variations as large as they are, taking one month like that is simply not reasonable. I think you either do this properly or not at all.
GISS are inflating the surface record, but if you are going to criticise, at least take the time to find out what the actual base lines are for all these data sets and make a meaningful comparison.
Also please try the gaussian filter I posted above, you will find it is a much better “smoother” than doing runny means. It also correctly centres the result instead of shifting it.

P. Solar
November 2, 2012 1:16 am

DI says: “Anyone who has used an oscilloscope knows… ”
Oscilloscopes also have the trigger function that selects just a small part of the waveform from the peak or the trough of the mains hum to provide a stable readable output of a very small part of the time sample. The rest of your analogy falls apart at this point.
What is happening in climate pseudo-science is that the mains hum is 60 years, not 60 Hz, and instead of filtering it out, they are just looking at the rising edge from trough to peak and pretending it will continue rising.

cartoonmick
November 2, 2012 1:53 am

In an attempt to reduce Australia’s pollution, our Prime Minister (Julia Gillard) introduced a Carbon Tax last July 1st.
People’s opinion as to how effective this will be seems to be tied closely to their political persuasions.
The opposition party are blaming all our problems (current and in the future) on the Carbon Tax, while the Government champion its potential.
Nobody will really know its effect until it has been in for a few years.
But in the meantime, it gives me great material for some political cartoons.
http://cartoonmick.wordpress.com/editorial-political/#jp-carousel-517
Cheers
Mick

DirkH
November 2, 2012 3:25 am

Taphonomic says:
November 1, 2012 at 3:46 pm

“Rosco says:
“How the hell does anyone take thousands of readings with at best 0.5 degree accuracy and come up with a global average of 0.55 or whatever ??”
Simple, plug the temperatures into a calculator and divide by the number of observations.
Using a basic calculator, you can get precision to 8 to 10 decimal places by completely ignoring the concept of “significant figures”.”

No, again, the error of the average goes down with the square root of the number of measurements (assuming no autocorrelation).
Imagine you are sampling a noisy signal again and again. You are not interested in the short term fluctuations, you want to find out the DC component – the constant part of the signal – , and we assume no autocorrelation, that means no periodic signals in there for the moment.
This would be equivalent to recording the throw of a die time and time again. You know that over time your average measurement will get closer and closer to 3.5. If you don’t believe me, try it out. At any moment in time the absolute deviation of the running sum of all measurements from n*3.5 can get arbitrarily large, but as the number of measurements n gets higher and higher, the sum divided by n asymptotically approaches 3.5. The error bar gets smaller and smaller over time. It can be shown that it shrinks with the square root of the number of measurements.
In other words: More measurements do help; but we have to look out for autocorrelation. That would be a reason to distribute temperature measurements around the globe, to minimize spatial autocorrelation.
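A sketch of the dice experiment described above, with simulated independent rolls (no autocorrelation):

[sourcecode]
# Sketch of the dice example: the mean of n independent rolls homes in on
# 3.5, and its error shrinks roughly as 1/sqrt(n), even though the raw sum
# can drift arbitrarily far from n*3.5.
import numpy as np

rng = np.random.default_rng(1)
for n in (100, 10_000, 1_000_000):
    rolls = rng.integers(1, 7, size=n)          # fair six-sided die
    print(n, round(rolls.mean(), 4), round(abs(rolls.mean() - 3.5), 4))
[/sourcecode]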

DirkH
November 2, 2012 3:30 am

cartoonmick says:
November 2, 2012 at 1:53 am
“But in the meantime, it gives me great material for some political cartoons.”
I see no cartoon about the relation of Julia Gillard with keeping her promises. Wouldn’t that make a great one? Heck, she even has the nose of Pinocchio.

cd_uk
November 2, 2012 4:08 am

Paul
I find it incredible that disparate sources of global temperature records produced using very different methodologies compare so well (over last 30 years); especially when one considers what is being measured.
When you look at the methodologies employed in producing the instrumental value (interpolation, projection systems etc.) and the fine margins, one would think that the final figure is nonsense. But when compared with the satellite record there seems to be a good degree of concordance. Amazing.
In short, it seems unlikely that all could share the same bias.

cd_uk
November 2, 2012 4:54 am

P Solar
For stationary data, all low pass filters converge after successive runs (central limit theorem). For the temperature record over small windows (say 12 months) the record will be effectively stationary. Therefore, after a single run the differences are no more than “stylistic”.
So I don’t understand, but I am interested, in why you think this gives you anything more meaningful.

HenryP
November 2, 2012 5:14 am

MiCro says
http://wattsupwiththat.com/2012/11/01/global-temperature-update-september-2012/#comment-1132403
Henry says
Very interesting. But I am not sure exactly what I am looking at except that it is a sine wave similar to my own. I assume the scale on the left is temp. but what is the scale on the bottom?
I recently discovered that in a period of cooling (such as has already arrived now, since 1995, looking at energy-in) some places do get warmer due to the GH effects. CET is an example. There are more clouds and there is more precipitation in a cooling period. In such a case the sine wave runs in opposite directions, but the wave is still there.
Read my comments made here:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/

Editor
November 2, 2012 7:19 am

I posted the preliminary October 2012 sea surface temperature data here:
http://bobtisdale.wordpress.com/2012/10/29/theyre-back-nino3-4-sea-surface-temperature-anomalies-are-back-above-the-threshold-of-el-nino-conditions/
The complete October update should be available on Monday, November 5.

Rex
November 2, 2012 9:07 am

The margins of error quoted for these figures are the Statistical Errors, based on the assumption of a perfect sample and methodology. In addition to the Statistical Error there is a factor which might be called SURVEY ERROR, which can be as much as, or indeed greater by some degrees, than the calculated Statistical Error.
My rule of thumb is to double or triple any quoted margins of error for survey data. And since the temperature measurement system resembles a dog’s breakfast more than it does a properly designed survey, I suggest quadrupling, in this instance.

HenryP
November 2, 2012 10:23 am

Werner Brozek says
http://wattsupwiththat.com/2012/11/01/global-temperature-update-september-2012/#comment-1132432
thanks for your updates, we do appreciate the work that you did there!
Nevertheless, I really only trust my own dataset, which is my right, and that one shows that we have already fallen by as much as 0.2 degrees C since 2000. I can also predict that we will fall by another 0.3 degrees C from 2012 to 2020.
I am not sure what trend UAH is showing from 2000, since the adjustments?
So far, it seemed to me that only Hadcrut 3 showed some significant cooling trend,
http://www.woodfortrees.org/plot/hadcrut4gl/from:2002/to:2012/plot/hadcrut4gl/from:2002/to:2012/trend/plot/hadcrut3vgl/from:2002/to:2012/plot/hadcrut3vgl/from:2002/to:2012/trend
On hadcrut 3 it looks like almost -0.1 which is beginning to become closer to my dataset.
On hadcrut4 it still looks very flat, but look at the very high result for 2007.
I think that time there was still somebody trying to cook the books and the results a bit?

MiCro
November 2, 2012 10:48 am

HenryP says:
November 2, 2012 at 5:14 am
“Very interesting. But I am not sure exactly what I am looking at except that it is a sine wave similar to my own. I assume the scale on the left is temp. but what is the scale on the bottom?
I recently discovered that in a period of cooling (such as has already arrived now, since 1995, looking at energy-in) some places do get warmer due to the GH effects. CET is an example. There are more clouds and there is more precipitation in a cooling period. In such a case the sine wave runs in opposite directions, but the wave is still there.”
Scale on the left is daily difference temperature (Rising-Falling) * 100; across the bottom is the day of the year. Also, this is for stations north of 23° latitude.
What this is showing is a measure of atm cooling rate as the length of day changes, cooling in the fall and winter, warming in the spring and summer. When you average the daily rate out for a year, year over year the daily average changes very little. There’s a number of things I’ve worked on if you follow the url in my name here.
“Read my comments made here:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
I’ll take a look.

P. Solar
November 2, 2012 11:59 am

Paul Homewood says: “Above all though running averages is a concept something everybody understands, which I suspect is not the case with gaussian filters!”
Firstly, that has to be about the most piss-poor excuse ever for choosing a filter. What you are probably saying is that it is the only one you understand. (A similar argument Bob Tisdale has also used for the same reasons.) A filter is chosen because it works, not because the public may or may not have heard of it.
Do you really think that “everybody understands” how runny means distort the data, truncating peaks and inverting oscillations? Do you understand that?
Secondly, the true problem is that “everybody” does not understand filters in the slightest. Most people would not even know a running average IS a filter, and even if they got that far they would have no idea what a frequency response was or why it mattered.
“People” and “everybody” do not have to know the ins and outs of digital signal processing for you to choose a good filter. Just choose one, and choose a good one.
You could choose a binomial (as used by the Met Office, for example) or a number of others. I’ve provided you with one alternative and the code to do it.
>>
The divergence between GISS and the other sets has been present for the last decade or so. It is not just based on one month’s figures.
It might need a statistician to calculate the exact amount of the divergence, which I am not. But the divergence is real and is something people should be aware of.
>>
I totally agree, it has been noted for a long time that they are ramping up the warming. If you want to highlight this you need to do apples to apples comparisons, so how about an update here where everything has the same base line.
Best of all, plot GISS on top of UAH or RSS, preferably all smoothed with a filter that does not introduce spurious deformations into the data, or else someone’s data may be misrepresented.

HenryP
November 2, 2012 12:00 pm

Micro says
http://wattsupwiththat.com/2012/11/01/global-temperature-update-september-2012/#comment-1133577
Henry says
true. I looked in at the url and saw that you came to the same conclusion as I did from my sample of 47 weather stations, which I had balanced by latitude and on-sea/inland 70/30.
Over an 88 year period we are more or less back to square zero. It could be that within that 88 year cycle we are also in a 200 yr and 500 yr cycle moving up or down ever so slightly, but I think I will not be able to detect the details of those cycles. On the 88 year cycle I calculated that we are moving at speeds of cooling and warming of between -0.04 and +0.04 degree C per annum on the maxima, with the average (over 88 years) at 0.00.

P. Solar
November 2, 2012 1:16 pm

cd_uk says:
November 2, 2012 at 4:54 am
>>
For stationary data, all low pass filters converge after successive runs (central limit theory). For the temperature record over small windows (say 12 months) the record will be effectively stationary. Therefore, after a single run the differences are no more than “stylistic”.
So I don’t understand, but I am interested, in why you think this gives you anything more meaningful.
>>
Firstly, there are no “successive runs” here, there is one run. Your first point is irrelevant to what is presented here.
Also, the difference is not cosmetic as you suggest. The following is an AR1 time series (based on Spencer’s “simple model”) that produces climate-like data. The data was filtered with both gaussian and runny mean filters.
http://i44.tinypic.com/351v6a1.png
It is hard to believe both these lines originate from the same data. Note the following aberrations in the runny mean:
1: 1941, the biggest peak in the data gets truncated and is even inverted into a dip.
2: Early 80’s : complete inversion of the oscillations
3: 1959 peak twisted to the right, its peak being 1 year later than where it should be.
Bending, inverting and cropping peaks is really not acceptable in most situations, so a better filter is required.
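For anyone wanting to repeat that kind of experiment, a minimal Python sketch is below. It generates a synthetic AR(1) series (standing in for the "climate-like" data; the coefficient is an arbitrary choice, not Spencer's actual model) and runs both a trailing 12-month mean and a centred Gaussian over it.

[sourcecode]
# Sketch of the comparison above: the same synthetic AR(1) series filtered
# with a trailing 12-month running mean and with a centred Gaussian kernel.
# Plot the two against each other to see the truncated, shifted and
# occasionally inverted peaks in the running-mean version.
import numpy as np

rng = np.random.default_rng(7)
n = 600                                         # 50 years of monthly values
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal(0, 0.1)  # AR(1), illustrative phi=0.9

running = np.convolve(x, np.ones(12) / 12, mode="valid")   # trailing mean

k = np.arange(-9, 10)
g = np.exp(-k**2 / (2 * 3.0**2))                # sigma = 3 months
g /= g.sum()
gaussian = np.convolve(x, g, mode="valid")      # centred Gaussian

print(np.round(running[:3], 3), np.round(gaussian[:3], 3))
[/sourcecode]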

P. Solar
November 2, 2012 1:25 pm

“You still have not explained why UAH and GISS use running averages.”
I hadn’t noticed that I was being asked to explain that. I was under the impression that the graphs posted in this article were your work. Isn’t that the case? What are you referring to here?
I know Spencer shows a 13m runny mean on his site, and I pointed out to him last month how this was inverting peaks and troughs in the last two years of his data, but he chose not to reply.
I guess he finds it easy to do in Excel, which is what he uses to produce the plots for his blog.
Abuse of runny means as a filter is pretty common practice in climate science, along with many other bad practices like fitting trends to cyclic data and thinking it means something. Excel is probably the reason for that as well.

Werner Brozek
November 2, 2012 1:57 pm

HenryP says:
November 2, 2012 at 10:23 am
I am not sure what trend UAH is showing from 2000, since the adjustments?

I really wish woodfortrees would update this! However, on Dr. Spencer’s site,
Walter Dnes says:
October 5, 2012 at 7:10 PM
“The revised UAH dataset shows a zero or negative slope for
April 2001 – August 2012
This is at least in line with NOAA and GISS, for what it’s worth. Note that when including the warm September data, the longest zero or negative slope series in the UAH data is July 2001 to September 2012.”

Mooloo
November 2, 2012 2:20 pm

Do you really think that “everybody understands” how runny means distort the data, truncating peak and inverting oscillations. Do you understand that?
If the same method is used consistently over a period of time, then the trends are visualised fine with running means. The truncated peaks are a feature, not a bug, because they allow a focus on the signal not the noise.
That’s all we are doing here.
We are showing graphs of average temperature, when we need to be measuring the amount of heat, and you are worried about running means? This is a real example of arguing about angels on a pin – arguing at length about trivial details when the actual existence of angels is the real issue.

cd_uk
November 2, 2012 3:21 pm

P. Solar
Thanks for your response.
Surely any type of processing comes with the implicit caveat that there will be artifacts. Also, when trying to relay messages to the layperson you’d better use as simple a method as possible. For example, the best method would be to use a Butterworth filter, but this would involve processing in the frequency domain – at that point you lose the reader. How far should you go to relay the general trend in noisy time-series data?
I’m not disagreeing with any of your points but at some point you have to decide on a technique.

cd_uk
November 2, 2012 3:46 pm

P. Solar
General point: the choice of filter is an arbitrary one; what you gain with one you lose with another. You agree? In short, you choose the one that best suits what you wish to show, with all the implicit caveats.
But I think I see where the issue lies (from what you say)…
From your last post:
“I know Spencer shows a 13m runny mean on his site and I pointed out to him last month how this was inverting peaks and troughs in the last two years of his but he chose not to reply.”
And a previous post:
“Do you really think that “everybody understands” how runny means distort the data, truncating peak and inverting oscillations. Do you understand that?”
Are you conflating peaks and troughs with oscillation? And I’m sure you know this, but just for clarity, it appears to me that you’re suggesting (perhaps not) that:
1: peaks/troughs = signal
2: it follows from point 1 then the convolution should preserve the signal
But in a stochastic series, notwithstanding that there will be some pseudo-cyclicity due to El Nino/La Nina, troughs and peaks are as likely to be random fluctuations as signal. So then you have to identify what’s signal and what’s not.
So I agree that you should remove the oscillations, then apply your filter (this would necessitate an FFT again). But in order to do this properly you need to remove drift: how? You’d need to choose a filter or fit a polynomial, and so round and round you go.

Philip Bradley
November 2, 2012 5:18 pm

Paul Homewood says:
November 2, 2012 at 3:56 am
The divergence between GISS and the other sets has been present for the last decade or so. It is not just based on one month’s figures.
It might need a statistician to calculate the exact amount of the divergence, which I am not. But the divergence is real and is something people should be aware of.

You need to bear in mind that the satellite and surface measurements measure different things. And that they shouldn’t diverge is a prediction of GHG warming theory. The extent to which they do diverge is evidence that GHGs are not the cause of the observed warming in the surface temperature record and, to a lesser extent, the satellite record. There are some other complications, like the surface temperatures’ reliance on min/max temperatures, which is a poor way of determining average temperature.
BTW, the likely cause of the divergence is warming from increased solar insolation, due to reduced aerosols and aerosol seeded clouds.

P. Solar
November 3, 2012 12:39 am

cd_uk says: “Are you conflating peaks and troughs with oscillation. And I’m sure you know this but just for clarity, it appears to me that you’re suggesting that (perhaps not):”
No, that is why I list the two defects on those two points separately. Sideways bending of peaks is a third problem, which is a manifestation of the phase distortion that, for brevity, I have not mentioned explicitly.
“So I agree that you should remove the oscillations then apply your filter ”
“But in order to do this properly you need to remove drift: how, you’d need to chose a filter or fit a polynomial and so round and round you go.”
Hang on, I’m not expanding this into full blown analysis of climate, I’m just pointing out how awful runny means are as a filter. And why, even as a first step, it is a really bad idea.
“For stationary data, all low pass filters converge after successive runs (central limit theory). ”
Sounds impressive, but I’m not sure where you get this from. Once you have screwed up your phase with a running mean, applying the same filter again will not make matters any better.
Each time you apply such a windowing filter you lose some data each end. Eventually you will just have a short dataset that is as short or shorter than the window and can go no further. At this point what’s left will be near flat and something close to the mean of all the data.
Maybe that’s your “convergence”, although it is not a particularly useful one and does not tell you anything about how much the first or any other step screwed up the time series you wanted to filter.
“General point: the choice of filter is an arbitrary one, what you gain with one you lose with another. You agree? ”
NO, it is not arbitrary. There is no one “right” choice, but that does not mean you can do anything.
The only thing runny mean has to offer is that if you don’t ask yourself what the effects of your filter are or don’t even realise you should asking yourself that question, you can do it with a bit of clicking and dragging in Excel and post smart looking graphs of your distorted data on the internet.
Running mean must die

P. Solar
November 3, 2012 12:43 am

Philip Bradley says: “BTW, the likely cause of the divergence is warming from increased solar insolation, due to reduced aerosols and aerosol seeded clouds.”
So why don’t other surface records show the same warming as GISS ?

November 3, 2012 2:45 am

P Solar says November 2, 2012 at 11:59 am: Paul Homewood says: “Above all though running averages is a concept something everybody understands, which I suspect is not the case with gaussian filters!” Firstly, that has to be about the most piss-poor excuse ever for choosing a filter. What you are probably saying is that it is the only one you understand. (A similar argument Bob Tisdale has also used for the same reasons.) A filter is chosen because it works, not because the public may or may not have heard of it.
How arrogant you are. Running means (not “runny means” by the way, as you constantly repeat – some mathematician you must be) are PERFECTLY ACCEPTABLE. The fact that the concept is comprehensible to most people is an ADVANTAGE in a controversy where skeptics from all disciplines are trying their best to hack away at the truth hidden behind scientific obfuscation of the kind you perpetrate.
An 11 year running mean on the HadCRUT3 world temperature series does an EXCELLENT job of selecting out the natural ~67 year ocean temperature oscillation whose upswing over the ~33 years to 2000 was the cause of much of the climate alarmism. See the red line here:
http://www.thetruthaboutclimatechange.org/tempsworld.html

cd_uk
November 3, 2012 1:44 pm

P Solar
The point about the central limit theorem is that if you use the sampled averages for all possible 12 month windows, then your distribution of these means would be normal about the mean of the entire time series. Any smooth will produce a second data set which, if sampled as before, would give you another normal distribution of means centred about the mean (for the set) but with a lower variance this time. And if you repeat the smoothing step again, and get the sampled means as before, we’ll get a tighter and tighter distribution about the same mean. In short, and casually speaking, all low pass convolutions are converging on the same thing but follow a different path; and yes, we’re only doing one run, but effectively the differences are “stylistic” – so personally I think you’re splitting hairs, even if you are right.
If you’re so concerned with fidelity then you should perform a Butterworth filter. Here you can look at the time series in the frequency domain (its spectrum). Here we can determine the frequency at which noise >= signal. The Butterworth filter allows you to “passband” these so when you back transform into the time domain, only the signal portion is returned (effectively). This is much more sophisticated than blindly running a convolution. But why bother.
BTW, the Butterworth filter can be applied in Excel using the FFT data analysis tool and a small number of functions.
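Outside Excel, the same idea is only a few lines in Python with scipy; filtfilt runs the filter forwards and backwards, which also sidesteps the phase-shift issue raised further down the thread. The data and cutoff below are illustrative only.

[sourcecode]
# Sketch of a Butterworth low-pass on a monthly series, using scipy rather
# than the Excel FFT route described above. Synthetic data; the ~24-month
# cutoff period is an arbitrary illustrative choice.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(3)
months = np.arange(360)
series = (0.002 * months + 0.2 * np.sin(2 * np.pi * months / 60)
          + rng.normal(0, 0.1, months.size))    # trend + cycle + noise

b, a = butter(4, 1 / 12)            # 4th order low-pass, cutoff period ~24 months
smooth = filtfilt(b, a, series)     # forward-backward pass: zero phase shift
print(np.round(smooth[:5], 3))
[/sourcecode]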

November 4, 2012 1:43 am

Fit a slope line to the GISS data and it’s a nice upward line… compare to the others… Hmmm….
Sure makes the folks at GISS look kinda like ‘outliars’.. 😉
It’s pretty darned clear that the GISS method ‘has issues’ when compared to the others, in any case.

P. Solar
November 4, 2012 7:25 am

cd_uk says:
“we’ll get a tighter and tighter distribution about the same mean”
Sure, and because the filter loses data at each step, the net result is you end up with a small number of points which equal the mean. Great, but you don’t have a time series any more, AS I already pointed out and you ignored.
“– so personally I think you’re splitting hairs, even if you are right”.
Hey, I took the time to post a graph which demonstrates all the problems and spelt them out in words.
If you still think I’m splitting hairs it’s because you don’t bother to read when I reply to your questions.
All your waffle about the central limit theorem is irrelevant to what is being shown here. Runny mean is a crap filter that distorts the data in fundamentally bad ways.
Now if those who can’t do anything beyond click and point in Excel want to use a Butterworth, that’s fine by me. There are plenty of choices of filter that do a reasonable job.
But the sooner we start thinking about applying filters and stop talking about “doing a smooth”, the sooner we may start asking if we are using a suitable filter, which requires knowing something about the one you choose.
Even when I point out the problem with explanation and full detail and a graphic, people like yourself still prefer to pretend it does not matter.

cd_uk
November 4, 2012 4:11 pm

P. Solar
You asked what the central limit theorem had to do with anything. So I explained its relevance.
“Sure, and because the filter loses data at each step the net result is you end up with a small number of points which equal the mean.”
Oops, the description I gave should have been in relation to an exhaustive data set so that the filters converge long before you’re left with a small number of points (actually always).
“All your waffle about central mean theorem is irrelevant to what is being shown here.”
The fact you think it is waffle, when any discussion of low pass convolution worth its salt will refer to it, shows that you’ll always miss the point; all low pass filters that are achievable by convolution are essentially converging from day 0.
“Runny mean is crap filter that distorts the data in fundamentally bad ways.”
Do you mean running average? Perhaps you’re referring to something specifically different.
“Now if those who can’t do anything beyond click and point in Excel”
Yes, but you could easily perform a Gaussian filter with not much more effort in Excel. A moving average, if expressed as code, would have almost as many lines as the sample you provided. Just because it is easy to do doesn’t make it unsophisticated.
“Even when I point out the problem with explanation and full detail and a graphic, people like yourself refer still prefer to pretend it does not matter.”
Again, you’re approaching this from a purist’s perspective in which case you should be advocating something like a Butterworth filter instead. I understand the point you’re making but I just don’t think the argument will change because of the nature of the filtered data. The plot may change but then one could argue against any choice of filter – they’re all loaded one way or another as you suggest. Let’s just keep it simple, otherwise as evidenced here, the discussion becomes about the statistic itself rather than what the numbers mean. The warmists would love that.

P. Solar
November 4, 2012 4:40 pm

“Do you mean running average? ”
I’m referring to running mean (average can mean several things), I call it runny because it’s crap 😉
“you’re approaching this from a purist’s perspective” What is “purist” about not wanting your filter to invert the data and truncate and shift peaks!?
In electronics Butterworth has some good qualities, but how are you suggesting it is implemented? I strongly suspect Excel will implement this as an IIR formula which, depending on the frequencies, will mean it takes a considerable time to “spin up”. Excel will not tell you at which point it has converged to whatever accuracy you need (it doesn’t). In fact you’re flying blind doing that sort of thing, since you have NO WAY of knowing how much of the output is valid.
Neither will you know how much to offset the result by to keep it in phase with the original data, since, like the trailing R.M. that Paul did here, this introduces a phase shift. This sort of thing is KINDA important when you are looking for relationships in climate phenomena.
This is not nit-picking purism, it’s the very essentials of digital signal processing.

cd_uk
November 5, 2012 2:49 am

P Solar
“Neither will you know how much to offset the result by to keep it in phase with the original data.”
Aaah…now I see. Was a bit slow there. I totally accept there will be an issue with phase shift, the other points are moot as far as I’m concerned.
Yes I do now see your point (even for the Butterworth filter) – i.e. where the final signal is a composite; I suppose you’d need to decompose the signal, do your Butterworth filter, and then – stepwise – add each component back individually with a phase shift. Wow, now that would be tedious.
“This is not knit-picking purism, it’s the very essentials of digital signal processing.”
The way I see it, I don’t think signal fidelity is the aim of the filter. It is just a way to illustrate the trend. And for that, the moving average works fine. But thanks for the thoughts, and remember warmists love to fixate on things like this; it moves the argument away from the core of the issue.