Global Temperature Update–September 2012

Guest post by Paul Homewood

The HADCRUT data has now been released for September, so we can have a look at the latest figures for the four main global temperature datasets. I have now switched to HADCRUT4, although the Hadley Centre are still producing numbers for HADCRUT3.

                                     RSS   HADCRUT4    UAH   GISS
September 2012 anomaly              0.38       0.52   0.34   0.60
Increase/decrease from last month  +0.12      -0.01  +0.14  +0.03
12-month running average            0.16       0.42   0.11   0.50
Average 2002-11                     0.26       0.47   0.19   0.55

Global Temperature Anomalies – Degrees Centigrade

The pattern is similar across all datasets, with September temperatures above both the long-term and 12-month averages, although, interestingly, both satellite sets have picked up a bigger spike than the other two. We are currently seeing the lagged effect on temperature of the mild El Niño, which began in April and has now pretty much fizzled out, as can be seen below. Purely thinking aloud, but is this an indication that atmospheric warming is slower to dissipate than surface warming?

[Figure: Multivariate ENSO Index (MEI), NOAA ESRL]

http://www.esrl.noaa.gov/psd/enso/mei/

My guess is that temperatures will settle back slightly by the end of the year. If ENSO conditions remain fairly neutral in the next few months, we should get a good indication of underlying temperatures, for the first time in a while.

The following graphs show 12-month running averages for each set. As I mentioned before, we often get fixated on calendar-year figures, which obviously change a good deal from year to year. It therefore seems much more sensible to look at 12-month averages on a monthly basis, rather than wait till December.

[Figures: 12-month running averages for RSS, HADCRUT4, UAH and GISS]

In all cases, the 12-month averages are lower than they were at the start of the year.
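For anyone who wants to reproduce these from the raw monthly files, here is a minimal Python sketch of a trailing 12-month running average. The anomaly values in it are made-up placeholders, not figures from any of the four datasets.

def running_average(values, window=12):
    """Trailing running average: each output is the mean of the
    `window` values ending at that point, so the first window-1
    months have no complete window and are skipped."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

# Made-up monthly anomalies, purely to show the mechanics.
monthly_anomalies = [0.31, 0.28, 0.35, 0.42, 0.40, 0.37,
                     0.39, 0.44, 0.38, 0.36, 0.33, 0.41,
                     0.45, 0.52]

print(running_average(monthly_anomalies))  # one value per complete window

Each output value is the mean of the twelve months ending at that point, which is why the averaged series only begins once a full window is available.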

Finally, I mentioned last month that UAH had just brought out a new Version 5.5, which corrected for spurious warming from the Aqua satellite. (Roy Spencer has the full technical stuff here). The new version is now incorporated and backdated in my analysis above. I have also plotted the difference between the old and new versions below.

[Figure: Difference between the old and new UAH versions]

As can be seen, the divergence really started to be noticeable towards the end of last year, and has steadily grown wider over the last few months.

Remember that all monthly updates can be accessed on my “Global Temperature Updates” page, at

http://notalotofpeopleknowthat.wordpress.com/category/global-temperature-updates/

Sources

http://nsstc.uah.edu/climate/index.html

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

http://www.remss.com/data/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_3.txt

http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html#regional_series



58 Comments
November 3, 2012 2:45 am

P Solar says November 2, 2012 at 11:59 am: Paul Homewood says: “Above all though, running averages is a concept everybody understands, which I suspect is not the case with gaussian filters!” Firstly, that has to be about the most piss-poor excuse ever for choosing a filter. What you are probably saying is that it is the only one you understand. (A similar argument Bob Tisdale has also used for the same reasons.) A filter is chosen because it works, not because the public may or may not have heard of it.
How arrogant you are. Running means (not “runny means” by the way, as you constantly repeat – some mathematician you must be) are PERFECTLY ACCEPTABLE. The fact that the concept is comprehensible to most people is an ADVANTAGE in a controversy where skeptics from all disciplines are trying their best to hack away at the truth hidden behind scientific obfuscation of the kind you perpetrate.
An 11 year running mean on the HadCRUT3 world temperature series does an EXCELLENT job of selecting out the natural ~67 year ocean temperature oscillation whose upswing over the ~33 years to 2000 was the cause of much of the climate alarmism. See the red line here:
http://www.thetruthaboutclimatechange.org/tempsworld.html

cd_uk
November 3, 2012 1:44 pm

P Solar
The point about the central limit theorem is that if you take the sampled averages for all possible 12-month windows, the distribution of these means will be normal about the mean of the entire time series. Any smooth will produce a second data set which, if sampled as before, would give you another normal distribution of means centred about the mean (for the set) but with a lower variance this time. And if you repeat the smoothing step again, and take the sampled means as before, we’ll get a tighter and tighter distribution about the same mean. In short, and casually speaking, all low-pass convolutions are converging on the same thing but follow a different path; and yes, we’re only doing one run, but effectively the differences are “stylistic” – so personally I think you’re splitting hairs, even if you are right.
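To illustrate the convergence point numerically: repeatedly convolving a running-mean (boxcar) kernel with itself tends towards a Gaussian shape, which is the central limit theorem in kernel form. A rough Python sketch, with an arbitrary 12-point kernel:

import numpy as np

# Repeatedly convolving a running-mean (boxcar) kernel with itself
# tends towards a Gaussian shape. The 12-point width is arbitrary.
kernel = np.ones(12) / 12.0          # flat 12-point running mean
effective = kernel.copy()
for _ in range(3):                   # three repeated passes
    effective = np.convolve(effective, kernel)
    effective /= effective.sum()     # guard against rounding drift

# After a few passes the effective kernel is visibly bell-shaped
# rather than flat.
print(np.round(effective, 4))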
If you’re so concerned with fidelity then you should use a Butterworth filter. There you look at the time series in the frequency domain (its spectrum) and determine the frequency at which noise >= signal. The Butterworth filter allows you to pass only the band below that point, so when you back-transform into the time domain, only the signal portion is returned (effectively). This is much more sophisticated than blindly running a convolution. But why bother.
BTW, the Butterworth filter can be applied in Excel using the FFT data analysis tool and a small number of functions.
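Outside Excel, the same thing is a few lines of Python with scipy. A sketch, under assumed choices: the filter order, the cutoff (periods longer than about 36 months) and the input series are all arbitrary illustrations.

import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic monthly series: a slow oscillation plus noise.
rng = np.random.default_rng(0)
months = np.arange(240)                    # 20 years, monthly
series = (0.3 * np.sin(2 * np.pi * months / 132)
          + 0.1 * rng.standard_normal(240))

# Wn is the cutoff as a fraction of the Nyquist frequency
# (Nyquist = 6 cycles/year for monthly data).
b, a = butter(N=4, Wn=1 / 18.0, btype="low")
smoothed = filtfilt(b, a, series)          # forwards+backwards pass

Using filtfilt runs the filter forwards and then backwards, which also cancels the phase shift raised later in this thread.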

November 4, 2012 1:43 am

Fit a slope line to the GISS data and it’s a nice upward line… compare to the others… Hmmm….
Sure makes the folks at GISS look kinda like ‘outliars’.. 😉
It’s pretty darned clear that the GISS method ‘has issues’ when compared to the others, in any case.
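A hedged sketch of the trend-line comparison being suggested, in Python; the series below is a synthetic placeholder rather than actual GISS output:

import numpy as np

# Least-squares trend line on a monthly anomaly series.
rng = np.random.default_rng(1)
months = np.arange(120)                    # ten years, monthly
anoms = 0.002 * months + 0.1 * rng.standard_normal(120)

slope, intercept = np.polyfit(months, anoms, 1)   # slope is per month
print(f"trend: {slope * 120:+.3f} degC per decade")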

P. Solar
November 4, 2012 7:25 am

cd_uk says:
“we’ll get a tighter and tighter distribution about the same mean”
Sure, and because the filter loses data at each step, the net result is you end up with a small number of points which equal the mean. Great, but you don’t have a time series any more, AS I already pointed out and you ignored.
“– so personally I think you’re splitting hairs, even if you are right”.
Hey, I took the time to post a graph which demonstrates all the problems and spelt them out in words.
If you still think I’m splitting hairs, it’s because you don’t bother to read when I reply to your questions.
All your waffle about the central limit theorem is irrelevant to what is being shown here. The runny mean is a crap filter that distorts the data in fundamentally bad ways.
Now if those who can’t do anything beyond click and point in Excel want to use a Butterworth, that’s fine by me. There are plenty of choices of filter that do a reasonable job.
But the sooner we start thinking about applying filters and stop talking about “doing a smooth”, the sooner we may start asking if we are using a suitable filter, which requires knowing something about the one you choose.
Even when I point out the problem with explanation and full detail and a graphic, people like yourself still prefer to pretend it does not matter.
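The inversion claim can be checked directly: the frequency response of a running mean is sinc-shaped, and its negative side lobes pass some frequencies with flipped sign. A small Python sketch (the 12-point window is an arbitrary choice):

import numpy as np
from scipy.signal import freqz

# Frequency response of a 12-point running mean. The sinc-shaped
# response goes negative in its side lobes, so those frequency
# components come through with inverted sign.
window = 12
b = np.ones(window) / window
w, h = freqz(b, worN=512)

# Remove the pure delay of the trailing window so the sign of the
# response is visible directly.
centred = h * np.exp(1j * w * (window - 1) / 2)
print("most negative response:", centred.real.min())  # < 0 => inversion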

cd_uk
November 4, 2012 4:11 pm

P. Solar
You asked what the central limit theorem had to do with anything. So I explained its relevance.
“Sure, and because the filter loses data at each step, the net result is you end up with a small number of points which equal the mean.”
Oops, the description I gave should have been in relation to an exhaustive data set so that the filters converge long before you’re left with a small number of points (actually always).
“All your waffle about the central limit theorem is irrelevant to what is being shown here.”
The fact you think it is waffle, when any discussion of low-pass convolution worth its salt will refer to it, shows that you’ll always miss the point; all low-pass filters that are achievable by convolution are essentially converging from day 0.
“The runny mean is a crap filter that distorts the data in fundamentally bad ways.”
Do you mean running average? Perhaps you’re referring to something specifically different.
“Now if those who can’t do anything beyond click and point in Excel”
Yes, but you could easily perform a Gaussian filter with not much more effort in Excel. A moving average, if expressed as code, would have almost as many lines as the sample you provided. Just because it is easy to do doesn’t make it unsophisticated.
“Even when I point out the problem with explanation and full detail and a graphic, people like yourself still prefer to pretend it does not matter.”
Again, you’re approaching this from a purist’s perspective, in which case you should be advocating something like a Butterworth filter instead. I understand the point you’re making, but I just don’t think the argument will change because of the nature of the filtered data. The plot may change, but then one could argue against any choice of filter – they’re all loaded one way or another, as you suggest. Let’s just keep it simple; otherwise, as evidenced here, the discussion becomes about the statistic itself rather than what the numbers mean. The warmists would love that.
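For what it’s worth, a Gaussian smoother really is only a few lines. A minimal Python sketch (sigma, the kernel width in months, is an arbitrary illustrative choice, and the edge handling shown is just one option):

import numpy as np

def gaussian_smooth(values, sigma=3.0):
    """Gaussian-weighted moving average. Ends are handled by
    renormalising the truncated kernel rather than padding."""
    values = np.asarray(values, dtype=float)
    half = int(4 * sigma)                      # truncate at ~4 sigma
    offsets = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)
    out = np.empty_like(values)
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        w = kernel[lo - i + half:hi - i + half]
        out[i] = np.dot(values[lo:hi], w) / w.sum()
    return out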

P. Solar
November 4, 2012 4:40 pm

“Do you mean running average?”
I’m referring to the running mean (average can mean several things); I call it runny because it’s crap 😉
“you’re approaching this from a purist’s perspective” What is “purist” about not wanting your filter to invert the data, truncate and shift peaks!?
In electronics the Butterworth has some good qualities, but how are you suggesting it is implemented? I strongly suspect Excel will implement this as an IIR formula which, depending on the frequencies, will mean it takes a considerable time to “spin up”. Except Excel doesn’t tell you at which point it has converged to whatever accuracy you need. In fact you’re flying blind doing that sort of thing, since you have NO WAY of knowing how much of the output is valid.
Neither will you know how much to offset the result by to keep it in phase with the original data.
Like the trailing running mean that Paul did here, this introduces a phase shift. This sort of thing is KINDA important when you are looking for relationships in climate phenomena.
This is not nit-picking purism, it’s the very essentials of digital signal processing.
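The phase shift is easy to demonstrate: a trailing N-point running mean delays every feature by about (N-1)/2 samples, whereas a centred window does not. A small Python sketch with a synthetic peak:

import numpy as np

# Peak positions show the lag introduced by a trailing window.
n = 12
t = np.arange(200)
signal = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)     # single peak at t=100

kernel = np.ones(n) / n
trailing = np.convolve(signal, kernel)[:len(t)]     # past samples only
centred = np.convolve(signal, kernel, mode="same")  # window centred on t

print("true peak:    ", t[signal.argmax()])         # 100
print("trailing peak:", t[trailing.argmax()])       # ~100 + (n-1)/2
print("centred peak: ", t[centred.argmax()])        # ~100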

cd_uk
November 5, 2012 2:49 am

P Solar
“Neither will you know how much to offset the result by to keep it in phase with the original data.”
Aaah… now I see. I was a bit slow there. I totally accept there will be an issue with phase shift; the other points are moot as far as I’m concerned.
Yes, I do now see your point (even for the Butterworth filter) – i.e. where the final signal is a composite; I suppose you’d need to decompose the signal, do your Butterworth filter, and then – stepwise – add each component back individually with a phase shift. Wow, now that would be tedious.
“This is not nit-picking purism, it’s the very essentials of digital signal processing.”
The way I see it, signal fidelity is not the aim of the filter. It is just a way to illustrate the trend, and for that the moving average works fine. But thanks for the thoughts, and remember warmists love to fixate on things like this; it moves the argument away from the core of the issue.