Global Temperature Update – September 2012

Guest post by Paul Homewood

The HADCRUT data has now been released for September, so we can have a look at the latest figures for the four main global temperature datasets. I have now switched to HADCRUT4, although the Hadley Centre are still producing numbers for HADCRUT3.

                                     RSS   HADCRUT4    UAH   GISS
September 2012 anomaly              0.38       0.52   0.34   0.60
Increase/decrease from last month  +0.12      -0.01  +0.14  +0.03
12-month running average            0.16       0.42   0.11   0.50
Average 2002-11                     0.26       0.47   0.19   0.55

                                   Global Temperature Anomalies – Degree Centigrade           

 

The pattern is similar across all datasets, with September temperatures above both the long-term and 12-month averages, although, interestingly, both satellite sets have picked up a bigger spike than the other two. We are currently seeing the lagged effect on temperature of the mild El Nino, which began in April and has now pretty much fizzled out, as can be seen below. Purely thinking aloud, but is this an indication that atmospheric warming is slower to dissipate than surface warming?

image

http://www.esrl.noaa.gov/psd/enso/mei/

My guess is that temperatures will settle back slightly by the end of the year. If ENSO conditions remain fairly neutral in the next few months, we should get a good indication of underlying temperatures, for the first time for a while.

The following graphs show 12-month running averages for each set. As I mentioned before, we often get fixated with calendar year figures, which obviously change a good deal from year to year. It therefore seems much more sensible to look at 12 month averages on a monthly basis, rather than wait till December.
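The mechanics are simple enough to sketch. The snippet below is my own illustration (the anomaly numbers are invented, not taken from any of the four datasets); it computes a trailing 12-month mean that can be updated each month rather than once a year in December:

```python
# Trailing 12-month running mean of monthly anomalies.
# The anomaly values below are made up for illustration.
def running_mean(values, window=12):
    """Return one value per complete trailing window."""
    out = []
    for i in range(window - 1, len(values)):
        out.append(sum(values[i - window + 1 : i + 1]) / window)
    return out

anomalies = [0.31, 0.28, 0.35, 0.40, 0.33, 0.37,
             0.42, 0.38, 0.36, 0.41, 0.39, 0.44,
             0.46, 0.43]
smoothed = running_mean(anomalies)
# Each point is the mean of the 12 months ending at that month,
# so the series updates monthly instead of waiting for year-end.
print([round(x, 3) for x in smoothed])
```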

image

image

image

image

In all cases, the 12 month averages are lower than they were at the start of the year.

Finally, I mentioned last month that UAH had just brought out a new Version 5.5, which corrected for spurious warming from the Aqua satellite. (Roy Spencer has the full technical stuff here). The new version is now incorporated and backdated in my analysis above. I have also plotted the difference between the old and new versions below.

image

As can be seen, the divergence really started to be noticeable towards the end of last year, and has steadily grown wider over the last few months.

Remember that all monthly updates can be accessed on my “Global Temperature Updates” page, at

http://notalotofpeopleknowthat.wordpress.com/category/global-temperature-updates/

Sources

http://nsstc.uah.edu/climate/index.html

http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

http://www.remss.com/data/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_3.txt

http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html#regional_series


58 Comments
P. Solar
November 2, 2012 1:03 am

Paul Homewood says: “If you compare the current 12-month averages with Dec 1979, you get an increase in temps of :-”
With the level of uncertainty and the month-to-month variations as large as they are, taking one month like that is simply not reasonable. I think you either do this properly or not at all.
GISS are inflating the surface record, but if you are going to criticise, at least take the time to find out what the actual baselines are for all these datasets and make a meaningful comparison.
Also please try the gaussian filter I posted above, you will find it is a much better “smoother” than doing runny means. It also correctly centres the result instead of shifting it.

P. Solar
November 2, 2012 1:16 am

DI says: “Anyone who has used an oscilloscope knows… ”
Oscilloscopes also have the trigger function that selects just a small part of the waveform from the peak or the trough of the mains hum to provide a stable readable output of a very small part of the time sample. The rest of your analogy falls apart at this point.
What is happening in climate pseudo-science is that the mains hum is 60 years, not 60 Hz, and instead of filtering it out, they are just looking at the rising edge from trough to peak and pretending it will continue rising.

November 2, 2012 1:53 am

In an attempt to reduce Australia’s pollution, our Prime Minister (Julia Gillard) introduced a Carbon Tax last July 1st.
People’s opinion as to how effective this will be seems to be tied closely to their political persuasions.
The opposition party are blaming all our problems (current and in the future) on the Carbon Tax, while the Government champion its potential.
Nobody will really know its effect until it has been in for a few years.
But in the meantime, it gives me great material for some political cartoons.
http://cartoonmick.wordpress.com/editorial-political/#jp-carousel-517
Cheers
Mick

DirkH
November 2, 2012 3:25 am

Taphonomic says:
November 1, 2012 at 3:46 pm

“Rosco says:
“How the hell does anyone take thousands of readings with at best 0.5 degree accuracy and come up with a global average of 0.55 or whatever ??”
Simple, plug the temperatures into a calculator and divide by the number of observations.
Using a basic calculator, you can get precision to 8 to 10 decimal places by completely ignoring the concept of “significant figures”.”

No, again, the error of the average goes down with the square root of the number of measurements (assuming no autocorrelation).
Imagine you are sampling a noisy signal again and again. You are not interested in the short term fluctuations, you want to find out the DC component – the constant part of the signal – , and we assume no autocorrelation, that means no periodic signals in there for the moment.
This would be equivalent to recording the throw of a die time and time again. You know that over time your average measurement will get closer and closer to 3.5. If you don’t believe me, try it out. At any moment the absolute deviation of the sum of all measurements from n*3.5 can get arbitrarily large, but as the number of measurements n gets higher and higher, the sum divided by n asymptotically approaches 3.5. The error bar gets smaller and smaller over time; it can be shown that it shrinks with the square root of the number of measurements.
In other words: More measurements do help; but we have to look out for autocorrelation. That would be a reason to distribute temperature measurements around the globe, to minimize spatial autocorrelation.
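DirkH’s dice experiment is easy to run. A quick sketch (my own, with an arbitrary seed and trial counts chosen purely for illustration):

```python
# Illustration of the square-root law described above: the mean of n
# independent die rolls clusters ever more tightly around 3.5 as n grows.
import random

random.seed(42)

def mean_of_rolls(n):
    """Average of n fair die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

def typical_deviation(n, trials=30):
    """Average absolute deviation of the n-roll mean from 3.5."""
    return sum(abs(mean_of_rolls(n) - 3.5) for _ in range(trials)) / trials

for n in (10, 1000, 10000):
    print(n, typical_deviation(n))  # deviation falls as n grows
```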

DirkH
November 2, 2012 3:30 am

cartoonmick says:
November 2, 2012 at 1:53 am
“But in the meantime, it gives me great material for some political cartoons.”
I see no cartoon about the relation of Julia Gillard with keeping her promises. Wouldn’t that make a great one? Heck, she even has the nose of Pinocchio.

cd_uk
November 2, 2012 4:08 am

Paul
I find it incredible that disparate sources of global temperature records produced using very different methodologies compare so well (over last 30 years); especially when one considers what is being measured.
When you look at the methodologies employed in producing the instrumental value (interpolation, projection systems etc.) and the fine margins, one would think that the final figure is nonsense. But when compared with the satellite record there seems to be a good degree of concordance. Amazing.
In short, it seems unlikely that all could share the same bias.

cd_uk
November 2, 2012 4:54 am

P Solar
For stationary data, all low pass filters converge after successive runs (central limit theorem). For the temperature record over small windows (say 12 months) the record will be effectively stationary. Therefore, after a single run the differences are no more than “stylistic”.
So I don’t understand, but I am interested, in why you think this gives you anything more meaningful.
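The convergence cd_uk refers to can be seen in the filter kernels themselves: repeatedly applying a running mean is equivalent to convolving its box kernel with itself, and repeated self-convolution tends toward a Gaussian bell shape. A small sketch (my own illustration, not cd_uk’s working):

```python
# Repeated application of a 3-point box (running-mean) kernel:
# the effective kernel is the box convolved with itself, and it
# approaches a Gaussian bell shape (central limit theorem at work).
def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

box = [1/3, 1/3, 1/3]   # 3-point running mean
kernel = box
for _ in range(3):       # four passes in total
    kernel = convolve(kernel, box)

print([round(k, 4) for k in kernel])  # symmetric and single-peaked
```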

November 2, 2012 5:14 am

MiCro says
http://wattsupwiththat.com/2012/11/01/global-temperature-update-september-2012/#comment-1132403
Henry says
Very interesting. But I am not sure exactly what I am looking at except that it is a sine wave similar to my own. I assume the scale on the left is temp. but what is the scale on the bottom?
I recently discovered that in a period of cooling (such as has already arrived now, since 1995, looking at energy-in) some places do get warmer due to the GH effects. CET is an example. There are more clouds and there is more precipitation in a cooling period. In such a case the sine wave runs in opposite directions, but the wave is still there.
Read my comments made here:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/

Editor
November 2, 2012 7:19 am

I posted the preliminary October 2012 sea surface temperature data here:
http://bobtisdale.wordpress.com/2012/10/29/theyre-back-nino3-4-sea-surface-temperature-anomalies-are-back-above-the-threshold-of-el-nino-conditions/
The complete October update should be available on Monday, November 5.

Rex
November 2, 2012 9:07 am

The margins of error quoted for these figures are the Statistical Errors, based on the assumption of a perfect sample and methodology. In addition to the Statistical Error there is a factor which might be called SURVEY ERROR, which can be as great as, or indeed considerably greater than, the calculated Statistical Error.
My rule of thumb is to double or triple any quoted margins of error for survey data. And since the temperature measurement system resembles a dog’s breakfast more than it does a properly designed survey, I suggest quadrupling in this instance.

November 2, 2012 10:23 am

Werner Brozek says
http://wattsupwiththat.com/2012/11/01/global-temperature-update-september-2012/#comment-1132432
thanks for your updates, we do appreciate the work that you did there!
Nevertheless, I really only trust my own dataset, which is my right, and that one shows that we have already fallen by as much as 0.2 degrees C since 2000. I can also predict that we will fall by another 0.3 degrees C from 2012 to 2020.
I am not sure what trend UAH is showing from 2000, since the adjustments?
So far, it seemed to me that only Hadcrut 3 showed some significant cooling trend,
http://www.woodfortrees.org/plot/hadcrut4gl/from:2002/to:2012/plot/hadcrut4gl/from:2002/to:2012/trend/plot/hadcrut3vgl/from:2002/to:2012/plot/hadcrut3vgl/from:2002/to:2012/trend
On hadcrut 3 it looks like almost -0.1 which is beginning to become closer to my dataset.
On hadcrut4 it still looks very flat, but look at the very high result for 2007.
I think at that time somebody was still trying to cook the books a bit?

November 2, 2012 10:48 am

HenryP says:
November 2, 2012 at 5:14 am
“Very interesting. But I am not sure exactly what I am looking at except that it is a sine wave similar to my own. I assume the scale on the left is temp. but what is the scale on the bottom?
I recently discovered that in a period of cooling (such as has already arrived now, since 1995, looking at energy-in) some places do get warmer due to the GH effects. CET is an example. There are more clouds and there is more precipitation in a cooling period. In such a case the sine wave runs in opposite directions, but the wave is still there.”
Scale on the left is the daily difference temperature (Rising-Falling) * 100; across the bottom is the day of the year. Also, this is for stations north of 23° latitude.
What this is showing is a measure of atm cooling rate as the length of day changes, cooling in the fall and winter, warming in the spring and summer. When you average the daily rate out for a year, year over year the daily average changes very little. There’s a number of things I’ve worked on if you follow the url in my name here.
“Read my comments made here:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
I’ll take a look.

P. Solar
November 2, 2012 11:59 am

Paul Homewood says: “Above all though running averages is a concept something everybody understands, which I suspect is not the case with gaussian filters!”
Firstly, that has to be about the most piss-poor excuse ever for choosing a filter. What you are probably saying is that it is the only one you understand. (Bob Tisdale has used a similar argument for the same reasons.) A filter is chosen because it works, not because the public may or may not have heard of it.
Do you really think that “everybody understands” how runny means distort the data, truncating peaks and inverting oscillations? Do you understand that?
Secondly, the true problem is that “everybody” does not understand filters in the slightest. Most people would not even know that a running average IS a filter, and even if they got that far they would have no idea what a frequency response is or why it matters.
“People” and “everybody” do not have to know the ins and outs of digital signal processing for you to choose a good filter. Just choose one, and choose a good one.
You could choose a binomial filter (as used by the Met Office, for example) or a number of others. I’ve provided you with one alternative and the code to do it.
>>
The divergence between GISS and the other sets has been present for the last decade or so. It is not just based on one month’s figures.
It might need a statistician to calculate the exact amount of the divergence, which I am not. But the divergence is real and is something people should be aware of.
>>
I totally agree; it has been noted for a long time that they are ramping up the warming. If you want to highlight this you need to do apples-to-apples comparisons, so how about an update here where everything has the same baseline?
Best of all, plot GISS on top of UAH or RSS, preferably all smoothed with a filter that does not introduce spurious deformations into the data, or else someone’s data may be misrepresented.
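For anyone wanting to try the comparison, a minimal centred Gaussian smoother looks something like the sketch below. This is my own illustration, not the code P. Solar posted; the sigma value and the 3-sigma truncation are arbitrary choices:

```python
# A centred Gaussian smoother: symmetric weights, so no phase shift,
# at the cost of losing some points at each end of the series.
import math

def gaussian_smooth(values, sigma=2.0):
    half = int(3 * sigma)  # truncate the kernel at ~3 sigma
    weights = [math.exp(-0.5 * (k / sigma) ** 2)
               for k in range(-half, half + 1)]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalise to sum 1
    out = []
    for i in range(half, len(values) - half):
        out.append(sum(w * values[i + j - half]
                       for j, w in enumerate(weights)))
    return out
```

Smoothing a constant series returns the same constant, which is a quick sanity check that the weights are normalised correctly.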

November 2, 2012 12:00 pm

Micro says
http://wattsupwiththat.com/2012/11/01/global-temperature-update-september-2012/#comment-1133577
Henry says
true. I looked in at the url and saw that you came to the same conclusion as I did from my sample of 47 weather stations, which I had balanced by latitude and on-sea/inland 70/30.
Over an 88-year period we are more or less back to square one. It could be that within that 88-year cycle we are also in a 200-year and 500-year cycle moving up or down ever so slightly, but I think I will not be able to detect the details of those cycles. On the 88-year cycle I calculated that we are moving at speeds of cooling and warming of between -0.04 and +0.04 degrees C per annum on the maxima, with the average (over 88 years) at 0.00.

P. Solar
November 2, 2012 1:16 pm

cd_uk says:
November 2, 2012 at 4:54 am
>>
For stationary data, all low pass filters converge after successive runs (central limit theory). For the temperature record over small windows (say 12 months) the record will be effectively stationary. Therefore, after a single run the differences are no more than “stylistic”.
So I don’t understand, but I am interested, in why you think this gives you anything more meaningful.
>>
Firstly, there are no “successive runs” here; there is one run. Your first point is irrelevant to what is presented here.
Also, the difference is not cosmetic as you suggest. The following is an AR1 time series (based on Spencer’s “simple model”) that produces climate-like data. The data was filtered with both gaussian and runny mean filters.
http://i44.tinypic.com/351v6a1.png
It is hard to believe both these lines originate from the same data. Note the following aberrations in the runny mean:
1: 1941, the biggest peak in the data, gets truncated and is even inverted into a dip.
2: Early 80s: complete inversion of the oscillations.
3: The 1959 peak is twisted to the right, its peak being one year later than where it should be.
Bending, inverting and cropping peaks is really not acceptable in most situations, so a better filter is required.
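The inversion effect is reproducible with synthetic data: a 12-point running mean has a negative frequency response for oscillations with periods between 6 and 12 samples, so such a cycle comes out upside down. A sketch (my own construction, not the AR1 example plotted above):

```python
# A centred 12-point running mean applied to a pure oscillation with a
# period of 8 samples flips its sign (the filter's frequency response
# is negative there), so peaks come out as troughs.
import math

def centred_running_mean(values, window=12):
    half = window // 2
    out = []
    for i in range(half, len(values) - half):
        out.append(sum(values[i - half : i - half + window]) / window)
    return out

signal = [math.cos(2 * math.pi * i / 8) for i in range(64)]
smoothed = centred_running_mean(signal)

# Where the raw signal peaks, the smoothed series dips: opposite signs.
print(signal[8], smoothed[8 - 6])
</imports>```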

P. Solar
November 2, 2012 1:25 pm

“You still have not explained why UAH and GISS use running averages.”
I hadn’t noticed that I was being asked to explain that. I was under the impression that the graphs posted in this article were your work. Isn’t that the case? What are you referring to here?
I know Spencer shows a 13m runny mean on his site, and I pointed out to him last month how this was inverting peaks and troughs in the last two years of his data, but he chose not to reply.
I guess he finds it easy to do in Excel which is what he uses to produce the plots for his blog.
Abuse of runny means as a filter is pretty common practice in climate science, along with many other bad practices, like fitting trends to cyclic data and thinking it means something. Excel is probably the reason for that as well.

Werner Brozek
November 2, 2012 1:57 pm

HenryP says:
November 2, 2012 at 10:23 am
I am not sure what trend UAH is showing from 2000, since the adjustments?

I really wish woodfortrees would update this! However on Dr. Spencer’s site at
Walter Dnes says:
October 5, 2012 at 7:10 PM
“The revised UAH dataset shows a zero or negative slope for
April 2001 – August 2012
This is at least in line with NOAA and GISS, for what it’s worth. Note that when including the warm September data, the longest zero or negative slope series in the UAH data is July 2001 to September 2012.”
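The “zero or negative slope” claims above come down to the sign of an ordinary least-squares trend over a chosen window. A sketch of that calculation (the anomaly numbers here are invented for illustration, not UAH data):

```python
# Ordinary least-squares slope of a series against its index
# (months), computed from the standard closed-form formula.
def ols_slope(y):
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

window = [0.20, 0.15, 0.25, 0.10, 0.18, 0.12, 0.22, 0.09]
print(ols_slope(window))  # negative => cooling trend over this window
```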

Mooloo
November 2, 2012 2:20 pm

Do you really think that “everybody understands” how runny means distort the data, truncating peaks and inverting oscillations? Do you understand that?
If the same method is used consistently over a period of time, then the trends are visualised fine with running means. The truncated peaks are a feature, not a bug, because they allow a focus on the signal not the noise.
That’s all we are doing here.
We are showing graphs of average temperature when we need to be measuring the amount of heat, and you are worried about running means? This is a real example of arguing about angels on a pin: arguing at length about trivial details when the actual existence of angels is the real issue.

cd_uk
November 2, 2012 3:21 pm

P. Solar
Thanks for your response.
Surely any type of processing comes with the implicit caveat that there will be artifacts. Also, when trying to relay messages to the layperson you had better use as simple a method as possible. For example, the best method would be to use a Butterworth filter, but this would involve processing in the frequency domain; at that point you lose the reader. How far should you go to relay the general trend in noisy time-series data?
I’m not disagreeing with any of your points but at some point you have to decide on a technique.

cd_uk
November 2, 2012 3:46 pm

P. Solar
General point: the choice of filter is an arbitrary one; what you gain with one you lose with another. You agree? In short, you choose the one that best suits what you wish to show, with all the implicit caveats.
But I think I see where the issue lies (from what you say)…
From your last post:
“I know Spencer shows a 13m runny mean on his site and I pointed out to him last month how this was inverting peaks and troughs in the last two years of his but he chose not to reply.”
And a previous post:
“Do you really think that “everybody understands” how runny means distort the data, truncating peak and inverting oscillations. Do you understand that?”
Are you conflating peaks and troughs with oscillation? And I’m sure you know this, but just for clarity, it appears to me that you’re suggesting (perhaps not) that:
1: peaks/troughs = signal
2: it follows from point 1 then the convolution should preserve the signal
But in a stochastic series, notwithstanding that there will be some pseudo-cyclicity due to El Nino/La Nina, troughs and peaks are as likely to be random fluctuations as signal. So then you have to identify what’s signal and what’s not.
So I agree that you should remove the oscillations and then apply your filter (this would necessitate an FFT again). But in order to do this properly you need to remove drift: how? You’d need to choose a filter or fit a polynomial, and so round and round you go.

November 2, 2012 5:18 pm

Paul Homewood says:
November 2, 2012 at 3:56 am
The divergence between GISS and the other sets has been present for the last decade or so. It is not just based on one month’s figures.
It might need a statistician to calculate the exact amount of the divergence, which I am not. But the divergence is real and is something people should be aware of.

You need to bear in mind that the satellite and surface measurements measure different things, and that they shouldn’t diverge is a prediction of GHG warming theory. The extent to which they do diverge is evidence that GHGs are not the cause of the observed warming in the surface temperature record, and to a lesser extent the satellite record. There are some other complications, like the surface record’s reliance on min/max temperatures, which is a poor way of determining average temperature.
BTW, the likely cause of the divergence is warming from increased solar insolation, due to reduced aerosols and aerosol seeded clouds.

P. Solar
November 3, 2012 12:39 am

cd_uk says: “Are you conflating peaks and troughs with oscillation. And I’m sure you know this but just for clarity, it appears to me that you’re suggesting that (perhaps not):”
No, that is why I listed the two defects on those two points separately. Sideways bending of peaks is a third problem, a manifestation of the phase distortion that, for brevity, I have not mentioned explicitly.
“So I agree that you should remove the oscillations then apply your filter ”
“But in order to do this properly you need to remove drift: how, you’d need to chose a filter or fit a polynomial and so round and round you go.”
Hang on, I’m not expanding this into full blown analysis of climate, I’m just pointing out how awful runny means are as a filter. And why, even as a first step, it is a really bad idea.
“For stationary data, all low pass filters converge after successive runs (central limit theorem).”
Sounds impressive, but I’m not sure where you get this from. Once you have screwed up your phase with a running mean, applying the same filter again will not make matters any better.
Each time you apply such a windowing filter you lose some data at each end. Eventually you will just have a dataset that is as short as, or shorter than, the window and can go no further. At that point what’s left will be nearly flat, something close to the mean of all the data.
Maybe that’s your “convergence”, although it is not a particularly useful one, and it does not tell you anything about how much the first or any other step screwed up the time series you wanted to filter.
“General point: the choice of filter is an arbitrary one, what you gain with one you lose with another. You agree? ”
NO, it is not arbitrary. There is no one “right” choice, but that does not mean you can do anything.
The only thing the runny mean has to offer is that, if you don’t ask yourself what the effects of your filter are, or don’t even realise you should be asking that question, you can do it with a bit of clicking and dragging in Excel and post smart-looking graphs of your distorted data on the internet.
Running mean must die.

P. Solar
November 3, 2012 12:43 am

Philip Bradley says: “BTW, the likely cause of the divergence is warming from increased solar insolation, due to reduced aerosols and aerosol seeded clouds.”
So why don’t other surface records show the same warming as GISS ?