Bob Tisdale shows us that GISS is once again “way out there” in 2009 compared to other global temperature datasets. It is not surprising; we’ve come to expect it.
Was 2009 The Warmest Year On Record In The Southern Hemisphere?
Guest Post by Bob Tisdale
Figure 1
http://i50.tinypic.com/alq6wy.png
Figure 2
The annual NCDC Land+Sea Surface Temperature anomalies from 1982 to 2009, Figure 3, also do not show record levels in 2009; note, however, that the NCDC does not infill with 1200 km smoothing the way GISS does.
http://i45.tinypic.com/2h2ghdy.png
Figure 3
GISS has used OI.v2 SST data since 1982. Figure 4 is an annual graph of SST anomalies for the Southern Hemisphere, and it illustrates that 2009 was not a record year for SST anomalies. That leaves the GISS land surface temperature anomaly data as the culprit.
http://i50.tinypic.com/2eceu74.png
Figure 4
Hadley Centre data is still not available for December, and they’ve been running late recently. The NCDC and GISS data through KNMI Climate Explorer data should be updated within the next few days, so we’ll be able to do some comparisons and try to determine which of the continents is responsible for the new record GISS Southern Hemisphere temperatures.
SOURCES
OI.v2 SST anomaly data is available through the NOAA NOMADS website:
http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh?lite
The GISTEMP Southern Hemisphere Land Plus Sea Surface Temperature data is available from GISS:
http://data.giss.nasa.gov/gistemp/tabledata/SH.Ts+dSST.txt
The NCDC Southern Hemisphere Land Plus Sea Surface Temperature data is available here:
ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/monthly.land_ocean.90S.00N.df_1901-2000mean.dat
The UAH MSU TLT anomaly data was retrieved from the KNMI Climate Explorer:
http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere
Posted by Bob Tisdale at 9:06 PM




hunter: You wrote, “Hansen’s mental contortion to claim that we have just finished the hottest decade on record seems contrived, at best.”
Especially when you consider that the majority of the difference between the 1990s and the 2000s was caused by the aftereffects of the 1997/98 El Nino (and 1998/99/00/01 La Nina). Refer to:
http://bobtisdale.blogspot.com/2009/11/global-temperatures-this-decade-will-be.html
Regards
Pascvaks: Thanks for your numerous comments that include links to graphs of sea ice as a proxy for global or hemispheric temperature…
BUT…
Sea ice is a poor proxy for global and hemispheric temperature. It is impacted by many other variables: changes in ocean currents, surface wind direction, polar atmospheric pressure, etc.
Bob Tisdale : Do the graphs that I’ve posted at
(05:12:10) and (05:40:28) accurately reflect the Global and Southern ice record? Are they a reasonable proxy for Global and Southern temperatures?
hunter, it is not. The proper (or the best that we can do) data would be a running 3-month average to coincide with other indices reported likewise, such as SST anomalies in the ENSO report used to determine El Nino/La Nina events. Even solar data would benefit from this type of calculation. The universe cares not one iota which month this is, yet we get our knickers in a twist if this January is warmer or colder than last January. Which reminds me, comparing today’s Arctic ice cover with last year’s cover on this date, or any comparison date by date, is just absurd.
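The running 3-month average described above can be sketched in a few lines. The anomaly values here are made up for illustration, not real SST data; the centered 3-month mean is the same smoothing used for ENSO indices such as NOAA's ONI.

```python
# Centered 3-month running mean of monthly anomalies, the kind of
# smoothing used for ENSO indices. Anomaly values below are invented.

def running_3mo_mean(monthly):
    """Return centered 3-month averages; the two ends are undefined (None)."""
    out = [None] * len(monthly)
    for i in range(1, len(monthly) - 1):
        out[i] = round(sum(monthly[i - 1:i + 2]) / 3.0, 2)
    return out

anoms = [0.4, 0.7, 1.1, 1.3, 1.0, 0.6]   # hypothetical SST anomalies (deg C)
print(running_3mo_mean(anoms))            # [None, 0.73, 1.03, 1.13, 0.97, None]
```

A three-month window smooths out the single-month wiggles that cause the January-versus-last-January comparisons complained about above, while still tracking events on ENSO timescales.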
John Finn (03:23:59) :
RE: GISS/UAH SH ‘discrepancy’
Are you sure that the ENSO lag is not playing some part in this, i.e. satellite readings don’t respond as quickly to El Nino development?
The surface temperatures started to rise much earlier in the year. UAH readings for the 6 months since July have been at record levels – higher than the same period in 1998. The SH surface (GISS) temperatures seem to have moderated while it’s not clear that has happened with the satellite temperatures. I think we need to look at the Jan/Feb numbers before jumping to conclusions.
If you look at UAH data for Ch 5 you’ll see that it’s been at or above the 20 yr record levels for the whole month of Jan. (and substantially above last year)
This issue came up some months ago (June??) when UAH was recording near-zero anomalies while GISS and Hadcrut anomalies were rising. I reckon there’s a good chance that by May or June, UAH will have recorded its warmest-ever 12-month period (SH and NH).
UAH has had a problem with anomalies ever since it switched to the Aqua satellite; it’s always low in the summer now. I think it’s due to the change from a drift-corrected dataset (on which the mean for calculating the anomaly is based) to now using a non-drifting satellite. It’s notable that RSS didn’t shift to Aqua and doesn’t show the seasonal variation.
As a lifelong scientist who has looked at a lot of data, charts, spectra, etc., I just find it amazing that the warmists look at any of these charts and see anything but noise.
The ever-changing methods of adjustments and the dying of thermometers enable GISS to create a continuous bull market for global warming. Ultimately, imho, the point here is to create higher highs and an ongoing bull market. They understand that the PR is more important than the science in terms of accrual of power and money.
As boballab has clearly demonstrated, the center of Africa is butt naked. It is a huge continent with sparse coverage. In the 21st century no less.
Pascvaks: You asked, “Do the graphs that I’ve posted at
(05:12:10) and (05:40:28) accurately reflect the Global and Southern ice record?”
I have no reason to dispute the accuracy of the data used to create the graphs.
You asked, “Are they a reasonable proxy for Global and Southern temperatures?”
As I replied above, no.
Sea ice is a poor proxy for global and hemispheric temperature. Sea ice is impacted by many other variables in addition to temperature: changes in ocean currents, surface wind direction, polar atmospheric pressure, etc.
Regards
Peter of Sydney–Thanks for emphasizing that the entire AGW debate is about 0.6 C. Much of the debate can be likened to early theologians arguing about how many angels can dance on the head of a pin. To believe that AGW is real, one would have to believe that the temperature measurements and the climate models are exceptionally precise. If you look at the purported science behind AGW, one would have to be BRAINDEAD not to be a skeptic. Please see my latest post at http://socratesparadox.com/?p=121#more-121 for further discussion.
Ref – Bob Tisdale (06:24:15) :
“Sea ice is a poor proxy for global and hemispheric temperature.”
_________________
Thanks!
Keep thinking that we’ve over defined some things; that we’re discussing string theory weather. Like taking apart a clock and looking at all the pieces, we can’t tell the time. Seemed that Sea Ice cover was a pretty effective way to tell if we were actually cooling or warming from a Macro point of view. Thanks again!
Oefinell (02:45:04) :
ajstrata (19:47)
Interesting post but misguided. Your blog correctly states that it is not possible to get an accurate global surface temperature from current data. In fact that is well understood by Climate Scientists. That is why they do not measure the absolute temperature at all, they measure the temperature change or variance.
————–
So, what you’re saying is that when Alexei pulls on his parka and goes out to the airport stevensonsky screensky, he just looks at the top of the thermometer, not the whole thing?
Danimals (05:55:15) :
My personal opinion for right now, with just a cursory glance and going by memory: the answer is found in history. During the ’80s and ’90s you had a lot of turmoil in Central Africa, with governments coming and going. Things like the genocide in Rwanda, Zimbabwe’s problems and famine across the region (remember, that is what got Bush I into Somalia in the ’90s) would in my mind take precedence over the need for reading thermometers.
You will also find the same pattern in the Middle East. There is a blank spot in southern Saudi Arabia, and with good reason: it’s a desert with very little there, so no thermometers. However, if you stop/slow down the progression of the animation, you will see that gray blob spread to the Iraq/Iran area. Again, that happens at a time when there are wars being fought in those areas, and it spreads outward from there through and into Afghanistan.
Bob Tisdale (03:49:45) :
John Finn: You asked, “Are you sure that the ENSO lag is not playing some part in this, i.e. satellite readings don’t respond as quickly to El Nino development,” then clarified that satellite data meant TLT.
Note the difference between the GISTEMP and NCDC data.
Bob
Fair point about NCDC, but there’s an issue about the GISS 1998 anomaly. It’s always seemed, relatively speaking, much lower than the other datasets.
Pascvaks, I suspect Bob is right on a yearly basis, however, at some timescale sea ice may very well be a good proxy for temperature. However, it might take another 50 years before we understand this enough to say anything concrete.
Bob,
The GISS divergence is no longer a mystery. Let me bring things up to date a little. Since around 1998, GISS has been significantly diverging from HadCRUT, UAH and RSS. The reason given by Real Climate was that GISS infills at the poles and the others do not represent the poles. Real Climate considered this a reason to regard the GISS dataset as superior. Of course, GISS is the only dataset that shows warming since 1998; HadCRUT, UAH and RSS do not. I made the argument that it could not be the poles, because it would take a huge amount of heating at the poles to produce such a large global divergence. Well, I was wrong. It was the poles. But I was not wrong about the huge amount of warming it would take at the poles to cause the divergence.
Four days ago Jim Hansen did a guest post on RC. What he had done was take the GISS gridded map and remove from it those areas that were not covered by HadCRUT; these were mostly polar, of course. Then, when he did a global average on the resulting map, the answer was very close to HadCRUT. So far so good: the divergence was in fact due to the poles. But the problem is that the infill GISS is doing at the poles is completely irrational. For example, there were some polar cells that exist in HadCRUT and show cooling. These cells were converted to maximum hot by the GISS algorithms. The difference for some of these cells between the values that HadCRUT had for them and the values that GISS had for them was 6.7 C or greater. An outrageous difference.
Then, if you look at the total anomaly that HadCRUT has for its most northern row (a row with 30% coverage) and compare it to the same row in GISS, the GISS anomaly was more than two times as large on a per-cell basis. The GISS infill algorithms are pure nonsense. I have been pointing this out in the comments on Hansen’s latest post, and I have had no response from Hansen or Gavin.
And there has been no meaningful response from the RC sycophants. Here are the Hansen charts that I am referring to:
http://www.realclimate.org/images/Hansen09_fig3.jpg
And here is one of the comments that I made on the Hansen thread.
“I’ve been talking about the qualitative problems that I noticed in the GISS charts in figure 3 above, so I thought that I would try to take a rough shot at quantifying the problem as well. I used the HadCRUT 2005 chart and the GISS 2005 chart to make comparisons, and I wanted to compare the HadCRUT gridcell row that was furthest north to the GISS gridcell row that occupied the same position. First I counted the number of gridcells in a row. There are 72. Then I counted the number of HadCRUT cells that have data in that row. There are 24. This means that the topmost HadCRUT row has 30% coverage. So I added up all of the covered gridcell anomaly values in the row. The total was 43.8. Dividing by 24, I got an average covered-gridcell value for the HadCRUT row of 1.85 C. The GISS row obviously had 100% coverage using interpolation and extrapolation. When I added all of the gridcell anomaly values together for the GISS row, I came up with 300. Dividing by 72 gave me an average anomaly value of 4.17 C. So the anomaly for the top row of GISS is 2.25 times as large as that of HadCRUT.
It seems to me that this reflects very badly on the GISS interpolation/extrapolation algorithm. The other problem is that there are 6 cells in that top row for which HadCRUT has negative values. GISS turns them all to the maximum positive value. The difference is 6.7 C or greater per cell for those 6 cells.
I can only conclude from this that the GISS divergence from the HadCRUT data is an artifact of the GISS processing algorithms and not a reflection of actual temperature variance at the poles.”
As I point out, the GISS interpolation extrapolation algorithms are just nuts.
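The row-average comparison described in the comment above can be sketched as follows. The cell values are hypothetical stand-ins built from the per-cell averages quoted in the comment (1.85 C for HadCRUT, 4.17 C for GISS), not the actual gridded data:

```python
# Sketch of the row-average comparison: average anomalies over only the
# covered cells (HadCRUT-style) versus over a fully infilled row
# (GISS-style). Cell values are hypothetical stand-ins, not real data.

def row_average(cells):
    """Mean anomaly over cells with data; None marks an empty cell."""
    covered = [c for c in cells if c is not None]
    return sum(covered) / len(covered) if covered else None

# 72-cell latitude row: 24 covered cells, the other 48 empty.
hadcrut_row = [1.85] * 24 + [None] * 48
# Infilled row: every cell filled by interpolation/extrapolation.
giss_row = [4.17] * 72

ratio = row_average(giss_row) / row_average(hadcrut_row)
print(round(ratio, 2))   # 2.25, matching the factor quoted in the comment
```

The point of the sketch is that the comparison is between per-cell averages, so the GISS/HadCRUT difference cannot be explained merely by GISS covering more cells; the infilled cells carry much larger anomaly values.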
Just found a relatively old paper by Dr Vincent Gray at nzclimatescience.net with a really broad approach as to why surface temp recording is totally unreliable. His main point is that the equipment providing the current database is constructed from ad-hocery and little else. His premise gives a similar outcome to Strata’s: an impossible signal/noise ratio due to a hopelessly inadequate earth-station network and many other factors.
Loved the video clip of the Yorkie EuroMP having his say on AGW!
davidmhoffer (20:57:24) :, I believe 1850 is when temps appear to cross what is considered the median temp for the last 2000 years. Naturally, temps rose from the depths of the LIA to reach this median value. Therefore, temps have been rising for longer than 160 years.
If we consider the LIA and MWP to be around 400 years in length that means temps should start dropping about 200 years after crossing the median line. That would mean another 40 years of warming.
BTW, I don’t believe anything in nature is this exact. These numbers are only used as a projection based on my simple model. 😉
For people who use IE there is a google toolbar capability to do spell checking. It works better than nothing but I still screw up quite often. No context checking capability.
AjStrata: You were obviously in a hurry to get this on the web – you do need to look at the spelling for typos (not important to the message) and also at the grammar in places (which is important to the message – there is at least one paragraph which makes no sense at all).
Anyway, comments on the actual content:
The discussion of how much variability there is in temperature over the day is somewhat irrelevant, since we know this and so do the AGW “scientists”. The readings are supposedly taken at the same time every day to remove such fluctuations (more on this later).
As for the noise masking any fundamental temperature signal, there are methods that can be used to extract a useful signal from noise which is many times stronger than that signal. For example, the signals from Voyager, many hundreds of millions of miles from Earth, now outside the solar system itself.
However, I have seen no sign that any of these techniques are being used. They are using simple low-pass filters, assuming (with some reason) that the noise is all high-frequency compared to the very low-frequency (decades or centuries) temperature-change signal.
There is no way you will ever see such a signal with a few months worth of data and using very basic statistical techniques, so I don’t find your results very surprising.
But even the techniques used by CRU and others (low-pass filters) are not really good enough. They assume that there is no low-frequency noise (at the decade/century scale). In fact, there probably is. We have the signal caused by changes in the Sun’s output, the signal caused by the Earth’s non-circular orbit, the signal caused by the precession of the Earth in its orbit, changes in cosmic-ray intensity, and probably a few I haven’t thought of. These signals need to be measured and accounted for, and I see no evidence of that.
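A minimal sketch of the low-pass-filter point: a simple moving average recovers a slow trend from zero-mean high-frequency noise, but any low-frequency noise component would pass straight through the same filter and masquerade as trend. All the numbers below are synthetic; the trend slope, noise level, and window length are arbitrary choices for illustration:

```python
import random

# A slow linear "climate" trend buried in zero-mean high-frequency noise
# is recovered by a simple moving average. All numbers are synthetic.

random.seed(0)
n = 240                                   # e.g. 20 years of monthly values
trend = [0.005 * t for t in range(n)]     # slow underlying signal
noisy = [s + random.gauss(0, 0.5) for s in trend]

def moving_average(x, window):
    """Trailing moving average; shorter windows at the start of the series."""
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - window + 1):i + 1]
        out.append(sum(seg) / len(seg))
    return out

smoothed = moving_average(noisy, 60)      # 5-year filter

# The filtered end value sits near the trend averaged over the last window
# (about 1.05 here), while raw monthly values scatter by ~0.5 either side.
print(round(smoothed[-1], 3), round(trend[-1], 3))
```

Averaging N independent noise samples shrinks the noise standard deviation by a factor of roughly sqrt(N), which is why the filter works for high-frequency noise; a noise component with a period comparable to the window (decades, in the comment's terms) gets almost no attenuation.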
Measuring temperature at the same time of day helps to eliminate some of the daily temperature fluctuation, and probably works tolerably well at a specific location. However, I have issues with the idea of comparing across locations unless the following (at least) are accounted for:
* Time of day needs to be UTC/GMT based, not clock time, if DST is in use.
* Time of day should be local solar time, because timezones are all over the place wrt “solar” time.
* Because of seasonal variations the only times compared should be those taken at the same time, at the same location, on the same date (i.e. 365 per year), to try to eliminate orbital effects.
Trying to do averaging of temperatures is a clumsy and amateurish approach.
Sit and think about it (which you appear to have done) and it becomes obvious that the required infrastructure is not in place and the accumulated data probably so full of artificial and natural signals as to make any sort of analysis at the level being attempted futile. If these people had any intellectual honesty they would say so.
As to this underlying temperature change, I think that most people agree that it is real. The “global” temperature does change, we see evidence all around us and in the human historical record.
The argument is whether this change is directly tied to CO2 concentrations in the atmosphere. There is no direct proof of this, so the AGW proponents use the fact that there is an absorption band in the infra-red in the CO2 transmission spectrum, and attempt to fit the observed CO2 concentration change to the observed temperature change as “proof”. It’s not proof, of course, even if they could derive an accurate enough temperature signal to compare to the CO2 concentration.
Phil., I want to congratulate you for pointing out the slight seasonal bias that appears in the UAH numbers. I remember you also pointed this out in June when the anomaly was zero. I was wondering if any of the AGW supporters would point this out at this time.
ajstrata (19:47:43) :
“Hey folks, just finished a post which I believe provides proof (or at least a clear way to prove) the AGW theories are mathematically invalid….”
Interesting analysis. I was always bothered by the claims such as “this year is 0.1 C hotter than last year” given the inaccuracy of the measurements, sparsity of the data and non-uniformity of temperature over distance.
So how many BILLIONS have we wasted on this boondoggle?
Baa Humbug (22:22:47) :
I was told by bloggers that changing baselines didn’t alter anomalies. Not so, coming from the horse’s mouth. What baseline do we use, and why?
Changing baselines will change the absolute value of the anomaly but not the pattern. You will see all the values shifted upward or downward by a constant value. So, using a baseline from a cool period, as GISS does, will shift the entire anomaly record higher. And, of course, if this baseline contains more thermometers in colder locations, as E.M.Smith discovered, the result will go even higher.
In addition, if the 1930s values contain those colder thermometers it will also seem colder in comparison to the 2000s.
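The baseline point above can be sketched numerically. The absolute temperatures and baseline means below are made up; the demonstration is only that switching baselines shifts every anomaly by one constant while leaving the year-to-year pattern untouched:

```python
# Changing the baseline period shifts every anomaly by a single constant
# and leaves the year-to-year pattern unchanged. Temperatures are made up.

temps = {1998: 14.52, 2005: 14.55, 2009: 14.50}   # hypothetical absolutes (C)

def anomalies(temps, baseline_mean):
    """Anomaly of each year's temperature relative to a baseline-period mean."""
    return {yr: round(t - baseline_mean, 2) for yr, t in temps.items()}

cool_base = 14.00    # mean of a cooler reference period
warm_base = 14.30    # mean of a warmer reference period

a_cool = anomalies(temps, cool_base)   # {1998: 0.52, 2005: 0.55, 2009: 0.5}
a_warm = anomalies(temps, warm_base)   # {1998: 0.22, 2005: 0.25, 2009: 0.2}

# Every value shifts by exactly warm_base - cool_base = 0.30 C;
# the differences between years (the pattern) are identical.
assert all(round(a_cool[y] - a_warm[y], 2) == 0.30 for y in temps)
```

So a cooler baseline raises every anomaly by the same amount; what it cannot do by itself is change the trend. The separate issue raised in the comment, a baseline period sampled with a different mix of thermometers than the later period, is what would distort the comparison rather than merely shift it.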
RE
Oefinell (02:45:04) :
ajstrata (19:47)
Interesting post but misguided. Your blog correctly states that it is not possible to get an accurate global surface temperature from current data. In fact that is well understood by Climate Scientists. That is why they do not measure the absolute temperature at all, they measure the temperature change or variance.
Oefinell can you explain to me how they calculate the variance without using the Absolute Temperature?
What Temperature do they use in the variance calculation?
Or do they just make it up?
The use of a baseline also obscures the big picture. It is useful for exposing trends because it exaggerates them, but we tend to forget that they are, in fact, now exaggerated. If the graphs that show a 1-degree-or-so rise against a baseline of zero were all done in kelvins (the actual measure of absolute temperature, instead of the Celsius scale relative to the freezing point of water), we would see a scale of about 300 degrees with Earth’s average temp hanging in at about (I’m guessing now) 280 and rising slightly to 281. It would look like a flat line, in other words; nothing to get excited about. Of course, the comfort zone for us humans would be an equally narrow range, but you see my point.
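The Kelvin-scale point can be checked with a few lines. The baseline value and anomaly series are hypothetical round numbers, not measured data:

```python
# The same warming expressed two ways: as an anomaly series (looks large
# against a zero baseline) and as a fraction of the absolute temperature
# in kelvins (looks tiny). Values are hypothetical.

baseline_k = 287.0                        # roughly 14 C global mean, in kelvins
anomaly = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]  # hypothetical warming, deg C
absolute = [baseline_k + a for a in anomaly]

# Fractional change over the whole record on the absolute scale:
frac = (absolute[-1] - absolute[0]) / absolute[0]
print(f"{frac:.2%}")   # 0.35%
```

A 1 C rise is about a third of one percent of the absolute temperature, which is the commenter's flat-line observation; whether that framing is the relevant one is, of course, the argument the thread is having.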
j.pickens,
I have a day job, I am sure you can muddle through as is.