GISS Divergence with satellite temperatures since the start of 2003

By Steve Goddard and Anthony Watts

Some of the excellent readers of the last piece we posted on WUWT gave us an idea, which we are following up on here.  The exercise is to compare GISS and satellite data (UAH and RSS) since the start of 2003, and then propose one possible source of the divergence between the GISS and satellite records.  The start of 2003 was chosen because the satellite data show a rapid decline in temperatures beginning then, while the GISS data do not.  The only exception to the downward trend was an El Nino at the start of 2007, which caused a short but steep spike.  Remembering back a couple of years, Dr. Hansen had in fact suggested that the El Nino might turn into a “Super El Nino” which would make 2007 the “hottest year ever.”

The last six years (2003-2008) show a steep temperature drop in the satellite record that is not present in the GISS data.   Prior to 2003, the three trends were close enough to be considered reasonably consistent, but over the last six years a large divergence has become apparent, both visually and mathematically.

Source image: http://www.woodfortrees.org

Since the beginning of 2003, RSS has been dropping at 3.60C/century, UAH has been dropping at 2.84C/century, and GISS has been dropping at 0.96C/century.  All calculations are done in a Google Spreadsheet here:

The divergence between GISS and RSS is shown below.  Since the start of 2003, GISS has been diverging from RSS at 2.64C/century, and GISS has been diverging from UAH at 1.87C/century.  RSS has been diverging from UAH at minus 0.76C/century, indicating that RSS temperatures have been falling a little faster than UAH over the last six years, as can also be seen in the graph above.
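The trends and divergences quoted above are ordinary least-squares slopes fitted to monthly anomaly series and scaled to degrees per century. Here is a minimal sketch of that calculation; the function name and the synthetic series are our own illustration (the real series would come from the GISS, RSS, and UAH downloads), not the authors' actual spreadsheet:

```python
import numpy as np

def trend_c_per_century(anomalies_c, months_per_year=12):
    """OLS slope of a monthly anomaly series, scaled to degrees C per century."""
    t_years = np.arange(len(anomalies_c)) / months_per_year
    slope_per_year = np.polyfit(t_years, anomalies_c, 1)[0]
    return slope_per_year * 100.0

# Synthetic six-year (72-month) series standing in for the real data:
# built with the trends quoted in the post plus noise, purely to illustrate.
rng = np.random.default_rng(0)
months = 72
giss = -0.0096 / 12 * np.arange(months) + rng.normal(0, 0.1, months)
rss  = -0.0360 / 12 * np.arange(months) + rng.normal(0, 0.1, months)

giss_trend = trend_c_per_century(giss)
rss_trend  = trend_c_per_century(rss)
divergence = giss_trend - rss_trend  # positive: GISS cooling more slowly than RSS
```

The divergence figures in the post are simply differences of such slopes, e.g. -0.96 minus -3.60 gives the 2.64C/century GISS-RSS divergence.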

Below is a 250km map of GISS trends from 2003-2008.  One thing that stands out is that GISS has large areas with sparse or no coverage, notably in Africa, Antarctica, Greenland, Canada, Brazil, and a few other places.


Many of the GISS holes seem to be in blue regions on the map. Here is a post and video of the GHCN station loss over the past several years globally, created by WUWT contributor John Goetz:

Here are two images showing the difference between GISS global coverage in 1978 and 2008:

April 1978 anomalies


April 2008 anomaly


There is a tremendous amount of station dropout over these 30 years. Dropout is worst in the high northern latitudes, almost all of Canada, and about half of Africa. Of particular note is the red band at the southernmost latitude, which “seems” to indicate continuous coverage there. Of course we know that is not true, given the paucity of stations in the Antarctic interior. Read more here.

By contrast, while it doesn’t reach both poles (neither does GISS), UAH has much broader global coverage, as seen below. Could this be part of the explanation for the divergence between GISS and satellite data?  What do the readers think?

[Image]


[Image]


How different would the GISS graph appear if it showed a -3.6C/century cooling trend over the last six years?  For reference, the steep GISS warming trend from 1980 to 2002 totaled about 0.4 degrees.

steven G
January 18, 2009 8:13 pm

I agree with several posters that the absence of ocean coverage in the GISS data is alarming. If they leave out 70 per cent of the surface of the planet, that is a fatal flaw in their data. If they use Hadley data for the oceans, then GISS and HadCrut can hardly be considered two independent sources of temperature data.
Statistical and sampling theory permits a random sample to be used to estimate a property of the entire statistical population (e.g. political opinion polls usually cover only a few thousand people). But the key to the sample’s accuracy is its randomness and its representativeness of the entire population. To leave out the oceans entirely inserts an inherent bias, because it is well known that ocean temperatures do not behave the same as land temperatures (e.g. they change more slowly) and because oceans aren’t affected by the urban heat island effect.

January 18, 2009 8:25 pm

Why isn’t the data from ARGO being used? Check out the map of coverage on their home page:
http://www.argo.ucsd.edu/
Interesting site – “Float of the month”

Richard M
January 18, 2009 8:26 pm

steven G (20:03:56) makes a very good point about the 60-station nonsense. For a number like this to be valid it ASSUMES a homogenized world.
I don’t think many folks here believe our world is one giant climate. The truth is the world consists of multiple, interacting climates. Anyone who attempts to model the world otherwise is bound to fool themselves. Now, if you put 60 accurate weather stations within those regional climates you might have something. I think that is what the satellites MAY provide; however, much work needs to be done in regionalizing the data before using it.

crosspatch
January 18, 2009 8:28 pm

“How many temperature stations per square mile are there for the ocean areas???”
Well, there are several thousand (3283 to be exact) floats measuring ocean temperature from the surface down to 2000 meters.
Andrew (19:22:02)
I believe you misunderstood. The most accurate observations show that there has not been any warming for the past 10 years. Ocean temperatures, as David Archibald mentioned in a comment in this thread have also been cooling for at least the past five years. It doesn’t look like there is any “global warming”.
“Ok – I’ll bite. Please tell me where these few dozens should be sited. ”
Ok, sure. If you are looking for a global trend over time, it isn’t really going to matter where you site them, as long as you site them away from places with human-caused changes such as land-use changes. In theory, all you would need is one station, because if the entire globe is warming, that one station should be enough to show it over a period long enough to cycle through all natural weather cycles.
I would place them in locations far away from cities and far away from areas that are actively changing, such as land in the process of being deforested and converted from, say, forest to farming. Areas that are static would be fine, so an area that is currently farmed and probably will be for the next century or so would be just great. I might put stations in the middle of large national parks and wilderness areas. Deserts and tundra would work, too, as would large areas of relatively static open space such as northern Canada. I would place none of these stations near population centers. They would be intentionally difficult to reach and automated. They might be visited once or twice a year for calibration and maintenance but would otherwise be quite far from any human influence.
Or we can just measure the ocean temperature. That is pretty much where the vast majority of the Earth’s heat is stored. I would measure it on the sea floor, though, not on the surface.

DR
January 18, 2009 8:34 pm

DA (again)
Yes, I checked your pattern observation and it follows fairly well.
However, looking at AMSU-A temps for the last several days, CH05 (600MB) and CHLT(900MB) are going off the charts. Unless they drop like a rock in the next 12 days, January looks to be a leap upward.

January 18, 2009 8:36 pm

from Yahoo story: Americans giving Obama extraordinary support: polls
“A survey conducted by The New York Times and CBS News found a US public eager to give the president-elect a wide berth as he attempts to turn around a faltering US economy, tackle global warming, help solve the intractable Middle East peace process . . .”
Al Gore would be proud that tackling global warming is listed second for challenges that the President Obama must confront.
http://news.yahoo.com/s/afp/20090118/pl_afp/usinaugurationobamapoll

littlepeaks
January 18, 2009 8:48 pm

Anthony–
I am constantly trying to understand the climate science on this web site, and the excellent graphs of data. For someone who is not a climatologist (not sure if that’s the right word), could you put a link on your web site with acronyms and their definitions?
Thanks and congrats on the 2008Weblog award for science — I voted for you every day.

REPLY:
Already there, see Glossary at the top. – Anthony

January 18, 2009 8:49 pm

Very interesting work, thanks for sharing Steve and Anthony.

January 18, 2009 8:58 pm

Oh, and on the question of the divergence from 2003 onward – that is right around the time we began the steep decline to our current solar minimum. Maybe that effect shows up at different rates in the two measurement systems.
Or it could be a meaningless coincidence, of course.

January 18, 2009 9:00 pm

The comparison between the maps is rather flattering to the satellite data. For a fair comparison, the ocean areas, the data poleward of 82.5° North and 70° South, and areas with land or ice elevations above 3000 meters should be excluded from the satellite maps. Not so clear then.

Purakanui
January 18, 2009 9:29 pm

Kim
It’s the road to hell that is paved with good intentions. Knowing that the end justifies the means is one of the most seductive of good intentions.

January 18, 2009 10:37 pm

Joel Shore (18:09:28) said:
“Maybe because they understand the huge errorbars in trends computed over such short time periods”
What you apparently failed to see is that what I gave you was a maximum. Drawing long term (30-year) trends across either a maximum or a minimum might not be a useful exercise. Once a maximum is crossed – as it was in 2000 – then your long term trends don’t necessarily apply. The valid trend line for near-future prediction is what’s happened since the maximum. At least until the next minimum is reached. Then – maybe – your long-term trend will be useful again.
The question at the moment is – what will the “short-term” trend look like at the end of this year – and for the next 5-10 years. Only the “short-term” trend line can give you a clue about that – maybe. Or not.
The models have already proven that they’re incapable of predicting long-term. And they will continue to fail until modified to include the myriad other climate drivers that have so far not been seriously considered. The name of the game isn’t CO2 – if you’re gonna talk “science” then the game is – what factors did we miss so badly that our very expensive models have failed so badly? And the answer to that question doesn’t lie with the IPCC. And possibly not with GISS.
I do believe that 9 years is sufficient to establish a trend. After all, the boys at GISS managed to establish their trend line in 9 years or so (1979-1988). And they seem to have no problem with using 8-year trend lines to support their case when it’s convenient.
I knew there was a reason I refused to join the atmospheric science team back then.
“In other words, they understand the limitations in drawing conclusions from their data better than you do.”
ROTFLMAO! I spent 40+ years drawing correct conclusions from far less and far worse data than they have available. I doubt they understand the limitations as well as you think. Or that I understand them as little as you think.
YMMV

January 18, 2009 10:38 pm

I have been working now for several weeks on figuring out the differences between the satellite records and ground records. I have just made a post which I think is some of my more useful work. I was able to determine the step is primarily in the RSS data. Once it’s corrected, both trends come close to UAH.
http://noconsensus.wordpress.com/2009/01/19/satellite-temp-homoginization-using-giss/

anna v
January 18, 2009 10:45 pm

Mongo (20:05:42) :
“crosspatch:
To get a rate of change of global temperatures over time, you probably don’t need more than a few dozen stations globally.”
Ok – I’ll bite. Please tell me where these few dozens should be sited.

Pollsters the world over have solved this problem of finding a representative sample. A similar study should be done to find representative locations with the correct weights. Depending on the statistical error that is deemed acceptable for the study, the number of necessary stations can be deduced.
Since statistical errors go like sqrt(N)/N, the error will not be small if one wants to speak of temperature errors of the order of 0.1C against a mean of about 15C. I do not think a few dozen stations would do the trick.
Better to stick to satellite measurements from now on.
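[A sketch of anna v's sqrt(N)/N point. The standard error of a mean of N independent measurements falls as 1/sqrt(N), so the station count needed for a target precision grows as the square of the ratio. The 2C station scatter below is an illustrative assumption, not a measured figure, and real stations are spatially correlated, which changes the effective N:]

```python
import math

def stations_needed(sigma_station_c, target_error_c):
    """Rough sample-size estimate: the standard error of a mean of N
    independent stations scales as sigma / sqrt(N), so solving
    sigma / sqrt(N) <= target gives N >= (sigma / target)^2."""
    return math.ceil((sigma_station_c / target_error_c) ** 2)

# If individual station anomalies scattered by ~2 C around the global mean
# (an assumed figure for illustration), a 0.1 C standard error would need:
n = stations_needed(2.0, 0.1)  # 400 stations
```

This back-of-envelope figure supports the comment's skepticism: a few dozen independent stations would leave an error far larger than 0.1C under any plausible scatter.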

Scott in Minnesota
January 18, 2009 10:56 pm

Thank you so much for providing the forum, and congrats on the award.
Like many who read this site, I am an armchair statistician, although I had statistics training in college and earned an honors grade.
A comment I feel comfortable in bringing to the attention of the forum is that I know that statistical sampling can be more accurate than actually trying to count all the data points. For instance, I believe (sorry, do not have source to cite) that statistical sampling of the population in a country can be more accurate than actually conducting a census. The challenges with marshaling the resources to conduct a census can overwhelm attempts at accuracy.
In the middle of my last statistics class I asked my professor about this. We had covered many types of statistical analysis in the class, but one thing had struck me about all of them: the margin of error got very small once you reached about 30 data points, in all these different methods. He agreed that was the case. A good rule of thumb is that you can stop at 30 points if all you need is a trend for business purposes, and it might be sufficient for other purposes too.
(It would be interesting to see a temperature trend of the “certified installed correctly” temperature stations from the siting project. It would be a relatively small number, but it is an _accurate_over_time_ sample series. How does that subset compare to the satellite data?)
Of course one can argue, in dealing with the example of a population, that the undocumented immigrant will not participate in any census. All data can be flawed. And in measuring global temps, is it sufficient to measure temperature only over land? I do not know. The 30% of the surface that land covers is a large proportion. However, the arguments I have heard over the rates of absorption of CO2 in the ocean seem to say we should be measuring or sampling the oceans, too.
Which is one reason why the satellite data is so compelling to me, although I think I understand many of the challenges in using it. With satellites there is ONE (or a few) instrument(s) making multiple measurements in multiple locations. It is much easier to discern a trend. When we are discussing trends, does it matter if we are measuring in a cold hole or not? If the Urban Heat Island is dissipated and assumed not to affect climate overall within just a few miles, then perhaps the “cold hole” might be just as local and unimportant in the big scheme of things.
I hope the comment is helpful to someone with more statistical knowledge than mine. Thank you for the opportunity to convey my thoughts.

wooble
January 18, 2009 10:56 pm

There is always a risk of a credibility gap when picking a timescale for looking at temperature trends.
A big word of thanks to woodfortrees.org for providing such a simple tool for looking at the data. Temperature up, down, or no change, it’s all there depending on your chosen timescale and start point.
Similarly, there’s the argument about what is the timescale for “weather” and what for “climate”. How about we all agree that (say) 10 years is the (human) timescale for “climate”, and we don’t pick start or end points based on apparent changes or divergences?
If we do that then since 1999 all three indexes have risen. GISS by about 2 degC/century and the satellites by 1 degC/century.

January 18, 2009 11:07 pm

steven G (20:03:56) :
I agree with several posters that the absence of ocean coverage in the GISS is alarming. In my opinion, to leave out 70 per cent of the surface of the planet is a fatal flaw of their data.

Which is a flaw they don’t have, see for example: http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.lrg.gif
They allow data to be downloaded from either land only, ocean only or both, for the post above the authors chose ‘land only’, you’ll have to ask them why.
Statistical and sampling theory permits that a random sample of a whole may be used to make an estimate of a certain property in the entire statistical population (eg political opinion polls usually cover only a few thousand people only). But the key to the sample’s accuracy is its randomness and representativity of the entire statistical population. To leave out oceans entirely inserts an inherent bias because it is well known that ocean temperatures do not behave the same as land temperatures (eg they move slower) and because oceans aren’t affected by urban heat island effect.
The data in the graphs above are from the ‘land-ocean’ database but the maps are not.

philincalifornia
January 18, 2009 11:16 pm

Roy (17:27:07) : wrote:
Just wanted to raise an issue for general awareness: if someone has been adjusting published government data with the intent to influence government action, that person could be prosecuted for violation of 18 USC section 1519 and be sent to jail for up to 20 years.
—————————————————
Forget it Roy. That part of the U.S. legal system is terminally broken.
Having said that though, some of these people could not imagine the ferocity with which a class action plaintiff’s litigator would go after this if there’s a big damages pay-off to be had. The asymmetric warfare of costs of litigation versus potential damages awards is staggeringly in favor of such a plaintiff.
I’m not an attorney, but I’ve had the, shall we say, “educational experience” of being associated with a big time science litigation for ten years. Not producing ALL data and notes during discovery does not go over well with a Federal Judge – believe me. Testifying that the debate was over would not be a particularly good defense for ignoring 2008 and prior data either. Think Vioxx (not the one I was involved in). Think civil not criminal. Think big damages. It’s all about their finding a target that can pay up though, so don’t get too excited.

Alan Wilkinson
January 18, 2009 11:41 pm

Those who say that 2003-2008 is too short a time period to show anything need to explain why.
(a) In the absence of any peculiar events such as volcanic eruptions, just where has the additional heat from CO2 forcing gone? There is no evidence it has been stored in the sea or elsewhere is there? How much has gone into melting snow and ice and is that sufficient to account for the temperature drops?
(b) If the claim is that natural causes have intervened, then surely these must be identified and quantified before it is possible to make any predictions whatever about the consequences of additional CO2?

Trevor
January 18, 2009 11:53 pm

Re Anthony’s reply to Lars Kamél (13:34:27) :
“REPLY: I had thought about Amundsen-Scott base, but the red band seemed so large in area, compared to other stations, that it seemed unrealistic to treat it as a single station. Perhaps one station is being distorted in the map presentation. Mercator projection does that. – Anthony”
Looking at the GISS maps you referred to, they didn’t strike me as Mercator projections. They are, I believe, “unprojected lat-long world maps”, which distort both area and shape at virtually all latitudes, obviously more so at the higher latitudes. (Mercator at least preserves shape.) However, that is not the issue for this post, which was prompted by the full-width high-anomaly band located over Antarctica. Clearly this reflects only a few stations in the Antarctic.
However, if GISS calculates the area of anomaly based on the number of pixels on these maps (for example) then clearly anomalies at high latitudes such as what is shown on the map above reflect a disproportionate amount of the earth surface as having an anomaly compared to that same area on the GISS map being located at an equatorial latitude.
If you take a mean radius of the earth as 6371 km, then a 100 km wide band of anomaly located wholly around the equator would occupy 4,003,017 sq km. The same anomaly located at the extreme top or bottom of the GISS map (which only extends to 80 deg N & S) would occupy 695,116 sq km. This is only 17% of the equatorial area, yet the GISS map shows or implies the anomaly to cover the same area.
I stand to learn and be corrected about how GISS determines the areas of the anomalies in order to assess overall global temperature, but simply assigning a pixel to represent the anomaly means a square of given size at the polar extremity of the GISS map effectively carries nearly 6 times the area weighting of the same size square at the equator.
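[Trevor's band-area arithmetic checks out, and can be reproduced directly: the area of a thin latitude band shrinks with the cosine of the latitude, so an unweighted pixel map over-weights high latitudes by 1/cos(lat). The function below is our own illustration of the calculation, not how GISS actually does its area weighting:]

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius used in the comment above

def band_area_km2(center_lat_deg, width_km):
    """Approximate area of a width_km-wide band circling the globe at a
    given latitude: circumference at that latitude times the band width."""
    circumference = 2 * math.pi * R_EARTH_KM * math.cos(math.radians(center_lat_deg))
    return circumference * width_km

equator = band_area_km2(0, 100)   # ~4,003,017 sq km, as in the comment
at_80   = band_area_km2(80, 100)  # ~695,116 sq km
ratio   = equator / at_80         # ~5.76: the pixel over-weighting near the pole
```

The 1/cos(80°) factor of about 5.76 is exactly Trevor's "nearly 6 times the weighting"; a correct global average would weight each pixel by cos(latitude) before summing.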

Alan Wilkinson
January 18, 2009 11:56 pm

Having just posted that I see Roger Pielke Sr is just making a similar comment in response to Realclimate’s usual line on this.

E.M.Smith
Editor
January 19, 2009 12:14 am

Billy Bob lifts the hood… “Now there’s your problem Jamie boy… Your Canada dropped out! And it looks like your central Asian steppes are a bit loose too. You want me to fix it up for ya? I can start on it Tuesday…”
(Can I really “Blame Canada” with a straight face … giggle… 8-}

January 19, 2009 12:38 am

OT – sunspeck spotted
The current SOHO image shows a speck just below the equator, looking for all the world like a dead pixel, but it shows bright and tight on the magnetogram, too. Could it be that the last cycle isn’t over yet?

Chris H
January 19, 2009 12:43 am

I am no AGW believer, but a true skeptic must consider all the possibilities:
The “GISS divergence” problem since 2003 may not be a problem at all – it is after all only 5 years of data, and MIGHT be a temporary anomaly. Look at how GISS & HADCRUT have varied since 1880:
http://i37.tinypic.com/6r20zb.jpg
Although the differences here never 0.1C, they are both land-based measurements, so you would expect them to be closer. What is important is that they can ‘diverge’ for DECADES at a time, before coming back together (or even going in the other direction).
Secondly, it is possible that the satellite data is the source of the divergence, not GISS, although I don’t know how.

Chris H
January 19, 2009 12:44 am

That should read “Although the differences here never *exceed* 0.1C”.