Tisdale on comparing global climate data with SST

The Global Coverage of NCDC Merged Land + Sea Surface Temperature Data

Guest post by Bob Tisdale

There are numerous blogosphere posts about the global coverage, or lack thereof, of the GISS and Hadley Centre land plus sea surface temperature datasets. Few include the NCDC product. This post provides sample maps from 1880 to 2010 comparing NCDC merged land+sea surface temperature data to those of GISS (GISTEMP with 1200km radius smoothing–LOTI) and Hadley Centre (HADCRUT). It also shows GHCN and ERSST.v3b maps, which are the sources for the NCDC merged product. Finally, this post illustrates how NCDC deletes infilled data in early years, discusses why they delete it, and shows the very limited impact of those deletions.

The NCDC merged land+sea surface temperature anomaly data is now available through the KNMI Climate Explorer. (Many thanks to Dr. Geert Jan van Oldenborgh.)

Figure 1 compares the July 2010 temperature anomaly map of the NCDC merged Land+Sea Surface Temperature to those of the GISS and Hadley Centre products. I’ve used the base years of 1901 to 2000 for all datasets. Those are the base years NCDC uses for its data, though not for its dot-covered maps. The contour levels of the maps were set for a range of -4.0 to 4.0 deg C.

As illustrated, the NCDC does not present data over sea ice. Also, there is a sizeable area of east-central Africa without data during July 2010. And the NCDC does not present Antarctic data. The infilling methods employed by the NCDC provide greater land surface coverage than the Hadley Centre product but less coverage than GISS. The methods used by NCDC are discussed in Smith et al (2008) Improvements to NOAA’s Historical Merged Land-Ocean Surface Temperature Analysis (1880-2006), and in Smith and Reynolds (2004) Improved Extended Reconstruction of SST (1854-1997).

http://i35.tinypic.com/2uy1x6o.jpg

Figure 1

GISS includes more Arctic surface station data than NCDC and Hadley Centre. This can be seen in the maps that compare the NCDC GHCN data, the Hadley Centre CRUTEM3 data, and the GISS land surface data with 250km radius smoothing, Figure 2. GISS includes Antarctic surface stations (not illustrated), which are not included in GHCN. And of course, GISS Deletes Arctic And Southern Ocean Sea Surface Temperature Data and extends land surface data out over the oceans to increase coverage in the Arctic and Antarctic.

http://i36.tinypic.com/e7glzq.jpg

Figure 2

The GISTEMP combined land plus sea surface temperature dataset with 250km radius smoothing is used to show how little Arctic Ocean sea surface temperature data remains in the GISS product. Refer to the bottom cell in Figure 3. The NCDC and Hadley Centre, on the other hand, include Arctic Ocean Sea Surface Temperature data during seasons with reduced sea ice.

http://i34.tinypic.com/55jcxe.jpg

Figure 3

Figures 4 through 8 provide global coverage comparison maps for NCDC, Hadley Centre and GISS surface temperature products from 2010 to 1880. Januarys in 2010, 1975, 1940, 1910, and 1880 are shown. Note how the coverage decreases in early years. The exception is the SST data presented by GISS. Keep in mind that the three SST datasets prior to the satellite era basically use a common source SST dataset, ICOADS. Refer to An Overview Of Sea Surface Temperature Datasets Used In Global Temperature Products. The HADSST2 data in the Hadley Centre maps represents the locations of the SST samples. The HADISST and ERSST.v3b datasets used by GISS and NCDC are infilled using statistical methods.

http://i37.tinypic.com/1215rag.jpg

Figure 4

####################

http://i34.tinypic.com/29yqvsl.jpg

Figure 5

####################

http://i35.tinypic.com/24dplkj.jpg

Figure 6

####################

http://i34.tinypic.com/28mixyb.jpg

Figure 7

####################

http://i33.tinypic.com/28w1bus.jpg

Figure 8

####################

The decrease in land surface coverage is not surprising. The NCDC, however, uses ERSST.v3b SST data for the oceans, and that dataset provides complete coverage of the oceans even in early years. This can be seen in Figures 9 through 13. They include the same Januarys as the maps above, but they present the NCDC merged product alongside the GHCN land surface data and ERSST.v3b sea surface data used by NCDC. The NCDC infilled much of the Sea Surface Temperature data in early years. Why do they then delete so much of it? The answer follows the maps.

http://i36.tinypic.com/24b2sup.jpg

Figure 9

####################

http://i37.tinypic.com/2isd2e8.jpg

Figure 10

####################

http://i33.tinypic.com/bgbfxt.jpg

Figure 11

####################

http://i35.tinypic.com/2vcezhf.jpg

Figure 12

####################

http://i38.tinypic.com/11uuavo.jpg

Figure 13

####################

In Smith et al (2008), Improvements to NOAA’s Historical Merged Land-Ocean Surface Temperature Analysis (1880-2006), the NCDC describes why they delete data from their merged product. On page 6, under the heading of “Sampling cutoffs for large-scale averaging”, they write, “The above results show that the reconstructions can be improved in periods with sparse sampling. However, there can still be damping errors in periods with sparse sampling. Damping of large-scale averages may be reduced by eliminating poorly sampled regions because anomalies in those regions may be greatly damped. In Smith et al. (2005) error estimates were used to show that most Arctic and Antarctic anomalies are unreliable and those regions were removed from the global average computation. Here testing using the simulated data is done to find objectively when regions should be eliminated from the global average to minimize the MSE [global mean-squared error] of the average compared to the full data.” Smith et al then go on to describe the criteria for deleting the data in poorly sampled regions.
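As a rough illustration of the idea (this is not NCDC’s actual algorithm, which uses simulated-data testing to minimize MSE), a global average can simply eliminate poorly sampled latitude bands before area-weighting. The coverage cutoff below is an arbitrary stand-in for the objective criterion in Smith et al:

```python
import numpy as np

def global_mean(anom, lat, sampled, min_coverage=0.3):
    """Area-weighted global mean that drops poorly sampled latitude bands.

    anom:         2-D (lat x lon) anomaly grid, NaN where there is no value
    lat:          1-D grid-cell center latitudes, in degrees
    sampled:      2-D boolean grid, True where a cell has observations
    min_coverage: fraction of a band's cells that must be sampled for the
                  band to enter the average (an arbitrary illustrative cutoff)
    """
    # cosine-of-latitude weights, broadcast across longitudes
    weights = np.cos(np.radians(lat))[:, None] * np.ones_like(anom)
    coverage = sampled.mean(axis=1)          # sampling fraction per band
    keep = coverage >= min_coverage          # eliminate sparse bands
    mask = keep[:, None] & sampled & ~np.isnan(anom)
    return np.average(anom[mask], weights=weights[mask])
```

The point of the exercise: a band with only a handful of observations contributes a heavily damped (unreliable) anomaly, so excluding it can reduce the error of the global mean rather than increase it.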

Of course, the question that comes to mind is, what impact does deleting all of that SST data have on the long-term trends? Answer: very little. Figures 14 and 15 compare SST data for the NCDC merged product and the source SST data in the North and South Pacific. The coordinates (illustrated on the graphs) were chosen to capture large portions of those ocean subsets, while making sure they were free of influences from land surface data and sea ice. As shown, the NCDC merged data become much more volatile during periods of reduced coverage, but there is little impact on the long-term trends.
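For readers who want to reproduce this kind of trend comparison, here is a minimal sketch of the calculation. The anomaly series here is synthetic (made-up numbers standing in for a series downloaded from the KNMI Climate Explorer); it shows that a least-squares fit recovers a small trend even when the monthly data look like noise:

```python
import numpy as np

# Synthetic stand-in for a downloaded anomaly series: a 0.04 deg C/decade
# trend buried under 0.5 deg C of monthly noise, 1880-2010. In practice,
# substitute the actual KNMI Climate Explorer output for `anom`.
rng = np.random.default_rng(0)
t = np.arange(1880, 2010, 1 / 12.0)            # monthly time axis, in years
anom = 0.004 * (t - t[0]) + rng.normal(0, 0.5, t.size)

# Least-squares slope -- the same thing a spreadsheet's SLOPE()/LINEST()
# returns -- recovers the trend despite the volatility.
slope_per_year = np.polyfit(t, anom, 1)[0]
slope_per_decade = slope_per_year * 10.0       # close to the 0.04 put in
```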

http://i38.tinypic.com/11h3vk1.jpg

Figure 14

####################

http://i37.tinypic.com/1juvdy.jpg

Figure 15

Makes one wonder, doesn’t it?

SOURCE

The data and maps are available through the KNMI Climate Explorer:

http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere

Posted by Bob Tisdale at 5:30 AM

August 27, 2010 12:03 pm

Bob:
Can you direct us curious to the DATA SETS used to make figure 14 and 15?
I desperately want to use MiniTab on that data and see what the “normal distribution” likelihood is.
Thanks!
Max

trbixler
August 27, 2010 12:21 pm

Did I miss something, or are we talking about 0.4 degrees C from 1880 to 2010? Such a small snapshot to make a story up about tomorrow.

August 27, 2010 12:37 pm

Max Hugoson: The data is available through the KNMI Climate Explorer:
http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere
The NCDC merged land plus sea surface data is under the first grouping “Temperature”, and the ERSST.v3b is under the heading of “SST”.
The coordinates I used for the North Pacific subset were 0-40N, 145E-130W, and for the South Pacific I used 60S-0, 180-85W.

DirkH
August 27, 2010 12:39 pm

They improve their temperature product by throwing out data? Seriously cool magical mushroom science.

August 27, 2010 1:01 pm

trbixler says: “Did I miss something, or are we talking about 0.4 degrees C from 1880 to 2010?”
The linear trends of both Pacific Ocean SST subsets are about 0.04 deg C/decade, so that’s about 0.52 deg C.
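The arithmetic behind that figure:

```python
trend_per_decade = 0.04             # deg C/decade, approximate subset trend
decades = (2010 - 1880) / 10.0      # 13 decades
total_change = trend_per_decade * decades   # about 0.52 deg C
```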

August 27, 2010 1:17 pm

I’ve looked up some graphs for locations in my own country (The Netherlands, KNMI-land); most of them, some dating back to 1880, look virtually flat in trend. Am I missing something vital here? Is it real?

August 27, 2010 1:30 pm

Aaron Stonebeat says: “I’ve looked up some graphs for locations in my own country (The Netherlands, KNMI-land); most of them, some dating back to 1880, look virtually flat in trend.”
Have you downloaded the data into a spreadsheet and had the spreadsheet calculate the trend? Looks can be deceiving.

Jeff
August 27, 2010 1:47 pm

any grid location that doesn’t have actual temperature records DOES NOT have any data … if they fill it in with some averaged data from other grids then what we have is opinions, not data …

kwinterkorn
August 27, 2010 1:56 pm

Subtract a little for UHI and Airport siting, and, oops, where did the global warming go?—-other than the slow long term trend since the Little Ice Age.

August 27, 2010 2:01 pm

Bob, I’d recommend that for studying coverage, you don’t use plots for the most recent month. It takes time for data to be verified and added to the plot, and in most cases you’ll have incomplete versions. The plots themselves are updated as data is processed. You’d get a more stable picture from, say, March 2010.

August 27, 2010 2:09 pm

It seems to me that events like El Nino and La Nina help illustrate the inaccuracy of the temperature records. Shouldn’t the true temperature average be extremely stable? I don’t see where there should be any global variation at all.

August 27, 2010 2:38 pm

Nick Stokes says: “Bob, I’d recommend that for studying coverage, you don’t use plots for the most recent month. It takes time for data to be verified and added to the plot, and in most cases you’ll have incomplete versions.”
Thanks. The CRUTEM and NCDC samples in Figure 2 do look a little sparse with the July data, but January 2010 samples are not much better.
http://i37.tinypic.com/2lxicg2.jpg
There’s one additional cell in the CRUTEM data. Cells do bounce in and out from year to year, but recently that’s about all they have.
I originally plotted only the Januarys (2010…1880) Figures 4 to 13. Then I included July 2010 (Figures 1 and 2) because someone was bound to ask…or, on the other hand, to complain that I was trying to hide something.

August 27, 2010 3:25 pm

Genghis says: “It seems to me that events like El and La nina help illustrate the inaccuracy of the temperature records. Shouldn’t the true temperature average be extremely stable?”
Nope. Keep in mind that the Eastern Tropical Pacific, which is directly impacted by El Nino and La Nina events, covers more than 25% of the tropics, and the tropics represent about 35% of the globe, so the global area that is directly impacted by El Nino is about 8.75%. The Eastern Tropical Pacific SST anomalies rose about 1.5 deg C during the 1997/98 El Nino. That, in and of itself, would register on global surface records.
http://i37.tinypic.com/148hqmo.jpg
El Nino events also raise temperature anomalies outside of the tropical Pacific. The response of the Tropical North Atlantic to the 1997/98 El Nino peaked at almost 0.9 deg C, and there’s land between the two oceans, so there’s no direct transfer of heat. The tropical North Atlantic rose because the El Nino caused changes in atmospheric circulation, which in turn caused a slowing of the trade winds over the tropical North Atlantic. The slower trade winds caused less evaporative cooling and less upwelling of cooler waters, so tropical Atlantic SST anomalies rose. And this happens globally to different extents. Some areas actually cool during El Nino events, but the rise in the areas that warm outweighs the drop in the areas that cool. Refer to the following correlation maps from Trenberth et al (2002).
http://i47.tinypic.com/261e1lf.png
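The fractions quoted above multiply out as follows (a back-of-the-envelope check using the round numbers from the comment):

```python
tropics_fraction = 0.35   # tropics as a share of the globe
etp_fraction = 0.25       # Eastern Tropical Pacific as a share of the tropics
direct = tropics_fraction * etp_fraction   # about 0.0875, i.e. 8.75% of the globe
global_bump = direct * 1.5                 # about 0.13 deg C from a 1.5 deg C rise there
```

So even before the remote (atmospheric-bridge) responses, the direct ENSO footprint alone is large enough to register on a global average.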

Bruce of Newcastle
August 27, 2010 3:38 pm

Also maybe worth noting that total solar irradiance (TSI) seems to have risen 0.5 W/m^2 in the same timeframe, and relatively steadily. If the extra solar heating effect could be accounted for in the SST anomaly trend the maximum possible greenhouse warming would be lower than this 0.4 C/century.
Lots of nice PDO and ENSO signals but no sign of any hockey sticks that I can see.

August 27, 2010 3:42 pm

Jeff says: “any grid location that doesn’t have actual temperature records DOES NOT have any data …”
There’s data but it’s make believe data.
You continued, “if they fill it in with some averaged data from other grids then what we have is opinions, not data …”
The NCDC and Hadley Centre do not use averages for infilling SST. And the NCDC does not use averages to infill Land Surface Temperatures. But yes, the infilling does represent opinion. On the other hand, the Hadley Centre doesn’t infill areas without samples, and the rise in its global temperature anomalies is not significantly different from the other datasets. It’s only when current temperature anomalies are being touted by alarmists as record years that the infilling and make-believe data have any real impact.

intrepid_wanders
August 27, 2010 3:47 pm

Nick Stokes says:
August 27, 2010 at 2:01 pm

Bob, I’d recommend that for studying coverage, you don’t use plots for the most recent month. It takes time for data to be verified and added to the plot, and in most cases you’ll have incomplete versions. The plots themselves are updated as data is processed. You’d get a more stable picture from, say, March 2010.

Why in your view would March 2010 be more “verified” or complete than January 2010? The graphs appear to be of “January”; are you seeing something else?

Gary Pearse
August 27, 2010 3:54 pm

So the average increase in global temps from 1880 is 0.52C. Now remember, CO2’s heavy weight in the increase is now diminished by the AGW supporters’ grudging, recent acceptance of an LIA, MWP, and other significant warming periods, which was forced on them by the revelation in the “emails” of the concerted effort to flatten down these natural cycles. They skimpingly allowed the MWP to come back probably halfway toward its true amplitude, and ditto the LIA (I have reasoned that there is no way they would have erred on the extreme side, and halfway is a kind of instinctive bargaining move). This would squeeze the margin of what is excessive to natural variability, so over 130 years, can we accept that if CO2 is implicated at all, it would be limited to (generously) 0.25C? It wouldn’t even be worth arguing about in that case. This would mean that by 2100 the CO2 contribution to temperature rise would be around 0.44C, and the total increase from all causes (linear, with no serious downturns like the 30-year cooling we are in, to be generous) would be around +0.88C (say 1C) for the period 1880-2100. Anyone got a problem with that?

R. Gates
August 27, 2010 4:30 pm

Let’s see…throwing out data is better than keeping it, random data is better than reconstructed tree ring data, anything before 1975 doesn’t count anyway. Does this about sum it all up?

Adrian Smits
August 27, 2010 4:54 pm

I tried to explain some of this rather mild temperature increase since 1979 at climate progress and all I got was grief without any serious discussion. I don’t think anyone can change a mind that is closed!

August 27, 2010 5:14 pm

Bruce of Newcastle says: “Lots of nice PDO and ENSO signals but no sign of any hockey sticks that I can see.”
Keep in mind the PDO represents the spatial pattern of the North Pacific SST anomalies, not the SST anomalies. The rises and falls of the SST anomalies in the North Pacific (north of 20N) do not correlate with the PDO.
http://i43.tinypic.com/29fp8ad.jpg
The graph is from this post:
http://bobtisdale.blogspot.com/2009/04/misunderstandings-about-pdo-revised.html

August 27, 2010 5:17 pm

Adrian Smits: They aren’t only closed; they’ve got locks on them.

wayne
August 27, 2010 5:44 pm

DirkH says:
August 27, 2010 at 12:39 pm
“They improve their temperature product by throwing out data? Seriously cool magical mushroom science.”
Think they need a new jingle? Yeah, they must.
♪♪ There’s no data
♪♪ Like no data
♪♪ Like no data we know…
♪♪ Everybody’s got to find a way today
♪♪ How to hide away
♪♪ The cold today…

Bill Illis
August 27, 2010 6:08 pm

Good post Bob.
Here’s one for you to get your teeth into that is rapidly making the rounds today and will enter the world of the next new global warming myth by next week.
El Ninos in the Central Pacific are getting stronger (since 1982 using satellite data apparently) – paywall issue on the actual paper.
http://www.agu.org/pubs/crossref/2010/2010GL044007.shtml
http://www.sciencedaily.com/releases/2010/08/100825200657.htm
I’ve always used the long-term trend in the Nino 3.4 which is about 0.006C per decade or an impact on global temperatures of 0.0004C per decade (or in other words, ZERO – the other regions are slightly different but still zero in my opinion).

Mark.r
August 27, 2010 6:33 pm

OT
Victoria’s northern ski resorts have received one of their biggest ever 24-hour snow dumps, and more is expected today.
Fifty-four centimetres of snow has fallen at Falls Creek in the past day.
It is the biggest August dump ever recorded.
Mt Hotham has had 46 centimetres and Mt Buller has recorded 29 centimetres of fresh snow in the past 24 hours.
Falls Creek lift manager Dave Plant says it is an amazing sight.
“I’ve been up here for seven years and this is the craziest I’ve ever seen it,” he said.

http://www.weatherchannel.com.au/main-menu/News/Breaking-News/Huge-snow-dumps-delight-skiers.aspx

August 27, 2010 6:55 pm

Bob Tisdale says:
August 27, 2010 at 3:25 pm
“El Nino events also raise temperature anomalies outside of the tropical Pacific. ”
Yes, they raise air temperatures. Air temperatures are no more than rounding errors compared to ocean temps, which was partially my unstated point. The global average temperature should be extremely stable, and any anomalies (over relatively short periods) are indicative of our measurement errors. If simply sloshing heat around the system raises or lowers the global average temperature, our measurement system is defective.

August 27, 2010 7:11 pm

Bill Illis says: I ran into the paper because of a link at the LA Times back to my website:
http://latimesblogs.latimes.com/greenspace/2010/08/climate-change-el-nino-southern-california-rain.html
I left a couple of comments that haven’t been moderated yet. Basically, NINO3.4 SST anomalies have decadal variations…
http://i43.tinypic.com/33agh3c.jpg
…and looking at the last three decades is not going to account for this.

Ben D.
August 27, 2010 7:56 pm


Bill Illis says:
I’ve always used the long-term trend in the Nino 3.4 which is about 0.006C per decade or an impact on global temperatures of 0.0004C per decade (or in other words, ZERO – the other regions are slightly different but still zero in my opinion

That is one of the best ways to do it. Although people here are claiming longer is better, that is NOT always the case. To find out what the overall trend is, it’s best to find all the high water marks or low water marks and base anomalies off of that. You can kind of do year to year if you can find a good year that corresponds to the current year, but this is insanely difficult to compute statistically speaking, with sine waves (oceanic cycles and solar cycles) and La Ninas affecting year-to-year temperatures for the globe.
It’s best just to go from wave to wave, so to speak. So in order to find a trend with the data we have, you would probably start at 1880 – 1940 – 1998, or something similar to that. Those are (rough estimates of) the high points in temperatures based on the long-term ocean cycles.
So in other words…you would not even use data from the last 12 years to figure out what the climate is doing. You would also not adjust the data at all in this approach, and so you would use the 1940s-were-hotter-than-the-1990s approach…
I can go on about why you would do this, but the basis behind normalizing the data is to take out short-term anomalies to allow others to see the big picture, which is not wrong really; it’s just the wrong method to compute the actual warming of the system.
El Ninos or La Ninas are also appropriate, as you are graphing high/low points in the data and finding out what the overall trend in the data is on average. Personally, I normally would use low points if you are trying to measure warming because those tend to stick to the point more, so to speak. But as long as you are cancelling out the oceanic/solar/La Nina trends somehow, you are doing a good job statistically speaking.
But overall, there is not very much we can figure out climate-wise, because we have 60-year-long ocean cycles and around 150 years of reliable data for the globe.
The reason 1975 is a bad year is that you are starting at the lowest point in the data set…and graphing to a higher point (today). It is not completely wrong, so to speak, but in math this would be worthless and thrown out as garbage, since you have not eliminated outside parameters such as solar cycles and ocean cycles. Saying that you are graphing “the start of GW” is circular logic, in that it would show up in wave-to-wave comparisons if it did exist as stated. Or put simply, it would exist in any appropriate graph of temperature trends as a “large artifact.” This is not the case, however.
But don’t take my word for it; do an honest sketch using the data like I have many times and figure out R for CO2 + temperature. My method of high-to-high or low-to-low will give you an accurate climate reading…and you might be surprised at how quickly your beliefs are squished.
Remember, you have to use unadjusted data for this, as adjusted data defeats the purpose of figuring out the human aspect of climate change.
Now, this does not say GW is not AGW right now; I am just saying it’s not CO2-caused. Simple experiments like that will get you your proof.

August 27, 2010 10:32 pm

Bob,
I’m currently working through HADSST2 over at the blog. Will gladly move on to the other data sets once the system is all done… basically less than 10 lines of code once you get the right data structures.

Bruce of Newcastle
August 28, 2010 1:41 am

Bob T @August 27, 2010 at 5:14 pm
Sorry, to be more precise, I see a beautiful empirical 61 year signal in the ERSST.v3b (which I hitherto have linked to the PDO in posts – I admit a bias to empiricity).
Forgive my ignorance, if the 61 year signal is not PDO related, what is it due to?
(I still see no hockey stick or tipping point and 0.4 C/century is pretty slow even without David Archibald’s data to consider.)

Alan Simpson not from Friends of the Earth
August 28, 2010 9:28 am

Bruce of Newcastle says:
August 28, 2010 at 1:41 am
Bob T @August 27, 2010 at 5:14 pm
Sorry, to be more precise, I see a beautiful empirical 61 year signal in the ERSST.v3b (which I hitherto have linked to the PDO in posts – I admit a bias to empiricity).
Forgive my ignorance, if the 61 year signal is not PDO related, what is it due to?
(I still see no hockey stick or tipping point and 0.4 C/century is pretty slow even without David Archibald’s data to consider.)
More to the point, how much of this, ( insignificant), change is the product of human activity?
Unless you add a big green thumb print on the scale, this data is completely worthless.

kfg
August 28, 2010 7:53 pm

The definition of “data” that Jeff is using is the plural of “datum.” Under this definition Imaginary Data representing Opinion is not an element of the set Data unless the quantity being measured is the answer to the survey question, “Penny for your thoughts?” Nor is the colloquial phrase, “What does the data show?” commutative; that is to say that it returns a different result from the phrase, “Show me the data.”

August 29, 2010 3:11 am

Bruce of Newcastle says: “Sorry, to be more precise, I see a beautiful empirical 61 year signal in the ERSST.v3b (which I hitherto have linked to the PDO in posts – I admit a bias to empiricity).”
Sorry for not replying earlier. I got sidetracked with the Lee and McPhaden paper.
If you assume the North Atlantic represents 15% of the global ocean surface area, and if you subtract the North Atlantic SST anomalies from the Global SST anomalies, the Global SST anomalies lose most of the multidecadal variability:
http://i33.tinypic.com/25r0z1x.jpg
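One way to formalize that subtraction (a sketch with made-up anomaly values; the 15% area weight is the assumption stated above): if the global mean is the area-weighted blend of the North Atlantic and the rest of the ocean, the rest-of-ocean series falls out algebraically:

```python
import numpy as np

f_na = 0.15                                  # assumed North Atlantic area share
global_anom = np.array([0.10, 0.25, 0.05])   # global SST anomalies (made up)
na_anom = np.array([0.40, 1.20, -0.10])      # North Atlantic anomalies (made up)

# If global = f_na * na + (1 - f_na) * rest, then:
rest = (global_anom - f_na * na_anom) / (1 - f_na)
```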

August 29, 2010 3:14 am

Steven Mosher says: “I’m currently working through HADSST2 over at the blog.”
Steve, please link your blog to your name via “website” field in comment section. I do stop by regularly.

August 29, 2010 3:25 am

Not sure if this will come up as a duplicate. Ran into a system problem when I tried to post it.
Genghis says: “Yes, they raise air temperatures. Air temperatures are no more than rounding errors compared to ocean temps.”
They also raise Sea Surface Temperatures through changes in Atmospheric Circulation. Refer to Wang (2005):
http://www.aoml.noaa.gov/phod/docs/Wang_Hadley_Camera.pdf
And refer to Trenberth (2002):
http://www.cgd.ucar.edu/cas/papers/2000JD000298.pdf
You continued, “Which was partially my unstated point. The global average temperature should be extremely stable and any anomalies (over relatively short periods) are indicative of our measurement errors.”
Global surface temperatures do rise and fall in response to ENSO events. How is the measurement of those variations “indicative of our measurement errors?”