The Global Coverage of NCDC Merged Land + Sea Surface Temperature Data
Guest post by Bob Tisdale
The NCDC merged land+sea surface temperature anomaly data is now available through the KNMI Climate Explorer. (Many thanks to Dr. Geert Jan van Oldenborgh.)
Figure 1 compares the July 2010 temperature anomaly map of the NCDC merged land+sea surface temperature product to those of the GISS and Hadley Centre products. I’ve used the base years of 1901 to 2000 for all datasets; these are the base years NCDC uses for its data, though not for its dot-covered maps. The contour levels of the maps were set for a range of -4.0 to +4.0 deg C.
As illustrated, the NCDC does not present data over sea ice. Also, there is a sizeable area of east-central Africa without data during July 2010. And the NCDC does not present Antarctic data. The infilling methods employed by the NCDC provide greater land surface coverage than the Hadley Centre product but less coverage than GISS. The methods used by NCDC are discussed in Smith et al (2008) Improvements to NOAA’s Historical Merged Land-Ocean Surface Temperature Analysis (1880-2006), and in Smith and Reynolds (2004) Improved Extended Reconstruction of SST (1854-1997).
http://i35.tinypic.com/2uy1x6o.jpg
Figure 1
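For anyone who wants to reproduce this sort of comparison from the gridded data, here is a minimal sketch of the anomaly calculation; the synthetic fields, array shapes and variable names are illustrative assumptions of mine, not KNMI’s or NCDC’s code.

```python
# A minimal sketch (not the KNMI/NCDC code) of converting gridded absolute
# temperatures to anomalies against the 1901-2000 base period used for the
# maps in Figure 1. Array shapes and names are illustrative assumptions.
import numpy as np

years = np.arange(1880, 2011)
n_lat, n_lon = 36, 72                                  # 5-deg grid, for illustration
rng = np.random.default_rng(0)
temps = 15 + rng.normal(0, 1, (years.size, n_lat, n_lon))  # July fields, deg C

base = (years >= 1901) & (years <= 2000)               # NCDC's base years
climatology = temps[base].mean(axis=0)                 # 1901-2000 mean per grid cell
anomalies = temps - climatology                        # anomaly maps, deg C

# Contour levels matching the -4.0 to +4.0 deg C range used for the maps
levels = np.linspace(-4.0, 4.0, 17)
```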
GISS includes more Arctic surface station data than NCDC and the Hadley Centre. This can be seen in Figure 2, which compares the NCDC GHCN data, the Hadley Centre CRUTEM3 data, and the GISS land surface data with 250 km radius smoothing. GISS also includes Antarctic surface stations (not illustrated), which are not part of GHCN. And of course, as discussed in GISS Deletes Arctic And Southern Ocean Sea Surface Temperature Data, GISS extends land surface data out over the oceans to increase coverage in the Arctic and Antarctic.
http://i36.tinypic.com/e7glzq.jpg
Figure 2
The GISTEMP combined land plus sea surface temperature dataset with 250km radius smoothing is used to show how little Arctic Ocean sea surface temperature data remains in the GISS product. Refer to the bottom cell in Figure 3. The NCDC and Hadley Centre, on the other hand, include Arctic Ocean Sea Surface Temperature data during seasons with reduced sea ice.
http://i34.tinypic.com/55jcxe.jpg
Figure 3
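GISTEMP’s actual gridding is more involved than this, but the 250 km radius idea itself is simple: a grid cell counts as covered only if a station lies within that distance of it. A toy sketch follows; the station locations and grid are made up for illustration.

```python
# A toy illustration (not GISTEMP's actual code) of the 250 km radius
# criterion: a grid cell counts as covered only if at least one station
# lies within 250 km of its center. Stations and grid are made up.
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in km; inputs in degrees, lat2/lon2 may be arrays."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin(np.radians(lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

stations = np.array([[71.3, -156.8], [64.8, -147.7], [78.2, 15.6]])  # lat, lon
grid_lats = np.arange(-87.5, 90, 5.0)
grid_lons = np.arange(-177.5, 180, 5.0)

covered = np.zeros((grid_lats.size, grid_lons.size), dtype=bool)
for i, glat in enumerate(grid_lats):
    for j, glon in enumerate(grid_lons):
        d = great_circle_km(glat, glon, stations[:, 0], stations[:, 1])
        covered[i, j] = bool((d <= 250.0).any())       # the smoothing radius
```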
Figures 4 through 8 provide global coverage comparison maps for the NCDC, Hadley Centre and GISS surface temperature products from 2010 back to 1880. Januarys in 2010, 1975, 1940, 1910, and 1880 are shown. Note how the coverage decreases in the early years. The exception is the SST data presented by GISS. Keep in mind that, prior to the satellite era, the three SST datasets basically rely on a common source, ICOADS. Refer to An Overview Of Sea Surface Temperature Datasets Used In Global Temperature Products. The HADSST2 data in the Hadley Centre maps represent the locations of the SST samples, while the HADISST and ERSST.v3b datasets used by GISS and NCDC are infilled using statistical methods.
http://i37.tinypic.com/1215rag.jpg
Figure 4
####################
http://i34.tinypic.com/29yqvsl.jpg
Figure 5
####################
http://i35.tinypic.com/24dplkj.jpg
Figure 6
####################
http://i34.tinypic.com/28mixyb.jpg
Figure 7
####################
http://i33.tinypic.com/28w1bus.jpg
Figure 8
####################
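The coverage differences visible in Figures 4 through 8 can be quantified as the area-weighted fraction of grid cells holding data. Here is a minimal sketch of that calculation, using a synthetic field and cosine-of-latitude area weights.

```python
# A minimal sketch of quantifying the coverage shown in Figures 4 through 8:
# the area-weighted fraction of grid cells holding data, with each sampled
# cell weighted by the cosine of its latitude. The field here is synthetic.
import numpy as np

def coverage_fraction(field, grid_lats):
    """field: 2-D (lat, lon) array with NaN where there is no data."""
    weights = np.broadcast_to(np.cos(np.radians(grid_lats))[:, None], field.shape)
    has_data = ~np.isnan(field)
    return weights[has_data].sum() / weights.sum()

grid_lats = np.arange(-87.5, 90, 5.0)
rng = np.random.default_rng(1)
field = rng.normal(size=(grid_lats.size, 72))
field[rng.random(field.shape) < 0.6] = np.nan          # sparse early-era sampling
print(f"coverage: {coverage_fraction(field, grid_lats):.1%}")
```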
The decrease in land surface coverage is not surprising, but the NCDC uses ERSST.v3b data for the oceans, and that dataset provides complete ocean coverage even in the early years. This can be seen in Figures 9 through 13. They show the same Januarys as the maps above, but compare the NCDC merged product to the GHCN land surface data and the ERSST.v3b sea surface data from which it is built. NCDC infills much of the sea surface temperature data in the early years, so why does it then delete so much of it from the merged product? The answer follows the maps.
http://i36.tinypic.com/24b2sup.jpg
Figure 9
####################
http://i37.tinypic.com/2isd2e8.jpg
Figure 10
####################
http://i33.tinypic.com/bgbfxt.jpg
Figure 11
####################
http://i35.tinypic.com/2vcezhf.jpg
Figure 12
####################
http://i38.tinypic.com/11uuavo.jpg
Figure 13
####################
In Smith et al (2008), Improvements to NOAA’s Historical Merged Land-Ocean Surface Temperature Analysis (1880-2006), the NCDC describes why they delete data from their merged product. On page 6, under the heading of “Sampling cutoffs for large-scale averaging”, they write, “The above results show that the reconstructions can be improved in periods with sparse sampling. However, there can still be damping errors in periods with sparse sampling. Damping of large-scale averages may be reduced by eliminating poorly sampled regions because anomalies in those regions may be greatly damped. In Smith et al. (2005) error estimates were used to show that most Arctic and Antarctic anomalies are unreliable and those regions were removed from the global average computation. Here testing using the simulated data is done to find objectively when regions should be eliminated from the global average to minimize the MSE [global mean-squared error] of the average compared to the full data.” Smith et al then go on to describe the criteria for deleting the data in poorly sampled regions.
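Here is a minimal sketch of that sampling-cutoff idea: drop a region from the global average when its sampled fraction falls below a threshold, since heavily damped anomalies there would bias the mean. The 30% cutoff and the polar-band regions below are hypothetical placeholders, not NCDC’s actual criteria, which Smith et al derived objectively from simulated data.

```python
# A minimal sketch of the sampling-cutoff idea quoted above. The 30% cutoff
# and the polar-band slices are hypothetical, not NCDC's actual criteria.
import numpy as np

def global_mean_with_cutoff(field, grid_lats, region_slices, min_fraction=0.30):
    """field: 2-D (lat, lon) anomalies with NaN where unsampled."""
    field = field.copy()
    for sl in region_slices:
        if np.mean(~np.isnan(field[sl])) < min_fraction:   # poorly sampled
            field[sl] = np.nan                             # eliminate region
    w = np.cos(np.radians(grid_lats))[:, None] * np.ones_like(field)
    ok = ~np.isnan(field)
    return (field[ok] * w[ok]).sum() / w[ok].sum()

rng = np.random.default_rng(2)
anoms = rng.normal(0, 0.5, (36, 72))
anoms[rng.random(anoms.shape) < 0.5] = np.nan
lats = np.arange(-87.5, 90, 5.0)
polar_bands = [np.s_[30:, :], np.s_[:6, :]]                # hypothetical regions
print(round(global_mean_with_cutoff(anoms, lats, polar_bands), 3))
```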
Of course, the question that comes to mind is: what impact does deleting all of that SST data have on the long-term trends? Answer: very little. Figures 14 and 15 compare SST data for the NCDC merged product and the source SST data in the North and South Pacific. The coordinates (illustrated on the graphs) were chosen to capture large portions of those ocean subsets while making sure they were free of influences from land surface data and sea ice. As shown, the NCDC merged data become much more volatile during periods of reduced coverage, but there is little impact on the long-term trends.
http://i38.tinypic.com/11h3vk1.jpg
Figure 14
####################
http://i37.tinypic.com/1juvdy.jpg
Figure 15
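For readers who want to repeat this sort of check, here is a minimal sketch of the comparison: average two gridded SST datasets over a latitude-longitude box with cosine weighting, then fit least-squares trends. The box coordinates and the synthetic series stand in for the Pacific subsets shown on the graphs.

```python
# A minimal sketch of the check behind Figures 14 and 15: box-average two
# SST datasets (cosine-weighted), then fit least-squares trends. The box
# and the synthetic series are placeholders for the real Pacific data.
import numpy as np

def box_average(field, lats, lons, lat_range, lon_range):
    """field: (time, lat, lon) anomalies with NaN for missing cells."""
    li = (lats >= lat_range[0]) & (lats <= lat_range[1])
    lj = (lons >= lon_range[0]) & (lons <= lon_range[1])
    sub = field[:, li][:, :, lj]
    missing = np.isnan(sub)
    w = np.cos(np.radians(lats[li]))[None, :, None] * np.ones_like(sub)
    w[missing] = 0.0                                   # missing cells get no weight
    sub = np.where(missing, 0.0, sub)
    return (sub * w).sum(axis=(1, 2)) / w.sum(axis=(1, 2))

def trend_per_decade(years, series):
    return np.polyfit(years, series, 1)[0] * 10.0      # deg C per decade

years = np.arange(1880, 2011, dtype=float)
lats = np.arange(-87.5, 90, 5.0)
lons = np.arange(-177.5, 180, 5.0)
rng = np.random.default_rng(3)
merged = 0.005 * (years - years[0])[:, None, None] + rng.normal(0, 0.3, (years.size, 36, 72))
source = merged + rng.normal(0, 0.1, merged.shape)

for name, data in (("merged", merged), ("source", source)):
    ts = box_average(data, lats, lons, (5, 45), (-175, -125))  # a North Pacific box
    print(name, round(trend_per_decade(years, ts), 4))
```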
Makes one wonder, doesn’t it?
SOURCE
The data and maps are available through the KNMI Climate Explorer:
http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere
Bill Illis says: I ran into the paper because of a link at the LA Times back to my website:
http://latimesblogs.latimes.com/greenspace/2010/08/climate-change-el-nino-southern-california-rain.html
I left a couple of comments that haven’t been moderated yet. Basically, NINO3.4 SST anomalies have decadal variations…
http://i43.tinypic.com/33agh3c.jpg
…and looking at the last three decades is not going to account for this.
Bill Illis says: “I’ve always used the long-term trend in the Nino 3.4, which is about 0.006C per decade, or an impact on global temperatures of 0.0004C per decade (in other words, ZERO; the other regions are slightly different but still zero in my opinion).”
That is one of the best ways to do it. Although people here are claiming longer is better, that is NOT always the case. To find out what the overall trend is, it’s best to find all the high water marks or low water marks and base anomalies off of those. You can kind of do year to year if you can find a good year that corresponds to the current year, but this is insanely difficult to compute, statistically speaking, with sine waves (oceanic cycles and solar cycles) and La Niñas affecting year-to-year temperatures for the globe.
It’s best just to go from wave to wave, so to speak. So, in order to find a trend with the data we have, you would probably use 1880, 1940, and 1998, or something similar to that. Those are rough estimates of the high points in temperatures based on the long-term ocean cycles.
So, in other words, you would not even use data from the last 12 years to figure out what the climate is doing. You would also not adjust the data at all in this approach, and so you would use the 1940s-were-hotter-than-the-1990s approach…
I can go on about why you would do this, but the basis behind normalizing the data is to take out short-term anomalies to allow others to see the big picture, which is not wrong really; it’s just the wrong method to compute the actual warming of the system.
El Niños or La Niñas are also appropriate, as you are graphing high/low points in the data and finding out what the overall trend in the data is on average. Personally, I would normally use low points if you are trying to measure warming, because those tend to stick closer to the point, so to speak. But as long as you are cancelling out the oceanic/solar/La Niña trends somehow, you are doing a good job statistically speaking.
But overall, there is not very much we can figure out climate-wise, because we have 60-year-long ocean cycles and only around 150 years of reliable data for the globe.
The reason 1975 is a bad starting year is that you are finding the lowest point in the data set and graphing to a higher point (today). It is not completely wrong, so to speak, but in math this would be worthless and thrown out as garbage, since you have not eliminated outside parameters such as solar cycles and ocean cycles. Saying that you are graphing “the start of GW” is circular logic, in that it would show up in wave-to-wave comparisons if it existed as stated. Or, put simply, it would exist in any appropriate graph of temperature trends as a “large artifact”. This is not the case, however.
But don’t take my word for it: do an honest sketch using the data, like I have many times, and figure out R for CO2 + temperature. My method of high-to-high or low-to-low will give you an accurate climate reading, and you might be surprised at how quickly your beliefs are squished.
Remember, you have to use unadjusted data for this, as adjusted data defeats the purpose of figuring out the human aspect of climate change.
Now, this does not say GW is not AGW right now; I am just saying it’s not CO2-caused. Simple experiments like that will get you your proof.
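For anyone who wants to try the high-water-mark method described in this comment, here is a minimal sketch; the peak years are the commenter’s rough estimates, and the anomaly values are made-up placeholders, not real observations.

```python
# A minimal sketch of the high-water-mark method described above: fit a
# trend through peak years only (1880, 1940 and 1998 are the commenter's
# rough estimates), so the ~60-year ocean cycle largely cancels out. The
# anomaly values are made-up placeholders, not real observations.
import numpy as np

peak_years = np.array([1880.0, 1940.0, 1998.0])
peak_anoms = np.array([-0.20, 0.10, 0.55])         # hypothetical anomalies, deg C

slope, intercept = np.polyfit(peak_years, peak_anoms, 1)
print(f"peak-to-peak trend: {slope * 100:.2f} deg C per century")
```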
Bob,
I’m currently working through HADSST2 over at the blog. Will gladly move on to the other datasets once the system is all done; it’s basically less than 10 lines of code once you get the right data structures.
Bruce of Newcastle says:
August 28, 2010 at 1:41 am
Bob T @August 27, 2010 at 5:14 pm
Sorry, to be more precise, I see a beautiful empirical 61 year signal in the ERSST.v3b (which I hitherto have linked to the PDO in posts – I admit a bias to empiricity).
Forgive my ignorance, if the 61 year signal is not PDO related, what is it due to?
(I still see no hockey stick or tipping point and 0.4 C/century is pretty slow even without David Archibald’s data to consider.)
More to the point, how much of this (insignificant) change is the product of human activity?
Unless you add a big green thumb print on the scale, this data is completely worthless.
The definition of “data” that Jeff is using is the plural of “datum.” Under this definition Imaginary Data representing Opinion is not an element of the set Data unless the quantity being measured is the answer to the survey question, “Penny for your thoughts?” Nor is the colloquial phrase, “What does the data show?” commutative; that is to say that it returns a different result from the phrase, “Show me the data.”
Bruce of Newcastle says: “Sorry, to be more precise, I see a beautiful empirical 61 year signal in the ERSST.v3b (which I hitherto have linked to the PDO in posts – I admit a bias to empiricity).”
Sorry for not replying earlier. I got sidetracked with the Lee and McPhaden paper.
If you assume the North Atlantic represents 15% of the global ocean surface area, and if you subtract the North Atlantic SST anomalies from the global SST anomalies, the global SST anomalies lose most of their multidecadal variability:
http://i33.tinypic.com/25r0z1x.jpg
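Here is a minimal sketch of that calculation, assuming the 15% area share; both input series are synthetic stand-ins for the real data.

```python
# A minimal sketch of the calculation described above: remove the North
# Atlantic's contribution, assumed to be 15% of the global ocean surface
# area, from the global SST anomalies. Both input series are synthetic
# stand-ins for the real data.
import numpy as np

years = np.arange(1880, 2011, dtype=float)
rng = np.random.default_rng(4)
amo_like = 0.2 * np.sin(2 * np.pi * (years - 1915) / 61)   # ~61-year signal
trend = 0.005 * (years - 1880) / 10                        # slow background warming
north_atlantic = trend + amo_like + rng.normal(0, 0.05, years.size)
rest = trend + rng.normal(0, 0.05, years.size)             # no multidecadal cycle
global_sst = 0.15 * north_atlantic + 0.85 * rest           # area-weighted blend

# Subtract the weighted North Atlantic term and rescale to the other 85%:
recovered_rest = (global_sst - 0.15 * north_atlantic) / 0.85
```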
Steven Mosher says: “I’m currently working through HADSST2 over at the blog.”
Steve, please link your blog to your name via the “website” field in the comment section. I do stop by regularly.
Genghis says: “Yes, they raise air temperatures. Air temperatures are no more than rounding errors compared to ocean temps.”
They also raise Sea Surface Temperatures through changes in Atmospheric Circulation. Refer to Wang (2005):
http://www.aoml.noaa.gov/phod/docs/Wang_Hadley_Camera.pdf
And refer to Trenberth (2002):
http://www.cgd.ucar.edu/cas/papers/2000JD000298.pdf
You continued, “Which was partially my unstated point. The global average temperature should be extremely stable and any anomalies (over relatively short periods) are indicative of our measurement errors.”
Global surface temperatures do rise and fall in response to ENSO events. How is the measurement of those variations “indicative of our measurement errors?”