At NASA’s Climate 365, there is an interesting story posted with this statement and a graph:
Some say scientists can’t agree on Earth’s temperature changes
Each year, four international science institutions compile temperature data from thousands of stations around the world and make independent judgments about whether the year was warmer or cooler than average. “The official records vary slightly because of subtle differences in the way we analyze the data,” said Reto Ruedy, climate scientist at NASA’s Goddard Institute for Space Studies. “But they also agree extraordinarily well.”
All four records show peaks and valleys in sync with each other. All show rapid warming in the past few decades. All show the last decade has been the warmest on record.
In sync? Weellll, not quite. Japan apparently hasn’t ‘got their mind right’ yet, as the graph shows:
Here is where it gets interesting. Note the purple line after the year 2000.
The Japanese data line in purple runs about 0.25 degrees cooler than the NASA, NOAA, and Met Office data sets after the year 2000. That is partly due to the anomaly baselines chosen by the different agencies, as the two comparison graphs below illustrate:
Source: http://ds.data.jma.go.jp/tcc/tcc/news/press_20120202.pdf
NASA GISS uses a 1951-1980 average for the anomaly baseline, while Japan’s Meteorological Agency uses a 1981-2010 baseline; that explains the offset difference between 0.48 and ~0.23 C. However, it doesn’t explain the divergence when all of the data are plotted together on the same 1951-1980 baseline NASA uses, which is explained in more detail at the link provided in the NASA 365 post to NASA’s Earth Observatory study here:
Source: http://earthobservatory.nasa.gov/IOTD/view.php?id=80167
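To see why the baseline choice alone can only shift a curve up or down, never change its shape, here is a minimal sketch. The temperature series and all numbers in it are made up for illustration; it is not any agency’s actual data or code:

```python
import numpy as np

# Hypothetical illustration: re-baselining an anomaly series only shifts it
# by a constant, so a different baseline cannot create or remove a divergence.
rng = np.random.default_rng(0)
years = np.arange(1880, 2013)
# made-up global temperature series (deg C): a small trend plus noise
temps = 14.0 + 0.005 * (years - 1880) + 0.1 * rng.standard_normal(years.size)

def anomalies(temps, years, start, end):
    """Anomalies relative to the mean over the years [start, end]."""
    base = (years >= start) & (years <= end)
    return temps - temps[base].mean()

a_giss = anomalies(temps, years, 1951, 1980)  # NASA GISS-style baseline
a_jma = anomalies(temps, years, 1981, 2010)   # JMA-style baseline

# The two versions differ by a constant offset at every point...
offset = a_giss - a_jma
assert np.allclose(offset, offset[0])
# ...so their shapes (year-to-year differences) are identical.
assert np.allclose(np.diff(a_giss), np.diff(a_jma))
```

So a baseline difference moves the purple line bodily up or down; it cannot make it bend away from the others after 2000.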
In that EO story they explain:
The map at the top depicts temperature anomalies, or changes, by region in 2012; it does not show absolute temperature. Reds and blues show how much warmer or cooler each area was in 2012 compared to an averaged base period from 1951–1980. For more explanation of how the analysis works, read World of Change: Global Temperatures.
The justification for using the outdated 1951-1980 baseline is humorous, bold mine:
The data set begins in 1880 because observations did not have sufficient global coverage prior to that time. The period of 1951-1980 was chosen largely because the U.S. National Weather Service uses a three-decade period to define “normal” or average temperature. The GISS temperature analysis effort began around 1980, so the most recent 30 years was 1951-1980. It is also a period when many of today’s adults grew up, so it is a common reference that many people can remember.
So, the choice seems to be more about feeling than hard science, kind of like the time when Jim Hansen and his sponsor Senator Tim Wirth turned off the air conditioning in the Senate hearing room in June 1988 (to make it feel hotter) when they first tried to sell the global warming issue:
But, back to the issue at hand. The baseline difference doesn’t explain the divergence.
Perhaps it has to do with all of the adjustments NOAA and GISS make, perhaps it is a difference in methodology in computing the global surface average and then the anomaly post 2000. Perhaps it has to do with sea surface temperature, which Japan’s Met agency is very big on, but does differently. A hint comes in this process explanation seen here:
http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/explanation.html
Global Average Surface Temperature Anomalies
JMA estimates global temperature anomalies using data combined not only over land but also over ocean areas. The land part of the combined data for the period before 2000 consists of GHCN (Global Historical Climatology Network) information provided by NCDC (the U.S.A.’s National Climatic Data Center), while that for the period after 2001 consists of CLIMAT messages archived at JMA. The oceanic part of the combined data consists of JMA’s own long-term sea surface temperature analysis data, known as COBE-SST (see the articles in TCC News No.1 and this report).
The procedure for estimating the global mean temperature anomaly is outlined below.
1) An average is obtained for monthly-mean temperature anomalies against the 1971-2000 baseline over land in each 5° x 5° grid box worldwide.
2) An average is obtained for monthly mean sea surface temperature anomalies against the 1971-2000 baseline in each 5° x 5° grid box worldwide in which at least one in-situ observation exists.
3) An average is obtained for the values in 1) and 2) according to the land-to-ocean ratio for each grid box.
4) Monthly mean global temperature anomaly is obtained by averaging the anomalies of all the grid boxes weighted with the area of the grid box.
5) Annual and seasonal mean global temperature anomalies are obtained by averaging monthly-mean global temperature anomalies.
6) The baseline period is adjusted to 1981-2010.
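The grid-box averaging in steps 3) through 5) above can be sketched as follows. The 5-degree grid, the cos(latitude) area proxy, and the anomaly values here are all invented for illustration; this is not JMA’s actual code:

```python
import numpy as np

# A minimal sketch of steps 3)-5): average anomalies over a 5 x 5 degree
# grid, weighting each box by its area. The cos(latitude) factor stands in
# for grid-box area, since boxes shrink toward the poles.
lat = np.arange(-87.5, 90.0, 5.0)   # 36 latitude band centers
lon = np.arange(2.5, 360.0, 5.0)    # 72 longitude band centers
rng = np.random.default_rng(1)
anom = rng.normal(0.3, 0.5, (lat.size, lon.size))  # fake monthly anomalies

# Area weight for each box, proportional to cos(latitude of its center).
w = np.repeat(np.cos(np.radians(lat))[:, None], lon.size, axis=1)

# Step 4): global mean anomaly = area-weighted average over all boxes.
global_anom = float((anom * w).sum() / w.sum())

# Step 5): an annual mean would then just average twelve such monthly values.
```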
Note what I highlighted in red:
…for the period after 2001 consists of CLIMAT messages archived at JMA
That along with:
The oceanic part of the combined data consists of JMA’s own long-term sea surface temperature analysis data, known as COBE-SST
is very telling, because it suggests that Japan is using an entirely different method for both land and sea data. For the post-2001 land data, it suggests they use the CLIMAT data as is, rather than the “value added” processing that NCDC/NOAA and NASA GISS do. The Met Office gets the NCDC/NOAA data already pre-processed with the GHCN3 algorithms. NASA GISS deconstructs the data and then applies their own set of sausage-factory adjustments, which is why their anomaly is often the highest of all the data sets.
Prior to 2001, Japan’s Met Agency uses the GHCN data, which is pre-processed and adjusted through another sausage recipe pioneered by Dr. Thomas Peterson at NCDC.
The land part of the combined data for the period before 2000 consists of GHCN (Global Historical Climatology Network) information provided by NCDC
A good example of the GHCN sausage is Darwin, Australia, as analysed by Willis Eschenbach:
Above: GHCN homogeneity adjustments to Darwin Airport combined record
So, it appears that Japan’s Meteorological agency is using adjusted GHCN data up to the year 2000, and from 2001 they are using the CLIMAT report data as is, without adjustments. To me, this clearly explains the divergence when you look at the NASA plot magnified and note when the divergence starts. The annotation marks in magenta are mine:
If anyone ever needed the clearest example of how NOAA and NASA’s post facto adjustments to the surface temperature record increase the temperature, this is it.
Now, does anyone want to bet that the activist scientists at NOAA/NCDC (Peterson) and NASA (Hansen) start lobbying Japan to change their methodology to be like theirs?
After all, the scientists in Japan “need to get their mind right” if they are going to be able to claim “scientists agree on Earth’s temperature changes”, when right now they clearly don’t.
P.S.
BTW if anyone wants to analyze the Japanese data, here is the source for it:
http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/map/download.html
It is gridded, and I don’t have software handy at the moment to work with gridded data, but some other readers might.
UPDATE: Tim Channon at Tallbloke’s has plotted the gridded data and offers a graph, see here: http://tallbloke.wordpress.com/2013/02/01/jmas-global-surface-temperature-gridded-first-look/





The Japanese are an honest and honourable people and do not want to participate in the criminal behaviour of NOAA, Met Office, GISS and NCDC.
The inherent integrity of the Japanese will prevent them succumbing to pressure from the above-mentioned group, who wish for a clean sweep of ‘consensus’.
The better method, of course, is not to grid the world at all, or to grid at a finer level. Errr.. just krige it.
Kriging is not necessarily justified in this case, as the underlying assumption of stationarity is clearly violated (that in fact being the point of it all). I would also be dubious about the secondary assumptions being satisfied. Kriging basically assumes a smooth landscape and by its nature will almost always underestimate peaks and valleys. As is always the case with smooth interpolant functions, actually.
Gridding the world per se is not necessarily a better answer, but adaptive tiling is not a silly thing to use. The basic issue is information theoretic — how to use the information you have most effectively without assigning it more weight and accuracy than the information at hand actually justifies. That latter bit is the tricky part. Kriging temperatures from a sparse set of stations in Antarctica is, for example, just silly. Kriging won’t fill in holes that span significant structure any more than anything else will, and nothing makes up for inadequate sampling. At best it leaves you (still) with large error bars.
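A toy illustration of that last point, using plain linear interpolation rather than kriging (any smooth interpolant behaves the same way here): stations that straddle but miss a peak can only reconstruct a value below it. All numbers are invented:

```python
import numpy as np

# Toy illustration: samples that straddle but miss a peak force any smooth
# interpolant to underestimate it.
x = np.linspace(0.0, 10.0, 1001)
field = np.exp(-((x - 5.0) ** 2) / 0.5)   # true field, with its peak at x = 5

stations = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])  # sparse network misses x = 5
obs = np.exp(-((stations - 5.0) ** 2) / 0.5)

# Linear interpolation between stations -- the simplest smooth interpolant.
recon = np.interp(x, stations, obs)

# The reconstructed peak is far below the true one. Fancier smooth
# interpolants (splines, kriging) soften the effect but cannot recover
# structure that was never sampled.
assert recon.max() < field.max()
```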
But better still is to use satellites and actually cover the globe, or lay down a systematic grid. In the meantime, I continue to be amused by global temperature (anomaly) estimates that invariably do not include an estimate of the probable error. Often estimates made to several decimal places (but no error). We actually take points off student papers in physics when they put down three or four digit answers when the answer is only accurate to one or two, and we don’t take kindly to figures that plot experimental numbers without any sort of error bar.
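For what it’s worth, the standard error being asked for is a one-liner. This sketch uses made-up station anomalies and the (generous) assumption that stations are independent; correlated stations would give a larger error still:

```python
import numpy as np

# Back-of-envelope sketch of the missing error bar: 500 made-up station
# anomalies with 0.5 C scatter. Independence between stations is assumed,
# which is generous -- correlated stations would give a larger error.
rng = np.random.default_rng(2)
station_anoms = rng.normal(0.45, 0.5, 500)

mean = station_anoms.mean()
sem = station_anoms.std(ddof=1) / np.sqrt(station_anoms.size)

# With an uncertainty of a few hundredths of a degree, quoting the mean to
# three decimal places is not justified; two is already optimistic.
print(f"global anomaly: {mean:.2f} +/- {sem:.2f} C")
```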
Curiously, in climate science it seems to be the other way around. Plot not the measured quantity, but the “anomaly” compared to some presumably known baseline. Plot it without any error bars, even though the methodology used to compute it is rife with assumptions, corrections, interpolations, adjustments, and the blood of a white chicken sacrificed with a black-handled athame. When the error bars are (rarely) shown, do not let the fact that they often include the null hypothesis stop you from asserting otherwise. It’s like they actually study the book How to Lie with Statistics, and not in a good way…
rgb
Steve Mosher;
Bottom line, you dont need base periods,
>>>>>>>>>>>>>>>>>.
What you do need is justification for averaging anomalies from very cold regimes with anomalies from very warm regimes. I’ve brought this up with several NASA scientists, none of whom have ever answered. Perhaps you could explain how this is justified?
Jon says:
January 31, 2013 at 10:45 am
” … what is interesting is that this warming is strongly correlated with the rate of movement of the North Pole.”
OMG! Global warming causes movement of the North magnetic pole. It’s worse than we thought!
rgbatduke;
It would actually be very interesting to see just the post 1988 tail of this data plotted against the satellite data.
>>>>>>>>>>>>>>
Why 1988? In any event, here’s RSS and UAH versus Hadcrut4 since 1988. Note that I’ve added an offset of -0.2 to the HadCrut data to make starting baselines closer.
http://www.woodfortrees.org/plot/uah/from:1988/to:2012/mean:6/plot/rss/from:1988/to:2012/mean:6/plot/hadcrut4gl/from:1988/to:2012/mean:6/offset:-0.2
That “365” graph is fiction.
Note the same source data (up to 1995) from
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html
Scroll down for the USHCNv1 graphs. The adjustment methodology is clearly stated (read the Karl et al. paper from 1986). Further down you see the plot of “all” the corrections. Compare the USHCNv1 plots to the “365” chart. It’s obvious the “365” chart has had the 1910 to 1940 era data altered to be cooler and the 1975 to 1995 data altered to be warmer, thus severely increasing the apparent warming trend.
Sometime in late 2012 the GIStemp data were given a major “adjustment” that produced a large step cooling in the pre-1940 data for about 10 percent of the reporting stations. Also, large numbers of stations had post-2006 data removed completely from the timeline. Now the GIStemp servers have been down for almost a month.
If you want good data, then look at the satellite data from 1979. Look at the USCRN data. Look at the Antarctic surface stations Amundsen-Scott, Vostok, Halley and Davis since 1957
davidmhoffer says:
January 31, 2013 at 1:12 pm
Or, rather, very dry and very wet regimes, perhaps? This metric seems implicitly to weight desert areas most heavily.
Assuming that it would be reasonable to use the most ‘reliable’ data for land temperatures I prefer to use this plot as a guideline to global temperature trends :-).
http://www.woodfortrees.org/plot/best/to:1980/plot/rss-land/plot/uah-land
Comment from Bishop Hill reader.
Oh them Japanese – they don’t cook fish and they don’t cook data.
Jan 31, 2013 at 4:29 PM | dearieme
http://bishophill.squarespace.com/blog/2013/1/31/japanese-cool.html
I should have added – this is the non-hockey stick way of looking at things and I justify it on the same grounds as the original hockey stick itself (for combining sources and trimming series).
January 31, 2013 at 9:07 am | Doug Huffman
————————
The state of ‘climate science’ … http://youtu.be/gQQ7BoadSCk
The data agree perfectly. In the fulness of time. At the appropriate juncture. A task of such analytical delicacy that the data must be left with One Of Us.
Sir Humphrey Appleby
St Dymphna’s Home For The Elderly Deranged
Bart says:
January 31, 2013 at 2:13 pm
davidmhoffer says:
January 31, 2013 at 1:12 pm
Or, rather, very dry and very wet regimes, perhaps? This metric seems implicitly to weight desert areas most heavily.
>>>>>>>>>>>>>>>>.
Agreed. Cold regions become over-represented and warm regions under-represented. Same with dry regions versus moist regions. I just don’t get how averaging anomalies can be justified, and the list of researchers I have posed the question to without response is growing. I’m going to start keeping track.
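One way to make the cold-dry versus warm-moist point concrete is moist enthalpy: the same +1 C anomaly implies very different energy changes in different regimes. The sketch below uses textbook constants and the Tetens saturation formula, and assumes 50% relative humidity in both cases; the specific numbers are illustrative only:

```python
import math

# Rough sketch: the energy implied by a +1 C anomaly depends strongly on
# how warm and how moist the air already is. Constants are textbook values;
# the 50% relative humidity used below is an assumption for illustration.
CP = 1005.0   # specific heat of dry air, J/(kg K)
LV = 2.5e6    # latent heat of vaporization, J/kg
P = 101.3     # surface pressure, kPa

def sat_vapor_pressure(t_c):
    """Tetens approximation for saturation vapor pressure (kPa), t in deg C."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def moist_enthalpy(t_c, rh):
    """Approximate moist enthalpy per kg of dry air (J/kg)."""
    e = rh * sat_vapor_pressure(t_c)
    q = 0.622 * e / (P - e)   # water vapor mixing ratio
    return CP * t_c + LV * q

# Energy change for a +1 C anomaly, both at the assumed 50% relative humidity:
dh_cold = moist_enthalpy(-29.0, 0.5) - moist_enthalpy(-30.0, 0.5)  # Arctic-like
dh_warm = moist_enthalpy(31.0, 0.5) - moist_enthalpy(30.0, 0.5)    # tropics-like

# dh_warm comes out roughly three times dh_cold: equal anomalies, unequal energy.
```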
The amazing thing about the Japanese graph (the second in the article) from the JMA, with that nice linear fit of 0.68 deg/century, is that the slope is so similar back to 1880, way before CO2 increased from anthropogenic sources.
Certainly there is no cause for alarm, or sign of a catastrophe looming in the last 30-40 years of increased coal and oil burning. No hockey stick. This must be, as Steve McIntyre wrote, a result of their adjustments being “somewhat primitive”, or, more likely as I would say, “somewhat more honest”.
Especially when it is so easy to see on your own smartphone weather app that temperatures from nearby stations aren’t the same.
There is no excuse for linearizing temperature in the spatial dimension when it’s wrong, and then trying to use that data to justify a multi-trillion-dollar stake in the heart of modern society.
They aren’t climate scientists, they’re climate activists.
WOW! Almost 1 degree of warming in 130 years. Pretty scary! Who cares if they agree or not at that rate?
The argument, again, is not about what happened since 1890, but about what has happened since 1975. If you plot the post-’75 anomalies and trend all four data sets, you get a big difference.
Bait and switch: that is what this graph is all about. It confuses the post-LIA recovery with the “CO2” temperature rise.
@Streetcred, thanks for your January 31, 2013 at 2:47 pm. “Any man don’t keep order, spends a night in the box.” Quite a contrast to the too common idiom of “thinking outside of the box.”
Why don’t you display satellite temperatures in parallel with this? They would show that these temperature curves are falsified. First, the late twentieth century temperatures of every one of these curves are wrong. They are cheating. And except for the Japanese, the twenty-first century temperatures are falsified too. They are given an extra height of 0.2 degrees Celsius, which is so far off that it leads to the absurdity of 2005 and 2010 both being higher than the 1998 super El Nino. In the twentieth century, the five El Nino peaks from 1980 to 1997 are of nearly equal height and line up horizontally. But in these curves they are shown climbing steps up a mountain that gains 0.2 degrees in height every twenty years. The super El Nino of 1998 that follows in reality has symmetrical dips on both sides of it. In this version you can see with the naked eye that its footing rises 0.1 degrees from one side to the other. This means an additional three degrees of warming per century.
If you think that the agreement among these temperature curves proves they can be trusted, think again. These are not independent temperature curves as claimed. As an example, I have determined that NASA and Met Office temperatures, from two sides of the ocean, were both subjected to a mysterious computer processing which had the unanticipated consequence of leaving many high spikes in these data sets. They are located in exactly the same places on both curves. Unless an explanation is forthcoming, I have to consider it part of the conspiracy to make the public believe in a non-existent global warming.
Why was it so cold in 1902 to 1912?
It was probably 0.7C warmer in 1878. Oh wait, that isn’t on the chart.
So, was it really that cold from 1902 to 1912? Did people starve from crop failures?
There was actually the longest La Nina on record (1906 to 1911) in that period, and the Santa Maria volcano went off in 1902 (followed by Novarupta in 1912, just as temperatures started going up; the next big stratospheric volcano did not occur until 1963), but was it really that cold? …
Three guesses which data set has better quality control, and your first two don’t count.
I’ve put up a post with time series of the global and hemispheric means from the gridded data at Tallbloke’s Talkshop (where I am a co-moderator and contributor):
http://tallbloke.wordpress.com/2013/02/01/jmas-global-surface-temperature-gridded-first-look/
davidmhoffer says….
So refreshing to hear someone speak of the difference between an anomaly in the dry Arctic and one at the moist equator! Whenever I see a warm anomaly in the Arctic, I just see it as heat escaping to space; oppositely, a warm temperature anomaly at the equator is heat accumulation (temporarily, anyway).
Data means things! Ignore at one’s peril!
You can’t deal with gridded data? I can, so email me if you need something specific done.
REPLY: I wrote the post from home, where I didn’t have all my tools that I have at the office, and I didn’t have time to work on it at the office due to pressing business. That’s what I was referring to. Thanks just the same. – Anthony