At NASA’s Climate 365, there is an interesting story posted with this statement and a graph:
Some say scientists can’t agree on Earth’s temperature changes
Each year, four international science institutions compile temperature data from thousands of stations around the world and make independent judgments about whether the year was warmer or cooler than average. “The official records vary slightly because of subtle differences in the way we analyze the data,” said Reto Ruedy, climate scientist at NASA’s Goddard Institute for Space Studies. “But they also agree extraordinarily well.”
All four records show peaks and valleys in sync with each other. All show rapid warming in the past few decades. All show the last decade has been the warmest on record.
In sync? Weellll, not quite. Japan apparently hasn’t ‘got their mind right‘ yet as the graph shows:
Here is where it gets interesting. Note the purple line after the year 2000.
The Japanese data line in purple runs about 0.25 °C cooler than the NASA, NOAA, and Met Office data sets after the year 2000. That is partly due to the anomaly baselines chosen by the different agencies, as the two comparison graphs shown below illustrate:
Source: http://ds.data.jma.go.jp/tcc/tcc/news/press_20120202.pdf
NASA GISS uses a 1951-1980 average for the anomaly baseline, while the Japan Meteorological Agency uses a 1981-2010 baseline; that explains the offset difference between 0.48 and ~0.23 °C. However, it doesn't explain the divergence when all of the data are plotted together using the same 1951-1980 baseline NASA uses, which is explained in more detail at the link provided in the NASA 365 post to NASA's Earth Observatory study here:
Source: http://earthobservatory.nasa.gov/IOTD/view.php?id=80167
In that EO story they explain:
The map at the top depicts temperature anomalies, or changes, by region in 2012; it does not show absolute temperature. Reds and blues show how much warmer or cooler each area was in 2012 compared to an averaged base period from 1951–1980. For more explanation of how the analysis works, read World of Change: Global Temperatures.
The justification for using the outdated 1951-1980 baseline is humorous, bold mine:
The data set begins in 1880 because observations did not have sufficient global coverage prior to that time. The period of 1951-1980 was chosen largely because the U.S. National Weather Service uses a three-decade period to define “normal” or average temperature. The GISS temperature analysis effort began around 1980, so the most recent 30 years was 1951-1980. It is also a period when many of today’s adults grew up, so it is a common reference that many people can remember.
So, the choice seems to be more about feeling than hard science, kind of like the time when Jim Hansen and his sponsor Senator Tim Wirth turned off the air conditioning in the Senate hearing room in June 1988 (to make it feel hotter) when they first tried to sell the global warming issue:
But, back to the issue at hand. The baseline difference doesn’t explain the divergence.
Perhaps it has to do with all of the adjustments NOAA and GISS make, perhaps it is a difference in methodology in computing the global surface average and then the anomaly post 2000. Perhaps it has to do with sea surface temperature, which Japan’s Met agency is very big on, but does differently. A hint comes in this process explanation seen here:
http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/explanation.html
Global Average Surface Temperature Anomalies
JMA estimates global temperature anomalies using data combined not only over land but also over ocean areas. The land part of the combined data for the period before 2000 consists of GHCN (Global Historical Climatology Network) information provided by NCDC (the U.S.A.’s National Climatic Data Center), while that for the period after 2001 consists of CLIMAT messages archived at JMA. The oceanic part of the combined data consists of JMA’s own long-term sea surface temperature analysis data, known as COBE-SST (see the articles in TCC News No.1 and this report).
The procedure for estimating the global mean temperature anomaly is outlined below.
1) An average is obtained for monthly-mean temperature anomalies against the 1971-2000 baseline over land in each 5° x 5° grid box worldwide.
2) An average is obtained for monthly mean sea surface temperature anomalies against the 1971-2000 baseline in each 5° x 5° grid box worldwide in which at least one in-situ observation exists.
3) An average is obtained for the values in 1) and 2) according to the land-to-ocean ratio for each grid box.
4) Monthly mean global temperature anomaly is obtained by averaging the anomalies of all the grid boxes weighted with the area of the grid box.
5) Annual and seasonal mean global temperature anomalies are obtained by averaging monthly-mean global temperature anomalies.
6) The baseline period is adjusted to 1981-2010.
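Steps 1) through 4) amount to a land-to-ocean blend followed by an area-weighted global average. Here is a minimal sketch of that arithmetic, using synthetic anomaly grids and made-up land fractions; this is my own illustration, not JMA's code. For 5° x 5° boxes, grid-box area is proportional to the cosine of the central latitude.

```python
import numpy as np

# Synthetic 5-degree grids: 36 latitude bands x 72 longitude columns.
rng = np.random.default_rng(0)
land_anom = rng.normal(0.3, 0.5, (36, 72))   # step 1: land anomalies per box
sst_anom = rng.normal(0.2, 0.3, (36, 72))    # step 2: SST anomalies per box
land_frac = rng.uniform(0, 1, (36, 72))      # made-up land-to-ocean ratios

# Step 3: blend land and ocean anomalies by land fraction in each box.
blended = land_frac * land_anom + (1 - land_frac) * sst_anom

# Step 4: area-weight each box; area ~ cos(latitude) for fixed-size boxes.
lat_centers = np.arange(-87.5, 90, 5)                       # 36 band centers
weights = np.cos(np.radians(lat_centers))[:, None] * np.ones((1, 72))
monthly_global = np.average(blended, weights=weights)

print(round(float(monthly_global), 3))
```

Steps 5) and 6) then just average the monthly values and shift the baseline, both of which are plain arithmetic on the result above.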
Note what I highlighted in red:
…for the period after 2001 consists of CLIMAT messages archived at JMA
That along with:
The oceanic part of the combined data consists of JMA’s own long-term sea surface temperature analysis data, known as COBE-SST
Is very telling, because it suggests that Japan is using an entirely different method for both land and sea data. For the post 2001 land data, it suggests they use the CLIMAT data as is, rather than the “value added” processing that NCDC/NOAA and NASA GISS do. The Met Office gets the NCDC/NOAA data already pre-processed with the GHCN3 algorithms. NASA GISS deconstructs the data then applies their own set of sausage factory adjustments, which is why their anomaly is often the highest of all the data sets.
Prior to 2001, Japan's Met Agency uses the GHCN data, which is pre-processed and adjusted through another sausage recipe pioneered by Dr. Thomas Peterson at NCDC.
The land part of the combined data for the period before 2000 consists of GHCN (Global Historical Climatology Network) information provided by NCDC
A good example of the GHCN sausage is Darwin, Australia, as analysed by Willis Eschenbach:
Above: GHCN homogeneity adjustments to Darwin Airport combined record
So, it appears that Japan’s Meteorological agency is using adjusted GHCN data up to the year 2000, and from 2001 they are using the CLIMAT report data as is, without adjustments. To me, this clearly explains the divergence when you look at the NASA plot magnified and note when the divergence starts. The annotation marks in magenta are mine:
If anyone ever needed the clearest example ever of how NOAA and NASA’s post facto adjustments to the surface temperature record increase the temperature, this is it.
Now, does anyone want to bet that the activist scientists at NOAA/NCDC (Peterson) and NASA (Hansen) start lobbying Japan to change their methodology to be like theirs?
After all, the scientists in Japan “need to get their mind right” if they are going to be able to claim “scientists agree on Earth’s temperature changes”, when right now they clearly don’t.
P.S.
BTW if anyone wants to analyze the Japanese data, here is the source for it:
http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/map/download.html
It is gridded, and I don’t have software handy at the moment to work with gridded data, but some other readers might.
UPDATE: Tim Channon at Tallbloke’s has plotted the gridded data and offers a graph, see here: http://tallbloke.wordpress.com/2013/02/01/jmas-global-surface-temperature-gridded-first-look/
Japan has 1998 as the hottest year. As well, 4 of the data sets below agree with this, namely RSS, UAH, Hadcrut3 and Hadsst2. For further details as to how 2012 ended compared to the warmest year for each set, keep reading.
How 2012 Ended on Six Data Sets
Note the two bolded numbers for each data set: the lower one is the highest anomaly recorded in 2012, and the higher one is the all-time record so far.
With the UAH anomaly for December at 0.202, the average for 2012 is (-0.134 -0.135 + 0.051 + 0.232 + 0.179 + 0.235 + 0.130 + 0.208 + 0.339 + 0.333 + 0.281 + 0.202)/12 = 0.160. This would rank 9th. 1998 was the warmest at 0.42. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2011 was 0.132 and it came in 10th.
With the GISS anomaly for December at 0.44, the average for 2012 is (0.36 + 0.39 + 0.49 + 0.60 + 0.70 + 0.59 + 0.51 + 0.57 + 0.66 + 0.70 + 0.68 + 0.44)/12 = 0.56. This would rank 9th. 2010 was the warmest at 0.66. The highest ever monthly anomaly was in January of 2007 when it reached 0.93. The anomaly in 2011 was 0.54 and it came in 10th.
With the Hadcrut3 anomaly for December at 0.233, the average for 2012 is (0.206 + 0.186 + 0.290 + 0.499 + 0.483 + 0.482 + 0.445 + 0.513 + 0.514 + 0.499 + 0.482 + 0.233)/12 = 0.403. This would rank 10th. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2011 was 0.340 and it came in 13th.
With the sea surface anomaly for December at 0.342, the average for the year is (0.203 + 0.230 + 0.241 + 0.292 + 0.339 + 0.352 + 0.385 + 0.440 + 0.449 + 0.432 + 0.399 + 0.342)/12 = 0.342. This would rank 8th. 1998 was the warmest at 0.451. The highest ever monthly anomaly was in August of 1998 when it reached 0.555. The anomaly in 2011 was 0.273 and it came in 13th.
With the RSS anomaly for December at 0.101, the average for the year is (-0.060 -0.123 + 0.071 + 0.330 + 0.231 + 0.337 + 0.290 + 0.255 + 0.383 + 0.294 + 0.195 + 0.101)/12 = 0.192. This would rank 11th. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2011 was 0.147 and it came in 13th.
With the Hadcrut4 anomaly for December at 0.269, the average for 2012 is (0.288 + 0.208 + 0.339 + 0.525 + 0.531 + 0.506 + 0.470 + 0.532 + 0.515 + 0.524 + 0.512 + 0.269)/12 = 0.435. This would rank 10th. 2010 was the warmest at 0.54. The highest ever monthly anomaly was in January of 2007 when it reached 0.818. The anomaly in 2011 was 0.399 and it came in 13th.
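Each annual figure above is simply the arithmetic mean of the twelve monthly anomalies quoted. As a quick check, the GISS months reproduce the stated 0.56:

```python
# Monthly GISS anomalies for 2012 as quoted above.
giss_2012 = [0.36, 0.39, 0.49, 0.60, 0.70, 0.59,
             0.51, 0.57, 0.66, 0.70, 0.68, 0.44]
annual = sum(giss_2012) / len(giss_2012)
print(f"{annual:.4f}")  # → 0.5575, which rounds to the 0.56 quoted above
```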
If you would like to see the above month to month changes illustrated graphically, see:
http://www.woodfortrees.org/plot/wti/from:2012/plot/gistemp/from:2012/plot/uah/from:2012/plot/rss/from:2012/plot/hadsst2gl/from:2012/plot/hadcrut4gl/from:2012/plot/hadcrut3gl/from:2012
Les Johnson says:
January 31, 2013 at 9:11 am
Click on Gavin’s name or picture…..
============================================
No thanks
Map of NCDC station locations; these should be only stations that have at least 240 days/year of data from 1950-2010.
a thought for everyone: look at the graph again. there is no hockey stick.
the graph starts low and goes up.
rhetorically, to use a temp graph to argue that things used to be at a normal level without influence by humans, then started going up with human influence, the flat part of the hockey stick is necessary.
has the hockey stick become so unpopular that the rhetoric is carried out but with the hope that the flat part of the hockey stick is no longer needed? has the era of the hockey stick ended?
if you used this as a discussion point with a warmer cult member, you could have them declare that temps were stable, then humans influenced them to go up. then, ask them to point out the stable period. there is none.
I think the right way to measure deviation from “normal” of any kind is against a moving 30-year normal.
It seems to me a year by year normal is the right thing to do, but the minimum should be a decadal one.
I.e. the year 2012 should be referenced to either a 30-year normal of 1981 to 2011, or at least 1980 to 2010.
Otherwise it seems apples and oranges to me.
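The moving-normal idea suggested here is straightforward to compute: express each year as a departure from the mean of the 30 years preceding it. A sketch on synthetic data (the series and its trend are invented purely for illustration):

```python
import numpy as np

# Hypothetical annual temperature series, 1880-2012 (synthetic data).
rng = np.random.default_rng(1)
years = np.arange(1880, 2013)
temps = 14.0 + 0.007 * (years - 1880) + rng.normal(0, 0.1, years.size)

# Anomaly of each year relative to the mean of the 30 preceding years.
anoms = np.array([
    temps[i] - temps[i - 30:i].mean()
    for i in range(30, temps.size)
])
print(years[30:][-1], round(float(anoms[-1]), 2))
```

With a moving normal the baseline slides forward each year, so every anomaly is judged against the climate people have actually just lived through, rather than a fixed decades-old reference.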
Matthew W says:
January 31, 2013 at 10:18 am
Les Johnson says:
January 31, 2013 at 9:11 am
Click on Gavin’s name or picture…..
============================================
No thanks
—
I agree…I was about to, but then why give them any web traffic – not worth it. Anyways – isn’t Gavin supposed to be working on proper documentation for Model E or something? Nahhh…too busy being a climate rock star…
This is actually hiding the disagreement. Include the satellite measurements, too, then we can really see the divergence.
My guess is that the same person who released the climategate emails also convinced someone to use the Japanese data on this graph, thereby exposing GHCN as the hothead in the GAT crowd.
It’s pretty obvious that the climate has warmed over the past few decades … what is interesting is that this warming is strongly correlated with the rate of movement of the North Pole. See here: http://www.appinsys.com/globalwarming/earthmagneticfield.htm
Has anyone looked at the individual graphs on the absolute temperature scale and then overlaid them to see how they actually disagree with each other in absolute terms? I’d like to see such a graph, particularly as on the anomaly chart the Japanese data is somewhat lower than the other anomalies around 1900 as well.
Anomaly graphs allow the potential use of smoke and mirrors.
“The GISS temperature analysis effort began around 1980, so the most recent 30 years was 1951-1980. It is also a period when many of today’s adults grew up, so it is a common reference that many people can remember.”
That appears to be either a rather blatant misstatement or severely outdated. Referring to pre-1980 as a period that many people can remember is humorous. To realize how funny this really is, one should consult the Beloit College Mindset lists. Each year since 1998 (i.e., starting with the class of 2002), Beloit College has published a list of things to which freshmen don’t have a living, historical reference. They have published 14 more lists, updating it each year.
Below are links to the oldest, 1998, which would be for the class of 2002
http://www.beloit.edu/mindset/2002/
Examples:
The people starting college this fall across the nation were born in 1980.
They are too young to remember the Space Shuttle Challenger blowing up.
They never had a polio shot, and likely, do not know what it is.
Bottle caps have not always been screw off, but have always been plastic. They have no idea what a pull top can looks like. (on this particular one, I once had to explain the term “church key” to a young friend who didn’t know that there were cans before pull tops.)
Atari pre-dates them, as do vinyl albums.
and the most recent, 2012, which would be for the class of 2016.
http://www.beloit.edu/mindset/2016/
Examples:
For this generation of entering college students, born in 1994, Kurt Cobain, Jacqueline Kennedy Onassis, Richard Nixon and John Wayne Gacy have always been dead.
Benjamin Braddock, having given up both a career in plastics and a relationship with Mrs. Robinson, could be their grandfather.
Outdated icons with images of floppy discs for “save,” a telephone for “phone,” and a snail mail envelope for “mail” have oddly decorated their tablets and smart phone screens.
Star Wars has always been just a film, not a defense strategy.
They have had to incessantly remind their parents not to refer to their CDs and DVDs as “tapes.”
There have always been blue M&Ms, but no tan ones.
In the graphic above, the Japanese data seems to end before the others, which all have an uptick.
From 1995, the Japanese curve is very similar to both Hadcrut3 and RSS as can be seen below. That is what “agreement” looks like.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1995/plot/rss/from:1995
UCAR (http://climatedataguide.ucar.edu/guidance/sst-data-cobe-centennial-situ-observation-based-estimates) says that, relative to HadSST, the Japanese “bias adjustments” are “somewhat primitive”.
I can come up with an easy justification for not including the satellite data. The graph uses a baseline of 1951-1980. We don’t have satellite data for that period (only the very end of the period), so they wouldn’t have been able to calculate correct anomalies.
Of course, if they used the same baseline as the original Japanese data, they would have been able to include the satellite data for the relevant period…
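Re-baselining an anomaly series is just a constant shift: subtract the series’ own mean over the new reference period. A minimal sketch on synthetic data (the series is invented; only the mechanics matter):

```python
import numpy as np

# Synthetic anomaly series, 1880-2012, expressed against a 1951-1980 baseline.
rng = np.random.default_rng(2)
years = np.arange(1880, 2013)
anoms_1951_1980 = 0.006 * (years - 1950) + rng.normal(0, 0.1, years.size)

# Re-express the same series against a 1981-2010 baseline.
mask = (years >= 1981) & (years <= 2010)
offset = anoms_1951_1980[mask].mean()
anoms_1981_2010 = anoms_1951_1980 - offset

# By construction, the re-baselined series averages ~0 over 1981-2010.
print(abs(round(float(anoms_1981_2010[mask].mean()), 10)))
```

Since the shift is a single constant, re-baselining alone can move curves up or down relative to one another, but it cannot create a post-2000 divergence in shape.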
I find it amazing that people think the accuracy and precision of a global temperature average is that good. Or that a global average means anything.
Whoops. As I read “All four records show peaks and valleys in sync with each other. All show rapid warming in the past few decades. All show the last decade…”, I expected the logical sequence to be “All show that there has been no further warming in the last decade”. I expected that because, firstly it is true, and secondly it continues to describe how global temperatures have fluctuated. But instead, it says “the last decade has been the warmest on record”.
This is what we call sophistry. That is, the statement is not in itself untrue, not an outright lie, but it is carefully designed to be deceptive. It is designed to mislead from the fact that temperatures have now stopped rising (and no climate model predicted that and current theory cannot account for that!)
This is sophistry, i.e. dishonesty. Why do GISS feel it necessary to stoop to that? Does it help their case, do you think?
Over at James Delingpole’s blog there is a quote from the famous James Lovelock, once the high priest of the global warming green movement. It says:
Maybe one day, in the not too distant future GISS could also become honest and encompass such humility?
Dear James Lovelock,
You forgot you were living in Salem.
If you use RSM (reference station method ) or CAM ( common anomaly method ) and IF you base your data on GHCN or CLIMAT, then there are two periods you could use as base periods to MAXIMIZE the number of stations used.
1950-1981 or 1960-1991. Note that GISS, which uses RSM, and CRU, which uses CAM, each select one of these periods. The difference between these two periods is that one (1950-1981) gives you a few more stations in the NH, while 1960-1991 gives you a few more stations in the southern hemisphere.
When Japan selects the period they do, they also change the distribution and number of stations used in each hemisphere. So a good comparison will control for that spatial difference in sampling.
The right approach, pioneered by skeptic Jeff Id and statistician Roman M, just uses all the data without a base period. This approach was enhanced by Nick Stokes and Tamino, and finally improved upon by Berkeley. Bottom line: you don’t need base periods, and more importantly, choosing a base period can alter your sampling.
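The no-base-period idea can be illustrated with a toy version: solve for one offset per station so that the offset-corrected records agree in a least-squares sense over their overlaps. This is only a sketch of the general concept, not Roman M’s, Nick Stokes’s, or Berkeley’s actual algorithm:

```python
import numpy as np

# A shared "climate" signal seen by two stations at different absolute
# levels, over different but overlapping periods; NaN marks missing data.
rng = np.random.default_rng(3)
signal = np.cumsum(rng.normal(0, 0.05, 120))

s1 = np.full(120, np.nan)
s1[:80] = signal[:80] + 10.0       # station 1: records months 0-79
s2 = np.full(120, np.nan)
s2[50:] = signal[50:] + 12.5       # station 2: records months 50-119
data = np.vstack([s1, s2])

# Alternating least squares: update the combined series, then the offsets.
offsets = np.zeros(2)
for _ in range(200):
    combined = np.nanmean(data - offsets[:, None], axis=0)
    offsets = np.nanmean(data - combined[None, :], axis=1)

# The recovered offset difference matches the true 2.5-unit level gap.
print(round(float(offsets[1] - offsets[0]), 2))  # → 2.5
```

No reference period is ever chosen, so no station is dropped for lacking coverage of some arbitrary 30-year window.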
on SST
Japan takes this route:
“3) An average is obtained for the values in 1) and 2) according to the land-to-ocean ratio for each grid box.”
That approach is rather crude. If you simply weight according to the land-to-ocean ratio, you neglect the fact that each portion of the grid may have more or fewer measurements, which will overweight land portions that are under-sampled and underweight those that are over-sampled. CRU has a better method for calculating grids where land and ocean are part of the same grid box.
The better method, of course, is not to grid the world at all, or to grid at a finer level. Errr.. just krige it.
Reblogged this on acckkii.
I just can’t see a baseline that avoids your dataset as responsible science. It’s like putting all the bells and whistles AND a more powerful engine in the test-drive car but never mentioning it has 90 more HP and a better air conditioner.
Honestly, the entire output becomes apples and orangutans, two completely unrelated forms of data.
Let me get this straight: according to the boffins, the gas medium that we live in has warmed by about 1 °C since 1900?
I don’t know whether to laugh or cry at that one.
Gee, they didn’t include the satellite data at all. Wonder why that is? It only has the best spatial and temporal coverage there is, it only cost a few billion to put those satellites up in space, but what the heck, what’s a few billion and the most accurate data we have compared to the fate of the world?
It would actually be very interesting to see just the post 1988 tail of this data plotted against the satellite data. The GISS and HADCRUT records, IIRC, are currently constrained by the fact that if they add any more artificial warming on at this point, the divergence from the satellites will be too great and will tip their hand. From the look of things — without much resolution at this scale — the Japanese data is much more in alignment with the satellites, and hence is actually moderately believable.
Your point on anomalies is also well made. But that is only one of the many aspects of lying with statistics in play at this point, and not the worst of them.
Looking back at the correction graph in the top article I am once again struck by the prospect of performing a statistical analysis of the corrections applied to the data, under a null hypothesis of fair and unbiased corrections that would move station readings up as often as it moves them down. After all, I can think of no good reason that readings of thermometers from long ago would be systematically biased in their errors — this violates ever so many principles of statistics. My prediction is that the p value of the existing correction set under the null hypothesis would be enough to convince any jury that they are not only biased, but openly and flagrantly biased (that is, a p value less than one in a million or thereabouts).
That actually sounds as though it would be worth a paper, or perhaps an addendum to the paper Anthony already has going on weather station siting and corrections.
rgb
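The null-hypothesis check rgb proposes is essentially a sign test: if corrections are unbiased, the number of upward corrections among n should follow Binomial(n, 0.5). A sketch with a made-up 60-of-70 split (the counts are hypothetical, not measured from GHCN):

```python
from math import comb

def sign_test_p(n_up, n_total):
    """Two-sided exact binomial (sign) test against p = 0.5."""
    k = max(n_up, n_total - n_up)
    # Tail probability P(X >= k) for X ~ Binomial(n_total, 0.5),
    # doubled for a two-sided test and capped at 1.
    tail = sum(comb(n_total, i) for i in range(k, n_total + 1))
    return min(1.0, 2 * tail / 2 ** n_total)

# Hypothetical: 60 of 70 adjustments move the trend upward.
p = sign_test_p(60, 70)
print(f"{p:.2e}")
```

For a split that lopsided the p-value comes out far below any conventional threshold, which is the kind of result rgb suggests a jury would find convincing.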
If I wanted to know the history over a period of time of the average temperature of the surface air mass of the earth, what I really need to know is the heat content of that air mass over time relative to some reference point.
n’kay… fine… but what does this graph have to do with the ‘A’ in ‘AGW’? Let’s say, for the sake of argument, that all those pretty little fluctuations are natural, nothing out of the ordinary. What good is the pretty little graph and its sinister insinuation then?