At NASA’s Climate 365, there is an interesting story posted with this statement and a graph:
Some say scientists can’t agree on Earth’s temperature changes
Each year, four international science institutions compile temperature data from thousands of stations around the world and make independent judgments about whether the year was warmer or cooler than average. “The official records vary slightly because of subtle differences in the way we analyze the data,” said Reto Ruedy, climate scientist at NASA’s Goddard Institute for Space Studies. “But they also agree extraordinarily well.”
All four records show peaks and valleys in sync with each other. All show rapid warming in the past few decades. All show the last decade has been the warmest on record.
In sync? Weellll, not quite. Japan apparently hasn’t ‘got their mind right’ yet, as the graph shows:
Here is where it gets interesting. Note the purple line after the year 2000.
The Japanese data line in purple is about 0.25 °C cooler than the NASA, NOAA, and Met Office data sets after the year 2000. That is partly due to the anomaly baselines chosen by the different agencies, as the two comparison graphs below illustrate:
Source: http://ds.data.jma.go.jp/tcc/tcc/news/press_20120202.pdf
NASA GISS uses a 1951-1980 average for its anomaly baseline, while the Japan Meteorological Agency uses a 1981-2010 baseline. That explains the offset difference between 0.48 and ~0.23 °C. It does not, however, explain the divergence that remains when all of the data are plotted on the same 1951-1980 anomaly baseline NASA uses, which is explained in more detail at the link provided in the NASA 365 post to NASA’s Earth Observatory story here:
Source: http://earthobservatory.nasa.gov/IOTD/view.php?id=80167
In that EO story they explain:
The map at the top depicts temperature anomalies, or changes, by region in 2012; it does not show absolute temperature. Reds and blues show how much warmer or cooler each area was in 2012 compared to an averaged base period from 1951–1980. For more explanation of how the analysis works, read World of Change: Global Temperatures.
The justification for using the outdated 1951-1980 baseline is humorous, bold mine:
The data set begins in 1880 because observations did not have sufficient global coverage prior to that time. The period of 1951-1980 was chosen largely because the U.S. National Weather Service uses a three-decade period to define “normal” or average temperature. The GISS temperature analysis effort began around 1980, so the most recent 30 years was 1951-1980. It is also a period when many of today’s adults grew up, so it is a common reference that many people can remember.
So, the choice seems to be more about feeling than hard science, kind of like the time when Jim Hansen and his sponsor Senator Tim Wirth turned off the air conditioning in the Senate hearing room in June 1988 (to make it feel hotter) when they first tried to sell the global warming issue:
But, back to the issue at hand. The baseline difference doesn’t explain the divergence.
Perhaps it has to do with all of the adjustments NOAA and GISS make; perhaps it is a difference in methodology for computing the global surface average, and then the anomaly, post-2000. Perhaps it has to do with sea surface temperature, which Japan’s Met Agency relies on heavily but handles differently. A hint comes in this process explanation seen here:
http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/explanation.html
Global Average Surface Temperature Anomalies
JMA estimates global temperature anomalies using data combined not only over land but also over ocean areas. The land part of the combined data for the period before 2000 consists of GHCN (Global Historical Climatology Network) information provided by NCDC (the U.S.A.’s National Climatic Data Center), while that for the period after 2001 consists of CLIMAT messages archived at JMA. The oceanic part of the combined data consists of JMA’s own long-term sea surface temperature analysis data, known as COBE-SST (see the articles in TCC News No.1 and this report).
The procedure for estimating the global mean temperature anomaly is outlined below.
1) An average is obtained for monthly-mean temperature anomalies against the 1971-2000 baseline over land in each 5° x 5° grid box worldwide.
2) An average is obtained for monthly mean sea surface temperature anomalies against the 1971-2000 baseline in each 5° x 5° grid box worldwide in which at least one in-situ observation exists.
3) An average is obtained for the values in 1) and 2) according to the land-to-ocean ratio for each grid box.
4) Monthly mean global temperature anomaly is obtained by averaging the anomalies of all the grid boxes weighted with the area of the grid box.
5) Annual and seasonal mean global temperature anomalies are obtained by averaging monthly-mean global temperature anomalies.
6) The baseline period is adjusted to 1981-2010.
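As a rough sketch, steps 1) through 5) above boil down to a cosine-of-latitude weighted average. In this illustration the grid values, land fractions, and random seed are all invented stand-ins, not JMA’s actual data or code:

```python
import numpy as np

# Hypothetical monthly anomaly fields on a 5° x 5° grid (36 lat x 72 lon boxes),
# standing in for steps 1) and 2): land and SST anomalies vs. a 1971-2000 baseline.
rng = np.random.default_rng(0)
land_anom = rng.normal(0.3, 0.5, (36, 72))
sst_anom = rng.normal(0.2, 0.3, (36, 72))
land_frac = rng.uniform(0.0, 1.0, (36, 72))  # assumed land-to-ocean ratio per box

# Step 3): blend land and ocean anomalies within each grid box.
box_anom = land_frac * land_anom + (1.0 - land_frac) * sst_anom

# Step 4): weight each box by cos(latitude) of its center, since a 5° x 5°
# box near the poles covers far less area than one at the equator.
lat_centers = np.arange(-87.5, 90.0, 5.0)  # 36 latitude band centers
weights = np.cos(np.deg2rad(lat_centers))[:, None] * np.ones((1, 72))
monthly_global = np.sum(box_anom * weights) / np.sum(weights)

# Step 5): an annual anomaly would simply be the mean of 12 such monthly values.
print(round(float(monthly_global), 3))
```

The area weighting in step 4) matters: an unweighted mean over 5° boxes would grossly over-represent polar regions, where anomalies are largest.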
Note what I highlighted in red:
…for the period after 2001 consists of CLIMAT messages archived at JMA
That along with:
The oceanic part of the combined data consists of JMA’s own long-term sea surface temperature analysis data, known as COBE-SST
is very telling, because it suggests that Japan is using an entirely different method for both land and sea data. For the post-2001 land data, it suggests they use the CLIMAT data as is, rather than the “value added” processing that NCDC/NOAA and NASA GISS apply. The Met Office gets the NCDC/NOAA data already pre-processed with the GHCN3 algorithms. NASA GISS deconstructs the data and then applies its own set of sausage-factory adjustments, which is why its anomaly is often the highest of all the data sets.
Prior to 2001, Japan’s Met Agency uses the GHCN data, which is pre-processed and adjusted through another sausage recipe pioneered by Dr. Thomas Peterson at NCDC.
The land part of the combined data for the period before 2000 consists of GHCN (Global Historical Climatology Network) information provided by NCDC
A good example of the GHCN sausage is Darwin, Australia, as analysed by Willis Eschenbach:
Above: GHCN homogeneity adjustments to Darwin Airport combined record
So, it appears that Japan’s Meteorological Agency uses adjusted GHCN data up to the year 2000, and from 2001 onward uses the CLIMAT report data as is, without adjustments. To me, this clearly explains the divergence when you look at the NASA plot magnified and note when the divergence starts. The annotation marks in magenta are mine:
If anyone ever needed the clearest example of how NOAA and NASA’s post facto adjustments to the surface temperature record increase the temperature, this is it.
Now, does anyone want to bet that the activist scientists at NOAA/NCDC (Peterson) and NASA (Hansen) start lobbying Japan to change their methodology to be like theirs?
After all, the scientists in Japan “need to get their mind right” if they are going to be able to claim “scientists agree on Earth’s temperature changes”, when right now they clearly don’t.
P.S.
BTW if anyone wants to analyze the Japanese data, here is the source for it:
http://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/map/download.html
It is gridded, and I don’t have software handy at the moment to work with gridded data, but some other readers might.
UPDATE: Tim Channon at Tallbloke’s has plotted the gridded data and offers a graph, see here: http://tallbloke.wordpress.com/2013/02/01/jmas-global-surface-temperature-gridded-first-look/





davidmhoffer says:
January 31, 2013 at 9:31 am
“That said, my objection remains that averaging anomaly data is meaningless. An anomaly of 1 in the arctic represents a change in energy balance of less than one third the amount of an anomaly of 1 in the tropics. Averaging them together results in a meaningless number.”
I totally agree with you. The word “anomaly” itself conveys the meaning that the “Earth’s temperature” (a meaningless metric for many reasons) is somehow abnormal (anomalous), and that the base period to which it is referred is, for whatever reason, “normal”. The base period at present for groups like the controversial NASA/GISS appears to be linked to Jim Hansen’s youth (1951-1980).
Here is what NOAA says about anomalies:
—
https://www.ncdc.noaa.gov/cmb-faq/anomalies.php
Q: Why use temperature anomalies (departure from average) and not absolute temperature measurements?
A: Absolute estimates of global average surface temperature are difficult to compile for several reasons. Some regions have few temperature measurement stations (e.g., the Sahara Desert) and interpolation must be made over large, data-sparse regions. In mountainous areas, most observations come from the inhabited valleys, so the effect of elevation on a region’s average temperature must be considered as well. For example, a summer month over an area may be cooler than average, both at a mountain top and in a nearby valley, but the absolute temperatures will be quite different at the two locations. The use of anomalies in this case will show that temperatures for both locations were below average.
Using reference values computed on smaller [more local] scales over the same time period establishes a baseline from which anomalies are calculated. This effectively normalizes the data so they can be compared and combined to more accurately represent temperature patterns with respect to what is normal for different places within a region.
For these reasons, large-area summaries incorporate anomalies, not the temperature itself. Anomalies more accurately describe climate variability over larger areas than absolute temperatures do, and they give a frame of reference that allows more meaningful comparisons between locations and more accurate calculations of temperature trends.
—
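NOAA’s mountain-vs-valley point is easy to see with toy numbers (all values below are invented for illustration):

```python
# Hypothetical July temperatures (°C) for a valley station and a nearby mountain top.
valley_baseline, mountain_baseline = 24.0, 9.0  # assumed 30-year July averages
valley_obs, mountain_obs = 23.2, 8.1            # a cooler-than-average July

# The absolute temperatures differ by ~15 °C, so averaging them says little
# about whether the region as a whole was warm or cool...
valley_anom = valley_obs - valley_baseline        # ≈ -0.8
mountain_anom = mountain_obs - mountain_baseline  # ≈ -0.9

# ...but both anomalies are negative and of comparable size, which is NOAA's point.
print(round(valley_anom, 1), round(mountain_anom, 1))
```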
There are some problems (for me) with the statements above. First, regarding the mountain vs. valley comparison: the anomalies may both show the same general trend (i.e. cooler or warmer than “normal”), but the magnitudes may be entirely different. Yet the implication is that anomalies can be interpolated spatially regardless of the terrain.
And their statement “Absolute estimates of global average temperature are hard to compile…” – what does this mean? What is an “absolute estimate”?? (heh!!) There may be problems with data quality (as have been chronicled at WUWT for the past 6 years), but it shouldn’t be “hard” to compile and process the available raw, absolute temperature data. What IS more challenging, of course, is to homogenize, contort, distort, synthesize, and smooth the raw data so that it conforms to the CAGW world view…
richardscourtney;
Presenting global temperature anomalies reveals the global warming since 1900 which is of interest
>>>>>>>>>>>>>>>>>>>>
I understand the intent. If anomalies had a linear relationship to energy flux, I would have no objection. But they do not: the relationship is that P varies with T^4. This gives us the possibility of, for example, an anomaly of +2 from a very cold regime averaged with an anomaly of -1 from a very warm regime. If the baselines used were -40 and +40, the result would be an average anomaly that is positive while the change in energy flux is negative. The change in the cold regime becomes over-represented, the warm regime under-represented, and the derived number tells us nothing about the change in energy balance. The whole CAGW debate centres on the supposed change in energy balance at the surface caused by CO2 increases. Since anomaly data does not and cannot provide this information, it is not fit for purpose. The continued use of anomaly data in this fashion is particularly stunning given that converting baseline temperatures to W/m^2 is a trivial matter, as is then calculating anomalies in W/m^2, which would deliver precisely the information we are trying to track: the effect of CO2 increases on W/m^2 at the surface.
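The -40/+40 example above can be checked directly against the Stefan-Boltzmann law. This sketch assumes an emissivity of 1 (pure blackbody fluxes), which is a simplification:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def flux(t_celsius):
    """Blackbody surface emission (W/m^2) at the given temperature."""
    return SIGMA * (t_celsius + 273.15) ** 4

# Cold regime: baseline -40 °C, anomaly +2. Warm regime: baseline +40 °C, anomaly -1.
d_flux_cold = flux(-40 + 2) - flux(-40)  # ≈ +5.8 W/m^2
d_flux_warm = flux(40 - 1) - flux(40)    # ≈ -6.9 W/m^2

mean_anomaly = (2.0 + (-1.0)) / 2.0              # +0.5 °C: reads as "warming"
mean_d_flux = (d_flux_cold + d_flux_warm) / 2.0  # negative: emitted flux went down

print(mean_anomaly, round(mean_d_flux, 2))
```

The averaged anomaly comes out positive while the averaged flux change comes out negative, which is exactly the sign disagreement the comment describes.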
Here is the graph from the post with RSS & UAH added. I went to the climate365 website to look for what averaging their graph used, and found nothing. I used the RSS/UAH zero baseline matched to zero on the above graph, although that is not correct. It just occurred to me I could use one or more ground-based sets of observations to align the satellites; I will look into that. Any other ideas on getting the satellite data aligned meaningfully would be appreciated.
http://i50.tinypic.com/fa4qap.jpg
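One standard way to align series that sit on different native baselines is to re-zero every series over the same overlap period. A sketch with invented straight-line series (the trend value and the offsets are made up purely to show the mechanics):

```python
import numpy as np

def rebaseline(series, years, base_start, base_end):
    """Shift an anomaly series so its mean over [base_start, base_end] is zero."""
    years = np.asarray(years)
    series = np.asarray(series, dtype=float)
    mask = (years >= base_start) & (years <= base_end)
    return series - series[mask].mean()

# Two invented annual series with the same underlying trend but different
# native baselines, hence a constant offset between them.
years = np.arange(1979, 2013)
surface_like = 0.016 * (years - 1979) + 0.10
satellite_like = 0.016 * (years - 1979) - 0.20

a = rebaseline(surface_like, years, 1981, 2010)
b = rebaseline(satellite_like, years, 1981, 2010)

# After re-zeroing both on the same 1981-2010 window, the offset vanishes.
print(round(float(np.max(np.abs(a - b))), 12))  # 0.0
```

Matching a single point (or a single peak like 1998) instead of a common-period mean leaves the alignment at the mercy of one noisy value, which is why a multi-decade overlap window is the usual choice.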
I’m surprised…using GISS to align RSS & UAH hardly changes the previous graph. This has 36-month averages instead of the 60-month averages on the previous graph. I’m just looking for something that resembles what the climate365 graph used; I don’t see it at 12-, 13-, 36- or 60-month averages.
http://i50.tinypic.com/nnjnuu.jpg
Steve Keohane says:
February 1, 2013 at 8:09 am
Steve, it looks as if you are doing something slightly different with the satellite data sets, namely taking a mean of around 30 samples. This does not show 1998 as the warmest year in the satellite data the way the Japanese data does. If that is what you intend to show, that is one thing. But note the sharp peaks at 1998 in all the other data sets; yours is very rounded, and the alignment is a bit off. So you are not really comparing apples to apples yet. Perhaps align the 1998 peak on all sets and use monthly data (with a mean of 1) for the satellite data.
davidmhoffer says:
February 1, 2013 at 7:17 am
richardscourtney;
“Presenting global temperature anomalies reveals the global warming since 1900 which is of interest.”
It is one thing to use anomalies as rules of thumb that can serve as guides to discovery but it is quite another thing to use them as evidence. All sciences use heuristics to simplify daily work but when presenting conclusions and the evidence for them they do not present heuristics.
Anomalies are at several removes from anything that can be reasonably called a direct measurement of some feature of the environment. Davidmhoffer gives an excellent example above. What we cite as evidence for our claims must have some direct connection to actual measurements. Otherwise, we surrender all empirical constraints on theorizing. If we do that, we might as well buy our own climate supercomputer.
On more than one occasion, I have expressed my admiration for richardscourtney’s posts. In this rare case, his post could benefit from a bit more work.
davidmhoffer:
re your reply at February 1, 2013 at 7:17 am to my comment.
I agree with everything you say.
I tried to be ‘polite’ but that clearly obscured what I was trying to say.
I will now be blunt and hope the Moderator will permit this post.
1.
Global temperature fluctuates up and down by 3.8 deg.C during each year.
Few people know this.
2.
Global temperature rose by 0.9 deg.C (as an annual average) since 1900.
Many people have been told this.
3.
People would not be scared by global warming if they knew that the rise since 1900 is less than a quarter of the variation during each year. It is an inconvenient truth.
4.
This inconvenient truth needs to be hidden if people are to be scared and, therefore, anomalies are used to conceal the inconvenient truth.
5.
In summation, the use of temperature anomalies is a trick used to further a political agenda and, therefore, it is not relevant that the use of anomalies has no scientific justification and purpose.
Richard
Your post and all the comments above reveal two important realities:
1. The debate surrounding our ever-changing climate, with regard to moot anthropogenic influence, is irrelevant, because current climate behaviour displays anomalies so tiny that scientists are reduced to arguing over fractions of a degree in any comparison of data sets. I.e., if humankind really is affecting global temperatures, the result is negligible.
2. There are many interested parties whose funding, livelihood, academic reputation and recognition rely on the concept of CAGW, to the point that they are constrained into, as has been remarked several times above, “cooking the books”.
The real problem for humankind, and most other organisms, is that several “uncooked” data sets indicate a recent trend of FALLING global temperatures. There is no question that lower global temperatures would be far more detrimental to our biosphere than fractional warming. Unfortunately, the very agencies in a position to detect and publicize such a trend are so busy trying to prove warming for the reasons given above that they will not merely overlook it, they will actively conceal it.
Werner Brozek says:February 1, 2013 at 8:53 am
Werner, with a mean=1 we’re looking at monthly data and the range in Y becomes too large. This has mean=12, and I aligned the peaks at 1998 along the X-axis. GISS does not seem to follow between the woodfortrees line and the line in the climate365 graph; the peaks do not match in amplitude. I used the GISTEMP LOTI plot; using GISTEMP dTs would push the satellite temps down another 0.2 °C on the Y-axis.
http://i47.tinypic.com/w98ln6.jpg
Reblogged this on This Got My Attention and commented:
What exactly is the trend in temperatures?
But here’s where I think we suffer: the media doesn’t know (or care to know) any better. When we try to say “climate scientists” are at best being disingenuous, we come off like the tin-foil-hat-wearing uncle that we try to keep away from the public, or as “paid shills for the oil industry”.
When we try to point out that 25%-100% of the temperature trend isn’t real, most don’t know whom to believe, so they pick the safe bet: the scientists. Many already loathe mankind, so it’s easy for them to believe. Climategate might have been enough if we could have proven the scientists were lying, but the proof was too soft and the reviews not damning enough to show their true colors.
IMO we need either hard, irrefutable proof that they’re lying, or proof that GHG theory as explained to the public is wrong, presented in a way the public understands. Otherwise we’ll have to wait for nature to show them wrong, and expect them to twist even that into not being proof they’re wrong. Without a press that’s willing to understand why the science is wrong, the hotheads will never admit they’re wrong; only that there was a pause that, thank heaven, is postponing our demise, giving us another chance to repent our evil ways.
davidmhoffer says:
February 1, 2013 at 7:17 am
richard verney says:
February 1, 2013 at 1:35 am
Your posts highlight two different potential metrics to measure something physically meaningful. David is suggesting an average of T^4 which, assuming roughly uniform emissivities, would produce something proportional to the average rate of energy dissipation. Since T^4 weights high temperatures more heavily than low ones, the current T metric is effectively overweighting the cold areas from this perspective.
One problem with a T^4 metric (aside from the emissivity question): This is more-or-less proportional to the outward energy flux from the surface, but some of that flux is going to be reflected back by GHGs. So, it doesn’t really tell us the rate at which energy is leaving the Earth. Does this make T^4 particularly well suited for measuring the effect of GHGs? Something to ponder…
Anyway, Richard’s idea is to weight by RH, because the heat capacity of wet regions is greater than that of dry ones. Because of this, a 1 degree change of temperature in a wet region produces greater energy retention than a 1 degree change in a dry area. From this perspective, the current T metric overweights changes in dry regions. It is possible for the T metric to show a significant rise without any significant change in overall atmospheric energy storage.
Which of these two metrics is better, and which regions are we overweighting with the T metric: cold ones, or dry ones? Does it matter? Cold regions tend to be dry ones, but dry ones are not necessarily cold ones (e.g., Sahara). Should we be looking at a T^4 metric, or an RH one?
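To put the question in concrete terms, the three candidate metrics can be computed side by side on a handful of invented regions (the temperatures, anomalies, and humidity weights below are all made-up illustration values, not measurements):

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# Three invented regions: cold/dry, hot/dry, warm/wet.
baseline_k = np.array([233.0, 310.0, 300.0])  # baseline temperatures, K
anomaly = np.array([2.0, -0.5, 0.2])          # temperature changes, K
humidity_wt = np.array([0.2, 0.3, 1.0])       # crude proxy for moist heat capacity

# 1) The plain anomaly average -- the metric the data sets actually report.
plain = anomaly.mean()

# 2) The T^4 view: change in emitted flux, which weights warm regions more heavily.
d_flux = SIGMA * ((baseline_k + anomaly) ** 4 - baseline_k ** 4)
flux_metric = d_flux.mean()  # W/m^2

# 3) The RH view: weight anomalies by (a proxy for) moist heat capacity.
rh_metric = np.average(anomaly, weights=humidity_wt)

print(round(float(plain), 3), round(float(flux_metric), 3), round(float(rh_metric), 3))
```

Even on three toy regions the three metrics disagree in magnitude, which is the crux of the objection: which regions get overweighted depends entirely on which physical quantity you decide the average should conserve.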
Steve Keohane says:
February 1, 2013 at 10:24 am
Thank you! It is known that the satellite data show greater extremes due to El Ninos and La Ninas so what you did in this latest attempt is about as well as can be done. Note the relative heights of the 1998 and 2010 El Ninos are about the same on RSS, UAH and the Japanese data.
GISS does not seem to follow between the woodfortrees line
You may be attempting the impossible! Don’t even try!
Check out 2007 with their latest data:
2007 93 66 67 71 63 55 57 58 60 57 54 46
This averages to 0.62.
Compare this to WFT which still has the data prior to last week:
2007 0.89
2007.08 0.64
2007.17 0.65
2007.25 0.68
2007.33 0.62
2007.42 0.54
2007.5 0.56
2007.58 0.57
2007.67 0.53
2007.75 0.55
2007.83 0.49
2007.92 0.4
This averages to 0.59, which you can still verify on WFT as it has not updated December 2012 yet. So within a week, the anomaly for 2007 went up from 0.59 to 0.62.
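The two averages quoted above are easy to verify; GISS publishes its monthly anomalies in hundredths of a degree, and the values below are copied from the comment itself:

```python
# GISS 2007 monthly anomalies from the comment above, in hundredths of °C.
giss_2007 = [93, 66, 67, 71, 63, 55, 57, 58, 60, 57, 54, 46]
print(round(sum(giss_2007) / len(giss_2007) / 100.0, 2))  # 0.62

# The same year as WoodForTrees held it before the latest GISS revision.
wft_2007 = [0.89, 0.64, 0.65, 0.68, 0.62, 0.54, 0.56,
            0.57, 0.53, 0.55, 0.49, 0.40]
print(round(sum(wft_2007) / len(wft_2007), 2))  # 0.59
```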
Bart says:
February 1, 2013 at 12:32 pm
While this is what theory says, a hand-held IR thermometer pointed into the sky on a clear 35°F day reads below the minimum scale of the device, less than -40°F, while everything around reads some appropriately varied temperature, including the bottoms of clouds.
If there’s anything being reflected back it isn’t causing any warming.
richardscourtney;
In summation, the use of temperature anomalies is a trick used to further a political agenda and, therefore, it is not relevant that the use of anomalies has no scientific justification and purpose.
>>>>>>>>>>>>>>
I think we can make it relevant again simply by repeatedly asking the question. When I’ve tried to explain the physics, I get ignored or on some blogs painted as another one of those cranks yelping about back radiation not existing or other such nonsense. So instead, I’ve started asking questions like this one and…crickets.
I did the same to Jan P Perlwitz a while back. We were debating the recent Briffa paper and he was making the claim that you had to be a climate scientist to understand it. So instead of debating the paper with him further, I asked him which part of the paper could not be understood by a first year physics or stats student and why. Suddenly silence from Perlwitz. He responded to other comments in that thread, but studiously ignored me though I repeated the question three times.
The silence speaks loudly.
Bart;
Should we be looking at a T^4 metric, or an RH one?
>>>>>>>>>>>>
Both. Per your comment, this is a rather complex matter. As the saying goes, make the problem as simple as possible, but not simpler. By averaging anomaly data, the problem has been oversimplified to the point that the data are close to meaningless.
It seems as though Jan P Perlwitz spends a lot of time blogging. Ditto that recent grad whose six-letter (I think) last name begins with a C. Is GISS paying these guys partly for their propagandizing?
James Hansen Busies Himself
Creating A Third World Holocaust
That hothead Hansen’s
Radiative reflux?
Just Charlie Manson’s
Helter Skelter redux
Realities will
Not reorder his mind
Delusions that kill
Leave all reason behind
I am sure this is of no interest to anyone but me, but I have altered the phrase “Atmospheric reflux” to “Radiative reflux”. I had thought of both choices and proceeded to choose the wrong one. Technically, “radiative reflux” accurately describes the greenhouse effect: radiant energy from the earth is reflected back towards the earth. A reflux is basically a backward flowing. “Atmospheric reflux” also describes the greenhouse effect, but in a more general way, and can be mistaken to imply that “air” is the moving component when it is not. It was a poor choice of words that I thought would make the poem simpler but instead just screwed it up.
Hansen and Manson share the same apocalyptical fascinations — and both have messiah complexes. And both are highly dangerous. The worlds of their implemented visions would be a living hell for the majority of humanity. They are two peas in a pod.
Eugene WR Gallun
Theo Goodwin says:
January 31, 2013 at 10:09 am
Yes, using anomalies is a great way of side tracking a science. Maybe some day climate scientists will switch from anomalies to some measurable feature of the environment.
<<<<<<<<<<<<<
No. Here's how that shakes out: you get a job. I get a job. Somebody else says
"You guys go work… I'll stay here indoors and take money to check the weather,
because you're stupid and have gotten trapped and can't get away from paying me to do it. You guys hurry up and get out of my way, rabble!"
"I'm trying to get to the Weather Center, and do MY work, and YOU'RE in the way. And the next time you tell me to show you what I'm doing, I'll SUE you; and the GOVERNMENT will stand behind me because I'm a government employee!"
Money is taken from your check to pay this lifestyle to exist.
Therefore you will not be having more relevant science. You'll be having more
Gubmunt Psignts.
And you'll pay
or they'll ruin your life
releasing government authorized and sanctioned press releases,
that you're evil, an enemy to the country, and a prime reason the war on
_________ is failing;
and that you're a danger to yourself,
your children,
and to all Mannkind, saying something different than they say.
Because they're government employees.
The people who tell you now
and have for 75 years: that pot's heroin
and you're so stupid
they're going to finally,
have to take over all your medicine,
before you mess this up
worse than you already have.
Japan may have made this change in 2000 so as not to look ridiculous at home, where arctic sea ice is currently pressed up tight all along the north shore of Hokkaido and is slipping along the continental mainland toward North Korea. Heck, one could walk on ice and snow from Japan to North Korea (or to Detroit, for that matter).
http://www7320.nrlssc.navy.mil/hycomARC/navo/arcticicennowcast.gif
davidmhoffer at February 1, 2013 at 7:01 pm, in http://wattsupwiththat.com/2013/01/31/japans-cool-hand-luke-moment-for-surface-temperature/#comment-1214413
The only thing that speaks loudly here is your lie about me behind my back in this thread, in which I had not been participating until now. I never said that one had to be a climate scientist to understand some Briffa paper, whichever one you are referencing here, or any other paper for that matter. What is the purpose of telling this lie?