March 2008 HadCRUT Global Temperature Anomaly

HadCRUT global numbers are out, and the March anomaly is at 0.43°C, still lower than the GISS number of 0.67°C.


Once again Jim Hansen’s NASA GISS is the highest global anomaly:

RSS (satellite)
2008/01  -0.070
2008/02  -0.002
2008/03   0.079

UAH (satellite)
2008/01  -0.046
2008/02   0.020
2008/03   0.094

HadCRUT (surface, land-ocean)
2008/01   0.056
2008/02   0.187
2008/03   0.430

GISS (surface, land-ocean, polar estimates)
2008/01   0.12
2008/02   0.26
2008/03   0.67

78 Comments
J.Hansford.
April 13, 2008 1:57 am

…. and further to Vauss… Giss wouldn’t be very representative of the Southern Hemisphere as compared to the Northern, I wouldn’t think….?

Pierre Gosselin (aka AGWscoffer)
April 13, 2008 2:00 am

Evan
“HSE will exaggerate a real increase, exaggerate a decrease, and not affect a flat trend much.”
Exaggerate a decrease? How?
If anything the siting problems tend “to warm” the decreases, thus giving us higher average temps. Maybe I don’t follow you here.

cohenite
April 13, 2008 2:09 am

Further to the base period issue; GISS has a base period of 1951-1980, Hadley 1961-1990; the Pacific Event occurred in 1976 with a climate lapse period of 5 years during which time temp rose by 1.46 degrees C; since GISS has a preponderance of cooler temps prior to the PE, the elevation of trend after 1980 will be greater than from a base period of 1961-1990, and, conversely, the coolness and elevation of trend before 1976 will be less than with Hadley, with an overall greater trend over the century for GISS, simply because of their different base period. Or am I missing something?
REPLY: No, you got it exactly right.

Robert Wood
April 13, 2008 2:59 am

JM
Has anyone figured out exactly how and why Hansen makes these multiple adjustments throughout the record?
Steve McIntyre's Climate Audit spends a lot of time analyzing the mysterious adjustments.

Pierre Gosselin (aka AGWscoffer)
April 13, 2008 3:04 am

All this GISS baseline and temperature data adjustment discussion just further reinforces my belief we need an Index of Global Temperatures (my proposed Index of Leading Climatic Indicators will have to wait).
Lump the HadCRUT, MSU UAH, RSS and crappy GISS data together, average them out, and voila! – The Monthly Index of Global Temperatures (MIGT). For March the MIGT would simply be (0.079 + 0.094 + 0.430 + 0.670)/4 = 0.318.
Of course the GISS data is corrupt, but we don’t live in a perfect world, now do we?
Do this for the last 30 years, and you’ll get a MIGT curve that is far more moderate than the GISS curve – I’m sure. Feed that to FOX, Glenn Beck, JunkScience, Heartland etc. etc, and we’ll be on our way.
Maybe this already exists?
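The MIGT arithmetic is a one-liner; a minimal sketch (the `migt` function name is made up, and the inputs are the March 2008 figures from the post):

```python
def migt(anomalies):
    """Monthly Index of Global Temperatures: the plain average of the series."""
    return sum(anomalies) / len(anomalies)

# March 2008 anomalies: RSS, UAH, HadCRUT, GISS (deg C)
march_2008 = [0.079, 0.094, 0.430, 0.670]
print(round(migt(march_2008), 3))  # 0.318
```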
Dear Anthony;
It kinda annoys me that you actually devoted your time to reply with 20+ lines to this TCO repeat offender. With repeat offenders, just delete their post as soon as they step out of bounds. Just delete! Do not reply! Do not waste your time!
We want you to devote your time taking care of the real business here.

April 13, 2008 4:05 am

Can anyone tell me exactly what the difference in baseline amongst the datasets amounts to (e.g. by comparing the mean of one in the base period of the other)? I’d like to provide an example graph which has them offset to the same base.
In the meantime, I personally find the easiest way to understand it is by looking at the derivative (deltas):
http://www.woodfortrees.org/plot/hadcrut3vgl/last:12/derivative/plot/uah/last:12/derivative/plot/rss/last:12/derivative/plot/gistemp/last:12/derivative
(adjust time period to taste)
Another interesting thing to look at is the March figures for the last 15 years:
http://www.woodfortrees.org/plot/hadcrut3vgl/last:180/every:12/plot/uah/last:180/every:12/plot/rss/last:180/every:12/plot/gistemp/last:180/every:12
(actually, this gives you the last 15 years of whatever sample is the latest published – so if you check this again in late June, you’ll get June – but beware early in the month when not all the datasets have published yet.)
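For reference, the "derivative" series in the first link is just the month-to-month change; a toy version of the same calculation (using the HadCRUT figures from the post, not the woodfortrees data itself):

```python
def month_to_month_deltas(series):
    """First differences: how much each month changed from the previous one."""
    return [b - a for a, b in zip(series, series[1:])]

hadcrut_jan_to_mar = [0.056, 0.187, 0.430]  # Jan, Feb, Mar 2008
print([round(d, 3) for d in month_to_month_deltas(hadcrut_jan_to_mar)])  # [0.131, 0.243]
```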

April 13, 2008 4:30 am

Oops, I just fell into my own trap… The HADCRUT3 new sample hasn’t made it to the CRU data source yet, so right now the latter graph is misleading – it is showing February values for hadcrut3vgl. I guess this should fix itself when CRU update their data on Monday.

Tom in Florida
April 13, 2008 6:01 am

From a layman (me):
Why all the fuss over monthly anomalies when the calendar month is a man-made system? Wouldn't it be better to use lunar months or some other naturally occurring, consistent cycle? February had 29 days and March 31. Could that affect the results if the last 2 days of March were much warmer? As far as "averages" go, is there really an absolute "average" or "mean" temperature each month? I love the example about averages: a person has one foot in a bucket of water at 30 degrees and the other in a bucket of water at 100 degrees; on average the person should be quite comfortable. As for using different baselines to bolster a particular opinion: liars figure and figures lie. Plus, as stated in other posts, why is a +/- error range not included in each chart? It seems to me that it would make more sense to chart temperature ranges rather than specific temperature numbers as a baseline, e.g.: the normal temperature range for April in west central Florida within 2 miles of the coast is 58-94 degrees F (example only; I do not know the real numbers). If future charting shows a move, higher or lower, of that range over time, only then can we conclude that there is warming or cooling.
This may be very simple but, as Anthony states, it is how the data is presented to the general public that sways opinion. Unfortunately for most of us, politicians will act on what they perceive to be best for themselves in view of that public opinion without regard to real science.

Basil
Editor
April 13, 2008 6:06 am

On the differing base periods for the anomalies, why are they not all using 1970-2000? Isn’t that the way “climatological normals” are normally computed, using the most recent three complete decades? If I ask the NWS what are normal CDD’s or HDD’s, they’ll come back with the 1970-2000 average. Why are temp anomalies reported differently?
Basil
REPLY: My point exactly. This is the sliding window I refer to.

steven mosher
April 13, 2008 6:52 am

JM
let me see if I can explain the differences between HadCRU and GISS.
Long term differences:
1. HadCRU uses a different base period. This just moves the whole line up and
down in Y, so it's easy to adjust them to be on the same base period.
2. GISS estimates the polar region using stations 1200 km apart and they fill
in some missing data using a GCM.
3. Other differences could be the use of different stations, the use of
different adjustment methods, and a difference in algorithms for infilling
missing data.
So, on a long term basis you will see that GISS reports temps that are a little higher than HadCRU. The point is that the difference is fairly consistent over time.
This is important because at the end we want to look at the TREND in the data,
so as long as the offset is constant the trends will be consistent. Now, in the short term, the last few years, the GISS trend has been a bit higher than the HadCRU trend. Lucia covers this, so I recommend her work.
With regards to this last month: GISS reported .67 and HadCRU reported .43.
On average I would have expected HadCRU to report .58C, or conversely, if HadCRU reports .43, then one could expect GISS to report .52C. So the difference between GISS and HadCRU was a bit on the high side from what one normally sees, but it's not strictly speaking an outlier. It's just one of those noise things.
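That expectation amounts to a constant-offset rule; a sketch assuming a long-run GISS-minus-HadCRU offset of roughly 0.09 C (a number inferred from the .67/.58 and .43/.52 figures above, not an official value):

```python
GISS_MINUS_HADCRU = 0.09  # assumed long-run average offset, deg C

def expected_hadcru(giss_anomaly):
    """Predict HadCRU from GISS by removing the assumed offset."""
    return giss_anomaly - GISS_MINUS_HADCRU

def expected_giss(hadcru_anomaly):
    """Predict GISS from HadCRU by adding the assumed offset back."""
    return hadcru_anomaly + GISS_MINUS_HADCRU

print(round(expected_hadcru(0.67), 2))  # 0.58
print(round(expected_giss(0.43), 2))    # 0.52
```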

Gary Gulrud
April 13, 2008 6:56 am

GISS global temp: Fake but accurate.

robert burns
April 13, 2008 7:11 am

re Zeke 4/12/08 15:58
Your chart reads “R^2 .003” What am I missing?
re error bars (Bob B and Lucia)
What are the error bars for each series?

Flashman
April 13, 2008 7:26 am

http://data.giss.nasa.gov/gistemp/maps/
is a neat resource.
I notice that Hansen may have overdone some “corrections”, or AGW is truly worse than I ever imagined.
Try Land:Giss, Ocean:none, Mean period:Annual (Jan-Dec) and
Time interval: 1951 to 1980 compared to the same interval baseline. 🙂
Smoothing radius: 250 km
5700+ degrees seems a bit steep, even for Hansen.

Raven
April 13, 2008 7:37 am

vauss asks:
“Yet Alaska has a surface area roughly 20 percent of the continental US. So do the 50 data points in Alaska get a weighted average to compensate for the lack of data points? ”
The stations are weighted by geographic area. This means a single station in Alaska will have a weight equal to 500 stations in the southern US. This gets even worse at the poles, where the sparse stations are used to "estimate" the temps over the ice packs.
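The weighting idea can be sketched with a cosine-of-latitude weight, a standard proxy for how much surface area a given latitude band represents (this is only an illustration, not the actual GISS scheme, and the station values are invented):

```python
import math

def area_weighted_mean(lat_anomaly_pairs):
    """Average anomalies weighted by cos(latitude), so that sparse
    high-latitude stations count for the area they represent."""
    weights = [math.cos(math.radians(lat)) for lat, _ in lat_anomaly_pairs]
    total = sum(w * a for w, (_, a) in zip(weights, lat_anomaly_pairs))
    return total / sum(weights)

# One high-latitude station among several mid-latitude ones (made-up values):
stations = [(65.0, 1.2), (35.0, 0.3), (34.0, 0.4), (33.0, 0.2)]
print(round(area_weighted_mean(stations), 2))  # 0.43
```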

Gal
April 13, 2008 7:47 am

Anthony,
we seem to agree that the fact that GISS uses a different reference is a problem of convenience for the end user. I thought that your first reply suggested a physical non-sense. I definitely share your frustration as a data user about different normalization, formatting, etc … However, I think it is more important to ensure that the reference makes sense physically.
I haven’t really given it deep mathematical thought, but the GISS data uses for reference a period where the decadal trend seems to be relatively flat. Therefore the average would more or less cancel the short term variability, and not include much of the long term trend. The other reference seems to include a bit of a trend, from the 80’s. I would tend to think that the first approach to picking a reference makes more sense (as a first guess only 😉 ). However, it appears clear that over the long run, all products indicate the same warming trend, so they are all physically consistent.
I think the GISS choice to use an older base period is not realistic. We have 4 global metrics. The one that consistently reads different from the rest uses a base period that is, by general climatic standards, outdated.

I am not sure what you mean by ‘realistic’ here. As long as the reference period is consistent with the time scale of the signal you are studying, it should not be outdated. What could be outdated is the data you are studying, whatever reference you use for it: e.g. sticking with temperatures in the 90’s and ignoring those of the past years. The fact that you normalize these data with respect to the 40’s, 50’s or 80’s does not make your analysis outdated; it’s just a reference (that needs to be explained, though). What would be questionable (not even outdated) would be to pick a reference 1,000,000 years ago, to show that the trend is down … (yes I’ve read stuff like that 😉 )

For example, let’s say I published a work and used a baseline for 1930-1950, but made claims regarding the present. It would probably be criticized for that given that there is more up to date climate data.

If you published the latest data, but used a baseline of 1930-1950, and made claims regarding the present, I don’t think you would be criticized: you would have used the most up to date data! You would just have to specify that the ‘increase’ or anomaly you are reporting is with respect to your reference period, whatever it is (and possibly explain why this period is of interest and/or representative). A simple straightforward example would be to reference the data with respect to the end of the 19th century in order to emphasize the increase during the 20th century, ‘after’ the industrial revolution, etc … Doing that while stopping your time series at 1950, 1960, 1970 … would be outdated. Doing that with data up to 2008 would not!

Many climate references that are published by NOAA use a sliding window that gets updated as time and data goes on. I suggest that it is time for GISS to use a more current baseline.

I am not sure what NOAA data you are referring to, but I think a sliding window would create even more confusion than presently exists! And don’t forget that decadal trends for GW are averages over a long term period, a few decades. So a sliding window could make sense, but it would have to slide slowly, like every century… Computing the increase since the last decade or two (a fast sliding window) would not make much sense, at least not as a baseline (it might be used to address yearly/decadal variability). You would see mostly short term variability, and miss the big picture of the long term trend.

I think that GISS should use a more recent baseline. Ideally, since these 4 metrics are being compared regularly, it would seem prudent to have some sort of common presentation method for the data.

I do agree it would make the life of the end user easier, but it wouldn’t change the physical meaning of the data, or the conclusions, as long as they are properly analyzed.
REPLY: Yes you definitely missed the point. Looks like I’m going to have to do a post on it. It’s about presentation.

Gal
April 13, 2008 8:01 am

It’s all about the presentation, GISS gets used more than the others, and it’s baseline choice presents the data with a greater positive offset than UAH, RSS, and HadCRUT.
As long as you are consistent in your use of data, there should not be any issue (as far as baseline is concerned; I am not addressing other issues like sampling etc …)
If you use GISS all the time, you will see the same increase since any reference time as with another dataset. Same increase since the 1900’s, same increase since the 1980’s, etc … What will be different is one particular value: +0.2 in GISS will roughly mean +0.2 degC with respect to the 60’s/70’s. A value of +0.1 (guesstimate!) in Hadley’s data would mean +0.1 degC since the 70’s/80’s. But compute the increase from a given reference and both datasets will give similar results. You just can’t use one set for 2006 and then the other for 2007 and derive a variation from their difference … at least not directly. It would be like using Fahrenheit and then Celsius assuming they meant the same. They measure the same thing, they should provide the same results when expressed in the same ‘base’, but they can’t be used directly together.
So as long as a reporter uses GISS all the time, he’s fine. He will have slightly larger anomalies than if he were using another dataset, because he reports the increase since a decade earlier than Hadley, that’s all.
REPLY: You still missed the point.

Anthony
April 13, 2008 8:40 am

Mike asked the following:
“I wonder what the error bars are on these various mean temperatures. Are the month to month changes consistent with the uncertainty in the measurements? I know there are systematic differences between the surface and aerial measurements. But are the sampling uncertainties each month compatible with the inconsistencies?”
I looked at data provided by NASA/GISS for a Class 5 station, of which there are many. Data from a station classified with a CRN rating of “5”, such as “Wickenberg ( 33.98, -112.73)”, has at least an error or uncertainty of 5 degrees, based on the definition of its CRN rating. This is a large uncertainty. If we look at the minimum and maximum temperature readings from this station over a 100 year period, we see that the maximum temperature is 20.56 degrees C and the minimum is 16.87 degrees C. Since this difference is less than the 5 degree C station location error or uncertainty, I can reasonably draw a temperature trend graph for this 100 year period which is a straight line, showing no change in temperature in this period. Thus the data from this station in my opinion is useless, as may be data from many other stations, no matter how much massaging of data is performed. If we use it, the old adage applies, “Garbage In—-Garbage Out”.
REPLY: This is a different Anthony, not the forum operator

April 13, 2008 10:10 am

Mosher: sounds like a worthwhile exercise. I’m personally quite interested in the cause of the wide divergence in satellite and land based measurements in 1998. I’m busy working on an article on the solar cycle-climate link at the moment, but I might take another look at satellite temps in the future. The only real point of my earlier post was to point out that we should avoid allegations of warming bias without an analysis of long term trend.
JM (and JP): I think we are mixing up different types of adjustments. The temperature records are measured in terms of anomalies, which are, roughly speaking, departures from an arbitrarily selected reference period that all other points are compared against. What the series is interested in is the trend in temperature, rather than the absolute value. Thus any adjustments that do not affect the position of data points vis-à-vis each other will have no impact on the trend. So converting a series with a 1951-1980 baseline to a 1979-1998 baseline won’t do anything to the trend in temperatures; it remains around 0.16 degrees per decade under either baseline.
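That invariance is easy to check numerically: subtracting a constant from every point (which is all a re-baselining does) leaves the least-squares slope unchanged. A toy sketch with invented numbers:

```python
def ols_slope(ys):
    """Least-squares slope of ys against 0, 1, 2, ... (trend per step)."""
    n = len(ys)
    xbar = (n - 1) / 2          # mean of 0..n-1
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

anomalies = [0.10, 0.14, 0.11, 0.18, 0.20]   # made-up series
rebaselined = [y - 0.07 for y in anomalies]  # same data on a shifted baseline
assert abs(ols_slope(anomalies) - ols_slope(rebaselined)) < 1e-12
```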
The adjustments done by Hansen are more concerned with correcting for biases in individual stations, barring a few larger corrections like the so-called Y2K bug in the U.S. data. I have not looked into the subject of station adjustments in GISS, so I’ll withhold any judgment for the time being until I’m better informed.
Oddly enough, the main reason why GISS is showing record temps in recent years while other series are not is not due to a slight recent warm trend in the GISS series relative to the other temperature series. Rather, it’s primarily caused by the much larger cool trend in GISS in the late 1990s. Once the datasets are put on the same baseline, it’s clear that GISS has yet to reach the anomaly that RSS and UAH showed for 1998 (e.g. the GISS 2005 anomaly is lower than the RSS/UAH 1998 anomaly). The average residual between GISS and the other three datasets for the period from 1998 to 2002 is -0.03 (that is to say that, on average, GISS was 0.03 degrees cooler than the others). For the period from 2003 to 2007, GISS was only 0.023 degrees warmer than the other series on average.
Anthony: I’m still skeptical of the importance of baseline choice on the visual interpretation of the graph. If we are looking at the trend in temperatures, why does it matter where we put the zero? It seems akin to me arguing that the commonly used graphs underemphasize warming to the general North American public because they use Celsius rather than Fahrenheit, since the units are larger in Fahrenheit. For that matter, using actual temperatures rather than anomalies would put the zero far lower than GISS (which has the lowest zero of all series, given that it uses the baseline the furthest back). Would you argue that using absolute temperature rather than anomalies would exaggerate warming?
As far as using a common baseline for everything, while I agree in principle that it would be useful for us amateur climatologists, it’s not the most pressing issue out there. Also, there is a good justification for using a 30 year baseline to smooth out noise, something you would be unable to do for the satellite series since they have not been around quite long enough (though give them another year or two…).

CAW
April 13, 2008 10:57 am

UAH showed a record difference between Northern hemispheric and Southern hemispheric trends for this March. Do we know the difference between the temperature change in the N and S hemispheres according to HadCRUT?

April 13, 2008 1:12 pm

Re: Robert Burns
The r^2 = 0.003 in the anomalies graph (http://i81.photobucket.com/albums/j237/hausfath/GISSResiduals.jpg) means that a simple linear regression of the GISS residuals relative to the mean of the other temperature series shows a very, very slight positive trend over the past 30 years, but that linear model only explains a tiny amount of the variation in the residuals. So it’s pretty much meaningless, other than to show that there is no significant positive bias over the period of the last 30 years.
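Concretely, r-squared is the fraction of the residuals' variance that the fitted line explains, so 0.003 means 0.3%. A minimal sketch (the residual values here are invented, not the actual GISS data):

```python
def r_squared(xs, ys):
    """Square of the Pearson correlation: the fraction of variance in ys
    that a straight-line fit against xs explains."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Residuals that bounce around zero with almost no trend:
residuals = [0.02, -0.03, 0.01, -0.02, 0.03, -0.01]
print(r_squared(range(len(residuals)), residuals))  # essentially zero
```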
Pierre: Here is a monthly comparison of each separate temperature record to your “MIGT” (e.g. the monthly average of all four). It shows interesting points of difference between the records: http://i81.photobucket.com/albums/j237/hausfath/Variations.jpg
Basil: Unfortunately, 1970-2000 won’t work as a common baseline, since the satellite records don’t start till 1979.

Mike Bryant
April 13, 2008 1:21 pm

I was just wondering which other types of statistics that our government should be “adjusting”?
Anyone have any ideas?

Philip_B
April 13, 2008 1:55 pm

Bob B, re lucia’s statement that The true weather variability in month to month measurements appears larger than the measurement uncertainty
I see no reason why weather in the aggregate across the whole planet and over a month should be variable or noisy to the degree claimed. I.e. I agree that almost all of the variability is in the measurement.
Lucia may have a statistical basis for that statement and I’ll ask at her (BTW excellent) blog.

Brian D
April 13, 2008 2:36 pm

What a difference a baseline does make.
GISS(1951-1980)
Jan 08 .12
Feb 08 .26
Mar 08 .67
GISS(1961-1990 HadCRUT baseline)
Jan 08 .04
Feb 08 .18
Mar 08 .55
GISS(1979-1990 RSS and UAH baseline)
Jan 08 -.12
Feb 08 .05
Mar 08 .41
GISS(1971-2000)
Jan 08 -.08
Feb 08 .04
Mar 08 .44
I think GISS and HadCRUT should be using the 1971-2000 baseline. I think the NWS uses it for computing the averages that you see in your local weather. And they regularly shift this baseline every 10 yrs or so. The next baseline will be 1981-2010, and at that point RSS and UAH could use the same 30 yrs.
The zero anomaly line would change, but trends should be the same. And every metric could report using the same baseline.
REPLY: Thanks Brian, I had planned to work this up this evening, and do a post on it, but you beat me to it. This is the issue, the presentation of the data to the public changes depending on the baseline used.

steven mosher
April 13, 2008 2:58 pm

Ya Anthony, this whole anomaly dance needs an everyman explanation.
More than once I've stumbled over it myself. Willis did once, I believe.
I prefer a chart of absolute temps. I can do my own anomaly dance, thank you.
But if somebody publishes a chart of anomaly they need to say:
Anomaly from a Base period xxxx-yyyy.
The average of the absolute temp during that period is zz.zC
That way you can always turn anomalies back into temps.
GISS has started to do this.
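That recipe is just one addition; a sketch assuming a chart footnote like "base period 1951-1980, base-period mean 14.0 C" (the 14.0 is a placeholder, not the real GISS figure):

```python
def anomaly_to_temp(anomaly, base_period_mean):
    """Recover an absolute temperature from an anomaly, given the absolute
    mean temperature of the base period the anomaly is measured against."""
    return base_period_mean + anomaly

# March 2008 GISS anomaly of 0.67 C, with a placeholder base-period mean:
print(round(anomaly_to_temp(0.67, 14.0), 2))  # 14.67
```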

Evan Jones
Editor
April 13, 2008 6:55 pm

Exaggerate a decrease? How?
If anything the siting problems tend “to warm” the decreases, thus giving us higher average temps. Maybe I don’t follow you here.

To answer this and cohenite’s question:
It's an overall warming effect, in toto. But, unlike waste heat (AC or engine exhaust, etc.), which creates a simple "one-time" offset, a heat sink (a driveway, parking lot, building, or whatever) operates a little differently because it affects the RATE of temperature change.
To be more specific, waste heat simply bumps up the temps when it occurs. End of story.
When the heat sink appears, on the other hand, a whole dynamic comes into play. A heat sink (unlike waste heat) comes into play especially at T-Max and T-Min. At T-Max, it pumps up the temps. And at T-Min, it is releasing its joules and kicking things way up. LaDochy et al. (Dec. 2007) points out that T-Max is hit hard and T-Min is affected much worse.
A.) There is an initial offset (a warm bias). But, unlike waste heat, the story does not end there.
B.) When a temperature increase occurs, the effects are continually exaggerated by a certain percentage. The more the heat increases, the greater the exaggeration.
C.) When the temperature drops, the effect in step B. "undoes" itself.
So what you get is an initial warming offset followed by an exaggeration of any temperature increase. But there is also an exaggeration of any temp decrease, "drawing joules from the bank" of the initial warming offset.
Bottom line: an initial warming bias followed by an exaggeration of trend in EITHER direction.
I hope that makes it clearer. (If not, I’ll give it another shot.)
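Steps A through C amount to a simple gain-plus-offset model: the reading equals the true anomaly times (1 + gain) plus a one-time warm offset, so any trend, up or down, is exaggerated by the same factor. A toy sketch (the 0.2 gain and 0.3 offset are invented numbers, not measured siting biases):

```python
def sited_reading(true_anomaly, gain=0.2, offset=0.3):
    """Toy poorly-sited station: a one-time warm offset plus a heat-sink
    'gain' that exaggerates any change in the true anomaly."""
    return offset + (1 + gain) * true_anomaly

warming = [0.0, 0.1, 0.2]   # true anomalies trending up
cooling = [0.2, 0.1, 0.0]   # the same trend, reversed
print([round(sited_reading(t), 2) for t in warming])  # [0.3, 0.42, 0.54]
print([round(sited_reading(t), 2) for t in cooling])  # [0.54, 0.42, 0.3]
```

The reported trend is 0.12 per step instead of the true 0.10, in both directions, on top of the constant warm offset.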
What happened in the post-1980 period was that during the MMTS switchover, a huge number of previously better-sited stations wound up right next to buildings and concrete. Your most common “CRN4” type violation.
And it’s gotta add up. I am not going to be convinced that site violations are not affecting the overall delta-T since 1980, nossir!
And Joe D’Aleo’s PDO/AMO correlations will fit better if that is taken into consideration. (He shall have his 3 exclamation points; I am convinced he has earned each and every one. There may well be room for a 4th, if I am not mistaken.)
This needs to add up. We need to arrive at a consistent bottom line.