March 2008 HadCRUT Global Temperature Anomaly

The HadCRUT global numbers are out, and the March anomaly is 0.43°C, still lower than the GISS number of 0.67°C.


Once again Jim Hansen’s NASA GISS is the highest global anomaly:

RSS (satellite)

2008  01  -0.070
2008  02  -0.002
2008  03   0.079

UAH (satellite)

2008  01  -0.046
2008  02   0.020
2008  03   0.094

HadCRUT (surface, land-ocean)

2008  01   0.056
2008  02   0.187
2008  03   0.430

GISS (surface, land-ocean, polar estimates)

2008  01   0.12
2008  02   0.26
2008  03   0.67


78 thoughts on “March 2008 HadCRUT Global Temperature Anomaly”

  1. Considering that the USA is back to normal for March (0), where is the “warming” coming from in the non-satellite records?

  2. HadCRUT global numbers are out, and is at 0.43°C, still lower than the GISS number of 0.67°C. Once again Jim Hansen’s NASA GISS is the highest global anomaly:
    What’s the relevance of these comments? GISS has a lower baseline and hence will almost always, if not always, be higher than the other three. Didn’t you learn that in this post,
    A look at temperature anomalies for all 4 global metrics: Part 1?
    http://wattsupwiththat.wordpress.com/2008/02/27/a-look-at-temperature-anomalies-for-all-4-global-metrics/
    By the way wasn’t that supposed to be a three part post? Did I miss part three?

  3. Note well that the ground measurements jump higher than the satellite measurements.
    Could this be because of Heat Sink Effect (“HSE”) exaggerating the uptick? Note also that GISS showed an exaggerated downtick as well, which is consistent with HSE.
    HSE would exaggerate temperature rises. Also decreases, as the effect “undoes” itself.
    This would also be consistent with an exaggeration of the last warming trend and also the levelling of temps after 1998 (you have to have an actual increase in temps in the first place for a heat sink to exaggerate it).

  4. The reason that GISS has the highest is because he eliminated large grid areas from Southern Africa, South Pacific and North America that all showed cool anomalies.
    There was also a warm spell over much of Asia that is common in the spring season during La Nina years.

  5. Because of the different anomaly base periods, HadCRUT is ON AVERAGE 0.09°C
    cooler than GISS (it’s just a different base period).
    So, if GISS were 0.67°C you would guess HadCRUT to come in at 0.58°C.
    It came in at 0.43°C.

  6. Anthony, I have recently become a regular visitor to this site and I just happened to be looking at the HadCRUT3 data this morning – so your post is timely.
    I note that the satellite data for March show a much lower rise from Jan and Feb compared with NASA GISS or HadCRUT3 – as reported here:
    http://wattsupwiththat.files.wordpress.com/2008/04/uah_march_081.png
    And I’ve also been following the SST anomalies throughout March and April – which even today show La Nina very much in control?
    http://www.osdpd.noaa.gov/PSB/EPS/SST/climo&hot.html
    So I’m finding it a bit difficult to reconcile the satellite temperature and SST data with the weather station data (NASA GISS and HadCRUT3) and was wondering if you could venture an explanation why the weather station data have bounced while the oceans still have the blues?
    Euan Mearns
    REPLY: I’ll have a post in a few days touching on this more.

  7. You need to put them all on the same base period. Even then GISS sticks out.
    But somebody will always stick out.

  8. If you average out all four, you get something in the order of 0.30°C. Plot it and you have a 10-year downward trend.
    Still no sunspots.

  9. Mike you wrote:
    “HadCRUT global numbers are out, and is at 0.43°C, still lower than the GISS number of 0.67°C. Once again Jim Hansen’s NASA GISS is the highest global anomaly:
    What’s the relevance of these comments? GISS has a lower baseline and hence will almost, if not always be higher than the other three. Didn’t you learn that in this post”
    Yes, GISS has a lower baseline, because of a different anomaly period. Even when you correct for this, GISS is still the oddball. It’s about a 2sig or 3sig event. Nothing too out of the ordinary.

  10. I hate to ask a stupid question – but I am still not 100% sure of what the definition of “Temperature Anomaly” is – as it applies to this graph.
    Can someone enlighten me?
    Thanks

  11. @Bob
    Anomaly means difference with respect to a reference number. The reference number here varies between the different temperature products. GISS computes the reference number as the average temperature for the period 1951-1980. Hadley uses the period 1961-1990. Hence the nonsense of directly comparing the two anomalies and stating that one is lower than the other …
    Here is a comparison of the different anomalies once corrected for the different definitions:
    http://atmoz.org/blog/2008/03/10/4-of-4-global-metrics-show-agreement-in-trends/
    REPLY: Actually the nonsense is that GISS uses 1951-1980 in the first place.

  12. @Anthony
    Why is the GISS reference nonsense?
    As far as long term trends are concerned, it does not seem to have any major impact anyway; all products are consistent.
    Now I can imagine that these products are not used for the sole purpose of studying long term trends, so in that context, why do you think it makes less sense to pick 1951-1980 than 1961-1990, besides the inconvenience of having different reference point?
    REPLY: I think the GISS choice to use an older base period is not realistic. We have 4 global metrics. The one that consistently reads different from the rest uses a base period that is, by general climatic standards, outdated. Many climate references that are published by NOAA use a sliding window that gets updated as time goes on and data accumulates. I suggest that it is time for GISS to use a more current baseline.
    For example, let’s say I published a work and used a baseline for 1930-1950, but made claims regarding the present. It would probably be criticized for that given that there is more up to date climate data.
    Sure, if you go through the process of adjusting them all to a common baseline, the offset goes away, but why should this be left to the consumer of the data when the data is used for so many public presentations and is published for general public consumption on their websites?
    I think that GISS should use a more recent baseline. Ideally, since these 4 metrics are being compared regularly, it would seem prudent to have some sort of common presentation method for the data.

  13. An anomaly is a change from normal trends.
    For each of these temperature datasets the period of normality is different: 1951-1980, 1961-1990, 1979-1998, etc. Therefore if the period of normality just happens to be a cool period, as for GISS (1951-1980), then the anomaly may be greater than for a series normalised against a warmer period, e.g. HadCRU.
    So, for example, if the average temperature during the GISS base period is 14.00 and March comes in at 14.67, the GISS anomaly is 0.67; against HadCRU’s warmer base, the same month shows only 0.43.

  14. An anomaly is a change from normal trends.
    For EACH of these temperature datasets the period of normality is different: 1951-1980, 1961-1990, 1979-1998, etc. Therefore if the period of normality just happens to be a cool period, as for GISS (1951-1980), then the anomaly may be greater than for a series normalised against a warmer period, e.g. HadCRU.
    So, for example, if the average temperature during the GISS base period is 14.00 and March comes in at 14.67, the GISS anomaly is 0.67; against HadCRU’s warmer base, the same month shows only 0.43.
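The arithmetic in the comment above can be sketched in a few lines of Python. The base-period means here are made-up numbers chosen only to reproduce the 0.67 vs 0.43 gap; the real base means are not given in the thread:

```python
# Illustrative only: same absolute March reading, two different base periods.
giss_base = 14.00     # assumed mean over GISS's 1951-1980 base period
hadcru_base = 14.24   # assumed (warmer) mean over Hadley's 1961-1990 base period
march = 14.67         # hypothetical absolute global mean for March 2008

giss_anomaly = march - giss_base       # GISS-style anomaly
hadcru_anomaly = march - hadcru_base   # HadCRU-style anomaly
```

Same month, same thermometer readings, two different "anomalies" — which is the whole baseline argument in this thread.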

  15. I wonder what the error bars are on these various mean temperatures. Are the month to month changes consistent with the uncertainty in the measurements? I know there are systematic differences between the surface and aerial measurements. But are the sampling uncertainties each month compatible with the inconsistencies?

  16. Anthony, has anyone ever “converted” GISS to the same baseline as the others?
    Jack Koenig, Editor
    The Mysterious Climate project
    http://www.climateclinic.com
    REPLY: Yes, it is routinely done in scientific analysis and presentations of those results; Atmoz did it in the post “Gal” references, for example.
    But calculating and applying such an offset is typically beyond the ability of the average press reporter. So, when they reference a GISS graph like this one:
    http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.lrg.gif
    There is typically little understanding of the base period issue, particularly since GISS does not reference the base period used for that anomaly on that graph. Yet you’ll see that GISS anomaly graph used in thousands of places with wide viewing.
    Wikipedia added it to the caption at least:
    http://en.wikipedia.org/wiki/Global_warming
    But the caption often doesn’t travel with the graphic. To be fair, I haven’t done that on my HadCRUT, UAH, or RSS graphs either, but now that I’ve pointed it out to myself I’ll do that.
    I just think a common standard for baseline and presentation would be beneficial for everyone.

  17. I assume that the “anomaly” is the measure of the difference between the given month and the average of that month for the reference period?
    If so, why do these graphs seem to show a seasonal cycle?
    Just eyeballing the HadCRUT graph, the maximum negative anomaly for the year seems to disproportionately occur in the Nov-Jan period. The maximum positive (or least negative) anomaly for a year seems to disproportionately occur in the early spring.
    Why would that be? Or is my eyeball out of register?

  18. Anthony said:
    “There is typically little understanding of the base period issue, particularly since GISS does not reference the base period used for that anomaly on that graph. Yet you’ll see that GISS anomaly graph used in thousands of places with wide viewing.”
    Also, in order to show the most AGW, charts need to show the largest change above “normal” (ie, zero.) This is what GISS does.
    And, there is still the debate between GISS and HadCRU as to who does a better job of charting the anomaly. GISS supporters like to use the “GISS tracks Arctic temps, HadCRU doesn’t” line. If both systems used the same reference period, GISS’s “extra” warming goes away.
    The above statement can (and probably has been) proven by putting both on a common reference period (really doesn’t matter which one), and comparing the differences (the “anomaly” of the anomalies).
    If GISS tracks the Arctic area better, there would be a consistent, positive difference in favor of GISS over the ENTIRE RANGE of the chart. If this does not occur, then neither one has a better handle on temp reporting.

  19. Thanks for all the responses – I feel better that I was a little bit confused about it.
    OK – another stupid question. What is the value of plotting temperature against a baseline if the baseline is somewhat arbitrary?

  20. Peter Hearnden–
    But shouldn’t that equalize over the years?
    I am assuming that the temperature anomaly for December 2007 is the difference between the average for December 2007 and the average of the averages for the Decembers of 1961-1980. If I look at the chart, just using my eyeball it would appear that 13 or 14 of the 20 years shown have a downward spike in November, December, or January, whereas one would expect only a random distribution of downward spikes.

  21. Are the month to month changes consistent with the uncertainty in the measurements?

    Having looked at the data from all five “important” measurement groups, and compared monthly data, generally speaking, a large portion of the month to month variation appears to be due to actual temperature changes at the surface of the earth. So, those are weather.
    However, each month, there are also differences between measurements from each group. These are what I would call “measurement uncertainty”, or “measurement noise”. These differences are of the magnitude one would expect based on the measurement uncertainty estimates claimed by the various groups.
    The true weather variability in month to month measurements appears larger than the measurement uncertainty.

  22. Mike and others, the other point that no-one else seems to have mentioned is that while the jump from the absolute February anomaly to March is being discussed, surely the real story is the size of the Feb-to-March step: the two satellite steps being 0.074 (UAH) and 0.081 (RSS), while the HadCRU number is 0.243 and GISS a whopping 0.41. If we can’t have consistent numbers, who do we believe? Or should the error bars be about +/- 2 degrees for the land numbers?

  23. JK
    It goes back to around 1920. The temperature trends correspond to ocean oscillation (up 1920s-1940s, down 1940s-1970s, up 1979-1998, flat 1998-2008) better than with CO2 increase, which took off c. 1950 right when temps got lower.
    Not only that, but I distrust the CO2 record because it shows no increase whatever during WWII with all the full war production and burning cities.
    The world temp measurements are somewhat higher in 1998 than in 1940 and I suspect the difference is roughly equal to the amount of bias added by the microsite violations.
    It all has to add up. And the site violations must be accounted for.
    What I think we are seeing is a real warming period (1978-1998) caused by ocean oscillation and exaggerated by Heat Sink Effect from severe microsite violation (that occurred after 1980). HSE will exaggerate a real increase, exaggerate a decrease, and not affect a flat trend much.

  24. Henry said:
    And, there is still the debate between GISS and HadCRU as to who does a better job of charting the anomaly. GISS supporters like to use the “GISS tracks Arctic temps, HadCRU doesn’t” line. If both systems used the same reference period, GISS’s “extra” warming goes away.”
    However as I understand things, there really is no actual measurement of Arctic temps, but rather only estimates. And if that’s the case, GISS figures are meaningless because Hansen & Company can fudge their estimates until they reach their predetermined figures.
    What a crock!
    Jack Koenig, Editor
    The Mysterious Climate project
    http://www.climateclinic.com

  25. The way I see it, the whole concept of an anomaly is largely useless because we don’t know and can’t agree on what is “normal.”
    Darts anyone?

  26. As far as I know, GISS set their baseline period before Hadley did, so castigating them for it is a tad off the mark. There is no reason to objectively prefer 1951-1980 to 1961-1990 or 1979-1998. And converting the temperature series to a common baseline is rather trivial; all you have to do is add the mean value of the new chosen baseline period to every point in the respective series. You end up with a pretty graph like this: http://www.yaleclimatemediaforum.org/pics/0408_gtr.jpg
    If anyone wants the raw monthly anomalies for all four major series normalized to the same baseline period, I’d be happy to send you a copy of my excel sheet. Just email me at zeke@yaleclimatemediaforum.org
    [snip – you are welcome to rephrase and resubmit that]
    REPLY: It may be trivial for you and participants of this forum, but show me a general news reporter that can take the GISS data or anomaly graph, with its warmer component due to the base period they use, as presented on their website, and perform that normalization prior to print, and I’ll believe it’s not an issue.
    It’s all about the presentation. GISS gets used more than the others, and its baseline choice presents the data with a greater positive offset than UAH, RSS, and HadCRUT.

  27. Tony Edwards — that is precisely why I raised the question — all the discrepancies between the surface and atmospheric measurements. The error bars must be quite large. I wish that the various sources would include their error bars, otherwise, their graphs can be quite misleading at least short term. I have seen a hadcru graph with the error bars and that is quite revealing.

  28. The work that generated the above chart that Anthony Watts asked for:
    Take the monthly anomalies for all four temperature series. Subtract the mean 1979-1998 value of the GISS series from every point in the series to normalize it to the same baseline used by RSS and UAH (e.g. subtract 0.238 from every point). Do the same for HadCRU (the value subtracted this time should be 0.146). Now, find the mean value of HadCRU, RSS, and UAH for each year, and find the difference between the GISS value and the mean of the other three for each year.
    In retrospect, it might be more meaningful to compare GISS to HadCRU, since the land-based temperature series and the satellite-based ones tend to differ from each other in the same direction (e.g. when GISS is colder than RSS and UAH, HadCRU will likely be colder as well, and vice versa), so comparing GISS to the mean of one land-based and two satellite-based records might be showing some of the general differences between land-based and satellite-based measurements rather than anything unique to GISS.
    REPLY: Thanks, I see in your most recent YCMF article that you agree with my issue about how reporters use graphs to present the points they try to make and must take some caution in doing so. I doubt any general media reporter has ever considered the issue of baseline differences when using one of these graphs.
    This is why I think it would be useful to have them all on an agreed upon baseline.
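The normalization procedure Zeke describes in comment 28 can be sketched as a small Python helper. The series and the two-year "common window" below are made-up stand-ins; with real data, the offsets would be the 0.238 (GISS) and 0.146 (HadCRU) values he quotes for 1979-1998:

```python
# Sketch: put a series on a common baseline by subtracting its mean
# over an agreed window. Anomaly values here are hypothetical.
def rebase(series, window):
    """Shift `series` (year -> anomaly) so its mean over `window` is zero."""
    offset = sum(series[y] for y in window) / len(window)
    return {y: v - offset for y, v in series.items()}

giss = {1997: 0.40, 1998: 0.63, 2007: 0.62}   # made-up anomalies
common = rebase(giss, window=[1997, 1998])
# By construction, `common` averages to zero over the chosen window.
```

Doing this to all four series before plotting removes the offset that the baseline choice introduces, which is exactly the "common presentation" being argued for.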

  29. Anthony, puhleeese get off your hobby-horse about GISS using a different normals base. All that does is modify the offset. The real scandal is that Hansen is passing off a GISStimate as real data. Go to http://data.giss.nasa.gov/gistemp/maps/ and select
    Land: GISS analysis
    Ocean: none
    Map Type: Anomalies
    Mean Period: Mar
    Time Interval: Begin 2008 End 2008
    Base Period: Begin 1951 End 1980
    Smoothing Radius: 250 km
    Projection Type: regular
    and click on the “Make Map” button. This gives an idea of what the GISS land coverage (or lack thereof) is like. He’s missing lots of Antarctica, chunks of China, Eurasia, and Brazil, most of the Arabian peninsula, almost all of Canada, Greenland, and Siberia, and almost all of Africa other than the northwestern bulge.
    I don’t know about his other missings, but there’s quite a bit of Canadian data to be had. Go to the Canadian government website http://www.climate.weatheroffice.ec.gc.ca/prods_servs/cdn_climate_summary_e.html and select “March”, “2008”, and “Plain Text” and click on “Submit”. A few seconds later, a text output comes up. There’s a code key at the bottom. Save the webpage as a text file (e.g. as x.txt).
    One of the columns is labelled “Tm” for monthly mean temp. Another one is labelled “D” for deviation from the latest long-term mean, which happens to be 1971…2000. Also, watch the column labelled “DwTm” for days without mean temp, i.e. number of missing days. Let’s say we’re willing to accept up to 2 days of missing data. Anyone with unix/linux (or even Cygwin on Windows) can use the following command to extract those lines from x.txt to y.txt…
    grep “^.\{57\} [012]…[0-9]” x.txt > y.txt
    The result for March 2008 is 220 data points. Obviously, someone has to go to the trouble of digging up 1951..1980 normals for these sites. Some newer sites will not have normals for the 1951..1980 period, but there should still be well over 100 sites that can be used, along with their mean temperatures as listed here.
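For readers without grep, the same filter can be sketched in Python. This assumes the fixed-column layout the regex above implies — a 57-character prefix, a space, then the “DwTm” missing-days digit at index 58 — and, for simplicity, drops the regex’s trailing [0-9] check:

```python
# Keep lines reporting at most 2 missing days (the [012] in the grep).
# The column position is an assumption taken from the regex's 57-char prefix.
def keep_line(line, col=58):
    return len(line) > col and line[col] in "012"

# usage (hypothetical filenames, as in the comment):
# kept = [ln for ln in open("x.txt") if keep_line(ln)]
```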

  30. Not being a mathematician, scientist, or statistician, it would seem to me that the sensible thing to do, for scientific reasons, would be to adjust the baseline to the newest historical dataset with enough length to be reasonably accurate as a beginning trend line. It is unreasonable to expect the newer method of measurement to try to extrapolate temperature measurements prior to the beginning of their taking of active measurements. If comparisons must be made to reach any sort of accuracy, then the baselines must be normalized.
    It just appears that it is almost a contest of I am better than you.
    are not
    am so
    are not
    am so
    I think they should just fix it; they are all supposed to be seekers of the truth and scientists.
    Just my 2 cents
    Bill Derryberry

  31. Using the same base period as HadCRUT (1961-1990):
    GISS
    Jan 08 .04
    Feb 08 .18
    Mar 08 .55
    Go to GISS homepage, click on Global Maps and make your custom map with whatever baseline you want.
    http://data.giss.nasa.gov/gistemp/
    GISS has an unusual amount of missing data for March. Most of which looks to be negative anomaly. If it weren’t for that, GISS would be much closer to HadCRUT like the previous 2 months.

  32. Zeke,
    I am speaking from ignorance on this subject, as in my work assembling data for statistical analysis is very straightforward. Adjusting raw data is verboten.
    It would be useful to have one standard baseline. That said, what is the justification for Hansen’s endless “adjustments” of data? This has been addressed on numerous occasions at ClimateAudit, including recently:
    http://www.climateaudit.org/?p=2964
    Has anyone figured out exactly how and why Hansen makes these multiple adjustments throughout the record? There is no question these adjustments affect not just the year-to-year (and month-to-month) record, but the trend as well.
    Now, if GISS is not an outlier, how is it then Hansen keeps trotting out these “record” temperature statements (via the Press) when the other three products do not, particularly the satellite data which clearly show a decline since 2001? 1998, 2002, 2005 (the highest) and 2007 in the GISS record are very close to the same.
    Or is the GISS data a result of error in the measurement itself?
    So my question is, since anomalies rather than actual temperature are the metric by which temperature changes are reported, how do we actually know if temperatures are rising or falling? In other words, UAH/RSS show a ~.08 global rise, GISS .41 and you can bet the GISS record is being milked for all it’s got.
    It would seem the further out from the current decade the baseline is, the less relevant the anomaly is if the baseline is derived from a particularly cold/hot or departing (up or down) period.

  33. Jeezuz CHRIST. You repeated a comparison with different base periods, so that the anamolies have a built in bias. Didn’t you learn that from the last kerfuffle. you need to be castigated. You are dumb! You need to hold an M-1 rifle for long periods at extend arms. You need to be whipped into shape. You are so, so fucking slack. And don’t give me any crap about sugar and vineger. Shape up.
    REPLY: I originally blocked this comment, but I thought I should leave it up as a demonstration. Now we’ll see him and the usual suspects run over to Tamino, post something like the above, and there will be yet another round of screaming angry posts.
    Also, you’ll no longer be posting on this blog. Bye TCO, add WUWT to the list of blogs your juvenile language has banned you from.
    See, here’s the thing the angry phantom people like TCO don’t get. I know how the base period works, my point is in the presentation of premade graphics and data for public consumption and use by the general press.
    Be angry, yell, scream, call names, do what ever you like. It won’t change the fact that this GISS anomaly graph,
    http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.gif
    which is probably one of the most widely distributed in the world, would look a lot different if plotted using a different base period.
    Again, IMHO, all four metrics should have a common base period for the wide PR and public data/graphs they make available so that anomaly graphs and data presented in the general press is shown and compared on an even presentation field.
    That’s it. I don’t think it is an unreasonable request.

  34. “The true weather variability in month to month measurements appears larger than the measurement uncertainty”
    Lucia, I respect the work you do but I do not agree with the above statement. From what I can tell the measurement uncertainty could be several degrees C. The fact that the resolution is perhaps less than tenths of a degree speaks nothing of measurement uncertainty and absolute accuracy.

  35. The anomaly dance often trips up people. It would be good practice to label the charts with the base period and ALSO the average for the base period.
    It really doesn’t matter what base period you use, since adjusting from one to the other is simple.
    I will give a simple example: DATA: 1,2,2,3,3,4
    AVERAGE of the 2nd and 3rd numbers: 2.
    ANOMALY with regard to that period. -1,0,0,1,1,2
    Now do the anomaly dance to a base period of the 4th and 5th numbers: average 3.
    ANOMALY: -2,-1,-1,0,0,1
    Question: how do you shift a series based on numbers 2 and 3 to one based on numbers 4 and 5?
    Easy. Just do the anomaly dance again.
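The "anomaly dance" above runs in a few lines of Python, using the comment's own numbers:

```python
# The worked example from the comment: anomalies relative to a chosen window.
data = [1, 2, 2, 3, 3, 4]

def anomalies(series, i, j):
    """Anomalies relative to the mean of elements i..j (0-indexed, inclusive)."""
    base = sum(series[i:j + 1]) / (j - i + 1)
    return [x - base for x in series]

a23 = anomalies(data, 1, 2)   # base = mean(2, 2) = 2 -> [-1, 0, 0, 1, 1, 2]
a45 = anomalies(data, 3, 4)   # base = mean(3, 3) = 3 -> [-2, -1, -1, 0, 0, 1]

# Shifting base periods really is "doing the dance again":
# rebasing the first anomaly series onto the second window gives a45.
shifted = anomalies(a23, 3, 4)
```

The last line is the point of the comment: the shift between base periods is just another subtraction of a window mean, which is why it never changes the trend, only the offset.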

  36. Zeke,
    I took a little different cut at the problem. First I averaged the satellite data.
    Two instruments looking at the same thing. Makes sense.
    Then I went to look at how GISS differs from this record, and how HadCRU differs from this record, to see if there was a systematic difference. I just started that.
    You can steal the idea if you think it has merit.
    Normalize to a common base period (79-98) first.

  37. The obvious solution for the baseline issue is to make the entire dataset the baseline. This avoids the cherry-picking of a baseline issue.
    I’ll suggest they don’t do this, because it will over time make the warming (from a straight line trend) look less and less impressive. It, of course, won’t affect the trend or month to month comparisons. But it will affect the anomaly value of a month or other period (BTW an essentially meaningless number) as the baseline changes.

  38. Steve Mosher,
    My previous post was not clear and in error now that I reread it. I understand it “shouldn’t” make a difference with respect to the baseline used and they “should” reconcile if adjusted for different baseline periods. However, the anomalies are clearly not the same.
    The GISS data, it is said, has a ripple effect that can reach through the entire record after it is “adjusted”. Adjusted how?
    There should be no reason to constantly adjust the data once it is initially done. I recall after the so-called “Y2K” error was found, it was barely a month before 1998 magically was even with 1934.
    http://www.climateaudit.org/?p=1880
    Looking at the Leaderboard, 2005 wasn’t even in the list in August 2007, but after over 90 “adjustments” 2005 somehow now is top dog, and Hansen makes sure everyone is aware of it. 2002 was not in the top 10 either.

  39. Zeke,
    As far as adjustments go, I believe NASA applies the same TOB and homogeneity adjustments that NOAA uses. At this point you must also realize that all organizations grid their adjusted data in 1200 km x 1200 km gridcells. At that point, there are various extrapolations used, especially in empty grids. Climate Audit has found so many problems (as has Anthony) with not only raw data, but also miscategorized stations (rural vs urban) and UHI questions (esp concerning China data used by Phil Jones), that the statistical analysis performed by all organizations is quite useless from a scientific perspective. In any other field, the error bars would be so great as to render any analysis useless. At least we can be thankful that the automotive engineers who design our brakes have higher quality standards.

  40. I’ve been looking at the data at
    http://www.cpc.noaa.gov/products/global_monitoring/temperature/global_temp_accum.shtml
    And one question I have about Hadcrut and GISS is how do they weight a particular data point? For example, you may have 1,000 data points in the continental US, but only 50 data points in Alaska. Yet Alaska has a surface area roughly 20 percent of the continental US. So do the 50 data points in Alaska get a weighted average to compensate for the lack of data points?
    If this is not done on a consistent basis, how can anyone speak reliably of an average global temperature when it is biased towards nations and areas with a greater density of temperature measuring sites?
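The weighting question in comment 40 is usually answered with area weights rather than station counts: each grid cell is weighted by cos(latitude), since cells shrink toward the poles. A minimal sketch, with made-up cells standing in for "continental US" and "Alaska":

```python
import math

# Area-weighted mean of gridded anomalies: weight by cos(latitude),
# independent of how many stations feed each cell. Values are hypothetical.
def weighted_global_mean(cells):
    """cells: iterable of (latitude_deg, anomaly) pairs, one per grid cell."""
    wsum = sum(math.cos(math.radians(lat)) for lat, _ in cells)
    return sum(math.cos(math.radians(lat)) * t for lat, t in cells) / wsum

cells = [(0.0, 0.2), (60.0, 1.0)]  # one equatorial cell, one Alaska-like cell
```

Here cos(60°) = 0.5, so the high-latitude cell carries half the weight of the equatorial one no matter how sparse its station coverage is — which is the consistency the commenter is asking about.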

  41. …. and further to Vauss… GISS wouldn’t be very representative of the Southern Hemisphere as compared to the Northern, I wouldn’t think….?

  42. Evan
    “HSE will exaggerate a real increase, exaggerate a decrease, and not affect a flat trend much.”
    Exaggerate a decrease? How?
    If anything the siting problems tend “to warm” the decreases, thus giving us higher average temps. Maybe I don’t follow you here.

  43. Further to the base period issue: GISS has a base period of 1951-1980, Hadley 1961-1990; the Pacific Event occurred in 1976, with a climate lapse period of 5 years during which time temps rose by 1.46°C. Since GISS has a preponderance of cooler temps prior to the PE, the elevation of trend after 1980 will be greater than from a base period of 1961-1990; conversely, the coolness and elevation of trend before 1976 will be less than with Hadley, with an overall greater trend over the century for GISS, simply because of their different base periods. Or am I missing something?
    REPLY: No, you got it exactly right.

  44. JM
    Has anyone figured out exactly how and why Hansen makes these multiple adjustments throughout the record?
    Steve McIntyre’s ClimateAudit spends a lot of time analyzing the mysterious adjustments.

  45. All this GISS baseline and temperature data adjustment discussion just further reinforces my belief we need an Index of Global Temperatures (my proposed Index of Leading Climatic Indicators will have to wait).
    Lump the HadCRUT, MSU UAH, RSS, and crappy GISS data together, average them out, and voila! – The Monthly Index of Global Temperatures (MIGT). For March the MIGT would simply be (0.079 + 0.094 + 0.430 + 0.670)/4 = 0.318.
    Of course the GISS data is corrupt, but we don’t live in a perfect world, now do we?
    Do this for the last 30 years, and you’ll get a MIGT curve that is far more moderate than the GISS curve – I’m sure. Feed that to FOX, Glenn Beck, JunkScience, Heartland etc. etc, and we’ll be on our way.
    Maybe this already exists?
    Dear Anthony;
    It kinda annoys me that you actually devoted your time to reply with 20+ lines to this TCO repeat offender. With repeat offenders, just delete their post as soon as they step out of bounds. Just delete! Do not reply! Do not waste your time!
    We want you to devote your time taking care of the real business here.
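The MIGT arithmetic from comment 45, as a one-liner check. One caveat, which is the thread's own point: a raw average of the four published anomalies mixes different base periods, so the index inherits a baseline mishmash unless the series are rebased first:

```python
# The cited March 2008 anomalies, averaged as the comment proposes.
march = {"RSS": 0.079, "UAH": 0.094, "HadCRUT": 0.430, "GISS": 0.670}
migt = sum(march.values()) / len(march)   # 0.318 rounded, as quoted
```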

  46. Can anyone tell me exactly what the difference in baseline amongst the datasets amounts to (e.g. by comparing the mean of one in the base period of the other)? I’d like to provide an example graph which has them offset to the same base.
    In the meantime, I personally find the easiest way to understand it is by looking at the derivative (deltas):
    http://www.woodfortrees.org/plot/hadcrut3vgl/last:12/derivative/plot/uah/last:12/derivative/plot/rss/last:12/derivative/plot/gistemp/last:12/derivative
    (adjust time period to taste)
    Another interesting thing to look at is the March figures for the last 15 years:
    http://www.woodfortrees.org/plot/hadcrut3vgl/last:180/every:12/plot/uah/last:180/every:12/plot/rss/last:180/every:12/plot/gistemp/last:180/every:12
    (actually, this gives you the last 15 years of whatever sample is the latest published – so if you check this again in late June, you’ll get June – but beware early in the month when not all the datasets have published yet.)

  47. Oops, I just fell into my own trap… The HADCRUT3 new sample hasn’t made it to the CRU data source yet, so right now the latter graph is misleading – it is showing February values for hadcrut3vgl. I guess this should fix itself when CRU update their data on Monday.

  48. From a layman (me):
    Why all the fuss over monthly anomalies when the calendar month is a man-made system? Wouldn’t it be better to use lunar months or some other naturally occurring, consistent cycle? February had 29 days and March 31. Could that affect the results if the last 2 days of March were much warmer? As far as “averages” go, is there really an absolute “average” or “mean” temperature each month? I love the example about averages: a person has one foot in a bucket of water at 30 degrees and the other in a bucket of water at 100 degrees; on average the person should be quite comfortable. As for using different baselines to bolster a particular opinion: liars figure and figures lie. Plus, as stated in other posts, why is a +/- error range not included in each chart? It seems to me that it would make more sense to chart temperature ranges rather than specific temperature numbers as a baseline, e.g.: the normal temperature range for April in west central Florida within 2 miles of the coast is 58-94 degrees F (example only; I do not know the real numbers). If future charting shows a move, higher or lower, of that range over time, only then can we conclude that there is warming or cooling.
    This may be very simple but, as Anthony states, it is how the data is presented to the general public that sways opinion. Unfortunately for most of us, politicians will act on what they perceive to be best for themselves in view of that public opinion without regard to real science.

  49. On the differing base periods for the anomalies, why are they not all using 1971-2000? Isn’t that the way “climatological normals” are normally computed, using the most recent three complete decades? If I ask the NWS what the normal CDDs or HDDs are, they’ll come back with the 1971-2000 average. Why are temp anomalies reported differently?
    Basil
    REPLY: My point exactly. This is the sliding window I refer to.

  50. JM
    let me see if I can explain the differences between HadCRU and GISS.
    Long-term differences:
    1. HadCRU uses a different base period. This just moves the whole line up and
    down in Y, so it’s easy to adjust them to be on the same base period.
    2. GISS estimates the polar regions using stations up to 1200 km apart, and it fills
    in some missing data using a GCM.
    3. Other differences could be the use of different stations, the use of
    different adjustment methods, and a difference in algorithms for infilling
    missing data.
    So, on a long-term basis you will see that GISS reports temps that are a little higher than HadCRU. The point is that the difference is fairly consistent over time.
    This is important because in the end we want to look at the TREND in the data,
    so as long as the offset is constant the trends will be consistent. Now, in the short term, the last few years, the GISS trend has been a bit higher than the HadCRU trend. Lucia covers this, so I recommend her work.
    With regard to this last month: GISS reported .67 and HadCRU reported .43.
    On average I would have expected HadCRU to report .58C; or conversely, if HadCRU reports .43, then one could expect GISS to report .52C. So the difference between GISS and HadCRU was a bit on the high side from what one normally sees, but it’s not, strictly speaking, an outlier. It’s just one of those noise things.
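    mosher’s back-of-envelope above amounts to applying a constant offset; a quick Python sketch (the ~0.09 °C GISS-minus-HadCRU offset is the figure his numbers imply, not an official value):

    ```python
    # mosher's rule of thumb: GISS typically reads about 0.09 degC above
    # HadCRU. The offset is implied by his numbers, not an official figure.
    OFFSET = 0.09
    giss_mar, hadcrut_mar = 0.67, 0.43
    expected_hadcrut = round(giss_mar - OFFSET, 2)  # 0.58
    expected_giss = round(hadcrut_mar + OFFSET, 2)  # 0.52
    # March 2008 HadCRU came in at 0.43, cooler than the 0.58 the offset predicts.
    ```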

  51. re Zeke 4/12/08 15:58
    Your chart reads “R^2 .003” What am I missing?
    re error bars (Bob B and Lucia)
    What are the error bars for each series?

  52. http://data.giss.nasa.gov/gistemp/maps/
    is a neat resource.
    I notice that Hansen may have overdone some “corrections”, or AGW is truly worse than I ever imagined.
    Try Land:Giss, Ocean:none, Mean period:Annual (Jan-Dec) and
    Time interval: 1951 to 1980 compared to the same interval baseline. 🙂
    Smoothing radius: 250 km
    5700+ degrees seems a bit steep, even for Hansen.

  53. vauss asks:
    “Yet Alaska has a surface area roughly 20 percent of the continental US. So do the 50 data points in Alaska get a weighted average to compensate for the lack of data points? ”
    The stations are weighted by geographic area. This means a single station in Alaska will have a weighting equal to 500 stations in the southern US. This gets even worse at the poles, where the sparse stations are used to “estimate” the temps over the ice packs.

  54. Anthony,
    we seem to agree that the fact that GISS uses a different reference is a problem of convenience for the end user. I thought that your first reply suggested a physical non-sense. I definitely share your frustration as a data user about different normalization, formatting, etc … However, I think it is more important to ensure that the reference makes sense physically.
    I haven’t really given it deep mathematical thought, but the GISS data uses for its reference a period where the decadal trend seems to be relatively flat. Therefore the average would more or less cancel the short-term variability without including much of the long-term trend. The other reference seems to include a bit of a trend, from the 80’s. I would tend to think that the first approach to picking a reference makes more sense (as a first guess only 😉 ). However, it appears clear that over the long run, all products indicate the same warming trend, so they are all physically consistent.
    I think the GISS choice to use an older base period is not realistic. We have 4 global metrics. The one that consistently reads differently from the rest uses a base period that is, by general climatic standards, outdated.

    I am not sure what you mean by ‘realistic’ here. As long as the reference period is consistent with the time scale of the signal you are studying, it should not be outdated. What could be outdated is the data you are studying, whatever reference you use for them: e.g. sticking with temperatures in the 90’s and ignoring those of the past years. The fact that you normalize these data with respect to the 40’s, 50’s or 80’s does not make your analysis outdated; it’s just a reference (that needs to be explained, though). What would be questionable (not even outdated) would be to pick a reference 1,000,000 years ago, to show that the trend is down .. (yes I’ve read stuff like that 😉 )

    For example, let’s say I published a work and used a baseline for 1930-1950, but made claims regarding the present. It would probably be criticized for that given that there is more up to date climate data.

    If you would publish the latest data, but used a baseline for 1930-1950, and made claims regarding the present, I don’t think you would be criticized: you would have used the more up to date data! You would just have to specify that the ‘increase’ or anomaly you are reporting is with respect to your reference period, whatever it is (and possibly explain why this period is of interest and/or representative). A simple straightforward example would be to reference the data with respect to the end of the 19th century in order to emphasize the increase during the 20th century, ‘after’ the industrial revolution, etc … Doing that while stopping your time series to 1950, 1960, 1970 … would be outdated. Doing that with data up to 2008 would not!

    Many climate references that are published by NOAA use a sliding window that gets updated as time and data go on. I suggest that it is time for GISS to use a more current baseline.

    I am not sure what NOAA data you are referring to, but I think a sliding window would create even more confusion than presently exists! And don’t forget that decadal trends for GW are averages over a long-term period, a few decades. So a sliding window could make sense, but it would have to slide slowly, like every century… Computing the increase since the last decade or two (a fast sliding window) would not make much sense, at least not as a baseline (it might be used to address yearly/decadal variability). You would see mostly short-term variability and miss the big picture of the long-term trend.

    I think that GISS should use a more recent baseline. Ideally, since these 4 metrics are being compared regularly, it would seem prudent to have some sort of common presentation method for the data.

    I do agree it would make the life of the end user easier, but it wouldn’t change the physical meaning of the data, or the conclusions, as long as they are properly analyzed.
    REPLY: Yes you definitely missed the point. Looks like I’m going to have to do a post on it. It’s about presentation.

  55. It’s all about the presentation, GISS gets used more than the others, and it’s baseline choice presents the data with a greater positive offset than UAH, RSS, and HadCRUT.
    As long as you are consistent in your use of data, there should not be any issue (as far as the baseline is concerned; I am not addressing other issues like sampling etc …)
    If you use GISS all the time, you will see the same increase since any reference time as with another dataset. Same increase since the 1900’s, same increase since the 1980’s, etc. What will be different is one particular value: +0.2 in GISS will roughly mean +0.2 degC with respect to the 60’s/70’s. A value of +0.1 (guesstimate!) in Hadley’s data would mean +0.1 degC since the 70’s/80’s. But computing the increase from a given reference, both datasets would give similar results. You just can’t use one set for 2006 and then the other for 2007 and derive a variation from their difference … at least not directly. It would be like using Fahrenheit and then Celsius assuming they meant the same. They measure the same thing, they should provide the same results when expressed in the same ‘base’, but they can’t be used directly together.
    So as long as a reporter uses GISS all the time, he’s fine. He will have slightly larger anomalies than if he would use another dataset because he reports the increase since a decade earlier than Hadley, that’ s all.
    REPLY: You still missed the point.

  56. Mike asked the following:
    “I wonder what the error bars are on these various mean temperatures. Are the month to month changes consistent with the uncertainty in the measurements? I know there are systematic differences between the surface and aerial measurements. But are the sampling uncertaintities each month compatible with the inconsistencies?”
    I looked at data provided by NASA/GISS for a Class 5 station, of which there are many. Data from a station classified with a CRN rating of “5”, such as “Wickenburg (33.98, -112.73)”, has at least an error or uncertainty of 5 degrees, based on the definition of its CRN rating. This is a large uncertainty. If we look at the minimum and maximum temperature readings from this station over a 100-year period, we see that the maximum temperature is 20.56 degrees C and the minimum is 16.87 degrees C. Since this difference is less than the 5 degree C station-siting error or uncertainty, I can reasonably draw a temperature trend graph for this 100-year period which is a straight line, showing no change in temperature over the period. Thus the data from this station is, in my opinion, useless, as may be data from many other stations, no matter how much massaging of the data is performed. If we use it, the old adage applies: “Garbage in, garbage out.”
    REPLY: This is a different Anthony, not the forum operator

  57. Mosher: sounds like a worthwhile exercise. I’m personally quite interested in the cause of the wide divergence in satellite and land based measurements in 1998. I’m busy working on an article on the solar cycle-climate link at the moment, but I might take another look at satellite temps in the future. The only real point of my earlier post was to point out that we should avoid allegations of warming bias without an analysis of long term trend.
    JM (and JP): I think we are mixing up different types of adjustments. The temperature records are reported in terms of anomalies, which are, roughly speaking, departures from an arbitrarily selected base period that all other points are compared against. What the series is interested in is the trend in temperature, rather than the absolute value. Thus any adjustments that do not affect the position of data points vis-à-vis each other will have no impact on the trend. So converting a series with a 1951-1980 baseline to a 1979-1998 baseline won’t do anything to the trend in temperatures; it remains around 0.16 degrees per decade under either baseline.
    The adjustments done by Hansen are more concerned with correcting for biases in individual stations, barring a few larger corrections like the so-called Y2K bug in the U.S. data. I have not looked into the subject of station adjustments in GISS, so I’ll withhold any judgment for the time being until I’m better informed.
    Oddly enough, the main reason why GISS is showing record temps in recent years while other series are not is not a slight recent warm trend in the GISS series relative to the other temperature series. Rather, it’s primarily caused by the much larger cool trend in GISS in the late 1990s. Once the datasets are put on the same baseline, it’s clear that GISS has yet to reach the anomaly that RSS and UAH showed for 1998 (e.g. the GISS 2005 anomaly is lower than the RSS/UAH 1998 anomaly). The average residual between GISS and the other three datasets for the period from 1998 to 2002 is -0.03 (that is to say that, on average, GISS was 0.03 degrees cooler than the others). For the period from 2003 to 2007, GISS was only 0.023 degrees warmer than the other series on average.
    Anthony: I’m still skeptical of the importance of baseline choice in the visual interpretation of the graph. If we are looking at the trend in temperatures, why does it matter where we put the zero? It seems to me akin to arguing that the commonly used graphs underemphasize warming to the general North American public because they use Celsius rather than Fahrenheit, since the units are larger in Fahrenheit. For that matter, using actual temperatures rather than anomalies would put the zero far lower than GISS (which has the lowest zero of all series, given that it uses the baseline furthest back). Would you argue that using absolute temperature rather than anomalies would exaggerate warming?
    As far as using a common baseline for everything, while I agree in principle that it would be useful for us amateur climatologists, it’s not the most pressing issue out there. Also, there is a good justification for using a 30-year baseline to smooth out noise, something you would be unable to do for the satellite series since they have not been around quite long enough (though give them another year or two…).
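    Zeke’s point that re-baselining cannot change the trend is easy to verify numerically; a throwaway Python sketch with made-up anomalies:

    ```python
    # Shifting an anomaly series by a constant baseline offset leaves its
    # least-squares trend unchanged. The numbers below are made up.
    def slope(ys):
        """Ordinary least-squares slope of ys against x = 0..n-1."""
        n = len(ys)
        mx = (n - 1) / 2            # mean of 0..n-1
        my = sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
        den = sum((x - mx) ** 2 for x in range(n))
        return num / den

    series = [0.10, 0.15, 0.12, 0.20, 0.18, 0.25]   # anomalies vs. base A
    shifted = [y - 0.12 for y in series]            # same data vs. base B
    assert abs(slope(series) - slope(shifted)) < 1e-12
    ```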

  58. UAH showed a record difference between Northern hemispheric and Southern hemispheric trends for this March. Do we know the difference between the temperature change in the N and S hemispheres according to HadCRUT?

  59. Re: Robert Burns
    The r^2 = 0.003 in the anomalies graph (http://i81.photobucket.com/albums/j237/hausfath/GISSResiduals.jpg) means that a simple linear regression of the GISS residuals relative to the mean of the other temperature series shows a very, very slight positive trend over the past 30 years, but that the linear model only explains a tiny amount of the variation in the residuals. So it’s pretty much meaningless, other than to show that there is no significant positive bias over the last 30 years.
    Pierre: Here is a monthly comparison of each separate temperature record to your “MIGT” (e.g. the monthly average of all four). It shows interesting points of difference between the records: http://i81.photobucket.com/albums/j237/hausfath/Variations.jpg
    Basil: Unfortunately, 1971-2000 won’t work as a common baseline, since the satellite records don’t start till 1979.

  60. I was just wondering which other types of statistics that our government should be “adjusting”?
    Anyone have any ideas?

    Bob B, re lucia’s statement that “The true weather variability in month to month measurements appears larger than the measurement uncertainty”:
    I see no reason why weather in the aggregate across the whole planet and over a month should be variable or noisy to the degree claimed. I.e. I agree that almost all of the variability is in the measurement.
    Lucia may have a statistical basis for that statement and I’ll ask at her (BTW excellent) blog.

  62. What a difference a baseline does make.
    GISS(1951-1980)
    Jan 08 .12
    Feb 08 .26
    Mar 08 .67
    GISS(1961-1990 HadCRUT baseline)
    Jan 08 .04
    Feb 08 .18
    Mar 08 .55
    GISS(1979-1990 RSS and UAH baseline)
    Jan 08 -.12
    Feb 08 .05
    Mar 08 .41
    GISS(1971-2000)
    Jan 08 -.08
    Feb 08 .04
    Mar 08 .44
    I think GISS and HadCRUT should be using the 1971-2000 baseline. I think the NWS uses it for computing the averages that you see in your local weather, and they regularly shift this baseline every 10 yrs or so. The next baseline will be 1981-2010, and at that point RSS and UAH could use the same 30 yrs.
    The zero anomaly line would change, but the trends should be the same. And every metric could report using the same baseline.
    REPLY: Thanks Brian, I had planned to work this up this evening, and do a post on it, but you beat me to it. This is the issue, the presentation of the data to the public changes depending on the baseline used.
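    Brian’s table can be reproduced mechanically: to move an anomaly series to a new base period, subtract, for each calendar month, that month’s mean over the new base period (expressed in the old baseline’s units). A Python sketch, using the per-month offsets implied by his numbers (illustrative only, not GISS’s published climatology):

    ```python
    # Re-express anomalies on a new base period by subtracting, per
    # calendar month, the new-base-period mean (in old-base units).
    def rebaseline(anoms, new_base_means):
        """anoms: {(year, month): anomaly vs. old base};
        new_base_means: {month: mean of that month over the new base
        period, still in old-base units}."""
        return {(y, m): round(a - new_base_means[m], 2)
                for (y, m), a in anoms.items()}

    # Per-month offsets implied by Brian's table
    # (old base 1951-1980 -> new base 1961-1990); illustrative values.
    giss_2008 = {(2008, 1): 0.12, (2008, 2): 0.26, (2008, 3): 0.67}
    offsets = {1: 0.08, 2: 0.08, 3: 0.12}
    print(rebaseline(giss_2008, offsets))  # anomalies 0.04, 0.18, 0.55
    ```

    Note that the offsets differ by calendar month, which is why the Jan/Feb/Mar values in the table don’t all shift by the same amount.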

  63. Ya Anthony, this whole anomaly dance needs an everyman explanation.
    More than once I’ve stumbled over it myself. Willis did once, I believe.
    I prefer a chart of absolute temps. I can do my own anomaly dance thank you.
    But if somebody publishes a chart of anomaly they need to say:
    Anomaly from a Base period xxxx-yyyy.
    The average of the absolute temp during that period is zz.zC
    That way you can always turn anomalies back into temps.
    GISS has started to do this.

  64. Exaggerate a decrease? How?
    If anything the siting problems tend “to warm” the decreases, thus giving us higher average temps. Maybe I don’t follow you here.

    To answer this and cohenite’s question:
    It’s an overall warming effect, in toto. But, unlike waste heat (AC or engine exhaust, etc.), which creates a simple “one-time” offset, a heat sink (a driveway, parking lot, building, or whatever) operates a little differently, because it affects the RATE of temperature change.
    To be more specific, waste heat simply bumps up the temps when it occurs. End of story.
    When the heat sink appears, on the other hand, a whole dynamic comes into play. A heat sink (unlike waste heat) comes into play especially at T-Max and T-Min. At T-Max, it pumps up the temps. And at T-Min, it is releasing its joules and kicking things way up. LaDochy et al. (Dec. 2007) point out that T-Max is hit hard and T-Min is affected much worse.
    A.) There is an initial offset (a warm bias). But, unlike waste heat, the story does not end there.
    B.) When a temperature increase occurs, the effects are continually exaggerated by a certain percentage. The more the heat increase, the greater the exaggeration.
    C.) When the temperature drops, the effect in step B. “undoes” itself.
    So what you get is an initial warming offset followed by an exaggeration of any temperature increase. But there is also an exaggeration of any temp decrease, “drawing joules from the bank” of the initial warming offset.
    Bottom line: an initial warming bias followed by an exaggeration of the trend in either direction.
    I hope that makes it clearer. (If not, I’ll give it another shot.)
    What happened in the post-1980 period was that during the MMTS switchover, a huge number of previously better-sited stations wound up right next to buildings and concrete. Your most common “CRN4” type violation.
    And it’s gotta add up. I am not going to be convinced that site violations are not affecting the overall delta-T since 1980, nossir!
    And Joe D’Aleo’s PDO/AMO correlations will fit better if that is taken into consideration. (He shall have his 3 exclamation points; I am convinced he has earned each and every one. There may well be room for a 4th, if I am not mistaken.)
    This needs to add up. We need to arrive at a consistent bottom line.

  65. Just looking at the graphs and I noticed what seems to be a pattern. Anybody else out there ever notice the same, or perhaps could evaluate it from a statistical point of view.
    In looking at the GISS monthly global temps,
    http://data.giss.nasa.gov/gistemp/graphs/Fig.C.lrg.gif
    in comparing that to Hadcrut
    http://wattsupwiththat.files.wordpress.com/2008/04/hadcrut_mar08.png
    and RSS
    http://wattsupwiththat.files.wordpress.com/2008/04/rss_msu_mar2008_large.png
    and UAH
    http://wattsupwiththat.files.wordpress.com/2008/04/uah_march_081.png
    One thing I notice is that the majority of the extreme highs and lows, particularly on the GISS and, to a lesser extent, the HadCRUT, seem to center right around the solstices. And the GISS graphs seem to have more average temps around the equinoxes.
    The RSS and UAH, while they have ups and downs, don’t seem to have the peaks and valleys so centered around the solstices.
    Anybody else notice that, or could there be any theories as to why? At the outset, I would think it may be because the GISS relies so heavily on polar temps, where the RSS and UAH don’t. And could that be a contributing factor as to why the GISS had a bigger drop in Jan, and is starting to rise now?
    (Or maybe just somebody is cooking the books)
    REPLY: See all 4 compared here:
    http://klimakatastrophe.wordpress.com/2008/04/13/das-met-office-hat-seine-hardcrut3-datenbasis-der-giss-nasa-datenbasis-angeglichen/

  66. When comparing those graphs, the seasonal differences aren’t what catch my eye. It’s the inter-year variability that I find disturbing. Why is there such a pronounced convergence of data in 1998? All the numbers appear to agree within about .08 degrees for that year. But for the rest of the graph, there’s a fairly consistent .25-.3 degree variance. (except for the Hadley numbers looking odd in 2000)
    No change in baseline can possibly explain the VARIANCE changing by 300%.

Comments are closed.