Contiguous U.S. GISTEMP Linear Trends: Before and After
Guest post by Bob Tisdale
Many of us have seen gif animations and blink comparators of the older version of Contiguous U.S. GISTEMP data versus the newer version, and here’s yet another one. The presentation is clearer than most.
http://i44.tinypic.com/29dwsj7.gif
It is based on the John Daly archived data:
http://www.john-daly.com/usatemps.006
and the current Contiguous U.S. surface temperature anomaly data from GISS:
http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt
In their presentations, most people have been concerned with which decade had the highest U.S. surface temperature anomaly: the 1940s or the 1990s. But I couldn’t recall ever having seen a trend comparison, so I snipped off the last 9 years from the current data and let Excel plot the trends:
http://i44.tinypic.com/295sp37.gif
Before the post-1999 GISS adjustments to the Contiguous U.S. GISTEMP data, the linear trend for the period of 1880 to 1999 was 0.035 deg C/decade. After the adjustments, the linear trend rose to 0.044 deg C/decade.
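For readers who would rather script the comparison than use Excel, here is a minimal sketch of the same calculation in Python. The two-column input files are my own assumption; both the Daly-archived data and the current Fig.D.txt need their headers and missing-value flags stripped by hand first.

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """Ordinary least-squares slope of anomaly vs. year, in deg C per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * 10.0

# Hypothetical two-column files (year, anomaly in deg C), prepared by hand from
# the Daly-archived data and the current GISS Fig.D.txt.
old = np.loadtxt("usa_gistemp_archived.txt")
new = np.loadtxt("usa_gistemp_current.txt")

# Restrict both to the common 1880-1999 window so the comparison is like for like.
old = old[(old[:, 0] >= 1880) & (old[:, 0] <= 1999)]
new = new[(new[:, 0] >= 1880) & (new[:, 0] <= 1999)]

print("archived trend: %.3f deg C/decade" % trend_per_decade(old[:, 0], old[:, 1]))
print("current trend:  %.3f deg C/decade" % trend_per_decade(new[:, 0], new[:, 1]))
```

Run over the common 1880–1999 window, the two slopes should come out close to the 0.035 and 0.044 deg C/decade figures quoted above.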
Thanks to Anthony Watts, who provided the link to the older GISTEMP data archived at John Daly’s website in his post here:
NOTE: Bob, the credit really should go to Michael Hammer, who wrote that post, but I’m happy to have a role as facilitator. – Anthony
Interesting post. I made a graph showing the deviation between the 2009 data and the 2000 data: http://3.bp.blogspot.com/_3vq8ECeb6L8/Sknkz72wIrI/AAAAAAAABqw/j6D_StPA8DA/s1600-h/giss+2009+minus+2000.JPG
In 1968, the deviation is zero; in 1999, the deviation is +0.23 degrees Celsius.
And in case anyone’s interested – link to the blogpost with the graph, translated by Google Translate (pretty rough translation, naturally, but there’s not much to read anyway): http://translate.google.com/translate?js=n&prev=_t&hl=sv&ie=UTF-8&u=http%3A%2F%2Fnontelly.blogspot.com%2F2009%2F06%2Favvikelse-mellan-giss-2009-siffror.html&sl=sv&tl=en&history_state0=
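If anyone wants to reproduce that deviation series themselves, here is a minimal sketch; the filenames and the two-column year/anomaly layout are my own assumptions about how the archived and current data would be prepared.

```python
import numpy as np

# Hypothetical two-column files (year, anomaly in deg C), one per dataset version.
v2000 = np.loadtxt("gistemp_usa_2000_version.txt")
v2009 = np.loadtxt("gistemp_usa_2009_version.txt")

old_by_year = {int(y): a for y, a in v2000}
new_by_year = {int(y): a for y, a in v2009}

# Deviation = newer version minus older version, year by year.
for year in sorted(old_by_year.keys() & new_by_year.keys()):
    print("%d  %+.2f" % (year, new_by_year[year] - old_by_year[year]))
```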
“Philip_B (01:02:27) :
That’s 2 billion terajoules. Hiroshima bomb was 63 TJ. The amount of energy that was necessary to melt all this ice was thus more or less the equivalent of 31 million Hiroshima bombs.
Of course, the Earth receives 3,850,000 exajoules of energy from the sun each year. So 2 billion terajoules is 15 seconds of sunshine. Spread over half a century to melt your glaciers.
And of course, a tiny fraction of the known and undisputed variation in solar output. Proving that glacier melt is insignificant on the scale of the Earth’s climate.
Flanagan, I’m sure those numbers impress your friends, but they are unlikely to impress the rather more numerate crowd here. ;-)”
Nice one!
paralegalnm: “Mark Twain’s maxim of three lies, “lies, damn lies, and statistics” seems to apply here.”
Not wrong per se; however, the line is usually attributed to British Prime Minister Benjamin Disraeli and was later popularized in America by the author Mark Twain.
bluegrue (02:03:41) :
Here is what you wrote previously, bluegrue:
“Most of the change should be due to the differences in calculation between 1999 and 2001, as described in these two publications:
GISS analysis of surface temperature change, Hansen et al. 1999, J. Geophys. Res., 104, 30997-31022
A closer look at United States and global surface temperature change, Hansen et al. 2001, J. Geophys. Res., 106, 23947-23963
The former describes the status quo of 1999 and the changes with regard to Hansen and Lebedeff 1987, the latter documents the changes from 1999 up to 2001:
* includes adjustments developed from station meta data (Karl 1990)
* TOBS and station history adjustments
* improved urban adjustments using satellite data
Read the paper for details.”
—
As you correctly point out, the papers “describe” the numerous adjustments that are made to the data (not just TOBS), including station merging, filling in missing data, UHI adjustments, etc.
If you had indeed looked at the GISTEMP for more than a minute, you would know that all of these additional adjustments (over and above TOBS) are contained in GISTEMP. So, my challenge to you was to go through the code and show us how these algorithms are implemented, and verify for us that they indeed are doing what is reported in the papers you cite. I can understand if you are frustrated…GISTEMP is not good code, and was released under duress by Hansen after the infamous Y2K debacle. You will also note that GISS is so proud of this code that it is listed under their Software link….NOT.
“What was this act of misdirection trying to achieve? Set me off on a wild goose chase?”
There is no misdirection here. GISTEMP is used to produce the GISS surface temperature histories — what better way to examine the adjustments made to the raw surface temperature data than to look at the code? GISTEMP should contain all of the algorithms described in the papers you cite, right?
“Has it even occurred to you, that there is no simple equation to cover the effects?”
The equations don’t have to be simple… they can be complex if you prefer. Perhaps even differential equations. My point is that here we have a code, GISTEMP, which implements an algorithm with numerous mathematical relations (e.g. spatial blending and averaging relations). Yet nowhere in the references cited do I see these relations spelled out as they are implemented in the software. The same goes for Model E, their equally bad AOGCM. GISS really doesn’t believe in proper documentation, you see. Just write some quick research code, publish your results, and move on. This would not be tolerated in most serious research institutions.
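To make concrete the kind of spatial relation I mean, here is a rough sketch of the 1200 km distance weighting described in Hansen and Lebedeff (1987), where a station’s weight falls linearly to zero at 1200 km from a grid point. This is my own illustrative reading of the paper, not code taken from GISTEMP, and the station values below are made up.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points on a spherical Earth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    c = (math.sin(p1) * math.sin(p2) +
         math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return 6371.0 * math.acos(max(-1.0, min(1.0, c)))

def gridpoint_anomaly(grid_lat, grid_lon, stations, radius_km=1200.0):
    """Weighted mean of station anomalies around a grid point.

    Each station's weight falls linearly from 1 at the grid point to 0 at
    radius_km, per my reading of Hansen and Lebedeff (1987). `stations` is a
    list of (lat, lon, anomaly) tuples; returns None if none are in range.
    """
    num = den = 0.0
    for lat, lon, anom in stations:
        d = great_circle_km(grid_lat, grid_lon, lat, lon)
        if d < radius_km:
            w = 1.0 - d / radius_km
            num += w * anom
            den += w
    return num / den if den > 0.0 else None

# Three made-up stations around a grid point at 40N, 95W.
stations = [(39.0, -94.5, 0.6), (41.2, -96.0, 0.4), (44.0, -100.0, 0.9)]
print(gridpoint_anomaly(40.0, -95.0, stations))
```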
So, why does all this matter? We are being asked here in the US to believe that we must engage in a destructive cap and trade policy, largely on the basis of temperature histories generated by codes like GISTEMP. It is therefore important to audit these codes and know what they are doing so we can be confident of the results. So far, I am not very confident…
Finally, I must give GISS at least some credit for actually making their code available to the public. Try getting the corresponding codes from NOAA or the Hadley Center…
@bluegrue
You may be surprised, but many of us do not have time to read through the 71 pages that make up the two documents to see if they contain what you say they contain.
You are right about needing to read things for myself, but if you expect people to pay attention to you, you need to be more specific.
I do not have unlimited time, but I am willing to read documents linked by others. I did a search of those documents using a different set of keywords and didn’t find the relevant sections. You have a point in calling me out, but you also need to realize that if you really expect someone to follow up on what you post, you should point out the relevant sections when you link to a document.
I have briefly read the explanations of the adjustments and I am still not satisfied, but no, GISS did not adjust the data without posting some explanation, so I stand corrected on that point.
Thank you
Frank K. (07:10:53) :
As you can see from the NCDC’s information on USHCN adjustments, the big players in the difference between the 1999 and 2001 data – the difference this very post is about – are TOBS and SHAP. Neither of these is part of the GISTEMP code; GISTEMP uses input data that already incorporates TOBS and SHAP.
See above: GISTEMP does not handle raw data.
Have you even looked at the Karl et al. paper I linked to in my post? There are models that you simply cannot wrap up in a closed formula, be it a differential equation or whatever.
I am not frustrated at the code, I am frustrated at you. You ignore the big corrections, the ones I specifically pointed out in my post, which are part of USHCN, and tell me to find them in GISTEMP. Then you tell me to go over the source code and verify to you that the code does what it promises to do (data in-filling, UHI correction, …). A few problems with this:
a) I trust Hansen et al to faithfully implement the described algorithms, unlike you
b) Verification takes a lot of time, and I consider it a waste of my free time because of a). Keep in mind that HadCRUT, GISTEMP and the satellite data agree reasonably well, given that they cover different data (HadCRUT does not interpolate polar data, and the satellite record is not surface data)
c) Assume I take the time and do verify the code and write up what happens. I am an anonymous voice on the internet. Even if I were to disclose my real life name, why should you trust my analysis more than Hansen’s? You would not, and again my time would be wasted. So why don’t you do it yourself?
I half expect that some commenter down the line will cherry-pick a) only and ignore b) and c) to show how deluded I am.
If you choose to ignore our understanding of radiative and convective transfer, absorption spectra of CO2 and H2O, studies of climate sensitivity from paleo records, observed northward movement of hardiness zones, …., if you choose to forget everything else but the temperature record, then yes, it’s only the temperature record.
@John Galt (07:35:36):
I’ll try and keep it in mind in the future.
11 pages, if you just take the text of the 2001 paper, which describes the changes. 😉 I did not search for keywords, but looked for the section headings. Yet I agree, you only have so much time.
@bluegrue (08:14:45):
I design and develop software for a living and teach college computer programming courses. I have looked at the source code for GISTEMP and it is a horrible, horrible mess.
You can never trust that a programmer correctly expressed an algorithm in code. The code must be tested and verified.
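To illustrate the kind of check I mean, here is a toy example; the anomaly routine and the test are hypothetical and have nothing to do with GISTEMP’s actual internals.

```python
def annual_anomalies(years, temps, base_start=1951, base_end=1980):
    """Turn absolute annual temperatures into anomalies relative to the
    mean over the base period [base_start, base_end]."""
    base = [t for y, t in zip(years, temps) if base_start <= y <= base_end]
    baseline = sum(base) / len(base)
    return [t - baseline for t in temps]

def test_base_period_mean_is_zero():
    # A simple warming ramp: anomalies must average to ~0 over the base period.
    years = list(range(1941, 1991))
    temps = [10.0 + 0.02 * (y - 1941) for y in years]
    anoms = annual_anomalies(years, temps)
    base_anoms = [a for y, a in zip(years, anoms) if 1951 <= y <= 1980]
    assert abs(sum(base_anoms) / len(base_anoms)) < 1e-9

test_base_period_mean_is_zero()
print("ok")
```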
I also doubt Dr. Hansen personally writes much of the code himself, but that is possible. (I think Hansen is probably too busy doing other, higher-level functions than to write or review code, but I could be wrong. Do you have knowledge otherwise? It’s rare for some one in an executive position to be involved at that level, but it is possible.)
If a student handed in a project coded like GISTEMP, he/she would not get a good grade. The code is sloppy, poorly organized and poorly documented. None of the standard programming practices are followed. Nor would any of my clients find code like that acceptable. Why should GISS accept a lower standard?
bluegrue (08:14:45) :
“I am not frustrated at the code, I am frustrated at you. You ignore the big corrections, the ones I specifically pointed out in my post, which are part of USHCN, and tell me to find them in GISTEMP. Then you tell me to go over the source code and verify to you that the code does what it promises to do (data in-filling, UHI correction, …). ”
Bluegrue – I’m not ignoring anything – you are in fact ignoring your own post!
You cited several references which form the basis of GISTEMP and I merely suggested you check out the code. You then got frustrated with me – I think your frustration is misguided.
“a) I trust Hansen et al to faithfully implement the described algorithms, unlike you”
I’ve looked at the code. Have you??
“b) Verification takes a lot of time, and I consider it a waste of my free time…”
Don’t worry – you have plenty of company at GISS, who consider basic validation, verification, and testing to be a waste of their time too…
“If you choose to ignore our understanding of radiative and convective transfer, absorption spectra of CO2 and H2O, studies of climate sensitivity from paleo records, observed northward movement of hardiness zones, …., if you choose to forget everything else but the temperature record, then yes, it’s only the temperature record.”
Yes, the climate has warmed and cooled over thousands of years, regardless of our presence…[sigh]
My concern was TOBS and SHAP; you’ll find that info in do_comb_step0.sh plus the readmes on the USHCN ftp site. I have not looked into the filling in of data or the two-legged urban correction. I gave step 2 a cursory look and saw that there is not much to learn there about how the urban/rural distinction is deduced from satellite data: it is just data handling; the urban/rural decision was taken elsewhere. You claim you have looked into the code: have you found anything wrong (i.e. it is not doing what it is claimed to do), or are you just dissatisfied with the aesthetics?
If I were in a research position, I would do all of the above. However, in my spare time I have better things to do. You have also nicely truncated my point, leaving out the good agreement with independent mean temperature calculations. If GISTEMP were indeed the only record, it would deserve much more scrutiny. As it is not and the other records agree reasonably well, this ceases to be a pressing matter.
And these changes were due to changes in forcings. Now we are introducing an additional forcing with anthropogenic changes in GHGs. …. [sigh]
John Galt (09:06:06) :
You won’t hear a contradiction from me. The question here, however, is not whether it is well written or maintainable (all important stuff, don’t get me wrong), but whether it does what it is supposed to do.
I don’t have knowledge about that myself; I’d expect a mixture of him starting it off in the beginning and other group members and PhD students contributing.
You’ll find a lot of quick and dirty programming in research. I think, very few researchers have an education in programming practices. You pick up what you need during your studies and work from there. So you have an idea of what you want to do and write a Q&D program. You don’t ha
bluegrue,
After reading the recent posts by you, Frank K and John Galt, it is clear that you are attempting to defend the indefensible. The fact is that GISS refuses to provide clear explanations and uncorrupted data. What do you think they’re trying to hide?
@John Galt
Sorry, had to leave the PC and must have hit the submit button, that’s why the post broke off. Basically it comes down to programming not being part of the formal education of many scientists, you learn as you go when necessary.
@Smokey
TOBS and SHAP are not done by GISS, take it up with NCDC. For the other adjustments, the code is there, if quick and dirty. I don’t do conspiracy theories.
RE: Bluegrue: “If you choose to ignore our understanding of radiative and convective transfer, absorption spectra of CO2 and H2O, studies of climate sensitivity from paleo records, observed northward movement of hardiness zones, …., if you choose to forget everything else but the temperature record, then yes, it’s only the temperature record.”
Actually, I do not observe serious commentators on this blog to ignore the issues you reference. I consider myself very knowledgeable on these issues, especially when I talk with most AGW pessimists. However, I also know other items as well. For example, I do not ignore our lack of understanding on feedback mechanisms, and I do not ignore the fact that clouds will play a huge role in GMT trends. Also, I do not ignore that other factors can be behind observed phenomena. For example, the northward movement of plant species can be due to increased CO2 levels – plants do better with CO2-enriched air. And beyond that reality, why would we be surprised that plants move north as the world emerges from the Little Ice Age? Retreating glaciers reveal evidence that the local climate was more hospitable to flora in the past.
Related to the subject of northward movement, I have often inquired about trends in the length of the growing season. I fully expect a lengthening trend, but no AGW pessimist has been able to provide any study to show that. The data that I can collect from county extension offices surprisingly does not show any consistent trend. On our farm, we tried in the 1970s to plant corn that had the same maturity dates as in the 1950s. It was a disaster. We have not yet returned to that maturity-date-corn even after a couple of decades of the GMT warming trend. Especially this year, we are so thankful that we did not venture to plant corn that would use the 1950s type of growing season.
@An Inquirer (12:28:20):
The passage you quote was a reply specifically addressed to Frank K. (07:10:53) :
Frank K. may not have ignored the issues I mentioned, but he played them down way too much, IMHO, hence my reply that you quoted.
Take your pick of pretty obvious and neutral search terms on scholar.google.com; there are lots of hits. A few examples: Menzel 2000 finds that over the period 1951 to 1996 spring events in Europe advanced by 6.3 days on average and autumn events were delayed by 4.5 days on average. Robeson 2004 finds that “results suggest that the growing-season length in Illinois became roughly one week longer during the 20th century”; of course this is a rather small area and the temperature data used are noisy, so a study of the whole US would be better. Matsumoto 2003 finds that growing seasons in Japan today begin 4 days earlier and end 8 days later compared to 1950. Feel free to add other regions (like your own) as search terms and look further.
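If you want to check a local record yourself, the usual metric is the frost-free season. Here is a minimal sketch of how one might compute its length and trend from daily minimum temperatures; the input data structure is hypothetical and the 0 C frost threshold is just a convention.

```python
import numpy as np

def frost_free_length(tmin_daily):
    """Length in days of the frost-free season for one calendar year.

    `tmin_daily` is a 1-D array of daily minimum temperatures in deg C, one
    value per day. Frost is taken as tmin <= 0 C. Returns None if there is no
    spring frost or no autumn frost to bracket the season.
    """
    tmin_daily = np.asarray(tmin_daily, dtype=float)
    days = np.arange(tmin_daily.size)
    frosts = days[tmin_daily <= 0.0]
    mid = tmin_daily.size // 2
    spring = frosts[frosts < mid]
    autumn = frosts[frosts >= mid]
    if spring.size == 0 or autumn.size == 0:
        return None
    return int(autumn.min() - spring.max())

# Hypothetical usage: `tmin_by_year` maps year -> that year's daily Tmin array.
# lengths = {y: frost_free_length(t) for y, t in tmin_by_year.items()}
# years = sorted(y for y, L in lengths.items() if L is not None)
# slope, _ = np.polyfit(years, [lengths[y] for y in years], 1)  # days per year
```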
@moderator
I think my latest comment from a few minutes ago ended up in the spam bucket, maybe there were too many links. Could you please check and delete this request? TIA
@moderator
All is fine, you were faster.
Reply: A little patience next time please. We don’t need to be told about the spam filter. ~ charles the moderator
Eddie: You wrote: “That’s pretty tricky. You’re taking the “new data”, but still only plotting through the year 2000 to reach an adjusted figure. How about you plot the latest cooling trend, too?”
I lopped off the last nine years of the newer data so that the trend comparison was for the same period of time. Adding the nine years back in only increases the trend of the newer data to 0.053 deg C/decade, but then there’s no way to compare it realistically to the older dataset, so why do it?
Is there a “warmist” answer to these questions? If we could fix CO2 at its current level, would their models predict a trend rising or falling toward some value, or something else? Isn’t it obvious that their models should be able to make an accurate prediction up to the present from data ending in 1959, or any date you like?
bluegrue,
Thanx for your quick search showing that growing seasons have been gradually getting longer. But no need to get alarmed, that is entirely natural.
I suppose AGW followers are required to believe that a longer growing season is somehow a bad thing, just like they must believe that a [slight, fraction of a degree] warmer, more pleasant climate is something to be avoided, at a cost of $trillions. That is the corner they’ve painted themselves into.
But hold your horses, pardner. The fact is that the climate has been warming naturally, in fits and starts, since the LIA [and from the last great Ice Age before that]. The warming trend line over the past couple of hundred years hasn’t varied much at all — thus throwing a monkey wrench into the GCMs, which wrongly claim the rise is accelerating. Lately, in fact, the climate has been cooling. [But don’t worry, it will eventually return to its long term trend line; that’s the theory of natural climate change].
Prof. Akasofu had an article here not long ago. This is one of the charts from his paper: click
As you probably know, the climate peer-review process is badly broken. Papers with sometimes glaring errors are commonly waved through, as long as they promote the politically correct AGW point of view. Shenanigans by government and international agencies are routine. [Please don’t argue about this point, you will be crushed.]
True climate peer-review now takes place on the internet, primarily because the established journals have become too lackadaisical, and their internal politics have become more important than the science. Some recent examples:
Physicist Jan Hendrik Schön had numerous peer-reviewed papers published. Schön, a former Bell Labs scientist, had authored [or co-authored] one research paper every 8 days in one year alone. An amazing fifteen of Schön’s papers were [very uncritically] accepted for publication by both Nature and Science. Eventually — and no thanks to the peer-review referees or publications — Schön was outed as a monumental fraud who simply invented facts and experiments. When he was finally caught he resigned in disgrace — no thanks to the peer-review of Science and Nature.
Another fraud, geneticist Hwang Woo-suk, was also peer reviewed by the editors of the AAAS journal Science when he submitted a paper to their friendly peer-review team, claiming to have derived lines of stem cells from cloned human embryos. [Hwang also had 25 co-authors!] After uncritical review by its referees, Science waved Hwang’s paper through and published it. Later, other scientists noticed discrepancies, and eventually Hwang also ended up resigning after admitting to fraud.
And there are others. The point is this: neither Hwang nor Schön would have lasted a week on this site or others like it before their fraud was uncovered. Peer review here is much more rigorous than the laid-back mutual back-scratching of today’s climate peer-review process, which generally works like this: if you promote AGW, you’re waved through. But if you contradict AGW… just try to get published.
With a system like that in place, controlled by a small clique, which in turn makes large government grants possible for those who are fortunate enough to be published, is it any wonder that fraud is regularly uncovered at those once prestigious journals?
The climate peer-review system is even easier to game, because rather than using empirical measurements, those submitting climate papers rely mostly on computer models — and they jealously guard their [adjusted] data and methodologies, which are not archived so the public, whose taxes paid for the work, can verify the product. Steve McIntyre over at Climate Audit has done an excellent job of exposing their shenanigans. Just look up anything about Michael Mann there. That will get you started. And it will open your eyes.
Skeptics are always ready to debate AGW. But alarmists hide out from debate [and the couple of debates they have agreed to ended up in embarrassing public humiliation]. If someone refuses to debate what they say they believe, that should tell you all you need to know about their motivations: money and prestige, not honest science.
From Bluegrue:
“You’ll find a lot of quick and dirty programming in research. I think, very few researchers have an education in programming practices. You pick up what you need during your studies and work from there. So you have an idea of what you want to do and write a Q&D program.”
—
Hansen has been PAID by the government to do ONLY this since 1988:
You’re telling me that he can do no better than “quick and dirty” unintelligent, badly written, hacked, uncommented GISS code for twenty years, but that we MUST spend 1.3 trillion dollars immediately – with not even two days to debate the bill – based on that “quick and dirty” unaudited and undocumented code?
Imo, the adjustments (and the staunch support of them) come not from fraud, stupidity, groupthink, etc., but rather from hubris. I have lived in these academic circles. I would call it more an arrogant self-confidence in the belief that the corrections are valid.
Perhaps the adjustments are good for some sites, but I haven’t been convinced that blanket corrections are appropriate. I can’t believe (and even those doing the work don’t admit) that for example EVERY site that was moved from a city to an airport immediately began recording cooler temperatures (see http://cdiac.ornl.gov/epubs/ndp/ushcn/ndp019.html )
I won’t like an adjusted curve until the “correction” for EACH site is supported by some sort of data (quantitative info is best, but qualitative would at least provide some support).
“Correcting” each site in this manner would be a humongous undertaking. Until that is done (and after as well), I would suggest those in charge of the official trends provide at least two versions of their trends (raw and adjusted), and maybe even a third: evidence-based adjusted (where, of course, the evidence is laid out for everyone to see).
Yes.
“I can’t believe (and even those doing the work don’t admit) that for example EVERY site that was moved from a city to an airport immediately began recording cooler temperatures”
Maybe not. But even if one stipulates that they did, there is no question that from that point onward, airports (on average) have warmed much faster than “good” stations, in fact much faster than the average of all stations, for a number of reasons. So, where’s the adjustment for that, then? Even if the initial adjustment is correct, there is no account made for conditions afterward.
I agree with you evanmjones. Anthony and others have shown that airport sites may not really be representative of the weather (and over time, climate) of the area they are deemed to represent.
Certainly, the robustness of representations of US and global temperature trends can (and should) be questioned. Imo, the case is not closed.
@Smokey (16:39:30) :
Smokey, you spin a nice fairy tale of what is happening. You are telling me that nothing has changed in the 20th century with regard to previous centuries: the warming trend is just the warming trend from the last ice age. Let us for the moment forget about Prof. Akasofu fixing the IPCC projections (NOT predictions) at the wrong place (the maximum of a “cycle” instead of the trendline, a popular “mistake” (see e.g. Monckton)) and pretend to take it seriously. It implies a linear warming from the last ice age at a rate of more than 0.5°C/century (eyeballed from his plot) plus superimposed “cycles”. You told me that the last century is just the same as previous ones. So let us project back to Roman times, 2000 years ago: at more than 0.5°C/century over 20 centuries, that is more than 10°C. Are you seriously contending that the global mean temperature was 10°C colder in Roman times? I’ll spare you going back to the stone age. This simple consideration alone shows that a warming of 0.5°C/century cannot have been the rule over the last two millennia. I guess you’ll throw in another super-cycle now.
Your understanding of peer review is equally flawed: it is not the last line of defense; it is simply the first hurdle in scientific fact-finding. Keep in mind that Schön and Hwang were exposed by the scientific community, not by some internet blog.
You are kidding me, right? You must be. Remember Dr. Roy Spencer posting his piece on the interannual and trend signals of the C13/C12 isotope ratio? The one with the (unfounded) bottom line: “If the C13/C12 relationship during NATURAL inter-annual variability is the same as that found for the trends, how can people claim that the trend signal is MANMADE??” Where was the critical thinking of the readers back then? Tamino pointed out that the equality of the slopes was a mathematical necessity. That was not good enough for WUWT. I showed that you get the same result if you substitute the Dow Jones index and the gold price on different timescales. Not good enough for WUWT. I posted a detailed mathematical proof here on WUWT. It was pretty much ignored. I do not claim fraud, but gross error. Don’t you think there should have been at a minimum a heads-up attached to Dr. Roy Spencer’s article, if not a retraction? Nothing of that sort has happened. So don’t you dare give me your “would have lasted a week on this site”; it’s simply laughable.