Contiguous U.S. GISTEMP Linear Trends: Before and After
Guest post by Bob Tisdale
Many of us have seen gif animations and blink comparators of the older version of Contiguous U.S. GISTEMP data versus the newer version, and here’s yet another one. The presentation is clearer than most.
http://i44.tinypic.com/29dwsj7.gif
It is based on the John Daly archived data:
http://www.john-daly.com/usatemps.006
and the current Contiguous U.S. surface temperature anomaly data from GISS:
http://data.giss.nasa.gov/gistemp/graphs/Fig.D.txt
In their presentations, most people have been concerned with which decade had the highest U.S. surface temperature anomaly: the 1940s or the 1990s. But I couldn’t recall having ever seen a trend comparison, so I snipped off the last 9 years from the current data and let Excel plot the trends:
http://i44.tinypic.com/295sp37.gif
Before the post-1999 GISS adjustments to the Contiguous U.S. GISTEMP data, the linear trend for the period of 1880 to 1999 was 0.035 deg C/decade. After the adjustments, the linear trend rose to 0.044 deg C/decade.
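For readers who would rather check the trend numbers in a script than in Excel, here is a minimal sketch of the calculation. The file names and the plain two-column (year, annual anomaly in deg C) layout are assumptions for illustration; the actual GISS Fig.D.txt file and the archived John Daly file would need a little hand-editing to reach that form.

```python
# Minimal sketch: least-squares trend of an annual anomaly series, 1880-1999.
# Assumes each version has been saved as a two-column text file (year, anomaly);
# the file names below are hypothetical.
import numpy as np

def trend_per_decade(path, first_year=1880, last_year=1999):
    """Linear trend of an annual anomaly series, in deg C per decade."""
    data = np.loadtxt(path)                      # column 0 = year, column 1 = anomaly
    years, anom = data[:, 0], data[:, 1]
    mask = (years >= first_year) & (years <= last_year)
    slope_per_year = np.polyfit(years[mask], anom[mask], 1)[0]
    return slope_per_year * 10.0                 # deg C/year -> deg C/decade

for label, path in [("archived (pre-2000)", "us48_archived.txt"),
                    ("current", "us48_current.txt")]:
    print(f"{label}: {trend_per_decade(path):+.3f} deg C/decade")
```

Restricting both fits to 1880 through 1999 keeps the comparison apples-to-apples, since the archived series ends in 1999.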
Thanks to Anthony Watts, who provided the link to the older GISTEMP data archived at John Daly’s website in his post here:
NOTE: Bob, The credit really should go to Michael Hammer, who wrote that post, but I’m happy to have a role as facilitator. – Anthony
Pierre Gosselin (08:50:43) :
GISS can try to produce a warming history all it wants, but people are not buying into it:
http://online.wsj.com/article/SB124597505076157449.html
Thanks for the link Pierre, good article!
What if these sets of temperatures were adjusted for the recording errors found by surfacestations.org, and the result drawn as a comparison?
Robert A Cook PE (11:03:59) :
“The largest GISS headquarter buildings I’ve seen are in northern Washington DC – not in New York City. Then again, maybe that area ‘is’ more expensive than in NYC. 8<)”
Actually, it is important to note that GISS is a subset (branch office, if you will) of the Goddard Space Flight Center – GSFC – which is, indeed, in the DC area. However, Hansen and his crew at the GISS are in fact located near Columbia University in NYC. As you point out, neither are low rent areas…
By the way, I have tried to obtain the 2009 NASA budget with the line items from GSFC for the GISS related work and couldn't find it. It would be very illuminating to find out the details of the GISS budget for 2009 – 2010. I suspect they have been given a healthy increase in funding given the increases in the Earth Science budget for the GSFC.
John Doe (11:11:16) :
"I agree if you talk just about US, but globally I would like to have several independent agencies that measure and analyze climate data. One could be from the USA, but then European, Russian, Chinese, Indian maybe some more."
"Deviation of the measurement results gives us also an indication of accuracy but only if they are really independent."
I agree with you that it is useful to have independent verification from other groups, internationally. Of course, none of them can seem to agree on a unified method of analysis! However, in the US, with our national debt out of control, it seems very wasteful to me to have multiple government agencies (NASA and NOAA) producing the *same* products (and in GISS's case, doing it badly).
And to top it all off, we now know that the government wants to create a National Climate Service, so as to hire people to do more redundant work to support the government's view of global warming!
http://wattsupwiththat.com/2009/05/07/wuwt-poll-do-we-need-a-national-climate-service/
A bit OT but it also reminds me of a phrase I hear in Ireland: “Never take an eejit with you; you’ll always find one when you get there”
Pieter F (11:47:19) : How come no one seems to be looking at the series of posts and properly crediting me for being the first in the thread to mention Hanlon’s Razor? I’ve got priority dang it! Hehe…
I was being facetious with my comment regarding peer review, but nevertheless the question remains: Where are the documents (scientific or otherwise) that support these adjustments?
I have the old records from GISS before (I hope) James Hansen started rewriting them. I am doing a graph, 1850 to 1975 from the old GISS numbers, then UAH numbers from then to now. I am at year 2000. I will see if some warming shows up after 2000, but at present all the warming is from 1900 to 1953. Of course, 1998 shows up high, but it does not seem to take the rest of the years up.
Most of the change should be due to the differences in calculation between 1999 and 2001, as described in these two publications:
GISS analysis of surface temperature change, Hansen et al. 1999, J. Geophys. Res., 104, 30997-31022
A closer look at United States and global surface temperature change, Hansen et al. 2001, J. Geophys. Res., 106, 23947-23963
The former describes the status quo as of 1999 and the changes relative to Hansen and Lebedeff 1987; the latter documents the changes from 1999 up to 2001:
* includes adjustments developed from station meta data (Karl 1990)
* TOBS and station history adjustments
* improved urban adjustments using satellite data
Read the paper for details.
Here is a blinker comparison of the US48 temperature data using properly scaled figures from the two Hansen papers referenced above:
http://i41.tinypic.com/9bgidh.gif
Now, if readers could please point out what exactly they find wrong with the corrections listed in Hansen 2001, as would behoove a science blog, instead of insinuating out of hand “fraud”, “manipulation”, “mistakes” or “incompetence”.
P.S.: All that was required to find this info was to go to the NASA Goddard site, click on “+Publications” in the top menu, click on “Authors” in the left menu, select “James E. Hansen” and have a look at the papers of the period 1999 to 2001.
What would the models say if you input the raw data and projected it forward? If the temperature data has to be “corrected” for inaccuracies, doesn’t that compound or add to the errors of the computer models?
History matching incorrect input data means you get an answer that doesn’t mean anything. This should be earth-shattering to the climate community. Has anyone gotten comments from them? Dr. Meier, etc.?
The silence is very loud.
“It would be a wonderful thing for mankind if some philosophic Yankee would contrive some kind of “ometer” that would measure the infusion of humbug in anything. A “Humbugometer” he might call it. I would warrant him a good sale.”
— P. T. Barnum
GISS can try to produce a warming history all it wants, but people are not buying into it:
http://online.wsj.com/article/SB124597505076157449.html
A very strong article indeed. And interestingly the Alarmists are hard at work positioning their retreat. Only their strategy is the “surrender and call it victory” approach:
http://www.pbl.nl/en/news/pressreleases/2009/20090625-Global-CO2-emissions_-annual-increase-halved-in-2008.html
In spite of increasing emissions from new cars, factories and power plants, the annual increase in global emissions was cut in half in 2008. But has this had any effect on atmospheric trends? Apparently not:
http://www.esrl.noaa.gov/gmd/ccgg/trends/
So, just how much could man-made CO2 be contributing to atmospheric CO2? The truly perplexing development is that if the sustainable-ists had dropped the whole “climate change” facade, they could still have gotten the money for alternative energy. Now they have to surrender (gosh, there is no global warming) and call it a victory.
Is that a signpost or Rod Serling up ahead?
bluegrue (12:38:49) :
A common feature of the two references you cited is that neither of them has a single equation! Isn’t that great???
Some of the explanations of their algorithms (such as they are) are very confused and, in some cases, they do things with little justification.
But – despite this – what I’d like for you to do, bluegrue, is to download the culmination of all of this research – the code GISTEMP. You can do so here…
http://data.giss.nasa.gov/gistemp/sources/
Once you study the source code, please return here and tell us all how the algorithms contained in your references are reflected in the FORTRAN code developed by GISS. If you happen to come across some equations in other references which you can relate to the source code, that would be a bonus. Extra points will be awarded if you can actually get it to compile and run correctly.
Please take your time. GISTEMP, being of typical GISS quality, will take some effort to get through.
I’m thinking that the word you are looking for is “naive.” The word “fraud” is unjustified because most of those publicizing the corrected data really believe in both the corrections and in AGW. Similarly, “incompetent” is too harsh because the corrections are carefully considered and meticulously applied.
The problem is in accepting theory as operative fact. Climate science is not a discipline where scientists are constantly slapped upside the head by the real world, for the simple reason that there is no way of effectively testing their theoretical results. In this case, being slapped upside the head is a good thing. Physicians get slapped upside the head when their pills and treatments are shown in studies to be only partially effective, if that. Engineers are slapped upside the head when their prototypes break in the real world and they have to go back to the drawing board. It is this repetitive experience that informs a person that when theory meets reality, theory is a loser on most occasions.
This is anecdotal, but in my line of work I talk to a lot of engineers, and I have yet to meet one who buys hook, line and sinker into the global warming theory. It’s not that they find it implausible; it’s just that they don’t attribute much credibility to it. Michael Crichton was a physician, and I’m guessing that this training also underpinned his scepticism of AGW theory when identifying all the reasons why it could be wrong. I would assume that many meteorologists, who have to deal with forecasts being wrong, also have reservations about the conclusiveness of the science behind AGW theory.
It is not only plausible but likely that all the scientific procedures followed are correct and all the calculations performed are correct, but that the result is nonetheless wrong for reasons that, without blame, could not have been anticipated. And it is these very failures that lead to the advances that in turn eventually produce the correct result.
Climate scientists are in a discipline where they are largely insulated from the reality of being regularly proven wrong, despite having done everything right. I think this leads to a naive overconfidence in the results of even methodologically sound scientific studies or calculations.
“Jack Green (13:02:39) :
What would the models say if you input the raw data and project it forward?”
If I understand climate models, that’s the wrong question. The inputs of climate models are forcings like solar flux, CO2 concentration, aerosol concentration, etc. The output of a climate model is a stream of temperature data over a future interval. The inner workings of the climate model express the theory that relates the individual forcings to the output.
The real question is whether the inner workings of the climate models were adjusted to best fit a temperature record that doesn’t reflect real variations in temperature, and if so, whether those models would produce a different result if they were calibrated to the raw rather than the adjusted temperature data.
Cold Play: You wrote, “Please accept this as constructive critiscism (sic), the moving graphs especially when on (sic) stops at a later date show an exageratted (sic) distortion and I am sure this site would not want to be accussed (sic) of distorting data?”
Actually, I, as the author of the post, do not appreciate the accusation or even the inference that I in some way distorted the data in this post.
Document your claims. You can accomplish this by downloading the data from the links I provided, importing them into spreadsheet software, plotting the data, uploading them to a picture hosting website, and providing links here at WUWT with your explanation. Don’t forget to make sure that all graphs are the same size. Then you can compare them to the following graphs that I imported to GIF Movie Gear software to create the animations.
GISTEMP Version 2000:
http://i44.tinypic.com/15je4j.png
GISTEMP Version 2009:
http://i40.tinypic.com/30i8390.png
GISTEMP Version 2000 With Linear Trend:
http://i40.tinypic.com/16la43.png
GISTEMP Version 2009 With Linear Trend (Ends in 1999 for Comparison with Earlier Data):
http://i43.tinypic.com/1zdvk2g.png
If you were to open all of the linked graphs and flip between them, you’d notice a number of things. The scales are the same. The sizes of the graphs are the same. There’s no change in the curves when flipping between the graphs identified as 2000 data (the graph with the trend and the one without). The same holds true for the graphs identified as 2009 data, with one exception: I deleted the 2000 through 2008 data in the 2009 graph with the linear trend, for the trend comparison.
You, Cold Play, accused me of manipulating the data or the presentation of the data in the graphs. Document your accusation.
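For anyone who would rather reproduce the comparison frames in a script than in a spreadsheet, here is a minimal sketch of the procedure described above. The file names and the two-column (year, anomaly) layout are the same hypothetical ones used in the earlier trend sketch; the point is simply that fixing the figure size and axis limits keeps every frame directly comparable when flipped or animated.

```python
# Minimal sketch: plot the two versions of the US48 series as same-sized,
# same-scaled frames suitable for a blink comparison. File names are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

for label, path in [("GISTEMP Version 2000", "us48_archived.txt"),
                    ("GISTEMP Version 2009", "us48_current.txt")]:
    years, anom = np.loadtxt(path, unpack=True)
    fig, ax = plt.subplots(figsize=(8, 5))
    ax.plot(years, anom, lw=1)
    ax.set_xlim(1880, 2010)                      # identical scales on every frame
    ax.set_ylim(-1.5, 1.5)
    ax.set_xlabel("Year")
    ax.set_ylabel("Anomaly (deg C)")
    ax.set_title(label)
    fig.savefig(label.replace(" ", "_") + ".png", dpi=100)
    plt.close(fig)
```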
John:
Did you miss Bob’s comment below? He is referencing lists of numerous peer-reviewed papers that describe the rationales and methods for the temperature adjustments. You can access the abstracts of all of these papers, I think, and you can probably find full copies of some of them online.
John Galt (12:31:57) :
I was being facetious with my comment regarding peer review, but nevertheless the question remains: Where are the documents (scientific or otherwise) that support these adjustments?
Bob Tisdale (09:21:33) :
John Galt: You asked, “So where is the peer-reviewed study used to explain and justify all these adjustments?”
For the USHCN papers, scroll down to the bottom of this page:
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html
For the GISS papers relating to GISTEMP, refer to this one:
http://data.giss.nasa.gov/gistemp/references.html
And refer to Steve McIntyre’s discussions:
http://www.climateaudit.org/?p=1142
http://www.climateaudit.org/?p=1139
http://www.climateaudit.org/?p=1891
The climate models are designed to show what the modelers believe is driving recent climate changes. They took a short period of the late 20th century, saw a correlation between CO2 and temps, and then built models that project that correlation decades into the future.
Unfortunately, the purported correlation between CO2 and increasing temps only fits for short periods of the 20th century climate, while the opposite correlation appears to be the case from ~1940 to 1979. The models also included hypothetical feedbacks, which have yet to be actually observed in the real world.
In short, the models show more warming from more atmospheric CO2 because that’s how the models were programmed.
I am not implying fraud in the creation of these models; I am saying the modelers got it wrong. What is fraudulent, however, is how the modelers let the media misrepresent their work.
John Galt (12:31:57) :
I was being facetious with my comment regarding peer review, but nevertheless the question remains: Where are the documents (scientific or otherwise) that support these adjustments?
The earliest one from Mitchell I can’t get my hands on. Enjoy!
Karl, T.R., C.N. Williams, P.J. Young, and W.M. Wendland, Model to estimate the time of observation bias associated with monthly mean maximum, minimum and mean temperatures for the United States, J. Clim. Appl. Meteorol., 25, 145-160, 1986.
Mitchell, J.M., Effect of changing observation time on mean temperature, Bull. Am. Meteorol. Soc., 39, 83-89, 1958.
Karl, T.R., and C.N. Williams, An approach to adjusting climatological time series for discontinuous inhomogeneities, J. Clim. Appl. Meteorol., 26, 1744-1763, 1987.
Karl, T.R., H.F. Diaz, and G. Kukla, Urbanization: Its detection and effect in the United States climate record, J. Clim., 1, 1099-1123, 1988.
Karl, T.R., J.D. Tarplay, R.G. Quayle, H.F. Diaz, D.A. Robinson, and R.S. Bradley, The recent climate record: What it can and cannot tell us, Rev. Geophys., 27, 405-430, 1989.
Karl, T.R., C.N. Williams, F.T. Quinlan, and T.A. Boden, in United States Historical Climatology Network (USHCN), Environ. Sci. Div. Publ. 3404, Carbon Dioxide Inf. and Anal. Cent., Oak Ridge Natl. Lab., Oak Ridge, Tenn., 1990.
Karl, T.R., R.W. Knight, and J. Christy, Global and hemispheric temperature trends: Uncertainties related to inadequate sampling, J. Clim., 7, 1144-1163, 1994.
I did a run on the V2009 series where I calculated the year-to-year delta of the anomalies, i.e. d_1881 = a(1881) – a(1880), and so on up to 2008, thinking there should be a trend when a high year is not compensated by a low year. But I found the mean of all these deltas to be 0.0026, which is as good as zero; the linear trend in the year-to-year deltas is flat at zero and completely uncorrelated (R2 = 1E-5).
I sometimes find it interesting to see whether that kind of derivative runs away on a series, but here it stays put over the long run.
Can anybody explain why, or what it means?
Note: I may have done something completely wrong – don’t hesitate to call me a fool, but tell me why …
BTW, the two series (2000 and 2009) have nicely matching year-to-year deltas even though they differ in the absolute values of the anomalies.
Martin
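To make Martin’s delta check easy to repeat, here is a minimal sketch using the same hypothetical two-column (year, anomaly) file as the earlier examples. One point worth noting: the year-to-year deltas telescope, so their sum is simply the last anomaly minus the first, and their mean is that difference divided by the number of steps. That alone goes a long way toward explaining why the mean comes out near zero, regardless of what the adjustments did in between.

```python
# Minimal sketch of the year-to-year delta check described above.
# Assumes a hypothetical two-column (year, anomaly) text file.
import numpy as np

years, anom = np.loadtxt("us48_current.txt", unpack=True)
deltas = np.diff(anom)                 # d_1881 = a(1881) - a(1880), and so on

# The deltas telescope: their sum is a(last) - a(first), so their mean is that
# difference divided by the number of steps, which is tiny for this series.
mean_delta = deltas.mean()

# Goodness of fit (R^2) of a straight line through the deltas.
slope, intercept = np.polyfit(years[1:], deltas, 1)
residuals = deltas - (slope * years[1:] + intercept)
r_squared = 1.0 - residuals.var() / deltas.var()

print(f"mean delta = {mean_delta:.4f} deg C, trend R^2 = {r_squared:.1e}")
```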
@Bill D (13:56:52) :
There is some delay while posts are awaiting moderation, and that post was not visible before I posted mine.
I thank you for the links, but the GISS documents are not helpful in determining why they back-adjusted the data. Is the answer in one or more of the 16 documents referenced on that page? If so, I’m not going to be able to sort through all of them to find the information I requested.
Climate Audit does a wonderful job critiquing the adjustments, but these posts do not explain the GISS rationale for adjusting past observations.
The question isn’t how (or why) GISS adjusts current readings, but how it adjusts historical data and what the justification for this is. I think that question alone merits an answer that is not buried in some other document.
If GISS discovered that all the thermometers in use in the 1930s read too high, then that’s newsworthy, don’t you think?
BTW: I didn’t find the answer in the documents linked by bluegrue, either. I did a search on the PDFs and they don’t seem to address that issue, just adjustment of current readings.
Thank you
What we do know is that the occurrence of climate change is nothing new to this planet, and our participation is all that we are unsure of!
Re TOBS.
The method is all very nice, but wouldn’t it be even nicer to actually look at the B-91 forms and see what those TOBS actually were? Which I understand NOAA does not do. Or is that too novel a concept?
I’m having a little trouble understanding how the comparison between the adjusted data in 2000 and the adjusted data in 2008 shows that a trend line from 1880 to about 1915 changed from neutral to cooling.
My recollection of this, from what I read over at ClimateAudit, is that missing or unreliable data is interpolated using more recent information. For example, if an average temperature for month “x” was not available, it is deduced from preceding and subsequent months. I’m assuming that corrections for UHI and TOB, etc. also follow the paradigm that the adjusted data is derived from data points subsequent to the suspect data, hence revisions will necessarily involve ongoing alterations to the historical record.
But there has to be some real-world phenomenon upon which those estimates/corrections are based, and that phenomenon can’t reasonably be expected to reach forward/back in time by a century or more. As an analogy, if a camera sensor has a bad pixel, good image processing software can estimate the value of that pixel from those of its neighbors. Though not guaranteed, you can at least pick a “statistically likely” value that over a number of images will be correct more often than not. But I can’t imagine that there is any reason why the value of a pixel at one corner of a 15-20 megapixel image has any expected relationship to the value of a pixel at the other corner.
In this instance, we know that the corrected version in 2000 had raw data values for the 1880-1915 period, but at some point after 2000 the corrections continued to adjust these values. Now I understand that these are anomalies from an average measured over a large base period of time, such that if you adjust the value of any year in that base period, the entire set of anomalies is going to have to be adjusted because the average just changed. Having said that, however, you would still think that the corrective process would affect “x” less and less as you move forward in time from “x”, but that doesn’t seem to be the case if information post-2000 could change a 35-year trend that started about 120 years ago.
Just from eyeballing the animation it looks like the changes at the back of the 120+ year history are only slightly less significant, if at all, from the changes at the front.
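To make the neighbor-interpolation idea in Kurt’s analogy concrete, here is a minimal illustrative sketch of filling a missing monthly value from the nearest valid values on either side. This is only the simplest form of the idea, not the actual GISS or USHCN infilling procedure.

```python
# Minimal sketch: fill gaps in a monthly series from neighboring months.
# Purely illustrative; not the actual GISS/USHCN method.
import numpy as np

def fill_gaps(values):
    """Linearly interpolate NaN entries from the nearest valid neighbors."""
    values = np.asarray(values, dtype=float)
    idx = np.arange(values.size)
    good = ~np.isnan(values)
    filled = values.copy()
    filled[~good] = np.interp(idx[~good], idx[good], values[good])
    return filled

# One missing month in a short series of anomalies (deg C):
monthly = [0.2, 0.4, np.nan, 0.1, -0.3]
print(fill_gaps(monthly))   # the gap becomes 0.25, midway between 0.4 and 0.1
```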
Kurt: You wrote, “I’m having a little trouble understanding how the comparison between the adjusted data in 2000 and the adjusted data in 2008 shows that a trend line from 1880 to about 1915 changed from neutral to cooling.”
Please identify who you’re addressing your comment to. Thanks. In the post, I didn’t discuss any change from neutral to cooling in the early data.