GISS Ts+dSST Numbers Are In

by John Goetz

The GISS Ts+dSST numbers are in.

June comes in at 26, continuing the downward trend at GISS and making it the seventh lowest anomaly this decade.

Lots of history was rewritten by the June temperature, with 89 monthly adjustments upward and 22 downward. Most of the downward adjustments were made this decade, and most of the upward adjustments were made pre-1941. At an annual level, 9 years before 1928 were adjusted upward, and 2007 was adjusted downward.

As for 2008, Jan and Feb were unchanged, Mar up 2, Apr up 1, and May up 3. The uplifts in M-A-M surprised me some, because I would have expected out of season months (such as June) to have no effect. Such is the GISS method.

I will post up a plot later today, unless Anthony beats me to it (I like his format).

REPLY: Go for it. I’m jammed up today, fires are threatening again. 1/4 mile visibility due to smoke.

Added reference: A number of comments ask why the historical numbers change. I wrote a post on that earlier this year, which was publicized on this blog and Climate Audit. I am not saying it is OK that they change, only describing why they change.

July 9, 2008 9:01 am

In relation to the GISS figures and also HadCRUT, I saw this over at RealClimate:
“The differences in the two products (HadCRUT3v and GISTEMP) are mostly a function of coverage and extrapolation procedures where there is an absence of data. Since one of those areas with no station coverage is the Arctic Ocean, (which as you know has been warming up somewhat), that puts in a growing difference between the products. HadCRUT3v does not extrapolate past the coast, while GISTEMP extrapolates from the circum-Arctic stations – the former implies that the Arctic is warming at the same rate as the rest of the globe, while the latter assumes that the Arctic is warming as fast as the highest measured latitudes”
I am not sure about the significance or otherwise of this difference in treatment of the polar regions.

July 9, 2008 9:29 am

JG, is it possible to plot the adjustments in some way so we can see what the scale and number of them is? Or has this already been done? Even a sample to show for a given station year, how the readings had varied over the last 20 years in consequence of the adjustments.
Or is the raw data simply not available any more?

An Inquirer
July 9, 2008 9:31 am

Check out this GISS map: It shows the movement in GISS temperatures around the world from 1988 to 2008.
(This map is for the winter months. For some reason, GISS does not allow me to make the comparison for Spring or for June, and I have not yet understood the explanation why. Of course, a reference to a GISS map is not to be understood as an endorsement of GISS methodology.)

July 9, 2008 9:36 am

Here is a little exercise I did yesterday and you can play too!
Go to NOAA’s United States Climate Summary page (it says May but June’s numbers are in the database). Fill in the form as follows:
Leave “Data Type” at its default of Mean Temperature
Period: “Most Recent 12-month Period”
First Year To Display: 1998
Last Year To Display: 2008
Base Period Beg Yr: 1977
End Yr: 2007
Leave the other fields at their defaults and click the “Select” button to plot the graph.
And there you have it. A trend of -0.63C per decade, or a cooling rate of 6.3 degrees per century, since 1998. Now that looks to me like some significant “Global Cooling”. Keep in mind that those numbers are for the United States only, not global data.
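For anyone who wants to sanity-check that figure, the decadal trend NOAA reports is essentially the slope of a least-squares line through the monthly series. Here is a minimal sketch; the anomaly values are made up for illustration, not the actual NOAA data:

```python
# Least-squares trend through an evenly spaced monthly anomaly series.
# The series below is invented for illustration, not real NOAA data.

def decadal_trend(anomalies_c):
    """Return the linear trend in deg C per decade for monthly data."""
    n = len(anomalies_c)
    xs = [i / 12.0 for i in range(n)]   # time in years
    mean_x = sum(xs) / n
    mean_y = sum(anomalies_c) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, anomalies_c))
    var = sum((x - mean_x) ** 2 for x in xs)
    return (cov / var) * 10.0           # slope per year -> per decade

# A steadily cooling series: -0.05 C per month, i.e. -6 C per decade.
series = [1.0 - 0.05 * i for i in range(24)]
print(round(decadal_trend(series), 2))  # -> -6.0
```

The same arithmetic shows why a -0.63 C/decade slope works out to 6.3, not 6, degrees per century.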

July 9, 2008 9:40 am

Latest GISTEMP vs. UAH/RSS (Hadley not yet out for June), with adjusted baselines:
REPLY: Odd. GISS goes down while UAH and RSS go up slightly. – Anthony

Bill Marsh
July 9, 2008 9:48 am

I still have trouble with the idea that last months temperatures affect those recorded in 1927. I can see the need to make adjustments, but I would think that only one would be necessary, not readjustments every month.

July 9, 2008 10:29 am

This may be slightly OT, but I have just been reading Steve McIntyre’s post of yesterday on Climate Audit, and it seems pretty important to me. As a layman fairly new to this debate, I should welcome a view from people here as to just how significant this is. It suggests to me, at least, that a key tenet of the IPCC argument may well be seriously flawed.

July 9, 2008 10:36 am

Maybe Hansen has bitten the dust. BTW, there is no way NH ice is close to last year’s melt; it is now about 1 million km2 above last year.

July 9, 2008 10:37 am

BTW, La Niña has gone too!

Gary Gulrud
July 9, 2008 10:51 am

From CA, the response to Steve Mc’s request for Jones’ Russian station data:
“C) The only data a U.S. government author used in the cited paper was for U.S. An exact replicate of those data are no longer available having undergone continued update and adjustment including the addition of new data and metadata. The earliest digital version of the HCN data is the 1996 release which is updated continuously and available (no charge) at: This document will need to be opened with UNIX uncompressed command. From that dataset, the listed stations can be extracted as either raw or adjusted data (adjusted data were used in the Jones et al. article). The data will not exactly match those used in the Jones et al article, but for purposes of testing the analysis as described in that article, there were no major changes in adjustment methods in the data through 1984. For data after 1984 there was an additional correction for introduction of the Maximum Minimum Temperature Sensor (MMTS) measurements at Cooperative Observing Stations, some of which are stations in the HCN.”
So, we cannot check their work or do our own (ok, not me, those of you who get these things done)? Does GISS then adjust adjusted data?

July 9, 2008 11:51 am

It is still unfathomable to me that historical data is revised every month. I can’t think of a real life analogy for this circumstance. The closest thing that comes to mind is the “fish that got away” whose size tends to be directly related to number of times the tale is told.
Imagine if stock charts were backward revised every month.

July 9, 2008 12:31 pm

I believe someone recently referred to this as the worst scandal in science history going on right under our noses. Piltdown Man didn’t threaten to bankrupt entire economies. It’s just waiting for a significant climate cool-down to expose it, I guess; they’re not even trying to hide their re-writing of temperature history.
Maybe they forgot that nature is going to do whatever it’s going to do no matter what mankind reaches consensus about.

July 9, 2008 12:40 pm

It appears the continuous re-writing is not a conscious attempt at deception, but an artifact of the “adjustment” process.
Don’t confuse bias and group think with maliciousness.
Or as the famous quote goes:

Don’t assume malice when stupidity will explain it.

Despite their many degrees and many “peer-reviewed” papers, the mob psychology that has infected the team is not unusual in the history of science.

Robert Wood
July 9, 2008 12:44 pm

Jeez, some executives at Northern Telecom tried that for a few years with the company accounts.

July 9, 2008 12:47 pm

Dittos to what jeez and Bill Marsh have said. As an economist, I’m very familiar with preliminary estimates being later revised into final figures. But that’s not what is taking place here, when recent data causes revisions to data going back decades and decades. There is obviously a serious methodological issue with this. Since this is “official US Government data,” why can we not get a Congressional investigation going to look into this? Is anybody in a position to put a bug in the ear of Senator Inhofe or one of his staff and get somebody in a position of .gov influence to look into this?

July 9, 2008 12:49 pm

To follow up to what jeez posted while I was writing, I realize that this is an artifact of the adjustment process used by GISS. Which is all the more reason why the adjustments need to be more transparent. What kind of adjustment process really justifies data revisions like this?

Robert Wood
July 9, 2008 12:54 pm

Can someone answer me this:
The “average” daily temperature is taken to be (highest + lowest)/2. Isn’t this rather the median temperature? Isn’t the average a more important indicator? It would require integrating the temperature samples over the day and then dividing by the number of samples. Wouldn’t this give a different result, more indicative of the heat of the day?

Evan Jones
July 9, 2008 1:11 pm

Hope everything goes okay re. the brush fires. Good fortune to you and yours.

July 9, 2008 1:13 pm

Robert Wood,
No, it’s the average because only two temperatures are taken per day (the high and the low) using minimum and maximum thermometers. Technically it also is the median, but not in the sense you are thinking, as in the case when more frequent readings are taken. Remember, this process was started well over 100 years ago, when readings were done manually. It took some effort just to get the high and low every day. Only with automated technology can you integrate many readings over the course of a day.

Robert Wood
July 9, 2008 1:15 pm

But there can be a huge difference between the two techniques.

Evan Jones
July 9, 2008 1:16 pm

Isn’t this rather the median temperature?
No, that would be like taking hourly samples and taking the 6th warmest (or coolest) as the measure and paying no attention to how much warmer or cooler anything else was.
Actually, the (Min+Max)/2 system seems to work quite well and compares favorably with systems that measure temperatures hourly and do the HourSum/24.
But having said that, the latter is obviously the better procedure. The new NOAA/CRN system is set up to do just that (it’s automated).

Evan Jones
July 9, 2008 1:16 pm

Oops. For median, I mean the 12th warmest or coolest!

Bill Illis
July 9, 2008 1:19 pm

What is interesting about the past data adjustments is that net total adjustment to date adds up to +0.7C which is essentially the amount that temperatures have increased since 1900.
So not only are temperatures increasing (falling lately actually) less than predicted by the global warming models, all of that increase is just Hansen and Hadley playing with the historical records (some of which might be justified if we could just find out what they actually did.)

July 9, 2008 1:21 pm

Robert Wood,
(highest+lowest)/2 doesn’t have to be the mean or the median. Although it will be both if you assume a sinusoidal temperature function. The problem is that historically temperatures were not taken every minute or hour. Instead, they used two thermometers that would measure the minimum and maximum temperatures during a specific time period. An observer would check the thermometers once a day, and record them. Thus the best estimate of the mean daily temperature would be (highest+lowest)/2.
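For what it’s worth, that point is easy to test numerically. In this sketch (the temperature curves are invented for illustration), a clean sinusoidal day gives identical results for the two measures, while a skewed day, say an unusually cold early morning, pulls them apart:

```python
import math

def min_max_mean(temps):
    """The traditional (max + min) / 2 daily 'average'."""
    return (max(temps) + min(temps)) / 2.0

def integrated_mean(temps):
    """Mean of all samples, i.e. the integrated daily average."""
    return sum(temps) / len(temps)

# Sinusoidal day: low at 3 am, high at 3 pm, mean 15 C, amplitude 10 C.
sinusoid = [15.0 + 10.0 * math.sin(2 * math.pi * (h - 9) / 24) for h in range(24)]

# Skewed day: the first six hours run 8 C colder than the sinusoid.
skewed = [t - 8.0 if h < 6 else t for h, t in enumerate(sinusoid)]

print(round(min_max_mean(sinusoid), 2), round(integrated_mean(sinusoid), 2))  # -> 15.0 15.0
print(round(min_max_mean(skewed), 2), round(integrated_mean(skewed), 2))      # -> 11.0 13.0
```

The skewed case shows the 2 C disagreement Robert Wood is worried about: (max+min)/2 weights a single cold hour as heavily as the whole rest of the day.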

Evan Jones
July 9, 2008 1:23 pm

Or is the raw data simply not available any more?
It is. But NASA/GISS raw data = NOAA adjusted data.
NOAA raw data must be available because they show a before/after adjustment map. Not for public consumption, of course! (If the public could see it, they’d be marching up to Abbeville (or wherever the hell) with torches, flails, and pitchforks.) The Rev has it though, and posted it a couple of months ago.

Stevie B
July 9, 2008 1:29 pm

Can someone please explain to me why they adjust temperatures every month? I understand them trying to adjust for the terrible placement of climate stations, but why adjust past data based on current data. Lastly, why adjust it every month?

July 9, 2008 1:32 pm

Robert Wood, I couldn’t agree with you more. That is the mathematical definition of average. The problem, of course, is that with most of the historical data, all we have is the high and the low. No records of additional data points available. In order to be able to use same method comparisons for averages, they only worry about the high and low with the modern data. That allows them to say they are following historical trends. It isn’t necessarily reflective of reality, but it gives the illusion. You also have to remember that the temperature listings being used are monthly, seasonal, and annual averages, which even further skew data points.
I would much rather they put the data from 1902 to one side, and concentrate upon the more current data where we have more data points for a day to use. The trend length might not be as long, but temperature values would be more indicative of heat and we might get better predictive ability.

Pierre Gosselin
July 9, 2008 1:34 pm

Bob Tilsdale,
LOL! Yes, and you’ll recall what Hansen was saying before Congress exactly 20 years ago. It’s just too amazing.

Evan Jones
July 9, 2008 1:36 pm

fred: As for NOAA/USHCN adjustment procedure, if you want to raise AGW by having your blood boil, check this out:
This is the USHCN-1 method.
Adjustments 1900-2000:
Map of Raw Temperature (USA)
Map of NOAA-Adjusted Temperature (USA)
Before you calm down, let me emphasize that the USHCN-2 is out now, and the adjustments are even worse. (Up to +.425°C trend adjustment from +.30°C) But they are MUCH too smart to provide a graph (or maps) this time around!
The best that I can put together is that they carefully account for the CRN1/2 to CRN4/5 differences–by “adjusting” the CRN1/2 results to fit the others! Then they show a graph about how proud they are that their fine adjustment procedure “fixes” station quality, so go away, you denier, you!

Evan Jones
July 9, 2008 1:41 pm

Can someone please explain to me why they adjust temperatures every month?
I think it’s because they like to keep their options open. If they do it every month, then they can effectively say anything at any time. And go, “…..What?” if anyone objects.
The story (poss. apoc.) is that the beastly Sovs used to send the British government a new Soviet Encyclopedia every year which included instructions as to what articles to add and cut out of the old ones.

July 9, 2008 1:45 pm

Gary Gulrud:
Do these include those FUBARd Soviet-era weather stations that habitually exaggerated their wintertime lows in order to qualify for more fuel subsidies?

July 9, 2008 1:53 pm

Don’t assume malice when stupidity will explain it.
Corollary: …Unless reputations and big, big bucks are involved.
Question: Does GISS provide the historical raw data along with the cooked data? If so, for how far back does their raw data go? Is it archived online?
Also, Gary mentions the old mercury thermometers [usually about 30″+/- long, with sliding magnifiers and 1/100th degree reticles] were by far the most accurate. They are more accurate than today’s thermocouple/datalogger setups, because thermocouples have inherent hysteresis; mercury thermometers don’t. Something to consider when tenths/hundredths of a degree matter.

July 9, 2008 2:15 pm

— Paul Clark
Yes basically this is blowing up. Or this piece of it is, the UHI piece. No-one could assess Jones’ claims as long as the stations were kept secret. When they found the Chinese names out, after FOI proceedings in the UK last year, it turned out they likely were heavily contaminated with UHI. So now we have the Russian ones, and we must see whether they too are contaminated or are lacking adequate histories.
It always did seem odd that in studies the ROW station record showed so much more warming than the US station record. Now that the names of the stations have been revealed after 15 or 20 years, we can for the first time check into why this should be.
REPLY: I may need to go to Russia to solve this issue. -Anthony

Diatribical Idiot
July 9, 2008 2:17 pm

Well, I’ve thrown up my monthly analysis and charts related to the GISS temps here:

July 9, 2008 2:25 pm

Some people are asking why the past should change all the time. Well, you have to be a physicist to understand this; it’s a quantum mechanics matter, but here is a journalistic explanation for lay readers. You see, with temperatures, the act of observing affects them. This means that if you know when the temperature was recorded, you can only tell in a probabilistic way what it was. Obviously, therefore, the observation will change as surrounding data points are observed. You may not have quite followed this last step, but it’s, as I say, a quantum matter.
It’s one of the things that makes climate science so difficult, and why we should particularly respect people like Hansen who struggle with this sort of thing every day.
It’s almost as bad if you find you do know the temperature exactly. Then you can only tell in a probabilistic way when it was measured. This is why it’s so hard to establish what the warmest years of a given period were, and why it changes all the time. It’s not the temperatures that were moving around, but the dates.
It’s a bit like the husbands some of us used to have. If you knew where they were, you did not know what they were doing. If you knew what they were doing, you could only tell in probabilistic terms who they were doing it with. As to when they had done it, well, probably in the lunch hour. But not always.
Do you see now?
REPLY: For laymen readers, this is satire. -Anthony

Diatribical Idiot
July 9, 2008 2:27 pm

“It is still unfathomable to me that historical data is revised every month.”
I can actually think of a reasonable reason to do so. I am not saying the GISS approach is reasonable, just that I can see a theoretical reason why it isn’t a problem.
Suppose you take measurements with a crude instrument until year X. In year X, you are able to take more accurate readings with a new instrument. Or, perhaps you used to take readings only twice a day, but the new instrument is able to take a proper 24-hour average. If you continue to take readings using both instruments from year X forward, each new month provides more information on how the crude instrument deviated from the new one. And, perhaps with further analysis, it can be shown that these deviations are of different magnitudes depending on certain conditions. With each new bit of information, then, the old data can be adjusted to give what can reasonably be considered to be a more accurate adjustment of the old readings to the current technology.
Again, I’m not saying that this is what has been done, or even if it is, that it has been done correctly. But it is a plausible argument in favor of making historical adjustments.
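That hypothetical overlap scheme can be sketched in a few lines. Everything below, the numbers, the function names, the single-offset bias model, is invented for illustration; it is not the actual GISS or NOAA procedure:

```python
def estimate_bias(old_overlap, new_overlap):
    """Mean difference (new - old) over the months both instruments ran."""
    diffs = [n - o for o, n in zip(old_overlap, new_overlap)]
    return sum(diffs) / len(diffs)

def adjust_history(old_record, bias):
    """Shift the old-instrument record onto the new instrument's scale."""
    return [t + bias for t in old_record]

old_history = [14.2, 15.1, 13.8]   # readings before the changeover
old_overlap = [14.0, 15.5, 16.1]   # old instrument during the overlap
new_overlap = [14.3, 15.8, 16.5]   # new instrument, same months

bias = estimate_bias(old_overlap, new_overlap)
print(round(bias, 2))                                        # -> 0.33
print([round(t, 2) for t in adjust_history(old_history, bias)])  # -> [14.53, 15.43, 14.13]

# Next month another overlap pair arrives, the bias estimate shifts,
# and the entire pre-changeover history is restated again.
old_overlap.append(16.0); new_overlap.append(16.2)
print(round(estimate_bias(old_overlap, new_overlap), 2))     # -> 0.3
```

This makes the mechanism concrete: the 1927 numbers move not because 1927 changed, but because the estimate of the measurement bias changed.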

July 9, 2008 2:32 pm

Their reasoning is something like that–JG knows it thoroughly, I’m just a dumb moderator.

July 9, 2008 2:34 pm

In the article, Anthony ponders:

The uplifts in M-A-M surprised me some, because I would have expected out of season months (such as June) to have no effect. Such is the GISS method.

I wouldn’t be at all surprised that since the last release some late May data also arrived. If so, then that data would apply to the M-A-M GISS I-can’t-believe-they-do-that data adjustment for past M-A-Ms.
Just another uplifting GISS experience and yet another qualification in trying to keep track of data to reproduce past analyses.

July 9, 2008 2:47 pm

What Diatribical Idiot is referring to is either a bootstrap method or a Bayesian method. Both are common statistical techniques, but I would think that after a certain amount of time enough data from the new, reliable instrument would have been collected, resulting in very little change in the posterior (or newly estimated) result. It would be interesting to see if they are actually using this – perhaps John mentioned it in the preceding post – I’ll have to go and look.

Gary Gulrud
July 9, 2008 3:50 pm

fred: You had me going.

Gary Gulrud
July 9, 2008 3:53 pm

leebert: Sorry, you’re beyond my pay grade.

Robert Wood
July 9, 2008 4:14 pm

Fred, to continue in all innocence, this is precisely the problem that Hansen has: climate is always changing, all over the place, all the time, in a probabilistic fashion.
He needs it to change upwards, all over the place, all the time, so you can imagine the stress of his job and the fast-footedness of his statistical adjustments of data.

Steve Moore
July 9, 2008 4:28 pm

Fred, Anthony,
It made perfect sense to me.
It reads like something from “The Dancing Woo-Woo Masters”.

July 9, 2008 4:45 pm

Unconscious bias is well documented and widespread in science. If you study science at university, it gets drummed into you that you have to construct your experiments to minimize unconscious bias. Adjusting data is a big ‘no no’ for this reason.
Which is not to say the temperature adjustments aren’t justified, but they need to be done in a transparent, auditable way.
And if you think the situation is bad in the USA, it’s worse in other countries, where they won’t say whether they even adjust the data, never mind by how much.

July 9, 2008 4:49 pm

Actually, THIS may be Hansen’s problem.

July 9, 2008 5:23 pm

Would that be the Hansenberg Uncertainty Principle???
I view the GISS temperature record more like the cat in Schrodinger’s box, and the current monthly temperature as the random quantum fluctuation that releases Hansen. The observer is unable to know the actual state of the GISS record until they open the box and the GISS record collapses into a series of positive and negative adjustments.

Robert Wood
July 9, 2008 5:34 pm

Hey, thanks to those who answered my question re: median vs. average temperature.
I was right (yeah :-)), but this measure is kept for historical reasons.

John D.
July 9, 2008 5:42 pm

These measurement/data problems may be telltale of limits in our technological abilities. Perhaps observation of long-term trends in species distribution (altitude, latitude, temporal) are more revealing to actual trends. Could these highly evolved natural systems be more sensitive to climate trends than our timeframe-sensitive, technology-based perceptions? Maybe they are also too noisy; I guess we’ll find out.

July 9, 2008 6:17 pm

Robert Wood (12:54:34) :

The “average” daily temperature is taken to be (highest + lowest)/2. Isn’t this rather the median temperature? Isn’t the average a more important indicator? It would require integrating the temperature samples over the day and then dividing by the number of samples. Wouldn’t this give a different result, more indicative of the heat of the day?

All this has been answered, but a personal empirical note applies. For a while I had a surplus thermograph that dad had rescued from work. (Nice big industrial style round charts with thermocouples and vacuum tube amplifiers. None of this search for an hour looking for the USB stick you dropped.) When I went to college I keypunched several months of data for each 3 hours and wrote various Algol programs to produce “ASCII art” graphs and whatnot.
One thing I did was to compare (max+min)/2 and the average of 3-hour periods (half weighting each of the 0000 and 2400 readings). I was moderately surprised at how well they matched. Only in some of the obvious situations, e.g. a winter cold front passing right after midnight so that 0000 had the high for the day, was the data obviously bad.
The difference was typically a few tenths of a degree, IIRC, so I’d be very reluctant to mix both styles into any climate change analysis, but for producing climate atlases and other low precision documents, I decided it was close enough for government work.
All this data was from northeastern Ohio, but probably holds in most other areas. Perhaps not coastal with summer seabreezes, but perhaps it does.
40 year-old data. I know I don’t have the punch cards, but I might have it on listings still.

July 9, 2008 6:28 pm

jeez (11:51:54) :

It is still unfathomable to me that historical data is revised every month. I can’t think of a real life analogy for this circumstance. The closest thing that comes to mind is the “fish that got away” whose size tends to be directly related to number of times the tale is told.
Imagine if stock charts were backward revised every month.

First, only missing data is adjusted in this pass. Second, temperature records from a single station are typically recorded by a single person and propagated to not too many other people. Company stock prices are recorded many times in many places and there is strong financial incentive to do that. OTOH, you might have trouble finding the stock price for DEC for, say July 9, 1968. DEC was bought by Compaq, that was bought by HP. DEC is no longer traded, so you probably can’t find it at your favorite online broker.

July 9, 2008 6:32 pm

Questions: Is CO2 dispersed evenly around the globe? If it isn’t, where is it concentrated? I looked for actual numbers, but all the web sites I went to gave me… well, garbage and interpretive language. Just wondering about the dynamics of the CO2 theory. If it does capture heat, and it is concentrated, shouldn’t we see warming where it is concentrated? If it is evenly dispersed, shouldn’t we see a smoother line in our global temps?

July 9, 2008 6:36 pm

Yes, adjusting the value of missing days in 1927 because we had a cold June in 2008.
If we keep doing this for another thousand years, can we assume we will know better what the exact temperature was on some random day and random place in 1927 than we know it today? I see CRU temperatures for 100 years ago stated in thousandths of a degree. Does this remotely make any logical sense to you?
No more accuracy is being added by this process, only a false sense of sciency-looking adjusting.
And what relevance is there to your DEC example? Just because information is obscure doesn’t mean it won’t be accurate and consistent when located. I don’t feel like subscribing to Hoovers just to find this information.

Evan Jones
July 9, 2008 6:56 pm

Does GISS then adjust adjusted data?
To be clear:
GISS uses fully adjusted NOAA data as its own raw data. It then adjusts it further.

July 9, 2008 6:57 pm

Hey Evan, you live in SF right? Ok if I send you an email?

Evan Jones
July 9, 2008 6:58 pm

I see CRU temperatures for 100 years ago stated in thousandths of a degree.
Fischer is pleased to refer to this as “the fallacy of misplaced precision”.

July 9, 2008 7:00 pm

When I was in high school science class it was called spurious accuracy–same thing.

July 9, 2008 7:00 pm

Temperature data is a proxy for heat gain by the climate system. Some scientists like Pielke senior argue we should put more effort into directly measuring heat gain.
Using an average of min and max is a poor way of trying to measure heat gain, because the difference from min to max to the next min is just a measure of how much heat is gained and then lost over the day, and what we are interested in is the net heat gain after the daily gain and loss. Hence we should use minimum temperatures only.
However, most people in most places are much more interested in the daily highs than the lows, and most people would consider rising minimum temps, i.e. fewer cold nights, to be a good thing. So using just minimum temperatures would have much weaker propaganda value.

July 9, 2008 7:02 pm

Ric, how do you adjust missing data? I must be missing something.

Evan Jones
July 9, 2008 7:16 pm

Hey Evan, you live in SF right? Ok if I send you an email?
I live in NYC. But it’s still OK if you send me an email.
My email address is: [email snipped by jeez, now officially known as Charles the moderator]

July 9, 2008 7:19 pm

Maybe I will, but I was looking for a potential drinking buddy. I’m mixing you up with someone else around here. I cut your email above. It’s never a good idea to leave it hanging out there.

July 9, 2008 7:30 pm

I found this article interesting: “Mysterious California Glaciers Keep Growing Despite Warming”.

July 9, 2008 7:38 pm

I still like this one. Rising CO2 must be good for the Arctic Ice…….

July 9, 2008 8:30 pm

That’s a good idea! Let’s get the local groups together and have some beers. Jeez – you should begin to take a survey of where people live and figure out a few major beer-drinking locations. I’m sure that survey software would be at least good for that!

July 9, 2008 8:42 pm

Duct tape all attic, crawl space vents closed. That’s where embers get sucked in. Drape AC intake with wet cloth and keep wet. Park vehicles, lawnmower, other fuel tanks inside garage with the door closed. Sprinkler on roof. Keep vigil for embers. Most homes can withstand quite a bit if properly prepped.

July 9, 2008 9:31 pm

Glenn (19:02:57) :
“Ric, how do you adjust missing data? I must be missing something.”
describes the process pretty well.
Suppose the observer for Penacook was on vacation for part of April 1933 and hence wasn’t able to come up with the 30 days of data to compute the average for the month. Averaging the days he was around for doesn’t work, especially if he was away during the beginning or end of the month, when temperatures are usually furthest from the mean. Replacing the missing data with daily averages might work pretty well, and may be what should be done.
Instead, the missing month’s data is filled in from the other two months in the season and the average temperature of that month across all other years. As the temperature record gets longer, the amount of data feeding that average grows.
It would make a lot more sense, at least to me, to use something like the average of the preceding month, the following month, and the same month in the previous and following years. Maybe a few other years too, but certainly no more than a few.
Probably some use of averages for the month and adjacent ones too.
There are some algorithms for doing this, I have one that’s used to take a collection of data samples and spread them out over an even matrix. If I ever have time, I want to try going from GPS traces to topographic map and I need that step to make some contour drawing software happy.
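As I read that description, a toy version of the seasonal infill might look like the following (my own sketch, not the actual GISS algorithm): the missing month is taken from its long-term climatology, shifted by how far the other two months of the season ran above or below theirs that year. Because the climatology is recomputed as new years arrive, the infilled 1933 value changes retroactively.

```python
def infill_missing_month(season_this_year, season_climatology):
    """Estimate a missing month from the other two months of its season.

    season_this_year: three monthly means with exactly one None (the gap).
    season_climatology: long-term means for the same three months.
    Toy version of the scheme described above, not the actual GISS code.
    """
    i = season_this_year.index(None)
    present = [j for j in range(3) if j != i]
    # How far this year's observed months sit above/below their climatology:
    offset = sum(season_this_year[j] - season_climatology[j] for j in present) / 2
    return season_climatology[i] + offset

# April 1933 is missing; March and May each ran 1 C above their long-term
# means, so April is infilled 1 C above its climatology.
print(infill_missing_month([6.0, None, 14.0], [5.0, 9.0, 13.0]))  # -> 10.0
```

The second argument is the hook for the retroactive changes: every new year shifts the climatology slightly, and with it every infilled month that depends on it.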

July 9, 2008 11:11 pm

At the risk of stating the obvious, there is never any need or justification for infilling data that is missing for whatever reason.
This is a case where the computational algorithm is determining the data required, where it should be the other way around. And the algorithm should handle the data available. Calculating means and trends where data is missing from a time series isn’t a hard problem.
Amateur is too weak a word for this.

July 10, 2008 1:14 am

Ric, thanks. But I can’t see how adding data where none exists is constructive here. Anything added that changes a trend (or affects the existing record in any way) would be pure speculation; weather hardly ever does what we tell it to before it happens, and especially not after. Perhaps I’m still missing something here, but it doesn’t sound like sound science. How far could you stretch this vacation (30 days… 3 years)? How would you set limits on “accuracy”?

July 10, 2008 4:15 am

A few points…
I keep a local copy of Hadley/GISS/UAH/RSS at home. I also update the entire dataset, instead of merely adding the latest month. I was about to make a fool of myself by crowing about 12 consecutive months of temperatures falling year-over-year at GISS. Then I did a double-take. The GISS dataset with data to May 2008 showed March 2007 as 0.60 and March 2008 as 0.58, a 0.02 decline over 12 months. The GISS dataset with data to June 2008 shows March 2007 as 0.59 and March 2008 as 0.60, a rise of 0.01 over 12 months. They’re the only one showing a rise over the previous year for March 2008. Anything to deny the skeptic community a propaganda point.
The GISS 12-month running mean anomaly to June 2008 is 0.421. The GISS 12-month running mean anomaly to March 1998 was 0.422 (at least on the current GISS dataset :p ). So we’re back to where we were over 10 years ago, according to GISS.
Dr. Hansen should note that in the real world, corporate executives do not get thrown into prison for disagreeing with him, but rather for fiddling with corporate numbers. GISS restates previous years’ numbers more often than Nortel, fercryinoutloud.
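For anyone reproducing those figures from their own local copy, the 12-month running mean is just the trailing average of each month and the eleven before it. A quick sketch with invented anomaly values (in hundredths of a degree, the units GISS reports):

```python
def running_mean_12(anomalies):
    """12-month trailing means; entry i covers months i-11 through i."""
    out = []
    for i in range(11, len(anomalies)):
        out.append(sum(anomalies[i - 11:i + 1]) / 12.0)
    return out

# Thirteen months of made-up anomalies, in hundredths of a degree.
series = [40, 45, 50, 55, 60, 58, 52, 48, 44, 42, 41, 43, 31]
print([round(m, 2) for m in running_mean_12(series)])  # -> [48.17, 47.42]
```

A single cold month (the 31 at the end) moves the running mean by only its one-twelfth share, which is why the 2008 and 1998 running means can sit so close together despite very different individual months.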

July 10, 2008 4:41 am

Anthony: Odd. GISS goes down while UAH and RSS go up slightly.

Could just be noise, but it might slightly confirm my hunch that the satellite datasets react quicker and more strongly to short-term events (e.g. La Nina).

July 10, 2008 5:50 am

As a layman (relatively new to this debate – spurred by a feeling that not all is what it seems to be) I have tried to read widely from this and a number of other sites, on both sides of the debate, to get a handle on the key issues.
When dealing with issues in my working life I have always found the best question at the start is: “What is the problem we want to fix?” (BTW – the end game should always answer clearly the question – “why is this strategy/policy/plan the answer to that problem?”)
Therefore – my first question is – “Is the earth warming?” (Please stay with this point and do not get onto secondary issues such as – is it dangerous? can we forecast what will happen? – and what is causing it? – question ‘1’ has to be is the earth warming?)
Now, with all such issues as this my basic logic is to say there are 2 key factors (to me as a layman please understand)
1). what is the appropriate start point?
2). what is the data that shows this?
(Please understand I am trying to be purposefully simplistic here)
Now, depending on which side of the argument you are on you will naturally choose a start point that fits with the thesis you are trying to make. I would like to put that to one side for the moment and concentrate on getting clear in my mind an understanding of the data.
I have been singularly horrified by how much data manipulation may be going on in this area. I am not a scientist but I do suffer the consequences, as a citizen in my country, of decisions made by politicians on the basis of government policy formulations founded on the data produced about global warming/climate change.
So all I ask is:
1). ‘Is there a clear set of data that shows the average temperature of the earth over the last 100 years?’
2). ‘Is this data available/reported (in its raw form as well as its adjusted form) in order that scientists around the world can evaluate it and have such assessments peer reviewed with rigour?’

July 10, 2008 6:18 am

Glenn (01:14:09) :
“But I can’t see adding data where none exists is constructive here. Anything added that changes a trend (or affects the existing record in any way) would be pure speculation; weather hardly ever does what we tell it to, even if before it happens, but especially after. Perhaps I’m still missing something here, but it doesn’t sound like sound science. How far could you stretch this vacation (30 days… 3 years), how would you set limits on “accuracy”?”
There are certainly limits, but science has always dealt with measurement errors and omissions. You certainly don’t want to throw out the entire 100-year history of a good site because someone lost a monthly data sheet or the transcriber couldn’t make out a poorly written digit.
Stepping well beyond my statistics knowledge, one reason to fill in missing data with a good guess is that there are statistical methods that need a complete history. One of them (principal components) was used in producing Mann’s hockey stick graph, though that graph had more problems with extrapolating data beyond a sample period than with filling in holes. And many, many other problems to boot.
Sometimes the missing data can be omitted without much trouble, e.g. gaps in USHCN and COOP records during equipment relocations; Anthony frequently posts graphs with breaks in them. The breaks are convenient markers for seeing what sort of shift in the data occurs with the relocation.
Given all the other error sources in the data, filling in missing data with something close to neutral provides a lot of computational convenience at little cost.
Here’s an example: suppose a site had its April 1933 data missing. How would you compute the average temperature for that year? If you don’t, then you effectively can’t compute an annual average for the country that year either, because somewhere in the country there is almost certainly at least one station missing each month of the year. By splicing in neutral data, all the simple averaging math (e.g. add up the twelve monthly averages and divide by 12) still works.
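A quick Python sketch of that “neutral splice” idea, with hypothetical numbers (the April climatology value is assumed, not a real record):

```python
# Fill a missing month with that month's long-term average for the site,
# so the simple annual mean (sum of 12 months / 12) still works.
april_climatology = 8.4  # assumed long-term April mean for this site
year_1933 = [0.1, 1.2, 4.5, None, 13.0, 17.8,
             20.1, 19.5, 15.2, 9.3, 4.1, 0.8]  # April (index 3) missing

filled = [april_climatology if v is None else v for v in year_1933]
annual_mean = sum(filled) / 12.0
```

Because the filled value is the site’s own climatology, it is close to neutral: it pulls the annual mean toward the long-term average rather than toward a guess.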

John Finn
July 10, 2008 7:36 am

There are several reasons for the discrepancies between GISS and HadCrut in particular. But the fact that the Arctic has cooled (anomaly wise) recently while the tropics have warmed might explain the relatively low GISS anomaly for June.

July 10, 2008 9:28 am

That’s possible, Paul, but then you also have to consider that all four metrics dropped sharply to about the same level in January. Then in March, the satellites stayed fairly cool, while Hadley and especially GISS jumped way up. The overall trends are somewhat clear, though…global temps dropped a lot in January, recovered somewhat through March, and since have cooled off again through June.

July 10, 2008 11:32 am

Therefore – my first question is – “Is the earth warming?”
I’ll have a shot at answering this.
First of all, there are two questions that need answering. One is whether the world’s climate is warming, and the second is whether that warming is due to a global effect. We know that peat fires in Indonesia, coal fires in China, etc., are exerting a warming influence on the climate.
The answer to the first question is, on average, probably yes.
The answer to the second question is more important. A global effect should show up equally in both hemispheres (north and south), even though, excluding the tropics, the SH has less than 5% of the population of the NH.
There has been no significant warming over the entire satellite record for the SH (about 30 years).
So the obvious conclusion is that whatever warming has occurred in the NH is not due to a global effect, and therefore greenhouse gases cannot be the cause, because they are distributed more or less equally across the hemispheres.

Michael Jennings
July 10, 2008 12:28 pm

Paul Clark:
Excellent questions, and the crux of the debate. Sorry to say the answers are highly subjective and time-sensitive. There are so many variables to take into account that a definitive answer to “are we warming” is impossible, or incomplete at best. With the differences in measuring devices, locales, and times of observation, changes in any of these, plus the accuracy of the observer, any judgment is premature. As Anthony has pointed out with his auditing of stations, and Steve McIntyre with his overall climate auditing, it becomes clear that the strict discipline needed in science is regretfully absent in the field of climate science, which makes the answers highly suspect and open to question.

July 10, 2008 2:19 pm

The reason GISS is so cool this month is the Antarctic. I’ll throw up the link to the GISS map for June 2008 again, in case you missed it. Scroll down to the zonal mean plot. They have to live with the 1200 km smoothing whichever way it leads them.

July 11, 2008 5:53 am

[This namespace collision is confusing even me! In case anyone hasn’t realised, there are two Paul Clarks posting here – Paul H. Clark is the other one (Hi, namesake!). I’ve changed my display name to include ‘woodfortrees’ to try and make it more obvious who’s who.]
To answer Paul H.’s question indirectly: Here is the HadCRUT3 data for NH, SH and global, moderately smoothed:
So yes, it has warmed a bit, and may still be warming a bit, but there’s no evidence in these records yet of anything that looks like runaway positive feedback.
On the question of data manipulation, I personally think the different datasets tally pretty well, and since they’re all derived differently any adjustments are not having a great effect on long-term changes, which is all that really matters. So as an engineer if I ask myself “can I rely on this data?”, my answer would be “Yes, to a first approximation over long term trends”.
More notes on comparing different datasets, and in particular the baseline issue:
Paul (R) Clark

July 11, 2008 2:43 pm

With respect to datasets, I have been playing with the UAH dataset and comparing the northern and southern hemisphere figures. I graphed them in Excel, then created a third-order polynomial trend line. The odd thing that came out is that the southern hemisphere has lower variability than the north. Would this be because of the greater sea area in the south, or just a statistical oddity?
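One way to check that directly, rather than eyeballing trend lines, is to compare the standard deviations of the two hemispheric series. A Python sketch with invented anomaly values (not real UAH data):

```python
# Compare the variability of two hemispheric anomaly series.
import statistics

nh = [0.3, 0.5, 0.1, 0.7, 0.2, 0.6, 0.4, 0.8]       # invented NH anomalies
sh = [0.2, 0.3, 0.25, 0.35, 0.3, 0.28, 0.32, 0.27]  # invented SH anomalies

nh_sd = statistics.stdev(nh)
sh_sd = statistics.stdev(sh)
```

If the real SH series does show a smaller standard deviation, the greater ocean area (with its larger heat capacity damping temperature swings) is a plausible physical explanation.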

July 12, 2008 12:08 pm

Hadley is now out, BTW. It does occasionally do this on a Saturday! Also, unusually, Hadley is higher than GISS for this month, coming in at +0.314C. Not sure if I’m reading it right, but there appears to be an awful lot of data missing from the SH, and Hadley doesn’t usually post quite this early in the month.
