Fudged Fevers in the Frozen North

Guest Post by Willis Eschenbach

[see Update at the end of this post]

I got to thinking about the (non) adjustment of the GISS temperature data for the Urban Heat Island effect, and it reminded me that I had once looked briefly at Anchorage, Alaska in that regard. So I thought I’d take a fresh look. I used the GISS (NASA) temperature data available here.

Given my experience with the Darwin, Australia records, I looked at the “homogenization adjustment”. According to GISS:

The goal of the homogenization effort is to avoid any impact (warming or cooling) of the changing environment that some stations experienced by changing the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations.
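In code, that stated goal amounts to removing the long-term trend difference between an urban record and its rural neighbors while leaving the shorter-term wiggles alone. Here is a minimal one-leg sketch in Python with made-up data (the real GISS procedure uses a two-leg broken-line fit and actual distance-weighted neighbor averages; nothing below is the GISS code):

```python
import numpy as np

def adjust_urban_to_rural(years, urban, rural):
    """Remove the long-term trend difference between an urban series and
    its rural-neighbor average, keeping the short-term variations.
    A simplified one-leg version of the two-leg GISS adjustment."""
    diff = urban - rural                       # urban-minus-rural series
    slope, intercept = np.polyfit(years, diff, 1)
    trend = slope * years + intercept          # long-term divergence only
    return urban - trend                       # urban record with rural trend

# Hypothetical annual anomalies: a rural series plus an extra 2 C/century
# of "UHI" warming at the urban station.
years = np.arange(1950, 2010)
rural = 0.005 * (years - 1950) + np.random.default_rng(0).normal(0, 0.2, years.size)
urban = rural + 0.02 * (years - 1950)
adjusted = adjust_urban_to_rural(years, urban, rural)

# After adjustment, the urban long-term trend matches the rural trend,
# while the year-to-year variations of the urban record are retained.
print(np.polyfit(years, adjusted, 1)[0], np.polyfit(years, rural, 1)[0])
```

Note the sketch only equalizes the single linear trend; as discussed below, GISS actually fits a two-segment line with a "knee", which is where the stepped shapes in Figure 1 come from.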

Here’s how the Anchorage data has been homogenized. Figure 1 shows the difference between the Anchorage data before and after homogenization:

Figure 1. Homogenization adjustments made by GISS to the Anchorage, Alaska urban temperature record (red stepped line, left scale) and Anchorage population (orange curve, right scale)

Now, I suppose that this is vaguely reasonable. At least it is in the right direction, reducing the apparent warming. I say “vaguely reasonable” because this adjustment is supposed to take care of “UHI”, the Urban Heat Island effect. As most everyone has experienced driving into any city, the city is usually warmer than the surrounding countryside. UHI is the result of increasing population, with the accompanying changes around the temperature station. More buildings, more roads, more cars, more parking lots, all of these raise the temperature, forming a heat “island” around the city. The larger the population of the city, the greater the UHI.

But here’s the problem. As Fig. 1 shows, until World War II, Anchorage was a very sleepy village of a few thousand. Since then the population has skyrocketed. But the homogeneity adjustment does not match this in any sense. The homogeneity adjustment is a straight line (albeit one with steps …why steps? … but I digress). The adjustment starts way back in 1926 … why would the 1926 Anchorage temperature need any adjustment at all? And how does this adjust for UHI?

Intrigued by this oddity, I looked at the nearest rural station, which is Matanuska. It is only about 35 miles (60 km) from Anchorage, as shown in Figure 2.

Figure 2. Anchorage (urban) and Matanuska (rural) temperature stations.

Matanuska is clearly in the same climatological zone as Anchorage. This is verified by the correlation between the two records, which is about 0.9. So it would be one of the nearby rural stations used to homogenize Anchorage.
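That correlation is straightforward to check for yourself from the two annual series. A sketch with hypothetical numbers (the real annual means would come from the GISS station data files linked above):

```python
import numpy as np

# Hypothetical annual mean temperatures (deg C) for two nearby stations;
# substitute the actual GISS annual means for Anchorage and Matanuska.
anchorage = np.array([2.1, 1.8, 2.5, 3.0, 2.2, 1.5, 2.8, 3.1, 2.6, 2.0])
matanuska = np.array([1.0, 0.9, 1.6, 2.2, 1.2, 0.4, 1.9, 2.3, 1.7, 1.1])

# Pearson correlation of the two records
r = np.corrcoef(anchorage, matanuska)[0, 1]
print(round(r, 2))
```

A correlation near 0.9 on the real data says the two stations see essentially the same weather, which is exactly why Matanuska would be among the rural neighbors used to homogenize Anchorage.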

Now, according to GISS the homogeneity adjustments are designed to adjust the urban stations like Anchorage so that they more closely match the rural stations like Matanuska. Imagine my surprise when I calculated the homogeneity adjustment to Matanuska, shown in Figure 3.

Figure 3. Homogenization adjustments made by GISS to the Matanuska, Alaska rural temperature record.

Say what? What could possibly justify that kind of adjustment, seven tenths of a degree? The early part of the record is adjusted to show less warming. Then from 1973 to 1989, Matanuska is adjusted to warm at a feverish rate of 4.4 degrees per century … but Matanuska is a RURAL station. Since GISS says that the homogenization effort is designed to change the “long term trend of any non-rural station to match the long term trend of their rural neighbors”, why is Matanuska being adjusted at all?

Not sure what I can say about that, except that I don’t understand it in the slightest. My guess is that what has happened is that a faulty computer program has been applied to fudge the record of every temperature station on the planet. The results have then been used without the slightest attempt at quality control.

Yes, I know it’s a big job to look at thousands of stations to see what the computer program has done to each and every one of them … but if you are not willing to make sure that your hotrod whizbang computer program actually works for each and every station, you should not be in charge of homogenizing milk, much less temperatures.

The justification that is always given for these adjustments is that they must be right because the global average of the GISS adjusted dataset (roughly) matches the GHCN adjusted dataset, which (roughly) matches the CRU adjusted dataset.

Sorry, I don’t find that convincing in the slightest. All three have been shown to have errors. All that shows is that their errors roughly match, which is meaningless. We need to throw all of these “adjusted datasets” in the trash can and start over.

As the Romans used to say, “falsus in uno, falsus in omnibus”, which means “false in one thing, false in everything”. Do we know that everything is false? Absolutely not … but given egregious oddities like this one, we have absolutely no reason to believe that they are true either.

Since people are asking us to bet billions on this dataset, we need more than a “well, it’s kinda like the other datasets that contain known errors” to justify their calculations. NASA is not doing the job we are paying them to do. Why should citizen scientists like myself have to dig out these oddities? The adjustments for each station should be published and graphed. Every single change in the data should be explained and justified. The computer code should be published and verified.

Until they get off their dead … … armchairs and do the work they are paid to do, we can place no credence in their claims of temperature changes. They may be right … but given their egregious errors, we have no reason to believe that, and certainly no reason to spend billions of dollars based on their claims.

[Update – Alaska Climate Research Center releases new figures]

I have mentioned the effect of the Pacific Decadal Oscillation (PDO) below. The Alaska Climate Research Center has just released its update to the Alaska data. Here’s that information:

Figure 4. Alaska Temperature Average from First Order Observing Stations

In the Alaska Climate Research Center data, you can clearly see the 1976 shift of the PDO from the cool to the warm phase, and the recent return to the cool phase. Unsurprisingly, the rise in the Alaska temperatures (typically shown with a continuously rising straight trend line through all the data) has been cited over and over as “proof” that the Arctic is warming. However, the reality is a fairly constant temperature from 1949-1975, a huge step change 1975-1976, and a fairly constant temperature from 1976 until the recent drop. Here’s how the IPCC Fourth Assessment Report interprets these numbers …

Figure 5. How the IPCC spins the data.

SOURCE: (IPCC FAR WG1 Chapter 9, p. 695)

As you can see, they have played fast and loose with the facts. They have averaged the information into decade-long blocks 1955-1965, 1965-1975, 1975-1985, etc. This totally obscures the 1975-1976 jump. It also gives a false impression of the post-1980 situation, showing purported continuing warming post 1980. Finally, they have used “adjusted data” (an oxymoron if there ever was one). As you can see from Fig. 4 above, this is merely global warming propaganda. People have asked why I say the Alaska data is “fudged” … that’s a good example of why.

315 Comments
3x2
February 22, 2010 8:03 pm

carrot eater (18:46:33) :
3×2 (17:45:59) :
As I’ve said before, manual adjustments to every last station seem neither feasible nor necessarily desirable, and almost certainly not worth the effort over the ~1200 regularly reporting stations in GHCN, and then again for USHCN.

Without getting into the politics too far – a mind numbing amount of money has already been spent in the area one way or another. Not sure where to price actual data other than at the very top end.
One, you can use some human judgment, but you’ll never have all the information you need. Then, you’ll end up making a lot of ad-hoc decisions that nobody else could reproduce. And then the people on this site would really go off – it isn’t reproducible, it isn’t science, the adjustment guy has his finger on the scales, etc. And what would it really gain you? Some oddball stations might be treated a bit better; the regional and global trends would rather likely be about the same.
Don’t know that having a central and open repository where one could go to get the raw readings and (copious) metadata for any (GHCN?) station is such a bad idea. Surfacestations has shown that it can be done (with no budget). The trends may not ultimately change much but at least everyone could see that everything has been done out in the light. In the current atmosphere of mistrust “pronouncement from the tower” is obviously not working out so well.
And as it turns out, statistical methods for adjustment can be better than manual ones. For example, the change from old thermometers in the US to the MMTS stations. This used to be dealt with (in USHCN, now) using a set adjustment described in Quayle (1991). But it turns out, this instrument shift doesn’t always have the same effect because the reasons why the instrument shift affects the temp reading varies from station to station. A human would have some trouble dealing with this, but the statistical routine can sort it out.
I have no doubt that statistical methods have their part to play but at the same time it is clear that at least in the case of Matanuska and GISS something is not quite right (IMHO). I then have to wonder how many other stations have gone the same way. Unknown soldiers lost in the push so to speak.
If the “generally accepted” trend were 4 °C over recent times there would probably be little disagreement but as we are talking in tenths then Matanuska (and others?) might well matter. If 2010 is announced as “warmest on record” and the difference between 2010 and ’98 (or ’31) is some 0.07°C does Matanuska matter then? You can bet the MSM won’t care.
To make matters worse, the number of stations falls off quite dramatically the further north you go, in an area that is probably the most heavily scrutinised. Is Alaska really warming or is it just an artefact of the processing algorithm that goes unnoticed in more station-rich regions?

Harold Vance
February 22, 2010 8:13 pm

carrot eater (19:11:09) :
“Basically, you just keep arguing for an adjustment procedure that requires a ton of manual work, ad-hoc manual decisions, a ton of historical metadata that won’t be available for each station, and no real indication of a significantly improved global result.”
In summary Willis is arguing for real science and you are arguing for science that produces fake but accurate results because the real science is just too hard or flat out impossible due to tons of missing historical metadata.
I think that is a pretty fair assessment of your position.
Do you think a jury would buy your take?

February 22, 2010 8:37 pm

Re: Willis Eschenbach (Feb 22 20:18),
Well, hopefully this plot will show what CE and I are getting at. It shows (anomaly) temperatures at Matanuska (red), the weighted average of surrounding rural temperatures (green), and the difference between them (black). So far, that’s just data – no GISS artifice.
The lines are the fitted broken line to the black curve. It goes down and up – again, just data.
Now GISS think, based on brightness, that Matanuska may be affected by UHI. If they just added the black curve to the red, they’d get the green. This is just the rural average. If they use that “adjusted” value, there is no Matanuska information at all. It is replaced by the rural average. Since those rural stations are already included, it no longer adds information, but it doesn’t remove any either.
If you add the line approximation, which GISS does, then you remove the Matanuska trend while keeping the shorter-term signal. That would take out the UHI, if present. I actually don’t think the short term signal is useful, so they should just add the black curve. But it doesn’t do any harm.
Anyway, we’re obviously not making progress here. I’ll write it up on my blog, with R code that does a reasonable emulation of the GISS adjustment procedure for individual stations.
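[The two-leg "broken line" fit described above can be sketched as follows. This is an independent toy illustration in Python, with synthetic data, not Stokes' R emulation and not the GISS code; the knee search and basis are my own simplification:]

```python
import numpy as np

def two_leg_fit(years, diff):
    """Fit a continuous two-segment ("broken") line to an
    urban-minus-rural difference series, searching every interior
    year for the best knee. A sketch of the idea only."""
    best = None
    for knee in years[2:-2]:                   # candidate knee years
        # basis: intercept, slope, extra slope after the knee
        hinge = np.maximum(years - knee, 0)
        X = np.column_stack([np.ones_like(years), years - years[0], hinge])
        coef, *_ = np.linalg.lstsq(X, diff, rcond=None)
        sse = np.sum((X @ coef - diff) ** 2)
        if best is None or sse < best[0]:
            best = (sse, X @ coef, knee)
    return best[1], best[2]                    # fitted line, knee year

# Synthetic difference series: drifting down to 1973, then rising,
# loosely mimicking the down-and-up shape described above.
years = np.arange(1920, 1991)
rng = np.random.default_rng(1)
diff = np.where(years < 1973, -0.01 * (years - 1920),
                -0.53 + 0.044 * (years - 1973)) + rng.normal(0, 0.05, years.size)
fitted, knee = two_leg_fit(years, diff)
print(knee)  # the fitted knee should land near the 1973 change point
```

Subtracting the fitted line (rather than the raw black difference curve) from the urban record is what removes the long-term trend while keeping the shorter-term signal.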

George Turner
February 22, 2010 9:17 pm

Nick Stokes, re 17:46
Re: 3×2 (Feb 22 17:08),
If we now accept that Matanuska is rural
No, GISS uses an objective criterion – night brightness. It’s actually a good criterion. Arguing about population is missing the point, because that’s not a good indicator of UHI either. What you want is a measure of local heat release, and artificial lighting is a good indicator. Maybe the satellites are getting it wrong, but it isn’t GISS.
No, night brightness would be a horrible criterion.
1) Night lighting tends to be highly efficient high-pressure sodium, mercury vapor, or other such bulb. Those don’t produce significant amounts of heat compared to other lighting technologies.
2) The heat produced by night lights tends to be up on poles, or at least up at gutter level. The concentrated heat from the bulb and ballast goes up, never dropping down to the levels where surface temperatures are measured.
3) If there is snow on the ground, its reflectance will make such night lights look much brighter, perhaps by a factor of ten, which might explain why Montreal was listed as more urban than Paris, New York, or Tokyo.
4) External night lighting isn’t a side effect of urbanization, or a measure of it. It’s a conscious decision. Areas might increase night lighting because of crime problems, especially by bears.
5) Areas might decrease upward night lighting in response to pressure from the astronomical community, which goes to great effort to convince cities to decrease their upward illumination in key parts of the spectrum that indicate night lighting.
6) Night lighting can also be a sign that a city has grown to a size where third shift work becomes common, which has a domino effect on the service sector.
7) Night lighting is also a sign that the local industry contains a significant high-capital investment that is best recouped by continuous operation, again producing three-shift work in an area that is rural. This is common in many mining operations, such as are found in Alaska.
Vastly better measures of UHI would be pulling up data on energy consumption, energy consumption per capita, local albedo, nighttime IR signature, and countless other measures.
But all of that won’t get around the stubborn refusal to actually measure the temperature instead of guesstimating what it would be in a parallel universe where the cities didn’t exist.
I think climatology must be the only branch of science where people add in a large trend before examining the data for a small trend.

February 22, 2010 9:59 pm

I have a problem with your use of the term, “tin foil hat”.
Everyone knows that only a steel V2K Cap will protect against mind control weaponry.

February 22, 2010 10:09 pm

Willis,
Yes, the emulation isn’t exact. I haven’t done the duplicates properly – with rural stations, the duplicates (if any, I didn’t see any) are added separately to the average. I didn’t use the GISS anomaly period – for a single station and its neighbors it’s enough for this exercise to just subtract the mean for the data period; it’s a constant offset. I did use their method – gathering the temps at the central point, and subtracting a group average.
I think the periods that go into the bent line fitting may be different. I didn’t search for the knee – I just used the apparent GISS value.
My blog is here. It will take a few hours yet.

John Whitman
February 23, 2010 12:05 am

Willis,
I just finished reading all the comments to your post on Anchorage and Matanuska.
!! A word of encouragement!! You provide great stuff for people like me who have some engineering knowledge and experience; however, I am only a couple of months into studying all things related to climate science.
You inspire me. Thank you, such inspiration is profoundly priceless.
My observations at this point are:
1) I postulate that a neutral observer cannot find the specific justifications for the GISS adjustments to Anchorage and Matanuska. Arm waving and appeals to the “experts knowing what they are doing” do not mean anything to a neutral observer. Nor do suspicions by skeptics that an AGW bias causes GISS to manipulate data have any impact on a neutral observer. Info on how GISS made the adjustments does exist. What is needed is an official GISS-supplied justification of why they made the adjustments to Anchorage and Matanuska.
2) ANCHORAGE: We can see what the GISS adjustments did. Anchorage had late-1920s raw temp data raised by 0.9 C when it was significantly less urban. Counterintuitively, in the late 1990s, after some profound increases in urbanization, the raw data was not adjusted at all (zero adjustment). An explanation is required from GISS.
3) MATANUSKA: Again we can see what the GISS adjustments did. They start out in the early 1920s with no adjustment to the raw data, but then they progressively apply an increasing UHI-like correction to the raw data for the next ~50 years even though the station remains at a rural location. Then starting at ~1970 they apply adjustments that effectively un-urbanize the raw temp data even though there is possibly evidence of some local population increase. An explanation is required from GISS.
4) It is unlikely GISS will voluntarily come forth to provide the above info. I say this only based on observation of past GISS behavior. I would like to be surprised. In order to keep the needed audit of climate science moving forward, reverse engineering by independent rational thinkers (aka skeptics) should increase in pace and scope. The eventual weight of information on GISS adjustments should move this into a more open dialog with GISS.
John

carrot eater
February 23, 2010 5:36 am

3×2 (20:03:03) :
“everyone could see that everything has been done out in the light. In the current atmosphere of mistrust “pronouncement from the tower” is obviously not working out so well.”
How can you say that, when GISS is already doing everything in the light, using a procedure that anybody can implement for himself, using code that anybody can download for himself? There is no black box here.
“Don’t know that having a central and open repository where one could go to get the raw readings and (copious) metadata for any (GHCN?) station is such a bad idea.”
NOAA has historical metadata for its own stations. It has none for all the other countries. You just aren’t going to be able to track down the complete station histories in every country.
Though at least nowadays, data storage capacity isn’t the bar it was 20 years ago.
Starting from scratch and with everything under your own control, you can do it, though. They do it with the US CRN.
“Surfacestations has shown that it can be done (with no budget). ”
Surfacestations has a bunch of current pictures. It doesn’t give you the information you need to make adjustments in the past, nor do current pictures alone tell you what the temperature readings are doing. And as I’ve said before, even if you had a picture from 1932 and a picture from 1937, and something visually looks different, you’d still not know what adjustment to make. You’d have to statistically examine the temperature record and the neighbors, using the methods of NOAA.
In the end, you’d do what NOAA is doing with the USHCN now. Use your little statistical program to make all the adjustments, and then go back and look in your historical metadata to see how many of the adjustments line up with something you have field notes about.
“The trends may not ultimately change much but at least everyone could see that everything has been done out in the light.”
It’s already in the light, and if the trends don’t change much, then what’s the point?
We’ve already got two different records with different philosophies on this matter: GISS and GHCN. As discussed, GHCN and in particular USHCN are much more ambitious about trying to make detailed corrections. And yet, it doesn’t much matter how you go about it.
“I have no doubt that statistical methods have their part to play but at the same time it is clear that at least in the case of Matanuska and GISS something is not quite right (IMHO).”
By statistical methods there, I don’t mean what GISS is doing. GISS’s method isn’t even trying to use statistics to correct errors; it’s just erasing the difference in trend between urban station and rural station. I mean the homogenisation method of Menne used in USHCN, which is much more sophisticated and is meant to sniff out station moves and instrument changes and the like.
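[The pairwise idea referred to here can be illustrated with a toy step detector on a target-minus-neighbor difference series. This is a sketch of the concept only, not Menne's actual algorithm, which uses proper changepoint statistics across many station pairs:]

```python
import numpy as np

def detect_step(diff):
    """Find the most likely single step change in a target-minus-neighbor
    difference series: the split point that best separates two means.
    A toy stand-in for pairwise homogenisation, not the USHCN code."""
    n = diff.size
    best_k, best_stat = None, 0.0
    for k in range(5, n - 5):                  # require a few points each side
        m1, m2 = diff[:k].mean(), diff[k:].mean()
        s = diff.std(ddof=1)
        # t-like statistic for a mean shift at position k
        stat = abs(m1 - m2) / (s * np.sqrt(1 / k + 1 / (n - k)))
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Synthetic difference series with a 0.5 C jump at index 30,
# mimicking an undocumented instrument change or station move.
rng = np.random.default_rng(2)
diff = rng.normal(0, 0.1, 60)
diff[30:] += 0.5
k, stat = detect_step(diff)
print(k)  # the detected break should fall near index 30
```

Because real weather is shared between neighbors, it cancels in the difference series, which is why a statistical routine can sniff out a station move that no field notes record.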
“If 2010 is announced as “warmest on record” and the difference between 2010 and ‘98 (or ‘31) is some 0.07°C does Matanuska matter then?”
Two years separated by 0.07 C are statistically indistinguishable. Ranking years is pointless; it’s the trends that matter.
As for what matters: random errors don’t. For something to matter to the global or regional mean, you need a systematic bias in some direction for some reason. That’s why TOB makes such a difference for the USA. That’s why people are worried about UHI. But the UHI here is gone. The rural stations do the driving, and information about the long-term trend at Anchorage is erased. Are you worried that maybe the rural stations aren’t really rural? Well here, GISS’s method is being overanxious about that as well, and puts in Matanuska as urban. In this case, their method results in being abundantly cautious about UHI, and you still don’t like it. Go figure.

carrot eater
February 23, 2010 5:43 am

Harold Vance (20:13:17) :
No method is going to be perfect. Willis’s method would be just as imperfect, because you simply don’t have the full amount of information required to perfectly recreate the past. So his record would be just as ‘fake’ as anybody else’s.
So the question is, what to do. I think reproducibility is important in science, and I thought the general thought at WUWT was to agree with that. So I prefer objective methods. I prefer the objective method of the current USHCN to that of GISS; it actually tries to correct the record, instead of just accepting raw data from rural stations and tossing out the urban ones. But the latter has the advantage of simplicity. And in the end, it doesn’t matter which you do. Random errors wash themselves out.

Maik H
February 23, 2010 5:48 am

carrot eater (18:57:39) :
” If you make it such that the longer trends roughly match the other surrounding stations, then the station in question is not affecting the longer trends at that grid point. In effect, it is being tossed out.
So again, what you need to do to assess this is actually look at those surrounding rural stations.
To assess the calculation, you’d need to use the GISS method to find the temperature at that grid box, with and without Matanuska. If the GISS adjustment did what it’s supposed to do, then the longer trends for the combined record at that grid box would not be affected by adding or removing Matanuska.”
This would be a valid adjustment method if (and only if) the resulting record is exclusively used for statements about ‘longer’ trends. Therefore, it would be nice to know what, exactly, constitutes a longer trend.
And this might swiftly carry us to the heart of the matter: assuming that ‘longer’ trends are around 100 years, the use of the adjusted data in proving the extraordinariness of the time period 1970-2010 would turn a legitimate adjustment into a fudge.

carrot eater
February 23, 2010 5:54 am

Willis Eschenbach (20:18:00) :
“I have no problem with statistical methods, I use them all the time. However, when you apply them to temperature stations, you need, must have, absolutely require quality control.”
Again, by statistical methods, I’m referring to the USHCN, not GISS. The GISS methods are too simple to earn the name ‘statistical’ from me.
“Otherwise, you end up with Baghdad classified as rural, and Matanuska Agricultural Experimental Station classified as urban.”
There’s no harm done by the false positive at Matanuska. A false negative at Baghdad is more interesting.
“Oh, please. 1200 stations, boo hoo, the jobs too big, it’s just not feasible … but we want you to spend trillions on our conclusions. What’s wrong with this picture?”
OK, you go track down the entire station history for some station in the Central African Republic.
Again, GHCN tries to do the detailed work; GISS does not even bother. And both give the same result. Which globally, is the same result as what you get from the raw data. What does that tell you?
“require a bit of work.”
This coming from a guy who refuses to take a day and look at the neighboring rural stations, in order to see why GISS did what it did.
“The argument that the adjustments are not “reproducible” is a red herring. ”
I don’t buy that for one second.
“So far, I’ve calculated the adjustments at three stations: Darwin, Anchorage, and Matanuska. Each one contained what to me are hugely incorrect adjustments.”
You’ve shown absolutely nothing of the sort, because you have not put in the work required to assess those adjustments. At Darwin, you would have to collect all the neighboring stations and go through the GHCN process to see why it did what it did. Maybe it did something weird, but you simply have not shown that. For Anchorage and Matanuska, all that GISS is doing is erasing those stations from affecting the long term trends at the local grid point.

carrot eater
February 23, 2010 6:01 am

George Turner (21:17:33) :
“Vastly better measures of UHI would be pulling up data on energy consumption, energy consumption per capita, local albedo, nightime IR signature, and countless other measures.”
No, the best measure would be to actually look for it in the temperature record itself, compared to the neighbors. This is what the USHCN does now.
UHI varies strongly spatially. You could be in the middle of a city with all the measures you speak of, but be in a park with no obvious UHI trend.
“But all of that won’t get around the stubborn refusal to actual measure the temperature instead of guestimating what it would be in a parallel universe where the cities didn’t exist.”
How do you measure a temperature that doesn’t exist? And when there are plenty of rural stations around, GISS is not unreasonable in just knocking out the cities altogether.

Richard S Courtney
February 23, 2010 7:27 am

Willis:
Thank you for your fine analysis and subsequent responses to comments.
You repeatedly state that you know how adjustments are made to records of station data but you do not know why they are made. For example, you say to Carrot Eater at (15:29:36):
“What I don’t understand is how this is all justified. I keep asking for a reason that anyone would start adjusting a pristine rural record in 1920. Do you or GISS have the slightest scrap of evidence that there was something wrong with the record?”
The adjustments are not intended to correct individual station records because it is thought “there was something wrong with the record”. And I think you have been side-tracked by arguments (e.g. from carrot eater and Nick Stokes) that the adjustments may be correct in individual cases.
I think I know why the adjustments are universally applied by computer algorithm acting on each data set from each station record. And it is not relevant to the purpose of the adjustments whether or not the adjustments can be justified for any individual station record.
Please note that the adjustments to station records are conducted as part of the data processing to obtain values of mean global temperature (MGT) by combination of all the station records. The purpose of this data processing is an attempt to determine changes that have happened to MGT (and mean hemispheric temperatures) since station records began to be compiled. And, importantly, the compilers of the MGT data sets provide no stated reason why the stages of that processing should provide correct data for individual localities (e.g. the sites of individual measurement stations).
In paragraph 9 of my submission to the UK Parliament Select Committee I say:
“9.
It should also be noted that there is no possible calibration for the estimates of MGT.
The data sets keep changing for unknown (and unpublished) reasons, although there is no obvious reason to change a datum for MGT that is for decades in the past. It seems that – in the absence of any possibility of calibration – the compilers of the data sets adjust their data in attempts to agree with each other. Furthermore, they seem to adjust their recent data (i.e. since 1979) to agree with the truly global measurements of MGT obtained using microwave sounding units (MSU) mounted on orbital satellites since 1979. This adjustment to agree with the MSU data may contribute to the fact that the Jones et al., GISS and GHCN data sets each show no statistically significant rise in MGT since 1995 (i.e. for the last 15 years). However, the Jones et al., GISS and GHCN data sets keep lowering their MGT values for temperatures decades ago.
Such adjustment “to agree with each other” provides a complete explanation for why “anyone would start adjusting a pristine rural record in 1920”.
And, Willis, at (20:18:00) you say:
“You guys keep claiming I’m sifting through stations looking for oddities. I’m not. So far, I’ve calculated the adjustments at three stations: Darwin, Anchorage, and Matanuska. Each one contained what to me are hugely incorrect adjustments.”
Well, that is not surprising according to my understanding (as stated in this posting). An algorithm making adjustments to cause an MGT data set to more closely agree with other MGT data sets would plough through all the station data and provide most station data with what appear to be “hugely incorrect adjustments”. So what if these apparently “hugely incorrect adjustments” are merely intermediate calculations in the obtaining of the MGT data sets?
And, I again stress that there is no possible calibration for the estimates of MGT.
But, as the final paragraph of my submission to the UK Parliament Select Committee, says:
“12.
None of this gives confidence that the MGT data sets provide reliable quantification of change to global temperature.”
Richard

3x2
February 23, 2010 9:16 am

carrot eater (05:36:55) :
Looks like the MO have gone for much the same ideas as I posted earlier (spooky) (TL WUWT post)
The new effort, the proposal says, would provide:
–”verifiable datasets starting from a common databank of unrestricted data”
–”methods that are fully documented in the peer reviewed literature and open to scrutiny;”
–”a set of independent assessments of surface temperature produced by independent groups using independent methods,”
–”comprehensive audit trails to deliver confidence in the results;”
–”robust assessment of uncertainties associated with observational error, temporal and geographical inhomogeneities.”

George Turner
February 23, 2010 10:20 am

Carrot eater,
If GISS ignores the cities, why do any math at all? Just don’t include them in the data set.

carrot eater
February 23, 2010 10:22 am

3×2 (09:16:52) :
While that statement doesn’t imply it or require it, those guys (CRU) apparently actually do a lot of things on a somewhat subjective, manual basis. Quality control is done with visual eyeball tests, as opposed to using statistical criteria. They get some data from the source countries in an already homogenised form; those individual countries might use different methods to do it. They do some homogenisation themselves by taking stations, looking for step changes in the difference, and seeing if they can find any notes about a station move or instrument change.
But they realise they’ll never have anywhere close to complete information about station histories, so then they slap on some uncertainty bounds.
So if you want a dataset with human-led adjustments (if that’s still how they do it), go with CRU. But you’ll see how little it matters.
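The step-change check carrot eater describes — differencing a candidate station against a neighbour and looking for a jump in the difference series — can be sketched roughly as follows. This is a minimal illustration of the idea, not CRU's actual procedure; the station data and the simple mean-shift criterion are invented for the example.

```python
def find_breakpoint(candidate, neighbour):
    """Return (index, jump) of the largest mean shift in the difference series.

    A real homogenisation test would use a statistical criterion (e.g. SNHT)
    rather than a bare maximum, but the scan over breakpoints is the same idea.
    """
    diff = [c - n for c, n in zip(candidate, neighbour)]
    best_idx, best_jump = None, 0.0
    for k in range(2, len(diff) - 2):          # keep a few points on each side
        before = sum(diff[:k]) / k
        after = sum(diff[k:]) / (len(diff) - k)
        jump = abs(after - before)
        if jump > best_jump:
            best_idx, best_jump = k, jump
    return best_idx, best_jump

# Toy annual anomalies: the candidate station jumps by about 1.0 C at index 5,
# as if from an undocumented station move or instrument change.
neighbour = [0.0, 0.1, -0.1, 0.0, 0.1, 0.0, -0.1, 0.1, 0.0, 0.1]
candidate = [0.0, 0.1, -0.1, 0.0, 0.1, 1.0, 0.9, 1.1, 1.0, 1.1]

idx, jump = find_breakpoint(candidate, neighbour)
print(idx, round(jump, 2))  # the detected break sits at index 5
```

The point of differencing against a neighbour first is that shared climate variations cancel, so a step that remains is likely an artefact of the station itself — which is exactly why the station-history notes are then consulted.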

carrot eater
February 23, 2010 10:40 am

George Turner (10:20:22) :
My first inclination is the same as yours. If the urban stations aren’t contributing to the longer trends in the record, then why use them at all?
I suppose GISS doesn’t want to throw away information about the short-term variations, that is present in the urban records.
In the end, it doesn’t really matter whether you keep the cities in there, or not. That’s what has to be kept in mind in all these discussions – you can get really bogged down in the details of the processing, but for the most part it doesn’t matter that much in the big picture (the global mean trend, on land at least).

Jack F
February 23, 2010 12:03 pm

I really enjoyed this discussion, but I am wondering two things:
1. Does adjusting old temperatures somehow affect the graphic comparisons for those who insist there is a correlation between temps and worldwide CO2 levels?
2. If everyone agrees that even the raw data indicates a general worldwide warming trend, when do we start discussing the cause?

GP
February 23, 2010 12:31 pm

carrot eater (10:40:31) :
George Turner (10:20:22) :
My first inclination is the same as yours. If the urban stations aren’t contributing to the longer trends in the record, then why use them at all?
I suppose GISS doesn’t want to throw away information about the short-term variations, that is present in the urban records.
In the end, it doesn’t really matter whether you keep the cities in there, or not. That’s what has to be kept in mind in all these discussions – you can get really bogged down in the details of the processing, but for the most part it doesn’t matter that much in the big picture (the global mean trend, on land at least).
=================
Right, so I’ve been following the generic points made in this thread, and so far as I can interpret, CE (and also NS?) are saying that no matter which of the three main approaches one takes, the trends are more or less the same for any practical purpose, and give more or less the same trend as the raw data readings.
So, now that we know that (have we had enough time to ‘know’ for sure?), can we assume that all of the work undertaken to re-present the raw readings, when averaged out over all the readings and their ‘errors’ from all causes around the entire globe, gives essentially the same result for trend? And if so, can we stop funding most of the bodies that are, allegedly, still ‘researching’ these things and redistribute the effort and budget to something more worthwhile?
Or should we consider such a net result, assuming nothing is known to have been missed in reaching that conclusion, and ponder whether such common agreement, in spite of seemingly different methods, may have some common cause that is as yet unidentified?
Might it be that all the temperature measurements cancel each other out on a rural/urban comparison, except one, which then becomes the sole source of the global temperature change?
Just a thought.
I really don’t know, but with the social policies being proposed and pursued over such a short period, I think such matters are due some serious consideration and effort.

February 23, 2010 12:44 pm

Willis;
I think I understand (finally) what carrot eater is saying:
The funky adjusted temps are okay for determining a global average temperature,
but should not be used for an individual station study (or a regional study).
The following link is to the Beeville, TX station showing NOAA adjustments with more of the same funky methodology:
http://tinypic.com/r/2mz0dqu/6
Hopefully this shows up and is readable.

February 23, 2010 12:45 pm

Re: Nick Stokes (Feb 22 22:09),
I’ve now put up the post and code at my blog. There are a few improvements – it now does search for the optimal “knee” (and finds one for Anchorage), and it treats the rural duplicates better.
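Searching for the optimal “knee” of a two-legged adjustment — the break year where a single trend line gives way to a second leg with a different slope — can be sketched as below. This is a hedged illustration of the general technique (scan candidate break years, fit a continuous broken line by least squares at each, keep the one with the smallest residuals), not Nick Stokes’s actual code; the toy data are invented.

```python
import numpy as np

def fit_two_leg(years, temps, knee):
    """Least-squares fit of y = a + b*(x - knee) + c*max(x - knee, 0).

    The max() column adds slope c to the second leg only, so the fitted
    line is continuous at the knee. Returns (coefficients, residual SS).
    """
    x = np.asarray(years, dtype=float)
    y = np.asarray(temps, dtype=float)
    X = np.column_stack([np.ones_like(x), x - knee, np.maximum(x - knee, 0.0)])
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(resid @ resid)

def best_knee(years, temps):
    """Scan interior years and return the knee minimising the residuals."""
    candidates = years[2:-2]                   # keep a few points in each leg
    return min(candidates, key=lambda k: fit_two_leg(years, temps, k)[1])

# Toy series: flat until 1950, then warming at 0.02 C/yr
years = list(range(1920, 1991))
temps = [0.0 if yr <= 1950 else 0.02 * (yr - 1950) for yr in years]

print(best_knee(years, temps))  # → 1950
```

The brute-force scan is crude but robust for a single break; the residual minimum falls at the year where the piecewise fit matches the data best, which is why an automated search can “find a knee” for a station like Anchorage without any manual inspection.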