Guest Post by Willis Eschenbach
[see Update at the end of this post]
I got to thinking about the (non) adjustment of the GISS temperature data for the Urban Heat Island effect, and it reminded me that I had once looked briefly at Anchorage, Alaska in that regard. So I thought I’d take a fresh look. I used the GISS (NASA) temperature data available here.
Given my experience with the Darwin, Australia records, I looked at the “homogenization adjustment”. According to GISS:
The goal of the homogenization effort is to avoid any impact (warming or cooling) of the changing environment that some stations experienced by changing the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations.
Here’s how the Anchorage data has been homogenized. Figure 1 shows the difference between the Anchorage data before and after homogenization:
Figure 1. Homogenization adjustments made by GISS to the Anchorage, Alaska urban temperature record (red stepped line, left scale) and Anchorage population (orange curve, right scale)
Now, I suppose that this is vaguely reasonable. At least it is in the right direction, reducing the apparent warming. I say “vaguely reasonable” because this adjustment is supposed to take care of “UHI”, the Urban Heat Island effect. As most everyone has experienced driving into any city, the city is usually warmer than the surrounding countryside. UHI is the result of increasing population, with the accompanying changes around the temperature station. More buildings, more roads, more cars, more parking lots, all of these raise the temperature, forming a heat “island” around the city. The larger the population of the city, the greater the UHI.
But here’s the problem. As Fig. 1 shows, until World War II, Anchorage was a very sleepy village of a few thousand. Since then the population has skyrocketed. But the homogeneity adjustment does not match this in any sense. The homogeneity adjustment is a straight line (albeit one with steps …why steps? … but I digress). The adjustment starts way back in 1926 … why would the 1926 Anchorage temperature need any adjustment at all? And how does this adjust for UHI?
Intrigued by this oddity, I looked at the nearest rural station, which is Matanuska. It is only about 35 miles (60 km) from Anchorage, as shown in Figure 2.
Figure 2. Anchorage (urban) and Matanuska (rural) temperature stations.
Matanuska is clearly in the same climatological zone as Anchorage. This is verified by the correlation between the two records, which is about 0.9. So it would be one of the nearby rural stations used to homogenize Anchorage.
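As a rough illustration of that correlation check (the anomaly values below are invented for the example; in practice you would use the actual GISS annual anomalies for the two stations), a plain Pearson correlation on annual anomalies is all that is needed:

```python
# Sketch: testing whether two stations share a climatological signal by
# correlating their annual temperature anomalies. Station names and data
# here are hypothetical; the real Anchorage/Matanuska figure is about 0.9.
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two made-up anomaly series that move together with some noise:
anchorage = [-0.5, -0.2, 0.1, 0.3, -0.1, 0.4, 0.6, 0.2]
matanuska = [-0.4, -0.3, 0.0, 0.4, -0.2, 0.3, 0.5, 0.1]
r = pearson_r(anchorage, matanuska)
```

A correlation this high between annual anomalies is the usual basis for treating two stations as belonging to the same climatological zone.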
Now, according to GISS the homogeneity adjustments are designed to adjust the urban stations like Anchorage so that they more closely match the rural stations like Matanuska. Imagine my surprise when I calculated the homogeneity adjustment to Matanuska, shown in Figure 3.
Figure 3. Homogenization adjustments made by GISS to the Matanuska, Alaska rural temperature record.
Say what? What could possibly justify that kind of adjustment, seven tenths of a degree? The early part of the record is adjusted to show less warming. Then from 1973 to 1989, Matanuska is adjusted to warm at a feverish rate of 4.4 degrees per century … but Matanuska is a RURAL station. Since GISS says that the homogenization effort is designed to change the “long term trend of any non-rural station to match the long term trend of their rural neighbors”, why is Matanuska being adjusted at all?
Not sure what I can say about that, except that I don’t understand it in the slightest. My guess is that a faulty computer program has been applied to fudge the record of every temperature station on the planet, and that the results have then been used without the slightest attempt at quality control.
Yes, I know it’s a big job to look at thousands of stations to see what the computer program has done to each and every one of them … but if you are not willing to make sure that your hotrod whizbang computer program actually works for each and every station, you should not be in charge of homogenizing milk, much less temperatures.
The justification that is always given for these adjustments is that they must be right because the global average of the GISS adjusted dataset (roughly) matches the GHCN adjusted dataset, which (roughly) matches the CRU adjusted dataset.
Sorry, I don’t find that convincing in the slightest. All three have been shown to have errors. All that shows is that their errors roughly match, which is meaningless. We need to throw all of these “adjusted datasets” in the trash can and start over.
As the Romans used to say, “falsus in uno, falsus in omnibus”, which means “false in one thing, false in everything”. Do we know that everything is false? Absolutely not … but given egregious oddities like this one, we have absolutely no reason to believe that they are true either.
Since people are asking us to bet billions on this dataset, we need more than a “well, it’s kinda like the other datasets that contain known errors” to justify their calculations. NASA is not doing the job we are paying them to do. Why should citizen scientists like myself have to dig out these oddities? The adjustments for each station should be published and graphed. Every single change in the data should be explained and justified. The computer code should be published and verified.
Until they get off their dead … … armchairs and do the work they are paid to do, we can place no credence in their claims of temperature changes. They may be right … but given their egregious errors, we have no reason to believe that, and certainly no reason to spend billions of dollars based on their claims.
[Update – Alaska Climate Research Center releases new figures]
I have mentioned the effect of the Pacific Decadal Oscillation (PDO) below. The Alaska Climate Research Center have just released their update to the Alaska data. Here’s that information:
Figure 4. Alaska Temperature Average from First Order Observing Stations
In the Alaska Climate Research Center data, you can clearly see the 1976 shift of the PDO from the cool to the warm phase, and the recent return to the cool phase. Unsurprisingly, the rise in the Alaska temperatures (typically shown with a continuously rising straight trend line through all the data) has been cited over and over as “proof” that the Arctic is warming. However, the reality is a fairly constant temperature from 1949-1975, a huge step change in 1975-1976, and a fairly constant temperature from 1976 until the recent drop. Here’s how the IPCC Fourth Assessment Report interprets these numbers …
Figure 5. How the IPCC spins the data.
SOURCE: (IPCC FAR WG1 Chapter 9, p. 695)
As you can see, they have played fast and loose with the facts. They have averaged the information into decade-long blocks (1955-1965, 1965-1975, 1975-1985, etc.). This totally obscures the 1975-1976 jump. It also gives a false impression of the post-1980 situation, falsely showing purported continuing warming after 1980. Finally, they have used “adjusted data” (an oxymoron if there ever was one). As you can see from Fig. 4 above, this is merely global warming propaganda. People have asked why I say the Alaska data is “fudged” … that’s a good example of why.
Willis Eschenbach (10:39:18) :
And one more thought:
“But given that Matanuska and Anchorage are only 30 miles apart, what is the explanation for the huge difference in the two adjustments?”
You’re looking at it backwards. The adjustments will be whatever they need to be, in order to get the long-term trends to be the same at the two stations (meaning, to get them both to match that common set of surrounding rural stations). Or at least, as close as you can match, with the two-legged adjustment and the data you’re working with.
So if you want to see if the adjustment was successful in what it was trying to do, overlay the two adjusted series, and see how well the trends match. Don’t compare the adjustments themselves.
I keep forgetting Alaska is outside the USHCN.
Robin Edwards (11:29:58)
That is the complete dataset. And you are correct about the effect of the PDO on Alaska temperatures. See my 2004 analysis of the PDO and Alaskan temperatures here. It’s another of the things to consider when making adjustments, I should have mentioned it in my reply to carrot eater at Willis Eschenbach (11:39:56).
Willis Eschenbach (11:39:56) :
“More conservative? I find the GHCN method more conservative”
Just thinking about it, I disagree. I think GISS is more aggressive in removing UHI, whereas GHCN is more precise.
At least for comparing GISS to USHCN using actual data (back when USHCN still had a separate UHI step), Hansen et al (2001) shows me to be correct; GISS is more aggressive in removing UHI.
As for comparing GISS to GHCN using actual data: let’s hold off for now. I have zero motivation to study GHCN adjustments, when I know they’re probably overhauling it for v3.0 later this year anyway.
“What I would do first is get actual data about Matanuska, including the total station history, and all of the photos that I could get, both current and historical. I’d get as much population data about the surrounding area, including total economic activity (since McKitrick has shown this to be a factor). I’d look at all of these plus the temperature record, and see which if any of this seems to be affecting the temperature record.”
Willis, this is not a serious option. You’ll never collect enough historical metadata to be able to explain every divergence between each station and its neighbors. And even if you could, it’d still be largely best to use the statistical methods (Menne’s new homogenisation, for example) to remove the artifacts. A picture could tell you that a tree was covering the station at a certain angle, and then it was cut down, but that in itself won’t tell you how big an adjustment to make.
Finally, if GISS used historical metadata this closely, the adjustment method would no longer be objective. It would not be reproducible, as it would require some human judgments. If you remember from Darwin, this is an issue in the Australian BoM adjustments. One student did some adjustments using all the historical metadata, but the next person couldn’t easily repeat it.
While some oddball stations might be bestowed with a weird adjustment, overall I think objective methods are both more reliable and preferable, for reasons of reproducibility.
Thinking a bit more about the method GISS is using, which I described above: the plots Willis has shown are just approximators to the difference between the “urban” station and the weighted average of surrounding “rural” stations. If you think there’s something extreme about them, it’s probably not in the adjustment arithmetic; it reflects the actual behaviour of that difference. There’s nothing particularly bad about a LS fit of a bent two-line segment.
So what does the adjustment achieve? Suppose you put more effort into getting a better approximator to the difference. In fact, you could just use the exact difference. That would completely replace the urban station result with the average of surrounding rural. The eventual global trend would then be just provided by rural stations. That is good if you have enough of them.
So why include the urban stations in the calc at all? They have an odd effect – by putting them in, and then replacing them by the average of local rural stations, you upweight the effect of those local stations in the global average. This probably wouldn’t matter much, and the effect is further muted anyway by the gridding.
The piecewise linear approx leaves a small residual effect of the urban stations in the global average, but only contributing short-term variation. The trend effect has been removed. And since the short-term effects are almost totally lost in the averaging, it seems to me that there’s little difference between GISS UHI adjusting and leaving out urban stations completely.
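A toy version of that “bent two-line” least-squares fit might look like the following. The difference series is invented, and this brute-force sketch fits two independent segments, whereas the actual GISS fit is constrained to be continuous at the knee, so take it only as an illustration of the idea:

```python
# Sketch (not the GISS code): fit two least-squares line segments to an
# urban-minus-rural difference series, searching over candidate breakpoints
# and keeping the break that minimizes total squared error.
def fit_line(t, y):
    """Ordinary least squares; returns (slope, intercept, sse)."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    num = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
    den = sum((ti - mt) ** 2 for ti in t) or 1.0
    b = num / den
    a = my - b * mt
    sse = sum((yi - (a + b * ti)) ** 2 for ti, yi in zip(t, y))
    return b, a, sse

def two_leg_fit(t, y, min_pts=3):
    """Return the break year minimizing the combined SSE of the two legs."""
    best = None
    for k in range(min_pts, len(t) - min_pts):
        _, _, sse1 = fit_line(t[:k], y[:k])
        _, _, sse2 = fit_line(t[k:], y[k:])
        if best is None or sse1 + sse2 < best[0]:
            best = (sse1 + sse2, t[k])
    return best[1]

# Made-up difference series: flat until 1970, then a jump and rising trend
# (the kind of shape a UHI onset would produce in the difference).
years = list(range(1950, 1991))
diff = [0.0 if yr < 1970 else 0.5 + 0.05 * (yr - 1970) for yr in years]
break_year = two_leg_fit(years, diff)
```

The point of the sketch is only that such a fit recovers the break in the difference series; whatever shape the difference actually has is what ends up in the adjustment.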
Paul Vaughan (11:21:58) :
Well, here’s a start on Agassiz, I just figured out how to make a blink comparator …

Willis, I’ve just looked at your 2004 paper on Alaskan climate, and enjoyed it a lot. Just one thing, though! I note that you seem always to use 17-year Gaussian smoothing. I’m a strong believer in eschewing the salve of smoothed data, which I tend to equate with trying to make things easier for journalists and politicians. OK, we know their limitations, so I suppose that’s a justification. However, when you look carefully at the individual month data, a somewhat different picture emerges. First, as you know, it is very useful to “deseasonalise” the monthly data by subtracting the overall month averages from each item. Next, form the cumulative sum of the deseasonalised data, and plot this against the time base. What happens might be an eye opener. The steady increase that your plots show over the period from around 1970 to perhaps 1985 now becomes a step change that takes place over one or two months, and which is preceded and followed by periods of remarkable stability. The PDO itself underwent the same step change, but a few months earlier.
This sort of underlying behaviour renders the conventional “trend line”, routinely computed without consideration of its real-world implications, a misleading concept. My approach is to let the data themselves guide one’s thoughts about fitting a linear model. The notion that a linear fit must be “safe” is not one to be applied without due thought in the realm of climate science. Good though the human brain is at spotting patterns in data plots, it may not be good enough!
I am pretty sure that most methods currently used for handling climate time series over periods of perhaps a couple of centuries down to a few years tend to disguise the occurrence of step changes. This simplifies potential explanations but may be hiding something of fundamental importance.
Over the last 16 years I have looked at several thousand series, of many types, and am convinced that abrupt change is the norm, rather than the exception. What you are likely to notice is that once the position of a potential step has been suggested your ideas about the linear fit, and indeed the process of smoothing, might need some modification.
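The deseasonalise-and-cumulate procedure Robin describes can be sketched as follows. The record here is synthetic (a pure seasonal cycle with a one-degree step halfway through), purely to show how a step change in the mean appears as a kink in the cumulative-sum curve:

```python
import math

# Deseasonalise monthly data by subtracting each calendar month's long-term
# mean, then take the cumulative sum; a shift in the mean shows up as a
# change of slope (a kink) in the cumulative-sum curve.
def deseasonalise(monthly):
    """monthly: list of values, one per month, starting in January."""
    month_means = []
    for m in range(12):
        vals = monthly[m::12]
        month_means.append(sum(vals) / len(vals))
    return [v - month_means[i % 12] for i, v in enumerate(monthly)]

def cusum(xs):
    out, total = [], 0.0
    for x in xs:
        total += x
        out.append(total)
    return out

# Synthetic 20-year record: a seasonal cycle plus a +1 degree step at year 10.
months = 240
raw = [5 * math.sin(2 * math.pi * m / 12) + (1.0 if m >= 120 else 0.0)
       for m in range(months)]
anoms = deseasonalise(raw)
curve = cusum(anoms)
# Before the step the anomalies average -0.5, after it +0.5, so the curve
# falls steadily to a minimum exactly at the step and rises afterwards.
```

On real station data the kink is of course noisier, but a genuine step change over a month or two still produces a clean V shape that a fitted straight trend line completely hides.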
I would like to be able to show a few GIFs, in this thread, but don’t know how :-(( They would lend some weight to my words.
Robin
carrot eater (12:04:14), thanks for your thoughtful reply. You say inter alia
There are mathematical methods which can identify a breakpoint in a dataset without reference to its neighbors. If we know a tree was cut down in 1938, and we find a breakpoint in 1938, we can make a reasonable adjustment for the change based on the math.
Next, I have no problem with someone making an adjustment based on a change, as long as they clearly identify the amount and the reason for the adjustment. It does require human judgement, but that is true in many parts of science. As long as they can specify their reasons, we can see if they are reasonable.
Next, I do not find “reproducibility” to be a valid reason for selecting an automated method. If it is making bad adjustments reproducibly across a variety of stations, that’s not a good outcome.
Finally, if you do use an automated method as GISS and GHCN do, it is imperative that you go through and toss out the adjustments that are clearly bogus. Adjusting Matanuska to increase the recent (UHI?) warming is one such adjustment. Quality control is a crucial part of any computerized adjustment system.
There should be some good coming out of all this Cherry (picked) Fudge; now there must be a market.
carrot eater (11:40:02)
I took a look at the period of overlap where there is no missing annual data (1938-1990).
Before adjustment, the long-term trend for Matanuska was -0.01°C per decade.
After adjustment, the long-term trend for Matanuska was -0.01°C per decade.
Before adjustment, the long-term trend for Anchorage was 0.18°C per decade.
After adjustment, the long-term trend for Anchorage was 0.07°C per decade.
So I’d say that your claim, that the purpose of the adjustments was to correct the long-term trends, didn’t work for Matanuska. Anchorage’s trend was reduced, but Matanuska’s was left unchanged.
I still haven’t heard any reason for increasing the recent Matanuska trend, or reducing the earlier Matanuska trend. Why should it match its neighbors? Nature doesn’t work that way, there are differences between stations.
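For what it’s worth, decadal trends of the kind quoted above come from a simple least-squares slope. The series below is a made-up example, not the actual Matanuska or Anchorage data:

```python
# Sketch: ordinary least-squares trend of an annual series, expressed in
# degrees per decade (the convention used in the comparison above).
def trend_per_decade(years, temps):
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return 10 * num / den  # slope per year, times ten

years = list(range(1938, 1991))
# Toy series warming at exactly 0.018 degrees/year, i.e. 0.18 per decade:
temps = [2.0 + 0.018 * (y - 1938) for y in years]
slope = trend_per_decade(years, temps)
```

Running this on the raw and adjusted station series over the same overlap window is all that is needed to reproduce the before/after comparison.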
Using nightlights alone to judge the urban/rural status of a station is risky, since the coordinates of many stations are not exact enough for this.
I’ve looked into this for Swedish stations in the GHCN, and there are several significant errors. Härnösand, for example, is a fairly large town, but the coordinates are off by a few miles, placing the station in the middle of an uninhabited forest area. The same is true for Halmstad, where the station is at the airport on the outskirts of the town, but the coordinates place it in the middle of a field several kilometers east of the airport.
In some cases the coordinate errors may even be deliberate. I suspect this is true for Jokkmokk where the station is at an Air Force Base whose position and even existence was secret until the recce satellite era. Here the coordinates are off by more than 10 km.
Re: Willis Eschenbach (Feb 22 12:50),
Why should it match its neighbors?
I think this is the wrong way to look at it. Matanuska is being defined (maybe wrongly) as urban, and is being replaced by an average of its rural neighbors. The adjustment curve you have plotted is close to what is required to do this.
Since the neighbors were already contributing to the global average, the net effect is that M is simply being left out, with some minor changes to the weighting of the nearby rural stations in the mix.
Willis Eschenbach (12:40:08) :
Well, once the crowd disperses, it’s apparently easier to have a rational discourse.
“There are mathematical methods which can identify a breakpoint in a dataset without reference to its neighbors. ”
In some applications, sure. In this one, I just don’t see how you can pull it off. You’ll find the breakpoints, but without the neighbors, you won’t know what should have been happening at your spot.
“Next, I do not find “reproducibility” to be a valid reason for selecting an automated method. If it is making bad adjustments reproducibly across a variety of stations, that’s not a good outcome.”
I think it’s quite important, and I think the nature of WUWT testifies to it. People don’t trust adjustments that they can’t dig into for themselves. Having a lot of non-objective adjustments will just become a huge black box that nobody here could really examine, and it would only lead to yet more controversy.
Also, I do not think you’ve demonstrated that the adjustments are on the whole bad.
“Adjusting Matanuska to increase the recent (UHI?) warming is one such adjustment.”
Again, I strongly encourage you to finish the job you started, and analyse all the rural stations that serve as the reference set here. The work is very incomplete without it.
The GISS UHI adjustment often does put in an increased warming trend, but that could be for some good reason. Say Matanuska had some station move or somesuch that messed with its trend, such that it didn’t match well with the neighbors. The UHI adjustment will step in and do something about it. I agree it’s crude, so I prefer the NOAA methods. But because of this, I wouldn’t just toss out any positive UHI adjustment out of hand; it could be positive because it’s correcting for something else.
This is assuming the rural reference stations aren’t themselves garbage.
“Robert (10:34:27) :
“No – it is the AGW fantasists that need to do that, actually.”
You’re wrong, Willis. You’re ”
Robert is amazing. After repeatedly misrepresenting other people’s statements, he now uses outright wrong quotes. I got suspicious because “AGW fantasists” is not Willis’ style. Robert, are you such a helpless tw*t that you need to do that or are you just extremely careless?
Willis Eschenbach (12:50:13) :
“I took a look at the period of overlap where there is no missing annual data (1938-1990).”
You should try the comparisons on either side of the Matanuska dog-leg in 1970.
“I still haven’t heard any reason for increasing the recent Matanuska trend, or reducing the earlier Matanuska trend. ”
Because you haven’t looked for the reason. Please just look at the entire set of neighboring rural stations.
“Why should it match its neighbors? Nature doesn’t work that way, there are differences between stations.”
And yet, nature does work that way to a pretty good extent. You can see how good the correlation is between Anchorage and Matanuska, even with any UHI still in there. Anomalies correlate pretty far out, both in trend and in the variance. This has been well demonstrated, and if you disagree, you’ll have to do much more than just saying you don’t like it.
Nick Stokes (12:58:09) :
That is exactly correct. Though, if the Matanuska record was really messed up for some reason, it actually wouldn’t be possible to give it the same trend as the average of the rural neighbors. At least not by the GISS method.
But somebody needs to come up with a distance-weighted average of rural neighbors, to provide the comparison set.
This is what we’ve needed the entire time.
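A minimal sketch of such a distance-weighted rural reference follows. The coordinates, anomaly values, and station list are all illustrative; the linear taper to zero at 1200 km mirrors, as I understand it, the weighting described in the GISS papers, but this is not their code:

```python
import math

def km_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in km via the haversine formula."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def rural_reference(target, rurals, max_km=1200.0):
    """Weighted mean anomaly of rural stations: weight falls linearly from
    1 at the target location to 0 at max_km, so distant stations count less."""
    num = den = 0.0
    for (lat, lon, anom) in rurals:
        d = km_distance(target[0], target[1], lat, lon)
        w = max(0.0, 1.0 - d / max_km)
        num += w * anom
        den += w
    return num / den if den else None

# Hypothetical: an Anchorage-ish target with two rural stations, one near
# (~60 km) and one far (~400 km); the near one dominates the reference.
anchorage = (61.2, -149.9)
rurals = [(61.6, -149.1, 0.3),
          (64.8, -147.7, 0.7)]
ref = rural_reference(anchorage, rurals)
```

Overlaying a reference series like this on the urban record is exactly the comparison set that would let readers judge whether a given adjustment is reasonable.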
tty (12:56:17) :
There was an amusing article recently where the authors tried to use Google Earth to look at the stations, using the coordinates. They also found that the coordinates were not perfectly precise.
Willis,
This is looking like a typical Eschenbach effort. You scour stations to find something that looks a little odd. You write a post under the heading “Fudged fevers”. You say you don’t understand it.
But GISS publish the data that they use, and the algorithms and the code. Other independent groups have used and emulated the code, getting essentially the same results, so GISS are using the code that they publish. So if you’re going to persist with accusations of fudging, you might at least try to show where they do that in the code. I’ve pointed out the relevant parts.
Nick Stokes (14:17:52) :
My complaint as well. “Fudged” is in the title; Willis says he can’t think of what could justify x, y or z. When in fact everything is wide open and transparent. Perhaps Willis doesn’t agree with how the algorithm works, but he owes it to the readers to at least tell them what the algorithm is, or even that one exists and where to find it. It’s an objective method, so there’s no specific fudging. What you can then do is look at the surrounding stations and decide whether the algorithm churned out anything reasonable.
Though I wouldn’t explain it from the code primarily; just point to the papers. They’re rather easier to read. GISS papers aren’t paywalled, so nobody has that excuse to not even look at them.
Nick Stokes (14:17:52)
Starting out with ad hominems doesn’t do your cause any good, Nick. I didn’t “scour stations”, I looked at one station, so that’s a lie. Starting out with a lie is a curious way to establish your credentials.
I said I didn’t understand why Matanuska had been adjusted down and then back up again.
So far, nobody has given me a reason. Including you. I’m sorry, but “all the neighbors do it” doesn’t work for me. So what? That’s what makes for horseraces.
If you are so insightful about all this, please explain why.
First, we’ve gone over the “others get the same results” question above.
Perhaps my use of the word “fudged” is confusing you. I meant it in the sense of “pushed around without reason”. If you have those reasons, bring it on.
You keep insisting that perfectly good data should be changed purely because a computer says so. I ask why, and you point me toward the computer code … miss the point much? I know what the code says, I was likely writing computer programs before you were born. I’m questioning the whole procedure, not the details of how they code a program to shoehorn all the local stations into a “one-size-fits-all” straitjacket. Yes, you can do it, and yes, the computer codes do it.
But until someone comes forth to explain a reason for adjusting Matanuska down for fifty years and up for twenty years, I’m going to continue to ask the question. If you want to blindly adjust what appears to be perfectly good data just because some computer orders you to do so, that’s your choice.
Me, I ignore alien orders. I won’t adjust until you can explain why they should apply to Matanuska. I’ve asked that again and again, and neither you nor anyone else has answered the question. When you have the answer, come back and tell us. Until then …
carrot eater (14:33:39) :
You, like Nick, are missing the point. I know how the data was fudged. It was fudged by a computer algorithm, one that obviously doesn’t work well.
What I don’t understand is how this is all justified. I keep asking for a reason that anyone would start adjusting a pristine rural record in 1920. Do you or GISS have the slightest scrap of evidence that there was something wrong with the record?
Because if you don’t, then the data is fudged. Fudged by a computer, using a known and documented program … but fudged nonetheless. When you change data without a reason, merely because you were ordered by the almighty Computer to do so, I call that fudging. Don’t like it? Provide the reason for the adjustments made to Matanuska. We know the method, which is to force them to agree with their neighbors. But what is the reason to fudge the data that way? That’s what I don’t understand.
Which is what I said at the start.
DCC (09:06:36) :
Alan S (15:55:04) :
“I have extreme difficulty understanding, from GISS policy as related above, why any rural station would be adjusted upwards. I assume I am missing something obvious and would dearly like to be enlightened.”
“I had trouble with that, too, until I realized that they are doing things a bit backwards. They are assuming that the highest urban readings, which include UHI, are the correct temperature. Therefore the rural temperatures need to have UHI added in. It’s a very peculiar way to do things. One would think that you should subtract the UHI out. Maybe they just like high numbers!”
I was hoping that I had missed something fundamental, so basically the surface temperature record has been hi-jacked.
It is thoroughly depressing to see.
I assume that is why Trollbert and Pia Carrot are currently spinning like tops to obfuscate, re-direct, misquote and, if that fails, outright lie, to try to muddy this thread.
When I saw the PDO hit the deck in 1998/1999, I reckoned, “That can’t be good”. Then we had the Solar minimum dragging on and on, then we had the Argo buoys reporting sea temperatures dropping, and now we have an air effect, the AO, going negative.
It is looking like time to buy shares in companies who make down clothing and get kitted out early before next winter.
“It’s worse than we thought”, perhaps?
carrot eater (13:19:08) :
Though, if the Matanuska record was really messed up for some reason, it actually wouldn’t be possible to give it the same trend as the average of the rural neighbours.
If the data comes from the experimental farm (see my earlier post) then I am not so sure just how much more “rural” the Matanuska site could be. If the farm is the source then adjustments should really only come from the instrument/site change side. That is to say not from UHI. Pop. 1917=0 Pop. 2010=0.
(stay with me carrots) Nick Stokes (14:17:52) :
(…) So if you’re going to persist with accusations of fudging, you might at least try to show where they do that in the code.
I get the feeling that that is exactly where this (suspicious changes) will have to go. The answer from the pros seems to be “it’s in the PR literature”. Fair comment. BUT, as with the NZ Parliament asking for the details of changes to individual stations, the question is met with … ??? (I suspect this will be a global answer)
My view, currently, is that … Yes, everything is in the literature somewhere (at the macro level, in terms of the method(s) used on a particular run). No, nobody (CRU, NASA, NOAA…) could ever explain the micro details of Darwin or Matanuska (don’t get me started on Iceland!) as they are the by-products of the bulk processing algorithms used. The detail is lost in much the same way as it would be in processing a huge mail order address list.
Anyways … I think what needs to be done is a bit of debug output at each stage of the code we are allowed(!) to see (and run) or at least (PR) methods we can duplicate. Tracing Darwin or Matanuska at each stage may spread some light (groan) on why they are as they are.
Despite what the politicians and climateers say, the details are important. If you (CE and NS or even PJ and TP) cannot convince me (and presumably Willis) that Matanuska is a valid adjustment then we can never agree on the final result.
Willis – what, if anything, did you decide re: a surface stations type domain?
Re: Willis Eschenbach (Feb 22 15:22),
So far, nobody has given me a reason. Including you.
I have, here. Matanuska has been classified as urban, and is being eliminated from the trend calculation. The device being used is to add the difference between an average of nearby rural values and the M values. If that is done exactly, M has gone completely. It’s done approximately, using this piecewise linear fit, to preserve some short-term information. I don’t think that is much of a gain; it will have very little effect at all.
The down/up that you complain about is not artificial. It’s the observed discrepancy between M and its rural neighbors. Whether you add that discrepancy, removing the effect of M, or just omit M, has the same effect.
“I was likely writing computer programs before you were born.”
I wrote my first computer program in 1964, using Manchester Autocode.
A bit of obscure and unintentionally funny trivia: being that I am from that area (Kenai/Soldotna), I grew up drinking Matanuska Maid Homogenized Milk…
On that note, Matanuska did have a relative population boom, but exact siting of the area needs to be figured. Alaska is BIG, most areas can eat up a large population without much impact.
So, unless we do a Surface Station run down of it, we really don’t know. I didn’t see any links to it on the surface stations site, so…
If I end up back in AK for some work (pending) I can swing up and check it out.
carrot eater (13:13:49)
You are quite correct, that some “anomalies correlate pretty far out, both in trend and variance” … but some don’t. The present case is a perfect example. Although there is good correlation between Anchorage and Matanuska in terms of the variance, the trends are radically different. Anchorage has a trend of almost two degrees per century, while Matanuska shows no trend at all. Go figure …
You are taking a general observation (nearby stations tend to correlate) and trying to make it into an absolute (therefore if they don’t correlate, we are justified in forcing them to correlate).
To take another example, yes, members of a family tend to look alike too … except when they don’t. Their body measurements tend to correlate just like nearby temperatures. But we wouldn’t dream of “adjusting” the body measurements of the one solitary tall thin member of a family to match the measurements of a dozen of his shorter, heavier relatives. So how can you justify doing the same to temperatures? If you are looking for some mythical Nature without sports and freaks and oddballs and things that break the rules, you’re on the wrong planet.
Dominic Marcello (09:26:43) :
Matanuska station has not just sat in the same place since its inception. It’s moved around a bit.
Good catch, but I’m not convinced that the moves account for the GISS end. As far as I understand it, GISS uses GHCN as a base set but replaces GHCN records with USHCN records where matches exist. USHCN (again, as far as I can see) has already adjusted for station moves and such. GISS may or may not then attempt (as far as I can partially see) to remove any adjustments made by the source parties (of GHCN/USHCN) to the resulting GHCN(+/-)USHCN set!
Is it any wonder Matanuska is a bustling metropolis – full of folk trying to escape a life chained to climate science.