On the "march of the thermometers"

I’ve been away from WUWT this weekend, recovering from a cold and spending family time with visitors, so I’m just now getting back to regular posting. Recently there has been a lot of activity and discussion on the web around the dropping of climatic weather stations, aka “the march of the thermometers,” which Joe D’Aleo and I reported on in this compendium report on issues with surface temperature records.

Most of the station dropout coverage in that report is based on the hard work of E. M. Smith, aka “chiefio“, who has been aggressively working through the data bias issues that develop when thermometers are dropped from the Global Historical Climatology Network (GHCN). My contribution to the study of the dropout issue was essentially zero; I focused instead on what I’ve been studying for the past three years, the USHCN. The USHCN has had a few station dropouts, mostly due to closures, but nothing compared to the magnitude of what has happened in the GHCN.

That said, the GHCN station dropout Smith has been working on is a significant event: the inventory has gone from about 7,000 stations worldwide to about 1,000 now, with lopsided spatial coverage of the globe. According to Smith, there has also been an affinity for retaining airport stations over other kinds. His count shows 92% of GHCN stations in the USA are sited at airports, and about 41% worldwide.

The dropout issue has been known for quite some time. Here’s a video that WUWT contributor John Goetz made in March 2008 that shows the global station dropout issue over time. You might want to hit the pause button at time 1:06 to see what recent global inventory looks like.

The question being debated is how that dropout affects absolutes, averages, and trends. Some say that while the data bias shows up in absolutes and averages, it doesn’t affect trends at all when anomaly methods are applied.

Over at Lucia’s Blackboard blog there have been a couple of posts on the issue that raise some questions about methods. I’d like to thank both Lucia Liljegren and Zeke Hausfather for exploring the issue in an “open source” way. All the methods and code used have been posted at Lucia’s blog, which enables a number of people to examine and replicate the work independently. That’s good.

E.M. Smith, aka “chiefio”, has completed a very detailed response to the issues raised there and elsewhere. You can read his essay here.

His essay is lengthy; I recommend giving yourself more than a few minutes to take it all in.

Joe D’Aleo and I will have more to say on this issue also.

239 Comments
March 8, 2010 3:01 pm

Yo Ant-nee (best South Jersey lingo),
Get some glutamine (an amino acid for the immune system) for that cold,
and then meet me in the gym for some benches.
Get well,
JB

March 8, 2010 3:07 pm

Re: Keith W. (Mar 8 14:12),
I presume you mean GISS, not GHCN. They use rural stations within 500 km, or 1000 if they have to, to do their UHI adjustments. They use stations within 1200 km to calculate the anomaly base for a grid point (not an adjustment). In both cases the results are weighted inversely by distance, so the further stations have less effect.
While the derivation of the anomaly base should be as local as possible, even if it is biased by remote stations, that just provides a constant offset, and doesn’t affect trends.
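For illustration, a minimal sketch (hypothetical data, in Python) of that kind of distance-weighted averaging, assuming weights that fall off linearly to zero at the cutoff radius; the actual GISS code differs in many details.

```python
# Minimal sketch of distance-weighted station averaging (hypothetical data).
# Weights fall off linearly with distance, reaching zero at the cutoff
# radius; stations at or beyond the cutoff contribute nothing.

def weighted_anomaly(stations, cutoff_km=1200.0):
    """stations: list of (distance_km, anomaly) pairs for one grid point."""
    num = den = 0.0
    for dist, anom in stations:
        if dist >= cutoff_km:
            continue  # too far away: zero weight
        w = 1.0 - dist / cutoff_km  # nearer stations weigh more
        num += w * anom
        den += w
    return num / den if den else None

# A near station at +0.5 dominates a far one at -0.5:
print(weighted_anomaly([(100.0, 0.5), (1100.0, -0.5)]))  # ~0.417
```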

Alex Heyworth
March 8, 2010 3:14 pm

Re: steven mosher (Mar 8 14:16),
Good couple of posts, Steven. Liked your first one identifying what the real issues are.
WRT your second post, I would be convinced there was no issue IF I thought what GISS does in calculating anomalies was as simple as your example. As it is, I am about 3/4 convinced, but would like to see what Chiefio comes up with.
On your first post, I’m inclined to suggest that the real issue isn’t to do with air temperature at all, but with heat content, as suggested by David Evans above. However, I’m a bit dubious that we have much of a handle on ocean heat content yet. For example, I have yet to see an estimate of ocean heat content change that has admitted what its error bars are.

DocMartyn
March 8, 2010 3:17 pm

The march of the thermometers would work as an explanation if the sites that were deleted had a smaller (Max + Min) range than the remainder. I was looking for the effect of altitude and urban heat island effects on (Max + Min).
Does anyone know the effect of height on (Max + Min)?

geo
March 8, 2010 3:22 pm

@steven mosher (14:16:25) :
Nice. What does “micro-site” mean in this context? I ask because the airport work leads me to believe there are, roughly, three levels of UHI: one is micro-site, along the lines of what surfacestations is looking at with the French 1-5 scale, and another is classic UHI, which is more of a “regional” phenomenon.
But the airport thing to me is a ’tweener that does not comfortably fit either. The 1-5 scale clearly has certain assumptions built in at a very basic level about just how big a potentially contaminating nearby heat source might be, and large commercial jet engines are pretty clearly far outside those base assumptions.
Yet it isn’t necessarily “UHI” (in the classic sense) either, so far as being predictable by population density or some such. So, ’tweener. It would be nice to have language to describe and categorize that middle case. Airports are the classic example, but possibly not the only one, so “airport problem”, while understandable, doesn’t really satisfy my need to generalize taxonomies, y’know? I suppose I’m leaning toward UHI as the top-level categorization, with micro-site, whatever-the-general-term-for-the-airport-kind-of-thing is, and regional as the subsets.

March 8, 2010 4:27 pm

Of course, if you take this thing to a (ridiculous) limit, you don’t need any thermometers anywhere.

Jan Pompe
March 8, 2010 4:28 pm

steven mosher (13:55:32) :
“What do you think Nick?”
I’m not really sure that I care much for the opinion of a mathematician who thinks this:
“Nick Stokes (13:04:02) :
Re: PeterB in Indainapolis (Mar 8 11:43),
Yes, removing thermometers that are colder than the grid cell average, as I said, will make the grid cell average temperature colder. ”
Thank you for continuing the strawman here:
steven mosher (14:16:25) :
Sure, where the trends are the same across the stations, and if you normalise (do we really need to invent new terms such as “anomalise” just for climate science?) before you average, the trend will remain unchanged.
I would have thought that patently obvious, so why waste time on it?
However, the real problem is that the trends from station to station are not the same. If the stations dropped are those with a cooling or steady trend, in favour of those with, say, UHI contamination, we have a problem that needs to be addressed.

E.M.Smith
Editor
March 8, 2010 4:30 pm

curious (03:45:07) : E. M. Smith, who seems to change his mind about the existence of the warming bias.
No, I have not changed my mind at all. There is a clear warming bias in the DATA. There are changes that put more warm thermometers in the present and with flatter seasonal profiles. That IS station bias. Period.
Somehow folks seem to take that and want to turn it into “I think the anomalies are warming”. They are two very different things. Please keep them apart. There are more kinds of warming bias than just rising anomalies.
There are at least 5 ways of cooking up an anomaly that I know of. Before you can assess how well they each work, and which one has the fewest problems with the particular “issues” in the data that you have, you must first understand that DATA.
So, for example, the First Difference method resets to zero on any data gaps. Well, if you look at the “Digging in the clay” article linked at the top, you find LOADS of data gaps. Hmmm… Maybe FD “has issues” here….
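A toy sketch (hypothetical data, not any agency’s actual code) of why that matters: a first difference is only defined between consecutive reporting years, so every gap breaks the chain and whatever change happened across the gap is lost.

```python
# Toy sketch of the First Difference method's gap behaviour
# (hypothetical data; not any agency's actual code).

def first_differences(series):
    """series: yearly temps, None where the station did not report."""
    diffs = []
    for prev, curr in zip(series, series[1:]):
        if prev is None or curr is None:
            diffs.append(None)  # gap: no difference can be formed
        else:
            diffs.append(round(curr - prev, 2))  # rounded for display
    return diffs

# A station warming steadily, but with a hole in the middle:
print(first_differences([10.0, 10.1, None, 10.3, 10.4]))
# -> [0.1, None, None, 0.1]; the 0.2 rise across the gap vanishes
```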
If you look at “Climatology” (i.e. what GIStemp does with things like making up values to fill in the blanks with “The reference station method”) you can avoid the damage from the blanks, but you get ‘leakage’ of the warming BIAS in another station into a missing station and into the anomaly maps (As Gavin said in his email from the foia batch… The GISS guys know this.)
You can go down the list of methods. They each have “issues”.
So to choose a “good one” (or as I am doing, to measure one) you simply MUST know what issues are in the DATA prior to the process.
So what we find are LOADS of bias in the DATA. Latitude, altitude, airport percentage, etc. That you might find a way to mitigate it does not mean the bias is not there, and does not mean you can ignore it. (And it does not answer why it is being put in, in ever greater amounts, when there is no need to do so… FWIW, my bias is to assume “stupidity” rather than “malice” in conformance with Hanlon’s Razor: Never attribute to malice that which is adequately explained by stupidity. [ my paraphrase ] But I’d really rather have much less bias in the data to begin with. )
For each method used, that added bias risks reaching the result in significant amounts, and probably widens the error bars. Neither is good.
So I cooked up my own “method” that I think is better than either of those other two (and much better than the “some of this, some of that, recursively” adjusting and in-filling done by the whole of GIStemp, not just the grid / box anomaly STEP3), since it avoids the “in-fill” errors but also carries the anomaly forward over data gaps, so it will preserve a trend better with holey data. And I found that the USA is basically FLAT in trend, but with a rolling pattern roughly in step with the PDO. Hardly CO2-matching.
http://chiefio.files.wordpress.com/2010/02/usa-dt.gif
But hey, it uses anomalies so it can’t possibly have any error in it. So the USA ought to now get a complete free pass on any AGW issues. Right!?
/sarcoff>
And that is my whole point. We are asked to take on faith that “the anomaly will fix it” and there are many different ways to do anomalies with many different results and no clear way to know which one is right.
(Though I’m pretty sure the way GIStemp does it is wrong. Applied AFTER all the UHI, in-fill, homogenizing, USHCN.v2 / GHCN averaging, etc. )
Now look at that chart again. Notice how wide the swings get back in the very early years (on the far right). The error band is opening up with station reductions. Now tell me station drop outs don’t matter. I’m good with that. Happy as a clam. Because if station drop outs don’t matter, then we are now clearly in a 260 year cooling trend and we can all go home.
But if station drop outs DO matter, then they matter for all time, and not just the far past.

Do you think this statement is supported by the “very detailed response” of Smith? What about Roy Spencer’s results then?

Take a look at that graph again. Notice the dip between the ’30s and now? That’s the GIStemp baseline. Yeah, I think setting your baseline in a dip matters. Make the ‘baseline’ from 1925 to 2005 and you get different (and cooler) anomalies. We are exactly at zero now, which is where we were in the 190x and 181x eras. And it was all done with anomalies so it’s perfect… 😉
BTW, Spencer’s early result saw “nothing of interest” and my comment was “needs to look at more data”. Having now looked back to the ’70s he is finding divergence with Jones. I have no disagreement with that. It is just the kind of thing I would expect to find. I’d further speculate that the further back he looks, the more divergence he will find. All the ‘odd adjustments’ seem to be piled up in the older data (at least given what I’ve seen comparing ‘old really raw’ with ‘as in the data products’).
So I’d have to say I agree with Spencer. There is an unexplained bias that gets bigger the further back in time you look.

carrot eater
March 8, 2010 4:39 pm

Alex Heyworth (15:14:57) :
No, that isn’t what GISS does, but their method also works just fine.
Roughly, what GISS does:
Station A is hot. say it never changes. Readings are 30 30 30 30 30
Station B is cold. say it also never changes. Readings are 0 0 0 0 0.
Now say that station B stopped reporting its data. So we only have the first three readings: 0 0 0.
Using absolute temperatures, the average would be 15 15 15 30 30. That of course is wrong.
Using GISS, you start with the longer record. so you take the 30 30 30 30 30. Great.
Then you get the next longest record. That’s 0 0 0.
You find the period where they overlap, and find the mean of each station over that period. That’s 30 and 0. Find the difference of those means, and add it to each value of the second station.
So now station B becomes 30 30 30.
Now you combine A and B. And you get 30 30 30 30 30. Then, you subtract out the baseline, and you end up with the final anomalies 0 0 0 0 0. But that’s cosmetic; the trend doesn’t change by re-centering the anomalies on zero.
So in summary, you had two constant-temp stations, one hot and one cold. Dropping the cold made no difference; the combined average still had a constant temp.
Now, if A and B had different trends, then dropping one would make a difference. That’s true, no matter what method you use.
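For concreteness, a minimal sketch (hypothetical data, not the actual GISS code) of that offset-and-combine step:

```python
# Sketch of the offset ("bias") method described above: shift the shorter
# record so its mean matches the longer one over their overlap, then
# average year by year. Hypothetical data; not the actual GISS code.

def combine(longer, shorter):
    """Records are yearly values aligned by index; None = missing."""
    overlap = [(a, b) for a, b in zip(longer, shorter)
               if a is not None and b is not None]
    offset = (sum(a for a, _ in overlap)
              - sum(b for _, b in overlap)) / len(overlap)
    out = []
    for a, b in zip(longer, shorter):
        vals = [v for v in (a, b + offset if b is not None else None)
                if v is not None]
        out.append(sum(vals) / len(vals))
    return out

a = [30, 30, 30, 30, 30]   # hot station, never changes
b = [0, 0, 0, None, None]  # cold station, stops reporting
print(combine(a, b))       # -> [30.0, 30.0, 30.0, 30.0, 30.0]: no step
```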

rbateman
March 8, 2010 4:41 pm

steven mosher (14:16:25) :
I must say that’s quite the clever trick.
Except they didn’t drop data from merely warm or cool stations.
Specifically, a warm rural station relates to a cool rural station in the same way as a warm urban station relates to a cool urban station.
In the real world, they dropped the rural stations and kept the UHI-affected urban stations.
Result: Unprecedented warming.
That’s how you get the gridded output to rise dramatically. You dump the vast majority of stations that don’t show appreciable UHI.

carrot eater
March 8, 2010 4:46 pm

Gary Hladik (14:37:42) :
The only adjustment GISS makes is a UHI adjustment. But why are you surprised that it comes before the stations are combined? It has to be, if you want it to actually be incorporated in the spatial averages.
As for actually using the GISS method, see ccc and Ron Broberg.
http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
http://rhinohide.wordpress.com/2010/03/08/gistemp-high-alt-high-lat-rural/

"Popping a Quiff"
March 8, 2010 5:07 pm

Nick Stokes (02:22:28) :
Your argument leaves too much room for tinkering with the method. And given the obvious bias of people like James Hansen, tinkering only invites suspicion.
Let’s compare the data from stations that have been dropped to stations that have been retained. According to your method everything should be the same, shouldn’t it, even the actual temperatures?
But I bet a cup of coffee they aren’t.

E.M.Smith
Editor
March 8, 2010 5:08 pm

@Tony Rogers: You understand. Tears of joy… 😉
@J.P. Miller: Thanks. The code is public (though barely readable…) and I’ve gotten it to run. One of only a few folks on the planet to do so 8-0
The issue that bugs me is that they OUGHT to have a full test suite covering neutral, warming, cooling, red / pink / whatever data patterns; a full QA suite that feeds in broken data and bogus values; and a published set of benchmarks. And they don’t.
Instead you get really rough code and some pointers to papers that barely apply to what the code does (one justifies The Reference Station Method in one place at one point in time, but does not show how that will work when, oh, the PDO flips and the Jet Stream goes loopy and your reference period is now in a quite different weather regime; nor why The Reference Station Method can be applied recursively… showing that a “fill in” from 1000 km away works ONCE does not mean you can do it 3 times in a row, maybe making fill-ins from fill-ins from a datum 3000 km away…). And a hearty “Trust Us, we ARE Rocket Scientists”…
That’s the problem…
David A (04:27:37) :
Re Mike McMillan (02:02:16) :
Your blink charts are very impressive and warrant more commentary (a great deal more). The lowering of the past appears the strongest.
Can you give a quick summary of how you know this is USHCN original raw data vs USHCN version 2 revised raw data, and what GISS does with this data?

The USHCN and USHCN.v2 are by definition ‘not raw’. They are constructed sets with adjustments. You know they are “as constructed” if you get them from the constructor NCDC at their site (and I do, as does GIStemp. Instructions under the GIStemp tab on my site.)
What GISS does with it, well, I’ve spent about a year working that out and there are a few dozen pages devoted to it… buy a bottle of Scotch first, though 😉
@Anna V.: You got it! Gozintas, Gozoutas, Transforms, and deltas. Prove validity or find the issues. Can’t tell the players without a benchmark…
BTW, the ‘mysterious property’ I’m working on as potentially publishable, that I can’t mention yet, is related to exactly the kind of bias you get in the DATA from station selection… and the idea that the anomaly automatically fixes everything is just wrong. It helps, sometimes a lot, but it does not fix. (And GIStemp does it at the end instead of the beginning… How something done in STEP3 will fix what was done in STEP0, STEP1, and STEP2 is still a bit of a mystery to me 😉

"Popping a Quiff"
March 8, 2010 5:13 pm

E.M.Smith (16:30:02) :
That IS station bias.
A warming bias, which is their desired outcome. Otherwise they’d keep the rural and mountain stations in the record, if for nothing else than to avoid the very real possibility of looking suspicious, a possibility which must have crossed their minds.
The fact that they dropped those particular stations leads anyone to conclude they want a warming bias, no matter what argument they use to convince us otherwise.

"Popping a Quiff"
March 8, 2010 5:18 pm

Nick Stokes (02:22:28) :
E.M.Smith does not like anomalies
Did he actually say this? That he dislikes anomalies in all cases? Or are you trying to influence how people view E.M. Smith?

"Popping a Quiff"
March 8, 2010 5:24 pm

Nick Stokes
so what you are saying is ‘pay no attention to that actual temperature behind the curtain’
I’ll stay awake in Kansas instead

David Alan Evans
March 8, 2010 5:46 pm

Alex Heyworth (15:14:57) :
A loss or gain of 1ºC in ocean temperature would mean an enormous energy loss/gain. You are right: we have no handle on this.
My personal view is that the atmosphere is a bit player.
DaveE

Harry Lu
March 8, 2010 5:53 pm

steven mosher (13:55:32) :
etc.
Climate4you (excellent web site!):
http://www.climate4you.com/GlobalTemperatures.htm#Temporal stability of global air temperature estimates
This section shows temporal changes in various temperature records.
Interestingly, it is the CRU data that shows the least changes! It’s a pity no one HERE believes them any more.
/harry

E.M.Smith
Editor
March 8, 2010 6:05 pm

Tim Clark (06:01:55) :
Nick Stokes (02:22:28) :
E.M.Smith does not like anomalies, and likes to do his analysis with absolute temperatures. In that world, the “march of the thermometers” towards the Equator, or wherever, may have caused a real temperature bias.

Swing and a miss…
I have no distaste for anomalies. I have a distaste for the ASSUMPTION that you can throw an anomaly step into a program and then claim perfection.
Big Difference.
Also, different tools ought to be used for different things.
So be careful with the word ‘analysis’. Many folks in the climatology world seem to use that as shorthand for “Global Average Temperature Trend Analysis”. It isn’t.
So I’ve done an “analysis” of the STATION BIAS in the DATA. It’s still an analysis, but using anomalies on it would remove exactly the things you are trying to find… How big is the station selection bias. Where it is located. What types of stations carry more of it, or less. What regions of the planet show the most, or the least. For all of that kind of analysis, you want to avoid anomalies. That’s the “Gozinta” part.
I’ve also done an anomaly based analysis for the purpose of making a neutral and uncomplicated (by things like The Reference Station Method, data drop outs within a station, UHI “correction” etc.) analysis. This type of analysis does benefit greatly from anomaly based processes. It will be used for the “Delta” part.
I’ve got a first rough benchmark of GIStemp, but I need to make some code to turn the product of GIStemp into something that can be directly compared with dT/dt. That can be done either by making them both use the same grid/boxes or by taking the GIStemp grid/boxes and turning them back into a “temperature” series. Still TBD. That’s the “Gozouta” part.
Then, and only then, I can compare the Gozinta to the Gozouta and the Delta and see where there are Variances … and those variances will tell an interesting tale, one way or the other.
And while many folks seem to think I was done at the “Gozinta” step, that was only a first step. I’m pretty much ready for a final approval on the “Delta” step (and then the code will be published with results). It already shows that “Global Warming” is not Global and that the pattern is more in line with Instrument Change and Airport Growth than CO2; but that’s an early conclusion so “more to come”.
The “Gozouta” is the one I dread, as I’ll have to go back into the GIStemp rats nest again… Oh Well, can’t be helped.

But the climate scientists do it differently.

I noticed…
They do two things that prevent that bias. One is the use of anomalies.
Nope. It does NOT “prevent” that bias. It CAN mitigate. If done right, it can mitigate a lot. To “prevent” would require that it not be in the data to begin with. So put the thermometers back. The whole point behind measuring the DATA is to measure the degree of MITIGATION done later. To leap to the conclusion that the mitigation is PERFECT is exactly the problem…
That is, you form the global mean by averaging differences of station temps from their local mean over a fixed period.
And, in GIStemp, they do not do that. They compare an average box of thermometers in time A to an average box of DIFFERENT THERMOMETERS in time B. And they do it AFTER calculating UHI, doing in-fill and homogenizing, etc, etc, etc.
Now maybe you are willing to just ASSUME that process is perfect. I’m not. I want to see benchmark results. What I’ve seen so far says it’s “Pretty good, but not perfect”. And when the STATION BIAS in the input for the Pacific Basin data measures 10 C over the life of the data, if you are 95% of perfect you get a 1/2 C warming that is bogus.
Are you really willing to bet the world economy on the HOPE that GIStemp is over 95% perfect? Really?
…it scarcely matters whether stations being dropped are hot or cold.
And that attitude, IMHO, is why The March Of The Thermometers happened.
Once you swallow the whopper that the individual stations used are not relevant, that “The Anomaly Fixes All”, then any old box of thermometers will do. Heck, put one on the BBQ. It may read 350 F, but if the climate is getting warmer, next decade it will read 351 F at the same fuel setting and “The Anomaly Will Fix It”…
I’m not willing to bet so much on so little with no evidence.
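For contrast, the plain baseline-anomaly recipe quoted above can be sketched in a few lines (hypothetical data). It shows the sense in which the method mitigates: a constant station offset, even a BBQ thermometer’s, cancels exactly, while a bias that changes over time passes straight through into the trend.

```python
# Minimal sketch of the baseline-anomaly recipe (hypothetical data):
# subtract each station's own mean over a fixed base period, then
# average anomalies across stations.

def anomalies(series, base):
    """series: yearly temps; base: slice selecting the base period."""
    mean = sum(series[base]) / len(series[base])
    return [t - mean for t in series]

hot = [30.0, 30.1, 30.2, 30.3]  # constant +30 offset, same trend
cold = [0.0, 0.1, 0.2, 0.3]     # no offset, same trend

base = slice(0, 2)  # first two years as the base period
for h, c in zip(anomalies(hot, base), anomalies(cold, base)):
    print(round((h + c) / 2, 2))  # -0.05 0.05 0.15 0.25: offset gone
```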

March 8, 2010 6:54 pm

I have a distaste for the ASSUMPTION that you can throw an anomaly step into a program and then claim perfection… To leap to the conclusion that the mitigation is PERFECT is exactly the problem… you are willing to just ASSUME that process is perfect.

I sense a disturbance in the Force… as if a million goal posts were moved, and then were still.
I don’t see where anyone in this thread or at Lucia’s or the CCC or even Tamino has claimed “perfection.” What’s at issue is Watts and D’Aleo’s contention that a warming bias was introduced by the march of the thermometers. It’s been shown, by a number of different methods including GISTemp’s, that no bias was introduced. The same warming trend appears with the dropped stations and without. QED, as far as the contention in the SPPI report goes.
That’s a falsifiable hypothesis, and we’ll all be paying attention if you falsify it, Chiefio, but the question was not whether the surface temperature record is “perfect.” That’s an entirely valid question that should be carefully investigated, but it’s a separate question.

E.M.Smith
Editor
March 8, 2010 7:23 pm

Tim Clark (06:17:00) :
Uh, Nick, I think you need to rethink that statement. We’re not talking absolute station temp here. If you drop stations that are showing no trend or a cooling trend and leave mostly stations that have a warming trend (airports), you bias the trend upward, regardless of gridding.

Good point. I think I got who’s who a bit lost in the prior comment. There are also longer term issues with station change. Drop one on one side of the jet stream, pick one up on the other, then have the PDO flip and change the relationship between them. Now your “in-fill” replacement may be out of phase with the period in which the baseline relationship was established…

Tim Clark (06:30:50) : Forgive the double post, it’s a Monday.

It’s Monday already? When did that happen 😉

rbateman
March 8, 2010 8:09 pm

Paul Daniel Ash (18:54:01) :
I believe the temperature record has been left in a lesser state than what actually exists.
They didn’t just drop stations; they left them with gaping holes. Holes that, I find, are sometimes plugged with forgotten data or by a process never independently checked.

E.M.Smith
Editor
March 8, 2010 8:15 pm

geo (07:25:50) : It seems to me the range of skeptics runs to three main lines of thought:
Then there is:
0). We just don’t know. And can’t. The temperature histories are too short and too full of holes to really say much useful at all (and being moth eaten even more as we watch). We know it was much hotter and much colder in the past from natural causes. We know there are many profound cyclical events of long duration (See Bond Events). And there is just no way we can disambiguate those events (that we don’t understand) from ‘normal’ that we have not measured well enough to predict, or even to report, with any accuracy.

1). The globe isn’t warming at all –we’re measuring it wrong (siting, land use, dropouts, whatever).
Reading Chiefio’s article, he seems to be firmly in camp #1.

Close. I agree with the part after the “-” but can’t really see a way to say there is “no warming” flat out.
So far all I can say is “no warming in the USA and some other places” with a modest error band, widening in the distant past. Some other continents also show little or no warming, or a bit of cooling, but the error bands are much wider. A couple of continents show some warming, but is it real or measurement error? (Like New Zealand where we can tease out a recent bit of warming trend, that is exactly what would be expected from putting all your thermometers at airports… so is it “real”? )
So I mostly sit in #0, with occasional bouts of #1 that I fight off with a dose of “the LIA exit says we’re warming from that point”. Then I end up with:
http://chiefio.wordpress.com/2009/10/09/how-long-is-a-long-temperature-history/
where I settle on: It all depends on where you put your starting point.
Just like all fractal things, patterns repeat.
So we are warming, and cooling, each day.
And warming and cooling with each storm wave.
And warming and cooling each seasonal swing.
And warming and cooling with each El Nino / La Nina cycle.
And warming and cooling with each PDO flip (I’m on the Pacific, for folks elsewhere, substitute AMO, AO, etc.)
And warming and cooling as axial tilt shifts.
And warming and cooling as the solar “constant” changes ;-0
And warming and cooling as the ice ages come and go.
And it’s a fool’s errand to try and say if you “are warming” or “are cooling”.
The answer is “yes” to both, and at all times.
It’s a very satisfying thing, in a Zen like Mu! sort of way:
“The question is ill formed”
But only a few Zen Heads really like that answer, so I usually keep it to myself… and just go with the flow of “it’s probably not warming, much” just because it avoids a long philosophy discussion and folks looking at you strangely when you say “Zen” is the answer 😉
( But it really is the answer, and O really is the form of it… the empty vessel… I do not know… )

March 8, 2010 8:37 pm

Re: Tim Clark (Mar 8 14:11),
I got mixed up with the colders – yes, removing a colder station will make the cell average warmer, as you’d expect. The point is that the average to refer to is that of the grid, not the globe.

Pamela Gray
March 8, 2010 8:39 pm

Here is one example of how station drops might affect the anomaly over time. Anomaly changes differ depending on climate zone and GPS address, just as deserts are more sensitive to atmospheric treatments than forests are. With the same degree of “treatment”, i.e. a CO2 increase, you may have therms at one altitude bouncing up while therms at another altitude stay the same. Because cities tend to be near waterways, which tend to be at lower altitude, dropping stations at higher altitudes (which will have different anomaly responses to CO2 forcing) will affect your overall anomaly, bleeding out the therms more robust to warming forcing and leaving the overly sensitive lower-altitude therms to run with the ball.
Whether this is the case or not needs examination.
