The fact that my work is mentioned by NCDC at all is a small miracle, even if it is “muted”, as Roger says. I’m pleased to get a mention, and I express my thanks to Matt Menne for doing so. Unfortunately, they ducked the issue of the long-term contribution of site bias and UHI to the surface record. But we’ll address that later. – Anthony

From Roger Pielke Sr.’s Climate Science website
There is a new paper on the latest version of the United States Historical Climatology Network (USHCN). These data are used to monitor and report on surface air temperature trends in the United States. The paper is
Matthew J. Menne, Claude N. Williams, Jr. and Russell S. Vose, 2009: The United States Historical Climatology Network Monthly Temperature Data – Version 2. (PDF) Bulletin of the American Meteorological Society (in press). [URL for a copy of the paper added; thanks and h/t to Steve McIntyre and RomanM at Climate Audit].
The abstract reads:
“In support of climate monitoring and assessments, NOAA’s National Climatic Data Center has developed an improved version of the U.S. Historical Climatology Network temperature dataset (U.S. HCN version 2). In this paper, the U.S. HCN version 2 temperature data are described in detail, with a focus on the quality-assured data sources and the systematic bias adjustments. The bias adjustments are discussed in the context of their impact on U.S. temperature trends from 1895-2007 and in terms of the differences between version 2 and its widely used predecessor (now referred to as U.S. HCN version 1). Evidence suggests that the collective impact of changes in observation practice at U.S. HCN stations is systematic and of the same order of magnitude as the background climate signal. For this reason, bias adjustments are essential to reducing the uncertainty in U.S. climate trends. The largest biases in the HCN are shown to be associated with changes to the time of observation and with the widespread changeover from liquid-in-glass thermometers to the maximum minimum temperature sensor (MMTS). With respect to version 1, version 2 trends in maximum temperatures are similar while minimum temperature trends are somewhat smaller because of an apparent over correction in version 1 for the MMTS instrument change, and because of the systematic impact of undocumented station changes, which were not addressed in version 1.”
I was invited to review this paper, and to the authors’ credit, they did make some adjustments in their revision. Unfortunately, however, they did not adequately discuss a number of remaining bias and uncertainty issues with the U.S. HCN version 2 data.
The United States Historical Climatology Network Monthly Temperature Data – Version 2 still contains significant biases.
My second review of their paper is reproduced below.
Review by Roger A. Pielke Sr. of Menne et al. 2009
Dear Melissa and Chet
I have reviewed the responses to the reviews of the Menne et al. paper. While the authors are clearly excellent scientists and have provided further useful information, they unfortunately still did not adequately respond to several of the issues that were raised. I have summarized these issues below:
1. With respect to the degree of uncertainty associated with the homogenization procedure, they misunderstood the comment. The issue is that each adjustment [time-of-observation bias, change of instrument] is created from a regression relationship, and each of these regressions has an associated r-squared and a standard deviation that arise from the adjustment regression evaluation. These values (r-squared and standard deviation) need to be provided for each formula that they use.
Their statement that
“Based on this assessment, the uncertainty in the U.S. average temperature anomaly in the homogenized (version 2) dataset is small for any given year but contributes to an uncertainty to the trends of about (0.004°C)”
is not the correct (complete) uncertainty analysis.
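To be explicit about what is being requested, the sketch below (in Python, with synthetic numbers, since the actual adjustment regression data are not provided in the manuscript) shows the two diagnostics that should be reported alongside each adjustment formula:

```python
import numpy as np

# Hypothetical adjustment regression: the predictor and "observed" bias values
# are made up for illustration; they stand in for whatever data underlie the
# TOB and instrument-change adjustment equations.
rng = np.random.default_rng(0)
predictor = rng.uniform(0, 10, 100)                 # e.g. a station covariate
bias = 0.2 * predictor + rng.normal(0, 0.3, 100)    # "observed" adjustment values

slope, intercept = np.polyfit(predictor, bias, 1)   # ordinary least squares fit
residuals = bias - (slope * predictor + intercept)

ss_res = np.sum(residuals**2)
ss_tot = np.sum((bias - bias.mean())**2)
r_squared = 1.0 - ss_res / ss_tot                   # explained variance
resid_sd = np.sqrt(ss_res / (len(bias) - 2))        # standard deviation of the residuals

print(f"r-squared = {r_squared:.2f}, residual standard deviation = {resid_sd:.2f} degC")
```

Reporting these two numbers for each adjustment equation is the minimum needed for a reader to judge how much scatter the adjustment process itself introduces.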
2.
i) With respect to their recognition of the pivotal work of Anthony Watts: while they are clear on this contribution in their response, i.e.
“Nevertheless, we have now also added a citation acknowledging the work of Anthony Watts whose web site is mentioned by the reviewer. Note that we have met personally with Mr. Watts to discuss our homogenization approach and his considerable efforts in documenting the siting characteristics of the HCN are to be commended. Moreover, it would seem that the impetus for modernizing the HCN has come largely as a reaction to his work. “
the text itself is much more muted on this. The above text should, appropriately, be added to the paper.
Also, the authors bypassed the need to provide the existing photographic documentation (as a URL) for each site used in their study. They can clearly link in their paper to the website
http://www.surfacestations.org/ for this documentation. Ignoring this source of information in their paper is inappropriate.
ii) On the authors’ response that
“Moreover, it does not necessarily follow that poorly sited stations will experience trends that disagree with well-sited stations simply as a function of microclimate differences, especially during intervals in which both sites are stable. Conversely, the trends between two well-sited stations may differ because of minor changes to the local environment or even because of meso-scale changes to the environment of one or both stations..”
they are making an unsubstantiated assumption about the “stability” of well-sited and poorly sited stations. What documentation do they have that determines when “both sites are stable”? As has been clearly shown on Anthony Watts’ website, it is unlikely that any of the poorly sited locations have time-invariant microclimates.
Indeed, their claim that
“We have documented the impact of station changes in the HCN on calculations of U.S. temperature trends and argue that homogenized data are the only way to estimate the climate signal at the surface (which can be important in normals calculations etc) for the full historical record “
is not correct. Without photographs of each site (which now exist for many of them), they have not adequately documented each station.
iii) The authors are misunderstanding the significance of the Lin et al. paper. They state
“Moreover, the homogenized HCN minimum temperature data can be thought of as a fixed network (fixed in both location and height). Therefore, the mix of station heights can be viewed as constant throughout the period of record and therefore as providing estimates of a fixed sampling network albeit at 1.5 and 2m (not at the 9m for which differences in trends were found in Oklahoma). Therefore, these referenced papers do not add uncertainty to the HCN minimum temperature trends per se. “
First, as clearly documented on the Anthony Watts website, many of the observing sites are not at the same height above the ground (i.e. not at 1.5m or 2m). Thus, particularly for the minimum temperatures, which vary more with height near the ground, the height matters when patching all of the data together to create long-term temperature trends. Even more significant is that the trend will be different if the measurements are at different heights. For example, if there has been overall long-term warming in the lower atmosphere, the trend of the minimum temperature at 2m will be significantly larger than when it is measured at 4m (or another higher level). Combining minimum temperature trends from different measurement heights will result in an overstatement of the actual warming.
The authors need to discuss this issue. Preliminary analyses have suggested that this warm bias can overstate the reported warming trend by tenths of a degree C.
iv) While the authors seek to exclude themselves from attribution, i.e.
“Our goal is not to attribute the cause of temperature trends in the U.S. HCN, but to produce time series that are more generally free of artificial bias.”
they need to include a discussion of land use/land cover change effects on long-term temperature trends, a subject that now has a rich literature. The authors are correct that there are biases associated with non-climatic and microclimate effects in the immediate vicinity of the observation sites (which they refer to as “artificial bias”), as well as real effects such as local and regional landscape change. However, they need to discuss this issue more completely than they do in their paper, since, as I am sure the Editors are aware, these data are being used to promote the perspective that the radiative effect of the well-mixed greenhouse gases (i.e. “global warming”) is the predominant reason for the positive temperature trends in the USA.
v) The decision not to use a complementary data analysis (the NARR) because it only begins in 1979 is not appropriate. The more recent years in the HCN analyses would provide an effective cross-comparison. Also, even if the NARR does not separate maximum and minimum temperatures, the comparison could still be completed using the mean temperature trends.
Their statement that
“Given these complications, we argue that a general comparison of the HCN trends to one of the reanalysis products is inappropriate for this manuscript (which is already long by BAMS standards)”
therefore, is not supportable as part of any assessment of the robustness of the trends that they compute. The length issue is clearly not a justifiable reason to exclude this analysis.
In summary, the authors should include the following:
1. In their section “Bias caused by changes to the time of observation”
the regression relationship used in
“…the predictive skill of the Karl et al. (1986) approach to estimating the TOB was confirmed using hourly data from 500 stations over the period 1965-2001 (whereas the approach was originally developed using data from 79 stations over the period 1957-64)”
should be explicitly included, along with the value of explained variance (i.e. the r-squared value) and the standard deviation, rather than referring the reader to an earlier paper. This uncertainty in the adjustment process has been neglected in presenting the trend values with their +/- values.
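One way to see how this neglected adjustment uncertainty feeds into the reported +/- values on the trends is a simple Monte Carlo of the kind sketched below. The series, the changepoint date, and the residual standard deviation are all synthetic and purely illustrative; this is not a reproduction of the authors’ method, only of the propagation step that the manuscript omits:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1895, 2008)
n = len(years)

# Synthetic "homogenized" annual series with a modest underlying trend
series = 0.006 * (years - years[0]) + rng.normal(0, 0.2, n)

# Suppose one adjusted changepoint (e.g. an instrument changeover) whose step
# size came from a regression with a residual standard deviation of 0.1 degC.
change_year = 1985
step_sd = 0.1          # hypothetical residual SD of the adjustment regression

trends = []
for _ in range(2000):
    step_error = rng.normal(0, step_sd)                       # plausible error in the step size
    perturbed = series + np.where(years >= change_year, step_error, 0.0)
    trends.append(np.polyfit(years, perturbed, 1)[0] * 10)    # degC per decade

print(f"trend spread from this single adjustment: +/- {np.std(trends):.3f} degC/decade")
```

Even with these modest illustrative numbers, the spread contributed by a single adjustment is comparable to or larger than the 0.004 figure quoted by the authors, which is why the regression diagnostics must be reported and propagated.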
2. In their section “Bias associated with other changes in observation practice”
the same need exists to present the regression relationship that is used to adjust the temperatures for instrument changes; i.e. from
“Quayle et al. (1991) concluded that this transition led to an average drop in maximum temperatures of about 0.4°C and to an average rise in minimum temperatures of 0.3°C for sites with no coincident station relocation.”
What is the r-squared and the standard deviation from which these “averages” were obtained?
3. With respect to “Bias associated with urbanization and nonstandard siting”,
as discussed earlier in this e-mail, the link to the photographs for each site needs to be included, and the citation of Anthony Watts’ work on this subject should be more appropriately highlighted.
On the statement that “In contrast, no specific urban correction is applied in HCN version 2”: this conclusion conflicts with quite a number of urban-rural studies. They assume “that adjustments for undocumented changepoints in version 2 appear to account for much of the changes addressed by the Karl et al. (1988) UHI correction used in version 1.”
The use of text concluding that these adjustments “appear” to account for the urban correction of Karl et al. (1988) indicates some uneasiness by the authors on this issue. They need more text as to why they assume their adjustment can accommodate such urban effects. Moreover, the urban correction in Karl et al. is also based on a regression assessment with an explained variance and standard deviation; the same data Karl used should be applied to ascertain whether the new “undocumented changepoint adjustment” can reproduce the Karl et al. results.
The authors clearly recognize this limitation also in their paragraph that starts with
“It is important to note, however, that while the pairwise algorithm uses a trend identification process to discriminate between gradual and sudden changes, trend inhomogeneities in the HCN are not actually removed with a trend adjustment…”
and ends with
“This makes it difficult to robustly identify the true interval of a trend inhomogeneity (Menne and Williams 2008).”
Yet, despite this clearly serious limitation on the ability to quantify long-term temperature trends to tenths of a degree C with uncertainties, they present such precise quantitative trends; e.g.
“0.071° and 0.077°C dec⁻¹, respectively” (on page 15).
They also write that
“…there appears to be little evidence of a positive bias in HCN trends caused by the UHI or other local changes”
which ignores detailed local studies that clearly show positive temperature biases; e.g.
Brooks, Ashley Victoria. M.S., Purdue University, May, 2007. Assessment of the Spatiotemporal Impacts of Land Use Land Cover Change on the Historical Climate Network Temperature Trends in Indiana.
Christy, J.R., W.B. Norris, K. Redmond, and K.P. Gallo, 2006, Methodology and results of calculating Central California surface temperature trends: Evidence of human-induced climate change?, J. Climate, 19, 548-563.
Hale, R. C., K. P. Gallo, and T. R. Loveland (2008), Influences of specific land use/land cover conversions on climatological normals of near-surface temperature, J. Geophys. Res., 113, D14113, doi:10.1029/2007JD009548.
4. On the claim that
“However, from a climate change perspective, the primary concern is not so much the absolute measurement bias of a particular site, but rather the changes in that bias over time, which the TOB and pairwise adjustments effectively address (Vose et al. 2003; Menne and Williams 2008) subject to the sensitivity of the changepoint tests themselves.”
this is a circular argument. While I agree it is the changes in bias over time that matter most, without an independent assessment, there is no way for the authors to objectively conclude that their adjustment procedure captures these changes of bias in time.
Their statement that
“Instead, the impact of station changes and non-standard instrument exposure on temperature trends must be determined via a systematic evaluation of the observations themselves (Peterson 2006).”
is fundamentally incomplete. The impact “of station changes and non-standard instrument exposure on temperature trends” must be assessed from the actual station location and its changes over time! To rely on the observations alone to extract this information is clearly circular reasoning.
As a result of these issues, their section “Temperature trends in U.S. HCN” overstates the confidence that should be given to the quantitative values of the trends and to the statistical uncertainty in those values.
If this paper is published, the issues raised in this review need to be more objectively and completely presented. It should not be accepted until they do this.
I would be glad to provide further elaboration on the subjects I have presented in this review of their revised paper, if requested.
Best Regards
Roger A. Pielke Sr.
E.M.Smith (20:45:12) :
. . . So now we’re calculating things out to 4/1000 C and asserting that is the most uncertainty possible when the raw data are in full degrees F.
Sigh. Does no one “get it” that you cannot take a set of full degree F single samples (for a single location at a single time – no oversampling), average them all together, and get any more precision than full degrees F? . . .
The “Monthly Average Temperature” is not a physical thing that can be oversampled (measured several times to create better precision than the measuring device itself supports). It is a mathematical creation and so is limited to the original precision of the raw data and can never have a higher precision than that.
Further, any calculations that use that monthly average will also be limited to whole-degree F precision. That applies to the calculated anomalies. But they ought to have less precision than whole degrees F due to the large number of calculations done to get to that anomaly result. Error accumulates.
Hi E.M.
Recalling from my geodesy days, accuracy and precision are two different things. Precision for any instrument is half of its smallest reading increment. The two stations that I visited had MMTSs reading down to 0.1°, so they gave a precision of 0.05°. Accuracy is a whole ‘nuther thing, depending on how well (or how recently) they were calibrated, and they could be degrees off and not know it.
You can get accuracy greater than your precision if you take multiple measurements of the same thing, with different observers and different instruments. But with the station temps, while we’re getting 60 readings a month, those are all of different things – the temps for each different day – and all with one instrument, so we know accuracy cannot be better than 0.05°, and is probably worse.
The Monthly Average Temperature is indeed a creation, meaningful only for trends, and their 0.004° uncertainty is at least an order of magnitude too optimistic. The Stevenson screen thermometers read only to full degrees, plus they are subject to observer bias, so that also kicks in to degrade the overall accuracy.
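To put rough numbers on that distinction, here is a toy sketch (made-up values, and it treats the rounding of each daily reading as independent noise, which is itself part of the argument): a random reading error shrinks as you average a month of observations, but a siting or calibration bias does not shrink at all.

```python
import numpy as np

rng = np.random.default_rng(2)
true_daily = rng.normal(60.0, 8.0, 31)   # "true" daily mean temps for one month, deg F

rounded = np.round(true_daily)           # each reading recorded to the nearest whole degree
site_bias = 1.5                          # hypothetical fixed siting/calibration bias, deg F
observed = rounded + site_bias

rounding_err = rounded.mean() - true_daily.mean()    # shrinks roughly as 1/sqrt(N)
total_err = observed.mean() - true_daily.mean()      # stays close to the 1.5 deg bias

print(f"monthly-mean error from rounding alone: {rounding_err:+.2f} F")
print(f"monthly-mean error with the siting bias: {total_err:+.2f} F")
```

However you feel about the first number, the second is the point: averaging does nothing to a bias that is the same every day, and anomalies only help if that bias never changes.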
It’s amazing that a couple tenths of a degree anomaly can drive species to extinction, while they survive day/night differences of 10 to 20 degrees.
As an aside, Glenn Beck mentioned Anthony’s report on the air the other day.
What I would like to see, and it should be a formal part of the peer-review process, is publication of the reviewers’ comments together with the paper’s authors’ responses as to how they have addressed the review comments. The responses would include full justification for why any comments have been rejected. It appears from the above that many of the review comments have been ignored without justification. Ideally, both the authors and reviewers should ultimately sign to accept that the comments have been addressed satisfactorily or that there remain points of disagreement. These should be appended to the paper so that anybody reading the paper, or using information in the paper, understands the points of disagreement.
“What I would like to see, and it should be a formal part of the peer-review process, is publication of the reviewers’ comments together with the paper’s authors responses”
That’ll happen once journals move online, where space is limitless and costless.
My friend Bratby (23:24:59) said:
“What I would like to see, and it should be a formal part of the peer-review process, is publication of the reviewers’ comments together with the paper’s authors’ responses as to how they have addressed the review comments. The responses would include full justification for why any comments have been rejected.”
I hate to disagree with someone who has been kind enough to comment on my modest excuse for a blog, but I have to disagree with Mr Bratby on this.
Peer review does not give the “peer” a right of veto. He or she can raise objections or suggest improvements but it is for the authors to decide what changes, if any, they make as a result of the reviewer’s comments. Even if the reviewer is correct according to everyone other than the authors, they are entitled to stick to their guns and say “this is our take on the issue, like it or lump it”.
Publishing reviewers’ comments and the responses to them would undermine one of the central roles of academic papers on any subject, namely to expose a new thought and leave the reader to form his own view of whether it is sound or unsound. Others will respond if they disagree; that’s what academic debate is all about. An honest author confronted by a comment “don’t you think you’re wrong about X because of Y” will acknowledge that the same argument was raised by Professor Whoever in his pre-publication review and will either change his position or maintain it and debate the point.
I very much appreciate the WUWT site and have even referred the site to a UK journalist, which resulted in a WUWT-inspired newspaper article. However, there are, quite naturally, times when I disagree with the WUWT argument (e.g. significant solar cycle influence). I’m also less than convinced about the surface station survey. I just can’t see where it’s heading, and I think a number of posters are confused about what it might achieve.
First, let’s be clear that none of the temperature data organisations (GISS, Hadley, RSS & UAH) are claiming that they have an accurate and precise figure for the average temperature of the surface or troposphere. What they are saying is that, within certain error bounds, they are able to provide a figure for how much the temperature has changed. As an analogy, consider the example of a huge lake. We might not have a clue about the depth and volume of the lake but, with enough samples, we could estimate the change in the water level to a reasonable level of accuracy. In other words, we could provide an anomaly relative to a time period of our choosing.
Of course, it’s possible that temperature measurements could be influenced by non-climatic factors, e.g. urban heat. But for this to affect the overall trend, the trend in urban heat would need to change. If urban heat at a station is contributing 2 deg warming in 1970, and is also contributing 2 deg in 2008, then urban heat should have no effect on the 1970-2008 station trend.
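To make that arithmetic concrete, here is a quick sketch with made-up numbers. A constant urban offset leaves the trend untouched; the trend is only inflated if the urban contribution itself grows over the period:

```python
import numpy as np

years = np.arange(1970, 2009)
base = 0.02 * (years - 1970)             # hypothetical underlying warming of 0.02 deg per year

constant_uhi = base + 2.0                                  # fixed 2 deg urban offset
growing_uhi = base + np.linspace(0.0, 2.0, len(years))     # urban offset grows from 0 to 2 deg

def trend(series):
    # OLS slope converted to degrees per decade
    return np.polyfit(years, series, 1)[0] * 10

print(f"no urban heat:    {trend(base):.2f} deg/decade")
print(f"constant offset:  {trend(constant_uhi):.2f} deg/decade")   # identical to the above
print(f"growing offset:   {trend(growing_uhi):.2f} deg/decade")    # inflated
```

The question, then, is whether the urban (and micro-site) contribution has actually been constant over the period of interest.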
So is there a UH trend in the US record? Maybe. It’s perfectly possible that a UH signal is present if you go back far enough (e.g. 1900), but I’m not convinced UH is significant over the past few decades. These are the temperature trends for the past 30 years (1979-2008) for the contiguous 48 states.
GISS +0.25 deg per decade
UAH +0.26 deg per decade
The fact that the warming rate at the surface is so similar to the warming in the lower atmosphere means that it is not going to be easy to convince anyone that urban heat had a significant impact on the US temperature record. What’s more, because of the land/ocean ratio and the fact that the US is only 2% of the earth’s surface, urban heat will have even less impact on the global record.
Anthony’s survey may produce some interesting results, but it’s not going to somehow demolish the global warming case.
Anthony, great post. BTW did you get to read any of the other reviewers comments?
“If urban heat at a station is contributing 2 deg warming in 1970, and is also contributing 2 deg in 2008, then urban heat should have no effect on the 1970-2008 station trend.” – John Finn
This assumes that all heat islands are created equal and that city growth has not sprawled or intensified in 38 years.
It also assumes that micro-climates have not changed in 38 years, i.e. that asphalt parking lots were not built, or acres of shingled roofs added, within the vicinity of a surface station.
A station that I am studying is located at a sewage treatment plant that underwent a major expansion in the 1990s. It is also located downwind from the county fairgrounds, which were paved two years ago.
The apologists think they can isolate a mistake in this system to 3 decimal places?
Pure bunkonium.
And it is increasingly difficult to grant people who write reports like this good faith in their motives.
To John Finn (02:47:32): Nothing new will demolish the global warming case. It has already been scientifically demolished many times over. And UHIs are the least of the egregious errors. At present AGW in its many guises continues mainly as a political club, but also as a religion for the “masses” of followers, however many remain devoted. It is difficult to understand why you do not see the need for accurate, first-world, scientifically valid gathering of the temperature data on which many government policies are based.
As I understand it, Anthony Watts’ final version of the Surfacestations report will find those stations that (might) have remained accurate over the historical time period this paper addresses. Then we might have the data to create a report like “The U.S. Historical Climatology Network Monthly Temperature Data – Version 2.” Right now we do not, and that stinks. Thanks to Dr. Pielke, Sr. for the excellent peer review. The authors, Menne, Williams, and Vose, are very fortunate Anthony gave them so much input. He and his crew are the ones with the expertise that we need in our government.
I remember the trial of O.J. Simpson. That was the first time that I got a glimpse of a major city (Los Angeles) in the U.S. of A. functioning like the stereotype of a third-world country. What a shock. And the shocks keep coming. I am so grateful for Anthony’s, Dr. Pielke’s, and others’ efforts.
John Finn,
EXACTLY. None of this is going anywhere, at least as far as disproving that CO2 is currently driving climate and that the temperature direction is up. It’s all smoke and mirrors; the outcome will still be the same because the CO2 effect is indisputably real and, perhaps most pertinently of all, this is a sampling of an area much less than one twentieth of the global surface.
Won’t stop them though.
Hunter,
Try reading John’s post above – if you don’t even realise the aim is to establish an anomaly value and not an absolute value, you’ve got a lot of reading to do.
John Finn said,
Of course, it’s possible that temperature measurements could be influenced by non-climatic factors, e.g. urban heat. But for this to affect the overall trend, the trend in urban heat would need to change. If urban heat at a station is contributing 2 deg warming in 1970, and is also contributing 2 deg in 2008, then urban heat should have no effect on the 1970-2008 station trend.
So you are suggesting that all urban development stopped in 1970? Try China and UHI.
http://icecap.us/images/uploads/URBANIZATION_IN_THE_TEMPERATURE_DATA_BASES.pdf
Did Pielke find anything correct in the document? That might have been a shorter write-up…
Thank you, Anthony and Roger, for your continued efforts. I am not optimistic, but I hope that actual science will prevail in what feels like a Salem witch hunt and accompanying trials.
Matt,
If the anomaly value is far below the MOE, it is meaningless.
I would suggest that GIGO is one of the main underpinnings of AGW.
How can you defend such garbage with any seriousness?
Even more significant is that the trend will be different if the measurements are at different heights.
And if station height affects trends, one can reasonably presume that far more egregious violations may do so as well.
Yilmaz (2008). Let us not forget the lessons of Yilmaz . . .
This is a typical example of a dog turning in circles, biting its own tail.
Is he channeling us, Anthony?
The metadata on such moves are sparse, confusing and essentially unverifiable.
And just plain wrong.
I calculate the total temperature change over this same time period at 0.53C, or just 0.1C over 9 decades excluding the adjustments.
For US data, weighting each station equally, raw trends show +0.14C. For USHCN1 TOBS, it’s +0.31C.
Looking at your map of surveyed stations, while there are very few of these, they do seem more evenly distributed and greater in number than the stations used in a recent Antarctic temperature reconstruction attempt.
Maybe so. But they are not evenly distributed by any means. They are disproportionately concentrated in naturally warming areas.
the aim is to establish an anomaly value and not an absolute value
Yes. A problem arises, however, when offset data becomes conflated with trend, which, for example, is what happened in the case of Lampasas, TX, or Chama, NM.
Matt,
More importantly, no one is disputing that CO2 has an effect. What has been falsified is the IPCC/Hansen/Gore hype that the effect of the change in CO2 is going to be apocalyptic.
Until AGW believers can learn to distinguish the facts of CO2 from their faith-based beliefs about what CO2 is doing, believers will continue to believe untrue things and push for unwise policies. And they will continue to embarrass themselves defending Mann, Hansen, Lovelock, Gore, the IPCC, etc.
The fact that the warming rate at the surface is so similar to the warming in the lower atmosphere means that it is not going to be easy to convince anyone that urban heat had a significant impact on the US temperature record.
However, it is a basic premise of AGW theory that, given a warming trend, lower troposphere warms at a faster rate than surface. (FWIW, the issue is not mainly UHI, but microsite and station moves.)
None of this is going anywhere, at least as far as disproving that CO2 is currently driving climate and that the temperature direction is up.
But surely this is not about CO2, per se, in the first place. It is about the 100+ year history of US climate, most of which occurred before the era of “Big CO2”.
The problem is sheer laziness. It would actually involve work to manually look at each of the site histories, analyze siting differences for each one, determine the appropriate “fix” for it, and generate a data set. Especially since you might actually have to do some “science” in the process. Like, I don’t know, maybe set up some test stations which match conditions of the actual stations to verify that things work the way you think they do and that your fixes really do correct for problems.
Nah, it’s so much more fun and a lot less time consuming to play computer games. But it isn’t science.
I’m confused. Mr. Pielke begins his review with “they are clearly excellent scientists” and then takes their work apart with criticisms.
When you are about to hang a man it never hurts to be polite.
Matt Bennett:
You have it exactly backwards, and you are the one using smoke and mirrors.
No one has to “disprove” that CO2 is driving the climate. It is preposterous to believe that a very minor trace gas is driving much of anything.
It is not up to normal skeptics to “disprove” anything. It is up to the purveyors of the CO2=AGW silliness to prove their case. They have utterly failed to do so.
If increases in CO2 caused global warming, the globe would be getting warmer. Instead, despite a steady rise in CO2, the planet has been cooling for most of the last decade. Therefore, any insignificant warming due to CO2 is inconsequential, and can be completely disregarded.
To E.M. Smith and Mike McMillan, re precision and accuracy:
I marvel at the contrived level of precision applied to the temperature record. I am a cooperative observer at a station that is not part of the USHCN, but it has identical equipment, and is actually sited pretty well.
Mike, when you saw MMTS readings to 0.1 deg precision, what you saw was a false precision of the display. The accuracy of the Nimbus instrument is actually 0.3 deg, by its published specs. Anyhow, we round off to one degree when filling in the paper record, at the direction of our local NWS supervisor. I leave it to you guys to figure out what .3, .1, and rounding do to the error rate.
The Nimbus MMTS has a feature that allows retrieval of previous days’ highs and lows, for stations that are not manned continually, like ours. That feature is problematic. The retrieval method is non-intuitive, and the purported stored readings seldom pass a sanity test. I wonder how much error creeps in because of this.
Of course, before the MMTS came along, it was fluid-filled glass thermometers, no two of which ever agreed closer than two degrees F when kept side by side in the same enclosure.
Dan
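Taking Dan up on that arithmetic: a rough back-of-envelope error budget for a single reading, treating the three contributions as independent random errors and treating the published 0.3-degree spec as a one-sigma figure (both generous assumptions):

```python
import numpy as np

instrument_acc = 0.3                 # published Nimbus accuracy spec, as Dan quotes it
display_quant = 0.1 / np.sqrt(12)    # quantization from the 0.1-degree display
form_rounding = 1.0 / np.sqrt(12)    # rounding to whole degrees on the paper form

per_reading_sd = np.sqrt(instrument_acc**2 + display_quant**2 + form_rounding**2)
print(f"rough per-reading uncertainty: ~{per_reading_sd:.2f} deg")
# roughly 0.4 deg per reading, before any siting or calibration bias is considered,
# and a bias does not shrink no matter how many readings are averaged.
```

Whatever one assumes about how those errors average out over a month or a network, this is the raw material behind the thousandths-of-a-degree precision being quoted.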
Anthony, I live in south Jersey, 30 miles S. of Philadelphia. I’m told that Philly’s official temps are taken from the Philadelphia airport. The difference in temps is sometimes 20 degrees, especially during winter. My question is, where is the location of the temp station at the airport? It sure does seem to be in a UHI location.
It does seem likely there is a systematic error, based upon the effort needed to install MMTS cable. The amount of effort increases with distance, but the increase is more than linear, as muscle weakness and complications grow along with increasing distance. So MMTS sensors tend to be close to the building which contains their operator.
The sensor height is an interesting issue when trying to deal with fractions of degrees.
The problem is sheer laziness. It would actually involve work to manually look at each of the site histories, analyze siting differences for each one, determine the appropriate “fix” for it, and generate a data set.
It would be pretty much pointless. The site histories are wrong. There are station moves on record where no moves occurred. There are station moves that are not on record. Coordinates for stations are imprecise to the point of uselessness, and this gets (much) worse the further back one goes.
Even if one reconstructed all of the site histories from the B-91 forms, there would be many gaps and no record whatever of changes in local environment.