
by Anthony Watts
There has been a lot of buzz about the Menne et al 2010 paper “On the reliability of the U.S. Surface Temperature Record,” which is NCDC’s response to the surfacestations.org project. One paid blogger even erroneously trumpeted the “death of UHI,” which is humorous, because the project was a study of station siting issues, not UHI. Anybody who owns a car with a dashboard thermometer and commutes from country to city can tell you about UHI.
There are also claims that this paper is a “death blow” to the surfacestations project. I’m sure in some circles they believe that to be true. However, it is very important to point out that the Menne et al 2010 paper was based on an early version of the surfacestations.org data, with 43% of the network surveyed. The dataset Dr. Menne used was not quality controlled, contained errors in both station identification and rating, and was never intended for analysis. I had posted it so that volunteers could keep track of which stations had been surveyed and avoid duplicating effort. When I discovered people were doing ad hoc analysis with it, I stopped updating it.
Our current dataset at 87% of the USHCN surveyed has been quality controlled.
There’s quite a backstory to all this.
In the summer, Dr. Menne had invited me to co-author with him, and our team reciprocated with an offer for him to join us as well. We had an agreement in principle for participation, but when I asked for a formal letter of invitation, they refused, which seemed very odd to me. The only things they would provide were a receipt for my new data (at 80%) and an offer to “look into” archiving my station photographs with their existing database. They made it pretty clear that I’d have no significant role other than that of data provider. We also invited Dr. Menne to participate in our paper, but he declined.
The appearance of the Menne et al 2010 paper was a bit of a surprise, since I had been offered collaboration by NCDC’s director in the fall. In a typed letter dated 9/22/09, Tom Karl wrote to me:
“We at NOAA/NCDC seek a way forward to cooperate with you, and are interested in joint scientific inquiry. When more or better information is available, we will reanalyze and compare and contrast the results.”
“If working together cooperatively is of interest to you, please let us know.”
I discussed it with Dr. Pielke Sr. and the rest of the team, which took some time since not all were available due to travel and other obligations. We decided to reply to NCDC and accept the collaboration offer.
On November 10th, 2009, I sent a reply letter via Federal Express to Mr. Karl, advising him that we would like to collaborate, and offering to include NCDC in our paper. In that letter I also reiterated my concerns about use of the preliminary surfacestations data (43% surveyed) that they had, and spelled out very specific reasons why I didn’t think the results would be representative or useful.
We all waited, but there was no reply from NCDC to our acceptance of the collaboration offer Mr. Karl had made in his letter. Not even a “thank you, but no.”
Then we discovered that Dr. Menne’s group had submitted a paper to JGR Atmospheres using my preliminary data, and that it was in press. This was a shock to me, since I had been told it is normal procedure for the person who gathered the primary data on which a paper is based to have some input in the journal’s review process.
NCDC uses data from one of the largest volunteer organizations in the world, the NOAA Cooperative Observer Network. Yet NCDC director Karl, by not bothering to reply to our letter about an offer he himself initiated, and the journal, by giving me no opportunity to take part in the review process, extended what Dr. Roger Pielke Sr. calls “professional discourtesy” to my volunteers and my team’s work. See his weblog on the subject:
Professional Discourtesy By The National Climate Data Center On The Menne Et Al 2010 paper
I will point out that Dr. Menne thanked me and the surfacestations volunteers in the Menne et al 2010 paper and, I hear through word of mouth, in a recent oral presentation as well. For that I thank him. He has been gracious in his communications with me, but I think he also has to answer to the organization he works for, and that limited his ability to meet some of my requests, like a simple letter of invitation.
Political issues aside, the appearance of the Menne et al 2010 paper stops neither the surfacestations project nor the work I’m doing with the Pielke research group to produce a peer-reviewed paper of our own. It does illustrate, though, that some people have been in a rush to get results. Texas State Climatologist John Nielsen-Gammon suggested way back at 33% of the network surveyed that we had a statistically large enough sample to produce an analysis. I begged to differ then, at 43%, and yes, even at 70% when I wrote my booklet “Is the U.S. Surface Temperature Record Reliable?”, which contained no temperature analysis, only a census of stations by rating.
The problem is known as the “low-hanging fruit” problem. You see, this project was done on an ad hoc basis, with no specific roadmap for which stations to acquire. That was necessitated by the social networking (blogging) Dr. Pielke and I employed early in the project to recruit volunteers. What we ended up with was a lumpy, poorly spatially distributed dataset, because early volunteers would survey the stations closest to them, often near or within cities.
The urban stations were well represented in the early dataset, but the rural ones, where we believed the best siting existed, were poorly represented. So naturally, any sort of study done early on, even with a “significant sample size,” would be biased towards urban stations. We also had a distribution problem within CONUS, with much of the Great Plains and upper Midwest not well represented.
This is why I’ve continued to collect what some might consider an unusually large sample size, now at 87%. We’ve learned that there are so few well sited stations that the ones meeting the CRN1/CRN2 criteria (or NOAA’s 100-foot rule for COOPs) are just 10% of the whole network. See our current census:

When you have such a small percentage of well sited stations, it is obviously important to get a large sample size, which is exactly what I’ve done. Preliminary temperature analysis done by the Pielke group of the data at 87% surveyed looks quite a bit different now than it did at 43%.
It has been said by NCDC, in Menne et al “On the reliability of the U.S. surface temperature record” (in press) and in the June 2009 “Talking Points” memo related to “Is the U.S. Surface Temperature Record Reliable?”, that station siting errors do not matter. However, I believe the way NCDC conducted the analysis gives a false impression because of the homogenization process used. As many readers know, the FILNET algorithm blends a lot of the data together to infill missing data. This means temperature data from both well sited and poorly sited stations get combined during infilling. The theory is that it all averages out, but when you see that 90% of the USHCN network doesn’t meet even the old NOAA 100-foot rule for COOPs, you realize this may not be the case.
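To make the concern concrete, here is a minimal sketch of the infilling idea (my illustration with made-up station names and numbers, not NCDC’s actual FILNET code): a missing value at a well sited station is estimated from its neighbors, and if most of those neighbors are poorly sited, their bias flows straight into the infilled value.

```python
import numpy as np

# Hypothetical monthly means (deg F) for one month at five nearby stations;
# NaN marks the missing value at the well sited target station.
readings = {
    "CRN1_target":  np.nan,  # well sited, value missing
    "CRN4_city_a":  58.9,    # poorly sited neighbors
    "CRN4_city_b":  59.4,
    "CRN5_rooftop": 60.3,
    "CRN2_rural":   56.2,    # the lone well sited neighbor
}

donors = np.array([v for v in readings.values() if not np.isnan(v)])
infilled = donors.mean()  # a plain neighbor average stands in for FILNET's estimate
print(f"infilled value: {infilled:.1f} F")
# Three of the four donors are poorly sited, so the estimate (about 58.7 F)
# lands well above the one well sited neighbor (56.2 F).
```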
Here’s a way to visualize the homogenization/FILNET process. Think of it like measuring water pollution. Here’s a simple visual table of CRN station quality ratings and what they might look like as water pollution turbidity levels, rated as 1 to 5 from best to worst turbidity:





In homogenization, the data are weighted against nearby neighbors within a radius. So a station that starts out as a “1” data-wise might end up polluted with the data of nearby stations and take on a new value, say a weighted “2.5”. Even single stations can affect many other stations in the GISS and NOAA data homogenization methods carried out on US surface temperature data here and here.

In the map above, applying a homogenization smoothing that weights nearby stations by distance, what would you imagine the turbidity values of the stations with question marks would be? And how close would those two values be, for the east coast station in question and the west coast station in question? Each would be pulled toward a smoothed central average based on the neighboring stations.
Essentially, in my opinion, NCDC is comparing homogenized data to homogenized data, and thus there is not likely to be any large difference between “good” and “bad” stations in that data. All the differences have been smoothed out by homogenization (pollution) from neighboring stations!
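Here is a minimal sketch of that “pollution” arithmetic (again my illustration, with hypothetical ratings and distances, not NCDC’s actual homogenization code), using simple inverse-distance weighting:

```python
import numpy as np

# Hypothetical turbidity-style ratings (1 = best ... 5 = worst) and
# distances (km) from a well sited target station rated "1".
target_rating      = 1.0
neighbor_ratings   = np.array([3.0, 4.0, 2.0, 5.0])
neighbor_distances = np.array([20.0, 35.0, 50.0, 60.0])

# Inverse-distance weights: closer neighbors pull harder.
w = 1.0 / neighbor_distances
w /= w.sum()

# Blend the target with its neighborhood, half-and-half for illustration.
blended = 0.5 * target_rating + 0.5 * np.dot(w, neighbor_ratings)
print(f"target starts at 1.0 and ends up near {blended:.1f}")
# The well sited "1" drifts toward the neighborhood average (about 2.2 here).
```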
The best way to compare the effect of siting between groups of stations is to use the “raw” data, before it has passed through the multitude of adjustments NCDC performs. However, NCDC is apparently using homogenized data. So instead of comparing apples to oranges (poorly sited vs. well sited stations), they are essentially comparing apples to apples (Granny Smith vs. Golden Delicious), between which there is little visible difference beyond a slight change of color.
We saw this demonstrated in the ghost-authored Talking Points memo issued by NCDC in June 2009, in this graph:

Referencing the above graph, Steve McIntyre suggested in his essay on the subject:
The red graphic for the “full data set” had, using the preferred terminology of climate science, a “remarkable similarity” to the NOAA 48 data set that I’d previously compared to the corresponding GISS data set here (which showed a strong trend of NOAA relative to GISS). Here’s a replot of that data – there are some key telltales evidencing that this has a common provenance to the red series in the Talking Points graphic.

When I looked at SHAP and FILNET adjustments a couple of years ago, one of my principal objections to these methods was that they adjusted “good” stations. After FILNET adjustment, stations looked a lot more similar than they did before. I’ll bet that the new USHCN adjustments have a similar effect and that the Talking Points memo compares adjusted versions of “good” stations to the overall average.
There are references in the new Menne et al 2010 paper to the new USHCN2 algorithm, and we’ve been told how it is supposed to be better. While it does catch undocumented station moves that USHCN1 did not, it still adjusts data at USHCN stations in odd ways, such as at this station in rural Wisconsin, and that is the crux of the problem.

Or this one in Lincoln, IL at the local NWS office where they took great effort to have it well sited.


Thanks to Mike McMillan for the graphs comparing USHCN1 and USHCN2 data.
Notice the clear tendency in the graphs comparing USHCN1 to USHCN2: the early record is cooled, while current levels are left near recently reported values or increased. The net result is either reduced cooling or enhanced warming not found in the raw data.
As for the Menne et al 2010 paper itself, I’m rather disturbed by their use of preliminary data at 43%, especially since I warned them that the dataset they had lifted from my website (placed there for volunteers to track what had been surveyed, never intended for analysis) had not been quality controlled at the time. Nor are there really enough good stations with sufficient spatial distribution at that sample size. They used it anyway and, amazingly, conducted their own secondary survey of those stations, comparing it to my non-quality-controlled data and implying that my 43% data wasn’t up to par. Well, of course it wasn’t! I told them so, and why. We had to resurvey and re-rate a number of stations from early in the project.
This came about only because it took many volunteers some time to learn how to properly identify stations. Even some small towns have two or three COOP stations nearby, and only one of them is “USHCN”. There’s no flag in the NCDC metadatabase that says “USHCN”; in fact, many volunteers were not even aware of their own station’s status. Nobody ever bothered to tell them. You’d think that if their stations were part of a special subset, somebody at NOAA/NCDC would notify the COOP volunteers so they would exercise a higher level of diligence.
If an independent station survey was important enough for NCDC to do now, to compare against my 43% data for their paper, why didn’t they just do one in the first place?
I have one final note of interest on the station data, specifically the issue of MMTS thermometers and their tendency to be sited closer to buildings due to cabling issues.
Menne et al 2010 mentioned a “counterintuitive” cooling trend in some portions of the data. Interestingly enough, former California State Climatologist James Goodridge did an independent analysis (I wasn’t involved in the data crunching; it was a sole effort on his part) of COOP stations in California that had gone through modernization, switching from Stevenson Screens with mercury liquid-in-glass (LIG) thermometers to MMTS electronic thermometers. He sifted through about 500 COOPs in California and chose stations with at least 60 years of uninterrupted data, because, as we know, a station move can cause all sorts of issues. He used the “raw” data from these stations as opposed to adjusted data.
He writes:
Hi Anthony,
I found 58 temperature stations in California with data for 1949 to 2008 and where the thermometers had been changed to MMTS and the earlier parts were liquid in glass. The average for the earlier part was 59.17°F and the MMTS fraction averaged 60.07°F.
Jim
A 0.9°F (0.5°C) warm offset due to modernization is significant, yet NCDC insists that the MMTS units test about 0.05°C cooler, and I believe they add that adjustment into the final data. Our experience shows the adjustment should go in exactly the opposite direction, and with a greater magnitude.
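For reference, here is the arithmetic behind that figure as a quick sketch (it uses only the two era averages from Jim’s note; a raw before/after difference, not a paired side-by-side test):

```python
lig_mean_f  = 59.17  # liquid-in-glass era average (deg F), from Jim's note
mmts_mean_f = 60.07  # MMTS era average (deg F)

offset_f = mmts_mean_f - lig_mean_f  # 0.90 deg F
offset_c = offset_f * 5.0 / 9.0      # 0.50 deg C
print(f"offset: {offset_f:.2f} F = {offset_c:.2f} C")
```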
I hope to have this California study published here on WUWT with Jim soon.
I realize all of this isn’t a complete rebuttal to Menne et al 2010, but I want to save that option, with more detail, for the possibility of placing a comment in the Journal of Geophysical Research.
When our paper with the most current data is completed (and hopefully accepted in a journal), we’ll let peer reviewed science do the comparison on data and methods, and we’ll see how it works out. Could I be wrong? I’m prepared for that possibility. But everything I’ve seen so far tells me I’m on the right track.
We currently have 87% of the network surveyed (1,067 stations out of 1,221), and the data are quality controlled and checked. I feel we now have enough of the better and urban stations to solve the “low-hanging fruit” problem from the earlier portion of the project. The data at 87% look a lot different than the data at 43%.
The paper I’m writing with Dr. Pielke and others will make use of this better data, and we will also use a different analysis procedure than NCDC used.
[Crust (10:01:07) :
Putting aside decorum issues, the main problem as I understand it is the data set used in this study. It’s a 43% subsample, which at first blush doesn’t sound so bad, but 1) it’s not quality controlled and 2) it’s far from a random subsample; in particular rural areas are underrepresented. Looking at the charts in the paper, it’s remarkable how well the trend series constructed from the good sites and the poor sites track each other. So I don’t think it’s plausible there’s a problem with too small a sample size per se ]
No, Crust, that is a problem. The sample size is too small for the series to track each other so well; the fact is they shouldn’t. The tracking points to number fudging, not “robustness.”
Imagine my surprise. You go to the trouble of collecting information about surface stations, have your data in the public domain, and some sly old dog sneaks up and springs a paper on you, crowing about you getting it all wrong.
He who laughs last, laughs longest, however. Good luck with your paper, Anthony.
I read the Menne et al Paper the same way as Leo G @10:16:21
The strong argument on Skeptical Science is that Anthony’s approach is flawed because it does not recognize the correct methodology of comparing trends rather than absolute temperatures. For the contra-skeptics, the nail in the coffin would be confirmation from post-1979 satellite observations, in addition to the high trend correlation between the class 1 & 2 sites and the class 3 to 5 sites.
Here is what I do not get.
I would assume that True_Temp = Observed_Temp + Constant + Error_Term, where the Constant is a relatively stable coefficient attributable to the sensor type, location, exogenous influences, etc., and the Error term is some stochastic element influenced by the quality of the measuring instrument, observational error, idiosyncratic time-dependent factors, etc. I would assume the magnitude of the error term is greater for CRN 5 than for CRN 1? In the case of a parking lot, I would assume this contributes to the Constant term but not the Error term? So many of the arguments in favor of Menne work only if the bias from the exogenous factor (parking lot, air conditioner) is a constant which is not subject to TOB or seasonal influence.
If there are cars in the lot during the day parked close to the sensor, the heat of the engines coupled with the solar effect on the asphalt would influence Max_Temp, but presumably by midnight neither factor would be present (assume no cars at night). So the average temperature would be biased high. So far there is no effect on trend, as this affects only the constant term in the equation. But what if the parking lot serves a pool used in summer only, and there is a staff building with an A/C near the sensor which runs in summer only? Then the bias of these two factors would have a seasonal component. I would imagine there is some of this type of effect. As the most isolated and better-rated sites (i.e., CRN 1) would have less of this influence and error, Menne should find higher correlation of climate trends within the CRN 1 group than within CRN 4/5, or between CRN 1 and CRN 4. Was this the case?
Next, the error term. Is there a constant error term with a mean of zero and some defined variance, influenced by the LIG/MMTS sensor type or other measurable factors? Is there such an observed factor, and is it higher at the CRN 3/4/5 sites? If so, this should have shown up in the Menne study and should have reduced the observed correlation between the ‘good’ and ‘bad’ sites.
Long story short, my simple/uneducated understanding of Menne et al suggests that the argument holds water if there is a constant bias for a given site versus other sites. If the bias is not constant (the day-vs-night or seasonal examples above), or there is a stochastic term that is greater for CRN 3/4/5 sites, then I would think the correlation in trends would be lower (see the sketch just below this comment).
Am I wildly off base?
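For what it’s worth, the distinction the commenter draws is easy to demonstrate with a toy simulation (made-up numbers, not anyone’s published method): a constant site bias shifts the level but leaves the trend alone, while a bias that grows over time, as with encroaching development, inflates the trend.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2010)
t = years - years[0]
background = 0.01 * t  # assume 1.0 C/century of real warming
noise = lambda: rng.normal(0.0, 0.1, years.size)

good         = background + noise()             # well sited station
const_bias   = background + 1.5 + noise()       # parking lot: constant +1.5 C offset
growing_bias = background + 0.02 * t + noise()  # bias growing with development

trend = lambda y: np.polyfit(years, y, 1)[0] * 100  # deg C per century
print(f"good station:  {trend(good):.1f} C/century")          # ~1.0
print(f"constant bias: {trend(const_bias):.1f} C/century")    # ~1.0 (level shifts, trend doesn't)
print(f"growing bias:  {trend(growing_bias):.1f} C/century")  # ~3.0 (trend inflated)
```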
The second sort of begs the question: “Well, shouldn’t we do something to limit the impact anyway?”
Of course. And since water vapor is the biggest contributor and an amplifier of CO2, I propose we start drying out the atmosphere. We need to ban humidifiers and subsidize dehumidifiers.
And washing clothes is out. And we need to cover all the lakes and oceans with poly tarps. For starters.
How does the equipment change help the alarmist position? If all the recording sites are properly adjusted to remove the effect of the equipment change, then the previous position is reinstated, and poor sites are once again more likely to be biased warm by virtue of the types of site defects that prevail.
All he has done is point to an additional complicating factor which has not yet been taken into account. I don’t see how it affects the site review at all.
Or have I missed something?
Furthermore, the site characteristics must have an effect on long-term anomalies where the site defects are increasing in intensity over time, as with more and more nearby development.
And then again, if one has several sites with different degrees of site defect, then the term ‘anomaly’ has no meaning, because one can never get a reliable reference against which to measure the anomaly. Even if one of the sites were perfect, you would never be able to tell; it would disappear in the melee of surrounding defective sites and be given no additional weight for its perfection.
Is that science?
Another question: there is mention of a constant average temperature difference between LIG and MMTS. Are the error terms for LIG and MMTS affected by temperature? In other words, at extreme high or low temperatures does LIG have a bigger error term? Same for MMTS. My question has to do with whether the correlation of trends between MMTS sensors is affected by very high or very low temperatures, and whether this is an adjustment made by NCDC.
REPLY: Each reading is rounded to the nearest degree F when recorded by the observer, and the precision and accuracy of both instruments are comparable. That leaves only the shelter (wood box vs. plastic gill shield) or siting (MMTS units get placed closer to heat sources such as buildings due to cabling issues – the cable can’t cross sidewalks, driveways, etc.) as culprits. – Anthony
“…begin making site-specific measurements.”
Absolutely.
Nigel S (05:25:15) :
“I agree, the most likely answer is that the whole thing is based on the thermometer on Hansen’s desk.”
That sounds about as scientific as a dashboard thermometer, doesn’t it?
Depends. Is Hansen’s thermometer in a temperature controlled room? Is the dashboard reading connected to a thermometer that samples outdoor temps?
“Each reading is rounded to the nearest degree F when recorded by the observer…” – Anthony’s reply above
That method introduces sampling noise – increasing the error bars.
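The size of that rounding noise is easy to estimate (a sketch; the ~0.29 value is just the standard deviation of a uniform error on ±0.5 degree, standard quantization arithmetic, nothing specific to NCDC):

```python
import numpy as np

rng = np.random.default_rng(1)
true_temps = rng.uniform(30.0, 90.0, 100_000)  # synthetic "true" temps (deg F)
recorded = np.round(true_temps)                # observer rounds to nearest whole degree

err = recorded - true_temps
print(f"rounding error std: {err.std():.3f} F")  # ~0.289 F, i.e. 1/sqrt(12)
```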
Hi all
Could you share your thoughts on whether my thinking is in the ballpark or way off?
NCDC strategy:
1) Move the discussion from temperature to trend
2) Prove that AGW is still happening
3) End the discussion about temperature differences between urban and rural stations
So NCDC needed to homogenize the trend across all types of stations:
1) They decided what trend they needed
2) They needed the trend to be similar to that of urban stations
3) Both urban and rural stations were adjusted to fit this trend (homogenization)
4) Thus the trend was taken from urban stations and applied to rural stations
5) With rural stations, this required that the station history be made colder than it actually was, in order to produce the needed trend while still matching current temperatures (this has been observed)
Then NCDC publishes a study pointing out that there is no difference in trend between urban and rural stations (thus demonstrating what they have done to station history).
What needs to be done is to prove that the historical data of rural stations were adjusted towards cold. For this purpose we need the raw data, which has now been pulled from public availability.
Menne’s paper is actually a good thing. We now have scientific evidence of what NCDC has inappropriately done to historical climate data.
A quick note to Nick Stokes.
I had another look at the anomaly issue. I had to go back and review the code.
In general, if you work in anomalies then you don’t have to be concerned, in principle, with dropping out stations. That is, IF you calculate an anomaly for each station relative to itself. GISS don’t do this.
At issue is the combination of scribal records to create one record and, more importantly, the creation of reference stations (see the CA posts) from multiple stations. Very simply: if you first create an anomaly for each station and THEN combine and average the anomalies, you don’t have a big problem.
But if you first combine stations on the basis of temperature and THEN create an anomaly, it’s an open question.
GISS do the latter (a toy demonstration follows below).
Oh. Buy the book.
http://www.lulu.com/content/e-book/climategate-the-crutape-letters/8243144
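The dropout issue described above is easy to demonstrate with a toy example (a sketch with made-up stations and an arbitrary baseline period, not GISS’s actual code): combine in absolute temperature and a station dropout creates a spurious step; take anomalies first and it doesn’t.

```python
import numpy as np

years = np.arange(1950, 2010)
flat = np.zeros(years.size)         # no real climate change at all
valley   = 15.0 + flat              # mild-climatology station
mountain = 5.0 + flat               # cold-climatology station
mountain[years >= 1980] = np.nan    # the cold station closes in 1980

base = (years >= 1951) & (years <= 1970)  # hypothetical baseline period

# Method 1: average absolute temps first, then take the anomaly of the combined series.
combined = np.nanmean(np.vstack([valley, mountain]), axis=0)
combine_first = combined - combined[base].mean()

# Method 2: take each station's anomaly first, then average the anomalies.
anom = lambda s: s - np.nanmean(s[base])
anomaly_first = np.nanmean(np.vstack([anom(valley), anom(mountain)]), axis=0)

post = years >= 1980
print(f"combine-then-anomaly after 1980: {np.nanmean(combine_first[post]):+.1f} C")  # +5.0, a spurious jump
print(f"anomaly-then-combine after 1980: {np.nanmean(anomaly_first[post]):+.1f} C")  # +0.0, no artifact
```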
RSG (09:49:10) :
Anthony,
Love your website and appreciate your excellent work on the USHCN. Tragic that Menne and company have stooped to such antics. It only proves the desperate state of their position. I hope their sleazy move gets wide exposure… and sooner rather than later.
In discussing MMTS vs. Stevenson Screens/liquid-in-glass, I am reminded of your experiment comparing Stevenson Screens with bare wood, whitewash, and latex paint. Was not your actual air temperature taken with an MMTS? If so, did it not show that all three Stevenson Screens registered higher daytime temps than the “actual” air temp (MMTS) on the sunny day you sampled (A Typical Day in the Stevenson Screen Paint Test)?
Is there a paper or study out there formally comparing the 2 methods (MMTS vs. Stevenson Screen)?
I think you’ll find references to this in peterson03.
Quayle did a study as well.
There are two types of studies:
1. Side-by-side with the same exposure. This tests for bias in an ideal situation.
2. Large-scale studies that take actual siting into account.
I think the thermometer built into the dashboard is actually showing the temperature of the large amount of air being sucked into the engine. It’s a side effect of the car’s computer needing to know the air temperature so it can better control emissions. So it is closely related to the outside temperature. If the air intake is under the hood, the temperature warms up as the engine warms up, and turning on the air conditioner causes an upward jump due to the heat from the air conditioner’s radiator. Many modern cars have the air intake connected to a space outside the engine compartment and near a wheel well, and those are sucking in air from outside the car.
I may be a little overly blunt. Romm is posting about this. Someone asks if he has ever read the Watts 2010 paper. Apparently he is practicing voodoo and knows what it will say. Dr. Masters seems to be reviewing it also, using the same voodoo model. They need to be called out.
This is why a PhD is dirt for reputation now.
If Romm has a PhD and Masters does too, what is with their mind reading and entrails reading?
No other profession has stunts like this.
Criminal Jones won’t release files that he is required by law to release. Then these clowns release reports on a story that has not been written.
As bad as the Coakley-wins story posted in the Boston Globe the day before the election.
The way I read it, the offer of “co-author” was in reality an offer of “data supplier.” Hence the acknowledgement of receipt of data. These words of the NCDC director say it all:
“We at NOAA/NCDC seek a way forward to cooperate with you, and are interested in joint scientific inquiry. When more or better information is available, we will reanalyze and compare and contrast the results.”
“If working together cooperatively is of interest to you, please let us know.”
The phrase, “When more or better information is available…” indicates the provisional nature of any future cooperative work. The invitation to “please let us know” holds no promise of future work. The director is only asking for an expression of interest.
As for the ethical issue, one could interpret the NCDC communications as a request for the use of data.
It’s also a relief that someone has finally done some analysis of the surface station project data and gotten it published. Wasn’t a preliminary analysis of the data done a couple of years ago, with results much in line with this later study?
PhilM (22:27:21), I don’t see why this subject is so hard for you to understand. The issue isn’t that the data itself was hacked or stolen. Neither of those claims appears in my posts, so your rejoinders are irrelevant at best.
One more time: The issue is that Anthony owned the scientific priority of his own data. Prof. Karl had no right to publish on it first, and neither did Dr. Menne. They chose to abscond with Anthony’s right to priority. That is a serious breach of scientific ethics.
This has nothing whatever to do with the accusations leveled at AGW workers who hid their data after publication and, just as bad, refused to elucidate their analytical methodologies.
I’ve been a practicing experimental scientist for many years now. It’s not just common practice to reveal all your methods and protocols in the very same publication carrying your results and conclusions; it is mandatory. At least it has been so in every discipline except climate science. There, we get protests and claims of IPO when others want the same information that is automatically given in other fields.
Let’s be clear: there would have been no requests for release of information if the climate scientists now under the gun had followed the normal standard of openness common to the other branches of science. They have called the present onus down upon their own heads by way of the obscurantism in publication and obstructionism in practice they have regularly engaged in for at least 10 years.
Nigel S (09:54:42) :
“I don’t think anyone is proposing to destroy the global economy on the strength of their dashboard thermometer readings but the thermometer on Hansen’s desk is a lot more dangerous.”
My amusement was aimed at the offering of anecdotal dashboard/car temperature readings as evidence of UHI. Apparently, a large number of folks here think that siting a climate station 30 meters from a building is a catastrophe, but temperature readings from an instrument mounted in a moving vehicle somehow provide useful climate data.
I also got quite a chuckle from all the comments last year about how cool the summer was, how snowy the winter was, etc. and how that was surely evidence contrary to AGW. Then the yearly temp anomaly from UAH comes out and presto: +0.25 C.
Pat Frank (16:13:29) :
I guess you (and a few others) will have the last word on this. We seem to be going around and around.
Two questions:
1) What is the “raw” data they use? In my opinion it should be the data written down by the station operators and entered into a database with no adjustments.
2) Why can’t we just calculate the average of all the thermometers on a year by year basis and see how much temperature has increased? That might not be representative of the earth’s total surface temperature but it would tell us how much the recorded temperatures have increased.
Phil M (19:37:08) wrote:
“Anthony posted his data on the internet. Is there anything on the internet that you feel is not in the public domain?”
I repeat:
Why don’t you read their guidelines to see if they adhered to them?
http://history.nasa.gov/footnoteguide.html
They post standards, then ignore them. They are typical, arrogant, self-justifying narcissists.
ClaytonB (18:50:38) :
I have followed this project with some enthusiasm for a long time and have even tried to find one of these stations here in TX (not easy btw).
Mandolinjon (19:43:47) :
Anthony: I noticed that there are several temperature sites in New Mexico that have not been reviewed. I live in Torrance County, which is the geographical center of the state. Perhaps you could let me know more specifics. Jon
Clayton: I’d be interested in knowing which station you couldn’t find. I got some foundational info on several stations last October that I didn’t succeed in finding (Eagle Pass, Catarina, Falfurrias, Muleshoe, Mexia). Need to get someone to follow up.
Mandolinjon: I’m sure Anthony will get you up to speed. One place not currently in the gallery is Dulce, which is a V2 replacement station for Durango, CO.
You guys can contact me at juanslayton@dslextreme.com
Phil M (17:12:14) : “Then the yearly temp anomaly from UAH comes out and presto: +0.25 C.”
And now we know why we cannot trust those numbers. When you approach things with an open mind, you learn a lot.
One may interpret it that way, or they could just have said so plainly. Why they didn’t speaks volumes.
Phil M (19:37:08) :
“Anthony posted his data on the internet. Is there anything on the internet that you feel is not in the public domain?”
Umm, what? “Public domain” has a very specific meaning in law. Yes, there are idiots who treat everything on the internet as if it were in the public domain (and they often get away with it, for the same reason most speeding violations are never ticketed), but it most assuredly is not.
I would go so far as to say even averaging a grid is bogus. Unless the grid cells are only a mile across, you can’t average an area where temperature variations can be as much as 10°F even just 5 miles apart and end up with anything meaningful.
Well, unless the stations are equally distributed (and they’re NOT), you have to grid, even though it’s not perfect. The average of all USHCN stations is over 0.1°C/century lower than the gridded data, because station density in the sharply cooled Southeast is much greater than in the sharply warmed Southwest. That has to be accounted for.
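A minimal sketch of that accounting (hypothetical numbers chosen to mimic the dense-Southeast / sparse-Southwest situation described above; not GISS’s or NOAA’s actual gridding code):

```python
import numpy as np

# Hypothetical station trends (deg C/century) in two equal-area grid cells:
southeast = np.full(30, 0.2)  # densely sampled, cooler-trending cell
southwest = np.full(5, 1.2)   # sparsely sampled, warmer-trending cell

stations = np.concatenate([southeast, southwest])
plain_mean = stations.mean()  # every station counts equally: the dense cell dominates
gridded = np.mean([southeast.mean(), southwest.mean()])  # one value per equal-area cell

print(f"plain station average: {plain_mean:.2f} C/century")  # 0.34
print(f"gridded average:       {gridded:.2f} C/century")     # 0.70
```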