On ‘denying’ Hockey Sticks, USHCN data, and all that – part 2

In part one of this essay, which you can see here, I got quite a lot of feedback from both sides of the climate debate. Some people thought I was spot on with my criticisms, while others thought I had sold my soul to the devil of climate change. It is an interesting life when I am accused of being in cahoots with both “big oil” and “big climate” at the same time. That aside, in this part of the essay I am going to focus on areas of agreement and disagreement and propose a solution.

In part one of the essay we focused on the methodology that created a hockey-stick-style graph as an artifact of missing data. Because the missing data produced a spurious spike at the end, Steve McIntyre commented, suggesting that the result was more like the Marcott hockey stick than Mann’s:

Steve McIntyre says:

Anthony, it looks to me like Goddard’s artifact is almost exactly equivalent in methodology to Marcott’s artifact spike – this is a much more exact comparison than Mann. Marcott’s artifact also arose from data drop-out.

However, rather than conceding the criticism, Marcott et al have failed to issue a corrigendum and their result has been widely cited.

In retrospect, I believe McIntyre is right in making that comparison. Data dropout is the central issue here and when it occurs it can create all sorts of statistical abnormalities.

Despite some spirited claims in comments in part one about how I’m “ignoring the central issue”, I don’t dispute that data is missing from many stations; I never have.

Missing data has been known about for years and is actually expected in the messy data-gathering process of volunteer observers, electronic systems that don’t always report, and equipment and/or sensor failures. In fact, there is likely no weather network in existence with perfect, gap-free data. Even the new U.S. Climate Reference Network, designed to be state-of-the-art and as close to perfect as possible, has a small amount of missing data due to failures of uplinks or other electronic issues, seen in red:

CRN_missing_data

Source: http://www.ncdc.noaa.gov/crn/newdaychecklist?yyyymmdd=20140101&tref=LST&format=web&sort_by=slv

What is in dispute is the methodology, and the methodology, as McIntyre observed, created a false “hockey stick” shape much like we saw in the Marcott affair:

marcott-A-1000[1]

After McIntyre corrected the methodology used by Marcott, dealing with faulty and missing data, the result looked like this:

 

alkenone-comparison

McIntyre pointed this out in the comments on part 1:

In Marcott’s case, because he took anomalies at 6000BP and there were only a few modern series, his results were an artifact – a phenomenon that is all too common in Team climate science.

So, clearly, the correction McIntyre applied to Marcott’s data made the result better, i.e. more representative of reality.

That’s the same sort of issue that we saw in Goddard’s plot; the data was thinning near the present-day endpoint.

Goddard_screenhunter_236-jun-01-15-54

[ Zeke has more on that here: http://rankexploits.com/musings/2014/how-not-to-calculate-temperatures-part-3/ ]
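To make the mechanism concrete, here is a minimal sketch in Python (entirely synthetic data and my own simplification, not Goddard’s spreadsheet or NOAA’s code) of how station dropout near the end of a record can manufacture an endpoint spike when absolute temperatures are simply averaged, and how taking each station’s anomaly first avoids it:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations, n_years = 100, 30

# Every station measures the same flat climate (no warming at all), but each has
# its own local baseline, e.g. cool mountain sites versus warm desert sites.
baselines = rng.uniform(5.0, 25.0, size=n_stations)                   # deg C
temps = baselines[:, None] + rng.normal(0.0, 0.5, (n_stations, n_years))

# Simulate dropout concentrated at the end of the record: the cooler half of the
# network stops reporting for the final three years.
reporting = np.ones((n_stations, n_years), dtype=bool)
cool_half = np.argsort(baselines)[: n_stations // 2]
reporting[cool_half, -3:] = False
observed = np.where(reporting, temps, np.nan)

# Method A: average the absolute temperatures of whichever stations report.
avg_absolute = np.nanmean(observed, axis=0)

# Method B: convert each station to anomalies from its own mean, then average.
anomalies = observed - np.nanmean(observed, axis=1, keepdims=True)
avg_anomaly = np.nanmean(anomalies, axis=0)

print("Averaged absolutes, last 5 years:", np.round(avg_absolute[-5:], 2))
print("Averaged anomalies, last 5 years:", np.round(avg_anomaly[-5:], 2))
# The absolute average jumps by several degrees in the final three years even
# though no station warmed; the anomaly average stays near zero.
```

No station in this toy network warms at all; the spike comes purely from which stations happen to report in the final years.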

While I would like nothing better than to be able to use raw surface temperature data in its unadulterated “pure” form to derive a national temperature and to chart the climate history of the United States (and the world), the fact is that the national USHCN/co-op network and the GHCN are in such bad shape, and have become so heterogeneous, that this is no longer possible with the raw data set as a whole.

These surface networks have had so many changes over time (stations moved, times of observation changed, equipment changed, maintenance issues, encroachment by micro-site biases and/or UHI) that using the raw data for all stations on a national or even global scale gives you a result that is no longer representative of the actual measurements. There is simply too much polluted data.

A good example of polluted data can be found at the Las Vegas, Nevada USHCN station:

LasVegas_average_temps

Here, growth of the city and its population has resulted in a clear and undeniable nighttime UHI signal, with lows gaining about 10°F since measurements began. It is studied and acknowledged by the “sustainability” department of the city of Las Vegas, as seen in this document. Dr. Roy Spencer, in his blog post, called it “the poster child for UHI” and wonders why NOAA’s adjustments haven’t removed this problem. It is a valid and compelling question. But at the same time, if we were to use the raw data from Las Vegas, we know it has been polluted by the UHI signal, so is it representative in a national or global climate presentation?

LasVegas_lows

The same trend is not visible in the daytime Tmax temperature; in fact, it appears there has been a slight downward trend since the late 1930s and early 1940s:

LasVegas_highs

Source for data: NOAA/NWS Las Vegas, from

http://www.wrh.noaa.gov/vef/climate/LasVegasClimateBook/index.php

The question then becomes: Would it be okay to use this raw temperature data from Las Vegas without any adjustments to correct for the obvious pollution by UHI?

From my perspective the thermometer at Las Vegas has done its job faithfully. It has recorded what actually occurred as the city has grown. It has no inherent bias; the change in its surroundings has biased it. The issue, however, is when you start using stations like this to search for the posited climate signal from global warming. Since the nighttime temperature increase at Las Vegas is almost an order of magnitude larger than the signal posited to exist from carbon dioxide forcing, that AGW signal would clearly be swamped by the UHI signal. How would you find it? If I were searching for a climate signal by examining stations, rather than throwing blind automated adjustments at them, I would most certainly remove Las Vegas from the mix: its raw data is unreliable because it has been badly, and likely irreparably, polluted by UHI.

Now before you get upset and claim that I don’t want to use raw data or as some call it “untampered” or unadjusted data, let me say nothing could be further from the truth. The raw data represents the actual measurements; anything else that has been adjusted is not fully representative of the measurement reality no matter how well-intentioned, accurate, or detailed those adjustments are.

But, at the same time, how do you separate all the other biases that have not been dealt with (like Las Vegas) so you don’t end up creating national temperature averages with imperfect raw data?

That my friends, is the $64,000 question.

To answer that question, we have a demonstration. Over at The Blackboard blog, Zeke has plotted something that I believe illustrates the problem.

Zeke writes:

There is a very simple way to show that Goddard’s approach can produce bogus outcomes. Lets apply it to the entire world’s land area, instead of just the U.S. using GHCN monthly:

Averaged Absolutes

Egads! It appears that the world’s land has warmed 2C over the past century! Its worse than we thought!

Or we could use spatial weighting and anomalies:

 

Gridded Anomalies

Now, I wonder which of these is correct? Goddard keeps insisting that its the first, and evil anomalies just serve to manipulate the data to show warming. But so it goes.

Zeke wonders which is “correct”. Is it Goddard’s method of plotting all the “pure” raw data, or is it Zeke’s method of using gridded anomalies?
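For readers who have not seen the two calculations laid out side by side, here is a minimal sketch of the difference, again with made-up numbers; it illustrates the two styles of bookkeeping, not Zeke’s or NOAA’s actual code (which uses real land-area weights and common baseline periods):

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 100
years = np.arange(1915, 1915 + n_years)
true_trend = 0.005 * np.arange(n_years)          # 0.5 C/century built into every station

# Three hypothetical grid cells with very different typical climates.
cells = {"north": -2.0, "middle": 12.0, "south": 24.0}   # baseline, deg C

records = []                                      # (cell_name, series with NaN gaps)
for cell, base in cells.items():
    for i in range(8):
        series = base + true_trend + rng.normal(0.0, 0.3, n_years)
        if cell == "north" and i >= 3:
            series[60:] = np.nan                  # cold-cell stations stop reporting
        if cell == "south" and i >= 3:
            series[:60] = np.nan                  # warm-cell stations only appear late
        records.append((cell, series))

data = np.array([s for _, s in records])

# "Averaged Absolutes": mean of whatever raw values exist each year.
averaged_absolutes = np.nanmean(data, axis=0)

# "Gridded Anomalies" (simplified): anomaly per station from its own record mean,
# average stations within each cell, then weight the cells equally (equal area).
cell_means = []
for cell in cells:
    block = np.array([s for c, s in records if c == cell])
    anoms = block - np.nanmean(block, axis=1, keepdims=True)
    cell_means.append(np.nanmean(anoms, axis=0))
gridded_anomalies = np.mean(cell_means, axis=0)

slope = lambda y: np.polyfit(years, y, 1)[0] * 100        # deg C per century
print("Averaged absolutes: %+.2f C/century" % slope(averaged_absolutes))
print("Gridded anomalies : %+.2f C/century" % slope(gridded_anomalies))
# The absolutes trend is inflated by several degrees purely from the changing
# station mix; the anomaly/gridding result stays near the true 0.5 C/century.
```

The only thing that differs between the two results is the arithmetic; the synthetic “climate” fed to both is identical.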

My answer is: neither of them is absolutely correct.

Why, you ask?

It is because both contain stations like Las Vegas that have been compromised by changes in their environment, the station equipment itself, the sensors, the maintenance, the time of observation, data loss, and so on. In both cases we are plotting data that is a huge mishmash of station biases that have not been dealt with.

NOAA tries to deal with these issues, but their effort falls short. Part of the reason it falls short is that they are trying to keep every bit of data and adjust it in an attempt to make it useful, and to me that is misguided, as some data is just beyond salvage.

In most cases, the cure from NOAA is worse than the disease, which is why we see things like the past being cooled.

Here is another plot from Zeke just for the USHCN, which shows Goddard’s method “Averaged Absolutes” and the NOAA method of “Gridded Anomalies”:

Goddard and NCDC methods 1895-2013

[note: the Excel code I posted was incorrect for this graph, and was for another graph Zeke produced, so it was removed, apologies – Anthony]

Many people claim that the “Gridded Anomalies” method cools the past, and increases the trend, and in this case they’d be right. There is no denying that.

At the same time, there is no denying that the entire CONUS USHCN raw data set contains all sorts of imperfections, biases, UHI, data dropouts, and a whole host of problems that remain uncorrected. It is a Catch-22: on one hand the raw data has issues; on the other, at a bare minimum some sort of infilling and gridding is needed to produce a representative signal for the CONUS, but in producing that, new biases and uncertainties are introduced.

There is no magic bullet that always hits the bullseye.

I’ve known and studied this for years; it isn’t a new revelation. The key point here is that both Goddard and Zeke (and by extension BEST and NOAA) are trying to use the ENTIRE USHCN dataset, warts and all, to derive a national average temperature. Neither method produces a totally accurate representation of the national temperature average. Keep that thought.

While both methods have flaws, Goddard raised one good, and important, point: the rate of data dropout in USHCN is increasing.

When data is lost, NOAA infills with data from other nearby stations, and that’s an acceptable procedure, up to a point. The question is, have we reached a point of no confidence in the data because too much has been lost?

John Goetz asked the same question as Goddard in 2008 at Climate Audit:

How much Estimation is too much Estimation?

It is still an open question, and without a good answer yet.

But at the same time as we are seeing more and more data loss, Goddard is claiming “fabrication” of lost temperature data in the final product while also advocating using the raw surface temperature data for a national average. From my perspective, you can’t argue for both. If the raw data is becoming less reliable due to data loss, how can we use it by itself to reliably produce a national temperature average?

Clearly with the mess the USHCN and GHCN are in, raw data won’t accurately produce a representative result of the true climate change signal of the nation because the raw data is so horribly polluted with so many other biases. There are easily hundreds of stations in the USHCN that have been compromised like Las Vegas has been, making the raw data, as a whole, mostly useless.

So in summary:

Goddard is right to point out that there is increasing data loss in USHCN and it is being increasingly infilled with data from surrounding stations. While this is not a new finding, it is important to keep tabs on. He’s brought it to the forefront again, and for that I thank him.

Goddard is wrong to say we can use all the raw data to reliably produce a national average temperature, because that same data is increasingly lossy and is also full of other biases that are not dealt with. [ added: His method allows biases to enter that are mostly about station composition and less about infilling; see this post from Zeke]

As a side note, claiming “fabrication” in a nefarious way doesn’t help, and generally turns people off to open debate on the issue, because the process of infilling missing data wasn’t designed with any nefarious motive; it was designed to make the monthly data usable when small data dropouts occur, as we discussed in part 1 when we showed the B-91 form with missing data from a volunteer observer. Claiming “fabrication” only puts up walls, and frankly, if we are going to enact any change to how things get done in climate data, new walls won’t help us.

Biases are common in the U.S. surface temperature network

This is why NOAA/NCDC spends so much time applying infills and adjustments; the surface temperature record is a heterogeneous mess. But in my view, this process of trying to save messed up data is misguided, counter-productive, and causes heated arguments (like the one we are experiencing now) over the validity of such infills and adjustments, especially when many of them seem to operate counter-intuitively.

As seen in the map below, there are thousands of temperature stations in the U.S. co-op and USHCN networks. By our Surface Stations survey, at least 80% of the USHCN is compromised by micro-site issues in some way, and given the large sample size of the USHCN subset of the co-op network that we surveyed, that finding should translate to the larger network.

USHCN_COOP_Map

When data drops out of USHCN stations, data from nearby neighbor stations is infilled to make up the missing data, but when 80% or more of your network is compromised by micro-site issues, chances are all you are doing is infilling missing data with compromised data. I explained this problem years ago using a water bowl analogy, showing how the true temperature signal gets “muddy” when data from surrounding stations is used to infill missing data:

bowls-USmap

The real problem is that the increasing amount of data dropout in USHCN (and in the co-op network and GHCN) may be reaching a point where infilling adds a majority of biased signal from nearby problematic stations. Imagine a well-sited, long-period station near Las Vegas, out in a rural area, that has its missing data infilled using Las Vegas data; you know the result will be warmer when that happens.
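Here is a minimal sketch of that effect with hypothetical numbers (a numerical stand-in for the bowl analogy, not NOAA’s pairwise homogenization):

```python
import numpy as np

rng = np.random.default_rng(7)
n_years = 60
years = np.arange(1955, 1955 + n_years)

# Hypothetical rural station: essentially flat climate plus noise.
rural = 18.0 + rng.normal(0.0, 0.3, n_years)

# Hypothetical urban neighbor (think Las Vegas minimums): strong UHI growth.
urban = 18.0 + 0.08 * np.arange(n_years) + rng.normal(0.0, 0.3, n_years)

# The rural record loses its last 15 years of data.
rural_obs = rural.copy()
rural_obs[-15:] = np.nan

# Naive infill: replace the rural station's missing years with the urban
# neighbor's values, offset so the two agree over their common period.
offset = np.nanmean(rural_obs[:-15] - urban[:-15])
infilled = np.where(np.isnan(rural_obs), urban + offset, rural_obs)

fit = lambda y: np.polyfit(years, y, 1)[0] * 100     # deg C per century
print("True rural trend     : %+.2f C/century" % fit(rural))
print("Infilled rural trend : %+.2f C/century" % fit(infilled))
# The infilled record inherits part of the neighbor's UHI warming even though
# the rural site itself never warmed.
```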

So, what is the solution?

How do we get an accurate surface temperature for the United States (and the world) when the raw data is full of uncorrected biases and the adjusted data does little more than smear those station biases around when infilling occurs? Some of our friends say a barrage of statistical fixes is all that is needed, but there is also another, simpler, way.

Dr. Eric Steig, at “Real Climate”, in a response to a comment about Zeke Hausfather’s 2013 paper on UHI, shows us a way.

Real Climate comment from Eric Steig (response at bottom)

We did something similar (but even simpler) when it was being insinuated that the temperature trends were suspect, back when all those UEA emails were stolen. One only needs about 30 records, globally spaced, to get the global temperature history. This is because there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.

For those who don’t know what the Rossby radius is, see this definition.

Steig claims 30 station records are all that are needed globally. In a comment some years ago (now probably lost in the vastness of the Internet), we recall Dr. Gavin Schmidt saying something similar: that about “50 stations” would be all that is needed.

[UPDATE: Commenter Johan finds what may be the quote:

I did find this Gavin Schmidt quote:

“Global weather services gather far more data than we need. To get the structure of the monthly or yearly anomalies over the United States, for example, you’d just need a handful of stations, but there are actually some 1,100 of them. You could throw out 50 percent of the station data or more, and you’d get basically the same answers”

http://earthobservatory.nasa.gov/Features/Interviews/schmidt_20100122.php ]

So if that is the case, and one of the most prominent climate researchers on the planet (and his associate) says we need only somewhere between 30 and 50 stations globally, why is NOAA spending all this time trying to salvage bad data from hundreds if not thousands of stations in the USHCN, and also in the GHCN?

It is a question nobody at NOAA has ever really been able to answer for me. While it is certainly important to keep the records from all these stations for local climate purposes, why try to keep them in the national and global datasets when Real Climate scientists say that just a few dozen good stations will do just fine?
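The Schmidt/Steig claim is easy to test in principle: compute the large-area average from the full network and from a small subset, and compare. Here is a minimal sketch on synthetic data (my own construction, under the assumption that nearby stations share one regional signal, which is what the Rossby-radius correlation argument amounts to):

```python
import numpy as np

rng = np.random.default_rng(3)
n_stations, n_years = 1100, 100

# One shared "regional/global" anomaly history (what we actually want to recover)...
signal = np.cumsum(rng.normal(0.0, 0.1, n_years))        # a red-noise climate history
# ...seen by every station with its own local weather noise on top.
stations = signal + rng.normal(0.0, 1.0, (n_stations, n_years))

full_network = stations.mean(axis=0)
small_subset = stations[rng.choice(n_stations, size=30, replace=False)].mean(axis=0)

rmse = np.sqrt(np.mean((small_subset - full_network) ** 2))
corr = np.corrcoef(small_subset, full_network)[0, 1]
print("RMS difference, 30 stations vs 1100: %.3f C" % rmse)
print("Correlation of the two histories   : %.3f" % corr)
# With well-behaved (unbiased) stations, 30 of them track the 1100-station
# average closely; the catch, of course, is the word "unbiased".
```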

There is precedent for this: the U.S. Climate Reference Network, which has just a fraction of the stations of the USHCN and co-op networks:

crn_map

NOAA/NCDC is able to derive a national temperature average from these few stations just fine, and without the need for any adjustments whatsoever. In fact they are already publishing it:

USCRN_avg_temp_Jan2004-April2014

If it were me, I’d throw out most of the USHCN and co-op stations with problematic records rather than try to salvage them with statistical fixes, and instead try to locate the best stations with long records, no moves, and minimal site biases, and use those as the basis for tracking the climate signal. By doing so, not only do we eliminate a whole bunch of make-work with questionable or uncertain results, but we also end all the complaints about data falsification and the quibbling over whose method really does find the “holy grail of the climate signal” in the U.S. surface temperature record.

Now you know what Evan Jones and I have been painstakingly doing for the last two years, ever since our preliminary siting paper was published here at WUWT and we took heavy criticism for it. We’ve embraced those criticisms and made the paper even better. We learned back then that adjustments account for about half of the surface temperature trend.

We are in the process of bringing our newest findings to publication. Some people might complain we have taken too long. I say we have one chance to get it right, so we’ve been taking extra care to effectively deal with all the criticisms from back then, as well as criticisms from within our own team. Of course, if I had funding like some people get, we could hire people to help move it along faster instead of relying on free time where we can get it.

The way forward:

It is within our grasp to locate and collate stations in the USA and around the world that have records as long, uninterrupted, and free from bias as possible, and to make that a new climate data subset. I’d propose calling it the Un-Biased Global Historical Climate Network, or UBGHCN. That may or may not be a good name, but you get the idea.

We’ve found at least that many good stations in the USA that meet the criteria of being reliable and without any need for major adjustments of any kind, including the time-of-observation change (TOB); some do require the cooling-bias correction for the MMTS conversion, but that is well known and a static value that doesn’t change with time. Chances are, a similar set of 50 stations could be located in the rest of the world. The challenge is metadata, some of which is non-existent publicly, but with crowd-sourcing such a project might be doable, and then we could fulfill Gavin Schmidt and Eric Steig’s vision of a much simpler set of climate stations.
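As a sketch of what assembling such a subset might look like in practice, the selection step is mostly a metadata filter plus one fixed correction. The station names, thresholds, and offset value below are hypothetical placeholders, not numbers from our paper:

```python
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    years_of_record: int
    moves: int
    tobs_changes: int
    crn_site_class: int        # 1 = best exposure, 5 = worst
    mmts_converted: bool

# Hypothetical candidate list; a real build would read USHCN/co-op metadata files.
candidates = [
    Station("RURAL_A", 112, 0, 0, 1, True),
    Station("RURAL_B",  98, 0, 0, 2, False),
    Station("CITY_C",  120, 3, 2, 4, True),    # moved and poorly sited: rejected
]

def qualifies(s: Station) -> bool:
    """Long, unmoved, well-sited records only; TOBS changes disqualify outright."""
    return (s.years_of_record >= 80 and s.moves == 0
            and s.tobs_changes == 0 and s.crn_site_class <= 2)

MMTS_TMAX_OFFSET = 0.1   # deg C, a placeholder for the known static cooling bias

def corrected_tmax(raw_tmax: float, s: Station) -> float:
    """Apply the one static correction the subset would still need."""
    return raw_tmax + MMTS_TMAX_OFFSET if s.mmts_converted else raw_tmax

subset = [s for s in candidates if qualifies(s)]
print([s.name for s in subset])            # -> ['RURAL_A', 'RURAL_B']
print(corrected_tmax(31.4, subset[0]))     # -> 31.5
```

The point of the sketch is that, unlike the monthly statistical blender, the correction here is a single documented constant applied once, not something that changes every time new data arrives.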

Wouldn’t it be great to have a simpler, known-reliable set of stations rather than this mishmash that goes through the statistical blender every month? NOAA could take the lead on this, but chances are they won’t. I believe it is possible to do this independently of them, and it is a place where climate skeptics can make a powerful contribution, one far more productive than the arguments over adjustments and data dropout.

 


274 Comments
June 26, 2014 4:34 pm

For those who are interested in what Nick Stokes is talking about, this wiki page on integration is well worth the read. In particular, on the issue of only using a few stations and arriving at the same conclusion: “the Gaussian quadrature often requires noticeably less work for superior accuracy.”
“The explanation for this dramatic success lies in error analysis, and a little luck.”
http://en.wikipedia.org/wiki/Integral
The same principle can also be applied to modern GCMs, where they get the right answer from the wrong parameters.
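[Note: the sketch below illustrates the commenter’s integration analogy with standard NumPy, nothing climate-specific: a handful of well-chosen Gauss-Legendre nodes versus a brute-force grid of evenly spaced points.]

```python
import numpy as np

# Integrate a smooth function on [-1, 1]; the exact value of the integral of
# exp(x) over [-1, 1] is e - 1/e.
f = np.exp
exact = np.e - 1.0 / np.e

# Five carefully chosen sample points (Gauss-Legendre nodes and weights)...
nodes, weights = np.polynomial.legendre.leggauss(5)
gauss = np.sum(weights * f(nodes))

# ...versus two hundred evenly spaced points with the trapezoid rule.
x = np.linspace(-1.0, 1.0, 200)
y = f(x)
trap = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

print("Gauss-Legendre, 5 points  : error = %.1e" % abs(gauss - exact))
print("Trapezoid rule, 200 points: error = %.1e" % abs(trap - exact))
# A handful of well-chosen sample points beats many naive ones, which is the
# analogy for needing only a few well-placed stations.
```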

glacierman
June 26, 2014 4:39 pm

There’s a problem with the data, so they replace it with regional expectations? That is the problem. Who gets to decide what’s expected? If a problem is detected, the data should not be used at all, not replaced with synthetic numbers.

Shawn from High River
June 26, 2014 4:41 pm
Latitude
June 26, 2014 4:43 pm

Nick Stokes says:
June 26, 2014 at 3:22 pm
====
Nick, thanks….that’s a plausible justification
…and, according to BEST, it looks like the Luling station has been moved 7 times, each time to a cooler location

angech
June 26, 2014 4:46 pm

ANTHONY “Goddard is right to point out that there is increasing data loss in USHCN and it is being increasingly infilled with data from surrounding stations. While this is not a new finding, it is important to keep tabs on. He’s brought it to the forefront again, and for that I thank him.”
Zeke has had 3 posts up at Lucia’s since June 5th, 2014; the first had 284 comments.
I made several requests to Zeke re the USHCN figures, with little response.
So, to be clear, I said there were 1218 real stations (USHCN) in the late 1980s:
There are now [???] original real stations left (my guess: half, 609).
There are [???] total real stations (my guess, eyeballing: 870).
There are 161 new real stations, all at airports or in cities, added to the graph.
There are 348 made-up stations and 161 selected new stations.
The number of the original 1218 has to be kept.
Nobody has put up a new thermometer in rural USA in the last 30 years, and no one has considered using any of the rural thermometers among the possibly 3000 of the discarded 5782 cooperative network stations.
On June 7th, Zeke wrote: “As I mentioned in the original post, about 300 of the 1218 stations originally assigned to the USHCN in the late 1980s have closed, mostly due to volunteer observers dying or otherwise stopping reporting. No stations have been added to the network to make up for this loss, so there are closer to 900 stations reporting on the monthly basis today.”
Yet he also has a post at SG where he admits that there are only 650 real stations out of 1218. This is a lot less than the roughly 918 he alludes to above. Why would he say 650 to SG (May 12th, 3:00 pm) and instead, in comment #130058 at the Blackboard, say about 300 of the 1218 stations have closed down?
Anthony, 650 real stations means a lot more than 40% missing; in fact it is nearly 50%.
Would you be able to get Zeke to clarify his comment to SG and confirm:
a. the number of real stations [this may be between his 650 and 850; the last list of up-to-date reporting stations early this year had 833 twice, but presumably a few more were not used as missing some days]
b. the number of original real stations remaining [this may be lower than 650 if real but new replacement stations have been put in]
c. in which case, the number of real original stations and the number of real replacement stations within the 650

REPLY:
Links to these comments? Don’t make me chase them down please if you want me to look at them -Anthony

June 26, 2014 4:50 pm

30 to 50 stations
>>>>>>>>>>>>
I’d actually agree that if you had a few dozen stations with pristine records, that would be sufficient to calculate the temperature trend of the earth provided that the record was long enough.
If you have a long enough record in time, a single station would be sufficient to determine the earth’s temperature trend. In fact, that’s pretty much what ice core data is. A single location with data for thousands of years, which is sufficient to plot general trends in earth’s temperature as a whole. When you have data for thousands of years, that single location is sufficient for spotting long term trends.
Which brings us back to the 30 to 50 station thing. While it might be fair to say that 50 stations is sufficient, that claim must be qualified in terms of time frame. 50 stations for 1 month would be pretty much useless, but 50 stations for 1,000 years would have obvious value. So the claim is contingent on the accompanying timeline, which doesn’t seem to be stated by the proponents.
Calculating a suitable timeline for 50 stations…. above my pay grade.

June 26, 2014 4:52 pm

Eugene WR Gallun says:
June 26, 2014 at 1:14 pm
Excellent! But I was hoping for “Dark Satanic mills” in there somewhere.

Brandon C
June 26, 2014 4:52 pm

Sorry Nick, but the TOBS adjustment assumes a uniform error that can be simply corrected. But there is no simple or even defensible way to ascertain which measurements were correct and which ones were not, so the assumption of uniformity only adds uncertainty. A truly scientific approach would be to leave the data alone and develop error bars to reflect the uncertainty caused by the unknown past errors. Making assumptions of uniform error, adjusting the data, and then presenting it as having increased or equal accuracy is simply blatantly unscientific.
It results in the “correct” answer to be sure, but it leaves the ridiculous situation of adjusting down past temps so that recorded exceptional heat events, in many alternate mediums such as news, correspond with average or cold temps in the “new and improved” temp records. Everyone is so enthralled with their cleverness in finding reasons why they can adjust past temps, nobody is bothering to do even minor due diligence to see if they can alternatively confirm if the adjustments are accurate.
As you are so eager to get the adjustments out there, and in an automated way that happens without oversight, you seem to forget that scientists would be looking for independent ways to cross check if the adjustments were correct. For example hire a few students to review old newspapers and see if examples of very hot records in the station data can be corroborated by stories about the extreme heat in the news. If your method says that the records from a station should be reduced, but on the ground news was talking about heat that was killing livestock, then your adjustments are crap. I have seen many examples of this very thing, so I know it happens. Is it a rarity? Who knows because nobody checks.
But a good example that brings past cooling adjustments into question is that TOBS does not affect the max temp records for a station, so all the past heat records have to be acknowledged and cannot be adjusted. And since the majority of heat records were set in the 20’s, 30’s and 40’s, why did so many happen during a time when it was supposedly so much cooler on average than today, according to your unbiased programs that keep cooling the past more every year? I have heard endlessly from climate scientists and climate reports about how a warmer world means heat records will fall with more regularity. So doesn’t that contradict the claim that the past was so much cooler and it was just careless station operators that made it seem warmer? I have read interviews with station operators and even talked to two in person, and they all think the TOBS adjustment reasoning is garbage, because they identified the problem back then and adjusted their methods to avoid it. But I guess it is just too tempting to assume they are all wrong and you’re so much smarter.
It’s like the sea surface inlet measurements. There are lots of logs and records indicating that the switch from buckets was not even close to done by the time the post-revisionist climate scientists declared it was. But the adjustments it causes are “correct”, so no need to actually verify.
I get so frustrated by such sloppy and assumption-filled science, pretending that its huge assumptions are 100% correct and cannot be questioned. If climate science were forced to act like proper scientists and statisticians, the certainty and smug assurances would disappear. And we would be left with a lot more uncertainties, but more humble and far more accurate estimates of the climate.

June 26, 2014 4:53 pm

Never mind. I must have skipped that stanza. Well done!

angech
June 26, 2014 4:58 pm

A second small request: can you put up the first graph that Zeke shows at the Blackboard,
26 June, 2014 (14:21) | Data Comparisons | By: Zeke,
which shows with clarity the adjustment of past records by up to 1.2 degrees from 1920 to 2014.
“You do not change the original real data ever.”
Zeke wrote (Comment #130058) at the Blackboard,
June 7th, 2014 at 11:45 am. Very relevant to everyone:
“Mosh, Actually, your explanation of adjusting distant past temperatures as a result of using reference stations is not correct. NCDC uses a common anomaly method, not RFM.The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.”
So the past temperature is being lowered all the time, based on the newest input data, which is itself being adjusted by reference to only 50% of the real stations, many of which are referenced to surrounding cities and airports, and to previous recent hot years [shades of the Gergis/Mann technique here] as having more temporal and spatial weighting.
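[Note: a minimal sketch of the mechanism Zeke describes in that quote, with toy numbers rather than NCDC’s actual pairwise algorithm; the breakpoint size is supplied by hand here, whereas NCDC detects it from neighbor comparisons.]

```python
import numpy as np

# A hypothetical 100-year station record with a +0.3 C step at 2006,
# e.g. an undocumented equipment or siting change.
years = np.arange(1915, 2015)
record = np.full(years.size, 14.0)
record[years >= 2006] += 0.3

def remove_breakpoint(series, break_index, step):
    """Hold the most recent segment fixed and shift everything before the break."""
    adjusted = series.copy()
    adjusted[:break_index] += step        # the entire past is raised (or lowered)
    return adjusted

break_index = int(np.argmax(years >= 2006))
adjusted = remove_breakpoint(record, break_index, step=+0.3)

print("Raw 1915 value     :", record[0])        # 14.0
print("Adjusted 1915 value:", adjusted[0])      # 14.3: the past moved, not 2006+
print("2014 value (both)  :", record[-1], adjusted[-1])
```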

June 26, 2014 4:59 pm

As shown by the problems with just US “data” as collected & adjusted, GAST is far too hypothetical a concept & flimsy a number, with such minor changes, within margin of error, that trying to base public policy upon it is at best folly, but closer to insanity, especially if it means energy starvation, the impoverishment of billions & death of at least tens of thousands, but probably millions, given enough time.

angech
June 26, 2014 5:02 pm

REPLY: Links to these comments? Don’t make me chase them down please if you want me to look at them -Anthony
Will do ASAP.

Paul in Sweden
June 26, 2014 5:02 pm

??? As long as the errors & misguided methodology (mythology) are well documented, the wrongful conclusions can be excused. Yup! That sounds like Climate ‘Science’.
=========
dbstealey, True Dat indeed – excellent graph!

Nick Stokes
June 26, 2014 5:04 pm

Brandon C says: June 26, 2014 at 4:52 pm
“It results in the “correct” answer to be sure, but it leaves the ridiculous situation of adjusting down past temps so that recorded exceptional heat events, in many alternate mediums such as news, correspond with average or cold temps in the “new and improved” temp records.”

No, it doesn’t do that at all. I doubt that anyone uses adjusted temperatures when talking of exceptional temperatures. They shouldn’t.
Adjustments have a specific purpose – to reduce bias preparatory to calculating a space average. They are statistical; not intended to repair individual readings.
TOBS, for example, would not affect a record max. That temperature is as recorded. The problem is that if you reset the instrument at 5 pm, that can make the next day’s max hot, even if the actual day was quite cool. That is a statistical bias, for which it is appropriate to correct a monthly average.
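[Note: a minimal worked example of the effect Nick describes, using hypothetical hourly temperatures; a max/min thermometer read and reset at 5 pm “carries” a hot afternoon into the next observation day’s maximum.]

```python
import numpy as np

# Two hypothetical days of hourly temperatures (deg C): a hot day followed by a
# genuinely cool day. Day 1 peaks near 38 C at 4 pm; day 2 peaks near 22 C.
day1 = 20 + 18 * np.exp(-((np.arange(24) - 16) ** 2) / 18.0)
day2 = 14 + 8 * np.exp(-((np.arange(24) - 14) ** 2) / 18.0)
temps = np.concatenate([day1, day2])

def recorded_max_for_day2(temps, obs_hour):
    """Max/min thermometers assign the 24 h ending at the observation time.

    With a midnight observation (obs_hour=24) that window is calendar day 2
    itself; with a 5 pm observation (obs_hour=17) it starts at 5 pm on day 1,
    when the thermometer is still reading the hot afternoon.
    """
    end = 24 + obs_hour
    return temps[end - 24 : end].max()

print("True calendar-day-2 max         : %.1f C" % temps[24:48].max())
print("Recorded day-2 max, obs at 24:00: %.1f C" % recorded_max_for_day2(temps, 24))
print("Recorded day-2 max, obs at 17:00: %.1f C" % recorded_max_for_day2(temps, 17))
# The 5 pm observation credits the cool day with ~37 C left over from the hot
# afternoon before it, the warm bias a TOBS adjustment tries to remove.
```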

June 26, 2014 5:09 pm

Anyone have a paper showing a test of the prediction from theory that, “… there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.“?
I’ve read Hansen and Lebedeff’s 1987 paper showing >=0.5 correlation of station temperatures across about 1000 km. However, at any distance and across every latitudinal range the scatter in correlation is significant, and it becomes very large in the tropics. This will mean the “infilling” of any specific temperature record will be subject to a large uncertainty.
Before the Steig-Schmidt 50 well-placed stations solution is applied, a serious validation study is required to show that their solution will in fact produce the reliable and accurate scalar temperature field claimed for it.
That’s how science is done, remember? Prediction from theory, test by experiment. One doesn’t apply a theory merely because of its convenience and compact charm. In judging theory, pretty doesn’t count for much; if it fails the test, it’s wrong.

Nick Stokes
June 26, 2014 5:09 pm

Poptech says: June 26, 2014 at 4:14 pm
“Anthny, I cannot warn you enough on this. Do not give Mr. Mosher any pre-published data, his only intention is to find anything he can distort to make you look bad.”

Where have I heard something like that before???

maccassar
June 26, 2014 5:10 pm

I am shocked that no one has tried to set up a global network of pristine locations in every country. How much error can there be by not covering spatially as extensively as we do now? Has anyone ever attempted this?

David Riser
June 26, 2014 5:10 pm

Anthony,
I like your idea, but I gather the 50 stations would not be used to determine the average global temperature, but rather the trend of the global temperature, since the Rossby radius gives the size of an area of cold air under the warm, which would lead to highly correlated temperatures but not the same temperature. So I would suggest, if you were to do this, do it as a test case in the US: pick the most pristine and oldest records, use anomalies, and see what you get. Compare the last 30 years or so to the satellite records for the US only. Should be interesting.
v/r,
David Riser

Nick Stokes
June 26, 2014 5:15 pm

Pat Frank says: June 26, 2014 at 5:09 pm
“This will mean the “infilling” of any specific temperature record will be subject to a large uncertainty.
Before the Steig-Schmidt 50 well-placed stations solution is applied, a serious validation study is required to show that their solution will in fact produce the reliable and accurate scalar temperature field claimed for it.”

There will be uncertainty. But the purpose is to obtain an overall integral (or weighted sum). So the key test is whether it does that well, not the uncertainty in individual readings. Bias is more important than unbiased noise.

June 26, 2014 5:18 pm

Brandon, your comment is exactly right, and goes exactly to the heart of the mess that is global air temperature. No one is paying attention to systematic errors. Instead convenient assumptions are made that allow a pretense of conclusions.
That same failure of scientific integrity permeates all of consensus climate science.

Old England
June 26, 2014 5:19 pm

It struck me, and maybe wrongly, that from the Las Vegas data the distortion from UHI is primarily in the night time T min but the day time T max is unaffected ? Also if a mere 30 – 50 stations globally are sufficient to accurately monitor global temps – what would we find from 50 continuous, site unchanged, UHI unaffected and entirely rural stations plotted over the last 100 or 150 years ?

June 26, 2014 5:19 pm

Nick wrote, “the purpose is to obtain an overall integral (or weighted sum).”
No, Nick. The purpose is to obtain physically accurate data.

u.k.(us)
June 26, 2014 5:22 pm

Latitude says:
June 26, 2014 at 4:17 pm
So, now it is a pissing contest ?
===
UK, it’s very important information to a lot of people…and it’s the root of all temp reconstructions
If rural stations are closing and they are infilling with urban stations….
+++++++++++++++++++++++++++++++++
Of course it is, good data is needed for any solution.
I was just playing on the million dollar bet 🙂

June 26, 2014 5:29 pm

Nick Stokes says:
June 26, 2014 at 5:15 pm
Where have I heard something like that before???
You haven’t; I clearly said distort. Every skeptic here trusts people like Steve McIntyre with Anthony’s data because he has no ulterior motive to distort anything, unlike Mr. Mosher, and he does not launch propaganda PR blitzes like the ones Mr. Mosher has been affiliated with in the past (BEST and Muller). This is not some honest skeptical inquiry like Steve McIntyre’s work; he is actually qualified for the task. Mr. Mosher has done nothing but attempt to obfuscate temperature data arguments based on his ideologically biased beliefs.

Latitude
June 26, 2014 5:30 pm

Latitude says:
June 26, 2014 at 4:11 pm
[suggest you resubmit the question without the combative snark; either you are interested in an answer or you want to pile on -mod]
====
Sorry, didn’t mean to come across as Nik or Willis…won’t happen again
