On ‘denying’ Hockey Sticks, USHCN data, and all that – part 2

In part one of this essay, which you can see here, I got quite a lot of feedback from both sides of the climate debate. Some people thought that I was spot on with my criticisms, while others thought I had sold my soul to the devil of climate change. It is an interesting life when I am accused of being in cahoots with both “big oil” and “big climate” at the same time. That aside, in this part of the essay I am going to focus on areas of agreement and disagreement and propose a solution.

In part one of the essay we focused on the methodology that created a hockey-stick-style graph as an artifact of missing data. Because the missing data caused a spurious spike at the end, Steve McIntyre commented, suggesting that it was more like the Marcott hockey stick than Mann’s:

Steve McIntyre says:

Anthony, it looks to me like Goddard’s artifact is almost exactly equivalent in methodology to Marcott’s artifact spike – this is a much more exact comparison than Mann. Marcott’s artifact also arose from data drop-out.

However, rather than conceding the criticism, Marcott et al have failed to issue a corrigendum and their result has been widely cited.

In retrospect, I believe McIntyre is right in making that comparison. Data dropout is the central issue here and when it occurs it can create all sorts of statistical abnormalities.

Despite some spirited claims in comments in part one about how I’m “ignoring the central issue”, I don’t dispute that data is missing from many stations; I never have.

It is something that has been known about for years and is actually expected in the messy data-gathering process of volunteer observers, electronic systems that don’t always report, and equipment and/or sensor failures. In fact, there is likely no weather network in existence that has perfect data without some being missing. Even the new U.S. Climate Reference Network, designed to be state-of-the-art and as perfect as possible, has a small amount of missing data due to failures of uplinks or other electronic issues, seen in red:

CRN_missing_data

Source: http://www.ncdc.noaa.gov/crn/newdaychecklist?yyyymmdd=20140101&tref=LST&format=web&sort_by=slv

What is in dispute is the methodology, which, as McIntyre observed, created a false “hockey stick” shape much like the one we saw in the Marcott affair:

marcott-A-1000[1]

After McIntyre corrected the methodology used by Marcott, dealing with faulty and missing data, the result looked like this:

 

alkenone-comparison

McIntyre pointed this out in comments on part 1:

In Marcott’s case, because he took anomalies at 6000BP and there were only a few modern series, his results were an artifact – a phenomenon that is all too common in Team climate science.

So, clearly, the correction McIntyre applied to Marcott’s data made the result better, i.e. more representative of reality.

That’s the same sort of issue we saw in Goddard’s plot: the data thins out near the present-day endpoint.

Goddard_screenhunter_236-jun-01-15-54

[ Zeke has more on that here: http://rankexploits.com/musings/2014/how-not-to-calculate-temperatures-part-3/ ]
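To see how data dropout near the endpoint can manufacture a spike all by itself, here is a minimal sketch in Python using made-up station data (the numbers are assumptions for illustration, not actual USHCN values): every synthetic station is trend-free, but the cooler-baseline stations stop reporting in the final two years, so a plain average of whatever still reports jumps upward at the end.

import numpy as np

rng = np.random.default_rng(0)
n_stations, n_months = 100, 240

# Each station has a different baseline climate, but no station has any trend.
baselines = rng.normal(12.0, 5.0, size=n_stations)                 # degrees C, assumed
temps = baselines[:, None] + rng.normal(0.0, 0.5, size=(n_stations, n_months))

# Dropout concentrated near the endpoint: in the last 24 months the
# cooler-baseline stations stop reporting first.
reporting = np.ones_like(temps, dtype=bool)
drop_order = np.argsort(baselines)                                  # coolest first
for i, month in enumerate(range(n_months - 24, n_months)):
    n_dropped = int(n_stations * (i + 1) / 48)                      # up to half the network
    reporting[drop_order[:n_dropped], month] = False

# "Averaged absolutes": the mean of whatever happens to report each month.
avg = np.array([temps[reporting[:, m], m].mean() for m in range(n_months)])

print("mean of first 12 months:", round(avg[:12].mean(), 2))
print("mean of last 12 months: ", round(avg[-12:].mean(), 2))
# The final year comes out noticeably warmer even though every station is
# flat; the "spike" is purely an artifact of which stations remain in the mix.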

While I would like nothing better than to be able to use raw surface temperature data in its unadulterated “pure” form to derive a national temperature and to chart the climate history of the United States (and the world), the fact is that the national USHCN/co-op network and the GHCN are in such bad shape, and have become so heterogeneous, that this is no longer possible with the raw data set as a whole.

These surface networks have had so many changes over time (stations moved, times of observation changed, equipment changes, maintenance issues, encroachment by micro-site biases and/or UHI) that using the raw data for all stations on a national or even global scale gives you a result that is no longer representative of the actual measurements. There is simply too much polluted data.

A good example of polluted data can be found at the Las Vegas, Nevada USHCN station:

LasVegas_average_temps

Here, growth of the city and its population has resulted in a clear and undeniable night-time UHI signal, with lows gaining 10°F since measurements began. It has been studied and acknowledged by the “sustainability” department of the city of Las Vegas, as seen in this document. Dr. Roy Spencer, in his blog post, called it “the poster child for UHI” and wonders why NOAA’s adjustments haven’t removed this problem. It is a valid and compelling question. But at the same time, if we were to use the raw data from Las Vegas we would know it has been polluted by the UHI signal, so is it representative in a national or global climate presentation?

LasVegas_lows

The same trend is not visible in the daytime Tmax temperature; in fact, it appears there has been a slight downward trend since the late 1930s and early 1940s:

LasVegas_highs

Source for data: NOAA/NWS Las Vegas, from

http://www.wrh.noaa.gov/vef/climate/LasVegasClimateBook/index.php

The question then becomes: Would it be okay to use this raw temperature data from Las Vegas without any adjustments to correct for the obvious pollution by UHI?

From my perspective, the thermometer at Las Vegas has done its job faithfully. It has recorded what actually occurred as the city has grown. It has no inherent bias; the change in its surroundings has biased it. The issue, however, arises when you start using stations like this to search for the posited climate signal from global warming. Since the night-time temperature increase at Las Vegas is almost an order of magnitude larger than the signal posited to exist from carbon dioxide forcing, that AGW signal would clearly be swamped by the UHI signal. How would you find it? If I were searching for a climate signal by examining stations, rather than applying blind automated adjustments, I would most certainly remove Las Vegas from the mix, as its raw data is unreliable: it has been badly and likely irreparably polluted by UHI.
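To put rough numbers on that “order of magnitude” point, here is a back-of-the-envelope sketch; the trend values below are assumptions chosen for illustration, not the measured Las Vegas figures.

# Illustrative arithmetic only; the trends below are assumed, not measured.
record_length_years = 75
uhi_trend_per_decade_f = 1.3      # enough to give roughly 10 F of night-time UHI
posited_agw_per_decade_f = 0.15   # assumed background warming signal

uhi_total = uhi_trend_per_decade_f * record_length_years / 10
agw_total = posited_agw_per_decade_f * record_length_years / 10

print(f"UHI contribution over the record: {uhi_total:.1f} F")
print(f"Posited AGW contribution:         {agw_total:.1f} F")
print(f"Ratio UHI / AGW:                  {uhi_total / agw_total:.0f}x")
# A roughly 1 F background signal cannot be recovered from this station's raw
# minimums unless the roughly 10 F urban signal is known almost exactly.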

Now, before you get upset and claim that I don’t want to use raw data (or, as some call it, “untampered” or unadjusted data), let me say nothing could be further from the truth. The raw data represents the actual measurements; anything that has been adjusted is not fully representative of the measurement reality, no matter how well-intentioned, accurate, or detailed those adjustments are.

But, at the same time, how do you separate all the other biases that have not been dealt with (like Las Vegas) so you don’t end up creating national temperature averages with imperfect raw data?

That my friends, is the $64,000 question.

To answer that question, we have a demonstration. Over at The Blackboard blog, Zeke has plotted something that I believe illustrates the problem.

Zeke writes:

There is a very simple way to show that Goddard’s approach can produce bogus outcomes. Lets apply it to the entire world’s land area, instead of just the U.S. using GHCN monthly:

Averaged Absolutes

Egads! It appears that the world’s land has warmed 2C over the past century! Its worse than we thought!

Or we could use spatial weighting and anomalies:

 

Gridded Anomalies

Now, I wonder which of these is correct? Goddard keeps insisting that its the first, and evil anomalies just serve to manipulate the data to show warming. But so it goes.

Zeke wonders which is “correct”. Is it Goddard’s method of plotting all the “pure” raw data, or is it Zeke’s method of using gridded anomalies?

My answer is: neither of them is absolutely correct.

Why, you ask?

It is because both contain stations like Las Vegas that have been compromised by changes in their environment, the station itself, the sensors, the maintenance, the time of observation, data loss, etc. In both cases we are plotting data that is a huge mishmash of station biases that have not been dealt with.
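For readers who want to see the mechanics, here is a small sketch (synthetic stations, assumed numbers) of why the two methods can diverge when the station mix changes over time. It illustrates the composition effect only; it is not a reproduction of Zeke’s GHCN calculation, and neither method in the sketch removes station biases such as UHI.

import numpy as np

rng = np.random.default_rng(1)
n_years = 100
true_trend = 0.007                     # assumed common warming, degrees C per year
years = np.arange(n_years)

def station(baseline):
    """A station that warms at the true trend around its own baseline climate."""
    return baseline + true_trend * years + rng.normal(0.0, 0.3, size=n_years)

warm = np.array([station(b) for b in rng.normal(20.0, 1.0, size=50)])   # warm sites
cold = np.array([station(b) for b in rng.normal(2.0, 1.0, size=50)])    # cold sites
temps = np.vstack([warm, cold])

# The cold-baseline stations stop reporting half-way through the record.
reporting = np.ones((100, n_years), dtype=bool)
reporting[50:, n_years // 2:] = False

# Method 1: average the absolute temperatures of whatever stations report.
absolutes = np.array([temps[reporting[:, y], y].mean() for y in range(n_years)])

# Method 2: express each station as anomalies from its own mean over the
# period it reports, then average the anomalies.
own_means = np.array([temps[i, reporting[i]].mean() for i in range(100)])
anomalies = temps - own_means[:, None]
anomaly_avg = np.array([anomalies[reporting[:, y], y].mean() for y in range(n_years)])

print("absolute-average change:", round(absolutes[-10:].mean() - absolutes[:10].mean(), 2))
print("anomaly-average change: ", round(anomaly_avg[-10:].mean() - anomaly_avg[:10].mean(), 2))
# The absolute average shows several degrees of spurious "warming" when the
# cold stations drop out; the anomaly average stays near the 0.6 C or so of
# warming actually built into every station.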

NOAA tries to deal with these issues, but their effort falls short. Part of the reason it falls short is that they are trying to keep every bit of data and adjust it in an attempt to make it useful, and to me that is misguided, as some data is just beyond salvage.

In most cases, the cure from NOAA is worse than the disease, which is why we see things like the past being cooled.

Here is another plot from Zeke just for the USHCN, which shows Goddard’s method “Averaged Absolutes” and the NOAA method of “Gridded Anomalies”:

Goddard and NCDC methods 1895-2013

[note: the Excel code I posted was incorrect for this graph, and was for another graph Zeke produced, so it was removed, apologies – Anthony]

Many people claim that the “Gridded Anomalies” method cools the past, and increases the trend, and in this case they’d be right. There is no denying that.

At the same time, there is no denying that the entire CONUS USHCN raw data set contains all sorts of imperfections, biases, UHI, data dropouts, and a whole host of problems that remain uncorrected. It is a Catch-22: on one hand the raw data has issues; on the other, at a bare minimum some sort of infilling and gridding is needed to produce a representative signal for the CONUS, but in producing that, new biases and uncertainties are introduced.

There is no magic bullet that always hits the bullseye.

I’ve known and studied this for years; it isn’t a new revelation. The key point here is that both Goddard and Zeke (and by extension BEST and NOAA) are trying to use the ENTIRE USHCN dataset, warts and all, to derive a national average temperature. Neither method produces a totally accurate representation of the national temperature average. Keep that thought.

While both methods have flaws, Goddard raised one good point, and an important one: the rate of data dropout in USHCN is increasing.

When data gets lost, NOAA infills it with data from nearby stations, and that’s an acceptable procedure, up to a point. The question is: have we reached a point of no confidence in the data because too much has been lost?

John Goetz asked the same question as Goddard in 2008 at Climate Audit:

How much Estimation is too much Estimation?

It is still an open question, and without a good answer yet.

But at the same time as we are seeing more and more data loss, Goddard is claiming “fabrication” of lost temperature data in the final product while also advocating using the raw surface temperature data for a national average. From my perspective, you can’t argue for both. If the raw data is becoming less reliable due to data loss, how can we use it by itself to reliably produce a national temperature average?

Clearly, with the mess the USHCN and GHCN are in, the raw data won’t produce a representative result of the true climate-change signal for the nation, because it is so horribly polluted with so many other biases. There are easily hundreds of stations in the USHCN that have been compromised the way Las Vegas has been, making the raw data, as a whole, mostly useless.

So in summary:

Goddard is right to point out that there is increasing data loss in USHCN and it is being increasingly infilled with data from surrounding stations. While this is not a new finding, it is important to keep tabs on. He’s brought it to the forefront again, and for that I thank him.

Goddard is wrong to say we can use all the raw data to reliably produce a national average temperature, because that same data is increasingly lossy and is also full of other biases that are not dealt with. [added: His method allows biases to enter that are mostly about station composition, and less about infilling; see this post from Zeke]

As a side note, claiming “fabrication” in a nefarious sense doesn’t help, and generally turns people off from open debate on the issue, because the process of infilling missing data wasn’t designed with any nefarious motive; it was designed to make the monthly data usable when small data dropouts occur, as we discussed in part 1 when we showed the B-91 form with missing data from a volunteer observer. Claiming “fabrication” only puts up walls, and frankly, if we are going to enact any change to how things get done in climate data, new walls won’t help us.

Biases are common in the U.S. surface temperature network

This is why NOAA/NCDC spends so much time applying infills and adjustments; the surface temperature record is a heterogeneous mess. But in my view, this process of trying to save messed up data is misguided, counter-productive, and causes heated arguments (like the one we are experiencing now) over the validity of such infills and adjustments, especially when many of them seem to operate counter-intuitively.

As seen in the map below, there are thousands of temperature stations in the U.S. co-op and USHCN networks. By our surface stations survey, at least 80% of the USHCN is compromised by micro-site issues in some way, and by extension, given the large sample size of the USHCN subset of the co-op network that we surveyed, that figure should translate to the larger network.

USHCN_COOP_Map

When data drops out of USHCN stations, data from nearby stations is used to infill the gaps; but when 80% or more of your network is compromised by micro-site issues, chances are all you are doing is infilling missing data with compromised data. I explained this problem years ago using a water-bowl analogy, showing how the true temperature signal gets “muddy” when data from surrounding stations is used to infill missing data:

bowls-USmap

The real problem is that the increasing amount of data dropout in USHCN (and in the co-op network and GHCN) may be reaching a point where infilling adds a majority of biased signal from nearby problematic stations. Imagine a well-sited, long-period rural station near Las Vegas that has its missing data infilled using Las Vegas data; you know the result will be warmer when that happens.
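Here is a minimal sketch of that scenario with synthetic numbers (the trend values and the naive infilling rule are assumptions for illustration, not NOAA’s actual pairwise method): a trend-free rural record has some of its missing months filled straight from an urban neighbor that carries a UHI trend, and the infilled series inherits warming the rural thermometer never saw.

import numpy as np

rng = np.random.default_rng(2)
n_months = 360                                    # 30 years of monthly values

# Rural station: shared weather noise, no trend at all.
weather = rng.normal(0.0, 0.5, size=n_months)
rural = weather

# Urban neighbor: the same weather plus an assumed UHI trend of 0.05 C/year.
urban = weather + (0.05 / 12) * np.arange(n_months) + rng.normal(0.0, 0.2, size=n_months)

# Suppose parts of the rural record are missing, increasingly so in later
# years, and each missing month is infilled directly from the urban neighbor.
p_missing = 0.4 * np.arange(n_months) / n_months              # 0% early, 40% late
missing = rng.random(n_months) < p_missing
infilled = np.where(missing, urban, rural)

def trend_per_decade(series):
    """Ordinary least-squares slope, converted to degrees per decade."""
    return np.polyfit(np.arange(n_months), series, 1)[0] * 120

print("rural trend, true record:     %+.2f C/decade" % trend_per_decade(rural))
print("rural trend, after infilling: %+.2f C/decade" % trend_per_decade(infilled))
# The infilled record picks up part of the neighbor's UHI trend even though
# the rural site itself never warmed.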

So, what is the solution?

How do we get an accurate surface temperature for the United States (and the world) when the raw data is full of uncorrected biases and the adjusted data does little more than smear those station biases around when infilling occurs? Some of our friends say a barrage of statistical fixes is all that is needed, but there is another, simpler way.

Dr. Eric Steig, at “Real Climate”, in a response to a comment about Zeke Hausfather’s 2013 paper on UHI, shows us a way.

Real Climate comment from Eric Steig (response at bottom)

We did something similar (but even simpler) when it was being insinuated that the temperature trends were suspect, back when all those UEA emails were stolen. One only needs about 30 records, globally spaced, to get the global temperature history. This is because there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.

For those who don’t know what the Rossby radius is, see this definition.

Steig claims 30 station records are all that are needed globally. In a comment some years ago (now probably lost in the vastness of the Internet) we heard Dr. Gavin Schmidt say something similar: that about “50 stations” would be all that is needed.

[UPDATE: Commenter Johan finds what may be the quote:

I did find this Gavin Schmidt quote:

“Global weather services gather far more data than we need. To get the structure of the monthly or yearly anomalies over the United States, for example, you’d just need a handful of stations, but there are actually some 1,100 of them. You could throw out 50 percent of the station data or more, and you’d get basically the same answers”

http://earthobservatory.nasa.gov/Features/Interviews/schmidt_20100122.php ]

So if that is the case, and one of the most prominent climate researchers on the planet (and his associate) says we need only somewhere between 30-50 stations globally…why is NOAA spending all this time trying to salvage bad data from hundreds if not thousands of stations in the USHCN, and also in the GHCN?

It is a question nobody at NOAA has ever really been able to answer for me. While it is certainly important to keep the records from all these stations for local climate purposes, why try to keep them in the national and global datasets when Real Climate scientists say that just a few dozen good stations will do just fine?
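The Schmidt/Steig claim is easy to test in principle with a subsampling experiment. The sketch below uses synthetic anomalies that share one large-scale signal (a crude stand-in for the spatial-correlation argument), not real GHCN records, so it only shows the logic of the test.

import numpy as np

rng = np.random.default_rng(3)
n_stations, n_years = 1100, 100

# Stand-in for spatially correlated anomalies: every station sees the same
# large-scale signal plus modest local noise.
large_scale = np.cumsum(rng.normal(0.0, 0.1, size=n_years))        # shared signal
anomalies = large_scale + rng.normal(0.0, 0.3, size=(n_stations, n_years))

full_network = anomalies.mean(axis=0)

# Repeatedly pick 50 stations at random and compare against the full network.
worst_gaps = []
for _ in range(200):
    subset = rng.choice(n_stations, size=50, replace=False)
    worst_gaps.append(np.abs(anomalies[subset].mean(axis=0) - full_network).max())

print("typical worst-year gap, 50 stations vs %d: %.2f C"
      % (n_stations, float(np.mean(worst_gaps))))
# If anomalies really are dominated by a shared large-scale signal, a few
# dozen stations reproduce essentially the same history as the full network;
# everything then hinges on how unbiased those few stations are.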

There is precedent for this: the U.S. Climate Reference Network, which has just a fraction of the stations of the USHCN and the co-op network:

crn_map

NOAA/NCDC is able to derive a national temperature average from these few stations just fine, and without the need for any adjustments whatsoever. In fact they are already publishing it:

USCRN_avg_temp_Jan2004-April2014

If it were me, I’d throw out most of the USHCN and co-op stations with problematic records rather than try to salvage them with statistical fixes, and instead locate the best stations, those with long records, no moves, and minimal site biases, and use those as the basis for tracking the climate signal. By doing so, not only do we eliminate a whole bunch of make-work with questionable and uncertain results, we also end all the complaints about data falsification and the quibbling over whose method really finds the “holy grail of the climate signal” in the US surface temperature record.

Now you know what Evan Jones and I have been painstakingly doing for the last two years since our preliminary siting paper was published here at WUWT and we took heavy criticism for it. We’ve embraced those criticisms and made the paper even better. We learned back then that adjustments account for about half of the surface temperature trend:

We are in the process of bringing our newest findings to publication. Some people might complain we have taken too long. I say we have one chance to get it right, so we’ve been taking extra care to effectively deal with all criticisms from then, and criticisms we have from within our own team. Of course if I had funding like some people get, we could hire people to help move it along faster instead of relying on free time where we can get it.

The way forward:

It is within our grasp to locate and collate stations in the USA and around the world that have as long and uninterrupted a record, and as much freedom from bias, as possible, and to make those a new climate data subset. I’d propose calling it the Un-Biased Global Historical Climate Network, or UBGHCN. That may or may not be a good name, but you get the idea.

We’ve found at least that many good stations in the USA that meet the criteria of being reliable and needing no major adjustments of any kind, including the time-of-observation change (TOB). Some do require the cooling-bias correction for the MMTS conversion, but that is well known and a static value that doesn’t change with time. Chances are, a similar set of 50 stations could be located around the world. The challenge is metadata, some of which is non-existent publicly, but with crowd-sourcing such a project might be doable, and then we could fulfill Gavin Schmidt’s and Eric Steig’s vision of a much simpler set of climate stations.
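As a rough sketch of how such a subset might be screened and corrected, consider the following; the field names and thresholds are hypothetical, and the MMTS offset is an assumed placeholder value rather than the published correction.

from dataclasses import dataclass
from typing import Optional

# Assumed screening thresholds and correction value; illustrative only.
MIN_YEARS = 80
MAX_PERCENT_MISSING = 5.0
MMTS_STATIC_CORRECTION_C = 0.4       # placeholder for the known static offset

@dataclass
class Station:
    station_id: str
    years_of_record: int
    percent_months_missing: float
    has_moved: bool
    had_tob_change: bool
    mmts_conversion_year: Optional[int]      # None if never converted

def qualifies(s: Station) -> bool:
    """Keep only long, nearly complete, unmoved records with no TOB change."""
    return (s.years_of_record >= MIN_YEARS
            and s.percent_months_missing <= MAX_PERCENT_MISSING
            and not s.has_moved
            and not s.had_tob_change)

def corrected_mean(s: Station, year: int, raw_mean_c: float) -> float:
    """Apply only the single static MMTS correction; nothing else is touched."""
    if s.mmts_conversion_year is not None and year >= s.mmts_conversion_year:
        return raw_mean_c + MMTS_STATIC_CORRECTION_C
    return raw_mean_c

# Hypothetical example station, converted to MMTS in 1985.
example = Station("HYPOTHETICAL_01", 95, 2.1, False, False, 1985)
print(qualifies(example))                    # True
print(corrected_mean(example, 1990, 12.3))   # 12.7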

Wouldn’t it be great to have a simpler, known-reliable set of stations rather than this mishmash that goes through the statistical blender every month? NOAA could take the lead on this, but chances are they won’t. I believe it is possible to do independently of them, and it is a place where climate skeptics can make a powerful contribution, one that would be far more productive than the arguments over adjustments and data dropout.

 

Roy Spencer
June 26, 2014 11:47 am

Yup! good stuff.

June 26, 2014 11:52 am

What you haven’t quite addressed, which Steve Goddard nails a lot
Is the changing of adjustments to past records that we’ve got
There are TOBS adjustments, fair enough, but then adjusted TOBS
And more TOBS changes, “fixing history” with major probs
===|==============/ Keith DeHavelle

Editor
June 26, 2014 11:54 am

There is evidence that a lot of the “Estimated” USHCN data, that Steve Goddard has found, has nothing to do with station dropout.
At Luling, Texas, which just happens to be at the top of the USHCN Final dataset, there are ten months in 2013 which are shown as “Estimated”. Yet station records indicate that full daily data is available for every single month.
So why have they been estimated?
Worse still, the temperature estimates are more than 1C higher than the real actual measurements. This should not be TOBS, as this is usually applied to historic temperatures, and not current ones.
http://notalotofpeopleknowthat.wordpress.com/2014/06/26/massive-temperature-adjustments-at-luling-texas/

June 26, 2014 12:02 pm

As I said in the comments for part one. Zeke may advocate anomalies, but in the Final monthly dataset (I repeat FINAL) there are only 51 stations with 360 values without an E flag from 1961-1990.
There is only ONE station with 360 values with no flags.
Only 64% of the 2013 values do not have an E flag.

June 26, 2014 12:10 pm

[snip – off-topic and we also don’t do bargaining here about what you might do if we do something -mod]

NikFromNYC
June 26, 2014 12:11 pm

I pointed out the Goddard/Marcott analogy on the 24th, twice, and was roundly attacked for it since my name isn’t McIntyre, attacked for my nefarious motives that are merely to strongly shun mistakes, outlandish conspiracy theories and outspoken crackpot comments on mainstream skeptical blogs that afford the usual hockey stick team members newfound media attention with very lasting damage to skepticism and also strongly hinder my ability to reach out to reasonable people who are strongly averse to extremism as nearly all normal people happen to be for very good reason.
http://bishophill.squarespace.com/blog/2014/6/24/watts-reasons-with-goddard.html
When I’m attacked like this I say you’re on your own now boys. The thousands of comments a week on these skeptical blogs represent a vast opportunity cost as Al Gore tutored activists continue to blanket news sites with comments, often only opposed by amateur hour skeptics who are readily shot down. I worked much harder for this than the vast majority of you mere blog citizens, and I’m sick of fighting two fronts.

June 26, 2014 12:11 pm

Several things one learns from studying statistics : 1) one gets increases in accuracy as sample size increases, but it is a matter of dimishing returns – you don’t need a large sample
to get a pretty good fix on the population’s characteristics , and 2) biased samples will kill you.

Brian R
June 26, 2014 12:12 pm

A couple of things. Has anybody done a comparison between the old USHCN and new USCRN data? I know the USCRN is a much shorter record but it should be telling about data quality of the USHCN. Also, if a majority of the measured temperature increase is from UHI affecting the night-time temps, why not use TMAX temps only? It seems to figure that if “Global Warming” were true it should affect daytime temps as much as night-time temps.

ossqss
June 26, 2014 12:22 pm

Thanks Anthony
Nice and thorough job of explaining things.
Looking forward to the revised paper.
Regards Ed

Doug Danhoff
June 26, 2014 12:25 pm

Anyone who has paid attention to weather and climate for the past fifty years knows that there has not been a spike in temperature. GISS claims a .6 C rise over the last decade or two and that is obviously a fabrication. Are they thinking that that is about as big a lie as they can tell without getting too much flack? In reality temperatures have trended down for thousands of years once the “adjustments” are removed. I have been told by many that I will never convince the liars by calling them liars, but I am far past the point of worrying about them, their feelings or sensibilities.
All I want now, and will never get, is their appropriate punishment for the consequences of their destructive self-serving participation in the biggest science scam of our lifetime.

Zeke Hausfather
June 26, 2014 12:26 pm

Brian R,
NCDC lets you compare USHCN and USCRN here:
http://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/time-series?datasets%5B%5D=uscrn&datasets%5B%5D=cmbushcn&parameter=anom-tavg&time_scale=p12&begyear=2004&endyear=2014&month=5
They are very similar over the period from 2004 to present (USCRN actually has a slightly higher trend). However, Goddard’s misconceptions notwithstanding, there has been very little adjustments to temperatures after 2004, so its difficult to really say whether raw or homogenized data agrees more with USCRN.
If you look at satellite records, raw data agrees better with RSS and homogenized data agrees better with UAH over the U.S. from 1979-present.

PeteJ
June 26, 2014 12:42 pm

Often Steve graphs data only from stations with long histories so in a sense he is already doing what you suggest. I just don’t like when people claiming to be scientists take daily averages, spatial averages, monthly averages, geographic averages and then averages all the averages together you kind of lose all frame of reference and the margin of error explodes exponentially, even more so if you insist on using anomalies.

onlyme
June 26, 2014 12:44 pm

if I remember correctly UAH 6 will be released soon, matching #RSS more closely.
[Nbr 3RSS (^”#” ?) perhaps? .mod]

Owen
June 26, 2014 12:45 pm

I find it odd that the past is always colder and the recent era always warmer after the adjustments. ALWAYS.
And I think many of you are very naïve to believe NOAA and GISS and NASA wouldn’t doctor the data to make the Climate Liars (global warmers) position look correct. The people running these organizations are appointed by a President who has decreed that the science is settled. I seriously doubt one of his appointed hacks is going to release data that doesn’t support the President. More than likely they are there to reinforce the President’s position, by hook or crook.
Stop being so damn gullible. Global warming has nothing to do with science. It’s a political agenda being imposed by people who have no inclination to play by the rules like the skeptics.

Louis
June 26, 2014 12:47 pm

Paul Homewood says:
June 26, 2014 at 11:54 am

Good work Paul. Is there anyone out there comparing estimated temperatures with actual recorded measurements, like you did with Luling Texas? Increasing temperatures of a rural site by 2.26C of warming since 1934 seems indefensible. It is doing the opposite of adjusting for UHI. I would like to hear their explanation if they have one.

OmegaPaladin
June 26, 2014 12:54 pm

Anthony,
What is the justification for adjusting past values, and is there any way to convey the increasing level of statistical uncertainty in the USHCN values, like confidence intervals or error bars on charts?

Johan
June 26, 2014 12:57 pm

I did find this Gavin Schmidt quote:
“Global weather services gather far more data than we need. To get the structure of the monthly or yearly anomalies over the United States, for example, you’d just need a handful of stations, but there are actually some 1,100 of them. You could throw out 50 percent of the station data or more, and you’d get basically the same answers”
http://earthobservatory.nasa.gov/Features/Interviews/schmidt_20100122.php

Alexej Buergin
June 26, 2014 12:59 pm

It is a bit OT, but I still would like to know if the annual mean temperature in Reykjavik in 1940 is, as the Icelanders say, 5°C, or, as GISS and Stokes say, 3°C?
http://stevengoddard.files.wordpress.com/2012/05/iceland-1.gif

Otter (ClimateOtter on Twitter)
June 26, 2014 1:03 pm

I seem to recall someone- was it Paul Homewood? – saying that adjustments had been made to daily temperatures all the way back into the 1880s. 1) Why would they need to make such adjustments, and B) why would they make the distant past even cooler than was recorded, while appearing to warm everything after 1963?

A C Osborn
June 26, 2014 1:03 pm

I cannot believe that you agree that 50-60 stations (even perfect stations) can give you the average temperature of the world.
How many countries are there in the world?
Do you believe that on average 1 thermometer per country would do it?

June 26, 2014 1:12 pm

Zeke: “there has been very little adjustments to temperatures after 2004”
Because all the dirty work was done in cooling the past. The trend is still up. And adjusting the past continues.

Dougmanxx
June 26, 2014 1:13 pm

What needs to stop is “activist scientists” saying we need to “act now”, while they have a “data” set that is so full of holes no rational person would use it. Just looking at the stations Geographically near me you get stuff like this:

USH00xxxxxx 2004  -695     -217      482      942     1668a    1865a    2141     1931     1795a    1097      601a     -72h       0
USH00xxxxxx 2005  -323     -100       96      938     1286     2285     2351     2249a    1851a    1174a     583a    -337a       0
USH00xxxxxx 2006   268c    -202      284a    1053     1505     1882     2367b    2161     1602      938a     563b     317a       0
USH00xxxxxx 2007   -83b    -764      421b     769     1664a    2012a    2092c    2184c    1848     1475      426a     -34f       0
USH00xxxxxx 2008  -220     -376a      21     1057a    1249a    2131a    2189     2048a    1807c     981a     367a    -140a       0
USH00xxxxxx 2009  -789d    -183      329b     924c    1492a    1944E    1978E    2119E    1700E     930E     710E    -146E       0
USH00xxxxxx 2010  -455E    -436E     443E    1174E    1688E    2164E    2349E    2249E    1777E    1144E     481E    -433E       0
USH00xxxxxx 2011  -606E    -311E     214E     932E    1583E    2064E    2476E    2110E    1707E    1080E     750E     211E       0
USH00xxxxxx 2012   -87E      61E    1011E     839E    1823E    2095E    2462E    2091E    1629E    1058E     349E     251E       0
USH00xxxxxx 2013  -189E    -295E      53E     848E    1675E    2008E    2219E    2022E    1693E    1154E     300E    -105E       0
USH00xxxxxx 2014  -793E    -699E    -169E     929E    1564E    1989E   -9999    -9999    -9999    -9999    -9999    -9999        0

No reported data since 2009, but a “full record” nonetheless. Instead of making up data, places like this need to either be dropped, or only the truncated history used. Or you just beg for people to claim nefarious intent. I’ll stick with the opinion Goddard helped me form: none of you know whether it’s getting warmer, colder, or staying the same. He doesn’t either, how could he?
According to the paper records that you can see here: http://www.ncdc.noaa.gov/IPS/coop/coop.html the above referenced station was “officially” closed in 2011, and yet has data up until today. This is what we are supposed to accept as “good science”? Argue all you want, it isn’t. What happened to scientists who could say: “I don’t know”? Because none of you do.
REPLY: Since you didn’t identify what station it is, I can’t check. However, the “closure” you see in 2011 might actually be a station move, and the data since then is from the new station. Sometimes observers die, or decide they don’t want to do it anymore. If you can tell me what station it is (name and the number, or optionally lat/lon) I can check and tell you if that is the case. – Anthony

G. Karst
June 26, 2014 1:14 pm

What “circa 1950ish futurist” would have predicted that by the year 2014, our relatively advanced civilization, would not have sorted out global surface temperatures… yet? GK

Eugene WR Gallun
June 26, 2014 1:14 pm

i just semi-finished this today. Surprisingly it is on topic so I will post it. Professor Jones is English so I tried to write something the English might enjoy (or maybe groan loudly about). Americans might not appreciate that right behind “Rule Britannia” the song “Jerusalem” holds second place in the national psyche of England.
PROFESSOR PHIL JONES
The English Prometheus
To tell the tale as it began
An ego yearned
Ambition burned
Inside a quiet little man
No one had heard of Phillip Jones
Obscure to fame
(And likewise blame)
The creep of time upon his bones
Men self-deceive when fame is sought
Their fingers fold
Their ego told
That fire is what their fist has caught
So self-deceived, with empty hand
Jones made it plain
That Hell would reign
In England’s green and pleasant land
Believe! Believe! In burning heat!
In mental fight
To get it right
I’ve raised some temps and some delete!
And with his arrows of desire
He pierced the backs
Of any hacks
That asked the question — where’s the fire?
East Anglia supports him still
Whitewash was used
The truth abused
Within that dark Satanic mill
Prometheus or Burning Man?
The truth will out
And none will doubt
The evil that this imp began
Eugene WR Gallun

Lance Wallace
June 26, 2014 1:20 pm

Anthony says
“Wouldn’t it be great to have a simpler and known reliable set of stations…We’ve found at least this many good stations in the USA that meet the criteria of being reliable…
Anthony, your crowd-source approach has already found most or all of the US stations that meet (at the 1 or 2 level) the siting criteria. Why not publish your best list? Then let the crowd (skeptics and lukewarmers alike) comment on individual stations (e.g., using their specialized knowledge about individual stations to comment on whether they deserve to be there), with a final selection made in a few months after all comments are received? That would constitute a “consensus” best set for the US.
It would be great if the Surface Stations project could be expanded to be global, but realistically there seems no chance.

Vieras
June 26, 2014 1:20 pm

It’s so obvious that you should only use quality measurements. Identify less than 100 very good stations, document them thoroughly and use only them. There is no point in throwing thousands of bad ones in the equation. That’s what HARRY_READ_ME.TXT is all about. A complete and unbelievable mess.
When you have those 100 stations, the next step is to use only a subset of them. Pick randomly 10 to simulate a perfect proxy reconstruction. When you do a number of those, will you get different results? If you do, there’s no chance whatsoever, that an actual proxy reconstruction (trees, corals, whatever) can tell us anything reliable about past temperatures.

A C Osborn
June 26, 2014 1:21 pm

What the Raw data shows is that CO2 has absolutley nothing to do with the Temperature increases.
Temps were stable at 12 degrees from 1900 to 1950.
Jumped in a year or 2 over 1.5 degrees to over 13.5 degrees and slowly reduced back to 12.25 degrees over the next 50 years.
Jumped 1.75 in another year and then fell back over a degree after 10 years and has remained stable at 13 degrees for about 10 years.
Warmists would not want the world to see this graph.

June 26, 2014 1:22 pm

“Goddard is wrong to say we can use all the raw data to reliably produce a national average temperature because the same data is increasingly lossy and is also full of other biases that are not dealt with. ~AW
But that is not exactly what Goddard is saying. Goddard is saying that all adjustments always go in favor of global warming. Always. And we can never leave the G** D**** past alone. The past always cools. 1940 is a moving Fracking Target! Come on now; the past continues to change? Is this one of those very tiresome scifi novels about time travel?
Goddard is not telling you how to come up with a national average. He is telling you that the present method of doing that is not sane. It does not pass a sanity test. (I would use stronger language but the mods tell me that the software will then stuff my comment in moderation)
At present they are using F**udulent methods to push a national agenda of De-industrialaztion on us against our wills.

June 26, 2014 1:22 pm

Las Vegas is a bit of an outlier due to the fact that within 3 or 4 decades a desert was changed into a green paradise. More vegetation does in fact trap heat. (see my table on minima for Las Vegas). The opposite, i.e. the removal of trees [vegetation] does exactly the opposite. (see my table on minima for Tandil, Argentina). It causes cooling.
As to how to balance a sample of weather stations, I have explained that too,
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/

June 26, 2014 1:33 pm

A couple of issues and a repeated request I made two years ago
Issue 1
USHCN is a tiny fraction of all the daily data available for the US.
Issue 2.
over two years ago you “published” a draft of your paper.
That paper relied on ‘rating’ some 700 stations.
My request 2 years ago was that you release the station ratings under a NON disclosure release.
That is I will sign a licence to not use your data for any publication or transmit it to anyone.
I will use that data for one purpose only.
To create a automated process for rating stations.
You don’t even have to send the full data. In fact I prefer to only have half of the stations.
Half of the stations from each category ranking. This should be enough for me to build a classification tree. Then when you publish your paper I will get the other half to test the tree.
REPLY: While it is tempting, as you know I’ve been burned twice by releasing data prior to publication. Once by Menne in 2009 (though Karl made him do it) and again by your boss at BEST, Muller, who I had an agreement with in writing (email) not to use my data except for publishing a paper. Two weeks later he was parading it before Congress and he came up with all sorts of bullshit justification for that. I’ll never, ever, trust him again.
My concern is that you being part of BEST, that “somehow” the data will find its way back to Muller again, even if you yourself are a man of integrity. Shit happens, but only if you give it an opportunity and I’m not going to. Sorry. – Anthony

Glacierman
June 26, 2014 1:35 pm

You asked: “So if that is the case, and one of the most prominent climate researchers on the planet (and his associate) says we need only somewhere between 30-50 stations globally…why is NOAA spending all this time trying to salvage bad data from hundreds if not thousands of stations in the USHCN, and also in the GHCN?”
So they can tell a story, as opposed to presenting the data. The data doesn’t tell the story they want told.
Another good question is why is so much time and energy wasted discussing the mashing of numbers that don’t give any meaningful information? All of the temp series/statistical masters. Maybe there is a new term in the field: climatological masturbation.

June 26, 2014 1:36 pm

@Col Mosby at 12:11 pm
1) one gets increases in accuracy as sample size increases, but it is a matter of dimishing returns – you don’t need a large sample to get a pretty good fix on the population’s characteristics ,
On the other hand, you need a very large sample to measurably reduce the reported uncertainty. This is the attraction to use 10,000 stations when 100 will get you close with unacceptable (to politicians) uncertainty.
The rub, I suspect, is that they are not adequately modeling the uncertainty ADDED to the analysis by including thousands of infill data points.
Watts: If it were me, I’d throw out most of the USHCN and co-op stations with problematic records rather than try to salvage them with statistical fixes, and instead, try to locate the best stations with long records, no moves, and minimal site biases and use those as the basis for tracking the climate signal.
Yes, sir. At the very least, accept that as an INDEPENDENT analytical process. They ought to achieve the same result as the more complicated NOAA, NCDC, BEST, CRU approaches. If they are not similar, THERE’s your real uncertainty.

Glacierman
June 26, 2014 1:36 pm

Mosher,
Could you please read before hitting the submit button?

June 26, 2014 1:40 pm

That’s interesting, your comments are in many ways a mirror of mine.
I’m just going to roll my eyes at the whole “fabricated” vs “infilled” thing and move on. Word games are silly. Data is interesting.
Also, Steve’s character has been questioned, as has yours. Again, eyeroll. I will accept results from Satan himself if they can be independently replicated by skeptical observers. Ad hominems are boring. Data is interesting.
I applaud your idea for a simplified network, but unless Rand Paul is elected I do not think it is remotely plausible that it will ever be adopted by the powers-that-be. OTOH, should WUWT start publishing its own such temperature, I will upgrade my applause to a standing ovation.
Here’s a point I’ve made a few times now, that I like more and more the more I consider it: isn’t it possible, even likely given the UHI problems you, McIntyre, and others have pointed out, that Steve’s simple average, despite all its major flaws, is more accurate than what USHCN is officially reporting? And I think we can probably prove that with correlations to proxies like Great Lakes ice, economic reports, etc.

Zeke Hausfather
June 26, 2014 1:40 pm

By the way, I have a new post up looking in more detail about potential biases in Goddard’s averaged absolutes approach for the U.S.: http://rankexploits.com/musings/2014/how-not-to-calculate-temperatures-part-3/

FundMe
June 26, 2014 1:42 pm

As far as I can tell…Each station has its own climatology by which the anomalies are calculated.
So when a station does not report and it is infilled using the nearby station the nearby stations climatology is used, the anomaly is just moved across to the missing station.
Now as far as I can tell this will alter the climatology of the infilled station to that of the nearby station and because RECENT data is favoured over older data this kicks up a red flag with regard to past climatology of the infilled station.
The Algorithm is trained to spot problems and because it trusts new data more than old data it changes the old data. Now this is reversed again when real data is finally reported the climatology calculation is done again. The past data is then adjusted again.
Man o Man talk about giving a junkie the keys to the pharmacy,

June 26, 2014 1:44 pm

“Maybe there is a new term in the field: climatological masturbation.”
I call it “Squiggology.”
That’s when you apply advanced stupidity to make better squiggles in graphs that have no relevance to real life.
Andrew

Frank K.
June 26, 2014 1:46 pm

“This is because there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.”
Are there papers which support this assertion? I would find this very hard to believe in general. In fact, the original Hansen and Lebedeff (1987) paper (despite their claim in the abstract) found very poor correlation over many parts of the globe. Read it for yourself…
http://pubs.giss.nasa.gov/abs/ha00700d.html

Frank K.
June 26, 2014 1:47 pm

Zeke – any valid links to NCDC data processing software (TOBS etc.) – or are you too busy??

June 26, 2014 2:00 pm

“Steig claims 30 station records are all that are needed globally. ”
I disagree. The problem is choosing which stations best represent the region. Microclimates often have varying trends which are most pronounced in minimum temperatures. Maximum temperatures will likely provide better trend data. Furthermore, maximum temperatures provide the more reliable measure of heat accumulation. However, droughts and land use changes can raise temperatures by lowering heat capacity, which can then raise maximum temperatures with much LESS heat.

RossP
June 26, 2014 2:04 pm

Mark Stoval ( Stoval) @ 1.22pm
Totally agree with your post !!

Eliza
June 26, 2014 2:10 pm

1. I would not trust Steig for anything. Isn’t he the fellow who smeared 1 Antarctic station all over Antarctica and said the whole place was warming? A completely debunked piece of work, I believe, by CA.
2. That said it seems that WUWT and Real Science have more in common than not.
3. All the adjustments are towards warming…not possible.
4. Goddard is correct.
5. WUWT is partly correct but only in this case and overlooks the BIG picture of other 100’s of examples. Adjustments upwards
6. http://notalotofpeopleknowthat.wordpress.com/2014/06/26/massive-temperature-adjustments-at-luling-texas/ is correct
7. LOL

Latitude
June 26, 2014 2:11 pm

Goddard is not trying to compute some temperature….
…he’s talking about the way they are doing it
His 40% was referring to infilling…..which he calls the F word
..his 30% was not some snap shot waiting on someone to mail in their results in a week like you played it
his 30% was for the YEAR 2013..and was not talking about waiting for some.stations to mail it in….it was 40% stations dropped…..gone…..bombed back to the stone age….not in existence
REPLY: “it was 40% stations dropped…..gone…..bombed back to the stone age….not in existence”
Nope, sorry. You are 110% wrong about that; many of those stations are still reporting. His method looks at every data point, so if for example Feb 2014 daily data from Feb 14th and 27th are missing because the observer didn’t write anything down for those days, that gets counted. The station reported the rest of February, March, etc. There are a lot of stations like that where observers may miss a couple of days per month. That’s a whole different animal than being “gone…..bombed back to the stone age….not in existence”.
Anthony

June 26, 2014 2:12 pm

Eugene WR Gallun @1:14 pm,
You have a real talent.

June 26, 2014 2:13 pm

Mr Layman here.
Am I in the ballpark when I say the sites and stations are being used to arrive at something they were never designed for?
I mean, an airport station was put there to give ground conditions to the pilots landing the planes.
Most, if not all, other stations were set up to give local conditions for local use (and weather forecasting, of course).
But now their readings are being used for global temperatures. Square peg, round hole.
It would seem to me that only satellites are in a position to give a good idea of a global temperature. But I don’t think even they can read the deep oceans.
To paraphrase an old song, “Does anybody really know what temp it is?”

GaryW
June 26, 2014 2:15 pm

Col Mosby: “Several things one learns from studying statistics : 1) one gets increases in accuracy as sample size increases, but it is a matter of dimishing returns – you don’t need a large sample to get a pretty good fix on the population’s characteristics , and 2) biased samples will kill you.”
I think I need to correct you on ‘1’ above. One actually may achieve increased precision as sample size increases, assuming sample variation is random. Accuracy and precision are independent characteristics of data samples. Accuracy can be no better than that of each individual observed value as determined by instrumentation calibration, coupling of sensor to process measured, and recording ability of values. Precision can be greater than accuracy, meaning that in some situations you may observe values with measurement increments finer than the accuracy of the measurement. In other situations, precision may be poorer than the accuracy.
‘2’ above attempts to reconcile this disparity between accuracy and precision by calling accuracy errors ‘bias’. There are many other kinds of errors besides what we would normally label as bias. One source of inaccuracy could simply be that the instrument used to collect an observation was never designed to provide high accuracy or even repeatability of measurements. Issues such as short and long term drift, aging, hysteresis, and non-linearity in each of these, as well as the quality of initial calibration and calibration markings all introduce errors within an instrument that are not removed by averaging observations. Likewise, things that affect the coupling of a sensor to a measured process can introduce errors in observations that do not average out over time.
So, simply assuming errors are a combination of random noise plus a static bias is quite invalid. Never accept claims of increased accuracy of observations based upon averaging, or statistically massaging data in any way. A claim of improved precision may be justified in some cases. However, accuracy is always that of each individual observation included in the statistical reduction.
I am pleased that Anthony acknowledges that the USHCN is doing its job correctly as designed. It was never intended as a fractional degree Fahrenheit or Celsius long term climate trend instrumentation system. That this system shows human impacts and large Urban Heat Island temperature trends is both correct and desired. People really are interested in what the temperature actually is, not necessarily what it would have been if humans hadn’t had an effect on it. The USCRN is our first real shot at looking for that long term trend. Let’s hope that trend will stay positive and not negative!

Nick Stokes
June 26, 2014 2:18 pm

“The question is, have we reached a point of no confidence in the data because too much has been lost?”
There is always a question of how much data you need. It’s familiar in continuum science. But there are answers.
We’re trying to get an average for the whole US. That is, to calculate an integral over space. Numerical integration is a well studied problem. You have a formula for a function that provides an interpolate everywhere, based on the data, and you integrate that function by calculus. It will probably be used as a weighted sum expression, but that is the basis for it.
So how much data do you need? Basically, enough so you can almost adequately interpolate each value from its neighbours. That’s testable. I say almost, because if you can interpolate with fully adequate accuracy, you have more data than you need. But if the accuracy is very poor, then even with the info about that point added, there are likely areas remaining that are poorly interpolated.
That’s why it’s so silly to talk of infilling as “faking” data. The whole concept that we’re dealing with (average US) is based on interpolation. And there is enough US data that known data is, in monthly average, close to its interpolate from other data.
It’s also yet another reason for using anomalies. Real temperature is the sum of climatology and anomaly. You can interpolate anomaly; it varies fairly smoothly. Climatology varies with every hill. So to find temperature for the whole US, you average anomaly, and add it to locally known climatology.

June 26, 2014 2:20 pm

dbstealey says:
June 26, 2014 at 2:12 pm
Eugene WR Gallun @1:14 pm,
You have a real talent.

==============================================================
Agreed.
At times I’ve done parodies of poems and songs. That means that somebody else did the hard work of meter, rhyme etc. I just have some fun with it.

Owen in GA
June 26, 2014 2:34 pm

Ok, You are definitely in cahoots with big oil climate…(or is that big climate oil)../sarc
Pollution of records is a very difficult problem. I still don’t agree with changing the past data in the time series, especially if those changes are based on interpolated estimates. There really needs to be some sort of way to automate some of these night time rises or declines, but how to do it without very good station metadata I don’t know. It just seems like whatever we do we are guessing. Then we misapply the law of large numbers to say there is no way that many stations could all be wrong, and apply tiny error bars on populations that still have large unresolved systematic errors and biases. Then the activists who have been trying to deindustrialize us since the beginning of the industrial revolution take that false certainty and exclaim “it’s worse than we thought, we are all going to die!!!!!”

Nick Stokes
June 26, 2014 2:37 pm

“Steig claims 30 station records are all that are needed globally”
He actually suggested 60, divided into halves that you could compare. There was a thread at Jeff Id’s; this comment of Eric’s is a good starting point.
I got involved, and did an actual calculation with 60 stations. One can ask for them to be rural, at least 90 years data, etc. It wasn’t bad, and the result not far from a full station calc. Later I updated with better area weighting here and here.
Zeke and I have done tests with just rural, excluding airports etc. It just doesn’t make a big difference.

Latitude
June 26, 2014 2:40 pm

His method looks at every data point, so if for example Feb 2014 daily data from Feb 14th and 27th are missing because the observer didn’t write anything down for those days, that gets counted
=====
Goddard took his numbers from 2013….it’s 2014 now……….
That’s a year ago….nothing was lost in the mail
you quoted him directly
Goddard said since 1990……”The data is correct.
Since 1990, USHCN has lost about 30% of their stations, but they still report data for all of them.”
Did they lose 30% of their stations since 1990 or not? are they infilling those station now or not?

June 26, 2014 2:42 pm

… And there is enough US data that known data is, in monthly average, close to its interpolate from other data. …
And it will always interpolate in the direction that the warmists want, right? Like the example of one western city with 5 stations one day. 4 were cooler than the airport station and they got interpolated upwards to match the higher station. (wish I had a link to that post, but I don’t keep them — disregard this example if you want)
The fact that the raw data in long term rural reporting stations always show a long term flat temperature (or a decline) while the interpolated anomaly shows warming is going to tend to inform me that people are fudging the records. Oh, they are sophisticated about it. They do a lot of “infilling” and interpolation. Most have advanced degrees in [snipped] and so forth. But honest data? No; they don’t deal in honest.

June 26, 2014 2:52 pm

Anthony, there are 1218 stations. That means there should be 14,616 monthly records for 2013.
There are 11,568 that have a raw and final data record = 79% of the 14,616.
There are only 9,384 of those 11,568 that do not have an E flag: 64.2% of the 14,616.
There are only 7,374 that have an empty error flag: 50% of the 14,616.
36.8% is close enough to 40% for me.
And yet the NOAA publishes press releases claiming this month or that is warmest ever.
REPLY: Thanks for the numbers, I’ll have a look. Please note that the USHCN is not used exclusively to publish the monthly numbers, that comes from the whole COOP network. – Anthony

June 26, 2014 2:54 pm

NOAA software.
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/software/52i/
The adjustment software has been available for the last 7 years in its various forms.
Those of us really interested found it, pointed others to it, or simply ASKED for it.

Paul in Sweden
June 26, 2014 2:56 pm

Splitting hairs a 10th of a degree C at a time. When will the madness end?

June 26, 2014 2:59 pm

The elephant in the room:
‘Adjustments’ are almost always made so that earlier temperature [T] is lower, while most recent T is higher — which of course shows a scary rise in T.
I could accept that adjustments were made as simple corrections. But the way they are done is nothing but alarmist propaganda.
If Nick Stokes has an answer, I would be interested in seeing it.
=======================
Paul in Sweden,
True dat.

June 26, 2014 3:02 pm

Charts! Data! Real Records!
Paul Homewood (Not A Lot Of People Know That) is on USHCN’s case — “Massive Temperature Adjustments At Luling, Texas”
http://notalotofpeopleknowthat.wordpress.com/2014/06/26/massive-temperature-adjustments-at-luling-texas/
There are all sorts of official records posted and analysis, but the following should suffice to make a point:

In other words, the adjustments have added an astonishing 1.35C to the annual temperature for 2013. Note also that I have included the same figures for 1934, which show that the adjustment has reduced temperatures that year by 0.91C. So, the net effect of the adjustments between 1934 and 2013 has been to add 2.26C of warming.
Note as well, that the largest adjustments are for the estimated months of March – December. This is something that Steve Goddard has been emphasising.

This post is worth a look to all interested in this subject.

June 26, 2014 3:02 pm

REPLY: While it is tempting, as you know I’ve been burned twice by releasing data prior to publication. Once by Menne in 2009 (though Karl made him do it) and again by the your boss at BEST, Muller, who I had an agreement with in writing (email) not to use my data except for publishing a paper. Two weeks later he was parading it before Congress and he came up with all sorts of bullshit justification for that. I’ll never, ever, trust him again.
My concern is that you being part of BEST, that “somehow” the data will find its way back to Muller again, even if you yourself are a man of integrity. Shit happens, but only if you give it an opportunity and I’m not going to. Sorry. – Anthony”

Very simply, I will sign a licence with substantial penalties. Let’s say
a million dollars.
Let’s make it even simpler. Choose only 30 CRN12 and 30 CRN5.
Now, since there were maps of the station locations in your original materials, reverse engineering it by digitizing wasn’t that hard. Still, hacking around to get it is not my style.
I’d rather do things above board.
REPLY: $1mil ? That would be even more tempting, if it were a real offer, but I know you can’t come up with a million dollars, so why even offer it? It’s insulting. – Anthony

Tony Berry
June 26, 2014 3:04 pm

Anthony, an excellent post and a set of well reasoned comments and proposals. In my field – drug clinical trials – similar problems occur, i.e. doctors don’t fill in the trial data forms correctly, and you have the problem of whether you ignore the errors, correct them, or don’t use the record at all due to errors. The FDA take a dim view of throwing out patient data and expect everything to be submitted and the data analysed on an intention-to-treat basis, which often results in faulty analysis and errors (and law suits from patients later). There is no simple answer in this case, just like your problem.
Tony Berry

mark
June 26, 2014 3:05 pm

Owen says:
June 26, 2014 at 12:45 pm
“Stop being so damn gullible. Global warming has nothing to do with science. It’s a political agenda being imposed by people who have no inclination to play by the rules like the skeptics.”
+1 I’m usually not a conspiracy theory believer but AGW reeks of it. It will take a change of administration in the US to alter the course of CO2 and even then it may be too late. With AGW being taught in public schools the tide will be difficult to overcome. Only a deep and sustained cooling cycle will have any effect. All the last 17 years of cooling has done is alter some public opinion but not enough to make a difference. The evil force is strong with this one.

charles nelson
June 26, 2014 3:10 pm

Tell you what makes me chuckle…here we are arguing about what the correct temperature is during the 20th century in one of the most advanced and measured countries in the world, reaching the conclusion that ‘we’re not really sure’. Yet people claim to have measured ‘global’ temperature climbing by 0.3 of a degree over the same period?

Rob R
June 26, 2014 3:12 pm

If an excellent network of unbiased historic stations was developed and if, from that network, it was possible to interpolate between sites that are in the network, then presumably one could identify just how much bias there is in data produced at climate stations that are not part of the network.
Then we might get a better estimate of the effect of UHI and other anthropogenic landuse influences. It would be useful to start in the USA where the density of climate observations is high, then extend to other well-measured countries. Why have the massively funded NOAA and GISS not already completed the USA portion of such a conceptually simple project? Perhaps it has been done but the results are inconvenient?

Richard Wright
June 26, 2014 3:21 pm

Trying to find a way to use bad data is futile. One is never able to prove anything. Garbage in, garbage out. But then we wouldn’t need all of those research grants for fudging the data, and we can’t have that.

Nick Stokes
June 26, 2014 3:22 pm

dbstealey says: June 26, 2014 at 2:59 pm
“If Nick Stokes has an answer, I would be interested in seeing it.”

It’s here. Almost all the adjustment complaint re USHCN is about TOBS – it’s the biggest. And the reason it ups the trend is well documented. Observers shifted over time from evening to morning observation. You can count them. Evening obs double counts warm afternoons – a warm bias. Morning tends to double count cold mornings (though less so, at 9 or 10 am). So the past had been artificially warmed. Adjusting “cools” it.
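To make the double-counting mechanism concrete, here is a toy simulation. It is only an illustration of the effect described above, with invented temperatures, and is not NOAA's actual TOBS correction:

```python
import numpy as np

# Toy illustration of the time-of-observation effect: synthetic hourly
# temperatures, then daily maxima taken reset-to-reset instead of by
# calendar day.  Numbers are invented; this is not NOAA's TOBS method.
rng = np.random.default_rng(0)
days = 60
daily_mean = 15 + 5 * rng.standard_normal(days)              # day-to-day weather
hourly = np.repeat(daily_mean, 24) + 8 * np.sin(
    2 * np.pi * (np.arange(24 * days) % 24 - 9) / 24)        # diurnal cycle, warmest mid-afternoon

def mean_daily_max(reset_hour):
    """Mean daily max when the max/min thermometer is reset at reset_hour."""
    s = hourly[reset_hour:]
    return np.mean([s[d * 24:(d + 1) * 24].max() for d in range(days - 1)])

calendar = np.mean([hourly[d * 24:(d + 1) * 24].max() for d in range(days)])
print("calendar-day mean max:", round(calendar, 2))
print("5 pm reset mean max  :", round(mean_daily_max(17), 2))   # hot afternoons counted twice
print("7 am reset mean max  :", round(mean_daily_max(7), 2))
```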

Latitude
June 26, 2014 3:23 pm

[combative snark serving no other purpose -mod]

dorsai123
June 26, 2014 3:38 pm

educated as an engineer and find much of this statistics talk eye rolling boring … funny thing about engineering … not much use for statistics … a bridge that is safe to 95% confidence is not a bridge but a death trap …
It all comes down to data … and you don’t have any … what you have is 3rd hand massaged nonsense … even your new paper will just be putting lipstick on a pig …
you should spend your time making sure that the world knows its ALL BS not just 60% of the sites …

Chuckarama
June 26, 2014 3:41 pm

Fantastic series on issues of temperature data. Now you need to do part 3 on what the issues with satellite temp data are.

u.k.(us)
June 26, 2014 3:56 pm

Steven Mosher says:
June 26, 2014 at 3:02 pm
“REPLY: While it is tempting, as you know I’ve been burned twice by releasing data prior to publication. Once by Menne in 2009 (though Karl made him do it) and again by your boss at BEST, Muller, who I had an agreement with in writing (email) not to use my data except for publishing a paper. Two weeks later he was parading it before Congress and he came up with all sorts of bullshit justification for that. I’ll never, ever, trust him again.
My concern is that you being part of BEST, that “somehow” the data will find its way back to Muller again, even if you yourself are a man of integrity. Shit happens, but only if you give it an opportunity and I’m not going to. Sorry. – Anthony”
Very simply, I will sign a licence with substantial penalties. Let’s say
a million dollars.
Let’s make it even simpler. Choose only 30 CRN12 and 30 CRN5.
Now, since there were maps of the station locations in your original materials, reverse engineering it by digitizing wasn’t that hard. Still, hacking around to get it is not my style.
I’d rather do things above board.
REPLY: $1mil ? That would be even more tempting, if it were a real offer, but I know you can’t come up with a million dollars, so why even offer it? It’s insulting. – Anthony
===========================
So, now it is a pissing contest ?

MikeN
June 26, 2014 3:59 pm

I think you need to account for the possibility that global warming will primarily take effect at night. So a warmer Tmin vs a neutral Tmax is not necessarily a contradiction.

Latitude
June 26, 2014 4:11 pm

[suggest you resubmit the question without the combative snark either you are interested in an answer or you want to pile on -mod]

June 26, 2014 4:13 pm

The world population of people living in urban areas is increasing roughly 1/2 percent per year. These people are transitioning from exurban areas into urban UHI areas, and it should be no surprise they have personal experience with rising temperatures. Perceptual bias is not a reliable observation.
Let me give a simplified example. Ten bus routes. Nine carry one passenger per hour. One carries 20. That is 29 passengers across ten routes, an average of 2.9 per bus. But poll the riders about how many people are on their bus: 9 of them say 1 and 20 of them say 20, so the typical rider reports a far more crowded bus than the route average.
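Spelling out the arithmetic of that example (a trivial sketch that just restates the numbers above):

```python
# Restating the bus-route numbers: nine routes carry 1 rider, one carries 20.
riders_per_route = [1] * 9 + [20]
route_average = sum(riders_per_route) / len(riders_per_route)       # 2.9 riders per bus
rider_reports = [r for r in riders_per_route for _ in range(r)]     # ask every rider on board
rider_average = sum(rider_reports) / len(rider_reports)             # ~14.1 -- the crowded bus dominates
print(route_average, rider_average)
```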

June 26, 2014 4:14 pm

Anthony, I cannot warn you enough on this. Do not give Mr. Mosher any pre-published data; his only intention is to find anything he can distort to make you look bad. This will then be used in a massive PR blitz against you by all the usual suspects. I have no idea why anyone entertains what he has to say. The only skeptics I am interested in hearing about your project from are yourself and Steve McIntyre.

Latitude
June 26, 2014 4:17 pm

So, now it is a pissing contest ?
===
UK, it’s very important information to a lot of people…and it’s the root of all temp reconstructions
If rural stations are closing and they are infilling with urban stations….

Nick Stokes
June 26, 2014 4:24 pm

Mark Stoval (@MarkStoval) says: June 26, 2014 at 3:02 pm
“Massive Temperature Adjustments At Luling, Texas”

Something odd has happened at Luling. Here is the BEST analysis. They show data to Sep 2013, but in the plot of temp relative to regional expectations, there is a sudden drop of about 2°C. It seems that this has alarmed the USHCN algorithm, and they have replaced obs with regional expectation pending further information. So of course there is a big “adjustment”. That’s how errors are found.
It may be that the Luling obs are correct. Time will tell.

June 26, 2014 4:34 pm

For those who are interested in what Nick Stokes is talking about, this wiki page on integration is well worth the read. In particular, on the issue of only using a few stations and arriving at the same conclusion, “the Gaussian quadrature often requires noticeably less work for superior accuracy.”
“The explanation for this dramatic success lies in error analysis, and a little luck.”
http://en.wikipedia.org/wiki/Integral
The same principle can also be applied to modern GCM, where they get the right answer from the wrong parameter.
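As a quick illustration of that quadrature remark (nothing to do with station data as such; the integrand is arbitrary), compare an equally spaced trapezoid sum with Gauss-Legendre nodes using the same small number of points:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Same number of points, very different accuracy.  Exact value of the
# integral of exp(x) over [0, 1] is e - 1.
f = np.exp
exact = np.e - 1

for n in (3, 5, 9):
    x = np.linspace(0.0, 1.0, n)
    trap_err = abs(np.trapz(f(x), x) - exact)
    nodes, weights = leggauss(n)                           # nodes/weights on [-1, 1]
    gauss = 0.5 * np.sum(weights * f(0.5 * (nodes + 1)))   # mapped to [0, 1]
    print(n, f"trapezoid error {trap_err:.2e}", f"Gauss error {abs(gauss - exact):.2e}")
```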

glacierman
June 26, 2014 4:39 pm

There’s a problem with the data so they replace it with regional expectations? That is the problem. Who gets to decide what’s expected? If a problem is detected the data should not be used at all, not replaced with synthetic numbers.

Latitude
June 26, 2014 4:43 pm

Nick Stokes says:
June 26, 2014 at 3:22 pm
====
Nick, thanks….that’s a plausible justification
…and, according to Best, it looks like the Luling station has been moved 7 times, each time to a cooler location

angech
June 26, 2014 4:46 pm

ANTHONY “Goddard is right to point out that there is increasing data loss in USHCN and it is being increasingly infilled with data from surrounding stations. While this is not a new finding, it is important to keep tabs on. He’s brought it to the forefront again, and for that I thank him.”
Zeke has had 3 posts up at Lucia’s since June 5th 2014, the first had 284 comments.
I made several requests to Zeke re the USHCN figures with little response
So to be clear I said
there were “ 1218 real stations (USHCN) in the late 1980s
There are now [???] original real stations left-my guess half 609
There are [???] total real stations – my guess eyeballing 870
There are 161 new real stations , all in airports or cities added to the graph
There are 348 made up stations and 161 selected new stations.
The number of the original 1218 has to be kept
Nobody has put up a new thermometer in rural USA in the last 30 years and none has considered using any of the rural thermometers of which possibly 3000 of the discarded 5782 cooperative network stations.
June 7th, Zeke: “As I mentioned in the original post, about 300 of the 1218 stations originally assigned to the USHCN in the late 1980s have closed, mostly due to volunteer observers dying or otherwise stopping reporting. No stations have been added to the network to make up for this loss, so there are closer to 900 stations reporting on the monthly basis today.”
yet he also said
Zeke has a post at SG where he admits that there are only 650 real stations out of 1218. This is a lot less than the roughly 918 he alludes to above. Why would he say 650 to SG (May 12th, 3.00 pm) and instead, in #130058 at the Blackboard, say that about 300 of the 1218 stations have closed down?
Anthony, 650 real stations means a lot more than 40% missing; in fact it is nearly 50%.
Would you be able to get Zeke to clarify his comment to SG and confirm
a. the number of real stations (this may be between his 650 and 850; the last list of up-to-date reporting stations early this year had 833 twice, but presumably a few more were not used because they were missing some days)
b. the number of original real stations remaining (this may be lower than 650 if real but new replacement stations have been put in)
c. in which case, the split between real original and real replacement stations within the 650

REPLY:
Links to these comments? Don’t make me chase them down please if you want me to look at them -Anthony

June 26, 2014 4:50 pm

30 to 50 stations
>>>>>>>>>>>>
I’d actually agree that if you had a few dozen stations with pristine records, that would be sufficient to calculate the temperature trend of the earth provided that the record was long enough.
If you have a long enough record in time, a single station would be sufficient to determine the earth’s temperature trend. In fact, that’s pretty much what ice core data is. A single location with data for thousands of years, which is sufficient to plot general trends in earth’s temperature as a whole. When you have data for thousands of years, that single location is sufficient for spotting long term trends.
Which brings us back to the 30 to 50 station thing. While it might be fair to say that 50 stations is sufficient, that claim must be qualified in terms of time frame. 50 stations for 1 month would be pretty much useless, but 50 stations for 1,000 years would have obvious value. So the claim is contingent on the accompanying timeline, which doesn’t seem to be stated by the proponents.
Calculating a suitable timeline for 50 stations…. above my pay grade.

milodonharlani
June 26, 2014 4:52 pm

Eugene WR Gallun says:
June 26, 2014 at 1:14 pm
Excellent! But I was hoping for “Dark Satanic mills” in there somewhere.

Brandon C
June 26, 2014 4:52 pm

Sorry Nick, but the TOBS adjustments assume a uniform error that can be simply corrected. But there is no simple or even defendable way to ascertain which measurements were correct and which ones were not, so the assumption of uniformity only adds uncertainty. A truly scientific approach would be to leave the data alone and develop error bars to reflect the uncertainty caused by the unknown past errors. Making assumptions of uniform error, adjusting the data, and then presenting it as having increased or equal accuracy is simply blatantly unscientific.
It results in the “correct” answer to be sure, but it leaves the ridiculous situation of adjusting down past temps so that recorded exceptional heat events, in many alternate mediums such as news, correspond with average or cold temps in the “new and improved” temp records. Everyone is so enthralled with their cleverness in finding reasons why they can adjust past temps, nobody is bothering to do even minor due diligence to see if they can alternatively confirm if the adjustments are accurate.
As you are so eager to get the adjustments out there, and in an automated way that happens without oversight, you seem to forget that scientists would be looking for independent ways to cross check if the adjustments were correct. For example hire a few students to review old newspapers and see if examples of very hot records in the station data can be corroborated by stories about the extreme heat in the news. If your method says that the records from a station should be reduced, but on the ground news was talking about heat that was killing livestock, then your adjustments are crap. I have seen many examples of this very thing, so I know it happens. Is it a rarity? Who knows because nobody checks.
But a good example that brings into question past cooling adjustments is that TOBS does not affect the max temp records for a station, so all the past heat records are forced to be acknowledged and cannot be adjusted. And since the majority of heat records were set in the 20’s, 30’s and 40’s, why do so many fall in a time that was supposedly so much cooler on average than today, according to your unbiased programs that keep cooling the past more every year? I have heard endlessly from climate scientists and climate reports about how a warmer world means heat records will fall with more regularity. So doesn’t that contradict the claim that the past was so much cooler and it was just stupid temp station operators that made it seem warmer? I have read interviews with station operators and even talked to 2 in person, and they all think the TOBS adjustment reasoning is garbage because they identified the problem back then and adjusted their methods to avoid it. But I guess it is just too tempting to assume they are all wrong and you’re so much smarter.
It’s like sea surface inlet measuring. There are lots of logs and records indicating that the switch from buckets was not even close to done by the time the post revisionist climate scientists declared it was. But the adjustments it causes are “correct” so no need to actually verify.
I get so frustrated by such sloppy and assumption-filled science, pretending that its huge assumptions are 100% correct and cannot be questioned. If climate scientists were forced to act like proper scientists and statisticians, the certainty and smug assurances would disappear. And we would be left with a lot more uncertainties, but more humble and far more accurate estimates of the climate.

milodonharlani
June 26, 2014 4:53 pm

Never mind. I must have skipped that stanza. Well done!

angech
June 26, 2014 4:58 pm

a second small request. Can you put the first graph up that Zeke shows at the Blackboard
26 June, 2014 (14:21) | Data Comparisons | By: Zeke
which shows with clarity the adjustment of past records by up to 1.2 degrees from 1920 to 2014.
“You do not change the original real data ever”
Zeke wrote (Comment #130058) blackboard
June 7th, 2014 at 11:45 am. Very Relevant to everyone
“Mosh, Actually, your explanation of adjusting distant past temperatures as a result of using reference stations is not correct. NCDC uses a common anomaly method, not RFM. The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.”
So the past temperature is being lowered all the time, based on the newest input data, which is itself being adjusted by reference to only 50% of real stations, many of which are referenced to surrounding cities and airports and to previous recent hot years [shades of the Gergis/Mann technique here] as having more temporal and spatial weighting.
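Zeke's description of the breakpoint behaviour quoted above can be boiled down to a few lines. This is only a schematic with invented numbers, not NCDC's pairwise homogenization code:

```python
import numpy as np

def remove_breakpoint(series, idx, step):
    """Schematic of the behaviour Zeke describes: a detected non-climatic step
    of size `step` at position `idx` is removed by shifting every EARLIER value,
    because the most recent data are treated as the 'true' values.
    (With a negative detected step the past is lowered instead of raised.)"""
    adjusted = series.copy()
    adjusted[:idx] += step
    return adjusted

years = np.arange(1900, 2015)
record = 10.0 + 0.005 * (years - 1900)          # invented station record
record[years >= 2006] += 0.3                    # e.g. an instrument change in 2006

homogenized = remove_breakpoint(record, np.searchsorted(years, 2006), 0.3)
# Values from 2006 on are untouched; everything from 1900-2005 moves by 0.3,
# so the distant past changes even though nothing new was observed there.
print(record[0], homogenized[0], record[-1], homogenized[-1])
```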

milodonharlani
June 26, 2014 4:59 pm

As shown by the problems with just US “data” as collected & adjusted, GAST is far too hypothetical a concept & flimsy a number, with such minor changes, within margin of error, that trying to base public policy upon it is at best folly, but closer to insanity, especially if it means energy starvation, the impoverishment of billions & death of at least tens of thousands, but probably millions, given enough time.

angech
June 26, 2014 5:02 pm

REPLY: Links to these comments? Don’t make me chase them down please if you want me to look at them -Anthony
Will do ASAP.

Paul in Sweden
June 26, 2014 5:02 pm

??? As long as the errors & misguided methodology (mythology) are well documented, the wrongful conclusions can be excused. Yup! That sounds like Climate ‘Science’.
=========
@ dbstealey True Dat indeed – excellent graph!

Nick Stokes
June 26, 2014 5:04 pm

Brandon C says: June 26, 2014 at 4:52 pm
“It results in the “correct” answer to be sure, but it leaves the ridiculous situation of adjusting down past temps so that recorded exceptional heat events, in many alternate mediums such as news, correspond with average or cold temps in the “new and improved” temp records.”

No, it doesn’t do that at all. I doubt that anyone uses adjusted temperatures when talking of exceptional temperatures. They shouldn’t.
Adjustments have a specific purpose – to reduce bias preparatory to calculating a space average. They are statistical; not intended to repair individual readings.
TOBS, for example, would not affect a record max. That temperature is as recorded. The problem is that if you reset the instrument at 5pm, that makes the next day max hot, even if the actual day was quite cool. That is a statistical bias, for which it is appropriate to correct a monthly average.

June 26, 2014 5:09 pm

Anyone have a paper showing a test of the prediction from theory that, “… there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.“?
I’ve read Hansen and Lebedeff’s 1987 paper showing >=0.5 correlation of station temperatures across about 1000 km. However, at any distance and across every latitudinal range the scatter in correlation is significant, and becomes very large in the tropics. This will mean the “infilling” of any specific temperature record will be subject to a large uncertainty.
Before the Steig-Schmidt 50 well-placed stations solution is applied, a serious validation study is required to show that their solution will in fact produce the reliable and accurate scalar temperature field claimed for it.
That’s how science is done, remember? Prediction from theory, test by experiment. One doesn’t apply a theory merely because of its convenience and compact charm. In judging theory, pretty doesn’t count for much; if it fails the test, it’s wrong.
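For readers who want to see what the distance-weighted infilling under discussion looks like, here is a minimal sketch. The weighting is a GISTEMP-style linear taper to zero at a cutoff distance (1200 km in GISTEMP; the comment above cites roughly 1000 km), the station coordinates and anomalies are invented, and the weighted spread is only a crude stand-in for the uncertainty being pointed to.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def infill(target, neighbours, cutoff_km=1200.0):
    """Distance-weighted estimate of a missing anomaly (weights taper linearly
    to zero at cutoff_km), plus the weighted spread of the neighbours as a
    crude indication of how uncertain the fill is."""
    weights, anoms = [], []
    for lat, lon, anom in neighbours:
        d = haversine_km(target[0], target[1], lat, lon)
        if d < cutoff_km:
            weights.append(1.0 - d / cutoff_km)
            anoms.append(anom)
    w, x = np.array(weights), np.array(anoms)
    estimate = np.sum(w * x) / np.sum(w)
    spread = np.sqrt(np.sum(w * (x - estimate) ** 2) / np.sum(w))
    return estimate, spread

# Invented neighbours (lat, lon, monthly anomaly in C) around a target at 40N, 100W.
neighbours = [(41.0, -99.0, 0.8), (39.2, -102.5, -0.4), (42.5, -97.0, 1.5), (38.0, -100.5, 0.1)]
print(infill((40.0, -100.0), neighbours))
```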

Nick Stokes
June 26, 2014 5:09 pm

Poptech says: June 26, 2014 at 4:14 pm
“Anthony, I cannot warn you enough on this. Do not give Mr. Mosher any pre-published data, his only intention is to find anything he can distort to make you look bad.”

Where have I heard something like that before???

maccassar
June 26, 2014 5:10 pm

I am shocked that someone has not tried to set up a global network of pristine locations in every country. How much error can there be by not covering spatially as extensively as it is now? Has anyone ever attempted this?

David Riser
June 26, 2014 5:10 pm

Anthony,
I like your idea, but I gather the 50 stations would not be to determine the average global temperature, but the trend of the global temperature. The Rossby radius gives a size for an area of cold air under the warm, which would lead to highly correlated temperatures but not the same temperature. So I would suggest, if you were to do this, do it as a test case in the US: pick the most pristine and oldest records, use anomalies and see what you get. Compare the last 30 years or so to the satellite records for the US only. Should be interesting.
v/r,
David Riser

Nick Stokes
June 26, 2014 5:15 pm

Pat Frank says: June 26, 2014 at 5:09 pm
“This will mean the “infilling” of any specific temperature record will be subject to a large uncertainty.
Before the Steig-Schmidt 50 well-placed stations solution is applied, a serious validation study is required to show that their solution will in fact produce the reliable and accurate scalar temperature field claimed for it.”

There will be uncertainty. But the purpose is to obtain an overall integral (or weighted sum). So the key test is whether it does that well, not the uncertainty in individual readings. Bias is more important than unbiased noise.

June 26, 2014 5:18 pm

Brandon, your comment is exactly right, and goes exactly to the heart of the mess that is global air temperature. No one is paying attention to systematic errors. Instead convenient assumptions are made that allow a pretense of conclusions.
That same failure of scientific integrity permeates all of consensus climate science.

Old England
June 26, 2014 5:19 pm

It struck me, and maybe wrongly, that from the Las Vegas data the distortion from UHI is primarily in the night-time Tmin while the daytime Tmax is unaffected? Also, if a mere 30 – 50 stations globally are sufficient to accurately monitor global temps – what would we find from 50 continuous, site-unchanged, UHI-unaffected and entirely rural stations plotted over the last 100 or 150 years?

June 26, 2014 5:19 pm

Nick wrote, “the purpose is to obtain an overall integral (or weighted sum).
No, Nick. The purpose is to obtain physically accurate data.

u.k.(us)
June 26, 2014 5:22 pm

Latitude says:
June 26, 2014 at 4:17 pm
So, now it is a pissing contest ?
===
UK, it’s very important information to a lot of people…and it’s the root of all temp reconstructions
If rural stations are closing and they are infilling with urban stations….
+++++++++++++++++++++++++++++++++
Of course it is, good data is needed for any solution.
I was just playing on the million dollar bet 🙂

June 26, 2014 5:29 pm

Nick Stokes says:
June 26, 2014 at 5:15 pm
Where have I heard something like that before???
You haven’t; I clearly said distort. Every skeptic here trusts people like Steve McIntyre with Anthony’s data because he has no ulterior motive to distort anything, as Mr. Mosher is known to, or to launch propaganda PR blitzes like those he has been affiliated with in the past (BEST and Muller). This is not some honest skeptical inquiry like Steve McIntyre’s work; he is actually qualified for the task. Mr. Mosher has done nothing but attempt to obfuscate temperature data arguments based on his ideologically biased beliefs.

Latitude
June 26, 2014 5:30 pm

Latitude says:
June 26, 2014 at 4:11 pm
[suggest you resubmit the question without the combative snark either you are interested in an answer or you want to pile on -mod]
====
Sorry, didn’t mean to come across as Nik or Willis…won’t happen again

milodonharlani
June 26, 2014 5:30 pm

Get back to me when you have at least thirty years of good data from 51,000 ideally sited & well maintained, identical reporting stations, one for every 10,000 square kilometers (3,861 sq. mi.) equidistant across the entire surface of the planet. That’s an area larger than Delaware & Rhode Island combined (6452 + 3140 km2), but still pretty darn small.
GAST is so questionable as to be worse than worthless for purposes of formulating public policy, IMO. It would help if Phil Jones’ dog had not eaten the data they supposedly got from the Met Office.

milodonharlani
June 26, 2014 5:32 pm

Pat Frank says:
June 26, 2014 at 5:19 pm
The purpose is to increase state power & that of supranational regimes, while also providing a nice living & travel opportunity for Nick’s pal-reviewing buddies &, ideally, reducing the number of evil, greedy, carbon dioxide emitting humans.

Bill Illis
June 26, 2014 5:32 pm

Even the “adjusted” temperatures are not keeping up with the theory’s expectations or the climate model’s actual predictions.
Let’s say “half” of the trend is spurious over-active adjustments or 0.35C.
If the adjustments hadn’t been implemented, then they really would have had to rewrite the theory by now you would think.
They face the choice of re-writing the theory or continuing to adjust the old temperature records. That means they need about 1.4C of further adjustments in the next 86 years to reach the year-2100 +3.25C prediction. Now that’s a lot of adjustments.
Can they keep up the pace? Another 1.4C of adjustments in just 86 years? Surely not.
Wrong.
It is only 0.0014C every month which is almost exactly the trend of 0.35C of adjustments over the last 20 years (more-or-less what they have been doing).
I’m afraid all those northerly crop growing regions, never did actually produce crops in 1900 like the old-timers tell us. There was frost into July you see, going by the historical record.
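For anyone checking the per-month arithmetic above, taking the 0.35 C and 1.4 C figures as given, a two-line calculation reproduces it:

```python
# Taking the 0.35 C and 1.4 C figures above as given:
recent_rate = 0.35 / (20 * 12)     # ~0.00146 C of adjustment per month over the last 20 years
needed_rate = 1.4 / (86 * 12)      # ~0.00136 C per month to add another 1.4 C by 2100
print(round(recent_rate, 5), round(needed_rate, 5))
```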

June 26, 2014 5:35 pm

Hey Anthony-
A hug.
That’s all.
Just a hug. For doing what you do. For taking the crap you take. For coming back day after day into this mess and being a truly nice, dedicated, and patient man. Especially when it would be so easy AND understandable to walk away.
So, no comment on the data or anything else today.
Just a hug.
That’s all.
🙂

milodonharlani
June 26, 2014 5:39 pm

Old England says:
June 26, 2014 at 5:19 pm
Temperature increase in Vegas is thanks to square miles of paving & building, air conditioners, power plants, Hoover Dam, planes, trains (monorail), automobiles, humans & other heat-emitters & producers. Since “adjustments” for the UHI effect used by the Team tend to adjust urban temperatures up rather than down (as a rational person would expect), I’d like to see what the algorithms do for Vegas.
I wonder if the feds have placed new stations next to the relatively new power plants, as they so conveniently located a second, totally unnecessary for scientific purposes (but politically vital) Death Valley station opposite a south-facing rock wall, since the already good station wasn’t providing the desired new record high.

John Slayton
June 26, 2014 5:43 pm

Re: angech @ 4:45
I’m not sure who’s saying what in this comment, but I scratch my head at this:
Nobody has put up a new thermometer in rural USA in the last 30 years and none has considered using any of the rural thermometers of which possibly 3000 of the discarded 5782 cooperative network stations.
For many months I have been sitting on photos that I have taken of USHCN v.2 stations that I can’t upload to the gallery because the gallery has no folders for these sites. The gallery has no folders because these sites were not in the USHCN system at the time surfacestations.org was set up. These stations have been explicitly substituted for their predecessors which were dropped.
Chandler Heights AZ
Dulce NM
Farmington 3NW UT
Hysham 25 SSE MT
Marysvale UT
May 2 SSE ID
Nampa Sugar Factory ID
Nephi UT
Paris TX
Pearce-Sunsites AZ
Saint John WA
Salina 24E UT
Saint Regis 1NE MT
Scofield-Skyline Mine UT
Smith Center KS

Brandon C
June 26, 2014 5:47 pm

Nick
The problem is not in understanding the TOBS error. I agree that it can create uncertainty and introduce error. But the assumption that this is a uniform predictable error is laughable. Can you tell me what percentage of station records were done in that manner and which stations adjusted their methods to eliminate the error or when the ones that used alternate methods changed? No you cannot and nobody can. I know the people I talked to said they were told by the people that set up the station how to do it correctly to avoid TOBS, although they didn’t call it that. But the assumption that it was a uniform error that can be systematically adjusted for is frankly silly.
I know people don’t want to accept uncertainty in past temperatures, and the temptation to try and “correct” them is great. But pretending you are increasing accuracy by using such assumptions doesn’t follow scientific reasoning. You are adding an additional area of uncertainty and should end up with the same level of uncertainty, or greater, since you will have corrected some and falsely corrected others. But since you don’t even have an accurate accounting of the percentage of methods used and if or when they changed, it would be proper science to leave it alone and document the uncertainties. At least then the science that uses the temp record for base work would have an accurate accounting of the starting uncertainties.
As far as using surrounding stations to infill: I currently live on a farm 20 miles from a city. The city has a good quality station about 3 miles from the city at a small airport, and it would rate well using the rating guidelines. A seed cleaning plant half a mile to my direct south installed an official temp station about 5 years ago, also well sited in pastureland a long way from anything else. They are both situated in the open prairie plains without hills or obstructions. The city station uniformly reads 2-5 degrees C different from the local one. It is not an artifact of the station types, because they disagree both warmer and colder, and simply driving to the city sees the car thermometer measuring the same difference. So if 2 stations 23 miles apart can vary by that much, how accurate is infilling? Or, for that matter, how accurate is looking for large steps and eliminating them? I have experienced such profound shifts of climate in our area twice in my life (43 years); both would have triggered a correction, but both were real and endured for multiple years before things changed.
Frankly, the endless adjustments and assumptions are not adding any accuracy and only serve to show the assumptions of the adjusters. They only add a false sense of reduced uncertainty, while adding new uncertainties for every one they claim to correct. If you can’t do your climate science without all the assumptions and data adjustments, then it is time to admit you don’t have a strong scientific case. And science done in the past, say pre-1999, must all be incorrect if it used historic temp series prior to such large adjustments, and you could not compare data from old reports to new ones since the past temps have changed by nearly a degree of adjustments since. All these adjustments are doing is putting hidden uncertainties in the place of known uncertainties.

Nick Stokes
June 26, 2014 6:00 pm

Pat Frank says:June 26, 2014 at 5:19 pm
“Nick wrote, “the purpose is to obtain an overall integral (or weighted sum).”
No, Nick. The purpose is to obtain physically accurate data.”

Here is what Hansen says in his 2001 paper:
“Some prefatory comments about adjustments to the temperature records are in order. The aim of adjustments is to make the temperature record more “homogeneous,” i.e., a record in which the temperature change is due only to local weather and climate.”
The purpose is not to provide an improved version of what that thermometer should have read. It may legitimately show changes that are not due to weather and climate (eg moves). The purpose is to remove bias – to estimate what the temperature in that region would have been.

Latitude
June 26, 2014 6:13 pm

angech says:
June 26, 2014 at 4:58 pm
====
You guys did read angech’s post, right?

June 26, 2014 6:16 pm

You avoided the issue, Nick. Your reply is a complete non-sequitur; an irrelevance.

David Walton
June 26, 2014 6:21 pm

Re: It is an interesting life when I am accused of being in cahoots with both “big oil” and “big climate” at the same time.
Well of course you are! You are playing both sides against the middle. Machiavelli himself would be proud of such a deft and devious manipulation.
Just kidding Anthony. 😀 Thanks again for going into these details. We do live in interesting times.

richard verney
June 26, 2014 6:30 pm

Brandon C says:
June 26, 2014 at 4:52 pm
Sorry Nick, but the TOBS adjustments assume a uniform error that can be simply corrected. But there is no simple or even defendable way to ascertain which measurements were correct and which ones were not, so the assumption of uniformity only adds uncertainty. A truly scientific approach would be to leave the data alone and develop error bars to reflect the uncertainty caused by the unknown past errors. Making assumptions of uniform error, adjusting the data, and then presenting it as having increased or equal accuracy is simply blatantly unscientific.
//////////////////
Nailed.
The data is the data. It should never be adjusted. The only interpretation that needs to be made is to assess the errors and assess the relevance. In other words, leave the raw data alone and just caveat its shortcomings. This will allow others (and in particular other generations) to review the data, and with better techniques and/or better understanding their assessment of the errors may be revised and/or its relevance altered.
Unfortunately we have destroyed the data such that we can no longer be sure what it says, and we are left with interpreting the adjustments not assessing what the data really tells us.
It is time to accept that it is not fit for purpose, and if we cannot restore the original record (a time consuming exercise that could only be conducted if the original raw data exists) it should simply be discarded. The temp data set was never going to be ideal, since it has been put to a purpose for which it was never designed, and the land based record is not even measuring the right metric. Without relative humidity it tells us nothing about energy imbalance.
Incidentally, Rob Dawg’s example (June 26, 2014 at 4:13 pm) not only demonstrates confirmation bias, it shows why consensus has no place in science. The consensus approach would be that the bus carries 20 passengers per hour, which is a huge overstatement; the minority view (1 passenger per hour), whilst wrong, would have been far nearer reality. It demonstrates the dangers inherent in accepting what the majority might consider to be the case.

June 26, 2014 6:33 pm

Some help is feasible where there are more than two readings per day.
UHI is largely caused by a change in thermal coupling and must produce a phase change. Some degree of correction is possible. Practical, don’t know.

angech
June 26, 2014 6:43 pm

Links to these comments (requested in the REPLY to angech, June 26, 2014 at 4:46 pm above) for Anthony:
1 Re why adjustment is always done and always and only affects the distant past the most.
Also a list of number of actual stations??
Zeke (Comment #130058), June 7th, 2014 at 11:45 am, at “How not to calculate temperature”, 5 June, 2014:
Mosh,
Actually, your explanation of adjusting distant past temperatures as a result of using reference stations is not correct. NCDC uses a common anomaly method, not RFM.
The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.
An alternative approach would be to assume that the initial temperature reported by a station when it joins the network is “true”, and remove breakpoints relative to the start of the network rather than the end. It would have no effect at all on the trends over the period, of course, but it would lead to less complaining about distant past temperatures changing at the expense of more present temperatures changing.
angech,
As I mentioned in the original post, about 300 of the 1218 stations originally assigned to the USHCN in the late 1980s have closed, mostly due to volunteer observers dying or otherwise stopping reporting. No stations have been added to the network to make up for this loss, so there are closer to 900 stations reporting on the monthly basis today.
To folks in general,
If you don’t like infilling, don’t use infilled values and create a temperature record only from the 900 stations that are still reporting, or from all the non-infilled stations in each month. As the first graph in the post shows, infilling has no effect on CONUS-average temperatures.
2. He first commented to SG on May 10, 2014 at 3:08 pm, at the SG post “Me Vs. Hansen Vs. NOAA”, posted on May 10, 2014:
Hi Steven,
Your analysis would greatly benefit from using anomalies rather than absolute temperatures, as everyone else working with temperature data does. Absolute temperatures cause problems when the composition of the station network is non-uniform over time, and anomalies are a relatively easy way to fix it without introducing any bias.
3. The comment on only 650 raw stations was at the SG post “A Different Approach To The USHCN Code”, posted on May 11, 2014; Zeke Hausfather, May 12, 2014 at 3:00 pm:
The difference is straightforward enough. Even if you use monthly rather than annual averages of absolute temperatures, you will still run into issues related to underlying climatologies when you are comparing, say, 650 raw stations to 1218 adjusted stations. You can get around this issue either by using anomalies OR by comparing the 650 raw stations to the adjusted values of those same 650 stations.
The reason why the 1218 to 650 comparison leads you astray is that NCDC’s infilling approach doesn’t just assign the 1218 stations a distance-weighted average of the reporting 650 stations; rather, it adds the distance-weighted average anomaly to the monthly climate normals for the missing stations. This means that when you compare the raw and adjusted stations, differences in elevation and other climatological factors between the 1218 stations and the 650 stations will swamp any effects of actual adjustments (e.g. those for station moves, instrument changes, etc.). It also gives you an inconsistant record for raw stations, as the changing composition of the station network will introduce large biases into your estimate of absolute raw station records over time. Using anomalies avoids this problem, of course.
stevengoddard says: May 12, 2014 at 5:56 pm
In other words, they are fabricating data and generating fake temperatures – which very conveniently cool the past and warm the present, to turn an 80 year cooling trend into a warming trend. Got it.
Note he did not say that there were only 650 raw stations, but he did use the right number of USHCN-attributed stations, 1218, and it would seem strange that he would use a made-up number rather than the real number when talking to SG of all people, particularly when he used the figure “closer to 900 stations reporting on the monthly basis today” in 1. above only a few weeks later. Note he said 300 of the originally assigned stations had closed due to stopping reporting, but this leaves open how many stations were assigned later to replace other original stations and how many of these may have closed.
That figure could approach another 370 stations that no longer exist [based on 650 raw original stations] and easily broach the 40% figure of made-up data SG claims.
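A toy example of the anomaly-versus-absolute point in Zeke's quoted explanation (item 3 above). The two stations, their climatologies and the drop-out year are invented; this is only a sketch of the effect, not NCDC's method:

```python
import numpy as np

# When the station mix changes over time, averaging absolute temperatures
# produces a spurious jump, while averaging anomalies (each station relative
# to its own baseline) does not.
years = np.arange(1950, 2014)
signal = 0.01 * (years - 1950)                 # identical true trend at both sites
valley = 15.0 + signal                         # warm low-elevation station
mountain = 5.0 + signal                        # cold high-elevation station
mountain[years > 1990] = np.nan                # the cold station stops reporting

absolute_mean = np.nanmean(np.vstack([valley, mountain]), axis=0)

base = slice(0, 30)                            # 1950-1979 baseline
anomaly_mean = np.nanmean(np.vstack([valley - np.nanmean(valley[base]),
                                     mountain - np.nanmean(mountain[base])]), axis=0)

# Absolute mean jumps by ~5 C when the cold station drops out; the anomaly mean doesn't.
i_before, i_after = np.searchsorted(years, 1989), np.searchsorted(years, 1992)
print(absolute_mean[i_before], absolute_mean[i_after])
print(anomaly_mean[i_before], anomaly_mean[i_after])
```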

angech
June 26, 2014 6:47 pm

Comment 1 was at the Blackboard blog, which I missed attributing: “How not to calculate temperature”, 5 June, 2014 (14:04), Data Comparisons, written by Zeke (Comment #130058, June 7th, 2014 at 11:45 am).

jmrSudbury
June 26, 2014 6:53 pm

Latitude (June 26, 2014 at 2:40 pm) asks, “[d]id they lose 30% of their stations since 1990 or not? are they infilling those station now or not?”
The data Steven used was the raw that had -9999 and the final for the same station had an E for estimate beside the calculated number for a given month.
And Anthony asked for examples of stations that have problems. One example: in Aug 2005, USH00021514 stopped publishing data (had -9999 instead of temperature data) save two months (June 2006 and April 2007) that have measurements. Save those same two months, the final tavg file has estimates from Aug 2005 until May 2014. The last year in its raw file is 2007.
John M Reynolds

angech
June 26, 2014 6:58 pm

John Slayton says: June 26, 2014 at 5:43 pm Re: angech @ 4:45
I’m not sure who’s saying what in this comment, but I scratch my head at this:
” Nobody has put up a new thermometer in rural USA in the last 30 years and none has considered using any of the rural thermometers of which possibly 3000 of the discarded 5782 cooperative network stations.” It was irony and sarcasm, John, sorry if you did not get it, but it was a comment at another blog where it was more suited. I am sure we all know there are some new thermometers out there somewhere. Not that you are allowed to use them historically of course [inserts smiley face]

Duster
June 26, 2014 6:58 pm


Brian R says:
June 26, 2014 at 12:12 pm
A couple of things. Has anybody done a comparison between the old USHCN and new USCRN data? I know the USCRN is a much shorter record but it should be telling about data quality of the USHCN. Also, if a majority of the measured temperature increase is from UHI affecting the night time temps, why not use TMAX temps only? It seems to figure that if “Global Warming” were true it should affect daytime temps as much as night time.

The trend in CRN is very slightly negative. So slight that for statistical purposes, you should assume no change over the published span. Zero trend falls within the estimate error of the mean.

milodonharlani
June 26, 2014 6:59 pm

Nick Stokes says:
June 26, 2014 at 6:00 pm
Nick, as you must know, GAST is a highly artificial construct designed with a particular outcome in mind. It is as far from scientific “data” as it is possible with human ingenuity to devise.
Set aside the totally unjustified systematic cooling of older observations & warming of more recent “data”. How about Jones’ admitted “adjustment” of ocean surface “data” to align them with land surface “data” because, after being adjusted upwards by, among other “tricks”, adjusting for UHI effect by making urban readings warmer rather than cooler, land & sea numbers were out of whack (imagine that!), so of course the adjustment needed was to make the less stepped on ocean observations (2/3 of earth) warmer rather than the land “data” (1/3) cooler.
At every opportunity to rig the data, the Team has opted to cook the books rather than chill them.
That is all ye need to know of GAST. GIGO & not worth the electrons expended in rigging the numbers.

Matthew R Marler
June 26, 2014 7:05 pm

Thank you. This was a good post. Good luck with the new publication.

Matthew R Marler
June 26, 2014 7:08 pm

Nick Stokes: You can interpolate anomaly; it varies fairly smoothly.
I doubt you could show that. It sounds like an unverifiable assumption — maybe reasonable, maybe not, but hardly trustworthy.

Matthew R Marler
June 26, 2014 7:14 pm

Nick Stokes: You can interpolate anomaly; it varies fairly smoothly.
Is it the case that change due to CO2 is much less variable, across space and time, than baseline temperature?

A. Smith
June 26, 2014 7:14 pm

Use all the data from all locations, but just use MAXIMUM temperatures. You’ve proven that the minimum temperature has risen due to UHI in some location, so toss it. Toss all minimum temperatures from all locations. The anomaly should be pretty accurate then…..unless ….. UHI is hiding a more significant cooling trend.

June 26, 2014 7:19 pm

Those believing that 30 or 50 stations is a solution had better rethink – the correct figure needs to be over 100,000, and that still wouldn’t present a correct view of reality.
Why? That one is easy to answer: statistics is one thing, approximation another. Two places less than 1000 meters apart can have different heights above sea level, different winds and different ground beneath, and if they are located in areas of the Northern Hemisphere where the last Ice Age’s ice sheet once pressed the land down, they almost certainly have had completely different “uprise” (Archimedes’ principle). This in itself needs to be taken into consideration, because that factor alone has more impact on divergences between two places located less than 1000 meters from each other than any of the Alarmists ever thought of.
Then there is of course a huge statistical-science as well as theory-of-science aspect to the CO2 question. Deliberately chosen “stations” can never ever be used as a stand-in for any place other than where those stations are located. It’s a huge difference between randomly chosen observations and pre-hypothesis-testing chosen ones….
Alarmists had better read Huff’s How to Lie with Statistics; today and yesterday they used almost all of the non-scientific methods Huff warned could be used.

Editor
June 26, 2014 7:22 pm

NikFromNYC says:
June 26, 2014 at 12:11 pm

I pointed out the Goddard/Marcott analogy on the 24th, twice, and was roundly attacked for it since my name isn’t McIntyre,

Thanks, Nik. Reading at Bishop’s, I can’t find one comment saying the Goddard/Marcott analogy was wrong. Not one. So I can only conclude that you weren’t attacked for making that analogy, as you now insist.
Instead, there were commenters saying that your rant about Steve, starting with

Goddard was such a willfully sensationalistic fool that …

and segueing through such things as

His regular promotion of a brain washing gun control conspiracy theory complete with Holocaust photos loses him the entire Internet culture debate for *all* of us skeptics because he has the second highest traffic skeptical site.

without a single link to Steve’s alleged crimes of commission and omission … well, the commenters said, that might be both a bit (or two) over the top, and in any case, wildly off-topic and immaterial to the scientific question.
Next time, you might be pleasantly surprised at what happens if you just address the scientific question, and leave the polemical rant for some other thread where it would be on-topic.
All the best,
w.

Nick Stokes
June 26, 2014 7:25 pm

richard verney says:
June 26, 2014 at 6:30 pm
“The data is the data. It should never be adjusted. The only interpretation that needs to be made is to assess the errors, and assess the relevance.”

No, that is a fundamental fallacy. Take the min-max data. What do we know? An observer has, at a stated time, noted the position of the markers in a min/max thermometer, and reset them. So what does that mean?
The observer didn’t observe the max for Tuesday. He observed the position of the markers on Monday and Tuesday. If you want to convert that to data about Tuesday, you have to make assumptions. An assumption that it is the Tuesday max may be justified if, say, he read at midnight. But if he read at 5pm Tue, you don’t know. It might have been Monday.
Ultimately, you are going to average over a month, and then over a larger scale. So it doesn’t matter to get it exactly right for each case. What is important is to avoid bias.
Leaving the data unadjusted is just one of many assumptions you could make. It is testable (with modern hourly data), and if the reading time was 5pm, it will fail the test.
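To see why a 5 pm reading can fail that test, here is a schematic with invented hourly temperatures for a hot Monday followed by a cool Tuesday; it only illustrates the carryover Stokes describes:

```python
# Invented hourly temperatures: a hot Monday followed by a cool Tuesday.
monday = [18 + 10 * max(0.0, 1 - abs(h - 15) / 7) for h in range(24)]   # peaks ~28 C at 3 pm
tuesday = [12 + 4 * max(0.0, 1 - abs(h - 15) / 7) for h in range(24)]   # peaks ~16 C

true_tuesday_max = max(tuesday)
# An observation "day" read at 5 pm Tuesday runs from the 5 pm Monday reset to 5 pm Tuesday.
window = monday[17:] + tuesday[:18]
recorded_tuesday_max = max(window)
print(true_tuesday_max, recorded_tuesday_max)   # 16 vs ~25: Monday's heat is booked against Tuesday
```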

June 26, 2014 7:26 pm

Nick Stokes,
Despite all your disclaimers, the fact is that about 99% of all ‘adjustments’ are in the direction of making a more alarming chart.
That cannot be just a coincidence.

Jimmy Finley
June 26, 2014 7:26 pm

Anthony: That is a very good article, and one I can agree with and support in my inimitable way. 😉 Quit messing with the mess; the various measurements are possibly useful locally and if people want to pay for them, let them continue, but nationally they are dreck. NOAA isn’t about to “discover” this; there are lots of 25-year and 30-year careers of going to work on a very relaxed schedule, drinking coffee, surfing the ‘Net and porn, then off to a nice retirement involved in messing with those data. Dispense with NOAA and let Joe Bastardi do it for a fee. Fire him if he can’t do the job, and get somebody else. Similarly, if NASA doesn’t have a manned mission to someplace, then we don’t need NASA in any way. When the time comes, it can be reconstituted, if there are any people left in America that can successfully add two plus two.
Get it down to the nitty-gritty, with sites that are few enough to examine closely to see if there are issues. Give the data collection to someone who gets paid for it (with the now-dismissed NOAA/NASA funds in hand, one can pay an attractive amount to someone) and fired if it gets screwed up. Get rid of government hacks that are either slugs, or appointee activists willing to cheat to serve a political end. Get some solid, useful data, so that we know if the good times keep on rolling, or the Great White Blanket is about to descend on us.
Now that we’ve solved those problems, what to do about “paid for and bought” science?
Jim Finley

mark
June 26, 2014 7:29 pm

norah4you says:
June 26, 2014 at 7:19 pm
Alarmists had better read Huff’s How to Lie with Statistics; today and yesterday they used almost all of the non-scientific methods Huff warned could be used.
Seems they have read it.

milodonharlani
June 26, 2014 7:31 pm

Correct me if wrong, but are the major data set keepers/adjusters not down to about 3000 locations? That would be one for every 170,000 sq km of earth’s surface, ie a little less than one per area the size of the state of Washington, with of course coverage worse over the oceans.
And armed with these “peer-reviewed data”, Hansen et al have gotten rich jetting around the globe spewing polluting garbage with a carbon footprint large as all outdoors? This racket puts all of organized crime, to include drug smuggling, human trafficking, illegal gambling, murder for hire & every other racket to shame.

bw
June 26, 2014 7:37 pm

1. Bad data must be tossed out. That’s fundamental. You can’t “save” corrupted data because the errors can’t be quantified.
IF the data had been collected with good planning and methodology in the first place, you would not need all this wasted effort.
The effort to recover corrupted data is wasted, you will never recover good data from bad.
That’s why the USCRN was created.
IF the USCRN had been established 100 years ago, we would not be having any AGW discussion.
2. There is no point in “averaging” widely different climates. Seattle has a different climate than North Dakota; what’s the point of averaging those two? San Francisco has a different climate than Las Vegas; what’s the point of averaging those with any others? What’s the point of averaging Texas and Alaska, or Florida and Maine? There is no “average” between tropical and polar; it’s nonsense.
3. There are a few individual stations with long term data in rural locations. Some in the US, some in western europe, and the four in Antarctica. Just look at the data from those stations and you will see that there is no global warming on any time scale. The good data from 1958 at the Antarctica stations of Amundsen-Scott, Vostok, Halley and Davis proves that without doubt.

milodonharlani
June 26, 2014 7:37 pm

dbstealey says:
June 26, 2014 at 7:26 pm
As Lenin said, “It is no accident”. Maybe if after November the US Senate joins the House in having more rational, pro-science, anti-CACA rather than emotional, anti-science, pro-CACA, anti-human members, then America can side with Canada & Australia against the CACA Mafia.

milodonharlani
June 26, 2014 7:40 pm

bw says:
June 26, 2014 at 7:37 pm
There is definitely no warming in Antarctica since the geophysical year of 1958. There may or may not be some regional warming someplace else. But even if the whole globe could be shown to have warmed a fractional degree since 1945, so what? It has little if anything to do with CO2, & in any case is a good thing.

Björn
June 26, 2014 7:42 pm

Quoted from AW’s posting:
“…
The key point here is that both Goddard and Zeke (and by extension BEST and NOAA) are trying to use the ENTIRE USHCN dataset, warts and all, to derive a national average temperature. Neither method produces a totally accurate representation of national temperature average. Keep that thought.
…..”
And the trouble NOAA seems to have in predicting an accurate recent past is well exemplified in one of the graphs on Ole Humlum’s Climate4You website, the one he calls the “GISS maturity graph” (link below):
http://www.climate4you.com/images/GISS%20Aug1935%20and%20Aug2006.gif
So it’s not only Goddard (though he shouts more and louder about it) that is critical of the slowly but unidirectionally widening gap between the past and present, as for example in this instance of the difference between the anomalies of the two values for the same day of the month 70 years apart, as the more recent day’s value heads for heavenly heights while the older anomaly is clearly speeding down the highway to the underworld inferno.
It’s hard to avoid the thought that something is amiss here, because of how universally one-directional this process seems to be with all the adjustments being made to the data; one would intuitively expect a bit more randomness in the signs of the adjustments.
And I cannot help it, but my brain sometimes inadvertently conjures up flashes of “Comical Ali” (Saddam Hussein’s PR man in the early days of the first Iraq mess, whom I at the time nicknamed “the green man”, as that was the color of the uniform he most commonly wore in his TV appearances; no eco-green connection there) when Mr. Stokes rides in to defend the claim that there is no difference between the raw and adjusted anomalies. At those moments he takes on something of “the green man”, because if the adjustments make no difference, then why bother, why expend a lot of effort to no effect?

June 26, 2014 8:17 pm

The event times are not nearly long enough.
Busy work, not much else.

Nick Stokes
June 26, 2014 8:22 pm

Matthew R Marler says: June 26, 2014 at 7:08 pm
“I doubt you could show that. It sounds like an unverifiable assumption — maybe reasonable, maybe not, but hardly trustworthy.”

No, it’s verifiable – one of the things Hansen did early on. You can see it here. It’s a shaded plot of raw GHCN/ERSST temperatures for a chosen month. You can choose to show the stations. The shading is such that it is exact for each station location, otherwise interpolated. You’ll see that mostly stations are shaded with their neighbours.
There are exceptions, of course. But again, it’s all about calculating an average. That’s what people hear about. If you’re averaging 4000 or so stations, that will damp a lot of noise.
Incidentally, quite a few of the exceptions are glitches in raw GHCN that I have been blogging about recently.
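A toy illustration of the noise-damping point Nick mentions (my own sketch, with made-up numbers, not his code): if per-station scatter is independent, its contribution to the network average shrinks roughly as one over the square root of the number of stations.

```python
# Sketch only: shows how independent station noise damps in a large average.
import numpy as np

rng = np.random.default_rng(0)
true_anomaly = 0.3          # hypothetical "true" monthly anomaly, deg C
station_noise_sd = 1.0      # hypothetical independent per-station noise, deg C

for n_stations in (40, 400, 4000):
    # simulate many months and look at the spread of the network-average anomaly
    sims = true_anomaly + station_noise_sd * rng.standard_normal((2000, n_stations))
    spread = sims.mean(axis=1).std()
    print(f"{n_stations:5d} stations -> spread of the average ~ {spread:.3f} C "
          f"(1/sqrt(N) theory: {station_noise_sd / np.sqrt(n_stations):.3f} C)")
```

This only addresses random station noise; it says nothing about shared biases, which do not average away.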

milodonharlani
June 26, 2014 8:44 pm

Nick Stokes says:
June 26, 2014 at 8:22 pm
Thanks, Nick. Wasn’t sure of the current number of stations. So one for every 127,500 sq km of the earth’s surface. Assume maybe half of them are good enough, & you get one per 250,000 sq km, or ~100,000 sq miles. That’s as if life & death public policy were based upon one reading for an area the size of my state of Oregon.
IOW, YHGTBSM.
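A quick check of the station-density arithmetic above (Earth’s total surface area is roughly 510 million km²; one square mile is about 2.59 km²):

```python
# Sketch: one reading per how many km^2 / sq mi, for 4000 stations and for half of them.
earth_km2 = 510_000_000
for n_stations in (4000, 2000):
    km2_each = earth_km2 / n_stations
    print(f"{n_stations} stations -> one per {km2_each:,.0f} km^2 (~{km2_each / 2.59:,.0f} sq mi)")
```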

Richard Ilfeld
June 26, 2014 8:51 pm

The notion of one station being sufficient for a very long time series is interesting. One of the pro-MWP arguments has always been that grapes for wine were grown in England then, but not when it was cooler. Now, I know proxies are always a bit dicey, but a data set that might be interesting is the date of last frost to the date of first frost, by year. Surely there is a monastery somewhere in Britain with a 1000-year or longer record. I wouldn’t have a clue how to find such a thing, but it might be interesting, and a very long series of measured data points, about all one could measure and record before thermometers.

Reg Nelson
June 26, 2014 8:53 pm

Nick Stokes says:
Adjustments have a specific purpose – to reduce bias preparatory to calculating a space average. They are statistical; not intended to repair individual readings.
TOBS, for example, would not affect a record max. That temperature is as recorded. The problem is that if you reset the instrument at 5pm, that makes the next day max hot, even if the actual day was quite cool. That is a statistical bias, for which it is appropriate to correct a monthly average.
***
The problem is that you (or anyone) have no idea what the time-of-observation adjustment should actually be, unless you have the ability to time travel and conduct experiments/observations.
And trying to estimate it based on current observations is ridiculous, if your premise is that climate has changed dramatically because of CO2.
More importantly, where are the UHI adjustments? Where are the station relocation adjustments?
You state “Adjustments have a specific purpose”. Where are these other adjustments?
I ask because I don’t know. Do you?

JoshC
June 26, 2014 8:59 pm

First: Thank you for the work, Anthony. I have had a good chunk of social media this year; it is a wall of work, and mine was not as technical as yours. I really appreciate it.
Normally I don’t comment, since most of the things I would have suggested are better fleshed out by others. This time I didn’t see something I expected, so:
Why not find a set of station records that correlate well with the RSS/UAH/weather-balloon datasets and use those records?
I know there is a lot on most everybody’s plate, but if people are interested in finding the best stations, the ones that match UAH could be better choices than some of the other approaches suggested, I would think.
Just a humble thought that seemed missed in the discussion. 🙂

Patrick B
June 26, 2014 9:12 pm

@PeteJ
“…then averages all the averages together you kind of lose all frame of reference and the margin of error explodes exponentially, even more so if you insist on using anomalies.”
Exactly. If climate “scientists” who did infilling, gridding etc. to fill in data then applied the proper margin of error analysis, they would quickly find the margin of error is so large as to render the results useless. Instead they pretend to know global temperatures to tenths and hundredths of a degree. I was taught that if your data was so poor that your margin of error overwhelmed your theory, you went back, re-designed your experiment and started over. The fact that in climate “science” we wish we had good data, but do not, should not change that approach.

john robertson
June 26, 2014 9:41 pm

Thank you for a fine explanation.
It highlights that climatology is more theology than science.
Right now we lack data of quality and duration.
Funny how the mere possibility of man-caused global warming was sufficient to demand world domination by the concerned ones, their risk being the end of mankind.
Yet right from the start the data was foggy, and very little has been done to improve our understanding of weather past.
So while Steve Goddard is using immoderate language and flourishing a digital paintbrush to highlight his anger, is he wrong?
I remain deeply offended by the behaviour of our government people in this business.
What is the “official” Global Average Temperature?
With what error bars?
Then there is the temperature measurement accuracy and its error bar.
The world-proclaimed warming signal, Team IPCC™, is noise.
Sad that this nonsense can be called science, and depressing that it ever progressed to the massive devourer of public wealth and energy that CAGW is.

June 26, 2014 9:54 pm

I can’t speak to the issue of the missing station data, but I’ve archived as many versions as I can find of the NASA GISS “Fig.D” Averaged U.S. 48-State Surface Air Temperature Anomaly files, and put them in a chronological table, here:
http://sealevel.info/GISS_FigD/
The revisions are very striking. They drastically cooled the older temperatures (esp. the 1930s) and warmed the more recent temperatures.
In the earliest (mid-1999) version†, 1998 is 0.54°C cooler than 1934.
In the 6/2000 version, 1998 is 0.25°C cooler than 1934.
In the 2001, 2003 & 2004 versions, 1998 is only 0.04°C cooler than 1934.
In the 2005 version, 1998 is only 0.01°C cooler than 1934.
In the 1/2005 version, 1998 and 1934 were tied, as equally warm.
In the 6/2006 version, 1998 is again 0.01°C cooler than 1934.
In the late Feb. 2007 version, for the first time they showed 1998 as 0.01°C warmer than 1934.
In an early 8/2007 version, they’re tied again, as equally warm.
In a later 8/2007 version, 1998 is again 0.02°C cooler than 1934.
In the 1/2008 and 1/2009 versions, they’re tied again, as equally warm.
In the 11/2009 version, 1998 is 0.03°C warmer than 1934.
In the 1/2010 version, 1998 is 0.09°C warmer than 1934.
In the 2/2010 version, 1998 is 0.12°C warmer than 1934.
In the 2/2011 version, 1998 is 0.122°C warmer than 1934.
In the 3/2011 through 7/2011 versions, 1998 is 0.123°C warmer than 1934.
In the 8/2011 version, 1998 is 0.122°C warmer than 1934.
In the 1/2012 version, 1998 is 0.121°C warmer than 1934.
In the 4/2012 version, 1998 is 0.071°C warmer than 1934.
In the 7/2/2012 version, 1998 is 0.089°C warmer than 1934.
In the 7/13/2012 version, 1998 is 0.083°C warmer than 1934.
In the 8/2012 version, 1998 is 0.082°C warmer than 1934.
In the 10/2012 version, 1998 is 0.112°C warmer than 1934.
In the 12/2013 version, 1998 is 0.1054°C warmer than 1934.
In the 1/2014 version, 1998 is 0.1057°C warmer than 1934.
In the 3/2014 version, 1998 is 0.1088°C warmer than 1934.
In the latest version, 1998 is 0.0935°C warmer than 1934.
Comparing the current version to 15 years ago, NCDC has added 0.6335°C of warming to the temperature record, when 1998 is compared to 1934.
† Caveat: the 1999 version was reconstructed by digitizing a graph.
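For anyone wanting to reproduce this kind of tabulation from an archive of the Fig.D files, a minimal sketch follows. The folder name and the simple two-column year/anomaly layout are assumptions for illustration only; the archived files may well be formatted differently.

```python
# Sketch: tabulate the 1998-minus-1934 anomaly difference across archived Fig.D versions.
# "FigD_archive/*.txt" and the whitespace-delimited year/anomaly layout are hypothetical.
import glob

def read_figd(path):
    """Return {year: anomaly} from a whitespace-delimited year/anomaly file."""
    data = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0].isdigit():
                data[int(parts[0])] = float(parts[1])
    return data

for path in sorted(glob.glob("FigD_archive/*.txt")):
    series = read_figd(path)
    if 1934 in series and 1998 in series:
        print(f"{path}: 1998 minus 1934 = {series[1998] - series[1934]:+.3f} C")
```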

June 26, 2014 9:58 pm

Minor correction to my previous msg: “In the 1/2005 version” should read “In the 1/2006 version”

June 26, 2014 10:09 pm

@Willis Eschenbach
“His regular promotion of a brain washing gun control conspiracy theory complete with Holocaust photos loses him the entire Internet culture debate for *all* of us skeptics because he has the second highest traffic skeptical site.”
I’m inclined to agree with you on this. As far as I can recall (and I am a regular, although skeptical, reader of his blog), when Goddard uses Holocaust imagery he is making certain general or philosophical points. One, for example, concerned the evil of the eugenics movement. Another is a reminder of why the “right to keep and bear arms” was added to the US constitution. Most Westerners have governments that are relatively corruption-free, so they tend to have lazy thinking on such issues. There are no specific conspiracy theories here that I can recall, which is what Nick is accusing Goddard of. And in fact, you’ll find him most of the time mocking conspiracy theories rather than promoting them.
I do recall he has posted at least once, perhaps several times, links to a mind-control/brain-washing conspiracy theory involving the government. I couldn’t find the links to these, as he may have taken them down or I failed to look for them in the right places.
Having said all that, Goddard can be a bit of a knucklehead at times, although he is certainly nobody’s fool. Originally I was inclined to assume that Goddard was controversial because controversy is an effective means of getting his points across and of driving traffic to his blog. I’m more inclined these days to view Goddard as someone who just has a controversial personality.

Eugene WR Gallun
June 26, 2014 10:59 pm

This is a better version.
PROFESSOR PHIL JONES
The English Prometheus
To tell the tale as it began —
An ego yearned
Ambition burned
Inside a quiet little man
No one had heard of Phillip Jones
Obscure to fame
(And likewise blame)
The creep of time upon his bones
Men self-deceive when fame is sought
Their fingers fold
Their ego told
That fire is what their fist has caught!
So self-deceived, with empty hand
Jones made it plain
That Hell would reign
In England’s green and pleasant land!
Believe! Believe! In burning heat!
In mental fight
To get it right
I’ve raised some temps and some delete!
And with his arrows of desire
He pierced the backs
Of any hacks
That asked the question — where’s the fire?
East Anglia supports him still
The truth abused
Whitewash was used
Within that dark Satanic mill
The evil that this wimp began
Soon none will doubt
The truth will out
Prometheus soon wicker man

June 26, 2014 11:10 pm

Eugene WR Gallun,
You da Man!!
.
Kudos, dude. Truly. You are good.

AW35
June 26, 2014 11:23 pm

I’d like to thank Steve and Anthony for at least making me, a person who didn’t really have any interest before this difference of opinion, realise how things work with these stations and the potential problems.
Originally I was confused because the mainstream media picked up on Goddard’s claim about the change in temperatures between the 1930s and today, and then on this fabrication claim: two different points.
It does show, though, how many things have changed and how many problems there are with the whole system. Not only changes in environment and infilling of data, but it seems the times the measurements were taken all changed as well. That’s an awful lot of things to factor in. Considering how careful scientists are when simply moving from one satellite sensor to a new, better one to make sure the record is not skewed, I can see that trying to get an accurate trend over many years is definitely a non-trivial problem!
I think my main beef with Steve Goddard is the terminology used in graphs, headlines, etc. Even when he was on here, the headlines didn’t actually match what he was trying to say, due to trying for too much shock and awe; most infamous being the claim that CO2 was freezing out, when actually all he was really saying was that it was cold in Antarctica. Well, that’s what it seemed to me he was saying, and he was right too, it was cold at that point. But CO2 was not freezing out. He still claims this. No flip-flopping….. 🙂
I think the problem is also that he does too much stuff, so quality control falters. I visit his site to look at the Arctic side; although I am not a skeptic I do find it interesting, and he does post on it. In one post he mentioned how little time the melt season has, and did a graph split with the months above 0°C shown. Problem is, sea ice does not melt at 0°C. I mentioned this to him, and although he did not admit his mistake (he never does) he did remove that line in the graph. So as I say, he produces a lot per day and does not really check into it. He needs a chief editor to check it over.
He does post some interesting stuff on the Arctic, though, and it is always good to have a different viewpoint rather than just visiting sites which confirm your own view.
Once again, thanks for the info on your network over there. You should use the British stations; Mary Poppins once described them as being more than practically perfect, I believe.

angech
June 26, 2014 11:44 pm

Zeke (Comment #130058, June 7th, 2014 at 11:45 am, on “How not to calculate temperature”, 5 June 2014) says the past station temperatures will be raised or lowered by the size of the breakpoint.
Nick Stokes says (June 26, 2014 at 5:04 pm, WUWT):
I doubt that anyone uses adjusted temperatures when talking of exceptional temperatures. They shouldn’t.
Adjustments have a specific purpose – to reduce bias preparatory to calculating a space average. They are statistical; not intended to repair individual readings.
TOBS, for example, would not affect a record max. That temperature is as recorded. The problem is that if you reset the instrument at 5pm, that makes the next day max hot, even if the actual day was quite cool. That is a statistical bias, for which it is appropriate to correct a monthly average.
So which is it, Nick? You cannot have an unadjusted TOBS maximum and an adjusted monthly average; that is what used to be called keeping double books. And Zeke notes that all temperatures are adjusted down (or up, on very odd occasions).

KNR
June 26, 2014 11:54 pm

You would not want to play poker with these people. How ‘lucky’ can you get, that all the adjustments of the past happen to result in reductions in past temperatures, which make more modern ones look higher, which happens to be very useful to those pushing ‘the cause’? Lucky indeed!

Brandon C
June 27, 2014 12:18 am

Nick. Again you just assume that every single reading is done exactly the same way and follows only one protocol. This is easily proven not to be the case from interviews with station operators. Rather than admit that the extent of the problem is unknown and leave it as an uncertainty, you just proceed as though your easily disproven assumption is correct. This is not just poor science, it borders on outright dishonesty. We all understand what the TOBS issue is; repeating the basic reason does not change the fact that it assumes a uniform error despite evidence to the contrary. If the effect were small, it would not be important, but the adjustments are huge compared to the trend you are trying to measure. No real scientist should even consider using such significant adjustments when you don’t have any additional solid data to verify at least a sampling against. You are assuming a position and verifying the outcome against nothing more than your own expectations. Scientifically this is not defensible, and at some point there will be an actual independent auditing of all this poor work. This is a black eye for science. The scientific method and its principles were designed to be immutable rules, never to be ignored, so as to avoid bias and error. Not to be ignored and glossed over because you’re sure it must be the correct answer. I have never seen a group act so unscientifically as climate scientists. It is reaching the point of pathetic.

Lance Wallace
June 27, 2014 12:18 am

daveburton says:
June 26, 2014 at 9:54 pm
Very useful historical record, thanks for your efforts.
I used your data to look at the difference between 2014 and 1999, 2000, and 2006:
https://dl.dropboxusercontent.com/u/75831381/NASA%20FIGD%20CHANGE%20OVER%20TIME.pdf
It was not a monotonic lowering of past temperatures and a raising of recent ones; rather, the 5-year averages for the years 1880-1910 were lifted up a bit (0-0.1 C), then from 1910-1940 they were lowered by about the same amount, and then from 1940 to the present it was off to the races, eventually reaching a delta of about 0.3 C in 1996 (the latest date available from the Hansen 1999 figure). Hard to look at that beautiful parabola and attribute the changes to random adjustments for TOBS, missing data, etc.

MDS
June 27, 2014 12:18 am

Whenever I see a network of instruments being used to measure a phenomenon, I always wonder about the thoroughness of how well the instruments, their housings, and their calibration are maintained. Yes, I understand and agree that the local environment is a part of the issue, but having seen other measurements made by dissimilar and differently maintained systems of instruments—yes, even within federally-funded agencies—it’s easy to suspect that the number of errors can begin to accumulate. I think everyone just assumes that measurements are being taken with care and the systems maintained the same way, but who checks?

richardscourtney
June 27, 2014 12:29 am

X Anonymous:
I am not surprised that you choose to be anonymous when presenting stuff such as your post at June 26, 2014 at 4:34 pm which says

For those who are interested in what Nick Stokes is talking about. This wiki page on intergration is well worth the read. In particular, on the issue of only using a few stations and arriving at the same conclusion,

“the Gaussian quadrature often requires noticeably less work for superior accuracy.”
“The explanation for this dramatic success lies in error analysis, and a little luck.”

http://en.wikipedia.org/wiki/Integral
The same principle can also be applied to modern GCM, where they get the right answer from the wrong parameter.

The result is “dramatic success” based on “a little luck”!
And this “success” is like “modern GCM” which don’t suffer from GIGO because “they get the right answer from the wrong parameter”.
Science is NOT a matter of “luck”. It is a rigorous adherence to a method.
So, whatever these compilations of GASTA are called, they cannot validly be called “science”.

And this anti-science is not surprising when at least one involved person (Steven Mosher) chooses to make unsolicited proclamations that he does not know what the scientific method is (see here).
Richard

ferdberple
June 27, 2014 12:32 am

Goddard is saying that all adjustments always go in favor of global warming.
===============
This is the elephant in the room that everyone is ignoring. If the errors are randomly distributed, then the adjustments cannot be correct if they show a trend.
Since there is no evidence to indicate the errors are not random, the trend shown by all four methods of calculating the adjustments that Anthony presented indicates the adjustments are statistically bogus. Anthony’s graph in part 1 confirms Goddard’s findings in this respect.
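ferdberple’s statistical point can be illustrated with a quick Monte Carlo sketch (mine, with hypothetical numbers): adjustments that merely corrected random, trendless errors would themselves show essentially no trend, so a persistent one-sided trend in the net adjustments needs a physical justification (such as a documented observation-time change) rather than an appeal to randomness.

```python
# Sketch: trends fitted to purely random "adjustments" scatter around zero.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1895, 2014)
trends = []
for _ in range(1000):
    random_adjustments = rng.normal(0.0, 0.2, size=years.size)  # hypothetical, deg C
    slope = np.polyfit(years, random_adjustments, 1)[0] * 10    # deg C per decade
    trends.append(slope)

print(f"mean trend of random adjustments: {np.mean(trends):+.4f} C/decade")
print(f"95% of random-adjustment trends lie within +/- {np.percentile(np.abs(trends), 95):.3f} C/decade")
```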

scf
June 27, 2014 12:47 am

Great essay. Funny how you never see a discussion like this in, god forbid, the media. I look forward to Anthony’s paper. I would also like to say I have a great deal of respect for Goddard. Whatever the hype, he is doing the best job of showing just how much of the temperature record is an artifact of adjustments. The adjustments exceed the measured changes, and they always move in the same direction, cooling the past and warming the present, which is the opposite of what you would expect when adjusting for UHI. Steve McIntyre also illustrates how even a single paper like Marcott et al., with an obvious deformity in its results (which of course alters the results in the usual direction), goes unacknowledged to this day. If even that gross and ridiculous hockey blade cannot be acknowledged, then we need people like Goddard, and you can certainly understand what Goddard’s hype is a reaction to when you look at papers like Marcott’s, or when you see Goddard’s diagrams showing how the year 1930 has somehow cooled substantially between 2000 and now. Every single day the past gets colder; sooner or later 1930 will become an ice age that was somehow undetected at the time.

June 27, 2014 1:03 am

Lance Wallace wrote, “I used your data to look at the difference between 2014 and 1999, 2000, and 2006.”
Thanks for creating the very nice graph showing the adjustments to the 5-year averages, Lance. Your graph certainly illustrates how their adjustments have served to progressively depress the high temperatures 3/4 century ago, and progressively raise temperatures at the end of the 20th century. However it is also interesting that in your graphs of 5-year averages, the adjustments add a total of only about 0.4°C of warming, rather than the 0.63°C that I saw when comparing the peak years of 1934 and 1998.
Since your graph is in a dropbox file, and dropbox files tend to be transient but archive.org won’t archive them, I took the liberty of archiving it with WebCitation, here:
http://www.webcitation.org/6QdesojPY

Stephen Richards
June 27, 2014 1:17 am

As I pointed out in part 1, your comments were not warranted and your behaviour was unacceptable. While I accept the pressures on you created by your blog and your business, it is not acceptable to criticise a fellow traveller before fully understanding what he is trying to explain. In my opinion you failed to do so. This post, however, goes some way towards a full analysis. Why you would ever take any note of Zeke how’s-yer-father and Mosher I simply do not know.
Perhaps an apology to Goddard’s site might be in order?

Stephen Richards
June 27, 2014 1:24 am

Will Nitschke says:
June 26, 2014 at 10:09 pm
@Willis Eschenbach
“His regular promotion of a brain washing gun control conspiracy theory complete with Holocaust photos loses him the entire Internet culture debate for *all* of us skeptics because he has the second highest traffic skeptical site.”
This is all strawman stuff. Yes, he uses controversial images to get his points across, but his points are valid, if a little OTT. You are doing what you accuse the AGW team of doing. Look at yourselves. Go look in the mirror.
There is only one Steve Mc. None of us comes close to his skills of expression, analysis and, above all, firm politeness. So engage brain before typing. I know I fail to do so on many occasions.

Dr. Paul Mackey
June 27, 2014 1:29 am

I think using a single global average temperature is fundamentally meaningless as a metric for climate, even if there were a perfect set of records. In my opinion there needs to be some thought about what that metric should be, which would not be a trivial exercise given the multi-faceted nature of climate, with an innumerable number of variables and processes, some of which are not yet completely understood.
As it is, the number seems to be just a statistical artifact with no physical meaning for the real world. This can be seen from Anthony’s set of graphs for Las Vegas. One (the minimum-temperature average) paints a warming picture and the other (the maximum average) paints a cooling picture. Anthony provides the explanation via the UHI effect.
The temperature record for Las Vegas therefore is a measure of urban growth. Other station records may provide a measure of deforestation, for example, or of some other process. The national and global averages mindlessly amalgamate measures from different “experiments” into one figure. This is meaningless.
Back when I was an experimental physicist, I had to be very careful about systematic errors in my apparatus. I see no mention of systematic errors in the climate debate. Each met station surely is an individual “experiment”. UHI, for example, seems to me to be a systematic error, one that is also a function of time. Site location changes, equipment changes, etc., to my mind fall into this category. Surely these have to be removed individually, from each experiment’s results, prior to the results being amalgamated or compared? If this is not being done, then the number is meaningless, as you are effectively averaging different measurements.

knr
June 27, 2014 1:33 am

Adjustments themselves are not an issue; it is the methodology of the adjustments that matters.
First, the justification of why there is a need for adjustments.
Second, making it clear how these adjustments were done.
Third, the retention of the unadjusted data, so that it is possible for others to check the validity of the adjustments, or to revert to the old values should the new adjustments prove unsound over time.
Climate ‘science’, with its ‘the dog ate my homework’ attitude to data retention and control, frequently forgets to do these things. And by ‘lucky chance’ the mistakes made in adjustments always favour the ideas being pushed. So you can see why ‘adjustments’ are a problem. What you cannot see, sadly, is any step being taken to address these issues, other than attempts to deny the right of non-supporters to raise their concerns in public.

richardscourtney
June 27, 2014 1:46 am

Dr. Paul Mackey:
In your post at June 27, 2014 at 1:29 am you say

I think using a single global average temperature is fundamentally meaningless as a metric for climate, even if there were a perfect set of records. In my opinion there needs to be some thought about to what that metric should be, which would not be a trivial excercise given the multi-facetted nature of climate with an innumerable number of variables and processes, some of which are not yet understood completely.
As it is the number seems to be just a statistical artifact with no physical meaning for the real world.

YES! I have been saying that for years.
If you have not seen it then I think you will want to read this especially its Appendix B.
And please remember that – as I pointed out in my above post at June 27, 2014 at 12:29 am – at least one of those involved in ‘altering the past’ denies the scientific Null Hypothesis; i.e.
A system must be assumed to have not changed unless there is empirical evidence that it has changed.
Richard

Nick Stokes
June 27, 2014 2:20 am

Brandon C says: June 27, 2014 at 12:18 am
“Nick. Again you just assume that every single reading is done exactly the same and followed only one protocol. This is easily proven not to be the case from interviews with station operators. Rather than admit that the extent of the problem is unknown and leave it as an uncertainty, you just proceed as though your easily disproven assumption is correct. This is not just poor science, it borders on outright dishonesty.”

No, it’s the do-nothing option that is poor science. The fact is that assumptions are inevitably made. You have a series of operator observations of markers at stated times, and you have to decide what they mean. Assuming they are maxima on the day of observation is an assumption like any other. And it incorporates a bias.
The unscientific thing is just to say, as you do, that it’s just too hard to measure, so let’s just assume it is zero. The scientific thing is to do your best to work out what it is and allow for it. Sure, observation times might not have been strictly observed. Maybe minima are affected differently to maxima (that can be included). You won’t get a perfect estimate. But zero is a choice. And a very bad one.
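The mechanism Nick describes is easy to demonstrate with a toy simulation. The sketch below is mine, not NOAA’s TOBS algorithm, and the numbers (a normal distribution of daily maxima, an assumed 0.5 C drop from the day’s peak by 5 pm) are purely illustrative; it only shows the direction of the bias that a late-afternoon reading time introduces, not its realistic size.

```python
# Sketch: a 5 pm reset can record today's warm afternoon again as tomorrow's maximum,
# warm-biasing the mean of the recorded maxima relative to a midnight observer.
import numpy as np

rng = np.random.default_rng(2)
true_daily_max = 25 + 4 * rng.standard_normal(10000)     # hypothetical daily maxima, deg C

# Midnight observer: records each day's own maximum.
midnight_mean = true_daily_max.mean()

# 5 pm observer: the max thermometer still holds a near-peak reading at reset time,
# so the value carried over competes with the next day's true maximum.
carryover = true_daily_max[:-1] - 0.5                    # assumed residual warmth at 5 pm
afternoon_obs = np.maximum(true_daily_max[1:], carryover)

print(f"midnight-reset mean max : {midnight_mean:.2f} C")
print(f"5 pm-reset mean max     : {afternoon_obs.mean():.2f} C (warm-biased)")
```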

A C Osborn
June 27, 2014 3:00 am

I still cannot understand why no one else has picked up on WHAT IS UP WITH THAT 7th graph, of GHCN-M v3 raw, averaged absolutes.
Has anyone ever seen a global temperature graph with two such 1.5-degree step changes, around 1950 and 1990?
There is no TREND other than downwards from the 1950s to the 1990s.
With so many of the stations being in the USA, you would expect to see some element of the 1930s/40s high temperatures coming through the global record, but you don’t; they are a whole degree lower than the 1990s. Compare that to Graph 9 of the USHCN raw data.
So what cooling event in the rest of the world offset those very high temperatures?
Don’t forget these are allegedly RAW actual temperatures.
The same thing applies to the 9th graph, of USHCN temperatures, raw, 5-yr smoothed: no nice upward TRENDS, just four step changes of 0.5 degrees over a couple of years around 1920, 1930, 1950 and 1995, with a very fast trend up of 0.6 degrees between 1978 and 1990.
None of these suggests anything to do with steady increases due to the steady increase in CO2.
What in the Earth’s climate systems can produce such major shifts in climate?
In the USA data, are we seeing the 11-year solar cycle in some way?

Chuck L
June 27, 2014 3:06 am

I have followed the comments and have learned more about different methods of determining trends, TOBS, station drop-out, etc., but to reiterate what a number of commenters have said, the adjustments always create a greater rising trend. Why are adjustments continually being made to (already adjusted) older temperatures on an annual or even sub-annual basis?! Did they not get it “right” the first time? It is hard to imagine this is coincidental, if not outright malfeasance/fraud by NOAA/NCDC/GISS to fit the global warming agenda.

Nick Stokes
June 27, 2014 3:22 am

A C Osborn says: June 27, 2014 at 3:00 am
“I still cannot understand why no else has picked up WHAT IS UP WITH THAT 7th Graph of GHCN-M v3 Raw, Averaged Absolutes.”

Zeke picked up on it. He introduced that plot thus:
“There is a very simple way to show that Goddard’s approach can produce bogus outcomes. Lets apply it to the entire world’s land area, instead of just the U.S. using GHCN monthly:”
It’s a faulty method, and can give you anything.

A C Osborn
June 27, 2014 3:31 am

Nick Stokes says:
June 27, 2014 at 3:22 am
How about it gives you the TRUTH???
Are they real values or aren’t they?
Are they reproducible?

richardscourtney
June 27, 2014 3:40 am

Nick Stokes:
At June 27, 2014 at 3:22 am you rightly say of Goddard’s method

It’s a faulty method, and can give you anything.

Yes, indeed so.
The same can be said of each and every determination of global average surface temperature anomaly (GASTA) produced by BEST, GISS, HadCRU, etc…
This is because there is no agreed definition of GASTA so each team that compiles values of GASTA uses its own definition. Also, each team changes the definition it uses almost every month: this is why past values of GASTA change every month; if the definition did not change then the values would not change.
The facts of this are as follows.
There is no agreed definition of GASTA.
And
There are several definitions of GASTA.
And
The teams who determine values of GASTA each frequently changes its definition.
And
There is no possibility of independent calibration of GASTA determinations.
Therefore

Every determination of GASTA is determined by a faulty method, and can give you anything.

Richard

A C Osborn
June 27, 2014 3:50 am

One thing that I disagree with is combining air and water temperature data: it should be either air with air, or water with land surface.

Nick Stokes
June 27, 2014 3:51 am

richardscourtney says: June 27, 2014 at 3:40 am
“Every determination of GASTA is determined by a faulty method, and can give you anything.”

Zeke gave his demonstration of how Goddard’s method gives wrong answers. Let’s see your demonstration re GASTA?

June 27, 2014 3:53 am

@Stephen Richards
“This is all strawman stuff. Yes he uses controversial images to get his points across but his points are valid if a little OTT. You are doing what you accuse the AGW team of doing. Look at yourselves. Go look in the mirror.”
Oddly, I think I’m making the same point you’re making, so I’m not sure which mirror I should be peering into.

June 27, 2014 3:53 am

One of the apologists for constantly changing the temperature record of the past (does he have a time machine???) says that the abomination of the records at Luling, Texas (station number 415429) is a good thing and that this is how errors are found. Hmmmmm. A blogger takes his first-ever look at a station and finds massive problems? This is good?
But far, far worse is the idea that “NASA Is Constantly Cooling The Past And Warming The Present” as presented here: http://stevengoddard.wordpress.com/2014/06/27/nasa-is-constantly-cooling-the-past-and-warming-the-present/
Come on, fellows, when will the past record ever be settled? Does the temperature time series really depend on who is running the agency? Do the subjective, personal beliefs of the “scientists” involved really determine the reality that occurred in the past? Does anyone here really think they can defend constantly altering the past records?
And what about this:

What is really ugly about this is that they overwrite the data in place, don’t archive the older versions, and make no mention of their changes on the web pages where the graphs are displayed. There should be prominent disclaimers that the actual thermometer data shows a 90 year cooling trend in the US, and that their graphs do not represent the thermometer data gathered by tens of thousands of Americans over the past 120 years.

They don’t archive the older versions? They don’t archive the changes? They toss out the record of their altering of the data? Oh my!
They don’t even mention all these changes on the pages where the graphs are? They don’t even mention that the real data shows cooling rather than warming? My, my, my.
And some people just trust the “scientists”. Is it because of the white lab coats?

Greg Goodman
June 27, 2014 4:04 am

WUWT: ” The challenge is metadata, some of which is non-existent publicly”
This is the real challenge. Most European met services still seem very possessive about their daily data. Despite WMO regulations requiring free sharing of data, they often use prohibitive “distribution” charges to deter “free” data sharing.
The Austrian service also requires a NON-DISCLOSURE AGREEMENT.
The Swiss don’t even have a link to request data and provide ZERO data on their site.
The UK Met Office charges for daily data.
As Phil Jones revealed in one of his Climategate emails, they are HIDING behind intellectual property rights.
So the data is there to build a reliable global network, but there needs to be a significant change of attitude at many of these European meteo services before that can happen.
Since WMO rules already include that intention, maybe some coordinated action is needed to ensure national met services are not “hiding behind” IP or raising abusive charges for individual requests, instead of making detailed data readily available online.

richardscourtney
June 27, 2014 4:54 am

Nick Stokes:
At June 27, 2014 at 3:40 am I wrote

there is no agreed definition of GASTA so each team that compiles values of GASTA uses its own definition. Also, each team changes the definition it uses almost every month: this is why past values of GASTA change every month; if the definition did not change then the values would not change.
The facts of this are as follows.
There is no agreed definition of GASTA
And
There are several definitions of GASTA
And
The teams who determine values of GASTA each frequently changes its definition
And
There is no possibility of independent calibration of GASTA determinations.
Therefore
Every determination of GASTA is determined by a faulty method, and can give you anything.

At June 27, 2014 at 3:51 am you have ignored that explanation and argument and replied with this ‘red herring’

Zeke gave his demonstration of how Goddard’s method gives wrong answers. Let’s see your demonstration re GASTA?

I did that when I wrote,
“each team changes the definition it uses almost every month: this is why past values of GASTA change every month; if the definition did not change then the values would not change.”
But, of course, there is also this.
Having landed your ‘red herring’, perhaps you would now be willing to address the fundamental issue which I raised, or is my explanation and argument so good that you are incapable of addressing it?
Richard

June 27, 2014 4:56 am

What about the temperature stations that have been in rural areas for decades? It seems to me that these would give a more accurate picture of past temperatures.

MikeUK
June 27, 2014 4:59 am

I suspect that 50 US stations would NOT give a very accurate ABSOLUTE estimate for the mean temperature, but WOULD be adequate for indicating CHANGES in temperature.
You could pick 50 US babies and average their weights as they age, the result would be unlikely to be an accurate estimate of the average weight, but the resulting TREND would be highly likely to be accurate enough to quantify how weight changes with age.
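MikeUK’s point can be shown with synthetic data (a sketch under my own made-up numbers): a fixed set of 50 stations whose absolute levels are unrepresentative can still recover a shared trend quite well.

```python
# Sketch: a biased-in-level but fixed subset recovers the shared trend, not the absolute mean.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 2014)
true_trend = 0.07                                   # hypothetical deg C per decade
national_level = 12.0                               # hypothetical true absolute mean, deg C

n = 50
offsets = rng.normal(-3.0, 6.0, n)                  # the chosen 50 happen to sit in cooler spots
noise = rng.normal(0.0, 0.5, (n, years.size))
station_temps = (national_level + offsets[:, None]
                 + true_trend * (years - years[0])[None, :] / 10.0 + noise)

subset_avg = station_temps.mean(axis=0)
fitted_trend = np.polyfit(years, subset_avg, 1)[0] * 10
print(f"absolute level of the 50-station average: {subset_avg.mean():.2f} C (true {national_level:.2f} C)")
print(f"recovered trend: {fitted_trend:+.3f} C/decade (true {true_trend:+.3f})")
```

The caveat, as other commenters note, is that this only holds if the subset itself stays fixed and its stations share the regional trend.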

June 27, 2014 5:02 am

Aircraft contrails stoke warming, cloud formation
Tue, Mar 29 2011
http://uk.reuters.com/assets/print?aid=UKTRE72S4A220110329
Maybe this is related to nighttime warming?

onlyme
June 27, 2014 5:02 am

Mark Stoval (@MarkStoval) says:
June 27, 2014 at 3:53 am
NASA GISS doesn’t archive the actual data. They give no link in the FAQ to where the actual data used is archived. The rationale for this is explained in yet another FAQ.
http://data.giss.nasa.gov/gistemp/FAQ.html and http://data.giss.nasa.gov/gistemp/abs_temp.html
Even without the original data being archived, the organization is somehow still able to repeatedly adjust the anomalies generated from data it doesn’t have. Since they state they don’t have the actual data, of course the way the anomalies are calculated and recalculated is questioned. How many times can you adjust an adjusted figure before it becomes meaningless?
The CRU doesn’t archive the actual data either. No idea how they keep revising figures when the original data to make revisions from is not kept, but that’s another question.
The CRU states that even if they did archive the actual data they could not share it, for numerous reasons, including agreements with the data providers, some of which have been lost and others of which were oral agreements only.
http://www.cru.uea.ac.uk/cru/data/availability/

Latitude
June 27, 2014 5:14 am

jmrSudbury says:
June 26, 2014 at 6:53 pm
Latitude (June 26, 2014 at 2:40 pm) asks, “[d]id they lose 30% of their stations since 1990 or not? are they infilling those station now or not?”
The data Steven used was the raw that had -9999 and the final for the same station had an E for estimate beside the calculated number for a given month.
And Anthony asked for example of stations that have problems. One example: In Aug 2005, USH00021514 stopped publishing data (had -9999 instead of temperature data) save two months (June 2006 and April 2007) that have measurements. Save those two same months, the final tavg file has estimates from Aug 2005 until May 2014. The last year in its raw file is 2007.
John M Reynolds
=====
Thanks John

John Silver
June 27, 2014 5:16 am

What a long winded way of saying that you have nothing. Junk data = no data.
You better stick to rural MMTS, no airports.

Nick Stokes
June 27, 2014 5:23 am

richardscourtney says:June 27, 2014 at 4:54 am
“perhaps you would now be willing to address the fundamental issue which I raised”

Well, if you aren’t doing numbers, perhaps you can at least cite some supporting evidence for your assertions, such as:
“There are several definitions of GASTA”
And
“The teams who determine values of GASTA each frequently changes its definition”
And
“There is no possibility of independent calibration of GASTA determinations.”
I do independent surface temperature determinations. And compare.

June 27, 2014 6:01 am

Anthony,
Your plots comparing minimum annual and maximum annual temperature trends for Las Vegas are surely a most compelling reason for not estimating long-term temperature trends using the average of minimum and maximum, but using only maximum temperatures. And theory also supports this view, surely, concerning boundary layers at night and so forth?
So why not use only daytime maximum temperatures in climatic trend estimation?
REPLY: this is something Pielke and others have suggested..to no avail. – Anthony

MDS
June 27, 2014 6:24 am

The issue with measurement stations is not only absolute accuracy, but changes in the instrument housing as well. Are the stations painted? If so, I presume that the paint used is not only white, but that Observatory White is used and refreshed periodically; paint changes over time, and this changes the way that sun loading can affect instruments. Calibration is not only an offset metric but also a slope metric; instruments need to be calibrated, maintained, and, if an instrument is changed for some reason, the change noted for future reference in the data set. The experimental aspects, the obsessive attention needed to ensure that measurements are as precise (repeatable) as possible, are often overlooked. People simply assume that these things are done properly, but without a plan to do them, who knows? I’ve seen changes in instruments creep in before, when people become careless. And these do affect the data set, especially as the number of stations is reduced.
Food for thought.

richardscourtney
June 27, 2014 6:36 am

Nick Stokes:
Your evasions at June 27, 2014 at 5:23 am do not wash.
Each team that produces values of global average surface temperature anomaly (GASTA) uses a different definition; e.g. the weightings they apply to land and ocean differ, and they compute gridding differently.
They change the definition they use each month, and that is why their past values change each month. If a value of GASTA changes for some year decades in the past, then either the two values pertain to different definitions of GASTA, or the old value was wrong (why?), or the new value is wrong (why?).
It is not possible to have an independent calibration of GASTA because there is no agreed definition of GASTA.
Nick, I thought I had explained the issue in such clear and simple terms that even a climastrologist could understand it. Obviously I was wrong, so I will state the issue again, but in a different form.
There cannot be a ‘wrong’ value of global average surface temperature anomaly (GASTA) because there is no definition of what would be a ‘right’ value of GASTA, and a calibration to determine whether a value is ‘right’ does not, and cannot, exist.
Richard

Alexej Buergin
June 27, 2014 6:42 am

It is not true that Tony H. (“SG”) never admits that he is wrong.
As far as I know he is the only climate blogger to comment on THE GAME.
And after each and every contribution about THE GAME he admits that it was nonsense.

Latitude
June 27, 2014 6:43 am

MikeUK says:
June 27, 2014 at 4:59 am
====
Mike, what would you say if every time they computed the average baby weight…
..they shaved a few pounds off their original start weight
That would make any trends in baby weight wrong, wouldn’t it?

Alexej Buergin
June 27, 2014 6:46 am

According to the beautiful book “Meteorology Today” by C. Donald Ahrens:
“Most scientists use a temperature scale called the absolute or Kelvin scale…”
The fact that Tony W. now calls the Fahrenheit scale “absolute” seems to indicate that he has been promoted from meteorologist to climatologist…

Reply to  Alexej Buergin
June 27, 2014 7:22 am

Alexej, you battled with me over there.
Your “absolute” jibe was exposed as unfair
You then slunk away, out of shame or of fear
But shamelessly try to repeat the jibe here
Your statements above are just patently wrong
You know that, but hope that folks here go along
The “absolute” as he had used in his post
Was perfectly fine. But your arguments? Toast.
http://stevengoddard.wordpress.com/2014/06/14/how-zeke-hides-the-decline/#comment-368564
===|==============/ Keith DeHavelle

June 27, 2014 7:41 am

Zeke and Nick Stokes keep ignoring the elephant in the room. Their baseline of 1961-1990 has only 50 stations with all non-estimated data.
Second, the “proof” that stations are dropping out and changing the latitude mix ignores the fact that you can take Goddard’s methodology and apply it to individual states. Anomalies are a red herring to con people into using a contaminated baseline.

Samuel C Cogar
June 27, 2014 7:45 am

Anthony said:
This is why NOAA/NCDC spends so much time applying infills and adjustments; the surface temperature record is a heterogeneous mess. But in my view, this process of trying to save messed up data is misguided, counter-productive, and causes heated arguments (like the one we are experiencing now) over the validity of such infills and adjustments, especially when many of them seem to operate counter-intuitively.
—————
I agree, the surface temperature record(s) (from 1880 to present) is/are a heterogeneous mess, simply because said records were never meant to be anything other than what they are: recorded temperatures for the locale/location at which they were recorded. Their intended purpose was for “forecasting” weather conditions in adjoining locales/locations up to 4 or 5 days in advance, and after that their value was nil except as reference data. Later on, the local weather people started using their temperature record for “touting” record high and record low temperatures to appease the local public’s curiosity, a practice that continues to this very day.
But the “original sin” associated with the heterogeneous mess of the surface temperature record was perpetrated by James Hansen et al. in the early 1980s, when they decided to use the surface temperature record to prove and/or justify their “junk science” claims of CO2 causing Anthropogenic Global Warming / Climate Change.
It was an idiotic idea then, and it still is, but they refuse to admit it, primarily because said “idea” now has a massive “cult following” of true believers, flim-flammers, etc., and horrendous amounts of money, reputations and careers are highly dependent upon it continuing “as is”. And a majority of the aforesaid will do “whatever it takes” to protect their vested interest.

beng
June 27, 2014 7:46 am

Nice post, Anth*ny — quite a few important points.

Latitude
June 27, 2014 7:49 am

sunshinehours1… thanks for your earlier post… it was an eye opener
=====
sunshinehours1 says:
June 26, 2014 at 2:52 pm
Anthony, there are 1218 stations. That means there should be 14,616 monthly records for 2013.
There are 11,568 that have both a raw and a final data record: 79% of the 14,616.
There are only 9,384 of those 11,568 that do not have an E flag: 64.2% of the 14,616.
There are only 7,374 that have an empty error flag: 50% of the 14,616.
36.8% is close enough to 40% for me.
And yet the NOAA publishes press releases claiming this month or that is warmest ever.
REPLY: Thanks for the numbers, I’ll have a look. Please note that the USHCN is not used exclusively to publish the monthly numbers, that comes from the whole COOP network. – Anthony

flyfisher
June 27, 2014 8:16 am

You realize, of course, this argument cannot be won. Even if NASA were to admit that it has manipulated temperatures, the hue and cry will then be that “the climate is cooling at an unprecedentedly slow rate. All computer models suggest we should have been cooling much faster than we currently are.”

June 27, 2014 8:19 am

Some folks seem to be confused by my position, and Anthony’s post aims at finding agreement.
So, let me state some things clearly.
My position:
1. Averaging absolutes, as Goddard does, is not the best method to use, especially when records are missing.
A) it’s not the best method to calculate a global average
B) its not the best method to Assess the impact of adjustments.
2. IF you choose a method that requires long continuous records then you have to adjust for station changes
A) changes in location
B) changes in TOBS
C) changes in instrument.
3. The alternative to adjusting (#2) is to slice stations.
A) When a station moves, it’s a new fricking station, because temperature is a function of SITING.
B) When the instrument changes, it’s a new fricking station.
C) When you change the time of observation, it’s a new station.
4. Another alternative to #2 is to pre-select stations according to criteria of goodness.
On #1: The method of averaging absolutes is unreliable. Sometimes it will work; sometimes it will give you biases in either direction. Deciding which method to use should be done with a systematic study using synthetic data (a toy illustration of the failure mode follows at the end of this comment). This is not a skeptic-versus-warmist argument. This is a pure methods question.
On #2: This approach means that every adjustment you do will be subject to examination. You will never get them all correct. Since adjustment codes are based on statistical models, you might be right 95% of the time and wrong the other 5%. There are 40,000 stations. Go figure 5% of that.
On #3: This is my preferred approach versus #2. Why? Because when the station changes, it’s a new station. It’s measuring something different. The person who changed my mind about this was Willis. I used to like #2.
On #4: I’m all for it. However, the choice of station rating must be grounded in field tests, actual field tests of what makes a site good and what disqualifies a site. Site rating needs to be objective (based on measurable properties) and not merely visual inspection. Humans need to be taken out of rating, or strict rating protocols must be established and tested.
Now, let the personal attacks commence. Or you can look at 1-4 and say whether you agree or disagree.
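A minimal synthetic-data illustration of point #1 above (a sketch under my own assumptions, not Mosher’s or Zeke’s code): when stations with very different absolute climatologies drop out, a straight average of absolutes produces a spurious step, while a common-baseline anomaly average does not.

```python
# Sketch: station dropout biases an average of absolutes but not an anomaly average.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1950, 2014)
true_trend = 0.02                                      # hypothetical shared warming, deg C per year

n = 100
climatology = rng.normal(12.0, 8.0, n)                 # stations range from cold mountain to warm desert
temps = climatology[:, None] + true_trend * (years - years[0]) \
        + rng.normal(0.0, 0.3, (n, years.size))

# Suppose the 30 coldest stations stop reporting from 2000 onward.
alive = np.ones((n, years.size), dtype=bool)
cold = np.argsort(climatology)[:30]
alive[np.ix_(cold, np.flatnonzero(years >= 2000))] = False

masked = np.where(alive, temps, np.nan)
avg_absolute = np.nanmean(masked, axis=0)              # average of absolutes over reporting stations

baseline = temps[:, (years >= 1961) & (years <= 1990)].mean(axis=1)   # per-station 1961-1990 normal
avg_anomaly = np.nanmean(masked - baseline[:, None], axis=0)

def step_across_2000(series):
    """Difference between the post-2000 mean and the 1990s mean of a series."""
    return series[years >= 2000].mean() - series[(years >= 1990) & (years < 2000)].mean()

print(f"step across 2000, averaging absolutes: {step_across_2000(avg_absolute):+.2f} C (spurious)")
print(f"step across 2000, averaging anomalies: {step_across_2000(avg_anomaly):+.2f} C (~true trend)")
```

The dropout direction here (coldest stations vanishing) is deliberately chosen to make the artifact warm; losing the warmest stations would fake a cooling step just as easily.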

Carrick
June 27, 2014 8:27 am

Alexej:

The fact that Tony W. now calls the Fahrenheit scale “absolute” seems to indicate that he has been promoted from meteorologist to climatologist…

This is an example of how a little knowledge can be a dangerous thing.
Technically, Kelvin is an absolute thermodynamic temperature scale: one that has its zero at absolute zero.
Fahrenheit and Celsius are “absolute scales” in the metrological sense because they are tied to specific measurables that link their readings to an absolute thermodynamic scale (thermodynamic absolute zero is -459.67°F and -273.15°C respectively).
Metrologically, we distinguish devices as being “relative” versus “absolute”. Relative measurements are made relative to some selected, but in principle arbitrary, value. Differential pressure sensors, which measure the pressure inside a tank relative to the outside pressure, are an example of this. A laser range finder, which measures the position of one object relative to some selected point, is another example.
Examples of absolute devices would be an absolute barometer, or a GPS unit, which gives an absolute position measurement relative to a defined geoid.

Sam Glasser
June 27, 2014 8:36 am

I like the idea of selecting two groups of 50 stations to represent the change in temperature.
In fact, in 2010 I surveyed all the station lists in GISS and found 422 such stations (excluding the US) with continuous data extending back before 1940 (when CO2 concentrations began to rise). As a matter of interest, these same stations showed an average increase of ~2 deg. C (1.8 calc.). Compare that with the current (re-adjusted since my survey) GISS “Global Temperature” (SAT) which shows 4 times that increase. As to readjustments, the current GISS graph shows recent temperatures 0.1 deg.C higher than the graph in 2010 and about 0.2 deg.C cooler prior to 1920.

basicstats
June 27, 2014 8:48 am

Anomalies provide a useful way of salvaging temperature records corrupted by problems of station loss, relocation, changing observation times etc. etc. They also offer a more coherent presentation of spatial averages, removing local distortions. But, they are not actual temperatures and this needs to be considered when applying statistical procedures, eg kriging. Whatever kriged anomalies are, they are not what you would get from first kriging the actual temperatures and then taking anomalies. Something Cowtan and Way, and others, might note.

June 27, 2014 9:30 am

MarkStoval wrote, “They don’t archive the older versions? They don’t archive the changes? They toss out the record of their altering of the data? Oh my!”
It’s even worse than that. NASA GISS takes active measures to try to prevent their data (our data!!) from being archived. I’m not kidding.
I admit that I do occasionally cringe at Steve Goddard’s over-the-top rhetoric, but give the man credit: he is the one who exposed this. See:
http://stevengoddard.wordpress.com/2012/06/11/giss-blocking-access-to-archived-data-and-hansens-writings/
(I have some comments there, too.)
This was the final paragraph of my 2nd comment there:
“This amazes me. I really am surprised at how blatant their misbehavior is. They’re absolutely shameless. I’m becoming convinced that the guys running GISS are just plain crooks. If I’d given an order that they cease blocking archive.org with robots.txt, and I subsequently discovered this subterfuge, I’d fire somebody so fast there would be skid marks on the sidewalk outside the front door where their butt hit the concrete.”

Lance Wallace
June 27, 2014 9:44 am

daveburton says:
June 27, 2014 at 1:03 am
Thanks for putting my Dropbox graph of your Fig. “D” historical data of the US 48-state temperature anomalies into more permanent archive. Here is the full Excel file with the graph.
https://dl.dropboxusercontent.com/u/75831381/NASA%20Fig%20D%201999-2014.xlsx
I used the same data (cut off at 1998 so all the datasets could be compared from the Hansen 1999 up to the present) to calculate the change in the linear rate of increase. The rate was 0.32 degrees C per century according to Hansen (1999) and is now 0.43 per century, about a 35% increase, due entirely to adjustments to the historical data. (See the graph in the third tab of this second Excel file.)
https://dl.dropboxusercontent.com/u/75831381/NASA%20Fig%20D%201880-1998.xlsx
You are welcome to archive these files if you find them useful. These data should be more widely distributed, in my opinion; you have performed a real service here.

Eugene WR Gallun
June 27, 2014 9:45 am

Sleepless nights and endless worry compose a poem.
PROFESSOR PHIL JONES
The English Prometheus
To tell the tale as it began–
An ego yearned
Ambition burned
Inside a quiet little man
No one had heard of Phillip Jones
Obscure to fame
(And likewise blame)
The creep of time upon his bones
Men self-deceive when fame is sought
Their fingers fold
Their ego told
That fire is what their fist has caught
So self-deceived, with empty hand
Jones made it plain
That Hell would reign
In England’s green and pleasant land
Believe! Believe! In burning heat!
In mental fight
To prove I’m right
I’ve raised some temps and some delete
And with his arrows of desire
He shot the backs
Of any hacks
That asked the question — where’s the fire?
East Anglia supports him still
His lies denied
Whitewash applied
Within that dark Satanic mill
The evil that this wimp began
No one should doubt
The truth will out
Prometheus soon wicker man

June 27, 2014 9:59 am

Nick Stokes: “Almost all the adjustment complaint re USHCN is about TOBS – it’s the biggest. ”
I’m not sure about that.
I just graphed TMAX raw, tob, FLs.52i (Final)
The trend for raw was -0.01 C/decade.
The trend for tobs was 0.02 C/decade.
The trend for Final was 0.05 C/decade. [chart]

June 27, 2014 10:33 am

TOBS (Time of Observation Bias) have been mentioned.
For my little spot on the globe, the record high in the 2002 list for Feb. 4th was 66° F, set in 1946.
The April 2012 list says it was 61° F, set in 1962 and tied in 1991. (The earlier list did not include ties.) The record for Feb. 3rd was 63° F, and for Feb. 5th it was 64° F. What happened to the 66° F? If it was shifted to the 3rd or the 5th due to TOBS, then those records should have been 66° F or higher.
PS To add to the confusion of adjustments, the June 2012 list has the record high for Feb. 4th back up to 66° F but now it was set in 1890. No mention of it being tied in 1946.

Evan Jones
Editor
June 27, 2014 11:32 am

Anthony:
First, I will send you and J-NG the final station review today. Yes, there were a few changes, but yes, we are still golden (Class 1/2s clock in at 0.193/decade for raw + MMTS adjustment only).
Second, interesting thing about Las Vegas (USHCN 294862):
From 1979 – 2008, its raw data trend is -0.109 C per decade (-0.081 after my MMTS adjustment). This is “adjusted” to a whopping +0.465 C/decade. And no, there is no TOBS bias. And no moves.
Homogenization strikes again?
Note that this is a cooling Class 4 station, so it is likely that the cooling is exaggerated by bad siting. Recall that bad siting will tend to exaggerate any trend, either warming or cooling. But I see no justification for the huge adjustment that NOAA makes.

Evan Jones
Editor
June 27, 2014 11:37 am

TOBS (Time of Observation Bias) have been mentioned.
TOBS for Las Vegas (USHCN 294862) is 8:30 AM throughout the observation period (1979 – 2008). This would have an effect on the actual readings (too low), but would likely have no effect at all on trend.

Solomon Green
June 27, 2014 11:44 am

I look forward to reading the Evan Jones and Anthony Watts finished paper. But while waiting, I would love to see evidence of the accuracy of the algorithms used for the “infilled” or “fabricated” data produced by NOAA/NCDC to account for the missing observations.
It should be easy to design an experiment to test the accuracy: (1) take a random sample of stations with known observations; (2) remove a random subset of those known observations; (3) compare the infilled data with the known observations.
I am sure that this must have been done many times, since without such a test the algorithm for infilling the data from a particular site or sites cannot be justified. But I have never seen any paper that justifies the accuracy of the methods used for infilling. Could someone point me in the right direction, please?
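The hold-out experiment Solomon Green describes is straightforward to sketch. The version below is illustrative only: the “infill” here is a simple all-station monthly mean, not NOAA’s actual pairwise/FILNET procedure, and the synthetic data are made up; the point is the shape of the test, not the method being tested.

```python
# Sketch: withhold known observations, infill them, compare against the withheld truth.
import numpy as np

rng = np.random.default_rng(5)
n_stations, n_months = 200, 120
regional_signal = rng.normal(0.0, 1.0, n_months)                 # shared signal, deg C
temps = regional_signal[None, :] + rng.normal(0.0, 0.4, (n_stations, n_months))

# (1) start from fully known observations; (2) knock out 10% at random
mask = rng.random(temps.shape) < 0.10
held_out = temps[mask]

# infill each missing value with the mean of all reporting stations that month
reporting = np.where(~mask, temps, np.nan)
infilled = np.broadcast_to(np.nanmean(reporting, axis=0), temps.shape)[mask]

# (3) compare infilled values against the withheld truth
rmse = np.sqrt(np.mean((infilled - held_out) ** 2))
print(f"RMSE of infilled vs withheld observations: {rmse:.3f} C")
```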

Evan Jones
Editor
June 27, 2014 11:44 am

Almost all the adjustment complaint re USHCN is about TOBS – it’s the biggest.
Yes and no. It is the single largest pre-homogenization adjustment. But then homogenization comes into play. And homogenization, based on the 80% of USHCN stations that are badly sited, increases the trend of the well sited stations by 70%. And that is using only non-moved, constant-TOBS stations with only the MMTS adjustment applied (an adjustment we can’t avoid).
So homogenization whipsaws the well sited stations into “conformity” with the poorly sited stations. Poorly sited stations, on average, see virtually no overall homogenization adjustment whatsoever: it is the well sited stations (70% lower trend) that have their data obliterated.
Homogenization. That is the “biggest adjustment”. And it is 100% spurious.
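[A deliberately crude toy of the “whipsaw” claim above, in which homogenization is caricatured as pulling each station's trend toward the mean of the other stations; this is not NOAA's pairwise homogenization algorithm, and the trend figures are simply the ones quoted in this thread.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-decade trends: ~80% poorly sited near 0.32 C/decade, ~20% well sited
# near 0.19 C/decade (the rough figures quoted in this discussion).
poor = rng.normal(0.32, 0.05, 80)
good = rng.normal(0.19, 0.05, 20)
trends = np.concatenate([poor, good])

# Caricature of homogenization: blend each station's trend with the mean of
# all the OTHER stations, with weight w. Not the real PHA.
w = 0.7
others_mean = (trends.sum() - trends) / (trends.size - 1)
adjusted = (1 - w) * trends + w * others_mean

print(f"poorly sited: {poor.mean():.3f} -> {adjusted[:80].mean():.3f} C/decade")
print(f"well sited:   {good.mean():.3f} -> {adjusted[80:].mean():.3f} C/decade")
# The well sited minority is dragged toward the poorly sited majority,
# while the majority itself barely moves.
```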

Evan Jones
Editor
June 27, 2014 12:01 pm

I say we have one chance to get it right, so we’ve been taking extra care to deal effectively with all criticisms from back then, as well as criticisms from within our own team.
You said it, brother. You remember the caning we took for the “preliminary release”, of course? That was a very wise decision on your part. It allowed us to address the three main criticisms (TOBS-bias, moves, MMTS conversion) before we submit.

A C Osborn
June 27, 2014 12:09 pm

sunshinehours1 says:
June 27, 2014 at 9:59 am
I just love it, Nick makes the statement, you shoot him down and then Evan Jones confirms it with the top station data.
Way to go.

A C Osborn
June 27, 2014 12:10 pm

This thread is so important it should really be pinned to the top.

June 27, 2014 12:12 pm

Nick Stokes: “Almost all the adjustment complaint re USHCN is about TOBS – it’s the biggest. ”
I took tmax raw,tobs and final. I gridded the data on 1×1 grids to deal with station drop out. And then I looked at each month.
From 1998 to 2013 the tobs adjust added 0.04 to 0.09 C/decade to the trend (compared to raw)
From 1998 to 2013 the Final adjust added 0.15 to 0.25 C/decade to the trend (compared to tob)
So Nick … you are wrong.
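[A minimal sketch of the gridding-and-trend procedure described above, assuming each dataset (raw, tob, final) arrives as rows of lat, lon, year and value for a single calendar month; the column names and the loading of the data are hypothetical.]

```python
import numpy as np
import pandas as pd

def gridded_trend(df, cell_deg=1.0):
    """Average stations into lat/lon cells of `cell_deg` degrees, average the
    cells into a single mean per year, and return the OLS trend in deg/decade.
    Expects columns: lat, lon, year, value."""
    df = df.copy()
    df["cell"] = list(zip((df["lat"] // cell_deg).astype(int),
                          (df["lon"] // cell_deg).astype(int)))
    cell_means = df.groupby(["year", "cell"])["value"].mean()
    yearly = cell_means.groupby("year").mean()
    return np.polyfit(yearly.index, yearly.values, 1)[0] * 10

# Usage sketch (the DataFrames here are hypothetical, e.g. May-only records):
# for name, df in {"raw": raw_may, "tob": tob_may, "final": final_may}.items():
#     print(name, gridded_trend(df, 1.0), gridded_trend(df, 5.0))
```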

Quinn the Eskimo
June 27, 2014 12:18 pm

This has been a rather informative discussion. It has actually produced a relatively snark-free post from Mosher, an event so surprising that I repaired momentarily to my fainting couch.
Having done so, I’m now ready to declare a winner in this contest:
The winner: Anthony Watts and Steven Goddard.
The loser by TKO: The instrumental temperature record.

June 27, 2014 12:31 pm

Here is the Jan 1998-2013 1×1 gridded TMAX data I mentioned in my last post.
I also did a 5×5 grid and was surprised how much it changed the trends.

Alexej Buergin
June 27, 2014 12:36 pm

Carrick says: June 27, 2014 at 8:27 am
Kelvin is an absolute thermodynamic temperature scale
Sorry, I am a physicist. It is physics that defines what “temperature” is. You should write: Kelvin is THE absolute (or thermodynamic) temperature scale.
(Some people distinguish between “thermodynamic” temperature as used in the second law of thermodynamics, and “absolute” temperature as measured with a gas thermometer, and then proceed to show that these are the same).
Your sentence: “GPS unit is an example of an absolute position measurement relative to a defined geoid” makes no sense. All positions, and all speeds, are relative (as in the theory of relativity).
Fahrenheit is RELATIVE to the lowest point Daniel Gabriel managed to reach by mixing ice, water and salt, and Celsius is RELATIVE to the freezing point of water. No physicist would call that absolute.
My only consolation is that climatologists make an even worse mess of statistics; seems to be part of their system.

June 27, 2014 12:38 pm

One addition. The NOAA graph for the same period:
URL here:
http://www.ncdc.noaa.gov/cag/time-series/us/110/00/tmax/1/01/1998-2013?base_prd=true&firstbaseyear=1901&lastbaseyear=2000&trend=true&trend_base=10&firsttrendyear=1998&lasttrendyear=2013
But their graph ignores the trend request in the url. So you have to change trend dates to 1998 and 2013.

June 27, 2014 12:44 pm

PS to Nick/Zeke/Mosher. No anomalies were used … or needed. Just gridding and analyzing each month separately.
Maybe tomorrow I will check the E flag.

Alexej Buergin
June 27, 2014 12:45 pm

Carrick says: June 27, 2014 at 8:27 am
Calculated the temperature that is half of 10°C = 50°F.
You need to do that in an “absolute” temperature scale.
Thus 10°C = 283.15K; half is 141.575K = -131.575°C

June 27, 2014 12:51 pm

sunshinehours1 says:
June 27, 2014 at 12:12 pm
Nick Stokes: “Almost all the adjustment complaint re USHCN is about TOBS – it’s the biggest. ”
I took tmax raw,tobs and final. I gridded the data on 1×1 grids to deal with station drop out. And then I looked at each month.
From 1998 to 2013 the tobs adjust added .04 to .09C/decade to the trend (compared to raw)
From 1998 to 2013 the Final adjust added .15 to .25C/decade to the trend (compared to tob)
So Nick … you are wrong.

Here we again see evidence that all “adjustments” are biased toward supporting the warming agenda. This is not science. Not even close.

June 27, 2014 1:08 pm

I wonder what numbers are being analyzed? The original numbers or the adjusted numbers after they have been adjusted after they have been adjusted after……….
Does anybody know how many times the numbers have been adjusted before the current set is being analyzed?

A C Osborn
June 27, 2014 1:14 pm

Mark Stoval (@MarkStoval) says:
June 27, 2014 at 12:51 pm
I beg to differ, it is Science, but not of Climatology; it is of obfuscation, Political Tampering and downright falsification.

Alexej Buergin
June 27, 2014 1:18 pm

Carrick says: June 27, 2014 at 8:27 am
This is an example of how a little bit of knowledge is a dangerous thing.
Look up “Meteorology Today” by Ahrens on Amazon (since it costs more than $100 and has more than 500 pages, I would not advise you to buy it).
There is a great big lot of knowledge there.

Evan Jones
Editor
June 27, 2014 1:19 pm

Nick is both right and wrong.
Overall, TOBS has a greater effect on all stations, both poorly and well sited. In that sense, Nick is right. (Some have TOBS -bias, some do not.)
But for the “true signal” — the well sited stations — the biggie is homogenization. When we remove TOBS-biased Class 1\2s we go from 0.156 C/decade to 0.193 C/decade (raw + MMTS adjustment only). But homogenization adjusts those “true signal” stations from 0.193 to the extent that the final result is ~0.324 C/decade, almost identical to the poorly sited stations. So, in that sense, Nick is wrong, wrong, wrong.

Editor
June 27, 2014 1:26 pm

In addition to infilling data for stations that no longer report data, GISS estimates data missing from a station’s record using an averaging algorithm. This averaging algorithm simply reinforces whatever underlying trend is present in the existing data. That is one problem with the algorithm. The other problem is that the averaging is redone every time GISS updates their global temperature. What this means is that the estimate for the missing temperature for May, 1927 at the East Podunk weather station will be influenced by the March, April, and May 2014 temperatures at the same station. And next year, the March, April, and May 2015 temperatures will influence that estimate. And so on.
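[A toy of the behaviour described above, assuming the missing-month estimate is simply the station's long-term mean for that calendar month over whatever years exist at the time of the run; that is a stand-in, not GISS's actual algorithm, and the station data are invented. It shows a May 1927 estimate drifting as later years are appended.]

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical May temperatures for one station, 1900-2014, with 1927 missing.
years = np.arange(1900, 2015)
may = 18.0 + 0.01 * (years - 1900) + rng.normal(0.0, 1.0, years.size)
may[years == 1927] = np.nan

def estimate_missing(values):
    """Stand-in infill: mean of whatever non-missing values exist so far."""
    return np.nanmean(values)

# Re-run the estimate each time the record grows, as with periodic updates.
for cutoff in (1960, 1990, 2014):
    available = may[years <= cutoff]
    print(f"May 1927 estimate using data through {cutoff}: "
          f"{estimate_missing(available):.2f} C")
# Because the station warms over time, each update nudges the 1927 estimate up.
```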

Latitude
June 27, 2014 1:35 pm

Gunga Din says:
June 27, 2014 at 1:08 pm
Does anybody know how many times the numbers have been adjusted before the current set is being analyzed?
===
Looks like it could be every month…
=====
angech says:
June 26, 2014 at 4:58 pm
a second small request. Can you put the first graph up that Zeke shows at the Blackboard
26 June, 2014 (14:21) | Data Comparisons | By: Zeke
which shows with clarity the adjustment of past records by up to 1.2 degrees from 1920 to 2014.
“You do not change the original real data ever”
Zeke wrote (Comment #130058) blackboard
June 7th, 2014 at 11:45 am. Very Relevant to everyone
“Mosh, Actually, your explanation of adjusting distant past temperatures as a result of using reference stations is not correct. NCDC uses a common anomaly method, not RFM.The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.”
So the past temperature is being lowered all the time, based on the newest input data, which is itself being adjusted by reference to only 50% of real stations, many of which are referenced to surrounding cities and airports, and to previous recent hot years [shades of Gergis/Mann technique here] as having more temporal and spatial weighting.
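[A minimal sketch of the mechanics Zeke describes in the quote above, assuming a single step of +0.3 C appearing in 2006 and a detector that has already found it; the detection step itself (the pairwise comparison against neighbours) is not shown, and all numbers are invented.]

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 100-year station record with a non-climatic +0.3 C step in 2006
# (say, an undocumented equipment or siting change).
years = np.arange(1914, 2015)
series = 0.005 * (years - 1914) + rng.normal(0.0, 0.2, years.size)
series[years >= 2006] += 0.3

detected_step, break_year = 0.3, 2006  # assume the pairwise test found it

# Per the description: current values are treated as "true", so everything
# BEFORE the breakpoint is shifted by the size of the detected break.
homogenized = series.copy()
homogenized[years < break_year] += detected_step

pre = (years >= 1920) & (years < 1930)
print(f"1920s mean, raw:         {series[pre].mean():.2f}")
print(f"1920s mean, homogenized: {homogenized[pre].mean():.2f}")
```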

Carrick
June 27, 2014 1:39 pm

Alexej Buergin:

Sorry, I am a physicist.

Well good. I’ll just say that people often say “absolute temperature” in physics, and it is most common to interpret this as “absolute thermodynamic (temperature) scale”, but the metrological meaning is valid too. Technically it’s an abuse of language to just say “absolute temperature” when you really mean Kelvin scale. I’d say “Kelvin scale” if that’s what I wanted, since it’s less ambiguous.

It is physics that defines what “temperature” is.

Actually we define quantities to be “useful”, so there’s more than just pure physics in the definition. Convention and history play a big role, especially in thermodynamics.

You should write: Kelvin is THE absolute (or themodynamic) temperature scale.

Actually, ANY temperature scale that defines its zero to be absolute zero is an absolute thermodynamic (temperature) scale.
So Kelvin is an absolute scale, but not the only one. Rankine is another, albeit less used these days.
Technically, any scale α × K where α ≠ 0 is a valid absolute thermodynamic temperature scale.

Your sentence: “GPS unit is an example of an absolute position measurement relative to a defined geoid” makes no sense. All positions, and all speeds, are relative (as in theorie of).

They are defined in reference to the geoid. What makes a measurement “relative” is that it is related to an arbitrary reference value (like the atmospheric pressure at the point in time I measure the pressure inside of the tank).
What makes a measurement “absolute” is that if I measure a particular location, e.g., 45°N, 90°W, anybody knows what point on Earth that value relates to. If I say “+3°N, -5°W” relative to my current location, nobody has a way of determining exactly what that means.

Calculated the temperature that is half of 10°C = 50°F.
You need to do that in an “absolute” temperature scale.

I’m not sure what this relates to, but there isn’t any fundamental meaning to “half of the temperature” unless you relate it to the microscopic theory of thermodynamics. Without that interpretation, “dividing temperature by two” has no physical meaning. With that interpretation, the prescription that you first convert to an absolute scale before dividing, becomes obvious.

Nick Stokes
June 27, 2014 2:07 pm

evanmjones says: June 27, 2014 at 1:19 pm
“But for the “true signal” — the well sited stations — the biggie is homogenization. When we remove TOBS-biased Class 1\2s we go from 0.156 C/decade to 0.193 C/decade (raw + MMTS adjustment only). But homogenization adjusts those “true signal” stations from 0.193 to the extent that the final result is ~0.324 C/decade, almost identical to the poorly sited stations. So, in that sense, Nick is wrong, wrong, wrong.”

You don’t say what period. It obviously depends on that.
TOBS, when it is needed, is a big adjustment. Jerry Brennan at johndaly.com did a lot of calculations; I graphed them here. If you change from 5pm to midnight, the histogram is centred at about 0.8°C. Of course, how that translates in trend depends on the period, and for a group of stations, it depends on how many stations underwent such a change. I suspect the subgroup you have selected may be one where observation times changed less.
You can see the dominance of TOBS in this familiar plot (V1). It isn’t always positive, so it may not at each year be largest in absolute terms. But it varies the most, which affects trend.
The plot also emphasises that while going from raw to tobs is one change, going from tobs to final involves a number of adjustments. So showing that in some cases the second stage made a bigger change to trend doesn’t prove that TOBS wasn’t the largest.
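[A minimal simulation of the time-of-observation effect being discussed, assuming idealised hourly temperatures with an afternoon peak; the point is only to show why an afternoon reset of a max/min thermometer double-counts hot afternoons and so reads warm relative to a midnight reset. All numbers are invented.]

```python
import numpy as np

rng = np.random.default_rng(4)

n_days = 365 * 30
hours = np.arange(n_days * 24)

# Hypothetical hourly temperatures: a diurnal cycle peaking mid-afternoon,
# plus a random day-to-day level (warm and cool spells).
day_level = np.repeat(rng.normal(15.0, 4.0, n_days), 24)
diurnal = 6.0 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)
temps = day_level + diurnal + rng.normal(0.0, 0.5, hours.size)

def mean_tmax(temps, obs_hour):
    """Max/min thermometer read and reset at `obs_hour`: the recorded TMAX for
    each observation day is the max over the 24 h ending at the reading."""
    t = temps[obs_hour:]                      # start the first window at a reset
    n = t.size // 24
    return t[: n * 24].reshape(n, 24).max(axis=1).mean()

print("midnight reads:", round(mean_tmax(temps, 0), 3))
print("5 pm reads:    ", round(mean_tmax(temps, 17), 3))
# The 5 pm convention lets one hot afternoon dominate two successive
# observation days, so its mean TMAX comes out warmer.
```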

Nick Stokes
June 27, 2014 2:13 pm

John Goetz says: June 27, 2014 at 1:26 pm
“In addition to infilling data for stations that no longer report data, GISS estimates data missing from a station’s record using an averaging algorithm.”

Do you mean USHCN? That’s what we’ve been talking about. AFAIK, GISS doesn’t infill non-reporting stations, nor estimate missing data within station time series. They use a rather complicated gridding method in which such measures would not be needed.

Nick Stokes
June 27, 2014 2:28 pm

richardscourtney says: June 27, 2014 at 6:36 am
“Each team that produces values of global average surface temperature (GASTA) uses a different definition; e.g. the weightings they apply to land and ocean differ and they compute gridding differently.”

That’s not a different definition. They are calculating a spatial integral of the temperature anomaly, and using different numerical approximations. There are a huge number of different numerical ways to approximate integrals, and most work well. Using a different method doesn’t mean you are redefining the thing you are calculating.

Evan Jones
Editor
June 27, 2014 2:44 pm

You don’t say what period. It obviously depends on that.
1979 – 2008. A period of unequivocal warming. After all, for bad siting to affect trend, there must be a real, genuine trend to exaggerate in the first place.
Overall, warming from 1950 to date is ~+0.107C/decade (Haddy4). My bottom-line “best guess” is that the “true signal” is closer to 0.07C per decade, exaggerated by poor microsite. That translates to raw CO2 warming with perhaps a bit of negative feedback in play.

richardscourtney
June 27, 2014 2:47 pm

Nick Stokes:
Your post at June 27, 2014 at 2:28 pm says

richardscourtney says: June 27, 2014 at 6:36 am

“Each team that produces values of global average surface temperature (GASTA) uses a different definition; e.g. the weightings they apply to land and ocean differ and they compute gridding differently.”

That’s not a different definition. They are calculating a spatial integral of the temperature anomaly, and using different numerical approximations. There are a huge number of different numerical ways to approximate integrals, and most work well. Using a different method doesn’t mean you are redefining the thing you are calculating.

Nice try but no coconut.
Clearly, the different methods provide different results. Indeed, the methods change from month to month and that changes the results.
I repeat my question and I add another.
If the different methods are not analysing different definitions then why do values of global average surface temperature (GASTA) from decades ago alter when the method is changed from month to month: which is the right determination, any of the ones before a change or any of those after it? In other words, if the different (and frequently changed) methods are not assessing different definitions of GASTA then why do the methods not provide the same results?
And my additional question is
What is the definition of global average surface temperature (GASTA).
Richard

Evan Jones
Editor
June 27, 2014 2:52 pm

TOBS, when it is needed, is a big adjustment.
Yes, surely. We find it to be ~0.09C (trend, sic) from 1979 – 2008 (comparing USHCN Class 1\2 TOBS-biased stations with Class 1\2 non-TOBS-biased). It obviously depends on when the TOBS change occurs during the series: If it’s in the middle of the series (in this case, 1979 – 2008), it has a very large effect, and if it occurs at the end or beginning of the series it has little effect. But it is a big step change, no matter how you slice it. Even averaged out over 30 years.
And it is the uncertainty therein that causes us to drop all stations with TOBS bias (for the current study): Sometimes the best way to deal with a problem is simply to bypass it. (We confirm our accuracy by comparing “TOBS-adjusted data” for the non-TOBS biased stations we include, and the raw+MMTS figures match the TOBS figures very well. So we pruned correctly.)
That is why we filter out all stations with moves and TOBS bias and apply only MMTS adjustment (regrettable but unavoidable). That way we get a near-pure assessment of poorly sited stations vis a vis well sited stations. And the difference is staggering.

Svend Ferdinandsen
June 27, 2014 3:29 pm

The whole problem is that they pretend to make an (artificial) measurement of the temperature as it would be without man’s influence on land use. The idea is that CO2 would then be the only thing changing it.
That is, anyway, one of the reasons it does not have much to do with temperature.
The result is that we do not have a temperature that reflects reality.
I mean, UHI, land use, air conditioning and whatever else are a natural part of the temperature. If UHI raises the temperature, then it is real and should be reported as it is.
If they want a rural average, then make one out of real rural measurements, but keep the two temperatures apart.
But again, what is rural? It can be influenced by the crop and its harvest, the forests and pine beetles and some other changes in the land. No matter what is done, it is not possible to make an average that is not somehow influenced by our use of the land.

Nick Stokes
June 27, 2014 3:29 pm

richardscourtney says: June 27, 2014 at 2:47 pm
“And my additional question is
What is the definition of global average surface temperature (GASTA).”

Here’s my definition:
1. Surface temperature is as measured by weather stations on land, SST over ocean.
2. Anomaly is the local difference between ST and the local average over a prescribed period (eg 1961-90).
3. GASTA is the mean calculated by spatial integral of a time average (say one month) of anomaly.
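[Nick's three steps can be written compactly; the notation below is added here as a transcription of his wording, not his own.]

\[
a_m(x) = \frac{1}{|m|}\sum_{d \in m} T_d(x) \;-\; \bar{T}_m(x),
\qquad
\mathrm{GASTA}_m = \frac{1}{A}\int_{\text{surface}} a_m(x)\, \mathrm{d}A,
\]

where \(T_d(x)\) is the surface temperature on day \(d\) at location \(x\), \(\bar{T}_m(x)\) is the local average for that calendar month over a prescribed base period (e.g. 1961–90), and \(A\) is the surface area integrated over.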

June 27, 2014 3:33 pm

Calibration studies show that unaspirated land-station temperature sensors produce a systematic warm bias. That warm-bias error is variable in time and space, non-random, occurs in summer and winter measurements, and is not a constant offset.
Any USHCN – USCRN anomaly comparison, as Zeke H. referenced, in which both temperature data sets were differenced against a common normal, ought to show that bias. However, in the event, it doesn’t.
It is hard to believe that the systematic USHCN warm bias-error would just automatically average away, so that the recent USHCN surface anomaly trend should be within (+/-)0.1 C of the USCRN trend.

Evan Jones
Editor
June 27, 2014 4:11 pm

But there is no real trend from when CRN started. So there would be no trend difference between USHCN and CRN over that period. For bad siting to affect a trend, there must first be a trend to affect.
For the 1979 – 2008 period, there was significant warming, greatly exaggerated by poor microsite.

June 27, 2014 4:16 pm

This is funny. NOAA has this new nClimDiv using a 5k grid.
NOAA TMAX May 1998 to 2014 Trend = -0.6 F/decade
My simple USHCN 5×5 TMAX May 1998 to 2014 Trend = -0.62 F/decade
Mine vs NOAA so far this year (month: my 5×5 gridded value vs NOAA):
Month 1: -1.02 vs -0.93
Month 2: -2.54 vs -2.66
Month 3: 0.67 vs 1.01 ** Strange
Month 4: -0.11 vs -0.10
Month 5: -0.62 vs -0.6
Scary. Because all you have to do to change the trend is change the gridding.
Here is comparing my 5×5 to 1×1 (month: 1×1 vs 5×5):
Month 1: -1.17 vs -1.02
Month 2: -2.96 vs -2.54
Month 3: 0.86 vs 0.67
Month 4: -0.2 vs -0.11
Month 5: -0.51 vs -0.62

Alexej Buergin
June 27, 2014 4:45 pm

Carrick says: June 27, 2014 at 1:39 pm
Well good. I’ll just say that people often say “absolute temperature” in physics, and it is most common to interpret this as “absolute thermodynamic (temperature) scale”, but the metrological meaning is valid too.
Supposing that you intended to say meteorological, my answer would be: I learned about WX at a US university, we used the book by Ahrens, and it says: “Most scientists use a temperature scale called the absolute or Kelvin scale…” Seems unambiguous to me. (But that was 20 years ago and it was the fourth edition; now the ninth is current.)
(By the way: Do you personally know somebody who uses Rankine, or can you find some recent writing which uses Rankine? In non-US countries C and K are as normal as the fact that football is played with the feet.)

Latitude
June 27, 2014 4:56 pm

evanmjones says:
June 27, 2014 at 4:11 pm
For the 1979 – 2008 period, there was significant warming, greatly exaggerated by poor microsite.
====
Wildman, what do you or Anthony consider “significant”?

June 27, 2014 5:33 pm

evanmjones, there is a trend. It’s just approximately zero slope. The systematic and variable warm bias of unaspirated (USHCN) sensors is demonstrated and beyond dispute. The USHCN bias ought to show up in the anomalies. If they have a common normal, the USHCN trend ought to be variably warmer than, i.e., variably and positively offset from, the USCRN trend. But it’s not.
The warm bias is not from bad siting, by the way. It occurs even in perfectly sited sensors that are in excellent repair. The bias is due to thermal loading of the shield. It’s intrinsic to the construction of the original USHCN sensors, including the MMTS sensors, and is endemic in the USHCN record.

Evan Jones
Editor
June 27, 2014 6:13 pm

Wildman, what do you or Anthony consider “significant”?
We find ~0.19C/decade over the 30-year period from 1979 – 2008 (for our precious Class 1\2s). Or about two degrees per century. (But this is during a positive PDO. Negative PDO years show no trend at all, or slight cooling. So it averages maybe half that.)
That would be statistically significant for the 1979 – 2008 period. This is adjusted by NOAA to 0.324 C/decade — for the Class 1/2 stations.
The ~0.14C/decade trend difference between the well and poorly sited stations is significant as well: Our chance of obtaining the same results by random chance is calculated by J-NG, using Monte Carlo methods, at 0.00000. (Yes, that many zeros.)
The reason we beat 95% significance so easily is the large number of stations we use (~400), even after dropping 2 out of 3 (of the full 1221) for TOBS bias, station moves, etc. (And, yes, if a CRS and MMTS are at the same site and have different ratings, we consider this to be a station move.)
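[The Monte Carlo significance test mentioned above is not spelled out in this thread; the sketch below is one generic way such a test is often done (a permutation test on the siting labels), with every number invented for illustration and only the group means echoing the figures quoted here.]

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-station trends (C/decade) for the two siting classes.
well_sited = rng.normal(0.19, 0.10, 80)
poorly_sited = rng.normal(0.32, 0.10, 320)
observed_diff = poorly_sited.mean() - well_sited.mean()

# Permutation test: shuffle the siting labels many times and count how often a
# random split produces a difference at least as large as the observed one.
pooled = np.concatenate([well_sited, poorly_sited])
n_well = well_sited.size
n_trials, count = 100_000, 0
for _ in range(n_trials):
    rng.shuffle(pooled)
    if pooled[n_well:].mean() - pooled[:n_well].mean() >= observed_diff:
        count += 1

print(f"observed difference: {observed_diff:.3f} C/decade")
print(f"p-value from {n_trials} shuffles: {count / n_trials:.5f}")
```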

Evan Jones
Editor
June 27, 2014 6:23 pm

The warm bias is not from bad siting, by the way. It occurs even in perfectly sited sensors that are in excellent repair.
And yet the well sited stations, on average, have only a little over half the trend of the badly sited stations.
The bias is due to thermal loading of the shield. It’s intrinsic to the construction of the original USHCN sensors, including the MMTS sensors, and is endemic in the USHCN record.
MMTS was adjudged to be far more accurate than CRS. And, yes, well sited CRS shows considerably higher trends than well sited MMTS. So that is part of it. We are compelled to adjust MMTS trends (upward) as a result.
There may be an inherent overall equipment bias (including both CRS and MMTS), but microsite appears to be king. Bad microsite spuriously increases trends by over two thirds.

Editor
June 27, 2014 6:52 pm

Nick Stokes:
Thanks for clarifying part of my point. USHCN does the infilling, and GISS uses the gridding in its average global (and regional) temperature calculations.
However, GISS does estimate data. If they pick up a station’s data from USHCN or GHCN, and a month or an entire season are missing here and there, they will create an estimate for that missing month or season. They use an averaging algorithm to do this, and the algorithm looks at all “relevant” data in the station’s record from the time the station went online to the present.
This is a big reason why the station records change.

Latitude
June 27, 2014 7:02 pm

Thanks Evan..[got] it……this one begs a question
“MMTS was adjudged to be far more accurate than CRS. And, yes, well sited CRS shows considerably higher trends than well sited MMTS. So that is part of it. We are compelled to adjust MMTS trends (upward) as a result. ”
If MMTS is far more accurate…..and CRS shows a higher trend….why are you adjusting MMTS up?
Shouldn’t you be adjusting CRS down?

Latitude
June 27, 2014 7:03 pm

put a “t” in there……got it……not go it! LOL

Mark Albright
June 27, 2014 11:08 pm

Since we are looking for high quality single station records I nominate Lincoln NE as a candidate with complete data from 1887-2014 near the geographical center of the US. The Lincoln NE time series shows the 1930s to be the warmest decade:
http://www.atmos.washington.edu/marka/lincoln.1887_2013.png
Monthly data source for Lincoln NE:
http://snr.unl.edu/lincolnweather/data/monthly-avg-temperatures.asp

Evan Jones
Editor
June 28, 2014 12:16 am

If MMTS is far more accurate…..and CRS shows a higher trend….why are you adjusting MMTS up?
Shouldn’t you be adjusting CRS down?

That’s what I asked myself at the time. One could. But it makes no difference to trend (sic) if CRS is adjusted down or MMTS is adjusted up. Here’s why:
We are talking trend, here, not absolute measurements. In 1979, all of the stations were CRS. MMTS conversion did not even begin until 1983. So we start with an absolute reading that is too high, and then introduce a downward step change when the CRS is replaced with the MMTS. That produces a spurious negative effect on trend. So the trend must be adjusted upwards to mitigate the downward step-change. If you adjusted the CRS downward, you would also mitigate the change. Just so long as you eliminate that spurious upward data spike at the beginning — either by adjusting what comes after (i.e., MMTS) up or by adjusting what comes before (i.e., CRS) down.
Bear in mind that for purposes of the study, we don’t give a rat’s patoot what the actual readings are. We only care about what the trend of the actual readings is. Therefore it matters not which half of the data is pushed up or down, just so that step change is removed.
If this is not clear, ask again and I will try to do better.
(Even after this adjustment, MMTS stations have lower trends. CRS stations warm too fast and cool too fast. We definitely note this and point it out. But that is a separate issue and is dealt with separately.)
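[A minimal sketch of the point above, assuming the conversion year and the size of the step are known; it shows that raising the later (MMTS) segment and lowering the earlier (CRS) segment recover exactly the same trend. All numbers are invented.]

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical station: true warming of 0.02 C/yr over 1979-2008, with a
# -0.4 C step when the CRS is replaced by an MMTS in 1993.
years = np.arange(1979, 2009)
series = 0.02 * (years - 1979) + rng.normal(0.0, 0.15, years.size)
series[years >= 1993] -= 0.4

def trend(y):
    return np.polyfit(years, y, 1)[0] * 10  # C per decade

step, conv_year = 0.4, 1993  # assume both are known

raise_mmts = series.copy()
raise_mmts[years >= conv_year] += step
lower_crs = series.copy()
lower_crs[years < conv_year] -= step

print(f"raw (with step): {trend(series):.3f} C/decade")
print(f"MMTS raised:     {trend(raise_mmts):.3f} C/decade")
print(f"CRS lowered:     {trend(lower_crs):.3f} C/decade  (same trend either way)")
```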

richardscourtney
June 28, 2014 1:49 am

Nick Stokes:
Thankyou for your reply to my post at June 27, 2014 at 2:47 pm which you provide at June 27, 2014 at 3:29 pm.
It (again) ignores my question which was

If the different methods are not analysing different definitions then why do values of global average surface temperature (GASTA) from decades ago alter when the method is changed from month to month: which is the right determination, any of the ones before a change or any of those after it? In other words, if the different (and frequently changed) methods are not assessing different definitions of GASTA then why do the methods not provide the same results?

However, it purports to provide an answer to my other question which was

What is the definition of global average surface temperature (GASTA)?

You say

Here’s my definition:
1. Surface temperature is as measured by weather stations on land, SST over ocean.
2. Anomaly is the local difference between ST and the local average over a prescribed period (eg 1961-90).
3. GASTA is the mean calculated by spatial integral of a time average (say one month) of anomaly.

No, Nick. That is NOT a definition: it is an evasion.
What do you mean by “average”; mean, median, mode, something else, weighted, etc.?

Nick, a definition of a parameter specifies what the parameter is. It does not provide a range of possible things the parameter could be. As I said in my first post to you at June 27, 2014 at 3:40 am which is here.

Every determination of GASTA is determined by a faulty method, and can give you anything.

Your so-called “definition” can give you anything by changing the used type of “average” at any time, and IT DOES, so historical determinations of GASTA change from month to month.
Richard

richardscourtney
June 28, 2014 1:57 am

Nick Stokes:
I see there is a possible claim of ambiguity in my post at June 28, 2014 at 1:49 am so I write to forestall a possible misunderstanding.
When I wrote

No, Nick. That is NOT a definition: it is an evasion.
What do you mean by “average”; mean, median, mode, something else, weighted, etc.?

I was referring to your use of the word “average” in this statement

3. GASTA is the mean calculated by spatial integral of a time average (say one month) of anomaly.

Richard

Nick Stokes
June 28, 2014 3:05 am

richardscourtney says: June 28, 2014 at 1:57 am
“I was referring to your use of the word “average” in this statement
3. GASTA is the mean calculated by spatial integral of a time average (say one month) of anomaly.”

Average is simple time average – add the days and divide by 31 etc. Same for the anomaly base.

richardscourtney
June 28, 2014 3:44 am

Nick Stokes:
Sincere thanks for your clarification at June 28, 2014 at 3:05 am.
That, of course, returns us to my original question which you have still not addressed; i.e.

If the different methods are not analysing different definitions then why do values of global average surface temperature (GASTA) from decades ago alter when the method is changed from month to month: which is the right determination, any of the ones before a change or any of those after it? In other words, if the different (and frequently changed) methods are not assessing different definitions of GASTA then why do the methods not provide the same results?

You see, Nick, the original measurements don’t change but the processing does. There can only be one valid method to process the data if there is only one definition of GASTA (i.e. global AVERAGE surface temperature anomaly).
There is an ‘elephant’ filling the room. The ‘elephant’ is that there is no agreed definition of GASTA so anybody can compute a value of GASTA as they alone desire and THEY DO; indeed, they each frequently change how they calculate GASTA and, thus, change their determined values of it.
I am shouting, “Look at the elephant!”
You are replying to my shouts by looking out the window of the room and saying you cannot see a nit to pick.
Richard

A C Osborn
June 28, 2014 4:39 am

evanmjones says: June 28, 2014 at 12:16 am
CRS stations warm too fast and cool too fast.
How do you know?
Why isn’t it the MMTS that is wrong?

A C Osborn
June 28, 2014 4:43 am

evanmjones says: June 27, 2014 at 6:23 pm
MMTS was adjudged to be far more accurate than CRS.
By whom and why?
Where is the equivalent of a National Physical Laboratory standard for temperature? We have one for most kinds of measurement, but what is it for temperature?

Latitude
June 28, 2014 5:56 am

Evan, …..it’s still not clear
Since CRS has a faster trend than MMTS…..aren’t you recreating that faster trend by adjusting MMTS up?…. “Even after this adjustment, MMTS stations have lower trends.”
I understand the step change…….I don’t understand continuing a steeper trend, when new equipment shows less trend?

Samuel C Cogar
June 28, 2014 6:00 am

evanmjones says:
June 27, 2014 at 2:44 pm
My bottom-line “best guess” is that the “true signal” is closer to 0.07C per decade, exaggerated by poor microsite. That translates to raw CO2 warming with perhaps a bit of negative feedback in play.
——————
I am curious to know what method you used for doing said …. “translates to CO2 warming”, ….. as well as your method for calculating …. “a bit of negative feedback” …. to achieve said result of “ 0.07C per decade”.

Carrick
June 28, 2014 9:23 am

[Sorry if this is a repeat.. my last comment apparently was eaten by the Bit Monster.]
Alexej Buergin says:

Supposing that you intended to say meteorological my answer would be: I learned about WX at an US university, we used the book by Ahrens, and it says:”Most scientists use a temperature scale called the absolute or Kelvin scale…” Seems unambiguous to me. (But that was 20 years ago and it was the fourth edition, now the ninth is current.)

To make it clear, there’s nothing wrong with saying “absolute temperature” when you mean “absolute thermodynamic temperature scale”. It’s a rubric, and it’s a well understood one. If you were talking thermodynamic quantities, everybody would understand what you meant.
Were I to write an introductory book for meteorologists, I probably would not make the distinction between “absolute” and “relative” as they are used in measurement science (metrology) and the “absolute” as it’s used in thermodynamics.
The only error is in conflating the use of “absolute temperature” as it is used in the thermodynamics community, with its more general use in the metrological sense. There is no error in calling Celsius an “absolute scale” in the metrological sense, as long as you didn’t mean to imply it was a thermodynamic temperature scale.

(By the way: Do you personally know somebody who uses Rankine, or can you find some recent writing which uses Rankine? In non-US countries C and K are as normal as the fact that football is played with the feet.)

I don’t know anybody personally. It’s my impression that it used to be widely used by communities that mostly had Fahrenheit-scale mercury thermometers (e.g., heating and cooling engineers), and not so much since people have gone to thermoelectric devices.

Brandon C
June 28, 2014 9:25 am

Nick says: “No, it’s the do-nothing option that is poor science. The fact is that assumptions are inevitably made. You have a series of operator observations of markers at stated times, and you have to decide what they mean. Assuming they are maxima on the day of observation is an assumption like any other. And it incorporates a bias.
The unscientific thing is just to say, as you do, that it’s just too hard to measure, so let’s just assume it is zero. The scientific thing is to do your best to work out what it is and allow for it. Sure, observation times might not have been strictly observed. Maybe minima are affected differently to maxima (that can be included). You won’t get a perfect estimate. But zero is a choice. And a very bad one.”
Can you really say that with a straight face?
– For one, I never once said let’s assume it is zero; you are making that up or have appalling reading comprehension. I said let’s leave the uncertainty caused by this in place and show the error bars to reflect this. This is not the same thing and anyone older than 2 knows that. Your comment is either stupid or dishonest, you can pick.
– You state that there are assumptions made in the original data; that is not correct. We have identified a possible source of error in the original data and by doing so create an uncertainty bar to reflect that the actual error is unknown. It is not “incorporating bias”, it is specifically avoiding adding steps that can add bias and leaving the uncertainties known; surely you must know this. That is not an assumption, it is proper data handling. The assumption is in taking the unscientific position that you can fully account for and quantify the error without enough data to do it, and then pretend that the answer is somehow accurate. It is nothing but a reflection of your unbacked assumption and is scientifically useless because you no longer even know the exact extent of the possible errors. I don’t support reducing the error bars on the original data either, because I act like a scientist and not a priest that knows the answer from the start.
– Any scientist was taught that you only adjust original data when you have a “KNOWN” error that can be “ACCURATELY QUANTIFIED” and “INDEPENDENTLY VERIFIED”. If you cannot satisfy those 3 requirements, then you leave the original data alone and create uncertainty bars. That is the actual scientific way: it is better to leave the original data alone with a range of uncertainty than to contaminate it with assumptions and guesswork. You can do the estimates on TOBS errors to better inform your error bars, but only an idiot changes the base data using that method because it creates additional uncertainty. The argument that “we have to do something” is not a scientific one, but an argument of emotion and activism.
– It is even worse scientifically when the adjustments are being used in a key dataset that is being used by thousands of other researchers as the base for other areas of research. You leave the base data the hell alone so all the referring research has a constant baseline. You don’t keep changing it on your whims, creating the situation that past work cannot be accurately compared to newer work because a supposedly solid data source keeps changing. You can no longer use past study data because adjustments mean that new papers are dealing with entirely different trends and local temps than in past versions. If climate science were acting scientifically, they would have informed any referring papers that their conclusions may be wrong because the past data has been changed by an order of magnitude.
This issue is so unbelievably unscientific it makes my skin crawl. And your defense and reasoning makes one wonder if you are even trying to act scientifically or if you are just circling the wagons in defense of poor science.
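[One simple way to express the “leave the data alone and widen the error bars” preference above: compute the ordinary trend uncertainty and add an allowance for the unresolved TOBS effect in quadrature. The anomaly series and the 0.03 C/decade allowance are invented for illustration.]

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical annual anomalies, 1979-2008, with a 0.15 C/decade trend.
years = np.arange(1979, 2009)
raw = 0.015 * (years - 1979) + rng.normal(0.0, 0.2, years.size)

# Ordinary least-squares trend and its standard error.
x = years - years.mean()
X = np.vstack([x, np.ones_like(x)]).T
coef, res, *_ = np.linalg.lstsq(X, raw, rcond=None)
sigma2 = res[0] / (years.size - 2)
se_slope = np.sqrt(sigma2 / np.sum(x ** 2))

trend = coef[0] * 10                 # C/decade
se_stat = se_slope * 10              # statistical uncertainty, C/decade
se_tobs = 0.03                       # invented allowance for unresolved TOBS bias
se_total = np.hypot(se_stat, se_tobs)

print(f"raw trend = {trend:.3f} +/- {se_total:.3f} C/decade "
      f"(statistical {se_stat:.3f}, TOBS allowance {se_tobs:.3f})")
```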

June 28, 2014 9:53 am

Brandon C, your approach — adhering to the strict methodological integrity of science — is exactly what is missing throughout consensus climatology. These people have rung in sloppy methods that permit them specious but convenient conclusions.
And it’s not just the air temperature record. The same sloppy criteria are applied in climate modeling and in so-called proxy paleo-temperature reconstructions. The entire field has descended into pseudo-science. I’ll have a paper in E&E about this, perhaps later this year. It’s titled, Negligence, Non-science, and Consensus Climatology.

June 28, 2014 10:38 am

Brandon C says:
June 28, 2014 at 9:25 am
… This issue is so unbelievably unscientific it makes my skin crawl. And your defense and reasoning makes one wonder if you are even trying to act scientifically or if you are just circling the wagons in defense of poor science.

I think the only answer is that he and many others are circling the wagons in defense of poor science. People have been complaining about the data sets and the “adjustments” that always cool the past and warm the present (It is Worse than we Thought!!!!) to advance the alarmist agenda. I predict this will all blow over and the “skeptics” will go back to ignoring the data fr***. (can’t use the f-word here they say) Heck, we can’t even get the story out that these people claiming two decimal places of accuracy is a G-D joke.
What if there is no “pause”, but in fact there has been a cooling over the last couple of decades? How would the public ever know that if that were true?

A C Osborn
June 28, 2014 10:48 am

Brandon C says: June 28, 2014 at 9:25 am
Well said, Sir. I worked in a Metrology Lab in my youth; everything was referenced back to the National Physical Laboratory Standards and woe betide anybody who took short cuts or did not adhere to procedure.
I then worked in Quality Control, and Industry introduced ISO 2000/9000/9001 to control and document everything. Climate Science badly needs some ISO 9000 type Audits.
When Climategate first burst on the scene, Phil Jones’ work desk and procedures were exposed for all to see; I have seen local garages run better than that. Considering who employs them, how important the work is and how much they are paid, it is a global disgrace.
The mere fact that they would not release their data and made all sorts of excuses says it all.

A C Osborn
June 28, 2014 10:51 am

Mark Stoval (@MarkStoval) says: June 28, 2014 at 10:38 am
“What if there is no “pause”, but in fact there has been a cooling over the last couple of decades? How would the public ever know that if that were true?”
Well, if it carries on as expected they won’t need any “GLOBULL” temperatures to tell them; they will be able to feel it and measure it for themselves.

mark
June 28, 2014 11:25 am

Brandon C says: June 28, 2014 at 9:25 am
” I predict this will all blow over and the “skeptics” will go back to ignoring the data…”
This cannot be allowed to die. It is a “smoking gun” that even the least scientific savvy person would understand as fraudulent. Who wouldn’t understand that it’s not OK to go back in history and continually change data? We can all start by writing our representatives in Congress and the House to bring attention to this bogus ‘practice.’ At minimum we should demand that the original data be restored.

richardscourtney
June 28, 2014 11:43 am

mark:
re your post at June 28, 2014 at 11:25 am.
It saddens me, but I agree with Brandon C. The matter is not news and has been raised in many places including WUWT for years.
Please read this
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
Richard

mark
June 28, 2014 12:21 pm

richardscourtney says: June 28, 2014 at 11:43 am
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
How many politicians or people on their staffs would understand anything in that article? Wrong audience. It has to be more simplified and straightforward to get their attention. Something to the effect of “here’s obvious proof that they are cooking the books” (when the same ‘data’ point continually changes). It only takes one representative to start a campaign bringing corruption of the opposition to light, something they love to do. Yes, it’s been going on for a while, but the results are becoming cumulative and the acts so egregious that I doubt even the average person would fail to take notice. Besides, isn’t it obvious by now that it’s not the scientists but the politicians, media, and people that are driving the discourse? Take it to their level and this is an ideal opening to make that happen. I hope I’m right. The alternative is to wait for nature to prove them wrong. Or not.

Eliza
June 28, 2014 12:30 pm

The point here is that in fact both WUWT and Goddard have been pointing this out for years. Only now has it reached the MSM, thanks mainly to Goddard’s uncompromising attitude. You cannot be “soft” with these AGW people. They have an agenda. Skeptics do not.

Eliza
June 28, 2014 12:32 pm

In the end of course its the actual climate/weather that is winning the debate ie: no change. What a laugh! (and waste of time)

richardscourtney
June 28, 2014 2:15 pm

Eliza:
I write to support your post at June 28, 2014 at 12:30 pm which says

The point here is that in fact both WUWT and Goddard have been pointing this out for years. Only now has it reached the MSM, thanks mainly to Goddard’s uncompromising attitude. You cannot be “soft” with these AGW people. They have an agenda. Skeptics do not.

Yes. I draw your attention to my post at June 28, 2014 at 11:43 am and its link that discusses a ‘climategate’ email from 2003 showing my interaction with compilers of global temperature which complains about the data changes.
If the matter has – at last – reached the MSM then it needs as much publicity as possible.
Richard

Eugene WR Gallun
June 28, 2014 5:10 pm

This might be it.
PROFESSOR PHIL JONES
The English Prometheus
To tell the tale as it began —
An ego yearned
Ambition burned
Inside a quiet little man
No one had heard of Phillip Jones
Obscure to fame
(And likewise blame)
The creep of time upon his bones
Men self-deceive when fame is sought
Their fingers fold
Their ego told
That fire is what their fist has caught!
Men want to feel, not understand!
Jones made it plain
That Hell would reign
In England’s green and pleasant land!
Believe! Believe! In burning heat!
In mental fight
To prove I’m right
I’ve raised some temps and some delete!
And with his arrows of desire
He shot the backs
Of any hacks
That asked the question — where’s the fire?
East Anglia supports him still
The truth denied
Whitewash applied
Within that dark Satanic Mill
The evil that this wimp began
Will go around
And come around
Prometheus soon wicker man
Eugene WR Gallun
Note: Though some may know it, most probably don’t. In William Blake’s JERUSALEM the line — “Among these dark Satanic Mills” — was not referring to factories but to churches. Mills were places where things were ground down and Blake was saying that England’s state run churches (and Catholic churches — well, really all churches) which demanded conformity were mills grinding down both mind and spirit. So this poem is sort of an oblique comparison between the ideals of William Blake and Phil Jones (if such a man as Jones can be said to have ideals).
When I start using Blake’s words the poem has to get a little poetasterish since I’m no fool. Blake’s words in JERUSALEM are a thousand times better than anything I could ever write — some of the greatest words in the English language — so I preemptively surrender so the reader doesn’t even think about making such a comparison.
Eugene WR Gallun

basicstats
June 29, 2014 1:47 am

I must correct an earlier comment about applying statistical procedures to anomalies rather than actual temperatures. The example of kriging given was wrong – spatial methods apply to anomalies perfectly well. It’s time series models, including regressions, which need deconstructing
when fitted to anomalies instead of actual temperatures.

Eugene WR Gallun
June 29, 2014 7:52 am

Sigh! Double sigh! After reviewing my poem I realized I needed to add a connective stanza to make the train of thought clearer. I can’t take this writing of poetry — it is slowly killing me.
PROFESSOR PHIL JONES
The English Prometheus
To tell the tale as it began —
An ego yearned
Ambition burned
Inside a quiet little man
No one had heard of Phillip Jones
Obscure to fame
(And likewise blame)
The creep of time upon his bones
Men self-deceive when fame is sought
Their fingers fold
Their ego told
That fire is what their fist has caught!
Because they’d rather rule than serve
They, with their heat
The light defeat
And damning ignorance preserve
Such want to feel not understand!
Jones made it plain
That Hell must reign
In England’s green and pleasant land!
Believe! Believe! In burning heat!
In mental fight
To prove I’m right
I’ve raised some temps and some delete!
And with his arrows of desire
He shot the backs
Of any hacks
That asked the question — where’s the fire?
East Anglia supports him still
The truth denied
Whitewash applied
Within that dark Satanic Mill
The evil that this wimp began
Will go around
And come around
Prometheus soon wicker man
Eugene WR Gallun

June 29, 2014 8:03 am

Lance Wallace wrote:

Thanks for putting my Dropbox graph of your Fig. “D” historical data of the US 48-state temperature anomalies into more permanent archive. Here is the full Excel file with the graph.
https://dl.dropboxusercontent.com/u/75831381/NASA%20Fig%20D%201999-2014.xlsx
I used the same data (cut off at 1998 so all the datasets could be compared from the Hansen 1999 up to the present) to calculate the change in the linear rate of increase. The rate was 0.32 degrees C per century according to Hansen (1999) and is now 0.43 per century, about a 35% increase, due entirely to adjustments to the historical data. (See the graph in the third tab of this second Excel file.)
https://dl.dropboxusercontent.com/u/75831381/NASA%20Fig%20D%201880-1998.xlsx
You are welcome to archive these files if you find them useful.

Thank you, Lance!
The two archived locations are:
http://www.webcitation.org/6Qh9Xt5bt
http://www.webcitation.org/6Qh9c1FhC
BTW, for archiving pages, I keep bookmarks and “bookmarklets” in my Chrome bookmark toolbar with:
Citebite.com, and a “Cite this” bookmarklet
Archive.org, and an “Archive this” bookmarklet
Webcitation.org, and a “WebCite this” bookmarklet
Archive.is, and an “Archive.is this” bookmarklet
They don’t all work on all pages, but they are all useful for preserving things that might otherwise someday be lost, when a web page changes or goes away.

Evan Jones
Editor
June 29, 2014 12:32 pm

How do you know?
Why isn’t it the MMTS that is wrong?

I don’t own that information directly. But NOAA considers MMTS to be the more accurate, the reason given being that the thin plastic gill structure of the MMTS shielding does not absorb and retain heat, unlike the CRS, where the sensor is housed in a large wooden box; the MMTS gills are also better placed to provide circulation.
This is the mechanism: In a sense, the CRS box itself is acting as a heat sink. And, like (other) bad microsite effects, this will tend to increase trend (be it either cooling or warming).
This makes it necessary to adjust MMTS station trends upward as conversion occurs, to counteract the step change. It’s our only adjustment (and, incidentally, it works against our hypothesis).

Evan Jones
Editor
June 29, 2014 12:45 pm

I am curious to know what method you used for doing said …. “translates to CO2 warming”, ….. as well as your method for calculating …. “a bit of negative feedback” …. to achieve said result of “ 0.07C per decade”.
Call it a very simple, top down model based on the simple presumption that the ultimate Arrhenius results are correct. (At least they can be replicated in the lab.) According to that, we should see a modest raw forcing of 1.1C per CO2 doubling.
Take it since 1950, when CO2 emissions became (rather abruptly) significant. Negative and positive PDO cancel each other out over that period, so we can more or less discount that effect.
We are left with 0.7C warming (“adjusted”) over 60 years, or ~ 0.107C warming per decade, after a 30%+ increase in atmospheric CO2, which is right in line with Arrhenius.
Yet the land temperature trend record itself appears to be exaggerated by poor microsite, which would reduce that number by a bit. So we are perhaps seeing somewhat less warming than even the Arrhenius experiments indicate, implying some form of net negative feedback in play.
At least this accounts for the bottom line (unlike the models).
There are, of course, some other issues in play (such as the diminishing effect of aerosols, soot-on-ice, natural recovery from the LIA, unknown solar, etc.), but it is hard to quantify them.
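[A back-of-envelope version of the arithmetic described above, assuming the logarithmic (Arrhenius-style) response, a no-feedback sensitivity of 1.1 °C per doubling, and roughly a 30% rise in CO2 since 1950:]

\[
\Delta T \approx 1.1\,^{\circ}\mathrm{C} \times \log_2(1.3) \approx 1.1 \times 0.38 \approx 0.42\,^{\circ}\mathrm{C},
\]

which, spread over the roughly six decades since 1950, works out to something on the order of 0.07 °C per decade.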

Evan Jones
Editor
June 29, 2014 12:58 pm

But again, what is rural? It can be influenced by the crop and its harvest, the forrests and pinebeetles and some other changes in the land.
Cropland shows distinctly higher trends. ~20% of land area is cropland, with 30% of stations being located in cropland. So a slight warming bias is implied here.
But you need to consider that it is “Class 1” world. Over 80% of land mass is uninhabited. Only 2% of CONUS is considered urban, but 9% of USHCN stations are urban.
There should be representative, proportional mesosite coverage for USHCN. But, for whatever reason, there is not.

Evan Jones
Editor
June 29, 2014 1:48 pm

Since CRS has a faster trend than MMTS…..aren’t you recreating that faster trend by adjusting MMTS up?…. “Even after this adjustment, MMTS stations have lower trends.”
Mmmm. In a sense, yes. All we do is remove the step change. We do not attempt to make a “CRS adjustment”.
We try not to overcomplicate. Instead, we bin the data to show (for 1979 – 2008), stations that were MMTS for most of the period, stations that were CRS for most of the period, and “pure” CRS stations that were never converted. So you can look at the entire dataset and then, afterward, compare MMTS warming with CRS warming. You want MMTS with Cropland and urban removed? We got!
And we’ll provide my spreadsheet (with all the tags for equipment, mesosite, altitude, etc.), and all you have to do is filter the station list to obtain whatever subset sample set you like.
But we only make the one adjustment for equipment conversion. When it comes to data adjustment, if you shake it more than three times, you’re playin’ with it.