On ‘denying’ Hockey Sticks, USHCN data, and all that – part 2

In part one of this essay, which you can see here, I got quite a lot of feedback from both sides of the climate debate. Some people thought that I was spot on with my criticisms, while others thought I had sold my soul to the devil of climate change. It is an interesting life when I am accused of being in cahoots with both “big oil” and “big climate” at the same time. That aside, in this part of the essay I am going to focus on areas of agreement and disagreement, and propose a solution.

In part one of the essay we focused on the methodology that produced a hockey-stick-style graph as an artifact of missing data. Because the missing data caused a spurious spike at the end, Steve McIntyre commented, suggesting that it was more like the Marcott hockey stick than Mann’s:

Steve McIntyre says:

Anthony, it looks to me like Goddard’s artifact is almost exactly equivalent in methodology to Marcott’s artifact spike – this is a much more exact comparison than Mann. Marcott’s artifact also arose from data drop-out.

However, rather than conceding the criticism, Marcott et al have failed to issue a corrigendum and their result has been widely cited.

In retrospect, I believe McIntyre is right in making that comparison. Data dropout is the central issue here, and when it occurs it can create all sorts of statistical abnormalities.
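To see how data dropout alone can manufacture a spike, here is a minimal synthetic sketch of my own (not Goddard’s or Marcott’s actual code): twenty stations with no trend at all, where the cooler stations simply stop reporting near the end of the record.

```python
# Synthetic illustration: dropout near the endpoint creates a spurious spike
# when absolute temperatures are averaged over whichever stations still report.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2015)

# Ten "cool" stations and ten "warm" stations, all with zero underlying trend.
cool = 10.0 + rng.normal(0, 0.3, size=(10, years.size))
warm = 20.0 + rng.normal(0, 0.3, size=(10, years.size))
temps = np.vstack([cool, warm])

# Simulate dropout: the cool stations stop reporting after 2005.
reporting = np.ones_like(temps, dtype=bool)
reporting[:10, years > 2005] = False

naive = np.nanmean(np.where(reporting, temps, np.nan), axis=0)
print("Mean of reporting stations, 2000:", round(float(naive[years == 2000][0]), 2))
print("Mean of reporting stations, 2010:", round(float(naive[years == 2010][0]), 2))
# The 2010 value is roughly 5 degrees higher than the 2000 value, a "hockey stick"
# created entirely by which stations dropped out, not by any change in temperature.
```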

Despite some spirited claims in comments on part one about how I’m “ignoring the central issue”, I don’t dispute that data is missing from many stations; I never have.

It is something that has been known about for years, and it is actually expected in the messy data-gathering process of volunteer observers, electronic systems that don’t always report, and equipment and/or sensor failures. In fact, there is likely no weather network in existence that has perfect data with none missing. Even the new U.S. Climate Reference Network, designed to be state-of-the-art and as close to perfect as possible, has a small amount of missing data due to failures of uplinks or other electronic issues, seen in red:

[Image: CRN_missing_data]

Source: http://www.ncdc.noaa.gov/crn/newdaychecklist?yyyymmdd=20140101&tref=LST&format=web&sort_by=slv

What is in dispute is the methodology, which, as McIntyre observed, created a false “hockey stick” shape much like the one we saw in the Marcott affair:

[Image: marcott-A-1000]

After McIntyre corrected the methodology used by Marcott, dealing with faulty and missing data, the result looked like this:

 

[Image: alkenone-comparison]

McIntyre pointed this out in comments on part 1:

In Marcott’s case, because he took anomalies at 6000BP and there were only a few modern series, his results were an artifact – a phenomenon that is all too common in Team climate science.

So, clearly, the correction McIntyre applied to Marcott’s data made the result better, i.e. more representative of reality.

That’s the same sort of issue we saw in Goddard’s plot; the data was thinning near the endpoint, the present.

[Image: Goddard_screenhunter_236-jun-01-15-54]

[ Zeke has more on that here: http://rankexploits.com/musings/2014/how-not-to-calculate-temperatures-part-3/ ]

While I would like nothing better than to be able to use raw surface temperature data in its unadulterated “pure” form to derive a national temperature and to chart the climate history of the United States (and the world), the fact is that the national USHCN/co-op network and the GHCN are in such bad shape, and have become so heterogeneous, that this is no longer possible with the raw data set as a whole.

These surface networks have seen so many changes over time (stations moved, times of observation changed, equipment swapped, maintenance issues, encroachment by micro-site biases and/or UHI) that using the raw data for all stations on a national or even global scale gives you a result that is no longer representative of the actual measurements; there is simply too much polluted data.

A good example of polluted data can be found at the Las Vegas, Nevada USHCN station:

[Image: LasVegas_average_temps]

Here, growth of the city and its population has resulted in a clear and undeniable UHI signal at night, with nighttime temperatures gaining 10°F since measurements began. It is studied and acknowledged by the “sustainability” department of the city of Las Vegas, as seen in this document. Dr. Roy Spencer, in his blog post, called it “the poster child for UHI” and wonders why NOAA’s adjustments haven’t removed this problem. It is a valid and compelling question. But at the same time, if we were to use the raw data from Las Vegas, we know it has been polluted by the UHI signal, so is it representative in a national or global climate presentation?

[Image: LasVegas_lows]

The same trend is not visible in the daytime Tmax temperature; in fact, it appears there has been a slight downward trend since the late 1930s and early 1940s:

[Image: LasVegas_highs]

Source for data: NOAA/NWS Las Vegas, from

http://www.wrh.noaa.gov/vef/climate/LasVegasClimateBook/index.php

The question then becomes: Would it be okay to use this raw temperature data from Las Vegas without any adjustments to correct for the obvious pollution by UHI?

From my perspective, the thermometer at Las Vegas has done its job faithfully. It has recorded what actually occurred as the city has grown. It has no inherent bias; the change in its surroundings has biased it. The issue, however, arises when you start using stations like this to search for the posited climate signal from global warming. Since the nighttime temperature increase at Las Vegas is almost an order of magnitude larger than the signal posited to exist from carbon dioxide forcing, that AGW signal would clearly be swamped by the UHI signal. How would you find it? If I were searching for a climate signal, and doing it by examining stations rather than applying blind automated adjustments, I would most certainly remove Las Vegas from the mix, as its raw data is unreliable, having been badly and likely irreparably polluted by UHI.

Now, before you get upset and claim that I don’t want to use raw data, or, as some call it, “untampered” or unadjusted data, let me say nothing could be further from the truth. The raw data represents the actual measurements; anything that has been adjusted is no longer fully representative of the measurement reality, no matter how well-intentioned, accurate, or detailed those adjustments are.

But, at the same time, how do you separate out all the other biases that have not been dealt with (as at Las Vegas) so you don’t end up creating national temperature averages from imperfect raw data?

That my friends, is the $64,000 question.

To answer that question, we have a demonstration. Over at The Blackboard blog, Zeke has plotted something that I believe demonstrates the problem.

Zeke writes:

There is a very simple way to show that Goddard’s approach can produce bogus outcomes. Lets apply it to the entire world’s land area, instead of just the U.S. using GHCN monthly:

[Image: Averaged Absolutes]

Egads! It appears that the world’s land has warmed 2C over the past century! Its worse than we thought!

Or we could use spatial weighting and anomalies:

 

[Image: Gridded Anomalies]

Now, I wonder which of these is correct? Goddard keeps insisting that its the first, and evil anomalies just serve to manipulate the data to show warming. But so it goes.

Zeke wonders which is “correct”. Is it Goddard’s method of plotting all the “pure” raw data, or is it Zeke’s method of using gridded anomalies?

My answer is: neither of them is absolutely correct.

Why, you ask?

It is because both contain stations like Las Vegas that have been compromised by changes in their environment, the station itself, the sensors, the maintenance, the time of observation, data loss, and so on. In both cases we are plotting data that is a huge mishmash of station biases that have not been dealt with.
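For completeness, here is how the anomaly approach in Zeke’s second plot handles the composition problem, continuing the synthetic example from earlier. The 1951–1980 baseline is an arbitrary choice for illustration; note that anomalies fix the station-composition artifact, but they cannot fix biases, such as UHI, that are baked into the readings themselves.

```python
# Continuing the synthetic example: convert each station to anomalies relative
# to its own baseline before averaging. The dropout artifact disappears, but
# any bias embedded in the readings (e.g. UHI) is still carried along.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2015)
cool = 10.0 + rng.normal(0, 0.3, size=(10, years.size))
warm = 20.0 + rng.normal(0, 0.3, size=(10, years.size))
temps = np.vstack([cool, warm])

obs = temps.copy()
obs[:10, years > 2005] = np.nan          # cool stations drop out after 2005

# Each station's anomaly is measured against its own 1951-1980 average.
base = (years >= 1951) & (years <= 1980)
baselines = np.nanmean(obs[:, base], axis=1, keepdims=True)
anomalies = obs - baselines

anom_mean = np.nanmean(anomalies, axis=0)
print("Anomaly average, 2000:", round(float(anom_mean[years == 2000][0]), 2))
print("Anomaly average, 2010:", round(float(anom_mean[years == 2010][0]), 2))
# Both values sit near zero: the dropout no longer creates spurious warming.
# A full reconstruction would also area-weight stations on a grid so that
# densely sampled regions do not dominate the average.
```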

NOAA tries to deal with these issues, but their effort falls short. Part of the reason it falls short is that they are trying to keep every bit of data and adjust it in an attempt to make it useful, and to me that is misguided, as some data is just beyond salvage.

In most cases, the cure from NOAA is worse than the disease, which is why we see things like the past being cooled.

Here is another plot from Zeke just for the USHCN, which shows Goddard’s method “Averaged Absolutes” and the NOAA method of “Gridded Anomalies”:

[Image: Goddard and NCDC methods 1895-2013]

[note: the Excel code I posted was incorrect for this graph, and was for another graph Zeke produced, so it was removed, apologies – Anthony]

Many people claim that the “Gridded Anomalies” method cools the past, and increases the trend, and in this case they’d be right. There is no denying that.

At the same time, there is no denying that the entire CONUS USHCN raw data set contains all sorts of imperfections, biases, UHI, data dropouts, and a whole host of problems that remain uncorrected. It is a Catch-22: on one hand, the raw data has issues; on the other, at a bare minimum some sort of infilling and gridding is needed to produce a representative signal for the CONUS, but in producing that, new biases and uncertainties are introduced.

There is no magic bullet that always hits the bullseye.

I’ve known and studied this for years; it isn’t a new revelation. The key point here is that both Goddard and Zeke (and by extension BEST and NOAA) are trying to use the ENTIRE USHCN dataset, warts and all, to derive a national average temperature. Neither method produces a totally accurate representation of the national temperature average. Keep that thought.

While both methods have flaws, Goddard raised one good, and important, point: the rate of data dropout in USHCN is increasing.

When data gets lost, NOAA infills it with data from nearby stations, and that’s an acceptable procedure, up to a point. The question is, have we reached a point of no confidence in the data because too much has been lost?

John Goetz asked the same question as Goddard in 2008 at Climate Audit:

How much Estimation is too much Estimation?

It is still an open question, and without a good answer yet.
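One way to keep tabs on it is simply to track, year by year, what fraction of values in the final product are estimated rather than measured. Below is a minimal sketch assuming a hypothetical table of already-parsed monthly values with an `estimated` column; the real USHCN files use flag characters that would have to be parsed first, so the column names here are illustrative only.

```python
# Sketch: quantify "how much estimation" by computing the yearly share of
# station-month values that are estimated (infilled) rather than measured.
import pandas as pd

def estimated_fraction_by_year(monthly: pd.DataFrame) -> pd.Series:
    """For each year, the share of station-month values flagged as estimated."""
    return monthly.groupby("year")["estimated"].mean().sort_index()

# Usage with a tiny made-up frame standing in for parsed USHCN monthly data.
monthly = pd.DataFrame({
    "station":   ["USH00415429"] * 4,   # made-up station id for illustration
    "year":      [1990, 1990, 2013, 2013],
    "month":     [1, 2, 1, 2],
    "value":     [9.8, 11.2, 10.1, 12.0],
    "estimated": [False, False, False, True],
})
print(estimated_fraction_by_year(monthly))
# 1990: 0.0, 2013: 0.5; a rising fraction over the years would put a number
# on the "too much estimation?" question.
```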

But at the same time that we are seeing more and more data loss, Goddard is claiming “fabrication” of lost temperature data in the final product while also advocating use of the raw surface temperature data for a national average. From my perspective, you can’t argue for both. If the raw data is becoming less reliable due to data loss, how can we use it by itself to reliably produce a national temperature average?

Clearly, with the mess the USHCN and GHCN are in, the raw data won’t produce a result representative of the true climate change signal of the nation, because the raw data is so horribly polluted with so many other biases. There are easily hundreds of stations in the USHCN that have been compromised as Las Vegas has been, making the raw data, as a whole, mostly useless.

So in summary:

Goddard is right to point out that there is increasing data loss in USHCN and it is being increasingly infilled with data from surrounding stations. While this is not a new finding, it is important to keep tabs on. He’s brought it to the forefront again, and for that I thank him.

Goddard is wrong to say we can use all the raw data to reliably produce a national average temperature, because that same data is increasingly lossy and is also full of other biases that are not dealt with. [ added: His method allows biases to enter that are mostly about station composition, and less about infilling; see this post from Zeke]

As a side note, claiming “fabrication” in a nefarious way doesn’t help, and generally turns people off to open debate on the issue, because the process of infilling missing data wasn’t designed with any nefarious motive; it was designed to make the monthly data usable when small data dropouts occur, as we discussed in part 1 when we showed the B-91 form with missing data from a volunteer observer. Claiming “fabrication” just puts up walls, and frankly, if we are going to enact any change to how things get done in climate data, new walls won’t help us.

Biases are common in the U.S. surface temperature network

This is why NOAA/NCDC spends so much time applying infills and adjustments; the surface temperature record is a heterogeneous mess. But in my view, this process of trying to save messed up data is misguided, counter-productive, and causes heated arguments (like the one we are experiencing now) over the validity of such infills and adjustments, especially when many of them seem to operate counter-intuitively.

As seen in the map below, there are thousands of temperature stations in the U.S. co-op and USHCN networks. By our surface stations survey, at least 80% of the USHCN is compromised by micro-site issues in some way, and by extension, since the USHCN subset we surveyed is a large sample of the co-op network, that finding should translate to the larger network.

[Image: USHCN_COOP_Map]

When data drops out of a USHCN station, data from nearby neighbor stations is used to infill the missing values; but when 80% or more of your network is compromised by micro-site issues, chances are all you are doing is infilling missing data with compromised data. I explained this problem years ago using a water bowl analogy, showing how the true temperature signal gets “muddy” when data from surrounding stations is used to infill missing data:

[Image: bowls-USmap]

The real problem is that the increasing amount of data dropout in USHCN (and in the co-op network and GHCN) may be reaching the point where infilling adds mostly biased signal from nearby problematic stations. Imagine a well-sited, long-period rural station near Las Vegas that has its missing data infilled using Las Vegas data; you know it will come out warmer when that happens.
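Here is a toy illustration of that thought experiment (my own sketch, not NOAA’s actual infilling algorithm): a rural station with no trend at all whose gaps after 2000 are filled from a UHI-warmed neighbor. The warming rate and fill rule are assumptions chosen only to make the effect visible.

```python
# Toy example: infilling a rural station's gaps from an urban, UHI-warmed
# neighbor imports warming the rural thermometer never recorded.
import numpy as np

years = np.arange(1970, 2015)
rural = np.full(years.size, 15.0)            # flat rural Tmin: no warming at all
urban = 15.0 + 0.10 * (years - 1970)         # neighbor warming 0.1 degrees/year from UHI

rural_obs = rural.copy()
rural_obs[years > 2000] = np.nan             # rural station stops reporting after 2000
missing = np.isnan(rural_obs)

# Fill the gap with the neighbor's anomaly added to the rural station's own mean
# (one simple style of infilling, used here purely for illustration).
urban_anom = urban - urban[~missing].mean()
filled = rural_obs.copy()
filled[missing] = rural_obs[~missing].mean() + urban_anom[missing]

trend_per_decade = np.polyfit(years, filled, 1)[0] * 10
print("True rural trend:     0.00 per decade")
print(f"Infilled rural trend: {trend_per_decade:.2f} per decade")
# The infilled series now shows a warming trend that exists only at the neighbor.
```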

So, what is the solution?

How do we get an accurate surface temperature for the United States (and the world) when the raw data is full of uncorrected biases and the adjusted data does little more than smear those station biases around when infilling occurs? Some of our friends say a barrage of statistical fixes is all that is needed, but there is another, simpler way.

Dr. Eric Steig, at “Real Climate”, in a response to a comment about Zeke Hausfather’s 2013 paper on UHI, shows us a way.

[Image: Real Climate comment from Eric Steig (response at bottom)]

We did something similar (but even simpler) when it was being insinuated that the temperature trends were suspect, back when all those UEA emails were stolen. One only needs about 30 records, globally spaced, to get the global temperature history. This is because there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.

For those who don’t know what the Rossby radius is, see this definition.

Steig claims 30 station records are all that are needed globally. In a comment some years ago (now probably lost in the vastness of the Internet) we heard Dr. Gavin Schmidt say something similar: that about “50 stations” would be all that is needed.

[UPDATE: Commenter Johan finds what may be the quote:

I did find this Gavin Schmidt quote:

“Global weather services gather far more data than we need. To get the structure of the monthly or yearly anomalies over the United States, for example, you’d just need a handful of stations, but there are actually some 1,100 of them. You could throw out 50 percent of the station data or more, and you’d get basically the same answers”

http://earthobservatory.nasa.gov/Features/Interviews/schmidt_20100122.php ]

So if that is the case, and one of the most prominent climate researchers on the planet (and his associate) says we need only somewhere between 30 and 50 stations globally… why is NOAA spending all this time trying to salvage bad data from hundreds if not thousands of stations in the USHCN, and also in the GHCN?

It is a question nobody at NOAA has ever really been able to answer for me. While it is certainly important to keep the records from all these stations for local climate purposes, why try to keep them in the national and global datasets when Real Climate scientists say that just a few dozen good stations will do just fine?
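For readers wondering how a few dozen stations could possibly be enough, here is a synthetic sketch of the spatial-correlation argument Steig and Schmidt are making. It is not their actual analysis; the correlation structure (30 coherent regions sharing one global signal) is assumed purely for illustration.

```python
# Sketch: when temperature anomalies are spatially coherent, a sparse network of
# well-spaced stations reproduces the history of a dense network.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_regions, per_region = 100, 30, 40     # 1200 stations in 30 correlated regions

global_signal = np.cumsum(rng.normal(0, 0.05, n_years))          # shared "climate" history
regional = rng.normal(0, 0.10, (n_regions, n_years))              # coherent regional wiggles
local = rng.normal(0, 0.30, (n_regions * per_region, n_years))    # station-level noise
stations = global_signal + np.repeat(regional, per_region, axis=0) + local

dense_mean = stations.mean(axis=0)                 # all 1200 stations
sparse_mean = stations[::per_region].mean(axis=0)  # one station per region: 30 stations

corr = np.corrcoef(dense_mean, sparse_mean)[0, 1]
print(f"Correlation between 1200-station and 30-station averages: {corr:.3f}")
# Typically well above 0.9: a handful of well-spaced stations recovers nearly
# the same anomaly history as the full network.
```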

There is precedent for this: the U.S. Climate Reference Network, which has just a fraction of the stations of the USHCN and the co-op network:

[Image: crn_map]

NOAA/NCDC is able to derive a national temperature average from these few stations just fine, and without the need for any adjustments whatsoever. In fact they are already publishing it:

[Image: USCRN_avg_temp_Jan2004-April2014]

If it were me, I’d throw out most of the USHCN and co-op stations with problematic records rather than try to salvage them with statistical fixes, and instead try to locate the best stations with long records, no moves, and minimal site biases, and use those as the basis for tracking the climate signal. By doing so, not only do we eliminate a whole bunch of make-work with questionable and uncertain results, we also end all the complaints about data falsification and the quibbling over whose method really finds the “holy grail of the climate signal” in the U.S. surface temperature record.

Now you know what Evan Jones and I have been painstakingly doing for the last two years since our preliminary siting paper was published here at WUWT and we took heavy criticism for it. We’ve embraced those criticisms and made the paper even better. We learned back then that adjustments account for about half of the surface temperature trend:

We are in the process of bringing our newest findings to publication. Some people might complain we have taken too long. I say we have one chance to get it right, so we’ve been taking extra care to deal effectively with all the criticisms from then, as well as criticisms from within our own team. Of course, if I had funding like some people get, we could hire people to help move it along faster instead of relying on free time when we can get it.

The way forward:

It is within our grasp to locate and collate stations in the USA and around the world that have as long an uninterrupted record, and as much freedom from bias, as possible, and to make that a new climate data subset. I’d propose calling it the Un-Biased Global Historical Climate Network, or UBGHCN. That may or may not be a good name, but you get the idea.

We’ve found at least this many good stations in the USA that meet the criteria of being reliable and without need for major adjustments of any kind, including the time-of-observation change (TOB). Some do require the cooling-bias correction for the MMTS conversion, but that is well known and a static value that doesn’t change with time. Chances are, a similar set of 50 stations could be located around the world. The challenge is metadata, some of which is not publicly available, but with crowd-sourcing such a project might be doable, and then we could fulfill Gavin Schmidt’s and Eric Steig’s vision of a much simpler set of climate stations.
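As a sketch of what the station-screening step might look like, here is a minimal example assuming a hypothetical metadata table; the column names, thresholds, and siting-class labels are illustrative only, not an actual USHCN metadata schema.

```python
# Sketch: screen station metadata for long, unmoved, TOB-stable, well-sited
# candidates for a hand-picked reference subset.
import pandas as pd

def select_candidates(meta: pd.DataFrame, min_years: int = 80) -> pd.DataFrame:
    """Keep long-record stations with no moves, no TOB changes, and good siting."""
    keep = (
        (meta["last_year"] - meta["first_year"] >= min_years)
        & (meta["n_moves"] == 0)
        & (meta["tobs_changes"] == 0)
        & (meta["siting_class"].isin([1, 2]))   # best siting classes in this toy schema
    )
    # Passing stations with MMTS sensors would still get the known static
    # cooling-bias correction applied to their records afterward.
    return meta[keep]

# Usage with a tiny made-up metadata table.
meta = pd.DataFrame({
    "station_id":   ["A", "B", "C"],
    "first_year":   [1900, 1950, 1895],
    "last_year":    [2014, 2014, 2014],
    "n_moves":      [0, 2, 0],
    "tobs_changes": [0, 0, 1],
    "sensor":       ["MMTS", "CRS", "CRS"],
    "siting_class": [2, 1, 1],
})
print(select_candidates(meta)["station_id"].tolist())   # -> ['A']
```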

Wouldn’t it be great to have a simpler, known-reliable set of stations rather than this mishmash that goes through the statistical blender every month? NOAA could take the lead on this, but chances are they won’t. I believe it is possible to do this independently of them, and it is a place where climate skeptics can make a powerful contribution, one far more productive than the arguments over adjustments and data dropout.

 

274 Comments
KNR
June 26, 2014 11:54 pm

You would not want to play poker with these people. How “lucky” can you get that all the adjustments of the past happen to result in reductions in past temperatures, which make more modern ones look higher, which happens to be very useful to those pushing “the cause”. Lucky indeed!

Brandon C
June 27, 2014 12:18 am

Nick. Again you just assume that every single reading is done exactly the same and followed only one protocol. This is easily proven not to be the case from interviews with station operators. Rather than admit that the extent of the problem is unknown and leave it as an uncertainty, you just proceed as though your easily disproven assumption is correct. This is not just poor science, it borders on outright dishonesty. We all understand what the TOBS issue is; repeating the basic reason does not change the fact that it assumes a uniform error despite evidence to the contrary. If the effect were small, it would not be important, but the adjustments are huge compared to the trend you are trying to measure. No real scientist should even consider using such significant adjustments when you don’t have any additional solid data to verify at least a sampling against. You are assuming a position and verifying the outcome against nothing more than your own expectations. Scientifically this is not defensible, and at some point there will be an actual independent audit of all this poor work. This is a black eye for science. The scientific method and its principles were designed to be immutable rules, never to be ignored, so as to avoid bias and error, not to be glossed over because you’re sure it must be the correct answer. I have never seen a group act so unscientifically as climate scientists. It is reaching the point of pathetic.

Lance Wallace
June 27, 2014 12:18 am

daveburton says:
June 26, 2014 at 9:54 pm
Very useful historical record, thanks for your efforts.
I used your data to look at the difference between 2014 and 1999, 2000, and 2006:
https://dl.dropboxusercontent.com/u/75831381/NASA%20FIGD%20CHANGE%20OVER%20TIME.pdf
It was not a monotonic lowering of past temperatures and a raising of recent ones; rather, the 5-year averages for the years from 1880-1910 were lifted up a bit (0-0.1 C), then from 1910-1940 they were lowered by about the same amount, then from 1940 to the present it was off to the races, eventually reaching a delta of about 0.3 C in 1996 (the latest date available from the Hansen 1999 figure). Hard to look at that beautiful parabola and attribute the changes to random adjustments for TOBS, missing data, etc.

MDS
June 27, 2014 12:18 am

Whenever I see a network of instruments being used to measure a phenomenon, I always wonder about the thoroughness of how well the instruments, their housings, and their calibration are maintained. Yes, I understand and agree that the local environment is a part of the issue, but having seen other measurements made by dissimilar and differently maintained systems of instruments—yes, even within federally-funded agencies—it’s easy to suspect that the number of errors can begin to accumulate. I think everyone just assumes that measurements are being taken with care and the systems maintained the same way, but who checks?

richardscourtney
June 27, 2014 12:29 am

X Anonymous:
I am not surprised that you choose to be anonymous when presenting stuff such as your post at June 26, 2014 at 4:34 pm which says

For those who are interested in what Nick Stokes is talking about. This wiki page on intergration is well worth the read. In particular, on the issue of only using a few stations and arriving at the same conclusion,

“the Gaussian quadrature often requires noticeably less work for superior accuracy.”
“The explanation for this dramatic success lies in error analysis, and a little luck.”

http://en.wikipedia.org/wiki/Integral
The same principle can also be applied to modern GCM, where they get the right answer from the wrong parameter.

The result is “dramatic success” based on “a little luck”!
And this “success” is like “modern GCM” which don’t suffer from GIGO because “they get the right answer from the wrong parameter”.
Science is NOT a matter of “luck”. It is a rigorous adherence to a method.
So, whatever these compilations of GASTA are called, they cannot validly be called “science”.

And this anti-science is not surprising when at least one involved person (Steven Mosher) chooses to make unsolicited proclamations that he does not know what the scientific method is (see here).
Richard

ferdberple
June 27, 2014 12:32 am

Goddard is saying that all adjustments always go in favor of global warming.
===============
This is the elephant in the room that everyone is ignoring. If the errors are randomly distributed, then the adjustments cannot be correct if they show a trend.
Since there is no evidence to indicate the errors are not random, the trend shown by all 4 methods of calculating the adjustments that Anthony presented indicates the adjustments are statistically bogus. Anthony’s graph in part I confirms Goddard’s findings in this respect.

scf
June 27, 2014 12:47 am

Great essay. Funny how you never see a discussion like this in, god forbid, the media. I look forward to Anthony’s paper. I would also like to say I have a great deal of respect for Goddard. No matter the hype, he is doing the best job of showing just how much of the temperature record is an artifact of adjustments. Adjustments exceed the measured changes, and the adjustments always move in the same direction, to cool the past and warm the present, which is the opposite of what you would expect when adjusting for UHI. Steve McIntyre also illustrates how even a single paper like Marcott et al., with an obvious deformity in its results (which of course alters the results in the usual manner), goes unacknowledged to this day. If even that gross and ridiculous hockey blade cannot be acknowledged, then we need people like Goddard, and you can certainly understand what Goddard’s hype is a reaction to when you look at papers like Marcott’s, or see Goddard’s diagrams showing how the year 1930 has somehow cooled substantially between 2000 and now. Every single day the past gets colder; sooner or later 1930 will become an ice age that was somehow undetected at the time.

June 27, 2014 1:03 am

Lance Wallace wrote, “I used your data to look at the difference between 2014 and 1999, 2000, and 2006.”
Thanks for creating the very nice graph showing the adjustments to the 5-year averages, Lance. Your graph certainly illustrates how their adjustments have served to progressively depress the high temperatures 3/4 century ago, and progressively raise temperatures at the end of the 20th century. However it is also interesting that in your graphs of 5-year averages, the adjustments add a total of only about 0.4°C of warming, rather than the 0.63°C that I saw when comparing the peak years of 1934 and 1998.
Since your graph is in a dropbox file, and dropbox files tend to be transient but archive.org won’t archive them, I took the liberty of archiving it with WebCitation, here:
http://www.webcitation.org/6QdesojPY

Stephen Richards
June 27, 2014 1:17 am

As I pointed out in part 1, your comments were not warranted and your behaviour was unacceptable. While I accept the pressures on you created by your blog and your business, it is not acceptable that you criticise a fellow traveller before fully understanding what he is trying to explain. In my opinion you failed to do so. This post, however, goes some way towards a full analysis. Why you would ever take any note of Zeke how’syerfather and Mosher I simply do not know.
Perhaps an apology to Goddard’s site might be in order?

Stephen Richards
June 27, 2014 1:24 am

Will Nitschke says:
June 26, 2014 at 10:09 pm
@Willis Eschenbach
“His regular promotion of a brain washing gun control conspiracy theory complete with Holocaust photos loses him the entire Internet culture debate for *all* of us skeptics because he has the second highest traffic skeptical site.”
This is all strawman stuff. Yes he uses controversial images to get his points across but his points are valid if a little OTT. You are doing what you accuse the AGW team of doing. Look at yourselves. Go look in the mirror.
There is only one Steve Mc. None of us comes close to his skills of expression, analysis and above all firm politeness. So engage brain before typing. I know I fail to do so on many occasions.

Dr. Paul Mackey
June 27, 2014 1:29 am

I think using a single global average temperature is fundamentally meaningless as a metric for climate, even if there were a perfect set of records. In my opinion there needs to be some thought about what that metric should be, which would not be a trivial exercise given the multi-faceted nature of climate, with an innumerable number of variables and processes, some of which are not yet understood completely.
As it is, the number seems to be just a statistical artifact with no physical meaning for the real world. This can be seen from Anthony’s set of graphs from Las Vegas. One (the min temp average) paints a warming picture and the other (the max average) paints a cooling picture. Anthony provides the explanation via the UHI effect.
The temperature record for Las Vegas therefore is a measure of urban growth. Other station records may provide a measure of deforestation, for example, or some other process. The national and global averages mindlessly amalgamate measures from different “experiments” into one figure. This is meaningless.
Back when I was an experimental physicist, I had to be very careful about systematic errors in my apparatus. I see no mention of systematic errors in the climate debate. Each met station surely is an individual “experiment”. The UHI, for example, seems to me to be a systematic error, one that is also a function of time. Site location changes, equipment changes etc., to my mind, fall into this category. Surely these have to be removed individually, from each experiment’s results, prior to the results being amalgamated or compared? If this is not being taken into account, then the number is meaningless, as you are effectively averaging different measurements.

knr
June 27, 2014 1:33 am

Adjustments themselves are not an issue; it’s the methodology of the adjustments that matters:
First, the justification of why there is a need for adjustments.
Second, making it clear how these adjustments were done.
Third, retaining the unadjusted data, so it is possible for others to check the validity of these adjustments or to revert to the old values should the new adjustments prove unsound over time.
Climate ‘science’, with its ‘the dog ate my homework’ approach to data retention and control, frequently forgets to do these things. And by ‘lucky chance’ the mistakes they make in adjustments always favour the ideas they are pushing. So you can see why ‘adjustments’ are a problem. What you cannot see, sadly, is any steps being taken to address these issues other than attempts to deny the right of non-supporters to raise their concerns in public.

richardscourtney
June 27, 2014 1:46 am

Dr. Paul Mackey:
In your post at June 27, 2014 at 1:29 am you say

I think using a single global average temperature is fundamentally meaningless as a metric for climate, even if there were a perfect set of records. In my opinion there needs to be some thought about to what that metric should be, which would not be a trivial excercise given the multi-facetted nature of climate with an innumerable number of variables and processes, some of which are not yet understood completely.
As it is the number seems to be just a statistical artifact with no physical meaning for the real world.

YES! I have been saying that for years.
If you have not seen it then I think you will want to read this, especially its Appendix B.
And please remember that – as I pointed out in my above post at June 27, 2014 at 12:29 am – at least one of those involved in ‘altering the past’ denies the scientific Null Hypothesis; i.e.
A system must be assumed to have not changed unless there is empirical evidence that it has changed.
Richard

Nick Stokes
June 27, 2014 2:20 am

Brandon C says: June 27, 2014 at 12:18 am
“Nick. Again you just assume that every single reading is done exactly the same and followed only one protocol. This is easily proven not to be the case from interviews with station operators. Rather than admit that the extent of the problem is unknown and leave it as an uncertainty, you just proceed as though your easily disproven assumption is correct. This is not just poor science, it borders on outright dishonesty.”

No, it’s the do-nothing option that is poor science. The fact is that assumptions are inevitably made. You have a series of operator observations of markers at stated times, and you have to decide what they mean. Assuming they are maxima on the day of observation is an assumption like any other. And it incorporates a bias.
The unscientific thing is just to say, as you do, that it’s just too hard to measure, so let’s just assume it is zero. The scientific thing is to do your best to work out what it is and allow for it. Sure, observation times might not have been strictly observed. Maybe minima are affected differently to maxima (that can be included). You won’t get a perfect estimate. But zero is a choice. And a very bad one.

A C Osborn
June 27, 2014 3:00 am

I still cannot understand why no one else has picked up WHAT IS UP WITH THAT 7th Graph of GHCN-M v3 Raw, Averaged Absolutes.
Has anyone ever seen a Global Temperature graph with 2 such 1.5 degree step changes around 1950 & 1990?
There is no TREND other than downwards from the 1950s to the 1990s.
With so many of the stations being in the USA you would expect to see some element of the 1930s/40s high temperatures coming through in the global record, but it doesn’t; they are a whole degree lower than the 1990s. Compare that to Graph 9 of the USHCN raw data.
So what Cooling event in the rest of the world offset those very high temperatures?
Don’t forget these are allegedly RAW actual temperatures.
The same thing applies to the 9th Graph of USHCN Temperatures, Raw 5-yr Smoothed: no nice upward TRENDS, just 4 Step Changes of 0.5 degree over a couple of years around 1920, 1930, 1950 & 1995, with a very fast Trend up of 0.6 degree between 1978 and 1990.
None of these suggest anything to do with steady increases due to the steady increase in CO2.
What in the Earths Climate Systems can produce such major Shifts in Climate?
In the USA data are we seeing the 11 Year Solar Cycle in some way?

Chuck L
June 27, 2014 3:06 am

I have followed the comments and have learned more about different methods of determining trends, TOBS, station drop-out, etc. but to reiterate what a number of commenters have said, the adjustments always create a greater rising trend. Why are adjustments continually being made on (already adjusted) older temperatures on an annual or even less than annual basis?! Did they not get it “right” the first time? It is hard to imagine this is coincidental, if not outright malfeasance/fraud by NOAA/NCDC/GISS to fit the global warming agenda.

Nick Stokes
June 27, 2014 3:22 am

A C Osborn says: June 27, 2014 at 3:00 am
“I still cannot understand why no else has picked up WHAT IS UP WITH THAT 7th Graph of GHCN-M v3 Raw, Averaged Absolutes.”

Zeke picked up on it. He introduced that plot thus:
“There is a very simple way to show that Goddard’s approach can produce bogus outcomes. Lets apply it to the entire world’s land area, instead of just the U.S. using GHCN monthly:”
It’s a faulty method, and can give you anything.

A C Osborn
June 27, 2014 3:31 am

Nick Stokes says:
June 27, 2014 at 3:22 am
How about it gives you the TRUTH???
Are they real values or aren’t they?
Are they reproducible?

richardscourtney
June 27, 2014 3:40 am

Nick Stokes:
At June 27, 2014 at 3:22 am you rightly say of Goddard’s method

It’s a faulty method, and can give you anything.

Yes, indeed so.
The same can be said of each and every determination of global average surface temperature anomaly (GASTA) produced by BEST, GISS, HadCRU, etc…
This is because there is no agreed definition of GASTA so each team that compiles values of GASTA uses its own definition. Also, each team changes the definition it uses almost every month: this is why past values of GASTA change every month; if the definition did not change then the values would not change.
The facts of this are as follows.
There is no agreed definition of GASTA.
And
There are several definitions of GASTA.
And
The teams who determine values of GASTA each frequently changes its definition.
And
There is no possibility of independent calibration of GASTA determinations.
Therefore

Every determination of GASTA is determined by a faulty method, and can give you anything.

Richard

A C Osborn
June 27, 2014 3:50 am

One thing that I disagree with is combining air and water temperature data; it should either be air with air, or water with land surface.

Nick Stokes
June 27, 2014 3:51 am

richardscourtney says: June 27, 2014 at 3:40 am
“Every determination of GASTA is determined by a faulty method, and can give you anything.”

Zeke gave his demonstration of how Goddard’s method gives wrong answers. Let’s see your demonstration re GASTA?

June 27, 2014 3:53 am

Richards
“This is all strawman stuff. Yes he uses controversial images to get his points across but his points are valid if a little OTT. You are doing what you accuse the AGW team of doing. Look at yourselves. Go look in the mirror.”
Oddly, I think I’m making the same point you’re making, so I’m not sure which mirror I should be peering into.

June 27, 2014 3:53 am

One of the apologists for constantly changing the temperature record of the past (does he have a time machine???) says that the abomination of the records at Luling, Texas (station number 415429) is a good thing and that is how errors are found. Hmmmmm. A blogger takes his first ever look at a station and finds massive problems? This is good?
But far, far worse is the idea that “NASA Is Constantly Cooling The Past And Warming The Present” as presented here: http://stevengoddard.wordpress.com/2014/06/27/nasa-is-constantly-cooling-the-past-and-warming-the-present/
Come on fellows, when will the past record ever be settled? Does the temperature time series really depend on who is running the agency? Do the subjective, personal beliefs of the “scientists” involved really determine the reality that occurred in the past? Does anyone here really think they can defend constantly altering the past records?
And what about this:

What is really ugly about this is that they overwrite the data in place, don’t archive the older versions, and make no mention of their changes on the web pages where the graphs are displayed. There should be prominent disclaimers that the actual thermometer data shows a 90 year cooling trend in the US, and that their graphs do not represent the thermometer data gathered by tens of thousands of Americans over the past 120 years.

They don’t archive the older versions? They don’t archive the changes? They toss out the record of their altering of the data? Oh my!
They don’t even mention all these changes on the pages where the graphs are? They don’t even mention that the real data shows cooling rather than warming? My, my, my.
And some people just trust the “scientists”. Is it because of the white lab coats?

Greg Goodman
June 27, 2014 4:04 am

WUWT: ” The challenge is metadata, some of which is non-existent publicly”
This is the real challenge. Most European met services still seem very possessive about their daily data. Despite WMO regs requiring free sharing of data, they are often using prohibitive “distribution” charges to deter “free” data sharing.
Also, the Austrian service requires a NON-DISCLOSURE AGREEMENT.
The Swiss don’t even have a link to request data and give ZERO data on their site.
The UK Met Office charges for daily data.
As Phil Jones revealed in one of his climategate emails, they are HIDING behind intellectual property rights.
So the data is there to build a reliable global network, but there needs to be a significant change in attitude at many of these European meteo services before that can happen.
Since WMO rules already include that intention, maybe some coordinated action is needed to ensure national met services are not “hiding behind” IP or raising abusive charges for individual requests instead of making detailed data readily available online.

richardscourtney
June 27, 2014 4:54 am

Nick Stokes:
At June 27, 2014 at 3:40 am I wrote

there is no agreed definition of GASTA so each team that compiles values of GASTA uses its own definition. Also, each team changes the definition it uses almost every month: this is why past values of GASTA change every month; if the definition did not change then the values would not change.
The facts of this are as follows.
There is no agreed definition of GASTA
And
There are several definitions of GASTA
And
The teams who determine values of GASTA each frequently changes its definition
And
There is no possibility of independent calibration of GASTA determinations.
Therefore
Every determination of GASTA is determined by a faulty method, and can give you anything.

At June 27, 2014 at 3:51 am you have ignored that explanation and argument and replied with this ‘red herring’

Zeke gave his demonstration of how Goddard’s method gives wrong answers. Let’s see your demonstration re GASTA?

I did that when I wrote,
“each team changes the definition it uses almost every month: this is why past values of GASTA change every month; if the definition did not change then the values would not change.
But, of course, there is also this.
My having landed your ‘red herring’, perhaps you would now be willing to address the fundamental issue which I raised, or is my explanation and argument so good that you are incapable of addressing it?
Richard
