On ‘denying’ Hockey Sticks, USHCN data, and all that – part 2

In part one of this essay, which you can see here, I got quite a lot of feedback from both sides of the climate debate. Some people thought I was spot on with my criticisms, while others thought I had sold my soul to the devil of climate change. It is an interesting life when I am accused of being in cahoots with both “big oil” and “big climate” at the same time. That aside, in this part of the essay I am going to focus on areas of agreement and disagreement and propose a solution.

In part one of the essay, I focused on the methodology that created a hockey-stick-style graph as an artifact of missing data. Because the missing data caused a faulty spike at the end, Steve McIntyre commented, suggesting that it was more like the Marcott hockey stick than Mann’s:

Steve McIntyre says:

Anthony, it looks to me like Goddard’s artifact is almost exactly equivalent in methodology to Marcott’s artifact spike – this is a much more exact comparison than Mann. Marcott’s artifact also arose from data drop-out.

However, rather than conceding the criticism, Marcott et al have failed to issue a corrigendum and their result has been widely cited.

In retrospect, I believe McIntyre is right to make that comparison. Data dropout is the central issue here, and when it occurs it can create all sorts of statistical abnormalities.
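To make that mechanism concrete, here is a minimal sketch in Python (purely synthetic numbers, not Goddard’s or Marcott’s actual data) showing how dropout concentrated near the end of a record produces a spurious endpoint spike in a simple average of absolute values, even when no individual station changes at all:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations, n_years = 100, 120

# Synthetic stations: each has its own baseline climate (some cold, some warm)
# and no underlying trend at all.
baselines = rng.normal(loc=12.0, scale=8.0, size=n_stations)            # deg C
data = baselines[:, None] + rng.normal(0.0, 0.5, (n_stations, n_years))

# Simulate data dropout concentrated near the end of the record:
# the colder half of the stations stop reporting first.
mask = np.ones_like(data, dtype=bool)
dropout_start = n_years - 10
for i in np.argsort(baselines)[: n_stations // 2]:      # the coldest half
    mask[i, dropout_start + (i % 10):] = False          # staggered dropout

masked = np.where(mask, data, np.nan)

# "Averaged absolutes": simple mean of whatever stations report each year.
avg_absolute = np.nanmean(masked, axis=0)

print("last 10 yrs minus first 10 yrs: "
      f"{avg_absolute[-10:].mean() - avg_absolute[:10].mean():+.2f} C")
# A large positive spike appears at the end even though no station warmed:
# the average jumps because the cold stations dropped out of the mix.
```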

Despite some spirited claims in comments on part one about how I’m “ignoring the central issue”, I don’t dispute that data is missing from many stations; I never have.

It is something that has been known about for years and is actually expected in the messy data-gathering process of volunteer observers, electronic systems that don’t always report, and equipment and/or sensor failures. In fact, there is likely no weather network in existence that has perfect data with nothing missing. Even the new U.S. Climate Reference Network, designed to be state-of-the-art and as close to perfect as possible, has a small amount of missing data due to failures of uplinks or other electronic issues, seen in red:

CRN_missing_data

Source: http://www.ncdc.noaa.gov/crn/newdaychecklist?yyyymmdd=20140101&tref=LST&format=web&sort_by=slv

What is in dispute is the methodology, which, as McIntyre observed, created a false “hockey stick” shape much like the one we saw in the Marcott affair:

marcott-A-1000[1]

After McIntyre corrected the methodology used by Marcott, dealing with faulty and missing data, the result looked like this:

 

alkenone-comparison

McIntyre pointed this out in comments on part 1:

In Marcott’s case, because he took anomalies at 6000BP and there were only a few modern series, his results were an artifact – a phenomenon that is all too common in Team climate science.

So, clearly, the correction McIntyre applied to Marcott’s data made the result better, i.e. more representative of reality.

That’s the same sort of issue we saw in Goddard’s plot; the data was thinning near the present-day endpoint.

Goddard_screenhunter_236-jun-01-15-54

[ Zeke has more on that here: http://rankexploits.com/musings/2014/how-not-to-calculate-temperatures-part-3/ ]

While I would like nothing better than to be able to use raw surface temperature data in its unadulterated “pure” form to derive a national temperature and to chart the climate history of the United States (and the world), the fact is that the national USHCN/co-op network and the GHCN are in such bad shape, and have become so heterogeneous, that this is no longer possible with the raw data set as a whole.

These surface networks have had so many changes over time (stations that have been moved, had their time of observation changed, had equipment changes or maintenance issues, or been encroached upon by micro-site biases and/or UHI) that using the raw data for all stations on a national or even global scale gives you a result that is no longer representative of the actual measurements. There is simply too much polluted data.

A good example of polluted data can be found at the Las Vegas, Nevada USHCN station:

LasVegas_average_temps

Here, growth of the city and its population has resulted in a clear and undeniable nighttime UHI signal, with low temperatures gaining 10°F since measurements began. It is studied and acknowledged by the “sustainability” department of the city of Las Vegas, as seen in this document. Dr. Roy Spencer, in his blog post, called it “the poster child for UHI” and wonders why NOAA’s adjustments haven’t removed this problem. It is a valid and compelling question. But at the same time, if we were to use the raw data from Las Vegas, we know it has been polluted by the UHI signal, so is it representative in a national or global climate presentation?

LasVegas_lows

The same trend is not visible in the daytime Tmax temperature; in fact, it appears there has been a slight downward trend since the late 1930s and early 1940s:

LasVegas_highs

Source for data: NOAA/NWS Las Vegas, from

http://www.wrh.noaa.gov/vef/climate/LasVegasClimateBook/index.php

The question then becomes: Would it be okay to use this raw temperature data from Las Vegas without any adjustments to correct for the obvious pollution by UHI?

From my perspective, the thermometer at Las Vegas has done its job faithfully. It has recorded what actually occurred as the city has grown. It has no inherent bias; the change in its surroundings has biased it. The issue, however, arises when you start using stations like this to search for the posited climate signal from global warming. Since the nighttime temperature increase at Las Vegas is almost an order of magnitude larger than the signal posited to exist from carbon dioxide forcing, that AGW signal would clearly be swamped by the UHI signal. How would you find it? If I were searching for a climate signal by examining stations, rather than throwing blind automated adjustments at them, I would most certainly remove Las Vegas from the mix, as its raw data is unreliable; it has been badly and likely irreparably polluted by UHI.

Now before you get upset and claim that I don’t want to use raw data, or, as some call it, “untampered” or unadjusted data, let me say nothing could be further from the truth. The raw data represents the actual measurements; anything that has been adjusted is not fully representative of the measurement reality, no matter how well-intentioned, accurate, or detailed those adjustments are.

But, at the same time, how do you separate all the other biases that have not been dealt with (like Las Vegas) so you don’t end up creating national temperature averages with imperfect raw data?

That my friends, is the $64,000 question.

To answer that question, we have a demonstration. Over at The Blackboard blog, Zeke has plotted something that I believe illustrates the problem.

Zeke writes:

There is a very simple way to show that Goddard’s approach can produce bogus outcomes. Let’s apply it to the entire world’s land area, instead of just the U.S., using GHCN monthly:

Averaged Absolutes

Egads! It appears that the world’s land has warmed 2C over the past century! It’s worse than we thought!

Or we could use spatial weighting and anomalies:

 

Gridded Anomalies

Now, I wonder which of these is correct? Goddard keeps insisting that it’s the first, and that evil anomalies just serve to manipulate the data to show warming. But so it goes.

Zeke wonders which is “correct”. Is it Goddard’s method of plotting all the “pure” raw data, or is it Zeke’s method of using gridded anomalies?

My answer is: neither of them is absolutely correct.

Why, you ask?

It is because both contain stations like Las Vegas that have been compromised by changes in their environment, the station itself, the sensors, the maintenance, the time of observation, data loss, etc. In both cases we are plotting data that is a huge mishmash of station biases that have not been dealt with.
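For those who want to see the arithmetic behind Zeke’s two plots, here is a minimal sketch in Python (made-up stations rather than the real GHCN, and with the spatial gridding step omitted for brevity) contrasting an average of absolute temperatures with an average of station anomalies when the station mix changes over time:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 100
years = np.arange(n_years)
true_trend = 0.005 * years                      # 0.5 C/century "real" signal

# Two groups of stations with very different absolute climates.
cold = 0.0 + true_trend + rng.normal(0, 0.2, (30, n_years))
warm = 15.0 + true_trend + rng.normal(0, 0.2, (30, n_years))
data = np.vstack([cold, warm])

# Station composition changes over time: the warm group only reports
# in the second half of the record.
mask = np.ones_like(data, dtype=bool)
mask[30:, : n_years // 2] = False

masked = np.where(mask, data, np.nan)

# Method 1: average the absolute temperatures of whatever reports each year.
avg_abs = np.nanmean(masked, axis=0)

# Method 2: average anomalies, each station relative to its own mean over a
# period it actually reported (here, the second half of the record).
baseline = np.nanmean(masked[:, n_years // 2:], axis=1, keepdims=True)
avg_anom = np.nanmean(masked - baseline, axis=0)

def trend_per_century(series):
    return np.polyfit(np.arange(len(series)), series, 1)[0] * 100

print(f"averaged absolutes: {trend_per_century(avg_abs):+.1f} C/century")
print(f"averaged anomalies: {trend_per_century(avg_anom):+.1f} C/century")
# The absolute average shows a huge spurious warming (warm stations joining
# the network mid-record); the anomaly average recovers roughly 0.5 C/century.
```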

NOAA tries to deal with these issues, but their effort falls short. Part of the reason it falls short is that they are trying to keep every bit of data and adjust it in an attempt to make it useful, and to me that is misguided, as some data is just beyond salvage.

In most cases, the cure from NOAA is worse than the disease, which is why we see things like the past being cooled.

Here is another plot from Zeke just for the USHCN, which shows Goddard’s method “Averaged Absolutes” and the NOAA method of “Gridded Anomalies”:

Goddard and NCDC methods 1895-2013

[note: the Excel code I posted was incorrect for this graph, and was for another graph Zeke produced, so it was removed, apologies – Anthony]

Many people claim that the “Gridded Anomalies” method cools the past, and increases the trend, and in this case they’d be right. There is no denying that.

At the same time, there is no denying that the entire CONUS USHCN raw data set contains all sorts of imperfections, biases, UHI, data dropouts, and a whole host of problems that remain uncorrected. It is a Catch-22; on one hand the raw data has issues, and on the other, at a bare minimum some sort of infilling and gridding is needed to produce a representative signal for the CONUS, but in producing that, new biases and uncertainties are introduced.

There is no magic bullet that always hits the bullseye.

I’ve known and studied this for years; it isn’t a new revelation. The key point here is that both Goddard and Zeke (and by extension BEST and NOAA) are trying to use the ENTIRE USHCN dataset, warts and all, to derive a national average temperature. Neither method produces a totally accurate representation of the national temperature average. Keep that thought.

While both methods have flaws, the issue that Goddard raised includes one good point, and an important one: the rate of data dropout in USHCN is increasing.

When data gets lost, they infill with other nearby data, and that’s an acceptable procedure, up to a point. The question is, have we reached a point of no confidence in the data because too much has been lost?
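For readers unfamiliar with what “infilling” looks like in practice, here is a minimal sketch of one simple flavor of it, an inverse-distance-weighted average of neighboring stations (my own simplified illustration, not NOAA’s actual infilling code):

```python
import numpy as np

def infill_missing(values, lon, lat, power=2.0):
    """Fill NaNs in `values` (one reading per station for a given month) with
    an inverse-distance-weighted average of the stations that did report.
    A simplified stand-in for the kind of spatial infilling NOAA performs,
    not their actual algorithm."""
    values = np.asarray(values, dtype=float)
    filled = values.copy()
    reporting = ~np.isnan(values)
    for i in np.where(np.isnan(values))[0]:
        # crude planar distance; real products use great-circle distance
        d = np.hypot(lon[reporting] - lon[i], lat[reporting] - lat[i])
        w = 1.0 / np.maximum(d, 1e-6) ** power
        filled[i] = np.sum(w * values[reporting]) / np.sum(w)
    return filled

# Toy example: five stations, one of which (index 2) failed to report.
lon = np.array([-115.1, -115.3, -114.9, -116.0, -115.5])
lat = np.array([36.2, 36.0, 36.4, 35.8, 36.6])
temps = np.array([31.0, 30.5, np.nan, 24.0, 22.5])   # monthly means, deg C

print(infill_missing(temps, lon, lat))
# The missing station inherits whatever biases its nearest neighbors carry:
# if the closest reporters are urban and warm, the infilled value is warm too.
```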

John Goetz asked the same question as Goddard in 2008 at Climate Audit:

How much Estimation is too much Estimation?

It is still an open question, one without a good answer yet.

But at the same time that we are seeing more and more data loss, Goddard is claiming “fabrication” of lost temperature data in the final product while also advocating using the raw surface temperature data for a national average. From my perspective, you can’t argue for both. If the raw data is becoming less reliable due to data loss, how can we use it by itself to reliably produce a national temperature average?

Clearly, with the mess the USHCN and GHCN are in, the raw data won’t accurately produce a representative result of the true climate change signal of the nation, because it is so horribly polluted with so many other biases. There are easily hundreds of stations in the USHCN that have been compromised as Las Vegas has been, making the raw data, as a whole, mostly useless.

So in summary:

Goddard is right to point out that there is increasing data loss in USHCN and it is being increasingly infilled with data from surrounding stations. While this is not a new finding, it is important to keep tabs on. He’s brought it to the forefront again, and for that I thank him.

Goddard is wrong to say we can use all the raw data to reliably produce a national average temperature, because that same data is increasingly lossy and is also full of other biases that are not dealt with. [added: His method allows biases to enter that are mostly about station composition, and less about infilling; see this post from Zeke]

As a side note, claiming “fabrication” in a nefarious way doesn’t help, and generally turns people off to open debate on the issue, because the process of infilling missing data wasn’t designed with any nefarious motive; it was designed to make the monthly data usable when small data dropouts occur, as we discussed in part 1 when we showed the B-91 form with missing data from a volunteer observer. Claiming “fabrication” only puts up walls, and frankly, if we are going to enact any change to how things get done in climate data, new walls won’t help us.

Biases are common in the U.S. surface temperature network

This is why NOAA/NCDC spends so much time applying infills and adjustments; the surface temperature record is a heterogeneous mess. But in my view, this process of trying to save messed up data is misguided, counter-productive, and causes heated arguments (like the one we are experiencing now) over the validity of such infills and adjustments, especially when many of them seem to operate counter-intuitively.

As seen in the map below, there are thousands of temperature stations in the U.S. co-op and USHCN networks. By our surface stations survey, at least 80% of the USHCN is compromised by micro-site issues in some way, and by extension, since the USHCN subset of the co-op network we surveyed is a large sample, that finding should translate to the larger network.

USHCN_COOP_Map

When data drops out of USHCN stations, data from nearby neighbor stations is used to fill in the gaps, but when 80% or more of your network is compromised by micro-site issues, chances are all you are doing is infilling missing data with compromised data. I explained this problem years ago using a water bowl analogy, showing how the true temperature signal gets “muddy” when data from surrounding stations is used to infill missing data:

bowls-USmap

The real problem is that the increasing amount of data dropout in USHCN (and in the co-op network and GHCN) may be reaching the point where infilling adds a majority of biased signal from nearby problematic stations. Imagine a well-sited, long-period rural station near Las Vegas that has its missing data infilled using Las Vegas data; you know the result will be warmer when that happens.

So, what is the solution?

How do we get an accurate surface temperature for the United States (and the world) when the raw data is full of uncorrected biases and the adjusted data does little more than smear those station biases around when infilling occurs? Some of our friends say a barrage of statistical fixes is all that is needed, but there is another, simpler way.

Dr. Eric Steig, at “Real Climate”, in a response to a comment about Zeke Hausfather’s 2013 paper on UHI, shows us a way.

Real Climate comment from Eric Steig (response at bottom)

We did something similar (but even simpler) when it was being insinuated that the temperature trends were suspect, back when all those UEA emails were stolen. One only needs about 30 records, globally spaced, to get the global temperature history. This is because there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.

For those who don’t know what the Rossby radius is, see this definition.

Steig claims 30 station records are all that are needed globally. In a comment some years ago (now probably lost in the vastness of the Internet) we heard Dr. Gavin Schmidt say something similar: that about “50 stations” would be all that is needed.

[UPDATE: Commenter Johan finds what may be the quote:

I did find this Gavin Schmidt quote:

“Global weather services gather far more data than we need. To get the structure of the monthly or yearly anomalies over the United States, for example, you’d just need a handful of stations, but there are actually some 1,100 of them. You could throw out 50 percent of the station data or more, and you’d get basically the same answers”

http://earthobservatory.nasa.gov/Features/Interviews/schmidt_20100122.php ]

So if that is the case, and one of the most prominent climate researchers on the planet (and his associate) says we need only somewhere between 30 and 50 stations globally…why is NOAA spending all this time trying to salvage bad data from hundreds if not thousands of stations in the USHCN, and also in the GHCN?

It is a question nobody at NOAA has ever really been able to answer for me. While it is certainly important to keep the records from all these stations for local climate purposes, why try to keep them in the national and global dataset when Real Climate scientists say that just a few dozen good stations will do just fine?
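The Schmidt/Steig claim is easy to illustrate in miniature. Here is a minimal sketch (synthetic, spatially correlated “stations”, not real GHCN data) of why a few dozen well-spread records can reproduce essentially the same anomaly history as more than a thousand:

```python
import numpy as np

rng = np.random.default_rng(7)
n_stations, n_years = 1100, 100

# One shared large-scale signal plus station-level noise: a stand-in for the
# high spatial correlation of temperature anomalies (the Rossby-radius
# argument quoted above).
shared = np.cumsum(rng.normal(0.01, 0.1, n_years))            # common anomaly
local = rng.normal(0.0, 0.3, (n_stations, n_years))           # local weather
anomalies = shared + local

full_mean = anomalies.mean(axis=0)

# Keep only a few dozen randomly chosen stations.
subset = rng.choice(n_stations, size=40, replace=False)
subset_mean = anomalies[subset].mean(axis=0)

corr = np.corrcoef(full_mean, subset_mean)[0, 1]
rmse = np.sqrt(np.mean((full_mean - subset_mean) ** 2))
print(f"correlation, full vs. 40-station subset: {corr:.3f}")
print(f"RMS difference: {rmse:.3f} C")
# When anomalies are dominated by a shared large-scale signal, ~40 stations
# track the 1,100-station average closely, which is the point Schmidt and
# Steig are making.
```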

There is precedent for this: the U.S. Climate Reference Network, which has just a fraction of the stations of the USHCN and co-op networks:

crn_map

NOAA/NCDC is able to derive a national temperature average from these few stations just fine, and without the need for any adjustments whatsoever. In fact they are already publishing it:

USCRN_avg_temp_Jan2004-April2014

If it were me, I’d throw out most of the USHCN and co-op stations with problematic records rather than try to salvage them with statistical fixes, and instead try to locate the best stations with long records, no moves, and minimal site biases, and use those as the basis for tracking the climate signal. By doing so, not only do we eliminate a whole bunch of make-work with questionable or uncertain results, we also end all the complaints about data falsification and the quibbling over whose method really finds the “holy grail of the climate signal” in the US surface temperature record.

Now you know what Evan Jones and I have been painstakingly doing for the last two years, since our preliminary siting paper was published here at WUWT and we took heavy criticism for it. We’ve embraced those criticisms and made the paper even better. We learned back then that adjustments account for about half of the surface temperature trend.

We are in the process of bringing our newest findings to publication. Some people might complain we have taken too long. I say we have one chance to get it right, so we’ve been taking extra care to deal effectively with all the criticisms from then, as well as criticisms from within our own team. Of course, if I had funding like some people get, we could hire people to help move it along faster instead of relying on free time where we can find it.

The way forward:

It is within our grasp to locate and collate stations in the USA and around the world that have as long and uninterrupted a record, and as much freedom from bias, as possible, and to make those a new climate data subset. I’d propose calling it the Un-Biased Global Historical Climate Network, or UBGHCN. That may or may not be a good name, but you get the idea.

We’ve found at least that many good stations in the USA that meet the criteria of being reliable and without need for major adjustments of any kind, including the time-of-observation change (TOB). Some do require the cooling-bias correction for MMTS conversion, but that is well known and a static value that doesn’t change with time. Chances are, a similar set of 50 stations could be located around the world. The challenge is metadata, some of which is not publicly available, but with crowd-sourcing such a project might be doable, and then we could fulfill Gavin Schmidt’s and Eric Steig’s vision of a much simpler set of climate stations.
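As a concrete illustration of the one correction mentioned above, here is a minimal sketch of removing a known step change at an MMTS conversion date (the offset value and dates are made up for the example; they are not the published conversion corrections):

```python
import numpy as np

def correct_mmts_step(temps, years, conversion_year, offset_c):
    """Add a constant offset to all readings from the conversion year onward,
    counteracting the one-time cooling step introduced when a station's CRS
    shelter was replaced by an MMTS sensor. `offset_c` is a hypothetical
    value used only for illustration."""
    temps = np.asarray(temps, dtype=float).copy()
    temps[np.asarray(years) >= conversion_year] += offset_c
    return temps

years = np.arange(1979, 2009)
# Toy series: a small trend plus a -0.4 C step at the 1995 conversion.
raw = 14.0 + 0.01 * (years - 1979) + np.where(years >= 1995, -0.4, 0.0)

adjusted = correct_mmts_step(raw, years, conversion_year=1995, offset_c=0.4)
print(np.round(adjusted - raw, 2))
# Because the offset is a single constant applied from the conversion date
# forward, it removes the step without reshaping the trend on either side.
```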

Wouldn’t it be great to have a simpler, known-reliable set of stations rather than this mishmash that goes through the statistical blender every month? NOAA could take the lead on this, but chances are they won’t. I believe it is possible to do this independently of them, and it is a place where climate skeptics can make a powerful contribution, one far more productive than the arguments over adjustments and data dropout.

 


274 Comments
A C Osborn
June 28, 2014 4:43 am

evanmjones says: June 27, 2014 at 6:23 pm
MMTS was adjudged to be far more accurate than CRS.
By whom and why?
Where is the equivalent of a National Physical Laboratory standard for temperature? We have it for most kinds of measurement, but what is it for temperature?

Latitude
June 28, 2014 5:56 am

Evan, …..it’s still not clear
Since CRS has a faster trend than MMTS…..aren’t you recreating that faster trend by adjusting MMTS up?…. “Even after this adjustment, MMTS stations have lower trends.”
I understand the step change…….I don’t understand continuing a steeper trend, when new equipment shows less trend?

Samuel C Cogar
June 28, 2014 6:00 am

evanmjones says:
June 27, 2014 at 2:44 pm
My bottom-line “best guess” is that the “true signal” is closer to 0.07C per decade, exaggerated by poor microsite. That translates to raw CO2 warming with perhaps a bit of negative feedback in play.
——————
I am curious to know what method you used for doing said …. “translates to CO2 warming”, ….. as well as your method for calculating …. “a bit of negative feedback” …. to achieve said result of “ 0.07C per decade”.

Carrick
June 28, 2014 9:23 am

[Sorry if this is a repeat.. my last comment apparently was eaten by the Bit Monster.]
Alexej Buergin says:

Supposing that you intended to say meteorological my answer would be: I learned about WX at an US university, we used the book by Ahrens, and it says:”Most scientists use a temperature scale called the absolute or Kelvin scale…” Seems unambiguous to me. (But that was 20 years ago and it was the fourth edition, now the ninth is current.)

To make it clear, there’s nothing wrong with saying “absolute temperature” when you mean “absolute thermodynamic temperature scale”. It’s a rubric, and it’s a well understood one. If you were talking thermodynamic quantities, everybody would understand what you meant.
Were I to write an introductory book for meteorologists, I probably would not make the distinction between “absolute” and “relative” as they are used in measurement science (metrology) and the “absolute” as it’s used in thermodynamics.
The only error is in conflating the use of “absolute temperature” as it is used in the thermodynamics community, with its more general use in the metrological sense. There is no error in calling Celsius an “absolute scale” in the metrological sense, as long as you didn’t mean to imply it was a thermodynamic temperature scale.

(By the way: Do you personally know somebody who uses Rankine, or can you find some recent writing which uses Rankine? In non-US countries C and K are as normal as the fact that football is played with the feet.)

I don’t know anybody personally. It’s my impression that it used to be widely used by communities that mostly had Fahrenheit-scale mercury thermometers (e.g., heating and cooling engineers), and not so much since people have gone to thermoelectric devices.

Brandon C
June 28, 2014 9:25 am

Nick says: “No, it’s the do-nothing option that is poor science. The fact is that assumptions are inevitably made. You have a series of operator observations of markers at stated times, and you have to decide what they mean. Assuming they are maxima on the day of observation is an assumption like any other. And it incorporates a bias.
The unscientific thing is just to say, as you do, that it’s just too hard to measure, so let’s just assume it is zero. The scientific thing is to do your best to work out what it is and allow for it. Sure, observation times might not have been strictly observed. Maybe minima are affected differently to maxima (that can be included). You won’t get a perfect estimate. But zero is a choice. And a very bad one.”
Can you really say that with a straight face?
– For one, I never once said let’s assume it is zero; you are making that up or have appalling reading comprehension. I said let’s leave the uncertainty caused by this in place and show the error bars to reflect this. This is not the same thing, and anyone older than 2 knows that. Your comment is either stupid or dishonest, you can pick.
– You state that there are assumptions made in the original data; that is not correct. We have identified a possible source of error in the original data and by doing so create an uncertainty bar to reflect that the actual error is unknown. It is not “incorporating bias”, it is specifically avoiding adding steps that can add bias and leaving the uncertainties known, surely you must know this. That is not an assumption, it is proper data handling. The assumption is in taking the unscientific position that you can fully account for and quantify the error without enough data to do it, and then pretend that the answer is somehow accurate. It is nothing but a reflection of your unbacked assumption and is scientifically useless because you no longer even know the exact extent of the possible errors. I don’t support reducing the error bars on the original data either, because I act like a scientist and not a priest that knows the answer from the start.
– Any scientist was taught that you only adjust original data when you have a “KNOWN” error that can be “ACCURATELY QUANTIFIED” and “INDEPENDENTLY VERIFIED”. If you cannot satisfy those 3 requirements, then you leave the original data alone and create uncertainty bars. That is the actual scientific way: it is better to leave the original data alone with a range of uncertainty than to contaminate it with assumptions and guesswork. You can do the estimates on TOBS errors to better inform your error bars, but only an idiot changes the base data using that method because it creates additional uncertainty. The argument that “we have to do something” is not a scientific one, but an argument of emotion and activism.
– It is even worse scientifically when the adjustments are being used in a key dataset that is being used by thousands of other researchers as the base for other areas of research. You leave the base data the hell alone so all the referring research has a constant baseline. You don’t keep changing it on your whims creating the situation that past work cannot be accurately compared to newer work because a supposed solid data source keeps changing. You can no longer use past study data because adjustments means that new papers are dealing with entirely different trends and local temps than in past versions. If climate science was acting scientifically, they would have informed any referring papers that their conclusions may be wrong because the past data has been changed by an order of magnitude.
This issue is so unbelievably unscientific it makes my skin crawl. And your defense and reasoning makes one wonder if you are even trying to act scientifically or if you are just circling the wagons in defense of poor science.

June 28, 2014 9:53 am

Brandon C, your approach — adhering to the strict methodological integrity of science — is exactly what is missing throughout consensus climatology. These people have rung in sloppy methods that permit them specious but convenient conclusions.
And it’s not just the air temperature record. The same sloppy criteria are applied in climate modeling and in so-called proxy paleo-temperature reconstructions. The entire field has descended into pseudo-science. I’ll have a paper in E&E about this, perhaps later this year. It’s titled, Negligence, Non-science, and Consensus Climatology.

June 28, 2014 10:38 am

Brandon C says:
June 28, 2014 at 9:25 am
… This issue is so unbelievably unscientific it makes my skin crawl. And your defense and reasoning makes one wonder if you are even trying to act scientifically or if you are just circling the wagons in defense of poor science.

I think the only answer is that he and many others are circling the wagons in defense of poor science. People have been complaining about the data sets and the “adjustments” that always cool the past and warm the present (It is Worse than we Thought!!!!) to advance the alarmist agenda. I predict this will all blow over and the “skeptics” will go back to ignoring the data fr***. (can’t use the f-word here they say) Heck, we can’t even get the story out that these people claiming two decimal places of accuracy is a G-D joke.
What if there is no “pause”, but in fact there has been a cooling over the last couple of decades? How would the public ever know that if that were true?

A C Osborn
June 28, 2014 10:48 am

Brandon C says: June 28, 2014 at 9:25 am
Well said Sir. I worked in a Metrology Lab in my youth, everything was referenced back to the National Physical Laboratory Standards and woe betide anybody who took short cuts or did not adhere to procedure.
I then worked in Quality Control and Industry introduced ISO 2000/9000/9001 to control and document everything. Climate Science badly needs some ISO 9000 type Audits.
When Climategate first burst on the scene, Phil Jones’ work desk and procedures were exposed for all to see; I have seen local garages run better than that. Considering who employs them, how important the work is, and how much they are paid, it is a global disgrace.
The mere fact that they would not release their data, making all sorts of excuses, says it all.

A C Osborn
June 28, 2014 10:51 am

Mark Stoval (@MarkStoval) says: June 28, 2014 at 10:38 am
“What if there is no “pause”, but in fact there has been a cooling over the last couple of decades? How would the public ever know that if that were true?”
Well, if it carries on as expected, they won’t need any “GLOBULL” temperatures to tell them; they will be able to feel it and measure it for themselves.

mark
June 28, 2014 11:25 am

Mark Stoval (@MarkStoval) says: June 28, 2014 at 10:38 am
” I predict this will all blow over and the “skeptics” will go back to ignoring the data…”
This cannot be allowed to die. It is a “smoking gun” that even the least scientifically savvy person would understand as fraudulent. Who wouldn’t understand that it’s not OK to go back in history and continually change data? We can all start by writing our representatives in Congress and the House to bring attention to this bogus ‘practice.’ At minimum we should demand that the original data be restored.

richardscourtney
June 28, 2014 11:43 am

mark:
re your post at June 28, 2014 at 11:25 am.
It saddens me, but I agree with Brandon C. The matter is not news and has been raised in many places including WUWT for years.
Please read this
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
Richard

mark
June 28, 2014 12:21 pm

richardscourtney says: June 28, 2014 at 11:43 am
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
How many politicians or people on their staffs would understand anything in that article? Wrong audience. It has to be more simplified and straightforward to get their attention. Something to the effect of “here’s obvious proof that they are cooking the books” (when the same ‘data’ point continually changes). It only takes one representative to start a campaign bringing corruption of the opposition to light…..something they love to do. Yes, it’s been going on for a while, but the results are becoming cumulative and the acts so egregious that I doubt even the average person would fail to take notice. Besides, isn’t it obvious by now that it’s not the scientists but the politicians, media, and people that are driving the discourse? Take it to their level and this is an ideal opening to make that happen. I hope I’m right. The alternative is to wait for nature to prove them wrong. Or not.

Eliza
June 28, 2014 12:30 pm

The point here is that in fact both WUWT and Goddard have been pointing this out for years. Only now has it reached the MSM, thanks mainly to Goddard’s uncompromising attitude. You cannot be “soft” with these AGW people. They have an agenda. Skeptics do not.

Eliza
June 28, 2014 12:32 pm

In the end, of course, it’s the actual climate/weather that is winning the debate, i.e. no change. What a laugh! (and a waste of time)

richardscourtney
June 28, 2014 2:15 pm

Eliza:
I write to support your post at June 28, 2014 at 12:30 pm which says

The point here is that in fact both WUWT and Goddard have been pointing this out for years. Only now has it reached the MSM, thanks mainly to Goddard’s uncompromising attitude. You cannot be “soft” with these AGW people. They have an agenda. Skeptics do not.

Yes. I draw your attention to my post at June 28, 2014 at 11:43 am and its link, which discusses a ‘climategate’ email from 2003 showing my interaction with the compilers of global temperature data, complaining about the data changes.
If the matter has – at last – reached the MSM then it needs as much publicity as possible.
Richard

Eugene WR Gallun
June 28, 2014 5:10 pm

This might be it.
PROFESSOR PHIL JONES
The English Prometheus
To tell the tale as it began —
An ego yearned
Ambition burned
Inside a quiet little man
No one had heard of Phillip Jones
Obscure to fame
(And likewise blame)
The creep of time upon his bones
Men self-deceive when fame is sought
Their fingers fold
Their ego told
That fire is what their fist has caught!
Men want to feel, not understand!
Jones made it plain
That Hell would reign
In England’s green and pleasant land!
Believe! Believe! In burning heat!
In mental fight
To prove I’m right
I’ve raised some temps and some delete!
And with his arrows of desire
He shot the backs
Of any hacks
That asked the question — where’s the fire?
East Anglia supports him still
The truth denied
Whitewash applied
Within that dark Satanic Mill
The evil that this wimp began
Will go around
And come around
Prometheus soon wicker man
Eugene WR Gallun
Note: Though some may know it, most probably don’t. In William Blake’s JERUSALEM the line — “Among these dark Satanic Mills” — was not referring to factories but to churches. Mills were places where things were ground down and Blake was saying that England’s state run churches (and Catholic churches — well, really all churches) which demanded conformity were mills grinding down both mind and spirit. So this poem is sort of an oblique comparison between the ideals of William Blake and Phil Jones (if such a man as Jones can be said to have ideals).
When I start using Blake’s words the poem has to get a little poetasterish since I’m no fool. Blake’s words in JERUSALEM are a thousand times better than anything I could ever write — some of the greatest words in the English language — so I preemptively surrender so the reader doesn’t even think about making such a comparison.
Eugene WR Gallun

basicstats
June 29, 2014 1:47 am

I must correct an earlier comment about applying statistical procedures to anomalies rather than actual temperatures. The example of kriging given was wrong – spatial methods apply to anomalies perfectly well. It’s time series models, including regressions, which need deconstructing when fitted to anomalies instead of actual temperatures.

Eugene WR Gallun
June 29, 2014 7:52 am

Sigh! Double sigh! After reviewing my poem I realized I needed to add a connective stanza to make the train of thought clearer. I can’t take this writing of poetry — it is slowly killing me.
PROFESSOR PHIL JONES
The English Prometheus
To tell the tale as it began —
An ego yearned
Ambition burned
Inside a quiet little man
No one had heard of Phillip Jones
Obscure to fame
(And likewise blame)
The creep of time upon his bones
Men self-deceive when fame is sought
Their fingers fold
Their ego told
That fire is what their fist has caught!
Because they’d rather rule than serve
They, with their heat
The light defeat
And damning ignorance preserve
Such want to feel not understand!
Jones made it plain
That Hell must reign
In England’s green and pleasant land!
Believe! Believe! In burning heat!
In mental fight
To prove I’m right
I’ve raised some temps and some delete!
And with his arrows of desire
He shot the backs
Of any hacks
That asked the question — where’s the fire?
East Anglia supports him still
The truth denied
Whitewash applied
Within that dark Satanic Mill
The evil that this wimp began
Will go around
And come around
Prometheus soon wicker man
Eugene WR Gallun

June 29, 2014 8:03 am

Lance Wallace wrote:

Thanks for putting my Dropbox graph of your Fig. “D” historical data of the US 48-state temperature anomalies into more permanent archive. Here is the full Excel file with the graph.
https://dl.dropboxusercontent.com/u/75831381/NASA%20Fig%20D%201999-2014.xlsx
I used the same data (cut off at 1998 so all the datasets could be compared from the Hansen 1999 up to the present) to calculate the change in the linear rate of increase. The rate was 0.32 degrees C per century according to Hansen (1999) and is now 0.43 per century, about a 35% increase, due entirely to adjustments to the historical data. (See the graph in the third tab of this second Excel file.)
https://dl.dropboxusercontent.com/u/75831381/NASA%20Fig%20D%201880-1998.xlsx
You are welcome to archive these files if you find them useful.

Thank you, Lance!
The two archived locations are:
http://www.webcitation.org/6Qh9Xt5bt
http://www.webcitation.org/6Qh9c1FhC
BTW, for archiving pages, I keep bookmarks and “bookmarklets” in my Chrome bookmark toolbar with:
Citebite.com, and a “Cite this” bookmarklet
Archive.org, and an “Archive this” bookmarklet
Webcitation.org, and a “WebCite this” bookmarklet
Archive.is, and an “Archive.is this” bookmarklet
They don’t all work on all pages, but they are all useful for preserving things that might otherwise someday be lost, when a web page changes or goes away.

Evan Jones
Editor
June 29, 2014 12:32 pm

How do you know?
Why isn’t it the MMTS that is wrong?

I don’t own that information directly. But NOAA considers MMTS to be the more accurate, the reason given being that the thin plastic gill structure of the MMTS shielding does not absorb and retain heat, unlike the CRS, which is housed in a large wooden box; the gills are also better placed to provide circulation.
This is the mechanism: In a sense, the CRS box itself is acting as a heat sink. And, like (other) bad microsite effects, this will tend to increase trend (be it either cooling or warming).
This makes it necessary to adjust MMTS station trends upward as conversion occurs, to counteract the step change. It’s our only adjustment (and, incidentally, it works against our hypothesis).

Evan Jones
Editor
June 29, 2014 12:45 pm

I am curious to know what method you used for doing said …. “translates to CO2 warming”, ….. as well as your method for calculating …. “a bit of negative feedback” …. to achieve said result of “ 0.07C per decade”.
Call it a very simple, top down model based on the simple presumption that the ultimate Arrhenius results are correct. (At least they can be replicated in the lab.) According to that, we should see a modest raw forcing of 1.1C per CO2 doubling.
Take it since 1950, when CO2 emissions became (rather abruptly) significant. Negative and positive PDO cancel each other out over that period, so we can more or less discount that effect.
We are left with 0.7C warming (“adjusted”) over roughly 65 years, or ~ 0.107C warming per decade, after a 30%+ increase in atmospheric CO2, which is right in line with Arrhenius.
Yet the land temperature trend record itself appears to be exaggerated by poor microsite, which would reduce that number by a bit. So we are perhaps seeing somewhat less warming than even the Arrhenius experiments indicate, implying some form of net negative feedback in play.
At least this accounts for the bottom line (unlike the models).
There are, of course, some other issues in play (such as the diminishing effect of aerosols, soot-on-ice, natural recovery from the LIA, unknown solar, etc.), but it is hard to quantify them.
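For readers following the arithmetic in the comment above, a back-of-envelope restatement (my reading of it, assuming the 1.1C-per-doubling figure scales logarithmically with CO2 concentration):

```latex
\Delta T_{\text{Arrhenius}}
  \approx 1.1\,^{\circ}\mathrm{C} \times \log_{2}\!\left(\frac{C}{C_{0}}\right)
  \approx 1.1 \times \log_{2}(1.3)
  \approx 0.42\,^{\circ}\mathrm{C} \text{ since 1950}
  \approx 0.07\,^{\circ}\mathrm{C} \text{ per decade}
```

That is the ~0.07C-per-decade figure mentioned earlier in the thread; the gap between that and the ~0.107C per decade in the adjusted record is what the comment attributes to exaggeration from poor microsite, with any remaining shortfall implying some net negative feedback.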

Evan Jones
Editor
June 29, 2014 12:58 pm

But again, what is rural? It can be influenced by the crop and its harvest, the forests and pine beetles, and some other changes in the land.
Cropland shows distinctly higher trends. ~20% of land area is cropland, with 30% of stations being located in cropland. So a slight warming bias is implied here.
But you need to consider that it is a “Class 1” world. Over 80% of land mass is uninhabited. Only 2% of CONUS is considered urban, but 9% of USHCN stations are urban.
There should be representative, proportional mesosite coverage for USHCN. But, for whatever reason, there is not.

Evan Jones
Editor
June 29, 2014 1:48 pm

Since CRS has a faster trend than MMTS…..aren’t you recreating that faster trend by adjusting MMTS up?…. “Even after this adjustment, MMTS stations have lower trends.”
Mmmm. In a sense, yes. All we do is remove the step change. We do not attempt to make a “CRS adjustment”.
We try not to overcomplicate. Instead, we bin the data to show (for 1979–2008) stations that were MMTS for most of the period, stations that were CRS for most of the period, and “pure” CRS stations that were never converted. So you can look at the entire dataset and then, afterward, compare MMTS warming with CRS warming. You want MMTS with cropland and urban removed? We got!
And we’ll provide my spreadsheet (with all the tags for equipment, mesosite, altitude, etc.), and all you have to do is filter the station list to obtain whatever subset sample set you like.
But we only make the one adjustment for equipment conversion. When it comes to data adjustment, if you shake it more than three times, you’re playin’ with it.
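To give an idea of what that filtering might look like once the spreadsheet is available, here is a minimal sketch (the file name and column names are hypothetical placeholders, not the spreadsheet’s actual tags):

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
stations = pd.read_csv("surfacestations_ratings.csv")

# e.g., well-sited (Class 1/2) MMTS stations outside cropland and urban areas
subset = stations[
    (stations["site_class"] <= 2)
    & (stations["equipment"] == "MMTS")
    & (~stations["mesosite"].isin(["cropland", "urban"]))
]
print(len(subset), "stations in this sample set")
```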
