The National Climate Assessment report denies that siting and adjustments to the national temperature record have anything to do with increasing temperature trends. Note the newest hockey stick below.
Source: http://nca2014.globalchange.gov/system/files_force/downloads/low/NCA3_Climate_Change_Impacts_in_the_United%20States_LowRes.pdf?download=1
Yet as this simple comparison between raw and adjusted USHCN data makes clear…

…adjustments to the temperature record are increasing – dramatically. The present is getting warmer, the past is getting cooler, and it has nothing to do with real temperature data – only adjustments to temperature data. The climate reality our government is living in is little more than a self-serving construct.
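The comparison being described can be blocked out in a few lines. The station series below are synthetic stand-ins (real USHCN raw and final files would be parsed from NCDC's archive); the only point is that the trend of final-minus-raw is exactly the warming the adjustments themselves contribute.

```python
# Sketch of the raw-vs-adjusted comparison. Data are invented placeholders;
# the final series is deliberately given a growing adjustment over time.

def ols_slope(years, temps):
    """Ordinary least-squares slope in degrees per year."""
    n = len(years)
    mx, my = sum(years) / n, sum(temps) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    return num / sum((x - mx) ** 2 for x in years)

years = list(range(1979, 2014))
raw = [0.01 * (y - 1979) for y in years]                       # ~0.1 C/decade raw
final = [r + 0.015 * (y - 1979) for y, r in zip(years, raw)]   # adjustments grow each year

# The adjustment signal is simply final minus raw; if it trends upward,
# the adjustments are adding warming to the record.
adjustment = [f - r for f, r in zip(final, raw)]
print(round(ols_slope(years, adjustment) * 10, 3))  # C/decade contributed by adjustments
```

With these made-up numbers the adjustments alone contribute 0.15 °C/decade, which is the kind of quantity the final-minus-raw graph is displaying.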
Our findings show that trend is indeed affected, not only by siting, but also by adjustments:
The conclusions from the graph above (from the Watts et al. 2012 draft) still hold true today, though the numbers have changed a bit since we took all the previous criticisms to heart and worked through them. It has been a long, detailed rework, but now that the NCA has made this statement, it’s go time. (Note to Mosher, Zeke, and Stokes – please make your most outrageous comments below so we can point to them later and note them with some satisfaction.)


Stepped out for several hours, no response from Zeke or Mosh so I will ask again:
If it is Mosher’s position that station dropout does not affect the overall answer, how can he justify the past becoming colder by increasing the number of stations? If the latter were true, then station dropout would have to warm the present.
Pick one guys. Which is it?
Mosh is now just a paid shill for BEST.
I know you’re not wild about Mosh. But you have to understand that the questions he is asking are just the sort I would ask myself. It is part and parcel of the scientific method that our data, means, and methods be fully and easily available, and that others must be able to replicate the results.
When we publish, all of that material will be made fully available.
Dear Jimmi the Dalek,
Re: “(the 2014 point looks spurious),” — that is correct. THAT, if I’m not mistaken, goes to the main point of this post. We must, as Gunga Din said yesterday: “Nip it in the bud.”
(http://wattsupwiththat.com/2014/05/05/how-not-to-measure-temperature-part-95-new-temperature-record-of-102-in-wichita-but-look-where-they-measure-it/#comment-1629465)
“NOAA final adjusted data says: + .309°C/decade”
(above at “Comparison…”)
“USHCN Final Minus Raw Temperature May 5, 2014” — shows about a 1 degree jump for 2014 (above graph),
which certainly does NOT agree with the satellite temperature record:
UAH (Satellite) Temperature Anomalies:
YEAR  MON  NH (Northern Hemisphere)
2013   1   +0.517
2013   2   +0.372
2013   3   +0.333
2013   4   +0.128
2013   5   +0.180
2013   6   +0.335
2013   7   +0.134
2013   8   +0.111
2013   9   +0.339
2013  10   +0.331
2013  11   +0.160
2013  12   +0.272
2014   1   +0.387
2014   2   +0.320
2014   3   +0.337
(Source: http://wattsupwiththat.com/2014/04/07/uah-global-temperature-update-for-march-2014-status-quo/)
Finally,
A Little Perspective to Keep the Facts in View:
“[Per t]he monthly satellite lower-troposphere temperature anomaly from Remote Sensing Systems, Inc., … there has now been no global warming – at all – for 17 years 5 months.”
(Source: http://wattsupwiththat.com/2014/02/06/satellites-show-no-global-warming-for-17-years-5-months/ — emphasis mine)
And the Forest….:
(emphasis mine)
(Source: http://wattsupwiththat.com/2013/11/17/climate-and-human-civilization-over-the-last-18000-years/)
Zeke and others,
I think you should look carefully at selecting stations systematically by their quality rather than by their trends. The results of the two methods simply cannot be significantly different if the bulk-processing method and the station-selection method both work.
For Anthony’s result to be inaccurate and the bulk math method to be accurate, you simply need to identify how choosing the best possible stations is somehow biasing the record downward. Nick Stokes wrote a compelling post on the reliability of using only 60 stations for global temp. If that is the case and Anthony chooses stations based on their quality, there should be no difference whatsoever. That said, if he finds ANY difference at all in trend between station quality levels, I would think that BEST would be highly interested in the result rather than dismissive.
Excepting some of the historical, unexplained early use of privately disclosed data, I fail to see why this is a heated discussion. It should be resolvable by looking closely at the station-sorting criteria to determine whether an error was made or whether there is merit to the findings.
Mr. X (at 3:08pm) — I think you said it best (lol):
“We will control all that you see and hear….. sit quietly… .”
“The Outer Limits” — Intro.
yet the surface temperature record is in reasonable (i.e. better than 1 degree) agreement with the satellite record over that period. How can that be the case? Has the satellite record been adjusted too?
Satellite readings are, in one sense, a proxy. They are based on microwave reflections; clouds and ice can be an issue. They are not adjusted for anything else, I think, and we are directly assured that UAH does not use the surface record in any way. But, more to the point, satellites measure Lower Troposphere temperatures (+ the other atmospheric layers), not surface.
Dr. Christy (a co-author in this) had previously calculated that LT trends must, necessarily, be 20% higher than surface trends (1.2 amplification), and up to 40% higher (1.4 amplification), heading towards the equator. He was perplexed that this did not show up in the record. Our current results split the uprights at an amplification factor of 1.25.
So Dr. Christy’s theory is vindicated, as well as Anthony’s grand vision.
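The arithmetic of the amplification argument is simple enough to sketch. The LT trend below is a made-up number for illustration; only the 1.25 amplification factor comes from the discussion above.

```python
# Back-of-the-envelope check of lower-troposphere amplification: if LT trends
# run ~1.25x the surface trend, a measured satellite trend implies a smaller
# underlying surface trend.
lt_trend = 0.155                  # hypothetical satellite LT trend, C/decade
amplification = 1.25              # mid-range of Christy's 1.2-1.4 factor
implied_surface = lt_trend / amplification
print(round(implied_surface, 3))  # the surface should warm more slowly than the LT
```

If the published surface trend instead matches or exceeds the LT trend, then either the surface record or the amplification theory has a problem, which is precisely the puzzle Dr. Christy noted.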
Actually, the biases aren’t mostly in the same direction, are they? Seems they changed directions sometime after the year 2000 from making the raw data cooler to making it warmer… why is this?
Because that was the one year they got it right?
Satellite readings are, in one sense, a proxy. They are based on microwave reflections; clouds and ice can be an issue. They are not adjusted for anything else, I think, and we are directly assured that UAH does not use the surface record in any way.
===
Evan, I’m curious about this…
…initially, how were they tuned? And what, if anything, are their readings compared to in order to account for drift, etc.?
I know they are trying to “divine” a temp… but like sea levels, they would have to have some way to check for accuracy, and they would have had to have some way to initially get them on track.
Because that was the one year they got it right?
===
No, they are just saying that 2001 and 2002 were the only two years that anyone could read a thermometer right…….. 🙂
Even the raw data produced by the NCDC cannot be trusted. Someday, the auditors, forensic accountants and justice lawyers will be going in and I hope people will be held to account.
Meteorologists in 1870, and 1880 and 1910 and 1930 and 1950 and 1990 and 2013 were too dumb to understand that the temperature should be recorded at the same time of day or that a simple minimum and maximum would suffice. They NEVER learned how to properly record the temperature. All 1 million of them through history.
That is the justification of continuing to adjust the historical temperature record every single month. In fact, even last month’s temperature recorders were just as dumb as those in 1870 who received the directive from the Weather Bureau on the time of day temperature recording. They never got it right and hence even last month’s records require an adjustment.
It cannot be justified by “another” paper (among 28 done before) showing how records were screwed up, even last month.
Even if they designed the perfect climate computer game….it would never be right…and they wouldn’t even know it
It wouldn’t even be fun.
Some of these so-called mathematicians simply have not rolled in the mud with the numbers the way some of us have. They seem to have forgotten the top-down world where things actually have to add up.
If you want to know if the dice are loaded, don’t ask one of them; they can’t give you an answer you can use — give me a good old common-sense hard copy wargamer’s quantification assessment every time.
Why adjust the data at all?
If you are really trying to find “change”, homogenization is the exact opposite
of what you should be doing!
If you are honestly attempting to find “real” trends, then it is the relative temperature that is required from the local raw data, and not the absolute temperatures!
Each record should be examined in its own context.
Every site move, instrument or methodology change creates a discontinuity that should be treated as a new and unique dataset, that should be examined independently.
It amazes me that climate scientists can make outrageous claims for the veracity of proxy records and then in the same breath completely discount recorded weather data. What they should be seeing is that local raw data is the best ‘proxy’ record the world has ever had, and they should treat it with according reverence!
Just one well-sited station with consistent instrumentation and record-keeping methodologies will tell you the truth about so-called “Climate Change”. It will show you whether it is global, how it affects that zone, whether there is a ‘change’ in any direction, for how long, of what magnitude, and more.
If a site begins as a rural paddock and ends as a car park in the concrete jungle, all other things being equal, the data alone is useful because it can tell us a lot about the history of that process!
It is the data from the trees and not the forest that matter and as one great Mann demonstrated, even a single tree can be very useful! 😉
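The “every discontinuity starts a new dataset” idea above can be sketched in a few lines. The station record and break year below are hypothetical; the point is that splicing across a known move manufactures a trend that no homogeneous segment actually contains.

```python
# Split a station record at known break years (moves, instrument changes)
# and report a trend per homogeneous segment instead of one spliced trend.

def ols_slope(years, temps):
    """Ordinary least-squares slope in degrees per year."""
    n = len(years)
    mx, my = sum(years) / n, sum(temps) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    return num / sum((x - mx) ** 2 for x in years)

def segment_trends(record, break_years):
    """record: sorted (year, temp) pairs; break_years: first year after each move."""
    segments, current, cuts = [], [], sorted(break_years)
    for year, temp in record:
        if cuts and year >= cuts[0]:
            segments.append(current)
            current, cuts = [], cuts[1:]
        current.append((year, temp))
    segments.append(current)
    return [ols_slope([y for y, _ in s], [t for _, t in s]) for s in segments]

# Flat station record with a +0.5 step when it moved in 1990:
record = [(y, 10.0) for y in range(1980, 1990)] + [(y, 10.5) for y in range(1990, 2000)]
whole = ols_slope([y for y, _ in record], [t for _, t in record])
per_segment = segment_trends(record, [1990])
print(whole, per_segment)  # spliced record shows warming; each segment is flat
```

Here the spliced record shows roughly +0.38 °C/decade of apparent warming, while each homogeneous segment is dead flat.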
jimmi_the_dalek says:
May 6, 2014 at 4:18 pm
That graph puzzles me. It shows an ‘adjustment’ of nearly 1 degree from 1979 to 2013 (the 2014 point looks spurious), yet the surface temperature record is in reasonable (i.e. better than 1 degree) agreement with the satellite record over that period. How can that be the case? Has the satellite record been adjusted too?
—————————————————————-
The agreement between RSS and the surface record was reasonable. Lately the divergence is increasing rapidly, at a rate greater than the warming rate itself.
http://stevengoddard.wordpress.com/2013/05/10/giss-rapidly-diverging-from-rss/
Even the raw data produced by the NCDC cannot be trusted.
I know. I brought the case to Mac a couple of years ago over some “inhomogeneities” regarding USHCN1 vs. USHCN2. (I’ll omit the various four-letter words.) He did a water test and found some discrepancies. We never followed up on that, though.
Further supporting the likely error of the adjustments is the fact that of all continuously active USHCN stations, the vast majority of the record highs occurred in the ’30s and ’40s. (There are no adjustments on a record high; it just is.) If anything, UHI in conjunction with CAGW should have smashed those records.
David, I know you’ve seen this already…but you’re right….
http://stevengoddard.files.wordpress.com/2010/10/1998changesannotated.gif?w=500&h=355
Hey Steve,
If you’re not a troll, you have a lot of reading to do. There are a lot of posts on this site that go into detail about every facet of the world’s climate – and weather, for that matter. You can start by reading about the USHCN here: http://www.ncdc.noaa.gov/oa/climate/research/ushcn/
v/r,
David Riser
Cynical Scientist: You are on the right track. Think Tmax and Tmin.
In general, the NCDC’s approach to homogenization assumes that the present records are correct
Ah there’s the rub. And if it turns out it ain’t correct, then homogenization not only adjusts in the wrong direction, but it makes whatever correct signal you may or may not have vanish, leaving not a trace.
In my old profession, we called that “crappy design and development”. In my current profession, too, come to think of it.
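The anchoring concern above can be shown in miniature. The numbers are invented, and the "shift the past to match the present" rule is a deliberate simplification, not NCDC's actual pairwise algorithm; it only illustrates the direction errors propagate.

```python
# If homogenization treats the present segment as correct, a detected break
# is removed by shifting the PAST, so any error in the present propagates
# backward through the whole record.

raw = [10.0, 10.1, 10.2, 10.7, 10.8, 10.9]  # 0.1/yr trend with a jump at index 3
expected_next = raw[2] + 0.1                 # 10.3 if the trend had continued
break_size = raw[3] - expected_next          # +0.4 apparent discontinuity

# Anchor to the present: earlier values move down by the full break size.
adjusted = [t - break_size for t in raw[:3]] + raw[3:]
print(adjusted)  # the past is now cooler; the present is untouched
```

If the break were actually an error in the recent data, this rule adjusts in exactly the wrong direction, and the true signal in the early record is erased without a trace.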
evanmjones says (May 6, 2014 at 6:25 pm): “Dr. Christy (a co-author in this) had previously calculated that LT trends must, necessarily, be 20% higher than surface trends (1.2 amplification), and up to 40% higher (1.4 amplification), heading towards the equator. He was perplexed that this did not show up in the record.”
Wasn’t this mentioned in an article/comment at WUWT? I remember reading about this before, i.e. that the “official” global surface temp trend pretty well matches the LT trend, and it shouldn’t, so there is something wrong with the “official” surface temps or with the theory of troposphere temp trend amplification. I’ve looked, but can’t find it.
…initially, how were they [satellites] tuned? And what, if anything, are their readings compared to in order to account for drift, etc.?
They weren’t at first, for drift. That was later corrected.
I know they are trying to “divine” a temp… but like sea levels, they would have to have some way to check for accuracy, and they would have had to have some way to initially get them on track.
That I do not know. All I do know is that they deny using surface data to adjust, and I believe them.
evanmjones says (May 6, 2014 at 2:17 pm): “I am a wargame designer…”
What a coincidence! I’ve played wargames! 🙂
May I ask which titles you’ve worked on?
Meteorologists in 1870, and 1880 and 1910 and 1930 and 1950 and 1990 and 2013 were too dumb to understand that the temperature should be recorded at the same time of day or that a simple minimum and maximum would suffice.
IIRC, TOBS-bias was not discovered until the 1950s. It’s a nasty error, and I did not really understand it until I blocked out an example.
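The bias can be blocked out in a few lines of code. The diurnal cycle and temperatures below are synthetic, but they show the mechanism: a max-min thermometer reset in the afternoon can count one hot afternoon as the daily max for two days in a row.

```python
import math

def diurnal(peak):
    """24 hourly temps for one day, peaking at hour 15 (3 pm). Synthetic."""
    return [peak - 10 + 10 * math.cos((h - 15) / 24.0 * 2 * math.pi)
            for h in range(24)]

# Three days; the middle one is a hot spike.
hourly = diurnal(20) + diurnal(30) + diurnal(20)

def observed_maxes(hourly, obs_hour):
    """Daily maxes as a max-min thermometer reset at obs_hour would record them."""
    resets = [obs_hour + 24 * d for d in range(1, len(hourly) // 24)]
    maxes, start = [], obs_hour
    for r in resets:
        maxes.append(max(hourly[start:r]))
        start = r
    return maxes

pm = observed_maxes(hourly, 17)  # afternoon observer: hot spike counted twice
am = observed_maxes(hourly, 7)   # morning observer
print(round(sum(pm) / len(pm), 2), round(sum(am) / len(am), 2))
```

The afternoon observer's mean max comes out several degrees warmer than the morning observer's from identical weather. So when a station's observation time shifts from afternoon to morning, the recorded series cools artificially, and that is what the TOBS adjustment attempts to undo.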
The new CRN network is pristine. I don’t think I spotted a Class 3, and many of them are so Class 1 it hurts. Makes you think America is a big place, all of a sudden. They have triple-redundant PRT sensors (so much for homogenization) and are 24-hour records (so much for TOBS).
Unfortunately we will have to wait a couple decades until that data is useful.
Should Anthony and StevenGoddard be the only sites discussing the increasing divergence between the satellites and GISS? Is this addressed anywhere in the “approved” literature?
Is there a plot of the USHCN raw temp data over the time span of the corrections applied – 1880 – present?
Yes, raw USHCN data is available. The HOMR metadata going back to the late ’70s is very good these days, greatly improved. Someone at NCDC made a good hire.