I've been waiting for this statement, and the National Climate Assessment has helpfully provided it

The National Climate Assessment report denies that siting and adjustments to the national temperature record have anything to do with increasing temperature trends. Note the newest hockey stick below.

NCA_siting
h/t to Steve Milloy

Source: http://nca2014.globalchange.gov/system/files_force/downloads/low/NCA3_Climate_Change_Impacts_in_the_United%20States_LowRes.pdf?download=1

Yet as this simple comparison between raw and adjusted USHCN data makes clear…

2014_USHCN_raw-vs-adjusted
Click for graph source – Source Data: NOAA USHCN V2.5 data http://www.ncdc.noaa.gov/oa/climate/research/ushcn/

…adjustments to the temperature record are increasing – dramatically. The present is getting warmer, the past is getting cooler, and it has nothing to do with real temperature data – only adjustments to temperature data. The climate reality our government is living in is little more than a self-serving construct.

Our findings show that the trend is indeed affected, not only by siting, but also by adjustments:

Watts_et_al_2012 Figure20 CONUS Compliant-NonC-NOAA

The conclusions from the graph above (from the Watts et al. 2012 draft) still hold true today, though the numbers have changed a bit since we took all the previous criticisms to heart and worked through them. It has been a long, detailed rework, but now that the NCA has made this statement, it’s go time. (Note to Mosher, Zeke, and Stokes – please make your most outrageous comments below so we can point to them later and note them with some satisfaction.)

Another Gareth
May 7, 2014 4:17 am

Nick Adams, May 6, 2014 at 11:25 am, provided the following link detailing the adjustments that have been made to the temperature record:
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#QUAL
The one that interests me is SHAP.
That page says “Application of the Station History Adjustment Procedure (yellow line) resulted in an average increase in US temperatures, especially from 1950 to 1980. During this time, many sites were relocated from city locations to airports and from roof tops to grassy areas. This often resulted in cooler readings than were observed at the previous sites. When adjustments were applied to correct for these artificial changes, average US temperature anomalies were cooler in the first half of the 20th century and effectively warmed throughout the later half.”
Doesn’t this just export any historic UHI to the new location? Surely the SHAP adjustments ought to be negative on the temperature data before a move to a cooler site and leave the data unadjusted after the move, since you would expect the new site to be chosen because it is better. Instead, the data appear to be unadjusted before a move to a cooler site and positively adjusted after the move, with those adjustments reaching a plateau once the moves were largely done.

Joseph Murphy
May 7, 2014 5:03 am

Hi Janice! About as new as you 😉

Robert of Ottawa
May 7, 2014 5:26 am

I have been saying for a long time that this stratagem is so clever, it can only be deliberate. Past temperatures cannot be verified while present ones can, so to create warming by cooling the past is shrewd. Dishonest, yes, but shrewd.

Robert of Ottawa
May 7, 2014 5:41 am

Let the temperature fit the crime
Mann’s objective all sublime
He shall achieve within time
A data torture crime
Temperature is a crime
And make each data point
Unwillingly represent
A source of innocent increment
A source of increment!
With apologies to Gilbert & Sullivan’s Mikado

Evan Jones
Editor
May 7, 2014 7:13 am

Did you really just compare their efforts to homeopathy, as their homogenization has watered down the meaningful stuff until there is only a memory of it?
Worse. There is no trace of it, whatever. After homogenization, the well sited stations average 0.324 C/decade and the poorly sited stations average 0.325.
Unless you go back to formula you would never suspect there was “meaningful stuff” in the first place.

May 7, 2014 7:30 am

That chart as done by NOAA has been around for a long time, although it only goes to 2000:
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

Rod Everson
May 7, 2014 8:06 am

DC.sunsets says:
May 6, 2014 at 11:42 am
All science is Political Science when politics writes the grant-money checks.

Maybe this has already been done, but if not I’d like to see someone do a “funding study.”
The purpose would be to categorize all federal grants awarded to climate-related studies as being awarded to known skeptics, known alarmists, or neutral parties. And of the neutral parties, how many were awarded future grants depending on whether their original results were classified as skeptic, neutral, or alarmist?
Put another way, do climate skeptics have trouble getting federal grant money and is there a study that indicates that to be the case?

beng
May 7, 2014 8:22 am

I knew the fabrication was bad indeed, but that graph takes the cake.

Logicophilosophicus
May 7, 2014 8:32 am

If they keep lowering past temperatures, people will think we’re emerging from a little ice age! Oh, we are, aren’t we?
Anyway, it’s good to see they keep increasing the choco ration.

May 7, 2014 8:43 am

Lots of questions from folks; let me see if I can catch up now that I’ve had my morning coffee.
——–
Cynical Scientist – Interesting suggestion regarding recurring step changes like cutting grass at rural stations. I suspect those effects would be too small to be picked up by the homogenization algorithms, though they could result in problems over time. Williams et al tried to address this by looking at how the results change when you raise or lower the threshold for detecting step changes such that only major changes (e.g. station moves, instrument changes, paving the area under the station) trigger homogenization.
Variability differences shouldn’t really affect the trend unless there is also a change in the mean, something that can be easily picked up by pairwise comparisons.
As far as UHI goes, it is likely a combination of step changes (many triggered by microsite changes) and gradual slope (more influenced by macro-scale changes). Our recent paper found that step-change homogenization is pretty good at removing UHI impacts in the U.S., at least for the last 60 years or so: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013.pdf
———
Latitude – Each record is compared to its surrounding neighbors, and the algorithm looks for break points at one station that are not shared by its 20 or so surrounding stations. The assumption is that climate change is by-and-large a regional phenomenon, and any persistent step changes on a monthly scale seen at one station but none of its surrounding stations are due to some localized bias rather than a real climate effect.
The current temperatures are used as the reference, so if there is a 1/2 degree step change down at a specific station at some point in the past, the NCDC method corrects it by moving past temperatures up by 1/2 degree. Berkeley does something somewhat different, cutting station records whenever it detects a breakpoint and treating them as individual stations. The results of the two approaches are quite similar, however, especially for the U.S.: http://rankexploits.com/musings/wp-content/uploads/2013/01/USHCN-adjusted-raw-berkeley.png
You do get pretty much the same result if you do homogenization using only rural stations and toss out all the urban stations. We did this test in our UHI paper.
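To make the adjustment step described above concrete, here is a minimal Python sketch of the idea of aligning the segment before a detected break with the present. The station series, break location, and step size are invented for illustration; this is not the actual NCDC PHA code (which is linked later in the thread), and in practice the step is estimated from difference series against neighbors rather than from the raw station data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_months, break_idx = 240, 120
series = rng.normal(0.0, 0.3, n_months)   # hypothetical monthly anomalies (deg C)
series[:break_idx] += 0.5                 # pre-move readings 0.5 C too warm
                                          # relative to the current site

def adjust_to_present(x, break_idx):
    """Align the segment before a breakpoint with the present-day regime.

    The step size is estimated here as the difference in means on either side
    of the break; the real algorithms estimate it from difference series
    relative to neighboring stations."""
    step = x[break_idx:].mean() - x[:break_idx].mean()
    adjusted = x.copy()
    adjusted[:break_idx] += step          # shift the past, leave the present fixed
    return adjusted, step

adjusted, step = adjust_to_present(series, break_idx)
print(f"estimated step: {step:+.2f} C")   # close to -0.5 C for this toy example
```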
———
Salamano – I’m using a subset of UHI only over land areas in the U.S. It agrees quite well with homogenized data. As does the USCRN since it began having complete U.S. data in 2004: http://rankexploits.com/musings/wp-content/uploads/2013/01/Screen-Shot-2013-01-16-at-10.40.46-AM.png
———
Jared – The figure shows the cumulative effect of adjustments, and it’s far from linear. Also, while there were some major adjustments to CRN12 stations in the 1940s, the biggest adjustments were TOBs changes and the MMTS transition in the 1960s-1980s, which is where you see the bulk of the increase. I also get something slightly different from Anthony when I compare raw and homogenized data: http://i81.photobucket.com/albums/j237/hausfath/USHCNHomogenizedminusRaw_zps284d69fe.png
———
davidmhoffer – Adding stations can cool the past if they are in areas with no spatial coverage prior to adding those stations (e.g. in the Arctic) and if they have a higher trend than the global average. This is not really relevant for the U.S., however, where spatial coverage is fine. In general, unless there is an area with low spatial coverage, adding more stations will have a minor effect on the temperature record.
———
Jeff Id – I’d hope no one ever chooses stations based on their trends :-p. It’s worth pointing out that Anthony finds that the best sited stations have the same trend as the badly sited ones post-homogenization. They appear to have a lower trend prior to homogenization, though in Fall et al they had the same trend, so it’s worth looking in more detail at exactly what changed in the ratings between the old and new papers.
The difference between CRN12 and CRN345 trends in the raw data could have a number of explanations: (1) homogenization is biasing the trends upward by “spreading” the warming from badly sited stations; (2) there are some inhomogeneities (like the 1940s move from city centers to airports) that are correlated with CRN ratings; (3) there is bias in the spatial coverage between the different sets of stations contributing to the trend differences.
Once the ratings are released, I’d like to look more in-depth at the specific breakpoints detected in the CRN12 stations to see what is driving these differences. I’d also like to compare CRN12 temperatures to nearby Climate Reference Network stations, satellite data (UAH/AIRS), and reanalysis data. Anthony may well be correct, though I’ll defer judgement until I actually have the data to see for myself.
———
ThinkingScientist – I’d be happy to answer your questions to the extent I can.
1) There are a number of systemic biases introduced into the U.S. record over the past century. First, a significant number of stations were moved from urban rooftops to newly constructed airports in the 1940s, resulting in a step change downward in readings after the move. Second, a large portion of the network had its time of observation changed in the 1960s and 1970s, also resulting in a negative bias. Third, most of the network transitioned from liquid-in-glass thermometers to MMTS electronic instruments in the 1980s, resulting in an average max cooling bias of around 0.6 C. There are also numerous documented and undocumented station moves and microsite changes, as well as a non-negligible UHI effect. All of these are biases that, ideally, should be addressed in creating our best estimate of U.S. (and global) temperatures.
2) UHI is real, but its impact isn’t huge. Time of observation changes introduce a larger bias, for example. Homogenization (excluding TOBs corrections) actually lowers min temperatures, which is what we would expect when UHI is being corrected. I’d suggest reading my recent paper on UHI and homogenization in the U.S. for more details: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013.pdf
3) Homogenization is generally done separately on max and min temperatures. Mean is calculated from the resulting homogenized max and min.
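Since time-of-observation bias comes up several times in this thread, here is a toy Python simulation of the mechanism only (not the Karl et al. TOB adjustment NCDC actually applies; the diurnal cycle and variability numbers are invented). Resetting a max/min thermometer in the late afternoon lets a warm late afternoon before the reset set the maximum for the following observation day, so the average of daily maxima comes out warmer than with midnight-to-midnight observation days.

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 3650
hours = np.arange(24)

# Hypothetical hourly temperatures: a diurnal cycle peaking mid-afternoon
# plus an independent random offset for each day (degrees C).
diurnal = 10.0 * np.sin(2 * np.pi * (hours - 9) / 24)        # peak near 15:00
daily_offset = rng.normal(0.0, 4.0, n_days)
temps = (daily_offset[:, None] + diurnal[None, :]).ravel()    # shape (n_days * 24,)

def mean_daily_max(temps, reset_hour):
    """Average of daily maxima when the max/min thermometer is reset at `reset_hour`.

    Each 'observation day' is the 24-hour window ending at the reset hour."""
    shifted = temps[reset_hour:]              # drop the partial window at the start
    n_full_days = shifted.size // 24
    windows = shifted[: n_full_days * 24].reshape(n_full_days, 24)
    return windows.max(axis=1).mean()

midnight = mean_daily_max(temps, 24)    # calendar-day maxima
afternoon = mean_daily_max(temps, 17)   # observer resets at 17:00
print(f"mean Tmax, midnight obs: {midnight:.2f} C")
print(f"mean Tmax, 17:00 obs:    {afternoon:.2f} C  (warm-biased)")
```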

Alexej Buergin
May 7, 2014 8:53 am

“Zeke Hausfather says:
May 6, 2014 at 12:35 pm
Methinks the last point in your raw vs adjusted USHCN graph is in error”
And if it is not in error, will he agree to call it “bullshit of the highest order”, too?

May 7, 2014 8:56 am

evanmjones,
If I recall correctly, the TLT amplification factor over land is actually right around 1.
See http://www.realclimate.org/index.php/archives/2009/11/muddying-the-peer-reviewed-literature/
and http://climateaudit.org/2011/11/07/un-muddying-the-waters/
REPLY: Yes but nothing on RealClimate is actually real. And, I simply don’t trust a NASA organization that uses noisy and maladjusted surface temperature data over a satellite sensing program – i.e. the business of NASA. You’d think that would be their goal.
We’ll get the answer from the people that actually DO the work in satellite sounding and post here. – Anthony

May 7, 2014 9:01 am

Alexej Buergin,
If that last point is not in error, I will eat my hat 🙂

May 7, 2014 9:09 am

Salamano –
Oops, I meant UAH, not UHI in my reply above (which looks to still be in moderation). Climate science is something of an acronym soup, and sometimes it’s hard to keep them all straight…

May 7, 2014 9:13 am

Anthony,
Well, given that Klotzbach get their 1.25 amplification figure from Schmidt 2009, I think he might be the right person to ask :-p
Steve McIntyre did the math himself and got an amplification factor of 1.05 over land. Read the Nov. 8th update to Steve’s post.
REPLY: I did, and while Steve’s work is admirable, he’s not in the business of remote sensing (this appears to be his first effort), and neither is Schmidt. I prefer to ask somebody who actually does the work, and the person who designed the instrument on the bird, Spencer, is the best choice. – Anthony

Chris D.
May 7, 2014 9:43 am

I’m very much looking forward to reading this paper. Considering all the work that went into this project, it’s very exciting to see that it was not a waste of time. Perhaps it will even end up being a landmark paper. I’m chilling some bubbly in any event.

scf
May 7, 2014 10:41 am

The adjusted temperature record is a joke. In countries with strong global warming political movements, the supposedly scientific data gets adjusted in the same direction. In other countries, that doesn’t happen. Strange, that.
Multitudes of papers are written based on a joke of a temperature record, rendering those papers a joke as well. If the foundation is weak, so is whatever you build on top of it.
The next step will be for the believers to delete the facts (ie the raw data) so that their own twisted version of reality becomes the new “raw” data. If you can control the facts…

May 7, 2014 10:48 am

Zeke, I read through the paper that you linked in a post (in moderation?) hausfather-etal2013.pdf and have a couple of questions. First, what is the actual homogenization algorithm used in USHCN? Your paper says “Homogenization of the USHCN monthly version 2 temperature data does not specifically target changes associated with urbanization. Rather, the procedure used involves identifying and accounting for shifts in the monthly temperature series that appear to be unique to a specific station‐‐the assumption being that a spatially isolated and sustain shift in a station series is caused by factors unrelated to background climate variations [Menne et al. 2010].”
I found the Menne 2010 paper here: http://onlinelibrary.wiley.com/doi/10.1029/2009JD013094/full It says “In version 2 of the USHCN temperature data [Menne et al., 2009], the apparent impacts of documented and undocumented inhomogeneities were quantified and removed through automated pairwise comparisons of mean monthly maximum and minimum temperature series as described by Menne and Williams [2009].” I found the Menne 2009 paper here: http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2613.1 and I’ve seen it before several times. It claims “Use of a simple difference in means test does, however, address both gradual and sudden changes,…”. What is the difference in means test? It is not explained in the paper.

John Slayton
May 7, 2014 10:52 am

Evan (and any other interested persons):
Here’s another early paper dealing with TOB: W. Ellis, “On the Difference produced in the Mean Temperature derived from Daily Maxima and Minima as dependent on the Time at which the Thermometers are read,” Quart. Journ. Roy. Met. Soc. XVL, 1890, 213-218.
Tony B: Thanks for the references.

May 7, 2014 10:58 am

Eric,
“Difference in means” in this case refers to the difference in mean station anomalies before and after a breakpoint in the difference series relative to nearby stations. You can visually see this in the Berkeley Earth difference series graphs, e.g. http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Figures/34635-TAVG-Alignment.pdf
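For anyone who wants to see what that looks like in practice, here is a rough Python sketch of a difference-in-means scan over a target-minus-neighbor difference series. The data are synthetic and the break search is deliberately crude; the published algorithms use proper significance statistics rather than simply taking the largest shift.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 360                                   # 30 years of monthly anomalies

# Hypothetical anomalies: target and neighbor share the regional signal,
# but the target has an artificial +0.4 C shift starting at month 200.
regional = rng.normal(0.0, 0.5, n)
neighbor = regional + rng.normal(0.0, 0.2, n)
target   = regional + rng.normal(0.0, 0.2, n)
target[200:] += 0.4

diff = target - neighbor                  # regional signal cancels, the break remains

def difference_in_means(diff, k):
    """Difference of the means of the difference series before/after candidate break k."""
    return diff[k:].mean() - diff[:k].mean()

# Scan candidate breakpoints and pick the largest shift (a crude breakpoint search).
candidates = range(24, n - 24)            # require two years of data on each side
best_k = max(candidates, key=lambda k: abs(difference_in_means(diff, k)))
print(f"detected break near month {best_k}, step = {difference_in_means(diff, best_k):+.2f} C")
```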

May 7, 2014 11:41 am

Zeke, thanks for the graphic. It shows means separated by “empirical breaks.” How are those breaks determined? I assume they are station specific? From metadata? And if there is no metadata? Then presumably the station mean is compared to the regional mean. Then it is adjusted? How?
Thanks in advance.

Matt G
May 7, 2014 11:44 am

jimmi_the_dalek says:
May 6, 2014 at 4:18 pm
That graph puzzles me. It shows an ‘adjustment’ of nearly 1 degree from 1979 to 2013 (the 2014 point looks spurious), yet the surface temperature record is in reasonable (i.e better than 1 degree) agreement with the satellite record over that period. How can that be the case? Has the satellite record been adjusted too?
————————————————————————————————————–
The surface warms and cools more than the troposphere above any one point as seen in satellite data, so surface data should show more cooling or warming than satellite data, except during strong ENSO signals. This indicates that the cooling over recent years should show up more in surface data than in satellite data. This has not been observed, because they are dishonestly adjusting out this cooling to keep in line with the satellite data.
When global temperature warms over a longer period, the surface does so more than the satellite record, and this is then not adjusted to match the satellite data. It is a spurious, cherry-picking, warm-biased, incompetent way of managing surface data. Interpolating across sparse data regions in preference to satellite data says all anyone needs to know about the mismanagement of surface data; despite the differences between the two, interpolation is far less accurate.

Evan Jones
Editor
May 7, 2014 11:55 am

Thanks, John.
I remember hearing the explanation and having to figure out what was actually going on by constructing a top-down model.
By the way, folks, John is one of our best and most determined surface station surveyors.

May 7, 2014 11:55 am

eric1skeptic,
The pairwise homogenization process used by NCDC iterates through each station, calculating the difference between each station and all neighbors in its proximity (say, the nearest 20 stations, though that parameter is tunable). It looks through these difference series for sudden step changes that are consistent across all neighbor pairs. Effectively it’s looking for changes that occur at a particular point in time at one station but not at any of the surrounding stations, with the assumption that abrupt localized changes that occur at one station but not at any of its neighbors reflect localized biases like station moves or instrument changes.
This can result in problems if there are simultaneous changes at all the neighbors but not at the target station. Thankfully most of the inhomogeneities in the surface temperature record, like station moves, TOBs changes, or instrument changes, were phased in gradually and were not adopted simultaneously across the network. Problems can also occur when there are few neighboring stations, forcing the algorithm to go further afield to find neighbors and potentially misclassifying true regional climate changes as localized biases. This seems to have occurred in the case of a number of Arctic stations, as this recent piece by Robert Way points out: http://www.skepticalscience.com/how_global_warming_broke_the_thermometer_record.html
In the U.S., thankfully, station coverage is dense enough (especially since all ~7000 coop stations are used in breakpoint detection) that this shouldn’t be much of an issue here.
Homogenization also doesn’t necessarily deal well with slope inhomogeneities (vs. step-change inhomogeneities). Thankfully most of the inhomogeneities (including UHI) appear to show up more as a set of smaller step changes rather than a very gradual warming bias, though more work could be done analyzing this.
I’d also suggest reading Williams et al if you haven’t yet. They set out to test homogenization by creating synthetic temperature data where the “truth” is known and artificial biases of different types are added. They explore how well the algorithm deals with different types of issues and different bias signs, to make sure that the algorithm doesn’t end up artificially introducing cooling or warming bias when correcting errors. They also look at different possible permutations of the number of neighbors needed, the size of the break needed, or the distance from the station needed to see how they affect the results. Their paper is here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf
The code used in the pairwise homogenization method is available here: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/software/52i/
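Here is a highly simplified Python sketch of that neighbor-consensus idea, assuming the candidate break date has already been located by a scan like the one sketched earlier in the thread. The data, threshold, and flagging rule are made up for illustration and are much cruder than the actual PHA statistics.

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_neighbors = 360, 20
break_month, step = 180, -0.6              # hypothetical station move at the target

# Shared regional signal plus independent noise for the target and its neighbors.
regional = np.cumsum(rng.normal(0.0, 0.05, n))        # slow regional wander
neighbors = regional + rng.normal(0.0, 0.3, (n_neighbors, n))
target = regional + rng.normal(0.0, 0.3, n)
target[break_month:] += step                           # artificial local step change

def step_at(diff, k):
    """Size of the shift in a difference series at candidate breakpoint k."""
    return diff[k:].mean() - diff[:k].mean()

# Pairwise difference series remove the shared regional signal; a break that is
# local to the target shows up with the same sign and size in every pair.
steps = np.array([step_at(target - nb, break_month) for nb in neighbors])

threshold = 0.3                                        # tunable detection threshold
flagged = np.all(np.abs(steps) > threshold) and len(set(np.sign(steps))) == 1
print(f"median step vs neighbors: {np.median(steps):+.2f} C, flagged: {flagged}")
```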

May 7, 2014 12:07 pm

eric1skeptic,
To address your other questions, the size of the step change in the difference series needed to flag an inhomogeneity is configurable; I’m not sure what exact value is used in the PHA, though the Williams et al paper tested a number.
Once a breakpoint is detected, different algorithms “correct” it in different ways. NCDC’s PHA essentially just collapses any step changes identified in the difference series to create a continuous record. Berkeley Earth cuts the station records at that point, treating everything before and after as different stations and using a least squares/kriging approach to combine all the fragments into a spatial temperature field.
Metadata is treated differently in different algorithms. The PHA lowers the threshold for detecting breakpoints when metadata indicates that a breakpoint has occurred. Berkeley just creates breaks at any metadata-determined breakpoint even if it doesn’t show up in the difference series. It’s worth pointing out that both methods detect quite a few breakpoints that are not documented in the metadata, as the metadata is rather poor (especially further back in time) in the U.S. and nearly non-existent for the rest of the world.
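As a companion to the PHA-style shift sketched earlier in the thread, here is a minimal Python illustration of the Berkeley-style alternative described above: cut the record at detected and metadata-documented breakpoints and treat the fragments as separate stations. The data and the metadata break date are invented, and the least-squares/kriging recombination is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 240
series = rng.normal(0.0, 0.3, n)
series[:120] += 0.5                       # hypothetical 0.5 C step change at month 120

detected_breaks = [120]                   # e.g. from a difference-series scan
metadata_breaks = [60]                    # e.g. a documented instrument change

def cut_into_fragments(series, breakpoints):
    """Berkeley-style correction: split the record at every breakpoint and treat each
    fragment as a separate station, rather than shifting any of the values."""
    points = sorted(set(breakpoints))
    edges = [0] + points + [len(series)]
    return [series[a:b] for a, b in zip(edges[:-1], edges[1:])]

# Metadata breaks are honored even when nothing shows up in the difference series.
fragments = cut_into_fragments(series, detected_breaks + metadata_breaks)
print([len(f) for f in fragments])        # e.g. [60, 60, 120]
```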