Thirty-year temperature trends are shown to be lower when computed from well-sited, high-quality NOAA weather stations that do not require adjustments to the data.
This was in AGU’s press release news feed today. At about the time this story publishes, I am presenting it at the AGU 2015 Fall meeting in San Francisco. Here are the details.
NEW STUDY OF NOAA’S U.S. CLIMATE NETWORK SHOWS A LOWER 30-YEAR TEMPERATURE TREND WHEN HIGH QUALITY TEMPERATURE STATIONS UNPERTURBED BY URBANIZATION ARE CONSIDERED
Figure 4 – Comparison of 30-year trends for compliant (Class 1,2) USHCN stations, non-compliant (Class 3,4,5) USHCN stations, and NOAA final adjusted V2.5 USHCN data in the Continental United States
EMBARGOED UNTIL 13:30 PST (16:30 EST) December 17th, 2015
SAN FRANCISCO, CA – A new study about the surface temperature record presented at the 2015 Fall Meeting of the American Geophysical Union suggests that the 30-year trend of temperatures for the Continental United States (CONUS) since 1979 is about two thirds as strong as the official NOAA temperature trend.
Using NOAA’s U.S. Historical Climatology Network, which comprises 1218 weather stations in the CONUS, the researchers were able to identify a 410-station subset of “unperturbed” stations that have not been moved and have had no equipment changes or changes in time of observation, and thus require no “adjustments” to their temperature record to account for these problems. The study focuses on finding trend differences between well-sited and poorly sited weather stations, based on a WMO-approved metric, Leroy (2010)[1], for classification and assessment of measurement quality based on proximity to artificial heat sources and heat sinks which affect temperature measurement. An example is shown in Figure 2 below, showing the NOAA USHCN temperature sensor for Ardmore, OK.
Following up on a paper published by the authors in 2010, Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends[2], which concluded:
Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends
…this new study is presented at AGU session A43G-0396 on Thursday, Dec. 17th at 13:40 PST and is titled Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network:
A 410-station subset of U.S. Historical Climatology Network (version 2.5) stations is identified that experienced no changes in time of observation or station moves during the 1979-2008 period. These stations are classified based on proximity to artificial surfaces, buildings, and other such objects with unnatural thermal mass using guidelines established by Leroy (2010)[1]. The United States temperature trends estimated from the relatively few stations in the classes with minimal artificial impact are found to be collectively about 2/3 as large as US trends estimated in the classes with greater expected artificial impact. The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization. The homogeneity adjustments applied by the National Centers for Environmental Information (formerly the National Climatic Data Center) greatly reduce those differences but produce trends that are more consistent with the stations with greater expected artificial impact. Trend differences are not found during the 1999-2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.
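To make that comparison concrete, here is a rough sketch (my own illustration, not the study's code or data) of the kind of calculation involved: fit a least-squares trend to each station's annual Tmean series for 1979-2008, then average those per-station trends within each siting class. The station records and class labels below are invented placeholders.

```python
import numpy as np

def decadal_trend(years, tmean):
    """Least-squares slope of an annual-mean temperature series, in deg C per decade."""
    years = np.asarray(years, dtype=float)
    tmean = np.asarray(tmean, dtype=float)
    ok = np.isfinite(tmean)                      # tolerate missing years
    slope_per_year = np.polyfit(years[ok], tmean[ok], 1)[0]
    return 10.0 * slope_per_year

def class_mean_trend(stations, wanted_classes):
    """Average per-station trend for stations whose siting class is in wanted_classes."""
    trends = [decadal_trend(s["years"], s["tmean"])
              for s in stations if s["siting_class"] in wanted_classes]
    return float(np.mean(trends))

# Invented example stations; real input would be the 410 USHCN annual Tmean series.
years = np.arange(1979, 2009)
rng = np.random.default_rng(0)
stations = [
    {"siting_class": 1, "years": years,
     "tmean": 0.020 * (years - 1979) + rng.normal(0, 0.3, years.size)},
    {"siting_class": 4, "years": years,
     "tmean": 0.032 * (years - 1979) + rng.normal(0, 0.3, years.size)},
]
print("Class 1/2 mean trend:  ", round(class_mean_trend(stations, {1, 2}), 3), "C/decade")
print("Class 3/4/5 mean trend:", round(class_mean_trend(stations, {3, 4, 5}), 3), "C/decade")
```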




Key findings:
1. Comprehensive and detailed evaluation of station metadata, on-site station photography, satellite and aerial imaging, street-level Google Earth imagery, and curator interviews has yielded a well-distributed 410-station subset of the 1218-station USHCN network that is unperturbed by Time of Observation changes, station moves, or rating changes, and that has a complete or mostly complete 30-year dataset. It must be emphasized that the perturbed stations dropped from the USHCN set show significantly lower trends than those retained in the sample, both for well and poorly sited station sets.
2. Bias at the microsite level (the immediate environment of the sensor) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend. Well sited stations show significantly less warming from 1979 – 2008. These differences are significant in Tmean, and most pronounced in the minimum temperature data (Tmin). (Figure 3 and Table 1)
3. Equipment bias (CRS v. MMTS stations) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend when CRS stations are compared with MMTS stations. MMTS stations show significantly less warming than CRS stations from 1979 – 2008. (Table 1) These differences are significant in Tmean (even after upward adjustment for MMTS conversion) and most pronounced in the maximum temperature data (Tmax).
4. The 30-year Tmean temperature trend of unperturbed, well sited stations is significantly lower than the Tmean temperature trend of the NOAA/NCDC official adjusted homogenized surface temperature record for all 1218 USHCN stations.
5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.
6. The data suggests that the divergence between well and poorly sited stations is gradual, not a result of spurious step change due to poor metadata.
The study is authored by Anthony Watts and Evan Jones of surfacestations.org, John Nielsen-Gammon of Texas A&M, and John R. Christy of the University of Alabama in Huntsville, and represents years of work in studying the quality of the temperature measurement system of the United States.
Lead author Anthony Watts said of the study: “The majority of weather stations used by NOAA to detect climate change temperature signal have been compromised by encroachment of artificial surfaces like concrete, asphalt, and heat sources like air conditioner exhausts. This study demonstrates conclusively that this issue affects temperature trend and that NOAA’s methods are not correcting for this problem, resulting in an inflated temperature trend. It suggests that the trend for U.S. temperature will need to be corrected.” He added: “We also see evidence of this same sort of siting problem around the world at many other official weather stations, suggesting that the same upward bias on trend also manifests itself in the global temperature record”.
The full AGU presentation can be downloaded here: https://goo.gl/7NcvT2
[1] Leroy, M. (2010): Siting Classification for Surface Observing Stations on Land, Climate, and Upper-air Observations JMA/WMO Workshop on Quality Management in Surface, Tokyo, Japan, 27-30 July 2010
[2] Fall et al. (2010) Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends https://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf
Abstract ID and Title: 76932: Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network
Final Paper Number: A43G-0396
Presentation Type: Poster
Session Date and Time: Thursday, 17 December 2015; 13:40 – 18:00 PST
Session Number and Title: A43G: Tropospheric Chemistry-Climate-Biosphere Interactions III Posters
Location: Moscone South; Poster Hall
Full presentation here: https://goo.gl/7NcvT2
Some side notes.
This work is a continuation of the surface stations project started in 2007, our first publication, Fall et al. in 2010, and our early draft paper in 2012. Putting out that draft paper in 2012 provided us with valuable feedback from critics, and we’ve incorporated that into the effort. Even input from openly hostile professionals, such as Victor Venema, has been highly useful, and I thank him for it.
Many of the valid criticisms of our 2012 draft paper centered around the Time of Observation (TOBs) adjustments that have to be applied to the hodge-podge of stations with issues in the USHCN. Our viewpoint is that trying to retain stations with dodgy records and adjusting the data is a pointless exercise. We chose simply to locate all the stations that DON'T need any adjustments and use those, thereby sidestepping that highly argumentative problem completely. Fortunately, there were enough in the USHCN: 410 out of 1218.
It should be noted that the Class 1/2 station subset (the best stations we have located in the CONUS) can be considered an analog to the Climate Reference Network in that these stations are reasonably well distributed in the CONUS, and like the CRN, require no adjustments to their records. The CRN consists of 114 commissioned stations in the contiguous United States; our station subset is similar in size and distribution. This should be noted about the CRN:
One of the principal conclusions of the 1997 Conference on the World Climate Research Programme was that the global capacity to observe the Earth’s climate system is inadequate and deteriorating worldwide and “without action to reverse this decline and develop the GCOS [Global Climate Observing System], the ability to characterize climate change and variations over the next 25 years will be even less than during the past quarter century” (National Research Council [NRC] 1999). In spite of the United States being a leader in climate research, long term U.S. climate stations have faced challenges with instrument and site changes that impact the continuity of observations over time. Even small biases can alter the interpretation of decadal climate variability and change, so a substantial effort is required to identify non-climate discontinuities and correct the station records (a process called homogenization). Source: https://www.ncdc.noaa.gov/crn/why.html
The CRN has a decade of data, and it shows a pause in the CONUS. Our subset of adjustment-free, unperturbed stations spans over 30 years. We think it is well worth looking at that data and ignoring the data that requires loads of statistical spackle to patch it up before it is deemed usable. After all, that’s what they say is the reason the CRN was created.
We do allow for one and only one adjustment in the data, and this is only because it is based on physical observations and it is a truly needed adjustment. We use the MMTS adjustment noted in Menne et al. 2009 and 2010 for the MMTS exposure housing versus the old wooden-box Cotton Region Shelter (CRS), which has a warm bias mainly due to [paint] and maintenance issues. The MMTS gill shield is a superior exposure system that prevents bias from daytime short-wave and nighttime long-wave thermal radiation. The CRS requires yearly painting, and that often gets neglected, resulting in exposure systems that look like this:
See below for a comparison of the two:
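On the mechanics of the one adjustment the paper retains, here is a minimal sketch of how a single step correction at a station's CRS-to-MMTS conversion can be applied. This is not the authors' code, and the default offsets are placeholders; the study itself uses the MMTS adjustment values from Menne et al. (2009, 2010).

```python
import numpy as np

def apply_mmts_step(years, tmax, tmin, conversion_year,
                    tmax_offset=0.0, tmin_offset=0.0):
    """Add a constant offset to all readings from the conversion year onward,
    so the MMTS segment is expressed on the same basis as the earlier CRS segment.
    The default offsets of 0.0 are placeholders; real values would come from
    Menne et al. (2009, 2010)."""
    years = np.asarray(years)
    tmax = np.asarray(tmax, dtype=float)
    tmin = np.asarray(tmin, dtype=float)
    after = years >= conversion_year   # True for MMTS-era readings
    return (np.where(after, tmax + tmax_offset, tmax),
            np.where(after, tmin + tmin_offset, tmin))
```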
Some might wonder why we have a 1979-2008 comparison when this is 2015. The reason is so that this speaks to Menne et al. 2009 and 2010, papers launched by NOAA/NCDC to defend their adjustment methods for the USHCN from criticisms I had launched about the quality of the surface temperature record, such as this book in 2009: Is the U.S. Surface Temperature Record Reliable? This sent NOAA/NCDC into a tizzy, and they responded with a hasty and ghost-written flyer they circulated. In our paper, we extend the comparisons to the current USHCN dataset as well as the 1979-2008 comparison.
We are submitting this to publication in a well-respected journal. No, I won’t say which one, because we don’t need any attempts at journal gate-keeping like we saw in the Climategate emails, e.g., “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!” and “I will be emailing the journal to tell them I’m having nothing more to do with it until they rid themselves of this troublesome editor.”
When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper are available, we’ll welcome real and well-founded criticism.
It should be noted that many of the USHCN stations we excluded (those with station moves, equipment changes, TOBs changes, and other issues that made them unsuitable) had lower trends that would have bolstered our conclusions.
The “gallery” server from that 2007 surfacestations project that shows individual weather stations and siting notes is currently offline, mainly because it is attacked regularly, which affects my office network. I’m looking to move it to cloud hosting to solve that problem. I may ask for some help from readers with that.
We think this study will hold up well. We have been very careful, very slow and meticulous. I admit that the draft paper published in July 2012 was rushed, mainly because I believed that Dr. Richard Muller of BEST was going before Congress again the next week, using data I provided (which he agreed to use only for publications) as a political tool. Fortunately, he didn’t appear on that panel. But the feedback we got from that effort was invaluable. We hope this pre-release today will also provide valuable criticism.
People might wonder if this project was funded by any government, entity, organization, or individual; it was not. This was all done on free time, without any pay, by all involved. That is another reason we took our time; there was no “must produce by” funding requirement.
Dr. John Nielsen-Gammon, the state climatologist of Texas, has done all the statistical significance analysis, and his opinion is reflected in this statement from the introduction.
Dr. Nielsen-Gammon has been our harshest critic from the get-go; he has independently reproduced the station ratings with the help of his students and created his own series of tests on the data and methods. It is worth noting that this is his statement:
The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization.
The p-values from Dr. Nielsen-Gammon’s statistical significance analysis are well below 0.05 (the 95% confidence level), and many comparisons are below 0.01 (the 99% confidence level). He’s on board with the findings after satisfying himself that we indeed have found a ground truth. Anyone who doubts his input to this study should view his publication record.
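For readers who want to see how a p-value for such a comparison can be produced, here is a small illustrative sketch. It is not necessarily the test Dr. Nielsen-Gammon ran; it simply shows one standard way to compare the mean trends of two station groups, using invented per-station trend values.

```python
import numpy as np
from scipy import stats

# Invented per-station 1979-2008 trends, in deg C per decade.
well_sited   = np.array([0.18, 0.22, 0.15, 0.20, 0.17, 0.21])
poorly_sited = np.array([0.30, 0.28, 0.33, 0.27, 0.31, 0.29])

# Welch's t-test on the difference in mean trend between the two groups.
t_stat, p_value = stats.ttest_ind(well_sited, poorly_sited, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 means significant at the 95% level
```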
COMMENT POLICY:
At the time this post goes live, I’ll be presenting at AGU until 18:00 PST, so I won’t be able to respond to queries until after then. Evan Jones may be able to after about 3:30 PM PST.
This is a technical thread, so those who simply want to scream vitriol about deniers, Koch Brothers, and Exxon aren’t welcome here. Same for people that just want to hurl accusations without backing them up (especially those using fake names/emails, we have a few). Moderators should use pro-active discretion to weed out such detritus. Genuine comments and/or questions are welcome.
Thanks to everyone who helped make this study and presentation possible.
Finally, what was suspected all along is now proven. I suspect the trend would be exactly what CET or Armagh “unadjusted urban” show.
Heh. Can’t go by one station, though. One needs a large set to beat the statistical significance monster, after all.
If Figure 4 is any clue, then “adjusted temperatures” can be just about anything.
Only in climate science can you have one set of data be low (Class 1/2), have a second set of data be in the middle (Class 3/4/5), and then have the final average of all the data be the highest of all (NOAA).
The point of adjusting and homogenising badly sited thermometers is about as logical as taking an average and standard deviation of many provably broken climate model outputs and pretending they represent something not inconsistent with your measurements.
It is very difficult to get a trend to 0.02 K/a from devices which are not measuring something that can be well defined to that precision. Simply saying “diurnal min/max temperature in shade at two meters height” is far from defining the problem. And I’m not denying it’s warming; our host calculated it as 0.2 K/decade in the US. It is just that the uncertainty is not only about the temperature, but about the thing to be measured. I’m content with climate scientists as long as they don’t let uncertainty be used as a weapon for detrimental mitigation attempts.
One of my favorites
http://surfacestations.org/images/lovelock_mig480.jpg
(posting this here for it is, indeed, Mr. Jones, the “red meat” of this paper’s implications)
(Credit for noting the above to RD in her or his post far below, here: http://wattsupwiththat.com/2015/12/17/press-release-agu15-the-quality-of-temperature-station-siting-matters-for-temperature-trends/comment-page-1/#comment-2100535 )
This is a colossal effort and achievement by Anthony Watts and deserves the widest study and acknowledgement. I hope that there are no misguided efforts to block its publication. The benefits of this study are self-evident. Reliable data is the basis of all science, and reliable data has been missing from the Climate debate for a long while.
+1
Sou [snip] over at [snip] has been trying to debunk this paper since she saw Anthony’s tweet in October! I’m so excited to read it, Anthony!
Yes, really rattled the litter tray, excellent work!
Miriam in delirium.
How funny, I thought only “climate scientists” were qualified to speak of such an issue, or have an opinion on it. I guess I hadn’t realized an MBA and a bachelor’s in agricultural science and a “freelance consultant” position make you a “climate scientist”.
Yet she gave me a valuable forum. And I am grateful to her.
Miriam in delirium.
Meanwhile, Evan in seventh heaven (Region 7.)
Frankly, the personal insults ban should also apply here. Ms. O’Brien may be obsessive and vitriolic, but that’s no reason to bring insults toward her in this forum.
REPLY – +1. Hear, hear. Please, guys, if ever there was a time for the high road, it is now. You all know my feelings on the matter. ~ Evan
Two Labs,
There are other reasons. One example: I’ve never commented there, but she has referred to my comments here in very disparaging terms. Anthony gets treated even worse. So if your suggestion is to just turn the other cheek, I don’t agree, because that way you get slapped on both sides.
Anyway, we’re hardly being “personally insulting” to her. Just telling it like it is.
I totally agree. Mods, please feel free to remove my personally insulting comment.
REPLY – Thank you for that comment. Done. You are forgiven; go forth and sin no more. ~ Evan
What makes you a “climate scientist” is doing the drill and surviving peer (and independent) review. Anthony has done this several times. I have done it once.
No sheepskin required. Sou’s criticisms of this project have yielded value to it, and to me. I so wish that there were not so much bad blood under the bridge. Both sides in this have a lot to learn from each other. Being on speaking terms helps.
The Rev doth bestride this narrow world like a colossus.
Welcome him to Vindication-Nation.
But this project demonstrates why Climate Science is not a science.
I like to think of it how Climate Science lives and grows — like any other science.
Absolutely. Congratulations and many thanks to all involved.
But this project demonstrates why Climate Science is not a science. At the very start, the object is to get the best data quality possible and work with good data, not poor. But Climate Science has not evolved in that way. Heck, their main data set does not even measure what they are interested in, i.e., it does not measure energy and does not tell one whether energy is accumulating over time.
The first step that the Team should have engaged in was an audit of all the stations used to compile the land-based thermometer record, to identify those best sited, those with the best maintenance record and data recording rigour, and those that had the longest data record in time. They should only have used good quality data sources which require no adjustment whatsoever.
If Global Warming is truly Global, then one does not need 6000, or 2000 or so stations, but one does require good quality data. It would have been much better to have rooted out the good data sources even if this resulted in only 300 to 700 stations world wide. Heck, even 100 to 200 stations would tell us all we need to know if the data that they are returning is good data. Of course, there would be spatial issues but that is not really so much of a problem since Climate is regional and not global and climate response and impact to climate change is also regional and not global. What we need to know is what each continent is doing and what each country is doing so it does not matter greatly whether globally the distribution of the spatial coverage is less than ideal.
Presently what we are doing is simply evaluating the efficacy of the adjustments made to cr*ppy data. What is needed is only good data that needs no adjustments whatsoever.
There’s a link to this study on Drudge this morning.
Good job, Anthony. It’s been a very long haul for you and other authors of this work. We should also congratulate the many volunteers who took up the enormous task of documenting every surface station generating temperature data used to influence public policy, despite opposition from government satraps (who are therefore plainly unfit for public trust).
“I hope that there are no mis-guided efforts to block its publication.”
Unfortunately, as the fetid contents of the Climategate emails (undenied by their authors) plainly demonstrate on numerous occasions and beyond any possible doubt, there were persistently corrupt and malicious efforts to hinder publication which were not merely mis-guided.
Those efforts were very carefully guided, indeed, and demonstrate just how fundamentally untrustworthy the “climate science” establishment has been from inception — and always will be (because when it comes to personal integrity, the leopard never really changes his spots.) Corrupt people attract others and weed out of their ranks anyone who may question their carefully wrought fictions. These academic authorities hand pick and hand feed a new generation of “climate scientists”. There’s no reason to believe that crop of shiny new faces will offer any improvement. The acorn doesn’t fall far from the oak. The professional villainy won’t end when the present top-tier “team” of carney barkers and fraudsters have died or retired. It will be permanently institutionalized at the expense of the public.
Nothing from this branch of fraudulent “science” can be trusted. Especially assertions based on methods, data, or assumptions not independently verified (the way real science actually works). Altered data without the original available should be thrown out entirely since it’s as tainted and untrustworthy as the people who “adjusted” it. And anyone who points to it as evidence supporting any conclusive assertion should be pilloried as either a fool or a scammer. Probably both.
A long time ago
Thanks to all of you. I’m sure this will be front and center of the NYT and WSJ tomorrow.
(Sorry, couldn’t resist).
Thank you sincerely.
Maybe FoxNews……
Next they will dilute the mercury in the thermometers .
Have you ever tried that?
Well done. Good science well described. It’ll be fun to see the responses.
Anthony, I, and I’m sure, the rest of the “screeching mercury monkeys” who surveyed stations back in the day thank and congratulate you on persevering with this research. These results demonstrate clearly that method matters and fiddling with numbers ex post facto isn’t going to fix faulty procedures.
I’m another one, and I echo your expressions and opinions. It’s so nice to see this work coming to fruition.
Ook. Ook. (Scritch-scratch.)
Evan, and many thanks to you too for all your hard work on this project verifying the surveys. EE-EE-OO-OO-AH-AH.
Ting, tang, walla-walla bing-bang.
All of the surveyors are brothers in arms. All of you own a piece of this.
We are so proud of you, Anthony (et. al.)!!
So very proud.
********************************
(sometime, how about a list — as a posted article — of all (local site guys, etc…) who made this giant effort possible?)
Hear, hear, Janice Moore.
Yes, hear, hear, Janice!
You, Sir, are a Great American.
(I was addressing Anthony, but the same applies to his collaborators.)
He’s great bloke.
NOBODY BEATS THE REV.
Yes, at last the evidence we all knew was there and somehow nobody was able to give us! This is a massive achievement and breaks the foundations of the lie on which this fake science has been built over many years. Yes everyone who has been following this site must feel pride and joy for what Anthony Watts has and is achieving.
breaks the foundations of the lie
Oh, you mustn’t say that. That was not a lie. It was an error. Now they get to check us out for errors. This is how science progresses.
You mean Anthony only used people who were not on the take of the Koch brothers and big oil.
Is that even legal to do climate research without oil money or Koch money ??
I think the OED definition of science talks about “observation and experimentation”.
Good enough for me.
Anthony’s Army all deserve to be publicly recognized for their home-grown, do-it-yourself, go-out-and-observe science achievement. You hired a great crew, Anthony.
Cheap too !
g
Anthony’s Army
“Still Recruiting.”
mass movement warning
tenere scepticismo
Here’s hoping you help move the subject from “settled” back to science.
Ooo, nice idea for a Josh cartoon, Mr. Din …
Anthony (et. al.) standing in front of big billboard, painting a line through “SETTLED” … with a wry smile…
Brilliant !!!
I like it.
Thanks Marcus and Evan! (on behalf of Mr. Din, too)
De nada. (De mucho.)
and Weird?
What really has been missed in this whole debate was that there are in fact FOUR RADIOSONDE data sets that AGREE with TWO Satellite data sets which show NO warming for the past 18 years. This is incontrovertible evidence. Somehow the radiosonde data was never mentioned or put on graphs until recently. I find this an incredible omission. I wonder if this data corresponds well with Anthony’s latest unadjusted compliant surface data for the same period… trend anyway.
“Incontrovertible”? Nothing in empirical science has that status. In using that term you only ape the corruption of the APS and other attempts to close debate by those on the other side. Otherwise I thoroughly support your case.
Radiosonde comparison has been frequently mentioned at Steve Goddard/Tony Heller’s realclimatescience.com
Can you provide a link to the RADIOSONDE data sets? I would like to add them to DebunkingClimate.com
jim: until someone comes along with better info., here is what I found:
1. Data sets (click on page linked to “download”): https://ghrc.nsstc.nasa.gov/hydro/details.pl?ds=gpmradsecgcpex
2. A paper you might find of interest:
MSU Tropospheric Temperatures: Dataset Construction and Radiosonde Comparisons
(Christy, Spencer, and Braswell, 1999)
http://www.ncdc.noaa.gov/temp-and-precip/msu/uah-msu.pdf
Best wishes finding what you are looking for,
Janice
P.S. To jim: Here are two helpful (I hope!) excerpts from the Christy et al. paper:
(Christy et al., 1999 (linked just above) at 1153, 1165)
Correct me if I’m wrong, but aren’t the satellite data adjusted to match the radiosonde data in some fashion?
Maybe adjusted isn’t the right word, but I thought the radiosonde data were somehow used as a reference for deciding what satellite data correlate to a certain tropospheric temperature.
If so that doesn’t invalidate the significance of this correlation, but they shouldn’t be considered two completely independent data sets.
No, they are independent sets.
I never heard that before regarding 4 radiosonde sets. That would be great to get a look at all four of them side by side.
After looking at the data like this, I started to look at how much each series changes going from min to max and back, and while the absolute temps aren’t the same in the different zones, this daily cycle over the year returns, on average, to 0.0 F.
Indeed, you did — and did a mighty fine job of it, too:
“Climate science is all about surface temperature trends. The problem with this is that the CAGW is a rate of cooling problem, not a static temperature problem. … What can weather station data tell us about this?”
Michael Crow
http://wattsupwiththat.com/2013/05/17/an-analysis-of-night-time-cooling-based-on-ncdc-station-record-data/
Janice, how many RAM chips do you have protruding from your head ?? How the heck did you remember that ??
That will have been a reference to our original Tmin findings in Fall et al. (2011). Those numbers will shortly be superseded by our current paper, which is, in a sense, a followup study, far more intensely done and using a far more difficult rating process.
Hi, Marcus — lol, I have so little else to occupy my RAM, that WUWT stuff can use most of the available RAM (some of it is, unfortunately, on ROM and my brain refuses to access it to let me write what it says… IOW: I forget a lot, too) — Mike Crow’s work impressed me from the start and not TOO long ago, there was a thread that also brought his fine work to mind… .
And, Marcus: don’t ever go away. WUWT needs your lovely personality. Things can get mighty, MIGHTY, heeeaaaaavvvvy around here sometimes (even to the point of fiercely mean!). You keep the atmosphere light and healthy; humor, enthusiasm, and good cheer are ESSENTIAL!
Each one of us has a role to play. Each one of us is important.
Janice, memory that you can’t access would be WOM. Write only memory. I have a data sheet around here somewhere for one of those.
Thanks for the computer science lesson, MarkW. I really messed up how I wrote that. I used the fact that ROM (read only memory) cannot be altered by the “reader,” thus, could not be accessed in such a way as to make it available to my “write-out” (i.e. memory recall) code. I blew it!
And, want to (just in case you see THIS one, heh) say: Way to go standing up for truth in science (against AGW) to the extent that you lost your job at a major laboratory — you are a hero for truth!
Janice
Janice, thank you (again) for your continued praise of my effort, I truly appreciate it!
My pleasure, Mike.
Very nice work!! A study based on actual findings, not the “could,” “if,” or “may occur” scenarios created by models.
Well done, Anthony et al. Hope the presentation went well. Thanks for keeping up the pressure.
It never stopped since the end of 2008. Never, ever. We were at it all the time.
And, it is high time to say:
Thanks, too, for going the extra mile and showing up here when you are likely exhausted.
Sleep well,
Janice
You’re most welcome. Yet always keep in mind that the Rev is the Grand Old Man. I am proud and privileged to be his mudslogger.
(Jump on in. The water’s, er, lukewarm!)
Excellent work Anthony et al.
Nice some real climate science for a change!
+ 10
I think the interesting comparison is between these data sets and the USCRN, the climate reference network.
All those are pristine top quality sites with triple redundant aspirated temperature sensors.
No adjustments allowed or needed, and guess what… they show NO warming for the past 10 years. The decade time interval probably extends to the right of Anthony’s graph.
Why on earth should NOAA have any interest in showing curves like this? :
“The U.S. Climate Reference Network (USCRN) is a systematic and sustained network of climate monitoring stations with sites across the conterminous U.S., Alaska, and Hawaii. These stations use high-quality instruments to measure temperature, precipitation, wind speed, soil conditions, and more. Information is available on what is measured and the USCRN station instruments.
The vision of the USCRN program is to provide a continuous series of climate observations for monitoring trends in the nation’s climate and supporting climate-impact research.
Stations are managed and maintained by the National Oceanic and Atmospheric Administration’s (NOAA) National Centers for Environmental Information.”
Two things are important to note, here.
1.) The trend is flat just an insignificant bit on the cool side.
2.) COOP tracks very well with CRN from 2005 to 2014.
(Trends are to be considered Tmean unless otherwise specified.)
This is important, because it supports our hypothesis: Poor microsite exaggerates trend. And it doesn’t even matter if that trend is up or down.
Poor microsite exaggerates a warming trend, causing a divergence with well sited stations. Poor microsite also exaggerates a cooling trend, causing an equal and opposite divergence. And if there is essentially no trend to exaggerate (as per the 2005-2014 interval), there will be essentially no divergence.
That explains why poorly sited stations have stronger warming trends than well sited stations from 1977 – 1998. It explains why poorly sited stations have stronger cooling trend from 1999 – 2008. And, finally, it explains the lack of divergence between COOP and the CRN from 2005 – 2014. That is what is called working forward, backward — and sideways.
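As a toy illustration of that idea (my own sketch, nothing from the paper): if poor microsite multiplied whatever underlying trend exists by some factor, a divergence would appear only when there is a trend to multiply.

```python
# Hypothetical exaggeration factor for poorly sited stations.
exaggeration = 1.5

for underlying in (0.20, -0.15, 0.0):   # warming, cooling, flat (deg C/decade)
    poorly_sited = exaggeration * underlying
    divergence = poorly_sited - underlying
    print(f"underlying {underlying:+.2f} -> poorly sited {poorly_sited:+.2f}, "
          f"divergence {divergence:+.2f} C/decade")
```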
Evan, I also found, when looking at how individual stations’ measured temps evolve (the difference between daily rising and falling temps) over a year’s time, that they are slightly cooling.
If you haven’t seen what I’ve done previously, I think it’s a nice complement to your team’s work. I haven’t looked at absolute temperature trends, just the delta change, and have processed unaltered station data into various sized grids.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/
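For anyone curious what that sort of calculation looks like, here is a rough sketch of comparing each day's warming with the following night's cooling from daily Tmax/Tmin records. It is my reading of the approach described, not the actual code behind the linked post.

```python
import numpy as np

def mean_rise_minus_fall(tmax, tmin):
    """tmax[i] and tmin[i] are a station's daily max and (morning) min for day i.
    Daytime rise on day i:   tmax[i] - tmin[i]
    Overnight fall after it: tmax[i] - tmin[i+1]
    Returns the mean of (rise - fall), which is near zero when the daytime
    warming and the following night's cooling balance over the year."""
    tmax = np.asarray(tmax, dtype=float)
    tmin = np.asarray(tmin, dtype=float)
    rise = tmax[:-1] - tmin[:-1]
    fall = tmax[:-1] - tmin[1:]
    return float(np.mean(rise - fall))
```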
Science or Fiction, thanks for posting the recent USCRN plot. It would be interesting to see a comparison plot for the same time period using the best sited USHCN sites that Anthony and company examined. Anthony, this would make a great post in the future … hint hint.
That explains why poorly sited stations have stronger warming trends than well sited stations from 1977 – 1998. It explains why poorly sited stations have stronger cooling trend from 1999 – 2008. And, finally, it explains the lack of divergence between COOP and the CRN from 2005 – 2014. That is what is called working forward, backward — and sideways.
And CRN from 2001 to 2015 doesn’t diverge from good or bad stations.
Going forward when it warms it will be interesting.
The problem is, those results would, in isolation, be moot. There should be little divergence in trend between well-sited USHCN, poorly sited USHCN, and CRN, because during the interval they overlap there is essentially no trend to exaggerate.
Evan, what is COOP? Also, I tried to post this earlier from home (on my 3rd computer) without success, I think. Here it is again. I think it is related to what you are saying here.
Anthony, Evan, or anyone else who may know: What is meant by this? “Trend differences are not found during the 1999- 2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures”.
Does this mean no trend difference between the Class 1/2 versus Class 3/4/5 during this time or none between the NCDC adjustments and the Class 1/2 during the last 7 years?
1.) To be clear, the COOP network is the entire ~6000 NOAA station set, of which the 1218-station USHCN is a subset comprising the “best” stations, those with the longest history and most complete data/metadata.
2.) I think Anthony may have used a pre-edited version of the abstract. The one he posted earlier is the corrected version.
1.) There is no divergence between the COOP network and CRN from 2005 (when CRN went online) to 2014. That is because poor Microsite exaggerates trend, and there is no significant trend during that interval to exaggerate.
2.) There is a trend from 1999 and 2008. A cooling trend. And the poorly sited stations cool more rapidly than the well sited stations during that interval.
Therein we see that heat sink exaggerates trends — in either direction — and when there is no trend to exaggerate, there will be no divergence between well and poorly sited stations.
Going forward when it warms it will be interesting.
In that case I would expect a divergence, with the poorly sited stations showing the highest trends.
If the sun a.) does a bunk, and b.) the data gives half a hoot about it, then, in combo with the current negative PDO (etc.) progression, we might see a bit of cooling — and the poorly sighted stations would be expected to exaggerate that trend.
If the negative PDO pushes down with AGW pushing up, and the trend remaining flat, expect no material divergence between well and poorly sited stations (though the poorly sited stations would probably warm more in summer and cool more in winter).
Eventually, the PDO will flip back to positive and we will be into medium-term warming no matter how you slice it. Of course, the microsite problem may be either solved or reasonably adjusted for (by us if no one else). It’s even possible almost-as-good alternative energy will be available (but don’t bet on wind/solar as currently approached).
“Of course, the microsite problem may be either solved or reasonably adjusted for (by us if no one else)”
To me, it is far from obvious that poor measurements can be compensated for by automatic routines. It is not even obvious to me that poor measurements can be adjusted by manual routines.
You poked them in the UHI.
Youch!
A station rating of one (1) has an error range of ~2,5 degrees. How many stations got a rating of one (1)? That’s gonna leave a mark.
Sorry. Should read “error range of ≤1 degree”.
Rather few. And their trends are higher than Class 2. That is because almost all Class 1 stations are CRS units, and those will have an inherently exaggerated trend, no matter how well they are sited.
Not bad.
But you can go further than that. We find that UHI, while it may have a significant effect on offset, has not much discernible effect on trend, not for the unperturbed set, anyway. And the compliant urban set trends well under the non-compliant rural set.
It’s all down to Microsite. Via the heat sink effect.
Microsite is the New UHI. You heard it here, first.
Evan, please give us a clean “laymans” definition of “microsite”
[The “very local” 10-50-100-500 meters around a site that affects any or all of the following factors:
Local sensible heat sources (air conditioners, heaters, stoves, ovens, buildings, furnaces, kilns, or generators; these may, or may not, be running at any given time. 5, 10, to 20 meter effect.)
Local radiated and re-radiated energy (from buildings, walls, asphalt or concrete parking lots, sidewalks, and parking garages. 10-50 meter effect.)
Local wind breaks, or wind accelerators. (Wind is blocked by a building or wall, or wind is accelerated across the sensor by being forced between a row of buildings at certain wind directions, or air is moved from a hot spot (parking lot or building wall) towards (or away from) the sensor. 50-500 meter effect.)
Local shading (or removal!) of natural shading and trees over time. 10-50 meter effect.
Local UHI. An otherwise “ideal” sensor recording good data for the nearest 500 meters unchanged is in the middle of a small city or county whose 10,000-50,000 meter radius now has 10x to 50x the urban heat island seen in the 1920’s or 1930’s.
.mod]
I’ll add that UHI is inherently non-local. It is Mesosite. Microsite is only concerned with the immediate proximity of the station, be it urban or non-urban: at most 100 m distant, and usually what matters is the 30 m and 10 m radii. Well sited urban station trends (sic) clock in lower, on average, than poorly sited non-urban stations. Microsite IS the New UHI. ~ Evan
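For a concrete, if greatly simplified, picture of how a microsite rating might be assigned, here is an illustrative sketch based only on the distance from the sensor to the nearest artificial heat source or surface. The thresholds loosely echo the radii discussed above; the real Leroy (2010) scheme also weighs the fraction of artificial surface within each radius, shading, and other factors, so treat this as illustration only, not the study's rating procedure.

```python
def microsite_class(nearest_artificial_m: float) -> int:
    """Return an illustrative 1 (best) to 5 (worst) siting class from the distance,
    in metres, between the sensor and the nearest artificial heat source or surface.
    Thresholds are placeholders, not the full Leroy (2010) criteria."""
    if nearest_artificial_m >= 100:
        return 1
    if nearest_artificial_m >= 30:
        return 2
    if nearest_artificial_m >= 10:
        return 3
    if nearest_artificial_m >= 3:
        return 4
    return 5

# "Compliant" in the study's sense means Class 1 or 2.
print(microsite_class(45) <= 2)   # True: nearest parking lot 45 m away
```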
Scott: Until Evan has time to answer, just a couple of excerpts from the above press release that might be helpful:
**{my guess is: “unnatural thermal mass” = heat (or energy, heh) -retaining/emitting to a degree not normally found in nature}
That is to say, the above criteria would be the characteristics of a given “microsite.”
Just a little help (I hope) from your non-tech, friendly, neighborhood librarian,
Janice
Hurrah for .mod! #(:))
Sure hope Scott reads your great response to him above!
Should Urban readings even be included in the global calculation?
They (urban sites) are not representative of offset. But if they are well sited, their trends are useful and should be included.
The problem isn’t urban itself, it is the change in the microsite conditions over time. If the heatsinks in the area change during the period of study a bias will appear in the data. In the study of “climate change” it is the changes that make or break the data. If you build a parking lot, change the surface of the nearby playground, build a large building, install a chiller plant, upgrade a chiller plant, change brown space to green space, these all will impact measurement trends, and that is what pollutes the trend data. Sure cities will be warmer at night than the surrounding countryside.
We can’t compare urban readings from the 30’s to now because of too much change in the microsite conditions though.
If a station’s microsite rating changes during the 30-year study period, we drop that station. Poor microsite exaggerates trends even when a station’s siting is constant and unchanging throughout.
I cannot emphasize how very important that concept is. Our entire hypothesis would be falsified without it.
Gratz, !!!
Was just wondering about this paper last week.
So were we.
If one is going to argue on the basis of evidence, obviously evidence matters. Good post!
You should be very proud of the time and effort put into this.
My biggest congratulations. Very impressive sir! Also to the coauthors and those who put so much time into supporting this effort.
Congratulations and a big thank you to all of the authors for this excellent work.
Outstanding work Anthony! I’ll reiterate what was said upstream: Reliable data is the basis of all science.
It’s not perfect, but it’s as good as it can reasonably be. We define our terms and what we think is going on in the paper itself.
We will also be archiving the data and formulas in Excel, which will put them in a format that anyone can dicker with: change the parameters, add or drop stations, change ratings, add categories (i.e., subsets), add whatever other version of MMTS adjustment you like, that sort of thing. (And I have some iconoclastic notions of how MMTS should really be addressed.)
But the thing is, we welcome review. Some station ratings are obvious at a glance, but there are a few close calls. So it will all be open for review, complete with tools to test and vary. This paper is not intended as an inalterable doctrine. It is just part of a process of knowledge in a format it is easy to alter and expand.
If anyone has any questions, I’ll be glad to answer.
How many stations were “close calls”? Would it be possible to take a station that was borderline between say 1 and 2, and call it a 1.5? I suppose if there are only a dozen or so close call stations, any change to the results would be too small to be meaningful.
The only case where it makes a dime’s worth of difference is the Class 2/3 demarcation. That is where the biggest difference occurs. That is the split between compliance and non-compliance.
There is a small handful of stations that are close calls. Some time earlier, for experimental purposes, I dropped the five coolest Class 1/2 stations. The trends were, of course, a bit higher, but the confidence remained statistically significant (95%+ level).
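For the curious, a sensitivity check of that kind can be sketched in a few lines; the trend numbers below are invented placeholders, not the study's values.

```python
import numpy as np

# Invented Class 1/2 per-station trends, in deg C per decade.
class12_trends = np.array([0.12, 0.15, 0.17, 0.18, 0.19,
                           0.20, 0.21, 0.22, 0.24, 0.26])

def mean_trend_dropping_coolest(trends, n_drop):
    """Mean trend after removing the n_drop lowest per-station trends."""
    return float(np.sort(trends)[n_drop:].mean())

print("all stations:  ", round(mean_trend_dropping_coolest(class12_trends, 0), 3))
print("drop 5 coolest:", round(mean_trend_dropping_coolest(class12_trends, 5), 3))
```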
Gives one hope that the (very brief) age of science hasn’t yet been ground to a halt by magnet therapy, vitamins, and global warming for big bucks. Congratulations, Anthony.
this is the main takeaway for me after the results bcbill. outstanding effort by anthony and the team. the dogged determination to get it right, the huge amount of time and effort that took and the continued commitment to make sure all data is made available to ensure the in-depth scrutiny a paper so important requires.
thank you all involved for restoring some confidence in science, for me at least.
Great work, the amount of hard work that must have gone into this astounds me.
Whoops!
….a warm bias mainly due to pain and maintenance issues…
A bit of a typo there, I think. Or hope!!
Painting is painful for me. Shoulder problems.
I tend to get a little hot headed when in pain.
Congratulation to AW and co-authors. The station ground truth data collected by volunteers is pure gold.
I conducted a small experiment using just the surface stations CRN1 from the database, guest posted here earlier this year. What was compared was GISS raw to GISS homogenized for those pristine stations. (Did not expand to CRN2 to get valid statistics, as my Koch check never arrived.) What it showed (keep in mind the limited sample size did not provide conclusive statistics) was that GISS homogenization did a fairly decent job of removing large urban UHI, but for suburban and rural stations it imported heat ‘contamination’ from poorly microsited ‘adjacent’ stations. In other words, the homogenized GISS end result is irreparably unfit for purpose. For sure for CONUS. Essay When Data Isn’t suggests the general result is also true globally, and not just for GISS. For the same reasons.
Essay When Data Isn’t suggests the general result is also true globally, and not just for GISS.
We would like nothing more than to take this show on the road to the GHCN. But that would require either an intense and precise foreign volunteer effort — or real funding.
Online satellite resources such as Google Earth are a lot better than they used to be, but are as yet inadequate to the entire global task. In some areas (not by any means all) of the US, you can pick an MMTS off a fly’s butt and trace its funky little shadow. Outer Mongolia, not so much. And, “Beware the Bight of Benin. Those who go in don’t come out again.”
We’d have to leg it or have other legs leg it to those stations and observe them with Leroy (2010) parameters in mind while they’re doing it. And my Uzbecki is getting a little rusty.
I have the feeling that what ya’ll started is gonna change things.
Thanks for the hard work !!
Evan, this could be crowd-sourced on a larger scale. My comment went to data like Koutsoyiannis on GHCN, or the Aus BOM, for example Rutherglen. Not for you or AW to organize, but it could be done.
Congratulations Anthony and all. I was pleased to buy the first publication on surfacestations… to help with funding. The fact that almost every site was visited and photographed by volunteers – and what a rogues’ gallery of station pictures!! When they came out, NOAA ran out of all their offices and took down the worst stations in the album. The optics of this for the world’s number one climate agency must have scared the daylights out of them. It woke them up for sure. They probably spent a good part of that year’s budget digging up the worst stations, putting out papers and op-eds, polishing the door knobs and just about everything else they could think of.
Having visited essentially all but a few of the stations, in my mind, makes you guys THE experts on the US temperature networks. Collectively, I would say more work on this one metric, which has caused so much angst and trillions in spending on energy toys and studies, was done by Anthony et al. than by all the smoke-shoveling of the world’s temperature agencies and university departments. Big computers adjusting the world with algorithms have been shown how the job is done!
I say the rest of the world can also be done. A call from the mighty WUWT would reach all 200 countries in an hour. Crowd sourcing, photos and videos of each station would be done, and the best selected (you might have to go with Classes 2 and 3 for the rest of the world, though, perhaps adjustable using a factor you have determined for these cases in the US). This would finally create the WUWT Global T Network. I suggest your 30-year trend is even at least slightly warmer than reality, but probably the best we can do. The work would be even better with funding to twin random stations worldwide with the newest temperature instruments available, running them side by side to see what we get. I’m sure Canada and Australia could be done fairly quickly; most of Europe is what we would call a short drive and should be done quickly. Add Mexico, and soon the argument that the US is only 3% of the land mass would be shut off.
The next thing is to bring the work up to 2015 and compare it with the satellite record and CRN. I believe we are going to get wonderful corroboration with the satellite records.
Oh and evanjones, I’ve been to the Bight of Benin a couple of times, once in the 1960s for three years with a civil war on that killed 3 million people and I came back again! Of course, I’m from Manitoba.
Thank you for your service to humanity.