Thirty-year temperature trends are shown to be lower when computed from well-sited, high-quality NOAA weather stations that require no adjustments to their data.
This was in AGU’s press release news feed today. At about the time this story publishes, I am presenting it at the AGU 2015 Fall meeting in San Francisco. Here are the details.
NEW STUDY OF NOAA’S U.S. CLIMATE NETWORK SHOWS A LOWER 30-YEAR TEMPERATURE TREND WHEN HIGH QUALITY TEMPERATURE STATIONS UNPERTURBED BY URBANIZATION ARE CONSIDERED
Figure 4 – Comparisons of 30 year trend for compliant Class 1,2 USHCN stations to non-compliant, Class 3,4,5 USHCN stations to NOAA final adjusted V2.5 USHCN data in the Continental United States
EMBARGOED UNTIL 13:30 PST (16:30 EST) December 17th, 2015
SAN FRANCISCO, CA – A new study of the surface temperature record, presented at the 2015 Fall Meeting of the American Geophysical Union, suggests that the 30-year temperature trend for the Continental United States (CONUS) since 1979 is about two-thirds as strong as the official NOAA temperature trend.
Using NOAA’s U.S. Historical Climatology Network, which comprises 1218 weather stations in the CONUS, the researchers identified a 410-station subset of “unperturbed” stations that have not been moved and have had no equipment changes or changes in time of observation, and thus require no “adjustments” to their temperature record to account for these problems. The study focuses on finding trend differences between well sited and poorly sited weather stations, based on a WMO-approved metric, Leroy (2010)1, for classification and assessment of measurement quality based on proximity to artificial heat sources and heat sinks which affect temperature measurement. An example is shown in Figure 2 below: the NOAA USHCN temperature sensor for Ardmore, OK.
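For readers unfamiliar with the Leroy (2010) scheme, the core idea is simply that a station's class degrades as artificial heat sources and sinks get closer to the sensor. Here is a toy sketch of that kind of distance-based rating; the thresholds below are illustrative placeholders, not the actual Leroy criteria, which also weigh shading, slope, and the fraction of nearby artificial surface:

```python
def siting_class(dist_to_heat_source_m: float) -> int:
    """Toy Leroy-style microsite rating: the farther the nearest artificial
    heat source/sink, the better (lower) the class.

    NOTE: thresholds are illustrative placeholders, NOT the real
    Leroy (2010) criteria, which also consider shading, slope, and
    the area fraction of artificial surfaces near the sensor.
    """
    if dist_to_heat_source_m >= 100:
        return 1
    if dist_to_heat_source_m >= 30:
        return 2
    if dist_to_heat_source_m >= 10:
        return 3
    if dist_to_heat_source_m >= 3:
        return 4
    return 5

def is_compliant(dist_m: float) -> bool:
    """'Compliant' in the study's sense means Class 1 or 2."""
    return siting_class(dist_m) <= 2

print(siting_class(150), siting_class(5))  # prints: 1 4
```

The point of the binning is only that "compliant" and "non-compliant" become discrete, auditable categories, which is what allows the trend comparison between classes.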
Following up on a paper published by the authors in 2010, Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends2 which concluded:
Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends
…this new study is presented at AGU session A43G-0396 on Thursday, Dec. 17th at 13:40 PST and is titled Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network
A 410-station subset of U.S. Historical Climatology Network (version 2.5) stations is identified that experienced no changes in time of observation or station moves during the 1979-2008 period. These stations are classified based on proximity to artificial surfaces, buildings, and other such objects with unnatural thermal mass using guidelines established by Leroy (2010)1. The United States temperature trends estimated from the relatively few stations in the classes with minimal artificial impact are found to be collectively about 2/3 as large as US trends estimated in the classes with greater expected artificial impact. The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization. The homogeneity adjustments applied by the National Centers for Environmental Information (formerly the National Climatic Data Center) greatly reduce those differences but produce trends that are more consistent with the stations with greater expected artificial impact. Trend differences are not found during the 1999-2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.
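The headline comparison in the abstract, trends about 2/3 as large in the well-sited classes, reduces to fitting a least-squares slope to each class-mean temperature series and taking the ratio. A minimal sketch with synthetic data follows; the imposed trends, noise level, and the resulting ratio are made up for illustration and are not the study's data or code:

```python
import numpy as np

def trend_per_decade(monthly_anomalies):
    """OLS slope of a monthly anomaly series, in degrees per decade."""
    t = np.arange(len(monthly_anomalies)) / 120.0  # months -> decades
    slope, _intercept = np.polyfit(t, monthly_anomalies, 1)
    return slope

rng = np.random.default_rng(0)
months = 30 * 12  # 1979-2008
t = np.arange(months) / 120.0

# Synthetic class-mean anomaly series: same noise, different imposed trends
well_sited = 0.20 * t + rng.normal(0, 0.3, months)    # ~0.20 C/decade imposed
poorly_sited = 0.32 * t + rng.normal(0, 0.3, months)  # ~0.32 C/decade imposed

ratio = trend_per_decade(well_sited) / trend_per_decade(poorly_sited)
print(f"well/poor trend ratio: {ratio:.2f}")  # near 2/3 by construction
```

The same slope-and-ratio arithmetic applies whether the series are class means or individual stations; the study's statistical machinery then asks whether such ratios differ from 1 by more than chance.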




Key findings:
1. Comprehensive and detailed evaluation of station metadata, on-site station photography, satellite and aerial imaging, street-level Google Earth imagery, and curator interviews has yielded a well-distributed 410-station subset of the 1218-station USHCN network that is unperturbed by Time of Observation changes, station moves, or rating changes, and that has a complete or mostly complete 30-year dataset. It must be emphasized that the perturbed stations dropped from the USHCN set show significantly lower trends than those retained in the sample, for both the well and poorly sited station sets.
2. Bias at the microsite level (the immediate environment of the sensor) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend. Well sited stations show significantly less warming from 1979 – 2008. These differences are significant in Tmean, and most pronounced in the minimum temperature data (Tmin). (Figure 3 and Table 1)
3. Equipment bias (CRS v. MMTS stations) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend when CRS stations are compared with MMTS stations. MMTS stations show significantly less warming than CRS stations from 1979 – 2008. (Table 1) These differences are significant in Tmean (even after upward adjustment for MMTS conversion) and most pronounced in the maximum temperature data (Tmax).
4. The 30-year Tmean temperature trend of unperturbed, well sited stations is significantly lower than the Tmean temperature trend of NOAA/NCDC official adjusted homogenized surface temperature record for all 1218 USHCN stations.
5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.
6. The data suggest that the divergence between well and poorly sited stations is gradual, not the result of a spurious step change due to poor metadata.
The study is authored by Anthony Watts and Evan Jones of surfacestations.org , John Nielsen-Gammon of Texas A&M , John R. Christy of the University of Alabama, Huntsville and represents years of work in studying the quality of the temperature measurement system of the United States.
Lead author Anthony Watts said of the study: “The majority of weather stations used by NOAA to detect climate change temperature signal have been compromised by encroachment of artificial surfaces like concrete, asphalt, and heat sources like air conditioner exhausts. This study demonstrates conclusively that this issue affects temperature trend and that NOAA’s methods are not correcting for this problem, resulting in an inflated temperature trend. It suggests that the trend for U.S. temperature will need to be corrected.” He added: “We also see evidence of this same sort of siting problem around the world at many other official weather stations, suggesting that the same upward bias on trend also manifests itself in the global temperature record”.
The full AGU presentation can be downloaded here: https://goo.gl/7NcvT2
[1] Leroy, M. (2010): Siting Classification for Surface Observing Stations on Land, Climate, and Upper-air Observations JMA/WMO Workshop on Quality Management in Surface, Tokyo, Japan, 27-30 July 2010
[2] Fall et al. (2010) Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends https://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf
Abstract ID and Title: 76932: Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network
Final Paper Number: A43G-0396
Presentation Type: Poster
Session Date and Time: Thursday, 17 December 2015; 13:40 – 18:00 PST
Session Number and Title: A43G: Tropospheric Chemistry-Climate-Biosphere Interactions III Posters
Location: Moscone South; Poster Hall
Full presentation here: https://goo.gl/7NcvT2
Some side notes.
This work is a continuation of the surface stations project started in 2007, our first publication, Fall et al. in 2010, and our early draft paper in 2012. Putting out that draft paper in 2012 provided us with valuable feedback from critics, and we’ve incorporated that into the effort. Even input from openly hostile professionals, such as Victor Venema, has been highly useful, and I thank him for it.
Many of the valid criticisms of our 2012 draft paper centered on the Time of Observation (TOBs) adjustments that have to be applied to the hodge-podge of stations with issues in the USHCN. Our view is that trying to retain stations with dodgy records and adjusting the data is a pointless exercise. We chose simply to locate all the stations that DON’T need any adjustments and use those, thereby sidestepping that highly contentious problem completely. Fortunately, there were enough in the USHCN: 410 out of 1218.
It should be noted that the Class 1/2 station subset (the best stations we have located in the CONUS) can be considered an analog to the Climate Reference Network, in that these stations are reasonably well distributed in the CONUS and, like the CRN, require no adjustments to their records. The CRN consists of 114 commissioned stations in the contiguous United States; our subset is similar in size and distribution. This should be noted about the CRN:
One of the principal conclusions of the 1997 Conference on the World Climate Research Programme was that the global capacity to observe the Earth’s climate system is inadequate and deteriorating worldwide and “without action to reverse this decline and develop the GCOS [Global Climate Observing System], the ability to characterize climate change and variations over the next 25 years will be even less than during the past quarter century” (National Research Council [NRC] 1999). In spite of the United States being a leader in climate research, long term U.S. climate stations have faced challenges with instrument and site changes that impact the continuity of observations over time. Even small biases can alter the interpretation of decadal climate variability and change, so a substantial effort is required to identify non-climate discontinuities and correct the station records (a process called homogenization). Source: https://www.ncdc.noaa.gov/crn/why.html
The CRN has a decade of data, and it shows a pause in the CONUS. Our subset of adjustment-free, unperturbed stations spans over 30 years. We think it is well worth looking at that data and ignoring the data that requires loads of statistical spackle to patch it up before it is deemed usable. After all, that’s the stated reason the CRN was created.
We allow for one and only one adjustment to the data, and only because it is based on physical observations and is truly needed. We use the MMTS adjustment noted in Menne et al. 2009 and 2010 for the MMTS exposure housing versus the old wooden-box Cotton Region Shelter (CRS), which has a warm bias due mainly to [paint] and maintenance issues. The MMTS gill shield is a superior exposure system that prevents bias from daytime short-wave and nighttime long-wave thermal radiation. The CRS requires yearly painting, and that often gets neglected, resulting in exposure systems that look like this:
See below for a comparison of the two:
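The one adjustment described above is effectively a step correction: once the CRS box is replaced by an MMTS shield, the post-conversion segment of the record is shifted by a constant to remove the instrument-change discontinuity. A sketch of that operation; the offset value and conversion point below are made-up illustrations, not the actual Menne et al. values:

```python
import numpy as np

def apply_step_adjustment(series, change_index, offset):
    """Shift the segment of a record after an instrument change by a
    constant offset, e.g. to remove a known step introduced by a
    housing conversion. Returns a new array; input is not modified."""
    adjusted = np.asarray(series, dtype=float).copy()
    adjusted[change_index:] += offset
    return adjusted

temps = np.array([15.0, 15.2, 15.1, 14.6, 14.7, 14.5])  # hypothetical monthly means
# Suppose the CRS->MMTS conversion at index 3 introduced a -0.3 C step
# (made-up value); the adjustment raises the later readings by 0.3
fixed = apply_step_adjustment(temps, change_index=3, offset=0.3)
print(fixed)
```

A step adjustment like this leaves the trend within each segment untouched; it only removes the artificial jump between them, which is why it is defensible when the change date and magnitude are physically documented.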
Some might wonder why we use a 1979-2008 comparison when this is 2015. The reason is that this speaks to Menne et al. 2009 and 2010, papers launched by NOAA/NCDC to defend their adjustment methods for the USHCN from criticisms I had raised about the quality of the surface temperature record, such as this book in 2009: Is the U.S. Surface Temperature Record Reliable? This sent NOAA/NCDC into a tizzy, and they responded with a hasty, ghost-written flyer that they circulated. In our paper, we extend the comparisons to the current USHCN dataset as well as the 1979-2008 comparison.
We are submitting this for publication in a well-respected journal. No, I won’t say which one, because we don’t need any attempts at journal gate-keeping like we saw in the Climategate emails, e.g.: “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!” and “I will be emailing the journal to tell them I’m having nothing more to do with it until they rid themselves of this troublesome editor.”
When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper are available, we’ll welcome real and well-founded criticism.
It should be noted that many of the USHCN stations we excluded for station moves, equipment changes, TOBs changes, and the like had lower trends that would have bolstered our conclusions.
The “gallery” server from the 2007 surfacestations project, which shows individual weather stations and siting notes, is currently offline, mainly because it is attacked regularly and that affects my office network. I’m looking to move it to cloud hosting to solve that problem. I may ask for some help from readers with that.
We think this study will hold up well. We have been very careful, very slow and meticulous. I admit that the draft paper published in July 2012 was rushed, mainly because I believed that Dr. Richard Muller of BEST was going before Congress again the next week, using data I provided (which he agreed to use only for publications) as a political tool. Fortunately, he didn’t appear on that panel. But the feedback we got from that effort was invaluable. We hope this pre-release today will also provide valuable criticism.
People might wonder if this project was funded by any government, entity, organization, or individual; it was not. This was all done in free time, without any pay, by all involved. That is another reason we took our time: there was no “must produce by” funding requirement.
Dr. John Nielsen-Gammon, the state climatologist of Texas, has done all the statistical significance analysis, and his opinion is reflected in this statement from the introduction:
Dr. Nielsen-Gammon has been our toughest critic from the get-go; he has independently reproduced the station ratings with the help of his students and created his own series of tests on the data and methods. It is worth noting that this is his statement:
The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization.
The p-values from Dr. Nielsen-Gammon’s statistical significance analysis are well below 0.05 (the 95% confidence level), and many comparisons fall below 0.01 (the 99% confidence level). He is on board with the findings after satisfying himself that we have indeed found a ground truth. If anyone doubts his input to this study, view his publication record.
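As a rough illustration of the kind of check described, whether per-station trends differ by siting class more than chance would allow, here is a Welch-style two-sample comparison on synthetic station trends. The trend values, spreads, and station counts are invented, and Welch's t with a normal approximation stands in for whatever test battery was actually used:

```python
import numpy as np
from statistics import NormalDist

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

rng = np.random.default_rng(1)
# Synthetic per-station 30-year trends in C/decade; values are illustrative only
well = rng.normal(0.20, 0.08, 80)    # "well sited" class
poor = rng.normal(0.32, 0.08, 250)   # "poorly sited" class

t_stat = welch_t(poor, well)
# Two-sided p-value via normal approximation (adequate at these sample sizes)
p = 2 * (1 - NormalDist().cdf(abs(t_stat)))
print(f"t = {t_stat:.1f}, p = {p:.2g}")
```

With a genuine between-class difference of this size, the p-value lands far below the 0.01 threshold mentioned above, which is the sense in which such differences are called significant.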
COMMENT POLICY:
At the time this post goes live, I’ll be presenting at AGU until 18:00 PST, so I won’t be able to respond to queries until after then. Evan Jones “may” be able to after about 3:30 PM PST.
This is a technical thread, so those who simply want to scream vitriol about deniers, the Koch Brothers, and Exxon aren’t welcome here. The same goes for people who just want to hurl accusations without backing them up (especially those using fake names/emails; we have a few). Moderators should use proactive discretion to weed out such detritus. Genuine comments and/or questions are welcome.
Thanks to everyone who helped make this study and presentation possible.
Finally, what was suspected all along is now proven. I suspect the trend would be exactly what CET or Armagh “unadjusted urban” show.
Heh. Can’t go by one station, though. One needs a large set to beat the statistical significance monster, after all.
If Figure 4 is any clue, then “adjusted temperatures” can be just about anything.
Only in climate science can you have one set of data be low (Class 1/2), have a second set of data be in the middle (Class 3/4/5), and then have the final average of all the data be the highest of all (NOAA).
Adjusting and homogenising badly sited thermometers is about as logical as taking the average and standard deviation of many provably broken climate model outputs and pretending they represent something not inconsistent with your measurements.
It is very difficult to get a trend to 0.02 K/a from devices that are not measuring something that can be well defined to that precision. Simply saying “diurnal min/max temperature in shade at two meters height” is far from defining the problem. And I’m not denying it’s warming; our host calculated it as 0.2 K/decade in the US. It is just that the uncertainty is not only about the temperature, but about the thing being measured. I’m content with climate scientists as long as they don’t let uncertainty be used as a weapon for detrimental mitigation attempts.
One of my favorites
http://surfacestations.org/images/lovelock_mig480.jpg
(posting this here for it is, indeed, Mr. Jones, the “red meat” of this paper’s implications)
(Credit for noting the above to RD in her or his post far below, here: http://wattsupwiththat.com/2015/12/17/press-release-agu15-the-quality-of-temperature-station-siting-matters-for-temperature-trends/comment-page-1/#comment-2100535 )
This is a colossal effort and achievement by Anthony Watts and deserves the widest study and acknowledgement. I hope that there are no mis-guided efforts to block its publication. The benefits of this study are self-evident. Reliable data is the basis of all science, and reliable data has been missing from the Climate debate for a long while.
+1
Sou [snip] over at [snip] been trying to debunk this paper since she saw Anthony’s tweet in October! I’m so excited to read it Anthony!
Yes, really rattled the litter tray, excellent work!
Miriam in delirium.
How funny, I thought only “climate scientists” were qualified to speak of such an issue, or have an opinion on it. I guess I hadn’t realized an MBA and a bachelor’s in agricultural science and a “freelance consultant” position make you a “climate scientist”.
Yet she gave me a valuable forum. And I am grateful to her.
Miriam in delirium.
Meanwhile, Evan in seventh heaven (Region 7.)
Frankly, the personal insults ban should also apply here. Ms. O’Brien may be obsessive and vitriolic, but that’s no reason to bring insults toward her in this forum.
REPLY – +1. Hear, hear. Please, guys, if ever there was a time for the high road, it is now. You all know my feelings on the matter. ~ Evan
Two Labs,
There are other reasons. One example: I’ve never commented there, but she has referred to my comments here in very disparaging terms. Anthony gets treated even worse. So if your suggestion is to just turn the other cheek, I don’t agree, because that way you get slapped on both sides.
Anyway, we’re hardly being “personally insulting” to her. Just telling it like it is.
I totally agree. Mods, please feel free to remove my personally insulting comment.
REPLY – Thank you for that comment. Done. You are forgiven; go forth and sin no more. ~ Evan
What makes you a “climate scientist” is doing the drill and surviving peer (and independent) review. Anthony has done this several times. I have done it once.
No sheepskin required. Sou’s criticisms of this project have yielded value to it — and me. I so wish that there were not so much bad blood under the bridge. Both sides in this have a lot to learn from each other. Being on speaking terms helps.
The Rev doth bestride this narrow world like a colossus.
Welcome him to Vindication-Nation.
But this project demonstrates why Climate Science is not a science.
I like to think of it as how Climate Science lives and grows — like any other science.
Absolutely. Congratulations and many thanks to all involved.
But this project demonstrates why Climate Science is not a science. At the very start, the object is to get the best data quality possible and work with good data, not poor. But Climate Science has not evolved in that way. Heck, their main data set does not even measure what they are interested in, i.e., it does not measure energy and does not tell one whether energy is accumulating over time.
The first step that the Team should have engaged in was an audit of all the stations used to compile the land-based thermometer record, to identify those best sited, those with the best maintenance record and data-recording rigour, and those with the longest data record. They should only have used good-quality data sources requiring no adjustment whatsoever.
If Global Warming is truly Global, then one does not need 6000, or 2000 or so stations, but one does require good quality data. It would have been much better to have rooted out the good data sources even if this resulted in only 300 to 700 stations world wide. Heck, even 100 to 200 stations would tell us all we need to know if the data that they are returning is good data. Of course, there would be spatial issues but that is not really so much of a problem since Climate is regional and not global and climate response and impact to climate change is also regional and not global. What we need to know is what each continent is doing and what each country is doing so it does not matter greatly whether globally the distribution of the spatial coverage is less than ideal.
Presently what we are doing is simply evaluating the efficacy of the adjustments made to cr*ppy data. What is needed is only good data that needs no adjustments whatsoever.
There’s a link to this study on Drudge this morning.
Good job, Anthony. It’s been a very long haul for you and other authors of this work. We should also congratulate the many volunteers who took up the enormous task of documenting every surface station generating temperature data used to influence public policy, despite opposition from government satraps (who are therefore plainly unfit for public trust).
“I hope that there are no mis-guided efforts to block its publication.”
Unfortunately, as the fetid contents of the Climategate emails (undenied by their authors) plainly demonstrate on numerous occasions and beyond any possible doubt, there were persistently corrupt and malicious efforts to hinder publication which were not merely mis-guided.
Those efforts were very carefully guided, indeed, and demonstrate just how fundamentally untrustworthy the “climate science” establishment has been from inception — and always will be (because when it comes to personal integrity, the leopard never really changes his spots.) Corrupt people attract others and weed out of their ranks anyone who may question their carefully wrought fictions. These academic authorities hand pick and hand feed a new generation of “climate scientists”. There’s no reason to believe that crop of shiny new faces will offer any improvement. The acorn doesn’t fall far from the oak. The professional villainy won’t end when the present top-tier “team” of carney barkers and fraudsters have died or retired. It will be permanently institutionalized at the expense of the public.
Nothing from this branch of fraudulent “science” can be trusted. Especially assertions based on methods, data, or assumptions not independently verified (the way real science actually works). Altered data without the original available should be thrown out entirely since it’s as tainted and untrustworthy as the people who “adjusted” it. And anyone who points to it as evidence supporting any conclusive assertion should be pilloried as either a fool or a scammer. Probably both.
A long time ago
Thanks to all of you. I’m sure this will be front and center of the NYT and WSJ tomorrow.
(Sorry, couldn’t resist).
Thank you sincerely.
Maybe FoxNews……
Next they will dilute the mercury in the thermometers.
Have you ever tried that?
Well done. Good science well described. It’ll be fun to see the responses.
Anthony, I, and I’m sure, the rest of the “screeching mercury monkeys” who surveyed stations back in the day thank and congratulate you on persevering with this research. These results demonstrate clearly that method matters and fiddling with numbers ex post facto isn’t going to fix faulty procedures.
I’m another one, and I echo your expressions and opinions. It’s so nice to see this work coming to fruition.
Ook. Ook. (Scritch-scratch.)
Evan, and many thanks to you too for all your hard work on this project verifying the surveys. EE-EE-OO-OO-AH-AH.
Ting, tang, walla-walla bing-bang.
All of the surveyors are brothers in arms. All of you own a piece of this.
We are so proud of you, Anthony (et. al.)!!
So very proud.
********************************
(sometime, how about a list — as a posted article — of all (local site guys, etc…) who made this giant effort possible?)
Hear, hear, Janice Moore.
Yes, hear, hear, Janice!
You, Sir, are a Great American.
(I was addressing Anthony, but the same applies to his collaborators.)
He’s great bloke.
NOBODY BEATS THE REV.
Yes, at last the evidence we all knew was there and somehow nobody was able to give us! This is a massive achievement and breaks the foundations of the lie on which this fake science has been built over many years. Yes everyone who has been following this site must feel pride and joy for what Anthony Watts has and is achieving.
breaks the foundations of the lie
Oh, you mustn’t say that. That was not a lie. It was an error. Now they get to check us out for errors. This is how science progresses.
You mean Anthony only used people who were not on the take of the Koch brothers and big oil.
Is that even legal to do climate research without oil money or Koch money ??
I think the OED definition of science talks about “observation and experimentation”.
Good enough for me.
Anthony’s Army all deserve to be publicly recognized for their home-grown, do-it-yourself, go-out-and-observe science achievement. You hired a great crew, Anthony.
Cheap too !
g
Anthony’s Army
“Still Recruiting.”
mass movement warning
tenere scepticismo
Here’s hoping you help move the subject from “settled” back to science.
Ooo, nice idea for a Josh cartoon, Mr. Din …
Anthony (et. al.) standing in front of big billboard, painting a line through “SETTLED” … with a wry smile…
Brilliant !!!
I like it.
Thanks Marcus and Evan! (on behalf of Mr. Din, too)
De nada. (De mucho.)
and Weird?
What really has been missed in this whole debate was that there are in fact FOUR RADIOSONDE data sets that AGREE with TWO Satellite data sets which show NO warming for the past 18 years. This is incontrovertible evidence. Somehow the radiosonde data was never mentioned or put on graphs until recently. I find this an incredible omission. I wonder if this data corresponds well with Anthony’s latest unadjusted compliant surface data for the same period… trend anyway.
“Incontrovertible”? Nothing in empirical science has that status. In using that term you only ape the corruption of the APS and other attempts to close debate by those on the other side. Otherwise I thoroughly support your case.
Radiosonde comparison has been frequently mentioned at Steve Goddard/Tony Heller’s realclimatescience.com
Can you provide a link to the RADIOSONDE data sets? I would like to add them to DebunkingClimate.com
jim: until someone comes along with better info., here is what I found:
1. Data sets (click on page linked to “download”): https://ghrc.nsstc.nasa.gov/hydro/details.pl?ds=gpmradsecgcpex
2. A paper you might find of interest:
MSU Tropospheric Temperatures: Dataset Construction and Radiosonde Comparisons
(Christy, Spencer, and Braswell, 1999)
http://www.ncdc.noaa.gov/temp-and-precip/msu/uah-msu.pdf
Best wishes finding what you are looking for,
Janice
P.S. To jim: Here are two helpful (I hope!) excerpts from the Christy et al. paper:
(Christy, et. al, 1999 (linked just above) at 1153, 1165)
Correct me if I’m wrong, but aren’t the satellite data adjusted to match the radiosonde data in some fashion?
Maybe adjusted isn’t the right word, but I thought the radiosonde data were somehow used as a reference for deciding what satellite data correlate to a certain tropospheric temperature.
If so that doesn’t invalidate the significance of this correlation, but they shouldn’t be considered two completely independent data sets.
No, they are independent sets.
I never heard that before regarding 4 radiosonde sets. That would be great to get a look at all four of them side by side.
After looking at the data like this, I started to look at how much each series changes going from min to max and back, and while the absolute temps aren’t the same in the different zones, this daily cycle over the year returns on average to 0.0 F.
Indeed, you did — and did a mighty fine job of it, too:
“Climate science is all about surface temperature trends. The problem with this is that the CAGW is a rate of cooling problem, not a static temperature problem. … What can weather station data tell us about this?”
Michael Crow
http://wattsupwiththat.com/2013/05/17/an-analysis-of-night-time-cooling-based-on-ncdc-station-record-data/
Janice, how many RAM chips do you have protruding from your head ?? How the heck did you remember that ??
That will have been a reference to our original Tmin findings in Fall et al. (2011). Those numbers will shortly be superseded by our current paper, which is, in a sense, a followup study, far more intensely done and using a far more difficult rating process.
Hi, Marcus — lol, I have so little else to occupy my RAM, that WUWT stuff can use most of the available RAM (some of it is, unfortunately, on ROM and my brain refuses to access it to let me write what it says… IOW: I forget a lot, too) — Mike Crow’s work impressed me from the start and not TOO long ago, there was a thread that also brought his fine work to mind… .
And, Marcus: don’t ever go away — WUWT needs your lovely personality (things can get mighty, MIGHTY, heeeaaaaavvvvy around here sometimes (even to the point of fiercely mean! — you keep the atmosphere light and healthy — humor, enthusiasm, and good cheer are ESSENTIAL!).
Each one of us has a role to play. Each one of us is important.
Janice, memory that you can’t access would be WOM. Write only memory. I have a data sheet around here somewhere for one of those.
Thanks for the computer science lesson, MarkW. I really messed up how I wrote that. I used the fact that ROM (read only memory) cannot be altered by the “reader,” thus, could not be accessed in such a way as to make it available to my “write-out” (i.e. memory recall) code. I blew it!
And, want to (just in case you see THIS one, heh) say: Way to go standing up for truth in science (against AGW) to the extent that you lost your job at a major laboratory — you are a hero for truth!
Janice
Janice, thank you (again) for your continued praise of my effort, I truly appreciate it!
My pleasure, Mike.
Very nice work!! A study based on actual findings, not the “could, if, or may occur” produced by models.
Well done, Anthony et al. Hope the presentation went well. Thanks for keeping up the pressure.
It never stopped since the end of 2008. Never, ever. We were at it all the time.
And, it is high time to say:
Thanks, too, for going the extra mile and showing up here when you are likely exhausted.
Sleep well,
Janice
You’re most welcome. Yet always keep in mind that the Rev is the Grand Old Man. I am proud and privileged to be his mudslogger.
(Jump on in. The water’s, er, lukewarm!)
Excellent work Anthony et al.
Nice! Some real climate science for a change!
+ 10
I think the interesting comparison is between these data sets and the USCRN, the climate reference network.
All those are pristine top quality sites with triple redundant aspirated temperature sensors.
No adjustments allowed or needed, and guess what… they show NO warming for the past 10 years. The decade time interval probably extends to the right of Anthony’s graph.
Why on earth should NOAA have any interest in showing curves like this? :
“The U.S. Climate Reference Network (USCRN) is a systematic and sustained network of climate monitoring stations with sites across the conterminous U.S., Alaska, and Hawaii. These stations use high-quality instruments to measure temperature, precipitation, wind speed, soil conditions, and more. Information is available on what is measured and the USCRN station instruments.
The vision of the USCRN program is to provide a continuous series of climate observations for monitoring trends in the nation’s climate and supporting climate-impact research.
Stations are managed and maintained by the National Oceanic and Atmospheric Administration’s (NOAA) National Centers for Environmental Information.”
Two things are important to note here.
1.) The trend is flat, just an insignificant bit on the cool side.
2.) COOP tracks very well with CRN from 2005 to 2014.
(Trends are to be considered Tmean unless otherwise specified.)
This is important, because it supports our hypothesis: Poor microsite exaggerates trend. And it doesn’t even matter if that trend is up or down.
Poor microsite exaggerates a warming trend, causing a divergence with well sited stations. Poor microsite also exaggerates a cooling trend, causing an equal and opposite divergence. And if there is essentially no trend to exaggerate (as per the 2005-2014 interval), there will be essentially no divergence.
That explains why poorly sited stations have stronger warming trends than well sited stations from 1977 – 1998. It explains why poorly sited stations have stronger cooling trend from 1999 – 2008. And, finally, it explains the lack of divergence between COOP and the CRN from 2005 – 2014. That is what is called working forward, backward — and sideways.
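The exaggeration argument above can be sketched numerically. This toy model is mine, not the paper's: suppose a heat sink scales a station's true anomaly signal by some hypothetical factor k > 1, then fit ordinary least-squares trends to the well sited and poorly sited versions.

```python
import numpy as np

def ols_trend(y):
    """Ordinary least-squares slope of y per time step."""
    t = np.arange(len(y), dtype=float)
    return np.polyfit(t, y, 1)[0]

k = 1.5                                # hypothetical heat-sink amplification
years = np.arange(30, dtype=float)     # 30 annual anomalies

for true_slope in (0.02, -0.02, 0.0):  # warming, cooling, flat (degC/yr)
    well_sited = true_slope * years
    poorly_sited = k * well_sited      # exaggerated response, same sign
    divergence = ols_trend(poorly_sited) - ols_trend(well_sited)
    print(f"true trend {true_slope:+.3f} -> divergence {divergence:+.4f}")
```

In this idealized setup the divergence works out to (k - 1) times the true slope: positive under warming, negative under cooling, and zero when there is no trend to exaggerate, which is the forward-backward-sideways pattern described above.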
Evan, when looking at how individual stations’ measured temps evolve (the difference between daily rising and falling temps) over a year’s time, I also found it to be slightly cooling.
If you haven’t seen what I’ve done previously, I think it’s a nice complement to your team’s work. I haven’t looked at absolute temperature trends, just the delta change, and have processed unaltered station data into various sized grids.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/
Science or Fiction, thanks for posting the recent USCRN plot. It would be interesting to see a comparison plot for the same time period using the best sited USHCN sites that Anthony and company examined. Anthony, this would make a great post in the future … hint hint.
That explains why poorly sited stations have stronger warming trends than well sited stations from 1977 – 1998. It explains why poorly sited stations have stronger cooling trend from 1999 – 2008. And, finally, it explains the lack of divergence between COOP and the CRN from 2005 – 2014. That is what is called working forward, backward — and sideways.
And CRN from 2001 to 2015 doesn’t diverge from good or bad stations.
Going forward when it warms it will be interesting.
Problem is, those results would, in isolation, be moot. There should be little divergence in trend between well sited USHCN, poorly sited USHCN, and CRN, because during the interval they overlap there is essentially no trend to exaggerate.
Evan, what is COOP? Also, I tried to post this earlier from home (on my 3rd computer) without success, I think. Here it is again. I think it is related to what you are saying here.
Anthony, Evan, or anyone else who may know: What is meant by this? “Trend differences are not found during the 1999- 2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures”.
Does this mean no trend difference between the Class 1/2 versus Class 3/4/5 during this time or none between the NCDC adjustments and the Class 1/2 during the last 7 years?
1.) To be clear, the COOP network is the entire ~6000-station NOAA stationset, of which the 1218-station USHCN is a subset of the “best” stations, those with the longest history and most complete data/metadata.
2.) I think Anthony may have used a pre-edited version of the abstract. The one he posted earlier is the corrected version.
1.) There is no divergence between the COOP network and CRN from 2005 (when CRN went online) to 2014. That is because poor Microsite exaggerates trend, and there is no significant trend during that interval to exaggerate.
2.) There is a trend from 1999 and 2008. A cooling trend. And the poorly sited stations cool more rapidly than the well sited stations during that interval.
Therein we see that heat sink exaggerates trends — in either direction — and when there is no trend to exaggerate, there will be no divergence between well and poorly sited stations.
Going forward when it warms it will be interesting.
In that case I would expect a divergence, with the poorly sited stations showing the highest trends.
If the sun a.) does a bunk, and b.) the data gives half a hoot about it, then, in combo with the current negative PDO (etc.) progression, we might see a bit of cooling, and the poorly sited stations would be expected to exaggerate that trend.
If the negative PDO pushes down with AGW pushing up, and the trend remaining flat, expect no material divergence between well and poorly sited stations (though the poorly sited stations would probably warm more in summer and cool more in winter).
Eventually, the PDO will flip back to positive and we will be into medium-term warming no matter how you slice it. Of course, the microsite problem may be either solved or reasonably adjusted for (by us if no one else). It’s even possible almost-as-good alternative energy will be available (but don’t bet on wind/solar as currently approached).
“Of course, the microsite problem may be either solved or reasonably adjusted for (by us if no one else)”
To me, it is far from obvious that poor measurements can be compensated for by automatic routines. It is not even obvious to me that poor measurements can be adjusted by manual routines.
You poked them in the UHI.
Youch!
A station rating of one (1) has an error range of ~2.5 degrees. How many stations got a rating of one (1)? That’s gonna leave a mark.
Sorry, should read “error range of ≤1 degree”.
Rather few. And their trends are higher than Class 2. That is because almost all Class 1 stations are CRS units, and those will have an inherently exaggerated trend, no matter how well they are sited.
Not bad.
But you can go further than that. We find that UHI, while it may have a significant effect on offset, has little discernible effect on trend, not for the unperturbed set, anyway. And the compliant urban set trends well under the non-compliant rural set.
It’s all down to Microsite. Via the heat sink effect.
Microsite is the New UHI. You heard it here, first.
Evan, please give us a clean “laymans” definition of “microsite”
[The “very local” 10-50-100-500 meters around a site that affects any or all of the following factors:
Local sensible heat sources (air conditioners, heaters, stoves, ovens, buildings, furnaces, kilns, or generators. These may, or may not, be running at any given time. 5, 10, to 20 meter effect.)
Local radiated and re-radiated energy (from buildings, walls, asphalt or concrete parking lots, sidewalks, and parking garages. 10-50 meter effect.)
Local wind breaks, or wind accelerators. (Wind is blocked by a building or wall, or wind is accelerated across the sensor by being forced between a row of buildings at certain wind directions, or air is moved from a hot spot (parking lot or building wall) towards (or away from) the sensor. 50-500 meter effect.)
Local shading (or removal!) of natural shading and trees over time. 10-50 meter effect.
Local UHI. An otherwise “ideal” sensor, recording good data for the nearest 500 meters unchanged, is in the middle of a small city or county whose 10,000-50,000 meter radius now has 10x to 50x the urban heat island seen in the 1920s or 1930s.
.mod]
I’ll add that UHI is inherently non-local. It is Mesosite. Microsite is only concerned with the immediate proximity of the station, be it urban or non-urban: at most 100 m distant, and usually what matters is the 30 m and 10 m radii. Well sited urban station trends clock in lower, on average, than poorly sited non-urban stations. Microsite IS the New UHI. ~ Evan
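For readers who want a concrete feel for the ratings, here is a toy classifier. The distance thresholds are loosely patterned on the Leroy (2010) siting classes but quoted from memory, and the real scheme also weighs the area coverage of heat sinks within each radius, shading, and slope, so treat this as illustrative only.

```python
def toy_leroy_class(dist_m: float) -> int:
    """Toy siting class from distance (meters) to the nearest artificial
    heat source/sink. Thresholds are illustrative, not the full
    Leroy (2010) specification."""
    if dist_m >= 100:
        return 1
    if dist_m >= 30:
        return 2
    if dist_m >= 10:
        return 3
    if dist_m >= 1:
        return 4
    return 5

def is_compliant(dist_m: float) -> bool:
    """The study's key split: Class 1/2 compliant vs. Class 3/4/5 not."""
    return toy_leroy_class(dist_m) <= 2

print(toy_leroy_class(45), is_compliant(45))   # 2 True
```

Note how the Class 2/3 boundary, the compliance split discussed elsewhere in this thread, falls at the 30 m radius in this sketch.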
Scott: Until Evan has time to answer, just a couple of excerpts from the above press release that might be helpful:
**{my guess is: “unnatural thermal mass” = heat (or energy, heh) -retaining/emitting to a degree not normally found in nature}
That is to say, the above criteria would be the characteristics of a given “microsite.”
Just a little help (I hope) from your non-tech, friendly, neighborhood librarian,
Janice
Hurrah for .mod! #(:))
Sure hope Scott reads your great response to him above!
Should Urban readings even be included in the global calculation?
They (urban sites) are not representative of offset. But if they are well sited, their trends are useful and should be included.
The problem isn’t urban itself; it is the change in the microsite conditions over time. If the heat sinks in the area change during the period of study, a bias will appear in the data. In the study of “climate change” it is the changes that make or break the data. If you build a parking lot, change the surface of the nearby playground, build a large building, install a chiller plant, upgrade a chiller plant, or change brown space to green space, these all will impact measurement trends, and that is what pollutes the trend data. Sure, cities will be warmer at night than the surrounding countryside.
We can’t compare urban readings from the 30’s to now because of too much change in the microsite conditions though.
If a station’s microsite rating changes during the 30-year study period, we drop that station. Poor microsite exaggerates trends even when a station’s siting is constant and unchanging throughout.
I cannot emphasize how very important that concept is. Our entire hypothesis would be falsified without it.
Gratz!!!
Was just wondering about this paper last week.
So were we.
If one is going to argue on the basis of evidence, obviously evidence matters. Good post!
You should be very proud of the time and effort put into this.
My biggest congratulations. Very impressive sir! Also to the coauthors and those who put so much time into supporting this effort.
Congratulations and a big thank you to all of the authors for this excellent work.
Outstanding work Anthony! I’ll reiterate what was said upstream: Reliable data is the basis of all science.
It’s not perfect, but it’s as good as it can reasonably be. We define our terms and what we think is going on in the paper, itself.
We will also be archiving the data and formulas in Excel, which will put it in a format that anyone can dicker with it or change the parameters — add or drop stations, change ratings, add categories (i.e., subsets), add whatever other version of MMTS adjustment you like, that sort of thing. (And I have some iconoclastic notions of how MMTS should really be addressed.)
But the thing is, we welcome review. Some station ratings are obvious at a glance, but there are a few close calls. So it will all be open for review, complete with tools to test and vary. This paper is not intended as an inalterable doctrine. It is just part of a process of knowledge in a format it is easy to alter and expand.
If anyone has any questions, I’ll be glad to answer.
How many stations were “close calls”? Would it be possible to take a station that was borderline between say 1 and 2, and call it a 1.5? I suppose if there are only a dozen or so close call stations, any change to the results would be too small to be meaningful.
The only case where it makes a dime’s worth of difference is the Class2\3 demarcation. That is where the biggest difference occurs. That is the split between compliance and non-compliance.
There is a small handful of stations that are close calls. Some time earlier, for experimental purposes, I dropped the five coolest Class 1\2 stations. The trends were, of course, a bit higher, but the confidence remained statistically significant (95%+ level).
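That drop-the-coolest check can be sketched like this, with hypothetical station trends standing in for the real Class 1\2 subset:

```python
import statistics as st

def mean_with_ci(trends, z=1.96):
    """Mean trend and a simple normal-approximation 95% CI on the mean."""
    m = st.mean(trends)
    half = z * st.stdev(trends) / len(trends) ** 0.5
    return m, (m - half, m + half)

# Hypothetical per-station trends (degC/decade), for illustration only
station_trends = [0.12, 0.15, 0.18, 0.20, 0.21, 0.22, 0.23, 0.25,
                  0.26, 0.28, 0.30, 0.31]

full_mean, full_ci = mean_with_ci(station_trends)
drop_mean, drop_ci = mean_with_ci(sorted(station_trends)[5:])  # drop 5 coolest

print(f"all stations:      {full_mean:.3f}  CI {full_ci}")
print(f"5 coolest dropped: {drop_mean:.3f}  CI {drop_ci}")
```

As described, dropping the coolest stations nudges the mean trend up; the interesting question is whether the confidence interval moves enough to change any conclusion.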
Gives one hope that that (the very brief) age of science hasn’t yet been ground to a halt by magnet therapy, vitamins and global warming for big bucks. Congratulations Anthony.
this is the main takeaway for me after the results, bcbill. outstanding effort by anthony and the team: the dogged determination to get it right, the huge amount of time and effort that took, and the continued commitment to make sure all data is made available to ensure the in-depth scrutiny a paper so important requires.
thank you all involved for restoring some confidence in science, for me at least.
Great work, the amount of hard work that must have gone into this astounds me.
Whoops!
….a warm bias mainly due to pain and maintenance issues…
A bit of a typo there, I think. Or hope!!
Painting is painful for me. Shoulder problems.
I tend to get a little hot headed when in pain.
Congratulation to AW and co-authors. The station ground truth data collected by volunteers is pure gold.
I conducted a small experiment using just the surface stations CRN1 from the database, guest posted here earlier this year. What was compared was GISS raw to GISS homogenized for those pristine stations. (Did not expand to CRN2 to get valid statistics, as my Koch check never arrived.) What it showed (keep in mind the limited sample size did not provide conclusive statistics) was that GISS homogenization did a fairly decent job of removing large urban UHI, but for suburban and rural stations it imported heat ‘contamination’ from poorly microsited ‘adjacent’ stations. In other words, the homogenized GISS end result is irreparably unfit for purpose. For sure for CONUS. Essay When Data Isn’t suggests the general result is also true globally, and not just for GISS. For the same reasons.
Essay When Data Isn’t suggests the general result is also true globally, and not just for GISS.
We would like nothing more than to take this show on the road to the GHCN. But that would require either an intense and precise foreign volunteer effort — or real funding.
Online satellite resources such as Google Earth are a lot better than they used to be but are as yet inadequate to the entire global task. In some areas (not by any means all) of the US, you can pick an MMTS off a fly’s butt and trace its funky little shadow. Outer Mongolia, not so much. And, “Beware the Bight of Benin. Those who go in don’t come out again.”
We’d have to leg it or have other legs leg it to those stations and observe them with Leroy (2010) parameters in mind while they’re doing it. And my Uzbecki is getting a little rusty.
I have the feeling that what ya’ll started is gonna change things.
Thanks for the hard work !!
Evan, this could be crowd sourced on a larger scale. My comment went to data like Koutsoyiannis on GHCN, or Aus BOM, for example Rutherglen. Not for you or AW to organize, but it could be done.
Congratulations Anthony and all. I was pleased to buy the first publication on surfacestations…to help with funding. Almost every site was visited and photographed by volunteers – and what a rogues’ gallery of station pictures!! When they came out, NOAA ran out of all their offices and took down the worst stations in the album. The optics of this for the world’s number one climate agency must have scared the daylights out of them. It woke them up for sure. They probably spent a good part of that year’s budget digging up the worst stations, putting out papers and op-eds, polishing the door knobs and just about everything else they could think of.
Having visited essentially all the stations but a few, in my mind, makes you guys THE experts on the US temperature networks. Collectively, I would say more work on this one metric that has caused so much angst and trillions in spending on energy toys and studies was done by Anthony et al than the smoke shoveling of the world’s temperature agencies and university departments. Big computers adjusting the world with algorithms have been shown how the job is done!
I say the rest of the world can also be done. A call from the mighty WUWT would reach all 200 countries in an hour. Crowd sourcing, photos, and videos of each station would be done, and selection of the best (you might have to go with classes 2 and 3 for the rest of the world, though – perhaps adjustable using a factor you have determined for these cases in the US). This would finally create the WUWT Global T Network. I suggest your 30-year trend is still at least slightly warmer than reality, but probably the best we can do. The work would be even better with funding to twin random stations worldwide with the newest temperature instruments available, running them side by side to see what we get. I’m sure Canada and Australia could be done fairly quickly; most of Europe is what we would call a short drive and should be done quickly. Add Mexico, and soon the argument that the US is only 3% of the land mass would be shut off.
The next thing is to bring the work up to 2015 and compare it with the satellite record and CRN. I believe we are going to get wonderful corroboration with the satellite records.
Oh and evanjones, I’ve been to the Bight of Benin a couple of times, once in the 1960s for three years with a civil war on that killed 3 million people and I came back again! Of course, I’m from Manitoba.
Thank you for your service to humanity.
They’ll counter the lower-tropo satellite response with “but we live in the cities.” On a rational note, it looks more and more like the 1998 El Nino brought in a step change, as nothing was really going on up to that point and not much since. Go figure.
All it would take is a change in the location of a large pool of warm ocean water that persists. The heated water that evaporates is carried downwind where the water vapor cools, part of liberating all of that energy (heat) warms everything else up, including surface stations.
How many billions of gallons of warm water (from vapor) is this El Nino transporting onto the continent to cool? How much energy does all that take?
We have air conditioning in the cities.
Climate change is fearsome for the wild lands.
Cities have air conditioning. Or will if Santa doesn’t take away all the coal.
My eyeballing the Wood for Trees plot suggests that even that compliant network’s trend exceeds the satellite-record trend for the same interval.
There are no compliant networks with which to make the comparison. CRN is the only one and that network has only been online during trendless times.
Our findings are ~10% under the RSS/UAH6.0 trends. And LT trends are supposed to be 10% to 40% higher than surface trends, depending on latitude. So our current results split the uprights, on the safe side. Not only is Klotzbach et al. vindicated (at least supported), but so is Dr. Christy.
I’m not following you. The plot above gives a 0.204 K/decade trend for 1979-2008, whereas eyeballing RSS and UAH on Wood for Trees gives me 0.16 and 0.14, respectively, for 1979-2008.
That is global data, including over the oceans. The CONUS saw higher trends than the global average. RSS and UAH6.0 clock in at ~10% higher than our Class 1\2 surface results, at ~0.225/decade (UAH5.6 a little higher).
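For those following the arithmetic, the ~10% figure works out as follows (numbers as quoted in this thread, so approximate):

```python
# Numbers quoted in this thread; treat them as approximate
surface_class12 = 0.204   # degC/decade, compliant Class 1\2 CONUS surface trend
satellite_conus = 0.225   # degC/decade, approx. RSS / UAH6.0 over CONUS

excess = satellite_conus / surface_class12 - 1
print(f"satellite trend exceeds surface trend by {100 * excess:.1f}%")  # ~10.3%
```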
Got it. Thanks for the response.
Hate to be the spoiler here, but….
All this work means nothing if NOAA doesn’t recant. As I have said many times before, here and elsewhere, this whole AGW thing is not about science, it is about money, and that makes it a fraud issue. You can pump out all the data you like, and personally, I believe it. But it was clear from the beginning there was no AGW. This study will just go in the trash, like all the others. Science is now corrupt, and the crooks are running the show. For every meaningful chart you show, they will come back with a mountain of hogwash.
If you really want to fight this corrupt influence on science, you have to go to the heart of it. Scientists committing fraud by lying to attract funds for their personal gain. That is a crime. It is white-collar crime. We put people in prison if they steal $10,000 from a bank, but when a scientist commits fraud for half a million, what do we do? Send him/her to an all expenses paid trip to Paris.
Turning a blind eye to this crime, will only make things worse as the years go on. With shield laws, like tenure, that are protecting criminals, textbooks written by snake oil salesmen, and institutions/universities/conferences/governments working together to conspire and defraud the taxpayer, you really think that a lowly over glorified group of bloggers is going to change the system? If you do, your damn fools!
When are you people going to face facts? It’s not about the science, it’s about money; always has been, always will be. Until we are prepared to treat white-collar criminals like we do blue-collar criminals, nothing is going to change. Do all the studies you like. YOU ARE ALL WASTING YOUR TIME, and putting society and the economy in real jeopardy. All because you/we are all too proud, or arrogant, or dare I say it, cowardly, to really face this problem head on. That is, the problem of white-collar crime!
you’re damn fools
You talkin to me?
Right behind you Dorian!!!
I can only assume that you’ve created an organization, located and rented headquarters, done all the required paperwork for tax purposes, created a foolproof campaign, hired the appropriate lawyers to put our case together and are ready to go with coffee pots and phones all plugged in and ready to roll. When’s the next meeting?
Somebody just wake you up, Brian H? Tourette’s syndrome? Do you make these cryptic non sequiturs often?
Ah, well. There’s no damn fool like an old damn fool.
Not sure, but Brian H may have been correcting Dorian’s “…your damn fools!” to “you’re damn fools”
Brian?
Dorian, calm down. The latest word is that NOAA gave some of the subpoenaed emails to Rep. Smith’s committee. Smith said he was working from NOAA whistleblower information. So the Karl ‘adjustments’ will likely become ‘Exhibit A’. It does not happen overnight when you are fighting a 25-year world war with government funding, the MSM, and leftist sentiments on the other side. But it can and does happen, one skirmish, one battle, at a time. Soldier on.
It must be just a coincidence that NOAA started to comply with Congress’ request, just after the Paris Climate Change Conference.
You can’t win a war without ammunition, and people like Anthony are manufacturing bullets (cannonballs are a closer approximation…or missiles…) 24/7 for the cause. But really, what does Dorian expect from a “lowly over glorified group of bloggers” anyway? Matching uniforms polished to gleaming… swords reflecting torchlight across the meadow in the pre-dawn light….a magnificent army so vast and so furious that a mere glance would make all enemies collapse like Mike Mann’s proxy data?
Sounds like Dorian needs to examine some FACTS himself. For example, since he doesn’t really spoil much of anything here, he can’t call himself “the spoiler here”. 🙂
You know, I commented on the thread where NOAA was pooh-poohing the overhyped Godzilla El Nino that there is nothing like a congressional investigation to moderate such an agency’s excessive enthusiasm for end-of-the-world climate. I didn’t know the emails were already flowing in!!! This is exactly the kind of activity needed to corral activist, ideologue science. Bad stuff gets done in the dark.
NOAA has been hit by a CRUZ missile !!!
NOAA employees are not just worried about their jobs right now; some could be facing jail time!!
“you really think that a lowly over glorified group of bloggers is going to change the system?”
What a sad, defeatist, ineffectual little coward you are.
Thankfully, we don’t have to rely on pathetic little gob$hites like you who are beaten before you start to get stuff done.
REGARDING NOAA
You don’t understand. It’s not what NOAA thinks. It’s whether these results stand up under review that ultimately counts. She do or she don’t. The real deal. It just takes a little time, that’s all. We don’t take their word for it, so it’s only common courtesy not to expect them to take our word for it.
On the one hand, Microsite/Tmean Trend having been (re)introduced as an issue, and the current results challenging the official record, this subject will no doubt undergo a degree of further examination.
On the other hand, we are making extraordinary claims. And extraordinary claims require extraordinary proof.
I would like to add, most emphatically, and in no uncertain terms:
This is no fraud. This is not a scam. NOAA has not lied. This is an error.
It is an error we ourselves partially made in Fall et al. by taking the easy way out and failing to convert to Leroy (2010). It appeared to be (and was) an intensely time-consuming task, and we thought (incorrectly) it wouldn’t have made any material difference anyway.
[Besides, who knows? I never quite did get around to running our unperturbed subset using Leroy 1999 ratings. But someday, maybe someday soon, I will. Maybe the findings will change as a direct result of addressing the criticisms of the 2012 pre-release. And if those results turn out to be compatible with what we have found using Leroy 2010, I am going to have one heck of a scientific horse-laugh. Our critics all seem to be enamoured of the quaint notion that the pre-release was for publicity purposes (loved it!) rather than for purposes of eliciting hostile independent review (the real and carefully explained reason).]
And there were very valid criticisms of our 2012 pre-release, criticisms we had to address. So if we made all those errors, how can we call fraud on NOAA if they make the exact same sorts of errors? Confirmation bias? There is no man without it on any side of this, and that’s why we have a scientific method: to protect ourselves from it.
Human nature is what it is, seeing as how scientists, or even those just playing one on TV (like me), are at least part human. I am not inclined to judge; I find it just gets in the way. So let us put our past differences aside, get our heads together, and make a little science, already.
Speaking personally, my favorite way of understanding the gestalt, the ebb and flow, of homogenization is to hash it out with the world’s leading expert on it, not to deconstruct abstracts. I’m a game designer/developer, not (just) a rules lawyer.
So how do I do that if I am not on speaking terms with him? I want to get my hands and head into this stuff. I don’t want to be always trading potshots. Let’s do science.
A beautiful call for civility: Treat opponents with respect and give them room to correct their errors. Make room for honest, mutually respectful disagreement. Give sympathetic people in NOAA cover to engage in dialog with skeptics.
This shows how science is done. Let’s hope it catches on in a field that needs it.
evanmjones,
“I would like to add, most emphatically, and in no uncertain terms:
This is no fraud. This is not a scam. NOAA has not lied. This is an error.”
I do not see how you could possibly know such a thing . . your sanity comes into question to my mind, by speaking so.
REPLY – I have been up to my eyeballs in the data, both raw and adjusted. Perhaps three thousand hours in. Maybe more. Having deconstructed the mechanisms, it is my honest opinion that this is error compounded by confirmation bias and not fraud, scam, or any other synonym thereof. We made much the same sort of mistakes ourselves, at the outset. Maybe we are making other mistakes, quien sabe? If so, they are honest errors. And I make no presumption that NOAA is not subject to the same degree of honest errors that we are.
As for my sanity, it has been called into question so many times, I have developed an immunity … but the above is my call from the trenches. ~ Evan
JK
I don’t “know” Evan, but reading the pattern of his replies, he realizes he is about to engage in a hostile environment. My guess is he wants to give the benefit of the doubt so that he can possibly divide and conquer. By taking his approach he allows those at the authority level to self-separate. If he comes in firing six guns, he makes it much harder for that to occur.
Perhaps that’s his frame of reference.
:::: sorry for jumping in evan, it’s just such a juicy topic :::::
REPLY – Sure is. And that’s what I do. It will be incumbent on me to defend this paper in hostile territory. And it is in hostile territory that I acquired the invaluable feedback that allowed for the corrections since 2012. That is valuable to me. I need the other side. So do we all, though some of us may not yet realize it. Besides, what I do is push. And in order to push, I need something to push against. We are not out to convince our pals. We are out to convince our opponents in this. ~ Evan
knutesea,
“… If he comes in firing 6 guns, he makes it much harder for that to occur.”
Do you feel sticking an ‘It seems likely to me’ in there somewhere, would make it any “harder”?
REPLY – I fight with knives in both hands. I was trained in deconstruction and the dialectic from the day I was born (and those who trained me have not always been entertained to find their own weapons turned upon them). But I prefer a clean fight. An honorable fight. No one will know the knife that bears the poison, not until I choose to use it. But, be advised, it is a part of me, part of my arsenal. I can no more lay it aside than cut off my hands. And any who engage me ignore that at their peril ~ Evan
I’m fine with wordsmithing. Some folks need more butter on their bread than others.
JK
I’m reading Unstoppable Global Warming 2007 Dennis Avery/Fred Singer
Funny annnnd probably true … a paraphrase:
“It’s far harder to understand a 1500-year cycle and things like orbits, tilts, and rotations. Of course it was easy to replace any talk of such things with an easy reason like carbon dioxide.”
Sir, he emphatically stated what he cannot (by any stretch of my imagination) know to be true . . this is not a good way to maintain credibility with serious people, it seems to me. Why one does it is virtually irrelevant to me; it’s crazy talk . .
Oops, out of place there . .
Oops, cancel that oops ; )
amen to that evan. too many people see this as an issue of right and left. for me it has always been about right or wrong. you, anthony, and the team are doing things the correct way. you guys do the science; leave the potshotting to idiots like me.
evan,
I was moving down the comment thread looking for a good place to add my praise and thanks for what you guys have done, when I came upon that emphatic declaration you made . . My concern in this particular matter is your credibility, honest.
REPLY – I know, really I do. And I appreciate it. Understand that when I do what I do, they can throw all the low blows they care to — but cannot lay a glove on me. I have disarmed them, evaded them, forced them to fight on my terms. Furthermore, they become aware that I have other weapons at my disposal that I do not — but can — use. And deterrence is a powerful tool. ~ Evan
Evan,
I would like to add, most emphatically, and in no uncertain terms:
This is no fraud to me. This is not a scam. NOAA has not lied. This is an error.
I grew up in a house full of knife wielders ; )
There will be a wave of many 1000’s of unfortunate computer hard drive crashes and email server backup tapes erasures across the Obama Admin when Hillary loses next November.
Dorian December 17, 2015 at 2:19 pm
All this work means nothing if NOAA doesn’t recant.
Truth always means something and will prevail in the end.
Dorian, you came late to the party, it seems. Skeptics are the number one target of the zealots. Your white-collar crime stuff has already spawned a number of whitewashes; the latest ones, however, based on skeptics’ information and data, are going to be something different. Guess how we know there has been white-collar crime in the first place? You are getting an inkling, I can feel it. It was through the relentless hard work of skeptics, who have never let anything go by that doesn’t look right. Skeptics published Climategate; skeptics turned the light on the RICO 20, the lead guy having collected 63 million dollars from one agency with no significant work to show for it, and having hired his wife and daughter to run the empire. The NSF didn’t go after them, skeptics did, and after the NSF, too. Skeptics have caused a number of scientific papers to be retracted. Skeptics have emboldened marginalized scientists of dissenting opinions to publish more and more good alternative climate studies; skeptics have given the most powerful testimony at Senate and Congressional committee hearings and in the UK parliament. That’s how it’s done. You don’t have much to contribute, it seems, except to put down skeptics’ efforts.
Early: I wish they were all dead.
Lee: Why, I do not wish they were all dead. I merely wish that they would return to their homes and leave us in peace.
Early (later, to Stuart): I would not say so in front of General Lee, but I not only wish they were dead, but in hell.
These attitudes manifested themselves in their respective fighting styles. Who was the better general, Lee or Early? The cool hand or the hot head? History has made its judgment.
Dorian
The truth always means something and is worth saying.
The first step to proving fraud is to demonstrate that the fraudulent statement isn’t true. This paper does good work towards satisfying that condition. It’s a fundamental building block that your fraud approach must have in order to succeed.
Embrace the healing power of and.
Civility is my weapon. And a terrible, implacable weapon it is — if one only knows how to use it.
The next step is unnecessary. Presumptive, alienating. I do not want their scalps. Ultimately, all I want is their ear.
But you are wrong, Dorian. WUWT and other skeptics are having a profound effect. The word is getting out, and the CAGW activists are being contained. Climate change isn’t a big concern for most people thanks to the skeptical voice. This paper will add to the impression that many have that skeptics are serious, worth a listen, and have a case.
Yes.
Dorian, We also may have Mother Nature on our side. After all, if it continues to warm less than the models “project” (despite numerous adjustments), fewer will be able to argue the C in CAGW. And there are still many real scientists in the field. Even if some were pulled into the more alarmist or activist camp, they will look at new evidence and modify their opinions. I believe this is a great thing for climate science, hopefully it will get published in a good journal, but even without that, because it was done carefully, it constitutes another step in the building blocks that make up the progress of science.
I like to think so.
Dorian,
“YOU ARE ALL WASTING YOUR TIME, and putting society, and the economy in real jeopardy.”
Please explain how society and the economy could possibly be put in real jeopardy by what Watts et al. (or WUWT et al.) has done here? I’m having difficulty imagining how you arrived at that idea.
JK
NOAA doesn’t actually have to recant. The 2010 OIG report says that NOAA will ground truth the stations.
They didn’t. They moaned and groaned about funding. Anthony did it cheapo style.
If NOAA is any good at spin, they will embrace Anthony’s work (show the happy face) and then drag him through a looooooong period of validating his methods. He needs to be cautious of this tactic and establish ground rules upfront about what they are actually concerned about. Set a timeline for review, major milestones, blah blah.
Please explain how society and the economy could possibly be put in real jeopardy by what Watts et al. (or WUWT et al.) has done here?
I think perhaps he fails to see the iron fist within the velvet glove.
Dorian,
It’s just another human mess made by folk with reasonable motivations which has become an unstoppable rolling juggernaut.
Back in the early 80s, some scientists’ concern over what carbon dioxide might do to the climate was taken up by excellent promoters with specific ideologies, including:
(a) World Government is necessary to stop humans destroying the natural world which would make it uninhabitable.
(b) De-industrialise to prevent humans destroying the natural world which would make it uninhabitable.
Unfortunately, the attempt to reduce carbon emissions is actually increasing harm to the natural environment. And the juggernaut rolls on, dragging innocent people with it, e.g. workers concerned to feed their families. Meanwhile, government subsidies are a lucrative income for some industrialists.
Congratulations Anthony
This is of huge importance, if the criteria for classifying the stations are recognized as unbiased. The adjusted trend of 0.324 C/decade is about 59% higher than the compliant-station trend of 0.204 C/decade; i.e. a rather big error. I’ll guess that the error is just as big, if not bigger, in the rest of the world.
However, the importance of your finding depends on whether the objectivity for the classifying criteria can be questioned or not.
Be prepared to be attacked there, Anthony. The best defense is to give full access to all the data once it is published. Furthermore, that is also the best scientific method.
/Jan
In easily accessible, malleable Excel format.
So you’ve effectively quantified the UHI.
And real numbers show the climate sensitivity is less scary than was feared.
This is good news.
And this is very good work.
Take a bow.
You’ve earnt it.
He has. But it is not UHI. It is Microsite. Removing well sited urban data has no measurable effect on non-urban trend. Trendwise, it is all in the microsite.
Microsite is the New UHI.
It’s not the current urbanization that matters, but how that urbanization has increased over time.
PS: I have seen studies showing that even small populations can have a UHI impact. If the area within a few miles of the sensor has gone from a population of 5,000 to 10,000, that can have an impact on the measured temperatures.
But upthread there was this:
“The “gallery” server from that 2007 surfacestations project that shows individual weather stations and siting notes is currently offline, mainly due to it being attacked regularly and that affects my office network. I’m looking to move it to cloud hosting to solve that problem. I may ask for some help from readers with that.”
Cloud hosting by BlackBerry corporation is outside the reach of the US government. BlackBerry has never been hacked. BlackBerry is well trusted.
If you need a contact there, I can help.
Thanks, Mr. Watts.
One could ask why our taxpayer-funded government/academic scientists don’t publish this kind of study. But, there’s no reason to ask: it’s because this kind of study gives them answers they don’t like and don’t want. (Note, I didn’t ask why they don’t conduct such studies: for all I know, they may have done so. They just don’t tell us about the results.)
That is, “hypothesis myopia” and “asymmetric attention”, at the very least, are at work.
http://www.nature.com/news/how-scientists-fool-themselves-and-how-they-can-stop-1.18517
One could ask why our taxpayer-funded government/academic scientists don’t publish this kind of study.
They didn’t make it, that’s all. We did, that’s all. Nothing wrong with that. In fact, that’s the way I like it. Gives a mere citizen scientist some elbow room.
What I’d really like to know now is how much — if any — adjustment is done by NOAA’s algorithms to these Class 1 & 2 stations, and if any, why?
As for Class 1, see my guest post here earlier this year; I did precisely that analysis. ‘How good is NASA GISS’, 5 August 2015. The WUWT search tool takes you there immediately. Just checked.
Congress has asked for this as well. They’ve been told to take a hike by NOAA….
What’s sauce for the USHCN is sauce for the BoM. And sauce for the GHCN. But we would want to get some mud on our boots over there, to quantify.
What I’d really like to know now is how much — if any — adjustment is done by NOAA’s algorithms to these Class 1 & 2 stations, and if any, why?
It’s real ergly, son. They are bumped up from 0.204C/decade to 0.336.
That’s what happens when homogenization bombs.
Wow. Could this be used as a way to attack homogenization in countries like Australia?
No. Australia can’t use it. BOM uses world’s best practice, without the necessity of computer programs. Manual adjustment is much preferred. /sarc
If this is the extent of the problem within the USA, imagine how much over-estimation there has been for global temperature rises. Land-based weather stations in many other countries will be of far lower standard in both quality and reliability, and affected even more by heat islands due to the relatively recent expansion of populated areas around the stations.
Has anyone tried to compare this corrected trend with satellite-based trends for the USA over the same period? That would be interesting to see, as it could explain the difference between the CAGW supporters’ quoted global temperature rises using land-based weather stations and the parallel satellite data, which shows very significant flattening of the temperature rise, if not no rise at all!
Excellent point.
“Land based weather stations elsewhere in many other countries will be of far lower standards in both quality and reliability and affected even more by heat islands due to their relatively recent greater populated areas’ expansion around the weather stations”
I had posted examples on my weblog several years ago, but apparently they were removed by NOAA or otherwise deleted. Here are the posts
https://pielkeclimatesci.wordpress.com/2006/12/12/new-evidence-of-temperature-observing-sites-which-are-poorly-sited-with-resepct-to-the-construction-of-global-average-land-surface-temperature-trends/
https://pielkeclimatesci.wordpress.com/2011/09/28/set-6-of-the-photographs-of-surface-climate-observing-sites/
https://pielkeclimatesci.wordpress.com/2011/09/16/set-5-of-the-photographs-of-surface-climate-observing-sites/
https://pielkeclimatesci.wordpress.com/2011/09/08/set-4-of-the-photographs-of-surface-climate-observing-sites/
https://pielkeclimatesci.wordpress.com/2011/08/29/set-3-of-the-photographs-of-surface-climate-observing-sites/
https://pielkeclimatesci.wordpress.com/2011/08/16/set-2-of-the-photographs-of-surface-climate-observing-sites/
Maybe you or one of the WUWTs can find them again.
Roger Sr.
If this is the extent of the problem within the USA, can you imagine how much over-estimation there has been for global temperature rises.
We can. We have. We do. But we’d like to check. #B^)
How is the CONUS trend determined? Is there an area weighting applied to each subset of stations?
Yes. We use the average of the nine NOAA CONUS climate regions, weighted for area variation.
Thanks, Evan. And then for each region, are you gridding the stations to get the regional coverage?
No, just regional averages. And some regions are better covered than others (but our basic gridding addresses this).
Note well that our ungridded data runs cooler than the gridded. We have pushed hard against our own hypothesis. We pre-released in order to elicit hostile independent review — which we have addressed. More papers should do that, I think. Measure twice. Cut once.
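For readers wondering what the area-weighted averaging of the nine NOAA climate regions looks like mechanically, here is a minimal sketch. The function is illustrative only; the trends and areas are toy placeholders, not the study’s regional values.

```python
# Area-weighted average of regional temperature trends, as a toy
# stand-in for averaging the nine NOAA CONUS climate regions.
# All numbers below are illustrative, not the study's values.

def area_weighted_mean(trends, areas):
    """Weight each regional trend (C/decade) by its land area
    and return the combined average."""
    total_area = sum(areas)
    return sum(t * a for t, a in zip(trends, areas)) / total_area

# Sanity check with two toy regions: identical trends must return
# that same trend regardless of how the area is split.
assert area_weighted_mean([2.0, 2.0], [1.0, 3.0]) == 2.0
```

The point of the weighting is simply that a small region should not count as much as a large one in the CONUS average.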
Now if you could do Australia’s BOM data too, we might be able to tee up a knighthood.
Well done, Anthony. I have also been checking on my home town of Broome in Australia, where BoM has its instruments sited at the local small airport, and finding a few maximum temperature spikes at passenger jet arrival and departure times. Recently, four new large helicopter hangars were built close to BoM’s premises and instruments. They house 13 or 14 large offshore passenger helicopters. Yesterday five took off within minutes, with a temperature spike at the same time. A nearby station at Broome Port shows no spikes at all. http://pindanpost.com/2015/12/14/airport-heat-islands-artificial-maximum-temperatures/
Help JoNova organize the Aussie equivalent of Surface Stations. It’s not complicated, and she has the organizational chops.
Agreed.
They’ve told the powers that be, “the public wouldn’t understand what we’ve done, so we’re not going to tell you” (my paraphrase).
How about an honorary degree? #B^)
Not with that lefty Turnbull in charge.
A dishonorary one would do …
Congratulations, Anthony… now tell the Met Office:
17 Dec: BBC: Matt McGrath: Met office says 2016 ‘very likely’ to be warmest on record
When compared to the pre-industrial levels, the forecast predicts that next year’s temperature will be 1.1C above the 1850-1899 average. This is edging closer to the 1.5C level that governments agreed last week they would do their best to keep under in the long term.
Last year, the forecast for 2015 predicted a central estimate of 0.64 above the average. Observational data from January to October this year shows the global mean temperature so far this year is running at 0.72 above 1961-1990…
“The forecast for next year is on the back of some other strong years,” said the Met Office’s Prof Adam Scaife.
“In 2014 we had 0.6 which was nominally a record, 2015 so far we’ve had 0.7 which is also nominally a record, and next year we are talking about 0.8 – so you can see that very rapid rise over three years and by the end of 2016 we may be looking at three record years in a row.”…
The impact of the strong El Nino that started this year continues through the first half of next year…
The forecasters at the Met Office say it is responsible for up to 0.2C of next year’s value. In combination with continuing climate change, the forecasters believe it will lead to new records.
“There is an uncertainty range, the bottom end of the range for 2016 is very close to the current value for 2015, so it’s not impossible that it will come out the same as 2015 but it is very likely to be higher,” said Prof Scaife.
The Met Office says that the rise in temperature predicted for next year may not continue indefinitely…
http://www.bbc.com/news/science-environment-35121340
MSM are lapping this up – before 2015 has ended.
Every evening the BBC weather forecasters tell us that rural-area temperatures will be a degree or so lower than the readings from the weather stations on their charts, which are located largely in the far more urbanised but smaller areas of the overall UK. Heat island effects are driven by a variety of man-made inputs: transport exhausts, industrial processes including power generation, domestic heating and/or air conditioning, heat from all electrical appliances and equipment, etc. A great deal of these are independent of weather or seasonal effects, and these heat sources have increased significantly over the last 20–30 years, particularly globally. In such circumstances, how can the Met Office dare suggest that these later years are hotter, or that such data can be used for assessing CAGW or substantiating the massive sums of money being thrown at it?
Land-based instrumentation in such an operational environment surely cannot be reliable, nor can it be adequately weighted, given the many differing variables affecting the results; it is surely not credible to use such “adjusted” temperatures when assessing decadal rises as small as 0.15 degrees C, or even less!
It’s not the offset. It’s the trend. The trend’s the thing. Urban/rural show no significant differences. Offset is all very well, but in terms of trend, microsite dominates UHI. Well sited urban stations may be hotter, but they average lower trends than poorly sited non-urban stations.
Cassandra, BBC Scotland WX forecasts often say that rural temps will be “several degrees lower than those shown”.
pat: “17 Dec: BBC: Matt McGrath: Met office says 2016 ‘very likely’ to be warmest on record”
So the fix is already in, is it?
I bet 2017 is ‘very likely’ to be warmest on record too, and 2018, 2019 and 2020 after that.
Even if the ice age suddenly strikes and the Met Office is under a kilometre of ice.
And we’ll provide some “fixes” of our own.
Congratulations, Anthony… now tell the Met Office
He just did.
“When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable.” That is the gold standard of science. Very well done.
assume “pain” is to be “paint” re “the old wooden box Cotton Region Shelter” and
ignoring “data that requires loads of statistical spackle” is priceless
Very good – I was about to comment on that myself, but search for “paint” first.
We do this by applying the Menne (2009) offset jump to MMTS stations at the point of conversion (+0.10 C to Tmax, −0.025 C to Tmin, and the average of the two to Tmean). We do not use pairwise thereafter: we like to let thermometers do their own thing, inasmuch as is consistent with accuracy.
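Mechanically, that conversion-point adjustment can be sketched as follows. Only the three offset values come from the comment above; the series, the conversion index, and the function itself are illustrative, not the study’s actual code.

```python
# Sketch of a one-time MMTS conversion offset applied from the
# conversion point onward. Offset values are those quoted in the
# comment (Menne 2009); everything else is hypothetical.

MMTS_OFFSETS = {"tmax": 0.10, "tmin": -0.025}
MMTS_OFFSETS["tmean"] = (MMTS_OFFSETS["tmax"] + MMTS_OFFSETS["tmin"]) / 2

def apply_mmts_offset(series, conversion_index, element):
    """Add the element's fixed offset to every value at or after
    the CRS-to-MMTS conversion point; earlier values are untouched."""
    offset = MMTS_OFFSETS[element]
    return [value + offset if i >= conversion_index else value
            for i, value in enumerate(series)]

# A flat hypothetical Tmax series converted at index 2:
adjusted = apply_mmts_offset([15.0, 15.0, 15.0, 15.0], 2, "tmax")
# first two values unchanged, last two bumped by +0.10
```

Per the quoted values, the Tmean bump works out to +0.0375 C at conversion.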
You made Drudge! He links to the Daily Caller: http://dailycaller.com/2015/12/17/exclusive-noaa-relies-on-compromised-thermometers-that-inflate-u-s-warming-trend/
and –
http://cnsnews.com/news/article/barbara-hollingsworth/study-surface-temps-lower-weather-stations-minimal-artificial
and –
http://www.thenewamerican.com/tech/environment/item/12288-study-shows-global-warming-data-skewed-by-bad-monitoring
They moved the official station in our city within the last 10 years or so, from north of the airport (GJT) (in the stinking desert where few locals live, which is why the airport was put there) to their new NWS office building in the middle of the asphalt area. Yes, it’s in the correct white louvred box with a small area of limestone rock, but where the desert is snow-covered at times, the car lots are cleared of snow. It always registers warmer year round than the one I have in a shaded area surrounded by grass, trees, and now snow (though my device is bimetallic, not merc/alcohol).
Asphalt as measured with an IR thermometer is about the worst surface you can use.
Here is a sample of IR readings from a clear-sky day, starting at 6:30pm, 11:00pm, 12:00am, then 6:30am.
The slope in the temp of the concrete is a shadow progressively blocking out the Sun prior to measurement, showing the effect of differing amounts of Sun/Clear Sky exposure.
Thanks, micro6500. I left out my point that car parks get cleared of snow. The desert has to wait until the sun is warm enough to overcome the sunlight reflecting off the snow and melt it, except in the tracks of the vehicle that used to drive in to take the readings. Although they tried to mitigate the area right around the new location, zooming out will show a lot of asphalt that gets cleared of snow. Google Map as of 12/2015
Nice work, Anthony. You set a standard that others should attempt to emulate.
If I read the numbers correctly, it looks as if we should take any warming trend derived from adjusted temperature records – e.g. HadCRUT, GISS, BEST – and multiply it by ~2/3.
calculations:
Adjusted trend slope = 0.324
Compliant trend slope = 0.204
0.204/0.324 ≈ 0.63, so ~2/3 is about right.
Does that sound reasonable? That’s basically extrapolating Anthony’s result across the globe, of course, which might be fraught with peril. It also extrapolates Anthony’s result back to 1885, the start of most modern temperature records, and assumes HadCRUT, BEST, etc. are doing similar adjustments.
This therefore impacts the confidence interval on the relationship between CO2 and temperature: it lowers all such intervals by roughly the same ~2/3 factor (note, confidence intervals aren’t linear, but I’m too lazy to do the Z-score math right now; “what happens to the confidence interval when you move the mean by 1/3” is left as an exercise for the reader).
My new canned response to “XYZ variable is correlated with temperature trend” is going to be “try that when the temperature trend is actually 2/3 of the adjusted record”, and cite Anthony’s paper…
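The arithmetic can be checked directly; 0.204 and 0.324 C/decade are the compliant and adjusted trends quoted in this thread, and everything else here is just division:

```python
# Compliant (Class 1/2) vs. NOAA-adjusted CONUS trends, C/decade,
# as quoted in the thread.
compliant = 0.204
adjusted = 0.324

ratio = compliant / adjusted                      # compliant as fraction of adjusted
exaggeration = (adjusted / compliant - 1) * 100   # percent the adjusted trend runs high

print(round(ratio, 2))      # 0.63 -- about two thirds
print(round(exaggeration))  # 59
```

So the two ways of stating the same comparison are consistent: the compliant trend is ~63% of the adjusted one, and the adjusted trend is ~59% higher than the compliant one.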
Peter
Well, 0.324/0.204 = 1.588, so it’s a ~59% exaggeration. But that’s with CRS units, and they run much hotter. Without CRS, we get a “gold standard” MMTS trend (for most of the study period) of 0.163 C/decade. And it’s likely lower, because part of those records are CRS (with an upward MMTS Tmean bump for conversion).
P.S., you cannot knock a third off the top of the global metrics. We do not include sea surface or SAT. We consider land surface, only, and that is only ~30% of global coverage. Haddy SST may be under attack by others, but land-only is what we do.
Good point. So the 2/3 multiplier won’t work. However…
Have you seen how widely variable the estimates of SST are? Given a recent El Nino article here on WUWT that graphed all the estimates on one graph, it seems there’s an error of +/- 0.8degC on SST estimates! So the error bars are probably far larger than any of the common estimates.
Peter
Good work
I have only one skeptical comment about this result, and it’s a technical comment:
How do you account for confirmation bias? Even if you only see a larger temperature trend out of the corner of your eye (subconsciously), a human being will be more likely to classify that station as non-compliant. Anthony is awesome, but he’s still human.
This is why the medical industry does double-blind studies: to try to remove confirmation bias. (Confirmation bias still sneaks through, in that drug companies throw away entire studies that don’t confirm a good result, but that’s a different layer of the same problem.)
Was there any attempt here to remove confirmation bias? Is there a way to apply Leroy (2010) that removes as much potential for confirmation bias as possible?
I’ll just add for fairness that the same argument applies to the keepers of the adjusted temperature records. That’s why I’m a fan of averaging every temperature record together and adding into their error bars the “human bias” error bar, that being variance between the records.
Peter
There is a simple answer. The individual station surveys were done by hundreds of volunteers. They all took multiple pictures up close and personal. Then those, plus Google Earth, can be used to MEASURE objectively against the written, explicit CRN criteria. There can be no overall confirmation bias in such a methodology. Surely you were not implying a real critique, rather just hoping to elicit this sort of comment. Now Karl 2015….
Nope, a real critique. I’m unfamiliar with the CRN procedure, so it may sound naive 🙂 .
So who interprets the pictures, the hundreds of volunteers or the authors?
It would also be helpful to show the population distribution of the metrics. The ones far away from the threshold limits (e.g. “at least 3 meters from a heat source” when the actual distance is 20 meters) will be indisputable. The ones closest to the thresholds are the ones possibly subject to confirmation bias.
Peter
So who interprets the pictures, the hundreds of volunteers or the authors?
Me. With the Rev pushing and Doc J-NG pulling. No one else is qualified, and I don’t consider it a wrap until I have personally given it the hairy eyeball (and sometimes not even then). I make the Proximity Views.
Okay, so how do you avoid confirmation bias?
How do you account for confirmation bias? Even if you only see a larger temperature trend out of the corner of your eye (subconsciously), a human being will be more likely to classify that station as non-compliant.
We compute (except in cases of the prima facie obvious) areas of heat sink within specific radii (using polygon area tools and/or measurement views) and apply those findings to the Leroy (2010) rating system. This is not something you just whip up.
We avoid bias by wearing our “own enemy” hats and making all the ratings (photos, GE maps, Birdseye images, etc. publicly available. That is all one can do. I will bet after extensive independent review that not all station ratings will remain exactly the same. And more stations will be surveyed and added to the mix. Then there are the Class As to ponder. Not to mention GHCN.
This is but a frozen moment in a continuing process.
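For a sense of what mapping a measured heat-sink area fraction onto a station class might look like mechanically, here is a toy sketch. The thresholds and band structure are invented placeholders for illustration only; they are NOT the actual Leroy (2010) criteria, which also involve distances to heat sources and other factors.

```python
# Toy rating of a station from the fraction of heat-sink surface
# within the rating radius. Thresholds are invented placeholders,
# not the real Leroy (2010) classification rules.

def rate_station(sink_fraction):
    """Map a heat-sink area fraction (0..1) to an illustrative
    siting class, 1 = best sited, 5 = worst."""
    bands = [(0.01, 1), (0.05, 2), (0.10, 3), (0.50, 4)]
    for limit, rating in bands:
        if sink_fraction < limit:
            return rating
    return 5

assert rate_station(0.0) == 1   # essentially no heat sink nearby
assert rate_station(0.2) == 4   # substantial heat-sink coverage
```

The value of an explicit rule like this is that two raters given the same measured area fraction must produce the same class, which is the objectivity claim being made above.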
Thanks, Evan
Who will be doing the “independent” review ?
Our bestest allies and our worstest opponents. Anyone who wants to. That’s how it works, and “gatekeeping” be damned.
As you shake out the cobwebs, you’ll want to consider requesting an audit by the group below.
Taken from the 2010 OIG audit:
“A NOAA panel of representatives from NWS, NESDIS, and OAR identifies, surveys, evaluates, recommends, and selects USHCN-M sites within grid areas evenly distributed across the 48 contiguous states. The panel analyzes survey packets consisting of a site survey checklist, site score sheet, site obstruction drawings, and site photos to determine the ideal location of USHCN-M stations. The USHCN-M Executive Steering Committee overseeing the panel is chaired by the directors of NCDC and the Office of Climate, Water, and Weather Services. Members of the committee come from various NOAA organizations, as well as the Commerce and Transportation Program Office.”
Go big. By challenging the current reviewers to audit your work, you get on a level playing field and really get into the weeds with them. Doing this is actually the work the congressional committee would need to validate your findings.
Hope that helps.
Sorry, didn’t see this before I asked (again).
So how much of this is judgement, and how much is just ordinary math?
Please see my other suggestion – create a metric that gives you confidence in the judging, and plot the distribution of that metric compared to the pass/fail line. If there’s a pile of stations near the pass/fail line, you have good potential for confirmation bias. If there are very few stations near the pass/fail line, then we shouldn’t worry about confirmation bias. For example, a naive non-expert like me would see a possible metric as the efficacy of the heat sink compared to Leroy 2010 – presumably there’s a pass/fail line. Then plot the population distribution and see how far the stations sit from that line. If you have multiple metrics, the line becomes a decision surface, and your final metric is the distance from the point to that surface (or decision volume, etc., out to N dimensions; usually it’s easier to use 2–3 metrics only, so you can visualize it).
I used this technique in manufacturing and in automated ECG interpretation. It’s very useful in telling you how much to trust that very human judgement – by quantifying the judgement and running statistics on it. For example, it turns out that in automated ECG interpretation, the data points the algorithm thought were ambiguous (i.e. near the decision surface) were the same ones the doctors (the reference experts) had a hard time judging as well.
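The distance-to-threshold idea can be sketched in a few lines. The station metrics, the threshold, and the margin below are all hypothetical; the point is only the mechanism of flagging borderline cases:

```python
# Flag stations whose siting metric falls near the pass/fail
# threshold, where a rating judgment call (and hence confirmation
# bias) is most plausible. All values here are hypothetical.

def borderline(stations, threshold, margin):
    """Return names of stations whose metric is within `margin`
    of the pass/fail threshold."""
    return [name for name, metric in stations.items()
            if abs(metric - threshold) < margin]

# e.g. metric = distance (m) to the nearest heat sink, with a
# hypothetical 10 m pass/fail line and a 1 m margin:
stations = {"A": 2.0, "B": 9.5, "C": 10.4, "D": 25.0}
print(borderline(stations, threshold=10.0, margin=1.0))  # ['B', 'C']
```

If that list is short relative to the full station count, ratings are mostly insensitive to the rater; if it is long, the borderline stations are the ones worth independent re-review.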
Peter
If there’s a pile of stations near the pass/fail line, you have a good potential for confirmation bias. If there are very few stations near the pass fail line, then we shouldn’t worry about confirmation bias.
There are a few, but not very many. Besides, with the tool I provide, you can change a station’s rating fairly easily and argue for the change.
The most reliable way to remove confirmation bias is to make your data available for others to review.
You and anyone else are free to review the stations and come up with your own independent ratings. If they differ, write to the editors. They have shown that they are open to honest criticism.