MEDIA ADVISORY: 96% OF U.S. CLIMATE DATA IS CORRUPTED
Official NOAA temperature stations produce corrupted data due to purposeful placement in man-made hot spots
Nationwide study follows up widespread corruption and heat biases found at NOAA stations in 2009, and the heat-bias distortion problem is even worse now
ARLINGTON HEIGHTS, IL (July 27, 2022) – A new study, Corrupted Climate Stations: The Official U.S. Surface Temperature Record Remains Fatally Flawed, finds approximately 96 percent of U.S. temperature stations used to measure climate change fail to meet what the National Oceanic and Atmospheric Administration (NOAA) considers to be “acceptable” and uncorrupted placement by its own published standards.

The report, published by The Heartland Institute, was compiled via satellite and in-person survey visits to NOAA weather stations that contribute to the “official” land temperature data in the United States. The research shows that 96% of these stations are corrupted by localized effects of urbanization – producing heat-bias because of their close proximity to asphalt, machinery, and other heat-producing, heat-trapping, or heat-accentuating objects. Placing temperature stations in such locations violates NOAA’s own published standards (see section 3.1 at this link), and strongly undermines the legitimacy and the magnitude of the official consensus on long-term climate warming trends in the United States.
“With a 96 percent warm-bias in U.S. temperature measurements, it is impossible to use any statistical methods to derive an accurate climate trend for the U.S.” said Heartland Institute Senior Fellow Anthony Watts, the director of the study. “Data from the stations that have not been corrupted by faulty placement show a rate of warming in the United States reduced by almost half compared to all stations.”
NOAA’s “Requirements and Standards for [National Weather Service] Climate Observations” instructs that temperature data instruments must be “over level terrain (earth or sod) typical of the area around the station and at least 100 feet from any extensive concrete or paved surface.” And that “all attempts will be made to avoid areas where rough terrain or air drainage are proven to result in non-representative temperature data.” This new report shows that instruction is regularly violated.
For more information, or to speak with the authors of this study, please contact Vice President and Director of Communications Jim Lakely at jlakely@heartland.org or call/text 312-731-9364.
This new report is a follow-up to a March 2009 study, titled “Is the U.S. Surface Temperature Record Reliable?”, which examined a subset of more than 1,000 surveyed stations and found that 89 percent had heat-bias issues. In April and May 2022, The Heartland Institute’s team of researchers visited many of the same temperature stations as in 2009, plus many not visited before. The new survey sampled 128 NOAA stations and found the problem of heat-bias has only gotten worse.
“The original 2009 surface stations project demonstrated conclusively that the federal government’s surface temperature monitoring system was broken, with the vast majority of stations not meeting NOAA’s own standards for trustworthiness and quality. Investigations by government watchdogs OIG and GAO confirmed the 2009 report findings,” said H. Sterling Burnett, director of the Arthur B. Robinson Center on Climate and Environment Policy at The Heartland Institute who surveyed NOAA surface stations himself this spring. “This new study is evidence of two things. First, the government is either inept or stubbornly refuses to learn from its mistakes for political reasons. Second, the government’s official temperature record can’t be trusted. It reflects a clear urban heat bias effect, not national temperature trends.”
An example of the bias problem
The chart below, found on page 17 of the report, shows 30 years of data from NOAA temperature stations in the Continental United States (CONUS). The blue lines show recorded temperatures and the trend from stations that comply with NOAA’s published standards. The yellow lines are temperatures taken from stations that are not compliant with those standards (i.e. near artificial hot spots). The red lines are the “official” adjusted temperature released by NOAA.

“If you look at the unperturbed stations that adhere to NOAA’s published standard – ones that are correctly located and free of localized urban heat biases – they display about half the rate of warming compared to perturbed stations that have such biases,” Watts said. “Yet, NOAA continues to use the data from their warm-biased century-old surface temperature networks to produce monthly and yearly reports to the U.S. public on the state of the climate.”
“The issue of localized heat-bias with these stations has been proven in a real-world experiment conducted by NOAA’s laboratory in Oak Ridge, Tennessee and published in a peer-reviewed science journal,” Watts added.
“By contrast, NOAA operates a state-of-the-art surface temperature network called the U.S. Climate Reference Network,” Watts said. “It is free of localized heat biases by design, but the data it produces is never mentioned in monthly or yearly climate reports published by NOAA for public consumption.”
The Heartland Institute, a free-market think tank founded in 1984, is one of the world’s leading organizations promoting the work of scientists who are skeptical that human activity is causing a climate crisis.
Heartland has hosted 14 International Conferences on Climate Change attended by thousands since 2008, published the six-volume Climate Change Reconsidered series by the Nongovernmental International Panel on Climate Change, and for 21 years has published Environment and Climate News. The Heartland Institute has also published several popular books on the climate, including Why Scientists Disagree About Global Warming (2015), Seven Theories of Climate Change (2010), and Is the U.S. Surface Temperature Record Reliable? (2009).
###
This ole hillbilly has for many years been explaining that the recent global warming is indeed “manmade” – the placement of the weather stations where extra HEAT is found caused 100% of the claimed warming.
Why isn’t anyone pointing out that .2 deg C/decade is still pretty severe?
It’s more than likely the Sun, but still, this is bittersweet news.
If I move you from a room that is 70F to one that is 70.2F, would you notice the difference?
No. Also, right here on our little seven acres, the south side in the sun is always 5+ degrees warmer than the north side of the property.
Yep. And the temperature on the north side of the Kansas River Valley is almost always different than the temperature on the south side of the Kansas River Valley. Different temps, different humidity, different elevations, different weather fronts, different cloud cover, different latitude.
And you’re not climate refugees yet?
No, it’s not severe at all. Diurnal variation is enormous by comparison. And it’s also meaningless since temperature is not an extensive measurement and averaging temperatures across landscapes tells you bupkis.
Exactly. It is almost impossible to get an accurate “average” temperature on our little 7 acres. Attempting to do that for the entire globe is a FOOL’S quest, because before you could gather the data needed, most all of it would have changed.
I see La Niña is building again
I was so happy to have participated in this.
Ditto, pretty cool to see one of my pictures in the report.
“Nationwide study follows up widespread corruption and heat biases found at NOAA stations in 2009, and the heat-bias distortion problem is even worse now”
Yup. Look at the latest scare in the UK: in 1976 we had months of high temperatures, while in the latest summer only two days were high. But then you find out those temperatures were recorded at airports, Gatwick and Heathrow! D’oh!
You mean they cheat? Who knew?
Story made it to Breitbart
https://www.breitbart.com/politics/2022/07/29/study-noaa-advances-bogus-heat-data-based-on-collection-practices-96-corrupted/
“The Heartland Institute compiled the report using satellite and in-person surveys of NOAA weather stations that contribute to the “official” land temperatures in the United States.
“With a 96 percent warm-bias in U.S. temperature measurements, it is impossible to use any statistical methods to derive an accurate climate trend for the U.S.,” Heartland Institute Senior Fellow Anthony Watts, who directed the study, said in the study announcement distributed to the press. “Data from the stations that have not been corrupted by faulty placement show a rate of warming in the United States reduced by almost half compared to all stations.”
And we are still averaging temperatures!
What’s that? You use anomalies so it’s okay really!
I could have sworn that anomalies were created by subtracting an averaged baseline?
So, anomalies are created from an averaged temperature. Which, in my book, means creating rubbish from rubbish.
Correct me if I am wrong please.
According to Steve Mosher at BEST, they are not CALCULATING an anomaly, they are PREDICTING one.
Which may be well and true, I don’t have any reason to doubt Steve.
But people USE that number as a CALCULATED number, which is not its proper use.
rubbish from rubbish. you got it!
On Adjustments: (in reply to various comments)
An adjustment of the mean cannot reduce variance, which is a measure of uncertainty. When the mean is adjusted the variance of the adjustment should be added to the variance of the mean, so that every adjustment increases your uncertainty. Climate Science disregards the uncertainty of the adjustments and does not propagate these uncertainties. The result is fictitious.
This doesn’t just apply to adjustments. When you create a mid-range daily value you are creating a data set of two members with a variance. If that variance is different than the variance of tomorrow’s data set even at the same location then how do you average the two days as part of a monthly average without also considering the two different variances? Since winter temps have a wider variance (usually) than summer temps how do you average southern hemisphere temps with northern hemisphere temps without considering the different variances?
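To make the propagation rule concrete, here is a minimal Python sketch, assuming the adjustment is statistically independent of the mean (the standard case where variances add); all the numbers are hypothetical, chosen only for illustration:

```python
import math

def adjusted_uncertainty(sigma_mean, sigma_adjustment):
    """Standard uncertainty of (mean + adjustment), assuming the two
    are independent: their variances add, so uncertainty never shrinks."""
    return math.sqrt(sigma_mean ** 2 + sigma_adjustment ** 2)

# Hypothetical figures: a station mean uncertain by +/-0.5 C,
# shifted by an adjustment that is itself uncertain by +/-0.3 C.
sigma = adjusted_uncertainty(0.5, 0.3)
# sigma is about 0.58 C -- larger than the original 0.5 C, never
# smaller: an adjustment to the mean cannot reduce the variance.
```

The point of the sketch is only the direction of the effect: every independent adjustment adds its variance, so a long chain of adjustments should carry a steadily growing uncertainty, not a constant one.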
Phil,
Thank you.
Please repeat your comment as much as you wish.
Many people seem to need frequent reminding. Geoff S
It is clear that many of the stations are very poorly sited and are reading temperatures that are hotter than they should be. However, it is the change in the temperature that we are after, not the absolute temperature. Is that change increasing with time? Assuming nothing else changes, the fact that they are poorly sited would not matter. Is the assumption then that the number of bad stations is increasing? Or that the heat island effect is encroaching on existing stations? Or that stations that are not regularly cared for have their albedo change?
What are used in the calculations are mid-range temperatures using the maximum and minimum temperatures. These are used to create a long term average which is then subtracted from the current mid-range value to create an anomaly.
If the anomaly changes how do you know what caused it to change?
If I tell you that the average mid-range value over ten years was 10C and the current anomaly is +0.1C can you tell me what the maximum and minimum temps were that contributed to those values? Especially the anomaly.
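To make the arithmetic concrete, here is a minimal Python sketch of the mid-range and anomaly calculations described above, with made-up numbers (an illustration of the point, not NOAA’s actual code):

```python
def mid_range(tmax, tmin):
    """Daily mid-range value: the average of the day's max and min,
    as used in the surface products discussed above."""
    return (tmax + tmin) / 2.0

def anomaly(current_mid, baseline_mean):
    """Anomaly: today's mid-range minus a long-term baseline mean."""
    return current_mid - baseline_mean

baseline = 10.0  # hypothetical ten-year mean of mid-range values, deg C

# Two very different days collapse to (essentially) the same anomaly:
a1 = anomaly(mid_range(18.2, 2.0), baseline)  # hot afternoon, frosty night
a2 = anomaly(mid_range(12.1, 8.1), baseline)  # mild day, mild night
# Both anomalies come out to about +0.1 C, yet the max/min pairs differ
# by about 6 degrees: the anomaly alone cannot recover tmax or tmin.
```

This is the question in the comment above made executable: given only the baseline (10 C) and the anomaly (+0.1 C), there is no way to invert the calculation back to the maximum and minimum temperatures that produced it.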
An asphalt surface will warm more than a grassy meadow or forest glade. If the station is poorly sited, then it will record more change than it should. It is easier to recognize unreasonable actual temperatures than unreasonable temperatures represented by ‘anomalies.’
I have also become convinced that winter months have wider anomalies than summers. In addition, night temps show higher growth than daytime. Mixing one hemisphere with the other smears the seasons together and makes it look like summers are included in any growth.
The climate scientists have reached the point where more detailed information, like seasonal or even monthly changes, should be published by hemisphere. Either the whole globe is warming or it is not. Trying to hide detailed information behind “global averages” is not scientific. Either the parts add up to the whole, or they do not!
Need lots of those infrared pictures with locations to make this leave a very big mark. Roll them out one state at a time.
Pre 1970 the majority of stations were urban.
But this was quickly flipped on its head: rural sites quickly became equal with urban, and as temperatures started increasing urban sites became the vast majority and still are.
So the good news is they have milked the UHI for pretty much all it has to give. They can milk it a little bit more but not much.
This is part of why by 1998, and even now, we are seeing stalls and pauses in temperature gains.
Further urbanization and very bad sites can still milk it a little bit more, but it’s crumbs compared to what they have already extracted.
Data here for Melbourne Australia and surrounding suburbs showing some of the effects of UHI. Also shows some of the difficulty in maintaining continuous lines of temperature when one station is closed and another a mile away is opened. (Melb Regional station closed, Olympic Park opened ca 2015. Here are overlap numbers for daily Tmax and Tmin).
Digital data freely available – just ask.
Geoff S
http://www.geoffstuff.com/melbsubshort.xlsx
How does this compare to the Anthony Watts-approved stations, the US Climate Reference Network, and the Central England Mean Temperature?
MM.
It relates in the sense that distance of separation of two stations should affect their correlations, no matter which country.
People commenting here have asked about urban versus rural. Here is some data on this from Australia, which could be a guide as to what to expect elsewhere.
The Aussie data is seldom affected by a Time of Observation (TOBS) correction like the case for the USA. Otherwise, results should be comparable.
In terms of quality of siting of stations, Australia tends to be better manicured than the USA, so far as I can tell from photography and visual inspection.
The data I just posted above is taken from a much larger Excel spreadsheet that is much more useful. The data so far are a taster to see if anyone is interested enough to work their own simple stats.
Geoff S
Respectfully, not just distance. Also elevation, terrain, and geography. There are probably others I can’t think of right now.
Proximity of a water body to one station but not the other, the stations being separated by a ridge or mountain range, different agricultural crops grown near to the stations, one station being urban and the other being rural, and the aspect (normal to sloping surface) of the sites. For the latter case, consider which side of a tree moss usually grows on.
I should add that those above are the fixed or stationary variables. Additionally, there are weather fronts that move across the land. If two stations are on different sides of the front, an average for a station intermediate may be very much in error. Similarly, rain can be very localized, and a shower can change the temperature and humidity significantly for one station and not the other.
This study is seriously flawed.
.
” The new survey sampled 128 NOAA stations”
.
NOAA has over 1020 stations: https://forecast.weather.gov/stations.php
.
No mention of how stations were selected, obviously not a random sample since WUWT readers did the selection.
In fact, ClimDiv had 10,325 stations in 2014. But many have been added since.
The sample is certainly not random. Of the 128, 80 were former USHCN, which is a subset of 1218 stations. But more seriously, those were, according to the report, selected from those that were featured in the 2009 report, and deemed faulty there. There is no indication of a random selection method.
Random sampling selection? Really? How does that apply to the dearth of temperature stations around the globe that can be used for sampling? Infilling doesn’t “make” stations. Infilling is only guessing.
Your argument that the sample is invalid unless you look at a large percentage of stations is ludicrous. Do you really believe the ones that were examined aren’t representative of the whole?
Tell the group how many types of sampling are defined. Random selection is only one.
I would submit that because of the many assumptions required for the infilling, the uncertainty should be larger for the infilled station than for the stations supplying data.
One of the problems with colorful maps is that there isn’t any simple way to depict the uncertainty associated with the different colors and observers are inclined to assume that the numbers are exact.
Nick,
Can you link to a study on whether the added stations were, overall, generally hotter or cooler or neutral, compared to the stations existing before the addition? I have looked, no too hard yet, but not found a link. Cheers. Geoff S
Geoff,
The whole point of using anomalies is that it doesn’t matter.
Tell us the variance in the anomalies. Then split it up and tell us the means and variances between the hemispheres. IT IS IMPORTANT to know whether summers or winters have different variances so we can see how that affects the overall mean.
Of course it matters. The climate is based on the entire temp profile. When you use anomalies calculated using averages you lose all knowledge of the temp profile associated with the anomalies!
Honestly officer, it does not matter that my velocity was 70 kph in a 60 zone, what matters is that my velocity was not much different to my usual excess.
More seriously, some stations have wide fluctuations about their average, some have small. Subtract the average from each and you can get distortions like more very hot days at the former, depending on how you define. Geoff S
Absolutely! Variance DOES matter. Have you seen anyone deal with what the variance is throughout all the calculations being done? You would think mathematicians who are knowledgeable about statistics could AND WOULD crank that out so everyone could see just what kind of distributions we are actually dealing with. I’m glad I haven’t held my breath waiting for some mathematician to admit they have never bothered doing that.
From the NOAA nClimDiv state-readme.txt file:
“Statewide, regional and national values in nClimDiv were derived from area-weighted averages of 5km X 5km grid-point estimates interpolated from station data. Station data were gridded via climatologically aided interpolation to minimize biases from topographic and network variability.”
This is probability calculation. This is not measurement.
No, it is standard spatial integration.
Spatial integration only makes sense when the elements are highly correlated – not just in direction but in absolute values as well. Since value correlation is highly dependent on distance, elevation, pressure (a time-dependent function itself), terrain, and geography, spatial integration gives highly dubious results.
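For readers unfamiliar with the mechanics, here is a minimal Python sketch of the area-weighted averaging step the readme excerpt describes, assuming cos(latitude) weighting on a regular lat/lon grid and made-up cell values; the actual nClimDiv procedure also involves climatologically aided interpolation from station data, which is not shown:

```python
import math

def area_weighted_mean(lats_deg, rows):
    """Area-weighted average of a regular lat/lon grid, weighting each
    latitude row by cos(latitude) as a proxy for grid-cell area."""
    weights = [math.cos(math.radians(lat)) for lat in lats_deg]
    total = sum(w * sum(row) for w, row in zip(weights, rows))
    total_weight = sum(w * len(row) for w, row in zip(weights, rows))
    return total / total_weight

# Hypothetical 2 x 2 grid: one row of cells at 30N, one at 60N
grid = [[10.0, 12.0], [4.0, 6.0]]
mean = area_weighted_mean([30.0, 60.0], grid)
# The warmer 30N row carries more weight than the 60N row, so the
# result exceeds the plain average (8.0) of the four cell values.
```

The sketch also illustrates the objection raised above: the weighted result is only as meaningful as the assumption that each grid value represents its whole cell, which is exactly what the correlation argument disputes.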
Temperature is such a useless silly scientific measurement. Why do we even care what the temperature of some stupid “standard” box is?
Sea level WRT land reference is something I can get with. Sea is pretty well homogenized by nature on average.
Temperature, not so much. I’m always wondering what the temp diff is between the bottom and top of my electric kettle and of my saucepan, and the water in it seems to move quite a lot.
If your car broke down and you were stranded in Death Valley NP, you would be concerned about the temperature. You would have a better chance of surviving in the Winter than Summer.
The sea is pretty well homogenized, except where it isn’t. Warm currents make Prince Edward Island in Canada, and Great Britain, tolerable places to live in the Winter.
Most people who own a weather station know how important siting really is, and how easily the local microclimate can affect the temperature for most of the 24-hour period, not only during daytime.
It really feels like most private weather station owners put more time into siting than NOAA themselves, and their excuses are really lame, to be honest.
Around ’90 or ’91 they actually came out and said they were discontinuing the use of remotely located weather stations, like in national forests where heat islands don’t exist, and replacing them with more readily accessible ones, like on top of buildings, conveeeniently near the air conditioning’s hot-air discharge. And they said,
“Yeah, we know that will produce higher than actual temperature readings, but we will adjust the results to be accurate.” And then they did it.
Nolan,
But Nick claims that you can use anomaly expression to reduce that problem. Me, I do not agree and I spend too much time converting anomaly back to measured and adjusted back to raw. The branches of science in which I have worked, except climate research, have original measurement as the foundational working data to which all work can be traced back and verified. Geoff S
The use of anomalies that have far more decimal places than what was actually measured is more like counting the wings on the angels on the head of a pin. The uncertainty increases so far beyond what climate scientists quote as uncertainty that it is unbelievable that none have been called on it. It’s hard to understand how physicists and chemists have not risen up to knock them down.
Referring to Simon’s post July 30, 2022 6:15 pm concerning NASA adjustments.
The dotted plot titled ‘uncorrected raw data’ [sic] indicates that the trend prior to ~1940 was steeper than the trend after 1980.
The reason for that ‘correction’ IMO was because the pre-1940 trend could not have been due to human emissions and therefore had to be ‘corrected’ to conform to the theory.
On the subject of badly positioned recording stations: the Mauna Loa observatory sits, at 3,400 metres, on the world’s largest and very active volcano, which spews out tons of CO2. Hawaii is also in the middle of a Pacific Ocean that releases CO2 as it warms.
And, the Trade Winds loft air off the water, over abundant vegetation that absorbs CO2, to the top of Mauna Loa, where it is measured.
Isn’t this the second survey of climate reporting stations? I seem to recall one several years ago in which Stevenson screens were found next to barbecue pits, parking lots, diesel truck parking lots, and other heat-generating areas. There was also mention made of heat-generating devices inside the Stevenson screens.
From the second paragraph of the article:
“Nationwide study follows up widespread corruption and heat biases found at NOAA stations in 2009, …”
Bang goes their thermometers and bang goes their anthropogenic sea level dooming:
Scientists discover cause of catastrophic mangrove destruction in Gulf of Carpentaria (msn.com)
Always remember their science is settled, folks, give or take 40 cm or so.
observa,
Jump East over narrow Cape York and meet the Great Barrier Reef. Was there a 40 cm level change on the Reef as well? Did it show bleaching? Or do we have yet another ‘variety’ of water that holds itself steady long enough for researchers to measure effects, then goes back to normal? Geoff S
A graph only up to 2008? Really?
I will read the report, but that is not a good start.
Ralph
2 observations – The blue line is also going up, just not as fast as the yellow line, so still global warming??
You’re really itching for that appointment for Room 101, aren’t you?
Bottom line is it’s all about money and these “study” groups want to keep it rolling in so they “massage” the numbers to keep it coming in from the politicians and taxpayers.