At left, original BoM Stevenson Screen, at right the smaller replacement screen.

Another Temperature Bias: The Shrinking Stevenson Screen = Warming

Many of you may recall that I got my start in climate skepticism back in 2006, when I started looking at the paint on Stevenson Screens – because there was a change from the original lime-whitewash paint of the 1890s to modern latex paint. I figured there was a bias, and that latex paint made the shelter warmer due to its different IR signature. Temperature sensor tests over a month proved I was right. But in looking at temperature shelters in my area, I discovered an even bigger problem – most were sited near heat sources and heat sinks, in contradiction to NOAA’s own published siting standards. This started my journey to uncover just how bad the temperature observing network actually was. Comprehensive reports I made in 2009 and again in 2022 showed that surface measurements were a huge warm-biased mess. The paper discussed below is over 10 years old, but I somehow missed it. I’m correcting that oversight.

Now, to add to that mess, comes this revelation – the Australian Bureau of Meteorology changed the size of Stevenson Screens to something with just ~25% of the volume of the original, and did not run parallel tests to see if the conversion mattered. – Anthony


Craig Kelly of the AFEE in Australia writes on X.com

The peer-reviewed science confirms that shrinking the size of Stevenson Screens increased average temperatures across a year by 0.54°C and, on hot summer days, it can increase the maximum temperature by 1.7°C (https://waclimate.net/stevenson-sizes.pdf). Yet the BOM denies the existence of this peer-reviewed science, pretends that it doesn’t exist, and claims that shrinking the screens by 74% had no effect on the recorded temperatures.

Furthermore, at every weather station where the BOM replaced the traditional “large” Stevenson Screens with smaller ones, they ripped out the large ones and replaced them with the small ones on the very same day. This is contrary to long-established practices, which require that when you change measuring equipment, you keep parallel data from both setups to determine whether the equipment change may have introduced a warming or cooling bias into the record.

If you wanted to artificially inflate temperatures and create new “record hot days” to generate propaganda for the climate cult, you’d do exactly what the BOM did: shrink the size of the Stevenson Screens. And if you wanted to cover up your malfeasance and fraud, you’d rip out the large screens and replace them with the small ones on the very same day so there would be no parallel data — exactly what the BOM did.

Attached is a photograph from the Sydney Observatory from 1947 showing the thermometers that officially recorded Sydney’s temperatures housed inside a traditional ‘large’ Stevenson Screen – with an internal volume of approximately 0.23 m³. The BOM has shrunk the size of the Stevenson Screens, reducing the internal volume to just 0.06 m³ – a 74% reduction. By shrinking the Stevenson Screens in such a manner, how much hotter will the recorded temperatures be inside the smaller screen on a hot and windless day?
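The quoted volume figures can be cross-checked against the approximate screen dimensions documented for the Australian network, roughly 71 × 71 × 53 cm for the large screen and 43 × 52 × 27 cm for the small one. A quick sketch, assuming those nominal (external) dimensions:

```python
# Rough sanity check of the screen volumes and the "74% reduction" claim.
# Dimensions (cm) are the approximate ACORN-SAT figures; they are external
# measurements, so the internal volumes (0.23 and 0.06 m^3 in the text) run smaller.
large_m3 = (71 * 71 * 53) / 1e6   # cm^3 -> m^3
small_m3 = (43 * 52 * 27) / 1e6
reduction = 1 - small_m3 / large_m3

print(f"large ≈ {large_m3:.2f} m³, small ≈ {small_m3:.2f} m³")
print(f"volume reduction ≈ {reduction:.0%}")
```

On these nominal dimensions the reduction comes out near 77%, consistent with the ~74% figure obtained from the internal volumes (1 − 0.06/0.23).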

Figure 1. Internal view of the large Stevenson screen at Sydney Observatory in 1947 (Top above, black and white image) and the small screen at Wagga Wagga airport in June 2016 (lower above, colour image). While thermometers in the 230-litre screen are exposed on the same plane, the electronic probe at Wagga Wagga is placed behind the frame about 2 cm closer to the rear of the screen, which faces north to the sun. According to metadata, the 60-litre screen at Wagga Wagga was installed on 10 January 2001 and although thermometers were removed on 28 April 2016 inter-comparative data is unavailable. Source: BoMWatch.

From the paper:

The main findings of this research are summarized as follows:

  1. An overheating of air temperature inside the medium-sized Stevenson screen was detected in comparison to the large-sized Stevenson screen throughout the year. This bias affected daily maximum air temperature records, especially during the warm season (May to October) and at 1300 UTC.
  2. The weather conditions enhancing this overheating bias (not statistically significant) are associated with clear skies, high solar radiation rates, weak winds and low relative humidity values.
  3. Comparison to nearby station have revealed that the different size of the naturally ventilated wooden Stevenson screens have an impact on mean, maximum and daily air temperature range. These kinds of investigations are crucial for removing inhomogeneities and accurately assessing the spatio-temporal variability and long-term trends of near-surface air temperature measurements.
Scissor
February 8, 2026 6:08 am

Size does matter, unless you’re Michael Mann.

February 8, 2026 6:09 am

And if the bias had gone the other way and caused cooling, what do you suppose they would have done?

Scarecrow Repair
Reply to  Steve Case
February 8, 2026 6:59 am

I doubt they would have noticed, just as I doubt they intended to artificially raise the temperature readings. My experience with bureaucrats says this was some combination of cost-saving and a change in available suppliers. Same with switching from lime-whitewash to latex paints. Then when someone came along years later and documented the difference, their natural reaction was to circle the wagons and deny everything. Bureaucrats are not the smartest people.

Sweet Old Bob
Reply to  Scarecrow Repair
February 8, 2026 7:05 am

Of course they wanted to raise the readings !!

FTM !!

Reply to  Sweet Old Bob
February 8, 2026 7:20 am

The lack of any side-by-side comparison is very indicative. Three months of inter-comparison is the minimum standard.

Either the BOM experts are very inexpert, or they’re dishonest.

Scarecrow Repair
Reply to  Pat Frank
February 8, 2026 8:12 am

Or they’re just bureaucrats who don’t care.

2hotel9
Reply to  Scarecrow Repair
February 8, 2026 9:59 am

Back in the day they didn’t care, now there is a substantial bias fed in from academia and media.

Boff Doff
Reply to  Scarecrow Repair
February 8, 2026 11:45 am

Just to be clear : All of the people who make temperature measurement station siting and operations decisions at the BOM are unqualified bureaucrats with no oversight or input from meteorological or climate scientific experts? Who knew?

Reply to  Pat Frank
February 8, 2026 11:30 am

I met and chatted with a couple of BoM guys at conferences ages ago…

It was apparent that they were climate zealots BEFORE they worked at BoM.

Perhaps, like the Met Office… it is part of the job application?

Sparta Nova 4
Reply to  Pat Frank
February 9, 2026 5:48 am

I recall reading that a change in sensors occurred in Australia. They conducted the side-by-side comparison for the required timeframe. When they were done, they adjusted the historical data to align with the new sensor.

jvcstone
Reply to  Sweet Old Bob
February 8, 2026 11:33 am

Got to create the evidence to support the narrative.

MarkW
Reply to  Scarecrow Repair
February 8, 2026 9:45 am

If it was simply that the bureaucrats don’t care, you would expect at least an occasional “error” that caused cooling rather than warming.
The problem is that 100% of these “errors” result in warmer temperatures.

Sparta Nova 4
Reply to  MarkW
February 9, 2026 5:49 am

It does raise an eyebrow.

Reply to  Steve Case
February 8, 2026 7:18 am

Pretty much all of the published field calibrations show a net warm bias.

Bryan A
Reply to  Steve Case
February 8, 2026 10:33 am

They would have determined that the change introduced a cold bias and adjusted the data warmer accordingly.

Bryan A
Reply to  Bryan A
February 8, 2026 3:17 pm

Instead they introduced a further artificial warming bias, so obviously they will respond accordingly and adjust the data slightly warmer.

Bruce Cobb
February 8, 2026 6:14 am

Honey, I shrunk the Stevenson screens!

Rud Istvan
February 8, 2026 6:23 am

Yet ANOTHER reason the surface temperature record is not fit for climate purpose. And since CMIP climate models are required to be parameter tuned to best hindcast, yet another reason they display an obvious and significant warming bias.

Bruce Cobb
February 8, 2026 6:26 am

One would almost think the BOM had an agenda or something.

Mr.
Reply to  Bruce Cobb
February 8, 2026 8:20 am

According to the ClimateGate emails they did do.

Reply to  Bruce Cobb
February 8, 2026 11:32 am

The Met Office in the UK almost certainly do.

February 8, 2026 6:35 am

Thanks for exposing more idiotic behavior from the climate fraudsters.

Rod Evans
February 8, 2026 6:40 am

I get the feeling the BOM couldn’t care less about the difference the smaller Stevenson screen makes to anything.
They know they can always make ‘adjustments’ if the new data does not fall in line with preferred objectives.

Had the screens produced lower temperatures than their bigger namesakes BOM would simply have made the ‘adjustment’. As it happens the new screens are perfect from their point of view.
The casual but significant change is shockingly unscientific, hey ho.

Scarecrow Repair
Reply to  Rod Evans
February 8, 2026 7:00 am

Amen. These are bureaucrats, not scientists, not geniuses.

Neil Pryke
February 8, 2026 7:13 am

Great article, Anthony…REAL Climate Science..!

February 8, 2026 7:17 am

It’s too bad they didn’t deploy a sensor in an aspirated shield. That would have provided an accurate temperature reference. The best aspirated temperature sensors have a field accuracy of ±0.05 °C.

Doing so would have given them a field calibration of the sensors — their field accuracy — as well as the inter-screen bias.

Jeff Alberts
Reply to  Pat Frank
February 8, 2026 8:31 am

Pat! Stop with the science!!

Erik Magnuson
Reply to  Pat Frank
February 8, 2026 10:19 am

I’ve been thinking about cobbling up such a beast. T.I. makes a medical-grade temperature sensor; add a small fan and an appropriate housing, and the resulting combination should be accurate to ±0.2 °C.

Reply to  Erik Magnuson
February 8, 2026 11:01 am

Aspirated shields are available for purchase. It’s important to separate the fan from the shield to prevent thermal contamination, and to mount it so that air is drawn through the shield rather than pushed through.

The best systems pay attention to the wiring, so as to prevent self-heating.

19th Century meteorologists were well aware of the problem of heating, and mostly despaired of ever getting a true air temperature.

In 1884, John Aitken proposed designs for Stevenson screens with a draft tube mounted on them to promote air flow, but they were never implemented.

He also discussed his field calibrations showing the large warm biases of Stevenson screens.

Erik Magnuson
Reply to  Pat Frank
February 8, 2026 4:51 pm

I wasn’t aware of commercially available aspirated shields, though I’m not surprised. Thermal contamination was on my mind; my plan was to pull rather than push, and I was also considering how/where to vent the aspirated air so as not to contaminate the incoming air. Another consideration was installing multiple units in different locations around my property to get a handle on very local temperature gradients.

Reply to  Erik Magnuson
February 8, 2026 2:39 pm

Cool. Then you could average different stations measuring different things and report the results to 0.00002 degrees. /s

sciguy54
February 8, 2026 7:38 am

This was a most excellent addition. Almost certainly the greatest bias would be daily high temperature readings on the warmest and sunniest days. Those nasty peaks commonly ascribed to “carbon-caused global warming”.

But… I believe you jumped to a major unsupported conclusion:


the Australian Bureau of Meteorology changed the size of Stevenson Screens to something that had just ~ 25% of the volume of the original, and did not run parallel tests to see if the conversion mattered. – Anthony

It would be more accurate to state that the BOM did not PUBLISH parallel tests. My bias would lead me to believe that they most certainly had run parallel tests, but then intentionally hid those results.

MarkW
Reply to  sciguy54
February 8, 2026 9:48 am

It’s hard to run parallel tests when you remove the old station the same day you install the new one.

Lark
Reply to  MarkW
February 8, 2026 9:56 am

I believe he means they ran them before they rolled out the new stations
…and that the intent to hide the BOM-caused “warming” would be why they removed the old stations simultaneously with putting in the new.

KevinM
Reply to  Lark
February 8, 2026 10:21 am

Suspicion from working in variable-ambition engineering environments: They probably tested indoors then threw out data that didn’t seem to make sense.

strativarius
February 8, 2026 8:01 am

We have a word for it:

Shrinkflation…. also known as package downsizing.

Reply to  strativarius
February 8, 2026 10:19 pm

Sadly, as I get older, this is happening to me

Mr.
February 8, 2026 8:26 am

Another glaring example of the substandard PROBITY of climate “data”.

February 8, 2026 8:32 am

At this point, there are so many thumbs on the scales, I am surprised that any of the alarmists can still hold silverware or tools.

John Hultquist
February 8, 2026 8:38 am

Except for the replacement at Wagga Wagga (2001), I don’t see the timing of these events. Were they all done in the same year? Was Wagga Wagga the last or the first?
Anthony: ” I got my start in climate skepticism back in 2006 …”
So my question is: Were the technicians of the BOM Australia or those of the Spanish (AEMET) thoughtful of or concerned about fractional temperature differences when changes were made? Note, also, in the abstract, the medium versus large-sized screens and the first-order and second-order weather stations.
My first approximation to this issue: When carrying out these installations, the technicians were unconcerned regarding “climate modeling.”
{See Rud Istvan’s comment @6:23 am}
Finally, the official sayings in 2026 of the BOM-AU are biased because the Government is fully into the ClimateCult hysteria. What would you do or say?

February 8, 2026 8:40 am

Great article.

It points out the absolute blindness that climate science has when it comes to measurements, their uncertainty, and its propagation.

One big absence is the wholesale abdication of creating and publishing uncertainty budgets. The GUM has been the international standard for measurement for going on 40 years. There is no excuse for ignoring it. Yet climate science just motors on under the guidance of statisticians who claim they can recognize bias (errors) in past measurements and accurately correct for it. Someone needs to show me a scientific discipline that allows changing past measured and recorded data with no evidence that the instruments were measuring incorrectly. Measuring different microclimates, maybe, but that doesn’t mean incorrectly.

There is only one reason for climate science to change data: statisticians have convinced them that long temperature records are needed in order to have the appearance of accuracy and precision. That is not science, it is advocacy.

Even running in parallel will not provide evidence of bias sufficient to change past readings. Only calibration under the same conditions can do that. If you did a calibration on the current system, say a large Stevenson screen and it showed no correction was needed, then did a calibration on a CRN station with the same result, would you be justified in changing recorded readings so they agree? The only appropriate scientific answer is declare one or the other as unusable. If that results in short data temperature records then so be it. Otherwise, you are doing Mike’s Nature Trick and changing data to make it appear that there is no difference between the various measuring devices. And people wonder why drug studies can’t be duplicated?

4 Eyes
Reply to  Jim Gorman
February 8, 2026 12:57 pm

Someone reading this knows someone, with some stroke, in the BOM. It would be very helpful if such person with stroke in the BOM could be convinced to conduct a study using a few large Stevenson screens to help back calibrate the smaller screens being used here in Oz. I guess I’ll see some pigs flying by before that happens.

rpercifield
February 8, 2026 8:41 am

I wish I were surprised by this, but knowing the rigor of current measurement methodologies in the public arena, this is the norm. If you pulled this stupidity in engineering, manufacturing, or chemical processing in the private sector, where the costs are real, people would get fired. Today, science is nothing more than a political game with no repercussions for blatant and intentional error. It is now acceptable to do whatever is required to support the narrative, morals and integrity be damned.

It is truly sad we are at this point.

February 8, 2026 9:04 am

“Comparison to nearby station have revealed that the different size of the naturally ventilated wooden Stevenson screens have an impact on mean, maximum and daily air temperature range. These kinds of investigations are crucial for removing inhomogeneities and accurately assessing the spatio-temporal variability and long-term trends of near-surface air temperature measurements.”

Apparently the BoM has made corrections for the change in size of the Stevenson screens: “Explicit corrections have also been introduced for a change in instrument screen size”
https://rmets.onlinelibrary.wiley.com/doi/10.1002/gdj3.95

Anthony Banton
Reply to  Phil.
February 8, 2026 10:42 am

And here is some of what the above says …..

3.4.2 Screen sizes

Two different Stevenson screen sizes have been used in the network over time (Warne, 1998): ‘large’ (approximately 71 × 71 × 53 cm) and ‘small’ (43 × 52 × 27 cm). Originally, large screens predominated in the network, but over time there has been a change to small screens at most sites. The highest frequency of such changes was in the 1990s, but some took place as early as 1967 and some as late as 2012. Only four of the 112 ACORN-SAT sites still have large screens.

In ACORN-SATv1, no specific adjustment was applied for changes in screen size, based on the outcome of a field study at Broadmeadows, near Melbourne (Warne, 1998), which found a negligible impact on mean temperature, with the small screen having maximum temperatures 0.094°C higher and minimum temperatures 0.082°C lower. However, in ACORN-SATv2, it was considered that the size of the impact on diurnal temperature range which was reported in the Warne (1998) study was sufficiently large to warrant specific treatment.

This was implemented in a similar way to the observation time adjustment. An adjustment was made for both maximum and minimum temperatures on the date of the screen change (unless an adjustment had already been made within 2 years of that date), without the usual minimum adjustment size criteria being applied. Only stations with no known screen changes within 5 years of the date were used as reference stations. In total, there were documented screen changes at 86 of the 112 locations, with four still having large screens and the remaining 22 having no documented evidence of ever having large screens.

For the stations where the screen change was not associated with a previously identified inhomogeneity (42 stations for maximum temperature, 45 stations for minimum temperature), the mean adjustment was +0.04°C for maximum temperature and −0.06°C for minimum temperature (combining to a result of +0.10°C for diurnal temperature range and −0.01°C for mean temperature). These inhomogeneities, except for the mean temperature shift (negligible in both cases), are of the same sign as those found by Warne (1998) but somewhat smaller in size. The spread of the results between stations was wide (several tenths of a degree), suggesting that the required adjustments are site-specific (one potential influence being the condition of the screen being replaced).
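The quoted adjustment figures are internally consistent, and the arithmetic is worth making explicit: diurnal temperature range (DTR) is Tmax − Tmin and the mean is their average, so the two per-variable adjustments combine as follows (a quick check using only the numbers quoted above):

```python
# Combine the ACORN-SATv2 mean screen-change adjustments quoted above.
d_tmax = +0.04   # mean adjustment to maximum temperature (°C)
d_tmin = -0.06   # mean adjustment to minimum temperature (°C)

d_dtr  = d_tmax - d_tmin        # DTR = Tmax - Tmin
d_mean = (d_tmax + d_tmin) / 2  # mean = (Tmax + Tmin) / 2

print(f"DTR change:  {d_dtr:+.2f} °C")   # +0.10 °C, matching the text
print(f"mean change: {d_mean:+.2f} °C")  # -0.01 °C, matching the text
```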

Reply to  Anthony Banton
February 8, 2026 11:41 am

So, the study being discussed found errors of +0.54C..

And BoM “say” they adjust by small second decimal place adjustments..

Thanks, Ant !!

Mr.
Reply to  Anthony Banton
February 8, 2026 12:11 pm

“Adjustments” to measuring instruments’ readings means that such numbers are no longer “data”, they’re just constructs reflecting some peoples’ assumptions.

So the PROBITY of the data constructs is non-existent, not fit for any scientific purpose.

ps – what error margins are applied to the “adjustments”?

Sparta Nova 4
Reply to  Anthony Banton
February 9, 2026 5:59 am

So, instead of calibrating each individually, they came up with a common value applied to all. Nothing like averages to make your day.

2hotel9
February 8, 2026 9:56 am

I don’t know, I have raised enough chickens to understand that making the hen house smaller makes it warmer. Perhaps, way back in the long ago, when they started doing all this, they should have listened to some farmers. Just sayin’.

Bryan A
February 8, 2026 10:32 am

It’s almost as if they change things purposefully in order to introduce a warm bias into the data. Now why would anyone want to cook the data???

Sparta Nova 4
Reply to  Bryan A
February 9, 2026 6:00 am

Smaller screens are cheaper.

Alfred T Mahan
February 8, 2026 10:35 am

Isn’t it funny how all these mistakes always seem to lead to higher recorded temperatures? You would have thought that half would go one way, and half the other.

david hartlin
February 8, 2026 12:44 pm

I’m wondering if the modern gilled plastic housings have any impact on readings?

Reply to  david hartlin
February 8, 2026 6:03 pm

gilled plastic housings have any impact on readings

Of course they have an impact. I have a study somewhere that found substantial differences between the beehives.

The term microclimate does mean something. The measuring device itself is part of the microclimate that affects temperature readings.

There is a plethora of measuring devices and microclimates throughout the globe. To average them all while at the same time pretending that at least one order of magnitude of resolution can be squeezed from the mess just boggles my mind. That doesn’t even address how measurement uncertainty can be reduced through averaging. There is no way these guys would do quality assurance for me.

February 8, 2026 1:42 pm

I have absolutely no faith in BOM temperature records, raw or adjusted, before or after changing to small screens or electronic sensors. In a survey I completed in 2019, half of all weather stations were not compliant with minimum siting standards. Many were truly abysmal.
See https://kenskingdom.wordpress.com/2020/01/16/australias-wacky-weather-stations-final-summary/

Reply to  kenskingdom
February 8, 2026 4:43 pm

Great report Ken.. 🙂

One-handed clap for BoM….. .. for being way better than the UK Met Office. 🙂

February 8, 2026 2:57 pm

While this is interesting and may well be inflating surface temperatures, there’s a parallel satellite record that also demonstrates clearly rising temperatures… and doesn’t rely on any kind of surface station. Whatever the nitpicking, whether it’s badly located surface stations or newfangled measuring techniques, the clear and resounding data shows a warming world.
Again I write in bold….. whether this is a temporary blip or a natural variation in climate history is the real question.
But, in the meantime, don’t deny the data that shows a very clear warming trend. Quibbles over whether new measuring technology skews the data by a small fraction are effectively an irrelevant distraction!

Reply to  Neutral1966
February 8, 2026 4:54 pm

Quibbles over whether new measuring technology skews the data by a small fraction, is effectively an irrelevant distraction!

The quibble is not about skewing per se, it is about changing past data in order to make series measured by older devices “join” up with newer measurements. In other words, create long records.

Resolution of measurements goes out the window. Significant digits goes out the window.

How does one justify changing a temperature of 20.2 ±1.0°C to 20.5 ±?. Confidence intervals are just thrown out of the window.

Some of the problem is self inflicted by climate science ignoring significant digits rules and its failure to assess and propagate measurement uncertainty properly.

Reply to  Jim Gorman
February 9, 2026 3:08 am

Yes, you’re correct about this, I agree!
However, at the very most, things can be adjusted only as far back as the temperature record exists, which really isn’t very long in the whole scheme of things. My point is that, whichever way you look at it, the earth is currently in a warming phase. Whatever jiggery-pokery is going on (if indeed there is any kind of fraudulent manipulation of the temperature record), the data shows a clear warming trend. And because we have data derived from both satellites and surface stations, whatever adjustments and tweaks are being made actually affect the whole trend very little.
I think anyone in their right mind would accept that climate is variable, so any attempt to smooth the long shaft of the hockey stick still further is clearly nonsense. Romans grew grapes in Scotland 2000 years ago. As we all know, there are several lines of evidence that clearly show the shaft of the hockey stick has had all sorts of bends and kinks. So what if the powers that be are able to produce a record that doesn’t quite match up to reality? They’re always going to struggle with achieving that anyway, because the further back in time we go, the sparser the temperature record becomes. It’s all a bit pointless really. All we do know for sure is that currently we’re in a warming phase, natural, human-caused or a bit of both!

Sparta Nova 4
Reply to  Jim Gorman
February 9, 2026 6:02 am

“Significant digits rules” was once called scientific notation.

Mr.
Reply to  Neutral1966
February 8, 2026 6:13 pm

 don’t deny the data

The “data”, as you refer to the temperature constructs, are not “data” if the instrumental measurements are being “adjusted”.

They’re just constructs, based upon assumptions of what some geezers think the numbers should be.

Sparta Nova 4
Reply to  Neutral1966
February 9, 2026 6:02 am

Do you know how satellites measure temperature?

Hint. They don’t.

Reply to  Neutral1966
February 9, 2026 6:20 am

there’s a parallel satellite record, that also demonstrates clearly rising temperatures”

In reality world, the satellite record measurement uncertainty subsumes any “warming” differences. The satellite measurements are not even of temperature, they are of “radiance” in the microwave spectrum. The path loss of microwaves through the atmosphere is *not* measured by the satellites meaning they only know the value of the radiance being received, which is not the same as the radiance at the originating point. The satellite measurements are affected by Urban Heat Islands just like surface measurements are. They supposedly “adjust” the measurements to account for this but the adjustments ADD more measurement uncertainty than they solve. One primary reason is because UHI is propagated far beyond the island itself by winds. Trying to adjust for UHI based on things like isolating visible light islands like cities simply doesn’t account for the spread of the UHI downwind into supposedly rural areas.

The measurement uncertainty of the satellite record is probably on par with that of the surface stations, at least +/- 1C and probably more. Since each satellite measurement is a single, independent measurement of different things taken at different times the measurement uncertainty of the data set comprised of all those single, independent measurements is the sum of the measurement uncertainty of the individual data points. You can’t decrease the measurement uncertainty by “averaging”. The measurement uncertainty of that average remains the standard deviation of the data set. Averaging the data points doesn’t change that standard deviation at all.

Bottom line? The satellite record isn’t any more fit for purpose than the surface record. Neither have sufficiently low measurement uncertainty to identify differences in either the tenths digit or hundredths digit. Therefore identifying “trends” is impossible.

waclimate
February 8, 2026 3:04 pm

The PDF (https://waclimate.net/stevenson-sizes.pdf) linked by Craig Kelly in this screen size article is hosted on my site and “attached”, so to speak, to a page I put together several years ago titled “Climate change or instrument change?” (https://www.waclimate.net/aws-probe-influence.html).

It’s a fairly complicated and arguably boring page which may or may not squeeze happily into a mobile screen, but it’s nevertheless also worth looking at re artificial warming if you’re into nitty-gritty detail (as is the rest of the site, if you’re really adventurous 🙂).

February 8, 2026 3:31 pm

But in looking at temperature shelters in my area, I discovered an even bigger problem – most were sited near heat sources and heat sinks, in contradiction to NOAA’s own published siting standards. This started my journey to uncover just how bad the temperature observing network actually was. 

Has Anthony ever explained why it is that the temperature data derived from selected ‘gold-standard’ USCRN sites has a faster warming trend than data derived from the ClimDiv sites, which are of varying quality and require adjustments to remove non-climate influences?

It’s easy to check this using the link in the side-panel of this site. NOAA updates both data sets on a monthly basis for comparison. (WUWT only ever shows the USCRN data and never adds a trend line.)

Over their joint period of measurement (currently Jan 2005 to Dec 2025, 21 years exactly), the ‘adjusted’ ClimDiv data have warmed at a rate of +0.40C (+0.72F) per decade; whereas the ‘gold standard’ USCRN data have warmed at a rate of +0.47C (+0.85F) per decade.

The difference in the warming trends between these two data sets is now becoming obvious when charted (below). It appears to be the case that, so far at least, the temperature adjustments made to the ClimDiv data have had the effect of slowing the true rate of warming, not the other way around.

Any explanations?

[Chart: Screenshot-2026-02-08-231817]
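For readers who want to see how such per-decade figures are obtained, the trend is just the ordinary least-squares slope of the monthly series, scaled to ten years. A minimal sketch with made-up monthly anomalies (the numbers are invented for illustration, not actual USCRN or ClimDiv values):

```python
import numpy as np

rng = np.random.default_rng(42)

def decadal_trend(monthly_anomalies):
    """OLS slope of a monthly series, converted to deg C per decade."""
    t_years = np.arange(len(monthly_anomalies)) / 12.0  # time axis in years
    slope_per_year = np.polyfit(t_years, monthly_anomalies, 1)[0]
    return slope_per_year * 10.0

# 21 years of synthetic monthly anomalies: a +0.045 C/yr trend plus noise.
months = 21 * 12
truth = 0.045 * (np.arange(months) / 12.0)
series = truth + rng.normal(0.0, 0.3, months)

print(f"trend: {decadal_trend(series):+.2f} C/decade")  # close to +0.45
```

Running either agency’s published monthly anomalies through the same function reproduces the kind of per-decade numbers quoted above; the synthetic series here only shows the mechanics.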
Reply to  TheFinalNail
February 8, 2026 4:48 pm

You still haven’t learnt that ClimDiv is NOT REAL DATA.

It has been deliberately adjusted to try to match USCRN.

Any differences are totally down to the adjustment algorithm used.

They have altered that algorithm over time to get a closer match.

It is actually proof that the climate wonks can take any junk data and make it give any result they want, in this case a reasonably close match to USCRN.

ps.. because ClimDiv is just a FAKE series adjusted to match USCRN…

…it is actually TOTALLY REDUNDANT….

WHY BOTHER when you already have the USCRN series….

It is just an exercise in data adjustment.

Reply to  bnice2000
February 9, 2026 12:35 am

You still haven’t learnt that ClimDiv is NOT REAL DATA.

It has been deliberately adjusted to try to match USCRN.

Here’s the glaring thing about that absurd, unevidenced claim: the rate of warming in the ‘gold standard’ USCRN data over the past 21 years (2005-2025, +0.47C per decade) is much faster than the rate of warming in ClimDiv over the previous 21 years (1984-2004, +0.31C per decade).

In case this needs to be spelled out further: even if (and there’s no evidence to support this) NOAA was somehow adjusting ClimDiv to match USCRN, the ‘gold standard’ data set is still warming at a much faster rate over the past 21 years than the adjusted ClimDiv data was over the previous 21 years, when there was no ‘gold standard’ version to keep a check on it.

Whichever way you cut it, even allowing for unhinged and evidence-free conspiracy theories, the rate of warming over the past 21 years in the US has been substantially faster than it was over the previous 21 years, and the ‘gold standard’ data are there to prove it.

Sparta Nova 4
Reply to  TheFinalNail
February 9, 2026 6:08 am

21 years is not the accepted modern duration for defining climate, which is 30 years.

Reply to  Sparta Nova 4
February 10, 2026 12:16 am

Correct, it’s more than two thirds of it though, right?

If the adjustments were having such a profound warming influence don’t you think we would at least be starting to see ClimDiv warm faster than the “gold standard” USCRN, rather than the other way round?

Reply to  TheFinalNail
February 10, 2026 7:55 am

Why are “adjustments” even needed?

Michael Flynn
Reply to  TheFinalNail
February 8, 2026 4:50 pm

Any explanations?

Of course, but the ignorant and gullible who believe that adding CO2 to air makes thermometers hotter refuse to accept physical laws.

You won’t accept reality, will you?

Reply to  Michael Flynn
February 10, 2026 12:50 am

CO2 doesn’t directly make thermometers hotter. No one says it does.

Sparta Nova 4
Reply to  TheFinalNail
February 10, 2026 10:03 am

CO2 “traps heat”

Reply to  TheFinalNail
February 8, 2026 5:00 pm

Your red trend line is statistically meaningless; therefore your comment is meaningless and can be ignored.

Reply to  Phil R
February 8, 2026 5:23 pm

The only warming in USCRN is a slight step at the 2016 El Nino.

Matches UAH USA48 closely

[Chart: USCRN vs UAH USA48]
Reply to  bnice2000
February 9, 2026 1:18 am

The only warming in USCRN is a slight step at the 2016 El Nino.

The same climatic conditions that affected USCRN also affected climDiv, yet USCRN is warming faster than ClimDiv.

Matches UAH USA48 closely

In terms of trend, ClimDiv is a much better match for UAH’s ‘USA48’ than is USCRN.

Over their joint periods of measurement (2005-2025), the warming trends are as follows (deg. C per decade):

UAH_USA48 = +0.35
ClimDiv = +0.40
USCRN = +0.47

So if you’re going to say USCRN is ‘gold standard’, then you have to say that USA48 is also running cool. You see the tangle you get yourself into when you start this nonsense?

The chart below compares UAH_USA48, ClimDiv and USCRN anomalies, 2005 – 2025, with trendlines.

[Chart: UAH-comp]
Reply to  TheFinalNail
February 9, 2026 3:03 am

The only reason the trends in UAH USA48 and USCRN are different is that the atmosphere responds differently to El Nino events.

USCRN has a larger range

But you can see from the graph exposing the jump at the 2016 El Nino that there is ZERO TREND in either of them apart from that step

You keep proving that you ALWAYS have to use El Ninos to get any trend at all.

El Ninos are all the climate zealots have.

Reply to  bnice2000
February 10, 2026 12:05 am

bnice2000:

Only reason trends in UAH USA 48 and USCRN are different is because the atmosphere responds differently to El Nino events

Also bnice2000 (one post previous):

[USCRN] MATCHES UAH USA48 closely

So are UAH_USA48 and USCRN “different” or do they “match closely“?

Or maybe you’ll introduce a third variant?

Reply to  Phil R
February 9, 2026 12:45 am

Your red trend line is statistically meaningless therefore your comment is meaningless and can be ignored.

What isn’t meaningless is the fact that the red line has a faster warming trend than the blue line; not the other way around.

If the conspiracy theory was right, that NOAA adjustments are substantially warming the real US temperature trend, don’t you think that after 21 years we should expect to notice a marked difference between the two, with the adjusted data warming faster than the ‘gold standard’? (‘Gold standard’ is WUWT’s term for USCRN, by the way, not mine.)

Instead we see the ‘gold standard data’ warming faster than the adjusted data.

21 years is over two thirds of a standard period of ‘climatology’; so even if we can’t say something is statistically significant yet, we can at the very least say that the adjustments have statistically ‘matched’ the trend seen in the ‘gold standard’.

Therefore, so far at least, the adjustments appear to be skilful and reliable. Not at all what the WUWT faithful were expecting.

Reply to  TheFinalNail
February 9, 2026 3:14 am

USCRN shows NO WARMING in the USA apart from a small step at the 2016 El Nino.

There is basically ZERO trend either side of the 2016 El Nino even when the 2023/4/5 El Nino is included.



Reply to  bnice2000
February 10, 2026 12:12 am

ClimDiv shows exactly the same (long-term) warming pattern as USCRN, only not as fast. Their monthly ups and downs are in exact agreement with one another.

Whatever has caused their observed warming trends is basically the same thing; it’s just that USCRN is warming faster than ClimDiv over the same period.

Reply to  TheFinalNail
February 9, 2026 10:53 am

Before USCRN there was no “reference” to adjust to, and the agenda was to show as much warming as possible.

The existence of ClimDiv shows they can take any data they want and “adjust” it to get any result they want.

Reply to  bnice2000
February 10, 2026 12:18 am

Before USCRN there was no “reference” to adjust to, and the agenda was to show as much warming as possible.

In that case, how do you explain the fact that the rate of warming in ClimDiv in the 21-year period before 2005 was considerably lower (+0.31C per decade) than it has been in the 21 years since 2005 (+0.40C per decade)?

Reply to  TheFinalNail
February 8, 2026 5:31 pm

Any explanations?

CRN resolution is 0.1°C. The quoted uncertainty in the CRN manual is ±0.3°C and that is a minimum.

Round your values to one-tenth values since you have no idea of the hundredths values.

Draw some uncertainty bars on the graph and see if your “trend” is inside the uncertainty interval. If it is then it isn’t significant.

Here is a question you need to answer. A customer needs a pin ground to 0.02 mm. Your measuring device only has a resolution of 0.1 mm. Are you going to tell him you measured it 100 times and can guarantee it is 0.02 mm?
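The pin question can be made concrete. With noise-free readings, every measurement from a 0.1 mm device lands on the same 0.1 mm grid point, so averaging 100 of them recovers nothing below the resolution (a toy sketch; real instruments whose noise exceeds the resolution can behave differently):

```python
def read_instrument(true_value, resolution):
    """One noise-free reading from a device that reports whole resolution steps."""
    return round(true_value / resolution) * resolution

true_pin = 0.02   # mm, the dimension the customer asked for
resolution = 0.1  # mm, the best the measuring device can resolve

readings = [read_instrument(true_pin, resolution) for _ in range(100)]
mean_reading = sum(readings) / len(readings)
print(readings[0], mean_reading)  # 0.0 0.0 -- 100 repeats recover nothing
```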

Reply to  Jim Gorman
February 9, 2026 1:20 am

Round your values to one-tenth values since you have no idea of the hundredths values.

Ok, to the tenth values, ClimDiv is warming at +0.4 C per decade and USCRN is warming at +0.5C per decade.

Happy?

Reply to  TheFinalNail
February 9, 2026 3:05 am

Again, irrelevant, because any difference is totally dependent on the adjustment algorithm used to fabricate the ClimDiv results.

Reply to  TheFinalNail
February 9, 2026 4:29 am

Happy?

No. You can’t just play with the trend values. You need to go back to the original temps, round them properly, apply uncertainties, then recalculate your trend. When you are done, show the trend line with its uncertainty. Metrology books show you how to properly use linear regression to determine the uncertainty in the “y = mx + b” values. As usual, trendologists have no idea about the proper treatment of measurements. Measurements are 100% accurate to you guys.
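For reference, the textbook least-squares formulas the comment alludes to, giving 1-sigma standard errors for both the slope and intercept of y = mx + b (a generic sketch, not any agency’s code):

```python
import math

def ols_with_uncertainty(x, y):
    """Fit y = m*x + b by ordinary least squares and return (m, b, u_m, u_b),
    where u_m and u_b are the 1-sigma standard errors of slope and intercept
    estimated from the residual scatter (standard textbook formulas)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    m = sxy / sxx
    b = ybar - m * xbar
    # Residual variance with n-2 degrees of freedom (two fitted parameters)
    s2 = sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    u_m = math.sqrt(s2 / sxx)
    u_b = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return m, b, u_m, u_b
```

If |m| is smaller than roughly 2 × u_m, the fit does not even resolve the sign of the trend at about the 95% level.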

Reply to  Jim Gorman
February 10, 2026 12:23 am

You can’t just play with the trend values. You need to go back to the original temps and round them properly, apply uncertainties, then recalculate your trend. 

Both data sets show the ‘best estimate’ data of each one, so both are equally uncertain, therefore both are directly comparable as ‘best estimates’.

Regarding rounding; 0.40 rounds to 0.4 and 0.47 rounds to 0.5. They tell that stuff in books too.

Reply to  TheFinalNail
February 10, 2026 4:10 am

therefore both are directly comparable as ‘best estimates’

ROFL!! You *really* don’t understand metrology and measurements at all, do you?

Having equal uncertainties does *NOT* mean they cancel thus leaving the “best estimates” accurate for comparison.

In fact, if you plot the trend line using the differences between them to get to a best fit line THE UNCERTAINTIES ADD! The uncertainty of the trend becomes the SUM of the uncertainties of each component. The trend line has greater measurement uncertainty than either of the components!

Regarding rounding; 0.40 rounds to 0.4 and 0.47 rounds to 0.5. They tell that stuff in books too.

So what? The rounding process is a recognition that you do not KNOW what the last digit actually is.

Mr.
Reply to  TheFinalNail
February 8, 2026 7:19 pm

So what you’ve described is that temperature probity, provenance, and presentation are really a mess.

Which is what many of us rational folks have been saying since the get-go.

Reply to  Mr.
February 9, 2026 1:22 am

It’s WUWT that describes USCRN as “the gold standard” in US surface temperature records, not me.

Take it up with them if you think it’s “a mess”.

Reply to  TheFinalNail
February 9, 2026 3:05 am

USCRN shows NO WARMING in the USA apart from a small step at the 2016 El Nino.

Reply to  bnice2000
February 10, 2026 12:33 am

It shows more warming than ClimDiv, though.

Reply to  TheFinalNail
February 9, 2026 5:23 am

Take it up with them if you think it’s “a mess”.

You are the one using the data so you are the one that must illustrate that your use is appropriate.

NOAA is deficient in not publicizing the uncertainty in the data they plot. That does not excuse you from doing the research necessary to determine the uncertainty and apply it to your analysis.

Your lack of analysis is astounding. You haven’t even spent the time to analyze if Tmax is the driver or if Tmin is the cause of increased average temperature. All you have done is assume that Tavg has increased and that can’t be good.

Michael Ketterer
Reply to  Jim Gorman
February 9, 2026 7:20 am

Is the USCRN average temperature calculation using Tmin/Tmax or averaged continuous measurements?

Reply to  Michael Ketterer
February 10, 2026 12:39 am

USCRN records both, but for official mean temperatures it averages the daily high and low; otherwise it wouldn’t be consistent with ClimDiv for comparison purposes (and we can’t have that!).

Reply to  Michael Ketterer
February 10, 2026 6:40 am

Is the USCRN average temperature calculation using Tmin/Tmax or averaged continuous measurements?

You should be aware that either method of calculating Tavg provides a metric that conceals necessary knowledge.

Both Tmax and Tmin are values that derive from totally different functional relationships. Tmax is a sine and Tmin is an exponential. They should be studied separately.

Reply to  Jim Gorman
February 10, 2026 12:36 am

You are the one using the data so you are the one that must illustrate that your use is appropriate.

Really? Not NOAA, who produce it? Not WUWT who link to it prominently on their side-panel and who show the monthly updates (which is where I got the data from)?

It all falls to little old me just because you don’t like what it implies?

Reply to  TheFinalNail
February 10, 2026 4:11 am

What you just described is typically classed as “the blind leading the blind”.

Reply to  TheFinalNail
February 10, 2026 6:25 am

Not NOAA, who produce it? Not WUWT who link to it prominently on their side-panel and who show the monthly updates (which is where I got the data from)?

NOAA didn’t create this for a scientific purpose. It is propaganda for public consumption, and you just fell into the quicksand it is based upon.

If you are unable or unwilling to delve into the intricacies of scientific analysis then don’t criticize those who do.

When you are told YOUR graph, not NOAA’s, is lacking, don’t try to justify it with an excuse. Two wrongs DON’T make a right.

Reply to  TheFinalNail
February 9, 2026 6:33 am

The explanation is that NEITHER one has sufficiently low measurement uncertainty to allow an accurate trend calculation!

Since each data point in both of the records is comprised of single measurements of different things taken at different times using different measuring devices, the measurement uncertainty of the total population is the SUM of the measurement uncertainties of each individual measurement. The measurement uncertainty of each data point in your graph is the standard deviation of the total population – i.e. the square root of the sum of the variances of the individual components (Var_total = Var1 + Var2 + …).

The use of anomalies doesn’t solve this. The anomaly is nothing more than a linear transformation of the parent distribution using a constant, which does not change the standard deviation of the anomaly distribution in any way; it remains the standard deviation of the parent distribution. So the measurement uncertainty of the anomaly distribution is the same as the measurement uncertainty of the parent distribution.

All that happens is that your trend line disappears into the black hole of the Great Unknown. It is just one trend line of an infinite number of trend lines that also fit into that measurement uncertainty interval.

All you are doing is pretending that the estimated value of each measurement is 100% accurate, that they form a perfect Gaussian distribution, and that all measurement uncertainty cancels leaving only the sampling error – which can be made smaller and smaller by adding data elements. It’s garbage in-garbage out from beginning to end.
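Two of the statistical facts invoked above are easy to check numerically: variances of independent error sources add when the errors are summed, and subtracting a constant baseline (which is all an anomaly is) leaves the standard deviation untouched. A quick sketch with synthetic numbers:

```python
import random
import statistics as st

random.seed(1)
N = 50_000

# 1) Variances of independent error sources add when the errors are summed.
a = [random.gauss(0.0, 0.8) for _ in range(N)]
b = [random.gauss(0.0, 0.6) for _ in range(N)]
combined = [ai + bi for ai, bi in zip(a, b)]
print(st.pvariance(combined))  # close to 0.8**2 + 0.6**2 = 1.0

# 2) An anomaly is a shift by a constant baseline; the spread is unchanged.
temps = [random.gauss(15.0, 2.0) for _ in range(N)]
anoms = [t - 14.3 for t in temps]  # 14.3 is an arbitrary made-up baseline
print(st.pstdev(temps), st.pstdev(anoms))  # equal to floating-point precision
```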

Reply to  Tim Gorman
February 10, 2026 12:42 am

The explanation is that NEITHER one has sufficiently low measurement uncertainty to allow an accurate trend calculation!

Again, what each data set shows is its individual ‘best estimate’ value. There will be uncertainties in each, of course, but why clutter up the chart when you can calculate a ‘best estimate’ (usually the middle of the range of uncertainty) for each one and just compare these?

I suspect that if ClimDiv was warming much faster than USCRN there would be little talk of data ‘uncertainties’ here at WUWT.

(And we’d also be hearing a lot more about it!)

Reply to  TheFinalNail
February 10, 2026 4:20 am

but why clutter up the chart when you can calculate a ‘best estimate’ (usually the middle of the range of uncertainty) for each one and just compare these?

The measurement uncertainty of the comparison is the SUM of the measurement uncertainties of the components being compared.

The measurement uncertainty of (x +/- u(x)) – (y +/- u(y)) = u(x) + u(y)

The “best estimate” is *ONLY* the middle of the range of the uncertainty if, and only if, the distribution is perfectly Gaussian. Otherwise the “mode” is the best estimate.

You have just demonstrated that my representation of climate science as always using the meme that “all measurement uncertainty is random, Gaussian, and cancels” is an accurate representation.

It’s why I keep claiming that climate science should provide the 5-number statistical descriptors of their temperature data instead of mean/standard deviation. The use of the mean/standard deviation includes the unstated assumption that the temperature data distribution is random and Gaussian – which is just plain wrong. It’s what you get when you live in “statistical world” instead of reality.
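The five-number summary the comment asks for is trivial to compute, and unlike mean/standard deviation it assumes no distribution shape. A short sketch using Python’s statistics module on a deliberately skewed sample:

```python
import random
import statistics as st

random.seed(7)

def five_number_summary(data):
    """min, Q1, median, Q3, max -- descriptors that assume no distribution shape."""
    s = sorted(data)
    q1, q2, q3 = st.quantiles(s, n=4)  # the three quartile cut points
    return s[0], q1, q2, q3, s[-1]

# A skewed (non-Gaussian) sample: mean/stdev alone would hide the asymmetry
sample = [random.expovariate(1.0) for _ in range(10_000)]
print(five_number_summary(sample))
print(st.mean(sample), st.stdev(sample))
```

For the skewed sample the mean sits well above the median, which the mean/stdev pair alone would never reveal.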

Reply to  TheFinalNail
February 10, 2026 6:17 am

There will be uncertainties in each, of course, but why clutter up the chart when you can calculate a ‘best estimate’ (usually the middle of the range of uncertainty)

Your grasp of uncertainty and its use is appalling.

The uncertainty interval describes the range within which the true value is likely to lie. 1σ -> 68%, 2σ -> 95%.

If the uncertainty is such that it can change the slope (God forbid it results in a change in the sign) of your trend, then your trend is not statistically significant.

You are basically expounding the belief that a mean is an accurate measurement and there is no need to “clutter” up presentations with extraneous information. Good luck with that.

Maybe you should tell us how NASA’s insistence in ignoring the uncertainty of an o-ring sealing capability based on temperature was a good decision.

You have obviously never been employed in a job where you can be sued or fired for ignoring uncertainty.
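The point about sign changes can be illustrated with a toy Monte Carlo. Assume, purely for illustration, that a ±0.3 C figure acts as independent Gaussian noise on each monthly value, and count how often a weak true trend comes out of the fit with the wrong sign:

```python
import random

random.seed(3)

def fitted_slope(y):
    """OLS slope of y against x = 0, 1, 2, ... (textbook formula)."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    sxy = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    return sxy / sxx

# 24 monthly values with a weak true slope of +0.005 C per month,
# each observed through assumed Gaussian noise of sigma = 0.3 C.
true = [0.005 * i for i in range(24)]
trials = 2_000
flips = sum(
    1
    for _ in range(trials)
    if fitted_slope([t + random.gauss(0.0, 0.3) for t in true]) < 0.0
)
print(f"fitted slope had the wrong sign in {100 * flips / trials:.0f}% of trials")
```

When a sizeable fraction of trials flip the sign, the noise model says the data cannot distinguish warming from cooling over that span.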

Reply to  TheFinalNail
February 9, 2026 7:53 am

That climatology chooses to ignore significant digit rules and formal measurement uncertainty is another indication that it is not a quantifiable physical science.

Reply to  karlomonte
February 10, 2026 12:43 am

Or an indication that it uses mathematics like every other physical science.

Reply to  TheFinalNail
February 10, 2026 4:23 am

Climate science does *NOT* use the same mathematics as all other physical sciences. It’s why it fell to agricultural science to identify that growing seasons were increasing due to Tmin increases instead of climate science tumbling to it. There is no other physical science that I know of that uses the meme that “all measurement uncertainty is random, Gaussian, and cancels”.

Reply to  TheFinalNail
February 10, 2026 7:17 am

This is the best excuse you could come up with for ignoring measurement uncertainty?

Pathetic.

Reply to  TheFinalNail
February 10, 2026 8:45 am

Every other physical science relies on finding the basic underlying factors that result in a functional relationship.

Why do you never deal with those basic factors like Tmax and Tmin? How about deviations in summer versus winter Tmax and Tmin temperatures? Or regional differences?