Met Office Must Account for the ‘Junk’ Temperature Data Propping up Net Zero Insanity

From The DAILY SCEPTIC

BY CHRIS MORRISON

Pressure is likely to grow in the coming days for the U.K. Met Office to make a full public statement about the state of its nationwide temperature measuring stations. This follows sensational revelations in last Friday’s Daily Sceptic that nearly eight out of ten sites had huge scientifically-designated ‘uncertainties’ that essentially disqualified them from providing the accurate data required to promote the collectivist Net Zero agenda. Our report went viral on social media with over 1,300 retweets on X, and it was reposted on a number of sites. The investigative journalist Paul Homewood has covered the Met Office’s temperature claims for many years, and in the light of the new disclosures he noted that if it wanted to continue to use its existing station measurements, it should show a warning that the margin of error is so great “that they have no statistical significance at all”.

Specifically, nearly one in three (29.2%) Met Office sites are rated by the World Meteorological Organisation (WMO) as CIMO Class 5, which comes with a warning of “estimated uncertainties added by siting of 5°C”. Class 5 can be termed a ‘junk’ rating since the WMO gives no guidance on where such a station can be located. The next-to-junk Class 4 comes with uncertainties of 2°C, while Class 3 carries a 1°C warning. From information disclosed under a Freedom of Information request, the Daily Sceptic compiled the graph below, which shows that Class 4 accounted for 48.7% of the Met Office’s 380 recording stations. Only 13.7%, or 52 stations, are free of ‘uncertainty’ warnings.

Net Zero promotion requires reasonably precise measurements of both local and global temperatures, and these are simply not available. In the run-up to last year’s COP28 meeting, the BBC ran an explanatory article on the significance of the 1.5°C threshold, a rise in the Earth’s temperature measured from the end of the Little Ice Age. “Every tenth of a degree of warming matters, but as you get warmer each increment matters more”, said Myles Allen, Professor of Geosystem Science at the University of Oxford and a co-ordinating author of the IPCC’s 2018 special report on 1.5°C. It is difficult to see how precision down to 0.1°C can be achieved by consulting current Met Office data, let alone how the Met Office could claim, to two decimal places, that last year in the U.K. was only 0.06°C cooler than the all-time annual record.

Comments on social media following the Daily Sceptic publication were often damning. On Homewood’s site, ‘YorksChris’ observed, “wow, this is horrendous… some of the locations of the so-called professional stations simply amaze me!”, while ‘magasox’ commented that this was huge, adding, “sceptics should shout it from the rooftops at every opportunity”. Heads, he felt, should roll, “but of course they won’t”. On the Daily Sceptic blog, ‘For a fist full of roubles’ looked forward to mainstream media latching on to this, “and revealing how the powers that be deliberately mislead us”. On the U.S.-based Watts Up With That? site over 300 posts greeted the news, with ‘UK-Weather-Lass’ stating: “It’s about time there was a public inquiry into how rotten and unfit for purpose the Met Office is, and why it is allowed to continue to be so.”

The problems with the Met Office data are mainly caused by increasing urbanisation which has encroached on the space around stations and corrupted measurements with artificial heat. Similar problems have been identified around the world leading to ever-increasing doubts about the accuracy of frequently quoted ‘global’ temperatures. Scientists estimate that heat corruption is likely to be responsible for up to 30% of warming claimed by the meteorological databases. The Met Office adds its figures to global compilations but the increasingly politicised state-funded operation also uses its data to declare almost constant temperature ‘records’. The Daily Sceptic has investigated the heat records declared since 2000 and found that all bar two should be disqualified. Many of them have been set in ‘junk’ Class 5 and most of the rest in Class 4.

Class 5 records include the highest daily maximum temperature in Northern Ireland, declared in 2021 at Castlederg. The highest January monthly temperature was set this year at Achfary, and this Class 5 site also holds the record set in December 2019. Three U.K. area records were also set at Class 5 sites, covering England NW, East Anglia, and England SE and Central S. The latter record was provided by St. James’s Park, which was one of five sites said to top 40°C on July 19th, 2022. This particular event was lauded at the time by the Met Office as a “milestone in climate history”. Another of the 40°C sites, Northolt airport, is also Class 5.

According to the WMO, a Class 5 site is one where nearby obstacles “create an inappropriate environment for a meteorological measurement that is intended to be representative of a wide area”. In response to a previous FOI request from Paul Homewood, the Met Office noted that Class 5 data “will be flagged and not quoted in national records”. This does not appear to happen. On July 25th, 2019, the site at the Cambridge Botanic Gardens was credited with a new U.K. temperature record of 38.7°C. The Cambridge Botanic Gardens is a Class 5 site, and it still holds the July record for the East Anglia region. All these records should be removed, or at least flagged with the large uncertainties set down by the WMO.

Ditto ‘near junk’ Class 4, where the crowd of record holders is larger. Class 4 sites include Charterhall, where the highest Scottish temperature was set in 2022, and Hawarden Airport, home to the highest Welsh reading. A monthly U.K. record for August was set in 2003 at Faversham, while no fewer than five U.K. areas have records attributed to this class, which comes with a WMO ‘uncertainty’ of 2°C. The all-time British record was set on July 19th, 2022 at a Class 3 site, which comes with a WMO uncertainty of 1°C. Set halfway down the runway at RAF Coningsby, the record stood for just 60 seconds at 3:12pm, with the reading rising and falling by 0.6°C either side of the event. A previous FOI from the Daily Sceptic disclosed that three Typhoon fighter jets were landing on the runway at or around the time the record was set. All of these records should be ditched as well, or given appropriate warnings.

So far as can be seen, no Met Office heat record has been set since 2000 at a pristine Class 1 site, which might not surprise given there are only 24 of them. Only two – the highest U.K. February temperature at 21.2°C and the highest November temperature at 22.4°C – have been set at near-pristine Class 2 locations.

Most propagandising for Net Zero revolves around higher temperatures, and without this ammunition the project will quickly wither. It explains why these so-called ‘records’ are rarely out of the mainstream media headlines. The high temperature readings are weaponised at the national level, but local media is also targeted. This can be shown by considering the photograph below from Google Earth.

This is the site of the Sheffield temperature station. On July 19th, 2022, the local Star newspaper reported that the city had smashed its temperature record, topping 39°C for the first time. According to the Met Office, the newspaper reported, the record reached 39.4°C on July 18th. The red marker shows where the Sheffield readings are taken: hard by a busy road with a large bus lane, surrounded by either city buildings or heavy vegetation, and located at or near what appears to be a concrete park. It might not surprise to learn that Sheffield is a Class 5 site.

The Met Office’s Chief Scientist and leading Net Zero promoter, Professor Stephen Belcher, states that “in a climate unaffected by human influence, climate modelling shows that it is virtually impossible for temperatures in the U.K. to reach 40°C”. It is not thought he was referring to the influence of Typhoon jets or the 95 bus.

Chris Morrison is the Daily Sceptic’s Environment Editor

402 Comments
March 4, 2024 10:16 pm

I was in London during that heat wave.
Monday and Tuesday were hot but dry; Wednesday, at 27°C but with much higher humidity, was far worse.

Reply to  Pat from Kerbob
March 5, 2024 12:20 am

My wife and I were there in August 2023 in a claimed “heat wave” … 25°C! Haha, couldn’t understand the fuss; that’s a nice winter day in Australia.

Reply to  Pat from Kerbob
March 5, 2024 2:02 am

Yes. It was certainly a very hot few weeks, reminiscent perhaps of the very hot summer of the 1970s. But it was, how to say, unusual but not extraordinary. It was within the bounds of normal fluctuations.

What happens is, every so often, on a decadal basis, you get blocking highs in the Atlantic, and hot air is brought up from the southeast – the Sahara. Because the highs move very slowly, this leads to long spells of hot very dry weather.

The striking thing was the hysteria. Where has the stiff upper lip gone? There were endless tirades about how dangerous it was supposed to be to go out, telling the public to drink lots of water etc.

Meanwhile, there was the usual rush of barely dressed Brits stampeding towards any available park or beach in blissful ignorance of the supposed life-threatening heat wave in progress. They just thought, foolish people, that it was a really nice summer of the sort Britain gets all too rarely.

In the evening it always cooled off and by the early morning it was cool and pleasant.

We shall see what happens this year. But whatever does, you can be sure it will be greeted with more hysteria and proclamations that whatever it is is unprecedented somehow or other.

Ex-KaliforniaKook
Reply to  Pat from Kerbob
March 5, 2024 10:49 pm

In May 1995 my wife and I were enjoying the Scottish castles and cathedrals in the north. At many pubs and taverns the wait staff chided us (Americans) for not having signed onto the Kyoto Protocol. Daily we listened to how hot the weather was. It was scorching!

We could always tell other Americans in the general populace. We were the idiots shivering and wearing shorts and T-shirts because we watched the morning news warning of dangerous heat, saw the sun that rose and set ridiculously early and late, while the locals sweated in layered clothing.

The high while we were there: 68°F (20°C). With the never-ending wind, it was just plain cold. We could not understand why the locals didn’t just remove a jacket. Go ahead and leave the vest, shirt, and undershirt on, but why bake for no good reason?

Reply to  Ex-KaliforniaKook
March 6, 2024 4:00 am

I can’t believe they consider 68°F to be scorching.

It was 80°F at my house yesterday. A very pleasant day. I did a lot of yard work. I suppose the Scottish folks would have fainted dead away.

I guess it all depends on what one gets used to. I have a little Scottish blood in me, but 68°F is not considered scorching around here. Not even close.

Reply to  Pat from Kerbob
March 6, 2024 10:23 am

Commenting is closed?

Reply to  DonM
March 6, 2024 11:34 am

Exit the site in your browser, come back in and re-login. It happens periodically.

Andrew
March 4, 2024 10:20 pm

If you remove the records, won’t that be cooling the past to allow a whole new set of records?

Richard Page
Reply to  Andrew
March 5, 2024 6:27 am

The geniuses at East Anglia have already done that. They adjusted and manipulated the data until it was what they wanted, then deleted the raw data – something unheard of in the scientific community and akin to professional malpractice imo.

Reply to  Richard Page
March 5, 2024 7:58 am

Fake Data fraud.

observa
March 4, 2024 10:26 pm

Two-faced deniers –
University of Sydney research shows that dark roofs could be lifting ambient urban temperatures by up to 2.4 degrees.
Dark roofs banned as NSW targets net zero for buildings | ArchitectureAU

Bob B.
Reply to  observa
March 5, 2024 4:38 am

Does that include rooftop solar?

Reply to  Bob B.
March 5, 2024 6:07 am

Those implementing the bans probably don’t know what the predominant color of solar panels is – or don’t care to know.

Reply to  Bob B.
March 5, 2024 7:36 am

Residential rooftop solar PV systems are the worst way to use PV — the modules run much hotter due to less rear-surface cooling, and the systems are invariably mounted in really bad orientations.

I have a picture of one with 14-16 modules mounted on the north-facing roof surface; this is throwing power and money down the garbage chute (no more image button so I can’t post it).

old cocky
Reply to  karlomonte
March 5, 2024 12:05 pm

I have a picture of one with 14-16 modules mounted on the north facing roof surface,

Where else would you put them in the southern hemisphere?

Reply to  old cocky
March 5, 2024 12:13 pm

Heh, this is the northern!

Reply to  karlomonte
March 6, 2024 4:18 am

I see an image button down on the right side of this reply box.

A person the other day had the same complaint, but I could see the image button on that post, too.

I wonder what’s going on?

Reply to  Tom Abbott
March 6, 2024 4:51 am

I just now re-logged in to WUWT, and still don’t see one. Wonder if it’s a browser deal; will have to test (Brave here).

Petermiller
March 4, 2024 10:49 pm

That’s typically how government policy is made: ‘Don’t confuse me with the facts, my mind is made up.’

Thermageddon is upon us, look at the data, never mind the fact that 86.3% of it is useless junk.

Scarecrow Repair
Reply to  Petermiller
March 4, 2024 11:49 pm

94.6% of all statistics are made up.

Reply to  Scarecrow Repair
March 5, 2024 12:22 am

97%

Scissor
Reply to  Streetcred
March 5, 2024 5:31 am

98% is the new 97%.

Reply to  Scissor
March 5, 2024 7:59 am

Should be 98.3747%.

Richard Page
Reply to  Scarecrow Repair
March 5, 2024 6:31 am

86.3% of all statistics are made up, 8.3% of all statistics are put in there just to confuse us.

ferdberple
March 5, 2024 12:02 am

Here is an interesting answer that shows London has already reached the dreaded 2°C barrier. From Aria:
“Therefore, the average temperature difference between a warm winter and a cold winter in London, UK is approximately 2°C. Please note that these are average temperature ranges, and actual temperatures can vary from year to year.”

Scissor
Reply to  ferdberple
March 5, 2024 5:33 am

But 1.5°C over a century is a tipping point. /s

strativarius
March 5, 2024 12:12 am

“”Pressure is likely to grow in the coming days…””

But not regarding our broken barometer. Gaza George is centre stage.

Richard Page
Reply to  strativarius
March 5, 2024 2:17 pm

Even further to the left than Jeremy Corbyn and even more anti-semitic. Presumably he used the same tactics of bullying and intimidation that he used in Bradford.

ferdberple
March 5, 2024 12:45 am

Forget 2 C or net zero. Everyone in London survived much more. Here is what the Aria AI engine has to say:

“The average temperature difference between a cold summer and a hot summer in London, UK can vary significantly. On average, a cold summer in London may see temperatures around 15-18°C, while a hot summer could see temperatures ranging from 25-30°C or higher. Therefore, the temperature difference between a cold and hot summer could be around 10-15°C or more, depending on the specific weather patterns and climatic conditions for each season.”

March 5, 2024 4:22 am

It is just indicative of how climate science is NOT a science. And, I emphasize NOT! I know I sound like a broken record, but it is galling to see such lackadaisical treatment of measurements in order to claim knowledge that is truly unavailable because it is hidden within an uncertainty window.

Climate science needs to examine all the waste heat we generate in heating, cooling, auto radiators, etc. Changing albedo through pavement and roofs adds to the heat the atmosphere must absorb. I don’t recall seeing any papers that discuss this.

On a relative basis, 5° out of 30° is a 17% difference. Imagine in the U.S. if voltage were allowed to fluctuate by 20 volts out of 120, or if frequency were allowed to fluctuate 10 Hz out of 60 Hz. What if 17% of planes failed at takeoff/landing? It is unconscionable.

It is far beyond the point where climate scientists should have demanded accurate data to work with. The only reason for not having done so is that it may ruin their previous work.

The misconception that uncertainty in measurements is reduced every time an average is taken must be erased from the language of temperature. They should all take a course on metrology in analytical chemistry.

Reply to  Jim Gorman
March 5, 2024 7:43 am

They also need to make honest calculations of the “carbon footprint” of a PV module, including all the diesel fuel needed for mining and transportation, the coal needed to reduce silica, copper, and aluminum ore, and the large quantities of uninterrupted electricity that must be available for melting, growing, and sawing silicon into suitable wafers. And then there are the diffusion and sputtering steps that create a solar cell. Followed by the soldering and encapsulation in refined petroleum products, especially EVA.

phrog
Reply to  Jim Gorman
March 5, 2024 7:46 am

Climate science only cares about the trend in anomalies over time; they don’t care at all about the quality of individual measurements.

Jono1066
March 5, 2024 5:18 am

But I was always told it was the number 32 bus that was the culprit?

Almost all electrical energy consumed in a town/urban landscape ends up as heat, and we can determine the energy being used in a town/city, which creates the urban heat island along with the heat-radiating bodies called humans. So any temp sensors in the urban landscape should have their measured values reduced accordingly, or they shouldn’t be there. The Met Office should publish the full list and get it sorted quickly.

AlanJ
March 5, 2024 5:48 am

As always on WUWT, no mention is made of the fact that siting issues are well known and understood in the surface temperature network, and that great pains are taken to minimize their influence on regional and global temperature estimates.

Also, from the WMO:

The numbers should not be taken to mean that higher class stations are of low value, as there may be very good reasons for the site exposure depending on the purpose for which that station was established (specific vs general purpose, mountain stations, agricultural stations, safety reasons, …). However, we acknowledge that the use of numbers can easily lead one to suggest a ranking. This is not the purpose and should be avoided. For some time the measurement experts have taken different requirements for different users, and this may be more pronounced in emergency circumstances when higher (number) classes may still be highly valuable for some applications, the SC reflects this. Because many sites have been chosen to serve the needs of many users, it is likely that many sites will not be class 1 for all parameters.

Richard Page
Reply to  AlanJ
March 5, 2024 6:10 am

You cannot adjust for a variable; UHI is a variable, not a constant. The fact that UK sites are badly corrupted is something I’ve been saying for several years now – I even specifically highlighted the Sheffield site (btw the temperature station is a few metres back from the red marker, in amongst the trees). The fact is that the siting issues ARE well known and understood but the methods used to work with them are misguided, badly applied or, in many cases, the issues are simply ignored. This problem needs to be sorted out and the Met Office have done nothing in the years they’ve known about it so their hand must be forced. Either remove the class 3-5 stations from the temperature record as being unfit for purpose or resite them on sites that are fit for purpose.

AlanJ
Reply to  Richard Page
March 5, 2024 6:41 am

You can, of course, adjust a variable. UHI imparts a trend bias, which is accounted for by homogenization algorithms. This has been demonstrated in the peer reviewed literature (see, e.g., Menne et al., 2009). It is simply a tired WUWT canard to insist otherwise. And no one on WUWT wants to produce an alternative temperature dataset with only the “good” stations retained and only well-applied methods, probably because they know exactly what the outcome will be (i.e. they’ll find the same thing as everyone else).

Reply to  AlanJ
March 5, 2024 7:35 am

Homogenization does nothing but spread systematic uncertainty around among the different stations.

Nor is UHI a linear trend. It will vary from minute to minute, hour to hour, day to day, and season to season. It is not noise and can’t be filtered out.

“And no one on WUWT wants to produce an alternative temperature dataset with only the “good” stations retained”

There are *NO* “good” stations whose measurement uncertainty is less than the differences trying to be identified.

If you have a temperature measurement of 15.1C and another of 15.2C, both with an uncertainty of +/- 0.5C you simply don’t know if the difference is 0.1C or not. The true difference is part of the Great Unknown. The difference should be stated as 0.1C +/- 0.7C (the quadrature combination of the uncertainties).

Tell us what a difference of 0.1C +/- 0.7C means to *you*!
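
For what it’s worth, the quadrature combination described above is easy to check with a minimal Python sketch (the 15.1, 15.2 and ±0.5 figures are taken from the comment; everything else is illustrative):

```python
import math

def combined_uncertainty(*uncertainties):
    """Combine independent uncertainties in quadrature (root-sum-square)."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# Values assumed from the comment above.
t1, u1 = 15.1, 0.5  # first reading and its stated uncertainty, degrees C
t2, u2 = 15.2, 0.5  # second reading and its stated uncertainty, degrees C

difference = t2 - t1
u_difference = combined_uncertainty(u1, u2)

print(f"difference = {difference:.1f} +/- {u_difference:.1f} C")  # 0.1 +/- 0.7 C
```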

AlanJ
Reply to  Tim Gorman
March 5, 2024 8:44 am

Homogenization does nothing but spread systematic uncertainty around among the different stations.

You can’t just insist things into truth. Prove it. Research (e.g. Menne et al. 2009) demonstrates that homogenization is quite successful at addressing systematic bias.

Nor is UHI a linear trend. It will vary from minute to minute, hour to hour, day to day, and season to season. It is not noise and can’t be filtered out.

I don’t think roads and buildings pop in and out of existence minute by minute in cities. I’m looking out my window now and see the skyline looking pretty fixed. But maybe in your delusional reality they do.

If you have a temperature measurement of 15.1C and another of 15.2C, both with an uncertainty of +/- 0.5C you simply don’t know if the difference is 0.1C or not. The true difference is part of the Great Unknown. The difference should be stated as 0.1C +/- 0.7C (the quadrature combination of the uncertainties).

That is true, but quite irrelevant to the problem we are actually trying to solve.

0perator
Reply to  AlanJ
March 5, 2024 8:55 am

 the problem we are actually trying to solve.

And what is that problem?

AlanJ
Reply to  0perator
March 5, 2024 9:07 am

The problem we want to solve is “how is the mean temperature of the region/globe changing over time?”

Richard Page
Reply to  AlanJ
March 5, 2024 9:27 am

No, that’s not it.

0perator
Reply to  AlanJ
March 5, 2024 11:02 am

Credit given for answering, even if it’s meaningless.

Reply to  AlanJ
March 5, 2024 11:03 am

We already have the answer to “how is the mean temperature of the region/globe changing over time?”

The answer is: for the past 2.5 million years, the globe has experienced glacial ice expansion for approximately 100,000 years at a time, with interglacial warming reducing ice expansion for approximately 20,000 years.

AlanJ
Reply to  doonman
March 5, 2024 1:53 pm

So how did we get that answer? What problem did we solve to get it? I’m glad we agree that this is a solved issue, and that the whinging on WUWT about it is a waste of breath.

Reply to  AlanJ
March 5, 2024 12:36 pm

Except you can’t get rid of the measurement uncertainty involved in trying to measure it! So the final answer is always going to be “WE DON’T ACTUALLY KNOW!”

Reply to  AlanJ
March 5, 2024 1:12 pm

You mean the urban region. That is all the urban surface sites will show. Even that is highly dubious because of all the intentional mal-adjustments.

Thing is, urban is a rather small part of the globe, even though it makes up most of the surface data, especially after the idiotic homogenisation routines destroy any reasonable rural data.

Any urban temperature fabrication is totally meaningless when it comes to global temperatures.

Reply to  0perator
March 5, 2024 12:34 pm

The problem is the ability of climate science to ignore measurement uncertainty. They just “assume” it away so they don’t have to deal with it.

Reply to  AlanJ
March 5, 2024 9:58 am

With measurements taken near an airport, you have no idea about the effect of a plane landing or taking off.

Reply to  AlanJ
March 5, 2024 12:33 pm

“You can’t just insist things into truth. Prove it. Research (e.g. Menne et al. 2009) demonstrates that homogenization is quite successful at addressing systematic bias.”

Malarky! Hubbard and Lin proved this wrong in 2002 and 2006. Their study showed you can’t use the same adjustment on multiple stations because of differences in micro-climate. You’ve been given this reference before. As usual you just ignore it.

Hubbard and Lin, First published: 12 August 2006:

…………………………………..
Abstract: The homogenized U.S. Historical Climatology Network (HCN) data set contains several statistical adjustments. One of the adjustments directly reflects the effect of instrument changes that occurred in the 1980s. About sixty percent of the U.S. HCN stations were adjusted to reflect this instrument change by use of separate constants applied universally to the monthly average maximum and minimum temperatures regardless of month or location. To test this adjustment, this paper reexamines the effect of instrument change in HCN using available observations. Our results indicate that the magnitudes of bias due to the instrument change at individual stations range from less than −1.0°C to over +1.0°C and some stations show no statistical discontinuities associated with instrument changes while others show a discontinuity for either maximum or minimum but not for both. Therefore, the universal constants to adjust for instrument change in the HCN are not appropriate. (bolding mine, tpg)
—————————————————-

I highlighted the last part to make sure you saw it. Why climate science wants to ignore this is beyond me. When individual stations can have magnitudes of bias of more than 2C, homogenization does nothing but spread the bias of the stations involved around to other stations. The conclusion section states: “These biases are not solely caused by the change in instrumentation but may reflect some important unknown or undocumented changes such as undocumented station relocations and siting microclimate changes (e.g., buildings, site obstacles, and traffic roads).”

Chapter 3 in Taylor shows how you combine uncertainty when you involve measurements of different things using different measurement devices. Is that somehow not proof enough for you? Taylor’s Rule 3.16 states that when you have independent and random measurements and they are being summed (or subtracted), the total uncertainty is the quadratic sum of the individual uncertainties. It’s really quite simple.

“I don’t think roads and buildings pop in and out of existence minute by minute”

Their impact does! Does the term “light and variable” mean anything at all to you when applied to wind direction and speed? L&V indicates wind can change from second to second let alone minute to minute. And both have an impact on UHI effects on a measuring station. You are just blowing smoke out your backside!

“That is true, but quite irrelevant to the problem we are actually trying to solve.”

Measurement uncertainty and its propagation *IS* the entire problem! You keep wanting to fall back on the meme that all measurement uncertainty is random, Gaussian, and cancels so you don’t have to address it! Doing that is simply wrong!

If your uncertainty interval subsumes what you are trying to identify you can’t know anything about what you are trying to identify! It’s part of the Great Unknown!

The uncertainty interval is the range of values that can reasonably be assigned to the measurand. *YOU* want to pick one and say “THIS IS THE TRUE VALUE!” when you can’t know that!

AlanJ
Reply to  Tim Gorman
March 5, 2024 1:54 pm

You might not be aware of this, but 2009 is chronologically later than both 2002 and 2006. Menne et al, 2009 introduces an automated pairwise homogenization algorithm that performs site specific bias correction. You should read it.

Is that somehow not proof enough for you? Taylor’s Rule 3.16 states that when you have independent and random measurements and they are being summed (or subtracted), the total uncertainty is the quadratic sum of the individual uncertainties. It’s really quite simple.

What if we’re, say, I dunno, taking an average? What does Taylor say about that?

Sweet Old Bob
Reply to  AlanJ
March 5, 2024 2:21 pm

“You can’t just insist things into truth.”

So , why do you keep trying ?

😉

Reply to  AlanJ
March 5, 2024 7:46 am

Another purveyor of climate Fake Data fraud.

phrog
Reply to  AlanJ
March 5, 2024 7:47 am

So, you think that a trend is more important than the individual measurements? Did you not read through the record highs that were set in the past 5 or so years, all thanks to these poorly sited stations?

AlanJ
Reply to  phrog
March 5, 2024 8:44 am

The trend is the thing we care about in regards to climate change, it is objectively vastly more important than individual station readings.

phrog
Reply to  AlanJ
March 5, 2024 8:56 am

Anomalies, by themselves, paint an incomplete picture of how the climate is changing in a region. Multiple months can yield the same monthly anomalies, but they are not the same.

That’s why you have to investigate further. You can get a record warm deviation from the average, but the data points themselves may not reveal any unusually warm temperatures. That’s why they are misleading; they are far too simplistic. Over-simplicity is modern climate science’s hallmark.

AlanJ
Reply to  phrog
March 5, 2024 9:10 am

I don’t think anyone is suggesting that we can stop investigating the patterns and causes of climate change at just knowing that the regional mean temperature is increasing.

phrog
Reply to  AlanJ
March 5, 2024 9:59 am

In a system with chaotic fluctuations, the mean temperature can exhibit an increase over time without the increase actually reflecting a genuine change in the real world.

[image]

AlanJ
Reply to  phrog
March 5, 2024 10:14 am

Yes, of course, a basic aspect of trend analysis is determining the likelihood of producing the observed trend assuming the null hypothesis.

phrog
Reply to  AlanJ
March 5, 2024 12:27 pm

In the graph I provided above, the rise is statistically significant; however, in this hypothetical scenario, there is no real-world change. This is solely due to natural fluctuations. There is a difference between statistical significance and practical significance. That’s why Ordinary Least Squares is mostly meaningless in the context of climate data.
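
A rough illustration of that point, as a minimal Python sketch (purely hypothetical numbers): an AR(1) noise series with no built-in change can still produce an apparently ‘significant’ OLS slope, because the autocorrelated residuals violate the independence assumption behind the reported p-value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# AR(1) "natural variability" with persistence but no underlying trend.
n, phi = 120, 0.9
series = np.zeros(n)
for t in range(1, n):
    series[t] = phi * series[t - 1] + rng.normal(scale=0.1)

time = np.arange(n)
fit = stats.linregress(time, series)

# OLS assumes independent residuals; persistence makes the nominal p-value
# overly optimistic, so a "significant" slope can emerge from noise alone.
print(f"slope = {fit.slope:.4f} per step, nominal p-value = {fit.pvalue:.3g}")
```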

Reply to  phrog
March 5, 2024 12:48 pm

But it’s the only tool in the trendologists’ toolbox.

AlanJ
Reply to  phrog
March 5, 2024 1:57 pm

It is obvious to everyone that you can’t determine the underlying cause of a trend by merely examining the trend itself (the WUWT gang desperately tries to convince us that people don’t understand this, of course, but we can ignore flagrant lies when we see them). Once you establish the presence of a statistically significant trend, then you can start investigating the cause(s) of it.

phrog
Reply to  AlanJ
March 5, 2024 2:11 pm

You’re not addressing my main point.

[image]

AlanJ
Reply to  phrog
March 5, 2024 3:25 pm

Do a better job of articulating your “main point” so that I can address it, then. My understanding is that you’re saying that not all trends, even when statistically significant, necessarily signal a long term underlying change in the climate. It might be some form of multi-decadal internal variability. Which I agree with.

phrog
Reply to  AlanJ
March 5, 2024 5:52 pm

You made it clear the only thing climate science cares about is the trend in anomalies over time. Why would we use OLS in climate analysis at all? Why is it only helpful in some situations versus others? The main global datasets show increases in mean temperature on the order of microdegrees. When taking into account factors like measurement resolution and uncertainty, it’s hard to take such a trend very seriously or think it’s outside the realm of natural variability.

AlanJ
Reply to  phrog
March 6, 2024 5:29 am

Why would we use OLS in climate analysis at all?

Because it’s a good way to characterize change in a time series over time when the change is well-described by a linear function. The warming has not been linear, and in fact has accelerated, so a linear model is only an approximation, and you certainly could choose a more suitable one.

When taking into account factors like measurement resolution and uncertainty, it’s hard to take such a trend very seriously or think it’s outside the realm of natural variability.

You’re looking at the mean of tens of thousands of individual measurements, the random measurement uncertainty of a single measurement is quite irrelevant in this context. We also know that the observed trend is well outside of any known mode of internal natural variability in the climate system.

Reply to  AlanJ
March 6, 2024 11:00 am

 the random measurement uncertainty of a single measurement is quite irrelevant in this context

Bullshit.

Reply to  karlomonte
March 6, 2024 11:29 am

But my stats book didn’t have ± on the sample numbers so it must be irrelevant!

Reply to  Jim Gorman
March 6, 2024 2:42 pm

All hail stats books!

phrog
Reply to  AlanJ
March 6, 2024 11:45 am

Because it’s a good way to characterize change in a time series over time when the change is well-described by a linear function. The warming has not been linear, and in fact has accelerated, so a linear model is only an approximation, and you certainly could choose a more suitable one.

OLS is meaningless with regard to sinusoidal data. Better to use maybe a running mean, or to monitor the rate of change, so you don’t overlook the fluctuations. What is a long-term trend but a collection of short-term trends? Those are important too. It’s known that climate changes through cycles, so if there has been warming ongoing for ~700 years, just as a hypothetical example, and it starts cooling again, the OLS won’t reflect that new change for a very long time.

You’re looking at the mean of tens of thousands of individual measurements, the random measurement uncertainty of a single measurement is quite irrelevant in this context. We also know that the observed trend is well outside of any known mode of internal natural variability in the climate system.

That’s incorrect. The Central Limit Theorem assumes that the samples are independent and identically distributed. Temperature measurements are not identically distributed, and neither are averages. If you can derive temperature measurements for a multitude of different days, then you will only get a numerical normal distribution, but it won’t provide any useful results. This article also rightfully states that stations all over the world are producing corrupted readings. You are arguing they are small, but they still introduce a systematic bias. When you average measurements with a systematic bias, you’ll just get more skewed results with each average.
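
The last point about systematic bias is easy to illustrate with a minimal Python sketch (all numbers assumed for the example): averaging beats down the random scatter, but a bias shared by the readings survives the averaging untouched:

```python
import numpy as np

rng = np.random.default_rng(42)

true_temp = 15.0   # assumed "true" value, degrees C
bias = 0.5         # assumed shared systematic bias, e.g. from siting
sigma = 0.3        # assumed random per-reading scatter

n = 10_000
readings = true_temp + bias + rng.normal(0.0, sigma, size=n)

mean = readings.mean()
print(f"mean of {n} readings: {mean:.3f} C")
print(f"error of the mean vs. the true value: {mean - true_temp:+.3f} C")
# The random component shrinks as n grows; the +0.5 C bias does not.
```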

phrog
Reply to  phrog
March 6, 2024 5:20 pm

If you can derive *the same* temperature measurements for a multitude of different days

AlanJ
Reply to  phrog
March 6, 2024 6:40 pm

OLS is meaningless with regards to sinusoidal data. Better to use maybe a running mean or to monitor the rate of change so you don’t overlook the fluctuations. What is long term trend but a collection of short term trends? Those are important too. It’s known that climate changes through cycles, so if there has been warming intact for ~700 years, just as a hypothetical example, and it starts cooling again, the OLS won’t reflect that new change for a very long time.

A sinusoidal series can exhibit a long term trend, and a linear model can describe that trend. To be sure, there are many ways to describe time series behavior, and which one to use depends on context. If what you want to know is simply “has this series been going up over time?”, then a linear model is quite suitable.

The Central Limit Theorem assumes that the samples are independent and identically distributed. 

You need to be clear about what you mean – there are multiple variants of the CLT, including some specifically dealing with non-IID variables (e.g. the Lyapunov CLT). But I’m not convinced that temperature measurements are not identically distributed.

Reply to  AlanJ
March 7, 2024 5:27 am

“I’m not convinced that temperature measurements are not identically distributed.”

That’s because you (and climate science) never bother with the variance of your data!

What is the variance in temperatures during your local summer? What is the variance in temperatures in your local winter?

To be iid the temps would have to have the same variance!

What is the variance of NH temps during the summer versus the variance of SH temps in the winter? Are they different?

BTW, do combined NH and SH temperatures meet the condition for the Lyapunov CLT? Do you even know? As I understand it the Lyapunov CLT only applies to distributions with limited skewness. Do you *know* what the skewness of the temperature measurement data set is?

phrog
Reply to  AlanJ
March 7, 2024 7:52 am

A sinusoidal series can exhibit a long term trend, and a linear model can describe that trend. To be sure, there are many ways to describe time series behavior, and which one to use depends on context. If what you want to know is simply “has this series been going up over time?”, then a linear model is quite suitable.

But the data is not truly linear; you can better observe changes over time by applying a 5-year running mean. You want to capture the variations. OLS is just a scare tactic used to convince people that there is runaway warming.

I’m not convinced that temperature measurements are not identically distributed.

Just looking at data in my local area for last month, one daily average (35.0°F) is representative of different days. One day (referred to as ‘A’) had a registered high of 43°F and a low of 27°F, and another day (referred to as ‘B’) had a registered high of 41°F and a low of 29°F. For day A, it rained throughout the entire day, with 1.36 inches of rain. For day B, it was clear and sunny, but in the morning, there was a slight snowfall; 1.3 inches of snow fell.

Yet, these two days are represented by the same average. They’re not identically distributed; instead, you’re losing meaningful information.
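
A trivial Python check of those two days (the highs and lows are the commenter’s figures; the rest is illustrative) shows how the usual (Tmax + Tmin)/2 ‘daily average’ collapses two very different days onto the same number:

```python
# Day A: rain all day (1.36 in of rain).  Day B: clear and sunny after light snow.
day_a = {"high_f": 43, "low_f": 27}
day_b = {"high_f": 41, "low_f": 29}

def daily_average(day):
    """The (Tmax + Tmin) / 2 value commonly reported as the daily mean."""
    return (day["high_f"] + day["low_f"]) / 2

print(daily_average(day_a), daily_average(day_b))  # 35.0 35.0 - identical averages
```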

AlanJ
Reply to  phrog
March 7, 2024 8:39 am

OLS is just a scare tactic used to convince people that there is runaway warming.

No, it’s a common statistical procedure that indicates how much change has occurred in a series. It’s hilarious that the WUWT contrarian set has deluded themselves into thinking linear regression is a conspiracy.

They’re not identically distributed; instead, you’re losing meaningful information.

That’s not what identical distribution is.

phrog
Reply to  AlanJ
March 7, 2024 9:01 am

Even though the average is the same, the daily distributions of specific temperature values and weather conditions are different for A and B.

??

Reply to  AlanJ
March 8, 2024 4:41 am

It’s pretty obvious that you don’t know what identical distribution is. Neither does bellman. Two peas in a pod.

Don’t think it goes unnoticed that you answered not a single question I asked you.

Here they are again:

—————————————-
To be iid the temps would have to have the same variance!
What is the variance of NH temps during the summer versus the variance of SH temps in the winter? Are they different?

BTW, do combined NH and SH temperatures meet the condition for the Lyapunov CLT? Do you even know? As I understand it the Lyapunov CLT only applies to distributions with limited skewness. Do you *know* what the skewness of the temperature measurement data set is?
—————————————

Why won’t you answer these questions? Stating that you don’t know would be acceptable as an answer.

AlanJ
Reply to  Tim Gorman
March 8, 2024 7:55 am

What is the variance of NH temps during the summer versus the variance of SH temps in the winter? Are they different?

We are combining temperature anomalies, so this is not a significant consideration for ID.

Reply to  AlanJ
March 8, 2024 3:16 pm

Of course the anomalies inherit the variances of the components used to find the anomaly. As usual you want to just assume variance doesn’t exist.

So let’s list the statistical descriptors that you think don’t apply to temperature measurements, averages, and anomalies

  1. Variance
  2. Skewness
  3. Kurtosis
  4. Median
  5. Quartiles
  6. Range

I’ve *NEVER* worked anywhere where the “average” is the only statistical descriptor that needs to be provided for a data set. But then I’ve never worked in a climate science group either.

old cocky
Reply to  AlanJ
March 8, 2024 3:38 pm

We are combining temperature anomalies, so this is not a significant consideration for ID.

Anomalies are just the application of a site-specific constant offset. All other properties should be preserved.
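
That claim is straightforward to check numerically; a minimal Python sketch (with made-up numbers) shows that subtracting a constant baseline shifts the mean but leaves the variance and the rest of the distribution’s shape untouched:

```python
import numpy as np

rng = np.random.default_rng(1)

temps = rng.normal(12.0, 4.0, size=1000)  # made-up station temperatures, degrees C
baseline = 11.3                           # made-up site-specific climatological baseline

anomalies = temps - baseline              # constant offset applied per site

print(np.mean(temps), np.mean(anomalies))  # means differ by exactly the baseline
print(np.var(temps), np.var(anomalies))    # variances are identical
```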

Reply to  AlanJ
March 8, 2024 4:48 pm

Everything you talk about is sampling theory! How can you possibly dismiss assumptions that directly apply to sampling?

The very mention of using the √n is an admission of using sampling theory.

You are lost in a forest of ignorance.

Reply to  AlanJ
March 5, 2024 8:35 pm

The only trend in UK data is from really bad sites and extra sunshine hours.

Do try to keep up, dolt !

Reply to  AlanJ
March 5, 2024 8:36 pm

“but we can ignore flagrant lies when we see them”

Yet that is all you ever produce. Fragrant LIES and MAL-information.

Reply to  phrog
March 5, 2024 12:56 pm

Miami and Las Vegas can have the exact same anomaly while having wildly different climates. They can have the same median daily temperature while having wildly different climates.

A metric that gives the same value for two different inputs is a useless metric. Climate science can’t differentiate between climates based on daily median temperatures so how can they identify a “global climate”?

phrog
Reply to  Tim Gorman
March 5, 2024 1:05 pm

They’re meaningless; you just lose information as you average. Modern climate science portrays climate change as strictly warmer or cooler, when in reality, it’s far more complex.

Reply to  AlanJ
March 5, 2024 11:05 am

Unless there is no trend for 18 years, then we must deny that is happening.

AlanJ
Reply to  doonman
March 5, 2024 11:38 am

[image]

Reply to  AlanJ
March 5, 2024 1:16 pm

The 2015/16 and 2023 El Ninos get the trend.

You know that.

Yet you STILL keep on with your petty attempts.

Human causation.. YOU HAVE NONE.

AlanJ
Reply to  bnice2000
March 5, 2024 2:00 pm

I am absolutely positive I have not mentioned causation one single time in this thread. Do try and keep up little guy.

Reply to  AlanJ
March 5, 2024 8:37 pm

Great to see you admitting that YOU KNOW the warming is TOTALLY NATURAL

Well done, dip**it !

Reply to  AlanJ
March 5, 2024 11:46 am

But the trend is incredibly affected by previous temperatures. What if all stations in the early 1900s read 5 degrees too low! How about 5 degrees too high?

aussiecol
Reply to  AlanJ
March 5, 2024 1:13 pm

A trend is vastly more important than individual station readings???
How on earth can you find a trend when individual station readings are compromised!?!? What a fool’s paradise you live in.

Reply to  AlanJ
March 5, 2024 1:14 pm

“The trend is the thing we care about in regards to climate change”

And you CANNOT POSSIBLY get even the remotest idea of the “global” trend by using corrupted urban data from lots of really badly affected urban sites.

AlanJ
Reply to  bnice2000
March 5, 2024 2:18 pm

How about with satellites?

[image]

Reply to  AlanJ
March 5, 2024 8:39 pm

Satellites show warming only at NATURAL El Nino events.

Even someone as dim-witted as you must have realised that by now.

NO HUMAN CAUSATION.

….. as you have already admitted.

AlanJ
Reply to  bnice2000
March 6, 2024 5:31 am

Well, once again, I’m just so positive I haven’t mentioned the cause of the trend one single time, I’m simply responding to your claim that there is no identified trend, which is flagrantly false. You really must keep studying your reading lessons so that you won’t have so much of a struggle.

Reply to  AlanJ
March 5, 2024 12:28 pm

It is moronic in the extreme to think that the fake homogenisation routines DON’T greatly increase trends in rural areas.

You only have to look at all the “cooling the past” changes to most rural data.

DENIAL of the homogenisation scam shows just how little your tiny mind is capable of grasping.

Menne is one of the main instigators of the temperature mal-adjusting scam.

AlanJ
Reply to  bnice2000
March 5, 2024 2:17 pm

You mean cooling the past like this?

[image]

Reply to  AlanJ
March 5, 2024 2:57 pm

Nice hockey stick.

If the fraud doesn’t matter, then why do it?

AlanJ
Reply to  karlomonte
March 5, 2024 3:31 pm

Of course it isn’t fraud, but you are correct that the adjustments matter very little at the global scale, in fact they barely change the trend at all (and they actually lower it, contrary to the expectations of the WUWT contrarian set). The reason scientists do the adjustments is first because we don’t always look at the global scale, and sources of systematic bias can be more pronounced at smaller scales, and because, well, scientists are just deeply interested in getting things right, even tiny superfluous details that most of us don’t really care about. But you certainly don’t have to perform any adjustments to get a good global temperature estimate, as illustrated.

phrog
Reply to  AlanJ
March 5, 2024 3:37 pm

scientists are just deeply interested in getting things right, even tiny superfluous details that most of us don’t really care about. 

That’s complete BS. If that were the case, we wouldn’t be using that unstandardized data from the Cooperative Observer Network as part of climate analysis. They would rightfully assert that such data is unfit for such a purpose.

AlanJ
Reply to  phrog
March 6, 2024 5:33 am

They would rightfully assert that such data is unfit for such a purpose.

This isn’t a statement backed by evidence, it’s just your personal opinion. The historical data after adjustment for known biases are exactly consistent with the pristine US reference network, so it is categorically false to say that they are unfit for purpose. And, of course, the historical data is all we have, so obviously scientists have to work with it unless you’ve got some big ideas for time travel up your sleeve.

Reply to  AlanJ
March 6, 2024 11:02 am

You cannot determine after-the-fact the magnitude of a nonrandom error that changes with time.

AlanJ
Reply to  karlomonte
March 6, 2024 11:35 am

Thankfully, scientists are not as intellectually limited as you are and they don’t let hard problems stump them. As shown above, they have indeed successfully accounted for systematic bias in the full station network.

Reply to  AlanJ
March 6, 2024 2:44 pm

No they haven’t, you’re just gaslighting and lying, again.

The paper you are so proud of was written by liars.

AlanJ
Reply to  karlomonte
March 6, 2024 6:41 pm

But you can’t find anything wrong in it.

Reply to  AlanJ
March 6, 2024 4:18 pm

“Thankfully, scientists are not as intellectually limited as you are and they don’t let hard problems stump them.”

ROFL!! They just assume everything away that is a hard problem!

phrog
Reply to  AlanJ
March 6, 2024 11:47 am

Any person with a working brain knows that is too good to be true.

AlanJ
Reply to  phrog
March 6, 2024 1:02 pm

The proof is in front of your face, you can accept the reality or deny it:

[image]

phrog
Reply to  AlanJ
March 6, 2024 5:23 pm

If they match so well, then why bother creating the USCRN in the first place? It makes no sense for them to coexist at the same time if, at the end of the day, the adjustments work as you claim.

AlanJ
Reply to  phrog
March 6, 2024 6:44 pm

The motivation is quite obvious, for several reasons: the adjustments are intended to remove systematic bias from the full network. One way to know if they are working is to compare the full, bias adjusted network to a pristine reference series that contains no systematic bias by design – the CRN. Second, we can’t go back in time to get perfect historical data, but we can ensure that we have a robust network in place for climate monitoring for the future.

phrog
Reply to  AlanJ
March 6, 2024 8:38 pm

Okay, so if it’s proven to work, why continue to use both? It’s a waste of money.

AlanJ
Reply to  phrog
March 7, 2024 5:31 am

nClimDiv is a dataset comprised of stations primarily from the Global Historical Climatology Network – it isn’t owned or maintained by NOAA. The purpose of the stations comprising the GHCN is to provide daily temperature, precipitation data, etc. to local communities. NOAA just takes these records and refines them into a nationwide temperature network. It also represents the entirety of the historical temperature record for the contiguous US.

USCRN is a network entirely operated and maintained by NOAA, with the express purpose of providing robust long term climate monitoring, with each site being specifically chosen because it is likely to retain pristine conditions for many many years to come, and each site is equipped with state of the art instrumentation.

In short, we have a “network of convenience” comprised of scattered weather stations, which makes up the historical data, and a tailor built climate monitoring network that is a convenient reference series for the full historical network. There is no reason not to have both.

Reply to  AlanJ
March 7, 2024 5:47 am

I see you’ve been researching the internet.

USCRN is a network entirely operated and maintained by NOAA, with the express purpose of providing robust long term climate monitoring, with each site being specifically chosen because it is likely to retain pristine conditions for many many years to come, and each site is equipped with state of the art instrumentation.

Tell why, with pristine sites and state of the art instrumentation, it is necessary for NOAA to begin adjusting CRN data!

AlanJ
Reply to  Jim Gorman
March 7, 2024 6:36 am

CRN temperature data are not bias adjusted. Hope that helps.

phrog
Reply to  AlanJ
March 7, 2024 7:59 am

Yes, there is. With CRN, you are, supposedly, guaranteed a pristine network; there’s no reason to keep the other one. And you’ll always have historical data from the other one, presumably backed up into a computer drive to use as the baseline for future temperature anomalies. That would be easy and efficient to do.

Reply to  phrog
March 7, 2024 8:10 am

AJ claims it has “zero bias”.

AlanJ
Reply to  phrog
March 7, 2024 8:16 am

Again, the GHCN that nClimDiv is compiled from exists independently of the nClimDiv index; all of those stations will be continuously reporting to GHCN. nClimDiv is just an automated analysis applied to this data. There is no drawback whatsoever to maintaining it. Moreover, nClimDiv is much denser than USCRN, so it has a variety of additional use cases beyond nationwide climate observation.

You can feel free to disagree with this motivation, but you aren’t in charge of these initiatives, so does it really matter? We have nClimDiv, we have USCRN, they agree perfectly, ergo the processing NOAA does to nClimDiv is unequivocally removing systematic bias from the full network.

phrog
Reply to  AlanJ
March 7, 2024 8:55 am

nClimDiv is derived from the U.S. Cooperative Observer Network. As admitted by Zeke Hausfather, most stations in the network have been subject to station movements and other artificial encroachments.

You say nClimDiv is more dense, but you point out above that they are measuring the same thing; perfect alignment, you went on to say. So, what advantage does more density provide? We know that absolutes are more sensitive to these artificial influences.

AlanJ
Reply to  phrog
March 7, 2024 12:50 pm

Again, nClimDiv is the historic record of climate in the contiguous US. You can moan about its existence all you want, but it isn’t going anywhere.

The one single point that matters is that the adjustments applied to the historic record successfully remove systematic bias.

Reply to  AlanJ
March 7, 2024 1:05 pm

No they don’t; if you understood anything about metrology you’d see how stupid this claim is.

But you don’t understand, plus you are just a gaslighting propagandist trying to keep the climate disaster politics from collapsing and don’t care about the truth.

phrog
Reply to  AlanJ
March 7, 2024 7:27 pm

I see you are now resorting to gaslighting. My point is that if it really removed systematic bias, then NOAA wouldn’t be holding on to both.

AlanJ
Reply to  phrog
March 8, 2024 6:36 am

That’s a non-sequitur. We know the adjustments remove systematic bias because the bias-adjusted network is aligned with the bias-free reference network. You have to align your opinions with this factual reality, not the other way round.

Reply to  AlanJ
March 8, 2024 7:17 am

Why do you think the reference network is bias-free? You’ve been shown this before; why do you dismiss it out of hand?

https://www.ncei.noaa.gov/pub/data/uscrn/products/monthly01/readme.txt

On 2013-01-07 at 1500 UTC, USCRN began reporting corrected surface temperature measurements for some stations. These changes impact previous users of the data because the corrected values differ from uncorrected values. To distinguish between uncorrected (raw) and corrected surface temperature measurements, a surface temperature type field was added to the monthly01 product. The possible values of this field are “R” to denote raw surface temperature measurements, “C” to denote corrected surface temperature measurements, and “U” for unknown/missing.

AlanJ
Reply to  Jim Gorman
March 8, 2024 7:36 am

This is referring to the skin temperature sensors pointed at the ground (e.g. SUR_TEMP_MONTHLY_AVG), not the air temperature sensors (e.g. T_MONTHLY_AVG). It’s good to read the metadata and understand it before commenting.

Reply to  AlanJ
March 8, 2024 8:45 am

“bias-free reference network”

Wow! Talk about a non sequitur!

There is no such thing as a bias-free network. The CRN network admits in its own documentation that the measurement uncertainty of the CRN measurement devices is +/- 0.3C! Part of that 0.3C is systematic bias and there is no way to adjust for that!

AlanJ
Reply to  Tim Gorman
March 8, 2024 9:19 am

The network is free of systematic bias by design. There is no adjustment needed or performed.

Reply to  AlanJ
March 8, 2024 10:38 am

The only person you are fooling with this act is yourself.

Reply to  AlanJ
March 8, 2024 3:38 pm

And Fern Gully has a fairy population also, right?

If no adjustment is needed or performed then why does the CRN documentation say the temperature measurements from a CRN station have a measurement uncertainty of +/- 0.3C?

Reply to  Tim Gorman
March 8, 2024 10:37 am

This gaslighter is so high he’s on Mars.

Reply to  AlanJ
March 7, 2024 5:51 am

One way to know if they are working is to compare the full, bias adjusted network to a pristine reference series that contains no systematic bias by design – the CRN”

You *TRULY* believe the CRN measurement devices have no systematic bias? That they remain 100% accurate over time?

How often do the CRN measurement devices require re-calibration? If they never have any systematic bias then why would they need to ever be re-calibrated?

Do you know that, in fact, the annual CRN measurement site maintenance checklist does *NOT* include verifying the calibration of the temperature sensors, only the rain gauge?

Do you know of any other electronic devices that never need calibration?

Reply to  Tim Gorman
March 7, 2024 6:14 am

These people live on Fantasy Island.

AlanJ
Reply to  Tim Gorman
March 7, 2024 6:46 am

You *TRULY* believe the CRN measurement devices have no systematic bias?

Of course, the network was specifically designed in this way.

Do you know that, in fact, the annual CRN measurement site maintenance checklist does *NOT* include verifying the calibration of the temperature sensors, only the rain gauge?

The documentation pretty clearly states that all instruments are calibrated annually and aging sensors are regularly replaced:

“Highly accurate measurements and reliable reporting are critical. Station instruments are calibrated annually and maintenance includes routine replacement of aging sensors. The performance of each station’s measurements is monitored on a daily basis and problems are addressed as quickly as possible, typically within days. Each station transmits data hourly to a geostationary satellite. Within minutes of transmission, raw data and computed summary statistics are made available on the USCRN web site. This page describes the details of the data stream.”

But feel free to continue deluding yourself.

Reply to  AlanJ
March 7, 2024 6:53 am

“You *TRULY* believe the CRN measurement devices have no systematic bias?”

Of course, the network was specifically designed in this way.

It is confirmed, you are an idiot sans any clues about real metrology.

But feel free to continue deluding yourself.

It is you who is deluded: “we sent it to the cal lab, there is no error!”

Reply to  AlanJ
March 7, 2024 7:22 am

“The documentation pretty clearly states that all instruments are calibrated annually and aging sensors are regularly replaced:”

But the document does *NOT* give an interval for the aging, and the actual annual maintenance checklist does not give one either. So how often are the temperature sensors actually replaced?

In fact, the CRN documentation says the temperature sensors have no specified mean time between failures. Typically in industry, components with a specified mean time between failures get replaced at around that interval. Things like school buses get their brakes replaced based on a time interval and not on a measure of their actual usage. It’s quicker and cheaper to do it that way.

“But feel free to continue deluding yourself.”

The only one deluding themselves is you. The CRN documentation gives the measurement uncertainty of the temperature as +/- 0.3C at installation. Add the temperature measurement at station 1 to the temperature measurement at station 2 and you get a combined measurement uncertainty of +/- 0.4C. Almost half a degree of uncertainty – which totally subsumes the ability to differentiate differences in the hundredths digit.



Reply to  Tim Gorman
March 7, 2024 7:34 am

A real, well-designed measurement system defines recalibration intervals for all instrumentation, then accounts for the recalibration periods in its uncertainty analysis.

Climate science method: “we sent it to the cal lab, there is no error!”

AlanJ
Reply to  Tim Gorman
March 7, 2024 7:54 am

And here we see the intrepid contrarian deftly pivoting from “the temperature sensors are never calibrated” to “I’m not quite sure how often they’re being replaced.” Never acknowledging his initial error. How on-brand.

Reply to  AlanJ
March 7, 2024 8:12 am

Liar, another straw man.

Reply to  AlanJ
March 7, 2024 3:03 pm

If the re-calibration interval is not specified then how do you know when it gets done? If you don’t know whether it has been done or is scheduled to be done THEN YOU HAVE TO ASSUME THAT THE MEASUREMENT DEVICE HAS NEVER BEEN RECALIBRATED!

Every piece of equipment used for critical measurements should have a sticker giving the date of the last calibration. A calibration document should be delivered from the cal lab and retained with the equipment. The equipment manufacturer should specify a recalibration interval for the equipment.

Only in climate science is it assumed that a specified recalibration interval is not needed. Only in climate science is the maintenance checklist missing a calibration step for the temperature sensor!

AlanJ
Reply to  Tim Gorman
March 8, 2024 6:53 am

This reeks of desperation. We know the calibration is being performed because the NOAA says it is. If you want maintenance logs for a specific site or sites, you can reach out to NOAA to obtain them.

Reply to  AlanJ
March 8, 2024 7:24 am

We know the calibration is being performed because the NOAA says it is.

You are the one making the assertion, it is up to you to provide the evidence.

It is tiresome dealing with folks who continually make assertions without any references at all. Perhaps you don’t know how ignorant it makes you look when you cannot find supporting documentation. Chat bots don’t count; ask them for sources.

AlanJ
Reply to  Jim Gorman
March 8, 2024 7:42 am

Already linked above:

https://www.ncei.noaa.gov/access/crn/measurements.html

But I do understand how challenging reading is for you, so I’ll give you a pass on this one.

Reply to  AlanJ
March 8, 2024 8:55 am

Nothing, let me emphasize *NOTHING*, at this site says anything about maintenance intervals for any of the measuring devices thus no calibration intervals for any sensor (let alone the temperature sensor suite) can be gleaned from this site.

Perhaps *YOU* are the one that needs to learn how to read!

AlanJ
Reply to  Tim Gorman
March 8, 2024 9:16 am

“Station instruments are calibrated annually and maintenance includes routine replacement of aging sensors.”

That’s the second sentence of the second paragraph. I added the bold and underline so you’ll know which words you need to read, I know there are some long ones so be sure to look up any that you struggle with.

Reply to  AlanJ
March 8, 2024 3:35 pm

Once again – show me where in the CRN Station Maintenance Checklist it shows calibrating the temperature sensors!

It has an entry for calibration of the rain gauge but NO ENTRY for calibration of the temperature sensors.

Prove me wrong. *I* have bothered to research and find the checklist used by the CRN station maintenance organization to conduct the annual review of the station. APPARENTLY YOU HAVE NOT!

Reply to  AlanJ
March 8, 2024 8:50 am

How does NOAA know this? There is *NO* entry on the maintenance checklist for temperature sensor calibration. It won’t do any good to get the documents for a station – THERE IS NO ENTRY FOR TEMP SENSOR CALIBRATION!

Reply to  AlanJ
March 6, 2024 5:12 pm

Tell us one physical science endeavor that allows past measurements to be “adjusted” to match those made by newer technology.

The fact that it is all we have is a rationalization, not a reason. When did “the ends justify the means” become a law of science?

AlanJ
Reply to  Jim Gorman
March 6, 2024 6:47 pm

The historic network isn’t being adjusted to match the CRN, it’s being adjusted to remove sources of systematic bias. The expectation is simply that if the systematic bias is removed by the adjustments, the full network would match a reference network that doesn’t contain the systematic bias in the first place (hypothesis). And we see quite clearly that it does (hypothesis confirmed by repeated observation). This is in fact the scientific method at work.

Reply to  AlanJ
March 7, 2024 5:53 am

The CRN maintenance manual includes nothing on checking the calibration of the temperature measuring device during the site annual checkup.

Do you know of *any* other electronic measuring device that never needs calibration?

AlanJ
Reply to  Tim Gorman
March 7, 2024 6:48 am

The temperature sensors are calibrated annually, and aging sensors are regularly replaced. There is also constant monitoring of the system’s performance and maintenance needs are addressed promptly. From the USCRN documentation:

“Highly accurate measurements and reliable reporting are critical. Station instruments are calibrated annually and maintenance includes routine replacement of aging sensors. The performance of each station’s measurements is monitored on a daily basis and problems are addressed as quickly as possible, typically within days. Each station transmits data hourly to a geostationary satellite. Within minutes of transmission, raw data and computed summary statistics are made available on the USCRN web site. This page describes the details of the data stream.”

Reply to  AlanJ
March 7, 2024 8:13 am

So you think this reduces “error” to zero?

Yer an idiot.

AlanJ
Reply to  karlomonte
March 7, 2024 8:58 am

I never said this or implied it.

Reply to  AlanJ
March 8, 2024 4:29 am

“aging sensors are regularly replaced.”

The annual maintenance checklist for CRN stations has no entry for the age of the temperature sensor. So how are the maintenance people supposed to know when to replace the sensor?

Nor is the replacement age for the temperature sensor indicated in the CRN documentation of the stations. So at what age are they supposed to be replaced? The documentation only indicates that the mean time between failures is unknown.

Reply to  AlanJ
March 5, 2024 4:52 pm

Of course it’s fraud. Fake Data.

Reply to  karlomonte
March 6, 2024 5:36 am

Yes, of course it’s fraud.

The bastardized temperature record is the only scary thing climate alarmists have to sell.

Admitting it was just as warm in the recent past as it is today would blow up their Human-caused Climate Change meme, so they create a fraudulent Hockey Stick chart to hide the facts.

All the written temperature records from around the world show it was just as warm in the recent past as it is today. Climate alarmists want us to forget about this and pretend we have a Hockey Stick past.

Climate alarmists don’t want to talk about the discrepancy between the written, historical temperature record temperature profile and the bogus Hockey Stick temperature profile.

You can’t get one from the other, but climate alarmists don’t want to talk about it. Understandably so. They don’t like their fraud being called into question.

Reply to  AlanJ
March 5, 2024 8:43 pm

Many cooling-the-past adjustment were done well before Zeke fabricated either of those two FAKE graphs.

But I suspect you are well aware of that fact.

Just more mindlessly pathetic trolling..

… that will not fool anyone with a functional brain.

Reply to  AlanJ
March 5, 2024 8:41 pm

Two mal-adjusted fabrications against each other.

Enough to FOOL a mindless twit like you, though !!

You don’t really “believe” Zeke is using actual RAW data, do you?

Are you really that stupid and naive !!

AlanJ
Reply to  bnice2000
March 6, 2024 5:37 am

Of course I do, because I’ve done a similar analysis using the raw data myself (for land-only, black line is raw data):

[comment image: the commenter’s land-only analysis, black line showing raw data]

Unlike you lot, I’ll actually roll up my sleeves and do the work.

Reply to  bnice2000
March 6, 2024 5:38 am

Yes, he is comparing one bastardized chart against another bastardized chart.

It is a common trick of the climate alarmists.

Reply to  AlanJ
March 6, 2024 5:26 am

Any chart that does not show the Early Twentieth Century as being just as warm as today, is a bogus, bastardized Hockey Stick chart, like this one.

The past temperatures were cooled in a climate alarmist computer to make it appear that today is the warmest period in human history.

The historical, written temperature records from around the world show it was just as warm in the Early Twentieth Century as it is today.

Climate alarmists adjusted this data to make the Early Twentieth Century warming disappear from the official record, otherwise they could not scare people with fears of CO2 if it is no warmer today than it was in the past, so the climate alarmists erased the past and made one up that suited their Human-caused climate change narrative.

Can you explain why the written records don’t show a Hockey Stick profile, whereas the global temperature profile does show one? How do you transform non-Hockey Stick data into Hockey Stick data without indulging in fraud?

AlanJ
Reply to  Tom Abbott
March 6, 2024 6:55 am

The past temperatures were cooled in a climate alarmist computer to make it appear that today is the warmest period in human history.

But of course you have no tangible evidence of this, just your wild conspiracy theory.

You frequently refer to this “written historical record” but you have yet to ever actually produce it, and cannot cite any source for such a thing.

The reality is that at one point we didn’t have a global temperature record, and people believed various different things based on evidence available to them. Then scientists compiled a global temperature record, and now we have actual tangible data. You can claim that the record is wrong, fraudulent, whatever, but you never lift a finger to actually produce one that’s right according to you.

Reply to  AlanJ
March 6, 2024 11:06 am

Who are “we”? You and your fellow fraudsters?

Reply to  Richard Page
March 5, 2024 6:49 am

The Sheffield site is not among the trees, it is in the park.
[Image: Weston-Park-weather-station.jpg – the Weston Park weather station]

It’s one of the oldest weather stations in the UK, dating from 1882.

Richard Page
Reply to  Phil.
March 5, 2024 8:46 am

Er yes – if you go a few metres through the trees and a few metres along the path then you get to it. It’s not an exact match in the picture but it’s not too far away either. I think the red marker is more in line with a statue that’s a little way back from the trees. Typing co-ordinates into Google Maps never gets you exactly on top of it but it is quite close.

Reply to  Richard Page
March 5, 2024 7:00 pm

No you don’t. The red marker in the fake image that I alluded to from the OP is over 100m away from the weather station. The actual site is in a green park; I’ve had picnics there and played cricket on the pitch on the other side of the museum.

Richard Page
Reply to  Phil.
March 6, 2024 9:01 am

It isn’t over 100m away. I’ve also been there and it is far less than 100m. 100ft maybe, at a push.

Reply to  Richard Page
March 6, 2024 2:45 pm

In that case you don’t know where that marker is, it’s by the entrance to the park which is over 100m from the weather station.

Reply to  Richard Page
March 6, 2024 2:50 pm

The Ebenezer Elliot statue is about 90m from the weather station.

Reply to  Phil.
March 5, 2024 1:17 pm

Buildings all around, trees everywhere, concrete path only a few m away.

When was that big building built?

When was the metal fence 1m or so from the thermometer put in ?

Is this class 4 or 5 ??…. or 6

Reply to  bnice2000
March 5, 2024 7:15 pm

In a grassy park about 250m long, about 100m from a lake in an adjacent 1km long park. The metal fence was put there to restrict public access to the Stephenson screen (it’s about 4m away). The date above the door to the museum says 1867, a few years before the weather station was set up.

Mr.
Reply to  AlanJ
March 5, 2024 6:14 am

I tell myself stuff like that about my bathroom scales when it reads 4 lbs more than the ones at the hospital just did.
So I adjust for donuts.

Reply to  AlanJ
March 5, 2024 6:24 am

I think we would be happy if the class 5 stations et al were only used for specialist needs such as aircraft take off and landing. NOT used to calculate the temperature for the whole country!!!!

AlanJ
Reply to  Steve Richards
March 5, 2024 6:54 am

So produce a surface temp analysis that only uses class 1-3 stations. Describe your methodology, show your results. Of course you won’t do it, and neither will anyone else on this site, because then the big lie would be shattered.

Reply to  AlanJ
March 5, 2024 7:53 am

Class 1-3 stations will *still* have uncertainty. If that uncertainty is greater than the differences you are attempting to identify you are chasing part of the Great Unknown.

When you combine multiple stations those uncertainties add, they don’t cancel. If you add ten stations together to form an average the measurement uncertainty of that average will be the quadrature addition of the individual uncertainties. If each Class 1 station has an uncertainty of 0.1C, when combined the measurement uncertainty of the average will be 0.3C.

If the difference you are attempting to identify is 0.01C then what does the value 0.01C +/- 0.3C mean to you?
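As a minimal sketch of the arithmetic being invoked here (the 0.1C per-station figure is the comment’s own assumption, and which combination rule is appropriate is exactly what the replies below dispute):

```python
# Minimal sketch: root-sum-square (quadrature) combination of ten assumed
# 0.1 C per-station uncertainties, with the u/sqrt(n) figure argued for in
# the reply below shown for comparison. Illustrative only; all numbers are
# assumptions, not measured values.
import math

u_station = 0.1   # assumed per-station standard uncertainty, C
n = 10            # number of stations combined into one average

u_rss = math.sqrt(n) * u_station     # quadrature sum of the n uncertainties (~0.32 C)
u_sem = u_station / math.sqrt(n)     # u/sqrt(n), the figure the reply below argues for (~0.03 C)

print(f"quadrature sum of {n} x {u_station} C uncertainties: {u_rss:.2f} C")
print(f"u/sqrt(n):                                          {u_sem:.3f} C")
```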

AlanJ
Reply to  Tim Gorman
March 5, 2024 10:43 am

If you add ten stations together to form an average the measurement uncertainty of that average will be the quadrature addition of the individual uncertainties.

The uncertainty of the mean is s/sqrt(N) where s is the standard deviation of the measurements. The more measurements you take, the more precise the estimate of the average. You know this.

Reply to  AlanJ
March 5, 2024 11:09 am

Only in normally distributed data. But you forgot all about this as you always do.

AlanJ
Reply to  doonman
March 5, 2024 11:33 am

The CLT tells us that the sample mean approaches a normal distribution as the sample size grows, regardless of the underlying sample distribution, so you can calculate the SEM for non-normal distributions, but you know this, I’m sure.

Reply to  AlanJ
March 5, 2024 12:18 pm

Temperature measurements are not an exercise in random statistical sampling.

phrog
Reply to  AlanJ
March 5, 2024 12:35 pm

Temperature averages are not identically distributed.

youcantfixstupid
Reply to  AlanJ
March 5, 2024 1:05 pm

How quaint, one of our resident psychopaths thinks the CLT is a ‘get out of jail free’ card for climate science… explains A LOT really…

Since you brought it up, feel free to demonstrate that the sampling of temperatures anywhere in the world fits the conditions under which the CLT applies or even a weakened form of the CLT…

Reply to  youcantfixstupid
March 5, 2024 1:31 pm

The CLT applies if you have multiple samples. A single database of temperatures is *NOT* multiple samples. In this case you have to assume that the sigma of the sample is the sigma of the population. Climate science assumes that with absolutely no justification provided. Just “trust us”.

If you consider each temperature measurement to be a sample then each sample must be iid for the CLT to apply. There is no way global temperature measurements can be iid. The mere fact that SH and NH temperatures are from different seasons militates against that restriction.

AlanJ
Reply to  Tim Gorman
March 5, 2024 2:26 pm

Not at all. The CLT says that if you draw numerous successively larger samples, the distribution of those sample means will approach normal, that this distribution of means will have the same mean as the original distribution, and that its variance will be the original variance divided by the sample size. Thus as the sample size approaches infinity, the variance approaches zero – with an infinitely large sample you would have a perfectly precise estimate of the population mean.

Of course you don’t need an infinitely large sample, just a large sample will reduce the variance enough to make the estimate of the mean quite precise indeed. And with tens of thousands of stations around the world reporting daily, we have far, far more than a large enough sample size to get a precise estimate of the global mean.
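A minimal simulation sketch of the CLT behaviour described above, using a deliberately non-normal (exponential) population; it assumes idealised random sampling, which is the separate point contested elsewhere in this thread:

```python
# Minimal sketch: the spread of sample means shrinks roughly as sigma/sqrt(n)
# even when the underlying population is skewed. Illustrative only; assumes
# independent random draws from a single population.
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0   # an exponential distribution with scale 1 has standard deviation 1

for n in (10, 100, 1000, 10000):
    # 2000 independent samples of size n, each reduced to its mean
    means = rng.exponential(scale=1.0, size=(2000, n)).mean(axis=1)
    print(f"n={n:>6}: spread of sample means = {means.std(ddof=1):.4f}  "
          f"(sigma/sqrt(n) = {sigma / np.sqrt(n):.4f})")
```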

Reply to  AlanJ
March 5, 2024 2:59 pm

From where are you “draw[ing] numerous successively larger samples”?

From where are you drawing any samples?

AlanJ
Reply to  karlomonte
March 5, 2024 3:34 pm

I’m describing conceptually the reason why the CLT implies that the precision of the mean increases with the sample size. If you’re struggling to keep up, pick up any basic stats textbook and rejoin us once you’re up to speed.

Reply to  AlanJ
March 5, 2024 4:53 pm

What is the sample size of a daily Tmax?

ONE

AlanJ
Reply to  karlomonte
March 6, 2024 5:40 am

And then if I average together, say, 200 individual measurements of T-max, what is my sample size then?

TWO HUNDRED

Reply to  AlanJ
March 6, 2024 10:22 am

Oh my, another neophyte who knows it all.

You have 200 samples, each of size one (1).

Ask yourself if these “samples” came from the same population of temperatures or different ones. How does that affect sampling error?

  • Same day, same station
  • Different days, same station
  • Same day, different stations
  • Different days, different stations
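A minimal simulation sketch (hypothetical numbers only) of why the scenarios listed above matter: 200 readings drawn from one station behave very differently from one reading each drawn from 200 randomly chosen stations whose true means differ.

```python
# Minimal sketch: spread of a 200-reading mean when all readings come from one
# station versus one reading each from 200 randomly chosen stations with
# different true means. All figures are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
reps, n = 5000, 200
noise = 0.5      # assumed per-reading random error, C
between = 5.0    # assumed spread of true means between stations, C

same_station = rng.normal(20.0, noise, size=(reps, n)).mean(axis=1)
mixed_station = (rng.normal(20.0, between, size=(reps, n))   # station-to-station differences
                 + rng.normal(0.0, noise, size=(reps, n))).mean(axis=1)

print(f"200 readings, one station:      spread of the mean = {same_station.std(ddof=1):.3f} C")
print(f"one reading each, 200 stations: spread of the mean = {mixed_station.std(ddof=1):.3f} C")
```
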
Reply to  Jim Gorman
March 6, 2024 11:27 am

He doesn’t care about such nagging little details; all that matters for a trendologist is sigma/root(N).

AlanJ
Reply to  Jim Gorman
March 6, 2024 11:46 am

Right, and by your logic if I measure the height of 200 people, I don’t have a sample size of 200, I have 200 samples of size 1. I have one sample of Jim Gorman, one sample of Tim Gorman, one sample of karlomonte, etc. Never can those samples be meaningfully combined.

Reply to  AlanJ
March 6, 2024 2:46 pm

Another straw man, burning hot.

Reply to  AlanJ
March 6, 2024 4:25 pm

If your sample consists of 100 pygmies and 100 Watusi just what does that sample tell you? You can’t just assume the average means anything if you don’t know the median and quartile values. Unless you are doing climate science that is.

AlanJ
Reply to  Tim Gorman
March 6, 2024 6:53 pm

It tells you that you have not chosen a representative sample if your goal is to get an estimate of the average height of all humans. But both groups are humans, and you can average their heights together. If you observe the average to be changing over time, you can infer that something is changing about the nature of the groups.

Reply to  AlanJ
March 7, 2024 5:56 am

You didn’t answer the question. I didn’t actually expect you to.

Again, length is an extensive property, you can add it. Temperature is an intensive property, you can’t add temperatures. If you can’t add temperatures and get a meaningful answer then how do you calculate an average?

I don’t actually expect you to answer this either.

AlanJ
Reply to  Tim Gorman
March 7, 2024 6:59 am

This is the same thing as saying you can’t meaningfully determine the average size of the American family because no family has 3.13 members. The fact that the quantity you’ve calculated isn’t a physical entity doesn’t mean it isn’t a valuable metric to track. You can compute an average temperature and then you can monitor that average to see if it is changing over time, and that tells you something about the state of the system.

Reply to  AlanJ
March 7, 2024 7:38 am

You *STILL* don’t get the difference between extensive and intensive properties.

COUNTS ARE AN EXTENSIVE PROPERTY! You can add them and divide them.

Temperature is an intensive property! You can’t add and divide intensive properties.

Are you the *best* scientist in the climate science discipline?

The average of an intensive property is meaningless. Even worse, temperature is a function of a whole set of factors. From geography to terrain, to humidity, to pressure, to wind, to etc. What do you think you are identifying when you find an “average temperature”? What is driving the change in that “average temperature”? Humidity? Wind? Harvesting of nearby crops? CO2? Calibration drift?

If you can’t identify all of the factors and differentiate *their differences* then what does a changing “average temperature” actually tell you?

ANS: NOTHING!

Climate science ignores everything but CO2.

AlanJ
Reply to  Tim Gorman
March 7, 2024 8:00 am

The average of an intensive property is meaningless. Even worse, temperature is a function of a whole set of factors. From geography to terrain, to humidity, to pressure, to wind, to etc. What do you think you are identifying when you find an “average temperature”? What is driving the change in that “average temperature”? Humidity? Wind? Harvesting of nearby crops? CO2? Calibration drift?

Thank you for so adroitly illustrating why tracking temperature change is valuable. Look at all of the amazing research questions you’ve just posed, only by knowing that the mean temperature is changing! Imagine what significant findings we’ll uncover as we investigate those questions.

We observe a change in the system, then we investigate what is driving that change. Science.

Reply to  AlanJ
March 7, 2024 8:14 am

Averaging throws information in the trash.

Reply to  AlanJ
March 8, 2024 4:06 am

Look at all of the amazing research questions you’ve just posed, only by knowing that the mean temperature is changing! “

ROFL! Averaging stations around the globe with different trends will tell you what is happening globally?

You just absolutely refuse to acknowledge that the measurement uncertainty interval for such an average is wider than the difference you are attempting to identify! The issue is that you simply can’t know what the trend actually *is*. And no amount of averaging can get rid of that measuring uncertainty or increase resolution to allow you to *know* the difference!



Reply to  AlanJ
March 6, 2024 4:31 pm

Let me repeat KM — “Another straw man, burning hot.”

Never can those samples be meaningfully combined.

And you are correct. One single measurement of myself can not have its uncertainty reduced or even calculated with just one measurement. Not even with a measurement from someone else can you reduce the uncertainty in my measurement.

Do you not understand what a measurand is? You are dealing in numbers and not in physical measurements.

Have you ever taken an upper level physical science lab course? You don’t talk like it!

AlanJ
Reply to  Jim Gorman
March 6, 2024 6:57 pm

One single measurement of myself can not have its uncertainty reduced or even calculated with just one measurement. Not even with a measurement from someone else can you reduce the uncertainty in my measurement.

We aren’t trying to measure the height of Jim Gorman, we are trying to measure the average human height, and Jim Gorman is part of the sample used in our estimate. Our estimate will get more precise the more people we included in our sample. And those people don’t all have to be clones of Jim Gorman, and in fact they shouldn’t be.

Reply to  AlanJ
March 7, 2024 5:58 am

Your example is garbage. It only highlights the fact that you don’t understand the difference between intensive and extensive properties.

How do you add intensive properties?

AlanJ
Reply to  Tim Gorman
March 7, 2024 7:00 am

It’s not garbage, it just obliterates your position, so you’re desperately trying to avoid having to directly address it.

Reply to  AlanJ
March 7, 2024 7:21 am

You ran away from Tim’s point.

Well done.

Reply to  AlanJ
March 7, 2024 7:48 am

I don’t HAVE to address it other than by pointing out that you don’t understand intensive and extensive properties.

“An extensive property is a property that depends on the amount of matter in a sample.”

“An intensive property is a property of matter that depends only on the type of matter in a sample and not on the amount”

An intensive property cannot be summed. An extensive property can.

And I’m not even going to get into what is called a “specific” intensive property.

Have you figured out yet why you can’t add the temperature in Las Vegas to the temp in Miami and assume that is the average temp for the whole area between the two locations?

Reply to  AlanJ
March 6, 2024 4:21 pm

Who cares how large it is. What’s the variance? What’s the median and the quartiles? If you don’t know that then you don’t know what the sample of 200 is representing!

Reply to  AlanJ
March 7, 2024 6:19 am

Absolutely and totally wrong; you have 200 different measurements of Tmax, each of N=1 and each with its own uncertainty interval.

It is up to you to figure out how to honestly combine them into a single uncertainty interval.

The climate science method has been to:

A – ignore them
B – assert they are 100% random and then disappear
C – divide by root(N)

None of these are honest.

Reply to  karlomonte
March 7, 2024 7:09 am

None of the math ever works out.

You have those like bdgwx claiming that Eq 10 in the GUM, when calculating the uncertainty of the average, is

u(avg)^2 = (∂f/∂x) u(x)^2, which is wrong.

It should be u(avg)^2 = (∂f/∂x)^2 u(x)^2,

thus u(avg) = sqrt[ (∂f/∂x)^2 u(x)^2 ] ==> u(avg) = (1/n) sqrt[ u(x)^2 ] ==> u(x)/n

The denominator should be n instead of sqrt(n).

u(x)/n IS THE AVERAGE UNCERTAINTY and not the uncertainty of the average.

The uncertainty of the average is the uncertainty of the data set and not of the average uncertainty.

Statisticians simply don’t understand that an average is *NOT* a measurement. It is a statistical descriptor of a distribution. Statistical descriptors are not measurements.

So I would add to your list:

No. 4 – assume the average value is not a statistical descriptor but is a measurement.
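For readers who want to check the algebra for themselves, here is a minimal sketch that evaluates the root-sum-square combination of GUM Eq. (10) for an n-point average with equal input uncertainties, alongside the u(x)/n figure named above; the n and u(x) values are assumptions, and which quantity deserves the label “uncertainty of the average” is precisely what is in dispute in this exchange.

```python
# Minimal sketch: GUM Eq. (10), u_c(f)^2 = sum_i (df/dx_i)^2 * u(x_i)^2,
# evaluated for f = (x_1 + ... + x_n)/n with equal input uncertainties,
# printed next to u(x)/n. All values are assumptions for illustration.
import math

n = 22       # assumed number of values (e.g. a month of daily readings)
u_x = 0.5    # assumed common standard uncertainty of each input, C

c = 1.0 / n                                               # sensitivity coefficient df/dx_i
u_gum = math.sqrt(sum((c * u_x) ** 2 for _ in range(n)))  # root-sum-square combination
print(f"Eq. (10) combination with c_i = 1/n: {u_gum:.3f} C   (= u_x/sqrt(n))")
print(f"u_x / n:                             {u_x / n:.3f} C")
```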

Reply to  Tim Gorman
March 7, 2024 7:24 am

Oh yeah, bgw with his database of “algebra errors” — how quickly bad memories fade.

And another yeah, No. 4 should be on the list.

sherro01
Reply to  AlanJ
March 5, 2024 5:11 pm

AlanJ,
But accuracy error does not always diminish with repeated sampling. Pat Frank 2023 showed how early thermometers changed shape over time, leading to drift, which is a change in accuracy. You cannot correct for drift by using more thermometers and averaging. You do not even know that the drift exists without absolute comparison with temperature measurement devices designed to avoid drift. You cannot make a homogenisation adjustment for drift because it can differ from one device to another. It affects trends, so you cannot gain improvement by concentrating on trends rather than actual values. You cannot remove trend errors until you know the timing, the magnitude and even the presence of drift. Use of anomaly values does not help in the least.
So, how did Menne cope with the Pat Frank accuracy errors when Menne possibly did not know what Frank revealed, in enough detail to estimate error magnitude?
But above all, please stop repeating that replication reduces accuracy error. In many cases, it has no effect.
Geoff S

AlanJ
Reply to  sherro01
March 6, 2024 5:43 am

Repeated sampling only reduces random error, it does not address systematic bias. This is, quite literally, exactly what Taylor is stating in his text. No climate scientist thinks repeated measurements reduce systematic bias, that is why scientists go to so much effort to remove systematic bias via adjustment procedures.

“You cannot make a homogenisation adjustment for drift because it can differ from one device to another.”

Claimed without evidence. From Menne et al., 2009:

“The pairwise algorithm is shown to be robust and efficient at detecting undocumented step changes under a variety of simulated scenarios with step- and trend-type inhomogeneities.”

Reply to  AlanJ
March 6, 2024 9:26 am

Repeated sampling only reduces random error, it does not address systematic bias.

You do not have repeated sampling! Throw away your basic statistics training. Temperatures are single measurements made with non-ideal repeatability. That means you are not measuring the same thing repeatedly.

Repeated sampling OF THE SAME THING may reduce random error if done properly. That means you must meet the repeatability standards of the GUM.

Watch this little video.

https://youtu.be/6htJHmPq0Os?si=D_lH1F84pFvAggky

I tire of jockeying with people who think statistics rule in measurements. Tell us how many upper level physical science lab classes you have had where they let you treat measurement this way.

AlanJ
Reply to  Jim Gorman
March 6, 2024 10:11 am

You are measuring the same thing, you’re just flailing to redefine the thing being measured so that your silly ideas can stay afloat. If I want to measure the average height of humans, and I pick a sample of 500 different people, you’d say I’m “not measuring the same thing” by taking their heights because each person is a different thing, so each height is a different thing. A Jim Gorman Height can’t be combined with a Tim Gorman height. But I’d say they’re just human heights, so of course I can combine them.

Similarly, you’d say the average height of a stream can’t be measured because every pocket of water that flows past the gauge is a separate entity that yields a non-repeatable measurement.

It’s not a real issue, it’s just a dumb hill you’ve really chosen to dig into and defend for no reason.

Reply to  AlanJ
March 6, 2024 10:36 am

Let’s take your stream analogy.

Nobody really cares about the average height. They take the ‘height’ (depth) and translate it to flow.

The average depth may not translate into the average flow; it takes other reasonable knowledge to make sure the translation is reasonable.

The average temp may not translate into knowing that we are all going to die in 10 years. Expertise in hiding the decline is not adequate/reasonable knowledge for interpretation/translation of the data.

AlanJ
Reply to  DonM
March 6, 2024 11:48 am

You can take the stream analogy and run away on an irrelevant tangent all you wish, it doesn’t change the fundamental nonsense at the heart of the Gorman twins’ theories of statistics.

Reply to  AlanJ
March 6, 2024 2:33 pm

You are trying to fit measurements into statistics. You have it backward. Measurements utilize statistics when appropriate.

Tell us what upper level physical science labs you have taken. Did they let you increase resolution of measurements by averaging?

Reply to  AlanJ
March 6, 2024 3:47 pm

Take your straw men arguments elsewhere. We are discussing temperature measurements here. If you can find no metrology references to support your positions, then you should learn from that.

Do you refute NIST TN 1900 Ex. 2 as a possible solution to measurement uncertainty of a monthly average?

Let us know your education and work experience with making physical measurements.

AlanJ
Reply to  Jim Gorman
March 6, 2024 6:58 pm

Take your straw men arguments elsewhere. We are discussing temperature measurements here.

“Don’t confront me with the implications of my own philosophy!”

Reply to  AlanJ
March 6, 2024 4:46 pm

“fundamental nonsense”

You don’t even know the difference between intensive and extensive properties! That is a fundamental concept in science.

You, and climate science in general, never EVER calculate the variance of the data sets you use – and you think *my* statistics are fundamental nonsense?

Reply to  AlanJ
March 6, 2024 11:13 am

You are a fraud — embrace your identity, it so 2020s.

Reply to  AlanJ
March 6, 2024 2:47 pm

Air temperature measurements are NOT an exercise in random stats samplings.

Reply to  karlomonte
March 6, 2024 4:05 pm

You bet. The statisticians are trying to fit temp measurements into a statistical framework, metrology as a science be damned. We don’t need any stinking metrologist telling us what to do, we know statistics!

Reply to  AlanJ
March 6, 2024 4:44 pm

The temperature in Holton, KS seldom matches the temperature in Topeka, KS. Right now, at 6pm on March 6, 2024 the temperature in Holton is 54F and in Topeka is 61F. (If I could post pictures I’d give you one for NE KS right now) Temperature is an intensive property so what does the average of the two temperatures actually tell you?

Do you *really* think that the true temp for Northeast KS at this time is the average of the two temperatures?

Height is basically “length”. Length is an extensive property.

Are climate scientists *really* so ignorant of science that they don’t know an intensive property from an extensive property?

AlanJ
Reply to  Tim Gorman
March 6, 2024 7:03 pm

Do you *really* think that the true temp for Northeast KS at this time is the average of the two temperatures?

Not at all, but I would certainly infer that if the average in the afternoon is 57.5, and the average after sundown is 48, that something had changed in Kansas, wouldn’t you? It’s the change in temperature that we are after.

Reply to  AlanJ
March 7, 2024 6:02 am

“Not at all, but I would certainly infer that if the average in the afternoon is 57.5, and the average after sundown is 48, that something had changed in Kansas, wouldn’t you? It’s the change in temperature that we are after.”

How do you tell HOW MUCH it changed?

How do you know that the temp in Holton and in Topeka are even representative of the rest of NE Kansas?

If the temps in Las Vegas and Miami are the same does that mean that every place in the US between those two locations is the same?

AlanJ
Reply to  Tim Gorman
March 7, 2024 7:02 am

How do you know that the temp in Holton and in Topeka are even representative of the rest of NE Kansas?

Outside of your hypothetical, we actually know exactly how representative the temperature anomaly is of the surrounding region. There is actually research determining this. Temperature change is correlated over 1000s of km.

Reply to  AlanJ
March 7, 2024 7:25 am

More bullshit.

And again, who are “we”?

AlanJ
Reply to  karlomonte
March 7, 2024 8:03 am

And again, who are “we”?

Literate people who stay abreast of the research.

Reply to  AlanJ
March 7, 2024 8:01 am

km is right, more bullshit!

The temperature at the top of Pikes Peak is far different than that in Colorado Springs yet they are far closer than 1000s of km. Even their temperature trends are different!

The temperature trend in San Diego is far different than the temperature trend in Ramona, CA about 30 miles away inland.

Yet I can find nothing in climate science and global warming literature that weights the temperatures from these locations as far as a “global average” is concerned. They just get thrown into the average regardless of terrain, geography, prevailing wind, humidity, pressure, elevation, etc.

Correlation studies of temperatures have confounding variables that are *NEVER* identified. Things like seasons and travel of the sun. Of course the temperature in Holton, KS will be correlated with the temperature in Topeka, KS because they have the same season and the sun travels the same path for both. That doesn’t make their temperatures THE SAME or even their temperature trends the same!

There are so many bullshit assumptions in climate science that it is buried in the pile!

AlanJ
Reply to  Tim Gorman
March 7, 2024 8:24 am

Please review the difference between absolute temperature and temperature anomaly.

Reply to  AlanJ
March 7, 2024 10:24 am

Why?

AlanJ
Reply to  karlomonte
March 7, 2024 12:51 pm

So that when you and I have discussions together you are not speaking from a place of ignorance.

Reply to  AlanJ
March 7, 2024 1:06 pm

You just melted my best irony meter, gaslighter.

Reply to  AlanJ
March 8, 2024 4:36 am

The temperatures used to calculate the anomalies have variance. Therefore any anomaly derived from those temperatures will inherit the variances of the components.

Thus the anomalies inherit the same uncertainty as the temperatures used to calculate them. If that uncertainty subsumes the difference that is indicated you don’t actually know what the difference is.

I’ve asked you directly three times what the variance of the temperature databases is. Each time you have ignored the question, not even admitting that you don’t know.

Here is the fourth time: What is the variance of the CRN temperature data set? What is the variance of the UAH temperature data set? What is the variance of the USHCN data set? What is the skewness and kurtosis of each of the data sets?

If you don’t answer then I can only assume that you simply don’t care about fully stating the required statistical descriptors necessary to understand the distribution of the various data sets.

That makes anything you say irrelevant to actual physical scientists and engineers. It will be obvious that not only don’t you know anything about your data you also don’t care about knowing.

old cocky
Reply to  Tim Gorman
March 8, 2024 3:41 pm

The temperatures used to calculate the anomalies have variance. Therefore any anomaly derived from those temperatures will inherit the variances of the components.

That’s correct. The application of a constant offset preserves all other properties.

Reply to  old cocky
March 8, 2024 4:21 pm

Actually it doesn’t. You are dealing with the means of two unique random variables. The single monthly average is one random variable and the baseline monthly average is another random variable. Both have a mean and a variance. The difference of two random variables is:

μ_X – μ_Y = anomaly

Var(X – Y) = Var X + Var Y

The baseline is not a constant, it has a mean and a variance.

The problem with finding the variance of anomalies directly is that you are dealing with numbers that are 1 to 2 orders of magnitude smaller than the absolute temperatures from which they are calculated. That means their variance will be 1 to 2 orders of magnitude smaller. Is it any wonder that is what is done?
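A minimal simulation sketch of the identity used above, Var(X − Y) = Var(X) + Var(Y) for independent X and Y, with assumed monthly and baseline figures:

```python
# Minimal sketch: for independent random variables the variance of a
# difference is the sum of the variances, the identity applied above to
# monthly values minus a baseline. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
monthly  = rng.normal(15.0, 1.2, N)   # assumed monthly averages, C
baseline = rng.normal(14.5, 0.8, N)   # assumed baseline averages, C

anomaly = monthly - baseline
print(f"Var(monthly) + Var(baseline) = {monthly.var() + baseline.var():.3f}")
print(f"Var(anomaly)                 = {anomaly.var():.3f}")
```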

old cocky
Reply to  Jim Gorman
March 8, 2024 5:10 pm

The baseline is not a constant, it has a mean and a variance.

That’s certainly the case for the baseline period.
Subtracting an offset (defined to be a constant equal to the mean) creates an offset distribution with mean 0, while retaining the variance and all other properties.
Adding the constant offset to this resultant distribution gives the original distribution (reversibility).

The problem with finding the variance of anomalies directly is that you are dealing with numbers that are 1 to 2 orders of magnitude smaller than the absolute temperatures from which they are calculated.

It’s unlikely that the anomalies are calculated from the absolute temperatures; it’s more likely that they are offset directly from the (already offset) Celsius temperatures. These do tend to have at least 1 order of magnitude difference from the resulting anomalies, but the variance and measurement uncertainty will be preserved.

The bigger problem is that the anomalies are of the same order as measurement uncertainty and the sd is at least an order of magnitude higher, which gets us into Lorenz territory.

That means their variance will be 1 to 2 orders of magnitude smaller.

For any individual weather station, the measurement uncertainty and the variance are preserved under application of a constant offset to the temperature readings. The variance and measurement uncertainty are identical whether denominated in Kelvin or degrees Celsius (or Rankine and degrees Fahrenheit if you prefer)
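A minimal sketch of the point about constant offsets: subtracting a fixed number shifts the mean but leaves the variance of the readings untouched (the station readings here are assumed values for illustration).

```python
# Minimal sketch: applying a constant offset preserves the variance of the
# readings; only subtracting another random quantity (see the exchange above)
# adds variance. Numbers are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)
temps = rng.normal(15.0, 1.2, 10_000)   # assumed station readings, C
offset = temps.mean()                   # a constant equal to the baseline mean

anoms = temps - offset
print(f"variance before offset: {temps.var(ddof=1):.4f}")
print(f"variance after offset:  {anoms.var(ddof=1):.4f}")   # identical
```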

Reply to  old cocky
March 9, 2024 6:05 am

I agree subtracting a constant will not remove the variance of the monthly average. However, the baseline also has both a mean and variance (uncertainty) since it is a random variable. You can not ignore that when computing a difference of two random variables.

My problem is that both are ignored when using the anomalies themselves to calculate an uncertainty value. An example would be where you had 10 anomalies with half having a value of 0.1 and the other half 0.4. That would give a variance of 0.025. The actual variance of the combined random variable could easily be 2.0. That is two orders of magnitude difference.

old cocky
Reply to  Jim Gorman
March 9, 2024 1:17 pm

However, the baseline also has both a mean and variance (uncertainty) since it is a random variable. You can not ignore that when computing a difference of two random variables.

Definitely. The uncertainties of both variables need to be used when comparing them. That also allows testing for them belonging to the same population.

An example would be where you had 10 anomalies with half having a value of 0.1 and the other half 0.4. That would give a variance of 0.025. The actual variance of the combined random variable could easily be 2.0. 

There isn’t enough information there to provide more than a rudimentary analysis 🙂
It’s a sample of measurements with a bimodal distribution, and the range of the actual measurements is larger than the range of the adjusted figures provided. There are at least 2 offsets being used.

To be fair, offsetting values to give them a common origin can make comparisons easier.

Reply to  Tim Gorman
March 6, 2024 9:35 pm

Yes, they are this ignorant. Willfully so, and revel in it.

Reply to  AlanJ
March 6, 2024 11:11 am

” that is why non-scientists go to so much effort to ignore remove systematic bias via fraudulent adjustment procedures.”

Edited for you.

Reply to  AlanJ
March 6, 2024 4:34 pm

You can’t adjust away systematic uncertainty unless you do it on a station-by-station basis using a calibration lab to find out its magnitude. All you are suggesting is GUESSING at something you think might work. There’s no room for subjective guesses in actual science.

“The pairwise algorithm is shown to be robust and efficient at detecting undocumented step changes under a variety of simulated scenarios with step- and trend-type inhomogeneities.”

Calibration drift is almost never a step-change. Microclimate changes can be but many times are not. Growth of a windbreak in the dominant wind direction will gradually affect the measuring station with no step-change able to be identified. Budget cuts which affect the timing of landscaping (e.g. mowing) will change the microclimate gradually over a season and no algorithm will be able to identify it. Rainfall which either floods or dries up a nearby lake will affect the humidity at measuring stations at a great distance. This represents a change in the micro-climate that is gradual and no algorithm will be able to detect it.

It’s almost like those in climate science have never spent any time outdoors actually living in the CLIMATE they are studying!

When you are trying to identify temperature changes in the hundredths digit even the *smallest* change in microclimate will have an impact on the final result. Climate science just fixes this “hard problem” by assuming it away!

AlanJ
Reply to  Tim Gorman
March 6, 2024 7:06 pm

This represents a change in the micro-climate that is gradual and no algorithm will be able to detect it.

Reread the quoted passage from Menne, et al again. Make sure you understand it:

“The pairwise algorithm is shown to be robust and efficient at detecting undocumented step changes under a variety of simulated scenarios with step- and trend-type inhomogeneities.”

If you doubt this, go and read the full manuscript, study the author’s arguments, examine their evidence. Then come back and we can have a discussion about what you find.

Reply to  AlanJ
March 6, 2024 9:39 pm

More bullshit, from a stats man that doesn’t have clue one about real-world metrology.

The real problem is these climate pseudoscience clowns are forcing their “solution” of an non-problem on the entire world, at a horrendous and astronomical cost.

It is the modern equivalent of Jonestown, magnified many orders of magnitude.

Reply to  AlanJ
March 7, 2024 6:05 am

I *did* read the entire article. And it depends on identifying trend-type inhomogeneities through pair-wise comparisons, always assuming that one station is accurate (including its trend) while another is not. And NEVER a mention of how the algorithm allows for measurement uncertainty. It’s pretty obvious that the writers of the algorithm assume the same thing you do: all measurement uncertainty is random, Gaussian, and cancels – therefore we can assume all stated values are 100% accurate!

AlanJ
Reply to  Tim Gorman
March 7, 2024 7:07 am

Thank you for inadvertently confirming that you have not read (or at least understood) the paper. If you phrase your misinformed objections as questions (help me understand…) then we might be able to have a productive conversation about it.

The methodology does not assume that one station is accurate, it assumes that there can be a threshold beyond which a breakpoint can be identified if a given station diverges from its nearest neighbors. They then go on to prove that this assumption is correct.

Reply to  AlanJ
March 7, 2024 7:27 am

“The methodology does not assume that one station is accurate, it assumes that there can be a threshold beyond which a breakpoint can be identified if a given station diverges from its nearest neighbors.”

Another word salad.

There is no “true value” to compare against.

Reply to  AlanJ
March 7, 2024 8:35 am

More bullshit!

There is not a single sentence in “3. Description of the pairwise algorithm” that addresses trend inhomogeneity. Only step-changes.
Part 2d:

d. Impact of local, unrepresentative trends

“Ideally, a changepoint detection method would differentiate trend changes from step changes. In practice, however, many of the commonly used tests for undocumented changepoints are not robust to the presence of trends in the test data because they are based solely on comparing the means of two sequential intervals. Use of such tests in the presence of trends can lead to falsely detected step changes as well as to inaccurate estimates of the magnitude of a shift when it occurs within a general trend (DeGaetano 2006; Pielke et al. 2007). Conversely, methods that directly account for both step changes and trend changes (e.g., Vincent 1998; Lund and Reeves 2002; Wang 2003) are characterized by much lower powers of detection than the simpler difference in means tests.

While no one test clearly outperforms others under all circumstances, the standard normal homogeneity test (SNHT; Alexandersson 1986) has been shown to have superior accuracy in identifying the position of a step change under a wide variety of step and trend inhomogeneity scenarios relative to other commonly used methods (DeGaetano 2006; R07). For this reason, the pairwise algorithm uses the SNHT along with a verification process that identifies the form of the apparent changepoint (e.g., step change, step change within a trend, etc.).” (bolding mine, tpg)

Like I said, there isn’t a single sentence in Section 3 that mentions trend inhomogeneity.

Tell me again who hasn’t read the document? You apparently only read the abstract.

AlanJ
Reply to  Tim Gorman
March 7, 2024 8:50 am

Uh oh, keep your eye on the ball here, folks, Tim is doing some sleight of hand. Now he isn’t falsely claiming that the paper does not address gradual discontinuities, no, he’s merely saying gradual discontinuities aren’t explicitly mentioned in one section of the paper. But move on to section 4:

To evaluate the performance of the pairwise algorithm more generally, temperature series were simulated under a number of trend and step-change scenarios. The simulations were designed to test the skill of changepoint detection as well as to facilitate comparison of the results to previous investigations regarding the use of a reference series as well as the identification of the type of changepoint.

The algorithm is designed for and tested against exactly this type of trend bias.

Reply to  AlanJ
March 7, 2024 9:00 am

More climate science models — oh yeah, these always tell you what you want to hear.

And still zero treatment of real measurement uncertainty.

Reply to  AlanJ
March 7, 2024 2:52 pm

ROFL!!

“GRADUAL DISCONTINUITIES”
“STEP-CHANGE SCENARIOS”
“CHANGE POINT”

Saying these are the same thing is cognitive dissonance at its finest!

AlanJ
Reply to  Tim Gorman
March 8, 2024 7:10 am

A jaunty ROFL is like a get out of jail free card round these parts. Allows you to strut away feeling yourself victorious without having to engage substantively with the discussion.

Reply to  AlanJ
March 8, 2024 8:56 am

You are still doing nothing but demonstrating cognitive dissonance.

Reply to  Tim Gorman
March 7, 2024 8:58 am

In other words, averaging throws away information!

Reply to  AlanJ
March 7, 2024 8:29 am

I did read the entire manuscript but apparently *YOU* haven’t.

While it recognizes the fact that trend-type inhomogeneities can exist it does nothing to identify them, just like I said.

“While no one test clearly outperforms others under all circumstances, the standard normal homogeneity test (SNHT; Alexandersson 1986) has been shown to have superior accuracy in identifying the position of a step change under a wide variety of step and trend inhomogeneity scenarios relative to other commonly used methods (DeGaetano 2006; R07). For this reason, the pairwise algorithm uses the SNHT along with a verification process that identifies the form of the apparent changepoint (e.g., step change, step change within a trend, etc.).” (bolding mine, tpg)

All their document discusses with regard to the algorithm is how it identifies step-changes.

for instance:

“The range of pairwise estimates for a particular step change is considered to be a measure of the confidence with which the magnitude of the discontinuity can be estimated.”

“At least three separate pairwise estimates of step-change magnitude are required for each target changepoint because the distribution of estimates is used to determine the significance of the adjustment (when fewer than 3 estimates are available, the shift is considered “unadjustable”).”

There is not a single sentence in “3. Description of the pairwise algorithm” that discusses how to identify trend inhomogeneity, only step-changes.
You have been misled by reading only the abstract and not the entire document.

Reply to  AlanJ
March 7, 2024 6:21 am

No climate scientist thinks

Finally a true statement from the AJ-dood.

Reply to  AlanJ
March 5, 2024 5:34 pm

You are a statistician and not trained in physical measurements.

The CLT can not change the resolution of the measurements made by a given device. If you measure to an integer resolution nothing you do statistically will ever change that.

All the CLT does is let you know how closely your estimated mean is to a population mean. A smaller and smaller number just means you are homing in closer to the mean of a normal distribution.

You and a host of others have somehow convinced yourselves that the SEM determines the ultimate resolution of a measurement. It doesn’t. If you need, I can give you numerous college lab courses that confirm this.

Reply to  AlanJ
March 5, 2024 6:11 pm

Why don’t you grab a book on metrology and come back when you are ready to provide metrology references that prove your position.

Reply to  AlanJ
March 5, 2024 6:06 pm

Read what you just said here dude. Do you think the numbers you are averaging have no uncertainty? I’ll guarantee your stats training assumed that. Tell us how you get around this.

As I’ve already shown you there are 22 numbers for a monthly average. How do you get an infinite sample size?

Have you bothered to read the information in this online course or are you already an expert?

https://sisu.ut.ee/measurement/uncertainty

I’ll show the appropriate text from Section 3.2

1) If Vm and s(V) have been found from a sufficiently large number of measurements (usually 10-15 is enough) then the probability of every next measurement (performed under the same conditions) falling within the range Vm ± s(V) is roughly 68.3%.

2) If we make a number of repeated measurements under the same conditions then the standard deviation of the obtained values characterizes the uncertainty due to non-ideal repeatability (often called the repeatability standard uncertainty) of the measurement: u(V, REP) = s(V). Non-ideal repeatability is one of the uncertainty sources in all measurements.

Standard deviation is the basis of defining standard uncertainty – uncertainty at standard deviation level, denoted by small u.
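A minimal sketch of the quoted recipe, assuming a dozen repeated readings made under the same conditions (the readings themselves are made-up values): the repeatability standard uncertainty u(V, REP) is simply the standard deviation s(V) of those readings.

```python
# Minimal sketch of the quoted repeatability recipe: u(V, REP) = s(V), the
# standard deviation of repeated readings taken under the same conditions.
# The readings below are assumed values, for illustration only.
import statistics

readings = [21.42, 21.18, 21.35, 21.27, 21.09, 21.31,
            21.22, 21.40, 21.16, 21.28, 21.33, 21.20]   # assumed repeated readings, C

V_m = statistics.mean(readings)
s_V = statistics.stdev(readings)        # repeatability standard uncertainty u(V, REP)
print(f"V_m = {V_m:.3f} C, u(V, REP) = s(V) = {s_V:.3f} C")
```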

AlanJ
Reply to  Jim Gorman
March 6, 2024 5:46 am

This is exactly consistent with everything I’ve said in this thread. Perhaps you need to keep re-reading it until the information sinks in?

Reply to  AlanJ
March 6, 2024 11:22 am

Standard deviation is the basis of defining standard uncertainty – uncertainty at standard deviation level, denoted by small u.

I certainly don’t remember you ever saying σ = standard uncertainty!

Are you using Google Bard?

Reply to  AlanJ
March 5, 2024 1:22 pm

The sample means distribution (the distribution defined by the mean of each sample) approaches a normal distribution. That is true.

Think carefully about what is going on here. The mean of the sample means estimates the population mean.

Now what is the σ of that sample means distribution? It is the standard deviation of the sample means distribution. That width is determined by the sample size! Voila, the SEM.

The width describes how small the interval surrounding the mean actually is. What does that tell you? It tells you how closely your estimated mean is to the population mean. Nothing more than that.

This is where climate science (and many other scientists) go off the rails. It relates back to the true value concept. People tell themselves that calculating more and more decimal places by increasing the “sample size” means they can reduce error in the measurement. Eventually even claiming more resolution than what was actually measured.

Two problems with this.

  1. It implies you can use a yardstick to get micrometer precision if you just measure enough times, and
  2. that the value you obtain is the “true value”, i.e., it matches the international definition of what an SI unit is.

Read these links.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1255808/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2959222/#

The SEM is a measure of precision for an estimated population mean. SD is a measure of data variability around mean of a sample of population. Unlike SD, SEM is not a descriptive statistics and should not be used as such. However, many authors incorrectly use the SEM as a descriptive statistics to summarize the variability in their data because it is less than the SD, implying incorrectly that their measurements are more precise.

Reply to  AlanJ
March 5, 2024 1:24 pm

Badly WRONG. !!

You don’t have a clue what you are talking about, do you.

Reply to  AlanJ
March 5, 2024 1:25 pm

The sample means distribution is *NOT* the distribution of the parent data set. It is the distribution of the parent data set that determines measurement uncertainty.

It doesn’t matter how precisely you calculate the average if that average has measurement uncertainty.

Reply to  doonman
March 5, 2024 1:23 pm

Climate scientists don’t even calculate the variance of their data. They just assume a Gaussian distribution for everything, ignoring that

  1. the variances of SH and NH temperatures are different because their seasons are opposite, and
  2. SH and NH temperatures form a multi-modal distribution by definition. Since the variances are different, even the anomalies form a multi-modal distribution.

Reply to  AlanJ
March 5, 2024 1:21 pm

Measurement uncertainty is *NOT* the same thing as the SEM.

Why do statisticians ALWAYS get this wrong?

The SEM measures how precisely you have located the mean based on using *ONLY* the stated values of the measurement. That tells you NOTHING about the accuracy of the mean you just so precisely located!

The basic concept of the SEM is that you take multiple samples and calculate the mean of each sample. The spread of the values derived from those samples tells you how close you are to the actual mean of the population.

This assumes that all data points being used are stated values only. But for measurements each data point consists of a stated value AND an uncertainty interval.

Suppose you have three measurements: 5 +/- .5, 7 +/- 1, and 2 +/- .8.

A statistician will tell you the average is 4.7 with an SEM of 1.45. This is based only on using the stated values.

But the standard deviation of that data is 2.5 and the uncertainty is sqrt(.5^2 + 1^2 + .8^2) = 1.37 (close to the SEM but not the same).

Now which value tells you more about how accurate the precisely calculated mean actually is? 1.37? 1.45? 2.5?

Now suppose you take another sample and get another sample mean. Each of those sample means will have some kind of measurement uncertainty associated with it from propagating the measurement uncertainty of the data.

So in essence you don’t find the mean from the stated value of the sample means but from the stated value plus its individual measurement uncertainty.

Suppose the second sample gives the same mean of 4.7. A statistician would say the SEM is zero since both means are the same. A physical scientist or an engineer would say the measurement uncertainty of that mean of 4.7 is the quadrature addition of the standard deviations of the sample data.

It’s why I’ve tried to point out that the SEM *must* be conditioned by the measurement uncertainty of the measurement data. That’s something climate scientists and statisticians never do. Because they operate under the meme that all measurement uncertainty is random, Gaussian, and cancels.
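
For the record, a short Python sketch of the arithmetic in the example above (the three measurements and their uncertainty intervals are the commenter’s illustration, not real data):

```python
# Sketch of the example above: mean and SEM from the stated values alone,
# versus the SD of the data and the quadrature sum of the stated uncertainties.
import math
import statistics

values = [5.0, 7.0, 2.0]
uncertainties = [0.5, 1.0, 0.8]

mean = statistics.mean(values)                          # ~4.7
sd = statistics.stdev(values)                           # ~2.5
sem = sd / math.sqrt(len(values))                       # ~1.45
u_quad = math.sqrt(sum(u**2 for u in uncertainties))    # ~1.37

print(round(mean, 2), round(sd, 2), round(sem, 2), round(u_quad, 2))
```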

AlanJ
Reply to  Tim Gorman
March 5, 2024 2:30 pm

The SEM measures how precisely you have located the mean based on using *ONLY* the stated values of the measurement. That tells you NOTHING about the accuracy of the mean you just so precisely located!

I sound like a broken record at this point, but no one thinks otherwise (one broken record to another?). You’ve got it into your head that people disagree with this sentiment, and you spend all your time arguing against a phantom.

Reply to  AlanJ
March 5, 2024 1:23 pm

Your lack of mathematical understanding shines through again, AJ.

Better not to comment than to prove you are mathematically illiterate.

youcantfixstupid
Reply to  AlanJ
March 5, 2024 12:18 pm

“Of course you won’t do it, and neither will anyone else on this site, because then the big lie would be shattered.”

Wow, if doing such an analysis would unequivocally put an end to any debate on whether the planet is warming, I’d expect that you or the highly paid apparatchiks of the Met Office would gladly do it… in fact I’d think you’d be outraged that the Met Office is using such horribly inaccurate data without reporting how bad it is, when it has an ‘obvious’ solution in using only the Class 1-3 stations. You should be absolutely LIVID that your religion is being embarrassed in this way.

AlanJ
Reply to  youcantfixstupid
March 5, 2024 2:33 pm

I don’t know that such an analysis would put an end to debate, but it is the bare minimum hurdle the contrarian set needs to clear to actually be part of the scientific conversation. You claim scientists have it all wrong, show them how to do it right. Publish your results. Roll up your sleeves.

It’s interesting: the folks at Berkeley Earth did exactly this, and in the early days the WUWT contrarians cheered them on for it, convinced they would unravel the whole global warming scam in the process. Then the Berkeley Earth folks found that they arrived at the very same conclusion as everybody else, NASA, Hadley Centre, NOAA, etc., and now the WUWT contrarian set has cast them out and bundled them into the vast worldwide conspiracy.

Reply to  AlanJ
March 5, 2024 2:53 pm

Berkeley were the crud of the alarmist crowd. Muller “pretended” not to be, but several statements he had made previously showed that he was very much a rabid alarmist and certainly not above trying to con people.

Then the other members…. All MANIC AGW-cultists.

They use all the WORST DATA in the world, have no idea of its reliability, and can create any result they want to create.

If they get the same result as the massively tainted urban data used by NOAA/GISS, HAdCrud etc….. it is no coincidence.

AlanJ
Reply to  bnice2000
March 5, 2024 3:38 pm

Oh of course, once the BEST results were published, the contrarian set immediately decided they were all wolves in sheep’s clothing and not True Skeptics after all. But then, the contrarian set never followed up by producing their own temperature dataset.

Reply to  AlanJ
March 6, 2024 6:03 am

“Oh of course, once the BEST results were published, the contrarian set immediately decided they were all wolves in sheep’s clothing and not True Skeptics after all.”

That is because skeptics are perfectly capable of reading a temperature chart, and they see the written record and then see the Hockey Stick creation and can see how the Hockey Stick chart deviates from the written reality. So why wouldn’t they be skeptical of Berkeley Earth’s efforts?

Berkeley Earth is just perpetrating the Early Twentieth Century fraud. Hiding the Early Twentieth Century heat is what they are doing.

It is the BIG LIE of alarmist climate science.

Reply to  bnice2000
March 6, 2024 5:53 am

“it is no coincidence.”

The reason it is no coincidence is because all the data sets start off with bogus temperature data. That’s why they all have the same temperature profile. Credit to Phil Jones and a few others for the bastardized data.

Berkeley Earth didn’t go back and examine every temperature record since 1850, they used bogus data for that part of the temperature record, just like all the other climate alarmist data sets.

The written, historical temperature records show a completely different temperature profile from the bogus Hockey Stick profile, where it was just as warm in the Early Twentieth Century as it is today.

The Climate Alarmists bastardized this data when creating the global temperature record. They did so to make it appear that today is the warmest period in thousands of years, and CO2 is the cause.

If it was no warmer today than in the recent past, then the fact that CO2 has increased would be a non-issue.

It *was* just as warm in the recent past. CO2 should *not* be an issue, but temperature data manipulators and bastardizers have turned it into one with their distortion of the global temperature record.

Anthony Banton
Reply to  Steve Richards
March 5, 2024 9:31 am

Aircraft require very accurate temp readings for reasons of safety. Take-off weight has to be calculated against available thrust at take-off …. which is in part dependent on air temperature.

Richard Page
Reply to  Anthony Banton
March 5, 2024 10:36 am

Many of us are aware of that; many of us have stated that fact on WUWT for some years now – that is not at issue. The real puzzler is why they are used in the temperature datasets at all.

aussiecol
Reply to  Anthony Banton
March 5, 2024 1:19 pm

Which is why the stations are near the tarmac, to measure the radiant heat as well. Using them for weather readings in general should not be allowed, as they produce exaggerated readings.

Reply to  Anthony Banton
March 5, 2024 2:50 pm

But they are often totally unrepresentative of the surrounding area, and should NOT be used for “global” anything.

They are meaningless in that regard.

sherro01
Reply to  bnice2000
March 6, 2024 3:53 am

Bruce,
Another big question is why authorities like Met Offices have not performed, or have not published, experiments to clarify uncertainty. We are not talking big money or difficult concepts.
. A number of thermometers at different distances from air traffic at an airport.
. Several overlap comparisons between LiG and electronic thermometers
. Collaboration with National Measurement labs to show the best performance in controlled environments, to place best limits on Argo probe accuracy and precision.
. Publish more data on inspection reports of field thermometers that cause them to be replaced because they are outside spec
. More reporting of country differences in Pt resistance probes regarding response time to step changes and frequency of sampling protocols
These are but a few suggestions off the top of the head. Meanwhile, some of us are doing some of this privately for no pay. An example for AlanJ, who complains about sceptics not producing UHI-free data, even though he has already seen this figure – one like I have never seen from a government body.
Geoff S
[linked figure not reproduced]

Reply to  AlanJ
March 5, 2024 6:30 am

“As always on WUWT, no mention is made of the fact that siting issues are well known and understood in the surface temperature network, and that great pains are undertaken to minimize their influence on regional and global temperature estimates.”

What pains exactly?

Siting issues represent systematic uncertainty in the measurements. Systematic uncertainty can’t be statistically analyzed and corrected for.

Bevington on systematic error: “Errors of this type are not easy to detect and not easily studied by statistical analysis. They may result from faulty calibration of equipment or from bias on the part of the observer. They must be estimated from an analysis of the experimental conditions and techniques.”
Taylor: “For this reason, uncertainties are classified into two groups: the random uncertainties, which can be treated statistically, and the systematic uncertainties, which cannot.” (italics are from the text)

You can’t even compare one station with another to “adjust” the readings since the systematic uncertainty causes will be different for each station. In other words, homogenization and infilling do nothing but distribute systematic uncertainties around – and those *always* add, they never cancel since they are not Gaussian.
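
A minimal Python sketch of the Bevington/Taylor point quoted above, assuming an invented set of readings and a hypothetical 0.8°C siting bias: the scatter statistics computed from the readings alone are identical with or without the bias, so no statistical analysis of those readings can reveal it.

```python
# Sketch: a constant (systematic) offset shifts every reading equally, so the
# standard deviation and SEM computed from the readings do not change.
# Readings and the 0.8 degree offset are invented for illustration.
import math
import statistics

readings = [14.9, 15.2, 15.0, 15.1, 14.8, 15.0]
bias = 0.8
biased = [r + bias for r in readings]

for data in (readings, biased):
    sd = statistics.stdev(data)
    sem = sd / math.sqrt(len(data))
    print(round(statistics.mean(data), 2), round(sd, 2), round(sem, 2))
# The means differ by exactly 0.8; the SD and SEM are identical in both rows.
```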

AlanJ
Reply to  Tim Gorman
March 5, 2024 6:52 am

You’re simply misconstruing the texts you cite. Taylor is saying you can’t assess systematic uncertainty via statistical analysis of repeated measurements, and this is because each repeated measurement carries the same bias (thus, systematic bias). This does not mean, in any way, shape, or form, that statistics cannot be used in analyzing datasets containing systematic bias or that statistical algorithms cannot be used to address sources of known systematic bias. Random error is revealed by repeated measurement, systematic bias is not, that’s all.

Reply to  AlanJ
March 5, 2024 7:47 am

More lies.

Reply to  AlanJ
March 5, 2024 7:47 am

I’m not misconstruing anything. I gave you the exact quotes.

“Taylor is saying you can’t assess systematic uncertainty via statistical analysis of repeated measurements, and this is because each repeated measurement carries the same bias (thus, systematic bias).”

You can’t identify the systematic uncertainty in non-repeated measurements either, and that includes Tmax and Tmin from the *same* station as well as from other stations.

“This does not mean, in any way, shape, or form, that statistics cannot be used in analyzing datasets containing systematic bias or that statistical algorithms cannot be used to address sources of known systematic bias.” (bolding mine, tpg)

  1. Statistical analyses which identify differences smaller than the systematic uncertainty are themselves part of the GREAT UNKNOWN. If the difference is 0.1C +/- 0.7C tell us what that means to you!
  2. I highlighted your use of “known”. It is part of a circular argument. If you can’t identify systematic uncertainty using statistical analyses then how can they be known? You are implicitly assuming you can identify systematic uncertainty using statistical analysis without actually stating that you are assuming that. And then you state that you can use the systematic uncertainty you have identified statistically. The base problem is that you CANNOT identify systematic uncertainty using statistical analysis.

“Random error is revealed by repeated measurement, systematic bias is not, that’s all.”

Uncertainty is a combination of random uncertainty and systematic uncertainty.

u(total) = u(random) + u(systematic)

If you can’t identify u(systematic) then you can’t subtract it from u(total) to find u(random). Repeated measurements won’t help. It doesn’t matter how many measurements you make, you still won’t be able to separate random uncertainty from systematic uncertainty unless you know one or the other! But if you *know* one or the other then it really isn’t uncertainty at all! The question remains, how do you KNOW one or the other?

AlanJ
Reply to  Tim Gorman
March 5, 2024 8:47 am

You can’t identify the systematic uncertainty in non-repeated measurements either

Of course you can. Always you regress to this inane philosophy that it is impossible to measure anything.

If you can’t identify u(systematic) then you can’t subtract it from u(total) to find u(random).

Your quotations do not suggest that you cannot identify systematic uncertainty, only that it cannot be revealed by the same method (repeated measurement) as random error.

Reply to  AlanJ
March 5, 2024 11:27 am

If systematic uncertainty can’t be identified by repeated measurements, exactly how do you identify it without continuous calibration?

AlanJ
Reply to  Jim Gorman
March 5, 2024 2:38 pm

I measure 500 things with a yardstick, not knowing that it is off by a centimeter. I have 500 measurements that are off by a centimeter, but I won’t know it from those 500 measurements, because, well, they all contain the same error. That is systematic bias.

Then I do a study, comparing my yard stick to a dozen others, and all of them agree with each other, but mine is an outlier, by a centimeter. Now I go to the manufacturer, and they investigate, and, sure enough, the mold was off, and my yard stick is indeed off by a centimeter.

Now I’ve learned that my measurements contain a systematic bias, and I didn’t learn about it from repeated measurements with my yard stick. What do I do with that information? The Gorman twins say, “throw up your hands, give up, walk away, it’s impossible to know anything, the universe is a cold and aloof mistress who cares not for our foibles. Let’s just lay down and wait for the sweet embrace of death to take us.” I, on the other hand, say “let’s subtract a centimeter from each of the 500 measurements.”

Reply to  AlanJ
March 5, 2024 3:08 pm

You get exactly one chance to measure an air temperature before it is gone, forever. The sample size is always equal to one.

There is no way to learn what the “bias” in a thermometer was back in the 1930s without using a ouija board.

What you don’t understand is that nonrandom errors can and do drift with time, it is impossible to back correct for them.

This is Fake Data fraud.

AlanJ
Reply to  karlomonte
March 5, 2024 3:41 pm

So we are to believe in your world then that it is impossible also to ever measure average streamflow, so scientists should abandon all stream gauges? Because once a molecule of water flows past the sensor, that’s it, it’s over, now we are measuring a different thing. Taking the average of a bunch of distinct molecules is a fool’s errand.

There is no way to learn what the “bias” in a thermometer was back in the 1930s without using a ouija board.

That’s just some stupid thing you believe, it isn’t true of course.

Reply to  AlanJ
March 5, 2024 4:56 pm

Oh look, AnalJ has a new set of straw men lined up, and lights them all on fire.

Reply to  AlanJ
March 5, 2024 5:49 pm

Don’t tell us about hypothetical situations, give us concrete examples with your calculations.

Here are 22 temperatures of Tmax as shown in NIST TN 1900. Show what your recommended calculations are for uncertainty.

18.75, 28.25, 25.75, 28.00, 28.50, 20.75, 21.00, 22.75, 18.50, 27.25, 20.75, 26.50, 28.00, 23.25, 28.00, 21.75, 26.00, 26.50, 28.00, 33.25, 32.00, 29.50 

AlanJ
Reply to  Jim Gorman
March 6, 2024 5:48 am

Oh, I’d much rather you do it. Calculate the mean of the dataset, and provide the uncertainty in your estimate. Show your work. Demonstrate for us how the estimate of the uncertainty in the mean would change if you, say, doubled the sample size.

Reply to  AlanJ
March 6, 2024 8:51 am

If one declares the:

measurand = monthly_average_Tmax

then the daily measurements are single measurements taken under non-ideal repeatability conditions.

The mean and standard deviation are exactly what TN 1900 shows.

μ = 25.6°C and σ = u = 4.1°C

This is what I would report as the dispersion of measurements that could be attributed to the measurand. It would imply ~68% of the measurements were in this interval.

It would not be inappropriate to report it as:

μ = 25.6°C and σ = 4.1°C and U = k•u = 2•4.1 = 8.2°C

This would imply that 95% of the measurements were within this interval.

Another option would be to do it exactly as NIST does in TN 1900. You would have:

μ = 25.6°C and U = 1.8°C

Please note that NIST assumes negligible systematic and measurement uncertainty and Gaussian errors that cancel. That leaves only the variation in the data to calculate the uncertainty in monthly_average_Tmax.

The resolution uncertainty of LiG thermometers is at least ±0.5°C, which would be added to these values.

Read and interpret this from the GUM to understand what is occurring here.

F.1.1.2 It must first be asked, “To what extent are the repeated observations completely independent repetitions of the measurement procedure?” If all of the observations are on a single sample, and if sampling is part of the measurement procedure because the measurand is the property of a material (as opposed to the property of a given specimen of the material), then the observations have not been independently repeated; an evaluation of a component of variance arising from possible differences among samples must be added to the observed variance of the repeated observations made on the single sample.
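
For readers following along, a short Python sketch of the TN 1900 style calculation described above, using the 22 Tmax values quoted earlier in the thread (the t-factor of 2.08 for 21 degrees of freedom at 95% is taken from a t-table):

```python
# Sketch of the NIST TN 1900 (E2-style) arithmetic on the 22 Tmax values above.
import math
import statistics

tmax = [18.75, 28.25, 25.75, 28.00, 28.50, 20.75, 21.00, 22.75, 18.50, 27.25,
        20.75, 26.50, 28.00, 23.25, 28.00, 21.75, 26.00, 26.50, 28.00, 33.25,
        32.00, 29.50]

mean = statistics.mean(tmax)              # ~25.6 C
sd = statistics.stdev(tmax)               # ~4.1 C, spread of the daily values
sem = sd / math.sqrt(len(tmax))           # ~0.87 C
U95 = 2.08 * sem                          # ~1.8 C expanded uncertainty (t-factor, 21 dof)

print(round(mean, 1), round(sd, 1), round(sem, 2), round(U95, 1))
```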

AlanJ
Reply to  Jim Gorman
March 6, 2024 9:21 am

Would it also be appropriate to report your estimate of the uncertainty in your estimate of the mean? How would you report that? And how would that uncertainty change if you had a larger sample size?

Reply to  AlanJ
March 6, 2024 11:21 am

Uncertainty always increases, you silly trendologist person. The magic of averaging cannot change this cold, hard fact.

Reply to  AlanJ
March 6, 2024 5:04 pm

Finding the mean more precisely than the measurements provide is a waste of time. The mean can’t be located any more precisely than the resolution (including the measurement uncertainty of the measurements) of the measurements.

Learn about significant figures. If the last decimal digit in just one piece of the data set is in the tenths digit then the mean should be reported to the tenths digit as well.

Thinking you can increase the precision of a mean by taking larger samples is a statistician’s meme that is only good for the blackboard in a statistics course. It has no place in the real world, not in science and not in engineering. In the real world you only know what you know; there is no crystal ball that lets you take it further.

Once your sample size is large enough to be sure of the tenths digit it would be a waste of time to go any further. Calculating the mean out to 1.53456 by increasing the sample size is just mental masturbation. That mean should be reported as 1.5 no matter what. You *might* want to go out to the hundredths digit to ensure you aren’t introducing a rounding error but the mean would still be reported to the tenths digit, e.g. as 1.5 or 1.6 (if rounding up is justified).

I simply cannot believe how uneducated climate scientists are in the basics of science, at least if you are an example of a climate scientist!

AlanJ
Reply to  Tim Gorman
March 6, 2024 7:12 pm

(-1 + 0.5 + 0.6) / 3 = 0.03.

To how many significant digits did I present the result?

0perator
Reply to  AlanJ
March 6, 2024 7:43 pm

ROFL. You can’t even get sig figs right. What a dope.

Reply to  0perator
March 6, 2024 9:41 pm

Yep. +1000%.

AlanJ
Reply to  0perator
March 7, 2024 5:36 am

Want to give your best go at answering the question?

Reply to  AlanJ
March 7, 2024 5:11 am

It’s not *just* a matter of sig figs, it’s also a matter of resolution. You cannot increase resolution through averaging!

Why did you ignore the very first thing I posted? “Finding the mean more precisely than the measurements provide is a waste of time. The mean can’t be located any more precisely than the resolution (including the measurement uncertainty of the measurements) of the measurements.”

Based on the resolution limit in your numbers, i.e. the tenths digit, your average should be quoted as 0 (zero)!

Both Taylor and Bevington state that the result should have no more decimal places than the uncertainty. Why did you not include the uncertainty part of the measurements?

You just display the worldview of most statisticians. There is no such thing as measurement uncertainty, just stated values. It’s why you didn’t include an uncertainty interval for your measurements, you just don’t think in terms of measurements, only 100% accurate numbers! That’s what leads to the climate science meme of all measurement uncertainty being random, Gaussian, and cancels.

Let’s assume your numbers should be something like -1 +/- 0.1, 0.5 +/- 0.1, and 0.6 +/- 0.1.

What *is* the average of these numbers? What *is* the uncertainty of that average?

AlanJ
Reply to  Tim Gorman
March 7, 2024 5:58 am

Do you acknowledge that the result is presented with the appropriate number of significant figures above? It is the correct calculation of the mean of the three numbers, presented to a single significant digit. The mean is not simply an additional measurement value, it is the estimate of the central tendency of the set of numbers. Implicit in the estimate of the mean is the uncertainty of the set of numbers used to calculate it, because the error is present in the variance of the measured values. Thus the variance (standard deviation) of the set of numbers used to estimate the mean provides the estimate of the uncertainty in the calculated mean.

For the case above, the standard deviation is 0.9, and the SEM is 0.5, so the mean could be anywhere from -0.47 to 0.53. If we make the following additional measurements: 0.1, 0.2, -0.3, the SEM is now 0.2 (our estimate of the mean is more precise).

What is wild to me is that, to the best of my understanding, this isn’t something you even disagree with. You’re just waffling around to try and avoid having to admit it.
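
For anyone checking the arithmetic above, a small Python sketch (the values are the example figures under discussion, not measurements):

```python
# Sketch: SD and SEM for the three-value example, then after adding the three
# extra values mentioned above; the SEM shrinks as n grows.
import math
import statistics

def sd_and_sem(data):
    sd = statistics.stdev(data)
    return round(sd, 1), round(sd / math.sqrt(len(data)), 1)

print(sd_and_sem([-1.0, 0.5, 0.6]))                     # (0.9, 0.5)
print(sd_and_sem([-1.0, 0.5, 0.6, 0.1, 0.2, -0.3]))     # (0.6, 0.2)
```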

Reply to  AlanJ
March 7, 2024 6:26 am

Uncertainty is not error!

Like all climastistas, you can’t make it past this cold, hard reality.

All Hail The Great And Mighty Average!

Reply to  karlomonte
March 7, 2024 7:11 am

Climate science is still stuck in the 17th century. The entire international metrology discipline has abandoned the meme of “true value +/- error) – except for climate science. They think if they can just assume the “error” term away that they will then know the “true value”.

There is a REASON why the metrology world has abandoned this meme.

Reply to  Tim Gorman
March 7, 2024 7:28 am

Bill J doesn’t get this.
Alan J doesn’t get this.
Stokes doesn’t get this.

I could go on…

Reply to  AlanJ
March 7, 2024 6:41 am

“Do you acknowledge that the result is presented with the appropriate number of significant figures above?”

The numbers you give are not measurement statements. The number of sig figs is correct for the example you give, but the example has nothing to do with measurements.

“The mean is not simply an additional measurement value, it is the estimate of the central tendency of the set of numbers. Implicit in the estimate of the mean is the uncertainty of the set of numbers used to calculate it.”

Measurements don’t have “implicit” uncertainty, they have EXPLICIT uncertainty – which you failed to specify.

“… because the error is present in the variance of the measured values.”

Except you didn’t give MEASURED VALUES. A measurement has two parts, the stated value and the measurement uncertainty value.

You are still exhibiting the world view of a statistician, not of a physical scientist or engineer who understands metrology!

Let’s go back to what I postulated, a measurement uncertainty of +/- 0.5.

The variance of a data set is related to its range.

With measurements of -1 +/- 0.5, 0.5 +/- 0.5, and 0.6 +/- 0.5 the possible ranges of the data set run from

-1.5 to 1.1 (a range of 2.6)

and

-0.5 to 0.1 (a range of 0.6)

The range of just the stated values is -1 to 0.6 (a range of 1.6).

Since variance is also a metric for the uncertainty of the average you get a huge set of possible uncertainties for the average when you include the measurement uncertainty.

In fact the measurement uncertainty associated with the average based on the individual uncertainties would run from 1.5 (for direct addition) to 0.9 for quadrature addition.

Think about that for a minute! The average of the three numbers should actually be stated as either 0 +/- 1.5 or 0 +/- 0.9.

With the numbers you gave, and using the assumption that the decimal places in the numbers should match the decimal places in the uncertainty, possible uncertainty values would range from +/- 0.1 to +/- 0.9. I just picked a spot in the middle.

With an uncertainty of 0.1 your measurements would have an average of 0 +/- 0.2 (quadrature) to 0 +/- 0.3 (direct addition). With an uncertainty of 0.9 the average would be 0 +/- 1.6 (quadrature) to 0 +/- 2.7 (direct addition).

The variance of the actual stated values is 0.8 and the standard deviation is 0.9. Pretty close to the quadrature addition of an uncertainty of 0.5 EXCEPT – with just three measurements the assumption of partial cancellation of uncertainty is dubious at best. Meaning that if I were looking at this as part of an engineering proposal I would go with direct addition to be safe, i.e. an uncertainty of 1.5.
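
A short Python sketch of the range and propagation arithmetic above (the values and the ±0.5 interval are the commenter’s illustration; whether and how the propagated term should be scaled by n is exactly the point in dispute in this thread):

```python
# Sketch of the arithmetic above: possible data ranges once the +/-0.5
# uncertainty is included, plus direct and quadrature addition of the
# individual uncertainties.
import math

stated = [-1.0, 0.5, 0.6]
u = 0.5

widest = (max(stated) + u) - (min(stated) - u)      # ~2.6
narrowest = (max(stated) - u) - (min(stated) + u)   # ~0.6
stated_range = max(stated) - min(stated)            # ~1.6

u_direct = len(stated) * u                          # 1.5, direct addition
u_quad = math.sqrt(len(stated)) * u                 # ~0.9, quadrature addition

print(round(widest, 1), round(narrowest, 1), round(stated_range, 1),
      u_direct, round(u_quad, 1))
```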

Reply to  AlanJ
March 7, 2024 6:45 am

“Thus the variance (standard deviation) of the set of numbers used to estimate the mean provides the estimate of the uncertainty in the calculated mean.”

You are *still* living in statistical world with bellman. You seemingly can’t even comprehend the concept of measurement uncertainty.

The SEM is meaningless as far as the accuracy of the mean is concerned. It’s a measure of how precisely you have located the mean and not of how accurate the mean is.

The world you are living in is defined by your giving assumed 100% accurate stated values instead of MEASUREMENT values.

AlanJ
Reply to  Tim Gorman
March 7, 2024 8:05 am

The SEM is meaningless as far as the accuracy of the mean is concerned. It’s a measure of how precisely you have located the mean and not of how accurate the mean is.

Again, this is not a point of contention, why do you treat it as such? If you agree that the SEM indicates the precision of our estimate of the mean then we are in agreement.

Reply to  AlanJ
March 7, 2024 8:19 am

Bad reading skills you have, or marxist propaganda skills.

Let the viewer choose…

(Tim did NOT say this, dolt.)

Reply to  AlanJ
March 8, 2024 4:22 am

The SEM is *NOT* an indicator of precision. It is an indicator of how precisely you have located the mean; that is not the same thing as the precision of the mean. The precision of the mean can be no more than that of the components making up the data set. The SEM only indicates the interval in which the mean lies. That interval can’t be specified to any more decimal places than the components allow.

Example: If your data are measurements to the tenths digit then the SEM can’t be specified to more than the tenths digit.

data: x.1 +/- u1, y.2 +/- u2, z.1 +/- u3, w.5 +/- u4, r.3 +/- u5, ……

The sample means derived from data specified to the tenths digit also have to be specified to only the tenths digit.

(um is the uncertainty of the sample mean)

sample_mean1 = 5.1 +/- um1, sample_mean2 = 4.9 +/- um2, sample_mean3 = 5.2 +/- um3, …….

When you calculate the standard deviation of those sample means it can’t be specified to more than the tenths digit. All values after the tenths digit are part of the GREAT UNKNOWN.

And the *real* problem is that statisticians (and climate science) totally ignore the fact that the sample means have their own measurement uncertainty! The standard deviation calculated from the stated values *has* to be conditioned by the component measurement uncertainties, but statisticians and climate scientists never do this!

Reply to  Tim Gorman
March 8, 2024 5:58 am

This is perhaps the worst violation of measurement analysis, and statisticians don’t even realize they are doing it.

When creating a baseline from 30 monthly averages, are the uncertainties of the monthly averages propagated? Nope! Is the variance of the 30 pieces of data calculated at least? Nope!

Both the monthly average and the baseline average are random variables. When subtracted the difference should inherit the sum of the variances. Is that done? Nope!

old cocky
Reply to  Jim Gorman
March 8, 2024 3:46 pm

This is why the site-specific offset needs to be treated as a constant. The baseline (or any other) period’s properties need to be preserved through any number of transforms involving the subtraction or addition of the offset.

AlanJ
Reply to  Tim Gorman
March 8, 2024 7:19 am

The SEM is *NOT* an indicator of precision. It is an indicator of how precisely you have located the mean, that is not the same thing as the precision of the mean.

Word salad is an appropriate descriptor for this paragraph. The SEM indicates the precision of the estimate of the mean. A larger sample size makes the estimate more precise. You agree with this, though you do everything in your power to avoid admitting it.

Reply to  AlanJ
March 8, 2024 10:42 am

Just because you can’t understand what is written does not make it a word salad, gaslighter.

Reply to  AlanJ
March 8, 2024 3:08 pm

“The SEM indicates the precision of the estimate of the mean. A larger sample size makes the estimate more precise.”

Once again you reveal that you are a statistician with absolutely no understanding of measurements. The estimate of the mean can be no more precise than the measurement uncertainty of the data points allow.

When you are working with actual, physical, real world measurements you *must* use the full measurement statement: “stated value +/- uncertainty”. Any sample you make will have the average calculated from the stated values conditioned by the measurement uncertainty of the data in the sample.

Only a blackboard statistician would throw away the measurement uncertainty and assume the stated values are 100% accurate.

This is the second time I’ve had to point this out to you. How many times will it take for it to sink in?

Reply to  AlanJ
March 7, 2024 8:21 am

You keep digging, the hole is getting deeper.

You are treating these as numbers that are not conveying information about a physical phenomenon. We are discussing measurements. You are not.

You used these numbers, “-1”, “0.5”, “0.6”. That indicates your measuring device is capable of 1/10 resolution. Your -1 should be shown as “-1.0” if you used the same device. If you didn’t use the same device, then all bets are off.

Assuming the same device the addition of these numbers gives “0.1” and dividing by “3” results in “0.03333…..”

0.1 has one significant digit. 0.03 has two significant digits and rounds to 0 (zero).

https://web.mit.edu/10.001/Web/Course_Notes/Statistics_Notes/Significant_Figures.html

AlanJ
Reply to  Jim Gorman
March 7, 2024 8:33 am

Your -1 should be shown as “-1.0” if you used the same device. If you didn’t use the same device, then all bets are off.

Ok, it’s -1.0. To how many significant digits should I report the answer?

0.03 has two significant digits 

Oops, no, that contradicts the link you provided, try again. I’ll give you a freebie.

Reply to  AlanJ
March 7, 2024 9:03 am

You ran away (again) from this:

You are treating these as numbers that are not conveying information about a physical phenomenon. We are discussing measurements. You are not.

— JG

Reply to  karlomonte
March 7, 2024 9:21 am

Couldn’t have said it better myself!🤣

AlanJ
Reply to  karlomonte
March 7, 2024 12:59 pm

Because those are just words strung together that don’t mean anything. Notice that Jim ran away from actually acknowledging his mistake.

Reply to  AlanJ
March 7, 2024 1:08 pm

Any lie to keep your propaganda alive.

So very marxist.

Reply to  AlanJ
March 7, 2024 2:52 pm

Haven’t run away. Things to do other than sit here.

Some teach that 0.03 IS two sig figs when it is the result of a calculation.

It really doesn’t matter because if you had read further about adding numbers you would have seen:

The rough rule for addition and subtraction is to line up the numbers vertically, and keep only columns where every digit is significant. Here the zeroes in 1600 are not significant

1600

168

so we round 168 (to 200):

1600

200

1800

Consequently you would have an average of:

(-1 + 1 + 1) / 3 = 0.3

That is NOT what you got!

The SD of the data is 0.9

So the result is –> 0.3 ± 0.9

Using the SDOM you would get:

0.3 ± (2 * 0.5) -> 0.3 ± 1

You just keep revealing your ignorance of making physical science measurements.

AlanJ
Reply to  Jim Gorman
March 8, 2024 7:14 am

You’re prevaricating. Use the series -1.0, 0.5, 0.6 and find the average. Report the answer to the appropriate number of significant figures. Determine the uncertainty in the estimate of the mean.

Reply to  AlanJ
March 8, 2024 7:50 am

I am not prevaricating. I have shown you the reference from the MIT document about how to add measurements with varying scales. Maybe you missed seeing the indented and italicized text from the MIT document.

To reiterate:

line up the numbers vertically, and keep only columns where every digit is significant

I even went one step beyond and rounded values to the nearest integer. MIT’s procedure does exactly that in case you didn’t notice.

You continually fail to provide any references to support your assertions about how measurements should be handled. Until you do, you will continue to fail in your so called arguments.

AlanJ
Reply to  Jim Gorman
March 8, 2024 8:20 am

You very much are. The point is simply that the magnitude of the result of averaging does not depend on the precision of the numbers being averaged. I can average together -5, 1, and 5, all numbers expressed to 1 significant digit, and derive an answer of 0.3, expressed to one significant digit. I can estimate the uncertainty in the precision of my estimate of the mean by determining the SEM. I can increase the precision of my estimate by increasing the number of measurements in the estimate of the mean.

And, again, you agree with this. You just have painted yourself into a corner where admitting it would, in your mind, represent defeat.

Reply to  AlanJ
March 8, 2024 9:15 am

“The point is simply that the magnitude of the result of averaging does not depend on the precision of the numbers being averaged.”

Of course it does!

————————————————–
From Taylor:

“Once the uncertainty in a measurement has been estimated, the significant figures in the measured value must be considered. A statement such as

measured speed = 6051.78 +/- 30 m/s

is obviously ridiculous. The uncertainty of 30 means that the digit 5 might really be as small as 2 and as large as 8. Clearly the trailing digits 1, 7, and 8 have no significance at all and should be rounded. That is, the correct statement is

measured speed = 6050 +/- 30 m/s
—————————————————

One of the major factors determining the uncertainty interval is the precision of the measurements. The average should not be stated to more precision than the uncertainty of the measurement allows.
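
As a hedged illustration of that rule, here is a small Python helper (hypothetical, not from Taylor or any library) that rounds a result to the leading digit of its uncertainty, reproducing the 6050 +/- 30 example:

```python
# Hypothetical helper illustrating the rounding rule quoted above: state the
# result to no finer a digit than the leading digit of its uncertainty.
import math

def round_to_uncertainty(value, uncertainty):
    exponent = math.floor(math.log10(abs(uncertainty)))  # position of leading digit
    step = 10 ** exponent
    return round(value / step) * step, round(uncertainty / step) * step

print(round_to_uncertainty(6051.78, 30))   # (6050, 30), as in Taylor's example
```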

“I can estimate the uncertainty in the precision of my estimate of the mean by determining the SEM.”

You continue to confuse precision and accuracy. Measurements include an accuracy value that conditions the average.

The SEM only shows the interval in which the average might lie. That interval cannot be specified to any more precision than the accuracy of the sample means allows. It makes no difference how many digits you carry out the SEM to, if you go beyond the uncertainty of the sample means you are doing nothing but stating something you can’t possibly know – it’s part of the GREAT UNKNOWN.

You continue to show the world view of a statistician that knows nothing about measurements in the real world. Only statisticians think sample means of samples from a measurement data set have no uncertainty of their own. It stems from having *never* been trained to understand that all measurements have two parts, the stated value and the measurement uncertainty. The data points in a sample of measurements *all* should be given as “stated value +/- measurement uncertainty” since that is how the data points in the parent distribution should be given. The measurement uncertainty of the data points in the sample *must* be propagated onto the sample mean calculated from those data points.

This means that when you are calculating the SEM that it should also be given as “stated value +/- measurement uncertainty”. I have yet to find a statistics textbook or any statistics reference on the internet that goes into this. You will only find it in textbooks and references having to do with metrology – e.g. the JCGM.

AlanJ
Reply to  Tim Gorman
March 8, 2024 9:42 am

In our case we have already determined the appropriate number of significant figures to display the measurements based on the uncertainty: -5, 1, 5. Now you must accept that the average of these three numbers, to one significant digit, is 0.3. There is no leaf left for you to hide under.

You continue to confuse precision and accuracy. Measurements include an accuracy value that conditions the average.

Not even a tiny little bit. Notice that I used the word “precision” not the word “accuracy.” Repeated measurements increase the precision of the estimate of the mean, they do not increase the accuracy of the estimate of the mean. And, again, you agree with this, but have to keep doing your evasive song and dance because finding a point of agreement is like poison to a contrarian.

Reply to  AlanJ
March 8, 2024 10:48 am

More lies, are these all you have in the tank, gaslighter?

Reply to  AlanJ
March 8, 2024 2:19 pm

Repeated measurements increase the precision of the estimate of the mean,

This is an unsupported assertion. You need to provide resources that support your assertions. No one believes you are an expert in metrology.

Here is a well supported argument.

Precision is fundamentally how likely it is that a measuring device will produce the same reading when measuring the same thing, that is, repeatability of measurements. It is inherent in the device being used and cannot be improved by more and more measurements. It is a component of measurement uncertainty and is expressed as a standard deviation. It is evaluated by assessing the variation in measurements of the same thing multiple times.

From NIST TN 1297

However, ISO 3534-1 [D.2] defines precision to mean “the closeness of agreement between independent test results obtained under stipulated conditions.” Further, it views the concept of precision as encompassing both repeatability and reproducibility (see subsections  D.1.1.2 and D.1.1.3) since it defines repeatability as “precision under repeatability conditions,” and reproducibility as “precision under reproducibility conditions.” Nevertheless, precision is often taken to mean simply repeatability.

For example, the statement “the precision of the measurement results, expressed as the standard deviation obtained under repeatability conditions, is 2 µΩ” is acceptable, but the statement “the precision of the measurement results is 2 µΩ” is not. (See also subsection D.1.1.1, TN 1297 comment 2.)

(bold by me)

Here is a good article from a reliable source.

https://statisticsbyjim.com/basics/accuracy-vs-precision/

Reply to  AlanJ
March 8, 2024 3:31 pm

“In our case we have already determined the appropriate number of significant figures to display the measurements based on the uncertainty: -5, 1, 5. Now you must accept that the average of these three numbers, to one significant digit, is 0.3. There is no leaf left for you to hide under.”

Nope. The average value should be given as 0 (zero).

You do not know the measurements to the tenths digit. That is exactly what Taylor is explaining. So you cannot know the average to the tenths digit, UNLESS, that is, you are a blackboard statistician who knows nothing of the physical world – including metrology.

“Not even a tiny little bit. Notice that I used the word ‘precision’ not the word ‘accuracy.’”

Once again, you can have no more precision than you have accuracy, which is defined by the measurement uncertainty. If you don’t know the tenths digit in the measurements then you can’t know the tenths digit in the average.

How do you KNOW the tenths digit is a 3?

For the umpteenth time, you are giving 100% accurate numbers in a discussion on measurements. No measurement is 100% accurate. You are only displaying your dependence on the blackboard statistics you were taught from a textbook and taught by a teacher that wouldn’t know measurement uncertainty if they were bitten on the backside!

old cocky
Reply to  Tim Gorman
March 8, 2024 3:57 pm

It’s usually acceptable to allow 1 extra digit for averages.

The average is just the total divided by the count. For small counts, it would be better to express it as the actual value, in this case 1/3.

Of course, the average of {-1, 1, 1} gives the same result, so the other statistical descriptors provide a much better picture.

Reply to  old cocky
March 10, 2024 6:42 am

Sorry for the late reply.

1 extra digit is for calculating interim results in order to try and avoid rounding errors. Final results should *not* include the extra digit. That should be based on the magnitude of the measurement uncertainty.

old cocky
Reply to  Tim Gorman
March 10, 2024 1:50 pm

I checked a few sites regarding significant figures and rounding rules.

You’re quite correct – round down if the last figure is < 5, round up > 5, and round to the nearest even number for 5. I’ve also read that 5 should always round up, because the 0-4 interval is the same size as 5-9.

Reply to  AlanJ
March 8, 2024 4:42 pm

Your whole example is based upon a simplified and ill posed example you would find in a stats book.

In order to accurately make a well posed hypothetical example of measurements you need to supply several things.

  • Define the measurand accurately
  • State whether these are single measurements
  • Define the uncertainties of the measurement device
  • State what correction tables are available
  • State the repeatability conditions

I suggest you read some texts on measurement uncertainty to find out how problems are stated.

Reply to  AlanJ
March 8, 2024 10:47 am

You are a clown, nothing more, throwing bullshit around like a primate in a cage.

I can increase the precision of my estimate by increasing the number of measurements in the estimate of the mean.

Again, gaslighter, temperature measurements are not exercises in random stats sampling.

You have multiple populations each of size equal to ONE!

Reply to  AlanJ
March 8, 2024 2:19 pm

I can average together -5, 1, and 5

[…]

I can estimate the uncertainty

A complete and total non sequitur — another straw man.

You are averaging numbers, not real measurements! That you can’t understand the difference screams about your incompetence. Actual measurements would have uncertainty intervals for each point.

YOU DON’T HAVE ANY.

Reply to  AlanJ
March 8, 2024 10:43 am

You’re prevaricating.

You are a liar.

AlanJ
Reply to  Tim Gorman
March 7, 2024 6:00 am

Why did you ignore the very first thing I posted? “Finding the mean more precisely than the measurements provide is a waste of time. The mean can’t be located any more precisely than the resolution (including the measurement uncertainty of the measurements) of the measurements.”

Because no one has ever said otherwise. You’re treating this as though it’s a point of contention, when it is not.

Reply to  AlanJ
March 7, 2024 6:48 am

“Because no one has ever said otherwise. You’re treating this as though it’s a point of contention, when it is not.”

*YOU DID* – by using numbers with only stated values and no measurement uncertainty values.

And then quoting the average to a resolution you can’t possibly know from the stated measurement values.

Your own words put the lie to this claim.

Reply to  AlanJ
March 6, 2024 5:43 pm

Sure you can do that. Most people who care about measurements won’t look at that however.

I’m going to repeat this until you address it.

GUM

F.1.1.2 It must first be asked, “To what extent are the repeated observations completely independent repetitions of the measurement procedure?” If all of the observations are on a single sample, and if sampling is part of the measurement procedure because the measurand is the property of a material (as opposed to the property of a given specimen of the material), then the observations have not been independently repeated; an evaluation of a component of variance arising from possible differences among samples must be added to the observed variance of the repeated observations made on the single sample.

See that “property of a material” statement. Is daily Tmax temperature a “property” of the monthly_average_Tmax? Why do you keep avoiding the meaning of this statement?

Tell us what you think it says.

Reply to  AlanJ
March 5, 2024 11:59 am

Bullshit!

You can’t fully account for all the uncertainties, as they are not all known, and there is imprecision inherent in data collected by instruments that themselves have built-in error.

Your replies are a perfect example of random error……..

Reply to  Sunsettommy
March 5, 2024 12:19 pm

Bingo.

AlanJ
Reply to  Sunsettommy
March 5, 2024 2:40 pm

Science does not offer us absolute certainty, but it offers us something better than absolute ignorance. The Gormans choose absolute ignorance as the best option, what’s your take?

Reply to  AlanJ
March 5, 2024 3:10 pm

What you support (or even perform) is no different from reading tea leaves or goat entrails.

There is no “science” in this fake temperature data fraud, none.

Reply to  AlanJ
March 5, 2024 6:28 pm

Thanks for the ad hominem. You lose the argument.

Why don’t you show some concrete references for handling measurements.

Here is another reference you can peruse.

https://www2.chem21labs.com/labfiles/jhu_significant_figures.pdf

Reply to  AlanJ
March 5, 2024 7:25 pm

Huh, what did you say? Your random error isn’t random after all, since you didn’t address anything.

You are running on empty.

youcantfixstupid
Reply to  AlanJ
March 5, 2024 12:07 pm

Ah the hypocrisy it burrrnnnsss…

“Of course you can.”

And

“You can’t just insist things into truth.”

Reply to  AlanJ
March 5, 2024 12:53 pm

“Always you regress to this inane philosophy that it is impossible to measure anything.”

No, that is *NOT* what I am saying. I am saying that your measurements have uncertainty. You can’t just assume that uncertainty away, you have to include it as part of the measurement statement!

What do you do when confronted with the newest way to state measurements by just giving the interval? E.g. The temperature is 15C-16C? Do you just pick the median value as the “true value” and ignore the rest of the interval? What do you do if the uncertainty interval is asymmetric? Please note that Hubbard and Lin found in 2006 that: “ Furthermore, our study showed different discontinuity attributes for maximum and minimum temperature series.” So how do you homogenize daily averages when you have different discontinuities for min and max temperatures?

“Your quotations do not suggest that you cannot identify systematic uncertainty, only that it cannot be revealed by the same method (repeated measurement) as random error.”

Of course they suggest that you cannot identify systematic uncertainty using statistical methods. What other methods would you suggest? Voodoo magic? Systematic uncertainty includes calibration drift; how do you identify calibration drift without a calibration lab certifying the instrument before each measurement? Each instrument will be different, Hubbard and Lin state that unequivocally. So you can’t just pick a value and apply it willy-nilly!



Reply to  Tim Gorman
March 5, 2024 3:14 pm

Like most statistics types, he is stuck in the rut that a measurement is nothing but an exercise in random sampling.

0perator
Reply to  AlanJ
March 5, 2024 6:42 am

Hahaha. Thanks for the laugh this morning.

Reply to  AlanJ
March 5, 2024 7:45 am

Liar.

And you just admitted that many stations are not fit for purpose, good job.

Reply to  AlanJ
March 5, 2024 2:33 pm

Oh, right. Good on them. So they throw out the Class 4 and Class 5 station data entirely? No? I didn’t think so.

Excluding the inaccurate data is the only way to assure that the cumulative data is accurate.

You can’t “adjust” for a value that may (or may not) be off by anywhere from 2 to 5 °C, almost invariably warmer, with no idea how much it’s actually off. You can’t adjust for a number that you don’t know, especially since you’ve never taken the time to calibrate the temperature data against some accurate measurement, and you don’t know that it’s always off by the same amount. You have to throw it out. And you and I both know that they don’t throw away data that supports their narrative. Well I do, anyway. “Homogenization” algorithms are from the same witches cauldron as “gridding”: just making up data that doesn’t exist.

AlanJ
Reply to  stinkerp
March 6, 2024 5:53 am

Class 5 station data is not “inaccurate”; the absolute measured temperature values are just not expected to be representative of a broad region, simply due to micro-site conditions. The stations are accurately reporting the air temperature around the station. In other words, the issue is that the air around the station isn’t representative of the air, say, 1000m from the station. So if you’re using the value to see “how hot is it outside today?” you won’t get a good answer if you aren’t right next to the station. That doesn’t affect the station being used to assess long term temperature trends (only changes in micro-site conditions do this, and scientists use refined algorithms to detect and adjust for such changes).

aussiecol
Reply to  AlanJ
March 6, 2024 8:03 am

Exactly… just like a station sited ten meters from the tarmac. It measures the data accurately, but it is not the true reading of the surrounds further away. That is the issue with using stations not fit for the purpose of providing true temperatures for measuring a changing climate… period.

AlanJ
Reply to  aussiecol
March 6, 2024 9:36 am

This isn’t an issue facing the use of such data to track long term climate change because we don’t care if the value adequately represents the absolute temperature of nearby areas. We care if the change in temperature near the station adequately represents the change in the temperature of the surrounding area. Do you see this distinction?

If I want to measure the growth rate of two houseplants, I take the difference in each plant’s height above the floor over a year and divide by the time to get the growth rate. It doesn’t matter if one of the plants is not sitting on the floor but is on a stand instead; that doesn’t affect the calculated change in its height. It just means I can’t say anything about the plants’ absolute heights relative to each other, but that isn’t the thing I care about anyway.
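
A trivial Python sketch of the analogy, with invented heights (the point being only that a constant offset drops out of a difference; a change in the offset over time would not):

```python
# Sketch: a constant offset (the stand) cancels in the computed change;
# only a change in the offset itself would bias the result. Heights invented.
h_start, h_end = 20.0, 35.0      # plant height above the floor, cm
stand = 50.0                     # constant offset for the plant on a stand

change_floor = h_end - h_start                       # 15.0
change_stand = (h_end + stand) - (h_start + stand)   # 15.0 as well

print(change_floor, change_stand)
```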

Reply to  AlanJ
March 6, 2024 9:56 am

Throw away the lipstick you are attempting to use, it isn’t working.

You are trying to rationalize the use of stations surrounded by ever-increasing direct effects from human development.

AlanJ
Reply to  Jim Gorman
March 6, 2024 10:21 am

I don’t need to rationalize it – we have abundant research showing that these stations produce good results when bias-adjusted. Just see the perfect agreement between USCRN and nClimDiv, for example:

[chart comparing USCRN and nClimDiv not reproduced]

All I’m doing is trying to explain some of the nuance that seems to fly way over the heads of the WUWT contrarian set. But of course, engaging with that nuance is not something you all excel in, preferring big broad black and white generalizations over careful consideration of the science.

Reply to  AlanJ
March 6, 2024 11:37 am

LOL! 🤡

BIAS ADJUSTMENTS to data that was wrong, to make it be what we think it should be.

Reply to  Jim Gorman
March 6, 2024 2:50 pm

Yep. A typical instance of climate pseudoscience circular self action.

Reply to  AlanJ
March 6, 2024 2:49 pm

Another word salad.

Reply to  AlanJ
March 6, 2024 5:17 pm

You still haven’t figured out the difference between intensive and extensive properties, have you?

What if one of those house plants is a lemon tree and the other an aloe plant?

Reply to  AlanJ
March 6, 2024 11:22 am

You have zero understanding of real-world measurement uncertainty.

To be expected of a trendologist bent on creating hockey sticks out of the ether.

Reply to  AlanJ
March 6, 2024 2:48 pm

Nice word salad.

Reply to  AlanJ
March 6, 2024 5:16 pm

“Class 5 station data is not ‘inaccurate’…”

Malarkey. Once *any* field measurement device leaves the calibration lab it loses accuracy. It may lose a lot or just a little, but no field device ever reaches its use location being totally accurate.

It’s why machinists have gauge blocks – to routinely calibrate their measuring devices. And those gauge blocks have to be checked on a regular basis for accuracy. And the machinists measuring tools aren’t usually exposed to the outdoor environment very often, they exist in a controlled environment.

Yet climate science thinks a temperature measuring station never drifts in calibration, it’s always 100% accurate!

“That doesn’t affect the station being used to assess long term temperature trends”

Of course it does! At least when that trend is calculated by adding it to the readings at a different location or in a different environment! It’s why the median of Tmax and Tmin has more measurement uncertainty than each individually!

Climate science would be better served by assigning each individual station a plus, a minus, or a 0 based on the local trend. Then just add up all the pluses, minuses, and zeros for all the measuring stations to find out what the overall trend is. You might not know the trend down to the hundredths or thousandths of a unit but you really can’t know that anyway because of the innate measurement uncertainty!
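
A minimal Python sketch of that sign-count idea, with invented per-station trends (the threshold and values are purely illustrative):

```python
# Sketch: classify each station's local trend as +1, -1 or 0 and tally the
# signs instead of averaging magnitudes. Trends and threshold are invented.
def sign(trend, threshold=0.05):
    if trend > threshold:
        return 1
    if trend < -threshold:
        return -1
    return 0

station_trends = [0.12, -0.03, 0.08, 0.00, 0.21, -0.10]   # hypothetical degC/decade
tally = sum(sign(t) for t in station_trends)
print(tally)    # 3 warming, 1 cooling, 2 within the threshold -> net +2
```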

Reply to  AlanJ
March 6, 2024 10:23 am

“… great pains are undertaken to minimize their influence on regional and global temperature estimates.”

PPPPHHHHHTTTTTTTTTpphhhthhttphphtPPPHT

March 5, 2024 6:13 am

Is 5 good or bad??

Met Office WOW – Site Ratings

Richard Page
Reply to  ghalfrunt
March 5, 2024 6:39 am

Bad; very, very bad. “Don’t cross the streams!” bad.
Basically, look at it this way – every single temperature measurement taken at a class 5 site may be off by up to 5°C, or not, or anywhere in between. There is simply no way to tell and it likely changes from time to time. It’s a variable error that has messed up the entire dataset because we simply can’t tell what the error has been for every single class 3, 4 or 5 station.

Reply to  Richard Page
March 5, 2024 7:17 am

You can’t even tell what the error has been for a Class 1 or 2 station. Uncertainty in measurement includes calibration drift, which happens even in Class 1 and 2 stations, as well as microclimate changes.

Think of a measurement station situated over grass, i.e. a rural station in the middle of a pasture 10 miles from the nearest habitation. What happens to that grass over the span of a year? Here in NE Kansas in the States, that grass may be quite green in May, a duller green in August after no rain, a mixture of green and brown in Sep/Nov, totally brown in Dec, and white in Jan (from snow). Those changes in the grass represent micro-climate changes. Micro-climate changes affect the systematic uncertainty of the measurements, meaning the uncertainty changes over the span of a year.

You can’t identify that changing uncertainty using statistical analysis. But the uncertainty *does* exist. And it is likely wider than the differences climate science is trying to identify – meaning the differences are part of the Great Unknown!
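To make the distinction concrete, here is a toy illustration (entirely invented numbers, Python) of why a changing systematic offset behaves differently from random reading noise when you average: the noise shrinks roughly as 1/√N, but whatever bias is present simply carries through into the mean.

```python
# Toy illustration with made-up numbers: random noise largely averages away,
# but a seasonal siting bias does not.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)
true_temp = 10 + 8 * np.sin(2 * np.pi * days / 365)         # a "true" daily mean, in °C

reading_noise = rng.normal(0, 0.5, days.size)                 # independent per-day noise
seasonal_bias = 1.0 * (np.sin(2 * np.pi * days / 365) > 0)    # e.g. a dry-grass siting effect

measured = true_temp + reading_noise + seasonal_bias
error_in_annual_mean = measured.mean() - true_temp.mean()
print(f"error in the annual mean: {error_in_annual_mean:+.2f} °C")
# Roughly +0.5 °C of the bias survives the averaging; only the random noise term
# shrinks, at about 0.5/sqrt(365) ≈ 0.03 °C here.
```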

AlanJ
Reply to  Tim Gorman
March 5, 2024 10:00 am

Those changes in the grass represent micro-climate changes.

No, these changes represent seasons.

Reply to  AlanJ
March 5, 2024 1:06 pm

Now you are down to arguing what the definition of “is” is.

If the maintenance crew mows the lawn around the station during the summer the microclimate changes with no change in season. If it snows at the measurement station the microclimate changes from brown grass to white, reflective snow with no change in the season. If the landscaping company fertilizes the grass around the station the color of the grass can change with no change in season. If a windbreak of pine trees near the station dies because of beetle infection the microclimate of the station changes with no change in season. If a mud dauber builds a nest in the air intake of the screen the microclimate changes with no change in season.

How do you identify any of this from 500 miles away?

sherro01
Reply to  Tim Gorman
March 6, 2024 4:17 am

Tim,
Indirectly you are showing that the temperature changes used in depictions of a global average are tiny – on the order of tenths of a degree per decade, so far as one can imperfectly guess – while the noise envelope around the time series is typically similar or larger.
It is customary to admit that one cannot distinguish between values inside the envelope. There is no science in using replication to reduce the envelope size. Nature is not aware of your math assumptions. Geoff S

Reply to  Richard Page
March 5, 2024 10:48 am

0.01 ±5°C! Oh, I forgot, averaging over several thousand days and stations makes that 0.01 ±0.01°C!🤣
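For what it is worth, the textbook 1/√N reduction of the mean’s uncertainty applies only to errors that are independent and random. Taken at face value, shrinking ±5°C to ±0.01°C that way would require N = (5 / 0.01)² = 250,000 genuinely independent readings – and a persistent siting bias is not reduced by averaging at all.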

Reply to  Jim Gorman
March 5, 2024 1:26 pm

They really are totally clueless about mathematical realities, aren’t they.

March 5, 2024 6:19 am

The location shown in the OP is incorrect; the weather station is in Weston Park (I’ve been there).
Weston Park Weather Station

Unfortunately the Google Earth image won’t show on here.

Anthony Banton
Reply to  Phil.
March 5, 2024 7:09 am

Yes, here are the details ….

“This is the site of the Sheffield temperature station. On July 19th, 2022, the local Star newspaper reported that the city had smashed the temperature record with over 39C° for the first time. According to the Met Office, the newspaper reported, the record reached 39.4C° on July 18th. The red marker shows where the Sheffield readings are taken, hard by a busy road with a large bus lane, surrounded by either city buildings or heavy vegetation, and located at or near what appears to be a concrete park. It might not surprise to learn that Sheffield is a Class 5 site.”

This from Wiki ….

“The park was opened to the public on Monday 6 September 1875 with the following day’s Sheffield Daily Telegraph reporting: “The weather was fine. The Park looked in its gayest Summer dress. The walks were freshly gravelled, the flower beds were trim and well ordered.” In 1882 the Weston Park Weather Station was erected privately by the curator of the adjacent museum Elijah Howarth. Howarth was known as “Elijah the Prophet” because of his reputation for forecasting the weather, he prepared daily forecasts to warn miners of changes in air pressure that could trigger the release of dangerous gases, he recorded daily weather observations for 47 years. It is the official climatological station for Sheffield and since 1937 it has been run by the museum’s staff. The station consists of thermometers housed in a Stevenson screen, tipping bucket and funnel rain gauges and a soil thermometer which takes readings at the depths of 30 cm and 100 cm.[3] It is one of the oldest weather stations in the country and all records are freely available via computer database or printed media.[4]”

“The red marker shows where the Sheffield readings are taken”
No, it doesn’t.
This does …..

https://www.google.com/maps/place/Weston+Park+Weather+Station/@53.3813495,-1.4914016,196m/data=!3m1!1e3!4m6!3m5!1s0x4879789d62144213:0xeef3507b49ef48d5!8m2!3d53.3813775!4d-1.4913748!16s%2Fg%2F1yfdqy2sg?entry=ttu

There is a narrow copse of trees between it and the road and it is situated on grass in, well, a park. Not a “concrete park”.

From: https://www.joinedupheritagesheffield.org.uk/events/whatever-the-weather-a-history-of-weston-park-weather-station/

“Weston Park Weather Station is one of the longest, continuously recording weather stations in the UK. This September, it will have been running for an astounding 140 years.”

Richard Page
Reply to  Anthony Banton
March 5, 2024 8:36 am

So a wide tarmac pathway on 2 sides, within 20 or so metres of a busy main road and with a big glass and concrete building (with heating and a/c vents) within 100 metres. Does that or does that not comply with the WMO and Met Office siting requirements? Absolutely not – it breaks virtually every single requirement; no wonder it’s a class 5.

Anthony Banton
Reply to  Richard Page
March 5, 2024 8:42 am

I didn’t say it didn’t.
Just correcting some “facts” that Homewood knows the likes of you will never investigate.

Reply to  Anthony Banton
March 5, 2024 1:28 pm

No, you didn’t correct anything.

You just HIGHLIGHTED the fact that this site has been highly corrupted over time.

Reply to  Richard Page
March 5, 2024 8:38 pm

You mean a big stone ~150 yo building with a few small windows and no heating and AC vents?
[photo of the building]

Reply to  Richard Page
March 6, 2024 2:18 pm

Where’s the building with a/c vents? When I lived in Sheffield there was only one building I knew of with A/C: the University building that housed the mainframe computer. If buildings in Sheffield need A/C then Climate Change is much worse than I thought.

Richard Page
Reply to  Phil.
March 5, 2024 8:52 am

The location shown in the OP is less than 20 or 30 metres away from the weather station – go NW from the red marker through the trees and you’re on it. It’s not bad for a co-ordinate search on Google Maps, although it does have an inbuilt bias for finding the nearest street or road!
These are the facts that Anthony Banton ‘knows’ we will never investigate, apparently!
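For anyone who wants to check the disputed distance themselves, here is a small Python sketch: the station coordinates come from the Google Maps link quoted above, while the marker position shown is purely hypothetical, since the OP’s exact marker coordinates are not given here, so the printed figure is illustrative only.

```python
# Great-circle distance between two lat/lon points (haversine formula).
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * R * asin(sqrt(a))

station = (53.3813775, -1.4913748)  # Weston Park Weather Station, from the link above
marker = (53.3806, -1.4905)         # hypothetical red-marker position, for illustration only
print(f"{haversine_m(*station, *marker):.0f} m apart")
```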

Anthony Banton
Reply to  Richard Page
March 5, 2024 9:26 am

Correct, you didn’t till I pointed out the real location.
And the actual site is way better than that implied by Homewood.

Richard Page
Reply to  Anthony Banton
March 5, 2024 10:43 am

Actually, when I first pointed out the problems with the siting of UK weather stations on here some years ago I mentioned it – it is you who are several years behind the times. In actual fact the site is far worse than the OP pointed out – he only mentioned the busy road, not the microclimate produced by the tree cover within 100 m of the site, nor the tarmac (or asphalt) paths, nor the buildings, nor the vehicle access on those paths.

Reply to  Anthony Banton
March 5, 2024 1:30 pm

WRONG. Buildings close by, a metal fence 1 m behind, a 3 m-wide concrete path a few metres away, and only a short distance from a road.

Trees all around blocking air movement.

It is a JUNK SITE – Class 4, 5, maybe even a 6!!

Richard Page
Reply to  bnice2000
March 5, 2024 2:27 pm

They come in sixes?

Reply to  Richard Page
March 5, 2024 7:34 pm

No it isn’t; it’s over 100 m away. That red marker is by the entrance to the park, not close to the weather station. I’ve been in that park on multiple occasions and knew from the first glance at the image that it was BS.

Richard Page
Reply to  Phil.
March 6, 2024 9:09 am

I’ve also been there multiple times and it’s not very far away from the temperature station.

Reply to  Richard Page
March 6, 2024 2:14 pm

Then you know that I’m right and the marker is over 100m from the weather station.

March 5, 2024 7:31 am

The trendologists won’t like this article, either.

I commend the author for making a better distinction between uncertainty and error this time around.

March 5, 2024 9:01 am

“Tell me you don’t understand statistics”…

By using measuring devices with error ranges much larger than the change you are looking for.

PS – 77.9% of the stations are Class 4-5.

Richard Page
Reply to  g3ellis
March 5, 2024 9:31 am

Yep – it’s like marking a bucket with ml up the side!

youcantfixstupid
Reply to  Richard Page
March 5, 2024 12:53 pm

Which reminds me of the joke about two Ukrainians, Metro and Demitri, out fishing. They’re having an incredible day of fishing. Demitri says, “Metro, we need to come back to this same place tomorrow, but I’m worried we won’t be able to find it.”… Metro says, “Demitri, Demitri, not to worry – I marked the side of the boat with an X.”… Demitri says, “OK, but what if we don’t get the same boat!”…

PS. I’m half Ukrainian and my full Ukrainian (though born in Canada) father taught me that if you can’t laugh at yourself you can’t laugh at anything…

PPS. The joke is much funnier if you hear it in a bastardized Ukrainian-Canadian accent.

March 5, 2024 11:06 am

I suppose knowing the actual temperature trends uncorrupted by local effects or instrument problems may be useful or at least interesting. But when the IPCC fails to find any adverse trends in any concerning weather/climate phenomena, the continuing failed predictions for those adverse trends can be safely ignored regardless of what the weather stations pretend to measure. We’ve had a mild gentle warming over a century and a half and a substantial greening of the biosphere with no observed adverse outcomes and all the time human society has grown and flourished while learning to preserve our natural world. What’s not to love?

March 5, 2024 11:54 am

I notice that the picture of the poorly sited station is very near a “Bus Lane”.
Perhaps one of those recalled buses was left unattended with the AC on near the station?
We have UHI effect. Perhaps we also need EVBHI effect? 😎

Michael Ketterer
Reply to  Gunga Din
March 6, 2024 5:14 am

Yes, the picture is very near a bus lane; the station is not. I have no idea what the picture is supposed to show.

March 5, 2024 12:10 pm

Can’t they just adjust and homogenise the data to make things all better?
/sarc.

Richard Page
Reply to  Chris Nisbet
March 5, 2024 12:28 pm

Actually, I rather think that’s been the problem.

Reply to  Richard Page
March 5, 2024 3:19 pm

And when they are called out on their Fake Data fraud, invariably they pull out a random hockey stick graph to show how their fraud doesn’t matter. Stokes has done this many times; AlanJ did the same in this very thread.

The question they always run away from:

If the fraudulent adjustments don’t matter, then why bother with them?

Reply to  Chris Nisbet
March 5, 2024 12:54 pm

That depends on whether their definition of “better” is actually better.
(Sort of paraphrasing Bill Clinton.)

Michael Ketterer
March 6, 2024 1:44 am

Is there any explanation of why the Cambridge Met Station is considered a Class 5 site? Looking at the station, I do not see a reason for this. Not to mention the misleading picture of the Sheffield Met station site.

Richard Page
Reply to  Michael Ketterer
March 6, 2024 9:12 am

It’s not really misleading – if you put co-ordinates into Google Maps it puts the marker somewhere within 30 or so metres, often near a road, but almost never right on top of where you want it.
As to Cambridge – it’s surrounded by tall plants, shrubs, trees and buildings within 100m.

Michael Ketterer
Reply to  Richard Page
March 6, 2024 10:56 am

No reason to put it in Class 5.

Reply to  Richard Page
March 6, 2024 8:22 pm

The Cambridge Met Station I’ve seen is in the middle of a field with no trees or buildings within 100 m. Presumably the one you’re referring to is the Cambridge Botanical Gardens station, another 100-year-old-plus site which does have some trees nearby.

[photo: Cambridge Botanic Garden weather station, seen from the west]

March 6, 2024 3:35 am

From the article: “Net Zero promotion requires reasonably precise measurements of both local and global temperatures and these are simply not available. In the run-up to last year’s COP28 meeting, the BBC ran an explanatory article on the significance of the 1.5C° threshold, a rise of the Earth’s temperature based on the ending of the Little Ice Age. “Every tenth of a degree of warming matters, but as you get warmer each increment matters more”, said Myles Allen, Professor of Geosystem Science at the University of Oxford, and a co-ordinating author of the IPCC’s special report on 1.5C° in 2018.”

He says, with absolutely no evidence to back up such a claim. This guy can’t tell us what a tenth-of-a-degree increase in temperature will cause with regard to the Earth’s atmosphere and weather. What a ridiculous, unscientific statement!

This is pure speculation masquerading as fact-based science.

This kind of BS (Bad Science) is all the climate alarmists have. They make wild, unsupported claims and expect everyone else to believe them.

Roger Collier
March 7, 2024 1:22 am

The Hawarden airport weather station appears to have been moved nearer to the runway because a refrigerator factory was built close to the old site. And as far as I can tell, the Kinlochewe record still has not been validated.