EXCLUSIVE: A Third of U.K. Met Office Temperature Stations May Be Wrong by Up to 5°C, FOI Reveals

From the DAILY SCEPTIC

BY CHRIS MORRISON

Nearly one in three (29.2%) U.K. Met Office temperature measuring stations have an internationally-defined margin of error of up to 5°C. Another 48.7% of the total 380 stations could produce errors up to 2°C, meaning nearly eight out of ten stations (77.9%) are producing ‘junk’ or ‘near junk’ readings of surface air temperatures. Arguably, on no scientific basis should these figures be used for the Met Office’s constant promotion of the collectivist Net Zero project. Nevertheless, the state-funded operation frequently uses them to report and often catastrophise rises in temperature of as little as 0.01°C.

Under a freedom of information request, the Daily Sceptic has obtained a full list of the Met Office’s U.K. weather stations, along with an individual class rating defined by the World Meteorological Organization (WMO). These CIMO ratings range from pristine class 1 and near-pristine class 2, to an ‘anything goes’ or ‘junk’ class 5. The CIMO ratings penalise sites that are near any artificial heat sources such as buildings and concrete surfaces. According to the WMO, a class 5 site is one where nearby obstacles “create an inappropriate environment for a meteorological measurement that is intended to be representative of a wide area”. Even the Met Office refers to sites next to buildings and vegetation as “undesirable”. It seems class 5 sites can be placed anywhere, and they come with a WMO warning of “additional estimated uncertainties added by siting up to 5°C”; class 4 notes “uncertainties” up to 2°C, while class 3 states 1°C. Only 13.7%, or 52, of the Met Office’s temperature and humidity stations come with no such ‘uncertainty’ warnings attached.

The above graph shows the percentage totals of each class. Classes 1 and 2, identified in green, account for just 6.3% and 7.4% of the total respectively. Class 3, identified in orange, comes in at 8.4%. The graph shows the huge majorities enjoyed by the darkening shades of red showing classes 4 and 5. It is possible that the margins of error identified for classes 3, 4 and 5 could be negative – if, for instance, the measuring device was sited in a frost hollow – but the vast majority are certain to be pushed upwards by heat corruption.
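For readers who want to check the arithmetic, here is a minimal Python sketch (illustrative only, assuming the FOI percentages quoted above and the 380-station total) that turns the class shares back into approximate station counts, with the WMO ‘additional siting uncertainty’ attached to each class:

# Rough cross-check of the class breakdown reported above, assuming the
# FOI percentages and the 380-station total are as quoted in the article.
TOTAL_STATIONS = 380

# WMO/CIMO class -> (share of stations, additional siting uncertainty in degC).
# Classes 1 and 2 carry no additional siting uncertainty in the WMO scheme.
classes = {
    1: (0.063, 0),
    2: (0.074, 0),
    3: (0.084, 1),
    4: (0.487, 2),
    5: (0.292, 5),
}

for cls, (share, uncert) in classes.items():
    count = round(share * TOTAL_STATIONS)
    note = "no additional siting uncertainty" if uncert == 0 else f"up to {uncert}°C"
    print(f"Class {cls}: ~{count} stations ({share:.1%}), {note}")

print(f"Classes 4 and 5 combined: {classes[4][0] + classes[5][0]:.1%}")  # 77.9%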

Last year, the investigative journalist Paul Homewood sought FOI information from the Met Office about the Welsh weather station Porthmadog, which often appears in ‘hottest of the day’ listings. He was informed that the site was listed as class 4 and “this is an acceptable rating for a temperature sensor”. Hence, continued the Met Office, “we will continue to quote from this site”. In short, observes Homewood, the Met Office is happy to use a class 4 site for climatological purposes, “even though that class is next to junk status”. It is bad enough that the Met Office is using this site, but it is even worse that they know about the issues but still plan to carry on doing so, Homewood continued. “How many other weather stations are of such poor quality?” he asked.

Now we know.

Using these figures, quoted to a precision of one hundredth of a degree centigrade, the Met Office declared that 2023 was the second hottest year in the U.K., coming in just 0.06°C lower than the all-time record. Cue, of course, all the Thermogeddon headlines in mainstream media. In 2022, the Met Office said that five sites in the U.K. went past 40°C on July 19th, with a record of 40.3°C at RAF Coningsby. Kew Gardens is termed a class 2 site, although it is very close to one of the largest tropical glasshouses in the world. St James’s Park and Northolt airport are class 5 sites, Heathrow is class 4, while RAF Coningsby is class 3. At the time, the Met Office declared that the records set a “milestone in U.K. climate history”. National records were also set on July 18th at Hawarden Airport in Wales (class 4) and on July 19th at Charterhall in Scotland (class 4).

Always alive to a popular headline catastrophising the weather, the Met Office declared a record warmest St. Valentine’s night in England this year, with 11.5°C at the class 4-rated St. Mary’s airport on the Isles of Scilly. Earlier in the year, the Met Office declared the highest January temperature in Scotland at 19.6°C at Kinlochewe, a class 4 site. Interestingly, the previous, much promoted, U.K. record was set on July 31st 2019 at the Cambridge Botanic Gardens, a class 5 site. Even more interesting is that in the Homewood FOI disclosures, the Met Office stated that class 5 data “will be flagged and not quoted in national records”.

The Met Office is between a rock and a hard place with these surface temperature measurements. Many of its long-standing stations have been encroached upon by urbanisation, and corruption seems to have become endemic across the entire system. In the past this mattered less, since margin-of-error allowances could be accepted along with less accurate local and national weather forecasting. Measuring surface temperatures across countries and then the planet is always going to be difficult, but a more accurate reading would be obtained by only using data from WMO classes 1 and 2. However, national and global temperatures have become politicised by the global warming scare and the proposed Net Zero solution. Alarmists often state that climate ‘tipping’ points will be reached with very small increases in temperature measured in tenths of a degree.

Using data from just classes 1 and 2 would likely crash the claimed rises in national and global temperatures. Something similar would likely occur if the Met Office moved the majority of its stations to more suitable spots. A number of scientists have tried to measure the urban heat bias in temperature records, with estimates suggesting a general problem of warming corruption around the 20-30% mark. Last October, two scientists working out of the University of Alabama in Huntsville (UAH) produced a paper noting: “The bottom line is that an estimated 22% of the U.S. warming trend, 1895 to 2023, is due to localised UHI [urban heat island] effects.”

Under our FOI request, it can now be seen that the problems with corrupted U.K. weather stations are similar to those discovered in the United States by meteorologist Anthony Watts. In work compiled over a decade, Watts found that 96% of temperature stations used by the U.S. weather service NOAA were “corrupted” by the localised effects of urbanisation. Such siting, in close proximity to asphalt, machinery and other heat-producing or heat-accentuating objects, “violates NOAA’s own published standards, and strongly undermines the legitimacy and magnitude of the official consensus on long-term climate warming trends in the United States”, he observed.

Both the U.K. and U.S. temperature datasets are important constituents of global totals compiled by a number of weather operations including the Met Office and NASA. The Met Office runs HadCRUT, where over the last 10 years two retrospective revisions have added about 30% extra warming to recent global temperatures. This had the effect of removing all traces of a pause around 2000-2014. Meanwhile, Professor Ole Humlum has noted that the GISS database run by NASA increased its reported surface air temperature rise between 1910 and 2000 from 0.45°C to 0.67°C, a boost of 49% over this period. “Frequent and large corrections in a database unavoidably signal a fundamental uncertainty about the correct values,” commented Humlum.

Pristine temperature data is available. In 2005, NOAA set up a nationwide network of 114 stations called the U.S. Climate Reference Network (USCRN). It was designed to remove all urban heat distortions, aiming for “superior accuracy and continuity in places that land use will not likely impact during the next five decades”.

The graph above shows nothing more than very minor, gentle warming since 2005 – slight warming that might be expected in the small and continuing natural rebound from the depths of the pre-industrial Little Ice Age. A reliable source of global data is to be found in the UAH satellite record, which shows less overall warming since 1979 than the surface datasets. Both these datasets are rarely mentioned. In fact, one of the compilers of the satellite data, and a co-author of the UAH paper on urban heat, is Dr. Roy Spencer. In 2022 he was kicked off Google AdSense for publishing “unreliable and harmful claims”. The move demonetised Dr. Spencer’s widely consulted monthly satellite temperature update page by removing all Google-supplied advertising. Google is on record as stating that it will ban all sites that are sceptical of “well established scientific consensus”.

Chris Morrison is the Daily Sceptic’s Environment Editor.

466 Comments
walterrh03
March 1, 2024 2:19 pm

Nick Stokes is not going to like this.

Reply to  walterrh03
March 1, 2024 2:58 pm

Who cares!!

Writing Observer
Reply to  walterrh03
March 2, 2024 1:11 pm

Nick Stokes will wave his hands and find some totally false statement from the Met Office that says they don’t use this garbage in compiling UAH.

Reply to  Writing Observer
March 2, 2024 2:16 pm

Well that’s perfectly true – UAH has got nothing to do with the Met Office. Did you perhaps mean CET and the Met Office Hadley Centre?

Rud Istvan
March 1, 2024 2:19 pm

UK Met did not need the WUWT Surface Stations project. They did it to themselves. Figure is a big OUCH.

And funny how USCRN is almost never mentioned anywhere. It isn’t measuring the ‘right’ answer, so is it (intentionally) suppressed?

Reply to  Rud Istvan
March 1, 2024 2:52 pm

And funny how USCRN is almost never mentioned anywhere. It isn’t measuring the ‘right’ answer, so is it (intentionally) suppressed?

You mean it’s warming at the rate of 0.34°C / decade since 2005? Why is that not the “right” answer?

Rud Istvan
Reply to  Bellman
March 1, 2024 3:00 pm

You probably forgot a /sarc.
If not, the pristine (but unfortunately short) USCRN record is provided in the post to which I commented.

And, as concluded after looking at the global surface temperature record from many aspects and regions in the essay ‘When Data Isn’t’ in the ebook Blowing Smoke, it simply isn’t fit for climate purpose. This post just exposes the UK land record part of that much more general conclusion.

Reply to  Rud Istvan
March 1, 2024 3:35 pm

If not, the pristine (but unfortunately short) USCRN record is provided in the post to which I commented.

And as is customary here – presented with zero attempt to quantify the data. It’s just hoped that as the data is noisy, no one can see the change.

You can say that it’s too short a period to make any definitive statements about the rate of change so far – but people then keep claiming that in some way it disproves other temperature records. I keep asking how, but get no answers.

Scarecrow Repair
Reply to  Bellman
March 1, 2024 4:11 pm

It’s just hoped that as the data is noisy, no one can see the change.

You seem to be admitting that the data is noisy, and hoping no one can see the change. Yes, either add the /sarc or quit providing own goals.

Reply to  Bellman
March 1, 2024 4:22 pm

“no one can see the change.”

On the contrary, the bulge through the 2015/16 El Nino is quite obvious.

It is what causes the slight monkey-with-a-ruler linear trend.

Richard Greene
Reply to  Bellman
March 1, 2024 4:41 pm

People believe what they want to believe and disregard the rest. Get used to it. This Rule of Thumb does not apply only to leftists.

Rud Istvan
Reply to  Richard Greene
March 1, 2024 4:56 pm

Never. Data is data. Reality just is.

Reply to  Rud Istvan
March 1, 2024 6:32 pm

Unless it is Fake Data.

Richard Greene
Reply to  Rud Istvan
March 2, 2024 4:00 am

The NOAA data is whatever NOAA wants to tell us.

Whether it is accurate or not may be impossible to determine.

That may require a judgement on whether or not NOAA can be trusted.

If you trust NOAA, then their USCRN reflects much faster warming in the US (48 states) than the NASA-GISS or HadCRUT global average warming statistics

And that is a fact

Whether any of these statistics are accurate or useful is another debate.

The claims of a coming global warming crisis are NOT based on any specific historical climate trends. The CAGW propaganda only requires some amount of global warming after 1975.

We’d hear the same scary global warming predictions if UAH was used instead of NASA-GISS or HadCRUT.

Reply to  Bellman
March 1, 2024 3:35 pm

Because there has been only one full decade since 2005? That means you are defining a trend using only one data point – how is that in any way useful?

Reply to  Richard Page
March 1, 2024 3:50 pm

That means you are defining a trend using only one data point

The trend is defined by 229 data points.

how is that in any way useful?

If it’s not useful, how is anyone claiming that the data shows that the old station data is wrong?

Reply to  Bellman
March 1, 2024 4:16 pm

If you’re going to use a silly metric such as “0.34°C per decade” when there has only been one full decade since 2005 then you leave yourself open to ridicule. If you had said “0.34°C per year” then it would have been acceptable for the 17 or 18 years of the record.

Reply to  Richard Page
March 1, 2024 4:23 pm

If you’re going to use a silly metric such as “0.34°C per decade” when there has only been one full decade since 2005 then you leave yourself open to ridicule.

Quite the opposite, Richard. You are the ridiculous one.

There are 120 months in a decade. 229 months’ worth of data is more than enough to determine a per decade trend. Whether statistically significant or not is another matter.

I don’t remember you chastising Lord M when he was using a much shorter period to determine that there had been a per decade no-warming trend in UAH (until there wasn’t, and he disappeared).

Reply to  TheFinalNail
March 1, 2024 4:45 pm

You certainly won’t remember me chastising Monckton of Brenchley’s articles, nor heaping praise on them, nor posting much at all on them in fact, although I did read all of them. But then if you had a good memory you wouldn’t be posting these inanities, would you?

Reply to  Richard Page
March 1, 2024 5:50 pm

The per-decade trend that you call “a silly metric” has been used on this site for various data sets ever since it began.

It’s only when the per-decade trend goes in a direction you don’t like that it becomes ‘”silly”; or so it seems.

Reply to  TheFinalNail
March 2, 2024 3:17 am

I have nothing against datasets going back 50, 100 or over 200 years using a metric of temperature change by decade. However, when a dataset has yet to reach a full second decade of operation, it seems ridiculous to project a trend per decade from just one full decade.

Richard Greene
Reply to  Richard Page
March 2, 2024 4:32 am

“I have nothing against datasets going back 50, 100 or over 200 years”

That makes you very gullible
1800s through 1920 average temperature data are worthless wild guesses. Mainly infilling.

All surface data are questionable, especially after they made almost all of the initially reported 1940 to 1975 warming “disappear” in the 1990s.

I suggest you read the related articles on the Tony Heller Real Climate Science website to understand the huge amount of temperature data tampering of the surface numbers.

Reply to  Richard Greene
March 2, 2024 5:22 am

As you are no doubt perfectly well aware (unless you are completely ignorant), the implied assumption was for the purposes of this debate on the merits (or lack thereof) of using a per decade trend.

Richard Greene
Reply to  TheFinalNail
March 2, 2024 4:20 am

The years from 2014 to mid-2023 with a flat UAH trend were worth mentioning.

I prodded Monckton in comments here to also show a full UAH chart, from 1979 through 2023, along with his 8.5 year chart and he began doing that.

That 8.5 year flat UAH global temperature trend coincided with the largest amount of CO2 emissions for any 8.5 year period.

If CO2 really was a climate control knob, that lack of global warming would have been very unlikely.

I have issues with Monckton making false claims of having great climate science knowledge (mathematical formulas), but his flat temperature trend charts and articles here WERE important for the reason I stated.

Consider the Climate Howlers using the 2023 El Nino heat for climate change propaganda last year.

That’s a one year “trend”, which is worse data mining, by far, than Monckton’s 8.5 year trend data mining, especially after Monckton started including the full 1979 to 2023 UAH dataset in his articles.

From the Climate Howlers, faced with about 8.5 years of no warming, the claim is:

That’s no big deal
Look over there folks
— a squirrel

(We’ll “adjust” our surface temperature numbers so our chart doesn’t look like a flat trend.)

Reply to  TheFinalNail
March 2, 2024 5:24 am

True, there are 120 months in a decade, just as there are 10 years in a decade – yet you didn’t use a monthly or annual trend, so that comment is completely worthless and irrelevant.

Reply to  Richard Page
March 1, 2024 4:53 pm

If you’re going to use a silly metric

You mean the same metric that Dr Roy Spencer uses every single month with nobody objecting.

It doesn’t matter how you express the rate of change – it’s a rate, change over time. You can say 3.4°C / century, 0.34°C / decade, or 0.034°C / year. They all mean the same thing, and are all determined by the same data.
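For completeness, the conversion being described is just multiplication by the length of the period; a trivial Python sketch (the 0.034°C/year slope is an example figure only):

# The same slope expressed per year, per decade and per century.
slope_per_year = 0.034                           # example value only
print(f"{slope_per_year:.3f} °C/year")
print(f"{slope_per_year * 10:.2f} °C/decade")    # 0.34 °C/decade
print(f"{slope_per_year * 100:.1f} °C/century")  # 3.4 °C/century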

I’m old enough to remember Christopher Monckton peddling global cooling of 2°C / century, based on just 7 years. I don’t remember anyone here ridiculing that claim, on the basis that 7 years was only a small part of a century.

Reply to  Bellman
March 1, 2024 5:39 pm

I measured the temperature every ten seconds for the last 5 days.

8,640 individual temp readings over each of the 5 days. A total of 43,200 readings. I compared each of the relative time parameters for the 5 days to determine the rate of change.

I averaged the individual 8640 ‘rates of change’.

I found that the rate of change is 0.02 degrees C per day.

I am using this data for long term and short term planning. 50 days from now it will be 1 degree warmer; 500 days from now it will be 10 degrees warmer.

Basing the rate of change on my 43,200 readings over 5 days, coming up with a 0.02 degree C/day, and projecting that rate over the near and long term future is a reasonable thing to do … how can I go wrong?

Reply to  DonM
March 1, 2024 5:51 pm

I measured the temperature every ten seconds for the last 5 days.

And did you do this all over the world, land and ocean?

You didn’t, did you?

paul courtney
Reply to  TheFinalNail
March 2, 2024 4:42 am

Mr. Nail: Nobody did, but you still rely on GAT charts drawn by folks who didn’t measure what the GAT charts purport to show.

Reply to  TheFinalNail
March 2, 2024 11:25 am

I don’t care about the whole world, only my back yard.

How can I be wrong?

Reply to  TheFinalNail
March 2, 2024 11:38 am

“And did you do this all over the world, land and ocean?

You didn’t, did you?”

A ridiculous statement and even more ridiculous assumption.

Global temperature averages are perfect examples of why averages are idiocy for many things.

Thousands of unique instruments sit in unique, very disparate settings – commonly human urban environments – that are not representative of the world.
Mostly just cities and airports and other poor sites.

Reply to  TheFinalNail
March 5, 2024 4:04 am

Huh! How many stations around the globe measured the global average temperature? I suspect 0 (zero).

Reply to  DonM
March 1, 2024 6:04 pm

I measured the temperature every ten seconds for the last 5 days.

I am using this data for long term and short term planning.

How much long term planning can you do based on just 5 days of data?

50 days from now it will be 1 degree warmer

That’s an incredibly dumb conclusion to reach. You shouldn’t project a trend too far outside the range of your data.

500 days from now it will be 10 degrees warmer.

Have you heard of seasons?

Basing the rate of change on my 43,200 readings over 3 days, coming up with a 0.02 degree C/day, and projecting that rate over the near and long term future is a reasonable thing to do

It really isn’t.

paul courtney
Reply to  Bellman
March 2, 2024 4:43 am

Mr. Bellman: Then why do you do it??!!

Reply to  paul courtney
March 2, 2024 6:44 am

I don’t.

Reply to  Bellman
March 2, 2024 11:26 am

Why do you advocate for it?

Richard Greene
Reply to  DonM
March 2, 2024 4:35 am

You must have a lot of spare time

Reply to  Richard Greene
March 2, 2024 11:27 am

Only took a week

Now I can be sure of what to plant and how to heat my home.

Reply to  DonM
March 2, 2024 1:44 pm

Easy – the average rate of change is actually the average of the first differences, not the least squares trend (degC per unit time) of each series over time.

As data at that scale are strongly autocorrelated, statistical parameters including 95% prediction confidence intervals would be invalid.

b.
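A minimal sketch of the distinction being drawn here, using toy data rather than real temperatures: the mean of the first differences collapses to (last - first)/(n - 1), which is generally not the same number as a least-squares slope fitted to the whole series.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(60)                                  # e.g. 60 time steps
series = 0.02 * t + rng.normal(0, 0.5, t.size)     # weak trend plus noise

mean_first_diff = np.mean(np.diff(series))         # = (last - first) / (n - 1)
ols_slope = np.polyfit(t, series, 1)[0]            # least-squares slope

print(f"mean of first differences: {mean_first_diff:.4f} per step")
print(f"least-squares slope:       {ols_slope:.4f} per step")

Neither figure says anything about statistical significance; as the comment notes, strong autocorrelation would invalidate the usual confidence and prediction intervals anyway.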

Reply to  Bellman
March 1, 2024 6:12 pm

I’m old enough to remember Christopher Monckton peddling global cooling of 2°C / century, based on just 7 years. I don’t remember anyone here ridiculing that claim, on the basis that 7 years was only a small part of a century.

You are a true CAGW advocate.

CMoB never made that prediction except in your mind. He did show that there was a long “pause” even while CO2 was climbing alarmingly and which pretty much shot down a direct causal relationship between CO2 and rising temperature.

Reply to  Jim Gorman
March 1, 2024 7:12 pm

What prediction?

I said he described a seven year period as having a global cooling rate of 2°C / century.

Here’s a screenshot from his witness statement to a hearing, from March 2009. I’m sure it can still be found online if you look for it.

[screenshot attached]

Reply to  Jim Gorman
March 1, 2024 7:17 pm

He did show that there was a long “pause” even while CO2 was climbing alarmingly and which pretty much shot down a direct causal relationship between CO2 and rising temperature.

But you still insist the data he used to show that pause is not fit for purpose, so how do you think it proves anything? And you keep ignoring the uncertainty of that trend, and what that says about your claimed lack of a causal relationship.

Reply to  Bellman
March 2, 2024 3:26 am

UAH has a dataset going back for 4 full decades; it has the data to support a per decade trend. As to Monckton of Brenchley, I’ve already mentioned that I’ve neither criticised nor encouraged his posts – or have you a memory problem as well?

You and I both know that using a ‘per decade’ or ‘per century’ trend encourages people to think in terms of the next decade or century – a projection forward. Those datasets that have data going back for at least several of those units are justified in using such a trend; those that only have 1 of those units are not.

Reply to  Richard Page
March 2, 2024 4:00 am

Exactly—it is implicit extrapolation beyond the fit data. Just like the phrase “limit warming to 1.5C” is extrapolation (of incorrect models). Both are invalid statistically.

Reply to  Richard Page
March 2, 2024 6:39 am

You and I both know that using a ‘per decade’ or ‘per century’ trend encourages people to think in terms of the next decade or century

Speak for yourself. I do not “know” that. I try to credit people here with some intelligence, enough to understand what “per” means.

If you are driving in a car, and it says you are doing 120mph, do you say that’s a silly metric, because you were only driving for a few minutes, and that this is implying you will be driving at that speed for at least the next hour? Do you think that argument will wash if you are caught speeding?

Reply to  Bellman
March 2, 2024 6:55 am

What has driving a car at whatever speed got to do with a trend over time? Do you describe a journey made from point a to point b as a trend of +0.5mph? No, you use an average speed based on time elapsed and distance covered. Stop bringing up irrelevancies as a distraction.

Reply to  Richard Page
March 2, 2024 10:12 am

What has driving a car at whatever speed got to do with a trend over time?

I’m sure if you think hard enough you’ll figure it out – unless you just don’t want to understand.

Do you describe a journey made from point a to point b as a trend of +0.5mph?

I’d call it average velocity, but it’s the same principle. In each case you are measuring the rate of change compared to a unit of time.

No, you use an average speed based on time elapsed and distance covered.

And what do you think a rate of change in temperature is?

Reply to  Bellman
March 2, 2024 10:21 am

It’s a rate of change? MPH is an average speed, not a rate of change – using it as an analogy for a rate of change is bizarre to say the least. Acceleration in m/s² would be a rate of change, not MPH.

Reply to  Richard Page
March 2, 2024 3:11 pm

You aren’t talking to a physics-trained person. He sees mph as a rate of change in distance over time. Doesn’t have a clue of what the functional relation actually is.

In other words, if you’re traveling at 100 mph, your distance rate of change is 100 miles in one hour.

Reply to  Jim Gorman
March 2, 2024 3:33 pm

That’s true. I keep assuming people have a similar background in physics to mine; I must stop overestimating their ability.

Reply to  Richard Page
March 2, 2024 5:01 pm

If it’s not a rate of change, what is it? Why are you dividing by time if not to determine how quickly your position is changing?

Don’t like symbols? Well then, here’s another way to define speed. Speed is the rate of change of distance with time.

https://physics.info/velocity/

Rate of change in position, or speed, is equal to distance traveled divided by time.

https://www.khanacademy.org/science/physics/one-dimensional-motion/displacement-velocity-time/v/solving-for-time

Reply to  Bellman
March 2, 2024 6:40 pm

You never took calculus-based physics, did you?

Acceleration is the rate of change in velocity. Read up on Newton’s laws of motion.

Velocity is a rate, but it is not a rate of change, it is a constant. A body in motion …

Reply to  Jim Gorman
March 2, 2024 7:16 pm

You might have taken more calculus classes than me – but you obviously never learnt anything from them.

Velocity is the rate of change of displacement.
Acceleration is the rate of change of velocity.
Jerk is the rate of change of acceleration.
Snap is the rate of change of Jerk.

And so on.

Velocity is the first derivative of position. It’s the rate of change of position with respect to time.

https://en.wikipedia.org/wiki/Fourth%2C_fifth%2C_and_sixth_derivatives_of_position

Velocity is a rate, but it is not a rate of change, it is a constant.

If velocity is a constant then acceleration is impossible. But even if it is constant it’s still describing the rate of change of position.

And if you don’t think velocity is a rate of change, what on earth do you think the PH is doing in MPH, or the /s in m/s?
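Written out as standard formulas, the chain being described (in LaTeX notation) is:

\[
  v(t) = \frac{dx}{dt}, \qquad
  a(t) = \frac{dv}{dt} = \frac{d^{2}x}{dt^{2}}, \qquad
  \text{jerk}(t) = \frac{da}{dt} = \frac{d^{3}x}{dt^{3}}, \qquad
  \text{snap}(t) = \frac{d^{4}x}{dt^{4}}
\]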

Jim Masterson
Reply to  Bellman
March 2, 2024 7:53 pm

“. . . but you obviously never learnt anything from them.”

The definitive knowledge of the Calculus is knowing how to deal with the math. Spewing terms does not demonstrate knowledge. I can solve linear differential equations using Fourier and Laplace transforms. Can you? I can even solve the equations involved with Newton’s orbital mechanics. Can you? All I can say is: Thank goodness for Heaviside.

Reply to  Jim Masterson
March 3, 2024 6:39 pm

More relevant to this discussion is knowing what the terms mean. Differentiating something with respect to time is describing a rate of change. Velocity is the rate of change of position.

The fact that people will argue this, just to avoid admitting that there is nothing wrong in describing a rate of change in °C / decade, is one of the many joys of these comment sections.

Reply to  Bellman
March 5, 2024 4:17 am

The first derivative of position is taken at a POINT in time. It tells you nothing about what the derivative at the next point in time will be. If at Point A, the derivative of your movement is 100 mph, and you hit a wall at Point A + Δx where Δx -> 0 in the limit then your rate of change at Point A + Δx approaches infinity. In other words the rate of change at Point A is not a predictor of the rate of change at Point A + Δx.

When you quote a value of T per decade you are implicitly making the assumption that the next decade will see the same value.

Climate science *never* states “the past change in temperature is not a predictor of future change”. Neither do you. You and they should take a lesson from those advertising financial advice.

Reply to  Tim Gorman
March 5, 2024 5:39 am

In other words the rate of change at Point A is not a predictor of the rate of change at Point A + Δx.

You’re making my point for me. Giving a speed in miles per hour is not a prediction of where you will be in an hour’s time. It does not mean using the hour as a unit of time is wrong. Just as saying the rate of change in temperature in terms of degrees per decade, or per century, is not a prediction of where we will be in a decade or century’s time. It is just a measure of the current rate of change.

When you quote a value of T per decade you are implicitly making the assumption that the next decade will see the same value.

Did you not read your own argument? Was your 100 mph making an assumption that you will drive 100 miles over the next hour?

Reply to  Bellman
March 5, 2024 5:55 am

Climate science *never* states “the past change in temperature is not a predictor of future change”. Neither do you. You and they should take a lesson from those advertising financial advice.

Funny. I don’t remember you making that complaint when Monckton was comparing short term trends with claimed model predictions for the end of the century.

[screenshot attached]

Reply to  Bellman
March 5, 2024 6:26 am

Funny. I don’t remember you making that complain when Monckton was comparing short term trends with claimed model prediction for the end of the century.”

Because he wasn’t predicting the future. He was trying to show that CO2 was *NOT* controlling the temperature since CO2 had been going up over the past but the temperature wasn’t! That isn’t predicting the future, it is analyzing the data to see if it defines a causal relationship. If there is no causal relationship then trying to use an assumed one to predict the future is just using voodoo magic – something climate science is very good at using.

I’m not surprised you don’t understand this. You have so many ingrained, unjustified assumptions in your brain that you don’t even know when they are biasing what you assert!

Reply to  Tim Gorman
March 5, 2024 7:12 am

He’s been told this many, many times but refuses to acknowledge this is what CMoB does. Christopher uses the numbers put out by climate science to demonstrate how their conclusions are invalid.

But bellman is just another alarmist who needs a “crisis”.

Reply to  karlomonte
March 5, 2024 7:18 pm

What conclusions of Dr Spencer’s do you accept as being invalid?

Reply to  Tim Gorman
March 5, 2024 7:27 pm

Because he wasn’t predicting the future.

He’s comparing a trend over a short period, with a prediction of the future – or at least he’s claiming to. If you accept it’s not valid to extend this short term trend to the end of the century, how does it prove the model is wrong?

He was trying to show that CO2 was *NOT* controlling the temperature since CO2 had been going up over the past but the temperature wasn’t!

Why do you keep making up lies about what he was saying? He’s very clear that he’s saying the models are wrong, not that CO2 has no effect on warming. And he’s actually showing that temperatures are going up – rising at a claimed rate of 1.33°C / century. So even if you ignore all uncertainty in that trend, how is it showing that temperatures are not going up when CO2 is rising.

You keep accepting that there is uncertainty in the trend, but then want to claim that a pause over a few years, with huge uncertainties, cannot be explained if CO2 caused warming.

Reply to  Bellman
March 5, 2024 6:22 am

If the past temperature change rate is *NOT* a predictor then why not include the statement “past results are not predictors of future results” each time you quote past results?

If the past temperature change rate is *NOT* a predictor then why do you quote it at all in *any* context?

You just hope no one will notice your unstated assumption that it *is* a predictor of the future.

Jim Masterson
Reply to  Bellman
March 2, 2024 7:09 pm

“Do you think that argument will wash if you are caught speeding?”

All I have to say is that I’m a “Newcomer,” and they will let me go–Presidential orders. Of course, a few Spanish words will help, like “Si” or “Taco.”

Reply to  Richard Page
March 2, 2024 6:42 am

“As to Monckton of Brenchley, I’ve already mentioned that I’ve neither criticised nor encouraged his posts…”

So you ignored his use of this “silly metric” being published month after month here, yet as soon as I use it in a comment, you start jumping up and down and making all these bogus assumptions. Almost as if you are just trying to distract from the actual point – which was the claim that CRN data is being suppressed because it doesn’t give the “right” answers.

Reply to  Bellman
March 2, 2024 7:04 am

Actually the point was that the UK Met Office are using data from sites that aren’t fit for purpose; it’s you and TFN that have been throwing up the USCRN at every opportunity, as some kind of distraction from the Met Office catastrophe. I was merely hoping to get you to stop your silly antics and tomfoolery with the USCRN distraction you have insisted on bringing up at the most inappropriate moments.

Reply to  Richard Page
March 2, 2024 10:19 am

Actually the point was that the UK Met Office…

No. The point I was responding to was the claim

And funny how USCRN is almost never mentioned anywhere. It isn’t measuring the ‘right’ answer, so is (intentionally) supressed?

throwing up the USCRN at every opportunity, as some kind of distraction from the Met Office catastrophe

I was responding to a specific claim about USCRN, made in a comment to an article that concludes with a number of claims about USCRN. It’s up to me which particular arguments I engage with. My comment would have ended straight away if you hadn’t started banging on about using a decade as a unit of time.

Reply to  Bellman
March 5, 2024 4:00 am

As usual, you missed the entire point of the article. With a measurement uncertainty of at least 0.5C per measuring station there is no way to identify a change of 0.34C. That 0.34C per decade is invisible, it is part of the Great Unknown. It’s even worse with a 0.34C per year value. With a +/- 0.5C uncertainty for the measurements you can’t see a .034C value at any individual measuring station, it just becomes part of the Great Unknown. If you can’t identify that kind of change at an individual station then it is impossible to identify from a collection of individual stations. The identifiable change is 0.0C.

As usual, you have reverted to the ubiquitous climate science meme that all measurement uncertainty is random, Gaussian, and cancels. You claim you don’t assume that but it appears in everything you do.

Reply to  Tim Gorman
March 5, 2024 6:10 am

As usual, you missed the entire point of the article.

I’m not getting involved in the claims of this article – I’ve seen enough spurious claims of junk data here to treat them all with a bucket full of salt.

With a measurement uncertainty of at least 0.5C per measuring station there is no way to identify a change of 0.34C.

First people insist that CRN demonstrates that all other data sets are wrong, and claim it’s being suppressed. Then when it’s pointed out there is no significant difference so far between CRN and the older data sets – you start whining about how uncertain CRN data is, and that it’s impossible to know what it’s actually saying.

Why do you think the monthly uncertainty for each CRN station is 0.5°C? Even if it were that big, you still don’t understand that it’s possible to detect trends despite the uncertainty being bigger. You still after all these years refuse to understand how linear regression works, and what the uncertainty of the trend means.

I have said many times that the trend in CRN data is not yet significant. That’s not because the data is inaccurate, it’s because it’s a short period where monthly values have a lot of variation. This would be true even if the measurements had zero uncertainty.

But the point is that as the trend is uncertain, it’s impossible to use it to claim that the trend based on the current data is spurious.

That 0.34C per decade is invisible, it is part of the Great Unknown.

You need to actually look to be able to see it.
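A minimal simulation sketch of the regression point being argued above, with entirely synthetic numbers (the 0.34°C/decade trend, the 0.4°C month-to-month scatter and the extra 0.1°C of measurement noise are assumptions chosen for illustration, not claims about the real USCRN data): the standard error of an ordinary least-squares trend is set by the scatter of the points about the fitted line, so modest per-value measurement noise changes it little compared with the month-to-month variability itself.

import numpy as np

rng = np.random.default_rng(1)
months = np.arange(229)                          # ~19 years of monthly values
natural = rng.normal(0.0, 0.4, months.size)      # month-to-month variability
trend = 0.034 / 12 * months                      # 0.34 °C/decade, illustration only
anoms = trend + natural

def slope_and_se(t, y):
    """OLS slope and its standard error (per unit of t)."""
    n = t.size
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    return slope, se

s1, se1 = slope_and_se(months, anoms)
# add extra independent 0.1 °C measurement noise to every monthly value
s2, se2 = slope_and_se(months, anoms + rng.normal(0, 0.1, months.size))

print(f"{s1*120:.2f} ± {se1*120:.2f} °C/decade (natural scatter only)")
print(f"{s2*120:.2f} ± {se2*120:.2f} °C/decade (with extra 0.1 °C noise)")

Whether any real-world trend is statistically significant is a separate question; the sketch only illustrates where the quoted trend uncertainty comes from.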

Reply to  Bellman
March 5, 2024 6:50 am

Why do you think the monthly uncertainty for each CRN station is 0.5°C?”

I said *at least*. That is the typical measurement uncertainty used for a temperature measurement station if no other, more accurate value, is available. It even applies to the Argos floats. Call it a Type B uncertainty based on educated judgement.

“Even if it were that big, you still don’t understand that it’s possible to detect trends despite the uncertainty being bigger.”

It is *NOT* possible to identify trends from measurements whose uncertainty interval subsumes the individual differences between stated values. What don’t you get about this?

If I tell you that value2 is .3 greater than value1 but the uncertainty in each is +/- .5 just how do you know that value2 is .3 greater than value1? How do you define a trend if the differences between values are all part of the Great Unknown?

You’ve never understood uncertainty in measurements and you’ve never bothered to try and grasp the concept. You just continue believing that measurement error is random, Gaussian, and totally cancels. You can’t seem to help yourself!

“You still after all these years refuse to understand how linear regression works, and what the uncertainty of the trend means.”

Your “uncertainty of the trend” is nothing more than the best fit of the stated values of the measurements to an assumed trend line developed from using only the stated values of the measurements. You totally ignore the uncertainty part of the data points!

You can’t seem to get that concept into your brain for some reason. Black out the area around each data point based on its uncertainty. And then tell me you can generate an accurate trend line using those black areas. I’ve shown you multiple times how this works. For temperatures, the trend line could be positive, negative, or zero and there isn’t any way to tell – unless you have an uncloudy crystal ball or some kind of voodoo magic available.

For some reason the ability to post pictures has disappeared from WUWT or I’d show you again.

One more time in simple words: If you don’t know the difference between two values then how do you define a trend line between them?

I have said many times that the trend in CRN data is not yet significant.”

It’s not just that it isn’t significant! It’s that it is UNKNOWN – it’s part of the Great Unknown! The differences aren’t large enough to exist outside the uncertainty intervals!

“You need to actually look to be able to see it.”

You’d make a good huckster at a carnival trying to tell people’s future from a clouding crystal ball!

Reply to  Tim Gorman
March 5, 2024 8:32 am

I said *at least*.

Well done. So now you are claiming the pristine CRN data, that is constantly being claimed here as demonstrating how all other data sets are junk, actually has a monthly uncertainty that may well be greater than 0.5°C.

It is *NOT* possible to identify trends from measurements whose uncertainty interval subsumes the individual differences between stated values.

And yet here we are doing it.

What don’t you get about this?

I don’t get why you think it’s impossible. And why you think just asserting these claims gives you a right to dismiss every argument to the contrary.

If I tell you that value2 is .3 greater than value1 but the uncertainty in each is +/- .5 just how do you know that value2 is .3 greater than value1?

You don’t. Though you could make a reasonable estimate that it is likely greater.

But that is not what we are doing. We are not taking 1 single measurement and comparing it with another single measurement.

How do you define a trend if the differences between values are all part of the Great Unknown?

The only “Great Unknown” exists in your head. Statistics, including metrology, is based on being able to figure things out about what you do not know for certain. Hence terms like “confidence”, “likelihood”, “probability” and “reasonably attributed to”. They are all saying we don’t know for sure, but we can put figures to what we don’t know.
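Purely as an illustration of how such an estimate could be put in figures, and assuming (a point the thread itself disputes) that the ±0.5 values are standard uncertainties of independent, roughly normal measurement errors:

from math import erf, sqrt

u1 = u2 = 0.5                       # assumed standard uncertainties
observed_diff = 0.3                 # value2 - value1, as stated in the exchange

u_diff = sqrt(u1**2 + u2**2)        # combined uncertainty of the difference
z = observed_diff / u_diff
prob_greater = 0.5 * (1 + erf(z / sqrt(2)))   # P(true difference > 0)

print(f"combined uncertainty of the difference: ±{u_diff:.2f}")
print(f"P(value2 really is greater): about {prob_greater:.0%}")

Under those assumptions the answer comes out at roughly two chances in three; under different assumptions about the error distributions it would differ.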

You just continue believing that measurement error is random, Gaussian, and totally cancels.

Repeat those pointless lies all you want – it just demonstrates you are not listening and it isn’t worth looking at the rest of your nonsense.

Your “uncertainty of the trend” is nothing more than the best fit of the stated values of the measurements to an assumed trend line developed from using only the stated values of the measurements.

Way to demonstrate you don’t understand what you are talking about. How is the uncertainty the “best fit”?

And then tell me you can generate an accurate trend line using those black areas.

You are saying if you destroy the data you can’t work out a linear regression based on the lack of data? In fact it’s quite possible, just assume the best estimate is the mid-point of each blacked out interval.

For temperatures, the trend line could be positive, negative, or zero and there isn’t any way to tell

As you keep trying to explain to Monckton, when he’s talking about pauses or realitimeters. No wait, that’s my job. You just keep attacking me for not realizing what a genius Monckton is.

For some reason the ability to post pictures has disappeared from WUWT or I’d show you again.

Let me do it for you. Note the uncertainty of the trend is not based on these fantasy ±0.5°C annual uncertainties. It’s based, as it should be, on the variation in the annual values. And as I keep saying, the trend is not statistically significant at this point, and it certainly is not significantly different than the previous data.

[chart attached]

Reply to  Bellman
March 5, 2024 8:54 am

So now you are claiming the pristine CRN data, that is constantly being claimed here as demonstrating how all other data sets are junk”

I didn’t say they were junk! STOP PUTTING WORDS IN MY MOUTH.

I said their uncertainty makes them unusable for the purpose they are being used for. Trying to identify differences in the hundredths digit when the uncertainty is at least in the tenths digit, if not greater, is wrong.

“And yet here we are doing it.”

No, here *YOU* are doing it, not me.

“You don’t. Though you could make a reasonable estimate that it is likely greater.” (bolding mine, tpg)

No, you can GUESS that it is, but you can’t *KNOW* that it is. Guessing is not reasonable. That’s why you include an uncertainty interval!

But that is not what we are doing. We are not taking 1 single measurement and comparing it with another single measurement.”

Unfreakingbelievable! How do you lay out a trend line if it isn’t based on the point-to-point values?

“Statistics, including Metrology is based on being able to figure things out about what you do not know for certain.”

Did you actually read this after you typed it? You *can’t* figure out things you don’t know, not even using statistics. Only huckster fortune tellers can do this. Or voodoo magic practitioners!

Why do you think the GUM has deprecated “true value +/- error”?

“Hence terms like “confidence”, “likelihood”, “probability” and “reasonably attributed to”. They are all saying we don’t know for sure, but we can put figures to what we don’t know.”

When those are based on “stated values” while ignoring the uncertainties of the stated values then they are useless! And, as usual, you bring up probability. Probability has nothing to do with uncertainty!

Reasonably attributed to is a RANGE OF VALUES according to the GUM. That’s the whole point of uncertainty – you simply don’t know where in that range of values the true value lies – and you CAN’T KNOW!

“Way to demonstrate you don’t understand what you are talking about. How is the uncertainty the “best fit”.”

Now you are back to equivocating again – just like saying uncertainty of the average without specifying whether you are talking about the measurement uncertainty of the average or the standard deviation of the sample means.

Uncertainty is *NOT* the best fit. The uncertainty of the trend line *IS* the best fit metric. If you are going to specify the measurement uncertainty of the trend line then the right way to state it is “the slope of the trend line is x +/- u”, meaning the slope could be positive, negative, or zero!

Let me do it for you. Note the uncertainty of the trend is not based on these fantasy ±0.5°C annual uncertainties.”

That’s exactly what I told you and it *is* the problem! The uncertainty interval is how far the STATED VALUES are from the trend line – WITH NO CONSIDERATION OF THE ERROR BARS FOR EACH VALUE!

The trend line from 2006 to 2020 *could* actually run from almost +1 to -0.8! A trend line with a very negative slope – if you use the values (i.e. reasonable values assigned to the measurand) within the error bars. *YOUR* trend line only considers the stated values while ignoring the uncertainty intervals. Your “uncertainty of the trend line” is only based on how far the stated values are from the trend line created using the stated values – while totally ignoring the uncertainty ranges!

Reply to  Tim Gorman
March 5, 2024 11:28 am

I didn’t say they were junk! STOP PUTTING WORDS IN MY MOUTH.

Calm down and learn to read. I didn’t say you claimed they were junk, I said that people here are implying that’s what CRN shows. You reject the idea that you said that, and then immediately say they are not fit for the purpose for which they are being used.

No, you can GUESS that it is, but you can’t *KNOW* that it is.”

You don’t “KNOW” it is, that’s the point of making a reasonable estimate. If you want to call that a GUESS then that’s up to you. Do you call all your measurements a GUESS?

How do you lay out a trend line if it isn’t based on the point-to-point values?

You’d know if you ever actually studied any of these things you claim to be an expert on.

Did you actually read this after you typed it? You *can’t* figure out things you don’t know, not even using statistics.

I stand by what I said. You figure out things you don’t know for certain. That’s what all the talk about uncertainty is about. You don’t know for certain what the value you are trying to measure is, so you quantify how much uncertainty there is.

Why do you think the GUM has deprecated “true value +/- error”?

Does it? They might not like the words, but all their equations are based on that model.

When those are based on “stated values” while ignoring the uncertainties of the stated values then they are useless

And there you go, dismissing whole fields of expertise in statistics as useless. Maybe you should publish your research into the subject.

Probability has nothing to do with uncertainty!

Thus demonstrating you have zero understanding of the subject. Try searching through the GUM to see how often probability is mentioned. E.g.

3.3.4 The purpose of the Type A and Type B classification is to indicate the two different ways of evaluating uncertainty components and is for convenience of discussion only; the classification is not meant to indicate that there is any difference in the nature of the components resulting from the two types of evaluation. Both types of evaluation are based on probability distributions (C.2.3), and the uncertainty components resulting from either type are quantified by variances or standard deviations.

My emphasis.

Reasonably attributed to is a RANGE OF VALUES according to the GUM.

A dispersion of the values that could reasonably be attributed to the measurand. You seem to want this to mean there is no distribution of reasonable values.

That’s the whole point of uncertainty – you simply don’t know where in that range of values the true value lies – and you CAN’T KNOW!

You do not know where in the dispersion of values the true value lies – that’s why it’s uncertain. If you could ever consider what the words you use mean, rather than just yelling them in all caps, you might begin to understand this. Not knowing where the true value lies is why you have statistical analysis. You do not know what the true value is, but you can work out what values it’s more likely to be.

Uncertainty is *NOT* the best fit

Nobody, apart from you, said it is. The best fit is the best fit – the fit that has the greatest likelihood of producing the observed result. The uncertainty is a measure of how that probability changes as you consider different trend lines.

The uncertainty of the trend line *IS* the best fit metric.

You really need to work out what you are trying to say, and then find words that express your intent. You keep saying things like the uncertainty is the best fit metric, and it just sounds like gibberish to me.

The uncertainty interval is how far the STATED VALUES are from the trend line – WITH NO CONSIDERATION OF THE ERROR BARS FOR EACH VALUE!

You should realize that your nonsense becomes less easy to understand when you write it in all caps – not easier as you seem to think.

The uncertainty interval is not how far values are from the trend line. I think you are confusing this with the prediction interval.

Reply to  Bellman
March 5, 2024 1:35 pm

Your reply to ME was “So now you are claiming”.

And now you are trying to claim that you weren’t replying to me?

Tell me again who needs to learn how to read?

Reply to  Tim Gorman
March 5, 2024 2:27 pm

Learn to read.

I said

So now you are claiming the pristine CRN data, that is constantly being claimed here as demonstrating how all other data sets are junk, actually has a monthly uncertainty that may well be greater than 0.5°C.

“that is constantly being claimed here as demonstrating how all other data sets are junk,” is a sub clause pertaining to the claimed qualities of the CRN – not something I am claiming you said.

The fact that you keep obsessing about these minor grammatical points is one of the many tells that you have no actual argument.

Reply to  Bellman
March 5, 2024 2:01 pm

You don’t “KNOW” it is, that’s the point of making a reasonable estimate.”

Nope! You state what you measured and then state the uncertainty interval. If all you have to do is make a reasonable estimate then why bother measuring the measurand at all? Just ESTIMATE its value!

“You’d know if you ever actually studied any of these things you claim to be an expert on.”

That’s not an answer to the question. It’s an evasion to avoid having to answer.

“all their equations are based on that model.”

Sorry, that’s just not true. It just shows that you *still* haven’t studied the GUM for meaning.

“And there you go, dismissing whole fields of expertise in statistics as useless. Maybe you should publish your research into the subject.”

There you go again, putting words in my mouth. I said specifically that if you ignore the uncertainty part of the measurements then you are *NOT* using the whole field of expertise in statistics. That is EXACTLY what Taylor, Possolo, and Bevington state. You continue to ignore that they all say that in the real world, systematic uncertainty cannot be statistically identified.

“You do not know where in the dispersion of values the true value lies – that’s why it’s uncertain. If you could ever consider what the words you use mean, rather than just yelling them in all caps, you might begin to understand this. Not knowing where the true value lies is why you have statistical analysis. You do not know what the true value is, but you can work out what values it’s more likely to be.”

Not knowing where the true value lies cannot be rectified by using statistical analysis, not in the real world of metrology. If it could then there wouldn’t be any use in the uncertainty concept. You would just use statistics to find the true value!

If I use the new method of stating a measurement as only an interval, e.g. value1 = 15.1C to 15.9C and value2 = 15.1C to 15.9C how do you determine a trend line? How do you determine the fit of that trend line to the measurements?

Reply to  Tim Gorman
March 5, 2024 3:29 pm

The trendology graphs will look very different if they use the real interval limits!

Reply to  Tim Gorman
March 5, 2024 7:00 pm

That’s not an answer to the question. It’s an evasion to avoid having to answer.

I keep giving you answers and you keep ignoring them. Time to imitate karlo and point out you are in no position to demand answers from me.

You keep claiming to have read Taylor. I’m sure he explains how to calculate a trend line based on multiple points. It’s an elementary technique – even Monckton manages it.

Sorry, that’s just not true.

Then you are going to have to explain what their model is and how they derive all their equations using it. By all means rename the values if the words offend you. But the measurement will be equal to some value (that of the measurand) plus some deviation from that value.

There you go again, putting words in my mouth.

Your exact words were

When those are based on “stated values” while ignoring the uncertainties of the stated values then they are useless

This was in response to me saying

“Hence terms like “confidence”, “likelihood”, “probability” and “reasonably attributed to”. They are all saying we don’t know for sure, but we can put figures to what we don’t know.”

That is what statistics does – it quantifies what we know and don’t know. And it can be done just by looking at “stated values”. Your claim that this is useless illustrates your whole attitude perfectly – if you don’t understand how something works it must be useless.

That is EXACTLY what Taylor, Possolo, and Bevington state.

Look at how Taylor calculates a trend line, or Possolo calculates the uncertainty of a monthly average – all based on just the stated values.

I’ve explained to you multiple times that uncertainty in the values is already in the variance of the measured values and is usually going to be a small component of that variance.

You continue to ignore that they all say that in the real world, systematic uncertainty cannot be statistically identified.

And so you switch to systematic errors, whilst ignoring what the GUM actually says. And, of course, when talking about a trend, systematic errors are not going to be a problem because they are systematic.

What you should be looking at is whether there are systematic errors that change over time – and try to identify and correct for them. But that has nothing to do with the individual measurement uncertainties.

Not knowing where the true value lies cannot be rectified by using statistical analysis, not in the real world of metrology.

You are not trying to “rectify” the unknown parameters. You are trying to identify how uncertain you are about them.

If I use the new method of stating a measurement as only an interval, e.g. value1 = 15.1C to 15.9C and value2 = 15.1C to 15.9C how do you determine a trend line?

Why do you keep wanting to ignore the actual measurement? You have a measurement – the best estimate of the value, and a range indicating the uncertainty. You now have to assume the range is symmetric and choose the mid point. Honestly, you keep bleating on about the GUM, then choose to ignore all their recommendations as to how to report a measurement.

How would you calculate the volume of a water tank, knowing the height = 32.43m to 32.57m, and radius = 8.37m to 8.43m?

You keep talking about this new method of only quoting an interval, but I’m still waiting for you to point to when this new standard was declared.
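For the water-tank question above, a minimal sketch using only the interval endpoints quoted in the comment and V = πr²h:

import math

h_lo, h_hi = 32.43, 32.57     # height bounds (m), as quoted
r_lo, r_hi = 8.37, 8.43       # radius bounds (m), as quoted

v_lo = math.pi * r_lo**2 * h_lo        # smallest volume consistent with the bounds
v_hi = math.pi * r_hi**2 * h_hi        # largest volume consistent with the bounds
v_mid = math.pi * ((r_lo + r_hi) / 2)**2 * ((h_lo + h_hi) / 2)

print(f"volume range: {v_lo:.0f} to {v_hi:.0f} m^3")
print(f"midpoint estimate: {v_mid:.0f} m^3")

The spread between the smallest and largest consistent volumes is only about ±1% of the midpoint figure, whichever way it is reported.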

Reply to  Tim Gorman
March 5, 2024 8:39 am

One more time in simple words: If you don’t know the difference between two values then how do you define a trend line between them?

Again, in simple words, you do not define a trend line based on two points.

It’s that it is UNKNOWN – it’s part of the Great Unknown!

You know that the trend is the “best estimate” of the actual trend. You know that there is a confidence interval based around the trend.

You’d make a good huckster at a carnival trying to tell people’s future from a clouded crystal ball!

That just about sums you up – the last 100+ years of statistics is just crystal ball viewing as far as you understand it.

Reply to  Bellman
March 5, 2024 11:55 am

“Again, in simple words, you do not define a trend line based on two points.”

The problem is that you don’t have two POINTS. You have two intervals.

“You know that the trend is the “best estimate” of the actual trend. You know that there is a confidence interval based around the trend.”

Not if you use only the stated values without also considering the uncertainty interval. That uncertainty interval contains the values that can be reasonably assigned to the measurand. You keep wanting to say the stated value is the TRUE VALUE. It isn’t. The stated value is what your measuring device gives you, within precision and resolution restrictions. It is *NOT* the “true value” no matter how much you want it to be. In the case where you have asymmetric uncertainty the stated value may be far from the best estimate.

The trend line can be defined just as easily by any value within the uncertainty interval as it can by the stated value.

You *still* haven’t grasped the meaning of uncertainty. It is doubtful you ever will. You are stuck in the “true value +/- error” world. The world the rest of the *real* world has left behind. What do you do when confronted by the newest method of giving a measurement by just stating the interval which may be reasonably assigned to the measurand? Just pick the median value of the interval? That’s just signifying your reliance on the meme that all measurement uncertainty is random, Gaussian, and cancels!

“That just about sums you up – the last 100+ years of statistics is just crystal ball viewing as far as you understand it.”

The last 100+ years of statistics says you can’t describe a distribution using only the average. You *must* specify both the average and the variance even of a Gaussian distribution. How often do you and climate science ever give the variance of the data you are analyzing? If the distribution is skewed at all or is multi-modal the average and variance aren’t even all you need to use in describing the distribution! And when you combine SH and NH temps you *are* creating a multi-modal distribution!
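As a rough illustration of the multi-modal point, with invented numbers rather than real hemispheric data: pooling two well-separated unimodal samples produces a mean that sits in the trough between the modes and a standard deviation that mixes the within-mode spread with the gap between them.

```python
import random
import statistics

random.seed(1)

# Invented 'hemisphere' samples with different means (illustration only)
nh = [random.gauss(20.0, 2.0) for _ in range(1000)]
sh = [random.gauss(8.0, 2.0) for _ in range(1000)]

pooled = nh + sh
print(f"pooled mean = {statistics.mean(pooled):.1f}")   # ~14, between the two modes
print(f"pooled SD   = {statistics.stdev(pooled):.1f}")  # ~6.3, far larger than either 2.0
```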

Reply to  Tim Gorman
March 5, 2024 7:12 pm

The problem is that you don’t have two POINTS. You have two intervals.

No. I have multiple points, in most of these cases hundreds of them. The points vary considerably but it’s possible to identify a best fit line through them.

Each point may have an expanded uncertainty interval, but it’s very unlikely you can fit a straight line through all of the intervals. As I keep trying to point out, the deviation from the line caused by natural variation is much bigger than any error caused by the measurement uncertainty.

The uncertainty of the trend line depends on the deviations of the values from that trend line. That is true regardless of the reason for the deviations. The deviations are caused by a variety of factors, including any measurement error.
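A minimal sketch of what that means in practice, with invented anomaly values: the slope of the least-squares fit, and a standard error for it computed from the scatter of the points about the fitted line.

```python
import math

# Invented monthly anomalies in degrees C; x is time in months
y = [0.12, 0.05, 0.21, 0.18, 0.02, 0.25, 0.30, 0.15, 0.28, 0.22,
     0.35, 0.19, 0.40, 0.27, 0.33, 0.45, 0.31, 0.38, 0.50, 0.42]
x = list(range(len(y)))
n = len(y)

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

slope = sxy / sxx
intercept = ybar - slope * xbar

# The residual scatter about the line drives the uncertainty of the slope
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in residuals) / (n - 2)
se_slope = math.sqrt(s2 / sxx)

print(f"trend = {slope:.4f} +/- {se_slope:.4f} C per month (1 sigma)")
```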

As always your hypocrisy in this is blatant. All this talk of uncertainty only worries you when there is a warming trend. You have zero issues when the trend is flat. Monckton presented a zero-trend pause with zero uncertainty in the trend, and zero mention of measurement uncertainty in the UAH. Did you ever complain to him about it? Of course not. You instead spent every month attacking me for even mentioning there was a large uncertainty in the trend. You didn’t even have a problem with him being able to identify the exact month the trend started. No uncertainty there – he was like a tracker identifying the exact spot a deer had been killed.

Reply to  Tim Gorman
March 5, 2024 6:27 am

Be sure to see Bill J the statistician below double-down about how all uncertainty is random, Gaussian, and cancels.

Reply to  karlomonte
March 5, 2024 6:55 am

I left him quotes from Taylor and Bevington stating that systematic uncertainty cannot be analyzed statistically. That means it can’t be assumed to have *any* kind of distribution let alone a Gaussian one. Each and every field measurement station is going to have a different systematic uncertainty and when you combine them those systematic uncertainties will add in some form or fashion.

Hubbard and Lee proved this back in 2002 when their results showed that adjustments must be done on a station-by-station basis using calibrated equipment to determine the adjustment. “Station-by-station” has little meaning in climate science apparently.

Reply to  Tim Gorman
March 5, 2024 7:15 am

Bill thinks quoting recognized texts to him is “appealing to authority.”

He can’t grasp that a single measurement, which like air temperature can never be repeated, has uncertainty.

Richard Greene
Reply to  Richard Page
March 2, 2024 4:08 am

Silly Post Award Nominee

Reply to  Bellman
March 1, 2024 4:22 pm

The trend comes from the location of the 2015/16 El Nino bulge in the data set.

You should be well aware of that fact by now.

Reply to  bnice2000
March 1, 2024 5:10 pm

It doesn’t matter why you think it’s warming, the question is why people here think the data is demonstrating the older data is wrong.

But, in any case, removing the years 2015 and 2016 from the data set, still leaves a warming rate of 0.30°C / decade. Using the “junk” ClimDiv data shows only 0.23°C / decade.

Reply to  Bellman
March 1, 2024 8:40 pm

OMG what a gormless twit you are… !

You have to remove the effect of the El Nino not just the El Nino… way too complicated for you to understand.

Again, that is because ClimDiv started above USCRN and they have gradually adjusted their urban “adjustments”, and the trends have now actually levelled off.

I guess that they think sitting about 0.1C above USCRN on average is about where they want to be

[Chart: ClimDiv minus USCRN]
Richard Greene
Reply to  bnice2000
March 2, 2024 4:57 am

The El Nino Nutter speaks up

Climate change is all El Ninos

Never mind those cooling La Ninas

Never mind that El Ninos seemed to cause global cooling from 1940 to 1975 and global warming from 1975 to 2024

El Ninos can do everything

El Ninos are faster than a speeding bullet, more powerful than a locomotive, able to leap tall buildings in a single bound. Look down in the ocean. It’s El Nino Man, and his sidekick, bStupid2000.

Reply to  Bellman
March 1, 2024 8:44 pm

We could also look at what happened after the 2015 El Nino.

They really have got the ClimDiv adjustments pretty much trend-perfect now.

And guess what.. even with the current El Nino… COOLING !!

Keep using that El Nino warming….

It is all you have, bellboy …. and you know it.!!

[Chart: Combined USA temperatures since 2015]
Reply to  Bellman
March 1, 2024 9:32 pm

And of course, up until the 2015 El Nino..

there was no warming….

[Chart: USCRN to 2015]
Richard Greene
Reply to  Bellman
March 2, 2024 4:50 am

Please do not confuse the AGW deniers here with facts

They will go berserk and act like monkeys in a zoo when you walk past their cage clanging your steel water cup against the bars. Banana peels will come flying at you. Not that I would ever do that again.

Reply to  Richard Greene
March 2, 2024 5:28 am

You’re lucky it was just banana peels – Monkeys have a reputation for flinging poo at people they dislike.

Richard Greene
Reply to  Richard Page
March 2, 2024 4:07 am

20 full years including 2004 and 2023. That’s two decades of use.
The full USCRN network was completed in 2008.

It has been traditional to convert all warming trends, of any length, into warming per decade in degrees C.

The first prototype of a USCRN station was constructed in North Carolina in the year 2000. The USCRN was commissioned January 2004, and the continental United States network of 114 stations was completed in 2008

Reply to  Richard Greene
March 2, 2024 7:13 am

And the start of the USCRN dataset is generally held to be 2005 or, more usually, 2006. Do please at least look like you’re keeping up with the conversation, Richard, or is even that beyond your capabilities?

Reply to  Bellman
March 1, 2024 4:19 pm

The 2015 El Nino bulge and the current El Nino give it a slight trend.

But you knew that. !

No evidence of CO2 warming in the USCRN data.

Monkey trendologists love to use El Ninos for warming trends.

Richard Greene
Reply to  Bellman
March 1, 2024 4:38 pm

“You mean it’s warming at the rate of 0.34°C / decade since 2005?”

That is the truth
Some conservatives will reject the truth when it contradicts their beliefs. Even when the truth is based on empirical data.

Some will respond with nasty comments and no science. No attempt to refute your data. While truth is not a leftist value, that is true of some conservatives too.
You will get downvotes even if your data are true.

Reply to  Richard Greene
March 1, 2024 8:46 pm

Some people will be more intelligent and look at the data to see where the warming trend in the linear calculation comes from

You won’t be one of them.!

Richard Greene
Reply to  bnice2000
March 2, 2024 5:00 am

“Some people will be more intelligent … “

Definitely NOT you.

Reply to  Bellman
March 2, 2024 1:17 am

I’m going to give a couple of examples of similar calculations.
I don’t know about the USA but we have a census in the first year of each decade, the last one being in 2021. Over the next 10 years population trends are determined on an annual basis and are used to predict various demands on infrastructure. The next census will re-establish the baseline for the next decade.
The other example is the calculation of excess deaths each month or year. Again in the U.K. until recently, this was based on the rolling average of the previous five years. This has recently changed to be a more robust model taking into account various factors.
I presume a similar method to the excess deaths calculation can be used to determine temperature trends.

Reply to  JohnC
March 3, 2024 2:00 am

I don’t know why the down vote. Climate is defined as the average of different weather parameters over a thirty year period, but that has to be a rolling 30 years surely? The alternative is to stick a marker in a particular year say 1901, then average min and max temperature for each month in the years 1901 to 1930, 1931 to 1960, 1961 to 1990, 1991 to 2020, 2021 to 2050.

A further exercise would be to find the average min and max temperature for each month in 1901-1930, then using this as a baseline, determine the excess temperature either + or – for each month in 1931. Then determine the average temperatures for each month for the years 1902-1931 and use this as the baseline for 1932. This gives a rolling 30 year average to determine the excess temperatures for a given year, i.e.the same technique as used to determine excess deaths in the U.K. for a given year.
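For anyone who wants to try that bookkeeping, here is a minimal sketch of the rolling 30-year baseline described above; the use of pandas and the column names (year, month, tmean) are assumptions for illustration, not anything specified in the comment.

```python
import pandas as pd

def rolling_excess(df: pd.DataFrame) -> pd.DataFrame:
    """For each calendar month, compare a year's mean temperature with the
    mean of the preceding 30 years for that same month (the 'excess')."""
    df = df.sort_values(["month", "year"]).copy()
    baseline = (
        df.groupby("month")["tmean"]
          .transform(lambda s: s.shift(1).rolling(30, min_periods=30).mean())
    )
    df["excess"] = df["tmean"] - baseline
    return df

# Assumed usage: df = pd.read_csv("monthly_means.csv")  # columns: year, month, tmean
# With a series starting in 1901, the first year with a full 30-year baseline is 1931.
```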

Reply to  JohnC
March 3, 2024 4:59 am

Alarmists like to pick the right 30 year period to get the most warming they can find. A rolling 30 year period wouldn’t be good for their ’cause’.

Reply to  Richard Page
March 3, 2024 6:44 pm

Nothing row in principle to a 30 year rolling average. But it will only tell you the average over the most recent 30 year period, so always tends to be out of date. And of course it’s useless to describe USCRN data which is only 17 years old.

Reply to  Bellman
March 3, 2024 6:45 pm

Nothing wrong in principle…

Reply to  JohnC
March 5, 2024 4:22 am

If the measurement uncertainty in your measurements is larger than the “excess temperature” you calculate, then how do you know what your average actually is?

Reply to  JohnC
March 3, 2024 6:41 pm

And if global temperatures were based on one sample taken every 10 years you might have a reasonable point.

Reply to  Rud Istvan
March 1, 2024 3:04 pm

This is the UK… it is really bad for contaminated sites.

We know the USA and Australia are also highly contaminated.

One can only imagine just how badly contaminated surface data from other countries might be. !

How anyone can pretend that surface fabrications could possibly give a realistic representation of “global” temperatures ..

… is beyond any scientifically rational logic. !

Richard Greene
Reply to  bnice2000
March 1, 2024 4:47 pm

The national average temperatures are whatever the national government bureaucrats want to tell you.

They have predicted rapid warming for many decades and they want their predictions to look good.

The weather station network could have perfect siting yet the national average statistic could still be malarkey. Most likely exaggerating actual warming.

Here’s the important lesson about the siting:

The government agencies don’t care if you know there is poor weather station siting. The mass media does not care. The bureaucrats don’t even try to create an illusion of good science. They know they “own” the national average temperature and you have no way to double check their statistic.

Reply to  bnice2000
March 1, 2024 4:59 pm

“We know the USA and Australia are also highly contaminated.”

Wacky weather stations in Australia researched by Ken Stewart at https://kenskingdom.wordpress.com/2020/01/16/australias-wacky-weather-stations-final-summary/

Apart from poor siting there’s also the influence of more rapid responses of automatic weather stations compared to old-fashioned thermometers, in an Australian context researched at http://www.waclimate.net/aws-corruption.html

Richard Greene
Reply to  Rud Istvan
March 2, 2024 3:35 am

USCRN and ClimDiv are nearly identical until recent years, when USCRN has reflected faster warming.

ClimDiv has haphazard weather station siting while USCRN is claimed to have perfect siting and is claimed to need no adjustments.

For most of the years since 2005, they were suspiciously similar for the US average temperature anomaly.


That suggests weather station siting does not matter, or NOAA is fixing the data.

I think NOAA is “adjusting” ClimDiv, or adjusting USCRN, or adjusting both. Just my opinion.

I refuse to believe weather station siting makes no difference.

I also don’t trust NOAA after they “cooled” the mid-1930s hottest average US temperature year to make 1998 become the hottest US year (at the time)

Reply to  Richard Greene
March 2, 2024 7:14 am

“ClimDiv has haphazard weather station siting while USCRN is claimed to have perfect siting and is claimed to need no adjustments.”

Here is what NOAA itself admits regarding USCRN temperature measurement data under “IMPORTANT NOTES”: (ref: https://www.ncei.noaa.gov/pub/data/uscrn/products/monthly01/readme.txt ):
“I. On 2013-01-07 at 1500 UTC, USCRN began reporting corrected surface temperature measurements for some stations. These changes impact previous users of the data because the corrected values differ from uncorrected values. To distinguish between uncorrected (raw) and corrected surface temperature measurements, a surface temperature type field was added to the monthly01 product. The possible values of the this field are “R” to denote raw surface temperature measurements, “C” to denote corrected surface temperature measurements, and “U” for unknown/missing.” 
(my underlining emphasis added)

Reply to  ToldYouSo
March 2, 2024 10:51 am

When I found this, I crapped. The most sophisticated, accurate, and well-sited stations need correction? You are kidding! You and I both know there is an algorithm running somewhere that homogenizes CRN and ClimDiv so changes are apportioned to mine appearances of change.

Reply to  Richard Greene
March 2, 2024 10:26 am

Now take a look at the USCRN, nClimDiv and USHCN plots. The reason they are suspiciously similar for several years after 2005 is that NOAA were using USHCN to calibrate both nClimDiv and USCRN. Garbage in, garbage out.

Bob
March 1, 2024 2:25 pm

Very nice. I am fed up with weather/climate sites using inferior methods and equipment. If a doctor or mechanic or carpenter tried this kind of crap they would be in court in a heartbeat. These CAGW crackpots need severe punishment.

Reply to  Bob
March 1, 2024 4:38 pm

These CAGW crackpots need severe punishment.

There was a recent court case in which the climate scientist was vindicated and won substantial damages.

It was a stuck-post here for weeks; until the verdict came in and then suddenly it wasn’t.

Rud Istvan
Reply to  TheFinalNail
March 1, 2024 4:48 pm

You are deluded for thinking Mann’s lawsuit will stand on appeal. And there are multiple valid appeal grounds.

Reply to  Rud Istvan
March 1, 2024 4:54 pm

This site was saying for months that he had no chance, yet he won a jury vote unanimously. So forgive me if I am sceptical of your optimism.

Reply to  TheFinalNail
March 1, 2024 8:48 pm

You really are totally clueless what it was really all about, aren’t you fungal !

Reply to  TheFinalNail
March 2, 2024 12:45 pm

“yet he won a jury vote unanimously. So forgive me if I am sceptical of your optimism”

No.
Mann won because of a mismanaged trial that should have been declared a mistrial at multiple times due to hideous mistakes during the trial.

The Judge’s instructions to the jury would put an insomniac into a deep sleep.

Many egregious improper court technicalities will lead to a successful appeal.

Any appeal outside of DC control is likely to dismiss the case with prejudice against Mann.
i.e.,

  • Plaintiff failed to prove damages, unless mean looks in grocery stores count as damages,
  • Plaintiff committed perjury regarding financial damages,
  • in fact Plaintiff committed perjury multiple times in his filings since he first sued Simberg and Steyn,
  • Plaintiff failed to prove career or social damages,
  • Basically, Plaintiff failed to prove any damage or harm,
  • Nye chatting and joking with the jurors, filmed,
  • Egregiously incorrect statements by Mann’s lawyers, in front of the jury,
  • Plaintiff failed to prove libel,
  • Plaintiff failed to prove malice,
  • Defendants disproved the Hockeystick on multiple levels,
  • Defendants proved Penn State President meddled in the faux investigation,
  • Plaintiff’s alleged exoneration was never a true investigation or exoneration, the investigators literally whined when asked about interviewing McIntyre or other parties or testing the code/math/statistics/data Mann used,
  • The investigators planned to Censure Mann, until Spanier their boss interfered,
  • Plaintiff’s lawyers mislabeled rationale for the lawsuit as fighting climate change deniers to the jury,
  • etc. etc.

Apparently Mann’s mystical lawsuit funder doesn’t cover losses, or Dr. Ball would have been paid for his costs.

Richard Greene
Reply to  TheFinalNail
March 1, 2024 5:10 pm

Mann’s Fraudulent Hockey Stink Tree Ring Circus Chart junk science was NOT vindicated.

The jury believed Mann was defamed by libel and then made an unconstitutional extreme punishment award (violates 8th amendment, below)

“Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.”

Richard Greene
Reply to  Bob
March 1, 2024 4:53 pm

CAGW is not based on any historical temperature trends.

CAGW is a prediction of a new climate trend, similar to the actual USCRN trend since 2005: rapid global warming of over 0.3 degrees C per decade. Here in Michigan I have found our warming to be very pleasant, since it is entirely in the colder months (versus the 1970s). I have no idea why anyone believes warmer winters in colder areas are bad news.

CAGW predictions started in the late 1950s and were later defined with an ECS of CO2 consensus in 1979.

We are still waiting for CAGW to show up, 44 years after the ECS of CO2 consensus predictions began in 1979. That’s a lot of waiting.

Reply to  Richard Greene
March 1, 2024 5:25 pm

All global climate data sets, surface or satellite, show statistically significant global warming since 1979.

It looks like it’s accelerating. UAH, beloved of this site, is the fastest-warming global temperature data set over the past 10 years.

Reply to  TheFinalNail
March 1, 2024 8:53 pm

UAH, being atmospheric, responds more to El Nino effects.

That is the only warming in that period.

Your whole calculation hinges on the sudden spike from the current El Nino.

Before the El Nino, UAH was cooling since the 2015/16 El Nino.

Your display of abject cluelessness continues.

And when you cite Tamino…. You prove how utterly clueless you really are.

Junk science at its worst.

Richard Greene
Reply to  bnice2000
March 2, 2024 5:22 am

If we want junk science we will read your El Nino Nutter posts where you focus on El Ninos and forget La Ninas

In the long run, the ENSO cycle is temperature neutral.

You need to be sedated.

Reply to  TheFinalNail
March 1, 2024 8:57 pm

There is no surface data fabrication which is not hopelessly contaminated and compromised by urbanisation, airports, bad siting, dodgy fast-acting thermometers and massive data mal-manipulation.

This whole thread shows just how contaminated and meaningless they are for any sort of “climate” history.

But the AGW scammers and their moronic hangers-on, wouldn’t have it any other way.

Reply to  TheFinalNail
March 2, 2024 4:04 am

“All global climate data sets, surface or satellite, show statistically significant global warming since 1979.”

The key point here being “since 1979”.

This is consistent with a cyclical upswing in temperatures. The same thing has happened twice in the past since the Little Ice Age ended, and each upswing in temperatures was of the same magnitude as the current upswing in temperatures.

The previous two upswings in temperature were not connected to rising levels of CO2. Something other than CO2 caused these warmings. Logic would say that the current warming is probably related to the previous warmings. Much more likely than the current warming being related to CO2 increases.

[Chart: PhilJones-The-Trend-Repeats]
Richard Greene
Reply to  TheFinalNail
March 2, 2024 5:20 am

Let’s not jump to the conclusion that warming is accelerating, or unpleasant in any way.

Greenhouse warming should be mainly at night, mainly in the six coldest months of the year, and mainly in TMIN (at dawn or 1/2 hour later).

That has been the timing and pattern of most warming since 1975.

100% good news.

In addition, greenhouse warming should not affect almost all of Antarctica, which has a permanent temperature inversion. That has also happened.

I have presented possible evidence of greenhouse warming and THERE WAS NO BAD NEWS

We have had 48 years of global warming since 1975. Almost certainly some caused by humans, maybe even most.

WHERE WAS THE BAD NEWS?

Warming was great news for us in Michigan, other northern states and nations, UK, Canada, Russia, etc.

Please explain why 48 years of pleasant global warming will be followed by 48 years of dangerous global warming.

What would cause that?

We already know the next +100ppm rise of atmospheric CO2 will cause less warming than the last +100ppm rise, due to a logarithmic effect of CO2

And we know the next +100ppm CO2 rise will take 40 years at the current rise rate of 2.5ppm a year.

For what reason should we fear adding more CO2 to the troposphere, and improving plant growth, while lowering their water requirements?

That sounds like good news to me.

Reply to  Bob
March 2, 2024 2:24 am

‘severe’ ? Don’t you mean sever?

Reply to  JeffC
March 2, 2024 7:16 am

Or ‘server’ – remove internet access for 3-5 years!

mleskovarsocalrrcom
March 1, 2024 2:38 pm

Kudos to the Met Office for recognizing and classifying station temperature variations. Anthony should be proud 🙂 Now get them to do something about it and tell the people.

Reply to  mleskovarsocalrrcom
March 2, 2024 7:17 am

Now get NOAA to do something about the US stations.

Reply to  mleskovarsocalrrcom
March 2, 2024 7:24 am

“Anthony should be proud.”

Exactly! Anthony Watts provided an excellent early warning of the errors caused by Urban Heat Island (UHI) effects on surface temperature monitoring stations being used across the US to calculate “global warming” (see https://wattsupwiththat.com/2022/05/24/participate-in-the-surfacestations-project-version-2/ , among many of his WUWT articles on this subject) . . . and he didn’t even request Federal funding under the guise of “climate change™” to undertake this noble effort.

Reply to  ToldYouSo
March 3, 2024 4:40 am

Yes! Thanks to Anthony and others we now have the CRN network. The only problem with CRN is that the powers that be have seen the need to “adjust” some of the temperatures rather than repairing the stations that are being corrected. It’s a whole lot easier to make them show what you want through “corrections” than finding out why the corrections are needed.

MrGrimNasty
March 1, 2024 2:43 pm

This isn’t new information, I’ve long been saying UK temperature records are only fit for pub talk/trivia, not for monitoring climate change.

Nonetheless, it is interesting that the mean CET for February (the 2nd warmest in 366 years, missing the top slot by 0.1C), supposedly the most scientifically rigorous temperature data, still came out warmer than the ‘junk’ data for the whole of England.

https://www.metoffice.gov.uk/hadobs/hadcet/cet_info_mean.html

https://www.bbc.co.uk/news/science-environment-68435197.amp

Reply to  MrGrimNasty
March 1, 2024 2:58 pm

missed top slot by 0.1C

Considering the record was set in 1779, I think it could be said they are statistically tied.

supposedly the most scientifically rigorous temperature data, still came out warmer than the ‘junk’ data for the whole of England.

Nobody says the CET is the most rigorous data – its virtue is its length. But I’m not sure about your comparison. CET for February was 7.8°C. The Met Office figure for Southern England was 8.0°C.

Of course, if you include Northern England it will be cooler.

March 1, 2024 2:44 pm

class 5 does NOT mean 5C error.

I exchanged data with LeRoy’s research team. LeRoy defined the CRN classification scheme AND did the ONLY field test.

The ONLY field test of CRN rating showed an AVERAGE error of .1C for CRN 5
in fact the Average Error for All Classes was .1C.

HOWEVER

but a PEAK instantaneous Error of UP TO 5C in ONE TEST CASE,

happily, since we care about the MONTHLY MEAN ERROR of PREDICTION and not the peak instantaneous measurement error,

CRN class doesn’t matter to the average MONTHLY value.

Quick now: if a single DAY in the month is off by 5C, what is the monthly mean of min/max off by?

Daily Average = Max+5C/ Min.

monthly average = daily average /30.

.017C.

Now recall that the land record is 30% of the total. So

we are talking .005C

Ron Long
Reply to  Steven Mosher
March 1, 2024 2:54 pm

Actually, if a single DAY in the month is off by 5 C……the media shouts a burning hell on earth declaration, and all other daily readings disappear into irrelevant noise.

Reply to  Steven Mosher
March 1, 2024 3:24 pm

You are talking about the USCRN data and we are talking about the UK Met Office data. Can you not see that everything you’ve said is irrelevant to this conversation? The WMO site classification system, that the UK Met Office uses, defines a class 5 site as having up to a 5°C error in measurement and a class 4 site as having up to a 2°C error in measurement. Now, bearing that in mind, point me at the documentation that classifies the USCRN sites using the WMO system, please?

Rick C
Reply to  Steven Mosher
March 1, 2024 3:47 pm

Steve: The field test showing average error of 0.1 C sounds like a calibration check on the thermometer, not an uncertainty arising from site conditions. Frankly I’m not sure how one would quantify the added uncertainty of poor siting versus a proper site. How would you establish a known reference temperature to compare? I do know that I’ve often noticed a significant temperature difference when walking past buildings and paved areas to walking past an adjacent large park. 2 – 5 C added uncertainty due to urban heat island effects seems within the realm of possibility.

Reply to  Rick C
March 1, 2024 3:56 pm

Rick,

This doesn’t explain why ‘pristine’ sites in the US are warming faster than ‘contaminated’ sites.

Everyone knows UHI exists. Contaminated sites are adjusted to reflect this. If anything, it seems that the adjustments may be overcompensating for the UHI effect, because the pristine sites have a faster warming trend than the adjusted sites.

Reply to  TheFinalNail
March 1, 2024 4:20 pm

You cannot ‘adjust’ for a random variable like UHI. The only way to do it is to measure a site that is not contaminated by UHI in the first place.

Reply to  Richard Page
March 1, 2024 4:27 pm

You cannot ‘adjust’ for a random variable like UHI. The only way to do it is to measure a site that is not contaminated by UHI in the first place.

Right. And they’ve done that. And it turns out the uncontaminated sites are warming faster.

So…?

Reply to  TheFinalNail
March 1, 2024 5:48 pm

So, the obvious conclusion, if your statement is true, is that the variations caused by UHI are significantly larger than the background.

Reply to  AndyHce
March 1, 2024 6:36 pm

My statement is true. Just look at the data. The USCRN sites are warming at a faster rate than the ClimDiv sites over their joint period of measurement.

Irrespective of the UHI effects, or their amplitude, or the adjustments that are made to compensate for UHI effect; we are left with the fact that pristine sites in the US are warming faster than UHI-contaminated sites.

UHI is not causing the warming, Andy.

Reply to  TheFinalNail
March 1, 2024 9:21 pm

“UHI is not causing the warming,”

ClimDiv gives you absolutely ZERO CLUE about urban warming, because it has all been adjusted out so ClimDiv matches USCRN.

The trend in USCRN comes ONLY from the main bulge around the 2015-16 EL Nino. Monkey-like trend line will not help you see that fact.

There is absolutely ZERO evidence of any CO2 warming anywhere in any of UAH global data, UAH-USA48 data or USCRN/ClimDiv matched datasets..

Reply to  TheFinalNail
March 1, 2024 9:23 pm

“The USCRN sites are warming at a faster rate than the ClimDiv sites over their joint period of measurement.”

Again the total lack of understanding.

ClimDiv starts warmer than USCRN and they have gradually sorted out their urban adjustment algorithm.

The difference between ClimDiv and USCRN has now levelled off.

[Chart: ClimDiv minus USCRN]
Reply to  TheFinalNail
March 1, 2024 9:38 pm

“Just look at the data.”

The ClimDiv fabrication already has an algorithm to REMOVE Urban heating effects.

Yes, those adjustments have been gradually honed in.

Thank you for FINALLY realising what is going on.

“we are left with the fact that pristine sites in the US are warming faster than UHI-contaminated sites.”

NO. THEY ARE NOT !

They just didn’t have their “urban adjustments” right.

Fixed mostly, and the trend difference between ClimDiv and USCRN since about 2017 is basically zero.

not going to bother posting the graph that shows this FACT, again. !

Reply to  TheFinalNail
March 1, 2024 6:36 pm

Even with the CRN stations, averaging them is a losing endeavor. The only rational basis for averaging is that there is a linear change between the points being averaged.

In other words, if there is a station at 10 and another at 20, there must be a point midway between them that is at 15. That rationale says there is no such thing as weather fronts that cause quick changes in temperature.

This even occurs with anomalies. Averaging automatically makes the assumption that there is a mean between the values. Yet there are areas between that give lie to that assumption.

Don’t believe me? Look at Manhattan, KS. Little growth in temperature since it was brought on line. Yet it obviously lies between stations that are warming.

Tell how you justify “averaging” when you know that there are places that do not fit into a linear change.

Reply to  TheFinalNail
March 1, 2024 9:12 pm

WRONG again….

They are just adjusting the contaminated sites to match.. and gradually improving the adjustment algorithm

Adjusted difference has now pretty much levelled off at about +.1C

[Chart: ClimDiv minus USCRN]
Reply to  bnice2000
March 1, 2024 9:16 pm

Since around the end of the 2015/16 El Nino the trend of all 3 temperature sets over the USA is essentially the same

USCRN,

ClimDiv (adjusted to match USCRN even better… trends basically parallel)

and UAH USA48… all show cooling even with the latest El Nino event.

[Chart: Combined USA temperatures since 2015]
Reply to  TheFinalNail
March 2, 2024 3:36 am

TFN – if you would kindly point me to the documentation classifying the USCRN sites according to the WMO classification (not the ‘gold standard’ btw, it’s the minimum level) then I would be happy to discuss the findings with you at some length. I noticed when I asked Steven if he could do the same he disappeared, do you intend to do the same?

Reply to  Richard Page
March 2, 2024 10:29 am

Apparently so.

Reply to  TheFinalNail
March 1, 2024 4:32 pm

OMG fungal.. wake up !!.

It has been explained to you many times that ClimDiv started slightly higher, and that they have been honing their adjustment routines gradually to make it a better match to USCRN

ClimDiv is a TOTALLY CONTRIVED data set, unrelated to any measurements made at any stations except USCRN..

Reply to  bnice2000
March 1, 2024 4:48 pm

You’re saying that ClimDiv is adjusting the data to make it ‘cooler’ than the pristine sites?

You are saying that they are adjusting the data to make it seem that the warming rate isn’t as high as the pristine sites suggest it actually is; is that right?

The ‘totally contrived’ data (without the shoutiness, note), show a warming rate that is slower than the pristine data.

The evil scientists are making global warming less fast than the pristine data suggest it actually is?

Honestly, have you thought this through?

(You haven’t, I know.)

Reply to  TheFinalNail
March 1, 2024 9:02 pm

OMG… you are so determined to be IGNORANT…

You haven’t thought…. AT ALL..

You are not capable of it.

ClimDiv started above USCRN, and they have gradually honed-in their urban adjustments until the trends are now about the same

They have levelled off at about 0.1C above USCRN

Yes, it is totally contrived data…..

…. sorry if your feeble little brain cannot see or accept that fact.

[Chart: ClimDiv minus USCRN]
Forrest Gardener
Reply to  Steven Mosher
March 1, 2024 4:23 pm

Wow. Is this the most convincing rhetoric you can Mosh up?

Now tell us the probability that a class 5 site will be off by 5C on only “a single DAY in the Month”.

You must be really smart because only really smart people can fool themselves so easily.

Reply to  Steven Mosher
March 1, 2024 4:28 pm

“land record is 30% of the total.”

And the ocean data is mostly just FABRICATED.

But you knew that didn’t you, Moosh !

Show us where the “reliable” temperature measurements of the 70% of the planet came from before ARGO.

It DOESN’T EXIST.. and you know it.

The rest of your post is just arrant anti-science garbage as well.

Denial that bad sites produce bad data, then pretending you can average that bad data away..

… is probably the most gormless nonsense you could come up with.

Reply to  Steven Mosher
March 1, 2024 5:51 pm

I got a new job.

They tell me I will be making $96K/year … they tell me I can expect the accounting department to get me a check ($8,000/month +/- $1,000) on the 5th of each month.

I told them that was really weird, and not acceptable.

They said O.K., we will pay you every day … $267/day +/- $30/day.

I said that sounds much better.

Reply to  Steven Mosher
March 1, 2024 6:34 pm

Uncertainty is not error, mosh.

Reply to  Steven Mosher
March 1, 2024 7:14 pm

The ONLY field test of CRN rating showed an AVERAGE error of .1C for CRN 5 in fact the Average Error for All Classes was .1C

Are you going to tell NOAA their documentation makes an egregious mistake? They specify 0.3°C for CRN temperature readings.

Same old, same old. Why do you make the mistake of assuming that all “error” is random, Gaussian, and cancels?

Daily Average = Max+5C/ Min.

You can’t even do this correctly. If all the temperatures have a systematic uncertainty of ±5, i.e. u𝒸(y) = 5, then you should combine the uncertainties of Tmax AND Tmin in quadrature, and because of the division, relative uncertainties must be used.

Because Tmax and Tmin are separate measurements of the measurand named Daily_Average, it is appropriate to use the standard deviation of the uncertainty along with the mean.

Your lack of knowledge about how to treat systematic error is telling. Statistics IS NOT Metrology. You should study some metrology.
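For readers trying to follow the propagation argument, here is one common reading of the quadrature rule for the daily mean (Tmax + Tmin)/2, treating the divisor as exact and the two uncertainties as independent; this is a sketch of that rule only, not a claim about what uncertainty value applies to any particular station.

```python
import math

def daily_mean_uncertainty(u_tmax: float, u_tmin: float) -> float:
    """Standard uncertainty of (Tmax + Tmin) / 2 when the two uncertainties
    are independent: combine in quadrature, then divide by the exact 2."""
    return math.sqrt(u_tmax ** 2 + u_tmin ** 2) / 2

# With the +/-5 figure being argued over above, assuming independence:
print(daily_mean_uncertainty(5.0, 5.0))   # ~3.5
# If instead the 5 were a shared systematic offset (fully correlated),
# it would not reduce at all and the daily mean would simply be off by 5.
```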


Reply to  Jim Gorman
March 2, 2024 2:31 pm

Here we go again!

By definition average-T is (Tmax + Tmin)/2

As error is expressed as +/- uncertainty per observation, it cancels in this case.

A single value contains no information about that value. There is no way to calculate SD or SEM without repeat measurements (i.e., repeating the measurement procedure) under exactly the same conditions.

For such observations SEM = SD/sqrt(N). SEM*2 roughly estimates the 95% CI of the mean (for large samples the multiplier is 1.96). I always use a stats package to calculate SEM or SD from samples – not Excel, where the formulae for samples vs populations are different (STDEV samples vs STDEVP where the samples = the whole population).

[From https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3487226/: “SEM quantifies uncertainty in estimate of the mean whereas SD indicates dispersion of the data from mean”.]
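A small worked illustration of the SEM formula quoted above, with invented repeat readings (the 1.96 multiplier is the large-sample value mentioned in the comment):

```python
import math
import statistics

# Invented repeat readings taken under the same conditions
readings = [15.2, 15.4, 15.1, 15.3, 15.5, 15.2, 15.4, 15.3]

n = len(readings)
sd = statistics.stdev(readings)      # sample SD (divides by n - 1)
sem = sd / math.sqrt(n)              # standard error of the mean

print(f"mean = {statistics.mean(readings):.2f}")
print(f"SD   = {sd:.3f}")
print(f"SEM  = {sem:.3f}")
print(f"approx 95% CI half-width = {1.96 * sem:.3f}")
```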

The only known uncertainty is that of the instrument, where it equals 1/2 the interval range. There is also no accumulated uncertainty between independent observations.

Systematic error is also not the same as instrument uncertainty.

Kind regards,

Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
March 2, 2024 2:41 pm

The only known uncertainty is that of the instrument, where it equals 1/2 the interval range. There is also no accumulated uncertainty between independent observations.

Idiot ranting, divorced from reality.

Reply to  karlomonte
March 2, 2024 3:26 pm

Good morning karlo,

Speaking of reality, what other uncertainties can you ascribe to a single number?

All the best,

Bill

Reply to  Bill Johnston
March 2, 2024 6:09 pm

You don’t even know what the term means.

Reply to  karlomonte
March 2, 2024 6:30 pm

Dear Karlo,

But speaking of reality, what other uncertainties can you ascribe to a single number?

An ad hominem response, of course!

All the best,

Bill

Reply to  Bill Johnston
March 2, 2024 8:48 pm

Attempting rational discourse with you is a fool’s errand.

Pass.

Reply to  karlomonte
March 3, 2024 12:36 am

Instead of being an overbearing buffoon, why not engage in sensible discourse?

What do you think the term means?

b.

Reply to  Bill Johnston
March 3, 2024 6:50 am

overbearing buffoon

Another irony meter explodes.

walterrh03
Reply to  Bill Johnston
March 2, 2024 3:00 pm

Bill confuses uncertainty with error.

Reply to  walterrh03
March 2, 2024 3:41 pm

Perhaps walterrh03 has never undertaken weather observations as I did with colleagues time-on time-off for a decade.

As a matter of definition, error is how you observe the instrument (which one is trained to minimise), whereas uncertainty is a property of the calibrated/certified instrument (not just thermometers, but every instrument in the met-enclosure).

Even visual observations such as wind-strength, octants of cloud-cover by type, and visibility are calibrated (either using hand-held meters, features in the distant landscape, or by cross-referencing with colleagues and cloud-charts).

So no, I don’t confuse uncertainty with error, I leave that to others.

All the best,

Bill

Reply to  Bill Johnston
March 2, 2024 6:11 pm

As a matter of definition, error is how you observe the instrument (which one is trained to minimise),

Um, no.

whereas uncertainty is a property of the calibrated/certified instrument (not just thermometers, but every instrument in the met-enclosure

Um, no.

Keep flailing away, though.

walterrh03
Reply to  Bill Johnston
March 2, 2024 6:54 pm

Bill,

That’s not correct. Error is the difference between the observed and true value. Uncertainty is the entire range of possible values around the measurement result. I’ll politely refer you to this; study more and write less.

Reply to  walterrh03
March 2, 2024 7:50 pm

Dear walterrh03,

Scientists and technicians are trained to minimise error, that is to avoid possible sources of observational bias, some of which can be random (not persistent), and some of which may be systematic (true bias).

Having had long and exasperating (circular) discussions with Jim about the GUM (and NIST), which I don’t intend to revisit, I am not incorrect.

What sort of circular argument is it to say “Uncertainty is the entire range of possible values around the measurement result“, if in fact you don’t know what the ‘true’ value is, or how to measure it?

Error can be partitioned, but only using repeated observations and a properly constructed experimental reference-frame. BomWatch protocols achieve this with reference to the First Law of Thermodynamics, for instance.

I have repeatedly pointed-out that a single observation contains no information about that observation. The only possible measurable uncertainty is that attributable to the instrument. The rest is up to the skill of the observer.

Individual observations are also independent of others that are not measured simultaneously under the same conditions. So-called error cannot accumulate from one independent observation to the next. This is not to say that successive observations are not autocorrelated.

This is why I received training early in my career, and I have trained others in minimising observer error. It is also why I keep track of the error component of data using objective statistical methods.

Even for well-constructed visual rating systems used in pasture and ecological research, there are ways of checking for consistency between observers and with standards. I regularly check the Bureau’s metadata using aerial photographs for example. Using two maximum thermometers in the same screen is another example, as is checking reset values with 9am dry-bulb values (even though the two thermometers are not read at precisely the same time).

All the best,

Dr Bill Johnston
http://www.bomwatch.com.au

Jim Masterson
Reply to  Bill Johnston
March 2, 2024 8:21 pm

“. . . I am not incorrect.”

Seriously!

I cannot follow your logic. Precision and accuracy are not the same thing. Only God knows the true accuracy. We can only measure with respect to precision. Unless a thermometer is calibrated prior to each measurement, you can’t claim that the glass tube hasn’t slipped with respect to the printed scale. Very few weather thermometers, if any, have scales engraved on the glass tube – and those can’t be calibrated.

Now, measuring the length of a rod is simple. Repeated measurements of “that” rod will improve the resolution and reduce the error. However, repeated measurements of temperature at different times do not improve the resolution of a single temperature entity. You’re measuring different temperature entities and trying to conflate them into a single temperature entity.

And then they try to invoke the CLT (central limit theorem). That’s truly a laugh!

Reply to  Jim Masterson
March 2, 2024 8:46 pm

His notions about what “error” is are well and truly bizarre.

He is also 100% clue-proofed.

Reply to  Jim Masterson
March 2, 2024 9:12 pm

Dear Jim,

It is not truly a laugh, it is truly correct. If you don’t know what the ‘true’ observation is, you would not know if it knocked you down.

Also meteorological thermometers ARE engraved with either a deg F or degC scale (in Australia whole degrees-F, with a 10-degree index; or 1/2 degC with a longer whole degree bar, and a 5-DegC index.) Uncertainty estimates of an instrument are also always expressed as +/-, so for each independent observation, within that bandwidth, the central limit theorem holds.

If you are not experienced in using met-thermometers and taking weather observations, on what basis do you challenge protocols followed by those that have or do? Do you think there are no instructions about what to do, how to take observations and how to check for problems like bias and broken gear?

Perhaps you have never seen one, but I know and have used and calibrated both degF and degC thermometers and I have photographs of them too.

And now, we are going to flog that long-dead horse called precision and accuracy.

Restating what I said earlier, if you are talking about measuring something, precision is about the instrument, accuracy is about the way it is observed. A well-trained observer minimises the error (ie., maximises the accuracy) associated with taking a measurement, while precision of the instrument determines the uncertainty surrounding an observed value.

Measuring the length of a rod with a tape having 10-cm divisions is not as precise as using a tape with 1mm divisions. Someone with poor vision, or who has not done it before, probably cannot measure to that level of precision. So measuring the length of a rod may not be that simple for all people. Likewise for the office-girl who may be asked to take temperature observations at 9am, on her first day in the office.

Repeat measurements give you a more reliable mean, and an estimate of the SE of that value. However, no information about ‘error’ can be derived from a single measurement.

Cheers,

Bill

Reply to  Bill Johnston
March 3, 2024 6:53 am

Followed by yet another word salad. But the bolds do make it look important.

Reply to  Bill Johnston
March 3, 2024 6:51 am

Nice word salad.

Reply to  walterrh03
March 2, 2024 6:10 pm

And his “answer” confirms this statement.

walterrh03
Reply to  karlomonte
March 2, 2024 8:56 pm

He needs to study.

Reply to  walterrh03
March 2, 2024 11:45 pm

Dear Karlo and Walter,

You both need practice.

Rather than preach about something you have no experience of, I suggest get a job or volunteer as a met-observer.

Next you will tell me that thermometers don’t have scales etched on the glass; or that errors incurred in measuring temperature yesterday are additive with errors today, then the next day and forever; that PDFs are actually histograms (or vice-versa), and that on days ending in u, water runs up-hill.

I suspect you may need to learn to read GUM, make observations using thermometers you have never seen, and work your keyboard at the same time (watch CapsLock!)

I think it is time to be polite and cease harassing, bullying and demeaning those with knowledge that you may not have.

I look forward to happier days.

All the best,

Bill

Reply to  Bill Johnston
March 3, 2024 1:13 am

And then they slink away …. always the way such discussions end

Reply to  Bill Johnston
March 3, 2024 6:55 am

You forgot the full stop.

walterrh03
Reply to  Bill Johnston
March 3, 2024 9:54 am

Bill,

I referred you to a legitimate source for study. At some point, we were all new to metrology and had to study from somewhere. That’s the opposite of bullying and harassment, and so is correcting you when you are wrong. If that upsets you, then you have some self-reflecting to do.

And if that qualifies as harassment and bullying, what do you classify your behavior directed towards Jennifer Marohasy?

Reply to  walterrh03
March 3, 2024 10:01 am

He might want to pause for a bit and consider why the World Meteorological Organization uses the JCGM/GUM definitions of error and uncertainty for their thermometer classifications, instead of the Bill Esoteric Temperature Metrology.

walterrh03
Reply to  karlomonte
March 3, 2024 10:14 am

And yet the WMO insists on using surface datasets with a claimed uncertainty of ~0.05°C.

I suspect you may need to learn to read GUM, make observations using thermometers you have never seen, and work your keyboard at the same time (watch CapsLock!)

If he really studied the GUM, he would quickly find the definition of error and uncertainty on page 5.

Reply to  walterrh03
March 3, 2024 12:03 pm

I suspect the committee(s) that put out the WMO meteorological guides are different from the climate science types; the guides have a lot of good information.

Indeed he would find the definitions PDQ. When we tried to point him in the right direction previously, his reaction was that only calibration labs use the GUM, plus it conflicts with his scores of decades of experience putting eyeball to thermometers.

Reply to  karlomonte
March 3, 2024 2:24 pm

Dear Walterrh03, karlo (and Jim),
 
As Karlo just pointed out, WMO uses GUM to classify instruments. Use of such instruments for weather observations, however, is directed by protocols and procedures, which in Australia are published by the Bureau of Meteorology. Protocols aim to minimise error, which is the same as maximising the precision of an observation. Uncertainty is a fixed property of the instrument.
 
The GUM and NIST (and procedures adopted in Australia by NATA), and I suspect like-organisations in GB and the EU, apply to calibration and quality assurance of instruments, including in health and allied industries, which employ their own operating procedures. Ask a nurse what those protocols entail.
 
Next time you have your blood-pressure measured, ‘error’ does not depend on how many patients had theirs measured using the same machine by the same nurse previously.
 
While the machine may be regularly calibrated by a technician using NIST-certified standards and methods, and is probably stamped as such, each reading is independent of any others. Yours and Jim’s assertions that ‘error’ propagates between independent observations are patently ridiculous.
 
The same applies to the measurement of temperature, and all the other things that get measured every day using certified calibrated equipment. Go look at surveying equipment – most gear comes with a calibration certificate provided by a certified lab. A surveyor on the other hand is trained to minimise error in the use of the gear.
 
Buying cheap equipment on Ebay that is not certified leaves you with no fallback if something goes wrong.
 
Run a pathology lab without NATA certification in Australia ditto. The Bureau of Meteorology operates a NATA certified lab, and the organisation I worked for had two NATA certified analytical labs. Results came back stamped to that effect.
 
Our much smaller lab was visited each year by a technician who checked electrical connections, operation of the fume-cupboard and the calibration of gear such as balances, and furnaces and ovens. We did our own lab checks on tipping-bucket rain gauges and other field equipment, including thermometers and automatic weather stations. SafeWork NSW can even chip-in if there are problems with safety and the failure to abide by work-safety rules.
 
Everywhere you look there is some sort of certificate certified by someone, even for people who may want to volunteer – working with children checks, first-aid, crowd-control, operating a door handle …. To ‘volunteer’ at certain places you may need to spend up to $500 just getting ‘qualified’.
 
If you have not done any of this stuff yourselves, including in this case weather observations, why are you qualified to bang on about it as though you are some sort of expert?
 
You are conflating the purpose of NATA, NIST and GUM certification schemes, and field calibration, with protocols, methods and training associated with the use of instruments for undertaking measurements.
 
As you have a fixed viewpoint, and you show no indication that you have been trained-in or undertaken lab-work or weather observations, there is no point in furthering this discussion.
 
Yours sincerely,
 
Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
March 3, 2024 2:59 pm

The GUM and NIST (and procedures adopted in Australia by NATA), and I suspect like-organisations in GB and the EU applies to calibration and quality assurance of instruments

The usual bullshit, followed by a huge word salad.

 or undertaken undertaken lab-work

More irony.

no point in furthering this discussion

Sir Robin checks out … maybe.

Reply to  karlomonte
March 3, 2024 3:30 pm

As you have had nothing useful to say and no insights to share, I second that karlo.

Cheers,

Bill

Reply to  Bill Johnston
March 3, 2024 6:08 pm

Your mind is a steel door, welded shut.

And here you are again, as predicted.

walterrh03
Reply to  Bill Johnston
March 3, 2024 7:02 pm

If you insist, Bill. If you don’t want to take up my offer of being pointed in the right direction, then you are refusing knowledge and wasting your time.

Reply to  walterrh03
March 3, 2024 8:53 pm

Don’t give me this crap walterrh03. I don’t need to be pointed. Give me a formula that calculates the standard deviation for the single number 47.

I have given you and others the statistical formulae for calculating the variance, from that sigma^2 and from that sigma (SD), but (none of) you have indicated how that would be calculated for a single number. (See: https://wattsupwiththat.com/2024/03/01/exclusive-a-third-of-u-k-met-office-temperature-stations-may-be-wrong-by-up-to-5c-foi-reveals/#comment-3877227)

I have read the GUM, been through NIST, talked to people in NATA-certified labs, and put up with an incessant personal barrage from karlo and Jim. Just give me the formula that disproves my point that there is no such thing as the standard deviation of a single number. If you don’t like 47, give it to me for 139.

I have statistical skills, I have monitored the weather for close on a decade. I know what a thermometer looks like, I have trained people and I’ve undertaken some 40-years of field-based research and analysis.

Don’t give me a reference, show me how you calculate the SD for the single number 47, or optionally 139.

Cheers,
Bill

Reply to  Bill Johnston
March 3, 2024 6:54 am

I think it is time to be polite and cease harassing, bullying and demeaning those with knowledge that you may not have.

Look in the mirror, old chappie.

Reply to  walterrh03
March 3, 2024 6:55 am

He won’t.

Reply to  Bill Johnston
March 3, 2024 5:44 am

Sorry dude, they do not cancel; they can be added in quadrature if you have a reason to expect some cancelation. Otherwise, the uncertainty is simply added.

Uncertainty is never divided by the number of single measurements taken. The standard deviation of the single-measurement data defines the dispersion of data that can be attributed to the measurand. Typically, if you know that the systematic uncertainty is ±5, then it is added to the uncertainty of the data.

You keep referring to how accurately you yourself can read a thermometer. That is not the issue. The ultimate uncertainty involves numerous identified parts of the uncertainty budget. Many of these devolve back to how the system was calibrated and not the ability to read a gauge.

Typically, LIG readings were recorded to the nearest integer when using F thermometers. That is an automatic ±0.5°F resolution uncertainty. Differences from the calibration temperature, humidity, wind speed, ground cover, closeness of heated/cooled structures all add to the uncertainty of EACH reading. These uncertainties are all additive, either directly or in quadrature.
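To make the budget idea concrete, here is a minimal sketch of combining independent components in quadrature, with a known uncorrected systematic contribution handled separately; every number is invented for illustration and is not NOAA's or anyone else's actual budget.

```python
import math

# Invented standard-uncertainty components, degrees F (illustration only)
independent = {
    "resolution (record to nearest 1 F)": 0.5 / math.sqrt(3),  # rectangular half-width -> standard uncertainty
    "calibration": 0.3,
    "siting / radiation shield": 0.6,
}

# Independent components combine in quadrature
u_quad = math.sqrt(sum(u ** 2 for u in independent.values()))

# A known, uncorrected systematic contribution is not reduced by averaging;
# one conservative treatment is to add it linearly on top.
u_systematic = 0.2
u_conservative = u_quad + u_systematic

print(f"quadrature of independent components: {u_quad:.2f} F")
print(f"with systematic added linearly:       {u_conservative:.2f} F")
```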

Why else does NOAA here in the U.S. specify ASOS stations to have an accuracy of ±1.8°F?

Read this from the GUM and see if you understand what it is telling you.

F.1.1.2 It must first be asked, “To what extent are the repeated observations completely independent repetitions of the measurement procedure?” If all of the observations are on a single sample, and if sampling is part of the measurement procedure because the measurand is the property of a material (as opposed to the property of a given specimen of the material), then the observations have not been independently repeated; an evaluation of a component of variance arising from possible differences among samples must be added to the observed variance of the repeated observations made on the single sample.

Temperatures never have repeated measurements on single samples. That leaves only the component of variance arising from possible differences among samples.

Reply to  Jim Gorman
March 3, 2024 6:58 am

He is completely unable to understand anything about real metrology, and tries to hide behind reams of nonsense, as witnessed by his bizarre ideas of the definitions for error and uncertainty.

Reply to  Jim Gorman
March 3, 2024 3:22 pm

You say:

Uncertainty is never divided by the number of single measurements taken. The standard deviation of the single measurement data defines the dispersion of data that can be attributed to the measurand. Typically, if you know that the systematic uncertainty is ±5, that it added to the uncertainty of the data.

Utter crap. Standard deviation is the square root of the variance and can only be determined from multiple observations. Variance (sigma^2) is by definition the sum of squared differences from the mean divided by either n or n-1, depending on whether it is calculated for the population (N) or for samples (N-1). SD (sigma) is the square root of the variance [sqrt(sigma^2)].

There is therefore no such thing as a standard deviation of a single independent number, only of a population of numbers. Furthermore, there is a difference between the sample SD and the population SD – which in Excel are called STDEV and STDEVP. [For variance, the calls are VAR and VARP.]
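
For what it is worth, the sample/population distinction and the Excel names cited here can be checked in a few lines of Python (made-up readings; note that the sample version is undefined for a single value):

import statistics

data = [46, 47, 49, 51, 47]              # made-up readings

sample_sd = statistics.stdev(data)       # Excel STDEV: divides by n-1
population_sd = statistics.pstdev(data)  # Excel STDEVP: divides by n
print(sample_sd, population_sd)

# A lone value has no spread to summarise:
# statistics.stdev([47]) raises StatisticsError (at least two points needed).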

“The standard deviation is a summary measure of the differences of each observation from the mean. If the differences themselves were added up, the positive would exactly balance the negative [central limit theorem] and so their sum would be zero. Consequently the squares of the differences are added” https://www.bmj.com/about-bmj/resources-readers/publications/statistics-square-one/2-mean-and-standard-deviation [or any other of the zillions of sites that say the same thing, my emphasis].

A single value does not and cannot have a standard deviation.

There is also no such definition as systematic uncertainty (see https://phas.ubc.ca/~oser/p509/Lec_11.pdf). If you know your instrument is offset systematically, the correction is to add the offset with sign-preserved. Systematic uncertainty is either + or -, not +/- as you infer.

Systematic uncertainty of an instrument is fixed and can only be determined through calibration with an instrument that is free of such bias – i.e., an instrument certified by a lab [using GUM, NIST or NATA guidelines].

Your lack of understanding is profound indeed.

Yours sincerely,

Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
March 3, 2024 6:12 pm

A single value does not and cannot have a standard deviation.

Yet a single measured value does have an uncertainty interval; you are wrong, again.

certified by a lab [using GUM, NIST or NATA guidelines]

Metrology goes way beyond just calibrating instruments in a lab.

Wrong again.

Reply to  karlomonte
March 3, 2024 6:26 pm

Dear karlo,

You are being boorish and you apparently cannot read and cannot explain anything of which you rave.

To reiterate, a single number cannot by definition have a standard deviation.

If you don’t agree, provide the formula for calculating the standard deviation of the number 47.

The ‘uncertainty’ you are baying about is that of the instrument, which by definition is +/-1/2 the interval range. The GUM has no application in the field use of instruments, only in their QA, accreditation or calibration.

Yours sincerely,

Bill Johnston

walterrh03
Reply to  Bill Johnston
March 3, 2024 6:53 pm

To reiterate, a single number cannot by definition have a standard deviation.

Bill,

That is not correct.

Reply to  Bill Johnston
March 4, 2024 12:45 am

Dear walterrh03 (why not use your real name?),
 
Instrument uncertainty as it relates to the mighty GUM is all here (WMO Guide to Instruments and Methods of Observation, Volume I – Measurement of Meteorological Variables).
 
The GUM did not exist when all the baseline data were observed using Kew-standard thermometers in the early 1900s.

However, pushing-on, the stuff you, Jim and karlo have been talking about is covered from Page 15 to 24. The definitions, formulae and everything are there for all of you to see and, dare I say, in glorious black and white.
 
In woke-speak, (quote) “the term measurement … is used less strictly to mean the process of measurement or its result, which may also be called an “observation”.

So, we don’t have observations anymore (even though that is what they are). Instead we have measurements. Silly me. We were measurers, not trained observers, and all of you who criticise me, could be measurers too!

In the land of metrology, we are all the same. I already feel glad all over! Can you feel the vibe too?
 
Oh woops, then it says “The terms accuracy, error and uncertainty are carefully defined in 1.6.1.2, which explains that accuracy is a qualitative term, the numerical expression of which is uncertainty. This is good practice and is the form followed in the present Guide. Formerly, the common and less precise use of accuracy was as in “an accuracy of ±x”, which should read “an uncertainty of x”.
 
So uncertainty, which they say is synonymous with accuracy, is no longer qualitative but numerical, and it only has a positive sign.

Qualitative, positive-sign, numeric has a funny ring to it. Like anything you want, like … like … Anyhow, pressing on …
 
Of all the mind-blowing statements that nobody could have guessed “The process of experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity” is called a measurement; and they even use devices for doing that (don’t go to sleep karlo, this shite is right up your alley).
 
When they get down to Section 1.6.3.1 they talk about “The statistical distributions of observations”.
 
All their re-badged wokie stuff is actually no different to the olden-day calculations of variance, SD, SE and confidence intervals.

If you are still awake Jim and Karlo you could read it for yourselves on p. 21. They even have a little k (coverage) table which is actually a table of t‑values for samples (N) >60. (For small sample numbers, they use DF as the denominator, just as was done in my 1960s stats book.)
 
Nowhere do they produce a formula for calculating the standard deviation of a single value, and you can’t either.
 
All the best,

Dr Bill Johnston
http://www.bomwatch.com.au
 

Reply to  Bill Johnston
March 4, 2024 3:49 am

Your ignorance, ego, and arrogance have no bounds. I fully realize this will be fruitless, but for the benefit of others who might still be reading:

Error — clearly defined in the GUM as the distance between a measurement and the true value of the quantity being measured.

This definition is universal and doesn’t depend on what is being measured, and includes your holy and sacred air temperature thermometers.

All measurements have error, there is no escape, to the point that true values are in fact unknowable. It does not matter where a measurement is made, laboratory or not.

You are a statistician who doesn’t understand this, and thinks that you can quantify error with statistics.

You are simply wrong, along with the vast majority of practitioners of climate science.

Uncertainty — uncertainty is not error!

The trendologists of climate science are unable to comprehend this simple definition and fact, and all evidence shows you are inside this set.

Uncertainty is instead an interval within which a true value is expected to lie.

Uncertainty is a numerical expression of the limits of knowledge about a measurement — what you don’t know!

What you completely missed from the GUM is another simple concept, I’ll even use your bolding so you don’t miss it (again):

Uncertainty is quantified by standard deviation, but it is not equal to standard deviation!

The GUM does this two ways — Type A and Type B.

Every measurement has multiple sources of both kinds of uncertainty!

Type A — where multiple measurements of the same quantity can be made and the standard deviation calculated; the s.d. becomes the uncertainty caused by the spread of values, i.e. statistics.

Type B — uncertainty which cannot be treated with statistics and reduced with multiple measurements, because they are nonrandom.
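
A toy sketch of the two evaluations with made-up numbers: the Type A component comes from the scatter of repeated readings, while the Type B component here models a resolution limit as a rectangular distribution (a standard GUM device); the two are then combined in quadrature.

import math
import statistics

# Type A: repeated readings of the same quantity (hypothetical values).
readings = [20.1, 20.3, 19.9, 20.2, 20.0]
s = statistics.stdev(readings)
u_typeA_single = s                           # uncertainty of one reading
u_typeA_mean = s / math.sqrt(len(readings))  # uncertainty of their mean

# Type B: a 0.5-unit resolution limit treated as a rectangular distribution
# of half-width a = 0.25, so u = a / sqrt(3).
a = 0.25
u_typeB = a / math.sqrt(3)

# Combined standard uncertainty for the mean (uncorrelated components).
u_combined = math.sqrt(u_typeA_mean ** 2 + u_typeB ** 2)
print(u_typeA_single, u_typeA_mean, u_typeB, u_combined)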

If you can’t understand this distinction, you need a new line of business.

Nonrandom sources of uncertainty come from many things.

Another point you missed when you skimmed the GUM without your reading glasses on is that quantifying uncertainty requires knowledge and study of the entire measurement process you are doing. This is called an Uncertainty Analysis.

If your measurement consists of several different quantities that are measured independently and then a final result calculated with an equation, the GUM tells you that you need the uncertainty interval of each sub-measurement, and then tells you how to combine them into a single uncertainty.

This includes any calibration constants used, which should each have their own uncertainty.

Each of these can and will have their own Type A and Type B elements!
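
For reference, the combination rule being described is the GUM’s law of propagation of uncertainty; for a result y = f(x_1, …, x_N) with uncorrelated inputs it reads:

u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^2(x_i), \qquad U = k \, u_c(y)

with each u(x_i) itself combining the Type A and Type B components of that input, and k the coverage factor for the expanded uncertainty U.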

Then you must honestly analyze what you are doing and include uncertainties that arise from the measurement process.

These steps are universal and do not depend on where it is all done!

An uncertainty interval does not give you a PDF! You will never understand this, of course.

Then to piss on those you think are beneath your gigantic intellect, you tossed out this little gem:

All their re-badged wokie stuff is actually no different to the olden-day calculations of variance, SD, SE and confidence intervals.

Neatly demonstrating you understood nothing in the document.

Finally, an expanded uncertainty is calculated with a coverage factor (which you also made a fool of yourself by pissing on). It is not Student’s t, although it is related.

The GUM method is not the only way to express uncertainty, but like it or not, it is the international standard.

Lastly, you had to double-down on your ridiculous lie:

Nowhere do they produce a formula for calculating the standard deviation of a single value, and you can’t either.

No one said that they do, you fool. Shoe leather must taste pretty good to keep your foot in your mouth for so long.

So I’ll repeat this again:

Uncertainty is quantified by standard deviation, but it is not equal to standard deviation!

Enjoy your ignorance (you will, as you believe you are better than everyone else on the planet).

Reply to  karlomonte
March 4, 2024 4:08 pm

Dear karlo,

You should know from our various interactions that I avoid getting down in the ditches and slinging mud at those I may not agree with. There have been some exasperating exceptions where people have made claims about weather stations that were provably not there; data that metadata showed did not exist; claimed statistical inferences that were bogus for various reasons, etc. Dealing with you and your throwaway lines falls into that category; however, I have remained stoically above that and I invite you to do the same.
 
I have spent considerable time examining the GUM, NIST and now the WMO manual (and others that I have not mentioned). I also have accumulated much more information (statistical and otherwise) about individual Australian weather stations and their datasets than probably the Bureau has. Undermining the Bureau’s biased homogenisation methods is my goal. I have developed a range of statistical protocols for doing that and I am well down the track of achieving it. There is no trend in any long and medium-term datasets suggestive of an effect due to CO2, or in most cases, UHI. Only Sydney Observatory (and possibly Melbourne) embeds a slight UHI effect.
 
As a weather observer at a class 3 site, and a senior scientist, I am tired of being told how to suck eggs by people who have never seen a meteorological thermometer, conducted an experiment, undertaken field observations, monitored the weather using T-probes and loggers etc., or published or peer-reviewed papers in my field.
 
In my view, while the NIST eBook (https://www.itl.nist.gov/div898/handbook/) is useful in an applied sense, including for those interested in getting a handle on data and its variability, and avoiding pitfalls, I don’t share the same view of the GUM or the WMO doc (based on the GUM) as you do. I do not believe, nor do they explain, how concepts about variation and bias, that they say are not measurable or knowable, apply in the real world.
 
Before you broke into sarcasm, you say (as the GUM and WMO also say) ‘Error – clearly defined in the GUM as the distance between a measurement and the true value of the quantity being measured’. Nice but useless. Both doccos then state that the true value cannot be known. Most researchers understand sampling theory as a bread-and-butter issue; ‘error’ and its components; and how to minimise error for a given set of circumstances, including measuring attributes of the weather (which includes more than just looking at thermometers on a nice sunny morning).
 
Provided there is a standard, an expectation, or a reference-frame, error is statistically quantifiable, including systematic error, both in real-time and post hoc. A sound QA system deals with this. Led by the Brits and to a lesser extent France and Portugal, such schemes have been used for centuries, otherwise scientific progress in exploration, physics, chemistry, astronomy, navigation, meteorology … would have stalled or even not got off the ground.

Lieutenant James Cook did not guess his way to Tahiti, and he did not go there for a random purpose. He was checking/re-setting the world’s astronomical clock by testing a prediction against an observable phenomenon, the transit of Venus.

Sydney Observatory was not used for star-gazing. Its primary purpose was to maintain accurate time, signal that to ships in-port at exactly 1pm each day, and calibrate instruments (barometers and thermometers) relative to Kew standards (the laboratory at Kew being the then centre-of-excellence). Signalling by flags or time-balls, observatories in other ports in Australia and around the world performed the same vital service.
 
You say “Uncertainty is a numerical expression of the limits of knowledge about a measurement — what you don’t know!” I disagree, which is allowable in debate. Uncertainty is knowable, it is measurable and quantifiable, otherwise instrument standards would not exist.

[more]
 

Reply to  Bill Johnston
March 4, 2024 4:13 pm

On going:

You also say, “Uncertainty is quantified by standard deviation, but it is not equal to standard deviation!” I again disagree. The standard deviation (sigma) is a measure of spread from the mean of a population of values. That definition has not changed. Standard error refers to the reliability of a sample (the precision of the sample mean) and it is calculated as the SD divided by the Sqrt(N), where N is the number of samples from which SD was calculated. That definition also has not changed. SD can be used as a tolerance test to assess the expectation that single samples are within a bandwidth of +/- one or two sigma, whereas SEM is the value that is multiplied by Student’s t to provide confidence intervals for a mean, including CIs for the mean of a difference. The tables given in the WMO docco are the same t-values at those stated P-levels. (You could look up control charts in the NIST e-book here: https://www.itl.nist.gov/div898/handbook/pmc/section3/pmc321.htm).
 
In my view, the GUM’s Type B errors are a made-up number, are not replicable and therefore fail the BS-test.

At no point do I recall saying that an uncertainty results in a PDF (probability density function). The GUM coverage factor is actually t05 for a sample size exceeding 60. That is made clear in NIST Technical Note 1900, where they say on page 31 that “The coverage factor for 95 % coverage probability is k = 2.08, which is the 97.5th percentile of Student’s t distribution with 21 degrees of freedom. In this conformity, the shortest 95 % coverage interval [of their example] is t̄ ± ks∕√n = (23.8 °C, 27.4 °C)”. You would recognise the formula as being +/- the SE times t (SE being s/sqrt(n)), and k the coverage factor. I got stuck before with this in a discussion with Jim. The one-sided 97.5th percentile of Student’s t distribution (P = 0.025) is equivalent to P = 0.05 for a two-sided test. The rest of your sarcasm, including name calling, reflects on you, and I’ll leave that with you.
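
That NIST arithmetic can be reproduced directly; here is a sketch using the figures quoted above (21 degrees of freedom implies 22 readings, and the mean and sample SD are backed out of the quoted interval, so treat them as approximate):

import math
from scipy import stats

n = 22          # 21 degrees of freedom implies 22 readings
mean = 25.6     # midpoint of the quoted (23.8, 27.4) interval
s = 4.1         # approximate sample SD implied by the quoted half-width

k = stats.t.ppf(0.975, df=n - 1)      # two-sided 95 % coverage factor
half_width = k * s / math.sqrt(n)

print(f"k = {k:.2f}")                                              # ~2.08
print(f"interval = ({mean - half_width:.1f}, {mean + half_width:.1f}) degC")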
 
Others have claimed that a standard deviation can be derived for a single observed value. It is that assertion that I have consistently challenged. If you want to measure the fitness of a mean of N samples, the SEM is the appropriate measure. If you want to know about the population of values from which the mean was derived, you use the SD. Their meanings and how they are calculated from squared differences are defined and have not changed.
 
All this has little to do with taking individual observations, whether it be herbage mass, ground-cover, coral-extent, diameter of a tree at breast-height, or temperature, where the only two controllable factors are error-control and precision/quality of the instrument, whether it be a calibrated eyeball, or an instrument in your hand.

Sound experimental protocols, including training, back-checks, cross-calibrations, and using quality-certified instruments provide assurance that data are sound and fit-for-purpose. Although they may pay lip-service here and there, except for calibration and instrument QA, the GUM and WMO guidelines don’t comment on such issues.
 
Finally, it is the job of the analyst (the statistician) to work out ways of checking and dealing with the question of whether data are fit-for-purpose, including devising post hoc verification tests of the data, and of assumptions underlying the statistical approach.
 
Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
March 4, 2024 4:26 pm

As expected, you didn’t learn anything. Your rants aren’t even worth skimming.

You also say, “Uncertainty is quantified by standard deviation, but it is not equal to standard deviation!” I again disagree.

So what? I don’t care if you want to remain ignorant. It is not me you are arguing with.

I noticed you ran away from my little thought experiment.

Not a surprise.

Reply to  karlomonte
March 4, 2024 4:59 pm

I would rightly expect that if you sent someone out on an errand, you would also provide a set of protocols that pointed to specific pages in the manual, rather than have them make it up as they went along. In that respect, if they brought you back crap data it would be your fault, not theirs.

I did not expect any of my techos to just go and do a job without training and without knowledge of the protocol to be followed. But then I’m a scientist ….

b

Reply to  Bill Johnston
March 4, 2024 9:20 pm

Errand?? WTF are you talking about now?

I gave you a not-overly complex (and quite on-topic for WUWT) problem to solve, and your response is this bizarre word salad.

FAIL.

Just admit you don’t have Clue #1 about how to solve it (well you did, really, indirectly).

Reply to  karlomonte
March 4, 2024 9:36 pm

Nothing to do with WUWT. You want data or a result, it is up to you to provide protocols and training, including safe operating procedures.

I did not ask you to give me anything and, while I would not work for you under any circumstances, that is the boss’s job.

Consider this communication closed.

Cheers,

b.

Reply to  Bill Johnston
March 4, 2024 10:15 pm

Oh no, you’ll be back.

If you understood much of anything about real metrology (which you don’t), you’d see immediately how my problem is relevant to WUWT.

But you don’t, so you instead tried to bluff and run away under a smokescreen cover of a meaningless word salad, with nonsense about “errands” and “bosses”.

Reply to  karlomonte
March 5, 2024 1:20 am

Dear karlo,

Just how come, you expert you, that you never picked-up that the GUM’s Type A error was the same as SE multiplied by Student-t for large numbers? Exactly the same as the 95% CI. Jim did not see it, neither did walterrh03.

That people take notice of your mindless bullying and harassment is appalling. That you even sell this stuff as though you are an expert is even worse. Take a nap old-man or at least show the spine to argue your case coherently, without appealing to authority. The GUM is a bureaucratic document, not a protocol.

Jim did not see it, neither did walterrh03 who imagined he could produce an SD for a single number like 47. Like a circus-ringleader, you go on and on looking for a few brownie points to brighten your day.

Nevertheless, scientists and rational people need people like you to show how stupid and intractable, stupid can be.

Cheers,

Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
March 5, 2024 6:39 am

Just how come, you expert you, that you never picked-up that the GUM’s Type A error was the same as SE multiplied by Student-t for large numbers? Exactly the same as the 95% CI. Jim did not see it, neither did walterrh03.

From the GUM:

B.2.17 experimental standard deviation
for a series of n measurements of the same measurand, the quantity s(q_k) characterizing the dispersion of the results

NOTE 1 Considering the series of n values as a sample of a distribution, q is an unbiased estimate of the mean µ_q, and s2(q_k) is an unbiased estimate of the variance σ2, of that distribution.

B.2.18 uncertainty (of measurement)
parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

E.3.5 In particular, combining variances obtained from a priori probability distributions with those obtained from frequency-based distributions is deprecated because the concept of probability is considered to be applicable only to events that can be repeated a large number of times under essentially the same conditions, with the probability p of an event (0 < p < 1) indicating the relative frequency with which the event will occur.

What’s funny is that you never noticed that is what NIST TN 1900 does. In order to do so, they had to make the assumption that systematic and measurement uncertainties were negligible.

You’ll also note that NIST said another method that doesn’t rely on an estimate of the distribution gave a ±2.0 rather than ±1.8.

Reply to  Jim Gorman
March 5, 2024 7:00 am

To keep posts shorter, I also want to show what an analytical chemistry metrology course explains.

From

https://sisu.ut.ee/measurement/uncertainty

Two important interpretations of the standard deviation:

1. If Vm and s(V) have been found from a sufficiently large number of measurements (usually 10-15 is enough) then the probability of every next measurement (performed under the same conditions) falling within the range Vm ± s(V) is roughly 68.3%.

2. If we make a number of repeated measurements under the same conditions then the standard deviation of the obtained values characterizes the uncertainty due to non-ideal repeatability (often called repeatability standard uncertainty) of the measurement: u(V, REP) = s(V). Non-ideal repeatability is one of the uncertainty sources in all measurements.

Standard deviation is the basis of defining standard uncertainty – uncertainty at standard deviation level, denoted by small u.
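
Interpretation 1 above is easy to check numerically; a throwaway simulation, assuming purely random Gaussian scatter (which is exactly the assumption argued over elsewhere in this thread):

import random
import statistics

random.seed(1)
# Simulate many repeat readings with purely random Gaussian scatter.
readings = [random.gauss(20.0, 0.5) for _ in range(100_000)]
m = statistics.mean(readings)
s = statistics.stdev(readings)

inside = sum(m - s <= v <= m + s for v in readings) / len(readings)
print(f"fraction within mean +/- s: {inside:.3f}")   # ~0.683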

Reply to  Jim Gorman
March 5, 2024 7:19 am

Bill is repeating the nonsense that you claimed it is possible to calculate a standard deviation of a single measurement result.

He doesn’t understand much of anything.

Reply to  Jim Gorman
March 5, 2024 7:06 am

“the GUM’s Type A error was the same as SE multiplied by Student-t for large numbers?”

This all assumes that the stated value of a “stated value +/- uncertainty” defines the distribution. It doesn’t. The uncertainty in each value conditions the distribution as well, “smearing” it if you will, both on the negative and positive side. It’s not just a matter of the tails being non-Gaussian, since those tails are defined by statisticians using only the stated values.

Using a coverage factor only adjusts for the tails not being Gaussian, it doesn’t adjust for the values defining those tails having uncertainty.

Because of the uncertainty portion of each data point, the mean cannot be calculated any more accurately than the data point uncertainties allow. Proof of that is the fact that even if you have the entire population the average will have an uncertainty associated with it from the propagated uncertainties of the individual data points. Samples of a population are no different. The standard error can be no smaller than the propagated uncertainties used with the stated values that are used to find a sample standard deviation.

Assuming that all measurement uncertainty is random, Gaussian, and cancels is a crutch statisticians use to make their analyses “easier”. They never justify it with anything other than handwaving. That might work on the blackboard at school but it doesn’t work in the real world.

Reply to  Bill Johnston
March 5, 2024 6:44 am

As predicted, your pride won’t let you walk away.

And now you’ve pulled out yet another straw man and lit it afire — of course I understand that a Type A uncertainty is standard statistics.

And after whining about people calling you mean names, you pull these gems out:

“mindless bullying and harassment”
“nap old-man”
“spine to argue your case”
“circus-ringleader”
“people like you”
“stupid and intractable”
“stupid can be”

Here’s another one: you are a HYPOCRITE of the first order. To be expected from someone consumed with pride.

Of course you did an encore of your previous straw man and accused Jim of claiming they can calculate a standard deviation on a single number.

This makes you a LIAR.

The reality is you are just a statistician who thinks all error is random, Gaussian, and cancels (as Tim puts it), who has no concept or experience with real-world measurements.

Have you contacted the JCGM yet and told them the GUM is bullshit (your term)?

That you believe the GUM is “a bureaucratic document not a protocol” just screams about your ignorance of the subject, and your pride won’t let you acknowledge that you don’t know squat.

Cheers, dood.

Hope you feel better after your latest bizarre rant, I’m looking forward to seeing the next one.

walterrh03
Reply to  Bill Johnston
March 5, 2024 7:54 am

neither did walterrh03 who imagined he could produce an SD for a single number like 47

There is uncertainty associated with each measurement, not an SD; I should have clarified. The SD is for the entire sample.

walterrh03
Reply to  Bill Johnston
March 4, 2024 6:13 pm

I don’t share the same view of the GUM or the WMO doc (based on the GUM) as you do. I do not believe, nor do they explain, how concepts about variation and bias, that they say are not measurable or knowable, apply in the real world.

That’s your main problem.

 ‘Error – clearly defined in the GUM as the distance between a measurement and the true value of the quantity being measured’. Nice but useless. Both doccos then state that the true value cannot be known. Most researchers understand sampling theory as a bread-and-butter issue; ‘error’ and its components; and how to minimise error for a given set of circumstances, including measuring attributes of the weather (which includes more than just looking at thermometers on a nice sunny morning).

Your sentences do not contradict what Karlomonte said. Just because the true value cannot be known does not mean there is no error.

Provided there is a standard, an expectation, or a reference-frame, error is statistically quantifiable, including systematic error, both in real-time and post hoc. A sound QA system deals with this. Led by the Brits and to a lesser extent France and Portugal, such schemes have been used for centuries, otherwise scientific progress in exploration, physics, chemistry, astronomy, navigation, meteorology … would have stalled or even not got off the ground.

You seem to be forgetting that temperature is an intensive property with multiple variables contributing to what the temperature is at a given time. Complete certainty is elusive because of this. UHI, for example, will manifest itself less strongly on cloudy, winter days compared to warm, sunny days.

Uncertainty is knowable, it is measurable and quantifiable, otherwise instrument standards would not exist.

No, that’s just wrong. They wouldn’t call it uncertainty if that were the case.

You also say, “Uncertainty is quantified by standard deviation, but it is not equal to standard deviation!” I again disagree. The standard deviation (sigma) is a measure of spread from the mean of a population of values.

Your provided definition of standard deviation is literally what uncertainty is.

All this has little to do with taking individual observations, whether it be herbage mass, ground-cover, coral-extent, diameter of a tree at breast-height, or temperature, where the only two controllable factors are error-control and precision/quality of the instrument, whether it be a calibrated eyeball, or an instrument in your hand.

Completely wrong; those individual observations contribute to the overall dataset. If these factors are not equal across all of the measurements, then error will certainly propagate between them. The fact that there is limited control over these is the major problem.

Although they may pay lip-service here and there, except for calibration and instrument QA, the GUM and WMO guidelines don’t comment on such issues.

Nope, they do. You would know that if you actually read the GUM.

Finally, it is the job of the analyst (the statistician) to work out ways of checking and dealing with the question of whether data are fit-for-purpose, including devising post hoc verification tests of the data, and of assumptions underlying the statistical approach.

Normally, yes, but your long diatribe of basic inaccuracies suggests maybe not.

Reply to  walterrh03
March 4, 2024 9:14 pm

^^100%

Reply to  walterrh03
March 4, 2024 9:17 pm

Dear walterrh03,

We are entering the surreal world of circular arguments.
 
You yourself claimed to be able to provide an SD for a single value, which is clearly hogwash.
 
Type B error as described in the GUM and by WMO, is a made-up value.

Get another ‘expert’ and the value can change, so it is not replicable; it is whatever the committee or whoever wants it to be. While you can believe what you like, I don’t believe Type B uncertainty as stated in the GUM is a scientifically rigorous concept, or number.
 
The formula they use for Type A error is identical to Student-t for large numbers of samples [(spread = 2 = t for N>~60) multiplied by SD/sqrtN], which is the standard error of the mean, exactly the same standard error of the mean calculated by olden-day methods. For large samples, it is the same as the 95% CI. For small samples (<~60) you would need to find a lower value than 2 and the WMO docco provides another Table for that. See the table on p.22. Type A is derived from samples and it is straight down-the-line stats 101.
 
They have re-named two-tailed Student’s t as k (the spread factor), which for large numbers is exactly the same ‘t-95’. So why are they doing this when these numbers and formulae are the same as they have always been? Why not just say “standard error”, and why are you arguing that it is something else? Jim has harangued ‘climate scientists’ for using means and SEMs, but here is the same formula and the same t-values being used to measure Type A uncertainty in the GUM. No wonder you are all confused!
 
If you were interested in the precision of the mean, you would use the SEM (times an appropriate t-value if you want CIs). If you want to know about the ‘spread’ of a population of values, either as samples (STDEV) or for the entire population (STDEVP), you would use the appropriate SD. While they are both calculated from the variance of the samples, they have different uses. Every statistical program that I’ve used auto-calculates the SEM for any summarised or derived statistic – the mean, the slope, predicted values, intercepts, interactions. For the mean, it is the same SEM as calculated by the GUM.
 
These arguments are also somewhat esoteric (good for committees and report writers so they can justify their existence), but no use at all to practitioners interested in producing good, reliable data. As I have said repeatedly, error-control in the field depends on sound protocols and training, and the use of calibrated certified instruments. Irrespective of anything else, that is the only control that can be exercised in the production of sound data.
 
If you want your blood-pressure checked, I’m sure you would want your nurse to know how to do it, and for her to use gear that was regularly checked and certified. The cuff should be at heart-level for example. Reliability of the number that comes back is the product of those two factors alone at that point.
 
Nowhere have I said there is no error. The job of a techo and scientist is to minimise error via training and by adopting sound, replicable protocols. The job of a statistician is to use appropriate methods, checks and balances based on statistical theory, to verify (or not) that the data are fit for whatever purpose they are being used for.
 
Uncertainty is not something that is unknown; it is known and can be quantified via calibration and printed on a certificate. Error is what is left when every attempt has been made to control it. It can be quantified as lack-of-fit, for example; the SEM and CIs quantify error about a mean by definition. Outlier (suspect and poor) data can also be identified objectively.
 
Error does not propagate between independent estimates by definition (that is what the word ‘independent’ means). Bias (which can also be quantified) propagates, but error does not. A designed experiment involving replication is a standard way of quantifying ‘error’ amongst units treated the same. Analysis of variance allocates total variance amongst factors, and using an F-test assesses whether variance due to ‘treatments’ (factors, blocks, interaction or whatever) is significantly different from unexplained (residual) variance. Except for points of clarification, don’t argue against me; go and grab a stats book and quote from that.
 
Finally, my disagreements with the GUM are a result of having read it.
 
Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
March 4, 2024 9:26 pm

We are entering the surreal world of circular arguments.

I agree, the stuff you type is totally surreal.

Type B error as described in the GUM and by WMO, is a made-up value.

You need to contact the JCGM immediately and properly inform them they are publishing “BS” (your term). Merely posting comments to a blog ain’t gonna cut it.

Do let us all know how it turns out.

walterrh03
Reply to  Bill Johnston
March 5, 2024 7:17 am

You’re just clogging up the thread.

Reply to  Bill Johnston
March 5, 2024 9:30 am

The formula they use for Type A error is identical to Student-t for large numbers of samples [(spread = 2 = t for N>~60) multiplied by SD/sqrtN], which is the standard error of the mean, exactly the same standard error of the mean calculated by olden-day methods. For large samples, it is the same as the 95% CI. For small samples (<~60) you would need to find a lower value than 2 and the WMO docco provides another Table for that. See the table on p.22. Type A is derived from samples and it is straight down-the-line stats 101.

If you read the GUM Section 4 carefully you will see 2 major things.

  1. The same measurand and repeated measurements under repeatable conditions.
  2. A term called Qⱼ,ₖ. This stands for samples taken from the same measurand. That is, multiple measurements (sample size) repeated multiple times.

These allow one to use a statistical analysis to obtain a statistic describing how accurate the estimated mean is. That is all it tells you. It does not tell you the variation (dispersion) of the measurements that made up the measurand’s value. That is uncertainty.

A short tale. I sell you 10,000 2×4’s 8 feet long. I tell you I measured each one and the mean was 8 feet with a standard error of 0.005 feet. Would you buy them if your job required them all to be 8 feet long?
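
One way to put numbers on the tale (a sketch that assumes, purely for illustration, a roughly Gaussian spread of board lengths): an SEM of 0.005 ft over 10,000 boards implies an SD of the individual boards of about 0.5 ft, so very few boards would actually be close to 8 feet.

import math
from statistics import NormalDist

n = 10_000
sem = 0.005                  # standard error of the mean, feet
sd = sem * math.sqrt(n)      # implied SD of individual boards: 0.5 ft

# Fraction of boards within half an inch of 8 ft, assuming a Gaussian
# spread (an assumption for illustration, not part of the tale).
tol = 0.5 / 12
dist = NormalDist(mu=8.0, sigma=sd)
frac_ok = dist.cdf(8 + tol) - dist.cdf(8 - tol)

print(f"implied SD of individual boards: {sd:.2f} ft")
print(f"fraction within +/-0.5 inch of 8 ft: {frac_ok:.1%}")   # ~6.6%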

Reply to  Bill Johnston
March 5, 2024 6:05 am

“Standard error refers to the reliability of a sample (the precision of the sample mean) and it is calculated as the SD divided by the Sqrt(N), where N is the number of samples from which SD was calculated.”

You don’t get it. Most statisticians don’t so you aren’t alone.

If your data is a sample (which temperature databases are) consisting of T1 +/- u1, … Tn +/- un

statisticians will 1. assume the standard deviation of the sample is the standard deviation of the population and 2. calculate the SEM using only the stated values of the data.

As for 1., how do you justify assuming the standard deviation of one sample data set is totally representative of the population standard deviation? It isn’t just a matter of how many measurements you make, it’s also a matter of how well the sample represents the population.

The second issue (2.) is that only using the stated values understates the actual standard deviation. That standard deviation of the sample *must* also be conditioned by the uncertainty of each individual data point. Using the student-t distribution and a coverage factor simply isn’t sufficient. That is still all going to be based solely on the stated values, unconditioned by the uncertainty of those stated values.

Bottom line? You can’t calculate the mean any more precisely than the uncertainty of the data. That means the SEM is also not a perfect indicator of the uncertainty of the mean since it ignores the uncertainty of the data values. No matter how many measurements you make the “true value” cannot be more “true” than the measurements allow. Averaging simply doesn’t increase accuracy.
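
A minimal sketch of the point being made, with made-up numbers: the SEM computed from the stated values alone can be smaller than the uncertainty propagated from the individual readings, which, on this argument, sets the floor for how well the mean can be known.

import math
import statistics

# Hypothetical stated values, each carrying the same reading uncertainty u_i
# (e.g. from resolution, calibration and siting combined).
stated = [14.2, 15.1, 13.8, 14.9, 15.4, 14.0, 14.7, 15.2]
u_i = 1.0

sem = statistics.stdev(stated) / math.sqrt(len(stated))

# Uncertainty of the mean propagated from the per-reading uncertainties
# alone (uncorrelated case): sqrt(n * u_i^2) / n = u_i / sqrt(n).
u_mean_prop = u_i / math.sqrt(len(stated))

print(f"SEM from stated values only:      {sem:.2f}")          # ~0.21
print(f"propagated from per-reading u_i:  {u_mean_prop:.2f}")  # ~0.35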

Reply to  Tim Gorman
March 5, 2024 7:02 am

He doesn’t believe uncertainty even exists, his world is nothing but statistical sampling.

His comprehension of the subject is so poor that yesterday he was ranting that Jim claimed it is possible to calculate a standard deviation for a single measurement, Jim didn’t of course.

The other highlight was Bill’s claim that the GUM is “bullshit” and “a bureaucratic document not a protocol”.

walterrh03
Reply to  Tim Gorman
March 5, 2024 7:27 am

That standard deviation of the sample *must* also be conditioned by the uncertainty of each individual data point.

Exactly. I should have clarified what I said above.

Reply to  Tim Gorman
March 5, 2024 2:09 pm

I don’t accept what you are saying, because you are wrong and I don’t believe assertions that are not believable.

Go and work some examples. You should know that optimal sample size and number can be determined experimentally, and that there is no point taking excessive numbers of samples and no profit in under-sampling. There is a point where, as N increases, SD and SE stabilise. For really noisy data, N will be larger (more samples required) than for data collected by trained people using robust protocols.

A scientist’s or practitioner’s role is to minimise errors associated with measurements, so the observations reflect the thing being measured. Don’t worry, karlo does not get this either, so why don’t you talk to him?

Somebody peer-reviewing a research proposal would ask for evidence that a sampling regime was adequate. Likewise if I peer-review a paper, I look carefully at the methods to make sure they align with the results and conclusions.

Instead of making stuff up and convincing yourself it is true, go and chat to the Jim who knows something (https://statisticsbyjim.com/hypothesis-testing/sample-size-power-analysis/).

Kind regards,

Bill

Reply to  Bill Johnston
March 5, 2024 3:32 pm

Air temperature measurements are not an exercise in random statistics sampling!

The sample size for any and all air temperature measurements is always exactly equal to one! You get one chance to capture it before it is gone, forever.

Have you contacted the JCGM yet?

Reply to  Bill Johnston
March 5, 2024 5:36 am

“Provided there is a standard, an expectation, or a reference-frame, error is statistically quantifiable, including systematic error, both in real-time and post hoc.”

Not a single metrology expert will agree with this.

Bevington on systematic error: “Errors of this type are not easy to detect and not easily studied by statistical analysis. They may result from faulty calibration of equipment or from bias on the part of the observer. They must be estimated from an analysis of the experimental conditions and techniques.”

Taylor: “For this reason, uncertainties are classified into two groups: the random uncertainties, which can be treated statistically, and the systematic uncertainties, which cannot.” (italics are from the text)

Possolo is the same. In all of his examples, systematic uncertainty is assumed to be insignificant or zero.

Bevington, Taylor, and Possolo are recognized as experts in the science of metrology. You can denigrate what they say but no one is going to accept it.

Uncertainty is not error. Uncertainty is an interval in the Great Unknown. It doesn’t cancel nor is it Gaussian, especially when different measurement devices or different measurands are involved in a series of data.

Daily temperatures are not a Gaussian curve. The median value, (Tmin + Tmax)/2, does *NOT* describe the daily temperature curve. The daily temperature data set is a multi-modal distribution that is sinusoidal during the day and exponential at night. The median temp and the statistical descriptor known as the “average” are not the same in such a distribution.
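
A toy illustration of that asymmetry, using an entirely made-up diurnal curve (sinusoidal by day, exponential relaxation at night): the 24-hour mean and the (Tmin + Tmax)/2 figure are not the same quantity.

import math

T_DAWN, T_PEAK, TAU = 10.0, 25.0, 4.0      # made-up numbers

def temp(hour):
    # Toy diurnal curve: sinusoidal warming 06:00-18:00, exponential
    # cooling overnight. Purely illustrative, not a fitted model.
    h = hour % 24
    if 6 <= h < 18:
        return T_DAWN + (T_PEAK - T_DAWN) * math.sin(math.pi * (h - 6) / 24)
    hours_since_peak = (h - 18) % 24
    return T_DAWN + (T_PEAK - T_DAWN) * math.exp(-hours_since_peak / TAU)

samples = [temp(m / 60) for m in range(24 * 60)]   # one value per minute
true_mean = sum(samples) / len(samples)
midrange = (min(samples) + max(samples)) / 2       # the (Tmin+Tmax)/2 figure

print(f"true 24 h mean:  {true_mean:.2f} C")   # ~17.1
print(f"(Tmin+Tmax)/2:   {midrange:.2f} C")    # 17.5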

Trying to say that single measurement values don’t have variance is a red herring argument. A single measurement may not have variance but it *DOES* have an uncertainty. You can’t cancel that uncertainty when it is generated by different devices and/or different measurands, and you can’t just wish it away. And uncertainties grow – *always* – when you combine those types of measurements.

This seems to be a hard truth for those in climate science to accept but it *is* the truth.

Reply to  Tim Gorman
March 5, 2024 7:25 am

Bill can’t make it past the idea that uncertainty is anything except statistical variance.

Reply to  karlomonte
March 5, 2024 7:55 am

Statistical variance OF THE STATED VALUES, while ignoring the uncertainty of the stated values.

Reply to  Tim Gorman
March 5, 2024 10:05 am

Yep.

Reply to  Tim Gorman
March 5, 2024 1:04 pm

Dear Jim,

(Tmin + Tmax)/2 is not the median value. It is defined as the average for the day and therefore whether you like it or not, that is what it is understood to be.

The GUM Type A uncertainty is identical to t95*SD/sqrt(N), which is t95*SEM, which is there in front of your eyes, yet you are telling me it is not; or, that it is something else?

The GUM Type B uncertainty is not a rigorously determined number, therefore it is what you or anyone else cares to make it.

As I have said repeatedly, both the SD and SE are derived from the variance (s^2 = (sum of the squared differences [otherwise known as the total sum of squares]) divided by N-1). The SQRT of s^2 = the standard deviation, which is a measure of spread of the population of values, which, if the values represent samples from a larger population, is estimated in Excel by STDEV, or if they are the entire population, by STDEVP. The SE (SD/sqrt(N)) measures precision of the mean. SE*t95 gives the 95% confidence interval for the mean, such that there is a 95% chance the so-called “true” value lies within that bandwidth. The so-called “spread factor” k is Student-t, which approaches 2 as N>~60. In all of this, there is nothing new under the sun.

In my view the GUM is simply re-writing, in beige non-speak, stuff that has been known since Student was a lad.

You, karlo and now walterrh03 (who reckoned he can calculate the SD of a single independent number) have wasted enormous amounts of my time, for no apparent gain, and I don’t wish this inane discussion to continue. If there are points of clarification/correction you need to make, by all means do that, but do so respectfully and professionally; otherwise go and talk amongst yourselves.

In relation to other things we have spoken about, GUM 3.4.7 and 3.4.8 are of overriding importance. For someone who has never seen a thermometer to hammer away at someone who is experienced in using them and other met-instruments (and in analysing such data), is trashy and contemptible.    

While I can read a thermometer to 0.1 degC (and you probably could too), precision of the instrument is still ½ the interval range, which is 0.25 degC, which rounds to 0.3 degC. The specifications that you gave previously, where you said the instrument had low apparent precision, were for a dew-point sensor, not a T-measuring device. Your assertion, based on that number, that US temperatures are not measured precisely is wrong.

Stop playing games. The specification of the dew-point sensor, which I checked, is instrument specific, yet you still re-cycle that number as though it applies to all T-instruments. (See https://wattsupwiththat.com/2024/01/31/spencer-vs-schmidt-spencer-responds-to-realclimate-org-criticisms/#comment-3862037).

Checking Bureau of Meteorology (Oz) specs. Australian PRT probes have a NATA-lab certified precision of 0.3 degC, which is the same as thermometers.
  
And no thank you, I won’t be buying any lumber from you. If I want it, I would buy it from a mill that uses very precise gear, on-contract, cut to exact specifications.  

To cut this crazy discussion short, I’ll also note, there are objective statistical methods for identifying suspect (outlier) data, bias, data that are not fit-for-purpose, normality, ill-fitting data and a host of other data characteristics that may supersede your appeals to Authority in the form of Bevington, Taylor, and Possolo.

In all my work, I use physically-based fixed BomWatch protocols which are statistically sound, cannot be fiddled and are robust across all the 300 or-so datasets that I’ve examined.

Finally, error minimisation is central to producing high-quality, fit-for-purpose data. Training and repeatable, checkable protocols, and certified gear are essential to achieving that goal. If there are problems, they tend to wash-out in the stats.
 
All the best,
Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
March 5, 2024 3:36 pm

How many repetitions do you get for the daily Tmax?

ONE

If there are problems, they tend to wash-out in the stats.

This is the same bullshit the climate science Fake Data fraudsters push: “all error is random, Gaussian, and cancels!”

You are a liar; it is impossible to average away nonrandom errors that drift with time.

Reply to  Bill Johnston
March 4, 2024 4:16 am

And you missed this diagram posted by Paul Homewood:

https://wattsupwiththat.com/2024/03/01/exclusive-a-third-of-u-k-met-office-temperature-stations-may-be-wrong-by-up-to-5c-foi-reveals/#comment-3876066

It looks like the idiots in the World Meteorological Organization sort air temperature stations into classes using GUM uncertainty levels.

You need to get on the horn PDQ and let the WMO know they can’t get a standard deviation from a single measurement.

Reply to  karlomonte
March 4, 2024 8:31 am

Very nice explanation.

Statisticians have propagated the assumption that an SEM type calculation (divide by √n) gives you an accurate description of a measurand. It does not, especially with temperatures.

The predominant thing here is that statisticians have been taught that their data is 100% accurate. How many stat textbooks ever address the fact that their data could be inaccurate? To them the only error is in sampling, such that the mean may have error. Find me a text that has data with a ± in front of it.

Temperatures are not only independent measurements, but measurements of independent phenomena. Consequently, they are single measurements.

The other thing that is never discussed is how uncertainty is handled when calculating anomalies. I swear that most of climate science is completely satisfied with just simple arithmetic averages as learned in grade school.

Reply to  Jim Gorman
March 4, 2024 11:00 am

Thank you, Jim. I wasn’t going to bother, but this narrative (if I can borrow this over-used word) he has constructed is so very wrong about uncertainty that the record needed correction.

And you are right, it is just another example of statistics uber alles. He hasn’t even the barest of clues about nonrandom error.

Reply to  Jim Gorman
March 4, 2024 4:39 pm

BS Jim. Part of a statistician’s role is to assess the quality of data, and if you have not studied statistics and understood experimental methods, including error control, I distrust your expertise in making judgements.

You are an engineer, you have some statistical expertise and you surely know the difference between SEM and SD and what they mean.

b.

Reply to  karlomonte
March 4, 2024 4:30 pm

I did not. Thanks for being in my head anyway.

Anthony has done a survey of US sites, Ken Stewart has surveyed Australian sites and I am surveying the quality of data for Australian sites.

b.

Reply to  Bill Johnston
March 4, 2024 9:16 pm

Now you are making even less sense, did not think this was possible.

Reply to  Bill Johnston
March 4, 2024 6:25 am

OK, I’m feeling generous today, here is a little thought experiment for you. This is an open-book test so feel free to use whatever resources you please.

You have a resistor, the resistance of which must be measured. You get exactly one chance at the measurement, so no averaging of multiple measurements.

There are more hurdles you face: you have one digital electronic meter to measure the voltage across the resistor, and another one to measure the current through the resistor. The resistance is of course then calculated by dividing voltage by current. Each meter has a 100+ page manual from the manufacturers filled with all sorts of info about using their meters.

Another problem you face is that the nearest laboratory is 1500km away, and this measurement must be done outdoors under the blazing sun in the Australian Outback.

Keep in mind that buried inside those manuals, the manufacturers have helpfully provided detailed error specification tables that include factors for not only the electronic analog-digital conversion and digital resolution, but also the ranges you are using, how long it has been since they have been calibrated, and the instrument temperatures. Unfortunately, there is no way to measure any temperatures, including those of the meters.

Your task is now to answer the question: how good is your resistance measurement?

No hand-waving allowed.

HAVE FUN!
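
Not an attempt to answer for anyone, but a sketch of the kind of bookkeeping the exercise seems to call for, with entirely hypothetical readings and spec limits: each manufacturer spec limit is treated as a GUM Type B rectangular distribution, converted to a standard uncertainty, and the relative uncertainties are combined in quadrature for R = V/I.

import math

V = 4.987          # volts, single reading (hypothetical)
I = 0.02013        # amps, single reading (hypothetical)
R = V / I          # ohms

def u_rect(limit):
    # Type B standard uncertainty from a +/- spec limit, assuming a
    # rectangular distribution (GUM 4.3.7).
    return limit / math.sqrt(3)

# Hypothetical spec limits from an imagined manual: gain/calibration drift,
# temperature coefficient over an unknown outdoor range, and resolution.
v_limits = [0.0005 * V, 0.0010 * V, 0.001]      # volts
i_limits = [0.0010 * I, 0.0020 * I, 0.00001]    # amps

u_V = math.sqrt(sum(u_rect(a) ** 2 for a in v_limits))
u_I = math.sqrt(sum(u_rect(a) ** 2 for a in i_limits))

# For a quotient R = V/I the relative uncertainties add in quadrature.
u_R = R * math.sqrt((u_V / V) ** 2 + (u_I / I) ** 2)
k = 2                                           # expanded, ~95 % coverage
print(f"R = {R:.1f} ohm, U(k=2) = +/-{k * u_R:.1f} ohm")

The unknowable instrument temperature mentioned in the problem is exactly the kind of term that would widen those Type B limits further.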

Reply to  karlomonte
March 4, 2024 10:17 pm

The great expert on measurements ran away from even trying to solve this.

What a surprise.

Richard Barraclough
Reply to  Steven Mosher
March 2, 2024 6:48 am

Daily Average = Max+5C/ Min.
monthly average = daily average /30.
.017C.

Point taken, but there’s something wrong here

Surely the erroneous Daily Average = (Max + 5 + Min) / 2, which then becomes 2.5 C too high.

Divide this by 30 for its effect on the Monthly Average, and you get 0.083 deg C

old cocky
Reply to  Richard Barraclough
March 2, 2024 1:26 pm

He’s using mosh statistics.

Reply to  old cocky
March 2, 2024 3:42 pm

Not using statistics!

old cocky
Reply to  Bill Johnston
March 2, 2024 6:04 pm

Mosh seems to think he is.

Reply to  old cocky
March 3, 2024 5:04 am

What Mosh actually thinks, or whether he actually does think before posting, is quite irrelevant.

Reply to  Richard Page
March 3, 2024 6:59 am

Indeed, mosh is the ultimate drive-by poster.

old cocky
Reply to  Richard Page
March 3, 2024 3:51 pm

One sometimes wonders if an AI chatbot has taken over his account.

March 1, 2024 2:53 pm

Pristine temperature data is available. In 2005, NOAA set up a nationwide network of 114 stations called the U.S. Climate Reference Network (USCRN). It was designed to remove all urban heat distortions, aiming for “superior accuracy and continuity in places that land use will not likely impact during the next five decades”.

Currently the ‘pristine’ USCRN data show a higher rate of warming over the same period than ClimDiv. Since 2005, when USCRN starts, it shows a warming trend of +0.61F per decade. The ‘non-pristine’ ClimDiv only shows a warming trend of +0.48F per decade over that same period.

The ‘pristine’ temperature stations in the US are warming faster than the ‘non-pristine’ stations. The data set that uses stations that are not exposed to UHI is warming faster than the data set that includes stations that are exposed to UHI.

I have yet to see anyone from this site explain why that might be.

Mr.
Reply to  TheFinalNail
March 1, 2024 3:07 pm

Since 2005, when USCRN starts, it shows a warming trend of +0.61F per decade

Hang on, didn’t your mate Bellman just say –

it’s warming at the rate of 0.34°C / decade since 2005

Reply to  Mr.
March 1, 2024 3:31 pm

Same thing.

Mr.
Reply to  Bellman
March 1, 2024 5:28 pm

So why not say it that way?

Reply to  Mr.
March 1, 2024 5:40 pm

What way?

I use proper SI units. As do the majority of articles here.

Nick Stokes
Reply to  Mr.
March 1, 2024 3:33 pm

Missed some arithmetic at school?
0.34°C*1.8=0.612F

Mr.
Reply to  Nick Stokes
March 1, 2024 5:40 pm

Lots of it Nick 🙂
(too distracted by rugby, cricket, swimming, boxing, chasing (but not catching) sheilas, surf lifesaving, etc etc)

But I had to do an intensive arithmetic catch-up as an apprentice for 2 years to the auditor of the largest fruit & veg food processing company in Oz.

No modeling going on there, just observations, constant scepticism, dismemberment of numbers, and endless deep-diving into “what goes on behind the curtain”.

You should try it sometime . . .

Reply to  Mr.
March 1, 2024 5:59 pm

…dismemberment of numbers…

You can’t see the difference between F and C yet you ‘dismember numbers’.

Course you do.

Mr.
Reply to  TheFinalNail
March 1, 2024 7:23 pm

Yes, see my mission was to challenge the provenance of posted numbers – how were they arrived at, what were actual verified counts, what were pulled out of peoples’ arses.

You’d be surprised at how many of the latter I discovered.

(Sounds just like climate ‘science”, hey?)

Reply to  Mr.
March 1, 2024 4:04 pm

Where ‘Mr.’ learns the difference between F and C.

It’s actually quite educational, this place. If you keep your eyes open.

Reply to  TheFinalNail
March 1, 2024 4:23 pm

Oh yes, I do know the difference between F and C, F is more commonly used in the US and C in Europe, UK and many other countries. I also know that using both F and C as interchangeable units is a bloody stupid thing to do – pick one and shut up.

Reply to  Richard Page
March 1, 2024 4:32 pm

I used F because the NOAA chart I linked to uses F. I can’t get it to show the data in C (maybe you can?).

I would prefer to have used C but that would have led to confusion.

It’s not my fault that some people don’t know the difference between F and C, yet are happy to pontificate on all matters scientific.

Reply to  TheFinalNail
March 1, 2024 4:52 pm

Why don’t we very nicely assume that, because Mr Watts is American, and this is his site, that we use F. If we move to, say, Steve McIntyre’s site, we should more appropriately use C. Isn’t it just wonderful to have a system?

Reply to  Richard Page
March 1, 2024 5:29 pm

Why don’t we just read? F is F and C is C.

Reply to  TheFinalNail
March 2, 2024 3:44 pm

Because I’ve noticed commenters of all persuasions using C for a known figure when they should have used F and, more commonly, posting temperatures without either and leaving it up to readers to try to figure out what they meant. It can be a real mess at times.

Mr.
Reply to  TheFinalNail
March 1, 2024 5:41 pm

You don’t possess a mirror, do you TFN?

Reply to  Mr.
March 1, 2024 5:57 pm

You don’t possess a calculator and some spectacles, do you Mr.?

Mr.
Reply to  TheFinalNail
March 1, 2024 7:25 pm

I relied on what google served up to me.
Lesson learned.
Never trust “woke” about anything.

Reply to  TheFinalNail
March 1, 2024 4:37 pm

Yet you have not learnt a single thing.

You are still as gormlessly unaware of reality as you have always been.

Reply to  TheFinalNail
March 1, 2024 4:33 pm

It has been explained to you many times that ClimDiv started slightly higher, and that they have been honing their adjustment routines gradually to make it a better match to USCRN.

They are getting better as “adjusting” for the urban warming.

ClimDiv is a TOTALLY CONTRIVED data set, controlled by USCRN

Reply to  bnice2000
March 1, 2024 4:55 pm

The Futile Wail as much as admitted he has a very bad memory, so one should not mock the afflicted (too much anyway) and should help him out by reminding him again every time it comes up.
Y’know, just like we have to do with Richard Greene on EV’s.

Richard Greene
Reply to  Richard Page
March 1, 2024 5:23 pm

Speaking of EVs, global EV sales (BEV plus PHEV) were strong in January 2024, compared with January 2023 — up +69%.

This fact will make many conservatives go berserk.

EVs cost more than ICEs and hybrids, and deliver less.

Their sales will hit a wall someday, just not yet.

The Honest Climate Science and Energy Blog: Global EV (BEV plus PHEV) sales were booming in January 2024, up 69% year over year

Reply to  Richard Greene
March 2, 2024 3:46 am

I see we’re going to have to have ‘the conversation’ again, aren’t we? EV sales are a useful metric to establish which companies are viable and which aren’t, but they tell us nothing about the uptake of EVs. Now – if you provided figures that showed (EVs entering service minus EVs scrapped, written off or otherwise leaving service) then that would be damn useful. Given that there’s been only a 43% (ish) rise in EV sales and a 500% rise in EV write-offs during the same year, you can probably see why I don’t view this as good news for the EV industry.

Dave Andrews
Reply to  Richard Greene
March 2, 2024 7:44 am

In the UK EV sales are overwhelmingly to companies and fleets, so the data doesn’t tell you much about the take-up by the populace as a whole.

Reply to  bnice2000
March 1, 2024 4:59 pm

It makes no difference, in terms of trends, what levels the respective data sets start at.

The ‘totally contrived’ data set is warming at a slower rate than the pristine one.

I know this is hard for you to accept.

But it is also funny, and highlights the ridiculousness of this site.

Reply to  TheFinalNail
March 1, 2024 9:06 pm

You really haven’t got a clue what is happening, have you.

Yes, the adjustments DO make a difference to the trend. The gradual tuning of their urban warming adjustments is what has allowed ClimDiv to get closer and closer to USCRN.

It started higher and the difference has now levelled off.

[attached chart: ClimDiv-minus-USCRN]
Reply to  bnice2000
March 1, 2024 9:07 pm

That means that since about 2017, the trends have been essentially the same.

And all 3 USA temperature series.. SHOW COOLING since then.

[attached chart: Combine-USA-Temp-since-2015]
Richard Greene
Reply to  bnice2000
March 2, 2024 5:32 am

“ClimDiv is a TOTALLY CONTRIVED data set, controlled by USCRN” – bStupid2000

Wild speculation
No data
No evidence
No proof
(except that you are a conspiracy theory Nutter)

Richard Greene
Reply to  TheFinalNail
March 1, 2024 5:01 pm

They can’t handle the truth !

Reply to  Richard Greene
March 1, 2024 9:08 pm

You wouldn’t even recognise the truth.

It would waft windily over or through your empty head.

Richard Greene
Reply to  bnice2000
March 2, 2024 5:36 am

I have been waiting patiently for many months for you to just try to refute any science or energy statements I have made on this website. I would appreciate corrections or debate if any are needed. I’m still waiting.

I have the data
You have the claptrap

dk_
Reply to  TheFinalNail
March 1, 2024 6:51 pm

I have yet to see anyone from this site explain why that might be.

Shall we not recall the arguments, given many times, that both ground instrument data sets are deliberately adjusted to create a warming signal not borne out by satellite detections for the same period?

sherro01
Reply to  TheFinalNail
March 2, 2024 1:34 pm

TFN,
One easy explanation.

Remote sites have not yet had UHI from surroundings.
Urban sites had most of their UHI years ago and have plateaued.
Result?
Remote sites seem to be warming faster than urban.
It is easy to create scenarios to fit the observations.
It is hard to prove that your scenarios are correct.
Geoff S

March 1, 2024 2:57 pm

Lo these many years, several folks here have been decrying the absolute insanity of climate science when it comes to temperature measurements and their uncertainty. Climate science basically treats the science of metrology as unnecessary, since all they are doing is calculating run-of-the-mill simple averages. Their position is that any errors are random, Gaussian, and cancel over many, many averages.

What has galled me for years is what I was taught in every physical lab I took in electrical, physics, chemistry, and surveying. The fact that averages could not extend measurements beyond what was measured was discussed at the beginning of each and every lab. Failure to follow this simple rule would result in at least an Incomplete and at worst a Fail. Significant digit rules were followed to the letter.

Climate science treatment of data is the most haphazard, unruly, and least professional I have ever seen. They are overrun with statisticians who have been trained that each and every piece of data is 100% accurate. The more numbers you can average, the more accurately you can supposedly calculate the mean, by dividing by √n. That means you can quote measurements far below the resolution actually measured.

They just can’t fathom that measurements of real-world physical phenomena will always have uncertainty. Every time I read someone saying that how closely you can calculate the mean of a set of numbers is the uncertainty in what was actually measured, my heart rate spikes. These folks have never measured the end gap on piston rings, never used Plastigauge on main bearings, or measured the taper in cylinders. Heck, according to them, I can keep a pile of used brake rotors behind the shop and, when adding a new one to the pile, revise the average measurement, divide the standard deviation by √n, and tell customers the values I obtained are correct to ten-thousandths of an inch, or better. All I would be doing is homogenizing the measurements.

Hopefully, as more and more money is wasted, people will wake up and finally realize the data being used is not only not fit for service, but has been artificially modified to keep in step with confirmation bias.
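
For readers who want to see the distinction being argued here in numbers, a minimal sketch in Python (hypothetical rotor measurements, not real shop data): the standard error of the mean shrinks with √n, but a fixed calibration bias in the micrometer does not average away, and the resolution limit does not magically become accuracy.

    import numpy as np

    rng = np.random.default_rng(1)
    true_thickness = 1.2500    # inches, assumed true value
    bias = 0.0020              # hypothetical fixed calibration offset in the micrometer
    resolution = 0.001         # instrument resolution, inches

    for n in (10, 100, 10_000):
        # readings = truth + bias + random scatter, quantised to the instrument resolution
        readings = true_thickness + bias + rng.normal(0, 0.003, n)
        readings = np.round(readings / resolution) * resolution
        sem = readings.std(ddof=1) / np.sqrt(n)
        print(f"n={n:6d}  mean={readings.mean():.5f}  SEM={sem:.5f}  "
              f"error vs truth={readings.mean() - true_thickness:+.5f}")

The SEM falls toward zero as n grows, but the reported mean stays about 0.002 inch high: the systematic part never cancels.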

Reply to  Jim Gorman
March 1, 2024 3:44 pm

They just can’t fathom that measurements of real world physical phenomenon will always have uncertainty. 

HadCRUT and NOAA always publish uncertainties in their monthly updates.

(UAH don’t.)

Reply to  TheFinalNail
March 1, 2024 4:28 pm

UAH uncertainty ranges are published and stated explicitly in the documentation. I’ll take a wild stab in the dark and guess you’ve never read that far?

Reply to  Richard Page
March 1, 2024 5:01 pm

UAH uncertainty ranges are published and stated explicitly in the documentation. 

Can you link to that please? I’m happy to admit if I got that wrong.

Reply to  Richard Page
March 1, 2024 5:14 pm

I’d like a reference as well. The only figure usually quoted comes from their previous version.

Reply to  Bellman
March 1, 2024 5:31 pm

Yes, UAH certainly don’t quote uncertainties in their monthly updates, unlike HadCRUT and NOAA.

Richard Greene
Reply to  TheFinalNail
March 2, 2024 5:43 am

HadCRUT and NASA claimed uncertainties are TOTAL BS

Especially the claim of +/- 0.1 degrees for the 1800s

Only a fool would believe that.

Reply to  Richard Greene
March 2, 2024 10:36 am

That’s because they are NOT uncertainties but mathematically derived probabilities – a bit like a confidence level – that’s the mathematical probability that the correct answer is somewhere in that range. The actual uncertainty range would be huge, in comparison, completely swamping the data.

Reply to  Richard Page
March 2, 2024 11:47 am

They are not even probabilities. Have you ever seen an histogram of global temperatures or anomalies? Global ΔT’s are always touted with small uncertainties that rely on a Gaussian distribution. Yet no one has ever produced a simple histogram on this site. That sounds like it is being ignored, either from ignorance or on purpose.

Reply to  Jim Gorman
March 2, 2024 12:53 pm

A histogram of what?

You keep demanding people do this work for you, but never explain exactly what you want from them.

If you just want a histogram of anomalies, here’s one of all of the UAH gridded data.

[attached chart: 20230302wuwt2]
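
For anyone wanting to reproduce that kind of plot, here is a minimal sketch, with a random stand-in array in place of the real UAH grid files (whose format needs its own reader); the one substantive point is that cells should be weighted by cos(latitude) so polar cells do not dominate the count. The names anoms and lats are placeholders, not UAH’s.

    import numpy as np
    import matplotlib.pyplot as plt

    # stand-in data, just to make the sketch runnable; swap in the real gridded anomalies
    n_months, n_lat, n_lon = 540, 72, 144
    lats = np.linspace(-88.75, 88.75, n_lat)
    rng = np.random.default_rng(0)
    anoms = rng.normal(0.0, 0.8, (n_months, n_lat, n_lon))

    # area weight per cell, proportional to cos(latitude)
    weights = np.broadcast_to(np.cos(np.deg2rad(lats))[None, :, None], anoms.shape)

    valid = np.isfinite(anoms)
    plt.hist(anoms[valid], bins=200, weights=weights[valid], density=True)
    plt.xlabel("Anomaly (°C)")
    plt.ylabel("Area-weighted density")
    plt.show()
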
Reply to  Bellman
March 2, 2024 1:13 pm

That’s great! So the mean is zero. That is, the anomaly with the highest probability is 0 degrees. You do realize that growth in global UAH temperature anomalies would show the highest percent at the growth temperature over that time frame, right?

Reply to  Jim Gorman
March 2, 2024 1:34 pm

You do realize the UAH base period is 1991 – 2020?

That histogram is showing the entire range over 45+ years. You would expect the mean to be below zero, as there are more years before the base period than after.

In fact the mean is -0.053°C.

I’ve no idea what you mean by “show the highest percent at the growth temperature”.

Here’s all the data points for last year. The mean is 0.51°C.

(It is odd that there is a spike around zero. Maybe a mistake in my code, but other years don’t show it.)

[attached chart: 20230302wuwt3]
Reply to  Bellman
March 2, 2024 5:15 pm

I’ve no idea what you mean by “show the highest percent at the growth temperature”.

I didn’t say it well. If most instances of anomalies had grown by +1 degree, the mean would be centered at that value.

You can’t insist that an average of anomalies is accurate because it is Gaussian, then do a histogram that shows there are as many values below the mean of “0” as there are above the mean. If the mean is “0”, then that is what it is.

Funny how you have two peaks. Yet have no idea why. It is not an error in your code, it is due to two hemispheres each with different temperature profiles.

As a confirmed statistician I am surprised you have never investigated the PDF of your data.

Reply to  Jim Gorman
March 2, 2024 5:35 pm

If most instances of anomalies had grown by +1 degree, the mean would be centered at that value.

Maybe I didn’t explain the histogram well. It isn’t showing the changes in temperatures – it’s showing every grid point for every month of the UAH record. I’m not sure if it has any particular meaning – it’s just the broadest possible histogram I could produce.

As I keep saying – you continuously ask for histograms, variances etc. But never explain what data you want them of, or what useful information you hope to get from them.

You can’t insist that an average of anomalies is accurate because it is Gaussian

Pay attention. I’m not the one who makes such a claim. You are the one who keeps insisting everything has to be Gaussian for the average to be accurate.

If the mean is “0”, then that is what it is.

Again, the UAH anomaly is based on the mean of temperatures from 1991 – 2020. The average of those 30 years must be 0 – by definition. That only leaves the 3 years since then and the 12 years before that to have any effect on the overall average.

Reply to  Bellman
March 2, 2024 5:58 pm

it’s showing every grid point for every month of the UAH record.

Keep trying to rationalize what it is showing.

A mean of “0” means no warming is occurring over the period you chose. I don’t care if it is every grid square, state, county, etc.; there are as many below zero as above.

Reply to  Jim Gorman
March 2, 2024 6:43 pm

“A mean of ‘0’ means no warming is occurring over the period you chose.”

Just when I think you are trying to be helpful you start talking nonsense again. How can it show warming when there is no time involved. It’s a static snapshot of the entire period. If you want to see warming, look at the average of a recent year and compare it to an earlier one.

Here’s a comparison of two Decembers.

[attached chart: 20230302wuwt5]
Reply to  Bellman
March 2, 2024 7:09 pm

You really don’t understand do you?

KM has called you a trendologist. That is all you appear to be comfortable with. How many times have you been told that you assume all error is random, Gaussian, and cancels. Well guess what, this shows what you are analyzing has a Gaussian distribution of data.

A trend would show a skewed distribution where there are more + values than – values, or vice versa.

Reply to  Jim Gorman
March 2, 2024 7:34 pm

You really don’t understand do you?

You assume me not being able to understand your bizarre arguments reflects badly on me.

KM has called you a trendologist.

He’s called me all sorts of names. Name calling is all the troll has.

That is all you appear to be comfortable with.

All we’ve been talking about is histograms – nothing about trends. You claim that you should be able to see a warming trend in a timeless histogram. Rather than explain yourself you start resorting to second-hand name calling.

How many times have you been told that you assume all error is random, Gaussian, and cancels.

It doesn’t matter how many times you repeat that lie – I do not assume that. And this has nothing to do with the histograms.

Well guess what, this shows what you are analyzing has a Gaussian distribution of data.

Eh? First you accuse me of assuming all distributions are Gaussian, then you claim this distribution is Gaussian.

A trend would show a skewed distribution where there are more + values than – value or vice versa.

You really have no idea of what you speak. A skewed distribution would mean the distribution was skewed – not that there was a trend. You don’t need to look at a static histogram to know if there is a trend – you just have to look at the trend.

Reply to  Jim Gorman
March 2, 2024 6:32 pm

Funny how you have two peaks. Yet have no idea why. It is not an error in your code, it is due to two hemispheres each with different temperature profiles.

I’m not sure about that. Looking at the entire data, there isn’t much difference between the two hemispheres, and both have a strong peak at zero.

Looking at just 2023 there is a difference, and there is a very strong zero spike in the Southern Hemisphere.

Looking at the raw, unweighted cell figures, it’s noticeable that in just about every year, there are many more cells with an anomaly of exactly zero, than any other value.

I’ll have to look at this in more detail when I get a chance. It could be an artifact in the UAH cell data, or more likely in how I’ve processed it.

[attached chart: 20230302wuwt4]
Reply to  Bellman
March 2, 2024 6:46 pm

It’s not like we haven’t tried to tell you that the global average anomaly growing with CO2 is a joke. You’ve just discovered for yourself why that is. Your histogram isn’t lying to you.

Reply to  Bellman
March 3, 2024 5:03 am

I think I’ve figured out the issue with the spikes at zero. The spike is present in the UAH gridded data – and it seems that in general there are about twice as many cells with an anomaly of 0.00 as there are cells with an anomaly of -0.01 or +0.01.

My suspicion is that this is a rounding error. Negative numbers might be being rounded up, but positive numbers down. That would mean the 0 anomaly covers twice the range of other values.
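
That hypothesis is easy to test on synthetic data: if values are truncated toward zero (negatives rounded up, positives rounded down), the 0.00 bin spans (-0.01, +0.01) and so collects roughly twice the count of its neighbours. A quick sketch, not the UAH processing itself:

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 0.5, 1_000_000)      # smooth, zero-centred synthetic anomalies

    # truncation toward zero: -0.007 -> 0.00 and +0.007 -> 0.00
    cents = np.trunc(x * 100).astype(int)
    counts = Counter(cents.tolist())
    print(counts[0], counts[1], counts[-1])  # the 0.00 bin is roughly double its neighbours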

Reply to  Bellman
March 3, 2024 10:48 am

Pull the graphs up and put your ear down real close to the monitor. See if you don’t hear “We don’t know for sure.” “We are uncertain of the answer.” “Maybe significant digits weren’t used.”

Reply to  Jim Gorman
March 3, 2024 6:47 pm

What are you on about now?

Reply to  Jim Gorman
March 2, 2024 4:26 pm

No, while I don’t have the data, the graph indicates the data are slightly long-tailed. The most frequent of the observations (the apex) is slightly warmer than zero. Also, calculated as anomalies, as each data-point is virtually zero-centered, we would expect a zero-centred histogram (which is different to saying the mean is zero).

While I’ll probably be corrected by karlo, that is my take.

All the best,

Bill

Reply to  Bill Johnston
March 2, 2024 5:47 pm

That is not what a histogram shows. It shows the distribution of values with the highest probability being the Expected value of the data.

The PDF of the data should be the first thing investigated when doing a statistical analysis. One needs to know what assumptions must be made.

This histogram shows the most common value is an anomaly of “0” warming, with equal distribution surrounding it. Going to be hard to refute.

Reply to  Jim Gorman
March 2, 2024 8:17 pm

You are right, it may or may not be a histogram; however, as shown by others, a PDF is continuous and does not have discrete ‘columns’ depicting classes.

The highest likelihood still does not appear to align with zero on the x-axis. The slight offset indicates a slightly long-tailed distribution, would you not agree?

All the best,

Bill

Reply to  Bill Johnston
March 2, 2024 8:49 pm

a PDF is continuous and does not have discrete ‘columns’ depicting classes.

Wrong.

Reply to  karlomonte
March 2, 2024 9:38 pm

Dear karlo,

I thought you would know that the shape of a histogram depends on the number of bins, whereas a probability density function does not – a PDF can be likened to a histogram having an infinite number of bins.

There are any number of examples on the internet.

I made some code that extracts PDFs from daily temperature and various other high-frequency data, which can be handy to compare before vs after an event or a change, or between datasets over the same time-frame.

All the best,

Bill

Reply to  Bill Johnston
March 3, 2024 9:19 am

Temperatures are discrete samples, not a continuous function.

Think Nyquist and what it takes to achieve an accurate representation of the continuous daily temperature function.
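
One way to see the sampling point: build a skewed (but exactly known) diurnal temperature curve and compare the traditional midrange, (Tmax + Tmin)/2, with the time-averaged temperature from dense samples. A minimal sketch with an assumed profile, not real station data:

    import numpy as np

    t = np.linspace(0, 24, 24 * 60, endpoint=False)   # one sample per minute, hours
    # assumed skewed diurnal profile: sharp afternoon peak, long cool night
    temp = 10 + 8 * np.exp(-((t - 15) ** 2) / (2 * 2.5 ** 2))

    midrange = (temp.max() + temp.min()) / 2           # the traditional (Tmax + Tmin)/2
    dense_mean = temp.mean()                           # average of the densely sampled curve
    print(f"midrange = {midrange:.2f} C, integrated mean = {dense_mean:.2f} C, "
          f"difference = {midrange - dense_mean:+.2f} C")

For this shape the two differ by close to 2°C; only for a symmetric curve do they coincide, which is why two samples per day cannot, in general, represent the continuous daily record.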

Reply to  Bill Johnston
March 2, 2024 6:14 pm

While I’ll probably be corrected by karlo, that is my take.

Not going to bother, it’s hopeless.

Reply to  Bellman
March 3, 2024 8:30 am

Me 3. I distinctly remember searching “the documentation” exhaustively, and finding nada. Of course I might have missed it, and/or Mr. Page’s linkage farther down…

Richard Greene
Reply to  TheFinalNail
March 2, 2024 5:39 am

I’d like to know the uncertainty for infilled numbers that are NEVER verified.

The answer is unknown

In the 1800s the majority of surface grids had infilled numbers

Reply to  TheFinalNail
March 2, 2024 11:24 am

HadCRUT and NOAA always publish uncertainties in their monthly updates.

Really? They use the same method as Mosher below.

Daily Average = (Max + Min)/2, with ±0.5°C uncertainty.

Monthly average uncertainty = 0.5°C / 30 ≈ 0.017°C.

Now recall that the land record is 30% of the total, so we are talking about 0.005°C.

It is a piece of shite.

You want to prove your chops? Give everyone here your references that show metrological practices and procedures that allow you to calculate uncertainty this way.

Here is a place to start.

Daily Average = (Max + Min)/2 ± 0.5°C.

What is the uncertainty in “Max”?
What is the uncertainty in “Min”?
How are those uncertainties combined?
Is the “0.5” systematic uncertainty or measurement uncertainty?
Did Mosher properly account for all the uncertainties?
Did Mosher compute combined uncertainty of single measurements properly?

If you can’t answer these questions, then you have no business telling others that they are wrong!
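
For contrast, a minimal sketch of textbook GUM-style propagation through that same chain, under the best-case assumption that every error is random and independent (which is exactly what is in dispute). The component values are assumptions for illustration, not anyone’s published figures.

    import numpy as np

    u_max = 0.5   # assumed standard uncertainty of a daily Max reading, deg C
    u_min = 0.5   # assumed standard uncertainty of a daily Min reading, deg C

    # daily midrange T = (Max + Min) / 2  ->  u_daily = sqrt(u_max**2 + u_min**2) / 2
    u_daily = np.sqrt(u_max**2 + u_min**2) / 2

    # monthly mean of 30 days, IF the 30 daily errors are independent and random
    u_monthly_best_case = u_daily / np.sqrt(30)

    # the naive division by 30 instead of sqrt(30)
    u_monthly_naive = u_daily / 30

    print(f"u_daily ~ {u_daily:.3f} C, best-case monthly ~ {u_monthly_best_case:.3f} C, "
          f"naive /30 ~ {u_monthly_naive:.3f} C")

Any systematic component (siting, drift, time-of-observation changes) does not shrink at all under either divisor; it carries straight through to the monthly value.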

Reply to  Jim Gorman
March 1, 2024 4:02 pm

“The fact that averages could not extend measurements beyond what was measured was discussed at the beginning of each and every lab.”

Have you never considered the possibility that your teachers were wrong?

and finally realize the data being used is not only not fit for service

Explain again how the “pause” in the data proves that CO2 does not cause warming – despite the same data not being fit for purpose.

Reply to  Bellman
March 1, 2024 4:40 pm

You are well aware that the only atmospheric warming in UAH data is at El Nino events

There is no CO2 atmospheric warming signal in the whole of the 45 years of UAH data. !

You have yet to produce one single bit of scientific evidence that CO2 causes warming.

Reply to  bnice2000
March 1, 2024 4:41 pm

The land surface data shows lots of urban and airport and homogenisation warming

But you can’t coax any evidence of CO2 warming out of that garbage, either..

leefor
Reply to  bnice2000
March 1, 2024 8:23 pm

*

Richard Greene
Reply to  bnice2000
March 1, 2024 5:28 pm

There are over 100,000 scientific studies with empirical evidence of AGW. And one AGW denier here who consistently sounds like a fool.

leefor
Reply to  Richard Greene
March 1, 2024 8:25 pm

Does this “empirical” evidence lead one to conclude a temperature rise for a doubling of CO2?

Richard Greene
Reply to  leefor
March 2, 2024 7:35 pm

+0.7 degrees C. to +0.8 degrees C for CO2 x 2 based on lab spectroscopy isolating the effect of CO2 with no water vapor in the air.

Reply to  bnice2000
March 1, 2024 5:33 pm

Oh dear, it looks like I’ve triggered the resident troll again.

I have not mentioned CO2. I’m asking for evidence that the superior CRN data demonstrates that the older station data was unreliable. So far the only indication is that at best the two data sets are not significantly different, or at worst that the CRN data indicates the old data was under representing the amount of warming.

Reply to  Bellman
March 2, 2024 3:51 am

No you haven’t, you give yourself far too much undeserved credit – it’s just the usual bnice/Greene argument starting again – it happens at one point or another on nearly every article. You learn to tune most of it out after a while.

Reply to  Bellman
March 2, 2024 12:35 pm

CRN has an accuracy of ±0.3°C per NOAA. ASOS has an accuracy of ±1.8°F per NOAA.

Exactly what do you think the accuracy of LIG thermometers was in the early 1900’s?

Reply to  Jim Gorman
March 2, 2024 2:45 pm

10 mK

Richard Greene
Reply to  bnice2000
March 2, 2024 5:49 am

“There is no CO2 atmospheric warming signal in the whole of the 45 years of UAH data. !”

Two facts:

(1) The UAH data do not reveal the cause of the warming

(2) You are an El Nino Nutter because you completely ignore the cooling effects of La Ninas.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidRvZBgyaIOaaqXFjBJ_hxD3wnvKUwSQHBTmIcr4hTM-Ta8dbF_0SvFykiXYklBAQEfQa7xWtos9hCCxCK8oeozfR9Vt32Je9qOoya1rYEpErnjd9mEtMLlKi0vt6jomHe2z6bo6DMSJmrgdLFDaReLyxo5IiBlFQJnsknSgz6xTJtK_XC68ROPBQD38zF/w640-h430/image-66.webp

Reply to  Bellman
March 2, 2024 12:15 pm

You are joking, right? Tell us how many BS-level physical science labs you have taken. These are taught under the supervision of PhD-level teachers.

I have shown you lab instructions from multiple universities that all say the same thing. There are even commercial web sites that discuss these issues that I haven’t bothered with, just because I knew I would get this response.

Let me point out that your backassward attempt at an Appeal to Authority is a perfect example of a failed argument.

You want to PROVE something? Show some university lab instructions that tell students it is ok to use averaging to extend the resolution of a series of measurements.

Better yet show us some metrology texts that allow resolution to be extended by averaging.

Reply to  Jim Gorman
March 2, 2024 4:46 pm

You should know by now that these arguments by authority don’t work.

If they all say that it’s impossible to get a more precise average than the individual measurements, then provide the proof. I’ve demonstrated to you many times how it’s possible, I’ve given you simulations, I’ve referred you to the books that prove how it’s possible.

Now you’re accusing me of an appeal to authority, because I suggest your teachers might have been mistaken. I’m not sure you understand what Appeal To Authority means.

Better yet show us some metrology texts that allow resolution to be extended by averaging.

Taylor – Exercise 4.15

Your answer will illustrate how the mean can have more significant figures than the original measurements.
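
For what it is worth, the standard textbook demonstration runs in a few lines: readings recorded to a coarse one-unit resolution, but dithered by genuine random scatter larger than that resolution, do let the mean settle on a value finer than any single reading. (The caveat argued elsewhere in this thread still applies: this works only for random error and does nothing for a systematic offset.) A minimal sketch, not anyone’s actual data:

    import numpy as np

    rng = np.random.default_rng(2)
    true_value = 20.37   # the value we pretend to be measuring
    # readings recorded to whole units only, with random scatter of about 1 unit
    readings = np.round(true_value + rng.normal(0, 1.0, 10_000))

    print(np.unique(readings))                       # only whole numbers were ever recorded
    print(f"mean of readings = {readings.mean():.3f} (true value {true_value})")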

Richard Greene
Reply to  Jim Gorman
March 2, 2024 7:38 pm

Everyone knows real science requires three or more decimal places. Less than that is just malarkey.

Reply to  Jim Gorman
March 2, 2024 4:12 pm

More preaching Jim?

When was the last time you measured wear on brake pads using thermometers.

While some of what you say is relevant, you just don’t get that it is the climate that varies not the instrument, whose uncertainty is fixed.

You also don’t get that, by convention, averages are expressed to one more significant digit (so the person interested in the value can see how it rounded), and for the same reason values describing variation about that value (SD or SEM) include one more digit, not 10 or 100 more digits as you like to claim. Homogenisation is also different to averaging. P-levels are also given as actual values (mostly to 3 places, or if smaller, as P<0.001), and R^2 to three decimal places so it can be transformed into a percentage with one place. If an average is 52.5, is it actually 52 or 53? The 0.5 is there to inform the person looking at the value, that is all.

I agree that splitting hairs over a second decimal place averaged from whole degrees is not worth the word-count. However, people actually do it, and some do it on WUWT.

Also see: https://wattsupwiththat.com/2024/03/01/exclusive-a-third-of-u-k-met-office-temperature-stations-may-be-wrong-by-up-to-5c-foi-reveals/#comment-3876412

All the best,

Bill

Reply to  Bill Johnston
March 2, 2024 6:15 pm

When was the last time you measured wear on brake pads using thermometers.

More idiocy from the world’s great expert.

March 1, 2024 3:00 pm

Any idea what happens when a UK temperature series is constructed from just class 1 and 2 stations?

Reply to  bnice2000
March 1, 2024 3:30 pm

It drops about 2-4°C, drops another 1-2°C when you account for the modern electronic measuring systems and even more if you can get rid of the UHI signal.

Reply to  Richard Page
March 1, 2024 3:44 pm

It drops about 2-4°C, drops another 1-2°C when you account for the modern electronic measuring systems and even more if you can get rid of the UHI signal.

And your evidence for this is….?

Reply to  TheFinalNail
March 1, 2024 4:30 pm

It’s all right, TFN, I’m using the Michael Mann and Phil Jones method.

Reply to  Richard Page
March 1, 2024 5:02 pm

 I’m using the Michael Mann and Phil Jones method.

So peer-reviewed scientific literature then. It should be easy to provide a link….

Reply to  TheFinalNail
March 1, 2024 4:43 pm

DENIAL of urban warming.

DENIAL of over-active thermometers.

Is there anything real that you don’t deny ??

Reply to  bnice2000
March 1, 2024 5:05 pm

Denial of the fact that the ‘pristine’ sites are warming faster than the so-called ‘UHI-contaminated’ sites?

This gets more bizarre by the minute.

Reply to  TheFinalNail
March 2, 2024 3:53 am

That again – where is the USCRN documentation with the WMO classification – point me towards that and then we can discuss it.

Reply to  TheFinalNail
March 2, 2024 1:02 pm

Here is a question for you.

Why has climate science continued using the old traditional method of two samples per day? Since 2005 and CRN we have sub-minute data that allows the calculation of degree-day and degree-night values, or even a whole-day degree total, by integrating those readings.

Better yet humidity is available that allows enthalpy integration calculations over extended periods of a day.

Why are these not being used by you and climate science?
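
For what such a calculation would look like, a minimal sketch of a degree-day integration from high-frequency readings, using made-up five-minute data and an assumed 18°C base temperature (trapezoidal integration of the excess above the base); the profile and base are illustrative only:

    import numpy as np

    base = 18.0                                   # assumed base temperature, deg C
    t_hours = np.arange(0, 24, 5 / 60)            # five-minute samples across one day
    # made-up diurnal curve: 15 C at night, peaking at 23 C around solar noon
    temps = 15 + 8 * np.sin(np.pi * (t_hours - 6) / 12).clip(min=0)

    excess = np.clip(temps - base, 0, None)       # only count time spent above the base
    degree_days = np.trapz(excess, t_hours) / 24  # deg C x day for this one day
    print(f"degree-days above {base} C: {degree_days:.3f}")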

Reply to  Jim Gorman
March 2, 2024 1:38 pm

Because science has not yet devised a time machine.

old cocky
Reply to  Bellman
March 2, 2024 1:59 pm

It is possible to calculate the enthalpy, degree-day and degree-night figures for those sites which have the data available, for the period in which the observations have been taken.

I think Australian BoM observations have recorded humidity and air pressure since the start (1910), though the granularity would only be daily.

leefor
Reply to  Bellman
March 2, 2024 6:43 pm

And yet UK Met thinks they have with the resolution of their “data”.

sherro01
Reply to  Jim Gorman
March 2, 2024 5:00 pm

Excellent question, Jim.
I suggest intellectual laziness to be part of the answer. Geoff S

March 1, 2024 3:31 pm

Obvious point.
This has no impact on trends or deviations from the norm.
Location errors mean they consistently read wrong – which is irrelevant to the question of AGW.

So the stations cannot be used to set new record temperatures.
But the one thing they can be used for is investigation of trends.

Not counting the long-term effect of encroaching urban heat islands, of course.

Reply to  MCourtney
March 1, 2024 3:49 pm

Not counting the long-term effect of encroaching urban heat islands, of course.

What do you say about the observable fact that ‘pristine’ USCRN is warming faster than the so-called bedevilled ClimDiv data in the US, over their joint period of measurement?

Are urban heat islands cooling the ClimDiv data; or are adjustments cancelling out real warming??

Reply to  TheFinalNail
March 1, 2024 4:34 pm

We’re not talking about USCRN but the UK network, just try to keep up, will you?
MCourtney – the problem is that we don’t know for sure that the error would be a constant or a variable. If it’s a variable then it throws the whole idea of a trend right out the window.

Reply to  Richard Page
March 1, 2024 5:09 pm

We’re talking about the suggestion that ‘pristine’ sites, whether in the US or UK, i.e. sites not contaminated by UHI, will show a more accurate trend than those affected by UHI.

The evidence is already in. The ‘uncontaminated’ sites are warming faster than the ‘contaminated’ sites.

Pristine sites have a faster warming trend than those affected by UHI. It’s right there in front of your eyes.

Reply to  TheFinalNail
March 2, 2024 3:58 am

Every damn time we try to have a conversation about the UK sites and the WMO classification you go and drag the USCRN into it, usually where it is most inappropriate. STOP IT. You’re deliberately obfuscating the issue with the USCRN ploy – what are you trying to hide, TFN? Evidence of cooking the books or a thumb on the temperature scales? Do you, personally, wish to be seen as guilty by association by covering up?

Richard Greene
Reply to  Richard Page
March 2, 2024 5:53 am

The article discusses USCRN

Reply to  TheFinalNail
March 1, 2024 4:45 pm

AGAIN.. you refuse to comprehend…

You CHOOSE to remain deliberately ignorant.

ClimDiv is being adjusted to match USCRN.

They have been gradually improving their algorithm that removes urban warming from the ClimDiv measurements to get a better match.

Reply to  bnice2000
March 1, 2024 5:15 pm

ClimDiv is being adjusted to match USCRN.

lol!

It’s worse than we thought.

The ‘contaminated’ ClimDiv has a slower warming rate than USCRN; so they are artificially making it warmer to match the ‘pristine’ warming rate reported by USCRN!?

Comedy, mate, Shambles.

Richard Greene
Reply to  TheFinalNail
March 2, 2024 6:02 am

bStupid2000 is the Floyd R. Turbo of conservative climate “science”

Johnny Carson Tonight Show – April 8,1992 – seg 2 – Floyd R. Turbo (skit) (youtube.com)

Reply to  TheFinalNail
March 2, 2024 7:36 am

Ok, TFN. I’ll bite if you are absolutely determined to have your way and drag the USCRN into the discussion of UK Met Office sites. But first I’ll need some basics – when the USCRN was first set up, what temperature system was it initially calibrated with, please?

Reply to  Richard Page
March 2, 2024 8:29 am

C’mon TFN – you wanted this conversation and you’re leaving me hanging here. One simple answer is all I need from you – what was USCRN initially calibrated with, please?

Reply to  Richard Page
March 2, 2024 9:08 am

No? Nothing?
Ok then, I’ll tell you – both USCRN and nClimDiv were calibrated using the USCHN network. That means that the heavily biased, badly sited and corrupted USCHN propagated its errors into both nClimDiv and USCRN in calibration. I’m sure that the scientists setting it up probably thought the same as bdgwx or Bellman (can’t remember which originally said it, sorry) that the errors were gaussian, random and would average out over time. Unfortunately they were wrong – what you’ve got there are two badly calibrated datasets going for a random walk through the garden of errors and uncertainties where they often accumulate, not average out. Presumably, without the frequent adjustments to bring it in line with USCRN, nClimDiv would be all over the place. If you set something up badly you’ll never get good results from it, no matter how accurate the instruments are.

Reply to  Richard Page
March 2, 2024 11:32 am

Duh. Obviously all references to USCHN in the above comment should actually read USHCN. I can’t keep track of all these acronyms.

observa
March 1, 2024 4:12 pm

Out by up to 5 deg C you say?

“The urban heat island effect is a phenomenon that affects a lot of big cities around the world,” says Dr Gloria Pignatta of the UNSW’s City Futures Research Centre.
The effect refers to urban centres being significantly warmer than rural areas – estimates vary between 1C and 13C on average. It occurs as a result of land modification; built-up areas have less green cover and more hard surfaces which absorb and radiate heat.
Hot in the city: can a ban on dark roofs cool Sydney? | Urban planning | The Guardian

The dooming is a feel kinda thingy.

dk_
March 1, 2024 4:22 pm

Highest acknowledged instrument error in 2500 years, scientismists say!

Reply to  dk_
March 2, 2024 4:01 am

Highest acknowledged instrument error since the architect of the perfectly straight tower of Pisa messed up! 😆

dk_
Reply to  Richard Page
March 2, 2024 3:32 pm

The architect was not at fault! Somewhat like Al Gore, Pisa’s tower is built half on sand.

Reply to  dk_
March 3, 2024 5:16 am

Yeah. Show me the architect that doesn’t know how to build a structure correcting for soft material in the foundations?

Richard Greene
March 1, 2024 4:24 pm

I’ve been waiting since 2009 for some other nation to have a weather station siting survey. Finally, there is one. Just what we expected from reading the US survey here in, I think, 2009.

When I started reading about the climate in 1997, it took me one hour to doubt the claim that the climate was going to be bad news in 100 years. No one could know that. Government scientists claimed they knew. So I immediately did not trust government climate science.

The US weather station siting survey in 2009 just added to my distrust. The second US survey showed NOAA did not react to the first survey — siting was even worse.

For these reasons, I do not trust NOAA (and NASA-GISS). I prefer UAH.

When I do not trust an organization, for a good reason, I do not believe anything they claim. For unknown reasons, most conservatives do not follow this Rule of Thumb. They know NOAA doesn’t care about weather station siting for the stations they use for the US average. Several decades ago, NOAA reduced temperature records from the mid-1930s to make 1998 the hottest US year (at that time).

Nevertheless, most conservatives seem to buy NOAA’s claim that USCRN provides accurate numbers: ‘You’ve lied to us before but we trust you now, NOAA’. That is not logical.

I recommended the Sceptic article on my blog this morning in spite of the fact that it was a cheerleader for NOAA’s USCRN

The Honest Climate Science and Energy Blog: March 1, 2024 Recommended Reading List

“Pristine temperature data is available. In 2005, NOAA set up a 114 nationwide network of stations called the U.S. Climate Reference Network (USCRN).”
From The Sceptic

Either you trust NOAA or you do not. I do not trust NOAA or any government climate science agency. Why should I?

I believe a lot of conservatives trust USCRN because they glanced at the chart and believe it shows no warming since 2005. If the USCRN chart showed obvious warming at a glance, I believe there would be less trust.

BIG MISTAKE
It turns out that eyeballing graphs is not an effective way to analyze data.

I’ve got bad news for conservatives: USCRN’s surface warming trend is FASTER than the global average warming trend.

“USCRN actually shows faster warming over the period of overlap than the old weather network. So rather than “hiding” the weather network that shows no recent warming, NOAA’s new USCRN actually shows faster warming than the official US ClimDiv temperature record.”

“While the trend uncertainties are larger (due to representing only 2% of the planet!), we still see statistically significant warming in the USCRN record over the 2005-2022 period, with a central estimate of 0.30C per decade, compared to 0.23C per decade for both the ClimDiv network and the world as a whole.”

[attached chart]

SOURCE OF QUOTES:
The most accurate record of US temperatures shows rapid warming (theclimatebrink.com)

The USCRN data at the link above are from 2005 through 2022.
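
For anyone who wants to check trend figures like those quoted above for themselves, the decadal trend is just an ordinary least-squares slope fitted to the monthly anomaly series and scaled to °C per decade. A minimal sketch on synthetic monthly data (not the actual USCRN or ClimDiv series):

    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(2005, 2023, 1 / 12)           # monthly time axis, Jan 2005 .. Dec 2022
    # synthetic anomalies: an assumed 0.03 C/yr underlying trend plus monthly noise
    anoms = 0.03 * (years - years[0]) + rng.normal(0, 0.3, years.size)

    slope_per_year = np.polyfit(years, anoms, 1)[0]
    print(f"trend = {slope_per_year * 10:.2f} C per decade")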

Reply to  Richard Greene
March 2, 2024 2:39 pm

The USCRN stations are certainly better sited than the USHCN ones were but pristine? There are always going to be some siting issues – probably easiest to say ‘least worst’ sites.

Richard Greene
Reply to  Richard Page
March 2, 2024 7:46 pm

The USCRN stations could be perfectly sited and very accurate

But what we get is a statistic called the US average temperature. That statistic is whatever NOAA wants to tell us. And I do not trust NOAA

Reply to  Richard Greene
March 3, 2024 5:22 am

Well you’re not alone in that. Whether it’s through incompetence or deliberate mismanagement they’ve really screwed up every dataset they’ve touched. And their idea of fixing things through constant adjustments has just made matters far worse and erodes all confidence in their abilities. Big shiny computers are no solution for incompetence.

David S
March 1, 2024 4:49 pm

Seems like it is all part of the left wing plan: Lie, lie, lie, while denouncing the truth as lies. If you can force people to accept things that are obviously false then you can control them. In Orwell’s “1984” O’Brien held up 4 fingers and asked Winston how many fingers. Winston said 4. O’Brien said no, it’s 5. Then he gave Winston an electric shock. Eventually Winston realized that he needed to say 5 to avoid the shocks. In our time they tell you a man can be a woman, a woman can be a man, men can have babies, and climate change is an existential threat. If you don’t agree they cancel you or do to you the things they’re doing to Donald Trump. Welcome to 1984.

sherro01
March 1, 2024 4:59 pm

Accuracy of historic thermometry of weather is important. If temperatures are not really changing, there is no global warming, there is no need to fool around with fuels for electricity and so on.
I have studied Australian data for decades. Australia is a land with wide open spaces.

COUNTRY           PEOPLE per SQ KM   POPULATION   LAND AREA SQ KM
INDIA                  481              1428M          3.3M
UNITED KINGDOM         280                68M          0.24M
CHINA                  151              1426M          9.7M
UNITED STATES           37               340M          9.4M
NEW ZEALAND             20                 5M          0.27M
AUSTRALIA                3                26M          7.7M

There are at least 1,500 BOM weather stations in Australia.
An interim simple summary is “As the site becomes more likely pristine, the quality of the temperature measurements becomes poorer.” Sadly, poor sites can be so bad that they cause harm to the average if taken seriously.
A set of 45 “pristine” stations I selected are mostly far from human influence when examined on a Google Earth map. My 45 “pristine” sites were chosen mainly because (a) they had the longest records among the bigger selection at the start, (b) they helped fill gaps in the national geographic coverage, and (c) the surrounding population, say within a km, was tiny, less than 5 homes or so. This loosely assumes that population is a likely cause of UHI.
I chose to use the temperature trend over time as a measure of how pristine the site was. If natural variation was the dominant cause of temperature change, then it should be roughly similar in all parts of the country – unless we could identify another cause of changes to those trends
In the event, the time trends at these 45 stations (here taken over the period 1910 to 2019) were all over the place. There was no sense of uniformity. UAH since 1979 gives Australian temperature change at about 1.4⁰C per century. Our BOM says we have warmed a similar 1.5⁰C per century, but from 1910 to now. Here are some numbers to compare from my pristine stations.
For Tmax, MARREE 3.14 ⁰C per century highest. MAATSUYKER IS 0.02⁰C per century lowest.
For Tmin, MARDIE 3.04 ⁰C per century highest. MONTAGUE IS 2.5 ⁰C per century lowest.
(Also, trends at pristine sites averaged higher than trends at 37 chosen urban sites).
Can any reader please help me? I need some criterion other than temperature/time trend constancy to help define what a pristine site really is.
………………………………..
The big problem today is the same as always. The routine daily temperatures are riddled with error of observation (site shifts are major) so that the noise level is too high to show a useful signal.
Which is much the same conclusion as Chris Morrison shows in the WUWT article and that Anthony Watts showed with his USA station studies.
When will it be realised that these numbers are not fit for the purpose of informing policy decisions? Geoff S

Richard Greene
Reply to  sherro01
March 1, 2024 5:37 pm

Why does Australia need 1500 weather stations? It seems like 100 to 200 would be sufficient.

Australia’s population is equivalent to 0.33% of the total world population.

sherro01
Reply to  Richard Greene
March 1, 2024 5:58 pm

Richard,
I would appreciate an absence of your comments to my work, because your comments can reach inane levels.
Here, I mentioned some national population to calculate population density, to suggest a good probability of finding pristine stations in Australia. That has no relevance to your comment about 0.33% of global population. The Antarctic has far less, if you seek another inane comment.
You ask why we need 1,500 weather stations. This is not relevant. Maybe we have 1,500 airports to service, I do not know and I do not care.
There is no need for you to respond, thank you.
Geoff S

Richard Greene
Reply to  sherro01
March 2, 2024 6:28 am

I know from your past comments that you have been extremely rude to Jennifer Marohasy, Ph. D. for no apparent reason

Your bad temper continues.

The question I asked was simple and not controversial in any way.

You obviously did not know the answer, but saw an opportunity to insult me for no apparent reason

I do not sit silently and endure needless insults.

Your nasty comment defines you as an a s s h o l e, to use a common term.

If you disagree with anything I have ever posted, please have the common courtesy to tell me exactly what you disagree with and why.

You never do that. You just throw down generic insults. You are an angry, nasty man.

As a self-proclaimed “expert” on Australian weather stations, I’d expected you to know why Australia would need 1500 of them.

1,500 weather stations for 0.33% of the world’s population in Australia is equivalent to almost 450,000 weather stations for 100% of the world’s population.

When I ask Google, I read:

In Australia there are currently 870 weather stations throughout the country. These stations are fitted with instruments that enable them to measure wind, rain, temperature and humidity.

By the way, Australia has 613 airports 

Your claim of over 1500 weather stations in Australia does not appear to resemble reality. Can we trust anything else you claim after such a large error?

Reply to  Richard Greene
March 2, 2024 7:35 am

Your memory is shot, Greene, Geoff does not deserve your irrational assault.

old cocky
Reply to  karlomonte
March 2, 2024 1:41 pm

He seems to be getting Geoff confused with somebody else.
I don’t know that Geoff has even commented on any of Jennifer’s articles.

Reply to  old cocky
March 2, 2024 2:49 pm

Correct.

sherro01
Reply to  old cocky
March 2, 2024 2:52 pm

old cocky,
I have not counted how many comments I have made in response to Dr Marohasy in the last 10 years on WUWT, but I would guess at fewer than 20.
I do not recall any that were immoderate.
She and I have different approaches to the way Science is done and so, not surprisingly, some comments have expressed agreement and some have expressed disagreement – but that is how arguments proceed.
Geoff S

Richard Greene
Reply to  old cocky
March 2, 2024 7:59 pm

That must have been Bill Johnston. I apologize if that was my error.

Nevertheless, sherro01 reacted to my innocent question with uncalled for hostility. Like a lunatic who needs to be sedated.

And his claimed 1500 weather stations in Australia were almost double the actual number. So he was wrong too

Rude and Wrong
Two strikes

sherro01
Reply to  Richard Greene
March 2, 2024 11:00 pm

Richard Greene,
I do not accept that you have made a genuine, contrite apology.
My request is as before, that you desist from commenting in response to what I write, because you seem incapable of accepting correction even when shown clear data that you were wrong.
My mind does not work your way, so a future of pointless conflict is a likely outcome if you persist with your chat. I do not presume that other readers want that.
Geoff S

Simon
Reply to  karlomonte
March 2, 2024 3:04 pm

“…Geoff does not deserve your irrational assault.”
Now that’s funny right there. KM criticising others for an “irrational assault.”

Reply to  Simon
March 2, 2024 4:00 pm

I think everyone reading that would be very happy to see you finally finding a bit of amusement, Simon, as you’re usually such a miserable sod with an apparent sense of humour bypass.

Reply to  Simon
March 2, 2024 6:17 pm

Poor baby, go push your marxist garbage elsewhere.

Jim Masterson
Reply to  Simon
March 2, 2024 7:59 pm

Stupid projection! You are a gift to conservatives everywhere.

Richard Greene
Reply to  karlomonte
March 2, 2024 7:52 pm

But merely asking why Australia needed 1500 weather stations was answered with a vicious generic character attack on me. But that’s okay with you, hypocrite?

sherro01
Reply to  Richard Greene
March 2, 2024 2:46 pm

Richard,
Please do not rely on the gossip in Google to question my comments.
Here is an Excel file showing 1,695 BOM stations, with latitude and longitude, that were on an issued BOM product a few years ago.
Some might have closed, some more might have opened, but you cannot argue against my words that

“There are at least 1,500 BOM weather stations in Australia.”

I have politely asked you to refrain from commenting about what I write here. Your response was to immediately do so.
An apology is in order, from you to me.
Geoff S

sherro01
Reply to  sherro01
March 2, 2024 2:48 pm

There are at least 1,500 BOM weather stations in Australia.
Geoff S
https://www.geoffstuff.com/attstn.xlsx

Richard Greene
Reply to  sherro01
March 2, 2024 8:05 pm

My question about why Australia needed 1500 weather stations did not deserve to be answered with a vicious generic character attack.

AUSTRALIA
ACORN-SAT uses observations from 112 weather stations in all corners of Australia, selected for the quality and length of their available temperature data.

Any other weather stations in Australia are irrelevant for the Australian average temperature.

The US average temperature was the subject of the discussion. Not every weather station in the US.

US weather stations

Over 26,000 weather stations are part of the Meteorological Assimilation Data Ingest System (MADIS) which is managed by the National Oceanic and Atmospheric Administration (NOAA). But only about 15,000 of them are used for ClimDiv.

USCRN is 114 weather stations

old cocky
Reply to  Richard Greene
March 3, 2024 12:14 am

ACORN-SAT uses observations from 112 weather stations in all corners of Australia, selected for the quality and length of their available temperature data.

Any other weather stations in Australia are irrelevant for the Australian average temperature.

That’s not the same thing as there not being approximately 1500 weather stations.

As to why, Australia is a big place, so they may well be there to provide decent coverage for forecasting. The BoM might know why it has them, or they might be run by other organisations and the observations collated by the BoM.

Reply to  Richard Greene
March 2, 2024 3:20 pm

Oh Dear, Richard,

Your comments to my dear friend Geoff are misdirected.

It is I who have argued with Marohasy, and only when I found errors and omissions in her many claims about Australian weather stations.

Those discussions only became heated because she never defended her cases, which were often wrong, but still stood-by her assertions as though they were fact. Her lack of understanding about how temperatures are measured, her data-shopping using Excel, and her misuse of statistical inference is legendary.

So blame me, not Geoff. Alternatively, go visit http://www.bomwatch.com.au and catch up on my latest research relating to homogenisation of Australian temperature data. For starters, I suggest, Halls Creek (https://www.bomwatch.com.au/bureau-of-meterology/part-6-halls-creek-western-australia/).

In response to JM’s banging-on about daily data for Brisbane Airport, I did two analyses that showed unequivocally where she went wrong using paired-t tests on large daily datasets (https://www.bomwatch.com.au/bureau-of-meterology/why-statistical-tests-matter/_)

Yours sincerely,

Dr Bill Johnston
http://www.bomwatch.com.au

Reply to  Bill Johnston
March 2, 2024 6:18 pm

Yeah, and you go absolutely unhinged whenever you see her name.

Reply to  karlomonte
March 2, 2024 7:04 pm

Dear Karlo,

Not at all. Some of what she does is pretty interesting, but some is simply wrong. She was a well respected entomologist, but entomology is not the same as observing and analysing weather data. Although I have a few old texts from Uni-days, while I understand weather and climate, I would not claim to have even a working knowledge of entomology these days.

I have closely researched some 80 of Australia’s weather stations; some 300 long- and medium-term datasets, and published a range of reports on some of those at http://www.bomwatch.com.au

I am currently working my way across the Pilbara and Kimberley regions of Northern Australia. Shortly I’ll publish reports on Rabbit Flat, Camooweal, Tennant Creek and Alice Springs, before working my way down to Adelaide via Giles, Woomera, and Tarcoola. I spent weeks researching some of those sites at the National Library and National Archives of Australia, so I’m keen to consolidate the information into the public domain. (All are ACORN-SAT sites used to monitor Australia’s warming.)

Taking area-weightings into account, I think I have thus-far covered about 20% of Australia’s land area. No-one has come close to analysing so many important datasets using the same protocols, and publishing detailed reports on each.

Anything you can add would be appreciated. However, I don’t intend engaging with you, by swapping ad hominem attacks.

Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au

Richard Greene
Reply to  Bill Johnston
March 2, 2024 8:25 pm

Keep up the good work

Reply to  Bill Johnston
March 2, 2024 8:54 pm

Ask me if I care; you go nonlinear whenever you see her name and treat her like scum.

Richard Greene
Reply to  karlomonte
March 2, 2024 8:23 pm

MDS
Marohasy Derangement Syndrome

Scientists are supposed to be skeptical of all other scientists. That’s more important than peer review. Good science has to be able to refute strong contrary arguments. … And the internet was designed for strangers to insult each other.

Reply to  Richard Greene
March 2, 2024 8:55 pm

Your one and only “talent”.

Richard Greene
Reply to  Bill Johnston
March 2, 2024 8:16 pm

Sorry I got two Australians confused. I used to enjoy your arguments with Marohasy. I consider her an expert on coral, not weather stations. I trust you on the subject of weather stations in Australia.

Reply to  sherro01
March 1, 2024 6:12 pm

UAH since 1979 gives Australian temperature change at about 1.4⁰C per century.

No, the UAH warming trend for Australia since 1979 is +1.8⁰C per century; not 1.4⁰C (Dec 1979- Jan 2024).

Substantially faster than what you claim.

Reply to  TheFinalNail
March 1, 2024 6:26 pm

Sorry, Jan 1979-Jan 2024, but the trend is the same to 2 decimal places, +1.8C per century, not +1.4C.

sherro01
Reply to  TheFinalNail
March 1, 2024 8:19 pm

TFN,
Thank you for showing my error. I apologise for posting 1.4⁰C per century when the correct calculation is 1.77 +/- some error. I made a graph to calculate the trend, but something unknown went wrong.
Geoff S

ResourceGuy
March 1, 2024 5:12 pm

Okay, from here we can start to compute the budget and grant benefit per one degree of positive error. And from there the tenure and promotions flow. It could be modeled!

March 1, 2024 5:50 pm

5C, that’s getting up there. Wasn’t there a recent posting here on WUWT mentioning 6C UHI for Houston?

March 1, 2024 5:58 pm

Is there any official definition of x degrees uncertainty or is it as junky a claim as the junky stations themselves?

Stating uncertainty as +/- some value, whether or not the value is the same for + and -, is self defining. Saying uncertainty is x degrees could mean several different things unless there is a rigorously defined meaning for such a statement. Is there?

Reply to  AndyHce
March 1, 2024 6:29 pm

The international standard is the JCGM Guide for the Expression of Uncertainty in Measurement, abbreviated GUM.

Climate science as a rule does not adhere to the standard. Many do not understand that uncertainty is not error, which is so basic that they automatically fail when it comes to honest reporting of data.

walterrh03
Reply to  karlomonte
March 1, 2024 8:54 pm

I recall one instance where one of the trolls interpreted an uncertainty of 5°C as meaning that a region’s climate was oscillating between an ice age and an interglacial period.

Reply to  walterrh03
March 1, 2024 10:21 pm

The objections of the trendologists to Pat Frank’s paper on how uncertainty accumulates at each time step of a climate model boiled down to: “the error can’t be this big!”

Reply to  karlomonte
March 1, 2024 9:18 pm

There may be something useful for me in those 134 pages, but I think my question is answerable in one short sentence that won’t require a week of study. It is not about the basis, whatever that may be, for the statement of uncertainty; it is just: what the heck does the statement mean?

The answer has to be
+/- 5
OR +/- 2.5
OR +x/-y where x and y are unknown values that add to 5 (one equation in two unknowns) OR something I am currently unable to imagine.

If someone professes to understand this GUM, they must know the answer or they are confused as I am.

Reply to  AndyHce
March 1, 2024 10:18 pm

In a nutshell, uncertainty is an estimate of doubt about a numerical measurement, it represents a limit to what is known. It differs from error because error is the separation between a result and the true value of the quantity—but true values are in general unknown and unknowable.

The uncertainty is an interval within which the true value is expected to be.

In the GUM, uncertainty is quantified as one standard deviation, which is called standard uncertainty, with symbol u. If a measurement has multiple intermediate measurements and/or additional sources of uncertainty, they are combined with root-sum-square into a single combined uncertainty, symbol u_c.

A final result is stated as the value plus or minus the expanded uncertainty U, which is u_c times a coverage factor k (typically equal to 2).

There are other ways that uncertainty is expressed but this is what the JCGM metrologists have used in the standard.

The +U and -U are not required to be equal, an asymmetric interval can be used if needed.

So if a temperature is quoted as 20 ± 5°C, this means the true value is expected to be anywhere in the interval [15, 25]°C.

It is important to remember that an uncertainty does not imply any particular statistical distribution over the interval because, again, in general the distribution can’t be known. It can be argued that because there is only a single true value of a measurement, the distribution must be a delta function centred at the true value and zero everywhere else.

Interestingly enough, there is movement within metrology toward reporting only the endpoints of the uncertainty interval, but this is not how the GUM works.
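
To put that notation into a few lines: standard uncertainty components combine in root-sum-square to give u_c, and the expanded uncertainty is U = k·u_c with k typically 2. A minimal sketch with made-up component values, purely to illustrate the arithmetic, not a real uncertainty budget:

    import math

    # made-up standard uncertainty components for a temperature measurement, deg C
    u_components = {
        "sensor calibration": 0.15,
        "siting / exposure": 0.40,
        "logger resolution": 0.05,
    }

    u_c = math.sqrt(sum(u ** 2 for u in u_components.values()))  # combined standard uncertainty
    k = 2                                                        # coverage factor
    U = k * u_c                                                  # expanded uncertainty

    value = 20.0                                                 # hypothetical measured temperature
    print(f"T = {value:.1f} C +/- {U:.2f} C   (u_c = {u_c:.2f} C, k = {k})")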

Reply to  karlomonte
March 3, 2024 6:06 am

Excellent! Just got through to this post in my email. Perfectly cogent explanation.

The only thing I would add is that the movement to only using an interval is to remove the wrong interpretation of the “mean” as being the true value. The purpose is to ensure people interpret the interval as what is unknown. The true value can lie anywhere within that interval, not just at the mean.

Reply to  Jim Gorman
March 3, 2024 7:16 am

As you say, the reason for reporting only the interval is obvious: it forces people to recognize the real spread. But the GUM X ± U = X ± k·u_c notation is important for calibration chains where results need to be used by others to calculate their own uncertainties farther down the chain. Having a portable notation was one of the objectives of the JCGM, as I understand it.

It’s not obvious, for example, how a precision resistor calibration of [R1, R2] ohms would have to be used in later calculations.

Reply to  karlomonte
March 3, 2024 9:40 am

Resistors aren’t too bad. They are usually specified in terms of %. If I saw an interval, I would convert it to a % spread. As for design, sensitivity analysis would be done. If necessary, calibration components would be added and procedures designed. That’s what they pay engineers for!

Reply to  Jim Gorman
March 3, 2024 10:05 am

True, this isn’t a problem that couldn’t be solved, as long as the coverage factor is retained.

Reply to  karlomonte
March 3, 2024 10:39 am

Clearly you are unable to understand my question. Where did I go wrong?

Reply to  AndyHce
March 3, 2024 10:42 am

When this article says 5 degrees, what is the interval? Just that, not dialogue about where it comes from. +/- 5 is not the same as +/- 2.5
And, when properly stated (if there is any “properly”) does 5 always mean the same interval?

Reply to  AndyHce
March 3, 2024 12:24 pm

From the article:

Under a freedom of information request, the Daily Sceptic has obtained a full list of the Met Office’s U.K. weather stations, along with an individual class rating defined by the World Meteorological Office. These CIMO ratings range from pristine class 1 and near pristine class 2, to an ‘anything goes’ or ‘junk’ class 5.

The WMO guides use the GUM uncertainty notation, so the uncertainties attached to Classes 1 through 5 may be taken as expanded uncertainties.

When this article says 5 degrees, what is the interval? 

If the GUM notation is used, it means that a temperature measured with a Class 5 device will be reported as T ± 5°C; the 5°C is not the total width of the interval, which is 10°C.

It means that the (unknowable) true value of the temperature measured with this instrument is expected to lie anywhere inside the 10°C range, and there is no way of knowing anything more.

Without extra work, improvements, etc. to upgrade the Class, all data taken with the instrument will have an expanded uncertainty of ±5°C.
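A quick check on the width, using a hypothetical reading:

```python
# Hypothetical Class 5 reading and its expanded uncertainty.
T = 17.3  # degC
U = 5.0   # degC

low, high = T - U, T + U
print(f"{T} +/- {U} degC -> interval [{low:.1f}, {high:.1f}] degC, "
      f"total width {high - low:.1f} degC")
```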

Does this help?

Reply to  karlomonte
March 4, 2024 1:45 am

Thank you. That and ONLY that is what I asked. By standard notation,
N degrees uncertainty means +/-N degrees. Why, oh why, can’t the +/- just be expressed and eliminate any potential confusion? Is ink so dear?

Reply to  AndyHce
March 4, 2024 1:51 am

Of course, if we don’t know whether or not a statement adheres to the standards, there isn’t any way to know what it means unless it is possible to get the source to elaborate in a non-wishy-washy way.

Reply to  AndyHce
March 4, 2024 4:04 am

Yep. Most don’t understand the significance, which lets climate science off the hook when they routinely give tiny air temperature “error bars” of less than 0.1 or 0.2°C.

Reply to  AndyHce
March 4, 2024 4:01 am

Good point, and 100% correct. I can only offer that ignorance of the subject is the rule and not the exception. I say this a lot: much of this boils down to equating uncertainty (which has a +/-) with error (which does not). The GUM is relatively recent, about three decades old, and isn’t taught in statistics courses, to my knowledge.

I would guess there are more than a few people who, like myself, were thrown into the deep end when they encountered ISO 17025 as part of their work, which requires a GUM-style uncertainty analysis.

Reply to  karlomonte
March 4, 2024 8:57 am

My introduction was doing measurements with analog meters and oscilloscopes.

Think about what the inventors of radar had to deal with when determining timing. Is a plane 10 miles away or 100 miles? Think about the relative uncertainty of a 100 Hz offset on a 30 MHz signal: 10^2 / 3×10^7 ≈ 3.3×10^-6.
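The same back-of-envelope numbers, for anyone who wants to play with them (the timing uncertainties below are hypothetical):

```python
# Back-of-envelope numbers for the radar-timing example above.
c = 299_792_458.0  # speed of light, m/s

# Relative uncertainty of a 100 Hz offset on a 30 MHz reference
rel_u = 100 / 30e6
print(f"relative uncertainty = {rel_u:.2e}")  # ~3.3e-6, as above

# Round-trip timing: range = c * t / 2, so a timing error dt
# maps to a range error of c * dt / 2.
for dt in (1e-6, 1e-7):  # hypothetical timing uncertainties, seconds
    print(f"dt = {dt:.0e} s -> range uncertainty ~ {c * dt / 2:.0f} m")
```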

Reply to  Jim Gorman
March 4, 2024 11:03 am

And Bill has convinced himself that uncertainty is only in the realm of national-level calibration labs.

Reply to  AndyHce
March 4, 2024 8:38 am

There is a need to handle non-symmetric intervals, and to remove the connotation that the “mean” is the true value.

Reply to  AndyHce
March 1, 2024 10:30 pm

I should also mention that the WMO uses GUM uncertainties, so those numbers can probably be assumed to be expanded uncertainties (U). The rest of the article isn’t clear about the distinction between error and uncertainty.

Reply to  AndyHce
March 3, 2024 8:39 am

KM has done a good job explaining what the interval is.

The uncertainty interval is simply an interval within which the true value lies; you do not know where inside it.

When dealing with single measurements, the sample standard deviation of the data is also the standard uncertainty. If you have multiple measurements of the same thing under repeatability conditions, you may use the standard deviation of the mean to describe how accurately you have calculated the mean of that single measurand.

Typically these are expanded to a 95% interval by using a “k” factor of 2.

One must be careful to discern what is being declared. The SDOM (standard deviation of the mean) is only appropriate for a single measurand: you can say, “I have measured this thing multiple times, and this is how accurately I have calculated the mean.” When different things are each measured once, under non-ideal repeatability conditions, the sample standard deviation is really the only thing available to describe the dispersion of values that can be attributed to the measurand.

Hopefully, you can see that a daily or monthly average does not involve multiple measurements of any one temperature, and therefore the standard deviation is the appropriate standard uncertainty.

I won’t discuss a daily average right now, but you can probably understand how big the uncertainty of the measurements around the measurand actually is.
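A minimal sketch of the distinction, with invented readings:

```python
import statistics as st

# Hypothetical repeated readings of ONE measurand under repeatability conditions.
repeats = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]
s = st.stdev(repeats)           # sample standard deviation
sdom = s / len(repeats) ** 0.5  # standard deviation of the mean (SDOM)
print(f"same measurand: s = {s:.2f}, SDOM = {sdom:.2f}")

# Hypothetical daily Tmax values: each day is a DIFFERENT measurand, measured
# once, so the dispersion itself (the sample standard deviation) is what matters.
tmax_week = [8.2, 11.4, 6.9, 14.1, 9.8, 12.6, 7.3]
print(f"different measurands: s = {st.stdev(tmax_week):.2f} (no averaging-down)")
```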

Reply to  Jim Gorman
March 4, 2024 1:46 am

I’ve known that part for more than 50 years. That’s why I didn’t ask about it.

Reply to  AndyHce
March 4, 2024 11:31 am

It’s hard to keep track of who knows what around here, LOL!

Reply to  AndyHce
March 3, 2024 8:55 am

I reread your question and the short answer is that uncertainty is USUALLY given as a ±number. An interval this large would not be unexpected with poorly sited stations.

Just look at a few stations for a month, put the Tmax or Tmin values into Excel and find the mean, variance, standard deviation, kurtosis, and skewness. Be sure to use the sample versions of the functions (see the sketch after these figures).

Here are some random standard deviations for Tmax:

March 1953 7.8
January 1960 7.1
December 1998 8.4
May 2016 5.8
October 2020 5.4
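The same exercise can be done outside Excel; here is a minimal sketch in Python, with invented Tmax values standing in for a real station-month (SciPy supplies the shape statistics):

```python
import statistics as st
from scipy.stats import kurtosis, skew  # shape statistics, as Excel also offers

# Invented month of daily Tmax readings (degC); substitute real station data.
tmax = [7.1, 9.4, 12.8, 6.2, 15.3, 11.0, 8.7, 14.6, 10.2, 13.9,
        5.8, 16.1, 9.9, 12.3, 7.7, 11.8, 13.2, 6.9, 10.6, 14.0]

print("mean     :", round(st.mean(tmax), 2))
print("variance :", round(st.variance(tmax), 2))  # sample variance, like VAR.S
print("std dev  :", round(st.stdev(tmax), 2))     # sample std dev, like STDEV.S
print("skewness :", round(skew(tmax, bias=False), 2))
print("kurtosis :", round(kurtosis(tmax, bias=False), 2))  # excess kurtosis
```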

Reply to  Jim Gorman
March 3, 2024 10:48 am

When it is expressed as +/- I know what it means. When it is expressed as 5, I do not know what it means. That, not all these long explanations about something else, is what, and only what, I am trying to find out.

All the explanations above just avoid that question. They may be meaningful in regard to uncertainty but are definitely not what I asked.

Reply to  AndyHce
March 3, 2024 11:11 am

I assume it is meant to be ±5.

Uncertainty in measurement is terribly mishandled. An expansion (coverage) factor, along with the degrees of freedom, should be specified.

Reply to  AndyHce
March 3, 2024 12:29 pm

From the article:

Nearly one in three (29.2%) U.K. Met Office temperature measuring stations have an internationally-defined margin of error of up to 5°C.

They conflate uncertainty with margin of error in the first sentence, which tells me it was likely penned by a tech writer, a trade infamous for glossing over important details in the interest of making the text “easier” to read.

UK-Weather Lass
March 1, 2024 7:46 pm

With the agenda they have, there is no way the UK Met Office cares about professional standards; they welcome anything that ensures their unscientific and unprofessional behaviours keep the CAGW agenda in the public’s head. They are rotten to the core, with the power to continue to influence their role in the accelerating decline of the UK’s academic standards. What the Met Office knows about weather is pitiful and corrupt, at a time when global standards in meteorology are poorer than they have ever been before. Was one of their senior officers not caught lying in public last month, without retribution or explanation?

It’s about time there was a public inquiry into how rotten and unfit for purpose the Met Office is and why it is allowed to continue to be so. We might have a sensible inquiry were our politicians fit for purpose and not an apparently ‘protected species’ but sadly we cannot even do ‘public inquiries’ properly. The UK is looking broke and broken and somewhat in need of a return to being a proper democracy.

Dave Andrews
Reply to  UK-Weather Lass
March 2, 2024 8:06 am

The Met Office wants to keep getting bigger, better and very expensive computers to continue playing with. 🙂

old cocky
Reply to  Dave Andrews
March 2, 2024 1:42 pm

The Met Office wants to keep getting bigger, better and very expensive computers to continue playing with. 🙂

Not that there’s anything wrong with that 🙂

Reply to  old cocky
March 3, 2024 10:49 am

In other words, you are very happy to pay for anything they want?

old cocky
Reply to  AndyHce
March 3, 2024 11:51 am

There’s nothing wrong with wanting shinier toys.

Part of my job was taking what they wanted and telling them what they needed. In the trade, it’s called capacity planning.

bobpjones
March 1, 2024 11:36 pm

Considering that the Met Office has been tampering with the data, adjusting temperatures upwards on top of those junk readings, just how inaccurate are those ‘readings’?

Reply to  bobpjones
March 2, 2024 4:05 am

Extremely.
Not by a margin of 0.1°C or 0.2°C but by 2-4°C, possibly even more than that.

March 1, 2024 11:59 pm

There’s a lot of toing and froing about warming or not.
But that’s not the issue and is just a diversion.
The problem is we’ve no idea what is happening or what has happened. The site issues just add to a whole lot of other problems, like equipment changes.
Basically the UK is in a situation where the Pools Panel, aka the Met Office, decides who the League Champions are.

https://www.theguardian.com/football/2010/dec/08/the-knowledge-pools-panel-creation

March 2, 2024 1:24 am

Having read to the end of these comments (too many of them ill-tempered personal attacks making no contribution to rational discussion) I take the following point out of it all.

Bellman and TheFinalNail keep asserting there is evidence of warming, either in the US or globally. Many others dispute this.

My question is, suppose they are right and there has been some warming in the last 30-40 years. Did anyone expect temperatures in the US, the UK or globally, to be constant and static? Surely not.

Take the UK, where we have long temperature records and quite a detailed history. We know that grapes were grown as far north as Yorkshire in Roman times. We know there were bitterly cold winters permitting ice fairs on the Thames. We know there was a period of Medieval warmth. It’s obvious that temperatures in the UK have varied a great deal, and that there are decade-long periods of lower and higher temperatures.

This is not in the least alarming. Partly because the recent warming is fairly slight, but mostly because the cause of the recent warming is not clear. The question is not whether there are fluctuations in temperature, the question is whether they are caused by human CO2 emissions and to what extent they are natural variation. And to what extent we should be alarmed, whatever the cause is.

Then there is a consequent question, and this is the important one: if there is some warming from our CO2 emissions, of which it seems reasonable there may be a small amount, are the policy measures proposed by the climate activists a reasonable response to it?

And here is where it seems to me that the thing goes totally off the rails. We have an uncertain phenomenon of uncertain cause, a possible danger whose imminence and scale is very unclear, and we propose in return what? To move cars to EVs, to move home heating to heat pumps, and at the same time to move electricity generation to wind and solar.

Notice that the only places where it’s proposed to do this are the Western countries. None of the activists are at all excited by the Chinese, Indian, Indonesian etc. emissions, which, on their theory, should be the main source of danger.

Notice also that it’s not possible to do. It is impossible to move current electricity generation to wind and solar. No-one has even come close. Again, take the UK. With its pattern of weather, just to generate the peak power requirement of 45GW from wind and solar would probably take 500GW of nameplate capacity, and lots of storage. To cope with the additional demand from EVs and heat pumps would take that closer to 1,000GW. The UK currently has 28GW of wind installed. It’s impossible to do.
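As a rough sketch of that arithmetic, assuming peak demand roughly doubles once EVs and heat pumps are added and that only about 9% of wind-and-solar nameplate capacity can be counted on at peak (both figures are illustrative guesses):

```python
# Rough capacity arithmetic, with illustrative assumptions:
# - peak demand roughly doubles once EVs and heat pumps are added;
# - only ~9% of wind-and-solar nameplate capacity can be counted on at peak
#   (calm, dark winter evenings push this far below annual-average capacity factors).
peak_demand_gw = 45       # stated current UK peak
electrified_peak_gw = 90  # assumed rough doubling
firm_fraction = 0.09      # assumed fraction of nameplate available at peak

print("nameplate needed today      :", round(peak_demand_gw / firm_fraction), "GW")
print("nameplate once electrified  :", round(electrified_peak_gw / firm_fraction), "GW")
print("installed UK wind today     : 28 GW")
```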

And finally notice that even these impossible proposals do not reduce the mass of emissions that don’t come from electricity generation, home heating or driving. Even were they done, it would not materially reduce the UK’s emissions. In fact all the construction would probably increase them.

And even if it did lower the emissions, this would have no measurable effect on global emissions.

Most of the debate about this strikes me as being about as rational as some 16th century European mob screaming that there is a bad harvest, it’s the second in a row, let’s burn some witches. It will be analyzed by future historians as one in a long series of self-destructive mass hysterias that have afflicted our species throughout our history.

So the questions to ask Bellman and TheFinalNail are: how much recent warming do you think is due to human emissions, and why? (And read the IPCC carefully before posting on this!) Then, what are your proposed targets for total global emissions in view of this? How would you propose to get to these targets, i.e., what reductions are needed by country? And finally, how do you propose to deliver these reductions?

Take a given country; the UK will do fine because the available data is so detailed. It’s now emitting 450 million tons a year. How would you get it down to zero? How much wind? How much solar? How many batteries? How much total power generation? And what about the rest of the emissions not due to power generation? And how low would you propose taking China, how and by when?

And answer came there none….

Reply to  michel
March 2, 2024 3:25 am

We could also ask them whether, as far as climate is concerned, they’d prefer living in Inverness now or at the height of the LIA, or in the South of France at any point in the last 2,000 years.

Reply to  michel
March 2, 2024 4:18 am

TFN, Bellman, bdgwx and others will tell you that the observed warming trend is about 0.34°C per decade and that we have been steadily warming since the ’70s, giving a Global Average Temperature of 14.98°C in 2023. Given the error in the UK data, at least, they cannot support that data at all. If you assume a 2-5°C drop in UK temperature to account for the errors, then we may not have been warming at all, possibly even cooling. We just don’t know, because the errors are likely variable and the Met Office have built them into the data going back to at least 2000. The human-caused contribution is a less significant question than: what if we are cooling, not warming as advertised?

Reply to  Richard Page
March 2, 2024 4:23 am

Nor can the reporting of the GAT to 10 mK be supported.

Richard Greene
Reply to  Richard Page
March 2, 2024 6:50 am

There is no evidence of global cooling since 1975.

100% of the evidence supports warming, from all sources of temperature data to anecdotes from ordinary people.

Reply to  Richard Greene
March 2, 2024 9:28 am

Really? Allow me to share one anecdote which is verifiable online. During the ’40s or ’50s, I believe, it was hot enough to cook an egg on a concrete step. Esther Rantzen accomplished the same thing during the hot weather of the 1970s, I think, as well. Now in 2022, in Lincoln, on the same day as the ‘hottest evah’ record was broken nearby at RAF Coningsby, an enterprising Lincolnshire Echo reporter tried exactly the same thing. There was no change to the egg once it was out of the shell – it stayed runny and completely uncooked; it wasn’t even close to the temperatures recorded in the ’40s or ’50s or the ’70s. Enjoy!

Richard Greene
Reply to  Richard Page
March 2, 2024 8:34 pm

Cooking an egg on a cement sidewalk on a hot summer day is not science. It is also total BS
A myth

You could probably cook an egg on the steel hood of a black car on a sunny afternoon. Try it.

Reply to  Richard Greene
March 3, 2024 5:32 am

Of course it isn’t; it’s an anecdote, and I labelled it as such ahead of time so you’d know it wasn’t a serious, scientific post. You were the one mentioning ‘anecdotes from ordinary people’ – I also mentioned it was verifiable online, not a myth. I cannot, after giving you all due warning, be held accountable for your reading comprehension (or lack thereof).

Reply to  Richard Page
March 3, 2024 10:57 am

There are no instrument records but there is plenty of human history about droves of people dying in the streets of Rome and Delhi during hot summers thousands of years ago. The more wealthy regularly went to the sea shore or the mountains.

Reply to  Richard Page
March 2, 2024 7:09 am

TFN, Bellman, bdgwx and others will tell you that the observed warming trend is about 0.34°C per decade

I do not. That’s just the rate of warming in US temperatures measured over a very short period of time. It is only brought up to question the claims made that CRN is in some way demonstrating something about a lack of warming in the US.

Given the error in the UK data, at least, they cannot support that data at all.

What errors? I’m not going to assume the Met Office is incapable of determining the UK temperature based on some hand-waving article such as this one. There are multiple sources of information that indicate the globe has been warming, and that includes the UK.

Reply to  Bellman
March 3, 2024 4:36 am

The part of the question you and others always leave out is warming from what base. Every time you quote a record warming you need to also include the base it is being calculated on.

Then we could have a real argument about what the BEST base temperature for the globe should be. Remember 15°C is about 60 degrees. That’s a pretty cold temperature for most folks.

Tell us what you think the best global temperature should be. Tell us if it has ever been warmer throughout the history of humans.

Reply to  Jim Gorman
March 3, 2024 7:18 am

And the related but highly relevant question: “what is the optimum concentration of CO2 in the Earth’s atmosphere?”

Reply to  Jim Gorman
March 3, 2024 2:08 pm

The part of the question you and others always leave out is warming from what base.

The “base” would be the start of the warming period I’m describing. That doesn’t mean there was not also warming, or cooling, before then.

Then we could have a real argument about what the BEST base temperature for the globe should be.

I doubt there is such a thing. Easier to say that some temperatures are better than others, but you can’t really have a single average temperature that is optimal for everyone in every way. And as you might realize, a single average could describe multiple different distributions.

Remember 15°C is about 60 degrees.

60 degrees of what? If this is some temperature scale, that sounds really hot. But don’t confuse, say, an average global temperature of 15°C with the sort of temperature you will experience. It doesn’t mean that everywhere on the earth is 15°C all the time.

Reply to  Richard Page
March 2, 2024 8:00 am

We just don’t know because the errors are likely variable and the Met Office have built them into the data going back to at least 2000.

Here’s my attempt at a map using UAH gridded data since 2000. It suggests that the UK has been warming at anywhere between 0.15°C and 0.35°C / decade, with most of England over 0.25°C / decade.

According to the Met Office instrument data, the UK has been warming at 0.28°C / decade over the same period.
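A per-decade trend like those quoted is just an ordinary least-squares slope fitted to a monthly anomaly series; here is a minimal sketch using a synthetic series (not real UAH or Met Office data):

```python
import numpy as np

# A per-decade warming trend is just an ordinary least-squares slope fitted to a
# monthly anomaly series. The series here is synthetic (a 0.28 degC/decade ramp
# plus noise), standing in for real gridded or station data.
rng = np.random.default_rng(0)
months = np.arange(12 * 24)  # 2000-2023, monthly
years = 2000 + months / 12
anoms = 0.028 * (years - 2000) + rng.normal(0.0, 0.3, months.size)

slope_per_year = np.polyfit(years, anoms, 1)[0]
print(f"fitted trend = {slope_per_year * 10:.2f} degC/decade")
```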

[Attached image 20240302wuwt1: UAH gridded warming-trend map for the UK since 2000]
Reply to  Bellman
March 2, 2024 9:36 am

So what? There’s no reason to expect it to stay at the same constant temperature. It has always varied in the past, and it’s varying now. It will probably cool down again in a bit, as it always has. It’s called regression to the mean. Enjoy the mild winters while they last.

Reply to  michel
March 3, 2024 3:23 am

The “so what” is that I’m addressing your claim that temperatures in the UK might have been falling since 2000.

Reply to  Bellman
March 3, 2024 3:25 am

Sorry, I meant I was addressing Richard Page’s claim, not yours.

Reply to  Bellman
March 3, 2024 5:40 am

Now, now – if you read through what I actually said, rather than what you thought I said, you’ll find I made no such claim. I said “possibly even cooling” for the simple reason that error piled on top of error in the temperature datasets is greater than the warming figures given, making a mockery of the whole process. Given all that, nobody has a clue what the actual figures are; they can’t have – far too many mistakes and errors.

Reply to  Richard Page
March 3, 2024 7:20 am

What is the uncertainty of infilled Fake Data temperatures?

Reply to  karlomonte
March 3, 2024 9:22 am

Why, it is 100% accurate, since a computer says so!
/sarc

Reply to  Jim Gorman
March 3, 2024 10:10 am

Duh!

Reply to  Richard Page
March 3, 2024 2:12 pm

Are you trying to argue about the distinction between “might have been falling” and “possibly even cooling”?

Richard Greene
Reply to  michel
March 2, 2024 6:47 am

You miss the key point that climate change — the CO2 boogeyman — has nothing to do with the climate.

The CO2 boogeyman is used to control people and implement leftist “Rule By Leftist Experts” fascism.

Control energy and you control a nation.

If this was about the climate, the fact that almost 7 billion people live in nations that could not care less about Nut Zero would be a HUGE deal. In the news every week. But this fact is practically ignored.

Climate change is a strategy to gain political power and control … and it is working almost as well as hoped for by the leftists.

Editor
March 2, 2024 2:47 am

There is some confusion about the margins of error at these junk stations. This is what the WMO say about Class 4 and 5:

[Attached image Snipaste_2024-03-02_10-45-48: WMO siting-classification text on the uncertainties for Class 4 and Class 5]
Reply to  Paul Homewood
March 2, 2024 4:20 am

And this should be the minimum standard for all temperature stations used in the Global Average dataset.

March 2, 2024 7:31 am

Update: And the month of XXX 2023/2024 was the warmest ever on record by 0.x ± 2 °C.

ROTFL.

Reply to  ToldYouSo
March 2, 2024 9:29 am

They never cease to provide amusement.

walterrh03
Reply to  Richard Page
March 2, 2024 2:56 pm

eVeRy tEnTh oF a DeGrEe CoUnTs.

March 2, 2024 11:05 am

Google is on record as stating that it will ban all sites that are sceptical of “well established scientific consensus”.

Translation: Google is censoring sources of truth inconvenient to Google.

Reply to  ATheoK
March 4, 2024 7:07 am

Google is just showing its woeful misunderstanding of science: nowhere, in any description of the “scientific method”, will one find mention of obtaining a consensus as a necessary step.

On the one hand, there was scientific consensus at one time that the Laws of Nature were as described by Newton and Maxwell, then Einstein’s theories of relativity and the laws of quantum mechanics came along.

On the other hand, there is no scientific consensus on what comprises dark matter and dark energy, both of which together make up about 95% of the “known” universe.

ROTFL.