The Curious Case of the Missing Data

Ivor Williams

I shall end with two unanswered questions. The reason for that lies in a story with eight decimal places of recondite mystery and scarcely believable deductions. One last glimpse of reality: the mean temperature of the world at the moment (early November) is hovering around 14 deg C, which is never used because it does not convey a sufficient element of danger in the global warming message. Fourteen degrees Celsius or fifty-seven Fahrenheit are not messages of imminent doom. Either one is the annual mean temperature of Bordeaux, San Francisco or Canberra.

Therefore the Wise Ones have decided that any global temperature given to the masses must always be shown as a difference from the mean of the half-century 1850-1900, which, they say, is representative of our world in smoke-free pre-industrial times. That period also happens to be towards the end of the Little Ice Age, which, the Met Office says, had ‘particularly cold intervals beginning in about 1650, 1770 and 1850.’ Cold spell beginning in 1850? Interesting.
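
The arithmetic behind an anomaly is simple enough to sketch in a few lines of Python; the baseline values below are invented stand-ins for illustration, not real Met Office data:

```python
# Anomaly = a year's value minus the mean of a fixed baseline period.
# The absolute temperatures below are invented stand-ins for illustration.
baseline = {1850: 13.6, 1851: 13.7, 1852: 13.5}  # stand-in for 1850-1900
baseline_mean = sum(baseline.values()) / len(baseline)

def anomaly(abs_temp_c, base_mean=baseline_mean):
    """Express an absolute temperature as a difference from the baseline mean."""
    return abs_temp_c - base_mean

# A 15.1 deg C year would be reported as roughly +1.5 deg C "above pre-industrial".
print(round(anomaly(15.1), 2))
```

The point of the sketch is only the subtraction; the contested part of the story is how `baseline_mean` itself was obtained from sparse nineteenth-century data.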

Thus it was that on 10 January this year the Met Office told us that ‘The global average temperature for 2024 was 1.53±0.08°C above the 1850-1900 global average.’ This is an extraordinarily accurate figure, but the World Meteorological Organisation has much the same: ‘The global average surface temperature [in 2024] was 1.55 °C … ± 0.13 °C … above the 1850-1900 average, according to WMO’s consolidated analysis.’ Ignore the scarcely believable accuracy of those second decimal places; there’s worse to come.

The obvious question is: Why were those fifty years chosen as the fundamental reference period? The answer is easily found: ‘Global-scale observations from the instrumental era began in the mid-19th century for temperature,’ says the Intergovernmental Panel on Climate Change (IPCC) in their Fifth Assessment Report (Section B, page 4). An associated IPCC Special Report (FAQ1.2 para 4) explains that ‘The reference period 1850–1900 … is the earliest period with near-global observations and is … used as an approximation of pre-industrial temperature.’ Note the categoric statements that sufficient data is available in that nineteenth-century fifty-year period to calculate global mean temperatures.

In 1850, may I remind you, Dickens was writing David Copperfield, California was admitted to the Union as the 31st state and vast areas of the earth were still unexplored. 1900 brought the Boxer Rebellion (China), the Boer War (South Africa) and the Galveston hurricane (USA). There were still quite large areas awaiting intrepid explorers.

I was curious about how in olden times those global temperatures were actually measured, but after a painstaking search of websites and yet again proving that AI-derived information can be both wrong and misleading, I turned in despair to the Met Office enquiry desk. Their reply was long and very detailed. No actual data, but several clues as to where to search. Very interesting clues.

The IPCC claim of ‘global-scale observations’ quoted above is obviously true, because the World Meteorological Organisation has a comprehensive graph showing six different global mean temperature series, each expressed as a difference from the 1850-1900 period. But a link ‘Get the data’ on the same page leads to the following curious table of the Met Office anomalies:

1850  -0.1797
1851  -0.0592

then every year to

1899  0.0128
1900  0.1218

then every year to

2023  1.4539
2024  1.5361

There is even more precise Met Office data from the past, this time anomalies relative to the 1961-1990 period, and this time totally unbelievable, all from HadCRUT5.1.0.0, Summary Series, Global, CSV file, Annual.

1850  -0.42648312
1851  -0.2635183

then every year to

1899  -0.34430692
1900  -0.2301605

then every year to

2024  1.1690052

Dig further and monthly values are produced. You can’t help being suspicious of even two decimal places, let alone eight. I dug deeper. I found graphs.

They show northern and southern hemispheres separately, giving both the number of recording stations and the percentage of each hemisphere covered, from 1850 to 2010. They are from a paper: Hemispheric and large-scale land surface air temperature variations: An extensive revision and an update to 2010, P.D. Jones et al. Page 48, line 1120.

Very similar pictures are also shown in Land Surface Air Temperature Variations Across the Globe Updated to 2019: The CRUTEM5 Data Set, T J Osborn et al, para 5.1 fig 6, and Hemispheric Surface Air Temperature … to 1993, P D Jones 1993, page 1797.

Approximate readings from the above graphs:

1850
Northern hemisphere coverage          7%
Southern hemisphere coverage         0-1%

1900
Northern hemisphere coverage          23%
Southern hemisphere coverage         6%

Surely not? There must be a mistake somewhere. But there’s nothing like a graph in scientific peer-reviewed papers for providing clear and unequivocal information. If you still think this just cannot be true, then look further at the American Meteorological Society map of station density 1861-1890 (Section 5 of Journal), or a classic Bartholomew map of world reporting stations in 1899.

The information supplied by the Met Office led me to a meandering pathway of scientific papers covering thirty-odd years of intensive research into the problem of accurately measuring global mean temperatures from 1850 onwards. The path seems to have ended in a swirling fog.

Those graphs show that even by 1900 only about 15% of the earth had recording stations. And the 1850 data is apparently extracted from only around 4%.

How can world temperatures be measured that accurately with such an impossibly small amount of data – almost nothing from the oceans and most of the rest from North America and Western Europe?

It wouldn’t really matter except for someone having decided that current global mean temperatures should always be shown to the worried world as anomalies compared with the 1850-1900 data, which is itself possibly a cooler climatic period. The intention must be to demonstrate clearly that there is no doubt that we are indeed warming up dangerously, and if we don’t do something about it soon it will be too late and don’t say we didn’t warn you.

But, and this is one huge ‘but’, how can the 1850 mean global temperature be recorded, for instance, as 0.1797 deg C below the mean of 1850-1900, when it seems that reporting stations covered only about 4% of the earth at that time? And why to a totally unrealistic ten-thousandth of a degree?

I did warn you this would end with two unanswered questions, and here they are, both about that fifty-year 1850-1900 period:

Where can we consult the actual original global data?

How were those incredibly accurate anomaly figures calculated?



524 Comments
sherro01
November 11, 2025 5:10 am

Ivor,
Thank you for your research.
In 1992 I saw poor science from said Phil Jones and sent some emails seeking corrections. No significant answers came back. I started to conclude that I was dealing with a huckster, a view reinforced by the Climategate emails, which I asked the UK police to investigate. They obfuscated, then did nothing. Poor science had its foot in the door, and people ignorant of proper science flooded in. Stupid politicians gave them huge amounts of money and still do.
To support my claims of poor science, go to the International Bureau of Weights and Measures Guide to the Expression of Uncertainty in Measurement, GUM, which many proper scientists regard as the preferred authority on uncertainty. Search for preferred methods to estimate the uncertainty of adjusted or homogenised numbers. You will not find a mention. There is no approved method in proper science for estimating the uncertainty of adjusted numbers. The whole textbook is about original numbers only, original as in the numbers for historic temperatures that were handwritten on paper by thousands of diligent, dedicated observers to whom we owe thanks, not rejection by fake adjusters. There are no valid uncertainty estimates for guesses, as should be obvious to a logical mind.
Proper scientists have long been careful as well with the count of digits used to express an observation. See GUM for more preferred methods. Competent scientists quickly downplay numbers with too many digits. In general earth science field observations it is rare to have more than 3 or 4 significant digits. Misuse is common. For example, study the official numbers for ocean level rise and fall as measured from satellites. The official numbers are arrived at by unacceptable statistical processes that claim an accuracy greater than the system capability.
I am hopeful however that the efforts of some of the better US scientists are starting to be heard at Presidential level. Remedial correction will follow but I expect it will be slow and resisted by the many fakers currently pocketing a comfortable sum on paydays.
Geoff S (hard scientist).

Tom_Morrow
November 11, 2025 7:25 am

Those numbers came from the same location where ER doctors remove objects that patients “accidentally fell on” – the exit orifice of the alimentary tract.

November 11, 2025 9:13 am

Nevermind!

November 11, 2025 9:35 am

It’s funny how the same people who insist it’s impossible to know the global temperature in the 19th century because of low coverage, will also insist we can use the temperature of just the US to know what the global temperatures were in the 1930s.

Reply to  Bellman
November 11, 2025 9:42 am

It’s funny how the same people who insist it’s impossible to know the global temperature in the 19th century because of low coverage, will also insist we can use the temperature of just the US to know what the global temperatures were in the 1930s.

No one does that. What is being pointed out is that IF CO2 IS THE INSTRUMENT CAUSING WARMING, then being a well mixed gas, the change in temperature should follow everywhere.

Reply to  Jim Gorman
November 11, 2025 12:58 pm

“No one does that.”

Really? Nobody has ever claimed the world was warmer in the 1930s when what they actually mean was the US was warmer? My memory differs from yours.

“the change in temperature should follow everywhere.”

Only in your straw man world. In the real world, regional variations exist.

Reply to  Bellman
November 11, 2025 9:28 pm

You seem to have forgotten that many people such as yourself denied the warming events (such as the Medieval Warm Period), dismissing them as “regional variations.” Then, with more investigations, it was acknowledged that the events were indeed global.

Reply to  Clyde Spencer
November 13, 2025 5:03 am

nope. Global.

Reply to  Bellman
November 12, 2025 4:28 am

Only in your straw man world. In the real world, regional variations exist.

Then claiming that CO2 is the driver of temperature is a false claim, right? You can’t have it both ways. In some regions CO2 is not a driver and in others CO2 is the driver! Next time you see a graph showing a correlation between CO2 and temperature anomalies, maybe you should tell the person that the graph is misinformation.

Reply to  Jim Gorman
November 12, 2025 7:36 am

Then claiming that CO2 is the driver of temperature is a false claim, right?

Then stop claiming it. All I’ve ever said is that rising CO2 levels, or any greenhouse gas, will tend to raise temperatures.

I think the length of day drives seasonal variation, but I don’t expect every summer at the same latitude to have the same temperature.

The summer extreme heat in the USA during the 30s was not caused by CO2. The reasons it was so hot there did not apply everywhere else.

Now that CO2 has been rising at a substantial rate you see most of the globe warming, but you would not expect everywhere to warm at the same rate, and there will be regions bucking the trend. This may happen due to random local factors, or due to the nature of the location, and it may happen due to changes caused by the global warming.

Reply to  Bellman
November 12, 2025 8:16 am

but you would not expect everywhere to warm at the same rate

Then what good is a global average without also quoting the variance in temperatures?

Reply to  Bellman
November 11, 2025 11:28 am

Basically everywhere measurements were made, there was a big bulge in the 1930s/40s.

These match the US temperatures reasonably well.

Here is 4 major sites in South Africa, for example.

Australia seems to have had a slightly different pattern, although a lot of highs used to date from around the mid 1930s, but they have been spirited away by BoM.

[attached chart: 1940s-South-African-temps]
Reply to  bnice2000
November 11, 2025 1:01 pm

“Here is 4 major sites in South Africa, for example.”

So now 4 carefully selected sites in South Africa can determine global temperatures. Too bad that none of them look like the US data.

Reply to  Bellman
November 11, 2025 2:31 pm

No, Bellend. Nearly all the NH follows the same pattern, especially the Arctic.

USA data also has a strong peak, higher than now, during the 1930s, 40s.

As do many other regions around the world.

What raw data doesn’t match is the agenda driven FABRICATIONS of the GISS et al with their idiotic anti-science “global temperature” fabrications.

Your denial of raw temperature data that does exist, and reliance on data that doesn’t exist, shows why you are so muddled in the head about climate.

Reply to  bnice2000
November 11, 2025 6:17 pm

No, Bellend.

Admitting you’ve lost the argument already I see.

Nearly all the NH follows the same pattern, especially the Arctic.

Do you hear yourself? If nearly all the NH is the same as the US, why is it only the Arctic that especially follows it?

USA data also has a strong peak, higher than now, during the 1930s, 40s.

That’s the point I’m making. The USA had a warm peak in the 1930s, not so much in the 40s. And just looking at summer months the peak is similar to current temperatures.

[attached chart: noaausasummer]
Reply to  Bellman
November 11, 2025 6:19 pm

The Arctic had a warm period a bit after the US, but temperatures were nothing like current temperatures.

[attached chart: NOAAArcticSummer]
Reply to  Bellman
November 11, 2025 6:25 pm

Your denial of raw temperature data

And there’s your problem. You want to claim that everywhere was as hot as the US, but you only want to cherry pick individual stations, rather than aggregate all the stations. And you want to ignore any problems with that raw data, such as changes in location.

Reply to  Bellman
November 11, 2025 7:18 pm

You want to claim that everywhere was as hot as the US,

There was a general warming in the 30’s and 40’s world wide. Not necessarily ”as hot” as in some parts of the US. Moron.

Reply to  Bellman
November 11, 2025 7:15 pm

So now 4 carefully selected sites in South Africa can determine global temperatures

T max was high in the 30s/40s in Africa, India, China, Australia as well as the US. But I’m sure everything in between was not affected eh?

Reply to  Bellman
November 11, 2025 7:02 pm

It was warm world wide in the 1930s. Asia, Australia, Africa, North and South America.

November 11, 2025 10:43 am

You can’t help being suspicious of even two decimal places, let alone eight.

Eight significant figures does not speak well for the education of those responsible for calculating the tabular data. I was teaching when hand-calculators first became affordable and it was a real challenge to get students to not copy everything displayed by their calculator. The behavior was not unlike someone following their GPS navigation system directions into an unharvested corn field.
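
Guarding against the calculator habit described above takes only a few lines in most languages; here is a common standard-library-only sketch for rounding to a chosen number of significant figures:

```python
import math

def round_sig(x, sig=3):
    """Round x to `sig` significant figures (0 stays 0)."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

# The 8-digit HadCRUT-style anomaly quoted in the article, kept to 2 figures:
print(round_sig(-0.42648312, 2))  # -0.43
```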

November 11, 2025 11:16 am

What is the “coverage area” of a single temperature measurement?

1 m^2
10 m^2
100 m^2
1 km^2
1000 km^2

Alternatively, what is the transfer function from Station Count to Percent Coverage?
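
For land datasets in the CRUTEM family the usual answer is grid cells rather than a per-thermometer radius: a 5°x5° cell counts as covered if it contains at least one station, and coverage is the cos(latitude)-weighted share of covered cells. A rough sketch of that transfer function (station coordinates invented for illustration):

```python
import math

CELL = 5  # grid-cell size in degrees, as used by CRUTEM-style land datasets

def coverage_percent(stations):
    """Percent of the globe's area whose 5x5-degree cell holds >= 1 station.

    `stations` is a list of (lat, lon) pairs in degrees. Cells are weighted
    by cos(latitude) so polar cells count for less than equatorial ones.
    """
    covered = {(int(lat // CELL), int(lon // CELL)) for lat, lon in stations}
    covered_weight = sum(math.cos(math.radians(c[0] * CELL + CELL / 2))
                         for c in covered)
    total_weight = sum(math.cos(math.radians(-90 + i * CELL + CELL / 2))
                       for i in range(180 // CELL)) * (360 // CELL)
    return 100 * covered_weight / total_weight

# Two invented stations in western Europe cover a tiny sliver of the globe:
print(round(coverage_percent([(51.5, -0.1), (48.8, 2.3)]), 3))
```

Under this convention the station count maps to coverage only through how widely the stations are scattered, which is why a few thousand stations clustered in Europe and North America can still leave most of the globe uncovered.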

Reply to  karlomonte
November 12, 2025 4:31 am

A single temperature measurement is local to the point at which it is measured.

No one denies this.

Reply to  TheFinalNail
November 12, 2025 4:44 am

You didn’t read the head article very closely.

Reply to  TheFinalNail
November 12, 2025 5:33 pm

And multiple temperature measurements are local to the point at which they are measured.

So why does climate science jam all these local temperature measurements into a “global” average when they are *local* measurements and not “global” measurements?

November 11, 2025 11:26 am

My guess? The old, sparse data is modeled on the more current, abundant data. The old 4% or less is fitted to the new 4% or less, and then the gross data is modified to fit the old, and an "old" average computed.

What I find alarming is the reduction in data points. Is the reduction because the locations became untenable? What would the previous average look like if the dropped data points were dropped all the way back?

Bob
November 11, 2025 1:26 pm

Absolutely brilliant Ivor. These CAGW fakers have nothing.

November 12, 2025 2:41 am

How were those incredibly accurate anomaly figures calculated?

Take a look at the side-panel here at WUWT.

You’ll find the UAH data prominently featured. These are reported to the NSSTC to the 1000th of a degree C (3 decimal places) – and they don’t even use thermometers!

Yet not a word of criticism does there come….

Reply to  TheFinalNail
November 12, 2025 4:36 am

There is criticism. If the satellites measuring radiation intensity have an uncertainty of ±4 W/m², then so do the UAH satellites. Put that uncertainty into the SB equation and see what values pop out. That is the uncertainty range of the temperatures being calculated. Or, put temperatures into the SB equation that are separated by 1/1000ths of a degree. See what you get for radiation variation.

It would behoove you to evaluate how “averages” and divide-by-√n are used to obtain those precisions of temperature over a month’s time. Satellites measuring sea level suffer from the same uncertainties, which are simply glossed over.
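
The propagation described above can be written out directly: for T = (E/σ)^(1/4), a flux error dE maps to dT = T·dE/(4E). A sketch using an illustrative round-number flux of 390 W/m² (not an actual UAH retrieval):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sb_temperature(flux_wm2):
    """Blackbody temperature implied by an emitted flux, T = (E/sigma)^(1/4)."""
    return (flux_wm2 / SIGMA) ** 0.25

def sb_temp_uncertainty(flux_wm2, u_flux):
    """First-order propagated uncertainty: dT = T * dE / (4 * E)."""
    return sb_temperature(flux_wm2) * u_flux / (4 * flux_wm2)

E, U_E = 390.0, 4.0  # illustrative flux, and the +/-4 W/m^2 from the comment
print(round(sb_temperature(E), 1))            # about 288 K
print(round(sb_temp_uncertainty(E, U_E), 2))  # about 0.74 K
```

So a ±4 W/m² flux uncertainty corresponds to roughly ±0.7 K at terrestrial temperatures, which is the scale of discrepancy the comment is pointing at.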

Reply to  Jim Gorman
November 12, 2025 4:43 am

He doesn’t read the comments very closely, there has been lots of critiques of UAH data handling posted.

November 12, 2025 9:09 am

An interesting follow-up to the global coverage question would be coverage by latitude, as the first few stations were undoubtedly located in higher latitudes, with data from lower latitudes being added later. There was a bump up of higher-latitude data with the growth of the Soviet Union, and a collapse of higher-latitude data upon its demise in 1991.

November 12, 2025 4:34 pm

Amazing. Another non-expert imagines he’s found fundamental flaws in the work of real scientists, while demonstrating his own ignorance of how such data is taken, the error bars surrounding it, and the use of anomalies to minimize the systematic error that this ‘guest blogger’ claims are ignored by the real scientists. This article is pablum for the uncritical reader who accepts any and all nonsense as gospel.

Reply to  Warren Beeton
November 12, 2025 4:39 pm

the use of anomalies to minimize the systematic error

Another one of the standard climatology fantasies.

Reply to  karlomonte
November 12, 2025 7:21 pm

Speaking of uncritical readers of Denier pablum

Reply to  karlomonte
November 13, 2025 4:15 am

It’s all part of the meme of “all measurement uncertainty is random, Gaussian, and cancels”.

Subtracting two values with uncertainty CANCELS the uncertainty in the result!

Climate science is just chock full of these kinds of idiotic memes.

Reply to  Tim Gorman
November 13, 2025 4:40 am

“Subtracting two values with uncertainty CANCELS the uncertainty in the result!”

Not if the uncertainties are random, they don’t. Why do you always assume that all uncertainties are random?

Reply to  Bellman
November 13, 2025 5:06 am

Your lack of reading comprehension skills is showing again!

It is CLIMATE SCIENCE that assumes uncertainty cancels when subtracting two uncertain values – it is a part of their meme of “all measurement uncertainty is random, Gaussian, and cancels” which you also subscribe to.

Reply to  Tim Gorman
November 13, 2025 5:10 am

No, only systematic. You are clueless, Gorman

Reply to  Warren Beeton
November 13, 2025 5:55 am

No, only systematic. You are clueless, Gorman

Here is an easy example for you to illustrate your knowledge.

Monthly Average = 50 ±2
Baseline Average = 45 ±2

I subtract them to obtain an anomaly. What is the anomaly value and uncertainty?

You need to read some metrology books. If a systematic error has been discovered via calibration, then corrections should be made to the input quantities prior to calculating uncertainty. If the systematic errors are inherent in the process, then a probability distribution needs to be identified and a measurement uncertainty value calculated as a standard deviation that is included in combined uncertainty.

Read GUM Section 4.3 for a discussion of Type B uncertainty.
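
For a difference y = A − B, the GUM propagation formula with correlation r between the inputs is u(y) = √(u_A² + u_B² − 2·r·u_A·u_B). A sketch applying it to the worked example above, showing both the independent and the fully systematic cases:

```python
import math

def diff_uncertainty(u_a, u_b, r=0.0):
    """GUM propagation for y = A - B, with correlation r between the inputs."""
    return math.sqrt(u_a**2 + u_b**2 - 2 * r * u_a * u_b)

anomaly = 50 - 45  # monthly average minus baseline average, from the example
print(anomaly)                                  # 5
print(round(diff_uncertainty(2, 2, r=0.0), 2))  # independent inputs: 2.83
print(round(diff_uncertainty(2, 2, r=1.0), 2))  # fully shared (systematic): 0.0
```

The whole dispute in this thread is effectively over which value of r applies to the two averages being subtracted.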

Reply to  Jim Gorman
November 13, 2025 5:57 am

What’s the process used by climate scientists?

Reply to  Warren Beeton
November 13, 2025 6:24 am

There isn’t one. Field instruments can drift, microclimates can change, shelters can deteriorate.

When did you ever see an uncertainty budget with uncertainty propagated from the initial readings to a final determination?

They just get trashed, because otherwise climate science couldn’t claim uncertainty out to three or four decimal places.

Reply to  Jim Gorman
November 13, 2025 7:54 am

So you don’t know how scientists treat the data, yet you claim their process is flawed. That’s a new level of hubris, founded on ignorance.

Reply to  Warren Beeton
November 13, 2025 1:20 pm

“So you don’t know how scientists treat the data, yet you claim their process is flawed. That’s a new level of hubris, founded on ignorance.”

It’s obvious that when they refuse to state what the measurement uncertainty is that they are *NOT* treating measurement uncertainty at all.

The SEM is *NOT* measurement uncertainty. It is a metric for SAMPLING error. Sampling error ADDS to the measurement uncertainty – yet climate science only ever speaks to the sampling error!

Reply to  Tim Gorman
November 13, 2025 3:56 pm

You’ve already demonstrated your ignorance of their methodology.

Reply to  Jim Gorman
November 13, 2025 6:07 am

Look at Table D2 in TN 1297 for an example of including systematic uncertainty as standard deviation in the combined uncertainty.

Have you ever seen an uncertainty budget like this for temperature measurements? I’ll bet not!

Reply to  Tim Gorman
November 13, 2025 5:29 am

Your inability to detect sarcasm is showing again. I know it’s your delusion that all of climate science believes all distributions are Gaussian etc. You shout it in every comment you make.

But in this case you claimed that if uncertainties are random they will cancel when subtracted. That’s just not true. If they are random then they add in quadrature when subtracted. They will only cancel when subtracted if there is a complete positive dependency between them, e.g. if the uncertainty is entirely due to a systematic error.

Reply to  Bellman
November 13, 2025 6:16 am

E.g if the uncertainty is entirely due to a systematic error

Think again.

I have a value A ±1 and a value B ±1 all due to systematic error.

If A > B, show us how ±1 disappears.

This doesn’t even make sense. If I have a pressure gauge that reads +5 psi high, how do you remove that uncertainty?

Reply to  Jim Gorman
November 13, 2025 7:11 am

“Think again.”

I’m always thinking. But the answer is still the same.

“I have a value A ±1 and a value B ±1 all due to systematic error.”

If the ± 1 represents an entirely systematic error, then call that error e. So we have true values A + e, and B + e. Now subtract B from A. You have a true value of (A + e) – (B + e) = (A – B) + (e – e), and e – e = 0. So A-B is the true value.

Or use the general equation from the GUM where the uncertainties are not independent. Assume a correlation is 1. That should give you the same result when the function is subtraction.

“This doesn’t even make sense.”

Not to you, maybe. To me it seems quite obvious. Suppose I take the height of two people using the same tape measure. I find person A is 190cm and B is 185cm. I conclude that A is taller than B. Then someone discovers that the tape had been cut off at the base, and was measuring everyone 10cm taller than their actual height. Assuming this 10cm error is the same for both A and B, why would you think it could make B taller than A?

Reply to  Bellman
November 13, 2025 10:35 am

If the ± 1 represents an entirely systematic error, then call that error e. So we have true values A + e, and B + e. Now subtract B from A. You have a true value of (A + e) – (B + e) = (A – B) + (e – e), and e – e = 0. So A-B is the true value.

Why am I not surprised you would invent an insane example that doesn’t even meet the physical requirements.

Do you really think that subtracting two measurements that are wrong will give you a correct value with no error?

You haven’t heard a word you’ve been told about uncertainty adding, ALWAYS. You just blithely subtract them and assume they cancel!

If A and B are means of two random variables, each with a standard deviation of ±1, would you subtract or add the variances? Remember systematic error are supposed to be included as standard deviation in a combined uncertainty.

Reply to  Jim Gorman
November 13, 2025 1:34 pm

“Do you really think that subtracting two measurements that are wrong will give you a correct value with no error?”

That’s EXACTLY what he believes. All measurement uncertainty is random, Gaussian, and cancels. Just like climate science does!

Reply to  Jim Gorman
November 13, 2025 1:51 pm

Why am I not surprised you would invent an insane example…

It was your example. Remember

I have a value A ±1 and a value B ±1 all due to systematic error.

Do you really think that subtracting two measurements that are wrong will give you a correct value with no error?

If the error is entirely identical, yes. That’s what systematic error means. If there is any random uncertainty then no.

You haven’t heard a word you’ve been told about uncertainty adding, ALWAYS.

I see you just ignored my argument and resorted to repeated assertions. I’ve heard you saying it for years, and it still is misleading. Specifically, it’s wrong in this case, when you have non-independent errors operating in opposite directions, and it’s wrong when you use it to claim that the uncertainty of an average will never be smaller than that of the individual measurements.

You really have to learn to follow the maths and not just rely on repeated assertions.

If A and B are means of two random variables each with a standard deviation of ±1, would you subtract or add the variances?

You would add the variances, assuming they were independent.

Remember systematic error are supposed to be included as standard deviation in a combined uncertainty.

You can do that, but you have to understand that they are not independent.

Reply to  Bellman
November 13, 2025 1:33 pm

This is nothing more than your usual use of the meme “all measurement uncertainty is random, Gaussian, and cancels”!

Your values should be A +/- e_A and B +/- e_B.

What is the measurement uncertainty if e_A is + and e_B is +?

What is the measurement uncertainty if e_A is - and e_B is -?

What is the measurement uncertainty if e_A is + and e_B is -?

What is the measurement uncertainty if e_A is - and e_B is +?

What is the measurement uncertainty if e_A is +/2 and e_B is -?

What is the measurement uncertainty if e_A is +/4 and e_B is -/2?

You only want to assume that e_A and e_B *ALWAYS* CANCEL.

They don’t. The +/- only describes the interval in which A and B can lie. They do *NOT* justify that e_A and e_B always cancel!!!! It’s why it’s called an UNCERTAINTY INTERVAL. You simply don’t know what either “e” value is!! Since you don’t know what they are how do you know if they cancel or not?

Hint: You gave yourself away by using the term “true value”. No matter how often it’s repeated to you, you simply ignore the fact that you can NEVER know the true value. Therefore, you can never know what “e” is either.

You continually say you don’t believe in the meme of “all measurement uncertainty is random, Gaussian, and cancels” BUT IT IS IN EVERYTHING YOU ASSERT. You aren’t even capable of understanding when you are applying the meme!

Reply to  Tim Gorman
November 13, 2025 2:07 pm

This is nothing more than your usual use of the meme “all measurement uncertainty is random, Gaussian, and cancels”!

Apart from the fact I’m specifically saying they are not random, and have no interest in their distribution.

Your values should be A +/- e_A and B +/- e_B.

OK, but it’s a systematic error, therefore e_A = e_B.

You simply don’t know what either “e” value is!!

You don’t need to know what e is, just that it’s the same for both measurements.

No matter how often it’s repeated to you, you simply ignore the fact that you can NEVER know the true value.

How many times are you going to repeat this idiocy? I know you don’t know the true value. That’s why there is uncertainty. In this case you don’t need to know the true value to know that if there are no random uncertainties but the same systematic error, then the difference between the two unknown true values will be a known true difference, because you’ve eliminated the only unknown.

You continually say you don’t believe in the meme of “all measurement uncertainty is random, Gaussian, and cancels”

Yet you still keep lying that I keep claiming it.

BUT IT IS IN EVERYTHING YOU ASSERT.

You keep giving away your own insecurities with this incessant shouting.

If I believed that all uncertainties were random, I would have said that the example

I have a value A ±1 and a value B ±1 all due to systematic error.

was meaningless as there is no such thing as a systematic error, and then argued that as the ±1 was a random uncertainty, the uncertainty of A – B would be √(1² + 1²) = 1.41.

Instead I assumed the uncertainty was entirely due to systematic error, and said the uncertainty of A – B was zero.

Can you not see how absurd your obsession about me is? Endlessly repeating the same easily refuted lie doesn’t do your arguments any favours.

Reply to  Bellman
November 14, 2025 5:03 am

“OK, but it’s a systematic error, therefore e_A = e_B.”

YOU DON’T KNOW THAT!

Systematic error in measurements can CHANGE even in the same instrument if the measurement environment changes! Meaning you *have* to assume that e_A ≠ e_B.

The meme “random, Gaussian, and cancels” is so ingrained in your brain that you don’t even know when you are asserting it!

“You don’t need to know what e is, just that it’s the same for both measurements.”

You CAN NOT KNOW if they are the same for both measurements if the measurements are made under different environmental conditions! Why do you think the GUM defines the term “repeatable” as it does?

———————————————-
NOTE 2 Repeatability conditions include:
— the same measurement procedure
— the same observer
— the same measuring instrument, used under the same conditions
— the same location
— repetition over a short period of time.
——————————————- (bolding mine, tpg)

Once again, you don’t even realize when you are using the meme “random, Gaussian, and cancels” that is ingrained in your brain!

“How many times are you going to repeat this idiocy?”

It’s not idiocy. It’s exactly what the GUM says.

“Although these two traditional concepts are valid as ideals, they focus on unknowable quantities: the “error” of the result of a measurement and the “true value” of the measurand” (bolding mine, tpg)

You keep on making assertions that imply the GUM is wrong but you never actually show any evidence – no other references contradicting the GUM or a treatise of some kind showing that the “true value” **can* be known.

that’s why there is uncertainty.”

Then why do you continue to insist that you *do know* the true value and that you know that systematic error is always the same in every measurement?

same systematic error”

Total uncertainty is typically split into “random” and “systematic” components. If you don’t know what the random component is, then how do you know what the systematic component is? If you know the total measurement uncertainty *and* the systematic uncertainty then you *do* know what the random component is — meaning the random component isn’t really random! As Bevington specifically points out in his tome, systematic uncertainty is not amenable to statistical analysis – yet you imply that it is!

the difference between the two unknown true values will be a known true difference, because you’ve eliminated the only unknown.”

Your statistician biases are showing again! An unknown *random* component is *NOT* a fixed value! It is only so in statistical world where the mean is considered to be the “true value”, the *expected* value that always happens, i.e. a constant! You always say you don’t believe that to be the case but just like “random, Gaussian, and cancels” you just keep coming back to depending on the average being the “true value”.

Bevington says that systematic uncertainty is not amenable to statistical analysis for a reason! Yet *you* imply that it is – meaning Bevington is wrong, the GUM is wrong, Taylor is wrong, NIST is wrong, on and on and on ….

If I believed that all uncertainties were random, I would have said that the example”

Your use of the meme is endemic in every assertion you make. You don’t have to specifically state it, it just stands out as a red flag!

Instead I assumed the uncertainty was entirely due to systematic error, and said the uncertainty of A – B was zero.”

You simply cannot know that A and B are equal! It’s the basic definition of “uncertainty”. If you *know* what the systematic error is in each measurement then you should take up a new career – fortune teller in a carnival!

Reply to  Tim Gorman
November 14, 2025 5:38 am

You missed the fact that errors don’t SUBTRACT, they ADD. From the basic example in Taylor, the error in measuring the length and width, the maximum error is added. RSS may be appropriate, but the errors are still added, only in quadrature.

Every statistical text says when the means of two random variables are added or subtracted, the variances add in quadrature.

Reply to  Jim Gorman
November 14, 2025 5:51 am

You missed the fact that errors don’t SUBTRACT, they ADD.

No, that’s the whole point. Constant errors subtract when you are subtracting.

By all means change the argument and say that you might have different systematic errors for each measurement. That just means they are effectively random errors for the purpose of the combined uncertainty. But it’s an inevitable fact that when you have the same error for each measurement, subtracting one from the other will cancel that error. Even if the two errors are not identical, subtraction will eliminate the part they have in common.

From the basic example in Taylor, the error in measuring the length and width, the maximum error is added.

Has it ever occurred to you that Taylor might be wrong if he says that?

Every statistical text says when the means of two random variables are added or subtracted, the variances add in quadrature.

Random being the operative word. Which means they have to be independent. If there is a systematic error between the two they are not independent.
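The arithmetic behind this exchange can be checked numerically. A minimal sketch (all values hypothetical): a systematic offset shared by both measurements cancels in the difference, while the independent random parts add in quadrature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_A, true_B = 20.0, 15.0
bias = 0.7  # hypothetical systematic error common to both measurements

# each measurement = true value + shared bias + independent random error (sd = 1)
A = true_A + bias + rng.normal(0.0, 1.0, n)
B = true_B + bias + rng.normal(0.0, 1.0, n)

diff = A - B
mean_diff = diff.mean()  # ~5.0: the shared bias has cancelled
sd_diff = diff.std()     # ~1.41: the independent sds add as sqrt(1^2 + 1^2)
```

If instead the two biases were different and unknown, the uncancelled parts would behave like independent errors and the spread of the difference would grow accordingly.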

Reply to  Bellman
November 14, 2025 7:27 am

constant errors subtract when you are subtracting.

Where exactly do you find these “constant errors”?

Random being the operative word. Which means they have to be independent. If there is a systematic error between the two they are not independent.

The GUM system treats and quantifies all as variance.

Reply to  Tim Gorman
November 14, 2025 6:07 am

YOU DON’T KNOW THAT!

Please keep writing in all caps. It really demonstrates how good your argument is.

I do know that, because it was the premise of this hypothetical example, 2 measurements each with an uncertainty of ±1, where the ±1 was entirely due to systematic error. If, as seems reasonable, that means each have the same systematic error then it should be clear that subtracting one from the other removes that systematic error.

If you want to change the goal posts and say you have two values each with a different unknown systematic error, then yes, they would have to be treated as two random errors, and the uncertainties add.

The meme “random, Gaussian, and cancels” is so ingrained in your brain that you don’t even know when you are asserting it!

You are so lost here. It’s become such a meme for you that you can’t even see that I’m saying the exact opposite. Random uncertainties add when subtracting, systematic errors cancel.

“Although these two traditional concepts are valid as ideals, they focus on unknowable quantities: the “error” of the result of a measurement and the “true value” of the measurand” (bolding mine, tpg)

So? You keep trying to find meaning in these biblical verses which just isn’t there. You don’t know what the true value is, that’s the whole point of uncertainty, and you don’t know what the error is. That doesn’t mean you can’t make calculations based on those unknown values. It’s standard frequentist statistics, as used by Taylor.

As the GUM says, it doesn’t matter what definition you use, the calculations will be the same.

I’ll ignore the rest of your rant as it just seems to be repeating the same argument over and over again.

Reply to  Bellman
November 14, 2025 7:17 am

If, as seems reasonable, that means each have the same systematic error then it should be clear that subtracting one from the other removes that systematic error.”

Huh? One more time, how do you know they have the same systematic error when different things are being measured by different instruments under different environments?

In essence all you are doing is: “Look, if I assume everything is equal it all cancels out”. Nothing more than a corollary to your “all measurement uncertainty is random, Gaussian, and cancels”.

If you want to change the goal posts and say you have two values each with a different unknown systematic error, then yes, they would have to be treated as two random errors, and the uncertainties add.”

How else should you treat measurements of different things by different instruments under different environments?

Random uncertainties add when subtracting, systematic errors cancel.”

You are still living in statistical world! “If everything is equal then everything cancels!”

How you can make this assumption in the real world is just beyond me.

That doesn’t mean you can’t make calculations based on those unknown values”

You *are* assuming that they are equal! If they are unknown then how do you know they are equal?????

Reply to  Tim Gorman
November 14, 2025 9:29 am

One more time

If only.

how do you know they have the same systematic error when different things are being measured by different instruments under different environments?

You don’t. And if you are using different instruments there’s less likelihood of there being a systematic error.

But if there is any systematic error between the two measurements, then by definition it will be the same between the two, and will be eliminated by subtraction.

If you are talking about anomalies, then the main advantage is that any systematic error caused by environmental factors is reduced by taking the difference. A station on the top of a mountain will have a much colder temperature than one at the base of the mountain. A station in the arctic will tend to have a colder temperature than one at the equator. A station will tend to be colder in winter than in summer. Taking an anomaly will tend to reduce much of these differences.
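What anomalies do (and do not do) here can be sketched with made-up numbers: two stations sharing a regional signal but with very different local baselines.

```python
import numpy as np

rng = np.random.default_rng(1)
months = 120

signal = rng.normal(0.0, 0.5, months)  # regional variation shared by both stations
mountain = -5.0 + signal + rng.normal(0.0, 0.2, months)  # cold mountaintop station
valley = 12.0 + signal + rng.normal(0.0, 0.2, months)    # warm valley station

# anomaly = reading minus the station's own long-term mean
anom_m = mountain - mountain.mean()
anom_v = valley - valley.mean()

raw_gap = valley.mean() - mountain.mean()  # ~17 C offset between the raw series
r = np.corrcoef(anom_m, anom_v)[0, 1]      # high: the anomalies track each other
```

The constant ~17 C offset disappears by construction; the month-to-month spread of each individual series is untouched.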

In essence all you are doing is: “Look, if I assume everything is equal it all cancels out”.

Yes. That’s the point I’m making. Do you finally get it? The claim was that when the uncertainty was caused by a systematic error, then subtracting values should increase that uncertainty, not cancel it.

You are still living in statistical world!

And you are living in fantasy world. If you don’t want to use statistics to understand uncertainty, explain what method you do use – because at the moment it just seems to be to just make up any argument that agrees with your personal beliefs.

If they are unknown then how do you know they are equal?????

I’ll answer the first of your questions. You don’t know if all the errors are the same. But if there is any sort of systematic error that is positively correlated between two measurements, it will be, at least in part, eliminated when looking at the differences. And, the great thing about this is you don’t need to know what the value is, it will still be cancelled simply because x – x = 0, regardless of what x is.
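The “at least in part” can be made precise: Var(A − B) = Var(A) + Var(B) − 2·Cov(A, B), so any positively correlated (shared) error component reduces the spread of the difference. A sketch with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

common = rng.normal(0.0, 1.0, n)          # error component shared by A and B
a_err = common + rng.normal(0.0, 0.5, n)  # plus an independent part
b_err = common + rng.normal(0.0, 0.5, n)

# each error has variance 1.0 + 0.25 = 1.25, but the shared part drops out:
# Var(a - b) = 1.25 + 1.25 - 2 * 1.0 = 0.5
var_each = np.var(a_err)          # ~1.25
var_diff = np.var(a_err - b_err)  # ~0.5, less than either error's own variance
```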

Reply to  Bellman
November 14, 2025 10:06 am

And if you are using different instruments there’s less likelihood of there being a systematic error.

In which universe? Statements like this just demonstrate (again) your lack of understanding of real-world metrology.

But if there is any systematic error between the two measurements, then by definition it will be the same between the two, and will be eliminated by subtraction.

Hand-waved nonsense — by what definition?

You just hope there is cancellation.

Reply to  karlomonte
November 14, 2025 11:33 am

It should be obvious to you that Bellman means the systematic error will be reduced by the common value of the plus and the minus sides of the systematic error distribution.

Reply to  Warren Beeton
November 14, 2025 12:29 pm

Another climatologist who doesn’t understand non-random uncertainty.

PASS.

Reply to  karlomonte
November 14, 2025 1:47 pm

So you think systematic error cannot be reduced by the use of anomalies? Then you don’t understand statistics (which your responses to Bellman have already demonstrated).

Reply to  Warren Beeton
November 14, 2025 3:01 pm

Systematic error cannot be reduced with anomalies.

Measurement uncertainty is related to the variance of the measurement data. Shifting that distribution by adding or subtracting a constant does *NOT* reduce the variance of the distribution – i.e. it does *NOT* reduce the measurement uncertainty associated with the distribution!

Please explain how you think shifting the distribution using subtraction changes the variance of the distribution.

I’m betting you can’t!
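The narrow mathematical point being posed, that subtracting a constant leaves a distribution’s variance unchanged, takes one line to verify (illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(14.0, 2.0, 10_000)  # hypothetical absolute readings
baseline = 13.5                    # any constant reference value

# the series x - baseline is shifted, but its variance is identical
unchanged = np.isclose(np.var(x - baseline), np.var(x))  # True
```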

Reply to  Tim Gorman
November 14, 2025 3:24 pm

Thousands of scientists publishing in peer reviewed scientific journals use anomalies to do just that. When you publish your critique in a scientific journal, I’ll pay attention. Until then, you are just noise.

Reply to  Warren Beeton
November 15, 2025 3:37 am

Those scientists are doing nothing more than shifting the distribution to make the numbers *look* smaller. If you calculate the relative uncertainty of each IT WILL BE THE SAME. And it will have the same variance!

I knew you couldn’t explain how the variance of a distribution changes by adding or subtracting a constant. Instead you use the argumentative fallacies of False Appeal to Authority and the Bandwagon fallacy. “They say this” and “They all do this” so it must be right!

Literally pathetic!

Reply to  Tim Gorman
November 15, 2025 5:38 am

The scientists are doing it right. And you are doing it wrong, Gorman. Ranting and whining on wuwt won’t solve your basic problem — incompetence.

Reply to  Warren Beeton
November 14, 2025 2:59 pm

You are trying to assert the same meme that bellman always does. All measurement uncertainty is random, Gaussian, and cancels.

It’s sheer idiocy!

  1. You don’t know what the systematic error distribution *is*. It could be severely skewed and you will *not* get cancellation.
  2. In fact, you *can* have a measurement uncertainty that is *all* positive or all negative. Consider an electronic sensor whose calibration error is always positive because the constant measurement current through it causes heating and expansion. The error *always* increases in the positive direction; it *never* goes negative.

If heat *always* drives the calibration error to be in one direction then how do you get any cancellation of the systematic error?

Reply to  Tim Gorman
November 14, 2025 6:31 pm

You are trying to assert the same meme that bellman always does. All measurement uncertainty is random, Gaussian, and cancels.

Once again, I have never asserted any such thing. I have repeatedly asserted the opposite. But Gorman thinks if he keeps repeating the lie enough times, someone will believe him.

Reply to  Bellman
November 15, 2025 3:39 am

You use the meme in EVERYTHING you post! Do you think people can’t recognize this when I point it out each and every time you apply the meme?

Reply to  Tim Gorman
November 15, 2025 5:17 am

You use the meme in EVERYTHING you post!

Everything? Including the comments where you attacked me for assuming the uncertainties were systematic?

Do you think people can’t recognize this when I point it out each and every time you apply the meme?

You really are developing a dangerous obsession with me.

Reply to  Warren Beeton
November 15, 2025 1:56 pm

You are as bad as bellhop. To know “systematic error”, one must KNOW the true value of the measurement. The subtraction of a value and the true value will provide both positive and negative values that can be assumed to cancel. The true value is unknowable.

The GUM says this.

Annex D

The term true value (B.2.3) has traditionally been used in publications on uncertainty but not in this Guide for the reasons presented in this annex. Because the terms “measurand”, “error”, and “uncertainty” are frequently misunderstood, this annex also provides additional discussion of the ideas underlying them to supplement the discussion given in Clause 3. Two figures are presented to illustrate why the concept of uncertainty adopted in this Guide is based on the measurement result and its evaluated uncertainty rather than on the unknowable quantities “true” value and error.

You would do well to bolster your assertions and criticism with references rather than acting as the world’s authority that is infallible.

Reply to  Bellman
November 14, 2025 2:22 pm

You don’t. And if you are using different instruments there’s less likelihood of there being a systematic error.”

HUH? This assertion is just pure idiocy. Systematic doesn’t mean “equal among all instruments”. It typically implies either a calibration problem and/or an incorrect measurement protocol. Calibration drift is different even in similar instruments. Incorrect measurement protocols can be different for different measurements.

“But if there is any systematic error between the two measurements”

Systematic error is *NOT* between measurements, it is part and parcel of each measurement. Measurement 1 can have a totally different systematic error than Measurement 2 even if both are measured using the same instrument!

If you are talking about anomalies, then the main advantage is that any systematic error caused by environmental factors is reduced by taking the difference.”

Where in Pete’s name are you coming up with this? You are totally off the reservation. Put down the bottle!

If Environment1 is different than Environment2 they don’t cancel out! They *add* to the total uncertainty. The measurement uncertainty contributed by a metal tape measure in Environment1 *adds* with the measurement uncertainty contributed by the same metal tape measure in Environment2.

This is nothing more than your meme that all measurement uncertainty is random, Gaussian, and cancels.

You can continue to say that you don’t employ that meme but it just comes shining through in every single thing you assert!

Taking an anomaly will tend to reduce much of these differences.”

Malarky! Utter and total malarky!

If the uncertainty is represented by a standard deviation then shifting the distribution by subtraction of a constant WILL NOT REDUCE THE SD OF THE DISTRIBUTION.

And that is *all* that an anomaly is – subtraction of a constant.

Yes. That’s the point I’m making. Do you finally get it? The claim was that when the uncertainty was caused by a systematic error, then subtracting values should increase that uncertainty, not cancel it.”

But systematic error is neither constant nor equal among measurements. If a tape measure is off by 1 inch then when measuring two different board lengths the total measurement uncertainty will be +/- 2″. It will *NOT* cancel out to 0 (zero) measurement uncertainty! If one measurement uncertainty is +/- 1″ because of the environment and a second measurement has a measurement uncertainty of +/- 1.5″ because of a different environment you don’t get a total measurement uncertainty of +/- 0.5″. It is +/- 2.5″!

And you are living in fantasy world. If you don’t want to use statistics to understand uncertainty, explain what method you do use – because at the moment it just seems to be to just make up any argument that agrees with your personal beliefs.”

I am not the one claiming that measurement uncertainties subtract instead of adding.

You don;t know if all the errors are the same.”

Then why did you assume that they are always the same?

But if there is any sort of systematic error that is positively correlated between two measurements, it will be, at least in part, eliminated when looking at the differences.”

Correlation doesn’t imply cancellation of any kind. If measurement1 is +/- 1 and measurement2 is +/- 2 they probably have some measure of correlation due to a confounding variable.

THEY DON’T CANCEL, not even partially! They ADD! They add directly or in quadrature. Take your choice – both are ADDITION.

Again, all you are doing is asserting that “all measurement uncertainty is random, Gaussian, and cancels”.

Reply to  Bellman
November 15, 2025 1:35 pm

You don’t. And if you are using different instruments there’s less likelihood of there being a systematic error.

Keep throwing that manure dude, it may stick to the wall someday.

You keep using the term error and there is no longer such a thing. The GUM says that to know errors you must know a true value, yet, true values are unknowable. It is the foundation for using variance as a standard uncertainty.

Look at Eq. 10 in the GUM: do you see any subtraction involved? How about addition? It’s all over.

Systematic effects can only be resolved in measuring an input quantity through calibration. Then the measurements can be corrected. Those that are caused by changing components must be dealt with by evaluating reproducible measurements.

The book Experimentation and Uncertainty Analysis for Engineers explains that by using multiple different measurement devices, one may develop a series of observations that creates a random variable that allows one to develop “a single realization drawn from some parent population of possible systematic errors”. This then allows one to create a standard uncertainty that can be applied to an uncertainty budget for the system being analyzed. Any idea where that is done in climate science?

The upshot is that they don’t use the “error” paradigm, they use the “uncertainty” paradigm by developing a random variable with a mean and variance.

Reply to  Jim Gorman
November 15, 2025 3:10 pm

Keep throwing that manure dude,

OK, it’s your area of expertise. Explain how two different instruments have the same likelihood of having identical systematic errors as a single instrument does.

You keep using the term error and there is no longer such a thing.

Woke nonsense. You’re not allowed to say error anymore. Political correctness gone mad.

Shame nobody told the GUM as they keep talking about systematic error.

Look at Eq. 10 in the GUM, do you see any subtraction involved.

I refer the gentleman to my previous answer.

https://wattsupwiththat.com/2025/11/10/the-curious-case-of-the-missing-data/#comment-4131762

Reply to  Bellman
November 13, 2025 1:22 pm

But in this case you claimed that if uncertainties are random they will cancel when subtracted.”

No, that is *NOT* what I claimed. That is what climate science claims. Go back and read my post for *meaning* this time.

Reply to  Tim Gorman
November 13, 2025 2:14 pm

It’s what you claimed scientists claimed. Please try to read for context, rather than for cheap point scoring.

Reply to  Bellman
November 13, 2025 8:09 am

WTH is a “random uncertainty”?

Reply to  karlomonte
November 13, 2025 1:35 pm

Nothing he says makes any sense. If uncertainty is random then how do you know it always cancels? Even in a random sequence you can have long runs of the same value. Do they always cancel?

Reply to  Tim Gorman
November 13, 2025 3:13 pm

By random uncertainty, I mean uncertainties that are caused by random factors. A short hand way of distinguishing them from uncertainties caused by systematic errors, or any other non-independent uncertainty.

I’m pretty sure I didn’t hallucinate the term – but it was a lot easier when you were allowed to talk about random error, before all this woke nonsense about error not being uncertainty.

Reply to  Bellman
November 13, 2025 3:25 pm

Here’s one example of the term I found after a brief search.

In contrast to systematic uncertainties, random uncertainties are an unavoidable result of measurement, no matter how well designed and calibrated the tools you are using. Whenever more than one measurement is taken, the values obtained will not be equal but will exhibit a spread around a mean value, which is considered the most reliable measurement. That spread is known as the random uncertainty. Random uncertainties are unbiased – meaning it is equally likely that an individual measurement is too high or too low.

https://www.physics.columbia.edu/sites/www.physics.columbia.edu/files/content/Lab%20Resources/Lab%20Guide%201_%20Introduction%20to%20Error%20and%20Uncertainty.pdf

Not that I would agree with that last sentence.

Here’s another use of the term, though they only use it in the title

Statistical Analysis of Random Uncertainties

https://web.mit.edu/fluids-modules/www/exper_techniques/3.Statistical_Anal._of_Unce.pdf

And going back in time, there’s always that Taylor book, which makes repeated use of that term, including having a chapter called

Statistical Analysis of Random Uncertainties

But obviously that’s nonsense as statistics has nothing to do with the real world.

Reply to  Bellman
November 14, 2025 6:10 am

Random uncertainties are unbiased – meaning it is equally likely that an individual measurement is too high or too low.”

This is wrong, wrong, wrong. Like you, they assume that all measurement uncertainty is random, Gaussian, cancels. They’ve never heard of an asymmetric distribution apparently.

But obviously that’s nonsense as statistics has nothing to do with the real world.”

Bevington says systematic uncertainty is not amenable to statistical analysis. Since you cannot know the actual systematic uncertainty in any *single* field measurement, e.g. different temperature readings under different environments, you simply cannot eliminate the systematic uncertainty using statistics.

All you can do is ESTIMATE total measurement uncertainty. You cannot separate out the individual components.

It’s why you just can’t assume that “all measurement uncertainty is random, Gaussian, and cancels” as you and climate science do.

Reply to  Tim Gorman
November 14, 2025 7:03 am

This is wrong, wrong, wrong.

Which is why I said I didn’t agree with it.

Like you, they assume that all measurement uncertainty is random, Gaussian, cancels.

What part of “Not that I would agree with that last sentence.” didn’t you understand?

Bevington says systematic uncertainty is not amenable to statistical analysis.

Which has zero to do with what you are replying to. You claimed that “random uncertainty” wasn’t a thing. I pointed out that Taylor uses the term, and made a joke about him also saying you can use statistics. Your response is that a different person said you can’t use statistical analysis on non-random errors. Do you think you try too hard to find reasons to disagree with me?

Reply to  Bellman
November 14, 2025 7:10 am

Taylor also assumes that after Chapter 3 everything is totally random with no systematic uncertainty!

Thus statistical analysis is appropriate.

Reply to  Tim Gorman
November 14, 2025 9:46 am

Taylor also assumes that after Chapter 3 everything is totally random with no systematic uncertainty!
Thus statistical analysis is appropriate.

You have to love the mental contortions people go through to avoid admitting they are wrong.

So now it’s all right to use statistics as long as you assume all uncertainties are random, is it? Yet you keep claiming I assume all uncertainties are random as if it’s a mortal sin.

I assume you now believe that everything after chapter 3 in Taylor is fantasy and has nothing to do with the real world.

Reply to  Bellman
November 14, 2025 2:31 pm

So now it’s all right to use statistics as long as you assume all uncertainties are random, is it?”

Where in Pete’s name do you get this stuff? No one has *EVER* said statistics can’t be used! They can’t be used to create data. All statistics do is describe the actual data!

What you CAN NOT do is identify systematic uncertainty using statistical descriptors. Total measurement uncertainty = random uncertainty + systematic uncertainty. The fact that you cannot identify the systematic uncertainty using statistical descriptors means you can’t identify the random uncertainty either!

You are trying to assert that you *CAN* eliminate systematic uncertainty without knowing what it is! A difference in measurement uncertainty between two measurements can be generated from different random uncertainties, from different systematic uncertainties, or from a combination of the two. If you can’t identify the systematic uncertainty using statistical descriptors then you can’t assume they cancel! The difference in the measurement uncertainty *could* be totally from different random uncertainty components!

I assume you now believe that everything after chapter 3 in Taylor is fantasy and has nothing to do with the real world.”

Give it up. You know nothing about metrology. You are aptly demonstrating that in just this thread alone!

Reply to  Bellman
November 15, 2025 8:39 am

So now it’s all right to use statistics as long as you assume all uncertainties are random, is it?

Dude, exactly what do you think Type A analysis of observations is?

GUM 2.3.2

Type A evaluation (of uncertainty)

method of evaluation of uncertainty by the statistical analysis of series of observations.
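A Type A evaluation in this sense is nothing exotic, just the sample statistics of repeated observations. A minimal sketch with hypothetical readings:

```python
import math

# hypothetical repeated observations of one quantity under repeatability conditions
obs = [10.2, 10.4, 10.1, 10.3, 10.5, 10.2, 10.3, 10.4]
n = len(obs)
mean = sum(obs) / n                               # best estimate: 10.3
s2 = sum((o - mean) ** 2 for o in obs) / (n - 1)  # experimental variance
u_mean = math.sqrt(s2 / n)                        # Type A standard uncertainty
                                                  # of the mean (GUM 4.2), ~0.046
```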

Yet you keep claiming I assume all uncertainties are random as if it’s a mortal sin.

You do assume that. You even assume they cancel, as you did when arguing that forming “A − B” justifies “e_A − e_B = 0”. This is the old paradigm where one knew the true value and that errors around that true value were random errors and that they canceled.

That paradigm went away with statistical variance where any negative values are squared and then ADDED to the positive values!

The GUM also says;

2.3.3

Type B evaluation (of uncertainty)

method of evaluation of uncertainty by means other than the statistical analysis of series of observations

When was the last time you included any assessment of Type B uncertainties in a combined standard uncertainty? Things like hysteresis, drift, resolution, etc. They all cancel at every location so they can be ignored, right?

Why do you think NOAA created a “reference” system called CRN? Why do you think their Type B assessment of CRN stations is ±0.3°C (0.54°F)?

Reply to  Jim Gorman
November 15, 2025 2:07 pm

Dude, exactly what do you think Type A analysis of observations is?

It’s what I’ve been telling you all along – statistics. Exactly the same statistics that are used with samples of different things resulting in the SEM.

You do assume that. You even assume they cancel as you did when justifying “A-B” also justifies “eₐ-e₆=0”.

This is really getting unhinged. You claim I say that all uncertainties are random, then quote my talking about a situation where I’m assuming they are systematic.

This is the old paradigm where one knew the true value and that errors around that true value were random errors and that they canceled.

If they were random errors they would not cancel like that. They would add in quadrature. There would be some cancellation, but they would not cancel completely.

That paradigm went away with statistical variance where any negative values are squared and then ADDED to the positive values!

What paradigm? It’s always been true that variances add, and that subtracting random variables results in their variances adding.

You do keep demonstrating that the reason we keep wasting so much time arguing is that you never try to understand what I’m saying. If you weren’t so fixated on the lie that I think all uncertainties are random you might have actually understood the point I’m making.

When was the last time you included any assessment of Type B uncertainties in a combined standard uncertainty?

What do you think I’ve been doing all this time. When we talk about the measurement uncertainty of an average it’s generally assumed we have a type B uncertainty for each measurement. That was the original question way back, 100 thermometers each with an uncertainty of 0.5°C. That uncertainty has to be type B.

Things like hysteresis, drift, resolution, etc. They all cancel don’t they at every location so they can be ignored, right?

Why on earth would you think that?

Why do you think NOAA created a “reference” system called CRN?

To get better and more detailed measurements. And probably in a few years time so it can be dismissed as a hoax when people here see it showing a warming trend.

Reply to  Bellman
November 13, 2025 3:42 pm

 before all this woke nonsense about error not being uncertainty

With each post you reinforce your ignorance.

Reply to  karlomonte
November 14, 2025 6:11 am

The fact that he considers error and uncertainty to be the same is a perfect clue to his mindset.

Reply to  Tim Gorman
November 14, 2025 7:30 am

BINGO.

Reply to  Tim Gorman
November 14, 2025 9:51 am

The fact that he considers error and uncertainty to be the same is a perfect clue to his mindset.

Who’s “he”? I certainly don’t think that. You can use error analysis to analyze uncertainty, just as those books on error analysis you keep recommending do. But that doesn’t mean they are the same.

Reply to  Bellman
November 15, 2025 9:00 am

I certainly don’t think that. You can use error analysis to analyze uncertainty,

You certainly do think that. You show it in every post.

To know error, you MUST know the true value of an input quantity. Why is that so hard to understand?

To know uncertainty, you must have sufficient observations of the same input quantity under repeatable conditions to use a statistical analysis of a mean and variance of the observations. Variance gives a range of values, positive and negative, that surround the mean. The mean is the best estimate but it is not considered the true value.

With error you can end up with a “true value + error” or “true value – error”. You can’t do that with a statistical analysis of mean/variance.

VARIANCE IS NOT ERROR. Throw that meme away and never mention it again.

GUM 3.2.2

NOTE 2 In this Guide, great care is taken to distinguish between the terms “error” and “uncertainty”. They are not synonyms, but represent completely different concepts; they should not be confused with one another or misused.

Annex D

The term true value (B.2.3) has traditionally been used in publications on uncertainty but not in this Guide for the reasons presented in this annex. Because the terms “measurand”, “error”, and “uncertainty” are frequently misunderstood, this annex also provides additional discussion of the ideas underlying them to supplement the discussion given in Clause 3. Two figures are presented to illustrate why the concept of uncertainty adopted in this Guide is based on the measurement result and its evaluated uncertainty rather than on the unknowable quantities “true” value and error.

Reply to  Jim Gorman
November 15, 2025 2:24 pm

You certainly do think that.

Ah, lying and mind reading at the same time. You really are talented.

To know error, you MUST know the true value of an input quantity.

To know an actual error, as distinct from uncertainty, that’s true. It’s also irrelevant. Thinking of uncertainty in terms of error, just as in any other branch of statistics, does not require you to actually know either the true value or the error, just to be able to estimate the standard deviation of the error. Do you really think all those thick books on error analysis would have been written otherwise?

To know uncertainty, you must have sufficient observations of the same input quantity under repeatable conditions to use a statistical analysis of a mean and variance of the observations.

And in so doing you are analyzing error. What do you think the standard deviation is? It’s the root-mean-square of the errors. Deviation is just another name for error. Once you’ve got that, you can either think of uncertainty as being the size of that standard deviation, or you can infer an uncertainty interval around a measurement, but that interval is defined by the spread of all your errors.
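The point that the standard deviation is built directly from the deviations (the "errors" from the mean) is a simple numerical identity, easily checked with arbitrary illustrative values:

```python
import math
import statistics

# Arbitrary illustrative values, not data from this discussion
vals = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
m = statistics.fmean(vals)

# Square root of the mean of the squared deviations from the mean
rms_dev = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

# Identical to the population standard deviation
assert math.isclose(rms_dev, statistics.pstdev(vals))
print(rms_dev)  # 2.0
```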

The mean is the best estimate but it is not considered the true value.

Just as it is in error analysis.

With error you can end up with a “true value + error” or “true value – error”.

You do not end up with that, not unless you know what the true value is in the first place. What you end up with is a sample standard deviation, which is the best estimate of the population standard deviation, which in this case is the uncertainty of the measurements. And from that you calculate the SEM, or SDOM, which is your best estimate of the uncertainty of the mean. You then make an inference about how far the true value is likely to be from your measured value. This is usually stated as best estimate ± error.
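The chain described here (sample standard deviation, then SEM/SDOM as the uncertainty of the mean) can be written out as a sketch with made-up numbers:

```python
import math
import statistics

# Illustrative measurements only, not data from this discussion
x = [9.8, 10.1, 9.9, 10.2, 10.0]

best = statistics.fmean(x)    # best estimate of the measurand
s = statistics.stdev(x)       # sample sd: estimates the population sd
sem = s / math.sqrt(len(x))   # SEM / SDOM: uncertainty of the mean

# Stated as best estimate ± uncertainty of the mean
print(f"{best:.2f} ± {sem:.2f}")  # 10.00 ± 0.07
```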

VARIANCE IS NOT ERROR.

No. It’s the average of the squares of all errors.

They are not synonyms, but represent completely different concepts

And I agree.

Reply to  Bellman
November 15, 2025 3:05 pm

To know an actual error, as distinct from uncertainty that’s true. It’s also irrelevant. Thinking of uncertainty in terms of error, just as in any other branch of statistics, does not require you to actually know either the true value or the error. Just to be able to estimate the standard deviation of the error. Do you really think all the books on error analysis were thick.

uncertainty in terms of error

Uncertainty and error are TWO DIFFERENT CONCEPTS. You CAN NOT think of one in terms of the other. They derive from very different views of measurements and are not compatible.

Just to be able to estimate the standard deviation of the error

There you go again. You can not know the standard deviation of the error because you do not know the true value. Errors do not have a standard deviation. Errors are calculated by subtracting the true value from a reading. Why is that so hard to understand?

GUM D.3.1
Although the final corrected result is sometimes viewed as the best estimate of the “true” value of the measurand, in reality the result is simply the best estimate of the value of the quantity intended to be measured.

You keep contradicting the GUM, which was created by folks smarter than you or I. You should base your assertions on quotes from the GUM.

Reply to  Jim Gorman
November 15, 2025 3:30 pm

Uncertainty and error are TWO DIFFERENT CONCEPTS.

As I keep saying.

You CAN NOT think of one in terms of the other.

Obviously you can, or else Taylor and Bevington were idiots.

Here’s what the GUM says

2.2.4 The definition of uncertainty of measurement given in 2.2.3 is an operational one that focuses on the measurement result and its evaluated uncertainty. However, it is not inconsistent with other concepts of uncertainty of measurement, such as

a measure of the possible error in the estimated value of the measurand as provided by the result of a measurement;

Two different concepts of uncertainty, not inconsistent with each other. The GUM prefers the first one, but points out that the calculations are the same regardless.

You can not know the standard deviation of the error because you do not know the true value.

That’s why I said “estimate”.

Errors do not have a standard deviation.

The standard deviation is the square root of the average of the squares of all errors.

“Although the final corrected result is sometimes viewed as the best estimate of the “true” value of the measurand, in reality the result is simply the best estimate of the value of the quantity intended to be measured.”

Which is a whole new philosophical rabbit hole. I think it’s to do with the distinction between Realism and Instrumentalism, but philosophical arguments are not something I care too much about at the moment. Suffice it to say, it’s a bit difficult to see the distinction between the “true value” of the measurand and the value of the quantity intended to be measured.

You keep contradicting the GUM which was created by folks smarter than you or I.

Another argument from authority. Yet you don’t have a problem with contradicting all statisticians and scientists, despite the probability that some of them are smarter than you or I.

Reply to  Bellman
November 16, 2025 5:36 am

“Obviously you can, or else Taylor and Bevington were idiots.”

You’ve never actually studied either Taylor or Bevington! You just cherry pick stuff you think comports with your misconceptions.

Taylor from the very first pages: “Our carpenter’s experiences illustrate a point generally found to be true, that is, that no physical quantity (a length, time, or temperature, for example) can be measured with complete certainty.”

Taylor from the first page in Chapter 1: “In science, the word error does not carry the usual connotations of the terms mistake or blunder. Error in a scientific measurement means the inevitable uncertainty that attends all measurements.” (bolding mine, tpg)

Taylor from the first page of Chapter 1: “For now, error is used exclusively in the sense of uncertainty, and the two are used interchangeably.”

Taylor uses, from the very start, the same definition the GUM does. Uncertainty, not “true value +/- error” but “estimated value +/- uncertainty”.

You’ve been provided these quotes MULTIPLE TIMES, yet you refuse to internalize the concept. When you say e1 – e2 = 0 *YOU* are using the “true value +/- error” concept, not the concept of uncertainty.

Bevington from the first page of Chapter 1: “Error is defined by Webster as “the difference between an observed or calculated value and the true value”. Usually we do not know the “true” value; otherwise there would be no reason for performing the experiment.”

Bevington: “The problem of reducing random errors is essentially one of improving the experimental method and refining the techniques, as well as simply repeating the experiment.”

Bevington: “The term error suggests a deviation of the result from some “true” value. Usually we cannot know what the true value is, and can only estimate the errors inherent in the experiment.”

Bevington: “Because, in general, we shall not be able to quote the actual error in a result, we must develop a consistent method for determining and quoting the estimated error. A study of the distribution of the results of repeated measurements of the same quantity can lead to an understanding of these errors so that the quoted error is a measure of the spread of the distribution.”

As with Taylor we see Bevington using the concept of “measurement uncertainty” and not “measurement error”.

You’ve been provided Bevington’s quotes multiple times as well. Yet you continue to refuse to internalize the concept.

When you combine distributions into a data set you *ADD* their variances, you don’t subtract them. e1 – e2 simply makes no sense at all in such a situation. And, as Taylor points out in no uncertain terms, when you add measurement uncertainties you must use judgement as to whether you add them directly or add them in quadrature. But you STILL add them – MEASUREMENT UNCERTAINTY GROWS, it doesn’t reduce.
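The two combination rules from Taylor referred to here can be sketched with placeholder values: add the uncertainties directly for a worst-case bound, or in quadrature when the contributions are independent; either way the combined uncertainty grows:

```python
import math

# Placeholder standard uncertainties of two independent inputs
u1, u2 = 0.3, 0.4

u_direct = u1 + u2                 # direct sum: conservative upper bound
u_quad = math.sqrt(u1**2 + u2**2)  # quadrature: independent random effects

# The combined uncertainty never drops below the larger contribution
assert max(u1, u2) <= u_quad <= u_direct
print(f"quadrature: {u_quad:.2f}, direct: {u_direct:.2f}")  # quadrature: 0.50, direct: 0.70
```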

“Two different concepts of uncertainty, not inconsistent with each other.”

They ARE inconsistent! That’s why the GUM says that “error” and “uncertainty” are *NOT* the same. The use of “error” REQUIRES knowing a “true value” – something that *all* the experts say you cannot know! The calculations are *not* the same.

You are, once again, trying to use the argumentative fallacy of Equivocation by changing definitions of what you are discussing in your mind but never actually stating which definition you are using. “Uncertainty” as an interval or “error” as a fixed quantity. If you are using an interval, then e1 – e2 makes no sense. If you are using “true value +/- error” then you are assuming you know the “true value”, something you CAN’T know which is why the GUM moved to using uncertainty as an interval.

“Suffice to say it’s a bit difficult to see the distinction between “true value” of the measurand, and value of the quantity intended to be measured.”

It is not difficult at all to see the distinction. And it is *NOT* a philosophical difference between Realism and Instrumentalism. Neither Realism nor Instrumentalism has anything to do with metrology, the science of measurement. Realists believe measuring the length of a board actually describes physical reality, while Instrumentalists believe measuring the length of a board is merely a subjective description of what we believe is reality. That has nothing to do with actually making the measurement and describing it using “estimated value +/- measurement uncertainty”.

This is just another red herring from you to deflect from your inability to actually support your assertions.

Reply to  Tim Gorman
November 16, 2025 10:07 am

“Error in a scientific measurement means the inevitable uncertainty that attends all measurements.”

Exactly. What else do you think I mean by error?

You keep ranting against phantoms. You have a fit whenever I use the word error regardless of the context, yet think that when Taylor says error he doesn’t mean error.

Reply to  Bellman
November 16, 2025 5:39 am

“Yet you don’t have a problem with contradicting all statisticians and scientists, despite the probability that some of them are smarter than you or I.”

You *still* fail to get it. There is nothing wrong with statistical descriptors. What is wrong is making unstated assumptions that allow their misapplication, both in calculating the statistical descriptors and in understanding what they are actually providing. Memes like “random, Gaussian, and cancels”, or “numbers is just numbers and significant figures and resolution can be ignored”, or “you don’t have to use weighted values based on variance when calculating an average” are typical unstated assumptions used by the statisticians and scientists in climate science. The problem isn’t statistics itself – the problem is the practitioners.

Reply to  Tim Gorman
November 13, 2025 3:40 pm

After all this time, he still doesn’t have a single clue about real uncertainty and how it is handled in metrology!

Reply to  Warren Beeton
November 13, 2025 6:51 am

Read this dude

GUM 4.1.3

The set of input quantities X1, X2, …, XN may be categorized as:

⎯ quantities whose values and uncertainties are directly determined in the current measurement. These values and uncertainties may be obtained from, for example, a single observation, repeated observations, or judgement based on experience, and may involve the determination of corrections to instrument readings and corrections for influence quantities, such as ambient temperature, barometric pressure, and humidity;

⎯ quantities whose values and uncertainties are brought into the measurement from external sources, such as quantities associated with calibrated measurement standards, certified reference materials, and reference data obtained from handbooks.

GUM 6.3 Choosing a coverage factor 

NOTE Occasionally, one may find that a known correction b for a systematic effect has not been applied to the reported result of a measurement, but instead an attempt is made to take the effect into account by enlarging the “uncertainty” assigned to the result. This should be avoided; only in very special circumstances should corrections for known significant systematic effects not be applied to the result of a measurement (see F.2.4.5 for a specific case and how to treat it). Evaluating the uncertainty of a measurement result should not be confused with assigning a safety limit to some quantity. 
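The note from GUM 6.3 can be illustrated with a toy sketch (all numbers invented): apply the known correction to the result and leave the evaluated uncertainty alone, rather than inflating the uncertainty to cover the systematic effect:

```python
# Invented numbers for illustration only
reading = 100.20
b = -0.15   # known correction for a systematic effect
u = 0.05    # evaluated standard uncertainty, unchanged by the correction

corrected = reading + b              # correct the result (GUM 6.3 NOTE)...
print(f"{corrected:.2f} ± {u:.2f}")  # ...instead of enlarging u to cover b
```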

Reply to  Jim Gorman
November 14, 2025 6:19 am

Read this Dude:
https://climate.nasa.gov