Patrick Frank: Nobody understands climate | Tom Nelson Pod #139

Tom Nelson

Patrick Frank is a physical methods experimental chemist. BS, MS, San Francisco State University; PhD, Stanford University; Bergmann Postdoctoral Fellow, The Weizmann Institute of Science, Rehovot, Israel. Now Emeritus scientific staff of the SLAC National Accelerator Laboratory and the Department of Chemistry, Stanford University. He has 67 publications in bioinorganic chemistry covering, among other topics, the unusual metal active site in blue copper electron transport proteins, the first X-ray spectroscopic evidence for through-sigma-bond electron transfer, falsification of rack-induced bonding theory, deriving the asymmetric solvation structure of dissolved cupric ion (which overturned 60 years of accepted wisdom), and resolving the highly unusual and ancient (Cambrian) biological chemistry of vanadium and sulfuric acid in blood cells of the sea squirt Ascidia ceratodes. He also has peer-reviewed publications on the intelligent design myth, the science is philosophy myth, the noble savage myth, the human-caused global warming myth, and the academic STEM culture of sexual harassment myth.

Slides for this podcast: https://tomn.substack.com/p/on-the-re…
Another Patrick Frank pod here: • Climate, Sea Squirts & Science – Dr …

About Tom Nelson:
https://linktr.ee/tomanelson1
YouTube: • Tom Nelson Podcast
Twitter: https://twitter.com/tan123
Substack: https://tomn.substack.com/
About Tom: https://tomn.substack.com/about

August 24, 2023 10:30 pm

Wow! Unexpected bonus! 🙂

Thanks so much, Charles!

Reply to  Pat Frank
August 24, 2023 10:33 pm

By the way, my replies to critical commentary under the video at Tom Nelson’s site have been disappearing. I suspect YT interference. So, I’ll not be replying there anymore.

Tom did a great job. The whole experience there was a lot of fun.

Reply to  Pat Frank
August 25, 2023 12:39 am

I’ve discovered (well, I think I have) that YouTube scrubs comments that have a URL link in them.

Reply to  Steve Case
August 25, 2023 1:02 am

YouTube scrubs all sorts of comments. You can tell because their reply counter does not always reset correctly, and there are other non-scrubbed replies to phantom commenters whose comments no longer appear to exist.

Philip Mulholland
Reply to  Steve Case
August 25, 2023 2:16 am

I’ve discovered (well, I think I have) that YouTube scrubs comments that have a URL link in them.

Steve,
I have had the same experience with posted URLs on YouTube comments.
I am going to try a test with a disabled syntax URL to see how clever the robot actually is.

michael hart
Reply to  Pat Frank
August 25, 2023 5:51 am

Blessed are the experimental chemists, for they learn sooner how often and easily a scientist can be wrong.

I think this is largely because it is often quicker and easier to go back and repeat experiments than to concoct theories about how they were right the first time.

commieBob
Reply to  Pat Frank
August 25, 2023 9:22 am

Pure gold! Thank you.

About errors rounding to zero …

Sometimes errors truly do round to zero (for practical purposes). Your cell phone wouldn’t work otherwise.

On the other hand, white noise is rare in nature. What you get more often is red noise, or something like it. Such errors absolutely do not round to zero.

IMHO, any work that relies on statistics should be checked by a statistician. It’s too easy, and too tempting, to plug a data set into Matlab and keep trying different tools on it until you get a publishable result.

Climate ‘science’ is largely the result of people applying tools they don’t understand. Exhibit ‘A’ might be James Hansen’s mishandling of feedback analysis. link

Gary Pearse
Reply to  Pat Frank
August 25, 2023 12:39 pm

Wow! I’ve read most of your contributions to WUWT and many of your comments on other threads, and I’m impressed how much more I’ve learned today. Thanks! I’ve wanted to ask a chemist about the Le Châtelier Principle (LCP) as an actor in the process of climate change.

I’ve commented over the years that the interactive components (composition, temperature, pressure, volume, physical states, etc.) of the atmosphere, hydrosphere and biosphere are subject to the LCP with respect to ‘forcing’ changes to any one (or more) of the components, say T°C. If atmospheric heating occurs from whatever cause, LCP states that this perturbation will induce changes in all the other components such as to resist the forced change. The final change in T°C ends up much reduced from what was expected.

When models proved to give anomaly predictions 300% too high relative to observations in the first decade of the new millennium, my first thought was that LCP had not been taken into consideration. Since then, I’ve come to believe that consensus climate scientists and a good many other scientists aren’t even aware of LCP.

August 24, 2023 11:23 pm

Fantastic. I will be coming back to this again and again.

August 25, 2023 12:16 am

~24:00 “…when you have a bunch of coupled oscillators…”

_______________________________________________
Reminds me of that Double Pendulum video Kip Hansen
put up a while back. See it HERE on YouTube.

22:22 “…and get their wrong answers faster…”

_________________________________________
Ha ha ha ha ha ha ha ha! first chuckle of my day (-:

The Real Engineer
Reply to  Steve Case
August 25, 2023 9:29 am

That looks like chaotic coupled systems, come to think of it….
Even a mathematical computer model of those oscillators would have trouble, simply because there are so many undefinable things going on. In fact it is probably proof that even a very definable computer model cannot work. Good point! Those devices work only on a simple conservation-of-energy principle with a couple of unknown frictional and air resistances thrown in. Mathematically pretty trivial; modelable, probably not!

Reply to  The Real Engineer
August 26, 2023 4:52 am

To me, it’s like designing a triple conversion receiver using discrete components in the old days. Three oscillators, 3 mixers, umpteen products causing phase noise. You try to filter, shield, etc. but hard to eliminate all the different combinations that can occur.

August 25, 2023 12:26 am

A must watch for those interested in learning or talking about climate model accuracy and device measurement accuracy. There are practical limits in life. Armwaving does not eliminate them.

Chris Hanley
August 25, 2023 12:34 am

It all makes sense; Dr Frank’s concise debunking of tree ring proxies was a bonus (1:06) viz. that the temperature reconstructions are physically meaningless because there is no physical theory converting tree rings to temperature … if the tree ring width trend happens to correlate approximately with the supposed thermometer trend, preposterously, the tree must be a ‘good thermometer’ not only during the relatively short ‘calibration period’ but also during its whole past life.

Reply to  Chris Hanley
August 25, 2023 9:00 am

Just so!

Among potential environmental factors, beyond temperature-stress, that plausibly would affect yearly tree ring growth would be:
— variations in local humidity/rainfall/snowfall over the course of a year
— variations in available nutrients taken up by roots (via variations in soil humidity & windblown dust & local soil erosion)
— variations in local sunlight (via annual variations in local cloud coverage, and possible shading from other nearby trees/vegetation)
— variations in wind loading on the tree, with the natural response being to increase wood growth to support higher trunk stresses from higher yearly-average wind speeds.

Until each of these factors can be shown scientifically to NOT AFFECT annual ring growth of the particular tree species being used as a “temperature proxy”, a claim that tree ring growth variations reflect temperature variations is absurd . . . and that applies to you, Michael Mann and your infamous hockey stick graph!

Reply to  ToldYouSo
August 25, 2023 2:27 pm

Don’t forget insect infestations which can have various multiple effects.

August 25, 2023 1:20 am

and if anyone stands up and asserts that:
“Climate is created under our feet and manifests in the sky”
i.e. all the things we see/measure as ‘causes’ are in fact ‘effects’

as I repeatedly do, what would happen……

How did CO₂ make this? (BBC video short)

What is so interesting/sad/sickening is how the Met Office felt obliged to jump and assert:
“No, this is not a tornado”

IOW: What planet are they on that they think everyone else is sooooo stupid?

Reply to  Peta of Newark
August 25, 2023 1:54 am

I watched this on East Midlands TV News last night; I watch local in preference to national and international news. I thought: “Haynado”? That’s nonsense, that’s a Hay Devil, also known as a Dust Devil.
A farm worker should be aware of these things. Too many townies reading the news, as these things are well known, as is the cause.

cartoss
August 25, 2023 2:13 am

Great to see you linking to Tom Nelson on Youtube. He is a true warrior in the climate con fightback.

ScienceABC123
August 25, 2023 3:14 am

I think everyone understands what climate is, it’s just that no one has been able to model it.

August 25, 2023 3:48 am

There are real gems scattered all throughout this video. Pat knows more about physical science than any of the so-called climate science experts.

I like where he talks about the need to be able to resolve the *details* of everything in order to create a proper physical model. Things like clouds, wave motion in the oceans, heat transport among the various oscillators that exist, etc.

One of Freeman Dyson’s main criticisms of climate science was that it isn’t holistic. It looks at basically one thing, GHGs like CO2, and tries to tie it causally to one thing, atmospheric temperature. Dyson was saying basically the same thing as Pat: if you can’t lay out a complete physical theory then your model is nothing more than a data-mapping exercise. It contains no real physical science.

It’s like mapping the speed of a car going down the interstate. You can measure the velocity at different points in time and create a set of differential equations to map that velocity data. But if you don’t know the *cause* generating that data then you simply can’t predict future velocity. You’ll never be able to tell that the car slows down going uphill and speeds up going downhill, and also speeds up and slows down because of traffic on the roadway, because you simply don’t understand the details of the physical processes involved in determining velocity.

This doesn’t even begin to address the issue of measurement uncertainty associated with the temperature data. How do you even tell that the velocity of the car speeds up and slows down if the measurement uncertainty is more than the velocity change? The words “DO NOT KNOW” simply don’t seem to be in the vocabulary of climate science.

Alastair Brickell
Reply to  Tim Gorman
August 25, 2023 4:27 am

Yes, there are lots of gems. The first hour was a bit difficult for me but some gems are:

1:05 Tree ring measurements are meaningless
1:08 Ice Cores & O18 & temperature
1:11 CO2 movement/diffusion in ice cores, effect of surface UV on CO2 in cores
1:16 Drift in all old thermometers (pre 1885) can show 0.7C increase over 100 years that is not real

It’s well worth the time to watch.

Reply to  Alastair Brickell
August 25, 2023 5:36 am

https://www.hallofmaat.com/lostciv/review-of-voyages-of-the-pyramid-builders/

That interesting book shows that tree rings may be good proxies if you compare them with other proxies like ice cores, flowstone cave data, etc.
He delivers very good research results from different regions of the world correlated with local tree ring data.

AGW is Not Science
Reply to  Krishna Gans
August 25, 2023 6:52 am

Not following the logic. If tree rings are “good proxies” ONLY WHEN they are “compared with” (and presumably AGREE with) “other” proxies, then…

They obviously are NOT “good proxies.”

Reply to  AGW is Not Science
August 25, 2023 7:33 am

Thank you. The same thing hit me. Tree rings have far too many inputs to single out temperature as the thing they indicate about the environment.

Reply to  Tim Gorman
August 25, 2023 11:31 am

Another way of looking at this problem of tree rings is to calibrate the tree rings with known temperatures, or use multivariate analysis to obtain the co-variances with other proxies. One is almost certainly going to obtain very large r^2 values, and large standard deviations. The standard deviations will create an envelope of uncertainty that reveals how unreliable the tree rings are. It may well be that the uncertainty is larger than the change in temperature, showing that it has little utility for reliably estimating past temperatures.

Reply to  Clyde Spencer
August 25, 2023 5:18 pm

Ummm . . . I think you meant to say ” . . . going to obtain very small r^2 values, and large standard deviations.”

Reply to  ToldYouSo
August 26, 2023 6:23 pm

Yes, thank you.

Captain Climate
Reply to  Krishna Gans
August 27, 2023 5:48 am

A agreed with B and C doesn’t mean A agrees with temperature. Each of these proxies needs to be actually anchored to temperature in real ways.
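Anchoring a proxy to temperature, as suggested upthread, means calibrating it against known temperatures and inspecting the r² and the residual spread. A minimal sketch with purely hypothetical numbers (a weak temperature signal buried in other influences; no real tree-ring data is used here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: ring widths respond weakly to temperature
# and strongly to everything else. Illustrative numbers only.
temp = rng.normal(15.0, 1.0, size=50)                   # known temperatures, C
rings = 1.0 + 0.05 * temp + rng.normal(0.0, 0.3, 50)    # ring widths, mm

# Ordinary least-squares calibration and its r^2.
slope, intercept = np.polyfit(temp, rings, 1)
pred = slope * temp + intercept
ss_res = float(np.sum((rings - pred) ** 2))
ss_tot = float(np.sum((rings - rings.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot

# Residual scatter converted to an uncertainty in inferred temperature:
# if it rivals the temperature variation itself, the proxy is uninformative.
s_resid = float(np.sqrt(ss_res / (len(temp) - 2)))
u_temp = s_resid / abs(slope)
print(f"r^2 = {r2:.2f}, inferred-temperature uncertainty ~ {u_temp:.1f} C")
```

With a weak signal, r² comes out small and the inferred-temperature uncertainty dwarfs the actual temperature variation, which is exactly the “envelope of uncertainty” problem described above.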

August 25, 2023 4:00 am

That was a great podcast! Thank you Pat Frank and Tom Nelson.

“Nobody understands climate” – true! The importance of uncertainty in measurement and computation must be respected.

But at least as to the end result of emission of longwave radiation, we can watch from space in the NOAA “CO2 longwave IR” band. The atmosphere is the authentic model of its own performance as an emitter, and the output is plainly not that of a passive “trap” with respect to non-condensing GHGs. What, then? Dynamic self-regulation in response to absorbed energy.

https://youtu.be/Yarzo13_TSE

August 25, 2023 5:06 am

Excellent presentation of the reality of things! Now you need to brutally distill the essential elements of the argument and its solid foundation into a 2-3 minute, concise, hard-hitting piece. For the mass delusion must be countered, and the masses have minuscule attention spans.

AlanJ
August 25, 2023 5:34 am

Pat’s model of uncertainty leaves the uncertainty dependent on when you start calculating it, i.e., he would say I can make my results more certain by not calculating the uncertainty until a later timestep. If that doesn’t tell you this is the goofiest, most unphysical exercise in nonsense in the history of mathematics, I don’t know what to tell you.

The fact that this foolishness has captured the hearts and minds of so many on this website absolutely boggles my mind.

AlanJ
Reply to  AlanJ
August 25, 2023 5:39 am

His uncertainty also depends on the size of the timestep over which it is calculated. So somehow Pat thinks you get more or less information depending on an arbitrary choice you make about timestep. I don’t even know where to begin with it all.

Reply to  AlanJ
August 25, 2023 6:53 am

“His uncertainty also depends on the size of the timestep over which it is calculated.”

Of course it does! Why would you expect anything different? The total uncertainty in the total length of three 2″x4″ boards is greater than the uncertainty in the total length of two 2″x4″ boards. The RELATIVE uncertainty stays the same but the absolute value goes up!

If you have a group of three boards with measurements of 8″ +/- 0.5″ the total length will be 24″ +/- 1.5″. That’s a relative uncertainty of 1.5″/24″ = 6.25%. Take two boards of length 8″ +/- 0.5″. The total length will be 16″ +/- 1″. A relative uncertainty of 1″/16″ = 6.25%. The uncertainty went from 1″ to 1.5″ as you added a board.

Until you can figure that out you don’t have a chance in h*ll of figuring out measurement uncertainty.

The exact same thing happens with iterative processes. The more iterations you have the larger the uncertainty interval becomes since the uncertainty accumulates! The issue with an iterative process where the output of one iteration becomes the input to the next iteration is that the uncertainty *more* than adds, it compounds! It’s like adding a third 2″x4″ whose uncertainty is 1″ instead of 0.5″!
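The board arithmetic above can be sketched in a few lines. This follows the commenter’s direct (worst-case) addition of uncertainties; the root-sum-square convention for independent errors would instead grow as sqrt(n)·u:

```python
# Worst-case (direct) accumulation of measurement uncertainty, as in the
# board example above: each 8" +/- 0.5" board adds its full 0.5" to the total.
def stack(n_boards, length=8.0, u=0.5):
    """Total length, absolute uncertainty, and relative uncertainty."""
    total = n_boards * length
    u_total = n_boards * u              # absolute uncertainty grows with n
    return total, u_total, u_total / total

print(stack(2))  # (16.0, 1.0, 0.0625) -> 16" +/- 1"
print(stack(3))  # (24.0, 1.5, 0.0625) -> 24" +/- 1.5"
```

The relative uncertainty stays fixed at u/length = 0.0625 (about 6%), while the absolute uncertainty grows with every board added.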

If you are an example of the rest of climate science, it’s no wonder it’s screwed up from the word go!

AlanJ
Reply to  Tim Gorman
August 25, 2023 7:14 am

Of course it does! Why would you expect anything different?

The uncertainty needs to be independent of my method of estimating it. Using Pat’s equation, I can get different uncertainties by choosing different intervals over which to calculate it. That is, if you iterated by hours instead of days you would get a completely different uncertainty interval.

Make that make sense.

Reply to  AlanJ
August 25, 2023 7:53 am

Impossible; this is nothing but hand-waved word salad.

Reply to  AlanJ
August 25, 2023 7:58 am

Are you drunk? The uncertainty in hourly measurements is less than the uncertainty in a daily average, since the hourly uncertainties accumulate as you determine the daily average. The uncertainty in a daily average is less than the uncertainty in a monthly average, since the daily uncertainties accumulate. And on and on and on.

So what if you change the interval from hourly to daily, or from daily to monthly? The uncertainty of each is a function of what came before! Of course they will be different values!

Unless, of course, you are a climate scientist and then you assume that all measurement uncertainty is random, Gaussian, and cancels – so you can ignore it!

You didn’t understand my board example at all, did you?

Like so many trying to defend the climate models, it’s not obvious at all that you have ever done any work in the real world where your reputation and financial well-being are put at risk by everything you do concerning measurements. You’ve never built a wooden beam to span a house foundation using 2″x4″ boards. You’ve never overhauled an engine where you have to decide whether to bore out a cylinder. You’ve never built a stud wall in a house. If you had done *any* of these things you would understand the accumulation of measurement error.

AlanJ
Reply to  Tim Gorman
August 25, 2023 9:04 am

Using Pat’s approach, if I compound errors hourly, the uncertainty blows up almost immediately, whereas I can almost eliminate all uncertainty by simply compounding error over 10,000-year intervals instead. It’s quite a novel technique.

Reply to  AlanJ
August 25, 2023 1:23 pm

Showing a complete lack of understanding of what Pat says is not the best way to make your point.

Reply to  Tim Gorman
August 26, 2023 9:10 am

Tim, I have done all three of the things you mention: beams, stud walls, and boring out cylinders. I have never used 8″-long boards put together for either beams or studs. The beams and studs would be weak and useless. As for boring, I always knew going in that I was putting in oversized rings, so boring was a foregone conclusion.

Please pick better examples.

Reply to  mkelly
August 26, 2023 10:01 am

I’m sitting here in my basement looking at one. It’s made of different lengths cut from 2″x4″ boards so that the joints don’t all line up. Lined-up joints are where you get weakness. The issue with that is: how accurate are the cuts on the shorter boards? This is a prime example of measuring different things and trying to use an “average” value to determine overall length without considering uncertainty. If even one of the cut boards is 1/8″ too long or too short it throws off the entire beam. If all of the cut boards are 4′ +/- 1/8″, then what do you get?

It’s the same with vinyl flooring lengths in a room that isn’t an even multiple of the flooring panel length. Mis-cut one too short and you’d better hope the moulding covers up the gap.

The issue with boring is not the actual boring. It’s deciding whether to bore at all. It’s probably more apropos for rod bearings. You’d better hope you measured the journals right, or you wind up with a set of unused bearings sitting on the bench in the corner, with the hope you can use them on something else at a later date. Otherwise you eat the cost.

Reply to  AlanJ
August 25, 2023 8:33 am

I can get different uncertainties by choosing different intervals

Only if you don’t know what you’re doing.

bdgwx
Reply to  Pat Frank
August 25, 2023 10:06 am

Only if you don’t know what you’re doing.

You can clearly see that Lauer & Hamilton 2013 said the multimodel RMSE was 4 W/m2. Not 4 W/m2.year. Not only did you erroneously change the units, but you arbitrarily picked a year when doing so. A typical climate model time step is 1 hour (some lower, some higher). Had you used an hour instead of a year your uncertainty would have blown up rapidly. For example, using F0 = 34 W/m2 and your made up 4 W/m2.year LCF uncertainty we have using equation 5.2 u(T) = 0.42 * 33 * 4 / 34 = 1.6 K. Then using equation 6 and assuming 100 years we have σT = sqrt[ n * u(T)^2 ] = sqrt[ 100 * 1.6^2 ] = 16.3 K. But had you instead used the made up 4 W/m2.hour figure it would be σT = sqrt[ n * u(T)^2 ] = sqrt[ 100 * 8760 * 1.6^2 ] = 1526 K. Does ±1526 K even pass the sniff test? Who is the one who doesn’t know what they are doing here?
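The arithmetic in this comment can be checked in a few lines. A minimal sketch using only the numbers quoted in the thread (equations 5.2 and 6 as the commenters apply them); carrying the annual ±4 W/m² into every hourly step unchanged is this commenter’s premise, and is exactly what the replies below dispute:

```python
import math

# Numbers quoted in the thread: F0 = 34 W/m^2, greenhouse term 33 C,
# and the +/-4 W/m^2 LCF calibration figure.
F0, FC, u_lcf = 34.0, 33.0, 4.0
u_T = 0.42 * FC * u_lcf / F0                 # eqn 5.2: ~1.63 K per step

per_year = math.sqrt(100 * u_T**2)           # 100 annual steps
per_hour = math.sqrt(100 * 8760 * u_T**2)    # same +/-4 applied to hourly steps
print(round(per_year, 1), round(per_hour))   # 16.3 1526
```

Holding the per-step error fixed at ±4 W/m² while shrinking the step is what produces the ±1526 K figure; whether that fixed-error premise is legitimate is the point under contention.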

Reply to  bdgwx
August 25, 2023 10:42 am

H&L (2013), eqn. 1: “… 20-yr means.

Figure 2 shows 20-yr annual means…

The rmse of the multimodel mean for SCF…”

For CMIP5, the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m⁻²) and ranges between 0.70 and 0.92 (rmse = 4–11 W m⁻²) for the individual models.

I don’t think you’re dyslexic, bdgwx. Evidence says you’re careless and unable to understand what is before your eyes.

Those blind to their own ignorance boldly go forth and make foolish arguments. Utterly unaware that modesty in declaration confers grace upon the ignorant.

bdgwx
Reply to  Pat Frank
August 25, 2023 2:35 pm

For CMIP5, the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m⁻²) and ranges between 0.70 and 0.92 (rmse = 4–11 W m⁻²) for the individual models.

I don’t think you’re dyslexic, bdgwx.

I see 4 W/m2 in there. You say it is 4 W/m2.year. Who is dyslexic, blind, ignorant, and/or foolish? Is it me for not seeing the “year” in transparent font? Is it Lauer and Hamilton for writing W/m2 when it should have been W/m2.year?

Reply to  bdgwx
August 25, 2023 9:20 pm

Square roots take the ±, bdgwx. Some people don’t include it, knowing everyone understands it’s there.

I see 4 W/m2 in there

No, you see “rmse = 4Wm⁻²”. You consistently misrepresent their result.

bdgwx
Reply to  Pat Frank
August 26, 2023 2:11 pm

Pat Frank: No, you see “rmse = 4Wm⁻²”. You consistently misrepresent their result.

I type “4 W/m2”. The text literally displays “4Wm⁻²”. How am I misrepresenting their result? How is “W/m2” not the same thing as “Wm⁻²”?

Reply to  bdgwx
August 26, 2023 5:48 pm

What is the significance of rmse, bdgwx.

bdgwx
Reply to  bdgwx
August 25, 2023 12:03 pm

BTW… as I point out here, that ±1526 K value is probably underestimating the result for hourly timesteps, since as you drive toward shorter and shorter averaging periods the RMS of the LCF value increases. A one-hour averaging period for the LCF would likely result in an RMS much higher than 4 W/m2.

Reply to  bdgwx
August 25, 2023 9:21 pm

±1526 K just indicates that your analogy runs to ever greater foolishness.

bdgwx
Reply to  Pat Frank
August 26, 2023 2:03 pm

Pat Frank: ±1526 K just indicates that your analogy runs to ever greater foolishness.

It’s not an analogy. It is the result from equations 5.2 and 6 when you use 4 W m-2 hour-1.

Reply to  bdgwx
August 26, 2023 5:50 pm

use 4 W m-2 hour-1.” which is utter foolishness.

Reply to  AlanJ
August 25, 2023 1:21 pm

“That is, if you iterated by hours instead of days you would get a completely different uncertainty interval.”

The very nature of solving integrals and differentials by computer.

Or are you so ignorant you didn’t know that !!

Yes.. you almost certainly are that ignorant

Reply to  bnice2000
August 25, 2023 3:08 pm

He and bdgwx have never shown that they understand what an integral *is*. Both of them think an integral is an average.

Reply to  Tim Gorman
August 25, 2023 8:31 am

Tim, Alan is supposing that the ‘4’ of Lauer and Hamilton’s ±4 W/m² annual average LWCF error remains constant no matter what time-step interval one uses to calculate the centennial uncertainty.

It’s a really fatuous argument.

I suspect Alan never tested the idea himself, but got it elsewhere. Had he checked rather than applied by rote, he’d have found what is demonstrated in my reply, and elected to not look foolish.

bdgwx
Reply to  Pat Frank
August 25, 2023 10:13 am

It’s a calibration error. It is 4 W/m2; not 4 W/m2.year. Just because Lauer & Hamilton used annual averages as part of their analysis does not mean that it is 4 W/m2 per year. Again, it’s not 4 W/m2.year.

This is not unlike the differences between say UAH and BEST where UAH is anchored to ~263 K while BEST is anchored to ~288 K. The calibration difference between the two is thus 15 K. It is NOT 15 K/year. And it will stay at around 15 K regardless of whether we chose a single year or 30 year as the averaging period for determining the calibration difference.

Reply to  bdgwx
August 25, 2023 10:47 am

20 yr annual mean: 20 years of data/20 years = (mean of data)/yr.

Very inept, bdgwx.

Your difference is 25 K.

bdgwx
Reply to  Pat Frank
August 25, 2023 11:59 am

Yeah… typo… 25 K. I tried editing the post to fix it immediately, but WUWT is not allowing me to edit posts anymore. I’m not sure what that is all about.

I think you misunderstood what L&H 2013 did. They have annual means of the LCF value. 20 annual means for each of 27 models. That is 20 * 27 = 540 values that can be compared. Out of those 540 comparisons the root mean square difference was about 4 W/m2. Some comparisons would obviously be lower and some higher. Assuming the differences were normally distributed (it’s probably close enough), that would mean 68% of the time it was <= 4 W/m2 and only 5% of the time it was >= 8 W/m2. They are not saying the difference is changing with time. They just chose an annual averaging period likely to remove the seasonal cycle. If they had chosen a monthly averaging period then there would have been 12 * 20 * 27 = 6480 values that could be compared. And, obviously, because the monthly variation is higher than the annual variation, we might expect the root mean square difference to be higher than 4 W/m2 if done on a monthly basis.
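The comparison scheme described here can be illustrated with stand-in numbers. A hypothetical sketch (the synthetic “observed” and “model” series below are illustrative only, not the actual CMIP5 or observational data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 20 observed annual means, and 20 annual means for each of
# 27 models whose (simulation - observation) differences scatter ~4 W/m^2.
obs = 27.0 + rng.normal(0.0, 0.5, size=20)           # observed annual means
models = obs + rng.normal(0.0, 4.0, size=(27, 20))   # simulated annual means

# Root-mean-square of the 27 * 20 = 540 (simulation - observation)
# differences, as described in the comment above.
rmse = float(np.sqrt(np.mean((models - obs) ** 2)))
print(round(rmse, 1))  # close to the assumed 4 W/m^2 scatter
```

The sketch only shows the bookkeeping (540 model-minus-observation differences pooled into one RMSE); it takes no side on whether that figure should carry per-year units.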

Reply to  bdgwx
August 25, 2023 1:48 pm

“They just choose an annual averaging period”

But you and AlanJ are trying to claim that it *isn’t* an annual averaging period. You want to have your cake and eat it too.

Reply to  bdgwx
August 25, 2023 9:24 pm

Out of those 540 comparisons the root mean square difference was about 4 W/m2.

Wrong. Look at their eqn. 1. The L&H error metric is simulation minus observed.

Start with a mistake, proceed with nonsensical blah.

Reply to  bdgwx
August 25, 2023 1:46 pm

And what does 4 W/m^2 *mean*? You can’t just ignore it. Calibration error *is* uncertainty, no matter how much you want to assume otherwise.

And a 4 W/m^2 calibration error at time T0 is *NOT* the same as a 4 W/m^2 error in total insolation over a year!

You are trying to say that if at one point in time the calibration error is X, the average over any interval is X also. It’s the same old meme that the average uncertainty is the uncertainty of the average. It simply doesn’t work that way.

The integral of 4 W/m^2 over 31,536,000 seconds is *NOT* 4 W/m^2 annually. 4 W/m^2 at point T0, divided by the total W/m^2 being received at that point in time, is a RELATIVE uncertainty. Multiply that relative uncertainty by the total W/m^2 received over 31,536,000 seconds and you don’t get 4 W/m^2.

The 4 W/m^2 is an ANNUAL mean. You can deny that till you are blue in the face but it won’t change that fact.

Reply to  Pat Frank
August 25, 2023 1:36 pm

I know it. The problem is that he doesn’t know what the uncertainty is for a 10,000-year interval. So he just assumed it’s the same as for a 1-year period. This is climate science at its finest.

Reply to  Tim Gorman
August 25, 2023 9:26 pm

For bdgwx, it’s also the same for 1 hour. There’s no hope.

Reply to  AlanJ
August 25, 2023 8:23 am

Try beginning with a correct appraisal, AlanJ. You’re just recrudescing an argument long ago refuted.

The calibration period was 20 years, and the published annual average uncertainty in LWCF due to cloud fraction error was ±4 W/m². Let F_0 (year 2000) = 34 W/m²

Into eqn. 5.2 it all goes: the uncertainty after 1 century is sqrt[100*(0.42*33*4/34)²] = ±16.3 C.

Now we change the time-step to one month. The monthly uncertainty is sqrt(4²/12) = ±1.155 W/m². Eqn. 5.2 again: the uncertainty after 1 century is sqrt[100*12*(0.42*33*1.155/34)²] = ±16.3 C

Time-step changed to 10 years: uncertainty is sqrt(4²*10) = ±12.65 W/m². Uncertainty after 1 century is sqrt[10*(0.42*33*12.65/34)²] = ±16.3 C.

Oops. They’re all the same.
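The three cases above can be reproduced directly. A sketch of the calculation as laid out in this comment, assuming the per-step uncertainty is the annual ±4 W/m² rescaled by the square root of the number of steps per year:

```python
import math

F0, FC, YEARS = 34.0, 33.0, 100      # W/m^2 forcing, C greenhouse term, span

def centennial_uncertainty(steps_per_year):
    # Per-step LWCF uncertainty: the annual +/-4 W/m^2 rescaled to the step.
    u_step = 4.0 / math.sqrt(steps_per_year)
    u_T = 0.42 * FC * u_step / F0    # eqn 5.2, per step
    n = YEARS * steps_per_year       # number of steps in a century
    return math.sqrt(n * u_T**2)     # eqn 6

for label, spy in [("annual", 1), ("monthly", 12), ("decadal", 0.1)]:
    print(label, round(centennial_uncertainty(spy), 1))  # 16.3 in every case
```

The sqrt(steps_per_year) rescaling and the sqrt(n) propagation cancel exactly, which is why the centennial figure is independent of the chosen time-step.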

You never worked through your criticism, did you Alan. Take someone else’s word for it, did you?

AlanJ
Reply to  Pat Frank
August 25, 2023 8:56 am

You’re just restating your error in different words. You’re in all cases assuming that the error for each year must be 4 W/m^2, but in the paper you link the term has no time component; it is not an error of 4 W/m^2/year, it is a mean error of 4 W/m^2. So the choice of how often you sum this error term is arbitrary: you always do it over a year, but you could also do it over a month, or over ten years, or 100. And your uncertainty would be different every time.

I have worked through my criticism quite well, but I don’t think you have. I’ll let you have more time to digest.

Reply to  AlanJ
August 25, 2023 10:54 am

“assuming that the error for each year must be 4 W/m^2,”

It’s an annual mean, Alan. Look at H&L (2013) eqn. 1. Virtually all the data and all the error are annual means of 20 years of simulation.

(sum of data)/20 years = (mean of data)/year.

Reject that all you like. You’ll always be wrong. But take heart, you’ll always have bdgwx for company.

Reply to  Pat Frank
August 25, 2023 11:08 am

“But take heart, you’ll always have bdgwx for company.”

Among others. MOST others….

https://skepticalscience.com/frank_propagation_uncertainty.html

Reply to  bigoilbob
August 25, 2023 11:49 am

blob tries to rescue the unskilled and unaware…

Reply to  bigoilbob
August 25, 2023 1:01 pm

Quoting SkS is equivalent in knowledge to quoting an amoeba.

They are irrelevant to any rational discussion.

Reply to  bnice2000
August 25, 2023 1:07 pm

“They are irrelevant to any rational discussion.”

Trans:

I can’t refute anything in the linked post, so I’ll just reject it because of its source.

Reply to  bigoilbob
August 25, 2023 1:39 pm

Mr. Loblaw jg’s post is an Ode to Misconception.

bdgwx
Reply to  Pat Frank
August 25, 2023 11:29 am

Yeah. It’s an annual mean. It’s no different than the annual mean of UAH being 263.7 K and BEST being 287.4 K in 1979. That’s a 24 K calibration difference. Again… 24 K; not 24 K/year. Just because I did an annual average doesn’t mean I’m expecting the calibration difference to increase by another 24 K each year. It doesn’t work like that. I’ll get slightly different calibration differences each year, but they will all be roughly around 24 K. It’s the same with the LCF of the CMIP models. The difference is roughly 4 W/m2 regardless of which year it is. In other words, it isn’t increasing by 4 W/m2 each year.

Reply to  bdgwx
August 25, 2023 1:57 pm

It’s not a difference, bdgwx. It’s a rmse.

Honestly, it’s as though you, b.o.b. and AlanJ take refuge in willful ignorance.

Reply to  bdgwx
August 25, 2023 5:08 pm

Your 24 K isn’t a calibration difference. Neither mean is a reference standard. Your entire analogy is a jumbled mess; not even irrelevant.

Reply to  Pat Frank
August 26, 2023 5:23 am

Reference standard? Reference standard? What is that? How do you use it?

/sarc

Reply to  Pat Frank
August 26, 2023 8:48 am

Sir, isn’t all this analysis based on the assumption that we know what the correct answer is? Or the correct starting point?

I ran a manufacturing plant and we made spec products. We knew going in the tolerance for each aspect of the part. We knew what the machines were capable of and all our calipers were calibrated to NIST blocks so we knew when we met customer specs.

We ran SPC. Our analysis was based on knowing the correct answer. We have no idea what the “correct” temperature of earth is.

Reply to  mkelly
August 26, 2023 11:10 am

I don’t understand your question, mkelly. All what analysis?

Reply to  Pat Frank
August 27, 2023 6:17 am

Dimensional or uncertainty analysis.

If what you are assigning a +/- value to is a made-up number, what information have you gained?

It is said the temperature of earth is supposed to be 15 C. It is not known that 15 C is correct. What if it is 18 C and we don’t know that?

I guess I am struck by people arguing with you about the uncertainty around a measurement/dimension that may not be true or real.

Reply to  mkelly
August 27, 2023 7:33 am

I take your point, mkelly.

The debate here is really between people who go with what the data can say versus those who hold on to what they want the data to say.

Seeing it in that light might clarify things.

AlanJ
Reply to  Pat Frank
August 25, 2023 11:32 am

The way you’ve confused yourself is so peculiar and extraordinary that I’m struggling to find the right words to explain it in a way that someone so befuddled as you are could get it clear in their head. It’s an annual mean because it is the mean for the whole year rather than a portion of it. They used 20 years of data to derive the annual mean. That does not create a rate (mean error per year). They had 20 samples with which they determined the annual mean. The units are not Wm^2/year, they are W/m^2.

AlanJ
Reply to  AlanJ
August 25, 2023 11:35 am

The unit “year” doesn’t go into the calculation, you’ve just put it there.

Reply to  AlanJ
August 25, 2023 2:01 pm

The denominator in a mean of 20 years is year.

As in: (sum of data magnitudes for 20 years)/20 years = magnitude/year.

You’re just making it all up, aren’t you? Your confusion. You can’t be that unschooled.

Reply to  Pat Frank
August 25, 2023 3:02 pm

Now you understand why I stopped looking at climate models years ago: there is a cottage industry behind their mess of unverifiable claims. They constantly lie that the models have high predictive value when their own base formula (W/m2), which generates only a tiny warm forcing potential at the 430 ppm level, doesn’t support it at all.

They also ignore that more energy leaves the planet than their CO2 warm forcing can conjure up, which is why a CO2-based warming trend above the 280 ppm level is impossible. Most of the warming is being generated elsewhere, with UHI and poorly sited and maintained surface temperature stations as major factors.

AlanJ
Reply to  Pat Frank
August 25, 2023 3:07 pm

Except you don’t include the years unit in the denominator, it is a time invariant estimate. They are estimating the annual mean uncertainty by using 20 estimates of annual mean uncertainty. That is why the authors don’t list uncertainties in units of W/m^2/year. You’re just tacking the 1/years unit onto the calculation.

Pick any year and the uncertainty for that year is 4W/m^2, because that’s what the estimate of the annual uncertainty is.

Reply to  AlanJ
August 25, 2023 5:03 pm

They are estimating the annual mean uncertainty by using 20 estimates of annual mean uncertainty.

Wrong.

They’re calculating error by differencing 20 years of simulated minus observed cloud fraction. Then calculating annual RMSE uncertainties.

L&H carried out a model calibration.

Reply to  AlanJ
August 25, 2023 2:03 pm

“The unit ‘year’ doesn’t go into the calculation, you’ve just put it there.”

You just said: ” It’s an annual mean because it is the mean for the whole year”

Annual means YEAR. You are lost in your own mind. You can’t even keep your own assertions straight from message to message.

Reply to  AlanJ
August 25, 2023 3:47 pm

“annual mean”

are you really that dense? You said yourself it is an annual mean.

Annual = Year

Reply to  AlanJ
August 25, 2023 1:55 pm

You fail dimensional analysis.

I’ll make it easy for you. You have 100 apples scattered over 10 tables. Compute an average.

AlanJ
Reply to  Pat Frank
August 25, 2023 3:19 pm

Except we are dealing with time. You have one table and you count the apples on it each day, and you want to know what the average number of apples on the table is. You might count the apples for ten days and average the counts. Say the daily average you compute is 100 apples on the table. That doesn’t mean that in 100 days you will have 10000 apples. In 100 days you will wake up and find 100 apples on the table.

Reply to  AlanJ
August 25, 2023 3:52 pm

“you count the apples on it each day”

It is simply unbelievable how dense you can be. Apples/day is apples per day!

As Pat said, you fail dimensional analysis!

How do you know whether you will have 10000 apples or not? As with climate science you fail to describe a falsifiable test. What if you get different apples on the table each day? The apples from the previous day get eaten at lunch or supper! Just like the temperature changes from day to day, month to month, and year to year. Last year’s temps are not this year’s temps.

Reply to  AlanJ
August 25, 2023 4:58 pm

We’re dealing with dimensional units and denominators.

100 apples scattered over 10 tables. Compute the average, Alan.
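For what it’s worth, the example’s arithmetic is one line; the entire dispute is over whether the divisor’s unit (tables here, years in L&H) survives in the result:

```python
apples = 100   # total apples
tables = 10    # number of tables
average = apples / tables   # 10.0; dimensionally, apples per table
```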

Reply to  AlanJ
August 25, 2023 2:01 pm

“It’s an annual mean because it is the mean for the whole year rather than a portion of it.”

An annual mean is not a daily mean. It is not a monthly mean. It is not a 10000 year mean.

Now you are trying to backtrack on your original assertion. It was just a matter of time.

Reply to  AlanJ
August 26, 2023 5:27 am

Read your last sentences.

 They had 20 samples with which they determined the annual mean.

The units are not Wm^2/year, they are W/m^2.

They determined the annual mean, but that isn’t a value/year? You do realize that year is a synonym for annual, right?

AlanJ
Reply to  Jim Gorman
August 26, 2023 7:36 am

“Annual mean” just means they aren’t providing seasonal estimates, it’s the uncertainty across the whole year (cloud forcing error might be higher or lower if you’re looking at spring vs winter, for instance). Pat is adding this “per year” unit onto the end of the uncertainty, but it is time invariant. The authors do not cite the uncertainty in units of Wm^2/year, Pat is the one doing that, and he thinks the uncertainty is a response error that should be compounded annually. It’s not, it’s a base-state error. His convention of annually compounding the error is completely unphysical and arbitrary – he could have tacked on a “per month” instead and compounded the uncertainty monthly, causing it to blow up even faster, or compounded it tri-centennially, causing it to wither down to nothingness.
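Treating the ±4 W/m² figure as a per-step uncertainty summed in quadrature (which is itself the assumption under dispute in this thread), the objection can be sketched as:

```python
import math

# If a fixed figure is treated as a per-step uncertainty and summed in
# quadrature, the total depends entirely on how many steps the same
# interval is cut into.
def compounded(u_per_step, n_steps):
    return u_per_step * math.sqrt(n_steps)

century_annual  = compounded(4.0, 100)        # 100 yearly steps
century_monthly = compounded(4.0, 100 * 12)   # same century in monthly steps
```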

Reply to  AlanJ
August 26, 2023 11:24 am

““Annual mean” just means they aren’t providing seasonal estimates, it’s the uncertainty across the whole year…”

L&H (2013) “A measure of the performance of the CMIP model ensemble in reproducing observed mean cloud properties is obtained by calculating the differences in modeled (x_mod) and observed (x_obs) 20-yr means.

The overall comparisons of the annual mean cloud properties with observations are summarized for individual models and for the ensemble means by the Taylor diagrams for CA, LWP, SCF, and LCF shown in Fig. 3. These give the standard deviation and linear correlation with satellite observations of the total spatial variability calculated from 20-yr annual means.

The rmse of the simulated 20-yr-mean CA ranges from 11% to 20% among the CMIP3 models (multimodel mean = 11%) and from 10% to 23% (multimodel mean = 12%) in CMIP5.

We compared the performance of these five models in reproducing the 20-yr-mean ISCCP observed total cloud amount with and without application of COSP.

Just as for CA, the performance in reproducing the observed multiyear annual mean LWP did not improve considerably in CMIP5 compared with CMIP3.

AlanJ, you either have not read the paper before commenting on it, or you read the paper and didn’t understand it, or you read the paper, understood it, and are misrepresenting it.

AlanJ
Reply to  Pat Frank
August 26, 2023 7:32 pm

you read the paper, understood it

Yes.

and are misrepresenting it.

No.

You’re quoting these sections of the paper as though you’re proving something. But you are not, those quotations are exactly consistent with what I’ve said. At this point I’m not sure who it is that you’re trying to convince. Try writing in your own words exactly what picture you think that selection of quotations is painting.

Reply to  AlanJ
August 25, 2023 1:58 pm

“it is not an error of 4 W/m^2/year, it is a mean error of 4 W/m^2.”

It is an ANNUAL error of 4 W/m^2; it is an ANNUAL mean error of 4 W/m^2.

There. Fixed it for you.

Reply to  AlanJ
August 25, 2023 1:17 pm

Never solved a differential equation by computer, have you, AlanJ?

Your ignorance and lack of understanding are showing, as always.

Reply to  bnice2000
August 25, 2023 3:35 pm

The only reply they can muster is to blindly push the ‘minus’ button.

AlanJ
Reply to  AlanJ
August 25, 2023 5:43 am

Pat’s claim that climate models are nothing more than linear extrapolations of forcing is also utter nonsense. What about all of the other things apart from surface temperature trends that climate models model? The whole basis of his argument is ridiculous – you shouldn’t be using his simple linear equation as the basis of climate model uncertainty assessment to begin with because that equation is not the basis of climate models.

The more I think about it the crazier it all becomes. What an interview.

Reply to  AlanJ
August 25, 2023 7:21 am

Now you are just whining. If the output of the models can be given by the use of a simple linear equation then it is the model that is nonsense.

What other things are you talking about? Polar bear extinction? Miami being underwater? Massive crop failures and global starvation?

The models have missed things like global greening, continued record grain harvests, fewer weather related deaths.

IT DOES NOT MATTER WHAT THE BASIS OF THE MODELS IS! Complexity is *NOT* an indicator of accuracy or completeness. If the horsepower and torque curve of an engine can be modeled with a 2nd order equation then why disparage it? A complicated model based on intake/exhaust valve turbulent flow, cylinder turbulent flow, piston compression, spark plug efficiency at various pressures, carburetor turbulent flow and mixing efficiency, etc, etc, etc is only useful if I know *all* the components that go into the model ACCURATELY. If I have to guess at some of them or parameterize them then I am only doing a data matching exercise which the simple linear equation would do just as well.

That’s where the climate models are today. All of the components for the physical world we know as earth are *NOT* known completely or accurately. The models are tuned to output what climate scientists *think* will happen, not what *will* happen. They are nothing more than mathematical models of guesses. And so far, those guesses have turned out to be more wrong than right. If the guesses were correct we would see the outputs of the models coalescing, instead the variance of the ensemble seems to be increasing, not decreasing.

paul courtney
Reply to  Tim Gorman
August 25, 2023 7:53 am

Mr. Gorman: To be precise, what Mr. J is doing is racehorsing. He makes one claim (uncertainty is “bizzaro”), and before any substance is provided, he’s onto something else that he claims is utter nonsense. Hoping we won’t slay the first point, instead we try to keep up. A well known tactic here (h/t Mr. Stokes).

Reply to  paul courtney
August 25, 2023 8:14 am

Stokes has no clues about real uncertainty either.

Reply to  paul courtney
August 25, 2023 9:49 am

Exactly right, Paul.

Plus the studied use of dismissive and abusive language to engage emotional responses and short-circuit rational thinking.

Reply to  Pat Frank
August 25, 2023 10:29 am

An example of Stokes “abusive language” and/or “emotional responses “?

Reply to  bigoilbob
August 25, 2023 10:49 am

how about just one: “The whole basis of his argument is ridiculous”

Nothing to show why it is ridiculous so this is nothing more than abusive language. It’s an argumentative fallacy known as Argument by Dismissal.

Reply to  Tim Gorman
August 25, 2023 11:02 am

How can factual statements be “abusive”? You seem to be keeping with the pre-1800 US definition of libel. I.e., if your tender tissues got bruised by what some mean person said, they go to jail. In modern times, if it’s true, it’s fit to print.

BTW, Mr. Stokes has done a whole lot more to school Dr. Frank than to summarily “dismiss” him…

Reply to  bigoilbob
August 25, 2023 3:06 pm

Bwahahahahahahahahahaha!!!

Reply to  Sunsettommy
August 25, 2023 3:38 pm

He’s worshiping the feet of Nitpick Nick Stokes…

paul courtney
Reply to  bigoilbob
August 25, 2023 11:03 am

Mr. bob: After you give us an example of his racehorsing.

Reply to  AlanJ
August 25, 2023 8:39 am

Pat’s claim that climate models are nothing more than linear extrapolations of forcing is also utter nonsense.

Wrong again, Alan.

My claim — demonstrated dozens of times — is that climate model output is a linear extrapolation of the fractional change in GHG forcing.

The emulator — hardly more complex than y = mx+b — reproduces the air temperature projections of climate models right up through the CMIP6 generation. Their output is utterly linear. Once one has that, linear propagation of error follows directly.

The more you reveal about your lack of thought, the more one knows you’re not to be taken seriously.

AlanJ
Reply to  Pat Frank
August 25, 2023 9:01 am

Climate model output for a single variable (global mean surface temperature change) is similar to the linear extrapolation you have derived, you mean. The fact that you drew a curve that kinda looks like climate model output doesn’t mean you can use the curve to evaluate uncertainty in climate model output. You don’t have climate model output, you have a similar looking curve.

Reply to  AlanJ
August 25, 2023 11:08 am

Your first argument was wrong and so here you are, shifting your ground.

The emulation equation reproduces climate model air temperature projections using the identical forcings.

The original equation has zero degrees of freedom and goes right through the middle of an ensemble of projections.

To emulate a specific model, the equation needs only one degree of freedom (f𝚌𝑜₂) — two if one wants to emulate an air temperature (not anomaly) projection.

Climate model air temperature projections are demonstrated to invariably be linear extrapolations of GHG forcing.

Use of the emulation equation to propagate uncertainty is therefore perfectly appropriate. The uncertainty, after all, is merely the upper and lower bound of projected air temperature, at each step.

And when that bound goes stratospheric, the projection becomes physically meaningless.

Pass it off all you like. The demonstration remains and remains correct.
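The emulator structure being described, an anomaly that scales linearly with the cumulative fractional change in forcing, can be sketched as below; the coefficient values here are placeholders, not the published fits:

```python
# Sketch of a one-parameter linear emulator: anomaly proportional to the
# cumulative fractional change in GHG forcing. f_co2, F0, and greenhouse_K
# are placeholder values, not fitted coefficients from any paper.
def emulate_anomaly(step_forcings, f_co2=0.42, F0=33.3, greenhouse_K=33.0):
    total, out = 0.0, []
    for dF in step_forcings:          # forcing change per step, W/m^2
        total += dF
        out.append(f_co2 * greenhouse_K * total / F0)
    return out
```

Linearity in cumulative forcing is the whole structure: doubling the accumulated forcing doubles the emulated anomaly.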

AlanJ
Reply to  Pat Frank
August 25, 2023 11:41 am

Climate model air temperature projections are demonstrated to invariably be linear extrapolations of GHG forcing.

Again, it’s this sentence that I’m saying is incorrect. The projections might look like linear extrapolations of GHG forcing, but that is not what they are. They only look like linear extrapolations of GHG because GHG forcing is the dominant driver of the observed trends. You don’t get to say, “climate model projections of surface temperature trends kind of look like linear extrapolations of GHG forcing therefore that is all they are therefore we can use a linear extrapolation of GHG forcing to evaluate the uncertainty in climate model projections.”

The uncertainty, after all, is merely the upper and lower bound of projected air temperature, at each step.

Which you can only evaluate by considering all of the variables in the model.

Reply to  AlanJ
August 25, 2023 1:54 pm

“The projections might look like linear extrapolations of GHG forcing, but that is not what they are.”

If it walks like a duck and quacks like a duck then it is very likely to be a duck!

And the climate models sure *LOOK* and *QUACK* like linear extrapolations of GHG forcing!



AlanJ
Reply to  Tim Gorman
August 25, 2023 5:10 pm

It might look and quack that way to someone utterly ignorant of how climate models are constructed, but I’m not sure that’s the boat you are wanting to put yourself into, Tim.

Reply to  AlanJ
August 25, 2023 6:22 pm

I *know* how iterative models work. I was in long range planning for a major telephone company for several years. Iterative models were what we used to evaluate capital investment projects for viability. We would iterate over the depreciation life of the assets being studied to see what the overall rate of return would be. And then we would do Monte Carlo runs to evaluate the impact different initial conditions (as well as possible future effects such as tax changes) would have in order to get some kind of a measure of the uncertainty associated with the project. When you don’t have an unlimited capital budget you *have* to do such things.

We didn’t evaluate each year in the study separately without carrying forward expenses, taxes, etc. And they don’t do it in the climate models either.

Who in Pete’s name do you think you are fooling with this garbage?

Reply to  AlanJ
August 25, 2023 2:04 pm

You don’t get to say, …

Demonstrated.

Reply to  AlanJ
August 25, 2023 3:22 pm

Which you can only evaluate by considering all of the variables in the model.

GHG forcing is the only variable needed to fully explain the global air temperature projection observable.

They don’t “kind of look like.” They behave exactly as.

Reply to  AlanJ
August 25, 2023 1:52 pm

And what is the climate model? It’s a poor curve matching exercise to the actual temperature.

Reply to  Pat Frank
August 25, 2023 1:50 pm

Part of the problem with climate science seems to be the inability to read!

Reply to  Pat Frank
August 28, 2023 2:04 am

Hi Pat,
Great interview, thank you.
Please let me argue in AlanJ style.
Pat, your use of y=mx+b is hilariously wrong.
Everyone knows that sometimes the right equation is y=mx-b.
You have to state if your case is general or assumed.
So you should write y=mx+/-b.

Never mind that the universal assumption is that b can be positive or negative, not needing to be said.
Similar to the AlanJ argument about Wm-2 calculated over a year versus Wm-2/year.
Geoff S

bdgwx
Reply to  Geoff Sherrington
August 28, 2023 5:55 am

Similar to the AlanJ argument about Wm-2 calculated over a year versus Wm-2/year.

Lauer & Hamilton 2013 make it clear as well. The units are W m-2; not W m-2 year-1.

It’s not unlike how Dr. Spencer calculates an average temperature over a year. The units are still C; not C/year.

BTW…trivial unit analysis confirms the units don’t change. (W/m2 + W/m2 + … + W/m2) / 12 is still W/m2. It’s the same with UAH temperatures. (C + C + … + C) / 12 is still C.

Reply to  bdgwx
August 28, 2023 6:10 am

You have 100 apples scattered across 10 tables, bdgwx.

Calculate the average.

AlanJ couldn’t do it.

Reply to  Geoff Sherrington
August 28, 2023 6:06 am

Great exposition on the current mania, Geoff. 🙂

Reply to  AlanJ
August 25, 2023 9:09 am

So, AlanJ, I take it that you are asserting that the fact that Pat’s linear equation to calculate temperature change so accurately fits the data points as noted for the particular models and reference scenarios cited is nothing more than luck or coincidence?

Now, you were saying something about utter nonsense and ridiculous . . . time to get out your mirror.

AlanJ
Reply to  ToldYouSo
August 25, 2023 9:16 am

Not at all, the CO2 forcing has been the primary driver of the observed 20th century temperature trend, so naturally you will get something pretty close to climate model output for the long term trend using just CO2 forcing. The problem is that Pat then says, “this is all climate models really are so this is all we need to use to evaluate model uncertainty” which is an absolutely absurd leap of logic.

Reply to  AlanJ
August 25, 2023 10:05 am

Not at all, the CO2 forcing has been the primary driver of the observed 20th century temperature trend . . .

I really don’t think that statement is supported by science-backed data, but you are free to post your opinion.

After all, history records:
— a 1880-1913 cooling interval, despite increase in cumulative total of human-originated CO2
— a 1946-1976 cooling interval, despite increase in cumulative total of human-originated CO2
—for about the last 9 years, a pause in global warming (see https://wattsupwiththat.com/2023/07/05/the-new-pause-remains-at-8-years-10-months/ ), despite a massive (about 0.3 trillion tonnes, or about a 16%) increase in cumulative total of human-originated CO2 during that same interval.

You see, facts matter.

AlanJ
Reply to  ToldYouSo
August 25, 2023 10:31 am

It’s pretty easy to check. I downloaded forcings from Miller et al., 2014 and plotted them against surface temperature:

[embedded chart: Miller et al., 2014 forcings plotted against surface temperature]

I’d say there is no question whatsoever that CO2 forcing has been the primary driver of the observed warming trend, what do you think?

Reply to  AlanJ
August 25, 2023 11:25 am

“It’s pretty easy to check. I downloaded forcings from Miller et al., 2014 and plotted them against surface temperature:”

I guess you’ve TOTALLY missed the point about how ridiculous it is to state/show temperature anomalies to a mathematical precision (let alone claimed accuracy) of 0.02 C or better, as is indicated by the variations in the various curves plotted on the graph that you posted. As Pat Frank demonstrated, it’s not credible to assert that a total swing of even a full 1 C is meaningful given the instrumental and field use uncertainties of what is used to measure climate temperatures.

However, I fully recognize that people who believe in climate models (those incorporating various “forcings”) have no problem whatsoever asserting precision/accuracy of model outputs to however many decimal places they wish.

Finally, it is generally recognized that the approximate 11-year solar sunspot cycle creates about a 0.1% variation in TSI (https://www.nasa.gov/mission_pages/Glory/solar_irradiance/total_solar_irradiance.html ). Given an average TSI (aka “solar constant”) of 1361 W/m^2, that variation would be a climate “forcing” of about 1.36 W/m^2. If you were paying attention you would see that that amplitude of a somewhat periodic natural “forcing” oscillation is not to be found in the graph you presented, especially in the interval from 1900–1960 and post-1992.

So much for your assertion of “pretty easy to check” . . . again, garbage in, garbage out . . . and failure to recognize either.

Reply to  AlanJ
August 25, 2023 11:50 am

Once again, you are confusing correlation with proof of cause and effect.

Reply to  AlanJ
August 25, 2023 11:53 am

He digs deep, and pulls out … a hockey stick!

Reply to  AlanJ
August 25, 2023 2:30 pm

You are making the basic mistake of equating correlation and causation. You have to *prove* causation. Graph postal rates against temp and you’ll likely find a similar correlation. Do postal rates cause temps to go up?

Reply to  AlanJ
August 25, 2023 3:09 pm

The forcing is too small to be the driver. This reality was discovered years ago; when are you going to catch up?

Reply to  AlanJ
August 25, 2023 3:27 pm

no question whatsoever

Where’s your physical theory of the climate, able to resolve a 0.035 W/m² annual average perturbation?

What is the criterion of causal explanation in science, Alan?

No physical uncertainty bounds on the temperature graph, by the way. You’ve no idea the correlation is real.

Reply to  AlanJ
August 26, 2023 5:55 am

There you go. Correlation IS, without any doubt, proof of causation!

Look at your x-axis: it is time. Unless you have a formula that accurately relates time directly to forcing values, you are not doing science. You may use a correlation to postulate an hypothesis, but you need an actual mathematical formula to show causation. If you examine your graph closely, you will find that a formula accurately predicting forcing based solely on an increasing value of time is not possible.

Reply to  AlanJ
August 25, 2023 10:06 am

AlanJ, you are entirely wrong about the 20th-century temperature trend being driven by CO2. This is why they “adjusted” data with adjustment factors with an R2 to CO2 of 0.95 or so. This is well known to anyone who has looked at the data in detail.

Reply to  AlanJ
August 25, 2023 11:11 am

“so this is all we need to use to evaluate model uncertainty”

No. So this is all we need to evaluate the uncertainty of air temperature projections.

Argument by misdirection, Alan.

If you want to know about model uncertainty, you can study Soon, et al., (2001).

Reply to  AlanJ
August 25, 2023 11:48 am

Not at all, the CO2 forcing has been the primary driver of the observed 20th century temperature trend,

It is asserted, but not proven. Correlation is not proof.

Reply to  Clyde Spencer
August 25, 2023 9:36 pm

No need to demand proof – a modicum of evidence would be a good start. But so far there has been absolutely no evidence that human CO2 emissions have driven temperatures beyond the range of natural variability.

Reply to  AlanJ
August 28, 2023 2:09 am

AlanJ,
You wrote “CO2 forcing has been the primary driver of the observed 20th century temperature trend”.
Where is the physical equation evidence for that assertion?
Where is the evidence showing that natural physical events are not to be included?
Geoff S

Reply to  AlanJ
August 25, 2023 6:38 am

You *really* don’t understand measurement uncertainty at all, do you? Uncertainty exists at the start of recording measurement data, not at the end of it! Measurement uncertainty is additive. Each data point is “stated value +/- measurement uncertainty”. E.g. 5 +/- 0.5. You accumulate measurement uncertainty as you go along! For the second measurement the uncertainty becomes 0.5 + 0.5 if you do direct addition of the uncertainty (i.e. no random cancellation). Or sqrt(.5^2 + .5^2) if you think there is partial cancellation. The uncertainty grows in both cases.

That’s why Pat calculated the total uncertainty beginning with the uncertainty in the initial conditions. You can’t wait till a later iteration to “start calculating” it!

Pat would *NOT* say “make my results more certain by not calculating the uncertainty until a further timestep.” How you get that from what he has done boggles *my* mind!

Climate science memes:

  1. All measurement error is random, Gaussian, and cancels
  2. The average uncertainty is the uncertainty of the average.
  3. The SEM is the uncertainty of the average.
  4. Averaging multiple single measurements of different things is the same as averaging multiple measurements of the same thing.
  5. Variances of random variables do not add when adding random variables.
  6. Variance is *not* a measure of the uncertainty of the average and can be ignored because it isn’t important.

And you want to talk about “boggling” the mind?
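The two accumulation rules described above, sketched with the hypothetical ±0.5 per-measurement uncertainty from the example:

```python
import math

# Direct (no cancellation) vs quadrature (partial cancellation) accumulation
# of a per-measurement uncertainty u over n measurements. Both grow with n.
def direct_sum(u, n):
    return u * n

def quadrature_sum(u, n):
    return math.sqrt(n * u * u)
```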

AlanJ
Reply to  Tim Gorman
August 25, 2023 7:09 am

Pat would *NOT* say “make my results more certain by not calculating the uncertainty until a further timestep.” How you get that from what he has done boggles *my* mind!

Pat would certainly not say that, I am sure, because he doesn’t seem to understand that it is a consequence of his bizzaro uncertainty calculation. Timestep 0 has the lowest uncertainty value, and each subsequent step has that uncertainty + additional uncertainty. So to reduce the uncertainty at any timestep, just make sure you started calculating the uncertainty closer to that timestep! Easy.

paul courtney
Reply to  AlanJ
August 25, 2023 7:15 am

Mr. J: You can say words like “bizzaro” and tell us your mind is boggled ’til the climate changes, all you do is cast aspersions without substance. When someone says, “where to begin”, and then never begins, it’s a good sign he is out of his league. Your comments here are my evidence, you can’t tie Pat Frank’s loafers. Please feel free to submit more evidence.

bdgwx
Reply to  paul courtney
August 25, 2023 10:21 am

AlanJ’s point is clear. Read Lauer & Hamilton 2013 and notice the 4 W/m2 calibration error for LCF. Then read Frank 2019 and notice how it gets changed to 4 W/m2.year. It is so subtle that most laypeople would never notice. Now imagine if that 4 W/m2.year were actually 4 W/m2.hour to match the typical timestep of a climate model. What happens when you plug that into equations 5.2 and 6? What do you get?

paul courtney
Reply to  bdgwx
August 25, 2023 10:51 am

What do I get? First, let me ask, is your clock doing an iterative calculation? Or mechanically telling time? Is there a difference? I don’t actually know, but I can read, and I can tell that you and Mr. J are comparing apples with tree rings. I don’t really care, but Pat Frank and others would like to know if you can define “iterative”?

bdgwx
Reply to  paul courtney
August 25, 2023 11:13 am

I don’t see that you have responded to anything I’ve said. I’m talking about the Lauer & Hamilton 2013 and Frank 2019 publications. And I’ll ask again…What happens when you plug 4 W/m2.hour into equations 5.2 and 6?

paul courtney
Reply to  bdgwx
August 25, 2023 12:57 pm

Mr. x: And I’ll ask again, define iterative and tell us how it applies to a clock? I don’t need to read your paper to conclude that if the math goes off the board, the problem may be your method?

bdgwx
Reply to  paul courtney
August 25, 2023 1:37 pm

Again…your response has nothing to do with my post. Read my post carefully. Nowhere in it did I say anything about a clock. If you want to challenge something I said in this post then at the very least make sure I discussed it. Otherwise I have no choice but to accept that you are not challenging anything I’ve said here, but instead deflecting and diverting.

paul courtney
Reply to  bdgwx
August 25, 2023 2:30 pm

Mr. x: You began the exchange with “AlanJ’s point is clear.” He’s the source of the clock, maybe you should associate more carefully.
I don’t need to read this paper or do your math to see your mistake, richly demonstrated by other comments. You accuse Frank of injecting “year” into what is described as annual. You keep making this error long after it was explained to you above. By Frank. After that, I’m not “deflecting”, I am mocking. You. If you only knew.

Reply to  bdgwx
August 25, 2023 4:48 pm

Why would anyone suppose the RMSE uncertainty per hour is ±4 W/m²?

It’s a 20-year annual mean. Not an hourly mean.

I’m still having trouble deciding whether your confusion is due to dyslexia or basic inability to understand. And those are the charitable explanations for your refractory ignorance.

bdgwx
Reply to  Pat Frank
August 25, 2023 8:08 pm

Pat Frank: Why would anyone suppose the RMSE uncertainty per hour is ±4 W/m²?

I have no idea. Why would anyone suppose the RMSE given as 4 W/m2 is equivalent to 4 W/m2.year? The year-1 is just as arbitrary as hour-1. I mean if you’re going to make units up you should pick the smallest unit of time possible, because that will cause your final uncertainty to spiral out of control sooner.

It’s a 20-year annual mean. Not an hourly mean.

Completely irrelevant. It makes no difference whether it was an annual mean, decadal mean, or 20-year mean. The units are still W m-2.

Reply to  bdgwx
August 25, 2023 9:40 pm

The year-1 is just as arbitrary as hour-1.

L&H calculated annual means. An annual mean has year as a denominator.

Over and over, the same mistake.

The units are still ±Wm⁻²/year.

bdgwx
Reply to  Pat Frank
August 26, 2023 1:25 pm

Pat Frank: L&H calculated annual means. An annual mean has year as a denominator.

No it does not. And that is absurd.

Do you think annual mean from UAH in 2022 is 0.18 C or 0.18 C/year?

Do you think the monthly from UAH in July 2023 is 0.64 C or 0.64 C/month?

Reply to  bdgwx
August 26, 2023 5:56 pm

“annual mean from UAH in 2022 is 0.18 C”

Where is the year divisor, bdgwx?

Reply to  paul courtney
August 26, 2023 8:44 am

Your “mechanically telling time” is an apt description. A clock gains/loses a second a day. What is the cumulative error of that clock? To know the actual error, you need a reference for comparison. Without the reference you need to assume a year’s time could be all gain or all loss. What is the cumulative uncertainty range? ±365 seconds. It is a matter of knowing or not knowing. That is uncertainty. The errors could be 50/50, so the clock is totally accurate. Or they could be 1/99. The point is you don’t know and can never know without calibration to a reference. IT IS UNCERTAIN.
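
The clock arithmetic above can be sketched in a few lines (a hypothetical illustration, assuming as the comment does a drift bounded by ±1 second per day and no reference clock):

```python
# Hypothetical illustration: a clock whose unknown daily drift is bounded
# by +/-1 second.  Without a reference, each day's drift could be all gain
# or all loss, so the worst-case bounds add linearly.

def cumulative_uncertainty_s(days, daily_bound_s=1.0):
    """Worst-case accumulated time uncertainty (+/-) after `days` days."""
    return days * daily_bound_s

# After one year the reading is uncertain by +/-365 seconds.
assert cumulative_uncertainty_s(365) == 365.0
```

The point of the sketch is only that the interval of not-knowing widens with every uncalibrated day; the actual error could be anywhere inside it, including zero.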

Reply to  bdgwx
August 25, 2023 4:40 pm

Now imagine…

You do far too much of that, bdgwx. Such as imagining you know what you’re talking about.

Tell you what: do the experiment. Measure the cloud error per time step. Now imagine you understand the scientific method.

Reply to  AlanJ
August 25, 2023 7:41 am

“Pat would certainly not say that”

Then why did you say he would?

“bizzaro uncertainty calculation”

What you term “bizzaro” is nothing more than standard metrology in the real world. All you have to offer is the argumentative fallacy of Argument by Dismissal. *Exactly* what is bizzaro about adding uncertainties? Carpenters, mechanics, and engineers of all persuasions do it every day.

” So to reduce the uncertainty at any timestep, just make sure you started calculating the uncertainty closer to that timestep! Easy.”

In other words, JUST IGNORE THE UNCERTAINTY INTRODUCED IN EARLIER TIMESTEPS. Only a climate scientist would see that in what Pat did!

It doesn’t matter *when* you start the calculation, the uncertainty will GROW from that point. It’s inevitable. It is standard metrology. Once that uncertainty becomes larger than the differences you are trying to identify you have lost – what you are looking for becomes an UNKNOWN. As I said before, climate scientists don’t seem to understand what the words “Do Not Know” mean. You simply can’t know any differences in the hundredths digit when your uncertainty is in the tenths or units digit.

You can’t “average” away uncertainty no matter how much you wish you could.

AlanJ
Reply to  Tim Gorman
August 25, 2023 8:58 am

Then why did you say he would?

Because Pat doesn’t know what he’s talking about, and his approach is generating egregious absurdities that he does not seem to recognize, so of course he isn’t voicing their existence.

It doesn’t matter *when* you start the calculation

Well it shouldn’t matter, but it does for Pat’s calculation.

Reply to  AlanJ
August 25, 2023 11:14 am

Because Pat doesn’t know what he’s talking about, …

Not demonstrated by any of your comments here, Alan. Nor by anyone else.

paul courtney
Reply to  AlanJ
August 25, 2023 12:59 pm

Mr. J: And still you fail to consider that the “egregious absurdities” are from your mistake, not Frank’s.

Reply to  AlanJ
August 25, 2023 2:05 pm

This isn’t about Pat. It’s about YOU! First you make a statement and then a few messages later deny you said it!

You can’t keep your own assertions straight, how are you going to keep someone else’s straight?

Reply to  Tim Gorman
August 26, 2023 5:53 pm

If this thread were in a forum setup, I would have posted the many deviations he has made in the thread to utterly destroy his credibility.

He is here to sow confusion deliberately nothing more.

Eventually you and others will have to stop feeding this error machine and go on.

Reply to  Sunsettommy
August 26, 2023 6:41 pm

Sage advice, I don’t need to waste more time with scrambled eggs.

Reply to  AlanJ
August 25, 2023 9:55 am

Propagation of uncertainty through an iterative calculation is standard.

When the uncertainty repeats with each iterative step — as it does in a climate simulation — the uncertainty necessarily grows with each step.

Gratuitous use of “bizzaro” does not carry your point.

AlanJ
Reply to  Pat Frank
August 25, 2023 10:05 am

My clock is off by one minute. I can’t know if it is currently 5:04 GMT, 5:03 GMT, or 5:05 GMT. In 24 hours, by your reasoning, I will not know if it is 5:04 GMT, 5:02 GMT, or 5:06 GMT. Imagine how unsure I’ll be in a month, all because it is off by one minute.

Reply to  AlanJ
August 25, 2023 10:16 am

You fail to recognize that what might be a fixed error is different than what might be a rate of error buildup (aka a propagation of errors). Not knowing which is which today makes you totally unable to make future predictions.

You really do need to read up on the various types of errors and how they are treated mathematically.

Reply to  ToldYouSo
August 25, 2023 11:29 am

Exactly right, TYS.

An understanding of the critical difference between error and uncertainty is very scarce in CO₂ climatology.

AGW is Not Science
Reply to  Pat Frank
August 26, 2023 10:23 am

Correction: That should read “CO2 alarmist nonsense.”

Calling it “climatology” bestows far too much apparent credibility on what is essentially “hypothetical bullshit.”

Reply to  AGW is Not Science
August 26, 2023 11:25 am

I’ve no argument with your view. 🙂

AlanJ
Reply to  ToldYouSo
August 25, 2023 11:45 am

Pat is taking a fixed error and treating it as a rate of error buildup, that is precisely the issue I am calling out in this thread. Cloud forcing error doesn’t build up year over year in the same way that my clock’s one minute error doesn’t build up day after day.

Reply to  AlanJ
August 25, 2023 2:19 pm

“Pat is taking a fixed error and treating it as a rate of error buildup, that is precisely the issue I am calling out in this thread.”

In an iterative process the uncertainty grows with each iteration.

You can deny that all you want but its the plain, physical fact.

A bulldozer building up a road bed in layers using a box blade will see the total uncertainty of the buildup grow if the box blade is bad. A layer that is supposed to be 4″ thick will wind up being 4.1″ thick. When the next layer is added it will be 8.2″ thick. The uncertainty in the height of the road bed has grown from 0.1″ to 0.2″.

Do you live in the real world at all? Have you *ever* built anything using lumber? Something like baseboard trim around a room?
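
The road-bed example works out numerically as follows (a sketch, treating the 0.1″ per layer as a systematic blade error, as the comment describes):

```python
# Each layer is nominally 4.0 in thick, but the bad box blade adds a
# systematic 0.1 in error.  Because each new layer sits on the previous
# ones, the height error of the stack grows with every iteration.

def stack_height(n_layers, nominal=4.0, blade_error=0.1):
    """Actual height of n_layers built with the biased blade (inches)."""
    return n_layers * (nominal + blade_error)

def stack_error(n_layers, blade_error=0.1):
    """Departure of the stack from its nominal height (inches)."""
    return n_layers * blade_error

assert abs(stack_height(1) - 4.1) < 1e-12
assert abs(stack_height(2) - 8.2) < 1e-12
assert abs(stack_error(2) - 0.2) < 1e-12
```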

Reply to  Tim Gorman
August 25, 2023 5:17 pm

I suspect AlanJ doesn’t know how to distinguish uncertainty from physical error, Tim.

We’ve only seen that mistake about a b’zillion times from the alarmist crowd.

Reply to  AlanJ
August 25, 2023 5:15 pm

Uncertainty is not error, AlanJ.

The LWCF RMSE ±4 W/m² is an average annual uncertainty, not an error.

That distinction seems beyond the grasp of nearly every climatologist I’ve encountered, and every climate modeler bar none.

Here, let’s find out. Does ±17 C uncertainty at the end of a centennial temperature projection mean the model is oscillating wildly between a hothouse and an ice house climate?

Reply to  Pat Frank
August 25, 2023 6:25 pm

Bellman has quotes from uncertainty texts programmed with hot keys that he thinks mean uncertainty is error.

He doesn’t know what he doesn’t know.

Reply to  karlomonte
August 25, 2023 6:55 pm

I wish you’d told me that before I wasted my time copying out the text by hand.

Reply to  Bellman
August 26, 2023 7:05 am

Don’t lie, you’ve cut-and-pasted that same text multiple times, thinking that it proves uncertainty equals error.

It doesn’t.

Reply to  karlomonte
August 26, 2023 5:07 pm

Has it occurred to you that the text is the same because I’m copying the same source? Why would I lie about it?

Reply to  Bellman
August 26, 2023 10:26 pm

…thinking that it proves uncertainty equals error.

It doesn’t.

Reply to  karlomonte
August 27, 2023 4:24 am

You’re trolling so much you can’t even keep your insults straight. You were for some reason insisting I had a couple of quotes on file so I could copy and paste them, and then called me a liar when I admitted to copying them out rather than doing the sensible thing and keeping them on record.

I am not the one saying uncertainty equals error – I’m pointing out that in a loose fashion some of the sacred texts said that they could be regarded as roughly equivalent. Personally I don’t agree they are equivalent, as an error is a specific difference between a measurement and the true value, whereas uncertainty means the entire range of possible errors.

I also say it’s possible to define uncertainty in ways that avoid using the word error – I’m just not sure if that means you are actually abolishing errors from the definition or just the word.

But what I also think is some here see “uncertainty is not error” as a way of avoiding defining what they think uncertainty actually is – and that therefore they can never be wrong.

Reply to  Bellman
August 27, 2023 7:42 am

Have you figured out the answer to your question yet?

You don’t even understand the very basics yet here you are trying to lecture Pat Frank.

Reply to  karlomonte
August 27, 2023 10:01 am

Which question would that be? The answer I normally figure out is you are talking nonsense. It seems to apply to most questions.

Reply to  Bellman
August 27, 2023 12:51 pm

Tim is correct, dementia has set in.

Reply to  karlomonte
August 27, 2023 1:19 pm

Sorry if it went over your head.

Reply to  Bellman
August 27, 2023 1:55 pm

Ouch, this really hurt me, loopholeman. I am crushed.

Reply to  karlomonte
August 26, 2023 3:44 am

And he doesn’t want to know what he doesn’t know!

Reply to  Tim Gorman
August 26, 2023 7:07 am

“Don’t confuse me with facts, my mind is made up!”

Reply to  karlomonte
August 26, 2023 8:56 am

He doesn’t know what he doesn’t know.

Is that error or uncertainty?

/sarc

Reply to  Pat Frank
August 25, 2023 6:26 pm

Error implies you know a “true value” you can use to calculate the error associated with a measurement. That seems to be endemic in climate science – they *know* what the true value should be.

From the GUM (which I doubt AlanJ has ever studied):

“E.5.1 The focus of this Guide is on the measurement result and its evaluated uncertainty rather than on the unknowable quantities “true” value and error (see Annex D). By taking the operational views that the result of a measurement is simply the value attributed to the measurand and that the uncertainty of that result is a measure of the dispersion of the values that could reasonably be attributed to the measurand, this Guide in effect uncouples the often confusing connection between uncertainty and the unknowable quantities “true” value and error.”

Reply to  AlanJ
August 28, 2023 2:21 am

AlanJ,
Then produce your measurements to show that clouds are invariant from year to year.
The expectation is that variation and uncertainty will increase over longer times.
It is usual to start an analysis of future uncertainty with NOW, because that is the start of the future.
Probably my fault, but climate work seems to involve modelling the future more than the past, so I assumed it was normal for Pat to show propagation of error into the future, starting NOW (whenever that time was).

bdgwx
Reply to  ToldYouSo
August 25, 2023 1:13 pm

ToldYouSo: You fail to recognize that what might be a fixed error is different than what might be a rate of error buildup (aka a propagation of errors)

Lauer & Hamilton are not saying the 4 W/m2 LCF figure is building up at 4 W/m2.year. In fact, they don’t use units of W/m2.year or any timeframe. They are only saying the multimodel calibration difference is 4 W/m2.

Reply to  bdgwx
August 25, 2023 2:10 pm

“In fact, they don’t use units of W/m2.year or any timeframe. They are only saying the multimodel calibration difference is 4 W/m2.”

Non-sequitur. Try again.

FYI: watts are units for the flow of energy per unit of time. Energy can build up in a system having heat capacity, such as exist in Earth’s atmosphere and oceans.

Reply to  ToldYouSo
August 25, 2023 4:04 pm

bdgwx appears to be a statistician. You will confuse him by trying to explain dimensional analysis.

bdgwx
Reply to  ToldYouSo
August 26, 2023 1:23 pm

bdgwx: In fact, they don’t use units of W/m2.year or any timeframe. They are only saying the multimodel calibration difference is 4 W/m2.

ToldYouSo: Non-sequitur. Try again.

That comes from Lauer & Hamilton 2013.

FYI: watts are units for the flow of energy per unit of time. Energy can build up in a system having heat capacity, such as exist in Earth’s atmosphere and oceans.

That has nothing to do with this.

Reply to  bdgwx
August 25, 2023 5:25 pm

or any timeframe

Define annual mean, bdgwx.

“multimodel calibration difference is 4 W/m2.”

The E in RMSE is error, not difference. The rmse result is ±4 W/m², not 4 W/m².

The corrected meanings are the obvious reading. And yet you portray them incorrectly. How does that happen, bdgwx?

bdgwx
Reply to  AlanJ
August 25, 2023 10:31 am

For lurkers, and expanding on AlanJ’s analogy: what Frank is saying is that your ±1 min calibration uncertainty of your clock is equivalent to saying it is ±1 min/timestep, where timestep is how frequently you read your clock. Forget for a moment the absurdity of saying two completely different units can be treated the same. If you read your clock once per second, then what Frank is saying is that after 1 month your uncertainty in your clock reading is sqrt[ 30 * 24 * 60 * 60 * 1^2 ] = 1610 min, and all because your clock was off by only 1 minute. And the more frequently you read your clock, the more uncertain you are of the time. If that does not qualify as “bizzaro” then I don’t know what does.
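
For the record, the arithmetic in that reductio is as stated (a sketch: one ±1 min reading per second for 30 days, propagated in quadrature):

```python
import math

# Quadrature propagation of a +/-1 min uncertainty over one reading per
# second for 30 days, as in the comment above.
n_readings = 30 * 24 * 60 * 60          # 2,592,000 readings
u_total_min = math.sqrt(n_readings * 1.0**2)
assert round(u_total_min) == 1610       # ~ +/-1610 min
```

Whether that propagation is the right model for a clock read-out is exactly what the two sides of this thread dispute; the sketch only verifies the number.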

paul courtney
Reply to  bdgwx
August 25, 2023 11:00 am

Mr. x: So a clock is the same as the climate models? Please call Mike Mann and let him know he can use a clock to tell temperature. He probably hates sawdust by now.

Reply to  bdgwx
August 25, 2023 1:12 pm

Bad analogy. Cloud error, expressed in W/m^2 is a ‘rate’. CO2 ‘forcing’, expressed in W/m^2 is also a rate. Their impact on energy accumulation, and presumably on surface temperature, should therefore commensurately scale with time, regardless of the length of the time step chosen by the modeler, i.e., as Pat indicated here:

https://wattsupwiththat.com/2023/08/24/patrick-frank-nobody-understands-climate-tom-nelson-pod-139/#comment-3771976

Conceptually, this is the same way (price) volatility is handled in financial modeling.

Apart from this, I think what I’m really seeing here is in large part just consternation on the part of some that Pat’s work demonstrates that:

  • It’s possible to linearly emulate (on average) the temperature increases ‘calculated’ by presumably complex climate models using only a running total of projected CO2 forcings.
  • That the models display their lack of a sound basis in physics due to their tendency to ‘spew’ all over the place once their training wheels are removed beyond the tuning period.

bdgwx
Reply to  Frank from NoVA
August 25, 2023 2:18 pm

Frank from NoVA: Cloud error, expressed in W/m^2 is a ‘rate’.

W/m2 is a flux. W/m2.year is the rate of change of W/m2 per year. Lauer & Hamilton make it clear that the units for the values they presented are W/m2 and not W/m2.year.

Reply to  bdgwx
August 25, 2023 3:45 pm

You don’t understand what a Watt is!

One Joule per SECOND, which is a RATE.

Reply to  karlomonte
August 25, 2023 4:22 pm

And the 4 W/m^2 is an UNCERTAINTY interval, not an absolute rate. bdgwx *still* doesn’t understand what uncertainty is! And likely never will!

Reply to  Tim Gorman
August 25, 2023 4:25 pm

The 4 W/m^2 is an uncertainty, not an absolute value of flux. That makes all the difference.

bdgwx has never understood what uncertainty is. and apparently neither has AlanJ.

When uncertainty approaches or exceeds the value you are measuring or attempting to discern you are done. YOU DO NOT KNOW what is happening. And that is the issue with the climate models. Their uncertainty quickly exceeds the differences they are trying to discern. It’s why they always have to assume that measurement uncertainty is zero and the stated values are 100% accurate.

Reply to  bdgwx
August 25, 2023 3:47 pm

W/m^2 is indeed equivalent to a rate, that of energy per unit time crossing a unit area normal to the flow of energy.

bdgwx
Reply to  ToldYouSo
August 25, 2023 5:02 pm

W/m^2 is indeed equivalent to a rate, that of energy per unit time crossing a unit area normal to the flow of energy.

W/m2 is the rate at which joules transfer across a surface each second. It is not the rate at which the flux itself is changing. Therefore it is not equivalent to a rate of change of flux. Any insinuation that it is, is patently false. And I’ll remind you L&H 2013 clearly say 4 W m-2 and not 4 W m-2 year-1. They are not saying the systematic difference of LCF between models is increasing.

Reply to  bdgwx
August 26, 2023 11:40 am

“Therefore it is not equivalent to a rate of change of flux.”

Wrong again. W/m^2 is the rate of change of energy flux, which is expressed in fundamental SI units as joule/second/m^2, or equivalently (joule/m^2)/second.

Simple to see, if you understand dimensional analysis . . . impossible to see if you don’t.

bdgwx
Reply to  ToldYouSo
August 26, 2023 1:19 pm

ToldYouSo: W/m^2 is the rate of change of energy flux

That is both false and absurd. It is a flux; not a change in flux.

Reply to  bdgwx
August 25, 2023 4:21 pm

Please try to keep up. It is the annual mean uncertainty, not an absolute value of change in the rate.

What does the term ANNUAL mean to you?

Reply to  bdgwx
August 25, 2023 6:11 pm

“W/m2.year” Tendentiously inserting your own conclusion.

And still selectively blind to “annual mean.” L&H made it clear they were dealing with annual means.

RMSE produces ±4 W/m². Tell me, bdgwx: how does a “rate” proceed in the +x and -x directions simultaneously?

bdgwx
Reply to  Pat Frank
August 25, 2023 8:02 pm

Pat Frank: “W/m2.year” Tendentiously inserting your own conclusion.

W/m2.year is YOUR conclusion. AlanJ and I are the ones telling you it is wrong. Don’t blame YOUR use of W/m2.year on me.

And still selectively blind to “annual mean.” L&H made it clear they were dealing with annual means.

I’m hardly ignoring that. It is right there in their publication. They had 20 annual means in W/m2 for each of the 27 models. Again…that is W m-2, not W m-2 year-1, as you assumed in your paper.

RMSE produces ±4 W/m² Tell me bdgwx, how does a “rate” proceed in the +x and -x directions simultaneously?

I don’t know. You tell me. It’s sounding an awful lot like an absurd argument YOU made up and expect me to defend. I’m not interested in defending your arguments, especially when they are absurd.

Reply to  bdgwx
August 25, 2023 9:52 pm

“W/m2.year is YOUR conclusion.”

No it is not. W/m²/year is L&H’s conclusion.

They had 20 annual means in W/m2 for each of the 27 models.

L&H “A measure of the performance of the CMIP model ensemble in reproducing observed mean cloud properties is obtained by calculating the differences in modeled (x_mod) and observed (x_obs) 20-yr means.”

“The overall comparisons of the annual mean cloud properties with observations are summarized for individual models and for the ensemble means by the Taylor diagrams for CA, LWP, SCF, and LCF shown in Fig. 3. These give the standard deviation and linear correlation with satellite observations of the total spatial variability calculated from 20-yr annual means.”

20-year ensemble means, bdgwx. You never fail to lose an opportunity to be correct.

bdgwx
Reply to  Pat Frank
August 26, 2023 1:13 pm

“No it is not. W/m²/year is L&H’s conclusion.”

No it isn’t. Nowhere in L&H do you see 4 W m-2 year-1. You made up the year-1 part because you misunderstood the methodology. You erroneously declared that because they were working with annual means, that gave you the liberty to append year-1 to the units.

Would you think it is acceptable if I took the liberty to append year-1 to the units of the annual mean temperature reported by UAH? Is the 2022 value really 0.18 C/year? Is the July 2023 value really 0.64 C/month?

Reply to  bdgwx
August 26, 2023 5:59 pm

OMG!!!

You can’t understand their own words; no wonder you are lost.

bdgwx
Reply to  Sunsettommy
August 26, 2023 6:25 pm

Sunsettommy: You can’t understand their own words no wonder you are lost.

Post the page number and paragraph where Lauer & Hamilton say it is 4 W m-2 year-1.

Reply to  bdgwx
August 26, 2023 6:07 pm

You made up the year-1 part because you misunderstood the methodology.

Everything in L&H is calculated as annual means.

A plain reading of the text would inform anyone, who isn’t determined to misunderstand it, that annual means are meant throughout. Including in the uncertainty metrics.

You erroneously declared that because they were working with annual means that gave you the liberty to append year-1 to the units.

Thank-you. You just refuted yourself with an inadvertent and lovely own goal.

Good job, bdgwx. You backed into the right answer.

Reply to  bdgwx
August 26, 2023 3:38 am

“I don’t know. You tell me. It’s sounding awful lot like an absurd argument YOU make up and expect me to defend. I’m not interested in defending your arguments especially when they are absurd.”

Of course you don’t know, even after having it pointed out to you multiple times!

It’s not a rate. It’s the uncertainty about the rate! Why is uncertainty such a hard thing to grasp?

Pat didn’t ask you to defend anything. He asked you a question that you failed to answer.

Reply to  Pat Frank
August 26, 2023 3:36 am

I see you didn’t get an answer to your well-posed question. Hope you didn’t expect one.

Reply to  bdgwx
August 25, 2023 8:42 pm

‘W/m2 is a flux.’

Thanks, ‘Nick’! Let me try again – the watt is a unit of power, i.e., a rate. A watt times a unit of time is a unit of energy, e.g. a watt-hour. Let’s agree that the accumulation/dissipation of energy over time results in an increase/decrease in temperature. Note, for the sake of completeness, that a watt per unit of time has no physical basis.

With me so far? Good. Now let’s assume that the average cloud error, in terms of flux, is 4 W/m^2. This means that we expect the error to be 4 W/m^2 five minutes from now or tomorrow or a century from now.

Now, to find the model uncertainty in terms of temperature over some span of time, it is necessary to determine the uncertainty in energy accumulation over that time span. Per the link I provided above, Pat shows how this is done to arrive at a consistent result for any time interval, no matter how many time steps are comprised in that interval.

Think of this as you would consider pricing a one-year call option using a binomial tree – as long as your volatility scales consistently with your choice of time step, your result should be effectively the same whether you use 100 or 1,000 time steps.
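
That scaling argument can be sketched directly (an illustration with a hypothetical annual σ of 4, not a claim about any particular model): if the per-step uncertainty is σ·√Δt, quadrature propagation gives the same total over a fixed horizon regardless of how many steps it is cut into.

```python
import math

# If the per-step uncertainty scales as sigma * sqrt(dt), then propagating
# in quadrature over n steps returns sigma * sqrt(T) no matter what n is --
# the same step-size invariance used when pricing options on a binomial tree.

def propagated_uncertainty(sigma_annual, years, n_steps):
    dt = years / n_steps
    per_step = sigma_annual * math.sqrt(dt)
    return math.sqrt(n_steps * per_step**2)

u_100 = propagated_uncertainty(4.0, 1.0, 100)
u_1000 = propagated_uncertainty(4.0, 1.0, 1000)
assert abs(u_100 - 4.0) < 1e-9      # total equals sigma * sqrt(1 year)
assert abs(u_100 - u_1000) < 1e-9   # independent of the step count
```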

Reply to  bdgwx
August 25, 2023 2:20 pm

If your clock is running slow or fast then how far off it is DOES grow with each step of the clock!

Do you live in the real world at all?

Reply to  Tim Gorman
August 25, 2023 3:46 pm

Nope! He doesn’t.

Reply to  bdgwx
August 25, 2023 5:37 pm

is equivalent to saying it ±1 min/timestep where timestep is how frequently you read your clock.

Wrong. The ±1 min is the 1σ uncertainty in the time measurement at every single minute.

The question then is, when the clock says 100 minutes have passed, what is the uncertainty (not the error) in that time-reading?

The uncertainty is ±10 min. The actual error is unknown because one doesn’t have an independent accurate reference standard.

… your clock was off by only 1 minute

That’s not what ±1 min uncertainty means. A ±1 min uncertainty means sometimes fast and sometimes slow. But one never knows when or by how much.

You’re assuming that you know the error and that it’s fixed. Both of your assumptions are wrong.

Reply to  bdgwx
August 26, 2023 9:15 am

You fail to realize that uncertainty is a ± interval within which you don’t know what is happening. The “1” minute you are talking about is a systematic error. It remains the same forever, and the GUM treats it as a Type B uncertainty.

As Tim has tried to show you, take an 8′ ± 1″ board. You don’t know what the actual measurement is. It could be -1 inch or +1 inch, you simply don’t know. Then you add a second board.

What is the total? Is it 16′ ±1″ or is it 16′ ±2″?

How about adding the third board?

Is the total length 24′ ±1″ or is it 24′ ±3″?

That is an iterative process, where the total relies on what went before. Show us your answers.
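
The board question above, worked both ways (a sketch; which rule applies depends on whether the ±1″ is a common systematic offset shared by every board or an independent uncertainty for each one):

```python
import math

# Stacking n boards, each 8 ft +/- 1 in.  If the errors are perfectly
# correlated (the same systematic offset on every board), the bounds add
# linearly; if they are independent, the uncertainties add in quadrature.

def worst_case_in(n, u=1.0):
    return n * u

def quadrature_in(n, u=1.0):
    return math.sqrt(n) * u

assert worst_case_in(3) == 3.0                       # 24 ft +/- 3 in
assert abs(quadrature_in(3) - math.sqrt(3)) < 1e-12  # ~ +/-1.73 in
```

Either way, the total uncertainty after three boards is larger than the ±1″ of a single board, which is the commenter's point.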

paul courtney
Reply to  AlanJ
August 25, 2023 10:54 am

Mr. J: Is your clock producing an iterative calculation? Or just mechanically telling time? Do you know the difference? Not sure I do, but I can see you don’t.

Reply to  AlanJ
August 25, 2023 11:27 am

Wrong analogy, Alan. It’s not a constant offset error. The ±4 W/m² is a calibration uncertainty. Uncertainty is not error.

In your clock example, you don’t know it’s off by 1 minute (error). You know the uncertainty in reading is ±1 minute. And then you want to know exactly how many minutes have passed when your clock registers 100 minutes.

The error at 100 minutes might be one minute. But you can’t know that. You can only know the uncertainty in the reading, which will be sqrt(100*1²) = ±10 minutes.
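
A one-line check of that propagation (a sketch, treating each of the 100 one-minute steps as carrying an independent ±1 min, 1σ uncertainty, as the comment states):

```python
import math

# 100 steps, each with a +/-1 min (1-sigma) reading uncertainty,
# propagated in quadrature: sqrt(sum of squared per-step uncertainties).
u_reading = math.sqrt(100 * 1.0**2)
assert u_reading == 10.0    # +/-10 min at the 100-minute mark
```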

AlanJ
Reply to  Pat Frank
August 25, 2023 12:07 pm

It is exactly a constant offset error. The clock isn’t changing its wrongness between readings, it is always a minute fast or a minute slow. If I count the minute hand going round 100 times, 100 minutes will have elapsed. What I don’t know is if the time 100 minutes hence is actually a minute higher or lower than the clock will read.

Reply to  AlanJ
August 25, 2023 12:14 pm

Deftly demonstrating that you don’t understand uncertainty.

Uncertainty is not error.

Reply to  AlanJ
August 25, 2023 2:25 pm

“The clock isn’t changing its wrongness between readings, it is always a minute fast or a minute slow.”

How do you know this? Calibration error is typically a “drift”, not a fixed amount. Thermometers *DRIFT*, and usually in one direction. Meaning the offset bias increases with time.

You are trying to say that if the display needle on the clock is bent then it will always be bent by the same amount. That is a systematic error that is not susceptible to statistical analysis. So how did you figure out that the display needle was bent?

AlanJ
Reply to  Tim Gorman
August 25, 2023 3:26 pm

In the case of the climate models, we know because L&H, as cited by Pat, have provided the fixed uncertainty estimate. They estimated exactly how much the display needle on the clock was bent.

AlanJ
Reply to  AlanJ
August 25, 2023 3:28 pm

Pat has mistakenly decided that their time invariant estimate is a rate and so the bend should keep getting benter year over year.

Reply to  AlanJ
August 25, 2023 6:22 pm

RMSE produced ±4 W/m² as an annual mean uncertainty. How does a rate proceed in the +x and -x directions simultaneously?

The average number of pigeons you see per year is not a rate. It’s a number average.

The scatter about the number of pigeons you may see per year is not a rate. It’s an uncertainty in the number of pigeons you might see in any given year.

Reply to  Pat Frank
August 26, 2023 3:42 am

Nice comment. It gets right to the heart of the matter.

I don’t know if AlanJ, bdgwx, and Bellman are willfully ignorant or just plain dense. Your example lays it out pretty well, but I’m pretty sure they won’t get its truth at all.

Reply to  Tim Gorman
August 26, 2023 7:12 am

Rather than trying to understand, they will instead just push the minus button.

Much easier.

AlanJ
Reply to  Pat Frank
August 26, 2023 6:59 am

The scatter about the number of pigeons you may see per year is not a rate.

Then you shouldn’t be treating it as one, should you?

Reply to  AlanJ
August 26, 2023 7:11 am

“Then you shouldn’t be treating it as one, should you?”

How is propagation of calibration uncertainty a velocity?

AlanJ
Reply to  Pat Frank
August 26, 2023 7:39 am

It doesn’t propagate, it’s a base state error, not a response error. You’re tacking on this “per year” unit and treating it as though it’s a quantity that grows every year. But it isn’t, and you have arbitrarily chosen the convention of a year for the compounding period, but you could have chosen a month, or a billion years instead, and gotten completely different uncertainties. People have pointed this out to you since time immemorial and you’ve never been able to grasp the concept.

Reply to  AlanJ
August 26, 2023 11:47 am

“It doesn’t propagate, it’s a base state error, not a response error.”

Simulated minus observed (L&H eqn 1) is a response error.

L&H carried out a calibration experiment. They derived the per-model annual mean error across 20 years of simulation using 27 CMIP5 models.

The ±4 W/m²/year is the per-model mean annual uncertainty in simulated LWCF.

You’re tacking on this “per year” unit and treating it as though it’s a quantity that grows every year.

Uncertainty is not error. It doesn’t grow per year. It increases per step in an iterative calculation.

In the case of an annual mean uncertainty, a propagation step is the calculation across one year.

A 20-year mean is the mean per year of 20 years of data.

The annual average of 20 years of data.

A concept so simple only a climate modeler (or a modeler groupie) could miss it.

but you could have chosen a month, or a billion years instead,

It’s an annual mean. I chose one year. Anything else is wrong. Your argument is wrong.

People have pointed this out to you since time immemorial and you’ve never been able to grasp the concept.

People who don’t know the meaning of “annual mean,” evidently. People like you, Alan, by all accounts.

I’ve not been able to grasp how it is that so many people cannot understand the phrase, “annual mean.”

Reply to  Pat Frank
August 26, 2023 3:09 pm

It is WILLFUL IGNORANCE. It can be nothing else. By their definition, your mean annual driving distance over a period of 20 years can’t be used to determine your total driving distance over those 20 years!

The total distance you travel over 20 years has to be the mean of the 20 years. It’s a “base state”. The total driving distance can’t grow each year; it can only be the base state – i.e., the annual mean.

Reply to  Tim Gorman
August 26, 2023 6:11 pm

He’s finding every possible way to misinterpret the plain meaning of the text.

Reply to  AlanJ
August 26, 2023 8:12 am

He’s not treating it that way. It’s an uncertainty interval. Thus it carries the same units as the rate. It is an interval of *NOT KNOWING*. Why is that so hard to understand?

Reply to  Tim Gorman
August 26, 2023 6:12 pm

At a guess, he and others resist understanding because with understanding comes the realization that they’ve nothing left to say about the state or the evolution of the climate.

Reply to  AlanJ
August 25, 2023 5:17 pm

No, they didn’t say how much the needle was bent! They gave the annual uncertainty! Meaning per year!

I give you a 50-gallon drum. Every hour I give you a bucket with 1 gallon ± 1 quart to empty into the drum. What happens with the 50th bucket?

Is what you get at the end just +/- 1 quart over or under?

It’s the same with the annual uncertainty iterated over any number of years using the same calculation each year. The input to the next year is the output of the prior year. The water in the drum is the accumulation, including the uncertainty, of all the prior buckets. The uncertainty doesn’t start over at each iteration!
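As a sanity check on the arithmetic of the drum example, here is a minimal sketch assuming the 50 per-bucket uncertainties are independent and combine in quadrature (root-sum-square); the bucket volumes are the hypothetical ones from the comment above:

```python
import math

# Hypothetical drum example: 50 buckets, each nominally 1 gallon,
# each with an independent uncertainty of +/- 1 quart (0.25 gallon).
n_buckets = 50
u_bucket = 0.25  # gallons

# The total volume is a sum of 50 buckets, so independent uncertainties
# combine in quadrature: u_total = sqrt(sum of u_i^2) = u * sqrt(n).
u_total = math.sqrt(n_buckets * u_bucket**2)

print(f"Nominal total: {n_buckets * 1.0:.1f} gallons")
print(f"Uncertainty of total: +/- {u_total:.2f} gallons")
# The combined uncertainty (~1.77 gal) is well above the +/- 0.25 gal
# of any single bucket, though below the worst case of 50 * 0.25 gal.
```

The point the sketch makes is only arithmetical: the uncertainty of the accumulated total is larger than the per-bucket uncertainty, and it does not reset with each new bucket.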

Reply to  AlanJ
August 25, 2023 6:16 pm

“fixed uncertainty estimate.”

An rmse of ±4 W/m². “±” does not indicate a fixed offset.

AlanJ
Reply to  Pat Frank
August 26, 2023 7:01 am

It does. It is a calibration error, we don’t know the exact value of the offset, we just know the range it is in. The upper and lower bound. It does not change year to year. It is a time invariant uncertainty. You’ve mentally added the “per year” onto the end and this has caused you tremendous confusion.

Reply to  AlanJ
August 26, 2023 7:17 am

“It is a calibration error,”

It’s the uncertainty statistic from a calibration experiment. It conditions every step of an iterative climate simulation.

Let’s conclude that you don’t understand the term annual mean.

AlanJ
Reply to  Pat Frank
August 26, 2023 7:41 am

Let’s conclude that you don’t, Pat. Annual mean means it is not a seasonal estimate, it’s the estimate across the full year. That doesn’t mean it is a rate of uncertainty accumulation.

Reply to  AlanJ
August 26, 2023 8:14 am

He didn’t say anything about uncertainty accumulation across a year. It’s the accumulation of uncertainty across MULTIPLE years, i.e. multiple annual iterations!

Reply to  AlanJ
August 26, 2023 11:58 am

Annual mean means it is not a seasonal estimate, it’s the estimate across the full year.

Lauer & Hamilton disagree with you Alan.

The annual mean of 20 years of data is the annual average of data summed over that 20 years.

That doesn’t mean it is a rate of uncertainty accumulation.

±4W/m² is a rate? With a velocity simultaneously along the +x and -x axes?

That’s what you’re averring, isn’t it? Motion simultaneously in opposed directions.

±4W/m² is not a rate. It’s an uncertainty statistic. Statistics are not rates.

I have measured chemical reaction rates. The rates are not statistics.

AlanJ
Reply to  Pat Frank
August 26, 2023 7:57 pm

The annual mean of 20 years of data is the annual average of data summed over that 20 years.

Not summed, averaged. They took the annual means for 20 years, then took the average of those 20 annual means. The result is a time invariant estimate of LCF. They calculated the difference between this estimate for the models and the observations.

Reply to  AlanJ
August 25, 2023 3:00 pm

Then why do you assume that you *DO* know the temperature 100 years from now? That all stated values of temperature are 100% accurate? And that all iterative outputs from the climate models are 100% accurate?

If you don’t know then you don’t know! And pretending you do is only fooling yourself!

Reply to  AlanJ
August 25, 2023 6:13 pm

The clock isn’t changing its wrongness between readings, it is always a minute fast or a minute slow.

In which case your analogy is misconceived. Uncertainty is not error.

Reply to  AlanJ
August 25, 2023 12:04 pm

Again, you are confusing a constant bias in the accuracy with an uncertainty in the accuracy. Consider an old watch that is in need of cleaning and oiling. It doesn’t keep good time. In fact, sometimes it runs fast and sometimes it runs slow. You don’t know when it does either, just that it does. From the Empirical Rule, the standard deviation is approximately 1/4 the range in error. You never know exactly what the error is.

I’m reminded of the old joke that a man who only owns a pocket watch always knows what time it is. However, the man who owns both a pocket watch and a wrist watch is never sure of the right time.

bdgwx
Reply to  Clyde Spencer
August 25, 2023 2:15 pm

Lauer & Hamilton are talking about systematic differences between models. They are not talking about the differences getting bigger or smaller over time. For all intents and purposes the 4 W/m2 figure is a constant bias. It is not the amount the LCF error expands each year like what Pat is implying.

paul courtney
Reply to  bdgwx
August 25, 2023 3:49 pm

Mr. x: A constant bias, like Mr. J’s clock? It’s off one minute +/- (he doesn’t know which), but he seems to know it’s a constant “off one minute,” as distinguished from a watch that loses or gains (he doesn’t know which) a minute OVER TIME. He doesn’t seem to get that if he says it’s “constant,” then saying it can’t propagate is a tautology. Do you? Oh, well, it’s not your clock analogy (you are wise to avoid it, unlike Mr. J.) and you won’t define “iterative”; maybe you can define “annual”?

Reply to  bdgwx
August 25, 2023 4:19 pm

You *really* don’t understand what an iterative process is, do you?

If you have an uncertainty u1 in year 1 then that gets propagated into the output from the year 1 calculations. That uncertainty u1 adds to the uncertainty u2 in year 2 so the output in year 2 is a combination of u1 and u2. The output of year 3 is a combination of u1, u2, and u3.

That uncertainty is *NOT* a dial calibration issue on an analog meter from a bent needle. It is an uncertainty in the values the dial indicates.

If you used the same initial values in year 1, year 2, year 3, etc then you will get approximately the same values output each year. But since the initial conditions for year 2 are the output of year 1 you will get a different value from year 2 than you got in year 1. And since the output of year 1 has uncertainty so does the input to year 2. And that uncertainty from year 1 will be conditioned by the uncertainties inherent in year 2 calculations in an additive manner.

The iterative process of a climate model is exactly like the tractor and box blade building up a road bed. If the box blade is 1″ off then that 1″ will accumulate at each iteration (layer) of roadbed laid down. You won’t end up with 1″ of total uncertainty at the end. You’ll end up with 1″ times the number of layers put down!

If the MEAN annual uncertainty is u1, then that uncertainty will accumulate with each annual iteration that is laid on top of the previous ones.
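A minimal sketch of the iteration being described, assuming a fixed per-step uncertainty (the ±4 W/m² figure cited in this thread) that compounds in quadrature at each annual step. Whether that compounding is appropriate is exactly what the thread disputes; the sketch only shows what the rule produces:

```python
import math

u_step = 4.0   # per-step uncertainty from the thread, +/- 4 W/m^2
n_steps = 20   # a hypothetical number of annual iterations

# Each iteration's output (carrying all prior uncertainty) becomes the
# next iteration's input, so per-step uncertainties combine in
# quadrature rather than starting over each year.
u = 0.0
for _ in range(n_steps):
    u = math.sqrt(u**2 + u_step**2)

print(f"After {n_steps} steps: +/- {u:.2f}")              # grows as sqrt(n)
print(f"Closed form: +/- {u_step * math.sqrt(n_steps):.2f}")
```

Under this rule the uncertainty after n steps is u_step·√n, which grows without bound as iterations accumulate; under the rival "constant offset" reading it would stay ±4 W/m² forever.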

AlanJ
Reply to  Tim Gorman
August 25, 2023 5:04 pm

The iterative process of a climate model is exactly like the tractor and box blade building up a road bed.

It’s not, you’re just claiming that it is, effectively assuming your conclusion in the premise of the argument. The cloud forcing uncertainty in year two is independent of the cloud forcing uncertainty in year one. It’s as if you scraped the roadbed clean each day and started all over again. At the end of each day you’ll be off by +/- an inch, but that has no impact on the amount you’ll be off by the next day. This is like… the fundamental error that Pat is making. And he’s making the mistake because he is utterly convinced that the uncertainty is in units of W/m^2/year and so he should compound the uncertainty each year.

AlanJ
Reply to  AlanJ
August 25, 2023 5:06 pm

And as I’ve pointed out, because this convention is completely unphysical and arbitrary, he could choose any commanding schedule he liked, effectively exercising complete control over the uncertainty produced by his approach. Why not compound it millennially instead? The uncertainty would wither to nothingness.

AlanJ
Reply to  AlanJ
August 25, 2023 5:06 pm

Compounding schedule*

Reply to  AlanJ
August 25, 2023 6:15 pm

“It’s not, you’re just claiming that it is, effectively assuming your conclusion in the premise of the argument.”

Malarky! Prove it! Show me where the analogy is wrong. You are just stating that it is and expecting everyone to believe you without you having to put any effort into disproving it!

“The cloud forcing uncertainty in year two is independent of the cloud forcing uncertainty in year one.”

Of course it is! So what? Are you trying to argue that averages can’t be used? That each year all the initial conditions have to be totally rewritten instead of using the output of the previous year as the input to the next year?

“It’s as if you scraped the roadbed clean each day and started all over again.”

That’s not how an iterative model works! Each year builds on the next one. You said that YOURSELF when you said it takes several steps for the model to sync up!

You can’t even keep your own arguments straight! Perhaps you should start writing them down and reviewing them before making another post that is contradictory to your own assertions!

You are just shooting from the hip. First your want the models to be iterative and then you don’t. You want each year to depend on the prior one and then you want each year to be independent of prior years.

How can the models handle oscillatory phenomena (e.g., ENSO) if each year can’t depend on the prior year?

Give it up man! You are lost in your own mind!

Reply to  AlanJ
August 26, 2023 12:32 am

“The cloud forcing uncertainty in year two is independent of the cloud forcing uncertainty in year one.”

But the year 2 simulation initializes on the erroneous simulation of year 1. The year 2 simulation then erroneously projects the already wrong climate state delivered by step 1.

Step 3 initializes with the combined errors of step 1 and step 2. And etc.

The simulation error accumulates with every iterative step. But the error accumulates in some unknown fashion because the correct energy state of the projected climate is unknowable. Comparison with future observables to determine error is impossible.

But the reliability of the projected climate can be determined by propagating the uncertainty of the projected climate. And in the case of the troposphere, the lower limit uncertainty of the simulated thermal energy state is given by the annual average LWCF calibration error. Namely ±4 W/m² (CMIP5 models).

All of this is discussed in “Propagation…” under “Differencing From a Base-State Climate Does Not Remove Systematic Error.”

Reply to  AlanJ
August 26, 2023 12:39 am

“It’s as if you scraped the roadbed clean each day and started all over again.”

You’re confusing error with uncertainty. Error is knowing the size of the mistake, which knowledge you assume here. One never knows that in a futures projection.

Rather, each day you’d not know how much you’d laid down or scraped off, or even if you’d scraped any off.

And you’d not know how much you’d laid down the next day either. At the end, you’d have no idea of the correct thickness of the roadbed.

But if you know the uncertainty of your process, you’d have an estimate at the end of the reliability of the bed thickness in meeting spec.

Reply to  Pat Frank
August 26, 2023 6:58 am

Eloquently put. But it will go in one ear and out the other meeting no resistance in between.

Reply to  AlanJ
August 26, 2023 9:40 am

Yes, cloud forcing in any given next iteration may be independent, however, the information input to that iteration from the previous one already carries the uncertainty in the values. That means the next iteration starts off with data where it is unknown what the values truly are. They are uncertain. Unless you assume that the previous values are 100% accurate (they aren’t) then the next iteration expands the uncertainty interval.

Reply to  Jim Gorman
August 26, 2023 11:17 am

Why is this so hard for them to understand?

Reply to  bdgwx
August 25, 2023 6:27 pm

Laurer & Hamilton are talking about systematic differences between models.

They’re talking about the difference between model simulations and observations.

the 4 W/m2 figure is a constant bias.

RMSE calculation does not produce positive constants.

It is not the amount he LCF error expands each year like what Pat is implying.

I’m not implying expansion of error each year. I’m directly demonstrating an expansion of uncertainty each iterative year.

Uncertainty is not error.

Every single one of your step perceptions is wrong.

Reply to  Pat Frank
August 26, 2023 2:22 am

They will never understand because their built-in biases (pun intended) prevent it.

Reply to  Pat Frank
August 26, 2023 3:49 am

“Uncertainty is not error.”

They could write this on the blackboard 1000 times and it still wouldn’t sink in.

Reply to  AlanJ
August 25, 2023 2:11 pm

If your clock is off because it is running slow or fast then your accumulated uncertainty *WILL* GROW!

And just how do you know it is off to begin with? You can’t know that inherently from looking at just the data it provides. That’s the problem with systematic bias, it is not able to be identified by statistical analysis.

That’s why so many treatises on measurement uncertainty wind up assuming that systematic bias is either eliminated or made significantly smaller than all other sources of uncertainty for most of their books. It’s so statistical analysis can be done. That’s what Taylor does. It’s what Bevington does. It’s what the GUM does. It’s what Possolo does.
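The point that a constant systematic bias cannot be detected from the data alone can be illustrated with a small simulation; all values here are hypothetical. A constant offset shifts the mean but leaves the scatter untouched, so nothing in the data itself flags it:

```python
import random
import statistics

random.seed(0)
true_value = 20.0
bias = 0.7  # an unknown, constant systematic offset

# 1000 readings with random noise, and the same readings with a bias added.
unbiased = [true_value + random.gauss(0, 0.5) for _ in range(1000)]
biased = [x + bias for x in unbiased]

# The spread (standard deviation) is identical in both series; only an
# external reference measurement could reveal the offset hiding in the mean.
print(statistics.stdev(unbiased))
print(statistics.stdev(biased))
print(statistics.mean(biased) - statistics.mean(unbiased))  # the bias, 0.7
```

Since every statistical summary of spread is unchanged by a constant offset, statistical analysis of the readings alone can never isolate it, which is why the texts cited above set systematic bias aside before doing statistics.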

Reply to  AlanJ
August 26, 2023 8:53 am

See my previous post. Your analogy is wrong. Does your clock gain/lose 1 minute per 24 hours? How do you know? Do you have a reference where you plot the actual difference? Remember, uncertainty is a ± value. That means your clock can lose/gain time in an unknown way. If you have not plotted against a reference, your uncertainty in a month is ~±30 minutes.

AlanJ
Reply to  Jim Gorman
August 26, 2023 11:07 am

You might have a clock that loses a minute every day, but that is not the nature of the error Pat is using to evaluate model uncertainty. The cloud forcing error Pat cites is a base state error in the model calibration, it is akin to having a clock whose minute hand is set one minute ahead or one minute behind. Your uncertainty no matter the elapsed time is one minute, plus or minus. Pat has arbitrarily and without justification added the “per day” bit to his clock.

Reply to  AlanJ
August 26, 2023 12:17 pm

The cloud forcing error Pat cites is a base state error in the model calibration,

Simulated minus observed across 20 years, long removed from the spin-up base year, is not a base-state error. It’s an error in simulated response.

And the RMS of 20 years of error in the simulated response is the mean uncertainty in a simulation.

it is akin to having a clock whose minute hand is set one minute ahead or one minute behind

A ±u uncertainty is not at all akin to a constant offset. And an uncertainty statistic is not a physical magnitude. Your argument is in opposition to the plain meaning of the terms.

Your uncertainty no matter the elapsed time is one minute, plus or minus.

Your constant-one-minute-off clock analogy doesn’t analogize the (plus/minus) LWCF uncertainty. In your analogy, you have knowledge of the systematic error in an unchanging system.

In a climate simulation, neither condition is true.

Pat has arbitrarily and without justification added the “per day” bit to his clock.

My example was per 100 minutes, not per day.
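For concreteness, the kind of statistic under dispute (an RMS of annual simulated-minus-observed differences) can be sketched with made-up numbers. Only the ±4 W/m² magnitude is taken from the thread; the individual annual values below are hypothetical:

```python
import math
import random

random.seed(2)
# Hypothetical annual simulated-minus-observed LCF differences (W/m^2);
# the real +/- 4 W/m^2 figure comes from model calibration data, not this toy.
annual_error = [random.gauss(0.0, 4.0) for _ in range(20)]

# The RMS of the 20 annual differences: a single +/- statistic summarizing
# the per-year mismatch over the 20-year calibration period.
rmse = math.sqrt(sum(e**2 for e in annual_error) / len(annual_error))
print(f"+/- {rmse:.1f} W/m^2")
```

Note that the squaring discards sign, which is why an RMSE is reported as a ± interval rather than as a signed constant offset.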

Reply to  AlanJ
August 25, 2023 11:53 am

I think that you are confusing the probabilistic uncertainty with a constant bias in the accuracy,

Reply to  Tim Gorman
August 25, 2023 7:23 am

Don’t forget meme #0 (which dominates the climastrologer mind):
0. Uncertainty is the same as error.

Reply to  karlomonte
August 25, 2023 7:42 am

Thank you! I did forget that!

Reply to  Tim Gorman
August 25, 2023 11:57 am

Looks like the pack size of minus sign-pushing trendologists is now up to four.

Janice Moore
Reply to  karlomonte
August 25, 2023 1:38 pm

And it appears to be an inside job. I attempted to minus an AlanJ comment and the error message came up: “You’ve already voted for this comment.” I had not. Other times, in attempting to minus bdgx (or whatever) or AlanJ, my vote had 0 effect.

Meh. Who cares? They (or he or she….) are quite handily shooting themselves/her or himself in the foot.

KEEP UP THE GOOD WORK, MEN!! (or women or man …. or woman….)

Reply to  Janice Moore
August 25, 2023 3:51 pm

Janice—this is WordPress at work…it seems to log me out after some unknown time. If it happens while writing a comment it will ‘helpfully’ tell you “Sorry! Commenting is closed.”

Janice Moore
Reply to  karlomonte
August 25, 2023 5:49 pm

Today, when I tried repeatedly to edit one of my comments, it never would let me do that. It just kept yelling, “You’re posting too quickly. Slow down!” 🙄

Reply to  Janice Moore
August 25, 2023 6:32 pm

women or man …. or woman

And we can agree that’s all there are, Janice. 🙂 Nice to see you here.

Janice Moore
Reply to  Pat Frank
August 25, 2023 9:00 pm

😀 Hi! Thank you for the kind acknowledgement. Good to “see” you, too.

Moreover, GREAT to get the immense privilege of attending (for free!) a highly informative, clearly communicated, lecture on a vitally important subject. The clarity of your remarks underscores the fact that you are a giant in this field.

Reply to  Janice Moore
August 25, 2023 9:56 pm

Just paying attention, Janice. No more.

Simon
Reply to  Janice Moore
August 25, 2023 7:25 pm

“KEEP UP THE GOOD WORK, MEN!! (or women or man …. or woman….)”
Or trans…

Janice Moore
Reply to  Simon
August 25, 2023 9:15 pm

Biology is determined at conception. It cannot be changed. No matter how many modifications to the structure and biochemistry of the machine are made.

A trans person can “put on” the body of the opposite sex and can behave as the opposite sex typically behaves, but, he or she still has the same sex he or she was born with.

And if a male goes through puberty before “putting on” a female body, he has a meaningfully more powerful cardiovascular system, stronger muscles, etc., thus,

MEN SHOULD NOT BE COMPETING IN WOMEN’S ATHLETICS.

Yes (smile), I feel quite strongly about this issue.

Take care, Simon. I still pray for you often.

********************

TRUMP 2024! 😀

********************

Reply to  Janice Moore
August 27, 2023 9:51 am

TRUMP 2024! 😀

Put up. My standard wager. If 45 becomes 47, on inauguration evening I’ll find your last comment here and reply “Living with genital herpes has made me a better person”. With no weasel words elsewhere. If not, then you. I’ll post a test post that day, for you to find.

You all coulda’ cleaned up in 2016. I offered this widely, with no one willing to take me up on it.

And pullease, no Lindsey Graham pearl clutching. It’s a simple bet, with low consequences.

Reply to  bigoilbob
August 27, 2023 12:54 pm

Are you drunk, blob?

Reply to  karlomonte
August 27, 2023 2:26 pm

A simple bet. Open to you as well. But I’m betting that you’ll also be all hat and belt buckle.

Reply to  Simon
August 26, 2023 2:23 am

Marxist clown.

Reply to  karlomonte
August 25, 2023 1:56 pm

Don’t whine. If it makes you happier, feel free to down vote this comment.

Janice Moore
Reply to  Bellman
August 25, 2023 3:15 pm

This drawing of you made me happy.
comment image

Reply to  Janice Moore
August 25, 2023 3:23 pm

This is a better likeness.

comment image

Janice Moore
Reply to  Bellman
August 25, 2023 9:16 pm

Heh.

Reply to  Bellman
August 25, 2023 3:48 pm

With bellcurvewhinerman on the scene, all that’s left is an appearance by Nitpick Nick.

Reply to  karlomonte
August 25, 2023 2:54 pm

You noticed too! No refutation of anything, just a downcheck. Nice.

Reply to  Tim Gorman
August 25, 2023 1:59 pm

Possibly because you keep reading Taylor and Bevington, who say they are essentially the same.

Reply to  Bellman
August 25, 2023 2:55 pm

If you would actually STUDY both of these tomes you would understand that they do *NOT* consider them to be the same. You’ve been given the quotes at least twenty times. Even the GUM makes the distinction!

Reply to  Tim Gorman
August 25, 2023 3:46 pm

I said “essentially” the same. E.g. Taylor

1.1 Error as Uncertainty

In science, the word error does not carry the usual connotations of the terms mistake or blunder. Error in a scientific measurement means the inevitable uncertainty that attends all measurements. As such, errors are not mistakes; you cannot eliminate them by being very careful. The best you can hope to do is to ensure that errors are as small as reasonably possible and to have a reliable estimate of how large they are. Most textbooks introduce additional definitions of error, and these are discussed later. For now, error is used exclusively in the sense of uncertainty, and the two words are used interchangeably.

Or Bevington:

Our interest is in uncertainties introduced by random fluctuations in our measurements, and systematic errors that limit the precision and accuracy of our results in more or less well-defined ways. Generally, we refer to the uncertainties as the errors in our results, and the procedure for estimating them as error analysis.

Reply to  Bellman
August 25, 2023 5:01 pm

Good boy, you found what you wanted to see, ignore everything else.

Reply to  Bellman
August 25, 2023 5:44 pm

This only highlights the lack of understanding you have of uncertainty. Random errors are amenable to statistical analysis. Systematic *biases” are not. A systematic bias is *NOT* an error. Since it can’t be quantified in a field measurement it is an UNKNOWN.

If you would actually study both of these instead of doing your incessant cherry-picking, you would see that both define uncertainty as u_total = u_random + u_systematic. “u” stands for UNKNOWN. Unknown means uncertainty. In essence, neither random uncertainty nor systematic uncertainty is an ERROR.

Look at the GUM:

D.5 Uncertainty
D.5.1 Whereas the exact values of the contributions to the error of a result of a measurement are unknown and unknowable, the uncertainties associated with the random and systematic effects that give rise to the error can be evaluated. But, even if the evaluated uncertainties are small, there is still no guarantee that the error in the measurement result is small; for in the determination of a correction or in the assessment of incomplete knowledge, a systematic effect may have been overlooked because it is unrecognized. Thus the uncertainty of a result of a measurement is not necessarily an indication of the likelihood that the measurement result is near the value of the measurand; it is simply an estimate of the likelihood of nearness to the best value that is consistent with presently available knowledge.

E.5 A comparison of two views of uncertainty
E.5.1 The focus of this Guide is on the measurement result and its evaluated uncertainty rather than on the unknowable quantities “true” value and error (see Annex D). By taking the operational views that the result of a measurement is simply the value attributed to the measurand and that the uncertainty of that result is a measure of the dispersion of the values that could reasonably be attributed to the measurand, this Guide in effect uncouples the often confusing connection between uncertainty and the unknowable quantities “true” value and error. (bolding mine, tpg)

Let me repeat. You have *NEVER* understood the difference between error and uncertainty because you are addicted to cherry-picking things you think prove your point instead of actually STUDYING the subject and learning the difference.

It’s like you stubbornly refusing to use the proper terminology “standard deviation of the sample means” and continuing to use “standard error of the mean,” which is meaningless and misleading. And it has misled you totally!

Reply to  Tim Gorman
August 25, 2023 3:53 pm

He cannot break free of being an unlearned clown.

Reply to  Tim Gorman
August 25, 2023 2:20 pm

I see the same sad lies have now been formatted into a handy list.

“1. All measurement error is random, Gaussian, and cancels

Nobody says that; certainly no climate scientist you’ve ever quoted.

2. The average uncertainty is the uncertainty of the average.

The only person I’ve seen saying that is Dr Patrick Frank. You know, when he’s claiming the uncertainty in a month or a year of monthly values is equal to the average uncertainty of the daily values. We went over this in depth the last time his instrument uncertainty paper was discussed.

3. The SEM is the uncertainty of the average.

As a first step and depending on the circumstances yes. But it certainly isn’t the only uncertainty in a global anomaly estimate.

4. Averaging multiple single measurements of different things is the same as averaging multiple measurements of the same thing.

It is not. But to a large extent the same statistics are used. You can use the Central Limit Theorem both for a random sample of different things and the random errors caused by measurement uncertainty when measuring the same thing.

5. Variances of random variables do not add when adding random variables.

They absolutely do, and again I have no idea who this strawman is who you keep thinking says otherwise. Your main problem here is you never understand the difference between adding random variables, and adding random variables then dividing by a constant.

6. Variance is *not* a measure of the uncertainty of the average and can be ignored because it isn’t important.

Correct, variance is not a measure of the uncertainty of an average or any random variable. It isn’t a direct measure of anything really. The measure of the uncertainty is the square root of the variance, i.e. the standard deviation, or standard error of the mean for an average.

Reply to  Bellman
August 25, 2023 3:54 pm

bellcurveman tries to cover for climastrology by making like he’s an expert in metrology and uncertainty.

Yer not.

Reply to  karlomonte
August 25, 2023 4:30 pm

I do not claim to be an expert, I explicitly state I’m not an expert, or even know much about the subject. What I can do is point out where Tim and you have no concept of how an equation works.

I don’t care how much expertise you claim to have in metrology or how many years you’ve spent making up uncertainty reports – if you claim that the uncertainty of a mean increases with sample size – you have to explain how, and not hide behind a load of childish name calling.

Reply to  Bellman
August 25, 2023 5:03 pm

Yet you yammer on and on and on and on and on as if you are an expert, when it is painfully obvious yer not.

Reply to  karlomonte
August 25, 2023 6:01 pm

He *still* hasn’t figured out that the standard deviation of the sample means is *NOT* the uncertainty of the mean. It’s only the uncertainty of your calculation of the mean!

Reply to  Tim Gorman
August 26, 2023 2:25 am

He will never figure it out.

Reply to  Bellman
August 25, 2023 5:59 pm

“What I can do is point out where Tim and you have no concept of how an equation works.”

You are an amateur statistician, not a mathematician let alone an engineer or physical scientist. You didn’t even know what an integral is until it was explained to you and my guess is that you’ve probably forgotten it by now.

“if you claim that the uncertainty of a mean increases with sample size”

You can’t even get this right! You keep saying you don’t believe the average uncertainty is the uncertainty of the average and then you come up with something like this!

The uncertainty of the mean is *NOT* the average uncertainty. It is *NOT* the standard deviation of the sample means.

Increased sample size only lets you calculate the mean more closely. It tells you *NOTHING* about the accuracy, i.e. the uncertainty, of that mean. The mean can be highly inaccurate while the standard deviation of the sample means can be zero!

If your data has large uncertainties then the mean will have a large uncertainty as well! And it doesn’t matter how many decimal places you calculate it out to – that doesn’t make it any more accurate or have any less uncertainty.

And you wonder why we keep saying you believe that the average uncertainty is the uncertainty of the average – even after posting this garbage.

Reply to  Tim Gorman
August 25, 2023 7:38 pm

You didn’t even know what an integral is until it was explained to you and my guess is that you’ve probably forgotten it by now.

Please stop lying about me. It’s tedious and just makes it look like you can’t win an argument without resorting to ad hominems.

You can’t even get this right!

Do you or do you not think the uncertainty of an average increases with sample size?

The uncertainty of the mean is *NOT* the average uncertainty.

You keep saying that and ignoring the many many times I’ve had to point out to you that I agree.

It is *NOT* the standard deviation of the sample means.

And I say to some extent it is. You just asserting it isn’t does not make a persuasive argument, no matter how many capital letters you use.

Increased sample size only lets you calculate the mean more closely.

Which I would say is an important part of reducing uncertainty.

It tells you *NOTHING* about the accuracy, i.e. the uncertainty, of that mean.

It does tell you something about the accuracy, i.e. how precise it is. What it doesn’t tell you about is the trueness – which is why you have to look at systematic biases in your sampling and measurements, etc.

If your data has large uncertainties then the mean will have a large uncertainty as well!

But if those uncertainties are random, increasing sample size will reduce the uncertainty of the mean.

And none of this answers the question as to why you think increasing sample size increases uncertainty.

And you wonder why we keep saying you believe that the average uncertainty is the uncertainty of the average

I do. Thanks for noticing. Maybe if you tried to explain what you think instead of just copying and pasting it every two sentences, the wondering would cease. But to me this just seems nonsense.

Let’s go back to your original claim – 100 thermometers each with a random independent uncertainty of ±0.5°C. You claimed the measurement uncertainty of the average would be ±5.0°C because you have to multiply the uncertainty by root 100. If I believed the uncertainty of the average was the same as the average uncertainty, I would say the uncertainty was ±0.5°C, because that is the average uncertainty. I could use RMS like Frank and say it was √[(100*0.5²) / 100] = 0.5 if I wanted it to look more impressive.

But in fact what I say, at least in the abstract, is that the correct maths is the average uncertainty divided by root N, i.e. ±0.05°C.

I still fail to understand how you think 0.05 = 0.5.
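The three numbers in contention here (±5.0, ±0.5, and ±0.05 °C) come from three different formulas applied to the same inputs. This sketch just lays the arithmetic side by side without taking a side on which convention applies to the 100-thermometer example:

```python
import math

n = 100
u = 0.5  # per-thermometer uncertainty, deg C (hypothetical example from the thread)

# Three different quantities, all built from the same n and u:
sum_quadrature = u * math.sqrt(n)       # uncertainty of the SUM of 100 readings: 5.0
rms = math.sqrt(n * u**2 / n)           # RMS of the 100 uncertainties: 0.5
mean_quadrature = sum_quadrature / n    # uncertainty of the MEAN (sum / n): 0.05

print(sum_quadrature, rms, mean_quadrature)
```

The disagreement in the thread is not over this arithmetic but over which of the three quantities is the right one to attach to an average of field measurements, which hinges on whether the ±0.5 °C uncertainties are independent and purely random.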

Reply to  Bellman
August 26, 2023 2:29 am

And none of this answers the question as to why you think increasing sample size increases uncertainty.

Yet the fact remains that the sample size of a time series remains exactly equal to one, which you will never understand.

±0.05°C

Back to the absurd inane milli-Kelvin temperature “uncertainties”, and you wonder why no one takes you seriously.

Reply to  karlomonte
August 26, 2023 4:52 am

Yet the fact remains that the sample size of a time series remains exactly equal to one

Not much of a series if it only has one point in it.

Back to the absurd inane milli-Kelvin temperature

To paraphrase a certain troll – your only argument is “But it can’t be that small.”

Reply to  Bellman
August 26, 2023 7:06 am

Each point is a sample size of one.

What do you get when you use 100 samples of sample size 1?

Doesn’t the SEM become the square root of the variance of the combined sample means? I.e. the variance of the data in the data set?

Do you *ever* think anything through? Stop cherry-picking and actually learn something.

Reply to  Tim Gorman
August 26, 2023 7:36 am

“Each point is a sample size of one.”

Then it’s not much of a time series. You don’t specify what you want to do with the series, but if, say, you want to look at a long term average or a linear regression, then all points in the period of interest are part of the sample.

“What do you get when you use 100 samples of sample size 1?”

How do you want to “use” them? If you mean take the average of all hundred samples, then you have a single sample of size 100.

“Doesn’t the SEM become the square root of the variance of the combined sample means?”

You’re demonstrating the same lack of understanding about how a sampling actually works as your brother.

Fine. If you want to use the SD of 100 samples of size 1 to estimate the SEM you can do that. But all it is telling you is what the SEM is of a single sample of size 1. It shouldn’t be a surprise that this is the same as the population SD. It’s just saying that if you draw one value at random its expected deviation from the average is the same as the overall deviation from the average.

But if you take the average of 100 samples (assuming they are all randomly selected from the same population), then the SEM is now the standard deviation divided by 10. This means you have a better chance of getting close to the population average with a sample of 100 than if you just take 1 random value.

“Do you *ever* think anything through? Stop cherry-picking and actually learn something.”

I keep trying to be helpful and suggest that these types of comments reflect more on you than they do on me. But you never seem to get the message.

Reply to  Bellman
August 26, 2023 11:20 am

Then it’s not much of a time series.

Yes, you really are this dense. Hapless is the word CMoB uses, and he is (as usual) spot on.

But you never seem to get the message.

Oh the irony.

Reply to  Bellman
August 26, 2023 7:32 am

Are you really this dense?

To paraphrase a certain troll – your only argument is “But it can’t be that small.”

A small step in the right direction—recognizing your condition. Maybe there is hope for you after all.

Is there a Trendologist Anonymous group meeting in your area?

Reply to  Bellman
August 26, 2023 6:00 am

“Do you or do you not think the uncertainty of an average increases with sample size?”

If you have multiple measurements of the same thing using the same instrument under the exact same environmental conditions and the measurements are all independent, random, and form a Gaussian distribution, then the uncertainty of the average goes down with larger sample sizes.

Fail *any* of these restrictions and the uncertainty goes up.

Multiple measurements of different things do not meet *any* of these restrictions. They are not the same thing. They are taken under different environmental conditions. The measurements are probably *NOT* random and Gaussian since they are of different things.

Therefore the sample size is irrelevant and the uncertainty of the average goes UP, way up, with each additional data element added to the sample.

Basic statistics. Which you fail at utterly.

“You keep saying that and ignoring the many many times I’ve had to point out to you that I agree.”

And yet you keep saying that the total uncertainty divided by the number of elements is the uncertainty of the average. Once again, your actions belie your words.

“And I say to some extent it is. You just asserting it isn’t does not make a persuasive argument, no matter how many capital letters you use.”

It is *NOT* to *any* extent at all! It only indicates how precisely you have located the average value. It tells you *NOTHING* about the uncertainty of that average. Even if your SEM is zero you *still* don’t know if the average you calculated is anywhere near accurate. You can only judge its accuracy by propagating the uncertainties of the data elements onto that average.

The SEM and the uncertainty of the average are two entirely different things. It is a failure of statisticians to understand that simple fact that lies at the root of the problem with climate science. Statisticians are too used to working only with stated values and not with “stated value +/- uncertainty”. To most statisticians “+/- uncertainty” simply doesn’t exist. It is certainly not covered in any of the five statistics textbooks I have managed to collect.

Each and every example in each and every textbook only lists out stated values and defines the uncertainty of the mean as the SEM. That is wrong, totally wrong, in the real world. That’s why I say statisticians, like you, live in a phantom “statistics world” and not in the real world. Statisticians don’t design bridges, build support beams, overhaul engines, build stud walls, build road beds, etc. – real people in the real world do that and *they* have to consider measurement uncertainty in every single thing they do!

“But in fact what I say, at least in the abstract, is that the correct maths is the average uncertainty divided by root N, i.e. ±0.05°C.”

Whenever you divide by the number of elements you are finding an average value. Remember, the equation is actually sqrt(total uncertainty/ N) which becomes sqrt of the average uncertainty value. That is what “total uncertainty/N” *is*, an average. You want to define the sqrt of the average uncertainty as the uncertainty. It’s idiotic in the extreme.

No one building a support beam believes that they can combine multiple 2″×4″s and wind up with an uncertainty at the end of u_avg/sqrt(N). At least no one whose civil and criminal liability is tied to the correct estimation of the total uncertainty in the beam.

Reply to  Tim Gorman
August 26, 2023 7:40 am

Fail *any* of these restrictions and the uncertainty goes up.

I still can’t help but wonder what path has led them to these bizarro notions—it can’t be just basic statistics, an intro text won’t say anything about measurement uncertainty.

Unskilled and Unaware is a very real psychological phenomenon, can this explain it?

Reply to  Tim Gorman
August 26, 2023 4:37 pm

If you have multiple measurements of the same thing using the same instrument under the exact same environmental conditions and the measurements are all independent, random, and form a Gaussian distribution then the uncertainty of the average goes down with larger sample sizes.

Fail *any* of these restrictions and the uncertainty goes up.

Piffle. None of those are “restrictions” on the statistical methods, and violating the assumptions does not automatically reverse the equations.

The measurements are probably *NOT* random and Gaussian since they are of different things.

You still have this odd idea that measuring different things somehow means they will not be random, whereas measuring the same thing using the same instrument will result in more randomness.

“Basic statistics. Which you fail at utterly.”

And yet you insist all statisticians fail to understand this basic statistics. Can you provide any reference or equation explaining how the uncertainty of the average goes “way up” each time you add an element?

And yet you keep saying that the total uncertainty divided by the number of elements is the uncertainty of the average.

No. I keep saying and you keep ignoring that the uncertainty of the mean is the uncertainty of the total divided by the number of elements. You still don’t seem to understand that the uncertainty of the total is not the same as the total uncertainty.

It is *NOT* to *any* extent at all! It only indicates how precisely you have located the average value. It tells you *NOTHING* about the uncertainty of that average.

And you continue to argue that knowing the precision of your average tells you nothing about the uncertainty of the average.

By all means say there may be additional uncertainty caused by biases, but that does not mean the precision tells you *NOTHING*.

You can only judge its accuracy by propagating the uncertainties of the data elements onto that average.

Which is what I keep doing when we talk about the measurement uncertainty of the average. But of course you won’t accept, or won’t understand, any of the equations that tell you how to do that.

And how does propagating the uncertainties tell you the accuracy – when as you say you don’t know what systematic errors are in play?

Whenever you divide by the number of elements you are finding an average value.

Divide what? If the thing you are dividing is not a sum of the values, what average are you finding? If you divide the hypotenuse of a right-angled triangle by two, are you finding the average length of the other two sides?

Remember, the equation is actually sqrt(total uncertainty/ N)

That’s not my equation. My equation would be (uncertainty of total) / N. Uncertainty of total, assuming all random uncertainties is sqrt(total sum of squares), and if all uncertainties are identically distributed, that reduces to sqrt(N) * (individual uncertainty). Hence my equation reduces to (individual uncertainty) / sqrt(N).

No one building a support beam believes that they can combine multiple 2″/4″‘s and wind up with an uncertainty at the end of u_avg/sqrt(N).

I’m glad to hear it. If you are combining by adding, your uncertainty should be u_avg * sqrt(N). But as you keep demonstrating, you can’t understand the difference between adding and averaging.
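The adding-versus-averaging distinction can be checked by simulation under the stated assumptions (100 independent, zero-mean Gaussian errors of SD 0.5). This is a sketch in Python rather than the thread's R, and the seed and trial count are arbitrary choices:

```python
import random
import statistics

random.seed(0)
N, u = 100, 0.5  # 100 readings per trial, each with an SD-0.5 Gaussian error

# Repeat the experiment many times and look at how the SUM and the MEAN
# of the errors spread out across trials.
sums, means = [], []
for _ in range(10_000):
    errors = [random.gauss(0.0, u) for _ in range(N)]
    sums.append(sum(errors))
    means.append(sum(errors) / N)

print(round(statistics.stdev(sums), 2))   # near u * sqrt(N) = 5.0
print(round(statistics.stdev(means), 3))  # near u / sqrt(N) = 0.05
```

The same simulated errors give both numbers: the spread of the sums grows like sqrt(N), while the spread of the means shrinks like 1/sqrt(N), which is the independence-assumption result both formulas describe.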

Reply to  Bellman
August 28, 2023 12:03 pm

“Piffle. None of those are “restrictions” on the statistical methods, and violating the assumptions does not automatically reverse the equations.”

You are simply unbelievable! The issue is *NOT* statistical methods, the issue is metrology! Metrology is the discipline of MEASUREMENT. It *uses* statistical methods as tools for analysis but those measurements *HAVE* to meet certain restrictions.

“You still have this odd idea that measuring different things somehow means they will not be random, whereas measuring the same thing using the same instrument will result in more randomness.”

Random does *NOT* mean Gaussian or even symmetrical!

The measurements of a combined herd of Shetland ponies and Quarter Horses can be taken at random. You will *still* not get a Gaussian or even symmetrical distribution. In such a case the mean is useless. It describes neither population, and neither does the standard deviation.

Temperature measurements taken from around the globe are EXACTLY like measuring the heights of a combined herd of Shetland ponies and Quarter Horses.

In such a case there is *NO* true value to be estimated. Just as the average of the heights of Shetlands and Quarter Horses does not give a “true value” of anything, a temperature data set taken from around the globe doesn’t represent a “true value” of anything either.

You know so LITTLE about measurement and statistics that somehow you think the average of anything is meaningful. You can’t even understand that combining winter and summer temperatures involves combining two different populations.

“No. I keep saying and you keep ignoring that the uncertainty of the mean is the uncertainty of the total divided by the number of elements.”

It is *NOT* the uncertainty of the mean.

It is the average uncertainty, not the uncertainty of the mean. The term “uncertainty of the mean” is the ultimate in being ambiguous. That term can either be speaking of the standard deviation of the sample means or the actual accuracy of the mean as propagated from the individual elements.

The average uncertainty is only useful in calculating the total uncertainty. In something like an iterative progression you just multiply the average uncertainty by the number of iterations. If you think about it, building a beam is nothing more than an iterative process of adding element after element.

If you already know the total uncertainty (needed to calculate the average), then exactly what does that average uncertainty provide you?

Think of it this way – I am given 100 2″×10″ boards whose AVERAGE uncertainty is 1″. I select one to build a header across a garage door. Can I use that average uncertainty to determine if it will reach the entire distance?

If I can’t then what good is it in knowing the “average uncertainty”?

Reply to  Tim Gorman
August 28, 2023 2:59 pm

“Average uncertainty” allows them to ignore and toss into the trash the instrumental uncertainties, which aids their goal of milli-Kelvin air temperature uncertainties.

Reply to  Bellman
August 26, 2023 12:10 pm

“But in fact what I say, at least in the abstract, is that the correct maths is the average uncertainty divided by root N, i.e. ±0.05°C.”

Read Taylor Section 5.7 really closely. See where he says all the X’s are identical and all the σₓ are the same. You don’t have that with a set of global stations.

Read the GUM Section 4 about experimental uncertainty by multiple measurements under repeatable conditions. Repeatable conditions around the globe?

Reply to  Jim Gorman
August 26, 2023 3:20 pm

You are wasting your time. He won’t study it. He’ll just say it’s wrong.

He can’t understand that dividing by a constant doesn’t change the distribution, it only shifts it around on the X-axis.

He’ll never understand the attached image from Taylor’s book. He’s a cherry-picker and not a student. He doesn’t study anything, he just picks things he thinks validate his assertions, but he has no real idea of whether those things actually validate his assertions or not.

[attached image: taylor_5_6.jpg]
Reply to  Tim Gorman
August 26, 2023 4:46 pm

Of course dividing or multiplying by a constant changes the distribution. I can’t see how you can even think it doesn’t.

It’s elementary logic. Divide all values in a distribution by two and the distance between each element and the mean halves. Hence the variance is divided by 4, and the standard deviation by 2.

Let me illustrate with a few lines of R.

> var(1:100)
[1] 841.6667
> var(1:100 / 2)
[1] 210.4167
> sd(1:100)
[1] 29.01149
> sd(1:100 / 2)
[1] 14.50575
Reply to  Bellman
August 28, 2023 12:24 pm

“Of course dividing or multiplying by a constant changes the distribution.”

Your reading skills are showing again. It simply doesn’t change the *uncertainty* of the distribution!

I can’t emphasize enough: YOU NEED TO STUDY TAYLOR AND DO *ALL* THE EXERCISES.

From Taylor: “As a first nontrivial example of error propagation, suppose we measure two independent quantities x and y and calculate their sum x+y. We suppose that the measurements x and y are normally distributed about their true values X and Y, with widths σ_x and σ_y as in Figures 5.16(a) and (b), and we will try to find the distribution of the calculated values of x + y.”

…… (after treatment)

“x + y = (x-X) + (y-Y) + (X+Y) (5.60)

“Here the first two terms are centered on zero, with widths σ_x and σ_y, by the result from step 1. Therefore, by the result just proved [Equation (5.59)], the sum of the first two terms is normally distributed with width sqrt( σ_x^2 + σ_y^2). The third term in (5.60) is a fixed number; therefore, by the result in step 1 again, it simply shifts the center of the distribution to (X+Y) but leaves the width unchanged. In other words, the values of (x+y) as given by (5.60) are normally distributed around (X+Y) with width sqrt( σ_x^2 + σ_y^2).”

Dividing by a constant does *NOT* change the uncertainty. The uncertainty is the sum of the uncertainties of the elements involved. The uncertainty of x/y, where y is a constant, is sqrt[ (u(x)/x)^2 + (u(y)/y)^2 ] and since u(y) = 0 it simplifies to u(x)/x.

It simply doesn’t matter how large or small “y” is. It doesn’t affect the uncertainty.
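The Taylor passage quoted above (widths combining as sqrt(σ_x^2 + σ_y^2), with the constant term only shifting the center) can be checked numerically. A minimal sketch in Python, with σ_x = 0.3, σ_y = 0.4 and the true values X, Y chosen purely for illustration:

```python
import random
import statistics

random.seed(1)

sx, sy = 0.3, 0.4   # widths of x and y; Taylor predicts sqrt(0.09 + 0.16) = 0.5 for x+y
X, Y = 10.0, 20.0   # true values (fixed constants)

# Draw many (x, y) pairs and record their sums.
sums = []
for _ in range(20_000):
    x = random.gauss(X, sx)
    y = random.gauss(Y, sy)
    sums.append(x + y)

print(round(statistics.mean(sums), 1))   # centered on X+Y = 30.0
print(round(statistics.stdev(sums), 2))  # near sqrt(sx**2 + sy**2) = 0.5
```

The simulation reproduces both halves of the quoted result: the center lands at X+Y, and the width is the quadrature sum of the two input widths, unchanged by the constant offset.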

Reply to  Jim Gorman
August 26, 2023 4:53 pm

We are talking about 100 thermometers with identical measurement uncertainties. I am not talking about the SEM or repeated measurements – just about what happens when you propagate random uncertainties using any of the established rules for propagating uncertainties.

If you want to calculate uncertainties for a global anomaly average that’s a different beast altogether. For a start measurement uncertainties are far less important than spatial uncertainties, you do not have a random distribution, you are dealing with anomalies rather than temperatures, and you are not simply averaging all the values.

Reply to  Bellman
August 26, 2023 5:55 pm

Nope, not going to dance with you.

Get out in front and show how you would do it. Show the references you use and the assumptions you have researched and if they are met.

Then the site can critique your method.

Reply to  Jim Gorman
August 26, 2023 6:19 pm

Not sure why you think I asked you to do anything.

I’ve said before I am not going to do an uncertainty analysis of any global data set – that’s way out of my pay grade. That doesn’t mean I can’t point out that uncertainty does not normally increase with sample size, and the uncertainty of a mean is not usually the mean of the uncertainty.

Reply to  Bellman
August 28, 2023 12:27 pm

“We are talking about 100 thermometers with identical measurement uncertainties. I am not talking about the SEM or repeated measurements – just about what happens when you propagate random uncertainties using any of the established rules for propagating uncertainties.”

You have no idea what you are talking about!

Where have you found 100 thermometers with identical measurement uncertainties?

You keep wanting to redirect the discussion to non-physical hypotheticals. Join the rest of us in the real world – and tell us how the thousands of thermometers used for the global average temperature all have the same measurement uncertainty.

Reply to  Tim Gorman
August 28, 2023 2:01 pm

Where have you found 100 thermometers with identical measurement uncertainties?

In your comment here.

If you add 100 independent, non-correlated temperature values together to calculate an average and each value has an uncertainty of +/- 0.5C then the resulting uncertainty is [ +/- 0.5 x sqrt(100)] = +/- 0.5 x 10 = +/- 5C.

Thus your average becomes useless in trying to identify differences in the tenths or hundredths digit.

You keep wanting to redirect the discussion to non-physical hypotheticals.

Because if you get the simple examples wrong, you will probably get the real world examples wrong as well.

Reply to  Tim Gorman
August 25, 2023 9:59 pm

The expression you want, Tim, is unschooled dilettante.

Reply to  Pat Frank
August 26, 2023 6:49 am

I’ll try to remember that expression but those are pretty big words. They may not stick! <grin>

Reply to  Bellman
August 25, 2023 9:58 pm

if you claim that the uncertainty of a mean increases with sample size

It can do if the errors are systematic.

Reply to  Pat Frank
August 26, 2023 6:45 am

Even systematic bias doesn’t have to be constant. Consider a measuring instrument whose test faces wear with each measurement, such as a physical measuring device monitoring the diameter of a wire being pulled through a die. As that wire passes through the die, the die wears and the diameter of the wire will therefore increase. As the wire passes through the measuring device (think of the faces on a micrometer) those faces wear also and get grooves in them. What happens then? The measuring device has to decrease the distance between the faces to keep contact with the wire. What does that cause? It looks like the wire diameter has gotten *smaller*. If the die and the measuring faces wear at the same rate it might appear that the wire diameter is constant. If they don’t then who knows which one will dominate? If the die wears faster the measuring device might catch that the wire diameter is increasing but it won’t show the right increase.

Bottom line? You can’t just always assume that systematic bias is constant. It can be a function of time, material, etc. Unless you can somehow create a mathematical function to account for this you can’t just counteract the systematic bias by adjusting the stated value. The bias will certainly not be random and is unlikely to be Gaussian and therefore won’t cancel out in the end.

It’s what the uncertainty interval is to be used for!

Reply to  Tim Gorman
August 26, 2023 7:50 am

Component drift and temperature tolerances—always specified by manufacturers as plus/minus, not plus or minus. And yes, individual components can go either direction. All you can do is form an interval which averaging cannot change.

Reply to  Pat Frank
August 26, 2023 7:44 am

“It can do if the errors are systematic.”

“Can do” is not the same as “always will”.

Sure a badly thought through large sample can give you a worse result than a well designed small sample. But that isn’t the point Tim makes and I’m disagreeing with, that even with random independent measurement uncertainties the uncertainty of the mean inevitably grows with sample size.

Reply to  Bellman
August 26, 2023 6:18 pm

even with random independent measurement uncertainties the uncertainty of the mean inevitably grows with sample size.

It diminishes, actually. With 1/sqrtN.

Reply to  Pat Frank
August 26, 2023 6:24 pm

Thanks.

Reply to  Pat Frank
August 27, 2023 5:21 pm

Interesting to note how many people disagree with this comment, compared to the numbers who attack me whenever I say the same thing.

It would be nice to think this means now Dr Frank has said it they realise they were wrong all this time, but somehow I suspect it will be quickly forgotten and we will be going through all the same arguments next time.

Reply to  Bellman
August 27, 2023 5:35 pm

Who denied that uncertainty due to random error diminishes as 1/sqrtN?

Reply to  Pat Frank
August 27, 2023 6:30 pm

Tim Gorman for one. I’ve been arguing with him about it for over 2 years, and he still insists that the uncertainty of the mean is the same as the uncertainty of a sum, i.e. multiplied by root N.

It’s what got me interested in the idea of uncertainty in the first place.

Reply to  Bellman
August 27, 2023 9:00 pm

I believe that Tim argues that measurement of different things violates the 1/sqrtN rule. A different argument.

Reply to  Pat Frank
August 28, 2023 1:34 pm

Yep.

This only highlights the fact that Bellman has *never* studied the subject at all. He lives with the meme that all uncertainty is random, Gaussian, and cancels. He screams that he doesn’t but everything he asserts shows that he does.

Taylor develops the 1/sqrt(N) in his section 5.7 on page 148. It really only applies:

“[Since] all of the measurements [are] of the same quantity x, their widths are all the same and are all equal to σ_x.”

In this case the partial derivative of each element is 1/N. So the quantity under the square root is [ N (σ_x^2/N^2) ] ==> σ_x/sqrt(N).
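The reduction in the step above, sqrt(N(σ_x^2/N^2)) = σ_x/sqrt(N), can be confirmed numerically. A minimal Python check, with σ_x = 0.5 chosen arbitrarily:

```python
import math

# Taylor Section 5.7 reduction: with N equal widths s and each partial
# derivative of the mean equal to 1/N, the combined width is
# sqrt(N * (s**2 / N**2)), which should equal s / sqrt(N) for any N.
s = 0.5
for n in (2, 10, 100, 1000):
    combined = math.sqrt(n * (s**2 / n**2))
    assert math.isclose(combined, s / math.sqrt(n))
    print(n, round(combined, 4))
```

This only verifies the algebra; whether the preconditions (same quantity, same width for every measurement) hold for a given data set is exactly what the thread is disputing.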

For temperatures in the real world the requirements of measuring the same quantity and having the same uncertainties for each measurement are violated totally and fully. It’s actually violated even for the so-called daily average temperature, since two different things are being measured and the temperature variance (i.e. the SD) is different during the daytime than it is at night.

This is the typical mistake made by statisticians and computer programmers who have only been trained in classical statistics, which ignores the fact that data points are “stated value +/- measurement uncertainty” and not just “stated value”. Even my youngest brother, a quality control engineer in an ammunition/explosive plant for many years, had to learn this. He learned it from the senior quality engineers and from specific training in measurement techniques.

It’s not obvious that very many in climate science are even aware of metrology as a discipline. You had to learn it in your career as a chemist. I suspect much of the learning was on-the-job experience rather than through training at university.

Reply to  Tim Gorman
August 28, 2023 2:24 pm

Taylor develops the 1/sqrt(N) in his section 5.7 on page 148

You know full well, as we’ve been over it enough times, that we are talking about the general rules for propagating errors/uncertainties. The fact that you can use this to improve the measurement of one thing by averaging multiple measurements does not mean the same rules do not apply to measuring different things. It’s simply an application of the rules for adding and dividing, which in turn just gives you the same result as the CLT, when applied just to the measurement uncertainties.

In this case the partial derivative of each element is 1/N

And what would it be if the measurements were of different things?

Reply to  Bellman
August 28, 2023 3:22 pm

The “general” rules you are speaking of are ONLY for multiple measurements of the same thing in the same environment using the same instrument.

*YOU* want to apply it to everything. That requires adherence to the meme that all measurement uncertainty is random and cancels.

That is *NOT* the general rule for propagating uncertainty.

“does not mean the same rules do not apply to measuring different things”

It is EXACTLY what it means. It simply doesn’t apply to single measurements of different things under different environments using different instruments.

You live in an unreal world – one where the average is ALWAYS a true value. For you, the average of 5 is a *true* value for two different units of 4 and 6. It isn’t. It never has been and it never will be, not in the REAL world.

A true value *must* exist. If it didn’t then even a statistical analysis of multiple measurements of the same thing under the same environment using the same instrument couldn’t be considered to provide one.

If the true value does *NOT* exist then it doesn’t exist. No amount of math can *make* it exist.

A global average temperature doesn’t exist. It can’t be identified physically. A true value for it can only be calculated by assuming all measurement uncertainty is random, Gaussian, and cancels. In addition all measurements have to have the same variance, i.e. σ_x has to be the same for all measurements.

The fact that you simply cannot accept these physical limitation only proves that you live in “statistical world” and not in the real world with the rest of us.

Reply to  Tim Gorman
August 28, 2023 4:39 pm

The “general” rules you are speaking of are ONLY for multiple measurements of the same thing in the same environment using the same instrument.

Completely wrong as you should have understood by now.

How would you be able to use the general rule to find the uncertainty in the combined length of two boards of different length? Or the area of a sheet from its width and height?

You claim to have done all the exercises in Taylor. How did you find an answer to 3.21, for example, if you could only use the rules for identical things?

That requires adherence to the meme that all measurement uncertainty is random and cancels.

That’s the assumption being made here. That’s why we both said all the uncertainties were random.

Reply to  Bellman
August 28, 2023 4:56 pm

WTF is a “random” uncertainty?

Reply to  Bellman
August 28, 2023 5:02 pm

“How would you be able to use the general rule to find the uncertainty in the combined length of two boards of different length? Or the area of a sheet from its width and height?”

The general rule you speak of is in the GUM. If you would study the GUM instead of doing your usual cherry-picking, you would see that the “general rule” assumes *NO* systematic uncertainty. It only applies for random uncertainty. And it only applies when you have multiple measurements of the same thing under the same environment using the same instrument.

“4.2.1 In most cases, the best available estimate of the expectation or expected value μq of a quantity q that varies randomly [a random variable (C.2.2)], and for which n independent observations qk have been obtained under the same conditions of measurement (see B.2.15), is the arithmetic mean or average q (C.2.19) of the n observations:”

How many times does this restriction have to be given to you before it finally sinks in?

a quantity q that varies randomly [a random variable (C.2.2)], and for which n independent observations qk have been obtained under the same conditions of measurement

Independent observations of the same measurand under the same conditions of measurements.

Can you read at all?

Reply to  Tim Gorman
August 28, 2023 5:16 pm

How many times does this restriction have to be given to you before it finally sinks in?

Zillions.

Can you read at all?

Nope.

Reply to  Tim Gorman
August 28, 2023 6:49 pm

The general rule you speak of is in the GUM.

Yes, and as we’ve been over numerous times, it applies to measurements of different things, just as the specific rules derived from it do. The results are the same, so remind me how they only apply when combining measurements of the same thing.

you would see that the “general rule” assumes *NO* systematic uncertainty.

Which is irrelevant as we are talking about only random uncertainty (or independent, to be more correct). And not correct, as they also include equations for correlated measurements.

And it only applies when you have multiple measurements of the same thing under the same environment using the same instrument.

If you would actually read the section rather than seeing what you want to see, you would realise this is completely false. The general equation is for combining any number of different things.

“4.2.1 In most cases, the best available estimate of the expectation or expected value μq of a quantity q that varies randomly [a random variable (C.2.2)], and for which n independent observations qk have been obtained under the same conditions of measurement (see B.2.15), is the arithmetic mean or average q (C.2.19) of the n observations:”

Where do you see any restrictions in that passage? It is not telling you how to use equation 10, it is telling you how to get the best estimate of a quantity by averaging multiple measurements.

But remember the question you were trying to answer was what is the equation for combining different things. You said the equation I was after was in the GUM, but then immediately said it could not be used for combining different things. So make your mind up: is it the equation I’m looking for, and if it isn’t, where is the equation?

Reply to  Bellman
August 29, 2023 4:59 am

How would you be able use the general rule to find the uncertainty in the combined length of two boards of different length? Or the area of a sheet from it’s width and height.”

You find the total uncertainty by adding the individual uncertainties. You don’t assume a random, Gaussian distribution where the measurement uncertainty cancels and you wind up with a “true value”.

“You claim to have done all the exercises in Taylor. How did you find an answer to 3.21, for example, if you could only use the rules for identical things?”

You have ONE measurement for each component, distance and time. The fractional uncertainties add. You can’t even work out this simple exercise?

“That’s the assumption being made here. That’s why we both said all the uncertainties were random.”

You are lost in the weeds again. I *never* assume all uncertainties are random in the real world.

Reply to  Tim Gorman
August 29, 2023 10:52 am

“You find the total uncertainty by adding the individual uncertainties.”

But you claim it doesn’t work for measurements of different things.

You don’t assume a random, Gaussian distribution where the measurement uncertainty cancels and you wind up with a “true value”.

This is just becoming a list of random word associations at this point. But if you think there is a correlation between the measurements, use the general equation for correlated inputs.

You can’t even work out this simple exercise?

I’m trying to resolve your inconsistencies. You say the equations for propagating uncertainties do not apply when the measurements are of different things, yet then use them for two different things in this exercise.

Reply to  Bellman
August 30, 2023 6:33 am

Once again, you only show you *NEVER* study anything but, instead, just cherry-pick things you can throw against the wall.

What does the GUM define v = d/t as?

Reply to  Tim Gorman
August 28, 2023 4:07 pm

I did some metrology in labs, Tim, especially in Analytical Chemistry and an upper-division course in Instrumental Methods. Physics lab as well.

I just went and checked the major requirement at SF State University, my alma mater through the MS. Darned if they still have those classes. It looks like it’s still a strong major.

Then, of course, graduate lab work and the hammer dropped with the question, ‘how do you know?’ and ‘how do I defend this statement of results?’

Never a specific course in Metrology, though. That would have been useful, though at the time I’d probably not have appreciated how central it was.

Reply to  Pat Frank
August 28, 2023 5:04 pm

That’s where I learned. In labs doing practical work with real world components. And figuring out why 8 different students got 8 different answers from 8 different lab setups.

Reply to  Pat Frank
August 28, 2023 2:18 pm

I believe that Tim argues that measurement of different things violates the 1/sqrtN rule. A different argument.

The comment you were responding to was talking about the measurement of different things:

Sure a badly thought through large sample can give you a worse result than a well designed small sample. But that isn’t the point Tim makes and I’m disagreeing with, that even with random independent measurement uncertainties the uncertainty of the mean inevitably grows with sample size.

Measurement of a sample of things.

So, do you agree with Tim that the rules for propagating uncertainties only apply when measuring the same thing multiple times?

To be clear, this is only talking about the measurement uncertainty, which I think is mostly irrelevant when the measurements are of a sample. The real uncertainty is given by the standard deviation / √N.

Reply to  Bellman
August 28, 2023 3:08 pm

So, do you agree with Tim that the rules for propagating uncertainties only apply when measuring the same thing multiple times?”

STOP MAKING UP CRAP. The issue is how you propagate the uncertainties. NOT whether you do or not!

The rules for propagating uncertainty when you have multiple measurements of the same thing in the same environment *IS* different than the rules of how you propagate multiple single measurements of different things.

Something which *YOU* refuse to accept. You’ve been given the quotes from Taylor, Bevington, and Possolo MULTIPLE TIMES that measurements with systematic uncertainties, especially when the systematic uncertainties are different, ARE NOT AMENABLE TO STATISTICAL ANALYSIS.

” The real uncertainty is given by the standard deviation / √N.”

This ONLY applies when you have multiple measurements of the same thing with the same sigma. I’ve given you the quote from Taylor laying this out and, apparently, you just blew it off. Just as you usually do when you’ve been shown to be wrong!

Reply to  Tim Gorman
August 28, 2023 4:49 pm

The rules for propagating uncertainty when you have multiple measurements of the same thing in the same environment *IS* different than the rules of how you propagate multiple single measurements of different things

So what are they. Where in all the books you insist I read do they spell out these different rules for propagating the uncertainties of different things?

Reply to  Bellman
August 28, 2023 5:07 pm

Chapters 2 & 3 in Taylor. Bevington doesn’t really get into it much. He just states that where systematic uncertainty exists it is not usually amenable to statistical analysis and moves directly into statistically analyzing random uncertainty.

IF YOU HAD ACTUALLY STUDIED EITHER ONE OF THESE YOU WOULD KNOW THIS!

Cherry-picking simply doesn’t allow one to actually learn anything. You are a prime example!

Reply to  Tim Gorman
August 28, 2023 6:31 pm

I have read them, but maybe I have a blind spot, so rather than quoting a couple of chapters why don’t you provide an actual equation number, just to make sure we are both reading it correctly.

Reply to  Bellman
August 28, 2023 4:27 pm

So, do you agree with Tim that the rules for propagating uncertainties only apply when measuring the same thing multiple times?

If the 1/sqrtN rule is to be applied, then yes.

When I was doing titration experiments using a microliter syringe and monitoring with a spectrometer, I’d have to propagate both the syringe uncertainty and the spectrometer resolution into the result. Uncertainties of a categorically orthogonal kind.

This can be done by propagating them as fractional experimental uncertainties.

Generally, the results of experiment violate the closure rules of statistics. But one wants a reasonable (and defensible) estimate of reliability. Hence the approximation of, rather than strict adherence to, statistical hard-and-fast niceties.

I’ve always liked, and agree with, Einstein’s take on the studied adjustments of method scientists must make in order to proceed into the unknown.

[The scientist] therefore must appear to the systematic epistemologist as a type of unscrupulous opportunist: he appears as realist insofar as he seeks to describe a world independent of the acts of perception; as idealist insofar as he looks upon the concepts and theories as free inventions of the human spirit (not logically derivable from what is empirically given); as positivist insofar as he considers his concepts and theories justified only to the extent to which they furnish a logical representation of relations among sensory experiences. He may even appear as Platonist or Pythagorean insofar as he considers the viewpoint of logical simplicity as an indispensable and effective tool of his research.

Reply to  Pat Frank
August 28, 2023 5:09 pm

Well said. And there isn’t an ounce of doubt that it went right over bellman’s head!

Reply to  Pat Frank
August 28, 2023 5:18 pm

Next Bellman will tell that he thinks Einstein is wrong…

Reply to  karlomonte
August 28, 2023 5:28 pm

He was, on a lot of things.

Reply to  Bellman
August 28, 2023 6:51 pm

I KNEW IT!~!~!~!~!~!

Reply to  Pat Frank
August 28, 2023 5:46 pm

I’d have to propagate both the syringe uncertainty and the spectrometer resolution into the result

Fascinating, I’m sure, but has nothing to do with the question of the measurement uncertainty of a mean of different things.

Let me make it clearer. The general rule for adding different things with possibly different uncertainties, where all the uncertainties are independent, is to take the positive square root of the sum of the squares of the individual uncertainties. If the uncertainties are all of the same size, this reduces to √N * u. (Not coincidentally, this is also the rule for the sum of different values in a random sample, but there using SD rather than uncertainty: √N * σ.)

I think Tim agrees with this, as it’s what he claimed initially.

Now I say that if you take the average of these N things, you will have to divide the uncertainty by N, which leaves you with u / √N. Again this is only the measurement uncertainty. You could say this is the uncertainty if you want an exact average of the N different things. Again, this is similar to what happens to the CLT when you take an average rather than a sum, σ / √N.

But Tim disagrees, and says you never divide the uncertainty and so the uncertainty of the average is the same as the uncertainty of the sum, √N * u.

I think this leads to an absurd conclusion, where the measurement uncertainty of the average is much greater than any individual measurement uncertainty.

Now, he does sometimes switch the argument to systematic errors – but that still wouldn’t explain how the uncertainty could grow. If all the uncertainties were completely dependent, the uncertainty of the average would just be the uncertainty of the individual, or the average uncertainty.

As I say, this is an argument that has been going round in circles for well over 2 years, and has created a rut so deep neither of us can get out. But I still can’t find the logic in saying that uncertainty will reduce in the average of measurements of the same thing, but not in the average of different things – given that the logic is based on equations that are generally used for combining different things.
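The two competing claims (√N * u for the sum, u / √N for the mean, under the stated assumption of independent random uncertainties) can be checked numerically. A minimal Python sketch with hypothetical true values and uncertainties:

```python
import random
import statistics

random.seed(1)
N, u = 100, 0.5                                  # 100 different things, each measured once with uncertainty u
true_vals = [15.0 + 0.1 * i for i in range(N)]   # hypothetical true values, all different

sums, means = [], []
for _ in range(10_000):
    # independent random Gaussian errors, one per "thing"
    measured = [t + random.gauss(0.0, u) for t in true_vals]
    sums.append(sum(measured))
    means.append(sum(measured) / N)

sd_sum = statistics.stdev(sums)    # spread of the sums:  ~ u * sqrt(N) = 5.0
sd_mean = statistics.stdev(means)  # spread of the means: ~ u / sqrt(N) = 0.05
```

Under those idealized assumptions the simulation reproduces both scalings; the unresolved dispute in the thread is whether real measurements of different things, with systematic errors, ever satisfy the assumptions.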

bdgwx
Reply to  Pat Frank
August 29, 2023 7:33 am

Pat Frank: If the 1/sqrtN rule is to be applied, then yes.

NIST applies the 1/sqrt(N) rule to measurements of different things in TN 1900.

Reply to  bdgwx
August 29, 2023 7:47 am

And ends up with uncertainties of degrees K, not milli-K!

Reply to  karlomonte
August 30, 2023 6:20 am

Yep!

Reply to  Tim Gorman
August 30, 2023 7:25 am

Thankfully he seems to have given up spamming the link to the NIST Uncertainty Machine.

Reply to  bdgwx
August 29, 2023 9:24 am

Cherry picking again.

NIST specifies very clearly that the measurements used meet repeatable conditions of measurement.

GUM
B.2.15 repeatability (of results of measurements)
closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement

NOTE 1 These conditions are called repeatability conditions.

NOTE 2 Repeatability conditions include: — the same measurement procedure
— the same observer
— the same measuring instrument, used under the same conditions
— the same location
— repetition over a short period of time.

That is a far cry from averaging stations that do not meet the repeatable conditions requirement.

I’ll bet you next try to tell us that the anomaly calculated from that station doesn’t carry an expanded experimental standard deviation of the mean of ±1.8 °C.

Or that the value of 1.8°C doesn’t pertain when averaging anomaly values.
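The TN 1900-style calculation being discussed (u = s/√n from the scatter of the daily values, expanded with a Student-t factor) can be sketched with hypothetical readings (these are NOT the NIST data), to show why the expanded uncertainty comes out in whole degrees rather than milli-kelvin when day-to-day scatter is a few °C:

```python
import statistics

# Hypothetical daily Tmax readings (deg C) for one station-month; illustrative only.
tmax = [22.1, 25.4, 30.2, 24.0, 28.8, 21.5, 33.0, 26.7, 23.3, 29.5, 25.0,
        31.2, 22.8, 27.6, 24.5, 30.8, 23.9, 28.1, 26.2, 32.4, 21.9, 29.0]

n = len(tmax)                  # 22 days
m = statistics.mean(tmax)      # monthly mean Tmax
s = statistics.stdev(tmax)     # day-to-day scatter, a few deg C
u = s / n ** 0.5               # standard uncertainty of the mean, s/sqrt(n)
t_95 = 2.080                   # Student-t factor, 95 %, 21 degrees of freedom
U = t_95 * u                   # expanded uncertainty: on the order of a degree
```

With realistic scatter the expanded uncertainty U lands in degrees, not milli-K, which is the point being made above; whether the model's assumptions (same measurand, negligible systematic error) hold for field stations is the separate argument below.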

Reply to  Jim Gorman
August 30, 2023 6:21 am

Yep! That’s bdgwx, bellman, and most of climate science.

Reply to  bdgwx
August 30, 2023 6:19 am

Possolo assumes in TN1900 that Tmax IS THE SAME THING MEASURED ON DIFFERENT DAYS UNDER THE SAME ENVIRONMENTAL CONDITIONS USING THE SAME DEVICE.

He assumes systematic uncertainty is negligible. He assumes the measuring environment is the same for each day. He assumes the measuring device calibration doesn’t change from day to day.

He can therefore use the variation in the stated values to calculate a standard deviation and use that to calculate the uncertainty.

In other words, he has built a TEACHING example, not a real world example.

  1. The weather can change from day to day – i.e. not the same environment.
  2. Measurement stations in the field typically *do* have systematic uncertainty, from the microclimate they exist in if nothing else.
  3. Measuring station calibration *can* change over time, especially in a month. It can change from external environment changes if nothing else, prevailing wind direction, insect infestation, dirt contamination, etc.

TN1900 simply doesn’t directly apply to the real world and especially not to a “global average temperature”.

The fact that you can’t or WON’T recognize that makes you a cult member, not a scientist.

Reply to  Bellman
August 28, 2023 1:12 pm

Maybe you can finally understand the attached.

The standard deviation of the mean is 0.69. It’s the spread of the shots around the average, u(mean). You keep calling this the uncertainty of the mean. It isn’t. It’s a measure of the precision of your shooting – which is totally different than the accuracy of your shooting. If the standard deviation of the mean was zero, i.e. all shots into one hole, then you would be *very* precise but also *very* inaccurate.

The total uncertainty is 13. This is what you want to minimize. You want all shots into the bullseye – i.e. a total uncertainty of zero. That would also mean your precision would be very high.

The average uncertainty is 2.2. That really doesn’t tell you much other than how far you need to move your sight to get into the bullseye. But even that won’t help if the standard deviation of the mean is caused by jerking the trigger, poor breathing, bad ammo, random wind, etc.

uncertainty.jpg
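The precision-versus-accuracy distinction in this comment can be illustrated numerically with a made-up shot group (hypothetical numbers, not taken from the attached figure): a tight cluster centered well away from the bullseye has a small standard deviation but a large RMSE.

```python
import math
import statistics

# Hypothetical shot positions along one axis; bullseye is at 0.
# The group is tight (precise) but centered ~12 units off (inaccurate).
shots = [11.4, 11.8, 12.1, 12.3, 11.9, 12.5, 12.0, 11.7, 12.2, 12.1]

spread = statistics.stdev(shots)   # precision: small (~0.3)
bias = statistics.mean(shots)      # offset of the group center from the bullseye: 12.0
rmse = math.sqrt(sum(x * x for x in shots) / len(shots))  # accuracy: dominated by the bias
```

The spread (precision) says nothing about the bias (accuracy); only the RMSE about the bullseye captures both.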
Reply to  Tim Gorman
August 28, 2023 2:27 pm

Which has nothing to do with the point made – which was what happens when all the uncertainties are random and independent.

Reply to  Bellman
August 28, 2023 3:04 pm

And how often is this satisfied, LoopholeMan?

Reply to  Bellman
August 28, 2023 3:11 pm

Each and every one of those shots is random and independent. Do you think they are from one shotgun blast?

Not only that they are single measurements of DIFFERENT things. The environment and instruments for each one are different. Different wind, different ammo, different hold for the rifle, different breathing, different trigger pull, etc.

You simply don’t live in the real world. It looks like it is impossible for you.

Reply to  Tim Gorman
August 28, 2023 4:46 pm

They are not independent. There’s a systematic error.

Reply to  Bellman
August 27, 2023 5:48 pm

LoopholeMan has found his loophole!

Hurrah!

Reply to  Pat Frank
August 29, 2023 2:10 pm

Only if you are measuring the same thing multiple times. If your measurements are of different things there is no guarantee that the distribution of measurements is random, Gaussian, and coalesces on a true value.

Reply to  Bellman
August 25, 2023 5:09 pm

Nobody says that, certainly no climate scientist you’ve ever quoted.”

You don’t have to say it. Every time you say the average uncertainty is the uncertainty of the average or that the RMSE between data points and the trend line is the uncertainty of the trend line YOU DEMONSTRATE IT.

You can’t get away from it. You can put forth all the disclaimers you want but every time you do anything it just comes shining through!

“The only person I’ve seen saying that is Dr Patrick Frank. You know, when he’s claiming the uncertainty in a month or a year of monthly values is equal to the average uncertainty of the daily values. We went over this in depth the last time his instrument uncertainty paper was discussed.”

You obviously don’t understand what Pat is saying but that doesn’t stop you at all does it?

Apparently the word ANNUAL means as little to you as it does to bdgwx and AlanJ. You *all* fail dimensional analysis. This has been pointed out to you multiple times and yet you continue to fail in learning about it.

The *average* uncertainty of an LIG instrument is *NOT* the uncertainty of any specific instrument, IT IS AN AVERAGE. But when combining the readings of multiple instruments the uncertainty grows with each inclusion of another instrument and can be calculated by applying the average uncertainty to each instrument.

The average uncertainty is not the uncertainty of the average. The uncertainty of the average is the combined uncertainty of the elements making up the average. Again, you’ve never ever built anything that depended on proper propagation of uncertainty. And yet here you are spouting nonsense about propagation of uncertainty. Good thing no one ever has to drive on a bridge you designed! The bridge would probably end up 2′ short of the piling on the far end!

As a first step and depending on the circumstances yes. But it certainly isn’t the only uncertainty in a global anomaly estimate.”

The SEM is *NOT* the uncertainty of anything other than how close you are to the data set average. It’s a measure of sampling error, not uncertainty!

“It is not. But to a large extent the same statistics are used.”

NO, YOU DON’T USE THE SAME STATISTICS! You can’t! You’ve *never* understood that and likely never will. The average height of *A* quarter horse determined by multiple measurements is *NOT* the average height of a corral full of quarter horses as determined by multiple single measurements of each horse. We’ve been down this road over and over. You can measure the size of “A” man multiple times and buy him a shirt that fits. You can’t measure the height of all men and order “A” shirt determined by those multiple measurements and expect it to fit all men.

While the term “average” is used in both cases THEY ARE NOT THE SAME STATISTICS. It seems that statisticians have a real hard time understanding that. I don’t know why! It’s probably because they never look at the variance of the data they use to calculate the average values. The variance of multiple measurements of the same thing will usually be much smaller than the variance of multiple single measurements of different things. That’s the problem with the global average temperature. The variance of the data is never used, propagated, or even mentioned!

“You can use the Central Limit Theorem both for a random sample of different things “

You absolutely refuse to accept the limitations and restrictions laid out in all texts on when the central limit theorem can be used. At best it only helps with finding the average value. It does *NOT* tell you the variance or uncertainty of anything. Can you even state the restrictions on the use of the CLT? You’ve been given them often enough.

Your main problem here is you never understand the difference between adding random variables, and adding random variables then dividing by a constant”

I understand variance perfectly. When you combine winter temps with summer temps, each with different variance, you can’t just willy-nilly do the combining without regard to the different variances. Yet climate science does. And so do you!

When it comes to uncertainty there is no dividing by a constant. That does nothing but scale the uncertainty down to something you *think* it should be. The uncertainty of a constant is ZERO whether it is added, subtracted, multiplied, or divided into something. The uncertainty of a beam made up of multiple 2″x4″ boards is *NOT* the total uncertainty divided by the number of boards. That can easily leave the beam too short for the span! Somehow you can’t seem to grasp that simple concept!

It goes right back to you believing the average uncertainty is the uncertainty of the average. You say you don’t believe that but it just comes shining through in everything you do!

Correct, variance is not a measure of the uncertainty of an average or any random variable. It isn’t a direct measure of anything really.”

You aren’t even a very good statistician! You measure 100 things and get 10 each from 1 to 10. 10 ones, 10 two’s, etc.

Now on the next go-round say you get 50 1’s and 50 10’s. The variance for the first one is 8.33 and for the second one is 20.45. The average is the same, 5.5. Now which average has the most uncertainty?
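Both data sets described here are easy to construct, and Python's statistics module reproduces the quoted figures (as sample variances):

```python
import statistics

# Ten each of the values 1..10, versus fifty 1s and fifty 10s.
a = [v for v in range(1, 11) for _ in range(10)]
b = [1] * 50 + [10] * 50

mean_a, mean_b = statistics.mean(a), statistics.mean(b)  # both 5.5
var_a = statistics.variance(a)   # sample variance ~8.33
var_b = statistics.variance(b)   # sample variance ~20.45
```

Same mean, very different spread: the variance (equivalently the standard deviation) is what distinguishes the two cases.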

Reply to  Tim Gorman
August 25, 2023 5:39 pm

You don’t have to say it.

Of course I don’t, because whatever I say you’ll ignore it and make up some more lies about me. Case in point –

Every time you say the average uncertainty is the uncertainty of the average or that the RMSE between data points and the tend line is the uncertainty of the trend line…

I’ve not said any of that. If I have said anything like that I was wrong. Neither of those things is remotely correct. But again it’s irrelevant because you will just keep lying.

You obviously don’t understand what Pat is saying but that doesn’t stop you at all does it?
Apparently the word ANNUAL means as little to you as it does to bdgwx and AlanJ.

So when he says the annual uncertainty of an instrument is given by taking the monthly uncertainty, squaring it, multiplying by 12, dividing by 12 and taking the square root, he isn’t going round the houses to say the uncertainty of an annual average is just the average monthly uncertainty? And again when he calculates the monthly uncertainty to be exactly the same as the daily uncertainty, that’s just a coincidence. And when he says the uncertainty is calculated using RMS, the M doesn’t stand for mean?

But when combining the readings of multiple instruments the uncertainty grows with each inclusion of another instrument and can be calculated by applying the average uncertainty to each instrument.

Yet that doesn’t happen when Frank does it. His annual uncertainty for thousands of instruments ends up exactly the same as for one instrument. Not thousands of times greater.

If I’m wrong and have misunderstood one of his obscure arguments, please point to the part of his paper that tells you that every instrument grows the uncertainty of the mean.

Reply to  Bellman
August 25, 2023 6:17 pm

Continued.

The SEM is *NOT* the uncertainty of anything other than how close you are to the data set average.

It’s so strange that your definition of uncertainty doesn’t allow you to say that knowing how close you are to the average tells you anything about the uncertainty.

NO, YOU DON’T USE THE SAME STATISTICS!”

Well, now you’ve written it in block capitals, I see I must be wrong.

The average height of *A” quarter horse determined by multiple measurements is *NOT* the average height of a corral full of quarter horses as determined by multiple single measurements of each horse.

Indeed. Why do you always try to distract with a claim that has nothing to do with what I’m saying. The height of one horse is not the same as the average of all horses. But in both cases you use the same statistics to get the estimate, and the uncertainty of that estimate is in part dependent on the SEM or the SDOM or whatever you want to call it.

While the term “average” is used in both cases THEY ARE NOT THE SAME STATISTICS. It seems that statisticians have a real hard time understanding that.

Maybe the fact that the profession that studies statistics has a hard time understanding your assertions should give you pause for thought.

I don’t know why! It’s probably because they never look at the variance of the data they use to calculate the average values.

You keep whining about people using SEM, but then claim they never look at the variance. How does that work? You need to know the SD to work out the SEM, if you know the SD you know the variance.

The variance of multiple measurements of the same thing will usually be much smaller than the variance of multiple single measurements of different things.

Something I’ve been trying to explain to you for years. You keep wanting to talk about the measurement uncertainty of a mean, and I keep saying that you want to look at the SEM, and it will be much bigger. The uncertainty of a sample mean is caused by the randomness of the sampling. The greater the variance in the population the greater the uncertainty.

You absolutely refuse to accept the limitations and restrictions laid out in all texts on when the central limit theorem can be used.”

I absolutely do not. But keep arguing with the voices in your head.

At best it only helps with finding the average value. It does *NOT* tell you the variance or uncertainty of anything.

It tells you the standard deviation of the sample mean. Most people realize that the smaller this is the more certainty you have that your sampling mean is close to the actual mean.

Can you even state the restrictions on the use of the CLT?

For the standard CLT the main requirement is that you are dealing with independent identically distributed random variables. Other variations allow for weaker assumptions.

I understand variance perfectly.

Of course you do.

When you combine winter temps with summer temps, each with different variance, you can’t just willy-nilly do the combining without regard to the different variances.

It’s just a pity you never seem to be able to apply them properly. To demonstrate my point, you have just switched from talking about adding random variables to combining them. In this case winter and summer temps. How are you combining them? By adding, or are you taking an average?

And what do their different variances have to do with combining random variables? You said correctly that when you add random variables you add their variances. Nothing about them needing to have the same variance.

When it comes to uncertainty there is no dividing by a constant.

So you wish to believe. And I can’t help you see beyond your religious convictions. But that doesn’t mean you are right. We were talking about combining random variables, not uncertainty. If you add two random variables their variances add; if you add two random variables and divide by 2 you have to add the variances and divide by 4. I don’t care how strong your delusion is, that is the correct and easily demonstrated equation.

That does nothing but scale the uncertainty down to something you *think* it should be.

And scale it to what I can see it should be. I’ve asked you to demonstrate it to yourself, but as usual your mind won’t accept any experiment that might challenge your religion. But for old times’ sake, consider rolling a pair of dice and adding the result. Repeat a number of times and look at the variance of your results. Then repeat but take the average of the two dice each time. Is the variance of the average the same as the variance of the sum?
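The proposed dice experiment takes only a few lines to run; a sketch:

```python
import random
import statistics

random.seed(42)
sums, avgs = [], []
for _ in range(20_000):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    sums.append(d1 + d2)          # sum of the pair
    avgs.append((d1 + d2) / 2)    # average of the pair

var_sum = statistics.variance(sums)  # ~35/6 ~ 5.83 (twice a single die's 35/12)
var_avg = statistics.variance(avgs)  # = var_sum / 4 ~ 1.46
```

Dividing the sum by 2 divides the variance by 4, which is the point being argued: the variance of the average is not the variance of the sum.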

The uncertainty of a beam made up of multiple 2″x4″ boards is *NOT* the total uncertainty divided by the number of boards.

Do you never notice how you keep changing from an average to a sum?

It goes right back to you believing the average uncertainty is the uncertainty of the average.

Lying never helps your argument.

Reply to  Bellman
August 25, 2023 6:35 pm

continued.

You aren’t even a very good statistician!

I know I’m not, just a little better than you.

You measure 100 things and get 10 each from 1 to 10. 10 ones, 10 two’s, etc. Now on the next go-round say you get 50 1’s and 50 10’s. The variance for the first one is 8.33. and for the second one is 20.45. The average is the same, 5.5. Now which average has the most uncertainty?

Missing my point a bit. Variance is just the square of the standard deviation. The larger the standard deviation the larger the variance. But variance isn’t a useful measure. It doesn’t directly tell you the amount of variation in the sample, whereas standard deviation does. Depending on your units the variance of a sample might be much smaller or larger than the variation in the sample, and is measured in square units that make no sense.

Of course the sample with the larger variance has the larger standard deviation, and hence the larger SEM. But you need to take the SD to get a meaningful figure. Say your boards are measured in meters. What does a variance of 20.45 m² tell you about the variation in the lengths, given that the boards only vary by 9 m? By contrast a standard deviation of 4.52 m gives a better sense of the spread – half are 4.5 m smaller than the average, half 4.5 m bigger.

Reply to  Bellman
August 26, 2023 2:32 am

Wow, an epic bellcurvewhinerman three-part rant.

Did you break the keyboard?

Reply to  karlomonte
August 26, 2023 4:54 am

Oh look. The troll who posts 100 individual comments a day is now whining that I broke a lengthy response down into 3 separate comments.

Reply to  Bellman
August 26, 2023 7:51 am

Why do you troll Pat Frank and Christopher Monckton with your non-physical nonsense?

Hypocrite.

Reply to  Bellman
August 26, 2023 4:13 am

Missing my point a bit. Variance is just the square of the standard deviation. The larger the standard deviation the larger the variance. But variance isn’t a useful measure.”

If variance isn’t a useful measure then neither is standard deviation since σ is a direct derivation from variance. Yet σ is what the GUM describes as a measure of the uncertainty in a set of data.

 It doesn’t directly tell you the amount of variation in the sample, whereas standard deviation does.”

You are continuing your willful ignorance.

There is no direct method of calculating standard deviation. First you find the variance! Then you calculate the standard deviation. Variance *IS* a measure of the variability in the data. It tells you the degree of spread in your data set. The greater the spread the larger the variance is.

You continue spouting garbage. And you expect us to believe it? You simply do not know as much about statistics as I do!

“Depending on your units the variance of a sample might be much smaller or larger than the variation in the sample, and is measured in square units that make no sense.”

Since the standard deviation is the square toot of the variance how can the variance be less than the standard deviation?

You are babbling. Put down the bottle!

“But you need to take the SD to get a meaningful figure.”

No, you don’t. The spread of the values in the data set, i.e. the variance, is a direct indicator of the uncertainty in measuring the measurand.

You simply don’t understand metrology at all. You *really* need to stop cherry-picking things and actually study the subject. Do the examples and figure out why you never get the correct answer!

Reply to  Tim Gorman
August 26, 2023 5:12 am

If variance isn’t a useful measure then neither is standard deviation since σ is a direct derivation from variance.

When I said you were missing the point, maybe it would have helped you to try to understand the point you were missing, rather than just continue to miss it.

As a number, variance is not as useful a number as the standard deviation, because it’s the square of a useful number.

If you were to find a random board in the ditch and decided to measure its length, would it be more useful to say its length was 6 feet, or to say the square of its length was 36 square feet? Both contain the same information, but one is more meaningful to a human.

Since the standard deviation is the square toot of the variance how can the variance be less than the standard deviation?

If it’s less than 1.

(I like square toot though. May have to use it sometime.)

No, you don’t. The spread of the values in the data set, i.e. the variance, is a direct indicator of the uncertainty in measuring the measurand.
You simply don’t understand metrology at all.

And yet all the metrology texts you show me use standard deviation or standard uncertainty as a measure of uncertainty, and not variance.

Reply to  Bellman
August 26, 2023 7:50 am

As a number, variance is not as useful a number as the standard deviation, because it’s the square of a useful number.”

But you can’t find SD without first finding variance. You *can* find the length of a board without first finding the square of its length!

“If it’s less than 1.”

Yes. But if you lay that out on a graph and the width of the SD is wider than the width of the variance you have a problem. How does the SD indicate a 68% width of the population if it’s wider than the population width? You can shift that population along the x-axis anywhere you want and it won’t change the shape. Perhaps you should try using percentages instead of direct values.

“And yet all the metrology texts you show me use standard deviation or standard uncertainty as a measure of uncertainty, and not variance.”

You didn’t even bother to understand my example of 10 of each value from 1 – 10 and 50 of each value of 1 and 10. Both have an average value of 5.5. But the variance of one is about 8 and of the other is about 20.

Which case is the most uncertain?

Reply to  Bellman
August 26, 2023 7:54 am

(I like square toot though. May have to use it sometime.)

Cocaine, this explains a lot.

Reply to  Tim Gorman
August 26, 2023 7:53 am

“ It doesn’t directly tell you the amount of variation in the sample, whereas standard deviation does.”

You are continuing your willful ignorance.

You simply don’t understand metrology at all.

Absolutely incredible.

Reply to  Bellman
August 25, 2023 6:44 pm

It’s so strange that your definition of uncertainty doesn’t allow you to say that knowing how close you are to the average tells you anything about the uncertainty.”

It’s not strange at all. Do you have dementia? I truly need to know. Have you already forgotten all the discussion about precision and accuracy using shooting targets that we’ve had?

You can precisely place all your shots in one hole on the target yet be a long distance from the bullseye! The SEM only tells you how precisely you have calculated the mean; it tells you nothing about the accuracy of that mean!

 But in both cases you use the same statistics to get the estimate”

YOU DON’T! By ignoring variance you don’t use the same statistics.

Once again, what you say doesn’t match what you do!

 the more certainty you have that your sampling mean is close to the actual mean.”

But you do *NOT* know how accurate that mean *is*. Its accuracy is determined by how accurate the data is, not by how precisely you calculate the mean of the stated values while ignoring the uncertainty interval that goes along with those stated values!

“You said correctly that when you add random variables you add their variances. Nothing about them needing to have the same variance.”

Again, variance is a measure of the uncertainty of the average value. Why do you want to keep ignoring that fact?

” you have just switched from talking about adding random variables to combining them. “

How do you calculate a global average temperature? Can you do it without adding? I can’t.

“It tells you the standard deviation of the sample mean”

Where does the uncertainty of the stated values get evaluated? It is *that* which tells you the accuracy of the mean.

“Do you never notice how you keep changing from an average to a sum?”

Again, how do you calculate a global average temperature without adding?

Reply to  Tim Gorman
August 26, 2023 2:35 am

Here’s what the great expert thinks:

But variance isn’t a useful measure.

Hahahahahahahahahahahah

You are right, Tim, it must be dementia.

Reply to  Tim Gorman
August 26, 2023 4:47 am

Have you already forgotten all the discussion about precision and accuracy using shooting targets that we’ve had?

Have you forgotten all the times I addressed your “discussions”?

As with any technique in maths or elsewhere – garbage in garbage out. If all your measurements are systematically wrong then your average will be wrong. If you mess up the sampling the average will be wrong. But that does not mean the CLT or the SEM is useless, or tells you “NOTHING” about the uncertainty of the mean.

Your problem is you keep jumping from general arguments to specific exceptions. It’s as if I kept insisting that adding the lengths of wooden boards will tell you *NOTHING* about the total length, and when you point out that’s absurd, I point out that you don’t know if someone has shrunk your measuring tape, or how you can be sure you didn’t transpose a couple of digits when you added the measurements.

“YOU DON’T! By ignoring variance you don’t use the same statistics.”

You do not ignore variance. If you are talking about the usual SEM calculation at this point, you need the variance to calculate it. SD / √N – remember. That SD is the square root of the variance.

Again, variance is a measure of the uncertainty of the average value. Why do you want to keep ignoring that fact?

It’s a measure of the spread in a random variable, or data set. If that random variable is a sampled average then it’s a measure of the square of the uncertainty of that average. But you need to know how to calculate the variance of the random variable that is the sample average – which is where you keep getting it wrong. It’s really simple – if X and Y are random variables the variance of X + Y is given by

Var(X + Y) = Var(X) + Var(Y)

But the variance of the average is given by

Var([X + Y] / 2) = [Var(X) + Var(Y)] / 2.

And I’ll repeat, there is no requirement for Var(X) = Var(Y).

How do you calculate a global average temperature? Can you do it without adding? I can’t.

Again, how do you calculate a global average temperature without adding?

And you worry that I have dementia. Again , the issue is not with the fact you add values to get an average, it’s the fact that the uncertainty of that average is not the same as the uncertainty of that sum. You don’t just add temperatures to get an average, you also have to do some division.
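For independent random errors, the different scaling of a sum and of an average is easy to demonstrate; whether that error model applies to real temperature data is, of course, exactly what this thread is disputing. A sketch:

```python
# Spread of a SUM of N independent errors grows like sigma*sqrt(N);
# spread of their MEAN shrinks like sigma/sqrt(N).
import random
from math import sqrt
from statistics import stdev

random.seed(0)
sigma, N, trials = 1.0, 100, 5_000

sums, means = [], []
for _ in range(trials):
    errs = [random.gauss(0, sigma) for _ in range(N)]
    sums.append(sum(errs))
    means.append(sum(errs) / N)

print(f"sd(sum)  ~ {stdev(sums):.2f}   theory: {sigma * sqrt(N):.2f}")
print(f"sd(mean) ~ {stdev(means):.3f}  theory: {sigma / sqrt(N):.3f}")
```

The simulated spreads land close to σ√N for the sum and σ/√N for the mean.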

Reply to  Bellman
August 26, 2023 7:28 am

“But that does not mean the CLT or the SEM is useless, or tells you “NOTHING” about the uncertainty of the mean.”

Malarkey! The *only* time the SEM can tell you anything about the uncertainty of the mean is if you assume the population mean is 100% accurate. Then if the SEM is zero you have a 100% accurate population mean.

You simply can’t assume that the population average is 100% accurate if the data elements making up the average have uncertainty. It’s not a valid assumption whether you are measuring multiple things one time or one thing multiple times. You have to *justify* such an assumption and the only way you can justify that assumption is if you know the entire population already!

It’s why so many examples in so many books just assume all measurement uncertainty is random, Gaussian, and cancels. Then they can assume that the population mean is 100% accurate. It’s a simplification that doesn’t work in the real world, it only works in your statistical world.

“Your problem is you keep jumping from general arguments to specific exceptions.”

No, I am not. Your reading comprehension skills (or lack thereof) are showing again.

My exceptions apply in general – to *any* case, at least in the real world. Why don’t you ever want to join us in the real world?

” you need the variance to calculate it.”

You need the variance of the SAMPLE MEANS, not the variance of the population!

“it’s a measure of the square of the uncertainty of that average.”

It’s a measure of how close you are to the population average. It tells you *NOTHING* about how accurate that average is.

“Var([X + Y] / 2) = [Var(X) + Var(Y)] / 2.”

That’s the AVERAGE VARIANCE. Once again we are back to you claiming you don’t believe the average variance is the variance of the average BUT YOU SHOW THAT IS WHAT YOU BELIEVE WITH EVERYTHING YOU POST!

You just can’t get away from your religious dogma, not even in one post!

“it’s the fact that the uncertainty of that average is not the same as the uncertainty of that sum”

The uncertainty of the average *IS* the uncertainty of the sum!

If q = s/y then

u(q)/q = u(s)/s + u(y)/y

u(q)/q is *NOT* u(s)/y

The average uncertainty is *NOT* the uncertainty of the average!

Reply to  Bellman
August 25, 2023 6:34 pm

“I’ve not said any of that. If I have said anything like that I was wrong. Neither of those things are remotely correct. But again it’s irrelevant because you will just keep lying.”

What you say is belied by what you do!

“just the average monthly uncertainty?”

See what I mean? You simply ignore what the word “annual” means and substitute something you make up! What you say is belied by what you do!

“Yet that doesn’t happen when Frank does it.”

Of course it does! What do you think each iterative step is?

“His annual uncertainty thousands of instruments ends up exactly the same as for one instrument.”

What instrument are you talking about? A climate model is not an instrument!

“please point to the part of his paper that tells you that every instrument grows the uncertainty of the mean.”

What paper are you talking about? The subject here isn’t a paper, it’s the podcast he contributed to!

Reply to  Tim Gorman
August 25, 2023 6:52 pm

You simply ignore what the word “annual” means and substitute something you make up!

This is just getting pathetic.

What I said:

So when he says the annual uncertainty of an instrument is given by taking the monthly uncertainty, squaring it, multiplying by 12, dividing by 12 and taking the square root, he isn’t going round the houses to say the uncertainty of an annual average is just the average monthly uncertainty?

Maybe I should have written the word ANNUAL in capitals as it seems to be the only language you understand.

Of course it does! What do you think each iterative step is?

What iterative step? I’m not talking about his model claims. I’m talking about his uncertainty of the observed global averages.

What instrument are you talking about? A climate model is not an instrument!

The same ones I assumed you were talking about when you said:

The *average* uncertainty of an LIG instrument is *NOT* the uncertainty of any specific instrument, IT IS AN AVERAGE. But when combining the readings of multiple instruments the uncertainty grows with each inclusion of another instrument and can be calculated by applying the average uncertainty to each instrument.

What paper are you talking about?

This one.

https://www.science-climat-energie.be/wp-content/uploads/2019/07/Frank_uncertainty_global_avg.pdf

The subject here isn’t a paper, it’s the podcast he contributed to!

The subject was you claiming all climate scientists believe the uncertainty of an average was the average uncertainty, and me pointing out that the only one who seems to believe that is Patrick Frank. But I think he starts talking about it in the podcast after 32 minutes.

Reply to  Bellman
August 26, 2023 2:37 am

This is just getting pathetic.

Yes, you are. But rest assured, no amount of clues can ever penetrate the neutronium.

Reply to  karlomonte
August 26, 2023 4:57 am

Have an upvote – you have reached the pinnacle of your wit. I can’t see you improving on that.

Reply to  Bellman
August 26, 2023 7:56 am

Stop posting non-physical nonsense.

Reply to  Bellman
August 26, 2023 4:34 am

“This is just getting pathetic.”

I agree. You see the word “annual” and assume it means “monthly”. Pathetic.

The +/- 4 W/m^2 is an ANNUAL mean, not a monthly mean.

“I’m not talking about his model claims. I’m talking about his uncertainty of the observed global averages.”

So what is your exact problem with his analysis? His statement:

“variance. Reviews of time series quality control and homogeneity adjustments do not discuss sensor evaluation [7-10], and the methodological report of USHCN data quality [25] does not describe validation or sampling of noise stationarity in temperature sensors. The surface station sensor diagnostics, available in the online reports of the new USCRN National Climatic Data Center network, include standard deviations calculated from the twelve temperatures recorded hourly (http://www.ncdc.noaa.gov/crn/report; see the “Air Temperature Sensor Summary,” under “Instruments”). But despite the set of ~8640 monthly standard deviations from individual CRN sensor data streams, which should give some measure of the magnitude and stationarity of variance, no extensive survey of station sensor variance is evident in published work.”

It’s just more evidence of the fact that climate science ignores measurement uncertainty, uncertainty which *must* be propagated to get a complete picture of the total uncertainty.

It’s just more of the meme: “measurement uncertainty is always random and Gaussian and therefore cancels”.

“The subject was you claiming all climate scientists believe the uncertainty of an average was the average uncertainty,”

When you ignore the measurement uncertainty and only look at the variation in the stated values you are *NOT* getting a full picture of the measurement uncertainty.

As Pat states in the paper:

“The station temperature in each month during the normal period can be considered as the sum of two components: a constant station normal value (C) and a random weather value (w, with standard deviation σi).” This description plus the use of a reduction in measurement noise together indicate a signal averaging statistical approach to monthly temperature.” (bolding mine, tpg)

They are finding an AVERAGE of the variation in the stated temperature value, and by dividing by sqrt(N) they are getting an average variation in the stated values. It’s done by assuming the uncertainty is random, Gaussian, and cancels. Do you *ever* read anything for meaning? Someday you need to stop cherry-picking and actually LEARN the subject matter.

Reply to  Tim Gorman
August 26, 2023 5:22 am

The +/- 4 W/m^2 is an ANNUAL mean, not a monthly mean.

Stop changing the subject. I am not talking about his paper on models, but on instrument uncertainty. You should know this because the thread starts with you saying

The *average* uncertainty of an LIG instrument is *NOT* the uncertainty of any specific instrument, IT IS AN AVERAGE. But when combining the readings of multiple instruments the uncertainty grows with each inclusion of another instrument and can be calculated by applying the average uncertainty to each instrument.

That’s the comment I was replying to. Nothing about watts per square meter.

So what is your exact problem with his analysis?

One day maybe I’ll devote a decade to going over every problem I have with his analysis. But the main one is that he thinks that the average uncertainty should be used to describe the uncertainty of the average.

Reply to  Bellman
August 26, 2023 7:56 am

“he thinks that the average uncertainty should be used to describe the uncertainty of the average.”

That’s not at all what he has said. Your lack of reading comprehension skills is showing again.

He is saying that the SUM of multiple average values is the same as the SUM of the individual values.

The average value of 2,4,6,8,10 is 6. The sum of 2,4,6,8,10 is 30. The sum of 6+6+6+6+6 is 30. They are the same. And they both describe the uncertainty of the average!

You can’t even understand sixth grade math!

Reply to  Tim Gorman
August 26, 2023 9:18 am

“He is saying that the SUM of multiple average values is the same as the SUM of the individual values.”

Is he? Maybe you meant to write something different, but as it stands what you are saying makes no sense and is clearly wrong.

“The average value of 2,4,6,8,10 is 6. The sum of 2,4,6,8,10 is 30. The sum of 6+6+6+6+6 is 30. They are the same. ”

And your point is?

“And they both describe the uncertainty of the average!”

How? What? You keep rambling on and I’m sure this means something to you, but you really need to take a deep breath and try to figure out how to explain it to someone not living in your head.

In what way does 30 describe the uncertainty of the average?

Reply to  Bellman
August 26, 2023 10:06 am

I didn’t think you’d get it.

It doesn’t matter if you add up 5 individual uncertainties or 5 average uncertainties. You get the same total uncertainty.

You appear to be claiming that Pat is wrong because he is adding average uncertainty to get a sum rather than adding up the individual uncertainties.

The answer is the SAME!

Reply to  Tim Gorman
August 26, 2023 11:28 am

“I didn’t think you’d get it.”

Perhaps if you thought more about what you wanted to say rather than just blurting out random sums it would be easier for people to get your point.

Your point seems to be that if you know the average of 5 numbers, then the average times 5 will be equal to the sum. This is not at all surprising given the average is equal to the sum divided by 5. But it is a useful illustration that averages are not meaningless.

“You appear to be claiming that Pat is wrong because he is adding average uncertainty to get a sum rather than adding up the individual uncertainties.”

That’s not remotely what I’m saying is wrong. What I’m saying is wrong, is to say the average of that sum is the uncertainty of the average. You know – the uncertainty of the average is not the average uncertainty.

Reply to  Bellman
August 26, 2023 7:58 am

In what journal will the great expert (i.e. you) be publishing this “analysis”?

Reply to  karlomonte
August 26, 2023 9:09 am

Sarcasm Fail monthly.

Reply to  Bellman
August 26, 2023 11:25 am

More hypocrisy.

Reply to  Bellman
August 26, 2023 12:27 pm

But the main one is that he thinks that the average uncertainty should be used to describe the uncertainty of the average.

No, I don’t. You clearly do not understand the logic of that analysis.

Eqns. 4, 5 and 6 do not refer to any measurement average. They total up the instrumental uncertainty, all by itself, before any measurement at all is made.

This is the last time I’m going to explain that for you.

If you want to understand the paper, do what any serious student does. Read over and over until you understand the meaning as it is conveyed. Not your pre-expectations of what you think it ought to mean.

Reply to  Pat Frank
August 26, 2023 4:00 pm

No, I don’t.

If it’s not what you intend it’s still what you get. You start with the uncertainty in a daily average expressed as a standard uncertainty. You then say “The uncertainty in Tmean for an average month (30.417 days) is the RMS of the daily means”.

How can that RMS leave you with anything other than the daily uncertainty? You are just squaring the daily uncertainty, multiplying it by 30.417, dividing it by 30.417 and taking the square root. The result is the claimed uncertainty for the month is the average daily uncertainty.
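The RMS step being described reduces to an algebraic identity whenever the N combined uncertainties are all equal; a sketch with a made-up daily value:

```python
# sqrt(N * u**2 / N) collapses back to u for any N, so an RMS of
# N equal daily uncertainties returns the daily uncertainty itself.
from math import isclose, sqrt

u = 0.195   # hypothetical daily uncertainty (made-up value)
N = 30.417  # average days per month

monthly_rms = sqrt(N * u**2 / N)
print(u, monthly_rms)  # identical up to rounding error
```

Whether that collapse is the intended behavior (intrinsic instrumental uncertainty) or a mistake (average uncertainty standing in for uncertainty of the average) is exactly what the two sides here disagree about.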

Eqns. 4, 5 and 6 do not refer to any measurement average.

They are the measurement uncertainty for the mean, whether monthly, annual, or 30-year, and you use the same result for the global average (monthly or annual).

They total up the instrumental uncertainty, all by itself, before any measurement at all is made.

That’s what I mean by the measurement uncertainty. It’s independent of the actual measurements, and you are not including spatial uncertainty.

If you want to understand the paper, do what any serious student does. Read over and over until you understand the meaning as it is conveyed.

I would hope any serious student would ask questions, rather than just keep reading the same paper they don’t understand. I think that benefits an open-minded teacher as well. Learn from the questions asked, explain your argument better, and allow for the possibility that you might be wrong.

Reply to  Bellman
August 26, 2023 6:37 pm

How can that RMS leave you with anything other than the daily uncertainty? You are just squaring the daily uncertainty, multiplying it by 30.417, dividing it by 30.417 and taking the square root. The result is the claimed uncertainty for the month is the average daily uncertainty.

Right. Because the uncertainty is intrinsic to the instrument.

As I noted, Bellman, you don’t understand the logic of the analysis.

And no one can just leap to understanding. Understanding takes years of study. I worked hard for years before even submitting the 2008 Skeptic paper. Then years more for the others.

“I would hope any serious student would ask questions …”

You don’t ask questions, Bellman. Neither does bdgwx. You each challenge from ignorance.

If some analysis does not match your limited expectations, you decide it’s wrong. And you each do not take explanations, because you have already pre-decided that we are wrong. Therefore, you decide, our further responses must also (always) be wrong.

And so it goes, around and around and around again.

You and bdgwx are, in a word, rejectionist. That sterile attitude shows in everything you two do here.

Reply to  Pat Frank
August 27, 2023 6:51 am

“If some analysis does not match your limited expectations, you decide it’s wrong. And you each do not take explanations, because you have already pre-decided that we are wrong. Therefore, you decide, our further responses must also (always) be wrong.”

You nailed it here!

Neither of them have studied Taylor or Bevington and worked out the examples in each and every chapter.

They cherry-pick pieces and parts that they think they can use to refute even direct quotes from the book.

I wouldn’t let either one of them design even the blade for a lawnmower let alone the production facility for making them.

Reply to  Bellman
August 25, 2023 6:45 pm

“Nobody says that, certainly no climate scientist you’ve ever quoted.”

Nine examples here. I can provide another 19 dating all the way back to 1926.

“The only person I’ve seen saying that is Dr Patrick Frank.”

I’ve never said that. You and especially bdgwx are unable to distinguish the uncertainty of the mean from the mean of uncertainty. And even worse, never figured out why the latter is relevant in LiG Met.

You two have ever been hopelessly wrong. And I now use hopeless deliberately because by all evidence there’s no obvious possibility that you’ll ever figure it out. Or that you desire to.

bdgwx
Reply to  Pat Frank
August 25, 2023 7:52 pm

You and especially bdgwx are unable to distinguish the uncertainty of the mean from the mean of uncertainty.

Bellman and I have been unwavering and unequivocal on this matter.

uncertainty of the mean: u(Σ[x_n, 1, N] / N)

mean of the uncertainty: Σ[u(x_n), 1, N] / N

You say this is wrong. So tell everyone here, Pat. If the uncertainty of the mean isn’t u(Σ[x_n, 1, N] / N) then what is it?
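Under the standard propagation rule for independent errors, the two quantities written out above are numerically different; whether that rule is the right one for this data is what the thread is arguing about. A sketch with equal, made-up per-reading uncertainties:

```python
# mean of the uncertainty:                 sum(u_i) / N
# uncertainty of the mean (indep. errors): sqrt(sum(u_i**2)) / N
from math import sqrt

u, N = 0.5, 100   # made-up per-reading uncertainty and reading count
us = [u] * N

mean_of_uncertainty = sum(us) / N                        # stays at 0.5
uncertainty_of_mean = sqrt(sum(ui**2 for ui in us)) / N  # 0.5/sqrt(100) = 0.05

print(mean_of_uncertainty, uncertainty_of_mean)
```

With equal u_i the first quantity is just u, while the second is u/√N: the two expressions coincide only when N = 1.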

Reply to  bdgwx
August 25, 2023 10:08 pm

bdgwx: “And why does that change the uncertainty of the mean formula?” Confusing the uncertainty of the mean with the mean of the uncertainty.

How quickly we forget.

Reply to  Pat Frank
August 26, 2023 2:39 am

Remembering causes migraines.

bdgwx
Reply to  Pat Frank
August 26, 2023 12:51 pm

PF: “And why does that change the uncertainty of the mean formula?” Confusing the uncertainty of the mean with the mean of the uncertainty.

I went through the comment section in that article. In each case Bellman and I are both in agreement, unwavering, and unequivocal that the uncertainty of the mean is defined as u(Σ[x_n, 1, N] / N).

PF: How quickly we forget.

Again…if you don’t accept that the uncertainty of the mean is defined as u(Σ[x_n, 1, N] / N) then how do you define it mathematically? And what word or phrase do you use to describe u(Σ[x_n, 1, N] / N)?

Reply to  bdgwx
August 26, 2023 7:12 pm

The point of discussion was use of the mean of uncertainty rather than the uncertainty of the mean. You never understood the choice.

However, uncertainty in a mean is the standard deviation √[Σᵢ(x̅-xᵢ)²/(N-1)].

bdgwx
Reply to  Pat Frank
August 27, 2023 5:13 am

Pat Frank: However, uncertainty in a mean is the standard deviation √[Σᵢ(x̅-xᵢ)²/(N-1)].

Your own source Bevington 4.14 says it is the standard deviation divided by the square root of the sample size or σ/sqrt(N). Are you going to stick with your original statement above?
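The two formulas being contrasted differ only by the √N factor. For a made-up five-reading sample:

```python
# s: sample standard deviation, sqrt(sum((x - xbar)**2) / (N - 1))
# s/sqrt(N): what Bevington calls the estimated error in the mean
from math import sqrt
from statistics import stdev

xs = [9.8, 10.1, 10.0, 9.9, 10.2]  # made-up readings
s = stdev(xs)
sem = s / sqrt(len(xs))

print(f"s = {s:.4f}, s/sqrt(N) = {sem:.4f}")
```

Which of the two is the right measure of the uncertainty in the mean is the entire disagreement in this exchange.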

Reply to  bdgwx
August 27, 2023 5:28 am

Yes, Mr. Rote.

bdgwx
Reply to  Pat Frank
August 27, 2023 11:00 am

If you disagree with Bevington then why do you use him as a source?

Reply to  bdgwx
August 27, 2023 12:55 pm

Still fraudulently “adjusting” historic air temperature data, bgwxyz?

Reply to  bdgwx
August 27, 2023 2:51 pm

I don’t disagree with Bevington. Your usage is wrong.

bdgwx
Reply to  Pat Frank
August 27, 2023 6:47 pm

I don’t disagree with Bevington. Your usage is wrong.

Let me get this straight. Bellman and I accept that the “uncertainty of the mean” is defined as u(Σ[x_n, 1, N] / N). You keep saying our usage is wrong. Don’t deflect. Don’t divert. Lay it out once and for all. How are you defining “uncertainty of the mean”?

And second, Bevington describes σ/sqrt(N) as the “estimated error in the mean” and “uncertainty in the determination of the mean”. You keep saying it is just the standard deviation. How is that not a disagreement with Bevington?

Reply to  bdgwx
August 27, 2023 9:34 pm

“You keep saying our usage is wrong.”

The point of discussion has always been and remains that your insistence on the uncertainty of the mean as opposed to the mean of uncertainty is wrong.

“Don’t deflect. Don’t divert.”

Spare me the unpleasant irony. Your post is a deflection; a diversion; a misstatement of the subject.

“You keep saying [σ/sqrt(N)] is just the standard deviation”

I never wrote that σ/sqrt(N) is the standard deviation.

That memory thing again. I wrote, uncertainty in a mean is the standard deviation √[Σᵢ(x̅-xᵢ)²/(N-1)]. This doesn’t disagree with Bevington.

Calculating a mean removes one degree of freedom.

You proclaimed that you were leaving in a huff, bdgwx. How about keeping your promise.

Reply to  Pat Frank
August 28, 2023 6:28 am

+1000

bdgwx
Reply to  Pat Frank
August 28, 2023 7:54 am

The point of discussion has always been and remains that your insistence on the uncertainty of the mean as opposed to the mean of uncertainty is wrong.

We have a mean. We want to know the uncertainty of it. Therefore what we want to know is the uncertainty of the mean. Nobody cares about the mean of the individual uncertainties here. It is irrelevant.

You keep saying [σ/sqrt(N)] is just the standard deviation

I never wrote that σ/sqrtN is the standard deviation.

First, I know you didn’t. Second, I’d appreciate it if you didn’t make up quotes from me. I never said “You keep saying [σ/sqrt(N)] is just the standard deviation.” You took two different quotes from me and arranged them in a way to mean something completely different than what I actually said.

That memory thing again. I wrote, uncertainty in a mean is the standard deviation √[Σᵢ(x̅-xᵢ)²/(N-1)]. This doesn’t disagree with Bevington.

It literally disagrees with Bevington 4.14. If you think the uncertainty in the determination of the mean is the standard deviation and nothing more, then post a screenshot of Bevington 4.14 showing exactly that. This is your opportunity to shine and make me look dyslexic, blind, and ignorant for the whole world to see.

You proclaimed that you were leaving in a huff, bdgwx. How about keeping your promise.

I did not proclaim that I was leaving. I said I was disengaging from that particular discussion regarding your arbitrary inclusion of year-1 to the W m-2 units Lauer & Hamilton published.

Reply to  bdgwx
August 28, 2023 12:42 pm

Nobody cares about the mean of the individual uncertainties here.

I cared about the mean of the uncertainty in the context of LiG Met. And LiG Met. was the context of that discussion. You and Bellman insisted on being wrong.

First, I know you didn’t.

Then why did you write that I did? See below.

“…to mean something completely different than what I actually said.”

You wrote: “And second, Bevington describes σ/sqrt(N) as the “estimated error in the mean” and “uncertainty in the determination of the mean”. You keep saying it is just the standard deviation.”

The “it” in your sentence refers to, “σ/sqrt(N).”

And then I wrote, “You keep saying [σ/sqrt(N)] is just the standard deviation”

The brackets convention signals an interpolation. The interpolation is obvious and obviously conveys the correct meaning of your text.

And you proceed to manufacture a fake lie, in claiming it was “something completely different than what I actually said.” It was exactly what you meant. Let’s see: that would be a lie inside a lie. A lying lie.

Bevington 4.14 is strictly true only for random error. We’re not concerned with random error. Use of the equation appropriate to non-random error is not a disagreement. Except with your benighted views.

“regarding your arbitrary inclusion of year-1” In an annual mean. Got it.

So, you’re not leaving after all. Another hope for respite from adamantine nonsense dashed.

bdgwx
Reply to  Pat Frank
August 28, 2023 2:29 pm

PF: I cared about the mean of the uncertainty in the context of Lig Met. And LiG Met. was the context of that discussion.

Then that’s a problem. The average of the individual uncertainties does not tell you what the uncertainty of the average is. They are not equivalent.

PF: The “it” in your sentence refers to, “σ/sqrt(N).”

No it does not. The “it” refers to the uncertainty of the mean or in the language of Bevington “estimated error in the mean” or “uncertainty in the determination of the mean”. And that concept (the “it”) is not computed as σ like you claim. It is computed as σ/sqrt(N) according to Bevington and everyone else.

PF: And then I wrote, “You keep saying [σ/sqrt(N)] is just the standard deviation”

No I did not. Stop misquoting. This is what I said.

PF: And you proceed to manufacture a fake lie, in claiming it was “something completely different than what I actually said.”

It was something completely different. You literally took different sentences, split them apart, and rejoined parts of them in a way I never said or intended. And you put double quotes around it to make it look like I said it. And I’m the liar? Are you being serious right now?

Bevington 4.14 is strictly true only for random error.

And where does Bevington say the uncertainty of the mean is computed as simply σ for systematic error?

Reply to  bdgwx
August 28, 2023 5:02 pm

The average of the individual uncertainties does not tell you what the uncertainty of the average is.

It does when the individual uncertainties represent the elements of intrinsic instrumental accuracy.

You have never understood that point. Neither does Bellman. Neither of you understand the analytical logic in LiG Met. You have not studied with any care.

You’re quite evidently not interested in understanding the paper on its internal grounds. You instead impose your own naive expectations upon it.

You’re in strict violation of the ethics of review, and you evidently don’t care a whit.

“No it does not.”

Again, what you wrote: “And second, Bevington describes σ/sqrt(N) as the “estimated error in the mean” and “uncertainty in the determination of the mean”. You keep saying it is just the standard deviation.”

Your “it” can refer to “estimated error in the mean” and “uncertainty in the determination of the mean”, as you like. But both of those terms refer back to “σ/sqrt(N).”

Therefore, “You keep saying [σ/sqrt(N)] is just the standard deviation.” is indeed an accurate representation of your statement.

You’re just manufacturing outrage. A pose.

And where does Bevington say the uncertainty of the mean is computed as simply σ for systematic error?

Bevington deals only in passing with systematic errors. See pp. 2.3.14 and especially 55 under “A Warning About Statistics.”

Reply to  Pat Frank
August 28, 2023 4:39 pm

uncertainty (of measurement)
parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

NOTE 1 The parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence.

2.3.1
standard uncertainty
uncertainty of the result of a measurement expressed as a standard deviation

2.3.2
Type A evaluation (of uncertainty)
method of evaluation of uncertainty by the statistical analysis of series of observations

3.1.4 In many cases, the result of a measurement is determined on the basis of series of observations obtained under repeatability conditions (B.2.15, Note 1).

B.2.15
repeatability (of results of measurements)
closeness of the agreement between the results of successive measurements of the same measurand carried
out under the same conditions of measurement

NOTE 1 These conditions are called repeatability conditions.

NOTE 2 Repeatability conditions include:
— the same measurement procedure
— the same observer
— the same measuring instrument, used under the same conditions
— the same location
— repetition over a short period of time.

NOTE 3 Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results.

B.2.17
experimental standard deviation
for a series of n measurements of the same measurand, the quantity s(qₖ) characterizing the dispersion of the results and given by the formula:

s(qₖ) = √(Σ(qⱼ – q̅)² / (n-1))

qₖ being the result of the kth measurement and q̅ being the arithmetic mean of the n results considered

NOTE 1 Considering the series of n values as a sample of a distribution, q̅ is an unbiased estimate of the mean μ(q), and s²(qₖ) is an unbiased estimate of the variance σ², of that distribution.

NOTE 2 The expression s(qₖ)/√n is an estimate of the standard deviation of the distribution of q̅ and is called the experimental standard deviation of the mean.

NOTE 3 “Experimental standard deviation of the mean” is sometimes incorrectly called standard error of the mean.

B.2.18
uncertainty (of measurement)
parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

NOTE 1 The parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence. (bold, italic, underline by me)

From NIST: https://www.nist.gov/system/files/documents/nvlap/aplac_tc_010_issue_1.pdf

When a test result is presented as a measured value and a measurement uncertainty, it prescribes an interval within which the true value of the quantity being measured is expected to lie with a stated level (usually 95%) of confidence. This uncertainty interval varies in size, depending on the test.

The uncertainty of measurement may be a STANDARD DEVIATION. It is up to the person displaying the measurement to decide and to inform the reader as to what standard deviation is being used and if it is expanded.

One should note the GUM indicates a half-width may be an appropriate value if a confidence level is associated. No way can you simplify a half-width interval to be as small as an SEM, which leads one to consider standard uncertainty as a very appropriate indicator.

Please note that NIST says “a measurement uncertainty” should be presented with a measured value. They don’t recommend which interval should be used.

OTOH, in NIST TN 1900, they specify that for Tmax an expanded standard uncertainty of the mean is an appropriate value. But remember, they also specify that the temperature measurements they are using meet the requirements for repeatability of results of measurements.

The averaging of Tmax and Tmin does not meet the requirements of repeatability of results of measurements. The averaging of different stations for homogenization does not meet the requirements of repeatability either. Averaging NH and SH certainly doesn’t meet the repeatability conditions either.

Why don’t you try to justify these?

While you are at it try to justify anomalies not carrying the variance of their component parts.

Lastly, you want to understand where some of Dr. Frank’s experience comes from? Go through this course.

https://sisu.ut.ee/measurement/uncertainty

Reply to  Jim Gorman
August 28, 2023 5:04 pm

Pat,

This was meant for a response to bdgwx. I must have clicked on the wrong reply button.

I’ve tried to edit and move this but it didn’t work.

My apologies.

Reply to  bdgwx
August 28, 2023 1:20 pm

Take a look at the attached picture.

The standard deviation of the mean is 0.69. It’s the spread of the shots around the mean. It’s what most people mistakenly call the uncertainty of the mean.

The total uncertainty is 13. That’s the total of how far off your shots were from the bullseye. This is what you want to minimize, i.e. the accuracy of your shots or, in other words, the uncertainty of your shots.

The average uncertainty is 2.2. What does that tell you? You might want to adjust your sights but that will only help if you also control your trigger pull, your breathing, use target ammo, and shoot on a windless day, etc. All factors in the uncertainty of where your shots are going to hit.

u(Σ[x_n, 1, N] / N)

This is the sum of the total (=13) divided by the number of shots. That is the average uncertainty. Again, what does this actually tell you?
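A one-dimensional sketch of the distinction being drawn here, with invented shot positions (the bullseye at 0.0):

```python
import statistics

# Invented shot positions along one axis; the bullseye is at 0.0
shots = [2.0, 2.5, 1.8, 2.2, 2.4, 1.9]

centre = statistics.mean(shots)           # where the group actually clusters
spread = statistics.stdev(shots)          # precision: dispersion of the shots about the group centre
offset = abs(centre - 0.0)                # accuracy: distance of the group centre from the bullseye
total_miss = sum(abs(s) for s in shots)   # summed distance of every shot from the bullseye
avg_miss = total_miss / len(shots)        # the "average uncertainty" of the comment

print(round(spread, 2), round(offset, 2), round(avg_miss, 2))
```

With these invented numbers the spread about the group centre is small while the offset from the bullseye is large: a tight group that is badly sighted-in, which is the precision-versus-accuracy distinction the picture is making.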



[attached image: uncertainty.jpg]
Reply to  Tim Gorman
August 28, 2023 3:07 pm

He won’t look at it.

Reply to  bdgwx
August 28, 2023 5:08 pm

uncertainty (of measurement)
parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

NOTE 1 The parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence.

2.3.1
standard uncertainty
uncertainty of the result of a measurement expressed as a standard deviation

2.3.2
Type A evaluation (of uncertainty)
method of evaluation of uncertainty by the statistical analysis of series of observations

3.1.4 In many cases, the result of a measurement is determined on the basis of series of observations obtained under repeatability conditions (B.2.15, Note 1).

B.2.15
repeatability (of results of measurements)
closeness of the agreement between the results of successive measurements of the same measurand carried
out under the same conditions of measurement
NOTE 1 These conditions are called repeatability conditions.
NOTE 2 Repeatability conditions include:
— the same measurement procedure
— the same observer
— the same measuring instrument, used under the same conditions
— the same location
— repetition over a short period of time.
NOTE 3 Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results.

B.2.17
experimental standard deviation
for a series of n measurements of the same measurand, the quantity s(qₖ) characterizing the dispersion of the results and given by the formula:
s(qₖ) = √(Σ(qⱼ – q̅)² / (n-1))
qₖ being the result of the kth measurement and q̅ being the arithmetic mean of the n results considered

NOTE 1 Considering the series of n values as a sample of a distribution, q̄ is an unbiased estimate of the mean μ_q, and s²(qₖ) is an unbiased estimate of the variance σ² of that distribution.

NOTE 2 The expression s(qₖ)/√n is an estimate of the standard deviation of the distribution of q̄ and is called the experimental standard deviation of the mean.

NOTE 3 “Experimental standard deviation of the mean” is sometimes incorrectly called standard error of the mean.

B.2.18
uncertainty (of measurement)
parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

NOTE 1 The parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence. (bold, italic, underline by me)

From NIST: https://www.nist.gov/system/files/documents/nvlap/aplac_tc_010_issue_1.pdf

When a test result is presented as a measured value and a measurement uncertainty, it prescribes an interval within which the true value of the quantity being measured is expected to lie with a stated level (usually 95%) of confidence. This uncertainty interval varies in size, depending on the test.

The uncertainty of measurement may be a STANDARD DEVIATION. It is up to the person displaying the measurement to decide and to inform the reader as to what standard deviation is being used and if it is expanded.
One should note the GUM indicates a half-width may be an appropriate value if a confidence level is associated. No way can you simplify a half-width interval to be as small as an SEM, which leads one to consider standard uncertainty as a very appropriate indicator.

Please note that NIST says “a measurement uncertainty” should be presented with a measured value. They don’t recommend which interval should be used.

OTOH, in NIST TN 1900, they specify that for Tmax an expanded standard uncertainty of the mean is an appropriate value. But remember, they also specify that the temperature measurements they are using meet the requirements for repeatability of results of measurements.

The averaging of Tmax and Tmin does not meet the requirements of repeatability of results of measurements. The averaging of different stations for homogenization does not meet the requirements of repeatability either. Averaging NH and SH certainly doesn’t meet the repeatability conditions either.

Why don’t you try to justify these?

While you are at it try to justify anomalies not carrying the variance of their component parts.

Lastly, you want to understand where some of Dr. Frank’s experience comes from? Go through this course.

https://sisu.ut.ee/measurement/uncertainty

Reply to  bdgwx
August 26, 2023 2:39 am

And you’re still wrong.

And you forgot to put “1LOT” in this comment. HTH

Reply to  bdgwx
August 26, 2023 5:04 am

uncertainty of the mean: u(Σ[x_n, 1, N] / N)

Where is uncertainty in this equation? Uncertainty is usually designated using the letter “u”. This isn’t the uncertainty of the mean, it’s the average of the stated values.

mean of the uncertainty: Σ[u(x_n), 1, N] / N

This *is* the average uncertainty. All it does is take the total of the uncertainty and evenly spread it across all data elements. u1, u2, …, un all become u_avg instead of individual uncertainty intervals.

The total uncertainty of an equation consisting of a numerator and a denominator is the sum of the individual element’s uncertainty, both the elements in the numerator and in the denominator. It is *NOT* the uncertainty sum of the numerator elements divided by the denominator.

Your partial differentials of the elements only give you a weighting factor laying out the contribution of each element to the total uncertainty. It is *still* a sum, a weighted sum, of the uncertainties of each individual element. It is *NOT* the total uncertainty of the numerator divided by the number of elements.

Taylor lays this out so simply in his book that a sixth grader studying long division could figure it out.

if q = x/y then the uncertainty of q is

u(q)/q = u(x)/x + u(y)/y

It is *NOT* u(x)/y + u(y)/y

where if y is a constant you get u(q)/q = u(x)/y which is what you want us to believe.

Reply to  Tim Gorman
August 26, 2023 6:06 am

Taylor lays this out so simply in his book that a sixth grader studying long division could figure it out.

But obviously not simple enough for you to understand. We’ve been over this so many times it’s obvious that you are incapable or unwilling to understand. But for old times’ sake.

The total uncertainty of an equation consisting of a numerator and a denominator is the sum of the individual element’s uncertainty, both the elements in the numerator and in the denominator.

You miss the part where it’s explained these are the relative uncertainties.

It is *NOT* the uncertainty sum of the numerator elements divided by the denominator.

It is when you convert them back to absolute uncertainties. And Taylor makes this abundantly clear when he says that if q = Bx, where B is an exact value with no uncertainty, then u(q) = B u(x). Equation (3.9).

where if y is a constant you get u(q)/q = u(x)/y which is what you want us to believe

I think you mean u(q)/q = u(x)/x

Now all you have to do is figure out what that means for u(q). It isn’t difficult.

  1. multiply both sides by q
  2. substitute x/y for q.
Reply to  Bellman
August 26, 2023 8:09 am

“You miss the part where it’s explained these are the relative uncertainties.”

I didn’t miss it at all. It doesn’t matter if it is the absolute uncertainty or the relative uncertainty, the important part is that it is the sum of the uncertainties, be it absolute or relative!

“It is when you convert them back to absolute uncertainties.”

You’ve never figured this out at all. It’s because all you do is cherry-pick.

When you use relative uncertainty the absolute value of the uncertainty of the left side depends on the size of the left side: u(q)/q.

As q changes so does u(q). It’s how percentages work. u(q) isn’t just some absolute value.

*YOU* want to make a constant value by finding an average uncertainty and equating it to that.

I’ll ask again, where do you see a division of total uncertainty by y in any of Taylor’s text when q = x/y?

Reply to  Tim Gorman
August 26, 2023 5:31 pm

I didn’t miss it at all. It doesn’t matter if it is the absolute uncertainty or the relative uncertainty

Which explains why you keep failing to understand. Of course it matters – there would be no point in distinguishing between multiplication and addition if it didn’t matter.

the important part is that it is the sum of the uncertainties, be it absolute or relative!

When you are adding you have to add absolute uncertainties – you can’t just choose to add relative uncertainties because you feel like it. Are you sure you have read Taylor multiple times?

“When you use relative uncertainty the absolute value of the uncertainty of the left side depends on the size of the left side. u(q)/q. As q changes so does u(q).”

Only if u(q) / q is a constant.

u(q) isn’t just some absolute value.

Oh yes it is.

*YOU* want to make a constant value by finding an average uncertainty and equating it to that.

OK, you’ve lost me now. Really, you just keep spinning in every way you can, rather than just following the logic of the equation. If

u(q) / q = u(x) / x

then

u(q) = q * u(x) / x

and as

q = x/y

u(q) = x / y * u(x) / x

and the xs cancel, so

u(q) = u(x) / y

I really can’t understand how anyone with a modicum of algebra could fail to understand that. I can only assume you have to work really hard to unsee what you don’t want to see.

I’ll ask again, where do you see a division of total uncertainty by y in any of Taylor’s text when q = x/y?

And again, it’s in equation (3.9).

And if you really really want to avoid seeing this by claiming that B cannot be a fraction, it’s explained in the example on page 55. Remember? The one about dividing the uncertainty of a stack of paper by 200.

You’ll of course blank this from the argument by saying it only works if all the paper is the same size – failing to understand that it is still demonstrating exactly what you claim is impossible. Dividing total uncertainty by a constant.
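The q = x/y case with y an exact constant can be checked numerically. A Monte Carlo sketch (invented numbers, Gaussian measurement error assumed):

```python
import random
import statistics

# Sketch of q = x/y where y is an exact constant (Taylor's q = Bx with B = 1/y).
# x_true, u_x and y are invented for illustration.
random.seed(42)

x_true, u_x, y = 100.0, 5.0, 4.0
q_samples = [random.gauss(x_true, u_x) / y for _ in range(200_000)]

u_q = statistics.stdev(q_samples)
print(round(u_q, 2))   # close to u_x / y = 1.25
```

Under these assumptions the sampled spread of q comes out near u(x)/y, which is the algebraic result being derived above; whether that operation is legitimate for the quantities debated in this thread is, of course, the point in dispute.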

Reply to  Bellman
August 27, 2023 7:11 am

“When you are adding you have to add absolute uncertainties”

Why do you have to do this? My guess is that you can’t explain.

Go study Section 2.7 in Taylor.

It’s a preconception you have received from cherry-picking instead of studying.

You have admitted that you haven’t worked out any of the examples in Taylor let alone *all* of them.

As Pat has pointed out you start off with a bunch of preconceptions based on nothing but your own prejudices and proceed to cherry-pick things you think confirm your prejudices. And most everything you wind up with is then tainted by your prejudices and is wrong-headed.

You don’t even seem to understand that relative uncertainties are PERCENTAGES. They don’t have units.

“Only if u(q) / q is a constant.”

u(q)/q is a PERCENTAGE. It *is* a constant that is calculated from the relative uncertainties of the individual elements!

The relative uncertainty only changes if the uncertainty of the individual elements changes.

“u(q) = q * u(x) / x”

u(x)/x is a constant! The absolute value of u(q) depends only on q.

Like everyone is trying to tell you. You start off wrong, claim you aren’t doing what you do, and then you turn around and do things the wrong way anyway. As Feynman said, *YOU* are the easiest person for you to fool! And you do it continuously!

Reply to  Tim Gorman
August 27, 2023 9:07 am

You’re still arguing this with me, but have no objection to Pat Frank saying exactly what I’m saying?

Me: “When you are adding you have to add absolute uncertainties”
TG: “Why do you have to do this?”

Short answer: because that’s what every text on error/uncertainty propagation tells you to do. Adding and subtracting adds absolute uncertainties. Multiplying and dividing adds relative uncertainties.

Longer answer: it comes from the general equation for propagating uncertainties (e.g. equation 10 in the GUM). In the case of adding it’s really simple as the partial derivative of each term is just 1.

It also follows from the rules for adding variances when adding random variables – something you claim to know perfectly.

Go study Section 2.7 in Taylor.

Nothing in that section about using fractional uncertainties when adding values. But if you got as far as 3.8 you would see him spelling it out:

Before I discuss some examples of this step-by-step calculation of errors, let me emphasize three general points. First, because uncertainties in sums or differences involve absolute uncertainties (such as δx) whereas those in products or quotients involve fractional uncertainties (such as δx/x), the calculations will require some facility in passing from absolute to fractional uncertainties and vice versa, as demonstrated below.

My emphasis.
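Taylor’s rule can be checked by simulation. A Monte Carlo sketch with invented numbers (independent Gaussian errors assumed, so the in-quadrature forms hold; the straight sums are upper bounds):

```python
import random
import statistics

# Sums combine ABSOLUTE uncertainties; products combine FRACTIONAL ones.
# x0, ux, y0, uy are invented for illustration.
random.seed(1)

x0, ux = 50.0, 2.0
y0, uy = 30.0, 1.5
N = 200_000
xs = [random.gauss(x0, ux) for _ in range(N)]
ys = [random.gauss(y0, uy) for _ in range(N)]

# Sum: absolute uncertainties combine (in quadrature for independent errors)
u_sum = statistics.stdev(x + y for x, y in zip(xs, ys))
print(round(u_sum, 2))   # close to (ux**2 + uy**2) ** 0.5 = 2.5

# Product: fractional uncertainties combine (in quadrature for independent errors)
u_prod = statistics.stdev(x * y for x, y in zip(xs, ys))
print(round(u_prod / (x0 * y0), 3))   # close to ((ux/x0)**2 + (uy/y0)**2) ** 0.5 ≈ 0.064
```

Passing between the absolute form (for the sum) and the fractional form (for the product) is exactly the “facility” Taylor is describing in the passage quoted above.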

You have admitted that you haven’t worked out any of the examples in Taylor let alone *all* of them.

Stop lying. I’ve gone through a few of the exercises. I’ve even pointed them out to you when they demonstrate you are wrong.

As Pat has pointed out you start off with a bunch of preconceptions based on nothing but your own prejudices and proceed to cherry-pick things you think confirm your prejudices.

Not true at all. I started out with little knowledge and most of what I’ve learnt is as a result of arguing with you and reading your books. And cherry-picking is a weird concept when it comes to studying texts. If a source says something it says it. If I’ve taken something out of context or it’s contradicted elsewhere you could easily point that out. You don’t have to keep insisting the entire book has to be continuously re-read for some sort of deeper meaning. That’s the sort of claim I associate with religious cults – ignore what the holy book actually says, only those with sufficient insight can tell you what it really means.

“You don’t even seem to understand that relative uncertainties are PERCENTAGES.”

A relative uncertainty is not a percentage (or PERCENTAGE). You can express a relative uncertainty as a percentage if you like.

They don’t have units.

Duh!.

u(q)/q is a PERCENTAGE.

No, u(q)/q * 100 is a percentage.

It *is* a constant that is calculated from the relative uncertainties of the individual elements!

It will be the same result for the same equation if that’s what you mean.

“u(x)/x is a constant! The absolute value of u(q) depends only on q.”

And what does q depend on? Hint, q = x / y.

As Feynman said, *YOU* are the easiest person for you to fool!

You still don’t get the contradiction in quoting that, do you? When he says “you” he means “you” – not everyone but you.

Reply to  Bellman
August 27, 2023 9:47 am

“Adding and subtracting adds absolute uncertainties.”

Adding and subtracting adds variances.
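For independent errors this is easy to verify by simulation; a sketch with invented sigmas:

```python
import random
import statistics

# Sketch of "adding and subtracting adds variances" for independent errors;
# the sigmas (3.0 and 4.0) are invented for illustration.
random.seed(7)

N = 200_000
a = [random.gauss(0.0, 3.0) for _ in range(N)]
b = [random.gauss(0.0, 4.0) for _ in range(N)]

var_sum = statistics.variance(x + y for x, y in zip(a, b))
var_diff = statistics.variance(x - y for x, y in zip(a, b))
print(round(var_sum, 1), round(var_diff, 1))   # both close to 3**2 + 4**2 = 25
```

Note that the variance of the difference is just as large as that of the sum: subtracting two uncertain quantities does not cancel their uncertainties.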

Reply to  Pat Frank
August 27, 2023 10:04 am

And variances are absolute.

Reply to  Bellman
August 27, 2023 2:53 pm

They’re variances of the uncertainty. The uncertainties themselves are plus/minus.

Reply to  Pat Frank
August 27, 2023 5:13 pm

They’re variances of the uncertainty

Not sure what you mean by that. The uncertainties are not varying.

The uncertainties themselves are plus/minus.

Not really. Unless you mean errors. Standard uncertainty is the standard deviation of the probability distribution. It’s always positive. Writing ± is just to indicate they represent a range.

Again, it would be helpful if you quoted your definition of uncertainty.

Reply to  Bellman
August 27, 2023 5:41 pm

Standard uncertainty is the standard deviation of the probability distribution. It’s always positive.

No. It’s ±u.

Writing ± is just to indicate they represent a range.

No. It’s written ±u because it’s sqrt(V).

Reply to  Pat Frank
August 27, 2023 5:55 pm

A square root in mathematics can be positive or negative, but a standard deviation is always positive. Insert a negative standard deviation into any freeware or Excel function requiring the standard deviation as an argument, and this [#NUM!] is what you get.

Thanks for the latest lesson for the fora, Bellman….
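One concrete instance of the software behaviour described here: Python’s `statistics.NormalDist` rejects a negative value passed as the standard deviation.

```python
import statistics

# Passing a negative sigma as a standard deviation is rejected outright
try:
    statistics.NormalDist(mu=0.0, sigma=-1.0)
except statistics.StatisticsError as exc:
    print("rejected:", exc)
```

Whether that software convention settles the notational argument about ±√ is, of course, what the rest of the thread goes on to dispute.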

Reply to  bigoilbob
August 27, 2023 6:30 pm

“And now blob has jumped into dah ring to give LoopholeMan a boost, and he shows dat he knows even less about dah subject!”

“Oh I’m tellin’ yahs, dah crowd is lovin’ this spectacle, Marv!”

Reply to  bigoilbob
August 27, 2023 9:43 pm

Computer glitches don’t count.

√(xᵢ-x̅)² is always +/- because the difference is a dispersion about a mean.

Declaration doesn’t reify nonsense.

Reply to  Pat Frank
August 28, 2023 5:26 am

Dispersion about the mean is Bellman’s point. Mine is that standard deviation is defined as a positive parameter. I.e., the positive distance of that dispersion at the 1 sigma level. I don’t think that reating it as such is a computer glitch.

Folks, we’ve left cute eccentricity and are entering Dan Kahan System 2 territory here. Dr. Frank is undoubtedly unusually intelligent. You can’t become the go-to guy at Stanford for chem lab waste disposal without being so. But years ago Dan Kahan explained to us how smart people with over-amped amygdalae can rationalize away BS much better than the rest of us. And usually, they don’t even know they are doing so. This overwhelming “fight or flight” is the most plausible explanation for his Dig In Right Or Wrong, here, and elsewhere…

Reply to  bigoilbob
August 28, 2023 5:30 am

typo “reating” s/b “treating”. Moderator, why have an edit function if it’s blocked, even for edits within your old edit time period?

Reply to  bigoilbob
August 28, 2023 5:40 am

Who are the “folks”?

Reply to  bigoilbob
August 28, 2023 6:16 am

Mine is that standard deviation is defined as a positive parameter.

Expressing the positive root is just a convention.

Standard deviation is defined as √[Σ(xᵢ − x̄)²/(N − 1)].

You’d know all about wrong bob. You live it.

Reply to  Pat Frank
August 28, 2023 7:01 am

“Standard deviation is defined as √[Σ(xᵢ − x̄)²/(N − 1)].”

What you may be misunderstanding here is that the √ symbol always means the positive square root. It can’t be both positive and negative because it’s a function.

Standard deviation is meant to describe the size of the deviation. It makes no sense to talk of a negative size.

Reply to  Bellman
August 28, 2023 8:02 am

LoopholeMan has his own esoteric math book.

Reply to  karlomonte
August 28, 2023 8:14 am

Do you have a book that says √ can be negative? If you do, I’d suggest getting a refund.

Reply to  Bellman
August 28, 2023 9:15 am

Quadratic formula, LoopholeMan, two roots.

Try again.

Reply to  karlomonte
August 28, 2023 12:35 pm

It’s your choice. Either continue to demonstrate your ignorance, or actually try to learn something.

Hint, why do you think the quadratic formula has a ± in front of the √ ?

Reply to  Bellman
August 28, 2023 3:08 pm

Either continue to demonstrate your ignorance, or actually try to learn something.

IRONY ALERT!~!~!~!~!~!~!~!
AGAIN!~!~!~!~!~!~!~!

Reply to  Bellman
August 28, 2023 10:01 am

All positive real numbers have two square roots, one positive square root and one negative square root. The positive square root is sometimes referred to as the principal square root. The reason that we have two square roots is exemplified above. The product of two numbers is positive if both numbers have the same sign, as is the case with squares and square roots:

a² = a⋅a = (−a)⋅(−a)

https://www.mathplanet.com/education/pre-algebra/right-triangles-and-algebra/square-roots-and-real-numbers

See the image from:

https://www.bigideasmath.com/MRL/public/app/#/mortimer/student/dynamic-classroom

Look at the second bullet point.

[attached image: PSX_20230828_120005.jpg]
Reply to  Jim Gorman
August 28, 2023 12:44 pm

Yep. All square roots have two solutions, but the √ always indicates the positive one.

I’m really surprised (though I probably shouldn’t be) by how this very simple and clear correction triggers so much outrage. I’m also really puzzled by why those who keep shouting “uncertainty is not error”, now want to claim it’s possible to have a negative uncertainty.

Reply to  Bellman
August 28, 2023 2:01 pm

Has it not sunk into your mind yet that the positive root only represents 34% of a distribution?

What represents the other 34%?

If your average is 50 with an uncertainty of 10, then the interval of 50 to 60 only covers 34% of the possible values.

What covers the other 34%?
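For a Gaussian distribution only, the two 34% halves can be read off the normal CDF; a sketch using the comment’s own numbers (mean 50, uncertainty 10):

```python
from statistics import NormalDist

# The comment's example: mean 50, standard uncertainty 10, Gaussian assumed
nd = NormalDist(mu=50.0, sigma=10.0)

above = nd.cdf(60) - nd.cdf(50)   # mean up to mean + 1 sigma
below = nd.cdf(50) - nd.cdf(40)   # mean - 1 sigma up to mean
print(round(above, 4), round(below, 4))   # 0.3413 0.3413, together ~68%
```

The ±1σ interval covers about 68% precisely because it includes both the 34% above and the 34% below the mean, which is the role of the ± sign in the quoted interval.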

Reply to  Tim Gorman
August 28, 2023 3:37 pm

Has it not sunk into your mind yet that the positive root only represents 34% of a distribution?

That’s why you use ±.

Reply to  Bellman
August 28, 2023 3:56 pm

Then why does the GUM speak of 68%?

Reply to  Tim Gorman
August 28, 2023 4:04 pm

Could it be because it’s talking about a normal distribution?

Reply to  Bellman
August 28, 2023 4:06 pm

Sorry, I thought you were replying to a different comment. You will have to be more specific – which 68% are you talking about?

Reply to  Tim Gorman
August 28, 2023 3:38 pm

Also, there you go assuming all distributions are Gaussian.

Reply to  Bellman
August 28, 2023 3:58 pm

Right. The guy that is trying to explain to you that the average height of Shetlands mixed with Quarter horses is *NOT* a Gaussian distribution is assuming all distributions are Gaussian.

Put down the bottle.

Reply to  Tim Gorman
August 28, 2023 5:01 pm

You assumed it when you said that half a standard deviation would cover 34%.

Reply to  Bellman
August 28, 2023 12:11 pm

√ means square root.

“positive size” is you attempting a deflection.

Reply to  Pat Frank
August 28, 2023 1:41 pm

Is this the point where I’m supposed to “win” the debate by saying you should take a course in mathematics?

Or just read this; I’m sure there are proper textbooks that explain it as well:

https://en.wikipedia.org/wiki/Square_root

Every nonnegative real number x has a unique nonnegative square root, called the principal square root, which is denoted by √x, where the symbol √ is called the radical sign or radix. For example, to express the fact that the principal square root of 9 is 3, we write √9 = 3. The term (or number) whose square root is being considered is known as the radicand. The radicand is the number or expression underneath the radical sign, in this case, 9.

Every positive number x has two square roots: √x (which is positive) and -√x (which is negative). The two roots can be written more concisely using the ± sign as ± √x. Although the principal square root of a positive number is only one of its two square roots, the designation “the square root” is often used to refer to the principal square root.
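The convention the quote describes is the same one programming languages follow; a minimal sketch:

```python
import math

# math.sqrt, like the radical sign, returns only the principal (non-negative) root
print(math.sqrt(16))   # 4.0, never -4.0

# Both roots of x**2 = 16 are recovered by writing the sign explicitly
print(math.sqrt(16), -math.sqrt(16))   # 4.0 -4.0

# Same convention in the quadratic formula: the +/- sits in front of the radical
a, b, c = 1.0, -1.0, -6.0              # x**2 - x - 6 = 0
disc = math.sqrt(b * b - 4 * a * c)    # principal root of the discriminant
print((-b + disc) / (2 * a), (-b - disc) / (2 * a))   # 3.0 -2.0
```

The ± in the quadratic formula exists precisely because √ alone denotes only the principal root; the negative root has to be written in by hand.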

Reply to  Bellman
August 28, 2023 2:31 pm

“the designation ‘the square root’ is often used to refer to the principal square root.”

This may be true for those that don’t live in the real world. I would point your attention to the word “often” in the sentence above.

It is often in “math world” or “statistics world”. It is *not* true for “real world”.

  1. A positive carrier signal mixed with a positive audio signal.
  2. Caster, camber, or toe-in on the front end of an auto.
  3. Uncertainty in length of a 2″x4″ board which can be longer or shorter than nominal.
  4. Any non-linear process can result in both a positive and negative result

Living in the real world means considering the positive and negative roots *all* the time, not just often.

Reply to  Tim Gorman
August 28, 2023 3:35 pm

This may be true for those that don’t live in the real world.

How many times have you insulted me by saying I need to get a maths education? And now you insist that the correct mathematical terms are not the “real world” and it’s the non-mathematical uses that are the “real world”.

Tough. My usage is the same as in Taylor, GUM and I would imagine every other book on metrology. If you don’t think they are living in the real world why even study metrology.

“Uncertainty in length of a 2″x4″ board which can be longer or shorter than nominal.”

“UNCERTAINTY IS NOT ERROR”

Living in the real world means considering the positive and negative roots *all* the time, not just often.

Strawman time again.

Reply to  Bellman
August 28, 2023 3:55 pm

I just gave you four examples of where the sqrt symbol indicates both positive and negative. BOTH! And you somehow think that isn’t *real* world!

Correct mathematical terms are in your head only. The correct interpretation of the square root sign in the REAL WORLD is both positive and negative. If I am mixing a carrier with an audio signal and I only use the POSITIVE root in the math then I will miss half the signal.

The math for mixing f_c with f_a in a non-linear device (e.g. a diode with an x² response) *has* to include both the positive and negative roots. It HAS TO. If you only work the math using the positive square root, i.e. the principal root, then you get an incomplete picture of what is happening.

“the designation ‘the square root’ is often used to refer to the principal square root.”

ONLY IN MATH WORLD.

“Tough. My usage is the same as in Taylor, GUM and I would imagine every other book on metrology. If you don’t think they are living in the real world why even study metrology.”

No, it isn’t! You’ve been given graphs from Taylor showing that it isn’t! And you just ignore them!

They *are* living in the real world. That’s why measurements are stated as “stated value ± uncertainty” and not as “stated value + uncertainty”!

“Strawman time again.”

REAL WORLD TIME AGAIN!

Reply to  Tim Gorman
August 28, 2023 4:14 pm

I just gave you four examples of where the sqrt symbol indicates both positive and negative.

Then the examples were using it wrong.

The correct interpretation of the square root sign in the REAL WORLD is both positive and negative.

Citation required.

If I am mixing a carrier with an audio signal and I only use the POSITIVE root in the math then I will miss half the signal.

Have you considered putting a negative sign in front of the √ when you want the negative root?

“The math for mixing f_c with f_a in a non-linear device (e.g. a diode with an x² response) *has* to include both the positive and negative roots.”

I don’t care how many times you keep claiming things like this. Show me an actual example of it being used incorrectly.

You’ve been given graphs from Taylor showing that it isn’t! And you just ignore them!

I’ve wasted the entire evening trying to answer your dozens of comments – then you complain that there was one I missed.

I can see nowhere where Taylor gets it wrong. E.g. in equation (4.8) he says the standard deviation is approximately 0.7. He does not suggest this is both positive and negative.

Reply to  Bellman
August 28, 2023 5:27 pm

“Have you considered putting a negative sign in front of the √ when you want the negative root?”

OMG! What do you think the ± symbol is for!

“I don’t care how many times you keep claiming things like this. Show me an actual example of it being used incorrectly.”

Of what being used incorrectly?

From edaboard.com:

———————————————————-
Mixing occurs whenever the equation is non-linear. For CMOS transistors the equation is a square function. For a diode it is an exponential. Nevertheless, both are non-linear and both perform mixing. The equation for every non-linear device is given by:

out = K1(A+B)+K2(A+B)²+K3(A+B)³ + …

As you can see every non-linear equation includes a square law term, including the diode.

—————————————————–

When you analyze the “out” function you *must* consider both the negative and positive roots of the two squared terms.

It’s impossible to prove a negative. I can’t show you where the square root is being used incorrectly other than in the quote *YOU* provided. The word “often” in that quote only applies to math world and statistics world, not to the real world.
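The sum- and difference-frequency products come out of the square-law term via the product-to-sum identity; a minimal sketch (invented carrier and audio frequencies):

```python
import math

# cos(a)*cos(b) = 0.5*cos(a+b) + 0.5*cos(a-b): the square-law term of a mixer
# generates both the sum and difference frequencies, f_c + f_a and f_c - f_a.
f_c, f_a = 1000.0, 50.0   # invented carrier and audio frequencies, Hz

for t in (0.0001, 0.0007, 0.0013):
    a = 2 * math.pi * f_c * t
    b = 2 * math.pi * f_a * t
    lhs = math.cos(a) * math.cos(b)
    rhs = 0.5 * math.cos(a + b) + 0.5 * math.cos(a - b)
    assert abs(lhs - rhs) < 1e-12
print("product term = sum frequency + difference frequency")
```

The identity is why an AM spectrum shows sidebands both above and below the carrier: both signs appear in the trigonometry whether or not the radical-sign convention is invoked.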

Reply to  Tim Gorman
August 28, 2023 6:15 pm

OMG! What do you think the ± symbol is for!

For exactly the purposes I keep telling you: to tell you the positive value can be added or subtracted, or that it indicates a range of values.

When you analyze the “out” function you *must* consider both the negative and positive roots of the two squared terms.

You’re really working yourself into a lather about something I’m not disagreeing with. If you just read what I and everyone else said, you’d realize how much you are wasting both of our time.

For the last time, hopefully, I am not saying you ignore the negative root. I am saying that √ is the positive square root and if you want the negative one you just need to multiply it by (-1) or subtract it. I am not saying you ignore the range caused by negative deviations, I am saying the standard deviation is a positive value, and the range is given by adding ± to it.

Reply to  Bellman
August 28, 2023 5:14 pm

Your Wiki source: “For example, 4 and −4 are square roots of 16 because 4² = (−4)² = 16.”

You claimed that “the √ symbol always means the positive square root.”

It doesn’t. Your own source contradicts you. The text you quoted contradicts you.

You’re now shifting your ground to principal square root, which is merely another convention.

Reply to  Pat Frank
August 28, 2023 5:26 pm

This is just getting weird. You are so convinced that it’s impossible you might have made a minor mistake that you are having to unsee what the article actually says.

I mean it literally says “√x (which is positive)”

“For example, 4 and −4 are square roots of 16 because 4² = (−4)² = 16.”

Yes, it has two roots, one positive one negative, the √ means the positive one. I don’t see why this is a hill you want to die on.

Reply to  Bellman
August 28, 2023 1:56 pm

You are a statistician. As an engineer I know you *have* to consider both positive and negative roots. For example, consider an rf carrier frequency of a transmitter mixed with an audio signal, i.e. your standard AM radio signal in your car.

You will get frequencies both above (positive) and below (negative) the carrier. I.e. f_c + f_a AND f_c – f_a.

The square root *only* means the positive square root in the “math world” or the “statistics world”. In the real world you *have* to consider both the positive and negative value of the square root.

Someday maybe you’ll decide to actually do some study of the real world but I doubt it.
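The AM sideband example above can be sketched numerically (a minimal illustration with made-up values; the function name is mine, not from the thread):

```python
# Minimal sketch of the AM example above: mixing a carrier f_c with an audio
# tone f_a yields sidebands at f_c + f_a (upper) and f_c - f_a (lower).
def am_sidebands(f_c, f_a):
    """Return the (lower, upper) sideband frequencies in Hz."""
    return (f_c - f_a, f_c + f_a)

# A 1 MHz carrier modulated by a 5 kHz tone gives sidebands at
# 995 kHz and 1005 kHz:
lower, upper = am_sidebands(1_000_000, 5_000)
```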

Reply to  Tim Gorman
August 28, 2023 3:44 pm

As an engineer I know you *have* to consider both positive and negative roots.

How many more times are you going to repeat this wretched strawman? You always want to consider positive and negative roots. That does not mean standard deviations or uncertainties are ever negative. You consider it by adding or subtracting them. Hence the ±. If any of these values were already both positive and negative, the ± would be redundant, you could just use +.

Reply to  Bellman
August 28, 2023 4:02 pm

 If any of these values were already both positive and negative, the ± would be redundant, you could just use +.”

Then why did you provide the quote:

the designation “the square root” is often used to refer to the principal square root.”

The principal square root being the POSITIVE term. Your quote implies that + *is* what is most often used!

Reply to  Tim Gorman
August 28, 2023 5:02 pm

This guy is loony tunes!

Reply to  Tim Gorman
August 28, 2023 7:08 pm

I’m really not sure what difficulty you are having understanding any of this. It’s all very basic stuff, and you’re the one who keeps saying I need a basic education in math.

A square root is the solution to an equation x² = y. We say that the solution is the square root of y. As some mathematicians decided to invent negative numbers, despite negative numbers not existing in the real world, you ended up with rules that imply that actually there are two solutions to the equation, one positive and one negative.

This complicates things a bit. For one, if you just wrote a symbol to represent the square root, it wouldn’t be a function, as it would give two results. This is why, when talking about square roots in general, you have to use the formulation x² = y. When a symbol is used to designate a square root, it actually has to be defined as the positive square root so it can be used as a function. Hence √ is not strictly the square root function; it’s a function giving the positive square root. If you want the negative square root you have to come up with a way of making the positive number negative, and if you want to use the symbol in an equation where both roots are possible, such as the quadratic formula, you need another symbol to indicate it can be both added and subtracted.

Finally, as the quote says, not everyone is as rigorous as mathematicians would like, so some mistakenly call √ the square root symbol, and some use square root to mean the positive square root.

This last point makes sense when you remember that negative numbers don’t exist in the real world, so usually you only want the positive number. If a chess board has 64 squares, how many columns does it have? Nobody would expect the answer −8.

Is that enough for now, or do you want me to explain imaginary numbers as well?
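The convention described above can be sketched in a few lines (a minimal Python illustration; the function names are mine). `math.sqrt` is the principal root, and the quadratic formula writes the ± explicitly to recover both solutions:

```python
import math

# math.sqrt implements the principal (positive) square root; both solutions
# of x**2 == y are then +sqrt(y) and -sqrt(y).
def both_roots(y):
    r = math.sqrt(y)          # principal root, always >= 0
    return (r, -r)

# The quadratic formula makes the +/- explicit to recover both solutions:
def quadratic_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)   # principal root of the discriminant
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# x**2 - 16 == 0: both_roots(16) gives (4.0, -4.0), the same pair as
# quadratic_roots(1, 0, -16).
```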

Reply to  Bellman
August 28, 2023 5:19 pm

That does not mean standard deviations or uncertainties are ever negative.

Uncertainties can be made arbitrarily positive or negative merely by shifting the physical reference frame. No significance is lost or gained by the shift.

Reply to  Pat Frank
August 28, 2023 6:29 pm

You can if you want. It just goes against the international standards, and seems needlessly confusing.

The GUM explicitly says that standard uncertainty is always positive. Insisting that it could be negative if you want has wasted an entire day, and distracted from the discussion on resolution – and demonstrated that many of the experts here don’t understand basic mathematical symbols. So, I’d hate to think what would happen if you actually used a negative uncertainty in your paper.

Reply to  bigoilbob
August 28, 2023 6:31 am

Oh look, blob is an ageist, what a surprise.

The truth is, you and LoopholeMan are the ones with dementia, certainly not Pat.

Clown.

Reply to  karlomonte
August 28, 2023 6:41 am

It’s not necessarily dementia. Dr. Frank is intelligent, and a little self awareness might be enough to go a long way. But then again, his on line behavior might indeed be a side effect of aging, and/or depression. Intervention by a gerontologist for the first. The right meds have helped family members suffering from the second.

Reply to  bigoilbob
August 28, 2023 6:53 am

Yes, you are a bigoted clown.

Reply to  karlomonte
August 28, 2023 7:30 am

“bigoted”? Sorry/not sorry, km. You can’t back that up.

With Dr. Frank, I’m reminded of fellow Missourian Rush Limbaugh. With him, it was opioid dependence, and his failure to rehab. But multiple tries are often required. If he hadn’t been cut down early, the next stint might have done the trick. The scales would have fallen from his eyes and his Q corrosivity would have slipped away. I hope that Dr. Frank has the time to change – whatever – and to thereby make meaningful, consequential, future contributions.

Reply to  bigoilbob
August 28, 2023 7:52 am

You spew paragraphs of unintelligible word salads — dementia.

And yeah, you are a bigot — toward anyone who doesn’t adhere to your insane marxist leftist worldview.

Reply to  bigoilbob
August 28, 2023 11:53 am

meaningful, consequential, … contributions.”

You’ve signaled repeatedly an inability to recognize them when they occur.

Reply to  bigoilbob
August 28, 2023 11:58 am

When you can’t win the argument on facts or logic, defame your opponent.

Let’s see: the next gambit in the loser series is for you to pound the table, b.o.b.

Reply to  bigoilbob
August 28, 2023 12:08 pm

You’ve lost the debate on the merits, b.o.b.

Derogating one’s opponent post-contest merely expresses a small-minded bitterness.

Reply to  Pat Frank
August 27, 2023 6:19 pm

No. It’s written ±u because it’s sqrt(V).

Not from any definition I’ve seen. u is always the positive square root of s².

I’m not sure what negative uncertainty would mean, given that zero means no uncertainty.

Reply to  Bellman
August 27, 2023 6:22 pm

e.g. GUM 5.1.2

The combined standard uncertainty u_c (y) is the positive square root of the combined variance u_c^2(y),…

Reply to  Bellman
August 27, 2023 10:06 pm

GUM 4.3.4: “EXAMPLE A calibration certificate states that the resistance of a standard resistor RS of nominal value ten ohms is 10,000 742 Ω ± 129 μΩ at 23 °C and that “the quoted uncertainty of 129 μΩ defines an interval having a level of confidence of 99 percent”.

4.3.5: “EXAMPLE A machinist determining the dimensions of a part estimates that its length lies, with probability 0,5, in the interval 10,07 mm to 10,15 mm, and reports that l = (10,11 ± 0,04) mm, meaning that ± 0,04 mm defines an interval having a level of confidence of 50 percent.”

4.3.6: “Consider a case similar to that of 4.3.5 but where, based on the available information, one can state that “there is about a two out of three chance that the value of Xi lies in the interval a− to a+” (in other words, the probability that Xi lies within this interval is about 0,67). One can then reasonably take u(xi) = a, because for a normal distribution with expectation μ and standard deviation σ the interval μ ± σ encompasses about 68,3 percent of the distribution.”

4.3.9: “NOTE 1 For a normal distribution with expectation μ and standard deviation σ, the interval μ ± 3σ encompasses approximately 99,73 percent of the distribution.”

6.2.1: “The expanded uncertainty U is obtained by multiplying the combined standard uncertainty u𝔠(y) by a coverage factor k:
U = ku𝔠(y) (18)
The result of a measurement is then conveniently expressed as Y = y ± U,…”

7.2.2: “When the measure of uncertainty is u𝔠(y), … 4) “m𝑠 = (100,021 47 ± 0,000 35) g, where the number following the symbol ± is the numerical value of (the combined standard uncertainty) u𝔠 and not a confidence interval.” (Combined standard uncertainty includes systematic error.)

Need I go on?
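The relation quoted in 6.2.1 above, U = k·u𝔠(y) with the result reported as Y = y ± U, can be sketched numerically (values and function names here are illustrative only, not from the GUM):

```python
# Sketch of GUM 6.2.1 as quoted above: U = k * u_c, and the result is
# reported as Y = y ± U.
def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty U = k * u_c (k = 2 is a common coverage factor)."""
    return k * u_c

def coverage_interval(y, u_c, k=2.0):
    """Endpoints of the interval y ± U."""
    U = expanded_uncertainty(u_c, k)
    return (y - U, y + U)

# e.g. y = 10.0, u_c = 0.25, k = 2 -> U = 0.5 and the interval (9.5, 10.5)
```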

Reply to  Pat Frank
August 28, 2023 3:21 am

As I said, ± defines an interval. It does not mean the uncertainty is both positive and negative.

Reply to  Bellman
August 28, 2023 6:20 am

(plus/minus) — what does “minus” mean?

Reply to  Pat Frank
August 28, 2023 7:11 am

It means subtract.

But the symbol ± can mean you can add or subtract the value, or that the value can be taken to be either negative or positive. However, it can also in some systems be used to indicate a range of values.

Reply to  Bellman
August 28, 2023 7:53 am

LoopholeMan still hasn’t figured out that uncertainty is not error.

Reply to  karlomonte
August 28, 2023 8:35 am

I’m not the one confusing deviation with standard deviation. I’m not the one claiming uncertainty can be negative.

Reply to  Bellman
August 28, 2023 9:16 am

I’m not the one confusing deviation with standard deviation.

Hahahahahahahahah

Reply to  karlomonte
August 28, 2023 9:23 am

Does this help?

Reply to  Jim Gorman
August 28, 2023 11:46 am

And they wonder why no one takes them seriously?

Reply to  Bellman
August 28, 2023 11:49 am

You’re the one confusing convention with definition. Uncertainty is the standard deviation of error about a mean (or standard) value.

If the mean is itself negative — a negative Voltage, say — then all the uncertainty values may be negative.

If a mean is a small positive value, then the uncertainty can extend into the negative. The only case where this is not true is when the uncertainty follows a Poisson distribution.

You’ve argued yourself into defending a foolish position. We can let it die a natural death or you can huff.

Reply to  Pat Frank
August 28, 2023 1:19 pm

Uncertainty is the standard deviation of error about a mean (or standard) value.”

Oh dear, now you’ll have someone screaming “UNCERTAINTY IS NOT ERROR”.

If the mean is itself negative — a negative Voltage, say — then all the uncertainty values may be negative.”

Under what definition of uncertainty?

If you say uncertainty is a standard deviation, then by definition it’s not negative.

Or you can provide some evidence that there is any definition or convention that uses negative uncertainties. Again, uncertainty is not error. There can be negative errors, but the standard deviation of them will be positive.

I’ll go first. Here’s the GUM definition of combined standard uncertainty

standard uncertainty of the result of a measurement when that result is obtained from the values of a number of other quantities, equal to the positive square root of a sum of terms, the terms being the variances or covariances of these other quantities weighted according to how the measurement result varies with changes in these quantities

My emphasis.
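The definition quoted above — the positive square root of a sum of variances — can be sketched as a root-sum-square (a minimal illustration; independent inputs and unit sensitivity coefficients are assumed for simplicity, and the function name is mine):

```python
import math

# Sketch of the quoted GUM definition: the combined standard uncertainty is
# the POSITIVE square root of a sum of variances (independent inputs, unit
# sensitivity coefficients assumed).
def combined_standard_uncertainty(us):
    """Root-sum-square of the component standard uncertainties."""
    return math.sqrt(sum(u * u for u in us))

# Components of 3 and 4 combine to u_c = 5.0 (always positive); the reported
# dispersion is then written as the interval ±u_c about the result.
```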

Reply to  Bellman
August 28, 2023 6:11 pm

GUM 4.3.4: “EXAMPLE A calibration certificate states that the resistance of a standard resistor RS of nominal value ten ohms is 10,000 742 Ω ± 129 μΩ at 23 °C and that “the quoted uncertainty of 129 μΩ defines an interval having a level of confidence of 99 percent”.

4.3.5: “EXAMPLE A machinist determining the dimensions of a part estimates that its length lies, with probability 0,5, in the interval 10,07 mm to 10,15 mm, and reports that l = (10,11 ± 0,04) mm, meaning that ± 0,04 mm defines an interval having a level of confidence of 50 percent.”

4.3.6: “Consider a case similar to that of 4.3.5 but where, based on the available information, one can state that “there is about a two out of three chance that the value of Xi lies in the interval a− to a+” (in other words, the probability that Xi lies within this interval is about 0,67). One can then reasonably take u(xi) = a, because for a normal distribution with expectation μ and standard deviation σ the interval μ ± σ encompasses about 68,3 percent of the distribution.”

4.3.9: “NOTE 1 For a normal distribution with expectation μ and standard deviation σ, the interval μ ± 3σ encompasses approximately 99,73 percent of the distribution.”

6.2.1: “The expanded uncertainty U is obtained by multiplying the combined standard uncertainty u𝔠(y) by a coverage factor k:
U = ku𝔠(y) (18)
The result of a measurement is then conveniently expressed as Y = y ± U,…”

7.2.2: “When the measure of uncertainty is u𝔠(y), … 4) “m𝑠 = (100,021 47 ± 0,000 35) g, where the number following the symbol ± is the numerical value of (the combined standard uncertainty) u𝔠 and not a confidence interval.” (Combined standard uncertainty includes systematic error.)

“Annex J* Glossary of principal symbols

“U expanded uncertainty of output estimate y that defines an interval Y = y ± U having a high level of confidence, equal to coverage factor k times the combined standard uncertainty u𝔠(y) of y: U = ku𝔠(y)”

U = ku𝔠(y), therefore and necessarily ±U = k±u𝔠.

My emphasis throughout.

The combined standard uncertainty is a (plus/minus) dispersion calculated from √(variance).

As noted previously, the description of u𝔠(y) as the positive root is a convention.

Statistical uncertainty dispersions about a mean or an experimental measurement are never interpreted as restricted to a single positive tail.

Reply to  Pat Frank
August 28, 2023 8:04 pm

So many quotes. None saying the uncertainty can be negative. Yet you fail to mention all the times it says the uncertainty is the positive square root. E.g.

2.3.4

standard uncertainty of the result of a measurement when that result is obtained from the values of a number of other quantities, equal to the positive square root of a sum of terms, the terms being the variances or covariances of these other quantities weighted according to how the measurement result varies with changes in these quantities

3.3.5 The estimated variance u² characterizing an uncertainty component obtained from a Type A evaluation is calculated from series of repeated observations and is the familiar statistically estimated variance s² (see 4.2). The estimated standard deviation (C.2.12, C.2.21, C.3.3) u, the positive square root of u², is thus u = s and for convenience is sometimes called a Type A standard uncertainty.

5.1.2 The combined standard uncertainty u_c (y) is the positive square root of the combined variance u²_c (y) …

C.2.12

standard deviation (of a random variable or of a probability distribution) the positive square root of the variance:

C.3.3 Standard deviation

The standard deviation is the positive square root of the variance. Whereas a Type A standard uncertainty is obtained by taking the square root of the statistically evaluated variance, it is often more convenient when determining a Type B standard uncertainty to evaluate a nonstatistical equivalent standard deviation first and then to obtain the equivalent variance by squaring the standard deviation.

And then there’s Annex J, Glossary of principal symbols, which mentions “positive square root” about 10 times.

By all means call this a “convention” but it’s a pretty universal and sensible one.

Reply to  Pat Frank
August 29, 2023 4:56 am

As noted previously, the description of u𝔠(y) as the positive root is a convention.

“What’s the good of Mercator’s North Poles and Equators,

   Tropics, Zones, and Meridian Lines?”

So the Bellman would cry: and the crew would reply

   “They are merely conventional signs!”

Reply to  Bellman
August 28, 2023 11:31 am

The minus sign indicates a negative value.

Reply to  Pat Frank
August 28, 2023 1:04 pm

It can do, but you asked what “minus” meant.

In the context of the ± sign, it means the equation has two solutions, one obtained by adding the other by subtraction. Or it can indicate a range of values either side of a central value. What it does not indicate is that a value is both positive and negative simultaneously.

Reply to  Bellman
August 28, 2023 2:42 pm

Now you are just making stuff up again!

No one said anything about a number being both positive and negative simultaneously. That’s the only argument you could make to justify you now agreeing with us that a square root has two solutions – which is what you started off denying.

Reply to  Tim Gorman
August 28, 2023 3:11 pm

Another pathetic strawman argument. I have never said that a square root does not have two solutions. What I have said is that the symbol √ indicates the positive square root, that standard deviations are taken to be the positive square root, and that uncertainty is always taken to be positive.

I’m really not even sure what the issue is. It just started with me stating that the rules of propagation are that when you add or subtract values, you add absolute uncertainties, and then Pat going on about adding variances.

Possibly there is a confusion about the use of the overloaded word “absolute”. But then he said that uncertainties could be positive or negative, or maybe he meant they could be added or subtracted. I’ve lost track of all the nonsense. And it’s all been a massive distraction from what I was hoping to be talking about today.

But my point is, uncertainties cannot be negative, and they are not subtracted in any propagation equation.

Reply to  Bellman
August 28, 2023 12:43 pm

you can add or subtract the value” (bolding mine, tpg)

Judas H. Priest!

You *really* think ± doesn’t indicate an INTERVAL?

68% of the population in a normal distribution does *NOT* exist in +SD or in -SD. It lies in the INTERVAL ±SD.

Reply to  Tim Gorman
August 28, 2023 1:54 pm

Calm down. I’ve said several times ± can indicate an interval.

Reply to  Bellman
August 28, 2023 9:00 am

You can’t be this dense.

What does the interval of ±(-5) mean?

Have you ever seen this ANYWHERE?

Reply to  Jim Gorman
August 28, 2023 5:03 pm

Yes, he can be this dense.

Reply to  Jim Gorman
August 29, 2023 11:17 am

What does the interval of ±(-5) mean?

It means the same as ±5. What would be the point of writing it as a negative?

Have you ever seen this ANYWHERE?

No. That’s why I find this whole discussion so strange. All of a sudden it seems everyone thinks a standard deviation or an uncertainty can be negative.

Reply to  Bellman
August 27, 2023 10:09 pm

Not from any definition I’ve seen.

That explains it, then.

I’m not sure what negative uncertainty would mean, given that zero means no uncertainty.

So, you’re saying the negative deviation of a Gaussian interval, mean of 0, is meaningless.

Reply to  Pat Frank
August 28, 2023 7:21 am

“That explains it, then.”

And again you avoid saying what definition you are using.

“So, you’re saying the negative deviation of a Gaussian interval, mean of 0, is meaningless.”

No. Deviation can be negative, but the standard deviation describes the “amount” of deviation. It roughly represents the average distance all points are from the mean. And distance, by definition, is never negative.
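The distinction here can be sketched in a few lines (a minimal illustration with made-up data; the function names are mine): individual deviations carry a sign, while the standard deviation is a single non-negative magnitude.

```python
import math

# Individual deviations from the mean can be negative; the standard deviation
# (the positive root of the variance) is a single non-negative magnitude.
def deviations(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def population_std(xs):
    ds = deviations(xs)
    return math.sqrt(sum(d * d for d in ds) / len(xs))

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # mean 5.0
# deviations(xs) holds both negative and positive entries (e.g. -3.0 and 4.0),
# but population_std(xs) == 2.0, reported as mean ± 2.0.
```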

Reply to  Bellman
August 28, 2023 9:26 am

“””””Deviation can be negative bit standard deviation is described the “amount” of deviation. It roughly represents the average distance all points are from the mean. “””””

ALL POINTS ARE FROM THE MEAN!

No wonder he has a hard time understanding!

Reply to  Jim Gorman
August 29, 2023 11:20 am

ALL POINTS ARE FROM THE MEAN!

Obviously you think that’s a telling argument, as you wrote it in capital letters, but that doesn’t mean it makes sense to anyone else. If you want me to understand what you think, try writing with fewer capitals and more clarity.

Reply to  Bellman
August 28, 2023 11:27 am

So, you deny the existence of the -x axis.

The definition is the algebraic method and Cartesian coordinates. Evidently, you deny that, too.

Reply to  Pat Frank
August 28, 2023 12:52 pm

This is just getting ab-surd. I’m glad I didn’t take your advice to do a chemistry degree, if this is the standard of the mathematics they teach.

No. I do not deny negative, or imaginary, or any other sort of number. But distance is not a point on the negative part of an axis. It’s a function describing the distance between two points, and is never negative. E.g. the standard distance between a and b is |a − b|.
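The metric-space point being made here can be sketched as follows (a minimal illustration; the names are mine, and the west-is-negative convention is just the one used in this thread’s delivery-truck example): distance |a − b| is never negative, while a signed displacement can be.

```python
# Distance between two points on a line is |a - b| and is never negative;
# the signed displacement (which way, and how far) can be.
def distance(a, b):
    return abs(a - b)

def displacement(origin, p):
    return p - origin          # negative here means "west of" the origin

# A truck that ends up 5 miles west of the dock (origin 0) is at p = -5:
# displacement(0, -5) == -5, but distance(0, -5) == 5.
```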

Reply to  Bellman
August 28, 2023 2:40 pm

ROFL!!!

Distance has negative values. If you are instructing a driver to go east to a delivery and then west to a delivery you *are* giving him positive and negative distances. His distance from the loading dock is his positive distance (east) plus his negative distance (west). If he goes 5 miles east and then 5 miles west he is zero distance from the loading dock.

How do you suppose algorithms meant to minimize delivery times work? You want to minimize the distance someone has to go to get to the delivery dock to pick up a new order. Meaning you better keep track of how far your drivers are from the delivery dock.

How many times do you have to be told to join the rest of us in the REAL world?

Reply to  Tim Gorman
August 28, 2023 3:28 pm

Distance has negative values.

Not in a metric space definition. Looking into it, there is such a thing as signed distance, but I doubt that’s what you are talking about.

If he goes 5 miles east and then 5 miles west he is zero distance from the loading dock.”

That’s more to do with distance traveled. I’m talking about the distance between two points.

“Meaning you better keep track of how far your drivers are from the delivery dock.”

Do you think they will ever be a negative distance from the dock?

How many times do you have to be told to join the rest of us in the REAL world?

In the REAL world if I was told I was -5km from my destination, I would think there was something odd with the REAL world. If I move closer to the destination the distance decreases, if I miss it and move further away the distance increases, it does not become negative.

That’s my REAL world concept of distance.

Reply to  Bellman
August 28, 2023 3:37 pm

Not in a metric space definition. Looking into it, there is such a thing as signed distance, but I doubt that’s what you are talking about.””

Like Pat said, you’ve apparently never heard of Cartesian coordinates.

That’s more to do with distance traveled. I’m talking about the distance between two points.”

IT HAS TO DO WITH HIS LOCATION! 5 miles east and then 5 miles west is *NOT* a distance of zero. But his location from the origin IS!

“Do you think they will ever be a negative distance from the dock?”

I just showed you there can be. If he goes 5 miles west, then how far from the dock is he? The origin starts at ZERO, not the farthest distance the driver has traveled west!

“In the REAL world if I was told I was -5km from my destination, I would think there was something odd with the REAL world.”

Your experience with the real world is showing again! This is *exactly* how the fire detection robot I worked on looked at its location! It started at origin 0, and then we tracked its location on an x,y cartesian map. If it went in the front door and turned left then it would wind up a negative distance from the origin!

When are you going to accept that you know almost NOTHING about the real world and work to correct the deficiency?

Reply to  Tim Gorman
August 28, 2023 4:27 pm

Like Pat said, you’ve apparently never heard of Cartesian coordinates.

No, I’m just a poor country hick who never had any of that book learning like what you have.

Really, try arguing about what I say, rather than fantasizing about how superior you are.

Now, what metric do you want to use on a Cartesian plane that will give you a negative distance?

5 miles east and then 5 miles west is *NOT* a distance of zero

Which is why I said you were talking about distance traveled. Not the distance between two points.

This is *exactly* how the fire detection robot I worked on looked at its location!

Location is not distance.

If it went in the front door and turned left then it would wind up a negative distance from the origin!

Then you were using “distance” wrong. Not a problem, coding often uses incorrect variable names.

When are you going to accept that you know almost NOTHING about the real world and work to correct the deficiency?

When are you going to stop being such an offensive snob?

Reply to  Bellman
August 28, 2023 5:32 pm

Which is why I said you were talking about distance traveled. Not the distance between two points.”

NO! I *am* talking about the distance between two points. If the delivery dock is Point-A and the delivery truck is Point-B, then by Point-B going x and then −x you wind back up at Point-A.

“Location is not distance.”

You use negative distance to perform the location calculation! You just can’t learn at all, can you?

When are you going to stop being such an offensive snob?”

When are you going to stop making assertions that are so wrong? Such as that there is no such thing as negative distance. I just showed you how negative distance is used and you stubbornly refuse to accept it!

Reply to  Tim Gorman
August 28, 2023 6:05 pm

NO! I *am* talking about the distance between two points. If the delivery dock is Point-A and the delivery truck is Point-B, then by Point-B going x and then −x you wind back up at Point-A.

OK let me try to understand your logic. Let d be the distance function. d(A, B) is the distance between point A and point B. I’m guessing at the start A = B, so d(A, B) = 0.

Now the truck moves to a new location x, so is then d(A, x) from A, then drives back to where it started, so is back at point-A, so is d(A, B) = 0 from A again.

No, sorry. I’m still not seeing where the negative distance comes in.

You use negative distance to perform the location calculation! You just can’t learn at all, can you?

But you are saying distance is the distance between two points. What does driving a negative distance from your starting point mean? You might travel a certain distance in what has been arbitrarily designated the negative direction, but you haven’t traveled a negative distance, and you are not a negative distance from your starting point.

If I want to know where I am, I can use a coordinate system, which may include negative values. But if I want to know how far I am from a given point, I want to know the distance, and that should not be negative.

I suspect what you are trying to say is that you can drive a specific distance in a specific direction. In a 1-dimensional space the direction might mean positive or negative. Combining that in your program by calling it negative distance might work for your purpose. But that doesn’t mean it makes sense in the real world. If someone tells me to drive −50 km, I’m going to assume they are mad. But if they say drive east for 50 km, I’d know what they mean.

Reply to  Bellman
August 29, 2023 4:36 am

No, sorry. I’m still not seeing where the negative distance comes in.”

Because d(A,B) requires it in order to become zero.

If B is always positive then d(A,B) will always be positive. You’ll never get back to d(A,B) = 0!

Do you *ever* think about anything before you post?

“Combine that in your program by calling it negative distance might work for your purpose. But that doesn’t mean it makes sense in the real world.”

Our robot program *was* real world. We had a working model. It could run through a maze and tell us where it was at every point in the journey!

You, on the other hand, live in statistical world and can’t absorb anything that exists in the real world.

Reply to  Tim Gorman
August 29, 2023 5:55 am

Because d(A,B) requires it in order to become zero

Stop using this nonsense to distract from the real nonsense about negative standard deviations!

Seriously though, take a course on metric spaces – or use whatever form of words works for you and your projects. Just don’t expect people in the real world to understand you when you say you are −5 km from home.

Reply to  Bellman
August 30, 2023 4:36 am

Stop using this nonsense to distract from the real nonsense about negative standard deviations!”

You already admitted that the POSITIVE standard deviation only covers 34% of the possible values.

That means that the NEGATIVE standard deviation covers the other 34%.

And yet, here you are still denying that the negative standard deviation exists.

Cognitive dissonance at its finest.

I don’t need any study on metric spaces. I already gave you an example of how we used cartesian coordinates to map where a robot was, including negative distance. And yet, here you are still denying that it can work in the real world – still claiming that there is no such thing as negative distance.

Reply to  Tim Gorman
August 28, 2023 6:20 pm

Tim, Bellman is pulling a Nick Stokes on you.

He’s deflecting the debate into a side-alley — distance — where he can confuse the issue and walk away claiming victory.

The issue at hand is the ±(uncertainty) representing the dispersion of error about a mean or a standard magnitude.

The ±(uncertainty) is the standard deviation. It is sqrt(variance) and takes both signs, because the uncertainty is two-tailed about the mean or standard value.

Bellman is dancing all around to avoid admitting that. So, he’s diverting you into a byway.

Reply to  Pat Frank
August 28, 2023 7:40 pm

Really? You think I wanted to spend all day explaining basic principles like distance? I was all set this morning to write my response to your interesting comment on resolution, but then I mentioned in passing that standard deviation was positive, and I’ve spent all day having insults hurled at me, and demands that I justify my weird belief that √ means the positive root, and that distances cannot be negative.

My problem, one of my many problems, is I’ve become somewhat OCD in answering every insulting comment thrown at me, rather than just ignoring them. The problem is whenever I do miss a question, I’ll have someone boasting that I couldn’t answer it.

The issue at hand is the ±(uncertainty) representing the dispersion of error about a mean or a standard magnitude.

I agree completely. But now I’ll have the rabble screaming UNCERTAINTY IS NOT ERROR at me.

The ±(uncertainty) is the standard deviation.

Nope. Still wrong. But I’d still like to see some source that does say that. I think people are just confused because they see ±σ and think the ± is part of the standard deviation rather than an addition to it. But scouring the internet shows no support for the concept of negative standard deviations, and lots of support for the idea they are impossible.

Try the quiz on this site for a start, at least question 3.

https://www.khanacademy.org/math/statistics-probability/summarizing-quantitative-data/variance-standard-deviation-population/a/concept-check-standard-deviation

Reply to  Bellman
August 28, 2023 9:26 pm

The issue at hand is the ±(uncertainty) representing the dispersion of error about a mean or a standard magnitude.

I agree completely. But now I’ll have the rabble screaming UNCERTAINTY IS NOT ERROR at me.

Hey LoopholeMan, “dispersion of error about a mean or a standard magnitude” does not equal “error”.

But I have to remember that you believe Einstein was “wrong about many things”, thus you inhabit an alternate universe.

Reply to  karlomonte
August 29, 2023 4:40 am

“dispersion of error about a mean or a standard magnitude” does not equal “error”.

You need to keep your mania straight. Whenever I’ve said uncertainty is not the same as error, but it does describe the extent of the dispersion of error, i.e. the standard deviation of the errors, you’ve yelled “UNCERTAINTY IS NOT ERROR” as a get out of jail free card.

You insist that because the GUM defines uncertainty in terms that avoid the word “error”, errors have nothing to do with uncertainty. As always, your arguments shift with who’s on “your side”.

But I have to remember that you believe Einstein was “wrong about many things”, thus you inhabit an alternate universe.”

Yes. The one where argument from authority is regarded as a bad thing. You seem to think because Einstein is a genius it’s impossible for him ever to be wrong. You seem to place Pat Frank in that category as well. He has a Phd, how can someone like me ever question him.

Reply to  Bellman
August 29, 2023 7:01 am

You need to keep your mania straight. Whenever I’ve said uncertainty is not the same as error, but it does describe the extent of the dispersion of error, i.e. the standard deviation of the errors, you’ve yelled “UNCERTAINTY IS NOT ERROR” as a get out of jail free card.

And you are still wrong. You CANNOT assume any “dispersion of error” from a standard uncertainty value, i.e. a probability dispersion.

You still can’t get past GO.

Einstein is a genius it’s impossible for him ever to be wrong.

“wrong about many things” — LoopholeMan

You may continue your kook dance now.

Reply to  karlomonte
August 30, 2023 5:53 am

In the real world, if you are going to assume a probability distribution (i.e. a dispersion) of error then you need to be able to JUSTIFY it. Even if your data is from multiple measurements of the same thing in the same environment using the same instrument. You can’t just *assume* something without some kind of justification.

Yet bellman assumes he can define the probability distribution of error for *anything* – usually Gaussian – without any justification at all. It’s a foible common among mathematicians that don’t have to worry about real world consequences.

Reply to  Tim Gorman
August 30, 2023 6:38 am

Yes (I used the wrong word); he’s not alone, many people see an expanded GUM uncertainty ±U and assume it means 95% of all measurements made according to however U was obtained will be within the interval.

But until and unless a probability distribution is determined, this is a leap into the dark.

And many measurements are not amenable to generating a distribution. A Monte Carlo simulation might help, but if the simulation is not an accurate representation of the process the distribution won’t have much meaning.

This was a lesson imparted into my skull by a mathematician friend who had taken the time to study the subject in depth.

Reply to  karlomonte
August 30, 2023 7:29 am

It is a 95% probability that the measurements in the series occurred within that interval. Exactly where, well, your guess is as good as mine.

If you are trying to replicate the “experiment” under repeatable conditions, good luck. If your μ±U duplicates the 1st one, you did well.

The controlling word is UNKNOWN. The GUM makes this plain. NO measurement is exact. Even a group of experiments with multiple measurements in each one doesn’t have a KNOWN exact value.

That is why one should understand that μ is NOT a known exact value. It is a guess that is LIKELY to exist within an interval of a given confidence, but there is a chance it doesn’t.

I think this is why the term error keeps cropping up. The assumption is that μ is an exact number with an interval where a measurement ± error might be seen.

Reply to  Jim Gorman
August 30, 2023 7:54 am

“Unknown” is what is totally unacceptable to these people, therefore it must be wrong.

And yet, the GUM (Guide to the Expression of Uncertainty in Measurement) clearly states that ±U is not a measure of error.

If you’ve ever seen the notation “±U_95” (subscript 95), this is an indicator that someone doesn’t understand this. What this really means is that a combined standard uncertainty ±u has been multiplied by a coverage factor, k. In the laboratory accreditation world, the standard coverage factor is simply 2, and because 2 ~= 1.96 (student’s t), it became widely assumed that it gives a 95% coverage.

A much more succinct notation is to use “±U_k=2” which tells exactly how ±U was calculated, plus it allows easy conversion back to ±u if necessary for calibrations farther down the traceability chain.
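A minimal sketch of that notation, with hypothetical numbers (the value of u here is mine, not from any standard or calibration):

```python
u = 0.35  # hypothetical combined standard uncertainty, quoted as +/-u
k = 2     # coverage factor commonly used in lab accreditation

# Expanded uncertainty, quoted as +/-U_k=2
U = k * u

# Stating k explicitly allows easy conversion back to the standard
# uncertainty farther down the traceability chain
u_recovered = U / k

print(U, u_recovered)  # 0.7 0.35
```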

And the GUM also makes it clear that whatever k is used, it doesn’t imply a probability distribution.

The reason isn’t hard to understand if you think about combined uncertainties. One variable in a given measurement might have a nice Gaussian probability distribution, but when it is combined with other variables (especially Type B uncertainties), the distribution doesn’t track over.
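The combination point can be illustrated with a small Monte Carlo sketch (all numbers hypothetical): a Gaussian component added to a rectangular Type B component yields a combined distribution whose shape is measurably non-Gaussian, even though its standard deviation is perfectly well defined.

```python
import random
import statistics

random.seed(0)

N = 100_000
# Hypothetical Gaussian component, sigma = 0.2, plus a hypothetical
# rectangular (Type B) component on [-1, 1], sigma = 1/sqrt(3)
samples = [random.gauss(0.0, 0.2) + random.uniform(-1.0, 1.0) for _ in range(N)]

mean = statistics.fmean(samples)
sd = statistics.pstdev(samples)  # ~sqrt(0.2**2 + 1/3) ~ 0.61

# Excess kurtosis is 0 for a Gaussian; the combined distribution is
# flatter than Gaussian, so it comes out clearly negative
m4 = sum((x - mean) ** 4 for x in samples) / N
excess_kurtosis = m4 / sd ** 4 - 3.0

print(round(sd, 3), round(excess_kurtosis, 2))
```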

Reply to  karlomonte
August 30, 2023 8:35 am

You know from years of dealing with measurements you learn what UNKNOWN truly means. You can build an electronic circuit and measure the noise factor, come in the next day, it has changed! What is the “true value”, who the hell knows! All you can do is make an educated guess (μ) and generate an estimate of what the uncertainty interval ±U could be.

The folks arguing here have no experience. They have not spent the time studying Taylor or the GUM with the benefit of having actually made meaningful measurements.

My early years were spent working with my father measuring rod and main bearing journals, setting up spiral cut gears, valve angles, cylinder taper, hydraulic pressures and leakage, etc. These affected customers and reputation, and guess what, profitability! Nothing like it to focus the mind!

Reply to  karlomonte
August 30, 2023 3:01 pm

“Unknown” is what is totally unacceptable to these people, therefore it must be wrong.

If by “these people” you mean me or statisticians, that’s completely wrong. Statistics is all about the unknown. You take the average of a sample, it’s different than the average of a different sample. Does that mean they come from different populations? You don’t know. Everything is random – the best you can do is say how likely it is that you would get that difference if they came from the same population. Nothing is certain, just some things are less likely.

Then there’s Bayesian statistics, where probability depends on the state of not knowing.
If you’ve ever seen the notation “±U_95” (subscript 95), this is an indicator that someone doesn’t understand this.

The GUM recommends you don’t use intervals at all, but that it’s better to just state the standard uncertainty. But they do say that if you use expanded uncertainty you should “give the approximate level of confidence associated with the interval y ± U and state how it was determined”

The reason isn’t hard to understand if you think about combined uncertainties. One variable in a given measurement might have a nice Gaussian probability distribution, but when it is combined with other variables (especially Type B uncertainties), the distribution doesn’t track over

That’s a fair point, but it does leave me wondering why the emphasis is on the coverage factor. If you don’t know the distribution, what does twice the standard deviation tell you, that the standard deviation doesn’t?

Reply to  Bellman
August 30, 2023 3:35 pm

The GUM recommends you don’t use intervals at all, but that it’s better to just state the standard uncertainty. But they do say that if you use expanded uncertainty you should “give the approximate level of confidence associated with the interval y ± U and state how it was determined”

Another example of your reading comprehension difficulties.

Reply to  Bellman
August 30, 2023 4:06 pm

That’s a fair point, but it does leave me wondering why the emphasis is on the coverage factor. If you don’t know the distribution, what does twice the standard deviation tell you, that the standard deviation doesn’t?”

Does it *EVER* sink in that the GUM is addressing multiple measurements of the same thing under conditions of repeatability?

When you have multiple measurements of the same thing under conditions of repeatability you can get a statistical analysis where the coverage factor *means* something.

Once again you are showing you haven’t bothered to study the subject at all, you just continue to cherry-pick.

When do you use the student-t distribution? The answer will inform you on this subject.

Reply to  Tim Gorman
August 30, 2023 5:24 pm

Does it *EVER* sink in that the GUM is addressing multiple measurements of the same thing under conditions of repeatability?

Do you have that on speed dial. It’s got nothing to do with my question, which was about coverage factors.

When you have multiple measurements of the same thing under conditions of repeatability you can get a statistical analysis where the the coverage factor *means* something.

Huh? The whole point was saying that there was no known probability distribution. If you have enough measurements of the same thing to know where the 95% interval lies, you also have a good idea of the distribution.

When do you use the student-t distribution?

When you have a standard error of the mean, and the standard deviation is estimated from the sample. This assumes the population distribution is normal.
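A sketch of that case with made-up measurement data; the critical value t(0.975, 5) ≈ 2.571 is taken from standard t tables, and the interval assumes the population is normal:

```python
import statistics

# Hypothetical repeated measurements of the same quantity
x = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00]
n = len(x)

mean = statistics.fmean(x)
s = statistics.stdev(x)   # sample standard deviation (divisor n - 1)
sem = s / n ** 0.5        # standard error of the mean

# Student-t critical value t(0.975, n-1 = 5), used because sigma is
# estimated from the sample itself rather than known
t_crit = 2.571

print(f"{mean:.4f} +/- {t_crit * sem:.4f}")
```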

The answer will inform you on this subject.

How?

Reply to  karlomonte
August 30, 2023 2:28 pm

Yes (I used the wong word); he’s not alone, many people see an expanded GUM uncertainty ±U and assume it means 95% of all measurements made according to however U was obtained will be within the interval.

If it doesn’t mean that, what does the 95% mean?

Reply to  Bellman
August 30, 2023 3:36 pm

You figure it out.

Reply to  Bellman
August 30, 2023 4:15 pm

Dig into NIST documents. They are the national experts and have pretty much final say so in how measurements are dealt with. They have documents in medical, chemistry, laboratories, etc.

Reply to  Jim Gorman
August 30, 2023 5:11 pm

Not my nation. But as far as I’ve seen they don’t disagree with the international standards.

Reply to  karlomonte
August 30, 2023 2:25 pm

KM: “And you are still wrong. You CANNOT assume any “dispersion of error” from a standard uncertainty value, i.e. a probability dispersion.

PF: “The issue at hand is the ±(uncertainty) representing the dispersion of error about a mean or a standard magnitude.

Still waiting for someone to actually define what they mean by uncertainty, rather than just shouting what it isn’t.

Reply to  Bellman
August 30, 2023 3:27 pm

When are you going to finally get it? The term “error” implies that you know a true value from which an error can be calculated. If you don’t know the true value then you don’t know the error either. If you don’t know the error then it is impossible to know the dispersion profile. You can estimate what the dispersion interval is but you can’t KNOW where in that interval anything lies. It is an UNKNOWN.

You’ve been told OVER AND OVER AND OVER AND OVER AND OVER what uncertainty is. You’ve been given what the GUM has to say about it. And you can’t remember it for more than 2 minutes.

Uncertainty is an interval given to let others know what they can expect if they repeat the measurement under the same conditions. It is *not* a probability curve, it is not a statement of a true value, it is not meant to include *all* possible values.

“Stated value +/- uncertainty” gives others a clue as to the accuracy of your measurement. Does the length of that beam have an uncertainty of 1′ or 1″? Does that thermometer measure to +/- 1C or +/- 0.5C? Is the capacitance of that part +/- 1% or +/- 20%? Is the accuracy of that crankshaft journal +/- 0.01″ or +/- 0.001″?

If you are truly as unteachable as you show on here then we are all wasting our time on you. If you are not as unteachable as you pretend then we are all wasting our time on you.

Bottom line? We are all wasting our time on you. You will never learn.

Reply to  Tim Gorman
August 30, 2023 3:39 pm

^^^^ +42

The only other possible explanation is arguing for the sake of argument, but the probability of this being correct is vanishingly small.

Reply to  Tim Gorman
August 30, 2023 4:16 pm

When are you going to finally get it? The term “error” implies that you know a true value from which an error can be calculated.

When are you going to get that it doesn’t. You know there is an error; you do not know what it is – hence there is uncertainty. You should know this. Do Taylor and Bevington’s books on error analysis ever claim you can know what the error is?

Uncertainty is an interval given to let others know what they can expect if they repeat the measurement under the same conditions

parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

It is *not* a probability curve

The parameter may be, for example, a standard deviation (or a given multiple of it), or the half-width of an interval having a stated level of confidence

expanded uncertainty

quantity defining an interval about the result of a measurement that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand

NOTE 2 To associate a specific level of confidence with the interval defined by the expanded uncertainty requires explicit or implicit assumptions regarding the probability distribution characterized by the measurement result and its combined standard uncertainty. The level of confidence that may be attributed to this interval can be known only to the extent to which such assumptions may be justified.

Does the length of that beam have an uncertainty of 1′ or 1″

Meaningless unless you know what that uncertainty entails. Does an uncertainty mean that on average you will be out by 1, does it mean you can be out by at most 1, does it mean there is a 95% chance of being out by 1?

Reply to  Bellman
August 30, 2023 4:26 pm

Meaningless unless you know what that uncertainty entails.

Which from all indications you will never know.

Does an uncertainty mean that on average you will be out by 1,

Nope.

does it mean you can be out by at most 1,

Nope.

does it mean there is a 95% chance of being out by 1?

Nope.

And before you type out your standard bleating, no, I’m not going try (again) to lead you by the nose.

Reply to  karlomonte
August 30, 2023 5:08 pm

It’s all right. I didn’t really expect you to answer.

Reply to  Bellman
August 30, 2023 6:11 pm

Um, blind now?

I answered three questions.

Reply to  karlomonte
August 30, 2023 6:21 pm

No. I asked one question and suggested three hypothetical answers. You rejected all three answers and provided no alternative answer.

Reply to  Bellman
August 30, 2023 9:23 pm

So I’m supposed to hop to your every whim?

Reply to  karlomonte
August 31, 2023 4:25 am

I didn’t expect an answer.

Reply to  Bellman
August 31, 2023 4:37 am

You got your answer. The fact that you can’t understand it is *your* problem, no one else’s.

Reply to  Tim Gorman
August 31, 2023 7:28 am

Nope, nope, nope is not an answer to the question of what it means. I want to know what it is, not what it isn’t.

Reply to  Bellman
August 31, 2023 5:04 am

Then why ask?

Reply to  karlomonte
August 31, 2023 7:28 am

Because your silence is informative.

Reply to  Bellman
August 31, 2023 7:27 am

Because your silence is informative.

Reply to  Bellman
August 31, 2023 7:58 am

/snort/

Reply to  Bellman
August 31, 2023 7:59 am

So is yours.

  1. Show us a source in the GUM specifying how to handle multiple measurements of different things.
  2. Can an uncertainty interval be asymmetric?
  3. How does a single value for σ handle an asymmetric uncertainty interval?
Reply to  Tim Gorman
August 31, 2023 8:33 am
  1. I’ve told you enough times – equation 10 or 13. But if you want the uncertainty from sampling, you won’t get that in the GUM as it’s all about measurement uncertainty.
  2. Yes. But you can’t express it using the standard GUM conventions. G.5.3 gives some details, but it essentially says you’ll need to provide more details.
  3. σ doesn’t care about the distribution, it’s just a measure of the amount of dispersion. To use it to calculate probability distributions you need to know the distribution. This is true whether the distribution is Gaussian, non-Gaussian but symmetric, or asymmetric. In most cases you would need more information than just the mean and standard deviation.
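Point 3 can be illustrated with a sketch (synthetic, strongly skewed data of my own choosing): σ is perfectly well defined for an asymmetric distribution, but μ ± σ no longer covers the ~68% a Gaussian would give, so σ alone fixes no coverage probability.

```python
import random
import statistics

random.seed(1)

# A strongly skewed sample: exponential with rate 1 (mean 1, sigma 1)
data = [random.expovariate(1.0) for _ in range(50_000)]

mu = statistics.fmean(data)
sigma = statistics.pstdev(data)  # positive root of the mean squared deviation

# For a Gaussian, mu +/- sigma would cover ~68%; for this skewed data
# the same interval covers ~86%, so sigma alone implies no coverage
inside = sum(mu - sigma <= x <= mu + sigma for x in data) / len(data)

print(round(sigma, 2), round(inside, 2))
```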
Reply to  Bellman
August 31, 2023 8:45 am

“I’ve told you enough times – equation 10 or 13. But if you want the uncertainty from sampling, you won’t get that in the GUM as it’s all about measurement uncertainty.

Equations 10 and 13 only apply to symmetric distributions. Look at 4.4.3. Once again you get caught cherry-picking!

And, once again, you are using your evasion rule No. 2 – “I was speaking about something else”. This isn’t about sampling error.

“Yes. But you can’t express it using the standard GUM conventions. G.5.3 gives some details, but it essentially says you’ll need to provide more details.”

Meaning Eq 10 and 13 don’t apply since they don’t provide the additional information and can’t handle asymmetric uncertainty.

σ doesn’t care about the distribution, it’s just a measure of the amount of dispersion. To use it to calculate probability distributions you need to know the distribution. “

How does σ handle asymmetric distributions? Asymmetric distributions also have dispersion!

And, here again, is your evasion rule No. 2 – “I was speaking about something else”. This isn’t about probability distributions. It’s about the uncertainty interval.

Reply to  Tim Gorman
August 31, 2023 10:13 am

Equations 10 and 13 only apply to symmetric distributions.

Wrong.

Look at 4.4.3.”

That’s not equation 10. But if we are looking at that section, would you point me to all the times they talk about negative uncertainties, negative standard deviations, or require that negative values of square roots are included?

Is there any specific part of 4.4.3 you think invalidates equation 10? It’s just taking the standard error of the mean for a collection of values. It would work for any distribution.

And, once again, you are using your evasion rule No. 2 – “I was speaking about something else”. This isn’t about sampling error.

Then you should have been more specific – you asked “Show us a source in the GUM specifying how to handle multiple measurements of different things.

One way I would handle multiple measurements of “different” things would be to assume they came from a sample rather than wanting an exact average of just those things.

But if you are only interested in the measurement uncertainty – equation 10 or 13.

How does σ handle asymmetric distributions? Asymmetric distributions also have dispersion!

The same way it “handles” any distribution. It’s the positive square root of the average of the squares of the deviations.

Reply to  Bellman
August 31, 2023 1:12 pm

That’s not equation 10. But if we are looking at that section, would you point me to all the times they talk about negative uncertainties, negative standard deviations, or require that negative values of square roots are included?”

As usual, your reading comprehension is just atrocious. Section 4.4.3 is laying out the distributions the GUM Eq 10 and 13 are appropriate for!

They are all symmetric. And they all have symmetric uncertainty intervals.

Sect 4.4.6: “As indicated in 4.3.9, the expectation of t is μ_t = (a_+ + a_-)/2 = 100 °C”

Hmmmm…. funny. I see an “a_-” in that equation.

a_+ + a_-

Well, what *do* you know! Will wonders never cease!

As I keep saying you *never* actually study anything. You just cherry-pick crap. You just got caught doing it again!

Is there any specific part of 4.4.3 you think invalidates equation 10. “

Of course not. I didn’t say it did. I *said* Eq 10 and 13 only work for the special case where you have multiple measurements of the same thing under the same environment using the same instrument. They do *NOT* work for multiple measurements of different things that are very likely to generate multi-modal or skewed measurement distributions.

Then you should have been more specific “

No, *YOU* should stay on topic instead of trying to deflect the discussion to something else so you don’t have to address the issue at hand. The topic in this sub-thread has *never* been about sampling error, it is how to specify uncertainty, including asymmetric uncertainty.

One way I would handle multiple measurements of “different” things would be to assume they came from a sample rather than wanting an exact average of just those things.”

You are deflecting again. The issue is *NOT* how you calculate the average or what the standard deviation of the sample means might be!

Stay on topic!

But if you are only interested in the measurement uncertainty – equation 10 or 13.”

But Eq 10 and 13 only work for symmetric measurement distributions as shown in Sect 4. You just can’t admit that even to yourself, can you? So you just keep repeating the same religious dogma that symmetric and asymmetric data distributions can be handled the same and so can symmetric and asymmetric uncertainty intervals.

You are going to stick with the meme that all measurement uncertainty is random, Gaussian, and cancels till it buries you.

The same way it “handles” any distribution. It’s the positive square root of the average of the squares of the deviations.”

Religious dogma again. You simply don’t care if the squares of the deviations mean anything at all. They are your God and you will not admit to anything else.

Reply to  Tim Gorman
August 31, 2023 3:15 pm

Hmmmm…. funny. I see an “a_-” in that equation.

Well, what *do* you know! Will wonders never cease!

That’s it – this has gone on long enough. Just admit that this is a wind up, and we can all go home.

Just in case anyone thinks Tim is being serious here: a_- is not -a. It’s the GUM’s designation for the lower bound of a rectangular distribution. The “-” is a subscript. It has no meaning other than to designate that it’s the lower bound, compared with a_+, which is the upper bound.

And, to add to the irony he follows up with

As I keep saying you *never* actually study anything. You just cherry-pick crap. You just got caught doing it again!
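For reference, the construction under discussion can be sketched with illustrative bounds of my own choosing: in GUM 4.3.7 a rectangular distribution with lower bound a_- and upper bound a_+ has standard uncertainty u = (a_+ − a_-)/(2√3), and (per 4.4.6) expectation (a_+ + a_-)/2.

```python
import math

# Illustrative rectangular-distribution bounds. The "-" and "+" are
# subscripts, not signs: both bounds here happen to be positive.
a_minus = 99.8
a_plus = 100.2

# Expectation (midpoint), cf. GUM 4.4.6
midpoint = (a_plus + a_minus) / 2

# Standard uncertainty of a rectangular distribution, cf. GUM 4.3.7:
# u^2 = (a_plus - a_minus)^2 / 12
u = (a_plus - a_minus) / (2 * math.sqrt(3))

print(midpoint, round(u, 4))
```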

Reply to  Bellman
August 31, 2023 3:56 pm

The “-” is a subscript. It has no meaning other than to designate it’s the lower bound, compared with a_+, which is the upper bound.”

It’s a LOWER NEGATIVE bound specifying a negative interval that is different than the positive bound and interval. And you continue to deny that a negative interval exists. The intervals are always positive and it is the OPERATOR (-) that designates how the positive value is used.

Pat is right. You are never going to learn.

Reply to  Tim Gorman
August 31, 2023 5:14 pm

It’s a LOWER NEGATIVE bound specifying a negative interval that is different than the positive bound and interval.

Take your hand off the shift key and think – or maybe read the relevant bit of the GUM. There are not two intervals. The lower bound is the smallest value in the interval, the upper bound is the largest value in the interval. There is only one interval.

And you continue to deny that a negative interval exists.

Depends on what you mean. You can talk of a negative interval for a function – that is an interval where the function is all negative. But that is not what is being described here.

The intervals are always positive

An interval is an interval. It is neither positive nor negative; it just consists of all the values from a- to a+. Those values may be all negative or all positive or some of each. It has a standard uncertainty, which is of course positive. It’s all explained in 4.3.7. And the next section describes a situation where it is not symmetric.

Reply to  karlomonte
August 30, 2023 5:13 pm

Right on every count.

Reply to  Bellman
August 30, 2023 5:11 pm

I say you can’t calculate the error without knowing the true value. Then you turn around and say: “When are you going to get that it doesn’t. “

And then you say: “You know there is an error”

HOW DO YOU KNOW THERE IS AN ERROR IF YOU DON’T KNOW THE TRUE VALUE?

Unfreakingbelievable! How do you know you didn’t hit the sweet spot and get the measurement *exactly* right?

As usual, you are assuming facts not in evidence.

“Meaningless unless you know what that uncertainty entails”

You TOTALLY missed the point. What does 1′ vs 1″ tell you about the measurement of the beam? Which one would you trust the most to try and put across your foundation?

Reply to  Tim Gorman
August 30, 2023 5:44 pm

HOW DO YOU KOW THERE IS AN ERROR IF YOU DON’T KNOW THE TRUE VALUE?

Because there are always errors. But even if you beat the very slim probability and get a value exactly right, it still just means you have an error of zero. And as you don’t know the true value, you don’t know if any of your measurements are the ones with an error of zero. Hence there is uncertainty in any measurement.

Reply to  Pat Frank
August 28, 2023 9:27 pm

Bellman has admitted in the past of being a disciple of Nitpick Nick Stokes.

Reply to  karlomonte
August 29, 2023 4:41 am

What a weird world you live in under that bridge.

Reply to  Bellman
August 29, 2023 7:03 am

You think “Einstein was wrong about many things”, so yeah, from the bizarro universe you inhabit, reality would seem strange.

Reply to  karlomonte
August 29, 2023 4:11 pm

This isn’t a slight on Einstein. Everybody makes mistakes and gets things wrong. Trusting people to always be right, just because they are right about a lot of things, is bad science. What’s that quote that gets trotted out every five minutes when it suits you – “science is the belief in the ignorance of experts”?

Here’s one list of things he might have been wrong about

https://www.forbes.com/sites/startswithabang/2016/12/29/the-four-biggest-mistakes-of-einsteins-scientific-life/

Reply to  karlomonte
August 29, 2023 5:59 am

Nick Stokes’ name is in many more WUWT threads than he posts in. Can you say “Rent free”? I knew that you could….

Reply to  bigoilbob
August 29, 2023 7:02 am

Yes, blob is another Stokes groupie…

Reply to  Pat Frank
August 29, 2023 4:37 am

Yeah, I should have caught that. Thanks!

Reply to  Bellman
August 28, 2023 6:13 pm

Deflection. We are not discussing distances.

Reply to  Pat Frank
August 28, 2023 7:43 pm

You’re the one who brought up the x-axis. I was just saying the standard deviation depends on the distance between each value and the mean, not whether it is positive or negative.

Reply to  Bellman
August 28, 2023 12:49 pm

Standard deviation is derived from variance. Variance is calculated by squaring the difference between the mean and the value. If that difference is negative it doesn’t matter because it gets squared!

You *have* to calculate variance FIRST. There is no direct calculation for SD without first finding variance.

The amount of deviation is positive *AND* negative, not “or”.

Like Pat said, as usual you have argued yourself into something you can’t support. All you are doing, as with all your other assertions, is trying to use sophistry to get out from under it – what is the definition of “is”?

Reply to  Tim Gorman
August 28, 2023 2:59 pm

The amount of deviation is positive *AND* negative, not “or”.

It is not. The standard deviation is the positive square root of the variance.

All you are doing, as with all your other assertions, is trying to use sophistry to get out from other – what is the definition of “is”?

All I’m doing is trying to correct a small and mostly irrelevant mistake Pat made. But of course, you four are now having to blow this up into the next battleground, rather than just check your facts.

Reply to  Bellman
August 28, 2023 3:24 pm

The positive standard deviation only gives 34% of the possible values. Where does the other 34% come from?

Why won’t you answer that?

Reply to  Tim Gorman
August 28, 2023 4:29 pm

I did. I said you use the negative of the standard deviation.

Reply to  Tim Gorman
August 29, 2023 6:26 am

I’ve got it figured out, Tim. In statistics, and statistics alone, the standard deviation is strictly reckoned a probability, P.

P < 0 is impossible. I.e., one cannot have a probability less than zero.

The negative wing of the standard distribution is ignored in statistics because negative probability is undefined. Hence the statistical definition that the standard deviation is the positive root of the variance.

Of course, in science and engineering, the negative wing of the standard distribution has physical meaning, and is assigned a positive probability.

Therefore, in science and engineering, the standard deviation is defined as the ±roots of the variance. Both roots have equal weight and represent positive probabilities with physical meaning.

Bellman is just playing cozy with the alternative meaning. He is arguing the statistical definition within a scientific context.

He is not, “trying to correct a small and mostly irrelevant mistake Pat made.” He is being disingenuous.

Operating in a scientific context but arguing from a statistical stance.

Most papers on the subject, including JCGM GUM, define SD from statistics as the positive root, but their worked examples and problems are written ±u(y).

The usage contradicts the stated definition.

But one can quote the definition of SD from those statistical papers as authoritative, even though it is functionally wrong in science.

In science and engineering, the negative wing of a normal distribution represents a positive probability. Therefore, within science and engineering, the standard deviation is indeed defined as ±(uncertainty).

In science and engineering the standard deviation is defined as both roots of the variance — positive and negative — because each represents a positive physical probability.

Uncertainty‘ in science is defined differently than uncertainty in statistics.

Bellman is playing off that difference, while staying silent about it. Sowing confusion. Very Stokesian.
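On the arithmetic itself there should be no disagreement: each one-sigma wing of a normal distribution carries the same positive probability mass, which can be checked with the standard normal CDF from the stdlib error function:

```python
import math

def norm_cdf(z):
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Probability mass in each one-sigma wing of a normal distribution
lower_wing = norm_cdf(0) - norm_cdf(-1)  # between mu - sigma and mu
upper_wing = norm_cdf(1) - norm_cdf(0)   # between mu and mu + sigma

print(round(lower_wing, 4), round(upper_wing, 4))  # each ~0.3413
```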

Reply to  Pat Frank
August 29, 2023 7:13 am

“The negative wing of the standard distribution is ignored in statistics because negative probability is undefined”

Utter nonsense on every level.

The “negative wing” of the distribution is not negative probability. The standard deviation is not probability. Assuming a normal distribution you can use the standard deviation to say what the percentage or the probability is between any two points, and it will never be less than zero.

But above all else, I don’t know why I have to keep repeating this, nobody is ignoring the negative wing. The issue is simply understanding what the values mean. The standard deviation is positive. The reason you put ± in front of it is to indicate that it can be both added and subtracted.

It’s a very minor correction, and one that doesn’t really matter to the interpretation of the result, and I can’t understand why so many are making such a distraction of it.

Reply to  Bellman
August 29, 2023 7:22 am

Utter nonsense on every level.

Yes, you do post utter nonsense on every level.

Looks like Pat hit another nerve.

Reply to  Bellman
August 29, 2023 9:17 am

Bellman: “Utter nonsense on every level.

“GUM 0.4: “Thus the ideal method for evaluating and expressing uncertainty in measurement should be capable of readily providing such an interval, in particular, one with a coverage probability or level of confidence that corresponds in a realistic way with that required.

“2.2.3, NOTE 2 Uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of series of measurements and can be characterized by experimental standard deviations. The other components, which also can be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information.”

“2.3.5 expanded uncertainty
quantity defining an interval about the result of a measurement that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand.

“NOTE 1 The fraction may be viewed as the coverage probability or level of confidence of the interval.

“NOTE 2 To associate a specific level of confidence with the interval defined by the expanded uncertainty requires explicit or implicit assumptions regarding the probability distribution characterized by the measurement result and its combined standard uncertainty.”

Farrance & Frenkel (2021): “As always when taking a square root, the result can be either positive or negative. Since an uncertainty cannot be negative, the sign of u(x) − u(y) is so chosen as to make u(z)=u(x) − u(y) positive.” (my bold throughout)

Bellman: “The reason you put ± in front of it is to indicate that it can be both added and subtracted.

The reason you put ± in front of it is because it is a square root and defines a range of values or uncertainty about a mean or a measurement.

Wrong again, Bellman.

Reply to  Pat Frank
August 29, 2023 10:14 am

None of those quotes has anything to do with what you claim and I said was nonsense. None of them say the “negative wing” is ignored. None talk about negative probability.

And then you are happy to quote Since an uncertainty cannot be negative, which is what I’ve been telling you since this nonsense started. Really, if people here just tried to engage with what I actually said, and keep saying, rather than just arguing with some imagined bogeyman, we could save a lot of time.

The reason you put ± in front of it is because it is a square root and defines a range of values or uncertainty about a mean or a measurement.

Please take a course in basic algebra. This is just getting embarrassing.

A square root does not define a range, it defines two values, one positive one negative. If you use the √ it represents the positive square root. You put a ± symbol in front to indicate that it can be both added and subtracted. When the value is a standard deviation or standard uncertainty, it will be the positive square root of the variance. You can then put a ± symbol in front of a multiple of it to indicate a range of values which indicate a particular confidence interval or uncertainty interval.

At no point does it make sense to say the uncertainty is negative. Subtracting the uncertainty value from the mean does not make the uncertainty negative. Your own quotation tells you it cannot be negative.
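[A minimal sketch of the convention being argued over, with made-up sample values, using Python’s standard `statistics` module: the SD is a single non-negative number, and the ± appears only when an interval is formed from it.]

```python
import statistics

# Made-up sample values, purely for illustration.
sample = [9.8, 10.1, 10.0, 9.9, 10.2]

mean = statistics.mean(sample)
sd = statistics.pstdev(sample)  # positive square root of the (population) variance

# sd itself is a single non-negative number...
assert sd >= 0

# ...and the ± convention forms an interval by both adding and subtracting it:
interval = (mean - sd, mean + sd)
print(f"{mean} ± {sd:.4f} -> interval {interval}")
```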

Reply to  Bellman
August 29, 2023 4:06 pm

None of those quotes has anything to do with what you claim,

They all do.

Their aggregate says uncertainty is a probability; standard deviation is a probability distribution; the fractional distribution of values is a coverage probability; expanded uncertainty (combined random plus systematic) is a probability distribution, and; uncertainty cannot be negative.

That series establishes my point.

None talk about negative probability.”

F&F (2021): “an uncertainty cannot be negative.” From above, uncertainty is a probability distribution. Therefore (combining) a probability distribution cannot be negative. QED. Thank you.

…which is what I’ve been telling you since this nonsense started.

A statistical definition. Not relevant to physical SDs, wherein negative uncertainties take positive probabilities.

some imagined bogeyman

Like arguing statistical purities in a scientific context.

Vasquez & Whiting (2006): “When several sources of systematic errors are identified, β is suggested to be calculated as a mean of bias limits or additive correction factors as follows:

β ≈ [∑ᵢ φₛᵢ²]¹/² (2)

where i indexes the sources of bias error and φₛᵢ is the bias range within error source i.

Eqn. 2 is not derived from any closed form statistical expression. It is a convenient analytical homology. The β has no discrete statistical meaning, because the error distribution is not normal.

But it has meaning in science because it provides a convenient uncertainty range to relate the reliability of a result that includes some systematic error.

The bias range is physically ± about the mean. The negative uncertainty values take positive probabilities. Both the equation and the interpretation violate statistical assumptions about uncertainty and probability.

But no one in science cares, because the meanings are coherent and objectively significant within science.
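[A minimal sketch of the root-sum-square form of Eq. 2 as quoted above, with hypothetical bias ranges that are not from Vasquez & Whiting:]

```python
import math

# Hypothetical bias ranges phi_s_i for three identified systematic error
# sources (illustrative numbers only, not from the paper).
phi = [0.3, 0.4, 1.2]

# Eq. 2 as quoted: beta ≈ [sum_i phi_s_i**2]**(1/2), a root-sum-square.
beta = math.sqrt(sum(p**2 for p in phi))

# The positive root is reported, but the resulting bias range is applied
# as ±beta about the mean.
print(f"combined bias limit: ±{beta:.3f}")
```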

A square root does not define a range,...”

What does [xᵢ-x̅] define, if not a range?
When xᵢ<x̅, are the differences negative?
When one calculates √[∑ᵢ(xᵢ-x̅)²/N], does the ±SD reflect the range of positive and negative values around x̅?
Do the values of the negative wing of the uncertainty distribution have positive probabilities?

The meaning of the square root in the context of scientific data analysis is not just two values, is it? The SD directly represents the (plus/minus) bounds of a range.

The problem is one of vocabulary, Bellman. You’re arguing statistical meaning in a context that requires the scientific variety.

In science, the uncertainty SD is always a physical range around a physical value. ‘Negative uncertainties’ means values on the negative side of the range distribution. It does not mean probabilities less than zero.

The embarrassing element of our conversation is you insisting upon imposing your preconceptions where they do not apply.

Reply to  Pat Frank
August 29, 2023 5:07 pm

A statistical definition. Not relevant to physical SDs, wherein negative uncertainties take positive probabilities.”

To be clear – when you posted all those quotes, you were doing so because they disagreed with you, and so in your opinion were wrong?

Eqn. 2 is not derived from any closed form statistical expression. It is a convenient analytical homology. The β has no discrete statistical meaning, because the error distribution is not normal.

Distributions don’t need to be normal to have a statistical meaning. Statistics can handle other distributions, including nonparametric ones. From the abstract it very much seems they are doing statistics.

Both the equation and the interpretation violate statistical assumptions about uncertainty and probability.

That seems unlikely. The paper’s behind a paywall, but the abstract says

The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for output variables of the models involved taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions for the random and systematic errors.

That doesn’t suggest they are violating any statistical or probability rules. Could you be clear about which assumptions they are violating?

What does [xᵢ-x̅] define, if not a range?

It’s not a range. A range would be [xᵢ, x̅].
What you have there is a subtraction.
I’m not really sure why you put square brackets round it. Possibly this is meant to indicate the absolute value, which would make it the distance between xᵢ and x̅.

“When xᵢ<x̅, are the differences negative?

xᵢ-x̅ is negative
[xᵢ-x̅] is negative
|xᵢ-x̅| is positive
(xᵢ-x̅)² is positive

When one calculates √[∑ᵢ(xᵢ-x̅)²/N], does the ±SD reflect the range of positive and negative values around x̅?

Yes, it reflects the 1 SD range.

Do the values of the negative wing of the uncertainty distribution have positive probabilities?

Yes.

The meaning of the square root in the context of scientific data analysis is not just two values, is it.

The meaning of the values of a square root depends on what you are using it for.

The SD directly represents the (plus/minus) bounds of a range.

That’s one use.

The problem is one of vocabulary, Bellman. You’re arguing statistical meaning in a context that requires the scientific variety.”

So far you have given zero evidence that scientists are misusing the vocabulary at all. Everything you are claiming scientists are doing is the same as the mathematical meaning.

Again, it should be easy to find one scientific reference that actually says standard deviations, or uncertainties can be negative, if it were common practice.

In science, the uncertainty SD is always a physical range around a physical value.

And some reference saying that would be helpful. Most of this just feels like you playing word games. The SD is the positive square root of the variance. If you write mean ± SD you are defining an interval about the mean. That doesn’t mean the value SD is the range.

‘Negative uncertainties’ means values on the negative side of the range distribution.

Then show your definition of uncertainty. As far as I’m concerned an uncertainty is a value denoting a range. It does not mean specific errors. You can’t make a distinction between uncertainties that are smaller than the mean versus those that are greater than the mean, because there is only one uncertainty.

It does not mean probabilities less than zero.

Obviously. I just can’t understand why you think anyone says that. Statisticians will say that probabilities less than 0 or greater than 1 are meaningless. I’m assuming even scientists don’t want to violate that particular rule.

The embarrassing element of our conversation is you insisting upon imposing your preconceptions where they do not apply.

And the embarrassing part from your side is you keep claiming scientists don’t accept standard mathematical terminology, whilst refusing to supply any evidence.

Reply to  Bellman
August 29, 2023 8:51 pm

…refusing to supply any evidence.

Farideh Jalilehvand and Patrick Frank (2021) “EXAFS applications in coordination chemistry

Patrick Frank and Maurizio Benfatto (2021) “Symmetry Breaking in Solution-Phase [Cu(tsc)₂(H₂O)₂]²⁺: Emergent Asymmetry in Cu−S Distances and in Covalence

Patrick Frank, Robert M.K. Carlson, Elaine J. Carlson, Britt Hedman, Keith O. Hodgson (2020) “Biological sulfur in the blood cells of Ascidia ceratodes: XAS spectroscopy and a cellular-enzymatic hypothesis for vanadium reduction in the ascidians

The ± convention accepted by all my colleagues, the reviewers, and the journal editors.

And for the sake of variety: J. T. Watt, M. L. McGann, R. K. Takesue, and T. D. Lorenson (2022) “Marine Paleoseismic Evidence for Seismic and Aseismic Slip Along the Hayward-Rodgers Creek Fault System in Northern San Pablo Bay

The ± convention used throughout.

Reply to  Bellman
August 30, 2023 6:13 am

Bellman:

This might help explain some things about why you are wrong.

From:
https://www.asc.ohio-state.edu/gan.1/teaching/spring04/Chapter3.pdf

What is the probability when “μ – 1σ” occurs? What is the probability when “μ + 1σ” occurs?

Why does the table have “Prob of exceeding ±nσ”?

PSX_20230830_075642.jpg
Reply to  Jim Gorman
August 30, 2023 7:08 am

Thanks. Another good illustration of why standard deviation has to be positive. Look at the probability function for the Gaussian distribution. It involves dividing by σ. If σ were negative, the probability would be negative, and the cumulative probability would be -1. Obviously that’s impossible, which is why it’s just as well that σ is never negative.
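[This argument can be sketched directly, assuming the standard Gaussian density formula: plugging a “negative σ” into the unsquared σ in the prefactor flips the sign of the density. Made-up evaluation point, for illustration only.]

```python
import math

def gauss_pdf(x, mu, sigma):
    # Standard Gaussian density; sigma appears unsquared in the prefactor,
    # so its sign matters there even though only sigma**2 enters the exponent.
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

p_pos = gauss_pdf(0.0, 0.0, 1.0)   # ≈ 0.3989, a proper (positive) density
p_neg = gauss_pdf(0.0, 0.0, -1.0)  # same magnitude, negative sign

assert p_neg == -p_pos  # a "negative sigma" would yield negative densities
```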

Reply to  Bellman
August 30, 2023 7:12 am

Another illustration that no reality clue can ever penetrate your skull.

Reply to  karlomonte
August 30, 2023 9:05 am

And look who crawls out from under the bridge to say I’m wrong, without ever explaining why I’m wrong. It’s almost as if he doesn’t know what he’s talking about himself, but wants people to think he’s the expert.

Reply to  Bellman
August 30, 2023 1:10 pm

As I’ve told you many, many times, I’m done with trying to educate you; it is impossible.

All that is left is to identify your nonsense.

And how many times will you employ this silly bridge meme?

Look in the mirror.

Reply to  karlomonte
August 30, 2023 1:44 pm

Such a transparent charlatan.

Reply to  Bellman
August 30, 2023 3:40 pm

Ouch, this hurts.

Shall I worship at the throne of LoopholeMan to atone for my sins?

Reply to  Bellman
August 30, 2023 9:27 am

As Pat has pointed out, the probability is *NOT* the standard deviation. They are different things, and you are trying to fool everyone by conflating them.

Reply to  Tim Gorman
August 30, 2023 12:18 pm

I’m beginning to suspect Tim might not be the towering intellect he initially appears.

No. Probability is *NOT* the standard deviation. I have zero idea why you would even need to point it out. However, you need to know the standard deviation, σ, to derive the normal probability distribution, as Jim reminded us. And if σ is negative that doesn’t work, because you would end up with negative probability.

Reply to  Bellman
August 30, 2023 1:11 pm

Another smokescreen.

Reply to  Bellman
August 30, 2023 5:39 am

You would fail at teaching a 3rd grader the new math. Subtracting is adding a negative number. 6 + (-3) = 3

The ABSOLUTE value of the standard deviation is what you are trying to pass off as *the* standard deviation.

|-6| = 6

|σ| = σ
|-σ| = σ
|± σ| = σ

-σ exists. If it didn’t then half of a Gaussian distribution would be missing.

Reply to  Tim Gorman
August 30, 2023 7:29 am

Of course -σ exists. It’s the negative of σ. You write the – sign in front of it because σ is positive. If it was both negative and positive there would be no point in writing -σ as it would just mean the same as σ.

I think the simple confusion here is you are interpreting μ ± σ as meaning “add σ (which is both positive and negative) to μ”. When it actually means “add and subtract σ (which in this case is positive) to μ”.

Your interpretation works, but is wrong. Pedantically it’s wrong because you would need to write μ + ± σ, for it to work. And more importantly, it’s wrong because σ cannot be negative. See my response to Jim above for one reason why it cannot be negative.

Reply to  Bellman
August 30, 2023 8:23 am

And none of this sigma nonsense supports your claims that the magic of averaging allows you to toss instrumental uncertainties in the garbage. More sophistry.

Reply to  karlomonte
August 30, 2023 9:10 am

Pay attention. None of this is about averaging. None of this affects the correctness or otherwise of the standard error of the mean. It’s purely about lots of experts demonstrating they don’t understand some fairly basic mathematical terms. And then demonstrating their inability to question if they might have misunderstood a simple concept, such as the standard deviation or uncertainty is not negative.

Reply to  Bellman
August 30, 2023 1:14 pm

Pay attention. None of this is about averaging.

Achtung! LoopholeMan demands it!

None of this affects the correctness or otherwise of the standard error of the mean.

A term which the GUM says is incorrect.

Reply to  Bellman
August 30, 2023 9:33 am

So C = A + (-B) doesn’t exist in statistics world? I assure you it exists in math world.

Reply to  Tim Gorman
August 30, 2023 12:23 pm

So C = A + (-B) doesn’t exist in statistics world?

I think we’ve reached peak Gorman at this point. I’ll print this off and frame it for posterity.

He just ignores everything I say, and then makes a joke about what he imagined I would have said if I was as mathematically challenged as him.

Reply to  Bellman
August 30, 2023 12:51 pm

Don’t just print it off. Write it on the blackboard 1000 times. Maybe it will sink in. Even 3rd graders pick it up pretty quickly.

They understand that counting (natural) numbers exist on the number line as positive numbers. They understand that whole numbers consist of counting numbers plus zero. They understand that integers exist as both positive and negative whole numbers. They understand that integers exist as a subset of rational numbers, decimal numbers that terminate or repeat. They understand that irrational numbers are decimals that never terminate or repeat.

Real numbers consist of a combined set of rational numbers and irrational numbers.

The number line consists of a line drawn on a piece of paper with the ends having arrows on them to indicate extension to infinity. The zero point can be placed anywhere on that number line. Numbers to the right of zero are positive numbers and numbers to the left of the zero point are negative numbers.

I had to hearken back to my grade school math lessons on this and it took some time for my memory to regurgitate it. The assertions above apply to statistical world, math world, and the real world.

But not, apparently, to bellman world.

Reply to  Tim Gorman
August 30, 2023 1:56 pm

Even 3rd graders pick it up pretty quickly.

Do you ever take a hint?

So C = A + (-B) doesn’t exist in statistics world?

Is a monumental lie. I am not saying that. There isn’t a single word I said in any comment that could honestly be mistaken for that. You are just arguing with your own befuddled imagination.

They understand that counting (natural) numbers exist on the number line as positive numbers. They understand that whole numbers consist of counting numbers plus zero. They understand that integers exist as both positive and negative whole numbers. They understand that integers exist as a subset of rational numbers, decimal numbers that terminate or repeat. They understand that irrational numbers are decimals that never terminate or repeat.

And you think I don’t?

Are you ever going to address anything I’ve actually said?

There are positive and negative numbers – the standard deviation is never negative. That’s all you need to understand.

Reply to  Bellman
August 30, 2023 3:06 pm

There are positive and negative numbers – the standard deviation is never negative. That’s all you need to understand.”

Pure malarky. You can repeat this to yourself all you want, it still won’t be true.

Any Gaussian distribution can be normalized so the average is at 0.

What you will find is that the x-axis is a number line. To the left of 0 are negative numbers and to the right are positive numbers.

The 68% interval would be –σ to +σ.

The 68% interval would be σ – (-σ).

In fact, there is no reason why you can’t normalize a Gaussian curve to be anything you want. The 68% interval would go from -2σ to 0.

*YOU* seem to be trying to say that real numbers have to be only positive, that a negative sign is nothing more than an operator on a positive number. That is simply not true. It’s only true for natural numbers. σ is not a natural (counting) number.
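[The normalization point in this exchange can be sketched with made-up numbers: centering a sample at zero makes half the x-values negative, while σ itself comes out unchanged and positive either way.]

```python
import statistics

# Hypothetical sample (illustrative values only).
xs = [8.0, 9.0, 10.0, 11.0, 12.0]
mu = statistics.mean(xs)

# Normalize so the average sits at 0: half the values become negative...
centered = [x - mu for x in xs]

# ...but the standard deviation is unchanged, and still positive.
assert statistics.pstdev(centered) == statistics.pstdev(xs) > 0
```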

Reply to  Tim Gorman
August 30, 2023 5:01 pm

Pure malarky. You can repeat this to yourself all you want, it still won’t be true.

I’ve also quoted numerous sources, explained the logic, and given reasons why it could not be negative.

Any Gaussian distribution can be normalized so the average is at 0.

Immediately demonstrating you don’t understand the point. To use one of your insults, you are unteachable.

What you will find is that the x-axis is a number line.

Gosh, really. That had never occurred to me.

“In fact, there is no reason why you can’t normalize a Gaussian curve to be anything you want. The 68% interval would go from -2σ to 0

Still completely missing the point. We are talking about the value of σ, not the value of µ. Shifting the curve only changes µ. Scaling the curve along the x-axis scales σ, but it can’t make it negative.

The question you need to ask is what would the curve be like if σ were negative. And the answer is it would be upside down, with all the y coordinates negative.

*YOU* seem to be trying to say that real numbers have to be only positive

Are you working on the assumption that if you make up enough stupid things about me, I’ll just give up and start agreeing with you?

I don’t know how many more times I’ll have to say this – I am saying the standard deviation has to be positive. Not all numbers.

that a negative sign is nothing more than an operator on a positive number.

No. I’m saying that σ is always positive, and putting a – sign in front of it indicates either that you are subtracting the positive value from another value, or that you are using it to indicate a range of values.

Yes, I appreciate there are some confusing multiple meanings to the “-” as well as the “±” symbol, but as with all things you just have to follow the logic. If – appears between two values it’s the minus sign and means subtract the second from the first. If it’s in front of a single value it means negate the value. In both cases the operator is mapping the value(s) to a new value. -π means take the value of π (+3.14…) and multiply it by (-1) so you have -3.14…. It does not mean that π is now negative.

The confusion is that “-” can also be used in front of a number to indicate it’s a negative number, and there’s probably a philosophical debate about whether -2 as a negative number means the same as -2 meaning -(+2). It doesn’t really matter though, they both represent the same quantity.

But the point is that when you have a constant or variable representing a real positive value, putting “-” in front of it indicates the value that is the negative of the constant or variable, not that the constant or variable has become negative.

Reply to  Bellman
August 30, 2023 5:15 pm

No. I’m saying that σ is always positive and putting a – sign in front of it is going to indicate either you are subtracting the positive value from another value, or you are using to indicate a range of values.”

Like I said, to you the (-) symbol is always an operator and never a value modifier. Take the blinders off.

Reply to  Bellman
August 30, 2023 1:15 pm

No one can even begin to match the huge hat-size of LoopholeMan, certainly not Albert.

Reply to  Bellman
August 30, 2023 3:18 pm

I think the simple confusion here is you are interpreting μ ± σ as meaning “add σ (which is both positive and negative) to μ”. When it actually means “add and subtract σ (which in this case is positive) to μ”.

Neither of those statements is correct. In science and engineering, μ ± σ is interpreted to mean that ±σ is the uncertainty in the value of μ.

Typically, this means that the physically correct value of μ is considered to be somewhere within the range of values defined by ±σ.

Only σ² enters the Gaussian function, so either sign produces the identical distribution.

Reply to  Pat Frank
August 30, 2023 3:52 pm

In science and engineering, μ ± σ is interpreted to mean that ±σ is the uncertainty in the value of μ.

Which still doesn’t make the uncertainty negative.

Only σ² enters the Gaussian function

Wrong.

Screenshot 2023-08-30 234854.png
Reply to  Bellman
August 30, 2023 4:02 pm

Which still doesn’t make the uncertainty negative.

How do you know? You don’t even understand what the word means****.

****(even after being told over and over and over and over…)

Reply to  karlomonte
August 30, 2023 5:33 pm

How do you know?

Because the standard uncertainty is based on the standard deviation and that is never negative.

Reply to  Bellman
August 30, 2023 5:00 pm

A nice formula, but it only defines the probability on the y-axis. It will always be positive as Dr. Frank has already stated.

The variables x and μ are values on the x-axis. It is the x-axis values that can very well be negative. I don’t suppose negative values of real things like ° F live in your world. Why do you think the squared values are used?

Reply to  Jim Gorman
August 30, 2023 6:19 pm

A nice formula, but it only defines the probability on the y-axis. It will always be positive as Dr. Frank has already stated.

As Dr Frank incorrectly stated.

It is the x-axis values that can very well be negative.

Of course they can very well be negative – the axis goes to -infinity, and that’s pretty negative. But as you say, it’s the y-axis that is giving you the probability.

I don’t suppose negative values of real things like ° F live in your world.

How can you lot be so dense? I keep suspecting this is just a game – you are naughty school children pretending not to understand the point in order to wind the teacher up.

Negative values exist in my world, like the negative vote you have just earned.

Why do you think the squared values are used?

Because you need to convert the positive and negative errors into positive values. That’s because you want something that measures the dispersion of values through the distribution. Squaring the deviations ensures they are all positive and allows you to get an average that is positive and grows the more spread out the distribution is.

This gives you the variance, and the only reason you take the square root is to convert it into a more understandable figure – something that represents, in a biased way, the average distance of all these values. And when I say take the square root you obviously want the positive root, because this is a measure of the amount of spread. It makes no sense for it to be negative, any more than it would make sense for the variance to be negative.

The only reason there is a square root in the equation is because you are using the squares of values in the first place. An alternative description of the spread is to just take the average of the absolute deviations (MAD). This gives you an unbiased average spread and avoids any of this nonsense. Clearly the average of absolute values has to be positive. But it’s used less as it isn’t so nice mathematically. It does illustrate the point, though: both MAD and SD are similar indications of the amount of spread in a distribution. Why would it make sense for one to only be positive, but the other to be both positive and negative?
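[The SD-versus-MAD comparison can be sketched with a made-up sample, using Python’s `statistics` module; both come out as single non-negative spread measures.]

```python
import statistics

# Hypothetical sample, for illustration only.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mu = statistics.mean(xs)  # 5.0

# SD: positive square root of the mean squared deviation.
sd = statistics.pstdev(xs)                      # -> 2.0

# MAD: mean of the absolute deviations, positive by construction.
mad = statistics.mean(abs(x - mu) for x in xs)  # -> 1.5

# Both are single non-negative numbers measuring spread; neither carries a sign.
print(sd, mad)
```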

Reply to  Bellman
August 31, 2023 6:23 am

You keep trying to describe the solution of an even root as only positive. In your world that may be so.

It is not true in the real world occupied by scientists and engineers. An even root DOES have two answers, one positive and one negative.

I am sure you think that it is always the negative value that is disregarded because it is never a real-world solution. I hate to inform you that a negative value can be just as real as the positive value. It is what students learn in high school physics when first dealing with the addition of forces. Forces are vectors that have both magnitude AND direction. That is why an equation can have a negative root as a solution. It is why a measurement can have a negative solution to a radical.

Reply to  Jim Gorman
August 31, 2023 7:15 am

Take it one more step. You *can* take the square root of a negative real number. Is the solution *only* “j”? or are there two? +j and -j. They represent two different things. The plus and minus signs are not just operators, they are value modifiers distinguishing two different things.

Reply to  Tim Gorman
August 31, 2023 2:02 pm

Ah, those famous jmaginary numbers.

Reply to  Bellman
August 31, 2023 2:10 pm

Being an imaginary number does not mean it is a phantom number. It’s just a traditional way of addressing values on a different plane, the complex plane. It’s why, in so much of engineering it is not denoted with the letter “i” but with the letter “j”. Complex numbers combine (add, subtract) as vectors. They multiply as vectors, usually more easily done using polar coordinates.

Reply to  Tim Gorman
August 31, 2023 3:22 pm

I know, I know. I shouldn’t have made a joke at the expense of engineers not being able to spell imaginary. But after 4 days of being told I need to learn basic maths, it’s hard to resist. I know engineers use j rather than the correct i, because they already had an i symbol.

Please don’t start another patronizing lecture about complex analysis.

Reply to  Bellman
August 31, 2023 4:04 pm

“Einstein was wrong about many things”

Reply to  Jim Gorman
August 31, 2023 2:01 pm

You keep trying to describe the solution of an even root as only positive. In your world that may be so.

No I do not, and no it isn’t. (And why only even roots)

All I am saying is that √ is the positive square root. I really didn’t think, when I mentioned it in passing 4 days ago that this would be remotely controversial.

I am sure you think that it is always the negative value that is disregarded because it is never a real world solution.

Then you would be wrong.

I hate to inform you that a negative value can be just as real as the positive value.

Always so certain about what everyone regards as real. There are lots of times when negative numbers make sense, and lots of times when they don’t make sense in the real world. Considering you’re the sort of person who rejects the idea of an average if it gives you a fractional child, I won’t take lectures from you about what’s acceptable in the real world.

Can you have a negative number of children, can a child be a negative height? Does a chessboard have -8 rows and columns?

Reply to  Bellman
August 30, 2023 9:39 pm

Which still doesn’t make the uncertainty negative.

Always negative with respect to the mean, strictly negative when μ≤0. The probability is always positive.

The exponent simplifies to -(x-μ)²/2σ².

σ√2π = √2πσ², but I accept your point there.

Reply to  Pat Frank
August 31, 2023 4:46 am

Always negative with respect to the mean, strictly negative when μ≤0. The probability is always positive.”

It’s pointless continuing this discussion if you are not prepared to state what definition of uncertainty you are using. I just want to know what you would mean by the number used to represent uncertainty. What do you take an uncertainty of 0.5 cm to mean, and what would an uncertainty of -0.5 cm mean?

The definition I’m trying to use is that given in the GUM, where standard uncertainty is defined as the uncertainty of a measurement expressed as a standard deviation. It’s a single value that indicates the amount of dispersion. It can also be multiplied by a coverage factor and used to give a range about the measurement. But I can’t see any logic in talking about a negative dispersion.

Reply to  Bellman
August 31, 2023 6:48 am

The definition I’m trying to use is that given in the GUM, where standard uncertainty is defined as the uncertainty of a measurement expressed as a standard deviation. It’s a single value that indicates the amount of dispersion”

+σ only represents 34% of the possible dispersion in a Gaussian distribution. -σ represents the other 34% of the total dispersion in a Gaussian distribution.

The plus and minus are value modifiers, they are not operators. You can speak of -σ all by itself. The uncertainty does *NOT* have to be symmetric. It can be something like x +0.5,-0.7. This can easily happen with an instrument that does not have a Gaussian response but an asymmetric hysteresis response.

Using your definition there is no way to specify an asymmetric uncertainty interval.

This stems from your in-built bias that all uncertainty is random, Gaussian, and cancels. That is a *special* case, not a general case. You keep claiming you don’t have this bias but it is obvious that you do in everything you post. You simply can’t seem to punch your way out of that statistical paper bag you live in.
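[The asymmetric case raised above can be sketched with hypothetical numbers: carrying two positive half-widths preserves the skew that a single ± value cannot.]

```python
# Hypothetical asymmetric uncertainty, e.g. from hysteresis: x +0.5 / -0.7.
x = 10.0
up, down = 0.5, 0.7  # magnitudes of the upward and downward half-widths

# Two positive half-widths carry the skew that one symmetric ± value cannot:
interval = (x - down, x + up)  # roughly (9.3, 10.5), not symmetric about x
print(interval)
```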

Reply to  Tim Gorman
August 31, 2023 7:11 am

The plus and minus are value modifiers, they are not operators. You can speak of -σ all by itself. The uncertainty does *NOT* have to be symmetric. It can be something like x +0.5,-0.7. This can easily happen with an instrument that does not have a Gaussian response but an asymmetric hysteresis response.

Excellent point, there is no way to have an asymmetrical interval with his definition(s).

Reply to  karlomonte
August 31, 2023 7:52 am

Yep. You *could* say that the interval is 1.2 and you have a +/- 0.6 uncertainty but then you lose the skewness of the uncertainty. This is what bellman would have to do, just say the uncertainty is 0.6.

Reply to  Tim Gorman
August 31, 2023 7:59 am

+σ only represents 34% of the possible dispersion in a Gaussian distribution. -σ represents the other 34% of the total dispersion in a Gaussian distribution.

Try to learn something. You always insist that I need to learn some elementary mathematics, yet then keep displaying your ignorance even after I’ve explained it to you repeatedly.

The standard deviation, σ, does not represent a probability distribution. It is a single value that represents the average dispersal of values from the mean. If we assume this comes from a Gaussian distribution, we can use the formula to determine the probability of a single value, or a range of values. There is nothing magic about the probability of a value being in the interval [µ-σ, µ+σ]. They are simply two points on the x-axis. (It’s just a convenient interval to remember as it’s symmetrical about the mean, and has a well-known pre-calculated probability.)

You can find the probability for any interval between any two points, both negative, both positive, one negative and one positive. To do this you only need to know µ and σ, because the Gaussian distribution is entirely defined by those two values.

Saying it’s impossible to know the probability of a range left of the mean unless you can talk about a negative σ is just nonsense.
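As a numerical sketch of that point — the probability of any interval follows from µ and σ alone, whether the interval lies left of the mean, right of it, or straddles it (the numbers below are arbitrary examples, not from the thread):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """CDF of a Gaussian with mean mu and (positive) standard deviation sigma."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def interval_prob(a, b, mu, sigma):
    """Probability that a N(mu, sigma) value falls in [a, b] -- works for
    intervals entirely left of the mean, entirely right, or straddling it."""
    return normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)

mu, sigma = 0.0, 2.0
p_left = interval_prob(mu - sigma, mu, mu, sigma)    # one sigma left of the mean
p_right = interval_prob(mu, mu + sigma, mu, sigma)   # one sigma right of the mean
print(round(p_left, 4), round(p_right, 4))           # each ≈ 0.3413
```

No negative σ is needed to evaluate the left-hand interval; the same positive σ enters the CDF for both sides.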

“The plus and minus are value modifiers, they are not operators.”

Not if you are saying µ ± σ. Then it’s a binary operator.

You can speak of -σ all by itself.

Of course you can. But that doesn’t mean it’s the standard deviation.

“The uncertainty does *NOT* have to be symmetric.”

Might depend on your definition of uncertainty. The standard ways of describing uncertainty in the GUM don’t allow for non-symmetric uncertainties, and you keep insisting there is no probability distribution associated with an uncertainty, so I’m not sure how it could be non-symmetric.

But if you think of uncertainty as a probability distribution, that could well be non-symmetric.

Using your definition there is no way to specify an asymmetric uncertainty interval.

It’s the GUM’s definition.

“It can be something like x +0.5,-0.7.”

Which still doesn’t give you a negative uncertainty.

This stems from your in-built bias that all uncertainty is random, Gaussian, and cancels.

And you were doing so well up to that point.

Reply to  Bellman
August 31, 2023 8:17 am

“The standard deviation, σ, does not represent a probability distribution.”

I didn’t say anything about the probabilities. I spoke to the VALUES of the dispersion. σ only covers 34% of the possible values in a Gaussian distribution. What covers the other 34%?

“Saying it’s impossible to know the probability of a range left of the mean unless you can talk about a negative σ is just nonsense.”

Pure cow manure! The uncertainty left of the mean does *not* have to be the same as that to the right of the mean.

You keep denying that you live in a Gaussian world but it just comes through with everything you post!

“Not if you are saying µ ± σ. Then it’s a binary operator.”

How do you specify an asymmetric uncertainty interval? Using ± is merely shorthand for a symmetric interval. It is *still* a value modifier.

“Of course you can. But that doesn’t mean it’s the standard deviation.”

Are you truly aware of what you just said here? It is an admission that you see everything as Gaussian with a symmetric standard deviation. What do you label an asymmetric interval from a skewed distribution? Does a skewed distribution not have a standard deviation?

“Might depend on your definition of uncertainty.”

Again, you are caught using your No. 3 fallback when you’ve been caught out: “It’s a matter of definition.”

“The standard ways of describing uncertainty in the GUM don’t allow for non-symmetric uncertainties,”

That’s because the GUM only addresses multiple measurements of the same thing in a repeatable environment using the same device. Something you have yet to admit to. Your wriggle room is getting smaller and smaller. Where is your usual defender bdgwx? Has he decided that perhaps assuming the global average temperature can be done by ignoring uncertainty is not so good? That the GUM doesn’t support that assumption?

“and you keep insisting there is no probability distribution associated with an uncertainty, so I’m not sure how it could be non-symmetric.”

Nope. The DATA can have a skewed distribution. That has nothing to do with the uncertainty interval having a distribution. That’s why the GUM specifically moved away from the “true value” – “error” meme. If you *know* the probability distribution of the uncertainty then you can determine the “true value”. If you don’t know the probability distribution of the uncertainty interval, i.e. it only describes an area of UNKNOWN, then you can’t determine a “true value”.

“Which still doesn’t give you a negative uncertainty,”

Your reading comprehension, or rather lack of it, is showing again. -0.7 is *NOT* a negative uncertainty?

“And you were doing so well up to that point.”

You just keep on proving me correct.

Reply to  Tim Gorman
August 31, 2023 11:16 am

Pure cow manure! The uncertainty left of the mean does *not* have to be the same as that to the right of the mean.

This is just getting silly. You are the one who has to explain what you want. Tell or show me what equation you use to derive two different standard deviations from the same asymmetrical distribution, and then show how they are used to calculate confidence intervals, left and right of the mean. And then explain why one of these standard deviations is actually < 0.

It is an admission that you see everything as Gaussian with a symmetric standard deviation.

You’re obsessed – and I’m beginning to find these fantasies about me disturbing.

Your wriggle room is getting smaller and smaller. Where is your usual defender bdgwx?

I may have to call the police.

The DATA can have a skewed distribution.

What data? And why is it in capitals? We are talking about uncertainty.

If you *know* the probability distribution of the uncertainty then you can determine the “true value”.

How?

Your reading comprehension, or rather lack of it, is showing again. -0.7 is *NOT* a negative uncertainty?

It’s not an uncertainty full stop. It’s the lower bound of an uncertainty interval.

Honestly, you try to string this along – but all you have to do is point to a single text that describes the lower bound as an uncertainty. The fact you never do that, and just have to keep inventing your own definitions is the problem.

Reply to  Bellman
August 31, 2023 1:17 pm

Forestry Department, you need to head over there PDQ.

Reply to  Bellman
August 31, 2023 1:59 pm

“Tell or show me what equation you use to derive two different standard deviations from the same asymmetrical distribution”

u(T) = u(T0) – a(f+)(ΔT) + b(f-)(ΔT), where f is the friction force, a and b are scaling constants, f+ = 0 if the temperature is going down, f- = 0 if the temperature is going up, and f+ ≠ f-.

You *really* don’t know anything about physical science, do you?

An uncertainty interval is not a confidence interval. They each have different uses. The measurement of a 2″x4″ board being 8′ +/- 1″ does *NOT* mean that the 1″ is a confidence interval. It is not meant to be a confidence interval.

Confidence intervals typically only apply to instances where the uncertainty is random, Gaussian, and cancels and where the variation in the stated value is used to determine uncertainty. Measurement uncertainty is not error and it cannot be assumed to have a specific distribution let alone a Gaussian one.

You simply cannot get away from the meme of measurement uncertainty being random, Gaussian, and cancels. It’s an obsession with you and you are so obsessed about it that you can’t even recognize that you are obsessed with it.

“It’s not an uncertainty full stop. It’s the lower bound of an uncertainty interval.”

ROFL!! The lower bound of an uncertainty interval is not an uncertainty itself? Do you *ever* listen to yourself?

Reply to  Pat Frank
August 29, 2023 7:19 am

“Therefore, in science and engineering, the standard deviation is defined as the ±roots of the variance.”

Citation required. I mean it’s possible. Science and engineering don’t need to be as rigorous as mathematics, but I still can’t find anything that actually claims that.

I suspect this might just be something people assume it means, and never need to change that assumption because it makes no real difference in most places. In fact, I suspect I might have thought similarly until I was taught better.

Reply to  Bellman
August 29, 2023 7:21 am

Until I was taught better.

Edit still not working.

Reply to  Bellman
August 29, 2023 7:26 am

The Big Man demands a citation!

When LoopholeMan speaks, the world shakes!

Reply to  Bellman
August 29, 2023 9:56 am

Annex J*
Glossary of principal symbols
____________
a – half-width of a rectangular distribution of possible values of input quantity Xi:

a = (a+ – a-)/ 2
_____________

U – expanded uncertainty of output estimate y that defines an interval Y = y ± U having a high level of confidence, equal to coverage factor k times the combined standard uncertainty uc

Reply to  Jim Gorman
August 29, 2023 10:33 am

And…?

It’s really tedious when you just keep quoting things that I don’t disagree with, with no explanation of what point you think you are making. I can’t help it if you don’t understand what you are quoting, or think it’s some big gotcha.

Now if you could provide the long sought citation that proves scientists and engineers believe a standard deviation can be negative, that would be helpful.

Note, that in your first quote a+ and a- are the upper and lower bounds of a rectangular distribution, the + and – should be subscripts. It’s just saying that if you have a rectangular distribution the half-width is equal to the width divided by 2. The width is obtained by subtracting the lower bound from the upper bound – this, of course, will be positive.

The other one is just the standard definition of an expanded uncertainty interval. U is positive. You know that because it’s defined as

U = k*u_c(y)

and k, the coverage factor, and u_c are both positive. (To be fair they don’t explicitly define the coverage factor as positive, but it’s difficult to imagine what a negative coverage factor would be, and they do say it’s typically between 2 and 3.)

Reply to  Bellman
August 30, 2023 6:25 am

“Note, that in your first quote a+ and a- …”

But you keep saying a- can’t possibly exist!

Reply to  Tim Gorman
August 30, 2023 7:36 am

What? Why would I think a lower bound of a rectangular distribution doesn’t exist? This doesn’t even have anything to do with standard deviations. All it is saying is that if you have a rectangular distribution you can subtract the lower bound from the upper and divide by two to get the half-width.

If the distribution runs from 12 to 16, then a- is 12 and a+ is 16, and the half width, a, is given by (16 – 12) / 2 = 4 / 2 = 2.

Note, the + and – signs in a+ and a- are meant to be subscripts.

Reply to  Jim Gorman
August 30, 2023 6:24 am

WHAT? This can’t be! The real world doesn’t exist! Only statistical world exists and this violates the principles of statistical world!

Reply to  Bellman
August 29, 2023 11:05 am

Look at the image.

Does that equation provide two solutions?

If not, where do the values below the mean come from?

Reply to  Jim Gorman
August 29, 2023 11:18 am

Here is the image.

standard deviation.png
Reply to  Jim Gorman
August 29, 2023 11:45 am

Does that equation provide two solutions?

No.

“If not, from where do the values below the mean come from.”

From subtracting S_x or multiples thereof from the mean.

I really don’t understand why this is so difficult. Suppose it wasn’t a standard deviation but just some arbitrary positive constant, C = +2 say. Would you assume that if I said 0 – C = -2, I was actually saying C was equal to -2?

Here’s the conclusion of the worked example from the page:

Take the square root of the number from the previous step. This is the standard deviation. Your standard deviation is the square root of 4, which is 2.

“Which is 2”. Not -2, just 2.
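The worked example’s arithmetic can be reproduced in a few lines (the data below are hypothetical, chosen only to give the same mean of 3, three negative differences, and a variance of 4):

```python
from math import sqrt

data = [1, 2, 2, 4, 6]            # hypothetical sample with mean 3
mean = sum(data) / len(data)
diffs = [x - mean for x in data]  # three of the five differences are negative
variance = sum(d * d for d in diffs) / (len(data) - 1)  # sample variance
sd = sqrt(variance)               # math.sqrt returns the principal (positive) root

print(mean, diffs, variance, sd)  # 3.0 [-2.0, -1.0, -1.0, 1.0, 3.0] 4.0 2.0
```

Note that the negative differences enter the calculation squared, so the standard deviation itself comes out as the positive root, 2.0.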

Reply to  Bellman
August 29, 2023 8:31 pm

“Your standard deviation is the square root of 4, which is 2.”

Courtney could as well have written, “the square root of 4, which is -2.”

After all (-2)² = 4. -2 is as legitimately the square root of 4 as 2.

The reason she did not is because of the careless and arbitrary import of a misleading limitation from statistics.

Three of Courtney’s five differences are negative.

Describing the dispersion about 3 as 2 does not convey the factual measurement (plus/minus) dispersion anywhere near as directly as 3±2.

The notion of values less than 3 is immediately and intuitively obvious.

And note that the -2 uncertainty value is as positively probable as the +2.

Even if one subtracted the mean to get 0±2, the -2 below zero in the number line retains its positive probability.

The statistical rationale for rejecting an undefined P<0 is nowhere to be found. Rejection of negative valued uncertainties is ungrounded in science.

In science and engineering the ±(uncertainty) is ubiquitous. And even when a positive SD is exclusively presented, the (plus/minus) dispersion is understood.

Reply to  Pat Frank
August 30, 2023 8:41 am

“Courtney could as well have written, “the square root of 4, which is -2.””

Could have, but didn’t. That’s because she’s not describing a square root but a standard deviation.

“Describing the dispersion about 3 as 2 does not convey the factual measurement (plus/minus) dispersion anywhere near as directly as 3±2.”

And you can do that if you want. Just don’t mistake ±2 for the standard deviation. The standard deviation is +2. You can add it to and subtract it from the mean to get a range. You could add and subtract any value to get a range; it doesn’t mean it’s going to be both positive and negative.

I’m not sure myself if saying μ±2 is clearer than saying the standard deviation is 2, as long as your audience knows what a standard deviation means. The ± range is more usually used to define a confidence interval with a specified percentage.

“And note that the -2 uncertainty value is as positively probable as the +2.”

Again that’s confusing error with uncertainty. You can say it’s equally probable that the error will be 2 above or below the true value, but it makes no sense to say the uncertainty is -2.

“The statistical rationale for rejecting an undefined P<0 is nowhere to be found. Rejection of negative valued uncertainties is ungrounded in science."

It's axiomatic to any definition of probability. If scientists can imagine negative probability then I don't want to hear any more about how they are the ones living in the real world. A probability of zero means something can never happen. It's impossible. You can't have less probability than that.

(To be fair, for all I know there is an abstract branch of probability theory that allows negative probability. But that would be exactly the type of mathematics you would say is not applicable to the real world.)

Reply to  Bellman
August 30, 2023 1:17 pm

Again that’s confusing error with uncertainty.

IRONY OVERLOAD…

Reply to  karlomonte
August 30, 2023 3:04 pm

The irony being you claim it’s a mortal sin when you think I’m doing it, but fine when Pat does it.

Reply to  Bellman
August 30, 2023 3:43 pm

More reading comprehension disability.

Reply to  Bellman
August 30, 2023 2:53 pm

“That’s because she’s not describing a square root but a standard deviation.”

Courtney: “the square root of 4, which is 2.”

Courtney’s full statement, “Your standard deviation is the square root of 4, which is 2.” is wrong on its face.

“Your standard deviation is the square root of 4, …” is correct.

“…which is 2.” is wrong. The square root of 4 is ±2.

Right modified by wrong is wrong.

Courtney’s conclusion merely shows that she has been bamboozled by the limitations of statistical thinking, and has incorrectly applied such thinking to physical meaning.

“it makes no sense to say the uncertainty is -2.”

No one would say that. The uncertainty is ±2.

You’re just continuing to insist on the statistical interpretation of uncertainty that we discussed, and which I invalidated above.

Your link below shows that negative probabilities engage some anti-particle mystery in the quantum world, but are just mathematical conveniences in Finance and Engineering.

Under Engineering, we find this: “the facility disruption states (whose probabilities are ensured to be within the conventional range [0,1]),” (my bold) which validates my point that statistics eschews negative probabilities.

Hence Statistics is incapable of rationalizing the meaning of uncertainty in science and engineering.

Because science and engineering assign positive probabilities to negative uncertainty values.

The entire conversation on this issue is you insisting on a statistical vocabulary in a science/engineering milieu.

Reply to  Pat Frank
August 30, 2023 3:27 pm

Courtney’s full statement, “Your standard deviation is the square root of 4, which is 2.” is wrong on its face.

A bit sloppy, but as the Wiki page on square roots says.

Although the principal square root of a positive number is only one of its two square roots, the designation “the square root” is often used to refer to the principal square root.

Courtney’s conclusion merely shows that she has been bamboozled by the limitations of statistical thinking

It’s not my source, it was Jim’s, but I think your statement is offensive nonsense. You keep lashing out at anyone who you don’t agree with. You never consider it’s possible you might be fooling yourself.

No one would say that. The uncertainty is ±2

If you say it’s ±2, you are saying it’s -2 as well as +2. And so far you have yet to explain what you would think an uncertainty of -2 means.

You’re just continuing to insist on the statistical interpretation of uncertainty that we discussed, and which I invalidated above.

I’ve looked at lots of definitions of uncertainty and none of them allow it to be negative. You are now claiming you’ve invalidated that by a list of quotes, not one of which says you can have negative uncertainty, and one that specifically says uncertainty cannot be negative.

Again, and again, and again – what you are doing is confusing the idea of negative uncertainty with that of an interval.

Your link below shows that negative probabilities engage some anti-particle mystery in the quantum world, but are just mathematical conveniences in Finance and Engineering.

As I said, they are not something you would use in the “real” world. In case you’ve missed the point again – I was the one arguing that probabilities cannot be negative, and that this is a good reason why standard deviations cannot be negative.

“the facility disruption states (whose probabilities are ensured to be within the conventional range [0,1]),”

A reminder – you were the one insisting on negative probabilities:

The statistical rationale for rejecting an undefined P<0 is nowhere to be found. Rejection of negative valued uncertainties is ungrounded in science.

Hence Statistics is incapable of rationalizing the meaning of uncertainty in science and engineering.

And yet they all seem to depend on statistics to describe uncertainty.

Because science and engineering assign positive probabilities to negative uncertainty values.

Meaningless drivel unless you can explain what a “negative uncertainty value” is. If you say you measured a board as 10m, with a standard uncertainty of (-1cm), what does the -1 mean? That you are more certain than if there had been zero uncertainty?

The entire conversation on this issue is you insisting on a statistical vocabulary in a science/engineering milieu.

It seems more like you failing to understand that saying you can subtract a positive number does not mean the number becomes negative.

Reply to  Bellman
August 30, 2023 3:32 pm

“It seems more like you failing to understand that saying you can subtract a positive number does not mean the number becomes negative.”

And you can’t accept the idea that you can normalize a Gaussian curve so it is entirely to the left of 0. I.e. the standard deviation is a negative number. That does *NOT* mean that the probability of each of those negative values is negative. As Pat rightly points out they can *all* be positive.

Reply to  Tim Gorman
August 30, 2023 3:47 pm

So let me get this straight, in bellman world a minus sign in front of something can only be an operator? Is this what he is “arguing”?

Reply to  karlomonte
August 30, 2023 3:56 pm

You’re asking the wrong person.

Reply to  karlomonte
August 30, 2023 4:56 pm

Sure looks that way to me. He doesn’t have the understanding of a number line that a 3rd grader does. To him everything is a natural number and the minus sign is just an operator.

Reply to  Tim Gorman
August 30, 2023 4:01 pm

And you can’t accept the idea that you can normalize a Gaussian curve so it is entirely to the left of 0.

You can’t, but that’s irrelevant to the point.

I.e. the standard deviation is a negative number.

I wish you would just admit you don’t know what the standard deviation is, and save us both this embarrassment.

Moving the curve has absolutely nothing to do with the standard deviation, and certainly does not make it negative.

That does *NOT* mean that the probability of each of those negative values is negative.

The next person who implies I think there are negative probabilities in the Gaussian distribution gets a down vote. The worst punishment imaginable.

Reply to  Bellman
August 30, 2023 5:06 pm

“You can’t, but that’s irrelevant to the point.”

Of course you can. You can slide it anywhere you want on the number line and it won’t affect the distribution at all. You are confusing the height of the values, the y-axis, with the location of the distribution on the x-axis.

One more indicator that you simply don’t know what you are talking about!

“Moving the curve has absolutely nothing to do with the standard deviation, and certainly does not make it negative.”

It doesn’t make the VARIANCE negative. The standard deviation, the square root of the variance, has both a positive and negative square root.

If you are analyzing an equation that requires taking a square root and you ignore one of the roots then you will only get 1/2 of the answer to the equation. Got that? You only get one half of it!

Take the equation for the mixing of F1 and F2 in a square-law mixer. You will get F1 + (-F2) and F1 + (+F2) frequencies. If you work the equation through with only the positive root, F1 + F2, you will miss half of the result. If you calculate the variance of the resulting waveform using only the positive root you won’t get the right answer, you will get *HALF* of the true value of the variance. You will only see the F1 + (+F2) half of the result and miss the other half, F1 + (-F2).
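For reference, the square-law mixer point rests on the product-to-sum identity 2·cos(2πF1t)·cos(2πF2t) = cos(2π(F1+F2)t) + cos(2π(F1−F2)t), i.e. both the sum and difference frequencies appear in the output. A quick numerical check (the frequencies and sample times are arbitrary):

```python
from math import cos, pi

f1, f2 = 7.0, 3.0  # arbitrary example frequencies, Hz
for t in [0.0, 0.013, 0.21, 0.5]:
    product = 2 * cos(2 * pi * f1 * t) * cos(2 * pi * f2 * t)
    sum_diff = cos(2 * pi * (f1 + f2) * t) + cos(2 * pi * (f1 - f2) * t)
    assert abs(product - sum_diff) < 1e-9  # identity holds at every sample time
print("sum and difference frequencies both present in the product")
```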

Pat had it right. You are living in statistics world and making up your own definitions. That might work in statistics world. It doesn’t work in the real world.

Reply to  Tim Gorman
August 30, 2023 5:59 pm

Of course you can.

I take it your 3rd grade education didn’t include learning about infinity. You can not move a Gaussian distribution so it is entirely to the left of 0.

It doesn’t make the VARIANCE negative. The standard deviation, the square root of the variance, has both a positive and negative square root.

We went through all this three days ago. The standard deviation is the positive square root of the variance.

“If you are analyzing an equation that requires taking a square root and you ignore one of the roots then you will only get 1/2 of the answer to the equation.”

That depends on the problem. Often the positive root is the only one that makes sense. Consider the not unrelated issue of Pythagoras’ theorem. What’s the length of the hypotenuse of a triangle with sides 3cm and 4cm. Would it make sense to say it’s ±5cm?

Reply to  Bellman
August 31, 2023 4:48 am

You never learned vector math did you? The Pythagoras’ theorem you are referencing is a basic elementary concept. As you begin to learn more complicated science and math, the answer to your question is that it could very easily be either. As Tim tried to explain, using vectors for travel, or even the travel of an electromagnetic wave could result in a negative value for the square root. And in fact it could describe something in both directions. Have you ever learned about the position of electrons?

It all depends on the coordinate system being used. Science encompasses much more than just statistics.

“You can not move a Gaussian distribution so it is entirely to the left of 0.”

Sure you can. Can you have a distribution centered around a value on the x-axis of -65° such as in Antarctica?

Reply to  Jim Gorman
August 31, 2023 7:02 am

He’s actually never learned statistics math. Like everything else he just cherry-picks things in statistics world as well.

Reply to  Jim Gorman
August 31, 2023 1:42 pm

You never learned vector math did you?

Could you be any more patronizing?

I was specifically asking about the length of a side of a right angled triangle. As I said – it depends on the problem.

Sure you can.

Someone else who thinks they can lecture me on basic mathematics, yet doesn’t understand the Gaussian distribution is infinite.

Can you have a distribution centered around a value on the x-axis of -65° such as in Antarctica?

Can you have a Gaussian distribution on a sphere?

Reply to  Bellman
August 31, 2023 2:02 pm

I was specifically asking about the length of a side of a right angled triangle. As I said – it depends on the problem.

bellman Evasion Rule No. 2. “I was talking about something else”

“Someone else who thinks they can lecture me on basic mathematics, yet doesn’t understand the Gaussian distribution is infinite.”

bellman Evasion Rule No. 3: deflect to another subject

“Can you have a Gaussian distribution on a sphere?”

bellman Evasion Rule No. 3. Deflect to another subject.

Reply to  Tim Gorman
August 31, 2023 3:39 pm

You really are beginning to seem like someone who has a personal obsession with me.

bellman Evasion Rule No. 2. “I was talking about something else”

My exact words:

That depends on the problem. Often the positive root is the only one that makes sense. Consider the not unrelated issue of Pythagoras’ theorem. What’s the length of the hypotenuse of a triangle with sides 3cm and 4cm. Would it make sense to say it’s ±5cm?

bellman Evasion Rule No. 3: deflect to another subject

The subject was Jim saying “sure you can” to my comment:

You can not move a Gaussian distribution so it is entirely to the left of 0.

“bellman Evasion Rule No. 3. Deflect to another subject.”

Jim responded –

Can you have a distribution centered around a value on the x-axis of -65° such as in Antarctica?

Reply to  Bellman
August 30, 2023 3:45 pm

Just about everything you type is meaningless drivel.

Reply to  Bellman
August 30, 2023 10:32 pm

is often used

Not defined as.

“I think your statement is offensive nonsense.”

You’re welcome to your opinion.

“You keep lashing out …”

Supposing Courtney was misled is not to lash out. You’re powering your argument with lurid language.

If you say it’s ±2, you are saying it’s -2 as well as +2.

Not correct. The message is that the uncertainty in value spans that range.

“I’ve looked at lots of definitions of uncertainty and none of them allow it to be negative.”

Yet once again, from Vasquez & Whiting (2006): “… to define uncertainty intervals for means of small samples as x ± t·s, where s is the estimate of the standard deviation σ.”

From Ferson, et al. (2007): “the expression 0.254 ± 0.011 denotes a normal distribution N(0.254, 0.011) with mean 0.254 and standard deviation 0.011 which is the model for the uncertainty of the measurement result.”

Direct allowance of uncertainty to be negative in a scientific context. With a positive probability. Please remember, after this.

I was the one arguing that probabilities cannot be negative, and that this is a good reason why standard deviations cannot be negative.

In science, the negative wing of an uncertainty distribution, explicated as the negative valued standard deviation, carries positive probabilities.

Thus, in the relevant context of science — the present context — the second clause of your objection is a non-sequitur of the first.

This has been pointed out many times, and demonstrated now two or three times. Please stop revisiting the same mistaken statistical context.

“Meaningless drivel unless you can explain what a ‘negative uncertainty value’ is.”

The negative distribution wing representing the lower range uncertainty of a physical magnitude, and represented by the negative valued standard deviation.

…what does the -1 mean?

It means the correct (true) length of one’s board is within ±1 cm (1σ) of 10 m. It may be a bit shorter, it may be a bit longer, but one doesn’t know which or exactly by how much.

It seems more like you failing to understand that saying you can subtract a positive number does not mean the number becomes negative.

Rather, it seems you can’t understand that ±σ represents a continuous range rather than two scalar limits.

Reply to  Pat Frank
August 31, 2023 5:34 am

Not defined as.

You see, the problem I have is that Jim Gorman keeps insisting on sending me lengthy passages from random bits of text he’s found on the internet. He always thinks they prove I’m wrong about something, but nearly always it’s just that he hasn’t understood what they are saying.

Often these sites are quite basic, intended as introductions to a topic, and in many cases they are badly written, confusing and sometimes just wrong. This particular page isn’t bad; it’s quite clear and a reasonable introduction for a pupil just needing to know what a standard deviation is and how to calculate it by hand.

But it clearly is not intended to be a research document or a complete text book on probability theory. It’s just a low level introduction to the subject, and I think it’s entirely fair that the author didn’t try to confuse the reader by introducing complications such as negatives of square roots, especially when in this case you only want the positive root.

Suggesting that this means that he has been “bamboozled by the limitations of statistical thinking” is just plain daft. But it seems to be a pattern with you. Attacking all statisticians for not living in the real world, attacking climate scientists for not using the correct statistics, yet not being prepared to accept any criticism of your own work, by people you consider “unlearned”.

“Supposing Courtney was misled is not to lash out.”

Misled by whom? I’ve no idea who this person is – I’m guessing you don’t either given you assumed he was a woman, but according to the page Dr Taylor has a PhD in mathematics and “is currently a professor of mathematics and chair of the mathematics department at Anderson University, where he also leads the university’s research experience projects in mathematics.”

Suggesting he has somehow been “bamboozled” by evil statisticians into thinking square roots can’t be negative – seems at best arrogant on your part.

Reply to  Bellman
August 31, 2023 5:59 am

I post excerpts from university courses when possible and other commonly used tutorials so that readers of this site can read them and know my comments are supported by reputable sources.

You seldom provide sources that substantiate your assertions.

As to the use of ±σ values, you need only to learn some vector math to help understand the real world and why there are two possible answers to an even root. Would it surprise you to know that an airplane can have a negative solution for distance traveled when solving a Pythagoras triangle, i.e., a negative vector? Would it surprise you to know there are places in turbulent flow of a fluid that have a negative vector?

These are real world examples and lie outside the world of linear math where probability is always positive real numbers.

Reply to  Jim Gorman
August 31, 2023 7:08 am

“You seldom provide sources that substantiate your assertions.”

He has yet to show any source from the GUM on how to handle multiple measurements of single things. I’m not going to hold my breath waiting.

Reply to  Tim Gorman
August 31, 2023 8:05 am

Equations 10 and 13, B.2.17. Take your pick.

If you want to combine single measurements of different things, equations 10 and 13.

None of this actually describes the real uncertainty of sampling. For that I wouldn’t use the GUM – it’s not its purpose.

Reply to  Bellman
August 31, 2023 8:25 am

Wrong. If you look at Section 4.4.3 of the GUM it only shows these equations applying to symmetrical distributions, i.e. Gaussian, uniform, and triangular. Not a single skewed distribution is shown. You are only guaranteed to get these symmetric distributions from multiple measurements of the same thing under repeatable conditions using the same measuring device. There is a *reason* why only these distributions are considered in the GUM – and for some reason you are unable to admit that.

You’ve been caught cherry-picking again. Don’t you *ever* get embarrassed over this?

This isn’t a question of sampling error. This is bellman evasion rule No. 2, “I was speaking of something else”.

Reply to  Tim Gorman
August 31, 2023 10:41 am

Wrong. If you look at Section 4.4.3 of the GUM it only shows these equations applying to symmetrical distributions

Sigh. Section 4.4.3 is not about equation 10. It’s a simple illustration of how to use the SEM (or experimental standard deviation of the mean, if you prefer) to calculate the uncertainty of the mean of 20 random measurements. It is illustrating Equation 5. It takes the measurements from a Gaussian distribution, but that does not mean that the equation will not work if it comes from a different distribution. It’s just an application of the CLT, which as you must surely know by now works for nearly any distribution.

While you are looking at 4.4.3, note how in the example they at no point suggest the standard deviation, the uncertainty, or the result of any square root is negative.

The arithmetic mean or average t̄ of the n = 20 observations calculated according to Equation (3) is t̄ = 100,145 °C ≈ 100,14 °C and is assumed to be the best estimate of the expectation μt of t based on the available data. The experimental standard deviation s(tk) calculated from Equation (4) is s(tk) = 1,489 °C ≈ 1,49 °C, and the experimental standard deviation of the mean s(t̄), which is the standard uncertainty u(t̄) of the mean t̄, is

u(t̄) = s(t̄) = s(tk) / √20 = 0,333 °C ≈ 0,33 °C.

(For further calculations, it is likely that all of the digits would be retained.)
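For anyone wanting to reproduce the arithmetic of that example, here is a minimal sketch. Note the data below are simulated stand-ins with a similar mean and spread, NOT the GUM’s actual 20 observations:

```python
import math
import random

# Hypothetical stand-in for the GUM 4.4.3 example: 20 temperature
# observations drawn to have a similar mean and spread (NOT the
# GUM's actual data).
random.seed(0)
obs = [random.gauss(100.145, 1.489) for _ in range(20)]

n = len(obs)
mean = sum(obs) / n                                         # Equation (3)
s = math.sqrt(sum((t - mean) ** 2 for t in obs) / (n - 1))  # Equation (4)
sem = s / math.sqrt(n)                                      # Equation (5)

print(f"mean = {mean:.2f} C, s = {s:.2f} C, u(mean) = {sem:.2f} C")
```

The experimental standard deviation of the mean is simply the sample standard deviation divided by √n, and it is always non-negative.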

Reply to  Bellman
August 31, 2023 12:30 pm

“It’s just an application of the CLT, which as you must surely know by now works for nearly any distribution.”

But there are very specific requirements to use the CLT when sampling non-normal distributions. The GUM example does sampling from a normal distribution. It is guaranteed to work.

Your assertion has no evidence to support it. Even NIST TN 1900 makes the admission that a statistical test for normality can’t justify that assumption.

From the TN.

“A coverage interval may also be built that does not depend on the assumption that the data are like a sample from a Gaussian distribution. The procedure developed by Frank Wilcoxon in 1945 produces an interval ranging from 23.6 ◦C to 27.6 ◦C (Wilcoxon, 1945; Hollander and Wolfe, 1999). The wider interval is the price one pays for no longer relying on any specific assumption about the distribution of the data.”

See the wider interval! It expands to ±2.0. Why would anyone think the uncertainty would diminish?

Reply to  Jim Gorman
August 31, 2023 3:02 pm

But there are very specific requirements to use the CLT when sampling non-normal distributions. The GUM example does sampling from a normal distribution. It is guaranteed to work.

It would be much easier if you tried to study what the CLT actually is rather than just guessing.

Having a normal distribution does not guarantee it will work, and not being normal is no obstacle to it working. The specific requirements are independence, identical distribution and a finite standard deviation. And even if the requirements don’t hold perfectly, that doesn’t mean it can’t give you useful results.
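That claim is easy to check numerically. A minimal sketch, using an exponential distribution purely for illustration (strongly skewed, mean 1, standard deviation 1):

```python
import random
import statistics

# Sketch: the CLT at work on a decidedly non-Gaussian distribution.
# Draw many independent samples from an exponential distribution and
# look at how the sample means behave.
random.seed(42)
n = 100          # sample size
trials = 2000    # number of independent samples

means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

sd_of_means = statistics.stdev(means)
predicted = 1.0 / n ** 0.5   # sigma / sqrt(n) for expovariate(1)

print(sd_of_means, predicted)  # both close to 0.1
```

Despite the skew of the parent distribution, the spread of the sample means matches the σ/√n prediction, and their distribution is close to Gaussian.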

Even NIST TN 1900 makes the admission that a statistical test for normality can’t justify that assumption.”

And yet they still use the result.

See the wider interval! It expands to ±2.0.

You are using a small sample size < 30. The point is the larger the sample size the smaller the interval.
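A small illustration of that point, using approximate textbook t critical values (the standard deviation s below is an assumed value for illustration, not TN 1900’s data):

```python
import math

# Sketch: how the half-width of a 95% t-based coverage interval
# shrinks as sample size grows. The t values are approximate
# critical values for df = n - 1; s is an assumed sample standard
# deviation.
t95 = {15: 2.145, 30: 2.045, 60: 2.001, 120: 1.980}
s = 1.5  # assumed sample standard deviation, degrees C

widths = {n: t * s / math.sqrt(n) for n, t in t95.items()}
for n, w in widths.items():
    print(n, round(w, 3))
```

Both effects pull the same way: a larger n shrinks the √n divisor and also shrinks the t critical value toward 1.96.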

Reply to  Bellman
August 31, 2023 3:52 pm

How do you get identical distributions for the samples from a multi-modal distribution?

And yet they still use the result.”

As a TEACHING TOOL.

No, the point is that TN1900 ignored systematic bias in the measurements. The same thing you always do so you can assume all measurement uncertainty is random, Gaussian, and cancels.

Reply to  Bellman
August 31, 2023 1:22 pm

 to calculate the uncertainty of the mean of 20 random measurements.”

ROFL!!! You can’t see the forest for your blinders!

Measurements are only used to specify uncertainty under the meme that the measurement uncertainty is RANDOM, GAUSSIAN, AND CANCELS!

You can deny you live with that meme every second of your life but it just comes shining through each and every time!

You just can’t get away from your cherry-picking, can you?

but that does not mean that the equation will not work if it comes from a different distribution.”

OF COURSE THAT IS EXACTLY WHAT IT MEANS!! It will not work for a multi-modal distribution. It will not work for a skewed distribution. And it will not work when the measurement uncertainty is not random, Gaussian, and does not cancel out!

It works for multiple measurements of the same thing under the same environment using the same device. It does *NOT* work for single measurements of different things under different environments using different devices.

It doesn’t matter what is being talked about you just always circle back to the same assumption: all measurement uncertainty is random, Gaussian, and cancels.

Reply to  Bellman
August 31, 2023 11:57 am

Funny you should bring up equations 10 and 13. You do realize that these are equations for the combined uncertainty of “y”, right? I wonder how dividing by N is not shown to calculate the combined uncertainty.

Now let’s discuss B.2.17.

This is finding the value of s(qᴋ). Guess what the same measurand means in the explanation “for a series of n measurements of the same measurand”. It means what it says, multiple measurements of the same thing.

In NIST TN 1900, the measurand was declared to be the Tmax for the month of the same thing. The Xᵢ’s are all actual measurements OF THE SAME MEASURAND. You could then find the mean of those measurements by

y = (1/N)(X₁ + … + Xₙ)

The last time you and bdgwx dealt with this, the declaration was made that the functional description for the monthly average temperature was as above. That’s ok because actual measurements are used, but no one has ever justified that averaging means of different stations meets the requirement of “a series of n measurements of the same measurand.”

You will certainly have a problem convincing anyone that a station in Death Valley during the summer is the same measurand as a temperature at a station at the southern tip of Peru in winter.

That means averages of averages do not fall under the GUM. A verifiable procedure needs to be used that deals with averages of averages.

Most people who deal with measurement would say simply add their uncertainties. I’ll bet you won’t.

Reply to  Jim Gorman
August 31, 2023 1:04 pm

I wonder how dividing by N is not shown to calculate the combined uncertainty.

We’ve been through this so many times before – but I guess your memory is almost as bad as mine.

You have to multiply each squared uncertainty by the square of the corresponding partial derivative. If f is the mean function, each partial derivative is 1/N, so each term is divided by N². Hence,

u(y)^2 = (1/N^2) * (u(x1)^2 + u(x2)^2 + … + u(xN)^2)

Then taking the positive square root:

u(y) = (1/N) * sqrt(u(x1)^2 + u(x2)^2 + … + u(xN)^2)
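As a quick numerical sanity check of that algebra (the u(xi) values below are hypothetical standard uncertainties, not anything from the GUM):

```python
import math

# Sketch of GUM Equation (10) applied to the mean y = (x1+...+xN)/N.
# Each sensitivity coefficient (partial derivative) is 1/N, so each
# squared uncertainty is weighted by 1/N^2. The u values are made up.
u = [0.5, 0.5, 0.5, 0.5]
N = len(u)

u_y = math.sqrt(sum((ui / N) ** 2 for ui in u))
print(u_y)  # 0.5 / sqrt(4) = 0.25
```

With equal uncertainties this reduces to u/√N, which is the familiar standard-error form.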

Now let’s discuss B.2.17.

Yes, that’s the equation for the standard deviation of n measurements of the same thing. I.e. the standard uncertainty of a measurement.

In NIST TN 1900, the measurand was declared to be the Tmax for the month of the same thing.

Yes, that’s what you get if you want the uncertainty of the daily average max temperature – here “the same thing” is apparently allowed to be the daily average. I remember when you were very insistent that it was impossible to ever treat temperature measurements of the same thing. As soon as you took a measurement the thing had disappeared and you had a new temperature.

The Xᵢ’s are all actual measurements OF THE SAME MEASURAND.

Yes, and that is the average of all the days in May. Yet you also kept insisting that an average could not be a measurand.

but no one has ever justified that averaging means of different stations meets the requirement of “a series of n measurements of the same measurand.”

And what requirements are those? They seem to depend very much on what you want to allow and disallow.

You will certainly have a problem convincing anyone that a station in Death Valley during the summer is the same measurand as a temperature at a station at the southern tip of Peru in winter.

Why? If the measurand is defined as the global average temperature, they are both measurements of the same thing. Just as a maximum temperature on the 1st of May is the same thing as the max temperature on the 31st.

And I hardly need to convince anybody. Nearly everyone, including WUWT keep talking about global average temperatures. They even try to claim there has been a pause in them.

That means averages of averages do not fall under the GUM.

Yet they fail to mention that. So assuming that’s correct, why do you keep demanding I tell you where in the GUM it explains how to handle it. Also, are you going to explain how Pat Frank gets his uncertainty of the global annual anomaly based entirely on instrument uncertainty. You keep insisting it’s impossible and meaningless – but not to him.

Most people who deal with measurement would say simply add their uncertainties.

I think you have just slandered a whole load of people who would say that’s nonsense. But I hope you explain to Pat why he’s wrong to use RMS rather than just add all the uncertainties.

Reply to  Bellman
August 31, 2023 4:16 pm

And in the limit as N —> infinity in your equation, the division becomes infinity over infinity, which is undefined.

Reply to  karlomonte
August 31, 2023 4:39 pm

I’d worry about that if you ever have an infinite number of measurements.

Reply to  Bellman
August 31, 2023 4:45 pm

Check this out: the “maths” guy doesn’t care about inconvenient little problems.

Any means are acceptable to hit his goal of tiny air temperature “uncertainties”—where have I heard this philosophy before?

Reply to  karlomonte
August 31, 2023 6:05 pm

The inconvenient problem of accidentally measuring an infinite number of things and finding you were left with zero uncertainty. Yes, that’s a real problem that needs to be addressed. Fortunately it solves itself, by virtue of being impossible. But also, having zero uncertainty isn’t exactly a problem. And it will never happen even if you could measure an infinite number of things, because of all the other sources of uncertainty, that you keep forgetting about.

Reply to  Bellman
August 31, 2023 6:18 pm

because of all the other sources of uncertainty, that you keep forgetting about.”

Will you please stop overloading my irony meter?

Reply to  Bellman
September 1, 2023 10:20 am

You are just about to reach the real problem.

You are not averaging an infinite number of measurements. You are averaging a large number of random variables made up of averages, all of which have uncertainty.

The assumption is that you can reduce the uncertainty by averaging more averages. That just isn’t the case, not ever. Uncertainty grows every time you add things together. Uncertainty is not a statistical parameter. It is a physical descriptor. It may be calculated by finding an SD or SEM if certain assumptions are made, but it describes a physical concept.

Here is the procedure climate science uses.

Step 1 – Average Tmax & Tmin.
Uncertainty 1/2 Type B

Step 2 – Average 30 days of Tmid
Throw away original uncertainty
Calculate new uncertainty.

Step 3 – Average 30 monthly averages
Throw away original uncertainty
Calculate new uncertainty

Step 4 – Subtract Step 2 and Step 3
Throw away original uncertainty
Calculate new uncertainty

Step 5 – Average Step 4 with 9000 stations
Throw away original uncertainty
Calculate new uncertainty.

You need to reconcile yourself to the physicality of what you are dealing with. Here is an example of what a statistician will arrive at using math.

My factory makes rods 12 inches long.

My factory people just keep cutting them and the rods vary from 10 inches to 14 inches. They send me data that they have made 10,000 rods whose average is 12 inches with an uncertainty of “2 / √10000 = 0.02 inches”. NO PROBLEMS HERE.

Are the statistics reliable?

Until you can relate to us how your reduction in uncertainty via averaging averages truly work you will not receive any agreement from us.

Reply to  Jim Gorman
September 1, 2023 11:38 am

It would be great to see this as a full article in WUWT.

Reply to  Jim Gorman
September 1, 2023 12:14 pm

Lovely! Beautifully laid out.

Reply to  Jim Gorman
September 1, 2023 1:51 pm

Here is the procedure climate science uses.

It is not. You are only looking at propagating measurement uncertainty, and I’m not even sure which data sets include specific measurement uncertainties – I thought that was Dr Frank’s complaint.

If you do only want to estimate measurement uncertainty your steps are mostly OK, but you keep confusing “propagate the uncertainties” for “throw away the uncertainties”.

I would say, you are propagating the uncertainties for a daily Tmin and Tmax to get the Tmean uncertainty. This gives you a new value with its own uncertainty, that you then propagate into a monthly average, with a new uncertainty, and so forth.
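That chain can be sketched numerically, under the (disputed) assumption that the per-reading uncertainties are random and independent. The value of u_read below is hypothetical, not taken from any data set:

```python
import math

# Sketch of the propagation chain described above, assuming random,
# independent instrument uncertainties (a big assumption, as this
# thread makes clear). u_read is a hypothetical per-reading value.
u_read = 0.5  # assumed standard uncertainty of one reading, deg C

# Daily Tmean = (Tmax + Tmin)/2 : two readings, sensitivity 1/2 each
u_day = math.sqrt(2 * (u_read / 2) ** 2)     # = u_read / sqrt(2)

# Monthly mean of 30 daily means, each carrying u_day
u_month = math.sqrt(30 * (u_day / 30) ** 2)  # = u_day / sqrt(30)

print(round(u_day, 4), round(u_month, 4))
```

Nothing is thrown away here: each stage’s output uncertainty becomes the next stage’s input. Whether the independence assumption holds is exactly what the two sides disagree about.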

The final step is more complicated as you have to see how all the stations are being averaged. The uncertainty is going to be weighted by that – the uncertainty of a single isolated station will have more of an effect on the final uncertainty, than a station from a densely populated area.

But, unless there is some major systematic error in the instruments which cannot be accounted for, the bigger source of uncertainty is the spatial sampling, along with all the uncertainties arising from adjustments, any systematic changes that are not known, and so forth.

If you want to talk about “what climate science uses” you really should start by looking at their actual uncertainty calculations. I’m sure none of them are perfect.

Reply to  Bellman
September 1, 2023 2:14 pm

Here is an example of what a statistician will arrive at using math.

Any specific statistician? I’m sure could always find another one that will disagree with their analysis.

My factory makes rods 12 inches long.
My factory people just keep cutting them and the rods vary from 10 inches to 14 inches.

Is that good or bad? I don’t know what the rods are used for, but claiming they are all 12″ long, when they vary by 2″, seems bad. Also, is the 10 – 14 range the entire range, or is it a standard deviation of 2?

They send me data that they have made 10,000 rods whose average is 12 inches with an uncertainty of “2 / √10000 = 0.02 inches”. NO PROBLEMS HERE.

Quite a few problems there. The most obvious one being not defining what the uncertainty is. Are they talking about the uncertainty of the average, or of an individual rod? I would guess from your figures that this is the SEM, and the 2″ is the standard deviation.

Are the statistics reliable?

Who knows? I don’t know how competent your people are. Maybe they are making the figures up, or hiding bigger issues.

Assuming there is complete randomness in the large errors, then you would assume the SEM value of 0.02″ is reasonable, but why do you want to know it? Any measurement is about answering a question, and the uncertainty has to reflect that question.

If the question is, what is the average length of all our rods, based on a large random sample of 10000, then this is a good answer. Despite the large individual errors, your sample mean still managed to be exactly 12 inches to the nearest 100th of an inch, and you can have high confidence that the actual average isn’t likely to be more than, say, 0.04″ from 12″.

On the other hand, if you wanted to know how uncertain your cutting process was, you would need to be looking at the individual uncertainty, which would be the standard deviation of 2″. Whether that’s good or bad depends on what you are using them for. If say the process required rejecting any rod less than 11″, or more than 13″, it would mean you are going to be rejecting a lot of rods.
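The SD-versus-SEM distinction being drawn here can be demonstrated with simulated rods (assuming, purely for illustration, Gaussian cutting errors with a 2″ standard deviation):

```python
import random
import statistics

# Sketch of the rod example: 10,000 rods cut with a target of 12"
# and an assumed standard deviation of 2". The SEM describes the
# average; the standard deviation describes any individual rod.
random.seed(1)
rods = [random.gauss(12.0, 2.0) for _ in range(10_000)]

mean = statistics.fmean(rods)
sd = statistics.stdev(rods)         # spread of individual rods (~2")
sem = sd / len(rods) ** 0.5         # uncertainty of the average (~0.02")

# Fraction rejected under an 11"-13" acceptance window
rejected = sum(1 for r in rods if r < 11 or r > 13) / len(rods)
print(mean, sd, sem, rejected)
```

The average is pinned down to a few hundredths of an inch, yet well over half the individual rods fall outside a ±1″ tolerance, which is the point being made about answering the right question.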

Until you can relate to us how your reduction in uncertainty via averaging averages truly work you will not receive any agreement from us.

You don’t seriously think I expect any of you to change your minds? I’ve been arguing every which way, given every argument I can think of, demonstrated what the equations in your own books imply, showing how you can see this through simulations, and practical demonstrations. It leaves not a scratch on your deeply held beliefs. I’ve long given up assuming I can persuade you – I just keep arguing because I mostly enjoy it and I think I learn a lot from it.

Reply to  Bellman
September 1, 2023 2:33 pm

YMI — Yet more irony.

Reply to  Bellman
September 1, 2023 4:26 pm

Your entire answer is nothing more than bellman Evasion Rule No. 1 – the question is ill-posed or vague.

Assuming there is complete randomness in the large errors, then you would assume the SEM value of 0.02″ is reasonable, but why do you want to know it? Any measurement is about answering a question, and the uncertainty has to reflect that question.”

The SEM is MEANINGLESS in the real world. No one cares how closely you have calculated the average.

What they care about is if they need a 12″ rod will they get one? Or will it be 14″ long and require on-site remanufacturing? Or, even worse, will they get one that is only 10″ long and will have to hold up their project till a new one (that may still come in only 10″ long) can be shipped to them?

As usual, you are living in “statistical world” and not the real world. I guess we shouldn’t expect anything different!

“individual uncertainty”

So you would measure every single rod? Then what good is a statistical analysis?

Reply to  Tim Gorman
September 1, 2023 4:54 pm

The SEM is MEANINGLESS in the real world. No one cares how closely you have calculated the average.

Gorman evasion number 678, ignore the answer and say it doesn’t work in the “real world”. The real world where the side of a triangle can have a negative length.

Your version of the real world might never use SEM, but in many worlds it’s a valuable statistical result. It isn’t about “how closely you calculated the average”, it’s about what the sample tells you about the population. It’s about saying how likely two samples are from the same population, it’s about telling if a new drug has a significant effect.

What they care about is if they need a 12″ rod will they get one?

Which was exactly the point I was making. You obviously want to pretend that someone would use the uncertainty of the average, to claim that all the rods were very close to 12″. The classic Gorman gotcha question. But all it shows is your own ignorance. As I say, you need to know what question you are answering. What quantity do you want to know the uncertainty of. But of course, the fact I explain that the SEM is no good for knowing the individual uncertainty, will just be claimed as evasion by you.

As usual, you are living in “statistical world” and not the real world. I guess we shouldn’t expect anything different!

Gorman deflection technique number 712 – when on the ropes end with a personal insult.

So you would measure every single rod?

That’s what Jim was claiming when he treated it as a sample of 10000. Of course you wouldn’t measure that many – you would take samples and use all the “not real world” statistics to evaluate the strength of the sampling.

Reply to  Bellman
September 1, 2023 2:32 pm

Fanning from the hip and hoping to hit … anything.

Reply to  karlomonte
September 1, 2023 2:43 pm

Oh noes! karlo disagreed with something I said.

Reply to  Bellman
September 1, 2023 3:05 pm

You misspelled “everything”.

HTH

Reply to  Bellman
September 1, 2023 3:30 pm

“If you do only want to estimate measurement uncertainty your steps are mostly OK, but you keep confusing “propagate the uncertainties” for “throw away the uncertainties”.”

You show how little you know. No one propagates uncertainty from one average to the next average. The next average is always done assuming brand new UNCERTAINTIES and divided by √n. By the time you get to anomalies, you are using small numbers (<<1) and dividing by a whole slew of stations. That isn't propagating uncertainty, it is lying by figuring.

Variances need to be added and not divided away. What I tried to show is that E(X+Y) is μX + μY while Var(X+Y) = Var(X) + Var(Y). Just because you want to divide μ by 2 doesn’t mean you can also divide the uncertainty by 2.

If I build a table 4’x8′ both ±1″. Do you think if I build a matching table half the size, that the uncertainty is also divided by 2? Do you think if I use the same measuring tape the uncertainty suddenly becomes ±0.5″? What if I build 100 of them? Does the uncertainty become 1/√100 = 0.1?

That is what you recommend, decreasing the uncertainty as you add more and more components. That never occurs. Your method somehow results in each component getting its uncertainty magically reduced.

Reply to  Jim Gorman
September 1, 2023 4:05 pm

No one propagates uncertainty from one average to the next average.

Of course you can. The inputs into one propagation calculation may be the result of other propagation calculations.

It’s what Pat Frank does in his paper. Propagate the uncertainties from the max and min to get the uncertainty of the mean, then propagate the daily mean uncertainties into that for an average month of 30.4 days, then propagate the monthly uncertainty into an annual uncertainty etc.

Of course, in his method he always ends up with the exactly the same uncertainty each time.

What I tried to show is that Ex(X+Y)is μX+μY while Var(X+Y) =Var(X)+Var(Y).

And as always that is giving you the variance of a sum, not an average.

Just because you want to divide μ by 2 doesn’t mean you can also divide the uncertainty by 2.

And then we have the argument by dismissal. Why can’t you? Every equation and common sense says that’s exactly what you do. It’s why the CLT says that the sum increases by √N, whilst the mean decreases by √N. It’s what happens if you evaluate using Equation 10, and it follows from the rules for combining variances.

It’s a simple thing really – Var(kX) = Var(X) * k^2.

If you would only think about what a variance is, you would understand it couldn’t be anything else.
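The identity is easy to check numerically (illustrative simulated data only):

```python
import random
import statistics

# Quick numerical check of Var(kX) = k^2 * Var(X), the identity
# behind dividing the uncertainty of a mean by N. Data are simulated.
random.seed(7)
x = [random.gauss(0, 3) for _ in range(50_000)]
k = 0.5

v_x = statistics.pvariance(x)
v_kx = statistics.pvariance([k * xi for xi in x])

print(v_kx, k ** 2 * v_x)  # equal, up to float rounding
```

Scaling every value by k scales every deviation from the mean by k, so the squared deviations, and hence the variance, scale by k².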

If I build a table 4’x8′ both ±1″. Do you think if I build a matching table half the size, that the uncertainty is also divided by 2?

No.

Do you think if I use the same measuring tape the uncertainty suddenly becomes ±0.5?.

No. That’s why you are using absolute uncertainties.

What if I build 100 of them? Does the uncertainty become 1/√100 = 0.1?

The uncertainty of what? The average, yes. Each individual table, no.

(Again, that is assuming your measurement uncertainties are random.)

That is what you recommend, decreasing the uncertainty as you add more and more components.

The uncertainty of the mean decreases as sample size increases.

That never occurs.

Apart from all the times you insist it does occur if you are measuring the same thing multiple times.

Your method somehow results in each component getting its uncertainty magically reduced.

No it does not, and the fact you still don’t get the difference between the uncertainty of the mean, and the individual uncertainty is why I never expect you to understand any of this.

Reply to  Bellman
September 1, 2023 6:45 pm

No one propagates uncertainty from one average to the next average.

Of course you can. The inputs into one propagation calculation may be the result of other propagation calculations.

Evasion/smokescreen #633 — Jim stated very plainly that no one in the alleged science of climate does any propagation at all.

Zip zero nada.

And then you climb on your high horse and proclaim it can be done. BFD.

NO ONE DOES THIS. Certainly not the UAH.

All you care about are your impossibly tiny delta-T “uncertainties” that you use to fool people who can’t see through your fraudulent data machinations.

Reply to  Jim Gorman
September 1, 2023 4:09 pm

Magic is the correct term, no doubt about it.

Reply to  Bellman
September 1, 2023 4:19 pm

But, unless there is some major systematic error in the instruments which cannot be accounted for”

How do you find the systematic error for the temperature measuring stations at Forbes Field in Topeka, KS? How do you find the systematic error for the temperature measuring station of the 57 automatic measuring stations in Antarctica funded by the NWS?

“I would say, you are propagating the uncertainties for a daily Tmin and Tmax to get the Tmean uncertainty.”:

What *ARE* the uncertainties for Tmin and Tmax? What are they for the temperature measuring station at the Des Moines, IA International Airport? Do you find that information in any of the temperature databases?

” I’m not even sure which data sets include specific measurement uncertainties”

They don’t.

” but you keep confusing “propagate the uncertainties” for “throw away the uncertainties”.”

If you don’t use them then you *are* throwing them away. There isn’t any other way to look at it.

“The final step is more complicated as you have to see how all the stations are being averaged. “

Why? The GAT is an *index*, not an actual temperature. Not even UAH measures every point on the globe. As an index all you need to know is how it is changing – and that includes the measurement uncertainty associated with the index members.

“the uncertainty of a single isolated station will have more of an effect on the final uncertainty, than a station from a densely populated area.”

Why? Are you saying rural stations have higher measurement uncertainty? If not then what *are* you trying to say?

‘you really should start by looking at their actual uncertainty calculations.”

They don’t do any! There’s nothing to look at. If they don’t have the measurement uncertainties for the individual Tmin and Tmax values then they *can’t* calculate any further measurement uncertainties.

They do *exactly* what you do – assume all measurement uncertainty is random, Gaussian, and cancels. They then use the variation in the stated values as their uncertainty calculation. Just like Possolo did in TN1900.

It’s why they don’t accept that measurement uncertainty in the input components they put into the climate models exists and compounds in an iterative process. They just assume all measurement uncertainty is random, Gaussian, and cancels – just as you do!

Reply to  Tim Gorman
September 1, 2023 6:46 pm

He’s doing another Stokesian nitpick misdirection.

Reply to  Jim Gorman
September 1, 2023 10:04 am

As to the use of ±σ values, you need only to learn some vector math to help understand the real world and why there are two possible answers to an even root.

Your problem is you are thinking σ is a vector. It’s a scalar. It doesn’t have a direction, it’s a measure of the average distance each point is from the mean, which can only be positive.

Would it surprise you to know that an airplane can have a negative solution for distance traveled when solving a Pythagoras triangle,

Nothing you do would surprise me – but as I tried to explain before distance is a scalar value – and it’s never negative. I suspect what you mean is displacement.

https://sciencing.com/distance-vs-displacement-whats-the-difference-why-it-matters-w-diagram-13720227.html

And if you are using Pythagoras to solve displacement and assuming all square roots are both positive and negative, all you are going to say is the plane has both traveled in a positive and negative direction.

You can’t just look at the result of an equation – you have to ask what it says about the reality of the problem. Use a quadratic equation to solve a real world problem and you will always get two solutions – but much of the time only one, or neither, will be valid.
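A minimal sketch of that point, using hypothetical numbers (a quadratic model of a thrown ball’s height):

```python
import math

# Sketch: h(t) = -4.9*t^2 + 10*t + 2 models the height of a ball
# thrown upward from 2 m at 10 m/s. Solving h(t) = 0 gives two
# roots, but only the positive time is physically meaningful.
a, b, c = -4.9, 10.0, 2.0
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

print(roots)  # one negative, one positive; only the positive is valid
t_land = max(roots)
```

The algebra faithfully returns both roots; it is the physics of the problem, not the mathematics, that tells you to discard the negative one.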

These are real world examples and lie outside the world of linear math where probability is always positive real numbers.

It’s amazing that you think negative probability is something that exists in the real world. There may be esoteric reasons in quantum physics where you can use negative probabilities to solve difficult problems, but in the “real world” all probabilities have to be between 0 and 1. 0 is impossible, 1 is certain, and you cannot be less certain than impossible.

Reply to  Bellman
August 31, 2023 6:35 am

Not correct. The message is that the uncertainty in value spans that range.

As I said before the ± can have multiple meanings. It can mean both add and subtract, it can mean both positive and negative, and it can describe a range. And quite a few other things as well.

“Yet once again, from Vasquez & Whiting, (2006): “… to define uncertainty intervals for means of small samples as x ± t·s, where s is the estimate of the standard deviation σ.”

And once again, an “uncertainty interval” is not an uncertainty.

And if you define uncertainty as meaning an interval, what determines the uncertainty is going to be the size of the interval, not the values within it.

the expression “0.254 ± 0.011 denotes a normal distribution N(0.254, 0.011) with mean 0.254 and standard deviation 0.011 which is the model for the uncertainty of the measurement result.”

Saying that NIST use this odd formulation 0.254 ± 0.011 to describe a normal distribution with mean 0.254 and standard deviation 0.011. The standard deviation is positive. It then goes on to say the measurement uncertainty is modeled by that normal distribution. Still no idea why you think this means the uncertainty is negative.

Direct allowance of uncertainty to be negative in a scientific context.

Point to the negatives you think are indicating uncertainty, and specify your definition of uncertainty.

In science, the negative wing of an uncertainty distribution, explicated as the negative valued standard deviation, carries positive probabilities.

Obviously. You still seem to be saying this is different to statistics.

Thus, in the relevant context of science — the present context — the second clause of your objection is a non-sequitur of the first.

The point was that negative standard deviation would imply a negative probability, using the Gaussian equation.

This has been pointed out many times, and demonstrated now two or three times. Please stop revisiting the same mistaken statistical context.

The problem is, repeating things that I think are wrong does not make for a good argument.

The negative distribution wing representing the lower range uncertainty of a physical magnitude, and represented by the negative valued standard deviation.

Which does not mean the standard deviation is negative.

This whole argument keeps coming back to you thinking there are two standard deviations for any distribution, one negative and one positive, and to find a confidence interval you add both distributions.

I think, for multiple reasons, this is an incorrect interpretation. There is only one standard deviation, it is positive, and you find the interval by both adding and subtracting it.

As I keep trying to say, either interpretation will give you the same result, and if we could just agree that sometimes one uses maths in a sloppy way as long as it gets the right result, there would have been no need for this 4 day long thread covering hundreds of ill-tempered comments.

But my problem is you seem to think that somehow my interpretation is some plot by statisticians to force scientists to use concepts that don’t work in the real world. At the same time refusing to even contemplate the idea that there a single standard deviation might be in any way correct, or there might be reasons why these despicable mathematicians define things the way they do.

Reply to  Bellman
August 31, 2023 6:43 am

And once again, an “uncertainty interval” is not an uncertainty.

Just quit now while you’re behind, this is never going to work.

Time to go talk with the Forestry Department.

Reply to  Bellman
August 31, 2023 7:19 am

it means the correct (true) length of one’s board is within ±1 cm (1σ) of 10 m

Which doesn’t agree with the GUM definition as it talks about a true value (but I’m fine with that).

But the question is what does an uncertainty of -1 mean. If I say the standard uncertainty is 1cm, I know it means that there is a ±1cm interval about the measurement. How would it be different if you said the standard uncertainty was -1cm?

Rather, it seems you can’t understand that ±σ represents a continuous range rather than two scalar limits.

Why do people have to keep making this up? I’ve said repeatedly that ± can represent a range of values.

Reply to  Bellman
August 31, 2023 7:55 am

What if the uncertainty is not symmetric around the stated value?

The stated value is *NOT* the mean unless the distribution is Gaussian (or symmetric).

What if the uncertainty is greater on one side of the stated value than on the other? Then the ± value modifier is not applicable.

Get out of your gaussian paper bag!

Reply to  Tim Gorman
August 31, 2023 9:50 am

What if the uncertainty is not symmetric around the stated value?

It’s the same answer as all the previous ones.

Reply to  Bellman
August 31, 2023 12:38 pm

Except you haven’t provided an answer. All you’ve offered is deflection, dissembling, and evasion so as to *NOT* answer.

Is that what we are to take from this? That no answer is all we are going to get?

Reply to  Bellman
August 31, 2023 8:34 am

Which doesn’t agree with the GUM definition…

Just curious, but why do you treat the GUM as though it’s divine writ?

How would it be different if you said the standard uncertainty was -1cm?

Wouldn’t that be representing the uncertainty as a scalar?

“I’ve said repeatedly that ± can represent a range of values.”

The point made concerned ±σ, the standard deviation, not just ±. I’m not making things up. You either forgot, or shifted stance.

Reply to  Pat Frank
August 31, 2023 9:05 am

Just curious, but why do you treat the GUM as though it’s divine writ?

I don’t. But a lot of people here do – last time this came up I was almost burnt at the stake for daring to say I disagreed with its castigating the Standard Error of the Mean as an incorrect term.

But, if you are going to talk about something like uncertainty, it does have to have a defined meaning, and if you won’t say what your definition is, it makes sense to use the international standard’s definition.

Wouldn’t that be representing the uncertainty as a scalar?

Yes, that’s what the standard uncertainty is, a scalar figure representing the standard deviation of the uncertainty.

“The point made concerned ±σ, the standard deviation, not just ±”

And µ ± σ is a range. The 68% probability is the probability of a random value lying within that range.
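That 68% figure is easy to check by simulation. A quick sketch (generic illustrative values, not anyone's data):

```python
import numpy as np

# Sketch: sample a normal distribution and count the fraction of values
# falling inside the single range mu - sigma to mu + sigma.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                      # invented illustrative values
x = rng.normal(mu, sigma, 1_000_000)
frac = np.mean((x > mu - sigma) & (x < mu + sigma))
print(round(frac, 2))   # about 0.68
```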

So much of this is down to people making their own bad assumptions about what I’ve said, and looking for weird gotcha moments.

For the record:

  1. I believe there are negative numbers.
  2. I don’t believe all numbers are natural.
  3. I think it’s possible to have asymmetric distributions.
  4. I think that ± can represent a range of values.
  5. I think ± can represent two values.
  6. I think ± can represent two binary operations.

What I have said is:

  1. The √ symbol always represents the positive square root of a value.
  2. The standard deviation of a distribution cannot be negative.
  3. The uncertainty (at least in any definition I’ve seen) cannot be negative.
Reply to  Bellman
August 31, 2023 7:11 pm
  1. The √ symbol is the radix.
  2. The standard deviation of a distribution is ±.
  3. The uncertainty can be negative and associated with a positive probability.

From Ferson, et al. (2007): “the expression 0.254 ± 0.011 denotes a normal distribution N(0.254, 0.011) with mean 0.254 and standard deviation 0.011 which is the model for the uncertainty of the measurement result”

“One must be careful to keep in mind that the plus-minus notation between the measurement result and its standard uncertainty denotes a probability distribution, rather than an interval or a pair of scalar values.”

You have seen it, Bellman, you merely dismiss it.

Reply to  Pat Frank
September 1, 2023 3:36 am

And you have to retain the ability to separately denote the negative part and the positive part. Using a standard deviation gives no indication of skewness; it is only completely informative for a normal distribution.

Any distribution can be normalized to zero. In such a case the left side of the distribution consists of negative numbers. In such a case the minus symbol is not just an operator but a value modifier. The term -σ has a definite meaning.

As your author points out, uncertainty is not just a scalar value.

Reply to  Pat Frank
September 1, 2023 5:17 am

The √ symbol is the radix.

From your link

Each positive real number has two square roots, one positive and the other negative. The square root symbol refers to the principal square root, which is the positive one.

The uncertainty can be negative and associated with a positive probability.

non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used

VIM 3.2.6

standard uncertainty of the result of a measurement when that result is obtained from the values of a number of other quantities, equal to the positive square root of a sum of terms, the terms being the variances or covariances of these other quantities weighted according to how the measurement result varies with changes in these quantities

GUM 2.3.4

The standard deviation of a distribution is ±.

The estimated variance u2 characterizing an uncertainty component obtained from a Type A evaluation is calculated from series of repeated observations and is the familiar statistically estimated variance s² (see 4.2). The estimated standard deviation (C.2.12, C.2.21, C.3.3) u, the positive square root of u², is thus u = s and for convenience is sometimes called a Type A standard uncertainty.

GUM 3.3.5
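As a concrete sketch of the GUM 3.3.5 passage above (the readings are invented), the Type A standard uncertainty is simply the positive experimental standard deviation of the repeated observations:

```python
import numpy as np

# Type A evaluation per GUM 3.3.5: u = s, the positive square root of
# the estimated variance s^2 of a series of repeated observations.
readings = np.array([9.99, 10.01, 10.00, 10.02, 9.98])  # invented data
s2 = np.var(readings, ddof=1)   # estimated variance u^2
u = np.sqrt(s2)                 # principal (positive) square root
assert u >= 0                   # the radix returns the non-negative root
print(round(u, 4))
```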

Reply to  Bellman
September 1, 2023 5:29 am

From Ferson, et al. (2007)

What do you think that quote is actually saying? You seem to think it indicates negative uncertainties, but you seem to be reading something into it I can’t see.

the expression 0.254 ± 0.011 denotes a normal distribution N(0.254, 0.011) with mean 0.254 and standard deviation 0.011 which is the model for the uncertainty of the measurement result

You highlight the standard deviation given as a positive number 0.011, and then highlight the text saying this is the uncertainty of the measurement result. This indicates to me they regard the uncertainty as being positive.

The sentence before that is simply saying that NIST use the expression 0.254 ± 0.011 to indicate a normal distribution with mean = 0.254 and standard deviation equal to 0.011. At no point are they suggesting the ± means there is a negative uncertainty or standard deviation.

One must be careful to keep in mind that the plus-minus notation between the measurement result and its standard uncertainty denotes a probability distribution, rather than an interval* or a pair of scalar values.

Which is exactly what I’m saying. I still don’t see how you interpret this as saying uncertainties or standard deviations can be negative – and continuously copying it won’t help. Just point me to the text that actually spells out that an uncertainty can be negative.

Reply to  Bellman
September 1, 2023 11:48 am

What do you think that quote is actually saying?

That 0.254 ± 0.011 [is the] mean and standard deviation [representing] the uncertainty of the measurement result.

Note in emphasis: the clear meaning of the quote is that ±0.011 denotes the standard deviation 0.011. The ± is included in the denotation.

negative uncertainty

Statistical meaning not relevant.

Which is exactly what I’m saying.

You have claimed that the standard deviation represents a scalar limit. Also here.

Reply to  Pat Frank
September 1, 2023 3:10 pm

That 0.254 ± 0.011 [is the] mean and standard deviation [representing] the uncertainty of the measurement result.

Then you are misreading it. They specifically say the standard deviation is 0.011.

the clear meaning of the quote is that ±0.011 denotes the standard deviation 0.011.

See. In this context ±0.011 represents the actual standard deviation, which is +0.011.

If it helps read the example of the word “denote” you linked to

The color red is used to denote passion or danger.

The color red denotes passion, it is not the same thing as passion.

Reply to  Bellman
September 1, 2023 7:35 pm

Then you are misreading it.

No, I’m not.

“See. In this context ±0.011 represents the actual standard deviation, which is +0.011.”

Got it. ±0.011 = +0.011. Silly.

If it helps read the example of the word “denote” you linked to
The color red is used to denote passion or danger.
The color red denotes passion, it is not the same thing as passion.”

Yes and the ± is used to denote the standard deviation.

Reply to  Pat Frank
September 2, 2023 7:30 am

I can’t help you any more with your reading problems. The passage clearly says that NIST use the formulation 0.254 ± 0.011 to represent the normal distribution (0.254, 0.011), where 0.254 is the mean and 0.011 is the standard deviation. If you think that means ±0.011 is the deviation, I won’t stop you. Just don’t expect me to take anything else you say on trust.

Reply to  Bellman
September 2, 2023 7:26 pm

When have you ever taken anything I’ve posted on trust?

I’ll expect that you’ll inevitably dismiss whatever I post. That’s been your consistent pattern.

Reply to  Pat Frank
September 1, 2023 4:34 pm

“That 0.254 ± 0.011 [is the] mean and standard deviation [representing] the uncertainty of the measurement result.”

bellman can’t figure out that while uncertainty can be calculated in certain circumstances as variance and standard deviation, uncertainty is actually neither one. He only understands Type A uncertainty and even then in a very limited way. It’s why he can’t give us a GUM protocol for calculating the measurement uncertainty from measuring multiple things one time and forming a data set from the measurements. It’s why he can’t tell us how to calculate an asymmetric uncertainty interval and denote it along with a stated value, nor can he point to anyplace in the GUM that shows how to do that.

He knows one thing: measurement uncertainty is always random, Gaussian, and therefore cancels. He denies that anything outside of that exists. It’s “statistical world” all the way!

Reply to  Tim Gorman
September 1, 2023 5:10 pm

bellman can’t figure out that while uncertainty can be calculated in certain circumstances as variance and standard deviation, uncertainty is actually neither one.

Gorman evasion 823. The claim was that the uncertainty was the standard deviation, which was simultaneously positive and negative. Now it seems to be accepted for the moment that standard deviation is not negative – shift to claiming standard uncertainty is not the same as standard deviation, and claim that’s the mistake you were really addressing.

Tough. The GUM says standard uncertainty is the standard deviation. If you don’t like that definition, you will have to finally tell us what yours is.

“He only understands Type A”

Next deflection – more personal insults based on what he thinks I said. Read the GUM – both Type A and Type B uncertainties are based on standard deviations. You may have to guess what it is for a type B, but you are still defining it as the standard deviation.

It’s why he can’t give us a GUM protocol for calculating the measurement uncertainty from measuring multiple things one time and forming a data set from the measurements.

You keep asking this question. I keep answering, you then find some excuse and just keep adding more and more clauses to the question. What on earth are you talking about with “forming a data set from the measurements”? What has that got to do with calculating the combined uncertainty from multiple things?

The answer is equation 10 or 13. That is how you combine the uncertainties from multiple things – it’s why it’s called the “combined” standard uncertainty. I’ll leave it up to you to figure out what the word “standard” means.
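The combined standard uncertainty described above can be sketched for the simple case of an average, where each sensitivity coefficient is 1/n. The input uncertainties below are invented, purely for illustration:

```python
import numpy as np

# GUM eq. (10), uncorrelated inputs: u_c(y)^2 = sum_i (df/dx_i)^2 * u(x_i)^2.
# For a sum, each df/dx_i = 1; for a mean of n inputs, each df/dx_i = 1/n.
u = np.array([0.5, 0.5, 0.5, 0.5])   # hypothetical input uncertainties
n = len(u)
uc_sum = np.sqrt(np.sum(u**2))       # combined uncertainty of the sum
uc_mean = uc_sum / n                 # combined uncertainty of the mean
print(uc_sum, uc_mean)   # 1.0 0.25
```

This is the calculation Bellman is pointing at; whether it is the appropriate one for the thread's examples is exactly what is in dispute.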

It’s why he can’t tell us how to calculate an asymmetric uncertainty interval and denote it along with a stated value, nor can he point to anyplace in the GUM that shows how to do that.

The reason I can’t tell you is the reason you can’t tell me – it’s complicated and depends on what sort of distribution you are talking about. The reason I can’t point to where the GUM gives a detailed explanation of how to do it is that it doesn’t give one, for the same reason – it’s complicated.

He knows one thing: measurement uncertainty is always random, Gaussian, and therefore cancels.

Gorman evasion techniques number 1 through 5. Lie, lie, and lie, and lie again. Keep repeating the lies no matter what I actually say. Just put random, Gaussian and cancels on speed dial, and add them at the end of every comment.

Reply to  Bellman
September 1, 2023 6:48 pm

Another goofy word salad full of smoke and mirrors.

Reply to  Bellman
September 1, 2023 7:28 pm

You may have to guess what it is for a type B, but you are still defining it as the standard deviation.

Actually not. One uses the statistical formalism for Type B, but the meaning of the standard deviation is different.

Some people advise not combining Type A and Type B uncertainties for this reason — they’re not the same thing.

Others are pragmatic and suggest combining them in quadrature into a convenient measure of overall reliability.

I dislike the “Type B” nomenclature because it hides a number of different sorts of uncertainty. Calibration uncertainty is not a guess, for example. Or a professional judgment. But it is included as Type B.

Reply to  Pat Frank
September 1, 2023 10:08 pm

The GUM states that if your combined uncertainty has a “substantial” amount of Type B, you should do “something else” (my paraphrase).

But there are situations where there is no something else.

Consider a measurement that uses a precision 4-terminal resistor (i.e. high $$$$) with a DVM to measure current. The resistor will have a temperature coefficient specified by the manufacturer as ±X ppm/°C, and a resistance drift spec of ppm per some time period. It will also have a certificate from a cal lab for its resistance as R ± U(R) at (typically) 25°C, if the lab is accredited to ISO 17025. Note that component manufacturers give the tolerances as plus/minus, and not minus or plus (and yes individual units can go either way).

Calculation of the current uncertainty will require at least two Type B uncertainties: one based on the maximum amount of time between successive calibrations that the measurement lab establishes for their work. The other will be a function of the maximum possible lab temperature excursions, which can only be done as an engineering evaluation.

Also, the way DVM manufacturers provide error specifications (they don’t give uncertainty numbers) that vary with DVM range, temperature, and time from last calibration, means that uncertainties for the voltage measurement also have to be developed as Type B.

With high-quality components and careful lab procedures these Type B uncertainties can be pretty small, but they can’t be ignored, and there is no way to get statistics for them.

Reply to  Bellman
September 2, 2023 4:05 pm

“The answer is equation 10 or 13. That is how you combine the uncertainties from multiple things – it’s why it’s called the “combined” standard uncertainty. I’ll leave it up to you to figure out what the word “standard” means.”

These equations ARE NOT used to combine uncertainties from multiple things. This statement just emphasizes your lack of understanding.

These equations are used to COMBINE uncertainties of multiple measurements used in calculating a single measurand. That is why the functional relationship is so important. Think equations like:

PV = nRT, or v = dist / time.

Each of the measurements for a single measurand, say pressure or velocity, require combining the uncertainties from the measurements used to calculate them. Again, equations 10 & 13 are not designed to “combine” uncertainty of multiple measurands.

Read what the GUM says:

where f is the function given in Equation (1). Each u(xi) is a standard uncertainty evaluated as described in 4.2 (Type A evaluation) or as in 4.3 (Type B evaluation). The combined standard uncertainty uc(y) is an estimated standard deviation and characterizes the dispersion of the values that could reasonably be attributed to the measurand Y (see 2.2.3).

The GUM shows Y = f(X1, X2, …, XN). Those Xi values are the measurements used to calculate a single measurand.

5.1.5

Thus, for the purposes of an analysis of uncertainty, a measurand is usually approximated by a linear function of its variables by transforming its input quantities from Xi to δi (see E.3.1).

5.2.2

The combined standard uncertainty uc(y) is thus simply a linear sum of terms representing the variation of the output estimate y generated by the standard uncertainty of each input estimate xi (see 5.1.3). [This linear sum should not be confused with the general law of error propagation although it has a similar form; standard uncertainties are not errors (see E.3.2).]

2.2.3 The formal definition of the term “uncertainty of measurement” developed for use in this Guide and in the VIM [6] (VIM:1993, definition 3.9) is as follows:

uncertainty (of measurement)

parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

It says, “to the measurand”. It doesn’t say measurands.

I also want to point out that equations 10 & 13 DO NOT show dividing the uncertainties by the number of measurements used to calculate the mean of experimental measurands.

For example, using V = D/T => D = VT, the uncertainty is the linear sum of each measurand:

Uc(D) = u(V) + u(T)
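With the sensitivity coefficients included, the linear-sum form that GUM 5.1.3 mentions can be sketched for D = V·T (all values below are invented; ∂D/∂V = T and ∂D/∂T = V):

```python
# Linear-sum propagation sketch for D = V*T (invented values):
#   u_c(D) = |dD/dV| * u(V) + |dD/dT| * u(T) = |T|*u(V) + |V|*u(T)
V, T = 3.0, 2.0        # hypothetical speed and time
uV, uT = 0.1, 0.05     # hypothetical standard uncertainties
uc = abs(T) * uV + abs(V) * uT
print(round(uc, 2))   # 0.35
```

The quadrature form (GUM equation 10) would instead take the square root of the sum of squares of the two terms, giving a smaller number; the linear sum is the worst case.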

Reply to  Jim Gorman
September 2, 2023 5:28 pm

You really need to decide exactly what question you want me to answer.

You insist you have to propagate the measurement uncertainties when taking an average, but then insist that none of the propagation techniques work, because the average isn’t a measurand. You can’t have it both ways.

You don’t have a problem with propagating uncertainties to get the sum of temperatures, so presumably you think the sum is a valid measurand. Yet just divide the sum to get a more sensible value, and then for some reason it stops being a measurand.

If you want to propagate the uncertainties, I can suggest various interpretations.

A) Treat the average as a function of multiple inputs and propagate the uncertainties using equation 10, just as you would for the sum.

B) Treat the average as a simple measurand, and each value as a single measurement, with the uncertainty being the variation in the temperatures. (This is what the NIST example is doing).

C) Forget about measurement uncertainty, and treat it like a regular statistical sample, i.e. the standard error of the mean.

D) Forget the equations, and use Monte Carlo techniques.

The problem is you will keep rejecting every method because they show uncertainty decreasing, but you will never explain how you can justify your own calculations.
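Method D above can be sketched in a few lines. The readings and their assumed uncertainty are invented, purely to show the mechanics:

```python
import numpy as np

# Monte Carlo sketch: perturb each reading by its assumed uncertainty
# many times, average each perturbed set, and look at the spread of
# the resulting averages.
rng = np.random.default_rng(1)
readings = np.array([14.2, 15.1, 13.8, 14.9])   # hypothetical values
u = 0.5                                         # assumed standard uncertainty
draws = rng.normal(readings, u, size=(100_000, len(readings)))
means = draws.mean(axis=1)
print(round(means.std(), 2))   # close to u / sqrt(4) = 0.25
```

Note the sketch assumes independent, normally distributed perturbations; a correlated or systematic error term would have to be modelled explicitly and would not shrink this way.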

Reply to  Bellman
September 2, 2023 6:28 pm

“””You insist you have to propagate the measurement uncertainties when taking an average, but then insist that none of the propagation techniques work, because the average isn’t a measurand. You can’t have it both ways.”””

It’s really tiring when you don’t even know what you are talking about.

When you are measuring the same thing with the same device multiple times, you form a normal distribution where the mean is the center of an uncertainty interval. Propagation is used only to find the combined uncertainty of the measured components making up the measurement.

Experimental uncertainty is used when measurements of similar things under repeatable conditions are made.

Measurements of different things under different conditions, such as temperature, do not meet the necessary condition of repeatability.

This has been pointed out to you multiple times. You have been asked to point out where the GUM details how to handle the uncertainty of different things measured under different conditions. So far you have provided nothing.

From Redlands Univ.

“A measurement is repeatable if (1) you are sure that you are really measuring the same quantity each time you repeat the measurement”

Reply to  Jim Gorman
September 3, 2023 3:58 am

“When you are measuring the same thing with the same device multiple times, you form a normal distribution where the mean is the center of an uncertainty interval”

This is your problem. You are seeing taking multiple measurements as a way of “forming” a distribution. But it’s actually being done to determine the actual distribution. Every measurement can be seen as a random variable with a pre-existing distribution. Taking multiple measurements is a way of trying to estimate that distribution. But even if you only have one measurement you can still assume it comes from a distribution. And in either case you cannot assume it will be a normal distribution.

Propagation of uncertainties is not the process of experimenting to discover a distribution – it’s the calculation you make to estimate the uncertainty of a combined measurement made up of different measurements, each with its own uncertainty distribution.

These individual uncertainties are estimated from whatever sources you have, i.e. a Type B uncertainty. This is, after all, what Pat Frank is doing: taking multiple sources to estimate the uncertainty of a single measurement and then propagating it both to an annual and a global average.

The problem is you keep confusing experimental uncertainty with the propagation of uncertainty.

Reply to  Bellman
September 3, 2023 12:06 pm

“”Every measurement can be seen as being a random variable with a pre-existing distribution.””

Exactly what do you think Dr. Frank has been trying to tell you with this paper?

What do you think NOAA has done by specifying Type B uncertainty as ±1° F for LIG, ±1.0° F for MMTS, ±1.8° F for ASOS, and ±0.3° C for CRN?

You are lost in the woods, and don’t know where you are.

My advice – forget statistics. Playing with math is not the purpose in metrology.

Pretend you have been hired by NASA to measure and characterize a large number of fuel pellets for a trip to Mars. Are you going to play fast and loose with uncertainty – even though that could kill the mission – by quoting the average uncertainty as the uncertainty of each? Or are you going to quote the uncertainty based on the sum of each individual pellet’s uncertainty? Are you going to quote the SD or the lower expanded SEM?

These are what scientists and engineers deal with. It’s not a game to determine the best way to minimize the quoted uncertainty. Characterizing measurements and their uncertainty is a serious responsibility. People know what they can rely on when measurements and their uncertainty are properly calculated.

Another example. You need to consume a daily pill where too much will kill you and too little will let you die. The manufacturer quotes the average uncertainty. Would this satisfy you?

Reply to  Jim Gorman
September 3, 2023 12:29 pm

I tried to explain this to bellman earlier using the attached picture.

What you want to do is lower the total uncertainty. The SEM only tells you how well you have grouped your shots, not how accurate they are.

The average uncertainty tells you nothing about how to adjust anything. You could have half the shots in the 4 ring and half in the 0 ring and still get the same average uncertainty.

Somehow this just never seems to sink in.

uncertainty.jpg
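The target analogy above can be sketched numerically (all numbers invented): a tight group gives a tiny SEM even when the whole group is far from the bullseye.

```python
import numpy as np

# Tightly grouped shots with a systematic offset: precision without accuracy.
rng = np.random.default_rng(2)
offset = 3.0                            # systematic miss; bullseye is at 0
shots = rng.normal(offset, 0.1, 100)    # tight group around the wrong point
sem = shots.std(ddof=1) / np.sqrt(len(shots))
print(round(sem, 3), round(shots.mean(), 1))   # tiny SEM, mean near 3 (not 0)
```

The SEM says only how well the group's centre is pinned down; it says nothing about whether that centre sits on the bullseye.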
Reply to  Jim Gorman
September 4, 2023 3:55 pm

Pretend you have been hired by NASA to measure and characterize a large number of fuel pellets for a trip to Mars.

The answer’s the same as it was for 12″ rods, but you’ll just accuse me of being evasive again. You have to use the right tool for the job. If the important thing is how much the individual items vary, as is often the case, you need to look at individual variation. If you need to know how uncertain the mean is you need to look at the uncertainty of the mean. At its simplest, that’s the difference between the SD and the SEM.

If the mission depends on fuel pellets being a consistent size, then you need to look at SD, and probably a lot else. If on the other hand it was vital that the average was a specific value you would want to check the SEM.

“Are you going to play fast and loose with uncertainty even though that could kill the mission by quoting the average uncertainty as the uncertainty of each.”

Not sure what you mean by that. Do you want a separate uncertainty analysis for every pellet? If there are different types of pellets then maybe quote the uncertainty for each type.

Or, are you going to quote the uncertainty based on the sum of each individual pellet’s uncertainty?

Again, at the risk of being attacked for evasion, I’m not sure what you mean by that. Are you talking about the measurement uncertainty of a sum of specific pellets, or the uncertainty of a specific number of randomly selected pellets?

These are what scientists and engineers deal with.

Hopefully they are given better specifications than you provide.

The manufacture quotes the average uncertainty. Would this satisfy you?

Again, what do you mean by average uncertainty? The assumption is all the pills have a different uncertainty, in which case you should probably get the manufacturing process under control. Assuming they all have the same uncertainty, then that is the average uncertainty, and is definitely what you want, although you would also want it to be as small as possible. You certainly wouldn’t want the uncertainty of the average.

You would on the other hand want to look at the uncertainty of the average if you were doing quality control, and making sure there wasn’t a change in the specified average amount.

Reply to  Bellman
September 4, 2023 4:24 pm

“If the important thing is how much the individual items vary, as is often the case, you need to look at individual variation. If you need to know how uncertain the mean is you need to look at the uncertainty of the mean. At its simplest, that’s the difference between the SD and the SEM.”

You are never going to get it. Primarily because you have never been responsible for the health and safety of others in any project.

It simply doesn’t matter what the SEM is if the average is inaccurate. It’s your kind of thinking that smashes lunar exploratory vehicles into the moon’s surface like Russia did not long ago. If you were in charge of landing a manned vehicle on the moon you would *still* think that how small you could make the SEM would be the proper statistical descriptor to use.

“you need to look at individual variation”

You still can’t shake that meme that all measurement uncertainty is random, Gaussian, and cancels leaving the stated value variation as the uncertainty.

“Not sure what you mean by that. Do you want a separate uncertainty analysis for every pellet?”

Again, you’ve *NEVER* been responsible for anything that carries personal liability, either civil, criminal, or both.

What is meant is that you *have* to plan for the worst case. If you are planning the required fuel load, you had better use the propagated measurement uncertainty of the different things (i.e., fuel pellets) that leaves you with the largest safety margin. Otherwise you could wind up stranding astronauts on a path out of the solar system with no fuel to get back. An SEM simply provides you no knowledge of how big the fuel load has to be. It’s the accuracy of the average that is of concern, not the SEM. If your SEM is zero but the average is inaccurate, then the average is of no use at all.

“Again, at the risk of being attacked for evasion, I’m not sure what you mean by that. Are you talking about the measurement uncertainty of a sum of specific pellets, or the uncertainty of a specific number of randomly selected pellets?”

You *ARE* evading again. If the astronauts on the spacecraft are placing their lives on your calculations of fuel load, which do *you* think would be the most important to them?

“…look at the uncertainty of the average if you were doing quality control,”

You’ve been given example after example of why the SEM is *NOT* the proper quality control measure. Your SEM on the 12″ rods could be zero while you are cranking out rods that vary from 10″ to 14″. You just never bother to learn at all.

Reply to  Tim Gorman
September 4, 2023 5:06 pm

Too late to go through all your misunderstandings at this late stage. But I am intrigued by your reasoning here.

Your SEM on the 12″ rods could be zero while you are cranking out rods that vary from 10″ to 14″. You just never bother to learn at all.

If you have taken a reasonably large sample, and ensured it is sufficiently random – how likely is it that the SEM could be zero? That could only happen if every single rod sampled was exactly 12″, and if that were the case, how would you actually know that they were varying by ±2″?
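The point follows directly from the definition SEM = s/√n; a sketch with invented rod lengths:

```python
import numpy as np

# For one sample, SEM = s / sqrt(n): it is zero only when the sample
# standard deviation s is zero, i.e. every sampled value is identical.
def sem(x):
    return x.std(ddof=1) / np.sqrt(len(x))

varied = np.array([10.0, 11.5, 12.0, 13.2, 14.0])   # rods of varying length
identical = np.full(5, 12.0)                        # all exactly 12 inches

print(sem(varied) > 0, sem(identical) == 0)   # True True
```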

Reply to  Bellman
September 5, 2023 4:48 am

“If you have taken a reasonably large sample, and ensured it is sufficiently random – how likely is it that the SEM could be zero. That could only happen if every single rod sampled was exactly 12″, and if that is the case, how would you actually know that they were varying by ±2″?”

Just as likely as multiple measurements of the same thing under the same environment using the same device will generate a Gaussian distribution of measurement error that all cancels out leaving you with the average being the true value.

The rods do *NOT* all have to be 12″ long, the lengths just have to present a random, Gaussian distribution just like in the preceding paragraph.

If one doesn’t work then neither does the other. Meaning the global average temperature; which depends on the meme of all measurement uncertainty being random, Gaussian, and cancels; is garbage without propagation of the measurement uncertainties beginning with the daily Tmax and Tmin temps.

You are *never* going to understand metrology bellman. It is a *physical*, reality based discipline – something you have shown you are entirely incapable of understanding. You proved it with your assertion above which makes the GAT into garbage – and it is made into garbage by your own understanding. But we all know you will use one of your three evasion rules to avoid having to admit this. Have at it!

Reply to  Tim Gorman
September 5, 2023 7:50 am

Talk about evasion. Did you at any point attempt to explain how you think it’s possible for a sample of rods to have an SEM of zero when they are all different lengths?

Try answering the question I asked, rather than wittering on about Gaussian distributions.

Reply to  Bellman
September 5, 2023 10:33 am

Are you unable to read? I *did* explain it – at least for someone with a modicum of critical thinking.

  1. If the distribution is truly random and Gaussian then the SEM can truly be zero. The values do *not* all have to be the same. The samples just all have to have the same average as the population average!
  2. This is no different than your meme that all measurement uncertainty is random, Gaussian, and cancels thus leaving the average value as the true value.

As I tried to point out to you with the target, the issue is not the SEM. You could put all the shots in the same hole thus having an SEM of zero. But the average would *still* be inaccurate. What you must minimize is the total uncertainty.

You don’t even understand enough to realize that you said the same thing when you said “you should probably get the manufacturing process under control.” *That* is how you minimize the total uncertainty. You put all your shots in the bullseye – you do *not* try to minimize the SEM because you aren’t then helping the accuracy.

And you are *still* evading. You couldn’t answer the question of how would you keep the spaceship from running out of fuel by ignoring the uncertainties in the fuel pellets.

Reply to  Tim Gorman
September 5, 2023 1:45 pm

No. You think you explained it, but all you ever do is demonstrate your own ignorance. I’ve tried to explain what the SEM is and how it’s calculated from a sample for years, and you still refuse to accept you could possibly be wrong.

If the distribution is truly random and Gaussian then the SEM can truly be zero. The values do *not* have to all be same.

It cannot – unless all the values in the sample are identical. If the values are different the standard deviation will be greater than zero and hence the SEM will be greater than zero.

The samples just all have to have the same average as the population average!

And, as you still refuse to learn – you are not taking multiple samples, just one.

Even if you want to pretend that the only way to calculate the SEM is to take a large number of different samples, what you are claiming is statistically implausible: that somehow you could take a large number of samples and get an identical mean for each, despite huge variations in the individual values. It just isn’t going to happen.

This is no different than your meme that all measurement uncertainty is random, Gaussian, and cancels thus leaving the average value as the true value.

Random gibberish from you as usual.

you do *not* try to minimize the SEM because you aren’t then helping the accuracy.

Please try to learn something. Minimizing the SEM will increase the likelihood that you will detect changes in the mean. Take two small samples and it will be difficult to know if their means are statistically different, because the SEM will be large. Take two large samples and it will be easier to find a statistically significant difference, because the SEMs are smaller.
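The effect described in that last paragraph can be checked with a short simulation. This is only an illustrative sketch: the population means (10.0 and 10.5), the SD of 2.0, and the sample sizes are invented for the example.

```python
import random
import statistics

random.seed(42)

def sem(sample):
    # single-sample estimate of the standard error of the mean: s / sqrt(n)
    return statistics.stdev(sample) / len(sample) ** 0.5

def draw(mu, n):
    # n values from a Gaussian population with mean mu and SD 2.0
    return [random.gauss(mu, 2.0) for _ in range(n)]

# Two populations whose means really differ by 0.5
small_a, small_b = draw(10.0, 5), draw(10.5, 5)
large_a, large_b = draw(10.0, 5000), draw(10.5, 5000)

# With n = 5 the SEMs (around 0.9) swamp the 0.5 difference;
# with n = 5000 they shrink (around 0.03) and the difference stands out.
print(sem(small_a), sem(large_a))
```

With the small samples the difference in means is buried inside the SEMs; with the large samples it is many SEMs wide.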

Reply to  Bellman
September 5, 2023 2:21 pm

“It cannot – unless all the values in the sample are identical.”

I answered and then you come back with garbage like this? Do you *ever* wonder why no one thinks you know anything about statistics, uncertainty, or anything in the physical world?

Instead of using the term SEM it would help your understanding if you used the proper descriptive term: standard deviation of the sample means. Not the SD of the samples, not their variance, not their values – THEIR MEANS.

The samples can have very different values and still have the same mean!

“It cannot – unless all the values in the sample are identical. If the values are different the standard deviation will be greater than zero and hence the SEM will be greater than zero.”

Have you started drinking already today? Again, the standard deviation of each individual sample plays no part in the SEM, only the mean.

1, 7, 10 has a mean of 18/3 = 6. 5,6,7 has a mean of 18/3 = 6.

Vastly different values in each but the SAME MEAN!
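The arithmetic above checks out, and also shows that the two samples have very different spreads despite identical means. A quick check:

```python
import statistics

# Two very different samples with the same mean, as in the comment above
sample_1 = [1, 7, 10]   # sums to 18
sample_2 = [5, 6, 7]    # also sums to 18

mean_1 = statistics.mean(sample_1)   # 18/3 = 6
mean_2 = statistics.mean(sample_2)   # 18/3 = 6

sd_1 = statistics.stdev(sample_1)    # about 4.58 – wide spread
sd_2 = statistics.stdev(sample_2)    # exactly 1.0 – tight spread

print(mean_1, mean_2, sd_1, sd_2)
```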

You’ve already degraded your reputation as a math whiz so far that no one listens to you any more. And here you are working on driving it further into the mud. The only reason I’m replying to you is so that others won’t take the garbage you post as the truth.

“And, as you still refuse to learn – you are not taking multiple samples, just one.”

It doesn’t matter as long as the mean is the same! Taking just one sample doesn’t really give you anything. SEM = SD/sqrt(N). If you don’t know the SD of the population then you can’t calculate the SEM. And you can’t tell from one sample how close you are to the population mean if you don’t already know the SD of the population which, in turn, means you already know the population mean.

“Even if you want to pretend that the only way to calculate the SEM is to take a large number of different samples, what you are claiming is statistically implausible.”

It may not have a big chance of happening but it isn’t zero. And the *ONLY* way to actually calculate the SEM *is* from multiple samples. That’s the meaning of “the standard deviation of the sample *MEANS*.”

“Random gibberish from you as usual.”

Nope. You just don’t want to have to accept the truth. You can’t have your cake and eat it too – if you can’t get a true value from a distribution of measurement errors that are random, Gaussian, and cancels then the GAT is nonsense because it depends totally on that meme.

Reply to  Tim Gorman
September 5, 2023 4:17 pm

Simply futile trying to explain anything to you like this. You are so convinced you understand this, that all you can do is hurl insults and ad hominems at me, rather than even try to understand. The fact is that the world has been doing what you claim is impossible for well over 100 years, every single text book and internet source explains how to do it – even the GUM. You do not need to know the exact population standard deviation to calculate the SEM from a single sample, you use the sample standard deviation to estimate it.

You can’t understand why it would be pointless to take hundreds of samples, just in order to work out the SEM from them – or why that would be less accurate than estimating it from one sample.

To add to your failure to understand how the real world works, you persist in the notion that somehow it would be possible to take a large number of samples and have them all give you identical means, even when the population spread is large. It might not be impossible, it just won’t happen in the real world (or in a million real worlds).

Reply to  Bellman
September 6, 2023 11:51 am

From “The Active Practice of Statistics” by David S. Moore, Purdue Univ

“Our data are a simple random sample (SRS) of size n from the population. This assumption is very important.”

“Observations from the population have a normal distribution with mean u and standard deviation σ. In practice it is enough that the distribution be unimodal and symmetric unless the sample is very small. Both u and σ are unknown parameters.”

“In this setting the sample mean y_bar has the normal distribution with mean u and a standard deviation of σ/sqrt(n). Because we don’t know σ, we estimate it by the sample standard deviation s. … We then estimate the standard deviation of y_bar by s/sqrt(n). This quantity is called the standard error of the sample mean y_bar.” (bolding mine, tpg)

“When the standard deviation of a statistic is estimated from the data, the result is called the standard error of the statistic. The standard error of the sample mean y_bar is

SE_ybar = s / sqrt(n)”

—————————————————————–

Read this VERY carefully. What you want to do is call a method for ESTIMATING the standard error the same as actually finding the standard deviation of the sample means, meaning exactly how well the samples tell you what the mean is.

For calculating global temperatures several of the restrictions on the estimation of the standard error come into play. First, the population has to be normal – which for the global temp population it is *NOT*. Second, the population has to be unimodal – which for the global temp population it is *NOT*. Third, the population has to be symmetric – which for the global temp population it is *NOT*.

This all means that for the global temperature you simply cannot use the sample standard deviation as an estimate of the population standard deviation.

The only valid way to analyze the global temperature data is to take multiple samples, depend on the CLT to create a normal distribution from those sample means, and then calculate the standard deviation of the resulting distribution – i.e. the standard deviation of the sample means.
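For what it’s worth, the two routes being argued over here can be compared directly in a simulation: the single-sample estimate s/sqrt(n) against the standard deviation of many sample means. This sketch uses a Gaussian population with invented parameters (mean 50, SD 8, samples of 100), so both routes should land near the theoretical SD/sqrt(n) = 0.8.

```python
import random
import statistics

random.seed(0)

POP_MEAN, POP_SD, N = 50.0, 8.0, 100

def one_sample():
    return [random.gauss(POP_MEAN, POP_SD) for _ in range(N)]

# Route 1 (textbook): estimate the SEM from a single sample, s / sqrt(n)
s = statistics.stdev(one_sample())
sem_single = s / N ** 0.5

# Route 2 (many samples): standard deviation of many sample means
means = [statistics.mean(one_sample()) for _ in range(2000)]
sem_many = statistics.stdev(means)

# Both should land near the theoretical POP_SD / sqrt(N) = 0.8
print(sem_single, sem_many)
```

For a well-behaved population the two numbers agree closely, which is the point of contention in this exchange.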

You can whine and moan all you want about that isn’t how statisticians do it, or how they have done it in climate science for the past 100 years – that doesn’t make what statisticians do correct or what climate science has done for 100 years correct. Statisticians apply no knowledge of the physical world in their analysis (just like you don’t) and climate science apparently doesn’t know enough about statistics or physical science to do it correctly.

Even the distribution of daily temperatures is not normal, unimodal, or symmetric. So climate science starts off wrong with its very first step in calculating a global temp! And it just gets worse and worse as they go up the levels of averaging.

Reply to  Tim Gorman
September 6, 2023 2:17 pm

“In this setting the sample mean y_bar has the normal distribution with mean u and a standard deviation of σ/sqrt(n). Because we don’t know σ, we estimate it by the sample standard deviation s. … We then estimate the standard deviation of y_bar by s/sqrt(n). This quantity is called the standard error of the sample mean y_bar.”

Exactly as I keep telling you, and you say is impossible.

What you want to do is call a method for ESTIMATING the standard error the same as actually finding he standard deviation of the sample means

As I keep telling you – though nobody calls it the “standard deviation of the sample means”.

Whatever you do will be an estimate. The only way to know an exact SEM is to know the actual standard deviation of the population, and you only really know that in certain assumed exact situations – such as rolling a fair die.

Your suggestion of taking multiple samples and finding the standard deviation of all the means, is pointless, but will still only give you an estimate, unless you are going to take an infinite number of samples.

For calculating global temperatures…

Stop changing the subject. We were talking about your 12″ rod, not global temperatures. As I keep trying to explain, nobody is just taking the SEM of global temperatures.

First, the population has to be normal

Wrong.

Second, the the population has to be unimodal

It does not – though why you think global anomalies are not unimodal is your problem.

Third, the population has to be symmetric

It does not. By the way, you realize that if your first “condition” is true, then the other two will also be true.

This all means that for the global temperature you simply cannot use the sample standard deviation as an estimate of the population standard deviation.

None of those are the reasons why you wouldn’t.

Hint – you asked before what the conditions were for the CLT to hold, and I told you the data needed to be independent and identically distributed. But I dare say you’ve forgotten that by now.

The only valid way to analyze the global temperature data is to take multiple samples

Yes, you can use bootstrapping or some such. I think the BEST analysis uses Jackknifing, but don’t expect me to know how any of it works in detail.
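Bootstrapping of the sort mentioned here can be sketched in a few lines: resample the one observed sample with replacement, many times, and take the spread of the resampled means. The data and resample counts below are invented for illustration; this is not a claim about how the BEST analysis actually does it.

```python
import random
import statistics

random.seed(1)

# One observed sample (illustrative data, not from any real dataset)
data = [random.gauss(20.0, 4.0) for _ in range(50)]

# Bootstrap: resample with replacement and collect the resampled means
boot_means = [
    statistics.mean(random.choices(data, k=len(data)))
    for _ in range(3000)
]
sem_boot = statistics.stdev(boot_means)

# Compare with the plug-in estimate s / sqrt(n)
sem_plugin = statistics.stdev(data) / len(data) ** 0.5
print(sem_boot, sem_plugin)
```

The bootstrap value tracks the plug-in s/sqrt(n) estimate closely for a sample like this, which is why the single-sample estimate is the standard shortcut.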

Reply to  Tim Gorman
September 6, 2023 7:45 am

It is quite telling that he ran away from your example of all the rifle shots going through the same hole—the standard deviation is then zero and so is the holy SEM.

This is the best example to demonstrate their fatal flaw.

Averaging cannot remove bias error, he will never acknowledge this because his whole act would collapse.
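That claim is easy to demonstrate numerically: averaging shrinks the random component of the error but leaves a systematic offset untouched. A sketch with made-up numbers (true value 100, fixed bias +2.5, random noise SD 1.0):

```python
import random

random.seed(7)

TRUE_VALUE = 100.0
BIAS = 2.5       # a fixed systematic offset (illustrative)
NOISE_SD = 1.0   # random, zero-mean measurement error

# Many repeated measurements of the same thing
readings = [TRUE_VALUE + BIAS + random.gauss(0.0, NOISE_SD)
            for _ in range(10000)]
avg = sum(readings) / len(readings)

# The random part averages away; the bias does not.
print(avg)   # close to 102.5, not 100.0
```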

Reply to  karlomonte
September 6, 2023 9:36 am

A lesson in running away from a question by Mr “you are in no position to demand answers, you wouldn’t understand the answer, stop using your Jedi mind tricks on me.”

Very well. Your comments just illustrate how little you and the Gorms understand how statistics work or how you would use a SEM.

Let’s say you’ve fired a large number of shots at a target. You can look at a couple of simple statistics here: 1) the spread of the shots, that is, their standard deviation, which in measurement terms is an indication of the precision, and 2) the average position of all the shots, the mean, an estimate of the trueness of each shot.

Now if the average is not near the bullseye it’s an indication of a systematic error in the shots, either in the gun or the shooter. But how do you tell if the error is real, or if it was just the result of chance? That’s when you want to look at the standard error of the mean. This tells you how much variation you expect in the average of all the shots. If all the shots are well spread out and there were only a few shots, the SEM is large and it’s quite possible that you are just looking at random variation. If the shots are all close together and you had a lot of tries, then it becomes quite unlikely that the off centre average could have happened by chance.

If by some bizarre reason all your shots were going through the same hole, suggesting the SEM is very small, zero if you will, then if that hole is not dead centre it means there has to be a systematic error, which needs correcting.
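The test being described – could the off-centre average be explained by chance? – amounts to comparing the mean offset with the SEM. A sketch with hypothetical shot data (a systematic pull of +0.5 and random scatter of SD 1.0, both invented):

```python
import random
import statistics

random.seed(3)

# Horizontal offsets of 200 shots from the bullseye (hypothetical):
# a systematic pull of +0.5 plus random scatter of SD 1.0
shots = [random.gauss(0.5, 1.0) for _ in range(200)]

mean_offset = statistics.mean(shots)
sem = statistics.stdev(shots) / len(shots) ** 0.5

# If the mean sits many SEMs away from zero, chance alone is an
# unlikely explanation and a systematic error is indicated.
z = mean_offset / sem
print(mean_offset, sem, z)
```

Here the ratio comes out at several SEMs, so the off-centre average would be judged systematic rather than random.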

Reply to  Bellman
September 6, 2023 12:38 pm

“Now if the average is not near the bullseye it’s an indication of a systematic error in the shots, either in the gun or the shooter. But how do you tell if the error is real, or if it was just the result of chance?”

What is the probability that random chance will cause the same error each time? That doesn’t sound very “random”.

Basically, you are showing your complete lack of real world experience with almost anything.

The example is meant to show that AVERAGE UNCERTAINTY is not total uncertainty. It is meant to show that standard deviation of the sample means only shows the variation around the average, it is *NOT* an indication of the actual uncertainty of the average.

Until you can get those concepts clear in your mind you will *never* understand measurements and measurement uncertainty. As it stands, I wouldn’t let you near *anything* physical, be it an engine, a stud wall in a house, a bridge beam, or a nuclear power plant.

Reply to  Tim Gorman
September 6, 2023 1:58 pm

That doesn’t sound very “random”.

Hence the word “systematic”.

it is *NOT* an indication of the actual uncertainty of the average.

It is an indication of how much variation there is in the sample mean. Hence why it is useful in testing if two samples come from the same population. Or in this case if the sample comes from a gun that is true.

Reply to  Bellman
September 7, 2023 12:03 pm

“It is an indication of how much variation there is in the sample mean.”

Do you *ever* stop to think through what you are posting?

How do you get “variation” in the mean from ONE sample mean?

“Hence why it is useful in testing if two samples come from the same population.”

And then you talk about having TWO sample means to compare!

The variation in the sample mean is the standard deviation of the sample population. You are *assuming* that the sample and the population both have the same standard deviation, an assumption that must be *proven* to be of any use in the real world of physical science. Since the temperature data forms a population that is decidedly not Gaussian, this would be assuming facts not in evidence.

Reply to  karlomonte
September 6, 2023 12:30 pm

Yep. Consider, if even one of the 3-ring shots were on the bulls-eye the total uncertainty would change significantly, from 13 to 10 or 30%, but the average wouldn’t be affected as much, from 2.2 to 1.7 or about 20%.

That’s why you want to aim for (target, shoot for, etc.) lowering the total uncertainty, not the average uncertainty.

Reply to  Tim Gorman
September 1, 2023 6:47 pm

You got it.

Nothing must be allowed to stand in the way of milli-Kelvin pseudo-error bars.

Reply to  Bellman
September 1, 2023 12:09 pm

The square root symbol refers to the principal square root, which is the positive one.

The square root sign is also the radix, which is how I, as a scientist, see it.

In measurement science, the negative square root has as much significance as the positive one. Therefore, in measurement science, the positive square root does not take a principal standing.

The ISO IEC Guide occasionally gets it.

Section 5.3 under 5 Stages of uncertainty evaluation

b) the standard deviation of Y, taken as the standard uncertainty u(y) associated with y [JCGM 100:2008 (GUM) E.3.2], and

c) a coverage interval containing Y with a specified coverage probability. (my bold)

where y is an estimate of Y. A coverage interval containing Y is Y±u(y), where ±u(y) = the coverage interval is the standard deviation.

Reply to  Pat Frank
September 1, 2023 2:36 pm

The square root sign is also the radix, which is how I, as a scientist, see it.

Yes, and radical – all coming from the Latin for root. But I find radix a little confusing as it also means the base of a number system.

The ISO IEC Guide occasionally gets it.

And from the same document

4.2 Measurement uncertainty is defined [JCGM 200:2008 (VIM) 2.26] as

non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used.

where y is an estimate of Y. A coverage interval containing Y is Y±u(y), where ±u(y) = the coverage interval is the standard deviation.

That’s quite a stretch.

The full text is

5.3 The calculation stage (see clause 7) consists of propagating the probability distributions for the input quantities through the measurement model to obtain the probability distribution for the output quantity Y , and summarizing by using this distribution to obtain

a) the expectation of Y , taken as an estimate y of Y ,

b) the standard deviation of Y , taken as the standard uncertainty u(y) associated with y

[JCGM 100:2008 (GUM) E.3.2], and

c) a coverage interval containing Y with a specified coverage probability.

a), b), and c) are three different things. So b) and c) are two different things.

b) is obtaining a standard deviation of Y – which is the standard uncertainty associated with y. c) is obtaining a coverage interval for a specific probability.

In no way does it suggest the coverage interval is the same as the standard deviation. How would that even make sense when the coverage interval depends on the required probability?

Reply to  Bellman
September 1, 2023 7:20 pm

The “and” logically connects b and c. Deny it as you might.

The standard deviation is … and [is] a coverage interval containing Y and a specified coverage probability.

That’s about as clear as can be that standard deviation has multiple meanings. And the statistical meaning is not the meaning used in science.

I’ve written to a scientist connected with GUM, and pointed out the confusion resulting from their adherence to the statistical definition. So far, no reply.

Reply to  Pat Frank
September 2, 2023 6:08 am

The word “and” just tells you the last item in a list is about to occur. The items may or may not be connected and may or may not be the same thing.

In this case we have the expected value, the standard deviation, and the coverage interval. Each item is “connected” to the previous one, insofar as you need to know the value of each to know the next value. But you seem to think that means they are the same thing. You are claiming the coverage interval is just another name for the standard deviation.

”The standard deviation is … and [is] a coverage interval containing Y and a specified coverage probability.”

Yes, adding a word can completely change the meaning, especially if you ignore the context.

By using this to obtain a) an expected value, b) a standard deviation, and c) is a coverage interval.

Makes no sense; it just seems you are desperate to twist the passage to justify your misunderstanding.

Reply to  Bellman
September 2, 2023 7:22 pm

There is no “and” between a) and b). They are independent. An “and” is present between b) and c). They are connected.

Follow the logical connections below, please.

c) a coverage interval containing Y with a specified coverage probability.

VIM Vocab. Metrol._JCGM_200_2008
2.37
coverage probability
probability that the set of true quantity values of a measurand is contained within a specified coverage interval

NOTE 1 This definition pertains to the Uncertainty Approach as presented in the GUM.

VIM Vocab. Metrol._JCGM_200_2008
2.35
expanded measurement uncertainty
expanded uncertainty
product of a combined standard measurement uncertainty and a factor larger than the number one

NOTE 1 The factor depends upon the type of probability distribution of the output quantity in a measurement model and on the selected coverage probability.

NOTE 2 The term “factor” in this definition refers to a coverage factor.

ISO IEC Guide 98-3 2008
2.3.5
expanded uncertainty
quantity defining an interval about the result of a measurement that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand

NOTE 1 The fraction may be viewed as the coverage probability or level of confidence of the interval.

NOTE 2 To associate a specific level of confidence with the interval defined by the expanded uncertainty requires explicit or implicit assumptions regarding the probability distribution characterized by the measurement result and its combined standard uncertainty. The level of confidence that may be attributed to this interval can be known only to the extent to which such assumptions may be justified.

VIM Vocab. Metrol._JCGM_200_2008
2.30
standard measurement uncertainty
standard uncertainty of measurement

standard uncertainty
measurement uncertainty expressed as a standard deviation

2.31
combined standard measurement uncertainty
combined standard uncertainty
standard measurement uncertainty that is obtained using the individual standard measurement uncertainties associated with the input quantities in a measurement model.

Reply to  Pat Frank
September 3, 2023 2:52 am

This is just getting sad.

There is no “and” between a) and b) because in a list the word is only used between the final two items.

You’re embarrassing yourself just to avoid admitting you might have made a small mistake that has no impact on your argument over instrument uncertainty.

If it were true that “science” uses the word standard deviation to mean a coverage interval, then you should easily be able to find examples of them doing that, instead of having to torture the English language like this.

I’d still say “science” was wrong to do it if they did, because it’s not helpful to misuse a well-defined term.

Everything else you quote at length from the GUM is simply you failing to distinguish between standard uncertainty and expanded uncertainty.

Reply to  Bellman
September 3, 2023 6:35 am

Quiet back there! The expert is speaking!

Reply to  Bellman
September 3, 2023 6:46 am

They all include the standard deviation, Bellman, including c) a coverage interval containing Y with a specified coverage probability.

Up-thread I provided links to my published papers showing exactly that science uses standard deviation as a coverage interval. A usage passed by all my co-authors, my reviewers, and the journal editors.

You’ll also find it in other published papers in the physical sciences. Did you ever look?

Reply to  Bellman
August 31, 2023 7:37 am

This whole discussion has revolved around a normal distribution. This is only one example of many possible uncertainty intervals. A skewed measurement distribution has unequal interval values. This is why every physical scientist uses a ± interval.

A skewed measurement can easily be -10, +3. How does this happen? A mechanical gauge using a spring to control the pointer. Springs are not linear in their movement. Transducers are not linear. You can have odd intervals where the plus and minus values are not the same. How about a measurement of 5 (-2,+3).

This is one reason more measurements plotted on a histogram is important. If they do not have a normal distribution the mean is not a good indicator and neither is a standard deviation.

This is why “a” and “b” points describe an interval.
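One way to make the a/b interval concrete is to quote percentiles of the skewed distribution directly, which naturally gives unequal minus and plus limits. A lognormal sketch (the distribution and its parameters are invented for illustration):

```python
import random

random.seed(5)

# A right-skewed "measurement" distribution (lognormal, illustrative)
values = sorted(random.lognormvariate(0.0, 0.5) for _ in range(10000))

def percentile(sorted_vals, p):
    # simple nearest-rank percentile, good enough for a sketch
    idx = min(len(sorted_vals) - 1, int(p / 100 * len(sorted_vals)))
    return sorted_vals[idx]

median = percentile(values, 50)
lo = percentile(values, 16)   # roughly "one sigma" coverage limits
hi = percentile(values, 84)

# The interval is asymmetric: (median - lo) != (hi - median)
print(f"{median:.3f} (-{median - lo:.3f}, +{hi - median:.3f})")
```

For this skewed distribution the plus limit comes out clearly larger than the minus limit, which is exactly the 5 (-2, +3) style of statement described above.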

Reply to  Jim Gorman
August 31, 2023 9:32 am

This whole discussion has revolved around a normal distribution.

Not my fault. I was just talking about standard deviations – but then some kept talking about the µ±σ interval having a 68% probability. I kept pointing out that this was only for a Gaussian distribution.

A skewed measurement distribution has unequal interval values.

Any distribution can have any interval values you want.

This is why every physical scientist uses a ± interval.

Because they assume all distributions are symmetrical?

This is why “a” and “b” points describe an interval.

But an interval won’t tell you what the distribution is.

And the problem we keep coming back to is people keep insisting that the interval has no distribution, it’s just a range of ignorance where you can say nothing about the probability of any value inside. That would imply to me that the distinction between a symmetric and asymmetric distribution would be meaningless.

Reply to  Bellman
August 31, 2023 12:23 pm

And the problem we keep coming back to is people keep insisting that the interval has no distribution, it’s just a range of ignorance where you can say nothing about the probability of any value inside.

You can specify an uncertainty interval without knowing *anything* about a distribution inside the interval. Go read up on a Type B uncertainty estimate in the GUM.

WHEN are you going to start actually studying the appropriate documents instead of cherry-picking?

“That would imply to me that the distinction between a symmetric and asymmetric distribution would be meaningless.”

Really? Not knowing a distribution for the uncertainty interval means it is meaningless? I’m not surprised you think that! But it is as wrong as it can be.

Reply to  Bellman
August 31, 2023 7:40 am

“This whole argument keeps coming back to you thinking there are two standard deviations for any distribution, one negative and one positive, and to find a confidence interval you add both distributions.”

There ARE two pieces to the distributions. You *can* have an asymmetric uncertainty interval. How do you differentiate between the two if you don’t use a value modifier showing that the negative part of the uncertainty interval is different from the positive part of the interval?

As usual, you are stuck in Gaussian world. Have you *never* wondered why the GUM only addresses symmetric uncertainty? My guess is that you didn’t even realize the negative uncertainty span can be different than the positive side.

This *should* be a clue to any thinking man that the GUM only addresses instances that result in a Gaussian distribution.

When are you going to give us a source from the GUM for how to handle multiple measurements of different things?

Reply to  Tim Gorman
August 31, 2023 9:44 am

There ARE two pieces to the distributions.

And you keep moving the goal posts. A distribution has one standard deviation – it’s called the standard deviation.

Now, maybe you could study what happens if you split the distribution into two parts, greater than the mean and below the mean, and work out a different distribution for each. You could do a piecewise SD in many ways – and for all I know there are methods that use that. But – you would still expect each standard deviation to be positive.

As usual, you are stuck in Gaussian world.

How many times did I point out to you, you were assuming a Gaussian distribution. You keep wanting to make the µ ± σ be 68%.

Have you *never* wondered why the GUM only addresses symmetric uncertainty?

It doesn’t. I gave you the section number where it discusses asymmetric uncertainties elsewhere. You’ll have to find it. It’s a real pain having to address the same question hundreds of times in different threads. Especially when you are so obnoxiously patronizing about it.

When are you going to give us a source from the GUM for how to handle multiple measurements of different things?

The same answer as I’ve given you the last 100 times you asked it – equation 10 or 13.

Reply to  Bellman
August 31, 2023 12:34 pm

“A distribution has one standard deviation – it’s called the standard deviation.”

And now we are back to trying to deflect to the data distribution. You don’t *need* to know the exact distribution of the uncertainty interval in order to specify it. That is exactly what Type B uncertainty is for.

Stop deflecting.

“Now, maybe you could study what happens if you split the distribution into two parts, greater than the mean and below the mean, and work out a different distribution for each”

Still deflecting to the DATA rather than address the uncertainty. Stop deflecting.

“How many times did I point out to you, you were assuming a Gaussian distribution. You keep wanting to make the µ ± σ be 68%.”

I was assuming a Gaussian distribution because that’s all you seem to know! EVERYTHING is Gaussian to you!

Now that we’ve moved on to an asymmetric uncertainty interval all you can do is deflect and evade.

“It doesn’t. I gave you the section number where it discusses asymmetric uncertainties elsewhere.”

It does *NOT* address how to handle it. It only says it exists and that other information than the standard deviation and average is needed to handle it. It does *NOT* specify the information nor does it specify a method or protocol on how to determine that extra information.

And apparently neither can you!

“The same answer as I’ve given you the last 100 times you asked it – equation 10 or 13.”

And I keep telling you that these don’t apply to multiple measurements of different things where you can have skewed data distributions and asymmetric uncertainty intervals. They don’t even apply to Gaussian data distributions which may have asymmetric uncertainty intervals!

Show us one place in the GUM where they actually calculate an asymmetric uncertainty interval and specify what it is. Oh — don’t bother, we know you can’t because it isn’t there! Just like in 4.4.3, they don’t show *any* skewed distributions, only symmetric ones.

You can continue to play dumb but *EVERYONE* knows you’ve been caught. So has climate science because they don’t recognize that you can even have skewed temperature data let alone asymmetric uncertainty – their entire discipline is based on the assumption that all measurement uncertainty is random, Gaussian, and cancels. Just like you do!

Reply to  Bellman
August 31, 2023 3:05 pm

This whole argument keeps coming back to you thinking there are two standard deviations for any distribution, one negative and one positive, and to find a confidence interval you add both distributions.

What makes you think I think that? In any normal distribution, there is an infinite number of standard deviations, on each side of the mean. Both positive and negative.

But my problem is you seem to think that somehow my interpretation is some plot by statisticians to force scientists to use concepts that don’t work in the real world.

No. It’s that you don’t understand that uncertainty in science is not uncertainty in statistics. Consequently, you don’t know what you’re talking about.

We’ve argued all these points to death. Go on thinking as you do, and welcome to it. You’re evidently fated to not understand LiG Met. on its own merits.

Reply to  Pat Frank
August 31, 2023 3:30 pm

What makes you think I think that? In any normal distribution, there is an infinite number of standard deviations, on each side of the mean. Both positive and negative.

Sorry if I overestimated your understanding.

I assume you are now pointing out there are an infinite number of multiples of σ on the real number line. So what? There are an infinite number of multiples of e, π and any other number you can think of.

No. It’s that you don’t understand that uncertainty in science is not uncertainty in statistics. Consequently, you don’t know what you’re talking about.

Yet you never tell me how these non-statistical uncertainties are defined.

Reply to  Bellman
August 31, 2023 6:20 pm

Yet you never tell me how these non-statistical uncertainties are defined.

Why bother? You will just reject them as “wrong”.

Reply to  Bellman
August 31, 2023 8:57 pm

I assume you are now pointing out there are an infinite number of multiples of σ on the real number line.

No. I’m alluding to the fact that the wings of a Gaussian extend to ±infinity

Reply to  Pat Frank
September 1, 2023 6:03 am

That’s what I was saying. I still don’t get your point.

There is only one standard deviation for a distribution – it’s the standard deviation, σ, and has to be positive. There is a point on the x-axis corresponding to µ + σ and a point corresponding to µ – σ, and there are points corresponding to µ + 2σ, µ + 3σ, and µ + kσ, for all integers k, or indeed any non-integer k.

None of these points are special, except that they are used to define certain intervals that have well known probabilities, and they scale with the shape of the normal distribution.

But you can have a probability for any range – regardless of whether it involves a multiple of σ, e.g. µ + 1.96σ. Nor does the interval have to be symmetrical about the mean.

None of this means that the standard deviation is negative.
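The point being argued – that a probability can be computed for any interval, symmetric about the mean or not – can be sketched numerically for a Gaussian. This is a minimal illustration using only the standard library (the function names are mine, not from any source in the thread):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def prob_between(lo, hi, mu, sigma):
    """P(lo < X < hi) for X ~ N(mu, sigma).
    The interval need not be symmetric about mu."""
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

# Symmetric interval mu ± 1.96σ carries the familiar 95 %
print(round(prob_between(-1.96, 1.96, 0.0, 1.0), 3))  # 0.95
# An asymmetric interval (mu − 1σ, mu + 2σ) has its own well-defined probability
print(round(prob_between(-1.0, 2.0, 0.0, 1.0), 3))    # 0.819
```

In both cases σ itself stays a single positive number; only the interval endpoints move.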

Reply to  Bellman
September 1, 2023 10:10 am

There is a point on the x-axis corresponding to µ + σ and a point corresponding to µ – σ, and there are points corresponding to µ + 2σ, µ + 3, and µ + kσ, “

As usual, you are stuck in the world of random, Gaussian variables.

µ + σ and µ – σ ONLY WORK WHEN YOU HAVE A SYMMETRIC DISTRIBUTION with a symmetric standard deviation.

Nor does the interval have to be symmetrical about the mean.”

Then how do you differentiate the left side of the interval from the right side? How do you calculate it?

Reply to  Tim Gorman
September 1, 2023 5:54 pm

As usual, you are stuck in the world of random, Gaussian variables. ”

Your hot key is stuck. Saying that numbers exist on a number line has nothing to do with anything you’ve just said.

µ + σ and µ – σ ONLY WORK WHEN YOU HAVE A SYMMETRIC DISTRIBUTION

You’ve just ignored anything I’ve actually said, and then resorted to shouting your ignorance. If you have a probability distribution you can work out the probability of a value falling between any two values, regardless of whether the values or the distribution is symmetric.

Then how do you differentiate the left side of the interval from the right side? How do you calculate it?

If you know the mean, and the upper and lower bounds of an interval you can see if it’s symmetrical about the mean.

Reply to  Bellman
September 1, 2023 6:49 pm

If you have a probability distribution you can work out the probability of a value falling between any two values, regardless of whether the values or the distribution is symmetric.

Where do you get this alleged distribution?

Pull it out of your backside?

Reply to  karlomonte
September 2, 2023 7:32 am

I just plotted the temperatures used in TN 1900 using box plot, quartile, 5 number, whatever you want to call it. It is a skewed distribution with almost 1° C between the mean and median.

Trying to determine a real mathematical formula that matches the curve would not be easy.
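A gap between mean and median of the kind described is a quick diagnostic for skew. Here is a minimal stdlib sketch – the sample values are invented for illustration, not the actual TN 1900 temperatures:

```python
import statistics

# Invented right-skewed temperature sample (NOT the TN 1900 data)
temps = [20.1, 20.3, 20.4, 20.6, 20.8, 21.0, 21.3, 22.5, 24.0, 26.2]

mean = statistics.fmean(temps)
median = statistics.median(temps)
# For a right-skewed sample the mean sits above the median
print(f"mean = {mean:.2f}, median = {median:.2f}, gap = {mean - median:.2f}")
```

A symmetric (e.g. Gaussian) sample would put mean and median nearly on top of each other; the gap here signals the long upper tail.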

Reply to  Jim Gorman
September 2, 2023 7:42 am

Here is the histogram for the UAH baseline average, for the month of February (1990-2010)

Reply to  karlomonte
September 2, 2023 7:44 am

Forgot the graph, show me where it is “gaussian”:

Feb UAH baseline.png
Reply to  karlomonte
September 2, 2023 2:21 pm

I am not surprised. I do think UAH is probably the best “global temperature” set, but the uncertainty in the data can hardly be assessed based on this distribution.

Reply to  Bellman
September 1, 2023 11:29 am

There is only one standard deviation for a distribution – it’s the standard distribution, σ, and has to be positive.

has to be“? On what grounds?

The standard deviation is the square root of the variance. The square root of any positive number is ±s.

None of this means that the standard deviation is negative

Where did I write that the standard deviation is negative?

Reply to  Pat Frank
September 1, 2023 3:30 pm

“has to be“? On what grounds?

On the grounds that it is a measure of dispersion, and all measures of dispersion have to be non-negative:

A measure of statistical dispersion is a nonnegative real number that is zero if all the data are the same and increases as the data become more diverse.

It makes no sense to suggest that any measure of dispersion could be less than zero – as that would imply there is less dispersion than if all values were identical.

The standard deviation is the square root of the variance.

The positive square root.

The square root of any positive number is ±s

But in many contexts the – value is ignored as not applicable. As I’ve said, it makes no sense to think of the negative length of a hypotenuse, or the negative side length of a square. They don’t exist in the real world.

Where did I write that the standard deviation is negative?

When you said this:

The standard deviation is the square root of the variance. The square root of any positive number is ±s.

Reply to  Bellman
September 1, 2023 7:13 pm

“A measure of statistical dispersion…”

In science, the standard deviation is a statistical measure of physical dispersion. See the difference?

It makes no sense to suggest that any measure of dispersion could be less than zero – as that would imply there is less dispersion then if all values were identical.

It makes sense that physical measures of dispersion include negative values.

When you said this:
The standard deviation is the square root of the variance. The square root of any positive number is ±s.”

±s is not a negative number.

Reply to  Pat Frank
September 1, 2023 4:29 pm

You didn’t. I have prepared a list of bellman evasion rules (which you turned me on to):

  1. Your question is ill-posed or vague
  2. I was talking about something else.
  3. Divert to another subject.
Reply to  Tim Gorman
September 1, 2023 4:39 pm

So let me get this straight. After a week of me trying to explain that the standard deviation is never negative – you are now claiming you never said the standard deviation is both positive and negative?

Reply to  Bellman
August 31, 2023 7:07 am

in many cases they are badly written, confusing and sometimes just wrong.”

Usually the problem is your reading comprehension, not the documents.

I’ll ask for a third time so you won’t miss the question.

Can an uncertainty interval be asymmetric? If so, how do you designate each piece of the uncertainty if σ is only a positive number and the plus and minus signs are only operators?

Reply to  Tim Gorman
August 31, 2023 9:21 am

Can an uncertainty interval be asymmetric?

And I’ll reply for the third time, yes it can.

If so, how do you designate each piece of the uncertainty if σ is only a positive number and the plus and minus signs are only operators?

Could you repeat that in a way that makes sense?

If you have an asymmetric or any distribution you need to know what that distribution is before you ask questions about the probability of a value lying between two values.

If you know the distribution is Gaussian, you can use the Gaussian formula to work out what the probability is for any interval – to do this you just need µ and σ because the distribution is entirely defined by those two values. For other distributions whether symmetric or not you may need other parameters. These might for example include skew and kurtosis. In many cases there will not be a parametric definition of a distribution and you will have to estimate it from data or some other means.

If you can find a formula for a particular distribution you can again work out the probability of a random value lying between any two points. You can also determine ranges that meet a particular probability – e.g. a range that will have a 95% probability of containing a random value.

None of this has anything to do with whether ± is an operator or not.

Reply to  Bellman
August 31, 2023 12:09 pm

Could you repeat that in a way that makes sense?”

bellman evasion Rule No. 1: “You asked an ill-formed or vague question”.

I’ll ask again: “If so, how do you designate each piece of the uncertainty if σ is only a positive number and the plus and minus signs are only operators?”

It’s a perfectly formed and pertinent question. Stop making excuses.

“If you have an asymmetric or any distribution you need to know what that distribution is before you ask questions about the probability of a value lying between two values.”

bellman evasion Rule No 2: “I was talking about something else”.

This is *NOT* about probability. It is about how you designate an asymmetric uncertainty interval. Your continued evasions are a red flag. You either can’t answer or you won’t answer. Which is it?

 For other distributions whether symmetric or not you may need other parameters.”

Another evasion. What are those parameters and how do you designate them?

“These might for example include skew and kurtosis”

Skew and kurtosis only tell you the distribution of the DATA is skewed. It doesn’t tell you that the uncertainty is asymmetric. A thermocouple sensor in hot gas flowing in a hot tube may generate a perfectly linear pattern as the temperature of the gas increases while the uncertainty interval is asymmetric.

Again, for the umpteenth time, how do you designate an asymmetric uncertainty interval?

“If you can find a formula for a particular distribution you can again work out the probability of a random value lying between any two points.”

Another Evasion Rule No. 2. The issue isn’t about the probability of the DATA! It is about the uncertainty of the stated values of that data. You can specify an uncertainty interval with *NO* knowledge of the distribution of the uncertainty – that’s what a Type B uncertainty is for!

STOP EVADING. Just admit you don’t want to answer because you know your assertion is wrong.



Reply to  Tim Gorman
August 31, 2023 2:34 pm

“I’ll ask again: “If so, how do you designate each piece of the uncertainty if σ is only a positive number and the plus and minus signs are only operators?””

If I said your question didn’t make sense to me, why do you think repeating it verbatim will make it clearer? Maybe you should write it in all caps, that seems to be your usual tactic.

Problem 1 is I don’t know what you mean by “each piece” of the uncertainty. How many pieces does an uncertainty have? You’re the one who now wants to describe asymmetric uncertainty – you need to explain exactly what information is needed to describe the asymmetry.

Then “if σ is only a positive number”.

What has that got to do with describing an asymmetry. I keep banging this drum, but you never want to check it for yourself, σ is not generally a description of a distribution. It is a measure of the dispersion of all the values in the distribution. The average distance of each point from the mean (that’s positive distance for your benefit). It doesn’t matter what shape the distribution is you always calculate the standard deviation the same way. In some distributions, e.g. Gaussian all you need is the mean and the standard deviation to completely define the distribution – in others you will need more parameters, and in others there will be no definition based on parameters.

Pretending that σ can be negative will not help you in any way to define the distribution.

I suspect your problem is you just don’t understand what σ is. You’ve looked at a graph of a Gaussian distribution seen positive and negative σ signs and assumed that they are props holding the graph up. Make some of the signs bigger and smaller and you can warp the graph. You are confusing points that mark where on the axis a point is equal to -σ with a value representing a different sort of standard deviation.

Your continued evasions are a red flag. You either can’t answer or you won’t answer. Which is it?

Nope and nope. Have you considered I answered them but you didn’t understand the answer? Of course not. Your understanding is perfect.

This is *NOT* about probability.

Oh yes it is.

What are those parameters and how do you designate them?

Why keep asking me? You’re the expert on uncertainty, you are now obsessed with asymmetric uncertainty – surely you know how to represent them.

Skew and kurtosis only tell you the distribution of the DATA is skewed.

Keep displaying your ignorance.

In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean.

https://en.wikipedia.org/wiki/Skewness

Another Evasion Rule No. 2. The issue isn’t about the probability of the DATA!

Then what do you think it is? You are the one who kept going on about the 68% between two sigmas, what do you think that means? All distributions are probability distributions. They describe the probability of a value occurring. You can deduce them by looking at data, and predict the distribution of data, because it is assumed the data is a collection of random variables taken from the distribution.

You can specify an uncertainty interval with *NO* knowledge of the distribution of the uncertainty – that’s what a Type B uncertainty is for!

In which case you have to make an assumption about what the distribution is.

Reply to  Bellman
August 31, 2023 3:48 pm

Problem 1 is I don’t know what you mean by “the each piece” of the uncertainty.”

And here we go! bellman Evasion Rule No. 1: “your question is ill-posed or vague”.

The GUM seems to know. (a_+) + (a_-). Two separate pieces. Now tell me you didn’t know that was what I was talking about! Even after I provided you the quote.

“You’re the one who now wants to describe asymmetric uncertainty – you need to explain exactly what information is needed to describe the asymmetry.”

I have explained it. I’ve even given you a quote from the GUM on it. (a_+) + (a_-)

What is a_+ and what is a_-?

“What has that got to do with describing an asymmetry.”

If σ is only a positive number and the + and – signs are only operators showing mean + σ and mean-σ then how do you ever show asymmetry?

” σ is not generally a description of a distribution. It is a measure of the dispersion of all the values in the distribution. The average distance of each point from the mean (that’s positive distance for your benefit). “

How does the AVERAGE value get used to describe an asymmetric uncertainty interval?

bellman Evasion Rule No 3. Divert to talking about something else.

“In others you will need more parameters, and in others there will be no definition based on parameters.”

What other parameters? We keep asking you for them and how to calculate them but all we ever get from you is hand waving and evasion. All so you don’t have to admit that there is such a thing as a negative σ.

“You are confusing points that mark where on the axis a point is equal to -σ with a value representing a different sort of standard deviation.”

I’m not confusing anything. You are dissembling and making stuff up.

“Have you considered I answered them but you didn’t understand the answer? Of course not. Your understand is perfect.”

You haven’t answered anything!

  1. Where in the GUM does it show how to handle single measurements of different things under different environments using different devices.
  2. Where in the GUM does it show how to calculate asymmetric uncertainty?
  3. Why does the GUM show using a_- if “a” is always a positive number?

How do *YOU* handle asymmetric uncertainty? How do *YOU* calculate it? How do *YOU* write it?

Don’t deflect, evade, handwave, or dissemble. Just answer the straightforward questions.

In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean.”

And here we go again. Your use of the meme that all uncertainty is a random variable, Gaussian, and cancels.

You keep denying your use of that meme but it crops up in EVERYTHING you post!

Then what do you think it is?”

It’s about uncertainty and how it is specified. Something you seem to have become reluctant to address. You know you are wrong so you have to deflect, evade, dissemble, and use handwaving magic.

Reply to  Tim Gorman
August 31, 2023 5:50 pm

Last word for now, as I’ve had enough of your petulant, pathetic insults and demands.

The GUM seems to know. (a_+) + (a_-). Two separate pieces. Now tell me you didn’t know that was what I was talking about! Even after I provided you the quote.

Read what it says, rather than making it up. a_+ and a_- are not two separate intervals. They are the upper and lower bounds of the same interval defining an assumed rectangular distribution. The expected value is (a_+ + a_-) / 2, and the half-width is a = (a_+ – a_-) / 2. The standard uncertainty is given by u^2 = a^2 / 3.

4.3.8 describes a situation where the unknown distribution is not symmetric, that is you know the upper and lower bounds, but an expected value that is not central. But that still doesn’t involve negative uncertainties or intervals.
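As a numerical sketch of that rectangular-distribution rule (the bounds 9.9 and 10.3 below are invented for illustration, not taken from the GUM):

```python
import math

def rectangular_uncertainty(a_minus, a_plus):
    """For an assumed rectangular distribution between lower bound a_minus
    and upper bound a_plus: returns the expected value (a_+ + a_-)/2,
    the positive half-width a = (a_+ - a_-)/2, and the standard
    uncertainty u = a / sqrt(3), i.e. u^2 = a^2 / 3."""
    expected = (a_plus + a_minus) / 2.0
    a = (a_plus - a_minus) / 2.0   # positive by construction: a_plus > a_minus
    u = a / math.sqrt(3.0)
    return expected, a, u

# Hypothetical bounds, purely for illustration
mu, a, u = rectangular_uncertainty(9.9, 10.3)
print(round(mu, 3), round(a, 3), round(u, 4))  # 10.1 0.2 0.1155
```

Note that a_- enters only as the lower bound; the half-width a and the standard uncertainty u both come out positive.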

If σ is only a positive number and the + and – signs are only operators showing mean + σ and mean-σ then how do you ever show asymmetry?

How would you show asymmetry using a negative σ? I won’t expect an answer because it’s obvious you still won’t take my hint and find out what a standard deviation actually is.

How does the AVERAGE value get used to describe an asymmetric uncertainty interval?

It doesn’t. As I keep telling you.

Divert to talking about something else

Otherwise known as trying to explain something to you in a way I hope you might understand.

What other parameters? We keep asking you for them and how to calculate them but all we ever get from you is hand waving and evasion. All so you don’t have to admit that there is such a thing as a negative σ.

The other parameters will depend on what distribution you are talking about. You keep demanding I answer these impossible questions, but never explain how you can use the concept of negative uncertainty or negative standard deviations to describe an arbitrary asymmetric distribution.

I’m not confusing anything. You are dissembling and making stuff up.”

I’m trying to help you out. You keep saying things that make no sense to anyone who knows what a standard deviation is, and I’m trying to see it through your eyes and speculate what you might be getting wrong. I have to do this because you never give any details of what you are actually trying to say.

You haven’t answered anything!

I’ll take that as a no. You haven’t considered that you are not understanding what I’m saying, let alone consider the possibility that you might be misunderstanding something.

Where in the GUM does it show how to handle single measurements of different things under different environments using different devices.

Answered, but you don’t like the answer and dismiss it with your usual baloney about Gaussian distributions.

Where in the GUM does it show how to calculate asymmetric uncertainty?

I told you it doesn’t in any detail. 4.3.8 for example describes an unknown distribution where all you know is a lower and upper bound, and an expected value which is not central. But all they say is without more details it’s difficult to figure out the standard uncertainty, so the simplest approximation is to treat it like a rectangular distribution.

Why does the GUM show using a_- if “a” is always a positive number?

Because it’s written for people who have more sense than to assume a_- means a negative number. a is the half width of an interval, by definition it is positive, being half the value of a positive difference.

How do *YOU* handle asymmetric uncertainty?

Why do you keep asking me this idiotic question over and over? I do not know how I would handle asymmetric uncertainty. It’s not something I’m ever likely to need to do. It would depend on the actual uncertainty.

If you think you know, why don’t you just tell us rather than keep asking me. But if you claim that you can do it using negative uncertainty, I’m going to guess you still don’t know what you are talking about.

Just answer the straightforward questions.

If they are so straightforward, then answer them yourself. I’ll wait for the scream of “You are in no position to demand anything of me”.

And here we go again. Your use of the meme that all uncertainty is a random variable, Gaussian, and cancels.

I’m beginning to suspect this is some strange form of Tourette syndrome. You have to blurt it out regardless of context or sense. In this case you spewed it out in response to me quoting a passage about skewness. If you can’t see the problem, I pity you.

Reply to  Pat Frank
August 31, 2023 6:14 am

Rather, it seems you can’t understand that ±σ represents a continuous range rather than two scalar limits.

Exactly right, and he is desperate to extract information from inside the range.

Reply to  karlomonte
August 31, 2023 7:11 am

bellman doesn’t even realize that an uncertainty interval does not need to be symmetric. That only applies to a Gaussian (or symmetric) distribution.

You *can* have different uncertainties on either side of a stated value.

How do you designate this if the plus and minus signs are only operators and not value modifiers? My guess is we’ll never get an answer from bellman on this.

Reply to  Tim Gorman
August 31, 2023 7:15 am

I have personally encountered a situation where it was necessary to have asymmetric uncertainty limits.

Reply to  karlomonte
August 31, 2023 7:49 am

If bellman would stop and think about it for even one second it should dawn on him that an asymmetric response is not unlikely for an LIG thermometer. The liquid going up (i.e. temp going up) encounters a dry tube. The liquid going down (i.e. temp going down) encounters a wet tube. The friction coefficients will be different for each direction leading to an asymmetric uncertainty in the reading.

The issue with the LIG thermometer is that its other uncertainties overwhelm the friction uncertainties making the uncertainty appear symmetric — but that asymmetry is still there. In a full list of the uncertainty factors it should always be included.

The same thing applies to a thermal sensor in a hot gas flowing in a tube. Because of radiation loss from the sensor to the cooler walls the actual temperature is more likely to be above the reading than it is to be below it. Again, is the bias enough to affect the overall uncertainty? If it is then you will have an asymmetric uncertainty. You can’t just say the standard deviation is an unlabeled σ. Whether you label them +σ and -σ or u_l (lower side) and u_u (upper side) is merely notation convention, you still need a value modifier to distinguish between them. σ is *not* just a positive number on the number line.

Reply to  Tim Gorman
August 31, 2023 9:48 am

If bellman would stop and think about it for even one second it should dawn on him that an asymmetric response is not unlikely for an LIG thermometer.

I’m not the one writing the 50 page pamphlet on LIG uncertainty.

Does Pat Frank address that question? There are lots of graphs indicating asymmetric uncertainties, but all his expanded uncertainties are ± a single value.

Reply to  Bellman
August 31, 2023 12:36 pm

Pat didn’t try to address every possible source of uncertainty in an LIG. He focused on one. That does *NOT* make his analysis wrong. What he identified is just one component to be considered – that *is* symmetric.

You are fading fast. Try harder.

Reply to  Bellman
August 31, 2023 3:08 pm

…but all his expanded uncertainties are ± a single value

If you’re talking about the results of eqns. 4-6, those are not expanded uncertainties. They’re average 1C/division LiG thermometer resolution.

Reply to  Pat Frank
August 31, 2023 3:34 pm

Then why do you keep multiplying them all by 1.96? What’s that if it’s not a coverage factor?

Reply to  karlomonte
August 31, 2023 8:26 am

You beat me to it!

Reply to  Jim Gorman
August 31, 2023 8:39 am

great minds think alike?

Reply to  Tim Gorman
August 31, 2023 8:21 am

bellman doesn’t even realize that an uncertainty interval does not need to be symmetric.”

Please stop lying. I said that the standard way the GUM specifies describing uncertainties does not allow for non-symmetric intervals. I’m not saying you can’t have an asymmetric distribution.

How do you designate this if the plus and minus signs are only operators and not value modifiers? My guess is we’ll never get an answer from bellman on this.

In fact a quick check shows it’s mentioned in G.5.3. As they say this doesn’t affect the standard uncertainty.

They say it’s still convenient to quote the expanded uncertainty as y±U, “unless the interval is such that there is a cost differential between deviations of one sign over the other.”

The alternative requires more information to be given.

The alternative is to give an interval that is symmetric in probability (and thus asymmetric in U): the probability that Y lies below the lower limit y − U_- is equal to the probability that Y lies above the upper limit y + U_+. But in order to quote such limits, more information than simply the estimates y and u_c (y) [and hence more information than simply the estimates x_i and u(x_i) of each input quantity X_i] is needed.
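A numerical sketch of that probabilistically symmetric alternative – equal tail probabilities on each side, hence unequal arms U_- and U_+ – might look like this (the skewed error sample below is invented for illustration only):

```python
import random
import statistics

random.seed(42)
# Invented right-skewed "error" sample: a Gaussian term plus an
# exponential term, purely to illustrate an asymmetric distribution
errors = sorted(random.gauss(0.0, 0.2) + random.expovariate(2.0)
                for _ in range(100_000))

y = statistics.fmean(errors)            # stand-in for the estimate
lo = errors[int(0.025 * len(errors))]   # 2.5th percentile
hi = errors[int(0.975 * len(errors))]   # 97.5th percentile

U_minus = y - lo   # downward arm of the 95 % interval: y - U_-
U_plus = hi - y    # upward arm: y + U_+
print(U_plus > U_minus)  # True: the skew stretches the upper arm
```

Quoting this interval requires knowing the distribution (here, via its sample quantiles) – exactly the "more information" the GUM passage says is needed beyond y and u_c(y).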

Reply to  Bellman
August 31, 2023 8:37 am

I’m not saying you can’t have an asymmetric distribution.”

Of course you are. You say it every time you assert the GUM can handle asymmetric distributions.

They say it’s still convenient to quote the expanded uncertainty as y±U, “unless the interval is such that there is a cost differential between deviations of one sign over the other.””

GUM: “[and hence more information than simply the estimates x_i and u(x_i) of each input quantity X_i] is needed.”

Since the global average temperature is an asymmetric distribution with asymmetric uncertainty where is the additional information?

Where is the evaluation of the cost differential between the deviations of one sign or the other for the global average temperature? (I thought there wasn’t any sign associated with the deviations according to you. Apparently the GUM doesn’t agree with you)

There *will* be asymmetry in daily Tmax and Tmin values because of the different variances each has, since one is from a sinusoidal temp curve and the other is from an exponential/polynomial temp curve. That asymmetry will then follow throughout all the computations of the GAT.

Asymmetric uncertainty intervals, + and -, DO NOT CANCEL. Thus the assumption that all temperature measurement uncertainty is random, Gaussian, and cancels is not and can not be justified.

Reply to  Tim Gorman
August 31, 2023 9:58 am

Of course you are.

Ah, the secret knowledge only you possess. Every time I say you can have asymmetric uncertainties, you claim that I’m really saying you cannot have asymmetric distributions. How else can you keep these meaningless arguments going?

You say it every time you assert the GUM can handle asymmetric distributions.

So weird. First I say that you can have asymmetric uncertainties, but the GUM doesn’t address this. Then I notice that there is a brief section mentioning asymmetric uncertainties, so I point this out. And pointing this out is in Gormanland enough to prove that I don’t believe in asymmetric uncertainties.

Reply to  Bellman
August 31, 2023 12:47 pm

you can claim that I’m really saying you cannot have asymmetric distributions.”

Nope. I am saying you refuse to answer as to how you specify an asymmetric uncertainty interval. I am saying you refuse to answer on how you calculate an asymmetric uncertainty interval.

Stop whining and provide an actual answer rather than deflecting to talking about probability distributions of the stated values of measurements, about how you have to know a distribution for the uncertainty interval to specify what it is, and how the GUM tells you how to calculate the uncertainty of multiple measurements of different things.

So weird. First I say that you can have asymmetric uncertainties, but the GUM doesn’t address this. Then I notice that there is a brief section mentioning asymmetric uncertainties, so I point this out. And pointing this out is in Gormanland enough to prove that I don’t believe in asymmetric uncertainties”

You are STILL deflecting. The GUM simply doesn’t tell how to handle asymmetric uncertainty, it only recognizes that you can have such – something you didn’t even realize. And *YOU* have not provided any method for handling asymmetric uncertainty either. You won’t even tell us how asymmetric uncertainty intervals should be shown for a measurement!

We *all* know that’s because it would force you admit that standard deviation has both a positive component and a negative component. And you simply can’t admit that, it would be too embarrassing for you.

Reply to  Tim Gorman
August 31, 2023 1:46 pm

He is stuck in thinking an uncertainty interval equals a probability distribution, he’s said this over and over.

Reply to  karlomonte
August 31, 2023 2:03 pm

Yep.

Reply to  Pat Frank
August 30, 2023 3:29 pm

The entire conversation on this issue is you insisting on a statistical vocabulary in a science/engineering milieu.”

You nailed it!

Reply to  Bellman
August 29, 2023 4:56 pm

Citation required.

Vasquez and Whiting again: “Additional formulae are presented when dealing with small samples, replacing the Gaussian probability term κ by a tolerance probability from the “t” Student distribution, which is equivalently used to define uncertainty intervals for means of small samples as x ± t·s, where s is the estimate of the standard deviation σ.

Ferson, et al., (2007) (pdf) discuss uncertainty in a scientific context. From page 112:

NIST uses the plus-or-minus symbol ± between the measurement result and its standard uncertainty to denote the normal distribution model of the measurement and its uncertainty. For instance, the expression 0.254 ± 0.011 denotes a normal distribution N(0.254, 0.011) with mean 0.254 and standard deviation 0.011 which is the model for the uncertainty of the measurement result. This notation is somewhat at odds with other common uses of the ± symbol to denote a pair of values of differing sign or a range of values around a central point. One must be careful to keep in mind that the plus-minus notation between the measurement result and its standard uncertainty denotes a probability distribution, rather than an interval* or a pair of scalar values. NIST also suggests a “concise notation” in which the standard uncertainty is expressed parenthetically as mantissa digits whose power is specified positionally by the last digits of the best estimate. Thus, the concise notation 0.254(11) is equivalent to the normal distribution 0.254 ± 0.011.

The problem is not one of rigor. The problem is dealing with the physical world rather than working in the Platonic ideal of mathematics.
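The parenthetical “concise notation” from the Ferson quote is easy to reproduce mechanically. A minimal sketch (the function name and signature are mine, not from NIST or Ferson):

```python
def concise(value, u, places):
    """Format a result and its standard uncertainty in parenthetical
    'concise notation': the uncertainty's digits are appended at the
    position of the value's last decimal places.
    concise(0.254, 0.011, 3) -> '0.254(11)'."""
    mantissa = round(u * 10 ** places)   # uncertainty digits at the last places
    return f"{value:.{places}f}({mantissa})"

print(concise(0.254, 0.011, 3))  # 0.254(11)
```

As the quoted passage stresses, both "0.254 ± 0.011" and "0.254(11)" denote the same thing under the NIST convention: a distribution N(0.254, 0.011), not a pair of scalar limits.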

Reply to  Pat Frank
August 29, 2023 5:33 pm

And again, two quotes, neither of which say, or imply, that standard deviation is both negative and positive.

The second one is talking about interval arithmetic. Not something I know much about, but looking at section 4.5.2 they define standard deviation as the square root of the upper and lower variances. Their numerical example shows all the standard deviations as positive.

Numerical examples. The sample standard deviation for the skinny data set is [√9.503,√12.912] = [3.082, 3.594]. (Remember that intervals are outward rounded as described in section 3.2, so that, although the sample standard deviation is computed to be [3.0827…, 3.5933…], the left endpoint rounds down and the right endpoint rounds up to the interval reported here.) The population standard deviation for the skinny data set is [√7.919, √10.760] = [2.814, 3.281]. For the puffy data set, the sample standard deviation is [√1.031, √12.347] = [1.015, 3.514], and the population standard deviation is [√0.916, √10.975] = [0.957, 3.313].

Reply to  Bellman
August 29, 2023 7:54 pm

And again, two quotes, neither of which say, or imply, that standard deviation is both negative and positive.”

The bit I quoted from Ferson (2007) p. 112 follows under the section:
8 Methods for handling measurement uncertainty

Your bit from p. 58 follows under the section:
4 Descriptive statistics for interval data

Interval data “consists partly or entirely of intervals, rather than exclusively of point values.” It is stated to be “NP-hard,” and follows different statistics.

Section 4.8 Confidence intervals, especially the starred note, distinguishes between the frequentist statistics used for measurement uncertainty and interval arithmetic.

It appears you chose your quote carelessly.

However, quoting from above as regards measurement uncertainty: “as x ± t·s, where s is the estimate of the standard deviation σ.”

“the expression 0.254 ± 0.011 denotes a normal distribution N(0.254, 0.011) with mean 0.254 and standard deviation 0.011 which is the model for the uncertainty of the measurement result.”

“the plus-minus notation between the measurement result and its standard uncertainty denotes a probability distribution,”
(my bold)

Those quotes fully establish my point. The standard deviation is ±, and the standard deviation is a probability distribution.

Cavil all you like Bellman. You’re wrong.

Reply to  Pat Frank
August 30, 2023 6:28 am

It appears you chose your quote carelessly.”

It’s what he does ALWAYS. He cherry-picks with absolutely no understanding of context or meaning.

Reply to  Tim Gorman
August 30, 2023 7:15 am

I have copied and saved more than a few of the explanations Pat has taken the time to write and post in this thread.

Can’t say the same for any Bellman screeds.

Reply to  karlomonte
August 30, 2023 9:12 am

I’m heartbroken.

Reply to  Bellman
August 30, 2023 5:49 am

“Science and engineering don’t need to be as rigorous as mathematics, but I still can’t find anything that actually claims that.”

Unfreakingbelievable! Science and engineering impact the LIVES AND WELL-BEING OF HUMANS. Abstract math does not.

Now which one has to be more rigorous? If someone doing math on a blackboard decides to ignore the negative root, what is the impact on the lives and well-being of others? If someone is calculating shear forces on a bridge beam, what is the impact on the lives and well-being of others?

Being rigorous means taking *everything* into account, allowing for the impact of everything, understanding the real world including the limitations (i.e. uncertainty) of living in the real world.

Being rigorous is *NOT* ignoring part of the real world so the math is easier.

Reply to  Bellman
August 31, 2023 6:50 am

Do you accept that the uncertainty can be asymmetric?

If so, then how do you express it if σ is always a positive number?

Reply to  Pat Frank
August 29, 2023 12:36 pm

It is a typical troll thing to do.

Reply to  Pat Frank
August 29, 2023 3:56 pm

Bellman is just playing cozy with the alternative meaning. He is arguing the statistical definition within a scientific context.
He is not, “trying to correct a small and mostly irrelevant mistake Pat made.” He is being disingenuous.
Operating in a scientific context but arguing from a statistical stance.

I see things differently. I presented what was my understanding of the correct mathematical description of the square root symbol, standard deviation and uncertainty – that they were all positive. I checked my understanding, and have quoted many online sources that agree with me.

You say there is an alternative definition used by scientists and engineers. This wouldn’t surprise me, but so far you have not posted a single source confirming this alternative definition.

Meanwhile, I spend an entire day responding to dozens of angry and offensive comments – none of which mention there is an alternative, all just saying I’m wrong.

And yet at the end of all this, I’m the one accused of being disingenuous and a troll. I’m the one trying to distract from the things I’d rather be talking about.

Most papers on the subject, including JCGM GUM, define SD from statistics as the positive root, but their worked examples and problems are written ±u(y).

Again, I think the confusion is that you and others here are interpreting the ± as applying to the value of the uncertainty, meaning the uncertainty can be either positive or negative, rather than seeing the ± as a combination of the operations addition and subtraction.

The usage contradicts the stated definition.

It doesn’t if you follow what I’m saying. And the alternative is you are claiming the GUM and most papers are contradicting themselves.

But one can quote the definition of SD from those statistical papers as authoritative, even though it is functionally wrong in science.

And there you go again. You say there may be alternative definitions, but then say one is “wrong” in science. Why do you think so many papers are using the “wrong” definition?

In science and engineering, the negative wing of a normal distribution represents a positive probability.

Again, nobody claims otherwise.

In science and engineering the standard deviation is defined as both roots of the variance — positive and negative — because each represents a positive physical probability.”

It does not “represent” a probability. It represents the average absolute deviation, how far each point is from the mean. You can use it to determine probabilities of intervals, and those intervals can include values less than the mean.

Uncertainty‘ in science is defined differently than uncertainty in statistics.

Then again, you need to provide a reference to the scientific definition of uncertainty. Because the one we’ve been using for measurement uncertainty is statistical.

Reply to  Bellman
August 29, 2023 8:10 pm

“Meanwhile, I spend an entire day responding to dozens of angry and offensive comments – none of which mention there is an alternative, all just saying I’m wrong.”

Apart from any offensiveness, the people telling you that you’re wrong are professionals in the field of engineering or science. Why should not their disagreement and explanations be considered authoritative?

Tim and Jim Gorman, for example, are professional engineers. They are fit to review manuscripts.

I have reviewed many manuscripts, have a good publication record, and have indicated how measurement uncertainty is reckoned in physical methods chemistry (and chemical physics).

Why are not those explanations as credible to you as published citations? The level of expertise is as great.

Your rejectionist take on the explanations provided here amounts to your having either tacitly decided incompetence, or implicitly supposed blindly partisan lies.

Perhaps that strong whiff of derogation is why some here have gotten tart of expression with you.

“Because the one we’ve been using for measurement uncertainty is statistical.”

But modified to the needs of physical science, where mathematical Platonism rarely (if ever) applies.

Reply to  Pat Frank
August 30, 2023 5:33 am

bellman is doing his usual sophistry trying to justify to himself his assertion.

Reply to  Bellman
August 28, 2023 6:32 am

LoopholeMan: “Yes, its true, I have absolute no clues about which I yammer.”

Reply to  karlomonte
August 28, 2023 7:39 am

Any particular point you disagree with, or are you just after more attention?

Reply to  Bellman
August 28, 2023 7:54 am

You think I call out your BS to get attention?

Your dementia is getting worse, LoopholeMan.

Reply to  karlomonte
August 28, 2023 8:18 am

You still won’t answer the question, but feel you have to add another offensive insult. So, yes, I think you are attention seeking.

And, yes, I know I shouldn’t be giving you the attention you crave, but sometimes it’s difficult to resist.

Reply to  Bellman
August 28, 2023 9:18 am

You are in no position to demand answers to anything, LoopholeMan.

As for attention, see above. Your short-term memory is deficient, another sign of dementia.

Reply to  Bellman
August 28, 2023 11:48 am

How much do you pay your fanbois to give you upvotes?

Reply to  karlomonte
August 28, 2023 1:29 pm

Grow up.

Reply to  Bellman
August 28, 2023 3:00 pm

Who needs to grow up? A rational, mature adult would *study* a subject before trying to lecture others with more experience in the subject. All you do is cherry-pick things you think support your preconceptions. You’ve admitted you haven’t studied any metrology texts and worked out *all* the examples.

Reply to  Tim Gorman
August 28, 2023 4:53 pm

Who needs to grow up?

The person claiming I would pay my “fanbois” to upvote my comments. The person so obsessed with status he whines every time he gets a downvote.

Reply to  Bellman
August 28, 2023 5:07 pm

Poor LoopholeMan is sarcasm-deficient.

Reply to  Bellman
August 28, 2023 3:14 pm

Stepping on others to elevate yourself (i.e. you), a very religious activity.

Reply to  karlomonte
August 28, 2023 4:43 pm

Stop whining. As always you can dish it out, but start blubbing the instant someone hits back.

Reply to  Bellman
August 28, 2023 5:06 pm

Heh, even your “insults” are hapless — Christopher is right (again).

Reply to  Bellman
August 28, 2023 9:29 pm

I disagree with pretty much 100% of what you type into comments.

Happy now?

Reply to  karlomonte
August 29, 2023 4:44 am

Anyone can say they disagree with pretty much everything someone says. Unless you explain why you disagree I’m free to assume you are just disagreeable.

Reply to  Bellman
August 29, 2023 7:09 am

Why bother? You reject 100% of everything anyone tries to clue you with.

Proven time and time again.

Reply to  Bellman
August 30, 2023 3:49 am

If you’ve never seen it written +/- then you’ve never once looked in an engineering book. I gave you lots of examples where the + and the – must both be considered when solving equations. As usual you just blew them off so you wouldn’t have to think about what you are saying.

Reply to  Bellman
August 27, 2023 9:53 am

Another lecture from Dr. Unlearned.

Reply to  Tim Gorman
August 26, 2023 12:47 pm

Perfect!

Reply to  Pat Frank
August 26, 2023 4:36 am

For them it’s religious dogma. There’s no refuting religious dogma to true believers.

Reply to  Pat Frank
August 26, 2023 5:33 am

Nine examples here.

I assume you mean the part where you say “the assumption of random error is constantly employed in published works.”

But an assumption used in a model is not the same thing as a belief. You might assume as a simplification that all errors are random, but that doesn’t mean you believe there can be no systematic errors, or that all errors are Gaussian.

And I’m not sure all your quotes are actually assuming there are no non-random errors. E.g.

this suggests that the influence of errors is not dominant, perhaps because many of the errors in recording temperatures are random in nature

“many”, not all.

Reply to  Bellman
August 26, 2023 7:27 am

Hansen, et al., 1999: Show me one paper where he brings systematic measurement error into his uncertainty estimate.

In fact, show me an example of Hansen ever bringing any measurement error into his uncertainty estimate.

The assumption of random measurement error is universally present because his work estimates uncertainty without any reference to instrumental measurement error at all.

Reply to  Bellman
August 26, 2023 8:01 am

“But an assumption used in a model is not the same thing as a belief. You might assume as a simplification that all errors are random, but that doesn’t mean you believe there can be no systematic errors, or that all errors are Gaussian.”

ROFL!!

Simplifications simply can’t be done to eliminate the inconvenient.

If you believe that there can be systematic bias then you just can’t assume them away.

Nor can you assume that all errors are Gaussian just to make them disappear!

You have to *justify* simplification assumptions.

Reply to  Pat Frank
August 26, 2023 5:36 am

I’ve never said that. You and especially bdgwx are unable to distinguish the uncertainty of the mean from the mean of uncertainty. And even worse, never figured out why the latter is relevant in LiG Met.

No, I do distinguish between the two. That’s why I said you were effectively using the mean of the uncertainty rather than the uncertainty of the mean. Your final sentence confirms that.

Reply to  Bellman
August 26, 2023 7:38 am

This description of uncertainty is wrong.

But in my view, it doesn’t matter how you define uncertainty – either as the deviation of errors around the true value, or as an interval that characterizes the dispersion of the values that could reasonably be attributed to the measurand – you are still making a claim about how much difference there is likely to be between any measurement and the measurand.

You’re describing calibration uncertainty bounds as though they describe an error distribution. They do not. They describe the range in which knowledge is zero.

Reply to  Pat Frank
August 26, 2023 10:32 am

Which description do you think is wrong? Could you point me to the definition you would prefer to use if you don’t agree with the GUM definition.

Reply to  Bellman
August 26, 2023 11:28 am

Can you not read and understand English?

“You’re describing calibration uncertainty bounds as though they describe an error distribution. They do not. They describe the range in which knowledge is zero.” — Pat Frank

Reply to  karlomonte
August 26, 2023 1:37 pm

That’s not a definition.

If you say this year’s average anomaly was 0.5 ± 2.0°C, and that this is a 95% interval, what does it mean to say we have zero knowledge between -1.5 and 2.5?

Now you bring it up, what Frank says is

The 2σ = ±1.94 °C uncertainty does not indicate a range of possible temperatures but, rather, the range of ignorance over which no information is available. That is, the physically correct mean anomaly may be anywhere within that range.

Reply to  Bellman
August 26, 2023 1:48 pm

(Sorry, pressed post too soon and can’t edit it.)

So, either it’s an interval that does not indicate a range of possible temperatures, or it’s an interval within which the true value may be anywhere. I’m not entirely sure what the difference is.

However, as it’s also a 2σ range, that implies there is a standard deviation of the uncertainty, despite there being zero knowledge of the distribution, and there is a small chance that the physically correct mean anomaly might be outside that range. So why is the area within that range a range of ignorance, but the area outside not part of the ignorance? What do we know about values larger than 1.94 that we don’t know about smaller values?

Reply to  Bellman
August 26, 2023 4:31 pm

You’ve been shown and told the answer to this question many, many, many times, once more is a fool’s errand.

Pass.

Reply to  karlomonte
August 26, 2023 4:38 pm

So you don’t know either.

Reply to  Bellman
August 26, 2023 6:50 pm

Of course I know the answer, it is pretty basic.

Reply to  karlomonte
August 27, 2023 3:59 am

But the answer goes to a school in Canada.

Reply to  Bellman
August 27, 2023 7:45 am

Only inside your skull.

Reply to  Bellman
August 28, 2023 11:34 am

You should know. I’ve given you the quote from the GUM twice in this thread:

“E.5.1 The focus of this Guide is on the measurement result and its evaluated uncertainty rather than on the unknowable quantities “true” value and error (see Annex D). By taking the operational views that the result of a measurement is simply the value attributed to the measurand and that the uncertainty of that result is a measure of the dispersion of the values that could reasonably be attributed to the measurand, this Guide in effect uncouples the often confusing connection between uncertainty and the unknowable quantities “true” value and error.” (bolding mine, tpg)

You keep mixing up “stated value +/- uncertainty” with “true value and error”.

The uncertainty interval is *NOT* the range in which the true value could lie. It is an interval which gives a judgement on how large the “unknown” area is.

There is *no* guarantee that the true value will lie inside the uncertainty interval, *any* uncertainty interval. When it comes down to it the uncertainty interval is an informed judgement, nothing more. Even Type A uncertainty intervals are an estimate because absolute uncertainty is simply not amenable to statistical analysis. This is because it is impossible to know and determine *all* uncertainty factors.

Reply to  Tim Gorman
August 28, 2023 11:52 am

Twice in this thread, and multiple times over the past few years!

Maybe WD-40 will help.

Reply to  Tim Gorman
August 28, 2023 1:27 pm

Which has nothing to do with what Patrick Frank is describing. He specifically says he doesn’t agree with the GUM definition, and instead talks about ranges of ignorance.

It is an interval which gives a judgement on how large the “unknown” area is.

Which gets back to the question I’m asking – if the range is both a 2σ range and a range of ignorance, what does that say about values outside that range? I’d also ask what a 1σ range would mean.

There is *no* guarantee that the true value will lie inside the uncertainty interval, *any* uncertainty interval.

Which makes the idea of a range of ignorance pointless. You know nothing about what’s inside it or outside it. The zone of ignorance is infinite.

Reply to  Bellman
August 28, 2023 2:57 pm

If the 2σ for one thing is larger than the 2σ for something else then which one has the largest uncertainty?

You *STILL* haven’t figured out what uncertainty is for. You REFUSE to study Taylor, Bevington, or Possolo for understanding. You’ve even been given quotes of what they say.

Bevington: “It is reasonable to expect that the most reliable results we can calculate from a given set of data will be those for which the estimated errors are the smallest. Thus our development of techniques of error analysis will help to determine the optimum estimates of parameters to describe the data. It must be noted, however, that even our best efforts will yield only estimates of the quantities investigated.” (bolding mine, tpg. italics from the text.)

Uncertainty intervals are estimates made to inform others of what they might see when they make the same measurement. It’s not meant to encompass every possible value someone might get. As all the authors state you don’t want to overstate the uncertainty interval and you don’t want to underestimate it either.

When are you actually going to give up cherry-picking and STUDY some of these authors, including working out ALL the examples in their textbooks? The answers *are* in the back of both Taylor and Bevington.

Reply to  Tim Gorman
August 28, 2023 5:20 pm

“If the 2σ for one thing is larger than the 2σ for something else then which one has the largest uncertainty?”

The first. But that depends on having a definition of uncertainty that represents a probability distribution. Otherwise the σ is meaningless.

You quote Bevington who’s talking about errors, but then yell that uncertainty has nothing to do with error.

Uncertainty intervals are estimates made to inform others of what they might see when they make the same measurement. It’s not meant to encompass every possible value someone might get.”

Then what does the σ mean? What does a 95% ignorance range mean, if it doesn’t mean there’s a 5% chance that a measurement might be outside the range?

Reply to  Bellman
August 29, 2023 4:26 am

The first. But that depends on having a definition of uncertainty that represents a probability distribution. Otherwise the σ is meaningless.”

Uncertainty does *NOT* define a probability except in one, specific case. That’s when you have multiple measurements of the same thing under the same environmental conditions using the same device with no systematic bias and a Gaussian (or at least symmetrical) distribution of values.

That simply doesn’t apply when the subject at hand is global temperature and climate models. For all other cases than the one above σ *is* meaningless. That’s what we’ve been trying to tell you! σ is not a measure of the dispersion of values except when the conditions above exist. It’s why the GUM is based on the scenario above.

It’s why multiple single measurements of different things doesn’t offer a “true value”. Multiple single measurements of individual breeds of horses (or dogs or cats or deer or …) doesn’t give you a true value for the heights of “horses”. The uncertainty estimate, i.e. the expectation of where the next measurement might be, is LARGE. And it grows with each addition of a single measurement of the next breed!

You live in “statistical world”. In that world everything must have some kind of probability distribution. That is *NOT* the real world. The GUM definition only applies when you have the conditions listed above. For anything else the uncertainty interval is nothing more than a judgement – meant to inform others of what they would probably see when repeating the same experiment. In such a case the true value has a probability of 1 of being the true value and all other values have a probability of 0 of being the true value. The issue is that you don’t know which value is the true value – it is UNKNOWN.

Think, THINK about it for just one minute! If you know the probability distribution of possible values then you can make a pretty good estimate of what the true value is – there is no “unknown” factor. That’s why in the scenario above you can make a pretty good estimate of the true value. But you can’t just ASSUME the conditions above are met. You have to JUSTIFY that assumption – and you *never* justify it. You just ASSUME it.

And so does climate science.

Reply to  Tim Gorman
August 29, 2023 5:48 am

Uncertainty does *NOT* define a probability except in one, specific case.

Uncertainty is described by an assumed probability distribution – at least it is in the GUM and all the other metrology sources I’ve seen.

I think what you are doing is getting confused by the notion of a Type A uncertainty. That’s what you are describing: measure the same thing multiple times with the same instrument and take the standard deviation of the measurements as an estimate of the instrument’s uncertainty.

That’s not the only way of estimating the instrument’s uncertainty – i.e. Type B uncertainties.

But regardless of how the uncertainty is determined, it is then possible to estimate a combined uncertainty for any measurement that combines multiple measurements. You don’t need to measure each input multiple times if you already have an assumed uncertainty for each measurement. This is what Pat Frank is trying to do in his paper, use an assumed uncertainty for each instrument reading to produce a combined global anomaly uncertainty.

It’s why the GUM is based on the scenario above.

The GUM specifically says Type B uncertainties can be as good as Type A. And the general equations have no requirement that uncertainties be Gaussian or even symmetrical. If there was, I’m sure they’d mention it and you would be able to quote it. The main assumption is that the function is linear, but given the size of the uncertainties it isn’t usually that important. There are additional equations given if there is significant non-linearity.

It’s why multiple single measurements of different things doesn’t offer a “true value”.

Get with the program – “true value” is not used in the GUM. But if it were it’s difficult to see why a single measurand has a true value but a combined one doesn’t.

Multiple single measurements of individual breeds of horses (or dogs or cats or deer or …) doesn’t give you a true value for the heights of “horses”.

I can’t help you with your horse fetish. But regardless of whether you take a realist or instrumentalist view, it’s possible to define the concept of the average height of a horse.

The uncertainty estimate, i.e. the expectation of where the next measurement might be, is LARGE.

That is not the uncertainty I’m talking about. It’s the uncertainty of the mean, not the average uncertainty. The point of the uncertainty of the mean is not to see what range of values the next measurement might take (for that you want the standard deviation), the point is to tell you how much confidence you have that your estimated mean is close to the actual mean – and in particular to test how likely it is that two different samples come from different populations.
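In code, the two quantities being argued about look like this. A minimal Python sketch (the readings are invented for illustration, not anyone's actual data):

```python
import statistics

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]   # invented repeat readings

s = statistics.stdev(readings)      # sample SD: spread of individual values
sem = s / len(readings) ** 0.5      # standard error: how well the mean is pinned down
# s stays roughly constant as n grows; sem shrinks like 1/sqrt(n).
```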

“You live in “statistical world”.”

Good. Far too many live in “anecdotal world”.

In that world everything must have some kind of probability distribution.

Makes sense. Or would you prefer to live in “ignorance world”.

That is *NOT* the real world.

It may not be the one you live in – but most people live in a world of probabilities whether they know it or not.

In such a case the true value has a probability of 1 of being the true value and all other values have a probability of 0 of being the true value.

Which is why your frequentist has to talk about likelihood rather than probability. But that’s not the only type of probability, as I’m sure you know.

The issue is that you don’t know which value is the true value – it is UNKNOWN.

Hence uncertain, but as always you equate “unknown” with “know nothing”.

Think, THINK about it for just one minute!

I’ve been thinking about it for years, but I still wouldn’t claim to know that much. Maybe you should try it.

If you know the probability distribution of possible values then you can make a pretty good estimate of what the true value is – there is no “unknown” factor.

How? If the probability distribution is large, it’s difficult to have much of a clue about what the true mean is. And you never know what it is exactly – the best you can do is narrow the confidence interval.

But you can’t just ASSUME the conditions above are met.

You still don’t understand what “assume” means. There will always be assumptions, the question is how good are they and how much difference would different assumptions make.

And so does climate science.

Would you like to go through Frank’s paper and point out all the assumptions he makes?

Reply to  Bellman
August 29, 2023 7:12 am

The LoopholeMan posts another bizarre lecture.

but I still wouldn’t claim to know that much.

Oh this is a lie, you know EVERYTHING.

Reply to  Bellman
August 30, 2023 4:22 am

Uncertainty is described by an assumed probability distribution – at least it is in the GUM and all the other metrology sources I’ve seen.”

Only when one specific case is involved. It’s when multiple measurements are made of the same thing under the same environment using the same device where systematic uncertainty has been made negligible.

Your understanding stems from never actually studying any of the metrology sources you’ve looked at. It’s why Taylor speaks of *partial* cancellation of uncertainty and adding uncertainty in quadrature. That simply would not be required if all measurement uncertainty were random, Gaussian, and cancelled. If that were true then the uncertainty would be found in the spread of the stated values only. And that is what Taylor, Bevington, and Possolo all do. Possolo even does it in TN1900.

When you speak of metrology sources it’s apparent you’ve never even studied TN1900. Not only is the breadth of your reading limited, it’s obvious you never studied any of the sources for understanding!

I think what you are doing is getting confused by the notion of a type A uncertainty.”

Nope. Type A uncertainty is *still* an estimate of uncertainty. Type A is based on statistical analysis of repeated observations. It is their variance and standard deviation. It assumes NO systematic uncertainty, meaning the measurements have to be made in a controlled environment using calibrated equipment – i.e. in a standards lab of some sort. And the measurements have to be of the SAME THING.

Type B uncertainty is an ESTIMATED variance based on available knowledge – such as the knowledge of existing systematic uncertainty or the fact that the measurements may be of different things using different devices under different environmental conditions – i.e. NOT of the variation in just the stated values while ignoring the propagation of the measurement uncertainties of the individual elements.

You simply will not accept the fact that all of the referenced experts in metrology tell you that uncertainty does *NOT* reduce by sqrt(N) unless you have multiple measurements of the same thing under the same environmental conditions.

“You don’t need to measure each input multiple times if you already have an assumed uncertainty for each measurement.”

The problem is that sqrt(N) *ONLY* applies when the assumed uncertainty for each measurement IS THE SAME. I gave you the excerpt from Taylor on this. As usual, you just blew it off! If you have different uncertainties or if you have systematic uncertainty then Taylor’s Chapter 3 propagation rule applies – NO DIVISION BY SQRT(N).
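Taylor-style quadrature addition, and the equal-uncertainty special case in which the sqrt(N) division does apply to an average, can be sketched as follows (the example values are mine, for illustration only):

```python
import math

def combine_in_quadrature(uncertainties):
    """Root-sum-square combination of independent uncertainties
    (Taylor's rule for a sum of measured quantities)."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Four readings, each with the SAME uncertainty u = 0.5:
u_sum = combine_in_quadrature([0.5, 0.5, 0.5, 0.5])   # sqrt(4) * 0.5 = 1.0
u_avg = u_sum / 4                                     # 0.5 / sqrt(4) = 0.25
# With unequal uncertainties the sqrt(N) shortcut no longer holds;
# only the full quadrature sum applies.
```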

“This is what Pat Frank is trying to do in his paper, use an assumed uncertainty for each instrument reading to produce a combined global anomaly uncertainty.”

You *still* haven’t figured out what Pat did. Pat did *NOT* combine multiple measurements of different things. He propagated an average annual uncertainty through a series of successive iterations using the output of one iteration as the input for the next iteration.

You simply can’t read ANYTHING for meaning, can you?

And the general equations have no requirement that uncertainties be Gaussian or even symmetrical”

Of course that is what they assume! When they speak of multiple measurements of the same thing under the same environment it is SPECIFICALLY what they are assuming.

Your stubborn refusal to accept this quite obvious fact just makes you out to be a cult adherent. It’s religious dogma with you.

“Hence uncertain, but as always you equate “unknown” with “know nothing”.”

You *still* haven’t figured out what uncertainty *is*. Even after having it quoted to you from Taylor and Bevington. It’s an estimate of what someone else performing the same measurement can expect as a result. If the uncertainty had a probability distribution then different values would be more likely to be seen – but that is simply not the case where you have different things.

You are hopeless. All you are doing is spouting religious dogma which you refuse to question.

Reply to  Tim Gorman
August 29, 2023 7:10 am

“The first. But that depends on having a definition of uncertainty that represents a probability distribution. Otherwise the σ is meaningless.”

Uncertainty does *NOT* define a probability except in one, specific case.

He will never acknowledge this.

Reply to  karlomonte
August 30, 2023 6:07 am

bellman is stuck in statistical world.

Dispersion has at its root “disperse”. Disperse is to scatter irregularly in many directions.

The phrase “the uncertainty of that result is a measure of the dispersion of the values that could reasonably be attributed to the measurand” does *NOT* mean that the dispersion creates a probability distribution that can be easily analyzed using standard statistics descriptors, especially not one that is Gaussian or symmetric.

In bellman’s statistical world the term dispersion is a synonym for standard deviation. He forgets that it also includes the inter-quartile range, which is what must be used for data where the average and standard deviation do not apply, i.e. for a non-Gaussian or asymmetrical distribution. Standard deviation is pretty much useless and meaningless for a multi-modal distribution – i.e. a distribution that can result from single measurements of different things combined into one data set.

It’s why he has to dismiss the situation where you may have multiple species of horse in a corral and you try to discern something from their heights using average and standard deviation. It doesn’t fit into his “statistical world” so he just blanks it out of his mind.
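The corral example can be made concrete. In this Python sketch (heights invented for illustration) the pooled mean sits near no actual animal, and the SD mostly measures the gap between the two modes:

```python
import statistics

ponies = [110, 112, 114, 111, 113]   # cm, invented
drafts = [170, 172, 174, 171, 173]   # cm, invented
pooled = ponies + drafts             # bimodal data set

mean = statistics.mean(pooled)       # 142 cm: near no individual animal
sd = statistics.stdev(pooled)        # ~31.7 cm, dominated by the gap between modes
q1, q2, q3 = statistics.quantiles(pooled, n=4)
iqr = q3 - q1                        # inter-quartile range, an alternative dispersion measure
```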

Reply to  Tim Gorman
August 30, 2023 7:03 am

He has in effect redefined “uncertainty” to mean what he mistakenly thinks it should mean, and no one can disabuse him of these notions.

Reply to  Bellman
August 29, 2023 8:48 am

From the GUM

“””””3.3.5 The estimated variance u2 characterizing an uncertainty component obtained from a Type A evaluation is calculated from series of repeated observations and is the familiar statistically estimated variance s2 (see 4.2 ). The estimated standard deviation (C.2.12 , C.2.21 , C.3.3 ) u, the positive square root of u2, is thus u = s and for convenience is sometimes called a Type A standard uncertainty. For an uncertainty component obtained from a Type B evaluation, the estimated variance u2 is evaluated using available knowledge (see 4.3 ), and the estimated standard deviation u is sometimes called a Type B standard uncertainty. “””””

Like it or not, this section defines “u² = s²” and “u = s”. Thus “s” is the Type A standard uncertainty. This is not the standard uncertainty of the mean.

4.2.2 The individual observations qk differ in value because of random variations in the influence quantities, or random effects (see 3.2.2). The experimental variance of the observations, which estimates the variance σ² of the probability distribution of q, is given by s²(qₖ). This estimate of variance and its positive square root s(qₖ), termed the experimental standard deviation (B.2.17), characterize the variability of the observed values qₖ, or more specifically, their dispersion about their mean q.

Look carefully at Section 4.2.1, that indicates the requirement of measurements under the “same conditions of measurement” as defined in Section B2.15.

This requirement pretty much squashes the use of this whole section for computing the uncertainty of an average of temperatures from different stations.

— the same measurement procedure
— the same observer
— the same measuring instrument, used under the same conditions
— the same location
— repetition over a short period of time.

It is instructive that NIST TN 1900 specifies that these requirements are met.
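The GUM relation quoted above (u = s for a Type A evaluation) is easy to sketch. A minimal example, assuming hypothetical repeated readings of a single measurand under the same conditions, as GUM 4.2.1 / B.2.15 require:

```python
import statistics

# Type A evaluation per GUM 4.2 / 3.3.5: repeated observations of the SAME
# measurand under the same conditions. The readings are purely hypothetical.
readings = [20.12, 20.15, 20.11, 20.14, 20.13, 20.12]  # degrees C

q_bar = statistics.fmean(readings)   # best estimate of the measurand
s = statistics.stdev(readings)       # experimental standard deviation s(qk)
u = s                                # GUM 3.3.5: Type A standard uncertainty u = s

print(round(q_bar, 3), round(u, 4))
```

GUM 4.2.3 separately defines the experimental standard deviation of the mean as s(q̄) = s(qk)/√n, which is the distinction being argued over in this thread.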

Reply to  Jim Gorman
August 29, 2023 11:47 am

Missing the point, as we were talking about a different, secret definition from that given by the GUM, one that does not rely on probability distributions.

Like it or not, this section defines “u² = s²” and “u = s”. Thus “s” is the Type A standard uncertainty.

Yes. That’s how it is defined using a distribution.

This is not the standard uncertainty of the mean.

Unless s is the standard deviation / error of the mean.

“Look carefully at Section 4.2.1, that indicates the requirement of measurements under the “same conditions of measurement” as defined in Section B2.15.”

Yes, that’s how you do a type A evaluation. It is not describing how to calculate a combined uncertainty, let alone how to find the standard error of the mean.

“This requirement pretty much squashes the use of this whole section for computing the uncertainty of an average of temperatures from different stations.”

Which is why you wouldn’t use it. As I said it’s about doing a type A evaluation for a single instrument.

Reply to  Bellman
August 29, 2023 1:14 pm

“””””Which is why you wouldn’t use it. “””””

Then explain how to determine the uncertainty of a distribution of temperatures that do not meet the requirement of repeatable conditions.

Show which GUM Section defines a procedure.

Reply to  Jim Gorman
August 30, 2023 9:16 am

You don’t *really* expect an answer do you?

Reply to  Tim Gorman
August 30, 2023 12:13 pm

You people are ridiculous. You spend all day firing hundreds of stupid questions at me, in the hope that one won’t be answered – at which point you stand round patting yourselves on the back saying – “see I knew he couldn’t answer it”.

Reply to  Bellman
August 30, 2023 12:36 pm

Not just couldn’t, but *WOULDN’T*. It’s obvious that you are reading this little sub-thread so it’s not like you can say you missed it.

So either you can’t answer Jim’s question or you won’t answer it.

You know that if you answer it your entire edifice on measurement uncertainty and reducing it by averaging the measurements of different things collapses into dust.

No one expected you to answer. Or bdgwx. Or Nick. You are out there on your own, dangling in the wind. And you can’t admit, even to yourself, that what we’ve been telling you is correct.

You’ll be back soon enough telling us that measurement environment and repeatability of measurements don’t apply in statistical world. We all know it.

Reply to  Tim Gorman
August 30, 2023 2:17 pm

Yet you never notice how few questions your gang answers.

So what was this all important question?

Then explain how to determine the uncertainty of a distribution of temperatures that do not meet the requirement of repeatable conditions.

Show which GUM Section defines a procedure.

To which my answer is that the question is too vague to give a complete answer.

But if you have a sample of thermometers randomly placed over an area of interest, the obvious and time-honoured way would be to take the standard deviation of the readings and divide by the square root of the sample size. The GUM won’t tell you that because it’s only interested in measurement uncertainty.

If you were talking about a bigger, more realistic issue, such as say an historic global anomaly reconstruction – then that’s a much more difficult task, and not one I would like to give an answer to. For a start you need to look at how you model the temperatures in the first place. They are not randomly situated, so you need to use some method or other to weight the readings by area. Then you need to handle quality issues and adjustments for various factors. All of these can use different techniques and all produce different types of uncertainties. The most likely way of dealing with some of these uncertainties would involve techniques such as cross-validation or Monte Carlo models, which is far beyond anything I’d like to describe, and nothing that is mentioned in the GUM.

Reply to  Bellman
August 30, 2023 3:12 pm

“”time honoured way would be to take the standard deviation of the readings and divide by the square root of the sample size. “”

Tradition!

What a reason!

Here are some problems.

1) Basing the expenditure of trillions of dollars on an anomaly that hides so much information.
2) Anomalies don’t tell anyone where anything is occurring.
3) Anomalies don’t indicate if Tmax or Tmin is increasing.
4) Anomaly averages are not a temperature, they are a ΔT, a difference. Is +2 at Antarctica good while +2 at Washington is bad, and why?
5) Trending anomalies based on various baseline temperatures is a joke.
6) Averaging Washington, D.C. with Santa Clara, California tells you nothing useful. Is Concordia, Kansas, which is halfway between the two, assured of being at the average?

Data is being collected in minutes if not seconds. There are much better mathematical algorithms that can use this data to make better decisions.

Climate science would say forget electron microscopes, atomic time measuring, and laser distances. Traditional methods can tell us what we need to know!

Your insistence that we can know temperature to 1/1000th of a degree with these traditional methods is that of a Luddite.

Reply to  Bellman
August 30, 2023 3:14 pm

“To which my answer is the question is too vague to give a complete answer.”

You have two standard copouts when you know you’ve been caught out:

  1. the question is ill-posed or too vague to answer
  2. you were talking about something else.

“Then explain how to determine the uncertainty of a distribution of temperatures that do not meet the requirement of repeatable conditions.”

This is *NOT* a vague question. It is very specific, to the point, and a perfectly legitimate thing to ask. It has a very specific answer.

“Show which GUM Section defines a procedure.”

This is *not* a vague question. It is very specific, to the point, and a perfectly legitimate thing to ask – and has a very specific answer.

You are as predictable as the sunrise.

Reply to  Tim Gorman
August 30, 2023 3:53 pm

And he STILL doesn’t understand the purpose of the GUM.

Never will.

Reply to  Bellman
August 30, 2023 4:53 pm

“But if you have a sample of thermometers randomly placed over an area of interest, the obvious and time honoured way would be to take the standard deviation of the readings and divide by the square root of the sample size.”

This time-honored way ONLY applies if the environments across the area of interest are the *same*. Otherwise the temperature comparisons are meaningless. If you would *actually* study the discipline you would find that Hubbard and Lin showed in 2002 that you can’t even apply consistent correction factors across an area because of microclimate differences in even closely located devices. If you can’t apply consistent correction factors then how in Pete’s name do you take an average and standard deviation of the temperatures and expect them to tell you anything?

You are back to measuring the heights of a dozen different breeds of horses in a corral and trying to find their average height and the standard deviation of the heights. As if that information will tell you *anything*! You get a multi-modal distribution and the average and standard deviation are meaningless. Taking the average and standard deviation of different temperature measurements from a “corral” (an area) of different “breeds” (measurement stations) won’t tell you anything useful.

“For a start you need to look at how you model the temperatures in the first place. They are not randomly situated so you need to use some method or other to weight the readings by area.”

More crap. If you can’t do it across a small area because of microclimate differences then you can’t do it across large areas either. And anomalies don’t help because those also depend on the microclimate at individual measurement stations.

Pikes Peak and Colorado Springs are closely located in the same area. How would you weight their temperatures based on area in an average?

This is all a distraction anyway.

Nothing you posted here has anything to do with how you calculate uncertainty of measurements taken under different environments. This even applies to the SAME instrument taking a reading at Tmax where the pressure, humidity, and wind are different than those at Tmin. There is no “area” to worry about. There is no averaging of Tmax to be done across an area. There is no averaging of Tmin to be done across an area.

How do you calculate the uncertainty of the readings taken by that instrument for just those two readings? How do you calculate the uncertainty of multiple readings taken at 2 second intervals throughout the day and are then averaged? None of the readings are repeatable – which fails the restrictions in the GUM.

Answer the question. Don’t equivocate. Don’t deflect. Don’t dissemble. ANSWER.

Reply to  Bellman
August 26, 2023 12:32 pm

Which description do you think is wrong?

This, “dispersion of the values that could reasonably be attributed to the measurand” does not describe what the uncertainties calculated using eqns. 4, 5 & 6 mean.

And it was towards those equations where you were directing your comment.

Reply to  Pat Frank
August 26, 2023 1:23 pm

So what definition of uncertainty are you using?

The quoted definition was straight from the GUM

parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

Reply to  Bellman
August 26, 2023 7:15 pm

Your view is wrong because those uncertainties are not associated with measurements. They’re the intrinsic resolution limit of 1C/division thermometers.

The GUM definition is irrelevant to the case.

See, Bellman, it’s as I noted above. You do not understand the logic of the analysis. And then react badly when your naive expectations are violated.

Reply to  Pat Frank
August 27, 2023 4:51 am

You are right – I don’t understand your logic, which is why I’m asking you to explain it.

You say that the instrument uncertainty is down to intrinsic resolution limits, and therefore the GUM definition doesn’t apply. But you don’t justify why that means RMS is the correct method to use.

The GUM lists “finite instrument resolution or discrimination threshold;” as a source of uncertainty and never, as far as I can see, says that this means the definition or any of their methods don’t apply.

Reply to  Bellman
August 27, 2023 5:40 am

Then you’re just going to have to figure it out on your own.

I’ve explained the logic of the analysis repeatedly. Repeatedly +1, you don’t get it. So be it. You don’t get it. If not by now, never.

Study up, Bellman. Take a course or two in Analytical Chemistry and another in Instrumental Methods of Analysis.

Try getting educated. It always helps.

Reply to  Pat Frank
August 27, 2023 7:58 am

He whines if you tell him to RTFM. He thinks everything should be handed to him on a gold platter.

Reply to  karlomonte
August 27, 2023 9:44 am

He whines if you tell him to RTFM.

Oh, I do. It’s one of the worst get-outs in computing. Translation: we’ve written a really bad piece of software, but rather than fix any bugs we’ll just put a footnote on page 276 of the manual saying “don’t press the big button or it will format your hard disk and set fire to your cat” and then claim it’s the user’s fault for not RTFM.

Reply to  Bellman
August 27, 2023 9:57 am

Yes, you do.

As you were informed previously (multiple times), I’m done trying to implant reality into your thick skull.

Go RTFM. Maybe (highly doubtful) you can figure out the answer on yer own.

Reply to  karlomonte
August 27, 2023 10:07 am

Yes, you do.

That’s what I said.

Reply to  Bellman
August 27, 2023 1:02 pm

Go RTFM, and go whine to someone who cares.

Reply to  karlomonte
August 27, 2023 1:30 pm

Someone who cares more than the person who has to make a whining response to my every comment?

Reply to  Bellman
August 27, 2023 1:57 pm

IRONY OVERLOAD!~!~!~!~!~!

Reply to  Pat Frank
August 27, 2023 9:58 am

Then you’re just going to have to figure it out on your own.

Fine then. The way I figure it out is you are wrong.

The “I’ve already explained it to you repeatedly” trope is an excuse worthy of karlo and the gang, not someone with a serious case to prove.

If you have explained it to me, then clearly I missed it. Could you provide a link or repeat the explanation. All I’ve seen is assertions – “it’s an uncertainty intrinsic to the instrument therefore RMS has to be used”. Nothing explaining why that makes sense.

Reply to  Bellman
August 27, 2023 1:01 pm

It is completely obvious that you have no real interest in learning these subjects, otherwise you would take the time to seriously study them.

All you do is look for loopholes you think you can use to keep the significance of these air temperature trendlines alive.

Pat’s work is a direct threat to them (and you), so you throw up noise in laughable attempts to discredit him.

Reply to  karlomonte
August 27, 2023 1:26 pm

Yeh, that must be it. Only possible explanation.

Reply to  Bellman
August 27, 2023 3:14 pm

It’s an uncertainty intrinsic to the instrument therefore RMS has to be used is the explanation that makes sense.

Figure it as you like. You don’t understand, you won’t study. So be it.

Reply to  Pat Frank
August 27, 2023 5:00 pm

Still just saying it is, not why it is.

I am trying to study it. I am trying to find any hint as to why this might be correct, but it would be a lot easier if you could give me a pointer.

Let me set out what I understand and you can ignore it or say I’m wrong or make your own suggestion – it’s up to you.

You say that the uncertainty of a monthly or annual mean can be determined by looking just at the standard deviation of a daily reading (RMS if you prefer). I can think of a couple of reasons why that might be correct:

  1. All the measurements are correlated, with a coefficient of +1. As in Note 1 to equation (16) of the GUM. This for instance would be the case if the intrinsic resolution uncertainty was a systematic error.
  2. You are only interested in the uncertainty of a single daily reading, and not a monthly / annual / global average.

But neither of these options make sense to me, in the context of finding the uncertainty of a global average coming from the uncertainty caused by resolution of the various instruments.

Then my concern is that the paper introduces the concept in a casual manner, with just a single sentence: “This is the minimum confidence interval that must condition any meteorological air temperature, or a mean of air temperatures.” There are numerous references throughout the paper, but not to this concept, as far as I can see. Yet without this, the claim of very large uncertainties coming just from the instrument uncertainty could not be made. The resolution uncertainties would be greatly reduced by virtue of being averaged many times in many instruments.

So given the importance, why wouldn’t you want to justify it in the paper, and when questioned point to the theory or some reference that explains why it must be the case?

Reply to  Bellman
August 27, 2023 6:21 pm

You say that the uncertainty of a monthly or annual mean can be determined by looking just at the standard deviation of a daily reading

No, I don’t. I’ve told you that eqns. 5 & 6 operate on the resolution of the instrument itself. No daily reading or measurement is involved.

Yet, without this the claim of very large uncertainties coming just from the instrument uncertainty could not be made.

See Tables 1 & 7.

The resolution uncertainties would be greatly reduced by virtue of being averaged many times in many instruments.

Resolution limit is a characteristic of the instrument. It’s the minimal pixel size. It does not improve as 1/sqrtN.

Why doesn’t the glass in a thermometer become diamond after multiple readings? Well, because glass is a material limit of the instrument.

So given the importance, why wouldn’t you want to justify it in the paper,…

It is justified in the paper. See Section 3. The explanation is there and documented to the literature.

…when questioned point to the theory or some reference that explains why it must be the case?

See references 99-109. Here’s a NIST circular (pdf). It has many references. Start there and with them.

When writing a paper, one assumes a basic knowledge of the field to be resident in the reading audience.

Reply to  Pat Frank
August 27, 2023 7:36 pm

No, I don’t. I’ve told you that eqns. 5 & 6 operate on the resolution of the instrument itself. No daily reading or measurement is involved.

Sorry. When I said the standard deviation of a daily reading, I meant the standard uncertainty.

Resolution limit is a characteristic of the instrument. It’s the minimal pixel size. It does not improve as 1/sqrtN.

I disagree. The uncertainty caused by resolution is still the result of the error caused by rounding. If temperatures are random any rounding to the nearest unit will be random.

It is justified in the paper. See Section 3.

I’ve already quoted the only justification I could find in section 3 “This is the minimum confidence interval that must condition any meteorological air temperature, or a mean of air temperatures.”

“Here’s a NIST circular (pdf). It has many references.”

Nothing in there about averaging temperatures.

Reply to  Bellman
August 27, 2023 10:40 pm

“The uncertainty caused by resolution is still the result of the error caused by rounding.”

No. The resolution uncertainty defines an interval where the reading is physically meaningless. Physically meaningless does not mean random.

If temperatures are random any rounding to the nearest unit will be random.

No. You’re assuming perfect rounding. Recrudescing Nick Stokes’ mistake of assuming perfect accuracy and infinite precision.

“I’ve already quoted the only justification I could find in section 3.”

Obviously, you don’t know what to look for.

What does this mean: “This 2σ = ±0.11 C represents the resolution (detection) limit—the lowest limit of uncertainty—that can be associated with a temperature measured using a meteorological surface-station mercury LiG Celsius thermometer.”

Or this: “The reported uncertainty associated with this rectangular probability distribution is 1σ = ±0.125/√3 °C = ±0.072 °C.”

Or this: “This [±0.178 °C] is the minimum confidence interval that must condition any meteorological air temperature, or a mean of air temperatures.”

Or this: “However, NBS/NIST calibration circulars published between 1911–1994 tabulated the accuracy for calibrated full-immersion mercury LiG thermometers of 1 C/division to be ±0.1–0.2 °C following correction for all known systematic errors.”

These are intrinsic accuracy measures. They establish the lower limit of meaningful instrumental response. Measured data is good only to the limit of accuracy of the instrument, and can never, ever, exceed that limit.
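The arithmetic in the quoted passages checks out. A minimal sketch, assuming only that the 0.25 °C resolution interval is treated as a rectangular (uniform) distribution of half-width 0.125 °C, per GUM 4.3.7:

```python
import math

# Standard uncertainty of a rectangular (uniform) distribution of
# half-width a is u = a / sqrt(3)  (GUM 4.3.7).
a = 0.125              # half-width of the resolution interval, degrees C
u = a / math.sqrt(3)   # 1-sigma standard uncertainty

print(round(u, 3))     # 0.072, matching the quoted value
```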

I have reviewed many manuscripts. To do so within professional purview and ethics, I had to not only study the manuscript with care, but almost invariably to study the references in order to understand the baseline science on which the work was founded.

You, bdgwx, and bigoilbob all starkly violate those standards of ethics. You purport to review my work without making any effort at study, at understanding, or consultation of the referenced baseline science.

All exemplified by your casual dismissal: “Nothing in there about averaging temperatures.” How would you know?

And the point is instrumental resolution, not temperatures.

You people are deliberate frauds.

Reply to  Pat Frank
August 28, 2023 5:50 am

“The uncertainty caused by resolution is still the result of the error caused by rounding.”

No. The resolution uncertainty defines an interval where the reading is physically meaningless. Physically meaningless does not mean random.

You people are deliberate frauds.

Absolutely, and still they refuse to consider that uncertainty is not error.

Reply to  karlomonte
August 28, 2023 12:34 pm

It’s not even clear they understand the difference between resolution and precision. They are related but are not the same.

Reply to  Tim Gorman
August 28, 2023 3:17 pm

Considering that they toss thermometer resolution into the trash, I’d have to say it is very clear that they don’t.

They don’t even understand the title of the GUM.

Reply to  Pat Frank
August 28, 2023 12:34 pm

“You people are deliberate frauds.”

They don’t realize they are making fools of themselves. Religious fanatics usually don’t.

Reply to  Bellman
August 29, 2023 3:59 am

Exactly what components make up the uncertainty in a measurement of temperature? Until you can list those out you don’t have a clue as to what Pat did.

Reply to  Tim Gorman
August 29, 2023 7:13 am

Bingo.

He doesn’t care because in his alternate universe, the magic of averaging removes all instrumental uncertainty.

Reply to  karlomonte
August 30, 2023 6:10 am

Yep, just ignore that a measurement is “stated value +/- uncertainty”. Throw away the uncertainty and use the variation of the stated value as the uncertainty. It’s a statistical world analysis, not a real world one.

Reply to  Tim Gorman
August 30, 2023 7:09 am

And then to justify his claim that Pat is “wrong”, he pulls out this bizarre story that “Einstein was wrong about many things.” Even if true (which it isn’t), it is nothing but raw sophistry and illogic.

Reply to  karlomonte
August 30, 2023 12:06 pm

Liar. You know full well I pointed out he was wrong because you challenged me to do so.

“Next Bellman will tell that he thinks Einstein is wrong…”

remember?

It had nothing to do with Pat being wrong.

Reply to  Bellman
August 30, 2023 1:19 pm

Heh, and now the backpedal…

Reply to  Bellman
August 26, 2023 8:01 am

I really like it when you try to lecture Pat Frank, its hilarious!

Be sure to call me a “troll” because I call you out on your BS.

Reply to  Bellman
August 26, 2023 8:02 am

You say you distinguish between the two but you never actually DO IT.

Reply to  Tim Gorman
August 26, 2023 10:21 am

0.5 is not the same as 0.05. That’s how I distinguish between the two.

Reply to  Bellman
August 26, 2023 11:05 am

“””””“5. Variance of random variables do not add when adding random variables.”

They absolutely do, and again I have no idea who this strawman is who you keep thinking says otherwise. Your main problem here is you never understand the difference between adding random variables, and adding random variables then dividing by a constant.”””””

It is totally ignored. Anomalies are a good example. The uncertainty is calculated using the variance of the anomaly values.

An anomaly is a subtraction of two random variables. It carries the sum of the variances of the monthly average and the baseline temperature. That sum is a value in the units digit, whereas the variance computed directly from the anomaly values is in the tenths to hundredths digit because the anomalies themselves are small.

Find just one source that shows calculating the variance of an annual anomaly by adding the twelve variances calculated from the sums of monthly and baseline averages. I’ll be happy to see it.

While you are at it find one source that calculates a monthly variance using the recommended method from NIST. Then tell us why climate science blatantly ignores a recommendation from the U.S. agency responsible for measurement techniques.

Reply to  Jim Gorman
August 26, 2023 5:43 pm

Find just one source that shows calculating the variance of an annual anomaly by adding the twelve variances calculated from the sums of monthly and baseline averages.

Once again confusing adding with averaging. An annual anomaly is not the sum of twelve anomalies, it’s the average. The variance would be the sum of twelve variances divided by 12².
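The rule stated here — the variance of an average of twelve independent anomalies is the sum of the twelve variances divided by 12² — can be checked by simulation. A sketch with a purely illustrative per-month variance:

```python
import random
import statistics

# Check: Var(mean of 12 independent values) = (sum of variances) / 12**2.
random.seed(42)
v = 0.25          # assumed per-month anomaly variance (illustrative only)
sigma = v ** 0.5

annual_means = [
    statistics.fmean(random.gauss(0.0, sigma) for _ in range(12))
    for _ in range(50_000)
]

empirical = statistics.variance(annual_means)
predicted = 12 * v / 12**2     # = v / 12
print(round(empirical, 4), round(predicted, 4))
```

The two printed values should agree closely; whether this variance is the right measure of uncertainty for a real anomaly series is exactly what the thread is disputing.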

Then tell us why climate science blatantly ignores a recommendation from the U.S. agency responsible for measurement techniques.

It’s just the standard error of the mean. You keep yelling that this does not represent uncertainty, or that it only works if you are measuring the same temperature – but now you are complaining that climate scientists aren’t using the SEM to calculate the uncertainty of the mean in a single month for a single station.

Reply to  Bellman
August 26, 2023 11:49 am

“””””Correct, variance is not a measure of the uncertainty of an average or any random variable.”””””

Wrong! Variance is a measure of a type of uncertainty. It tells one the width of the spread of data around the mean. It directly affects the standard deviation and the SEM.

Also, there are those mostly forgotten statistical parameters, kurtosis and skewness. These are parameters that define how normal a distribution actually is. Normality is a forgotten assumption for many of the statistical calculations run on a distribution.

Here is one example. What is the mean temperature of the globe (not the anomaly) for a month? What is the variance, kurtosis, and skewness? Do a histogram – is it a normal distribution?

Reply to  Jim Gorman
August 26, 2023 12:48 pm

Wrong! Variance is a measure of a type of uncertainty. It tells one the width of the spread of data around the mean. It directly affects the standard deviation and the SEM.

Which tells me (again) he’s never taken the time to read the GUM.

Do a histogram and is it a normal distribution?

I tried to do this for the UAH data, none of the distributions that could be backed out were anywhere close to normal. The trendologists downvoted me for my trouble, of course, instead of trying to consider the implications.

Reply to  karlomonte
August 27, 2023 7:12 am

You are a heretic. Heretics are *always* wrong in the face of religious believers.

Reply to  Tim Gorman
August 27, 2023 8:00 am

Yep. Pseudoscience at its worst.

Reply to  Jim Gorman
August 26, 2023 4:59 pm

Variance is a measure of a type of uncertainty.

As I said it isn’t a direct measure. It’s pretty meaningless unless you take the square root.

Also, there are those mostly forgotten statistical parameters, kurtosis and skewness. These are parameters that define how normal a distribution actually is. Normality is a forgotten assumption for many of the statistical calculations run on a distribution.

Forgotten by whom?

Do a histogram and is it a normal distribution?

I doubt it.

Reply to  Bellman
August 26, 2023 5:16 pm

“””””Forgotten by whom?”””””

Show some that you have done.

If you think they aren’t normal, why do you think any of the means have any meaning? A symmetric SD interval around the mean of a skewed distribution is also meaningless.

Reply to  Jim Gorman
August 27, 2023 7:13 am

You don’t really expect an answer, do you?

Reply to  Tim Gorman
August 27, 2023 9:14 am

Maybe I should just make the usual excuse and say you are ignorant and wouldn’t understand the answer.

Really though, I don’t have infinite time to answer every question right away, and in this case I’m not even sure what the question is.

Reply to  Jim Gorman
August 27, 2023 9:20 am

Sorry, just what am I being required to do here? What have I done that needs showing? What do I think are not normal?

Your claim that an SD from a skewed distribution is meaningless is just wrong. What you may mean is that you can’t use it in the same way as a normal distribution to state what percentage of values lie within a constant multiple of it, but it still has meaning. For example, it can be used to estimate the standard error of the mean.

Reply to  Bellman
August 27, 2023 9:32 am

Let me test this. I take a random sample of 100 values from a Poisson distribution with mean 4 – so SD = 2. I would expect SEM to be 2 / √100 = 0.2.

I test this by generating 10000 samples each of size 100, and look at the standard deviation of the means. Result:

0.2001332

Not bad, considering the SD of 2 is supposedly meaningless, coming from a skewed distribution.
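For anyone wanting to reproduce the experiment just described, here is a self-contained sketch (standard library only; Knuth’s method stands in for a Poisson sampler):

```python
import math
import random
import statistics

random.seed(1)

def poisson(lam):
    """Knuth's algorithm; adequate for small lam."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# 10,000 samples of size 100 from a skewed Poisson(4) distribution (SD = 2).
means = [statistics.fmean(poisson(4) for _ in range(100)) for _ in range(10_000)]
sem_est = statistics.stdev(means)
print(round(sem_est, 3))   # close to the predicted 2 / sqrt(100) = 0.2
```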

Reply to  Bellman
August 27, 2023 9:58 am

Wow, you can work a calculator.

Impressive.

Reply to  Bellman
August 28, 2023 12:29 pm

What is the use of the standard error of the mean?

Reply to  AlanJ
August 25, 2023 10:26 am

Propagation of uncertainty always starts at the first step of an iterative calculation, Alan. How is it you don’t know that?

Reply to  Pat Frank
August 25, 2023 2:50 pm

Because he’s never done a day’s physical work, apparently. Or he would know that by the time he has incremented a beam several times by adding boards, the total uncertainty is the sum of the uncertainties in each incremental board added to the beam.

He’s never ridden a motorcycle enduro where arrival at a checkpoint is penalized for being early or late – and if your timepiece is running fast or slow then how early or late you arrive at the sequential checkpoints gets worse and worse the further you go! Running over 100 miles at 24 mph the errors can really add up!
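The board-stacking picture can be put in numbers. Whether per-board uncertainties add directly (appropriate for fully correlated errors, e.g. one mis-marked tape used for every cut) or in quadrature (GUM eq. 10, for independent random errors) is exactly what the two sides here dispute; the values below are illustrative only:

```python
import math

# Eight boards, each cut with an assumed uncertainty of +/- 0.125 units.
u_boards = [0.125] * 8

direct_sum = sum(u_boards)                            # fully correlated case
quadrature = math.sqrt(sum(u**2 for u in u_boards))   # independent case

print(direct_sum, round(quadrature, 3))
```

The direct sum grows linearly with the number of boards, while the quadrature sum grows only as the square root — which is why the choice of rule matters so much in this argument.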

August 25, 2023 5:42 am

fCO2 = 0.42

So once again, the answer is 42.

bdgwx
August 25, 2023 6:17 am

I have a challenge for Pat Frank. Present a model that estimates the global average temperature on a monthly basis from 1900 to 2022. We will compute the R² and RMSD of that prediction wrt Berkeley Earth and compare it to that from CMIP. We’ll see 1) who produces the better match to reality and 2) if CMIP really does have an uncertainty as high as is being claimed.

Reply to  bdgwx
August 25, 2023 7:25 am

A bit early for you trendologists to be strutting your nonsense about, yes?

And you still have ZERO clue about what uncertainty is, and isn’t.

And who are “we”?

Reply to  bdgwx
August 25, 2023 7:31 am

Did you not look at the graph which does this in Pat’s interview?

Pat wasn’t trying to forecast temperature, he was trying to show how the CMIP models coalesce into a simple linear equation. How did you manage to miss that? Look at the graph at about six minutes in.

The uncertainty in CMIP is obvious by looking at the equation and graphs at 9 minutes in.

The variance seen in the model outputs *IS* an indication of the uncertainty of those outputs. If they had no uncertainty they would all coalesce into one. I don’t know why you never want to consider the variances in anything to do with climate modeling and temperature measurements, but variance *IS* a valuable statistical tool with which to evaluate uncertainty.

Reply to  Tim Gorman
August 25, 2023 8:07 am

His “monthly basis from 1900 to 2022” is a red herring; Pat never claimed his emulation does this.

Reply to  karlomonte
August 25, 2023 10:18 am

Check this out, KM.

The linear emulation equation can reproduce the 20th century warming pretty well, using the IPCC-approved forcings. Volcanic spikes not included, but could be.

These are the GISS Temp and UEA/UKMet anomalies

20 CEN both.png
bdgwx
Reply to  Pat Frank
August 25, 2023 10:53 am

What is the root mean square difference between your emulation and GISTEMP?

Reply to  bdgwx
August 25, 2023 11:59 am

What is the real monthly uncertainty of GISTEMP?

Reply to  karlomonte
August 25, 2023 6:47 pm

Good question! Right to the heart of things.

Reply to  bdgwx
August 25, 2023 6:47 pm

I don’t care.

bdgwx
Reply to  Pat Frank
August 26, 2023 7:09 am

It looks like a pretty good match to me. Is it better than CMIP?

Reply to  Pat Frank
August 25, 2023 12:04 pm

They look like curve smoothings!

Reply to  karlomonte
August 25, 2023 10:42 am

The whole CAGW clique is grasping at straws trying to find some fault with Pat’s analysis. There isn’t anything to fault. His equation is not a climate model; it’s an analysis of the climate models, and what he found just embarrasses them.

Reply to  Tim Gorman
August 25, 2023 12:06 pm

95% of all of the noise boils down to:

“NOOOO! It can’t be that big!”

Reply to  karlomonte
August 25, 2023 2:57 pm

You nailed it!

It’s very apparent that none of them has ever done any physical work where measurement uncertainty *has* to be considered at the risk of civil and/or criminal negligence charges.

Reply to  Tim Gorman
August 25, 2023 3:57 pm

No concept of what a PE has to sign off on.

Reply to  bdgwx
August 25, 2023 9:30 am

Question about your “challenge”: Will you permit Dr. Frank to fine-tune his “model” with numerous fudge factors, as supported by Berkeley Earth and as needed by every one of the CMIP-considered, multi-million-dollar supercomputer models, so as to match the model output to Earth’s actual atmosphere/surface temperature at any given point in the past?

Or are you so naive as to assume the models accurately predict atmospheric/surface temperatures from first-principles physics alone, without the need for any “tuning” parameters? How then to explain their wide dispersion (3:1 or greater) in predicted temperatures?

Next, your assertion that your challenge will result in seeing “if CMIP really does have an uncertainty as high as is being claimed”—given the aforementioned 3:1 dispersion in predictions AND the at-least-factor-of-two average overprediction of temperatures compared to observations (measurements)—is so absurd as to not merit any further comment.

Reply to  ToldYouSo
August 25, 2023 2:37 pm

Willis E has already shown they have to set limits on things like ice pools (I believe that’s what it was) in order to keep the models from blowing up. If the models were truly physics based this wouldn’t be needed.

Reply to  Tim Gorman
August 25, 2023 3:59 pm

And after all this they still generate oddities like the doubled Intertropical Convergence Zone.

Reply to  Tim Gorman
August 25, 2023 6:52 pm

Not only that, but they use a hyperviscous (molasses) atmosphere to suppress small scale enstrophy (energetic gyres) because otherwise the simulations blow up.

Jerry Browning is outspokenly critical about this.

Reply to  Pat Frank
August 26, 2023 5:08 am

The climate models are nothing more than data matching exercises and they even do a poor job of that!

Reply to  bdgwx
August 25, 2023 10:03 am

A challenge from ignorance, bdgwx. No such prediction from physics is possible.

bdgwx
Reply to  Pat Frank
August 25, 2023 10:17 am

And yet the RMSD of CMIP5 vs BEST from 1900 to 2022 is 0.16 K. That is pretty good for something that is not possible.

Reply to  bdgwx
August 25, 2023 10:35 am

Perhaps you aren’t aware of the phrase “garbage in, garbage out”?

As Pat carefully pointed out, any assertion of a temperature measurement to a precision (let alone accuracy) of 0.01 K is absurd. Hence, any statistical derivation (RMSD) or curve fitting of data having a claimed accuracy of 0.16 K is likewise absurd.

bdgwx
Reply to  ToldYouSo
August 25, 2023 2:07 pm

Hence, any statistical derivation (RMSD) or curve fitting of data having a claimed accuracy of 0.16 K is likewise absurd.

And yet it is 0.16 K. I encourage you to download the data here and here and check my work. Let me know what result you get. We can discuss at which step the difference arises.

Reply to  bdgwx
August 25, 2023 2:44 pm

You simply can’t assimilate the fact that the uncertainty in the data is far more than 0.01 K, can you?

If the uncertainty in the data overwhelms the difference you are trying to discern then you are only fooling yourself that you can actually discern such difference!

Feynman had something to say about fooling yourself. Maybe you should go look his quote up.

Stop assuming that the stated values of the data are 100% accurate.

Reply to  bdgwx
August 25, 2023 4:39 pm

“And yet it is 0.16 K.”

Absurd. You do not even understand the difference between “it is” and “it calculates to be”. ROTFLMAO.

bdgwx
Reply to  ToldYouSo
August 25, 2023 7:47 pm

I don’t care what you call it. Call it what you want. All I want to know is if you get a result different than 0.16 K. That’s it. Which is it?

Reply to  bdgwx
August 26, 2023 7:01 am

Suppose I told you that you were wrong because my calculation using your suggested data results in a RMSD of CMIP5 vs BEST from 1900 to 2022 of 0.14627 K.

What would you do with that number?

bdgwx
Reply to  ToldYouSo
August 26, 2023 1:01 pm

What would you do with that number?

I would say two things. 1) That’s pretty close to 0.16 K. 2) The significant-figures police are going to tear you apart for expressing it to 5 decimal places.

Reply to  bdgwx
August 29, 2023 3:53 am

Decimal places are not significant figures. It’s apparent you still haven’t learned your measurement lessons.

You missed the “significance” of the question totally.

Reply to  bdgwx
August 26, 2023 6:15 pm

No, the equipment that measures surface temperature isn’t that accurate, which is why your reliance on the 0.16 K is silly.

bdgwx
Reply to  Sunsettommy
August 27, 2023 5:06 am

No, the equipment that measures surface temperature isn’t that accurate, which is why your reliance on the 0.16 K is silly.

Do you know what that 0.16 K figure is in reference to?

Reply to  bdgwx
August 29, 2023 3:55 am

“Do you know what that 0.16 K figure is in reference to?”

If it’s associated with a “global” temperature then it doesn’t matter. The uncertainty will be in the units digit. That means you can’t resolve differences in the hundredths digit, that difference is UNKNOWN at that level of resolution.

Reply to  Sunsettommy
August 29, 2023 2:04 pm

bdgwx, bellman, nick, all the climate scientists believe you can increase accuracy and resolution by averaging. They believe significant digit rules are only for fools.

Reply to  Tim Gorman
August 29, 2023 2:36 pm

This is the bottom line they keep dancing around.

Reply to  ToldYouSo
August 25, 2023 4:03 pm

Absolutely correct! Without claimed milli-Kelvin “uncertainties”, these clowns are bankrupt.

Reply to  bdgwx
August 25, 2023 11:31 am

Models are tuned to match the published historical air temperature record. Tuned curve-fitting is not a prediction.

bdgwx
Reply to  Pat Frank
August 25, 2023 2:04 pm

F = G*(m1*m2/r²) is a model that is tuned (via the free parameter G) to match the historical relationship between F, m1, m2, and r. Does that mean Newton’s Law of Universal Gravitation cannot be used to make predictions?

BTW…don’t hear what I didn’t say. I didn’t say overfitting is desirable. I didn’t say curve fitting is the best method of model development. I didn’t say having more free parameters is better than fewer free parameters. I didn’t say free parameters are good. I didn’t say free parameters are bad. I haven’t said a lot of things that you may try to pin on me. Just know that if you make up an absurd argument that I never presented then you and you alone own it. Don’t expect me to defend it.

Reply to  bdgwx
August 25, 2023 3:11 pm

I see no absurd arguments in his reply to your post. What are you talking about?

Reply to  doonman
August 25, 2023 4:00 pm

He doesn’t know. It’s all just babble.

Reply to  doonman
August 25, 2023 4:07 pm

He doesn’t know what he’s talking about, just throwing stuff at the wall and hoping something sticks.

bdgwx
Reply to  doonman
August 26, 2023 7:07 am

I see no absurd arguments in his reply to your post. What are you talking about?

I posted at 2:04 pm. You posted at 3:11 pm. Pat replied to my post at 6:55 pm. I will say that when he responded at 6:55 pm he didn’t make up an absurd argument and expect me to defend it, so I’ll give him that.

Reply to  bdgwx
August 25, 2023 4:00 pm

We’ve had this discussion before. And apparently you learned nothing from it.

G is *NOT* a free parameter. G is based on the environmental conditions at the point you are calculating the force. It is determinative. You don’t get to just pick it out of thin air. G on Mars is different than G on Earth but each is determinative based on the volume and density of the planet (at least in rough terms).

Free parameters in climate models can be picked from thin air to make the model result come out as the designers wish, be it with a specific output or with an output that doesn’t “blow up”. Those free parameters are not determinative, they are *chosen*.

Example: cloud cover. It is a free parameter. It is not determinative and is not calculated from environmental conditions on a point-by-point basis. It is whatever the modeler wants it to be.

Reply to  Tim Gorman
August 26, 2023 7:08 am

“G is based on the environmental conditions at the point you are calculating the force. It is determinative . . . G on Mars is different than G on Earth but each is determinative based on the volume and density of the planet (at least in rough terms).”

Actually, no. I think you are referring to “g”, lower case, the symbol used to reflect the local acceleration due to gravity. “G”, upper case, is universally used in physics (at least in the English language) to represent the gravitational constant, which indeed is a physical constant throughout the universe, to the best of our scientific knowledge.

Reply to  ToldYouSo
August 26, 2023 8:29 am

Of course you are right. It is F that is the item in question. It is *still* not a free parameter. And it is F that bdgwx brought up.

G can still be calculated from F, m1, m2 and r. It doesn’t have to be measured experimentally. Only F, m1, m2 and r have to be determined experimentally. That means that G is *not* a free parameter. It is a calculated parameter with an uncertainty inherited from F, m1, m2, and r.
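
That inheritance can be sketched with standard propagation of relative uncertainties. Rearranging to G = F·r²/(m1·m2), the relative uncertainties combine in quadrature (the numbers below are made up for illustration, not real Cavendish-experiment values):

```python
import math

# Made-up relative standard uncertainties of the measured inputs.
rel_F  = 1e-4   # force
rel_r  = 5e-5   # separation (enters squared, so its term is doubled)
rel_m1 = 2e-5   # mass 1
rel_m2 = 2e-5   # mass 2

# G = F * r**2 / (m1 * m2): for a product/quotient, relative uncertainties
# combine in quadrature, with the exponent on r multiplying its term.
rel_G = math.sqrt(rel_F**2 + (2 * rel_r)**2 + rel_m1**2 + rel_m2**2)

print(f"{rel_G:.2e}")  # 1.44e-04
```

The result: G carries whatever uncertainty the measured inputs carry, which is the opposite of a parameter chosen freely.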

Reply to  bdgwx
August 25, 2023 4:06 pm

Not even remotely similar to what the UN climate snake oil sales persons do to “tune” their simulations.

G can be DIRECTLY measured by measuring all the other terms of the equation.

Where is the equivalent climate equation, fool?

Reply to  bdgwx
August 25, 2023 4:50 pm

“F = G*(m1*m2/r²) is a model that is tuned (via the free parameter G) . . .”

You simply do not understand that the gravitational constant, G, is not a “free parameter”, but instead is an empirical physical constant.

G is known to four significant decimal places in SI units; it cannot be adjusted willy-nilly.

But then, you would have to know some basic science to appreciate this fact. 

Reply to  ToldYouSo
August 25, 2023 5:48 pm

bdgwx has been told this over and over. Yet he can’t remember this simple concept.

bdgwx
Reply to  ToldYouSo
August 25, 2023 7:44 pm

ToldYouSo: You simply do not understand that the gravitational constant, G, is not a “free parameter”, but instead is an empirical physical constant.

Being a physical constant does not preclude it from being a free parameter. It is a free parameter because it must be determined experimentally.

G is known to four significant decimal places in SI units; it cannot be adjusted willy-nilly.

Nobody said anything about adjusting G willy-nilly except you. You and you alone own that argument. That’s fine, you can have it. Just don’t expect me to defend your arguments, especially when they are absurd.

It is an indisputable fact: G is not known exactly and its value must be estimated experimentally. It is a free parameter not unlike the free parameters used in many models, including but not limited to climate models. The existence of a free parameter in a model does not mean that the model containing it cannot be used to make predictions. And any claim as such can be countered with the countless models in existence that 1) make predictions and 2) contain at least one free parameter.

Reply to  bdgwx
August 26, 2023 2:43 am

Nobody said anything about adjusting G willy-nilly except you.

Liar.

Reply to  bdgwx
August 26, 2023 5:33 am

“Being a physical constant does not preclude it from being a free parameter. It is a free parameter because it must be determined experimentally.”

Whether it is found experimentally or is calculated from measured components it is still not a “free parameter”.

G *can* be calculated from other components. It does *NOT* have to be found experimentally. The calculation equation can be verified experimentally but that does not mean that G has to be measured experimentally.

Free parameters are chosen or estimated. G is neither. It can be calculated directly, therefore it is not necessary to either choose a value or estimate a value.

We went through all this with you in a previous thread yet here you are again spouting the same nonsense. Is your memory *that* bad?

You don’t even realize it, but you just provided an example of the use of uncertainty. You don’t *need* to know G exactly; it’s impossible to measure it EXACTLY with our limited technology. Therefore G should always be stated as X +/- u. It still doesn’t need to be estimated or “chosen”. You just have to use metrology correctly.

How the atmosphere acts in a climate model is *not* calculated or measured experimentally. It is chosen or estimated on a purely subjective basis. It *is* the very definition of a free parameter. So are many other things associated with the climate models, from ice pools to clouds.

Reply to  Tim Gorman
August 26, 2023 8:14 am

https://physics.nist.gov/cgi-bin/cuu/Value?bg|search_for=gravitation

G
Numerical value: 6.674 30 × 10⁻¹¹ m³ kg⁻¹ s⁻²
Standard uncertainty: 0.000 15 × 10⁻¹¹ m³ kg⁻¹ s⁻²
Relative standard uncertainty: 2.2 × 10⁻⁵

Didn’t find any constant for air temperature.

Reply to  bdgwx
August 26, 2023 7:20 am

“It is a free parameter because it must be determined experimentally.”

By your (tremendously flawed) logic, the parameter G did not exist before mankind was able to determine it experimentally.

I guess you also assert the speed of light, c, did not exist before it was measured experimentally.

Same thing for the charge on the electron, Planck’s constant, the fine-structure constant, etc., etc., etc.

Yeah, right.

Reply to  bdgwx
August 25, 2023 6:55 pm

“F = G*(m1*m2/r²)…” is argument by diversion. The BEST/CMIP5 difference is no measure of accuracy.

I represented your argument correctly. If you find it absurd, look to yourself.

bdgwx
Reply to  Pat Frank
August 26, 2023 7:04 am

I represented your argument correctly. If you find it absurd, look to yourself.

My argument is that there are many models that have free parameters that must be estimated via experimentation. No one seriously thinks they can’t be used to make predictions.

Reply to  bdgwx
August 26, 2023 7:44 am

No one seriously thinks they can’t be used to make predictions.

Predictions in science constitute a unique singular result deduced from theory: one so specific that it could threaten the standing of the theory should the observable contradict the prediction.

Climate models make no such prediction. Their expectation values are not unique. The spread is so wide they can’t be falsified no matter what happens with the climate.

Reply to  Pat Frank
August 26, 2023 1:23 pm

That is why trends against time will never prove anything about causation. A year will not provide a singular prediction of a temperature.

It is why the IPCC’s CO2-based predictions are doomed to failure. It should be obvious that a given CO2 concentration does not determine a singular temperature value.

Reply to  bdgwx
August 26, 2023 8:22 am

“My argument is that there are many models that have free parameters that must be estimated via experimentation.”

You misunderstand the term “estimated”. Experimental values are stated as “value +/- uncertainty”. That’s not an *estimate*, it is the result of measuring a measurand.

“E.5.1 The focus of this Guide is on the measurement result and its evaluated uncertainty rather than on the unknowable quantities “true” value and error (see Annex D). By taking the operational views that the result of a measurement is simply the value attributed to the measurand and that the uncertainty of that result is a measure of the dispersion of the values that could reasonably be attributed to the measurand, this Guide in effect uncouples the often confusing connection between uncertainty and the unknowable quantities “true” value and error.”

What you are trying to do is conflate a “measurement value +/- uncertainty” with guessing at a true value!

If the climate models included an uncertainty interval for every “estimated” parameter and then propagated that uncertainty the outputs would not just wind up “stated value” but would be “stated value +/- uncertainty”.

But they don’t do that. They ASSUME the parameters are 100% accurate and proceed from there. That has nothing to do with experimental values and everything to do with subjective guessing at true values.

Reply to  bdgwx
August 26, 2023 11:32 am

Name even one prediction generated by climate models/science that has come to pass.

Just one.

Reply to  bdgwx
August 25, 2023 2:42 pm

Do you mean RMSE? Root mean square error?

Look at any graphs of CMIP6 model outputs against actual observations. The RMSE is far more than 0.16 K.

Root mean square error derived from the difference between the data points and the trend line tells you NOTHING about uncertainty. Since the trend line is derived from the data, and the data is inherently uncertain, any calculation of difference is meaningless as far as uncertainty is concerned.

This is typical climate science. Just ignore all measurement uncertainty in the data, assume it is all random, Gaussian, and cancels. Therefore the stated values can be considered to be 100% accurate.

Reply to  Tim Gorman
August 25, 2023 4:08 pm

And again, he assumes that error is knowable, but doesn’t even realize what he is doing!

Reply to  karlomonte
August 25, 2023 5:51 pm

It’s exactly what the GUM says.

“The focus of this Guide is on the measurement result and its evaluated uncertainty rather than on the unknowable quantities “true” value and error”

In order to know the error you first have to know the true value.

Reply to  Tim Gorman
August 26, 2023 2:44 am

Which they will never understand!

sherro01
Reply to  bdgwx
August 28, 2023 3:27 pm

bdgwx,
You mistakenly assume that reality is known.
It never is.
That is why we have to use an estimate, knowing it is not exact, with expressions of uncertainty as a guide to how good the estimate might be.
Geoff S

bdgwx
August 25, 2023 8:09 am

One comment that jumped out at me. In discussing the Hunga-Tonga injection of water vapor into the stratosphere Pat says “Trillions of gallons of water got shot into the stratosphere and those are going to cause the climate to warm up”.

So a mere 150 MtH2O above the 12,000,000 MtH2O already in the atmosphere is expected to warm the Earth, but an extra 1,000,000 MtCO2 does nothing?

Reply to  bdgwx
August 25, 2023 8:16 am

Where did he make this claim?

This is YOUR extrapolation.

Reply to  bdgwx
August 25, 2023 10:19 am

Argument from personal incredulity, bdgwx. That’s all you’ve ever had.

bdgwx
Reply to  Pat Frank
August 25, 2023 12:44 pm

My argument is based on the fact that the laws of physics are universal and that CO2 does not magically get exempted from them. My incredulity is based on the numerous claims that 140 ppm of a substance cannot have a significant effect because 140 ppm is too little. I’ve seen this repeated over and over again on WUWT. Yet here we are with another example of a WUWT participant saying that 0.02 ppm could be having an effect. At the very least I think you can acknowledge the odd dichotomy here. No?

Janice Moore
Reply to  bdgwx
August 25, 2023 1:31 pm

Your assertion is absolutely NOT based on known laws of physics.

bdgwx
Reply to  Janice Moore
August 25, 2023 1:55 pm

Help me understand what you are saying. The most important law of physics I’m basing this on is the first law of thermodynamics (1LOT). Do you reject the 1LOT, or reject the idea that CO2 is not exempt from it? If you answer no (acceptance) to both, then you agree with me. If you answer yes (rejection), then you are challenging me. Which one is it?

Janice Moore
Reply to  bdgwx
August 25, 2023 2:28 pm

Apparently, you do not understand your own statements.

So, to help you understand what you are saying, here is a little analogy.

Your “so a mere… but an extra…” statement-masquerading-as-a-question is like:

“So, a mere pinch of this fentanyl will kill me, but this SUPER SIZED serving of diet root beer won’t?”

Still don’t understand what you said?

–> You are asserting that the mere fact of differing quantities means something. It doesn’t.

Reply to  Janice Moore
August 25, 2023 4:10 pm

bdgwx likes to play Stump the Professor by asking questions he thinks he already knows the answers to.

bdgwx
Reply to  Janice Moore
August 25, 2023 4:48 pm

I understand that the 1LOT is an unassailable law of physics that applies universally and that CO2 is not exempt. Are you challenging my understanding? If the answer is yes then I’ll disengage. I neither have the time nor the motivation to defend the 1LOT right now.

Reply to  bdgwx
August 25, 2023 5:06 pm

And when challenged, he always pulls out his cherished ‘1LOT’ red herring.

Reply to  karlomonte
August 25, 2023 6:06 pm

He simply doesn’t understand how to apply the 1LOT. He thinks that reading it on the internet makes him an expert on it.

Thermodynamics 101 was the *hardest* course I took in college by far. And bdgwx doesn’t know enough to understand what he doesn’t know about it.

Reply to  bdgwx
August 25, 2023 6:04 pm

It’s a matter of knowing how to apply it, not a matter of being able to quote it.

It’s why thermodynamics is a discipline all its own at university. It’s why engineers have to have at least a minimum set of courses from the discipline.

You simply do not know how to apply the 1LOT correctly. Yet you want to tell graduate engineers that they don’t understand the 1LOT!

Reply to  bdgwx
August 25, 2023 11:17 pm

“I neither have the time nor the motivation”

Or the intelligence.

The 1LOT is what proves CO2 does basically nothing except feed plants.

Reply to  bdgwx
August 25, 2023 6:59 pm

H₂O is not CO₂. The stratosphere is not the troposphere. Your analogy has no obvious bearing on any physical question under discussion.

Reply to  bdgwx
August 25, 2023 3:06 pm

It isn’t just the amount! It’s the effect the amount has. A small amount of uranium has vastly more potential effect than does a large amount of coal!

Water vapor can have a vastly different effect than can CO2.

Reply to  bdgwx
August 25, 2023 6:57 pm

Why is water in the stratosphere like CO₂ in the troposphere? Make the physical case.

Reply to  Pat Frank
August 26, 2023 5:34 am

You didn’t actually expect an answer, did you?

Reply to  Tim Gorman
August 26, 2023 12:34 pm

Nope. 🙂 Not a calculation, anyway.

bdgwx
Reply to  Pat Frank
August 26, 2023 6:57 am

Pat Frank: Why is water in the stratosphere like CO₂ in the troposphere? Make the physical case.

It’s your argument. You make the case.

Reply to  bdgwx
August 26, 2023 7:46 am

“It’s your argument.”

Yet again, how quickly you forget.

bdgwx
Reply to  Pat Frank
August 26, 2023 12:57 pm

My argument is that small things can have a big effect. That’s it. And I think you agree with me on that otherwise you wouldn’t have said that the Hunga Tonga water vapor injection could be responsible for some of the warming we observe today.

I said nothing about water in the stratosphere being like CO2 in the troposphere. Water in the stratosphere is different from CO2 in the troposphere in many ways. If you agree with me on that, then let’s work together to convince the contrarians of the differences.

Reply to  bdgwx
August 26, 2023 8:43 pm

My argument is that small things can have a big effect.

Your argument: “So a mere 150 MtH2O above the 12,000,000 MtH2O already in the atmosphere is expected to warm the Earth, but an extra 1,000,000 MtCO2 does nothing?”

The construction of your statement makes a large thing of the extra CO₂.

That memory thing again.

bdgwx
Reply to  Pat Frank
August 27, 2023 5:02 am

My statement makes a large thing of both H2O and CO2, even though both are small quantities relative to the whole. If you accept that 150 MtH2O (which is only 0.02 ppm relative to the whole) can warm the Earth, then you are agreeing with me, but disagreeing with many of the other WUWT participants. Let’s work together on this and convince the contrarians that small things can have a big effect.

Reply to  bdgwx
August 27, 2023 5:47 am

can

Resolve the ambiguity.

Reply to  Pat Frank
August 27, 2023 8:03 am

He really likes the word “contrarian”, it means anyone who dares to point out his non-physical nonsense.

Reply to  Pat Frank
August 29, 2023 3:50 am

He can’t.

It may be, it can be, it’s possible, it could be, it might be, it’s potentially

Words to live by in the realm of climate “science”.

Reply to  bdgwx
August 29, 2023 3:44 am

If that “large” effect is so small you can’t resolve it then how big can it be? That’s the whole problem with CAGW. If the change is within natural variation how do you tell what it is?

Reply to  bdgwx
August 26, 2023 8:10 am

It was *YOUR* claim, not Pat’s. YOU make the case!

Reply to  bdgwx
August 25, 2023 10:47 am

The issue isn’t whether CO2 warms the earth, it’s how much it does so. The climate models and the temperature data have so much uncertainty that any actual differential is impossible to discern.

One more time: if your uncertainty is in the tens digit there is no way you can discern a difference in the hundredths digit. What is going on in the hundredths digit is unknown and unknowable.

It’s why climate science has to assume no uncertainty exists, regardless of whether that makes sense physically or statistically.

Milo
Reply to  bdgwx
August 25, 2023 1:24 pm

The water is in the stratosphere, where it’s likely to persist.

bdgwx
Reply to  Milo
August 25, 2023 1:47 pm

The 150 MtH2O will persist for a few years. The 200,000 MtCO2 added to the stratosphere will persist for hundreds of years.

Reply to  bdgwx
August 25, 2023 7:05 pm

“The 200,000 MtCO2 added to the stratosphere will persist for hundreds of years.” Which makes all the plants happy.

If we manage to get to 1000 ppmv CO₂ they’ll all be dancing the Macarena.

Reply to  bdgwx
August 26, 2023 1:33 pm

And how does that much CO2 get into the stratosphere when it is so heavy?

Janice Moore
Reply to  bdgwx
August 25, 2023 1:30 pm

Answer: Yes. That is what data tells us.

bdgwx
Reply to  Janice Moore
August 25, 2023 1:51 pm

The claim I keep seeing is that 140 ppm, regardless of what the substance is, is too small an amount to have an effect. What data tells us that 140 ppm isn’t big enough, but 0.02 ppm is?

Janice Moore
Reply to  bdgwx
August 25, 2023 2:10 pm

A “claim [you] keep seeing.” Whatever.

Reply to  Janice Moore
August 25, 2023 4:11 pm

Welcome to bdgwx-world: please check your brain at the door.

Reply to  bdgwx
August 26, 2023 1:28 pm

Dude, do you not keep up with climate science?

The water vapor went into the stratosphere, not the entire atmosphere. A warming stratosphere does warm the surface.

August 25, 2023 8:34 am

Fantastic, fact-based, mathematical and scientific slapdown of today’s climate alarmism . . . thank you, Pat Frank!

You offered many key points worth repeating, but I’ll just point out one that was exceptionally noteworthy IMHO: it starts at the 21m58s mark of the video and concludes at 22m53s with this takeaway observation: “. . . we have convincing nonsense, is what you’ve got.”

“Convincing nonsense”, that’s a spot-on phrase I’m going to propagate forward!

Reply to  ToldYouSo
August 25, 2023 11:34 am

Thanks for the kind words, TYS, and please feel free. 🙂

Mr Ed
August 25, 2023 8:57 am

I listened to the segment on tree rings. On this piece

https://abatlas.org/the-human-sense-of-place/high-altitude-archeology

” Old whitebark pine stumps have been dated to 1,100 to 2,100 years ago in places that are now 500 feet above where trees are growing now, Guenther said. “These were happy, well-fed whitebark pine,” he said. That points to the possibility that the high country was warmer for a period of time, maybe encouraging occupation when lower elevations were stricken with drought”

I believe there is a correlation between tree rings and moisture, e.g. drought, as seen in the
sequoias in California. Also, the fact of buffalo jumps at 10,500 ft in elevation
suggests things on the prairie were dry for a long time. Just my unqualified opinion…

Nik
August 25, 2023 9:56 am

Loved Pat’s “No Certain Doom” presentation and, I have shared it widely.

If No Doom was a stake through the heart, “Nobody Understands” is a drawing and quartering, followed by incineration.

August 25, 2023 12:40 pm

His talk reminded me of one of the conclusions reached by the group that investigated Climategate. They were surprised that there were no statisticians employed by the climate people!

Janice Moore
August 25, 2023 1:42 pm

FOR ANYONE CONFUSED BY AJ, et al:

An excellent lecture on propagation of error by Dr. Frank here:

“No Certain Doom”

Janice Moore
Reply to  Janice Moore
August 25, 2023 1:45 pm

My notes re: above lecture:

7:35 “No published study shows uncertainties or errors propagated through a GCM projection.”

10:23 “Ensemble Average” – add all ten together and divide by ten.

10:48 – A straight line equation mimics ensemble average closely

10:58 i.e., GCM’s merely linearly extrapolate GHG forcing

12:45 i.e., you could duplicate the GCMs, run on supercomputers, with a hand calculator

13:00 Discusses total error propagation of GCMs over time

13:35 Calculating uncertainty (formula) – yields a measure of predictive reliability

13:47 Q: Do climate models make a relevant thermal error? Answer: Yes.

14:10 Cloud modeling is highly uncertain (discussion)

14:27 25 years of global cloudiness data

16:25 Average cloud error ±140% (discussion of cloud error estimation: “lag-1 error autocorrelations are all at least 95% or more”)

18:35 Essentially, with that large an error, you can’t know anything about cloud effect on climate using the models

 19:00 The cloud error is not random; it is structural (i.e., there is systematic data that models are not explaining)

19:35 Worse, not only is there error, ALL the models are making the same kind of error

20:00 The errors are not random errors. Structural coherence of cloud error shows models share a common faulty theory.

21:00 How ±4 W/m² average theory error propagates in step-wise model projection of future climate (conventional time-marching method): stepping out 100 years into the future, the error is propagated step by step. THEORY error does not cancel out (random error does)

24:30 Having calculated the average thermal error of the models, enter that error factor into the linear “model-model” (from the start of the lecture, the one which accurately mimics all the GCMs) and use it to make a temperature projection.

25:02 Result: BIG GREEN MESS (the error bars for future projections go right off the page)

25:42 Error after 100 years: ±14 degrees; the error is 114 times larger than the variable. This does NOT mean (as many modelers mistakenly think) that the temperature could GET 14 deg. higher or lower – it means that the errors are beyond any physical possibility. That is, these are uncertainties, not temperatures.

26:46 The error bars are larger than the temperature projection even from the FIRST year. The error is 114 times larger than the variable from the GET GO.  Climate models cannot project reliably even ONE year out.

28:20 James Hansen example: 1) as presented in 1988; 2) with error propagation (off the chart)
(Note: 29:00 The modelers never present their projections with a physically valid uncertainty shown – never.)

30:00 That is, Hansen’s projections were meaningless.

32:19 Conclusions: What do climate models reveal about future average global temperature? Nothing
… about a human GHG fingerprint on the terrestrial climate?  Nothing.

… “Have the courage to do nothing.”
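The step-wise propagation described at 21:00–25:42 can be sketched numerically. This is a minimal sketch, not Dr. Frank’s published equation: the ±1.5 °C per-step value below is a hypothetical stand-in for the uncertainty obtained from the ±4 W/m² cloud error, chosen only to show that a systematic per-step uncertainty accumulated in quadrature grows as √n.

```python
import math

def projection_uncertainty(u_per_step, n_steps):
    # Root-sum-square accumulation: per-step uncertainties add in
    # quadrature, so the total grows as u_per_step * sqrt(n_steps).
    return math.sqrt(sum(u_per_step ** 2 for _ in range(n_steps)))

u = 1.5  # hypothetical per-year uncertainty in degrees C (illustration only)
for years in (1, 10, 100):
    print(years, projection_uncertainty(u, years))  # 1.5, ~4.7, 15.0
```

With these made-up numbers the 100-year uncertainty (±15 °C) dwarfs any projected change, which is the “BIG GREEN MESS” point at 25:02: the bars are uncertainties, not physically possible temperatures.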

Reply to  Janice Moore
August 25, 2023 3:35 pm

You pretty much nailed it.

I would only point out that averages are meaningless without knowing the associated variance. I believe it was you who pointed out in another message that climate studies *never* follow through on propagating variances through their “averaging protocols”. Therefore we don’t know if the averages are meaningful or not.

Consider: winter temps have a different variance than summer temps. Yet climate science throws winter and summer temps together to get a “global average” and simply ignores the variances. Variance has a direct relationship with the meaning of the average. A wide variance means that the “hump” around the average is probably pretty wide, i.e. a large standard deviation, meaning that the average may or may not be meaningful. You can’t even tell if you have a multi-modal distribution of temps – which is quite likely when combining winter and summer temps. And anomalies don’t help. Anomalies inherit the variances of the data used to calculate them. Winter anomalies *will* have a different average than summer anomalies because of their different variances – a classic multi-modal distribution. Which gets totally ignored in climate science.
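A quick numeric illustration of the pooled winter/summer point (my made-up temperatures, chosen only to show the effect):

```python
import statistics

winter = [-5, -2, 0, 1, -8, -3, -1, 2]     # degrees C, one "hump"
summer = [24, 26, 28, 25, 27, 29, 23, 26]  # degrees C, the other "hump"

pooled = winter + summer
print(statistics.mean(winter), statistics.variance(winter))
print(statistics.mean(summer), statistics.variance(summer))
print(statistics.mean(pooled), statistics.variance(pooled))
# The pooled mean (12 C) describes neither season, and the pooled
# variance dwarfs both seasonal variances: the single "average" hides
# a two-humped (bimodal) distribution.
```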

Reply to  Tim Gorman
August 25, 2023 4:03 pm

Even in the presence of variance, an average carries less information than the original data. It is a compromise. A form of compression of the original data that makes its expression more compact.

Reply to  Clyde Spencer
August 25, 2023 5:47 pm

Yep. What does the average height of Shetland ponies and Arabians tell you? That’s no different than the average value of winter temps and summer temps.

Janice Moore
Reply to  Tim Gorman
August 25, 2023 9:22 pm

Mr. Gorman, just had to applaud your excellent, technically accurate, powerfully stated, comments passim.

So glad you are here (and several others, too) to fight for data-driven, bona fide, science!

Reply to  Janice Moore
August 25, 2023 7:07 pm

Holy moley, Janice! I’m so impressed! And touched!

Janice Moore
Reply to  Pat Frank
August 25, 2023 9:34 pm

Aw 😊. Wow. Thank you, so much. Your students were (still are – as evidenced by the wonderful instruction we received today) SO blessed. Encouragement is what distinguishes a mere instructor from a true teacher.

And I must say that when I saw the photo of you and that pretty lady, I smiled big and said, “Isn’t that lovely. He has a beautiful best friend forever. He’s not alone.”

Just nice to know.

Reply to  Janice Moore
August 26, 2023 12:00 am

Thank-you Janice. Paula is wonderful. If we ever have tea one day, we’ll tell you the story. 🙂

Janice Moore
Reply to  Pat Frank
August 26, 2023 10:32 am

🙂

Reply to  Janice Moore
August 26, 2023 6:31 am

I had to give you an uptick for this comment. I just wish I had noticed it without you pointing it out!

Reply to  Tim Gorman
August 26, 2023 8:17 am

And I could not help but notice multiple air temperature trolls (i.e. trendologists) felt compelled to downvote Pat’s thank-you.

Beyond pathetic, this is bare naked hate.

Janice Moore
Reply to  karlomonte
August 26, 2023 10:37 am

I agree. I didn’t “plus 1” your remark — to leave the evidence for all to see (currently, around 10:37AM PDT, you are at -1 for that 🙄)

Reply to  Janice Moore
August 26, 2023 4:35 pm

Yes, same here, their machinations need to be on display for anyone to see.

Janice Moore
Reply to  Tim Gorman
August 26, 2023 10:34 am

Thanks! I just got to be the one to say it.

Janice Moore
August 25, 2023 1:50 pm

Main points from Dr. Frank’s “Are Climate Modelers Scientists?”

“I will give examples of all of the following failures of competence. Climate modelers:

• do [not] understand the physical importance of a unique result.

• are unable to distinguish between accuracy and precision.

• do not understand that a ±15 C uncertainty in temperature is not a physical temperature.

• do not understand that a ±15 C projection uncertainty does not mean the model itself is oscillating wildly between icehouse and greenhouse climate simulations.

• confront standard error propagation as a foreign concept.

• do not understand the significance or impact of a calibration experiment.

• do not understand the concept of instrumental or model resolution.

• do not understand detection limits.

• do not recognize that ‘±n’ is not ‘+n.’

• do not understand the meaning of physical error.

• do not understand physical error analysis at all.” 

(Read and or download Dr. Frank’s EXCELLENT expose, here:
https://www.researchgate.net/publication/370211676_Are_Climate_Modelers_Scientists/link/6445b666d749e4340e3199db/download

Reply to  Janice Moore
August 25, 2023 7:10 pm

Thanks for posting that, and for the link, Janice. I had no idea you knew of that manuscript.

Discovering all of that about climate modelers — that they’re unequipped to judge the physical reliability of their own models — during the 6 years it took to publish “Propagation…” was an enormous shock.

Janice Moore
Reply to  Pat Frank
August 25, 2023 9:38 pm

And, no doubt, getting shafted by the thugs of the Pal Review mob was pretty sickening.

Grateful for you!

Thanks for all you are doing on the front lines of what is, essentially, a battle for Life and Liberty.

Janice Moore
Reply to  Janice Moore
August 25, 2023 9:50 pm

P.S. I haven’t read your paper about sex discrimination in STEM workplaces, but, if you concluded that there is, essentially, none: I wholeheartedly agree with you.

Reply to  Janice Moore
August 25, 2023 11:57 pm

There is some, Janice, but individual cases. There’s no evidence of a STEM climate of sexual abuse.

All SH theories predict that sexually harassed people lose job satisfaction. But male and female academics have very similar levels of job satisfaction. So, no evidence.

I have another manuscript under review. I hope it gets accepted. It provides quantitative evaluations of the problem in terms of personality types. It also brings SH into science with some unanticipated results.

Janice Moore
Reply to  Pat Frank
August 26, 2023 10:43 am

This morning, I realized that I need to qualify my above remark. I have read/heard from reliable sources enough reports of individual cases/anecdotal evidence to be fairly certain that there IS sex discrimination:

for the past 25 years or so, men have been discriminated against in hiring (and, perhaps in promotion, too) as the affirmative action nonsense has infiltrated the STEM sector.

That might be at a meaningful level. The discrimination against women (case by case basis) which you discovered is present, but, I doubt it is as widespread (since the late 20th century) as discrimination against men. The absolute numbers (of men discriminated against) are probably not horribly high, but, it is still WRONG.

Janice Moore
Reply to  Janice Moore
August 26, 2023 10:47 am

Arrgh! I tried over and over to edit my sort of unclear writing above — WordPress keeps refusing to let me edit, claiming that I am “posting too quickly.” Hope you can figure out what I meant to say.

Reply to  Janice Moore
August 26, 2023 12:42 pm

No problemo. 🙂

Reply to  Janice Moore
August 26, 2023 10:45 pm

It seems that the WordPress edit function has become fubared, probably as a side effect of some other change.

Reply to  Janice Moore
August 26, 2023 12:41 pm

There’s some truth to that.

Table 1 in the Falsification paper provides government statistics showing that female STEM applicants are interviewed and hired at a rate disproportionately higher than their application rate.

Males definitely face systemic negative discrimination, but they’re not sexually harassed much in the classical sense.

Non-classically (manuscript (I hope paper) #2) I show males are frequently sexually harassed by females. It’s just a cultural norm and not called out.

sherro01
Reply to  Pat Frank
August 28, 2023 7:27 am

Pat,
Here in Australia since about 2000, there have been many political decisions made in topics that traditionally were left unspoken.
Just today, talk radio has owners of multiple homes asking why there were recent regulations requiring them to allow rental for short or long term only, as the case may be, to help low income homeless people. The owners questioned why regulators were empowered to regulate homes they did not own.
Seeing example after example since 2000 of these regulatory intrusions made me wonder what had changed. The most plausible answer was the increase in proportion of female regulators/policy makers.
So I devised a mental game of the value of trying to trace each strange intrusion to a person, the taking odds on female or male instigator. In short, I reckon quite a lot of social harm has been done by women expressing their views of empowerment.
Then, noting also a wave of interest in many flavours of non-standard sex preferences, I thought it logical that it ain’t just women. It can also be men wanting to be empowered women.
I noted your family photo above. You and I are much the same age and I am a few months short of a 60th wedding anniversary to a lovely lady who is not talking empowerment or sex change. Maybe I like your climate change wisdom because we have multiple points in common, like both analytical chemists at times, like having values for logic, manners, common sense, humility and a few others.
Sorry for the lengthy ramble, it was triggered by comments from Janice, whom I respect through her writings.
Geoff S

Reply to  sherro01
August 28, 2023 11:23 am

Thinking out loud, Geoff, thanks. 🙂

The world is stranger than we know on first look. Layers upon layers upon layers.

The peculiarity of the regulations is that some can propose but multiple others must agree. So it can’t be just women. It must be foolish women allied with idiot men.

Janice Moore
Reply to  sherro01
August 30, 2023 3:08 pm

THANK YOU, Mr. Sherrington! Very glad I came back here… .😊

And, I must add that I am also happy to hear that YOU, also, are not alone, happily spending your days with a lovely wife.

CONGRATULATIONS (early) on 60 YEARS!

✨ 🎇 🎆 💞 💖 💞 🎆 🎇 ✨

Janice Moore
Reply to  Pat Frank
August 30, 2023 3:16 pm

Thank you for taking the time to inform me. Even more *blush*, thank you for so graciously overlooking my mistake (conflating sex discrimination with sexual harassment <– while I would expect some of this (more done by men to women, I would expect — leaving aside the cultural norm, “accepted,” type), as with discrimination, I would have expected it not to be a meaningful trend, just, as you found, isolated cases).

Oh, and I was so pleased to know that I would be welcome for tea with you two. And especially delighted that it would be tea. I love the aroma of coffee, but, coffee just tastes like dirt to me.

Well….. if I am ever in your lovely neck of the woods again, I will try to contact you and come HEAR THAT STORY! 😊

Reply to  Janice Moore
August 30, 2023 9:10 pm

Let me know, Janice. My email is on the papers. 🙂

Janice Moore
Reply to  Pat Frank
August 31, 2023 3:31 pm

Thank you. Just knowing that I am welcome is a blessing, even if I never make it down your way again.

Reply to  Janice Moore
August 26, 2023 6:46 pm

Yes, discrimination swings both ways. Probably the most egregious situation was when I applied for a position with a well-known company that was so anxious to hire a female Asian-American that they made her an offer before someone noticed that she didn’t have the requisite minimum of a master’s degree. They rescinded her offer, but didn’t make me an offer. I can only presume that they continued to look for someone to help fill their affirmative action quota. That was not the only incident of that type in my career. The first time was in 1971.

Reply to  Janice Moore
August 27, 2023 4:24 pm

Thank you for posting that link, Janice Moore. I just read the manuscript and downloaded the pdf. A great record and reminder of what Pat Frank has been pointing out so consistently for so long now.

Janice Moore
August 25, 2023 1:55 pm

Dr. Frank is in excellent, world class, company in his assertions

For instance, here is Dr. Christopher Essex, soundly backing up Dr. Frank in this lecture:

“Believing in Six Impossible Things Before Breakfast”
https://wattsupwiththat.com/2015/02/20/believing-in-six-impossible-things-before-breakfast-and-climate-models/

Janice Moore
Reply to  Janice Moore
August 25, 2023 2:03 pm

My notes on the above Essex lecture:

[5:24] “Scientific thinking is about things and political thinking is about what other people are doing.”

[5:31] “A consensus is the wrong way to think about a scientific question.”

[5:50] (quoted in slide) “’In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that long-term prediction of future climate states is not possible.’” 

[6:25] (Source of above quote: IPCC (Intergovernmental Panel on Climate Change), Third Assessment Report (2001), Sec. 14.2.2.2, p. 774)

[7:08] Mentions: book by Essex and Dr. Ross McKitrick, Taken by Storm: The Troubled Science, Policy, and Politics of Global Warming (Paperback – May 28, 2008)

[9:25] “An extraordinarily complex problem [] is represented [] as an extremely simple one.” (cites the thermometer in a shoebox, grade school, “science” experiment re: greenhouse effect)

[12:03] When you plot a .04 deg. C increase/decade on a thermometer graph – unlikely to be significant.

[14:00] “There is absolutely no physical argument [] to connect this value to any of the things that people talk about as climate impacts.”

[14:50] “There is a cultural problem with science in general.” — “Cultural carbon” – Example: “Certified carbon free” (formula for sucrose = C12H22O11 (without: 11H2O))

[15:40] “Oxygen-free carbon dioxide” – (re: ignorant use of “carbon” to mean CO2)

[17:14] Domino Sugar TV commercial (re: “carbon-free sugar”) (Note: they do say that it is because of 0 CO2 emissions that they call it “carbon-free”)

[18:20] By modifying language, you modify thinking – result: junk science (carbon in auto glass makes car hot quote of David Suzuki, “London Free Press,” May 12, 1990)

[20:20] Classic atmosphere-earth energy flow diagram – misleading because wide v. narrow arrows mislead.

[20:40] Energy flow diagram done more accurately IN: Solar radiation OUT: (1)Radiation (2)Fluid Dynamics –

[21:55] “Greenhouse Effect” (introduce gas that changes how Radiation can flow out) versus [22:22] How greenhouses really work (Fluid Dynamics – flow of air stopped by glass) – THIS is a KNOWN physical effect, governed by the laws of radiative transfer = Completely Certain Outcome

[23:19] The “Greenhouse Effect” = Fundamental, Unsolved Scientific Problem — the temperature gradients could cause enough cooling in this NOT-closed system to compensate for warming (unlike in an actual greenhouse)

[25:20] List of fundamental unsolved math problems.
2 Math Equation Problems Not Yet Solved (needed for meaningful climate modeling)

1)    [26:10 – 27:00] Navier-Stokes Equations ([27:08] non-linear differential equations — unsolved) – they govern the flow of fluids (e.g., air and water). If you don’t have a handle on how air and water move [] then you really can’t [] have an intelligent conversation about climate[].

2)    [27:15] Computer Science unsolved problem is the P v. NP (Polynomial v. Non-polynomial Time Problem of Computational Complexity) math problem – this limits how well (not at all, at this time) a computer can be used to solve the math equations needed to solve climate model’s queries –

[27:40] BOTH the above must be solved to be able to meaningfully do climate simulations re: CO2

Physics Problems Not Yet Solved

[27:50] Closure Problem of Turbulence — thus, cannot use Navier-Stokes flow equation to solve flow even in a closed pipe if there is any turbulence from first principles

[28:40 – 29:08] Cannot even determine average flow with turbulence (because to average, you have to do the entire original calculation anyway) from first principles.

[29:09 – 29:51] Experience or data cannot overcome the non-closure problem because we have far too little data and or the time of measurement given what is being measured, climate, is far too short.

[29:53] People DO use models to do empirical “closures,” but they are not doing so from first principles.

[30:18] (James Cameron’s) Computer Water Versus Real Water

[32:10] The point is: there are no math or physics equations that give the result pictured (it is fake water and not an accurate representation of the physical world).

[32:40] Issue: Finite Representation of Computers

[34:02] Red spot graph – point is (0,0) on grid representing Error – Re: 2 variables with 2 unknowns, computer can solve, however, computer only has finite # of decimal places to use to solve, thus, rounding errors will sometimes occur [35:00] – Residual error – [35:25] – Demonstration of residual error using computer plotting of 2 mathematically equal ways of solving an equation done 100,000 times each on graph – blue and red not overlapping shows computer’s finite representation of the 2 equal math methods (magnified = plotted along lines, not a true scatter – the machine epsilon ɛ indicates the finite representation power of a given computer, i.e., the smallest number, ɛ, such that ɛ + 1 > 1; if you add a number < ɛ to 1, it = 1)
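The machine epsilon defined in the notes above (“the smallest number ε such that ε + 1 > 1”) can be found directly. A minimal sketch for IEEE 754 double precision (the halving loop finds the smallest power of two with that property, which is what `sys.float_info.epsilon` reports):

```python
import sys

# Halve eps until 1.0 + eps/2 rounds back to exactly 1.0; the last eps
# is the smallest power of two such that 1.0 + eps > 1.0.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

print(eps)                   # 2.220446049250313e-16 for IEEE 754 doubles
print(1.0 + eps > 1.0)       # True
print(1.0 + eps / 2 == 1.0)  # True: anything smaller vanishes into 1.0
print(eps == sys.float_info.epsilon)  # True
```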

[40:33] Finite representation of computers goes beyond GIGO (Garbage In Garbage Out), they can easily give you NGIGO (NOT-Garbage In Garbage Out).

[42:15] – Re: Turbulent Flow – [43:00] for the “swirls” in air, turbulence at the smallest scale (the Kolmogorov microscale) is ~1 mm – thus, to do a proper calculation, the grid must be smaller than 1 mm

[43:20] GIVEN, you have a grid < 1 mm [NOTE: for aerosols and other factors, you would need a micrometer-sized grid – leaving that aside… and also leaving aside that you would have to be able to “stop action” the air situation (dogs running around, cars, etc.)], TO CALCULATE A FLUID DYNAMICS CALCULATION 10 YEARS OUT (what air turbulence would be)

[44:12] (easily 10 variables, using 1 floating point calculation per variable (that is likely only a minimum)), [45:00] the number of calculations per second (a billion or so), [45:22] YEARS TO CALCULATE 10-YEAR FORECAST: 10^20 years (the universe is ~10^10 years old). IOW: cannot be done.
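Essex’s 10^20-year figure can be sanity-checked with back-of-envelope arithmetic. The figures below are my own assumptions (surface area, atmosphere depth, a very generous one step per second), not his exact inputs; the result lands in the same 10^20–10^21-year ballpark:

```python
import math

# Grid the whole atmosphere at the ~1 mm Kolmogorov scale and count
# floating-point operations for a 10-year run (assumed figures).
earth_surface_m2 = 5.1e14        # ~5.1e8 km^2 of surface
atmos_depth_m = 1.0e4            # ~10 km of weather-bearing atmosphere
cell_m = 1.0e-3                  # 1 mm grid spacing
cells = earth_surface_m2 * atmos_depth_m / cell_m**3  # ~5e27 cells

vars_per_cell = 10               # velocities, pressure, temperature, ...
steps = 10 * 365 * 24 * 3600     # one (very generous) step per second
total_flops = cells * vars_per_cell * steps           # 1 flop per variable

computer_flops = 1e9             # "a billion or so" operations per second
runtime_years = total_flops / computer_flops / (365 * 24 * 3600)
print(f"runtime ~ 10^{math.log10(runtime_years):.0f} years")
```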

[46:05] Re: Parameterizations – engineering uses them in its models – to approximate a calculation in a reasonable time – [46:25] Engineers tune using data from experiments (e.g., wind tunnel data) – Can’t put the earth into a laboratory (“wind tunnel”) –

[46:42] Thus, climatologists are using a non-empirical model for an empirical problem.

[46:51] Best resolution for climate model grid is HUNDREDS OF KILOMETERS. You will miss a lot, e.g., thunderstorms (! – much energy transfer done by them – there are ~ 2 million thunderstorms a year on earth)

[47:25] All significant weather phenomena are beneath the resolution of the parameterizations of the models — FAKE PHYSICS.

[49:07] Finite Representation – again – “Numerical schemes don’t usually conserve energy!” – [49:35] – equation for ideal oscillation of a pendulum (the energy is conserved – unlike a real grandfather clock which is losing energy to friction, etc. and must have weights keeping it going)

[50:47] Note: the equation below the line – it is not the same as the ideal oscillation equation – it is what you must use to compensate for the finite representation of a computer

[51:40] When you run the oscillation equation (as modified for FR, i.e., the computer’s ɛ), it not only fails to conserve energy (as per the “ideal” equation – not likely to be the case in the physical world), it accelerates [51:50] and gains energy infinitely (upward sloping curve) –
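The energy-gaining behavior described at 51:40 is easy to reproduce. A minimal sketch (my toy scheme, not the exact equation from the lecture): explicit Euler applied to the ideal harmonic oscillator x′ = v, v′ = −x multiplies the energy by exactly (1 + dt²) every step, so the supposedly conserved quantity grows without bound:

```python
dt = 0.01
x, v = 1.0, 0.0
e0 = 0.5 * (x * x + v * v)          # initial energy of the ideal oscillator
for _ in range(100_000):
    x, v = x + dt * v, v - dt * x   # explicit Euler step
e1 = 0.5 * (x * x + v * v)
print(e1 / e0)  # ~2.2e4: energy has grown by (1 + dt**2)**100_000
```

A symplectic or energy-compensated scheme is needed to hold the energy steady, which is the lecture’s point about having to modify the equation for the computer’s finite representation.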

[53:33] Implication for the Navier-Stokes equations – to account for all the quantities which must be conserved, you would have to write an algorithm which would solve the N-S equations – not yet done by anyone.

[54:30] Re: the Constancy of Climate – example of a differential equation plotted on graph next to computer’s calculation (using a modified version of that differential equation) – NOT the same after time passes (for awhile, tracks pretty much exactly, then, WHOA, diverges [55:10] – becomes unstable, i.e., issue: computer instability.

[55:52] To prove that chaos was not just an artifact of computer FR, Essex, et al. (1991) used the computer FR to do the reverse, i.e., over-stabilize (or suppress known instability) – [56:00] they turned a chaotic system into a harmonic oscillator.

[56:30] Cf. IPCC models trying to handle ENSO [56:45] different models’ outputs on graph – focus on one section [57:06] blow it up – everything (all the model runs’ paralleling each other) is flat [57:23] – known as a white spectrum, Fourier power spectrum ([57:40] which, per the Wiener–Khinchin theorem, means the individual output which produces that spectrum has to be uncorrelated from moment to moment) – take one model out 1,000 years (using PCMDI (Livermore) climate model) — FLAT
[58:28] Comparing Observed to Model data – Note: Observed up and down with definite peaks and troughs while the model sluggishly mimics with much less amplitude, much flatter than observed [59:00] IMPLICATION: Climate doesn’t change on its own; it just stays STABLE, unless something “pushes” it.

[59:20] Clear to Essex that the IPCC code overstabilized the climate models – in AR4 IPCC trumpeted their “fake” energy flows to stabilize the system – appears to overstabilize –

[1:00:32] – There is no proof that climate is naturally stable – and there are good arguments that there are internal cycles making climate system change on its own without any external forcing

[1:01:12] – Climatological timescales demonstrated using timelapse photography – sun’s travel over 6 months – cars on road, people “invisible” Q: Would you see “weather” on this time scale? No one knows and there is no physical basis to say you can – [1:04:33] Niagara River using stop image, 1 image/15 sec. (a lot of events happening in the level 6 rapids in a short time, so, gives an idea of what it’s like to see only in terms of long timescales)

[1:05:30] created a 3-minute average (as if only seeing once every 3 minutes what is happening) – invariance introduced, “flattened” implication: physics of what is being observed changes – [1:06:12] need to formulate the physics of long versus short (more information) time-scale activity.

[1:06:38] “There are no experts on what nobody knows. So, the whole idea of using ‘experts’ to decide on matters of this type is completely foolhardy, because there really aren’t any experts on it.”

[1:07:06] 8 Main Points – Summary
 
1. Solving the closure problem.
 
2. Computers with infinite representation.
 
3. Computer water and cultural physics.
 
4. Greenhouses that don’t work by the greenhouse effect.
 
5. Carbon-free sugar.
 
6. Oxygen free carbon dioxide.
 
7. Nonexistent long-term natural variability.
 
8. Nonempirical climate models that conserve what they are supposed to conserve.

Janice Moore
Reply to  Janice Moore
August 25, 2023 2:05 pm

Christopher Essex lecture:

Reply to  Janice Moore
August 26, 2023 3:25 am

Thank you for sharing this. I watched it and took away a renewed sense of how to expose the unsoundness of attribution of reported warming to incremental CO2. It’s all about compressible fluid flow and how energy is convertible between kinetic energy and “internal + potential energy”. Atmosphere modelers know this. https://codes.ecmwf.int/grib/param-db/?id=162064

So I am putting something together on this point.

Janice Moore
Reply to  David Dibbell
August 26, 2023 10:48 am

Glad you enjoyed that. Thanks for saying so 🙂 Looking forward to your “something.” What you write is always a worthwhile read.

David Blenkinsop
August 25, 2023 2:59 pm

Wow, quite the analysis by Pat Frank! I’ve long paid considerable attention to conventional climate theory, at least wanting to know what the concern was all about. But, if they’ve no accurate way of monitoring the earth’s temperature, let alone predict anything, then why even bother?

As far as making a “best of WUWT” list, this one ought to go right to the top.

Reply to  David Blenkinsop
August 25, 2023 5:10 pm

And even more fundamental is that what they claim to study, the global average temperature, isn’t even really climate. The GAT rising by 0.2°C tells you absolutely nothing about what the local weather the next day might be.

Simon
Reply to  karlomonte
August 25, 2023 7:37 pm

And even more fundamental is that what they claim to study, the global average temperature, isn’t even really climate. The GAT rising by 0.2°C tells you absolutely nothing about what the local weather the next day might be.”
First rule … weather is not climate, so once again you make a clown of yourself without a suit.

Reply to  Simon
August 25, 2023 11:09 pm

Again, Simon makes an absolute goose of itself.

Shows it doesn’t understand the comment made.

Such a stupid little trollette

Reply to  bnice2000
August 26, 2023 2:51 am

He is an example of what happens to brain matter after studying Mao’s Little Red book.

Reply to  Simon
August 26, 2023 2:50 am

The resident marxist leftist slave master pontificates.

Reply to  karlomonte
August 26, 2023 8:20 am

Only two downvotes? Shirley you dweebs can do better than this, get busy, it is all you are good for.

Reply to  Simon
August 26, 2023 5:38 am

Weather *is* climate – when looked at over a period of time. It’s the very definition of climate. The weather in a desert climate is different than weather in an equatorial rain forest climate. Weather *is* climate.

Utter fail once again Simon.

Simon
Reply to  Tim Gorman
August 26, 2023 1:36 pm

Nope. They are different things studied by different people in different fields.

Reply to  Simon
August 26, 2023 6:23 pm

No, it is what the Koppen Classification index is built on, LINK

In my area it is classified as BSk yet the weather varies over the decades while the climate is the same the entire time.

You really should slow down.

Simon
Reply to  Tim Gorman
August 26, 2023 1:49 pm

Here I will help you…
“Weather is what you experience when you step outside on any given day. In other words, it is the state of the atmosphere at a particular location over the short-term. Climate is the average of the weather patterns in a location over a longer period of time, usually 30 years or more.”

Reply to  Simon
August 26, 2023 4:09 pm

Copy paste with zero comprehension.

Well done simpleton !

YOU are the one constantly whinging and whining about WEATHER events.

You do know that apart from slight warming from El Ninos, there has been no warming at all in the satellite temperature data, don’t you.

No changes in extreme events .. no changes in anything much at all.

And certainly no evidence of human released CO2 doing anything except increase plant growth.

And of course, that slight warming since the LIA has been massively beneficial to all humanity, bringing great leaps in wealth, health and general survivability, thanks to the use of fossil fuels and other modern technologies based around solid reliable energy supplies.

Why continue to display your gullibility and ignorance?

Reply to  bnice2000
August 26, 2023 4:37 pm

Why continue to display your gullibility and ignorance?

TDS Simon has a special gift here.

Simon
Reply to  bnice2000
August 26, 2023 6:11 pm

Yawn. Just buy yourself a rubber stamp with…”I am a climate change denier.”

Reply to  Simon
August 26, 2023 6:55 pm

The marxist leftist clown digs deep and pulls out the “denier” label. So erudite.

Simon
Reply to  karlomonte
August 26, 2023 10:19 pm

See the problem you have is you spend your whole time calling people names, so you being upset about being called a legitimate name is beyond funny, in fact it is just ludicrous. In fact cry me a river snowflake….

Reply to  Simon
August 26, 2023 10:47 pm

Stop whining, marxist leftist clown.

Simon
Reply to  karlomonte
August 26, 2023 11:45 pm

I rest my case climate denier.

Reply to  Simon
August 29, 2023 4:29 am

“Climate is the average of the weather patterns”

What’s the operative word in that phrase? I’ve highlighted for you.

Reply to  Tim Gorman
August 29, 2023 7:17 am

I’ve been labeled a “climate denier” by a loony Marxist-Maoist from the People’s Republic of New Zealand!

Should I put this in my resume?

David Blenkinsop
Reply to  David Blenkinsop
August 25, 2023 10:31 pm

I just thought I’d follow up on my earlier quick comment with a slightly more technical observation, looking at what I see as a key concept near the beginning of the head posted video here.

If you look at the point in the video, at 10:35 or so, where Pat Frank talks about the “uncertainty in forcing of the troposphere” and also about putting this error into “the art of the equation”, something more or less ‘scandalous’ is revealed then! Going on just a bit further, it is said that “..each step in an iterative calculation carries forward the error of the previous step..”, following which the presentation proceeds with a ‘diverging error’ kind of graph. This graph illustrates that over about a 120 years worth of doing iterations of a typical climate model you could easily get a whopping plus or minus 15 degrees C of uncertainty in the predicted temperature!

Now, if I’m not misreading the situation, it seems to me that this is actually a bit of understatement of the basic problem here?

Look, if one had some sort of simplified climate model, where the need to iterate, in steps, was under control (in the sense that one’s mathematical iterations were to actually *converge* over some reasonable time base, like converging over the course of some years or decades), *that* would at least be a proper bit of math? Add in some tendency for things to change in some progressive and/or ‘forcing responsive’ way, and you could at least *try* to make a simplified prediction about future temperature, or precipitation maybe, or what have you.

What Pat Frank’s discussion appears to reveal (and correct me if I’m wrong) is that conventional climate models *don’t* converge in their iterative “art of the equation”! Rather, on some relevant time base, these models actually *diverge*.

Now, in any sort of basic algebra that I’ve ever heard of, if an iterative formula *diverges* that is strictly a situation of the “solution does not exist”.

That is to say, these conventional climate theory models are not even proper *math*, let alone a prediction of anything real.
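The convergence/divergence distinction above can be made concrete with a one-line iteration (my toy example, not a climate model): x_{n+1} = a·x_n + b converges to the fixed point b/(1−a) when |a| < 1, and diverges when |a| > 1 — and a diverging iteration is exactly the “solution does not exist” case.

```python
def iterate(a, b, x0=1.0, n=50):
    # Apply x -> a*x + b repeatedly; each step scales the distance to
    # the fixed point b/(1-a) by a factor of |a|.
    x = x0
    for _ in range(n):
        x = a * x + b
    return x

print(iterate(0.5, 1.0))  # converges toward the fixed point 1/(1-0.5) = 2
print(iterate(1.5, 1.0))  # blows up: each step amplifies the previous error
```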

Reply to  David Blenkinsop
August 26, 2023 6:55 am

I would only add that the uncertainty interval doesn’t tell you what the true value might be. It is only an indicator of what you don’t know. As long as the values given by the model exist within the uncertainty interval, you can’t know if the models are actually converging or diverging. You simply don’t know.

Once your uncertainty interval exceeds the differential you are trying to discern, the model is done. There isn’t any use in carrying it forward because the unknown overpowers the differential.
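As a toy illustration of that cutoff (my own sketch with made-up numbers, not drawn from any model), one can compute the step at which a growing root-sum-square uncertainty first swamps a differential of interest:

```python
# Hedged sketch: find the step at which accumulated root-sum-square
# uncertainty first exceeds the differential being sought.
# Both u_step and differential are hypothetical illustration values.
import math

u_step = 0.1        # hypothetical per-step uncertainty, K
differential = 0.5  # hypothetical trend signal to discern, K

n = 1
while math.sqrt(n) * u_step <= differential:
    n += 1

print(f"uncertainty exceeds the differential after {n} steps")
```

With these numbers the accumulated uncertainty overtakes the differential at step 26; past that point, carrying the iteration forward tells you nothing about the signal.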

David Blenkinsop
Reply to  Tim Gorman
August 26, 2023 8:53 am

The “Nobody Understands Climate” video also goes into more detail about how climate models can be carefully ‘parameter tuned’ to match past climate without their hindcasts of past climate diverging. Then, if you remove the tuning and allow the parameters to vary, the model results ‘zoom off’ in all directions again. Read closely, Pat Frank’s linear formula with propagated error (the formula at the lower left in the video, at 14:50) is then a simplification of what happens in the models when even slightly varying parameters are allowed.
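A rough sketch of that linear-emulator idea follows. The coefficients below are my reading of Frank (2019) – a CO2 forcing fraction of about 0.42, a net greenhouse temperature effect of 33 K, total greenhouse forcing of about 33.3 W/m², and an annual long-wave cloud-forcing calibration error of ±4 W/m² – so treat them as assumptions for illustration, not an authoritative restatement of the paper.

```python
# Hedged sketch of the linear-emulator uncertainty propagation.
# All coefficient values are assumptions taken from my reading of
# Frank (2019); they are illustrative, not authoritative.
import math

f_co2 = 0.42      # assumed fraction of greenhouse forcing from CO2
T_green = 33.0    # assumed net greenhouse temperature effect, K
F0 = 33.3         # assumed total greenhouse forcing, W/m^2
u_F = 4.0         # assumed annual cloud-forcing calibration error, W/m^2

# Per-step temperature uncertainty implied by the forcing uncertainty:
u_T = f_co2 * T_green * (u_F / F0)

def projected_uncertainty(n_years: int) -> float:
    """Root-sum-square propagation over an n-year projection."""
    return math.sqrt(n_years) * u_T

print(f"per-step: +/-{u_T:.2f} K; 100-yr: +/-{projected_uncertainty(100):.1f} K")
```

With these assumed numbers the per-step uncertainty is about ±1.7 K, and over a century it compounds to roughly ±17 K, which is the same order as the diverging-error envelope discussed above.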

So I’d say that you are correct to say that “you can’t know if the models are actually converging or diverging”.

Reply to  David Blenkinsop
August 26, 2023 9:50 am

“So I’d say that you are correct to say that ‘you can’t know if the models are actually converging or diverging.’”

yep!

Reply to  David Blenkinsop
August 26, 2023 8:25 am

Kip Hansen has a very interesting group of articles on WUWT about climate models and chaos theory:

https://wattsupwiththat.com/2020/07/25/chaos-and-weather/

August 26, 2023 9:32 am

YT used to have a global URL scrub policy. Today that is the Channel Owner’s configuration setting.

Most leave it default. Complained at numerous channels – no replies.

August 26, 2023 10:13 am

WUWT seems slow and does this. No problem with other sites…
429 Too Many Requests

You have been rate-limited for making too many requests in a short time frame.

Website owner? If you think you have reached this message in error, please contact support.

August 26, 2023 1:48 pm

Pat, I have been late to this thread but have added some comments.

I want to congratulate you on a fine interview and a fine paper. I have certainly learned a lot and had some concerns verified.

Keep up the good work. You have many here that can follow what you are doing and appreciate your work.

Many of us have been horrified at the statistical ineptness displayed in climate science. Your work uncovers some of it!

Many thanks,

Jim Gorman

Reply to  Jim Gorman
August 26, 2023 8:51 pm

Thanks for the kind words, Jim. Your contributions have been greatly appreciated. You and cousin Tim have been tireless in the contest of ideas.

OK, you, too KM. 🙂

Reply to  Pat Frank
August 26, 2023 11:07 pm

After skimming the link that bgwxyz threw in your face below, filled with nonsense and vitriol from the climastrologers, I can only imagine the struggle you’ve had to endure to get the truth out there. Words fail me.

Reply to  karlomonte
August 28, 2023 8:00 am

Thanks KM. We’re all compelled to the defense.

August 26, 2023 1:49 pm

I strongly suspect we are dealing here with a Climate ChatGPT – the previous trolls have deferred to AI.
Training of vast language models must have been done, probably by scraping WUWT and all the climate sites (hammering them as in a DoS attack). Musk is suing because Twitter was scraped for AI illegally.
Not to be sniffed at – AI is a Pentagon strategic imperative: Palantir AIP: Defense and Military
https://www.youtube.com/watch?v=XEM5qz__HOU
This AIP, integrated with OpenAI and Bard, from billionaire Thiel (a major GOP donor), is ready to both generate and conduct military operations.
Palantir CEO Karp told WaPo back in February that AIP had already been used in Ukraine.
Obvious question – what if a chatbot gave firing orders and the Pentagon only found out later?
This is likely why Musk and others call for a moratorium! There is a bill in Congress to block AI from nuclear launch authority.

It was only a matter of time. Now how to identify a ChatGPT Climate AIP?

Reply to  bonbon
August 26, 2023 2:08 pm

It gets worse – someone asked about kids and ChatGPT – well, here is where they go:
https://graziamagazine.com/me/articles/love-letter-chatgpt/
And that is not even mentioning medical-response policy by ChatGPT.
How about an election-teleprompter ChatGPT?

Reply to  bonbon
August 26, 2023 2:21 pm

Here is a list of AI articles on this problem, 13 report pages:
https://asiatimes.com/2020/06/why-ai-isnt-nearly-as-smart-as-it-looks/
Not sure yet if climate is mentioned.

Reply to  bonbon
August 26, 2023 6:59 pm

Now how to identify a ChatGPT Climate AIP?

It does not know when it is blowing smoke. Therefore, the number of factual errors is probably a function of the number of words.

Reply to  Clyde Spencer
August 28, 2023 7:55 am

The usual estimate is the square root of the number. 🙂

So: 13 pages, 1000 words per page, √13,000 ≈ 114 errors.

One is reminded that AS is the other side of AI. Artificial Stupidity.

Reply to  bonbon
August 28, 2023 11:19 am

I will be submitting an article on measurement uncertainty and some AI answers.

The AI responses are interesting. Perhaps the biggest failing is exactly what climate scientists and others ignore in using statistics – the requirements that must be met before various statistical tools can be used.

One of the biggest problems is repeatable conditions when evaluating experimental uncertainty. None of the AIs mention this necessary condition.

From JCGM 100:2008

“B.2.15 repeatability (of results of measurements) closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement

NOTE 1 These conditions are called repeatability conditions.

NOTE 2 Repeatability conditions include:
— the same measurement procedure
— the same observer
— the same measuring instrument, used under the same conditions
— the same location
— repetition over a short period of time.

NOTE 3 Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results. [VIM:1993, definition 3.6]”

Does anyone think these conditions are met when averaging measurements from different stations at different times?

Reply to  Jim Gorman
August 28, 2023 8:47 pm

None of them apply.

Also, the variance at each station is different, and none of the measurement errors are stationary.

Reply to  Pat Frank
August 30, 2023 8:05 am

So many entries, it is hard to get to them all. That is exactly the point: none of the conditions are met, so at best the uncertainty propagation equations are worthless.

NIST TN 1900 at least addressed some of the big issues.

“— repetition over a short period of time.” is a big one. Even over a month is probably pushing it.

bdgwx
August 26, 2023 2:22 pm

I was just told that this arbitrary addition of year-1 to the W m-2 units from Lauer & Hamilton 2013 has already been hashed out with Pat Frank. I’m disengaging from this particular discussion with Pat Frank as I now see that no amount of explanation will ever convince him of the error.

https://pubpeer.com/publications/391B1C150212A84C6051D7A2A7F119

Reply to  bdgwx
August 26, 2023 7:58 pm

“I’m disengaging from this particular discussion with Pat Frank as I now see that no amount of explanation will ever convince him of the error.”

Good decision. You’re a young man, with better uses for your time. After all, it has already been shown that dead-enders here will be mercifully ignored, and will therefore have NO deleterious, superterranean impact.

Thanks to the Imaginary Guy In The Sky, the otherworldly heat dome has moved on. You can go see Whiskey Drinkin’ in St. Charles tomorrow. We had supper on the Hill with the in-laws within the last hour, and we’ll be e-biking up to the Festival of Nations at Tower Grove Park tomorrow. Back to normal forthwith…

Reply to  bigoilbob
August 28, 2023 7:34 am

Good decision.

Do follow suit, bob.

And take your declarations of victory with you. I’m sure they’ll be a comfort.

Reply to  bdgwx
August 26, 2023 9:02 pm

bdgwx, I’m not at all surprised that you’d be comforted by unsupported declarations congruent with your ignorant beliefs.

Error indeed.

L&H (2013): annual means

bdgwx: annual doesn’t mean per year.

Please do bail. And don’t let the door hit you on the way out.

Reply to  bdgwx
August 26, 2023 10:52 pm

In other words, bgwxyz is just parroting all the errors the climastrologers committed.

Reply to  bdgwx
August 26, 2023 10:56 pm

From your link:

Pat Frank wrote:

“If you look at those reviews, Paul, you’ll find that those reviewers:

  1. Think that precision is accuracy
  2. Think that a root-mean-square error is an energetic perturbation on the model
  3. Think that climate models can be used to validate climate models
  4. Do not understand calibration at all
  5. Do not know that calibration error propagates into subsequent calculations
  6. Do not know the difference between statistical uncertainty and physical error
  7. Think that “±” uncertainty means positive error offset
  8. Think that fortuitously cancelling errors remove physical uncertainty
  9. Think that projection anomalies are physically accurate (never demonstrated)
  10. Think that projection variance about a mean is identical to propagated error
  11. Think that a “±K” uncertainty is a physically real temperature
  12. Think that a “±K” uncertainty bar means the climate model itself is oscillating violently between ice-house and hot-house climate states.”

“Those are mistakes to be expected of a college freshman who never took a high school science course.”

“Never, in 30 years of publishing research in chemistry, have I ever encountered such incompetence so often repeated.”

All of these apply to you and bellcurveman, amazing.

JoeG
August 27, 2023 8:03 pm

I know what intelligent design is. What is “the intelligent design myth”?

Reply to  JoeG
August 27, 2023 10:47 pm

It’s this, Joe. But this thread is not the place for a conversation about it.

August 28, 2023 4:05 am

Wow. The detractors here are missing the obvious problem arising from the concepts of uncertainty and error that Pat Frank is applying to the climate (i.e. surface air temperature) projections.

Do a Google search on “NASA CFD quantification of uncertainty” and read about how this problem is addressed when the outcome of a simulation depends on iterative computation. (CFD = “computational fluid dynamics.”)

Here is an example of what you will find from the field of advanced aerodynamics:

https://ntrs.nasa.gov/api/citations/20180000520/downloads/20180000520.pdf

From the abstract – “Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process.”

And from the summary – “An enabling element in the application of this uncertainty quantification framework was the development of a metamodel [i.e. an emulator – dd] for the propagation of uncertainties. This resulted in a major cost savings with regard to the computational time required to perform the uncertainty analysis.” 

So what? SAME THING in climate analysis (diagnosis of the past and projection into the future) using iterative computation in a model. An emulator can help to quantify the uncertainty to be expected.

Much appreciation to Pat Frank for doing us all a valuable service by having formally exposed this problem in “climate” studies.

Reply to  David Dibbell
August 28, 2023 5:47 am

This is exactly right, it is the same thing. The detractors don’t understand what they think they understand.

Reply to  David Dibbell
August 28, 2023 7:52 am

Thanks David. The Baurle and Axdahl report looks very useful. It provides an outstanding precedent for the emulation approach. I’ve only scanned it but will go through more carefully. Great find!

I saw, too, that they reference Roy and Oberkampf (ref. 3) for their approach to uncertainty. I cited the same paper to establish the meaning of uncertainty in (2019) Propagation…, with a long discussion in the Supporting Information.

Thanks also for your kind words. It gets a bit lonely sometimes.

sherro01
Reply to  Pat Frank
August 28, 2023 4:11 pm

Pat,
At the end of 2022, WUWT kindly published a 3-part series on uncertainty that I wrote, with Tom Berger as co-author of the last part.
There were over 800 comments on my part two, so you are catching up to that total. But your contribution is much more didactic than mine, which was largely in questioning mode. Yours, in answering mode, is more valuable.
You mention loneliness. Yes, I felt that also. Too many of the comments on my articles were unthinking, knee-jerk recitals of dogma, whose inaccuracies had prompted me to write in the first place. They failed to advance understanding of the topic, but did tell more about the competence and experience of the commenters.
So, sincere thanks to you (once more) for publicising concepts that have the power to put uncertainty back into proper perspective.
Geoff S

Reply to  sherro01
August 28, 2023 5:10 pm

Geoff: What is absolutely mind-boggling to me is how these ignorant climate types try to lecture Pat about the subject, telling him he’s wrong!

Reply to  sherro01
August 28, 2023 8:44 pm

I saw your posts and was grateful for them, Geoff. At the time I had little free capacity to comment.

My best to you.

Reply to  David Dibbell
August 29, 2023 1:05 pm

David,

Very impressive. The reference paper reminds me of why I, and probably most of my classmates, ran for our lives once we obtained our undergrad ChE degrees. Also interesting that at least some within NASA think there is merit in analyzing uncertainty before launching a bunch of prototypes over populated areas, versus the agency as a whole, which appears to be fully in favor of scrapping conventional energy sources for the entire world without benefit of same.

Reply to  Frank from NoVA
August 29, 2023 3:12 pm

Agreed. Same for me with a BSME. No regrets. I read the entire paper, and I think I got the gist of it, but not much more. And of course the technical subject differs greatly from climate analysis. Still, the idea of uncertainty propagation through an iterative computation, and the use of an emulator to do so, is very clearly presented.