Daisy-Chained Uncertainties

Guest Essay by Kip Hansen — 17 January 2023

What are the chances, huh?  Have you ever heard anyone say that?  I sure have.  Of course, they often ask because they haven’t a clue about what “chance” is or how to arrive at a practical idea of “what the chances are”.

Most of us understand that flipping a coin has a 50/50 chance of coming up heads.  It also has a 50/50 chance of coming up tails.  We understand that our individual chance of getting hit by lightning is extremely small.  Despite the truth of that, I have a close relative who has been hit by lightning twice – and survived both times.  What are the chances of that?!

[Struck by Lightning: Estimates vary widely, but in the United States: “According to the National Weather Service, a person has a 1-in-15,300 chance of getting struck by lightning in their lifetime, defined as an 80-year span.”  And “The odds of being struck in a given year are closer to that one-in-a-million mark, though: 1 in 1,222,000.”]

When we speak of “chances”, we really mean probability, which is a subject so wide and wild that the statistician William Briggs has written a 237-page book as an introduction to it.

Regular readers will know that I am a die-hard pragmatist — a practical person.  If it isn’t true when I stub my toe on it, then I don’t care much.   This means I lean towards working engineers and away from academics of all sorts when the subject is something I can see and touch. 

My favorite professional statistician is William M. Briggs.  We share a background that includes such divergent topics as cryptology and stage magic.   He occasionally publishes something of mine.

He introduced the multiplication of uncertainties in a blog post last June titled:  “Why You Don’t Have To Worry About Climate Change: Multiplication Of Uncertainties”.  He has given me permission to extensively quote that post.

Briggs wrote of what happens with probability when several uncertain things have to happen at the same time.  But what we often have to consider is daisy-chained uncertainties.

What are daisy-chained probabilities?  Something like this:  If my black cat, who has a fifty/fifty chance of coming home tonight, does come home and then encounters my son’s dog, who has a 1-in-four chance of being unexpectedly dropped off for me to dog-sit overnight, and given that the dog is totally intolerant of the cat on one-out-of-every-five days, what is the chance that there will be a chaotic dog-and-cat fight in my home this evening?  

This type of scenario can be stated:  “If this, and then this, and then if this then that.”  The events have to take place in a specified order, each one having its own probability. 

[There is only a 2.5% chance of chaos.  I would be willing to take the chance (and suitable precautions).  Surprised?]
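The arithmetic behind that 2.5% can be checked in a few lines of Python – a minimal sketch, assuming (as the example implies) that the three events are independent:

```python
# Daisy-chained probabilities for the cat-and-dog scenario above.
# Assumed independent events (a simplification):
p_cat_comes_home = 0.5    # 50/50 the black cat comes home tonight
p_dog_dropped_off = 0.25  # 1-in-4 the dog is dropped off unexpectedly
p_dog_intolerant = 0.2    # 1-in-5 days the dog won't tolerate the cat

# For the chaos to happen, ALL three must happen, so multiply:
p_chaos = p_cat_comes_home * p_dog_dropped_off * p_dog_intolerant
print(f"Chance of a dog-and-cat fight tonight: {p_chaos:.1%}")  # prints "2.5%"
```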

Of course, Briggs does not use such a “householder’s” example.

Briggs says this:

“While it is logically possible that slight changes in the average weather will cause only misery, and do no good whatsoever, it is scarcely likely. Indeed, it is absurd and proves “climate change” is part superstition, part scam, part bad science.”

“Our archetype statement has three parts: 1) the threat of “climate change”, 2) the bad event, and 3) the promise of “solutions”. We are meant to take the thing as a whole, as if the whole were as certain as the most certain part. Rather, as more certain than the most certain part.”

The key is “that we are meant to take…[it] as if the whole three part proposition is as certain as the most certain part.”

Here’s a recent news sample:

“Phil Trathan: Emperor penguins breeze through the Antarctic winter, and they need sea ice as a stable platform, so they really depend upon the sea freezing and forming a firm base. And as temperatures increase in the Antarctic, then we will see the sea ice disappear. And that means then the Emperors will have no place to breed.“

Take a deep breath…yes, I know that is absurd.  But it is an example of a CliSci-madness daisy-chain statement:  IF “temperatures increase in the Antarctic” then IF “sea ice disappears” Then “Emperors will have no place to breed”. 

“If temperatures increase in the Antarctic” means temperatures getting high enough to threaten winter sea ice formation:

The highest maximum temperature (monthly average) recorded at either Vostok or the South Pole is minus 26 centigrade. Antarctic experts know that sea ice is always present in the southern winter, when Emperor penguins must come ashore to lay eggs and raise chicks.  Emperor penguins do not nest on ephemeral sea ice; they nest on the solid fast ice or on the ice-covered rock of Antarctica.  They do, however, often need land-fast sea ice to leave the water and get up onto the land, depending on the configuration of the shoreline.

That’s an example of how bad the propaganda can get, but let’s see Briggs’ examples:

Below, Briggs is referring to this statement: “Our archetype statement has three parts: 1) the threat of “climate change”, 2) the bad event, and 3) the promise of “solutions”.”

“But that added certainty is impossible. All three parts of the statement have their own uncertainties attached to them. If we consider the statement as a whole, then these uncertainties must be multiplied, more or less, resulting in a whole that is vastly more uncertain than any individual part.”

Now he introduces an everyday example: [some emphasis mine – kh]

“This coin will come up heads, [then] I’ll roll greater than a 3 on this die, and [then] draw an eight of hearts from this deck.”

Never forget! All probabilities are conditional, meaning we have to supply evidence from which to calculate them. Here, I’ve chosen common evidence sets. We have to assume these for each of the three parts of this scenario. For the coin flip, we’ll use “Here is an object which when flipped can show only heads or tails”. From that we deduce the chance of heads is 1/2.

And so on for the others. We get 1/2 for the flip, 1/2 for the die roll, and 1/52 for the card draw, all assuming standard evidence. For the entire scenario to be true, we need to get all three. The probabilities multiply: 1/2 x 1/2 x 1/52 = 1/208, which is about 0.005.”  [more precisely, about 0.0048 – roughly ½ of 1%]
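Briggs’ coin–die–card chain can be sketched exactly with Python’s `fractions` module, which avoids rounding until the very end:

```python
from fractions import Fraction

# Briggs' three-part scenario: heads, then die > 3, then the eight of hearts.
p_heads = Fraction(1, 2)    # fair two-sided coin
p_die_gt3 = Fraction(3, 6)  # 4, 5 or 6 on a fair six-sided die
p_card = Fraction(1, 52)    # one specific card from a standard deck

# All three must come true, so the probabilities multiply:
p_all = p_heads * p_die_gt3 * p_card
print(p_all)         # prints "1/208"
print(float(p_all))  # ≈ 0.0048
```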

Briggs started with a news story (“There’s a Study!”) which he summarizes as “Because of the climate crisis, coffee production in Africa will decrease, which is why our political solutions need to be put in place.”

I picked these examples because I think they’re in the same ballpark as our coffee “climate change” scenario, though the evidence sets are trickier. Let’s step through each of the parts of the scenario to see how statements like this should be tackled.

1) The threat of “climate change”. I take this to mean Expert models predicting “large” “climate change” are accurate or the climate changes on its own, for reasons (at least in part) other than encoded by Experts in their models. Given Experts have been predicting weather doom since the 1970s, first that it would be too cold, then that it would be too hot, then that it would just be too different, and they’ve been wrong every time so far, I’m not too keen on Expert models. But I also figure that the earth’s climate has been both hotter and cooler, wetter and drier, sunnier and cloudier in the past, so it can be so again.

There is no numerical value for the probability that can be deduced from this evidence. It is too vague. But that doesn’t mean it is not useful. If pressed for a number, it is not too far, in my mind based on this evidence, from 50-50.

2) The bad event. Maybe coffee production in Africa would decrease under changed weather, or maybe it wouldn’t. Saying it will decrease is the result of another model by Experts. Who haven’t done at all well with agriculture forecasts.

Again, no numerical probability can be deduced. But I’m feeling generous, so call it 50-50 again. (Really, I believe it’s less, but I don’t want to change our example.)

3) The promise of “solutions”. Expert “solutions” here would be twofold: stopping the climate from changing, and ameliorating reductions in coffee production given the climate has changed in a direction to harm production.

This one is even trickier because some of the same evidence is used in (3) and in (1); namely, that about Experts’ climate models. This makes the multiplication trick strictly wrong. However, it’s not too far off, either, especially because Expert “solutions” for complex situations stink, stank, stunk. That one in fifty-two is being generous.

[The resulting chance of the daisy-chain for coffee doom, as calculated above is about 0.005 – or 1/2 of 1%.]

The end result is I’m not worried about “climate change”, not nearly as worried as I’d be about adopting Expert “solutions”, which in my estimation would only make things worse, or much worse.”

My opinion, which I share with Briggs (more or less): all of the CliSci-predicted Bad Effects share these types of daisy-chain dependencies and probabilities.

Think of it in terms of the “IPCC likelihood” scale: 

As we can see, something that is stated to be “Likely” is between 66% and 90% ‘probability’ – converted to decimal fractions, 0.66 to 0.90.  [These probabilities have not been calculated but determined by polling the expert opinions of those serving on the IPCC committee overseeing the relevant chapter of the reports – and many times changed, I understand, by the various National Representatives who must approve each likelihood statement.]

What happens when just two such “Likely” statements are daisy-chained by dependency?

I’ll use two differing points within the range of “Likely” – 0.70 and 0.85:

If “Likely #1: then if “Likely #2” then Result (in likelihood)

0.70 x 0.85 = 0.595

Two daisy-chained “Likely” events no longer make the “Likely” range – at 0.595, the product falls below the 66% threshold into “More Likely Than Not”.

But look what happens when we need three Likely outcomes all to happen:

0.75 x 0.80 x 0.85 = 0.51

Basically 50-50 – in the “About as Likely as Not” range.

The two categories, “About as Likely as Not” and “More Likely Than Not”, overlap – the lower category being “33-66% probability” and the higher being “> 50% probability”.

Once events drop into the “About as Likely as Not” range, daisy-chaining three of them produces “Unlikely” outcomes.

0.4 x 0.5 x 0.6 = 0.12

Three daisy-chained events at the lower end of “About as Likely as Not”:

0.35 x 0.35 x 0.35 = 0.043

“Extremely Unlikely”
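The likelihood arithmetic above can be sketched in a few lines of Python. The band boundaries below are my own non-overlapping reading of the IPCC scale quoted in this essay (the official categories overlap, as noted above), so treat the labels as approximate:

```python
# Collapse the overlapping IPCC likelihood bands into non-overlapping
# bins (an assumption of this sketch; the official categories overlap).
def ipcc_band(p: float) -> str:
    if p > 0.90:
        return "Very likely"
    if p > 0.66:
        return "Likely"
    if p > 0.33:
        return "About as likely as not"
    if p > 0.10:
        return "Unlikely"
    if p > 0.05:
        return "Very unlikely"
    return "Extremely unlikely"

def chain(*probs: float) -> float:
    """Multiply the probabilities of daisy-chained events."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# The four examples from the essay:
for probs in [(0.70, 0.85), (0.75, 0.80, 0.85),
              (0.40, 0.50, 0.60), (0.35, 0.35, 0.35)]:
    p = chain(*probs)
    print(probs, "->", round(p, 3), ipcc_band(p))
```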

Bottom Line:

1)  All of the probabilities of CliSci future disasters suffer from the failure to calculate them by multiplying the fractional probabilities of their necessary components.  [Multiplying produces an approximation, good enough for pragmatists.]

2)  Note that the probability of some disaster decreases substantially when multiple conditions must take place in a particular temporal order – as in If First This, then If Next This, then Maybe This.  The above examples only cover the probability that all the conditions happen, without regard to order.  Introducing a new condition – temporal order – necessarily decreases the probability further.

3)  This means that for Climate Science, IPCC-style predictions based on climate models with very wide spreads (think: models predicting future global average temperature), where no probabilities are stated – just a wide interval of possible values – we have to re-think all of the IPCC predicted outcomes.  Why?  Because the probabilities of all the predicted consequences must be at least roughly calculated by multiplying the probabilities of the conditions that lead to those consequences.

4)  We can ignore any press release or statement that presents a predicted disaster beginning with “If temperatures continue to rise….”.  This idiocy invariably means “If the Global Mean Surface Temperature continues to rise….” – but that is not the same as “if temperatures continue to rise here”: “If the temperature above 6000 feet at Mount Hood rises high enough to prevent snowfall…”  (see this essay).
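Point 2 above – that requiring a particular temporal order lowers the probability even further – can be illustrated with a small Monte Carlo sketch (my own construction, not from the essay). If three independent events each have probability 1/2 and occur at independent random moments, demanding one specific order cuts the joint probability by a further factor of 3! = 6:

```python
import random

random.seed(42)

p = (0.5, 0.5, 0.5)  # probability of each condition occurring
trials = 200_000
all_happen = 0       # all three conditions occur, in any order
in_order = 0         # all three occur AND in the specified order

for _ in range(trials):
    happened = [random.random() < pi for pi in p]
    if all(happened):
        all_happen += 1
        # give each event an independent random time of occurrence
        t1, t2, t3 = (random.random() for _ in range(3))
        if t1 < t2 < t3:
            in_order += 1

print(all_happen / trials)  # ≈ 0.125  (= 0.5 x 0.5 x 0.5)
print(in_order / trials)    # ≈ 0.021  (≈ 0.125 / 6)
```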

# # # # #

Author’s Comment:

Uncertainty is tricky: it is uncertain; it can be complex, it can be complicated, and it can be chaotic (as a cause or an effect).   There are those who think we can safely corral uncertainty within the fences of statistics.   But this idea is used as a pacifier to keep us from facing the real uncertainty of the world around us.

Statistical approaches are alluring – they lull us into feeling that we have it all under control, bringing a sense of certainty in place of uncertainty.  Given the world-as-it-is, this may be necessary for our sanity.

This, I fear, is but another version of something akin to Propter Nomen  — if we can label it “uncertainty bars” or “standard deviations” or “error bars”  or, and I like this one, “confidence interval” (which implies that it isn’t that nasty uncertainty, but rather we are confident about it), then we are no longer uncertain.  All of those “uncertainty” substitutes are simply corrals into which we hope we have confined all the uncertain parts of our problem.

There is and will always be some uncertainty in measurements and calculations.  The more different measurements and calculations involved, the greater the uncertainty becomes.

Thanks for reading.

# # # # #

138 Comments
AndersV
January 18, 2023 12:46 am

Briggs brings up nested probabilities, the ones that really mess this stuff up. And this stuff is complicated well before we start nesting.

Seeing systems as being either serial or parallel is what we engineers try to do. Many systems are neither, but exhibit enough consistency that they can be reasonably broken into such strings.

The problem Briggs is describing is what Judy Curry states as a “wicked problem” when she is dealing with uncertainty in climate. The problem is that you have events that have a probability that depends on its own probability under different circumstances. And the relationships of the events are themselves dependent on the sequence in which they occur, which in turn affects their probabilities.

Messy stuff.

January 18, 2023 5:30 am

A most excellent insight! I have often had to correct colleagues on how efficiency multiplies, but I had not considered that probabilities also must multiply!

For example, banning natural gas for cooking or heating is ludicrous: it actually burns more fossil fuel to run electric cooking and heating appliances than to burn the methane directly. A condensing gas furnace is about 90% efficient, or 0.9. That is, for every 10 units of energy content in the methane, 9 units of thermal energy are delivered to your home or business. But if you use electric, the gas turbine burning that same methane to drive alternators supplying the grid is at best 45% efficient (0.45). Grid losses are as much as 5-10% – let’s say grid distribution is 95% efficient (0.95). And the electric furnace is close to 100%, as any waste heat from the blower motors is used, so 0.99 for the furnace.

Now you need to multiply the chain of efficiencies for electric heating: 0.45 x 0.95 x 0.99 = 0.423, or 42.3% efficient! So banning natural gas heating causes people to burn 0.9/0.423 = 2.13 times as much natural gas as they would with condensing gas furnaces. That is 113% more fuel burned to have electric heating instead of a good gas furnace!
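The commenter’s chain of efficiencies can be sketched in Python, using the efficiencies assumed in the comment (the figures are the commenter’s, not verified values):

```python
# Compare a condensing gas furnace with the gas -> electricity -> resistance
# heat chain, using the commenter's assumed efficiencies.
gas_furnace = 0.90       # condensing gas furnace

turbine = 0.45           # gas turbine driving alternators
grid = 0.95              # transmission/distribution losses of 5%
electric_furnace = 0.99  # electric furnace, including blower waste heat

# Chained efficiencies multiply, just like chained probabilities:
electric_chain = turbine * grid * electric_furnace
print(f"Electric heating chain efficiency: {electric_chain:.3f}")        # prints "0.423"
print(f"Fuel burned vs. gas furnace: {gas_furnace / electric_chain:.2f}x")  # prints "2.13x"
```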

The same analysis can be applied to running an electric vehicle – by the time you factor in hydrocarbon electricity generation with grid loss, battery charging/discharging loss, and drive motor efficiency, you end up with about 32% of the initial hydrocarbon energy content for your virtue-signalling electric vehicle. And modern small IC engines are now between 30-35% efficient at burning the stuff directly!

January 18, 2023 6:43 am

I’m absolutely in full agreement with you, Kip!

An excellent article and very lucid writeup.

We’ve had uncertainty discussions here on WUWT.
Few alarmists accept what is necessary to truly calculate uncertainty, or they prefer to agree with alarming public doom predictions. No error bars or uncertainty calculations needed.

Except that no model actually defines every criterion necessary (I’ll use the emperor penguins) for penguins to survive every day in Antarctica. Or for any other allegedly endangered animal.

Every day, every feeding, every conception, every birth, temperature, storms… i.e., every possible daily endangerment to emperor penguin survival has its own certainty/uncertainty calculation, whose combined result defines emperor penguin doom odds.

Instead, alarmists research something they deem important, ignore even the one uncertainty result necessary for that condition, then personally decide that they ‘think’ penguins are doomed.
Another uninformed narrow view is announced to the press.

“Most of us understand that flipping a coin has a 50/50 chance of coming up heads. It also has a 50/50 chance of coming up tails.”

A necessary understanding with coin flipping is that the odds include someone flipping an astonishing number of heads or tails.
The total chance over many coin flips is 50/50. Individual runs of coin flips can range widely from 50/50.

As a teenager, I learned how to flip half dollars and silver dollars so that it would turn up a specific result (heads/tails) a horrifying number of times. Let’s just say that friends stopped asking me to flip a coin with them.

Leaving me very much skeptical about “coin flips” by possibly less honest interested parties. What constitutes a parlor trick can and will be used by those more mendacious.

Reply to  Kip Hansen
January 18, 2023 3:58 pm

Your best buddy was quite unlucky.

Reply to  ATheoK
January 18, 2023 11:44 am

There is also the non-zero chance the coin flip will end up on the edge, particularly if the surface is smooth and the coin has some angular momentum.

Reply to  karlomonte
January 18, 2023 3:56 pm

It’s happened twice to me. Both were quarters that quickly took off downhill, at a time when quarters were real money.

Both occurred because I insisted that flipped coins must hit the ground to count. That little bounce on the sidewalk helps prevent parlor trickery.

Another time I was watching others flip coins, when one hit on edge and the guy who flipped it immediately stomped it flat.
I was impressed by his reflexes. He’d apparently had enough coins dash away.

We lived on a hill at that time.

I had another friend who flipped a dime inside to decide a game move, missed catching the dime. The dime bounced then rolled slowly in a circle.

I left while they were arguing about no-decision.

Reply to  karlomonte
January 18, 2023 4:17 pm

Montekarlo:

“particularly if the surface is smooth”

I think it’s more likely if the surface is hard with exposed small pebbles. The coin bounces, catches a raised stone on its edge, and it’s off to the races.

Reply to  ATheoK
January 18, 2023 8:17 pm

Yes, you are likely correct here.

Disc-shaped tablets (i.e. pills) have a quite high probability of landing on edge because they are generally thicker than coins.

January 18, 2023 2:28 pm

I’ve argued this relentlessly from my experience proposing oil and gas projects to executives who are profit-driven (i.e., pragmatists).

There are two sides to a proposal that needs to be economically positive: cost and revenue (Revenue – cost = profit, duh).

However …

Cost has uncertainty. Drilling costs, development costs (needs such as water handling or plant size). Taxes. Each could be +/- perhaps 25%, but a single number needs to be assigned. In my experience, an engineer who believes in a project will use a cost 10% BELOW the middle estimated cost.

Revenue comes from the geologists and engineers who propose the project: total production volume, rate of production, quality, and market price. Each is estimated at 10% or more above the middle of the range.

Note: Future pricing is ALWAYS listed as increasing (nobody would invest today with a decreasing price future! Also, this counteracts the Discount Factor of 10-15% routinely used to compare using the money vs leaving it in a dividend-bearing account. Yeah, I know that’s stupid high.)

Each of these is a daisy chain of uncertainties.

If the engineers don’t like a project, each of the factors they choose is 10% less than average. If they like it, 10% higher. The result of the daisy chain is that projects get presented at either half their actual outcome or twice their actual outcome. Experienced managers who want to avoid bankruptcy apply a fudge factor of perhaps -40%. Those who want to be a hero apply a +40% fudge factor to counter negativity or timidity in their staff.

This is why even major oil companies like Texaco and Continental Oil fall down. One discounts the probable higher cost, the other, the probable lower revenue.

Climate change alarmism has both problems: overestimating future costs and underestimating future revenues. Their scenarios are created by people who don’t want to bet their future on what they think COULD happen tomorrow. But no one variable, as with the oil project, can be called “unreasonable”. Just 10%, perhaps. But the daisy-chain calculation multiplies the results until the disaster is shown to be mathematically “correct”.