Triangular Fuzzy Numbers and the IPCC

Guest Post by Willis Eschenbach

I got to thinking about “triangular fuzzy numbers” regarding the IPCC and their claims about how the climate system works. The IPCC, along with the climate establishment in general, makes what is to me a ridiculous claim: that in a hugely complex system like the Earth’s climate, the output is a linear function of the input. Or as these savants would have it:

Temperature change (∆T) is equal to climate sensitivity (λ) times forcing change (∆F).

Or as an equation,

∆T = λ ∆F.

The problem is that after thirty years of trying to squeeze the natural world into that straitjacket, they still haven’t been able to get those numbers nailed down. My theory is that this is because there is a theoretical misunderstanding. The error is in the claim that temperature change is some constant times the change in forcing.

Figure 1. The triangular fuzzy number for the number of mammal species [4166,  4629,  5092] is shown by the solid line. The peak is at the best estimate, 4629. The upper and lower limits of expected number of species vary with the membership value. For a membership value of 0.65 (shown in dotted lines), the lower limit is 4,467 species and the upper limit is 4,791 species (IUCN 2000).

So what are triangular fuzzy numbers when they are at home, and how can they help us understand why the IPCC claims are meaningless?

A triangular fuzzy number is composed of three estimates of some unknown value—the lowest, best, and highest estimates. To do calculations involving this uncertain figure, it is useful to use “fuzzy sets.” In traditional set theory, something either is or is not a member of a given set. For example, an animal is either alive or dead. For many sets, however, no clear-cut membership can be determined. For example, is a person “old” at 55?

While no yes/no answer can be given, we can use fuzzy sets to determine the ranges of these types of values. Instead of the 1 or 0 used to indicate membership in traditional sets, fuzzy sets use a number between 0 and 1 to indicate partial membership in the set.

Fuzzy sets can also be used to establish boundaries around uncertain values. In addition to upper and lower values, these boundaries can include a best estimate as well. This gives us a way to do sensitivity analysis when we have little information about the actual error sources and amounts. At its simplest, all we need are the values that we think the true figure is very unlikely to be above or below. These lower and upper bounds, plus the best estimate, make up a triangular number. A triangular number is written as [lowest expected value,  best estimate,  highest expected value].

For example, the number of mammalian species is given by the IUCN Red List folks as 4,629 species. However, this is known to be an estimate subject to error, which is usually quoted as ± 10%.

This range of estimates of the number of mammal species can be represented by a triangular fuzzy number. For the number of species, this is written as [4166,  4629,  5092], to indicate the lower and upper bounds, as well as the best estimate in the middle. Figure 1 shows the representation of the fuzzy number representing the count of all mammal species.
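To see how the membership value works, here is a minimal Python sketch (my own illustration, not anyone’s official code) that reproduces the limits quoted in the caption of Figure 1:

```python
def membership_bounds(low, best, high, membership):
    """Lower and upper limits of the triangular fuzzy number
    [low, best, high] at a given membership value (0 to 1)."""
    lower = low + membership * (best - low)    # walk up the left side
    upper = high - membership * (high - best)  # walk down the right side
    return lower, upper

# Mammal species, [4166, 4629, 5092], at membership 0.65:
print(membership_bounds(4166, 4629, 5092, 0.65))
# -> (4466.95, 4791.05), about 4,467 and 4,791 species, as in Figure 1
```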

All the normal mathematical operations can be carried out using triangular numbers. The result of an operation shows the most probable value, along with the expected maximum and minimum values. For addition and multiplication (of positive numbers), the low, best estimate, and high values are simply added or multiplied term by term. Consider two triangular fuzzy numbers, triangular number

T1 = [L1,  B1,  H1]

and triangular number

T2 = [L2,  B2,  H2],

where “L”, “B”, and “H” are the lowest, best, and highest estimates. The rules are:

T1 + T2  =  [L1 + L2,  B1 + B2,  H1 + H2]

T1 * T2  =  [L1 * L2,  B1 * B2,  H1 * H2]

So that part is easy. For subtraction and division, it’s a little different. In division, the lowest possible value is the low estimate in the numerator over the high estimate in the denominator, and vice versa for the highest possible value. So division is done as follows:

T1 / T2 = [L1 / H2,  B1 / B2,  H1 / L2]

And subtraction like this:

T1 – T2  =  [L1 – H2,  B1 – B2,  H1 – L2]
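To make these rules concrete, here is a minimal Python sketch of this triangular arithmetic (my own illustration, assuming all values are positive, as they are in the calculations below):

```python
class TFN:
    """Triangular fuzzy number [low, best, high]; assumes positive values."""

    def __init__(self, low, best, high):
        self.low, self.best, self.high = low, best, high

    def __add__(self, other):  # T1 + T2: add term by term
        return TFN(self.low + other.low, self.best + other.best,
                   self.high + other.high)

    def __sub__(self, other):  # T1 - T2: low minus high, high minus low
        return TFN(self.low - other.high, self.best - other.best,
                   self.high - other.low)

    def __mul__(self, other):  # T1 * T2: multiply term by term (positives)
        return TFN(self.low * other.low, self.best * other.best,
                   self.high * other.high)

    def __truediv__(self, other):  # T1 / T2: low over high, high over low
        return TFN(self.low / other.high, self.best / other.best,
                   self.high / other.low)

    def __repr__(self):
        return f"[{self.low:.2f}, {self.best:.2f}, {self.high:.2f}]"
```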

So how can we use triangular fuzzy numbers to see what the IPCC is doing?

Well, climate sensitivity (in °C per W/m2) up there in the IPCC magical formula is made up of two numbers—the increase in forcing expected from a doubling of CO2, and the temperature change expected from that same doubling. For each of them, we have estimates of the likely range of values.

For the first number, the forcing from a doubling of CO2, the usual IPCC value is 3.7 W/m2 of additional forcing. The end ranges on that are likely about 3.5 for the lower value and 4.0 for the upper value (Hansen 2005). This gives the triangular number [3.5,  3.7,  4.0] W/m2 for the forcing change from a doubling of CO2.

The second number, the temperature change per doubling of CO2, is given by the IPCC (http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf) as the triangular number [2.0,  3.0,  4.5] °C for the change in temperature from a doubling of CO2.

Dividing the temperature change per doubling by the forcing change per doubling gives us the climate sensitivity, the change in temperature (∆T, °C) for a given change in forcing (∆F, W/m2). Again this is a triangular number, and by the rules for division it is:

T1 / T2 = [L1 / H2,  B1 / B2,  H1 / L2] = [2.0 / 4.0,  3.0 / 3.7,  4.5 / 3.5]

which is a climate sensitivity of [0.5,  0.81,  1.29] °C of temperature change for each W/m2 change in forcing. Note that, as expected, the central value corresponds to the IPCC canonical value of 3 °C per doubling of CO2 (0.81 °C per W/m2 times 3.7 W/m2 gives 3 °C).
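With the TFN sketch above, that division step works out like this:

```python
temp_per_doubling = TFN(2.0, 3.0, 4.5)     # °C per doubling of CO2
forcing_per_doubling = TFN(3.5, 3.7, 4.0)  # W/m2 per doubling of CO2

sensitivity = temp_per_doubling / forcing_per_doubling
print(sensitivity)  # -> [0.50, 0.81, 1.29] °C per W/m2
```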

Now, let’s see what this means in the real world. The IPCC is all on about the change in forcing since “pre-industrial” times, which they take as 1750. For the amount of change in forcing since 1750, ∆F, the IPCC says (http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf) there has been an increase of [0.6,  1.6,  2.4] watts per square metre in forcing.

Multiplying the triangular number for the change in forcing [0.6,  1.6,  2.4] W/m2 by the triangular number for sensitivity [0.5,  0.81,  1.29] °C per W/m2 gives us the IPCC estimate for the change in temperature that we should have expected since 1750. Of course this is a triangular number as well, calculated as follows:

T1 * T2 = [L1 * L2,  B1 * B2,  H1 * H2] = [0.5 * 0.6,  0.81 * 1.6,  1.29 * 2.4]

The final number, their estimate for the warming since 1750 predicted by their magic equation, is [0.3,  1.3,  3.1] °C of warming.
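And the final multiplication, again using the TFN sketch above:

```python
forcing_since_1750 = TFN(0.6, 1.6, 2.4)  # W/m2 of forcing since 1750

warming = sensitivity * forcing_since_1750
print(warming)  # -> [0.30, 1.30, 3.09] °C, i.e. roughly [0.3, 1.3, 3.1]
```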

Let me say that another way, because it’s important. For a quarter century now the AGW supporters have put millions of hours and millions of dollars into studies and computer models. In addition, the whole IPCC apparatus has creaked and groaned for fifteen years. And that’s the best they can tell us, for all of that money and all of the studies and all of the models?

The mountain has labored and concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees … that’s some real impressive detective work there, Lou …

Seriously? That’s the best they can do, after thirty years of study? A warming between a third of a degree and three whole degrees? I cannot imagine a less falsifiable claim. Any warming will be easily encompassed by that interval. No matter what happens they can claim success. And that’s hindcasting, not even forecasting. Yikes!

I say again that the field of climate science took a wrong turn when it swallowed the unsupported claim that a hugely complex system like the climate has a linear relationship between a change in input and a change in operating conditions. The fundamental equation of the conventional paradigm, ∆T = λ ∆F, with its basic claim that the change in temperature is a linear function of the change in forcing, is simply not true.

All the best,

w.

106 Comments
NovaReason
February 8, 2012 12:30 am

Ugh… Okay, so in my initial post I was still kind of unsure about how it works, but after more careful reading of the second source I listed… it DOES produce a TFN, but the equations are significantly more complex than simply multiplying or dividing individual values. You need to completely redo the equations as you posted them, and the numbers are going to come out a lot funkier because of it. I’m nearly certain that 0.3 will not be your lower value for expected warming, as the number I was getting for the lowest value is significantly higher than the one you got (1.75 deg change per W/m2 of forcing instead of 0.5…). Past that, I think I made a calculation error on the likely value because it came out lower than the high bound. Also, I didn’t work in the multiplication step at all because I couldn’t get effective numbers for it… might try a SS in Excel to work it out.
So based on a fuzzy math calculator I found.
[3.5, 3.7, 4.0] / [2.0, 3.0, 4.5] = [0.78, 1.23, 2.0]
[0.78 , 1.23, 2.0] x [0.6, 1.6, 2.4] = [.467, 1.968, 4.8]
But given those numbers, the chance that it IS .467 is both extremely low and based on the lowest possible parameters that you could ascribe to the situation. And it’s also 1.5x as high as your estimate.
(My previous statement that the multiplication or division did not produce fuzzy numbers was a misunderstanding, based on my misreading the equations for results in the first resource I listed as giving a range of results instead of a set of single-value results… either poor formatting, or I was reading way too fast.)

jtv
February 8, 2012 12:30 am

Is the “high” part of division really L2 / H1? I don’t actually know any fuzzy arithmetic, but I would have expected H1 / L2.

February 8, 2012 12:42 am

> please can you explain how you feel adequately trained and experienced to…
You’re not very good at research, are you? Try http://en.wikipedia.org/wiki/User:William_M._Connolley
> Going to un-edit any of those tens of thousands of false statements, propaganda pieces and false Galileo-Inquisition-styled “editing” you have been cutting out of Wikipedia
You’re a bit confused. You’ve just accused me of cutting out false statements from wiki :-). More seriously, vague assertions like that go nowhere, you’d need details. And, fun though it is for you to abuse me, please remember this post is about WE’s interesting theory.
> What German letters would spell “Edit Shall Make You Free”
Godwin. You lose.
> a radiative heat transfer equation that has as an input back radiation
I think you’re confused. This post http://scienceofdoom.com/2010/07/17/the-amazing-case-of-back-radiation/ will help you, if you read it. Or if you prefer only to accept facts when they come from someone on “your side”, you could read “Dr” Roy Spencer: http://www.drroyspencer.com/2010/08/help-back-radiation-has-invaded-my-backyard/
AA> They [GCMs] assume the relative humidity is constant
No, they don’t. Relative humidity is a free variable. All the rest of what you said was wrong, too. I appreciate that you people don’t like GCMs, because they provide you the “wrong” answer, but you could at least try to understand what you don’t like, rather than making things up.

NovaReason
February 8, 2012 12:47 am

Ugh… I did the equations wrong up there… swapped T1 and T2 in your original equation. No idea what the values would come out like.

February 8, 2012 2:02 am

WE> if you look at the models as a black box, you see that the output is a simple linear transformation of the input plus a bit of lag
No, clearly it isn’t. This is trivial to demonstrate on a short timescale, where you can see the non-linear growth of perturbations, the “butterfly effect”: http://mustelid.blogspot.com/2005/10/butterflies-notes-for-post.html
It’s also trivial to demonstrate over long time scales with averaging: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-10-5.html
But over long timescales the response is indeed approximately linear. Which is the point. And there is no evidence that this isn’t a good approximation of how the real climate system behaves, too.
What you’ve actually discovered is that it is possible for the output of a hugely non-linear system (a GCM, or the Earth’s climate) to respond approximately linearly to forcing.

February 8, 2012 2:12 am

Connolley is full of crap. CO2 and temperature do have a correlation. But that correlation is this: CO2 always follows temperature. No exceptions.
Connolley is a lying propagandist; we know this. Only the credulous believe in his mendacious spin. Word up to the wise.

Kev-in-Uk
February 8, 2012 3:38 am

I agree with Willis – I don’t think it’s sensible to consider any GCM or climate model as anything but a linear black box in terms of input versus output. And, as has been said many times, the machinations within the model are often ‘set’ or ‘adjusted’ or simply based on best-guess estimates of causes and effects. Changes in any one of the relatively few variables should affect the other parameters, that’s well understood – but if you don’t actually KNOW the effect (as in its real rate of change, etc) and you are simply best-guessing – the net effect of several of these within a model will likely be some form of linear output – because your best guess or adjustment will always be to assume some generic linear change. And yet we know full well that some variables’ cause and effect are not linear, or may be linear on one variable but non-linear on another! Does raising temp raise humidity and therefore always raise precipitation, for example? – Perhaps (in the modeller’s eyes) – but what about albedo (which would increase and therefore decrease direct solar forcing), and then there’s other things like seasonal wind patterns, etc, etc – all of which means that ANY variable and its effect on other variables, and vice versa, becomes a hotch-potch of crazy interactions and +ve/-ve feedbacks. Why anyone (Mr Connolley) would assume that the net effect of one change (CO2 – in the case of CAGW) will be linear when constructing one’s model is beyond me. It’s a fallacy – and one which simply cannot be proven within such a complex non-linear system – but as Mr Connolley explains, he assumes the net effect is going to be linear…….. (Personally, my attitude is that Mother Nature is seemingly ‘self-correcting’ or self-balancing (within large variations!) – so why should the climate system be any different?)
In the UK, they are considering raising the motorway speed limit. This would naturally be expected to increase road deaths on the motorways, you would think? But it’s not as simple as that: some people will perhaps drive more alertly at higher speeds, nervous drivers may keep off the motorways more often, cars are getting increasingly safer, more vehicles are travelling (making net speeds slower!), vehicle testing (MOTs) is getting stricter, etc, etc – these effects are NOT directly known or measurable – so the natural assumption of a linear rise in road deaths is plain wrong… and in this simple example it can be readily seen to be wrong once one applies a little thought. The simple act of increasing a speed limit does not automatically equate to more deaths, so the overall linear assumption is silly. On the basis of known climate complexity, which is obviously far more variable than road speed vs deaths, I am sure making any overall linear input/output assumption is erroneous!

February 8, 2012 3:52 am

Willis Eschenbach says: February 7, 2012 at 10:39 pm
“So redo my calculations and tell me what you think the correct answer should be,”

No, I’m saying you can’t get the answers you want this way at all. And nor can I. You’ve used rules for triples that might (with some repair) be reasonable for absolute ranges. But you’re looking at confidence intervals, and they just don’t multiply (or even add).
So you say that the range of CS is [0.5, 0.8, 1.28]. You’ve used an invalid step to get there, but let’s suppose. And forcing is [0.6, 1.6, 2.4].
That says there is a 5% chance that CS<0.5, and also a 5% chance that F<0.6. But then there is only a 0.25% chance that both are true.
Now, both being true is not necessary for the product to be less than 0.3, your minimum. There are other combinations, eg CS=0.4, F=0.7. You’d have to integrate these possibilities to get the total. But you won’t get anywhere near 5% for 0.3. The minimum is much less.
In fact, the full gruesome story on products of normal variates is here. Modified Bessel functions. OK, I guess I could work it out, but you have two separate steps.
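A quick Monte Carlo sketch makes this point concrete. It assumes, purely for illustration, that the low and high ends of each triple are the 5th and 95th percentiles of independent normal distributions (the triples are asymmetric, so this is only rough):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
z95 = 1.645  # standard-normal 95th percentile

# CS of [0.5, 0.8, 1.28] and forcing of [0.6, 1.6, 2.4], with the low
# end of each treated as its 5th percentile.
cs = rng.normal(0.8, (0.8 - 0.5) / z95, n)
f = rng.normal(1.6, (1.6 - 0.6) / z95, n)

product = cs * f
print(np.mean(product < 0.3))     # roughly 0.03, not 0.05
print(np.percentile(product, 5))  # roughly 0.45, well above 0.5 * 0.6 = 0.3
```

The 5% lower bound of the product lands well above the product of the two 5% lower bounds, which is the point.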

Spartacus
February 8, 2012 4:47 am

“I appreciate that you people don’t like GCMs, because they provide you the “wrong” answer, but you could at least try to understand what you don’t like, rather than making things up.”
Wrong, William. It’s nature itself that provides the “wrong answer” to the GCMs’ “predictions” concerning longer time periods. You cannot control a model’s “drifting” unless you “cheat” with fudge factors, and you know that very well. Since you cannot tune nature, all your modeling is a mere “run and catch” exercise. The only way to fine-tune capable predictive models (and I take this with a grain of salt) is to have very good observation data from a very long period of time. We don’t have that kind of data. Our proxies are only rough approximations of past climate data. I’ve worked on several of these proxies and I know very well that there are huge problems to solve and lots of inconsistent data. Some of these proxies show data that have a strong regional signal and are of no use for global climate considerations, for instance.
You can use models to predict well-controlled experiments, where you fully know all the variables; otherwise we could not build effective mobile phones, computers or F1 cars. In climate, however, there is a huge number of variables that are not fully understood or are wrongly explained. We cannot even detect or separate their signal from the noisy climate data.

William M. Connolley
February 8, 2012 5:12 am

> Why anyone (Mr [sic] Connolley) would assume that the net effect of one change (CO2 – in the case of CAGW) will be linear when constructing ones model is beyond me.
There is no such assumption. You’re confusing the input with the output. It turns out, when you run the GCMs, that the response is largely linear in log(CO2) (not CO2 itself; at least around current concentrations). But that quasi-linear response is the aggregate of a large number of non-linear calculations.

novareason
February 8, 2012 5:42 am

William M. Connolley says:
February 8, 2012 at 5:12 am
> Why anyone (Mr [sic] Connolley) would assume that the net effect of one change (CO2 – in the case of CAGW) will be linear when constructing ones model is beyond me.
There is no such assumption. You’re confusing the input with the output. It turns out, when you run the GCMs, that the response is largely linear in log(CO2) (not CO2 itself; at least around current concentrations). But that quasi-linear response is the aggregate of a large number of non-linear calculations.

And as Willis said multiple times: you can call it anything you like, but when the equations inside of it, regardless of complexity, produce a trend line that responds linearly to one factor (CO2), then you’ve created a linear model. If it looks like a duck, and quacks like a duck…
You have inputs, and your outputs are linear increases based on CO2 forcings.

February 8, 2012 6:28 am

> You have inputs, and your outputs are linear increases based on CO2 forcings
As I’ve said, no, the outputs aren’t linear. I linked to a pic, but clearly you didn’t look. Even if you restrict your view to annual-average-global-average temperature, the output isn’t linear: fairly often, the change from one year to the next is negative, not positive, even though the change in forcing from year to year is the same. In the models, and in reality. If your response one year is of opposite sign to the next year, that is non-linear. Just like the real world. Though over the long run, it tends to average out. Just like the real world.
It seems to me that your main objection to the quasi-linear response is that you have some mystical source of knowledge that teaches you that this behaviour is wrong; whatever it is, it isn’t observations. Since this revelation is unavailable to the scientists, they cannot take advantage of it.

wsbriggs
February 8, 2012 6:28 am

So now it appears that there is an “orchestrated” attempt to discredit Willis. I love the public display of the arrogance once only visible in the Climategate emails.
The whole GCM thing kind of reminds me of Disney’s Toot, Whistle, Plunk and Boom film. The numbers go round and round and they come out here…
Forget past measurements, they couldn’t measure with the accuracy we can now. Forget the geologic evidence that seas were lower and higher than now – uplift and down thrust did it. Oh, yeah, forget that we told you that there was going to be a new Ice Age in the 1970s. Forget that we told you that it was going to get hotter in the Arctic – it’s Climate Change! Forget everything about the world but that man is now in control – and we want to control him.
Like you always say in the “CAGW Science” blogs – follow the money.
Way to go Willis! It appears that you and some of the other welcome posters on this blog have really put the stick in the hornets’ nest this time. Whipping out real math and putting the calculations out there for everyone to see and criticize is surely a lot different than the “knowledgeable” responses – “you’re wrong,” etc.
Maybe we should have Prof. McKitrick join in the fray. There’s going to be a lot of sleepless nights over his last cross posting here.
I’m starting my popcorn now!

February 8, 2012 7:50 am

William M. Connolley says:
February 8, 2012 at 12:42 am
> a radiative heat transfer equation that has as an input back radiation
I think you’re confused.
William M. Connolley says:
February 7, 2012 at 3:04 pm
“Because the underlying physics is mostly well known (radiative transfer, atmospheric and ocean dynamics; ecosystems are less well known, of course, but that is more biology than physics). What is of interest is the *modelling* of the physics, and the interactions of the various models.”
Mr. Connolley, I asked you, not someone else, to present a radiative heat transfer equation. If you don’t know or don’t understand, then fine, say so.
As to my being confused: well, as my hair turns grey I do sometimes forget where my car keys are, but on this I am not. You defend something you appear not to understand.

Nikola Milovic
February 8, 2012 9:34 am

Climate change on our planet (and on the other planets in the solar system) does not depend significantly on either the human factor or the composition of our atmosphere (CO2, etc.). The main generators of all the changes are the interactions of the magnetic fields of the Sun and the Earth. How these fields appear and how they affect the matter they pass through is what science should investigate, rather than making models based on nebulous assumptions that have no natural connection with the overall trends we regard as enigmatic. Who generates the magnetic fields? Peek inside the celestial body and “see” the discontinuities in it, and then it is easy to see how the electric fields are created and how protons and neutrons behave under the influence of these planetary electromagnetic fields. It is a cause for worry, this collective delusion and blindness, even in science, when it comes to studying the laws of the nature from which we are formed; yet we hold some imaginary models to be “more logical” than the most logical. The forces in question are of the order of (12.363–3137) x 10^20 units. Consequently it is empty to discuss these diagrams, formulas and odds and ends when we do not even know the origin of any phenomenon in nature.

John Garrett
February 8, 2012 9:44 am

Mr. Eschenbach: sorry for this nit-picking but many of us will refer others to your excellent piece—
“…straitjacket is the most common spelling, strait-jacket is also frequently used…”
Source: http://en.wikipedia.org/wiki/Straitjacket

February 8, 2012 10:19 am

In support of Nick Stokes: you have to use Willis’ rules of Triangular Fuzzy Arithmetic with several caveats.
As you add any two or more distributions, they will TEND more toward Normality.
As you multiply any two or more distributions (if they are completely positive value), then they will tend toward LogNormal distributions, because multiplication is the act of addition in the log space.
The key point is that you may start with P100, ML, P0 triangular distributions, but once you perform any arithmetic on them, you will have P100, kinda-ML, P0 non-triangular single-mode distributions. To assume they are still triangular is to significantly increase the uncertainty beyond what is real.
The trouble is, you often cannot prove your point with the P100 and P0 ranges. You wind up with almost anything possible, but at infinitesimal probabilities. Staying within a P95 – P05 (90% confidence interval) or P90 – P10 (80% C.I.) is usually far more enlightening. But to do that, you do more work: keeping track of the variances and the 3rd and 4th moments, or convolving the probability density functions.
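A quick simulation shows this drift toward normal and lognormal shapes (a sketch with hypothetical inputs, using numpy’s triangular sampler):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000

# Six independent positive triangular variates (hypothetical values).
x = rng.triangular(0.9, 1.0, 1.2, (6, n))  # (left, mode, right)

s = x.sum(axis=0)   # repeated addition drifts toward normal
p = x.prod(axis=0)  # repeated multiplication drifts toward lognormal

print(stats.skew(x[0]))       # one variate: noticeably right-skewed
print(stats.skew(s))          # the sum: much closer to symmetric
print(stats.skew(p))          # the product: still right-skewed...
print(stats.skew(np.log(p)))  # ...but its log is nearly symmetric
```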

February 8, 2012 12:55 pm

Is the NovaReason citation a multiplication of probability distributions or a multiplication of fuzzy sets? Willis 11:54am, the y-axis from 0 to 1 looks like set membership to me, not a probability density function.
Maybe fuzzy set theory and probability distributions can be mixed under the right circumstances.
Here is an interesting paper that I’m going to study. FWIW.
Fuzzy sets and probability: Misunderstandings, bridges and gaps
Didier Dubois – Henri Prade

The paper is structured as follows: first we address some classical misunderstandings between fuzzy sets and probabilities. They must be solved before any discussion can take place. Then we consider probabilistic interpretations of membership functions, that may help in membership function assessment. We also point out non-probabilistic interpretations of fuzzy sets. The next section examines the literature on possibility-probability transformations and tries to clarify some lurking controversies on that topic. In conclusion, we briefly mention several subfields of fuzzy set research where fuzzy sets and probability are conjointly used.

novareason
February 8, 2012 1:39 pm

That actually makes a lot more sense now, Willis. Thanks for the explanation. The fuzzy calculators I was plugging the numbers into later gave me values similar to yours (rounding differences), with the same kind of simplification used. Functionally there’s little to no difference, and for the sake of proving a point you’re entirely correct in your calculations. This was just kind of a distant topic for me (it’s been a few years since a hard math course).
That being said, the point brought up above about the vanishing probability of the highest and lowest values is rather relevant. The numbers they gave allow for the possibility that it’s only 0.3 deg of warming, but that is AS LIKELY, given their numbers, as 3.1 deg. Realistically, this does show the ridiculous amount of error they’re playing with, while they argue that their blind guesses are accurate predictions.

February 8, 2012 3:07 pm

Re Willis’ comment at Feb. 8/ 10:06 a.m.
The IPCC formula is [deltaT = deltaF x CS (climate sensitivity). To get their lower bound on deltaT since 1750, we multiply their lower bound on forcing since 1750 [0.6 W/m^2] by their lower bound on CS [0.54 degC per W/m^2]. OK, .6 x .54 = 0.324 degC = deltaT since 1750. You got “about” 0.5 degC for delta T. What am I missing?