Triangular Fuzzy Numbers and the IPCC

Guest Post by Willis Eschenbach

I got to thinking about “triangular fuzzy numbers” regarding the IPCC and their claims about how the climate system works. The IPCC, along with the climate establishment in general, make what to me is a ridiculous claim. This is the idea that in a hugely complex system like the Earth’s climate, the output is a linear function of the input. Or as these savants would have it:

Temperature change (∆T) is equal to climate sensitivity ( λ ) times forcing change (∆F).

Or as an equation,

∆T = λ  ∆F.

The problem is that after thirty years of trying to squeeze the natural world into that straitjacket, they still haven’t been able to get those numbers nailed down. My theory is that this is because there is a theoretical misunderstanding. The error is in the claim that temperature change is some constant times the change in forcing.

Figure 1. The triangular fuzzy number for the number of mammal species [4166,  4629,  5092] is shown by the solid line. The peak is at the best estimate, 4629. The upper and lower limits of expected number of species vary with the membership value. For a membership value of 0.65 (shown in dotted lines), the lower limit is 4,467 species and the upper limit is 4,791 species (IUCN 2000).

So what are triangular fuzzy numbers when they are at home, and how can they help us understand why the IPCC claims are meaningless?

A triangular fuzzy number is composed of three estimates of some unknown value—the lowest, highest, and best estimates. To do calculations involving this uncertain figure, it is useful to use “fuzzy sets.” Traditional set theory includes the idea of exclusively being or not being a member of a set. For example, an animal is either alive or dead. However, for a number of sets, no clear membership can be determined. For example, is a person “old” if they are 55?

While no yes/no answer can be given, we can use fuzzy sets to determine the ranges of these types of values. Instead of the 1 or 0 used to indicate membership in traditional sets, fuzzy sets use a number between 0 and 1 to indicate partial membership in the set.

Fuzzy sets can also be used to establish boundaries around uncertain values. In addition to upper and lower values, these boundaries can include best estimates as well. It is a way to do sensitivity analysis when we have little information about the actual error sources and amounts. At its simplest all we need are the values we think it will be very unlikely to be greater or less than. These lower and upper bounds plus the best estimate make up a triangular number. A triangular number is written as [lowest expected value,  best estimate,  highest expected value].

For example, the number of mammalian species is given by the IUCN Red List folks as 4,629 species. However, this is known to be an estimate subject to error, which is usually quoted as ± 10%.

This range of estimates of the number of mammal species can be represented by a triangular fuzzy number. For the number of species, this is written as [4166,  4629,  5092], to indicate the lower and upper bounds, as well as the best estimate in the middle. Figure 1 shows the representation of the fuzzy number representing the count of all mammal species.
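To make the picture concrete, here is a minimal sketch (the function name is mine, not from any fuzzy-logic library) of the interval you get at a given membership value, the dotted-line construction in Figure 1:

```python
def alpha_cut(tfn, membership):
    """Interval of values at a given membership level for a
    triangular fuzzy number tfn = (low, best, high).

    At membership 0 the interval is the full (low, high) range;
    at membership 1 it collapses to the best estimate.
    """
    low, best, high = tfn
    lower = low + membership * (best - low)
    upper = high - membership * (high - best)
    return lower, upper

mammals = (4166, 4629, 5092)  # IUCN mammal-species count, about +/- 10%
print(alpha_cut(mammals, 0.65))  # roughly (4467, 4791), matching Figure 1
```

The sides of the triangle are straight lines, so the bounds at any membership level are simple linear interpolations between the extremes and the best estimate.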

All the normal mathematical operations can be carried out using triangular numbers. The end result of the operation shows the most probable resultant value, along with the expected maximum and minimum values. For addition and multiplication, the low, best-estimate, and high values are simply added or multiplied term by term. Consider two triangular fuzzy numbers, triangular number

T1 = [L1,  B1,  H1]

and triangular number

T2 = [L2,  B2,  H2],

where “L”, “B”, and “H” are the lowest, best and highest estimates.  The rules are:

T1 + T2  =  [L1 + L2,  B1 + B2,  H1 + H2]


T1 * T2  =  [L1 * L2,  B1 * B2,  H1 * H2]

So that part is easy. Subtraction and division are a little different, because the second number works against the first: the lowest possible result combines the first number’s low estimate with the second number’s high estimate, and vice versa for the highest possible result. So division is done as follows:

T1 / T2 = [L1 / H2,  B1 / B2,  H1 / L2]

And subtraction like this:

T1 – T2  =  [L1 – H2,  B1 – B2,  H1 – L2]
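The four rules can be sketched as a few lines of code (the helper names are mine, not from any library, and the sketch assumes all values are positive; element-wise multiplication is not valid when signs can differ, as one commenter points out further down the thread):

```python
# Triangular fuzzy numbers as (low, best, high) tuples.

def t_add(a, b):
    # [L1 + L2, B1 + B2, H1 + H2]
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def t_sub(a, b):
    # lowest difference: my low minus their high, and vice versa
    return (a[0] - b[2], a[1] - b[1], a[2] - b[0])

def t_mul(a, b):
    # [L1 * L2, B1 * B2, H1 * H2] (positive values only)
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def t_div(a, b):
    # lowest quotient: my low over their high, and vice versa
    return (a[0] / b[2], a[1] / b[1], a[2] / b[0])

print(t_add((1, 2, 3), (10, 20, 30)))  # (11, 22, 33)
print(t_sub((1, 2, 3), (10, 20, 30)))  # (-29, -18, -7)
```

Note how subtraction and division widen the result: the low end of the answer pairs my worst case with the other number’s best case, which is exactly why uncertainty grows as you chain operations.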

So how can we use triangular fuzzy numbers to see what the IPCC is doing?

Well, climate sensitivity (in °C per W/m2) up there in the IPCC magical formula is built from two numbers: the increase in forcing expected from a doubling of CO2, and the temperature change expected from that same doubling. For each of them, we have estimates of the likely range of values.

For the first number, the forcing from a doubling of CO2, the usual IPCC figure is 3.7 W/m2 of additional forcing. The end ranges on that are likely about 3.5 for the lower value and 4.0 for the upper value (Hansen 2005). This gives the triangular number [3.5,  3.7,  4.0] W/m2 for the forcing change from a doubling of CO2.

The second number, the temperature change per doubling of CO2, is given by the IPCC (http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf) as the triangular number [2.0,  3.0,  4.5] °C for the change in temperature from a doubling of CO2.

Dividing the temperature change per doubling by the forcing change per doubling gives us the climate sensitivity, the change in temperature (∆T, °C) for a given change in forcing (∆F, watts per square metre). Again this is a triangular number, and by the rules for division it is:

T1 / T2 = [L1 / H2, B1 / B2, H1 / L2] = [2.0 / 4.0,   3.0 / 3.7,   4.5 / 3.5]

which is a climate sensitivity of [0.5,  0.8,  1.28] °C of temperature change for each W/m2 change in forcing. Note that, as expected, the central value of 0.8 °C per W/m2, multiplied by the central 3.7 W/m2 per doubling, recovers the IPCC canonical value of 3 °C per doubling of CO2.
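As a quick check, the division step can be reproduced directly from the two triples quoted above (a sketch; the post truncates 1.2857… to 1.28):

```python
# Temperature change per doubling divided by forcing per doubling,
# using the division rule [L1/H2, B1/B2, H1/L2].
dT_2x = (2.0, 3.0, 4.5)   # deg C per doubling of CO2
dF_2x = (3.5, 3.7, 4.0)   # W/m2 per doubling of CO2

sensitivity = (dT_2x[0] / dF_2x[2],
               dT_2x[1] / dF_2x[1],
               dT_2x[2] / dF_2x[0])
print([round(x, 2) for x in sensitivity])  # [0.5, 0.81, 1.29]
```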

Now, let’s see what this means in the real world. The IPCC is all on about the change in forcing since “pre-industrial” times, which they take as 1750. For the amount of change in forcing since 1750, ∆F, the IPCC says (http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf) there has been an increase of [0.6,  1.6,  2.4] watts per square metre in forcing.

Multiplying the triangular number for the change in forcing [0.6,  1.6,  2.4] W/m2 by the triangular number for sensitivity [0.5,  0.8,  1.28] °C per W/m2 gives us the IPCC estimate for the change in temperature that we should have expected since 1750. Of course this is a triangular number as well, calculated as follows:

T1 * T2 = [L1 * L2,  B1 * B2,  H1 * H2] = [0.6 * 0.5,  1.6 * 0.8,  2.4 * 1.28]

The final number, their estimate for the warming since 1750 predicted by their magic equation, is [0.3,  1.3,  3.1] °C of warming.
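The whole back-of-envelope chain, from the two doubling ranges to the hindcast warming since 1750, fits in a few lines (a sketch using only the numbers quoted in the post):

```python
# Sensitivity (deg C per W/m2) times forcing change since 1750 (W/m2).
sensitivity = (2.0 / 4.0, 3.0 / 3.7, 4.5 / 3.5)  # from the division above
dF_1750 = (0.6, 1.6, 2.4)                        # IPCC forcing change

warming = tuple(s * f for s, f in zip(sensitivity, dF_1750))
print([round(x, 1) for x in warming])  # [0.3, 1.3, 3.1]
```

Multiplying term by term is valid here because both triples are positive, per the multiplication rule given earlier.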

Let me say that another way, because it’s important. For a quarter century now the AGW supporters have put millions of hours and millions of dollars into studies and computer models. In addition, the whole IPCC apparatus has creaked and groaned for fifteen years now, and that’s the best they can tell us for all of that money and all of the studies and all of the models?

The mountain has labored and concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees … that’s some real impressive detective work there, Lou …

Seriously? That’s the best they can do, after thirty years of study? A warming between a third of a degree and three whole degrees? I cannot imagine a less falsifiable claim. Any warming will be easily encompassed by that interval. No matter what happens they can claim success. And that’s hindcasting, not even forecasting. Yikes!

I say again that the field of climate science took a wrong turn when they swallowed the unsupported claim that a hugely complex system like the climate has a linear relationship between change in input and change in operating conditions. The fundamental equation of the conventional paradigm, ∆T = λ ∆F, that basic claim that the change in temperature is a linear function of the change in forcing, is simply not true.

All the best,

w.

Vince Causey
February 7, 2012 12:53 pm

I think Willis has just done a Toto, and pulled away the curtains from the IPCC wizard.

Noonan
February 7, 2012 1:02 pm

I am an economist by training, and I think climate forecasting suffers from the same problem as economic forecasting: too many variables.

DesertYote
February 7, 2012 1:04 pm

Wow, just look at all the propagandist posting here trying to distract and confuse. The scary thing is I think that some of them actually think they are making sense.

Peter Plail
February 7, 2012 1:07 pm

Whenever Willis writes something, I end up understanding what he means. He has a rare ability to communicate complex subjects in a logical and clear style which doesn’t talk down.
I have read and reread Connolley’s comments and I simply don’t understand them. Maybe it is me but ………..

Bill Hunter
February 7, 2012 1:21 pm

William M. Connolley says:
February 7, 2012 at 9:27 am
“Yes, its complex. But it is possible to test it, to some extent, by using hugely complex non-linear models (whether you believe the models are accurate or not, you do believe they are complex and non-linear). Those models show that the idea does, indeed, basically work”
Yeah, but with a worse fit than astrologists get with curve fitting. So one is on solid ground arguing that the models are not helping the “cause”. If pouring a billion dollars into model development can’t beat out the curve fitting of a guy in a steeple with a pointy hat living on bread crumbs, one has to wonder about the usefulness of the exercise, or at least how it’s being spread around.

February 7, 2012 1:25 pm

A couple points.
Smokey, you are wrong. Some realizations of GCMs did in fact “predict” the cooling; the MEAN of all of them did not. It’s pretty simple. GCMs, as Connolley notes, are highly non-linear. When you look at all 52 runs (give or take) you will find values that go up and down. On AVERAGE they go up. So about 3% of the runs show cooler temps than the observed; the average of all shows more warming.
There is another factor to consider. It goes like this. Suppose I tell you: my model says that IF you drop a bomb flying at 660 knots from an altitude of 30K feet, with no head wind, the bomb will travel forward 1 nm before hitting the ground. That is a CONDITIONAL prediction. IF 660 knots, IF 30K feet, IF no head wind, THEN the bomb lands 1 nm from the point of release.
Now we try to test that. Oops. The pilot releases the bomb at 695 knots, at 31,102 feet, and there is a 15-knot quartering wind in his face. The bomb lands 1.2 nm from the release point.
Is the model wrong? Well, we really can’t say for sure. Why? Because the conditions prescribed in the experiment were not met. The IF clauses didn’t happen.
So what do we do? Well, we can do two things:
1. Assume our model is correct and recalculate the prediction GIVEN the ACTUAL test conditions.
2. Rerun the test and tell the pilot to be more careful and plan the test for non-windy days.
With a GCM we cannot do #2. We cannot rerun 2001 to 2011. There was one experiment, one set of ACTUAL forcings. That leaves option 1: rerunning the GCMs with the ACTUAL forcings we saw during 2001 to 2010. Sadly, people don’t do this. What they do instead is say that the prediction is “good enough”, that the models are not proven wrong.
So. In the first place, some model runs did in fact predict the cooling. Those model runs were outnumbered by the majority, which did predict warming. In the second place, to really evaluate the models you would have to rerun them with ACTUAL forcings. Third, at some point the modelling community will have to grow some stones and eliminate models which are running hot. The mean sensitivity is 3.2C; that’s likely too high a value.
But if you feel like you’ve got proof that the models are not useful, then go visit Tamsin’s blog on GCM models. Talk to a real modeler and tell her why you are right. My bet: you won’t go there and have a girl kick your intellectual butt.

Rosco
February 7, 2012 1:26 pm

I don’t get it – radiation in the atmosphere is clearly NOT anything like the dominant energy-transport mechanism.
I’ve seen the argument that humidity “feels” hotter because of the extra radiation from the extra water vapour – what about the fact that our body relies on evaporation to cool (sweat is the extreme form), the extra humidity inhibits this and we feel discomfort because we cannot cool effectively ? Supply a breeze and the situation resolves.
If one believes radiation is a dominant energy transport mechanism then why do “radiators” rely on conduction and convection – why do they put the “convection” in the oven etc etc.
And of course the ultimate conclusion – the only atmospheric components that “matter” are those that absorb radiation with the bulk remaining, apparently, stony cold – nonsense.

February 7, 2012 1:33 pm

To a certain extent I admire the attempts being made to ‘try’ to model the climate – the trouble is, it’s like trying to model some huge self-configuring machine that has no central program or control. It’s free to create and destroy its own feedbacks based on any prior state in the past. In terms of NP-complete problems, this is the keystone. If they can model this properly, then a whole raft of similarly complex problems – like crypto cracking – becomes child’s play!
As regards turbulence, that’s a subproblem of modelling climate; at least with modelling turbulence you don’t get what happened 100 years ago having an effect on what you observe now.
For instance, over on http://www.facebook.com/pages/I-Bet-We-Can-Get-100000-people-to-Say-NO-to-the-Carbon-Tax/163658930354822?ref=ts we have a lovely set of examples of a set of warmists trying to claim that the average hindcast of the 14 models used by the IPCC is somehow significant in showing that they have modelled the prior climate accurately. They got truly batted out of court.

Rosco
February 7, 2012 1:38 pm

I recently saw a statement I really believe in – “atmospheres act as refrigerators”.
If you dismiss the nonsense of the false low levels of insolation peddled – (how does an average represent something that is either on or off anyway ?) – and recognize the Sun could fry us during the day – (1000 W/sq m used to be quoted as maximum insolation not about a quarter of this) then you have to accept the reality that we have twin air conditioners protecting us from this intense energy – one evaporative and the other convective – and the oceans and atmosphere act to reduce the surface temperature.
Both take the heat away from the surface – without it Earth wouldn’t be habitable.
Perhaps the fact that CO2 is less soluble in warmer water is like a safety valve – as the energy levels rise in the ocean CO2 is released and provides an EXTRA mechanism for removing heat from the surface by absorbing some extra radiation and radiating at least half to space with a net effect of cooling not warming.
Just throwing ideas around.

February 7, 2012 1:44 pm

“Those models show that the idea does, indeed, basically work. They demonstrate that your basic claim (‘huge complex non-linear model => no linear relation between forcing and result’) is false.”
The climate models do no such thing. At best they demonstrate that a possible Earth climate has a linear relation between forcing and state.
What relationship that possible Earth climate bears to the actual Earth climate is unknown.
Willis, you need a function for feedbacks in your equation and that function is non-linear at least IMO. Not sure what the IPCC says about it.
If the Forcings model/theory is sound, then there is indeed the claimed linear relationship.
Forcing model proponents, such as R Gates claim the Forcing model must be correct a priori, and in a narrow sense they are right.
The flaw in the Forcings model/theory is that they assume feedbacks operate on roughly the same timescale as the forcings.
If the feedbacks operate on substantially shorter timescales (and if they do then they must necessarily be net negative otherwise we would have runaway warming) then the Forcings model/theory is correct but worthless for predicting climate change.
The key to climate prediction is knowing the timescales of feedbacks. A subject the IPCC and the Warmists ignore.

Ian L. McQueen
February 7, 2012 1:56 pm

FWIW: “straitjacket”.
IanM
[Thanks, fixed – w.]

Truthseeker
February 7, 2012 1:59 pm

A physicist says:
February 7, 2012 at 9:41 am
Your attempted strawman argument has very nicely proved what the sceptics have been saying for years. For models of complex systems to be valid, they have to be continuously tested and re-calibrated against actual observations. Not something that seems to happen with GCMs ….

RACookPE1978
Editor
February 7, 2012 2:08 pm

Willis Eschenbach says:
February 7, 2012 at 1:53 pm (Speaking to Steven Mosher)
steven mosher says:
February 7, 2012 at 1:25 pm
A couple points. Smokey you are wrong. Some realizations of GCMs did in fact “predict” the cooling. the MEAN of all them did not. Its pretty simple.
Mosh, this is the third time you’ve tried this claim. Both times before I challenged you to name the model and show us the runs that predicted the hiatus in warming.

I too have never seen ANY evidence of ANY model plotting a flat peak extending for (what is now) 15 years of continuously increasing CO2 and flat temperatures. Note that actual measured whole-earth-average temperature trends (if you even believe such a creature exists) are actually slightly negative, and you’d have to show by model output that real-world temperatures have not increased since the baseline dates of the mid-1970s.
Regardless, and I will ask this with my tongue only slightly in my cheek …
If 97% of the GCM models are dead wrong (since you admit that only 3% have ever been correct in a 15 year span) …
And if 97% of the world’s climate scientists are “right” in believing the catastrophic CAGW dogma based strictly and entirely on their faith in modeled results of the failed 97% of GCM models …
Which 3 percent are correct? 8<)
Can we fire the remaining 97%?

February 7, 2012 2:09 pm

Willis,
I can’t agree with what you say about triangular fuzzy numbers, and I think you should cite some authority. As an obvious example, consider your multiplication rule with T1=[-1,0,1], T2=[-1,0,1]. Then T1*T2=[1,0,1]. WUWT?
But more seriously, you can’t sensibly use this when L and H are probability bounds, as with, say, forcings. If they represent, say, 2 sd deviations, then the range of the sum is tighter, and the product formula will be way wrong.

Owen in GA
February 7, 2012 2:44 pm

Truthseeker: I am not sure that is completely true (even if it is mostly true). It is just that climate scientists seem to like a certain set of pseudo-data that were run and reported on a particular date and included in one of their rigged pal-review journals, and never go back and check out what the modelers could do today. I also feel that the modelers themselves are getting a shaft from the climate scientists, because climate scientists never update the underlying physics to enable modelers to make a better model. If one never challenges one’s assumptions, one always gets the same wrong results and gets blindsided by reality. One can’t test a hypothesis if no one ever puts a hypothesis forward. Also, I am not sure we ever see the most recent output of the models anyway.
Of course, if the modelers just arbitrarily turn the knobs to make any one 30-year period fit, it will likely be off for all the other time frames. I could say sensitivity to CO2 doubling was any number from -infinity to infinity, and as long as I included coupled opposite-signed feedbacks with it, I could get close to maybe making it work, especially if I made the feedback a polynomial of high enough degree and adjusted all the coefficients until I made it work. Polynomial fitting is a fine pastime, but unless there is a precise physical meaning for each coefficient term it is just artwork.

William M. Connolley
February 7, 2012 2:47 pm

TF> are you suggesting that we test a model against another model?
No.
WE> It is a simple conclusion from the premises clearly stated by the IPCC
No, it isn’t. You made it up.
WE> The climate models are complex, but as I and others have repeatedly shown (see here, here, and here), they are most assuredly linear.
Err, no, they aren’t linear. It is trivial to see this from their construction, or from looking at their results.
WE> those models have done so well at predicting the current lack of warming
Irrelevant, as I’ve already said.
WE> the table manners of a bonobo
I’m sure everyone here is capable of judging manners, including yours, without your guidance.
C> by using hugely complex earth-centric models
Not sure what your point is. Yes, it is entirely possible, and indeed permissible, to cast all of physics assuming a stationary Earth. This is what GR teaches us.

Brian H
February 7, 2012 2:47 pm

Bloke down the pub says:
February 7, 2012 at 11:17 am
It obviously wasn’t about the fuzzy triangles that I thought it was.

On account of you’re a low-life lecherous blackguard, and proud of it!
;p

February 7, 2012 2:48 pm

The climate models contain pseudo-random functions. Run them enough times and you will get any prediction you like, including an ice age starting next Wednesday.
The variability in climate model output is evidence of absolutely nothing.

Brian H
February 7, 2012 2:50 pm

P.S. to Bloke;
Follow-on comments involving sensitivity, linear responses, etc. You can fill in the blanks.
😀

RACookPE1978
Editor
February 7, 2012 3:01 pm

William M. Connolley says:
February 7, 2012 at 2:47 pm
TF> are you suggesting that we test a model against another model?
No.
WE> It is a simple conclusion from the premises clearly stated by the IPCC
No, it isn’t. You made it up.

Ok.
So 97% of the world’s GCM model results are wrong over a 15-year span, based on real-world measured temperatures and an accelerating (?) increase in CO2 levels. Do you agree or disagree with that summary from Steve Mosher?
What are YOU doing to correct the FALSE 97% of the world’s models?
What have YOU done to stop the propaganda that YOU have been pushing that says these models are accurate and must be used to ruin the world’s economies and murder billions of innocents in the false dogma of CAGW hype?
What have YOU done to publish the missing 3% of the models that DID predict a 15 year flat-line in temperatures while CO2 rose?

February 7, 2012 3:03 pm

Love it, always learn something from the Willis posts, just add motivation to learn.
Have a question, in the paragraph…

For the first number, the forcing from a doubling of CO2, the usual IPCC number says that this will give 3.7 W/m2 of additional forcing. The end ranges on that are likely about 3.5 for the lower value, and 4.1 for the upper value (Hansen 2005). This gives the triangular number [3.5, 3.7, 4.0] W/m2 for the forcing change from a doubling of CO2.

Why isn’t the triangular number [3.5, 3.8, 4.1]W/m2? or alternatively (if the central number has to be 3.7) why isn’t it [3.3, 3.7, 4.1]W/m2 or [3.5, 3.7, 3.9]W/m2
thnx in advance

Kev-in-UK
February 7, 2012 3:04 pm

@Mosh
c’mon Steve – you can try and defend GCMs as much as you like, but it won’t wash because they have NO history – at least as far as I know. By that I mean, has there been a good (say 90% accurate) model ever produced? – and I don’t just mean by hindcasting – I mean by a good hindcast AND a good forecast? (and don’t quote something that has loads of ‘adjustments’ within the code, either!) FFS man, the Met Office with all their sooperdooper computers cannot predict the weather in a LOCAL region (1000 sq miles?) accurately over more than a couple of days – and that’s based on accurate, detailed, close-spaced, actually updated measurements!!
I don’t care what anyone says, any simplification of Earth’s climate system into what, a dozen or so primary variables just ain’t gonna represent real life. I feel, as always, that every model run should quote in big letters the usual financial caveat, along the lines of ‘investments can go up AND down and no guarantees are given’. Now, if the financial wizards cannot do it, what fecking hell chance do climate modellers have? Surely it is obvious that the climate is at least AS variable AND unpredictable as finance! I am not saying that some very basic models can’t be constructed that give ‘indications’ – but in terms of my analogy with stock markets, it’s like lumping all the stocks in the different markets together and just using the main market indices as your ‘data’, e.g. Dow Jones, FT, etc., as REAL indicators – it’s not valid – because in each market, copper stocks could be falling whilst oil stocks are rising (but this is simplified; there are hundreds of stocks in EACH market!) – so the net result (market index value) stays the same – but as an investor you missed the boat, because you didn’t know the detail!!

William M. Connolley
February 7, 2012 3:04 pm

> climate scientists never update the underlying physics
Because the underlying physics is mostly well known (radiative transfer, atmospheric and ocean dynamics; ecosystems are less well known, of course, but that is more biology than physics). What is of interest is the *modelling* of the physics, and the interactions of the various models.