Guest Post by Willis Eschenbach
I got to thinking about “triangular fuzzy numbers” in connection with the IPCC and their claims about how the climate system works. The IPCC, along with the climate establishment in general, makes what is to me a ridiculous claim: that in a hugely complex system like the Earth’s climate, the output is a linear function of the input. Or as these savants would have it:
Temperature change (∆T) is equal to climate sensitivity ( λ ) times forcing change (∆F).
Or as an equation,
∆T = λ ∆F.
The problem is that after thirty years of trying to squeeze the natural world into that straitjacket, they still haven’t been able to get those numbers nailed down. My theory is that this is because there is a theoretical misunderstanding. The error is in the claim that temperature change is some constant times the change in forcing.
Figure 1. The triangular fuzzy number for the number of mammal species [4166, 4629, 5092] is shown by the solid line. The peak is at the best estimate, 4629. The upper and lower limits of expected number of species vary with the membership value. For a membership value of 0.65 (shown in dotted lines), the lower limit is 4,467 species and the upper limit is 4,791 species (IUCN 2000).
So what are triangular fuzzy numbers when they are at home, and how can they help us understand why the IPCC claims are meaningless?
A triangular fuzzy number is composed of three estimates of some unknown value—the lowest, highest, and best estimates. To do calculations involving this uncertain figure, it is useful to use “fuzzy sets.” Traditional set theory includes the idea of exclusively being or not being a member of a set. For example, an animal is either alive or dead. However, for a number of sets, no clear membership can be determined. For example, is a person “old” if they are 55?
While no yes/no answer can be given, we can use fuzzy sets to determine the ranges of these types of values. Instead of the 1 or 0 used to indicate membership in traditional sets, fuzzy sets use a number between 0 and 1 to indicate partial membership in the set.
Fuzzy sets can also be used to establish boundaries around uncertain values. In addition to upper and lower values, these boundaries can include best estimates as well. It is a way to do sensitivity analysis when we have little information about the actual error sources and amounts. At its simplest all we need are the values we think it will be very unlikely to be greater or less than. These lower and upper bounds plus the best estimate make up a triangular number. A triangular number is written as [lowest expected value, best estimate, highest expected value].
For example, the number of mammalian species is given by the IUCN Red List folks as 4,629 species. However, this is known to be an estimate subject to error, which is usually quoted as ± 10%.
This range of estimates of the number of mammal species can be represented by a triangular fuzzy number. For the number of species, this is written as [4166, 4629, 5092], to indicate the lower and upper bounds, as well as the best estimate in the middle. Figure 1 shows the representation of the fuzzy number representing the count of all mammal species.
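The membership function in Figure 1 is simple to compute. Here is a minimal sketch in Python; the function name `alpha_cut` is my own label for the calculation, not something from the IUCN or from fuzzy-set software:

```python
def alpha_cut(tri, mu):
    """Bounds of a triangular fuzzy number [low, best, high] at
    membership value mu: mu = 0 gives the full [low, high] range,
    and mu = 1 collapses it to the best estimate."""
    low, best, high = tri
    return (low + mu * (best - low), high - mu * (high - best))

lo, hi = alpha_cut([4166, 4629, 5092], 0.65)
# lo ≈ 4467 and hi ≈ 4791, the bounds quoted in the Figure 1 caption
```

At membership 0.65 this reproduces the 4,467 and 4,791 species limits shown by the dotted lines in Figure 1.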
All the normal mathematical operations can be carried out using triangular numbers. The result of an operation shows the most probable value, along with the expected maximum and minimum values. For addition and multiplication, the low, best estimate, and high values are simply added or multiplied term by term. Consider two triangular fuzzy numbers, triangular number
T1 = [L1, B1, H1]
and triangular number
T2 = [L2, B2, H2],
where “L”, “B”, and “H” are the lowest, best and highest estimates. The rules are:
T1 + T2 = [L1 + L2, B1 + B2, H1 + H2]
T1 * T2 = [L1 * L2, B1 * B2, H1 * H2]
So that part is easy. Subtraction and division are a little different: the extremes cross over. The lowest possible quotient pairs the low estimate in the numerator with the high estimate in the denominator, and vice versa for the highest possible value. So division is done as follows:
T1 / T2 = [L1 / H2, B1 / B2, H1 / L2]
And subtraction like this:
T1 – T2 = [L1 – H2, B1 – B2, H1 – L2]
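The four rules above can be collected into a short sketch (the function names are mine; note that the multiplication and division rules as written assume all values are positive, which holds for every climate number used below):

```python
def tri_add(a, b):
    # [L1 + L2, B1 + B2, H1 + H2]
    return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]

def tri_mul(a, b):
    # [L1 * L2, B1 * B2, H1 * H2], assuming both numbers are positive
    return [a[0] * b[0], a[1] * b[1], a[2] * b[2]]

def tri_sub(a, b):
    # lowest result pairs our low with their high, and vice versa
    return [a[0] - b[2], a[1] - b[1], a[2] - b[0]]

def tri_div(a, b):
    # lowest result divides our low by their high, and vice versa
    return [a[0] / b[2], a[1] / b[1], a[2] / b[0]]
```

For example, `tri_sub([4, 5, 6], [1, 2, 3])` gives `[1, 3, 5]`: the smallest possible difference is 4 − 3 = 1, and the largest is 6 − 1 = 5.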
So how can we use triangular fuzzy numbers to see what the IPCC is doing?
Well, climate sensitivity (in °C per W/m2) up there in the IPCC magical formula is built from two numbers: the increased forcing expected from a doubling of CO2, and the temperature change expected from that doubling. For each of them, we have estimates of the likely range of values.
For the first number, the forcing from a doubling of CO2, the usual IPCC figure is 3.7 W/m2 of additional forcing. The end ranges on that are likely about 3.5 for the lower value and 4.0 for the upper value (Hansen 2005). This gives the triangular number [3.5, 3.7, 4.0] W/m2 for the forcing change from a doubling of CO2.
The second number, temperature change per doubling of CO2, is given by the IPCC http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf as the triangular number [2.0, 3.0, 4.5] °C for the change in temperature from a doubling of CO2.
Dividing the temperature change per doubling by the forcing change per doubling gives us the climate sensitivity, the change in temperature (∆T, °C) for a given change in forcing (∆F, watts per square metre). Again this is a triangular number, and by the rules for division it is:
T1 / T2 = [L1 / H2, B1 / B2, H1 / L2] = [2.0 / 4.0, 3.0 / 3.7, 4.5 / 3.5]
which is a climate sensitivity of [0.5, 0.8, 1.28] °C of temperature change for each W/m2 change in forcing. Note that, as expected, the central value of 0.8 °C per W/m2 corresponds to the IPCC canonical value of 3°C per doubling of CO2 (0.8 × 3.7 ≈ 3).
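A quick check of that division, in plain Python with the values from the text:

```python
temp_per_doubling = [2.0, 3.0, 4.5]      # °C per doubling of CO2 (IPCC)
forcing_per_doubling = [3.5, 3.7, 4.0]   # W/m2 per doubling of CO2

# division rule for triangular numbers: low/high, best/best, high/low
sensitivity = [temp_per_doubling[0] / forcing_per_doubling[2],
               temp_per_doubling[1] / forcing_per_doubling[1],
               temp_per_doubling[2] / forcing_per_doubling[0]]
# ≈ [0.5, 0.81, 1.29] °C per W/m2
```

The unrounded values are 0.5, 0.8108… and 1.2857…, which the text quotes as [0.5, 0.8, 1.28].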
Now, let’s see what this means in the real world. The IPCC is all on about the change in forcing since the “pre-industrial” times, which they take as 1750. For the amount of change in forcing since 1750, ∆F, the IPCC says http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf there has been an increase of [0.6, 1.6, 2.4] watts per square metre in forcing.
Multiplying the triangular number for the change in forcing [0.6, 1.6, 2.4] W/m2 by the triangular number for sensitivity [0.5, 0.8, 1.28] °C per W/m2 gives us the IPCC estimate for the change in temperature that we should have expected since 1750. Of course this is a triangular number as well, calculated as follows:
T1 * T2 = [L1 * L2, B1 * B2, H1 * H2] = [0.5 * 0.6, 0.8 * 1.6, 2.4 *1.28]
The final number, their estimate for the warming since 1750 predicted by their magic equation, is [0.3, 1.3, 3.1] °C of warming.
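That last multiplication can be verified in a couple of lines (values taken straight from the text; rounding to one decimal gives the quoted result):

```python
sensitivity = [0.5, 0.8, 1.28]      # °C per W/m2, from the division above
forcing_change = [0.6, 1.6, 2.4]    # W/m2 of added forcing since 1750

# multiplication rule for triangular numbers: low*low, best*best, high*high
warming = [s * f for s, f in zip(sensitivity, forcing_change)]
print([round(w, 1) for w in warming])   # prints [0.3, 1.3, 3.1], in °C
```

So the span of the "prediction" runs from 0.3 °C all the way up to 3.1 °C, an order of magnitude of uncertainty.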
Let me say that another way, because it’s important. For a quarter century now the AGW supporters have put millions of hours and millions of dollars into studies and computer models. In addition, the whole IPCC apparatus has creaked and groaned for fifteen years now. And that’s the best they can tell us, for all of that money and all of the studies and all of the models?
The mountain has labored and concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees … that’s some real impressive detective work there, Lou …
Seriously? That’s the best they can do, after thirty years of study? A warming between a third of a degree and three whole degrees? I cannot imagine a less falsifiable claim. Any warming will be easily encompassed by that interval. No matter what happens they can claim success. And that’s hindcasting, not even forecasting. Yikes!
I say again that the field of climate science took a wrong turn when they swallowed the unsupported claim that a hugely complex system like the climate has a linear relationship between change in input and change in operating conditions. The fundamental equation of the conventional paradigm, ∆T = λ ∆F, that basic claim that the change in temperature is a linear function of the change in forcing, is simply not true.
All the best,
w.
Leigh B. Kelley says:
February 8, 2012 at 3:07 pm
You are right. I meant to say “about 0.3°C”, not “about 0.5°C”. I’ve fixed it.
w.
novareason says:
February 8, 2012 at 1:39 pm
Actually, the IPCC numbers for the climate sensitivity are not formal confidence intervals, and the IPCC admits that the values could fall outside those ranges. In general, the IPCC confidence intervals are only 90%, and that’s based on “expert judgement” … which they seem to be lacking. So there is no “vanishing probability” at the highest and lowest values.
In fact, the IPCC goes out of its way to say climate sensitivity could well be anywhere in the range 2–4.5°C; it’s not a Gaussian-distribution kind of thing.
That’s why I like using triangular numbers. They work well when we have little information on the distribution.
w.
William Connolley on Feb 8 at 2:02 linked to an IPCC set of graphs predicting global T from the year 2000 forward. The graphs present results from 17 or 21 different models plus the ensemble average as spaghetti charts, under 3 different assumed conditions. Connolley’s point was that the purportedly nonlinear models nevertheless produce somewhat linear outputs (actually more parabolic, but over short time spans close enough to linear).
But the interesting thing is that each of these 59 models (21 + 21 + 17) predicts that global T increases over the period 2000–2011, rather than remaining flat as actually occurred. Thus, in contrast to the assertions of Connolley (and Steve Mosher earlier), none of these published IPCC models appears to accurately forecast temperature trends over the time interval for which we have data. Similarly, the 2011 paper by Spencer and Braswell demonstrates that at least 11 of the 14 IPCC models fail to track the temperature changes surrounding El Niño events, and even the 3 better models do not accurately predict the actual changes in temperature over the El Niño time period.
Thus I see no evidence that any of the IPCC models are at all reliable. So perhaps Connolley or Mosher or any other defenders of the models could link to any prospective prediction which actually matches the observed temperature changes.
And importantly, the IPCC must provide future temperature predictions on which it is explicitly willing to stake the credibility of its climate models. That way in 10 or 20 years we might be able to decide whether the models are at all useful. The failure of the IPCC predictions over the last 15 years has not forced the IPCC to admit failure because they haven’t prospectively committed to a standard by which failure or success of the models should be judged. That is why apologists like Connolley feel they can continue to avoid confronting the discrepancy between model and measured temperatures. Until they do clearly set a prospective standard on which they are willing to be judged and have a decade or so of measurements to which to compare, the models are merely hypotheses. Even a few decades of concurrence between models and measurements of course do not prove the models are valid; but divergence does suffice to demonstrate their failure.
> But the interesting thing is that each of these 59 models (21 + 21 + 17) predicts that global T increases over the period 2000 – 2011, rather than remaining flat as actually occurred
This is the usual mistake. I’ll correct you, but you won’t even read what I say much less believe or check it.
1. those aren’t temperature predictions. We all know that the GCMs don’t produce “weather” type predictions aiming to accurately track interannual fluctuations over coming decades (except for a very few experimental runs like the Smith et al stuff, but that’s not what you’re seeing here). They are aiming to produce trend predictions over longer timescales.
2. [snip]
Connolley says:
“GCMs don’t produce ‘weather’ type predictions… over coming decades…They are aiming to produce trend predictions over longer timescales.”
Fine! So define the time interval over which you are willing to have the models evaluated. It’s your call!
But recall that the time interval over which the current warming trends occur extend from the late 1970s to about 1998-1999, a period of about 20 years. If the 11 years from 2000-2011 are too short in which to evaluate a temperature trend, are 20 years somehow enough? Are you willing to commit to a 20 year evaluation period? If so, then by your standards we should know by about 2020 whether the current climate stasis reflects a significant trend.
If on the other hand you think a mere 20 years is too short a time within which to evaluate temperature trends, then you must necessarily believe that the 20-year warming from 1979-1999 is also too short to indicate a warming trend. The planet cooled from about 1940 to the late 1970s. If you evaluate the trend from 1940 to 2011, the warming seems a lot smaller. In fact, some of the most reliable data (from North America) indicate that the late 1930s-1940 period was as warm as 1998-1999, suggesting no warming at all over the last 70 years.
Personally, I think a wise evaluation interval should encompass at least one full cycle of any periodic behavior. But of course climate varies over multiple periodicities. One clear candidate interval is about 60 years, corresponding to the PDO and AMO periods. Other periodicities seem to last about 1000 and 1500 years.
But if I were to suggest that we wait until 2039 to decide whether the warming which began in 1979 is significant, you might well accuse me of being dilatory. Not to mention waiting until 3479.
But again the ball is in your court. How long do we need to wait to decide whether the predicted temperature trend from 2000 is verified or refuted??
I await your reply.
Barry Elledge says:
“So define the time interval over which you are willing to have the models evaluated… How long do we need to wait to decide whether the predicted temperature trend from 2000 is verified or refuted??”
Connolley, like all incompetent and mendacious alarmists, pretends that models are reality. They are not, as planet Earth is clearly demonstrating.
It is amusing watching serial propagandists like Mr Censorship Connolley try to convince us that the models can predict the climate accurately, by carefully selecting only those few computer models that just happen to get lucky and thus land somewhat close to the temperature record for a limited time frame. That is known as the Texas Sharpshooter Fallacy: shoot holes in a barn door, then draw a circle around a few of them and declare, “Bullseye!” But like a broken clock that is right twice a day, not one computer model is consistently correct. They are all expensive failures.
Connolley is an alarmist fakir whose every word is founded on the climate charlatanism that says CO2 will cause runaway global warming and climate disruption. As if. He is censoring scum… IMO, of course, based on what I’ve seen. And a despicable hypocrite to boot, because he freely posts here, while censoring the sincere, honest and science-based posts of others where he connives to moderate. He would make the perfect North Korean bureaucRat.
Isn’t it great that everybody is able to comment freely here, while scientific skeptics [the only honest kind of scientists] are moderated out of existence at propaganda blogs like Wikipedia for simply having a different view? The upside is the much heavier traffic the alarmist contingent brings to WUWT – the internet’s Best Science site.