Guest Post by Willis Eschenbach
I got to thinking about “triangular fuzzy numbers” in regard to the IPCC and their claims about how the climate system works. The IPCC, along with the climate establishment in general, makes what is to me a ridiculous claim: that in a hugely complex system like the Earth’s climate, the output is a linear function of the input. Or as these savants would have it:
Temperature change (∆T) is equal to climate sensitivity ( λ ) times forcing change (∆F).
Or as an equation,
∆T = λ ∆F.
The problem is that after thirty years of trying to squeeze the natural world into that straitjacket, they still haven’t been able to get those numbers nailed down. My theory is that this is because there is a theoretical misunderstanding. The error is in the claim that temperature change is some constant times the change in forcing.
Figure 1. The triangular fuzzy number for the number of mammal species [4166, 4629, 5092] is shown by the solid line. The peak is at the best estimate, 4629. The upper and lower limits of expected number of species vary with the membership value. For a membership value of 0.65 (shown in dotted lines), the lower limit is 4,467 species and the upper limit is 4,791 species (IUCN 2000).
So what are triangular fuzzy numbers when they are at home, and how can they help us understand why the IPCC claims are meaningless?
A triangular fuzzy number is composed of three estimates of some unknown value—the lowest, highest, and best estimates. To do calculations involving this uncertain figure, it is useful to use “fuzzy sets.” Traditional set theory includes the idea of exclusively being or not being a member of a set. For example, an animal is either alive or dead. However, for a number of sets, no clear membership can be determined. For example, is a person “old” if they are 55?
While no yes/no answer can be given, we can use fuzzy sets to determine the ranges of these types of values. Instead of the 1 or 0 used to indicate membership in traditional sets, fuzzy sets use a number between 0 and 1 to indicate partial membership in the set.
Fuzzy sets can also be used to establish boundaries around uncertain values. In addition to upper and lower values, these boundaries can include best estimates as well. It is a way to do sensitivity analysis when we have little information about the actual error sources and amounts. At its simplest all we need are the values we think it will be very unlikely to be greater or less than. These lower and upper bounds plus the best estimate make up a triangular number. A triangular number is written as [lowest expected value, best estimate, highest expected value].
For example, the number of mammalian species is given by the IUCN Red List folks as 4,629 species. However, this is known to be an estimate subject to error, which is usually quoted as ± 10%.
This range of estimates of the number of mammal species can be represented by a triangular fuzzy number. For the number of species, this is written as [4166, 4629, 5092], to indicate the lower and upper bounds, as well as the best estimate in the middle. Figure 1 shows the representation of the fuzzy number representing the count of all mammal species.
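For readers who want to play with the numbers, here is a minimal sketch of the triangular membership function behind Figure 1. The function names are mine, not from any library; the species figures are the IUCN ones quoted above.

```python
# Sketch of the triangular membership function for the mammal-species
# example [4166, 4629, 5092]. The helper names are ad hoc.

def membership(x, low, best, high):
    """Degree of membership (0..1) of x in the triangle [low, best, high]."""
    if x <= low or x >= high:
        return 0.0
    if x <= best:
        return (x - low) / (best - low)   # rising left side of the triangle
    return (high - x) / (high - best)     # falling right side

def alpha_cut(alpha, low, best, high):
    """Interval of values whose membership is at least alpha."""
    return (low + alpha * (best - low), high - alpha * (high - best))

lo, hi = alpha_cut(0.65, 4166, 4629, 5092)
print(round(lo), round(hi))  # roughly 4467 and 4791, matching Figure 1
```

At a membership value of 0.65 this reproduces the dotted-line bounds in Figure 1, 4,467 and 4,791 species.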
All the normal mathematical operations can be carried out using triangular numbers. The result of each operation shows the most probable value, along with the expected maximum and minimum values. For addition, and for multiplication of positive values (as in the examples here), the low, best estimate, and high values are simply added or multiplied. Consider two triangular fuzzy numbers, triangular number
T1 = [L1, B1, H1]
and triangular number
T2 = [L2, B2, H2],
where “L”, “B”, and “H” are the lowest, best and highest estimates. The rules are:
T1 + T2 = [L1 + L2, B1 + B2, H1 + H2]
T1 * T2 = [L1 * L2, B1 * B2, H1 * H2]
So that part is easy. Subtraction and division are a little different. For division, the lowest possible result pairs the low estimate of the numerator with the high estimate of the denominator, and vice versa for the highest possible result; subtraction works analogously, with the lowest result subtracting the high estimate. So division is done as follows:
T1 / T2 = [L1 / H2, B1 / B2, H1 / L2]
And subtraction like this:
T1 – T2 = [L1 – H2, B1 – B2, H1 – L2]
So how can we use triangular fuzzy numbers to see what the IPCC is doing?
Well, climate sensitivity (in °C per W/m2) up there in the IPCC magical formula is made up of two numbers: the increased forcing expected from a doubling of CO2, and the temperature change expected from a doubling of CO2. For each of them, we have estimates of the likely range of values.
For the first number, the forcing from a doubling of CO2, the usual IPCC figure is 3.7 W/m2 of additional forcing. The likely end ranges on that are about 3.5 for the lower value and 4.0 for the upper value (Hansen 2005). This gives the triangular number [3.5, 3.7, 4.0] W/m2 for the forcing change from a doubling of CO2.
The second number, temperature change per doubling of CO2, is given by the IPCC (http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf) as the triangular number [2.0, 3.0, 4.5] °C for the change in temperature from a doubling of CO2.
Dividing the temperature change per doubling by the change in forcing per doubling gives us a value for the change in temperature (∆T, °C) from a given change in forcing (∆F, watts per square metre). Again this is a triangular number, and by the rule for division it is:
T1 / T2 = [L1 / H2, B1 / B2, H1 / L2] = [2.0 / 4.0, 3.0 / 3.7, 4.5 / 3.5]
which is a climate sensitivity of [0.5, 0.8, 1.28] °C of temperature change for each W/m2 change in forcing. Note that, as expected, the central value of 0.8 °C per W/m2 times 3.7 W/m2 recovers the IPCC canonical value of 3 °C per doubling of CO2.
Now, let’s see what this means in the real world. The IPCC is all on about the change in forcing since “pre-industrial” times, which they take as 1750. For the amount of change in forcing since 1750, ∆F, the IPCC says (http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf) there has been an increase of [0.6, 1.6, 2.4] watts per square metre in forcing.
Multiplying the triangular number for the change in forcing, [0.6, 1.6, 2.4] W/m2, by the triangular number for sensitivity, [0.5, 0.8, 1.28] °C per W/m2, gives us the IPCC estimate for the change in temperature that we should have expected since 1750. Of course this is a triangular number as well, calculated as follows:
T1 * T2 = [L1 * L2, B1 * B2, H1 * H2] = [0.6 * 0.5, 1.6 * 0.8, 2.4 * 1.28]
The final number, their estimate for the warming since 1750 predicted by their magic equation, is [0.3, 1.3, 3.1] °C of warming.
Let me say that another way, because it’s important. For a quarter century now the AGW supporters have put millions of hours and millions of dollars into studies and computer models. In addition, the whole IPCC apparatus has creaked and groaned for fifteen years now, and that’s the best they can tell us for all of that money and all of the studies and all of the models?
The mountain has labored and concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees … that’s some real impressive detective work there, Lou …
Seriously? That’s the best they can do, after thirty years of study? A warming between a third of a degree and three whole degrees? I cannot imagine a less falsifiable claim. Any warming will be easily encompassed by that interval. No matter what happens they can claim success. And that’s hindcasting, not even forecasting. Yikes!
I say again that the field of climate science took a wrong turn when they swallowed the unsupported claim that a hugely complex system like the climate has a linear relationship between change in input and change in operating conditions. The fundamental equation of the conventional paradigm, ∆T = λ ∆F, that basic claim that the change in temperature is a linear function of the change in forcing, is simply not true.
All the best,
w.
Esp. given that it isn’t a single ∆F, it’s ∆Fn, with some negative and some positive, and you can’t make a simple ensemble blend because they affect each other.
‘I say again that the field of climate science took a wrong turn when they swallowed the unsupported claim that a hugely complex system like the climate has a linear relationship between change in input and change in operating conditions. ”
I concur, and would add that the same is pretty obvious when one observes weather. We have seen repeatedly, via satellite, that extraordinary weather in one locale often heralds impactful weather throughout the world.
Well done Willis. Figure 1 illustrates the entire IPCC method perfectly — A pyramid scheme.
“simply not true”
That is, indeed, where we are.
> concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees
Nope, that isn’t a conclusion of the IPCC; it’s something you’ve made up.
> the unsupported claim that a hugely complex system like the climate has a linear relationship between change in input and change in operating conditions
Yes, it’s complex. But it is possible to test it, to some extent, by using hugely complex non-linear models (whether or not you believe the models are accurate, you do believe they are complex and non-linear). Those models show that the idea does, indeed, basically work. They demonstrate that your basic claim (“hugely complex non-linear model => no linear relation between forcing and result”) is false.
“Those models show that the idea does, indeed, basically work.”
Flat wrong, Connolley.
There is not a single GCM that correctly predicted the flat to declining temperatures over the past decade and a half. Not one.
Run along now back to your Wiki censorship job, chump.
LOL, just to remind folks of a familiar example, the airflow over a 787 aircraft (flaps down, wheels down) is fully turbulent, and thus exhibits a dynamical complexity that is utterly beyond the capacity of any computer model to simulate ab initio.
And yet, Boeing nowadays uses wind tunnels and flight tests mainly to verify computer predictions of flight stability — computer predictions that Boeing (rightly) expects to be accurate to within a few percent.
Moreover, the response of a 787 in the landing pattern to small deflections of the control surfaces is … linear to very high accuracy.
That is why rational skepticism has to ask “How does modeling of complex nonlinear dynamical systems really work? Given work-and-tuning, why are modeling efforts so commonly successful?”
As Scully and Mulder used to say on X-Files: “The answer is out there.” 🙂
A William M. Connolley ??
Yep I had a WMC once, it used to blow holes in hot air, but then the wheels broke and fell off
> There is not a single GCM that
You’re not understanding what I’m saying; try thinking before writing. And I did try to write it as simply as possible. I’m making no assertion, here, that the GCMs are correct. I’m saying that they are large, complex, non-linear systems that display a simple forcing-response relationship, in contradiction to WE’s assertion.
You can get out of the hole by arguing that they are insufficiently complex, if you like. I’m not sure if that will convince people though.
William M. Connolley,
I don’t want to put words in your mouth, but are you suggesting that we test a model against another model?
I’d classify models that are tested against other models as unverifiable hypotheses.
Tim
For example, is a person “old” if they are 55?
NO
Steve T (age 57)
Interesting stuff, Willis, but I can’t play. Very, very close to resolution in the Jelbring thread, and I’ve actually written a matlab program that computes some thermodynamically interesting things about the DALR atmospheric profile, such as the fact that it rises to zero temperature, pressure, and density at a specific height. Welcome to violating the third law of thermodynamics, the preparation of a gas at zero temperature. Obvious once you think of it.
But you are right, the fundamental problem with these models is that they imply that one can write a linear equation like the one you write above to describe a fundamentally nonlinear differential equation with multiple feedbacks and self-organized differential flows of energy in multiple channels. They’ve idealized all of the physics out of it and buried it in an asserted, unproven form that cannot possibly describe the known thermal record of only the last million years, or only the Holocene, or only the full 20th century. Linear response works just great, sometimes, if you measure the slope of a smooth line and then use the slope to extrapolate over a short enough time scale, before all of the physics in the nonlinear part you are neglecting has time to completely change the slope…
rgb
Radiative forcing is pure pseudoscience. The underlying assumptions are incorrect before any equations are written down or any computer code is written. They assume a magical ‘climate equilibrium state’. This can be perturbed by adding CO2 etc. to the atmosphere and calculating a new ‘equilibrium state’. Since the original perturbation calculations did not give the desired warming, more magical ‘water feedbacks’ have been added. The radiative forcing constants are just empirical fudge factors. Pull a number from some warm body orifice to get the CO2 sensitivity, and then apply this to the other greenhouse gases. The warming number obtained is too high, so add aerosols for cooling and hope no one notices.
There is no such thing as a climate equilibrium state. Never was. Never will be. The climate has to be explained in terms of the time-varying heat coupled into a series of linked thermal reservoirs. Global warming then disappears into the flux noise of the real engineering calculations. There are more details at http://www.venturaphotonics.com.
It is time to shut down these so called ‘climate models’ and throw the climate astrologers in jail.
William M. Connolley, too much wikipedia editing, of any slightly skeptical opinion about climate change alarmism, drove you off from climate knowledge and sanity.
Your arguments are rather poor such as your wikipedia editing.
Cheers
[From a TRUE climate scientist]
William M. Connolley says:
February 7, 2012 at 9:27 am
It is a simple conclusion from the premises clearly stated by the IPCC. You are mincing words and picking nits. They gave the climate sensitivity and the change in forcing. I merely multiplied them together; you could have done the same thing … well, perhaps you couldn’t.
But in any case, your claim is that by multiplying their claimed climate sensitivity by their claimed change in forcing using their error bars, I’ve done something so unusual, so unexpected, that it cannot be seen as a conclusion of the IPCC.
You can keep believing that if it makes you feel better. Me, I just follow the numbers.
I believe nothing of the sort. The climate models are complex, but as I and others have repeatedly shown (see here, here, and here), they are most assuredly linear. You really should try to keep up with the field, Billy, I fear your day job censoring Wikipedia isn’t leaving you enough time to read the scientific literature.
Right, those models have done so well at predicting the current lack of warming …
My basic claim? That’s not my claim at all. You are a liar, sir, I did not make that claim. Putting your own pathetic words in quotes, to fool people into thinking that you were actually quoting me, is just more of your slimy tactics.
Billy, you have the brains of a box of bolts, unfortunately combined with the morals of a Shanghai pimp, the table manners of a bonobo, and the lock-jaw tenacity of a moray eel. Your censorship at Wikipedia will be cited in future books on the history of science, and the damage done by your actions will be noted.
Despite that, in the spirit of scientific inquiry, you are welcome to post what you claim is science on my threads. In return, I will post my honest opinion of your claims.
Trying to put words in my mouth, however, will get your face slapped every time.
w.
Connolley says:
“But it is possible to test it, to some extent, by using hugely complex non-linear models (whether you believe the models are accurate or not, you do believe they are complex and non-linear). Those models show that the idea does, indeed, basically work.”
The same could be said for the epicycle theory of celestial orbits – I paraphrase:
“But it is possible to test it, to some extent, by using hugely complex earth-centric models (whether you believe the models are accurate or not, you do believe they are complex and earth-centric). Those models show that the idea does, indeed, basically work.” No, they don’t. They show that the basic premise is false, through their requirement for huge complexity without producing any (much less a lot of) predictability with respect to future observations.
Conrad
William M. Connolley says:
February 7, 2012 at 9:57 am
I see. When someone doesn’t understand your writing and your ideas, it’s their fault. I didn’t understand you either. I guess we’re all just not as brilliant as you are. I’ll try to keep up …
Ah, now I understand your claim. The models are large, yes. Complex, yes.
But “non-linear”? No.
The main problem is, they are not naturally evolved and evolving complex systems of the kind we find in the world around us.
Natural complex systems are things like meandering rivers. You can cut through an oxbow bend on a meandering river, and shorten it by some number of miles. Simple and linear. We can even make an equation:
∆R = – λ C
where ∆R is the change in river length, lambda is the “cutoff multiplier effect”, and C is the length of the cutoff. And that works perfectly to calculate the change in the length of the river resulting from the cutoff.
… It works perfectly, it’s true, but only until you wait a little while and the river responds by lengthening somewhere else. Unlike the current type of computer climate model, a river responds and changes in response to changed conditions. For a meandering river, the length of the river is constant on average over long periods of time. It cuts through an oxbow here, and in response a bend way downstream widens out, and the length of the river doesn’t change.
So no, the current crop of climate models are not a valid analog of complex natural systems at all. They could be, I mean computers can be used to do that kind of modeling.
But these current GCMs are based around that idea, and built by people who believe to their depths that ∆T = λ ∆F. As a result, that’s what they put out. Perhaps you find that surprising. I find nothing surprising in computer models which predict what their programmers believe.
w.
William M. Connolley says:
February 7, 2012 at 9:27 am
> concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees
Nope, that isn’t a conclusion of the IPCC; it’s something you’ve made up …
Actually, wikipedia provides links to quite useful articles on fuzzy set theory and fuzzy estimates of uncertainty. If you read those first, then read Willis’ article all the way from the beginning to the end, his reasoning would be clear, and you’d have the basic information to follow the argument. Then any objection you advanced would have more chance of being an informed criticism.
The beauty of “fuzzy numbers” used in linear models is that you are able to select a particular “fudge factor” that makes your selected model fit your selected set of data. What if they can produce a model that indicates that the CO2 sensitivity is negligible compared to the sensitivity to water vapor and clouds?
Hey, Willis! Stop mincing words, will ya? Tell us what you REALLY think about William M. Connolley.
You don’t need any sort of numbers. It’s just automatic common sense for anyone who has tried to measure any part of nature, whether biological or meteorological. Nature doesn’t do linear.
Linear approximations will do for very small spatial or temporal intervals, such as comparing the temperature of my yard versus the neighbor’s yard, or the temperature right now with the temperature 5 minutes from now. Anything beyond a few miles or a few minutes, it’s just a waste of neurons and energy.
It obviously wasn’t about the fuzzy triangles that I thought it was.
Every climate scientist knows that temperature and energy levels are linearly related.
It is the basic physics that climate science relies on, and it is taught in every climate science textbook. Every climate scientist knows how to say “it’s basic physics.”
And this is clearly spelled out in the fundamental physics equation governing temperature and energy levels in the universe.
[Make sure to click the link to fully understand what is said above. (No WMC comments on this Wiki I believe).]
http://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law
Climate Review: II
by Bob Carter
February 7, 2012
——————————————————————————–
A summary and analysis of selected events and papers relevant to global warming, in an Australian political context. Climate Review: I is here.
——————————————————————————–
January to June, 2011
Stimulated by research spending of billions of dollars, inexorably and month by month, torrents of new scientific information appear that are relevant to the twin issues of global warming and climate change.
No one scientist, or group, can possibly absorb and précis accurately the full range of this literature, though valiant efforts are made both by the IPCC and by its essential counterpart, the Non-governmental International Panel on Climate Change (NIPCC).
To date, research findings are consistent with a largely natural, though still incompletely understood, origin for modern climate change. Discounting virtual-reality computer model studies, no recent paper has provided empirical evidence that dangerous human-caused global warming is occurring; and neither the atmosphere nor the ocean is currently warming despite the continuing increases in atmospheric carbon dioxide.
http://www.quadrant.org.au/blogs/doomed-planet/2012/02/climate-review-ii
Willis,
You hit the nail on the head here. I remember reading a paper on three-dimensional electromagnetic modeling that started with a simple linear equation. A sudden switch to elliptic integrals and an introduction of Bessel functions, and I suddenly realized that not even the authors knew what they were doing. Not only is the Earth non-linear, we don’t even know where to start.