# Triangular Fuzzy Numbers and the IPCC

Guest Post by Willis Eschenbach

I got to thinking about “triangular fuzzy numbers” in connection with the IPCC and their claims about how the climate system works. The IPCC, along with the climate establishment in general, makes what is to me a ridiculous claim. This is the idea that in a hugely complex system like the Earth’s climate, the output is a linear function of the input. Or as these savants would have it:

Temperature change (∆T) is equal to climate sensitivity ( λ ) times forcing change (∆F).

Or as an equation,

∆T = λ  ∆F.

The problem is that after thirty years of trying to squeeze the natural world into that straitjacket, they still haven’t been able to get those numbers nailed down. My theory is that this is because there is a theoretical misunderstanding. The error is in the claim that temperature change is some constant times the change in forcing.

Figure 1. The triangular fuzzy number for the number of mammal species [4166,  4629,  5092] is shown by the solid line. The peak is at the best estimate, 4629. The upper and lower limits of expected number of species vary with the membership value. For a membership value of 0.65 (shown in dotted lines), the lower limit is 4,467 species and the upper limit is 4,791 species (IUCN 2000).

So what are triangular fuzzy numbers when they are at home, and how can they help us understand why the IPCC claims are meaningless?

A triangular fuzzy number is composed of three estimates of some unknown value—the lowest, highest, and best estimates. To do calculations involving such an uncertain figure, it is useful to use “fuzzy sets.” In traditional set theory, something either is or is not a member of a set. For example, an animal is either alive or dead. For many sets, however, no clear membership can be determined. For example, is a person “old” if they are 55?

While no yes/no answer can be given, we can use fuzzy sets to determine the ranges of these types of values. Instead of the 1 or 0 used to indicate membership in traditional sets, fuzzy sets use a number between 0 and 1 to indicate partial membership in the set.

Fuzzy sets can also be used to establish boundaries around uncertain values. In addition to upper and lower values, these boundaries can include best estimates as well. It is a way to do sensitivity analysis when we have little information about the actual error sources and amounts. At its simplest, all we need are the values that we think the true figure is very unlikely to be greater or less than. These lower and upper bounds, plus the best estimate, make up a triangular number. A triangular number is written as [lowest expected value,  best estimate,  highest expected value].

For example, the number of mammalian species is given by the IUCN Red List folks as 4,629 species. However, this is known to be an estimate subject to error, which is usually quoted as ± 10%.

This range of estimates of the number of mammal species can be represented by a triangular fuzzy number. For the number of species, this is written as [4166,  4629,  5092], to indicate the lower and upper bounds, as well as the best estimate in the middle. Figure 1 shows the representation of the fuzzy number representing the count of all mammal species.
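To see how the triangle in Figure 1 works numerically, here is a minimal Python sketch of the membership function’s bounds at a given membership value (the function name `alpha_cut` is my own, not from any fuzzy-logic library):

```python
def alpha_cut(low, best, high, membership):
    """Return the (lower, upper) bounds of the triangular fuzzy number
    [low, best, high] at a given membership value in [0, 1].

    At membership 0 the bounds are the full range; at membership 1
    both bounds collapse to the best estimate."""
    lower = low + membership * (best - low)
    upper = high - membership * (high - best)
    return lower, upper

# The mammal-species example from Figure 1: [4166, 4629, 5092]
lo, hi = alpha_cut(4166, 4629, 5092, 0.65)
print(round(lo), round(hi))  # 4467 4791, matching the figure's dotted lines
```

The linear interpolation is exactly what the two sloping sides of the triangle in Figure 1 encode.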

All the normal mathematical operations can be carried out using triangular numbers. The result of an operation is itself a triangular number, showing the most probable value along with the expected minimum and maximum. For addition, and for multiplication when all the values are positive, the corresponding low, best-estimate, and high values are simply added or multiplied. Consider two triangular fuzzy numbers, triangular number

T1 = [L1,  B1,  H1]

and triangular number

T2 = [L2,  B2,  H2],

where “L”, “B”, and “H” are the lowest, best and highest estimates.  The rules are:

T1 + T2  =  [L1 + L2,  B1 + B2,  H1 + H2]

T1 * T2  =  [L1 * L2,  B1 * B2,  H1 * H2]

So that part is easy. For subtraction and division, it’s a little different. The lowest possible result pairs the low estimate of the first number with the high estimate of the second, and vice versa for the highest possible result. So division is done as follows:

T1 / T2 = [L1 / H2,  B1 / B2,  H1 / L2]

And subtraction like this:

T1 – T2  =  [L1 – H2,  B1 – B2,  H1 – L2]
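The rules above can be transcribed directly into a few lines of Python. This is only a sketch of the interval rules for positive triangular numbers, not a full fuzzy-arithmetic library, and the function names are my own:

```python
def t_add(a, b):
    """[L1+L2, B1+B2, H1+H2]"""
    return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]

def t_sub(a, b):
    """The lowest result pairs a's low with b's high, and vice versa:
    [L1-H2, B1-B2, H1-L2]"""
    return [a[0] - b[2], a[1] - b[1], a[2] - b[0]]

def t_mul(a, b):
    """[L1*L2, B1*B2, H1*H2] -- valid when all values are positive."""
    return [a[0] * b[0], a[1] * b[1], a[2] * b[2]]

def t_div(a, b):
    """The lowest result divides a's low by b's high, and vice versa:
    [L1/H2, B1/B2, H1/L2] -- valid when all values are positive."""
    return [a[0] / b[2], a[1] / b[1], a[2] / b[0]]

# A quick check with simple numbers:
print(t_sub([1, 2, 3], [1, 2, 3]))  # [-2, 0, 2]: subtraction widens the range
```

Note the last line: subtracting a triangular number from itself does not give zero width, because the uncertainty compounds. That widening is the whole point of the exercise below.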

So how can we use triangular fuzzy numbers to see what the IPCC is doing?

Well, climate sensitivity (in °C per W/m2) up there in the IPCC magical formula is the ratio of two numbers: the temperature change expected from a doubling of CO2, divided by the increased forcing expected from that doubling. For each of them, we have estimates of the likely range of values.

Start with the forcing from a doubling of CO2. The usual IPCC number says that this will give 3.7 W/m2 of additional forcing, with end ranges of about 3.5 for the lower value and 4.0 for the upper value (Hansen 2005). This gives the triangular number [3.5,  3.7,  4.0] W/m2 for the forcing change from a doubling of CO2.

The other number, the temperature change per doubling of CO2, is given by the IPCC (http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf) as the triangular number [2.0,  3.0,  4.5] °C for the change in temperature from a doubling of CO2.

Dividing the temperature change per doubling by the change in forcing per doubling gives us a value for the change in temperature (∆T, °C) from a given change in forcing (∆F, watts per square metre). Again this is a triangular number, and by the rules for division it is:

T1 / T2 = [L1 / H2,  B1 / B2,  H1 / L2] = [2.0 / 4.0,  3.0 / 3.7,  4.5 / 3.5]

which is a climate sensitivity of [0.5,  0.8,  1.28] °C of temperature change for each W/m2 change in forcing. Note that, as expected, the central value of 0.8 °C per W/m2 corresponds to the IPCC canonical value of 3 °C per doubling of CO2.

Now, let’s see what this means in the real world. The IPCC is all on about the change in forcing since “pre-industrial” times, which they take as 1750. For the amount of change in forcing since 1750, ∆F, the IPCC says (http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/02_02_07_climatereport.pdf) there has been an increase of [0.6,  1.6,  2.4] watts per square metre in forcing.

Multiplying the triangular number for the change in forcing [0.6,  1.6,  2.4] W/m2 by the triangular number for sensitivity [0.5,  0.8,  1.28] °C per W/m2 gives us the IPCC estimate for the change in temperature that we should have expected since 1750. Of course this is a triangular number as well, calculated as follows:

T1 * T2 = [L1 * L2,  B1 * B2,  H1 * H2] = [0.6 * 0.5,  1.6 * 0.8,  2.4 * 1.28]

The final number, their estimate for the warming since 1750 predicted by their magic equation, is [0.3,  1.3,  3.1] °C of warming.
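As a check on the arithmetic, the whole chain above can be reproduced in a few lines of Python (the helper functions are my own transcription of the division and multiplication rules for positive triangular numbers):

```python
def t_div(a, b):
    # Division rule for positive triangular numbers: [L1/H2, B1/B2, H1/L2]
    return [a[0] / b[2], a[1] / b[1], a[2] / b[0]]

def t_mul(a, b):
    # Multiplication rule for positive triangular numbers: [L1*L2, B1*B2, H1*H2]
    return [a[0] * b[0], a[1] * b[1], a[2] * b[2]]

temp_per_doubling = [2.0, 3.0, 4.5]     # degC per doubling of CO2 (IPCC)
forcing_per_doubling = [3.5, 3.7, 4.0]  # W/m2 per doubling of CO2

sensitivity = t_div(temp_per_doubling, forcing_per_doubling)
# roughly [0.5, 0.81, 1.29] degC per W/m2

forcing_since_1750 = [0.6, 1.6, 2.4]    # W/m2 (IPCC)
warming = t_mul(forcing_since_1750, sensitivity)
print([round(x, 1) for x in warming])   # [0.3, 1.3, 3.1] degC since 1750
```

Carrying the unrounded sensitivity through (rather than the rounded 0.8 and 1.28) changes nothing at one decimal place: the hindcast range is still a third of a degree to three degrees.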

Let me say that another way, because it’s important. For a quarter century now the AGW supporters have put millions of hours and millions of dollars into studies and computer models. In addition, the whole IPCC apparatus has creaked and groaned for fifteen years now, and that’s the best they can tell us for all of that money and all of the studies and all of the models?

The mountain has labored and concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees … that’s some real impressive detective work there, Lou …

Seriously? That’s the best they can do, after thirty years of study? A warming between a third of a degree and three whole degrees? I cannot imagine a less falsifiable claim. Any warming will be easily encompassed by that interval. No matter what happens they can claim success. And that’s hindcasting, not even forecasting. Yikes!

I say again that the field of climate science took a wrong turn when they swallowed the unsupported claim that a hugely complex system like the climate has a linear relationship between change in input and change in operating conditions. The fundamental equation of the conventional paradigm, ∆T = λ ∆F, that basic claim that the change in temperature is a linear function of the change in forcing, is simply not true.

All the best,

w.

## 106 thoughts on “Triangular Fuzzy Numbers and the IPCC”

1. Jean Parisot says:

Esp. given that it isn’t ∆F, it’s ∆Fn, with some negative, some positive, and you can’t make a simple ensemble blend as they affect each other.

2. pat says:

“I say again that the field of climate science took a wrong turn when they swallowed the unsupported claim that a hugely complex system like the climate has a linear relationship between change in input and change in operating conditions.”

I concur, and would add that the same is pretty obvious when one observes weather. We have seen repeatedly, via satellite, that extraordinary weather in one locale often heralds impactful weather throughout the world.

3. Hugh K says:

Well done Willis. Figure 1 illustrates the entire IPCC method perfectly — A pyramid scheme.

4. Paul Vaughan says:

“simply not true”

That is, indeed, where we are.

5. William M. Connolley says:

> concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees

Nope, that isn’t a conclusion of the IPCC, it’s something you’ve made up.

> the unsupported claim that a hugely complex system like the climate has a linear relationship between change in input and change in operating conditions

Yes, it’s complex. But it is possible to test it, to some extent, by using hugely complex non-linear models (whether you believe the models are accurate or not, you do believe they are complex and non-linear). Those models show that the idea does, indeed, basically work. They demonstrate that your basic claim (“huge complex non-linear model => no linear relation between forcing and result”) is false.

6. Smokey says:

“Those models show that the idea does, indeed, basically work.”

Flat wrong, Connolley.

There is not a single GCM that correctly predicted the flat to declining temperatures over the past decade and a half. Not one.

Run along now back to your Wiki censorship job, chump.

7. A physicist says:

LOL, just to remind folks of a familiar example, the airflow over a 787 aircraft (flaps down, wheels down) is fully turbulent, and thus exhibits a dynamical complexity that is utterly beyond the capacity of any computer model to simulate ab initio.

And yet, Boeing nowadays uses wind tunnels and flight tests mainly to verify computer predictions of flight stability — computer predictions that Boeing (rightly) expects to be accurate to within a few percent.

Moreover, the response of a 787 in the landing patterns to small deflections of the control surfaces is … linear to very high accuracy.

That is why rational skepticism has to ask “How does modeling of complex nonlinear dynamical systems really work? Given work-and-tuning, why are modeling efforts so commonly successful?”

As Scully and Mulder used to say on X-Files: “The answer is out there.”   :)

8. Fredrick Lightfoot says:

A William M. Connolley ??
Yep I had a WMC once, it used to blow holes in hot air, but then the wheels broke and fell off

9. William M. Connolley says:

> There is not a single GCM that

You’re not understanding what I’m saying; try thinking before writing. And I did try to write it as simply as possible. I’m making no assertion, here, that the GCMs are correct. What I’m saying is that they are large, complex, non-linear systems that display a simple forcing-response relationship, in contradiction to WE’s assertion.

You can get out of the hole by arguing that they are insufficiently complex, if you like. I’m not sure if that will convince people though.

10. Tim Fitzgerald says:

William M. Connolley,

I don’t want to put words in your mouth, but are you suggesting that we test a model against another model?

I’d classify models that are tested against other models as unverifiable hypotheses.

Tim

11. steveta_uk says:

For example, is a person “old” if they are 55?

NO

Steve T (age 57)

12. Interesting stuff, Willis, but I can’t play. Very, very close to resolution in the Jelbring thread, and I’ve actually written a matlab program that computes some thermodynamically interesting things about the DALR atmospheric profile, such as the fact that it rises to zero temperature, pressure, and density at a specific height. Welcome to violating the third law of thermodynamics, the preparation of a gas at zero temperature. Obvious once you think of it.

But you are right, the fundamental problem with these models is that they imply that one can write a linear equation like the one you write above to describe a fundamentally nonlinear differential equation with multiple feedbacks and self-organized differential flows of energy in multiple channels. They’ve idealized all of the physics out of it and buried it in an asserted, unproven form that cannot possibly describe the known thermal record of only the last million years, or only the Holocene, or only the full 20th century. Linear response works just great, sometimes, if you measure the slope of a smooth line and then use the slope to extrapolate over a short enough time scale, before all of the physics in the nonlinear part you are neglecting has time to completely change the slope…

rgb

13. Radiative forcing is pure pseudoscience. The underlying assumptions are incorrect, before any equations are written down or any computer code is written. They assume a magical ‘climate equilibrium state’. This can be perturbed by adding CO2 etc. to the atmosphere and calculating a new ‘equilibrium state’. Since the original perturbation calculations did not give the desired warming, more magical ‘water feedbacks’ have been added. The radiative forcing constants are just empirical fudge factors. Pull a number from some warm body orifice to get the CO2 sensitivity, and then apply this to the other greenhouse gases. The warming number obtained is too high, so add aerosols for cooling and hope no one notices.
There is no such thing as a climate equilibrium state. Never was. Never will be. The climate has to be explained in terms of the time-varying heat coupled into a series of linked thermal reservoirs. Global warming then disappears into the flux noise of the real engineering calculations. There are more details at http://www.venturaphotonics.com.
It is time to shut down these so called ‘climate models’ and throw the climate astrologers in jail.

14. Spartacus says:

William M. Connolley, too much Wikipedia editing of any slightly skeptical opinion about climate change alarmism drove you away from climate knowledge and sanity.
Your arguments are rather poor, much like your Wikipedia editing.
Cheers

[From a TRUE climate scientist]

15. Willis Eschenbach says:

William M. Connolley says:
February 7, 2012 at 9:27 am

> concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees

Nope, that isn’t a conclusion of the IPCC, it’s something you’ve made up.

It is a simple conclusion from the premises clearly stated by the IPCC. You are mincing words and picking nits. They gave the climate sensitivity and the change in forcing. I merely multiplied them together; you could have done the same thing… well, perhaps you couldn’t.

But in any case, your claim is that by multiplying their claimed climate sensitivity by their claimed change in forcing using their error bars, I’ve done something so unusual, so unexpected, that it cannot be seen as a conclusion of the IPCC.

You can keep believing that if it makes you feel better. Me, I just follow the numbers.

> the unsupported claim that a hugely complex system like the climate has a linear relationship between change in input and change in operating conditions

Yes, it’s complex. But it is possible to test it, to some extent, by using hugely complex non-linear models (whether you believe the models are accurate or not, you do believe they are complex and non-linear).

I believe nothing of the sort. The climate models are complex, but as I and others have repeatedly shown (see here, here, and here), they are most assuredly linear. You really should try to keep up with the field, Billy, I fear your day job censoring Wikipedia isn’t leaving you enough time to read the scientific literature.

Those models show that the idea does, indeed, basically work.

Right, those models have done so well at predicting the current lack of warming …

They demonstrate that your basic claim (“huge complex non-linear model => no linear relation between forcing and result”) is false.

My basic claim? That’s not my claim at all. You are a liar, sir, I did not make that claim. Putting your own pathetic words in quotes, to fool people into thinking that you were actually quoting me, is just more of your slimy tactics.

Billy, you have the brains of a box of bolts, unfortunately combined with the morals of a Shanghai pimp, the table manners of a bonobo, and the lock-jaw tenacity of a moray eel. Your censorship at Wikipedia will be cited in future books on the history of science, and the damage done by your actions will be noted.

Despite that, in the spirit of scientific inquiry, you are welcome to post what you claim is science on my threads. In return, I will post my honest opinion of your claims.

Trying to put words in my mouth, however, will get your face slapped every time.

w.

16. conrad clark says:

Connolley says:

“But it is possible to test it, to some extent, by using hugely complex non-linear models (whether you believe the models are accurate or not, you do believe they are complex and non-linear). Those models show that the idea does, indeed, basically work.”

The same could be said for the epicycle theory of celestial orbits – I paraphrase:

“But it is possible to test it, to some extent, by using hugely complex earth-centric models (whether you believe the models are accurate or not, you do believe they are complex and earth-centric). Those models show that the idea does, indeed, basically work.” No, they don’t. They show that the basic premise is false, through their requirement for huge complexity without producing any (much less a lot of) predictability wrt future observations.

17. Willis Eschenbach says:

William M. Connolley says:
February 7, 2012 at 9:57 am

> There is not a single GCM that

You’re not understanding what I’m saying; try thinking before writing.

I see. When someone doesn’t understand your writing and your ideas, it’s their fault. I didn’t understand you either. I guess we’re all just not as brilliant as you are. I’ll try to keep up …

And I did try to write it as simply as possible. I’m making no assertion, here, that the GCMs are correct. What I’m saying is that they are large, complex, non-linear systems that display a simple forcing-response relationship, in contradiction to WE’s assertion.

Ah, now I understand your claim. The models are large, yes. Complex, yes.

But “non-linear”? No.

The main problem is, they are not naturally evolved and evolving complex systems of the kind we find in the world around us.

Natural complex systems are things like meandering rivers. You can cut through an oxbow bend on a meandering river, and shorten it by some number of miles. Simple and linear. We can even make an equation:

∆R = – λ C

where ∆R is the change in river length, lambda is the “cutoff multiplier effect”, and C is the length of the cutoff. And that works perfectly to calculate the change in the length of the river resulting from the cutoff.

… It works perfectly, it’s true, but only until you wait a little while, and the river responds by lengthening somewhere else. Because a river is not one of the current type of computer climate models, it responds and changes in response to changed conditions. For a meandering river, the length of the river is constant on average over long periods of time. It cuts through an oxbow here and in response way downstream a bend widens out, and the length of the river doesn’t change.

So no, the current crop of climate models are not a valid analog of complex natural systems at all. They could be, I mean computers can be used to do that kind of modeling.

But these current GCMs are based around the idea, and built by people who believe to their depths, that ∆T = λ ∆F. As a result, that’s what they put out. Perhaps you find that surprising. I find nothing surprising in computer models which predict what their programmers believe.

w.

18. Duster says:

William M. Connolley says:
February 7, 2012 at 9:27 am

> concluded that since 1750, we should have seen a warming of somewhere between a third of a degree and three degrees

Nope, that isn’t a conclusion of the IPCC, its something you’ve made up. ..

Actually Wikipedia provides links to quite useful articles on fuzzy set theory and fuzzy estimates of uncertainty. If you read those first, then read Willis’ article all the way from the beginning to the end, his reasoning would be clear, and you’d have the basic information to follow the argument. Then any objection you advanced would have more chance of being an informed criticism.

19. The beauty of “fuzzy numbers” used in linear models is that you are able to select a particular “fudge factor” that makes your selected model fit your selected set of data. What if they can produce a model that indicates that the CO2 sensitivity is negligible compared to the sensitivity to water vapor and clouds?

20. Michael Reed says:

Hey, Willis! Stop mincing words, will ya? Tell us what you REALLY think about William M. Connolley.

21. You don’t need any sort of numbers. It’s just automatic common sense for anyone who has tried to measure any part of nature, whether biological or meteorological. Nature doesn’t do linear.

Linear approximations will do for very small spatial or temporal intervals, such as comparing the temperature of my yard versus the neighbor’s yard, or the temperature right now with the temperature 5 minutes from now. Anything beyond a few miles or a few minutes, it’s just a waste of neurons and energy.

22. Bloke down the pub says:

It obviously wasn’t about the fuzzy triangles that I thought it was.

23. Bill Illis says:

Every climate scientist knows that temperature and energy levels are linearly related.

It is the basic physics that climate science relies on and is taught in every climate science textbook. Every climate scientist knows how to say “its basic physics.”

And this is clearly spelled out in the fundamental physics equation governing temperature and energy levels in the universe.

[Make sure to click the link to fully understand what is said above. (No WMC comments on this Wiki I believe).]

http://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law

24. Ed Scott says:

Climate Review: II

by Bob Carter

February 7, 2012

——————————————————————————–

A summary and analysis of selected events and papers relevant to global warming, in Australian political context. Climate Review: I is here.

——————————————————————————–

January to June, 2011

Stimulated by research spending of billions of dollars, inexorably, and month by month, torrents of new scientific information appear that are relevant to the twin issues of global warming and climate change.

No one scientist, or group, can possibly absorb and précis accurately the full range of this literature, though valiant efforts are made both by the IPCC and by its essential counterpart, the Non-governmental International Panel on Climate Change (NIPCC).

To date, research findings are consistent with a largely natural, though still incompletely understood, origin for modern climate change. Discounting virtual reality computer model studies, no recent paper has provided empirical evidence that dangerous human-caused global warming is occurring; and neither the atmosphere nor the ocean is currently warming despite the continuing increases in atmospheric carbon dioxide.

25. Steve from Rockwood says:

Willis,
You hit the nail on the head here. I remember reading a paper on three dimensional electromagnetic modeling that started with a simple linear equation. A sudden switch to ellipsoidal integrals and an introduction to Bessel functions and I suddenly realized that not even the authors knew what they were doing. Not only is the Earth non-linear, we don’t even know where to start.

26. Vince Causey says:

I think Willis has just done a Toto, and pulled away the curtains from the IPCC wizard.

27. Noonan says:

I am an economist by training, and I think climate forecasting suffers from the same problem as economic forecasting: too many variables.

28. DesertYote says:

Wow, just look at all the propagandists posting here trying to distract and confuse. The scary thing is I think that some of them actually think they are making sense.

29. Peter Plail says:

Whenever Willis writes something, I end up understanding what he means. He has a rare ability to communicate complex subjects in a logical and clear style which doesn’t talk down.

I have read and reread Connolley’s comments and I simply don’t understand them. Maybe it is me but ………..

30. Bill Hunter says:

William M. Connolley says:
February 7, 2012 at 9:27 am
“Yes, it’s complex. But it is possible to test it, to some extent, by using hugely complex non-linear models (whether you believe the models are accurate or not, you do believe they are complex and non-linear). Those models show that the idea does, indeed, basically work”

Yeah, but with a worse fit than astrologers get with curve fitting. So one is on solid ground arguing that the models are not helping the “cause”. If pouring a billion dollars into model development can’t beat out the curve fitting of a guy in a steeple with a pointy hat living on bread crumbs, one has to wonder about the usefulness of the exercise, or at least how it’s being spread around.

31. A couple points.

Smokey, you are wrong. Some realizations of GCMs did in fact “predict” the cooling; the MEAN of all of them did not. It’s pretty simple. GCMs, as Connolley notes, are highly non-linear. When you look at all 52 runs (give or take) you will find values that go up and down; on AVERAGE they go up. So about 3% of the runs show cooler temps than the observed, while the average of all shows more warming. There is another factor to consider. It goes like this. Suppose I tell you: my model says that IF you drop a bomb flying at 660 knots from an altitude of 30K feet, with no head wind, the bomb will travel forward 1 nm before hitting the ground. That is a CONDITIONAL prediction. IF 660 knots, IF 30K feet, IF no head wind, THEN the bomb lands 1 nm from the point of release.

Now, we try to test that. Oops. The pilot releases the bomb at 695 knots, at 31,102 feet, and there is a 15-knot quartering wind in his face. The bomb lands 1.2 nm from the release point.

Is the model wrong? Well, we really can’t say for sure. Why? Because the conditions prescribed in the experiment were not met. The IF clauses didn’t happen.

So what do we do? Well, we can do two things:
1. Assume our model is correct and recalculate the prediction GIVEN the ACTUAL test conditions.
2. Rerun the test, tell the pilot to be more careful, and plan the test for non-windy days.

With a GCM we cannot do #2. We cannot rerun 2001 to 2011. There was one experiment, one set of ACTUAL forcings. That leaves option 1: rerunning the GCMs with the ACTUAL forcing we saw during 2001 to 2010. Sadly, people don’t do this. What they do instead is say that the prediction is “good enough”, that the models are not proven wrong.

So. In the first place, some model runs did in fact predict the cooling. Those model runs were outnumbered by the majority, which did predict warming. In the second place, to really evaluate the models you would have to rerun them with ACTUAL forcings. Third, at some point the modelling community will have to grow some stones and eliminate models which are running hot. The mean sensitivity is 3.2C; that’s likely too high a value.

But if you feel like you’ve got proof that the models are not useful, then go visit Tamsin’s blog on GCM models. Talk to a real modeler and tell her why you are right.

My bet: you won’t go there and have a girl kick your intellectual butt.

32. Rosco says:

I don’t get it – radiation in the atmosphere is clearly NOT anything like the dominant energy transport mechanism.

I’ve seen the argument that humidity “feels” hotter because of the extra radiation from the extra water vapour – what about the fact that our body relies on evaporation to cool (sweat is the extreme form), the extra humidity inhibits this and we feel discomfort because we cannot cool effectively ? Supply a breeze and the situation resolves.

If one believes radiation is a dominant energy transport mechanism then why do “radiators” rely on conduction and convection – why do they put the “convection” in the oven etc etc.

And of course the ultimate conclusion – the only atmospheric components that “matter” are those that absorb radiation with the bulk remaining, apparently, stony cold – nonsense.

33. To a certain extent I admire the attempts being made to ‘try’ to model the climate – the trouble is, it’s like trying to model some huge self-configuring machine that has no central program or control. It’s free to create and destroy its own feedbacks based on any prior state in the past. In terms of NP-complete problems, this is the keystone. If they can model this properly then a whole raft of similarly complex problems – like crypto cracking – become child’s play!

As regards turbulence, that’s a subproblem of modeling climate; at least with modeling turbulence you don’t get what happened 100 years ago having an effect on what you observe now.

For instance over on http://www.facebook.com/pages/I-Bet-We-Can-Get-100000-people-to-Say-NO-to-the-Carbon-Tax/163658930354822?ref=ts we have a lovely set of examples of a set of warmists trying to claim that the average hindcast of the 14 models used by the IPCC is somehow significant in showing that they have modeled the prior climate accurately.. They got truly batted out of court.

34. Rosco says:

I recently saw a statement I really believe in – “atmospheres act as refrigerators”.

If you dismiss the nonsense of the false low levels of insolation peddled – (how does an average represent something that is either on or off anyway ?) – and recognize the Sun could fry us during the day – (1000 W/sq m used to be quoted as maximum insolation not about a quarter of this) then you have to accept the reality that we have twin air conditioners protecting us from this intense energy – one evaporative and the other convective – and the oceans and atmosphere act to reduce the surface temperature.

Both take the heat away from the surface – without it Earth wouldn’t be habitable.

Perhaps the fact that CO2 is less soluble in warmer water is like a safety valve – as the energy levels rise in the ocean CO2 is released and provides an EXTRA mechanism for removing heat from the surface by absorbing some extra radiation and radiating at least half to space with a net effect of cooling not warming.

Just throwing ideas around.

35. Philip Bradley says:

Those models show that the idea does, indeed, basically work. They demonstrate that your basic claim (“huge complex non-linear model => no linear relation between forcing and result”) is false.

The climate models do no such thing. At best they demonstrate that a possible Earth climate has a linear relation between forcing and state.

What relationship that possible Earth climate bears to the actual Earth climate is unknown.

Willis, you need a function for feedbacks in your equation and that function is non-linear at least IMO. Not sure what the IPCC says about it.

If the Forcings model/theory is sound, then there is indeed the claimed linear relationship.

Forcing model proponents, such as R Gates claim the Forcing model must be correct a priori, and in a narrow sense they are right.

The flaw in the Forcings model/theory is that they assume feedbacks operate on roughly the same timescale as the forcings.

If the feedbacks operate on substantially shorter timescales (and if they do then they must necessarily be net negative otherwise we would have runaway warming) then the Forcings model/theory is correct but worthless for predicting climate change.

The key to climate prediction is knowing the timescales of feedbacks. A subject the IPCC and the Warmists ignore.

36. Willis Eschenbach says:

steven mosher says:
February 7, 2012 at 1:25 pm

A couple points. Smokey you are wrong. Some realizations of GCMs did in fact “predict” the cooling. the MEAN of all them did not. Its pretty simple.

Mosh, this is the third time you’ve tried this claim. Both times before I challenged you to name the model and show us the runs that predicted the hiatus in warming. Each time you have not done so.

Time to put up or shut up, mon ami …

w.

37. Ian L. McQueen says:

FWIW: “straitjacket”.

IanM

[Thanks, fixed – w.]

38. Truthseeker says:

A physicist says:
February 7, 2012 at 9:41 am

Your attempted strawman argument has very nicely proved what the sceptics have been saying for years. For models of complex systems to be valid, they have to be continuously tested and re-calibrated against actual observations. Not something that seems to happen with GCMs ….

39. RACookPE1978 says:

Willis Eschenbach says:
February 7, 2012 at 1:53 pm (Speaking to Steven Mosher)

steven mosher says:
February 7, 2012 at 1:25 pm

A couple points. Smokey you are wrong. Some realizations of GCMs did in fact “predict” the cooling. the MEAN of all them did not. Its pretty simple.

Mosh, this is the third time you’ve tried this claim. Both times before I challenged you to name the model and show us the runs that predicted the hiatus in warming.

I too have never seen ANY evidence of ANY model plotting a flat peak extending for (what is now) 15 years of continuously increasing CO2 and flat temperatures. Note that actual measured whole-earth-average temperature trends (if you even believe such a creature exists) are actually slightly negative, and you’d have to show by model output that real-world temperatures have not increased since the baseline dates of the mid-1970s.

Regardless, and I will ask this with my tongue only slightly in my cheek …

If 97% of the GCM models are dead wrong (since you admit that only 3% have ever been correct in a 15 year span) …

And if 97% of the world’s climate scientists are “right” in believing the catastrophic CAGW dogma based strictly and entirely on their faith in modeled results of the failed 97% of GCM models …

Which 3 percent are correct? 8<)

Can we fire the remaining 97%?

40. Nick Stokes says:

Willis,
I can’t agree with what you say about triangular fuzzy numbers, and I think you should cite some authority. As an obvious example, consider your multiplication rule with T1=[-1,0,1], T2=[-1,0,1]. Then T1*T2=[1,0,1]. WUWT?

But more seriously, you can’t sensibly use this when L and H are probability bounds, as with, say, forcings. If they represent, say, 2 sd bounds, then the range of the sum is tighter, and the product formula will be way wrong.
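Nick’s example is easy to check numerically. A small sketch (my own illustration, not from the post) contrasting the component-wise rule he is objecting to with the standard interval product, whose extremes lie at the endpoint products:

```python
from itertools import product

def naive_mul(a, b):
    # component-wise triple rule: (L1*L2, M1*M2, H1*H2)
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def interval_mul(a, b):
    # extremes of x*y over a rectangle of intervals occur at the corners
    corners = [x * y for x, y in product((a[0], a[2]), (b[0], b[2]))]
    return (min(corners), a[1] * b[1], max(corners))

t1 = t2 = (-1, 0, 1)
print(naive_mul(t1, t2))     # (1, 0, 1) — the "low" bound sits above the best estimate
print(interval_mul(t1, t2))  # (-1, 0, 1)
```

The component-wise rule returns a lower bound above the best estimate once a range straddles zero, which is exactly Nick’s point; the interval product does not.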

41. Owen in GA says:

Truthseeker: I am not sure that is completely true (even if it is mostly true). It is just that climate scientists seem to like a certain set of pseudo-data that were run and reported on a particular date and included in one of their rigged pal-review journals, and never go back and check what the modelers could do today. I also feel that the modelers themselves are getting the shaft from the climate scientists, because climate scientists never update the underlying physics to enable modelers to make a better model. If one never challenges one’s assumptions, one always gets the same wrong results and gets blindsided by reality. One can’t test a hypothesis if no one ever puts a hypothesis forward. Also, I am not sure we ever see the most recent output of the models anyway.

Of course, if the modelers just arbitrarily turn the knobs to make any one 30-year period fit, it will likely be off for all the other time frames. I could say sensitivity to CO2 doubling was any number from -infinity to infinity, and as long as I included coupled opposite-signed feedbacks with it, I could get close to making it work, especially if I made the feedback a polynomial of high enough degree n and adjusted all the coefficients until it fit. Polynomial fitting is a fine pastime, but unless there is a precise physical meaning for each coefficient term, it is just artwork.

42. William M. Connolley says:

TF> are you suggesting that we test a model against another model?

No.

WE> It is a simple conclusion from the premises clearly stated by the IPCC

No, it isn’t. You made it up.

WE> The climate models are complex, but as I and others have repeatedly shown (see here, here, and here), they are most assuredly linear.

Err, no, they aren’t linear. It is trivial to see this from their construction, or from looking at their results.

WE> those models have done so well at predicting the current lack of warming

Irrelevant, as I’ve already said.

WE> the table manners of a bonobo

I’m sure everyone here is capable of judging manners, including yours, without your guidance.

C> by using hugely complex earth-centric models

Not sure what your point is. Yes, it is entirely possible, and indeed permissible, to cast all of physics assuming a stationary Earth. This is what GR teaches us.

43. Bloke down the pub says:
February 7, 2012 at 11:17 am

It obviously wasn’t about the fuzzy triangles that I thought it was.

On account of you’re a low-life lecherous blackguard, and proud of it!
;p

44. Philip Bradley says:

The climate models contain pseudo-random functions. Run them enough times and you will get any prediction you like, including an ice age starting next Wednesday.

The variability in climate model output is evidence of absolutely nothing.

45. P.S. to Bloke;
Follow-on comments involving sensitivity, linear responses, etc. You can fill in the blanks.
:D

46. RACookPE1978 says:

William M. Connolley says:
February 7, 2012 at 2:47 pm

TF> are you suggesting that we test a model against another model?

No.

WE> It is a simple conclusion from the premises clearly stated by the IPCC

No, it isn’t. You made it up.

Ok.

So 97% of the world’s GCM model results are wrong over a 15 year basis, based on real-world measured temperatures and an accelerating (?) increase in CO2 levels. Do you agree or disagree with that summary from Steve Mosher?

What are YOU doing to correct the FALSE 97% of the world’s models?
What have YOU done to stop the propaganda that YOU have been pushing that says these models are accurate and must be used to ruin the world’s economies and murder billions of innocents in the false dogma of CAGW hype?
What have YOU done to publish the missing 3% of the models that DID predict a 15 year flat-line in temperatures while CO2 rose?

Love it – I always learn something from the Willis posts; they just add motivation to learn.

Have a question, in the paragraph…

For the first number, the forcing from a doubling of CO2, the usual IPCC number says that this will give 3.7 W/m2 of additional forcing. The end ranges on that are likely about 3.5 for the lower value, and 4.1 for the upper value (Hansen 2005). This gives the triangular number [3.5, 3.7, 4.0] W/m2 for the forcing change from a doubling of CO2.

Why isn’t the triangular number [3.5, 3.8, 4.1] W/m2? Or alternatively (if the central number has to be 3.7), why isn’t it [3.3, 3.7, 4.1] W/m2 or [3.5, 3.7, 3.9] W/m2?

48. Kev-in-UK says:

@Mosh
c’mon steve – you can try and defend GCMs as much as you like but it won’t wash, because they have NO history – at least as far as I know. By that I mean, has there been a good (say 90% accurate) model ever produced? And I don’t just mean by hindcasting – I mean by a good hindcast AND a good forecast? (And don’t quote something that has loads of ‘adjustments’ within the code, either!) FFS man, the Met Office with all their sooperdooper computers cannot predict the weather in a LOCAL region (1000 sq miles?) accurately over more than a couple of days – and that’s based on accurate, detailed, close-spaced, actually updated measurements!!
I don’t care what anyone says, any simplification of earth’s climate system into what, a dozen or so primary variables just ain’t gonna represent real life. I feel, as always, that every model run should quote in big letters the usual financial caveat, along the lines of ‘financial investments can go up AND down and no guarantees are given’. Now, if the financial wizards cannot do it, what fecking hell chance do climate modellers have? Surely it is obvious that the climate is at least AS variable AND unpredictable as financing! I am not saying that some very basic models can’t be constructed that give ‘indications’ – but in terms of my analogy with stock markets, it’s like lumping all the stocks in the different markets together and just using the main market indices as your ‘data’, e.g. Dow Jones, FT, etc, as REAL indicators – it’s not valid – because in each market, copper stocks could be falling whilst oil stocks are rising (and this is simplified; there are hundreds of stocks in EACH market!) – so the net result (market index value) stays the same – but as an investor, you missed the boat because you didn’t know the detail!!

49. William M. Connolley says:

> climate scientists never update the underlying physics

Because the underlying physics is mostly well known (radiative transfer, atmospheric and ocean dynamics; ecosystems are less well known, of course, but that is more biology than physics). What is of interest is the *modelling* of the physics, and the interactions of the various models.

50. Kev-in-UK says:

RACookPE1978 says:
February 7, 2012 at 3:01 pm

Top comment dude! (and I’m not just saying that as an ex oilfield geologist! LOL!)

51. William M. Connolley says:

> So 97% of the world’s GCM model results are wrong over a 15 year basis

No.

> What are YOU doing to correct the FALSE 97% of the world’s models?

Not much, I’m an embedded software engineer. Have you not been keeping up?

> What have YOU done to stop the propaganda

I’m talking to you lot. You’re not very welcoming, though.

52. Kev-in-UK says:

William M. Connolley says:
February 7, 2012 at 3:14 pm

as an embedded software engineer – please can you explain how you feel adequately trained and experienced to advise/comment on wiki’s climate change entries?????

53. RACookPE1978 says:

William M. Connolley says:
February 7, 2012 at 3:14 pm (Replying to RACookPE above)

> So 97% of the world’s GCM model results are wrong over a 15 year basis

No.

> What are YOU doing to correct the FALSE 97% of the world’s models?

Not much, I’m an embedded software engineer. Have you not been keeping up?

> What have YOU done to stop the propaganda?

I’m talking to you lot. You’re not very welcoming, though.

Hmmn. Going to un-edit any of those tens of thousands of false statements, propaganda pieces and false Galileo-Inquisition-styled “editing” you have been cutting out of Wikipedia for the past many years as you deliberately removed facts from CAGW related references that – if left alone – would have been accurate?

What German letters would spell “Edit Shall Make You Free” over the gates of the CAGW camp you have led millions of children into?

54. Kev-in-UK says:

RACookPE1978 says:
February 7, 2012 at 3:28 pm

I fear that editing won’t help though. Fortunately, I can honestly say that real folk will not rely on wiki for sole info. My 16 yo daughter was doing an AGW project for school and (without any knowledge of my skepticism) worked out it didn’t make sense! So, I do believe that the likes of the Team, and all its supporting crudites (wiki editors, MSM supporters, etc) have basically been pissing in the wind. The truth will always out – eventually!!

55. John Andrews says:

Clearly the earth’s climate is linear, just like the weather. Today is just like yesterday and tomorrow will be just like today. /sarc

56. Smokey says:

steven mosher,

Plenty of commenters are questioning your model assertions, including Willis, who wrote:

“Mosh, this is the third time you’ve tried this claim. Both times before I challenged you to name the model and show us the runs that predicted the hiatus in warming. Each time you have not done so.

“Time to put up or shut up, mon ami …”

Yes. Where’s that model that makes consistently accurate predictions, year after year? The universe of NASDAQ stocks is far smaller than the number of typhoons, and volcanoes [more than 3 million undersea volcanoes recently discovered, remember?], and square kilometres of ocean, and soot, and jet streams, and clouds, and the sun, and ocean currents, and galactic dust, and lots of other variables that affect the planet’s climate. So I wanna borrow that climate model, and use it to pick stocks. You say it can predict, so let’s put it to some good use. I’ll cut you in for a share.☺

The fact is, you’re simply describing the Texas Sharpshooter fallacy. You looked back and found a few close model runs out of lots. Did you know beforehand that they might be close? No, you did not. And I don’t think you can pick a GCM – right here and now – that will predict next year’s climate parameters without football field-sized error bars. But give it a try. Time to put up…

• • •

Connolley says:

“I’m talking to you lot. You’re not very welcoming, though.”

Poor Billy, he’s getting push-back from people more up to speed on the subject than he is. That’s the difference between an echo chamber blog and the internet’s Best Science site.

And I am not about to be personally welcoming to a lying propagandist, no matter what his manners are. I’m sure Pol Pot had some really good manners. And they say Stalin had a fine singing voice. But the world would still have been a lot better off without them.

57. mkelly says:

William M. Connolley says:

February 7, 2012 at 3:04 pm

> climate scientists never update the underlying physics

Because the underlying physics is mostly well known (radiative transfer, atmospheric and ocean dynamics; ecosystems are less well known, of course, but that is more biology than physics). What is of interest is the *modelling* of the physics, and the interactions of the various models.

Sir, please show a radiative heat transfer equation that has back radiation as an input. You must be able to do this, as you believe back radiation heats the surface. I await your answer.

58. StuartMcL says:

William M. Connolley says:
February 7, 2012 at 2:47 pm

C> by using hugely complex earth-centric models
Not sure what your point is. Yes, it is entirely possible, and indeed permissible, to cast all of physics assuming a stationary Earth. This is what GR teaches us.
————————————————————————————
Do you really not understand the difference between an earth-centric cosmology and an earth-centred frame of reference? In which case it is no surprise that in your world everything revolves around the CO2 molecule.

59. William M. Connolley said:
February 7, 2012 at 2:47 pm
I’m sure everyone here is capable of judging manners, including yours, without your guidance.
———————————–
William M. Connolley said:
February 7, 2012 at 3:14 pm
I’m talking to you lot. You’re not very welcoming, though.
======================
Hah! The irony! This warmunist comes here and expects (and gets) fair treatment: his statements are neither redacted nor deleted. Honesty – one of the big differences between his sort and scientists and other truth-seekers – we gots it, they don’t.

That he is allowed to post in this forum is a huge corroborating testimony to the methods and objectives of the Skeptic/Scientific community, and of Anthony Watts and Willis Eschenbach, et al.

Especially to Anthony I say – thank you Sir for your efforts, and for this forum.

To William Connolley I say – you are a disgrace. But do not go away – please keep posting here, that people can witness the perfidy of your ilk.

60. A physicist,

ultimately, aerodynamics of aircraft is an experimental and empirical pursuit with real-life flight testing. Often enough computer simulations yield useful and even pretty accurate results, but most of the time there are several flight characteristics that flight simulations, wind tunnels and other computer-generated tests largely or completely miss… often enough during flight tests aircraft take on flight characteristics not predicted, sometimes with fatal crashes. The planet’s climate system makes that look infinitesimal and far less complex in comparison. I suggest von Doenhoff and von Karman for further discussion on aerodynamics and aircraft design.

Another warmist defeated! I love it! Small wonder that warmists refuse to debate the issue.

62. Scott says:

William M. Connolley, the boorish drunk at the bar, ruining the dinner conversation for all.

He single handedly trashed the reputation of Wikipedia. I have no time for this idiot.

63. Faux Science Slayer says:

The ‘solid’ part of Earth is 259 trillion cubic miles of mostly molten rock with an average temperature in excess of 2500F.

The liquid part of the Earth is 310 million cubic miles of water with an average temperature of 2F.
Humans have put 28 gigatons of CO2 in the air that will be quickly converted to dirt or seashells at 125 lbs per cubic foot, or less than 3 cubic miles of material.

It takes real magic for that tiny amount of gas to control all that mass… as well as an overpaid group of nitwits in white lab coats.

64. Agile Aspect says:

Kev-in-UK says:
February 7, 2012 at 3:04 pm

I don’t care what anyone says, any simplification of earths climate system into what, a dozen or so primary variables just ain’t gonna represent real life.

;———————————————————————

Actually, it’s worse than we thought.

There is no physics in the GCMs.

Just a handful of algebraic equations with a set of radiation constraints and a set of parameters to tweak.

They assume the relative humidity is constant and then they tweak the evaporation and precipitation parameters to ensure it remains constant.

There are no calculations to determine the magnitude of the atmosphere’s heat capacity (also known as the GHE).

Water dominates the heat capacity of the atmosphere, and for all practical purposes, there’s an infinite amount of water.

It’s extremely naive to assume global temperatures will wait until CO2 concentrations rise.

In the end, they input carbon dioxide concentrations, assume the Earth is a flat disk, and then tweak whatever parameters are necessary in order to fit the temperature data.

And once they have convergence, they perturb around the convergence by varying the parameters slightly, labeling each perturbation a separate GCM model, then create an ensemble average.

We expect more from our high school science fairs.

65. ikh says:

William M. Connolley says:
February 7, 2012 at 2:47 pm

>WE> It is a simple conclusion from the premises clearly stated by the IPCC

>No, it isn’t. You made it up.

Wow! Are you really that intellectually challenged? If you were familiar with Willis’s work, you would know he is incapable of such deceit. If you are not familiar with his work, then such an assumption is idiotic! Are you just trolling?

Willis explained carefully that, using the IPCC’s numbers – the ones they use to produce the projection of 1.5 to 4.5C temp rise for CO2 doubling – he calculates a projection from 1750 to date. That is not making anything up. It is drawing a logical conclusion from AR4 numbers. Simples :-).

/ikh

66. Richard M says:

William M. Connolley says:
February 7, 2012 at 3:04 pm

> climate scientists never update the underlying physics

Because the underlying physics is mostly well known (radiative transfer, atmospheric and ocean dynamics; ecosystems are less well known, of course, but that is more biology than physics). What is of interest is the *modelling* of the physics, and the interactions of the various models.

Of course, as noted above, there is no “modelling” of physics. If it were attempted, the model runs would take decades. So, in order to run them in a useful period of time, the physics is “approximated”. In other words, it becomes just some person’s best guess.

Another person’s best guess gets different results which is why there are so many different models.

If you know anything about Software engineering then this is nothing new. As a software engineer myself I do wonder at your attempt to obfuscate reality. Did you really think you could make silly claims and not get called on them?

67. Howard T. Lewis III says:

Mr. Willis, I am a big fan of the scientific method. The IPCC is not.

IPCC>>>>>>>>>>>>>>v
v
v
[86]

68. Frank K. says:

Steve Mosher:
“Third, at some point the Modelling community will have to grow some stones and eliminate models which are running hot.”

I’ve been advocating this for a long time. For example, GET RID OF MODEL E!! Find a code that is well documented and put all your money there. I’m tired of having the taxpayers pay for lousy research software.

69. Peter says:

Mosh,

Willis called you out……..hard. Put up or shut up.

70. Willis Eschenbach says:

steven mosher says:
February 7, 2012 at 1:25 pm

… Its pretty simple. GCMs as connelly notes are highly non linear.

Steve, we obviously have very different meanings for “non-linear”. As I have shown in detail for a couple of models, if you look at the models as a black box, you see that the output is a simple linear transformation of the input plus a bit of lag.

So I look at that black box – output linear with input – and I say “highly linear”. That’s what linear means, at least to me: that the output is a linear transform of the input.

You are saying but no, there’s lots of complex and very non-linear and no doubt really, really sciencey stuff going on in the black box … maybe so, and maybe no.

But so what? The model as a whole is highly linear, input to output. By their fruits is how I know them. I fail to see the non-linearity you refer to.
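For what it’s worth, the “linear transformation of the input plus a bit of lag” description corresponds to a one-box lagged emulator. A minimal sketch, where lam and tau are illustrative placeholders rather than fitted values:

```python
import math

def emulate(forcings, lam=0.5, tau=3.0):
    """Temperature response with sensitivity lam (K per W/m2) and
    e-folding lag tau (time steps): exponential relaxation toward lam*F."""
    a = math.exp(-1.0 / tau)
    t, out = 0.0, []
    for f in forcings:
        t = a * t + (1.0 - a) * lam * f  # relax toward the equilibrium lam*f
        out.append(t)
    return out

# A constant 2 W/m2 forcing relaxes to lam * F = 1.0 K:
resp = emulate([2.0] * 50)
print(round(resp[-1], 3))  # 1.0
```

However non-linear the innards of a GCM are, if an emulator of this form reproduces its output, the input-to-output behaviour is linear plus lag.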

w.

71. Willis Eschenbach says:

Nick Stokes says:
February 7, 2012 at 2:09 pm

Willis,
I can’t agree with what you say about triangular fuzzy numbers, and I think you should cite some authority. As an obvious example, consider your multiplication rule with T1=[-1,0,1], T2=[-1,0,1]. Then T1*T2=[1,0,1]. WUWT?

But more seriously, you can’t sensibly use this when L and H are probability bounds, as with, say, forcings. If they represent, say, 2 sd bounds, then the range of the sum is tighter, and the product formula will be way wrong.

So redo my calculations and tell me what you think the correct answer should be, Nick. I said 0.3 to 3 W/m2, and I showed my work. You keep coming back to say I’m wrong, wrong, wrong … so where are your numbers?

w.

72. NovaReason says:

http://www.sid.ir/en/VEWSSID/J_pdf/90820070106.pdf

Another explanation of fuzzy number operations, which I feel does a little better job of explaining fuzzy division formulaically.

Willis, I love your posts. I love your dedication and attitude, but you flat out missed the mark on this one. I’d be interested to see what the calculations are actually like, but given that you did several iterations of division and multiplication, the results are going to be highly complex math that I can’t think through right now.

73. NovaReason says:

Ugh… Okay, in my initial post I was still kind of unsure about how it works, but after more careful reading of the second source I listed… it DOES produce a TFN, but the equations are significantly more complex than simply multiplying or dividing individual values. You need to completely redo the equations as you posted them, and the numbers are going to come out a lot funkier because of it. I’m nearly certain that 0.3 will not be your lower value for expected warming, as the number I was getting for the lowest value is significantly higher than the one you got (1.75 deg change per W/m2 of forcing instead of 0.5…). Past that, I think I made a calculation error on the likely value, because it came out lower than the high bound. Also, I didn’t work in the multiplication step at all because I couldn’t get effective numbers for it… might try a spreadsheet in Excel to work it out.

So based on a fuzzy math calculator I found.
[3.5, 3.7, 4.0] / [2.0, 3.0, 4.5] = [0.78, 1.23, 2.0]
[0.78 , 1.23, 2.0] x [0.6, 1.6, 2.4] = [.467, 1.968, 4.8]
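Those two triples match the standard approximate endpoint rules for positive triangular fuzzy numbers. A short script to check them (my own sketch, independent of the online calculator):

```python
# Approximate arithmetic for positive TFNs written as (low, mid, high).
def tfn_div(a, b):
    # widest quotient: low/high for the low end, high/low for the high end
    return (a[0] / b[2], a[1] / b[1], a[2] / b[0])

def tfn_mul(a, b):
    # endpoint rule, valid when both numbers are entirely positive
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

sens = tfn_div((3.5, 3.7, 4.0), (2.0, 3.0, 4.5))
print([round(x, 2) for x in sens])  # [0.78, 1.23, 2.0]

warm = tfn_mul(sens, (0.6, 1.6, 2.4))
print([round(x, 2) for x in warm])  # [0.47, 1.97, 4.8]
```

Note the division rule pairs low with high (a_low/b_high) to get the widest possible quotient range.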

But given those numbers, the chance that it IS .467 is both extremely low and based on the lowest possible parameters that you could ascribe to the situation. And it’s also 1.5x as high as your estimate.

(My previous statement that the multiplication or division did not produce fuzzy numbers was a misunderstanding, based on me misreading the equations for results in the first resource I listed as giving a range of results instead of a set of single-value results… either poor formatting, or I was reading way too fast.)

74. jtv says:

Is the “high” part of division really L2 / H1? I don’t actually know any fuzzy arithmetic, but I would have expected H1 / L2.

75. William M. Connolley says:

> please can you explain how you feel adequately trained and experienced to…

You’re not very good at research, are you? Try http://en.wikipedia.org/wiki/User:William_M._Connolley

> Going to un-edit any of those tens of thousands of false statements, propaganda pieces and false Galileo-Inquisition-styled “editing” you have been cutting out of Wikipedia

You’re a bit confused. You’ve just accused me of cutting out false statements from wiki :-). More seriously, vague assertions like that go nowhere, you’d need details. And, fun though it is for you to abuse me, please remember this post is about WE’s interesting theory.

> What German letters would spell “Edit Shall Make You Free”

Godwin. You lose.

> a radiative heat transfer equation that has as an input back radiation

I think you’re confused. This post http://scienceofdoom.com/2010/07/17/the-amazing-case-of-back-radiation/ will help you, if you read it. Or if you prefer only to accept facts when they come from someone on “your side”, you could read “Dr” Roy Spencer: http://www.drroyspencer.com/2010/08/help-back-radiation-has-invaded-my-backyard/

AA> They [GCMs] assume the relative humidity is constant

No, they don’t. Relative humidity is a free variable. All the rest of what you said was wrong, too. I appreciate that you people don’t like GCMs, because they provide you the “wrong” answer, but you could at least try to understand what you don’t like, rather than making things up.

76. NovaReason says:

Ugh… I did the equations wrong up there… swapped T1 and T2 in your original equation. No idea what the values would come out like.

77. William M. Connolley says:

WE> if you look at the models as a black box, you see that the output is a simple linear transformation of the input plus a bit of lag

No, clearly it isn’t. This is trivial to demonstrate on a short timescale, where you can see the non-linear growth of perturbations, the “butterfly effect”: http://mustelid.blogspot.com/2005/10/butterflies-notes-for-post.html

It’s also trivial to demonstrate over long time scales with averaging: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-10-5.html

But over long timescales the response is indeed approximately linear. Which is the point. And there is no evidence that this isn’t a good approximation of how the real climate system behaves, too.

What you’ve actually discovered is that it is possible for the output of a hugely non-linear system (a GCM, or the Earth’s climate) to respond approximately linearly to forcing.

78. Smokey says:

Connolley is full of crap. CO2 and temperature do have a correlation. But that correlation is this: CO2 always follows temperature. No exceptions.

Connolley is a lying propagandist; we know this. Only the credulous believe in his mendacious spin. Word up to the wise.

79. Kev-in-Uk says:

I agree with Willis – I don’t think it’s sensible to consider any GCM or climate model as anything but a linear black box in terms of input versus output. And, as has been said many times, the machinations within the model are often ‘set’ or ‘adjusted’ or simply based on best-guess estimates of causes and effects. Changes in any one of the relatively few variables should affect the other parameters, that’s well understood – but if you don’t actually KNOW the effect (as in its real rate of change, etc) and you are simply best-guessing, the net effect of several of these within a model will likely be some form of linear output, because your best guess or adjustment will always be to assume some generic linear change.

And yet we know full well that some variables’ cause and effect are not linear, or may be linear on one variable but non-linear on another! Does raising temp raise humidity and therefore always raise precipitation, for example? Perhaps (in the modellers’ eyes) – but what about albedo (which would increase and therefore decrease direct solar forcing), and then there’s other things like seasonal wind patterns, etc, etc – all of which means that ANY variable and its effect on other variables, and vice versa, becomes a hotch-potch of crazy interactions and +ve/-ve feedbacks.

Why anyone (Mr Connolley) would assume that the net effect of one change (CO2, in the case of CAGW) will be linear when constructing one’s model is beyond me. It’s a fallacy – and one which simply cannot be proven within such a complex non-linear system – but as Mr Connolley explains, he assumes the net effect is going to be linear…… (Personally, my attitude is that Mother Nature is seemingly ‘self-correcting’ or self-balancing (within large variations!) – so why should the climate system be any different?)

In the UK, they are considering raising the motorway speed limit. This would naturally be expected to increase road deaths on the motorways, you would think? But it’s not as simple as that: some people will perhaps drive more alertly at higher speeds, nervous drivers may keep off the motorways more often, cars are getting increasingly safer, more vehicles are travelling (making net speeds slower!), vehicle testing (MOTs) is getting stricter, etc, etc – these effects are NOT directly known or measurable – so the natural assumption of a linear rise in road deaths is plain wrong… and in this simple example can be readily seen to be wrong once one applies a little thought. The simple act of increasing a speed limit does not automatically equate to more deaths, so the overall linear assumption is silly. On the basis of known climate complexity, which is obviously far more variable than road speed vs deaths, I am sure making any overall linear input/output assumption is erroneous!

80. Nick Stokes says:

Willis Eschenbach says: February 7, 2012 at 10:39 pm

“So redo my calculations and tell me what you think the correct answer should be,”

No, I’m saying you can’t get the answers you want this way at all. And nor can I. You’ve used rules for triples that might (with some repair) be reasonable for absolute ranges. But you’re looking at confidence intervals, and they just don’t multiply (or even add).

So when you say that the range of CS is [0.5, 0.8, 1.28] – you’ve used an invalid step to get there, but let’s suppose – and forcing is [0.6, 1.6, 2.4].

That says there is a 5% chance that CS<0.5, and also a 5% chance that F<0.6. But then there is only a 0.25% chance that both are true.

Now that’s not necessary for the product to be less than 0.3, your minimum. There are other combinations, eg CS=0.4, F=0.7. You’d have to integrate these possibilities to get the total. But you won’t get anywhere near 5% for 0.3. The minimum is much less.

In fact, the full gruesome story on products of normal variates is here. Modified Bessel functions. OK, I guess I could work it out, but you have two separate steps.
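Nick’s tail argument is easy to illustrate by simulation. A quick Monte Carlo sketch, where the two normal distributions are my own assumption, fitted loosely so their 5th/95th percentiles match the triples quoted above:

```python
import random

# If [L, M, H] are 5%/50%/95% points of independent quantities, then
# L1*L2 is NOT the 5% point of the product: the probability that both
# factors sit below their 5% points at once is only 0.25%.
random.seed(1)
N = 200_000
cs = [random.gauss(0.89, 0.24) for _ in range(N)]  # ~5th pct 0.5, ~95th 1.28
f = [random.gauss(1.50, 0.55) for _ in range(N)]   # ~5th pct 0.6, ~95th 2.4

p_below = sum(c * x < 0.3 for c, x in zip(cs, f)) / N
print(f"P(product < 0.3) = {p_below:.3f}")  # noticeably below the naive 0.05
```

In other words, 0.3 sits well out in the tail of the product distribution, which is Nick’s point about the product formula overstating the width of the probability bounds.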

81. Spartacus says:

“I appreciate that you people don’t like GCMs, because they provide you the “wrong” answer, but you could at least try to understand what you don’t like, rather than making things up.”

Wrong, William. It's nature itself that provides the "wrong answer" to the GCMs' "predictions" over longer time periods. You cannot control a model's "drifting" unless you "cheat" with fudge factors, and you know that very well. Since you cannot tune nature, all your modeling is a mere "run and catch" exercise. The only way to fine-tune capable predictive models (and I take this with a grain of salt) is to have very good observational data from a very long period of time. We don't have that kind of data. Our proxies are only rough approximations of past climate data. I've worked with several of these proxies, and I know very well that there are huge problems to solve and lots of inconsistent data. Some of these proxies show data with a strong regional signal, for instance, and are of no use for global climate considerations.
You can use models to predict well-controlled experiments, where you fully know all the variables; otherwise we could not build effective mobile phones, computers, or F1 cars. In climate, however, there is a huge number of variables that are not fully understood or are wrongly explained. We cannot even detect or separate their signals from the noisy climate data.

82. William M. Connolley says:

> Why anyone (Mr [sic] Connolley) would assume that the net effect of one change (CO2 – in the case of CAGW) will be linear when constructing ones model is beyond me.

There is no such assumption. You’re confusing the input with the output. It turns out, when you run the GCMs, that the response is largely linear in log(CO2) (not CO2 itself; at least around current concentrations). But that quasi-linear response is the aggregate of a large number of non-linear calculations.

83. novareason says:

William M. Connolley says:
February 8, 2012 at 5:12 am
> Why anyone (Mr [sic] Connolley) would assume that the net effect of one change (CO2 – in the case of CAGW) will be linear when constructing ones model is beyond me.

There is no such assumption. You’re confusing the input with the output. It turns out, when you run the GCMs, that the response is largely linear in log(CO2) (not CO2 itself; at least around current concentrations). But that quasi-linear response is the aggregate of a large number of non-linear calculations.
And as Willis said multiple times: you can call it anything you like, but when the equations inside it, regardless of complexity, produce a trend line that responds linearly to one factor (CO2), then you've created a linear model. If it looks like a duck, and quacks like a duck…

You have inputs, and your outputs are linear increases based on CO2 forcings.

84. William M. Connolley says:

> You have inputs, and your outputs are linear increases based on CO2 forcings

As I’ve said, no, the outputs aren’t linear. I linked to a pic, but clearly you didn’t look. Even if you restrict your view to annual-average-global-average temperature, the output isn’t linear: fairly often, the change from one year to the next is negative, not positive, even though the change in forcing from year to year is the same. In the models, and in reality. If your response one year is of opposite sign to the next year, that is non-linear. Just like the real world. Though over the long run, it tends to average out. Just like the real world.

It seems to me that your main objection to the quasi-linear response is that you have some mystical source of knowledge that teaches you that this behaviour is wrong; whatever it is, it isn’t observations. Since this revelation is unavailable to the scientists, they cannot take advantage of it.

85. wsbriggs says:

So now it appears that there is an “orchestrated” attempt to discredit Willis. I love the public display of the arrogance once only visible in the Climategate emails.

The whole GCM thing kind of reminds me of Disney’s Toot, Whistle, Plunk, Boom film. The numbers go round and round and they come out here…

Forget past measurements, they couldn’t measure with the accuracy we can now. Forget the geologic evidence that seas were lower and higher than now – uplift and down thrust did it. Oh, yeah, forget that we told you that there was going to be a new Ice Age in the 1970s. Forget that we told you that it was going to get hotter in the Arctic – it’s Climate Change! Forget everything about the world but that man is now in control – and we want to control him.

Like you always say in the “CAGW Science” blogs – follow the money.

Way to go Willis! It appears that you and some of the other welcome posters on this blog have really put the stick in the hornets’ nest this time. Whipping out real math and putting the calculations out there for everyone to see and criticize is surely a lot different from the “knowledgeable” responses – “you’re wrong,” etc.

Maybe we should have Prof. McKitrick join in the fray. There’s going to be a lot of sleepless nights over his last cross posting here.

I’m starting my popcorn now!

86. mkelly says:

William M. Connolley says:
February 8, 2012 at 12:42 am

> a radiative heat transfer equation that has as an input back radiation

I think you’re confused.

William M. Connolley says:
February 7, 2012 at 3:04 pm

“Because the underlying physics is mostly well known (radiative transfer, atmospheric and ocean dynamics; ecosystems are less well known, of course, but that is more biology than physics). What is of interest is the *modelling* of the physics, and the interactions of the various models.”

Mr. Connolley, I asked you, not someone else, to present a radiative heat transfer equation. If you don’t know or don’t understand, then fine, say so.

As to my being confused: well, as my hair turns grey I do sometimes forget where my car keys are, but on this I am not confused. You defend something you appear not to understand.

87. Nikola Milovic says:

Climate change on our planet (and on the other planets in the solar system) does not depend significantly on either the human factor or the composition of our atmosphere (CO2, etc.). The main generators of all the changes are the interactions of the magnetic fields of the Sun and the Earth. How these fields arise, and how they affect the matter they pass through, is what science should be investigating, rather than making models based on nebulous assumptions that have no natural connection with the overall trends we treat as enigmatic. Whoever would explain what generates the magnetic fields must peek inside the celestial body, “see” the discontinuities in it, and then see how the electric fields are created and how protons and neutrons behave under the influence of these planetary electromagnetic fields. It is worrying that there is collective delusion and blindness, even in science, when it comes to studying the laws of the nature from which we are formed, yet we prefer imaginary models “more logical” than the most logical. The forces in question are of the order of (12.363-3137) x 10^20 units. Consequently it is empty to discuss these diagrams, formulas, and odds and ends when we do not even know the origin of any phenomenon in nature.

88. John Garrett says:

Mr. Eschenbach: sorry for this nit-picking but many of us will refer others to your excellent piece—

“…straitjacket is the most common spelling, strait-jacket is also frequently used…”

89. Willis Eschenbach says:

Nick Stokes says:
February 8, 2012 at 3:52 am

Willis Eschenbach says: February 7, 2012 at 10:39 pm

“So redo my calculations and tell me what you think the correct answer should be,”

No, I’m saying you can’t get the answers you want this way at all. And nor can I. You’ve used rules for triples that might (with some repair) be reasonable for absolute ranges. But you’re looking at confidence intervals, and they just don’t multiply (or even add).

Nick, you say:

No, I’m saying you can’t get the answers you want this way at all. And nor can I.

Perhaps you can’t, but the rest of us can. Let me go over it real slow for you, Nick.

The IPCC says that the climate sensitivity may be as low as 0.54 °C per W/m2. It also says the forcing since 1750 may be as low as 0.6 W/m2.

Finally, the IPCC says that temperature change is forcing change times sensitivity.

You may not be able to multiply these two numbers together. But I can, and that gives me a low IPCC estimate of about 0.3°C warming since 1750.

The IPCC says that the climate sensitivity may be as high as 1.22 °C per W/m2. It says the forcing since 1750 may be as high as 2.4 W/m2.

You may not be able to multiply these two numbers together. But I can, and that gives me a high IPCC estimate of about 2.9°C warming since 1750.

Duh.

(There is a bit more error in there because I have not included the error from the IPCC assumption of 3.7 W/m2 per doubling of CO2.)

So yes, Nick, we can indeed do the math, even if you can’t.

w.
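The endpoint arithmetic in that reply is easy to check. A two-line sketch, using the IPCC bounds as quoted above (the rounding to one decimal is Willis's):

```python
# Multiply the quoted IPCC lower bounds together, and the upper bounds
# together, to get the implied range of warming since 1750.
cs_low, cs_high = 0.54, 1.22   # climate sensitivity, deg C per W/m^2
f_low, f_high = 0.6, 2.4       # forcing since 1750, W/m^2

dT_low = cs_low * f_low        # 0.324, i.e. "about 0.3 deg C"
dT_high = cs_high * f_high     # 2.928, i.e. "about 2.9 deg C"
```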

90. Willis Eschenbach says:

Baa Humbug says:
February 7, 2012 at 3:03 pm

Love it, always learn something from the Willis posts, just add motivation to learn.

Have a question, in the paragraph…

For the first number, the forcing from a doubling of CO2, the usual IPCC number says that this will give 3.7 W/m2 of additional forcing. The end ranges on that are likely about 3.5 for the lower value, and 4.1 for the upper value (Hansen 2005). This gives the triangular number [3.5, 3.7, 4.0] W/m2 for the forcing change from a doubling of CO2.

Why isn’t the triangular number [3.5, 3.8, 4.1] W/m2? Or alternatively (if the central number has to be 3.7), why isn’t it [3.3, 3.7, 4.1] W/m2 or [3.5, 3.7, 3.9] W/m2?

There is no requirement that the best estimate has to be centered between the high and low estimate, and in most cases it won’t be. This is because for many things, the errors are different in the two directions.

w.

91. In support of Nick Stokes: you have to use Willis’ rules of Triangular Fuzzy Arithmetic with several caveats.

As you add any two or more distributions, they will TEND more toward normality.
As you multiply any two or more distributions (if they are entirely positive-valued), they will tend toward lognormal distributions, because multiplication is addition in log space.

The key point is that you may start with P100, ML, P0 triangular distributions, but once you perform any arithmetic on them, you will have P100, kinda-ML, P0 of non-triangular single mode distributions. To assume they are still triangular is to significantly increase the uncertainty beyond what is real.

The trouble is, you often cannot prove your point with the P100 and P0 ranges. You wind up with almost anything being possible, but at infinitesimal probabilities. Staying within P95 – P05 (a 90% confidence interval) or P90 – P10 (an 80% C.I.) is usually far more enlightening. But to do that, you must do more work: keep track of the variances and the 3rd and 4th moments, or convolve the probability density functions.
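A small simulation illustrates both tendencies described above. The particular triangular distributions here are arbitrary choices for demonstration only:

```python
import random

# Draw sums and products of independent triangular variates and compare
# their skewness: sums drift toward symmetry (normality), while products
# of positive variates stay right-skewed (lognormal-like).  The triangles
# [1,2,4], [2,4,5], [1,3,6] are arbitrary illustrations.
random.seed(1)

def tri(lo, mode, hi):
    return random.triangular(lo, hi, mode)  # stdlib puts the mode last

N = 100_000
sums = [sum(tri(1, 2, 4) for _ in range(4)) for _ in range(N)]
prods = [tri(1, 2, 4) * tri(2, 4, 5) * tri(1, 3, 6) for _ in range(N)]

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

sum_skew = skewness(sums)    # modest, and it shrinks as more terms are added
prod_skew = skewness(prods)  # clearly right-skewed
```

Neither result is triangular any more, which is the caveat: assuming the outputs are still triangular misstates the shape of the uncertainty.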

92. Willis Eschenbach says:

NovaReason says:
February 7, 2012 at 11:53 pm

http://www.ccms.or.kr/data/pdfpaper/jcms22_2/22_2_161.pdf

Sorry to do this one to you, Willis, but multiplication and division of TFNs does not produce TFNs as a result. You got addition and subtraction right, but went wheels off the rails after that.

Thanks, NovaReason. You are correct, I gave a simplified version. Actually, multiplication and division produce trapezoidal fuzzy numbers, which are slightly different from the triangular fuzzy numbers resulting from addition and subtraction.

However, the lowest estimate, best estimate, and highest estimate are the same whether you use the simplified or the complex calculations. So the numbers that I gave as a result of my simplified calculations, are quite correct.

w.

93. Willis Eschenbach says:

NovaReason says:
February 8, 2012 at 12:01 am

http://www.sid.ir/en/VEWSSID/J_pdf/90820070106.pdf

Another explanation of fuzzy number operations, that I feel does a little better job explaining fuzzy division formulaically.

Willis, I love your posts. I love your dedication and attitude, but you flat out missed the mark on this one, I’d be interested to see what the calculations are actually like, but given that you did several iterations of division and multiplication, the results are going to be highly complex math that I can’t think through right now.

Excellent reference, NovaReason. As I said above, the numbers I gave are correct for best, lowest, and highest estimates, and since that’s all I care about, I used the simplified solution …

w.

94. Willis Eschenbach says:

Further to NovaReason’s comment, I wanted to show the difference between the exact calculation he points out, and the simplified calculation I used in the head post. This uses the multiplication example in NovaReason’s citation, multiplying [1, 2, 4] by [2, 4, 5]. The trapezoidal result is the accurate one posted by NovaReason, and my simplification is shown in red.

As you can see, for my purposes of estimating the highest, lowest, and most likely estimates, my simplification is 100% accurate.

Thanks to NovaReason for the excellent citation.

w.
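For anyone who wants to check this numerically, here is a sketch of the alpha-cut arithmetic from NovaReason's citation, using the same two fuzzy numbers [1, 2, 4] and [2, 4, 5] as the example above:

```python
# Alpha-cut multiplication of two positive triangular fuzzy numbers.
# At membership level a, each TFN collapses to an interval; the exact
# product interval comes from the four endpoint products.

def alpha_cut(tfn, a):
    lo, mode, hi = tfn
    return (lo + a * (mode - lo), hi - a * (hi - mode))

A, B = (1, 2, 4), (2, 4, 5)  # the example from the citation

def exact_product_cut(a):
    (al, ah), (bl, bh) = alpha_cut(A, a), alpha_cut(B, a)
    corners = [al * bl, al * bh, ah * bl, ah * bh]
    return (min(corners), max(corners))

# The simplified "triangular" product: multiply componentwise.
simple = tuple(x * y for x, y in zip(A, B))  # (2, 8, 20)
```

The endpoints (alpha = 0) and the peak (alpha = 1) of the exact product match the componentwise result exactly, which is the point of the comment above; in between, the exact sides bow (quadratic in alpha rather than straight). For instance, at alpha = 0.5 the exact cut is [4.5, 13.5], while a straight-sided triangle [2, 8, 20] would give [5.0, 14.0].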

95. Is the NovaReason citation a multiplication of probability distributions or a multiplication of fuzzy sets? Willis 11:54 am, the y-axis from 0 to 1 looks like set membership to me, not a probability density function. Maybe fuzzy set theory and probability distributions can be mixed under the right circumstances.

Here is an interesting paper that I’m going to study. FWIW.
Fuzzy sets and probability: Misunderstandings, bridges and gaps
Didier Dubois – Henri Prade

The paper is structured as follows: first we address some classical misunderstandings between fuzzy sets and probabilities. They must be solved before any discussion can take place. Then we consider probabilistic interpretations of membership functions, that may help in membership function assessment. We also point out non-probabilistic interpretations of fuzzy sets. The next section examines the literature on possibility-probability transformations and tries to clarify some lurking controversies on that topic. In conclusion, we briefly mention several subfields of fuzzy set research where fuzzy sets and probability are conjointly used.

96. novareason says:

That actually makes a lot more sense now, Willis. Thanks for the explanation. Fuzzy calculators I was plugging the numbers into later gave me values similar to yours (rounding differences), with the same kind of simplification used. Functionally there’s little to no difference and for the sake of proving a point you’re entirely correct in your calculations. This was just kind of a distant topic for me (been a few years since a hard math course).

That being said, the point brought up above about the vanishing probability of highest and lowest is rather relevant. The numbers they gave allow for the possibility that it’s only 0.3 deg warming, but that is AS LIKELY given their numbers as 3.1 deg. Realistically, this does show the ridiculous amount of error they’re playing with, and arguing that their blind guesses are accurate predictions.

97. Re Willis’ comment at Feb. 8/ 10:06 a.m.

The IPCC formula is deltaT = deltaF x CS (climate sensitivity). To get their lower bound on deltaT since 1750, we multiply their lower bound on forcing since 1750 [0.6 W/m^2] by their lower bound on CS [0.54 degC per W/m^2]. OK, 0.6 x 0.54 = 0.324 degC = deltaT since 1750. You got “about” 0.5 degC for deltaT. What am I missing?

98. Willis Eschenbach says:

Leigh B. Kelley says:
February 8, 2012 at 3:07 pm

Re Willis’ comment at Feb. 8/ 10:06 a.m.

The IPCC formula is deltaT = deltaF x CS (climate sensitivity). To get their lower bound on deltaT since 1750, we multiply their lower bound on forcing since 1750 [0.6 W/m^2] by their lower bound on CS [0.54 degC per W/m^2]. OK, 0.6 x 0.54 = 0.324 degC = deltaT since 1750. You got “about” 0.5 degC for deltaT. What am I missing?

You are right. I meant to say “about 0.3°C”, not “about 0.5°C”. I’ve fixed it.

w.

99. Willis Eschenbach says:

novareason says:
February 8, 2012 at 1:39 pm

That being said, the point brought up above about the vanishing probability of highest and lowest is rather relevant. The numbers they gave allow for the possibility that it’s only 0.3 deg warming, but that is AS LIKELY given their numbers as 3.1 deg. Realistically, this does show the ridiculous amount of error they’re playing with, and arguing that their blind guesses are accurate predictions.

Actually, the IPCC numbers for the climate sensitivity are not formal confidence intervals, and the IPCC admits that the numbers could be outside those ranges. In general, the IPCC confidence intervals are only 90%, and that’s based on “expert judgement” … which they seem to be lacking. So there is no such “vanishing probability” of the highest and lowest.

In fact, the IPCC goes out of its way to say that the climate sensitivity could well be anywhere in the range 2 – 4.5°C per doubling; it’s not a Gaussian-distribution kind of thing.

That’s why I like using triangular numbers. They work well when we have little information on the distribution.

w.

100. Barry Elledge says:

William Connolley on Feb 8 at 2:02 linked to an IPCC set of graphs predicting global T from the year 2000 forward. The graphs present results from 17 or 21 different models plus the ensemble average as spaghetti charts, under 3 different assumed conditions. Connolley’s point was that the purportedly nonlinear models nevertheless produce somewhat linear outputs (actually more parabolic, but over short time spans close enough to linear).

But the interesting thing is that each of these 59 models (21 + 21 + 17) predicts that global T increases over the period 2000 – 2011, rather than remaining flat as actually occurred. Thus, in contrast to the assertions of Connolley (and Steve Mosher earlier), none of these published IPCC models appears to accurately forecast temperature trends over the time interval for which we have data. Similarly, the 2011 paper by Spencer and Braswell demonstrates that at least 11 of the 14 IPCC models fail to track the temperature changes surrounding El Niño events, and even the 3 better models do not accurately predict the actual changes in temperature over the El Niño time period.

Thus I see no evidence that any of the IPCC models are at all reliable. So perhaps Connolley or Mosher or any other defenders of the models could link to any prospective prediction which actually matches the observed temperature changes.

And importantly, the IPCC must provide future temperature predictions on which it is explicitly willing to stake the credibility of its climate models. That way in 10 or 20 years we might be able to decide whether the models are at all useful. The failure of the IPCC predictions over the last 15 years has not forced the IPCC to admit failure because they haven’t prospectively committed to a standard by which failure or success of the models should be judged. That is why apologists like Connolley feel they can continue to avoid confronting the discrepancy between model and measured temperatures. Until they do clearly set a prospective standard on which they are willing to be judged and have a decade or so of measurements to which to compare, the models are merely hypotheses. Even a few decades of concurrence between models and measurements of course do not prove the models are valid; but divergence does suffice to demonstrate their failure.

101. William M. Connolley says:

> But the interesting thing is that each of these 59 models (21 + 21 + 17) predicts that global T increases over the period 2000 – 2011, rather than remaining flat as actually occurred

This is the usual mistake. I’ll correct you, but you won’t even read what I say much less believe or check it.

1. those aren’t temperature predictions. We all know that the GCMs don’t produce “weather” type predictions aiming to accurately track interannual fluctuations over coming decades (except for a very few experimental runs like the Smith et al stuff, but that’s not what you’re seeing here). They are aiming to produce trend predictions over longer timescales.

2. [snip]

102. Barry Elledge says:

Connolley says:
“GCMs don’t produce ‘weather’ type predictions… over coming decades…They are aiming to produce trend predictions over longer timescales.”

Fine! So define the time interval over which you are willing to have the models evaluated. It’s your call!

But recall that the time interval over which the current warming trends occur extend from the late 1970s to about 1998-1999, a period of about 20 years. If the 11 years from 2000-2011 are too short in which to evaluate a temperature trend, are 20 years somehow enough? Are you willing to commit to a 20 year evaluation period? If so, then by your standards we should know by about 2020 whether the current climate stasis reflects a significant trend.

If on the other hand you think a mere 20 years is too short a time within which to evaluate temperature trends, then you must necessarily believe that the 20-year warming from 1979-1999 is also too short to indicate a warming trend. The planet cooled from about 1940 to the late 1970s. If you evaluate the trend from 1940 to 2011, the warming seems a lot smaller. In fact, some of the most reliable data (from North America) indicate that the late 1930s-1940 period was as warm as 1998-1999, suggesting no warming at all over the last 70 years.

Personally, I think a wise interval should encompass at least one cycle of a periodic function. But of course climate changes over multiple periodicities. A clear interval encompasses about 60 years, corresponding to the PDO and AMO periods. Other periods seem to last about 1000 and 1500 years.
But if I were to suggest that we wait until 2039 to decide whether the warming which began in 1979 is significant, you might well accuse me of being dilatory. Not to mention waiting until 3479.

But again the ball is in your court. How long do we need to wait to decide whether the predicted temperature trend from 2000 is verified or refuted??

103. Smokey says:

Barry Elledge says:

“So define the time interval over which you are willing to have the models evaluated… How long do we need to wait to decide whether the predicted temperature trend from 2000 is verified or refuted??”

Connolley, like all incompetent and mendacious alarmists, pretends that models are reality. They are not, as planet Earth is clearly demonstrating.

It is amusing watching serial propagandists like Mr Censorship Connolley try to convince us that carefully selecting only those few computer models that just happen to get lucky, and thus land somewhat close to the temperature record for a limited time frame, shows the models can actually predict the climate accurately. That is known as the Texas Sharpshooter Fallacy: shoot holes in a barn door, then draw a circle around a few of them and declare, “Bullseye!” But like a broken clock that is right twice a day, not one computer model is consistently correct. They are all expensive failures.

Connolley is an alarmist fakir whose every word is founded on the climate charlatanism that says CO2 will cause runaway global warming and climate disruption. As if. He is censoring scum… IMO, of course, based on what I’ve seen. And a despicable hypocrite to boot, because he freely posts here, while censoring the sincere, honest and science-based posts of others where he connives to moderate. He would make the perfect North Korean bureaucRat.

Isn’t it great that everybody is able to comment freely here, while scientific skeptics [the only honest kind of scientists] are moderated out of existence at propaganda blogs like Wikipedia for simply having a different view? The upside is the much heavier traffic the alarmist contingent brings to WUWT – the internet’s Best Science site.