Life is Like a Black Box of Chocolates

Guest Post by Willis Eschenbach

In my earlier post about climate models, “Zero Point Three Times The Forcing“, a commenter provided the breakthrough that allowed the analysis of the GISSE climate model as a black box. In a “black box” type of analysis, we know nothing but what goes into the box and what comes out. We don’t know what the black box is doing internally with the input that it has been given. Figure 1 shows the situation of a black box on a shelf in some laboratory.

Figure 1. The CCSM3 climate model seen as a black box, with only the inputs and outputs known.

A “black box” analysis may allow us to discover the “functional equivalent” of whatever might be going on inside the black box. In other words, we may be able to find a simple function that provides the same output as the black box. I thought it might be interesting if I explain how I went about doing this with the CCSM3 model.

First, I went and got the input variables. They are all in the form of NetCDF (“.nc”) files, a standard format that contains both data and metadata. I converted them to annual or monthly averages using the computer language “R”, and saved them as text files. I opened these in Excel and collected them into one file. I have posted the data here as an Excel spreadsheet.

Next, I needed the output. The simplest place to get it was the graphic located here. I digitized that data using a digitizing program (I use “GraphClick”, on a Mac computer).

My first procedure in this kind of exercise is to “normalize” or “standardize” the various datasets. This means adjusting each one so that the average is zero and the standard deviation is one. I use the Excel function “STANDARDIZE” for this purpose. This allows me to see all of the data in a common size format. Figure 2 shows those results.
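(For anyone who wants to reproduce this standardization step outside of Excel, here is a minimal sketch in R. The file name and layout are placeholders, not the actual spreadsheet; R’s scale() does the same centering and scaling as STANDARDIZE.)

# standardize each forcing series to mean 0 and standard deviation 1
forcings <- read.csv("ccsm3_forcings.csv")       # hypothetical file of the collected forcings
standardized <- as.data.frame(scale(forcings))   # scale() subtracts the mean and divides by the sd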

Figure 2. Standardized forcings used by the CCSM 3.0 climate model to hindcast the 20th century temperatures. Dark black line shows the temperature hindcast by the CCSM3 model.

Looking at that, I could see several things. First, the CO2 data has the same general shape as the sulfur, ozone, and methane (CH4) data. Next, the effects of the solar and volcano data were clearly visible in the temperature output signal. This led me to believe that the GHG data, along with the solar and the volcano data, would be enough to replicate the model’s temperature output.

And indeed, this proved to be the case. Using the Excel “Solver” function, I fit the formula which (as mentioned above) had been developed through the analysis of the GISSE model. This is:

T(n+1) = T(n) + λ ∆F(n+1) × (1 − exp(−1/τ)) + ∆T(n) × exp(−1/τ)

OK, now let’s render this equation in English. It looks complex, but it’s not.

T(n) is pronounced “T sub n”. It is the temperature “T” at time “n”. So T sub n plus one, written as T(n+1), is the temperature during the following time period. In this case we’re using years, so it would be the next year’s temperature.

F is the forcing, in watts per square metre. This is the total of all of the forcings under consideration. The same time convention is followed, so F(n) means the forcing “F” in time period “n”.

Delta, or “∆”, means “the change in”. So ∆T(n) is the change in temperature since the previous period, or T(n) minus the previous temperature T(n-1). ∆F(n), correspondingly, is the change in forcing since the previous time period.

Lambda, or “λ”, is the climate sensitivity. Tau, or “τ”, is the lag time constant, which sets the amount of lag in the response of the system to a forcing. And finally, “exp(x)” means the number e (about 2.71828) raised to the power of x.

So in English, this means that the temperature next year, or T(n+1), is equal to the temperature this year, T(n), plus the immediate temperature change due to the change in forcing, λ ∆F(n+1) × (1 − exp(−1/τ)), plus the lag term ∆T(n) × exp(−1/τ) carried over from the previous forcing. This lag term is necessary because the effects of changes in forcing are not instantaneous.
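As a concrete illustration, the recursion can be written out as a short function in R (a sketch only; the function and variable names are mine, and lambda and tau take whatever values the fit produces):

# step the emulated temperature forward one year at a time
emulate <- function(forcing, lambda, tau, temp0 = 0) {
  n <- length(forcing)
  temp <- numeric(n)
  temp[1] <- temp0
  dF <- c(0, diff(forcing))                             # delta F: change in forcing since the previous year
  for (i in 2:n) {
    dT_prev <- if (i == 2) 0 else temp[i - 1] - temp[i - 2]   # delta T of the previous step
    temp[i] <- temp[i - 1] +
      lambda * dF[i] * (1 - exp(-1 / tau)) +            # immediate response to the new forcing
      dT_prev * exp(-1 / tau)                           # lagged carry-over from earlier forcing
  }
  temp
}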

Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.
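The Solver step can be reproduced in the same spirit with R’s optim(), minimizing the squared error between the emulator and the digitized model output (again a sketch; total_forcing and ccsm3_temp are placeholder names for the summed GHG + solar + volcanic forcing and the digitized hindcast, and the starting guesses are just that):

# find the lambda and tau that best reproduce the CCSM3 hindcast
sse <- function(p, forcing, target) {
  sum((emulate(forcing, lambda = p[1], tau = p[2]) - target)^2)
}
fit <- optim(c(0.3, 3), sse, forcing = total_forcing, target = ccsm3_temp)
fit$par   # fitted lambda (°C per W/m2) and tau (years)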

 Figure 3. CCSM3 model functional equivalent equation, compared to actual CCSM3 output. The two are almost identical.

As with the GISSE model, we find that the CCSM3 model also slavishly follows the lagged input. The match once again is excellent, with a correlation of 0.995. The values for lambda and tau are also similar to those found during the GISSE investigation.

So what does all of this mean?

Well, the first thing it means is that, just as with the GISSE model, the output temperature of the CCSM3 model is functionally equivalent to a simple, one-line lagged linear transformation of the input forcings.

It also implies that, given that the GISSE and CCSM3 models function in the same way, it is very likely that we will find the same linear dependence of output on input in other climate models.

(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)

Now, I suppose that if you think the temperature of the planet is simply a linear transformation of the input forcings plus some “natural variations”, those model results might seem reasonable, or at least theoretically sound.

Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy. It is not true of any other complex system that I know of. Why would climate be so simply and mechanistically predictable when other comparable systems are not?

This all highlights what I see as the basic misunderstanding of current climate science. The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. I find this extremely unlikely, from both a theoretical and practical standpoint. This claim is the result of the bad mathematics that I have detailed in “The Cold Equations“. There, erroneous substitutions allow them to cancel everything out of the equation except forcing and temperature … which leads to the false claim that if forcing goes up, temperature must perforce follow in a linear, slavish manner.

As we can see from the failure of both the GISS and the CCSM3 models to replicate the post 1945 cooling, this claim of linearity between forcings and temperatures fails the real-world test as well as the test of common sense.

w.

TECHNICAL NOTES ON THE CONVERSION TO WATTS PER SQUARE METRE

Many of the forcings used by the CCSM3 model are given in units other than watts/square metre. Various conversions were used.

The CO2, CH4, N2O, CFC-11, and CFC-12 values were converted to W/m2 using the simplified formulas of Myhre et al., as given in their Table 3.
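The best known of those simplified expressions is the one for CO2; as a sketch of what such a conversion looks like (the other gases have their own, more involved expressions with overlap terms, and the 280 ppm reference here is just the usual pre-industrial value, not necessarily what CCSM3 uses):

# Myhre et al. simplified expression for CO2 forcing, in W/m2
co2_forcing <- function(C, C0 = 280) {   # C and C0 in ppm
  5.35 * log(C / C0)
}
co2_forcing(390)   # roughly 1.8 W/m2 at ~390 ppm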

Solar forcing was converted to equivalent average forcing by dividing by 4, since the sunlight intercepted by the Earth’s cross-sectional disc is spread over the four-times-larger surface area of the sphere.

The volcanic effect, which CCSM3 gives in total tonnes of mass ejected, has no standard conversion to W/m2. As a result we don’t know what volcanic forcing the CCSM3 model used. Accordingly, I first matched their data to the same W/m2 values as used by the GISSE model. I then adjusted the values iteratively to give the best fit, which resulted in the “Volcanic Adjustment” shown above in Figure 3.

[UPDATE] Steve McIntyre pointed out that I had not given the website for the forcing data. It is available here (registration required; the file is a couple of gigabytes).


220 Comments
Brian H
May 14, 2011 7:10 pm

Jim D’s repeated suggestion that it’s all the result of averaging GCMs is ludicrous. The reason, Jim, that they have to do that is that the GCMs are all over the map. Trying to replicate any one of them is pointless and futile.
What Willis has demonstrated is that all the teraflops are just Moon-walking and running-on-the-spot, and that the only actual functional math in them is brain-dead-simple/stupid.

Roger Sowell
May 14, 2011 7:32 pm

In my view, “onion2” on May 14, 2011 at 4:59 am has it right. Willis has stumbled upon a rather well-known and not very interesting computer science application. This is known in some circles as a “model of a model.”
I (and many colleagues) have also created simple one-equation relationships of very complex computer modeling results for oil refineries, petrochemical plants, and chemical plants. We did not generally obtain a linear result, but obtained a quadratic result (meaning one of the terms was squared). Many companies do this; the incentive being it is far faster to solve the problem using the simplified “model of a model.” Many times, the detailed, complex model may require hours or days to solve. The “model of a model” may solve in a few seconds, or much faster. The results are sufficiently accurate for the particular application.
For those who might be interested, one application of a model-of-a-model was in advanced process control of oil refinery process units. This control application required the simplified model-of-a-model to solve within a few seconds, and was solved once per hour. The simplified model was fast and robust – it solved every time – while the complex model would solve (sometimes) and required an hour or two.
Another application was to determine the energy consumption of a large, complex refinery and chemical processing plant, plus the utility plant that provided steam, electric power, and compressed air. The individual models that were used to provide the basis for the model-of-a-model required weeks to set up and solve. The model-of-a-model solved in about one second, and was run daily.
This is not a big deal in and of itself.
The greater question is, how much of the last century’s warming was due to natural cycles and natural events (volcanic eruptions), and how much, if any, was due to man’s activities? Man’s activities include (at a minimum) the release into the atmosphere of various gases and particles, changing the land surface, emitting massive quantities of heat from burning fossil fuels, and producing nuclear power.

Theo Goodwin
May 14, 2011 8:00 pm

Roger Sowell says:
May 14, 2011 at 7:32 pm
‘In my view, “onion2″ on May 14, 2011 at 4:59 am has it right. Willis has stumbled upon a rather well-known and not very interesting computer science application. This is known in some circles as a “model of a model.”’
You don’t read the forum before you post, do you? If you did I would not need to reply to you. Your argument is circular. You assume that there is some explicated model of climate change that the Warmista have presented to the public. No such thing exists. Having assumed the existence of the nonexistent Warmista explicated model, you then argue that what Willis has done is a trivial exercise in creating a simple model of another model. You hope, thereby, to have your readers conclude that there is a Warmista model explicated for the public. But there is none. As Willis explained above and as many of us have explained since, Willis’ model replaces a black box. Warmista have offered nothing but a black box.
Warmista have no physical hypotheses which could explain the warming that only they claim to exist and they have only a black box where they claim there is a model. Warmista have nothing. And you know it. I would say that you should be ashamed of yourself, but I can readily see that you are incapable of shame.

Theo Goodwin
May 14, 2011 8:05 pm

steven mosher says:
May 14, 2011 at 5:48 pm
“here Willis.
One way its done with 103 runs of a GCM.
http://www.newton.ac.uk/programmes/CLP/seminars/120711001.html”
It what?
Explicate the model for the general public.
Respond to the claim that Warmista have no physical hypotheses which can explain forcings.
You cannot do either. You know you cannot do either. The fact that you won’t address the matter puts you in the same league as Mann, Briffa, and the rest of the lot.
And, for God’s sake, son, don’t assign homework. If you are not ready to state your arguments then do not post. This is a forum for debate among consenting adults.

EJ
May 14, 2011 8:32 pm

Thanks Willis!

EJ
May 14, 2011 8:32 pm

This is what bugged me from the beginning.

Roger Sowell
May 14, 2011 8:40 pm

@Theo Goodwin on May 14, 2011 at 8:00 pm
“You don’t read the forum before you post, do you?”
Yes, I did. Completely. And with understanding.
“If you did I would not need to reply to you. Your argument is circular.”
No, my argument is sound. The fact that you don’t understand it is your problem, not mine. But, in the interest of furthering your understanding, I’ll explain it in some greater detail.
“You assume that there is some explicated model of climate change that the Warmista have presented to the public. No such thing exists.”
The solid black line in the post above, Figure 2, CCSM3 is the model RESULT that I refer to. That was produced from some workers in the climate science field, presumably from a complex General Circulation Model that was used in hind-cast mode, and after being given appropriate factors to tune the model to approximate the measured temperature over time. In my field, as explained above in my comment, the result of complex and tedious simulations and modeling also produced a curve (sometimes several curves or several segments of curves). That resulting curve is what was then fitted with parameters to determine a simplified equation or equations that would effectively substitute for the complex model.
“Having assumed the existence of the nonexistent Warmista explicated model, you then argue that what Willis has done is a trivial exercise in creating a simple model of another model. You hope, thereby, to have your readers conclude that there is a Warmista model explicated for the public. But there is none. As Willis explained above and as many of us have explained since, Willis’ model replaces a black box. Warmista have offered nothing but a black box.”
See above statement.
“Warmista have no physical hypotheses which could explain the warming that only they claim to exist and they have only a black box where they claim there is a model. Warmista have nothing. And you know it.”
Actually, the AGW proponents have an oft-stated hypothesis, or premise, that has no valid proof as of yet. Their premise is that CO2 and other “Greenhouse” gases (see Willis’ post above for some of these, or see the Kyoto Protocol for a list of six such gases, or see California’s AB 32 for a much more comprehensive list) slightly inhibit at least a portion of the heat leaving the Earth’s surface via infra-red radiation due to their capacity to absorb radiative energy in particular bandwidths. The Earth does NOT heat up as a result, however, it is supposed to cool off slightly less, especially in winter nights and with a more pronounced impact at the poles than at the equator. The net effect, they say, is to have warmer temperatures. Further, their premise holds that the warmer surface (that is, not as cool as it otherwise would be) leads to more water evaporation, this they term a positive feedback. The increased water vapor, itself a “greenhouse” gas, leads to yet less cooling at night.
I, on the other hand, have no problem stating that climate warming has occurred since at least the 1750 to 1800 time frame, as there is ample evidence of extreme cold throughout Western Europe and in North America. As I’ve written before, the only question is the cause of the warming, not the existence of the warming. Actually, there is another question: exactly how much warming has occurred? We really don’t know that answer, since the data set is corrupted, and adjusted, to say the least.
“I would say that you should be ashamed of yourself, but I can readily see that you are incapable of shame.”
Well, that’s an ad hominem attack, and I’ll let you speak for yourself. I will note, in passing, that one who resorts to such name-calling also admits he’s lost the debate due to a complete shortage of facts and logical arguments leading to valid conclusions. And having done so in such a public medium as WUWT!
My points above stand, as written.

Theo Goodwin
May 14, 2011 8:44 pm

Willis has struck the Warmista in the very heart of their credibility. Willis has stated very clearly something that we all know, namely, that the so-called model(s) used by the Warmista are nothing but black boxes. Warmista make runs. If they don’t like the results of their runs, they rejigger the model and run again. All to get to the predetermined run that shows just the right amount of global warming.
If you want some credibility, Warmista, Willis has just given you the best opportunity in the world. Come explicate your model here. Lay out the rational procedures that you use in modifying models after runs. Show us your record of decisions about modifications that you made as you built a history of runs and show us how those decisions are leading to some rational progress regarding the internal structure of the model.
Commenters on this site will do you the favor of analyzing your model, your procedures for recording modifications, your procedures for making modifications, the whole nine yards. That is what you want, isn’t it? You are scientists, aren’t you?
Mosher has dropped a very interesting hint. He has said that ENSO is treated as statistical noise in the models. Warmista could start there. Explain that point.
I promise not to use the word ‘unfalsifiable’.

jorgekafkazar
May 14, 2011 9:49 pm

I once was assigned during its last month to a project that involved a complex model using the most powerful computer then available. Anomalous output in some runs was unexplained until I did some hand calculations and discovered (1) a subtly unwarranted assumption had been embedded in the model, and (2) I could replicate the output within 5% by taking a simple average of six inputs and dividing by another. So much complexity had been built into the program that no one up to then had understood the implications of the assumption. Most of the intermediate numbers (our primitive version of “EVER-SO-HIGH DIMENSIONAL” output) simply got shuffled around for n iterations and then dropped out when the target result was calculated. A fortune had been spent on computer time; all wasted in detailed, irrelevant, “regional” printouts.

May 14, 2011 11:34 pm

Actually, had Willis come out with a quadratic, we would be having a very different conversation. It’s the linearity that kills the GCM argument.

Konrad
May 15, 2011 1:08 am

I do not find Willis being able to replace the workings of the CCSM3 model with a one line equation too surprising given what he discovered about the GISSE model. He has again highlighted the critical problem with these models, in that they effectively model a given temperature point by applying forcings to the previous temperature point. There are too many multi year changes to polar ice, ocean heat content and other real world conditions for this type of modeling to be accurate.
What I do find surprising is the forcings Willis shows in Figure 2. The plots shown for ozone, sulphur, CO2 and CFCs all seem unrealistic. Sulphur and CFCs show no variation, despite both being known to be produced by volcanoes. Ozone levels depend on UV radiation, which varies significantly over solar cycles, yet the only feature in the ozone plot appears to be an unproven response to the equally unrealistic CFC plots. The CO2 plot also seems dodgy, being at odds with both plant stomata proxies and historical direct chemical measurements.
Given that this CCSM3 model appears to be only one year deep and using atmospheric data from a different planet, I would not hold out too much hope for CCSM4.

Roger Sowell
May 15, 2011 1:08 am

Willis, thank you for the thoughtful reply. I very much appreciate the opportunity to exchange views. You have some very interesting posts on WUWT, and I have learned much from you. Thank you for your writings, and for the time you spend to explain things clearly and in sufficient detail. However, and I mean this in the most civilized way, I don’t always agree with what you write. I believe that this is one of those times, but perhaps it is because I have not made my points sufficiently clear. I believe I have understood your point. Let me explain. You wrote,
“The claim is made that the climate models, because of their unbelievable complexity, can successfully reproduce these chaotic phenomena. What I have shown is that they are not chaotic or complex in their essence—they are purely and mechanistically linear.”
My earlier work, in the 1980s and 1990s, was with analogous systems to what you accurately describe in the italicized quote as “these chaotic phenomena.” The systems we modeled are highly non-linear, have multiple simultaneous equations for chemical reaction mechanisms and reaction paths, have many degrees of freedom, a multitude of variables, several simultaneous or sometimes sequential constraints, and in many cases depend entirely on the choice of boundary conditions or initiation values. Plus, their behavior changes with time. Their behavior tomorrow is based at least in part on what happened today. These systems are very much analogous to the climate models, in that thermodynamics are invoked, equations of state are used many thousands of times, boundaries are computed then iterations are made to have those boundary values match within a certain tolerance, and there are also other similarities. There are also several competing versions of models that are used to simulate the processes. In summary, we are each describing very similar, complex, non-linear, computation-intensive simulations.
My point is that these complex simulations can and do produce results that can be graphed in two dimensions. That graph for GCMs is an x-y graph of temperature vs time, where temperature is the global average mean temperature. Your Figure 2 above shows that as the solid black line. That temperature vs time relationship is the output from running the GCM in hind-cast mode. Similarly, the complex models I worked with in oil refineries would produce an x-y graph for some result as a function of something else.
From there, both processes are about the same. As I understand your post above, you selected the few input criteria with the most likely impacts, and found values for those input criteria that would duplicate the solid black line as nearly as possible. You were successful, having achieved a correlation coefficient of 0.995. That is outstanding success.
We also did the same, by determining an equation that would duplicate our x-y graph as closely as possible.
You point out that the models “are not chaotic or complex in their essence—they are purely and mechanistically linear.” As to the chaotic aspect, I suspect that is correct. The modeling of chaotic systems, at least to my understanding of the state of that art, is not very successful at this time. That is, even if we can successfully model a chaotic system for a short time frame, any predictive value is worthless because the model fails quickly the farther into the future one predicts. Even if the model solves or reaches a solution, the divergence from reality is simply too great for the prediction to have any value.
As to the complexity aspect, these GCMs are somewhat complex, but really are not all that complex compared to some models. For one, they don’t have multiple chemical reactions occurring, each with its own set of kinetic rate equations; they don’t have catalysts that change in relative activity due to a host of variables including the passage of time; they don’t have highly variable energy input over time, because the sun is one big constant source of energy within a very small range of variability; they don’t even try to model things that are known to affect the system such as clouds, and oceanic basin thermal oscillations, and a few others. To my knowledge, they do not even begin to recognize or compute the effect of solar cycles and sunspot number, although the empirical evidence shows low sunspot number equals cold climate, and high sunspot number equals warm climate.
Leaving those known defects aside for the moment, what they do attempt to do is solve multiple simultaneous equations that are highly non-linear, for a large number of gridded cells that represent spatial regions on the globe.
The output, what the modelers ask for, is the global average mean temperature as a function of time. Whether the output is valid or not is determined by how well the model’s output matches measured values. As you correctly point out, the known temperature decline from around 1940 to 1975 is not well represented. There is a bust in the model, as we would say.
Finally, I hope that I have explained my point more completely and clearly. It is certainly no surprise to me, or to anybody in the petroleum industry that practices in economics and planning, that one can achieve a relatively simple equation that reproduces the output from a complex non-linear model. Every major oil company and most independent oil companies do this routinely, for dozens of applications, and have done so for decades.
I also know that the simple equation has serious limitations. One cannot extrapolate very far outside the known parameters. When a new variable begins to impact the real system, the entire process must be repeated to account for that variable. It appears to me that the absence of sunspot number as an input variable, and the failure to account for ocean basin thermal oscillations will soon show the serious deficiencies in their models.

Jim D
May 15, 2011 1:22 am

netdr, the graph you show from CCSM3 shows 0.4 C per decade initially, but IPCC with more models shows the average is much lower. In fact, the consensus is more like 0.2 which is consistent with 3 C per doubling, and consistent with the decadal averages between the 00’s and the 90’s.

May 15, 2011 1:25 am

Roger Sowell,
I designed auto-tuning servos for chemical plants. The algorithm used was practically trivial. i.e.
1. Stabilize the process at the running value
2. Turn your actuator up to 100%
3. Measure the delay until the plant starts responding
4. Measure the slope of the rise after the plant starts responding
The delay and the slope are all you need to tune the servo (using various proportionality constants which determine the “aggressiveness” of the servo)
Of course there are quite a few details I have left out – but that is the gist of it.
The beauty is that because it is a feedback system that is constantly adjusting the plant – you only need to be close. The above model can be applied to better than 90% of the control situations. i.e. a first order approximation is good enough.
Once your servo is servoing you can do continuous checks on the system (sophisticated systems can use the inherent noise of a system for this) and update the control parameters accordingly.
Which makes me think – could climate sensitivity be extracted from the noise of the data? I suppose it depends in part on the quality of the measurement.
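A minimal sketch of the reaction-curve idea described above, in R; the tuning constants here are the classic Ziegler–Nichols open-loop values, which may differ from whatever proportionality constants the commenter’s system actually used:

# Ziegler–Nichols open-loop (reaction-curve) tuning from a step test:
# step the actuator by du (e.g. to 100%), then measure the dead time L
# (delay before the plant responds) and the maximum slope R of the response.
zn_pid <- function(du, L, R) {
  Kp <- 1.2 * du / (R * L)   # proportional gain
  Ti <- 2 * L                # integral (reset) time
  Td <- 0.5 * L              # derivative time
  c(Kp = Kp, Ti = Ti, Td = Td)
}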

Jim D
May 15, 2011 1:26 am

Brian H, the use of the ensemble average is to remove natural internal variability. Once you remove that, all that should be left is climate forcing. What do you think should be left after removing natural internal variability? This is what Willis shows, and it should be obvious.

May 15, 2011 1:29 am

Mosher says “In short, it is not at all surprising that the global average (low dimensional output) of a complex system is this simple.”
Just because the models are aggregated and averaged doesn’t excuse the linearity. The real world follows one precise climatic path and *I* would be surprised if that was one that could be described as linear.

EllisM
May 15, 2011 1:48 am

Willis, if it was possible to assemble all the relevant observations (so-called “greenhouse” gases, the various kinds of aerosols, solar values, etc.) and use the correct temperature record (UAH, please), could your model replicate UAH?
That would be interesting!

Varco
May 15, 2011 3:03 am

Out of interest I used the excellent Oakdale ‘Datafit 9’ to do a non-linear regression of columns B to K of the forcing spreadsheet supplied (no standardization used) vs the temp output from the same spreadsheet and got an R^2 of 0.95. What does it mean – ‘dunno’, but it took less time than a cup of coffee to create 🙂
Willis, keep up the great work!
Equation ID: a*x1+b*x2+c*x3+d*x4+e*x5+f*x6+g*x7+h*x8+i*x9+j*x10+k
Number of observations = 131
Number of missing observations = 0
Solver type: Nonlinear
Nonlinear iteration limit = 250
Diverging nonlinear iteration limit =10
Number of nonlinear iterations performed = 9
Residual tolerance = 0.0000000001
Sum of Residuals = -1.63226376859171E-12
Average Residual = -1.24600287678757E-14
Residual Sum of Squares (Absolute) = 0.434578735934514
Residual Sum of Squares (Relative) = 0.434578735934514
Standard Error of the Estimate = 6.01788124352828E-02
Coefficient of Multiple Determination (R^2) = 0.9516724117
Proportion of Variance Explained = 95.16724117%
Adjusted coefficient of multiple determination (Ra^2) = 0.9476451127
Durbin-Watson statistic = 0.711469121227893
Regression Variable Results
Variable Value Standard Error t-ratio Prob(t)
a -1.89744278704406E-02 1.81523787513384E-03 -10.45286027 0.0
b 0.118418933483967 1.72891731205921E-02 6.84931157 0.0
c 116423.721682155 159924.971877965 0.7279896336 0.46804
d 126041339641.26 49438332950.4219 2.549465812 0.01205
e 3.46922240210789E-02 0.013893017447072 2.497097852 0.01388
f 8.14977251236907E-04 8.81368257363809E-04 0.9246727964 0.35699
g -0.111026191793535 2.89799501182426E-02 -3.831138126 0.0002
h -1.48009700962797E-02 4.04691874358841E-03 -3.657343039 0.00038
i 9.6706315535647E-03 2.18369061633845E-03 4.428572198 0.00002
j -1.05146889747344 0.886025154523761 -1.186725785 0.23768
k -144.236832673365 24.3509286610933 -5.923257987 0.0

Shub Niggurath
May 15, 2011 3:29 am

Mosher
The whole justification for the need to create and maintain complex computer models of the climate system is the unstated assumption, or claim, that the model is a good replication of the climate system because it is representationally irreducible.
[1] Under this scheme of thinking, further refinement is only possible by introduction of more parameters and equations (which supposedly mirror further facets of reality), or reductions in confidence intervals of prior parameters.
Consider the following: [I am using the “⊗” symbol to mean an interaction; it could be +, −, *, causes lagged increase, causes exponential increase, causes slow decay, etc.]
A climate model M can be represented by:
M(output) = X(a) ⊗ X(b) ⊗ X(c) ⊗ X(d) ⊗ … ⊗ X(n)
where X(a), X(b) and so on, are the participating contributory factors that result in M(output).
Now, the justification for using this model rests on two assumptions:
a) there are no large unknown factors in our list of X(a) to X(n)
b) the sum complement of interactions possible between X(a) to X(n) cannot be broken down, or reduced to simpler modes for all possible values of X(a) to X(n). In other words, confronting or tackling the high dimensionality must be unavoidable.
As a rough example, there must be a real reason to say (if x=2, y=3 and z=1):
a= ((x+y)*y)-(x^2+z)-(((y/x)*2)+z*2)
if,
a=x+y
does the same job.
So Willis’ analysis does not question the specific model alone, but the justification of the exercise of modelling. If Willis’s equation is correct, then the climate system is indeed a simple lagged linear response to forcings, which implies there is absolutely no need for high-end computational modelling. If on the other hand, the climate system is a complex high dimensional one, one that is barely represented with the greatest of difficulties with the computational power that is required to handle the climate model, then it should not be able to be broken down to be represented by a simple linear equation.
Secondly, climate models have long been criticized for being nothing more than linear regression exercises and therefore valid only as heuristics, drawing a bristling response from the climate modelers and scientists. From the above, it does not look like the latest models are in any way more representational than heuristic guides.
For examples of ‘modeling blindness’ see: Lahsen M. Seductive Simulations? Uncertainty Distribution Around Climate Models. Social Studies of Science. December 2005 35: 895-922 (google it, paper available freely from Scholar)

manacker
May 15, 2011 4:59 am

Willis
You point out that the models are simply set up to respond to certain previously factored-in forcings.
When they fail to produce the same trends as physically observed (such as 1945-1975), the rationalization of the modeler is:
“Well, my model was correct, except for…” (add in any unforeseen factor that made the model invalid). [See Nassim Taleb’s The Black Swan]
The problem in real life (and real climate) is that it is dominated by “except fors”.
That’s why the models are worthless for predicting the future.
Max

Paul
May 15, 2011 6:15 am

Willis, you may enjoy this. Email exchanges between outside scientists and RealClimate scientists are always fun to read.
Rancourt (a physicist) recently wrote a paper on radiation physics, the greenhouse effect, and the Earth’s radiation balance, arriving at the conclusion that “the predicted effect of CO2 is two orders of magnitude smaller than the effects of other parameters.” He sent the paper to the RealClimate folks for input. He now presents the email exchanges.
He introduces the exchanges as follows:
How do scientists operate? How do they attempt to influence each other? How do they protect their intellectual interests? Do they use intimidation? Paternalism? Do they mob challenges from outside their chosen field?
Consider this example from the area of climate science, involving some of the top establishment scientists in the field…
Peer criticism — Revised version of Rancourt radiation physics paper.
1. Rancourt writes original version of article, HERE.
2. Asks for and receives peer criticism, HERE.
3. Rancourt writes significantly revised version of article, HERE.
4. Asks for and receives further peer criticism about revised version, PRESENT POST.
5. It appears that Rancourt’s revised paper is correct: The predicted effect of CO2 is two orders of magnitude smaller than the effects of other parameters.
Following the posting of THIS significantly revised version of Denis Rancourt’s paper about Earth’s radiation balance, Rancourt asked the climate scientists at RealClimate for follow-up criticism — resulting in this email exchange:
http://climateguy.blogspot.com/2011/05/peer-criticism-revised-version-of.html

netdr
May 15, 2011 6:42 am

Jim D says:
May 15, 2011 at 1:22 am
netdr, the graph you show from CCSM3 shows 0.4 C per decade initially, but IPCC with more models shows the average is much lower. In fact, the consensus is more like 0.2 which is consistent with 3 C per doubling, and consistent with the decadal averages between the 00′s and the 90′s.
***********************
The estimated warming gets less and less as more time passes.
In 2006 or so the warming by 2100 was estimated to be +8 °C, then +3, then +2; the supposed warming goes down each year. Why is that? The 2 °C number is just on the cusp of being a slight problem for some places and a benefit for others.
I would appreciate a link to the other models’ output [with lower warming] which you claim to have seen. Most models, including this one, show a decreasing amount of warming as time goes on because of the logarithmic nature of CO2 warming.
The point I was making is that this model is seriously wrong so far, and Dr. Hansen’s 1988 model was just as wrong. The correct ones [if there are any] are so not scary that they are kept in a locked room and not shown to the press.
