Model Climate Sensitivity Calculated Directly From Model Results

Guest Post by Willis Eschenbach

[UPDATE: Steven Mosher pointed out that I have calculated the transient climate response (TCR) rather than the equilibrium climate sensitivity (ECS). For the last half century, the ECS has been about 1.3 times the TCR (see my comment below for the derivation of this value). I have changed the values in the text, with strikeouts indicating the changes, and updated the graphic. My thanks to Steven for the heads up. Additionally, several people pointed out a math error, which I’ve also corrected, and which led to the results being about 20% lower than they should have been. Kudos to them as well for their attention to the details.]

In a couple of previous posts, Zero Point Three Times the Forcing and Life is Like a Black Box of Chocolates, I’ve shown that regarding global temperature projections, two of the climate models used by the IPCC (the CCSM3 and GISS models) are functionally equivalent to the same simple equation, with slightly different parameters. The kind of analysis I did treats the climate model as a “black box”, where all we know are the inputs (forcings) and the outputs (global mean surface temperatures), and we try to infer what the black box is doing. “Functionally equivalent” in this context means that the contents of the black box representing the model could be replaced by an equation which gives the same results as the climate model itself. In other words, they perform the same function (converting forcings to temperatures) in a different way but they get the same answers, so they are functionally equivalent.

The equation I used has only two parameters. One is the time constant “tau”, which allows for the fact that the world heats and cools slowly rather than instantaneously. The other parameter is the climate sensitivity itself, lambda.
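To make that concrete, here is a minimal sketch of such a two-parameter lagged model in Python (a sketch only, assuming a standard one-box discrete form; the actual calculation is in the spreadsheet linked at the end of the post):

```python
import numpy as np

def lagged_response(forcing, lam, tau):
    """One-box lagged model: T(n) = lam * F(n) / tau + T(n-1) * exp(-1/tau).

    forcing : annual forcing anomalies (W/m2)
    lam     : climate sensitivity (deg C per W/m2)
    tau     : time constant (years)
    """
    T = np.zeros(len(forcing))
    for n in range(1, len(forcing)):
        T[n] = lam * forcing[n] / tau + T[n - 1] * np.exp(-1.0 / tau)
    return T

# A constant 1 W/m2 forcing with lam = 0.5, tau = 3 levels out near 0.59 C:
print(lagged_response(np.ones(100), 0.5, 3.0)[-1])
```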

However, although I’ve shown that two of the climate models are functionally equivalent to the same simple equation, until now I’ve not been able to show that this is true of the climate models in general. I stumbled across the data necessary to do that while researching the recent Otto et al paper, “Energy budget constraints on climate response”, available here (registration required). Anthony has a discussion of the Otto paper here, and I’ll return to some curious findings about the Otto paper in a future post.

Figure 1. A figure from Forster 2013 showing the forcings and the resulting global mean surface air temperatures from nineteen climate models used by the IPCC. ORIGINAL CAPTION: The globally averaged surface temperature change since preindustrial times (top) and computed net forcing (bottom). Thin lines are individual model results averaged over their available ensemble members and thick lines represent the multi-model mean. The historical-nonGHG scenario is computed as a residual and approximates the role of aerosols (see Section 2).

In the Otto paper they say they got their forcings from the 2013 paper Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models by Forster et al. (CMIP5 is the latest Coupled Model Intercomparison Project.) Figure 1 shows the Forster 2013 representation of the historical forcings used by the nineteen models studied in Forster 2013, along with the models’ hindcast temperatures, which at least notionally resemble the historical global temperature record.

Ah, sez I when I saw that graph, just what I’ve been looking for to complete my analysis of the models.

So I digitized the data, because trying to get the underlying results from the authors of a scientific paper can be a long and troublesome process, and may not be successful for a variety of reasons. Digitization these days can be amazingly accurate if you take your time. Figure 2 shows a screen shot of part of the process:

Figure 2. Digitizing the Forster data from their graphic. The red dots are placed by hand, and they are the annual values. As you can see, the process is more accurate than the width of the line … see the upper part of Figure 1 for the actual line width. I use “GraphClick” software on my Mac; assuredly there is a PC equivalent.

Once I had the data, it was a simple process to determine the coefficients of the equation. Figure 3 shows the result:

Figure 3. The blue line shows the average hindcast temperature from 19 models in the Forster data. The red line is the result of running the equation shown in the graph, using the Forster average forcing as the input.

As you can see, there is an excellent fit between the results of the simple equation and the average temperature hindcast by the nineteen models. The results of this analysis are very similar to my results from the individual models, CCSM3 and GISS. For CCSM3, the time constant was 3.1 years, with a sensitivity of ~~1.2~~ 2.0°C per doubling. The GISS model gave a time constant of 2.6 years, with the same sensitivity, ~~1.2~~ 2.0°C per doubling. So the model average results show about the same lag (2.6 to 3.1 years), and the sensitivities are in the same range (~~1.2~~ 2.0°C/doubling vs ~~1.6~~ 2.4°C/doubling) as the results for the individual models. I note that these low climate sensitivities are similar to the results of the Otto study, which as I said above I’ll discuss in a subsequent post.
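For anyone who wants to reproduce the fit, here is a sketch of the procedure (the file names are placeholders for the digitized series; 3.7 W/m2 per doubling of CO2 is the standard conversion from lambda to degrees per doubling, and the 1.3 ECS/TCR ratio is the one from the update at the top of the post):

```python
import numpy as np
from scipy.optimize import curve_fit

# Digitized Forster multi-model mean forcing (W/m2) and temperature (deg C).
# These file names are placeholders; the actual data is in the linked spreadsheet.
forcing = np.loadtxt("forster_mean_forcing.txt")
temps = np.loadtxt("forster_mean_temps.txt")

def model(F, lam, tau):
    # Same one-box recurrence as the sketch above.
    T = np.zeros(len(F))
    for n in range(1, len(F)):
        T[n] = lam * F[n] / tau + T[n - 1] * np.exp(-1.0 / tau)
    return T

(lam, tau), _ = curve_fit(model, forcing, temps, p0=[0.5, 3.0])

tcr = lam * 3.7   # transient response per doubling of CO2
ecs = 1.3 * tcr   # equilibrium response, using the ECS/TCR ratio from the update
print(f"tau = {tau:.1f} years, TCR = {tcr:.1f} C, ECS = {ecs:.1f} C per doubling")
```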

So what can we conclude from all of this?

1. The models themselves show a lower climate sensitivity (~~1.2~~ 2.0°C to ~~1.6~~ 2.4°C per doubling of CO2) than the canonical values given by the IPCC (2°C to 4.5°C/doubling).

2. The time constant tau, representing the lag time in the models, is fairly short, on the order of three years or so.

3. Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.

4. This analysis does NOT include the heat which is going into the ocean. In part this is because we only have information for the last 50 years or so, so anything earlier would just be a guess. More importantly, the amount of energy going into the ocean has averaged only about 0.25 W/m2 over the last fifty years. It is fairly constant on a decadal basis, slowly rising from zero in 1950 to about half a watt/m2 today. So leaving it out makes little practical difference, and putting it in would require us to make up data for the pre-1950 period. Finally, the analysis does very, very well without it …

5. These results are the sensitivity of the models with respect to their own outputs, not the sensitivity of the real earth. It is their internal sensitivity.

Does this mean the models are useless? No. But it does indicate that they are pretty worthless for calculating the global average temperature. Since all the millions of calculations that they are doing are functionally equivalent to a simple lagged linear transformation of the inputs, it is very difficult to believe that they will ever show any skill in either hindcasting or forecasting the global climate.

Finally, let me reiterate that I think that this current climate paradigm, that the global temperature is a linear function of the forcings and they are related by the climate sensitivity, is completely incorrect. See my posts It’s Not About Feedback and Emergent Climate Phenomena for a discussion of this issue.

Regards to everyone, more to come,

w.

DATA AND CALCULATIONS: The digitized Forster data and calculations are available here as an Excel spreadsheet.


Hilarious, Willis – reminds me of one of my favourite scenes out of the Chevy Chase / Dan Aykroyd movie “Spies Like Us”.

Here’s a better version of the clip – “Break it down again with the machine!” 🙂

RockyRoad

Something I’ve always felt: this was nothing but a bunch of bloviating, hyperventilating, egomaniacal climate scientists hooked on petaflops, and Willis has proven my hunch to be correct.
They should have stuck with the KISS* principle long ago. Sure, they wouldn’t have had the luxury and notoriety of playing with some of the most powerful computers on the planet, but they’d be honest in their frugality, which is always a better approach.
* KISS–Keep it simple, stupid (climate scientists)!

Kasuha

What you have just proved is that Nic Lewis’s and Otto et al’s analyses are worthless. Climate sensitivity is a parameter of models, so if it cannot be reliably established from their results, the method of establishing the climate sensitivity from results (or actual data) is wrong.
But I rather believe that your analysis is wrong. For instance, the only model you are analysing is your linear model and its regression to multi-model mean. And in the end you declare that linear model (i.e. your model) is wrong. Well, duh.

Bart

“Any intelligent fool can make things bigger and more complex. It takes a touch of genius – and a lot of courage – to move in the opposite direction.”
– A. Einstein

Rud Istvan

Willis, we disagree on many things (yup, future fossil fuel max annual energy extraction) but my compliments to you on this. Man, you appear to have nailed this thesis (meaning my formerly well trained (supposedly) mind can find no flaw, so MUST AGREE!). Good job!
(You might even get the irony of that, not directed at you at all, given other comments elsewhere this date.)

Lance Wallace

I tried looking at the forcing and exponential pieces of your equation separately. They are graphed in your modified Excel file. The five sharp drops in the temperature are associated with the forcing term, which always recovers either 1 or 2 years later to a strong positive value. What this means I have no idea, but you might be interested in looking at it.
https://dl.dropboxusercontent.com/u/75831381/CMIP5%20black%20box%20reconstruction–law.xlsx

Peter Miller

The net result of all this is that ‘climate scientists’ are more likely to hide their models’ research findings behind paywalls and increase their obfuscation in regards to providing access to raw data and the methodology in how it is processed. In their eyes, complexity for its own sake is a virtue.
One analogy to climate models is geological models. There is always a very strong temptation to generate increasingly complex models when you do not really understand what you are looking at. Then, all of a sudden, one day some bright spark summarises it into something relatively simple which fits all the known observations and bends no laws of science. At first, this is ignored or attacked, but eventually it is accepted – but this can take several years.
If today’s ‘climate scientists’ were to derive Einstein’s E = mc², they would end up with a formula spread over several pages.
In any scientific field, unless there are strong commercial or strategic issues, those involved should strive for KISS and be totally open about how they arrive at their conclusions. There is not much of either of these in climate science.
In any event, attacks on Willis’ findings are likely to be mostly of the “It’s far too simplistic” variety.

Stephen

I think we can conclude something else profound about the models:
If CO2-sensitivity and lag-time are the only effective parameters, then those models do not effectively consider any other processes which might affect the global mean temperature. Either other drivers are assumed to be small or all of these models predict feedbacks which drive them to zero. That is an unbelievable coincidence, if it is one. It seems more likely that those who constructed the models simply did not effectively include any other long-term processes (like these: http://wattsupwiththat.com/reference-pages/research-pages/potential-climatic-variables/). Their central result, “confirmation” that warming will continue (to one degree or another) as overall CO2-concentration increases seems to be pure circular logic, rendering them worthless.

Manfred

As absolutely nothing in the forcings covers AMO and PDO, these climate models will fail, or actually did fail.

James Bull

This sort of thing happens in big companies as well: management spends large sums on studies to find out how to make things more efficient (sack people), and when they’re done, all that happens is what those who make the products said would happen before the start of it all.
It does raise the question of what these supercomputers are doing if climate calcs are so easy. It must be one heck of a game of Tetris.
James Bull

Ben D.

Is this an example of Occam’s razor?

tty

Willis, have you tried fiddling with tau and lambda to match the actual historical record? You might produce a very superior GCM that you could flog to some credulous government for a billion dollars.

Frank

Stephen: if I understand this post correctly, the climate models estimate a much higher climate sensitivity parameter than the same parameter derived from the model’s predictions. This means, I think, that the other parameters, feedbacks etc. in the models force the model to derive a higher value for climate sensitivity in order to backfit the known data. So the other parameters and feedbacks do have an effect.

Willis, congratulations. Excellent work and a very significant finding.
I’d make a few points.
Complexity in science is a bad thing. I’d go as far as to say, all scientific progress occurs when simpler explanations are proposed. Occam’s Razor, etc.
Predictive models of the climate do not require physically plausible mechanisms as part of the predictive process, ie the model, in order to produce valid predictions. Trying to simulate the physics of the climate at our current level of understanding is a fool’s errand.
People have an, in part, irrational faith in what computers tell them. Try walking into your local bank and telling them the computer is wrong. This faith is justified with commercial applications, which are very extensively tested against hard true/false reality. But there is no remotely comparable testing of climate models, for a number of reasons, most of which you are doubtless aware of. However, this faith in computer outputs was the main way in which the climate models were sold to the UN, politicians, etc.
Further, increasing complexity was the way to silence any dissenting sceptical voices, by making the models increasingly hard to understand (complex) and hence to criticize. You have cut that Gordian Knot.
Again, congratulations.

Sweet, so sweet! Looking back from the perspective of a poor old downtrodden Generation VW scientist at these Gen X model exercises in circular logic I can only say … bravo. You have nailed them. I take it you are aware that the ~3 year lag factor has authoritative precedents too, in terms of the mean e-folding time for recirculation of CO2 between atmosphere and oceans, and hence is a primary reflection of the responses of the real earth (AMO, ENSO etc.)

Sorry I meant recirculation of heat (not CO2). See Schwartz, 2007 etc.

cd

Willis
Do you think we give too much credence to the models by even discussing their results? They are trained on historical data that has huge associated errors.
3) Despite the models’ unbelievable complexity, with hundreds of thousands of lines of code, the global temperature outputs of the models are functionally equivalent to a simple lagged linear transformation of the inputs.
But if I understand you correctly, the forcings are derived from the model runs. So one of the inputs to your function has to be derived first. So although the output of the collated models can be expressed as a simple algorithm (as you have done here, and as the models commonly do), they still need to do the runs in order to define the inputs to the final model. Do you not think that you need a caveat here – give them their dues? The cleverness is in the derivation, not the output. Perhaps if we massage the egos of the people who design/write the models they might be less adversarial?
Finally – and perhaps I am wrong here also – the models also have evolving feedbacks, so one would expect dependency on previous results and hence the lagged dependence in your function.

Richard LH

Hmm. So if the output is a lagged response of today +2 to 3 years then you should be able to predict the future up to that point in time from already measured values.
So what does the future hold (for the models anyway)?

cd

Steve Short
There seems to be a lot of hatred toward the people who generate these models.
We only have one climate system on Earth, so we can hardly do experiments like a “Generation VW scientist” would like ;). Computers do give us an opportunity to at least have a play and see what might happen in a simplified world, and as we increase the complexity and computational power, the models may converge on the real world. So I think we are attacking the wrong people. The problem lies with those (such as ecologists, say) who use the model outputs to do impact assessments/predictions without taking the time to understand the model limitations (if not the underlying theory) and then support alarmist nonsense with unfounded confidence in the press.
BTW, most engineering projects (big and small) depend on computer models, as do many scientific fields where experiments are very expensive or impractical.

Greg Goodman

Good work Willis. This is a good way to analyse the behaviour of the models. Amazingly good demonstration that despite all the complexity they are telling us nothing more than the trivially obvious about the key question.
However, unless I have misunderstood, what you are extracting is the overall sensitivity to all forcings, not the CS to CO2, i.e. it is the sum of volcanic and CO2 over a period that corresponds to a doubling of CO2.
This is what I’ve been saying for a while: exaggerated volcanics allow exaggerated GHG. Since there was lots of volcanism during the period when they wanted lots of GHG forcing, it works … up to 2000.
The last big volcano was Mt P, and once the dust settled the whole thing went wrong. Post-y2k is the proof that they have both volcanoes and GHG too strong.
The orange line shows the huge and permanent offset that Mt Agung is supposed to have made, yet this is not shown in any real data, as you have pointed out on many occasions.
Since clouds, precipitation and ocean currents are not understood and modelled but are just guesswork “parameters”, this is all the models can do.
Does your digitisation allow us to put volcanics to net zero and see whether GHG alone still gives about the right curve?
[Sorry I have not been able to look at your xlsx file, it crashes Libre Office. It’s flakey.]

richard verney

Willis
It would be useful to list the major volcanic eruptions since 1850 and the claimed negative feedback with respect to each.
Does anyone really think that Pinatubo (1991) had the same impact as Krakatoa (1883)? Without digitizing, to my unaided eye, the negative forcings appear similar.
Does anyone really have any confidence in the temperatures? Surely few people hold the view that today is about 0.6degC warmer than it was in the 1930s, or for that matter some 0.8degC warmer than the 1980s. I would have thought that within the margins of error, it is difficult to conclude that temperatures today truly are warmer than those either in the 1930s or 1880s, such that the present temperature record does not show the full extent of variability in past temperatures.
As far as I am concerned, models are simply GIGO, and one cannot even begin to properly back-tune them until one has a proper record of past temperatures.
Just to point out the obvious, if they are tuned to incorrect past temperatures, going forward they will obviously be wrong. It is now more difficult to fudge current temperature anomaly changes because of the satellite data sets, and this is one reason why we are seeing divergence on recent timescales.

richard verney

The 3rd paragraph in my above post contains a typo and should have read:
“Does anyone really have any confidence in the temperatures? Surely few people hold the view that today is about 0.6degC warmer than it was in the 1930s, or for that matter some 0.8degC warmer than the 1880s?”

Bloke down the pub

If the organisations that funded those models saw that they could be replicated by a couple of lines of equations, do you think they might ask for their money back?

Greg Goodman

cd “So I think we are attacking the wrong people. The problem lies with those (such as ecologists say) who use the model outputs to do impact assessments/predictions without taking the time to understand the model limitations ”
The problem is that a lot of research groups are filled with your “ecologists” (by which I presume you mean environmentalists).
If the modellers were honest about their state of development they would be saying: don’t use them for prediction, we are about 50 years away from being able to do that. They are not. They are on the gravy train, playing down the uncertainties and promoting their use now. Many seem to be filled with some environmentalist zeal that is influencing not only their presentation of suitability but the actual “parameters” that are chosen to get an expected or politically desired result out of the model.
Hansen is a self-declared activist and his volcanic forcings are bigger than anyone else’s.
When scientists attempt to use (abuse) their authority as experts to be non scientific and push an agenda, this does create a certain animosity.

Greg Goodman

richard: “it is difficult to conclude that temperatures today truly are warmer than those either in the 1930s or 1880s such that the present temperature record does not show the full extent of variability in past temperatures.”
I did point out over a year ago that Hadley adjustments were removing 2/3 of the variability from the earlier half of the record, and that this was done based on rather speculative reasoning, not fact.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-188237
Those adjustments helped make the data better fit the model.
If that was not the conscious intent of those adjustments, it was certainly the effect.
There is also a post-war adjustment of -0.5C which makes the late 20th c. rise look more like CO2.
However, the Ice Classic record would suggest that the wartime bump was a real event, not something that requires one-sided correction.
This is why I do most of my work using ICOADS SST, not Hadley’s adjusted datasets.

Greg Goodman

Note the 1939–40 bump that usually gets “corrected”, leaving just the post-war drop. This was Folland’s folly, introduced in the late 80s IIRC. It has been smoothed out a bit in HadSST3 but is still there and still -0.5 deg C.
http://climategrog.wordpress.com/?attachment_id=258

Greg Goodman

Bloke down the pub says: “If the organisations that funded those models saw that they could be replicated by a couple of lines of equations, do you think they might ask for their money back?”
Why would they do that? They are using the results to set up a $100bn slush fund with no legal auditing or accountability.
I’m sure they are very happy with the return on investment.

jhborn

Did anyone else get a broken link at cell G3 of Mr. Eschenbach’s spreadsheet?

agricultural economist

Willis,
you use aggregate forcings (F) as a driving variable in your equation, but you interpret the result as sensitivity to CO2. But in these models, CO2 is not the only forcing factor. To arrive at CO2 forcing, you would have to split F up into its individual historical components (CO2, soot, aerosols etc.). Many of these other forcing factors have a negative temperature effect, making it likely that the models still assume a higher sensitivity for CO2 … which would be expected.
To many commenters here: climate models try to simulate physical processes, so they are called process models. Willis puts the results of this into an ex-post statistical model to identify the importance of driving factors. The fact that this is possible does not mean that the process models are worthless. Willis has just simplified the models tremendously, thereby losing a lot of information that went into these models.
The problem with process models is that they cannot simulate processes which you do not program into them, either because you don’t know them or you find them insignificant or inconvenient for the outcome. This means that you really have to look at such models in detail to be able to fundamentally criticize them. Due to the complexity of the models this is very difficult even for scientifically literate persons.
Says an economic modeler …

Evan Jones

Well, that appears to show that a simple top-down analysis beats sickeningly complex bottom-up every time. As a wargame designer, this comes as no surprise. If you want to “simulate” the Russian Front, you do it by looking at Army Groups and fronts, not man-to-man, for heaven’s sake.

Greg Goodman

agricultural economist says:
you use aggregate forcings (F) as a driving variable in your equation, but you interpret the result as sensitivity to CO2. But in these models, CO2 is not the only forcing factor. To arrive at CO2 forcing, you would have to split F up into its individual historical components (CO2, soot, aerosols etc.).
===
Yes, that confirms my comment above.
” Many of these other forcing factors have a negative temperature effect, making it likely that the models still assume a higher sensitivity for CO2 … which would be expected.”
Ah, expectations. That is indeed the primary forcing in climate modelling.

Greg Goodman

agricultural economist says: Willis puts the results of this into an ex-post statistical model to identify the importance of driving factors.
What Willis shows, I think, is that after all the supposedly intricate modelling of basic physics and guessed forcings and parameters, all that comes out is a simple solution to the Laplace equation.
The equation he has used is basically a solution to the equation:
http://en.wikipedia.org/wiki/Heat_equation
Now once models work they will tell us a lot of detail on a regional scale, but IMO we are decades away from that level of understanding, where we know enough to make a first-principles approach work.
A climate model that cannot produce its own tropical storms is, frankly, worthless as it stands.
The problem now is that we know lots of the processes in great detail, but the ones that really matter are still guesswork. All the guesses are adjusted until the output is about what is “expected”.
At which point we may as well use Willis’ equation.

Greg Goodman

Since Willis’ approach seems to capture the behaviour of the GCMs I would suggest he digitises all the individual ‘forcings’, reduces volcanism to something that he feels to be more in line with observation, puts CO2 at its value based on radiation physics (1.3 from memory) and sees whether his model produces a better post-2000 result.
If I understand what is being shown in the graphs, without reading the paper, I think the GHG plot will be real forcing calculations plus hypothetical water vapour feedback.

jhborn

agricultural economist: “Willis puts the results of this into an ex-post statistical model to identify the importance of driving factors. The fact that this is possible does not mean that the process models are worthless. Willis has just simplified the models tremendously, thereby losing a lot of information that went into these models.”
But the fact that dispensing with that information had little effect on the results shows how rapidly the returns from adding that information diminished – i.e., how little the modelers really accomplished with all their teraflops of computer power.

Bill Illis

The issue is how the “Historical_nonGHGs” are used to offset the HistoricalGHGs in the hindcast.
Essentially, the Aerosols and other negative forcings like Land-Use increasingly offset the warming caused by GHGs in the hindcast.
The GHG temperature impact will follow a formula something like X.X * ln(CO2ppm) – 2y.yC. The 2003 version of GISS ModelE was 4.053 * ln(CO2ppm) – 23.0 [which is just a small 7 year lag from an instantaneous temperature response which is 4.33 * ln(CO2) – 24.8] but each individual model will have a different sensitivity to GHGs and then a different offset from aerosols etc. The higher the GHG sensitivity, the higher the aerosols offset there is in the hindcast.
http://img183.imageshack.us/img183/6131/modeleghgvsotherbc9.png
In the future, of course, no climate model is building in an increase in the negative from aerosols. IPCC AR5 assumptions have the (direct and indirect) aerosols offset becoming smaller in the future, changing from -1.1 W/m2 today to -0.6 W/m2 by 2100. The GHG temperature impact becomes dominant.
http://s13.postimg.org/rx2hw6s1j/IPCC_AR5_Aerosols.png
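A quick arithmetic check of the quoted 2003 ModelE fit (a sketch, taking the coefficients at face value; the implied warming per doubling is 4.053 × ln 2, independent of the offset):

```python
import math

# The 2003 GISS ModelE fit quoted above, coefficients taken at face value.
def modelE_fit(co2_ppm):
    return 4.053 * math.log(co2_ppm) - 23.0

# Warming per doubling depends only on the 4.053 coefficient: 4.053 * ln(2).
print(modelE_fit(560.0) - modelE_fit(280.0))  # ~2.81 C per doubling
```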

Richard Smith

Willis, is it not logical to assume that the forcings are actually a back calculation? Where does one get the data to calculate the forcings back in 1850?

Greg Goodman

Bill Illis: “IPCC AR5 assumptions have the (direct and indirect) aerosols offset becoming smaller in the future”
Thanks, what is the basis of that assumption?
Presumably the volcanic forcing is about zero already by about 2000. Are they assuming the Chinese are going to stop using coal?
What is the big drop in aerosols, especially direct, from 2000 onwards caused by?
Are you aware of what factor in hadGEM3 caused it to give less warming?
thx

Willis,
I was wondering about your basis for interpreting λ as climate sensitivity. But before I could figure that out, the units don’t seem right. λ does seem to have the units ΔT/ΔF, so λ ΔF has units T, but then it is divided by τ, which has units of years? It’s also unexpected to see an exponential with a dimensional argument.
But anyway, what sort of CS do you mean? This seems to be a transient version. In fact, with your model, if you make a one-time increase of 1 in ΔF, T increases in that step by λ/τ, but then it goes on increasing for a while, in fact to λ/τ/(1-exp(-1/τ)). Which actually is pretty close to λ, but the unit issues are a puzzle.
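That limiting value is easy to check numerically (example numbers only, assuming the one-box recurrence sketched earlier in the post):

```python
import math

lam, tau = 0.5, 3.0  # example values only
T = 0.0
for _ in range(200):  # hold a unit step in dF for 200 years
    T = lam * 1.0 / tau + T * math.exp(-1.0 / tau)

limit = (lam / tau) / (1.0 - math.exp(-1.0 / tau))
print(T, limit)  # both ~0.588, which is indeed pretty close to lam = 0.5
```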

agricultural economist

Greg Goodman:
“Ah, expectations. That is indeed the primary forcing in climate modelling.”
It’s not that I myself expect a higher sensitivity for CO2. But the climate modelers do.
But I would like to stress that I find Willis’ effort highly useful … given he manages to isolate different forcings in a multivariate regression.

Bill_W

I agree that this seems to be some sort of net forcing, not just due to CO2.
Nick Stokes, we need to give this “goes on increasing for a while” a name.
I suggest the climate multiplier effect. Then we can get Paulie Krugnuts to come on board and join the conversation.

Bill Illis

Greg Goodman says:
May 22, 2013 at 5:17 am
Are you aware of what factor in hadGEM3 caused it to give less warming?
——————————
Obviously fudge factors.
HadGEM2 submitted to the upcoming IPCC AR5 report has a very high GHG sensitivity.
http://s2.postimg.org/6uehe2sdl/Had_GEM2_2100.png
Aerosols decline because they assume we are going to increasingly regulate sulfate emissions from all sources. It’s mostly cleaned up already.

kadaka (KD Knoebel)

Joe Born said on May 22, 2013 at 3:14 am:

Did anyone else get a broken link at cell G3 of Mr. Eschenbach’s spreadsheet?

=CORREL(#REF!,G8:G163))
And an extra right parenthesis.

CORREL
Returns the correlation coefficient between two data sets.
Syntax
CORREL(Data1; Data2)
Data1 is the first data set.
Data2 is the second data set.
Example
=CORREL(A1:A50;B1:B50) calculates the correlation coefficient as a measure of the linear correlation of the two data sets.

Willis, excuse me, gotta question. Correlation command at E4 is “=CORREL(D10:D163,E10:E163)”
Why not start at row 8, start of data? Result is still 0.991.

T. G. Brown

Willis,
At risk of oversimplification, you have shown that the models provide an integrated (globally averaged temperature) response that is a linear function of the perturbation (forcing due to CO2). There are many examples of this in physics and allied fields. For example, one can take a solid beam that is a very inhomogeneous collection of atoms, molecules, and even randomly oriented crystalline domains (as with a metal beam) and reduce its mechanical response to several simple macroscopic material parameters (Young’s modulus, Poisson ratio, etc). The reason these work is that when one considers weak perturbations about a mean value, the first terms in a Taylor expansion are usually linear and decoupled. The coupling — which is the more complex behavior — doesn’t usually start until the nonlinear terms are considered.
What would be interesting (and quite a bit harder) is to see if a simple physical descriptor also governs the mean-square fluctuation. (Systems like yours usually follow something called a fluctuation-dissipation theorem.) However, you may not have access to the data necessary to do that analysis.
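In symbols, the linearization being described is just the first-order Taylor term, with the sensitivity playing the role of the derivative (a sketch of the standard expansion, not anything specific to these models):

$$T(F_0 + \Delta F) \approx T(F_0) + \left.\frac{dT}{dF}\right|_{F_0} \Delta F = T(F_0) + \lambda\,\Delta F$$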

Willis,
Paraphrasing Von Neumann elsewhere: “Climate change is a bit like an elephant’s jungle trail (randomness with a purpose), whereby the elephant is a Milankovic cycle, the far north Atlantic the elephant’s trunk (slowly up/down and sideways), ENSO its tail (swishing around back and forth), and CO2 analogous to a few fleas that come and go with the least of consequences.”
Modellers are only sketching the Hindustan plain thrusting into the Himalayas, with no elephants roaming around.

Quinn the Eskimo

Patrick Frank’s article “A Climate of Belief”, http://wattsupwiththat.com/2010/06/23/a-climate-of-belief/, makes a very similar point. I can’t cut and paste, but he shows that a simple passive warming model, every bit as simple as Willis’s, is very very close to the model mean projections.

Willis, you’re saying something I’ve said for the last 10 years: the models are programmed for a specific CS. In fact I remember reading an early Hansen paper where he says exactly the same thing.
As far as the code goes, I think they are all derivatives of the same code, and are almost functionally equivalent. The cell model is pretty simple; the complexity is around processing and managing all of the cell data as it runs.
The code is easy to get (well, NASA’s is); I just downloaded the source code for ModelII and ModelE1. It is in Fortran, though; you’d think someone would bother to rewrite it in a modern language. And if you think about it, since I think they’re all Fortran, it shows their common roots – no one has bothered to rewrite it.
Lastly, this NASA link points out that the models aren’t so accurate when you don’t use real SSTs.

cd

Greg
By ecologists I mean biologists whose specialism is in ecology. They are not the only ones: botanists, zoologists, etc.
I don’t agree that just because of one modeller – did Hansen actually design/write the models? – they are all bad. I would suspect few of the actual people who do this would actually suggest that the models should be used for predictions. You’ll probably find it is the team leaders, who did very little in the design or implementation phase, who are making all the shrill predictions.

Kasuha says:
May 21, 2013 at 10:26 pm
What you have just proved is that Nic Lewis’s and Otto et al’s analyses are worthless.
========
That doesn’t follow. Sounds more like wishful thinking on your part without any evidence in support.

agricultural economist says:
May 22, 2013 at 3:27 am
This means that you really have to look at such models in detail to be able to fundamentally criticize them.
====
a mathematician says otherwise. looking at the details is a fool’s errand when it comes to validating complex models. you will miss the forest for the trees.
there are very simple tests based on the use of “hidden data” – data that is kept hidden from both the model builders and the model itself – that can be used to test the skill of any complex model. if the model cannot predict the hidden data better than chance it has no skill, regardless of what the model details may be telling you.
where the model builders go off the rails is in assuming that the model are predicting the future. They are almost certainly not, because predicting the future is quite a complex problem. what the models are almost certainly predicting is what the model builders believe the future will be. this is a much simpler problem because the model builders will tell you what they believe the future will be if you ask them.
and this is what the models are doing when they print out a projection. they are asking the model builders “is this correct? is this what you believe?” if it is, if the answer is “yes” then the model builders will leave the model as is. if the model builders say “no”, then the models builders will change the model. in this fashion the models are always asking the model builders what they believe to be correct.
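A sketch of that hidden-data test (a hypothetical helper, with a persistence forecast standing in for “chance” as the no-skill baseline):

```python
import numpy as np

def beats_persistence(observed, predicted, split):
    """Hidden-data skill test sketch: compare model error on the held-out
    tail against a trivial persistence forecast (the last pre-split value)."""
    obs, pred = observed[split:], predicted[split:]
    naive = np.full(obs.shape, observed[split - 1], dtype=float)
    rmse_model = np.sqrt(np.mean((pred - obs) ** 2))
    rmse_naive = np.sqrt(np.mean((naive - obs) ** 2))
    return rmse_model < rmse_naive  # True only if the model shows skill
```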