Life is Like a Black Box of Chocolates

Guest Post by Willis Eschenbach

In my earlier post about climate models, “Zero Point Three Times The Forcing“, a commenter provided the breakthrough that allowed the analysis of the GISSE climate model as a black box. In a “black box” type of analysis, we know nothing but what goes into the box and what comes out. We don’t know what the black box is doing internally with the input that it has been given. Figure 1 shows the situation of a black box on a shelf in some laboratory.

Figure 1. The CCSM3 climate model seen as a black box, with only the inputs and outputs known.

A “black box” analysis may allow us to discover the “functional equivalent” of whatever might be going on inside the black box. In other words, we may be able to find a simple function that provides the same output as the black box. I thought it might be interesting if I explain how I went about doing this with the CCSM3 model.

First, I went and got the input variables. They are all in the form of NetCDF (“ncdf”) files, a standard format that contains both data and metadata. I converted them to annual or monthly averages using the computer language “R” and saved them as text files. I then opened these in Excel and collected them into one file. I have posted the data here as an Excel spreadsheet.
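For anyone who wants to reproduce that step, here is a minimal sketch in R of what the conversion looks like. The file name, the variable name, and the “ncdf4” package are assumptions for illustration only; the actual CCSM3 forcing files have their own names and layouts.

library(ncdf4)

# Open one forcing file and pull out a monthly series
nc  <- nc_open("ghg_forcing.nc")     # hypothetical file name
co2 <- ncvar_get(nc, "co2")          # hypothetical variable name, monthly values
nc_close(nc)

# Collapse the monthly values into annual averages (assumes 12 values per year)
n_years <- length(co2) %/% 12
years   <- rep(seq_len(n_years), each = 12)
annual  <- tapply(co2[seq_along(years)], years, mean)

# Save as a text file for later collection in Excel
write.csv(data.frame(year = seq_len(n_years), co2 = as.numeric(annual)),
          "co2_annual.csv", row.names = FALSE)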

Next, I needed the output. The simplest place to get it was the graphic located here. I digitized that data using a digitizing program (I use “GraphClick”, on a Mac computer).

My first procedure in this kind of exercise is to “normalize” or “standardize” the various datasets. This means adjusting each one so that its average is zero and its standard deviation is one. I use the Excel function “STANDARDIZE” for this purpose. This lets me view all of the data on a common scale. Figure 2 shows those results.
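(For anyone working outside of Excel, the same standardization is one line in R. This is only a sketch, assuming the collected series are the columns of a data frame called “forcings”:

# Mean 0 and standard deviation 1 for every column, like Excel's STANDARDIZE
standardized <- as.data.frame(scale(forcings))

Either way, the result is what is plotted in Figure 2.)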

Figure 2. Standardized forcings used by the CCSM 3.0 climate model to hindcast the 20th century temperatures. Dark black line shows the temperature hindcast by the CCSM3 model.

Looking at that, I could see several things. First, the CO2 data has the same general shape as the sulfur, ozone, and methane (CH4) data. Next, the effects of the solar and volcano data were clearly visible in the temperature output signal. This led me to believe that the GHG data, along with the solar and the volcano data, would be enough to replicate the model’s temperature output.

And indeed, this proved to be the case. Using the Excel “Solver” function, I fit the formula which (as mentioned above) had been developed through the analysis of the GISS model. This is:

 T(n+1) = T(n) + λ ∆F(n+1) / (1 - exp(-1/τ)) + ∆T(n) exp(-1/τ)

OK, now let’s render this equation in English. It looks complex, but it’s not.

T(n) is pronounced “T sub n”. It is the temperature “T” at time “n”. So T sub n plus one, written as T(n+1), is the temperature during the following time period. In this case we’re using years, so it would be the next year’s temperature.

F is the forcing, in watts per square metre. This is the total of all of the forcings under consideration. The same time convention is followed, so F(n) means the forcing “F” in time period “n”.

Delta, or “∆”, means “the change in”. So ∆T(n) is the change in temperature since the previous period, or T(n) minus the previous temperature T(n-1). ∆F(n), correspondingly, is the change in forcing since the previous time period.

Lambda, or “λ”, is the climate sensitivity. Tau, or “τ”, is the lag time constant; it establishes how much the response of the system lags behind the forcing. And finally, “exp(x)” means the number e (about 2.71828) raised to the power of x.

So in English, this means that the temperature next year, or T(n+1), is equal to the temperature this year, T(n), plus the immediate temperature change due to the change in forcing, λ ∆F(n+1) / (1 - exp(-1/τ)), plus the lag term ∆T(n) exp(-1/τ) from the previous forcing. This lag term is necessary because the effects of changes in forcing are not instantaneous.
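For concreteness, here is the same recursion written out as code. This is only a sketch in R of the equation exactly as stated above; “forcing” is a hypothetical vector of total forcing in watts per square metre, one value per year, and lambda and tau are the two parameters to be fitted.

# One-line lagged model, coded directly from the equation above
# forcing: numeric vector of total forcing (W/m2), one value per year
# lambda: climate sensitivity; tau: lag time constant (years)
lagged_model <- function(forcing, lambda, tau, T0 = 0) {
  n       <- length(forcing)
  temp    <- numeric(n)
  temp[1] <- T0
  decay   <- exp(-1 / tau)
  for (i in 2:n) {
    dF <- forcing[i] - forcing[i - 1]                   # the change in forcing
    dT <- if (i == 2) 0 else temp[i - 1] - temp[i - 2]  # the previous change in temperature
    temp[i] <- temp[i - 1] + lambda * dF / (1 - decay) + dT * decay
  }
  temp
}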

Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.

 Figure 3. CCSM3 model functional equivalent equation, compared to actual CCSM3 output. The two are almost identical.

As with the GISSE model, we find that the CCSM3 model also slavishly follows the lagged input. The match once again is excellent, with a correlation of 0.995. The values for lambda and tau are also similar to those found during the GISSE investigation.
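For completeness, the Solver step can also be reproduced outside of Excel. The sketch below reuses the lagged_model function above; “total_forcing” (the summed GHG, solar, and volcanic forcing) and “model_temp” (the digitized CCSM3 output) are hypothetical names, and the starting guesses are arbitrary.

# Find the lambda and tau that minimize the squared error between the
# simple lagged model and the digitized CCSM3 output (the Solver step)
sse <- function(p) {
  fit <- lagged_model(total_forcing, lambda = p[1], tau = p[2])
  sum((fit - model_temp)^2)
}
best <- optim(c(0.3, 3), sse)  # arbitrary starting guesses for lambda and tau
best$par                       # fitted lambda and tau
cor(lagged_model(total_forcing, best$par[1], best$par[2]), model_temp)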

So what does all of this mean?

Well, the first thing it means is that, just as with the GISSE model, the output temperature of the CCSM3 model is functionally equivalent to a simple, one-line lagged linear transformation of the input forcings.

It also implies that, given that the GISSE and CCSM3 models function in the same way, it is very likely that we will find the same linear dependence of output on input in other climate models.

(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)

Now, I suppose that if you think the temperature of the planet is simply a linear transformation of the input forcings plus some “natural variations”, those model results might seem reasonable, or at least theoretically sound.

Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy. It is not true of any other complex system that I know of. Why would climate be so simply and mechanistically predictable when other comparable systems are not?

This all highlights what I see as the basic misunderstanding of current climate science. The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. I find this extremely unlikely, from both a theoretical and practical standpoint. This claim is the result of the bad mathematics that I have detailed in “The Cold Equations“. There, erroneous substitutions allow them to cancel everything out of the equation except forcing and temperature … which leads to the false claim that if forcing goes up, temperature must perforce follow in a linear, slavish manner.

As we can see from the failure of both the GISS and the CCSM3 models to replicate the post 1945 cooling, this claim of linearity between forcings and temperatures fails the real-world test as well as the test of common sense.

w.

TECHNICAL NOTES ON THE CONVERSION TO WATTS PER SQUARE METRE

Many of the forcings used by the CCSM3 model are given in units other than watts/square metre. Various conversions were used.

The CO2, CH4, N2O, CFC-11, and CFC-12 values were converted to W/m2 using the various formulas of Myhre as given in Table 3.

Solar forcing was converted to equivalent average forcing by dividing by 4.
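As an illustration of two of these conversions, here is a short sketch in R. The CO2 line is Myhre’s well-known simplified expression, and the 280 ppm baseline is an assumption for illustration rather than a value quoted above; the solar line is simply the divide-by-four geometry just mentioned.

# Two of the unit conversions, sketched in R
co2_forcing <- function(co2_ppm, co2_base = 280) {
  5.35 * log(co2_ppm / co2_base)  # W/m2, Myhre's simplified CO2 expression
}
solar_forcing <- function(tsi) {
  tsi / 4                         # spread incoming solar irradiance over the whole sphere
}
co2_forcing(560)  # about 3.7 W/m2 for a doubling from 280 ppm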

The volcanic effect, which CCSM3 gives in total tonnes of mass ejected, has no standard conversion to W/m2. As a result we don’t know what volcanic forcing the CCSM3 model used. Accordingly, I first matched their data to the same W/m2 values as used by the GISSE model. I then adjusted the values iteratively to give the best fit, which resulted in the “Volcanic Adjustment” shown above in Figure 3.
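In code terms, that iterative adjustment simply means letting the volcanic scaling be a third fitted parameter alongside lambda and tau. This is a sketch only, reusing the lagged_model and optim pieces above, with “ghg_solar_forcing” and “volcano_index” as hypothetical input vectors.

# Treat the volcanic scaling as a third parameter to be fitted
sse3 <- function(p) {
  total <- ghg_solar_forcing + p[3] * volcano_index  # p[3] is the volcanic adjustment
  fit   <- lagged_model(total, lambda = p[1], tau = p[2])
  sum((fit - model_temp)^2)
}
best3 <- optim(c(0.3, 3, 1), sse3)  # arbitrary starting guesses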

[UPDATE] Steve McIntyre pointed out that I had not given the website for the forcing data. It is available here (registration required, a couple of gigabyte file).

May 14, 2011 3:20 am

Willis – congratulations for your new super-cooled-computer. I didn’t know you had access to such a powerful machine, of the kind needed by the Met Office in the UK at the small cost of £10M.
ps you’re not implying you’ve done it all on a sub-$1,000 PC, are you? What a scandal that would be!!!
pps in hindsight, considering the absurdist idea that climate can be computed by adding forcings up, it makes perfect sense that the output is a linear transformation of the forcings…

Margaret
May 14, 2011 3:21 am

I am puzzled as to which variable on your standardised forcing graph has that amazing downward trend from 1970 onwards. Is it ozone? And whatever it is, can you comment on whether it is a reasonable path for it to take?

Patrick Davis
May 14, 2011 3:31 am

Oh the “black box” fix for everything complex. Yes, the simple, thoughtless, solution. Rather than actually finding/working out how it works or working within the boundaries of a “known” system, some want to “pump stuff in”, regardless of, say capacity, and then, “expect desired results”.

Jack Simmons
May 14, 2011 3:59 am

I’m with Margaret.
What is the item taking the dive around 1970?

Marcus Kesseler
May 14, 2011 4:02 am

Wow! Just wow!
All those lines of code evolving over decades, all those complex interactions between grid cells, all these finely tuned parameters, this huge international academic cooperation, to get – what?
An output equivalent to a simple Excel sheet cell formula with three parameters!
This is really mindboggling!
Congrats to Willis and all those involved and cooperating in putting the pieces together over the last months.
Marcus Kesseler

Editor
May 14, 2011 4:06 am

“risible fantasy”, Willis? You’re being too kind. I like risible, though.

Steve in SC
May 14, 2011 4:43 am

Willis,
There are termites in that shelf.

onion2
May 14, 2011 4:59 am

I don’t understand what this article proves and I completely disagree with this:
“Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy.”
Quite the contrary, I would be surprised if the behavior =didnt= boil down to a simple equation. This is quite common. The system might be chaotic but the global surface temperature element of it might be highly deterministic on energy balance and so can be expressed as simple equation. In fact that makes complete sense does it not?
After-all the concept of climate sensitivity and forcing implicitly mean global temperature is linearly related to forcings.
Other examples abound. Line by line radiative models to calculate the forcing from a doubling of CO2 for example are very complicated but after you have the results you can find a simple equation to fit it, which happens to be the famous ln(CO2/CO2base) * 5.35. Similar simple equations have been made for other greenhouse gas increases. In unrelated areas too – eg we can fit express the complex motions of planets and moons to the equation G(m1 + m2) / r2.
Some commenter’s seem to be confusing the derivation of such a simple equation to fit the behavior as a suitable replacement for the models themselves. As if scientists could have just foregone the millions spent on modeling and instead just used a simple equation. But this overlooks the fact that the equation is derived by fitting variables to the model output. Without the model output in the first place you can’t generate the equation.

David Wells
May 14, 2011 5:13 am

My understanding is that a computer is incapable of generating a random number if that is true then whatever information you input the result would always be something other than a random/chaotic number therefore trying to model a climate which is by nature chaotic is not possible. A computer can record data and display it by way of a graph, but as far as I understand it cannot generate chaos of its own volition. But if you have a degree in computer science you would know this, so why do we bother trying to model our climate when we know you can never design a suitable model by using a computer? You can use all the clever equations that you like, but the result will always be based upon what you input, and our climate does not work that way; it is random, so any model is utterly and completely pointless. Correct me if I am wrong?

RockyRoad
May 14, 2011 5:16 am

Just because some horribly expensive computer can perform an amazing number of FLOPS or IPS doesn’t mean the application they’re targeting necessarily NEEDS that level of processing power–It appears in all likelihood they’re applying “monetary forcing” to solve a problem that, indeed, a standard PC with a perfect solution (substance in/substance out) can process without difficulty. Let’s just remove their bombastic cover and label it accurately: Deception. Spin. Gluttony. All bad.
Good work, Willis! This is the hardest face palm I’ve applied in quite a while.

Frank K.
May 14, 2011 5:21 am

Great post Willis!
Until we get REAL verification, validation, and documentation standards applied to the climate models, they will forever be “black boxes” stuffed with ill-posed numerical representations of coupled systems of partial differential equations, parameterized models of subgrid scale physics, and approximate (or poorly understood) boundary and initial conditions. Some groups (e.g. NCAR) are at least doing a respectable job on the documentation front…

Mats Bengtsson
May 14, 2011 5:36 am

My question would be: with the findings now done and the formula now configured as a black-box equivalent for the factors used, what about adding the CO2 factors and those following its shape? How would a third and fourth graph look on the page: a third graph that shows the added curve, and a fourth graph that shows the result had CO2 remained constant during the years? How much difference would the known versus equalised CO2 changes make on the graph?
— Mats —

Tom in Florida
May 14, 2011 5:39 am

“F is the forcing, in watts per square metre. This is the total of all of the forcings under consideration”
It’s the “all forcings under consideration” that gets me. Is this not a subjective consideration? Or is it agreed upon by everyone that these forcings are all the correct ones that should be considered and that they are using the correct numbers for those forcings? If so then apparently we know everything about climate forcings and the science is settled.

Ron Cram
May 14, 2011 5:47 am

Great work, Willis. Please submit this to a few dozen journals.

Bob Moss
May 14, 2011 5:51 am

If you take this formula and project into the future with a doubling of CO2 does it match the IPCC projections?

Richard M
May 14, 2011 5:54 am

I don’t think it is at all unusual that during a time of relative climate stability one would see a linear response to the major forcings. However, since the models fail to properly hindcast the temperatures, even given the relative stability, it demonstrates they are pretty much worthless in their current state.
If it can’t get the easy one right what use is it?

Brett
May 14, 2011 6:11 am

Does anyone actually know what the equation used in CCSM3 actually is? I havn’t the time or wherewithal to dig through the code. If these models are truly using linear equations to test causality then the results should be suspect. I’m a little rusty in stats as it’s been a few decades, but wouldn’t these guys be using a more complex non-linear model or at least non-linear estimation techniques? The old adage that correlation does not mean causality certainly rings true if the modelers believe that a linear equation fit to the data is proof of a causal model.

Michael J
May 14, 2011 6:28 am

Willis: In fairness you haven’t really proven anything. You were able to fit the model data to a linear equation, but that doesn’t mean that the original model used that equation. They may have used a much more complex model and your result may be simply coincidental. Then again, they may not. We don’t have enough information to tell.
One of my great objections to AGW enthusiasts (and many involved in modern “science”) is confusing “plausible” with “proven”. You have shown that it is plausible that the model in question used a linear equation, but I don’t believe that you have proven it.
Regardless, thanks for a thought-provoking article.

ferd berple
May 14, 2011 6:31 am

“which variable on your standardised forcing graph has that amazing downward trend from 1970 onwards. Is it ozone?”
The argument made by the IPCC and others that CO2 must be responsible for the temperature rise in the last half of the 20th century because they cannot find any other cause is visibly false.
The main forcing that changed at that time is ozone. If you were looking for a reason that temperatures went up, this must be the first suspect. Less ozone = higher temperatures. Maybe because it is blocking less UV?

May 14, 2011 6:36 am

Nifty analysis. Does not your/their low value for lambda suggest some potent negative feedback mechanisms?

Michael J
May 14, 2011 6:40 am

My understanding is that a computer is incapable of generating a random number

A computer cannot generate randomness from nothing. It can generate “pseudorandom” numbers that look like random numbers, but are not truly random.
However a computer can tap into a source of true randomness and use that to make true random numbers. In simple cases it might sample voltage fluctuations or mouse movements or sounds picked up by a microphone. In more robust setups it might sample white noise from a radio or even (in one case) digital images of a lava lamp.
So in summary, computers can generate random numbers, but it is hard.

ferd berple
May 14, 2011 6:42 am

Have you run the model forward to see what it calculates for future temperatures past 2000 to see how well it is doing? Also to see how well it tracks the GISS and CCSM3 projections.
This is potentially a huge finding, because if your Excel spreadsheet tracks the model to .995 going forward, it suggests that the models can be replaced by a much less expensive equivalent.
It also suggests that for all their reported sophistication, the models are nothing of the sort. Their behavior can be reduced to an extremely simple equation that demonstrates the underlying assumptions inherent in the models.

EllisM
May 14, 2011 6:42 am

Can you replicate the TOA flux, or precipitation, or sea ice area or volume, with this method? What about regional effects? Likewise, what of ENSO/PDO/AMO?
That would be really interesting to see. Who knew Excel had such power?

May 14, 2011 6:50 am

onion2 says: May 14, 2011 at 4:59 am
I don’t understand what this article proves. . .

Apparently.
The models, with all their complexity, can be replaced with a single line formula to get the same result. The climate, on the other hand, cannot.
Both model and formula miss the chilly 50’s and 60’s. So after $80 billion spent, we give birth to a mouse, a model that is no better at predictions than a simple equation, and less skillful than the Farmer’s Almanac.

juanita
May 14, 2011 6:59 am

hmmm, seem to have been promised a box of chocolates, or even a discussion about chocolates. Had again by Anthony’s witty blog titles.
Actually, this is very interesting – we’re doing “functions” in math, and here’s an example of a real world application. That’s as far as I can follow it. I need another cup of coffee.

Don K
May 14, 2011 7:03 am

“David Wells says:
May 14, 2011 at 5:13 am
My understanding is that a computer is incapable of generating a random number if that is true then whatever information you input the result would always be something other than a random/chaotic number therefore trying to model a climate which is by nature chaotic is not possible.”
Strictly speaking, computer “random numbers” are pseudo-random numbers generated by various algorithms. But pseudo-random really is good enough for most purposes, and you have the option of “seeding” them with some more random number generated from, for example, the time needed to respond to an operator prompt. Or you can just pick a number from a table of actual random numbers generated, as I recall, by digitizing white noise. For more information than you really want to know about pseudo-random numbers, Donald Knuth devoted a lot of space in one of the volumes of “The Art of Computer Programming” to their generation and use.

May 14, 2011 7:04 am

Hmm, I can see one thing wrong right off the bat – the assumption that this year’s temperature is predicated on last year’s. No wonder this always builds up over time.
What if there is no correlation to last year’s temperature? What if this year’s temperature is independent, and actually due to the current TSI alone? What if the lag time from TSI (due to the enormous heat sink which are our oceans) is what causes the slow response across years?
What if the natural balance point of the Earth is the (internal heat + TSI) – (total energy radiation rate to space), where “radiation rate” is both the atmospheric rate and the much slower heat transfer from the oceans to atmosphere?
I see nothing in any model which can conclude each year’s temperature is driven by the previous year’s temperature. I can see how radiation rates can be driven by processes which span years, where heat trapped takes years to dissipate.
But this rate could fluctuate based on the TSI changes. Small changes result in small radiation rate changes. But a huge drop in TSI could accelerate the loss, while a huge increase in heat (either TSI, TSI capture due to volcanoes, or a spike in heat escaping from the core) could take years to bleed off (such as the 1998 spike).
Is prior year temp just a lousy representation for excess heat left over from a prior time period? How does this function work when the prior year is much cooler? Does it still force upwards due to more GHG (something now proven to be wrong)?

Olen
May 14, 2011 7:07 am

The most important input was omitted, money fueled by special interests.

DirkH
May 14, 2011 7:15 am

onion2 says:
May 14, 2011 at 4:59 am
“Other examples abound.”
What you give as “examples” are exactly not examples of complex systems with feedbacks (mind you, the existence of life itself proves that negative feedbacks dominate; some of them created by life itself, for instance the obvious CO2-regulating features of vegetation). So expecting the planet’s climate to behave like a simple one-line linear transformation speaks volumes about the childish mental state of the Institute of Professional Corrupt Collusionists (did I get the long form of IPCC right? Hope so.)

DirkH
May 14, 2011 7:20 am

Michael J says:
May 14, 2011 at 6:28 am
“Willis: In fairness you haven’t really proven anything. You were able to fit the model data to a linear equation, but that doesn’t mean that the original model used that equation. They may have used a much more complex model and your result may be simply coincidental. Then again, they may not. We don’t have enough information to tell. ”
Willis has proven that the original model is practically functionally equivalent to a simple transformation of the input forcings, no more and no less – so for all practical considerations it can be substituted with the simple transformation.
There *might* be a possibility that the complex models develop a deviating behaviour sometime in the future *but* as they are validated by assessing their hindcasting (which *is* functionally equivalent to Willis’ transformation) such a deviating behaviour would come as a surprise even for the authors of the complex model!
In other words, the future projections of the IPCC *must* conform to Willis’ simple model as well!

Fred from Canuckistan
May 14, 2011 7:28 am

Well this should send some of the True Believers into fits.
I think I can hear the first Tamino bleatings . . . . Incoming !

May 14, 2011 7:30 am

onion2 writes “The system might be chaotic but the global surface temperature element of it might be highly deterministic on energy balance and so can be expressed as simple equation. In fact that makes complete sense does it not?”
So if it could be shown that the climate behaved non-linearly, then that would be a strong argument for falsifying the models wouldn’t it?

Eric Chief Lion
May 14, 2011 7:34 am

David Wells says:
May 14, 2011 at 5:13 am
This is why cryptography uses a seed for pseudorandom number generation where the seed must contain sufficient entropy to provide an assurance of randomness.
See NIST Special Publication 800-90A, Rev 1 at: http://csrc.nist.gov/publications/drafts/800-90/Draft_SP800-90A-Rev1_May-2011.pdf

May 14, 2011 7:35 am

Wonderful post, Willis, despite posters such as Onion, who obviously try desperately hard to understand stuff, throwing metaphorical rocks at you for demonstrating that one of the icons of Warmism is merely a piece of applied snake-oil salesmanship.

Jay
May 14, 2011 7:39 am

Willis,
Your PC has the computing power of Crays from decades ago.
The importance of computation is less than the importance of thinking!
-Jay

Theo Goodwin
May 14, 2011 7:51 am

onion2 says:
May 14, 2011 at 4:59 am
“Some commenter’s seem to be confusing the derivation of such a simple equation to fit the behavior as a suitable replacement for the models themselves. As if scientists could have just foregone the millions spent on modeling and instead just used a simple equation. But this overlooks the fact that the equation is derived by fitting variables to the model output. Without the model output in the first place you can’t generate the equation.”
You are arguing in a circle. You are assuming that there is something called “the Warmista climate model” that the Warmista have explicated for the public, that the public appreciates the internal complexity and beauty of this model though they might not fully understand it, and that the purpose of Willis’ equation is to summarize the main result from the model. Willis’ starting point (assumption) is the factual truth that the Warmista present us with a Black Box. The Warmista have explicated nothing for the public, except how to bow to Warmista. Willis’ equation, then, is not a summary of model results; rather, it is the sum total of all the public knows about the model. And the blame for that lies squarely at the door of the Warmista who will allow the public to understand their model when it is pried from their cold dead hands. It is the same behavior that is found in Mann who hides his data and his statistical methods, in Briffa who hides the physical changes in his proxies and never finds the scientific curiosity to pursue an explanation of those physical changes, and this behavior is found in all of the Warmista. None of these people have the instincts of scientists.

Theo Goodwin
May 14, 2011 7:56 am

AJStrata says:
May 14, 2011 at 7:04 am
“Hmm, I can see one thing wrong right off the bat – the assumption that this year’s temperature is predicated on last year’s. No wonder this always builds up over time.
What if there is no correlation to last year’s temperature? What if this year’s temperature is independent, and actually due to the current TSI alone?”
There you go introducing science again. Haven’t you learned that this is “Warmista Science?” /sarc

Don K
May 14, 2011 7:56 am

“Brett says:
May 14, 2011 at 6:11 am
Does anyone actually know what the equation used in CCSM3 actually is? I havn’t the time or wherewithal to dig through the code. ”
I don’t know for sure, but I’d more or less assumed that the models use stepwise integration. Basically, you have a simple prediction equation that includes all the forcings. You take a little step, recompute the forcings, then take another little step, recalculate, etc. It’s much more complicated than that, but that’s the way object tracking/prediction works for many missiles, aircraft, and satellites. I’ve always assumed that tropical storm and other weather tracking works like that. I also assumed that climate scientists draw on experience with weather prediction and would use similar techniques.
And no, just because the basic equations may be simple, the results very often might not be simple at all. It’s entirely possible to track an object that is maneuvering enthusiastically using stepwise integration.

SandyInDerby
May 14, 2011 8:03 am

Margaret & Jack Simmons
Looking at the colour key I’d say CFC11 – Montreal Agreement?

Admin
May 14, 2011 8:08 am

Climate logic is not linear, but circular.

Benjamin
May 14, 2011 8:08 am

onion2 “Without the model output in the first place you can’t generate the equation.”
You’re kidding right ?
The equation is physics101 of a linear response with a timelag (hence lambda/tau).
Then it’s just fitting to get the right parameters.
The more free parameters you have, the better you can fit but the less significant it is (with an infinite number of parameters, you can fit anything).

Werner Brozek
May 14, 2011 8:12 am

I believe there is a contradiction in the following two comments from the article and that they can be reconciled:
“Figure 3 shows the final result of that calculation. I used only a subset of the forcings, which were the greenhouse gases (GHGs), the solar, and the volcanic inputs. The size of the others is quite small in terms of forcing potential, so I neglected them in the calculation.”
“(Let me add in passing that the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975 … as did the GISSE model.)”
I believe that the PDO, which is roughly a 60 year sine wave, needs to be added to the calculations as another term which is NOT negligible. Then all other forcings would be relative to the sine wave instead of to a straight line. It would certainly account for the slight drop in temperatures over the last decade.

May 14, 2011 8:24 am

AJStrata says,
“I see nothing in any model which can conclude each year’s temperature is driven by the previous year’s temperature. I can see how radiation rates can be driven by processes which span years, where heat trapped takes years to dissipate.”
have a look at the second expert article regarding simple modelling:
http://www.copenhagenclimatechallenge.org/index.php?option=com_content&view=article&id=52&Itemid=55

Theo Goodwin
May 14, 2011 8:26 am

“Brett says:
May 14, 2011 at 6:11 am
“Does anyone actually know what the equation used in CCSM3 actually is? I havn’t the time or wherewithal to dig through the code. ”
The equations are only a minor part of the story. What Warmista must explicate for the public, what they have a duty to explicate, are their judgements. They make model runs, see something in the results that they do not like, and they change the model. The big question is: what are their processes of judgement, of recording their judgements, and of reasoning about their judgements. What we non-Warmista suspect is that they get a result that they don’t like, they rejigger the model, and they try again. In other words, they don’t even have a record of their judgements (rejiggers) and there is no rational process in place for evaluating judgements (rejiggers). Warmista have a duty to explicate their judgements and permit criticisms of their judgements and their methods of evaluating their judgements. If they fail to satisfy that duty then they are not practicing as scientists. So far, they are not practicing as scientists.

May 14, 2011 8:32 am

Now instead of leaving those giant computers running idle whilst we compute the climate of 2050 using OpenOffice, how about using them to model the AGWer brain? And yes, I fully expect a linear equation as the output.

Murray
May 14, 2011 8:43 am

How does the model handle ozone? If in some manner, the ozone depletion drives at least a significant part of the post 1970 warming, then the black-box correlation with the model is not meaningful. If the ozone does nothing, why is it included as a forcing?

Theo Goodwin
May 14, 2011 8:44 am

Anthony Watts says:
May 14, 2011 at 8:08 am
“Climate logic is not linear, but circular.”
Warmista logic is circular. When your only goal is to deflect criticism then necessarily your logic will be circular, unless you take the next step and become outrightly deceptive.

Bowen
May 14, 2011 8:47 am

Ok . . . I’ll take a bite . . . . I like the hind sight approach . . . but what does (did) it say for 2000 – 2010 and then say just the next 5, 10, 15, and 20 yrs. . . . Just out of curiosity . . .

Ian W
May 14, 2011 8:48 am

Jack Simmons says:
May 14, 2011 at 3:59 am
I’m with Margaret.
What is the item taking the dive around 1970?

From the earlier figure 2 there is a large volcanic eruption at that time causing the drop in temperature in the original CCSM3 output. All Willis has done is replicate that output using far simpler modeling.

onion2 says:
May 14, 2011 at 4:59 am

The problem is that the climate has NOT responded linearly to forcing increases. So you may think that all chaotic systems of inter-reacting chaotic sub-systems are linear in their behavior in response to inputs – but you are wrong, as the real world climate has shown. Otherwise there would have been no drop in the 1940’s.
The problem I have is with a single ‘lambda’ λ; the real world does not appear to have a single sensitivity. It would appear that there is a tendency to move toward an attractor that is the ‘normal’ temperature for the Holocene. A ‘forcing’ that moves the system toward that attractor receives positive feedback, but if the same forcing continues taking the system past the attractor, the feedback to that forcing becomes negative to return the system back toward the attractor. The climatologists’ assumption that feedbacks are always positive is a major weakness in the AGW hypothesis.
For some reason the climate systems have another stronger attractor of the ice-age conditions and given the right inputs at the right time the system can move from interglacial to that strong attractor. Perhaps the Bond events are excursions that nearly move from the interglacial attractor to the ice-age strong attractor but some input is missing, is in the wrong phase or is the incorrect value.

Jim D
May 14, 2011 8:48 am

For CCSM3 the ensemble mean was used, which removed all the internal variability of the individual members, so no wonder the fit is good, because on average internal variability should cancel out, and this becomes an expression that the global climate response holds for each year. It is equivalent to decadally smoothing the measured surface temperature and displaying that as a function of measured forcing. It should match quite well, given the right forcing, because the real climate response is a simple function of forcing too. Use an individual run of CCSM3 and see what kind of fit that gives. That would be more like using real data.

Theo Goodwin
May 14, 2011 8:48 am

Philip Foster says:
May 14, 2011 at 8:24 am
“have a look at the second expert article regarding simple modelling:
http://www.copenhagenclimatechallenge.org/index.php?option=com_content&view=article&id=52&Itemid=55
Please don’t assign homework. This is a debate among consenting adults. If you have a point to make, state it. Then you can cite your references. For me, people who assign homework are trying to appear to have something to say when they have nothing to say.

Theo Goodwin
May 14, 2011 9:12 am

Oh, by the way, earlier I wrote:
“In other words, they don’t even have a record of their judgements (rejiggers) and there is no rational process in place for evaluating judgements (rejiggers). Warmista have a duty to explicate their judgements and permit criticisms of their judgements and their methods of evaluating their judgements.”
In major corporations and the Defense Department and all organizations that use large and complex models for actual decision making, it is standard operating procedure to document all the judgments made about the model. Most discussion and debate about models is about the judgements made during their use. It has to be this way. Otherwise, no one can determine whether a rational process is in place for use of the models. I spent a lot of time trying to explain this to youngish engineers. They wanted numbers only. Eventually, I would just send a name and the word “hopeless” to the VP of engineering.

Jim G
May 14, 2011 9:20 am

Very interesting, Anthony. I am not so sure about your chaos argument, as chaotic systems may still be somewhat estimable if not entirely predictable, though the probable complexity of the interactions involved argues in your favor here.
Point in fact, multiple regression analysis can say nothing about cause and effect, so irrespective of the actual formulae, cause is STILL ‘inferred”. Is temperature the effect or the cause? Plus the existence of multicollinearity amongst the independent variables is another pitfall of the models being used to predict climate and causes them to be of much less value. With enough gyrations, as you have proven, a line can be fit to any set of data.
I would be very interested in what some of the other posts have asked regarding what happens in your derived formula when CO2 is taken out or changed?
REPLY: Note the author

Theo Goodwin
May 14, 2011 9:35 am

Jim G says:
May 14, 2011 at 9:20 am
“Point in fact, multiple regression analysis can say nothing about cause and effect, so irrespective of the actual formulae, cause is STILL ‘inferred”. Is temperature the effect or the cause?”
There are no temperatures in nature. Nature does not have temperatures. Temperatures are neither cause nor effect. Temperature is a measure assigned by humans to natural phenomena. The causes and effects are in the natural phenomena but temperature is not. The point is that if you do not understand the causal relationships in the natural phenomena then your attempts to assign temperatures to them are futile.
I don’t mean to pick on you. I think most everyone shares this very common misunderstanding.

Physics Major
May 14, 2011 9:44 am

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.
-John von Neumann


Well done, Willis

netdr2
May 14, 2011 9:54 am

Very interesting and understandable.
Looking at the excellent graph for the output of the models [thanks for that Willis] all outputs except the one that freezes CO2 levels at the 2000 level predict about .4 ° C warming from 2000 to 2010.
That didn’t happen [according to GISS] did it?
http://data.giss.nasa.gov/gistemp/graphs/Fig.C.gif
The other surprise is that SRES B1 predicts an asymptotic approach to 1 ° C maximum warming. What is that input like? Did it stop further emissions after 2050 or so?

Jim D
May 14, 2011 9:59 am

My main point was, why use the ensemble average? Individual CCSM3 runs give you variability more like current climate, and don’t match the forcing so well. That was why I suggested also using real data, which also won’t match because of internal variability.
The forcing for the next century is certainly going to be dominated by CO2, so, yes, I think this is a good predictor plus or minus a few tenths of a degree if we knew the forcing’s development. Running your model forwards would show this in a very obvious way, especially if you also display the total forcing (or just assumed CO2 forcing) for the next century compared to the last, since CO2 forcing increases fivefold for the 21st century compared to the 20th.

May 14, 2011 10:14 am

Thanks much for your excellent work, Willis.
The latest version of the model (CCSM4) was recently announced in the abstract below, which contains a remarkable admission that the model exaggerates the global warming from 1850 – 2005 by 0.4C ( ~ 67% more than was observed).
http://journals.ametsoc.org/doi/abs/10.1175/2011JCLI4083.1
Your results seem to suggest that they could have spent a few of those billion dollars on tweaking the forcings a bit better before releasing a model with a clear upward bias.
Might be interesting to compare the forcings used and warming predicted by the two versions to see if climate science continues to devolve.

Doug Proctor
May 14, 2011 10:16 am

I noted that much of the projections can be done with eyeball, ruler and pencils. Sad to say, not all of the universe is so complex, or maybe a lot of the complexities don’t matter or cancel out. How simple are the Lorentz contraction equation and E=mc2!
Gee, a citizen might be able to understand, interpret and predict a few things after all. Puts the boots to God-as-politician/scientist/environmentalist, doesn’t it?

Theo Goodwin
May 14, 2011 10:46 am

Hockey Schtick says:
May 14, 2011 at 10:14 am
“The latest version of the model (CCSM4) was recently announced in the abstract below, which contains a remarkable admission that the model exaggerates the global warming from 1850 – 2005 by 0.4C ( ~ 67% more than was observed).”
Isn’t it true that the data used for the early history, say 1850 to 1900 or so, was collected by Phil Jones and remains under his practical control? Just asking. There was someone during Climategate who had documented this, but I can’t find that documentation now.

May 14, 2011 10:50 am

The CCSM3 model, like the other 30 or so climate models that simulate global warming, is hard wired to use radiative forcing. This is based on a circular set of empirical assumptions:
1) Long term averages of climate variables such as ‘surface temperature’ form some kind of ‘climate equilibrium’ that can be analyzed using perturbation theory.
2) Small changes in downward longwave IR (LWIR) flux can then change the ‘surface temperature’ and be calculated using black body equilibrium assumptions.
3) The meteorological surface air temperature (MSAT) measured in an enclosure at eye level above the ground can be substituted for the real surface temperature (the one ‘under my bare feet’).
4) The observed increase in the long term MSAT record of 1 C as indicated by the ‘hockey stick’ has been caused by a 100 ppm increase in atmospheric CO2 concentration.
5) The 100 ppm increase in CO2 has produced an increase in downward LWIR flux of 1.7 W.m-2, based on spectroscopic radiative transfer calculations (correct). This can be used as a ‘calibration factor’ to calculate the change in surface temperature from the change in LWIR flux from any greenhouse gas or aerosol (totally false).
6) Using the ‘calibration factor’ from CO2, an increase in LWIR flux of 1.7 W.m-2 has produced an increase in temperature of 1 C. Linear scaling then indicates that a 1 Wm-2 increase in LWIR flux from any greenhouse gas or other ‘forcing agent’ must produce an increase in ‘equilibrium surface temperature’ of 2/3 C.
7) Since an increase of 1.7 W.m-2 of flux comes from a change in ‘surface temperature’ of 0.31 C at an average surface temperature of 15 C (288 K), the other 0.69 C must come from ‘water vapor feedback’. (Radiative forcing is by definition correct and the Laws of Physics must therefore be modified to match the global warming religion)
Yes, the models are just using the hockey stick to predict more hockey stick.
The LWIR greenhouse gas forcings are fixed by the atmospheric concentrations of the greenhouse gases. The aerosol and other forcings and ‘stratospheric’ ozone are used as fudge factors to make the ‘hindcasting’ work.
All of this is buried under lots of fancy fluid dynamics code. However, the Navier-Stokes equation is so difficult to solve in climate models that it can be manipulated to give any desired result. (This is called ‘spin up’, to get the models on the ‘right’ track to give the ‘right’ answer.)
Willis is indeed correct that the radiative forcing is linear with the temperature change. It is made that way by Divine Decree from the Global Warming Gods.
Garbage in, Gospel Out.
When is NSF going to stop funding this climate astrology?

netdr
May 14, 2011 10:55 am

Jim D wrote
The forcing for the next century is certainly going to be dominated by CO2, so, yes, I think this is a good predictor plus or minus a few tenths of a degree if we knew the forcing’s development. Running your model forwards would show this in a very obvious way, especially if you also display the total forcing (or just assumed CO2 forcing) for the next century compared to the last, since CO2 forcing increases fivefold for the 21st century compared to the 20th.
*****************
So far it isn’t happening is it ?
http://www.cgd.ucar.edu/ccr/strandwg/CCSM3_AR4_Experiments.html
The models start their predictions in 2000. By 2010 they predicted there would be .4 ° C of warming, which obviously didn’t happen.
Why not? El Ninos and La Ninas balanced out over that period, so that can’t be the reason. Did the model fail to include some other variable, or is there almost no effect from increased CO2? The actual warming from 2000 to 2010 is so close to zero that it is not possible to measure.
So why do you think CO2 will cause more effect in the distant future ?

jorgekafkazar
May 14, 2011 11:04 am

onion2 says: “I don’t understand what this article proves…”
Correct at last, onoion!
“…After-all the concept of climate sensitivity and forcing implicitly mean global temperature is linearly related to forcings…”
The concept is most likely wrong, then.
“…In unrelated areas too – eg we can fit express the complex motions of planets and moons to the equation G(m1 + m2) / r2…”
Your statement is completely false and demonstrates total ignorance of the application of mathematical relationships to science. But since you think it is true, onoion, show us how to use that equation to derive the orbital elements for, say, Mars. Since you haven’t a clue as to what orbital elements are, I’ll start you on your way: a, e, i, node, omega, and t.
“…Some commenter’s [sic] seem to be confusing the derivation of such a simple equation to fit the behavior as a suitable replacement for the models themselves. As if scientists could have just foregone the millions spent on modeling and instead just used a simple equation….”
Looks like they could have if they’d been as clever as they think they are.

Michael J
May 14, 2011 11:05 am

The issue is, does the model have a much simpler functional equivalent, and the answer is clearly “Yes”.

I apologise if I misinterpreted you. I thought you were asserting that the CCSM3 model did in fact use the linear model (or something close to it). It appears I was incorrect.
I wanted to emphasise that just because a linear model will produce the same results for a particular set of inputs, it might not do so for other inputs. But …

PS – I doubt very much if it is “simply coincidental” that the same mathematical model fits both the GISSE and the CCSM3 data

In my haste, I didn’t consider that you had matched two different data sets, thus making it much more likely that you do have something approaching the same model.
I’m playing devil’s advocate here. Too much pro-agw science leaps to conclusions prematurely and I was trying to ensure that we don’t do the same thing. It looks like there’s nothing to worry about. 🙂
Again, thanks for an interesting post.

Jim D
May 14, 2011 11:28 am

netdr, the more typical value, and what would be expected from 3 degrees C per doubling is near 0.2 C per decade, and indeed the 2000’s was nearly 0.2 degrees warmer than the 90’s. I think the 2010’s will exceed the 2000’s by a similar amount, especially since that decade ended up with a soft record to beat by not warming much.

Quinn the Eskimo
May 14, 2011 11:37 am

Patrick Frank performed a somewhat similar analysis in 2008 on other models in his article, Climate of Belief, available here: http://www.skeptic.com/reading_room/a-climate-of-belief/

Stephen Singer
May 14, 2011 11:38 am

Margaret and Willis it appears to me to be a combination of a couple of small volcanoes just prior to and after 1970 along with the beginning of the large drop in ozone. You’ll note that after 1970 there were similar drops around a couple of large volcanoes as ozone continues to decline.
I have no plausible explanation for how the two small volcanoes and Ozone might be linked to the temp drop.

henrythethird
May 14, 2011 11:38 am

Bob Moss says:
May 14, 2011 at 5:51 am
If you take this formula and project into the future with a doubling of CO2 does it match the IPCC projections?
###################################
I imagine it might – but that’s with the assumption that none of the other “forcings” change much. A change in any other “input”, even with a doubling of CO2, might cause little change in output.
That was proven, because, as he said, “…the CCSM3 model does a very poor job of replicating the historical decline in temperatures from ~ 1945 to ~ 1975…”
Remember, he only used the GHG data, solar and the volcano data.
In order for any model to forecast the future, you’d have to know for certain when the next large volcanic eruption will occur.
Or if the sun comes up tomorrow…

netdr
May 14, 2011 12:48 pm

Jim D says:
May 14, 2011 at 11:28 am
netdr, the more typical value, and what would be expected from 3 degrees C per doubling is near 0.2 C per decade, and indeed the 2000′s was nearly 0.2 degrees warmer than the 90′s. I think the 2010′s will exceed the 2000′s by a similar amount, especially since that decade ended up with a soft record to beat by not warming much.
******************
I notice you avoided answering my observation that the models predict .4 ° C warming from 2000 to 2010. Do you agree or disagree ?
The actual warming is much less than that. Do you agree or disagree ?
The satellite data shows very slight warming from 2000 to 2010 [inclusive]
#Least squares trend line; slope = 0.00660975 per year
http://www.woodfortrees.org/data/uah/from:2000/to:2010/trend/plot/uah/from:2000/to:2010
That is .66 ° C in 100 years which is puny at best.
The warming effect of CO2 is obviously not a problem worthy of crow-barring our economy for.
The games being played with surface stations make their data unreliable, so I didn’t use it.

May 14, 2011 1:21 pm

Willis, very audacious! I doff my mortarboard to you.
You set out on an experiment to test the hypothesis, “Can I get close to CCSM3 output by simply combining the input forcings using the simplest linear time-decay function?” Most would have said, “You’re crazy.” Most others, who appreciate that the CCSM3 is linear in its parts, would have said, “Maybe you can get close, but there will still be a lot of residual.”
You just did it and knocked one out of the park! As a favorite professor of OR/MS, Gene Woolsey, says: “Do the dumb things first!”
Here is my take home:
Willis would be the first to say that predicting climate response to human and natural forcings is beyond his knowledge and power.
But Willis found a way to predict the output of a CCSM3 model! That IS apparently within his power!
So, If Willis can predict CCSM3, we are left with only 2 possible solution states:
A. Willis CAN predict tomorrow’s climate, or
B. CCSM3 CANNOT predict climate.
The Emperor’s threads are looking quite tattered.

EllisM
May 14, 2011 1:27 pm

Quick question – wouldn’t it be possible to replicate the observed 20th century temperatures with any set of curves, using the same methodology?

Shub Niggurath
May 14, 2011 1:30 pm

Willis
Your analysis is correct.
Sadly enough, your conclusions are wrong 🙂 🙂
It is precisely because the CCSM3 reduces the complex, interacting variables fed to it, into a simple output so elegantly, that we should completely believe in it.
The logic goes like this:
[1] Your simple equation shows that the climate is essentially a simple first order responder to CO2
[2] The manifestly complex computer model which represents the entire climate system, the output of which resembles your simple equation, is the very proof that the climate is essentially a simple first-order responder to CO2
Where do you think the climate activists get their certainty from? Why do you think we are branded ‘deniers’?
It is because we can’t get a simple thing as this.

Dave
May 14, 2011 1:35 pm

Here is some clear evidence of IPCC modeling bias!
Forecasting experts’ simple model leaves expensive climate models cold
Modeling problems: The naïve model approach versus the IPCC model forecasting procedures
The naïve model approach is confusing to non-forecasters who are aware that temperatures have always varied. Moreover, much has been made of the observation that the temperature series that the IPCC uses shows a broadly upward trend since 1850 and that this coincides with increasing industrialization and associated increases in manmade carbon dioxide gas emissions.
To test the naive model, we started with the actual global average temperature for the year 1850 and simulated making annual forecasts from one to 100 years after that date – i.e. for every year from 1851 to 1950. We then started with the actual 1851 temperature and made simulated forecasts for each of the next 100 years after that date – i.e. for every year from 1852 to 1951. This process was repeated over and over starting with the actual temperature in each subsequent year, up to 2007, and simulating forecasts for the years that followed (i.e. 100 years of forecasts for each series until after 1908 when the number of years in the temperature record started to diminish as we approached the present). This produced 10,750 annual temperature forecasts for all time horizons, one to 100 years, which we then compared with forecasts for the same periods from the IPCC forecasting procedures. It was the first time that the IPCC’s forecasting procedures had been subject to a large-scale test of the accuracy of their forecasts.
Over all the forecasts, the IPCC error was 7.7 times larger than the error from the naïve model.
http://www.copenhagenclimatechallenge.org/index.php?option=com_content&view=article&id=52&Itemid=55

May 14, 2011 1:54 pm

A Black Box of N-Rays
This little experiment Willis did on CCSM research reminds me of the story of the debunking of N-Rays by Robert Wood.
Wood suspected that N-rays were a delusion. To demonstrate such, he removed the prism from the N-ray detection device, unbeknownst to Blondlot or his assistant. Without the prism, the machine couldn’t work. Yet, when Blondlot’s assistant conducted the next experiment he found N-rays. Wood then tried to surreptitiously replace the prism but the assistant saw him and thought he was removing the prism. The next time he tried the experiment, the assistant swore he could not see any N-rays. But he should have, since the equipment was in full working order.
http://skepdic.com/blondlot.html
With this quote at the top of the page:
One of the first things I did with every graduate student who worked with me is to convince them how difficult it was to keep oneself from unconscious bias.* –Michael Witherell, head of Fermi National Accelerator Laboratory
another telling of the story: http://www.mikeepstein.com/path/nrays.html

JT
May 14, 2011 1:57 pm

@Roy Clark
“Using the ‘calibration factor’ from CO2, an increase in LWIR flux of 1.7 W.m-2 has produced an increase in temperature of 1 C. Linear scaling then indicates that a 1 Wm-2 increase in LWIR flux from any greenhouse gas or other ‘forcing agent’ must produce an increase in ‘equilibrium surface temperature’ of 2/3 C.”
I’m not sure what you mean by this. IF the calibration assumes that a 1C increase in the temperature of the surface was caused by a 1.7W/sq.m. increase in IR incident upon and absorbed by the surface, then a linear scaling would assert that a 1W/sq.m increase would produce a .588C increase in surface temperature, not a .667C increase (1/1.7=.588…).
“Since an increase of 1.7 W.m-2 of flux comes from a change in ‘surface temperature’ of 0.31 C at an average surface temperature of 15 C (288 K), the other 0.69 C must come from ‘water vapor feedback’.”
Then when you make the second quoted statement above I can’t tell if you are saying that a .31C increase in surface temperature caused a 1.7W/sq.m increase in surface IR emission; or whether you meant that a .31C increase in surface temperature was the effect of a 1.7W/sq.m increase in surface IR absorption. That is: you speak of the flux “coming from” the .31C temperature change, and then you refer to a .69C temperature change “coming from” ‘water vapor feedback’ which is a flux.
Finally, I don’t understand how you are relating the .69C difference between a .31C increase in surface temperature and a 1C increase in surface temperature to the figure of 1.7W/sq.m in terms of “amplification”.

Ian W
May 14, 2011 2:14 pm

Stephen Singer says:
May 14, 2011 at 11:38 am
Margaret and Willis it appears to me to be a combination of a couple of small volcanoes just prior to and after 1970 along with the beginning of the large drop in ozone. You’ll note that after 1970 there were similar drops around a couple of large volcanoes as ozone continues to decline.
I have no plausible explanation for how the two small volcanoes and Ozone might be linked to the temp drop.

Is it not more likely that the large drop in UV radiation from the Sun led to the drop in ozone as well as a drop in the altitude of the Top of the Atmosphere? It is the high frequency energy that heats the oceans not the low IR frequencies. So ozone drop is just an indicator not a causation.

D. J. Hawkins
May 14, 2011 2:42 pm

ferd berple says:
May 14, 2011 at 6:42 am
Have you run the model forward to see what it calculates for future temperatures past 2000 to see how well it is doing? Also to see how well it tracks the GISS and CCSM3 projections.
This is potentially a huge finding, because if your Excel spreadsheet tracks the model to .995 going forward, it suggests that the models can be replaced by a much less expensive equivalent.
It also suggests that for all their reported sophistication, the models are nothing of the sort. Their behavior can be reduced to an extremely simple equation that demonstrates the underlying assumptions inherent in the models.

You have to remember that the temperature calculation is done step-wise; it’s not a smooth function because the forcings are empirically adjusted. I’d be curious to see a graph of the actual value of the total forcing function over time and if the various forcings are simple time-variant or coupled. Except of course, the very evil CO2 😉 which we know is the root of all climate disaster and whose forcing must perforce rocket upward with time (/sarc). So, does a change in CO2 or water vapor also affect, say, the value of the forcing for methane? Or is everything coupled to CO2? Except for ozone, apparently.

mpaul
May 14, 2011 2:45 pm

Willis, this is a surprising result. The reason that it is surprising to me is that I know for a fact (first-hand knowledge) that the modelers are making substantial use of very large supercomputers. For example, NASA’s Columbia system is a 10,240-processor NUMA machine with 10 Terabytes of main memory – this is a big machine by any standard. I also know that a simulation run on this machine takes many hours – in other words, there’s a massive number of cycles expended solving a very large number of computational fluid dynamics equations. Even without knowing the structure of the model, one would assume that the relationship of the inputs to the output would be a very complex function. What’s surprising is that the output could be so closely emulated by a simple linear transformation consisting of a few variables. This is a stunning result. But I’m scratching my head to figure out how this could be. Any thoughts on this?

netdr
May 14, 2011 2:53 pm

What would be expected from unbiased scientists is models which are sometimes high and sometimes low, but all the models which have been published predict too much warming. Is that because of bias? I think so.
Dr Hansen’s 1988 model was fairly accurate until 2007 but as of 2011 it is completely off the mark.
http://cstpr.colorado.edu/prometheus/archives/hansenscenarios.png
The temperature as of 2011 is lower than scenario “C”, which assumed stringent CO2 curbs – even using the biased GISS numbers.
http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.txt
The CCSM 3.0 simulations were done in 2000 and already they are way too high.
The warming since 2000 has been at a rate of 0.06 °C per decade, which is puny. The prediction of 0.4 °C per decade is 666% too high. These predictions are clearly “not skillful”!

BLouis79
May 14, 2011 3:45 pm

Great work Willis. Could you please tell us then how the model manages to do forecasting when the data runs out? Clearly nearly all the core data are rising. Ozone looks like it turned around – where is it going now and will it cause cooling or warming??

EllisM
May 14, 2011 4:09 pm

“The final curve is a very specific shape, which cannot in general be matched by “any set of curves” to a correlation of 0.995 as I have done above.”
But wait – wouldn’t proper weighting, or normalization as you call it, and judicious choices for τ and λ result in the observed temp time series, given some random set of “input” curves?
Likewise, instead of using an admixture of the kinds of forcings (W/m2, volcanic aerosol mass, ppm, ppb, etc.), wouldn’t it be better to use the computed radiative values for each forcing? CCSM3 is over 7 years old now; certainly someone somewhere has those numbers.

Theo Goodwin
May 14, 2011 4:21 pm

Shub Niggurath says:
May 14, 2011 at 1:30 pm
As clever a description of circular reasoning as I have seen. Thank You, Sir.

rbateman
May 14, 2011 4:38 pm

netdr says:
May 14, 2011 at 2:53 pm
.06 C/decade and what is going on now seems to imply that the warming ended, but is now rolling over into cooling. What we need is the rate of cooling in C/decade.

May 14, 2011 4:49 pm

Not to belittle Willis’s excellent work (as always), but this is of course rather similar to what Dr. Jeff Glassman did last year over at Rocket Scientists Journal (‘RSJ’), where he elegantly teased out the (amplified and lagged) solar signal in the official IPCC AR4 HadCRUT3 global temperature record of the last ~160 years.
http://rocketscientistsjournal.com/2010/03/sgw.html
Little known is that Jeff is the retired head of Hughes Aerospace Science Division and a world expert in signal deconvolution algorithms to whom, in a nice irony, every cell phone user owes an unconscious debt.
RSJ – the quietest sceptical blog in the world (and, again ironically, one of the very best).
Nothing anyone has said since (including most especially that self-appointed solar dogmatist Leif Svalgaard) has invalidated Jeff’s conclusion.
Almost every month now we are seeing mainstream literature papers which refine our understanding of the manifold ways in which solar forcing is amplified e.g.
http://www.mps.mpg.de/projects/sun-climate/papers/uvmm-2col.pdf
http://www.agu.org/pubs/crossref/2011/2010JA016220.shtml

Reply to  Ecoeng
May 14, 2011 4:53 pm

Tamino decided to call it all “fake”, yeah that’s the ticket.
http://tamino.wordpress.com/2011/05/14/fake-forcing/

mike restin
May 14, 2011 5:22 pm

EllisM
I got a better idea.
Why don’t they show us how they got there….or have they?

May 14, 2011 5:24 pm

Hi Willis,
The formula you give is incorrect (typo?) and should read
T(n+1) = T(n) + λ ∆F(n+1) / τ + ∆T(n) exp( -1 / τ )
where ∆F(n+1) = F(n+1) – F(n)
Nice post though. Thanks.
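For readers wondering how much the two versions differ, here is a minimal R sketch comparing the recursion as printed in the post with the variant suggested in the comment above. The values of lambda, tau and the forcing step are illustrative placeholders, not the fitted parameters from the post, and the sketch takes no position on which form is the intended one:

# Response of each recursion to a single 1 W/m^2 step in forcing
lambda  <- 0.3                         # illustrative sensitivity, C per W/m^2
tau     <- 3                           # illustrative lag time constant, years
years   <- 50
forcing <- c(0, rep(1, years - 1))     # step in forcing at year 2
run <- function(denom) {
  temp <- numeric(years)
  for (n in 2:years) {
    dF <- forcing[n] - forcing[n - 1]
    dT <- if (n == 2) 0 else temp[n - 1] - temp[n - 2]
    temp[n] <- temp[n - 1] + lambda * dF / denom + dT * exp(-1 / tau)
  }
  temp
}
as_posted    <- run(1 - exp(-1 / tau))   # denominator as printed in the post
as_suggested <- run(tau)                 # denominator as suggested in the comment
c(posted = tail(as_posted, 1), suggested = tail(as_suggested, 1),
  lambda_times_dF = lambda * 1)          # the nominal equilibrium response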

May 14, 2011 5:37 pm

“Me, I find the idea of a linear connection between inputs and output in a complex, multiply interconnected, chaotic system like the climate to be a risible fantasy. It is not true of any other complex system that I know of. Why would climate be so simply and mechanistically predictable when other comparable systems are not?”
#1. The output you are fitting is a global average. That means a large amount of the regional variability is suppressed. If you look at the work that real scientists do when they try to “model” the “model”, they have to build much more complex emulations than you have done. In those versions they build models to capture the regional variations. This is known as predicting the HIGH-DIMENSIONAL output of a GCM. In short, it is not at all surprising that the global average (low-dimensional output) of a complex system is this simple. You’re surprised; I’m not, and neither are the people who have tried to predict the high-dimensional output.
#2. The output you are fitting is itself the average of individual runs. Again, you are fitting averages.
There is nothing surprising in any of this. We’ve known it for a while:
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-forced-response/
CE Buck does some very nice work (she presented a great PPT at AGU):
http://www.noc.soton.ac.uk/pucm/

May 14, 2011 5:48 pm

here Willis.
One way its done with 103 runs of a GCM.
http://www.newton.ac.uk/programmes/CLP/seminars/120711001.html

May 14, 2011 6:23 pm

steven mosher says:
“#1. The output you are fitting is a global average. That means a large amount of the regional variability is suppressed. If you looked at the work that real scientists do…”
‘Real’ scientists?

netdr
May 14, 2011 6:33 pm

rbateman says:
May 14, 2011 at 4:38 pm
netdr says:
May 14, 2011 at 2:53 pm
.06 C/decade and what is going on now seems to imply that the warming ended, but is now rolling over into cooling. What we need is the rate of cooling in C/decade.
**********
I didn’t claim cooling yet, but we are in the negative phase of the PDO, so wait a few years. In 5 years or much less there will be measurable cooling. [2011 is very cool so far.]
Models which include ocean currents show 20 to 30 years of slight cooling from now, which brings the overall rate of warming down to 1/2 °C per century.
This study seems to me to have it pretty close to correct.
http://people.iarc.uaf.edu/~sakasofu/pdf/two_natural_components_recent_climate_change.pdf
Paraphrasing, it says the temperature is a 60-year sine wave caused by ocean currents, plus 1/2 °C per century from coming out of the Little Ice Age.

Shub Niggurath
May 14, 2011 7:10 pm

As opposed to the fake scientists like Willis, he means.

Brian H
May 14, 2011 7:10 pm

Jim D’s repeated suggestions that it’s all the result of averaging GCMs are ludicrous. The reason, Jim, that they have to do that is that the GCMs are all over the map. Trying to replicate any one of them is pointless and futile.
What Willis has demonstrated is that all the teraflops are just Moon-walking and running-on-the-spot, and that the only actual functional math in them is brain-dead-simple/stupid.

May 14, 2011 7:32 pm

In my view, “onion2” on May 14, 2011 at 4:59 am has it right. Willis has stumbled upon a rather well-known and not very interesting computer science application. This is known in some circles as a “model of a model.”
I (and many colleagues) have also created simple one-equation relationships of very complex computer modeling results for oil refineries, petrochemical plants, and chemical plants. We did not generally obtain a linear result, but obtained a quadratic result (meaning one of the terms was squared). Many companies do this; the incentive being it is far faster to solve the problem using the simplified “model of a model.” Many times, the detailed, complex model may require hours or days to solve. The “model of a model” may solve in a few seconds, or much faster. The results are sufficiently accurate for the particular application.
For those who might be interested, one application of a model-of-a-model was in advanced process control of oil refinery process units. This control application required the simplified model-of-a-model to solve within a few seconds, and was solved once per hour. The simplified model was fast and robust – it solved every time – while the complex model would solve (sometimes) and required an hour or two.
Another application was to determine the energy consumption of a large, complex refinery and chemical processing plant, plus the utility plant that provided steam, electric power, and compressed air. The individual models that were used to provide the basis for the model-of-a-model required weeks to set up and solve. The model-of-a-model solved in about one second, and was run daily.
This is not a big deal in and of itself.
The greater question is, how much of the last century’s warming was due to natural cycles and natural events (volcanic eruptions), and how much, if any, was due to man’s activities? Man’s activities include (at a minimum) the release into the atmosphere of various gases and particles, changing the land surface, emitting massive quantities of heat from burning fossil fuels, and producing nuclear power.

Theo Goodwin
May 14, 2011 8:00 pm

Roger Sowell says:
May 14, 2011 at 7:32 pm
‘In my view, “onion2″ on May 14, 2011 at 4:59 am has it right. Willis has stumbled upon a rather well-known and not very interesting computer science application. This is known in some circles as a “model of a model.”’
You don’t read the forum before you post, do you? If you did I would not need to reply to you. Your argument is circular. You assume that there is some explicated model of climate change that the Warmista have presented to the public. No such thing exists. Having assumed the existence of the nonexistent Warmista explicated model, you then argue that what Willis has done is a trivial exercise in creating a simple model of another model. You hope, thereby, to have your readers conclude that there is a Warmista model explicated for the public. But there is none. As Willis explained above and as many of us have explained since, Willis’ model replaces a black box. Warmista have offered nothing but a black box.
Warmista have no physical hypotheses which could explain the warming that only they claim to exist and they have only a black box where they claim there is a model. Warmista have nothing. And you know it. I would say that you should be ashamed of yourself, but I can readily see that you are incapable of shame.

Theo Goodwin
May 14, 2011 8:05 pm

steven mosher says:
May 14, 2011 at 5:48 pm
“here Willis.
One way its done with 103 runs of a GCM.
http://www.newton.ac.uk/programmes/CLP/seminars/120711001.html”
It what?
Explicate the model for the general public.
Respond to the claim that Warmista have no physical hypotheses which can explain forcings.
You cannot do either. You know you cannot do either. The fact that you won’t address the matter puts you in the same league as Mann, Briffa, and the rest of the lot.
And, for God’s sake, son, don’t assign homework. If you are not ready to state your arguments then do not post. This is a forum for debate among consenting adults.

EJ
May 14, 2011 8:32 pm

Thanks Willis!

EJ
May 14, 2011 8:32 pm

This is what bugged me from the beginning.

May 14, 2011 8:40 pm

@Theo Goodwin on May 14, 2011 at 8:00 pm
“You don’t read the forum before you post, do you?”
Yes, I did. Completely. And with understanding.
” If you did I would not need to reply to you. Your argument is circular. “
No, my argument is sound. The fact that you don’t understand it is your problem, not mine. But, in the interest of furthering your understanding, I’ll explain it in some greater detail.
“You assume that there is some explicated model of climate change that the Warmista have presented to the public. No such thing exists. “
The solid black line in the post above, Figure 2, CCSM3 is the model RESULT that I refer to. That was produced from some workers in the climate science field, presumably from a complex General Circulation Model that was used in hind-cast mode, and after being given appropriate factors to tune the model to approximate the measured temperature over time. In my field, as explained above in my comment, the result of complex and tedious simulations and modeling also produced a curve (sometimes several curves or several segments of curves). That resulting curve is what was then fitted with parameters to determine a simplified equation or equations that would effectively substitute for the complex model.
“Having assumed the existence of the nonexistent Warmista explicated model, you then argue that what Willis has done is a trivial exercise in creating a simple model of another model. You hope, thereby, to have your readers conclude that there is a Warmista model explicated for the public. But there is none. As Willis explained above and as many of us have explained since, Willis’ model replaces a black box. Warmista have offered nothing but a black box.”
See above statement.
“Warmista have no physical hypotheses which could explain the warming that only they claim to exist and they have only a black box where they claim there is a model. Warmista have nothing. And you know it. “
Actually, the AGW proponents have an oft-stated hypothesis, or premise, that has no valid proof as of yet. Their premise is that CO2 and other “greenhouse” gases (see Willis’ post above for some of these, or see the Kyoto Protocol for a list of six such gases, or see California’s AB 32 for a much more comprehensive list) slightly inhibit at least a portion of the heat leaving the Earth’s surface via infra-red radiation, due to their capacity to absorb radiative energy in particular bandwidths. The Earth does NOT heat up as a result; rather, it is supposed to cool off slightly less, especially on winter nights and with a more pronounced impact at the poles than at the equator. The net effect, they say, is to have warmer temperatures. Further, their premise holds that the warmer surface (that is, not as cool as it otherwise would be) leads to more water evaporation; this they term a positive feedback. The increased water vapor, itself a “greenhouse” gas, leads to yet less cooling at night.
I, on the other hand, have no problem stating that climate warming has occurred since at least the 1750 to 1800 time frame, as there is ample evidence of extreme cold throughout Western Europe and in North America. As I’ve written before, the only question is the cause of the warming, not the existence of the warming. Actually, there is another question: exactly how much warming has occurred? We really don’t know that answer, since the data set is corrupted, and adjusted, to say the least.
“I would say that you should be ashamed of yourself, but I can readily see that you are incapable of shame.”
Well, that’s an ad hominem attack, and I’ll let you speak for yourself. I will note, in passing, that one who resorts to such name-calling also admits he’s lost the debate due to a complete shortage of facts and logical arguments leading to valid conclusions. And having done so in such a public medium as WUWT!
My points above stand, as written.

Theo Goodwin
May 14, 2011 8:44 pm

Willis has struck the Warmista in the very heart of their credibility. Willis has stated very clearly something that we all know, namely, that the so-called model(s) used by the Warmista are nothing but black boxes. Warmista make runs. If they don’t like the results of their runs, they rejigger the model and run again. All to get to the predetermined run that shows just the right amount of global warming.
If you want some credibility, Warmista, Willis has just given you the best opportunity in the world. Come explicate your model here. Lay out the rational procedures that you use in modifying models after runs. Show us your record of decisions about modifications that you made as you built a history of runs and show us how those decisions are leading to some rational progress regarding the internal structure of the model.
Commenters on this site will do you the favor of analyzing your model, your procedures for recording modifications, your procedures for making modifications, the whole nine yards. That is what you want, isn’t it? You are scientists, aren’t you.
Mosher has dropped a very interesting hint. He has said that ENSO is treated as statistical noise in the models. Warmista could start there. Explain that point.
I promise not to use the word ‘unfalsifiable’.

jorgekafkazar
May 14, 2011 9:49 pm

I once was assigned during its last month to a project that involved a complex model using the most powerful computer then available. Anomalous output in some runs was unexplained until I did some hand calculations and discovered (1) a subtly unwarranted assumption had been embedded in the model, and (2) I could replicate the output within 5% by taking a simple average of six inputs and dividing by another. So much complexity had been built into the program that no one up to then had understood the implications of the assumption. Most of the intermediate numbers (our primitive version of “EVER-SO-HIGH DIMENSIONAL” output) simply got shuffled around for n iterations and then dropped out when the target result was calculated. A fortune had been spent on computer time; all wasted in detailed, irrelevant, “regional” printouts.

May 14, 2011 11:34 pm

Actually, had Willis come out with a quadratic, we would be having a very different conversation. It’s the linearity that kills the GCM argument.

Konrad
May 15, 2011 1:08 am

I do not find Willis being able to replace the workings of the CCSM3 model with a one line equation too surprising given what he discovered about the GISSE model. He has again highlighted the critical problem with these models, in that they effectively model a given temperature point by applying forcings to the previous temperature point. There are too many multi year changes to polar ice, ocean heat content and other real world conditions for this type of modeling to be accurate.
What I do find surprising is the forcings Willis shows in Figure 2. The plots shown for ozone, sulphur, CO2 and CFCs all seem unrealistic. Sulphur and CFCs show no variation, despite both being known to be produced by volcanoes. Ozone levels depend on UV radiation, which varies significantly over solar cycles, yet the only feature in the ozone plot appears to be an unproven response to the equally unrealistic CFC plots. The CO2 plot also seems dodgy, being at odds with both plant stomata proxies and historical direct chemical measurements.
Given that this CCSM3 model appears to be only one year deep and to be using atmospheric data from a different planet, I would not hold out too much hope for CCSM4.

May 15, 2011 1:08 am

Willis, thank you for the thoughtful reply. I very much appreciate the opportunity to exchange views. You have some very interesting posts on WUWT, and I have learned much from you. Thank you for your writings, and for the time you spend to explain things clearly and in sufficient detail. However, and I mean this in the most civilized way, I don’t always agree with what you write. I believe that this is one of those times, but perhaps it is because I have not made my points sufficiently clear. I believe I have understood your point. Let me explain. You wrote,
“The claim is made that the climate models, because of their unbelievable complexity, can successfully reproduce these chaotic phenomena. What I have shown is that they are not chaotic or complex in their essence—they are purely and mechanistically linear.”
My earlier work, in the 1980s and 1990s, was with analogous systems to what you accurately describe in the italicized quote as “these chaotic phenomena.” The systems we modeled are highly non-linear, have multiple simultaneous equations for chemical reaction mechanisms and reaction paths, have many degrees of freedom, a multitude of variables, several simultaneous or sometimes sequential constraints, and in many cases depend entirely on the choice of boundary conditions or initiation values. Plus, their behavior changes with time. Their behavior tomorrow is based at least in part on what happened today. These systems are very much analogous to the climate models, in that thermodynamics are invoked, equations of state are used many thousands of times, boundaries are computed then iterations are made to have those boundary values match within a certain tolerance, and there are also other similarities. There are also several competing versions of models that are used to simulate the processes. In summary, we are each describing very similar, complex, non-linear, computation-intensive simulations.
My point is that these complex simulations can and do produce results that can be graphed in two dimensions. That graph for GCMs is an x-y graph of temperature vs time, where temperature is the global average mean temperature. Your Figure 2 above shows that as the solid black line. That temperature vs time relationship is the output from running the GCM in hind-cast mode. Similarly, the complex models I worked with in oil refineries would produce an x-y graph for some result as a function of something else.
From there, both processes are about the same. As I understand your post above, you selected the few input criteria with the most likely impacts, and found values for those input criteria that would duplicate the solid black line as nearly as possible. You were successful, having achieved a correlation coefficient of 0.995. That is outstanding success.
We also did the same, by determining an equation that would duplicate our x-y graph as closely as possible.
You point out that the models “are not chaotic or complex in their essence—they are purely and mechanistically linear.” As to the chaotic aspect, I suspect that is correct. The modeling of chaotic systems, at least to my understanding of the state of that art, is not very successful at this time. That is, even if we can successfully model a chaotic system for a short time frame, any predictive value is worthless because the model fails quickly the farther into the future one predicts. Even if the model solves or reaches a solution, the divergence from reality is simply too great for the prediction to have any value.
As to the complexity aspect, these GCMs are somewhat complex, but really are not all that complex compared to some models. For one, they don’t have multiple chemical reactions occurring, each with its own set of kinetic rate equations; they don’t have catalysts that change in relative activity due to a host of variables including the passage of time; they don’t have highly variable energy input over time, because the sun is one big constant source of energy within a very small range of variability; they don’t even try to model things that are known to affect the system such as clouds, and oceanic basin thermal oscillations, and a few others. To my knowledge, they do not even begin to recognize or compute the effect of solar cycles and sunspot number, although the empirical evidence shows low sunspot number equals cold climate, and high sunspot number equals warm climate.
Leaving those known defects aside for the moment, what they do attempt to do is solve multiple simultaneous equations that are highly non-linear, for a large number of gridded cells that represent spatial regions on the globe.
The output, what the modelers ask for, is the global average mean temperature as a function of time. Whether the output is valid or not is determined by how well the model’s output matches measured values. As you correctly point out, the known temperature decline from around 1940 to 1975 is not well represented. There is a bust in the model, as we would say.
Finally, I hope that I have explained my point more completely and clearly. It is certainly no surprise to me, or to anybody in the petroleum industry that practices in economics and planning, that one can achieve a relatively simple equation that reproduces the output from a complex non-linear model. Every major oil company and most independent oil companies do this routinely, for dozens of applications, and have done so for decades.
I also know that the simple equation has serious limitations. One cannot extrapolate very far outside the known parameters. When a new variable begins to impact the real system, the entire process must be repeated to account for that variable. It appears to me that the absence of sunspot number as an input variable, and the failure to account for ocean basin thermal oscillations will soon show the serious deficiencies in their models.

Jim D
May 15, 2011 1:22 am

netdr, the graph you show from CCSM3 shows 0.4 C per decade initially, but IPCC with more models shows the average is much lower. In fact, the consensus is more like 0.2 which is consistent with 3 C per doubling, and consistent with the decadal averages between the 00’s and the 90’s.

May 15, 2011 1:25 am

Roger Sowell,
I designed auto-tuning servos for chemical plants. The algorithm used was practically trivial. i.e.
1. Stabilize the process at the running value
2. Turn your actuator up to 100%
3. Measure the delay until the plant starts responding
4. Measure the slope of the rise after the plant starts responding
The delay and the slope are all you need to tune the servo (using various proportionality constants which determine the “aggressiveness” of the servo)
Of course there are quite a few details I have left out – but that is the gist of it.
The beauty is that because it is a feedback system that is constantly adjusting the plant – you only need to be close. The above model can be applied to better than 90% of the control situations. i.e. a first order approximation is good enough.
Once your servo is servoing you can do continuous checks on the system (sophisticated systems can use the inherent noise of a system for this) and update the control parameters accordingly.
Which makes me think – could climate sensitivity be extracted from the noise of the data? I suppose it depends in part on the quality of the measurement.
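As a rough illustration of the step-test idea described above (not the poster's actual algorithm), here is a short R sketch using the classic Ziegler–Nichols open-loop rule as one example of the “proportionality constants” mentioned; the numbers in the example call are made up:

# Step-test tuning from the measured dead time and slope of the response
zn_pi_tune <- function(dead_time, slope, step_size = 1) {
  # dead_time: delay before the plant starts responding (seconds)
  # slope: rate of rise of the measured variable after it responds
  #        (measurement units per second, for the given actuator step)
  Kp <- 0.9 * step_size / (slope * dead_time)   # proportional gain (PI rule)
  Ti <- dead_time / 0.3                         # integral (reset) time
  list(Kp = Kp, Ti = Ti)
}
# Example: 100% actuator step, 20 s delay, then a rise of 0.05 units/s
zn_pi_tune(dead_time = 20, slope = 0.05, step_size = 1)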

Jim D
May 15, 2011 1:26 am

Brian H, the use of the ensemble average is to remove natural internal variability. Once you remove that, all that should be left is climate forcing. What do you think should be left after removing natural internal variability? This is what Willis shows, and it should be obvious.
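A toy R illustration of the ensemble-averaging point, with made-up numbers: each “run” is the same forced signal plus independent internal variability, and averaging the runs shrinks the internal noise while leaving the forced signal:

# Ensemble mean of noisy runs recovers the common forced signal
set.seed(1)
years  <- 1900:2000
forced <- 0.006 * (years - 1900)                 # a smooth forced trend, in C
runs   <- replicate(20, forced + rnorm(length(years), sd = 0.15))
ens    <- rowMeans(runs)                         # 20-run ensemble mean
c(sd_single_run_residual = sd(runs[, 1] - forced),
  sd_ensemble_residual   = sd(ens - forced))     # noise shrinks roughly by sqrt(20)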

May 15, 2011 1:29 am

Mosher says “In short, it is not at all surprising that the global average (low dimensional output) of a complex system is this simple.”
Just because the models are aggregated and averaged doesn’t excuse the linearity. The real world follows one precise climatic path and *I* would be surprised if that was one that could be described as linear.

EllisM
May 15, 2011 1:48 am

Willis, if it was possible to assemble all the relevant observations (so-called “greenhouse” gases, the various kinds of aerosols, solar values, etc.) and use the correct temperature record (UAH, please), could your model replicate UAH?
That would be interesting!

Varco
May 15, 2011 3:03 am

Out of interest I used the excellent Oakdale ‘Datafit 9’ to do a non-linear regression of columns B to K of the forcing spreadsheet supplied (no standardization used) vs the temp output from the same spreadsheet and got an R^2 of 0.95. What does it mean? ‘Dunno’, but it took less time than a cup of coffee to create 🙂
Willis, keep up the great work!
Equation ID: a*x1+b*x2+c*x3+d*x4+e*x5+f*x6+g*x7+h*x8+i*x9+j*x10+k
Number of observations = 131
Number of missing observations = 0
Solver type: Nonlinear
Nonlinear iteration limit = 250
Diverging nonlinear iteration limit =10
Number of nonlinear iterations performed = 9
Residual tolerance = 0.0000000001
Sum of Residuals = -1.63226376859171E-12
Average Residual = -1.24600287678757E-14
Residual Sum of Squares (Absolute) = 0.434578735934514
Residual Sum of Squares (Relative) = 0.434578735934514
Standard Error of the Estimate = 6.01788124352828E-02
Coefficient of Multiple Determination (R^2) = 0.9516724117
Proportion of Variance Explained = 95.16724117%
Adjusted coefficient of multiple determination (Ra^2) = 0.9476451127
Durbin-Watson statistic = 0.711469121227893
Regression Variable Results
Variable Value Standard Error t-ratio Prob(t)
a -1.89744278704406E-02 1.81523787513384E-03 -10.45286027 0.0
b 0.118418933483967 1.72891731205921E-02 6.84931157 0.0
c 116423.721682155 159924.971877965 0.7279896336 0.46804
d 126041339641.26 49438332950.4219 2.549465812 0.01205
e 3.46922240210789E-02 0.013893017447072 2.497097852 0.01388
f 8.14977251236907E-04 8.81368257363809E-04 0.9246727964 0.35699
g -0.111026191793535 2.89799501182426E-02 -3.831138126 0.0002
h -1.48009700962797E-02 4.04691874358841E-03 -3.657343039 0.00038
i 9.6706315535647E-03 2.18369061633845E-03 4.428572198 0.00002
j -1.05146889747344 0.886025154523761 -1.186725785 0.23768
k -144.236832673365 24.3509286610933 -5.923257987 0.0
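For anyone who wants to repeat this without Datafit, a minimal R sketch of the same kind of straight multiple regression follows; the file name and column name below are placeholders, not the actual spreadsheet headers:

# Placeholder file/column names -- substitute the real ones from the
# forcing spreadsheet linked in the post after exporting it to CSV.
forcings <- read.csv("ccsm3_forcings.csv")
fit <- lm(temperature ~ ., data = forcings)   # temperature vs all forcing columns
summary(fit)$r.squared                        # compare with Datafit's R^2 of 0.95
plot(forcings$temperature, fitted(fit))       # quick visual check of the fit
abline(0, 1)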

Shub Niggurath
May 15, 2011 3:29 am

Mosher
The need to create and maintain complex computer models of the climate system, the whole justification for the exercise, is the unstated assumption or claim that the model is a good replication of a climate system because it is representationally irreducible.
[1] Under this scheme of thinking, further refinement is only possible by introduction of more parameters and equations (which supposedly mirror further facets of reality), or reductions in confidence intervals of prior parameters.
Consider the following: [I am using the “•” symbol to mean interaction – as in, it could be +, -, *, causes lagged increase, causes exponential increase, causes slow decay, etc.]
A climate model M can be represented by:
M(output) = X(a)•X(b)•X(c)•X(d)…X(n)
where X(a), X(b) and so on, are the participating contributory factors that result in M(output).
Now, the justification for using this model rests on two assumptions:
a) there are no large unknown factors in our list of X(a) to X(n)
b) the sum complement of interactions possible between X(a) to X(n) cannot be broken down, or reduced to simpler modes for all possible values of X(a) to X(n). In other words, confronting or tackling the high dimensionality must be unavoidable.
As a rough example, there must be a real reason to say (if x=2, y=3 and z=1):
a= ((x+y)*y)-(x^2+z)-(((y/x)*2)+z*2)
if,
a=x+y
does the same job.
So Willis’ analysis does not question the specific model alone, but the justification of the exercise of modelling. If Willis’s equation is correct, then the climate system is indeed a simple lagged linear response to forcings, which implies there is absolutely no need for high-end computational modelling. If on the other hand, the climate system is a complex high dimensional one, one that is barely represented with the greatest of difficulties with the computational power that is required to handle the climate model, then it should not be able to be broken down to be represented by a simple linear equation.
Secondly, climate models have long been criticized for being nothing more than linear regression exercises and therefore valid only as heuristics, with a bristling response from the climate modelers and scientists. From the above, it does not look like the latest models are in any way more representational than just being good heuristic guides.
For examples of ‘modeling blindness’ see: Lahsen M. Seductive Simulations? Uncertainty Distribution Around Climate Models. Social Studies of Science. December 2005 35: 895-922 (google it, paper available freely from Scholar)
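A trivial R check of the x, y, z example above, which also shows why the equivalence only holds at the particular values chosen:

# At (2, 3, 1) the complicated expression and x + y agree; away from those
# values they do not -- which is the point about needing a real reason to
# keep the complex form rather than the reduced one.
complicated <- function(x, y, z) ((x + y) * y) - (x^2 + z) - (((y / x) * 2) + z * 2)
simple      <- function(x, y, z) x + y
c(complicated(2, 3, 1), simple(2, 3, 1))   # both 5
c(complicated(4, 2, 1), simple(4, 2, 1))   # -8 vs 6: no longer equivalent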

manacker
May 15, 2011 4:59 am

Willis
You point out that the models are simply set up to respond to certain previously factored-in forcings.
When they fail to produce the same trends as physically observed (such as 1945-1975), the rationalization of the modeler is:
“Well, my model was correct, except for… “(add in any unforeseen factor that made the model invalid). [See Nassim Taleb’s The Black Swan]
The problem in real life (and real climate) is that it is dominated by “except fors”.
That’s why the models are worthless for predicting the future.
Max

Paul
May 15, 2011 6:15 am

Willis, you may enjoy this. Email exchanges between outside scientists and RealClimate scientists are always fun to read.
Rancourt (a physicist) recently wrote a paper on radiation physics, the greenhouse effect, and the Earth’s radiation balance, arriving at the conclusion that “the predicted effect of CO2 is two orders of magnitude smaller than the effects of other parameters.” He sent the paper to the RealClimate folks for input. He now presents the email exchanges.
He introduces the exchanges as follows:
How do scientists operate? How do they attempt to influence each other? How do they protect their intellectual interests? Do they use intimidation? Paternalism? Do they mob challenges from outside their chosen field?
Consider this example from the area of climate science, involving some of the top establishment scientists in the field…
Peer criticism — Revised version of Rancourt radiation physics paper.
1. Rancourt writes original version of article, HERE.
2. Asks for and receives peer criticism, HERE.
3. Rancourt writes significantly revised version of article, HERE.
4. Asks for and receives further peer criticism about revised version, PRESENT POST.
5. It appears that Rancourt’s revised paper is correct: The predicted effect of CO2 is two orders of magnitude smaller than the effects of other parameters.
Following the posting of THIS significantly revised version of Denis Rancourt’s paper about Earth’s radiation balance, Rancourt asked the climate scientists at RealClimate for follow-up criticism — resulting in this email exchange:
http://climateguy.blogspot.com/2011/05/peer-criticism-revised-version-of.html

netdr
May 15, 2011 6:42 am

Jim D says:
May 15, 2011 at 1:22 am
netdr, the graph you show from CCSM3 shows 0.4 C per decade initially, but IPCC with more models shows the average is much lower. In fact, the consensus is more like 0.2 which is consistent with 3 C per doubling, and consistent with the decadal averages between the 00′s and the 90′s.
***********************
The estimated warming gets less and less as more time passes.
In 2006 or so, the warming by 2100 was estimated to be +8 °C, then +3, then +2; the supposed warming goes down each year. Why is that? The 2 °C number is just on the cusp of being a slight problem for some places and a benefit for others.
I would appreciate a link to the other models’ output [with lower warming] which you claim to have seen. Most models, including this one, show a decreasing amount of warming as time goes on because of the logarithmic nature of CO2 warming.
The point I was making is that this model is seriously wrong so far and Dr Hansen’s 1988 model was just as wrong. The correct ones [if there are any] are so not scary that they are kept in a locked room and not shown to the press.

Craig Loehle
May 15, 2011 7:04 am

Willis’ analysis shows that the model he is emulating (not the real climate) does not have any “tipping points”, because such cannot exist in a linear system. The lack of multidecadal oscillations also suggests the GCM is missing something.

Bill Illis
May 15, 2011 7:19 am

A very simple formula can simulate the climate model’s output.
I downloaded the A1B monthly forecast going out to the year 2100 from the 23 climate models used in the IPCC’s AR4 report.
Other than the fact that the ensemble mean projects a large increase in the annual seasonal cycle (the Earth’s average temperature throughout the year, which peaks in July),
… it is primarily a function of CO2. The projections do not spontaneously pop out of a simulation of the climate; the result is programmed in. Going out to 2100, the A1B climate simulation result is very closely related to the projected A1B CO2 levels.
http://imageshack.us/m/831/1395/ipccar4a1bmonthlypredic.png
For March 2011, the ensemble mean projection was about +0.58 °C while HadCRUT3 was at +0.318 °C.

Theo Goodwin
May 15, 2011 7:32 am

Shub Niggurath says:
May 15, 2011 at 3:29 am
Mosher
“The need to create and maintain complex computer models of the climate system, the whole justification for the exercise, is the unstated assumption or claim that the model is a good replication of a climate system because it is representationally irreducible.”
Brilliant Post. A new level of sophistication in discussions of models. Keep up the good work.

Theo Goodwin
May 15, 2011 9:36 am

I really do not want to see Willis’ post hijacked. For that reason, I must insist that there is a Bottom Line here. Willis has shown that Warmista models are computationally equivalent to a simple linear transformation on inputs. That is Willis’ contribution to the debate. Those who wish to defend the Warmista must now present some information about their model(s) which show that they really are complicated and refined in their representation of forces that determine climate but all that reduces to a simple linear transformation. No one commenting on this site has attempted this. For anyone wanting a formal statement of this point, look at Shub’s post above. He spells out formally, in the simplest terms, what must be accomplished. Warmista, the ball is in your court.

May 15, 2011 10:35 am

Forgive me, but..
1 – Comparing your output to theirs using a digitization of one of their graphs is absurd. That graphical representation incorporated rounding effects, so did the print/image, and so did your digitizer. And no, averages accumulate error; it does not magically cancel out.
2 – I looked carefully at the innards of one of the major models about a year ago. There was 1960s Fortran IV in there – along with data structures from that same era and linearity assumptions imposed by the limitations of the 1401/360 machines they were writing for.
Very obviously what had happened was that people moved from job to job carrying card decks (boxes and later tapes) encoding their relative competitive advantage in climate modeling – i.e. old models get perpetuated.
Then, of course, the results don’t match – so in goes a new bit adjusting something to make the numbers come out right. And then another grad student works on it, but doesn’t grok the original structure, so… etc etc etc
The net result? Several hundred thousand lines (including libs), most of which do nothing more than perpetuate forgotten fantasies – strip away the detritus, especially the counter-acting code bits put in at different times, and what you get is a pretty simplistic formulation.

AJ
May 15, 2011 10:36 am

Hi Willis… if forcings are realized as per the Newtonian view (exp( -1 / τ )), then the amount of “heat in the pipeline” has converged on a limit.
This is like an annuity where you deposit $100 the first year, $110 the next, then $120 and so on. If at the same time you have a negative interest rate (2% management fee), then after some time your balance will reach an upper limit where your deposits are matched by the fees.
If that’s the case, then we can estimate sensitivity by simply regressing ln(co2) against temperature.
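A toy R sketch of the convergence point, with illustrative numbers only: under a simple exponential lag, a steadily rising forcing leaves a roughly constant amount of unrealized warming, so the lag ends up as a fixed offset rather than a growing “pipeline”:

# One-box exponential lag responding to a linearly ramping forcing
lambda  <- 0.3                      # C per W/m^2 (illustrative)
tau     <- 5                        # lag time constant in years (illustrative)
years   <- 150
forcing <- 0.03 * (1:years)         # forcing ramping up by 0.03 W/m^2 per year
temp    <- numeric(years)
for (n in 2:years) {                # relax toward the equilibrium lambda * forcing
  temp[n] <- temp[n - 1] + (lambda * forcing[n] - temp[n - 1]) * (1 - exp(-1 / tau))
}
pipeline <- lambda * forcing - temp # unrealized ("in the pipeline") warming
round(pipeline[c(10, 50, 100, 150)], 3)   # settles at a constant offset

In that regime the lag shows up only as a constant offset, which is the sense in which, for this toy model at least, a simple regression of temperature against the forcing (or ln CO2) could recover the sensitivity.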

Jim D
May 15, 2011 11:09 am

netdr, I was looking at the IPCC report AR4 from WG1. It has a graph. Bill Illis above posted something similar. I think the models underestimated aerosol effects for the early 2000’s, possibly because the 90’s had a rapid temperature rise due to decreasing aerosol effects, and when they increase aerosols later the rate of temperature increase drops. Aerosols are important to predict correctly for near-future decades, but can only mask AGW for a limited time without unacceptable pollution levels occurring.

Jim D
May 15, 2011 11:13 am

People haven’t realized that Willis removed the CCSM3 multi-decadal signals by taking an ensemble average of multiple CCSM3 runs. He therefore found that climate forcing does indeed force climate in these models, and that this is independent of internal variations.

Robert
May 15, 2011 11:16 am

Willis,
did you really just call your two previous posts “papers”? I’m pretty sure for them to be papers they’d have to be published, and if you’re so confident in your results I’m sure you can publish them somewhere? But if you’re afraid that there is some AGW bias at some of the journals then I’m sure E&E could accept you…

May 15, 2011 11:54 am

Jim D says:
May 15, 2011 at 11:13 am
People haven’t realized that Willis removed the CCSM3 multi-decadal signals by taking an ensemble average of multiple CCSM3 runs.

So why don’t they just publish the “correct” runs?

netdr
May 15, 2011 1:30 pm

Jim D says:
May 15, 2011 at 11:09 am
netdr, I was looking at the IPCC report AR4 from WG1. It has a graph. Bill Illis above posted something similar. I think the models underestimated aerosol effects for the early 2000′s, possibly because the 90′s had a rapid temperature rise due to decreasing aerosol effects, and when they increase aerosols later the rate of temperature increase drops. Aerosols are important to predict correctly for near-future decades, but can only mask AGW for a limited time without unacceptable pollution levels occurring.
***************
Thanks. I checked the post and it seems to predict 0.2 °C of warming from 2005 to 2011, which didn’t happen.
I think aerosols are being used as a “fudge factor” to explain why warming isn’t happening as expected. At some time in the future the aerosols are removed [from the simulation] and the temperature [in the simulation] climbs rapidly.
The time is far enough in the future that the forecasters will be safely retired.
A better explanation, it seems to me, is that there was a negative PDO from 1940 to 1978 and this caused an excess of La Ninas over El Ninos. Check the chart below.
http://rankexploits.com/musings/2008/nasa-says-pdo-switched-to-cold-phase/
From 1978 to 1998 the PDO switched to positive and the temperature went up.
From 1998 to present there have been an equal number of El Ninos and La Ninas.
The overall temperature change during that time is essentially zero.
http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
The correspondence of temperature and the El Nino/La Nina balance is astounding.
1999 and 2000 were La Nina years, as was 2008.
The El Nino/La Nina balance is the first derivative of temperature. When there are more La Ninas the temperature goes down.
In between there were all El Ninos and the temperature went up. The general shape is an inverted “U”.
It looks to me like CO2 did almost nothing.

Ron Cram
May 15, 2011 1:32 pm

Jimd,
Actually, the cooling effect of aerosols has been overestimated. See http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.161.571&rep=rep1&type=pdf

Ron Cram
May 15, 2011 1:42 pm

Willis,
I do hope you publish this analysis in a high impact journal. I just read your earlier blog post Zero Point Three times the Forcing and the comments by Paul_K and I hope Paul_K becomes a co-author of the paper.
I would also love to see a blog post on the implications of your analysis. Several commenters have expressed their views on what the most important implications are.
Keep up the great work!

jorgekafkazar
May 15, 2011 2:06 pm

PAPER: pa·per (pā′pər); n.
…3a. A formal written composition intended to be published, presented, or read aloud; a scholarly essay or treatise.
Formal? Check
Intended to be published? It was published here, so Check
Scholarly? Check
Essay or treatise? Check
Ok, looks like it’s a paper.

May 15, 2011 3:17 pm

Actually the cooling effect of variations in the Sun’s activity and the geomagnetic field strength has clearly been underestimated:
“An interesting question is what role the Sun is going to play in the near future. The 9300-year long composite of solar activity (Steinhilber et al., 2008) shows that during the past six decades the Sun has been in a state of high solar activity compared to the entire period of 9300 years. The statistics of the occurrence of periods of high activity suggests that the current period of high activity will come to an end in the next decades (Abreu et al., 2008). Furthermore, the distribution of grand solar minima in the past 9300 years shows that it is likely that a Maunder Minimum-like period would occur around 2100 AD (Abreu et al., 2010). Such a period of low solar activity would probably lead to a temporary reduction in Earth’s temperature rise due to the anthropogenic greenhouse effect. However, the 9300-year long record shows that in the past a grand maximum has always been followed by a period of high activity, with the very likely assumption that the Sun’s future behavior will be similar to that of the past, it is clear that the Sun will not permanently compensate for human made global warming.”
http://www.pages-igbp.org/download/docs/Steinhilber%20and%20Beer_2011-1%285-6%29.pdf
Despite the obligatory nod to the ruling AGW ‘consensus’ at the end of the above quote, note that this nod also ignores the fact that, based on the experience of the last million years plus, the current (Holocene) interglacial could just as easily begin to terminate any time real soon now… or is poor Milankovitch just another nasty old sceptic too?

Ron Cram
May 15, 2011 3:29 pm

Tamino seems to have pulled his “fake forcing” post cited by Anthony in comments above. At least when I try, I get the Open Mind blog but the page is completely blank.

EllisM
May 15, 2011 4:48 pm

The post over at Tamino’s is still there.
REPLY: yes, the only ones he deletes are the ones that he loses control of – Anthony

EllisM
May 15, 2011 6:37 pm

Anthony, it’s good to know that you’re better than Tamino in not deleting anything. That puts you one up on him!
REPLY: No please don’t put words in my mouth. I never said that I was “better”, only that he’s deleted some posts that he can’t control. – Anthony

netdr
May 15, 2011 6:38 pm

I posted over at Tamino’s too just for the fun of it.
I am razzing them because their explanation for cooling from 1940 to 1978 is so lame.
They claim that aerosols just conveniently mimic the 60-year PDO cycle?
I don’t believe it !

Ron Cram
May 15, 2011 6:56 pm

Okay, the post at Tamino’s is still there but I have to scroll down to see it. Does anyone else have that problem? Or is it just Google Chrome users?
Anyway, I skimmed through Tamino’s writeup and he quotes from Willis’s first post but not much from his second and more correct post. Tamino is saying Paul_K suggested fake data (using only 72.4% of volcanic forcing), but he doesn’t quote Paul_K saying that. I used the find function on my browser and it did not come up.
I don’t get what Tamino thinks he is proving when he writes “I can do that too.” So he is proving Willis’s work is reproducible. And he thinks that is a bad thing?
Can’t wait to see Willis’s withering response!

BLouis79
May 15, 2011 8:48 pm

Considering Joseph Postma’s “Thermodynamic Atmosphere Effect” paper appears to more than adequately explain temperatures on Earth, perhaps Willis could build a little thermodynamic atmosphere model, using the inputs he has that affect the relevant parameters in Postma’s equations, to see how that runs?

May 15, 2011 10:13 pm

Theo:
‘Explicate the model for the general public. ”
If you want to know what a GCM does… READ THE CODE. The first time I looked at ModelE was 2007. It’s not that hard or that long. Get started.
I’ve linked to a presentation and a PDF of the process of emulating the higher-order output of a GCM. It’s not that hard; read it and watch the video.
Jim D
“People haven’t realized that Willis removed the CCSM3 multi-decadal signals by taking an ensemble average of multiple CCSM3 runs. He therefore found that climate forcing does indeed force climate in these models, and that this is independent of internal variations.”
err I believe I’ve pointed that out.

Craig Allen
May 15, 2011 11:46 pm

Hang on. You are saying it’s a ‘black box’ but you don’t make the effort of actually looking inside it. The GISS Model E GCM can be downloaded here. You can read the code yourself and it is documented. The box is actually completely transparent and comes with a manual!
The fact that you can do a curve fitting exercise to emulate one aspect of the model outputs in no way demonstrates that you understand how the model works and why it produces the results as it does.
And you have only done the curve fitting exercise on the average global temperature. The model simulates the global climate system – temperature, rainfall, pressure, snow, ice, etc. – and the output includes the horizontal, vertical and temporal patterns of each.
Your effort is a pointless exercise in demonstrating that you don’t understand the model and have no interest in attempting to.

May 16, 2011 12:38 am

Craig Allen,
GISS couldn’t predict its way out of a wet paper bag. The endless “adjustments” it makes to putatively show that the planet is warming precipitously have been the subject of much debunking on WUWT. Here are a few of the many examples posted:
click1
click2
click3
Note that all of the massaged examples show greater warming, never less warming. What are the odds, eh?

Theo Goodwin
May 16, 2011 5:50 am

steven mosher says:
May 15, 2011 at 10:13 pm
Theo:
‘Explicate the model for the general public. ”
“If you want to know what a GCM does… READ THE CODE. The first time I looked at ModelE was 2007. Its not that hard or that long. get started.”
The simple-mindedness of this response is exceeded only by its condescending spirit. The code is part of the matter. Code contains heuristics that are understood only by advanced programmers who design the model and decide how to solve it. Otherwise, you are working with a tinker toy. No scientist has ever had the time to master such heuristics. What is really important and what we really want is the history of runs and the history of decisions made on the basis of those runs. Without that, we have no idea whether there is a rational process in place for using the model. A program run produces vast output, though this might not be known to the users, and that output has to be culled and interpreted. What I need to know is the rational process that is in place for doing the culling and interpreting. My guess is that there is none. Certainly, you seem to be unaware of it. I have seen nothing in publication by Warmista that would indicate that they are aware of it. Without such a rational process, specified on paper and regularly critiqued, all the model runs and all the model revisions amount to nothing more than rejiggering.
But to get back to basics, Shub Niggurath posted a beautiful little explanation of what it is to have a model that is not equivalent to a simple linear transformation on inputs. The model must use multiple terms (what I call predicates) and that list of terms must be irreducible. Do you know whether the models do this? I believe that you don’t have a clue what I am asking. If so, please say so.
Finally, there is the great frustration that sceptics experience whenever we challenge or question a Warmista, namely, the response is always “Figure it out yourself.” We want you to explain matters and to take positions on your explanation. This process involving all of us is supposed to be an argument, not a condescending teacher and his students. So, Warmista, stop acting like children, explain what you are doing, at least in response to legitimate challenges, such as Willis’, and, thereby, take responsibility for your work. So far, you have not taken responsibility for your work. You simply pretend to be above it all. Yet the fact of the matter is that you are out of it all.

Theo Goodwin
May 16, 2011 6:03 am

Craig Allen says:
May 15, 2011 at 11:46 pm
“Hang on. You are saying it’s a ‘black box’ but you don’t make the effort of actually looking inside it. The GISS Model E GCM can be downloaded here. You can read the code yourself and it is documented. The box is actually completely transparent and comes with a manual!”
A typical Warmista response: “Figure it out yourself.” This is supposed to be a debate. After being challenged, Warmista are supposed to explain their position and defend it. We need their explanation so that we can criticize it. That is how science advances. Now, of course, Michael Mann and the Team pretend to have a new kind of scientific method where you never have to explain or defend your claims. Either they are pretending or they are the stupidest people now working in something called “science.” There is no alternative scientific method. When challenged, step up with your explanation and prepare to defend it. It does not work any other way.

BLouis79
May 16, 2011 6:45 am

Anyone who has ever tried a “space blanket” will attest that it does not retain heat as well as a real blanket. They are used mainly because they are light/compact/cheap/disposable. Besides, I already accept that insulation has no impact on a thermal *equilibrium* state because insulation does no work.
No doubt someone will do the blackbody reflector experiment in due course and a real answer will be known.
Personally, I find Postma’s explanations physically rational and useful. Further, if it is true that a large chunk of the reason for earth’s temperature is the radiative thermal equilibrium with the sun, it follows that the temperature of earth is highly sun-driven (others have said this) and that there is probably not a lot anyone on earth can do to modify it (over the dead bodies of the warmist believers).
So if Willis thinks it beyond the scope of all possibility that the combination of albedo effect (reflectance) and solar activity can explain earth’s temperature, then someone else will have to do the thermodynamic model run……

Jeremy
May 16, 2011 7:41 am

I question those forcing inputs at the outset. How were those arrived at? I’m not questioning your work Willis, I’m questioning however someone arrived at those forcings.
Volcanoes are a large net positive forcing? How is this? So the extra cloud creation from aerosol release is net positive?
Ozone is a large net negative forcing? Is this really the case? Most O3 loss is at one pole where direct sunlight is a rare luxury and ice blankets the continent 365 days of the year.

Joel Shore
May 16, 2011 8:33 am

BLouis79:

Personally, I find Postma’s explanations physically rational and useful.

Then, you’ve been “had”. As Willis points out, Postma is nonsense.
I’ll just go a little further in explaining why than Willis did. In particular, what is wrong with his lapse rate argument? The answer is that the basics of the picture are correct (and discussed in just about any elementary textbook on atmospheric physics or climate). Indeed, if you know the temperature at one point in the troposphere then you can use the lapse rate to determine the temperature at the surface.
However, what Postma would have you believe is that the effective radiating height is set by … well, he really doesn’t say but seems to suggest that you somehow just compute the average of the mass of the atmosphere or something like that. The truth of the matter is that the effective radiating height is determined by that level of the atmosphere where the “optical depth” ( http://en.wikipedia.org/wiki/Optical_depth ) of the atmosphere above is of order 1, so that most of the emitted radiation successfully escapes to space. (In reality, you have to consider an average because optical depth is a very strong function of wavelength due to the various absorption lines of the IR-active atmospheric constituents.) And that, of course, is determined by the make-up of the atmosphere, in particular, the substances in the atmosphere that absorb and emit mid- and far-IR radiation. (In particular, if the atmosphere was transparent to such radiation, then the effective radiating level would be the surface of the earth and hence it is the surface that would have to be at 255 K assuming the same earth-system albedo as we have now.)
So, Postma is right: Once you know the temperature at one level in the troposphere, you can use the lapse rate to get the temperature at the surface. The only problem is that the only way to get the temperature at a certain level of the atmosphere is to consider the greenhouse effect.
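[To make the “optical depth of order 1” idea concrete, here is a toy R sketch. It is not a radiative-transfer calculation; the column optical depth tau0 and scale height H below are made-up illustrative numbers, chosen only to show that the radiating level rises as the absorbing constituents increase.]

# For an absorber whose concentration falls off exponentially with height,
# the optical depth seen looking down from space to a height z (in km) is
# tau(z) = tau0 * exp(-z / H).  The "effective radiating height" in the sense
# described above is roughly where that overhead optical depth drops to 1.
emission_height <- function(tau0, H = 8) {
  ifelse(tau0 > 1, H * log(tau0), 0)   # if tau0 <= 1, space "sees" the surface itself
}
emission_height(tau0 = 2)    # modest absorber: radiates from a few km up
emission_height(tau0 = 4)    # more absorber: the radiating level moves higher
emission_height(tau0 = 0.5)  # nearly transparent atmosphere: the surface radiates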
My guess is that Postma is smart enough to know this…which, if true, means he is engaging in active deception. Admittedly though, I could be wrong on this guess and it could be that he is deceiving himself too.

May 16, 2011 8:54 am

ian edmonds says:
May 14, 2011 at 5:24 pm
Hi Willis,
The formula you give is incorrect (typo?) and should read
T(n+1) = T(n) + λ ∆F(n+1)/τ + ∆T(n) exp(-1/τ)
where ∆F(n+1) = F(n+1) – F(n)
Nice post though. Thanks.

I had this exact same question, and I see you beat me to asking it. I did not see this addressed tho’. Willis?
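[For what it is worth, a quick R check of the two forms being contrasted here, with no claim about which one Willis actually ran. For lag constants of a few years, 1 – exp(-1/tau) is close to 1/tau, so multiplying the forcing change by (1 – exp(-1/tau)), as in the standard lagged-response form, and dividing it by tau give nearly the same response, while dividing by (1 – exp(-1/tau)), as the posted equation reads literally, would scale the forcing up by roughly tau. The tau values below are arbitrary examples.]

tau <- c(1, 2, 3, 5, 10)
round(cbind(tau,
            one_minus_exp    = 1 - exp(-1 / tau),            # factor in the standard lagged-response form
            one_over_tau     = 1 / tau,                       # factor in ian edmonds' version
            literal_division = 1 / (1 - exp(-1 / tau))), 3)   # the equation as printed, read literally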

Steve Fitzpatrick
May 16, 2011 10:12 am

Willis,
Nice post.
You note that both this GCM and the GISS model E fail to predict the actual 20th century temperature history, especially the ~1945 to ~1975 cooling trend. That failure stems (I suspect) from a combination of overstated sensitivity to forcing and the models not being able to simulate natural variation in the system. The most obvious natural variation is ENSO, but longer term natural oscillations like AMO, PDO, etc, which are not clearly understood, may explain most of the 1945 to 1975 cooling discrepancy.

Gary Swift
May 16, 2011 11:14 am

Willis said:
“The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. ”
…so, …the beginning and end of the ice ages should be a piece of cake to explain. ….uh. ….right? (insert sound of crickets chirping here)

Nuke
May 16, 2011 11:35 am

When I input my data and run it through the equation provided, the answer is 42.

May 16, 2011 1:02 pm

Willis Eschenbach says: “Yes, in fact a black body does warm up if you reflect energy back onto it, that’s how the famous “space blanket” works in part … and so everything that follows is worthless.”
A “space blanket” works by insulation. The fact you do not read any further smells of fear. If you have ANY body where its thermal emission is entirely reflected back onto itself, all that you will get is a standing EM wave pattern of energy flux density equal to the emission temperature. The temperature will not increase higher than the source temperature. At that point, the outside of the reflective surface will be emitting the thermal energy, because even the electron clouds in the atoms of a mirror are kinetically excited by photon impingement, which translates into thermal energy.
Joel Shore says: “The truth of the matter is that the effective radiating height is determined by that level of the atmosphere where the “optical depth” of the atmosphere above is of order 1, so that most of the emitted radiation successfully escapes to space.”
No disagreement. I’ll make this more clear in my next edit since it helps the case of reality so much.
Joel Shore says: “In reality, you have to consider an average because optical depth is a very strong function of wavelength due to the various absorption lines of the IR-active atmospheric constituents. And that, of course, is determined by the make-up of the atmosphere, in particular, the substances in the atmosphere that absorb and emit mid- and far-IR radiation.”
And because all of the O2 and N2, which make up 99% of the atmosphere, are radiating thermal IR, they weight by volume the average emission level to far above the ground. The N2 and O2 of the atmosphere emit the vast majority of mid and far IR radiation to space. You seem to imply it is only GHG which do so: you are incorrect.
“In particular, if the atmosphere was transparent to such radiation, then the effective radiating level would be the surface of the earth”
You must be one of the types who thinks only GHG’s emit thermal IR energy. ALL of the gases of the atmosphere emit thermal IR, and the N2 and O2 are thousands of times more prevalent than GHG’s. They also present their own “back-radiation” to the surface, but for some reason, are never included in how much warming effect this should have. This, when their ground-directed flux is thousands of times more prevalent than that from GHG’s. Interesting.
It is impossible for the radiating surface to be the ground surface of a planet if it has an atmosphere. ALL the gases of an atmosphere emit thermal IR, even if void of GHG’s, and this HAS to weight the average surface of emission off of the ground. And don’t forget, the true solar insolation temperature upon the earth surface is upwards of 100C, not -18C, as you’ve been led to believe.
“and hence it is the surface that would have to be at 255 K assuming the same earth-system albedo as we have now.”
Actually, you’d have to work very hard to actually find WHERE it was 255K on the surface. In truth, it would be a maximum of 87C (assuming albedo = 0.3), and a minimum determined by the heat capacity and rate of cooling at night, which is anyone’s guess. The total amount of actual Joules emitted would be equal to that incoming, but the energy flux density and related temperatures could be anything they wanted within that.
“The only problem is that the only way to get the temperature at a certain level of the atmosphere is to consider the greenhouse effect.”
Then why is it that the standard adiabatic material analysis gets the right answer without reference to GHG’s? It is because radiative transfer is INCLUDED in the materialistic adiabatic distribution. And don’t forget, ALL the molecules in the atmosphere emit thermal IR. Claiming only GHG’s do is incredibly naive. The temperature within the system can ONLY be determined by the input energy – the Sun. No passive component of the system can add more energy and increase its own temperature…come on this is so basic. And with an average of +30C insolation for the system, with a maximum of about 100C, there is more than enough solar input to sustain a ground temperature of +15C, especially when considering the additional adiabatic heating effects.
“My guess is that Postma is smart enough to know this…which, if true, means he is engaging in active deception.”
Nope…just writing about standard science and physics without referral to fiction.
Willis Eschenbach says: “Do you see why people don’t like to engage in your kind of discussion. READ THE PAPERS.”
Willis, please…just after you defend NOT reading my paper. The part you’re getting stuck at is actually only a very minor part of the paper. Even if I was wrong on that little issue, it actually has very little to do with the rest of the paper!
Equivocating between insulation vs. back-IR heating definitions becomes irrelevant when you understand the larger context of what I presented. There is NOT even a NEED to postulate a GH effect when you understand that the surface insolation is NOT equal to -18C. It is actually +30C for the system on the side which receives sunlight, with a maximum system temperature of ~100C under the zenith. The fact that it never actually gets this hot can only mean that the atmosphere cools the ground. The fact that the atmosphere IS cooler than the ground can only mean that it cools the ground. The laws of heat transfer apply equally to radiation as they do to conduction. No object can conduct with itself to make itself hotter; no object can radiate with itself to raise its own temperature. Stop equivocating with sophisms related to insulation effects – the atmosphere is free to space.
“The problem with the Postma type of solution is that it says something like ‘Left to itself, the bottom of a planetary atmosphere without GHGs is constrained by physical principles to be warmer than the top.’ This is perfectly true.”
And by obvious logical conclusion, the AVERAGE of the system has to be found in between. Therefore, the bottom of the atmosphere HAS to be warmer than the average.
“But then they then go a bridge too far and say “This shows that the bottom of the atmosphere is warmed by atmospheric processes”. Not true.”
That’s absurd. Venus doesn’t even absorb enough energy from the Sun in the first place to get to a surface temperature of ~700C, yet it does. Since this temperature CAN NOT be a result of radiative heating, since it doesn’t get that much radiation in the first place, it HAS to be a result of physical atmospheric processes. Radiation can not spontaneously amplify itself to a higher temperature potential…this is basic conservation of energy. Please neglect sophisms on magnifying lenses. There needs to be another source for the temperature increase on Venus…that source comes from physical processes in the atmosphere, such as convection and adiabatic effects.
That’s the case on Venus. On Earth, the bottom of the atmosphere is heated by up to ~100C worth of solar insolation. Yet on Earth, the ground-air temperature is ALWAYS cooler than the maximum solar insolation. To increase the ground air temperature you need to increase the physical atmospheric depth and density.
Please read the entire paper at the website link .
BTW, I will be publishing a new paper in the coming weeks which completely supersedes ALL of this usual discussion on AGW and the GHE. What we’re discussing here won’t even matter anymore.
There are fundamental logical flaws in the standard GH paradigm which, when exposed and explained in the way a stellar astrophysicist can, are seen to be so simple and so obvious that you wonder why you never saw it before. It will supersede the entire skeptic-alarmist paradigm…although the alarmists will lose the most. 🙂 I am sure the skeptics will love it once they understand the science…skeptics seem more prone to actual science.

BLouis79
May 16, 2011 1:06 pm

Shore says:”…you’ve been “had”. As Willis points out, Postma is nonsense.”
Well in the one corner we have physicists like Gerlich and Tscheuschner and Postma and Rancourt et al, in the other corner we have physicists like Joel Shore et al. It’s great to have a debate over physics. In the end, science will rule. Postma was ahead on points for me. (Willis’s shells don’t make much sense, so they don’t score points with me, and I don’t see a lot of point arguing about scientific thought experiments that are untestable.)
I haven’t yet seen an AGW physicist propose a falsifiable proposition on the mechanism of the radiative “blanket” or “backradiation” theory that can be tested in the laboratory.
I have read discussions elsewhere on whether different gases can have different lapse rates without creating perpetual energy. Postma assumes that this is impossible and so the mean optical depth of the atmosphere is the point at which the temperature equals the mean surface temperature measurable from space using optical instruments. I find this reasonable. He shows this to be consistent with observed atmospheric and surface temperature data.
Perhaps Joel can tell me how an IR laser might be attenuated/scattered/reflected/thrown out of phase/delayed if directed through various gaseous environments? Can someone point me to papers on this??

Joel Shore
May 16, 2011 2:15 pm

BLouis79 says:

Well in the one corner we have physicists like Gerlich and Tscheuschner and Postma and Rancourt et al, in the other corner we have physicists like Joel Shore et al. It’s great to have a debate over physics. In the end, science will rule.

Don’t fool yourself. You can go other places on the web and see people debate if the earth is 6000 years old or not. The science has already ruled. G&T and Postma are just peddling pseudoscientific nonsense. Why do you think that people like Roy Spencer, Fred Singer, Richard Lindzen, and even Lord Monckton won’t endorse this nonsense?

Postma was ahead on points for me.

That is a sad admission that you are incapable of distinguishing between real science and pseudo-scientific nonsense that you want to believe.

I haven’t yet seen an AGW physicist propose a falsifiable proposition on the mechanism of the radiative “blanket” or “backradiation” theory that can be tested in the laboratory.

More nonsense. You can point a radiation detector up at the sky and detect the back-radiation. Satellites can look down from above the earth’s atmosphere and see the spectrum of radiation emitted. Furthermore, engineers use the same radiative transfer equations that scientists are using for the greenhouse effect in thousands of real world calculations.

I have read discussions elsewhere on whether different gases can have different lapse rates without creating perpetual energy.

What are you talking about? Who says different gases have different lapse rates? The molecules in the air rapidly thermalize. There are no different lapse rates.

Postma assumes that this is impossible and so the mean optical depth of the atmosphere is the point at which the temperature equals the mean surface temperature measurable from space using optical instruments.

I cannot figure out a way to parse this sentence that makes any sense. Do you understand what “optical depth” means?

He shows this to be consistent with observed atmospheric and surface temperature data.

Of course his result is consistent! All that he has shown by this is that conservation of energy holds (i.e., that the earth / atmosphere system from space appears as a body emitting the amount of radiation that it must to be in radiative balance) and that the known lapse rate in the atmosphere holds. He hasn’t explained to you why the earth is only emitting this much radiation when its surface temperature is such that its surface emits much more. I.e., why does the earth look from space (sort of**) like it is a body with a temperature that is actually the temperature at some point 5 km above the ground? And, the reason that this discrepancy between what the surface emits and what the earth / atmosphere system as seen from space emits can be supported is because of the greenhouse effect.
You are being played for a fool by G&T and Postma. However, it is your decision to make whether you are going to show yourself to be such or whether you are going to show us that you can successfully distinguish between science and pseudoscience. From what you have said so far, I am not optimistic, but maybe you can pleasantly surprise us.
**[I say “sort of” because if you look at the complete spectrum of the radiation from the earth / atmosphere system, it is not simply radiating as a 255 K blackbody; it is only the total amount of radiation that corresponds. The actual spectrum is dictated by the concentration and distribution of the IR-absorbing components in the atmosphere.]
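[For readers who want the arithmetic behind the “radiating like a 255 K body from roughly 5 km up” picture, here is a back-of-envelope R version using textbook values. The 5 km radiating height and 6.5 K/km lapse rate are round illustrative numbers, not output from any model.]

S0     <- 1366      # W/m^2, approximate solar constant
albedo <- 0.3
sigma  <- 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

T_eff <- ((S0 * (1 - albedo)) / (4 * sigma))^0.25
T_eff                      # about 255 K: what the planet radiates like as seen from space

lapse  <- 6.5              # K/km, rough mean environmental lapse rate
height <- 5                # km, rough effective radiating height
T_eff + lapse * height     # about 288 K, close to the observed mean surface temperature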

Joel Shore
May 16, 2011 2:32 pm

By the way, BLouis79, can you explain to us how the paragraph that Willis quoted to you from Postma is consistent with conservation of energy? If the object is a blackbody, it must absorb the radiation reflected back. Where does the energy go if not into increasing the temperature of that body? Not only does Postma’s nonsense contradict experience, it also contradicts fundamental laws of physics!
And, you really want to admit that you are incapable of telling the difference between such nonsense and real science?

May 16, 2011 2:38 pm

“The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. ”
…so, …the beginning and end of the ice ages should be a piece of cake to explain. ….uh. ….right? (insert sound of crickets chirping here)”
Exactly. How could/can a predicted Maunder Minimum in 2100 (Abreu, 2010), in the context of more than 10,000 years of this interglacial already, become the beginning of the end of the Holocene and wipe out AGW?
Include Milankovitch as a forcing?
Why do the GCMs not include the indisputable major forcing mechanism of the last million years plus? Ignorance (after 1000s of papers)?
http://www.pages-igbp.org/download/docs/Steinhilber%20and%20Beer_2011-1%285-6%29.pdf

Joel Shore
May 16, 2011 2:41 pm

Gary Swift says:

…so, …the beginning and end of the ice ages should be a piece of cake to explain. ….uh. ….right? (insert sound of crickets chirping here)

In terms of the relationship between forcings (mainly changes in albedo and greenhouse gas concentrations) and temperature: yes, quite impressively. See http://arxiv.org/abs/1105.1140 (Fig. 4).
The basics of the timing of these changes in terms of Milankovitch oscillations leading to the buildup or melting of land ice is also largely understood, even if some details are still fuzzy. And, there are still some questions about the exact mechanism that triggers the observed rise in the greenhouse gases, although again there is a rough understanding of why this would occur.

May 16, 2011 2:55 pm

Joel Shore says:
“More nonsense. You can point a radiation detector up at the sky and detect the back-radiation.”
So, what is the contribution to the greenhouse effect of all the O2 and N2 emitting back-radiation then? 🙂
The Laws of heat transfer apply to radiation as well as to conduction. A cold object does not raise the temperature of a warmer object, even though it conducts with it and may even diffuse into it. Likewise, a colder object will not raise the temperature of a hotter object by radiation, even though it exchanges radiation with it.
“Furthermore, engineers use the same radiative transfer equations that scientists are using for the greenhouse effect in thousands of real world calculations.”
LOL. No, they do not, for if they did, we would have free-energy heating systems. Put 240W/m2 in, get 480 W/m2 out. And passively. Sorry, don’t think so.
“What are you talking about? Who says different gases have different lapse rates? The molecules in the air rapidly thermalize. There are no different lapse rates.”
What are YOU talking about? Lapse rate depends on thermal capacity and that is specific to each gas! Why in the world would you say something like this?
“And, the reason that this discrepancy between what the surface emits and what the earth / atmosphere system as seen from space emits can be supported is because of the greenhouse effect.”
No, it is a natural consequence of any atmosphere. Even a purely inert atmosphere of Helium would be warmer on the bottom than the top, the radiative emission average would be somewhere in the middle, and the ground-air average temperature would be warmer than the mathematical spherical average.

May 16, 2011 3:39 pm

Joel Shore
“In terms of the relationship between forcings (mainly changes in albedo and greenhouse gas concentrations) and temperature: yes, quite impressively. See http://arxiv.org/abs/1105.1140 (Fig. 4). ”
Nonsense. There is nothing impressive about a short time span ‘model’ which claims to find an accumulating heat content in the ocean which cannot be validated.
In particular, this ‘reference’ says absolutely nothing about why ~120,000 years ago the global sea level was 4 – 6 m higher than now at a time when atmospheric CO2 levels were still significantly lower than now. It’s as simple as that.
“The basics of the timing of these changes in terms of Milankovitch oscillations leading to the buildup or melting of land ice is also largely understood, even if some details are still fuzzy. And, there are still some questions about the exact mechanism that triggers the observed rise in the greenhouse gases, although again there is a rough understanding of why this would occur.”
Again nonsense. In terms of the state-of-the-art of GCMs the details are still so very very fuzzy they are not even included!
You are just spouting juvenile generalisations which don’t mean much at all.
There is no GCM which includes the (Milankovitch) eccentricity, tilt and precessional forcings, which, perhaps in combination with TSI and geomagnetic field cyclicity, produce a reduction in NH insolation sufficient to cause the polar sheets to grow outwards for a period of ~100,000 years, even in the absence of a significant change in GHG forcing. The ice record clearly shows CO2 changes initiate neither an interglacial nor a glacial.
Where is the GCM which is consistent with the 10Be ice record of solar activity over the last 9300 years?
They can’t even explain frequency or phase changes of ENSO or PDO cycles for goodness sake!

Speed
May 16, 2011 4:07 pm

Postma said (May 16, 2011 at 2:55 pm)
The Laws of heat transfer apply to radiation as well as to conduction … a colder object will not raise the temperature of a hotter object by radiation, even though it exchanges radiation with it.
Nothing written here will change Postma’s view on this. Others may find The Science of Doom interesting, entertaining and useful:
The First Law of Thermodynamics Meets the Imaginary Second Law

Joel Shore
May 16, 2011 6:25 pm

Postma says:

If you have ANY body where its thermal emission is entirely reflected back onto itself, all that you will get is a standing EM wave pattern of energy flux density equal to the emission temperature. The temperature will not increase higher than the source temperature.

If this statement is about a body that is not receiving energy or generating thermal energy, then it is correct but irrelevant. If the statement is about an object like the earth getting energy from the sun (or the human body generating thermal energy from chemical energy) then it is incorrect.

And because all of the O2 and N2, which make up 99% of the atmosphere, are radiating thermal IR, they weight by volume the average emission level to far above the ground. The N2 and O2 of the atmosphere emit the vast majority of mid and far IR radiation to space. You seem to imply it is only GHG which do so: you are incorrect.

Get real. Do you have any data to back up this claim? Those molecules in isolation don’t have emission / absorption lines in the mid and far-IR. The only way that emission / absorption can happen is via collisional processes, which give very small contributions at earth’s atmospheric pressures.
Here is a paper discussing the measurement of such absorption lines at pressures of 0 to 10 atmospheres: http://www.opticsinfobase.org/view_article.cfm?gotourl=http%3A%2F%2Fwww.opticsinfobase.org%2FDirectPDFAccess%2FBC73981D-EC8E-6BE9-5F66346A49A16C1A_60399.pdf%3Fda%3D1%26id%3D60399%26seq%3D0%26mobile%3Dno&org=Rochester%20Institute%20of%20Technology
Note that even for the strongest absorption line, the measurements rely on obtaining ultra-high purities of nitrogen because any small contamination by CO or CO2 overwhelms the measurement:

At the path length used in this study, trace impurities in the sample presented a serious problem, and we outline in some detail our purification method. Both CO and CO2 have strong bands that fall near to or on top of the collisionally induced absorption band of N2. Concentrations of these molecules at the 10^-9 level cause interfering lines to be observed in the spectrum, and, in fact, commercial Ultra-High Purity nitrogen with a stated purity of 99.9995% contained enough CO and CO2 to render the measurements useless. Intensity measurements on individual impurity lines showed that the concentrations of CO and CO2 were 0.4 and 1 x 10^-6, respectively.

So, apparently only 1 part per million of CO2 was, along with the CO, enough to render the measurements useless. Imagine what 380 parts per million does! Furthermore, as this graph shows http://www.learner.org/courses/envsci/visual/img_med/electromagnetic_spectrum.jpg , the absorption line of N2 that they are talking about, which is at 4.3 um, would already place it quite far out in the wings of the terrestrial radiation spectrum.

Actually, you’d have to work very hard to actually find WHERE it was 255K on the surface. In truth, it would be a maximum of 87C (assuming albedo = 0.3), and a minimum determined by the heat capacity and rate of cooling at night, which is anyone’s guess. The total amount of actual Joules emitted would be equal to that incoming, but the energy flux density and related temperatures could be anything they wanted within that.

What? It would not be 87 C unless you believe that the only thing that matters is the local radiation intensity. In the absence of IR absorption in the atmosphere, what energy balance constrains is the average of T^4 over the surface of the earth (really the emissivity times T^4, but the emissivity of most terrestrial surfaces…or most surfaces period…in the IR is very close to 1).
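[A small numerical illustration of that averaging point. The two-temperature “planet” below is arbitrary; it is chosen only so that its mean emitted flux matches the uniform 255 K case, which is what energy balance actually fixes.]

sigma     <- 5.67e-8               # Stefan-Boltzmann constant
uniform_T <- 255                   # K, the uniform-temperature benchmark
flux      <- sigma * uniform_T^4   # the emitted flux that energy balance constrains

T_hot  <- 290                                 # arbitrary temperature for the "hot" half
T_cold <- (2 * flux / sigma - T_hot^4)^0.25   # chosen so the mean of T^4 is unchanged

c(T_hot, T_cold)
mean(c(T_hot, T_cold))   # about 241 K: below 255 K even though the mean of T^4 is identical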

That’s absurd. Venus doesn’t even absorb enough energy from the Sun in the first place to get to a surface temperature of ~700C, yet it does. Since this temperature CAN NOT be a result of radiative heating, since it doesn’t get that much radiation in the first place, it HAS to be a result of physical atmospheric processes. Radiation can not spontaneously amplify itself to a higher temperature potential…this is basic conservation of energy.

No…That’s not basic conservation of energy. That’s nonsense. The temperature is determined by energy balance considerations. Of course Venus absorbs enough energy to get up to 700 C. It absorbs enough energy to get to an arbitrarily high temperature if it never emitted any energy back out into space. Of course, that’s a counterfactual because it can’t not emit energy back into space. However, the only specific limit that I know of on its temperature is that it could never get hotter than the sun that heats it.

Joel Shore
May 16, 2011 6:36 pm

Postma says:

The Laws of heat transfer apply to radiation as well as to conduction. A cold object does not raise the temperature of a warmer object, even though it conducts with it and may even diffuse into it. Likewise, a colder object will not raise the temperature of a hotter object by radiation, even though it exchanges radiation with it.

Again, this statement is either true but irrelevant if you are talking about a cold object and a warm object with no other source of energy OR it is false if you are talking about a case, such as the sun, earth, and atmosphere, where you have to compute the radiative balance of the system.
The greenhouse effect is no big mystery: In the absence of an IR-absorbing atmosphere, all the radiation emitted by the earth goes back out into space and the earth’s surface temperature is determined by the balance of what it receives from the sun and what it emits back out into space. In the presence of an IR-absorbing atmosphere, some of the radiation that it emits finds its way back to the earth and hence, for a given surface temperature, the net heat flow away from the earth is reduced. The earth’s temperature must rise until radiative balance is restored.
“Furthermore, engineers use the same radiative transfer equations that scientists are using for the greenhouse effect in thousands of real world calculations.”

LOL. No, they do not, for if they did, we would have free-energy heating systems. Put 240W/m2 in, get 480 W/m2 out. And passively. Sorry, don’t think so.

You are smart enough that you can’t possibly believe this nonsense. Why do you continually confuse the case of an object with no source of thermal energy with the earth that is receiving energy from the sun?

No, it is a natural consequence of any atmosphere. Even a purely inert atmosphere of Helium would be warmer on the bottom than the top, the radiative emission average would be somewhere in the middle, and the ground-air average temperature would be warmer than the mathematical spherical average.

That notion does not even obey conservation of energy. An atmosphere transparent to IR radiation would allow all the radiation from the surface of the earth to go out into space. At its present surface temperature, the earth would be emitting much more than it absorbs from the sun and would rapidly cool.

Joel Shore
May 16, 2011 6:44 pm

Postma says:

The Laws of heat transfer apply to radiation as well as to conduction. A cold object does not raise the temperature of a warmer object, even though it conducts with it and may even diffuse into it. Likewise, a colder object will not raise the temperature of a hotter object by radiation, even though it exchanges radiation with it.

If you really believe this is true in any relevant way (i.e., to an object like the earth receiving energy from the sun…or an object generating its own thermal energy), explain to me how the examples that we have presented in Sections 2.2 and 2.3 of our comment on G&T are wrong: http://scienceblogs.com/stoat/upload/2010/05/halpern_etal_2010.pdf

Joel Shore
May 16, 2011 6:56 pm

Ecoeng says:

Nonsense. There is nothing impressive about a short time span ‘model’ which claims to find an accumulating heat content in the ocean which cannot be validated.

Did you miss the “Fig. 4” part? You seem to spend your entire post going off on irrelevant tangents.

In particular, this ‘reference’ says absolutely nothing about why ~120,000 years ago the global sea level was 4 – 6 m higher than now at a time when atmospheric CO2 levels were still significantly lower than now. It’s as simple as that.

The CO2 levels have only shot up to their current values in the last 100 years. Do you think the sea level instantaneously adjusts to the forcings?

Joel Shore
May 16, 2011 7:05 pm

Postma says:

What are YOU talking about? Lapse rate depends on thermal capacity and that is specific to each gas! Why in the world would you say something like this?

Somehow I missed this priceless comment. You are telling me that gases in the atmosphere don’t thermalize through collisions but instead the different constituent gases are at different temperatures? I really think I need you to expound on this one a bit more.

May 17, 2011 12:08 am

Joel Shore
“The CO2 levels have only shot up to their current values in the last 100 years. Do you think the sea level instantaneously adjusts to the forcings?”
No I don’t. Only those who can’t recognise cheap shots when they see them might think that. But I do think someone has to come up with GCMs which can, for example:
Explain what forcings or (even just linear) combinations of forcings could produce a sea level 4 – 6 m higher than where it is now for much lesser CO2 levels than now, not once, but repeatedly through the Pleistocene.
Explain the good proxy temperature and 10Be record of the last 9300 years again for much lesser CO2 levels than now and thus ordinary things like why (say) the Inca were growing maize on little terraces (visitable to this day) halfway up the sides of mountains 1000s of feet above where it can be grown now.
Your comments betray all the classic traits of the AGW hubris, firmly rooted in a naive, post-modernist belief that a complex, chaotic and partially non-equilibrium thermodynamic system is either reliably predictable and/or that such predictions can be somehow imposed by act of will. Laughable at a time when even the science of large scale non-equilibrium thermodynamics is new.

RC Saumarez
May 17, 2011 2:37 am

I’m sorry, this is a signal processing sleight of hand. What you have done is fit parameters of a convolution. This will inevitably lead to a correlation which is spurious. In fact you have performed one of the most basic statistical errors in attempting to test the significance of data that is used to construct a hypothesis.
I suggest you read Bendat & Piersol or Papoulis.

Joel Shore
May 17, 2011 4:56 am

Willis,
You and Ecoeng are going off on all of these tangents. I am not discussing all of Hansen’s paper…or even the main thesis of Hansen’s paper. I am simply pointing to one figure in it, Figure 4, that addresses this question / comment:

“The current climate paradigm, as exemplified by the models, is that the global temperature is a linear function of the forcings. ”
…so, …the beginning and end of the ice ages should be a piece of cake to explain. ….uh. ….right? (insert sound of crickets chirping here)”

The point is that, yes, the ice core temperature record does show what appears to be a linear relationship between the forcings, as determined from greenhouse gas levels and sea levels (to get the albedo forcing), and the temperature.
Ecoeng says:

Your comments betray all the classic traits of the AGW hubris, firmly rooted in a naive, post-modernist belief that a complex, chaotic and partially non-equilibrium thermodynamic system is either reliably predictable and/or that such predictions can be somehow imposed by act of will. Laughable at a time when even the science of large scale non-equilibrium thermodynamics is new.

And yet, there are some things that can reliably be predicted. I can reliably predict that the average temperature here in Rochester in July will be roughly 25 C warmer than in January, even if I can’t predict the weather on any particular day.
Is there the possibility of some surprises as we embark on our little “experiment” with the earth’s climate system? Absolutely…But such surprises seem more likely to be unpleasant than pleasant.

May 17, 2011 5:56 pm

“Entia non sunt multiplicanda praeter necessitatem”
(“Entities should not be multiplied more than necessary”).
Very simplistic Willis (in a good way) Nice!

Jim D
May 17, 2011 6:49 pm

netdr, the ensemble average is steadily 0.2 C per decade. Individual members may vary from zero to 0.4 C per decade. The current climate is within the ensemble. Confusion with using the smooth ensemble projection to compare with the observations, which are like an individual noisy member, is a common problem here.

Joel Shore
May 17, 2011 6:51 pm

Willis Eschenbach says:

So the model that Hansen admits contains large errors, and which is forced into balance by adjusting a single parameter, shows an imbalance?

They can only succeed in tuning the model to be within half a W/m2 of balance, and then they proceed to claim an error of 0.15 W/m2 …. WUWT, as they say?

Now that I have read through your diatribe on Hansen’s paper (which I didn’t do until now simply because it wasn’t relevant to the tiny piece of the paper that I was even referring to), I have to admit that I am pretty confused. As near as I can tell, Hansen’s estimate of the energy imbalance that you quote is based on various pieces of empirical data…The nearest they come to using a model for any of the pieces is a 1-d conduction equation that they use to estimate the very small land contribution. (They do then do some comparisons to Model E calculated values for the imbalance, but that is in fact to show that they think the model may have too slow a response because of the way that mixing into the deep ocean is handled.)

RC Saumarez
May 18, 2011 2:27 am

My last comment was incomplete and partially wrong.
What has been done here is NOT a test of linearity. Instead, a first order linear step method has been applied which stems from Taylor’s theorem and is the basis of Euler’s method for solving a differential equation. It says nothing about the linearity of the process involved.
I know, Mr Eschenbach, that you pride yourself on being a generalist, but should you not try and learn some basic mathematics and physics before presenting ideas like this?

May 18, 2011 3:05 am

RC Saumarez: it’s easy to fill one’s mouth and keyboard with big-sounding names. Please explain to us poor ignoramuses what you know of “Taylor’s theorem”…
As I’ve already said, the point is not that one can approximate the model’s output. The point is that one can approximate it so well AND as simply as Willis did.

RC Saumarez
May 18, 2011 8:26 am

@Maurizio Morabito
I’m sorry that you feel I’ve filled my mouth with big sounding names. Taylor and Euler were mathematicians of the 18th century and laid the ground for a large body of computational mathematics.
Taylor’s theorem is quite simple:
If you know the value of a function, say temperature T, at a time t, can you establish the temperature at a later time t+delta_t?
Taylor’s theorem states that:
T(t+delta_t) = T(t) + delta_t*T’ + delta_t^2*T”/2! + delta_t^3*T”’/3! + …
where T’, T”, T”’ are the first, second and third derivatives of T with respect to t.
The point about this theorem is that for small values of delta_t, a linear approximation is valid, and this is the basis for the solution of differential equations by (crude) multistep methods.
This is essentially what has been done in the analysis presented here and does not imply that the system is linear. Rather, it is an assumption that arises from the use of Taylor’s theorem in the numerical solution of a first order differential equation.
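[For readers who have not met it, here is a bare-bones R sketch of the Euler step being described: keep only the first-order term of the Taylor expansion and march forward in time. The relaxation equation and parameter values are generic placeholders, not the climate model’s actual equations.]

euler_solve <- function(f, T0, times) {
  temp <- numeric(length(times))
  temp[1] <- T0
  for (i in seq_along(times)[-1]) {
    dt      <- times[i] - times[i - 1]
    temp[i] <- temp[i - 1] + dt * f(times[i - 1], temp[i - 1])   # T(t+dt) ~ T(t) + dt*T'(t)
  }
  temp
}

# example: tau * dT/dt = lambda*F_step - T, a simple lagged response to a step in forcing
tau <- 3; lambda <- 0.5; F_step <- 4
dTdt <- function(t, temp) (lambda * F_step - temp) / tau

t_grid <- seq(0, 20, by = 0.5)
round(euler_solve(dTdt, T0 = 0, times = t_grid), 3)   # relaxes toward lambda * F_step = 2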
To be rigorous, a linear system must show:
a) Proportionality, i.e.: if you put in twice the input, you get twice the output.
b) Stationarity: the system does not change its properties over time.
c) Superposition, i.e.: if the response to x(t) is y(t) and the response to xx(t) is yy(t), then the response to x(t)+xx(t) is y(t)+yy(t).
There is nothing in the analysis presented here that distinguishes the difference between the criteria of a truly linear system and the fact that a differential equation can be solved numerically by using the first term of Taylor’s theorem, i.e: Euler’s method.
I regard this whole approach as superficial and wrong. If the mathematics of climate models is to be addressed, I would argue that this is better done by people with a proper background in the relevant mathematics and experience in its use.

BLouis79
May 18, 2011 5:30 pm

@JoelShore “the emissivity of most terrestrial surface…or most surfaces period…in the IR is very close to 1”
I am bothered by the use of albedo and emissivity in an apples vs oranges way. It is generally accepted that albedo very close to 0.3 reduces sunlight absorption. No problem with that. We can understand measurement of albedo from space is consistent with this. But quoting an emissivity of 1 assumes the land surface is the emitter. From space, the emitter is the earth including atmosphere (clouds). Clearly the emissivity of earth as a whole as seen from space is not 1 (a blackbody); it is less, that of a greybody.
As I understand Kirchhoff’s Law (of thermal radiation), it is a law applying to equilibrium: even a greybody cannot absorb more than it emits, or vice versa. One can’t use a greybody for absorption and a blackbody for emission when considering the energy of the earth in radiative thermal equilibrium with the sun.

May 19, 2011 3:22 am

RC Saumarez – even Wikipedia shows how a simple non-linearity can’t be much reduced to any number of Taylor components.
Once again: what really stinks is how easy it was to use just the first term. Add to that the fact that it can be easily referred to the original “forcings” and there is something quite rotten in Climatemark.

David
May 19, 2011 3:27 am

Joel Shore states, “Is there the possibility of some surprises as we embark on our little “experiment” with the earth’s climate system? Absolutely…But such surprises seem more likely to be unpleasant than pleasant.”
The models predict much that is unpleasant. Thus far the observations have been, on the whole, pleasant, even if difficult to ascribe to CO2.
The decadal surprise has been that since 1998 the atmosphere has slightly cooled.

Joel Shore
May 19, 2011 5:28 pm

BLouis79 says:

As I understand Kirchhoff’s Law (of thermal radiation), it is a law applying to equilibrium: even a greybody cannot absorb more than it emits, or vice versa. One can’t use a greybody for absorption and a blackbody for emission when considering the energy of the earth in radiative thermal equilibrium with the sun.

Kirchhoff’s Law says that absorptivity and emissivity have to be equal at a given wavelength. The emission spectra of the sun and the earth are very different…with an extremely small overlap. In the visible, UV, and near IR of the solar radiation, the earth’s surface is not that close to a perfect blackbody (although it isn’t that far away: it’s something like 12% reflectance). In the mid- and far-IR, it much better approximates a blackbody. Take snow, for example: It can be a very good reflector in the visible but is nearly a perfect absorber for the wavelengths of significance for terrestrial radiation.
David says:

The decadal surprise has been that since 1998 the atmosphere has slightly cooled.

Actually, that is not in fact true anymore. Furthermore, it is not a surprise that one can cherry-pick periods over which the trend is not very different from zero or is even negative, especially as one makes the length of the period shorter. The same thing is seen for climate models that are forced with steadily-increasing greenhouse gases. The shorter the period of time, the larger the error bars on the trend estimate.

BLouis79
May 21, 2011 12:36 am

@JoelShore “Kirchhoff’s Law says that absorptivity and emissivity have to be equal at a given wavelength.”
If you take that strictly to be true (as a variation on the looser Kirchhoff’s Law which is happy for the integrated absorptivity and emissivity to be equal across a range of wavelengths), then how can the earth absorb as a blackbody with albedo 0.3 from outside the atmosphere and emit as a “nearly black” greybody from the surface inside the atmosphere without creating a big hole (“energy imbalance”) caused by the clouds and atmosphere accounted for in absorption and neglected in emission?
Simplistically, the surface has to emit more causing warming nearer the surface in order for the radiative equilibrium to occur above the clouds after clouds have absorbed some of the outgoing radiation.

Joel Shore
May 22, 2011 2:04 pm

BLouis79 says:

Simplistically, the surface has to emit more causing warming nearer the surface in order for the radiative equilibrium to occur above the clouds after clouds have absorbed some of the outgoing radiation.

Yes…And, not just clouds but also greenhouse gases absorb (and re-emit) outgoing terrestrial radiation. The fact that the surface must be warmer than in the case when there is no such absorption is what is called “the greenhouse effect”.

BLouis79
May 23, 2011 6:59 am

@JoelShore “not just clouds but also greenhouse gases absorb (and re-emit) outgoing terrestrial radiation”
Seeing that land and water absorb heat energy from the sun, and that clouds block both incoming and outgoing radiation, it is perfectly feasible that the “warmer” surface could be created with ZERO greenhouse gases. So would anyone dare run a GCM allowing for this effect and a ZERO GHG effect (not counting clouds as gases)?

Nico Baecker
May 23, 2011 11:58 am

Willis,
a funny trick. You reproduced the temperature data T(n) mainly by themselves. Your formula for T(n+1) is a hidden autoregressive process of order 2, using the preceding data T(n) and T(n-1) and modified by a “disturbing term” containing the external forcings.
You can get an even higher correlation factor r of 0.997 if you set all forcings to zero and optimize the coefficients for T(n) and T(n-1) independently.

May 23, 2011 2:47 pm

“You can get an even higher correlation factor r of 0.997 if you set all forcings to zero and optimize the coefficients for T(n) and T(n-1) independently.”
And then, …….voila!
http://rocketscientistsjournal.com/2010/03/sgw.html

May 23, 2011 3:27 pm

Jeff Glassman:
“IPCC’s (HadCRUT3) smoothed model for Earth’s temperature has a noise power of 0.0782^2 = 0.00614. Compared to the noise power in the original annual data, 0.2392^2 = 0.0573, smoothing reduces the variance in the temperature data by 89.3%. The noise power in the two-stage estimate from the Sun is 0.110 ^2 = 0.0120, a variance reduction 79.0%. The Sun provides an estimate of Earth’s global average surface temperature within 10% as accurate as IPCC’s best effort using temperature measurements themselves. Estimating Earth’s temperature from the Sun is to that extent as good as representing Earth’s temperature by smoothing actual thermometer readings. Moreover, to the extent that man might be influencing Earth’s temperature, the effect would lie within that 10% not taken into account by the models, at most one eighth the effect of the Sun. Any reasonable model for Earth’s climate must take variability in solar radiation into account before considering possible human effects.”

Nico Baecker
May 24, 2011 5:01 am

Hi Ecoeng,
sorry, but this paper of Jeff Glassman is no science.
It is not very surprising to find something that fits well to the temperature record over a certain time frame by simply playing around long enough with some other data.
Btw, my comment regarding the Eschenbach method does not mean that the temperature data are fully explainable by the sun’s activity or something else.
I just want to emphasize that his formula relies mainly on the data itself (= autoregressive approach).

Joel Shore
May 24, 2011 5:29 am

BLouis79 says:

Seeing that land and water absorb heat energy in the sun, and that clouds block both incoming and outgoing radiation, it is perfectly feasible that the “warmer” surface could be created with ZERO greenhouse gases.

The amount of greenhouse effect due to clouds vs. that due to greenhouse gases is quite well-characterized. In particular, note that the net effect of clouds is cooling (i.e., their blocking of solar radiation is a somewhat greater effect overall than their effect on outgoing longwave radiation). And, if you compute the temperature necessary for an earth with an albedo due only to its surface, you still get a value below the actual surface temperature. (In fact, even if you assume the earth is a perfect blackbody, i.e., has zero albedo, you get a temperature below the actual surface temperature.)
Besides which, it would be hard to explain how greenhouse gases magically would not affect the surface temperature. I know that this is desperately what you want to believe because of your ideology, but the science is what it is independent of what you want to believe.

Nico Baecker
May 24, 2011 6:20 am

Here is, in more detail, what is behind Eschenbach’s approach. He suggested:
T(n+1) = T(n) + λ ∆F(n+1)/τ + ∆T(n) exp(-1/τ)
that means:
T(n+1) = T(n) + [T(n) – T(n-1)] exp(-1/τ) + λ ∆F(n+1)/τ
T(n+1) = [1 + exp(-1/τ)] T(n) – exp(-1/τ) T(n-1) + λ ∆F(n+1)/τ
This has the form:
T(n+1) = A T(n) + B T(n-1) + C ∆F(n+1)
with
A = 1 + exp(-1/τ)
B = -exp(-1/τ)
C = λ/τ
By optimizing A (and hence B) and C, Eschenbach got a correlation factor of 0.995. In this approach A and B are not independent fit parameters, because of the form of the chosen formula.
If you instead set C = 0 and optimize A and B independently, you get r = 0.997 in the best case. The autoregressive approach by itself, without taking the forcings into account, therefore gives a higher correlation.
This demonstrates that Eschenbach’s approach is doubtful, because it uses internal information of the data instead of describing them by external forcings as climate models do!
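[A minimal R sketch of the comparison Nico describes. The vectors temp and dF below are random stand-ins for the digitized hindcast temperatures and the year-to-year forcing changes, not the actual series, so the exact correlation values will differ; the point is only the mechanics of fitting with and without the forcing term.]

set.seed(1)
temp <- cumsum(rnorm(100, 0, 0.05))   # stand-in for the model temperature hindcast
dF   <- rnorm(100, 0.02, 0.1)         # stand-in for the year-to-year change in total forcing

n  <- length(temp)
y  <- temp[3:n]        # T(n+1)
x1 <- temp[2:(n-1)]    # T(n)
x2 <- temp[1:(n-2)]    # T(n-1)
x3 <- dF[3:n]          # dF(n+1)

fit_full <- lm(y ~ 0 + x1 + x2 + x3)   # A, B and C fitted (A, B free here, unlike Willis' constrained form)
fit_ar2  <- lm(y ~ 0 + x1 + x2)        # C = 0: the purely autoregressive fit

cor(fitted(fit_full), y)   # goodness of fit with the forcing term included
cor(fitted(fit_ar2), y)    # goodness of fit from the lagged temperatures alone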

BLouis79
May 24, 2011 6:33 am

@JoelShore
I am perfectly happy for science to reveal what is. At this point, the major factors appear to be solar irradiance and cosmic ray effects on clouds. Cosmic ray effects on clouds are notably missing from the CCSM3 model input data.
I have not seen any experimental laboratory verification of the postulated “greenhouse” mechanism in action – magnitude of measurable temperature change caused by small amounts of CO2+/- other gases. Perhaps someone can point me to some papers. It is unclear if there is even agreement about what the postulated “greenhouse” mechanism is amongst proponents.

Nico Baecker
May 24, 2011 11:33 am

Hi Blouis,
your questions have simple answers:
“Cosmic ray effects on clouds are notably missing from the CCSM3 model input data.”
Clear, because nobody knows the physical mechanism and its strength.
“I have not seen any experimental laboratory verification of the postulated “greenhouse” mechanism in action ”
Obviously, a laboratory verification of the greenhouse effect of the atmosphere would need an atmosphere with its most essential natural features reproduced in the lab: a vertical temperature gradient, a tropospheric ozone layer, solar irradiance, and absorption and emission of thermal and solar radiation through kilometers of a vertical air column.

May 24, 2011 3:50 pm

Nico Baecker
“This demonstrates that Eschenbachs approach is doubtful because it uses internal information of the data instead of describing them by external forcings like climate models do!”
Deeply contradictory isn’t it that Nico Baecker can ‘dismiss’ Dr. Jeff Glassman’s work relating the ‘official’ IPCC AR4 Wang et al (2005) curve for the Sun’s activity (TSI) to the same IPCC AR4 ‘official’ HadCRUT3 temperature record since 1850 on the spurious grounds that it is ‘no science’.
Obviously Baecker is unable to even estimate the odds against this and unable to even admit that solar activity is a forcing.
Furthermore, in his naivety Baecker is obviously unaware that Glassman is a respected retired scientist (former head Hughes Aerospace Science Division) who was one of the 20th century’s leading experts on signal deconvolution techniques.
In fact, if Nico Baecker has, and uses, a cell phone he is unwittingly using every day one of the very mathematical techniques Glassman developed!
The fact that Glassman used a mainstream deconvolution technique to identify the imprint of the Wang et al (2005) TSI plot is not ‘no science’. It is good, standard science.
In addition it uses exactly an ‘external forcing’ Nico asked for!!!!
If anything, Glassman’s approach has provided good evidence that there probably is some amplification of the solar forcing signal going on. A significant number of recent mainstream literature papers on whether there is a floor or not in the heliospheric field (despite Svalgaard saying yes, and setting it higher than it probably is even if there is one) are highly relevant to this question.
Then of course there are the now well reviewed 10Be, 14C and tree ring records of the last 9300 years (e.g. Steinhilber et al). How do you explain the solar activity and temperature record of the last 9300 years, in the absence of any evidence of CO2 forcing, if there isn’t any amplification of the solar signal occurring? In a word, you can’t!
Clearly there are no simple answers even now and if there were Nico Baecker certainly doesn’t have them.

Joel Shore
May 24, 2011 7:06 pm

Ecoeng says:

Obviously Baecker is unable to even estimate the odds against this and unable to even admit that solar activity is a forcing.

Great…So apparently you are? Please tell us what those odds are and how you calculated them.

Furthermore, in his naivety Baecker is obviously unaware that Glassman is a respected retired scientist (former head Hughes Aerospace Science Division) who was one of the 20th century’s leading experts on signal deconvolution techniques.

There seems to be little evidence to verify this claim on the web. By what standard is he one of the leading experts…Did he publish a book on it, did he publish papers on it, is he on some path-breaking patents regarding it? (I couldn’t find any patents by a Glassman with the assignee being Hughes in the USPTO database.)
Frankly, from looking at his website, he appears to be pretty much of a crackpot who doesn’t even accept that humans are responsible for the rise of CO2 levels in the atmosphere…That puts him pretty far out there.

May 24, 2011 7:25 pm

Joel Shore says:
“…humans are responsible for the rise of CO2 levels in the atmosphere…”
Classic misdirection [“Look over there! A kitten!”]
The real question is: what empirical, testable evidence is there that definitively shows that CO2 is harmful?
Joel Shore always avoids providing any evidence of global damage due to CO2, simply because there is no such evidence. Not to be too hard on Joel; no one else has any evidence of global harm from CO2, either.

Nico Baecker
May 25, 2011 1:54 am

Hi Ecoeng,
Be serious. I have no interest in dealing with a paper which at first glance contains so many crude statements and is not published in a scientific journal with review by experts. I’m interested in scientific findings and not in private opinions.
To get the interest of the scientific community, the author has, in my opinion, to answer at minimum the following obvious questions:
- what is the physical process behind this, or what could it be?
- why does this process select these special periods of 134 and 46 years and this special TSI derivative, and not anything else?
- can artifacts of the data treatment cause these findings? Note that the used series spans 160 years and that the length of the moving average, 134 years, is nearby. Every communications engineer with experience in signal analysis is aware of the influence of a limited time frame on the spectral signature of a signal, would extensively investigate potential trouble with that, and would present those results. Thus, I miss clear and convincing arguments to exclude the possibility that the findings are, in the end, rubbish.
That’s all I have to say about this paper. I would appreciate if we come back to the original topic of this blog regarding Eschenbach’s funny formula.

Joel Shore
May 25, 2011 10:41 am

Smokey says:

Classic misdirection [“Look over there! A kitten!”]

It is not misdirection when we are not addressing what Smokey wants to talk about in every thread. What we are trying to establish here is whether some guy (Jeffrey Glassman) who posts some un-peer-reviewed stuff up on the internet looks like he is any sort of credible source or whether he says things that every reasonable person knows is nonsense.

May 25, 2011 11:37 am

As always, Joel Shore avoids posting evidence, per the scientific method, showing global damage from CO2. As I stated, Joel’s comments are classic misdirection: instead of saying, “Look, a kitten!”, Joel Shore says, “Look, Dr Glassman!”
Produce measurable, verifiable evidence of global harm from CO2 — or admit that CO2 is harmless.

BLouis79
May 25, 2011 4:44 pm

Cosmic ray effects on clouds appear to be backed by solid science. Shaviv appears to have data to enable him to correct for this effect.
http://www.phys.huji.ac.il/%7Eshaviv/articles/2004JA010866.pdf
It would be interesting to see Willis’ model run with just solar forcing and cosmic ray effect.

Nico Baecker
May 26, 2011 1:01 am

Hi BLouis79,
interesting, but as explained above, there is little room left to improve the correlation factor of Willis’ model by adding more forcings, because the main portion of the goodness of fit is not due to physical driving causes at all.

May 26, 2011 6:45 am

Joel Shore
“There seems to be little evidence to verify this claim on the web. By what standard is he one of the leading experts…Did he publish a book on it, did he publish papers on it, is he on some path-breaking patents regarding it? (I couldn’t find any patents by a Glassman with the assignee being Hughes in the USPTO database.)
Frankly, from looking at his website, he appears to be pretty much of a crackpot who doesn’t even accept that humans are responsible for the rise of CO2 levels in the atmosphere…That puts him pretty far out there.”
Dr. Jeffrey A. Glassman is one of the giants of 20th century signal processing. He is the (sole) author (1970) of one of the highest citation index papers in the history of FFTs (Fast Fourier Transforms) and his general N point FFT is still the fastest algorithm available for any length input series – hence its widespread use in signal processing. There is no need to go into the many other highpoints of Glassman’s outstanding career in applied avionics etc.
Childish utterings by types like Joel Shore such as the one quoted above are standalone proofs of what happens when babes would enter the woods……meaning nothing much at all except to signal (pun intended) the quiet end of any decent blog thread.

Joel Shore
May 26, 2011 4:12 pm

Well, I am glad to hear that Glassman was a giant in signal processing back in his day of 1970. I guess I have probably used some version of his FFT algorithms without knowing that he was the inventor.
However, his utterings in regard to climate science seem to be junk…which is of course why the only place they are “published” is on the web.