Guest Post by Willis Eschenbach
In my previous post, A Longer Look at Climate Sensitivity, I showed that the match between lagged net sunshine (the solar energy remaining after albedo reflections) and the observational temperature record is quite good. However, there was still a discrepancy between the trends, with the observational trends being slightly larger than the calculated results. For the NH, the difference was about 0.1°C per decade, and for the SH, it was about 0.05°C per decade.
I got to thinking about the “exponential decay” function that I had used to calculate the lag in warming and cooling. When the incoming radiation increases or decreases, it takes a while for the earth to warm up or to cool down. In my calculations shown in my previous post, this lag was represented by a gradual exponential decay.
But nature often doesn’t follow quite that kind of exponential decay. Instead, it quite often follows what is called a “fat-tailed”, “heavy-tailed”, or “long-tailed” exponential decay. Figure 1 shows the difference between two examples of a standard exponential decay, and a fat-tailed exponential decay (golden line).
Figure 1. Exponential and fat-tailed exponential decay, for values of “t” from 1 to 30 months. Lines show the fraction of the original amount that remains after time “t”. Line with circles shows the standard exponential decay, from t=1 to t=20. Golden line shows a fat-tailed exponential decay. Black line shows a standard exponential decay with a longer time constant “tau”. The “fatness” of the tail is controlled by the variable “c”.
Note that at longer times “t”, a fat-tailed decay function gives the same result as a standard exponential decay function with a longer time constant. For example, in Figure 1 at “t” equal to 12 months, a standard exponential decay with a time constant “tau” of 6.2 months (black line) gives the same result as the fat-tailed decay (golden line).
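To make that equivalence concrete, here is a minimal sketch in Python. The golden line's parameters are not stated in the post, so the tau = 4 months and c = 0.6 used here are my own illustrative assumptions, chosen so the numbers line up with the t = 12 example above:

import math

def fat_tailed_decay(t, tau, c):
    # stretched exponential: fraction remaining after time t
    return math.exp(-((t / tau) ** c))

t = 12.0                                           # months
remaining = fat_tailed_decay(t, tau=4.0, c=0.6)    # assumed illustrative parameters

# the standard decay exp(-t/tau) leaves the same fraction when tau = -t / ln(remaining)
tau_equivalent = -t / math.log(remaining)
print(round(remaining, 3), round(tau_equivalent, 1))   # about 0.145 remaining, tau of about 6.2 months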
So what difference does it make when I use a fat-tailed exponential decay function, rather than a standard exponential decay function, in my previous analysis? Figure 2 shows the results:
Figure 2. Observations and calculated values, Northern and Southern Hemisphere temperatures. Note that the observations are almost hidden by the calculation.
While this is quite similar to my previous result, there is one major difference: the trends fit better. In my previous results, the difference in the trends was just barely visible. But when I use a fat-tailed exponential decay function, the difference in trend can no longer be seen. The trend in the NH is about three times as large as the trend in the SH (0.3°C vs 0.1°C per decade). Despite that, using solely the variations in net sunshine, we are able to replicate the trend in each hemisphere exactly.
Now, before I go any further, I acknowledge that I am using three tuned parameters. The parameters are lambda, the climate sensitivity; tau, the time constant; and c, the variable that controls the fatness of the tail of the exponential decay.
Parameter fitting is a procedure that I’m usually chary of. However, in this case each of the parameters has a clear physical meaning, a meaning which is consistent with our understanding of how the system actually works. In addition, there are two findings that increase my confidence that these are accurate representations of physical reality.
The first is that when I went from a regular to a fat-tailed distribution, the climate sensitivity did not change for either the NH or the SH. If the sensitivities had changed radically, I would have been suspicious of the introduction of the variable “c”.
The second is that, although the calculations for the NH and the SH are entirely separate, the fitting process produced the same “c” value for the “fatness” of the tail, c = 0.6. This indicates that this value is not varying just to match the situation, but that there is a real physical meaning for the value.
Here are the results using the regular exponential decay calculations:

                     SH             NH
lambda               0.05           0.10      °C per W/m2
tau                  2.4            1.9       months
RMS residual error   0.17           0.26      °C
trend error          0.05 ± 0.04    0.11 ± 0.08   °C per decade (95% confidence interval)
As you can see, the error in the trends, although small, is statistically different from zero in both cases. However, when I use the fat-tailed exponential decay function, I get the following results.
                     SH             NH
lambda               0.04           0.09      °C per W/m2
tau                  2.2            1.5       months
c                    0.59           0.61
RMS residual error   0.16           0.26      °C
trend error          -0.03 ± 0.04   0.03 ± 0.08   °C per decade (95% confidence interval)
In this case, the error in the trends is not different from zero in either the SH or the NH. So my calculations show that the value of the net sun (solar radiation minus albedo reflections) is quite sufficient to explain both the annual and decadal temperature variations, in both the Northern and Southern Hemispheres, from 1984 to 1997. This is particularly significant because this is the period of the large recent warming that people claim is due to CO2.
Now, bear in mind that my calculations do not include any forcing from CO2. Could CO2 explain the 0.03°C per decade of error that remains in the NH trend? We can run the numbers to find out.
At the start of the analysis in 1984 the CO2 level was 344 ppmv, and at the end of 1997 it was 363 ppmv. If we take the IPCC value of 3.7 W/m2 per doubling, this is a change in forcing of log2(363/344) * 3.7 ≈ 0.28 W/m2 over the period, or roughly 0.2 W/m2 per decade. If we assume the sensitivity determined in my analysis (0.09°C per W/m2 for the NH), that gives us a trend of about 0.02°C per decade from CO2. This is smaller than the trend error for either the NH or the SH.
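To check the arithmetic, here is a minimal sketch in Python; the 14-year span and the use of the corrected NH sensitivity are my own readings of the text above, not values stated as such in the post:

import math

c_start, c_end = 344.0, 363.0    # ppmv CO2 in 1984 and at the end of 1997
per_doubling = 3.7               # W/m2 per doubling of CO2 (IPCC value cited above)
sensitivity_nh = 0.09            # °C per W/m2, NH value from the fat-tailed fit
decades = 14.0 / 10.0            # 1984 through 1997, taken as a 14-year span

delta_forcing = math.log2(c_end / c_start) * per_doubling   # about 0.29 W/m2 in total
co2_trend = (delta_forcing / decades) * sensitivity_nh      # about 0.02 °C per decade
print(round(delta_forcing, 2), round(co2_trend, 2))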
So it is clearly possible that CO2 is in the mix, which would not surprise me … but only if the climate sensitivity is as low as my calculations indicate. There’s just no room for CO2 if the sensitivity is as high as the IPCC claims, because almost every bit of the variation in temperature is already adequately explained by the net sun.
Best to all,
w.
PS: Let me request that if you disagree with something I’ve said, QUOTE MY WORDS. I’m happy to either defend, or to admit to the errors in, what I have said. But I can’t and won’t defend your interpretation of what I said. If you quote my words, it makes all of the communication much clearer.
MATH NOTES: The standard exponential decay after a time “t” is given by:
e^(-1 * t/tau) [ or as written in Excel notation, exp(-1 * t/tau) ]
where “tau” is the time constant and e is the base of the natural logarithms, ≈ 2.718. The time constant tau and the variable t are in whatever units you are using (months, years, etc). The time constant tau is a measure that is like a half-life. However, instead of being the time it takes for something to decay to half its starting value, tau is the time it takes for something to decay exponentially to 1/e ≈ 1/2.7 ≈ 37% of its starting value. This can be verified by noting that when t equals tau, the equation reduces to e^-1 = 1/e.
For the fat-tailed distribution, I used a very similar form by replacing t/tau with (t/tau)^c. This makes the full equation
e^(-1 * (t/tau)^c) [ or in Excel notation exp(-1 * (t/tau)^c) ].
The variable “c” varies between zero and one to control how fat the tail is, with smaller values giving a fatter tail.
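Putting the two forms side by side, here is a minimal sketch in Python; the tau and c values are illustrative, in the range of the fitted values in the tables above:

import math

def standard_decay(t, tau):
    # fraction remaining after time t under a standard exponential decay
    return math.exp(-t / tau)

def fat_tailed_decay(t, tau, c):
    # stretched-exponential form from the math notes; smaller c means a fatter tail
    return math.exp(-((t / tau) ** c))

print(standard_decay(2.0, 2.0))            # at t = tau, about 0.37 (i.e. 1/e) remains
print(standard_decay(12.0, 2.0))           # about 0.0025 remains after 12 months
print(fat_tailed_decay(12.0, 2.0, 0.6))    # about 0.053 remains: the fatter tail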
[UPDATE: My thanks to Paul_K, who pointed out in the previous thread that my formula was slightly wrong. In that thread I was using
∆T(k) = λ ∆F(k)/τ + ∆T(k-1) * exp(-1 / τ)
when I should have been using
∆T(k) = λ ∆F(k) * (1 – exp(-1/τ)) + ∆T(k-1) * exp(-1/τ)
The result of the error is that I have underestimated the sensitivity slightly, while everything else remains the same. Instead of the sensitivities for the SH and the NH being 0.04°C per W/m2 and 0.08°C per W/m2 respectively in both the current and the previous calculations, the correct sensitivities for this fat-tailed analysis should have been 0.04°C per W/m2 and 0.09°C per W/m2. The error was slightly larger in the previous thread, increasing the corrected sensitivities there to 0.05 and 0.10 respectively. I have updated the tables above accordingly.
w.]
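For reference, here is a minimal sketch in Python of the corrected lag recursion from the update above; the forcing series is a made-up placeholder, not the actual net-sunshine data, and lambda and tau are the NH values from the fat-tailed table:

import math

def lagged_response(forcing_changes, lam, tau):
    # corrected recursion: dT(k) = lam * dF(k) * (1 - exp(-1/tau)) + dT(k-1) * exp(-1/tau)
    decay = math.exp(-1.0 / tau)
    temps = [0.0]
    for dF in forcing_changes:
        temps.append(lam * dF * (1.0 - decay) + temps[-1] * decay)
    return temps[1:]

# placeholder monthly forcing anomalies in W/m2 (not the real data)
example_forcing = [0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
print(lagged_response(example_forcing, lam=0.09, tau=1.5))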
[ERROR UPDATE: The headings (NH and SH) were switched in the two blocks of text in the center of the post. I have fixed them.]
P. Solar says:
June 4, 2012 at 3:15 am
Thanks for that, I had assumed it was a satellite troposphere record. For a surface temp record, I think black carbon could well be (and probably is) a significant common cause.
Very neat work. Congratulations. “Sun and clouds are sufficient” must be the best working hypothesis to date.
Which does not invalidate other contributions – how does CO2 contribute to cloud cover, how do cosmic rays seed clouds, how do the oceans contribute to the fat-tailed decay, are there other causes which cancel one another out … and on and on …
But in the meantime, the Willis Eschenbach observation “Sun and Clouds are Sufficient” rules OK. Occam smiles from Heaven, as we Keep it Simple.
Perhaps in a century or two we will be thinking that Climate is simple – only Weather is complex.
richardscourtney says:
June 4, 2012 at 4:07 am
“The only thing I can think of says X so Y must be correct” ignores everything one has failed to “think of”.
As I said in the post you are addressing
we need to avoid jumping to undue certainty (which others did with the resulting creation of the IPCC).
I wasn’t jumping to undue certainty. I was looking to open a discussion about what factors could be a common cause of both decreased albedo and increased temperatures.
Willis, could you post a link to the albedo data set you used?
The paper says they have full gridded output; that's why I suggested you look at the tropics in the last thread. Clearly they have not released the full monty.
It may be worth asking for the full data; they seem to be into open publishing of their work, which is very refreshing in this field.
“Philip Bradley says:
June 4, 2012 at 1:46 am
I went back thru the previous posts and couldn’t see the source of your temperature data.
What does surprise me is that your model seems to preclude ocean variability/cycles (ie variable heat release from the oceans) having a significant effect on atmospheric temperatures.”
Models cannot “preclude” anything. What the model says is that ocean variability/cycles are not necessary to explain the variability in temperature during the period under study. Ocean variability could be (and probably is) part of the overall picture. But they were not significant in explaining the variability during the time frame in question, if I understand the post correctly.
Mr. Eschenbach,
Just as food for thought: how about a 6 month or so prediction of future global temperatures based on this ‘fat tailed decay’ model?
It would seem to be a quick and easy way to provide background data.
Hi Richard,
You say: “But Willis has NOT merely conducted a “curve fit” and ‘c’ is not “an as-yet unexplained variable”
But Willis says: “Now, before I go any further, I acknowledge that I am using three tuned parameters.”
Both of you can't be right. He threw his toys out of the pram when Nikolov and Zeller used, as he saw it, five tuned parameters. He uses fewer than they did (and gets an impressive result, as they did), but he can't have his cake and eat it.
FWIW that doesn’t change the fact that I think Willis is absolutely right on this occasion. The correlation between decreased cloud albedo and temperatures is good (http://oi49.tinypic.com/302uzpu.jpg).
H2O is the most anomalous of liquids and is yet to be fully understood. Most are aware of its odd behaviour around 0°C and 4°C, but it has many other tricks, one occurring around 35°C, something to do with specific heat if my memory is correct. Around body temperature, water becomes very much a receptor of very long wave radiation. Water's tricks are yet to be understood by science, and as a thermostat for our planet more than a few of its tricks may be at play.
Good post Willis.
Hi Willis,

In general, I agree with Richard’s observations. In fact, I seem to be doing that with sufficient regularity that I’m starting to think that he is really a Pretty Smart Guy (don’t we always think that of those with whom we agree?:-). I would like to point out one alternative possible/probable fit that IMO is at least as likely as a fat-tailed exponential with embedded power law, although it is less parsimonious.
Consider the double exponential:
Yes, it has four parameters, but it also has a simpler physical explanation. There are two processes that both contribute towards equilibrium. One of them has a relatively short time constant, and is responsible for the steeper part of the slope. The other part has a longer time constant and is responsible for the “fat tail” — the more slowly decaying residual left over after the faster part is (mostly) gone.
It is now straightforward to come up with and/or falsify physical interpretations — The fast process is e.g. atmospheric transport and relaxation, the slow process is the longer time associated with oceanic buffering of the heat, for example (not saying that this is what it is, only one of several things it might be that can be checked by at least guestimating the relevant time constants from actual data).
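One plausible form of the double exponential described here, sketched in Python (the parameter names and the values used are illustrative assumptions, not the commenter's):

import math

def double_exponential(t, a_fast, tau_fast, a_slow, tau_slow):
    # two decaying reservoirs: a fast one (e.g. atmosphere) and a slow one
    # (e.g. ocean); four free parameters, as the comment notes
    return a_fast * math.exp(-t / tau_fast) + a_slow * math.exp(-t / tau_slow)

# illustrative values only, not fitted to anything
for month in (1, 6, 12, 24):
    print(month, double_exponential(month, a_fast=0.7, tau_fast=2.0, a_slow=0.3, tau_slow=24.0))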
Over a short time interval like the one you are fitting it might be difficult to resolve the two functional forms (or not, dunno), and I'm not certain I would consider simply winning a nonlinear regression contest sufficient evidence that one is better than another over this short an interval anyway, but over a longer interval it might be possible to get a clean separation of the two models.
The model you propose above — IIRC — is basically a "fractal" law — a law that suggests that the true dimensionality of time in the underlying process is not one. Complex systems are perfectly capable of producing fractal behavior, but a) it is a lot more difficult to analyze such behavior; b) over a long enough time, the two will be sharply differentiated; c) sooner or later one has to come up with at least a hand-waving argument for the fractal dimensionality (a scaling argument, that is, e.g. surface to volume scaling but in time?).
Exponentials, OTOH are bone simple to explain. Loss rates are proportional to the residual of the quantity. A double exponential simply implies a double reservoir.
BTW I agree with your assessment that albedo is the primary governing factor simply because the greybody temperature itself contains the albedo and it is an absolutely direct effect, one that for reasons unknown the IPCC is completely ignoring in spite of the fact that the 7% increase in albedo over the last 15 years corresponds to 2 degrees Celsius of cooling (after some still unknown lag, probable timescale decades to a century). I also commend to you Koustayannis’ work (I just posted the links on another thread, the one on CO_2 model fitting) — he provides a powerful argument that climate is governed by Hurst-Kolmogorov statistics, that is, stationary stochastic jumps driven by very slow nonstationary modulators with scaling properties. It’s tough going — this isn’t stuff in your everyday stats course — but I think the evidence (including e.g. SST graphs that he doesn’t even present) is overwhelming that he is right. In which case MOST of what we are trying to “fit” with a deterministic model is actually Ifni-spawn, pure stochastic noise…
rgb
The residuals clearly go between about -1°C and +1°C, which is a higher amplitude than the total amount of warming claimed to have occurred over that time. Plotted at this scale, you only prove that you are able to match the total warming trend (one parameter) with the three parameters you are using. That's not very hard to do.
Please provide more detailed analysis of the residual, starting with plotting it at a scale which allows resolving details, and comparison with actual temperatures. Correlation analysis of the residual with detrended temperature data might be also useful.
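A minimal sketch in Python of the kind of residual check asked for here; the series below are synthetic placeholders standing in for the observed and calculated hemispheric temperatures from the post, used only to show the mechanics:

import numpy as np

# placeholder monthly series (synthetic, for illustration only)
months = np.arange(168)     # 1984-1997 is roughly 168 months
observed = 0.01 * months + 0.2 * np.sin(2 * np.pi * months / 12) + 0.05 * np.random.randn(168)
modelled = 0.01 * months + 0.2 * np.sin(2 * np.pi * months / 12)

residual = observed - modelled

# detrend the observations and correlate the residual with the detrended series
trend = np.polyfit(months, observed, 1)
detrended_obs = observed - np.polyval(trend, months)
corr = np.corrcoef(residual, detrended_obs)[0, 1]

print(residual.min(), residual.max(), corr)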
Thanks for the update. If I understand the fat-tailed exponential decay right, would it not be simpler to explain it as two exponential decays, one with a time constant of about 2.6 months or so over the oceans and one of the order of 1.2 months over land? This would also give the average temperature rise difference between ocean and land.
Well Willis … the proof of the pudding … how does your model do with the temperatures from 1997 to the present? Maybe I'm reading it wrong, but am I correct in assuming that these data are strictly for the period 1984-1997?
Just as we observe with AGW models, the proof of validity is how well they predict the future. If your model performs well between 1984 and 1997, then it should also model the period from 1997 to the present equally well.
Your work is brilliant Willis! I look forward to your publishing this.
Philip Bradley and Harriet Harridan:
Philip, thankyou for your clarification at June 4, 2012 at 4:37 am. I admit that I did misunderstand your earlier post so I am pleased to have been corrected.
Harriet, at June 4, 2012 at 5:12 am you say to me
Actually, in context I think Willis and I are both right.
As I said, Willis attempted to determine the minimum number of variables required to model mean global temperature variation. In his original analysis he used only two parameters to achieve a match with mean global temperature.
And, as I also said, Willis refined that model by adjusting his arbitrary exponential decay rate to obtain an even better fit. His third variable was introduced to obtain that modification.
He discusses that saying
So, we are “both right” and Willis justifies his ADDITION of a third parameter as a refinement to his model. Remove that addition and we are left with his original and sufficient match with only two parameters.
You go on to say of Willis
He is not asking for “his cake and eat it”.
• Nikolov and Zeller used 5 parameters to adjust a model. Anything can be adjusted to match anything by tuning 5 variables.
• Willis Eschenbach investigated the minimum number of variables needed to match variation in mean global temperature, and he determined that he could do that using only two. No match is perfect, and he modified his model to improve his match by use of a justified third parameter. Remove his modification and his finding remains valid.
So, your assertion of “cake and eat it” is a comparison of chalk and cheese.
Additionally, I note that you say
OK. But you do not add to your arguments by inclusion of phrases such as “threw his toys out of the pram”. He, his behaviour and his work are different things. We are trying to discuss his work here, and I remind that in this thread alone I have already called for a realistic appraisal of that work on three occasions.
Richard
I look upon your calculations as being similar to a transformation of coordinate systems.
Instead of using forcings for things like aerosols, CO2, and any associated feedbacks, you have simply moved onward to the total effect on albedo.
So the CO2- related question changes from “what is the forcing and feedbacks from CO2?” to “What are the albedo changes due to CO2?”
Harriet Harridan makes a good point. I pointed out to Willis on one of the N&K threads that the fact that they were curve fitting did not, by itself and without further analysis, invalidate their result. Willis came unglued.
I think Willis has done an outstanding job here. It’s too bad he let his emotions take over previously. There was also a physical basis that existed with the N&K equation as well. It was the ideal gas law. For the record, I think overall the N&K theory is completely wrong, however, the relationship they found might have merit when looked at from a different angle. It’s too bad Willis refused to even consider that. Maybe this experience will cause him to rethink that situation. That would be a positive on top of this great analysis.
Finally, it would be interesting to take this theory and use it to predict the future. We have several predictions of Solar changes to use. If an albedo cycle could be added in, it would be interesting indeed.
I think this may be right, but it is hard to use it to make future predictions as you have to have the albedo figure to get temperature. We could make a prediction based on a SWAG of the future albedo, but that isn’t gaining a lot. Or I am not understanding something (always a likelihood in any given situation).
I also don't see this as having any real inconsistencies with Svensmark, because his work describes an external driver of albedo that would in effect provide a new set point while the Galactic Cosmic Ray levels persisted. In a case where GCRs were more active in striking the atmosphere (whatever the source), more clouds would form than at the normal feedback point. This would lower temperatures and the system would respond with less natural cloudiness. There would still be more cloud than normal for the temperature, however, so the system would not respond all the way back to the previous temperature, and how much cloudiness the GCRs induced would determine the new, lower feedback set point of the system. Contrariwise, if the GCRs are less active, all cloud formation would be based solely on temperature and the set point would move back up to the system "normal". So indeed prolonged active GCRs could induce ice ages, while quiet GCRs could induce climate optimums (assuming a constant sun – which isn't true, but may be the driver for GCR activity).
In some ways the above indicates that Willis’ conjecture and Svensmark’s work could fit nicely in explaining the mechanism of long term climate. Much work still remains, and it certainly could still all wind up as a dead end, but we won’t know unless we can come up with some experimental test of the hypothesis.
richardscourtney says:
June 4, 2012 at 6:32 am
Philip, thankyou for your clarification at June 4, 2012 at 4:37 am.
That’s fine Richard. I appreciate your role as the epistemological police.
Is the trend error the size of the combined greenhouse gas forcing (rather than just CO2)?
http://www.esrl.noaa.gov/gmd/aggi/
Way off topic.
From the solar reference page:
Current solar status: M class flare
Geomagnetic conditions: Storm!
Charlie A: I don't really see CO2 as being necessarily a driver here. The entire feedback regime could be affected by CO2, but doesn't have to be. There could be any number of things driving the energy around the system, and a trace gas like CO2 doesn't have to figure at all. In order to make the statements you made, you have to first buy into the CO2-is-evil meme, and despite all the IPCC's attempts I don't for a minute accept it as proven.
To provide for the falsifiability of claims, this study needs a statistical population. It doesn't have one.
The “fat-tailed” distributions are usually distribution functions that do not live in L2 when they are probability functions. Such functions lead to mathematical chaos in a mathematical process. I am not sure what the implication of a fat-tailed distribution would be here, but it is something to think about.
Actually this answers a question I asked in a previous post. Albedo is the sum of multiple causes, part of which is indirectly due to the increased CO2, just because of the physics. However, there is some residual warming because of the CO2 increase that is not counteracted by the change in albedo. So, all things being equal (and of course they probably won't be in the long term), there will be an increase of 0.3 degrees in 100 years due to the CO2. I'm sweating already.
“In some ways the above indicates that Willis’ conjecture and Svensmark’s work could fit nicely in explaining the mechanism of long term climate. Much work still remains, and it certainly could still all wind up as a dead end, but we won’t know unless we can come up with some experimental test of the hypothesis.”
We already have Scafetta's projections, based on GCR changes due to planetary motions driving Svensmark's observations. The proof of Willis' work, and how it fits in, is happening now. In three years GCRs and albedo will either follow Scafetta, and then Willis, or not. I expect it will.