Sun and Clouds are Sufficient

Guest Post by Willis Eschenbach

In my previous post, A Longer Look at Climate Sensitivity, I showed that the match between lagged net sunshine (the solar energy remaining after albedo reflections) and the observational temperature record is quite good. However, there was still a discrepancy between the trends, with the observational trends being slightly larger than the calculated results. For the NH, the difference was about 0.1°C per decade, and for the SH, it was about 0.05°C per decade.

I got to thinking about the “exponential decay” function that I had used to calculate the lag in warming and cooling. When the incoming radiation increases or decreases, it takes a while for the earth to warm up or to cool down. In my calculations shown in my previous post, this lag was represented by a gradual exponential decay.

But nature often doesn’t follow quite that kind of exponential decay. Instead, it quite often follows what is called a “fat-tailed”, “heavy-tailed”, or “long-tailed” exponential decay. Figure 1 shows the difference between two examples of a standard exponential decay, and a fat-tailed exponential decay (golden line).

Figure 1. Exponential and fat-tailed exponential decay, for values of “t” from 1 to 30 months. Lines show the fraction of the original amount that remains after time “t”. Line with circles shows the standard exponential decay, from t=1 to t=20. Golden line shows a fat-tailed exponential decay. Black line shows a standard exponential decay, with a longer time constant “tau”. The “fatness” of the tail is controlled by the variable “c”.

Note that at longer times “t”, a fat-tailed decay function gives the same result as a standard exponential decay function with a longer time constant. For example, in Figure 1 at “t” equal to 12 months, a standard exponential decay with a time constant “tau” of 6.2 months (black line) gives the same result as the fat-tailed decay (golden line).
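As a rough check on the comparison in Figure 1, here is a minimal sketch of the two decay functions in Python. The values tau = 4 months and c = 0.6 for the fat-tailed curve are illustrative assumptions (the exact parameters behind Figure 1 are not stated in the post); with them, the equivalent standard time constant at t = 12 months comes out at about 6.2 months, as described above.

```python
import numpy as np

def std_decay(t, tau):
    """Standard exponential decay: fraction remaining after time t."""
    return np.exp(-t / tau)

def fat_tail_decay(t, tau, c):
    """Fat-tailed (stretched) exponential decay: exp(-(t/tau)^c)."""
    return np.exp(-(t / tau) ** c)

# Illustrative parameters (assumed; the exact values behind Figure 1 are
# not given in the post): tau = 4 months, c = 0.6
tau, c = 4.0, 0.6
t = 12.0  # months

remaining = fat_tail_decay(t, tau, c)
# The standard-decay time constant that gives the same fraction at time t:
tau_equiv = -t / np.log(remaining)
print(f"fraction remaining at t = {t:.0f} months: {remaining:.3f}")
print(f"equivalent standard tau: {tau_equiv:.1f} months")   # ~6.2
```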

So what difference does it make when I use a fat-tailed exponential decay function, rather than a standard exponential decay function, in my previous analysis? Figure 2 shows the results:

Figure 2. Observations and calculated values, Northern and Southern Hemisphere temperatures. Note that the observations are almost hidden by the calculation.

While this is quite similar to my previous result, there is one major difference: the trends fit better. In my previous results the difference in the trends was just barely visible, but when I use a fat-tailed exponential decay function, the difference in trend can no longer be seen. The trend in the NH is about three times as large as the trend in the SH (0.3°C vs 0.1°C per decade). Despite that, using solely the variations in net sunshine we are able to replicate each hemisphere exactly.

Now, before I go any further, I acknowledge that I am using three tuned parameters. The parameters are lambda, the climate sensitivity; tau, the time constant; and c, the variable that controls the fatness of the tail of the exponential decay.

Parameter fitting is a procedure that I’m usually chary of. However, in this case each of the parameters has a clear physical meaning, a meaning which is consistent with our understanding of how the system actually works. In addition, there are two findings that increase my confidence that these are accurate representations of physical reality.

The first is that when I went from a regular to a fat-tailed distribution, the climate sensitivities did not change for either the NH or the SH. If they had changed radically, I would have been suspicious of the introduction of the variable “c”.

The second is that, although the calculations for the NH and the SH are entirely separate, the fitting process produced essentially the same “c” value for the “fatness” of the tail, c ≈ 0.6 (0.59 and 0.61). This indicates that this value is not varying just to match the situation, but that there is a real physical meaning for the value.
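For readers who want to see how a three-parameter fit of this kind might be set up, here is a minimal sketch. It assumes monthly net-sunshine forcing anomalies and observed temperature anomalies are available as arrays (net_sun and temp_obs are placeholders, not the actual data), and it treats the lag as a convolution with a normalized fat-tailed kernel; that is one plausible reading of the method, not necessarily the exact implementation behind the tables below.

```python
import numpy as np
from scipy.optimize import curve_fit

def lagged_response(forcing, lam, tau, c, n_lag=36):
    """Scale the forcing by the sensitivity lam after smearing it in time
    with a normalized fat-tailed decay kernel exp(-(t/tau)^c)."""
    t = np.arange(1, n_lag + 1)
    kernel = np.exp(-(t / tau) ** c)
    kernel /= kernel.sum()                      # conserve the total forcing
    lagged = np.convolve(forcing, kernel)[:len(forcing)]
    return lam * lagged

# net_sun, temp_obs: placeholder arrays of monthly anomalies (W/m2 and °C).
# (lam_fit, tau_fit, c_fit), _ = curve_fit(lagged_response, net_sun, temp_obs,
#                                          p0=[0.1, 2.0, 0.6])
```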

Here are the results using the regular exponential decay calculations:

                                       SH             NH
lambda (°C per W/m2)                   0.05           0.10
tau (months)                           2.4            1.9
RMS residual error (°C)                0.17           0.26
trend error ± 95% CI (°C per decade)   0.05 ± 0.04    0.11 ± 0.08

As you can see, the error in the trends, although small, is statistically different from zero in both cases. However, when I use the fat-tailed exponential decay function, I get the following results.

                                       SH             NH
lambda (°C per W/m2)                   0.04           0.09
tau (months)                           2.2            1.5
c                                      0.59           0.61
RMS residual error (°C)                0.16           0.26
trend error ± 95% CI (°C per decade)  -0.03 ± 0.04    0.03 ± 0.08

In this case, the error in the trends is not different from zero in either the SH or the NH. So my calculations show that the value of the net sun (solar radiation minus albedo reflections) is quite sufficient to explain both the annual and decadal temperature variations, in both the Northern and Southern Hemispheres, from 1984 to 1997. This is particularly significant because this is the period of the large recent warming that people claim is due to CO2.

Now, bear in mind that my calculations do not include any forcing from CO2. Could CO2 explain the 0.03°C per decade of error that remains in the NH trend? We can run the numbers to find out.

At the start of the analysis in 1984 the CO2 level was 344 ppmv, and at the end of 1997 it was 363 ppmv. If we take the IPCC value of 3.7 W/m2 per doubling of CO2, this is a change in forcing of log(363/344.2) / log(2) * 3.7 = 0.28 W/m2 per decade. If we assume the sensitivity determined in my analysis (0.08°C per W/m2 for the NH), that gives us a trend of 0.02°C per decade from CO2. This is smaller than the trend error for either the NH or the SH.
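As a quick arithmetic check, here is the same calculation in Python, using the figures quoted above (CO2 of roughly 344 ppmv rising to 363 ppmv, 3.7 W/m2 per doubling, and the ~0.08°C per W/m2 NH sensitivity):

```python
import numpy as np

co2_1984, co2_1997 = 344.2, 363.0                        # ppmv
forcing_change = np.log2(co2_1997 / co2_1984) * 3.7      # ~0.28 W/m2
nh_trend_from_co2 = forcing_change * 0.08                # ~0.02 °C per decade
print(round(forcing_change, 2), round(nh_trend_from_co2, 2))
```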

So it is clearly possible that CO2 is in the mix, which would not surprise me … but only if the climate sensitivity is as low as my calculations indicate. There’s just no room for CO2 if the sensitivity is as high as the IPCC claims, because almost every bit of the variation in temperature is already adequately explained by the net sun.

Best to all,

w.

PS: Let me request that if you disagree with something I’ve said, QUOTE MY WORDS. I’m happy to either defend, or to admit to the errors in, what I have said. But I can’t and won’t defend your interpretation of what I said. If you quote my words, it makes all of the communication much clearer.

MATH NOTES: The standard exponential decay after a time “t” is given by:

e^(-1 * t/tau) [ or as written in Excel notation, exp(-1 * t/tau) ]

where “tau” is the time constant and e is the base of the natural logarithms, ≈ 2.718. The time constant tau and the variable t are in whatever units you are using (months, years, etc). The time constant tau is a measure that is like a half-life. However, instead of being the time it takes for something to decay to half its starting value, tau is the time it takes for something to decay exponentially to 1/e ≈ 1/2.7 ≈ 37% of its starting value. This can be verified by noting that when t equals tau, the equation reduces to e^-1 = 1/e.

For the fat-tailed distribution, I used a very similar form by replacing t/tau with (t/tau)^c. This makes the full equation

e^(-1 * (t/tau)^c) [ or in Excel notation exp(-1 * (t/tau)^c) ].

The variable “c” varies between zero and one to control how fat the tail is, with smaller values giving a fatter tail.
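As a small illustration of how “c” controls the tail (a sketch with an arbitrary tau of 2 months), the fraction remaining at a lag well past tau grows rapidly as c falls below one:

```python
import numpy as np

tau = 2.0    # months, arbitrary illustrative value
t = 24.0     # a lag well past tau
for c in (1.0, 0.8, 0.6):
    frac = np.exp(-(t / tau) ** c)    # exp(-(t/tau)^c), as above
    print(f"c = {c}: fraction remaining at {t:.0f} months = {frac:.5f}")
```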

[UPDATE: My thanks to Paul_K, who pointed out in the previous thread that my formula was slightly wrong.  In that thread I was using

∆T(k) = λ ∆F(k)/τ + ∆T(k-1) * exp(-1 / τ)

when I should have been using

∆T(k) = λ ∆F(k) * (1 – exp(-1 / τ)) + ∆T(k-1) * exp(-1 / τ)

The result of the error is that I have underestimated the sensitivity slightly, while everything else remains the same. Instead of the SH and NH sensitivities being 0.04°C per W/m2 and 0.08°C per W/m2 respectively, the correct sensitivities for this fat-tailed analysis are 0.04°C per W/m2 and 0.09°C per W/m2. The error was slightly larger in the previous thread’s regular-decay analysis, increasing those values to 0.05 and 0.10 respectively. I have updated the tables above accordingly. (A minimal code sketch of the corrected recursion follows this update.)

w.]
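For anyone who wants to experiment with it, here is a minimal sketch of the corrected recursion from the update above, in its standard exponential form (the post does not write out the exact discrete form for the fat-tailed case). The parameter values in the check are illustrative only.

```python
import numpy as np

def simulate_dT(dF, lam, tau):
    """Corrected recursion: dT(k) = lam*dF(k)*(1 - exp(-1/tau)) + dT(k-1)*exp(-1/tau)."""
    a = np.exp(-1.0 / tau)
    dT = np.zeros(len(dF))
    for k in range(len(dF)):
        prev = dT[k - 1] if k > 0 else 0.0
        dT[k] = lam * dF[k] * (1.0 - a) + prev * a
    return dT

# Sanity check: a sustained 1 W/m2 step should settle at lam, as expected for
# an equilibrium sensitivity (lam = 0.09 °C per W/m2 and tau = 1.5 months here).
print(simulate_dT(np.ones(60), lam=0.09, tau=1.5)[-1])    # ≈ 0.09
```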

[ERROR UPDATE: The headings (NH and SH) were switched in the two blocks of text in the center of the post. I have fixed them.]

pochas
June 4, 2012 8:51 am

P. Solar says:
June 4, 2012 at 2:07 am
“If I follow you right, this albedo argument just works on incoming solar. However, albedo changes are presumably mostly cloud and cloud cover is well known to block outgoing IR.”
If you can accept my out-of-the-box revisionism, Willis’ model is not really about albedo, it’s about “greybodyness.” In my view formation of clouds is a reversible event, that is, the overall heat balance at the cloud level is the same after the cloud has formed as it was before, only above the cloud there is more outgoing SW and less outgoing LW and below the cloud there is less incoming SW and more downgoing LW. Of course the cloud soaks up heat from its surroundings as the water condenses. But clouds radiate/scatter as greybodies while the earth’s surface and ocean have IR ‘color’, so clouds get the earth to better resemble a blackbody radiator and allow it to more efficiently reject heat to space. By way of explanation, a greybody behaves like a blackbody wrt external radiation and temperature, only it is more reflective (“brighter”). It reflects more external radiation away but also reflects more internal radiation back inside.

Brian D
June 4, 2012 8:57 am

Boy, it sure would be nice if there were more data up to the present to see if this formula holds true. The more years in the mix the better. Guess only time will tell when more data is available. Good work, Willis, on some very insightful thinking. If only everyone in climate science would quit with the politics and focus on science, stuff like this would be more common.

P. Solar
June 4, 2012 9:21 am

Robert Brown says:
>>
Consider the double exponential:
T(t) = T_0 e^{-t/\tau_0} + T_1 e^{-t/\tau_1}
Yes, it has four parameters, but it also has a simpler physical explanation. There are two processes that both contribute towards equilibrium. One of them has a relatively short time constant, and is responsible for the steeper part of the slope. The other part has a longer time constant and is responsible for the “fat tail” — the more slowly decaying residual left over after the faster part is (mostly) gone.
>>
Yes, I find that more justifiable than the “fat tail”.
I’d also like to see what this looks like with real data (like the ERBE data from which they say they have non trivial divergence). I really don’t like the idea of using any kind of model output as “data” for such work.
There is also some more recent satellite data that may be useful if it has sufficient polar coverage.

Stephen Wilde
June 4, 2012 9:30 am

“Willis’ conjecture and Svensmark’s work could fit nicely in explaining the mechanism of long term climate. ”
The problem I have with Svensmark’s hypothesis is that it doesn’t yet try to explain how changes in GCR amounts could cause the changes in the vertical temperature profile of the atmosphere that are required in order to produce the observed circulation and albedo / cloudiness changes.
In contrast, we have good evidence that such changes can be caused by solar wavelength and / or particle variations having different effects on ozone quantities at different levels of the atmosphere.
It has recently been observed that the ozone response to solar variability reverses from the usual expectation from 45km upward and that phenomenon is currently under close investigation. It may well have a bearing on the vertical temperature profile changes that are needed to account for observed circulation changes.
For that reason I think that GCR changes are simply a fortuitous correlation with little or no causative significance though they might have some impact on what happens anyway.
I predict that if the sun stays quiet then the AO and AAO will remain rather negative, the equatorial air masses will shrink whilst the polar air masses expand, cloudiness will remain higher than it was in the late 20th century with more meridional jets, La Nina will continue to dominate over El Nino as the oceans slowly lose energy due to increased cloudiness and tropospheric temperatures will slowly decline.
If the sun becomes more active for long enough then all that should reverse.
Everything will be consistent with Willis’s observation that sun and clouds are sufficient to explain observed tropospheric temperature trends.
Human CO2 would have an effect in theory but too small to measure amongst all the other variables.

DocMartyn
June 4, 2012 10:04 am

Part of the total albedo comes from commercial airliners and so is man-made.
One could examine the earlier mono-functional decay prior to and after 9/11. The grounding of the airliners should result in a blip in albedo, and would allow the time base to be quite accurately estimated.

moe
June 4, 2012 10:27 am

One minor mistake: The change in Forcing from CO2 is:
log(363ppm/344.2ppm) / log(2) * 3.7W/m² = 0.28W/m²
[Thanks, typo fixed, doesn’t affect the rest of the calculations. -w.]

Meyer
June 4, 2012 10:43 am

vukcevic on June 4, 2012 at 12:59 am said:
“It is an oversimplification to resolve global temperature variability with only one independent and one internal feedback variable.”
It is an oversimplification to speak of “global temperature” at all, really…

Paul_K
June 4, 2012 10:50 am

Willis,
I’m sorry to see you heading off in this direction. Richard Courtney has commented several times that your choice of an exponential decay function was an arbitrary choice. I never saw it that way. It represents the solution to the single capacity, linear feedback equation, and as such formed a solid basis for your examination of SW founded on clear assumptions.
Your move in this thread takes you into curve-fitting with no tie-back to a physical model. All you have is a guess on the response function from that model, with no clear assumptive basis. I think this is retrogressive.

P. Solar
June 4, 2012 10:58 am

W. says:
Thanks, P. Solar. Albedo changes both longwave and shortwave in a fairly complex fashion. What these calculations show is that the net effect of the albedo, including both long- and shortwaves, is to cool the earth.
But albedo is the reflectivity (IR and SW, OK). My point is, how come you get such a good match to temperature without apparently accounting for changes in outgoing IR? I have not checked the numbers, but I was not of the impression that it was so small that it could just be what accounts for the mismatch between your exp model and their model albedo “data” plus insolation.
Maybe I’m missing something in their paper, but it seems that there is no accounting of outgoing IR in their albedo estimations (neither would I expect there to be).
I noted that the NH loop in your plots is noticeably narrower at the winter end. Could this reduced amplitude in winter be the blanketing effect of more cloud cover in winter, just the IR factor I mentioned above?
I have some of Spencer’s work on ERBE, I’ll have to look at relative magnitudes.

Clay Marley
June 4, 2012 11:15 am

“I am not sure what the implication of a fat-tailed distribution would be here, but it is something to think about.”
Agreed, but also there is no special requirement that the decay be exponential. An exponential decay is valuable in some cases because it is the only memoryless continuous distribution. In other words, this does not have to be a Markov process. However another model, such as the sum of several different exponential decays may be an improvement (sort of like the Bern Carbon Cycle Models) if we can identify what these processes are.

vukcevic
June 4, 2012 11:42 am

Mr. Eschenbach
I do not accuse you of making errors; what I am suggesting is that despite good logic and correct maths (although I would like to see a residual graph compared to the annual anomalies, since it appears the residuals are of the same order, about 0.25°C), what you are proposing is not a convincing resolution for understanding any of the major events such as the MWP, LIA or the recent warming period, although it is OK as an academic exercise.
Also for the reasons I mentioned in my post addressed to Mr. Wilde
http://wattsupwiththat.com/2012/06/04/sun-and-clouds-are-sufficient/#comment-1000844
And finally application of a suitable procedure on even the best data may produce number of different outcomes. As an example you could (but I doubt that you would) take a closer look at my ‘Summer Season Spoof’ at
http://www.vukcevic.talktalk.net/00f.htm
it is based on real data, has good logic, doesn’t break any laws of physics and perfectly agrees with Hansen’s calculations, but is it a realistic reflection of the real world? I hope not.
Your past posts relating to the oceanic effects are very educational, and I have learned a good deal from them.
……………………………………..
Meyer says: June 4, 2012 at 10:43 am
……..
Agree.
The Arctic and North Atlantic oscillations hardly have any effect in the southern hemisphere; equally, the Antarctic’s temperature wave doesn’t even reach the tropics, while ENSO events (El Nino and La Nina) are equatorial and therefore affect both hemispheres. Averaging the two hemispheres into a single global dataset is definitely counterproductive for the purpose of understanding the long-term natural variation.

moe
June 4, 2012 11:57 am

Several comments in this thread criticize Mr. Eschenbach’s use of curve fitting and even call him a hypocrite. This shows that those commentators do not understand the difference between verification and validation.
During verification one must stick rigorously to the math in order to show that a model does not violate any older, very well researched models (i.e. “laws” of science).
During validation one ideally compares the results of the model with measured data from the real-world system. If this comparison is not possible, a second-best approach is to compare the behavior of the model with the behavior of the real world.
When analyzing the behavior of a system’s measured data, it is common practice to use curve fitting. This curve must only be fitted to the measured data, and the model data must not have any influence on the curve shape.
If one, however, uses curve fitting in order to “fill the gap” between model and observation, one essentially creates an empirical model between the original model and the observation. Verification of this empirical model is not possible. With enough parameters one can connect almost any model with any system, but such models are meaningless.
The model which Mr. Eschenbach explained above shows the behavior of the measured data in a certain time-frame. It produces abstract parameters, which can be used to validate verified models of the climate.
Proper validation is extremely rare in climate science, while most of the millions of dollars are spent on verification. Comparing trends is not proper validation if the difference between the model and reality is larger than the trend.

Matthew R Marler
June 4, 2012 12:07 pm

what exactly is your model? Previously it was this: ΔT(n+1) = λ∆F(n+1)/τ + ΔT(n) exp(-1/τ)
now is it this? ΔT(n+1) = λ∆F(n+1)/(τ^c) + ΔT(n) exp(-(t/τ)^c)
this? ΔT(n+1) = λ∆F(n+1)/(τ^c) + ΔT(n) exp(-1/(τ^c))
something else?

joeldshore
June 4, 2012 12:09 pm

Willis Eschenbach says:

Thanks, P. Solar. Albedo changes both longwave and shortwave in a fairly complex fashion. What these calculations show is that the net effect of the albedo, including both long- and shortwaves, is to cool the earth.

Willis, I think this misses the most important part of P. Solar’s point: You have assumed that the forcing is due solely to the albedo effect on the shortwave radiation. If, in fact the albedo change is real and accurately-measured (which I am still somewhat skeptical about) and if it is due to a net decrease in cloudiness over the period, then presumably this decrease in cloudiness has also produced an increase in outgoing longwave radiation. In fact, if the outgoing longwave radiation has increased because of decreasing cloudiness, this will offset some (perhaps quite a large fraction!) of the forcing due to the increase in incoming shortwave radiation. Hence, the net forcing due to this change in cloudiness might be considerably less.
If you overestimate the forcing associated with the temperature trend, then you underestimate the climate sensitivity.
So, I think there are still a lot of issues with your fit to the temperature trend…mainly because I don’t think you really know the forcings involved. (Your fit to the seasonal cycle is fine, because that forcing and the temperature response are known to good accuracy percentage-wise…but unfortunately that fit tells you very little about the equilibrium climate sensitivity because of the issues with damping of these higher frequencies by the slower time scales in the system.)
[By the way, another issue here, which I think other people have touched on in the other threads, is how much of the cloudiness change counts as a forcing and how much could be a feedback from the temperature change. For example, if you have the causality partly wrong…i.e., some of the decreasing in cloudiness is due to the warming rather than the other way around…then you will be counting as a forcing something that is actually a feedback and this will give you a lower estimate of climate sensitivity than the actual sensitivity. Of course, your leaving out the direct forcing due to the change in greenhouse gases is another issue, which would tend to raise your estimate of the climate sensitivity relative to the actual.]