Guest Post by Willis Eschenbach
In my previous post, A Longer Look at Climate Sensitivity, I showed that the match between lagged net sunshine (the solar energy remaining after albedo reflections) and the observational temperature record is quite good. However, there was still a discrepancy between the trends, with the observational trends being slightly larger than the calculated results. For the NH, the difference was about 0.1°C per decade, and for the SH, it was about 0.05°C per decade.
I got to thinking about the “exponential decay” function that I had used to calculate the lag in warming and cooling. When the incoming radiation increases or decreases, it takes a while for the earth to warm up or to cool down. In my calculations shown in my previous post, this lag was represented by a gradual exponential decay.
But nature often doesn’t follow quite that kind of exponential decay. Instead, it quite often follows what is called a “fat-tailed”, “heavy-tailed”, or “long-tailed” exponential decay. Figure 1 shows the difference between two examples of a standard exponential decay and a fat-tailed exponential decay.
Figure 1. Exponential and fat-tailed exponential decay, for values of “t” from 1 to 30 months. Lines show the fraction of the original amount that remains after time “t”. Line with circles shows the standard exponential decay, from t=1 to t=20. Golden line shows a fat-tailed exponential decay; the “fatness” of the tail is controlled by the variable “c”. Black line shows a standard exponential decay with a longer time constant “tau”.
Note that at longer times “t”, a fat-tailed decay function gives the same result as a standard exponential decay function with a longer time constant. For example, in Figure 1 at “t” equal to 12 months, a standard exponential decay with a time constant “tau” of 6.2 months (black line) gives the same result as the fat-tailed decay (golden line).
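This equivalence is easy to check numerically. Here is a short Python sketch; note that the parameter values tau = 2.2 and c = 0.6 are my own illustrative choices (taken from the tables later in the post), not necessarily the settings behind Figure 1. It computes, for a given time “t”, the time constant of the standard exponential decay that matches the fat-tailed curve exactly at that point:

```python
import math

def fat_tailed(t, tau, c):
    """Fat-tailed decay: fraction remaining = exp(-(t/tau)**c)."""
    return math.exp(-((t / tau) ** c))

def equivalent_tau(t, tau, c):
    """Time constant tau_eq such that the standard decay exp(-t/tau_eq)
    equals the fat-tailed decay exp(-(t/tau)**c) exactly at time t."""
    return t / ((t / tau) ** c)

# Illustrative values only; the parameters used for Figure 1 are not stated.
tau, c = 2.2, 0.6
for t in (6, 12, 24):
    print(t, round(equivalent_tau(t, tau, c), 2))
```

As the text notes, the equivalent time constant grows with “t”: the longer you wait, the slower the standard decay you would need to mimic the fat tail.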
So what difference does it make when I use a fat-tailed exponential decay function, rather than a standard exponential decay function, in my previous analysis? Figure 2 shows the results:
Figure 2. Observations and calculated values, Northern and Southern Hemisphere temperatures. Note that the observations are almost hidden by the calculation.
While this is quite similar to my previous result, there is one major difference: the trends fit better. The difference in the trends was just barely visible in my previous results, but when I use a fat-tailed exponential decay function, the difference in trend can no longer be seen. The trend in the NH is about three times as large as the trend in the SH (0.3°C vs 0.1°C per decade). Despite that, using solely the variations in net sunshine, we are able to replicate each hemisphere’s record.
Now, before I go any further, I acknowledge that I am using three tuned parameters. The parameters are lambda, the climate sensitivity; tau, the time constant; and c, the variable that controls the fatness of the tail of the exponential decay.
Parameter fitting is a procedure that I’m usually chary of. However, in this case each of the parameters has a clear physical meaning, a meaning which is consistent with our understanding of how the system actually works. In addition, there are two findings that increase my confidence that these are accurate representations of physical reality.
The first is that when I went from a regular to a fat-tailed distribution, the climate sensitivity did not change for either the NH or the SH. If it had changed radically, I would have been suspicious of the introduction of the variable “c”.
The second is that, although the calculations for the NH and the SH are entirely separate, the fitting process produced the same “c” value for the “fatness” of the tail, c = 0.6. This indicates that this value is not varying just to match the situation, but that there is a real physical meaning for the value.
Here are the results using the regular exponential decay calculations:
                     SH            NH
lambda               0.05          0.10          °C per W/m2
tau                  2.4           1.9           months
RMS residual error   0.17          0.26          °C
trend error          0.05 ± 0.04   0.11 ± 0.08   °C per decade (95% confidence interval)
As you can see, the error in the trends, although small, is statistically different from zero in both cases. However, when I use the fat-tailed exponential decay function, I get the following results.
                     SH             NH
lambda               0.04           0.09          °C per W/m2
tau                  2.2            1.5           months
c                    0.59           0.61
RMS residual error   0.16           0.26          °C
trend error          -0.03 ± 0.04   0.03 ± 0.08   °C per decade (95% confidence interval)
In this case, the error in the trends is not different from zero in either the SH or the NH. So my calculations show that the value of the net sun (solar radiation minus albedo reflections) is quite sufficient to explain both the annual and decadal temperature variations, in both the Northern and Southern Hemispheres, from 1984 to 1997. This is particularly significant because this is the period of the large recent warming that people claim is due to CO2.
Now, bear in mind that my calculations do not include any forcing from CO2. Could CO2 explain the 0.03°C per decade of error that remains in the NH trend? We can run the numbers to find out.
At the start of the analysis in 1984 the CO2 level was 344 ppmv, and at the end of 1997 it was 363 ppmv. If we take the IPCC value of 3.7 W/m2 per doubling of CO2, this is a change in forcing of log2(363/344) * 3.7 = 0.28 W/m2 per decade [in Excel notation, LOG(363/344, 2) * 3.7]. If we assume the sensitivity determined in my analysis (0.08°C per W/m2 for the NH), that gives us a trend of 0.02°C per decade from CO2. This is smaller than the trend error for either the NH or the SH.
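For anyone checking the arithmetic, here it is in Python. The log with a second argument in the Excel notation is the base-2 logarithm; the 0.08°C per W/m2 is the NH sensitivity figure used in the paragraph above.

```python
import math

# Back-of-envelope CO2 calculation from the post.
co2_start, co2_end = 344.0, 363.0   # ppmv, 1984 and 1997
forcing_per_doubling = 3.7          # W/m2 per doubling of CO2 (IPCC value)
sensitivity_nh = 0.08               # °C per W/m2, NH value from the analysis

delta_f = forcing_per_doubling * math.log2(co2_end / co2_start)
delta_t = sensitivity_nh * delta_f
print(round(delta_f, 2))   # ≈ 0.29 W/m2 (quoted as 0.28 in the text)
print(round(delta_t, 2))   # ≈ 0.02 °C
```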
So it is clearly possible that CO2 is in the mix, which would not surprise me … but only if the climate sensitivity is as low as my calculations indicate. There’s just no room for CO2 if the sensitivity is as high as the IPCC claims, because almost every bit of the variation in temperature is already adequately explained by the net sun.
Best to all,
w.
PS: Let me request that if you disagree with something I’ve said, QUOTE MY WORDS. I’m happy to either defend, or to admit to the errors in, what I have said. But I can’t and won’t defend your interpretation of what I said. If you quote my words, it makes all of the communication much clearer.
MATH NOTES: The standard exponential decay after a time “t” is given by:
e^(-1 * t/tau) [ or as written in Excel notation, exp(-1 * t/tau) ]
where “tau” is the time constant and e is the base of the natural logarithms, ≈ 2.718. The time constant tau and the variable t are in whatever units you are using (months, years, etc). The time constant tau is a measure that is like a half-life. However, instead of being the time it takes for something to decay to half its starting value, tau is the time it takes for something to decay exponentially to 1/e ≈ 1/2.7 ≈ 37% of its starting value. This can be verified by noting that when t equals tau, the equation reduces to e^-1 = 1/e.
For the fat-tailed distribution, I used a very similar form by replacing t/tau with (t/tau)^c. This makes the full equation
e^(-1 * (t/tau)^c) [ or in Excel notation exp(-1 * (t/tau)^c) ].
The variable “c” varies between zero and one to control how fat the tail is, with smaller values giving a fatter tail.
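The two decay functions above translate directly into Python. This is a sketch; the parameter values below are simply the ones from the tables in the post.

```python
import math

def standard_decay(t, tau):
    # Fraction remaining after time t: exp(-t/tau)
    return math.exp(-t / tau)

def fat_tailed_decay(t, tau, c):
    # Fraction remaining: exp(-(t/tau)**c); smaller c gives a fatter tail
    return math.exp(-((t / tau) ** c))

# At t = tau the standard form reduces to 1/e ≈ 0.37, as noted above.
# The fat-tailed form does too, whatever the value of c, since 1**c = 1.
print(standard_decay(2.4, 2.4))          # ≈ 0.368
print(fat_tailed_decay(2.4, 2.4, 0.6))   # ≈ 0.368
```

At times well past tau, the fat-tailed curve sits above the standard one, which is the whole point of the “fat tail”.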
[UPDATE: My thanks to Paul_K, who pointed out in the previous thread that my formula was slightly wrong. In that thread I was using
∆T(k) = λ ∆F(k)/τ + ∆T(k-1) * exp(-1 / τ)
when I should have been using
∆T(k) = λ ∆F(k) * (1 – exp(-1 / τ)) + ∆T(k-1) * exp(-1 / τ)
The result of the error is that I have underestimated the sensitivity slightly, while everything else remains the same. Instead of the sensitivities for the SH and the NH being 0.04°C per W/m2 and 0.08°C per W/m2 respectively, the correct sensitivities for this fat-tailed analysis should have been 0.04°C per W/m2 and 0.09°C per W/m2. The error was slightly larger in the previous thread, increasing them to 0.05 and 0.10 respectively. I have updated the tables above accordingly.
w.]
[ERROR UPDATE: The headings (NH and SH) were switched in the two blocks of text in the center of the post. I have fixed them.]
Paul_K:
My view is that the division of statisticians into schools is artificial and damaging. Using long existing technology it is possible to unite the warring factions under the banner of logic. A barrier to accomplishment is widespread ignorance on the part of academic philosophers and statisticians. The list of people needing instruction does not end with statistical neophytes such as Willis.
Terry Oldberg,
So why don’t you instruct us neophytes on your ‘list’ the way it is?
Smokey:
I’m willing to play the instructor if you are willing to play the student. I’d need feedback from you on what you don’t understand. Is it a deal?
Terry Oldberg,
Thank you for your considered response. I was offering a serious suggestion about communication, and in response you give me a declaration that you can reconcile the more-than-a-century old argument at a stroke of the pen. The last time I saw something so profound it was on a fortune cookie. Write it up, publish or be damned, sir.
Paul_K:
To the extent it means anything, I at least have not ignored your comment about different forcing components resulting in different feedbacks. But, although I’m inclined to think that Mr. Eschenbach’s measurement of the relationship between change in net and change in upward longwave means that in this case the feedbacks are the same--or at least that his technique gets the right answer--I keep getting interrupted before I can convince myself that I’m right (or wrong, as the case may be).
Again, I know your comment wasn’t directed to me, but someone’s definitely out here listening, even though there may be no meaningful response.
Paul_K:
Your rudely phrased presumption that it has not been written up is incorrect.
Paul_K says:
June 6, 2012 at 7:52 am
The “standard exponential decay” function is not haphazardly chosen. It is THE UNIQUE solution to the heat balance equation for a single capacity system under the assumption that the Earth has a linear radiative response to temperature.
As soon as you [Willis] postulate an arbitrary response function, you disconnect your results from a physically meaningful conceptual model, where the assumptions can be clearly stated and tested. … But fitting an arbitrary functional form which cannot be tied back to a physical system looks like curve-fitting.
—————————————————————————————————–
Just a point: in groundwater hydrology, the groundwater flow equation is exactly the same as the heat equation. A perfectly homogeneous system will behave in the idealized exponential manner during drainage, but in a non-homogeneous system it is possible to get a ‘drainage curve’ that does not necessarily quite match an exponential equation (i.e. part of the curve can contain a delayed drainage component).
This is just to point out, as an example, that in the real world things don’t necessarily behave in the idealized way you have suggested, as Willis has alluded to.
FrankK,
“This is just to point out, as an example, that in the real world things don’t necessarily behave in the idealized way you have suggested, as Willis has alluded to.”
OK, that’s certainly true.
But say in your hydrology example, you take the single phase diffusivity equation as your basis for “understanding” the system. You conclude in the idealised system, your response should be given by a specific analytic function PHI relating pressure to time and space, and parameterised on the diffusivity constant. However, you note that the actual data matches PHI moderately well, but matches a different function, CHI, superbly well. Are you then allowed to use your new arbitrarily made up function CHI to estimate the diffusivity constant? That’s the analogy I’m talking about.
Terry Oldberg says:
June 12, 2012 at 2:40 pm
Paul_K says:
June 13, 2012 at 10:06 am
Terry Oldberg,
OK, now that the laughing is done, here is the question and Terry’s answer:
Terry, I didn’t ask what they were. I asked why they were not a sample. You have ignored the question, and then been unpleasant about it when I pointed it out.
All you have done is claim they are not a sample, but something else. I know you claim that, you have been claiming that for a while, it’s no surprise, it’s not news. It’s also not an answer to the question.
What I don’t understand is why. What makes one a sample and another a time series? Why is taking daily measurements of temperature a time series while taking daily measurements of baseball success is a sample?
Now, I could tolerate your dickish arrogance, and your supercilious assumption of superior knowledge. I can live with that kind of nonsense if there is some benefit to be had.
But you are just blowing wind, you are not answering questions in any shape or form, you are just endlessly stating and restating the same claims. Color me unimpressed. You may know something worthwhile, I wouldn’t be surprised if you did, but your foolish behavior and your refusal to answer questions means you are useless to me.
w.
Willis:
Please focus on the exact wording of your question, Q2; it is “Why are the records of the temperatures and the insolation in your yard for say thirty days not a sample.” Today, you shift the question to “Why is taking daily measurements of temperature a time series while taking daily measurements of baseball success is a sample?” Q2 references a RECORD of temperatures; that RECORD is a time series. Today’s question references “taking measurements of temperature”; that’s not a time series.
Paul_K says:
June 6, 2012 at 7:52 am
Paul, you are correct that the standard exponential decay is THE UNIQUE solution to the model … but nature doesn’t seem to be listening to you. Instead, because nature is granular and stepwise rather than smooth, because different things happen at different times and places, we find that natural long-tailed distributions are quite common. Go figure … you should speak sharply to nature, she’s not following your lead.
And you are right that it is better to use a “physically meaningful conceptual model” … but I’m coming at the question from the other end. I’m simply exploring what the shape of the response actually looks like … and what it looks like is not THE UNIQUE solution. It is fat-tailed.
OK, that tells me something. It tells me that it is likely that we’re looking at a faster plus a slower time constant. And since I know what the shape of the curve is like, I can analyze it to find out the most likely values for the faster and slower components. For example, I find above that the variable “c” is ~ 0.6. Now, this means that if tau is 2.2 months as calculated above, the resulting fat-tailed exponential decay with tau = 2.2 and c = 0.6 can be very closely approximated by two standard exponential decays with tau1 = 0.7 and tau2 = 5.6 … and that is useful information that may lead me to a “physically meaningful conceptual model”.
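The two-time-constant claim can be checked numerically. In this Python sketch, the fat-tailed curve with tau = 2.2 and c = 0.6 is approximated by a weighted sum of two standard exponentials with tau1 = 0.7 and tau2 = 5.6; note that the weights come from a linear least-squares fit of my own, not from the post.

```python
import numpy as np

# Fat-tailed decay with the values found in the post.
t = np.arange(1, 31, dtype=float)          # months
fat = np.exp(-((t / 2.2) ** 0.6))

# Two standard exponentials with the fast and slow time constants
# suggested above; solve for their weights by linear least squares.
design = np.column_stack([np.exp(-t / 0.7), np.exp(-t / 5.6)])
weights, *_ = np.linalg.lstsq(design, fat, rcond=None)
fit = design @ weights

print(np.round(weights, 2))                        # fitted weights
print(round(float(np.abs(fit - fat).max()), 3))    # worst-case mismatch
```

With both weights positive and the worst-case mismatch small relative to the curve itself, a fast-plus-slow pair of standard decays is a reasonable reading of the fat tail.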
All the best,
w.
Willis:
Currently, the record of our conversation is in an untidy state, resulting from your denunciation of me for not answering your Question 2 when I had answered Question 2. If you were to issue a mea culpa, this would clear the air. Perhaps we could then get back to the topic that instigated the conversation. The topic is the important one of fabrication of information in estimates of the equilibrium climate sensitivity. In particular, I would prove that 100% of the information which policy makers believe themselves to have about the outcomes from their policy decisions is fabricated.
Terry Oldberg says:
June 15, 2012 at 7:49 am
My friend, you are in fantasy land if you think you’ll get a mea culpa from me for your actions. Despite your repeated claims to the contrary, you didn’t answer question 2. Here’s the interchange again, along with my comments, from above. I have seen nothing to change my mind.
It’s bozo simple. “They are not a sample” is not an answer to “why are they not a sample?”, no matter how many times you claim it is.
You don’t get it, Terry. Your actions are irritating. You don’t answer questions. Your tone is pompous and patronizing. You insult me without even noticing. You say you want me to “move [my] point of view from the lofty position of debater to the humble position of student” before you will deign to reply. It is like pulling teeth to get you to answer even the simplest question. To top it off, you think I should apologize to you because you have not answered my questions? Now you want me to “clear the air”?
OK, let me be perfectly clear. I have no desire at all to be your student. Based on your actions to date, I find you totally unqualified for the task of teaching anyone anything. You might have something valuable to say, but whatever it is, it’s not worth trying to pry it out of you, the task is too unpleasant to contemplate. You’ve burnt your bridges with me. Take your oh-so-valuable insights to someone who can tolerate your attitude, because I can’t.
So … is the air clear enough now? Did you miss the meaning when I said above “here’s a quarter, call someone who cares”? If so, here’s the video to explain my message.
w.