Guest Post by Willis Eschenbach
In my previous post, A Longer Look at Climate Sensitivity, I showed that the match between lagged net sunshine (the solar energy remaining after albedo reflections) and the observational temperature record is quite good. However, there was still a discrepancy between the trends, with the observational trends being slightly larger than the calculated results. For the NH, the difference was about 0.1°C per decade, and for the SH, it was about 0.05°C per decade.
I got to thinking about the “exponential decay” function that I had used to calculate the lag in warming and cooling. When the incoming radiation increases or decreases, it takes a while for the earth to warm up or to cool down. In my calculations shown in my previous post, this lag was represented by a gradual exponential decay.
But nature often doesn’t follow quite that kind of exponential decay. Instead, it frequently follows what is called a “fat-tailed”, “heavy-tailed”, or “long-tailed” exponential decay. Figure 1 shows the difference between two examples of a standard exponential decay and a fat-tailed exponential decay (golden line).
Figure 1. Standard and fat-tailed exponential decay, for values of “t” from 1 to 30 months. Lines show the fraction of the original amount that remains after time “t”. The line with circles shows a standard exponential decay, from t=1 to t=20. The golden line shows a fat-tailed exponential decay. The black line shows a standard exponential decay with a longer time constant “tau”. The “fatness” of the tail is controlled by the variable “c”.
Note that at longer times “t”, a fat-tailed decay function gives the same result as a standard exponential decay function with a longer time constant. For example, in Figure 1 at “t” equal to 12 months, a standard exponential decay with a time constant “tau” of 6.2 months (black line) gives the same result as the fat-tailed decay (golden line).
So what difference does it make when I use a fat-tailed exponential decay function, rather than a standard exponential decay function, in my previous analysis? Figure 2 shows the results:
Figure 2. Observations and calculated values, Northern and Southern Hemisphere temperatures. Note that the observations are almost hidden by the calculation.
While this is quite similar to my previous result, there is one major difference: the trends fit better. In my previous results the difference in the trends was just barely visible. But when I use a fat-tailed exponential decay function, the difference in trend can no longer be seen. The trend in the NH is about three times as large as the trend in the SH (0.3°C vs 0.1°C per decade). Despite that, the variations in net sunshine alone replicate the trend in each hemisphere almost exactly.
Now, before I go any further, I acknowledge that I am using three tuned parameters. The parameters are lambda, the climate sensitivity; tau, the time constant; and c, the variable that controls the fatness of the tail of the exponential decay.
Parameter fitting is a procedure that I’m usually chary of. However, in this case each of the parameters has a clear physical meaning, a meaning which is consistent with our understanding of how the system actually works. In addition, there are two findings that increase my confidence that these are accurate representations of physical reality.
The first is that when I went from a regular to a fat-tailed distribution, the climate sensitivity scarcely changed for either the NH or the SH. If the sensitivities had changed radically, I would have been suspicious of the introduction of the variable “c”.
The second is that, although the calculations for the NH and the SH are entirely separate, the fitting process produced essentially the same “c” value for the “fatness” of the tail in both hemispheres, c ≈ 0.6. This indicates that the value is not varying just to match the situation, but that it has a real physical meaning.
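For readers who want to experiment with this kind of three-parameter fit, here is a rough sketch in Python, using the decay form given in the math notes below. To be clear, this is not the code behind the post: the convolution form of the model, the unit-sum normalization of the kernel (which keeps lambda meaningful as an equilibrium sensitivity), and the synthetic stand-in for the net-sunshine data are all illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def fat_kernel(tau, c, n=60):
        # Fat-tailed decay kernel over n months, normalized to unit sum so
        # that lambda keeps its meaning as an equilibrium sensitivity.
        t = np.arange(1, n + 1)
        w = np.exp(-(t / tau) ** c)
        return w / w.sum()

    def model(forcing, lam, tau, c):
        # Temperature response: forcing convolved with the lag kernel, times lambda.
        w = fat_kernel(tau, c)
        return lam * np.convolve(forcing, w)[: len(forcing)]

    # Synthetic stand-in for the monthly net-sunshine anomaly (W/m2)
    rng = np.random.default_rng(0)
    months = np.arange(168)  # 14 years of monthly data, as in the post
    forcing = 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 168)
    temps = model(forcing, 0.09, 1.5, 0.6) + rng.normal(0, 0.05, 168)

    # Fit lambda, tau and c by least squares, with loose physical bounds
    popt, _ = curve_fit(model, forcing, temps, p0=[0.05, 2.0, 0.8],
                        bounds=([0.0, 0.1, 0.1], [1.0, 12.0, 1.0]))
    print("lambda, tau, c =", popt)  # should recover roughly 0.09, 1.5, 0.6

The point of the exercise is the one made above: if the fitted “c” comes out essentially the same for two independent datasets, that is evidence it reflects something physical rather than curve-fitting freedom.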
Here are the results using the regular exponential decay calculations:
                         SH             NH
    lambda               0.05           0.10           °C per W/m2
    tau                  2.4            1.9            months
    RMS residual error   0.17           0.26           °C
    trend error          0.05 ± 0.04    0.11 ± 0.08    °C per decade (95% confidence interval)
As you can see, the error in the trends, although small, is statistically different from zero in both cases. However, when I use the fat-tailed exponential decay function, I get the following results.
                         SH             NH
    lambda               0.04           0.09           °C per W/m2
    tau                  2.2            1.5            months
    c                    0.59           0.61
    RMS residual error   0.16           0.26           °C
    trend error          -0.03 ± 0.04   0.03 ± 0.08    °C per decade (95% confidence interval)
In this case, the error in the trends is not different from zero in either the SH or the NH. So my calculations show that variations in the net sun (solar radiation minus albedo reflections) are quite sufficient to explain both the annual and decadal temperature variations, in both the Northern and Southern Hemispheres, from 1984 to 1997. This is particularly significant because this period covers the large recent warming that people claim is due to CO2.
Now, bear in mind that my calculations do not include any forcing from CO2. Could CO2 explain the 0.03°C per decade of error that remains in the NH trend? We can run the numbers to find out.
At the start of the analysis in 1984 the CO2 level was 344 ppmv, and at the end of 1997 it was 363 ppmv. If we take the IPCC value of 3.7 W/m2 per doubling of CO2, this is a change in forcing of log(363/344, 2) * 3.7 ≈ 0.28 W/m2 over the fourteen-year period, or about 0.2 W/m2 per decade. If we assume the sensitivity determined in my analysis (0.08°C per W/m2 for the NH), that gives us a trend of about 0.02°C per decade from CO2. This is smaller than the trend error for either the NH or the SH.
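A quick check of that arithmetic in Python (the 14-year span and the pre-update NH sensitivity of 0.08°C per W/m2 are the assumptions, as noted in the code):

    import math

    co2_start, co2_end = 344.0, 363.0  # ppmv, start and end of the analysis window
    f2x = 3.7                          # IPCC forcing per doubling of CO2, W/m2
    years = 14.0                       # 1984 through 1997, assumed span
    lam_nh = 0.08                      # NH sensitivity, degC per W/m2 (0.09 after the update below)

    delta_f = math.log2(co2_end / co2_start) * f2x   # ~0.29 W/m2 over the period
    trend = lam_nh * delta_f / (years / 10.0)        # degC per decade
    print(f"forcing change {delta_f:.2f} W/m2, CO2 trend {trend:.3f} degC/decade")  # ~0.02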
So it is clearly possible that CO2 is in the mix, which would not surprise me … but only if the climate sensitivity is as low as my calculations indicate. There’s just no room for CO2 if the sensitivity is as high as the IPCC claims, because almost every bit of the variation in temperature is already adequately explained by the net sun.
Best to all,
w.
PS: Let me request that if you disagree with something I’ve said, QUOTE MY WORDS. I’m happy to either defend, or to admit to the errors in, what I have said. But I can’t and won’t defend your interpretation of what I said. If you quote my words, it makes all of the communication much clearer.
MATH NOTES: The standard exponential decay after a time “t” is given by:
e^(-1 * t/tau) [ or as written in Excel notation, exp(-1 * t/tau) ]
where “tau” is the time constant and e is the base of the natural logarithms, ≈ 2.718. The time constant tau and the variable t are in whatever units you are using (months, years, etc). The time constant tau is a measure that is like a half-life. However, instead of being the time it takes for something to decay to half its starting value, tau is the time it takes for something to decay exponentially to 1/e ≈ 1/2.7 ≈ 37% of its starting value. This can be verified by noting that when t equals tau, the equation reduces to e^-1 = 1/e.
For the fat-tailed distribution, I used a very similar form by replacing t/tau with (t/tau)^c. This makes the full equation
e^(-1 * (t/tau)^c) [ or in Excel notation exp(-1 * (t/tau)^c) ].
The variable “c” varies between zero and one to control how fat the tail is, with smaller values giving a fatter tail.
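For anyone who wants to play with the two decay forms, here is a minimal sketch (the tau and c values are illustrative, not the fitted values from the tables above):

    import math

    def std_decay(t, tau):
        # Standard exponential decay: fraction remaining after time t
        return math.exp(-t / tau)

    def fat_decay(t, tau, c):
        # "Fat-tailed" (stretched) exponential: smaller c gives a fatter tail
        return math.exp(-((t / tau) ** c))

    # Sanity check: at t = tau the standard decay is down to 1/e, about 37%
    assert abs(std_decay(6.0, 6.0) - 1.0 / math.e) < 1e-12

    # Compare the two forms out to 30 months (illustrative parameter values)
    for t in range(1, 31):
        print(f"t={t:2d}  standard={std_decay(t, 2.0):.4f}  fat={fat_decay(t, 2.0, 0.6):.4f}")

The two forms agree exactly at t = tau; beyond that point the fat-tailed form retains noticeably more of the original amount, which is the “fat tail” visible in Figure 1.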
[UPDATE: My thanks to Paul_K, who pointed out in the previous thread that my formula was slightly wrong. In that thread I was using
∆T(k) = λ ∆F(k)/τ + ∆T(k-1) * exp(-1 / τ)
when I should have been using
∆T(k) = λ ∆F(k)(1 – exp(-1/ τ)) + ∆T(k-1) * exp(-1 / τ)
The result of the error is that I slightly underestimated the sensitivity, while everything else remains the same. Instead of the sensitivities for the SH and the NH being 0.04°C per W/m2 and 0.08°C per W/m2 respectively, the correct sensitivities for this fat-tailed analysis should have been 0.04°C per W/m2 and 0.09°C per W/m2. The error was slightly larger in the previous thread’s standard-decay analysis, where the corrected values rise to 0.05 and 0.10 respectively. I have updated the tables above accordingly.
w.]
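To make the corrected recursion concrete, here is a minimal sketch of how it can be stepped through time (the forcing series and parameter values are placeholders, not the data used in the post):

    import math

    def lagged_response(forcing, lam, tau):
        # Corrected recursion: dT(k) = lam*dF(k)*(1 - exp(-1/tau)) + dT(k-1)*exp(-1/tau)
        a = math.exp(-1.0 / tau)
        dT, out = 0.0, []
        for dF in forcing:
            dT = lam * dF * (1.0 - a) + dT * a
            out.append(dT)
        return out

    # Toy example: a sustained 1 W/m2 step in forcing with NH-like parameters
    temps = lagged_response([1.0] * 24, lam=0.09, tau=1.5)
    print(temps[0], temps[-1])  # first month ~0.044; relaxes toward lam * dF = 0.09

Note that this one-step recursion is exact only for the standard exponential decay; for the fat-tailed version the natural reading is to convolve the forcing with the exp(-(t/tau)^c) kernel from the math notes instead.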
[ERROR UPDATE: The headings (NH and SH) were switched in the two blocks of text in the center of the post. I have fixed them.]
Robert Brown says:
June 4, 2012 at 7:51 pm
Found the Colorado presentation here, I hadn’t read it, and indeed it is a tour de force. The man knows his stuff.
Many thanks,
w.
tallbloke says:
June 4, 2012 at 10:13 pm
Thanks, tallbloke. Proctor’s work is good, I hadn’t thought about using “sunshine hours” as a proxy. Should be possible to couple that with the NASA gridded annual solar data to refine his work.
Willie is a great guy, he’s one of my heroes, and a funny and fun man to have a beer with. I hadn’t seen that work of his, nice stuff.
I got to talk a bit with Nir in Chicago. He’s young, full of fire, has laughing eyes. The calorimeter piece of his that you referenced is new to me, and most interesting. He makes a good case that huge amounts of energy are flowing into and out of the ocean, modulated by the clouds. I wrote about this in my 2009 paper, The Thermostat Hypothesis, where I said regarding variations in the global thermal equilibrium:
You go on to say:
I agree that their albedo is “fixed”, but curiously it is not the albedo of any of their rocky bodies, nor is it the average of the rocky bodies. I asked N&Z where it came from, and got basically the answer you give me now, that it is “fixed” … curiously, it is “fixed” exactly where it works the best. Which is why it is the fifth parameter.
But the real joke was not that Nikolov and Zeller used 5 parameters. Nor was it that they could pick any equation, no matter how non-physical.
It was that they were fitting their equation to only EIGHT DATA POINTS … if you don’t find fitting even 4 parameters for EIGHT DATA POINTS sidesplittingly funny, you don’t understand math.
Plus, of course, there wasn’t any physical basis for their miracle equation, where in the current case we have lots of examples of exponential decay to build upon. So we’re not chasing extraneous variables.
You’ll have to point me to where Nir Shaviv said that albedo is a function of pressure on other planets, I haven’t seen that.
Thanks, tall bloke, appreciated.
w.
Willis,
Let me add to Richard Courtney’s comment on over-reaching.
There are powerful reasons why this work is never going to be able to yield a credible estimate of long-term climate sensitivity. I will list a few of them below. If this work has importance, and I think it may, then it is in the area of attribution, and not in the estimation of sensitivity. For this you need a credible, physically meaningful model of short-term sensitivity. You started off with one – the single capacity, linear feedback model. This may not be the best one to use – I don’t know – and you may need to move to a more sophisticated model, but, if you do so, it needs to be one which is based on a physically meaningful and testable hypothesis. I believe that you need to firmly resist the temptation to move to ever more complicated response functions which are not underpinned by a physical hypothesis, just because they offer you an improved fit to the data. And this includes fat-tailed response functions or n-pole feedback models, if they are not clearly underpinned. In my opinion, your addition of an unexplained parameter here already diminishes credibility relative to what you had previously achieved – and it is for this reason that I see it as retrogressive.
Why is the work not going to yield the holy grail – a credible estimate of long-term climate sensitivity?
(1) Your data is strictly limited to the satellite era. Long slow responses will not be visible nor estimable, but you cannot rule out their existence as potentially major controls on sensitivity.
(2) Earth’s radiative response seems to be near linear in the short term, with small forcings. Over the long term, nearly all of the GCMs exhibit non-linear behavior to a greater or lesser extent. You can neither estimate this effect, nor can you discount it with the data available.
(3) The flux perturbations which you are considering comprise a mix of forcings and feedbacks in the SW, which would need to be untangled rigorously for any estimate of sensitivity to be meaningful within its conventional definition. (However, I don’t believe that you have to do this to interrogate relative attribution.)
With respect to item (3), conventionally both albedo and clouds are considered to be part of the feedback coefficient – the reciprocal of climate sensitivity. TSI variation is included as a forcing. The important breakdown for your work is between SW and LW effects, I think. The split broadly looks like this:-
SW perturbations (forcings and feedbacks) – TSI variation, sea-ice albedo, cloud reflectance, aerosol direct effects and atmospheric absorption
LW perturbations (forcings and feedbacks) – WMGHG’s, water vapour, lapse rate, cloud absorption and re-emission.
Because you are using net received SW as a FORCING, the feedback term that you are abstracting here (1/lambda in your nomenclature) does not correspond to the “conventional” feedback term, and neither then does your climate sensitivity, lambda. Specifically, your feedback term excludes sea-ice albedo, SW cloud effects and atmospheric absorption changes. These are captured along with TSI variation and aerosols as radiative forcings.
The importance of your finding is related to the fact that the temperature variation can be largely explained by (just) SW variation. This is not what one would expect if the heating were largely attributable to WMGHG’s. But this requires a clear accounting, I think, which should be the next step in my view.
As before, Willis, please take this as a constructive critique of your work. I am not trying to do a hatchet job, I promise you.
Willis:
Thank you for linking to Koutsoyiannis’ Colorado presentation in your post of June 5, 2012 at 12:24 am.
The presentation is brilliant!
How I wish I had the sense to have found it when Robert first commended it! Stupid of me: if he commends it then it surely must be good.
I very, very strongly commend everybody interested in this thread to go through it. It is at
http://www.cwi.colostate.edu/nonstationarityworkshop/SpeakerNotes/Wednesday%20Morning/Koutsoyiannis.pdf
Anyway, I return to enjoying the Diamond Jubilee celebrations. HM is about to leave for the cathedral.
But I really needed to thank you for getting me to read the gem from Koutsoyiannis. Thank you.
Richard
P Solar says:
I’ll enjoy reading the article at BHill.
One thing I didn’t include in that article, but I now think is significant in reducing low level aerosols/particulates and seeded low level clouds, is the mandating of catalytic converters on all new vehicles in 1975 (and similar measures to reduce aerosol/particulate emissions from vehicles in subsequent years) in the USA and much of the rest of the world shortly afterwards.
Willis
In the following equation:
∆T(k) = λ ∆F(k)(1 – exp(-1/ τ) + ∆T(k-1) * exp(-1 / τ)
Something does not look right.
You have five opening brackets “(“ but only four closing brackets “)”
[Thanks, it’s ∆T(k) = λ ∆F(k)(1 – exp(-1/ τ)) + ∆T(k-1) * exp(-1 / τ)
w.]
Willis Eschenbach says:
June 5, 2012 at 12:57 am
I hadn’t thought about using “sunshine hours” as a proxy. Should be possible to couple that with the NASA gridded annual solar data to refine his work.
Thanks for flagging up the data.
Willie is a great guy, he’s one of my heroes, and a funny and fun man to have a beer with. I hadn’t seen that work of his, nice stuff.
Welcome to the real world of external climate drivers.
I agree that their albedo is “fixed”, but curiously it is not the albedo of any of their rocky bodies, nor is it the average of the rocky bodies. I asked N&Z where it came from, and got basically the answer you give me now, that it is “fixed” … curiously, it is “fixed” exactly where it works the best. Which is why it is the fifth parameter.
They use the Moon’s albedo as being representative of the greybody albedo of rocky planets in general. It is not a ‘tuned parameter’.
But the real joke was … only EIGHT DATA POINTS…
It’s most inconsiderate of the solar system to provide fewer planets than statisticians would like. 🙂
You’ll have to point me to where Nir Shaviv said that albedo is a function of pressure on other planets, I haven’t seen that.
Heh. N&Z and Shaviv are in agreement that cloud albedo variation is related to insolation variation at the TOA. That’s why Willie Soon’s sunshine hours graph is in approximate agreement with TSI as well as temperature, give or take ENSO.
Philip Bradley says:
June 5, 2012 at 2:17 am
One thing I didn’t include in that article, but I now think is significant in reducing low level aerosols/particulates and seeded low level clouds is the mandating of catalytic converters on all new vehicles in 1975
Has any work been done to compare the magnitude of that effect against changes in atmospheric angular momentum carrying dust around?
On page 35 of Koutsoyiannis’ Colorado presentation
http://www.cwi.colostate.edu/nonstationarityworkshop/SpeakerNotes/Wednesday%20Morning/Koutsoyiannis.pdf
there is an example using the AMO index; it is an almost unknown fact that there is an 11-year advance precursor to it.
http://www.vukcevic.talktalk.net/theAMO.htm
It’s most inconsiderate of the solar system to provide fewer planets than statisticians would like. 🙂
But it doesn’t. It provides several more moons of gas giants with at least as much atmosphere as the moons they selected and with varying albedo. It’s just that if you plot them using precisely N&Z’s algorithm and openly published atmosphere/temperature data they fall nowhere near their miracle curve.
Can you say “cherrypicking”?
Curiously, if you plot the moons that they did plot but use NASA data for atmosphere and temperature (and perhaps throw, I dunno, error bars in since some of those planetoid moons have an atmosphere so tenuous that it is given as a range, not a number) then they don’t fall on the curve either.
Which leads me, at least, to wonder if the curve came first and then the data, or the other way around. IMO it is better the other way around, even though then there is no miracle; there is just a non-existent fit to no-atmosphere moons, whose temperatures are understandably correlated with each moon’s real Bond albedo, not its atmosphere. High albedo, relatively low temperature (at a given insolation, comparing Jovian moons to Jovian moons, etc). There is no “universal greybody albedo” set from the Moon, because the albedo of even the ice-free Jovian moons is not at all like that of the Moon!
All of which you would know if you looked at the plots I laboriously generated when checking the N&Z results numerically way back when. Why bother complaining about reproducibility in science if somebody “publishes” a result, somebody else openly checks that result against hard numbers and finds that it doesn’t actually fit the data (failing in some extremely suspicious and questionable ways), and that same somebody points out that the dimensioned physical constants in the fit are completely irrelevant to any physical process that could occur on the surface of a planet in association with warming, and serve mainly to force a curve to work for probably-bent physical data for nearly airless moons?
Nikolov and Zeller should be utterly forgotten. It is terrible statistics: five parameters, eight data points, suspicious (at the very least cherrypicked) data. It is terrible physics; really, it is. You can’t just pull a power law with absurd exponents and constants out of thin air, especially when neither exponent nor dimensioned constant is in the vague realm of relevance to any conceivable physical process. Neither can you assert that the Moon’s albedo is somehow generalizable to all planetary objects, not when you can SEE AND MEASURE their Bond albedos from here and note that they are wildly different from object to object, and can SEE AND MEASURE the fact that surface temperatures at constant insolation do indeed vary with Bond albedo, not with surface pressure (which is not that variable for the Jovian moons).
The Jovian moons ALONE refute N&Z. They all have the same insolation. They have very, very different albedos. They have remarkably similar surface pressures, in all cases a whiff above hard vacuum, barely enough to be called “an atmosphere” (but more than the Moon or Mercury). And their temperatures correlate with albedo, not pressure, and none of them fall on the miracle curve. End of story; move along, folks, nothing to see here.
Sorry to harp on this, but I object to ANY effort to rehabilitate N&Z even by implication unless and until every one of these objections is addressed. And some of them cannot be addressed save by simply withdrawing the hypothesis, as it is (in my carefully considered, data based, numerically backed up opinion) a false hypothesis, failing both the test of reason and consistency with known physics and the test of empiricism and an unbiased comparison of the hypothesis with all of the data. What more does one have to do to disprove it?
rgb
tallbloke says:
June 5, 2012 at 3:32 am
Run the freakin’ numbers, Tallbloke. It’s not the albedo from the moon. The number in question is used to calculate their fifth tunable parameter, t5.
Here are the albedos from the paper, along with the corresponding t5 parameter if we used that albedo …
    Body      Bond Albedo   Parameter t5
    Mercury   0.12          25.4
    Venus     0.75          18.6
    Earth     0.3           24.0
    Moon      0.11          25.5
    Mars      0.18          25.0
    Europa    0.64          20.3
    Titan     0.22          24.7
    Triton    0.75          18.6
These albedos range from a low end of 0.11 for the Moon’s albedo to 0.75 for Triton’s albedo. The corresponding value for the parameter t5 ranges from 25.5 down to 18.6. And as a result, the N&Z value for t5 of 25.3966 is different from the value for every one of the eight bodies used in their study. Not only that, but it is obvious that the albedo of the Moon is not “representative of the greybody albedo of rocky planets in general”. In fact, it is the lowest albedo of all the bodies under consideration. I have no idea why you claim that it is representative of any of the others.
So no, tallbloke, the claim that the fifth parameter uses the Moon’s albedo is simply not true. It’s easy to verify that it’s not true … run the numbers, my friend.
(For those wondering about the subject under discussion, see “The Mystery of Equation 8”.)
So what? So freakin’ what? Do you truly think that the fact that nature only gives you a few data points justifies using five (or even four) parameters to fit eight data points? Even using your figures, that’s one tunable parameter for every two data points. By your lights, since I have 168 data points, I’d be justified in using 84 tunable parameters …
First, heh, that’s not a citation. Second, heh, it says nothing about albedo being a function of atmospheric pressure, which was your claim. You said:
So again I ask—where does Nir Shaviv say that the planetary albedo is a function of pressure induced by gravity?
Many thanks,
w.
Formerly you had a low-order vector autoregressive model in which the regression parameters were functions of two underlying variables. Now you have a low-order vector autoregressive model in which the regression parameters are functions of 3 underlying variables. As my question above indicated, I do not know what the model is, but almost for sure the parameter estimation is unstable. Most of the explanatory power of the model comes from the fact that you are making one-step-ahead forecasts when the autocorrelation of delta(n) is highly correlated with deltat(n-1).
Paul_K says:
June 5, 2012 at 2:17 am
First, my thanks for your reasoned thoughts on the question, Paul, always appreciated.
Next, you say that a standard exponential decay is somehow theoretically more solidly based than a “fat-tailed” exponential decay. I fear I don’t see why, given that fat-tailed distributions are as common in nature as their standard exponential counterparts. You see this as using “evermore complicated response functions”, but I think that there is more physical justification for using a fat-tailed response than there is for using a standard response.
If there is a long slow response, why would it not show up in the fourteen years of data that I have, particularly since it is among the fastest-warming periods in the 20th century?
This is one of the more enduring myths of the GCMs, that they somehow “exhibit non-linear behavior”. The two that I have tested, the CCSM3 and the GISSE models, are strictly and definitely linear. I have no reason to assume that the other models are different.
As far as I know, the standard definition of climate sensitivity also includes all of the feedbacks that you mention. They are claimed to be the reason that the sensitivity is so much higher than the nominal Stefan-Boltzmann change in temperature expected from a 1 W/m2 change in forcing.
So I fail to see how I’m doing something different. Yes, my result also includes all of the various feedbacks … so does the standard approach. And I agree with Joel Shore that it is necessary to include the change in LW due to the changes in the clouds.
But changes in e.g. water vapor are included in my calculations just as in theirs. If I change the amount of sunshine, the system responds. The conventional view is that it amplifies (increases) the amount of warming that we’d expect from that change in sunshine. I think that’s backwards. I think that the response of the earth is following Le Chatelier’s Principle, and is pushing it back towards equilibrium. And the data that I have presented above seem to bear that out.
It’s good writing this, because I think I can see a way to disentangle some of this stuff. That is to compare a change in insolation due to changing sun with a change in insolation due to changing clouds … I’ll have to think about that one and get back to you.
My thanks for your insights,
w.
Willis,
Thanks for your thoughtful response to my comment. Here are a few more comments on it.
Actually, an important quibble on the wording: This data doesn’t tell you anything about cloud feedback (for which you would have to know how cloudiness changes with warming…both numerically and in terms of the types of clouds…to determine). What it does tell you is that the net radiative effect of clouds is cooling, although the LW cloud forcing does offset a healthy amount (~64%) of the SW albedo effect.
Actually, you are making another assumption here. While it may be true that only 70% of the total albedo is due to clouds, the real question is what percentage of the small change in albedo that was seen over this period is due to clouds. That could well be closer to 100%. (I suppose some of the drop in albedo could also be due to melting of high-albedo ice and snow…but wasn’t the albedo data that you used limited to lower latitudes anyway?) If this were the case, then naively, your estimate of the sensitivity, or response, might only be ~36% of the size that it should be over that time interval.
I think that carrying this all the way through may actually result in a larger change in sensitivity than your estimate here because of the following consideration: With your original estimate of the forcing, you found that the same one-time constant model that worked well for fitting the annual cycle also did a good job fitting to the linear trend over the 14-year period (with the same parameters). Now, you’ll find that this is no longer the case, i.e., you will find that using the model that you developed for the seasonal cycle, you considerably underestimate this linear trend. This means that if you use a more complicated model (such as one with two timescales or…I hope…the fat-tailed exponential model), it will begin to detect the fact that you get larger and larger estimates for the sensitivity as you look at phenomena at lower and lower frequencies…and so the extrapolation to still lower frequencies will yield a still higher sensitivity.
It is sort of analogous to considering a linear extrapolation of some quantity to zero frequency when you’ve measured it at two frequencies, say, 80 and 100 Hz. If you got the value of 1 at both of these frequencies then your linear extrapolation would also give you a value of 1 at zero frequency. However, if the value at 80 Hz were to double to 2, then the linear extrapolation to zero frequency doesn’t just double…It goes up to 6, i.e., it increases by a factor of 6 from your original estimate. (Of course, these are just made-up numbers…but meant to be illustrative of a basic point.)
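The arithmetic of that illustration is easy to verify (a throwaway sketch using the made-up numbers above):

    def extrapolate_to_zero(f1, v1, f2, v2):
        # Linear extrapolation of two (frequency, value) points to zero frequency
        slope = (v2 - v1) / (f2 - f1)
        return v1 + slope * (0.0 - f1)

    print(extrapolate_to_zero(100, 1.0, 80, 1.0))  # flat line: still 1 at zero frequency
    print(extrapolate_to_zero(100, 1.0, 80, 2.0))  # value doubles at 80 Hz: extrapolates to 6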
That is our basic point: the annual cycle is seeing a response that is severely damped. As you look at longer-term trends (i.e., lower frequency response), you are seeing a less damped response. And, if you look at still lower frequencies, you will see still less damping. The fact that the ocean is such a large heat sink means that responses at higher frequencies are very heavily damped.
Willis Eschenbach: “If there is a long slow response, why would it not show up in the fourteen years of data that I have, particularly since it is among the fastest-warming periods in the 20th century?”
As I mentioned repeatedly, I have employed your technique on synthetic data from a system that does indeed have a long, slow response, and your technique not only failed to detect the slow response but also underestimated the sensitivity as a result.
oops, another typo. I wrote Most of the explanatory power of the model comes from the fact that you are making one-step-ahead forecasts when the autocorrelation of delta(n) is highly correlated with deltat(n-1).
it should be: Most of the explanatory power of the model comes from the fact that you are making one-step-ahead forecasts when the autocorrelation of deltat(n) is highly correlated with deltat(n-1).
“I have employed your technique on synthetic data from a system that does indeed have a long, slow response, and your technique not only failed to detect the slow response but also underestimated the sensitivity as a result.”
If your technique is to use a climate model to produce synthetic data, it is still not real data. Willis uses REAL world data, and that cannot be faked. Yours is fake, and any result you get is a fairy tale. FAIL!
Willis Eschenbach says:
June 5, 2012 at 10:01 am
tallbloke says:
June 5, 2012 at 3:32 am
They use the Moon’s albedo as being representative of the greybody albedo of rocky planets in general. It is not a ‘tuned parameter’.
Run the freakin’ numbers, Tallbloke. It’s not the albedo from the moon. The number in question is used to calculate their fifth tunable parameter, t5.
These albedos range from a low end of 0.11 for the Moon’s albedo to 0.75 for Triton’s albedo…
So no, tallbloke, the claim that the fifth parameter uses the Moon’s albedo is simply not true. It’s easy to verify that it’s not true.
(For those wondering about the subject under discussion, see “The Mystery of Equation 8”.)
See also N&Z’s reply:
http://tallbloke.wordpress.com/2012/04/18/2012/02/09/nikolov-zeller-reply-eschenbach/
Where they say:
“Equation (2) calculates the mean surface temperature (Tgb) of a standard Planetary Gray Body (PGB) with no atmosphere” … “αgb = 0.12 is the PGB shortwave albedo”
In brief, the Moon’s albedo is the albedo the other bodies would have if they had no atmosphere or ice. That 0.12 albedo is what N&Z refer to as the greybody albedo. It is assumed to be the same for all rocky bodies, and it is the number which, along with the rest of the result of equation 2, is plugged into the later equation 8.
T.B.: It’s most inconsiderate of the solar system to provide fewer planets than statisticians would like. 🙂
W.E.: So what? So freakin’ what?
So people who are investigating solar system dynamics have to find other ways to increase confidence in their theories.
So again I ask—where does Nir Shaviv say that the planetary albedo is a function of pressure induced by gravity?
He doesn’t. They both recognise that variation in Earth’s albedo is primarily a function of variation in external forcings (at timescales for which atmospheric mass is fairly constant).
Cheers
TB.
Ed_B: “If your technique is to use a climate model to produce synthetic data, it is still not real data. Willis uses REAL world data, and that cannot be faked. Yours is fake, and any result you get is a fairy tale. FAIL!”
Brilliant riposte. Consider me well and truly chastised. Too bad for Michael Mann that he did not have you to come to his defense when Steve McIntyre used synthetic data to demonstrate that Mann’s technique would find hockey sticks where none existed.
In brief, the Moon’s albedo is the albedo the other bodies would have if they had no atmosphere or ice.
Except that it’s not. I suggest that you look up the actual data on the Jovian moons.
rgb
So people who are investigating solar system dynamics have to find other ways to increase confidence in their theories.
And I repeat. There are plenty more moons — they just didn’t apply their theory to them because it doesn’t work miraculously. Nor does it work with the moons they did apply their theory to, unless you use the secret recipe.
rgb
So again I ask—where does Nir Shaviv say that the planetary albedo is a function of pressure induced by gravity?
He doesn’t. They both recognise that variation in Earth’s albedo is primarily a function of variation in external forcings (at timescales for which atmospheric mass is fairly constant).
Because the albedo for all planetoid objects has almost nothing to do with atmospheric pressure. Compare Europa and Ganymede.
That’s why N&Z’s curve is two almost completely independent fits: one of Mars, Earth and Jupiter (with 2.5 parameters) and one of the atmosphere-free moons (a fit with incredibly absurd parameters that doesn’t work unless one uses the “right” set of numbers for e.g. mean temperature, ignores error bars, and so on). Basically, the fact of the matter is that mean surface temperature at constant insolation depends on albedo (moderated mildly by whatever little atmosphere one has on Mars and smaller objects, plus whatever greenhouse effect that atmosphere offers, which is also minuscule for atmospheres in which water would boil at room temperature, that is, basically “a vacuum”), and it depends on lots of complex stuff for the Earth (!) and for Venus (!!).
I really, truly don’t understand why you continue to defend the work of Nikolov and Zeller. Being skeptical of badly done climate science is one thing — endorsing junk science based on cherrypicked and possibly “adjusted” data just because it supports the implausible hypothesis that there is no such thing as a greenhouse effect, especially when one can DIRECTLY OBSERVE the CO_2 hole in TOA IR emissions, seems to me, at least, to be unwise.
rgb
Joe Born says:
June 5, 2012 at 4:40 pm
“Too bad for Michael Mann that he did not have you to come to his defense when Steve McIntyre used synthetic data to demonstrate that Mann’s technique would find hockey sticks where none existed.”
fail!
M Mann had a statistical method which was designed to find hockey sticks and average everything else around the shaft. S McIntyre proved that, as it worked well finding sticks in red noise (20,000 sets). Apples and oranges.
If you put in a gradual slope of increasing insolation you will of course get a gradual slope of increasing temperature. That says nothing at all about climate sensitivity. The effects of CO2 are just too small to be needed in the model. Sorry, but who cares about 0.3°C?