Sun and Clouds are Sufficient

Guest Post by Willis Eschenbach

In my previous post, A Longer Look at Climate Sensitivity, I showed that the match between lagged net sunshine (the solar energy remaining after albedo reflections) and the observational temperature record is quite good. However, there was still a discrepancy between the trends, with the observational trends being slightly larger than the calculated results. For the NH, the difference was about 0.1°C per decade, and for the SH, it was about 0.05°C per decade.

I got to thinking about the “exponential decay” function that I had used to calculate the lag in warming and cooling. When the incoming radiation increases or decreases, it takes a while for the earth to warm up or to cool down. In my calculations shown in my previous post, this lag was represented by a gradual exponential decay.

But nature often doesn’t follow quite that kind of exponential decay. Instead, it quite often follows what is called a “fat-tailed”, “heavy-tailed”, or “long-tailed” exponential decay. Figure 1 shows the difference between two examples of a standard exponential decay, and a fat-tailed exponential decay (golden line).

Figure 1. Exponential and fat-tailed exponential decay, for values of “t” from 1 to 30 months. Lines show the fraction of the original amount that remains after time “t”. Line with circles shows the standard exponential decay, from t=1 to t=20. Golden line shows a fat-tailed exponential decay. Black line shows a standard exponential decay with a longer time constant “tau”. The “fatness” of the tail is controlled by the variable “c”.

Note that at longer times “t”, a fat-tailed decay function gives the same result as a standard exponential decay function with a longer time constant. For example, in Figure 1 at “t” equal to 12 months, a standard exponential decay with a time constant “tau” of 6.2 months (black line) gives the same result as the fat-tailed decay (golden line).
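As a quick numerical check of that equivalence, here is a minimal Python sketch. The exact parameters behind Figure 1 are not stated, so the fat-tailed curve’s values (tau = 4 months, c = 0.6) are assumed purely for illustration:

```python
import numpy as np

def standard_decay(t, tau):
    """Fraction remaining after time t, standard exponential decay."""
    return np.exp(-t / tau)

def fat_tailed_decay(t, tau, c):
    """Fraction remaining after time t, fat-tailed exponential decay."""
    return np.exp(-(t / tau) ** c)

t = 12  # months
# Assumed illustrative parameters; the post does not state Figure 1's values.
print(fat_tailed_decay(t, tau=4.0, c=0.6))  # ~0.145
print(standard_decay(t, tau=6.2))           # ~0.144
```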

So what difference does it make when I use a fat-tailed exponential decay function, rather than a standard exponential decay function, in my previous analysis? Figure 2 shows the results:

Figure 2. Observations and calculated values, Northern and Southern Hemisphere temperatures. Note that the observations are almost hidden by the calculation.

While this is quite similar to my previous result, there is one major difference: the trends fit better. In my previous results the difference in the trends was just barely visible, but when I use a fat-tailed exponential decay function, the difference in trend can no longer be seen. The trend in the NH is about three times as large as the trend in the SH (0.3°C vs 0.1°C per decade). Despite that, using solely the variations in net sunshine, we are able to replicate each hemisphere’s trend exactly.

Now, before I go any further, I acknowledge that I am using three tuned parameters. The parameters are lambda, the climate sensitivity; tau, the time constant; and c, the variable that controls the fatness of the tail of the exponential decay.

Parameter fitting is a procedure that I’m usually chary of. However, in this case each of the parameters has a clear physical meaning, a meaning which is consistent with our understanding of how the system actually works. In addition, there are two findings that increase my confidence that these are accurate representations of physical reality.

The first is that when I went from a regular to a fat-tailed distribution, the climate sensitivity did not change for either the NH or the SH. If the sensitivities had changed radically, I would have been suspicious of the introduction of the variable “c”.

The second is that, although the calculations for the NH and the SH are entirely separate, the fitting process produced essentially the same “c” value for the “fatness” of the tail, c ≈ 0.6 (0.59 and 0.61). This indicates that the value is not varying just to match the situation, but that there is a real physical meaning for the value.

Here are the results using the regular exponential decay calculations:

                     SH            NH
lambda               0.05          0.10          °C per W/m2
tau                  2.4           1.9           months
RMS residual error   0.17          0.26          °C
trend error          0.05 ± 0.04   0.11 ± 0.08   °C per decade (95% confidence interval)

As you can see, the error in the trends, although small, is statistically different from zero in both cases. However, when I use the fat-tailed exponential decay function, I get the following results.

                     SH            NH
lambda               0.04          0.09          °C per W/m2
tau                  2.2           1.5           months
c                    0.59          0.61
RMS residual error   0.16          0.26          °C
trend error          -0.03 ± 0.04  0.03 ± 0.08   °C per decade (95% confidence interval)

In this case, the error in the trends is not statistically different from zero in either the SH or the NH. So my calculations show that the value of the net sun (solar radiation minus albedo reflections) is quite sufficient to explain both the annual and decadal temperature variations, in both the Northern and Southern Hemispheres, from 1984 to 1997. This is particularly significant because this is the period of the large recent warming that people claim is due to CO2.

Now, bear in mind that my calculations do not include any forcing from CO2. Could CO2 explain the 0.03°C per decade of error that remains in the NH trend? We can run the numbers to find out.

At the start of the analysis in 1984 the CO2 level was 344 ppmv, and at the end of 1997 it was 363 ppmv. If we take the IPCC value of 3.7 W/m2 per doubling, this is a change in forcing of log(363/344, 2) * 3.7 ≈ 0.28 W/m2 over the 14-year period, or about 0.2 W/m2 per decade. If we assume the sensitivity determined in my analysis (0.09°C per W/m2 for the NH), that gives us a trend of about 0.02°C per decade from CO2. This is smaller than the trend error for either the NH or the SH.
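For those who want to check that arithmetic, here is a minimal Python sketch using the figures above (the NH sensitivity is the fat-tailed value from the table):

```python
import math

co2_start, co2_end = 344.0, 363.0  # ppmv, start of 1984 and end of 1997
forcing_per_doubling = 3.7         # W/m2, IPCC value
years = 14.0                       # January 1984 through December 1997

# Forcing change over the whole period, then expressed per decade
delta_f = forcing_per_doubling * math.log2(co2_end / co2_start)
print(f"forcing change: {delta_f:.2f} W/m2 over {years:.0f} years")  # ~0.28

lambda_nh = 0.09                   # °C per W/m2, NH sensitivity from the table
trend = lambda_nh * delta_f / (years / 10.0)
print(f"CO2-driven trend: {trend:.3f} °C per decade")                # ~0.02
```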

So it is clearly possible that CO2 is in the mix, which would not surprise me … but only if the climate sensitivity is as low as my calculations indicate. There’s just no room for CO2 if the sensitivity is as high as the IPCC claims, because almost every bit of the variation in temperature is already adequately explained by the net sun.

Best to all,

w.

PS: Let me request that if you disagree with something I’ve said, QUOTE MY WORDS. I’m happy to either defend, or to admit to the errors in, what I have said. But I can’t and won’t defend your interpretation of what I said. If you quote my words, it makes all of the communication much clearer.

MATH NOTES: The standard exponential decay after a time “t” is given by:

e^(-1 * t/tau) [ or as written in Excel notation, exp(-1 * t/tau) ]

where “tau” is the time constant and e is the base of the natural logarithms, ≈ 2.718. The time constant tau and the variable t are in whatever units you are using (months, years, etc). The time constant tau is a measure that is like a half-life. However, instead of being the time it takes for something to decay to half its starting value, tau is the time it takes for something to decay exponentially to 1/e ≈ 1/2.7 ≈ 37% of its starting value. This can be verified by noting that when t equals tau, the equation reduces to e^-1 = 1/e.

For the fat-tailed distribution, I used a very similar form by replacing t/tau with (t/tau)^c. This makes the full equation

e^(-1 * (t/tau)^c) [ or in Excel notation exp(-1 * (t/tau)^c) ].

The variable “c” varies between zero and one to control how fat the tail is, with smaller values giving a fatter tail.
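As an illustration of how “c” controls the tail, here is a minimal Python sketch; the tau value is arbitrary and chosen only for illustration:

```python
import numpy as np

def decay(t, tau, c=1.0):
    """Fraction remaining after time t; c = 1 gives the standard exponential."""
    return np.exp(-(t / tau) ** c)

tau = 2.0  # months, illustrative only
for c in (1.0, 0.8, 0.6):
    remaining = decay(24, tau, c)
    print(f"c = {c}: fraction remaining at t = 24 months is {remaining:.6f}")
# Smaller c leaves more of the original amount at long times, i.e. a fatter tail.
```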

[UPDATE: My thanks to Paul_K, who pointed out in the previous thread that my formula was slightly wrong.  In that thread I was using

∆T(k) = λ ∆F(k)/τ + ∆T(k-1) * exp(-1 / τ)

when I should have been using

∆T(k) = λ ∆F(k) * (1 – exp(-1/τ)) + ∆T(k-1) * exp(-1/τ)

The result of the error is that I have underestimated the sensitivity slightly, while everything else remains the same. Instead of the sensitivities for the SH and the NH being 0.04°C per W/m2 and 0.08°C per W/m2 respectively in both calculations, the correct sensitivities for this fat-tailed analysis should have been 0.04°C per W/m2 and 0.09°C per W/m2. The error was slightly larger in the previous thread, increasing them to 0.05 and 0.10 respectively. I have updated the tables above accordingly.

w.]
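For concreteness, here is a minimal Python sketch of the corrected recursion from the update above. The forcing series is a random placeholder; the real input would be the monthly net-sun anomalies, and the parameters shown are the NH values from the regular-decay table:

```python
import numpy as np

def lagged_response(delta_f, lam, tau):
    """Corrected lag recursion:
    dT(k) = lam * dF(k) * (1 - exp(-1/tau)) + dT(k-1) * exp(-1/tau)

    delta_f : monthly forcing anomalies, W/m2
    lam     : climate sensitivity, degC per W/m2
    tau     : time constant, months
    """
    decay = np.exp(-1.0 / tau)
    dT = np.zeros(len(delta_f))
    for k in range(1, len(delta_f)):
        dT[k] = lam * delta_f[k] * (1.0 - decay) + dT[k - 1] * decay
    return dT

# Placeholder forcing: 14 years of synthetic monthly anomalies
rng = np.random.default_rng(0)
forcing = rng.normal(0.0, 1.0, 168)
print(lagged_response(forcing, lam=0.10, tau=1.9)[:5])
```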

[ERROR UPDATE: The headings (NH and SH) were switched in the two blocks of text in the center of the post. I have fixed them.]

George E. Smith;
June 4, 2012 12:27 pm

Willis, I would echo Richard’s caution, and suggest another possible angle which is cause related.
If you look at the great mathematical theories of Physics, off hand, I can’t think of one, that has a (c) like your formula; at least a non-integral one.
So your “fat tail exponential” has all the odors of a fudge factor, a simple forced curve fitting; well, that’s what Dr Roy’s third order (excuse me, that’s fourth order) comedy power series fit does.
So it seems to me, unlikely, that some simple physical process can yield a non integral (c) or even a non unity (c).
BUT! What might fit your data, and could also be physically causal, would be if the fit curve was actually the sum of two exponentials with different time constants. Of course each would need some fraction of the starting value, so you would need something like:
f = a * exp(-t/tau1) + b * exp(-t/tau2)
I happen to know, that some commonly used scintillation crystals for particle detectors, emit a light pulse, in response to a charged particle, that has at least two time constant components, and the mix of those two components depends on the identity of the particle.
Stilbene for example, which is one I actually have worked with, can detect gamma rays, as a result of an electron getting kicked out of an atom, and neutrons as a result of a knock on proton, as well as alpha particles.
The peak height of the light pulse is proportional to the energy of the incident particle, while the amount of the long time constant tail, is particle identity dependent. Neutrons (proton) give a bigger long component, than gammas (electron) and alphas give an even bigger long tail component.
So for a pulse of a given energy height, the total area of the pulse is defined by the particle identity. I used this discrimination technique, to count neutron events very efficiently, in the presence of huge gamma ray fluxes, which I could reject on the basis of height/area discrimination.
The trick is to integrate the anode current pulse for area, and take a peak reading wide band pulse from the last dynode of the photo-multiplier tube.
So your data, could have two different physical processes underlying, which likely had different decay time constants, and your (c) formalism, would not reveal that.
Just a thought to rattle around in that brain of yours.
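For anyone who wants to experiment with the two-exponential idea, a minimal Python sketch; the data are synthetic, generated from a fat-tailed decay with purely illustrative parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a, tau1, b, tau2):
    """Sum of two exponentials with separate amplitudes and time constants."""
    return a * np.exp(-t / tau1) + b * np.exp(-t / tau2)

# Synthetic "fat-tailed" data to fit (illustrative tau and c only)
t = np.arange(1, 31, dtype=float)  # months
y = np.exp(-(t / 2.0) ** 0.6)

# Positive bounds keep amplitudes and time constants physical
params, _ = curve_fit(two_exp, t, y, p0=[0.5, 1.0, 0.5, 10.0],
                      bounds=(0, np.inf), maxfev=10000)
a, tau1, b, tau2 = params
print(f"a = {a:.3f}, tau1 = {tau1:.2f} months; b = {b:.3f}, tau2 = {tau2:.2f} months")
```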

June 4, 2012 12:58 pm

Vuk,
I don’t see any inconsistency between my propositions and yours.
Your comments and data seem to me to be expected from a scenario whereby the air circulation responds to negate ANY forcing other than more energy from the sun or a higher atmospheric mass.
joeldshore said:
“if the outgoing longwave radiation has increased because of decreasing cloudiness, this will offset some (perhaps quite a large fraction!) of the forcing due to the increase in incoming shortwave radiation.”
Obviously so. The widened equatorial air masses with reduced cloud cover will radiate more freely to space at night. That makes it somewhat easier for the poleward shift in the entire air circulation pattern to offset the extra incoming to the oceans during the daytime. Some of that extra energy into the oceans is retained and has to be moved poleward by the oceans before it can be lost to space.
Note that the albedo / cloudiness is a result of (and proportionate to) the quantity of energy passing through the troposphere from oceans to space. It does not do any forcing in itself. It represents the netted out result of ALL available factors that affect tropospheric energy content and manifests itself in the particular air circulation configuration at any given moment.
The Earth system does not rise to a higher equilibrium temperature when there is a change in anything other than solar input at top of atmosphere or an increase in total atmospheric mass. Instead it maintains the same system energy content and changes the rate of energy throughput to maintain stability. The global air circulation adjusts as necessary and albedo follows closely.
Any planet with an atmosphere appears to have the same capability. But that brings us to the comments of Harry Dale Huffman and the findings of Nikolov and Zeller which are not suitable for discussion here. I only mention that because it helps to show how it could all fit together in a wider scheme of things.
Robert Brown said:
“The fast process is e.g. atmospheric transport and relaxation, the slow process is the longer time associated with oceanic buffering of the heat”
Just so. The fast process is latitudinal air circulation shifting. The slow process is internal ocean cycling. The latter affects the former (as does the sun and any other forcing process) and at any given time the net balance between top down solar and bottom up oceanic forcings is represented by the global air circulation pattern at that moment. Global albedo would therefore be the critical indicator for the system trend at any given time. There will be an albedo figure for net thermal balance but in practice it is never maintained for long because all the parameters are constantly changing which also changes the albedo required to achieve balance.

Gary W
June 4, 2012 1:03 pm

Congratulations Willis. Climate Science is being reduced from a 3 year course to 2 weeks and now being taught in High School only.

Stephen Wilde
June 4, 2012 1:03 pm

George E. Smith said:
“So your data, could have two different physical processes underlying, which likely had different decay time constants”
Yes. Robert Brown said that too.
In my opinion the two physical processes are air circulation shifting (fast) and internal ocean cycling (slow).

Stephen Wilde
June 4, 2012 1:11 pm

Vuk said:
“what you are proposing is not a convincing resolution for understanding any of the major events as the MWP, LIA or recent warming period, although it is OK as an academic exercise.”
On the short timescale discussed in this thread, no.
But if one proposes millennial solar cycling influencing the polar air masses from the top down, and similar internal ocean cycling along the thermohaline circulation affecting SST and the equatorial air masses, each of which can affect albedo, then the MWP, LIA and all other Holocene climate swings can readily be brought into Willis’s scenario.
The ocean cycling would be a delayed reflection of the solar cycling and both acting on the air circulation would affect albedo without offending Willis’s observation that sun and clouds are sufficient.

June 4, 2012 1:25 pm

Using the spreadsheet given in the first post of this series, I have plotted the rolling 12-month average of the actual SH temperatures (in AQ) alongside a 12-month moving average of the accumulation of the calculated monthly SH changes (in AG), initialised with the first temperature (16.4 C), i.e. 16.4 plus -0.1, then 16.3 plus -0.9, etc.
These don’t look very similar :-(
Does anyone have any idea what I am doing wrong?
Thanks

June 4, 2012 1:46 pm

George E. Smith; says: June 4, 2012 at 12:27 pm
……that has at least two time constant components…
Climate indices not only may have different time constants, but even run on two different clocks; it took me some time to get around this one:
In the North Atlantic there are two oscillations
Atlantic Multidecadal Oscillation the AMO (ocean temperature) and
North Atlantic Oscillation the NAO (atmospheric pressure)
They run synchronously until about 1910, and then the Northern Hemisphere temperature took off, and what happened? The AMO’s clock slowed down (or the NAO’s speeded up). The weird thing about it is that if you squeeze the AMO it falls again into a perfect synchronism with the NAO; or to put it simply, the NAO is currently some 11 years older than the AMO, if you assume they were the same age in 1910.
http://www.vukcevic.talktalk.net/AMO-NAO.htm
(btw there is a perfectly good natural reason for it; I do not see a single silver-bullet solution to the long term temperature oscillations) Hey, don’t run your body clock to the NAO.
Stephen Wilde says: June 4, 2012 at 1:11 pm
……..
As far as I understand your hypothesis, we only disagree about the cause of the polar jet shift: you think it comes from above (stratosphere etc), I think it comes from below, from the release of energy by deep convection in the North Atlantic, south of Iceland in the winter and the Nordic Seas in the summer. Here is what my man says:
http://www.theweatherprediction.com/weatherpapers/077/index.html
he can flatten your Svensmark into a pancake before you could say ‘galactic cosmic rays’.
Hope you are enjoying the festivities, got soaked on the Thames riverbank yesterday, but it was worth it.

hum
June 4, 2012 2:26 pm

Joel Shore “Willis, I think this misses the most important part of P. Solar’s point: You have assumed that the forcing is due solely to the albedo effect on the shortwave radiation. If, in fact the albedo change is real and accurately-measured (which I am still somewhat skeptical about) and if it is due to a net decrease in cloudiness over the period, then presumably this decrease in cloudiness has also produced an increase in outgoing longwave radiation. In fact, if the outgoing longwave radiation has increased because of decreasing cloudiness, this will offset some (perhaps quite a large fraction!) of the forcing due to the increase in incoming shortwave radiation. Hence, the net forcing due to this change in cloudiness might be considerably less. ”
Joel, you are failing to consider convection. Low-cloud albedo will always impact incoming radiation more than outgoing, since a sizeable amount of heat is carried from the surface to the troposphere by convection before it is radiated. Albedo is not an equal factor inbound versus outbound. That is just another reason why negative feedbacks actually rule in natural processes.

Jim D
June 4, 2012 3:23 pm

With temperatures already above average and increasing, and albedo decreasing mostly due to reductions in cloud cover, how does this support the negative feedback idea for clouds in a warming world? Has it not kicked in yet, making it pure speculation that is not supported by the data? On the other hand the data supports a positive feedback. The question is, if the cloud cover is decreasing, what is going to limit the warming unless at some point the cloud cover turns around and starts to increase again?

June 4, 2012 3:32 pm

Vuk,
I think it is BOTH top down solar AND bottom up oceanic.
I also think you should put less weight on the NAO, important though it is, and look at the global variations including both poles. The jets become more meridional / zonal in both hemispheres at similar times on multidecadal timescales, but the variability is less in the SH due to the thermal inertia of oceans as compared to land.
As I said, I see nothing fatal to my propositions or those of Willis in the findings you have set out. Your work is a useful supplement to the basic proposition and gives information about how the processes work through the system.
Even the bottom up oceanic forcings are simply a delayed reflection of earlier solar variations. Oceans only modulate solar input.

P. Solar
June 4, 2012 3:43 pm

Willis:
>>
My analysis concerns itself with the average net effect of the albedo changes, which are mostly from clouds. As a result, it perforce must include all of the effects of clouds—changes in incoming and outgoing SW, changes in incoming and outgoing LW, changes in wind, changes in evaporation, changes in ocean albedo, heat transfer from the surface to the atmosphere, all of the myriad things that clouds do that affect the temperature.
>>
Shortwave albedo is pretty clear cut; there’s only one source, and any outgoing is reflection (albedo).
However, outgoing LW IR can be either reflected solar or thermally emitted by the surface or atmosphere. Some IR will be absorbed and re-emitted at the same wavelength (reflection of a sort, if you will). Other IR will be at higher energies that cause warming, and hence emission of IR.
From the paper:
>>
In this study, a deterministic radiative transfer model is used to compute the global distribution of all TOA shortwave radiation budget components on a mean monthly and 2.5° by 2.5° longitude-latitude resolution, spanning the 14-year period from January 1984 through December 1997.
>>
So the model developed in the paper seems to be clearly just about reflection proper of SW radiation from the sun. So the modulation of outgoing IR is missing from your calculations, which makes it all the more surprising how well it works. Unless, of course, IR is quite small in relation to SW solar.
Your original exponential seems reasonable, though I would expect you to need two lambdas and two taus. As you pointed out, this would be nearly indistinguishable from your fat tail idea.
You would not be introducing more parameters by having two lambdas and two taus, since the NH and SH should be able to use the same values in proportion to their land/sea ratios.

George E. Smith;
June 4, 2012 3:50 pm

“””””…..Stephen Wilde says:
June 4, 2012 at 1:03 pm
George E. Smith said:
“So your data, could have two different physical processes underlying, which likely had different decay time constants”
Yes. Robert Brown said that too……”””””
Looks like one of those “read everything before doing anything” exams.
Had I done that I would have seen the good Professor’s earlier exposition; and also his more expansive comment regarding two possible processes.
So I stand aside and let Robert take the bow; great call, Professor. I suspect we both agree that two processes with two time constants is infinitely more likely than a fractal fudging.
And Willis, it shouldn’t be too difficult to separate the two functions from the short-time and long-time detail. And given your mathematical propensity, Willis, you can probably get Excel to find best-fit values for the four parameters. You would then have a model that could be refined if better data becomes available to you, and that had some physical reality.

Dr. Deanster
June 4, 2012 4:01 pm

Hey Willis ….. thanks for the reply.
Following up on my first post: I’m no expert on your model, and not quite sure what the parameters are. But they seem to be some sort of explanation of global temperature based on solar and albedo forcings. I know we have the solar data up to the present. I’m guessing there is some albedo data out there as well. If not, there is sure to be a range of albedo effects that could give a confidence interval for expected temperatures up to date.
I’d sure like to see what your model predicts with all the time lags, etc, through to the present. I mean, if you could predict the second half with the first half, it would seem you could take a stab at it for dates beyond 1997.
I mean .. your model could be really big!! As has been said, simplicity is usually the best solution, as it eliminates a lot of noise.

June 4, 2012 7:06 pm

Of relevance here is that nearly half of the measured land surface warming over the last 60 years is spurious, and results from deriving average temperature as (min+max)/2.
The reason is that minimum temperatures generally occur in the early morning, when solar insolation exceeds outgoing LWR, so minimum temperature is sensitive to small changes in solar insolation at this time. And changes in near-ground aerosols/particulates and aerosol-seeded clouds have a disproportionately large effect on early morning insolation (compared to other times of day).
I wrote about this at the link below.
http://www.bishop-hill.net/blog/2011/11/4/australian-temperatures.html
The relevance to Willis’ analysis is that HADCRUT land is mostly based on minimum and maximum temperatures, and so contains a significant amount of warming that disappears if an average of representative temperatures throughout the day is used.
Were Willis to use a temperature set genuinely representative of the average temperature throughout the 24 hours, I expect not such a good fit, leaving some room for non-albedo effects.
Nonetheless, albedo will still be the primary driver of climate (with the caveat Richard Courtney explained), as the above accounts for only about 15% of HADCRUT land/ocean warming.

June 4, 2012 7:33 pm

Nikolov and Zeller used 5 parameters to adjust a model. Anything can be adjusted to match anything by tuning 5 variables.
Furthermore, those parameters were utterly devoid of physics, and corresponded to dimensioned scale factors that were totally absurd — not only non-physical but literally inconceivable of BEING physical. Finally, their “miracle fits” utterly fail if one plots all of the OTHER gas giant moons, or replots the gas giant moons that they selected using their actual published data.
I’m just saying.
rgb

June 4, 2012 7:51 pm

PS—Do you have a link to the Koutsoyiannis paper you referenced above? I took a quick look and couldn’t find it. I find his work to be excellent and always fascinating.
I posted a couple of them, didn’t I? Or maybe it was on another thread. Damn, I can’t even keep track anymore.
I have to go back over to Pivers Island to teach for a few hours (yes, at 10:30 pm, sigh), but if you Google “Hurst Koutsoyiannis” or “Hurst-Kolmogorov Koutsoyiannis”, the PDF of the Colorado State workshop talk on this shows up on the latter, and a paper on just Hurst on the former. And if you search on his name and something like “climate variability” you can get several of his other papers and preprint PDFs from this general site:
itia.ntua.gr/en/docinfo/1001/
HTH.
rgb
P.S. — Speak of the devil! Koutsoyiannis just posted on WUWT himself on the “flying dinosaurs” thread. With luck you can contact him directly and ask him for a toplevel list of links to his papers.

June 4, 2012 7:53 pm

“So I stand aside and let Robert take the bow; great call, Professor. I suspect we both agree that two processes with two time constants is infinitely more likely than a fractal fudging.”
Actually my experiences are very similar to yours, except that I did the college “neutron activation of silver” experiment as an undergrad, which features two separate (nearby) decay processes, so I had to design counters and so on, and a statistical method to extract the two decay constants. And people think that college isn’t good for anything… ;-)
rgb

June 4, 2012 9:49 pm

Willis (June 4, 2012 at 11:13 am):
Thanks for taking the time to respond. In criticizing your article, I have the larger purpose of demolishing the IPCC’s claim to have conducted a scientific study on global warming. The IPCC’s study cannot have been “scientific,” for such a study must reference an underlying statistical population, and for the IPCC’s study there isn’t one. A statistical population is the sine qua non of a scientific study.
You’ve raised the issue of what is meant by a “statistical population.” As I’ll use the term it references a set of statistically independent events, a set of “conditions” that are conditions on the associated model’s independent variables and a set of “outcomes” that are conditions on the model’s dependent variables. An example of a set of conditions is [cloudy, not cloudy]. An example of a set of outcomes is [rain in the next 24 hours, no rain in the next 24 hours].
The “Cartesian product” of the two sets is the set of all pairings of a condition with an outcome. In my example, the Cartesian product is the set {[cloudy, rain in the next 24 hours], [cloudy, no rain in the next 24 hours], [not cloudy, rain in the next 24 hours], [not cloudy, no rain in the next 24 hours]}.
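In Python, that Cartesian product can be enumerated directly; a minimal sketch of the example above:

```python
from itertools import product

conditions = ["cloudy", "not cloudy"]
outcomes = ["rain in the next 24 hours", "no rain in the next 24 hours"]

# Each pairing of a condition with an outcome describes one element
# of the Cartesian product, i.e. one independent event.
for event in product(conditions, outcomes):
    print(event)
```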
Each element in the Cartesian product is a description of an independent event. A “prediction” is an extrapolation from an observed condition to an unobserved but observable outcome. For example, it is an extrapolation from the observed condition “cloudy” to the unobserved but observable condition “rain in the next 24 hours.”
A “sample” is a subset of the elements of a statistical population in which the outcomes of the events as well as the conditions have been observed. In a sample, a count of those events with identical outcomes is an example of a “frequency.” A model that is “scientific” is one that makes a representation about the frequencies of the various outcomes. In scientific principle if this representation is falsified by the evidence this model is discarded. Otherwise, the model is said to be “validated.”
In reference to the notion that there exists in nature a property of Earth’s climate known as “the climate sensitivity” (TECS), this idea identifies no events, statistical population or sample; thus, speculations regarding the magnitude of TECS are scientifically nonsensical. How then did numbers of otherwise sane people come to think these speculations make sense? One possibility is that they overlooked the non-observability of the equilibrium temperature, for the nonsensicality follows from this non-observability.

tallbloke
June 4, 2012 10:13 pm

Willis Eschenbach says:
June 4, 2012 at 10:20 am
I fear that this analysis cannot do any predicting at all about tomorrow. The reason is that it is based on the albedo, and we do not know what the albedo will do tomorrow …

Willis: well done finding some albedo data which can start to put some numerical detail into the qualitative and wiggle-comparative studies which already set out the hypothesis:
http://tallbloke.wordpress.com/2012/02/13/doug-proctor-climate-change-is-caused-by-clouds-and-sunshine/
Willie Soon was onto this stuff several years ago too with a numerically supported regional study:
http://tallbloke.wordpress.com/2010/06/21/willie-soon-brings-sunshine-to-the-debate-on-solar-climate-link/
I recall you said you had a chat with Willie Soon at the ICCC7. I’m glad to see some of his influence is rubbing off on you. 😉
An older study which sheds some light on the relationship between solar variation and albedo change is Nir Shaviv’s paper on using the oceans as a calorimeter.
http://sciencebits.com/calorimeter
If Nir Shaviv is right, then we can make a reasonable stab at what albedo will do in the future if we can predict what the Sun will do in the future. That’s why that issue of solar prediction has been the main focus of my efforts for the last 4 years.
By the way, Nikolov and Zeller used 4 parameters, not five. That is the same number Robert Brown and E.M. Smith are (correctly) recommending you use (the albedo of their ‘greybody’, ‘no atmosphere’ planets is fixed in their theory for all rocky solar system bodies). N&Z are in agreement with Nir Shaviv, since they say that the actual albedo on an atmosphere-bearing planet is a function of pressure (induced by the action of gravity on atmospheric mass) and insolation at the TOA.
Since GCRs are in approximate anti-correlation with solar variation, they too can have a role in this externally driven variation. The Earth tends to homeostasis. Change is externally driven. This is the right direction to be going in, and I’m glad to see you are moving towards it.
Cheers
TB.

moe
June 4, 2012 10:26 pm

One possible error: While the change in forcing from the sun is positive in the Northern Hemisphere, it is negative in the Southern Hemisphere. Therefore heat exchange between the hemispheres could reduce the sensitivity of the system. Forcing from CO2, however, is positive in both hemispheres at the same time.

P. Solar
June 4, 2012 11:17 pm

Philip Bradley says:
June 4, 2012 at 7:06 pm
Of relevance here is that nearly half of the measured land surface warming over the last 60 years is spurious, and results from deriving average temperature as (min+max)/2.
Now that is very interesting. I have always thought something must be fundamentally wrong with one or other (both?) land and sea datasets when land can show such a significantly larger warming than the oceans. Mind you, there have been some pretty questionable “corrections” made to HadSST as well , so I would not trust either dataset further than I could spit.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/
I’ll enjoy reading the article at BHill.
thx

Leonard Lane
June 4, 2012 11:22 pm

Willis:
I liked the original analysis, with a single exponential function as an overall average of land and ocean.
The idea of using a double exponential to represent land and ocean is, I think, a better one. I wonder: if you get the gridded data and mask the land surfaces, will this show that the two exponentials can be fitted independently for land and ocean and then combined, by area weighting, to get your original result with the double exponential (fat tail) distribution?
Hope this makes sense, it is late and I’m off now. Thanks again.

June 4, 2012 11:49 pm

Jim D said:
“The question is, if the cloud cover is decreasing, what is going to limit the warming unless at some point the cloud cover turns around and starts to increase again?”
Cloud cover started increasing some 12 years ago, just around the time temperature stopped rising. Give it time, and unless cloud cover starts decreasing again, the energy content of the oceans and troposphere will actually fall.

Kiminori Itoh
June 5, 2012 12:19 am

Douglass, Blackman and Knox made a similar analysis and gave even smaller values of climate sensitivity: “Temperature response of Earth to the annual solar irradiance cycle,” Phys. Lett. A, 323, 315-322 (2004), and its Erratum. Their climate sensitivity values (K/(W m^-2)) are 0.02 for the latitude band 60S-30S, 0.025 for 30S-0, 0.027 for 0-30N, and 0.058 for 30N-60N. A simple average gives 0.035 K/(W m^-2), and 0.13 degC for CO2-doubling.
Kiminori Itoh, Yokohama National University, Japan