Guest Post by Willis Eschenbach
In my previous post, A Longer Look at Climate Sensitivity, I showed that the match between lagged net sunshine (the solar energy remaining after albedo reflections) and the observational temperature record is quite good. However, there was still a discrepancy between the trends, with the observational trends being slightly larger than the calculated results. For the NH, the difference was about 0.1°C per decade, and for the SH, it was about 0.05°C per decade.
I got to thinking about the “exponential decay” function that I had used to calculate the lag in warming and cooling. When the incoming radiation increases or decreases, it takes a while for the earth to warm up or to cool down. In my calculations shown in my previous post, this lag was represented by a gradual exponential decay.
But nature often doesn’t follow quite that kind of exponential decay. Instead, it quite often follows what is called a “fat-tailed”, “heavy-tailed”, or “long-tailed” exponential decay. Figure 1 shows the difference between two examples of standard exponential decay and a fat-tailed exponential decay (golden line).
Figure 1. Exponential and fat-tailed exponential decay, for values of “t” from 1 to 30 months. Lines show the fraction of the original amount that remains after time “t”. Line with circles shows the standard exponential decay, from t=1 to t=20. Golden line shows a fat-tailed exponential decay. Black line shows a standard exponential decay with a longer time constant “tau”. The “fatness” of the tail is controlled by the variable “c”.
Note that at longer times “t”, a fat-tailed decay function gives the same result as a standard exponential decay function with a longer time constant. For example, in Figure 1 at “t” equal to 12 months, a standard exponential decay with a time constant “tau” of 6.2 months (black line) gives the same result as the fat-tailed decay (golden line).
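To make the comparison concrete, here is a minimal Python sketch of the two decay functions. The parameter values are illustrative (the post does not give the exact settings used for Figure 1); tau = 4 and c = 0.6 are chosen so the fat tail reproduces the stated crossover with the tau = 6.2 standard decay at t = 12 months.

```python
import numpy as np

def std_decay(t, tau):
    """Standard exponential decay: fraction remaining after time t."""
    return np.exp(-t / tau)

def fat_tail_decay(t, tau, c):
    """'Fat-tailed' (stretched) exponential decay, per the math notes below."""
    return np.exp(-(t / tau) ** c)

t = np.arange(1, 31, dtype=float)   # months, as in Figure 1

# Illustrative values; tau = 4, c = 0.6 reproduce the stated crossover
# with the tau = 6.2 standard decay at t = 12 months.
fat  = fat_tail_decay(t, 4.0, 0.6)
slow = std_decay(t, 6.2)

print(fat[11], slow[11])   # both ≈ 0.14 at t = 12
```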
So what difference does it make when I use a fat-tailed exponential decay function, rather than a standard exponential decay function, in my previous analysis? Figure 2 shows the results:
Figure 2. Observations and calculated values, Northern and Southern Hemisphere temperatures. Note that the observations are almost hidden by the calculation.
While this is quite similar to my previous result, there is one major difference: the trends fit better. The difference in the trends in my previous results is just barely visible. But when I use a fat-tailed exponential decay function, the difference in trend can no longer be seen. The trend in the NH is about three times as large as the trend in the SH (0.3°C vs 0.1°C per decade). Despite that, using solely the variations in net sunshine, we are able to replicate each hemisphere's trend exactly.
Now, before I go any further, I acknowledge that I am using three tuned parameters. The parameters are lambda, the climate sensitivity; tau, the time constant; and c, the variable that controls the fatness of the tail of the exponential decay.
Parameter fitting is a procedure that I’m usually chary of. However, in this case each of the parameters has a clear physical meaning, a meaning which is consistent with our understanding of how the system actually works. In addition, there are two findings that increase my confidence that these are accurate representations of physical reality.
The first is that when I went from a regular to a fat-tailed distribution, the climate sensitivity did not change for either the NH or the SH. If they had changed radically, I would have been suspicious of the introduction of the variable “c”.
The second is that, although the calculations for the NH and the SH are entirely separate, the fitting process produced the same “c” value for the “fatness” of the tail, c = 0.6. This indicates that this value is not varying just to match the situation, but that there is a real physical meaning for the value.
Here are the results using the regular exponential decay calculations:

                       SH             NH
lambda                 0.05           0.10          °C per W/m2
tau                    2.4            1.9           months
RMS residual error     0.17           0.26          °C
trend error            0.05 ± 0.04    0.11 ± 0.08   °C/decade (95% confidence interval)
As you can see, the error in the trends, although small, is statistically different from zero in both cases. However, when I use the fat-tailed exponential decay function, I get the following results.
                       SH             NH
lambda                 0.04           0.09          °C per W/m2
tau                    2.2            1.5           months
c                      0.59           0.61
RMS residual error     0.16           0.26          °C
trend error            -0.03 ± 0.04   0.03 ± 0.08   °C/decade (95% confidence interval)
In this case, the error in the trends is not different from zero in either the SH or the NH. So my calculations show that the value of the net sun (solar radiation minus albedo reflections) is quite sufficient to explain both the annual and decadal temperature variations, in both the Northern and Southern Hemispheres, from 1984 to 1997. This is particularly significant because this is the period of the large recent warming that people claim is due to CO2.
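For readers who want to check this kind of trend-error calculation themselves, here is a rough Python sketch on stand-in data; the real computation would use the observed-minus-calculated residual series from the analysis, and a careful version would also correct the interval for autocorrelation.

```python
import numpy as np
from scipy import stats

# Stand-in monthly residual series, 1984-1997 (168 months); the actual
# calculation would use the observed-minus-calculated residuals.
rng = np.random.default_rng(0)
months = np.arange(168)
resid = 0.0004 * months + rng.normal(0.0, 0.2, months.size)   # °C

slope, intercept, r, p, stderr = stats.linregress(months, resid)
per_decade = slope * 120            # °C/month -> °C/decade
ci95 = 1.96 * stderr * 120          # ~95% interval (ignores autocorrelation)
print(f"trend error: {per_decade:.2f} ± {ci95:.2f} °C/decade")
```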
Now, bear in mind that my calculations do not include any forcing from CO2. Could CO2 explain the 0.03°C per decade of error that remains in the NH trend? We can run the numbers to find out.
At the start of the analysis in 1984 the CO2 level was 344.2 ppmv, and at the end of 1997 it was 363 ppmv. If we take the IPCC value of 3.7 W/m2 per doubling, this is a change in forcing of log(363/344.2) / log(2) * 3.7 = 0.28 W/m2 per decade. If we assume the sensitivity determined in my analysis (0.08°C per W/m2 for the NH), that gives us a trend of 0.02°C per decade from CO2. This is smaller than the trend error for either the NH or the SH.
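The arithmetic, as corrected in the update at the end of the post, can be reproduced in a few lines of Python (variable names are mine; the values are from the text):

```python
import math

C_start, C_end = 344.2, 363.0   # ppmv CO2, 1984 and 1997 (values from the post)
F_2x = 3.7                      # W/m2 per doubling of CO2 (IPCC value)

dF = math.log(C_end / C_start) / math.log(2) * F_2x
print(f"CO2 forcing change: {dF:.2f} W/m2")        # ≈ 0.28

lam_NH = 0.08                   # °C per W/m2, the NH sensitivity used above
print(f"implied trend: {dF * lam_NH:.2f} °C")      # ≈ 0.02
```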
So it is clearly possible that CO2 is in the mix, which would not surprise me … but only if the climate sensitivity is as low as my calculations indicate. There’s just no room for CO2 if the sensitivity is as high as the IPCC claims, because almost every bit of the variation in temperature is already adequately explained by the net sun.
Best to all,
w.
PS: Let me request that if you disagree with something I’ve said, QUOTE MY WORDS. I’m happy to either defend, or to admit to the errors in, what I have said. But I can’t and won’t defend your interpretation of what I said. If you quote my words, it makes all of the communication much clearer.
MATH NOTES: The standard exponential decay after a time “t” is given by:
e^(-1 * t/tau) [ or as written in Excel notation, exp(-1 * t/tau) ]
where “tau” is the time constant and e is the base of the natural logarithms, ≈ 2.718. The time constant tau and the variable t are in whatever units you are using (months, years, etc). The time constant tau is a measure that is like a half-life. However, instead of being the time it takes for something to decay to half its starting value, tau is the time it takes for something to decay exponentially to 1/e ≈ 1/2.7 ≈ 37% of its starting value. This can be verified by noting that when t equals tau, the equation reduces to e^-1 = 1/e.
For the fat-tailed distribution, I used a very similar form by replacing t/tau with (t/tau)^c. This makes the full equation
e^(-1 * (t/tau)^c) [ or in Excel notation exp(-1 * (t/tau)^c) ].
The variable “c” varies between zero and one to control how fat the tail is, with smaller values giving a fatter tail.
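A quick numerical check of both formulas (illustrative values only): at t = tau the standard decay is down to 1/e ≈ 37%, and for the fat-tailed form a smaller “c” leaves more behind at long times.

```python
import math

tau = 3.0   # months (illustrative)

# At t = tau, a standard decay reaches 1/e ≈ 37% of its start value
print(math.exp(-1.0))                    # 0.3678...

# Smaller c leaves more behind at long times, i.e. a fatter tail
t = 12.0
for c in (1.0, 0.8, 0.6):
    frac = math.exp(-(t / tau) ** c)
    print(f"c = {c}: fraction remaining = {frac:.3f}")
# c = 1.0: 0.018,  c = 0.8: 0.048,  c = 0.6: 0.100
```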
[UPDATE: My thanks to Paul_K, who pointed out in the previous thread that my formula was slightly wrong. In that thread I was using
∆T(k) = λ ∆F(k)/τ + ∆T(k-1) * exp(-1 / τ)
when I should have been using
∆T(k) = λ ∆F(k) * (1 – exp(-1/τ)) + ∆T(k-1) * exp(-1/τ)
The result of the error is that I slightly underestimated the sensitivity, while everything else remains the same. Instead of the SH and NH sensitivities in the current fat-tailed calculations being 0.04°C per W/m2 and 0.08°C per W/m2 respectively, the correct values are 0.04°C per W/m2 and 0.09°C per W/m2. The error was slightly larger in the previous thread, raising those sensitivities to 0.05 and 0.10 respectively. I have updated the tables above accordingly.
w.]
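As a minimal sketch, here is the corrected recursion implemented in Python for the standard-decay case (the fat-tailed kernel has memory and cannot be written as a one-step recursion of this form). The step-forcing test just confirms that the (1 – exp(-1/τ)) factor yields the right equilibrium; the forcing series and values are illustrative.

```python
import numpy as np

def lagged_response(dF, lam, tau):
    """Corrected discrete lag recursion from the update above:
    dT(k) = lam * dF(k) * (1 - exp(-1/tau)) + dT(k-1) * exp(-1/tau)
    """
    decay = np.exp(-1.0 / tau)
    dT = np.zeros_like(dF)
    for k in range(1, len(dF)):
        dT[k] = lam * dF[k] * (1.0 - decay) + dT[k - 1] * decay
    return dT

# A +1 W/m2 step in forcing relaxes toward lam * dF, confirming the
# (1 - exp(-1/tau)) factor gives the right equilibrium value.
dF = np.ones(60)                                  # 60 months
dT = lagged_response(dF, lam=0.09, tau=1.5)       # NH values from the table
print(dT[-1])                                     # ≈ 0.09 °C
```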
[ERROR UPDATE: The headings (NH and SH) were switched in the two blocks of text in the center of the post. I have fixed them.]
vukcevic says:
June 4, 2012 at 12:59 am

“It is an oversimplification to resolve global temperature variability with only one independent and one internal feedback variable.”

Sorry it’s not complex enough for you, vukcevic. And yet it works … go figure.
w.
Willis:
This refinement of your model is an excellent response to those who have been attacking instead of assessing your model. Thank you.
The principle of parsimony says your model is the best we have of recent global temperature rise.
However, I am writing to caution against overstatement of your findings.
I again remind of the warning I repeatedly stated on the previous thread; i.e.
Hence, I write to provide a caveat to your statement that says;
Although directly true, your statement suggests that ‘high’ values of climate sensitivity are wrong. Please note that I think such ‘high’ values are wrong, but they may be possible despite your analysis being correct.
As I said in the previous thread:
Indeed, you later acknowledged that possibility in your later post to that thread at June 3, 2012 at 2:41 am where you wrote:
However, as you there point out, “increased clouds” would decrease temperature which would be a negative feedback providing a lower climate sensitivity than e.g. the IPCC proposes.
But that decrease is only one of several possible effects which may occur in the real climate system.
To avoid misunderstanding of what I am trying to say, I reiterate that
• The principle of parsimony says your model is the best we have of recent global temperature rise.
And
• I find your model cogent.
However, we need to avoid jumping to undue certainty (which others did with the resulting creation of the IPCC). And, therefore, I again remind that an ability to attribute a cause(s) is NOT evidence that the cause is the true cause in part or in whole.
One of the reasons your model is so very important is that it suggests falsifiable hypotheses for mechanisms of global climate change (indeed, Stephen Wilde has provided one such falsifiable hypothesis on the previous thread). GCMs do not suggest such falsifiable hypotheses.
Hence, I am writing to caution against overstatement of your findings. Such overstatement provides ‘straw men’ which can be used to generate excuses for ignoring your important findings.
Richard
I went back thru the previous posts and couldn’t see the source of your temperature data.
Otherwise, it’s no surprise to me that albedo/solar insolation drives atmospheric temperatures. Nor does it surprise me that the effect of CO2 increases at current concentrations is minimal. What does surprise me is that your model seems to preclude ocean variability/cycles (i.e. variable heat release from the oceans) having a significant effect on atmospheric temperatures.
vukcevic says:
June 4, 2012 at 12:59 am
It is an oversimplification
=========
Complexity is no assurance that an answer is right. Simplicity is no assurance an answer is wrong. As a general rule, the simplest method that produces the correct answer is the preferred method.
So, the correlation implies that all non-solar forcings net out as albedo. You don’t need to account for them separately, as this would be double counting once you allow for changes in albedo. And it also implies that any theory of climate that assumes a constant albedo with changing temperature is wrong.
Willis, this is so simple and such a good fit it’s worrying.
If I follow you right, this albedo argument just works on incoming solar. However, albedo changes are presumably mostly cloud, and cloud cover is well known to block outgoing IR. So I don’t see how your model can fit without taking outgoing IR into account.
Am I misreading your hypothesis?
Mr. Eschenbach
I took many of your previous posts very seriously, so I am still inclined to think that this one is intended as a kind of ‘summer season spoof’.
Either way good luck.
“It is an oversimplification to resolve global temperature variability with only one independent and one internal feedback variable”
Not if that one internal feedback variable serves as a proxy for the net outturn of all the many other internal system feedbacks.
I suggest that albedo / cloudiness is just such a proxy.
I have been aware of that principle in general terms for many years so my main interest is in the next step.
That step is to determine what feature of the system serves best as a means of determining HOW the system uses clouds to achieve such an effect as a means of producing the observed long term system stability.
Svensmark suggests simple changes in cloud quantities as driven by cosmic ray condensation nuclei but that suggests that changes in the system are DRIVEN by the cloudiness / albedo changes whereas I think the cloudiness / albedo changes are a RESPONSE which maintains system stability.
The feature we should be looking at is the way cloud quantities appear to decrease when the mid latitude jets shift poleward and become more zonal yet increase when the jets become more meridional and shift equatorward.
As Richard Courtney said in the previous thread the mechanism which I propose in support of Willis’s findings is plausible and should be falsifiable so I await such falsification – or not.
So the atmosphere moderates temperature changes imposed on the planet by the sun and its variations. The moon, zero atmosphere but receiving the same insolation as the Earth, has sunlit temperatures in excess of +150C whilst shadow temperatures are below -150C. Wonderful thing, atmosphere.
Yup it is a travesty to use low number of parameters especially if none of them represents GHGs. Willis what were you thunking. You are smart but not Occam
Hi Willis,
As far as I can tell, it’s a good bit of work, congratulations. However I find it rich that you are content to curve fit and have ‘c’ as an as-yet unexplained variable, but dismiss others (often in quite childishly crass terms) when they do similar fitting. It’s not a case of “If the curve fits you must acquit”, but don’t be dismissive of great correlations just because science, as yet, has not come up with a suitable explanation.
As someone once said, k.i.s.s.
Let’s hear it for C-0.6!
OLR varies strictly with temperature, and all its forcings and factors work via albedo.
I like it!
I again remind that an ability to attribute a cause(s) is NOT evidence that the cause is the true cause in part or in whole.
What you say is correct, but it raises the question of what causes both decreased albedo and increased temperatures (the only other possible explanation).
All I can think of is black carbon, which certainly does both. However, I am fairly sure that because BC scatters incoming solar radiation (warming the troposphere), it causes climate cooling after a fairly short lag: had the BC not intercepted the incoming solar radiation, it would have reached the surface and the energy would have been retained longer in the climate system.
I think it very unlikely BC is the cause of what Willis has found.
Stephen Wilde says: “Svensmark suggests simple changes in cloud quantities as driven by cosmic ray condensation nuclei but that suggests that changes in the system are DRIVEN by the cloudiness / albedo changes whereas I think the cloudiness / albedo changes are a RESPONSE which maintains system stability.”
Nothing to stop both being true.
If Svensmark-style GCR influences albedo, that would not disrupt Willis’ findings. If that results in the GMT being a bit cooler than equilibrium, negative feedback reduces cloud. Again, this would fit what Willis is suggesting.
Philip Bradley says ” I went back thru the previous posts and couldn’t see the source of your temperature data.”
I picked that up late in the last thread, and Willis posted that it is HadCRUT. I think he ought to update the articles to show this (preferably with a precise version and a link to the data).
Figure 4 in this article shows the period used by W. was one of the few parts of the record that was not heavily “adjusted” by the Hadley Centre, so perhaps it can be taken as reasonably accurate.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/
Stephen Wilde says: June 4, 2012 at 2:27 am
…….
Hi Stephen
Most of the temperature rise in the N. Hemisphere during the last 300+ years is due to changes in the winter temperatures
http://www.vukcevic.talktalk.net/CETsw.htm
these changes are driven by the Arctic polar jet-stream, but since the Arctic gets very little if any insolation and has more or less stable winter albedo, I find the hypothesis proposed hardly plausible. I will look forward to further evaluation.
Philip Bradley asked:
“what causes both decreased albedo and increased temperatures (the only other possible explanation).”
Poleward shifting of the air circulation pattern which reduces global cloudiness and allows more energy into the oceans.
But the thermal effect is offset by the faster or larger water cycle implicit in a more poleward configuration.
So the question should be as to what shifts the air circulation poleward and the answer is more energy in the troposphere.
Whatever places that extra energy in the troposphere whether it be sun, oceans, GHGs or anything else the surface pressure configuration shifts so as to prevent it from affecting total system energy content.
Quite simply, anything that tries to make more energy accumulate in the troposphere just sees it negated by a faster throughput to space.
In the process, regions on the surface observe warmer air masses as the energy flows across them on its way out but total system energy content stays the same.
As regards human CO2 the effect is miniscule as compared to natural changes from sun and oceans.
From the albedo paper:

“The model is able to predict the seasonal and geographical variation of SW TOA fluxes. On a mean annual and global basis, the model is in very good agreement with ERBE, overestimating the outgoing SW radiation at TOA (OSR) by 0.93 Wm−2 (or by 0.92%), within the ERBE uncertainties. At pixel level, the OSR differences between model and ERBE are mostly within ±10 Wm−2, with ±5 Wm−2 over extended regions, while there exist some geographic areas with differences of up to 40 Wm−2, associated with uncertainties in cloud properties and surface albedo.”
I know you share my dislike of pretending model output is “data”.
Have you checked to make sure that model is not using temps and the NASA insolation data to calculate the albedo!?
ferd berple says: June 4, 2012 at 1:47 am
…….
Ferd
I am always in favor of simplicity and elegance for a solution, providing it is plausible. I am not certain this one is; see my post above addressed to Mr. Wilde:
http://wattsupwiththat.com/2012/06/04/sun-and-clouds-are-sufficient/#comment-1000844
DEEBEE and Harriet Harridan:
DEEBEE, I am assuming that your post at June 4, 2012 at 2:33 am is intended to be sarcasm and not a ‘true’ rejection of Willis’ analysis, but you do not indicate that. Perhaps it would be good if you were to clarify the matter.
Harriet Harridan at June 4, 2012 at 2:41 am you say to Willis:
But Willis has NOT merely conducted a “curve fit”, and ‘c’ is not “an as-yet unexplained variable”.
Willis showed by demonstration that solar input and albedo together are sufficient variables to describe change to mean global temperature over the observation period (i.e. 1984 to 1998). That is certainly not a mere curve fit.
However, the demonstration required a decay function, which Willis chose as being exponential. His choice of exponential decay was arbitrary, as he explains above.
Willis’ ‘c’ is merely a determination of the decay rate obtained by best fit. Your point would have had more validity if it were an objection to Willis having originally chosen an exponential decay function because that choice was arbitrary.
This procedure is NOT the same as merely curve fitting to obtain a desired result. Willis’ original analysis determined the minimum number of variables required to match empirical observations, and his refinement (which adopts ‘c’) improves the already-determined match.
At issue now is
• if the determined two variables are coincidentally sufficient to match the empirical observations over the analysis period
or
• if there is an underlying mechanism(s) which induces the two variables to be sufficient for description of climate system behaviour which governs mean global temperature.
Hence, your criticism is without merit.
Richard
Philip Bradley:
Thank you for your post at June 4, 2012 at 3:00 am. I agree with all it says. However, I point out that it supports the argument in my post at June 4, 2012 at 1:35 am which it is answering.
As you say;
With respect, that demonstrates my point. It is yet another example of the logical fallacy of ‘argument from ignorance’ which got us into this AGW-scare.
“The only thing I can think of says X so Y must be correct” ignores everything one has failed to “think of”.
As I said in the post you are addressing
Or, to put that another way, I repeat what I said to Harriet Harridan
Richard
ferd berple says:
June 4, 2012 at 1:57 am
So, the correlation implies that all non-solar forcings net out as albedo. You don’t need to account for them separately, as this would be double counting once you allow for changes in albedo. And it also implies that any theory of climate that assumes a constant albedo with changing temperature is wrong.
Albedo – the reflectivity of clouds – is an indication of the system response to increased internal heat content. The El Nino/La Nina variations can be due to the presence or lack of clouds inhibiting or allowing energy entering the Pacific. So the underlying homeostasis in the Earth system is driven by clouds forming as a response to the hydrological cycle speeding up or slowing down. Thus, as ferd says, measuring the albedo of the clouds hides the complexity of what is causing the changes in cloudiness.
As Stephen Wilde reminds us larger scale variations, perhaps on longer timescales than this study, affect the albedo as the Hadley cells are compressed and the jet streams move equatorwards changing the distribution of clouds and the albedo.
As Svensmark (and others) have been showing albedo (cloudiness) could also be increased by increases in the rate of high energy galactic cosmic rays.
So we appear to be in a Goldilocks Earth system that over short timescales (two decades) exhibits homeostasis. But the homeostatic mechanism(s), although apparently unaffected by volcanic activity such as Pinatubo, may be subject to perturbations from longer large-scale effects and/or from external GCR (and perhaps other unknown factors).
Well done Willis. The residuals are leaving very little wiggle room for the GCMs (pun intended).
P. Solar says:
June 4, 2012 at 3:15 am
Thanks for that, I had assumed it was a satellite troposphere record. For a surface temp record, I think black carbon could well be (and probably is) a significant common cause.
Very neat work. Congratulations. “Sun and clouds are sufficient” must be the best working hypothesis to date.
Which does not invalidate other contributions – how does CO2 contribute to cloud cover, how do cosmic rays seed clouds, how do the oceans contribute to the fat-tailed decay, are there other causes which cancel one another out … and on and on …
But in the meantime, the Willis Eschenbach observation “Sun and Clouds are Sufficient” rules OK. Occam smiles from Heaven, as we Keep it Simple.
Perhaps in a century or two we will be thinking that Climate is simple – only Weather is complex.
richardscourtney says:
June 4, 2012 at 4:07 am
“The only thing I can think of says X so Y must be correct” ignores everything one has failed to “think of”.
As I said in the post you are addressing
we need to avoid jumping to undue certainty (which others did with the resulting creation of the IPCC).
I wasn’t jumping to undue certainty. I was looking to open a discussion about what factors could be a common cause of both decreased albedo and increased temperatures.
Willis, could you post a link to the albedo data set you used?
The paper says they have full gridded output; that’s why I suggested you look at the tropics in the last thread. Clearly they have not released the full monty.
Maybe worth asking for the full data; they seem to be into open publishing of their work, which is very refreshing in this field.
“Philip Bradley says:
June 4, 2012 at 1:46 am
I went back thru the previous posts and couldn’t see the source of your temperature data.
What does surprise me is that your model seems to preclude ocean variability/cycles (ie variable heat release from the oceans) having a significant effect on atmospheric temperatures.”
Models cannot “preclude” anything. What the model says is that ocean variability/cycles are not necessary to explain the variability in temperature during the period under study. Ocean variability could be (and probably is) part of the overall picture. But they were not significant in explaining the variability during the time frame in question, if I understand the post correctly.
Mr. Eschenbach,
Just as food for thought: how about a 6 month or so prediction of future global temperatures based on this ‘fat tailed decay’ model?
It would seem to be a quick and easy way to provide background data.
Hi Richard,
You say: “But Willis has NOT merely conducted a “curve fit” and ‘c’ is not “an as-yet unexplained variable”
But Willis says: “Now, before I go any further, I acknowledge that I am using three tuned parameters.”
Both of you can’t be right. He threw his toys out of the pram when Nikolov and Zeller used, as he saw it, five tuned parameters. He uses fewer parameters than they did (and gets an impressive result, as they did), but he can’t have his cake and eat it.
FWIW that doesn’t change the fact that I think Willis is absolutely right on this occasion. The correlation between decreased cloud albedo and temperatures is good (http://oi49.tinypic.com/302uzpu.jpg).
H2O is the most anomalous of liquids and is yet to be fully understood. Most are aware of its odd behaviour around 0°C and 4°C, but it has many other tricks, one occurring around 35°C (something to do with specific heat, if my memory is correct). Around body temperature water becomes very much a receptor of very long wave radiation. Water’s tricks are yet to be understood by science, and as a thermostat for our planet more than some of its tricks may be at play.
Good post Willis.
Hi Willis,

In general, I agree with Richard’s observations. In fact, I seem to be doing that with sufficient regularity that I’m starting to think that he is really a Pretty Smart Guy (don’t we always think that of those with whom we agree?:-). I would like to point out one alternative possible/probable fit that IMO is at least as likely as a fat-tailed exponential with embedded power law, although it is less parsimonious.

Consider the double exponential:

T(t) = T_0 e^{-t/\tau_0} + T_1 e^{-t/\tau_1}

Yes, it has four parameters, but it also has a simpler physical explanation. There are two processes that both contribute towards equilibrium. One of them has a relatively short time constant, and is responsible for the steeper part of the slope. The other part has a longer time constant and is responsible for the “fat tail” — the more slowly decaying residual left over after the faster part is (mostly) gone.

It is now straightforward to come up with and/or falsify physical interpretations — The fast process is e.g. atmospheric transport and relaxation, the slow process is the longer time associated with oceanic buffering of the heat, for example (not saying that this is what it is, only one of several things it might be that can be checked by at least guestimating the relevant time constants from actual data).

Over a short time interval like the one you are fitting it might be difficult to resolve the two functional forms (or not, dunno), and I’m not certain I would consider simply winning a nonlinear regression contest sufficient evidence that one is better than another over this short an interval anyway, but over a longer interval it might be possible to get a clean separation of the two models.

The model you propose above — IIRC — is basically a “fractal” law — a law that suggests that the true dimensionality of time in the underlying process is not one. Complex systems are perfectly capable of producing fractal behavior, but a) it is a lot more difficult to analyze such behavior; b) over a long enough time, the two forms will be sharply differentiated; c) sooner or later one has to come up with at least a hand-waving argument for the fractal dimensionality (a scaling argument, that is, e.g. surface to volume scaling but in time?).

Exponentials, OTOH, are bone simple to explain. Loss rates are proportional to the residual of the quantity. A double exponential simply implies a double reservoir.

BTW I agree with your assessment that albedo is the primary governing factor simply because the greybody temperature itself contains the albedo and it is an absolutely direct effect, one that for reasons unknown the IPCC is completely ignoring in spite of the fact that the 7% increase in albedo over the last 15 years corresponds to 2 degrees Celsius of cooling (after some still unknown lag, probable timescale decades to a century). I also commend to you Koutsoyiannis’ work (I just posted the links on another thread, the one on CO_2 model fitting) — he provides a powerful argument that climate is governed by Hurst-Kolmogorov statistics, that is, stationary stochastic jumps driven by very slow nonstationary modulators with scaling properties. It’s tough going — this isn’t stuff in your everyday stats course — but I think the evidence (including e.g. SST graphs that he doesn’t even present) is overwhelming that he is right. In which case MOST of what we are trying to “fit” with a deterministic model is actually Ifni-spawn, pure stochastic noise…

rgb
The residuals clearly go between about -1°C and +1°C, which is higher amplitude than the total amount of warming claimed to have occurred over that time. Plotted at this scale you only prove that you are able to match total warming trend (one parameter) with three parameters you are using. That’s not very hard to do.
Please provide a more detailed analysis of the residual, starting with plotting it at a scale which allows resolving details, and a comparison with actual temperatures. Correlation analysis of the residual with detrended temperature data might also be useful.
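For what it’s worth, the suggested correlation check is only a few lines in Python; this sketch uses stand-in series, since the actual residuals are not posted.

```python
import numpy as np

# Stand-in series; the real check would use the HadCRUT temperatures and the
# observed-minus-calculated residual for 1984-1997.
rng = np.random.default_rng(1)
t = np.arange(168)                                   # months
temp = 0.002 * t + 0.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, t.size)
resid = rng.normal(0, 0.2, t.size)

# Remove the linear trend from the temperature, then correlate
detrended = temp - np.polyval(np.polyfit(t, temp, 1), t)
r = np.corrcoef(detrended, resid)[0, 1]
print(f"residual vs detrended temperature: r = {r:.2f}")
```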
Thanks for the update. If I understand the fat-tailed exponential decay right, would it not be simpler to explain it as two exponential decays, one with a time constant of about 2.6 months or so over oceans and one of the order of 1.2 months over land? This would also give the average temperature rise difference between ocean and land.
Well Willis … the proof of the pudding … how does your model do with the temperatures from 1997 to present? Maybe I’m reading it wrong, but am I correct in assuming that these data are strictly for the period 1984-1997?
Just as we observe with AGW models, the proof of validity is how well they predict the future. If your model models well between 1984-1997, then it should also model from 1997 to the present equally as well.
Your work is brilliant Willis! I look forward to your publishing this.
Philip Bradley and Harriet Harridan:
Philip, thank you for your clarification at June 4, 2012 at 4:37 am. I admit that I did misunderstand your earlier post, so I am pleased to have been corrected.
Harriet, at June 4, 2012 at 5:12 am you say to me
Actually, in context I think Willis and I are both right.
As I said, Willis attempted to determine the minimum number of variables required to model mean global temperature variation. In his original analysis he used only two parameters to achieve a match with mean global temperature.
And, as I also said, Willis refined that model by adjusting his arbitrary exponential decay rate to obtain an even better fit. His third variable was introduced to obtain that modification.
He discusses that saying
So, we are “both right” and Willis justifies his ADDITION of a third parameter as a refinement to his model. Remove that addition and we are left with his original and sufficient match with only two parameters.
You go on to say of Willis
He is not asking for “his cake and eat it”.
• Nikolov and Zeller used 5 parameters to adjust a model. Anything can be adjusted to match anything by tuning 5 variables.
• Willis Eschenbach investigated the minimum number of variables needed to match variation in mean global temperature, and he determined that he could do that using only two. No match is perfect, and he modified his model to improve his match by use of a justified third parameter. Remove his modification and his finding remains valid.
So, your assertion of “cake and eat it” is a comparison of chalk and cheese.
Additionally, I note that you say
OK. But you do not add to your arguments by inclusion of phrases such as “threw his toys out of the pram”. He, his behaviour and his work are different things. We are trying to discuss his work here, and I remind that in this thread alone I have already called for a realistic appraisal of that work on three occasions.
Richard
I look upon your calculations as being similar to a transformation of coordinate systems.
Instead of using forcings for things like aerosols, CO2 , and any associated feedbacks; you have simply moved onward to total effect on albedo.
So the CO2- related question changes from “what is the forcing and feedbacks from CO2?” to “What are the albedo changes due to CO2?”
Harriet Harridan makes a good point. I pointed out to Willis on one of the N&Z threads that the fact that they were curve fitting did not by itself invalidate their result without further analysis. Willis came unglued.
I think Willis has done an outstanding job here. It’s too bad he let his emotions take over previously. There was also a physical basis that existed with the N&Z equation as well: the ideal gas law. For the record, I think the overall N&Z theory is completely wrong; however, the relationship they found might have merit when looked at from a different angle. It’s too bad Willis refused to even consider that. Maybe this experience will cause him to rethink that situation. That would be a positive on top of this great analysis.
Finally, it would be interesting to take this theory and use it to predict the future. We have several predictions of Solar changes to use. If an albedo cycle could be added in, it would be interesting indeed.
I think this may be right, but it is hard to use it to make future predictions as you have to have the albedo figure to get temperature. We could make a prediction based on a SWAG of the future albedo, but that isn’t gaining a lot. Or I am not understanding something (always a likelihood in any given situation).
I also don’t see this as having any real inconsistencies with Svensmark, because his work describes an external driver of albedo that would in effect provide a new set point while the Galactic Cosmic Ray levels persisted. In a case where GCRs were more active in striking the atmosphere (whatever the source) more clouds would form than the normal feedback point. This would lower temperatures and the system would respond with less natural cloudiness. There would still be more cloud than normal for the temperature, however, so the system would not respond all the way back to the previous temperature, and how much cloudiness the GCRs induced would determine the new lower feedback set point of the system. Contrariwise, if the GCRs are less active, all cloud formation would be based solely on temperature and the set point would move back up to the system “normal”. So indeed prolonged active GCRs could induce ice ages, while quiet GCRs could induce climate optimums (assuming a constant sun – which isn’t true, but may be the driver for GCR activity).
In some ways the above indicates that Willis’ conjecture and Svensmark’s work could fit nicely in explaining the mechanism of long term climate. Much work still remains, and it certainly could still all wind up as a dead end, but we won’t know unless we can come up with some experimental test of the hypothesis.
richardscourtney says:
June 4, 2012 at 6:32 am
Philip, thankyou for your clarification at June 4, 2012 at 4:37 am.
That’s fine Richard. I appreciate your role as the epistemological police.
Is the trend error the size of the combined greenhouse gas forcing (rather than just CO2)?
http://www.esrl.noaa.gov/gmd/aggi/
Way off topic.
From the solar reference page:
Current solar status: M class flare
Geomagnetic conditions: Storm!
Charlie A: I don’t really see CO2 as being necessarily a driver here. The entire feedback regime could be affected by CO2, but doesn’t have to be. There could be any number of things driving the energy around the system, and a trace gas like CO2 doesn’t have to figure at all. In order to make the statements you made you have to first buy into the CO2-is-evil meme, and despite all the IPCC’s attempts I don’t for a minute accept it as proven.
To provide for the falsifiability of claims, this study needs a statistical population. It doesn’t have one.
The “fat-tailed” distributions are usually distribution functions that do not live in L2 when they are probability functions. Such functions lead to mathematical chaos in a mathematical process. I am not sure what the implication of a fat-tailed distribution would be here, but it is something to think about.
Actually this answers a question I asked in a previous post. Albedo is the sum of multiple causes, part of which is indirectly due to the increased CO2 just because of the physics. However there is some residual warming because of the CO2 increase that is not counteracted by the change in albedo. So, all things being equal (and of course they probably won’t be in the long term), there will be an increase of 0.3 deg in 100 years due to the CO2. I’m sweating already.
“In some ways the above indicates that Willis’ conjecture and Svensmark’s work could fit nicely in explaining the mechanism of long term climate. Much work still remains, and it certainly could still all wind up as a dead end, but we won’t know unless we can come up with some experimental test of the hypothesis.”
We already have Scafetta’s projections, based on GCR changes due to planetary motions driving Svensmark’s observations. The proof of Willis’ work, and how it fits in, is happening now. In three years GCRs and albedo will either follow Scafetta, then Willis, or not. I expect it will.
P. Solar says:
June 4, 2012 at 2:07 am
“If I follow you right, this albedo argument just works on incoming solar. However, albedo changes are presumably mostly cloud and cloud cover is well known to block out going IR. ”
If you can accept my out-of-the box revisionism, Willis’ model is not really about albedo, it’s about “greybodyness.” In my view formation of clouds is a reversible event, that is, the overall heat balance at the cloud level is the same after the cloud has formed as it was before, only above the cloud there is more outgoing SW and less outgoing LW and below the cloud there is less incoming SW and more downgoing LW. Of course the cloud soaks up heat from its surroundings as the water condenses. But clouds radiate/scatter as greybodies while the earth surface and ocean has IR ‘color’, so clouds get the earth to better resemble a blackbody radiator and allow it to more efficiently reject heat to space. By way of explanation, a greybody behaves like a blackbody wrt external radiation and temperature, only it is more reflective (“brighter”). It reflects more external radiation away but also reflects more internal radiation back inside.
Boy, it sure would be nice if there were more current data to see if this formula holds true. The more years in the mix the better. Guess only time will tell when more data is available. Good work, Willis, on some very insightful thinking. If only everyone in climate science would quit with the politics and focus on science, stuff like this would be more common.
Robert Brown says:
>>
Consider the double exponential:
T(t) = T_0 e^{-t/\tau_0} + T_1 e^{-t/\tau_1}
Yes, it has four parameters, but it also has a simpler physical explanation. There are two processes that both contribute towards equilibrium. One of them has a relatively short time constant, and is responsible for the steeper part of the slope. The other part has a longer time constant and is responsible for the “fat tail” — the more slowly decaying residual left over after the faster part is (mostly) gone.
>>
Yes, I find that more justifiable than the “fat tail”.
I’d also like to see what this looks like with real data (like the ERBE data from which they say they have non trivial divergence). I really don’t like the idea of using any kind of model output as “data” for such work.
There is also some more recent satellite data that may be useful if it has sufficient polar coverage.
“Willis’ conjecture and Svensmark’s work could fit nicely in explaining the mechanism of long term climate. ”
The problem I have with Svensmark’s hypothesis is that it doesn’t yet try to explain how changes in GCR amounts could cause the changes in the vertical temperature profile of the atmosphere that are required in order to produce the observed circulation and albedo / cloudiness changes.
In contrast, we have good evidence that such changes can be caused by solar wavelength and / or particle variations having different effects on ozone quantities at different levels of the atmosphere.
It has recently been observed that the ozone response to solar variability reverses from the usual expectation from 45km upward and that phenomenon is currently under close investigation. It may well have a bearing on the vertical temperature profile changes that are needed to account for observed circulation changes.
For that reason I think that GCR changes are simply a fortuitous correlation with little or no causative significance though they might have some impact on what happens anyway.
I predict that if the sun stays quiet then the AO and AAO will remain rather negative, the equatorial air masses will shrink whilst the polar air masses expand, cloudiness will remain higher than it was in the late 20th century with more meridional jets, La Nina will continue to dominate over El Nino as the oceans slowly lose energy due to increased cloudiness and tropospheric temperatures will slowly decline.
If the sun becomes more active for long enough then all that should reverse.
Everything will be consistent with Willis’s observation that sun and clouds are sufficient to explain observed tropospheric temperature trends.
Human CO2 would have an effect in theory but too small to measure amongst all the other variables.
Philip Bradley says:
June 4, 2012 at 1:46 am
Thanks, Philip. The temperature data is from HadCRUT.
w.
P. Solar says:
June 4, 2012 at 2:07 am
Thanks, P. Solar. Albedo changes both longwave and shortwave in a fairly complex fashion. What these calculations show is that the net effect of the albedo, including both long- and shortwaves, is to cool the earth.
w.
vukcevic says:
June 4, 2012 at 2:09 am
Thanks, vukcevic, and I assure you that I am totally serious … if you think there is a problem with my analysis, then where is it? Where is the error in my math or my data or my logic?
w.
Part of the total albedo comes from commercial airliners and so is man-made.
One could examine the earlier mono-functional decay prior to and after 9/11. The grounding of the airliners should result in a blip in albedo, and would allow the time base to be quite accurately estimated.
P. Solar says:
June 4, 2012 at 4:44 am
I digitized the data from the paper that you discuss, “Long-term global distribution of Earth’s shortwave radiation budget at the top of atmosphere“, by N. Hatzianastassiou et al.
I will, as you suggest, write to the authors and ask for the gridded data.
Thanks,
w.
c1ue says:
June 4, 2012 at 5:07 am
That’s an interesting question, c1ue, but I fear that this analysis cannot do any predicting at all about tomorrow. The reason is that it is based on the albedo, and we do not know what the albedo will do tomorrow …
w.
One minor mistake: The change in Forcing from CO2 is:
log(363ppm/344.2ppm) / log(2) * 3.7W/m² = 0.28W/m²
[Thanks, typo fixed, doesn’t affect the rest of the calculations. -w.]
vukcevic on June 4, 2012 at 12:59 am said:
“It is an oversimplification to resolve global temperature variability with only one independent and one internal feedback variable.”
It is an oversimplification to speak of “global temperature” at all, really…
Willis,
I’m sorry to see you heading off in this direction. Richard Courtney has commented several times that your choice of an exponential decay function was an arbitrary choice. I never saw it that way. It represents the solution to the single capacity, linear feedback equation, and as such formed a solid basis for your examination of SW founded on clear assumptions.
Your move in this thread takes you into curve-fitting with no tie-back to a physical model. All you have is a guess on the response function from that model, with no clear assumptive basis. I think this is retrogressive.
Robert Brown says:
June 4, 2012 at 5:45 am
Robert, as always, it’s a pleasure to hear from you. I have indeed considered the double exponential, and it may very well be that it will give a better fit. I have no problem with that. My goal was to see how far I could get with a single exponential, which has turned out to be a very long way.

Given the short length of the dataset, and the excellent fit of a fat-tailed exponential to the data, I doubt greatly whether we have enough resolving power to distinguish between the two hypotheses. I have written to Dr. Hatzianastassiou to see if he will give me the gridded data from his study. The most obvious reason to use a double exponential decay is that the thermal characteristics of the land and the ocean are totally different. It may be possible to analyze the data using a land mask and determine just how different they are, and whether a separate analysis for ocean and land gives better results.
It will also be very difficult to distinguish between the fat-tailed and the double exponential functions because they are very, very similar. Here’s some sample data to illustrate my point:
As you can see, there’s almost no difference between a “two-box” or double-exponential model and a single fat-tailed model. In fact, that suggests to me that it should be possible to convert my calculations using a fat-tailed model into the (one of the?) corresponding double-exponential setup that best fits my fat-tailed exponential calculation …
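For anyone who wants to see how close the two forms are, here is a rough Python sketch that fits a double exponential to a fat-tailed curve. The parameters are illustrative, not those of the actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1.0, 61.0)                    # months
fat = np.exp(-(t / 2.0) ** 0.6)             # fat-tailed decay, c = 0.6

def double_exp(t, a, tau1, b, tau2):
    """Two-box model: sum of two ordinary exponential decays."""
    return a * np.exp(-t / tau1) + b * np.exp(-t / tau2)

popt, _ = curve_fit(double_exp, t, fat, p0=[0.5, 1.0, 0.5, 10.0], maxfev=10000)
gap = np.max(np.abs(double_exp(t, *popt) - fat))
print(f"fitted parameters: {popt}")
print(f"worst-case difference: {gap:.4f}")  # small: the two forms nearly overlap
```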
For my purposes, however, I’m satisfied with the current state of the analysis. It shows that temperature is a function of albedo in a way that to my knowledge has never been demonstrated. It strongly supports my hypothesis, which is that clouds and thunderstorms form a governing system that keeps the earth’s temperatures within a very narrow range.
Always more for me to learn, thank you as always for your contribution to that process.
w.
PS—Do you have a link to the Koutsoyiannis paper you referenced above? I took a quick look and couldn’t find it. I find his work to be excellent and always fascinating.
W. says:
Thanks, P. Solar. Albedo changes both longwave and shortwave in a fairly complex fashion. What these calculations show is that the net effect of the albedo, including both long- and shortwaves, is to cool the earth.
But albedo is the reflectivity (IR and SW, OK). My point is: how come you get such a good match to temperature without apparently accounting for changes in outgoing IR? I have not checked the numbers, but I was not under the impression that it was so small that it could just be what accounts for the mismatch between your exp model and their model albedo “data” plus insolation.
Maybe I’m missing something in their paper, but it seems that there is no accounting for outgoing IR in their albedo estimations (neither would I expect there to be).
I noted that the NH loop in your plots is noticeably narrower at the winter end. Could this reduced amplitude in winter be the blanketing effect of more cloud cover in winter? Just the IR factor I mentioned above?
I have some of Spencer’s work on ERBE, I’ll have to look at relative magnitudes.
Harriet Harridan says:
June 4, 2012 at 2:41 am
Thanks, Harriet, you need to be a bit more subtle in your analysis. I protested strongly when Nikolov and Zeller used an equation with no physics-based real-world basis or explanation to do a 5-parameter fit to a measly 8 data points … if you think that is even remotely similar to this case, you need your glasses adjusted.
w.
Dr. Deanster says:
June 4, 2012 at 6:18 am
Sadly, I don’t have data for anything but the period in question. However, in my previous analysis I showed that I get the same answer using half the data for fitting the equation, and then applying it to the other half of the data. So I’ve done your test, and passed.
w.
Terry Oldberg says:
June 4, 2012 at 8:07 am
Terry, I’ve seen you make this claim before, and never understood it. What is a “statistical population” on your planet? What is an example of one? You may be right, but I haven’t a clue what you’re talking about. The OECD Statistical Glossary says:
So … perhaps my “target population” is all of the albedos and the temperatures of the planet throughout history, and my “survey population” is the albedos and temperatures 1984-1997 … but what is the “statistical population” you are talking about? What am I missing here?
Thanks,
w.
“I am not sure what the implication of a fat-talied distribution would be here, but it is something to think about.”
Agreed, but also there is no special requirement that the decay be exponential. An exponential decay is valuable in some cases because it is the only memoryless continuous distribution. In other words, this does not have to be a Markov process. However another model, such as the sum of several different exponential decays may be an improvement (sort of like the Bern Carbon Cycle Models) if we can identify what these processes are.
Paul_K says:
June 4, 2012 at 10:50 am
I appreciate the thought, Paul_K. However, we have lots of examples of “fat-tailed” exponential decay in nature; they are quite common.
As a result, I fail to see how it is that a regular exponential decay forms a “solid basis” for my examination, while in your estimation a fat-tailed exponential decay doesn’t form a solid basis … what am I missing?
Thanks,
w.
P. Solar says:
June 4, 2012 at 10:58 am
My analysis concerns itself with the average net effect of the albedo changes, which are mostly from clouds. As a result, it perforce must include all of the effects of clouds—changes in incoming and outgoing SW, changes in incoming and outgoing LW, changes in wind, changes in evaporation, changes in ocean albedo, heat transfer from the surface to the atmosphere, all of the myriad things that clouds do that affect the temperature.
These are all happening at different timescales and with different “climate sensitivities”. All of these are netted out into the timescale and the sensitivity of my analysis.
At least that’s how I see it,
w.
Mr. Eschenbach
I do not accuse you of making errors. What I am suggesting is that, despite good logic and correct maths (although I would like to see the residual graph compared to annual anomalies, since it appears that the residuals are of the same order, about 0.25°C), what you are proposing is not a convincing resolution for understanding any of the major events such as the MWP, LIA or the recent warming period, although it is OK as an academic exercise.
Also for the reasons I mentioned in my post addressed to Mr. Wilde
http://wattsupwiththat.com/2012/06/04/sun-and-clouds-are-sufficient/#comment-1000844
And finally application of a suitable procedure on even the best data may produce number of different outcomes. As an example you could (but I doubt that you would) take a closer look at my ‘Summer Season Spoof’ at
http://www.vukcevic.talktalk.net/00f.htm
it is based on real data, has good logic, doesn’t break any laws of physics and perfectly agrees with Hansen’s calculations, but is it a realistic reflection of the real world? I hope not.
Your past posts relating to the oceanic effects are very educational, and as a result I have learned from such.
……………………………………..
Meyer says: June 4, 2012 at 10:43 am
……..
Agree.
The Arctic and the North Atlantic oscillations hardly have any effect in the southern hemisphere; equally, the Antarctic’s temperature wave doesn’t even reach the tropics, while ENSO (El Nino and La Nina) events are equatorial and therefore affect both hemispheres. Averaging the two hemispheres into a single global dataset is definitely counterproductive for the purpose of understanding long-term natural variation.
Several comments in this thread criticize Mr. Eschenbach’s use of curve fitting and even call him a hypocrite. This shows that those commentators do not understand the difference between verification and validation.
During verification one must stick rigorously to math in order to show that a model does not violate any older, very well researched models (i.e. “laws” of science).
During validation one ideally compares the results of the model with measured data of the real world system. If this comparison is not possible, a second best approach is to compare the behavior of the model and the behavior of the real world.
When analyzing the behavior of a system’s measured data, it is common practice to use curve fitting. This curve must be fitted only to the measured data, and the model data must not have any influence on the curve shape.
If one, however, uses curve fitting in order to “fill the gap” between model and observation, one essentially creates an empirical model between the original model and the observation. Verification of this empirical model is not possible. With enough parameters one can connect almost any model with any system, but these models are meaningless.
The model which Mr. Eschenbach explained above shows the behavior of the measured data in a certain time-frame. It produces abstract parameters, which can be used to validate verified models of the climate.
Proper validation is extremely rare in climate science, while most of the millions of dollars are spent on verification. Comparing trends is not proper validation if the differences between the model and reality are larger than the trend.
What exactly is your model? Previously it was this:
ΔT(n+1) = λ ∆F(n+1)/τ + ΔT(n) exp(-1/τ)
Now is it this?
ΔT(n+1) = λ ∆F(n+1)/(τ^c) + ΔT(n) exp(-(t/τ)^c)
Or this?
ΔT(n+1) = λ ∆F(n+1)/(τ^c) + ΔT(n) exp(-1/(τ^c))
Or something else?
Willis Eschenbach says:
Willis, I think this misses the most important part of P. Solar’s point: You have assumed that the forcing is due solely to the albedo effect on the shortwave radiation. If, in fact the albedo change is real and accurately-measured (which I am still somewhat skeptical about) and if it is due to a net decrease in cloudiness over the period, then presumably this decrease in cloudiness has also produced an increase in outgoing longwave radiation. In fact, if the outgoing longwave radiation has increased because of decreasing cloudiness, this will offset some (perhaps quite a large fraction!) of the forcing due to the increase in incoming shortwave radiation. Hence, the net forcing due to this change in cloudiness might be considerably less.
If you overestimate the forcing associated with the temperature trend, then you underestimate the climate sensitivity.
So, I think there are still a lot of issues with your fit to the temperature trend…mainly because I don’t think you really know the forcings involved. (Your fit to the seasonal cycle is fine, because that forcing and the temperature response is known to a good accuracy percentage-wise…but unfortunately that fit tells you very little about the equilibrium climate sensitivity because of the issues with damping of these higher frequencies by the slower time scales in the system.)
[By the way, another issue here, which I think other people have touched on in the other threads, is how much of the cloudiness change counts as a forcing and how much could be a feedback from the temperature change. For example, if you have the causality partly wrong…i.e., some of the decreasing in cloudiness is due to the warming rather than the other way around…then you will be counting as a forcing something that is actually a feedback and this will give you a lower estimate of climate sensitivity than the actual sensitivity. Of course, your leaving out the direct forcing due to the change in greenhouse gases is another issue, which would tend to raise your estimate of the climate sensitivity relative to the actual.]
Willis, I would echo Richard's caution, and suggest another possible angle, which is cause-related.
If you look at the great mathematical theories of physics, offhand I can't think of one that has a (c) like your formula; at least not a non-integral one.
So your “fat tail exponential” has all the odors of a fudge factor, a simple forced curve fitting; well, that's what Dr Roy's third-order (excuse me, that's fourth-order) comedy power series fit does.
So it seems to me unlikely that some simple physical process can yield a non-integral (c), or even a non-unity (c).
BUT! What might fit your data, and could also be physically causal, would be if the fit curve was actually the sum of two exponentials with different time constants. Of course each would need some fraction of the starting value, so you would need something like:-
f = a.exp(-t/tau1) + b.exp(-t/tau2)
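If anyone wants to try George's suggestion directly, here is a minimal sketch of the four-parameter fit (Python with scipy; the t and y arrays are synthetic placeholders, not Willis's series):

    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp(t, a, tau1, b, tau2):
        # two decay processes, each with its own fraction of the
        # starting value and its own time constant
        return a * np.exp(-t / tau1) + b * np.exp(-t / tau2)

    t = np.arange(1.0, 31.0)                 # lag in months
    y = double_exp(t, 0.7, 2.0, 0.3, 12.0)   # synthetic "observed" decay
    p0 = [0.5, 1.0, 0.5, 10.0]               # starting guesses for a, tau1, b, tau2
    params, cov = curve_fit(double_exp, t, y, p0=p0)

The attraction of this form over a stretched exponential is that each recovered tau can be tied to a candidate physical process.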
I happen to know that some commonly used scintillation crystals for particle detectors emit a light pulse, in response to a charged particle, that has at least two time constant components, and the mix of those two components depends on the identity of the particle.
Stilbene, for example, which is one I actually have worked with, can detect gamma rays as a result of an electron getting kicked out of an atom, and neutrons as a result of a knock-on proton, as well as alpha particles.
The peak height of the light pulse is proportional to the energy of the incident particle, while the amount of the long time constant tail is particle-identity dependent. Neutrons (proton) give a bigger long component than gammas (electron), and alphas give an even bigger long tail component.
So for a pulse of a given energy height, the total area of the pulse is defined by the particle identity. I used this discrimination technique to count neutron events very efficiently in the presence of huge gamma ray fluxes, which I could reject on the basis of height/area discrimination.
The trick is to integrate the anode current pulse for area, and take a peak reading wide band pulse from the last dynode of the photo-multiplier tube.
So your data could have two different underlying physical processes, which likely had different decay time constants, and your (c) formalism would not reveal that.
Just a thought to rattle around in that brain of yours.
Vuk,
I don’t see any inconsistency between my propositions and yours.
Your comments and data seem to me to be expected from a scenario whereby the air circulation responds to negate ANY forcing other than more energy from the sun or a higher atmospheric mass.
joeldshore said:
“if the outgoing longwave radiation has increased because of decreasing cloudiness, this will offset some (perhaps quite a large fraction!) of the forcing due to the increase in incoming shortwave radiation.”
Obviously so. The widened equatorial air masses with reduced cloud cover will radiate more freely to space at night. That makes it somewhat easier for the poleward shift in the entire air circulation pattern to offset the extra incoming to the oceans during the daytime. Some of that extra energy into the oceans is retained and has to be moved poleward by the oceans before it can be lost to space.
Note that the albedo / cloudiness is a result of (and proportionate to) the quantity of energy passing through the troposphere from oceans to space. It does not do any forcing in itself. It represents the netted out result of ALL available factors that affect tropospheric energy content and manifests itself in the particular air circulation configuration at any given moment.
The Earth system does not rise to a higher equilibrium temperature when there is a change in anything other than solar input at the top of the atmosphere or an increase in total atmospheric mass. Instead it maintains the same system energy content and changes the rate of energy throughput to maintain stability. The global air circulation adjusts as necessary and albedo follows closely.
Any planet with an atmosphere appears to have the same capability. But that brings us to the comments of Harry Dale Huffman and the findings of Nikolov and Zeller which are not suitable for discussion here. I only mention that because it helps to show how it could all fit together in a wider scheme of things.
Robert Brown said:
“The fast process is e.g. atmospheric transport and relaxation, the slow process is the longer time associated with oceanic buffering of the heat”
Just so. The fast process is latitudinal air circulation shifting. The slow process is internal ocean cycling. The latter affects the former (as does the sun and any other forcing process) and at any given time the net balance between top down solar and bottom up oceanic forcings is represented by the global air circulation pattern at that moment. Global albedo would therefore be the critical indicator for the system trend at any given time. There will be an albedo figure for net thermal balance but in practice it is never maintained for long because all the parameters are constantly changing which also changes the albedo required to achieve balance.
Congratulations Willis. Climate Science is being reduced from a 3 year course to 2 weeks and now being taught in High School only.
George E. Smith said:
“So your data could have two different underlying physical processes, which likely had different decay time constants”
Yes. Robert Brown said that too.
In my opinion the two physical processes are air circulation shifting (fast) and internal ocean cycling (slow).
Vuk said:
“what you are proposing is not a convincing resolution for understanding any of the major events as the MWP, LIA or recent warming period, although it is OK as an academic exercise.”
On the short timescale discussed in this thread, no.
But if one proposes millennial solar cycling influencing the polar air masses from the top down, and similar internal ocean cycling along the thermohaline circulation affecting SST and the equatorial air masses, each of which can affect albedo, then the MWP, LIA and all other Holocene climate swings can be readily brought into Willis's scenario.
The ocean cycling would be a delayed reflection of the solar cycling, and both, acting on the air circulation, would affect albedo without offending Willis's observation that sun and clouds are sufficient.
Using the spreadsheet given in the first post of this series,
I have plotted the rolling 12-month average of the actual SH temperatures (in AQ) alongside a 12-month moving average of the cumulation of the calculated monthly SH changes (in AG), initialised with the first temperature (16.4 C), i.e. 16.4 plus -0.1, then 16.3 plus -0.9, etc.
These don't look very similar :-(
Does anyone have any idea what I am doing wrong?
Thanks
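A minimal sketch of the arithmetic being described, for anyone wanting to check it outside the spreadsheet (Python; column_AG and column_AQ are hypothetical stand-ins for the spreadsheet columns named in the comment, and 16.4 °C is the stated starting temperature):

    import numpy as np

    def running_total(monthly_changes, start):
        # cumulate the calculated monthly changes from the initial temperature
        return start + np.cumsum(monthly_changes)

    def moving_average(series, window=12):
        return np.convolve(series, np.ones(window) / window, mode="valid")

    # calc = moving_average(running_total(column_AG, 16.4))  # calculated SH
    # obs  = moving_average(column_AQ)                       # actual SH temps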
George E. Smith; says: June 4, 2012 at 12:27 pm
……that has at least two time constant components…
Climate indices may not only have different time constants, but may even run on two different clocks; it took me some time to get my head around this one:
In the North Atlantic there are two oscillations
Atlantic Multidecadal Oscillation the AMO (ocean temperature) and
North Atlantic Oscillation the NAO (atmospheric pressure)
They run synchronously until about 1910, and then the Northern Hemisphere temperature took off, and what happened? The AMO's clock slowed down (or the NAO's speeded up). The weird thing about it is that if you squeeze the AMO it falls again into a perfect synchronism with the NAO; or, put simply, the NAO is currently some 11 years older than the AMO, if you assume they were the same age in 1910.
http://www.vukcevic.talktalk.net/AMO-NAO.htm
(BTW, there is a perfectly good natural reason for it; I do not see a single silver-bullet solution to the long-term temperature oscillations.) Hey, don't run your body clock to the NAO.
Stephen Wilde says: June 4, 2012 at 1:11 pm
……..
As far as I understand your hypothesis, we only disagree about the cause of the polar jet shift: you think it comes from above (stratosphere etc.); I think it comes from below, from the release of energy by deep convection in the North Atlantic, south of Iceland in the winter and in the Nordic Seas in the summer. Here is what my man says:
http://www.theweatherprediction.com/weatherpapers/077/index.html
He can flatten your Svensmark into a pancake before you could say ‘galactic cosmic rays’.
Hope you are enjoying the festivities, got soaked on the Thames riverbank yesterday, but it was worth it.
joeldshore says:
June 4, 2012 at 12:09 pm
Joel, thanks as always for your thoughtful comments. You are correct, I hadn’t understood P. Solar’s point. That is an issue I had not considered.
Unfortunately, we are woefully short of datasets on all of these questions. The ERBE data, as referenced in Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data, gives the following global averages for the effect of clouds on LW and SW radiation:
TOA SW cloud forcing -48.4 W/m2
TOA LW cloud forcing 31.1 W/m2
This clearly indicates that the net cloud feedback is negative, and that the change in LW counteracts about 3/5 of the change in solar forcing. However, only about 70% of the albedo is from clouds, and the feedback in surface albedo works in the opposite direction to that of clouds (positive rather than negative with increasing temperature).
If this is reasonably accurate, then about 40% of the change in solar forcing (3/5 * 70%) is offset by an opposite change in LW forcing from the clouds. In turn, this would imply that my climate sensitivities are only 60% of the size that they should be. This would make them about 0.07°C and 0.16°C per W/m2 for the SH and the NH respectively (~ 0.2°C and ~ 0.6°C from a doubling of CO2 respectively, for a global average of ~ 0.4°C per doubling of CO2).
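A quick check of that arithmetic (Python; 3/5 and 70% are the figures from the text above, and the 0.04 and 0.09 °C per W/m2 base sensitivities are the fat-tailed values quoted elsewhere in the thread):

    # smaller net forcing for the same temperature change -> larger sensitivity
    lw_offset = 3 / 5                      # LW offsets ~3/5 of SW cloud forcing
    cloud_share = 0.70                     # ~70% of albedo is from clouds
    net_offset = lw_offset * cloud_share   # 0.42, "about 40%"

    for label, lam in [("SH", 0.04), ("NH", 0.09)]:
        corrected = lam / (1 - net_offset)
        print(label, round(corrected, 2), "degC per W/m2")
    # -> SH 0.07, NH 0.16, matching the corrected values above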
While this is quite possible, it is still way, way below the canonical claims of the IPCC, which say we should see 2°C to 4.5°C change per doubling.
Finally, the main point remains. The sun and the clouds are quite sufficient to explain the net change in temperature during the recent warming period, with no need to invoke CO2.
My thanks as always for your comments and corrections, always appreciated.
w.
Joel Shore “Willis, I think this misses the most important part of P. Solar’s point: You have assumed that the forcing is due solely to the albedo effect on the shortwave radiation. If, in fact the albedo change is real and accurately-measured (which I am still somewhat skeptical about) and if it is due to a net decrease in cloudiness over the period, then presumably this decrease in cloudiness has also produced an increase in outgoing longwave radiation. In fact, if the outgoing longwave radiation has increased because of decreasing cloudiness, this will offset some (perhaps quite a large fraction!) of the forcing due to the increase in incoming shortwave radiation. Hence, the net forcing due to this change in cloudiness might be considerably less. ”
Joel, you are failing to consider convection. Low-cloud albedo will always impact incoming radiation more than outgoing, since a sizeable amount of heat is carried from the surface to the troposphere by convection before it is radiated. Albedo is not an equal factor inbound versus outbound. That is just another reason why negative feedbacks actually rule in natural processes.
With temperatures already above average and increasing, and albedo decreasing mostly due to reductions in cloud cover, how does this support the negative feedback idea for clouds in a warming world? Has it not kicked in yet, making it pure speculation that is not supported by the data? On the other hand the data supports a positive feedback. The question is, if the cloud cover is decreasing, what is going to limit the warming unless at some point the cloud cover turns around and starts to increase again?
Vuk,
I think it is BOTH top down solar AND bottom up oceanic.
I also think you should put less weight on the NAO, important though it is, and look at the global variations including both poles. The jets become more meridional / zonal in both hemispheres at similar times on multidecadal timescales, but the variability is less in the SH due to the thermal inertia of oceans as compared to land.
As I said, I see nothing fatal to my propositions or those of Willis in the findings you have set out. Your work is a useful supplement to the basic proposition and gives information about how the processes work through the system.
Even the bottom up oceanic forcings are simply a delayed reflection of earlier solar variations. Oceans only modulate solar input.
Willis:
>>
My analysis concerns itself with the average net effect of the albedo changes, which are mostly from clouds. As a result, it perforce must include all of the effects of clouds—changes in incoming and outgoing SW, changes in incoming and outgoing LW, changes in wind, changes in evaporation, changes in ocean albedo, heat transfer from the surface to the atmosphere, all of the myriad things that clouds do that affect the temperature.
>>
Shortwave albedo is pretty clear-cut: there's only one source, and any outgoing SW is reflection (albedo).
However, LW IR can be either reflected solar radiation or thermal emission from the surface or atmosphere. Some IR will be absorbed and re-emitted at the same wavelength (reflection of a sort, if you will). Other IR will be at higher energies that cause warming and hence emission of IR.
From the paper:
>>
In this study, a deterministic radiative transfer model is used to compute the global distribution of all TOA shortwave radiation budget components on a mean monthly and 2.5° by 2.5° longitude-latitude resolution, spanning the 14-year period from January 1984 through December 1997.
>>
So the model developed in the paper seems to be clearly just about reflection proper of SW radiation from the sun. So the modulation of out-going IR is missing from your calculations which makes it all the more surprising how well it works. Unless, of course, IR is quite small in relation to SW solar.
Your original exponential seems reasonable, though I would expect you to need two lambdas and two taus. As you pointed out, this would be nearly indistinguishable from your fat-tail idea.
You would not be introducing more parameters by having two lambdas and two taus, since NH and SH should be able to use the same values in proportion to their land/sea ratios.
“””””…..Stephen Wilde says:
June 4, 2012 at 1:03 pm
George E. Smith said:
“So your data could have two different underlying physical processes, which likely had different decay time constants”
Yes. Robert Brown said that too……”””””
Looks like one of those “read everything before doing anything” exams.
Had I done that I would have seen the good Professor's earlier exposition, and also his more expansive comment regarding two possible processes.
So I stand aside and let Robert take the bow; great call, Professor. I suspect we both agree that two processes with two time constants is infinitely more likely than a fractal fudging.
And Willis, it shouldn't be too difficult to separate the two functions from the short-time and long-time detail. And given your mathematical propensity, Willis, you can probably get Excel to find a best-fit value for the four parameters. You would then have a model that could be refined if better data becomes available to you, and that has some physical reality.
Hey Willis ….. thanks for the reply.
Following up on my first post, I'm no expert on your model, and not quite sure what the parameters are. But they seem to be some sort of explanation of global temperature based on solar and albedo forcings. I know we have the solar data up to the present. I'm guessing there is some albedo data out there as well. If not, there is sure to be a range of albedo effects that could give a confidence interval for expected temperatures up to date.
I’d sure like to see what your model predicts with all the time lags, etc, through to the present. I mean, if you could predict the second half with the first half, it would seem you could take a stab at it for dates beyond 1997.
I mean… your model could be really big! As has been said, simplicity is usually the best solution, as it eliminates a lot of noise.
Of relevance here is that nearly half of the measured land surface warming over the last 60 years is spurious, and results from deriving average temperature as (min+max)/2.
The reason is that minimum temperatures generally occur in the early morning, at the point where solar insolation begins to exceed outgoing LWR, so the minimum temperature is sensitive to small changes in solar insolation at this time. And changes in near-ground aerosols/particulates and aerosol-seeded clouds have a disproportionately large effect on early morning insolation (compared to other times of day).
I wrote about this at the link below.
http://www.bishop-hill.net/blog/2011/11/4/australian-temperatures.html
The relevance to Willis' analysis is that HADCRUT land is mostly based on minimum and maximum temperatures, and so contains a significant amount of warming that does not exist if an average of representative temperatures throughout the day is used instead.
Were Willis to use a temperature set genuinely representative of the average temperature throughout the 24 hours, I expect not such a good fit, leaving some room for non-albedo effects.
Nonetheless, albedo will still be the primary driver of climate (with the caveat Richard Courtney explained), as the above accounts for only about 15% of HADCRUT land/ocean warming.
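The (min+max)/2 point is easy to illustrate with a toy diurnal cycle (Python; the temperature shape is wholly hypothetical, chosen only to be skewed):

    import numpy as np

    hours = np.arange(24)
    # hypothetical skewed cycle: a warming ramp from 06:00, flat overnight
    temps = 15 + 8 * np.maximum(0.0, np.sin((hours - 6) * np.pi / 14))

    minmax_mean = (temps.min() + temps.max()) / 2   # the (min+max)/2 "daily mean"
    true_mean = temps.mean()                        # the 24-hour average
    print(minmax_mean, true_mean)                   # 19.0 vs about 18.0

Any systematic change in the shape of the cycle, such as the early-morning insolation effect described above, then shifts the (min+max)/2 figure without necessarily shifting the true daily mean.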
Nikolov and Zeller used 5 parameters to adjust a model. Anything can be adjusted to match anything by tuning 5 variables.
Furthermore, those parameters were utterly devoid of physics, and corresponded to dimensioned scale factors that were totally absurd — not only non-physical but literally inconceivable of BEING physical. Finally, their “miracle fit” utterly fails if one plots all of the OTHER gas giant moons, or replots the gas giant moons that they selected using their actual published data.
I’m just saying.
rgb
PS—Do you have a link to the Koutsoyiannis paper you referenced above? I took a quick look and couldn’t find it. I find his work to be excellent and always fascinating.
I posted a couple of them, didn’t I? Or maybe it was on another thread. Damn, I can’t even keep track anymore.
I have to go back over to Pivers Island to teach for a few hours (yes, at 10:30 pm, sigh), but if you Google “Hurst Koutsoyiannis” or “Hurst-Kolmogorov Koutsoyiannis”, the PDF of the Colorado State workshop talk on this shows up on the latter, a paper on just Hurst on the former, and if you search on his name and something like “climate variability” you can get several of his other papers and preprint PDFs from this general site:
itia.ntua.gr/en/docinfo/1001/
HTH.
rgb
P.S. — Speak of the devil! Koutsoyiannis just posted on WUWT himself on the “flying dinosaurs” thread. With luck you can contact him directly and ask him for a toplevel list of links to his papers.
“So I stand aside and let Robert take the bow; great call, Professor. I suspect we both agree that two processes with two time constants is infinitely more likely than a fractal fudging.”
Actually my experiences are very similar to yours, except that I did the college “neutron activation of silver” experiment as an undergrad, which features two separate (nearby) decay processes, so I had to design counters and so on, and a statistical method to extract the two decay constants. And people think that college isn't good for anything… ;-)
rgb
Willis (June 4, 2012 at 11:13 am):
Thanks for taking the time to respond. In criticizing your article, I have the larger purpose of demolishing the IPCC's claim to have conducted a scientific study on global warming. The IPCC's study cannot have been “scientific”, for such a study references the underlying statistical population, but for the IPCC's study there isn't one. A statistical population is the sine qua non of a scientific study.
You’ve raised the issue of what is meant by a “statistical population.” As I’ll use the term it references a set of statistically independent events, a set of “conditions” that are conditions on the associated model’s independent variables and a set of “outcomes” that are conditions on the model’s dependent variables. An example of a set of conditions is [cloudy, not cloudy]. An example of a set of outcomes is [rain in the next 24 hours, no rain in the next 24 hours].
The “Cartesian product” of the two sets is the set of all pairings of a condition with an outcome. In my example, the Cartesian product is the set {[cloudy, rain in the next 24 hours], [cloudy, no rain in the next 24 hours], [not cloudy, rain in the next 24 hours], [not cloudy, no rain in the next 24 hours]}.
Each element in the Cartesian product is a description of an independent event. A “prediction” is an extrapolation from an observed condition to an unobserved but observable outcome. For example, it is an extrapolation from the observed condition “cloudy” to the unobserved but observable condition “rain in the next 24 hours.”
A “sample” is a subset of the elements of a statistical population in which the outcomes of the events as well as the conditions have been observed. In a sample, a count of those events with identical outcomes is an example of a “frequency.” A model that is “scientific” is one that makes a representation about the frequencies of the various outcomes. In scientific principle if this representation is falsified by the evidence this model is discarded. Otherwise, the model is said to be “validated.”
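The Cartesian-product construction above is mechanical enough to state in a few lines (Python; the labels are simply the example's own):

    from itertools import product

    conditions = ["cloudy", "not cloudy"]
    outcomes = ["rain in next 24 h", "no rain in next 24 h"]

    # every pairing of a condition with an outcome: the statistical
    # population's event descriptions
    population = list(product(conditions, outcomes))
    print(len(population))  # 4 pairs, as in the example above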
In reference to the notion that there exists in nature a property of Earth's climate known as “the climate sensitivity” (TECS), this idea identifies no events, statistical population or sample; thus, speculations regarding the magnitude of TECS are scientifically nonsensical. How then did numbers of otherwise sane people come to think these speculations make sense? One possibility is for the deluded to have overlooked the non-observability of the equilibrium temperature, for the nonsensicality follows from this non-observability.
Willis Eschenbach says:
June 4, 2012 at 10:20 am
I fear that this analysis cannot do any predicting at all about tomorrow. The reason is that it is based on the albedo, and we do not know what the albedo will do tomorrow …
Willis: well done finding some albedo data which can start to put some numerical detail into the qualitative and wiggle-comparative studies which already set out the hypothesis:
http://tallbloke.wordpress.com/2012/02/13/doug-proctor-climate-change-is-caused-by-clouds-and-sunshine/
Willie Soon was onto this stuff several years ago too with a numerically supported regional study:
http://tallbloke.wordpress.com/2010/06/21/willie-soon-brings-sunshine-to-the-debate-on-solar-climate-link/
I recall you said you had a chat with Willie Soon at the ICCC7. I’m glad to see some of his influence is rubbing off on you. 😉
An older study which sheds some light on the relationship between solar variation and albedo change is Nir Shaviv’s paper on using the oceans as a calorimeter.
http://sciencebits.com/calorimeter
If Nir Shaviv is right, then we can make a reasonable stab at what albedo will do in the future if we can predict what the Sun will do in the future. That’s why that issue of solar prediction has been the main focus of my efforts for the last 4 years.
By the way, Nikolov and Zeller used 4 parameters, not five. The same number Robert Brown and E.M. Smith are (correctly) recommending you use (the albedo of their ‘greybody’ ‘no atmosphere’ planets is fixed in their theory for all rocky solar system bodies). N&Z are in agreement with Nir Shaviv, since they say that the actual albedo on an atmosphere-bearing planet is a function of pressure induced by the action of gravity on atmospheric mass and insolation at the TOA.
Since GCRs are in approximate anti-correlation with solar variation, they too can have a role in this externally driven variation. The Earth tends to homeostasis. Change is externally driven. This is the right direction to be going in, and I'm glad to see you are moving towards it.
Cheers
TB.
One possible error: while the change in forcing from the sun is positive in the Northern Hemisphere, it is negative in the Southern Hemisphere; therefore heat exchange between the hemispheres could reduce the sensitivity of the system. Forcing from CO2, however, is positive in both hemispheres at the same time.
Philip Bradley says:
June 4, 2012 at 7:06 pm
Of relevance here is that nearly half of the measured land surface warming over the last 60 years is spurious, and results from deriving average temperature as (min+max)/2.
Now that is very interesting. I have always thought something must be fundamentally wrong with one or other (or both?) of the land and sea datasets when land can show such significantly larger warming than the oceans. Mind you, there have been some pretty questionable “corrections” made to HadSST as well, so I would not trust either dataset further than I could spit.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/
I’ll enjoy reading the article at BHill.
thx
Willis:
I liked the original analysis, with a single exponential function as an overall average of land and ocean.
The idea of using a double exponential to represent land and ocean is, I think, a better one. I wonder, if you get the gridded data and mask the land surfaces, whether this will show that the two exponentials can be operated independently for land and ocean and then combined, by area weighting, to get your original result with the double exponential (fat-tail) distribution?
Hope this makes sense, it is late and I’m off now. Thanks again.
Jim D said:
“The question is, if the cloud cover is decreasing, what is going to limit the warming unless at some point the cloud cover turns around and starts to increase again?”
Cloud cover started increasing some 12 years ago, just around the time temperature stopped rising. Give it time, and unless cloud cover starts decreasing again, the energy content of oceans and troposphere will actually fall.
Douglass, Blackman and Knox made a similar analysis and gave even smaller values of climate sensitivity: “Temperature response of Earth to the annual solar irradiance cycle,” Phys. Lett. A, 323, 315-322 (2004) and its Erratum. Their climate sensitivity values (K/(W m^-2)) are 0.02 for the latitude band 60S-30S, 0.025 for 30S-0, 0.027 for 0-30N, and 0.058 for 30N-60N. A simple average gives 0.035 K/(W m^-2) and 0.13 degC for CO2 doubling.
Kiminori Itoh, Yokohama National University, Japan
Robert Brown says:
June 4, 2012 at 7:51 pm
Found the Colorado presentation here; I hadn't read it, and indeed it is a tour de force. The man knows his stuff.
Many thanks,
w.
tallbloke says:
June 4, 2012 at 10:13 pm
Thanks, tallbloke. Proctor’s work is good, I hadn’t thought about using “sunshine hours” as a proxy. Should be possible to couple that with the NASA gridded annual solar data to refine his work.
Willie is a great guy, he’s one of my heroes, and a funny and fun man to have a beer with. I hadn’t seen that work of his, nice stuff.
I got to talk a bit with Nir in Chicago. He’s young, full of fire, has laughing eyes. His calorimeter piece you referenced is new to me, and a most interesting piece. He makes a good case that huge amounts of energy are flowing into and out of the ocean, modulated by the clouds. I wrote about this in my 2009 paper, The Thermostat Hypothesis, where I said regarding variations in the global thermal equilibrium:
You go on to say:
I agree that their albedo is “fixed”, but curiously it is not the albedo of any of their rocky bodies, nor is it the average of the rocky bodies. I asked N&Z where it came from, and got basically the answer you give me now, that it is “fixed” … curiously, it is “fixed” exactly where it works the best. Which is why it is the fifth parameter.
But the real joke was not that Nikolov and Zeller used 5 parameters. Nor was it that they could pick any equation, no matter how non-physical.
It was that they were fitting their equation to only EIGHT DATA POINTS … if you don’t find fitting even 4 parameters for EIGHT DATA POINTS sidesplittingly funny, you don’t understand math.
Plus, of course, there wasn't any physical basis for their miracle equation, whereas in the current case we have lots of examples of exponential decay to build upon. So we're not chasing extraneous variables.
You’ll have to point me to where Nir Shaviv said that albedo is a function of pressure on other planets, I haven’t seen that.
Thanks, tall bloke, appreciated.
w.
Willis,
Let me add to Richard Courtney’s comment on over-reaching.
There are powerful reasons why this work is never going to be able to yield a credible estimate of long-term climate sensitivity. I will list a few of them below. If this work has importance, and I think it may, then it is in the area of attribution, and not in the estimation of sensitivity. For this you need a credible, physically meaningful model of short-term sensitivity. You started off with one – the single-capacity, linear feedback model. This may not be the best one to use – I don't know – and you may need to move to a more sophisticated model, but, if you do so, it needs to be one which is based on a physically meaningful and testable hypothesis. I believe that you need to firmly resist the temptation of ever more complicated response functions which are not underpinned by a physical hypothesis, just because they offer you an improved fit to the data. And this includes fat-tailed response functions or n-pole feedback models, if they are not clearly underpinned. In my opinion, your addition of an unexplained parameter here already diminishes credibility relative to what you had previously achieved – and it is for this reason that I see it as retrogressive.
Why is the work not going to yield the holy grail – a credible estimate of long-term climate sensitivity?
(1) Your data is strictly limited to the satellite era. Long slow responses will not be visible nor estimable, but you cannot rule out their existence as potentially major controls on sensitivity.
(2) Earth’s radiative response seems to be near linear in the short term, with small forcings. Over the long term, nearly all of the GCMs exhibit non-linear behavior to a greater or lesser extent. You can neither estimate this effect, nor can you discount it with the data available.
(3) The flux perturbations which you are considering comprise a mix of forcings and feedbacks in the SW, which would need to be untangled rigorously for any estimate of sensitivity to be meaningful within its conventional definition. ( However, I don’t believe that you have to do this to interrogate relative attribution.)
With respect to item (3), conventionally both albedo and clouds are considered to be part of the feedback coefficient – the reciprocal of climate sensitivity. TSI variation is included as a forcing. The important breakdown for your work is between SW and LW effects, I think. The split broadly looks like this:-
SW perturbations (forcings and feedbacks) – TSI variation, sea-ice albedo, cloud reflectance, aerosol direct effects and atmospheric absorption
LW perturbations (forcings and feedbacks) – WMGHG’s, water vapour, lapse rate, cloud absorption and re-emission.
Because you are using net received SW as a FORCING, the feedback term that you are abstracting here (1/lambda in your nomenclature) does not correspond to the “conventional” feedback term, and neither then does your climate sensitivity, lambda. Specifically, your feedback term excludes sea-ice albedo, SW cloud effects and atmospheric absorption changes. These are captured along with TSI variation and aerosols as radiative forcings.
The importance of your finding is related to the fact that the temperature variation can be largely explained by (just) SW variation. This is not what one would expect if the heating were largely attributable to WMGHG’s. But this requires a clear accounting, I think, which should be the next step in my view.
As before, Willis, please take this as a constructive critique of your work. I am not trying to do a hatchet job, I promise you.
Willis:
Thank you for linking to the Koutsoyiannis Colorado presentation in your post at June 5, 2012 at 12:24 am.
The presentation is brilliant!
How I wish I had the sense to have found it when Robert first commended it! Stupid of me: if he commends it then it surely must be good.
I very, very strongly commend everybody interested in this thread to go through it. It is at
http://www.cwi.colostate.edu/nonstationarityworkshop/SpeakerNotes/Wednesday%20Morning/Koutsoyiannis.pdf
Anyway, I return to enjoying the Diamond Jubilee celebrations. HM is about to leave for the cathedral.
But I really needed to thank you for getting me to read the gem from Koutsoyiannis. Thank you.
Richard
P Solar says:
I’ll enjoy reading the article at BHill.
One thing I didn't include in that article, but I now think is significant in reducing low level aerosols/particulates and seeded low level clouds is the mandating of catalytic converters on all new vehicles in 1975 (and similar measures to reduce aerosol/particulate emissions from vehicles in subsequent years) in the USA and much of the rest of the world shortly afterwards.
Willis
In the following equation:
∆T(k) = λ ∆F(k)(1 – exp(-1/ τ) + ∆T(k-1) * exp(-1 / τ)
Something does not look right.
You have five opening brackets “(“ but only four closing brackets “)”
[Thanks, it's ∆T(k) = λ ∆F(k)(1 – exp(-1/τ)) + ∆T(k-1) * exp(-1/τ)
w.]
Willis Eschenbach says:
June 5, 2012 at 12:57 am
I hadn’t thought about using “sunshine hours” as a proxy. Should be possible to couple that with the NASA gridded annual solar data to refine his work.
Thanks for flagging up the data
Willie is a great guy, he’s one of my heroes, and a funny and fun man to have a beer with. I hadn’t seen that work of his, nice stuff.
Welcome to the real world of external climate drivers.
I agree that their albedo is “fixed”, but curiously it is not the albedo of any of their rocky bodies, nor is it the average of the rocky bodies. I asked N&Z where it came from, and got basically the answer you give me now, that it is “fixed” … curiously, it is “fixed” exactly where it works the best. Which is why it is the fifth parameter.
They use the Moon’s albedo as being representative of the greybody albedo of rocky planets in general. It is not a ‘tuned parameter’.
But the real joke was … only EIGHT DATA POINTS…
It's most inconsiderate of the solar system to provide fewer planets than statisticians would like. 🙂
You’ll have to point me to where Nir Shaviv said that albedo is a function of pressure on other planets, I haven’t seen that.
Heh. N&Z and Shaviv are in agreement that cloud albedo variation is related to insolation variation at the TOA. That’s why Willie Soon’s sunshine hours graph is in approximate agreement with TSI as well as temperature, give or take ENSO.
Philip Bradley says:
June 5, 2012 at 2:17 am
One thing I didn’t include in that article, but I now think is significant in reducing low level aerosols/particulates and seeded low level clouds is the mandating of catalytic converters on all new vehicles in 1975
Has any work been done to compare the magnitude of that effect against changes in atmospheric angular momentum carrying dust around?
On page 35 of Koutsoyiannis' Colorado presentation
http://www.cwi.colostate.edu/nonstationarityworkshop/SpeakerNotes/Wednesday%20Morning/Koutsoyiannis.pdf
there is an example using the AMO index; it is an almost unknown fact that there is an 11-year advanced precursor to it.
http://www.vukcevic.talktalk.net/theAMO.htm
It's most inconsiderate of the solar system to provide fewer planets than statisticians would like. 🙂
But it doesn’t. It provides several more moons of gas giants with at least as much atmosphere as the moons they selected and with varying albedo. It’s just that if you plot them using precisely N&Z’s algorithm and openly published atmosphere/temperature data they fall nowhere near their miracle curve.
Can you say “cherrypicking”?
Curiously, if you plot the moons that they did plot but use NASA data for atmosphere and temperature (and perhaps throw, I dunno, error bars in since some of those planetoid moons have an atmosphere so tenuous that it is given as a range, not a number) then they don’t fall on the curve either.
Which leads me, at least, to wonder if the curve came first and then the data, or the other way around. IMO it is better the other way around, even though then there is no miracle; there is just a non-existent fit to no-atmosphere moons, whose temperature is understandably correlated with the real bond albedo of the moon, not its atmosphere. High albedo, relatively low temperature (at a given insolation, comparing Jovian moons to Jovian moons, etc). No “universal greybody albedo” set from the moon, because the albedo of even the ice-free Jovian moons is not at all like that of the moon!
All of which you would know if you looked at the plots I laboriously generated when checking the N&Z results numerically way back when. Why bother complaining about reproducibility in science if somebody “publishes” a result, somebody else openly checks that result against hard numbers and finds that it doesn’t, actually, fit the data (failing in some extremely suspicious/questionable ways), and that same somebody points out that the physical dimensioned constants in the fit are completely, totally irrelevant to any physical process that could occur on the surface of a planet in association with warming and have the primary function of forcing a curve to work for probably bent physical data for nearly airless moons?
Nikolov and Zeller should be utterly forgotten. It is terrible statistics — five parameters, 8 data points, suspicious (at the very least cherrypicked) data. It is terrible physics — really, it is. You can’t just pull a power law with absurd exponents and constants out of thin air, especially when neither exponent nor dimensioned constant are in the vague realm of relevance to any conceivable physical process. Neither can you assert that the moon’s albedo is somehow generalizable to all planetary objects, not when you can SEE AND MEASURE their bond albedo from here, and note that it is wildly different from object to object, and can SEE AND MEASURE the fact that surface temperatures at constant insolation do indeed vary with bond albedo, not with surface pressure (which is not that variable for the Jovian moons).
The Jovian moons ALONE refute N&Z. They all have the same insolation. They have very, very different albedo. They have remarkably similar surface pressure, in all cases a whiff above hard vacuum, barely enough to be called “an atmosphere” (but more than the moon or mercury). And their temperatures correlate with albedo, not pressure, and none of them fit on the miracle curve. End of story, move along, folks, nothing to see here.
Sorry to harp on this, but I object to ANY effort to rehabilitate N&Z even by implication unless and until every one of these objections is addressed. And some of them cannot be addressed save by simply withdrawing the hypothesis, as it is (in my carefully considered, data based, numerically backed up opinion) a false hypothesis, failing both the test of reason and consistency with known physics and the test of empiricism and an unbiased comparison of the hypothesis with all of the data. What more does one have to do to disprove it?
rgb
tallbloke says:
June 5, 2012 at 3:32 am
Run the freakin’ numbers, Tallbloke. It’s not the albedo from the moon. The number in question is used to calculate their fifth tunable parameter, t5.
Here are the albedos from the paper, along with the corresponding t5 parameter if we used that albedo …
Body     Bond Albedo   Parameter t5
Mercury  0.12          25.4
Venus    0.75          18.6
Earth    0.3           24.0
Moon     0.11          25.5
Mars     0.18          25.0
Europa   0.64          20.3
Titan    0.22          24.7
Triton   0.75          18.6
These albedos range from a low end of 0.11 for the moon's albedo to 0.75 for Triton's albedo. The corresponding value for the parameter t5 ranges from 25.5 down to 18.6. And as a result, your value for t5 of 25.3966 is not only different from all of the eight bodies used in their study … it is outside the range of all of them. Not only that, but it is obvious that the albedo of the moon is not “representative of the greybody albedo of rocky planets in general”. In fact, it is the lowest albedo of all the bodies under consideration. I have no idea why you claim that it is representative of any of the others.
So no, tallbloke, the claim that the fifth parameter uses the Moon’s albedo is simply not true. It’s easy to verify that it’s not true … run the numbers, my friend.
(For those wondering about the subject under discussion, see “The Mystery of Equation 8”.)
So what? So freakin’ what? Do you truly think that the fact that nature only gives you a few data points justifies using five (or even four) parameters to fit eight data points? Even using your figures, that’s one tunable parameter for every two data points. By your lights, since I have 168 data points, I’d be justified in using 84 tunable parameters …
First, heh, that’s not a citation. Second, heh, it says nothing about albedo being a function of atmospheric pressure, which was your claim. You said:
So again I ask—where does Nir Shaviv say that the planetary albedo is a function of pressure induced by gravity?
Many thanks,
w.
Formerly you had a low-order vector autoregressive model in which the regression parameters were functions of two underlying variables. Now you have a low-order vector autoregressive model in which the regression parameters are functions of three underlying variables. As my question above indicated, I do not know what the model is, but almost for sure the parameter estimation is unstable. Most of the explanatory power of the model comes from the fact that you are making one-step-ahead forecasts when the autocorrelation of deltat(n) is highly correlated with deltat(n-1).
Paul_K says:
June 5, 2012 at 2:17 am
First, my thanks for your reasoned thoughts on the question, Paul, always appreciated.
Next, you say that a standard exponential decay is somehow theoretically more solidly based than a “fat-tailed” exponential decay. I fear I don't see why, given that fat-tailed distributions are as common in nature as their standard exponential counterparts. You see this as using “ever more complicated response functions”, but I think that there is more physical justification for using a fat-tailed response than there is for using a standard response.
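For concreteness, the two decay shapes being argued over look like this (Python; the tau and c values are illustrative only, not the fitted ones):

    import numpy as np

    t = np.arange(1.0, 31.0)                  # months, as in Figure 1
    standard = np.exp(-t / 2.0)               # standard exponential decay
    stretched = np.exp(-(t / 2.0) ** 0.5)     # "fat-tailed" stretched exponential
    longer = np.exp(-t / 6.2)                 # standard decay, longer tau

    # At late times the stretched exponential decays far more slowly than
    # the short-tau standard curve and mimics a standard decay with a
    # longer time constant, which is the behavior described for Figure 1.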
If there is a long slow response, why would it not show up in the fourteen years of data that I have, particularly since it is among the fastest-warming periods in the 20th century?
This is one of the more enduring myths of the GCMs, that they somehow “exhibit non-linear behavior”. The two that I have tested, the CCSM3 and the GISSE models, are strictly and definitely linear. I have no reason to assume that the other models are different.
As far as I know, the standard definition of climate sensitivity also includes all of the feedbacks that you mention. They are claimed to be the reason that the sensitivity is so much higher than the nominal Stefan-Boltzmann change in temperature expected from a 1 W/m2 change in forcing.
So I fail to see how I’m doing something different. Yes, my result also includes all of the various feedbacks … so does the standard approach. And I agree with Joel Shore that it is necessary to include the change in LW due to the changes in the clouds.
But changes in e.g. water vapor are included in my calculations just as in theirs. If I change the amount of sunshine, the system responds. The conventional view is that it amplifies (increases) the amount of warming that we’d expect from that change in sunshine. I think that’s backwards. I think that the response of the earth is following Le Chatelier’s Principle, and is pushing it back towards equilibrium. And the data that I have presented above seem to bear that out.
It’s good writing this, because I think I can see a way to disentangle some of this stuff. That is to compare a change in insolation due to changing sun with a change in insolation due to changing clouds … I’ll have to think about that one and get back to you.
My thanks for your insights,
w.
Willis,
Thanks for your thoughtful response to my comment. Here are a few more comments on it.
Actually, an important quibble on the wording: this data doesn't tell you anything about cloud feedback (to determine that, you would have to know how cloudiness changes with warming, both numerically and in terms of the types of clouds). What it does tell you is that the net radiative effect of clouds is cooling, although the LW cloud forcing does offset a healthy amount (~64%) of the SW albedo effect.
Actually, you are making another assumption here. While it may be true that only 70% of the total albedo is due to clouds, the real question is what percentage of the small change in albedo that was seen over this period is due to clouds. That could well be closer to 100%. (I suppose some of the drop in albedo could also be due to melting of high-albedo ice and snow…but wasn't the albedo data that you used limited to lower latitudes anyway?) If this were the case, then naively, your estimate of the sensitivity, or response, might only be ~36% of the size that it should be over that time interval.
I think that carrying this all the way through may actually result in a larger change in sensitivity than your estimate here because of the following consideration: With your original estimate of the forcing, you found that the same one-time constant model that worked well for fitting the annual cycle also did a good job fitting to the linear trend over the 14-year period (with the same parameters). Now, you’ll find that this is no longer the case, i.e., you will find that using the model that you developed for the seasonal cycle, you considerably underestimate this linear trend. This means that if you use a more complicated model (such as one with two timescales or…I hope…the fat-tailed exponential model), it will begin to detect the fact that you get larger and larger estimates for the sensitivity as you look at phenomena at lower and lower frequencies…and so the extrapolation to still lower frequencies will yield a still higher sensitivity.
It is sort of analogous to considering a linear extrapolation of some quantity to zero frequency when you’ve measured it at two frequencies, say, 80 and 100 Hz. If you got the value of 1 at both of these frequencies then your linear extrapolation would also give you a value of 1 at zero frequency. However, if the value at 80 Hz were to double to 2, then the linear extrapolation to zero frequency doesn’t just double…It goes up to 6, i.e., it increases by a factor of 6 from your original estimate. (Of course, these are just made-up numbers…but meant to be illustrative of a basic point.)
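The made-up numbers above check out (a quick Python check):

    import numpy as np

    freqs = np.array([100.0, 80.0])
    for y80 in (1.0, 2.0):
        # fit a line through (100 Hz, 1) and (80 Hz, y80); the intercept
        # is the linear extrapolation to zero frequency
        slope, intercept = np.polyfit(freqs, np.array([1.0, y80]), 1)
        print(y80, round(intercept, 6))  # y80=1 -> 1 at 0 Hz; y80=2 -> 6 at 0 Hz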
That is our basic point…that the annual cycle is seeing a response that is severely damped out. As you look at longer-term trends (i.e., lower-frequency response), you are seeing a less damped response. And, if you look at still lower frequencies, you will see still less damping. The fact that the ocean is such a large heat sink means that responses at higher frequencies are very heavily damped.
Willis Eschenbach: “If there is a long slow response, why would it not show up in the fourteen years of data that I have, particularly since it is among the fastest-warming periods in the 20th century?”
As I mentioned repeatedly, I have employed your technique on synthetic data from a system that does indeed have a long, slow response, and your technique not only failed to detect the slow response but also underestimated the sensitivity as a result.
“I have employed your technique on synthetic data from a system that does indeed have a long, slow response, and your technique not only failed to detect the slow response but also underestimated the sensitivity as a result.”
If your technique is to use a climate model to produce synthetic data, it is still not real data. Willis uses REAL-world data, and that cannot be faked. Yours is fake, and any result you get is a fairy tale. FAIL!
Willis Eschenbach says:
June 5, 2012 at 10:01 am
tallbloke says:
June 5, 2012 at 3:32 am
They use the Moon’s albedo as being representative of the greybody albedo of rocky planets in general. It is not a ‘tuned parameter’.
Run the freakin’ numbers, Tallbloke. It’s not the albedo from the moon. The number in question is used to calculate their fifth tunable parameter, t5.
These albedos range from a low end of 0.11 for the moon’s albedo to 0.75 for Triton’s albedo…
So no, tallbloke, the claim that the fifth parameter uses the Moon’s albedo is simply not true. It’s easy to verify that it’s not true.
(For those wondering about the subject under discussion, see “The Mystery of Equation 8”.)
See also N&Z’s reply:
http://tallbloke.wordpress.com/2012/04/18/2012/02/09/nikolov-zeller-reply-eschenbach/
Where they say:
“Equation (2) calculates the mean surface temperature (Tgb) of a standard Planetary Gray Body (PGB) with no atmosphere” … “αgb = 0.12 is the PGB shortwave albedo”
In brief, the Moon's albedo is the albedo the other bodies would have if they had no atmosphere or ice. That 0.12 albedo is what N&Z refer to as the greybody albedo. It is assumed to be the same for all rocky bodies, and that is the number which, along with the rest of the result of eq. 2, is plugged into the later equation 8.
T.B.: It's most inconsiderate of the solar system to provide fewer planets than statisticians would like. 🙂
W.E.: So what? So freakin’ what?
So people who are investigating solar system dynamics have to find other ways to increase confidence in their theories.
So again I ask—where does Nir Shaviv say that the planetary albedo is a function of pressure induced by gravity?
He doesn't. They both recognise that variation in Earth's albedo is primarily a function of variation in external forcings (at timescales for which atmospheric mass is fairly constant).
Cheers
TB.
Ed_B: “If your technique is to use a climate model to produce synthetic data, it is still not real data. Willis uses REAL world data, and that cannot be faked. Yours is fake, and any result you get is a faiiry tale. FAIL!”
Brilliant riposte. Consider me well and truly chastised. Too bad for Michael Mann that he did not have you to come to his defense when Steve McIntyre used synthetic data to demonstrate that Mann’s technique would find hockey sticks where none existed.
In brief, the Moon’s albedo is the albedo the other bodies would have if they had no atmosphere or ice.
Except that it’s not. I suggest that you look up the actual data on the Jovian moons.
rgb
So people who are investigating solar system dynamics have to find other ways to increase confidence in their theories.
And I repeat. There are plenty more moons — they just didn’t apply their theory to them because it doesn’t work miraculously. Nor does it work with the moons they did apply their theory to, unless you use the secret recipe.
rgb
So again I ask—where does Nir Shaviv say that the planetary albedo is a function of pressure induced by gravity?
He doesn't. They both recognise that variation in Earth's albedo is primarily a function of variation in external forcings (at timescales for which atmospheric mass is fairly constant).
Because the albedo for all planetoid objects has almost nothing to do with atmospheric pressure. Compare Europa and Ganymede.
That's why N&Z's curve is two almost completely independent fits — one of Mars, Earth and Jupiter (with 2.5 parameters), and one of the atmosphere-free moons (a fit with incredibly absurd parameters, and one that doesn't work unless one uses the “right” set of numbers for e.g. mean temperature, ignores error bars, and so on). Basically, the fact of the matter is that mean surface temperature at constant insolation depends on albedo (moderated mildly by whatever little atmosphere one has on Mars and smaller objects, plus whatever greenhouse effect the atmosphere offers, which is also minuscule for atmospheres in which water would boil at room temperature, that is, basically “a vacuum”), and it depends on lots of complex stuff for the Earth (!) and for Venus (!!).
I really, truly don’t understand why you continue to defend the work of Nikolov and Zeller. Being skeptical of badly done climate science is one thing — endorsing junk science based on cherrypicked and possibly “adjusted” data just because it supports the implausible hypothesis that there is no such thing as a greenhouse effect, especially when one can DIRECTLY OBSERVE the CO_2 hole in TOA IR emissions, seems to me, at least, to be unwise.
rgb
Joe Born says:
June 5, 2012 at 4:40 pm
“Too bad for Michael Mann that he did not have you to come to his defense when Steve McIntyre used synthetic data to demonstrate that Mann’s technique would find hockey sticks where none existed.”
fail!
M. Mann had a statistical method which was designed to find hockey sticks and average everything else around the shaft. S. McIntyre proved that by showing it worked just as well at finding sticks in red noise (20,000 sets). Apples and oranges.
If you put in a gradual slope of increasing insolation you will of course get a gradual slope of increasing temperature. That says nothing at all about climate sensitivity. The effects of CO2 are just too small to be needed in the model. Sorry, but who cares about 0.3 C.
Instead of the sensitivities for the SH and the NH being 0.04°C per W/m2 and 0.08°C per W/m2 respectively, as given in both of the current calculations, the correct sensitivities for this fat-tailed analysis should have been 0.04°C per W/m2 and 0.09°C per W/m2.
Are the tables mislabeled? It seems that either here or in the tables you have interchanged “NH” and “SH”.
Stephen Wilde, according to this theory, if cloud cover returned to 1984 levels, so would temperature. Is that what you would say? Since you say cloud cover has increased in the last decade, how much more increase is left to get back to 1984 levels? Also, why is the cloud cover going up and down like this if it is not responding to temperature change? And how can a negative feedback be consistent with increasing temperatures, while already above average, going with decreasing cloud cover, as happened during most of the 1990s? Surely this is a positive feedback effect, if anything.
Robert Brown says:
June 5, 2012 at 5:23 pm
In brief, the Moon’s albedo is the albedo the other bodies would have if they had no atmosphere or ice.
Except that it’s not. I suggest that you look up the actual data on the Jovian moons.
Hi Robert: We have to bear in mind that Jupiter kicks out more energy than arrives at it from the Sun, and that its moons are in some cases tidally locked in ways that introduce a squeezing effect which warms them. It’s still early days for N&Z’s work; I am defending a space for them to expand it in, where they can get feedback and ideas from the community. I am not blind to the difficulties with their theory, and have made my own criticisms and offered possible alternative interpretations of data. They are currently taking their time to assimilate criticism (including yours) and work on the issues raised. That is a reasonable way to do science.
Shooting them down in flames, heaping ad hominem abuse on them and misrepresenting how their equations fit together as Willis did is not.
Now, can you explain to me the physical basis for squaring the speed of light in the equation E=mc^2. Nothing can go faster than light, so this is obviously a meaningless unphysical concept isn’t it? 😉
Cheers
TB.
Jim D.
You will need to read my various articles to get answers to your questions. I don’t want to derail this thread with all the detail.
Robert Brown said:
“I really, truly don’t understand why you continue to defend the work of Nikolov and Zeller. Being skeptical of badly done climate science is one thing — endorsing junk science based on cherrypicked and possibly “adjusted” data just because it supports the implausible hypothesis that there is no such thing as a greenhouse effect, especially when one can DIRECTLY OBSERVE the CO_2 hole in TOA IR emissions, seems to me, at least, to be unwise.”
I’m puzzled as to why someone as experienced and knowledgeable as Robert gets so emphatic and emotional. I was intending to avoid discussion of N & Z here but this thread is nearly done and I can’t let Robert’s assertions pass.
As I’ve pointed out before, the N & Z findings are pretty much as one would expect from application of the well established and accepted Ideal Gas Law applied to planetary atmospheres.
Pointing to the moons of Jupiter as a suitable example to set against the other planets is misleading, for the reasons that tallbloke points out. For those moons, Jupiter itself is a secondary energy source, so they are not comparable to the free-standing planets used by N & Z.
The CO2 spectral ‘hole’ is an inappropriate distraction because the albedo response of the system as a whole takes that feature into account.
N & Z do not hypothesise that there is no greenhouse effect. They simply say that the phenomenon usually described as the greenhouse effect is a consequence of atmospheric density interacting with insolation at the surface so that temperature is highest at the surface where density is greatest.
That is how Wikipedia and most science textbooks describe how the observed atmospheric temperature lapse rate arises. Surface temperature in a largely non-GHG atmosphere is derived from surface heating plus conduction and convection rather than radiative processes.
GHGs (especially water vapour) actually aid the convective process and so stabilise rather than destabilising the system. The presence of GHGs means that the system can shift energy faster vertically to space than without them so the air circulation need be less violent horizontally in order to achieve in / out radiative balance.
The fact is that the atmospheric circulation of ANY planet reconfigures that circulation as necessary until radiative energy in equals radiative energy out and the surface temperature is set by atmospheric mass plus insolation at top of atmosphere.
If anything other than top of atmosphere insolation or atmospheric mass tries to change the surface temperature then the air circulation changes accordingly to negate the effect.
It really is that simple.
Shooting them down in flames, heaping ad hominem abuse on them and misrepresenting how their equations fit together as Willis did is not.

Actually, shooting them down in flames is absolutely the right thing to do when their work merits it, except that the truly correct thing all around is for them to shoot themselves down in flames and follow the recommendations of Feynman and present confounding as well as confirmatory data, and quite possibly refrain from publication altogether (or at the very least publish as a very speculative paper that is utterly honest about the problems/weaknesses) if the confounding parts exceed the confirmatory parts by some margin or the theory makes no physical sense.

IIRC, at the original study publication on your blog, nobody engaged in ad hominem (including Willis) at the beginning — I personally was impressed, although puzzled, by the perfection of the curve they obtained. People weren’t even “suspicious” at first, although a few people may have had their spidey-bullshit sense activated by e.g. that very (impossible) perfection. The problem evolved, as it always does, when some absolutely appropriate criticisms emerged (from Willis, from me, from several other physics people and people who work a lot with curve fitting) and the discussion polarized into defensive on one side and increasingly (but appropriately) strident on the other side. When something is wrong (and it is your baby) it hurts, but science is cruel. I’ve been wrong in exactly that way. You get over it.

Now, can you explain to me the physical basis for squaring the speed of light in the equation E=mc^2. Nothing can go faster than light, so this is obviously a meaningless unphysical concept isn’t it? 😉

Regarding the speed of light — I’ve written a textbook on graduate level classical electrodynamics. Do you really want to go there and have me explain precisely why, dimensionally, this is exactly how one would naively expect energy to scale, why c is indeed a nearly universal scale parameter in classical relativistic and quantum relativistic physics (and above all, in electrodynamics and the tightly coupled theory of special relativity)? I’d say “read a few textbooks, possibly including mine” as an answer to a grad student, but the math (and conceptual basis) is tough going if you don’t work your way there over a few years of study. The first pass through special relativity for undergrad physics majors causes their brains to explode and recoalesce, better and smarter, six weeks later…

I can point to something absolutely ubiquitous — light (and the general propagation of all massless fields) — that has c as its/their speed. I can point to an entire geometric manifold — spacetime — that appears to have the speed of light quite literally built in to its invariances and coordinate transformation properties. I can point to a dozen physically observable consequences of that inertial coordinate invariance, E = mc^2 being only one of them, and zero predictions that are egregiously violated. Finally, the entire theory makes sense — it can be derived from simple principles of invariance and ultimately it is difficult to imagine it not being (very probably, at least approximately) true.

That’s why N&Z’s miracle equation does not work. Point to one single point in any planetary atmosphere where a pressure of — what was it, 54000 bar? — occurs. Or show me how a reference pressure of 200 bar (again, IIRC, without looking it up in my own code/directory where I studied this) is relevant in any way to pressures on Mars or Europa. Show me one good reason for including SOME but not ALL of the moons of Jupiter — there is absolutely no reason to exclude e.g. Ganymede except that if you plot it using their formula, it falls far from their curve. Of course, if you plot Europa using the accepted data without using their special sauce, it falls far from their curve too.
Finally, as I’ve pointed out before and will again — the p-value of their absurdly nonlinear curve with its nonphysical parameters, for almost any reasonable error bars, is something like 0.99, or even higher. Although I didn’t really twig to that on the first pass through their work, that is as far as I’m concerned almost certain proof that they fit the data to the curve somehow, not the curve to the data. I do hypothesis testing with random distributions — dieharder is one big harness for generating p-values — and p = 0.99 is just as suspicious as p = 0.01. In particular, it is good justification for rejecting the null hypothesis “This is an unbiased work and it just happened to come out this way”. Throwing Ganymede in, replotting Europa, they merely further confirm what one already is almost certain is true.
I have no problem at all giving them a bully pulpit of sorts where they can be heard. Free speech is a valuable privilege in modern society. I do have a problem with sheltering them from equally free criticism or even by implication providing them with an “endorsement” of any sort while these enormous problems remain with their theory and the data.
The “right thing to do” is absolutely to withdraw the paper voluntarily and go back to the drawing board, and only come back when the theory makes sense. IMO that will be “never”, I’m sorry to say, because this is a senseless theory. It ignores far too much physics, and postulates absurd replacements to fit bent data across completely disparate regimes. But we have been through all of this, and I’m quite certain that nothing I say has the slightest impact. That alone is the mark of junk science. Not that they disrespect me — I could care less — but that they disrespect everybody who has pointed out these problems (and ignore those problems), and leave their paper out there, attracting flies.
rgb
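(A minimal sketch of the p-value arithmetic rgb describes; my illustration, not his dieharder harness. The 8 points and 4 parameters are the numbers from the comment, and the tiny chi-squared is assumed for the sake of the example.)

from scipy.stats import chi2

dof = 8 - 4       # 8 fitted points minus 4 free parameters
chisq = 0.5       # assumed "too perfect" total chi-squared, far below dof

p_low = chi2.cdf(chisq, dof)    # chance of honestly fitting this well: ~0.03
p_high = chi2.sf(chisq, dof)    # the complementary ~0.97 p-value

print(p_low, p_high)

For honestly noisy data the total chi-squared should be comparable to the number of degrees of freedom, so a value far below it is as good a reason to reject the “unbiased data” null hypothesis as a value far above it.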
Re:Willis Eschenbach says:
June 5, 2012 at 10:22 am
Hi again Willis,
Thanks for the thoughtful response.
You wrote:-
“Next, you say that a standard exponential decay is somehow theoretically more solidly based than a “fat-tailed” exponential decay. I fear I don’t see why, given that fat-tailed distributions are as common in nature as their standard exponential counterparts. You see this as using “evermore complicated response functions”, but I think that there is more physical justification for using a fat-tailed response than there is for using a standard response.”
The “standard exponential decay” function is not haphazardly chosen. It is THE UNIQUE solution to the heat balance equation for a single capacity system under the assumption that the Earth has a linear radiative response to temperature.
As soon as you postulate an arbitrary response function, you disconnect your results from a physically meaningful conceptual model, where the assumptions can be clearly stated and tested. I would have no problem with your re-stating the heat balance equation under a different set of testable assumptions and then fitting the new temperature solution to your data. That way, you have a story. But fitting an arbitrary functional form which cannot be tied back to a physical system looks like curve-fitting.
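For concreteness, here is a minimal sketch of that single-capacity heat balance (my own illustration, with assumed parameter values; the discrete form is the lagging recursion quoted later in the thread):

import numpy as np

# Single-capacity heat balance: C*dT/dt = F(t) - T/lam (linear radiative
# response). The step response is the standard exponential with tau = C*lam.
lam, tau = 0.08, 2.5          # assumed sensitivity (C per W/m2) and months

months = 60
dF = np.ones(months)          # a 1 W/m2 step in forcing
dT = np.zeros(months)
a = np.exp(-1.0 / tau)
for k in range(1, months):
    # dT(k) = lam*dF(k)*(1 - exp(-1/tau)) + dT(k-1)*exp(-1/tau)
    dT[k] = lam * dF[k] * (1 - a) + dT[k - 1] * a

print(dT[-1])                 # relaxes to the equilibrium lam*dF = 0.08

Any departure from this response shape, fat tail included, is, as Paul_K says, a departure from the single-capacity assumption rather than a free choice of curve.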
You also wrote:-
“This is one of the more enduring myths of the GCMs, that they somehow “exhibit non-linear behavior”. The two that I have tested, the CCSM3 and theGISSE models, are strictly and definitely linear. I have no reason to assume that the other models are different.”
The following article explains how the GCMs can exhibit linear behaviour over the instrumental period, and yet have a declared climate sensitivity much larger than would be expected if the linear behaviour were to continue into the higher temperature range. (Section E of the article also includes a derivation of the linear feedback equation and its solution. )
http://rankexploits.com/musings/2011/equilibrium-climate-sensitivity-and-mathturbation-part-2/
This second article below provides direct evidence that the GCMs really do exhibit a nonlinear radiative response – nearly every one of them – and makes use of the fact that you CAN fit a linear model to the GCM results over the instrument period.
http://rankexploits.com/musings/2012/the-arbitrariness-of-the-ipcc-feedback-calculations/
Evidently, the nonlinearity in the GCM’s is not a myth. However, it’s a completely separate question whether this feature is solely a property of the GCM’s or whether it is also a real world feature, since it only manifests itself in future projections.
So I would stand by my recommendations:- (a) that you continue to tie your response function to a physically meaningful model and (b) that you focus on attribution rather than long-term climate sensitivity. Just saying.
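The mechanism those two articles describe can be seen in a toy calculation (mine, not from the articles; the quadratic term is an arbitrary stand-in for whatever nonlinearity a GCM has):

import numpy as np

T = np.linspace(0.0, 1.0, 50)    # instrumental-era warming range, K
R = 3.2 * T - 0.8 * T**2         # toy restoring flux that flattens at high T

slope, intercept = np.polyfit(T, R, 1)
print(slope)                     # ~2.4 W/m2/K: looks cleanly linear here

T4 = 4.0                         # a 4 K warmer world
print(3.2 * T4 - 0.8 * T4**2)    # true restoring flux: 0.0 W/m2
print(slope * T4 + intercept)    # linear extrapolation: ~9.7 W/m2

Over the small range the linear fit is nearly perfect, yet the extrapolation badly overstates the restoring flux (and so understates the sensitivity) at higher temperatures, which is exactly the distinction at issue.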
Re:Joe Born says:
June 5, 2012 at 4:40 pm
Joe,
You need to learn not to take on intellectual giants!
Thanks for your response re the second order linear ODE in support of your response function. It works in the sense of yielding your two-pole solution. But, at the risk of sounding like I am moving the goalposts, this wasn’t what I meant when I spoke of a “physically meaningful” governing equation.
Can you reparse the equation so that it is tied to, say, a heat balance for a multiple capacity system with some assumptions?
Paul
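For what it’s worth, a minimal sketch of one such reparsing (assumed, un-fitted numbers): a mixed layer exchanging heat with a deep reservoir gives a linear 2x2 system whose eigenvalues supply exactly two poles.

import numpy as np

# C1*dT1/dt = F - T1/lam - k*(T1 - T2)   (mixed layer, radiates to space)
# C2*dT2/dt =              k*(T1 - T2)   (deep reservoir)
C1, C2, lam, k = 1.0, 20.0, 0.8, 0.5     # assumed illustrative values

A = np.array([[-(1 / lam + k) / C1,  k / C1],
              [           k / C2,   -k / C2]])

taus = np.sort(-1.0 / np.linalg.eigvals(A).real)
print(taus)        # one fast and one slow time constant

The free response is then a sum of two exponentials, i.e. the two-pole form, with each time constant tied to a physically nameable heat capacity.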
tallbloke says:
After more than 5 months, they have shown no evidence of “assimilat[ing] criticism”. They have yet to admit one thing wrong in their original paper (and their subsequent “Part 1” response to critics) even though those things contain huge errors that show complete ignorance on very basic things like how one correctly adds convection into a model of the atmosphere and how one applies conservation of energy to a system that is not isolated. Until they are willing and able to correct such basic nonsense, they are rightfully dismissed.
This is just a bizarre question. Squaring c doesn’t produce something faster than the speed of light. It produces something of different dimensions. You can’t compare c and c^2 when c is dimensional… It is comparing apples to tofurky. (If you think c^2 is “larger” than c, try writing c in, say, astronomical units (AU) per second and square it and see what you get!)
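(A trivial numerical version of that point; the AU conversion is the standard value:)

AU = 1.495978707e11    # meters per astronomical unit
c_m = 2.99792458e8     # c in m/s
c_au = c_m / AU        # c in AU/s, about 2.0e-3

print(c_m, c_m**2)     # 3.0e8 -> 9.0e16: squaring "made it bigger"
print(c_au, c_au**2)   # 2.0e-3 -> 4.0e-6: squaring "made it smaller"

Whether c^2 is numerically larger or smaller than c depends entirely on the units chosen, which is why comparing the two is meaningless for a dimensional quantity.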
Stephen Wilde says:
It has nothing to do with the ideal gas law. It has to do with people who don’t understand laws that have more than two variables in them. And, by the way, for someone who touts the ideal gas law, you ought to at least learn what the terms in it mean. Over at tallbloke’s, in a post a few months ago, you seemed to think that in the form pV = nRT, n is some sort of (number?) density; it is not. It is a number of moles. (If you divided n by V then you would have a molar density.) I can understand having this confusion at some level, since n is sometimes used as a number density in, say, solid state physics (e.g. semiconductor devices); however, it is a pretty weird mistake for someone who claims to understand the implications of the ideal gas law better than scientists do.
Wikipedia and most science textbooks, unlike you, understand that while one might say that the lapse rate in the troposphere itself is mainly determined by convective processes, the vitally important boundary conditions on the temperature structure are provided by radiative processes. Please don’t project your own ignorance onto others…You are unjustly maligning them by claiming that they agree with your misconceptions.
Only for those who don’t understand basic physics.
joeldshore said:
“you seemed to think that in the form pV = nRT, n is some sort of (number?) density; it is not”
The definition of ‘n’ is as follows:
“n is the amount of substance of gas (also known as number of moles), ”
The more such substance in a given volume (V) the greater the density.
and joeldshore said
“the vitally important boundary conditions on the temperature structure are provided by radiative processes”
Radiative processes are vitally important only in that radiative energy in must equal radiative energy out. It is the non-radiative processes that adjust to ensure that that is achieved WITHOUT needing system energy content to rise.
More energy in the troposphere for whatever reason other than top of atmosphere insolation or atmospheric mass is simply dealt with by a change in tropospheric volume in accordance with pV = nRT.
If one increases insolation or atmospheric mass then the increase in volume will be accompanied by a temperature rise at the surface. Otherwise not.
“””””……Now, can you explain to me the physical basis for squaring the speed of light in the equation E=mc^2. Nothing can go faster than light, so this is obviously a meaningless unphysical concept isn’t it? ;-).
Cheers
TB……….”””””
Not so fast, TB. In the expression E = mc^2, E is energy, not velocity, and because the equation must dimensionally balance, mc^2 must also represent an energy, not a velocity; so there is simply no problem.
Everybody understands that in the non-relativistic world, the kinetic energy of a mass moving at a velocity v in some co-ordinate frame, is given by E = 1/2mv^2 .
But remember that E = mc^2 represents the energy that would be obtained by converting the entire mass m into energy, so that a mass balance would show the loss of some mass.
Particle physicists, such as our friend Anna V, even use a set of units where c = hbar = 1, so that Einstein’s relation equates energy and mass as essentially the same thing. Interestingly, the second term, hbar = 1, also equates energy with frequency (of photons, which just happen to travel at the speed of light) via the Planck/Einstein relation E = hf, but in this case f is in radians per second, not Hertz.
In electromagnetism, it is the group velocity which can’t exceed c, and information propagates at the group velocity. The phase velocity is not so restricted.
If you point a flashlight up in the air and snap it rapidly in an arc, at some radial distance the spot of light is going faster than c. Just watch a wave arriving on a beach at a slight angle off normal, and you will see the contact point run along the beach much faster than the group velocity of the wave.
Stephen Wilde says:
There is absolutely no scientific reason to believe this. It contradicts a century of understanding of radiative transfer. It contradicts the actual results from including convective processes in a model correctly, whether it be a full-scale climate model or the simplest model for the greenhouse effect (such as the one that N&Z added convection to in a clearly incorrect manner by doing it so that it drove the atmosphere to be isothermal rather than having a lapse rate). Furthermore, nobody has succeeded in showing that it can obey conservation of energy, again at any level of mathematical modeling of the system.
It is simply a religious belief.
By the way, a change to a two-time-constant exponential model for Willis’ curve fit does not really introduce an extra parameter. The curve “shape” is defined by three variables, not four: two time constants, and the fraction of the starting value that represents one of the components. The other component then is, by default, the difference from 1 in amplitude.
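(A small sketch of that parameter count, with made-up values; normalizing the kernel to r(0) = 1 is what removes the fourth parameter:)

import numpy as np

def kernel(t, tau1, tau2, f):
    # f and (1 - f) split the response between the two time constants,
    # so r(0) = 1 by construction; only three shape parameters remain.
    return f * np.exp(-t / tau1) + (1 - f) * np.exp(-t / tau2)

t = np.arange(30)
print(kernel(t, 2.0, 15.0, 0.7)[:5])    # illustrative values only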
joeldshore said:
“It is simply a religious belief.”
Ok, got that.
The Ideal Gas Law is a religious belief 🙂
Yeah, Stephen, what they said. And more — N&Z already use Jovian moons in their plot. They just pick the moons. They already use Saturnian moons in their plot. They just pick the moons.
I repeat — I have personally attempted to reproduce their results with data I personally pulled. Lacking their secret sauce, the numbers I got were scattered — actually somewhat believably — all over the place, not on their miracle curve. Their curve basically “fit” the moon — no choice because they built that in — and at the far end, Mars, Earth and Venus. If you break things down, the last three are fit primarily with one of the two power laws, the first five with the other, if you use the specific values they used which happen to fit on the curve. This slightly oversimplifies, as the combination does bend things a bit in the middle of the Mars-and-smaller sequence where the forms both contribute, but it is a decent enough heuristic description.

I also recommend that you imagine their curve plotted with honest error bars on each point, which for some of the points are almost as large as the points themselves. Their curve goes almost perfectly through the points in the centers of the error bars. The total chi-squared for the curve is probably less than 1, for 8 points fit. If you understand statistics, you realize that this is extremely unlikely. It’s up there along with rolling double sixes 8 times in a row (or rather, since one is fitting eight points with “4” parameters, 4 times in a row). Sure, it happens. And sure, it doesn’t happen very often, and if you’re not a sucker and it happens the first time somebody picks up the dice, you check to see if the dice aren’t loaded.

Finally, I do teach intro thermodynamics, and have started to write a textbook embracing the subject. I can derive PV = NkT from first principles using both elementary arguments (good for intro physics) and using actual stat mech. I can, among other things, prove that the equilibrium state of an isolated atmosphere is isothermal in spite of having a pressure gradient. So saying that their result is “simple” because it depends somehow on PV = NkT is both incorrect — they advance no theoretical argument or derivation whatsoever of their strictly numerical result — and misleading, because PV = NkT actually scales in a very physically reasonable and observable way — the Boltzmann (or molar equivalent Ideal Gas) constant sets the scale just right so that the law works remarkably well for things like oxygen at 1 atmosphere of pressure at roughly 300 K.

This is what their empirical fit does not do. I posted and showed in considerable detail way back then that their four parameter fit could be (indeed must be, to make sense of it) put in dimensionless form. In dimensionless form, each term is represented by two numbers — a dimensionless exponent and a physical constant. In fact, it is basically the following function:

Ts/Tgb = exp[ (P/54000)^0.065 + (P/202)^0.385 ]

The 54000 and 202 are pressures expressed in atmospheres. Note well, 1 atmosphere is roughly one bar or 100,000 Pascals (Newtons per square meter). There is no place in the atmosphere of any planet in the solar system that has a pressure vaguely near 54,000 atmospheres, or 5.4 × 10^9 Newtons per square meter. There is no physical process involving gases, especially not ideal gases, where this scale pressure has or could have the slightest bit of meaning, in part because no gas would remain gas at this pressure at planetary temperatures. 54,000 atmospheres is as far divorced from the ideal gas law (or any gas law) as it is possible to be — atoms would be jammed so close together that their valence and maybe even inner shell electrons would be interpenetrating and pure Pauli would be holding them apart. Nor is there any reasonable interpretation of an exponent like 0.065 applied to this term. It has nothing to do with the dimension of simple e.g. bulk volume or surface, or even possible fractal dimensions that might be relevant to gases. If you disagree, feel free to play through and refute me — show me how you get to an exponent of 0.065 from the ideal gas law, or the more reasonable Van der Waals gas, or a still more reasonable real gas with nontrivial interactions that can actually undergo phase changes (an ideal gas can’t).

The second term (which is largely what fits the last three planets, Mars through Venus), although on the surface more reasonable — at least pressures like 202 atmospheres are found in fluids on the surface of the Earth, such as 2000 or so meters beneath the surface of the sea, and one could imagine an exponent of 1/3 or some renormalized variant thereof — is still pretty far divorced from actual gas atmospheric pressures even on Venus, so far divorced that it is very difficult indeed to see how such a pressure could possibly be relevant to e.g. Mars with its surface pressure of 0.0064 bar (yes, that is 202/0.0064 = 31,560 times less than the reference pressure).

To put this in understandable terms, finding these scale pressures is analogous to discovering that we cannot describe the motion of a baseball accurately without using, in an irreducible way, a scale length a million times the size of the baseball, raised to the 17th power. It is saying that something that happens when atoms are jammed together so tightly that their electronic shells have deeply interpenetrated is somehow relevant to the physics of motion of those same atoms when they are so far separated, so diffuse, and so cold, that they almost never interact at all, with long mean free paths and little kinetic energy. It is so obviously wrong that any physicist who sees N&Z’s results placed in dimensionless form will instantly say “this can’t possibly be right, it makes no sense at all”.

No, Stephen, 54,000 atmospheres cannot possibly be a scale constant that describes the atmosphere of Europa in some physically reasonable way. It is cosmic debris of pure nonlinear curve fitting, with enough parameters to fit an elephant once you don’t even bother to restrict the form of the fit on physical grounds.

I have the actual matlab code that replicates N&Z’s results with the rest of Jupiter’s moons thrown in! and with over the counter numbers used for most of the other planetary bodies, if you or anybody else wants to play. I was getting ready to actually compute a reasonable guesstimate of the chi-squared of the fit (with their numbers only) and with my replacements of their numbers when I got bored with being censored and commented out inline in my own replies on TB’s blog and quit working on it, but that would be very easy to do. Then you can look at the distribution of Pearson’s chi-squared for the fit and decide for yourself if it is even conceivable that a fit of honestly noisy and uncertain data could be so perfect. It does show the Jovian moons and Mercury — plotted with readily available data — falling nowhere near their curve, including Europa, but the moon (reference, can’t miss), Titan (their numbers), Mars, the Earth, and Venus are still perfectly fit.

You can then believe what you like about their result. I believe the evidence, especially when I’ve checked it personally against their theory myself.

rgb
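(For the record, the dimensionless-form arithmetic is a one-liner: a fitted term c*P^e is (P/Pref)^e with Pref = c^(-1/e). The raw coefficients below are my reading of N&Z’s published fit, with P in pascals; treat them as assumptions. The point is only that the implied reference pressures land in the nonphysical range quoted above.)

c1, e1 = 0.233, 0.0651           # assumed N&Z coefficients, first term
c2, e2 = 0.0015393, 0.385232     # assumed N&Z coefficients, second term

P1 = c1 ** (-1.0 / e1)           # ~5e9 Pa, on the order of 50,000 atmospheres
P2 = c2 ** (-1.0 / e2)           # ~2e7 Pa, on the order of 200 atmospheres

print(P1 / 101325.0, P2 / 101325.0)    # the two reference pressures, in atm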
Stephen says:
No…It is the magical way you think it operates that is a religious belief. E.g., how things magically change to make the surface temperature remain constant when you want it to and to change when you want it to.
Well Robert, I wasn’t trying to justify the N & Z work in detail because I think it is just a version of the Ideal Gas Law and the Standard Atmosphere. I told Ned that much myself, and I await hearing how he proposes to distinguish those concepts from his idea of the so-called Atmospheric Thermal Enhancement.
Anyway to my mind the important fact is that ANY two or more planets even get close to any sort of curve in the first place given the diversity of planetary conditions.
N & Z may have chosen just the Jupiter moons that suited them but personally I would discount ALL Jupiter’s moons because Jupiter is a secondary heat source.
Of more relevance here would be for you to explain to me and any others still reading why the Ideal Gas Law and the Standard Atmosphere do not or cannot explain how the atmospheric volume could change to negate any surface heating that would otherwise arise from causes other than greater atmospheric mass or more top of atmosphere insolation.
We see from observational evidence that the tropopause rises when the troposphere gets warmer. Given the expansion of volume evidenced by that observation, why would the surface need to warm up at all?
According to the Ideal Gas Law the available energy is spread through a larger volume which means there need be no more energy at the surface than before.
If there were no volume increase then yes the surface would get warmer but there is a volume increase and it is proportionate to the degree of warming.
So you can have more energy in a larger volume of air but instead of a higher surface temperature you just get a change in the air circulation pattern which serves to remove energy from the surface faster than before for a zero effect on surface temperature except regionally whilst energy is transported faster from surface to space and equator to poles.
Basic meteorology tells us that for every warm wind flowing poleward there is a cold wind flowing equatorward so the system remains in balance except for periods when the adjustment process is in progress.
In practice that means cyclical warming and cooling as the rate of energy throughput constantly changes as necessary to maintain system equilibrium despite attempts to disrupt that equilibrium from internal system characteristics other than atmospheric mass or external forcings such as a change in insolation.
All one will see from more GHGs is a shift in the air circulation with no change in average global system energy content. In so far as GHGs slow down outgoing radiation from the surface they facilitate a faster water cycle plus more conduction and convection which offsets the effect and the evidence for that faster or larger water cycle and more vigorous convection is the rise in the height of the tropopause with resultant latitudinal climate zone shifting.
But the sun and oceans already do that naturally to such an extent that the effect of our emissions will be unmeasurable.
All that is a natural application of the Ideal Gas Laws and if you aver that it doesn’t work like that then please say why not.
In particular you need to explain how an expanded atmospheric volume could fail to prevent surface warming when there is no increase in atmospheric mass or in energy from the sun.
Are you able to break the formula pV = nR (or K) T such that the atmosphere does not expand enough to eliminate the surface warming effect that would otherwise occur?
What could prevent the atmosphere from expanding enough?
There is no constraining force around the Earth apart from gravity and that stays constant for present purposes.
“””””…..Stephen Wilde says:
June 6, 2012 at 3:54 pm
Well Robert, I wasn’t trying to justify the N & Z work in detail because I think it is just a version of the Ideal Gas Law and the Standard Atmosphere………..pV = nR (orK) T………”””””
Your formula contains p, V, n, R, T, or maybe K, whatever that may be.
The equation is based on an assumption; namely, that EVERY one of those five factors is absolutely constant over the “system” space. It has NO applicability to a system where four out of those five terms may vary over the space occupied by the system. Fortunately the other one, R, is a physical constant.
Why do people keep traipsing out the “ideal gas law” in regard to the earth atmosphere, where it has no applicability at all? It is an idealized formula for a system that is in static equilibrium. Earth’s atmosphere is never in any kind of equilibrium.
George.
K is the alternative term that Robert used for R.
The equation is based on the assumption that every one of those five factors is interlinked and will respond predictably to changes in the others. You accept that the value of R (or K) is a physical constant, so knowing that is the key.
Thus the equation is capable of describing how the system equilibrium is maintained. Change any one or more of the terms and one or more of the others changes to restore equilibrium and confirm the validity of the Ideal Gas Law.
Now one could argue that, the atmosphere not being an ideal gas, the Law is capable of being invalidated, but if one were to say that then you have to show exactly how, and to what extent, the non-ideal nature of the gas causes a divergence from what the Law predicts.
As far as I am aware the differences are negligible in practice hence the regular use of the concept of a Standard Atmosphere.
So it comes down to the fact that there is an increase in volume when there is greater energy content in the troposphere. Can you demonstrate that that expansion is not sufficient to offset the surface warming that would have occurred in the absence of that expansion?
I should add that Willis’s findings highlighted by this thread, if correct, could only be explained by a process such as the one described by the terms of the Ideal Gas Law.
If it were not for the Ideal Gas Law then clouds and sunshine would not be sufficient to explain observations.
The link between clouds, and the Ideal Gas Law is that changes in the volume of the atmosphere as predicted by the Ideal Gas Law are what change the vertical heights, the surface air pressure distribution beneath the tropopause and ultimately the amount of cloud globally.
In effect Willis is here proving the point though I know that as yet he doesn’t accept the link to surface pressure and cloudiness via the Ideal Gas Laws.
It pretty much removes the need for Svensmark’s cosmic ray hypothesis too. All one needs to change albedo is shifts in the surface air pressure distribution and those shifts occur as a result of the processes implicit in the Ideal Gas Law.
There is an increase of both pressure and temperature at the surface; what else pushes the TOA up? It requires real force to do so.
Brian H asked:
“There is increase of both pressure and temperature at the surface; what else pushes the TOA up?”
Taking the globe as a whole there is no increase in pressure despite the increased energy content of the troposphere. To increase pressure globally you need greater atmospheric mass or a stronger gravitational field.
There is however, a redistribution of surface pressure regionally which is what changes cloudiness and albedo as per Willis’s findings.
In the absence of a change in pressure globally the rise in the tropopause is due to increased buoyancy within the troposphere. It is not necessary for the surface temperature to rise other than regionally and since what goes up must come down and what flows poleward must flow back equatorward the regional changes balance out.
An example:
During the late 20th century warming spell the more zonal circulation allowed the equatorial air masses to expand and the air flowing poleward across the mid latitudes became a fraction warmer.
However the more zonal jets reduced inflows of warm air to the polar regions which actually became colder. It is known that zonal jets tend to cut off and isolate the polar air masses.
The net effect was pretty much zero globally, and the reverse applies when the jets become more meridional. Then the polar regions warm due to more incursions of warm air, and the mid latitudes cool due to more polar air flows across them.
There is a complicating factor in the Arctic because warm ocean water can flow into the Arctic ocean below the ice and melt it from beneath but the global air circulation and albedo simply changes accordingly to accommodate that feature too.
There is a case for arguing that the observed warming is simply an artifact of our non-satellite temperature measuring system, which failed to give appropriate relative weights to polar, equatorial and mid-latitude temperature readings.
Hence the satellites showing much less variability.
∆T(k) = λ ∆F(k)(1 – exp(-1/ τ) + ∆T(k-1) * exp(-1 / τ)
Willis, I’m not sure how this is supposed to read, but the number of brackets does not match here. Please check. However, I’d agree with others that it would be better to stick to a simple exp unless you can give a positive reason for doing otherwise (e.g. land and sea responses, or different ocean depths, ~200 m).
The other thing is about different responses on different time scales. There must be a strong negative feedback on short time scales, otherwise we would not be here to discuss it. I think this is what Lindzen and Choi 2011 (On the Observational Determination of Climate Sensitivity and Its Implications) was picking up, and why their feedback was a lot stronger than in other studies.
They deliberately picked out sections of the record showing significant (deseasonalised) change.
These are, by choice, the parts of the record with the fastest change, hence the greatest radiative imbalance. Thus they are informative but probably not a measure of “the” feedback value (or the implied sensitivity) but that of short term response to imbalance.
They used actual data from ERBE and CERES TOA; this may be a good alternative to the model TOA you used here and may provide a better test for your hypothesis.
Equally your choice of looking at the annual cycle during a period with a large decadal warming trend will reveal the annual time-scale response by the shape of the Lissajous figures and the decadal scale from the change of the figures over time.
I like this Lissajous approach as I think this kind of overall system analysis can tell us more about how it behaves than dubious home rolled statistics.
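As a sketch of what that Lissajous diagnostic looks like (my illustration, reusing the one-box recursion from earlier in the thread with assumed values):

import numpy as np

lam, tau = 0.08, 2.5                     # assumed sensitivity and lag
t = np.arange(240)                       # months
dF = 10.0 * np.sin(2 * np.pi * t / 12)   # annual forcing cycle, W/m2

dT = np.zeros_like(dF)
a = np.exp(-1.0 / tau)
for k in range(1, len(t)):
    dT[k] = lam * dF[k] * (1 - a) + dT[k - 1] * a

# Plotting dF against dT traces an ellipse; zero lag would collapse it to
# a straight line. The lag also shows up as the peak cross-correlation:
lags = np.arange(6)
corr = [np.corrcoef(dF[: len(dF) - l], dT[l:])[0, 1] for l in lags]
print(lags[int(np.argmax(corr))])        # peak-correlation lag, in months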
Equally, I don’t think you can look for just one time constant. I think there are different depths of ocean (primarily) involved that will have hugely different time constants. For a simple analogy, think of paralleled capacitors: short-term response being nF, decadal uF, and centennial scale mF; deep ocean in farads!
The overall long term feedback must be negative as is witnessed by the last 4.5 billion years.
Somewhere in the middle is a positive feedback that gives rise to the bistable glacial-interglacial climate flip-flop. We are already at the hot state of the bistable; there is 4.5 Ga of data showing sufficient negative feedback for the system to be solidly constrained and not susceptible to “tipping points”, despite huge changes in “forcing”.
I suggest you look at the Lindzen and Choi paper, it looks at the tropics specifically as I suggested you could do in the earlier thread. They will probably point you to relevant data that you said you were unable to get for the albedo paper.
Your initial result still has value despite not accounting for LW. The fact that the model worked as well as it did indicates you should be able to make the gross approximation that LW is affected in a similar way to SW.
While there are a number of reasons why this is technically not accurate, it has to be said that your model is probably the most accurate of anything I have seen in the last 5 years of looking at a whole range of areas of climate-related studies.
Despite the naivety of the approach, I think that makes it a remarkable achievement.
/best.
Of more relevance here would be for you to explain to me and any others still reading why the Ideal Gas Law and the Standard Atmosphere do not or cannot explain how the atmospheric volume could change to negate any surface heating that would otherwise arise from causes other than greater atmospheric mass or more top of atmosphere insolation.
What, exactly, is it about the ideal Gas law that you are fond of? It is utterly free of dynamics. It describes one specific thing — a gas in a container in a gravity-free idealized environment, where the molecules of gas are basically non-interacting or are trivially (hard sphere) interacting, so it cannot describe phase changes.
If you want to understand atmospheric dynamics, you can start by looking at the Navier-Stokes equations — nonlinear partial differential equations so fiendishly difficult that mathematicians cannot even prove that general solutions exist, let alone find them in any but the simplest cases. Of course this isn’t enough — the Earth is a set of coupled Navier-Stokes systems (at least one for atmosphere and another for the oceans), and in the ocean, density is driven by salinity, evaporation, turnover, temperature, land runoff, ice melt, surface winds and weather — over 1 to 1000 year timescales (so some fraction of what the ocean and climate are doing now depends on what the ocean and climate did during the dark ages). In other words, too complex for humans to be able to solve. Still, if you want to understand, for example, the adiabatic lapse rate and why a lower atmosphere is warmer than the upper atmosphere even though gravity does no net work heating the system then Navier-Stokes is the right place to start, although one can make heuristic arguments that help you with the general idea without it.
The ideal gas law per se is (obviously) isothermal, and as soon as you start to let one parcel of gas expand into another (even in those heuristic arguments) you have to take a staggering amount of stuff into account — buoyancy, turbulence, compressibility, conductivity, non-Markovian past history so that even describing the parcel itself according to the ideal gas law with a strictly local temperature requires the full use of the first law of thermodynamics (work done by or on the parcel, heat flow in and out of the parcel, total enthalpy of the parcel) and then there is always the water in the air, which isn’t even approximately describable by an ideal gas law. Water is a polar molecule! It is always strongly interacting at a molecular level, and it has startlingly nonlinear radiative properties as well. So I don’t really “get” your fixation on the ideal gas law as an explanation for Nikolov and Zeller’s “miracle”.
On top of this, I don’t understand what you are asking me to do. Explain how the atmospheric volume would change to negate any surface heating that would otherwise arise from causes other than greater atmospheric mass? Greater atmospheric mass doesn’t cause any surface heating. What changes in atmospheric volume? And what about albedo? TOA insolation matters, sure, but the fraction of that which reaches the ground is determined in large part by straight up albedo, and the albedos of the planets on N&Z’s list are staggeringly different. I don’t even understand what this sentence means — its referents make no sense. Volume doesn’t have anything to do with surface heating. Surface heating is pretty much caused by insolation first, winds blown in from places where insolation caused surface heating second, and winds blown in from still more delayed reservoirs (e.g. the ocean) carrying heat delivered by insolation in some mix of air temperature and latent heat (humidity). What on earth does “greater atmospheric mass” have to do with anything, aside from providing more matter to help carry this heat from place to place?
rgb
Why do people keep traipsing out the “ideal gas law” in regard to the earth atmosphere, where it has no applicability at all? It is an idealized formula for a system that is in static equilibrium. Earth’s atmosphere is never in any kind of equilibrium.
You took the words right out of my — uh, keyboard. It is not correct to say that it has “no applicability at all”, though. It has some local validity for “parcels” of air in a coarse-grained description of the atmosphere. That is, if I take a chunk of air the size of a breadbox in between my two hands, it has a roughly uniform pressure, temperature, volume, and mass, and is “pretty well” described by PV = NkT. If you take too small a parcel it isn’t (and it gets to be too hard to solve the PDEs that describe the global system). If you take too big a parcel, as you note, the pressure in the parcel differs significantly from top to bottom, the temperature varies across or up and down the parcel, the parcel is differentially moving, there is turbulence in the parcel, etc.

The best that can be said for the ideal gas law is that you can form a set of partial differential equations from it, things like

p(x) = n(x) R T(x)

that describe local variations in molar density (or, with a conversion factor for moles to mass, mass density). Those relations are useful (and often used) to derive various relations in climate science. But one derives those relations. One doesn’t wave one’s hands and pretend that the words correctly describe a complex mathematical system.
rgb
Oops, I needed an “R”, not a “k” in the PDE in the previous reply. I was thinking moles, but I’m used to working with molecules. Sorry — scale one or the other by Avogadro’s number…
rgb
[Fixed -w.]
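An example of deriving a relation from that local form rather than waving at pV = nRT (a textbook sketch, assuming a constant-temperature column for simplicity): combining p = rho*R_air*T with hydrostatic balance dp/dz = -rho*g gives an exponential pressure profile.

import numpy as np

g, R_air, T = 9.81, 287.0, 288.0    # assumed constant-T toy atmosphere
p0 = 101325.0                       # surface pressure, Pa
H = R_air * T / g                   # scale height, ~8.4 km

z = np.array([0.0, 1e3, 5e3, 8.4e3])    # heights, m
print(H, p0 * np.exp(-z / H))           # pressure falls by 1/e near z = H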
When a gas expands its temperature drops.
The troposphere expands vertically when more energy is added.
Unless the additional energy represents an increase in total system energy content the temperature at the surface will not rise.
Total system energy content only increases if one increases top of atmosphere insolation OR total atmospheric mass. The more massive an atmosphere, the higher the surface temperature.
If the energy content of the troposphere increases from any other cause then that represents only a redistribution of available energy within the system and so there need be no increase in surface temperature.
In practice some areas of the surface warm but others cool and the net effect on average global surface temperature is near zero.
As far as I am aware that is all well established science relating to non radiative energy transfers.
Robert Brown asked:
“What on earth does “greater atmospheric mass” have to do with anything, aside from providing more matter to help carry this heat from place to place?”
You answered your own question. Think it through.
If there is more matter carrying more heat from place to place then the surface temperature will get higher before that heat is finally released to space because the shifting from place to place takes longer and more energy accumulates in the system.
Absolutely bog standard basic physics, and at base what N & Z are saying. It is also as per the Ideal Gas Law, which describes the dynamic relationship between the five parameters of pressure, volume, temperature, molecular density and the gas constant R.
When a gas expands its temperature drops.
Maybe you should learn the first law of thermodynamics sometime.
I’m just saying.
rgb
If there is more matter carrying more heat from place to place then the surface temperature will get higher before that heat is finally released to space because the shifting from place to place takes longer and more energy accumulates in the system.
Granting that there is some truth to this, it has nothing to do with the ideal gas law. Nor is it even vaguely, remotely, original to N&Z — adiabatic lapse rates are derived (subject to various assumptions) in over-the-counter climate physics textbooks.

Finally, it is not “at base what they are saying”. They are saying that

Nte = Ts/Tgb = exp[ (Ps/54000)^0.065 + (Ps/202)^0.385 ]

where Nte is the dimensionless scaled relative-to-greybody warming computed relative to the moon as the “ideal” representative greybody and Ps is surface pressure in bar, as a universal law that works perfectly — and I do mean perfectly — for Mercury, the Moon, Triton, Titan, Europa, Mars, Earth and Venus, which is true if one uses their special sauce to evaluate Nte, and which results in a very imperfect fit (some would cruelly be tempted to call it no fit at all) if one leaves out the sauce and uses published numbers with error bars, and an even more contrary fit if one includes the Jovian moons besides Europa (and, I’m sure, other moons of the other gas giants or the gas giants themselves, provided one could actually get data on their “surface” pressure and temperature in the first place).

There is other stuff N&Z do that isn’t terrible. I appreciate their desire to improve the computation of average surface temperature, since that sucks even on the Earth (with huge numbers of sampling stations and methods) and is almost laughable when one attempts to determine a mean temperature for the other planets from a teensy handful of observations, but ultimately that simply means that the error bars on both Ts and Ps are large (and completely unacknowledged in their miracle fit — where exactly is that pesky chi-squared that should accompany any such fit?).

But the paper in question is awful. Seriously, seriously awful. It is completely stupid to try to make the “base” albedo of all planetoids match that of the moon. Europa is covered in ice, Ganymede very definitely is not, and this is “reflected” in their bond albedos — 0.43 for Ganymede, 0.67 for Europa. But the real wild card is Callisto — a bond albedo of ~0.2, farthest from Jupiter (and hence least susceptible to all forms of warming from Jupiter).

If you compare Europa (included by N&Z) and Callisto (not included, for obvious reasons) the whole argument is over. The mean surface temperature of Callisto is considerably higher than that of Europa. Of course it is; its albedo is around a third as large. It’s dark and absorbs lots of sunlight where Europa reflects it before it has any chance to warm it. It is farther from Jupiter (so if Jupiter is a source of radiant heat, it should be cooler). It is farther from Jupiter (so if Jupiter is a source of tidal heating, it should be cooler). Its mean insolation is, of course, on average the same as Europa’s, and the variations there cannot possibly be used to explain the fact that Callisto is over 20% warmer than Europa!

By every standard, if “moon-referenced greybody atmospheric heating” is a valid concept for moon-sized objects with atmospheres, Europa should be warmer than Callisto. It is not. Ganymede, farther from Jupiter, with the same atmospheric pressure as Callisto but an intermediate albedo (between Europa and Callisto), has — wait for it — a temperature that is intermediate between Europa and Callisto.

The temperature differences at near-constant insolation of all three Jovian moons are easily explained by something else — the albedo — completely independent of Ps (and the albedo countervaries with distance from Jupiter, so your assertion that “Jupiter warms them” is irrelevant if true — it doesn’t warm them enough, does it?). The albedo therefore should (and, of course, does) modulate an unpredictable error/difference/anomaly compared to any function of Ps only used to “predict” a relative surface temperature. Yet no such variation is observed in N&Z’s “miracle curve”, is it? Not even in planets in their list with enormously different albedos, where if you plotted the albedos of the planets or moons against their surface pressure you would get no correlation whatsoever. Again we see this in just three Jovian moons — albedos of 0.2, 0.4, and over 0.6, where in all cases the atmospheric surface pressure is basically “hard vacuum” (10^-12 bar — picobar).

A bit of a puzzle, isn’t it? All of these moons, yet the one included is the one that (after suitable and occult secret sauce adjustments) fits square on their curve. A clearly necessary anomaly is omitted, a correction relative to a pressure-only model based on albedo, where the latter is completely decorrelated from surface pressure and where one of the snippets of physics we are pretty certain of is that actual surface insolation contributing to heating is TOA insolation minus the reflected fraction, which is directly dependent on the albedo, not the surface pressure. And yet damn, they all fit square onto a horribly nonlinear curve with impossibly nonphysical reference pressures and absurd exponents, even though the error bars in our knowledge of their surface pressures and temperatures — quite aside from the albedo-based corrections — are in some cases almost as large as the quantities themselves. Which leads to a minuscule chi-squared for the number of degrees of freedom, and a p-value for the null hypothesis of “this is unbiased measured data that just happened to fit exactly on this curve” that is most correctly interpreted as “the null hypothesis is almost certainly false”, quite independent of the curve in question and its merits. What the source of bias is — well, I personally think that is pretty clear, especially after I plot the data myself and land nowhere near their curve with any small body but the Moon, Mars, Earth and Venus when I omit the secret sauce.

One is tempted to interpret their miracle as “we fit the moon, Mars, the Earth and Venus with a four parameter nonlinear function, and then tweaked our estimate of Ts and Ps for the moons we used to fit this curve and ignored the rest, forgetting that the inclusion of error bars or independent checking of our numbers would reveal what we did to anyone that actually looked”. But of course that isn’t plausible. Is it?

rgb
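The albedo ordering rgb describes is a one-line greybody calculation (a sketch: the bond albedos are the ones quoted in the comment, and the ~50 W/m2 insolation at Jupiter’s 5.2 AU is my number):

sigma = 5.670e-8                   # Stefan-Boltzmann constant, W/m2/K4
S = 1361.0 / 5.2**2                # mean solar constant at Jupiter, W/m2

for name, alb in [("Europa", 0.67), ("Ganymede", 0.43), ("Callisto", 0.2)]:
    T = ((1 - alb) * S / (4 * sigma)) ** 0.25
    print(name, alb, round(T))     # the darkest moon comes out warmest

Surface pressure never enters; the ordering (Callisto warmest, Europa coolest) falls out of albedo alone, consistent with Callisto being over 20% warmer than Europa, as noted above.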
Paul_K: “Can you reparse the equation so that it is tied to, say, a heat balance for a multiple capacity system with some assumptions?”
Yes, I can, but I’ll need to get back to you tomorrow; I’m just responding now so you won’t think I’m ignoring you.
http://www.qrg.northwestern.edu/projects/vss/docs/thermal/3-how-could-expanding-gas-cause-heat-loss.html
“When gas expands, the decrease in pressure causes the molecules to slow down. This makes the gas cold.”
Thus if one adds energy to a gaseous atmosphere open to space, which then expands, the two processes offset one another if there is no additional input from an external energy source such as a sun and no increase in atmospheric mass.
“Granting that there is some truth to this:
“If there is more matter carrying more heat from place to place then the surface temperature will get higher before that heat is finally released to space because the shifting from place to place takes longer and more energy accumulates in the system.”
it has nothing to do with the ideal gas law”
The Ideal Gas Law describes the relationships between pressure, volume, molecular density, temperature and the gas constant R.
Therefore it has everything to do with the temperature that will develop within an atmosphere of a given mass at a given pressure and subjected to insolation.
Increase the mass and then, all other things remaining equal, temperature will rise.
It is the Ideal Gas Law, governing non-radiative energy transfers, which adjusts the internal system energy flows to ensure that radiative energy in always matches radiative energy out for a given atmospheric mass subjected to a given level of solar input.
Oh No! It’s worse than we thought…
The obvious cause of albedo variation is cloud cover fraction, which affects both incoming and outgoing radiation. It’s not the only cause of albedo variation. Evidently, one can vary the reflectivity of clouds by varying the size of the nucleation particles. Use smaller particles and you get higher reflectivity out of the same patch of clouds, and that doesn’t have an effect on LWR like increased cloud cover fraction does.
As I recall, Lindzen has an Iris effect theory using this sort of albedo variation.
For the whole Earth average, balance for incoming and outgoing power happens at about 239 W/m^2 coming in and going out. Radiation leaving the Earth’s surface at the average T value, 288 K, is about 390 W/m^2, while around 270 W/m^2 leaves the top of the troposphere, which means that about 70% of the surface emission escapes or is replaced by atmospheric emission on the way out. With about 70% of surface radiation escaping, one can see that to reduce 270 W/m^2 down to 239 W/m^2 would require a reduced output from the surface of around 43 W/m^2, dropping 390 down to 347 W/m^2, which is equivalent to a radiating temperature average of around 280 K. That would mean the surface T would be down around 280 K, or a mere 7 deg C above freezing.
What is happening here is that around 60% of the Earth has clouds and 40% is clear sky. Clear skies contribute around 270W/m^2 and the overall average is around 239. Working through a weighted average (60/40), one can determine an average emission for cloudy skies. This comes out to around 218 W/m^2 of radiated power coming from cloud tops that needs to escape the atmosphere. This corresponds to around 249 K or -24 deg C. That’s the atmospheric temperature about halfway to the tropopause.
Consequently, in losing cloud cover, one increases the radiated power leaving the Earth as well as reducing albedo and allowing more incoming solar power. If one assumed we totally lost all cloud cover and wound up with an Earth albedo of 0.08 (due to all that water on the surface), we’d only have 313 W/m^2 coming in to be absorbed and to be balanced. If we assume the same fraction of surface emissions makes it through the atmosphere (70% for clear skies), then the surface would have to heat up enough to radiate 450 W/m^2, and that corresponds to a T average of 298 K, only 10 degrees warmer than the 288 K mean.
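Those figures are easy to verify; a quick sketch of the arithmetic (my check, using the round numbers in the comment above):

sigma = 5.67*^-8;                  (* Stefan-Boltzmann constant *)
tFromFlux[f_] := (f/sigma)^(1/4)   (* temperature implied by an emitted flux *)
cloudy = (239 - 0.4*270)/0.6       (* 60/40 weighted average: ~218 W/m^2 over cloudy skies *)
tFromFlux /@ {390., 347., cloudy, 450.}
(* ~{288, 280, 249, 298} K, matching the temperatures quoted above *)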
Is anyone here starting to see how lunatic the claims of a 6 deg C rise due to a somewhat minor increase in a trace gas like CO2 are?
Robert, you said this:
“one of the snippets of physics we are pretty certain of is that actual surface insolation contributing to heating is TOA insolation minus the reflected fraction which is directly dependent on the albedo, not the surface pressure.”
Do you not realise that surface pressure sets both the surface temperature at a given level of insolation AND the amount and shape of atmospheric circulation required to produce a given albedo?
For any given planet with an atmosphere the air circulation must reconfigure itself until radiation in equals radiation out.
In the process it influences albedo.
If the air circulation were to fail in its task then there could be no atmosphere because it would be congealed on the surface or boiled off to space.
The equation of the Ideal Gas Law ultimately determines how the energy flows through the atmosphere must be configured in order for stability to be maintained and for an atmosphere to be retained and remain gaseous.
I suspect that when we know more about the atmospheric circulations of the moons of Jupiter and the various phase changes of materials that can occur within them then we will see how and why they have differing temperatures and / or fail to fit the curve noted by N & Z.
Note the fit of the easiest examples to the curve. Nothing in nature is perfect and so it will never be as neat as you seem to expect. As regards examples that do not fit I am sure that the reasons will be found within the particular air circulations and compositional variations of those examples.
Given the huge differences between Venus, Earth, Mars and the Moon, the mere fact that they fit anywhere near each other on any curve is quite a surprise and highly likely to be of more general significance.
Let it ride for a few years before sounding off with such certainty 🙂
“N & Z have only made a tentative start by”
Mods, could you remove the above words from para 1 of my last post please.
[Done. -w.]
Paul_K: “Can you reparse the equation so that it is tied to, say, a heat balance for a multiple capacity system with some assumptions?”
Yes. If the earth’s surface temperature $T_1$ rises in response to the total radiation $F_T$ the surface receives and cools as it re-radiates, then $C_1 \dot{T}_1 = F_T - r_1 T_1$. If, say, the atmospheric temperature $T_2$ responds to the surface temperature and also loses heat by re-radiation, it behaves similarly: $C_2 \dot{T}_2 = a T_1 - r_2 T_2$. Now let’s say that the total radiation the surface receives is made up of radiation $F$ from space and some re-radiation from the atmosphere proportional to the atmosphere’s temperature; we have $F_T = F + b T_2$. Plugging that in above and re-arranging to eliminate $T_2$ results in a second-order equation for $T_1$.
Paul_K: “Can you reparse the equation so that it is tied to, say, a heat balance for a multiple capacity system with some assumptions?”
Sorry my last response was obscure; haste makes waste. Let me try again:
If the earth’s surface temperature $T_1$ rises in response to total received radiation $F_T$ and cools as it re-radiates, its temperature behaves thus:
$C_1 \dfrac{dT_1}{dt} = F_T - r_1 T_1$.
If, say, the atmospheric temperature $T_2$ responds to the surface temperature and also loses heat by re-radiation, it behaves similarly:
$C_2 \dfrac{dT_2}{dt} = a T_1 - r_2 T_2$.
Now let’s say that the total radiation $F_T$ the surface receives is made up of radiation $F$ from space and some re-radiation from the atmosphere proportional to the atmosphere’s temperature:
$F_T = F + b T_2$.
Plugging that in above and re-arranging to eliminate $T_2$ results in:
$C_1 C_2 \dfrac{d^2 T_1}{dt^2} + (C_1 r_2 + C_2 r_1)\dfrac{dT_1}{dt} + (r_1 r_2 - ab)\,T_1 = r_2 F + C_2 \dfrac{dF}{dt}$,
which is of the same form as the equation, whose form I gave the other day, for the two-pole system whose sensitivity is $r_2/(r_1 r_2 - ab)$ and whose time constants $\tau_1$ and $\tau_2$ are the negative reciprocals of the roots of $C_1 C_2 s^2 + (C_1 r_2 + C_2 r_1) s + (r_1 r_2 - ab) = 0$.
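For anyone who wants to play with this two-box system, here is a minimal numerical sketch (mine, with made-up constants rather than anything fitted to data), in the same Mathematica used elsewhere in this thread:

(* step response of the two-box system above; all constants invented for illustration *)
c1 = 1; c2 = 0.2; r1 = 1; r2 = 0.5; a = 0.3; b = 0.2; f = 1;
sol = NDSolve[{c1 t1'[t] == f + b t2[t] - r1 t1[t],
    c2 t2'[t] == a t1[t] - r2 t2[t],
    t1[0] == 0, t2[0] == 0}, {t1, t2}, {t, 0, 30}];
Plot[Evaluate[t1[t] /. sol], {t, 0, 30}]
(* the two time constants come from the characteristic polynomial *)
tau = -1/s /. Solve[c1 c2 s^2 + (c1 r2 + c2 r1) s + (r1 r2 - a b) == 0, s]
(* two decay times, ~1.22 and ~0.37 in these units;
   equilibrium T1 = r2 f/(r1 r2 - a b) ~ 1.14 *)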
Robert and Willis may find this new article informative
http://tallbloke.wordpress.com/2012/06/08/pressure-induced-changes-in-surface-temperature-of-titan-neptunes-largest-moon/
Thanks Tallbloke.
There are some pretty amazing gaps in the general scientific knowledge of some highly qualified and experienced but apparently overly specialised scientists.
Stephen Wilde says:
June 7, 2012 at 1:39 am
Now one could argue that, the atmosphere not being an ideal gas, the Law is capable of being invalidated; but if one were to say that, then you have to show exactly how and to what extent the non-ideal nature of the gas causes a divergence from what the Law predicts.
Oh but the atmosphere is an ideal gas, Stephen – according to the fisics created to promote AGW. That’s why they don’t have convection, or gravity..
They don’t have anything between the vacuum of space and Earth, their atmosphere is empty space; hence only radiation applicable.
They can’t follow your arguments because they can’t relate the ideal gas laws to real gases (which no real gas obeys), they really think the atmosphere is comprised of ideal gas molecules, except water vapour, which travel at high speed through empty space bouncing off each other, and the container,without attraction. By which they get carbon dioxide thoroughly mixed because as per ideal gas description it will spontaneously rise from the ground and diffuse at great speed through the atmosphere, which for them is empty space. That’s why it can stay up hundreds and even thousands of years accumulating, because it is ideal gas it doesn’t have any weight, is not subject to gravity.
Their ideal gas world has it that gases aren’t buoyant in air – how can they be since they have no air?
They have no way of producing clouds, even though they say water vapour isn’t an ideal gas, clouds just magikly appear. They don’t have rain because in their ideal gas world there is no attraction, carbon dioxide just goes bouncy bouncy all over the empty space accumulating.
Don’t look for internal consistency in their arguments for this, they will in the same paragraph describe ideal gas diffusion in empty space as the reason carbon dioxide becomes thoroughly mixed in no time at all and then say it’s because it’s being bounced around in Brownian motion and that’s the reason scent wafts across the room when the bottle is opened..
But empty space is what they have as a concept of the atmosphere around them between their ears..
…they have no sound in their AGW Science Fiction world, they can’t hear this.
They don’t get the joke.
See my post here where I tell the story of how I discovered they think this is real physics and so why arguments between those who still live in the real world where gases have volume subject to gravity, who have a real fluid gas atmosphere with weight around them, and those who live through the looking glass with Al where impossible physics is the norm where gases are hard imaginary ideal gas dots bouncing off each other and the walls of a container, talk past each other.
http://wattsupwiththat.com/2012/06/02/what-can-we-learn-from-the-mauna-loa-co2-curve-2/#comment-1003183
Do you not realise that surface pressure sets both the surface temperature at a given level of insolation AND the amount and shape of atmospheric circulation required to produce a given albedo?
Do you not realize that the actual data on albedo makes it clear that this statement is completely, utterly, cosmically false, untrue, not the case, absurd?
It isn’t even particularly true for the last three planets on the list — Mars, Earth and Venus — if you want to find a “trend”. Nor is it true just for the Earth. The Earth’s albedo has varied 7% over the last 15 years. Are you trying to assert that the mean surface pressure has varied at all over that time as a causal factor? If so, data please.
Look, plot albedo against surface pressure. Then come talk.
rgb
Re:Joe Born says:
June 7, 2012 at 4:52 pm
Wow. Seriously brilliant, Joe.
This is again a “watch this space” post since I have literally only just picked up your post. I need to play with it for a while before getting back to you. I may need a day because “she indoors” is on my back at the moment.
“The Earth’s albedo has varied 7% over the last 15 years. Are you trying to assert that the mean surface pressure has varied at all over that time as a causal factor? If so, data please.”
Mean surface pressure ?
That doesn’t vary except over geological timescales.
What does vary is the surface distribution of pressure and, indeed, cloudiness and albedo has varied.
However it is a result of a reconfiguration of the surface pressure patterns. In the case of Earth, albedo declined when the jets were more zonal and is now increasing with more meridional jets.
So, to make it clearer.
Mean surface pressure sets the surface temperature.
Changes in the regional distribution of surface pressure reconfigure the atmospheric circulation to ensure that radiation out equals radiation in and part of the process is changing cloudiness and albedo.
Plot albedo against the changing surface pressure distribution and apply some thought.
Paul_K says, June 5, 2012 at 2:15 am: As before, Willis, please take this as a constructive critique of your work. I am not trying to do a hatchet job, I promise you.
Unlike Willis who never ceases to do that in the case of Nikolov & Zeller…
“””””” …….Stephen Wilde says:
June 7, 2012 at 1:39 am
George.
K is the alternative term that Robert used for R.
The equation is based on the assumption that every one of those five factors is interlinked and will respond predictably to changes in the others. You accept that the value of R (or K) is a physical constant so knowing that is the key…….””””””
Stephen,
The ideal gas law, and indeed any of the other equations of state such as the Van der Waals equation, apply ONLY to systems where the values of p, v, T are absolutely constant everywhere in the system, and where the amount of material (n) also is fixed.
R is the ideal gas constant, so it is NOT K, which represents Temperature, or k, which is Boltzmann’s constant. One cannot go throwing around universal symbols willy-nilly, as knowledgeable readers are expecting standard physical terms to use accepted standard symbols.
In earth’s atmosphere, none of the p, v, T trio is even vaguely constant everywhere in the atmosphere. Nothing useful is learned by applying an equation to a system which simply does not conform to the set of restrictions for which that equation is valid.
George.
Robert used K instead of R. He later acknowledged that as an error. I just referred to it in passing by way of covering my back because I thought he knew something I didn’t.
I used R which is the gas constant which varies from gas to gas.
In Earth’s atmosphere P (pressure) is a constant if taken globally at less than geological timescales.
Likewise T being a measure of solar input is also nearly a constant because raw TSI varies so little.
n is also fixed because there are only so many gas molecules in the atmosphere (ignoring phase changes).
V is variable as seen from the rising and falling of the tropopause and the expansion and contraction of the upper atmosphere in response to solar variability.
So, if something internal to the system such as GHGs try to increase temperature in the atmosphere the variability of V provides a mechanism for offsetting the thermal effect of the GHGs.
The equation PV = nRT describes interlinked relationships and can be used to predict the system outcome from changes in individual variables.
In so far as Earth’s atmosphere is composed of non-ideal gases, the air circulation adjusts appropriately so that the equation is preserved overall.
Paul_K:
Just in case you were tempted to rely on my algebra, I’ll mention that a factor is missing from one of my last missive’s equations. That equation should have read:
$C_1 C_2 \dfrac{d^2 T_1}{dt^2} + (C_1 r_2 + C_2 r_1)\dfrac{dT_1}{dt} + (r_1 r_2 - ab)\,T_1 = r_2 F + C_2 \dfrac{dF}{dt}$.
What does vary is the surface distribution of pressure and, indeed, cloudiness and albedo has varied.
Are we talking in the same solar system? Nikolov and Zeller isn’t about the Earth. It makes egregious claims for the planetary bodies of the solar system. Show me a plot of correlation between mean surface pressure on planetary objects — the independent variable in N&Z — and albedo, that works across all of the bodies in the solar system. Be sure to include the gas giants, as soon as you figure out what their “surface pressure” and “surface temperature” is, compared to their albedo. Surely that is no crazier than including Venus — a wild card in the solar system — in the same plot as Europa, Mars, the Earth, and Mercury.
As for the Earth — again I remind you that we are talking about Nikolov and Zeller, not just “the Earth”, and N&Z include the mean surface temperature of the Earth plotted against mean surface pressure. So while of course, sure, cloudiness and hence albedo varies with surface air pressure (for complex reasons) on Earth, this does not explain (by providing a causal factor) the increase in albedo in the 80s and 90s to all-time (recorded) highs, nor its subsequent decrease by 7% afterwards. Nobody knows why either one happened, in the sense that nobody predicted either event or can predict when the current albedo will shift again or in what direction. All that we know is that, all things being equal, increased mean albedo means decreased mean global temperature, because it directly modulates insolation (the primary heat source) by reflecting sunlight away with almost no air or surface warming, and only indirectly and inconsistently modulates heat loss — cloudy days being consistently cooler than sunny days, most times and places.
So please, if you want to argue, don’t change the topic of the argument in mid-stream. I assert that N&Z’s work is both incorrect and indefensible even as a hypothesis or wild assertion. By this I don’t mean to say that air pressure is completely irrelevant to e.g. the DALR, nor do I mean to say that the ideal gas law is completely inapplicable to atmospheric air, so don’t put words in my mouth or raise red herrings or straw men in the argument. I will say it again — albedo is visibly almost completely decorrelated with surface atmospheric pressure in planetary objects. It varies by a factor of three or more in planetary objects with no or almost no atmosphere — simply the color of their surface rock is different for different airless bodies. It varies by a factor of 5 (or even six) across the observable ranges. Some of the planetary objects with the highest albedo have almost no atmosphere. Some have atmospheres so thick and deep we don’t even know how thick and deep they are because we can’t see through them to the bottom. Atmospheric chemistry varies wildly. Temperature varies wildly. There is no simple physical rule based on surface pressure that describes or predicts planetoid albedo!
There is a simple rule predicting how planetoid greybody temperature should vary based on albedo, however. The physics in this rule is pretty elementary and has long since been verified. It is used (for better or worse) in climate science (including by N&Z) for the baseline temperature from which the GHE proceeds. Yet N&Z’s plot of the planets utterly fails to respect this rule — Europa, Titan and Triton have no business being on their “miracle curve” because all three have somewhat anomalous albedos compared to their atmospheric pressure. Of course they aren’t on their curve — if you plot them using over-the-counter data rather than data processed with their secret sauce — and they aren’t believably on their curve under any circumstances (even with the sauce) if one plots the error bars for the data and uses them to compute an actual Pearson’s $\chi^2$ for their fit.
So let’s stay on topic. We’re not discussing the weather — the tendency for rain on the Earth when the barometer drops, fair weather when the barometer is high — we’re not even talking about climate on the Earth. We’re talking about the scaled, dimensionless relative surface temperature of a list of planetoid objects compared to the moon, as a function of mean, dimensionless surface pressure scaled by two (wildly nonphysical) reference pressures in bar and raised to the power of $\nu_1$ and $\nu_2 = 0.385$ respectively. Show me where that rule comes from, or what it has to do with albedo, the DALR, or anything else in the known universe. Explain to me how a reference pressure that could only be found at the bottom of a water column 540,000 meters deep appears in an expression predicting surface temperatures on airless moons (while those same expressions completely ignore albedo).
rgb
I used R which is the gas constant which varies from gas to gas.
Say what?
$R$ is the universal gas constant, $R = N_A k_B$, where $N_A$ is Avogadro’s number. There is no difference between $PV = nRT$ and $PV = N k_B T$ — they just express the ideal gas law in different units — moles vs molecules. NEITHER of them expresses it in terms of molar mass — if you want to turn it into a formula for mass density you have to scale it by molecular weight. And it doesn’t vary from gas to gas — rather, either a gas behaves approximately like an ideal gas in a given pressure/temperature range or it doesn’t, and whether or not it does depends on details of intermolecular interaction (and e.g. whether or not one is near a phase transition).
All of this stuff is fairly clearly laid out in any decent textbook on thermodynamics or stat mech. Sadly, some of the math is pretty complicated, especially the math associated with phase transitions (which are highly nonlinear phenomena where things like the ideal gas law completely break down). But dry air for the most part is reasonably ideal. Humid air, all bets are off, however. Water vapour isn’t even vaguely, approximately, ideal in its behavior at typical Earth temperatures and humidities.
rgb
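A trivial numerical illustration of the moles-vs-molecules point (mine, with standard constants): one litre of an ideal gas at STP comes out the same whether you work with R or with Boltzmann’s constant:

kB = 1.380649*^-23;    (* Boltzmann's constant, J/K *)
na = 6.02214076*^23;   (* Avogadro's number *)
r = na kB;             (* universal gas constant, ~8.314 J/(mol K) *)
p = 101325.; v = 10^-3; t = 273.15;
{p v/(r t), (p v/(kB t))/na}
(* both give ~0.0446 mol: PV = nRT and PV = N kB T are the same law *)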
Joe,
OK, I managed to grab a few minutes off from toting barges and lifting bales to have a look at your work. You have come as close as I have seen to developing a meaningful physical underpinning for a response function with multiple superposed timeframes, but you are not there yet. Please do not be put off by the critique I make here. I think you have a great chance of finding a credible solution, but this isn’t it. All I am trying to do here is to make clear to you the broad constraints you need to consider for such a solution. I certainly don’t want to discourage you from carrying on the search.
I took your three starting equations to confirm that they do yield the differential equation you derive. I saw immediately that you have a typo in your equation. There should be a T1 term (rather than a constant) on the LHS, as you noted, but you should also note that its coefficient should be (r1r2 – ab). This is sufficiently close to what you wrote that I am sure it is a typo rather than any conceptual difference.
I then solved the equation for T1 to verify the solution form, and to match up the constants and coefficients.
The big problem stems from your three starting assumptions and their interpretation. As a general rule, the energy balance equations (actually flux balance) start with a Top of Atmosphere (TOA) balance. The basic idea is that after a flux perturbation (i.e. a forcing) the net flux difference has to “go somewhere”. If we ignore orbital kinetics, it is the only externally sourced energy to change the heat content of the planet. The assumption is that you can move heat around internally or store it – which might affect surface temperature – but you can’t add system energy outside of the TOA net flux balance.
A commonly made assumption is that the total energy over time (the integral of the net flux term) should be approximately equal to the total heat gain in the ocean, because of the ocean’s huge heat capacity, relative to other options. There is nothing at all to stop you from asserting that the net energy gain should be partitioned (instead) between the ocean and the atmosphere, which is what I think you are trying to do here. You do need to be aware however that given the relative heat capacities, your term C2 dT2/dt is likely to be very small and unlikely to be the main reason for needing a multi-period response function to explain observations! Most of the explanations for multi-time response of temperature in the time domain are based on slow deep ocean response, and this requires a connection via heat flux, not radiative flux. Atmospheric heat gain is small beer by comparison.
However, let’s continue to consider your solution under the assumption that energy associated with the total net flux imbalance is exhaustively partitioned between the atmosphere and the ocean – your two heat capacities. Your three equations do not reflect this partitioning. Your first equation looks remarkably similar to a typical TOA radiative balance for a single capacity system. However, when we get to your third equation, it becomes clear that you are treating $F_T$ as the total radiation the surface receives. You declare $F$ to be the radiative flux from space, but note that this IS NOT equal to the TOA forcing and has a very complex relationship with the TOA forcing. It IS possible to express the energy balance at the surface, but you cannot do it with just radiative terms. Both your first and third equations would have to include latent heat and sensible heat gains/losses to make any sense if they are expressed at the surface. For a full description of what this looks like, there is a paper written by Ramanathan in 1981 which is still the best I have seen. I will dig out the reference.
In summary, there is no conservation of radiative energy within the system, which is perhaps what you are assuming (?). You are trying to express an energy balance in terms of a radiative balance at the surface. This is a no-no.
Quite seriously, please don’t stop trying.
Joe,
Trying again, the Ramanathan paper is downloadable from here:
http://journals.ametsoc.org/doi/abs/10.1175/1520-0469%281981%29038%3C0918%3ATROOAI%3E2.0.CO%3B2
Paul
The equation PV = nRT describes interlinked relationships and can be used to predict the system outcome from changes in individual variables.
$PV = nRT$ describes a hard-sphere (non-interacting) gas in thermal equilibrium. It has absolutely nothing to do with “interlinked relationships”. It can predict system outcome, provided that the system is a closed system in thermal equilibrium or a slowly varying “quasi-static” system that is always approximately in thermal equilibrium. It can be heuristically useful somewhat past that regime, as long as one no longer says predict but rather understand some aspects of.
Good luck finding such a system in the real world (as opposed to idealized cylinders of an ideal gas in an idealized physics textbook that is clearly the only thing you’ve ever looked at to try to understand thermodynamics).
In so far as Earths atmosphere is composed of non ideal gases the air circulation adjusts appropriately so that the equation is preserved overall.
You do realize that this is complete nonsense, right?
The Earth’s atmosphere is primarily composed of gases that in fact are for all practical purposes ideal — N_2 and O_2 liquefy at temperatures well below the lowest temperatures (or pressures well above the highest pressures) found on the surface of the Earth — and CO_2 and O_3 and H_2 and He and Argon and the other trace gases ditto. The sole wild card is water vapor, which is constantly moving huge chunks of heat into and out of the surrounding air as it evaporates and condenses. Finally, as for “adjusting” so that the equation is “preserved overall” — what in the world does this even mean? The Earth’s atmospheric pressure and temperature vary wildly in all directions, all the time. It is safer to say that it is never in equilibrium than it is always in equilibrium. It is an open thermal system with energy coming in and going out in a very inhomogeneous way everywhere, all the time.
Finally, even in the very simplest studies of thermodynamics — first-year intro physics level stuff involving “ideal gases” (used simply because we can write down a reasonably simple equation of state that most undergrads can manage algebraically, not because it is particularly correct or universally applicable) — one usually learns about the First Law of Thermodynamics:
$\Delta U = \Delta Q + \Delta W$.
In words: the heat added to a system plus the work done on a system equals the change of the internal energy of the system. Of these, only the latter (the internal energy) is related to temperature per se.
There exist isothermal quasi-static processes — e.g. isothermal expansion of an ideal gas — for which the right hand side is zero. There exist adiabatic processes — e.g. adiabatic expansion or compression of an ideal gas without input or loss of heat. There exist processes where the gas is sealed in a container at constant volume so no work is done on or by the gas, where heating the gas directly increases the temperature. But most generally, all three happen at once — parcels of gas in an actual atmosphere are compressed or expand (work) while gaining or losing heat (conductivity, radiation) while changing their temperature, often in air that is saturated with humidity so that instead of changing temperature of the air water is pushed into or out of its liquid state, gobbling up or contributing a latent heat of vaporization along the way.
It is almost insulting to pretend that one can gain insight into atmospheric dynamics and actually predict global average climate on the basis of the ideal gas law. Piffle. The problem is a hard problem. Making the smallest assertion concerning the average behavior of the system can only be rationally done after doing some serious mathematics, not just waving your hands and saying that all of the underlying dynamics in this open system just happens to come out so that the ideal gas law — of all things — is true “on average” after all.
And then (when you’ve managed the First Law) we can try working on the Second Law of Thermodynamics, which also conspires against the “prettiness” of $PV = nRT$ as a theory of climate.
rgb
Sigh. I meant decrease of albedo in the 80’s and 90’s and increase in the albedo by 7% from the late 90’s to the present.
The bottom line being that the heating of the 80’s and 90’s can be proximately attributed solely to decreased albedo for unknown reasons, and the lack of additional heating and possible advent of weak cooling since then can also be tentatively attributed to increased albedo for equally unknown reasons. Yes, there are hypotheses for why albedo has varied. No, IMO they are not proven yet.
And it isn’t just albedo. The water vapor content of the stratosphere has dropped by 10% over a similar time frame.
Sadly, climate scientists are for the most part so distracted by CO_2 that they are ignoring albedo and stratospheric water vapor. If they weren’t they’d be mousy quiet about further heating, because the connection between both and global temperature ain’t rocket science, and without knowing the proximate cause of the albedo change they cannot say when, or if, or how, albedo will shift again.
rgb
More errata: $R = N_A k_B$, so $k_B = R/N_A$. Algebraic lydexsia strickes again…:-)
But seriously, I do know this stuff, somewhere, deep down inside;-)
rgb
Robert said in relation to the ideal gas law:
“It can predict system outcome, provided that the system is a closed system in thermal equilibrium or a slowly varying “quasi-static” system that is always approximately in thermal equilibrium”.
That is exactly what a planet with an atmosphere is.
In order not to lose the atmosphere the system has to be in thermal equilibrium or a slowly varying quasi-static system for so long as an atmosphere is present. If it were not, the atmosphere would boil off or freeze to the surface.
The only way that can be achieved is for radiation in to equal radiation out for most of the time and so it does as per observations. The only thing that changes is the height at which that balance occurs and the relevant variable is the volume of the atmosphere.
Thus, by your own admission the ideal gas law can predict system outcome for a planet with an atmosphere.
The system does it by altering the atmospheric circulation to always respond negatively to anything that tries to disturb the equilibrium.
If some internal factor causes incoming to exceed outgoing then the circulation changes to accelerate energy flow through the system so that equilibrium or quasi equilibrium is maintained.
Likewise if some internal factor causes outgoing to exceed incoming then the circulation changes to decelerate energy flow through the system.
Only increased top of atmosphere energy input or an increase in atmospheric mass will raise the equilibrium temperature as well as increasing the volume of the atmosphere.
What happens on Earth happens on all the other planets or moons with atmospheres too but we do not appear to have enough data about the way the other atmospheres reconfigure themselves over time to maintain their top of atmosphere energy balances.
At least N & Z are making a start.
You also said this:
“The bottom line being that the heating of the 80′s and 90′s can be proximately attributed solely to decreased albedo for unknown reasons, and the lack of additional heating and possible advent of weak cooling since then can also be tentatively attributed to increased albedo for equally unknown reasons.”
With which I absolutely agree, but I am giving you a reason. And at the heart of it is atmospheric pressure at the surface and the ability of an atmosphere to change volume to counter destabilising influences and, in the process, reconfigure the surface distribution of pressure, which alters albedo via clouds or aerosols or anything else that might affect the optical depth of the atmosphere.
The daft thing is that I agree with your conclusions about many things relating to climate and I agree with Willis generally about his thermostat hypothesis but neither of you will accept what seems obvious to me, namely that the behaviour of gases restrained by gravity and subjected to insolation as described in the ideal gas law and as observed in the Standard Atmosphere does indeed supply the answer that you both need.
Paul_K: “You do need to be aware however that given the relative heat capacities, your term C2 dT2/dt is likely to be very small and unlikely to be the main reason for needing a multi-period response function to explain observations! Most of the explanations for multi-time response of temperature in the time domain are based on slow deep ocean response, and this requires a connection via heat flux, not radiative flux. Atmospheric heat gain is small beer by comparison.”
First, let me make sure that I have not unintentionally flown false colors here. I’m not a scientist–I don’t even play one on TV:-) I’m just a retired lawyer attempting to separate the climate-debate wheat from the chaff, the latter of which seems greatly to outweigh the former. So I wouldn’t dream of attempting to write the actual equations for the climate system. I just whipped off some equations to show that physical systems could indeed result in “multiple-pole” behavior (as guys who know this stuff tell me they refer to it). I emphatically was not “trying to express an energy balance in terms of a radiative balance at surface,” and, although I’m a rank layman, I am aware that latent heat, conduction, convection, etc. would all go into the mix.
Be that as it may, I completely agree with your other statement I quoted above, and in fact I think I made an observation to that general effect in one of my previous comments in this thread. I even thought of using the oceans instead of the atmosphere for my matching equations and including conduction. But, as I say, I’m a layman, and all this math gives me a headache.
So I hope you’ll understand if I decline your invitation to keep trying. I got into this only because my bluster detector went off when Mr. Eschenbach claimed his approach would have detected a greater error if there were additionally longer time constants. (And, by the way, although I agree with you that it doesn’t matter much, I have demonstrated to myself that the time constants he gets for a single-pole system are indeed about half a period (half a month) too great.)
Joe Born says:
June 9, 2012 at 3:53 pm
Thanks, Joe. Bluster? I was stating a fact, Joe, which is that despite trying a whole range of possible configurations, I haven’t been able to fit a pair of time constants, one longer and one shorter, to the actual data. I can’t make it converge to that kind of arrangement no matter what I’ve tried.
Nor, as near as I can tell, have you been able to fit something like that to the actual data, or at least you haven’t reported the results. Nor has Paul_K. You’ve come up with very interesting formulas, but no actual results that fit the observations.
So that’s why I said that the error increases when I added a second, longer time constant to the setup. It was not bluster, it was simply a report of what I found. Might be right, might be wrong, I just report them as I find them.
Now it’s entirely possible that someone can come up with a way to do so. I couldn’t, and so far neither has anyone else, but absence of evidence is not evidence of absence.
Finally, I see no reason why the instantaneous sensitivity would be so small if there is a much larger sensitivity hiding out somewhere. What would make the sensitivity go from the short-term sensitivity of ~ 0.3° per doubling of CO2, which I’ve calculated above, to ten times that in the long term as the IPCC claims? I can’t see physically how that might happen.
In any case, I’m looking now at the CERES albedo dataset, which is gridded. With that I should be able to distinguish between the sensitivity and time constant of the ocean and the land separately.
However, given that I already have NH and SH figures for lambda and tau, and I know that the SH is 82% ocean and the NH is 62% ocean, I can at least estimate the values for land and ocean separately. Setting up the equations, with “x” being the value for the ocean and “y” being the value for the land, I get:
In[7]:= eqn1 = .62 x + .38 y == 1.9 (* NH: 62% ocean, NH tau = 1.9 months; x = ocean, y = land *)
eqn2 = .82 x + .18 y == 2.4 (* SH: 82% ocean, SH tau = 2.4 months *)
Solve[{eqn1, eqn2}, {x, y}]
Out[9]= {{x -> 2.85, y -> 0.35}}
As expected, this gives a longer time constant for the ocean (2.8 months) and a shorter time constant for the land (a third of a month).
Regarding the climate sensitivity, I find the following
In[10]:= eqn3 = .62 x + .38 y == .1 (* NH lambda = 0.1 °C per W/m2 *)
eqn4 = .82 x + .18 y == .05 (* SH lambda = 0.05 °C per W/m2 *)
Solve[{eqn3, eqn4}, {x, y}]
Out[12]= {{x -> 0.005, y -> 0.255}}
Again as expected, we find the sensitivity for the ocean to be small (0.005°C per W/m2) and that of the land to be significantly larger (0.25° per W/m2).
However, these are just estimates and certainly may be in error. I should be able to give you better results after I crunch the CERES data.
w.
Paul_K says:
June 9, 2012 at 12:26 pm
Likely my fault, but I didn’t understand this one, Paul. Most of the heating of the ocean is from direct penetration of the solar flux into the “photic zone”, the top one or two hundred metres or so of the ocean. The heat is retained in the ocean by the absorption of the longwave radiation in the surface layer, which slows the cooling rate of the ocean.
But heat flux from the atmosphere? That doesn’t do a whole lot, for the reason that you point out, which is that the relative heat capacities of the ocean and atmosphere are so different. There’s just not enough heat in the atmospheric boundary layer to do a whole lot of oceanic heating.
The combination of all that is why I am perplexed by your claim that the deep ocean response “requires a connection by heat flux, not radiative flux”.
w.
Willis Eschenbach: “I was stating a fact, Joe, which is that despite trying a whole range of possible configurations, I haven’t been able to fit a pair of time constants, one longer and one shorter, to the actual data. I can’t make it converge to that kind of arrangement no matter what I’ve tried.”
If indeed you applied your data to a double-pole model–a premise for which I had missed any evidence in this thread–then perhaps a no-long-time-constant conclusion is indeed warranted. But your post seemed to indicate–because you gave the actual equation–that you applied the data only to a single-pole model. If that’s the case, then what I’ve seen, as I explain below, is that you are not justified in concluding the negative implied by “If there is a long slow response, why would it not show up in the fourteen years of data that I have, particularly since it is among the fastest-warming periods in the 20th century?”
Willis Eschenbach: “Nor, as near as I can tell, have you been able to fit something like that to the actual data, or at least you haven’t reported the results.”
That’s true; I haven’t fit the data to a two-pole model. To be candid, I haven’t even tried. This despite the fact that (I think) I have written such a model. Indeed, the reason I haven’t tried is that in truth I doubt I’d find a two-pole fit; i.e., I’m not arguing that I really think there’s a significant long-time-constant response to insolation.
My argument is directed only to whether your single-pole model would find the longer time constant if it were there. On that score, I believe I do have “actual results.” Specifically, I applied a sine wave, and the steady-state response of a two-pole system to a sine wave, to your (single-pole) model, and what I found is that it concluded, with very small error, that the sensitivity was less than what the two-pole system’s actually is and that the time constant was nearer to the short time constant than to the long time constant. (I don’t recall how great the errors were when I additionally added a trend, i.e., added a ramp to the sine wave; perhaps there’s some ammunition for you there.)
If indeed you have applied your data to a two-pole model, then my reservations may not be so serious. But I’d be grateful if you could confirm that, preferably by giving the model equations explicitly, as you did for your single-pole model.
On the other hand, here are findings that tend to support Mr. Eschenbach’s conclusion.
Having recognized during our colloquy that I really should have applied his radiation data to a two-time-constant model myself rather than just invite him to do so, I went through that exercise, varying the time constants and sensitivities as he did. (This is in distinction to what I had previously done, which was determine whether his one-time-constant model would detect a significant long-time-constant component if there were one. In that case, I concluded that such a component could escape undetected, at least in some circumstances.) Although I found as a result of this new exercise that both hemispheres’ data provide a better fit to models that (at least nominally) exhibit two time constants, the improvement was minuscule.
In the case of the Northern Hemisphere, in fact, even this faint contention is an overstatement, since the sensitivity of the two-time-constant model’s longer-time-constant (17.2-month) component was less than 0.1% of its (1.2-month) shorter-time-constant component’s: what appeared as a second time constant is readily written off as noise.
In the case of the Southern Hemisphere, the data do match a model having 1.3- and 12.9-month time constants marginally better than they do the best (1.5-month) single-time-constant model I found, but that two-time-constant model’s longer-time-constant component exhibited a sensitivity barely more than 10% the shorter-time-constant component’s. (I should note here that my time-constant values differ from Mr. Eschenbach’s because I use different model equations. Although I prefer mine, the distinction is not important.) As far as the ultimate question before the house is concerned, this time constant is not significant, either.
So, although I remain convinced that Mr. Eschenbach’s one-pole-model approach could miss a significant second time constant if one existed, my two-time-constant-model approach turned up no evidence of one, either.
A caveat: as Mr. Eschenbach did, I searched for best-match models by using Excel’s “Solver” facility. In the case of the two-pole model, though, that facility was faced with six parameters (a sensitivity and time constant for each of the poles, and two initial-condition values) to vary. This makes the question rather a poser, so “Solver” required a certain amount of hand-holding. A consequence is that I’m not entirely confident that I didn’t miss a better local optimum somewhere. Additionally, since I didn’t really expect to find a significant sensitivity with a long time constant, there’s always the danger of confirmation bias.
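For what it’s worth, the same exercise can be set up outside Excel. A minimal sketch (mine, on synthetic data rather than the actual forcing series), in the Mathematica used elsewhere in the thread, fits a two-exponential step response whose second component is deliberately weak, which illustrates how poorly determined the long component is:

SeedRandom[1];
(* synthetic two-pole step response: strong fast component (tau = 1.3),
   weak slow component (tau = 12.9), plus a little noise *)
data = Table[{t, 1.0 (1 - Exp[-t/1.3]) + 0.1 (1 - Exp[-t/12.9]) +
     RandomVariate[NormalDistribution[0, 0.01]]}, {t, 0., 60., 0.5}];
fit = NonlinearModelFit[data,
   {b1 (1 - Exp[-t/s1]) + b2 (1 - Exp[-t/s2]), s2 > s1 > 0},
   {{b1, 1}, {b2, 0.2}, {s1, 1}, {s2, 10}}, t];
fit["BestFitParameters"]
(* try raising the noise sigma: the weak slow component {b2, s2}
   is the first thing to become ill-determined *)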
Thanks for that, Joe. You’ve run up against the problem I found. When I looked at the “two-box model” solution, utilizing a second longer time constant, I got a minuscule sensitivity. Similar to your finding, it is on the order of a tenth of the first sensitivity that I found above in the head post.
I took another run at it this morning, using a slightly different technique I thought of last night while falling asleep (get the residuals and try to model them with a longer time constant). I had no success with that one either, same result, longer time constant but tiny sensitivity.
Let me say again that absence of evidence is definitely not evidence of absence, so the fact I’ve not been able to find such a relationship doesn’t mean that it doesn’t exist. I continue to look, and as I mentioned above, I’m now looking at the CERES data. As usual I’ll report my findings as soon as I have some …
w.
Willis:
I’m not sure where you’re going with your series of articles on the climate sensitivity. As you develop an argument, I urge you to avoid building the fallacy that is known as “base-rate neglect” into this argument. It is a logical error that people tend to make when they assign numerical values to the conditional probabilities of the outcomes of events. This error is committed, for example, when it is assumed that the best estimate of this year’s batting average for a specified baseball player is the previous batting average for the same player. Actually, the best estimate lies between his previous average and the league average; the league average is the base-rate. To assume that the best estimate is his previous average is to a) neglect the base-rate and b) fabricate information. Skeptics and warmers alike are extremely prone to fabricating information in this way.
When information is fabricated, this “crime” is discovered if and when the model is statistically tested and the observed relative frequency of an outcome is found to lie closer to the base-rate than predicted by the model. A requirement for statistical testing is the existence of the underlying statistical population. IPCC climatologists make it impossible for their “crime” to be detected by refusing to identify the statistical population for their study of global warming. Thus far, you’ve not defined the population for your study either.
Re:Willis Eschenbach says:
June 9, 2012 at 11:38 pm
Likely my fault, but I didn’t understand this one, Paul. Most of the heating of the ocean is from direct penetration of the solar flux into the “photic zone”, the top one or two hundred metres or so of the ocean. The heat is retained in the ocean by the absorption of the longwave radiation in the surface layer, which slows the cooling rate of the ocean.
Wills,
No argument from me. The heat flux I was referring to was heat flux from deep ocean to the mixed layer.
Terry Oldberg says:
June 10, 2012 at 2:10 pm
Thanks, Terry, but I haven’t a clue what you are referring to when you say “fabricating information”. I’m not fabricating anything, as far as I know.
As to “where [I’m] going with [my] series of articles on the climate sensitivity”, I’m not going anywhere. I am attempting to estimate the climate sensitivity based on observations of the planet.
Now, I understand that you don’t think that “climate sensitivity” is measurable, for the reasons you have given before.
I fear that despite your prior explanation, I still don’t understand what that means. Suppose you have a thermometer in your yard, enclosed in a Stevenson Screen so it is out of the sun. Surely you will find that when the sun is stronger, your thermometer will indicate a warmer temperature … so why, in your opinion, is the average magnitude of that change not quantifiable, either for one thermometer or the average of a hundred thermometers? Why are the records of the temperatures and the insolation in your yard for say thirty days not a sample?
Also, you say that there are no “events, statistical population, or sample” involved. Why are e.g. the average monthly albedos or the average monthly solar insolation not a statistical population from which I have taken a sample from 1984 to 1997? What am I missing?
All the best,
w.
Willis:
Thank you for taking the time to reply. When you say “Why are the records of the temperatures and the insolation in your yard for say thirty days not a sample?” it sounds as though you are conflating the idea that is referenced by the term “sample” with the idea that is referenced by the term “time series.” A record of the temperatures in one’s yard is an example of a time series. A record of the insolation in one’s yard is a different example of a time series.
The term “event” is synonymous with the term “happening.” In the game of baseball, an example of an event is an at bat. A statistical population is comprised of statistically independent events. A subset of these events in which the events have been observed is a “sample.”
Like every event, an at bat can be described by a pair of states of nature. One of these states is a condition on the associated model’s dependent variables and is called an “outcome.” The other state is a condition on this model’s independent variables and is called a “condition.” For an event that is an at bat, an example of an outcome is a hit and an example of a condition is the batter’s identity.
Prior to the publication, circa 1960, of papers by Stein and James of Stanford University, statisticians thought that the best estimator of a baseball player’s batting average for the following season was this player’s batting average in preceding seasons. In using this estimator, they inadvertently fabricated information. Stein and James showed that a better estimator of a player’s batting average for the following season lies between his own past batting average and the batting average for the league. The latter estimator claims possession of less information about the player’s batting average.
The batting average for the league is an example of a “base-rate.” One can avoid fabricating information by factoring the base-rate into one’s estimates of the relative frequencies of future outcomes. Today, cognitive psychologists rate one’s thinking as illogical if it neglects or underweights the base-rate. The error of neglecting the base rate plays a role in the thinking of modern-day climatologists. For them, there is no statistical population and hence no base-rate.
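For readers who want the formula behind this: the usual statement of the James–Stein (Efron–Morris) shrinkage estimator, for $k$ observed averages $y_i$ with sampling variance $\sigma^2$ and grand mean $\bar{y}$, is (standard textbook form, not Oldberg’s own words):

$\hat{\theta}_i = \bar{y} + \left(1 - \dfrac{(k-3)\,\sigma^2}{\sum_{j=1}^{k}(y_j - \bar{y})^2}\right)(y_i - \bar{y})$

The shrinkage factor pulls each player’s raw average part of the way toward the league base-rate $\bar{y}$, which is exactly the base-rate point being made above.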
Willis,
You wrote:
“Finally, I see no reason why the instantaneous sensitivity would be so small if there is a much larger sensitivity hiding out somewhere. What would make the sensitivity go from the short-term sensitivity of ~ 0.3° per doubling of CO2, which I’ve calculated above, to ten times that in the long term as the IPCC claims? I can’t see physically how that might happen.”
I don’t think you have taken on board what I was trying to explain in one of my previous comments (Paul_K says:
June 5, 2012 at 2:15 am).
You cannot make a direct comparison between your 0.3° per doubling and the IPCC’s 3° per doubling, because you are comparing apples and bananas. You are not considering the conventional input forcings and you are not considering the conventional feedback terms which go into the IPCC number. As I tried to explain earlier, you would need to back them out if you want to try to make a valid comparison.
To illustrate the point, suppose for a minute that 90% of the variation in (your) net received SW is due to fluctuations in sea ice albedo plus clouds, and that they are fluctuating because they really are temperature-dependent feedbacks. Then only some 10% of what you are calling input forcing would conventionally be considered a forcing, so you would be overestimating your feedback coefficient (1/lambda) by a factor of 10 for the same temperature sensitivity. In fact the true situation is a bit more complicated than this, but you need to get the basic idea that your sensitivity cannot be directly compared with IPCC’s climate sensitivity.
I know I’m repeating myself, but your finding is much more important in the context of attribution than it is for climate sensitivity.
Paul_K says:
June 10, 2012 at 6:14 pm
Paul, the IPCC does not “back out” the feedbacks. They explicitly include them in their calculation of the overall sensitivity, which is how they get from a blackbody change in temperature (~ 0.7°C per doubling of CO2) to the 3°C per doubling that they claim. The only difference is that they claim the net feedbacks are overwhelmingly positive, while the observations say they are net negative.
Since they don’t remove the feedback terms, neither have I, in that regard it’s apples to apples … what am I missing? I could easily be wrong, I just don’t understand where.
Now, I do understand that I have not included the change in upwelling longwave radiation, but I have estimated it above, and I am currently working on the CERES dataset which includes ULR. Once I include that, however, I don’t see why other feedbacks (which are all included in the IPCC numbers) should be backed out.
My best to you, and thanks for all of the assistance, it is much appreciated,
w.
PS—I just finished the preliminary analysis of the CERES data. It is a 1° gridded dataset showing downwelling and upwelling solar radiation, along with upwelling longwave radiation. The CERES data shows that as a global average,
∆ULR = 0.16 ∆NSR + .006 , with a p-value less than 2E-16
where ∆ULR is the change in upwelling longwave radiation and ∆NSR is the change in net solar radiation (downwelling minus upwelling), and both ULR and NSR are taken as positive numbers.
So instead of having a change in net solar forcing of e.g. 1 W/m2, we have a net solar forcing of 0.84 W/m2. This means that my sensitivity is underestimated by 1/0.84, or about 20%. This is actually a smaller difference than my estimate from above, where I thought it was more like 40%.
Please note that this calculation is direct, without area-averaging. I don’t think that makes a difference, because what we are looking at is basically ∆ULR / ∆NSR, with the constant term approximately zero (0.006), and both variables are affected equally by the area of the cell. However, I’m happy to be convinced otherwise.
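The size of the correction follows directly from the regression coefficient (a one-line check of the arithmetic above):

(* a 1 W/m^2 rise in NSR is offset by ~0.16 W/m^2 of extra ULR *)
{1 - 0.16, 1/(1 - 0.16)}
(* {0.84, ~1.19}: the effective forcing is 0.84 W/m^2, so lambda
   estimated against NSR alone is low by about 20% *)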
Hi again, Willis,
You wrote:-
“Paul, the IPCC does not “back out” the feedbacks. They explicitly include them in their calculation of the overall sensitivity, which is how they get from a blackbody change in temperature (~ 0.7°C per doubling of CO2) to the 3°C per doubling that they claim. The only difference is that they claim the net feedbacks are overwhelmingly positive, while the observations say they are net negative.
Since they don’t remove the feedback terms, neither have I, in that regard it’s apples to apples … what am I missing? I could easily be wrong, I just don’t understand where.”
You can write the TOA balance as:-
Net flux imbalance = F – feedback * ∆T (Equation 1)
Your climate sensitivity term expressed in deg C/watts/m2 is equal to 1/(feedback). This scales by the magnitude of F, since the equilibrium temperature as net flux goes to zero is given by
∆T = F/(feedback).
In equation (1), the net flux imbalance is the total SW and LW imbalance. The F value and the value of (feedback) are the sum of the SW and LW constituent components.
Now let’s split up the forcings and feedback terms into their constituents. We can write:-
Net flux imbalance = (F1 + F2 + F3 + …) – (feedback1 + feedback2 + feedback3 + …) * ∆T
Now consider what happens if you take one of the feedback terms, say cloud SW response, and redefine it as a forcing. Your “new” forcing may look like this:
F = (F1 + F2 + F3 + … – feedback1 * ∆T)
and your new feedback coefficient looks like this:
(feedback) = (feedback2 + feedback3 + …)
You still balance the equation, but when you calculate your feedback term, it is not the same as the feedback term you would calculate from Equation (1), because you have re-labelled part of the IPCC feedback as a forcing. Do you see it now?
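A toy numeric version of Paul_K’s point (my numbers, chosen to echo his 90/10 illustration above): the equilibrium ∆T is unchanged, but the apparent sensitivity depends entirely on which terms get labelled “forcing”:

f = 1.; fb1 = 0.9; fb2 = 0.1;  (* total feedback coefficient = 1.0 *)
dT = f/(fb1 + fb2);            (* equilibrium warming = 1 *)
fNew = f - fb1 dT;             (* relabel fb1's contribution as forcing: fNew = 0.1 *)
{dT/f, dT/fNew}
(* {1., 10.}: the apparent sensitivity is ten times larger after the relabelling *)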
Robert Brown,
you mentioned the massive variation in albedo around 1990 and commented on it being an all-time high (and later corrected it to all-time low). I think your use of ‘all time’, while correct, is a bit misleading. I’ve only found about 30 yrs worth of albedo data total. “All time” implies over history, not just over the history of 30 years of data. Have you found an actual source of albedo data that has better coverage than I described?
Have you noticed that many of the older albedo estimates for Earth also tend to give higher values than we have been measuring over recent years, more in the 0.35 to 0.40 realm as compared to the nominal 0.30 range?
Are you familiar with Pallé and Goode and their Earthshine project?
We don’t really even have good data over the satellite era and yet there’s all these people trying to figure out (or claiming to have figured out) climate sensitivities and precision power balance without taking albedo into account. BTW, essentially all of them assume constant albedo.
Terry Oldberg says:
June 11, 2012 at 9:00 am
Thanks, Terry, but I still don’t see the difference. Why is the measurement of temperature every day a “time series” but a measurement of someone’s success at batting every day is not a time series? Why is the daily measurement of temperature not an “event” while the daily measurement of batting success is an event?
I don't get it. I don't see the difference you are pointing at. Suppose I have a machine that rolls the dice once a minute. Once every day at 3:00 I record one roll of the dice, and at the same time I record one temperature … is one of these an "event" and the other not an "event", and if so, why?
w.
Willis:
It’s not the measurement of temperature which is a time series but rather is the record of the temperatures that were measured which is a time series.
Also, while the measurement of a quantity is a type of event, this type of event is not the best for didactic purposes. For these purposes, an at bat works better. This is, in fact, the event that was featured in a Scientific American article on the topic I am addressing. If you'd like me to teach you, let's stick with the at bat for the time being.
Terry
Terry Oldberg says:
June 12, 2012 at 7:42 am
Terry, I would like to learn from anyone, but I'm getting very frustrated, because your vague hand-waving doesn't teach anything. I asked a simple question. You gave me … well … nothing. I still have no clue why a series of measurements of temperature is different from a series of measurements of batting skill. I still have no idea what distinguishes an "event" in your lexicon from something which is not an "event". I asked a clear and specific question, viz. the dice-and-temperature question above.
You come back to tell me what is the best for didactic purposes …
w.
Willis:
Thank you for taking the time to respond. The presentation which you characterize as "vague hand-waving" is a presentation of terms and concepts of mathematical statistics which you evidently do not know. So long as you do not know them, I cannot inform you of important shortcomings in the methodology of the IPCC inquiry into anthropogenic global warming. These same shortcomings are a feature of your own works.
I can teach you about these terms and concepts, but only if you move your point of view from the lofty position of debater to the humble position of student. While in the position of debater, I find, you are prone to veering off the topic that I have introduced for didactic purposes, in ways that preserve your own ignorance and that serve to preserve the misimpression that your own works are flawless.
Terry Oldberg says:
June 12, 2012 at 10:35 am
I am under no illusions that my works are flawless; I've been on the earth far too long to make such a childish claim. I have publicly acknowledged a number of errors in my work in the past, so I fear that you are off in fantasy about my "misimpressions".
All I asked for was simple answers to simple questions, viz. the four questions that you quote below as Q1 through Q4.
So far, you have not answered a single one of my questions. Since you are obviously unwilling to answer questions, and instead you want to lecture me about my shortcomings, I fear you’ll never get any traction for whatever ideas you are pushing.
I am interested in learning from you, but if you want to teach someone, Terry, you likely need to answer questions. The Sufis say “Some say that a teacher needs to be this way, and some say a teacher should be that way. But all a teacher needs is to be what the student needs …” In other words, trying to force your favorite teaching style on someone doesn’t work. You need to adapt your style to be what the student needs … and it appears that you are far too arrogant and proud to do that. You think you have the right teaching style and you are unwilling to change that … I have no problem with that, that’s your prerogative, but it doesn’t work for me.
So I fear that your teaching style is not at all what I need. What I need is someone willing to answer questions, not someone who wants to make mysterious undefined distinctions between an “event” and a non-event, and then refuse to explain the difference while at the same time insulting me …
In other words, sorry, not interested in the slightest. Take your teaching style to someone who cares for it and for whom it works.
w.
Willis:
Thanks for the response. I’ll hold my opinion on the decorum that is required of a student for another occasion. Your questions and my answers follow:
Q1: Suppose you have a thermometer in your yard, enclosed in a Stevenson Screen so it is out of the sun. Surely you will find that when the sun is stronger, your thermometer will indicate a warmer temperature … so why, in your opinion, is the average magnitude of that change not quantifiable, either for one thermometer or the average of a hundred thermometers?
A1: The premise that “when the sun is stronger, your thermometer will indicate a warmer temperature” is incorrect.
Q2: Why are the records of the temperatures and the insolation in your yard for, say, thirty days not a sample?
A2: They are not a sample but rather are a time series.
Q3: Why are e.g. the average monthly albedos or the average monthly solar insolation not a statistical population from which I have taken a sample from 1984 to 1997? What am I missing?
A3: The average monthly albedos or average monthly insolations are not statistical populations because neither an average monthly albedo nor an average monthly insolation is a statistical event. Possibly, the two quantities are variables belonging to a model.
Q4: Suppose I have a machine that rolls the dice once a minute. Once every day at 3:00 I record one roll of the dice, and at the same time I record one temperature … is one of these an “event” and the other not an “event”, and if so, why?
A4: Whether or not the outcome of an event is recorded is immaterial to the definition of an "event." Also, in the context of a study of anthropogenic global warming, the definition of an event must reference a state of nature that is additional to its outcome. For the purpose of setting policy on CO2 emissions, one needs a predictive model. A prediction from such a model is an extrapolation from an observed condition on the model's independent variables to an unobserved but observable condition on the model's dependent variables. The former is called the "condition"; the latter is called the "outcome." The conditions and outcomes are states of the climate. Each event in the associated population is describable by: a) its condition, b) the time at which this condition is observable, c) its outcome, and d) the time at which the outcome is observable. The two times define a time period; for the various events to be statistically independent, their periods must not overlap. An event in which both the condition and the outcome have been observed is said to be an "observed event." A "sample" is a collection of observed events.
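For what it is worth, the vocabulary in A4 can be made concrete as a data structure. Below is a minimal sketch in Python; every name in it is hypothetical, chosen only to mirror items a) through d) above.

# Sketch of the "event" vocabulary from A4 (all names hypothetical)
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    condition: str            # a) observed state of the independent variables
    condition_time: float     # b) time at which the condition is observable
    outcome: Optional[str]    # c) state of the dependent variables; None if unobserved
    outcome_time: float       # d) time at which the outcome is observable

def periods_disjoint(events):
    # Independence requirement from A4: the events' time periods must not overlap
    spans = sorted((e.condition_time, e.outcome_time) for e in events)
    return all(end <= start for (_, end), (start, _) in zip(spans, spans[1:]))

events = [
    Event("warm, cloudy", 0.0, "warmer", 1.0),
    Event("cool, clear",  1.0, "cooler", 2.0),
    Event("warm, clear",  2.0, None,     3.0),   # outcome not yet observed
]

sample = [e for e in events if e.outcome is not None]   # a "sample" = the observed events
print(periods_disjoint(events), len(sample))            # prints: True 2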
Terry Oldberg says:
June 12, 2012 at 12:43 pm
Dude, if you think my decorum is inadequate, you have already lost the plot. You seem to be looking for someone willing to kiss your ring, and that’s not me. Your overweening arrogance has now successfully cost you the whole game. The tragedy is, I actually thought you might be on to something and be able to teach it. You’ll have to find someone else to impress, because frankly, Terry, I don’t give a damn. Here’s a quarter, call someone who cares.
w.
PS: As an explanation of why I don’t give a damn, this interaction sums up the problem nicely:
You have already made that claim, more than once … but what you haven’t done is ANSWER THE FREAKING QUESTION. But despite not answering the question time after time, you want to lecture me on decorum … sorry, my friend, but it doesn’t work that way.
Willis:
I answered that question. See A2.
Terry Oldberg,
“i answered that question. See A2.”
Great response. Bwahhaha.
You remind me a bit of the character played by Jack Nicholson in the movie “Anger Management”. If your intention is to drive Willis up the wall, then you are doing a highly successful job. If your intention is to convince Willis, and indeed any other readers, of the need for rigorous canonical form from a frequentist philosophy, alas, I fear you are doing a job of the same quality as Peter Gleick on the subject of how to better communicate the need for ethics in science.
Strangely enough, I agree with quite a lot of what you are saying; lack of attention to basic precepts and their handmaiden, canonical form, has led to a loss, or in some cases a complete absence, of statistical rigour in climate science. (And by the way, I don't think we support the same football club; I'm more of a Bayesian pragmatist.) But if you will accept a little advice from this not-so-humble student, you need to talk into the listening. It would be better to explain what you are trying to do about getting mainstream scientists to accept the need for statistical rigour, and why clarity on basic definitions is important. After that, you might gently suggest that we should all consider the implications for our own self-challenge.
It really is a bit much when you go after the mote in Willis's eye without clearly explaining the beams in the IPCC literature, at least not in a context that Willis can use.
Also, stop answering questions about meaning with yet another definition, and explain why the distinction MATTERS.
Paul_K:
My view is that the division of statisticians into schools is artificial and damaging. Using long-existing technology, it is possible to unite the warring factions under the banner of logic. A barrier to accomplishing this is widespread ignorance on the part of academic philosophers and statisticians. The list of people needing instruction does not end with statistical neophytes such as Willis.