Revisiting the Pinatubo Eruption as a Test of Climate Sensitivity
By Roy W. Spencer, PhD.

The eruption of Mt. Pinatubo in the Philippines on June 15, 1991 provided a natural test of the climate system's response to radiative forcing, producing substantial cooling of global average temperatures over a period of 1 to 2 years. Many papers have studied the event in an attempt to determine the sensitivity of the climate system, so that we might reduce the (currently large) uncertainty in the future magnitude of anthropogenic global warming.
In perusing some of these papers, I find that the issue has been made unnecessarily complicated and obscure. Part of the problem, I think, is that too many investigators have approached the problem from a paradigm most of us have been misled by: the belief that sensitivity can be estimated from the difference between two equilibrium climate states, say, the climate before the Pinatubo eruption versus the climate as it responds to the Pinatubo aerosols. The trouble is that this is not possible unless the forcing remains constant, which is clearly not the case here, since most of the Pinatubo aerosols are gone after about 2 years.
Here I will briefly address the pertinent issues, and show what I believe to be the simplest explanation of what can — and cannot — be gleaned from the post-eruption response of the climate system. And, in the process, we will find that the climate system’s response to Pinatubo might not support the relatively high climate sensitivity that many investigators claim.
Radiative Forcing Versus Feedback
I will once again return to the simple model of the climate system's average change in temperature from an equilibrium state. Some call it the "heat balance equation", and it is concise, elegant, and powerful. To my knowledge, no one has shown why such a simple model cannot capture the essence of the climate system's response to an event like the Pinatubo eruption. Increased complexity does not necessarily ensure increased accuracy.
The simple model can be expressed in words as:
[system heat capacity] x [temperature change with time] = [Radiative Forcing] – [Radiative Feedback],
or with mathematical symbols as:
Cp*[dT/dt] = F – lambda*T .
Basically, this equation says that the temperature change with time [dT/dt] of a climate system with a certain heat capacity [Cp, dominated by the ocean depth over which heat is mixed] is equal to the radiative forcing [F] imposed upon the system minus any radiative feedback [lambda*T] upon the resulting temperature change. (The left side is also equivalent to the change in the heat content of the system with time.)
The feedback parameter (lambda, always a positive number when the equation is written with the minus sign, as above) is what we are interested in determining, because its reciprocal is the climate sensitivity. The net radiative feedback is what "tries" to restore the system temperature back to an equilibrium state.
Lambda represents the combined effect of all feedbacks PLUS the dominating, direct infrared (Planck) response to increasing temperature. This Planck response is estimated to be 3.3 Watts per sq. meter per degree C for the average effective radiating temperature of the Earth, 255K. Clouds, water vapor, and other feedbacks either reduce the total “restoring force” to below 3.3 (positive feedbacks dominate), or increase it above 3.3 (negative feedbacks dominate).
Note that even though the Planck effect behaves like a strong negative feedback, and is even included in the net feedback parameter, for some reason it is not included in the list of climate feedbacks. This is probably just to further confuse us.
If positive feedbacks were strong enough to cause the net feedback parameter to go negative, the climate system would potentially be unstable to temperature changes forced upon it. For reference, all 21 IPCC climate models exhibit modest positive feedbacks, with lambda ranging from 0.8 to 1.8 Watts per sq. meter per degree C, so none of them are inherently unstable.
This simple model captures the two most important processes in global-average temperature variability: (1) through energy conservation, it translates a global, top-of-atmosphere radiative energy imbalance into a temperature change of a uniformly mixed layer of water; and (2) it applies a radiative feedback, a restoring flux in response to that temperature change, whose strength depends upon the sum of all feedbacks in the climate system.
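For readers who want to experiment, the heat balance equation can be stepped forward in time with a simple Euler scheme. This is only a sketch: the mixed-layer depth and lambda values below are illustrative placeholders, not fitted results.

```python
# Euler integration of the heat balance equation Cp*dT/dt = F - lambda*T.
# A sketch only: the mixed-layer depth and feedback parameter below are
# illustrative placeholders, not fitted values.

RHO_SEAWATER = 1025.0   # kg/m^3
C_SEAWATER = 4186.0     # J/(kg K)

def run_model(forcing, dt_days=72, depth_m=40.0, lam=3.66):
    """Return the temperature anomaly (deg C) after each time step.

    forcing : radiative forcing F (W/m^2), one value per step
    dt_days : time step length in days
    depth_m : ocean mixed-layer depth, which sets Cp
    lam     : net feedback parameter (W/m^2 per deg C)
    """
    cp = RHO_SEAWATER * C_SEAWATER * depth_m   # J/(m^2 K)
    dt = dt_days * 86400.0                     # seconds per step
    t_anom, out = 0.0, []
    for f in forcing:
        t_anom += dt * (f - lam * t_anom) / cp
        out.append(t_anom)
    return out

# A sustained -3 W/m^2 forcing relaxes toward F/lambda = -0.82 deg C:
temps = run_model([-3.0] * 50)
```

Note that with a positive lambda the model cannot run away: the feedback term always pulls the temperature back toward F/lambda.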
Modeling the Post-Pinatubo Temperature Response
So how do we use the above equation together with measurements of the climate system to estimate the feedback parameter, lambda? Well, let's start with two important global measurements we have from satellites during that period:
1) ERBE (Earth Radiation Budget Experiment) measurements of the variations in the Earth’s radiative energy balance, and
2) the change in global average temperature with time [dT/dt] of the lower troposphere from the satellite MSU (Microwave Sounding Unit) instruments.
Importantly — and contrary to common belief — the ERBE measurements of radiative imbalance do NOT represent radiative forcing. They instead represent the entire right hand side of the above equation: a sum of radiative forcing AND radiative feedback, in unknown proportions.
In fact, this net radiative imbalance (forcing + feedback) is all we need to know to estimate one of the unknowns: the system net heat capacity, Cp. The following two plots show for the pre- and post-Pinatubo period (a) the ERBE radiative balance variations; and (b) the MSU tropospheric temperature variations, along with 3 model simulations using the above equation. [The ERBE radiative flux measurements are necessarily 72-day averages to match the satellite’s orbit precession rate, so I have also computed 72-day temperature averages from the MSU, and run the model with a 72-day time step].
As can be seen in panel b, the MSU-observed temperature variations are consistent with a heat capacity equivalent to an ocean mixed layer depth of about 40 meters.
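As a consistency check, the heat capacity implied by a 40 m mixed layer, together with the feedback parameter estimated later in the post, fixes the model's e-folding response time. A sketch (constants are round seawater values; this is a back-of-envelope check, not a fit):

```python
# Back out the heat capacity Cp implied by a 40 m mixed layer, and the
# e-folding response time Cp/lambda it gives when combined with the
# feedback parameter estimated later in the post. Constants are round
# seawater values; this is a consistency check, not a fit.

SECONDS_PER_YEAR = 3.156e7

def heat_capacity(depth_m, rho=1025.0, c_w=4186.0):
    """Heat capacity Cp of an ocean mixed layer, J per m^2 per deg C."""
    return rho * c_w * depth_m

def efolding_years(depth_m, lam):
    """Relaxation time Cp/lambda of the simple model, in years."""
    return heat_capacity(depth_m) / lam / SECONDS_PER_YEAR

tau = efolding_years(40.0, 3.66)
# About 1.5 years: consistent with the 1 to 2 year Pinatubo cooling
# noted at the top of the post.
```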
So, What is the Climate Model’s Sensitivity, Roy?
I think this is where confusion usually enters the picture. In running the above model, note that it was not necessary to assume a value for lambda, the net feedback parameter. In other words, the above model simulation did not depend upon climate sensitivity at all!
Again, I will emphasize: modeling the observed temperature response of the climate system based only upon ERBE-measured radiative imbalances does not require any assumption regarding climate sensitivity. All we needed to know was how much extra radiant energy the Earth was losing [or gaining], which is what the ERBE measurements represent.
Conceptually, the global-average radiative imbalances measured by ERBE after the Pinatubo eruption are some combination of (1) radiative forcing from the Pinatubo aerosols, and (2) net radiative feedback opposing the temperature changes that forcing produced – but we do not know how much of each. There are an infinite number of combinations of forcing and feedback that would be able to explain the satellite observations.
Nevertheless, we do know ONE difference in how forcing and feedback are expressed over time: Temperature changes lag the radiative forcing, but radiative feedback is simultaneous with temperature change.
What we need to separate the two is another source of information, for instance something related to the time history of the radiative forcing from the volcanic aerosols. Otherwise, we cannot use satellite measurements to determine the net feedback in response to radiative forcing.
Fortunately, there is a totally independent satellite estimate of the radiative forcing from Pinatubo.
SAGE Estimates of the Pinatubo Aerosols
For anyone paying attention back then, the 1991 eruption of Pinatubo produced over one year of milky skies just before sunrise and just after sunset, as the sun lit up the stratospheric aerosols, composed mainly of sulfuric acid. The following photo was taken from the Space Shuttle during this time:
There are monthly stratospheric aerosol optical depth (tau) estimates archived at GISS, which during the Pinatubo period of time come from the SAGE (Stratospheric Aerosol and Gas Experiment). The following plot shows these monthly optical depth estimates for the same period of time we have been examining.
Note in the upper panel that the aerosols dissipated to about 50% of their peak concentration by the end of 1992, 18 months after the eruption. But look at the bottom panel: the ERBE radiative imbalances at the end of 1992 are close to zero.
But how could the radiative imbalance of the Earth be close to zero at the end of 1992, when the aerosol optical depth is still at 50% of its peak?
The answer is that net radiative feedback is approximately canceling out the radiative forcing by the end of 1992. Persistent forcing of the climate system leads to a lagged – and growing – temperature response. Then, the larger the temperature response, the greater the radiative feedback which is opposing the radiative forcing as the system tries to restore equilibrium. (The climate system never actually reaches equilibrium, because it is always being perturbed by internal and external forcings…but, through feedback, it is always trying).
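This cancellation is easy to reproduce with the simple model. The sketch below drives the heat balance equation with an exponentially decaying forcing whose peak value and decay time are assumptions chosen for illustration (the decay time is set so that, as in the SAGE data, half the forcing remains after about 18 months):

```python
import math

# Drive Cp*dT/dt = F - lambda*T with an exponentially decaying forcing,
# a crude stand-in for the Pinatubo aerosols. The peak forcing (-3 W/m^2)
# and the decay time (50% remaining after ~18 months) are assumptions
# chosen for illustration, not fitted values.

CP = 1025.0 * 4186.0 * 40.0   # 40 m mixed layer, J/(m^2 K)
LAM = 3.66                    # net feedback parameter, W/m^2 per deg C
DT = 10 * 86400.0             # 10-day time step, seconds

def simulate(f0=-3.0, tau_days=790.0, n_steps=110):
    """Return (forcing, feedback, net imbalance) at each step."""
    t_anom, rows = 0.0, []
    for n in range(n_steps):
        f = f0 * math.exp(-n * DT / (tau_days * 86400.0))
        imbalance = f - LAM * t_anom        # what ERBE would observe
        t_anom += DT * imbalance / CP
        rows.append((f, LAM * t_anom, imbalance))
    return rows

rows = simulate()
# At step 55 (~18 months) the forcing is still ~50% of its peak, but the
# net imbalance has shrunk to a few percent of its initial value because
# the growing feedback nearly cancels the remaining forcing.
```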
A Simple and Direct Feedback Estimate
Previous workers (e.g. Hansen et al., 2002) have shown that the radiative forcing from the Pinatubo aerosols can be estimated directly from the aerosol optical depths measured by SAGE: the forcing in Watts per sq. meter is simply 21 times the optical depth.
Now we have sufficient information to estimate the net feedback. We simply subtract the SAGE-based estimates of the Pinatubo radiative forcing from the ERBE net radiation variations (which are a sum of forcing and feedback), which should yield radiative feedback estimates. We then compare those to the MSU lower tropospheric temperature variations to get an estimate of the feedback parameter, lambda. The data (after converting the SAGE monthly values to 72-day averages) look like this:
The slope of 3.66 Watts per sq. meter per degree corresponds to weakly negative net feedback. If this were the feedback operating in response to increasing carbon dioxide concentrations, then a doubling of atmospheric CO2 (2XCO2) would cause only about 1 deg. C of warming. This is below 1.5 deg. C, the lower limit that the IPCC is 90% sure climate sensitivity will exceed.
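The arithmetic of this last step can be sketched on synthetic data. In the illustration below the "truth" (lambda = 3.66) is built into fabricated imbalance data and then recovered by regression; none of the numbers are the actual ERBE, SAGE, or MSU values, and the sign convention follows the heat balance equation above.

```python
import random

# Sketch of the feedback-estimation step on synthetic data: fabricate an
# ERBE-style net imbalance from a known "true" feedback of 3.66 W/m^2
# per deg C plus noise, subtract the (assumed known) forcing, and
# recover lambda by least-squares regression against temperature. All
# numbers are illustrative stand-ins, not the real satellite data.

def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

random.seed(0)
LAM_TRUE = 3.66          # feedback used to fabricate the data, W/m^2/K
F_2XCO2 = 3.7            # canonical forcing from doubled CO2, W/m^2

temps = [-0.04 * i for i in range(15)]                 # 72-day T anomalies
forcing = [-21.0 * 0.1 * 0.9 ** i for i in range(15)]  # 21 x optical depth
net = [f - LAM_TRUE * t + random.uniform(-0.05, 0.05)  # forcing + feedback
       for f, t in zip(forcing, temps)]

feedback = [n - f for n, f in zip(net, forcing)]       # subtract forcing
lam = -ols_slope(temps, feedback)                      # recovered lambda
ecs = F_2XCO2 / lam       # implied 2xCO2 sensitivity, deg C
```

With the built-in lambda of 3.66, the recovered slope and the implied sensitivity of roughly 1 deg. C per doubling simply mirror the numbers in the text.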
The Time History of Forcing and Feedback from Pinatubo
It is useful to see what two different estimates of the Pinatubo forcing look like: (1) the direct estimate from SAGE, and (2) an indirect estimate from ERBE minus the MSU-estimated feedback, using our estimate of lambda = 3.66 Watts per sq. meter per deg. C. This is shown in the next plot:
Note that at the end of 1992, the Pinatubo aerosol forcing, which has decreased to about 50% of its peak value, almost exactly offsets the feedback, which has grown in proportion to the temperature anomaly. This is why the ERBE-measured radiative imbalance is close to zero…radiative feedback is canceling out the radiative forcing.
The reason why the ‘indirect’ forcing estimate looks different from the more direct SAGE-deduced forcing in the above figure is because there are other, internally-generated radiative “forcings” in the climate system measured by ERBE, probably due to natural cloud variations. In contrast, SAGE is a limb occultation instrument, which measures the aerosol loading of the cloud-free stratosphere when the instrument looks at the sun just above the Earth’s limb.
Discussion
I have shown that Earth radiation budget measurements, together with global average temperatures, cannot by themselves be used to infer climate sensitivity (net feedback) in response to radiative forcing of the climate system. The only exception would be the difference between two equilibrium climate states produced by a radiative forcing that is instantaneously imposed and then remains constant over time. Only in that case is all of the radiative variability due to feedback rather than forcing.
Unfortunately, even though this hypothetical case has formed the basis for many investigations of climate sensitivity, it never happens in the real climate system.
In the real world, some additional information is required regarding the time history of the forcing — preferably the forcing history itself. Otherwise, there are an infinite number of combinations of forcing and feedback which can explain a given set of satellite measurements of radiative flux variations and global temperature variations.
I currently believe the above methodology, or something similar, is the most direct way to estimate net feedback from satellite measurements of the climate system as it responds to a radiative forcing event like the Pinatubo eruption. The method is not new, as it is basically the same one used by Forster and Taylor (2006 J. of Climate) to estimate feedbacks in the IPCC AR4 climate models. Forster and Taylor took the global radiative imbalances the models produced over time (analogous to our ERBE measurements of the Earth), subtracted the radiative forcings that were imposed upon the models (usually increasing CO2), and then compared the resulting radiative feedback estimates to the corresponding temperature variations, just as I did in the scatter diagram above.
All I have done is apply the same methodology to the Pinatubo event. In fact, Forster and Gregory (also 2006 J. Climate) performed a similar analysis of the Pinatubo period, but for some reason got a feedback estimate closer to the IPCC climate models. I am using tropospheric temperatures rather than surface temperatures as they did, but the 30+ year satellite record shows that year-to-year variations in tropospheric temperatures are larger than the corresponding surface temperature variations. This means the feedback parameter estimated here (3.66) would be even larger if scaled to surface temperature. So, other than the fact that the ERBE data have relatively recently been recalibrated, I do not know why their results should differ so much from mine.





I am more concerned with the effect of Pinatubo on global temperature than with climate sensitivity. Much nonsense has been written about it, starting with Stephen Self et al. in the big Pinatubo book "Fire and Mud." In their article "The Atmospheric Impact of the 1991 Mount Pinatubo Eruption" they claim an observed surface cooling in the Northern Hemisphere of up to 0.5 to 0.6 degrees Celsius, and a cooling perhaps as large as -0.4 degrees over large parts of the earth in 1992-93. But when you look at where these numbers come from, Self shows global temperature curves from 1991 to 1994 (his Figure 12A) for stratosphere, troposphere and surface temperatures. The troposphere and surface temperatures both show a peak exactly where the eruption is, and temperature descends from there into a valley that bottoms out in 1992. The depth of the valley is about 0.6 degrees Celsius, and this must be the source of his numbers. He goes on to pontificate that "The Pinatubo climate forcing was stronger than the opposite, warming effects of either the El Nino event or anthropogenic greenhouse gases in the period 1991-1993." Unfortunately he is dead wrong on temperature as well as on forcing. He does not understand that temperature peaks and valleys like the one he shows are a normal part of global temperature oscillations whose cause is the ENSO system in the Pacific. The satellite record of lower tropospheric temperatures shows five such El Nino peaks before 1998. The peaks correspond to the El Nino periods, and the valleys in between are La Ninas. It so happens that Pinatubo erupted exactly when an El Nino peaked and the temperature was just beginning to descend into a La Nina valley. Obviously Pinatubo did nothing to suppress an El Nino; it just got a free ride when a convenient La Nina was appropriated to give it cooling power.
But Self also wonders about "…why surface cooling is clearly documented after some eruptions (for example, Gunung Agung, Bali, in 1963) but not others – for example El Chichon, Mexico, in 1982." Apparently what we have is pot luck: if a volcano erupts when the El Nino has peaked and temperature is going down, you can report cooling. If it erupts when a La Nina has just bottomed out and temperature is going up, there is no cooling to report. This is what happened to poor El Chichon: it erupted when a La Nina had just bottomed out, and there was no chance for a free ride since an El Nino was building up. Unfortunately the misinformation about Pinatubo cooling has spread far and wide by now, and the 1991-92 La Nina is still mislabeled "Pinatubo cooling" on many temperature charts.
“”” dr.bill says:
June 28, 2010 at 8:25 pm
re Ike: June 28, 2010 at 1:26 pm
……………………..
The other process is for the Earth to radiate to space directly. The rate of emission depends on the temperature at the surface. You may have read that this depends on the 4th power of the surface temperature, and that would be true if the Earth were truly a blackbody and radiated at all frequencies. In fact, though, neither of these things is applicable, and the actual temperature dependence is somewhere between T¹ (if you’re dealing with just low-frequency stuff) and T⁴ (if the whole spectrum is involved), or some temperature polynomial that depends on, among other things, the emissivity at every frequency for every part of the Earth, which also varies quite a lot from one time and place to another. “””
Well I’m not sure you are giving radiation the credit it is due, dr. bill.
First of all; I do NOT discount the energy transport effects due to conduction, convection and evaporation; those are all legitimate thermal energy transport processes that DO come into play in moving energy around the planet.
But Radiation is ultimately the only way for it to exit to space; absent the exodus of large amounts of material.
And the Earth may be much more “Black Body” like than you think. For one thing, about 73% of the earth surface is oceans, and the radiant energy absorption by water is almost total. The optical reflectance is only 2% for normal incidence, and maybe averages 3% over all angles, so about 97% of incident energy, certainly in the solar spectrum, does enter the water. It is either absorbed by the water, or propagates deeper, until something else absorbs it. Well, shallow waters around beaches will have some small bottom reflectance; which must then return to the surface through the same absorbing water. And the reflected energy is diffuse; so a good fraction of it will find itself trapped in the water by Total Internal Reflection. I’ve never actually calculated the total TIR trapping by a water surface; but it’s an 8th grade optics calculation.
So the deeper ocean areas are quite good as near total absorbers of solar energy.
The Black Body; Stefan Boltzmann calculation of emitted radiation sets a maximum envelope to a surface emission. No surface can emit more than a Black Body, due to Thermal Radiation alone (as a result of its Temperature).
The 4th power of T integral is not strongly dependent on the actual emission spectrum. The peak of the BB spectrum increases as the fifth power of the Temperature, and the actual spectrum of the particular surface gets applied to that as a spectral emissivity factor.
But I think you will find, that for most real terrain surfaces, any elemental area will have a total emittance that does vary as the 4th power of the Temperature; modified only by a spectral emissivity. The oceans would look quite black if it wasn’t disguised by the blue light scattering in the atmosphere.
I have looked into really deep and really clean Sea of Cortez water at sea level with overhead cover to remove local direct sunlight; and that water looks plain black to me.
Total spectral coverage is not necessary to get close to 4th power response.
Anybody familiar with BB spectra knows that 98% of the Total energy is contained between 0.5 of the peak wavelength, and 8 times the peak wavelength; so for the incoming solar spectrum, that is about 250 nm out to 4.0 microns; with 1% straggle at each end. For the 15 deg C global mean thermal radiation, that range would be from about 5 microns min to 80 microns max, and over that spectral range, water is pretty much totally absorbing in just a few microns of thickness.
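That 98% rule of thumb can be checked numerically by integrating the Planck wavelength spectrum in dimensionless form; a sketch (the only inputs are the Wien peak constant and the band limits from the comment above):

```python
import math

# Check the rule of thumb numerically: integrate the wavelength form of
# the Planck spectrum in the dimensionless variable x = h*c/(lambda*k*T),
# where the energy density is x^3/(exp(x) - 1) and the total integral is
# pi^4/15. The wavelength band [0.5, 8] x lambda_peak maps to
# x in [X_PEAK/8, 2*X_PEAK].

X_PEAK = 4.965114231  # Wien peak of the wavelength form of Planck's law

def planck_fraction(lo_x, hi_x, n=2000):
    """Fraction of total blackbody power with x between lo_x and hi_x
    (composite Simpson's rule; n must be even)."""
    def f(x):
        return x ** 3 / math.expm1(x)
    h = (hi_x - lo_x) / n
    s = f(lo_x) + f(hi_x)
    for i in range(1, n):
        s += f(lo_x + i * h) * (4 if i % 2 else 2)
    return (s * h / 3.0) / (math.pi ** 4 / 15.0)

frac = planck_fraction(X_PEAK / 8.0, 2.0 * X_PEAK)
# frac comes out near 0.98, consistent with the ~98% claim above.
```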
Even with a quite narrow emission spectrum (can’t imagine what material) the emitted power is hardly likely to ever be linear with Temperature; I would think it is more likely to be higher than 4th power than lower, because of the spectral peak emittance being 5th power.
People keep talking about how the higher colder atmospheric layers will radiate to space at lower emittances; and Dr Spencer even mentions an effective radiating Temperature of 255 K. I don’t disagree with that number based on how it is defined; but effective Temperature or not, far more energy is radiated from surfaces that are more like 330 K or higher, than 255 K.
At Vostok station and similar Antarctic highlands locations; the radiating Temperature might be as low as 185K, which is a pitiful contribution to cooling the planet.
Dr Roy’s 255 K is maybe the equilibrium earth orbit black body Temperature under one assumption; but that doesn’t mean that most of the earth is emitting such a spectrum; it’s not.
The highest energy losses are from the hottest most arid tropical desert surfaces, with peak spectral emittances around 8-9 microns right in the water window (with little water to block anyway); and such spectra are less captured by CO2 which operates at 15 microns (maybe 13.5 to 16.5).
The idea that surface energy is transported by various processes to the upper Troposphere, and then radiated to space from there at some low temperature BB spectrum rate, is quite wrong. Those processes do occur of course and are useful contributions; but direct surface radiation to space; is much more prevalent than is suggested by Trenberth’s isothermal planet cartoon energy budget.
tallbloke says:
June 28, 2010 at 5:14 am
I have now watched Dr. Scafetta’s presentation here;
http://yosemite.epa.gov/ee/epa/eed.nsf/vwpsw/360796B06E48EA0485257601005982A1#video
And let me say this: it was very interesting. Scafetta only points out certain correlations, and does not in any way say that all can be explained; he just says these correlations are interesting, and shows us some interesting ideas.
I can understand that the warmers (taking into account the objective paragraph of the IPCC) do not want ANYONE to hear about this.
I am sure Dr. Scafetta is safely on the blacklist?
In fact, we can use the blacklist to find interesting papers on the climate from now on?
Because we can hope that the people on the blacklist are not on the HockeyTeam, and therefore look at interesting science, not related to insignificant trace-gases?
Yes?
Tom Vonk,
I think the readers of this blog have gotten the message that you are incapable of doing a simple heuristic calculation. You have left no stone unturned in your headlong rush to hide your ignorance with (totally) unnecessary complexity.
You do not seem to understand that zeroth or first order calculations require broad approximations. The trick is to know which approximations to make. The whole idea of making first order calculations is not to give you a definitive answer chiseled in stone, but to direct you towards a useful estimate of the quantity you are seeking.
I do not necessarily agree with Dr. Spencer’s confidence in his answer, but I do applaud him for at least trying to get a handle on a possible estimate of the feedback parameter.
I am not going to make the effort of pointing out all the absurdities in your post, but I will try to address a few. Now where to start…?
Let start with these zingers…
a) the atmosphere can be represented by a vertical column of gas with a cross-section of 1 metre^2,
No. The column must also contain water, because most of the surface is oceans.
In Thermodynamics you can define a system. How you define the system depends on the problem you are addressing. If you are talking about the Earth’s atmosphere, it is usual to define the system as a vertical column of gas that has a lower boundary in contact with either the Earth’s oceans or land surface, and an upper boundary in contact with space. It is quite valid to regard the Earth’s oceans and space as being external to the system you have chosen. If you do so, then you can regard any energy or mass transport across the upper or lower boundaries as transfers from the environment. If, on the other hand, you regard the system as a coupling between the atmosphere and the top 100 m of the oceans, then you would have to define your upper and lower boundaries accordingly.
b) the volume and mass of the “model atmosphere” are fixed,
No. The volume can be fixed but not the mass. The density is variable depending on whether the column contains solids (above land) or liquids (above oceans).
Have you ever heard of the approximation that 2/3 of the Earth’s surface is water? When you are doing first order calculations, you make rough and ready approximations like this.
delta Q = delta U
I already commented on it. Even the assumption of “constant mass” doesn’t imply that delta W = 0, and we have seen that the mass is not constant for every column anyway. The work of gravity is certainly not 0. Neither is that of viscous forces, for that matter.
You do not seem to understand even the most basic thermodynamic principles. If you define a system, work done inside that system between particles is not part of the dW
term in the First Law:
dQ = dU + dW
dQ refers to the heat energy gained or lost by the system (i.e. the atmosphere) and dU refers to the change in internal energy of this system.
dW refers to work done on the system by the environment (dW is positive) or work done by the system on the environment (dW is negative). Mechanical work is achieved by moving a force applied to (or by) the system through a distance. It is a reasonable assumption that on time scales of a few months or years the net work exchanged between the system (i.e. the atmosphere) and the environment (i.e. the oceans and space) is negligible. Under these circumstances it is entirely valid to claim that
dQ = dU
dU/dt = [Radiative Forcing]/dt – [Radiative Feedback]/dt
However, if you refer to the last figure in Dr. Spencer’s presentation, you will see that once the perturbation has been established, the net forcing and net feedback are both decreasing approximately linearly with time over the short period of the perturbation. Thus,
dU/dt = constant * ([Radiative Forcing] – [Radiative Feedback])
= constant * (F – lambda * T)
From your rantings, I have to assume that your mathematical training is in Pure rather than Applied Mathematics.
You are right in pointing out that I made a mistake in this part of my analysis. I should have said that the assumption Dr. Spencer had (unknowingly) made is that both the forcing and radiative feedback terms need to be exponentially decreasing functions of time [and not linear] in order for his formula to make any sense.
First, if the radiative forcing (F) can be approximated by a short, sharp increase followed by an exponential decay with time, then it is reasonable to represent F as:
F(t) = Fo * exp(-kt), where Fo is the initial forcing and k is a constant related to the e-folding time of the exponential decay.
Hence, dF/dt = -k Fo * exp(-kt)
dF/dt = -k F
Of course, for my final statement to make sense, the e-folding decay rates for the radiative forcing and feedback would have to be in the same ballpark, i.e.
dU/dt = [Radiative Forcing]/dt – [Radiative Feedback]/dt
dU/dt = constant * ([Radiative Forcing] – [Radiative Feedback])
and hence, Cp*[dT/dt] = const*(F – lambda*T)
As for your comments on dt being infinitesimally small, all that is required for this to be true is that the time interval being considered for the perturbation (dt ~ 3 years) be small compared to climatic time scales (> 30 years).
TomVonk says:
June 29, 2010 at 2:42 am
In summary there are so many wrong assumptions in this formula that whatever it describes, it doesn’t belong to our Universe.
I think you are on thin ice here. Please go through the derivation of (4) in the paper below, which is based on sound physics, and you will understand why…
http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf
I guess that you have a solid background in mathematics, but that you have little experience with common engineering physics, which is always based on a number of approximations. Usually mathematicians cannot tell the difference between approximations and erroneous assumptions, so their conclusions are not always valid…
@Arno Arrak says:
June 29, 2010 at 9:12 am
Well observed. There also are a number of earlier eruptions where no cooling can be seen.
Read;
Ulric Lyons says:
June 29, 2010 at 8:37 am
An interesting discussion of the subject by a Swedish professor of mathematics:
http://claesjohnson.blogspot.com/search/label/climate%20sensitivity
re George E. Smith: June 29, 2010 at 9:17 am
I think that everyone with any interest in these matters has long since learned that the only way to permanently get energy off-planet (in the absence of giant rail-guns) is by means of radiation, but that wasn’t the point of my note to Ike. You will also note that I stated explicitly that there are many variables involved, that few of them ‘stand still’, and that quantitative calculations are difficult. Your own list of complications, conditionals, caveats, and uncertainties only serves to emphasize that conclusion. If this were easy to do, we’d all have long since done it, and this blog likely wouldn’t exist, at least with its current focus.
Regarding the temperature dependence that you mention, it is an uncontested fact that the simple T⁴ behaviour is only true when the complete spectrum is involved. In the case of absorption and re-radiation by H2O, CO2, and the other gases in the atmosphere, the relevant ranges are quite specific and limited, but can be affected by things like pressure broadening, which varies with altitude, time of day, time of year, and many other quantities, and thus the re-radiation spectrum definitely isn’t complete.
The T³ dependence you mentioned for the peak height of the irradiance formula (Planck’s Law) is not relevant. It is a spurious consequence of using wavelength as a variable to obtain Wien’s Displacement Law, which gives the location of the peak in the Planck’s Law expression. The other common forms of Planck’s Law use frequency or photon energy as variables, and give a different location for the peak ‘irradiance colour’. They also give a peak height that is proportional to T³. This is a well-known ‘problem’, but it doesn’t really matter.
What does matter is not the irradiance but the integrated irradiance, which gives the same result for the emitted power regardless of what variable is used, and that result is strictly limited to a maximum temperature dependence of T⁴. If there are gaps in the spectrum, the temperature dependence is either reduced to a lower order, or at the very least, lower orders are present in the result.
/dr.bill
Correction: “The T⁵ dependence you mentioned…..”
/dr.bill
Tom Vonk provides an excellent deconstruction of the derivation of Spencer’s elementary linear DE feedback model. I simply add the analytic observation that, under the best of circumstances, such a model is incapable of revealing anything meaningful about climate sensitivity, let alone about heat from ocean depths.
Let’s assume, for the sake of discussion, that GMST is an (unknown) analytic function T = u(f) of the insolation flux f. Then, by the Chain Rule, its time derivative is given exactly by
dT/dt = (du/df) x (df/dt)
where the first r.h. term represents the sensitivity and the second term is the time derivative of the observed insolation. Adding the feedback term (-lambda x u) destroys that exact multiplicative relationship. Furthermore, it tries to explain an effect due entirely to a transient decrease in insolation (excitation) by ascribing it to a self-induced dissipation of temperature (response). The feedback formulation of the model is fundamentally wrong.
Many climate scientists seem to labor under the misapprehension that feedback is somehow necessary in order for the system response to input to be distributed over time. Nothing could be further from the truth. In systems with elements of storage (capacity or memory), the output can be time-distributed without any feedback whatsoever. For any linear system, the output y(t) in response to any bounded input x(t) can always be represented exactly by a convolution time-integral (from minus infinity to the present time) of the impulse response function h(tau) and the input time-history x(t). The characteristic “time-constant” of the decay of impulse-response function, rather than feedback, is what controls the duration of effects from past values of input.
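sky’s point, that storage alone distributes the response over time without any feedback loop, can be illustrated with a minimal discrete sketch (the time constant, step input, and grid below are arbitrary choices for illustration):

```python
import math

tau = 5.0        # time constant (arbitrary units, assumed for illustration)
dt = 0.01        # time step
n = 3000         # simulate out to t = 30

# Sampled impulse response h(t) = (1/tau) * exp(-t/tau)
h = [math.exp(-i * dt / tau) / tau for i in range(n)]
x = [1.0] * n    # unit step input switched on at t = 0

def convolve_at(t_idx):
    """Convolution sum y(t) = sum_k h(k) * x(t-k) * dt -- no feedback term."""
    return sum(h[k] * x[t_idx - k] for k in range(t_idx + 1)) * dt

t = 1000         # t = 10, i.e. two time constants
y = convolve_at(t)
exact = 1.0 - math.exp(-t * dt / tau)   # closed-form step response
print(y, exact)  # both are approximately 0.865
```

The convolution sum contains no feedback term at all, yet the response to a step input is spread over time with the familiar 1 − exp(−t/τ) shape; the memory comes entirely from the decaying impulse response, just as the comment argues.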
BTW, it was the IPCC’s fantastic claim that the entire climate system can be represented by a transfer function expressed as the product of the reciprocals (!) of the transfer functions of its subsystems that convinced me they lack basic analytic understanding of systems. It made me an instant sceptic after their first report.
Just noting that we can also estimate climate sensitivity from how much temperatures have increased to date compared with how much the forcing has increased to date.
Temps are up about 0.7°C (give or take an artificial 0.2°C or 0.3°C added by Tom Karl’s adjustments), and the forcings have increased by 1.6 watts/m², or up to 1.9 watts/m² once the newest numbers are included.
1.6 / 0.7 = 2.3 watts/m² per °C, which is 50% to 100% higher than expected depending on how much lag one wants to build in, and maybe up to three times too high if you don’t accept the large negative aerosol forcing numbers.
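The arithmetic above can be sketched as follows. The 3.7 W/m² forcing per CO2 doubling is the conventional figure and is an assumption added here; the input numbers are the commenter’s round figures, not vetted data:

```python
# Back-of-the-envelope sensitivity from observed trends (illustrative only;
# dT and dF are the commenter's round figures, F_2XCO2 is the canonical value).
dT = 0.7          # warming to date, degC
dF = 1.6          # forcing increase to date, W/m^2

lam = dF / dT                 # implied feedback parameter, W/m^2 per degC
F_2XCO2 = 3.7                 # assumed forcing per CO2 doubling, W/m^2
sensitivity = F_2XCO2 / lam   # implied equilibrium warming per doubling, degC

print(round(lam, 1))          # ~2.3 W/m^2 per degC, as in the comment
print(round(sensitivity, 1))  # ~1.6 degC per doubling
```

With these round numbers and no allowance for lag, the implied equilibrium sensitivity comes out near 1.6°C per doubling, well below the commonly cited 3°C, which is the gap the comment is pointing at.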
Trenberth published about this recently and included a new term “Negative Radiative Feedback” to explain the missing energy or the missing temperature response.
http://img638.imageshack.us/img638/8098/trenberthnetradiation.jpg
What is the “Negative Radiative Feedback” – it could be clouds, it could be missing energy absorption in the deep oceans or it could be that the theory is just wrong to start with.
sky says: The characteristic “time-constant” of the decay of impulse-response function, rather than feedback, is what controls the duration of effects from past values of input.
Good point! An intuitive explanation of a thermal time constant is provided by MIT,
“The time constant tau is in accord with our intuition, or experience; high density, large volume, or high specific heat all tend to increase the time constant, while high heat transfer coefficient and large area will tend to decrease the time constant.”
http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node129.html (Equation (18.18) is the most important)
Now, I wonder whether our spinning earth may have a large number of different thermal masses, all with different time constants, and that an analogy with a forced oscillator may be useful? Obviously the earth is a dissipative system, but still there may be natural oscillations that can be sustained with very little effort.
Energy is always conserved but comes in so many forms that it is highly nontrivial to do a detailed energy budget. Still, it is possible to have an idea about the most important contributions; we know that the thermal mass of the oceans is the largest thermal mass, and we know that most of the energy dissipation is radiation to space. Then equation (4) in this paper follows,
http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf
Sure this is an approximation only, but it is a valid approximation and not an erroneous assumption. It is also a meaningful equation, it tells us that,
1. Temperature is a cumulative property.
2. It takes time to change the temperature due to ocean thermal mass.
3. The temperature may or may not be close to the equilibrium temperature.
4. Energy is dissipated mainly due to radiation back to space.
I think one of Dr. Spencer’s main points is that the temperature here on our planet may be far from the equilibrium temperature and that it may take several centuries to oscillate between cold and warm periods on each side of the equilibrium.
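The role of ocean thermal mass in points 2 and 3 can be made concrete with a lumped heat-balance sketch in the spirit of equation (4) of the linked Schwartz paper, C·dT/dt = F − λT, whose time constant is τ = C/λ. The mixed-layer depth and feedback parameter below are illustrative assumptions, not values taken from the paper:

```python
# Lumped ocean heat balance C*dT/dt = F - lam*T, with time constant tau = C/lam.
# The depth and feedback values are assumptions for illustration only.
rho = 1025.0      # seawater density, kg/m^3
cp = 3985.0       # specific heat of seawater, J/(kg K)
depth = 100.0     # effective mixed-layer depth, m (assumed)
lam = 1.5         # net feedback parameter, W/(m^2 K) (assumed)

C = rho * cp * depth          # heat capacity per unit area, J/(m^2 K)
tau_seconds = C / lam
tau_years = tau_seconds / (365.25 * 24 * 3600)
print(round(tau_years, 1))    # ~8.6 years with these assumed values
```

A deeper effective ocean layer or a weaker feedback parameter lengthens the time constant, which is why multi-year to multi-decade lags between forcing and temperature response are physically plausible.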
It seems to me that “Tom Vonk”, and partly “sky”, lack the physical understanding and intuition required to imagine a simplified heat balance equation for our planet. Otherwise I cannot see how you would picture our planet’s climate variations in simplified mathematical terms, and you should perhaps go back to the Feynman Lectures on Physics once more and re-read the chapters about energy.
Invariant (3:56pm):
Despite your professed proficiency in physics, you fail to notice a crucial difference between Spencer’s model formulation and Schwartz’s self-admitted Ansatz (a premise without rigorous basis). The latter involves T^4 in a dimensionally consistent algebraic restatement of the difference between outgoing and incoming energy fluxes. It tells us nothing about system operation. Spencer’s simple model involves T to the FIRST power as the explicitly proclaimed “feedback” and is dimensionally inconsistent!
I find your reference to Feynman ironically amusing. I’ll let you guess from whom I learned my physics.
sky: June 30, 2010 at 5:44 pm
If you look again at the Schwartz paper, you will note that he seemed pleased with his Ansatz (eq’n 3), not ashamed of it in the sense of something to be ‘admitted‘. It is also not a bad approximation anyway, and has been used by many others.
This leads to the T⁴ term in his eq’n 4, but he notes that the solution (eq’n 6) is not his, but was obtained by others who had also used this approximation.
If you check the math, you will see that eq’n 6 is not a solution to eq’n 4 itself, but to a linearized version of eq’n 4 in which the T⁴ term has been replaced by the linear term of an expansion around the mean temperature.
Finally, to get to Dr. Spencer’s equation, you just need one more simplifying approximation to eq’n 6, i.e. to take the time interval involved to be small in comparison to the time constant τ.
All in all, it’s just a regular back-of-the-envelope calculation of a type that is carried out by physicists and engineers every day in order to get a ballpark estimate of the main effects, as has been pointed out by several other readers.
/dr.bill
dr. bill:
These sorts of back-of-the-envelope approximations may satisfy academics, but when they result in dimensionally inconsistent model equations they wind up misleading everyone about how real-world physics operates. Furthermore, the assumption that the time constant of the interacting uppermost ocean layer (which should not be confused with the lag variable tau in the convolution integral) is small relative to the duration of the insolation disturbance due to Mt. Pinatubo is particularly problematic. Few realize that it is dependent on the strength of winds and is typically on the order of weeks to months in the upper mixed layer. That, rather than mathematical justifications of academic handwaving, is what physically matters.
sky: July 1, 2010 at 3:11 pm
Actually, it’s pretty clear that Dr. Spencer is a ‘real-world’ person using real-world data, and working on real-world problems. Whether he works at a university (say UAH) or at a private corporation (say RSS) doesn’t change that at all. You might also want to check out the underpinnings of your vaunted sense of omniscience and your erroneous sense of what other people do or don’t know.
/dr.bill
dr. bill:
I’m not impugning Spencer’s work with real-world data, which I much admire. The analytic mis-formulation of his model is the issue at hand. And decades of experience in analyzing and modeling real-world processes is what I bring to the discussion. Enough said.
Seems like a good idea. Here’s an article I read yesterday.
Perhaps there’s something in there for both of us. ☺
/dr.bill
sky says: “but when they result in dimensionally inconsistent model equations they wind up misleading everyone about how real-world physics operates.”
Challenges to “sky”
1. Show WUWT readers which unit/dimension of Dr. Spencer’s lambda leads to a dimensionally inconsistent model equation.
2. Write down your favourite heat balance equation for our planet, use as few letters/symbols as possible.
That “Pinatubo cooling” is abject nonsense. The Pinatubo eruption happened to coincide with the peak warmth of the 1991 El Nino, which was immediately followed by a temperature drop of half a degree to the bottom of the 1991/92 La Nina. The eruption was perfectly timed to make it look like it caused that temperature drop, and Self et al. fell for the illusion. But that La Nina was a perfectly normal La Nina and had nothing to distinguish it from the previous two, as Figure 7 in my book demonstrates. That figure is an analysis of satellite temperatures and brings out the five El Nino peaks and their accompanying La Ninas in the eighties and nineties. They can be clearly identified, and all are part of the ENSO oscillation in the Pacific. ENSO has a global temperature influence and shows up in all accurate global temperature records.
Self et al. [in “Fire and Mud”, edited by Newhall & Punongbayan (University of Washington Press, 1996), pp. 1089-1115], however, had no idea that the temperature oscillations in the satellite record belonged to ENSO and simply appropriated that particular La Nina cooling for their volcano. To justify it they show an out-of-context segment of the satellite record in their Figure 12A, and if you don’t know what you are looking at, it is easy to see why they thought the volcano had done it. And since they are the big experts, everyone just copied them. Their data also showed that the volcanic aerosols that were blasted into the stratosphere first warmed it, and that stratospheric cooling did not start until 1993, two years after the tropospheric cooling they claimed. Also, an observation they report should have alerted them. They wonder why surface cooling is “…clearly documented after some eruptions (for example, Gunung Agung, Bali, in 1963) but not others (for example, El Chichon, Mexico, in 1982)…” The answer is pot luck. If the eruption coincides with the start of a La Nina cooling, it looks like the volcano did it.
But when it takes place just as a La Nina period has ended and an El Nino is building up, there is no observable cooling. It’s all in the timing. Pinatubo erupted precisely when a La Nina was just starting to form. But for El Chichon the timing was inopportune: it erupted just when an El Nino was beginning to build up. No one could find its cooling simply because it did not exist, and neither did Pinatubo cooling exist. But Self et al. think they found it and pontificate: “Pinatubo climate forcing was stronger than the opposite warming effects of either the El Nino event or anthropogenic greenhouse gases in the period 1991-1993.” Complete bullshit, but it passes for science among the global warming gang.