Guest essay by Leland Park
Global warming theories propose positive feedbacks to explain magnified greenhouse effects that might trigger catastrophic warming. Naturally, any clues in temperature observations that might indicate feedback would be of great interest to climate science.
It turns out that climate feedback is very real, large and negative.
Climate Cause and Effect.
The concept of feedbacks presupposes a dynamic system in which cause and effect are linked through a consistent timing relationship. Thus, evidence of climate operating as a system, if it exists, should be found in comparing a cause of climate behavior with its effects. It is well known that solar energy is a dominant factor in the climate. Thus, the timing relationship between solar levels and temperatures might provide insight into climate cause and effect timing.
Figure 1 is a seasonal timeline of US average daily highs (Tmax) for US HCN stations at 36 degrees North Latitude. On that graph is an overlay of the daylight hours for same latitude, where the amount of daylight serves as a rough proxy for the pattern of solar level changes. (The two vertical scales are not adjusted for proportionality.)
Figure 1 Patterns of Daylight and Tmax for US HCN Stations at 36 North Latitude.
Several interesting observations can be made from this figure:
o the patterns of daylight hours and temperatures are both sinusoidal;
o temperatures lag solar levels by about a month throughout the year;
o winter-to-summer temperature changes in excess of 40 deg F are entirely normal;
o this seasonal pattern is a consistent, recurring feature of climate behavior.
Though climate is often thought of as chaotic, it is readily apparent that the historical, repetitive pattern of cause (solar level) and effect (temperature change) means this pattern is not accidental. In fact, the cause-and-effect linkage between the two functions is a systematic behavior known in control systems as stimulus-response. Since this seasonal pattern repeats every year, the large lag is a characteristic of climate behavior. In systems terms, this lag, from cause to effect, constitutes negative feedback.
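The timing claim is easy to illustrate in systems terms. Below is a minimal sketch (my own, not the author's calculation) of a first-order lag: a single heat reservoir with an assumed 30-day time constant, driven by a sinusoidal stimulus. The phase delay it predicts is about a month:

```python
import math

# Toy first-order lag model (illustrative assumption, not the author's method):
# dT/dt = (S(t) - T) / tau, driven by a sinusoidal stimulus S(t).
# Linear-systems theory gives a phase lag of atan(w * tau) for the response.

tau = 30.0                    # assumed reservoir time constant, days
period = 365.25               # one seasonal cycle, days
w = 2 * math.pi / period      # angular frequency, 1/day

lag_days = math.atan(w * tau) / w   # delay of temperature peak behind solar peak
print(round(lag_days, 1))           # roughly a month, as in Figure 1
```

The lag here comes entirely from the storage term; the sketch itself is neutral on whether that behavior should be called feedback or thermal inertia.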
Stimulus-Response Precision
Having identified a systematic behavior, we are interested in how consistent the stimulus-response relationship is. Because most aspects of climate behavior vary over time, it is of interest to examine the statistics associated with the seasonal patterns. Unfortunately, the idealized solar function used in Figure 1 cannot be matched against short-term temperature variations. However, the statistical pattern of temperatures through the seasonal changes may provide insight into how the climate operates under change.
Figure 2 is a composite of the histograms for four selected months representing the solstice and equinox periods. Because of the climate lag, the solstice and equinox periods fall in the months of January, April, June and October. Variations in Tmax were calculated for individual stations by reference to station historical averages before inclusion into the histogram counts.
The statistical patterns for each month are nominally symmetrical about those historical averages, so they correlate with the Tmax function in Figure 1. The results of these calculations are presented in Figure 2, where the mean of each distribution (and the monthly Tmax average) is represented by the 0 axis. While we do not have measured solar levels for comparison, it is clear that the climate is following the solar pattern with remarkable precision.
Figure 2 Tmax Variation Statistics for Selected Months
There are many factors that can produce variations in the seasonal Tmax. Among these are variations (by time and location) in the albedo, cloud cover, water vapor and ocean and atmospheric circulations. Despite the many reasons for variation, the climate follows the seasonal pattern closely even though the solar level is undergoing continuous change. In fact there is at least a 90 deg F round trip from winter to summer that is entirely normal at this latitude.
Figure 2 is, thus, an excellent illustration of the dynamic stability in the climate system. A passive system could not deliver the demonstrated seasonal tracking precision in following the solar stimulus. In short, the climate is behaving as if it is an active control system. More precisely, Figures 1 and 2 illustrate behavior that is consistent with that of a linear control system subjected to a sinusoidal stimulus.
Naturally, the temperature data alone is inadequate for explaining all of the effects. It should be sufficient, however, to establish that the dynamics are not “out of control”.
The Lag is Produced by Retarding Heat Changes.
During the January to July phase, the effect of the negative feedback is to retard temperature increases despite the increasing solar levels (Figure 1). Conversely, from July on, the negative feedback retards the loss in temperatures despite the waning solar levels. So the effect of the negative feedback is to retard, or delay, the effects of the solar level changes, whether increasing or decreasing.
So the cause of the temperature lag is something in the climate that can retard both heat gains and losses across the entire continent. Although atmospheric lag contributes, the daily lag of about 4 hours is far too small to account for the seasonal lag.
It is possible to surmise the major characteristics of the cause of the lag. These are: 1) high relative heat capacity, 2) very large total heat capacity and 3) global impact. Those characteristics can only be met by the water in the oceans. In effect, the oceans contain such enormous amounts of water (high relative heat capacity) that they represent a vast thermal reservoir, absorbing heat and subsequently releasing it as the seasonal solar changes require.
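A back-of-envelope check (my numbers, standard textbook values) supports the reservoir argument: per square metre of surface, a 50 m ocean mixed layer stores far more heat per kelvin than the entire atmospheric column above it.

```python
# Column heat capacities per square metre (textbook values; the 50 m
# mixed-layer depth is an illustrative assumption).

rho_w, cp_w, depth = 1025.0, 3985.0, 50.0      # sea water density, specific heat, assumed depth
C_ocean = rho_w * cp_w * depth                 # J per m^2 per K stored in the mixed layer

cp_air, p_surface, g = 1004.0, 101325.0, 9.81  # air specific heat, surface pressure, gravity
C_atmos = cp_air * p_surface / g               # J per m^2 per K for the whole air column

ratio = C_ocean / C_atmos
print(round(ratio, 1))   # the mixed layer alone holds roughly 20x the atmosphere's heat
```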
In addition to its high heat capacity, water has special thermal properties that provide a critical link between the oceans and global climate. For example, as the sun heats the ocean a portion of the surface is evaporated carrying a large amount of heat energy into the atmosphere to be globally circulated. The relationship between water and the climate is far too complex for elaboration here, but it is clearly critical to the global climate.
Implications for Climate When Feedback is Negative.
A system with a large negative feedback is inherently stable in its operation. In this case, climate behavior is operating as if it is a linear control system where the stimulus is sinusoidal. That is, the climate will dutifully follow the solar stimulus but with a persistent delay (lag). Not only is it responding to the solar stimulus, it is responding with great precision, despite the approximately 90 deg F seasonal round trip in ambient temperatures over the year.
In fact, a system with this much lag would be quite stable with respect to minor perturbations. Inducing such a system to a permanent change in equilibrium conditions would not cause catastrophic instability. Instead the system would slowly seek a new equilibrium operating condition. For a major change in equilibrium conditions, however, a correspondingly large change in the system fundamentals would be required as well as considerable time for the change to take effect. Suffice it to say that minor changes in atmospheric trace gases would not be likely to force an equilibrium change in the system.
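The “slowly seek a new equilibrium” behavior can be sketched with the same first-order picture; the time constant and step size are illustrative assumptions only:

```python
import math

# Illustrative first-order relaxation: a small permanent step in forcing
# moves a lagged system smoothly to a new equilibrium -- no runaway.
tau = 30.0     # assumed system time constant (arbitrary units)
step = 1.0     # small permanent change in forcing

def response(t):
    """Fraction of the way to the new equilibrium at time t."""
    return step * (1 - math.exp(-t / tau))

print(round(response(tau), 2))       # 0.63 of the way after one time constant
print(round(response(5 * tau), 2))   # 0.99 after five: settled, not unstable
```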
Implications for Climate Science.
US HCN data was mined for this analysis and the illustrative graphs. However, awareness of the climate lag and its approximate size could have been gleaned from an ordinary calendar. The seasonal pattern of solar level changes is well known, and the solstice and equinox points are often marked on calendars. Furthermore, calendars and almanacs have long noted that the warmest and coldest temperatures lag the solstice points.
Climate science should have begun to understand that the climate is stable when it found it necessary to continuously adjust temperature measurements to maintain the fiction that the earth is warming. Whatever excuse there may be for “adjusting” field measurements once, the practice of making continuing adjustments demonstrates that climate science has knowingly perpetrated a fraud.
The negative feedback between solar levels and temperatures has always existed – but it has never been officially noticed. I, for one, will be interested to learn how quickly climate science can adapt CO2 theory to explain away its implications.
The features you describe will also be exhibited by climate models with strongly positive climate feedbacks. The time lags are due to the heat capacity of the system, not feedbacks. While I agree climate feedbacks are likely negative, they cannot be estimated this way. It’s not clear that they can be measured at all… we’ve spent years trying.
The seasonal lag from heat capacity is not a negative feedback, but only a lag. The stability on the ~1-month time scale arises because many of the positive feedbacks, such as the albedo change from sea ice, have longer response times owing to the large heat capacity of the upper ocean. On shorter time scales the Planck feedback, a negative one, is dominant.
Your comment highlights one of the problems that bedevils discussions of whether feedback is positive or negative; in a sense the conclusion is arbitrary, depending on what one includes in the open-loop system that the feedback closes.
The Planck feedback is, as you say, negative, and it closes an otherwise open-loop system whose equilibrium gain is infinite. But in many discussions the resultant, finite-equilibrium-gain system is treated as itself being the open-loop system upon which other feedbacks act, and the question becomes whether they are on balance positive or negative. It is at the latter level of abstraction that Dr. Spencer says, “I agree climate feedbacks are likely negative.”
Failure to specify context (a failure of which I am not always innocent myself) tends to muddy the waters in blogosphere feedback discussions.
Joe Born says, “Your comment highlights one of the problems that bedevils discussions of whether feedback is positive or negative; in a sense the conclusion is arbitrary, depending on what one includes in the open-loop system that the feedback closes.”
The point is well taken. You might be interested in the feedback discussion in this paper, which I think makes a similar point.
J. Ray Bates, Estimating Climate Sensitivity Using Two-zone Energy Balance Models, 2015.
http://onlinelibrary.wiley.com/doi/10.1002/2015EA000154/pdf
Frederick Colbourne:
Thank you for the link. I haven’t read it all yet, but it may suggest a feedback taxonomy I hadn’t considered, so I look forward to reading the whole thing.
The first indication that the AGW claim had reached me, and that I was skeptical, is the invitation to my 1984 CoSy MidWinter party :
http://cosy.com/Science/CG84-tempsEnhanced.jpg
The temperature data is from Buffalo , ~ 43 degrees North , the closest I could find to Rochester NY . I just approximated the insolation with a sine . ( I’d like it if someone would point me to the correct function wrt latitude , particularly in APL . It takes the sort of detailed thought I ration because tasks like getting the reference counted recursive lists-of-lists in 4th.CoSy crystalline sucks lots . ) My estimate is that the temperature lag is around 2 to 3 weeks . Note there appears to be a ( 3rd harmonic ) bulge in the fall versus the spring .
The lag is not feedback ; it is inertia . And what impressed me at the time was how damned little there is . How little heat the atmosphere holds . My immediate thought was that if the Sun went out on Friday , we’d be a lifeless snowball before Monday . No way some change could take decades to show itself .
—
The image ( and the text ) are all constructed as a bitmap in Phil Van Cleave’s APL68000 on this Sage computer :
http://www.cosy.com/language/cosyhard/sagehds.gif
which had 500k of memory in a single address space . This was a few years before the InferiorButMarketable 8088 PC with its 64k segments ate the world and Apple soldered 256k into its 68000 based Mac defeating the potential of the chip’s 24 bit address space and making it useless for business .
Nice, thanks for that Bob. A lot can happen in half a lifetime 😉
Well , 0.43 anyway 🙂
Now I’m seeking other minds to form the language community fleshing out 4th.CoSy in all their different goals , including , I hope , an understandable because it is APL succinct model of planetary physics .
+1
Leland Park
Thanks for exploring the temperature lag behind the solar.
May I recommend the work by David Stockwell modeling a 90 degree lag for the 11 year solar cycle etc.
David R.B. Stockwell, “Key evidence for the accumulative model of high solar influence on global temperature” 4 August 23, 2011 http://vixra.org/pdf/1108.0032v1.pdf
See especially Fig. 3 http://vixra.org/pdf/1108.0032v1.pdf
David R.B. Stockwell shows that the direct correlation of solar irradiance with temperature R^2 is only 0.028 while the cumulative solar irradiance has a correlation R^2 of 0.72 and solar + volcanic has R^2 of 0.78. See Fig. 4 in
David R.B. Stockwell “On the Dynamics of Global Temperature” August 2, 2011 http://vixra.org/pdf/1108.0004v1.pdf
David R.B. Stockwell, “Accumulation of Solar Irradiance Anomaly as a Mechanism for Global Temperature Dynamics” 9 Aug. 2011
http://vixra.org/abs/1108.0020
Stockwell further shows a 2.75 year Phase Shift in Spencer’s Data
As I mentioned in connection with Willis Eschenbach’s post comparing daily with yearly time lags, the 2.75-year lag that Stockwell observed is interesting because it suggests more of a lumped-parameter character than does the shorter-term behavior described by Mr. Eschenbach and the head post here.
Maybe I’m being dense here, but it seems to me the time lag is indicative of a reactive element in the system. The heat capacity of the system may be the “reactive element” and is being used as a feedback element in the control loop. Therefore the time lag is indicative of some kind of feedback, which is what the article attempts to identify. The results of that feedback are probably debatable.
Sorry but the article does not show anything about negative feedbacks. If I take a large object and apply heat to it in a sinusoidal pattern (including no solar at night), it will take a long time to heat up. The maximum temperature may occur AFTER the maximum heat application. But this lag has nothing to do with feedback. To see feedback, one can refer to clouds. If, as Willis postulates, high temperatures cause thunderstorms which dissipate more heat to space, THIS is a negative feedback. The fourth-power of temperature radiative heat loss law is a simple negative feedback (hot objects lose heat faster). But the seasonal lag has nothing to do with it.
…this lag has nothing to do with feedback.
I think ‘lag’ has nothing to do with anything but a time delay. Some lags may be due to feedback. Some are not.
For example, ∆CO2 lags ∆temperature. That observation is seen on many different time scales. But it isn’t a forcing. Is it a feedback? I’m not certain.
The problem is the language. ‘Lag’ is a temporal term; nothing more, unless it’s defined further.
The author writes:
“For a major change in equilibrium conditions, however, a correspondingly large change in the system fundamentals would be required as well as considerable time for the change to take effect. Suffice it to say that minor changes in atmospheric trace gases would not be likely to force an equilibrium change in the system.”
The first sentence – absolutely correct. The effect of our CO2 emissions, for example, is a large change in system fundamentals, and the impact is creating a long-term change before the climate comes to equilibrium – at a higher temperature.
The second sentence is misleading. There isn’t a minor change in trace gas that we’re experiencing right now. There is a “correspondingly large change” in concentrations of persistent greenhouse gases.
Persistent greenhouse gases ARE “system fundamentals” in the Earth climate. They are why the surface of the planet is 33C warmer than it should be at this distance from the sun. This has been known for hundreds of years (see Fourier, 1824 and 1827). Without them, the oceans would be frozen over, with liquid below the ice surface, like on Europa.
The significance of a component in a system is not always based upon its ratio to the bulk. That persistent GHGs make up a small portion of the bulk atmosphere is just as irrelevant as saying that less than a microgram of botulin toxin isn’t a lethal dose for a human because it is proportionally a nearly undetectable trace. That tiny trace would certainly create an equilibrium change.
Leland: The “existence of negative climate feedback” is fully understood by climate science, but the quantitative aspects are somewhat uncertain. Your post suggests an engineering background, so I’ll use calculus.
“Negative climate feedback” is provided by the fact that all materials emit more thermal radiation as they get warmer. In the case of a simple blackbody, W = -oT^4, and the change in radiation emitted with temperature is dW/dT = -4oT^3 = -3.8 W/m2/K when the temperature is 255 K. (dW/dT is negative, because it is the increase in heat LOST with increasing temperature. The average photon escaping to space is emitted from about 5 km above the surface, where the temperature is about 255 K.) This is also called “Planck feedback”, but I prefer the non-standard term “Planck response” to avoid confusion with other feedbacks. The reciprocal (dT/dW = 0.26 K/(W/m2)) is called the “no-feedbacks” (or, more accurately, the “no-additional-feedbacks”) climate sensitivity. This value is usually multiplied by 3.7 (W/m2)/doubling of CO2 to give 1 K/doubling.
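The arithmetic in this comment checks out (magnitudes only; the sign convention above counts heat lost as negative, and `sigma` below is the Stefan-Boltzmann constant written as “o” above):

```python
# Planck response at the effective emission temperature, and the
# "no-feedbacks" sensitivity, reproducing the comment's numbers.
sigma = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
T = 255.0                # effective emission temperature, K

dW_dT = 4 * sigma * T**3          # ~3.8 W/m^2 of extra emission per K of warming
dT_dW = 1 / dW_dT                 # ~0.27 K per W/m^2 of forcing
sensitivity = 3.7 * dT_dW         # ~1 K per doubling of CO2 (3.7 W/m^2/doubling)
print(round(dW_dT, 2), round(dT_dW, 2), round(sensitivity, 2))
```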
The Earth doesn’t emit like a blackbody. As temperature rises, more water vapor probably will be found in the atmosphere, allowing less thermal emission to reach space. That water vapor will also affect the albedo from clouds and the lapse rate. The surface of a warmer planet will probably be covered by less ice and snow, changing the planetary albedo. These changes are also called feedbacks, and they have units of W/m2/K, just like the Planck response.
If you want to think of the Earth as being a graybody with emissivity e, then: W = -eoT^4 and dW/dT = -4eoT^3 – oT^4*(de/dT). dW/dT for the Earth is called the “climate feedback parameter” and every climate scientist believes it is negative. The de/dT term represents the change in radiation to space associated with a change in the Earth’s emissivity – the sum of the WV, LR, cloud, and albedo feedbacks in the previous paragraph. If de/dT is negative (making -oT^4*(de/dT) positive) – as most climate scientists believe – they say that positive feedback exists. However, that does not imply that dW/dT – the climate feedback parameter for the whole planet – is positive. It simply means that the -oT^4*(de/dT) term is positive and negates SOME of the negative Planck response (-4eoT^3). As long as the sum of these two terms is negative – as all climate scientists believe – the overall negative climate feedback you expect to see exists.
If the -oT^4*(de/dT) term is big enough, then the climate feedback parameter can be zero or positive. That is called a runaway greenhouse effect. Climate scientists don’t believe that dW/dT is positive, but some believe that the positive feedbacks come close to canceling the negative Planck response. That is how they come up with a high climate sensitivity (the reciprocal of the climate feedback parameter dW/dT). The whole debate about climate change comes down to the sign and magnitude of de/dT, and there is uncertainty about this subject.
Therefore, you and climate scientists are in complete agreement: dW/dT (the climate feedback parameter) is negative. However, they believe that it isn’t as negative as expected for the Planck feedback alone. They call the phenomena that make the climate feedback parameter LESS NEGATIVE, BUT NOT GREATER THAN ZERO, “positive feedbacks”.
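A numerical sketch of this bookkeeping, in the graybody form used above (with `sigma` for the constant written as “o”); the emissivity and the de/dT value are illustrative assumptions, not measurements:

```python
# Graybody feedback bookkeeping: dW/dT = -4*e*sigma*T^3 - sigma*T^4*(de/dT).
# e and de_dT below are hypothetical values chosen for illustration.
sigma = 5.67e-8
T = 288.0                 # mean surface temperature, K
e = 0.61                  # effective emissivity, chosen so e*sigma*T^4 is near 239 W/m^2
de_dT = -0.0035           # hypothetical: emissivity falls as T rises (net positive feedback)

planck_term = -4 * e * sigma * T**3          # negative Planck response, W/m^2/K
feedback_term = -sigma * T**4 * de_dT        # positive when de/dT < 0
lam = planck_term + feedback_term            # climate feedback parameter dW/dT

print(round(lam, 2))             # less negative than the Planck term alone, but still negative
print(round(3.7 / -lam, 2))     # implied sensitivity, K per doubling of CO2
```

With these made-up numbers the feedback parameter stays negative (stable) while the implied sensitivity roughly doubles: exactly the “less negative, but not greater than zero” point the comment makes.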
Why can’t people get past this 255K meme ? The temperature calculated by simply summing the energy impinging on a point in our orbit ( virtually all from the sun ) is about 278.5 +- 2.3 from perihelion to aphelion . That is the black body temperature in our orbit and it is exactly the same for any gray , defined as flat spectrum , body in our orbit .
I see no hope for any forward progress in “climate science” until this most fundamental fact is understood .
Bob asked: “Why can’t people get past this 255K meme?”
I don’t particularly care for the 255K meme either. Therefore I said: “If you WANT to think of the Earth as being a graybody with emissivity e”, … emissivity changes with temperature (de/dT), and this change is the result of feedbacks that modify the Planck response/feedback. What does “emissivity” mean when you are talking about a gas? Engineering types like this approach, and it is easier to discuss than the alternative used by chemists and physicists, who are used to thinking in terms of absorption and emission by groups of molecules. I judged that the author of this post didn’t think in terms of the behavior of molecules.
At a more fundamental level, CHANGES in the spectral intensity of radiation (dI) at a particularly wavelength (lambda) traveling through a layer of the atmosphere (dz thick, but thin enough that temperature, pressure and radiation can be treated as constants) are calculated using the Schwarzschild eqn:
dI = n*o*B(lambda,T)*dz – n*o*I*dz
where n is the density of GHG, o is the absorption cross-section for the GHG at wavelength lambda, B(lambda,T) is the Planck function, and I is the spectral intensity of the radiation entering the layer. The first term is the emission and the second term is the absorption by the molecules in the layer. This equation is numerically integrated over a path, usually from the surface to space for OLR or space to the surface for DLR and then the spectral intensity is integrated over all relevant wavelengths to produce the power flux. Using the “plane-parallel approximation”, only the component of the flux perpendicular to the surface is used since the fluxes in the other two directions cancel. Online programs like MODTRAN and HITRAN contain a database of absorption cross-sections and automate the numerical integration – at the cost of making the whole process seem like as much of a “black-box” as the graybody approach. If you are in the laboratory using a powerful light source, the emission term can be neglected and integration affords Beer’s law. Additional terms are added when scattering is important. Above the stratosphere, molecular collisions are less-frequent, a Boltzmann distribution of energy states (or local thermodynamic equilibrium, LTE) no longer exists and the emission term needs to be modified. Fortunately, all the processes important to climate occur where LTE exists.
When absorption and emission have come into equilibrium and dI is zero, the radiation has “blackbody” spectral intensity, B(lambda,T). Planck’s Law and the S-B equation tell us what happens when radiation is in equilibrium with its surroundings (absorption = emission). Such an equilibrium does not exist everywhere in the atmosphere at all wavelengths. The Schwarzschild eqn tells us what happens when such equilibrium doesn’t exist. Hopefully, this helps get past the 255 K meme.
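A toy numerical march of the Schwarzschild equation at a single wavelength (my own minimal sketch with made-up numbers; real tools like MODTRAN integrate measured cross-sections over an altitude-dependent atmosphere):

```python
# Toy integration of dI = n*o*B(lambda,T)*dz - n*o*I*dz, stepped upward
# through a stack of thin layers. All numbers are invented for illustration.

def schwarzschild(I0, layers, dz):
    """March intensity upward; each layer is (k, B) with k = n*o per metre."""
    I = I0
    for k, B in layers:
        I += k * (B - I) * dz    # emission adds k*B*dz, absorption removes k*I*dz
        # note: when I > B the layer absorbs more than it emits, so I relaxes toward B
    return I

# 50 identical 100 m layers with a source function B that cools with height:
layers = [(1e-3, 100.0 - i) for i in range(50)]
I_top = schwarzschild(150.0, layers, 100.0)
print(round(I_top, 2))   # well below the initial 150: intensity relaxed toward the local B
```

The qualitative behavior the comment describes falls out directly: wherever the entering intensity exceeds the local blackbody source function, the layer is a net absorber and the radiation relaxes toward B(lambda,T).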
http://climatemodels.uchicago.edu/modtran/
In the end, the planet emits more radiation to space as it warms and therefore has a negative climate feedback parameter. Climate scientists believe that Planck response/feedback is made less negative by the effect of “positive feedbacks”. Confusing. If you want to get past the flawed science presented at WUWT, try scienceofdoom.com . Despite the name, the host is generally agnostic about CAGW, but fanatic about getting the science correct.
Sorry , I work at the most superficial level . So we agree that before one starts descending into the atmosphere the “temperature” in our orbit which must be matched by any purported graph of temperature as a function of altitude is ~ 278.5 +- 2.3 .
All my energies right now are fleshing out 4th.CoSy to make it more accessible and useful to other minds and perhaps seduce some of them to join in implementing such necessary vocabulary as the Schwarzschild absorption equation and the vocabulary of physics generally .
Looking more closely at what is being claimed for this Schwarzschild equation , I see it is claimed to be the crux equation for the trapping of heat by a spectral phenomenon . That certainly raises my interest in implementing it .
However , tip the standard image on its side and consider a tube with a surface of some particular absorptive , reflective character at one end , and a series of spectral filters between it and a radiant source of some particular power spectrum at the other . The claim is that integrating across the stack of filters , the energy density , ie : temperature , next to the surface can be made to be higher than the energy density between the first filter and the radiant source .
Surely if you can construct such a surface and such a stack of spectral filters , we have the makings of a perpetual heat engine . And surely you can specify what the optimal absorption reflection spectrum of the surface , and the absorption transmission spectrum of the filters have to be and how much of a temperature gradient will be created .
Actually , since all the filters can be collapsed into 1 thru the appropriate multiplication , can you give us an example surface ( absorption ; reflection ) spectrum and filter and radiant source to demonstrate the effect ?
David Appell claims it’s intrinsically not possible to replicate the effect which is claimed to trap an energy density at Venus’s surface 25 times ( 400K greater ) than that just 250km from it in its orbit .
Surely you could construct such a tube to conduct such an experiment in the tunnel alongside the SLAC 3.2km accelerator for a lot less money than the cost of the accelerator , creating over a 5C temperature difference if you can come close to matching Venus’s gradient .
Can you show us a numerical example of a surface , a filter and an input power spectrum creating a greater energy density , ie : temperature , on the side of the filter away from the source ?
“Climate scientists believe that Planck response/feedback is made less negative by the effect of “positive feedbacks”.”
This is something that climate science has so wrong it’s absurd. Feedback, positive or negative, has little effect on the sensitivity associated with the Planck response (dT/dW). The sensitivity (the slope of the relationship between temperature and input, where input == emissions at LTE) goes as 1/(e*T^3), where e is the ‘effective’ emissivity. The sensitivity decreases far more rapidly owing to the temperature term than decreasing the effective emissivity of the planet can increase it. Both increasing clouds and increasing GHG’s will linearly decrease the effective emissivity, and nothing else has any effect.
Another problem is that climate science is absolutely clueless about feedback system analysis and have horribly misapplied Bode’s feedback analysis to the climate system. Bode’s amplifier measures the input and feedback to determine how much output to deliver from an implied, infinite source. The climate system ‘amplifier’ consumes the input and feedback to produce the output and this COE constraint has never been accounted for and is why many believe that positive feedback has such a large effect. In fact, the very idea of runaway feedback is precluded by the absence of the infinite source assumed by Bode. Note that the assumption of a climate system ‘amplifier’ with an implied power supply was baked into AR1 and has never been fixed.
Earth is not an ideal black body like the Moon (after albedo), but a non-ideal black body, which is called a gray body. If you consider that the temperature of this gray body is the surface temperature and its output is the emissions of the planet, the measured response of the planet to LTE changes in solar input (changes to planet output at LTE) is so close to what you would expect from a gray body, especially in the long term averages, that to claim anything else is to deny the data and the physics. I’ve shown this plot before, and as far as I’m concerned, the only way to explain the data (3 decades of satellite data from ISCCP/GISS) is that the planet acts like a nearly ideal gray body whose temperature is the average surface temperature (about 287K) and whose effective emissivity is about 0.62 (the green line in the picture is the theoretical response).
http://www.palisad.com/co2/tp/fig1.png
Each little dot is 1 month of average emissions vs. average surface temperature for constant latitude slices of the planet. The larger dots are the 3 decade averages per slice. The solid lines represent the SB relationship plotted at the same scale as the data. Note that this plot also shows that the IPCC sensitivity of 0.8C per W/m^2 seems to have arisen as the result of a linearization error (i.e. ignoring the T^4 dependence of emissions on temperature).
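The graybody numbers quoted in this comment are easy to verify with the commenter's T = 287 K and e = 0.62:

```python
# Graybody check: with T = 287 K and effective emissivity 0.62, the
# emitted flux should land near the measured ~239 W/m^2 of outgoing
# longwave radiation.
sigma = 5.67e-8
T, e = 287.0, 0.62

W = e * sigma * T**4
print(round(W, 1))           # ~238 W/m^2, consistent with observed OLR

# Local slope of the graybody curve (holding e fixed):
dW_dT = 4 * e * sigma * T**3
print(round(1 / dW_dT, 2))   # ~0.3 K per W/m^2, vs the ~0.8 the comment disputes
```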
Main stream climate science (and many skeptics) deny the applicability of gray bodies because they think that the system cannot be this simple. In fact much of the complexity was added to IPCC AR’s as assumptions, without adequate peer review, in order to provide the necessary wiggle room to accommodate the desired result of a high sensitivity.
If Earth had an atmosphere containing only N2 and O2 it would behave like a nearly ideal black body. Adding trace amounts of GHG’s plus clouds will not cause the system to deviate so far from the requirements of physical laws that the absurdly high claimed sensitivity can be supported. As an exercise, start from a black body planet with an atmosphere containing only N2/O2 and incrementally add GHG’s and clouds to see what happens.
Bob said: ” I work at the most superficial level.”
At the most superficial level, you need to consider conservation of energy. The energy flux passing downward through the atmosphere needs to equal the energy flux passing upward through the atmosphere at every altitude – or the temperature (internal energy) is changing and a steady-state doesn’t exist. There are two major mechanisms by which energy can travel through the atmosphere: radiation and convection (mostly the latent heat associated with the evaporation and condensation of water).
The Schwarzschild eqn tells us how both LWR and SWR behave in a clear atmosphere. (To a first approximation, clouds can be modeled as semi-transparent surfaces that transmit, absorb, reflect and emit radiation.) To use the Schwarzschild eqn., you need to know how the GHG density (especially humidity), temperature and absorption coefficients vary with altitude. In this equation, it would be more accurate to write: dI(z), I(z), n(z), T(z), and o(z). So, one needs to INPUT the state of the atmosphere (everywhere on the planet) to calculate the radiative fluxes passing through it. Locations where absorption is greater than (or less than) emission are being warmed (or cooled) by the net flux of radiation into or out of them. The Schwarzschild eqn is what allows us to predict that an instantaneous doubling of CO2 (leaving all other parameters the same) will change the net flux of radiation by -3.7 W/m2. That implies that the atmosphere will warm because it is now absorbing more radiation than it is emitting. If you calculate the temperature vs altitude profile for an atmosphere heated from below by SWR and cooled only by emission of LWR, the temperature rises exponentially as one approaches the surface. If no convection existed and radiative equilibrium controlled temperature, the surface of the planet would be about 350 K.
Convection involves fluid flow, so we can’t calculate convective flux of energy in W/m2 from first principles, (as we can with radiation). However, we can calculate from first principles the maximum “lapse rate” (-dT/dz) that can exist without creating instability to buoyancy-driven convection. Instability occurs when a rising packet of air expands and cools, but still remains warmer and less dense than the surrounding air. The maximum stable lapse rate is 9.8 K/km for “dry air” and as low as 4.9 K/km for the most humid air on the planet, which releases heat as water condenses due to falling temperature. Frustratingly, we can only calculate when convection will occur (an unstable lapse rate), but not how much heat is transported upwards (in W/m2) when an unstable lapse rate develops.
So purely radiative equilibrium says that the lapse rate increases exponentially as you go lower in the atmosphere, but there is a maximum lapse rate compatible with stability to buoyancy-driven convection. In the 1960’s Manabe combined these two principles into the concept of radiative-convective equilibrium: whenever the net radiation flux alone can’t maintain a stable lapse rate, convection will carry heat upwards fast enough to produce a marginally stable average lapse rate. Probes descending through the Venusian atmosphere show a nearly constant lapse rate, agreeing with theory, from near the top of the atmosphere at 250 K to the surface at 750 K. On Earth, the lapse rate averages 6.5 K/km through the troposphere. The tropopause is where radiation can cool enough to maintain a stable lapse rate without any assistance from convection. At the surface, net LWR (OLR-DLR = 390-333 = 57 W/m2) needs to be supplemented by about 100 W/m2 of convection to equal the SWR absorbed by the surface. Average rainfall of about 1 m per year represents about 80 W/m2 of latent heat.
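Manabe’s convective-adjustment idea can be caricatured in a few lines: wherever a radiatively computed profile is steeper than the critical lapse rate, relax the upper level back to the critical slope. The input profile below is made up, and a real scheme also conserves the column’s energy; this sketch only shows the lapse-rate cap.

```python
def convective_adjustment(T, dz_km=1.0, critical=6.5):
    """Cap the lapse rate of a temperature profile (T[0] = surface).

    Wherever -dT/dz exceeds `critical` (K/km), reset the upper level so
    the layer sits exactly at the critical lapse rate. A real scheme
    (Manabe's) also conserves the column's energy; this sketch doesn't.
    """
    T = list(T)
    for i in range(len(T) - 1):
        lapse = (T[i] - T[i + 1]) / dz_km
        if lapse > critical:
            T[i + 1] = T[i] - critical * dz_km
    return T

# A toy "radiative" profile, far too steep near the surface (20 K/km):
profile = [320.0, 300.0, 290.0, 284.0, 279.0]
adjusted = convective_adjustment(profile)
# Every adjusted layer now sits at or below the 6.5 K/km limit.
```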
Pressure differences also drive horizontal mass flow via winds.
AOGCMs and weather prediction programs calculate mass, radiative, and convective fluxes between grid cells. Because condensation and turbulent flow occur on scales much smaller than the size of a grid cell, adjustable parameters are needed to represent these processes. Do clouds begin to form in a grid cell when the average relative humidity is 98% or 99%? The planet’s albedo changes dramatically with this parameter. How much turbulent mixing occurs between nearby rising and falling air masses? ECS can change 1 K/doubling by adjusting this parameter! Increasing aerosols reduce the initial droplet size when clouds condense, increasing their reflection, but these smaller droplets evaporate and condense many times to produce thermodynamically more-stable larger droplets. What parameters best describe the radiative properties of a cloudy grid cell? The rate of evaporation is controlled by the speed of the wind and relative humidity (and therefore temperature) and a tunable parameter. One can tune such parameters one at a time and find a set of parameters that provides a reasonable representation of today’s climate and the historical temperature record. One can’t find a set of parameters that optimally describes all aspects of today’s climate at the same time. The record of surface and troposphere warming is statistically inconsistent with the projections of the mean of the IPCC’s models. Observations of the seasonal cycle in OLR and SWR associated with seasonal changes in each hemisphere (which produce a 3.5 K swing in GMST not seen with temperature anomalies) show that all of today’s models are flawed. This paper is by the Manabe who first published radiative-convective equilibrium and the first AOGCM:
http://www.pnas.org/content/110/19/7568.full.pdf
People like the author of this post spread the idea that climate science is plagued by massive errors that even amateurs can spot. For example, the lack of negative feedback in climate PROVES climate science is a hoax. Their ignorance is appalling and they don’t want to know why they may be wrong. (My reply got no response.) The deceptions by the other side are equally appalling, and many of them are real scientists who understand the science, but can’t accept that science can’t reliably describe a future with rising CO2 and thereby save the planet. David Appell is an example of the partisan press who provide more propaganda than accurate science.
ScienceofDoom.com was the biggest help on my journey to this level of understanding. Most WUWT readers believe that it is a hoax, another sign of ignorance. I journeyed there at the recommendation of Steve McIntyre. (Real Climate sent me to ClimateAudit by insulting my intelligence with their defense of AIT and insults of McIntyre. If they hated him so much, I wanted to know why.)
I’m superficial, but I like equations and computations.
An equation is worth a thousand words, a computation demonstrating it ten thousand, and an experimental demonstration the whole bible.
I really find it mentally useful to tip the problem on its side. Let’s say the surface is on the left, the radiant source on the right, and a filter following the Schwarzschild equation in between. You are claiming that for some set of parameter values the Schwarzschild differential will cause a higher temperature to be maintained on the left of the filter than between it and the radiant source.
Could you please give us a set of parameters, that is, surface, filter and source spectra, quantitatively demonstrating this effect.
I’m sorry, I’ve read far too many words that go wandering off talking about all sorts of things like climate models, as your link does, or word-waving about pressure (a gravitational phenomenon) or clouds.
Can we prove and test just this one equation which is claimed to be the crux? That’s the way real classical physics is done.
Bob wrote: “Can we prove and test just this one equation which is claimed to be the crux? That’s the way real classical physics is done.”
Great question.
ScienceofDoom has several series of posts on understanding and visualizing atmospheric radiation, some of which include comparisons of theory and observation. The best might be the post linked below. The relevance of the Schwarzschild eqn to our atmosphere has been demonstrated by studying the spectrum of OLR reaching satellites in space and planes in the upper atmosphere, as well as the spectrum of DLR reaching the ground. Usually the temperature, density and humidity of the atmosphere at all altitudes are collected by a radiosonde as part of the study.
https://scienceofdoom.com/2010/11/01/theory-and-experiment-atmospheric-radiation/
Some of SOD’s material is taken from Grant Petty’s $39 text for meteorologists (AGW is not mentioned), “A First Course in Atmospheric Radiation”. If you want to be able to look up information from a reliable source, this is the place to start. I own a copy.
http://www.amazon.com/First-Course-Atmospheric-Radiation-2nd/dp/0972903313/ref=sr_1_1?ie=UTF8&qid=1463379463&sr=8-1&keywords=a+first+course+in+atmospheric+radiation
dI = n*o*B(lambda,T)*dz - n*o*I*dz
The atmosphere is a difficult place to accurately study fundamental physics. Carefully controlled experiments in the laboratory have many advantages. Using hot high intensity light sources in the laboratory makes the first term negligible.
dI = – n*o*I*dz
I/I_0 = exp(-n*o*z) (Beer’s Law)
When n and o are constant, the equation has an analytical solution: Beer’s Law. This equation is used in laboratories every day. The absorption cross-sections (o) for GHGs have been studied for decades in the laboratory using Beer’s Law, and the HITRAN database containing the best values was started in the 1960s so aeronautical engineers could calculate radiative heat flux around high-performance aircraft and spacecraft. One paper I read used apparatus with a path length of 190 m, via multiple reflections, so samples could be studied at the low pressures and temperatures found at the tropopause.
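The laboratory limit just described is trivial to compute with. A minimal sketch with made-up numbers, with units chosen so that n*o*z is a dimensionless optical depth:

```python
import math

def beer_lambert(I0, n, o, z):
    """Beer's Law: transmitted intensity after a path of length z
    through an absorber of number density n and cross-section o,
    with emission neglected (valid for a hot laboratory source).
    Units must be consistent so n*o*z is dimensionless optical depth."""
    return I0 * math.exp(-n * o * z)

# Illustrative: one optical depth (n*o*z = 1) transmits about 37%.
I_transmitted = beer_lambert(100.0, n=1.0, o=1.0, z=1.0)
```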
It is hard to study emission of thermal infrared in the laboratory, because your sample and everything else in the lab emits thermal infrared. The first term comes from the study of the blackbody radiation emitted by heated black cavities (designed to promote equilibrium between absorption and emission) that led to the development of quantum mechanics. The Planck function (B(lambda,T)) tells us how the relative intensity of such radiation varies with wavelength. Conservation of energy demands that the Planck function be multiplied by n*o*dz in the Schwarzschild eqn so that light of blackbody intensity, I = B(lambda,T), where absorption equals emission, produces dI = 0.
FWIW, I was intensely frustrated for a long time, because I recognized that doubled CO2 would absorb AND EMIT twice as much radiation. No one could convincingly explain why the net result of two large opposing changes would be a slight reduction in OLR at the TOA. Within minutes of seeing the Schwarzschild eqn, everything became clear. For example, dI varies with n, but what determines whether dI is negative or positive? (:))
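The closing question has a one-line answer sitting in the equation itself: the sign of dI is the sign of B(lambda,T) - I, independent of n. A toy numerical check, with a broadband sigma*T^4 standing in for the Planck function and illustrative values throughout:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m2/K4

def dI(n, o, I, T, dz):
    """Schwarzschild increment dI = n*o*(B - I)*dz, with broadband
    sigma*T^4 as a stand-in for the Planck function (toy units)."""
    B = SIGMA * T**4
    return n * o * (B - I) * dz

# A beam brighter than the local Planck intensity is net absorbed
# (dI < 0); a dimmer beam is net augmented by emission (dI > 0).
warm_beam = dI(n=1.0, o=1e-3, I=390.0, T=255.0, dz=100.0)  # I > B
cold_beam = dI(n=1.0, o=1e-3, I=100.0, T=255.0, dz=100.0)  # I < B

# Doubling n doubles both the absorption and emission terms, so the
# magnitude of dI doubles but its sign is unchanged.
doubled = dI(n=2.0, o=1e-3, I=390.0, T=255.0, dz=100.0)
```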
When people used to talk about “settled science”, the only part that was really settled IMO was the Schwarzschild eqn. Nobody wants to write it down, because there is nothing intuitive about an equation that requires numerical integration. So the consensus waves their hands about “trapping” heat in the atmosphere – which is a gross and embarrassing over-simplification.
I’m glad to see the highly ambiguous, simplistic notion of “climate feedback” examined by others from the standpoint of first principles. Two caveats with respect to climate “sensitivity” seem to be in order: 1) what climate science calls feedback is more akin to time-variable changes in system-response characteristics (or to lagged recirculation of thermal energy) than to any rigorous concept of instantaneous feedback in time-invariant control systems with op-amps and 2) on the basis of available empirical evidence there is no compelling reason to assume thermodynamic equilibrium beyond LTE.
IMHO, co2isnotevil comes very close to portraying the actual physical situation. I cannot agree, however, with Frank’s assessment of scienceofdoom as devoted to physical rigor, when it portrays the effect of atmospheric backradiation as being the equivalent of a “second sun.” Unlike any sun, the atmosphere does NOT produce any energy; it merely absorbs and isotropically re-emits (recirculates) LWIR arising from thermalization of insolation at the surface.
Such are the popular myths of “climate science.”
Frank wrote: “Climate scientists believe that Planck response/feedback is made less negative by the effect of “positive feedbacks”.
CO2isnotevil says: “This is something that climate science has so wrong its absurd. Feedback, positive or negative has little effect on the sensitivity associated with the Planck response (dT/dW).”
Frank responds: CO2isnotevil doesn’t have the slightest idea of what he is talking about. Planck feedback (dW/dT, not dT/dW) is the response (increased emission of radiation) of a blackbody to warming. Although Planck feedback varies slightly with temperature (dW/dT = 4oT^3 = 3.8 W/m2/K at 255 K and 5.4 W/m2/K at 288 K), other feedbacks do not change Planck feedback. The tendency of all matter to emit more radiation as it gets warmer is innate and unchanged by outside factors. Other feedbacks change the climate feedback parameter for the planet, not Planck feedback. If you surround a blackbody with some material (like our atmosphere) that changes how much radiation escapes from our planet to space (or changes how much incoming SWR is reflected back to space), the amount of net radiation leaving the planet will not be the amount expected for Planck feedback alone.
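The two figures quoted can be checked directly from the derivative of the Stefan-Boltzmann law:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m2/K4

def planck_feedback(T):
    """Planck feedback dW/dT for a blackbody: d(sigma*T^4)/dT = 4*sigma*T^3,
    in W/m2 per kelvin of warming."""
    return 4.0 * SIGMA * T**3

f255 = planck_feedback(255.0)  # ~3.8 W/m2/K, the effective emission level
f288 = planck_feedback(288.0)  # ~5.4 W/m2/K, the surface
```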
CO2isnotevil shows us a graph of surface temperature at various locations on the planet and the amount of OLR exiting the planet for space above that location. From that information dT/dW is deduced for the planet. First, he has the wrong variable on the wrong axis: the independent variable is surface temperature and the dependent variable is OLR. Differences in local surface temperature cause differences in local TOA OLR, not vice versa. Now let’s look at the temperature range: 100 K to 300 K. Where on the planet is the temperature between 100 K and 250 K? Mostly Antarctica and perhaps some Arctic regions in winter. So the slope of this graph is mostly determined by what happens in Antarctica when it warms, and it contains little information about what is happening on the rest of the planet. Antarctica is extremely dry, so water vapor feedback, lapse rate feedback and cloud feedback between 100 K and 250 K are not typical of what happens over most of the planet.
If you switch axes and look at OLR from the tropics (T greater than 295 K), you see a lot of points that are not on the curve that roughly fits the rest of the data. OLR ranges from 250-330 W/m2 almost independently of temperature. This region represents about half of the surface of the planet.
What we really want to know is how total GLOBAL OLR AND reflected SWR (not shown on this graph) change when the surface temperature rises a few degK, not how local OLR varies with local surface temperature across 200 K. The best information on this subject comes from the seasonal 3.5 K increase in GMST associated with summer in the NH (which is due to its lower heat capacity). (The process of converting to temperature anomalies removes this large annual change from the data we normally see.) We have observed this cycle from space for more than 20 years. OLR increases about 2.2 W/m2/K from both clear skies (water vapor and lapse rate feedback only) and all skies, not the 3.8 W/m2/K we expect for Planck feedback at 255 K. There is no doubt that water vapor+lapse rate feedback reduces OLR from that expected for Planck feedback alone. However, seasonal warming is not global warming – one hemisphere is cooling while the other is warming. There are also changes in reflected SWR from surface snow cover, but there is little seasonal snow cover in the SH. A full interpretation is difficult. However, one thing is clear: the data in this paper show that all AOGCMs get some aspects of the seasonal change in OLR and reflected SWR wrong, and that they are all mutually INCONSISTENT.
http://www.pnas.org/content/110/19/7568/F1.medium.gif
http://www.pnas.org/content/110/19/7568.full.pdf
CO2isnotevil also says: “Another problem is that climate science is absolutely clueless about feedback system analysis and have horribly misapplied Bode’s feedback analysis to the climate system. Bode’s amplifier measures the input and feedback to determine how much output to deliver from an implied, infinite source. The climate system ‘amplifier’ consumes the input and feedback to produce the output and this COE constraint has never been accounted for and is why many believe that positive feedback has such a large effect. In fact, the very idea of runaway feedback is precluded by the absence of the infinite source assumed by Bode. Note that the assumption of a climate system ‘amplifier’ with an implied power supply was baked into AR1 and has never been fixed.”
Frank responds: More nonsense from CO2isnotevil. Unlike an amplifier, which needs a power source, the Earth has a power source – the sun. The Earth can warm because OLR escapes to space more slowly or because less SWR is reflected to space. A doubling of CO2 will reduce OLR escaping to space by about 4 W/m2. Using just a Planck feedback of about 4 W/m2/K, temperature will need to rise about 1 K to emit the same amount of radiation as before doubling. Warming of 1 K will change the amount of water vapor in the atmosphere (and thereby change the lapse rate), change cloud cover, and reduce seasonal snow cover (surface albedo). Suppose the total of these non-Planck feedbacks reduces OLR and reflected SWR by 2 W/m2/K. 1 K of warming will then produce a radiative imbalance of 2 W/m2. That change in radiative balance will cause an additional 0.5 K of warming. That 0.5 K of warming will reduce OLR and reflected SWR by 0.5 K * 2 W/m2/K = 1 W/m2. That 1 W/m2 change in radiative balance will produce an additional 0.25 K of warming. The result is an infinite geometric series: 1 + 1/2 + 1/4 + 1/8 … = 2 K of warming. If all of the non-Planck feedbacks total 3 W/m2/K, the geometric series is 1 + 3/4 + 9/16 + 27/64 … = 4 K. If the total of all non-Planck feedbacks is only 1 W/m2/K, then the series is 1 + 1/4 + 1/16 + 1/64 … = 4/3 K. One doesn’t need to draw any analogies to Bode amplifiers to work out the consequences of feedback, but the mathematics is similar.
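The geometric series can be checked numerically. Its closed form is forcing/(planck - feedback), and it converges only while the feedback total stays below the Planck term:

```python
def warming_series(forcing=4.0, planck=4.0, feedback=2.0, n_terms=60):
    """Sum the geometric series described above: an initial warming of
    forcing/planck K, where each increment dT drives a further radiative
    imbalance (feedback * dT) W/m2 and hence dT * feedback/planck more
    warming. Converges only while feedback < planck."""
    dT = forcing / planck
    total = 0.0
    for _ in range(n_terms):
        total += dT
        dT *= feedback / planck
    return total

two_K = warming_series(feedback=2.0)          # 1 + 1/2 + 1/4 + ... -> 2 K
four_thirds_K = warming_series(feedback=1.0)  # 1 + 1/4 + 1/16 + ... -> 4/3 K
```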
However, one doesn’t need to consider “amplification” by feedback at all. The average photon escaping to space is emitted from about 5 km, where the temperature is 255 K. If we think in terms of a blackbody + feedbacks, the Earth loses an additional 4 W/m2 for every 1 K rise in surface temperature: -4 W/m2/K. The additional water vapor in the atmosphere blocks about 2 W/m2 for every 1 K rise in surface temperature: +2 W/m2/K. The fact that more humidity causes more warming in the upper troposphere than at the surface (lapse rate feedback) adds an additional 1 W/m2 to OLR per 1 K: -1 W/m2/K. During the winter, a 1 K increase in temperature will reduce seasonal snow cover and reflect a few tenths of a W/m2 less SWR from the surface to space, and if we wait for centuries, changes in polar ice caps will also reflect a little less: perhaps +0.3 W/m2/K. Total: -2.7 W/m2/K before accounting for changes in clouds. We know that cloud cover diminishes in the summer, so cloud feedback is likely positive, +0.7 W/m2/K to make a wild guess. Total: -2 W/m2/K. Take the reciprocal: 0.5 K/W/m2. A doubling of CO2 is 4 W/m2, making ECS = 2 K/doubling. This is about what Otto (2013) and Lewis & Curry (2014) deduce from the change in forcing and the observed change in surface temperature. If cloud feedback is increased to +1.7 W/m2/K, the sum of all feedbacks is -1 W/m2/K and climate sensitivity rises to 4 K/doubling.
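This bookkeeping is easy to reproduce. All of the per-feedback values below are the round numbers and admitted guesses from the paragraph above, not measurements:

```python
def ecs(planck=-4.0, water_vapor=+2.0, lapse_rate=-1.0,
        surface_albedo=+0.3, cloud=+0.7, forcing_2xCO2=4.0):
    """ECS from feedback bookkeeping: sum the feedback terms (W/m2/K,
    negative = stabilizing), take the reciprocal of the net, and scale
    by the forcing for doubled CO2. Defaults are the comment's round
    numbers and guesses, not measured values."""
    net = planck + water_vapor + lapse_rate + surface_albedo + cloud
    return forcing_2xCO2 / -net

base = ecs()            # net -2 W/m2/K  ->  2 K per doubling
high = ecs(cloud=+1.7)  # net -1 W/m2/K  ->  4 K per doubling
```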
Slightly different values are reported for some of the round numbers I used above. Observations of OLR through clear skies from space (Planck + water vapor + lapse rate feedbacks) amount to about -2 W/m2/K. Climate scientists don’t have the ability to distinguish between an ECS of 2 or 4+ K/doubling. Anything higher gets us too close to a runaway greenhouse effect to be believable.
None of CO2isnotevil’s scientific criticism is accurate. This post is wrong: climate feedback is negative in the eyes of the consensus, whether you are an engineer who thinks of the atmosphere as a black box with a single temperature and emissivity or a chemist who thinks in terms of molecules of GHG and uses the Schwarzschild eqn – which has been validated by observations of the atmosphere. The existence of positive feedbacks that reduce negative Planck feedback is not controversial, but the uncertainty in the magnitude of these feedbacks is high.
1Sky1 wrote: “I’m glad to see the highly ambiguous, simplistic notion of “climate feedback” examined by others from the standpoint of first principles. Two caveats with respect to climate “sensitivity” seem to be in order: 1) what climate science calls feedback is more akin to time-variable changes in system-response characteristics (or to lagged recirculation of thermal energy) than to any rigorous concept of instantaneous feedback in time-invariant control systems with op-amps and 2) on the basis of available empirical evidence there is no compelling reason to assume thermodynamic equilibrium beyond LTE.”
Except for changes in ice caps, most feedbacks occur quickly – within one or perhaps two months. If you take the average amount of water in the atmosphere (25.3 mm) and divide by average rainfall (2.7 mm/day), you will find that the average water molecule remains in the atmosphere for about 9 days. So feedbacks associated with humidity and cloud cover respond to changes in surface temperature in weeks, not years. When you are looking at monthly temperature averages, the response appears instantaneous. Typical surface prevailing wind speed is around 10 m/s or 36 km/hr, or about 1000 km/day, or around the world in about a month. Jet streams in the upper atmosphere move far faster. Seasonal snow cover (surface albedo) responds quickly to changes in local temperature. The atmosphere is effectively stirred on a time scale of 1 month, and local feedbacks respond about as fast as we record temperature – i.e. monthly. There is little lag between temperature and most feedbacks.
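The residence-time arithmetic works out as quoted, provided the column-water figure is read in millimeters (about 25 mm of precipitable water; 25 cm would give over 90 days):

```python
precipitable_water_mm = 25.3  # global-average column water vapor, mm
rainfall_mm_per_day = 2.7     # global-average precipitation, mm/day

residence_days = precipitable_water_mm / rainfall_mm_per_day
# ~9.4 days: humidity, and with it water vapor feedback, re-equilibrates
# much faster than the monthly averaging used in temperature records.
```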
Total precipitable water vapor: http://www.esrl.noaa.gov/gmd/publications/annual_meetings/2014/slides/22-140327-C.pdf
(However, Roy Spencer claims that it takes about 2 months for the heat from El Nino to rise to the top of the troposphere.)
1sky1 wrote: IMHO, co2isnotevil comes very close to portraying the actual physical situation. I cannot agree, however, with Frank’s assessment of scienceofdoom as devoted to physical rigor, when it portrays the effect of atmospheric backradiation as being the equivalent of a “second sun.” Unlike any sun, the atmosphere does NOT produce any energy; it merely absorbs and isotropically re-emits (recirculates) LWIR arising from thermalization of insolation at the surface.
Frank responds: The graph provided by co2isnotevil is dominated by signals from Antarctic and ignores feedbacks in SWR (See above). I provided a graph and link to a paper on the GLOBAL response to warming – which is far more relevant than local OLR emitted in response to local temperature.
Whether you believe in a 2-way flux of photons (OLR and DLR) or some form of cancellation producing a 1-way flux, we agree on the net flux (390-333 = 57 W/m2 for example). So the world of SOD isn’t significantly different from yours.
Of course, anyone who has taken statistical mechanics realizes that the behavior of individual photons and molecules is not constrained by the 2LoT. The laws of thermodynamics are not fundamental; they are a consequence of a universe comprised of a very large number of molecules and photons following the laws of quantum mechanics. Temperature is only defined thermodynamically for large collections of colliding molecules, and the NET FLUX of photons is always from hot to cold when the terms hot and cold have meaning. It makes no difference whether we have OLR and DLR or a net flux equal to OLR-DLR, but it is far more practical to calculate OLR and DLR using consensus physics.
Or, as Feynman says below, “if you can’t accept the way nature really behaves, go to another universe” … where the rules are simpler … more philosophically pleasing.
Or go to Venus, where the 740 K surface emits 17,000 W/m2 of thermal radiation and almost no sunlight penetrates to the surface. Why doesn’t that surface cool rapidly?
Frank wrote :
Venus is the test. Its surface temperature is about 2.25 times that of a gray body in its orbit, as opposed to our 1.03. Our 3% could possibly be explained as a spectral phenomenon (altho our ToA spectrum is apparently in the other direction, giving an equilibrium temperature of about the endlessly parroted 255 K). But no material spectrum can create a 225% solar heat gain. And as you point out, only about 3% of the impinging solar energy reaches the surface.
When talking about temperature in a situation like Venus’s surface, I prefer to think in terms of energy density, which is simply the power times a light-second. It’s a presumption to assert the energy is going anywhere.
This is what leads me to conclude that HockeySchtick’s and others’ computations based on constant total energy balance, including gravity, are the only possible answer.
Of course, that is outside the entire GHG paradigm and means any “forcing” calculations not based on effects on the ToA spectrum are void. It is also why the Schwarzschild absorption differential, which is the only equation I have seen claimed to explain the “trapping” of energy in excess of that calculated for the ToA spectrum on the side of a filter away from the source, is the next thing I’d like to see implemented in an APL — so its parameter space can be easily played with and explored.
Bob commented: “the next thing I’d like to see implemented in an APL — so its parameter space can be easily played with and explored.”
There is a group in England (climateprediction.net) that recruited thousands of volunteers to donate unused time on their personal computers to running large (1000+) ensembles of climate models with parameters chosen from within a physically reasonable range. They have shown modelers – but not the public or even the wider climate science community – that the output from the IPCC’s models represents only a tiny fraction of future climates that are compatible with physics. Most of their papers are linked, not behind paywalls, and accessible through this web page:
http://www.climateprediction.net/publications/?letter=&type=&theme=
Looking through the list, I see a lot of recent politicized science trying to attribute extreme weather to aGHGs (a scientifically dubious and meaningless activity). But interspersed are papers that demonstrate that the parameters of climate models are not globally optimum values and that no set of parameters provides a superior representation of today’s climate. In fact, systematic exploration proved that they couldn’t narrow the range of ANY parameter, because some part of that range consistently produced inferior results.
http://www.climateprediction.net/wp-content/publications/nature_first_results.pdf
http://www.cgd.ucar.edu/ccr/bsander/subgrid.pdf
http://rsta.royalsocietypublishing.org/content/365/1857/2145.short
I suspect most skeptics will find these papers too close to the consensus position, but they illustrate the problems with the IPCCs models. Recognition of greater uncertainty means expansion at both the high and low ends of the IPCC range.
The problem with “consensus physics” as portrayed by climate scientists is that it stands basic principles on their head and ignores empirical observations. Nowhere is that more evident than in the failure to recognize that the climate system responds, albeit in a very complicated way, to the virtually sole forcing of insolation as a FEED-THROUGH system, wherein the heat produced by thermalization of insolation at the surface is transferred primarily by moist convection to the atmosphere and thereupon by radiation to space. Feedback in any rigorous sense of looped response is not at play in that process. Treatments relying upon radiative transfer alone are simply inadequate analyses of the thermodynamic problem on an aqueous planet.
1sky1 ,
I’ve been working on an image to make the equation and calculation I want to see before I’ll buy off on the GHG heat trapping hypothesis. It’s been complained that my analysis of a simple uniformly colored ball is too simple to represent a planet. This complaint is made by people who parrot the 255 K meme without understanding that that calculation is just the special case of a particular step-function spectrum with 0.7 absorptivity=emissivity (ae) over the solar power peak and 1.0 over longer wavelengths. So I’ve gone even simpler: a 1-dimensional representation:
http://cosy.com/Science/1DeqDiagram640.jpg
The computation for a simple colored ball, e.g. the lumped earth-atmosphere spectrum, is modeled on the lower row. The surface either absorbs=emits or reflects as a function of wavelength. The filter either absorbs=emits or transmits as a function of wavelength.
The spectrum is the result of what’s absorbed and reflected by the surface and atmosphere together, and the temperature works out to whatever the Top of Atmosphere spectrum is measured to be. Call that T.
The top line has separated the effective atmospheric filter, the same on both sides. The GHG claim is that there exists a filter function such that b + c % 2, i.e. the average temperature at the surface, is greater than T. OK, show me a power spectrum for the source, the absorption=emission spectrum of the surface, and the absorption=emission, transmission spectrum of the filter which displays this phenomenon.
As I say, this is just a draft I’m whipping up. But I think it clarifies the essential physical equations I have never seen presented and don’t ever expect to.
Bob Armstrong:
I’m not sure that I’m following your logic. Since the “filtration” done by the atmosphere is very much different in the SW range than in the LWIR, it would seem that the two processes would not combine readily into one effective filter. BTW, in modeling, I’ve found it very effective to treat the system as an LRC filter, rather than just a RC filter, to account for the internal storage of thermal energy.
1sky1
An RC (or RL) filter is either a low pass or high pass filter depending on where the C (or L) is and both C and L store energy. An LC circuit is a bandpass filter while an RLC circuit is a lossy bandpass filter. The potentially relevant feature of an LC or RLC circuit model is the periodic transfer of energy back and forth between being energy stored in the L and being stored in the C (resonance).
The Earth/atmosphere system could be modelled with the C being the surface and L being the atmosphere. The size of the relevant R can be determined by how long it takes this periodic exchange, once initiated by some change, to decay down to nothing. The L can be precluded, or at least considered insignificant, as the energy stored by the surface and that stored by the atmosphere differ by orders of magnitude. One thing we don’t see is the characteristic ringing expected for this kind of filter if the L were of any consequence relative to R and C.
Another possibility is to model cold as C and hot as L. The total amount of hot and total amount of cold on the planet at any one time varies periodically with the seasons (the p-p magnitude in the N hemisphere is not exactly cancelled by that in the S), although the average across a year is relatively constant and the average seasonal variability is easily cancelled out (anomaly analysis depends on this). As long as analysis covers a multiple of 12 months, an RC circuit model should be more than accurate enough.
How are you mapping the R, L and C to the actual properties of the physical system? What is the resonant frequency? If it matches a circuit with a resonant frequency of 1 cycle per year, yearly averaging will eliminate the requirement for an L.
For the RC model, you can trivially map the product of RC (the time constant) to the amount of time it would take for all of the energy stored by the system to be emitted at its current average rate (see my reply to Esilex in these comments). The way this maps physically is that the C represents where energy is stored by the planet (ocean, land and atmosphere) and the R represents the limiting factor for energy emitted into space (Planck emissions from the energy store, some of which are prevented from escaping out into space and instead are returned back to the surface).
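The RC mapping can be made concrete with round numbers. The 50 m mixed-layer depth and the 2 W/m2/K net radiative response below are illustrative assumptions, not values taken from the comment:

```python
def rc_time_constant(heat_capacity_J_per_m2_K, response_W_per_m2_K):
    """e-folding time of an RC climate model: tau = R*C, where C is the
    heat stored per m2 per kelvin and 1/R is the net radiative response
    per kelvin of warming."""
    return heat_capacity_J_per_m2_K / response_W_per_m2_K

# Illustrative: a 50 m ocean mixed layer (rho * c_p * depth) responding
# at roughly 2 W/m2 per kelvin.
C = 1000.0 * 4186.0 * 50.0  # J/m2/K
tau_years = rc_time_constant(C, 2.0) / (365.25 * 24 * 3600)
# A few years: the scale of the climate's fast thermal response.
```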
I got comfortable computing with spectra when I was learning the math required to think about visual psychophysics in the ’70s. It’s when I gained respect for the dot product, dot: { +/ * }, the sum across products of corresponding pairs (generalized in APL to { f/ g }), as one of the most important computations in any quantitative field. Thus the equation for the radiative balance for any source power spectrum, any particular color (absorptivity=emissivity) spectrum, and any particular sink, in terms of energy, is simply the ratio of the dot products: ( dot[ Source ; object ] % dot[ sink ; object ] ) * TotalIrradiance. This computation applies to any measured spectra, and is easily extended to any arrangement of source and object and sink spectra. It turns into the 4th root of the ratio times the gray body temperature in our orbit of about 278.5 +- 2.3.
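The ratio-of-dot-products balance reads naturally in any array language. A sketch with toy three-band spectra: the band values are made up, and the source and sink spectra are assumed sampled on a common wavelength grid and normalized to the same total.

```python
def dot(f, g):
    """Sum across products of corresponding pairs - APL's { +/ * }."""
    return sum(a * b for a, b in zip(f, g))

def equilibrium_temperature(source, ae, sink, T_gray=278.5):
    """Gray-body temperature times the 4th root of the ratio of the
    object's coupling (ae spectrum) to the source spectrum vs. to its
    own emission (sink) spectrum, per the balance described above."""
    return T_gray * (dot(source, ae) / dot(sink, ae)) ** 0.25

# A flat (gray) ae spectrum recovers the gray-body temperature:
T_flat = equilibrium_temperature([0.5, 0.3, 0.2], [1.0, 1.0, 1.0],
                                 [0.5, 0.3, 0.2])

# An object absorbing well in the source band but emitting poorly in
# its own band runs hotter than a gray body:
T_selective = equilibrium_temperature([1.0, 0.0, 0.0], [0.9, 0.5, 0.1],
                                      [0.0, 0.0, 1.0])
```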
Given that actual spectra are available, in particular the spectrum of the lumped earth and atmosphere as seen from space and the solar spectrum, why isn’t that observed and computed temperature cited instead of the endlessly parroted 255 value? That crude approximation is useless in quantitatively understanding the 4th decimal place variations we’ve observed over the century.
While models in terms of simple filter functions may have some explanatory value, they too are hopelessly crude when the actual measured spectra are available.
Note: I’d love to see a description of just how the ae spectrum of the planet is measured. Certainly satellites at the Lagrangian points should provide the essential data.
co2isnotevil:
The only reason I brought up LRC filters as an aside is because they have impulse responses that maximize not at zero, but at a lagged time. That seems to be the characteristic of linearized planetary temperature response to variations in TSI over intervals much longer than a year. It’s purely an empirical “blackbox” relationship involving a critically damped series filter, without any obvious association with specific physical variables (although I have my ideas). Without imposing sharp resonance, it provides a mathematical means of exploring weakly oscillatory second-order response that is unavailable with first-order RC models. Let’s not get distracted by this side comment.
Sometimes the net feedback is negative, and sometimes the net feedback is positive. This means that the level of feedback varies as conditions vary. Why is it not logical to presume that net feedback effects increase as the temperature level increases?
And if the net feedback is a variable, and not a constant, how can 100 years possibly hope to be enough data to determine the function, especially when we’re talking about multiple functions all interacting together?
Stephen Obeda,
The net feedback can never be positive. If it was, there would be runaway global warming (or cooling, depending on the positive feedback).
There has never been runaway global warming, despite CO2 being more than 15X higher than now.
The net feedback can certainly be positive while the climate system is out of equilibrium – that’s the current problem. We’re warming the planet, and that kicks off positive feedbacks in albedo change, water vapor content in the atmosphere and, possibly, in net cloud impact. This will continue until the top-of-atmosphere radiation again matches incoming radiation, and that equilibrium keeps getting pushed farther out in time as we continue increasing the persistent greenhouse gases.
No researchers say we are headed into a runaway greenhouse, in fact, they specifically rule it out for a very long time. Here’s from Pierrehumbert’s Principles of Planetary Climate – section 4.6, page 284:
“Runaway greenhouse on Earth: With present absorbed solar radiation (adjusted for net cloud effects) of 265 W/m squared, the Earth at present is comfortably below the Kombayashi-Ingersoll limit for a planet of Earth’s gravity. According to Eq. (1.1), as the solar luminosity continues to increase, the Earth will pass the 291 W/m squared threshold where a runaway becomes possible in about 700 million years. In 1.7 billion years, it will pass the 310 W/m squared threshold where a runaway becomes inevitable for an atmosphere with 1 bar of N2 and no greenhouse gases other than water vapor.”
So the science doesn’t say “runaway greenhouse is coming soon”, the science says “our increasing greenhouse gas concentrations will raise the planet’s temperature a certain amount as it has done when greenhouse gases increased in the past”. That’s all.
B Fagan.,
During its summer, each hemisphere receives more than it emits and warms; during its winter, each emits more than it receives and cools. Around the spring and fall equinoxes, the planet is in near perfect instantaneous equilibrium. The sinusoidal seasonal variability in the N hemisphere, delayed slightly from the sinusoidal seasonal stimulus from the Sun, is significantly larger in the N than the S (larger fraction of land in the N), so the planet exhibits a net variability dominated by the N, which again passes through perfect balance in the spring and fall. This is complicated by the fact that the profile of solar variability owing to the 20 W/m^2 average difference (80 W/m^2 total) between perihelion and aphelion is 180 degrees out of phase with the N hemisphere’s dominant seasonal response. We can analyze the hemispheres independently as long as we are considering yearly averages, since when averaged across a multiple of years, there is very little net energy crossing the equator relative to what’s coming from the Sun.
The imbalance you speak of is not real given the approximate seasonal variability around balance of about +/- 100 W/m^2 per hemisphere. The planet adapts very quickly to change; otherwise, we would see little difference between summer and winter, and the onset of volcanic related cooling would take decades. About the best you can claim regarding any imbalance consequential to emissions is that, given measured seasonal time constants on the order of a year, the equivalent amount of CO2 emissions from all prior years that is not already accounted for in the balance is about equal to the prior year’s emissions, which at the measured sensitivity of about 0.2C per W/m^2 works out to an unrealized temperature gain of almost nothing. Even at the highly inflated 0.8C per W/m^2 claimed by the IPCC, it results in only a slightly larger nothing.
I don’t know if it’s a coincidence that you mention “measured sensitivity of about 0.2C per W/m^2 “, but there is a measured increase in surface energy of exactly that amount, due to the change in CO2 concentrations over a decade. So that measured increase in downwelling radiation from CO2 increase in ten years is just a fraction of what the overall increase will be as we shoot past doubling and eventually stop adding more.
First Direct Observation of Carbon Dioxide’s Increasing Greenhouse Effect at the Earth’s Surface
Berkeley Lab researchers link rising CO2 levels from fossil fuels to an upward trend in radiative forcing at two locations
News Release • FEBRUARY 25, 2015
http://newscenter.lbl.gov/2015/02/25/co2-greenhouse-effect-increase/
That’s an open-access summary to a Letter in Nature titled “Observational determination of surface radiative forcing by CO2 from 2000 to 2010” Nature (2015) doi:10.1038/nature14240
Over a decade they measured changing IR radiation on the North Slope in Alaska, and at a site in Oklahoma.
From the final section of the paper:
“Increasing atmospheric CO2 concentrations between 2000 and 2010 have led to increases in clear-sky surface radiative forcing of over 0.2 W m^-2 at mid- and high-latitudes. Fossil fuel emissions and fires contributed substantially to the observed increase. The climate perturbation from this surface forcing will be larger than the observed effect, since it has been found that the water-vapour feedback enhances greenhouse gas forcing at the surface by a factor of three and will increase, largely owing to thermodynamic constraints. The evolving roles of atmospheric constituents, including water vapour and CO2 (ref. 30), in their radiative contributions to the surface energy balance can be tracked with surface spectroscopic measurements from stand-alone (or networks of) AERI instruments”
B Fagan
The W/m^2 figure cannot be 265. Using CAGW math it’s about 255. The only way for a runaway greenhouse is for either of two things to occur, or both: the magnetic field goes away, and as a result water gets reduced and blown away, and/or the surface of the planet becomes hot. The other thing that you’ve implied is that co2 hangs around in the atmosphere for centuries; it doesn’t. Look at the current sink rate and the amount of co2 increase each year.
I don’t think anyone would doubt AGW if it were getting as warm as claimed. The measured amount of incoming is 363. If we are in fact holding on to an additional 25, up from the stated amount back in 2002 of 240, everybody would notice. You wouldn’t have to play adjustments.
Hi, rishrac.
First thing – I didn’t say there would be runaway greenhouse, dbstealey brought it into the conversation as a possibility if net feedbacks were positive and I typed in a few sentences from my copy of “Principles of Planetary Climate” to show that the scientists who study this stuff know we’re not running ourselves into a runaway greenhouse – which is a relief.
https://wattsupwiththat.com/2016/05/12/negative-climate-feedbacks-are-real-and-large/#comment-2214863
Note that net feedbacks don’t have to be permanent – net feedbacks for a period of time can and have been either positive or negative over the history of the earth – negative feedbacks have given us icehouse earths that only thawed when volcanic CO2 emissions (not drawn down by biological uptake or silicate weathering) reached concentrations high enough to turn the cycle to positive feedbacks.
I also disagree with the terms CAGW math or AGW math. There’s greenhouse gas and its effect, and the various feedback cycles that ensue until the global energy balance comes to equilibrium with the change in GHG concentration (and, of course, other forcings and variations acting during the same time). How the gas comes to be emitted doesn’t matter to the physical processes – it’s just a gas.
I’m not just implying CO2’s residence time – again, the science is there for us to point to possible paths. According to some information in Chapter 4 of David Archer’s “The Global Carbon Cycle” (Princeton Press):
– average CO2 molecule – maybe 3.5 years before uptake by a plant or the ocean
– drawing down a huge change in atmospheric concentration becomes more difficult. Residence of a large burst of CO2 will result in the concentration dropping as more and more goes into plants, and especially as it goes into the deeper ocean, but here feedback loops also complicate things. The atmospheric concentration drops a lot within a fairly short time (a century or more?) of emission stopping, but then there’s the long tail.
Warmer water holds less gas, declines in pH slow CO2 intake, ocean circulation changes could slow transport to the polar regions where deepwater is formed, etc. On land, warming might increase soil outgassing, boreal or tropical forests might reach a growth limit based on other nutrients, etc.
But for the lifetime of an increased CO2 concentration, it is reasonable to think hundreds to tens of thousands of years or longer. Why? Because while the CO2 fraction in the atmosphere might decline, it will then result in outgassing of CO2 that had been pulled into deeper water, and by the time we’re done adding CO2, the ocean will be absorbing a pretty hefty slug.
The long-term reduction of CO2 in nature is by combining with minerals in rock – so subject enough mountains to weathering and you solve the problem. Read up on CO2 concentrations during the PETM for a preview – relatively sudden spike in concentration followed by warming followed by long, slow drawdown.
Per year, the sink for co2 is now one and a half times greater than all of the man-made co2 emitted in 1965. That requires some explaining. The long slow drawdown is speculation. Supporting an idea from speculation is not science. A great deal of things have been said about what sinks co2. Supposedly tropical rainforests were a big sink. Are they bigger today than they were in 1965? A warmer ocean cannot hold as much co2 as a cooler one. Is the ocean warmer or cooler? And before you get into a long-winded explanation of how that’s possible, let me point out that the el nino year of 1998 saw the highest level of co2 added of any year – in spite of the fact that we have since added, in increasing amounts, over an extra billion metric tons of co2 per year.
Next, run away greenhouse effect is implied in the ever increasing retention of heat. The “window” the IPCC talks about is the amount of net heat that escapes into space. With more co2 the less heat escapes and more warming ensues. There isn’t any way of separating the two in AGW.
The surface warming I am referring to is the internal heat from the earth. Such as the volcanic activity under the ice sheet that is melting in the west Antarctic ice sheet, not due to atmospheric warming.
Seems to me that a reasonable definition of feedback doesn’t mean just a simple -k * dx spring-like response to deviation from equilibrium. These asserted feedbacks are asserted to move the equilibrium, or eliminate it in a catastrophe a la Thom and Zeeman. Considering that thermal radiation is a T ^ 4 function, it’s rather hard, ie: bonkers, to imagine suppressing its 4 * T ^ 3 derivative below 0.
I’ve thought about it a lot, and I don’t know. We can have a good estimate of how much is man made; natural is a question. One feature in the co2 record that bothers the heck out of me is that early on there are no negative numbers. It bothers me because, without a huge input of natural co2, the carbon cycle would be close to zero in 100 years without the industrial Revolution. The cause of this huge sink is a mystery. Either it’s always been there and functioning, or it’s developed in response to increased co2, or a combination of both.
I wonder what kind of assessment will be made if we are producing as much co2 as we are and the amount co2 does go negative or close to zero . That’s a scary thought.
One other thought I had is that co2 follows temperature, as many here have stated. Until recently the highest yearly increase in co2 was 1998 at 2.93 ppm. The following year, 1999, it dropped to 0.93. (That’s like saying that in today’s sink an extra 24 billion metric tons vanished.) 2008 and 2009, which I believe were colder years, had levels of 1.60 and 1.89 respectively. With this current el nino dying, it’ll be interesting to see if the ppm drops from this year.
Fractal geometry probably plays a big role in this. Similar to the laws of supply and demand with diminishing returns: we have to produce increasingly more co2 to get a small increase in ppm.
rishrac,
Correctomundo, compadre. Although B.E.S.T. claims to have measured AGW, they haven’t. If/when someone produces a verifiable measurement of AGW, it will be accepted across the board by scientists on both sides of the debate.
That hasn’t happened. If it did, then for one thing the climate sensitivity number would be accurately known for 2xCO2, and future global warming due to a rise in CO2 could be accurately predicted.
But as we know, no one was able to predict the most significant global temperature event of the past century: the fact that global warming stopped for nearly twenty years.
Conclusion: although AGW may well exist (I think it does), the fact that it has never been accurately quantified means that it must be quite small. In that case, for all practical purposes AGW can be disregarded as a non-problem.
Hi, dbstealey. My understanding is that BEST simply verified that the surface is warming, that it’s warming more or less as the other surface records show, and that even rural sites show similar patterns of warming. Muller wasn’t trying to measure “AGW” he was looking at warming without the adjective up front.
But as I reminded you here https://wattsupwiththat.com/2016/05/12/negative-climate-feedbacks-are-real-and-large/#comment-2216346 there HAS been a measurement of the change in greenhouse warming from the change in CO2 concentrations over a decade. Again, the instruments measured IR radiation in CO2 bands, showing an increase in IR as the CO2 concentration increased by about 20ppm during the study. They didn’t look at the molecules for “anthropogenic” tags, they were measuring the greenhouse effect.
So, when do scientists on “both sides” now accept it?
By the way, let me reframe your conclusion to point to a bit of a hole in your logic. You said: “Conclusion: although AGW may well exist (I think it does), the fact that it has never been accurately quantified means that it must be quite small. In that case, for all practical purposes AGW can be disregarded as a non-problem.”
So, to use an analogy, you bring your car to the shop for the mechanic to look at it, because the engine light came on and it’s running funny. He looks for a while and can conclude something is wrong, but that:
a) it’s not the bulb ( a small risk to your wallet)
b) it’s not the entire engine ( a large risk to your wallet)
but he’s going to run some more diagnostics to find out what it really is. Your conclusion is “can’t immediately quantify risk, so the car is fine”?. I’d see the same thing as “it’s going to cost more than that light bulb and less than an engine”.
Not having an exact number is part of life. But the overall range of sensitivity to doubled CO2 has been in a fairly constant range since scientists started looking at it. Look at Manabe and Wetherald 1967. Or lots of other places.
I have no idea what the final answer will be, and it’s complicated by the other feedbacks, by the strong likelihood that we’ll pass a doubled concentration, and by the response of natural systems like the frozen peat that’s much of global permafrost, what the boreal forests will do, etc.
I’d love that the risk be at the low end of the spectrum, but there’s no reason to pin our hopes realistically on that being the result. Great if it happens, but in the meantime, we need to think otherwise.
Hi b fagan,
First, your analogy is simply a version of the Precautionary Principle. If there was evidence of any global damage or harm as a result of the rise in CO2, you would have an argument.
But there isn’t any evidence of global harm from CO2. In fact, all the evidence indicates that the rise in CO2 has been completely harmless, and very beneficial to the environment. At what point will you admit that the “carbon” scare has no basis in reality? If we applied the Precautionary Principle to everything, there would be no progress at all.
Next, the debate is over anthropogenic global warming (AGW), not over global warming, which by all available evidence is natural, not man made. And if Muller and his B.E.S.T. team want to show that a rise in CO2 causes global warming, they’ve certainly failed to demonstrate it. For nearly twenty years CO2 has steadily increased, by more than one-third, while global warming stopped during that time (the “pause”, or the “hiatus”).
There is no correlation between rising CO2 emissions and global warming:
http://jonova.s3.amazonaws.com/guest/de/temp-emissions-1850-web.jpg
As we see, rising global T and rising CO2 are simply coincidental events.
You say:
Not having an exact number is part of life.
But there is no credible number at all! Again: at what point will you admit that any effect from CO2 is too small to matter?
Next, you say:
…the overall range of sensitivity to doubled CO2 has been in a fairly constant range since scientists started looking at it
I don’t know how you can claim that, when the guesstimates of the sensitivity number are all over the map:
http://jo.nova.s3.amazonaws.com/graph/models/climate-sensitivity/climate_sensitivity5.png
Climate sensitivity to 2xCO2 ranges from 6ºC, down in steps to ±3ºC, to 2º – 3ºC, to ≈1ºC, to a fraction of a degree. Dr. Ferenc Miskolczi published a peer reviewed paper showing that sensitivity to CO2 is 0.00ºC. And some scientists say that more CO2 causes cooling. So it all depends on who you ask.
Next, you ask:
…when do scientists on “both sides” now accept it?
Answer: All sides will accept it when a verifiable, replicable, testable measurement quantifying CO2=AGW is produced.
Currently there is no widely agreed-upon sensitivity number. The reason is clear: global warming due to rising CO2 (assuming CO2 causes warming) is still too small to measure. Therefore, it is a non-problem. And that’s why no one in the alarmist crowd ever discusses any kind of cost/benefit analysis. If they did, the “carbon” scare would crash and burn.
Finally, you say:
I’d love that the risk be at the low end of the spectrum, but there’s no reason to pin our hopes realistically on that being the result. Great if it happens, but in the meantime, we need to think otherwise.
The ‘risk’ you’re claiming is not only not at “the low end of the spectrum”; there is no observed risk at all. On balance, the rise in CO2 has been entirely beneficial, with no observed downside.
The biosphere is still starved of CO2, so even the very small rise in that airborne fertilizer has resulted in a measurable greening of the planet. Yet the alarmist crowd still argues that another one part in ten-thousand rise in CO2 is a bad thing!
It’s obvious that the alarmist argument is entirely emotion-based, since they have no credible evidence showing that the increase in CO2 is a problem of any kind. All their objections amount to: “Wait, what if…?” But science doesn’t operate on “What ifs”. Science operates on data. But there is no data quantifying AGW — which is the central question in the debate.
The entire “carbon” alarm is based on the Precautionary Principle. But if we gave that logical fallacy any credence, we would have never gone to the moon. So again: at what point would you admit that there is no credible, verifiable, or observational evidence to support your belief in the Precautionary Principle? Or do you think it should apply, no matter what?
I don’t think there has been any improvement over this grade school level extrapolation :
http://cosy.com/Science/CO2vTkelvin800.jpg
dT%dCO2 is at the very most less than 0.01 at these levels
+1..
This input-output relationship is perfectly modelled by the first order open loop transfer function
H(s) = K/(s + a). A well known physical realization of this transfer function is the resistor-capacitor low pass filter, where the capacitor is an energy storage. This seems to perfectly fit the solar input temperature output situation.
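A quick arithmetic check of this fit (a sketch only: the ~30-day lag is the value read off the head post’s Figure 1, and the model is the first-order one named above): for H(s) = K/(s + a) driven at angular frequency w, the output lags in phase by atan(w/a), so the observed lag fixes the implied time constant 1/a.

```python
import math

# First-order low-pass H(s) = K/(s + a): a sinusoidal input at angular
# frequency w comes out attenuated and lagged in phase by atan(w/a).
period_days = 365.25
w = 2 * math.pi / period_days     # yearly forcing, rad/day

lag_days = 30.0                   # ~1 month Tmax lag, per the head post
phase = w * lag_days              # that lag expressed in radians
a = w / math.tan(phase)           # solve atan(w/a) = phase for a
time_constant = 1 / a             # implied energy-storage time constant

print(round(time_constant, 1))    # ~33 days
```

So a single energy store with a time constant of roughly a month reproduces the observed seasonal lag, consistent with the point that no feedback need be invoked.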
There is no way the observed sinusoidal physical quantities can be taken as proof of a feedback system, be it positive or negative.
Esilex Montagrius
Esilex,
Another way to express this is with the differential equation, Pi(t) = Po(t) + dE(t)/dt, where Pi(t) is the instantaneous solar energy arriving at the planet, Po(t) is the instantaneous power leaving the planet and E(t) is the energy stored by the planet. When Po > Pi, dE/dt is negative and the planet cools, otherwise, the planet warms. We also know that dT/dE is a constant, where T is the average surface temperature (i.e. 1 calorie -> 1 gm H20 -> 1 C)
We can define an arbitrary amount of time, tau, such that all of E can be emitted in tau seconds at the rate Po. Rewriting, we get Pi(t) = E(t)/tau + dE/dt, which you should recognize as the LTI describing an RC circuit, where tau is the time constant. The only difference is that the RC time constant is a real constant, while the climate time constant has a significant negative temperature coefficient since as T increases linearly, Po increases as T^4, thus tau must decrease as T^3.
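The two points above can be sketched numerically: integrating C·dT/dt = Pi(t) − eps·sigma·T^4 with a sinusoidal Pi reproduces both the lagged response and the T^3 dependence of tau. All parameter values below (heat capacity, emissivity, forcing amplitude) are illustrative assumptions chosen only to land near Earth-like numbers, not measured quantities:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
EPS = 0.61         # effective emissivity (illustrative)
C = 1.0e7          # heat capacity per unit area, J/m^2/K (illustrative)
YEAR = 365.25 * 86400.0

def forcing(t):
    """Sinusoidal solar input around a 239 W/m^2 mean (illustrative)."""
    return 239.0 + 20.0 * math.sin(2 * math.pi * t / YEAR)

# Integrate C*dT/dt = Pi(t) - EPS*SIGMA*T^4 with 1-hour Euler steps.
dt, T, series = 3600.0, 288.0, []
for i in range(int(10 * YEAR / dt)):
    t = i * dt
    T += (forcing(t) - EPS * SIGMA * T**4) / C * dt
    series.append((t, T))

# The effective time constant tau = C / (4*EPS*SIGMA*T^3) shrinks as T
# rises -- the negative temperature coefficient described above.
tau_days = C / (4 * EPS * SIGMA * 288.0**3) / 86400.0
print(round(tau_days))  # 35

# The temperature peak in the last simulated year lags the forcing peak
# (at 1/4 of the cycle) by roughly atan(w*tau)/w -- about a month here.
last_year = [p for p in series if p[0] > 9 * YEAR]
t_peak = max(last_year, key=lambda p: p[1])[0]
lag_days = (t_peak - 9.25 * YEAR) / 86400.0
print(round(lag_days))  # roughly a month
```

The lag emerges from the storage term alone, with no feedback loop in the model at all, which is the point of the stimulus-response framing.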
Pi can be rewritten as a function of albedo, which can itself be expressed as a function of ice amount, cloud coverage, and surface and cloud reflectivity. Po can similarly be rewritten as a function of the clear sky emissions and cloudy sky emissions weighted by cloud coverage. Each of these can be further decomposed spatially and temporally. I’ve done the comparison to satellite data (ISCCP), and the data conforms to the constraints of this formulation to a remarkable degree across a hierarchy of spatial and temporal averages, from pixels to hemispheres and from days to years.