# A Decided Lack Of Equilibrium

Guest Post by Willis Eschenbach

I got to thinking about the lack of progress in estimating the “equilibrium climate sensitivity”, known as ECS. The ECS measures how much the surface temperature changes when the top-of-atmosphere forcing changes, once all of the changes have equilibrated, a process said to take on the order of a thousand years. The ECS is measured in degrees C per doubling of CO2 (°C / 2xCO2).

Knutti et al. 2017 offers us an interesting look at the range of historical answers to this question. From the abstract:

Equilibrium climate sensitivity characterizes the Earth’s long-term global temperature response to increased atmospheric CO2 concentration. It has reached almost iconic status as the single number that describes how severe climate change will be. The consensus on the ‘likely’ range for climate sensitivity of 1.5 °C to 4.5 °C today is the same as given by Jule Charney in 1979, but now it is based on quantitative evidence from across the climate system and throughout climate history.

This “climate sensitivity”, often represented by the Greek letter lambda (λ), is claimed to be a constant that relates changes in downwelling radiation (called “forcing”) to changes in global surface temperature. The relationship is claimed to be:

Change in temperature is equal to climate sensitivity times the change in downwelling radiation.

Or written in that curious language called “math” it is

∆T = λ ∆F                               Equation 1 (and only)

where T is surface temperature, F is downwelling radiative forcing, λ is climate sensitivity, and ∆ means “change in”

I call this the “canonical equation” of modern climate science. I discuss the derivation of this equation here. And according to that canonical equation, depending on the value of the climate sensitivity, a doubling of CO2 could make either a large or a small change in surface temperature. Which is why the sensitivity is “iconic”.
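To make the canonical equation concrete, here is a minimal sketch (my own illustration, not from the post) that applies Equation 1 using the commonly cited 3.7 W/m2 of forcing per doubling of CO2 and an assumed ECS of 3.0 °C:

```python
import math

def forcing_from_co2(c, c0=280.0, f_2x=3.7):
    """Radiative forcing (W/m^2) for a CO2 concentration c relative to a
    baseline c0, using the common logarithmic approximation."""
    return f_2x * math.log2(c / c0)

def delta_t(delta_f, ecs=3.0, f_2x=3.7):
    """Equation 1: dT = lambda * dF, with lambda expressed as an assumed
    ECS (deg C per doubling) divided by the forcing per doubling."""
    lam = ecs / f_2x  # deg C per W/m^2
    return lam * delta_f

# A doubling of CO2 (280 -> 560 ppm):
df = forcing_from_co2(560.0)  # 3.7 W/m^2, by definition of f_2x
print(round(delta_t(df), 2))  # 3.0 C, by construction
```

Note that the answer comes out equal to the assumed ECS by construction; the sketch only shows how the canonical equation is used, not whether lambda is actually a constant.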

Now, I describe myself as a climate heretic, rather than a skeptic. A heretic is someone who does not believe orthodox doctrine. Me, I question that underlying equation. I do not think that, even over the long term, the change in temperature is equal to a constant times the change in downwelling radiation.

My simplest objection to this idea is that evidence shows that the climate sensitivity is not a constant. Instead, it is a function inter alia of the surface temperature. I will return to this idea in a bit. First, let me quote a bit more from the Knutti paper on historical estimates of climate sensitivity:

The climate system response to changes in the Earth’s radiative balance depends fundamentally on the timescale considered. The initial transient response over several decades is characterized by the transient climate response (TCR), defined as the global mean surface warming at the time of doubling of CO2 in an idealized 1% yr–1 CO2 increase experiment, but is more generally quantifying warming in response to a changing forcing prior to the deep ocean being in equilibrium with the forcing  …

By contrast [to Transient Climate Response TCR], the equilibrium climate sensitivity (ECS) is defined as the warming response to doubling CO2 in the atmosphere relative to pre-industrial climate, after the climate reached its new equilibrium, taking into account changes in water vapour, lapse rate, clouds and surface albedo.

It takes thousands of years for the ocean to reach a new equilibrium. By that time, long-term Earth system feedbacks — such as changes in ice sheets and vegetation, and the feedbacks between climate and biogeochemical cycles — will further affect climate, but such feedbacks are not included in ECS because they are fixed in these model simulations.

Despite not directly predicting actual warming, ECS has become an almost iconic number to quantify the seriousness of anthropogenic warming. This is a consequence of its historical legacy, the simplicity of its definition, its apparently convenient relation to radiative forcing, and because many impacts to first order scale with global mean surface temperature.

The estimated range of ECS has not changed much despite massive research efforts. The IPCC assessed that it is ‘likely’ to be in the range of 1.5 °C to 4.5 °C (Figs 2 and 3), which is the same range given by Charney in 1979. The question is legitimate: have we made no progress on estimating climate sensitivity?

Here’s what the results show. There has been no advance, no increase in accuracy, no reduced uncertainty, in ECS estimates over the forty years since Charney in 1979. Let’s take a look at the actual estimates.

The Knutti paper divides the results up based on the type of underlying data upon which they were determined, viz: “Theory & Reviews”, “Observations”, “Paleoclimate”, “Constrained by Climatology”, and “GCMs” (global climate models). Some of the 145 estimates only contained a range, like say 1.5 to 4.5. In that case, for the purposes of Figure 1 I’ve taken the mean of the range as the point value of their estimate.

Next, I looked at the 124 estimates which included a range for the data. Some of these are 95% confidence intervals; some are reported as one standard deviation; others are a raw range of a group of results. I have converted all of these to a common standard, the 95% confidence interval. Figure 2 shows the maxima and the minima of these ranges. I have highlighted the results from the five IPCC Assessment Reports, as well as the Charney estimate.
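For readers wondering what that conversion involves: a one-standard-deviation range can be widened to an approximate 95% confidence interval by assuming normality and scaling by 1.96. A minimal sketch (the helper name is mine):

```python
def sigma_range_to_95ci(mean, sigma):
    """Convert a +/- 1 standard deviation spread to an approximate
    95% confidence interval, assuming a normal distribution."""
    z = 1.96  # two-sided 95% point of the standard normal
    return (mean - z * sigma, mean + z * sigma)

# e.g. an estimate reported as 3.0 +/- 0.75 (one sigma):
lo, hi = sigma_range_to_95ci(3.0, 0.75)
print(round(lo, 2), round(hi, 2))  # 1.53 4.47
```

Raw ranges of model results have no such clean conversion, so any common standard necessarily involves a judgment call for those entries.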

The Charney / IPCC estimates for the range of the ECS values were constant from 1979 to 1995, at 1.5°C to 4.5°C for a doubling of CO2. In the Third Assessment Report (TAR) in 2001 the range got smaller, and in the Fourth Assessment Report (AR4) in 2007 it got smaller still.

But in the most recent Fifth Assessment Report (AR5), we’re back to the original ECS range where we started, at 1.5 to 4.5°C / 2xCO2.

In fact, far from the uncertainty decreasing over time, the tops of the uncertainty ranges have been increasing over time (red/black line), while the bottoms of the uncertainty ranges have been decreasing (yellow/black line). So things are getting worse: as you can see, over time the range of the uncertainty of the ECS estimates has steadily increased.

Looking At The Shorter-Term Changes

Pondering all of this, I got to thinking about a related matter. The charts above show equilibrium climate sensitivity (ECS), the response to a CO2 increase after a thousand years or so. There is also the “transient climate response” (TCR) mentioned above. Here’s the definition of the TCR, from the IPCC:

Transient Climate Response (TCR)

TCR is defined as the average global temperature change that would occur if the atmospheric CO2 concentration were increased at 1% per year (compounded) until CO2 doubles at year 70. The TCR is measured in simulations as the average global temperature in a 20-year window centered at year 70 (i.e. years 60 to 80).

The transient climate response (TCR) tends to be about 70% of the equilibrium climate sensitivity (ECS).
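The “doubling at year 70” in the TCR definition is just compound growth, which is easy to verify:

```python
import math

# Years for CO2 to double at 1% per year, compounded:
years = math.log(2) / math.log(1.01)
print(round(years, 1))  # 69.7, i.e. roughly year 70

# Concentration after 70 years of 1%/yr growth from a 280 ppm baseline:
print(round(280 * 1.01**70, 1))  # ~561.9 ppm, just past a doubling
```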

However, this time I wanted to look at an even shorter-term measure, the “immediate climate response” (ICR). The ICR is what happens immediately when radiation is increased. Bear in mind that the effect of radiation is immediate—as soon as the radiation is absorbed, the temperature of whatever absorbed the radiation goes up.

Now, a while back Ramanathan proposed a way to actually measure the strength of the atmospheric greenhouse effect. He pointed out that if you take the upwelling surface longwave radiation, and you subtract upwelling longwave radiation measured at the top of the atmosphere (TOA), the difference between the two is the amount of upwelling surface longwave that is being absorbed by the greenhouse gases (GHGs) in the atmosphere. It is this net absorbed radiation which is then radiated back down towards the planetary surface. Figure 3 shows the average strength of the atmospheric greenhouse effect.
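Ramanathan’s measure is simple arithmetic on two observable fluxes. A sketch using illustrative round numbers (a ~288 K mean surface and ~240 W/m2 of outgoing longwave at TOA are common textbook values, not figures from the post):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def greenhouse_effect(surface_up_lw, toa_up_lw):
    """Ramanathan's measure: upwelling surface LW minus upwelling LW at
    TOA is the LW absorbed by the atmosphere (the greenhouse effect)."""
    return surface_up_lw - toa_up_lw

# Illustrative global averages: ~288 K surface, ~240 W/m^2 OLR at TOA
surface_emission = SIGMA * 288.0**4  # ~390 W/m^2
print(round(greenhouse_effect(surface_emission, 240.0), 1))  # ~150 W/m^2
```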

The main forces influencing the variation in downwelling radiation are clouds and water vapor. We know this because the non-condensing greenhouse gases (CO2, methane, etc.) are generally well-mixed. Clouds are responsible for about 38 W/m2 of the downwelling LW radiation, CO2 for on the order of another twenty to thirty W/m2, and the remaining ~100 W/m2 comes from water vapor.

So to return to the question of immediate climate response … how much does the monthly average surface temperature change when there are changes in the monthly average downwelling longwave radiation shown in Figure 3? This is the immediate climate response (ICR) I mentioned above. Figure 4 below shows how much the temperature changes immediately with respect to changes in downwelling GHG radiation (also called “GHG forcing”).

There are some interesting things to be found in Figure 4. First, as you might imagine, the ocean warms much less on average than the land when downwelling radiation increases. However, it was not for the reason I first assumed. I figured that the reason was the difference in thermal mass between the ocean and the land. However, if you look at the tropical areas you’ll see that the changes on land are very much like those in the ocean.

Instead of thermal mass, the difference in land and sea appears to be related to land and sea snow and ice. These are generally the green-colored areas in Figure 4 above. When ice melts either on land or sea, much less sunlight is reflected back to space from the surface. This positive feedback increases the thermal response to increased forcing.

Next, you can see evidence for the long-discussed claim that if CO2 increases, there will be more warming near the poles than in the tropics. The colder areas of the planet warm the most from an increase in downwelling LW radiation. On the other hand, the tropics barely warm at all with increasing downwelling radiation.

Seeing the cold areas warming more than the warm areas led me to graph the temperature increase per additional 3.7 W/m2 versus the average temperature in each gridcell, as seen in Figure 5 below.

The yellow/black line is the amount that we’d expect the temperature to rise (using the Stefan-Boltzmann equation) if the downwelling radiation goes up by 3.7 W/m2 and there is no feedback. This graph reveals some very interesting things.
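The no-feedback expectation follows from differentiating the Stefan-Boltzmann law F = σT^4, which gives ∆T ≈ ∆F / (4σT^3): the same 3.7 W/m2 produces a larger temperature rise where the surface is cold. A sketch of that calculation (mine, not the post’s):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def no_feedback_dt(t_kelvin, df=3.7):
    """No-feedback warming from an extra df W/m^2, from differentiating
    the Stefan-Boltzmann law F = sigma*T^4: dT = dF / (4*sigma*T^3)."""
    return df / (4.0 * SIGMA * t_kelvin**3)

# The colder the surface, the larger the no-feedback response:
for t_c in (-40, 0, 30):
    t_k = t_c + 273.15
    print(t_c, "C:", round(no_feedback_dt(t_k), 2), "K per 3.7 W/m^2")
```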

First, at the cold end, things warm faster than expected. As mentioned above, I would suggest that at least in part this is the result of the positive albedo feedback from the melting of land and sea ice.

There is support for this interpretation when we note that the right-hand part of Figure 5 that is above freezing is very different from the left-hand part that is below freezing. Above freezing, the temperature rise per additional radiation is much smaller than below freezing.

It is also almost entirely below the theoretical response. The average immediate climate response (ICR) of all of the unfrozen parts of the planet is a warming of only 0.2°C per 3.7 W/m2.

Discussion

We’re left with a question: why is it that, forty years after the Charney report, there has been no progress in reducing the uncertainty in the estimate of the equilibrium climate sensitivity?

I hold that the reason is that the canonical equation is not an accurate representation of reality … and it’s hard to get the right answer when you’re asking the wrong question.

From above, here’s the canonical equation once again:

This “climate sensitivity”, often represented by the Greek letter lambda (λ), is claimed to be a constant that relates changes in downwelling radiation (called “forcing”) to changes in global surface temperature. The relationship is claimed to be:

Change in temperature is equal to climate sensitivity times the change in downwelling radiation.

Or written in that curious language called “math” it is

∆T = λ ∆F                                               Equation 1 (and only)

where T is surface temperature, F is downwelling radiative forcing, λ is climate sensitivity, and ∆ means “change in”

I hold that the error in that equation is the idea that lambda, the climate sensitivity, is a constant. There is no a priori reason to assume that it is, and the evidence discussed above indicates that it is not.

Finally, it is worth noting that in areas above freezing, the immediate change in temperature per doubling of CO2 is far below the amount expected from just the known Stefan-Boltzmann relationship between radiation and temperature (yellow/black line in Figure 5). And in the areas below freezing, it is well above the amount expected.

And this means that just as the areas below freezing are showing clear and strong positive feedback, the areas above freezing are showing clear and strong negative feedback.

Best Christmas/Hanukkah/Kwanzaa/Whateverfloatsyourboat wishes to all,

w.

AS USUAL, I ask that when you comment you quote the exact words you are discussing, so we can all be certain what you are referring to.

## 229 thoughts on “A Decided Lack Of Equilibrium”

1. Babsy says:

The Equation, ∆T = λ ∆F, is ‘derived’ all right! If it were actually measured the science would be settled. We do know the speed of light as it *HAS* been measured (and there is no controversy). Since no one has a paper published that shows an actual repeatable value for ‘F’, the scam goes on…

• Regarding its derivation, both dH/dt and dT/dt are zero in the steady state, thus C (the heat capacity) only affects the rate at which a steady state will be achieved and not what that steady state will be. So while temperature is linear to stored Joules, emissions increase as T^4 and in the steady state, only emissions matter; in other words, it’s just a radiant balance. The derivation would be more correct if the surface wasn’t also radiating energy away.

• Greg says:

temp is linear with thermal energy if you are talking about the same medium (not mixing all media together) and there is no change of state and no change of mass in and out of the “bit” you are looking at. Not one of those ifs is actually the case.

• Mixed medium doesn’t really matter to the linearity. Since T is linear to stored energy and superposition applies to Joules, a proportional average of the heat capacity of its components is as linear as the heat capacity of any single component. In addition, if the constituents vary in time, but have discernible averages integrated over time, the linearity legitimately applies to the averages.

Superposition is a powerful tool to both simplify and conceptualize complexity. To the climate system this just means that 2 Joules can do twice the work of 1 Joule, which makes perfect sense since Joules are the units of work and it takes work to both change and maintain the state (temperature). Changing the state requires work proportional to the temperature change, while maintaining the state requires work proportional to the absolute temperature raised to the 4th power. Once a steady state has been achieved, only the work required to maintain it is required. The IPCC’s ECS seems based on the relationships between the temperature and the energy required to change it while ignoring the energy required to maintain it, which is exactly the opposite of what should be done.

• Izaak Walton says:

Babsy,
The equation ∆T = λ ∆F is used to define λ. It is that simple. You change ∆F and wait several thousand years until the system settles down again and measure ∆T. Divide one by the other and you have λ.

Now the equilibrium climate sensitivity is clearly not a constant and no-one suggests that it is.
For a blackbody the radiated power goes as the 4th power of the temperature, and so the equilibrium climate sensitivity varies as the inverse cube of the temperature. Furthermore many people think the climate is at least bistable (ice-ages and interglacials) and so the value of λ will depend on the starting point.

• The bi-stable nature is an illusion of ice-affected albedo and is more like a hysteresis effect. While this is often modeled as a ‘feedback’, it’s not; it is actually a real change in forcing whose surface emissions sensitivity per W/m^2 arising from more or less reflection is the same as for any other W/m^2. This is only a significant influence when there’s a lot of surface ice to melt, or grow. Currently, we are close to minimum ice, so there’s not much headroom left in the warm direction and most of any influence that remains is in the cold direction.

The way that the IPCC defines forcing fails to distinguish between an increase in solar input (actual forcing) and a decrease in emissions at TOA caused by increased atmospheric absorption. They are not equivalent since all of the increase in solar forcing contributes to warming the surface and its radiant balance with the atmosphere, while a decrease in emissions resulting from an increase in GHG concentrations means that the atmosphere is absorbing more, only about half of which ends up contributing to the radiant balance at the surface while the other half contributes to the radiant balance at TOA.

It’s the difference between adding new energy to the system and recirculating existing energy within the system. Only by considering recirculated energy the same as new energy does the implicit violation of COE they depend on to claim an absurdly high ECS become plausibly deniable.

• Samuel C Cogar says:

Why wait several thousand years?

Excerpts from article:

THE PROBLEM: “the lack of progress in estimating the “equilibrium climate sensitivity”, known as ECS.

The ECS measures how much the temperature changes when the top-of-atmosphere forcing changes, in about a thousand years after all the changes have equilibrated.

The ECS is measured in degrees C per doubling of CO2 (°C / 2xCO2).

Here’s what the results show.

There has been no advance, no increase in accuracy, no reduced uncertainty, in ECS estimates over the forty years since Charney in 1979. Let’s take a look at the actual estimates.

Some of the 145 estimates only contained a range, like say 1.5 to 4.5.

Well lordy, lordy, lordy, …. of course there hasn’t been any advances, any increase in accuracy or any reduced uncertainty in/of ECS estimates.

Why in the world would one change the value of their “estimate” when they don’t know what the actual “value” is or should be …… or if there really is a “value”? Whatever “value” they claim is the correct “value” because no one can disprove it. “HA”, ….. forty years and still holding.

Personally, I say forget the ECS of atmospheric CO2 and concentrate on the ECS of atmospheric H2O vapor. Via H2O vapor they won’t have to wait another “thousand years” to equilibrate the “effect”.

• michael hart says:

Yup. Not only is there no reason to believe that “ECS” is constant, neither is there reason to believe that the users can define it in a plausible manner.

It is a product of the models and the modelers’ imagination, not an observable and measurable number in the real world. A better way to describe it is one using plainer English: “If we do w in our model then the number x changes by amount y in time period z.”

It then becomes more obvious that the Emperor is dressed in fig leaves.

• Geo says:

Let me solve the equation:
λ = 0

The Earth climate is a complex system. The idea that such a system, one that has been relatively stable for millions of years, can be strongly influenced by one single factor creating a feedback loop is highly suspect – such systems are rarely stable for any period of time. Hence the feedback must be very, very small, and the system must be inclined toward stability. The only thing that makes sense is for λ = 0, or as close to zero as is possible. It may be that the number cycles around zero – at some temperatures it is slightly positive, at others slightly negative.

• Chaswarnertoo says:

Or 0.85 tops, may well be 0. As I keep posting…..

• commieBob says:

Such an equation is based on the assumption that the system it describes is linear time-invariant. The Earth’s climate, especially over century and millennium time scales, is neither linear, nor time-invariant. The equation is complete unmitigated hogwash.

• commieBob,

The equation is unmitigated hogwash, not because the climate system doesn’t have an identifiable linear response, but because what they’ve identified as being a linear relationship between W/m^2 and temperature is not even approximately linear to the actual response.

The AVERAGE relationship between the BB emissions corresponding to the surface temperature and the emissions at TOA above that point on the surface is testably linear where the steady state ratio between them is about 1.62 independent of the surface temperature or the emissions at TOA. The inverse of this verifiably constant ratio is the equivalent emissivity of a gray body whose temperature is that of the surface and whose emissions are that of the planet. The equivalent emissivity is 0.62 and when the expected behavior of this equivalent gray body is plotted in green against 3 decades of monthly measurements of the surface temperature vs. the emissions at TOA for each 2.5 degree slice of latitude from pole to pole plotted as small red dots, the correlation is undeniable. The long term averages for each slice are the larger green and blue dots where the correlation is unambiguous.

When plotted to the same scale, the IPCC nominal sensitivity (the short blue line) is represented by a slope that passes through the origin, rather than being a slope tangent to the actual response at the average temperature. This is represented by the short green line, which coincidentally? is the same as the slope of an ideal BB corresponding to the so called ‘zero feedback’ response shown in black.

Since in the steady state, the emissions at TOA are equal to the incident energy at TOA, this ratio is exactly equivalent to the incremental effect of the next W/m^2 of solar power, where 1.62 W/m^2 per W/m^2 of forcing corresponds to an ECS of about 1.1C based on doubling CO2 being equivalent to 3.7 W/m^2 of solar forcing keeping CO2 concentrations constant.
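The comment’s ECS arithmetic can be checked numerically: 1.62 W/m^2 of extra surface emission per W/m^2 of forcing, applied to 3.7 W/m^2 and converted to temperature through the Stefan-Boltzmann slope at a nominal 288 K surface. This merely reproduces the commenter’s numbers; it does not validate the premise:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4
T_SURF = 288.0          # nominal mean surface temperature, K

# Extra surface emission for 3.7 W/m^2 of forcing at the claimed 1.62 ratio
extra_emission = 1.62 * 3.7  # ~6.0 W/m^2

# Slope of BB surface emission vs temperature at 288 K: dF/dT = 4*sigma*T^3
slope = 4.0 * SIGMA * T_SURF**3  # ~5.4 W/m^2 per K

print(round(extra_emission / slope, 2))  # ~1.1 C per doubling, as claimed
```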

I’ll make another testable prediction for you. If you were to plot the same data coming from the output of any GCM attempting to mimic the historical data against the actual data in the same form as the plot I’ve shown above, the fact that there are serious errors in the model will become glaringly obvious.

2. There cannot be any delay in the equilibrium process without breaching the Gas Laws. If the surface temperature changes then the volume of the atmosphere changes instantly via conduction and that change is instantly neutralised by adjustments in convection.
The radiative theory of gases is in breach of the Gas Laws by virtue of the fact that during any delayed process of achieving equilibrium the atmosphere will not be behaving in accordance with the Gas Laws.

• Actually, the absorption and emission of photons by GHG molecules is independent of the Gas Laws, which apply only to the kinetic motion of gas molecules and not to the photons absorbed and re-emitted by GHGs, which are governed by the laws of Quantum Mechanics and Electromagnetics.

An error arises by assuming that a photon of energy absorbed by a GHG is quickly ‘thermalized’ into the kinetic energy of molecules in motion. It’s not, and the data is unambiguously clear about this. When we examine the emitted spectrum at TOA over clear skies, the attenuation in absorption bands is only about 50%, yet the probability that a photon emitted by the surface in those bands will be absorbed is near 100%. The only possible source of these absorption-band photons is re-emission by GHG molecules, since O2 and N2 are completely transparent to these wavelengths.

The failure arises by considering that vibrational energy translated into rotational energy is a one-way path, while in fact it is symmetrically bidirectional, and thus there’s no net conversion.

• That may be so but if the temperature does change then equilibrium must occur instantly otherwise there is a breach of the Gas Laws.

• Stephen,

Nothing happens instantaneously, since there’s always a time constant involved; moreover, an exact equilibrium can only be approached and never achieved. Nonetheless, relative to the kinetic behavior of the atmosphere, the time constant is only on the order of hours in response to surface temperature changes.

Note that none of this violates the ideal Gas Laws, since the atmosphere is not an ideal gas.

• The response is instant at point of contact with the ground but does need a short time to flow through a complete convective overturning cycle.
The atmosphere is near enough to an ideal gas for the Gas Laws to apply for all practical purposes.

• Don K says:

Stephen — this discussion is basically theological — how many angels can dance on the head of a pin? — not scientific. But I’m pretty sure that the fine print for the ideal gas law (pv=nrt, right?) includes the magic phrase “in thermal equilibrium”. A CO2 molecule absorbing or emitting IR is not, I believe, in thermal equilibrium.

My understanding is that in most cases the energy exchanged by gases such as water vapor, CO2, etc. via radiation is too energetic to be seamlessly converted to kinetic energy at Earthly temperatures. (That’s addressed in the Book of Quantum Mechanics which, like The Book of Revelation, is beyond the understanding of most mortals. Certainly me. And probably you.) The scriptures^h^h^h … textbooks equate heat/temperature to kinetic energy. So the IR-related energy (which is thought to be stored in molecule rotation/vibration, not molecular motion) is latent heat, not sensible heat. And therefore it is exempt from the gas laws.

But if it’s latent and not sensible, how is it going to warm the planet and exterminate humanity? Beyond my pay grade.

• Don,
It turns out that the energy of a 10u photon is about the same order of magnitude as the kinetic energy of a gas molecule at its nominal speed. It’s not that the photons are too energetic, but that in general, the energy manifested by photons is orthogonal to the energy manifesting translational motion which is the only energy that the Ideal Gas Law is concerned with.

Trenberth’s arbitrary conflation of the energy transported by photons with the energy transported by matter is a source of significant confusion.

• co2isnotevil, at ground level (1 Atm pressure, typical temperatures), the mean time for a CO2 molecule to give up the energy it absorbed from a 15 μm photon is:

● a couple of nanoseconds, for energy transfer by collision with another air molecule

● about 1 second, for energy loss by emission of another photon

In other words, when a CO2 molecule absorbs a 15 μm photon, > 99.9999998% of the time it loses that energy by collision with another air molecule, rather than by emitting a photon. (At higher altitudes the ratio is somewhat reduced, of course.)
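The >99.9999998% figure follows from treating collisional and radiative de-excitation as competing processes with the two quoted timescales; a quick check (the nanosecond and second figures are the comment’s, not mine):

```python
t_collision = 2e-9  # s, mean time to collisional de-excitation (comment's figure)
t_emission = 1.0    # s, mean time to radiative de-excitation (comment's figure)

# For two competing random processes, the probability that emission wins
# is its rate divided by the total rate, i.e. t_collision/(t_collision + t_emission)
p_emit = t_collision / (t_collision + t_emission)

# Fraction de-excited by collision instead:
p_collide = 1.0 - p_emit
print(f"{p_collide:.9%}")  # ~99.9999998%
```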

When people learn that, they often make the mistake of assuming that means the CO2 in the atmosphere emits almost no radiation. That’s incorrect, because the CO2 not only gives up energy to other air molecules by collisions, it also absorbs energy from other molecules by collisions.

When a CO2 molecule gives up kinetic energy (heat) by collision with another air molecule, the energy is not lost. The other air molecule gains exactly the same amount of energy. N2, O2, Ar, CO2, etc. molecules are continually exchanging energy by collisions with one another.

So, regardless of how much LW IR radiation the CO2 in the atmosphere absorbs, it remains at the same temperature as the rest of the atmosphere. The absorption of radiation by CO2 in the atmosphere doesn’t directly affect the amount of radiation emitted by the CO2. Rather, absorbing radiation simply raises the temperature of the atmosphere. The temperature of the atmosphere (and the CO2 partial pressure) governs the amount of radiation emitted by the CO2.

You’re correct that essentially all of the 15 μm radiation from the surface is absorbed by the atmosphere (mainly the CO2 in it). You’re also correct that when a satellite looks down at the Earth, it sees a substantial amount of outbound 15 μm LW IR. But that IR is not coming from the ground; it is coming from the CO2 in the atmosphere.

The average altitude from which those photons are emitted is called the “emission height.” It varies with wavelength. At the center of the 15 μm CO2 absorption band, the emission height is much higher than it is at the fringes.

Additional CO2 in the atmosphere raises the emission height. It is commonly claimed that doing so lowers the temperature of the CO2 at the emission height, according to the tropospheric lapse rate. That’s used to explain global warming, because lowering the temperature at the emission height would reduce emissions by CO2, “insulating” the Earth from some of the energy that it would otherwise have lost.

However, it turns out that at the center of the 15 μm absorption/emission band, the emission height is already at or near the tropopause, where the lapse rate is zero. So it’s only due to wavelengths at the fringes of the 15 μm absorption band that increased CO2 concentration has any warming effect at the Earth’s surface.

• Dave,

If, as you say, collisions convert all of the state energy into translational motion upon a collision within nanoseconds, what’s the origin of the significant photon flux at TOA in strong absorption bands? It can’t be the GHG molecules, as the state energy from all the available photons emitted by the surface has already been converted into translational energy and quickly shared by subsequent collisions, so there’s not enough energy available in subsequent collisions to re-energize GHG molecules. To do so requires converting just about all, or more, of the translational kinetic energy into state energy, resulting in reducing the velocity of the colliding molecules to near zero (absolute zero in terms of the temperature), which clearly isn’t happening. Something similar happens with laser cooling, but this can only occur under highly controlled conditions. The bottom line is that atmospheric collisions are not energetic enough to either consume or provide state energy for GHG molecules, but a collision is enough of an impact to significantly increase the probability of de-energization by the emission of a photon.

What happens upon a collision is that the GHG molecule may be de-energized by the emission of a photon. This isn’t a spontaneous emission, but an induced emission, where that emitted photon is quickly absorbed by another GHG molecule. Another process that can induce the emission of a photon is the absorption of a second photon, in which case the mean time before a spontaneous emission decreases to near zero. What you’re saying does have a finite probability of occurring; it’s just so low as to be insignificant, although to be fair, Joules are Joules, so in principle, the 2 mechanisms could have the same net effect.

Why would all state energy be converted into translational motion upon any collision? A collision with a ground-state molecule of the same type has a finite probability to exchange state, but the velocities of the molecules are otherwise unaffected. Note that most absorption physics is based on there being only one type of molecule in the mix. Another problem is trying to support the magnitude of the flux in absorption bands at TOA if it can only come from spontaneous emissions, since even in the stratosphere, collisions will likely occur before an actual spontaneous emission.

The only actual ‘thermalization’ occurs when an energized GHG molecule condenses upon or is absorbed by the liquid and solid water in clouds. We can even see this in the spectrum at TOA where there’s slightly more than a 50% reduction in the energy in the wavelength bands most affected by water vapor absorption.

• donb says:

CO2:
As Dave says, N2 & O2 collisions with CO2 maintain it at a steady velocity distribution, the Maxwell-Boltzmann distribution, which actually allows a significant kinetic energy spread. IR emission from CO2 would be more probable when it resides near the upper energy side of that distribution. If all that CO2 energy distribution is too small for 10 micron emission to occur, then CO2 absorption of 10u IR would slowly raise the overall energy level of all molecules at that level, until CO2 does have enough energy to emit. And then, as I comment below, it only emits at a rate proportional to its temperature.

• Don,

Yes, collisions of CO2 molecules with N2/O2 redistribute their translational kinetic energy upon collisions. The issue I’m pointing out is that whether or not a GHG molecule is energized has no bearing on the translational kinetic energy available to share with other molecules upon a collision. State energy can only be shared with other GHG molecules, and only in whole quanta. The classic case is the transfer of state from an energized molecule to a ground-state molecule upon collision. It seems that equipartition of energy has been overly generalized to degrees of freedom that aren’t actually free, but constrained by quantum mechanical considerations.

Have you noticed the similarities between the distribution of kinetic energy in molecules in motion corresponding to a temperature and the distribution of photon energies in an ideal Planck spectrum corresponding to a temperature?

• Crispin in Waterloo but really in Goleta says:

CO2 and Don

There is one thing missing from the explanations above, which are generally on the mark. It is that the average path length of an average photon of IR radiation is 1.8 times the depth of the atmosphere.

The implication is that some of the IR is not absorbed at all, and some is absorbed once. Much less is absorbed twice and so on. The atmosphere is quite “holey” so don’t over-state the absorption/re-radiation thing.

If there were no IR rising from the surface, the CO2 and other GHGs would still emit photons, because as they cooled by doing so they would be re-heated by collisions with other molecules.

Lastly, there is no functional difference between insulation wrapped around a baking oven and the atmosphere wrapped around the Earth other than the energy put into the oven is not passed through the walls.

Adding more energy input increases the internal temperature until equilibrium is attained, just as adding more insulation does. The net effect is the same. The discussion above about a new equilibrium based on more solar input vs. more GHG was on the mark. The TOA output always balances the total energy input, save when the oceans are absorbing energy, which can be ignored long term as an average energy content.

The idea that there is a TOA “imbalance” is laughable when the value claimed is one tenth of the uncertainty. The uncertainty is 50 W/m^2 and the value claimed is 3-5. Good luck with that on your student project. Gets an F.

• Willis Eschenbach says:

Crispin in Waterloo but really in Goleta December 28, 2019 at 10:32 am

CO2 and Don

There is one thing missing from the explanations above, which are generally on the mark. It is that the average path length of an average photon of IR radiation is 1.8 times the depth of the atmosphere.

Cite? I ask because my bible, Geiger’s “The Climate Near The Ground”, gives the following percentages for the origin of downwelling LW radiation at the surface:

First 87 meters above the surface – 72% of downwelling radiation
Next 89 meters – 6.4%
Next 93 meters – 4.0%
Next 99 meters – 3.7%
Next 102 meters – 2.3%
Next 108 meters – 1.2%

So I’d doubt greatly that overall the mean free path for a LW photon is actually greater than the thickness of the atmosphere.
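For what it’s worth, treating Geiger’s first figure as simple Beer–Lambert absorption (a rough sketch; “origin of downwelling radiation” is not exactly an extinction measurement) implies an e-folding length far shorter than the atmosphere is deep:

```python
import math

# If absorption of LW radiation followed a simple Beer-Lambert law, the
# fraction originating in the first layer pins down the e-folding length.
# Geiger's figure: 72% of downwelling LW comes from the first 87 meters.
fraction_first_layer = 0.72
layer_depth_m = 87.0

# 1 - exp(-d/L) = 0.72  =>  L = -d / ln(1 - 0.72)
efold_m = -layer_depth_m / math.log(1.0 - fraction_first_layer)
print(f"implied e-folding length ~ {efold_m:.0f} m")  # ~68 m
```

An e-folding length of well under 100 m near the surface is hard to square with a mean path of 1.8 atmosphere depths.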

Now, to be complete, I need to note that there is an “atmospheric window” allowing about 40 W/m2 of LW of a particular frequency band to escape directly to space.

But the rest of the frequencies are absorbed low down, then repeatedly re-emitted and reabsorbed on their way up from the surface.

w.

• butch123 says:

When a CO2 molecule gives up energy kinetically it spreads that energy over all the molecules of the local atmosphere in the form of kinetic energy. It is not reserved to only accelerating other CO2 molecules or to re-exciting other CO2 molecules. Therefore all the other atmospheric molecules are warmed. When a CO2 molecule is re-excited by collision it REMOVES heat from the atmosphere generally. Therefore there is NO net heating of the atmosphere from the IR photon that was originally emitted from the Earth.
The same with the theory of downwelling IR. If there is downwelling IR it has never been used to heat the atmosphere previously.

The originally emitted IR is captured by CO2 within a few tens of meters. Of the captured energy, only about a billionth or so is able to be re-emitted, because of the time constants of collision versus re-emission as determined by the Einstein A coefficient and the time to collision in the atmosphere. A billionth! The lower Earth atmosphere is a quenching atmosphere for an excited CO2 molecule; it practically NEVER re-emits. We see a huge amount of IR in the 15 micron band missing from the TOA spectra.
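The “billionth” figure can be sanity-checked with order-of-magnitude numbers; both lifetimes below are assumed round values, not measurements:

```python
# Rough ratio of collision time to spontaneous-emission lifetime for the
# CO2 15-micron bending mode in the lower troposphere.  Assumptions: a
# radiative lifetime near 1 s (Einstein A coefficient ~1 s^-1) and a mean
# time between collisions near 1e-9 s at surface pressure.
radiative_lifetime_s = 1.0   # assumed, order of magnitude
collision_time_s = 1.0e-9    # assumed, order of magnitude

# Probability an excited molecule radiates before its first collision
p_radiate = collision_time_s / radiative_lifetime_s
print(f"radiate-before-collision odds ~ {p_radiate:.0e}")
```

With those inputs the odds come out near one in a billion, matching the claim; of course, collisions also re-excite molecules, which is the other half of the thermalization argument in this thread.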

I have been told that the method for calculating downwelling IR is to take the IR emitted from the Earth and then subtract the missing IR at TOA to calculate the amount of downwelling IR at the surface. (or to other CO2 molecules.)

If in fact there is downwelling IR, it simply cannot be measured directly. AERI instruments on the North Slope of Alaska and in Oklahoma are specifically designed to DIRECTLY measure this radiation. They in fact measure downwelling radiation in other frequency ranges, but in the 15 micron band they do not measure it. What they do is calculate a small amount by comparing the actual measurement with a simulated spectrum. It is so tiny as to be unobtainable by direct measurement. Jon Gero originally claimed that it was only seen in the far wings. Only after a repeat study, in which Daniel Feldman applied a simulated spectrum to the actual spectrum, was any difference discerned. It certainly did not account for the missing TOA energy.

• donb says:

However, the greenhouse heating generated by CO2 occurs because the RATE at which IR is emitted to space by those CO2 molecules is much lower than the rate at which the same IR would be emitted from the surface directly to space. The reason is the surface–high-atmosphere temperature difference.

• donb says:

YES, YES, YES !!

• John Francis says:

The atmosphere is heated by CO2 absorption. I fail to see how that increases the temperature of the surface, except when the surface is colder than the atmosphere. Any down-welling radiation to a warmer surface must be immediately re-radiated, i.e. reflected back upwards.

Have you felt the warming effect of CO2-initiated down-radiation on a cold cloudless night? Me neither. However we both have felt the warming effect of clouds on a cold night. Water condensation in clouds produces latent heat release that is the real reason that the Earth has a liveable climate. CO2 is beneficial to crops and has a negligible effect on the surface temperature. Am I wrong? Anyone?

• tty says:

“Any down-welling radiation to a warmer surface must be immediately re-radiated, i.e. reflected back upwards.”

No, but the NET radiation flux from the surface must be greater than the down-welling radiation (though not in the same frequency bands).

• Samuel C Cogar says:

tty – December 27, 2019 at 5:08 am

No, but the NET radiation flux from the surface must be greater than the down-welling radiation (though not in the same frequency bands).

Why not …… the same frequency bands? To wit:

Carbon dioxide has specific bands of high absorption (which correspond to high emissivity).

In the case of CO2, there is a specific emission band which corresponds to the absorption band. This means that any radiation emitted by CO2 is emitted at just the perfect wavelength to be reabsorbed by other molecules of CO2.

• tty is correct. The Earth’s surface does not become more reflective when it warms, and whether the surface absorbs or reflects incoming radiation does not depend on whether it is warmer or colder than the air above it.

You do, indeed, experience the warming effect of downwelling radiation from CO2 in the atmosphere, regardless of whether or not the air emitting it is cooler than your body temperature. Your skin just can’t separate the warming effect of that LW IR from the warming and cooling effects of convective/conductive and evaporative processes. However, the down-welling LW IR can be measured with instruments.

That’s what Feldman et al 2015 did: they measured the downwelling LW IR, during clear-sky conditions, using Atmospheric Emitted Radiance Interferometers. They took measurements over a ten-year period, during which the average atmospheric CO2 level increased about 22 ppmv.

They reported that a 22 ppmv increase (+5.953%, starting from 369.55 ppmv in 2000) in atmospheric CO2 level resulted in a 0.2 ±0.06 W/m² increase in downwelling LW IR from CO2. (5.953% is about 1/12 of a doubling.)
(1/log2(1.05953)) × 0.2 W/m² = 2.40 W/m² (central value)
(1/log2(1.05953)) × (0.2 – 0.06 W/m²) = 1.68 W/m² (low end)
(1/log2(1.05953)) × (0.2 + 0.06 W/m²) = 3.12 W/m² (high end)
(note correction of a small typo in my first comment on this article, which affected the last digit)

So, Feldman’s results indicate that CO2 forcing is 2.40 ±0.72 W/m² per doubling, which is consistent with Happer’s result, and inconsistent with the commonly claimed figure of 3.7 ±0.4 W/m² per doubling.
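For anyone who wants to reproduce the extrapolation, the arithmetic above is just:

```python
import math

# Reproducing the extrapolation from Feldman et al. (2015): a measured
# 0.2 +/- 0.06 W/m^2 rise in clear-sky downwelling LW from CO2, over a
# period in which CO2 rose ~22 ppmv from 369.55 ppmv.
measured = 0.2                    # W/m^2
uncertainty = 0.06                # W/m^2
ratio = (369.55 + 22.0) / 369.55  # = 1.05953

doublings = math.log2(ratio)      # fraction of a doubling, ~1/12
central = measured / doublings
low = (measured - uncertainty) / doublings
high = (measured + uncertainty) / doublings

print(f"{central:.2f} W/m^2 per doubling ({low:.2f} to {high:.2f})")
```

This reproduces the 2.40 (1.68 to 3.12) W/m² per doubling figures quoted above.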

• EdB says:

“The atmosphere is heated by CO2 absorption. I fail to see how that increases the temperature of the surface”

The mean height to, say, 90% absorption drops from say 7 meters to 5 meters with the higher CO2 concentration. The net change to convection is negligible, but presumably happens to maintain the lapse rate.

The increase in DWR at the surface heats the surface more, producing more moist warm air convection, which is a cooling function.

In moist conditions, no temperature change happens on the surface. In dryer conditions, such as at the poles, warming should occur, which is moderated by T^4 radiative cooling.

It is very hard for me to see any measurable greenhouse effect. It is a dynamic, turbulent system, with surface temperature governed by air density, and cloud/ozone controlled insolation. How does HITRAN compute that? It doesn’t.

Clouds alone produced 10X the radiative forcing during the “brightening” before the turn of the century. Whatever caused that is my subject of interest. We are living inside this experiment.

• Alan Tomalty says:

There must be some reflection of IR by the clouds to produce back radiation. However, John, if your theory was correct the oceans would boil over, because release of latent heat upon condensation is the only way that the 75 W/m^2 of latent heat produced by evaporation from ocean water can be got rid of by the water molecules in the air over the oceans. If even 50% of that came back down to the surface as back radiation, the oceans couldn’t evaporate it fast enough to prevent boiling over. The oceans emit only ~13 W/m^2 of direct LWIR on average, so evaporation dwarfs any other heat release process from the oceans. The sun shines directly on the surface more than 8 hours per day in most of the US, and huge parts of the ocean get much more sunlight. This sunlight radiation has to escape back out into the upper atmosphere from the oceans somehow, and that process is by evaporation. Since even a desert in summertime is very cold on a cloudless night, there can’t be enough back radiation by CO2 to worry about. There may be some latent heat release to the surface upon condensation, but in general, hot air rises.

• Samuel C Cogar says:

Dave Burton – December 27, 2019 at 3:34 pm

tty is correct. The Earth’s surface does not become more reflective when it warms, and ………

That was nice, Dave, but I don’t recall responding to any such “reflective” claim by tty.

You do, indeed, experience the warming effect of downwelling radiation from CO2 in the atmosphere, regardless of whether or not the air emitting it is cooler than your body temperature. …… However, the (CO2’s) down-welling LW IR can be measured with instruments.

Well now, Dave B, iffen that’s what you believe, …… then it is you who has the problem, not me. Alan Tomalty will explain it to you in the following quote, to wit:

Alan Tomalty : June 12, 2019 at 11:23 pm “There is one way to separate out the H2O and CO2 greenhouse back radiation. Pick the driest desert in the world and measure the back radiation at nighttime only on the nights without any clouds. NASA refuses to do this and publish the results because their scientists have told them that the back radiation by CO2 alone is unmeasureable. In other words 0.0000000000 W/m^2
https://wattsupwiththat.com/2019/06/12/earth-as-a-solar-collector/#comment-2722350

So, Feldman’s results indicate that CO2 forcing is 2.40 ±0.72 W/m² per doubling, which is consistent with Happer’s result, and inconsistent with the commonly claimed figure of 3.7 ±0.4 W/m² per doubling.

Shur nuff, Dave B, ……. and my common sense results INDICATES that you are partially addicted to the AGW Kool Aide.

• Samuel C Cogar says:

Alan Tomalty – December 28, 2019 at 12:31 am

Since even a desert in summertime is very cold on a cloudless night, there can’t be enough back radiation by CO2 to worry about.

Alan T, very, very, very few people are willing to discuss “desert climates” when it involves “downwelling” radiation from atmospheric CO2. They all should try spending a “summer night” in a desert without a coat or blanket to keep warm.

Another example of why atmospheric CO2 is a literal failure as a global warming, …. uh, human body warming “radiant” or “conductive” in-home atmospheric gas.

“DUH”, in-home heating systems, both forced air and radiant heating are highly ineffective at keeping one’s body “warm” if the humidity (H2O vapor) is extremely low.

So, iffen you feel chilled, chilly or cold when your thermostat reads 75 F or so, then put a pot of boiling water on the stove or turn on your “humidifier”.

• Jean Parisot says:

How do we start with a broadband TSI estimate to separate energy contributions in the absorption bands versus vibrational contributions in other millimeter-wave bands?

• Rich Davis says:

Stephen, I think that the question of equilibrium here is one of equilibrium with the deep ocean (not within the atmosphere-surface system). Equilibrium with the deep ocean may take hundreds or thousands of years due to slow mass transport. Transient temperature gradients will persist for a very long time. In fact it seems to me that equilibrium can never actually occur due to chaotic, constantly changing factors that oppose each other. Equilibrium in this case assumes that every factor, including CO2 remains completely unchanged for many centuries. How impossible is that? Indeed the ridiculous assumption is that the earth was absolutely static until we came along and burned some fossil fuels. It’s a kind of anti-hubris, that assumes everything we do has catastrophic impacts and there is nothing we can do that is beneficial.

• Rich,

The temperature of the deep ocean is dictated by the pressure-temperature-density profile of water. As long as there’s cold water at the poles, the thermohaline circulation ensures that even the deep ocean at the equator will be close to 0C, independent of the planet’s average temperature. Gravity works very quickly and quite well to stratify hot and cold in the oceans.

Only the top few hundred meters of the oceans have anything to do with the steady-state surface temperature. Assuming that the entire mass of the oceans needs to adapt to surface temperature changes is incorrect. The thermocline may shift up or down as a result of changing surface temperatures, but this only has a net effect on the energy content of the oceans for the small sliver of water at the bottom edge of the thermocline and the water between the top of the thermocline and the surface.

• Rich Davis says:

Sure, there will always be “colder” surface water at the poles than at the equator, and I understand that cold water at depth flows from the poles and is almost as cold near the equator as elsewhere. But I don’t see any justification to claim that the water sinking to the deep ocean is necessarily constantly at the freezing point of salt water (and therefore a constant temperature over time) both in winter and in summer, or that the salinity level controlling that freezing point must be constant, or that geothermal heat flux is uniform across the seafloor and constant over time, or that the currents flow at precisely the same rate, or that there are no other factors that vary over time ultimately affecting the heat content of upwelling ocean currents.

It’s my understanding that some currents flow at such slow rates that it takes centuries for a volume of sinking water to reach the point of upwelling. Can you persuade me that something fixes the temperature of the polar water that sinks to depth such that there is a fixed temperature profile longitudinally from pole to equator along the seabed that is unaffected by the seasons? I grant you that the deep currents are likely in laminar flow with little turbulent mixing. If there is seasonal variability and there are currents that take hundreds of years (variable seasonal cycles) to complete a circuit, then there must be variation in the heat content of upwelling water that affects atmospheric temperatures and relative humidity. Moreover, if those variations are not perfectly cyclical, there must be long-term variation that is very complex. Do we observe variation in the temperature of upwelling currents? If you are correct, we should not.

• Ralph A Gardner says:

Water with dissolved CO2 is heavier and sinks to the bottom of the oceans, where it flows to the south pole and is upwelled due to winds that circulate around the south pole.

• Rich Davis says:

Ralph is that an attempt at sarcasm? You couldn’t get it much more backward, so I assume you were trying.

• Rich,

It’s a hydrological process where evaporation of tropical oceans exceeds tropical rainfall while rainfall across the rest of the planet exceeds local evaporation. The excess water falling on the oceans near the poles replaces the excess evaporation in the tropics by pushing up from the bottom owing to the intrinsic thermal stratification by gravity that keeps denser cold water below warmer surface waters. In addition, the transfer of water from the tropics to the poles is also a significant, if not the primary, mechanism for moving energy from the equator to the poles.

While we don’t often consider water to be a thermal insulator, at a sufficient thickness it can insulate hot from cold, and this sets the thickness of the thermocline, which keeps the deep water cold, even in the tropics.

When you calculate the heat flow through the thermocline based on the thermal conductivity of water, it seems to exactly offset the average heat flow coming from the planet below, both on the order of about 1 W/m^2. This makes the NET energy flux through the thermocline approximately zero, meaning that the NET heat being added to the deep ocean comes from the Earth’s interior, which is completely independent of the temperature of the virtual surface of Earth in direct equilibrium with the Sun.

• I mean ‘hydraulic’, not hydrological, although water is the incompressible fluid involved.

• Stephen Wilde wrote, “There cannot be any delay in the equilibrium process without breaching the Gas Laws.”

No. The fact that it takes time for things to warm up, and takes a long time for large things to warm up, when they are absorbing energy, does not breach any laws. Neither does the fact that it takes time for things to melt, or evaporate, when they are absorbing energy.

The LW IR energy that is absorbed by the atmosphere’s GHGs (and thus warms the atmosphere) comes from the surface, which does not respond instantly to changes in the conditions that affect its temperature.

I do, however, suspect that the large ECS-to-TCR ratios built into most CMIP GCMs (rightmost column here) are very far from the mark.

• Dave,

Yes, there should actually be no difference between the ECS and TCR. The planet is adapting to changing CO2 concentrations about as fast as those concentrations are changing. The largest missing effect that could be claimed is the equivalent of about half of the total effect from the CO2 emitted during the last 12 months.

The idea of warming yet to come is based on the amplification seen entering and leaving ice ages. They fail to realize that on the warming side, this only occurs when there’s significant average ice and snow coverage to melt, which isn’t the case today since we are close to minimum possible ice. We also know that both Antarctic and Greenland glaciers have already survived sustained warmer global average temperatures than we have today for thousands of years at a time. Even when the planet gets warmer, winters are still cold and dark.

• Samuel C Cogar says:

co2isnotevil – December 26, 2019 at 4:59 pm

The planet is adapting to changing CO2 concentrations about as fast as those concentrations are changing.

I think you got that “arse backward”, …… didn’t you?

• OK.

… just as slowly as those concentrations are changing …

• Loren Wilson says:

Henry’s law is a useful approximation for the absorption of a gas by a liquid. At low concentrations in the liquid phase and for nearly ideal gas behavior, the fugacity of the gas can be assumed to be equal to its partial pressure, and the solubility of the gas is nearly proportional to its partial pressure in the gas phase (at constant temperature). Thus we use H = Py/x as an approximation for the rigorous definition, which I have to look up to get correctly, and can’t type in this blurb. This applies for a nearly ideal gas phase, which our atmosphere is. It also applies for low concentration of the gas in the liquid phase and no reactions. Therefore, Henry’s law is not used for the solubility of ammonia in water. It is a reasonable approximation for CO2, since the reaction of CO2 with water does not proceed very far near room temperature. It is a very good approximation for the solubility of gases like nitrogen, oxygen, argon, methane, helium, etc. in most fluids.
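A minimal sketch of that approximation, using the common textbook Henry’s constant for CO2 in fresh water at 25 °C (an assumed illustrative value; seawater carbonate chemistry, salinity, and temperature all shift it):

```python
# Minimal Henry's-law sketch for CO2 in water.  kH is the common textbook
# value for fresh water at 25 C (assumed here for illustration only); real
# seawater chemistry shifts it considerably.
kH = 0.034                 # mol/(L*atm), CO2 in fresh water at 25 C
partial_pressure = 410e-6  # atm, i.e. 410 ppmv at 1 atm total pressure

dissolved = kH * partial_pressure  # mol/L of aqueous CO2
print(f"dissolved CO2 ~ {dissolved:.2e} mol/L")
```

Because kH falls as water warms, warmer water holds less dissolved CO2, which is the outgassing effect invoked elsewhere in this thread.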

• Samuel C Cogar says:

Loren Wilson, the effect of Henry’s Law is the only sensible explanation for the plotted CO2 ppm on the Keeling Curve graph, ….. both the biyearly (seasonal) and yearly quantities.

• Except that if you compare the 5 ppm seasonal variability to the 2 ppm linear trend and consider the 5 ppm the response to changing average temperatures, the 2 ppm linear trend would need to correlate to a temperature trend of about 1C per year. Henry’s Law can’t reconcile this. Yes, it has a small effect, but other factors have even larger effects.

You’re doing the same thing alarmists do by focusing on one finite, but relatively small contributing mechanism to the exclusion of everything else.

• Samuel C Cogar says:

co2isnotevil – December 29, 2019 at 10:40 am

You’re doing the same thing alarmists do by focusing on one finite, but relatively small contributing mechanism to the exclusion of everything else.

Don’t be accusing me of what you are guilty of because it is obvious to me that you “can’t see the forest for all the trees that are blocking your view”.

co2is, ….. the following “max/min” monthly CO2 ppm data was extracted directly from NOAA’s Mauna Loa Record for the years 1979 thru 2019, …. to wit:

year mth “Max” _ yearly increase ____ mth “Min” ppm
1979 _ 6 _ 339.20 …. + ….. El Niño ___ 9 … 333.93
1980 _ 5 _ 341.47 …. +2.27 _________ 10 … 336.05
1981 _ 5 _ 343.01 …. +1.54 __________ 9 … 336.92
1982 _ 5 _ 344.67 …. +1.66 El Niño __ 9 … 338.32 El Chichón
1983 _ 5 _ 345.96 …. +1.29 _________ 9 … 340.17
1984 _ 5 _ 347.55 …. +1.59 __________ 9 … 341.35
1985 _ 5 _ 348.92 …. +1.37 _________ 10 … 343.08
1986 _ 5 _ 350.53 …. +1.61 _________ 10 … 344.47
1987 _ 5 _ 352.14 …. +1.61 __________ 9 … 346.52
1988 _ 5 _ 354.18 …. +2.04 __________ 9 … 349.03
1989 _ 5 _ 355.89 …. +1.71 La Nina __ 9 … 350.02
1990 _ 5 _ 357.29 …. +1.40 __________ 9 … 351.28
1991 _ 5 _ 359.09 …. +1.80 __________ 9 … 352.30
1992 _ 5 _ 359.55 …. +0.46 El Niño __ 9 … 352.93 Pinatubo
1993 _ 5 _ 360.19 …. +0.64 __________ 9 … 354.10
1994 _ 5 _ 361.68 …. +1.49 __________ 9 … 355.63
1995 _ 5 _ 363.77 …. +2.09 _________ 10 … 357.97
1996 _ 5 _ 365.16 …. +1.39 _________ 10 … 359.54
1997 _ 5 _ 366.69 …. +1.53 __________ 9 … 360.31
1998 _ 5 _ 369.49 …. +2.80 El Niño __ 9 … 364.01
1999 _ 4 _ 370.96 …. +1.47 La Nina ___ 9 … 364.94
2000 _ 4 _ 371.82 …. +0.86 La Nina ___ 9 … 366.91
2001 _ 5 _ 373.82 …. +2.00 __________ 9 … 368.16
2002 _ 5 _ 375.65 …. +1.83 _________ 10 … 370.51
2003 _ 5 _ 378.50 …. +2.85 _________ 10 … 373.10
2004 _ 5 _ 380.63 …. +2.13 __________ 9 … 374.11
2005 _ 5 _ 382.47 …. +1.84 __________ 9 … 376.66
2006 _ 5 _ 384.98 …. +2.51 __________ 9 … 378.92
2007 _ 5 _ 386.58 …. +1.60 __________ 9 … 380.90
2008 _ 5 _ 388.50 …. +1.92 La Nina _ 10 … 382.99
2009 _ 5 _ 390.19 …. +1.65 _________ 10 … 384.39
2010 _ 5 _ 393.04 …. +2.85 El Niño __ 9 … 386.83
2011 _ 5 _ 394.21 …. +1.17 La Nina _ 10 … 388.96
2012 _ 5 _ 396.78 …. +2.58 _________ 10 … 391.01
2013 _ 5 _ 399.76 …. +2.98 __________ 9 … 393.51
2014 _ 5 _ 401.88 …. +2.12 __________ 9 … 395.35
2015 _ 5 _ 403.94 …. +2.06 __________ 9 … 397.63
2016 _ 5 _ 407.70 …. +3.76 El Niño __ 9 … 401.03
2017 _ 5 _ 409.65 …. +1.95 __________ 9 … 403.38
2018 _ 5 _ 411.24 …. +1.59 __________ 9 … 405.51
2019 _ 5 _ 414.66 …. +3.42 __________ 9 … 408.50
La Nina – El Nino index: https://ggweather.com/enso/oni.htm
https://wattsupwiththat.com/2019/12/28/two-more-degrees-by-2100/#comment-2880462

And the above MLR data doesn’t lie. And if you subtract the September (9) “min” CO2 ppm from the May (5) “max” CO2 ppm of the same year, …… 99% of the time you will get an Average 6 ppm summertime decrease in CO2 …. regardless of what us humans or the plant world does or has been doing.

And, Co2is, if you subtract the September (9) “min” CO2 ppm of the previous year (say 1982) ….. from the May (5) “max” CO2 ppm of the next year (1983), …… 99% of the time you will get an Average 8 ppm wintertime increase in CO2 …. regardless of what us humans or the plant world does or has been doing.

So, CO2 has been down an average 6 ppm in summer, and up an average 8 ppm (6 + 2 ppm) in winter, …… steadily and consistently for the past 61 years.

Which means that the bi-yearly (seasonal) cycling of average 6 ppm CO2 is a direct result of the ingassing/outgassing of CO2 …… as a direct result of seasonal temperature changes of the ocean waters in the different hemispheres.

And that the yearly average 2 ppm increase in average CO2 ppm is a direct result of the ocean water re-warming from the cold of the LIA. As the ocean waters warm, they outgas more CO2 than they ingas, thus the 2 ppm average increase. And of course, El Ninos, La Ninas and volcanic eruptions leave their “signature” in the MLO CO2 record as noted above.
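Samuel’s averages can be spot-checked against his own table; this sketch uses only its last five rows, and it checks the arithmetic, not the attribution:

```python
# (year, May max ppm, Sep/Oct min ppm), taken verbatim from the table above.
rows = [
    (2015, 403.94, 397.63),
    (2016, 407.70, 401.03),
    (2017, 409.65, 403.38),
    (2018, 411.24, 405.51),
    (2019, 414.66, 408.50),
]

# Summer drawdown: same-year max minus min
drawdown = [mx - mn for _, mx, mn in rows]
# Winter rise: next year's max minus this year's min
rise = [rows[i + 1][1] - rows[i][2] for i in range(len(rows) - 1)]

print(f"mean summer drawdown ~ {sum(drawdown) / len(drawdown):.1f} ppm")  # ~6
print(f"mean winter rise     ~ {sum(rise) / len(rise):.1f} ppm")          # ~9
```

These five years give roughly 6 ppm down and 9 ppm up (the 2016 El Niño inflates the rise), broadly matching the 6-down / 8-up pattern claimed.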

• Sam,

“Don’t be accusing me of what you are guilty of …”

I’m doing no such thing. I never said that ocean temperatures have no bearing on CO2 levels, just that there are other things that have a larger influence, most importantly CO2 emissions and biology, both of which you seem to be ignoring.

Don’t be disturbed by falsification tests of your hypothesis. This is how science works. Ignoring falsification is what alarmists do. Testing hypotheses is what I do, and when I test yours, it breaks. Your test of the Mauna Loa data may not fail, but it’s not the only test. In fact, there are few, if any, falsified hypotheses that don’t also have a test that would seem to confirm them, and it was often the apparently confirming test that led to the incorrect hypothesis in the first place.

Your explanation doesn’t work for why a 6 ppm difference arises from a global 3C temperature swing, yet 2 ppm arises from a yearly temperature trend of a small fraction of a degree per year. If the 6 ppm seasonal swing arose from a 3C temperature swing, it would take an ‘LIA recovery’ warming trend of 1C per year to manifest a 2 ppm linear trend. In other words, it would take only 3 years to warm the planet by an amount equal to the difference between winter and summer.

Science is all about testing hypotheses, so let’s apply some arithmetic for a more robust test. For a yearly trend of 0.02C per year (0.2C per decade) to cause a 2 ppm yearly increase, the effect is 1 ppm of CO2 per 0.01C. If the 3C yearly global temperature swing had the same effect, it should have caused a CO2 swing of 3 / 0.01 = 300 ppm! Time constants can’t explain this discrepancy, as both the seasonal change and the LIA recovery are subject to the same time constants and the same rates of change, although relative to the solubility of CO2 in water, the time constant is too short to have any real effect.
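Spelled out as code, the comparison is:

```python
# The sensitivity comparison above: if the ~2 ppm/yr trend were driven by
# an ~0.02 C/yr warming trend, the implied sensitivity is 100 ppm per
# degree C; a 3 C seasonal swing at that sensitivity would then move CO2
# by 300 ppm, not the observed ~6 ppm.
trend_ppm_per_yr = 2.0
trend_C_per_yr = 0.02        # i.e. 0.2 C per decade

ppm_per_degC = trend_ppm_per_yr / trend_C_per_yr
seasonal_swing_C = 3.0
implied_seasonal_ppm = ppm_per_degC * seasonal_swing_C

print(f"{ppm_per_degC:.0f} ppm/C -> {implied_seasonal_ppm:.0f} ppm seasonal swing")
```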

Of course, the temperature change from year to year isn’t monotonic and some years are colder than the one before, yet CO2 continues to increase monotonically. Would you like to try and explain this?

• Samuel C Cogar says:

co2isnotevil – December 30, 2019 at 9:44 am

I’m doing no such thing. I never said that ocean temperatures have no bearing on CO2 levels, just that there are other things that have a larger influence, most importantly, CO2 emissions and biology where you seem to be ignoring both.

co2isnotevil, …… GETTA CLUE, ….. I earned an AB Degree in Biological Science, with emphasis on Botany, Bacteriology and other aspects of the “natural world” that we live in.

“DUH”, there is absolutely, positively NO greater influence on atmospheric CO2 ppm quantities than the GREATEST “sink” ever of CO2, which is the ocean waters, and primarily the ocean waters residing in the Southern Hemisphere, to wit:

“The Northern Hemisphere is 60% land and 40% water. The Southern Hemisphere is 20% land and 80% water.”

Testing hypotheses is what I do and when I test yours, it breaks.

OH GOOD GRIEF, co2isnotevil, ….. of course my hypothesis “breaks” when you apply your “junk science” fictional quantities for “testing”. I and my hypothesis don’t give a damn about your estimated/guesstimated average near-surface air temperature of 3C.

GETTA CLUE. co2is, …. it is the “changing” of the seasonal temperature of the ocean surface waters that determines the rate of ingassing/outgassing CO2.

If the 6 ppm seasonal swing arose from a 3C (near-surface average) temperature swing, it would take an ‘LIA recovery’ warming trend of 1C per year to manifest a 2 ppm linear trend.

“If the, ..if the, ..if the, ……”If you say so, co2isnotevil, ….. but why in hell did you make such an asinine claim if you knew it was FUBAR?

Of course, the temperature change from year to year isn’t monotonic and some years are colder than the one before, yet CO2 continues to increase monotonically. Would you like to try and explain this?

co2isnotevil, ….. you really shouldn’t be asking dumb arsed questions and expecting intelligent answers.

Which temperature change(s) are you referring to in your above?

Local, regional or global average near-surface atmospheric temperature increases/decreases?

Or maybe local, regional or global average soil temperature increases/decreases?

Or local, regional or global SEASONAL average sea surface water temperature increases/decreases?

Or local, regional or global average Urban Heat Island temperature increases/decreases?

And last but not least, …. monotonic nature of the local, regional or global average yearly increase in ocean surface water temperature, ….. the result of the slow “warming up” following the demise of the LIA.

3. John Shotsky says:

Referring to the comment about CO2 being “well mixed”, I offer the following thoughts.
The earth emits over 95% of the annual emission of CO2. Humans are responsible for less than 5%. It is emitted at the surface, and it is reabsorbed at the surface. We know about the heat islands around cities, said to be due to high concentrations of CO2.
Thanks to gravity, the atmosphere is densest at the surface, and less dense with altitude. How can CO2 be well mixed with the rest of the atmosphere if the bulk of it is coming from and going back to the surface, and it is densest at the surface?

• Nicholas McGinley says:

Well mixed is a relative matter.
Certainly CO2 is well mixed compared to water vapor and clouds.

• Willis Eschenbach says:

John, nobody I know of is claiming that the surface CO2 is well mixed.

However, it is indeed well mixed in the bulk atmosphere compared to the major greenhouse gas, which is water vapor, which is what folks are referring to.

w.

• KcTaz says:

Re whether or not CO2 is a well mixed gas in the atm, I’m confused.

Carbon Dioxide Not a Well Mixed Gas and Can’t Cause Global Warming
By: John O’Sullivan
http://bit.ly/2kiynGC

“…Acceptance of the “well-mixed gas” concept is a key requirement for those who choose to believe in the so-called greenhouse gas effect. A rising group of skeptic scientists have put the “well-mixed gas” hypothesis under the microscope and shown it contradicts not only satellite data but also measurements obtained in standard laboratory experiments.
Canadian climate scientist, Dr Tim Ball is a veteran critic of the “junk science” of the International Panel on Climate Change (IPCC) and no stranger to controversy.
Ball is prominent among the “Slayers” group of skeptics and has been forthright in denouncing the IPCC claims; “I think a major false assumption is that CO2 is evenly distributed regardless of its function.“

“…CO2: The Heavy Gas that Heats then Cools Faster!
The same principle is applied to heat transfer, the Specific Heat (SH) of air is 1.0 and the SH of CO2 is 0.8 (heats and cools faster).  Combining these properties allows for thermal mixing. Heavy CO2 warms faster and rises, as in a hot air balloon.  It then rapidly cools and falls.”
“…The cornerstone of the IPCC claims since 1988 is that “trapped” CO2 adds heat because it is a direct consequence of another dubious and unscientific mechanism they call “back radiation.” In no law of science will you have read of the term “back radiation.” It is a speculative and unphysical concept and is the biggest lie woven into the falsity of what is widely known as the greenhouse gas effect.
Professor Nasif Nahle, a recent addition to the Slayers team, has proven that application of standard gas equations reveal that, if it were real, any “trapping” effect of the IPCC’s “back radiation” could last not a moment longer than a miniscule five milliseconds – that’s quicker than the blink of an eye to all you non-scientists…”

• Willis Eschenbach says:

Re whether or not CO2 is a well mixed gas in the atm, I’m confused.

This is where you gotta decide who you gonna believe—actual measurements from the OCO satellite I showed above … or someone in some paper somewhere waffling on about density and specific heat without any actual measurements.

Let me note, if it helps, that CO2 is generated at the surface. As a result, at the surface there are places with CO2 levels five or ten times the background level.

But that’s in a thin layer at the surface, and above that it gets well mixed by the action of the air.

I discuss some of these issues in detail here.

w.

• By way of clarification, the way KcTaz copied & pasted that material, some readers might think he was attributing it to Dr. Tim Ball. I think the only part of it actually by Dr. Ball was this lone sentence: “I think a major false assumption is that CO2 is evenly distributed regardless of its function.”

The rest of it, including the nonsense claiming that “back radiation” doesn’t exist, was written by Mr. John O’Sullivan.

I’ll be blunt: O’Sullivan’s organization, “Principia-Scientific International” (PSI), and their web site, are evil. It’s a disinformation site, run by crackpots (often called “slayers,” after their ridiculous book). Their role in the climate debate is to help climate activists discredit climate skepticism.

This despicable smear of Roy Spencer, Judith Curry, Anthony Watts, and Fred Singer is perfectly in character for them. Lies are the stock-in-trade of PSI.

Some of the material on the PSI site is good, but THAT material is always copied (and I think often plagiarized) from OTHER sites. The material which originates from PSI’s own member authors is predictably nonsense.

Unfortunately, that mix of truth and fiction makes their web site even more deceptive, because it makes the fiction harder to recognize.

“Falsehood is never so false as when it is very nearly true.”
– G.K. Chesterton

Their leader, CEO John O’Sullivan, is a notorious fraudster. Three other longtime PSI authors are Pierre Latour, Joe Postma & Joe Olson. (They used to have a 5th prolific author, one Douglas Cotton, but he had a falling out with the others.)

I call O’Sullivan “the bad John O’Sullivan,” to distinguish him from “the good John O’Sullivan,” of National Review. The bad John O’Sullivan has been known to impersonate the good John O’Sullivan, on occasion.

Joe Olson is a “9-11 truther” who accused President George W. Bush of staging the 9-11-2001 terrorist attack, and destroying the World Trade Center with “micro-nukes.” (Really, I’m not making this up. Hmmm… well, the recording of his 9-11 nutter conference presentation entitled “Unequivocal 9/11 Nukes,” which was linked from this page, is gone from YouTube, but, not to worry, I saved a copy; email me if you want it.)

In 2015 O’Sullivan & Pierre Latour blatantly lied on the PSI web site, and also in emails, about Dr. Fred Singer’s views. The web site article was entitled, “Singer Concurs with Latour: CO2 Doesn’t Cause Global Warming.”

After several people objected, O’Sullivan doubled down in an email, writing, “Fred Singer has now come over to PSI’s view that CO2 can only cool. I suggest you contact him.”

So I did. I forwarded that to Prof. Singer, who was then 90 years old, and asked him:

Dear Dr. Singer,
Please confirm or deny this allegation.
Warmest regards,
Dave

Prof. Singer replied succinctly:

“denied SFS”

That’s what I expected, of course. I forwarded it to O’Sullivan & the other slayers, plus some of the people who had been trying to persuade PSI to remove the dishonest article, including Dr. Singer’s friend, Lord Christopher Monckton. But O’Sullivan STILL refused to remove the dishonest article from the PSI web site.

Prof. Singer elaborated in a subsequent email:

Friends, There is a sure way to smoke out deniers like PS [Principia Scientific]
Just ask them if GH models violate the 2nd Law of Thermo …
These people just won’t accept the existence of DWR (downwelling IR from atm to sfc)
— even if measured empirically
No point wasting more time — as Jo said
Fred

“Jo” is Jo Nova. She weighed in, and appealed to O’Sullivan (and cc’d the other three), to do the right thing:

John, this is a simple publishing issue. Claiming that Singer supports PS when he doesn’t, is using his name to promote your group at the expense of Singer and the skeptic movement as a whole. The simple request to take down the article, or correct it, should have been apologetically complied with IMMEDIATELY.
This ongoing fracas is damaging the skeptics as a whole, and wasting much time. We fight opponents with billions – let’s focus on the real enemy.
Jo

PSI’s Joe Postma weighed in, defending O’Sullivan’s and Latour’s dishonesty! He wrote:

Except it’s not an expense to Singer or the skeptic movement… Because PSI’s position of criticizing the alarmist greenhouse effect isn’t a negative. Unless you’re trying make it so or make it appear like it is so.
Which is strange, confusing, duplicitous behaviour in this skeptic movement.
If a scientist says that climate sensitivity might be indistinguishable from zero, it’s well within reasonable bounds of inference to say that it ipso facto agrees with PSI’s position that this is the result of there being no radiative greenhouse effect, whether the scientist agrees with that or not. It doesn’t harm the scientist unless you’re thinking you need approval from the alarmists…you don’t…

Eventually, after Lord Monckton threatened legal action, PSI finally did edit and tone down the article, making it less flagrantly false. It now says “Singer Converges on ZERO Climate Carbon Forcing.”

It is still misleading — as is just about everything from PSI.

• philf, Postma’s “stuff” is 100% crackpot nonsense.

I tried to watch one of those videos, the one he entitled “There is no Radiative Greenhouse Effect – Main Presentation”. It’s 70 minutes long on YouTube (where he has disabled comments, for obvious reasons).

The more I watched the worse it got. I made it through 17½ minutes of bovine dung before I couldn’t endure the stench any more.

But I took notes…

At 1:19 he showed several examples of very simple diagrams used to illustrate the principles of so-called “greenhouse warming,” and said, “You’ll find these sort of diagrams everywhere, exclusively.”

That’s untrue. You mostly find diagrams like this:

https://sealevel.info/Trenberth2009_fig1_global_energy_flows.png

The diagrams he showed are very simplified.

2:00 he said, “[these diagrams are] the consensus and the only derivation of where the greenhouse effect comes from.”

Wrong. Those aren’t “the derivation” of the so-called greenhouse effect. They are just simplified illustrations of it.

2:10 “There’s one, basic, fundamental problem with this diagram… What’s the problem? The Earth is flat.”

It’s an approximation, not an error — and it’s a very close approximation, too.

It is not a problem that the Earth and atmosphere are shown as flat in some highly simplified illustrations, since the atmosphere is just a very thin layer relative to the diameter of the Earth. The Earth’s diameter is nearly 8,000 miles. The height of the tropopause averages just 1/10 of 1% of that.

If the diagram represents ten miles of horizontal distance, then to correctly show the curvature, the center should be elevated by only about 16.7 feet. That’s 16.7/52,800 = 0.032% above the horizontal.

If the scale is one mile to the inch (so that the horizontal line is ten inches long on your screen), then the true curvature would require the center of the 10″ line to be raised by about three one-thousandths of an inch.
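The size of that curvature correction can be sanity-checked with the standard sagitta formula for a chord on a sphere, s = R − √(R² − (L/2)²). A minimal sketch, assuming a mean Earth radius of 3,959 miles:

```python
import math

R_MILES = 3959.0        # assumed mean Earth radius, miles
FEET_PER_MILE = 5280.0

def sagitta_feet(chord_miles, radius_miles=R_MILES):
    """Rise of the arc's midpoint above a straight chord of the given length."""
    half = chord_miles / 2.0
    s_miles = radius_miles - math.sqrt(radius_miles**2 - half**2)
    return s_miles * FEET_PER_MILE

rise_ft = sagitta_feet(10.0)                      # ~16.7 ft over a 10-mile span
pct = rise_ft / (10.0 * FEET_PER_MILE) * 100.0    # ~0.03% of the horizontal distance
print(round(rise_ft, 1), round(pct, 3))           # → 16.7 0.032
```

Whatever the exact inputs, the bulge is a few hundredths of one percent of the span, utterly invisible at diagram scale.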

(Also, the more commonly seen Trenberth diagram does not show the Earth as flat; in fact, it greatly exaggerates the curvature of the Earth’s surface — which makes no difference at all, w/r/t the concepts involved.)

5:30 “One error is that Earth’s effective temperature is being used as the solar input.”

No, it isn’t. The solar input is measured, by satellites. It is between 1360 and 1370 W/m² if the Sun is directly overhead, and 1/4 of that averaged over the surface of the Earth.

Postma might be confused by the fact that the energy fluxes to and from the Earth are, on average, nearly identical. We know that fact because if they were not nearly identical then the Earth would be either heating or cooling rapidly.
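The division by 4 mentioned above is pure geometry: sunlight is intercepted over the Earth’s cross-sectional disk (πr²) but averaged over its full spherical surface (4πr²), so the radius cancels. A minimal sketch, assuming an irradiance of 1365 W/m² from the measured 1360–1370 range:

```python
import math

S0 = 1365.0   # assumed total solar irradiance, W/m^2 (measured range is 1360-1370)

def mean_insolation(s0, radius_m=6.371e6):
    """Average solar input per square meter of the whole Earth's surface."""
    intercepted = s0 * math.pi * radius_m**2   # power caught by the sunlit disk
    surface = 4.0 * math.pi * radius_m**2      # area it is averaged over
    return intercepted / surface               # the radius cancels: just s0 / 4

print(mean_insolation(S0))   # → 341.25
```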

6:08 “The solar input to the Earth does not occur over the same surface area as Earth’s radiative output.”

Can he really be so confused about his topic that he doesn’t realize the figures in the diagrams are averages, over the entire Earth, including the dark side?

Roy Spencer did an excellent job of explaining Postma’s confusion, here:
https://wattsupwiththat.com/2019/06/04/on-the-flat-earth-rants-of-joe-postma/

6:23 “By equating fluxes, you dilute the solar input over the entire surface area of the Earth.”

“Dilute?” It’s called averaging.

7:08 “This derivation says that sunshine is unable to melt ice, and it says that sunshine isn’t responsible for summer…”

That’s total nonsense. He apparently thinks that calculating an average is equivalent to declaring that everything is at the average.

7:18 “[This derivation says that]… the greenhouse effect is responsible for melting ice, and for creating summer…”

Also complete nonsense. It says nothing of the kind.

7:24 “Now the real absorbed solar constant is actually really hot, it’s 121°C, raw. If you take off 30% absorption for albedo then it’s 88°C.”

That’s gibberish. The solar constant is an irradiance flux, not a temperature, and there’s no such thing as an “absorbed solar constant.”

The numbers he made up are meaningless nonsense, too.

10:24 “…so you have two sources that are at -18°C…”

The two sources he’s talking about are solar radiation, and LW IR back-radiation from radiatively active “greenhouse gases” in the atmosphere. They are not “at -18°C” or any other temperature. They are just radiation.

Radiation has energy density, and spectral characteristics, and it can even be polarized. But it does not have a temperature.

10:32 “…lets add two ice cubes together, do they make a higher temperature? …that’s fake thermodynamics.”

That’s complete gibberish. Ice cubes are not radiation.

If you pour energy into something from two sources, it will heat faster than if you cut off one of the two sources. If you turn on two burners on your electric stove, your kitchen will warm faster, and top out at a higher temperature, than if you turn on only one burner. Of course.

10:50 (waves pointer at a box labeled “energy balance at the Earth’s surface”) and says: “this part here, basically what they’re doing is adding two temperatures together and saying that this is going to create a new temperature.”

Apparently he doesn’t know the difference between energy and temperature. Unbelievable!

On the screen you can see that he has written, “Temperatures/fluxes never add in thermodynamics!” and he also says that at 12:00. Clearly, he thinks temperature and energy flux (power) are the same thing.

It astonishes me that he apparently managed to get two scientific degrees from the U. of Calgary, without learning such basic concepts.

12:12 “The only thing temperatures and fluxes do in thermodynamics is subtract, in which case you get heat.”

That’s nonsense. Energy fluxes sum, they don’t “subtract.” In an adiabatic system, temperatures tend toward an average, they don’t “subtract,” either.

And he obviously doesn’t know what heat is.

16:41 “… ‘if there are temperature differences, the heat energy flows from the hotter region to the colder region.’ Alright, so this establishes that the cooler atmosphere cannot send any heat to the surface.”

Nonsense. All it establishes is that if the surface is warmer than the atmosphere, the NET flow of heat will be from the surface to the atmosphere.

However, BOTH radiate (cooling in the process), and EACH absorbs radiation from the other (warming in the process). The reason the net flow of heat is from the surface to the atmosphere is simply that the transfer of energy from warmer to cooler is faster than the transfer in the opposite direction, because warmer things radiate faster than cooler things.

17:21 “Radiant flux from the cooler atmosphere cannot transfer as heat to the warmer surface.”

Nonsense. (The word “as” seems to be spurious.) Radiation from the atmosphere absolutely is absorbed by the surface, warming it in the process.

All the 2nd Law requires is that the NET heat flow is from warmer surface to cooler air.

If you heat the air, it’ll increase the rate at which the air radiates and heats the surface, thus reducing the NET rate of flow of heat from surface to air, thus making the surface warmer than it otherwise would have been.
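That two-way bookkeeping is easy to make concrete. Treating the surface and a cooler air layer as ideal blackbodies (an idealization; the temperatures below are illustrative, not from the comment), the net heat flow is the difference of the two gross fluxes:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def net_upward_flux(t_surface_k, t_air_k):
    """Net radiative heat flow from surface to air, for two blackbody layers."""
    up = SIGMA * t_surface_k**4    # gross flux: surface -> air
    down = SIGMA * t_air_k**4      # gross flux: air -> surface ("back radiation")
    return up - down               # the 2nd Law only constrains this NET value

cold_air = net_upward_flux(288.0, 255.0)   # ~150 W/m^2 net loss from the surface
warm_air = net_upward_flux(288.0, 270.0)   # warmer air -> smaller net loss
print(cold_air > warm_air > 0.0)           # → True: net flow still surface-to-air
```

Warming the air never reverses the direction of the net flow; it just slows the surface’s net loss, leaving the surface warmer than it otherwise would have been.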

And there I stopped. I hit the limit of the amount of ridiculous nonsense I’m willing to wade through in one day. I’m a patient man, but my patience is not limitless.

• Rich Davis says:

I don’t think that it’s the case that the UHI effect is said to be due to high concentrations of CO2. It’s due to changes in surface albedo from land use, structures with high heat capacity that absorb solar energy and then re-radiate at night, as well as the fact that all sources of energy used in an urban environment will dissipate as heat into the environment there.

• Brooks Hurd says:

John, I will grant you that urban areas have a higher concentration of CO2 than do rural areas. However, I would need to see data showing that urban CO2 concentration causes the UHI.

• GregK says:

UHI
Nothing to do with CO2 bubbles around urban centres
Concrete, steel, glass, roads…comparatively few trees
Step inside a garden/park in a major city on a warm day and the temperature is significantly lower.

• Keith Minto says:

CO2, with a molar mass of 44 g/mol, is about 50% denser than air, so it does sink. On the other hand, H2O as a gas is about 18 g/mol.
So, one gas sinks, the other rises until it undergoes another phase change.
Turbulence does some mixing but the molecular weights generally determines location.
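For what it’s worth, the density part of the claim is easy to quantify: at equal temperature and pressure an ideal gas’s density scales with its molar mass, so the comparison reduces to a ratio (whether buoyancy or turbulence wins is the separate question taken up in the replies below):

```python
M_CO2 = 44.01   # g/mol
M_AIR = 28.97   # g/mol, mean molar mass of dry air
M_H2O = 18.02   # g/mol, water vapor

# At equal temperature and pressure, density is proportional to molar mass.
co2_vs_air = M_CO2 / M_AIR   # ~1.52: CO2 is about 50% denser than dry air
h2o_vs_air = M_H2O / M_AIR   # ~0.62: water vapor is lighter than dry air
print(round(co2_vs_air, 2), round(h2o_vs_air, 2))   # → 1.52 0.62
```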

• Willis Eschenbach says:

Not true, Keith. In fact, while a bolus of CO2 will sink to the ground and even run downhill, it is quickly broken up and mixed upwards by turbulence and wind.

We don’t have to guess about this. We can actually measure the vertical profile of CO2 from satellite observations. There’s an interesting journal article here with more than you’d ever want to know about it.

TL;DR version? Your claim that “Turbulence does some mixing but the molecular weights generally determines location” is incorrect.

w.

• Michael S. Kelly says:

Interesting paper, thanks for the link. Figure 6 was surprising to me, though. It shows that from about mid-July through early October of every year, the concentration of CO2 below 10 km altitude (for the 50 to 60 degree N latitude band) is lower than that above 10 km – significantly lower. The rest of the year, the situation is reversed. In fact, it appears that for most of the year, CO2 concentration in the 16 to 18 km altitude range is always about 8 ppm less than it is in the 8 to 10 km range. That has to have some kind of effect, but it makes my head hurt trying to figure out what it might be.

• Geoff Sherrington says:

John S,
UHI can be caused by many processes. I have never considered CO2 as being other than trivial. What mechanism do you imagine for CO2? Geoff S

• Joel O'Bryan says:

NASA/JPL’s OCO-2 mission was launched on July 2, 2014 with much fanfare from the press and Climate Change porn pushers.

“OCO-2, launched in 2014, gathers global measurements of atmospheric carbon dioxide with the resolution, precision and coverage needed to understand how this important greenhouse gas — the principal human-produced driver of climate change — moves through the Earth system at regional scales, and how it changes over time. From its vantage point in space, OCO-2 is able to make roughly 100,000 measurements of atmospheric carbon dioxide each day, around the world.”

Source and read much more here:
https://directory.eoportal.org/web/eoportal/satellite-missions/o/oco-2

NASA and the climate-change science-pornography industry were of course expecting the actual data from this orbiting platform to confirm their ideas about near-exclusive human sources, thus pinning the blame on humanity for the rise in the Evil Molecule’s concentration, both in its gaseous form in our atmosphere and dissolved in our oceans, lakes and rivers. It’s a gas we know to be essential for all plant and animal life, providing the base-level carbon substrate. The Evil Molecule exists of course as a trace gas which, since the end of the LIA, has been rising, with some portion coming from humans burning long-sequestered stores of energy. That burning for energy is, in their words (paraphrasing here), “polluting our atmosphere, threatening catastrophic warming, wreaking ecological destruction on the planet, and posing an existential threat to civilization.”

Just a few months after OCO-2 was launched and while it was still in check-out phases and before any science-level worthy data was available, NASA and its climate science pornography industry supporters released a supercomputer model of what they thought at the time about the atmospheric carbon cycle. Here is the November 2014 YouTube video they released (worth viewing again even if you’ve seen it before):

But then the data started rolling in from OCO-2, and it wasn’t good for the narrative that had just come out of the IPCC AR5 WG1 report and the underlying “Bern Model” for CO2’s rise in atmospheric concentration.

First thing: OCO-2 initial data released in Fall 2015 put a stake through the heart of the carbon cycle as they modeled it in their much-vaunted supercomputer simulation (described and shown above in the YouTube video).
Here is the static map released for the initial data, Oct thru November 2014:
https://oco.jpl.nasa.gov/media/gallery/browse/1stmap.jpeg
Note the higher CO2 concentrations over Amazonia, The Congo, and Indonesia.

The last (from July 2017) graphic data product the OCO-2 team has on their Gallery portal is this:
https://oco.jpl.nasa.gov/media/gallery/browse/OCO2_XCO2_v8_Jul_2017_UHD.jpg
Note the blue (lower concentrations) over the higher latitudes of the Northern Hemisphere, and green colors (higher concentrations) over the Southern Hemisphere, especially the Southern Ocean.
But then going to the NH winter, January 2017 we see this:
https://oco.jpl.nasa.gov/media/gallery/browse/OCO2_XCO2_v8_Jan_2017_UHD.jpg
Note the yellows and reds from the west African Sahel (below the Sahara Desert to the equator). That’s not “supposed” to happen.

And JPL/NASA had the OCO-2 “smoothed global visualization” video for the entire 2-year period of Sept 2014-Oct 2016 posted for several years, BUT those video links are now nonfunctional here:
https://oco.jpl.nasa.gov/galleries/videos/

Fortunately some smart folks put the real first year’s OCO-2 data into their own YouTube movie here:
https://youtu.be/iUZJddSHyuI

When you actually look at ALL the details of what the initial OCO-2 data shows, it clearly reveals the earlier (but much more widely disseminated and viewed) NASA supercomputer simulation to be “Not just wrong.”

This clearly demonstrated that the NASA climate porn “scientists” can make whatever fantasy they want appear in a computer simulation, and make it look believable for the public’s mass consumption. Of course, Hollywood CGI cinema wizards have long relied on this fact.

Since then what has happened with OCO-2 and the data?

JPL released a whole series of papers with OCO-2 in October 2017 in special feature issue of Science Magazine:

1.) Jesse Smith, “Measuring Earth’s carbon cycle,” Science 13 Oct 2017, Vol. 358, Issue 6360, pp. 186-187, DOI: 10.1126/science.358.6360.186, URL: http://science.sciencemag.org/content/sci/358/6360/186.full.pdf

2.) A. Eldering, P. O. Wennberg, D. Crisp, D. S. Schimel, M. R. Gunson, A. Chatterjee, J. Liu, F. M. Schwandner, Y. Sun, C. W. O’Dell, C. Frankenberg, T. Taylor, B. Fisher, G. B. Osterman, D. Wunch, J. Hakkarainen, J. Tamminen, B. Weir, “The Orbiting Carbon Observatory-2 early science investigations of regional carbon dioxide fluxes,” Science, 13 Oct 2017, Vol. 358, Issue 6360, eaam5745, DOI: 10.1126/science.aam5745

3.) Y. Sun, C. Frankenberg, J. D. Wood, D. S. Schimel, M. Jung, L. Guanter, D. T. Drewry, M. Verma, A. Porcar-Castell, T. J. Griffis, L. Gu, T. S. Magney, P. Köhler, B. Evans, K. Yuen, “OCO-2 advances photosynthesis observation from space via solar-induced chlorophyll fluorescence,” Science, 13 Oct 2017, Vol. 358, Issue 6360, eaam5747, DOI: 10.1126/science.aam5747

4.) A. Chatterjee, M. M. Gierach, A. J. Sutton, R. A. Feely, D. Crisp, A. Eldering, M. R. Gunson, C. W. O’Dell, B. B. Stephens, D. S. Schimel, “Influence of El Niño on atmospheric CO2 over the tropical Pacific Ocean: Findings from NASA’s OCO-2 mission,” Science, 13 Oct 2017, Vol. 358, Issue 6360, eaam5776, DOI: 10.1126/science.aam5776

5.) Junjie Liu, Kevin W. Bowman, David S. Schimel, Nicolas C. Parazoo, Zhe Jiang, Meemong Lee, A. Anthony Bloom, Debra Wunch, Christian Frankenberg, Ying Sun, Christopher W. O’Dell, Kevin R. Gurney, Dimitris Menemenlis, Michelle Gierach, David Crisp, Annmarie Eldering, “Contrasting carbon cycle responses of the tropical continents to the 2015–2016 El Niño,” Science, 13 Oct 2017, Vol. 358, Issue 6360, eaam5690, DOI: 10.1126/science.aam5690

6.) Florian M. Schwandner, Michael R. Gunson, Charles E. Miller, Simon A. Carn, Annmarie Eldering, Thomas Krings, Kristal R. Verhulst, David S. Schimel, Hai M. Nguyen, David Crisp, Christopher W. O’Dell, Gregory B. Osterman, Laura T. Iraci, James R. Podolske, “Spaceborne detection of localized carbon dioxide sources,” Science, 13 Oct 2017, Vol. 358, Issue 6360, eaam5782, DOI: 10.1126/science.aam5782

What was not widely covered by the Press was NASA’s own website discussion of an important finding from the OCO-2 data:
“NASA Pinpoints Cause of Earth’s Recent Record Carbon Dioxide Spike”
“A new NASA study provides space-based evidence that Earth’s tropical regions were the cause of the largest annual increases in atmospheric carbon dioxide concentration seen in at least 2,000 years.

Scientists suspected the 2015-16 El Nino — one of the largest on record — was responsible, but exactly how has been a subject of ongoing research. Analyzing the first 28 months of data from NASA’s Orbiting Carbon Observatory-2 (OCO-2) satellite, researchers conclude impacts of El Nino-related heat and drought occurring in tropical regions of South America, Africa and Indonesia were responsible for the record spike in global carbon dioxide. “

Source: Alan Buis, Dwayne Brown, “NASA Pinpoints Cause of Earth’s Recent Record Carbon Dioxide Spike,” NASA/JPL, October 12, 2017, URL: https://www.jpl.nasa.gov/news/news.php?release=2017-267

Think about that… the 2015-2016 El Nino, as a natural phenomenon, resulted in the “largest annual increases in atmospheric carbon dioxide concentration seen in at least 2,000 years.” And it was from the tropics. That doesn’t play well at all with the “anthropogenic carbon pollution” propaganda message the climate pornography industry needs the public to consume.

Is it any wonder the OCO-2 team has gone quiet since then?
They do still of course release the Level 2 OCO-2 data on their web data portals and make the analysis tools available for researchers. But you gotta have some pretty serious computing and data storage and retrieval services to make use of that.

There are more recent snippets of global CO2 visualizations, like this one from June-July 2018 (an ENSO ONI “neutral” period of about a +0.1 ONI value):
https://docserver.gesdisc.eosdis.nasa.gov/public/project/OCO/OCO2.Lite.9r.png

And when you really look at that 2018 non-El Nino period/ENSO neutral summer snapshot, it still looks nothing like the pre-OCO-2 data 2014 NASA CO2 supercomputer simulation that the climate change scam is built upon.

Thus the Bern Model and all the IPCC’s past WG1 assessments of the CO2 rise from burning fossil fuels are seriously questionable as to the attributed magnitude of the human cause of the rise in Evil Molecule concentration. The bottom line is that the actual OCO-2 data for the Evil Molecule is showing that the previous ideas of anthropogenic causes of the rise of atmospheric CO2 need a serious, unbiased reassessment.
And…
Will the climate change pornographers deal honestly with the OCO-2 revelations in the IPCC’s AR6? I’m not hopeful.

• Loydo says:

“That doesn’t play well at all with the “anthropogenic carbon pollution” propaganda message…”

Are you doubting the 40+% increase in atmospheric CO2 concentration since 1850 is of anthropogenic origin?

• Steve Keohane says:

Have the oceans warmed? CO2 lags heating by 500-1000 years, i.e. an effect, not a cause.

• Joel O'Bryan says:

Probably half is anthropogenic, the other half natural. IPCC assumes it is all anthro. Clearly badly wrong from the OCO-2 data.
OCO-2 data is showing us all the pre-2015 assumptions about CO2 sources and sinks were either mostly wrong or very wrong.

• Joel, the increase in atmospheric CO2 level since 1958 (the start of the high-precision measurement record) is actually “more than all” anthropogenic.

It is true that warming temperatures can cause the oceans to outgas CO2. Indeed, that is thought to be the cause for about 1/3 of the atmospheric CO2 increases which are recorded in ice cores, as the Earth warms during deglaciations. (The rest is probably an effect of receding ice sheets uncovering buried soil & vegetation.)

However, nature is currently removing CO2 from the atmosphere, not adding it. We don’t need OCO-2 satellite data to know that fact. We know it because we know about how much CO2 we’re adding (because we have detailed financial data documenting the production and sale of coal, oil and natural gas). We’re currently adding about 5 ppmv worth of CO2 per year. But the atmospheric CO2 level is increasing at only about half that rate. That means nature is removing about 2.5 ppmv CO2 per year, not adding it.
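The mass-balance argument in that paragraph is one subtraction. Using the round figures quoted above:

```python
human_addition = 5.0   # approx. annual human CO2 addition, ppmv (from fuel-sales data)
observed_rise = 2.5    # approx. annual rise actually measured in the atmosphere, ppmv

# Whatever nature's gross sources and sinks are, its NET contribution is the residual:
natural_net = observed_rise - human_addition
print(natural_net)   # → -2.5  (negative: nature is a net SINK of ~2.5 ppmv/yr)
```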

Also, we can quantify the approximate amount of CO2 level increase that we could plausibly get from a temperature increase, and it’s nearly two orders of magnitude smaller than the CO2 level increase we’ve actually seen.

In pre-industrial times, the oceans sometimes outgassed CO2 and sometimes absorbed CO2. (Of course the oceans are always both outgassing and absorbing CO2, but I’m talking about the global net flux.) That’s no longer true. Now, with the partial pressure of CO2 in the atmosphere increased by about 47% (280 ppmv → 411 ppmv), the oceans are net absorbers of CO2 from the atmosphere, even in cool La Nina years, as required by Henry’s Law.

The ice cores show that, even after long equilibration, a full transition from glacial maximum to interglacial probably corresponds to a globally averaged temperature increase of about 5°C (and about twice that at the poles). That produces a CO2 level change totaling only about 90 ppmv. That’s only about 18 ppmv CO2 level change per °C.
In contrast, since 1958 the atmospheric CO2 level rose by about 96 ppmv, during a time period when globally averaged temperatures rose by only somewhere between 0.4 and 0.9°C (depending on which temperature index you believe); here’s a graph:

I hope it’s obvious that <1°C of warming could not cause a 96 ppmv increase in CO2 level, nor even half that.

At first glance, if you believe GISS's high-end (0.9°C) figure, you might think that the warming could explain (0.9°C)×(18 ppmv/°C) = as much as 16 ppmv of CO2. But the increase happened over just sixty years. The delay seen in the ice cores (averaging at least 500 years) between reversals in the temperature trend and the CO2-trend reversals which follow suggests that the equilibration time must be at least several hundred years. So, to get 18 ppmv of CO2 from 1°C of warming would require hundreds of years of equilibration, yet 60% of that 96 ppmv CO2 increase occurred over just the last 30 years.
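The outgassing bound sketched above amounts to two lines of arithmetic (all inputs are the approximate values quoted in the text):

```python
glacial_co2_swing = 90.0   # ppmv: glacial-to-interglacial CO2 change (ice cores)
glacial_temp_swing = 5.0   # deg C: corresponding globally averaged warming
ppmv_per_degree = glacial_co2_swing / glacial_temp_swing   # ~18 ppmv per deg C

high_end_warming = 0.9     # deg C: GISS high-end warming since 1958
# Upper bound on temperature-driven outgassing, and only after centuries of equilibration:
outgassing_bound = high_end_warming * ppmv_per_degree      # ~16 ppmv

observed_rise = 96.0       # ppmv: actual CO2 rise since 1958
print(outgassing_bound, observed_rise, outgassing_bound / observed_rise)
```

Even the upper bound explains barely a sixth of the observed rise, and that only after hundreds of years of equilibration.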

The bottom line is simply that the recent atmospheric CO2 increase is entirely anthropogenic. There are a few people (Salby, Berry, Harde, Courtney, etc.) who dispute this, but they’re wrong.

Here’s the best examination of this dispute that I know of:

http://www.ferdinand-engelbeen.be/klimaat/co2_origin.html

• Gordon Dressler says:

Loydo,

Paleoclimate science has shown global atmospheric CO2 concentrations have ranged from a low of about 200 ppm to a high of at least 4000 ppm, without any contribution from humans.

That’s a 2,000% range of variation . . . and I should worry about a 40% change?

And, as the saying goes, “correlation does not necessarily indicate causation.”

• Samuel C Cogar says:

Joel O’Bryan – December 26, 2019 at 3:07 pm

Thus the Bern Model and all the IPCC’s past WG1 assessments of the CO2 rise from burning fossil fuels are seriously questionable as to the attributed magnitude of the human cause of the rise in Evil Molecule concentration. The bottom line is that the actual OCO-2 data for the Evil Molecule is showing that the previous ideas of anthropogenic causes of the rise of atmospheric CO2 need a serious, unbiased reassessment.

IMLO, …… it was absolutely asinine, idiotic and stupid to even consider the use of “satellite IR sensors” for determining the quantity of CO2 in any part or portion of earth’s atmosphere.

“DUH”, satellite sensors have “tunnel vision” and are “specific” frequency limited.

4. The reason the ECS is so wrong is because it must be in order to justify the existence of the IPCC/UNFCCC. The truth is an existential threat to these bureaucratic organizations.

Yes, the presumed linearity between W/m^2 and degrees is nonsense, and if we pull on this thread hard enough, the entire case for CAGW unravels.

The only linear sensitivity metric that matters is the physical sensitivity to solar W/m^2, that is, the W/m^2 emitted by the surface as a consequence of each W/m^2 of solar forcing. The data show this relationship to be very linear and independent of forcing or temperature. If the output were the change in surface emissions, rather than the change in temperature corresponding to that change in emissions, lambda would have a constant value of about 1.62 W/m^2 of surface emissions per W/m^2 of solar forcing, with less than 5% uncertainty. Compare that to the 0.8C per W/m^2 required to support 3C, which requires surface emissions to increase by 4.4 W/m^2 per W/m^2 of forcing and has a stated uncertainty of +/- 50%.

How anyone can accept a metric with +/- 50% uncertainty as settled can only be described as insane, yet 3C +/- 1.5C represents exactly that, and even so, the low end of this estimate exceeds what the laws of physics can reasonably support.
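The commenter's 4.4 W/m^2 figure can be sanity-checked against the Stefan-Boltzmann law. This is a minimal sketch, assuming a mean surface temperature of 288 K and unit emissivity (both assumptions of mine, not stated in the comment):

```python
# Check the 4.4 W/m^2 figure via the Stefan-Boltzmann law.
# Assumptions: mean surface temperature ~288 K, emissivity = 1.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T = 288.0                # K, assumed mean surface temperature

# Sensitivity of surface emission to temperature: dP/dT = 4*sigma*T^3
dP_dT = 4 * SIGMA * T**3
print(f"dP/dT at 288 K: {dP_dT:.2f} W/m^2 per K")   # ~5.42

# A claimed sensitivity of 0.8 C per W/m^2 of forcing then implies the
# surface must emit this much more per W/m^2 of forcing:
emission_increase = 0.8 * dP_dT
print(f"Implied surface emission increase: {emission_increase:.1f} W/m^2")
```

The result, about 4.3 W/m^2 of extra surface emission per W/m^2 of forcing, is close to the 4.4 W/m^2 quoted above.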

• “The reason the ECS is so wrong is because it must be in order to justify the existence of the IPCC / UNFCCC.” CO2isnotevil

“Coming climate crisis” politics explained with less than two dozen words
= brilliant

• Stewart Pid says:

Right on CO2 …. my thoughts on the question:
“We’re left with a question: why it is that forty years after the Charney report, there has been no progress in reducing the uncertainty in the estimate of the equilibrium climate sensitivity?”

The grant money is still rolling in by the truckload for any POS study with “climate change” in its description, and so any progress might impede the gravy train.

• Stewart,
Another reason is that politics got involved. Whenever politics inserts itself into a controversial issue, the first casualty is objectivity, which seriously undermines the scientific method when applied to controversial science.

• KcTaz says:

This, from:
NASA CO2 SATELLITES-OCO-2 data

https://www.nasa.gov/press-release/nasa-pinpoints-cause-of-earth-s-recent-record-carbon-dioxide-spike

Oct. 12, 2017
RELEASE 17-082

NASA Pinpoints Cause of Earth’s Recent Record Carbon Dioxide Spike

I am not at all sure how they came to their conclusion; it makes no sense to me. However, I’m not a scientist, so maybe I’m missing something? It seems to me that El Nino caused the Earth to warm and that the warming created the increase in CO2, so how they conclude that CO2 caused the warming is not something I understand.
“…In 2015 and 2016, OCO-2 recorded atmospheric carbon dioxide increases that were 50 percent larger than the average increase seen in recent years preceding these observations. These measurements are consistent with those made by the National Oceanic and Atmospheric Administration (NOAA). That increase was about 3 parts per million of carbon dioxide per year — or 6.3 gigatons of carbon. In recent years, the average annual increase has been closer to 2 parts per million of carbon dioxide per year — or 4 gigatons of carbon. These record increases occurred even though emissions from human activities in 2015-16 are estimated to have remained roughly the same as they were prior to the El Nino, which is a cyclical warming pattern of ocean circulation in the central and eastern tropical Pacific Ocean that can affect weather worldwide.

Using OCO-2 data, Liu’s team analyzed how Earth’s land areas contributed to the record atmospheric carbon dioxide concentration increases. They found the total amount of carbon released to the atmosphere from all land areas increased by 3 gigatons in 2015, due to the El Nino. About 80 percent of that amount — or 2.5 gigatons of carbon — came from natural processes occurring in tropical forests in South America, Africa and Indonesia, with each region contributing roughly the same amount.

The team compared the 2015 findings to those from a reference year — 2011 — using carbon dioxide data from the Japan Aerospace Exploration Agency’s Greenhouse Gases Observing Satellite (GOSAT). In 2011, weather in the three tropical regions was normal and the amount of carbon absorbed and released by them was in balance.

“Understanding how the carbon cycle in these regions responded to El Nino will enable scientists to improve carbon cycle models, which should lead to improved predictions of how our planet may respond to similar conditions in the future,” said OCO-2 Deputy Project Scientist Annmarie Eldering of JPL. “The team’s findings imply that if future climate brings more or longer droughts, as the last El Nino did, more carbon dioxide may remain in the atmosphere, leading to a tendency to further warm Earth…”

If anyone has an explanation as to how NASA came to the conclusion they did, I would be most grateful and interested in hearing it. Thanks.

5. c1ue says:

Willis,
So if I understand the implications correctly, the actual climate response is local: the local conditions dictate what response might be – conditions including present temperature, ocean vs. land, ice etc?
This is interesting and consistent with your other speculations concerning local temperature adjustments via tropical storms, etc.
It also is problematic from a modeling perspective: it would be necessary not only to increase the grid density, but to populate each grid with a reasonably accurate and updated “base condition”.
Is it possible to integrate the above delta T data points to get an “average”? And what would that “average” number be?
I’ll postulate ahead of time that this number doesn’t mean anything by itself, wouldn’t be usable for projection, etc. I’m just curious. My eyeball Mark I thinks the resulting number is closer to 1 than 1.5 …

• Willis Eschenbach says:

c1ue December 26, 2019 at 10:55 am

Is it possible to integrate the above delta T data points to get an “average”? And what would that “average” number be?

c1ue, averages (area weighted, of course) for the globe and a bunch of other sub-areas are in the title area of Figure 4 above.

Best regards,

w.

• c1ue says:

Willis,
Sorry, I should have RTFM.
I didn’t even think about the temperature dependent nature of what you are graphing – hence the yellow line as opposed to a single ECS point.
The line extends from roughly 1.7 to 0.7, but seems to be mostly under 1.0

6. On the article:
TCR and ECS are unknown.
They may be unknown for another century.
Feedbacks are unknown, even whether they are positive or negative.
The exact causes of climate change are unknown.

The climate in 100 years is unknown, even whether it will be warmer or cooler.

The only climate change fact we “know” with high confidence is that the Holocene inter-glacial will end, maybe tomorrow, or maybe in 1,000 years, and when it does end, today’s global warming will be looked back at as “the good old days”.

The effect of CO2 on the global average temperature can only be guessed at, based on closed system lab experiments using artificially dried air.

The experiments suggest any warming effect would be mild and harmless.

The worst case for CO2, by assuming ONLY CO2 causes global warming, would be a harmless TCR of about +1 degree C.

There is no evidence of any water vapor positive feedback that triples the warming effect of CO2 alone

But we can’t have harmless (actually pleasant) global warming … because the coming climate crisis has almost nothing to do with real science.

The coming climate crisis is an imaginary boogeyman used by leftists to increase government power, while falsely claiming that power is needed “to save the planet for the children”.

“Save the planet” is the new way to “sell” socialism and Marxism.

The coming climate crisis, coming since 1957 according to some “scientists” (Roger Revelle), is propaganda that appears to be working.

We humans are living in the best climate in at least 800 to 1,000 years, since before the Little Ice Age centuries, yet few people are interested in climate history, or even enjoying the current climate !

As an atheist for about 60 years, I see all religions as being built around fairy tales — the coming climate crisis “cult” is similar, although it tends to attract people who dismiss the conventional religions.

After about 325 years of intermittent global warming since the cold 1690s, during the Little Ice Age’s Maunder Minimum, those of us who prefer real science cannot even get the Climate Alarmists to admit that mild warming (probably a total of about +2 degrees C) was good news!

Global warming, and more CO2 in the air, support more life on our planet — we can’t even get Alarmists to admit that !

We can’t get Climate Alarmists to admit there was a Little Ice Age !

Data, facts and logic will NOT change minds of Climate Alarmists …
because data, facts and logic never created their beliefs in the first place !

What will change (other) minds is if the “fossil fuels are evil” believers get even more radical, making many very expensive and infeasible demands that most taxpayers will reject.

In real science, using a 1975 “formula” for CO2’s effect on the average temperature (+1.5 to +4.5 degrees C.) would have been rejected decades ago — because warming predictions using that formula ranged from 2x to 4x actual global warming.

In climate junk science, a scary claim for the future climate is made, stated with great certainty (at least stated loudly), and never changed.

The bad news is always coming in the future, but never arrives.

The only prediction change allowed is to say “it’s worse than we originally thought”, or better yet — “it’s worse than worse than we originally thought” !

The coming climate crisis is the biggest science fraud in the history of our planet.

Attempts to “stop” climate change are a shameful waste of money that could have been used to help the one billion people on Earth who are so poor that they have no electricity.

They have to burn wood and animal dung to cook food.

But leftists want lots of political power, to micro-manage YOUR life … so why should they care about one billion people with no electricity … who can only dream about being able to afford some coal for cooking their food !

• Joel O'Bryan says:

+100.

• Christopher Chantrill says:

Yes. That is why I have the maxim: “the only warrant for government power is existential peril.”

So, you want government power you gotta persuade the rubes that they are about to die.

• Willis Eschenbach says:

Richard Greene December 26, 2019 at 11:00 am
Richard, while I agree in the most part, I’d disagree with the following.

Feedbacks are unknown, even whether they are positive or negative.

As Figure 5 shows, below freezing feedbacks are generally positive, and above freezing they are generally negative.

Thanks for the good summation,

w.

• Loydo says:

If you think that is a good summation Willis then man are you in for a nasty surprise.

7. I like reading your posts because they are well-written and provide the data you use to make your conclusions. This one is thought provoking but I do have a question.

You said, “We know this because the non-condensing greenhouse gases (CO2, methane, etc.) are generally well-mixed.” My question is whether you, or anyone else reading this, has seen data showing that those GHGs are as well mixed as everyone, including me, generally believes. For the ECS over a thousand years that is not an issue. For the TCR and ICR, maybe not, and the satellite data you use might be affected by differences in GHG concentrations.

• Willis Eschenbach says:

Roger, we have satellites that measure CO2.

The color change makes it look unmixed … but that’s just the graphic. Note that the full range of the global variation is about ±2%; the standard deviation would of course be much smaller than that, which is indeed “well-mixed”.

w.

• Brandon says:

Heat islands due to CO2? I thought it was all the concrete and asphalt and their (relatively) higher thermal capacity than the surrounding soil…

Rotting plants.

• Brandon says:

Why do the concentrations look relatively higher over forested areas?

• Hugs says:

November pattern. Trees and grass grow during a season, rot and burn during another.

• Keith Minto says:

Just a thought, given the discussion: it would be interesting to see a vertical profile of CO2 at different locations.
Static and dynamic.

8. Duane says:

The only reason the climate alarmists stick with a constant factor is because they have no freaking idea how the earth responds to changes in CO2, and so to compensate the alarmists simply assert – not assume, assert – that it must be a constant. The corollary to asserting a constant is the presumption and assertion that the sole thermostat of Earth is the CO2 concentration in ambient air, and that nothing else “controls” atmospheric equilibrium temperature.

Which assumption, despite all their caterwauling that their ridiculous theory is “scientific”, is on its face utterly preposterous. If there is ANY other factor or process, natural or unnatural, that affects equilibrium atmospheric temperature, then by definition there can be no “constant”.

Logic utterly escapes the unwashed masses of climate alarmists who cling to their religious faith despite all evidence to the contrary. Any rational, thinking person who is even marginally familiar with the sciences – physics, astrophysics, biology, chemistry, geochemistry, etc. etc. etc. understands both empirically and logically that LOTS of processes have an effect on climatic performance, and that therefore there cannot be a single fixed constant that defines climate sensitivity to CO2 concentration in the atmosphere.

It seems incredible that this is even a matter of any debate, or denial, yet the climate alarmists unmask themselves as the true “climate deniers”.

That is why they need such a long period for equilibrium: they try to allow time for all other variables to net out to zero. But as I said above, there can be no time lag to equilibrium; otherwise the Gas Laws are breached.

• Duane says:

The alarmists need a very long timeframe to achieve equilibrium, mainly in order to make their theory unfalsifiable.

9. Derg says:

Don’t try to frighten us with your sorcerous ways, Willis. Your sad devotion to that ancient religion known as Science has not helped you conjure up the stolen data tapes, or given you enough clairvoyance to find the rebels’ hidden fortress…

10. beng135 says:

The IPCC assessed that it is ‘likely’ to be in the range of 1.5 °C to 4.5 °C (Figs 2 and 3), which is the same range given by Charney in 1979.

I’m actually a bit shocked that the IPCC has let the 1.5C low-end number stand. They’re not particularly bothered w/facts (they prefer story-lines), so why haven’t they raised that over the yrs to 2.0 or 2.5C or even more? Just wondering….

• Rich Davis says:

I’d say because they fear it’s probably below their low-end estimate and they want to be able to say that they weren’t that far off. (While at the same time continuing to imply that the value may be 6 or 7). You know that if it eventually turns out to be 1.3, then every MSM story would be about how climate scientists always said it was 1.5 and they were pretty darn close, not that they routinely implied that it could be 6 or higher and were off by a factor of 5 after 40 years and billions of research dollars.

• Hugs says:

IPCC scientists are not all liars and charlatans, that’s why. The SPMs are political, but the science is much better than people think on both sides.

The IPCC reflect a true spread between optimism and alarmism inside the scientific community. I side with the optimists, but I must accept refuting all current climate alarmism is a gargantuan task that only time will do.

Averaging the sensitivity estimates is not a thing. Their spread shows what we call a true lack of consensus. We know some warming will come, but the jury is still out if it is serious or can we just adapt to it.

• Willis Eschenbach says:

Hugs December 27, 2019 at 12:29 am

Averaging the sensitivity estimates is not a thing. Their spread shows what we call a true lack of consensus. We know some warming will come, but the jury is still out if it is serious or can we just adapt to it.

Ummm … no. We do NOT know that “some warming will come”, because we don’t know what controls the earth’s temperature.

We don’t know why Medieval times were as warm as or warmer than the present. We don’t know why the earth cooled after that to the Little Ice Age. We don’t know why we didn’t continue to cool into a real ice age. We don’t know why we started to warm out of the LIA, or why it started at the end of the 1700s and not the 1800s. We don’t know why the warming lasted for the last couple of centuries. Not knowing any of that, we also don’t know if tomorrow we might start to slide into another Little Ice Age, or even a real glaciation.

And when scientists cannot answer any of those questions, why on earth do you think we can say whether the earth will warm or cool over the next 100 years? That’s a voodoo magic claim, not scientific in any sense.

w.

• beng135 says:

Hugs sez:
IPCC scientists are not all liars and charlatans, that’s why.

Yeah, OK, maybe, but their bosses that write the summaries/headlines, write & direct the policies & divvy up the money are, and that’s what really matters.

• Willis Eschenbach says:

Hugs December 27, 2019 at 12:29 am

IPCC scientists are not all liars and charlatans, that’s why.

Ya gotta love it when the best that someone can say about an organization is that they are “not all liars and charlatans”.

w.

• Rich Davis says:

Ok hugs I will rise to your defense. As your editor I would have phrased it thus. I hope you and Willis both could agree…

IPCC scientists are not all liars and charlatans, that’s why. The SPMs are political, but the science is much better than people think on both sides.

The IPCC reflect a true spread between optimism and alarmism inside the scientific community. I side with the optimists, but I must accept refuting all current climate alarmism is a gargantuan task that only time will do.

Averaging the sensitivity estimates is not a thing. Their spread shows what we call a true lack of consensus. There is solid evidence for the low end estimates—solid science that can’t just be dismissed with the wave of a hand. There are also pal-reviewed sham papers supporting alarmism that can’t be dismissed for political reasons.

We expect that man’s continued use of fossil fuels—all other factors being equal—would bring some warming, but the jury is still out if it is serious or can we just adapt to it, or indeed if the warming will be entirely beneficial.

Not yet having a firm grasp on the mechanisms that caused significant and abrupt climate change, but that were clearly not driven by any effect related to CO2 concentration, it would be the height of hubris to base an 80-year weather forecast on our limited understanding of just one potential factor in a highly-complex system.

• Willis Eschenbach says:

Rich Davis December 28, 2019 at 7:07 am

Ok hugs I will rise to your defense. As your editor I would have phrased it thus. I hope you and Willis both could agree…

IPCC scientists are not all liars and charlatans, that’s why. The SPMs are political, but the science is much better than people think on both sides.

The IPCC reflect a true spread between optimism and alarmism inside the scientific community. I side with the optimists, but I must accept refuting all current climate alarmism is a gargantuan task that only time will do.

I’m sorry, but the IPCC is a corrupt organization.

See “Caspar and the Jesus Paper” for one of many examples. They are gatekeepers AGAINST anything but the party line.

w.

• Rich Davis says:

The IPCC is obviously a corrupt organization, but the statement set a much lower bar than that, as you yourself remarked: NOT ALL IPCC SCIENTISTS are liars and charlatans.

To be honest I’m not sure that honest scientists are still permitted to participate, but I submit that Richard Lindzen is one example of a non-corrupt scientist who participated in the IPCC process.

• Herbert says:

beng135,
In 2007, in Ernesto Zedillo’s book “Global Warming: Looking Beyond Kyoto,” Stefan Rahmstorf published a paper, “Anthropogenic Climate Change: Revisiting the Facts.”
It was in answer to Richard Lindzen’s paper in the same book, “Is the Global Warming Alarm Founded on Fact?”
He wrote, “With an observed temperature increase since the 19th century of 0.8 C, and (as Lindzen posits, for the sake of argument) assuming this to be caused by greenhouse gases alone, we would infer a climate sensitivity of 0.8 C × (3.7 W/m2)/(2.0 W/m2) = 1.5 C. This is at the lower end of, but consistent with, the IPCC range.”
He concludes, “But taking all ensemble studies and other constraints together, my personal assessment (and that of other researchers) is that the uncertainty range can now be described more realistically as 3 degrees C +/- 1 degree C.”
He cites for his conclusion: James Hansen and others, “Global Temperature Change,” Proceedings of the National Academy of Sciences 103, no. 39 (2006): 14288-93.
I don’t know why the IPCC are so obstinate in defending 1.5 C to 4.5 C with 3 degrees C as ‘best estimate’, although AR5 has now dropped a “best estimate”.
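Rahmstorf's back-of-envelope number is just the canonical Equation 1 of the head post, applied once to infer lambda and once to scale it to a doubling. A one-line check, where the 0.8 C, 2.0 W/m2, and 3.7 W/m2 inputs are his quoted assumptions:

```python
# Rahmstorf's back-of-envelope ECS estimate, as quoted above.
observed_warming = 0.8   # degC since the 19th century (his assumption)
forcing_so_far = 2.0     # W/m^2 of greenhouse forcing to date (his assumption)
forcing_2xCO2 = 3.7      # W/m^2 per doubling of CO2

# lambda = dT / dF (Equation 1), then scale to one doubling of CO2
ecs = observed_warming / forcing_so_far * forcing_2xCO2
print(f"Implied ECS: {ecs:.1f} degC per doubling")   # 1.5
```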

11. Javier says:

While λ does not have to be a constant, if one takes a fixed point to start the doubling as Knutti does with the preindustrial value defined as the average between 1850 and 1900, that particular doubling should have a single value of λ. Whether that value can be determined or not is a different matter.

But I agree that they are on a wild goose chase with climate sensitivity.

• Dr Deanster says:

Using Willis’ logic, which I agree with, the ECS seems highly dependent on snow and ice cover. Therefore, as higher latitudes become ice free, the ECS will decrease. In an ice-free world, the ECS theoretically falls to 0.2C per doubling for the entire globe.

Thus …. there is no runaway GHG effect. Further, the decrease in Arctic ice in the summer pushes the ECS down toward 0.2 C during maximum sunlight in the Arctic. The increase in ECS during winter in the Arctic is rather meaningless, as the temperature is still far below freezing. This fits with the observation that Arctic summer temperature never goes above the green line on the D-MINN graph, but winter temps often fail to fall back to the green average line.

Great work Willis.

12. dh-mtl says:

The answer to the enigma is quite clear in Figure 5.

From Figure 5: Above freezing the feedback constant is clearly negative, and continues to decline until, at the highest temperatures, there is virtually no additional temperature rise with increasing down-welling radiation.

The cause of the negative feedback is clearly water vapor. The partial pressure of water vapor increases exponentially with temperature.

Any increase in temperature caused by increasing down-welling radiation is countered by evaporation, which increases exponentially with temperature, and by correspondingly increasing rates of convective transport of this water vapor throughout the atmosphere. The evaporated water condenses at the cloud tops, radiating into space and bypassing the greenhouse gases concentrated at lower altitudes.

The abundance of water on the earth’s surface, combined with water’s unique properties (high heat of evaporation, exponentially increasing (with temperature) partial pressure and low density relative to air), puts an upper limit on the earth’s temperature. This is the message of Figure 5.
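The exponential growth of water's saturation vapor pressure with temperature, which this comment leans on, is easy to illustrate. This is a minimal sketch using the August-Roche-Magnus approximation to the Clausius-Clapeyron relation, one of several standard empirical fits, chosen here purely for illustration:

```python
import math

def saturation_vapor_pressure(t_celsius: float) -> float:
    """Saturation vapor pressure over water in hPa,
    August-Roche-Magnus approximation."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

# Roughly 7% more water-vapor capacity per degree of warming:
for t in (0, 10, 20, 30):
    e = saturation_vapor_pressure(t)
    growth = saturation_vapor_pressure(t + 1) / e - 1
    print(f"{t:2d} C: {e:6.1f} hPa  (+{growth:.1%} per additional degree)")
```

The roughly 6-7% per degree growth rate is what "exponentially increasing with temperature" amounts to in practice.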

Yes, water vapour makes the bypassing of any radiative greenhouse effect much more efficient than via convection alone, because it is then not necessary to return all energy to the surface before radiation to space.
However, convection still varies so as to bypass any remaining radiative effects of all GHGs, including water vapour.

• Chaswarnertoo says:

The hotter it gets the more it rains. Measurable. More rain is due to more evaporative cooling. Fact.

13. beng135 says:

Fig 3 (downwelling longwave) is interesting in that it reflects ocean temperature differences at the same latitudes, at, say, cold/warm water-current boundaries like the Gulf Stream. I’m not sure I understand why, other than maybe greater water vapor over the warmer water.

14. Kevin kilty says:

I live in a region more than 2,000 m above sea level, which is often partially or entirely snow-covered for six months of the year. I have noticed in the hourly temperature data a very disproportionate number of hours in the range of −1 to 0 °C. In fact, the temperature distribution in the December-through-February season is roughly normal with a delta spike at zero. Zero temperature occurs almost exclusively during daylight hours in this winter season, so the ICR is often zero when there is a mix of water and ice or snow on the surface. During spring and early summer the soil is moist and the response to input solar energy is evaporation, which suppresses the ICR. By late summer and early autumn the ground is dry, so we observe some immediate warmth when the sun shines, but this is also a season of dry, clear air, which leads to substantial LWIR loss from surface to sky when the sun is low or shaded.

Lambda (λ) is a local function dependent on surface condition, temperature, season, etc. It possesses a global average, but this may not be a useful number. It might be like the seasonal average temperature around here — a number obtained by averaging runs of cold days after passage of a front with runs of warm days ahead of a front, but which only rarely applies to any particular day. People informed by the seasonal average temperature are often surprised by what the weather is actually like.

• Clyde Spencer says:

Kevin
I think that it is best to think of Lambda as a first-order approximation of a variable, which is tangent to the function describing temperature change with respect to forcing, for different surface conditions. If the underlying function is highly non-linear, then a simple ‘average’ tangent for a limited range of observations of surface conditions may not properly represent the behavior of even the tangent, let alone the function it is following. But then, after all, the GCMs are based on physics!

15. Curious George says:

I don’t understand how you got data for your “temperature changes per 3.7 W/m2” pictures. Does it come from a model, or has it somehow been observed? Would it be 3.7 on the equator, at poles, or everywhere?

Glad to see that you are in great shape again, and making good use of your driving ban. As the saying goes, in vain does a sober man knock at the gates of the Muses. Happy New Year.

• Willis Eschenbach says:

Good question, George, my bad for forgetting to cite that. The data is from the CERES EBAF dataset. I’ve edited the head post to reflect that.

And thanks for the good wishes. As I wrote about on my blog, the bad news was that I had a seizure. However, that wasn’t the problem. They put me on a ghastly anti-seizure drug called Keppra, which left me without ambition or appetite.

So I stopped taking it. Couldn’t get through to my neurologist, he was out of town, so I just tapered it down over three days and quit … day and night.

w.

16. Doc Chuck says:

Willis’ figure 3 graph shows that returning atmospheric (downwelling LWIR) radiation is maximal around the equatorial tropics, yet figure 5 shows that this is the region of much-reduced further warming of already-warm liquid ocean and land surfaces compared to Stefan-Boltzmann expectations, along with a contrary, much-enhanced warming of especially frozen oceanic/land surfaces. This puts one in mind of those revealing NASA visualizations showing that the two major 20th-century global average temperature rises (1920-1940 and 1980-2000) were both curiously concentrated in northern-hemisphere arctic/subarctic regions of the planet, with the tropics mostly unaffected. Of course this is all at odds with the much-promoted hysterics that global warming is fixin’ to fry residents of already-warm climes rather than beneficially relieve the deep chill faced by Scandinavians, Siberians, Alaskans, and Canadians.

17. mkelly says:

The climate sensitivity idea does not take into account the change in mass and specific heat of air as the concentration of CO2 goes up.

It is also in direct conflict with thermodynamics in regards to specific heat.

dT = Q / (m * Cp). So the right side of Equation 1 should equal the right side of the specific heat equation, but it doesn’t.

The specific heat tables don’t have an addendum regarding whether IR is involved in the energy source.

• Willis Eschenbach says:

mkelly December 26, 2019 at 11:50 am

The climate sensitivity idea does not take into account the change in mass and specific heat of air as the concentration of CO2 goes up.

There is so little CO2 in the atmosphere (four HUNDREDTHS of one percent) that the change in mass and specific heat will be lost in the noise.

w.
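The magnitude of Willis's point can be sketched with textbook specific-heat values. The simple mass-fraction mixing rule below is an approximation of mine, and the cp values are nominal room-temperature numbers:

```python
# Order-of-magnitude check: how much does 410 ppmv of CO2 change the
# specific heat of air?  (cp values are standard textbook numbers)
cp_air = 1005.0   # J/(kg K), dry air
cp_co2 = 844.0    # J/(kg K), CO2 near room temperature
M_air, M_co2 = 28.97, 44.01   # g/mol

x_co2 = 410e-6                  # mole fraction (~410 ppmv)
w_co2 = x_co2 * M_co2 / M_air   # mass fraction, ~0.062%

# Linear mass-weighted mixing rule for the specific heat of the mixture
cp_mix = (1 - w_co2) * cp_air + w_co2 * cp_co2
print(f"CO2 mass fraction: {w_co2:.4%}")
print(f"cp change: {cp_mix - cp_air:.3f} J/(kg K)  ({cp_mix/cp_air - 1:.4%})")
```

The change works out to about 0.1 J/(kg K), roughly one part in ten thousand, which is indeed lost in the noise.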

• Patrick MJD says:

There are commenters who state that there is too much CO2 in the atmosphere, with concentrations at ~410 ppmv. I ask what the “ideal” concentration is, or how much is “just right” for the planet/climate etc., in their opinion. Most never answer that question; some answer with 280 ppmv. I kid you not!

Of course we know CO2 at ~280 ppmv, an estimate, never stopped the climate from changing nor prevented bush fires in Australia (or anywhere else).

• Jean Parisot says:

The answer is 1200ppm. The next question is can we get there before the interglacial ends.

18. The Atmosphere cannot heat itself, nor the surface of the Earth. Anyone who tells you it can is either lying or unaware of basic physics.

CO2 absorbs 15-micron radiation, both at the surface and at the TOA. At the surface the effect is essentially saturated, as 100% is absorbed in the first 10 meters. Extra CO2 might lower this height a centimeter or two, but this will hardly change surface temperatures, or the temp of the Atmosphere at the traditional 2-meter measuring height.
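The band-center "saturation" claim here is an application of Beer-Lambert absorption. In this sketch the absorption coefficient k is purely illustrative, chosen so that a 10 m path absorbs essentially 100% as the comment asserts; it is not a measured value, and it says nothing about the weakly absorbing band wings discussed elsewhere in the thread:

```python
import math

def fraction_absorbed(k: float, path_m: float) -> float:
    """Beer-Lambert law: fraction of radiation absorbed over a path
    with absorption coefficient k (per meter)."""
    return 1.0 - math.exp(-k * path_m)

# k chosen (hypothetically) so that 10 m absorbs ~99.99% at band center
k = 0.92   # per meter, illustrative only -- not a measured value

for conc_mult in (1.0, 2.0):
    f = fraction_absorbed(k * conc_mult, 10.0)
    print(f"{conc_mult:.0f}x CO2: {f:.6f} absorbed in 10 m at band center")
```

Doubling the absorber moves the band-center absorbed fraction by only about one part in ten thousand, which is the sense in which the center of the band is "saturated".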

At the Top of the Atmosphere (TOA) extra CO2 raises the altitude at which the Atmosphere is freely able to radiate to space, thus lowering the temperature at which the Atmosphere freely radiates to space, thus radiating less energy and retaining more in the Atmosphere. “Downwelling radiation” means nothing, as none of it reaches the Surface.

No one can calculate the magnitude of this effect, so no one knows the ECS, and forty years of effort involving supercomputers and PhDs has gotten us no closer.

“Downwelling radiation,” other than that from the Sun of course, is a complete creation of the so-called “Climate Scientists,” I think because none of them actually understand the mechanism by which heat is retained in the Atmosphere.

Ask any Physics professor or Thermodynamics/Heat Transfer professor, who is not professionally involved in “Climate Science,” and this will be confirmed beyond the slightest doubt.

Cheers, and Happy New Year!

• Carbomontanus says:

Anything that is hot, does heat.

Heat is somthing that is measured in watts, not in centigrades or fahrenheit.

Nothing but the further conductivity of the field decides on how heat is spreading out. Heat is not affected or stopped by distal temperatures at all. Heat is not material. it is a material form or even a situation travelling in empty space by the speed og light.

The background heat radiation of the Universe at 3 Kelvin does heat up the antennas by which it is detected and observed, to a temperature a little bit higher than it else would have had due to local heating at very high above 3K, provided that those antennas are not superconductors;……

…try hard now to avoid denying and fighting the microwave ovens in order to defend your basic ignorance and illusions of the physical nature of heat and energy.

Proof:
A proper winter coat does warm you up in terms of watts, of course.

This is old human scientific experience and wisdom from all through the Stone Age, which was quite long.

You can light a fire, but you can also dress up in wools or furs; that will warm you as well and save both fat and firewood for you.

A proper suit in the winter actually warms you back and heats you up by your own body heat, with which you heated it up, warming you back from a temperature that is lower than your skin and body temperature.

Second Proof:
By wearing such a good winter coat, your skin temperature will rise, you will shiver and freeze less, and you can relax and eat less bread and butter and sugar and cornflakes. Your fuel and food budget in terms of joules or calories will benefit clearly from dressing up in wools and furs, with a proper anorak or duffle coat, when it is bitterly cold.

Living by fighting and denying such an elementary fact is trying, more and more desperately, to sell a perpetuum mobile of the first order.

It fights the permanence of heat and permanence of energy.

Quit that.

• Carbomontanus

You talking to me? Back to school for you, as in, Physics 1 and 2, Thermo 1 and 2 and both labs, Fluids and Fluids lab, and most important of all, Transport of Heat and Mass and the lab.

Report back after your schooling is more complete.

• Michael Moon wrote, “CO2 absorbs 15-micron radiation… At the surface the effect is essentially saturated, as 100% is absorbed in the first 10 meters.”

That’s technically true, “at” 15 μm. But near 15 μm, where CO2 absorbs only weakly, the effect is not saturated. The Earth radiates at those wavelengths, too, so adding more CO2 does have a warming effect.

Michael Moon wrote, “At the Top of the Atmosphere (TOA) extra CO2 raises the altitude at which the Atmosphere is freely able to radiate to space, thus lowering the temperature at which the Atmosphere freely radiates to space, thus radiating less energy and retaining more in the Atmosphere.”

The “emission height” (i.e., the altitude at which the atmosphere is so thin that most radiation emitted “upward” by CO2 escapes to outer space) is nowhere near the TOA. At and near the center of CO2’s absorption band, the emission height is at or near the tropopause (which varies in altitude, but averages around 11 km). Since temperature is roughly constant with altitude near the tropopause, raising the emission height for those wavelengths does not lower the temperature at the emission height.

However, the emission height is not a single number, it varies with wavelength. At wavelengths where CO2 absorbs only weakly, the emission height is much lower, so for those wavelengths raising the emission height does lower the temperature at the emission height, thus reducing the emission rate, and causing the atmosphere to warm.

Michael Moon wrote, “‘Downwelling radiation’ means nothing, as none of it reaches the Surface.”

That’s incorrect. The CO2 in the atmosphere doesn’t just absorb 14-16 μm LW IR, it also radiates those wavelengths. (In fact, as a general rule, anything which can absorb at a particular wavelength can also emit at that wavelength, though perhaps with very different intensity.) Some of that radiation emitted by the CO2 in the atmosphere goes down. Some of it reaches the surface.

Some of it is even emitted by the air inside a Stevenson Screen… which might give you a clue about why detecting changes in downwelling radiation due to rising CO2 levels is challenging.

Michael Moon wrote, “No one can calculate the magnitude of this effect…”

Prof. Happer can, and he has. (He found that it’s about 40% less than the commonly claimed figure of 3.7 ±0.4 W/m² per doubling.)

Michael Moon wrote, “, so, no one knows ECS…”

The major difficulties determining ECS are not due to difficulty determining the warming effect of CO2. The big challenge is determining the effects of feedbacks, especially those involving clouds. There are dozens of feedbacks which either amplify or attenuate warming due to things like GHG level changes, or which affect GHG levels, themselves.

Michael Moon wrote, “‘Downwelling radiation,’ other than that from the Sun of course, is a complete creation of the so-called “Climate Scientists,” … Ask any Physics professor or Thermodynamics/Heat Transfer professor, who is not professionally involved in “Climate Science,” and this will be confirmed beyond the slightest doubt.”

What’s good for the goose is good for the gander: You should try your proposed test, yourself, before suggesting that others try it. I doubt you can find any physics professor, anywhere, who disputes the existence of downwelling LW IR radiation from radiatively active gases in the atmosphere, or who disputes that it raises surface temperatures, at least a little bit.

• Correction:

I wrote, “(He found that it’s about 40% less than the commonly claimed figure of 3.7 ±0.4 W/m² per doubling.)”

That was sloppy. I should have written, “(He found that the forcing from an increase in CO2 level is commonly overestimated by about 40%; i.e., that the correct figure is about 29% less than the commonly claimed forcing of 3.7 ±0.4 W/m² per doubling of CO2.)”

Thank you for the help. What fraction of “Downwelling Radiation” is not thermalized before reaching the surface or near it? A vanishingly small fraction, as you know, but have failed to mention.

• Yes, and what assumptions about clouds did Professor Happer use? Not very scientific…

19. Marlow Metcalf says:

Could you find the time to make a chart for the U.S. Climate Reference Network and compare it to the WUWT-approved U.S. sites for the last 100 years? If the 12 years of USCRN match the last 12 years of the best-located sites that have been there for 100 years, that might make their last 100 years of data more interesting.

20. M Courtney says:

As by far the most potent GHG is water vapour, perhaps λ should be treated as a function of T and humidity rather than as a constant.

H2O forms hydrogen bonds easily.

21. A C Osborn says:

How much of the increase in temperature at the Poles involves massive movements of air from Tropics to poles and not just albedo?

• Greg says:

IMO the idea that albedo is a significant player has been blown out of the water (sic) in the last 10 years when, despite a vastly increased area of exposed water, the Arctic ice coverage is indistinguishable from what it was when all the screaming about an “ice free Arctic” started.

If albedo was a significant factor we would be seeing the predicted +ve feedback and run-away melting. We are not. Observation trumps the naive, simplistic assumption and the gnashing of teeth.

• Willis Eschenbach says:

Greg December 26, 2019 at 1:52 pm

IMO the idea that albedo is a significant player has been blown out of the water (sic) in the last 10 years when, despite a vastly increased area of exposed water, the Arctic ice coverage is indistinguishable from what it was when all the screaming about an “ice free Arctic” started.

If albedo was a significant factor we would be seeing the predicted +ve feedback and run-away melting. We are not. Observation trumps the naive, simplistic assumption and the gnashing of teeth.

Actually, the Arctic has been warming faster than the rest of the globe. However, there’s no reason to think that this perforce leads somehow to “runaway melting”.

Best to you,

w.

22. mobihci says:

“He pointed out that if you take the upwelling surface longwave radiation, and you subtract upwelling longwave radiation measured at the top of the atmosphere (TOA)”

this seems like mission impossible to me. it would be easy to measure a single point or group of points, but a uniform surface measurement of the entire spectrum is done how?

• Willis Eschenbach says:

mobihci, the full explanation of your very relevant question is here.

w.

• Greg says:

Thanks Willis, I guess that answers my excellent question too.

So the outgoing “surface” radiation is measured for TOA. You’ve gotta love climatology.

… using a radiative transfer model…. where the global net is constrained to the ocean heat storage …more closely match modeled downward longwave surface fluxes ….

In short, it’s sausage meat.

23. rovingbroker says:

Shouldn’t we be searching for the change in the earth’s energy and its distribution rather than some single value that represents the temperature of the entire atmosphere?

24. Robber says:

Willis, always appreciate your well presented arguments based on data.
One question: As you highlight in Fig 5, the response is far less at higher temperatures near the equator.
How then can an average be applied to the world as a whole?
I understand that the global average temperature is about 16 degrees C and the 1.5-4.5 °C increase applies to that average. But what does it imply for temperatures at the equator that average 26 degrees C year round, versus the poles where temperatures vary from 5 degrees C to minus 35 degrees C?

• greg says:

This all comes back to Schneider’s “dilemma”. They have to make it simple and scary.

Physics does not allow averaging temperature: it is not an extensive quantity. The whole venture of attempting to “linearise” every non linear thing happening in a complex, non-linear, chaotic system and reduce the whole of climate down to one scale quantity should be laughable to any self respecting scientist.

Of course, that does not concern climatology because it is no more a science than numerology.

25. A few thoughts…

1. Prof. Will Happer has found evidence that that 3.7±0.4 W/m² estimate for the average increase in ground level irradiance, before feedbacks, is overestimated by about 40% [also Happer 2014]. So:
(3.7 ±0.4) / 1.4 = 2.64 ±0.29 W/m² / doubling.

That seems to have been confirmed by the measurements reported in Feldman et al 2015, who measured the downwelling LW IR spectrum directly over a ten-year period, during which average atmospheric CO2 levels rose 22 ppmv.

A 22 ppmv increase (+5.953% from 369.55 ppmv in 2000) in atmospheric CO2 level resulted in a 0.2 ±0.06 W/m² increase in downwelling 15 µm IR from CO2. (5.953% is about 1/12 of a doubling.)
(1/log2(1.05953)) × 0.2 W/m² = 2.40 W/m² (central value)
(1/log2(1.05953)) × (0.2 – 0.06) W/m² = 1.68 W/m² (low end)
(1/log2(1.05953)) × (0.2 + 0.06) W/m² = 3.12 W/m² (high end)

So, we find that CO2 forcing is 2.40 ±0.72 W/m² / doubling, which is consistent with Happer’s result, and inconsistent with the commonly heard 3.7 ±0.4 W/m² estimate.
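That extrapolation arithmetic is easy to check with a short script (a sketch only, using the 22 ppmv / 369.55 ppmv / 0.2 ±0.06 W/m² figures quoted above; small rounding differences from the figures as originally typed are expected):

```python
import math

# Re-check of the extrapolation from Feldman et al. 2015 as quoted above.
c0 = 369.55          # ppmv CO2 at the start of the measurement period (2000)
dc = 22.0            # ppmv increase over the ten years
measured = 0.2       # W/m² increase in downwelling 15 µm IR from CO2
uncertainty = 0.06   # W/m²

# Fraction of a doubling represented by the 22 ppmv increase (~1/12)
doublings = math.log2((c0 + dc) / c0)

central = measured / doublings
low = (measured - uncertainty) / doublings
high = (measured + uncertainty) / doublings

print(f"{central:.2f} W/m²/doubling ({low:.2f} to {high:.2f})")
```

The central value comes out near 2.4 W/m² per doubling, in line with the comparison to Happer’s result above.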

2. AR5 Table 9.5 gives the average ECS in the CMIP5 models as 3.2 ±1.3 °C/doubling (rather than the “Mean 3.3°C” shown in Fig. 1 from Knutti), using a 90% (instead of 95%) confidence interval:
pdf: https://sealevel.info/AR5_Table_9.5_p.818.pdf
source: http://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter09_FINAL.pdf#page=78

3. I agree with your “heresy,” Willis, that climate sensitivity is not constant, but I don’t think that’s really very heretical. It’s pretty well understood that Global Warming disproportionately warms high northern latitudes. Climate activists call it “Arctic amplification,” and they all think that brutal northern climates becoming slightly less harsh is a terrible thing — which, IMO, is proof of insanity.

Arrhenius enthusiastically predicted it, and, sure enough, we’re enjoying “more equable and better climates, especially as regards the colder regions of the earth… [and] much more abundant crops… for the benefit of… mankind.”

In any event, we can finesse the objection by simply inserting the word “average” into the “sensitivity” definitions. E.g.,

TCR n. The effect on the Earth’s average global near-surface air temperature, after seventy years, of increasing the atmospheric CO2 level by 1% per year (a rate of increase which would double the CO2 level in about seventy years).
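The parenthetical is easy to verify: compounding at 1% per year for seventy years very nearly doubles the level (a quick sanity check, nothing more):

```python
import math

# 1% per year compounded for seventy years
growth = 1.01 ** 70
print(round(growth, 3))        # just over 2, i.e. a doubling

# Exact doubling time at 1%/yr growth
t_double = math.log(2) / math.log(1.01)
print(round(t_double, 1))      # a bit under 70 years
```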

4. The most straightforward and obvious way of estimating climate sensitivity to a doubling of CO2 is by examining the result of the “experiment” which we’ve performed on the Earth’s climate, by raising the atmospheric CO2 level from about 311 ppmv in 1950 (or from 285 ppmv in 1850) to about 412 ppmv now. We simply examine what happened to temperatures when the atmospheric CO2 level was raised by 32% (or 45%), and extrapolate from those observations.

However, there are a few pitfalls with that approach. For one thing, natural global temperature variations due to ENSO can be larger than the “signal” we’re looking for, especially for the earlier part of the record, when CO2 was rising very slowly, so it is important that we choose an analysis interval which avoids those distortions. For another thing, it would be a mistake to assume that all of the warming which the Earth has experienced since pre-industrial conditions was due to anthropogenic CO2, because much of that warming occurred when CO2 levels were still very low, and because we know of other factors which must have contributed to warming, such as rising levels of other GHGs, and probably aerosol/particulate pollution abatement.

I did that exercise, using the period 1960-2014, which encompasses most of the period of rapid CO2 level increases, while avoiding distortions from major ENSO spikes. I calculated TCR two ways:

1. Assuming 100% of the warming trend was anthropogenic (and 75% of that was from CO2). That yielded a TCR of 1.41°C per doubling of CO2.

2. Assuming 57% of the warming was anthropogenic. That yielded a TCR of 0.81 °C per doubling of CO2.

Details here:
https://sealevel.info/sensitivity.html
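The shape of that exercise can be sketched in a few lines. The CO2 endpoints and warming trend below are illustrative round numbers I've assumed for demonstration, not the exact inputs used at sealevel.info/sensitivity.html, so the results land close to (but not exactly on) the figures quoted above:

```python
import math

# ILLUSTRATIVE inputs (assumptions, not the original analysis values)
co2_1960, co2_2014 = 317.0, 398.0   # ppmv, approx. Mauna Loa endpoints
dT_trend = 0.62                     # °C warming over 1960-2014 (illustrative)
co2_share = 0.75                    # assumed share of anthropogenic effect from CO2

# How much of a CO2 doubling occurred over the interval
doublings = math.log2(co2_2014 / co2_1960)

# Case 1: 100% of the warming trend anthropogenic
tcr_1 = 1.00 * co2_share * dT_trend / doublings
# Case 2: 57% of the warming anthropogenic
tcr_2 = 0.57 * co2_share * dT_trend / doublings

print(f"TCR: {tcr_1:.2f} and {tcr_2:.2f} °C/doubling")  # roughly 1.4 and 0.8
```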

• (Sorry for the botched </a> tag.)

• richard verney says:

Does it not follow from the T^4 relationship that it is far more difficult to warm the warmer equatorial regions of the planet than it is to warm the cool polar regions?

Given that relationship, is it not inevitable that we will see the cool polar regions warming faster and by larger increments than the warm equatorial regions of the planet?

• More difficult, yes, but I wouldn’t call it “far more” difficult.

At Prudhoe Bay, Alaska, the average temperature is -11°C.

The question is, how much more radiative emission do you get from one degree of warming at Guayaquil, compared to one degree of warming at Prudhoe Bay?

The S-B (T^4) rule applies to absolute temperatures. Absolute (Kelvin) temperature is Celsius temperature + 273.15.

Guayaquil: 26°C = 299.15K

Prudhoe Bay: -11°C = 262.15K

So, let’s compare the effect on radiative emissions of one degree of warming in each of those two places:
(300.15^4 – 299.15^4) / (263.15^4 – 262.15^4) = 1.485

So it’s 48.5% more difficult to raise the temperature by a given increment in Ecuador than in northern Alaska.

Or, rather, that’s what the difference would be, if radiation were the only way heat is lost by the surface. But, of course, it’s not. It’s not even the most important way. The water cycle is much more important, in most places.

Guayaquil and Prudhoe Bay are both harbors, so they’re at the same altitude, and both are next to the ocean. But I’ll bet raising the air temperature from 26°C to 27°C would affect the evaporation rate at Guayaquil a lot more than raising the air temperature from -11°C to -10°C would affect the evaporation and sublimation rates at Prudhoe Bay.
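The ratio above can be checked with a couple of lines (the Stefan-Boltzmann constant cancels in the ratio, so only the T^4 differences matter):

```python
def extra_emission(t_celsius):
    """T^4 increase (in K^4) from warming a surface at t_celsius by one degree."""
    t = t_celsius + 273.15          # convert to absolute (Kelvin) temperature
    return (t + 1.0) ** 4 - t ** 4

# Guayaquil (26 °C) vs. Prudhoe Bay (-11 °C)
ratio = extra_emission(26.0) / extra_emission(-11.0)
print(round(ratio, 3))              # ≈ 1.485, i.e. 48.5% more
```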

• s/their at/they’re at/

Sheesh, I can’t believe I wrote THAT. Must be bedtime.

• Willis Eschenbach says:

Fixed. I hate typos and the like.

w.

26. Joe Born says:

I’m having trouble following the logic.

You say, “The ICR is what happens immediately when radiation is increased. Bear in mind that the effect of radiation is immediate—as soon as the radiation is absorbed, the temperature of whatever absorbed the radiation goes up.”

What do you mean by “immediately”? It seems to me that when you’re discussing absorption, as opposed to emission, some time duration must be involved for a given power to raise temperature by a given amount. Yet your only quantities are changes in power density (W/m²) and temperature; no time duration, at least not explicitly.

(In contrast, I suppose, emitted radiation changes immediately in response to a black body’s temperature.)

27. Michael Carter says:

I cannot think of anything more tedious and futile than seeking this holy grail through mathematical calculation.

Please establish for me a constant for wing-tip velocity of a butterfly’s wing vs lift, in the natural environment.

How can so many intelligent people be blinded by their computer screens?

28. Greg says:

Willis:

Now, a while back Ramanathan proposed a way to actually measure the strength of the atmospheric greenhouse effect. He pointed out that if you take the upwelling surface longwave radiation, and you subtract upwelling longwave radiation measured at the top of the atmosphere (TOA), the difference between the two is the amount of upwelling surface longwave that is being absorbed by the greenhouse gases (GHGs) in the atmosphere. It is this net absorbed radiation which is then radiated back down towards the planetary surface. Figure 3 shows the average strength of the atmospheric greenhouse effect.

Thought provoking as ever, Willis.

I know where TOA measurements come from but it is unclear from your presentation what you are using for the “surface” upwards LWIR. Could you explain where that comes from?

The one thing that strikes me about fig 5 is the step jump from about 0.2 K/2xCO2 to about 1 K/2xCO2 which happens around zero deg C. What is the difference either side of that? The presence/absence of evaporation.
As you started pointing out years ago in relation to the tropics, it is evaporation/advection/precipitation which controls tropical climate, not CO2.

The ramping as it gets much colder than zero points to the other key factor: absolute humidity. Air that cold has very little water vapour content, and there is no open water.

As most here already know it is H2O which controls climate.

thx.

• philf says:

—————————-
Pangburn shows that temperature change over the last 170 years is due to three things: 1) cycling of the ocean temperature, 2) sun variations, and 3) moisture in the air. There is no significant dependence of temperature on CO2.
https://globalclimatedrivers2.blogspot.com/
—————————–

29. Usurbrain says:

Princeton researchers have uncovered new rules governing how objects absorb and emit light, fine-tuning scientists’ control over light and boosting research into next-generation solar and optical devices.
The discovery solves a longstanding problem of scale, where light’s behavior when interacting with tiny objects violates well-established physical constraints observed at larger scales.

Being a nuclear engineer (retired) I have always questioned why blackbody rules were applied to individual atoms floating in the atmosphere separated from each other, and these distances increase with height in the atmosphere.

There is also a great article on Space dot com https://www.space.com/ionosphere-science-roundup.html

It says that the ionosphere, at an altitude of roughly 50 to 400 miles (80 to 645 kilometers), is full of strange physical phenomena that scientists are only beginning to understand. In the ionosphere, charged particles released by the sun interact with gases at the top of Earth’s atmosphere in intriguing ways.
“For one, it showed scientists that when a solar storm hits Earth, atomic oxygen becomes more common at low latitudes and rarer at high latitudes. At the same time, molecular nitrogen prevalence does the opposite, decreasing at low latitudes and increasing at high latitudes.”
This article tells me that there are still BIG questions about what happens to our atmosphere and just how energy moves in the atmosphere.

30. Jit says:

@ Willis is Fig. 3 just showing that downwelling radiation is a constant fraction of the upwelling?

• Willis Eschenbach says:

Jit December 26, 2019 at 1:50 pm

@ Willis is Fig. 3 just showing that downwelling radiation is a constant fraction of the upwelling?

No. It’s showing the actual downwelling radiation, calculated as upwelling surface LW minus upwelling LW at the top of atmosphere, as described in the post.

Regards,

w.

• Jit says:

• Willis Eschenbach says:

No, it varies, both in space and time.

w.

31. Wim Röst says:

Fig. 3 and 4: The colder the location (Arctic and especially Antarctic), the higher the warming effect. The colder the location, the less water vapor (and cloud) is present in the air. At very low concentrations, each added water vapor molecule gives a very high temperature response. High, dry, and super-cold Antarctica, nearly void of water vapor, gives the highest response: each added water vapor molecule counts.

Nobody lives in the parts of Antarctica where we find the highest response. Temperatures there still remain very, very low. In the Arctic we don’t find many people either.

Fig. 5: Interesting, the small blue flag representing 71% of the surface of the Earth, the oceans. Nearly nothing happens. Long wave radiation cannot penetrate water surfaces deeply, just 0.01 mm or so. If the energy remains in the uppermost layer of the ocean (conduction by water is low), the extra energy probably leads to extra evaporation and some loss of sensible heat, and both cool the surface.

32. 1sky1 says:

ECS is nothing more than a pitiful attempt by those who don’t understand real-world thermodynamics to characterize climate-scale planetary temperature variations on the cheap.

• ∆T = λ ∆F

where T is surface temperature, F is downwelling radiative forcing, λ is climate sensitivity, and ∆ means “change in”.

… worse than astrology.

• Clyde Spencer says:

I think that trying to determine a single number (lambda) to predict future global warming resulting from a doubling of CO2 is an over-simplification of the problem. The approach is probably only valid for deserts (both cold and hot). That is, the inhibition of the loss of energy to space is a combination of the effects of water vapor and CO2 (actually of all the so-called Greenhouse Gases).

It is generally acknowledged that the Arctic is warming at a rate two or three times the average for the globe. I think that the most probable explanation is that there is negligible water vapor over the Arctic, so the only thing affecting absorption is increasing CO2. I suspect that one would find that the Sahara Desert is similarly warming at night more rapidly than the rest of the Earth.

However, I don’t know if anyone is looking because a warming Sahara won’t melt any ice or cause sea level to rise. My conjecture is that all climate zones will have a unique lambda, which may change over time. However, most of the world has substantial water vapor, which varies with the season and temperatures, but isn’t increasing as rapidly as CO2. Thus, focusing on CO2 in the definition of lambda implicitly uses CO2 as a proxy for all Greenhouse Gases, including water vapor.

Thus, the proper approach to the problem would be to determine the unique Greenhouse Gas lambda for all Köppen Climate Zones, and do an area-weighting to determine a global average. It would be instructive to see what the predicted warming would then be for just the climate zones where most people live. As it is, it is naively assumed that whatever the calculated CO2 lambda is, it correctly predicts what people who live in humid areas can expect. Probably, the warming in the tropics and mid-latitudes will be considerably less than is predicted by the consensus CO2 lambda.
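The area-weighting bookkeeping would look something like this. Both the zone lambdas and the area fractions below are hypothetical placeholder values I've made up to illustrate the mechanics; nobody has actually determined per-zone lambdas here:

```python
# HYPOTHETICAL per-zone lambdas and area fractions, for illustration only.
zones = {
    # zone name: (area fraction of globe, lambda in °C per W/m²)
    "tropical":    (0.36, 0.2),
    "arid":        (0.30, 0.5),
    "temperate":   (0.14, 0.4),
    "continental": (0.12, 0.6),
    "polar":       (0.08, 1.0),
}

# The fractions must cover the whole globe
total_area = sum(a for a, _ in zones.values())
assert abs(total_area - 1.0) < 1e-6

# Area-weighted global average lambda
global_lambda = sum(a * lam for a, lam in zones.values())
print(round(global_lambda, 2))
```

The same sum restricted to the zones where most people live would give the “what people can expect” figure suggested above.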

I’m reminded that after the Three Mile Island release of radioactivity, the government assured everyone that the average exposure to inhabitants in a circular area around the reactor, of a given diameter, was well below the threshold for concern. However, they failed to acknowledge that the radioactive releases were concentrated in plumes downwind of the reactor. Averages can be quite misleading!

• 1sky1 says:

[T]he proper approach to the problem would be to determine the unique Greenhouse Gas lambda for all Köppen Climate Zones

The problem lies not only in the variability of lambda, as simplistically defined here, but also in the failure to specify any physically meaningful spatio-temporal scale for “climatic” temperature variations. Furthermore, the driver of surface temperatures on any scale is by no means simply “downwelling radiative forcing,” but involves the non-radiative mechanisms of evaporation and convection.

In the real world, heat transfer by moist convection from the surface to atmosphere exceeds all other mechanisms combined. That’s why attempts to find the “true” value of lambda invariably produce values so widely scattered as to render any claim of having established a critical physical ratio totally risible.

33. William Haas says:

The IPCC has been systematically rejecting all rationale supporting the idea that the climate sensitivity of CO2 is actually significantly less than the range of guesses that they have been publishing. I believe that the reason for this is politics. If the IPCC came out and declared that the climate sensitivity is significantly less than previously thought, they would expect their funding to be reduced significantly.

Initial radiometric calculations came up with a value for the climate sensitivity of CO2, not including feedbacks, of 1.2 degrees C. Work by a group of scientists looking at temperature data from roughly 1850 to a few years ago showed that if all the warming were caused by CO2, the climate sensitivity of CO2 could not be more than 1.2 degrees C. A researcher from Japan pointed out that the original calculations neglected to take into consideration that a doubling of CO2 will cause a slight decrease in the dry lapse rate in the troposphere, which would decrease the climate sensitivity of CO2 by more than a factor of 20, yielding a climate sensitivity of CO2 before feedbacks of less than 0.06 degrees C, which is trivial.

Then there is the issue of H2O feedback. Proponents of the AGW conjecture tend to ignore the fact that, besides being the primary greenhouse gas (if you believe in such things), H2O is a primary coolant in the Earth’s atmosphere. The overall cooling effects of H2O are evidenced by the fact that the wet lapse rate is significantly less than the dry lapse rate. So, instead of providing positive feedback, H2O provides negative feedback and retards any warming that CO2 might provide. Negative feedback systems are inherently stable, as the Earth’s climate has been for at least the past 500 million years, long enough for life to evolve. We are here. So there is rationale that the actual climate sensitivity of CO2 may be somewhere between 0.06 and 0.0 degrees C.

34. Roy Clark says:

The basic issue is that there is no such thing as a climate sensitivity to CO2 or any other so called ‘greenhouse gas’. Radiative forcing can politely be described as climate theology – how does a change in the atmospheric concentration of CO2 change the number of angels that may dance on the head of a climate pin? The climate equilibrium assumption was used by Arrhenius in his 1896 estimate of global warming. In this paper he traced the concept back to Pouillet in 1838. Speculation that changes in atmospheric CO2 concentration could somehow cause an Ice Age started with John Tyndall in 1863. To get to the bottom of the radiative forcing nonsense it is necessary to go back to Fourier in 1827 and start over with the real physics of the surface energy transfer.

The essential part that almost everyone seems to have missed in this paper is the time delay or phase shift between the solar flux and the surface temperature response. The daily phase shift in MSAT can reach 2 hours and the seasonal phase shift can reach 6 to 8 weeks. This is clear evidence for non-equilibrium thermal storage. The same kind of non-equilibrium phase shift on different time and energy scales occurs with electrical energy storage in capacitors and inductors in AC circuits – low pass filters, tank circuits etc.

The equilibrium average climate assumption was used by Manabe and Wetherald (M&W) in their 1967 climate modeling paper. They abandoned physical reality and created global warming as a mathematical artifact of their input modeling assumptions. The rest of the climate modelers followed like lemmings jumping off a cliff. In the 1979 Charney report, no one looked at the underlying assumptions. The radiative transfer results were reasonable, for the total long wave IR (LWIR) flux at the top and bottom of the atmosphere, and the mathematical derivation of the flux balance equations was correct. The increase in surface temperature was the a priori expected result. Radiative forcing and the invalid equilibrium flux balance equations were discussed by Ramanathan and Coakley in 1978. The prescribed mathematical ritual of radiative forcing in climate models was described by Hansen et al in 1981. They also introduced a fraudulent ‘slab’ ocean model and did a bait and switch from surface to weather station temperatures. The LWIR flux interacts with the surface, not the weather station thermometer at eye level above the ground. Radiative forcing is still an integral part of IPCC climate models [IPCC, 2013]. Physical reality has been abandoned in favor of mathematical simplicity. Among other things, M&W threw out the Second Law of Thermodynamics along with at least four other laws of physics. The underlying requirement for climate stability is that the absorbed solar heat be dissipated by the surface. This requires a time-dependent thermal and/or humidity gradient at the surface.

The starting point for any realistic climate system is that the upward LWIR flux from the top of the atmosphere does not define an equilibrium average temperature of 255 K. Instead it is the cumulative cooling flux emitted from multiple levels down through the atmosphere. The upward emission from each level is then attenuated by the LWIR absorption/emission along the upward path to space [Feldman et al, 2008]. Another fundamental error in the radiative forcing argument is the failure to consider the molecular line width effects. Part of this was due to the band model simplifications that are still used in the climate models to speed up the calculations. The IR flux through the atmosphere consists of absorption and emission from many thousands of overlapping molecular lines, mainly from CO2 and water vapor [Rothman et al, 2005]. As the temperature and pressure decrease with altitude, these lines become narrower and transmission ‘gaps’ open up between the lines. This produces a gradual transition from absorption/emission to a free photon flux to space.

The radiative forcing argument has also obscured the fact that the heat lost to space is replaced by convection, not LWIR radiation. The troposphere is an open cycle heat engine that transports heat from the surface by moist convection. It is stored in the troposphere as gravitational potential energy. As a high altitude air parcel cools by LWIR emission, it contracts and sinks back down through the troposphere. The upward LWIR flux to space is decoupled from the surface by the linewidth effects. The downward LWR flux from the upper troposphere cannot reach the surface and cause any kind of change in the surface temperature. Almost all of the downward LWIR flux reaching the surface originated from within the first 2 km layer of the troposphere and about half of this comes from the first 100 m layer.

Near the surface, the lines in the main bands for CO2 and water vapor are sufficiently broadened that they merge into a continuum. There is an atmospheric transmission window in the 8 to 12 micron spectral region that allows part of the surface LWIR flux to escape directly to space. The magnitude of this transmitted cooling flux varies with cloud cover and humidity. The downward LWIR flux to the surface from the broad molecular emission bands provides an LWIR exchange energy that ‘blocks’ the upward LWIR flux from the surface. Photons are exchanged without any net heat transfer. In order for the surface to cool, it must heat up until the excess absorbed solar heat is removed by moist convection. This is the real cause of the so-called ‘greenhouse effect’. It requires the application of the Second Law of Thermodynamics to the surface exchange energy. There is no equilibrium average climate, so there can be no average ‘greenhouse effect temperature’ of 33 K. Instead, the greenhouse effect is just the downward LWIR flux from the lower troposphere to the surface. It can be defined as the downward flux or as an ‘opacity factor’ [Rorsch, 2019]. This is the ratio of the downward flux to the total blackbody surface emission.

The surface temperature has to be calculated at the surface using the surface flux balance. The change in local surface temperature is determined by the change in heat content or enthalpy of the local surface thermal reservoir divided by the specific heat [Clark, 2013a, b]. The LWIR flux cannot be separated from the other flux terms and analyzed independently. The land and ocean surface behave differently and have to be considered separately.
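The reservoir calculation described above (change in heat content divided by heat capacity) can be sketched as follows. The soil density, specific heat, and flux imbalance below are illustrative assumptions, not values given by Clark [2013a, b]; only the 0.5 to 2 m layer depth comes from the text:

```python
# Sketch: local surface temperature change from a change in reservoir heat
# content, Delta_T = Delta_Q / (rho * c_p * d), in the spirit of Clark [2013a, b].
# Material properties and the example flux are assumptions for illustration.
RHO_SOIL = 1600.0  # dry soil density, kg m^-3 (assumed)
CP_SOIL = 800.0    # soil specific heat, J kg^-1 K^-1 (assumed)
DEPTH = 0.5        # thermally active layer, m (text cites 0.5 to 2 m over land)

def delta_t_surface(net_flux_w_m2: float, seconds: float) -> float:
    """Temperature change (K) of the surface reservoir from a net flux imbalance."""
    heat_capacity = RHO_SOIL * CP_SOIL * DEPTH  # J m^-2 K^-1 per unit area
    delta_q = net_flux_w_m2 * seconds           # J m^-2 accumulated heat
    return delta_q / heat_capacity

# e.g. a 100 W m^-2 daytime imbalance sustained for 6 hours:
print(round(delta_t_surface(100.0, 6 * 3600.0), 2))  # 3.38 K
```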

Over land, the various flux terms interact with a thin surface layer. During the day, the surface heating produces a thermal gradient both with the cooler air layer above and the subsurface layers below. The surface-air gradient drives the convection or sensible heat flux. The subsurface thermal gradient conducts heat into the first 0.5 to 2 meter layer of the ground. Later in the day this thermal gradient reverses and the stored heat is released back into the troposphere. The thermal gradients are reduced by evaporation if the land surface is moist. An important consideration in setting the land surface temperature is the night time convection transition temperature at which the surface and surface air temperatures equalize. Convection then essentially stops and the surface continues to cool more slowly by net LWIR emission. This convection transition temperature is reset each day by the local weather conditions.

The ocean surface is almost transparent to the solar flux. Approximately 90% of the solar flux is absorbed within the first 10 m ocean layer. The surface-air temperature gradient is quite small, usually less than 2 K. The excess absorbed solar heat is removed through a combination of net LWIR emission and wind driven evaporation. The penetration depth of the LWIR flux into the ocean surface is 100 µm or less and the evaporation involves the removal of water molecules from a thin surface layer [Hale and Querry, 1973]. These two processes combine to produce cooler water at the surface that sinks and is replaced by warmer water from below. This is a Rayleigh-Benard convection process, not simple diffusion. There are distinct columns of water moving in opposite directions. The upwelling warmer water allows the wind driven ocean evaporation to continue at night. As the cooler water sinks, it carries with it the surface momentum or linear motion produced by the wind coupling at the surface. This establishes the subsurface ocean gyre currents. Outside of the tropics there is a seasonal phase shift that may reach 6 to 8 weeks.

This phase shift can only occur with ocean solar heating. The heat capacity of the land thermal reservoir is too small to produce this effect. In many parts of the world, the prevailing weather systems are formed over the ocean. The temperature changes related to the ocean surface are stored by the weather system as the bulk surface air temperature and this information can be transported over very long distances. Such ocean related phase shifts can be found in the daily climate data for weather stations in places like Sioux Falls SD.

Over the oceans, the wind driven evaporation can never exactly balance the solar heating. This produces the ocean oscillations such as the ENSO, PDO and AMO. These surface temperature changes are incorporated into the various weather systems and can be seen in the long term climate data, particularly the minimum MSAT. The whole global warming scam is based on nothing more than the last AMO warming cycle coupled into the weather station data [Akasofu, 2010].

A fundamental failure of the radiative forcing argument is the lack of any error analysis. Over the last 200 years, the atmospheric CO2 concentration has increased by a little over 120 ppm. This has produced an increase in the downward LWIR flux at the surface of about 2 W m-2 [Harde, 2017]. Over the oceans this is coupled into the first 100 micron layer of the ocean surface. Here it is fully coupled to the wind driven evaporation. Using long term ocean evaporation data from Yu et al, 2008, an approximate estimate of the evaporation rate within the ±30 degree latitude region is 15 Watts per square meter for each change in wind speed of 1 meter per second. This means that the radiative forcing from an increase of 120 ppm in the CO2 concentration amounts to a change in wind speed of about 13 CENTIMETERS per second. This is at least two orders of magnitude below the normal variation in ocean wind speed. Similarly, a reasonable estimate of the bulk convection coefficient for dry land is 20 Watts per square meter per degree C difference between surface and air temperature. Here a 2 W m-2 change in convection requires a change of 0.1 C in the surface air thermal gradient. Once the physics of the time dependent surface energy transfer is restored, global warming and radiative forcing disappear into the realm of computerized climate fiction.
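The back-of-envelope numbers in the paragraph above can be checked directly. This sketch only redoes the stated arithmetic (2 W m-2 forcing against the Yu et al. evaporation coefficient and the assumed land convection coefficient); it adds no new data:

```python
# Back-of-envelope check of the error-analysis numbers in the text.
EVAP_PER_WIND = 15.0  # W m^-2 per (m s^-1) wind change (Yu et al., 2008, +/-30 deg lat.)
CO2_FORCING = 2.0     # W m^-2 from ~120 ppm CO2 increase (Harde, 2017)
CONV_COEFF = 20.0     # W m^-2 K^-1 bulk convection coefficient, dry land (text estimate)

# Equivalent wind-speed change that would match the CO2 forcing, in cm/s:
wind_equivalent_cm_s = CO2_FORCING / EVAP_PER_WIND * 100.0
# Equivalent change in the surface-air thermal gradient over land, in K:
gradient_equivalent_K = CO2_FORCING / CONV_COEFF

print(round(wind_equivalent_cm_s, 1))  # 13.3, i.e. "about 13 centimeters per second"
print(gradient_equivalent_K)           # 0.1
```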

The topic of radiative forcing was recently reviewed in detail by Ramaswamy et al [2019] as part of the American Meteorological Society monographs series. This review provides a good starting point for a scientific and criminal investigation into the climate modeling fraud. To begin, the scientific community should demand that this particular monograph be retracted and all further work on equilibrium climate modeling be stopped. Any climate model that uses radiative forcing is by definition invalid. There is no need to try to validate the computer code of any equilibrium climate model. The use of radiative forcing alone is sufficient to render the results totally useless. These modelers are not scientists, they are mathematicians playing with a set of physically meaningless equations. They left physical reality behind when they made the climate equilibrium assumption. They are now members of a rather unpleasant quasi-religious cult. They believe that the divine spaghetti plots created by the computer climate models come from a higher authority than the Laws of Physics.

Any realistic climate model must correctly predict the changes in ocean temperature caused by the ocean oscillations. These must then be used to predict the changes in the weather station data. This must include the minimum and maximum surface air temperatures, surface temperatures and the phase shifts. There are no forcings, feedbacks or climate sensitivities, just time dependent rates of heating and cooling. It is time to welcome the Second Law of Thermodynamics back to the climate models. It has always been part of the Earth’s climate system.

References

Akasofu, S-I, Natural Science 2(11) 1211-1224 (2010), ‘On the recovery from the Little Ice Age’
http://www.scirp.org/Journal/PaperInformation.aspx?paperID=3217&JournalID=69
Arrhenius, S., Philos. Trans. 41 237-276 (1896), ‘On the influence of carbonic acid in the air upon the temperature of the ground’
Charney, J. G. et al, Carbon dioxide and climate: A scientific assessment report of an ad hoc study group on carbon dioxide and climate, Woods Hole, MA July 23-27 (1979)
Clark, R., 2013a, Energy and Environment 24(3, 4) 319-340 (2013), ‘A dynamic coupled thermal reservoir approach to atmospheric energy transfer Part I: Concepts’
http://venturaphotonics.com/files/CoupledThermalReservoir_Part_I_E_EDraft.pdf
Clark, R., 2013b, Energy and Environment 24(3, 4) 341-359 (2013) ‘A dynamic coupled thermal reservoir approach to atmospheric energy transfer Part II: Applications’
http://venturaphotonics.com/files/CoupledThermalReservoir_Part_II__E_EDraft.pdf
Feldman D.R., Liou K.N., Shia R.L. and Yung Y.L., J. Geophys Res. 113 D1118 pp1-14 (2008), ‘On the information content of the thermal IR cooling rate profile from satellite instrument measurements’
Fourier, B. J. B.; Mem. R. Sci. Inst., (7) 527-604 (1827), ‘Memoire sur les temperatures du globe terrestre et des espaces planetaires’
Hale, G. M. and Querry, M. R., Applied Optics, 12(3) 555-563 (1973), ‘Optical constants of water in the 200 nm to 200 µm region’
Hansen, J.; D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind and G. Russell Science 213 957-966 (1981), ‘Climate impact of increasing carbon dioxide’
Harde, H., Int. J. Atmos. Sci. 2017, Article ID 9251034 (2017), ‘Radiation Transfer Calculations and Assessment of Global Warming by CO2’, https://doi.org/10.1155/2017/9251034
IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change Chapter 8 ‘Radiative Forcing’ [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung et al. (eds.)]
Manabe, S. and R. T. Wetherald, J. Atmos. Sci., 24 241-249 (1967), ‘Thermal equilibrium of the atmosphere with a given distribution of relative humidity’
http://www.gfdl.noaa.gov/bibliography/related_files/sm6701.pdf
Ramanathan, V. and J. A. Coakley, Rev. Geophysics and Space Physics 16(4) 465-489 (1978), ‘Climate modeling through radiative convective models’
Ramaswamy, V.; W. Collins, J. Haywood, J. Lean, N. Mahowald, G. Myhre, V. Naik, K. P. Shine, B. Soden, G. Stenchikov and T. Storelvmo, Meteorological Monographs Volume 59 Chapter 14 (2019), DOI: 10.1175/AMSMONOGRAPHS-D-19-0001.1, ‘Radiative Forcing of Climate: The Historical Evolution of the Radiative Forcing Concept, the Forcing Agents and their Quantification, and Applications’
https://journals.ametsoc.org/doi/pdf/10.1175/AMSMONOGRAPHS-D-19-0001.1
Rorsch, A. (2019), ‘In search of autonomous regulatory processes in the global atmosphere’, https://www.arthurrorsch.com/
Rothman, L. S. et al, (30 authors), J. Quant. Spectrosc. Rad. Trans. 96 139-204 (2005), ‘The HITRAN 2004 molecular spectroscopic database’
Tyndall, J., Proc. Roy Inst. Jan 23 pp 200-206 (1863), ‘On radiation through the Earth’s atmosphere’
Yu, L., Jin, X. and Weller R. A., OAFlux Project Technical Report (OA-2008-01) Jan 2008, ‘Multidecade Global Flux Datasets from the Objectively Analyzed Air-sea Fluxes (OAFlux) Project: Latent and Sensible Heat Fluxes, Ocean Evaporation, and Related Surface Meteorological Variables’, http://oaflux.whoi.edu/publications.html

• Edim says:

Yes!

• E. Swanson says:

R. Clark, Nice rant. But, I must have missed your explanation for the positive lapse rate in the Stratosphere. Above the Tropopause (mol), the energy leaving the higher atmosphere to deep space must be the result of LWIR via the non-condensing greenhouse gases. Please describe how these processes function.

35. Bobl says:

I have been saying this (to you and others) for years: lambda is a function of temperature. This is obvious from the non-linear response of oceans to warming in the tropics, where a saturation effect and hysteresis (storm genesis and destruction occur at different temperatures) appear in tropical depressions. If the response is non-linear then there has to be a non-linear function in there somewhere.

Also, the equation assumes that the energy supply F is unlimited, yet usable energy is strictly limited to what is emitted by the earth within the absorption bands of the GHGs. This implies to me that another integral term describing energy availability within these bands is missing.

• tty says:

Since CO2 has increased about 120 ppm (= 50 C) since 1800 this implies that the entire world ocean was ice covered in the nineteenth century.

36. And this means that just as the areas below freezing are showing clear and strong positive [CO2] feedback, the areas above freezing are showing clear and strong [CO2] negative feedback.

The magic molecule is now a contortionist?

Here’s a better idea: there is no TCS or ECS to CO2 – the true sensitivity is of CO2 to T.

37. On second thought there is no ECS, because there is no observed equilibrium, it’s always transient (reference to my previous comment just posted.)

It is a shame and comical that the whole world is hung up on CO2.

38. Thank you very much Willis, for this in-depth analysis of this vexing concept and for inspiring those very interesting comments by CO2ISNOEVIL.
a great discussion
Thanks to all participants.

• oops
i was in such a hurry to submit my comment that i forgot to include the link i wanted to share
here it is ‘

it kind of goes along with the Dan Pangburn comment below where he writes about the water vapor issue

39. The concept of climate sensitivity ignores that CO2 is not the only ghg that has been increasing. According to NASA/RSS measurements, water vapor has been increasing about 1.5% per decade. Calculations using Hitran data show that each WV molecule is about 1.37 times more effective at absorbing the LWIR energy than a CO2 molecule. WV molecules have been increasing faster than CO2 molecules with the result that WV increase has been about 10 times more effective at warming than CO2 increase. (Forehead slap realization reduced it from 39 to 10)

A closer look at WV increase shows that the increasing trend for the last 3 decades is steeper than possible from feedback.
https://watervaporandwarming.blogspot.com
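The "about 10 times" claim in the comment above can be roughly reconstructed. The global-average water vapor mixing ratio and the CO2 growth rate below are my assumptions for illustration, not values stated in the comment; only the 1.5% per decade trend and the 1.37 per-molecule factor come from it:

```python
# Rough reconstruction of the WV-vs-CO2 comparison in the comment above.
WV_PPMV = 10_000.0     # assumed global-average water vapor, ppmv (my assumption)
WV_TREND = 0.015       # 1.5% per decade (NASA/RSS figure cited in the comment)
CO2_TREND_PPMV = 22.0  # assumed CO2 increase per decade, ppmv (my assumption)
PER_MOLECULE = 1.37    # WV vs CO2 per-molecule absorption factor (per the comment)

wv_increase_ppmv = WV_PPMV * WV_TREND  # ppmv of added WV per decade
ratio = wv_increase_ppmv / CO2_TREND_PPMV * PER_MOLECULE

print(round(ratio, 1))  # 9.3, i.e. roughly the "10 times" in the comment
```

Under these assumed inputs the arithmetic does land near the commenter's factor of ten, but the result is only as good as the assumed mixing ratio and trends.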

40. Matthew R Marler says:

Willis, thank you again for a clear exposition of a well-focused, insightful analysis.

41. Joel O'Bryan says:

#comment-2879229 stuck in moderation for 4 hours now. sigh.

42. Toto says:

Willis, are you familiar with the Connolly and Connolly research?
https://andymaypetrophysicist.com/2017/08/21/review-and-summary/

the temperature profile of the atmosphere to the stratosphere can be completely explained using the gas laws, humidity and phase changes. The atmospheric temperature profile does not appear to be influenced, to any measurable extent, by infrared active gases like carbon dioxide.

Connolly and Connolly have shown, using the weather balloon data, that the atmosphere from the surface to the lower stratosphere, is in thermodynamic equilibrium. They detected no influence on the temperature profile from infrared active (IR-active) gases, including carbon dioxide. This is at odds with current climate models that assume that the atmosphere is only in local thermodynamic equilibrium as discussed by Pierrehumbert 2011 and others.

• Willis Eschenbach says:

Toto, I’ve tried to follow their logic with no success. I’ve never been able to figure out how what they find with the balloons differs from what they claim it would look like if CO2 is involved.

They say, as you quote:

Connolly and Connolly have shown, using the weather balloon data, that the atmosphere from the surface to the lower stratosphere, is in thermodynamic equilibrium. They detected no influence on the temperature profile from infrared active (IR-active) gases, including carbon dioxide.

But I have no clue what “influence on the temperature profile” they expect. It seems they think that if CO2 affects the temperature, the atmosphere wouldn’t be in thermodynamic equilibrium … why not?

I also don’t understand what conclusion they are drawing. If it is that CO2 doesn’t absorb and emit LW, that’s clearly not true. But there’s no abstract, and no “Plain Language Summary” as we’d see in a normal journal paper, and I can’t find where they’ve laid that out.

If you have any clue where I can find the answers to those questions, I’d be interested. The paper you refer to doesn’t do it for me, at least.

Thanks,

w.

• Mike says:

W.
I have trouble understanding them as well because I’m not qualified to. But from what I can make out, they are saying that due to the ideal gas law, the atmosphere cannot hold heat in and of itself, so the extra modern heat we observe can only be due to an ongoing external source (be it from above or below). They do not say that CO2 does not absorb or emit IR, only that this cannot be “held” or “stored” and cannot lead to a permanent increase in temps as it (CO2) increases. In other words – no greenhouse effect (as it is currently understood) regardless of the composition of the atmosphere.

• Toto says:

Probably the best starting point is a recent video of a talk and the slides from that talk:
https://youtu.be/XfRBr7PEawY

They have several levels: direct observations from weather balloons, and their interpretation regarding lapse rates, then on to new ideas about how heat energy gets transferred, and a proposed alternative to ozone heating. New ideas are of course controversial, but their weather balloon data is very interesting.

The talk goes into details about the gas laws, Boyle, Charles, Avogadro, and Ideal.
Then they talk about temperature profiles and lapse rates and why the climate models do not do these properly. And conclude that there is no need for CO2 heating to explain anything.

Their conclusions (edited):
1) The neglect of through-air mechanical energy has led to the hypothesis that the atmosphere is only in local thermodynamic equilibrium, i.e. conduction, convection and radiation cannot transmit energy fast enough to maintain thermodynamic equilibrium with altitude. This was a mistake.
2) If the atmosphere can transmit energy quickly enough to restore thermodynamic equilibrium (our results say that it can), then […] the rate of absorption of radiation by IR-active gases is equal to their rate of emission, i.e. IR-active gases (so-called greenhouse gases) do not trap or store energy for systems in thermodynamic equilibrium.
3) However, greenhouse gases do absorb and emit radiation and can also absorb and lose energy due to collisions with other gases. But […] where a thermal gradient exists, […] the net effect of greenhouse gases is to increase the flow of IR radiation from hot to cold and not the other way round.
4) […] increasing the concentrations of the so-called greenhouse gases does not cause global warming.

So, I am presenting this without comment, leaving it up to readers to evaluate it themselves.

43. John Johnston says:

Heh heh. Who needs equilibrium, apart from climatologists?

“ain’t looking for the kind of man, baby,
Who can’t stand a little shaky ground” – Bonnie Raitt gets it…

44. DocSiders says:

It’s a tricky eyeball averaging process.

The raw data could give us the “correct” value for misrepresenting the data…for acquiring an ECS. But lacking that, a careful “eyeballing” can establish an upper limit…to the (probably incorrectly) calculated (estimated) ECS…and an approx upper limit to the uncertainty.

For starters, that dense patch of blue represents ~70% of the sample set and that blue blob’s average “grid cell ECS” is all well below 0.5 at around 0.25. The vast majority of the remaining 30% of the total set (the red dots) have “grid cell ECS’s” below 1.25 with far more below 0.5 than above 1.0.

So our probable misuse of the data from the graph…while doing a moderately appropriate weighted averaging using ~equal area grid cells…produces a global average ECS of at most ~0.5 +/- 0.25 degrees C per CO2 doubling.

And that is likely closer to reality and less uncertain than “Charney and Onwards” 1.5 to 4.5.

45. angech says:

“This “climate sensitivity”, often represented by the Greek letter lambda (λ), is claimed to be a constant that relates changes in downwelling radiation (called “forcing”) to changes in global surface temperature. The relationship is claimed to be:
Change in temperature is equal to climate sensitivity times the change in downwelling radiation.”

“The ECS measures how much the temperature changes when the top-of-atmosphere forcing changes, in about a thousand years after all the changes have equilibrated. The ECS is measured in degrees C per doubling of CO2 (°C / 2xCO2).”

An interesting set of views encompassing the whole spectrum of Climate Sensitivity.
A lot of statements seem to prove the different views, which means that some of them must be right and some wrong.

“There has been no advance, no increase in accuracy, no reduced uncertainty, in ECS estimates over the forty years since Charney in 1979” … “consensus on the ‘likely’ range for climate sensitivity of 1.5 °C to 4.5 °C”

I think Willis wins with the last comment. I had thought Lewis and Curry had helped push it down, but there are an awful lot of new papers trying to prove it is high. Whatever the reason, the range is ridiculous for science.
Implicit in such a wide range is that no-one agrees on a uniform description of downwelling radiation [forcing].

I agree with Javier in part
” While λ does not have to be a constant, if one takes a fixed point to start the doubling as Knutti does with the preindustrial value defined as the average between 1850 and 1900, that particular doubling should have a single value of λ. ”
Of course there is a constant for a specified set of circumstances.
It does have a definite value within a small range of uncertainty.
There are a few questions and a few comments further to make.

• A C Osborn says:

Surely with a logarithmic scale the doubling has to start at 1 ppm, so how can it possibly be a constant?
Each doubling is less than the one before it, so how can anyone say what the starting baseline was?

• tty says:

At low concentrations the effect of a GHG is linear, not logarithmic since the absorption lines aren’t saturated.

It is this that lies behind all those claims that various trace GHG’s like CH4 are “umpteen times stronger than CO2”.
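For reference, the logarithmic regime tty describes is usually quoted via the simplified CO2 forcing expression from Myhre et al. (1998), ∆F = 5.35 ln(C/C0) W m-2. That formula is not from this comment; it is shown here only to illustrate why, once the bands are saturated, each doubling adds the same increment:

```python
import math

# Simplified logarithmic CO2 forcing expression (Myhre et al., 1998):
# Delta_F = 5.35 * ln(C / C0), in W m^-2. Valid only in the saturated
# (logarithmic) regime, not at the low concentrations tty mentions.
def co2_forcing(c_ppm: float, c0_ppm: float) -> float:
    """Forcing change (W m^-2) for a concentration change from c0_ppm to c_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Every doubling gives the same ~3.7 W m^-2 in this approximation:
print(round(co2_forcing(560, 280), 2))   # 3.71
print(round(co2_forcing(1120, 560), 2))  # 3.71
```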

• Patrick MJD says:

The IR frequencies CH4 absorbs at are transparent to CO2, therefore CH4 absorbs all that IR! The claim alarmists keep prattling on about, that CH4 is x-times more potent a GHG than CO2, is totally bogus.

• Bobl says:

Not true if hysteresis exists within the climate system, as it clearly does, or if lambda depends on the rate of change of T. The climate is not linear or invariant. Lambda can take different values depending on the direction and rate of temperature change, i.e. dT/dt. No-one ever said that lambda has to be directly proportional to T.

46. Concerning “well mixed CO2 at the surface” (Willis Eschenbach and KzTac discussion):
look here for a picture showing our surface CO2 measurements at meteoLCD (Luxembourg, Europe), where the daily mixing ratio varies enormously. Mixing increases with air velocity (so it is higher, and CO2 levels are lower, due to convection after noon). A long time ago I wrote a paper with the late E-G. Beck on how to estimate historic background CO2 levels from non-mixed surface measurements:
https://meteo.lcd.lu/papers/co2_background_klima2009.pdf

• Willis Eschenbach says:

Thanks, Francis. The paper you linked to is quite fascinating, and answers a variety of questions that people have raised here.

Dr. Beck was kind enough to write a comment on one of my earlier posts, clarifying that his measurements were NOT of the CO2 background and could not be directly compared with it.

w.

47. Joe Bastardi says:

Perhaps it’s because the sensitivity is so small compared to the dominant major factors that there is no way to improve what is so minute in the first place (again, in relation to the overall system). A look at saturation mixing ratios (g/kg WV vs. temp) explains this. No such ratios exist for CO2, because its effect on temps cannot be correlated the way we can with WV.

Generally the bigger, the more dominant a force, the more likely the solution lies with it

48. Joseph Zorzin says:

Question from a climate dummy. I keep hearing that the climate sensitivity of CO2 (not sure that’s the right phrase) is logarithmic. Skeptics say that but the supposed “consensus” is that it’s not true.

Otherwise, when I studied physics, I didn’t have much trouble with it. It all seemed so logical. To me, climate discussions are not so clear and logical. I can only conclude the subject is extremely complicated. It seems to this climate dummy that the science is not settled. If it was settled there would be few debates. I should think until it advances- we should be cautious in calling for spending trillions of dollars to fix “the problem”.
Joe

49. Michael Carter says:

Great topic, well covered by comments – thanks

Modelling without a constant becomes a very different animal, right?

M

50. Ktm says:

It would be nice to see the horizontal axis as percentiles rather than linear temperature.

The linear temperature is nice for the S-B curve, but not to visualize what’s happening on the planet as a whole.

51. Steve Reynolds says:

Willis,
Your figure 5 looks like it would be a good test of climate models. Do you know if anyone has tried to replicate the figure 5 data with a model?

• Willis Eschenbach says:

Steve Reynolds December 27, 2019 at 9:10 pm

Willis,
Your figure 5 looks like it would be a good test of climate models. Do you know if anyone has tried to replicate the figure 5 data with a model?

I just did the analysis and created Figure 5 a week ago, so nope. Does sound like an interesting project, though.

w.

52. Is it really just a coincidence, or are more and more papers written by authors whose names describe the paper. In this case, the Knutti Paper sounds very much like Nutty Paper. I always smile whenever I see this kind of coincidence, but lately, I’ve been laughing my head off.

Could these papers really just be bogus with authors names chosen for the effect on readers?

53. Ulric Lyons says:

If the strength of the greenhouse effect is dependent on surface emissions, then the globally uniform +3.7W/m^2 in figure 4 is specious. In the high altitude and dry atmosphere above the Antarctic CO2 acts as a coolant, and Arctic warming is largely driven by the warm AMO phase, which is normal during a centennial solar minimum.

54. Willis, you wrote,

“We’re left with a question: why it is that forty years after the Charney report, there has been no progress in reducing the uncertainty in the estimate of the equilibrium climate sensitivity?

I hold that the reason is that the canonical equation is not an accurate representation of reality … and it’s hard to get the right answer when you’re asking the wrong question. ”

This has been my contention all along. You are describing the pathognomonic sign of a wrong model.
When you have a good model, new and better observations will refine the model parameters ever more precisely. When your model is wrong, no amount of new observations will improve its predictive power. That is what Kepler learned when he was trying to compute the orbital radius of Mars. His error bars were huge until he changed the model to an ellipse. Then the model became extremely precise.

The last 40 years have seen an orders of magnitude increase in both the quantity and quality of observations that bear upon the question of climate sensitivity to radiative forcing. For ECS to essentially not budge means that the model is fundamentally wrong. The inability to be improved by more data is a universal property of wrong models.

• Somehow my quote ran afoul of formatting, it should have started:
Willis, you wrote, “We’re left with a question: why it is that forty years after the Charney report, there has been no progress in reducing the uncertainty in the estimate of the equilibrium climate sensitivity?

I hold that the reason is that the canonical equation is not an accurate representation of reality … and it’s hard to get the right answer when you’re asking the wrong question. ”

• Willis Eschenbach says:

First, Caveman, I fixed the error in your comment. I hate typos and the like.

Second, you’ve seen the essential point at the core of my post. If the model is good, further observations improve it. When it’s not, no amount of observations will improve it.

And when no amount of observations improve it … to me, that means that the model is wrong.

Best regards,

w.

55. Herbert says:

Willis,
In Garth Paltridge’s “ The Climate Caper, Facts and fallacies of global warming”, Chapter 2 is ‘Some Physics’.
In it Dr. Paltridge says at the chapter beginning “ There is a fair amount of reasonable science behind the global warming debate, but in general, and give or take a religion or two, never has quite so much rubbish been espoused by so many on so little evidence”.
Dr. Paltridge gives another single equation like yours in these terms-
“Imagine that the basic rise without feedbacks of global temperature from doubled CO2 is Delta To. Imagine as well that g1, g2, g3, and so on are the actual values of the individual feedback ‘gains’ associated with each of the various atmospheric processes dependent on surface temperature. They may be positive or negative. That is, they may amplify or reduce the basic rise in temperature Delta To associated with the increase of CO2.
The total gain G of the overall system is simply the sum (g1+g2+g3+….)of all the individual gains, and the actual temperature rise Delta T when all the feedbacks are allowed to operate is simply the value of Delta To divided by a factor (1-G) as shown in the equation:
∆T = ∆To / (1 - G).
The mathematically minded of you will recognise that all sorts of trouble would arise if the total gain G were 1.0. The actual temperature response would be infinite.”
Dr. Paltridge then shows a graph of the relationship between Delta T and G.
“The 1.2 degree Celsius rise for no feedbacks, the infinite rise for G equal to 1, and the edges of the cross hatched area which indicate the range of total feedback gains and corresponding temperature rises for the … respectable models for which information on feedback is available (are shown).
The range indicates that the total gain of an individual model falls somewhere roughly between 0.4 and 0.8. The corresponding range of temperature rise lies between 2 and 6 degrees Celsius.”
The g1, g2,g3, etc. are of course,Water Vapour(WV), Cloud(Cl), Reflection (Re), Lapse Rate (LR), CO2 and Greenhouse gases etc.
And then –
“Certainly a large negative cloud feedback (as likely a situation as any other) would drag the total feedback right down and lead to much smaller increases in temperature from increasing CO2 than are currently fashionable.”
“As a final random thought, it is at least theoretically conceivable that the total feedback gain of the climate system is actually very close to 1.0. In such a circumstance one could imagine the climate skating from one extreme of temperature to another and back again. The extremes would be the points at which the total feedback gain became less than 1.0 – as for instance when cloud cover reached zero or 100% and could no longer contribute to the feedback. After all, the climate has always been flipping in and out of Ice Ages!
More to the present point, and were such a situation to exist, it wouldn’t matter very much whether or not man added lots more CO2 to his atmosphere.”
Willis, it is way above my pay grade, but is Dr. Paltridge indirectly acknowledging that, in the situation he outlines, ECS is not a constant, whether your single equation or his applies?
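The Paltridge relation quoted above is easy to sketch numerically. The 1.2 °C no-feedback rise and the G range of 0.4 to 0.8 are the values given in the quoted text; nothing else is assumed:

```python
# Sketch of the feedback relation quoted from Paltridge: Delta_T = Delta_To / (1 - G).
def feedback_response(delta_to: float, total_gain: float) -> float:
    """Temperature rise (C) with total feedback gain G; diverges as G -> 1."""
    if total_gain >= 1.0:
        raise ValueError("G >= 1 gives an unbounded (runaway) response")
    return delta_to / (1.0 - total_gain)

# The quoted no-feedback rise of 1.2 C and model range G ~ 0.4 to 0.8
# reproduce the quoted 2 to 6 C spread:
print(round(feedback_response(1.2, 0.4), 1))  # 2.0
print(round(feedback_response(1.2, 0.8), 1))  # 6.0
```

This also makes Paltridge's warning concrete: the denominator (1 - G) shrinks toward zero as G approaches 1, so the response blows up.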

56. eyesonu says:

This has been another excellent post by Willis and the resulting comment thread.

Figures 4 & 5 were very interesting. I have been through the post and studied the comments for the fourth time now. Breaking down the vague ‘global average’ into zones and temp plots as in fig 5 is a big step forward. You can hide a herd of elephants in a global average!

57. DMacKenzie says:

Willis,
You wrote after Fig 3

“The main forces influencing the variation in downwelling radiation are clouds and water vapor. We know this because the non-condensing greenhouse gases (CO2, methane, etc) are generally well-mixed. Clouds are responsible for about 38 W/m2 of the downwelling LW radiation, CO2 is responsible for on the order of another twenty or thirty W/m2 or so, and the other ~ hundred watts/m2 or so are from water vapor.”

My question is….why is the sum of your numbers about half of the commonly quoted 333 or so watts downwelling, used in Trenberth type Earth Heat Balance graphics ?

• Willis Eschenbach says:

DMacKenzie December 30, 2019 at 2:26 pm

My question is….why is the sum of your numbers about half of the commonly quoted 333 or so watts downwelling, used in Trenberth type Earth Heat Balance graphics ?

Excellent question. That’s just the downwelling IR resulting from absorbed upwelling radiation. There’s an additional ~ 70 W/m2 of energy in the atmosphere that is absorbed solar, plus another ~ 100 W/m2 from sensible and latent heat.

The Trenberth diagram is misleading in that the emission is different going upwards and downwards. Here’s my more accurate version …

w.

• Willis,

To make it more representative, separate the ‘back radiation’ term into the contributions from absorbed solar input and the contribution from absorbed surface output. The former is ‘forcing’ power and the latter is ‘feedback’ power. Similarly, separate out the emissions by the water in clouds from the emissions by atmospheric GHG’s as well as cloud emissions originating from surface absorption and those arising from solar absorption. I think that dividing it into troposphere and stratosphere adds unnecessary complication that doesn’t help with understanding. A better approach would be to divide it into contributions from clear skies and those from cloudy skies, especially since it seems that the amount of clouds is what modulates the energy balance until the required balance is achieved.

The return of latent heat should be carved out; it is mostly returned to the surface by liquid or solid rain that’s warmer than it would be otherwise, and by weather. Latent heat plus its return to the surface has a zero sum effect on the radiant balance since its complete influence has already been manifested by the average surface temperature and its corresponding radiant emissions. The same can be said for all non radiant energy leaving the surface plus its offsetting return to the surface from the atmosphere. Distinguishing the energy transported by photons from the energy transported by matter does this by considering only the energy transported by photons as contributing to the RADIANT balance.

Of all the energy absorbed by the surface, only the 390 W/m^2 required to offset the surface radiation corresponding to the average surface temperature of 288K (the 390 W/m^2 of surface radiation) is relevant to the RADIANT balance. Everything else is just zero sum noise that gets in the way of understanding what the balance actually means. Whether some of the offset of the non radiant energy entering the atmosphere is in the form of photons doesn’t even matter, even though a sufficient amount of energy seems to be returned by non radiant means.
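The 288 K ↔ 390 W/m^2 correspondence invoked here is just the Stefan–Boltzmann law; a one-line check:

```python
# Black-body radiant exitance at the global mean surface temperature of 288 K.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_emission(T_kelvin):
    """Black-body flux emitted at temperature T (kelvin)."""
    return SIGMA * T_kelvin ** 4

print(round(bb_emission(288), 1))  # ~390 W/m^2
```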

The balance is only concerned with averages spanning intervals of time much larger than the nominal length of the hydro cycle and the component of the atmosphere that absorbs most of the solar energy absorbed by the atmosphere is the water in clouds which is tightly coupled to the water in the oceans. Across intervals of time much longer than the nominal length of the hydro cycle, the absorption and emission of solar energy by the water in clouds can be considered a proxy for solar energy absorbed and emitted by the water in the oceans.

• Willis Eschenbach says:

co2isnotevil December 30, 2019 at 5:35 pm

Willis,

To make it more representative, separate the ‘back radiation’ term into the contributions from absorbed solar input and the contribution from absorbed surface output. The former is ‘forcing’ power and the latter is ‘feedback’ power.

The numbers are in the current version to do that.

Similarly, separate out the emissions by the water in clouds from the emissions by atmospheric GHG’s as well as cloud emissions originating from surface absorption and those arising from solar absorption.

Possible. The information is in the CERES data. Not sure what I’d learn, however.

I think that dividing it into troposphere and stratosphere adds unnecessary complication that doesn’t help with understanding.

The problem is, you cannot model the earth with a single layer model. You need two layers that are physically separated. That separation is the tropopause, and the two layers are the troposphere and the stratosphere. Without that … no workee.

A better approach would be to divide it into contributions from clear skies and those from cloudy skies, especially since it seems that the amount of clouds is what modulates the energy balance until the required balance is achieved.

That I can do, because the CERES database contains that information. And it’s an interesting question, the evolution of the cloud cover and how that affects the situation.

Best regards, thanks for the ideas.

w.

• Willis,

What’s preventing modeling the Earth’s atmosphere as a single layer with the equivalent average properties required to reproduce the average behavior at its boundaries with TOA and the surface?

I’ve used many different layering configurations and a single layer equivalent model of the atmosphere works quite well. Conceptually, the model is of a 2-body system consisting of an ideal BB and a single ‘graying’ layer inserted between the BB and its environment. A relatively simple 2×2 transform can represent the bidirectional transfer function of this single layer which I’ve then applied to represent an atmosphere. The idea is that the emissions of the BB are attenuated by the layer before reaching the environment by just enough to offset the incident energy while at the same time, it amplifies the incident energy by the reciprocal of the attenuation before reaching the BB in order to exactly offset its emissions. The loop is closed by considering the source of the energy powering the amplification of power arriving from the environment to be the surface energy that was attenuated on its way out to the environment.

The model is not specific to Earth, or any planet for that matter, and is just an idealized model for a 2-body system where the attenuation factor is the emissivity of an equivalent gray body representing the steady state condition. It just happens that when you ignore non radiant energy like latent heat and are concerned only with the radiant behavior at the boundaries, a single value of equivalent emissivity produces results that are surprisingly representative of the averages measured for slices of Earth’s latitude from pole to pole.

Here’s a bit of C code you can play with that demonstrates the basics of the model.

Note that the default transfer function {{-sqrt(a), 1}, {1, a}} is not that of an ideal gray body, but is a variant with gray behavior that converges to become ‘golden’ in the steady state. What differs from an ideal gray body is the behavior as it deviates away from the steady state. To see how it differs, one of the other transfer function choices available is that of an ideal gray body (see the comments).

• Willis Eschenbach says:

co2isnotevil December 31, 2019 at 4:56 pm

Willis,

What’s preventing modeling the Earth’s atmosphere as a single layer with the equivalent average properties required to reproduce the average behavior at its boundaries with TOA and the surface?

Not enough warming. Each shell in a “greenhouse” can only increase the surface temperature by a certain amount. There’s on the order of 235 W/m2 entering the system after albedo reflection.

A perfect single-shell greenhouse layer can double that at the surface, to 470 W/m2. But the earth’s system is far from perfect. Some sunshine is absorbed by clouds and aerosols. Some radiation escapes directly to space through the “atmospheric window”. Some is lost to sensible and latent heat.

When you add all of those known losses up, there’s not enough energy left over to make everything balance. You have 470 W/m2 to play with. Take away just the 100 W/m2 for sensible and latent heat and the 40 W/m2 for the atmospheric window, and you have 330 W/m2 warming the surface. That puts the surface at only 3°C … no bueno.
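The arithmetic in that paragraph can be sketched numerically; a minimal check using the round numbers quoted above, inverting the Stefan–Boltzmann law for the final temperature:

```python
# Single-shell greenhouse bookkeeping from the comment above (all W/m^2).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

solar_in = 235                 # absorbed solar after albedo reflection
perfect_shell = 2 * solar_in   # 470: upper limit for one ideal shell
sensible_latent = 100          # removed from the surface radiant budget
atm_window = 40                # escapes directly to space

surface_flux = perfect_shell - sensible_latent - atm_window  # 330 W/m^2
T = (surface_flux / SIGMA) ** 0.25  # invert Stefan-Boltzmann for temperature
print(round(T - 273.15, 1))  # roughly 3 C, far below the observed ~15 C
```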

You need a second shell to make it work, as shown above.

Finally, remember that each layer has to emit the same amount upwards and downwards. This is what first put me off of the Trenberth diagram, viz:

Note that the single layer emits 333 W/m2 downward, but only 239 W/m2 upwards …

Best regards,

w.

• Willis,

Yes, the ‘ideal’ absorbing layer can return no more than 2x the incident energy, where ideal means that 100% of what the surface emitted was absorbed by the atmosphere (GHG’s and clouds); half of this must escape into space to offset the incident energy and the remaining half is returned to the surface. Since the emissions at TOA are 1/2 the emissions of the surface, the equivalent emissivity becomes 0.5 and establishes the upper bound ‘amplification’ for the surface emissions at twice the incident solar energy. We can then bound the equivalent emissivity of the planet to between 0.5 and 1.0, bounding the ‘amplification’ between 1 and 2, and expect the action of clouds to result in a value between these limits since clouds decrease the effective emissivity by being colder than the surface below.
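Those bounds can be sketched numerically. This is an illustrative reading of the argument only, not co2isnotevil’s actual model; the 239 and 390 W/m^2 figures are the round numbers used elsewhere in this thread:

```python
# Single-layer "equivalent emissivity" picture: TOA emission is eps times the
# surface emission, and balance with the absorbed solar input requires
#   surface = solar / eps,
# so 1/eps acts as the "amplification" of the solar input.
def surface_emission(solar_in, eps):
    """Steady-state surface emission for equivalent emissivity eps."""
    return solar_in / eps

solar = 239.0  # approximate absorbed solar input, W/m^2

print(surface_emission(solar, 1.0))  # no-atmosphere limit: amplification 1
print(surface_emission(solar, 0.5))  # ideal single layer: amplification 2

# Earth's observed ~390 W/m^2 surface emission implies eps ~ 0.61, i.e. an
# amplification of ~1.63, inside the [1, 2] bounds described above.
eps_earth = solar / 390.0
print(round(1 / eps_earth, 2))
```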

Where you lost me is by saying that inefficiencies resulted in there not being enough energy returned to the surface, therefore, you needed to model more layers to compensate. I don’t see how that works.

If you can’t get the desired behavior with 1 layer, then you will not be able to get it with 2, or N for that matter. In principle, you should be able to model the same behavior with an arbitrary number of layers. If you’re getting a different result between 1 and 2 layers, then you will necessarily see differences between 2 and 3 and so on and so forth, until after a sufficiently large number of layers, you approach the ‘ideal’ behavior. This tells me that either there’s something wrong at the interfaces of the layers being stacked or the equivalent properties of the combined layers are not consistent with the equivalent properties of the whole.

I think what’s happening is that you’re accounting for things that aren’t really losses or inefficiencies, and a compensating error in how the layers are stacked is hiding this.

The loss from latent heat and convection is not a loss at all as it’s returned and offset as part of the ‘back radiation’ term in excess of the energy required to offset the steady state radiant emissions. The loss from sensible heat is also not a loss since in the steady state, the system has already reached its equilibrium temperature and the time averaged change in the sensible heat will be zero by definition.

The trickier one is the impact on the balance of the water in clouds. Reflected energy is accounted for by the albedo affecting the 235 W/m^2 of solar input (which I think is a few W/m^2 too low). I consider solar energy absorbed and emitted by clouds a proxy for solar energy absorbed and emitted by the oceans, as the two are tightly coupled over the integration periods across which the average balance would be relevant. Since the thermal mass of the combined water is dominated by the oceans, the resulting equivalent temperature will be representative of the actual average surface temperature which itself is dominated by the oceans.

We can continue this next year.
Happy New Year.

• Trick says:

Willis: “Finally, remember that each layer has to emit the same amount upwards and downwards.”

Each layer at the same temperature, that is. In Trenberth’s, observe there are also two (more subtle) emission layers (169+30, 333), at different heights so different temperatures. The Trenberth cartoon is intended to be simplified.

CO2: “In principle, you should be able to model the same behavior with an arbitrary number of layers”

Not if the surface layer is already opaque in the IR bands; then the addition of more opacity produces no change in the T profile when it should have an effect. In that case you have pushed your 1 (or 2) layer analogy for Earth too far and need N layers & a computer (as is the case for Venus).

“which I think is a few W/m^2 too low”

The Trenberth cartoon covers an earlier time period than the energy budgets available today, which show higher OLR.

• Willis Eschenbach says:

co2isnotevil December 31, 2019 at 7:15 pm

Where you lost me is by saying that inefficiencies resulted in there not being enough energy returned to the surface, therefore, you needed to model more layers to compensate. I don’t see how that works.

Thanks, co2, and happy new year. I discuss your question in detail at my post, The Steel Greenhouse. Read that end-to-end and it should be clearer.

Best to you,

w.

• Willis,

I see what happened, and the problem is as I stated earlier: you’re considering losses that aren’t really losses, and that’s why your 1 layer model doesn’t work. I know a single layer model can work because I have an existence proof that both single layer and multi-layer models of the atmosphere work just fine and both arrive at the same, verifiable, steady state result. In fact, my basic sanity test for an N-layer model is that it must get the same steady state result as an N-1 layer model; otherwise, the two models aren’t modeling the same steady state balance.

If a 1 layer model doesn’t return enough energy to the surface, but a 2 layer model based on the same assumptions does, then one or more of the models isn’t actually modeling the balance.

Look at the comments in the piece of C code I linked to earlier to see how a 1-layer model of the radiant balance works. This balance model can be extended as a parallel collection of grid cells, each represented as a sequence of 4×4 transforms representing the average behavior for that cell applying 1 transform per layer between the surface and space. Since superposition applies to joules, this can be collapsed into a single 4×4 transform representing the entire column as a single EQUIVALENT layer representing the average.

It’s also necessary to account for flux between the grid cells, which is why I like slices of latitude, since E/W flux cancels and average N/S flux per slice is more readily established. In addition, the average solar forcing is relatively constant across slices of latitude, allowing the relative behavior for varying solar forcing to emerge as the differences between slices which are otherwise topographically similar.

Regarding the steady state balance, you must consider 2 orthogonal energy fluxes with no NET conversion between them by the layer(s). The two fluxes are the energy transported by photons and the energy transported by matter. Trenberth’s error of conflating the two is the source of many other errors, but then again, this seems to have been the intent.

Only the energy transported by photons can contribute to the RADIANT balance. The energy transported by matter only serves to redistribute existing energy between and among the surface and the atmosphere. Trenberth incorrectly considers the redistribution of existing energy to contribute to the radiant balance, when how energy is distributed on average has a zero sum effect on the balance. It may affect the temperature and subsequent radiant emissions, but whatever effect it’s having is already accounted for by the temperature and radiant emissions, and a balance will be achieved regardless. Note the difference between modelling the balance and modelling the low level behavioral interactions that one hopes result in the emergence of a proper balance (i.e. a GCM). Trenberth is attempting to model things specific to the latter with the former.

It looks like you’re mischaracterizing sensible heat. The energy balance is a reflection of the steady state, which for all intents and purposes is defined as when the average sensible heat in and out of the surface (and atmosphere) is zero. If not, then either it’s not in the steady state, or matter will be heating and/or cooling without bound. In this case, sensible heat entering the atmosphere would cause it to get hotter and hotter as the surface gets colder and colder. This clearly isn’t the case, and what you’re calling sensible heat is being offset as part of the back radiation term, which can only be modeled as sensible heat returning to the surface. Subtract the offset of this and the latent heat term from the ‘back radiation’ term and then add what’s left to the solar input to offset the surface emissions. Most, if not all, of the non radiant energy returned to the surface is returned by matter anyway and not photons. Calling it ‘back radiation’ tends to hide this.

The only way to convert the energy transported by matter into energy transported by photons is when that matter radiates energy away. However; for that matter to be in a steady state equilibrium, it must be absorbing the same as it’s emitting, so whatever is emitted by matter in the atmosphere is replaced by subsequent absorption. If this wasn’t the case, the temperature of that atmospheric matter would either increase or decrease without bounds.

I think Trenberth just wanted to make the ‘back radiation’ term seem more important by including the return path of energy redistribution between the surface and atmosphere and implying that it’s mostly from GHG’s. In fact, most of the actual radiant component of the ‘back radiation’ term are cloud emissions returning to the surface. He did a similar thing by carving out solar energy absorbed by the atmosphere (clouds) and calling that part of the ‘back’ radiation term which is really still forward radiation from clouds.

Please examine this scatter plot which isolates the radiant behavior of the planet from the redistribution of existing energy by matter.

The thin green line is a prediction of my single layer model of the RADIANT balance along the path from the surface to space. Each small red dot is 1 month of measured data for each 2.5 degree slice of latitude from pole to pole across 3 decades of data and the larger green and blue dots are 3 decade averages for each slice. The correspondence to the predictions of the single layer model is very strong and unambiguous.

It gets even more interesting along the input path where you must account for energy passing N/S between slices. The effect of this is to bias surface emissions up by half of the absorption such that the incremental effect of solar forcing becomes 1 W/m^2 of surface emissions per W/m^2 of solar forcing. The steady state is defined to be where the biased up input path intersects with the unbiased output path of 1.62 W/m^2 of surface emissions per W/m^2 of forcing.

In this plot, the magenta line is the prediction of the input path and the red dots represent the per-slice relationships between the solar input and the surface temperature. The steady state is where the magenta line intersects the green line.

• 1sky1 says:

The Trenberth diagram is misleading in that the emission is different going upwards and downwards. Here’s my more accurate version …

Actually, both diagrams are highly misleading, because they create the impression that radiation is a far more important mechanism in setting the surface temperature than the sum of evaporation and convection. They accomplish this aphysical illusion by mixing concepts of unilateral heat transfer by the latter with bilateral, directional exchange of radiation. In reality, only the NET heat transfer by any mechanism is what truly matters.

• Wade Burfitt says:

“Change in temperature is equal to climate sensitivity times the change in downwelling radiation.”
In the hopes that there are no stupid questions… how is downwelling radiation possible? Radiation heat transfer only occurs from a source at a higher temperature to a receiver at a lower temperature. The temperature of the atmosphere cools rapidly as one moves up in altitude. At 15-20 km the temperature is -50 to -70 C, temperatures far below that of the surface or lower atmosphere. Energy cannot radiate from low to high temperature. So how is downwelling possible?

• Willis Eschenbach says:

Wade Burfitt December 30, 2019 at 4:25 pm

“Change in temperature is equal to climate sensitivity times the change in downwelling radiation.”

In the hopes that there are no stupid questions… how is downwelling radiation possible? Radiation heat transfer only occurs from a source at a higher temperature to a receiver at a lower temperature.

I fear you misunderstand radiation. You are conflating heat and energy. Heat is the spontaneous flow of energy from warm to cold. Second law.

Radiation, on the other hand, is emitted as photons by any source. It is absorbed by whatever it hits, REGARDLESS OF THE TEMPERATURE OF THE ABSORBER.

So, as crazy as it might seem, when you light a candle outside in the daytime, some of the photons strike the sun, where they are absorbed.

However, when the candle can “see” the sun, the sun can “see” the candle. And many more photons flow from the sun to the candle than from the candle to the sun.

So the NET flow of energy is from the sun to the candle, i.e. from warm to cold, and there’s no violation of the 2nd law. Here’s an example of the difference, using payments of money as an example.

The net flow of money only goes one way. However, it is made up of two separate actual physical flows of money in opposite directions … just as with the candle and the sun. Net heat flow is sun to candle. However, it is made up of two actual physical flows of photons in opposite directions.

You might enjoy my post Can A Cold Object Warm A Hot Object, that discusses this in a bit of detail.

Finally, there are stupid questions … they’re the questions you don’t ask, so you never learn.

Regards,

w.

Mr Eschenbach,
Thank you very much for taking the time to provide such an excellent explanation.
Cheers and Happy New Year

• DMacKenzie says:

The proper formula for radiative heat transfer between 2 objects is of the form
Q = Factor × [Thot^4 − Tcold^4]
Climate scientists like to call the [−Tcold^4] term “back radiation”. With this in mind, hold up a 20 C piece of paper in front of your face. Your face at 32 C is radiating heat to the paper at a rate of 490 W/m2, and the paper is radiating back to your face at 420 W/m2. An engineering graduate will just use the whole formula and say your face is radiating 70 W/m2 to the paper. As my old profs used to say, using the whole formula keeps numerous potential heat transfer and thermodynamic errors from being made.
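Those numbers can be verified directly; a minimal sketch, taking emissivity as 1 for simplicity (real skin and paper are close to, but below, that):

```python
# Two-way radiative exchange between a 32 C face and a 20 C sheet of paper,
# using Q = sigma * (T_hot^4 - T_cold^4) with unit emissivity assumed.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(T_celsius):
    """Black-body flux at the given Celsius temperature."""
    return SIGMA * (T_celsius + 273.15) ** 4

face = emitted_flux(32)   # ~490 W/m^2 from the face toward the paper
paper = emitted_flux(20)  # ~420 W/m^2 "back radiation" from the paper
net = face - paper        # ~70 W/m^2 net heat transfer, face -> paper
print(round(face), round(paper), round(net))
```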

• Willis Eschenbach says:

Thanks, DMacKenzie. Clearly explained. 1sky1, pay attention …

w.

• Succinct and accurate. Well played, sir!

• And, one more thing: the paper does not make your face warmer!!!

59. Herbert says:

Willis,
Correct me if I am wrong, but taking the ECS definition you cite, we have instances in the geological record of a doubling of CO2 in several epochs.
I am looking at a graph taken from Bernier 2001 of average CO2 levels in the last 11 geological periods, stretching over the last 600 million years.
As well as showing that our current geologic period ( Quaternary) has the lowest average CO2 levels in the history of the Earth, I can see periods where the global CO2 doubled over thousands of years.
The Pre-Cambrian period (600 MBP to 550 MBP) to the Cambrian period (550 MBP to 500 MBP) shows a rise from ~3500 ppm to ~7800 ppm.
In the Permian period (300 MBP to 250 MBP) the rise is from ~450 ppm to ~2000 ppm.
In the Jurassic period (200 MBP to 150 MBP) the rise is from ~1200 ppm to ~2900 ppm.
These are all eyeball assessments of the figures from the graph.
What were the temperature movements (indicating ECS) in these periods?
A: I don’t know, except to say there were no “tipping points” or “runaway global warming”.
Geologists should be able to say whether the ECS from these various periods were constant or not.

60. Pat Smith says:

One of the most interesting and informative posts I have ever come across on WUWT or anywhere else, covering a vast swathe of the basic physics. I have a question (which may have been asked and answered in the 200 comments above and which I missed) that concerns the relative size of the downwelling at various points of the globe. You take the downwelling caused by a doubling of CO2 to be 3.7 W/m2, which is the commonly accepted number. Presumably, this is a much larger amount of radiation at the poles, where the sun’s radiation is much smaller and the longwave radiation going upward is similarly much less as the surface is much colder. MODTRAN shows that the upward longwave radiation flux at the poles might be half or even a third of that at the tropics, so a fixed number of 3.7 W/m2 would have a much greater effect there. Is this true?
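A rough blackbody sketch of the point being asked about (the surface temperatures here are illustrative round numbers, not MODTRAN output): the same 3.7 W/m2 is both a larger fraction of the upwelling flux at the poles and, via the linearized Planck response, a larger first-order warming there:

```python
# Linearized Planck response: dT = dF / (4 * sigma * T^3) for a blackbody
# surface, compared at an illustrative tropical and polar temperature.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
DF = 3.7                # W/m^2 per doubling of CO2

results = {}
for label, T in [("tropics", 303.0), ("polar", 248.0)]:
    olr = SIGMA * T**4              # blackbody upward LW flux at temperature T
    dT = DF / (4 * SIGMA * T**3)    # first-order warming for forcing DF
    results[label] = (olr, DF / olr, dT)
    print(f"{label}: flux ~{olr:.0f} W/m2, "
          f"3.7 W/m2 is {100 * DF / olr:.1f}% of it, dT ~{dT:.2f} K")
```

With these numbers the polar fraction comes out roughly twice the tropical one, consistent with the intuition behind the question.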

61. Johann Wundersamer says:

Willis,

written in that curious language called “math” it is

∆T = λ ∆F Equation 1 (and only)

where T is surface temperature, F is downwelling radiative forcing, λ is climate sensitivity, and ∆ means “change in”

I hold that the error in that equation is the idea that lambda, the climate sensitivity, is a constant. Nor is there any a priori reason to assume it is constant.

____________________________________

I hold that “climate sensitivity” is [ sold as ] a constant;

In fact it has been claimed from the beginning to provide a “models control knob” used to adjust the

observed, seen, lived-through climate to model outputs – and never the other way round.