A Decided Lack Of Equilibrium

Guest Post by Willis Eschenbach

I got to thinking about the lack of progress in estimating the “equilibrium climate sensitivity”, known as ECS. The ECS measures how much the global surface temperature eventually changes when the top-of-atmosphere forcing changes, a thousand years or so later, after all the changes have equilibrated. The ECS is measured in degrees C per doubling of CO2 (°C / 2xCO2).

Knutti et al. 2017 offers us an interesting look at the range of historical answers to this question. From the abstract:

Equilibrium climate sensitivity characterizes the Earth’s long-term global temperature response to increased atmospheric CO2 concentration. It has reached almost iconic status as the single number that describes how severe climate change will be. The consensus on the ‘likely’ range for climate sensitivity of 1.5 °C to 4.5 °C today is the same as given by Jule Charney in 1979, but now it is based on quantitative evidence from across the climate system and throughout climate history.

This “climate sensitivity”, often represented by the Greek letter lambda (λ), is claimed to be a constant that relates changes in downwelling radiation (called “forcing”) to changes in global surface temperature. The relationship is claimed to be:

Change in temperature is equal to climate sensitivity times the change in downwelling radiation.

Or written in that curious language called “math” it is

∆T = λ ∆F                               Equation 1 (and only)

where T is surface temperature, F is downwelling radiative forcing, λ is climate sensitivity, and ∆ means “change in”

I call this the “canonical equation” of modern climate science. I discuss the derivation of this equation here. And according to that canonical equation, depending on the value of the climate sensitivity, a doubling of CO2 could make either a large or a small change in surface temperature. Which is why the sensitivity is “iconic”.
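As a toy illustration of how the canonical equation gets used (the lambda value below is an assumed round number for illustration only, not anything from the literature):

    # A toy illustration of the canonical equation dT = lambda * dF.
    # The lambda value is an assumed round number, for illustration only.
    lam = 0.8  # climate sensitivity, degrees C per (W/m2) -- assumed
    dF = 3.7   # additional forcing from a doubling of CO2, W/m2
    dT = lam * dF
    print(f"Implied ECS: {dT:.1f} degrees C per doubling of CO2")  # ~3.0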

Now, I describe myself as a climate heretic, rather than a skeptic. A heretic is someone who does not believe orthodox doctrine. Me, I question that underlying equation. I do not think that even over the long term the change in temperature is equal to a constant times the change in downwelling radiation.

My simplest objection to this idea is that evidence shows that the climate sensitivity is not a constant. Instead, it is a function inter alia of the surface temperature. I will return to this idea in a bit. First, let me quote a bit more from the Knutti paper on historical estimates of climate sensitivity:

The climate system response to changes in the Earth’s radiative balance depends fundamentally on the timescale considered. The initial transient response over several decades is characterized by the transient climate response (TCR), defined as the global mean surface warming at the time of doubling of CO2 in an idealized 1% yr–1 CO2 increase experiment, but is more generally quantifying warming in response to a changing forcing prior to the deep ocean being in equilibrium with the forcing  …

By contrast [to Transient Climate Response TCR], the equilibrium climate sensitivity (ECS) is defined as the warming response to doubling CO2 in the atmosphere relative to pre-industrial climate, after the climate reached its new equilibrium, taking into account changes in water vapour, lapse rate, clouds and surface albedo. 

It takes thousands of years for the ocean to reach a new equilibrium. By that time, long-term Earth system feedbacks — such as changes in ice sheets and vegetation, and the feedbacks between climate and biogeochemical cycles — will further affect climate, but such feedbacks are not included in ECS because they are fixed in these model simulations. 

Despite not directly predicting actual warming, ECS has become an almost iconic number to quantify the seriousness of anthropogenic warming. This is a consequence of its historical legacy, the simplicity of its definition, its apparently convenient relation to radiative forcing, and because many impacts to first order scale with global mean surface temperature. 

The estimated range of ECS has not changed much despite massive research efforts. The IPCC assessed that it is ‘likely’ to be in the range of 1.5 °C to 4.5 °C (Figs 2 and 3), which is the same range given by Charney in 1979. The question is legitimate: have we made no progress on estimating climate sensitivity?

Here’s what the results show. There has been no advance, no increase in accuracy, no reduction in uncertainty, in ECS estimates over the forty years since Charney in 1979. Let’s take a look at the actual estimates.

The Knutti paper divides the results up based on the type of underlying data upon which they were determined, viz: “Theory & Reviews”, “Observations”, “Paleoclimate”, “Constrained by Climatology”, and “GCMs” (global climate models). Some of the 145 estimates contained only a range, like, say, 1.5 to 4.5. In that case, for the purposes of Figure 1 I’ve taken the mean of the range as the point value of the estimate.

Figure 1. Estimates of ECS (equilibrium climate sensitivity). Colors indicate what type of underlying data they are based on. Horizontal dashed lines show the canonical range of climate sensitivity, which is 1.5 – 4.5°C / 2xCO2.

Next, I looked at the 124 estimates which included a range for the data. Some of these are 95% confidence intervals; some are reported as one standard deviation; others are a raw range of a group of results. I have converted all of these to a common standard, the 95% confidence interval. Figure 2 shows the maxima and the minima of these ranges. I have highlighted the results from the five IPCC Assessment Reports, as well as the Charney estimate.
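For those following along, here is a minimal sketch of that kind of conversion. The Gaussian treatment of one-sigma ranges, and the as-is treatment of raw ranges, are my guesses at the method, since the exact rules aren't spelled out above:

    def to_95ci(center, half_width, kind):
        """Put a reported ECS spread onto a common 95% CI footing.

        A sketch only: the Gaussian assumption for one-sigma ranges and
        the as-is treatment of raw ranges are guesses at the method, not
        a description of the exact conversion used for Figure 2.
        """
        if kind == "1sigma":
            half_width *= 1.96  # one standard deviation -> ~95% CI, if Gaussian
        # "raw" and "95ci" ranges are left as reported
        return (center - half_width, center + half_width)

    print(to_95ci(3.0, 1.5, "raw"))      # the canonical 1.5 to 4.5 range
    print(to_95ci(3.0, 0.75, "1sigma"))  # ~(1.53, 4.47)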

Figure 2. Tops (red dots) and bottoms (yellow dots) of the 95% confidence intervals for estimated ECS values. Red and yellow straight lines show the linear trend of the tops and bottoms of the 95%CIs respectively. The white lines show the Charney 1979 estimate, along with the estimates from the First through the Fifth Assessment Reports (FAR, SAR, TAR, AR4, and AR5). The blue dashed lines show the current (and past) IPCC interval, 1.5 to 4.5°C of warming from a doubling of CO2 (which is said to provide 3.7 W/m2 additional radiation).
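In passing: that 3.7 W/m2 figure comes from the standard logarithmic forcing approximation. A quick check, assuming the commonly used Myhre et al. 1998 formula, which is not spelled out in the post:

    import math

    # Quick check of the ~3.7 W/m2 per doubling figure, assuming the
    # common logarithmic approximation dF = 5.35 * ln(C / C0) from
    # Myhre et al. (1998); the post itself just quotes the number.
    print(5.35 * math.log(2.0))  # ~3.71 W/m2 per doubling of CO2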

The Charney / IPCC estimates for the range of the ECS values were constant from 1979 to 1995, at 1.5°C to 4.5°C for a doubling of CO2. In the Third Assessment Report (TAR) in 2001 the range got smaller, and in the Fourth Assessment Report (AR4) in 2007 it got smaller still.

But in the most recent Fifth Assessment Report (AR5), we’re back to the original ECS range where we started, at 1.5 to 4.5°C / 2xCO2.

In fact, far from the uncertainty decreasing over time, the tops of the uncertainty ranges have been increasing over time (red/black line), while the bottoms of the uncertainty ranges have been decreasing (yellow/black line). So things are getting worse: as you can see, over time the uncertainty range of the ECS estimates has steadily widened.

Looking At The Shorter-Term Changes

Pondering all of this, I got to thinking about a related matter. The charts above show equilibrium climate sensitivity (ECS), the response to a CO2 increase after a thousand years or so. There is also the “transient climate response” (TCR) mentioned above. Here’s the definition of the TCR, from the IPCC:

Transient Climate Response (TCR)

TCR is defined as the average global temperature change that would occur if the atmospheric CO2 concentration were increased at 1% per year (compounded) until CO2 doubles at year 70. The TCR is measured in simulations as the average global temperature in a 20-year window centered at year 70 (i.e. years 60 to 80).

The transient climate response (TCR) tends to be about 70% of the equilibrium climate sensitivity (ECS). 
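(A quick check of the “doubling at year 70” in that definition, just compounding 1% per year:)

    # 1% per year, compounded, doubles CO2 at about year 70,
    # as in the TCR definition quoted above.
    print(1.01 ** 70)  # ~2.007, i.e. a doubling at year 70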

However, this time I wanted to look at an even shorter-term measure, the “immediate climate response” (ICR). The ICR is what happens immediately when radiation is increased. Bear in mind that the effect of radiation is immediate—as soon as the radiation is absorbed, the temperature of whatever absorbed the radiation goes up.

Now, a while back Ramanathan proposed a way to actually measure the strength of the atmospheric greenhouse effect. He pointed out that if you take the upwelling surface longwave radiation, and you subtract upwelling longwave radiation measured at the top of the atmosphere (TOA), the difference between the two is the amount of upwelling surface longwave that is being absorbed by the greenhouse gases (GHGs) in the atmosphere. It is this net absorbed radiation which is then radiated back down towards the planetary surface. Figure 3 shows the average strength of the atmospheric greenhouse effect.

Figure 3. Downwelling longwave, calculated as upwelling surface longwave less upwelling top-of-atmosphere longwave. Data is from the CERES EBAF satellite dataset, Mar 2000 to Feb 2018.
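In code, Ramanathan's measure is just the difference of two gridded fields. A minimal sketch with made-up numbers standing in for the data (the variable names are mine, not the actual CERES EBAF field names):

    import numpy as np

    # Sketch of the Ramanathan greenhouse measure behind Figure 3:
    # greenhouse strength = upwelling surface LW minus upwelling TOA LW.
    # The numbers below are made up; real use would read the gridded
    # monthly CERES EBAF fields instead.
    sfc_up_lw = np.array([[390.0, 420.0], [160.0, 300.0]])  # W/m2, surface
    toa_up_lw = np.array([[240.0, 260.0], [140.0, 220.0]])  # W/m2, TOA

    greenhouse = sfc_up_lw - toa_up_lw  # LW absorbed by GHGs and clouds
    print(greenhouse)
    # Note: a true global mean needs area (cosine-of-latitude) weighting,
    # not a plain average over gridcells.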

The main forces influencing the variation in downwelling radiation are clouds and water vapor. We know this because the non-condensing greenhouse gases (CO2, methane, etc) are generally well-mixed. Clouds are responsible for about 38 W/m2 of the downwelling LW radiation, CO2 is responsible for on the order of another twenty or thirty W/m2 or so, and the other ~ hundred watts/m2 or so are from water vapor.

So to return to the question of immediate climate response … how much does the monthly average surface temperature change when there are changes in the monthly average downwelling longwave radiation shown in Figure 3? This is the immediate climate response (ICR) I mentioned above. Figure 4 below shows how much the temperature changes immediately with respect to changes in downwelling GHG radiation (also called “GHG forcing”).

Figure 4. Change in monthly surface temperature for each additional 3.7 W/m2 of downwelling GHG longwave. As is the common practice, and as in Figures 1 & 2, I’ve expressed the temperature changes per 3.7 W/m2 of increased radiation (the additional forcing from a doubling of CO2).

There are some interesting things to be found in Figure 4. First, as you might imagine, the ocean warms much less on average than the land when downwelling radiation increases. However, it was not for the reason I first assumed. I figured that the reason was the difference in thermal mass between the ocean and the land. However, if you look at the tropical areas you’ll see that the changes on land are very much like those in the ocean. 

Instead of thermal mass, the difference between land and sea appears to be related to snow and ice. These are generally the green-colored areas in Figure 4 above. When ice melts, whether on land or at sea, much less sunlight is reflected back to space from the surface. This positive feedback increases the thermal response to increased forcing.

Next, you can see evidence for the long-discussed claim that if CO2 increases, there will be more warming near the poles than in the tropics. The colder areas of the planet warm the most from an increase in downwelling LW radiation. On the other hand, the tropics barely warm at all with increasing downwelling radiation.

Seeing the cold areas warming more than the warm areas led me to graph the temperature increase per additional 3.7 W/m2 versus the average temperature in each gridcell, as seen in Figure 5 below.

Figure 5. Scatterplot, average gridcell temperature versus immediate climate response (ICR). Each dot represents a 1° latitude x 1° longitude gridcell. As in previous figures, I’ve expressed the temperature changes per 3.7 W/m2 of increased radiation (the additional forcing from a doubling of CO2).

The yellow/black line is the amount that we’d expect the temperature to rise (using the Stefan-Boltzmann equation) if the downwelling radiation goes up by 3.7W/m2 and there is no feedback. This graph reveals some very interesting things.
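Here's how that no-feedback line is computed; treating the surface as a blackbody (emissivity of one) is my simplification:

    # The no-feedback Stefan-Boltzmann response (yellow/black line in
    # Figure 5): invert F = sigma * T^4 to get dT = dF / (4 * sigma * T^3).
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2 K^4)

    def no_feedback_dT(T_kelvin, dF=3.7):
        return dF / (4.0 * SIGMA * T_kelvin ** 3)

    for T in (250.0, 288.0, 300.0):
        print(f"T = {T:.0f} K: {no_feedback_dT(T):.2f} degrees C per 3.7 W/m2")
    # Colder surfaces warm more per W/m2, hence the downward slope of the line.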

First, at the cold end, things warm faster than expected. As mentioned above, I would suggest that at least in part this is the result of the positive albedo feedback from the melting of land and sea ice.

There is support for this interpretation when we note that the right-hand part of Figure 5 that is above freezing is very different from the left-hand part that is below freezing. Above freezing, the temperature rise per additional radiation is much smaller than below freezing. 

It is also almost entirely below the theoretical response. The average immediate climate response (ICR) of all of the unfrozen parts of the planet is a warming of only 0.2°C per 3.7 W/m2.

Discussion

We’re left with a question: why it is that forty years after the Charney report, there has been no progress in reducing the uncertainty in the estimate of the equilibrium climate sensitivity?

I hold that the reason is that the canonical equation is not an accurate representation of reality … and it’s hard to get the right answer when you’re asking the wrong question. 

From above, here’s the canonical equation once again: 

This “climate sensitivity”, often represented by the Greek letter lambda (λ), is claimed to be a constant that relates changes in downwelling radiation (called “forcing”) to changes in global surface temperature. The relationship is claimed to be:

Change in temperature is equal to climate sensitivity times the change in downwelling radiation.

Or written in that curious language called “math” it is

∆T = λ ∆F                                               Equation 1 (and only)

where T is surface temperature, F is downwelling radiative forcing, λ is climate sensitivity, and ∆ means “change in”

I hold that the error in that equation is the idea that lambda, the climate sensitivity, is a constant. There is no a priori reason to assume that it is.

Finally, it is worth noting that in areas above freezing, the immediate change in temperature per doubling of CO2 is far below the amount expected from just the known Stefan-Boltzmann relationship between radiation and temperature (yellow/black line in Figure 5). And in the areas below freezing, it is well above the amount expected.

And this means that just as the areas below freezing are showing clear and strong positive feedback, the areas above freezing are showing clear and strong negative feedback.

Best Christmas/Hanukkah/Kwanzaa/Whateverfloatsyourboat wishes to all,

w.

AS USUAL, I ask that when you comment you quote the exact words you are discussing, so we can all be certain what you are referring to.

229 Comments
Ktm
December 27, 2019 5:04 pm

It would be nice to see the horizontal axis as percentiles rather than linear temperature.

The linear temperature is nice for the S-B curve, but not to visualize what’s happening on the planet as a whole.

Steve Reynolds
December 27, 2019 9:10 pm

Willis,
Your figure 5 looks like it would be a good test of climate models. Do you know if anyone has tried to replicate the figure 5 data with a model?

December 28, 2019 1:44 pm

Is it really just a coincidence, or are more and more papers written by authors whose names describe the paper? In this case, the Knutti paper sounds very much like Nutty Paper. I always smile whenever I see this kind of coincidence, but lately, I’ve been laughing my head off.

Could these papers really just be bogus, with authors’ names chosen for the effect on readers?

Alan Tomalty
December 28, 2019 2:50 pm

Can we really trust the TOA CERES measurements? https://ceres.larc.nasa.gov/documents/cmip5-data/Tech-Note_CERES-EBAF-Surface_L3B_Ed2-8.pdf
” Instantaneous top-of-atmosphere (TOA) irradiances are estimated from unfiltered radiances using empirical angular distribution models (ADMs; Loeb et al. 2003, 2005)”

This whole scenario of measuring IR in the Earth’s atmosphere is a house of cards.

December 28, 2019 3:02 pm

If the strength of the greenhouse effect is dependent on surface emissions, then the globally uniform +3.7 W/m^2 in figure 4 is specious. In the high-altitude, dry atmosphere above the Antarctic, CO2 acts as a coolant, and Arctic warming is largely driven by the warm AMO phase, which is normal during a centennial solar minimum.

December 29, 2019 8:07 pm

Willis, you wrote,

“We’re left with a question: why it is that forty years after the Charney report, there has been no progress in reducing the uncertainty in the estimate of the equilibrium climate sensitivity?

I hold that the reason is that the canonical equation is not an accurate representation of reality … and it’s hard to get the right answer when you’re asking the wrong question. ”

This has been my contention all along. You are describing the pathognomonic sign of a wrong model.
When you have a good model, new and better observations will refine the model parameters ever more precisely. When your model is wrong, no amount of new observations will improve its predictive power. That is what Kepler learned when he was trying to compute the orbital radius of Mars. His error bars were huge until he changed the model to an ellipse. Then the model became extremely precise.

The last 40 years have seen an orders of magnitude increase in both the quantity and quality of observations that bear upon the question of climate sensitivity to radiative forcing. For ECS to essentially not budge means that the model is fundamentally wrong. The inability to be improved by more data is a universal property of wrong models.

Reply to  UnfrozenCavemanMD
December 29, 2019 8:09 pm

Somehow my quote ran afoul of formatting, it should have started:
Willis, you wrote, “We’re left with a question: why it is that forty years after the Charney report, there has been no progress in reducing the uncertainty in the estimate of the equilibrium climate sensitivity?

I hold that the reason is that the canonical equation is not an accurate representation of reality … and it’s hard to get the right answer when you’re asking the wrong question. ”

Herbert
December 29, 2019 8:37 pm

Willis,
In Garth Paltridge’s “ The Climate Caper, Facts and fallacies of global warming”, Chapter 2 is ‘Some Physics’.
In it Dr. Paltridge says at the chapter beginning “ There is a fair amount of reasonable science behind the global warming debate, but in general, and give or take a religion or two, never has quite so much rubbish been espoused by so many on so little evidence”.
Now to your point about ECS not being a constant.
Dr. Paltridge gives another single equation like yours in these terms –
“Imagine that the basic rise without feedbacks of global temperature from doubled CO2 is Delta To. Imagine as well that g1, g2, g3, and so on are the actual values of the individual feedback ‘gains’ associated with each of the various atmospheric processes dependent on surface temperature. They may be positive or negative. That is, they may amplify or reduce the basic rise in temperature Delta To associated with the increase of CO2.
The total gain G of the overall system is simply the sum (g1 + g2 + g3 + …) of all the individual gains, and the actual temperature rise Delta T when all the feedbacks are allowed to operate is simply the value of Delta To divided by a factor (1 – G), as shown in the equation:
Delta T = Delta To / (1 – G).
The mathematically minded of you will recognise that all sorts of trouble would arise if the total gain G were 1.0. The actual temperature response would be infinite.”
Dr. Paltridge then shows a graph of the relationship between Delta T and G.
“The 1.2 degree Celsius rise for no feedbacks, the infinite rise for G equal to 1, and the edges of the cross-hatched area which indicate the range of total feedback gains and corresponding temperature rises for the … respectable models for which information on feedback is available (are shown).
The range indicates that the total gain of an individual model falls somewhere roughly between 0.4 and 0.8. The corresponding range of temperature rise lies between 2 and 6 degrees Celsius.”
The g1, g2, g3, etc. are, of course, Water Vapour (WV), Cloud (Cl), Reflection (Re), Lapse Rate (LR), CO2 and greenhouse gases, etc.
And then –
“Certainly a large negative cloud feedback (as likely a situation as any other) would drag the total feedback right down and lead to much smaller increases in temperature from increasing CO2 than are currently fashionable.”
Now to your point –
“As a final random thought, it is at least theoretically conceivable that the total feedback gain of the climate system is actually very close to 1.0. In such a circumstance one could imagine the climate skating from one extreme of temperature to another and back again. The extremes would be the points at which the total feedback gain became less than 1.0 – as for instance when cloud cover reached zero or 100% and could no longer contribute to the feedback. After all, the climate has always been flipping in and out of Ice Ages!
More to the present point, and were such a situation to exist, it wouldn’t matter very much whether or not man added lots more CO2 to his atmosphere.”
Willis, it is way above my pay grade, but is Dr. Paltridge indirectly acknowledging that in the situation he outlines ECS is not a constant, whether your single equation or his applies?
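For readers following the numbers, Paltridge's gain relation is easy to tabulate; a minimal sketch using the figures quoted above:

    # Paltridge's gain relation as quoted above: dT = dT0 / (1 - G),
    # with his no-feedback value dT0 = 1.2 degrees C. The G values are
    # illustrative points in and beyond his 0.4 to 0.8 range.
    dT0 = 1.2

    for G in (0.0, 0.4, 0.8, 0.99):
        print(f"G = {G:4.2f}: dT = {dT0 / (1.0 - G):6.1f} degrees C")
    # G = 0.4 gives 2.0 C and G = 0.8 gives 6.0 C, matching the quoted
    # 2 to 6 degree range; as G approaches 1 the response blows up.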

eyesonu
December 30, 2019 7:55 am

This has been another excellent post by Willis and the resulting comment thread.

Figures 4 & 5 were very interesting. I have been through the post and studied the comments for the fourth time now. Breaking down the vague ‘global average’ into zones and temp plots as in fig 5 is a big step forward. You can hide a herd of elephants in a global average!

December 30, 2019 2:26 pm

Willis,
You wrote after Fig 3

“The main forces influencing the variation in downwelling radiation are clouds and water vapor. We know this because the non-condensing greenhouse gases (CO2, methane, etc) are generally well-mixed. Clouds are responsible for about 38 W/m2 of the downwelling LW radiation, CO2 is responsible for on the order of another twenty or thirty W/m2 or so, and the other ~ hundred watts/m2 or so are from water vapor.”

My question is … why is the sum of your numbers about half of the commonly quoted 333 or so watts of downwelling used in Trenberth-type Earth heat balance graphics?

Reply to  Willis Eschenbach
December 30, 2019 5:35 pm

Willis,

To make it more representative, separate the ‘back radiation’ term into the contributions from absorbed solar input and the contribution from absorbed surface output. The former is ‘forcing’ power and the latter is ‘feedback’ power. Similarly, separate out the emissions by the water in clouds from the emissions by atmospheric GHG’s, as well as cloud emissions originating from surface absorption and those arising from solar absorption. I think that dividing it into troposphere and stratosphere adds unnecessary complication that doesn’t help with understanding. A better approach would be to divide it into contributions from clear skies and those from cloudy skies, especially since it seems that the amount of clouds is what modulates the energy balance until the required balance is achieved.

The return of latent heat should be carved out; it is mostly returned to the surface by liquid or solid precipitation that’s warmer than it would be otherwise, and by weather. Latent heat plus its return to the surface has a zero-sum effect on the radiant balance, since its complete influence has already been manifested by the average surface temperature and its corresponding radiant emissions. The same can be said for all non-radiant energy leaving the surface plus its offsetting return to the surface from the atmosphere. Distinguishing the energy transported by photons from the energy transported by matter does this by considering only the energy transported by photons as contributing to the RADIANT balance.

Of all the energy absorbed by the surface, only the 390 W/m^2 required to offset the surface radiation corresponding to the average surface temperature of 288K (the 390 W/m^2 of surface radiation) is relevant to the RADIANT balance. Everything else is just zero sum noise that gets in the way of understanding what the balance actually means. Whether some of the offset of the non radiant energy entering the atmosphere is in the form of photons doesn’t even matter, even though a sufficient amount of energy seems to be returned by non radiant means.

The balance is only concerned with averages spanning intervals of time much larger than the nominal length of the hydro cycle and the component of the atmosphere that absorbs most of the solar energy absorbed by the atmosphere is the water in clouds which is tightly coupled to the water in the oceans. Across intervals of time much longer than the nominal length of the hydro cycle, the absorption and emission of solar energy by the water in clouds can be considered a proxy for solar energy absorbed and emitted by the water in the oceans.

Reply to  Willis Eschenbach
December 31, 2019 4:56 pm

Willis,

What’s preventing modeling the Earth’s atmosphere as a single layer with the equivalent average properties required to reproduce the average behavior at its boundaries with TOA and the surface?

I’ve used many different layering configurations and a single layer equivalent model of the atmosphere works quite well. Conceptually, the model is of a 2-body system consisting of an ideal BB and a single ‘graying’ layer inserted between the BB and its environment. A relatively simple 2×2 transform can represent the bidirectional transfer function of this single layer which I’ve then applied to represent an atmosphere. The idea is that the emissions of the BB are attenuated by the layer before reaching the environment by just enough to offset the incident energy while at the same time, it amplifies the incident energy by the reciprocal of the attenuation before reaching the BB in order to exactly offset its emissions. The loop is closed by considering the source of the energy powering the amplification of power arriving from the environment to be the surface energy that was attenuated on its way out to the environment.

The model is not specific to Earth, or any planet for that matter, and is just an idealized model for a 2-body system where the attenuation factor is the emissivity of an equivalent gray body representing the steady state condition. It just happens that when you ignore non radiant energy like latent heat and are concerned only with the radiant behavior at the boundaries, a single value of equivalent emissivity produces results that are surprisingly representative of the averages measured for slices of Earth’s latitude from pole to pole.

Here’s a bit of C code you can play with that demonstrates the basics of the model.

http://www.palisad.com/co2/code/gold.c

Note that the default transfer function {{-sqrt(a), 1}, {1, a}} is not that of an ideal gray body, but is a variant with gray behavior that converges to become ‘golden’ in the steady state. What differs from an ideal gray body is the behavior as it deviates away from the steady state. To see how it differs, one of the other transfer function choices available is that of an ideal gray body (see the comments).

Reply to  Willis Eschenbach
December 31, 2019 7:15 pm

Willis,

Yes, the ‘ideal’ absorbing layer can return no more than 2x the incident energy, where ideal means that 100% of what the surface emitted was absorbed by the atmosphere (GHG’s and clouds), where half of this must escape into space to offset the incident energy and the remaining half is returned to the surface. Since the emissions at TOA are 1/2 the emissions of the surface, the equivalent emissivity becomes 0.5 and establishes the upper bound ‘amplification’ for the surface emissions at twice the incident solar energy. We can then bound the equivalent emissivity of the planet to between 0.5 and 1.0, bounding the ‘amplification’ between 1 and 2, and expect the action of clouds to result in a value between these limits, since clouds decrease the effective emissivity by being colder than the surface below.
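A minimal numerical sketch of that ideal single-layer case, with an illustrative solar input:

    # Ideal single-layer case described above: the layer absorbs all
    # surface LW, emits half to space and returns half to the surface.
    # The 240 W/m2 solar input is illustrative.
    solar_in = 240.0

    surface_emit = 2.0 * solar_in   # steady state 'amplification' of 2
    toa_emit = 0.5 * surface_emit   # half the surface emission escapes
    print(surface_emit, toa_emit)   # 480.0 240.0 -> TOA offsets the input
    print(toa_emit / surface_emit)  # 0.5 = the equivalent emissivity bound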

Where you lost me is by saying that inefficiencies resulted in there not being enough energy returned to the surface, therefore, you needed to model more layers to compensate. I don’t see how that works.

If you can’t get the desired behavior with 1 layer, then you will not be able to get it with 2, or N for that matter. In principle, you should be able to model the same behavior with an arbitrary number of layers. If you’re getting a different result between 1 and 2 layers, then you will necessarily see differences between 2 and 3 and so on and so forth, until after a sufficiently large number of layers, you approach the ‘ideal’ behavior. This tells me that either there’s something wrong at the interfaces of the layers being stacked or the equivalent properties of the combined layers are not consistent with the equivalent properties of the whole.

I think what’s happening is that you’re accounting for things that aren’t really losses or inefficiencies, and a compensating error in how the layers are stacked is hiding this.

The loss from latent heat and convection is not a loss at all as it’s returned and offset as part of the ‘back radiation’ term in excess of the energy required to offset the steady state radiant emissions. The loss from sensible heat is also not a loss since in the steady state, the system has already reached its equilibrium temperature and the time averaged change in the sensible heat will be zero by definition.

The trickier one is the impact on the balance of the water in clouds. Reflected energy is accounted for by the albedo affecting the 235 W/m^2 of solar input (which I think is a few W/m^2 too low). I consider solar energy absorbed and emitted by clouds a proxy for solar energy absorbed and emitted by the oceans, as the two are tightly coupled over the integration periods across which the average balance would be relevant. Since the thermal mass of the combined water is dominated by the oceans, the resulting equivalent temperature will be representative of the actual average surface temperature which itself is dominated by the oceans.

We can continue this next year.
Happy New Year.

Trick
Reply to  Willis Eschenbach
December 31, 2019 7:35 pm

Willis: “Finally, remember that each layer has to emit the same amount upwards and downwards.”

Each layer at the same temperature that is. In Trenberth’s, observe there are also two (more subtle) emission layers (169+30, 333), at different heights so different temperatures. The Trenberth cartoon is intended to be simplified.

CO2: “In principle, you should be able to model the same behavior with an arbitrary number of layers”

Not if the surface layer is already opaque in the IR bands; then the addition of more opacity makes no change in the T profile when it should have an effect. In that case you have pushed your 1 (or 2) layer Earth analogy too far and need N layers & a computer (as is the case for Venus).

“which I think is a few W/m^2 too low”

The Trenberth cartoon covers an earlier time period than the energy budgets available today with higher OLR shown.

Reply to  Willis Eschenbach
January 1, 2020 1:03 pm

Willis,

I see what happened, and the problem is as I stated earlier: you’re considering losses that aren’t really losses, and that’s why your 1 layer model doesn’t work. I know a single layer model can work because I have an existence proof that both single layer and multi-layer models of the atmosphere work just fine and both arrive at the same, verifiable, steady state result. In fact, my basic sanity test for an N-layer model is that it must get the same steady state result as an N-1 layer model; otherwise, the two models aren’t modeling the same steady state balance.

If a 1 layer model doesn’t return enough energy to the surface, but a 2 layer model based on the same assumptions does, then one or more of the models isn’t actually modeling the balance.

Look at the comments in the piece of C code I linked to earlier to see how a 1-layer model of the radiant balance works. This balance model can be extended as a parallel collection of grid cells, each represented as a sequence of 4×4 transforms representing the average behavior for that cell applying 1 transform per layer between the surface and space. Since superposition applies to joules, this can be collapsed into a single 4×4 transform representing the entire column as a single EQUIVALENT layer representing the average.

It’s also necessary to account for flux between the grid cells, which is why I like slices of latitude, since E/W flux cancels and average N/S flux per slice is more readily established. In addition, the average solar forcing is relatively constant across slices of latitude, allowing the relative behavior for varying solar forcing to emerge as the differences between slices which are otherwise topographically similar.

Regarding the steady state balance, you must consider 2 orthogonal energy fluxes with no NET conversion between them by the layer(s). The two fluxes are the energy transported by photons and the energy transported by matter. Trenberth’s error of conflating the two is the source of many other errors, but then again, this seems to have been the intent.

Only the energy transported by photons can contribute to the RADIANT balance. The energy transported by matter only serves to redistribute existing energy between and among the surface and the atmosphere. Trenberth incorrectly considers the redistribution of existing energy to contribute to the radiant balance, when how energy is distributed on average has a zero-sum effect on the balance. It may affect the temperature and subsequent radiant emissions, but whatever effect it’s having is already accounted for by the temperature and radiant emissions, and a balance will be achieved regardless. Note the difference between modelling the balance and modelling the low-level behavioral interactions that one hopes will result in the emergence of a proper balance (i.e. a GCM). Trenberth is attempting to model things specific to the latter with the former.

It looks like you’re mischaracterizing sensible heat. The energy balance is a reflection of the steady state, which for all intents and purposes is defined as when the average sensible heat in and out of the surface (and atmosphere) is zero. If not, then either it’s not in the steady state, or matter will be heating and/or cooling without bound. In this case, sensible heat entering the atmosphere would cause it to get hotter and hotter as the surface gets colder and colder. This clearly isn’t the case, and what you’re calling sensible heat is being offset as part of the back radiation term, which can only be modeled as sensible heat returning to the surface. Subtract the offset of this and the latent heat term from the ‘back radiation’ term, and then add what’s left to the solar input to offset the surface emissions. Most, if not all, of the non-radiant energy returned to the surface is returned by matter anyway, and not photons. Calling it ‘back radiation’ tends to hide this.

The only way to convert the energy transported by matter into energy transported by photons is when that matter radiates energy away. However; for that matter to be in a steady state equilibrium, it must be absorbing the same as it’s emitting, so whatever is emitted by matter in the atmosphere is replaced by subsequent absorption. If this wasn’t the case, the temperature of that atmospheric matter would either increase or decrease without bounds.

I think Trenberth just wanted to make the ‘back radiation’ term seem more important by including the return path of energy redistribution between the surface and atmosphere and implying that it’s mostly from GHG’s. In fact, most of the actual radiant component of the ‘back radiation’ term are cloud emissions returning to the surface. He did a similar thing by carving out solar energy absorbed by the atmosphere (clouds) and calling that part of the ‘back’ radiation term which is really still forward radiation from clouds.

Please examine this scatter plot which isolates the radiant behavior of the planet from the redistribution of existing energy by matter.

http://www.palisad.com/co2/tp/fig1.png

The thin green line is a prediction of my single layer model of the RADIANT balance along the path from the surface to space. Each small red dot is 1 month of measured data for each 2.5 degree slice of latitude from pole to pole across 3 decades of data and the larger green and blue dots are 3 decade averages for each slice. The correspondence to the predictions of the single layer model is very strong and unambiguous.

It gets even more interesting along the input path where you must account for energy passing N/S between slices. The effect of this is to bias surface emissions up by half of the absorption such that the incremental effect of solar forcing becomes 1 W/m^2 of surface emissions per W/m^2 of solar forcing. The steady state is defined to be where the biased up input path intersects with the unbiased output path of 1.62 W/m^2 of surface emissions per W/m^2 of forcing.

http://www.palisad.com/co2/tp/fig2.png

In this plot, the magenta line is the prediction of the input path and the red dots represent the per-slice relationships between the solar input and the surface temperature. The steady state is where the magenta line intersects the green line.

1sky1
Reply to  Willis Eschenbach
December 31, 2019 3:14 pm

The Trenberth diagram is misleading in that the emission is different going upwards and downwards. Here’s my more accurate version …

Actually, both diagrams are highly misleading, because they create the impression that radiation is a far more important mechanism in setting the surface temperature than the sum of evaporation and convection. They accomplish this aphysical illusion by mixing concepts of unilateral heat transfer by the latter with bilateral, directional exchange of radiation. In reality, only the NET heat transfer by any mechanism is what truly matters.

Wade Burfitt
December 30, 2019 4:25 pm

”Change in temperature is equal to climate sensitivity times the change in downwelling radiation.”
In the hopes that there are no stupid questions… how is downwelling radiation possible? Radiation heat transfer only occurs from a source at a higher temperature to a receiver at a lower temperature. The temperature of the atmosphere cools rapidly as one moves up in altitude. At 15-20 km the temperature is -50 to -70 C, temperatures far below that of the surface or lower atmosphere. Energy cannot radiate from low to high temperature. So how is downwelling possible?

Wade Burfitt
Reply to  Willis Eschenbach
January 1, 2020 5:38 pm

Mr Eschenbach,
Thank you very much for taking the time to provide such an excellent explanation.
Cheers and Happy New Year
Wade

Reply to  Wade Burfitt
December 31, 2019 1:41 pm

The proper formula for radiative heat transfer between 2 objects is of the form
Q = Factor x [Thot^4 - Tcold^4]
Climate scientists like to call the [-Tcold^4] term “back radiation”. With this in mind, hold up a 20 C piece of paper in front of your face. Your face at 32 C is radiating heat to the paper at a rate of 490 watts/sq. m, and the paper is radiating back to your face at 420 watts/sq. m. An engineering graduate will just use the whole formula and say your face is radiating 70 watts/sq. m to the paper. As my old profs used to say, using the whole formula keeps numerous potential heat transfer and thermodynamic errors from being made.
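A quick numerical check of those figures, treating face and paper as blackbodies (emissivity of one is a simplification):

    # Check of the face-and-paper numbers using Q = sigma*(Thot^4 - Tcold^4).
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2 K^4)

    face = 32.0 + 273.15   # K
    paper = 20.0 + 273.15  # K

    q_face = SIGMA * face ** 4    # ~490 W/m2 from the face toward the paper
    q_paper = SIGMA * paper ** 4  # ~420 W/m2 'back radiation' from the paper
    print(q_face - q_paper)       # net ~70 W/m2, face to paper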

Reply to  DMacKenzie
January 3, 2020 7:28 am

succinct and accurate. Well-played, sir!

Reply to  DMacKenzie
January 3, 2020 7:29 am

And, one more thing: the paper does not make your face warmer!!!

Herbert
December 30, 2019 10:06 pm

Willis,
Correct me if I am wrong, but taking the ECS definition you cite, we have instances in the geological record of a doubling of CO2 in several epochs.
I am looking at a graph taken from Bernier 2001 of average CO2 levels in the last 11 geological periods, stretching over the last 600 million years.
As well as showing that our current geologic period (Quaternary) has the lowest average CO2 levels in the history of the Earth, I can see periods where the global CO2 doubled over thousands of years.
From the Pre-Cambrian period (600 MBP to 550 MBP) to the Cambrian period (550 MBP to 500 MBP) there is a rise from ~3500 ppm to ~7800 ppm.
In the Permian period (300 MBP to 250 MBP) the rise is from ~450 ppm to ~2000 ppm.
In the Jurassic period (200 MBP to 150 MBP) the rise is from ~1200 ppm to ~2900 ppm.
These are all eyeball assessments of the figures from the graph.
What were the temperature movements (indicating ECS) in these periods?
A: I don’t know, except to say there were no “tipping points” or “runaway global warming”.
Geologists should be able to say whether the ECS in these various periods was constant or not.

Pat Smith
January 1, 2020 9:19 am

One of the most interesting and informative posts I have ever come across on WUWT or anywhere else, covering a vast swathe of the basic physics. I have a question (which may have been asked and answered in the 200 comments above, and which I missed) that concerns the relative size of the downwelling at various points of the globe. You take the downwelling caused by a doubling of CO2 to be 3.7 W/m2, which is the commonly accepted number. Presumably, this is a much larger relative amount of radiation at the poles, where the sun’s radiation is much smaller and the longwave radiation going upward is similarly much less, as the surface is much colder. MODTRAN shows that the upward longwave radiation flux might be half or even a third of that at the tropics, so a fixed number of 3.7 W/m2 would have a much greater effect. Is this true?

January 7, 2020 9:45 am

Willis,

written in that curious language called “math” it is

∆T = λ ∆F Equation 1 (and only)

where T is surface temperature, F is downwelling radiative forcing, λ is climate sensitivity, and ∆ means “change in”

I hold that the error in that equation is the idea that lambda, the climate sensitivity, is a constant. Nor is there any a priori reason to assume it is constant.

____________________________________

Me holds that “climate sensitivity is [sold as] a constant”;

In fact, it has been claimed from the beginning to serve as a “models control knob”, used to adjust the observed, seen, lived-through climate to the model outputs – and never the other way round.