Effective Radiation Level (ERL) Temperature

Guest Post by Willis Eschenbach

Lord Monckton has initiated an interesting discussion of the effective radiation level. Such discussions are of value to me because they strike off ideas of things to investigate … so again I go wandering through the data.

Let me define a couple terms I’ll use. “Radiation temperature” is the temperature of a blackbody radiating a given flux of energy. The “effective radiation level” (ERL) of a given area of the earth’s surface is the level in the overlying atmosphere which has the physical temperature corresponding to the radiation temperature of the outgoing longwave radiation of that area.

Now, because the earth is in approximate thermal steady-state, on average the earth radiates the amount of energy that it receives. As an annual average this is about 240 W/m2. This 240 W/m2 corresponds to an effective radiation level (ERL) blackbody temperature of -18.7°C. So on average, the effective radiation level is the altitude where the air temperature is about nineteen below zero Celsius.
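If you want to check that arithmetic yourself, a few lines of Python will do it. This is a minimal sketch: the round 240 W/m2 and the constant are the only inputs, and with those round numbers the answer comes out near -18°C; the -18.7°C figure corresponds to a slightly smaller flux average (about 238 W/m2), so the exact value depends on the dataset used.

```python
# Invert the Stefan-Boltzmann law, F = sigma * T^4, to find the blackbody
# temperature that radiates the observed average outgoing flux.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiation_temperature(flux_w_m2):
    """Blackbody temperature (kelvin) that radiates the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

t_k = radiation_temperature(240.0)
t_c = t_k - 273.15
print(round(t_k, 1), round(t_c, 1))  # about 255 K, roughly -18 C
```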

However, as with most things regarding the climate, this average conceals a very complex reality, as shown in Figure 1.

Figure 1. ERL temperature as calculated by the Stefan-Boltzmann equation.

Note that this effective radiation level (ERL) is not a real physical level in the atmosphere. At any given location, the emitted radiation is a mix of some radiation from the surface plus some more radiation from a variety of levels in the atmosphere. The ERL reflects the average of all of that different radiation. As an average, the ERL is a calculated theoretical construct, rather than being an actual level from which the radiation is physically emitted. It is an “effective” layer, not a real layer.

Now, the Planck parameter is how much the earth’s outgoing radiation increases for a 1°C change in temperature. Previously, I had calculated the Planck parameter using the surface temperature, because that is what is actually of interest. However, this was not correct. What I calculated was a value after feedbacks. But the Planck parameter is a pre-feedback phenomenon.

If I understand him, Lord Monckton says that the proper temperature to use in calculating the Planck parameter is the ERL temperature. And since we’re looking for pre-feedback values, I agree. Now, this is directly calculable from the CERES data. Remember that the ERL is defined as an imaginary layer whose temperature is calculated using the Stefan-Boltzmann equation. So by definition, the Planck parameter is the derivative of that Stefan-Boltzmann equation with respect to temperature.

This derivative works out to be four times the Stefan-Boltzmann constant times the temperature cubed. Figure 2 shows that value using the temperature of the ERL as the input:

Figure 2. Planck parameter, calculated as the derivative of the Stefan-Boltzmann equation.

Let me note here that up to this point I am agreeing with Lord Monckton, as this is a part of his calculation of what he calls the “reference sensitivity parameter λ0” (which is minus one divided by the Planck parameter). As discussed in Lord Monckton’s earlier post, he finds a value of 0.267 °C / W m-2 up to this point, which is the same as the Planck parameter of -3.75 W/m2 per °C shown in Figure 2.
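Both of those numbers are easy to reproduce. This is a sketch using the round 240 W/m2 average, so the exact figures shift slightly with the flux used; the sign convention just marks the Planck parameter as an energy-loss term.

```python
# Planck parameter as the derivative of the Stefan-Boltzmann law:
# d(sigma*T^4)/dT = 4*sigma*T^3, evaluated at the ERL temperature.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_parameter(t_kelvin):
    """Change in emitted flux (W/m^2) per 1 degree C change in temperature."""
    return 4.0 * SIGMA * t_kelvin ** 3

t_erl = (240.0 / SIGMA) ** 0.25   # ERL temperature for 240 W/m^2, ~255 K
lam = planck_parameter(t_erl)     # magnitude ~3.76 W/m^2 per degree C
lambda_0 = 1.0 / lam              # reference sensitivity, ~0.27 degrees C per W/m^2
print(round(lam, 2), round(lambda_0, 3))
```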

Now again if I understand both Lord Monckton and the IPCC, for different reasons they both say that the value derived above is NOT the proper value. In both cases they say the raw value is modified by some kind of atmospheric or other process, and that the resulting value is on the order of -3.2 W m-2 / °C. (In passing, let me state I’m not sure exactly what number Lord Monckton endorses as the correct number or why, as he is not finished with his exposition.)

The problem I have with that physically-based explanation is that the ERL is not a real layer. It is a theoretical altitude that is calculated from a single value, the amount of outgoing longwave radiation. So how could that be altered by physical processes? It’s not like a layer of clouds, that can be moved up or down by atmospheric processes. It is a theoretical calculated value derived from observations of outgoing longwave radiation … I can’t see how that would be affected by physical processes.

It seems to me that the derivative of a theoretically calculated value like the ERL temperature can only be the actual mathematical derivative itself, unaffected by any other real-world considerations.

What am I missing here?


My Request: In the unlikely circumstance that you disagree with me or someone else, please quote the exact words you disagree with. Only in that way can we all be clear as to exactly what you object to.

A Bonus Graphic: The CERES data is an amazing dataset. It lets us do things like calculate the nominal altitude of the effective radiation layer all over the planet. I did this by assuming that the lapse rate is a uniform 6.5°C of cooling for every additional kilometre of altitude. This assumption of global uniformity is not true, because the lapse rate varies both by season and by location. Calculated by 10° latitude bands, the lapse rate varies from about three to nine °C cooling per kilometre from pole to pole. However, using 6.5°C / km is good for visualization. To establish the altitude of the ERL, I divided the difference between the surface temperature and the ERL temperature by 6.5 degrees C per km. To that I added the elevation of the underlying surface, which is available as a 1°x1° gridcell digital dataset in the “marelac” package in the R computer language. Figure 3 shows the resulting nominal ERL altitude:
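The nominal-altitude arithmetic described above is just this; the 15°C surface temperature in the example is an arbitrary illustrative value, not a CERES number.

```python
# Nominal ERL altitude: the temperature difference between the surface and
# the ERL, converted to height with an assumed uniform 6.5 C/km lapse rate,
# plus the elevation of the underlying surface.
LAPSE_RATE_C_PER_KM = 6.5  # assumed globally uniform (it really varies ~3-9)

def erl_altitude_km(t_surface_c, t_erl_c, surface_elevation_km=0.0):
    """Nominal altitude of the ERL above sea level, in kilometres."""
    return (t_surface_c - t_erl_c) / LAPSE_RATE_C_PER_KM + surface_elevation_km

# Illustrative values only: a 15 C surface at sea level, ERL at -18.7 C.
print(round(erl_altitude_km(15.0, -18.7), 2))  # ~5.18 km
```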

Figure 3. Nominal altitude of the effective radiation level.

The ERL is at its lowest nominal altitude around the South Pole, and is nearly as low at the North Pole, because that’s where the world is coldest. The ERL altitude is highest in the tropics and in temperate mountains.

Please keep in mind that Figure 3 is a map of the average NOMINAL height of the ERL …

A PERSONAL PS—The gorgeous ex-fiancee and I are back home from salmon fishing, and subsequent salmon feasts with friends along the way, and finally, our daughter’s nuptials. The wedding was a great success. Just family from both sides in the groom’s parents’ lovely backyard, under the pines by Lake Tahoe. The groom was dashingly handsome, our daughter looked radiant in her dress and veil, my beloved older brother officiated, and I wore a tux for the first time in my life.

The wedding feast was lovingly cooked by the bride and groom assisted by various family members, to the accompaniment of much laughter. The bride cooked her famous “Death By Chocolate” cake. She learned to cook it at 13 when we lived in Fiji, and soon she was selling it by the slice at the local coffee shop. So she baked it as the wedding cake, and she and her sister-in-law-to-be decorated it …

Made with so much love it made my eyes water, now that’s a true wedding cake for a joyous wedding. My thanks to all involved.

Funny story. As the parents of the bride, my gorgeous ex-fiancee and I were expected by custom to pay for the wedding, and I had no problem with that. But I didn’t want to be discussing costs and signing checks and trying to rein in a plunging runaway financial chariot. So I called her and told her the plan I’d schemed up one late night. We would give her and her true love a check for X dollars to spend on the wedding … and whatever they didn’t spend, they could spend on the honeymoon. The number of dollars was not outrageous, but it was enough for a lovely wedding.

“No, dad, we couldn’t do that” was the immediate reply. “Give us half of that, it would be plenty!”

“Damn, girl …”, I said, “… you sure do drive a hard bargain!” So we wrote the check for half the amount, and we gave it to her.

Then I created and printed up and gave the graphic below to my gorgeous ex-fiancee …

… she laughed a warm laugh, the one full of summer sunshine and love for our daughter, stuck it on the refrigerator, and after that we didn’t have a single care in the world. Both the bride and groom have college degrees in Project Management, and they took over and put on a moving and wonderful event. And you can be sure, it was on time and under budget. Dear heavens, I have had immense and arguably undeserved good fortune in my life …

I’m happy to be back home now from our travels. I do love leaving out on another expedition … I do love traveling with my boon companion … and I do love coming back to my beloved forest in the hills where the silence goes on forever, and where some nights when the wind is right I can hear the Pacific ocean waves breaking on the coast six miles away.

Regards to all, and for each of you I wish friends, relations, inlaws and outlaws of only the most interesting kind …


September 6, 2016 9:41 pm

So the ERL is a ‘locked in value’. 240W/m2 and -18.7 C. Was this value taken and calculated on a particular day/ date/ timeframe? And all subsequent ‘changes’ are anomalies?

george e. smith
Reply to  Willis Eschenbach
September 7, 2016 6:07 am

Willis does your idea of “Radiation Temperature” match the Stefan-Boltzmann total radiant emittance to the Black Body Temperature, or do you match the Planck Black Body Radiation Spectrum peak spectral radiant emittance wavelength (or frequency) to the Temperature of the Planck Spectrum ??
The first would assume that the total earth is a black body with 100% radiant emissivity, while the second would assume it is a grey body, with a less than 100% constant emissivity.
It seems the experimental measurability of those two variables is somewhat different.

Reply to  Willis Eschenbach
September 7, 2016 8:01 am

However, the local values at any given point are changing with the seasons.

To me, it seems we miss out on a lot of texture when we just smooth over the seasons (as one example).
Because it’s at the theoretical value only about 2 days a year. And what does it do at night?

Mark - Helsinki
Reply to  Willis Eschenbach
September 7, 2016 8:31 am

the earth isn’t a black body (for that, there would have to be no convection), and the sun isn’t one either.

Reply to  Willis Eschenbach
September 7, 2016 3:12 pm

Willis, if the ERL is a pseudo-surface that is in essence the elevation at which outgoing and incoming energy are equal, then would it not in effect be the equivalent of the geoid? As regards the ERL being a theoretical surface, so is the geoid, which marks a gravitational equipotential. It is affected slowly by the movement of masses within the earth and even, very slightly, by the mass of the atmosphere above. Wouldn’t changes in thermal mass below and incoming energy from above (since the sun is not quite constant) cause the ERL to undulate like the surface of a viscous liquid?

Reply to  Willis Eschenbach
September 7, 2016 3:14 pm

For “equivalent” read “analogous to”.

Reply to  Willis Eschenbach
September 7, 2016 5:28 pm

“However, the local values at any given point are changing with the seasons.”
I would think they change with night and day even . . It seems to me you are speaking of something rather like sea level, which varies locally with tides and currents and waves . . and latitude, yet is “real” enough in an effective sense, such that an average is usefully spoken of as if a single “surface” for calculative and understanding purposes of various sorts . .

Walter Sobchak
September 6, 2016 9:48 pm

More or less than $50,000?

Walter Sobchak
Reply to  Willis Eschenbach
September 7, 2016 8:05 am

And I thought my daughter ran an inexpensive wedding. But, she was paying big city prices.

Reply to  Willis Eschenbach
September 7, 2016 10:22 am

I am glad you and yours had a wonderful life defining event!
Plus, from the national news, it sounds like it was a good year to miss the Burning Man festival.
Many years of happiness to all!

Reply to  Willis Eschenbach
September 7, 2016 10:38 am

Wedding expense correlates negatively with marriage duration, so yours are off to a very good start. All the best.

Reply to  Mike Jonas
September 7, 2016 12:56 pm

Reminds me of a plot I saw of per-pupil school expenditures in Massachusetts town-by-town. Standardized test scores had a very distinct inverse correlation with per pupil expenditures. Makes a pretty good argument for reducing school budgets.

Reply to  Willis Eschenbach
September 7, 2016 1:19 pm

So… there’s another gorgeous ex-fiancee in your life? 😉

September 6, 2016 10:46 pm

The answer is that the earth is not a perfect black body, and so the reference system is modified to include some long-wavelength absorption from the atmosphere. If you read the literature about feedbacks you will see that lambda_0 is defined as the ratio of the change in output for an idealised system to the change in the input for an arbitrary reference system. You are thus free to decide for yourself what counts as the reference system and what counts as feedback. If you choose an ideal black body as the reference system then you have to include the long-wavelength absorption of the atmosphere (above the idealised emitting layer) as a feedback that will heat up the earth. But (and I am guessing) since this is very quick compared to other processes (such as the Arctic melting) it makes sense to include this in the reference system and only consider slow changes as the feedbacks.

Walter Sobchak
Reply to  Geronimo
September 7, 2016 8:05 am

What does that have to do with the cost of a wedding?

Reply to  Walter Sobchak
September 7, 2016 8:29 am

Just think of your wallet as a black body radiating $100-dollar bills. The local equilibrium level is reached when the number of outgoing $100-dollar bills = a wedding successfully pulled off with no loan sharks hunting for your kneecaps with baseball bats due to any disequilibrium in wedding-related expenses.

charles nelson
September 6, 2016 10:51 pm

What a pickle one gets oneself into when averaging out values for an entire planet.
How absurd and irrelevant those numbers become?
What motivates ‘scientists’ to attempt this and to keep trying to draw conclusions and make predictions based on calculations using these meaningless numbers?
You would think as scientists they could address this compulsion.

george e. smith
Reply to  charles nelson
September 7, 2016 6:18 am

When you start filtering, you begin a process of throwing away information often very expensively obtained. If you continue the process over any finite amount of time, you ultimately can end up with a SINGLE number.
But now you have NO information whatsoever. And the problem with the output of your ZERO bandwidth low pass filter is that you have no idea where in the phase of that zero frequency cycle your number actually is.
Given that we have measured values for local Temperatures on the earth’s actual surface, that cover an extreme range from about +60 deg. C down to about -95 deg. C (well -94 anyway), I would expect the filter can give you almost any value in that range depending on what set of location points you actually measure.
Well, it’s an exercise for people with a teraflop computer sitting around idle.

Reply to  george e. smith
September 7, 2016 11:18 am

Filtering is done in an effort to improve the signal to noise ratio and is perfectly appropriate if concerned with the long term trends. The information you are discarding is the high frequency variations which by definition have no bearing on the trend.

george e. smith
Reply to  george e. smith
September 7, 2016 3:20 pm

The information you are discarding is the fluctuations in the signal.
Those are measured numbers; they aren’t noise.
What bearing does a “trend” calculated from a time limited data set have on anything that is outside that time limited window?
So just what is the trend for the last 4.5 billion years during which life is supposed to have existed on earth ??
There are NO algorithms of statistical mathematics that can tell you whether the very next observed point in a time data set will be higher, or lower or the same as the very last known point in that data set. Nor for that matter can it tell you that information for ANY single known data point in the set.
The results of such analyses are valid only for the specific set of elements in the data set to which the algorithm is applied. Such machinations work perfectly well for ANY finite data set of finite real numbers and they guarantee NO relationships of any kind between those finite real elements. It’s numerology.
One is simply making (or faking) something out of nothing.
NO real physical system responds in any way to ANY kind of statistical mathematics calculation of any or all of the system variables.
Real systems respond only to instantaneous real time values of variables.

Reply to  george e. smith
September 8, 2016 5:13 pm

You can get your TF for about $600 now… (some programming required):
NVIDIA Jetson TX1 Development Kit (945-82371-0000-000)
Price: $599.99
256 CUDA cores
1 TFLOP (FP16) peak performance

Reply to  george e. smith
September 9, 2016 7:22 am

e. smith – I agree that time-trends are meaningless but there are other trends besides those involving time. For example we can look at the trend of temperature versus CO2 concentration.
If we have some hypothetical physical model in mind to explain the relationship and we wish to find the model parameters which best explain the data we only help ourselves by removing the high frequency variations that have nothing to do with the physical relationship we are trying to model. In this context, the high frequency variance _is_ noise, meaning extraneous to the signal of interest. And filtering does not discard the data, it extracts (by means of a weighted sum) the information pertinent to the time-scale of interest.
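The “weighted sum” described here can be illustrated with the simplest possible filter, a moving average. The series, window length, and noise level below are made-up illustrative choices, not anything from the climate record.

```python
import random

# A slow trend (the "signal of interest") buried in high-frequency noise.
random.seed(0)
n = 500
trend = [0.01 * i for i in range(n)]             # slow signal of interest
x = [t + random.gauss(0.0, 1.0) for t in trend]  # plus high-frequency variation

window = 51        # filter length: a hypothetical choice of time-scale
half = window // 2
# Moving average: each output point is a weighted (here, uniform) sum of inputs.
smoothed = [sum(x[i - half:i + half + 1]) / window for i in range(half, n - half)]

# Compare how far the raw and filtered series sit from the underlying trend.
centers = range(half, n - half)
raw_err = sum((x[i] - trend[i]) ** 2 for i in centers) / len(smoothed)
flt_err = sum((s - trend[i]) ** 2 for s, i in zip(smoothed, centers)) / len(smoothed)
print(raw_err, flt_err)  # the filtered error is much smaller
```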

Reply to  charles nelson
September 7, 2016 7:30 am

Stochastic models are not new or controversial.

Reply to  JonA
September 7, 2016 10:28 am

Now, that is an unusual claim.
Almost every stochastic model is controversial unless fully tested and certified; and even then viewed suspiciously.

Reply to  charles nelson
September 7, 2016 11:16 am

I don’t have the problem that some do with the idea of a mean global surface temperature. It is just a proxy for the over-all energy contained in the system, which at any instant in time has some unknowable value, but a value nonetheless. Averaged over time and space, the variance tends toward zero. If the objective is to determine local weather patterns from such a proxy, then we agree that is a fool’s errand. But if the objective is to make policy-relevant projections about the future state of the climate, then the proxy is informative if, and it’s a big if, it can be shown that local _climates_ (not weather) correlate to the MGST over the relevant decadal time-scales and at various locales. I’ve never done this analysis but I assume someone has. If it could be shown that the correlation is nonexistent then you have a point, but I doubt this would be true since the MGST is determined from these local time series.
This is also why I don’t get too concerned with the data “adjustments” as long as they are done homogeneously across the entire dataset. Such errors also tend to average out and do not have much impact on a model regressed over the entire series.

Reply to  Jeff Patterson
September 7, 2016 1:53 pm

“This is also why I don’t get too concerned with the data “adjustments” as long as they are done homogeneously across the entire dataset. Such errors also tend to average out and do not have much impact on a model regressed over the entire series.”
You put fairly clear, fairly strict, conditions on data-fudging.
“Homogeneously” for one.
I agree, but wonder whether all data-bludgeoning over the last decades would meet your [and my] criteria.
MWP ‘elimination’ suggests that not all such data-torture is acceptable.
And certainly not as the base for a religion – or a socialist, world-changing, movement.
Auto – absolutely a Nobel prize-co-winner [as absolutely an actual EU Citizen when the EU won a prize for a better pair of pyjamas, or similar]. Like c. 500,000,000 others, I suppose.

Reply to  Jeff Patterson
September 7, 2016 3:08 pm

In my comment I was referring to the instrumental period. The proxies you refer to are subject to all manner of mischief, which Steve McIntyre has done a remarkable job of exposing. He is much more circumspect than I would be about assigning motives. Given what he has uncovered regarding selective use of proxies and “unorthodox” statistical methods, I can’t see how the charge of scientific fraud could not be sustained.

george e. smith
Reply to  charles nelson
September 7, 2016 3:32 pm

So Jeff, what is the appropriate “bandwidth” and the appropriate “matched filter” shape that will maximize the signal to noise ratio of the measurement of the Earth’s mean Temperature ??
If filtering improves the signal to noise ratio there is just one filter that will maximize the signal to noise ratio: the “matched filter”. Well other filters which are more practical to implement can come close; within a half dB of the perfect matched filter in the case of a Gaussian filter, for one particular type of signal.
But that still requires the correct choice of “bandwidth” for that filter. So what is the optimum bandwidth for the earth Temperature filter ??

Reply to  george e. smith
September 7, 2016 4:18 pm

The optimal matched filter presumes stationarity which unfortunately may not be assumed. This does not mean filtering doesn’t improve the SNR but rather that the optimum SNR is not achievable over long time scales. But as long as the cut-off frequency of a properly designed filter is large relative to 1/time-scale of interest, the filter will be stochastically transparent meaning that the off-peak autocorrelation function will be unaltered.

September 6, 2016 10:59 pm

“…the IPCC…say the raw value is modified by some kind of atmospheric or other process, and that the resulting value is on the order of -3.2 W m-2 / °C .
The problem I have with that physically-based explanation is that the ERL is not a real layer. It is a theoretical altitude that is calculated from a single value, the amount of outgoing longwave radiation. So how could that be altered by physical processes? It’s not like a layer of clouds, that can be moved up or down by atmospheric processes. It is a theoretical calculated value derived from observations of outgoing longwave radiation … I can’t see how that would be affected by physical processes.”
The IPCC-reported value is nothing like taking some imaginary emission level and differentiating the Stefan-Boltzmann equation. That’s nonsense, because as you point out the TOA emission is a mixture of what comes from the surface and what comes from all levels of the atmosphere.
A common way in which it is calculated is described in the methods section of this paper:

M Courtney
Reply to  MieScatter
September 6, 2016 11:38 pm

The methods section you link to seems to be very speculative. Uncertainty is only about 10% for something not measured over a quarter of a millennium. Should have been peer reviewed.

Feedback calculations are performed for climate change simulations from 14 different coupled ocean–atmosphere models integrated with projected increases in well-mixed greenhouse gases and aerosols as prescribed by the IPCC Special Report on Emissions Scenarios (SRES) A1B scenario (Table 1). This scenario corresponds roughly to a doubling in equivalent CO2 between 2000 and 2100, after which time the radiative forcings are held constant. The estimated radiative forcing (i.e., the change in the global mean net radiative flux at the tropopause holding all other inputs to the radiative transfer fixed) under this scenario is 4.3 W m-2 [IPCC Third Assessment Report (Ramaswamy et al. 2001), Tables 6.14 and 6.15]. The uncertainty in forcings is estimated to be 10% for the period 1750–2000 (Ramaswamy et al. 2001, p. 351), which includes uncertainty in the forcing given the concentrations as well as uncertainty in the historical concentrations of the forcing agents themselves. The uncertainty in projected forcings for 2000–2100 given the SRES A1B assumptions is presumably smaller since the concentrations of dominant forcing agents are specified.
Unfortunately, the data required for directly comparing model forcings are not yet available.

Reply to  M Courtney
September 7, 2016 12:14 am

“Uncertainty is only about 10% for something not measured over a quarter of a millennium. “
The relation between forcing and GHG is not gotten from observations over long time. It is calculated from the known radiative properties of GHG etc, which are not expected to change much in that time. That is what the 10% relates to. The scenario is of course speculative, but well defined (and not by S&H).

Paul Penrose
Reply to  M Courtney
September 7, 2016 1:59 pm

My problem is that this is all computer simulations again – written by amateurs (in software engineering), never properly designed, reviewed, tested, and verified. You can trust the output of these simulations about as much as you would a home-brew, uncalibrated instrument.

M Courtney
Reply to  MieScatter
September 7, 2016 8:49 am

Nick Stokes, Thank you for the knowledgeable reply.
However, the uncertainty from the lack of measurements that I was referring to was this.

as well as uncertainty in the historical concentrations of the forcing agents themselves

10% for soot and SOx concentrations over that period? Ice core measurements assume a well mixed atmosphere to be meaningful – that doesn’t apply for everything. We certainly don’t have a theory that can deduce that at every latitude.
This just doesn’t look justified.

Reply to  MieScatter
September 8, 2016 9:33 pm

Paul Penrose, which radiation-scheme verification and validation papers have you read? Why, specifically, do you think that they are all wrong?

September 6, 2016 11:21 pm

Hi Willis. It is possible to take this further by looking at the weighting of respective radiating bodies (surfaces, clouds and proportional cloud types and clear sky). Each has a calculable emissivity to a degree. This represents more truly how the Earth answers to space. It doesn’t do so as a black body. Atmospheric components that block direct surface radiations through opacity raise atmospheric emissivity.
As we have little care about representing the Earth’s temperature as a single averaged number, the most accurate one is from an ERL that is calculated through a weighted mean emissivity. This gives an effective mean radiative temperature much higher than 255K. Actually it is more like 280K and tells us that most radiations come from relatively high temperatures close to the Earth’s surface weighted towards those higher temperatures.
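That grey-body arithmetic can be checked directly. The 280 K figure is the comment’s own claim, not an observed value; given it, the implied effective emissivity falls out of the Stefan-Boltzmann relation.

```python
# Grey-body check: if ~240 W/m^2 leaves the planet but the effective
# radiating temperature is near 280 K (the comment's figure), the implied
# effective emissivity is well below 1.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def grey_body_emissivity(flux_w_m2, t_kelvin):
    """Solve F = eps * sigma * T^4 for the effective emissivity eps."""
    return flux_w_m2 / (SIGMA * t_kelvin ** 4)

eps = grey_body_emissivity(240.0, 280.0)
print(round(eps, 2))  # ~0.69
```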
Best regards.

Reply to  nuwurld
September 6, 2016 11:30 pm

nuworld: you’re describing partly how it’s done. The main differences between your suggestion and the actual calculations are:
1) non-uniform temperature is accounted for by solving for atmospheric profiles at each lat-lon location on Earth.
2) the full radiative transfer equation is solved at each level for a number of wavelength bands. The solvers used may vary, but you end up with an emissivity defined for a layer by integrating the absorption coefficients through the layer thickness during the solution.
The overall method is explained in the methods here:

September 6, 2016 11:44 pm

“What am I missing here?”
Much less than Lord M. And this is all good:

The problem I have with that physically-based explanation is that the ERL is not a real layer. It is a theoretical altitude that is calculated from a single value, the amount of outgoing longwave radiation. So how could that be altered by physical processes? It’s not like a layer of clouds, that can be moved up or down by atmospheric processes. It is a theoretical calculated value derived from observations of outgoing longwave radiation … I can’t see how that would be affected by physical processes.

But the thing is, the IPCC, citing S&H don’t use ERL. They deal with a direct dependence of ΔT_S on ΔF₀, not through an intermediate ERL, but through the various intermediate (feedback) variables T, water, albedo etc. And by T they mean space/time continuum T. Over some cycle (annual, I think), they look at how each cell/3hr period value of T affects F, and add them all. They say:

To compute Kx, we first calculate the control top-of-the-atmosphere (TOA) radiative fluxes using 3-hourly values of temperature, water vapor, cloud properties, and surface albedo from a control simulation of the GFDL GCM. For each level k, the temperature is increased by 1 K and the resulting change in TOA fluxes determines (∂R/∂T_k).

And they get a rather complete set of information, which they separate into the dependence you would have with uniform change, λ₀, and the rest that is due to the way ΔT changes over altitude and latitude, λ_L.
Here’s their lat/alt map.
More details here.

Reply to  Nick Stokes
September 7, 2016 12:40 am

It looks like if you torture the data enough you can get an image of a Teenage Mutant Ninja Turtle

Reply to  Alex
September 7, 2016 12:57 am

“if you torture the data enough”

Reply to  Alex
September 7, 2016 1:24 am

The Scream came to mind first. The mask around the eyes made me think -TMNT.
Keep up the good work. I wasn’t having a go at you or your explanations

Reply to  Alex
September 7, 2016 3:10 pm

“if you torture the data enough”
Hilarious. Bravo.

Reply to  Nick Stokes
September 7, 2016 12:59 am

The thing is, in order for an increase in atmospheric opacity to outgoing IR to be able to create actual warming (an absolute rise in T) at the altitude-specific levels from the tropopause down to the surface, the OLR through the ToA needs to go down for any given initial set of temperatures. Meaning that, if T (including T_s) stays the same, then T_e drops (Z_e goes up without parallel warming, forcing subsequent warming), and if T_e stays the same, then T (including T_s) increases (Z_e goes up with parallel warming). This is THE “greenhouse” warming mechanism, “the raising of the ERL”, as schematically illustrated here (Soden & Held, 2000):

Reply to  Willis Eschenbach
September 7, 2016 1:29 am

“It sounds like the IPCC is saying that the Planck parameter lambda zero is the change in radiation dW if there is a change in temperature of 1°C from the surface up to the top of the atmosphere. Is this your understanding?”
Yes, I think so, as per the S&H extract quoted. They work out the effect of perturbing each cell by 1°. Then they add up to get the effect of perturbing them all by 1° equally, and call that Planck. The difference averaged is called Lapse Rate feedback.
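A toy version of that bookkeeping, with made-up layer sensitivities standing in for the GCM radiative-kernel values: the “Planck” part warms every layer by the same 1°, and the “lapse rate” part is the residual when each layer instead warms by its own amount.

```python
# Toy split of the TOA flux response into a Planck term and a lapse-rate
# term. The numbers below are invented; in Soden & Held they come from
# radiative-kernel calculations with a GCM.
dR_dT = [0.9, 1.1, 0.8, 0.95]   # hypothetical dR/dT_k per layer, W/m^2 per K
dT = [1.0, 1.2, 1.4, 1.5]       # warming of each layer per 1 K surface warming

planck = sum(k * 1.0 for k in dR_dT)            # every layer warmed 1 K equally
total = sum(k * t for k, t in zip(dR_dT, dT))   # the actual warming profile
lapse_rate_term = total - planck                # residual = lapse-rate part
print(planck, lapse_rate_term)
```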

Reply to  Willis Eschenbach
September 7, 2016 8:23 am

“Is this correct? If not, how do you describe the Planck parameter?”
The Planck parameter is the net change in radiation at the top of the atmosphere following a global-average warming of 1 K that is vertically uniform through the troposphere.
So there’s no change in lapse rate, and you properly calculate the contribution to flux changes from all levels of the atmosphere and from the surface. This is described in the Soden & Held (2006) methods section, if their description is unclear then you could quote the exact words you’re confused by and I’m sure Nick or I could try to clarify.

Reply to  Willis Eschenbach
September 7, 2016 11:47 am

Isaac Held believes that a non-radiating atmosphere is isothermal (from personal comm.). In doing so he renders the lapse rate a function of opacity. Changing opacity has to change the lapse rate, according to his belief. The old ‘GHGs cool the upper and warm the lower’.
So you have to decide upon whether long wave radiative heat transfer affects the lapse rate.
Data says long wave doesn’t affect the lapse rate, so Held’s atmosphere is based upon an incorrect assumption.

Reply to  Willis Eschenbach
September 7, 2016 2:36 pm

“1) the IPCC says that the Planck parameter is the change in outgoing TOA radiation that you get when you raise the temperature from the surface to the TOA by one degree C, and
2) given the IPCC assumption that the lapse rate is constant under small changes in temperature …”

The IPCC defines sensitivity as the ratio of surface temperature change to flux at TOA. The AR5 mentions Planck feedback, but not very specifically, and it isn’t in the glossary. I think it comes back to Soden and Held, who define the Planck parameter as the TOA flux change for a 1°C uniform heating of the atmosphere. The difference between 1°C uniform and 1°C surface then becomes the lapse rate feedback, which may include more effects than just change in LR.
As to whether your method should get the same result, I’m not sure. S&H look at changes in the same location over time, due to forcing. You’re looking at comparing changes as location varies, at the same time. That may involve different causes of change.

Reply to  Willis Eschenbach
September 7, 2016 7:56 pm

Willis Eschenbach: “then it would seem to me that my calculation upon starting this quest should equal the IPCC findings.”
The lapse rate is expected to change under warming in real-world conditions but that’s the lapse rate feedback. If you read Soden & Held (2006), they explain how the split is done:
The Planck response in one location may be different from another. Imagine a zone with a dry upper troposphere versus a moist upper troposphere, but otherwise the same temperature profile and surface temperature. Under a 1 K warming, the dry case sees a bigger increase in flux at the top of the atmosphere.
The Planck feedback is defined for long-term warming patterns and with no change in moisture or clouds. CERES regressions don’t match this, you need to do the calculation as Soden & Held did. If you could quote the exact bit of their paper that’s confusing you it would help.

Reply to  Willis Eschenbach
September 8, 2016 1:12 pm

MieScatter, you have said,
“The lapse rate is expected to change under warming in real-world conditions but that’s the lapse rate feedback. If you read Soden & Held (2006), they explain how the split is done:”
Isaac Held believes that a non-radiating atmosphere is isothermal. Also that a pure radiative atmosphere, without other forms of heat transfer, has an extreme lapse rate, rendering the lapse rate a function of opacity.
In reality the tropospheric lapse appears to be totally independent of the whole of the proposed ‘greenhouse effect’.
So describing how they model and explain something that ‘isn’t there’ is questionable.

george e. smith
Reply to  Nick Stokes
September 7, 2016 6:27 am

Anybody who uses the term “surface albedo” in a science paper has already lost my interest.
Albedo is NOT a reflection coefficient of any surface. It is a single number for an entire planet, and for earth it is something like 0.367 or thereabouts.
Reflection coefficients (reflectances) are frequency (wavelength) dependent variables.
Albedo is NOT spectrally selective. It is a single number for the entire solar radiation spectrum.

Reply to  Nick Stokes
September 7, 2016 12:55 pm

Nick Stokes: ΔT changes over altitude and altitude λ_L
is that supposed to read “altitude and latitude”, as in the graphs?

Reply to  matthewrmarler
September 7, 2016 1:25 pm

Thanks, yes.

Reply to  matthewrmarler
September 7, 2016 10:58 pm

Or perhaps changes in latitudes, changes in attitudes?

September 6, 2016 11:48 pm

It is wrong to use equilibrium formulae for non-equilibrium dynamics, especially when you are interested in the changes, but not only then. The emission spectrum from Earth is very far from a gray/black body. Radiation temperature, black body physics, Stefan-Boltzmann and so on are not applicable. If you simplify a theory more than necessary, you get absurdities, because ex falso quodlibet. Earth does not have a single temperature. Averaging intensive quantities and using them as if they were true physical values, and calculating pseudo-physical results out of them, guarantees wrong results except in very few exceptional cases, and Earth is not one of them.

richard verney
September 7, 2016 12:20 am

One day, Climate ‘Science’ will enter in and engage with the real world, and when it does, it will be a revelation.

September 7, 2016 12:44 am

Good grief.
An object on the surface warms in sunlight. At night it cools.
After four and a half billion years, the surface is no longer molten, surface water is below boiling point, and rising CO2 levels are helping to avoid the extinction of the human race, by ensuring adequate plant life to feed us.
There is no radiative balance. As long as the core is hotter than the surface, it will continue to cool. The surface must follow suit, until the Earth is isothermal beyond the depth affected by the Sun’s influence.
Ah, the rich tapestry of life!

Peter Champness
September 7, 2016 1:10 am

I am heading toward a daughter’s wedding myself, so I liked reading about your payment plan and the outcome. I wish many years of happiness to your daughter and her husband.
In terms of the ERL I will read the comments here as they come up and see what I can learn. I can see that some radiation comes directly from the surface via the atmospheric window, so the ERL might be a non-real mathematical concept. However I am still stuck on back radiation.
I have carried out some of my own experiments and it was quite easy to show that a silvered surface reduces radiant heat loss. I think the best way to view that is that the reflector is at the same temperature as the hot body, so no radiant heat transfer can occur. To me that gets rid of the light-bulb-in-front-of-the-mirror thought experiments. Alan Siddons’ analogy was wrong. However my experiments were not sensitive enough to show any difference between a radiative gas (99% CO2) and air as far as heat loss is concerned, so the null hypothesis is still alive for greenhouse gases.

David Cosserat
Reply to  Willis Eschenbach
September 18, 2016 2:39 am

Willis, you say: Returning to our two planetoids, one radiating 270 W/m2 and the other 420 W/m2. The NET transfer between them is 420 – 270 = 150 W/m2 from the warmer to the colder planetoid as per the Second Law.
I wholeheartedly agree. The net transfer of radiative energy between two bodies that are within view of each other is always in the direction from the warmer to the colder one. Why people cannot grasp this universal truth is beyond me.
It seems almost a daily occurrence on these threads that some well meaning climate skeptic earnestly tries to knock down climate alarmism with the patently invalid argument that the photonic radiation asserted by a cooler body towards a warmer one ‘violates the Second Law’. They think of the bi-directional photon flows involved, as envisaged by statistical thermodynamics, as two independent streams and then concentrate on just the one stream that they are uncomfortable with.
But the streams are not independent. They are always and everywhere locked together by the geometry of the situation. And, since it is always the case that the larger stream offsets the smaller one, there cannot possibly be any physical way in which the cooler body loses energy to the hotter one. ‘Back radiation’ is a term that should be expunged from the lexicon of climate science.
There is also an important bonus concealed in this interpretation of bi-directional photon flow. Taking your example of opposing radiation potentials of 420 W/m2 and 270 W/m2, which net to an energy flow of only 150 W/m2, it is nevertheless the case that the temperature T of each surface is defined through the S-B law by its own radiative potential, irrespective of the value of the opposing radiative potential. This explains an issue that many seem to find puzzling on these threads – how the steady-state temperature of a body through which energy is flowing can seem high even though the rate of flow seems extremely low.
From which truth follows all sorts of good insights: for example, why the surface of Venus seems so very hot even though the power reaching its surface (and the balancing power returning from its surface) is known to be so very low.
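The two-planetoid arithmetic above (using the 420 and 270 W/m2 figures) can be checked directly: the net flow is the difference of the two S-B potentials, while each surface temperature follows from its own emission alone. A minimal sketch:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sb_temp(flux):
    """Temperature of a blackbody surface emitting the given flux."""
    return (flux / SIGMA) ** 0.25

warm, cool = 420.0, 270.0   # W/m^2
net = warm - cool           # 150 W/m^2, always from warmer to cooler
print(net)
print(sb_temp(warm))        # ~293 K: fixed by its own emission, not by the net flow
print(sb_temp(cool))        # ~263 K
```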

David Cosserat
Reply to  Peter Champness
September 18, 2016 6:05 am

How are you? As we have discussed privately, radiation experiments are really tough because it is extremely hard to eliminate the effects of reflection and conduction, which tend to overwhelm the results, however hard one tries. So when people, including you and me, have done real experiments, it is easy for others to explain them away (one way or the other according to their prejudices).
As you know I discovered this myself when, a couple of years ago, I constructed what I thought was a very carefully designed scientific experiment to prove or disprove the ‘Steel Greenhouse’ thought experiment expounded by Willis, and at the time refuted by Postma, Siddons and several others. The mathematics involved is actually mainstream physics taught in the physics departments of all universities, but the issue nevertheless caused much controversy.
In my innocence, I used a cheap vacuum pump to extract the air between the three concentric bodies in my apparatus down to a vacuum of 1mbar, assuming that such a reduction to one thousandth of an atmosphere would be more than sufficient to reduce conduction/convection between the surfaces to negligible proportions. But I found to my dismay that this reduction in pressure had absolutely no discernible effect whatsoever on eliminating conduction/convection – a discovery I only made accidentally when, on one occasion, I failed to run the pump before taking the steady-state temperature measurements of the three bodies and found that the results were unaltered!
There was however a happy end to that story. Some very kind people who run a small vacuum testing company near Bristol allowed me to go down to their premises and hook up my experiment to their industrial vacuum pump. So I was able to pump down to 0.0001 mbar (a mere thousand times more rarefied!). At this vacuum level conduction/convection was all but eliminated and I was then able to prove definitively that placing an intervening radiative shell between a cooler outer shell and a constantly powered inner shell does indeed cause the temperature of the inner shell to increase, and by an amount that is in line with standard theory.
Just as my experiment was initially ruined by conduction/convection, it would seem that your ‘back radiation’ experiment powerfully demonstrated the effects of reflection. This would certainly overwhelm any radiative transfer that might have occurred between the surfaces (which would have been minimal in comparison if the reflector was reasonably good). Your consequent assumption that a perfect reflector has no effect on the temperature of the source is sound and actually corresponds to common sense. The energy from the emitting device simply returns to it, resulting in no change to its temperature. And, of course, a perfect reflector itself emits no radiation at all.
All the best,

Peter Champness
September 7, 2016 1:12 am

Doh! Many years of happiness.

Leo Smith
September 7, 2016 1:35 am

Why are posts being silently discarded?

Reply to  Leo Smith
September 7, 2016 1:52 am

I had a problem with this the other day and was assured that it wasn’t happening. I took it as a glitch at my end because I use a VPN (am in China) and the VPN dropped out at the wrong moment and then reset itself.

george e. smith
Reply to  Leo Smith
September 7, 2016 6:29 am

It doesn’t like the name Smith. Mine get disappeared too.

Leo Smith
Reply to  george e. smith
September 7, 2016 9:08 am

Well I can, it seems, post a one-liner, but not a page or so.

September 7, 2016 1:40 am

Is the point of this post that climate change isn’t happening because none of the scientists have considered this factor?
I’m pretty sure they have and all the other stuff Monckton brings up…
anyway a newlywed man should be thinking about other things…
(It has always bothered me that poet Matthew Arnold wrote his poem ‘Dover Beach’ while on his honeymoon – google it and see if that’s any frame of mind for a newly married man)

Reply to  Griff
September 7, 2016 1:48 am

My reading was that it was Willis’s daughter that was married not Willis.

Reply to  Griff
September 7, 2016 2:59 am

Griff, that shows how much you actually pay attention to what is written … It was his daughter’s wedding.

Reply to  Griff
September 7, 2016 6:26 am

….this explains a lot

John M. Ware
September 7, 2016 1:59 am

At a guess, Arnold had thought about the poem for a good while before the honeymoon, but it had not resolved itself in his mind because of the chaos of preparations for the wedding. Once married and on his honeymoon, he could write–and out it came. As a composer of sorts, I have had similar experiences–during complex times, it’s hard to write; once some things are resolved, the writing goes quickly. Also, in respect to Arnold, if the honeymoon had been at or near Dover Beach, he doubtless knew the history of invasions and battles associated with the place, though (as I recall from reading the poem long ago) he treats more the issues within the church at the time of writing than any specific land or sea battles.

September 7, 2016 3:45 am

Before our daughter’s wedding I calculated the cost of the party and then offered that amount less £100 to the happy couple rather than actually having the party.
Son – in – Law Elect asked “Why the £100 deduction?” I replied, “To buy the ladder for the elopement”.
Daughter said, “Unfair, Dad, we live in a bungalow”.
They went for the party anyway.

September 7, 2016 4:05 am

Dear Willis,
is the ERL the same as the TOA (Top Of Atmosphere)? This is what others call the average height from which infrared radiation goes out towards space. AFAIR, each atmospheric gas has its own TOA, which is influenced as well by cloud cover. The height of the (average) TOA is in the upper troposphere, about 8-10 km above the surface.
And some infrared radiation is going out directly from the surface towards space. Can you sort that out?

Reply to  Johannes S. Herbst
September 7, 2016 4:45 am

I think Nick (above) gave Willis a method to sort all that out. Stay tuned.

September 7, 2016 4:30 am

Willis wrote, “What am I missing?”
A uniform background radiation in the microwave region of the spectrum is observed in all directions in the sky. Currently it is commonly called the Cosmic Microwave Background or just CMB, alluding to its Wien peak in the microwave region. It shows the wavelength dependence of a “blackbody” radiator at about 3 Kelvins temperature. It is considered to be the remnant of the radiation emitted at the time the expanding universe became transparent at about 3000 K temperature. The discovery of the 3K microwave background radiation was one of the crucial steps leading to the calculation of the standard “Big Bang” model of cosmology, its role being that of providing estimates of relative populations of particles and photons. Research using the Far Infrared Absolute Spectrophotometer (FIRAS) onboard the COBE satellite has given a temperature of 2.725 +/- 0.002 K. Previous experiments had shown some anisotropy of the background radiation due to the motion of the solar system, but COBE collected data showing fluctuations in the background. Some fluctuations in the background are necessary in big bang cosmology to give enough non-uniformity for galaxies to form. The apparent uniformity of the background radiation is the basis for the “galaxy formation problem” in big bang cosmology. The more recent WMAP mission gave a much higher resolution picture of the anisotropies in the cosmic background radiation. The precision of the mapping of the CMB was improved with the Planck satellite, giving the best current values for the descriptive parameters.
The data for the round figure of 10^9 photons per nuclear particle is the “most important quantitative conclusion to be drawn from the measurements of the microwave radiation background …” (Weinberg p66-70). This allowed the conclusion that galaxies and stars could not have started forming until the temperature dropped below 3000K. Then atoms could form and remove the opacity of the expanding universe; light could get out and relieve the radiation pressure. Star and galaxy formation could not occur until the gravitational attraction could overcome the outward radiation pressure, and at 10^9 photons/baryon a critical “Jeans mass” of about a million times that of a large galaxy would be required. With atom formation and a transparent universe, the Jeans mass dropped to about 10^-6 the mass of a galaxy, allowing gravitational clumping.
While there are some radar bands from 1,300 to 1,600 MHz, most microwave applications fall in the range 3,000 to 30,000 MHz (3-30 GHz). Current microwave ovens operate at a nominal frequency of 2450 MHz, a band assigned by the FCC. There are also some amateur and radio navigation uses of the 3-30 GHz range. In interactions with matter, microwave radiation primarily acts to produce molecular rotation and torsion, and microwave absorption manifests itself by heat. Molecular structure information can be obtained from the analysis of molecular rotational spectra, the most precise way to determine bond lengths and angles of molecules. Microwave radiation is also used in electron spin resonance spectroscopy.
For microwave ovens and some radar applications, the microwaves are produced by magnetrons.
Of great astrophysical significance is the 3K background radiation in the universe, which is in the microwave region. It has recently been mapped with great precision by the WMAP probe.
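The Wien displacement law ties the two temperatures in this comment together: the ~3000 K emission at decoupling peaked near 1 µm, and the same spectrum, redshifted to 2.725 K today, peaks in the millimetre (microwave) range. A quick check:

```python
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_m(t_kelvin):
    """Wavelength of peak blackbody emission (Wien's displacement law)."""
    return WIEN_B / t_kelvin

print(peak_wavelength_m(2.725) * 1e3)   # ~1.06 mm: microwave, as observed for the CMB
print(peak_wavelength_m(3000.0) * 1e6)  # ~0.97 micrometres at decoupling
```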

September 7, 2016 4:40 am

Nice post, with very nice graphics. I completely agree that theoretical constructs like the effective radiation level are as often as not more misleading than clarifying. The same applies to much of the discussion of the influence of GHGs in the atmosphere. I find it a bit bonkers to start relating the derivative of a theoretical construct to physical behaviors. I understand the temptation to use such a construct to ‘simplify’ a complex process; such constructs, plus a few arm waves, can ‘explain’ everything, often in the complete absence of understanding of what is actually happening. I think succumbing to that temptation is unwise if the goal is to understand how a change in conditions will change a real physical process. It would be better if people focused more on the actual physical processes involved and less on ‘simplifying’ constructs.

September 7, 2016 4:43 am

I have challenged the IPCC, United Nations, CSIRO and Dept of Environment and Energy in Australia with the following FREEDOM of INFORMATION REQUEST. Their responses will be used as evidence in the class action that I plan for 2018 by large companies against the Australian government. Those with an understanding of entropy and thermodynamics may wish to read this copy of my FOI request:
In light of the matters outlined below, I ask why it is that, in considering the role of carbon dioxide, if any, in affecting Earth’s surface temperature, your Department or organization has apparently not taken into account the effect upon entropy of differing mean molecular gravitational potential energy at different altitudes in the troposphere? That effect causes the state of maximum entropy (that is, thermodynamic equilibrium) with isentropic conditions to have a stable temperature gradient in the troposphere, and that is what explains the fact that the surface temperature is higher than the temperature at the radiating altitude, not any atmospheric radiation. Radiation from carbon dioxide has no effect upon surface temperatures because the impinging Solar radiation is insufficient to explain observed temperatures in the first place.
Additional supporting information is below and in my website* and my 2013 paper Planetary Core and Surface Temperatures linked from that site.
* http://whyitsnotco2.com
The land-based data is manipulated by incorrect “homogenization” based on weather stations affected by urban sprawl. There is also selective elimination of weather stations that don’t show enough warming. Raw data from some Australian stations in Northern Victoria, for example, shows no warming in over 100 years. Only satellite measurements are reliable and, as is to be expected, they show no warming since 1998. There is, however, long-term warming of about half a degree per century since the “Little Ice Age”, but it can be expected to become about 500 years of long-term cooling before the end of this century if past natural cycles continue.
However, regardless of any warming, carbon dioxide cannot be the cause as there is no valid physics that can give any reason for such. The infant science of climatology (in which there are few with qualifications in physics) has abused the laws of physics and ignored the prerequisites for such laws to apply.
Their first fundamental error was to assume that, in the absence of so-called “greenhouse gases” (1% water vapor, 0.04% carbon dioxide and some others) the Earth’s surface temperature would have been the same as that about 5Km up into the troposphere. This ignores the effect of gravity which (as has been discussed since the 19th century) forms a stable equilibrium non-zero temperature gradient in every planet’s troposphere. Now, in the 21st century, experiments with centrifuges and vortex cooling tubes demonstrate centrifugal force also creating a radial temperature gradient for the same reason that gravity does. Furthermore, a correct understanding of the process of entropy maximization in physics enables us to explain why this happens as gravity acts on molecules between collisions. So there is no need to explain the warmer surface temperature with radiation, and radiation is not the cause thereof.
The second fundamental error is that, in their unnecessary attempts to explain the fact that the surface temperature is warmer than that 5 km above, climatologists have incorrectly assumed that they can just add together the flux of radiation from the Sun and about double that flux from the colder atmosphere. The latter can have no warming effect whatsoever on the warmer surface, whilst even the solar radiation does not always raise the existing surface temperature, especially in winter and in the early morning and late afternoon. Once again, we can confirm that radiation cannot be compounded like that with a simple experiment. We can measure the temperature to which a single electric bar radiator will raise an object and then see if several such radiators achieve the results that climatologists would like to see. They don’t come anywhere near doing so.

charles nelson
September 7, 2016 5:04 am

Hey everyone, I decided to calculate the average Global Wage.
Using OECD figures I know that the average wage in the USA is thirty two thousand dollars, and the average wage in India is ten thousand dollars, and the average wage in China is seven thousand etc and the average wage in Mali is 103 dollars etc etc etc…after much calculation I calculate the Global Average wage to be $6003.26.
How useful is that!!!
Of course there are a few factors which need to be taken into consideration like for instance these figures only represent those persons of working age…currently in employment and do not include savings…pensions…tax rebates…or social security supplements) (Note: these figures are only only for countries where cash is the dominant mode of transaction…this does not include barter, co-operative sharing of produce or the narco-economy).
I’m sure this figure of $6003.26 is absolutely mathematically correct and can therefore be used in all further calculations.
….and I wonder why they call economics ‘the dismal science’!

Reply to  charles nelson
September 7, 2016 9:45 am

It only becomes a science once you provide calculations for an entire century. Bonus points if you adjust the early numbers downwards and bump the later numbers upwards somewhat. Special extra super bonus if you are able to defend those adjustments with a straight face.
I have already alerted the Nobel committee and I am sure your prize is in the mail.

Tom in Florida
September 7, 2016 5:38 am

“Both the bride and groom have college degrees in Project Management, and they took over and put on a moving and wonderful event. And you can be sure, it was on time and under budget.”
Well there goes their chance for a government job.

Tom in Florida
Reply to  Willis Eschenbach
September 7, 2016 3:17 pm

Just being sarcastic about government never finishing a project on time or under budget.

September 7, 2016 6:22 am

Last weekend we flew home. It was a cloudless day and I was sitting in a window seat with the Sun shining on my head on the ground. Although warm, it wasn’t really bothersome. Leaving Detroit, we took an ESE departure all the way to cruise altitude and as we climbed out, I noticed my face getting warmer and warmer. After 20 minutes or so we were still on the north shore of Lake Erie and got the Sun and the full reflection of the Sun off the water, and had to put the shade down as it was actually burning.
At that altitude the Sun’s reflection stretched almost from one side of the Lake to the other. It was an impressive amount of radiant energy being sent back into space. I know you can come up with equations to approximate what it is, but throw in partial cloud cover over the Lake or high cirrus blocking the Sun and it is really impossible. I appreciate taking averages over time, but to get really granular, I’m not so sure.

Reply to  rbabcock
September 7, 2016 8:45 am

“Last weekend we flew home. … It was an impressive amount of radiant energy being sent back into space.”

This is why open water in the arctic does not lead to a runaway melting by itself.

September 7, 2016 6:27 am

I’m thinking it’s like saying the Sun’s surface temperature is 5,777 K, when the sun doesn’t really have a surface nor does it have a single temperature. Likewise, if the color temperature of the Earth were taken from space through properly calibrated instruments and integrated, we would get an effective average temperature; it follows that at some altitude or level the atmospheric temperature would be at that effective average temperature.

Reply to  bugenator
September 7, 2016 6:49 am

Yes, this has already been done and the blackbody temperature of Earth as seen from space is 254.3K
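That figure follows from the standard zero-dimensional balance: absorbed sunlight S0(1-α)/4 equals emitted σT^4. A minimal sketch using round published values (S0 = 1361 W/m^2, Bond albedo 0.306):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def effective_temp(s0, bond_albedo):
    # Absorbed flux averaged over the sphere equals emitted sigma*T^4
    return (s0 * (1.0 - bond_albedo) / (4.0 * SIGMA)) ** 0.25

print(effective_temp(1361.0, 0.306))  # ~254 K, close to the 254.3 K quoted above
```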

September 7, 2016 6:40 am

Moon/Earth Comparison – Bulk parameters

                               Moon      Earth     Ratio (Moon/Earth)
Mass (10^24 kg)                0.07346   5.9724    0.0123
Volume (10^10 km^3)            2.1968    108.321   0.0203
Equatorial radius (km)         1738.1    6378.1    0.2725
Volumetric mean radius (km)    1737.4    6371.0    0.2727
Mean density (kg/m^3)          3344      5514      0.606
Surface gravity (m/s^2)        1.62      9.80      0.165
Escape velocity (km/s)         2.38      11.2      0.213
GM (x 10^6 km^3/s^2)           0.00490   0.39860   0.0123
Bond albedo                    0.11      0.306     0.360
Visual geometric albedo        0.12      0.367     0.330
Visual magnitude V(1,0)        +0.21     -3.86     –
Solar irradiance (W/m^2)       1361.0    1361.0    1.000
Black-body temperature (K)     270.4     254.0     1.065
Topographic range (km)         13        20        0.650
Moment of inertia (I/MR^2)     0.394     0.3308    1.191
Sorry for the data set, but it shows that the earth and moon, which are the same distance from the sun, have quite different radiative temperatures.
The simple explanation for the moon radiating more heat [technically being a hotter object than earth] is that it has a lower albedo, so it is on average hotter than earth at the surface.
Its atmosphere [virtually non-existent] is presumably quite cold?
Its ERL is presumably the surface of the planet.
If the earth’s ERL is -18.7°C, the moon’s ERL must be about -2.75°C.
The Planck parameter of -3.75 W/m2 per °C is presumably correct for the moon’s surface.
The earth’s annual average radiation is about 240 W/m2?
I would have thought the earth and the moon receive the same radiation.
Is this worked out from the solar irradiance of 1361.0 W/m2?
In which case albedo may not be being taken into account.
Does the difference between Lord Monckton and the IPCC [-3.2 W m-2 per °C] and yourself [a Planck parameter of -3.75 W/m2 per °C] reflect the fact that the earth’s higher albedo simply means that the 240 W/m2 is not strictly correct, i.e. that some of the heat is reflected, meaning that less than 240 W/m2 is actually absorbed by the earth’s surface and atmosphere?
In which case one could work backwards, I guess, and work out the true heat absorption.
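The suspicion that albedo explains the Moon/Earth difference can be checked directly from the table's Bond albedos: the same zero-dimensional absorbed-equals-emitted balance reproduces both blackbody temperatures. A sketch, with albedo explicitly taken into account:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4
S0 = 1361.0             # solar irradiance at both bodies, W/m^2

def t_bb(bond_albedo):
    """Blackbody temperature from the simple absorbed-equals-emitted balance."""
    return (S0 * (1.0 - bond_albedo) / (4.0 * SIGMA)) ** 0.25

print(t_bb(0.11))   # Moon: ~270 K, matching the table's 270.4 K
print(t_bb(0.306))  # Earth: ~254 K, matching the table's 254.0 K
```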

Mickey Reno
Reply to  angech
September 7, 2016 8:12 am

Thanks for this. I too would like to see the premise tested that the Earth’s climate (i.e., energy budget) changes more due to changes in reflectivity (albedo) than it does from tiny changes in one mechanism of outgoing long wave radiation.

Reply to  Mickey Reno
September 7, 2016 8:28 am

Mickey Reno:
What do you think of the work of Martin Wild at ETH? Specifically which papers did you read that make you think that albedo changes are not included? What do you think happened to the ~2 W m-2 heating being caused by greenhouse gases?

Mickey Reno
Reply to  Mickey Reno
September 7, 2016 11:20 am

MieScatter, I guess I was thinking more in terms of geologic time scale changes that could explain glacial and inter-glacial periods, more than just recent AGW time frames. I realize we may not have that many good proxies for such research. But thanks for the pointer to Martin Wild. I just finished a perusal of his 2008 paper “Global Dimming and Brightening: A Review” from the Journal of Geophysical Research (free to download, huzzah). I don’t believe I had read this before. His conclusions are that there exists a real dimming from the early part of the 1900s to the 1980s and a real brightening since then, with some (weak) evidence for a plateau after 2000. Based on my previous understanding of the pan-evaporation experiments or aggregations, the dimming part of this didn’t surprise me. But I’m ignorant of any change to pan-evaporation reality since the 1980s. Have they changed, globally? Another thing to investigate. Anyway, Wild’s conclusions seem to match fairly well the state of AGW conclusions over the past 75 years, if you accept post-1980 warming and a pause following the 1998 El Nino, don’t you think?
Wild clearly believes human activity is central to the brightening (since the 1980s), presumably via particulate air pollution, which he presumes is better controlled by developed nations. But I’m not sure this conclusion will hold up globally, given the Chinese, Indian, African, and South American build-out of coal-fired electricity after 1980. It could be that we’ve measured changes in Western behaviors wrt particulate air pollution, rather than global ones. And Wild is careful in his language about this, and seems to dislike the terms global dimming and global brightening, while accepting that their use is too ubiquitous to change at this point.
I’m also happy that Wild means carbon soot, aerosols and hydrocarbons when he says “pollution.” But then he seems to place the blame for most non-volcanic particulates on human beings, which, if true, is a mistake. Nature throws up lots of dust and crud into the atmosphere, and some of it sticks. A simplifying assumption made in order to make one’s equations more solvable ought not be simplified too much. If your goal is to prove that humans matter to the climate, your simplifying assumptions ought never omit the so-called natural state of anything.
Wild informs (and I realized this already) that we need a better satellite monitoring system before we can fully understand particulate and cloud reflection of UV. He seems to favor measuring particulates over clouds or water vapor. I would guess, based on his work, that he has a simplifying assumption that given aerosols and particulates, clouds will necessarily follow. He takes the water vapor for granted. And speaking of water vapor, Wild uncharacteristically seems to believe in CO2 as a proxy for all greenhouse gases. If so, in my opinion that would be a mistake, as I think water vapor should always be presumed the primary GHG when talking about LWIR scattering.
There’s a lot of food for thought. There is still no accurate climate model. It’s still possible that some unknown unknown or complex combination or set of interactions masks the actual results on T from both the IR and UV sides.

September 7, 2016 6:43 am

340 W/m^2 ISR arrive at the ToA (100 km per NASA), 100 W/m^2 are reflected straight away leaving 240 W/m^2 continuing on to be absorbed by the atmosphere (80 W/m^2) and surface (160 W/m^2). In order to maintain the existing thermal equilibrium and atmospheric temperature (not really required) 240 W/m^2 must leave the ToA. Leaving the surface at 1.5 m (IPCC Glossary) are: thermals, 17 W/m^2; evapotranspiration, 80 W/m^2; LWIR, 63 W/m^2 sub-totaling 160 W/m^2 plus the atmosphere’s 80 W/m^2 making a grand total of 240 W/m^2 OLR at ToA.
When more energy leaves ToA than enters it, the atmosphere will cool down. When less energy leaves the ToA than enters it, the atmosphere will heat up. The GHE theory postulates that GHGs impede/trap/store the flow of heat reducing the amount leaving the ToA and as a consequence the atmosphere will heat up. Actually if the energy moving through to the ToA goes down, say from 240 to 238 W/m^2, the atmosphere will cool per Q/A = U * dT. The same condition could also be due to increased albedo decreasing heat to the atmosphere & surface or ocean absorbing energy.
The S-B ideal BB temperature corresponding to ToA 240 W/m^2 OLR is 255 K or -18 C. This ToA “surface” value is compared to a surface “surface” at 1.5 m temperature of 288 K, 15 C, 390 W/m^2. The 33 C higher 1.5 m temperature is allegedly attributed to/explained by the GHE theory.
BTW the S-B ideal BB radiation equation applies only in a vacuum. For an object to radiate 100% of its energy per S-B there can be no conduction or convection, i.e. no molecules or a vacuum. The upwelling calculation of 15 C, 288 K, 390 W/m^2 only applies/works in vacuum.
Comparing ToA values to 1.5 m values is an incorrect comparison.
The S-B BB ToA “surface” temperature of 255 K should be compared to the ToA observed “surface” temperature of 193 K, -80 C, not the 1.5 m above land “surface” temperature of 288 K, 15 C. The -62 C difference is explained by the earth’s effective emissivity. The ratio of the ToA observed “surface” temperature (^4) at 100 km to the S-B BB temperature (^4) equals an emissivity of .328. Emissivity is not the same as albedo.
Because the +33 C comparison between ToA “surface” 255 K and 1.5 m “surface” 288 K is invalid the perceived need for a GHE theory/explanation results in an invalid non-solution to a non-problem.
ACS Climate Change Toolkit
Trenberth et al. 2011, “Atmospheric Moisture Transports …”, Figure 10; IPCC AR5 Annex III
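The arithmetic in this comment is easy to check. A minimal sketch (the 193 K figure is the commenter's quoted observed ToA value, not a standard constant):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def bb_temperature(flux):
    """Temperature (K) of an ideal blackbody radiating the given flux (W/m^2)."""
    return (flux / SIGMA) ** 0.25

# 240 W/m^2 OLR at the ToA corresponds to an ideal blackbody at ~255 K (-18 C)
t_erl = bb_temperature(240.0)

# Effective emissivity implied by comparing the quoted 193 K observed ToA
# "surface" temperature to the 255 K blackbody value: a ratio of T^4 terms
emissivity = 193.0 ** 4 / 255.0 ** 4

print(round(t_erl), round(emissivity, 3))  # 255 0.328
```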

September 7, 2016 6:49 am

I wish I had time to comment on this or even read it in detail, but building the language in which to express the computations, and a community building on it, trumps all else. (My friend Morten Kromberg just did a podcast, https://www.functionalgeekery.com/episode-65-morten-kromberg/ , which describes the nature of APL languages, the backbone of which I’ve built in an open x86 Forth.)
I will just pose the question of the nature of the transition from that atmospheric minimum back up to the orbital gray-body temperature of about 278 K.

September 7, 2016 6:58 am

Following on from a question I put in comments on one of the earlier threads, I would be pleased if someone could explain the reasoning for why flux*(1-albedo) is divided by 4 to get the average temp, when S-B is a T^4 average?
I used 1360 flux, albedo of 0.3:
(a) If you calculate by the divide by 4 method S-B gives an average temperature of -18.6 degC
(b) If you calculate S-B as a double cosine integral over 1 hemisphere and average on T you get -11.0 degC
(c) If you calculate S-B as a double cosine integral over 1 hemisphere and average on T^4 you get +13.3 degC
Still not clear to me why the flux is divided by 4 first … anyone care to explain, or point me to an explanation?
What am I misunderstanding?
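One way to see the issue numerically. The sketch below area-weights the local equilibrium temperature over the sunlit hemisphere (mu = cosine of the solar zenith angle, which is uniformly distributed in area on [0, 1] for a sphere in a parallel beam). It will not reproduce ThinkingScientist's exact (b) and (c) figures, which depend on his particular integration scheme, but it does show that averaging T and averaging T^4 give materially different answers, and that both differ from the divide-by-4 value:

```python
S, A, SIGMA = 1360.0, 0.3, 5.67e-8  # solar constant, albedo, S-B constant
N = 100_000
mus = [(i + 0.5) / N for i in range(N)]  # midpoint grid on [0, 1]

# (a) spread the intercepted beam over the whole sphere, then apply S-B once
t_div4 = (S * (1 - A) / 4 / SIGMA) ** 0.25

# local equilibrium temperature at each solar zenith angle on the day side
t_local = [(S * (1 - A) * mu / SIGMA) ** 0.25 for mu in mus]

# (b) average the temperatures directly
t_mean = sum(t_local) / N

# (c) average T^4 (i.e. the flux), then take the fourth root
t4_mean = (sum(t ** 4 for t in t_local) / N) ** 0.25

print(round(t_div4, 1), round(t_mean, 1), round(t4_mean, 1))
# ~254.5 K, ~288.0 K, ~302.7 K -- the ordering (a) < (b) < (c) is the point
```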

Reply to  ThinkingScientist
September 7, 2016 7:43 am

On average, heat leaves Earth at the same total rate Q that it arrives. But it arrives as a parallel beam, so its flux intensity is Q/(disc area=πR²). But it leaves radially, so its average flux intensity is Q/(surface area=4πR²).
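The disc-to-sphere ratio in numbers (a sketch; the radius cancels, so any value works):

```python
import math

R = 6.371e6   # Earth radius in metres (any value works; it cancels)
S = 1360.0    # incoming parallel-beam flux, W/m^2

Q = S * (math.pi * R ** 2)            # total power intercepted by the disc
flux_out = Q / (4 * math.pi * R ** 2) # same power spread over the whole sphere

print(round(flux_out, 6))  # 340.0 -> the familiar S/4
```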

Reply to  ThinkingScientist
September 7, 2016 7:46 am

ThinkingScientist September 7, 2016 at 6:58 am
“What am I misunderstanding?”
Because it is simple.
1,368 W/m^2 times the cross-sectional area of the earth (a disc of radius r, in m^2) gives watts. The surface of a sphere of radius r is 4 times the area of a disc of radius r. Divide/spread those watts over the entire ToA spherical surface area for 1,368 / 4 = 342 W/m^2. That’s how they do it. Ask them why.
There is no consideration for day or night, aphelion/perihelion, seasons, etc. It’s just a graphical tool to illustrate where the power flux gozintaz and gosoutaz. And Trenberth et al 2011jcli24 Figure 10 shows 7 out of 8 models, that’s 87.5% of ALL scientists (at least involved in these models) show the atmosphere as cooling, not warming, much to Trenberth’s dismay.

Kevin Kilty
Reply to  ThinkingScientist
September 7, 2016 9:09 am

TS, while Nick Stokes answered your question just fine (4 is the ratio of the surface area of a sphere to its projected area), your post brought to my mind other thoughts. First, there is always the issue of weighting these fool averages. If radiant emittance is a function of temperature to the fourth power, then small hot regions with clear, dry sky contribute a disproportionate amount of the outgoing radiation; so why do we not weight for such influences? Moreover, the mean temperature, a single number that is the focus of endless enjoyable argument and scientific employment, is calculated and its uncertainty estimated by assuming that all numbers going into it are IID values – I doubt they are. So why do we place such significance on this? I doubt that mean earth temperature, in and of itself, has much significance for divining our future.
Finally, climate is experienced locally, climate change as well. So then, someone please tell me what is the usefulness of these blasted zero dimensional models of heat transfer? I am sick of them.
Thanks for letting me rant.

Alan McIntire
Reply to  ThinkingScientist
September 7, 2016 11:31 am

You’re not misunderstanding anything. The divide-by-4 method is an oversimplification.
Even your (c) won’t work for the real earth. Thanks to Hadley circulation, heat is transferred from the tropics towards the poles, so the tropics radiate away less heat than they absorb; further poleward, more heat is radiated away than is absorbed directly from the sun.

September 7, 2016 7:18 am

It appears you have the same concerns that I have. From a post yesterday [ https://wattsupwiththat.com/2016/09/06/feet-of-clay-the-official-errors-that-exaggerated-global-warming-part-3/comment-page-1/#comment-2294128 ] “Ask a good instrumentation engineer all of the things that need to be taken into consideration just to measure the water level in a fifty-foot steam generator used at a nuclear power plant, where you have cold water on the bottom, heated water a few feet up, boiling water above that, saturated steam above that, and then superheated steam above that. All of this affects the delta-P as measured by the level instrument, and all of that has to be taken into account in determining the level.”
As a 1°x1° gridcell is used to calculate this, there appears to be an awful lot of averaging going on. They are averaging the conditions for an area that can be about 70 miles by 70 miles at the equator and about 0 by 0 miles at the poles. In Kansas you could have sunny weather and be water skiing on a lake on one side of that grid and have an F4 tornado on the other side. Been there and seen it.
Similarly, you can place a DP level instrument on the side of a boiler, calibrate it for the specific gravity of H2O at the “average” temperature of the boiler and the average weight of steam above that assumed water level. What you get is a reading that is pure Bull droppings. 75 years ago that is exactly what they did. It was good enough to prevent uncovering the tubes in the boiler and preventing destroying the boiler – after restrictions were placed upon minimum water levels and other operation conditions were established. However, it will not give a true, actual, level of the water in the boiler. A 5° change in inlet water temperature can change the accuracy of the level gauge by more than 10 percent – enough to boil the steam generator dry and the gauge will tell you that you have more than adequate water level as you destroy the boiler.
It is my opinion that they are doing the same thing with “Radiation temperature” and the assumed “Effective Radiation Level.” None of this takes into account the massive differences that can exist in the actual 1°x1° gridcell over the entire height, depth or length of that column. How can they predict exactly what is causing the heat to escape (radiate out) or what is going to trap the heat – clouds, CO2, ice crystals, rain, snow, water vapor, particulates, whatever? Search “earth” images and look at the pictures. How are the clouds, all the different types of clouds, factored in? It seems to me, from what I read, that they are just lost in the “averages.” Averages do not work.

September 7, 2016 7:37 am

Hi Willis!
1. 240 W/m^2 escaping at the TOA is the flux density j needed for energy balance, a physical quantity. However, the theoretical construct comes from calculating a perfect Planck black body (i.e. emissivity = 1) that would emit that 240 W/m^2. Using the Stefan-Boltzmann law, this turns out to be equivalent to a temperature of 255 K. As you note, there is no physical layer in the troposphere (at 4.85 km altitude) that actually emits a 255 K Planck black body spectrum, as any infrared (IR) spectrometer carried by a balloon or aircraft or on the side of a mountain will show. So any “explanation” of the greenhouse effect invoking such a hypothetical layer at 4.85 km altitude is worthless. However, the change in TOA flux on doubling CO2 over a 20 km path length can be calculated quite accurately from molecular constants, for example the 3.39 W/m^2 shown on the MODTRAN spectrum available at https://en.wikipedia.org/wiki/Radiative_forcing . Lambda-zero can be calculated from the formula that the relative change in temperature is 1/4 the relative change in flux density (which I simply call “flux” as I consider 1 m^2, just as one can talk about energy instead of power when one considers 1 second). This factor of 1/4 comes from taking the derivative of the Stefan-Boltzmann law with respect to T, dividing both sides by j, and cross-multiplying.
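The derivative step described at the end of point 1, written out (a standard Stefan-Boltzmann manipulation):

```latex
j = \sigma T^4
\;\Rightarrow\;
\frac{dj}{dT} = 4\sigma T^3 = \frac{4j}{T}
\;\Rightarrow\;
\frac{dj}{j} = 4\,\frac{dT}{T}
\;\Rightarrow\;
\frac{\Delta T}{T} \approx \frac{1}{4}\,\frac{\Delta j}{j}.
```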
2. Lambda-zero = (255)/[4(240)] = 0.266. Using 3.7 W/m^2 for delta j, the temperature change for the hypothetical layer would be (0.266)(3.7) = 0.98 K.
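These numbers check out; a quick sketch using only the values quoted in the comment:

```python
T, j = 255.0, 240.0   # hypothetical blackbody layer temperature (K) and ToA flux (W/m^2)

# lambda-zero from dT = (T / 4j) * dj
lambda_zero = T / (4 * j)        # K per (W/m^2)

delta_j = 3.7                    # W/m^2, canonical forcing for doubled CO2
delta_t = lambda_zero * delta_j  # no-feedback temperature change

print(round(lambda_zero, 3), round(delta_t, 2))  # 0.266 0.98
```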
Why is this equal to the temperature change at the Earth’s surface, and therefore the climate sensitivity (not including feedbacks)? Temperature profiles of the troposphere at different locations on the Earth, where surface temperatures vary widely, show mostly parallel straight lines with a slope corresponding to the lapse rate of -6.8 K/km. Therefore a temperature change at 4.85 km will be equal to the temperature change at the surface. This is true even if we have modelled the actual troposphere by a single thin hypothetical Planck black body layer that would match both j and delta j at the TOA (which is not at 4.85 km, but at 20 km for the MODTRAN spectrum). This assumes no net absorption or emission between 4.85 and 20 km, and that the inverse square law does not spread out the flux (a valid approximation since 20 km is small compared to the radius of the Earth).
3. Why are the temperature profiles parallel? Because statistical mechanics tells us that the most probable way of adding a finite amount of energy to a finite number of molecules in equal energy states is to add equal amounts to each molecule. Since heat content (enthalpy, H) is heat capacity times temperature, and the heat capacity of linear molecules like N2 and O2 and CO2 is 7k/2 per molecule, where k is Boltzmann’s constant, adding equal amounts of H means delta T is the same for each average molecule. This is true even though density decreases with altitude.
4. Why are the molecules in a column of the troposphere in equal energy states? The dry adiabatic lapse rate can be derived by considering the gravitational potential energy U = mgh for a molecule of mass m at altitude h where g is the acceleration due to gravity, and the enthalpy H = 7kT/2. If there is no heat injected into or sucked from each layer (i.e. for adiabatic conditions), then a gain in altitude by a molecule must come at the expense of a drop in H. I.e. dU/dh = – dH/dh. Using the Chain Rule for derivatives,
dH/dh = (dH/dT)·(dT/dh), so d(mgh)/dh = -[d(7kT/2)/dT]·(dT/dh), so mg = -(7k/2)·(dT/dh).
Therefore the dry adiabatic lapse rate is dT/dh = -2mg/(7k) which equals -9.8 K/km on substitution of appropriate values for m, g and k.
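The stated -9.8 K/km follows directly from the formula. A sketch, assuming the standard mean molecular mass of dry air (28.97 u) and g = 9.81 m/s^2:

```python
k = 1.380649e-23          # Boltzmann constant, J/K
m = 28.97 * 1.66054e-27   # mean mass of a dry-air molecule, kg (assumed value)
g = 9.81                  # gravitational acceleration, m/s^2

# dry adiabatic lapse rate per the comment's derivation: dT/dh = -2mg / (7k)
lapse = -2 * m * g / (7 * k)   # K per metre

print(round(lapse * 1000, 1))  # -9.8 K/km
```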
5. Note that d(U+H)/dh = 0, so U+H is constant, regardless of altitude in the column, and the molecules are in equal energy state. If now equal amounts of heat, delta H, are added to each molecule, U+H will now be greater than for the adiabatic states, but the total U+H for the heated molecules will be equal for each.
6. Schlesinger (1985) assumed a transmission factor of (255/288)^4 = 0.615 would convert j’, the flux at the 288 K mean surface temperature, to the TOA flux, j = 240 W/m^2, emitted by the hypothetical 255 K Planck black body, assuming emissivity 1 for the surface. j’ = sigma.(288)^4 = 390 W/m^2, where sigma = 5.67 x 10^-8 is the Stefan-Boltzmann constant. j = 0.615 j’, so (delta j) = 0.615 (delta j’), and (delta j’) = (delta j)/0.615. Now (delta Tav)/Tav = 1/4 (delta j’)/j’, where Tav is the average temperature of 288 K. Substituting for (delta j’) gives (delta Tav)/288 = 1/4 (delta j)/[ (0.615)(390)]
so that delta Tav = 0.300 (delta j). Why is this value for lambda-zero 0.300/0.266 = 1.13 times larger than the value of Point 2?
7. If we use emissivity 0.98 for the real surface of the Earth, lambda-zero becomes 0.300/0.98 = 0.306, a factor of 0.306/0.266 = 1.15 larger than 0.266. Since 7/6 = 1.166…., this might explain why the factor of 7/6 in front of lambda-zero appears in the formula In the literature cited by Lord Monckton. But this would predict a climate sensitivity of (0.306)( 3.7) = 1.13 K instead of 0.98 K. Why is there a difference of 15% in such a key factor?
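Points 6 and 7 in numbers (a sketch reproducing the comment's arithmetic):

```python
sigma = 5.67e-8
T_av = 288.0

j_surface = sigma * T_av ** 4    # ~390 W/m^2 at emissivity 1
trans = (255.0 / 288.0) ** 4     # Schlesinger's transmission factor, ~0.615

lambda_zero = T_av / (4 * trans * j_surface)
print(round(lambda_zero, 3))               # 0.300

# dividing by a surface emissivity of 0.98 instead of 1
print(round(lambda_zero / 0.98, 3))        # 0.306
print(round((lambda_zero / 0.98) / 0.266, 2))  # ~1.15, close to 7/6
```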
8. The short answer is that the TOA flux of 240 W/m^2 escapes from a partially clouded real Earth, whereas Schlesinger’s transmission factor was calculated for a hypothetical cloudless column of the troposphere with a constant lapse rate.
9. Although clouds only partially absorb visible radiation (they are not totally black when we look upward through them during the daytime), they are composed of water droplets or ice crystals that are essentially miniature Planck black bodies that absorb and emit infrared (IR) radiation with emissivity approximately 1.
This absorption will be over all IR frequencies, not just at discrete frequencies corresponding to molecular vibration-rotation bands. Thus the net absorption from the Earth’s surface to the cloud-top will be greater than if there were no clouds. Even if the lapse rate from the cloud-top to 10 km remains at -6.8 K/km, there will be a smaller TOA flux than 240 W/m^2 from the column above clouds. Since clouds cover a non-trivial 62% of the Earth’s surface, the mean flux when both cloud-free and clouded areas are considered will be less than 240 W/m^2. I.e. Schlesinger’s numbers are wrong, since they do not correspond to energy balance. The factor 7/6 is wrong, and must be removed in all future derivations of lambda-zero.
10. Are the numbers in the MODTRAN spectrum also wrong? No, because they predict the correct observed lapse rate of -6.8 K/km. This may be understood by considering what happens when we rise 1 km in the troposphere. The dry adiabatic lapse rate predicts a drop of 9.8 K, and this corresponds to zero injection of heat. Since enthalpy H varies with T, consider the heat needed to bring this drop in temperature to zero. It would be proportional to 9.8 K.
11. Now consider a black body surface with emissivity 1 emitting 390 W/m^2 upward. If 100% of the energy were absorbed within the next 1 km, then by Kirchhoff’s law that a good absorber is a good emitter, 100% of the energy would be emitted upward. We can ignore the back-radiation of 390 W/m^2, because it is simply balanced by another 390 W/m^2 upward emitted by the lower surface; i.e. the emission and back-radiation simply indicate communication between surfaces at thermal equilibrium, with no net change in temperature in either. The process would be repeated in the next 1 km layer, etc. until finally 390 W/m^2 would escape to outer space from the last layer. The result is that the initial opaque surface is simply extended to the surface of the last layer, from which photons are emitted according to the Stefan-Boltzmann law. The temperature change would be zero.
12. The MODTRAN spectrum shows that at 20 km, the TOA flux is 260.12 W/m^2 from a cloud-free 288.2 K Earth’s surface. At emissivity 0.98, the Earth’s surface would emit 383.34 W/m^2, so the transmission factor would be 260.12/383.34 = 0.6786. This is 10% higher than Schlesinger’s estimate of 0.615 in Point 6, explaining most of the difference between the two values of lambda-zero. This higher transmission factor would mean less absorption in the troposphere, and so a smaller value for climate sensitivity.
13. If the transmission factor is 0.6786, then the absorbance is 1 – 0.6786 = 0.3214. If 100% absorption results in a temperature change of 9.8 K from the dry adiabatic change for each km gain in altitude, then 32.14% absorption would produce a temperature change of 0.3214(9.8) = 3.15 K, since equal amounts of added energy per molecule are proportional to equal temperature changes. Therefore the change in temperature would be only -9.8 + 3.15 = -6.65 K for each km rise in altitude. This is close enough to the observed lapse rate of -6.8 K/km that we can claim to have derived it from first principles applied to the numbers in the MODTRAN spectrum, which must be right.
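The lapse-rate reconstruction in points 12 and 13 can be verified in a few lines, using only the MODTRAN figures quoted above:

```python
toa_flux = 260.12      # W/m^2 at 20 km, cloud-free, from the quoted MODTRAN run
surface_flux = 383.34  # W/m^2 emitted at 288.2 K with emissivity 0.98

trans = toa_flux / surface_flux   # transmission factor, ~0.6786
absorb = 1 - trans                # absorbance, ~0.3214

# per the comment: 100% absorption would cancel the full dry adiabatic drop
# of 9.8 K/km, so partial absorption cancels a proportional share of it
lapse = -9.8 + absorb * 9.8

print(round(trans, 4), round(lapse, 2))  # 0.6786 -6.65
```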
14. The value of lambda-zero using Tav = 288.2 K, j’ = 383.34 W/m^2 and j’ = j/0.6786 (where we have used the correct value for transmission factor that explains the observed lapse rate) is then
(288.2)/[4(0.6786)(383.34)] = 0.277
15. Therefore if we apply the MODTRAN value of 3.39 W/m^2 for the value of (delta j’), the climate sensitivity is 0.277(3.39) = 0.94 K (not including feedbacks). If we use 3.7 W/m^2 for (delta j’), the value would be 1.02 K, a difference of 8%, reflecting the differences in radiative forcing. Since the MODTRAN spectrum is so accurate in predicting/explaining the observed lapse rate, something it was not designed to do, I place more faith in the value of 0.94 K. There are other comments I could make, but I have to leave now for a personal duty.
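And the final lambda-zero and sensitivity figures (a sketch using the values quoted above):

```python
T_av = 288.2
surface_flux = 383.34   # W/m^2, surface emission at emissivity 0.98
trans = 0.6786          # transmission factor derived from the MODTRAN run

lambda_zero = T_av / (4 * trans * surface_flux)
print(round(lambda_zero, 3))   # 0.277

for dj in (3.39, 3.7):         # MODTRAN vs canonical doubled-CO2 forcing
    print(round(lambda_zero * dj, 2))  # 0.94, then 1.02
```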

Reply to  rogertaguchi
September 7, 2016 8:20 am

adding equal amounts of H means delta T is the same for each average molecule.

Except, water transitions through 2 state changes, making it nonlinear, and there’s a lot of it.

Reply to  micro6500
September 7, 2016 1:19 pm

Yes, phase transitions are involved in the rate of heat transfer from surface to upper levels in the atmosphere. So are convection currents, radiation transfer between cloud particles and then by collision to the main molecules of the air, and radiation transfer between greenhouse gas molecules and then by collision to the main molecules of the air. At steady state, the energy transferred is stored in N2 and O2, which outnumber CO2 by 2500:1 and water vapour molecules by 60:1 at 15 Celsius. The temperature profiles at steady state are not labelled according to convection, cloud cover, etc. because these, like latent heat transfers, are transient things (it cools down when a cloud temporarily blocks the Sun). One exception is the temperature inversion that might occur near the poles during the long night/winter, but this is an indication that steady state is not reached there. The total amount of heat absorbed is proportional to the areas of net absorption in the MODTRAN and experimental spectra, and how that heat is transferred (including by phase changes) is not important if steady state has been achieved. Phase changes, convection, and clouds are confined to the troposphere, and ultimately energy balance is achieved by the photons reflected, and the IR photons that escape to outer space as monitored in the spectrum. Hope this helps.

Reply to  rogertaguchi
September 7, 2016 1:54 pm

if steady state has been achieved

This is important. Late at night the rate of temperature change slows by 75% or more, but this happens only as air temps start to near dew points.
All the while the temperature of the sky has not appreciably changed, and the morning breeze also hasn’t started.

September 7, 2016 8:06 am

For the Stefan-Boltzmann equation’s use, one cannot average radiation. The radiation happens during the day, when it corresponds to the 4th power of the temperature difference between the surface of the Sun and the surface of the Earth, and of course all the issues with angle of incidence. At night, there is no radiation from the Sun.
Averaging radiation is scientifically meaningless, and discussing average radiation with respect to the S-B equation is scientifically meaningless. You have to look at a sphere that is rotating every 24 hours, irradiated with the full power of the Sun half the time, and dark half the time, with varying albedo somewhere around .30-.35, of course with the 23 or so degrees of angle of the Poles, immensely complex.
And of course the ability of the surface of the Earth and the atmosphere of the Earth to radiate to space is affected by the composition of the atmosphere, with water vapor by far the most significant opacity to outgoing infrared. CO2 prevails higher up where there is little water vapor, and the atmosphere is radiating at a temperature to which CO2 is significantly opaque, until you get to the altitude where the molecules of CO2 are so sparse that the atmosphere can radiate freely to Space.
If the Sun irradiated the surface of the Earth 24-7 at half its flux (that is all you have to say, “Flux,” or even “Radiative Flux”), then you could discuss the black-body laws with your averaged radiation. But it doesn’t.
Man, am I glad I am not trying to calculate any of this…

Reply to  Michael Moon
September 7, 2016 9:31 am

” You have to look at a sphere that is rotating every 24 hours, irradiated with the full power of the Sun half the time, and dark half the time”
No, you have a disc intercepting the radiation 24/24 and you have a sphere radiating 24/24. That is the basis of the 1/4 scaling factor.
However, I will second your reticence about averaging such quantities across the globe when reflectivity and temperature vary so widely both geographically and over 24h.

Reply to  Michael Moon
September 7, 2016 5:44 pm

” At night, there is no radiation from the Sun.”
True, but heat from the sun in the earth/sea/atmosphere still radiates out, and some extra heat is brought in at the edges by air currents. Agree it is immensely complex.

Sun Spot
September 7, 2016 9:05 am

Ummmmm, haven’t you’all forgotten that the science is settled, no need for this mathy stuff just green-wash politics please?

Bill Illis
September 7, 2016 9:09 am

There is also the “actual effective radiation level”.
In reality, there is emission directly from the surface at 0 metres in the atmospheric windows (in the below depiction, this appears to be a hot desert since the temp is very high). There is emission from H2O across many spectral regions and at almost all layers of the atmosphere; there is emission from CO2 at 15 um, which primarily occurs at -50C high in the stratosphere (CO2 cools off the Earth, if you think about it, with its large emission spectrum to space in the stratosphere); and we have ozone emitting a little near the surface but mostly high up in the ozone layer, where it is a little warmer than the stratosphere because ozone also intercepts solar radiation there.
There are in fact, many effective radiation levels and spectra.
And there is a large variation in the actual long-wave emission to space based on geography and latitude and time of the season etc. In this CERES map from April (a good average month with little seasonal influence) the values range from as low as 100 W/m2 in Antarctica to some 300 W/m2 in some places (the average is right about 240 W/m2).

Daniel Kaplan
Reply to  Bill Illis
September 8, 2016 12:17 am

Along the same line of thought
The effective radiation level and corresponding temperature are concepts that are useful to
understand qualitatively the so-called “greenhouse effect”.
However, I believe they are only meaningful and useful if :
1-defined for a given IR wavelength
2-defined for a clear sky condition
3-defined for given local atmospheric conditions
Apologies to those for which the following is well known.
Since the pressure decreases exponentially with altitude z as exp(-z/z0) there is an altitude ERL at
which, for a given wavelength, the CO2 (or other GHG) absorption length equals z0. Any light emitted above the ERL will mostly escape to space. Emitted below the ERL, most of it will be reabsorbed. The simplifying concept is that the TOA radiation at that wavelength is governed by the black body radiation at the temperature occurring at the ERL.
This concept explains the shape of clear sky TOA radiation spectra, such as the one you show:

The CO2 absorption spectrum is between 550 and 800 cm-1. The dotted lines are theoretical black body spectra at various temperatures. The region 620-700 emits like a black body at 218K, except for a sharp peak at the center, which corresponds to the CO2 central absorption line. In this wavelength region the ERL is in the tropopause, and the radiation temperature is the tropopause temperature. If CO2 concentration changes, the ERL will essentially remain at the tropopause and the radiation will not change appreciably (saturated greenhouse effect). The center line is much more efficient and its ERL is in the stratosphere where the temperature is higher (hence the peak). All the dependence on CO2 concentration comes from the wings below 620 and above 700. There the ERL is in the troposphere and sensitive to CO2 concentration. It is easy to show that the exponential pressure dependence translates, for those wavelengths, into the well known logarithmic concentration dependence of the CO2 effect.
One often overlooked issue is that any natural change of the tropopause temperature will modify the 620-700 radiation : a CO2 induced, but essentially CO2 concentration independent, natural change.
Given the above considerations, I do not see much pertinence in averaged ERLs and their use with
global S-B radiation laws.
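A minimal sketch of the exponential-to-logarithmic argument above, with assumed illustrative values for the scale height and the column optical depth: if absorber density falls off as exp(-z/z0), the optical depth looking down from space to altitude z is proportional to exp(-z/z0), so the altitude where it reaches 1 rises by a fixed increment z0·ln(2) for every doubling of concentration:

```python
import math

z0 = 8.0            # pressure/density scale height in km (assumed)
tau_surface = 50.0  # optical depth of the whole column at this wavelength (assumed)

def z_erl(conc_ratio):
    """Altitude (km) where the optical depth above drops to 1,
    for a concentration scaled by conc_ratio from the baseline."""
    return z0 * math.log(tau_surface * conc_ratio)

# successive doublings raise the ERL by the same increment, z0*ln(2)
steps = [z_erl(2 ** (n + 1)) - z_erl(2 ** n) for n in range(3)]
print([round(s, 3) for s in steps])  # [5.545, 5.545, 5.545]
```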


Leo Smith
September 7, 2016 9:13 am

What am I missing here?
With utmost and genuine respect Willis, you are missing something fundamental: The understanding that there is no distinction between ‘physical fact’ and ‘model’, that ultimately even what you would call ‘physical fact’ is the output of a model.
All ‘physical facts’ are in the end interpretations of our experience mapped onto a metaphysical model.
WE assume the existence of a ‘physical reality, in a space time ‘dimension’ where everything is interconnected via ‘causality’ and regulated by immutable ‘natural law’.
That these assumptions enable the construction of a coherent and self-consistent picture of the world does not in any sense guarantee that they are in fact fundamental characteristics of the ‘world-as-it-really-is’, as opposed to the ‘world-as-we-see-and-partially-understand-it’.
Habitually dealing with the concepts so derived as if they were real beguiles us into the comfortable illusion that they are real, not model outputs.
Look at gravity. What is gravity? It is a term, and, post Newton, a mathematically exact term, used to describe a relationship between other model constituents – mass, time, space – that occur in our notion of what is actually generating our experience.
We can introduce other less precise terms, like ionosphere, Heaviside layer, troposphere, stratosphere, ecosphere and so on, to describe other relationships.
Do these exist as clearly defined physical entities, with precisely defined, infinitely sharp boundaries? No. Do they physically exist?
Aye, there’s the rub, the $64,000 question.
If you say no, then neither does gravity. If you say yes, then maybe so too do unicorns.
It is a particular conundrum, and elephant in the philosophical and scientific bedroom.
Fortunately, in my case, I was an engineer before I became fascinated by philosophy. We have a simple adage: “If it works, use it”.
And I am afraid that adage is becoming de rigueur amongst the bleeding-edge physical scientists. Is Quantum Theory real? People can’t get their heads round the pictures it puts in their brains. It makes them feel very uncomfortable. In order to do physics at this level, scientists simply don’t address the question of the reality or otherwise of the ‘introduced entities’. They simply try to find equations that fit the observable data, couched in terms of meter movements, flashes of fluorescence, and tracks of water vapour…
It works, so we use it, and this computer is living (sic!) proof that the equations of at least some quantum theory, allow predictions to be made that are borne out in practice.
What does it mean in terms of physical reality?
Mate, there is no physical reality. Only the experience of one. That is the only way to understand the dilemma and resolve it. Physical reality is a way that smart apes can tell each other ‘where’ the ‘best’ ‘banana’ ‘tree’ is. A model of experience, that works.
It shouldn’t be taken so literally.
In this light the answer to your quandary lies in the misapprehension that there is such a thing as a ‘real world’ that is distinct from a model of it. I would say it makes sense to say there is, but with an extreme proviso. It will always lie beyond a layer of models we use to represent it. Conscious rational thought itself is a mechanism to map experience into a pre-ordained set of co-ordinates, and the proof of the pudding is never more and never less than that the model output matches experience.
That is as good as it gets, I fear.
We cannot debate the reality or otherwise of concepts. That is either supremely undecidable, or manifestly wrong. Concepts have no physical reality, but then again, neither does physical reality!
All that matters is the prosecution of accurate rational thought, and that is what science (or, to use its older name, natural philosophy) is based on. And accurate rational thought should take no prisoners, and face up to its ultimate limit. Kant didn’t write ‘A Critique of Pure Reason’ for no reason. It was a warning that in the end, a model is only a model. Not reality. That didn’t stop two centuries of scientists pretending, and having success by assuming, that in fact their equations WERE reality. Until quantum physics came along and made the warning all too relevant.
You and I are products of that misapprehension. I just woke up one day and realised that it was a misapprehension. As many philosophers from Occam and onwards have realised.
The real question to ask about ERL is not whether it’s real or not. It’s whether it is useful. Ultimately, does it generate output that agrees with and predicts what you would term ‘data’?
If ERL is anything in your model, it’s an ’emergent property’ of what you would call ‘physical processes’ so it can of course move. As that’s what ‘physical objects’ do.
Your confusion seems to stem from a hard assumption that there is a clear distinction between ‘reality’ and ‘theory’, and you know where the border is.
Unfortunately, that is a position that cannot withstand the assault of modern physics: at the bottom there may well be a certain sort of reality. We must assume so, or abandon rational inquiry altogether.
But between us and our notions, and It, I am afraid it’s ‘models all the way down’.
And it is a long way down.

September 7, 2016 9:19 am

Willis, thank you for the essay.

September 7, 2016 9:30 am

Here’s a conceptual real heat balance approach. These tables display the incoming/outgoing/balance over 24 hours on a horizontal square meter at the equator during the equinox as it rotates beneath the sun. It includes corrections for the oblique incidence where the spherical surface is angled to the incoming TSI. People who design and site solar panels understand this. Just need to repeat for the other 364 days, seasons from solstice to solstice and the other 5.1 E14 square meters minus 1. You will need a bigger computer.
Daylight Incoming Heat
Hour | Angle | Gain W/m^2

Heat Loss, 24 hours, Q/A = U * dT, U = 1.5
Hour | Surface, C | ToT, C | Loss W/m^2 | Difference
0500 | 3.33 | (40.00) | 65.00 | (sub total 890.00)
0600 | 3.33 | (40.00) | 65.00 | (65.00)
0800 | 5.56 | (40.00) | 68.33 | 615.67
Net Balance: 10,791.7
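The oblique-incidence correction described above can be sketched with the cosine-of-zenith-angle weighting; the TSI value and the one-hour step here are illustrative assumptions, not the commenter's actual inputs:

```python
import math

TSI = 1361.0  # nominal top-of-atmosphere solar irradiance, W/m^2 (assumed)

def hourly_gain(hour):
    """Insolation on a horizontal square meter at the equator on the equinox.

    The surface is angled to the incoming beam, so the flux is scaled by
    the cosine of the solar zenith angle; at night the gain is zero.
    """
    hour_angle = math.radians(15.0 * (hour - 12.0))  # sun sweeps 15 deg/hour
    cos_zenith = math.cos(hour_angle)
    return TSI * cos_zenith if cos_zenith > 0 else 0.0

# Crude daily total using one-hour steps (roughly W*h per m^2)
daily_total = sum(hourly_gain(h) for h in range(24))
```

A finer time step, the other 364 days, and the remaining square meters would follow the same pattern, which is presumably why the commenter says a bigger computer is needed.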

Reply to  nickreality65
September 7, 2016 10:42 am

This is excellent.

Reply to  Michael Moon
September 7, 2016 10:50 am

Thanks, I guess.
Well, it’s the same process used to evaluate the furnace needs for a house. Not exactly rocket science – or “climate” science, just basic HVAC,

Reply to  nickreality65
September 7, 2016 10:59 am

Should be 967.32.

Reply to  nickreality65
September 7, 2016 11:58 am

How do they account for the fact that areas of the earth are heated by light bent around the edge of the theoretical flat circular disk they calculate with? Thus calculations need to be made based upon “apparent” sunrise/sunset and all of the problems associated with that.
“apparent sunrise/sunset – Due to atmospheric refraction, sunrise occurs shortly before the sun crosses above the horizon. Light from the sun is bent, or refracted, as it enters earth’s atmosphere. See Apparent Sunrise Figure. This effect causes the apparent sunrise to be earlier than the actual sunrise. Similarly, apparent sunset occurs slightly later than actual sunset. The sunrise and sunset times reported in our calculator have been corrected for the approximate effects of atmospheric refraction. However, it should be noted that due to changes in air pressure, relative humidity, and other quantities, we cannot predict the exact effects of atmospheric refraction on sunrise and sunset time. Also note that this possible error increases with higher (closer to the poles) latitudes.”
I see this phenomenon as adding heat, but not being accounted for in their assumed calculations of averages, based upon thumb rules for calculating what various parameters are/should be. Have you ever noticed on a hike in a dry desert how rapidly it gets cold as soon as the visible Sun disappears? It does not get cold at the “actual” (physical) sunset. At times it has felt like I walked into a meat locker when the last sliver disappeared.
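The size of the refraction effect can be put in numbers with a back-of-the-envelope calculation; the 34 arcminute horizon refraction and 16 arcminute solar semidiameter are standard textbook values, and the vertical-sunrise geometry assumes the equator at equinox:

```python
REFRACTION_ARCMIN = 34.0    # typical atmospheric refraction at the horizon
SEMIDIAMETER_ARCMIN = 16.0  # apparent solar radius (upper limb vs center)

# At the equator on the equinox the sun crosses the horizon vertically,
# sweeping 15 degrees per hour, i.e. 4 minutes of time per degree.
depression_deg = (REFRACTION_ARCMIN + SEMIDIAMETER_ARCMIN) / 60.0
minutes_per_event = depression_deg * 4.0       # earlier sunrise OR later sunset
extra_daylight_min = 2.0 * minutes_per_event   # both ends of the day
```

That is roughly six to seven extra minutes of direct sunlight per day at the equator, and more at higher latitudes, which is indeed energy a flat-disk average would miss.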

Reply to  usurbrain
September 7, 2016 1:34 pm

The phenomenon you present is well known and taught in the Military, I have no idea how it is accounted for if at all:
“Nautical twilight is defined to begin in the morning, and to end in the evening, when the center of the sun is geometrically 12 degrees below the horizon. At the beginning or end of nautical twilight, under good atmospheric conditions and in the absence of other illumination, general outlines of ground objects may be distinguishable, but detailed outdoor operations are not possible. During nautical twilight the illumination level is such that the horizon is still visible even on a Moonless night allowing mariners to take reliable star sights for navigational purposes, hence the name.”
Where there is light there is energy.

Reply to  usurbrain
September 7, 2016 1:40 pm

Yes, the rapid cooling near/after Sunset is due to the low water vapour concentration. Since water vapour is the main greenhouse gas overall (twice the absorption as CO2), in desert areas the net outward flux is not balanced by incoming Solar radiation, so the solid surface of the Earth rapidly cools. Near the poles, since the nighttime lasts weeks or months, the loss of heat via radiation of IR photons to outer space is so great that a temperature inversion can occur in the lower km or so of the troposphere. “Back-radiation” from CO2 can then transfer heat stored during the summer/daytime in N2 and O2, but this is slow as it requires transfer from N2 and O2 during collision with CO2 to form excited state CO2 molecules that can emit IR back to the ground. If there is a high water vapour concentration (e.g. near the Equator over the Pacific Ocean), the greenhouse gas H2O vapour can increase the back-radiation, as can liquid water droplets suspended as fog, mist, etc. In addition, condensation or sublimation (to form frost crystals from water vapour, which is a gas, not liquid droplets) transfers heat from the lower troposphere to the ground. Obviously this mechanism is reduced over desert areas at night.

Ron Clutz
Reply to  nickreality65
September 7, 2016 1:31 pm

nick, thanks for this. Do you have a link where I can read more about this?

Reply to  Ron Clutz
September 7, 2016 1:38 pm

Don’t really have link. It’s math, algebra, geometry, physics, parameters from the web & astronomy & second year heat transfer class.

September 7, 2016 9:58 am

Interesting post as usual Willis.
Lately I’ve been doing a Bayesian analysis on the Hadcrut4 vs CO2 concentration. The physical model is a gray-gas atmosphere with the emissivity path-length calculated by integration over the US standard barometric curve. The idea being that averaged over space and time the effective temperature is governed by the atmospheric emissivity, which in turn is correlated strongly to the CO2 concentration. Feedbacks are assumed linear over the small anomaly ranges and thus only scale the form of the gray-gas model equation. The free parameters are the effective radiation temperature, the exponent of the emissivity vs CO2 concentration, and a temperature offset to model the arbitrary choice of Ta=0. The first plot shows the results of Bayesian parameterization.
The best-fit parameters are Te = 251.3K, e = .454, offs= -48. These values are all close to the expected values. The gray-gas models for CO2/water vapor mix in the literature have exp ranging from .42 to .5, and the Te is close to the value Willis determined here. The plot below shows the posterior distributions, which are nicely Gaussian except for the temperature distribution. This distribution can be explained by looking at the residual, which shows a strong 67-year periodicity which modulates Te.
The equivalent model preferred by the IPCC is the familiar linear transformation of the CO2 forcing, which is assumed logarithmic with CO2 concentration. This in turn implies that the emissivity vs path-length curve never saturates. A Bayesian run with the model a*ln(CO2/Co)+b is shown below.
The best-fit parameters are:
a=3.05, Co=278.9 ppm, b= -.502 C
Both models fir the observed data equally well and the variance of the residual is nearly identical.
However, projecting the models out to 800ppm gives drastically different results.
The gray-gas model gives a 2xCO2 prediction of 1.7 C versus the non-saturating model’s 2.11 C.
Projecting back to low concentrations is also interesting.
Note that the log model cannot account for the -8 deg anomaly observed in the paleo record at 100 ppm CO2.
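The log-model numbers above are easy to check; taking the quoted best-fit parameters (a=3.05, Co=278.9 ppm, b=-.502 C) at face value, the 2xCO2 sensitivity falls straight out of the model form:

```python
import math

# Best-fit parameters as reported in the comment above
a, C0, b = 3.05, 278.9, -0.502

def anomaly(co2_ppm):
    """Temperature anomaly under the non-saturating log-CO2 model."""
    return a * math.log(co2_ppm / C0) + b

# A doubling adds a*ln(2) regardless of the starting concentration
sensitivity_2x = anomaly(2.0 * C0) - anomaly(C0)  # about 2.11 C
```

This is why the 800 ppm projections of the two models diverge: the log form keeps adding a*ln(2) per doubling forever, while a saturating gray-gas emissivity does not.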

Reply to  Jeff Patterson
September 7, 2016 10:02 am

I don’t know why I can never seem to get my plots to show up in line here. I was under the impression that WP would figure out an image URL and insert it but that doesn’t seem to work. Any helpful hints would be appreciated.

Reply to  Jeff Patterson
September 7, 2016 10:46 am

I can see all but the last one which will not display so I assume there is an error in the link.

Reply to  Jeff Patterson
September 7, 2016 5:40 pm

See https://wattsupwiththat.com/test/ for more information. The one sentence version is image URLs have to be on their own line and end in an image extension, e.g. .jpg or .png.
Try it – over there, not here.

Reply to  Jeff Patterson
September 7, 2016 10:43 am

interesting work Jeff. I’ll look at this in more detail when I have time .
“However, projecting the models out to 800ppm gives drastically different results”
This is a good example of why extrapolation way outside the calibration data is unscientific and likely to produce meaningless, misleading results. However, this comment stands out as being the wrong way to look at things.
The whole of IPCC projections are based on models tuned to fit 1960-1990 and then projected to make meaningless speculation about whatever “may” happen in 2100 and beyond.
It is hard to believe that 25,000 of the world’s “top scientists” are unaware that this has no scientific validity, so one is obliged to conclude that it is intentionally misleading.

Reply to  Greg
September 7, 2016 1:59 pm

The plot below shows the residual (left, red), a single-tone sine fit (left, blue), the power spectral density of the residual (right, blue) and the PSD after subtracting the sine (right, red).
If you’ve ever tried canceling a sine wave in noise, this plot should amaze. Getting that level of cancellation over the 170-year period implies the frequency, phase and amplitude were all constant over that period, as a mismatch or variance in any of these terms would result in incomplete cancellation which would show up in the PSD plots. Starts to make me believe those who point to an astronomical source of the AMO. Anyway, the point was to show how the AMO modulates Te. Rerunning the Bayesian fit with the sine fit removed from the Ta data makes the Te posterior PDF more Gaussian and reduces the standard error of the parameters.
For what it’s worth, the 2xCo2 projection with the “denoised” data is 1.6C

Reply to  Jeff Patterson
September 7, 2016 10:48 am

Corrections to the above:
The offset for the gray gas model = -0.48C not 48 C
fir should be fit

Reply to  Jeff Patterson
September 7, 2016 1:56 pm

Where in the Paleo record do you find 100 ppm CO2?

Reply to  tty
September 7, 2016 2:11 pm

My bad. I was trying to post from memory because the plot had been renamed in my archive. I tracked down the plot I was thinking of and it’s more like 190 ppm at the minimum.
I can’t remember where this plot comes from (any help?) but as I recall the temps are either Arctic or Antarctic, which would be expected to vary by some multiple (2-3) of the GMST. A purely log CO2 curve cannot explain the variance in the paleo record, while the GGM does much better.

September 7, 2016 10:46 am

Many thanks, Willis, for yet another interesting and informative article. However, I have to differ from you on this: “The problem I have with that physically-based explanation is that the ERL is not a real layer. It is a theoretical altitude that is calculated from a single value, the amount of outgoing longwave radiation. So how could that be altered by physical processes? It’s not like a layer of clouds, that can be moved up or down by atmospheric processes. It is a theoretical calculated value derived from observations of outgoing longwave radiation … I can’t see how that would be affected by physical processes.“. (a) At least two values are involved, amount of radiation, and temperature. Maybe others too. (b) Nothing has to move physically for ERL to change, only one of the values involved. So a physical process that affected any of the values involved could easily change the ERL.

September 7, 2016 1:32 pm

Per NASA the ERL is 100 km. This is where 240 W/m^2 ISR and 240 W/m^2 OLR must balance. It is where molecular density has fallen to the point that conduction/convection/heat/energy concepts fall apart and all that remains is radiation, and S-B BB properly applies, where the material gas changes to a photon gas.

Reply to  nickreality65
September 7, 2016 6:00 pm

“Per NASA ERL is 100 km.”
Hmm, only by NASA definition for a NASA defined ERL.
The chance of any measured level being exactly 100 of any unit (km, feet, miles, etc.) is astronomically small.
They just set an arbitrary, easy-to-calculate measure for their own convenience. The true ERL, as Willis said, is very variable and, averaged for the earth, depends on the input you assign to the sun: “the ERL is a calculated theoretical construct”.
“This is where 240 W/m^ ISR and 240 W/m^2 OLR must balance. ”
At exactly 100km? I find this hard to believe. Either the distance is wrong or the input is wrong.

Reply to  angech
September 7, 2016 6:23 pm

340 – 100 (albedo) = 240. Albedo is out of the equation, so 240 ISR and 240 OLR are left to work out the balance.

September 7, 2016 1:43 pm

Thanks Willis for your post.
I apologize if this is off topic, but is this analysis consistent with this from the Climate Change 2007: Working Group I: The Physical Science Basis?
I am surprised that the thermals are so small at 24 W/m^2, especially considering the noticeable cooler onshore winds along the seashore displacing the rising warmer air.

Reply to  Catcracking
September 7, 2016 5:58 pm

If you subtract the non-radiant components of 24 W/m^2 from thermals and 78 W/m^2 from latent heat from the 324 W/m^2 of ‘back radiation’, you are left with 222 W/m^2, which when added to 168 W/m^2 gives the 390 W/m^2 of surface radiation corresponding to an average surface temperature of about 288 K.
First, the average surface temp per satellite data is closer to 287 K, and he overestimates the incident solar power. Second, the return of thermals (what goes up must come down) and latent heat (rain, weather, etc.) is not in the form of photons and cannot properly be considered radiation. He also fails to recognize that solar energy absorbed by the atmosphere is absorbed primarily by clouds, which are part of the same thermodynamic system as the surface, most of which is ocean tightly coupled to the clouds by the hydro cycle, so separating these components only adds unnecessary wiggle room. For all intents and purposes relative to the equilibrium surface temperature, solar energy absorbed by clouds is equivalent to solar energy absorbed by the oceans (the surface), and this difference is not properly lumped in as another back-radiation term either.
He also underestimates the size of the transparent window, and when asked where he got his value, he says it was an ‘educated guess’. Line-by-line simulations of the clear sky show the transparency to be about 46% at nominal GHG concentrations. Considering 2/3 of the planet is covered by clouds, which trap surface emissions anyway, the resulting transparency is about 1/3 * .46 = .153. Multiplying 390 by .153 results in about 60 W/m^2, as opposed to the 40 W/m^2 claimed. Since clouds are not 100% opaque, about 20% of the surface emissions pass through them, of which 46% passes into space through the transparent window; that is another 24 W/m^2 unaccounted for as passing through the transparent windows in the atmosphere. In total, he underestimates the power passing through the transparent window by about a factor of 2.
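The flux arithmetic in the comment above can be checked in a few lines; the flux values are the ones quoted, and the blackbody conversion assumes the Stefan-Boltzmann law with emissivity 1:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

back_radiation = 324.0   # all fluxes in W/m^2, as quoted above
thermals = 24.0
latent_heat = 78.0
solar_absorbed = 168.0

# Non-radiant components removed from back radiation, plus absorbed solar
surface_emission = (back_radiation - thermals - latent_heat) + solar_absorbed

# Equivalent blackbody surface temperature
T_surface = (surface_emission / SIGMA) ** 0.25  # about 288 K

# Clear-sky window transparency ~46%, cut to ~1/3 of the sky by cloud cover
window_fraction = (1.0 / 3.0) * 0.46            # about 0.153
through_window = surface_emission * window_fraction  # about 60 W/m^2
```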

Reply to  co2isnotevil
September 7, 2016 6:42 pm

So if I understand correctly, you are indicating that the IPCC diagram has many errors? I think it is still widely quoted.
Although I do not have specifics, I tend to think that the system is so complicated and chaotic that anyone could fudge any of the numerous questionable input factors to get whatever answer you want.

Reply to  co2isnotevil
September 7, 2016 7:23 pm

co2isnot evil
Thanks for your comments, they are helpful

Reply to  Willis Eschenbach
September 11, 2016 9:25 am

Hi Willis,
Nice work. Very helpful. A couple of questions. Is it your understanding that the latent heat is transported to the troposphere where it becomes part of the radiation budget at that layer? If so, how do we account for the latent energy released as precipitation? And (this one’s probably stupid), what about the energy required to lift all that water?

Reply to  Catcracking
September 8, 2016 9:51 pm

Catcracking: there are more updated versions of those, with slightly different estimates from different groups. Look for work by Kevin Trenberth, Graeme Stephens and Martin Wild.
This paper summarises the Trenberth results as of 2009, and explains the data sources:

September 7, 2016 4:35 pm

So by ‘ex-fiancee’ do you mean ‘wife’ Willis?
Whatever, congratulations on your relationship 🙂

September 7, 2016 4:49 pm

The 390 W/m^2 upwelling is calculated from inserting 15C, 288 K, in the S-B BB equation. This is incorrect.
1) 24 + 78 + (390 – 324) or 66 = 168. All the surface power flux is accounted for, the 324 appears out of nowhere.
2) The downwelling cannot equal the 324 upwelling, as that would be 100% efficient perpetual motion, which can’t happen.
3) The GHGs in the troposphere are at low temperatures, i.e. -20 C to -40 C, so their S-B BB output would be about half of the 324; heat can’t flow from cold to hot; and they radiate in all directions, not just back to the surface.
4) And the emissivity of CO2 is low, 0.10 or less (Nahle Nasif). There is no way this loop as represented is possible nor is the GHE theory that proposes it.
1) through 4) are violations of basic thermodynamic laws.
Plus this graph originated w/ Trenberth who has an updated version in Trenberth et al 2011jcli24 Figure 10.
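Point 3 above is checkable: the Stefan-Boltzmann output of a layer at -20 C to -40 C, taken as a blackbody upper bound, can be computed directly. The emissivity of 1 is an assumption for the bound; whether the "about half" conclusion follows is the commenter's claim, not something the arithmetic settles:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def bb_flux(temp_c):
    """Blackbody emission in W/m^2 at a temperature given in Celsius."""
    return SIGMA * (temp_c + 273.15) ** 4

flux_warm = bb_flux(-20.0)  # about 233 W/m^2
flux_cold = bb_flux(-40.0)  # about 168 W/m^2

# These bracket roughly 50-70% of the quoted 324 W/m^2 downwelling
fraction_cold = flux_cold / 324.0
fraction_warm = flux_warm / 324.0
```

So "about half" holds near the cold end of the quoted range; at -20 C a blackbody layer would emit closer to 70% of 324 W/m^2.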

Reply to  nickreality65
September 8, 2016 11:41 am

Thanks nick

September 7, 2016 5:05 pm

Another piece of the puzzle that you are missing, Willis
You might remember the heroic role that newly-invented radar played in the Second World War. People hailed it then as “Our Miracle Ally”. But even in its earliest years, as it was helping win the war, radar proved to be more than an expert enemy locator. Radar technicians, doodling away in their idle moments, found that they could focus a radar beam on a marshmallow and toast it. They also popped popcorn with it. Such was the beginning of microwave cooking. The very same energy that warned the British of the German Luftwaffe invasion and that policemen employ to pinch speeding motorists, is what many of us now have in our kitchens. It’s the same as what carries long distance phone calls and cablevision.

Dr. S. Jeevananda Reddy
September 7, 2016 5:29 pm

Does it serve any purpose building castles on the loose soil like imaginative numbers? Why not try this with satellite data?
Dr. S. Jeevananda Reddy

September 7, 2016 5:31 pm

The problem with using the slope of SB at the surface is that there is an atmosphere between that surface and the other relevant observables. Relative to the system, a surface emitting 385 W/m^2 of net emissions results in only 240 W/m^2 of output emissions. Instead, use the slope of SB at T=287 with an emissivity of 0.62 (emissions at 287 K = 385 W/m^2, and 385/240 == 1/0.62). Now the so-called ‘zero feedback’ sensitivity has a value of 1/(4 * 0.62 * 5.67E-8 * 287^3) = 0.3. Of course, this is only the zero-feedback sensitivity when you consider the open-loop gain to be 1/0.62 = 1.61, which is the current steady-state closed-loop gain with the RELATIVE feedback normalized to zero. Keep in mind that the feedback fraction and open-loop gain can be traded off against each other to get whatever closed-loop gain is required.
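The sensitivity calculation above reproduces in a few lines, using the 287 K surface temperature and 240 W/m^2 outgoing flux quoted in the comment:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
T = 287.0        # mean surface temperature, K
OLR = 240.0      # outgoing flux at top of atmosphere, W/m^2

surface_flux = SIGMA * T ** 4        # about 385 W/m^2
emissivity = OLR / surface_flux      # effective emissivity, about 0.62

# Sensitivity from the SB slope at T, scaled by the effective emissivity
sensitivity = 1.0 / (4.0 * emissivity * SIGMA * T ** 3)  # about 0.3 K per W/m^2
```

Algebraically this collapses to T/(4*OLR) = 287/960, about 0.3 K per W/m^2, which is the figure quoted.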

September 7, 2016 8:26 pm

“At any given location, the emitted radiation is a mix of some radiation from the surface plus some more radiation from a variety of levels in the atmosphere.”
Based on modtran investigations I disagree. According to modtran, effectively zero surface radiation makes it to any significant altitude in the atmosphere. The radiation from the surface is absorbed and thermalized for water, CO2 and ozone; the most significant GHG’s.
To be sure, kinetic energy can re-emerge as radiation when the conditions are appropriate, but apparently that altitude is 1 km for CO2 and maybe 4.5 km for water. This may seem like quibbling, but there is a huge difference between the speed of light and the speed of…sound?
“the Planck parameter is how much the earth’s outgoing radiation increases for a 1°C change in temperature”
No. The Planck parameter is not quantized. It is a sliding scale. We choose 1-degree increments to suit our fancy.
Willis, you are a wonderful person, but you need more physics. The Effective Radiative Level differs for each molecule and each wavelength of light. It further differs for isotopologues. The differences are enormous. You can average them if you wish, but this average will conceal multiple orders of magnitude differences. When we are dealing with a “coupled chaotic system” where the “attractors” are ephemeral pockets of high entropy/low energy, methinks we need to do way better than these averages.

Reply to  Willis Eschenbach
September 8, 2016 10:09 pm

the IR window is affected by clouds, just as other parts of the spectrum.
page 11, fig 11
any water in the system will make a mess of any clear-sky calculation as far as I am concerned. E.g., how can line-by-line calculations be done without taking into account the emission from water on surrounding water/vapour molecules? When you have all the lines, then you can calculate? Well no, because at any given time a lot of those frequencies that can be absorbed are being absorbed, which alters the capacity to absorb the line in question.

Reply to  Willis Eschenbach
September 11, 2016 6:27 am

Willis, I have read your guest post a while ago, it was quite interesting for me layman.
But here, coming back at this your reply to gymnosperm, I have again the same problem as with many places in guest posts and comments, concerning this wonderful atmospheric window.
Everybody talking about it refers to the corresponding Wikipedia entry, and so you do too.
Though I myself do regularly refer to Wiki (en, de, fr) when I want to supply info I consider valuable, I’m not quite sure whether or not the info stored there concerning this special topic is still really accurate.
A counterexample to Wiki’s explanation is easy to find, e.g.
(please: don’t tell me “To NASA I never trust”).
Here you may read: The atmosphere is nearly opaque to EM radiation in part of the mid-IR and all of the far-IR regions.
And it is immediately visible on that page that the atmospheric window
indeed appears considerably more restricted than the one presented at Wiki.
The German Wiki page concerned with the atmospheric window
puts it a little differently as well.
If I used spectralcalc.com every day, I would obviously have obtained from them a license to access their entire panoply, especially the page
where I would select “Observer” in the menu, and might then obtain, for the range 8-14 µm, a nice transmittance plot telling us pretty well what the window looks like today (49 US$ for one month).

Reply to  Willis Eschenbach
September 12, 2016 9:25 am

Fair enough on the lecture.
I have definitely not overlooked the infrared window. The window is actually rather dirty thanks to water, especially in the tropics; and it has a huge “bite” taken out of the middle of it by ozone.
If my work with up and down Modtran is anything but an aberration of the program itself, there is zero lessening of radiation escaping from the surface to space in the water, CO2, or ozone bands until about a kilometer for CO2, 4 km for water and 5 km for ozone.
We are ultimately discussing the greenhouse effect-particularly from CO2-on the ERL. According to Modtran the greenhouse effect begins, not at the surface, but at one kilometer.
Another way of looking at it is that all IR radiation is completely absorbed and thermalized and the atmosphere radiates as a perfect blackbody up to a kilometer.
We could say all the rocks and dirt and the ocean surface and the various “layers” of the atmosphere CAN be averaged up to a km.
Above a km the differences prevail. CO2 radiates continuously from 1 km well into the mesosphere. Water radiates from 4 to 17 km. Ozone radiates from 5 to 40 km.
The putative average ERL at 5.3 km, which your maps show varying considerably from place to place (methinks largely variation in atmospheric pressure), is a porridge of the blackbody radiation below 1 km, and the blackbody radiation less the greenhouse reductions from 1 to 40 km.

Reply to  gymnosperm
September 20, 2016 6:56 am

Another way of looking at it is that all IR radiation is completely absorbed and thermalized and the atmosphere radiates as a perfect blackbody up to a kilometer.

The same thing should be what happens on the surface with water vapor when air temp nears dew point temp. All the available receptors at that wavelength are full. What conditions are you setting in modtran for water vapor conditions?

Reply to  micro6500
September 20, 2016 9:26 pm

1? I never mess with the defaults.

Reply to  gymnosperm
September 13, 2016 8:58 am

Willis Eschenbach on September 12, 2016 at 3:17 pm
Thanks / Danke / Merci

September 7, 2016 8:28 pm

If you break the outgoing radiation into the primary sources:
Surface, cloud tops @ 2.5 km, water vapor @ 5 km and CO2 @ 10 km
Using Trenberth, et al under average cloudy conditions,
The values are 40 30 112 and 53 adding to 235 Wm-2.
Using a lapse rate of 6.5 C / km, the temperatures in C are
15, -1.25, -17.5 and -50
Using Stefan Boltzmann at these temperatures, the changes per C of warming are
0.5584, 0.4440, 1.7640 and 0.9572 adding to 3.724 Wm-2 per C.
The inverse, 0.2685 C/Wm-2 (the temperature sensitivity factor), is close to that of Lord Monckton.
The temperature sensitivity of each source is 0.1846, 0.2197, 0.2644 and 0.3976. The weighted average using their emissions to space is 0.2751.
If one simply uses the 235 total Wm-2 leaving the atmosphere and applies it to a black body, the temperature sensitivity is 0.2699 very close to the above more accurate estimates.
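The layer-by-layer arithmetic above reproduces nicely with the Stefan-Boltzmann derivative in the form dF/dT = 4F/T, using the quoted fluxes and emission heights and the 6.5 C/km lapse rate from a 15 C surface; small differences from the quoted figures come from rounding of the emission temperatures:

```python
SURFACE_T_C = 15.0
LAPSE_C_PER_KM = 6.5

# (emission height km, flux to space W/m^2): surface, cloud tops,
# water vapor and CO2 -- the four primary sources quoted above
sources = [(0.0, 40.0), (2.5, 30.0), (5.0, 112.0), (10.0, 53.0)]

def planck_response(height_km, flux):
    """dF/dT = 4*F/T evaluated at this source's emission temperature."""
    temp_k = SURFACE_T_C - LAPSE_C_PER_KM * height_km + 273.15
    return 4.0 * flux / temp_k

total_response = sum(planck_response(h, f) for h, f in sources)  # ~3.7 W/m^2 per C
sensitivity = 1.0 / total_response                               # ~0.27 C per W/m^2
```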

September 8, 2016 12:02 am

The ERL is a dynamic entity which both causes and responds to convective changes from place to place (including within the vertical column) such that radiative imbalances are neutralised on average over time.
Thus do radiative gases fail to alter the long term thermal equilibrium between energy in from space and energy out to space

September 8, 2016 6:14 am

The claim that “nobody” doubts the greenhouse theory is false, as even a little due-diligence research reveals. Spencer Weart points out in his book The Discovery of Global Warming how highly controversial and contentious the theory was when first introduced. And for good reason. It was bogus then, and it remains bogus now, for the same reasons.
The only way CAGW pseudoscience Frankentheory continues to stumble around is because it powers a progressive world order agenda with all of the attendant wealth redistribution, abuse of power, oppression and misery. How many Bond villains held that same vision and how does it usually turn out?

James at 48
September 8, 2016 1:58 pm

We are a borderline ice planet, and it’s actually a bit of a win of the dice that we have warmth in enough areas to support the rich Biosphere and Civilization we enjoy.

September 13, 2016 7:10 am

“The putative average ERL at 5.3 km, which your maps show varying considerably from place to place (methinks largely variation in atmospheric pressure), is a porridge of the blackbody radiation below 1 km, and the blackbody radiation less the greenhouse reductions from 1 to 40 km.”
I should have said 1 to 70 km. This is the general average of the polar orbiting satellites and the upper limit of Modtran. It should be noted that CO2 radiates considerably above 70 km according to other models, and this energy loss remains unaccounted for in Modtran.
