I only ask because I want to know

By Christopher Monckton of Brenchley

I propose to raise a question about the Earth’s energy budget that has perplexed me for some years. Since further evidence in relation to my long-standing question is to hand, it is worth asking for answers from the expert community at WUWT.

A.E. Housman, in his immortal parody of the elegiac bromides often perpetrated by the choruses in the stage-plays of classical Greece, gives this line as an example:

I only ask because I want to know.

This sentiment is not as fatuous as it seems at first blush. Another chorus might say:

I ask because I want to make a point.

I begin by saying:

You say I aim to score a point. Not so:

I only ask because I want to know.

Last time I raised the question, in another blog, more heat than light was generated because the proprietrix had erroneously assumed that T / (4F), a derivative essential to my argument, was too simple to be the correct form of the first derivative ΔT / ΔF of the fundamental equation (1) of radiative transfer:

F = εσT^4, | Stefan-Boltzmann equation (1)

where F is radiative flux density in W m–2, ε is emissivity, held constant at unity, the Stefan-Boltzmann constant σ is 5.67 x 10–8 W m–2 K–4, and T is temperature in kelvin. To avert similar misunderstandings (which I have found to be widespread), here is a demonstration that T / (4F), simple though it be, is indeed the first derivative ΔT / ΔF of Eq. (1):

ΔT / ΔF ≈ dT / dF = d[(F / εσ)^(1/4)] / dF = (1/4) (εσ)^(–1/4) F^(–3/4) = T / (4F). (2)
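For the numerically inclined, the result can also be confirmed by a central-difference check (a minimal sketch, using the constants quoted below Eq. (1)):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4 (emissivity taken as 1)

def T_of_F(F):
    """Invert Eq. (1): T = (F / sigma)^(1/4)."""
    return (F / SIGMA) ** 0.25

F = 390.0                 # W m^-2, the surface flux used later in the post
T = T_of_F(F)             # ~288 K
h = 1e-4
numerical = (T_of_F(F + h) - T_of_F(F - h)) / (2 * h)  # central difference
analytic = T / (4 * F)    # the contested closed form

assert abs(numerical - analytic) / analytic < 1e-6
print(round(analytic, 4))  # ~0.1846 K W^-1 m^2
```

The numerical and analytic derivatives agree to within rounding error, as the algebra in Eq. (2) predicts.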

Like any budget, the Earth’s energy budget is supposed to balance. If there is an imbalance, a change in mean temperature will restore equilibrium.

My question relates to one of many curious features of the following energy-budget diagrams for the Earth:

Energy budget diagrams from (top left to bottom right) Kiehl & Trenberth (1997), Trenberth et al. (2008), IPCC (2013), Stephens et al. (2012), and NASA (2015).

Now for the curiosity:

“Consensus”: surface radiation FS falls on the interval [390, 398] W m–2.

There is a “consensus” that the radiative flux density leaving the Earth’s surface is 390-398 W m–2. The “consensus” would not be so happy if it saw the implications.

When I first saw FS = 390 W m–2 in Kiehl & Trenberth (1997), I deduced it was derived from observed global mean surface temperature 288 K using Eq. (1), assuming surface emissivity εS = 1. Similarly, TS = 289.5 K gives 398 W m–2.
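That deduction is easily checked (a sketch; the Stefan-Boltzmann constant is the value quoted in the post):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(T):
    """Eq. (1) with emissivity 1."""
    return SIGMA * T ** 4

print(round(flux(288.0), 1))   # ~390.1 W m^-2
print(round(flux(289.5), 1))   # ~398.3 W m^-2
```

Both quoted surface temperatures do indeed map onto the two ends of the "consensus" interval [390, 398] W m–2.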

The surface flux density cannot be reliably measured. So did the “consensus” use Eq. (1) to reach the flux densities shown in the five diagrams? Yes. Kiehl & Trenberth (1997) wrote: “Emission from the surface is assumed to follow Planck’s function, assuming a surface emissivity of 1.” Planck’s function gives flux density at a particular wavelength. Eq. (1) integrates that function across all wavelengths.

Here (at last) is my question. Does not the use of Eq. (1) to determine the relationship between TS and FS at the surface necessarily imply that the Planck climate-sensitivity parameter λ0,S applicable to the surface (where the coefficient 7/6 ballparks allowance for the Hölder inequality) is given by

λ0,S = (7/6) TS / (4FS) = (7/6) × 288 / (4 × 390) ≈ 0.215 K W–1 m2 ? (3)
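Numerically, Eq. (3) with the quoted values TS = 288 K and FS = 390 W m–2 (a minimal sketch; the 7/6 coefficient is the text's ballpark Hölder allowance):

```python
# Surface Planck parameter: (7/6) * T_S / (4 * F_S).
T_S, F_S = 288.0, 390.0
lambda_0S = (7.0 / 6.0) * T_S / (4.0 * F_S)
print(round(lambda_0S, 3))  # ~0.215 K W^-1 m^2
```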

The implications for climate sensitivity are profound. For the official method of determining λ0 is to apply Eq. (1) to the characteristic-emission altitude (~300 mb), where incoming and outgoing radiative fluxes are by definition equal, so that Eq. (4) gives incoming and hence outgoing radiative flux FE:

FE = (S / 4)(1 – α) ≈ 239.4 W m–2 (4)

where S/4 reflects the ratio πr2 / (4πr2) = 1/4 of the area of the disk the Earth presents to the Sun to the surface area of the rotating sphere; total solar irradiance S = 1366 W m–2; and (1 – α), where α = 0.3 is the Earth’s albedo. Then, from (1), mean effective temperature TE at the characteristic emission altitude is given by Eq. (5):

TE = (FE / σ)^(1/4) ≈ 254.8 K. (5)

The characteristic emission altitude is ~5 km above ground level. Since mean surface temperature is 288 K and the mean tropospheric lapse rate is ~6.5 K km–1, Earth’s effective radiating temperature TE ≈ 288 – 5(6.5) ≈ 255 K, in agreement with Eq. (5). The Planck parameter λ0,E at that altitude is then given by (6):

λ0,E = (7/6) TE / (4FE) = (7/6) × 254.8 / (4 × 239.4) ≈ 0.310 K W–1 m2. (6)
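The chain from Eq. (4) to Eq. (6) can be checked in a few lines (a sketch using the quoted inputs; note that with α = 0.3 exactly, FE comes out near 239.1 W m–2, a touch below the 239.4 quoted, so the post evidently uses marginally different inputs):

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S, alpha = 1366.0, 0.3   # total solar irradiance and albedo, as quoted

F_E = S * (1 - alpha) / 4                   # Eq. (4); 1/4 is the disk/sphere area ratio
T_E = (F_E / SIGMA) ** 0.25                 # Eq. (5); ~255 K
lambda_0E = (7.0 / 6.0) * T_E / (4 * F_E)   # Eq. (6); ~0.31 K W^-1 m^2
print(round(F_E, 1), round(T_E, 1), round(lambda_0E, 3))
```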

Equilibrium climate sensitivity to a CO2 doubling is given by (7):

ΔT = (5.35 ln 2) λ0 / (1 – λ0 f), (7)

where the numerator of the fraction, 5.35 ln 2 ≈ 3.71 W m–2, is the CO2 radiative forcing, and f = 1.5 W m–2 K–1 is the IPCC’s current best estimate of the temperature-feedback sum to equilibrium.

Where λ0,E = 0.310, equilibrium climate sensitivity is 2.2 K, down from the 3.3 K in IPCC (2007) because IPCC (2013) cut the feedback sum f from 2 to 1.5 W m–2 K–1 (though it did not reveal that climate sensitivity must then fall by a third).
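Reading Eq. (7) as ΔT = (5.35 ln 2) λ0 / (1 – λ0 f) reproduces the sensitivities quoted in this and the following paragraphs (a sketch; the form of the equation is inferred from the quoted results):

```python
from math import log

F_2X = 5.35 * log(2)   # ~3.71 W m^-2: CO2 doubling forcing, the numerator

def sensitivity(lambda_0, f):
    # lambda_0 in K W^-1 m^2, f in W m^-2 K^-1
    return F_2X * lambda_0 / (1 - lambda_0 * f)

print(round(sensitivity(0.310, 1.5), 2))   # ~2.15 K; the text quotes 2.2 K
print(round(sensitivity(0.215, 1.5), 2))   # ~1.18 K; the text quotes 1.2 K
print(round(sensitivity(0.215, 0.0), 2))   # ~0.80 K; the text quotes 0.8 K
```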

However, if Eq. (1) is applied at the surface, the value λ0,S of the Planck sensitivity parameter is 0.215 (Eq. 3), and equilibrium climate sensitivity falls to only 1.2 K.

If f is no greater than zero, as a growing body of papers finds (see e.g. Lindzen & Choi, 2009, 2011; Spencer & Braswell, 2010, 2011), climate sensitivity falls again to just 0.8 K.

If f is net-negative, sensitivity falls still further. Monckton of Brenchley (2015: click “Most Read Articles” at www.scibull.com) suggests that the thermostasis of the climate over the past 810,000 years and the incompatibility of high net-positive feedback with the Bode system-gain relation indicate a net-negative feedback sum on the interval –0.64 [–1.60, +0.32] W m–2 K–1. In that event, applying Eq. (1) at the surface gives climate sensitivity on the interval 0.7 [0.6, 0.9] K.

Two conclusions are possible. Either Eq. (1) ought not to be used at the surface, being reserved for the characteristic emission altitude, in which event the value for surface flux density FS may well be incorrect, no one has any idea what the Earth’s energy budget is, and still less is it known whether there is any surface “radiative imbalance” at all; or the flux density at the Earth’s surface is correctly determined from observed global mean surface temperature by Eq. (1), as all five sources cited above determined it, in which event climate sensitivity is harmlessly low even under the IPCC’s current assumption of strongly net-positive temperature feedbacks.

Table 1 summarizes the effect on equilibrium climate sensitivity of assuming that Eq. (1) defines the relationship between global mean surface temperature TS and mean outgoing surface radiative flux density FS.

Climate sensitivities to a CO2 doubling

Source | Altitude | λ0 | f | ΔTS,100 | ΔTS,∞
AR5 (2013) upper bound | 300 mb | 0.310 K W–1 m2 | +2.40 W m–2 K–1 | 2.3 K | 4.5 K
AR4 (2007) central estimate | 300 mb | 0.310 K W–1 m2 | +2.05 W m–2 K–1 | 1.6 K | 3.3 K
AR5 implicit central estimate | 300 mb | 0.310 K W–1 m2 | +1.50 W m–2 K–1 | 1.1 K | 2.2 K
AR5 lower bound | 300 mb | 0.310 K W–1 m2 | +0.75 W m–2 K–1 | 0.8 K | 1.5 K
M of B (2015) upper bound | 300 mb | 0.310 K W–1 m2 | +0.32 W m–2 K–1 | 0.7 K | 1.3 K
AR5 central estimate | 1013 mb | 0.215 K W–1 m2 | +1.50 W m–2 K–1 | 0.6 K | 1.2 K
M of B central estimate | 300 mb | 0.310 K W–1 m2 | –0.64 W m–2 K–1 | 0.5 K | 1.0 K
M of B upper bound | 1013 mb | 0.215 K W–1 m2 | +0.32 W m–2 K–1 | 0.5 K | 0.9 K
M of B lower bound | 300 mb | 0.310 K W–1 m2 | –1.60 W m–2 K–1 | 0.4 K | 0.8 K
M of B central estimate | 1013 mb | 0.215 K W–1 m2 | –0.64 W m–2 K–1 | 0.4 K | 0.7 K
Lindzen & Choi (2011) | 300 mb | 0.310 K W–1 m2 | –1.80 W m–2 K–1 | 0.4 K | 0.7 K
Spencer & Braswell (2011) | 300 mb | 0.310 K W–1 m2 | –1.80 W m–2 K–1 | 0.4 K | 0.7 K
M of B lower bound | 1013 mb | 0.215 K W–1 m2 | –1.60 W m–2 K–1 | 0.3 K | 0.6 K

Table 1. 100-year (ΔTS,100) and equilibrium (ΔTS,∞) climate sensitivities to a doubling of CO2 concentration, applying Eq. (1) at the characteristic-emission altitude (300 mb) and at the surface (1013 mb).
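Taking Eq. (7) in the form ΔT = (5.35 ln 2) λ0 / (1 – λ0 f), with ΔTS,100 taken as half of ΔTS,∞ as the post later states, a few Table 1 rows can be approximately reproduced (a sketch; the tabulated figures appear to be rounded generously, so exact agreement should not be expected):

```python
from math import log

F_2X = 5.35 * log(2)  # CO2 doubling forcing, W m^-2

def row(lambda_0, f):
    """Return (dT_100, dT_inf) for one table row, with dT_100 ~ dT_inf / 2."""
    dT_inf = F_2X * lambda_0 / (1 - lambda_0 * f)
    return round(dT_inf / 2, 1), round(dT_inf, 1)

print(row(0.310, 2.40))   # AR5 upper bound; table lists 2.3 K / 4.5 K
print(row(0.310, 1.50))   # AR5 implicit central; table lists 1.1 K / 2.2 K
print(row(0.215, -0.64))  # M of B surface central; table lists 0.4 K / 0.7 K
```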

It is worth noting that, even before taking any account of the “consensus’s” use of Eq. (1) to govern the relationship between TS and FS, the reduction in the feedback sum f between the IPCC’s 2007 and 2013 assessment reports mandates a corresponding reduction in its central estimate of climate sensitivity from 3.3 to 2.2 K, of which only half, or about 1 K, would be expected to occur within a century of a CO2 doubling. The remainder would make itself slowly and harmlessly manifest over the next 1000-3000 years (Solomon et al., 2009).

Given that the Great Pause has endured for 18 years 6 months, the probability that there is no global warming in the pipeline as a result of our past sins of emission is increasing (Monckton of Brenchley et al., 2013). All warming that was likely to occur from emissions to date has already made itself manifest. Therefore, perhaps we start with a clean slate. Professor Murry Salby has estimated that, after the exhaustion of all affordably recoverable fossil fuels at the end of the present century, an increase of no more than 50% on today’s CO2 concentration – from 0.4 to 0.6 mmol mol–1 – will have been achieved.

In that event, replace Table 1 with Table 2:

Climate sensitivities to a 50% CO2 concentration growth

Source | Altitude | λ0 | f | ΔTS,100 | ΔTS,∞
AR5 (2013) upper bound | 300 mb | 0.310 K W–1 m2 | +2.40 W m–2 K–1 | 1.3 K | 2.6 K
AR4 (2007) central estimate | 300 mb | 0.310 K W–1 m2 | +2.05 W m–2 K–1 | 0.9 K | 1.8 K
AR5 implicit central estimate | 300 mb | 0.310 K W–1 m2 | +1.50 W m–2 K–1 | 0.6 K | 1.3 K
AR5 lower bound | 300 mb | 0.310 K W–1 m2 | +0.75 W m–2 K–1 | 0.4 K | 0.9 K
M of B (2015) upper bound | 300 mb | 0.310 K W–1 m2 | +0.32 W m–2 K–1 | 0.4 K | 0.7 K
AR5 central estimate | 1013 mb | 0.215 K W–1 m2 | +1.50 W m–2 K–1 | 0.3 K | 0.7 K
M of B central estimate | 300 mb | 0.310 K W–1 m2 | –0.64 W m–2 K–1 | 0.3 K | 0.6 K
M of B upper bound | 1013 mb | 0.215 K W–1 m2 | +0.32 W m–2 K–1 | 0.3 K | 0.5 K
M of B lower bound | 300 mb | 0.310 K W–1 m2 | –1.60 W m–2 K–1 | 0.2 K | 0.4 K
M of B central estimate | 1013 mb | 0.215 K W–1 m2 | –0.64 W m–2 K–1 | 0.2 K | 0.4 K
Lindzen & Choi (2011) | 300 mb | 0.310 K W–1 m2 | –1.80 W m–2 K–1 | 0.2 K | 0.4 K
Spencer & Braswell (2011) | 300 mb | 0.310 K W–1 m2 | –1.80 W m–2 K–1 | 0.2 K | 0.4 K
M of B lower bound | 1013 mb | 0.215 K W–1 m2 | –1.60 W m–2 K–1 | 0.2 K | 0.3 K

Table 2. 100-year (ΔTS,100) and equilibrium (ΔTS,∞) climate sensitivities to a 50% increase in CO2 concentration, applying Eq. (1) at the characteristic-emission altitude (300 mb) and at the surface (1013 mb).

Once allowance has been made not only for the IPCC’s reduction of the feedback sum f from 2.05 to 1.5 W m–2 K–1 and the application of Eq. (1) to the relationship between TS and FS but also for the probability that f is not strongly positive, for the possibility that a 50% increase in CO2 concentration is all that can occur before fossil-fuel exhaustion, for the IPCC’s estimate that only half of equilibrium sensitivity will occur within the century after the CO2 increase, and for the fact that the CO2 increase will not be complete until the end of this century, it is difficult, and arguably impossible, to maintain that Man can cause a dangerous warming of the planet by 2100.

Indeed, even if one ignores all of the considerations in the above paragraph except the first, the IPCC’s implicit central estimate of global warming this century would amount to only 1.1 K, just within the arbitrary 2-K-since-1750 limit, and any remaining warming would come through so slowly as to be harmless. It is no longer legitimate – if ever it was – to maintain that there is any need to fear runaway warming.

Quid vobis videtur? (What is your view?)

June 27, 2015 7:12 am

I am not a scientist, but a little common sense would seem to indicate that 400 or even 540 parts per million of CO2 in the atmosphere could not have the major temperature driving force that the “science is settled” crowd would have us believe.
That is putting a lot of power in a trace gas that is even just a tiny percentage of the greenhouse gases in the atmosphere.

Hugh
June 27, 2015 9:13 am

I really do think that is a pretty weak argument.
Common sense might tell you something if you have a strong grasp of the underlying facts; if not, it is worth nothing. When you have some knowledge of this, we are not talking about ‘common’ sense any more.

June 27, 2015 11:00 am

That is an invalid response to a simple point. OK, you disagree, but you fail to provide any rationale or argument to support your disagreement.
Not one of the supposed physics processes has addressed the ‘magic’ of how one lone molecule causes 10,000 other molecules to warm by 0.5, 1.0, 2.0, 3.0, 4.0 and so on kelvin.
Instead the individual molecular interactions are ignored and buried with gross assumptions.

Bernard Lodge
June 27, 2015 2:02 pm

Hugh,
I’m with Dave on this one. Common sense questions often reveal good insights.
Here is another one …
In the IPCC energy budget for the Earth, it shows 342 W/sqm as the down-welling energy from ‘greenhouse gases’. Can you please tell me where on that chart the down-welling energy is shown from the 99.96% of the atmosphere that is not CO2?
You won’t find it because it is a fudge. All gases emit photons downwards to the earth, not just ‘greenhouse gases’. The percentage that comes from CO2 is tiny.

morLogicThanU
June 27, 2015 3:40 pm

Perhaps we should use the special man-made CO2 to insulate our houses in winter.
It seems it’s the most insulative material in the universe, trapping all that heat.
Just think of all the polar bears we can save not having to burn that evil fossil fuel…
The possibilities are endless…. you can have pockets of man-made CO2 in our jackets, earmuffs, gloves, etc

Stephen Richards
June 28, 2015 1:40 am

Don’t be bloody rude. Einstein said that physics is not common sense. However, many people will naturally apply common sense to physics, as they do in their everyday lives.

Mike McMillan
June 28, 2015 10:25 pm

morLogicThanU June 27, 2015 at 3:40 pm
Perhaps we should use the special man made c02 to insulate our houses in winter.

Not too wild an idea. We currently use argon gas in the insulating space between panes of multi-pane windows, but CO2 has a thermal conductivity ~5% lower than argon. Argon is around 23 times more abundant than CO2, however, and is the main residual gas left from the industrial production of oxygen and nitrogen, so it’s handier to use. And that leaves more CO2 in the air to help grow broccoli and keep us warm.

ICU
June 29, 2015 12:06 am

It would appear that a basic course in thermodynamics is a prerequisite.
“The emissivity of a real surface varies with the temperature of the surface as well as
the wavelength and the direction of the emitted radiation.”
http://faculty.uoh.edu.sa/n.messaoudene/Heat%20Transfer/Heat%20Trans%20Ch%2012.pdf
https://en.wikipedia.org/wiki/Emissivity
So, if emissivity is a function of T (temperature), as the textbooks suggest, then everything shown after Equation (1) is wrong.

Wagen
June 27, 2015 10:37 am

From wikipedia (atmosphere of earth):
“If the entire mass of the atmosphere had a uniform density from sea level, it would terminate abruptly at an altitude of 8.50 km (27,900 ft).”
Pre-industrial levels of CO2 at 280 ppm would equal a layer of air of 2.38 m at sea-level pressure (this ballpark figure assumes for simplicity that each molecule in the atmosphere takes up equal space: 8500*280/1000000). At present there is around 400 ppm, suggesting that mankind added around 1 m of infra-red-absorbing CO2 (at sea-level pressure). Are you sure that this could not possibly have an effect because it is a trace gas?
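Wagen's ballpark arithmetic checks out (a sketch of the stated simplification only; real CO2 is mixed through the whole column, not collected into a discrete layer):

```python
H = 8500.0  # m: depth of a uniform, sea-level-density atmosphere (from the comment)

def layer_m(ppm):
    """Equivalent thickness, in metres, of a pure layer at sea-level pressure."""
    return H * ppm / 1e6

print(round(layer_m(280), 2))                  # pre-industrial: ~2.38 m
print(round(layer_m(400) - layer_m(280), 2))   # added since: ~1.02 m
```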

scot
June 27, 2015 10:57 am

Take a 0.3mm thick sheet of aluminum foil and measure how much light passes through it. Now add another 0.3mm thick sheet and measure again.
Are you sure doubling the total thickness couldn’t possibly have an effect?

Wagen
June 27, 2015 11:05 am

Scot,
No, I am not sure about that 🙂
However, I am really not sure what point you are trying to make (really!).

June 27, 2015 1:41 pm

linear thought in a logarithmic world will get you lost every time.

Wagen
June 27, 2015 1:59 pm

Joel,
Are you saying that Scot is pointing out the logarithmic relation between greenhouse gas concentrations and temperature? I know that. I still don’t know why it needs pointing out 😉

gbaikie
June 27, 2015 5:27 pm

–At present there is around 400 ppm suggesting that that mankind added around 1 m of infra-red absorbing CO2 (at sea level pressure). Are you sure that this could not possibly have an effect because it is a trace gas?–
Suppose one had flat piece of land which 1 km square. And one measured the air temperature in the middle of this field in white box 5 feet above the ground and it had average temperature of 12 C
And then put a glass box 1 square km by 5 meter high. And it it elevated above the ground by 5 meters. So top of box is 10 meters above the ground.
Then you filled the box with air will enough pressure so that the air does become a lower pressure then outside air when air is the coldest. So it’s sealed and designed to withstand any higher pressure were air to become warmer.
Then measure the average temperature for another year with white box in the middle. And it should be warmer temperature than compared to not having
the big glass box. Then you replace the air with pure CO2 and so thereby added 5 meters of CO2.
And question would be how warmer would the average temperature as measured in white box be?
And does increase the highest daytime temperature by how much. And/or does it increase the night time temperatures by the most?

Wagen
June 28, 2015 4:40 pm

Gbaikie,
Can you please re-write? As an example:
“Then you filled the box with air will enough pressure so that the air does become a lower pressure then outside air when air is the coldest. ”
I can try to interpret what you mean, but it will only lead to misunderstandings.

gbaikie
June 28, 2015 6:26 pm

Gbaikie,
Can you please re-write? As an example:
“Then you filled the box with air will enough pressure so that the air does become a lower pressure then outside air when air is the coldest. ”
I can try to interpret what you mean, but it will only lead to misunderstandings.
Have it slightly over pressurized- say 1/2 psi.

Ian Macdonald
June 27, 2015 10:44 am

Whether CO2 is a trace gas is actually irrelevant; the key issue is whether an absorption-band photon leaving the surface would make it through to space. Surprisingly, in spite of the low concentration, the answer is that such a photon will encounter a CO2 molecule every few tens of metres; we call this distance the mean free path. Thus, even at 400 ppm, an outbound photon will encounter many CO2 molecules on its route upwards. This should explain why increasing the concentration beyond a certain level doesn’t have all that much of an effect on the odds of a photon escaping, ending up back at the surface, or being converted into bulk atmospheric heat.
Here is the mathematics behind it: http://www.biocab.org/Mean_Free_Path_Length_Photons.html
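The mean-free-path idea can be sketched as path = 1/(nσ) (an order-of-magnitude illustration only; the cross-section below is an assumed value chosen to land in the regime the comment describes, not taken from the linked page, and real CO2 cross-sections vary by orders of magnitude across the 15-micron band):

```python
# path = 1 / (n * sigma). SIGMA_ABS is an ASSUMED illustrative cross-section.
N_AIR = 2.55e25       # molecules m^-3 of air near sea level (~standard conditions)
PPM_CO2 = 400e-6      # mixing ratio from the comment
SIGMA_ABS = 4e-24     # m^2, assumed for illustration only

n_co2 = N_AIR * PPM_CO2
mean_free_path = 1.0 / (n_co2 * SIGMA_ABS)   # metres
print(round(mean_free_path, 1))              # lands in the "tens of metres" regime
```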

Jai Mitchell
Reply to  Ian Macdonald
June 27, 2015 11:10 am

In my view, the minutiae of arguments about atmospheric warming rates and responses within the frame of fluctuations of natural variability on decadal time scales are nonsense. The absolutely only thing that matters is that the earth is warming with MASSIVE amounts of excess heat. This is happening when the sun is at its longest and deepest minimum in over 100 years, and with nearly 40% of the total warming effects of CO2 being shielded by aerosols (dust and pollution) in the atmosphere.
please observe the following graphic:
The amount of heat represented in this graph is about 93% of the total global heat accumulation that has occurred over the last 10 years (I am only considering the very accurate ARGO buoy analysis annotated in red). This warming represents so much heat energy that, if it were to be put into the earth’s atmosphere, the atmosphere would warm by nearly 17C (and we would all be dead).
The reason that this matters is that the only way that the earth reaches equilibrium is through blackbody radiation to space. This heat energy represents a continual investment of energy into the earth that will NEVER GO AWAY, until the earth’s surface warms enough to equalize with the incoming energy.
So my question to you all is:
With this definitive evidence of heat accumulation in the earth, why do you still believe that humans are not the primary contributor to this effect?
and
If you believe that humans are the primary contributor to this effect, why would you want to do anything but maximize the implementation of alternative energy and distributed generation, using clean fuel sources that give people the independence and freedom to charge their own cars from the solar panels on their own roofs and never send a single more dime to Saudi Arabia?
???

Bernard Lodge
Reply to  Ian Macdonald
June 27, 2015 1:30 pm

Ian,
What about the molecules of N2 and O2 or other atmospheric gases? They also absorb and emit photons – how do you isolate the CO2 effect? The outbound photon is much more likely to hit an N2 or O2 molecule, which will block its path to space.
The chances are 99.96% that the photon will hit an air molecule other than CO2. This other molecule will either emit a new photon or simply vibrate a bit more.
The link you provided actually gives the answer in its first conclusion:
“The results obtained by experimentation coincide with the results obtained by applying astrophysics formulas. Therefore, both methodologies are reliable to calculate the total emissivity/absorptivity of **any** gas of any planetary atmosphere.”
Any gas acts this way – not just CO2. That is why CO2 only being a trace gas is relevant.

Wagen
Reply to  Ian Macdonald
June 27, 2015 2:20 pm

Bernard,
https://en.m.wikipedia.org/wiki/Greenhouse_gas
See section under non greenhouse gases.

Bernard Lodge
Reply to  Ian Macdonald
June 27, 2015 2:38 pm

Wagen,
Thanks for that link. I have read it before and its definition of radiative gases is helpful. My first point is that all matter radiates photons – provided it is above absolute zero. This includes N2 and O2. Larger gas molecules tend to be more radiative than smaller molecules as your link describes – but they all do radiate. My second point is that the chances of an emitted photon hitting a non CO2 molecule are 99.96% which means that most of what happens next is driven by the properties of those molecules, not CO2. Hence CO2 only being a trace gas is relevant.

richard verney
Reply to  Ian Macdonald
June 27, 2015 3:15 pm

And given that these photons are travelling at 299792.458 km/sec, what does it matter if they interact with some CO2 molecules on their way out to space?
So the photon takes a zig-zag course on its way through the 50 km of atmosphere whilst travelling at 299792.458 km/sec; it merely delays the photon by seconds at most (possibly only fractions of a second).
Given that on average there are 12 hours of darkness, this gives plenty of time for photons to escape to space before the next surge of incoming photons from the sun is received and the entire process repeats itself, such that there is no build-up in temperatures. All the energy received during the day has plenty of opportunity to find its way out to space during the nighttime hours.

Wagen
Reply to  Ian Macdonald
June 27, 2015 3:28 pm

Sorry, wiki again (on N2):
“Molecular nitrogen is largely transparent to infrared and visible radiation because it is a homonuclear molecule and, thus, has no dipole moment to couple to electromagnetic radiation at these wavelengths. Significant absorption occurs at extreme ultraviolet wavelengths, beginning around 100 nanometers.”
If your point is that all molecules in the atmosphere absorb and re-radiate, then you have difficulties explaining the direct effects of sunshine. I may misunderstand your point, though.

Wagen
Reply to  Ian Macdonald
June 27, 2015 3:34 pm

Richard,
CO2 molecules hit by infra-red may radiate back (and this could be back to the ground) or they may give the energy of their excited state to another molecule they are colliding with.

Bernard Lodge
Reply to  Ian Macdonald
June 27, 2015 9:56 pm

Wagen,
Do you agree that all matter with a temperature greater than absolute zero emits thermal radiation? I had assumed that was a generally accepted principle but please correct me if I am wrong as what I say next comes from this.
If you do agree then you will agree that N2 and O2 will therefore emit radiation according to their temperature. Approximately half of that will be back down towards the Earth’s surface.
Question is – where did they get their thermal energy from? You say that N2 and O2 are largely transparent to visible and IR radiation. Logically, that means they must get their energy from other frequencies? Or maybe they simply get it from convection and conduction in the atmosphere?
Whatever the source of their energy, N2 and O2 are emitting thermal radiation to the ground just like CO2. The wavelengths may vary but even if they are not in the LWIR range, they will still be absorbed by the earth and re emitted at longer wavelengths just like when sunlight or UV hits the surface of the earth. Bottom line is that CO2 and the other ‘greenhouse gasses’ are not the only sources of down-welling radiation.

Frank
Reply to  Ian Macdonald
June 28, 2015 12:40 am

Jai: When you look at that “very accurate” ARGO data, please note that the heat content of the top 100 m has fallen (through 2013 at least) and that of the next 200 m has remained almost unchanged. Most of the heat has accumulated from 700-2000 m. That heat accumulation is large, but so is the mass warming, so the temperature change over a decade is ridiculously small – averaging about 0.01 degK. IMO, the ARGO data is interesting, but doesn’t agree with what was logically expected and climate models projected. Perhaps we will know more in a decade or two, but I see little reason to believe such tiny changes in temperature constitute conclusive evidence of global warming, much less the amount of warming expected from anthropogenic CO2.

Reply to  Ian Macdonald
June 28, 2015 1:15 am

Ian Macdonald, may I suggest that there is more to it than the CO2 absorption band for photons leaving the Earth’s surface. What is missing from the IPCC is any mention of the absorption bands across the entire spectrum, in particular, for the incoming Sunshine.
The absorption spectrum for CO2 has its primary maximum at about 4.3 microns, within the spectral range of our incoming Sunshine. The secondary maximum is at about 15 microns, within the spectral range of the average Earth’s surface emission. All of the other spectral lines are at shorter wavelengths, that is, not within the range of the Earth’s emissions.
The primary peak for CO2 absorbs about three times the energy from the incoming Sunshine as does the secondary peak in absorbing energy out-going from the Earth’s surface. Hence any increase in CO2 atmospheric concentration will cause cooling of the Earth, as less energy reaches its surface in the first place, that is, before any surface emission that might be “trapped” by the atmosphere.
As the Earth’s temperature has been stable throughout this century, with neither warming nor cooling, it is apparent that the four molecules of CO2 to the 10,000 molecules of other atmospheric gases are insufficient to cause any measurable effect on the Earth’s temperature, regardless of what path the photons may take. This is especially important for the back-radiation of the incoming Sun’s energy, as those photons escape directly into space, never to return.

Frank
Reply to  Ian Macdonald
June 28, 2015 1:48 am

Bernard, Wagen and others: Your comments reflect some significant misunderstanding about the behavior of GHGs and radiation.
1). The troposphere and most of the stratosphere are in local thermodynamic equilibrium. This means that an excited vibrational state of CO2 is relaxed by collisions much faster than the excited state can emit a photon. Therefore essentially all thermal infrared photons arise from GHG molecules that were excited by collisions, not by absorption of a photon. Therefore emission of thermal infrared photons by GHGs in the atmosphere depends only on the local temperature – local thermodynamic equilibrium. Re-emission of absorbed photons is negligible. Essentially all absorbed photons become kinetic energy (heat) when they are absorbed. “Thermalized” is the name for this process.
2). Essentially all the radiative cooling of the atmosphere is done by CO2 and water vapor. “All matter above absolute zero may radiate” at some low rate, but the power radiated by N2 and O2 in our atmosphere is negligible compared with GHGs.
3). The atmosphere is warmed by absorption of thermal infrared by GHGs and cooled by emission of thermal infrared by GHGs. These energy gains or losses are transmitted to and from N2 and O2 by collisions. In general, more energy is lost than gained by radiation in the troposphere and the difference is supplied by latent heat from the condensation of water vapor into liquid water and ice (clouds). About 1/3 of sunlight that isn’t reflected back into space is absorbed by the atmosphere before it reaches the ground. Convection of latent heat stops at the tropopause, so all heat transfer above occurs by radiation. Since DLR and OLR are similar near the surface, convection is more important than net radiation in removing heat from the surface.
4). The strongest absorption lines for CO2 are saturated by all of the CO2 in the atmosphere, but weaker lines are not. And all lines have width, so that, even if the centre is saturated, the sides may not be. The 3.7 W/m2 forcing for 2XCO2 is a small change in the 100+ W/m2 contribution of CO2 to OLR and comes mostly from the weaker lines and the sides of stronger lines. So saturation is a real phenomenon, but increasing CO2 still decreases OLR.
Hope this helps.

gbaikie
Reply to  Ian Macdonald
June 28, 2015 3:27 am

— Bernard Lodge
June 27, 2015 at 9:56 pm
Wagen,
Do you agree that all matter with a temperature greater than absolute zero emits thermal radiation? I had assumed that was a generally accepted principle but please correct me if I am wrong as what I say next comes from this.
Well, if matter emits thermal radiation, one can assume such emission would involve energy leaving this matter.
Matter does not inherently have an infinite amount of energy to emit, so one can say it has some finite amount of energy it could emit, and this finite energy is what is meant by being at a temperature greater than zero.
Gas temperature is kinetic energy. A gas molecule can stop moving [and *could* do this hundreds of times within one second] and then regain its velocity via collision with other gas molecules. And were all the gas molecules in an area to stop moving, that gas would be at absolute zero temperature.
So a gas molecule is like a fast-moving bullet; the difference is that a bullet can also be warm and radiating while racing towards something before it hits anything, whereas a molecule does not likewise radiate between hitting things [another molecule or, say, a photon].
All gas can interact with other matter, and it can gain energy by such interaction, which can then cause the gas molecule to emit a photon. But the molecule itself does not have some finite amount of energy which it emits.
Put another way, the temperature of a gas is the average velocity of millions or billions of gas molecules.
And a gas molecule absorbs and emits electromagnetic energy by having its electrons go to a higher energy level and then return to a more stable energy level.
Since matter is comprised of protons and electrons, and all electrons can go to a higher energy level, all matter can absorb and emit thermal radiation.
Now, according to the ideal gas law, ideal gases don’t lose their kinetic energy via collision with other ideal gases. If they radiated energy via collision, they would lose kinetic energy [for that is all the energy they have to lose].
So CO2, N2 and O2 are considered ideal gases. H2O is not an ideal gas, because at Earth temperatures and pressures it can condense [form into water]. So an H2O molecule colliding with other H2O molecules can lose energy in the collision [they stop being a gas and become liquid; of course the liquid gains the kinetic energy, but the gas loses a member. But also briefly sticking or clumping together adds heat which can be radiated].
So what distinguishes CO2 and the other greenhouse gases is that they are much more easily [than non-greenhouse gases] able to have their electrons energized by infrared light, gaining electromagnetic energy and then emitting the same electromagnetic energy, in a random direction [all matter emits in random directions].
So the greenhouse gases aren’t transparent to some wavelengths of IR, whereas non-greenhouse gases are mostly transparent to IR [but, being transparent, they can reflect or scatter some of this light, though they don’t absorb and re-radiate very much of it].

Hugh Eaven
Reply to  Ian Macdonald
June 28, 2015 6:26 am

Jai Mitchell:
“…warming with MASSIVE amounts of excess heat. This is happening when the sun is at it’s longest and deepest minimum in over 100 years,”
It sounds like you’re saying the sun has been in some kind of minimum during the whole 20th century? It seems that if anything can be said about a maximum or minimum, some kind of minimum started in the 21st century, mostly during the so-called “hiatus”. However, climatic responses would normally be deduced on multi-decadal timescales, often 30-year periods, with significant trends around half of that, around 15 years. That’s why the 18+ year hiatus in isolation is significant, yet at the same time not yet definitive in terms of climate science.
” and with nearly 40% of the total warming effects of CO2 being shielded by aerosols (dust and pollution) in the atmosphere.”
It sounds like you’re saying adding dust and pollution would be a good protection from global warming? If we’re already blocking 40% of the looming disaster now, perhaps doubling the aerosols might be a solution? :-p

Reply to  Ian Macdonald
June 28, 2015 1:48 pm

I’m a little confused as to how NOAA got 1955-2015 data for 0-2000m heat content, when their own data page only shows data for 0-700m for that timescale. For 0-2000m, the data only goes back to 2005.

papiertigre
Reply to  Ian Macdonald
June 28, 2015 3:54 pm

CO2 by its nature accelerates the evaporative heat transport. So the individual photon’s incidental contact with a CO2 molecule gives the photon a boost on its way out of the system.
Same reason why the evaporation of liquid CO2 produces -109.3 degree dry ice. It sucks all of the heat out of the local area and keeps on running. http://www.thegreenhead.com/2007/10/portable-dry-ice-maker.php

papiertigre
Reply to  Ian Macdonald
June 28, 2015 4:03 pm

Jai Mitchell is putting up Mr. Karl’s delusional mish-mash of boat-intake corrections applied to sea temperature buoys.
That’s a party foul. It’s on a level with barfing in the punch.

Wagen
Reply to  Ian Macdonald
June 28, 2015 5:03 pm

Bernard,
“Do you agree that all matter with a temperature greater than absolute zero emits thermal radiation? I had assumed that was a generally accepted principle but please correct me if I am wrong as what I say next comes from this.”
No. Some molecules only move faster, i.e. the atmosphere warms (which may lead other molecules to radiate more, though).

Wagen
Reply to  Ian Macdonald
June 28, 2015 5:40 pm

Hugh.
“It sounds like you’re saying adding dust and pollution would be a good protection from global warming?”
Yes it would (lowering of ocean pH ignored here). You would have to live with the smog though.

Bernard Lodge
Reply to  Ian Macdonald
June 28, 2015 9:08 pm

Frank,
You say …. ‘the power radiated by N2 and O2 in our atmosphere is negligible compared with GHGs’.
Do you have a source for that?
‘Negligible’ is an opinion – can you quantify it?

Rob
Reply to  Ian Macdonald
June 29, 2015 11:54 am

Jai Mitchell
My question to you is: why do you ignore the geologic and glacial history showing that natural processes dominate the climate system? We’ve seen large fluctuations in climate and temperatures in the past, with no CO2 “cause” for the fluctuation. From Rasmussen et al. (2014): “About 25 abrupt transitions from stadial to interstadial conditions took place during the Last Glacial period and these vary in amplitude from 5 °C to 16 °C, each completed within a few decades”; and no, the world did not “die”. Please leave your bubble and join the real world. http://www.sciencedirect.com/science/article/pii/S0277379114003485

Frank
Reply to  Ian Macdonald
June 30, 2015 12:47 am

Bernard Lodge wrote: “You say …. ‘the power radiated by N2 and O2 in our atmosphere is negligible compared with GHGs’. Do you have a source for that? ‘Negligible’ is an opinion – can you quantify it?”
Rats, you noticed the modestly vague words I used when I was sure of these facts but didn’t have proof at my fingertips. Time to do better.
You can go to the online MODTRAN calculator at the link below and remove carbon dioxide, water vapor and methane from the spectrum of outgoing radiation by putting zeros in the top five boxes. The software calculates the spectrum of OLR at the top of the atmosphere (70 km). The spectrum of the radiation reaching space then looks like the blackbody spectrum emitted by the surface of the earth at a single temperature. At no wavelength is there any evidence that the N2 and O2 in the atmosphere have absorbed and emitted any OLR. When the traditional GHGs absorb and emit, less radiation is emitted at their characteristic wavelengths. For example, with the default setting of 400 ppm of CO2, the intensity emitted around 15 um is appropriate for a temperature of 220 K, because all of the 15 um radiation emitted by the earth’s surface has been absorbed. The average photon that escapes to space at this wavelength is emitted by CO2 molecules near the tropopause, where the temperature is 220 K (about 13 km). 15 um photons emitted upward from lower in the atmosphere have little chance of escaping to space. If you play with the model, perhaps you can convince yourself that the traditional GHGs have a much bigger impact on OLR, and that N2 and O2 are therefore negligible in comparison.
http://climatemodels.uchicago.edu/modtran/modtran.html
You can find many similar figures with the absorption spectra of GHGs on the web (search “infrared spectrum gases”), such as the one below. Such figures never include N2, because “everyone” knows it is “negligible”. They show O2 plus O3, and the strongest bands in this spectrum are due to O3. (O2 does absorb UV.)
http://www.meteor.iastate.edu/gccourse/forcing/spectrum.html
At http://www.spectracalc.com, you can plot the spectra of N2 and CO2. The strongest lines for CO2 are 10 orders of magnitude stronger than for N2. For O2 and CO2, the difference is about 6 orders of magnitude. I don’t have much experience with this website.
Hope this helps.
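[Frank’s point about band emission can be illustrated with the Planck function: at 15 um, a 288 K surface emits roughly three times what a 220 K layer near the tropopause does, which is why absorption and re-emission high up depresses the OLR in that band. A minimal Python sketch using standard CODATA constants; this is an illustrative calculation, not output from the MODTRAN tool itself:]

```python
import math

# Standard physical constants (SI)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    x = H * C / (wavelength_m * KB * temp_k)
    return a / math.expm1(x)

lam = 15e-6                        # 15 um, centre of the CO2 bending-mode band
b_surface = planck(lam, 288.0)     # emission from a 288 K surface
b_tropopause = planck(lam, 220.0)  # emission from a 220 K layer near the tropopause

print(b_surface / b_tropopause)    # roughly 2.9
```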

Reply to  Ian Macdonald
July 2, 2015 7:59 am

A different view, putting you in the position of a photon trying to make it from the surface to space past those ‘few’ CO2 molecules is pursued at http://moregrumbinescience.blogspot.com/2009/11/how-co2-matters.html
Similar answer ultimately, but perhaps easier to visualize.

Reply to  Ian Macdonald
July 2, 2015 12:15 pm

Bernard,
There’s an easier way to think of this. Sure, the N2/O2 is matter with the potential to emit EM energy, but it has an emissivity near zero and near-unit transmittance. Would you assert that if the Earth’s atmosphere contained only its present amounts of N2 and O2 and nothing else, significant emissions from the O2/N2 would be seen from space, and that the average temperature and sensitivity would be any different than the case for no atmosphere at all? The only photons we can see escaping the planet either originate directly from the surface or clouds and pass through the N2/O2 and GHGs without being absorbed, or are emissions by GHG molecules. If you examine the spectral properties of the planet’s emissions, those in the GHG absorption bands are about 3 dB lower than they would be (about 1/2) without GHG absorption. While the N2/O2 do have emission/absorption bands, they are not in the visible or relevant LWIR spectrum.
George

June 27, 2015 6:32 pm

daveandrews723 June 27, 2015 at 7:12 am
Not only are you not a scientist, but you aren’t a philosopher or logician either. What you have written falls into that famous “not even wrong” category.
Argument from personal incredulity is no argument at all. It irks me that Mr Watts permits such dreck to be published here, as it makes us climate realists look stupid.
https://yourlogicalfallacyis.com/personal-incredulity

June 28, 2015 9:00 am

Of course, the other fudge is showing DWIR from “greenhouse gases”, and then pretending that is coming from CO2, when H2O is doing almost all of it.

AndyG55
Reply to  Ron Clutz
June 28, 2015 4:16 pm

CO2 does not radiate below 11km.

Climate Pete
June 29, 2015 12:01 pm

Unexpected effect of CO2
The weight of air pressure is equivalent to the pressure of 10m of water. On a simple pro-rata calculation, 280 parts per million of this is equivalent to a 3mm-thick layer of solid CO2.
The radiation emitted to space, treating the earth as a black body, would imply a black body temperature of 255 K, which is -18 degrees C; yet the surface temperature is effectively 288 K or +15 degrees C, giving a 33 degree C (or K) difference between the cases with and without an atmosphere.
The standard texts say that water vapour is responsible for most of this greenhouse-gas-driven rise, and that CO2 is responsible for only around 23% of it. So you can work out that the pre-industrial level of 280 ppm CO2 is responsible for an 8 degree C higher surface temperature than would be the case without it. Not bad for a 3mm-thick solid layer, and contrary to what your common sense is telling you.
Common sense might then tell you that doubling the thickness from 3mm to 6mm (concentration rising from 280 ppm to 560 ppm) would double the temperature rise. But again common sense would be wrong, because the rise is somewhat less than that. All the discussion centres on how much more water vapour would be in the atmosphere if CO2 caused an initial temperature rise, and what the additional rise from the extra water vapour greenhouse effect would be; and there is a lot more water vapour in the atmosphere than CO2.
So common sense is really not a good guide when a calculation based on known physics can get you the right answer.
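[The 255 K figure quoted above is straightforward to reproduce from the Stefan-Boltzmann law. A minimal Python sketch, assuming a total solar irradiance of 1362 W m^-2 (the value used elsewhere in this thread) and an albedo of 0.3:]

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1362.0       # total solar irradiance, W m^-2 (assumed value)
ALBEDO = 0.3      # planetary albedo

absorbed = S0 * (1 - ALBEDO) / 4      # averaged over the sphere, ~238 W m^-2
t_eff = (absorbed / SIGMA) ** 0.25    # effective blackbody temperature

print(round(t_eff))   # ~255 K, i.e. about -18 C
print(288 - t_eff)    # ~33 K "greenhouse" difference from the 288 K surface
```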

Reply to  Climate Pete
June 29, 2015 7:37 pm

Climate Pete
You and the general population have been grossly misled. The fact is that the Greenhouse Effect of 33 degrees Celsius is completely fictitious.
A primary claim for the cause of global warming has always been that the Earth suffers a “greenhouse effect”, being 33 degrees Celsius warmer than it would be if there were no greenhouse gases in the atmosphere. This is calculated from the difference between the estimated global surface temperature of 15 deg.C and the estimated temperature of the Earth without greenhouse gases. How many of the public are aware of the method used to calculate the latter?
The calculated temperature is derived from the surface temperature of the Sun, the radius of the Sun, the distance of the Earth’s orbit from the centre of the Sun and the radius of the Earth using the Stefan Boltzmann equation with an albedo of 0.3 to give an effective blackbody temperature of -18 deg.C.
Behind this seemingly innocuous calculation are hidden, rarely stated assumptions. For example, the Stefan-Boltzmann equation applies to radiation from a cavity in thermal equilibrium emitting directly into a vacuum. That means that the model Earth is a perfectly smooth sphere with a uniform surface: no oceans, forests, deserts, mountains or ice sheets and, amazingly, no atmosphere, with the same surface temperature everywhere, that is, no day or night. Furthermore, the model assumes that the Sun’s insolation is spread uniformly across the whole Earth surface, so no tropical equatorial zone and no ice-covered poles: just a uniform solid surface with a uniform albedo of 0.3 everywhere, receiving a uniform insolation of one quarter of the Sun’s known insolation everywhere across the spherical surface of the Earth. Does anyone, other than the IPCC and its followers, think that this is a reasonable model of the Earth from which to estimate the “greenhouse effect”, that is, the temperature difference between an Earth with and without “greenhouse gases” in the atmosphere, when the model does not even include an atmosphere? I certainly do not.

george e. smith
July 1, 2015 4:31 am

Well Dave, you seem to have diverted the whole thread away from MofB’s question regarding the appropriateness of the 390 Wm^-2 surface total radiant emittance value.
Where I think Christopher might have had some gear slippage is that this question seems to be based on an expectation that TSI minus albedo attenuation losses should match the simple BB radiation emittance of the surface, using of course the 1/4 sphere/circle surface-area factor; a factor which personally I find totally bogus.
342 Wm^-2 over the entire earth surface, less the 30% or so albedo losses, can’t possibly raise the surface Temperature to 288 K, corresponding to the 390 value.
So, your Lordship, have you considered the additional energy, over and above the Planckian radiative assumption, that is conveyed from the surface to the atmosphere and subsequently lost to space?
You have direct thermal conduction of HEAT energy (noun) from the total surface to the atmosphere, followed of course by convective transfer of this energy (as heat) to the upper atmosphere, where at some point it must be converted to some spectrum of LWIR radiation (also BB-like) for transfer out to space (at least half of it).
Then of course, for the oceanic parts of the earth condensed surface, you will have a considerable latent heat of evaporation that gets added to the atmospheric “heat energy”, and also convected to higher altitudes, where it will get extracted in the condensation and possibly freezing phase changes.
Remember that this “heat energy” must get extracted by transfer to other colder atmospheric gases, or of course by THERMAL (BB like) emission from the H2O molecules, so when the water droplets or ice crystals fall to earth as rain or snow or other precipitation, that latent heat does not return to the surface with the water, since the water had to lose it already before the phase change can occur.
So the total energy in the upper atmosphere that must eventually be lost as thermal radiation to space, is somewhat greater than just the original surface total radiant emittance number.
And frankly, I’m not convinced that we have good data on just what all those processes contribute to the mix.
I also do not like the feedback model that has increased CO2 simply creating increased downward GHG LWIR radiation back to the surface.
In my view, the controlling feedback connection, is the direct Surface Temperature change resulting in an evaporation change of 7% per deg. C, as found by Wentz et al (SCIENCE for July 13 2007, “How much more rain will global warming bring ?” ) or words close to that.
That atmospheric H2O change implies somewhere, a commensurate change in cloud cover that directly attenuates the incoming solar spectrum radiation that is able to reach the surface.
And that effect is a huge and negative feedback factor, as it directly affects the amount of atmospheric (CAVU) attenuation that reduces the exo TSI from 1362 Wm^-2 down to a surface value closer to 1,000 Wm^-2.
The outgoing CO2 LWIR capture is of course real. But I also believe it is quite irrelevant to the outcome, as the water feedback to the solar input, is where the stabilizing control is.
And Dave you say that you are NOT a scientist.
Evidently not a solid state Physicist either.
Your computer, or perhaps some finger toy, or whatever you logged in here on, contains silicon chips made of silicon atoms in a diamond lattice at a density of about 5.6E+22 Si atoms per cc.
The surface layers in the CMOS structures that make the thing work contain tiny amounts of dopant atoms at densities of maybe 10^16 to 10^18 impurity atoms per cc, and without that impurity content of one part in five million to one in 50,000, which is way less than the atmospheric CO2 abundance, you simply couldn’t be here conversing with us all.
So don’t hang your hat on the “too little to do anything” mantra.
Denying the CO2 or other GHG capture process, is not a good hill to choose to die on.
Just my opinion of course.

george e. smith
Reply to  george e. smith
July 1, 2015 4:35 am

I forgot to add that the sun beating down on the surface (below it) at about 1,000 W m^-2, rather than the 240 W m^-2 that Trenberth asserts, will easily raise the surface temperature during the day to the temperatures we all experience every day.

Otter (ClimateOtter on Twitter)
June 27, 2015 7:26 am

I’m lost on one point- *when* did they decide it was 2C since 1750? I never heard that until the last year or so. And I thought I understood that the Avg Temp had already risen a fair chunk of that 2C between 1750 and 1850…?

Sturgis Hooper
June 27, 2015 9:53 am

IMO it’s because the Industrial Age began shortly after 1750, not 1850, and CO2 levels were similar in both centuries.
Much of the warming since the depths of the Little Ice Age during the Maunder Minimum occurred in the early 18th century. After 1750 there was both cooling during the Dalton Minimum and warming after it, until the Modern Warming Period began in the late 19th century.

Reply to  Sturgis Hooper
June 28, 2015 6:05 am

I believe an economist came up with the idea a few decades ago. Sorry, I can’t remember the name now. And… before you ask… yes, I mean an economist, and, yes, I mean a few decades ago.

cnxtim
June 27, 2015 7:27 am

Well done, CM of B. I guess Trenberth and his cohorts didn’t expect ever to be questioned.
I mean, such a pretty set of illustrations!
But interestingly, the little ‘o’ knows 97% of “real scientists” are “in agreement”?
This reminds me of Barry Humphries’ Dame Edna stage show, when singling out any foreigners in the audience:
“They don’t understand a word I’m saying, but they love colour and movement”.

Matt
June 27, 2015 7:33 am

Should it be “I ask only because I want to know” and not “I only ask because ————”?
I ask only because I want to know.

rogerknights
June 27, 2015 11:44 pm

Fowler, in Modern English Usage, argued strongly that the more logical placement of “only” should not be made a guideline for copy editors.

Monckton of Brenchley
June 28, 2015 8:27 am

“I ask only because I want to know” would not scan, which was why Housman wrote “I only ask because I want to know”.

Robert of Texas
June 27, 2015 7:34 am

I am chuckling to myself after reading this... So in summary, if we assume that the “Scientific Consensus” actually had a valid point in understanding the “heat budget”, M of B can demonstrate, using the same process, that it leads to a tiny amount of warming... Precious. LOL

Monckton of Brenchley
Reply to  Robert of Texas
June 28, 2015 8:28 am

It takes a Texan to get straight to the main point. Well done, Robert!

June 27, 2015 7:45 am

CO2 has been found guilty
When no crime’s been reported,
But the alarmists’ agenda
Demands facts are distorted!

Gerald Machnee
June 27, 2015 9:07 am

So is that what the judge did in the Netherlands? Guilty without evidence?

Reply to  Gerald Machnee
June 27, 2015 9:11 am

Judges aren’t scientists
And neither am I,
But we’re being led down a path
And you need to ask why?

Bill Illis
June 27, 2015 7:56 am

Given that surface temperatures are changing at different rates (higher rates that is) than the troposphere temperature trends, we should, indeed, move the sensitivity estimates to be based on the surface only rather than on the troposphere like it is in climate theory.
And, the change in temperature (K) per unit of additional forcing (1.0 W/m2) is, indeed, smaller at the surface (0.184 K/W/m2) than at the tropopause (0.265 K/W/m2).
And, the surface needs to increase its forcing (including the feedbacks) by +16.5 W/m2 in order to increase its temperature by 3.0K while the tropopause only needs +11.5 W/m2.
In the theory, the tropopause goes up in temperature by 3.0K per doubling because there is +4.2 W/m2 of direct GHG forcing per doubling (including other GHGs besides CO2) and the water vapor and cloud and albedo and lapse rate feedbacks add another 7.3 W/m2 of forcing.
And the lapse rate feedback is supposed to be negative. The lapse rate of 6.5 K/km is supposed to decrease in climate theory (the troposphere hotspot), while it is clearly increasing in the current environment, given that the troposphere is warming much less than the surface.
Climate science uses a bunch of shortcuts in order to avoid doing all this simple math properly. Lately, they have been avoiding it because the math does not work to get to 3.0C per doubling.
If we move the calculations to the surface and include the proper feedbacks for water vapor and clouds that are actually showing up, we get only 1.1C of warming per doubling and a total forcing change from the current 390 W/m2 at the surface to 395.9 W/m2.
And that only occurs because we are also ignoring the Planck feedback whereby outgoing longwave radiation should increase as the surface temperature rises. You never ever hear anyone in the climate science community talking about this.
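[Bill Illis’s two sensitivity figures follow directly from the Planck response dT/dF = T/(4F) derived in the head post. A short Python check; the 255 K / 240 W m^-2 tropopause values are my assumption (the comment states only the result), chosen because they reproduce his 0.265 figure:]

```python
def sensitivity(t_k, f_wm2):
    """Planck response dT/dF = T / (4F) from the Stefan-Boltzmann law."""
    return t_k / (4.0 * f_wm2)

surface = sensitivity(288.0, 390.0)      # surface: ~0.185 K per W m^-2
tropopause = sensitivity(255.0, 240.0)   # tropopause (assumed T and F): ~0.266 K per W m^-2

print(surface, tropopause)
```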

David A
Reply to  Bill Illis
June 28, 2015 5:09 am

How much energy does it take to accelerate the hydrological cycle, and to grow 20 percent more bio-life.

David A
Reply to  David A
June 28, 2015 5:10 am

I forgot the question mark, but I only ask because I want to know.

chris riley
Reply to  David A
June 28, 2015 5:28 pm

A minimum of some.

Anne Ominous
Reply to  Bill Illis
June 28, 2015 4:15 pm

Bill Illis:
The answer is that it doesn’t matter. Whatever the source of warming — assuming that there is indeed some such source of warming — the hotspot is still required in theory.
The reason is that hot gases become less dense and rise, to the point at which they shed their heat via radiation to space. So no matter what was causing it to get hot, there would still be a hotspot. Different gases might have to rise to somewhat different altitudes before they shed that heat, but remember too that atmospheric gases tend to be rather well-mixed.
So the absence of a hotspot is not just evidence against CO2-based warming, it is evidence against significant warming, period.

Matt Schilling
Reply to  Anne Ominous
June 29, 2015 8:47 am

+1

June 27, 2015 8:06 am

Hi Christopher,
Subject to certain limitations that don’t apply here, the inverse of a derivative equals the derivative of the inverse. So your starting assumption is correct. We can double check by taking the derivative of the inverse explicitly:
F = kT^4, therefore
T = (F/k)^0.25
= (1/k)^0.25 * F^0.25
So:
dT/dF = 0.25 * (1/k)^0.25 * F^(-0.75)
= 0.25 * (1/k)^0.25 * F^(0.25) * F^(-1)
= 0.25 * T * F^(-1)
= T/(4F)
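[The algebra above can also be cross-checked numerically: invert F = sigma*T^4 for T and compare a finite-difference slope against T/(4F). A minimal sketch, using the 390 W m^-2 surface flux from the head post:]

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4 (emissivity taken as 1)

def t_of_f(f):
    """Invert F = sigma * T^4 for temperature."""
    return (f / SIGMA) ** 0.25

F = 390.0
T = t_of_f(F)                                         # ~288 K
h = 1e-4
numeric = (t_of_f(F + h) - t_of_f(F - h)) / (2 * h)   # central finite difference
analytic = T / (4 * F)                                # the claimed derivative

print(numeric, analytic)   # both ~0.185 K per W m^-2
```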

June 27, 2015 8:08 am

Without quibbling over the use of S-B at all, the consensus method seems to describe a stable system. Which is what we have.
Are the tales of runaway warming in the Summary for Policymakers, or described in the actual AR5 report?
(or did I completely misunderstand your math this early morning).

Mike M.
June 27, 2015 9:13 am

jeanparisot,
You wrote “the consensus method seems to describe a stable system. Which is what we have.”
You are correct.
The tales of runaway warming exist only in the fevered imaginations of a certain type of “skeptic”. And, I suppose, in history since in the early days (perhaps 40 years ago) there was some speculation that it *might* be a possibility.

Sturgis Hooper
Reply to  Mike M.
June 27, 2015 9:59 am

Hansen still warns of the Venus Express, does he not? Or do you include the former head of GISS and perpetrator of the global warming scare of the 1980s to 2010s as a certain kind of skeptic?

Reply to  Mike M.
June 27, 2015 11:17 am

I love it when Venus is mentioned. 96.5% CO2 concentration and 96 times the atmospheric mass. So approximately 230,000 times as many CO2 molecules as in Earth’s atmosphere, but only about 2.5 times the absolute temperature at the surface. If you still believe in “dangerous climate change” after those stats, well, there is no helping you!
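[The “230,000 times” figure checks out arithmetically; a quick Python sketch, assuming 400 ppm for Earth’s CO2 fraction (the comment does not state the value it used):]

```python
venus_co2_fraction = 0.965   # Venus atmosphere is ~96.5% CO2 (per the comment)
venus_mass_factor = 96       # ~96x the mass of Earth's atmosphere (per the comment)
earth_co2_fraction = 400e-6  # ~400 ppm CO2 on Earth (assumed)

ratio = venus_co2_fraction * venus_mass_factor / earth_co2_fraction
print(ratio)   # ~232,000, i.e. roughly the claimed 230,000 times
```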

Mike M.
Reply to  Mike M.
June 27, 2015 11:33 am

Sturgis Hooper wrote: “Hansen still warns of the Venus Express”.
I was going to answer “not so far as I am aware” but then I thought, “well this is Hansen we’re talking about” so I decided I better do a search.
So I correct myself: “only in the fevered imaginations of a certain type of “skeptic” and certain kooks on the alarmist fringe”.

Gary Pearse
Reply to  Mike M.
June 27, 2015 12:20 pm

Mike M.
June 27, 2015 at 9:13 am
You appear to have joined the debate late. Your observation is, however, evidence that as little as 4 to 5 years ago the consensus was pushing thermogeddon, and that since the reality of the pause became reluctantly accepted by the consensus (despite the recent NOAA attempt to “hide the pause”), the number of death-by-fire CAGW types has shrunk considerably (your certain type of ‘sceptic’, I guess: a funny word for the remaining end-of-the-world zealots that used to make up the mainstream).
I can see that your late arrival at a time when real sceptics had already chopped the crisis in half would make it look like sceptics were quibbling over a degree or so.

Menicholas
Reply to  Mike M.
June 27, 2015 3:31 pm

Just in the past week we have been treated to actual “end of the world and no human can survive” warnings.
And all manner of other crazy s**t.
And it is not coming from skeptics.
This is a bizarre contention from Mike M.

Reply to  Mike M.
June 27, 2015 3:37 pm

A paper from just two years ago in Nature Geosciences, calculates that, “The runaway greenhouse may be much easier to initiate than previously thought”.
http://news.nationalgeographic.com/news/2013/13/130729-runaway-greenhouse-global-warming-venus-ocean-climate-science/
So in fact “tales of runaway warming” still do exist in the fevered imaginations of Catastrophic Anthropogenic Climate Change Alarmists (CACCA).

Mike M (the original one)
Reply to  Mike M.
June 28, 2015 11:40 am

Mike M. (the imposter) wrote:

“The tales of runaway warming exist only in the fevered imaginations of a certain type of “skeptic”.”

So Gavin Schmidt is a “certain type of skeptic” for even acknowledging it nine years ago?
http://www.realclimate.org/index.php/archives/2006/07/runaway-tipping-points-of-no-return/
And oh my lying eyes, it must be my imagination allowing me to read this as well – http://arxiv.org/abs/1201.1593
I think most informed people know that the ESSENTIAL ingredient of CAGW theory has always been that more CO2 will beget warming, which will then beget more water vapor, which begets even MORE warming, thus begetting even more water vapor (plus methane from tundra under the north pole, heh heh), ad nauseam, until there is so much begetting going on that the planet is ….
Gee, I wonder who it was who introduced the idea that we would reach a point in man made global warming where it would be impossible to do anything about it? In 2006 someone said –

“Many scientists are now warning that we are moving closer to several “tipping points” that could — within as little as 10 years — make it impossible for us to avoid irretrievable damage to the planet’s habitability for human civilization. “

Mike M.
Reply to  Mike M.
June 28, 2015 12:46 pm

Mike M (the one who struggles with reading comprehension):
I was responding to a comment by jeanparisot who wrote: “Are the tales of runaway warming in the Summary of Policymakers or described in the actual AR5 report?”
Mainstream climate science, even IPCC, does not warn about runaway warming. Here is a link that talks about why, and also about the often inapt use of the phrase “tipping point”: http://www.realclimate.org/index.php/archives/2006/07/runaway-tipping-points-of-no-return/
Maybe if you read it a second time, you will understand it.
And your other link says that “almost all lines of evidence lead us to believe that is unlikely to be possible, even in principle, to trigger full a runaway greenhouse by addition of non-condensible greenhouse gases such as carbon dioxide to the atmosphere”.
In my original reply to jeanparisot, I overstated by implying that none of the alarmists make such ridiculous claims. I have since corrected that.
Yeah, “(the one who struggles with reading comprehension)” was needlessly nasty. I just wanted to make the point that two can play that game. Maybe you were just trying to be funny, but calling someone an imposter is not funny.

co2islife
June 27, 2015 8:17 am

I applaud your efforts, Lord Monckton; the further you dig into the data and theory, the less valid it becomes. Observations: http://drtimball.com/wp-content/uploads/2011/05/average-global-temperature.jpg
1) CO2 traps heat most efficiently at 15 micrometers. That is consistent with about -80 degrees C. The absorption band does spread out to include 255 K (about -18 degrees C). At 5 km into the atmosphere both H2O and CO2 exist, so 255 K IR should be absorbed. If CO2 were the cause, the atmospheric warming would be in the upper atmosphere, not in the surface temperatures. CO2 doesn’t absorb 10 micrometer IR very well. The problem is, there isn’t a hot spot 5 km into the atmosphere.
http://wattsupwiththat.com/2014/08/04/what-stratospheric-hotspot/
2) We have 800 million years of geologic history in which CO2 reached levels as high as 7,000 ppm. Never in the past 800 million years, when 99.999% of the record had higher CO2 levels than today, did the earth experience catastrophic warming… never. Clearly, using the data from the past 600 million years, the sensitivity of temperature to CO2 is non-existent. Are climate scientists not aware of the data they have already published that totally refutes their theory?
3) The “evil sister” of Global Warming is Ocean Acidification. For your next article you may want to do the math exploring how much CO2 would be required to alter the pH of the Oceans, and put that in terms of CO2 existing in the atmosphere and fossil fuel consumption.
Once again, keep up the great work.
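[The -80 degrees C figure for CO2’s 15 um band in point 1 follows from Wien’s displacement law: the temperature whose blackbody emission peaks at 15 um. A one-line Python check, using the standard 2898 um·K displacement constant:]

```python
WIEN_B = 2898.0  # Wien displacement constant, um*K

def peak_temperature_k(wavelength_um):
    """Blackbody temperature whose emission peaks at the given wavelength."""
    return WIEN_B / wavelength_um

t15 = peak_temperature_k(15.0)
print(t15, t15 - 273.15)   # ~193 K, i.e. about -80 C
```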

Menicholas
June 27, 2015 9:46 am

It really does seem to me that if the chart you posted is accurate, then it proves there is nothing to worry about.

NeedleFactory
June 27, 2015 11:06 am

What is the source of this graph, please?

Mike M.
June 28, 2015 1:58 pm

The graph at the link provided by Nick Boyce is very similar to the one posted by co2islife, but they are not the same. The one posted by co2islife does not include the error limits (more than a factor of 2). But I suspect that the ultimate source for the “data” in the two graphs is the same. The link to the paper is provided on the web page linked to by Nick. It’s a model!

Samuel C Cogar
June 28, 2015 2:48 pm

I believe the original source is: http://www.biocab.org/carbon_dioxide_geological_timescale.html
And here is the revised version from the above source, to wit:
http://www.biocab.org/Geological_Timescale.jpg

June 27, 2015 11:06 am

Whenever I see these charts, it appears that everyone seems to think that gravity has been constant; however, there is no way that the giant dinosaurs could have survived with our existing gravity. Therefore, how might a gravity of about 50% of the present have affected our atmosphere millions of years ago?

June 27, 2015 11:24 am

You are assuming that the fossil interpretation of giant land animals is correct. An easier explanation is that all large dinosaurs were water-based. Perhaps the T. rex, for example, was half submerged in water, like a carnivorous hippo.

richard verney
June 27, 2015 2:29 pm

Or gravity has remained the same, but atmospheric pressure was greater during the time of the dinosaurs. Some suggest that the atmosphere must have been about 2 bar.

Menicholas
June 27, 2015 3:32 pm

O2 was far higher.

Menicholas
June 27, 2015 3:38 pm

“there is no way that the giant dinosaurs could have survived with our existing gravity.”
If one is to assume that gravity is variable, then the implications of that render just about all of scientific inquiry a waste of time, as one must allow that all the other physical constants and forces also varied.
And if they vary, why just in time? Why not in space?
Separately, such logic as is attempted by this statement also concludes that bumblebees cannot fly, and various other conclusions that are at variance with observation.
Anyway, this train of thought makes as much sense as the guys who say there is no such thing as water vapor (calling it the myth of cold steam) or convection.

June 27, 2015 3:47 pm

Earl,
There is every way that the giant dinosaurs could have and did survive under gravity the same as at present.
The only suborder of dinosaurs outside the size range of Cenozoic land animals were the sauropods, and far from all of them. Estimates of their weight are coming down, as more is learned of their anatomy, but they’re still bigger than anything before or since.
However mechanical studies of their hollow-boned structure show that they violate no laws of mass or motion.
It may be that some spent part of their lives in water, but that former hypothesis is no longer required to explain their unusual size. At least some of them did however apparently float, as shown by footprints. Sauropods apparently returned to North America in the Late Cretaceous across water gaps from South America, after a long absence.
By what mechanism do you suppose gravity to have been the same as now in the Paleozoic, when land animals were smaller than now, then suddenly much greater for part of the Mesozoic, then back down again?

Eugene WR Gallun
June 27, 2015 8:08 pm

Earl
Out in the world there is a doctor who needs a patient. Help him out. Go find him.
Eugene WR Gallun

Samuel C Cogar
June 28, 2015 3:45 pm

there is no way that the giant dinosaurs could have survived with our existing gravity.

Earl,
To paraphrase another poster above this post, ……. that claim about gravity makes as much sense as ……. the claim that the giant pterosaur named Quetzalcoatlus was one of the largest known flying animals of all time …… with a wingspan estimated to be 10–11 meters (33–36 ft) ……. and an estimated weight of around 200–250 kilograms (440–550 lb).
Read more @ https://en.wikipedia.org/wiki/Pterosaur_size
http://news.nationalgeographic.com/news/2009/01/images/090107-pterosaur-picture_big.jpg

David Ball
June 28, 2015 4:48 pm

There were some very large mammals during the Oligocene epoch. Approx. 30 million years ago.

June 29, 2015 1:38 pm

Samuel,
Unless gravity still hadn’t reached present strength 25 to 6 million years ago, how then to explain these giant flying birds:
http://www.washingtonpost.com/national/health-science/a-newly-declared-seabird-species-may-be-the-largest-to-ever-live/2014/07/07/0940783c-05e4-11e4-8a6a-19355c7e870a_story.html
No reason to suppose that the largest pterosaurs didn’t fly, as all evidence suggests that they did. Their remains have been found far out to sea. They didn’t paddle there. They soared, like big sea birds, such as the one reported in the above link.

June 29, 2015 6:15 pm

Biomechanical study of Argentinosaurus, the largest sauropod known from good material, shows no need to invoke variable gravity or mass of earth:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3864407/
Conclusion:
Forward dynamic simulations shows that an 83 tonne sauropod is mechanically competent at slow speed locomotion. However it is clear that this is approaching a functional limit and that restricting the joint ranges of motion is necessary for a model without hypothetical passive support structures. Much larger terrestrial vertebrates may be possible but would probably require significant remodelling of the body shape, or significant behavioural change, to prevent joint collapse due to insufficient muscle.

June 29, 2015 6:31 pm

PS: One reason herbivores eating such low quality vegetation as conifer needles could reach such great mass was the abundance of trees even in their often arid environments, thanks to the healthy levels of ambient CO2 in those lush, luxurious periods.
Mid-Cretaceous CO2 was probably around 2000 ppm, but might have been higher. Arboreal vegetation probably maxes out its potential at a little lower level than that, but the extra plant food in the air didn’t hurt.

Samuel C Cogar
June 30, 2015 3:10 pm

@ sturgishooper June 29, 2015 at 1:38 pm

Samuel,
Unless gravity still hadn’t reached present strength 25 to six million years ago,

I fail to comprehend the inference that the force of gravitational attraction on the body mass of land animals, …… as measured at sea level, ….. was increasing prior to 6 million years ago, let alone 25 million years ago. But on the contrary, it would be logical to assume that the force of gravitational attraction on the body mass of land animals, …… as measured at sea level, ….. was decreasing post-22,000 years BP simply because of the Post Glacial Sea Level Rise of 450+ feet.
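As a rough check on the sea-level argument (an editor's back-of-envelope illustration, not Samuel's own calculation), the inverse-square law gives the change in surface gravity from measuring roughly 450 ft (~137 m) farther from the Earth's centre:

```python
# Back-of-envelope check (editor's illustration): how much does surface
# gravity change if the measurement point rises ~450 ft (~137 m) with
# post-glacial sea level rise? Inverse-square law: g is proportional to 1/r^2.
R_EARTH = 6.371e6   # mean Earth radius, metres
DELTA_R = 137.0     # ~450 ft of sea level rise, metres

g_ratio = (R_EARTH / (R_EARTH + DELTA_R)) ** 2
fractional_change = 1.0 - g_ratio   # fractional reduction in g

print(f"fractional reduction in g: {fractional_change:.2e}")
```

On these numbers the reduction is about 4 x 10^-5, i.e. a few thousandths of one percent, far too small to bear on animal size either way.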

how then to explain these flying giant birds:

Now flying giant birds is one thing, but flying giant reptiles is a per se, …. horse of a different color. And the last statement in your cited reference pretty much supports my claim about the inability of giant pterodactyls, and possibly all pterodactyls, of having the ability of flight or flying. To wit, quoting from your cited source:

“This is pushing the boundary of what we know about avian size, and I’m very confident that the wingspan is the largest we’ve seen in a bird capable of flight.”

The ability of “self powered” flight surely evolved on or near sea shores by the smaller species of dinosaurs that were in the process of evolving a “feathered” covering of their body …. simply because of the steady and consistent “on-shore” and “off-shore” winds which made for an easy “lift-off” from the ground …. as well as an “easy task” for said evolving dinosaurs to remain “off-ground” for extended periods of time. There are several Sea Bird species of today that are dependent on said “sea breezes” for getting airborne.

No reason to suppose that the largest pterosaurs didn’t fly, as all evidence suggests that they did.

Phooey, …. there are several good reasons why. To wit:
1. The length of their wing-span and calculated/estimated body weight.
2. Their lack of sufficient wing muscles for self-powered flight.
3. Their inability to achieve “lift-off” without the aid of an extremely strong breeze or wind.
4. Their extremely large “bill-snipper” compared to body size would be of no use whatsoever to a “flying” predator ….. and would probably prove quite dangerous in trying to control their flight path …. due to a weight “imbalance”, … a very short tail …… and/or the applied forces of unanticipated “wind shear”.

Their remains have been found far out to sea. They didn’t paddle there. They soared, like big sea birds, such as the one reported in the above link.

“Found far out to sea” means nothing …. because that locale may have been a swampy area, a tidal zone or a very shallow Inland Sea ….. many, many eons ago.
Methinks, and actually believe, that all pterosaurs were shallow-water feeding reptiles that employed the same “feeding technique” as does the present day Black Heron, to wit:
http://ibc.lynxeds.com/files/pictures/IMG_0954_0.jpg

morLogicThanU
June 27, 2015 4:08 pm

The hot spot only mattered when the believers thought they found it. Once it was gone it was irrelevant.
The entire theory of CAGW is based on a positive feedback loop… as your chart shows; when does this feedback occur?
My belief is man-made CO2 has special properties that regular CO2 doesn’t have … You simply don’t know it yet… /sarc

Kitefreak
June 28, 2015 8:36 am

I have that chart pinned up on my wall at work, along with the graphs of the 18+ year pause in satellite-measured temperature, regularly updated from the good Lord Monckton.

June 28, 2015 9:12 pm

+1
The cool upper atmosphere cannot warm the warm lower atmosphere, particularly the lower two to two hundred meters where most of us live.

Frank
June 27, 2015 8:20 am

Lord Monckton: Your math is correct only in the limit that the changes in T and F approach zero, the standard assumption one makes in elementary derivatives. Your equation can be re-written in a form I find more useful:
dF/F = 4*(dT/T)
A 1% change in F (2.4 W/m2 post albedo) produces a 0.25% change in T (about 3/4 of a degK). This equation is independent of any assumptions about emissivity.
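Frank's relation is easy to check numerically against the Stefan-Boltzmann law itself (a minimal sketch; the 240 W/m² post-albedo flux is the conventional figure):

```python
# Check dF/F = 4*(dT/T) numerically against the Stefan-Boltzmann law F = sigma*T^4.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

F1 = 240.0             # post-albedo mean flux, W/m^2
T1 = (F1 / SIGMA) ** 0.25   # equivalent blackbody temperature, ~255 K

F2 = F1 * 1.01         # a 1% increase in flux
T2 = (F2 / SIGMA) ** 0.25

rel_dT = T2 / T1 - 1.0
print(f"T1 = {T1:.1f} K, relative temperature change = {rel_dT:.4%}")
# about 0.25%, as the linearised relation predicts
```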

Frank
June 27, 2015 9:27 am

The above equation becomes inaccurate with large % changes in T:
[T*(1+dT/T)]^4 = T^4 * [1 + 4*(dT/T) + 6*(dT/T)^2 + 4*(dT/T)^3 + (dT/T)^4]
The linearised equation ignores the higher powers of dT/T, which are negligible for the changes of a few percent encountered in climate change on Earth.
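The size of the neglected higher-order terms is easy to quantify (a quick numerical sketch of my own):

```python
# How much error does dropping the higher-order terms of (1 + dT/T)^4 cause?
def truncation_error(x):
    """Relative error of the linear approximation 1 + 4x to (1 + x)^4."""
    exact = (1.0 + x) ** 4
    linear = 1.0 + 4.0 * x
    return abs(exact - linear) / exact

for pct in (0.001, 0.01, 0.05):          # 0.1%, 1%, 5% changes in T
    print(f"dT/T = {pct:.1%}: relative error = {truncation_error(pct):.2e}")
# At the ~1% temperature changes relevant to climate, the error is ~0.06%.
```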

Roy W. Spencer
June 27, 2015 8:23 am

If the entire system warmed by 1 deg., then (as you say) the increase in surface emission is much greater than the increase in emission of the whole system to outer space. But it is the latter that matters in climate sensitivity, since it is what is actually lost by the system. Does that answer your basic question?

Reply to  Roy W. Spencer
June 27, 2015 9:38 am

Correct.
The F involved here is forcing: the hypothetical initial reduction in outgoing radiation that an instantaneous change in (say) atmospheric opacity would cause because of the higher effective radiation altitude. To restore equilibrium, that new radiation altitude needs to warm up enough to bring outgoing radiation back up to the incoming level, and to a first approximation it warms about the same as the surface.
So it’s the temperature at the effective radiation altitude, not at the surface, that goes into the Stefan-Boltzmann equation.

Reply to  Roy W. Spencer
June 27, 2015 11:25 am

I seem to remember some previous difficulty in explaining forcing to Lord Monckton, so I’ll elaborate.
When we talk about sensitivity, we’re talking about response to a given forcing. So you have to know what the forcing is. If you start at pre-industrial CO2 concentration (and thus atmospheric optical density), the effective radiation altitude is $h_1$, where the temperature is $T_{eq}$, the temperature at which the Stefan-Boltzmann equation gives an outgoing radiation intensity the same as the incoming value from space. By convention, we can consider that concentration’s forcing to be zero.
Now, if optical density were to increase instantaneously from that zero-forcing level to a level at which the new radiation altitude is $h_2>h_1$, then the new radiation altitude’s temperature would initially be $T_{eq}-\Delta T$, where $\Delta T =(h_2-h_1)r$ and $r$ is lapse rate. This would cause an initial radiation imbalance of $\Delta F\approx 4\epsilon\sigma T_{eq}^3\Delta T$. The value of that hypothetical initial imbalance is what is referred to as the “forcing” associated with the concentration increase.
Because of that imbalance the surface warms. How much would the surface warm without feedback? If we ignore things like lapse-rate feedback, etc., it warms by $\Delta T$, i.e., by the change in radiation-altitude temperature that would be needed to redress the initial radiation imbalance. As Dr. Spencer observed, the change in surface, as opposed to radiation-altitude, radiation would be more than the imbalance to be eliminated. But it isn’t the surface radiation we’re interested in when we talk about climate sensitivity; it’s the outgoing radiation.
So the change in surface temperature would need to be more than would be required for the surface radiation to increase by the forcing; it would instead be the temperature change required for the radiation-altitude radiation to increase by that quantity: a larger number. Although we are indeed talking about a temperature change at the surface, therefore, it’s the absolute temperature at the effective radiation altitude, not at the surface, that needs to be used in Stefan-Boltzmann to determine the “Planck parameter.”
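Joe Born's point about which temperature goes into Stefan-Boltzmann can be made concrete (an editor's illustration; the 255 K radiating temperature, 288 K surface temperature, and 3.7 W/m² doubled-CO2 forcing are the conventional figures, assumed here):

```python
# "Planck parameter" dF/dT = 4*sigma*T^3 evaluated at the radiation altitude
# (255 K) versus at the surface (288 K), and the no-feedback warming each
# implies for a canonical 3.7 W/m^2 CO2-doubling forcing.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_parameter(T):
    """dF/dT for F = sigma*T^4, in W m^-2 K^-1."""
    return 4.0 * SIGMA * T ** 3

FORCING = 3.7  # W/m^2, conventional doubled-CO2 figure

for T in (255.0, 288.0):
    lam = planck_parameter(T)
    print(f"T = {T:.0f} K: dF/dT = {lam:.2f} W/m^2/K, dT = {FORCING / lam:.2f} K")
# Using 255 K gives roughly 1.0 K per doubling; using 288 K gives only
# roughly 0.7 K, which is the discrepancy the thread is arguing about.
```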

Pierre DM
Reply to  Joe Born
June 27, 2015 1:26 pm

John, perhaps you can explain a couple of confusing issues to a simpleton like me. I am fully aware of the fact that energy, radiation and temperature are not the same things, so I don’t think my confusion is there.
My first confusion has to do with the gas laws. I understand that there can be an energy gain due to GHGs, but I do not understand how that can translate to higher atmospheric temperatures at surface altitudes unless the atmospheric pressure increases as a result of this increase in energy. Common sense tells me that the atmospheric volume would increase a little and the temps would stay the same, rather than the pressure going up. To me you seem to be somewhat implying this, but you still translate that into a surface temperature increase. What am I missing between the universal gas laws and the atmosphere? The second confusion that I have is with the radiation budget diagrams: I see nothing accounting for long-term storage of energy within the biomass, and the rate of storage in the biomass increases with increasing GHGs. Would this have something to do with my confusion on the first question? Maybe I am doomed to stay confused.

Reply to  Joe Born
June 27, 2015 2:05 pm

Pierre DM:
Not sure I understand your question, but maybe this will help:
To a first approximation, don’t worry about the gas-law issues. Just think about the radiators.
Radiation that reaches space comes from molecules at the surface and at various altitudes in the atmosphere, but the overall effect is as though it all came from one effective radiating altitude, somewhere in the troposphere, which is higher when there is more greenhouse gas and thus more radiators to make a thicker “blanket.” Because of the lapse rate, a higher effective altitude means a larger temperature difference between that altitude and the surface. But over time that altitude’s temperature must settle at the level that radiates the same amount of energy as comes in from space (and isn’t reflected), so it can be thought of as fixed, independently of where the effective radiating altitude is. Since the effective radiating altitude is therefore always at the same temperature however high it is, while the difference between its temperature and the surface’s depends on that height, the surface temperature has to be higher for a greater effective radiating altitude.
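A toy calculation (an editor's illustration; the 6.5 K/km lapse rate, 255 K equilibrium temperature, and ~5 km radiating altitude are assumed round figures) shows the mechanism:

```python
# Surface temperature implied by a fixed radiating temperature plus a
# lapse-rate climb from the effective radiating altitude down to the surface.
T_EQ = 255.0       # effective radiating temperature, K (fixed by the sun)
LAPSE = 6.5        # mean tropospheric lapse rate, K per km (assumed)

def surface_temp(h_km):
    """Surface temperature if the effective radiating altitude is h_km."""
    return T_EQ + LAPSE * h_km

print(surface_temp(5.0))   # ~287.5 K with an assumed ~5 km radiating altitude
print(surface_temp(5.2))   # raising the altitude 200 m warms the surface ~1.3 K
```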

Eugene WR Gallun
Reply to  Joe Born
June 27, 2015 9:00 pm

Joe Born
I would question you about the assumption you make in the first paragraph — an assumption it most certainly is — as you yourself admit.
“If you start at pre-industrial CO2 concentration (and thus atmospheric optical density), the effective radiation altitude is h1, where the temperature is Teq, the temperature at which the Stefan-Boltzmann equation gives an outgoing radiation intensity the same as the incoming value from space. By convention, we can consider that concentration’s forcing to be zero.”
“By convention” means that the basis of your whole argument begins with an assumption.
Do you have any proof for it? Isn’t it just another warmist meme that is “cherry picked”? To put what you are saying another way using a well known cultural image —
O Noes, we were living in the garden of carbon dioxide paradise and we screwed it up!
Really, isn’t that what you are assuming? That we were just a couple of hundred years ago living in the garden of carbon dioxide paradise?
What if I say that carbon dioxide levels during pre-industrial times were too low and then blame the Little Ice Age on the lack of CO2 in the atmosphere?
I then could say that if in fact there is CO2-caused global warming we are merely returning to a time when CO2 concentration was truly optimal — and soon we will actually reach the concentration at which “the Stefan-Boltzmann equation gives an outgoing radiation intensity the same as the incoming value from space”.
Because you “assume” at the beginning, all your subsequent arguments are worthless. You are employing a well-known Sophist trick. Let me set the initial assumption and I will have you admitting that pigs can fly.
Eugene WR Gallun

Reply to  Joe Born
June 27, 2015 9:24 pm

Joe Born: So the change in surface temperature would need to be more than would be required for the surface radiation to increase by the forcing; it would instead be the temperature change required for the radiation-altitude radiation to increase by that quantity: a larger number. Although we are indeed talking about a temperature change at the surface, therefore, it’s the absolute temperature at the effective radiation altitude, not at the surface, that needs to be used in Stefan-Boltzmann to determine the “Planck parameter.”
You have completely missed, or decided to ignore, the point that the calculated change at the effective radiation altitude is not necessarily accurate for the calculated change at the surface. Indeed, the change at the surface depends on the changes in the non-radiative fluxes from the surface to the upper troposphere.

Reply to  Joe Born
June 28, 2015 3:42 am

Eugene WR Gallun:
I am doing what Lord Monckton is doing: accepting the conventional assumptions for the sake of argument and determining what they imply. But I’m showing that they don’t imply what he imagines they do.
It’s true that I’ve additionally called the forcing at pre-industrial concentration levels zero, but nothing depends on that, since the sensitivity calculations depend on forcing and temperature changes, not on those quantities’ absolute values.
In other words, if I called the forcing at pre-industrial concentration levels, say, -300 W/m^2 instead of zero, the conclusions would have been exactly the same.

Reply to  Joe Born
June 28, 2015 3:55 am

matthewrmarler: “You have completely missed, or decided to ignore, the point that the calculated change at the effective radiation altitude is not necessarily accurate for the calculated change at the surface.”
I have no idea what you’re trying to say; I explicitly did say that a given temperature change implies a smaller radiation change at altitude than at the surface, where the temperature is higher.
As for the rest of your comment, it would perhaps help matters if you were to specify what quantities you refer to when you use “change.”

Frank
Reply to  Joe Born
June 28, 2015 3:19 pm

Joe Born: Any simple calculation of the no-feedbacks climate sensitivity has at its HEART the assumption that the earth can be treated as a blackbody. This is the simplest reason why such a calculation must be based on the blackbody equivalent temperature (255 degK). An object that emits 240 W/m2 and is not 255 degK is not a blackbody. Such calculations are meaningful if all of the non-blackbody behavior of the planet can later be introduced as feedbacks.

Reply to  Joe Born
June 29, 2015 10:28 pm

Joe Born: As for the rest of your comment, it would perhaps help matters if you were to specify what quantities you refer to when you use “change.”
Basically, I refer to changes in energy transfers from the surface to the troposphere that are “forced” by the hypothesized change in Earth surface temperatures. Specifics are in the post I wrote calculating “a sensitivity” of the Earth surface temperature to a 4 W/m^2 increase in downwelling LWIR “forcing”. It is possible, as I wrote later, that a 1C increase in Earth surface temperature would produce a 2C increase in the temperature of the “effective radiating surface”. The estimates are not to be considered terribly accurate, but can be improved by research.

Reply to  Roy W. Spencer
June 27, 2015 1:11 pm

Roy is of course right. In the expression for sensitivity, you need to use the total forcing. It’s true that a small rise in surface T will lead to a large surface upflux. But the air also radiates more. The downflux increases by a fairly similar (and linked) amount. It is the net change that counts for sensitivity.

Roy W. Spencer
Reply to  Roy W. Spencer
June 27, 2015 2:09 pm

PierreDM, if the global atmosphere warms, then surface pressure remains the same because the total mass above the surface remains the same and the volume is not constant, that is, the atmosphere expands upward. Actually, the lower atmosphere would expand upward…since the upper atmosphere cools with global warming, it contracts. I don’t know what the net effect at top-of-atmosphere would be.
In other words, PV=nRT with increasing T only implies increasing P if V is held constant…which it isn’t.
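Dr. Spencer's point (constant surface pressure, variable volume) can be sketched with the ideal gas law (an editor's illustration):

```python
# At fixed surface pressure (fixed column mass above the surface), PV = nRT
# means warming the air expands it rather than raising the pressure:
# V2/V1 = T2/T1 at constant P.
def volume_ratio(T1, T2):
    """Volume ratio of a gas parcel warmed from T1 to T2 at constant pressure."""
    return T2 / T1

# Warm a near-surface parcel from 288 K to 289 K: it expands ~0.35%,
# so the column grows taller instead of the surface pressure rising.
print(f"{volume_ratio(288.0, 289.0) - 1.0:.4%}")
```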

Menicholas
Reply to  Roy W. Spencer
June 27, 2015 3:43 pm

So what it comes down to is…it’s very complicated?
I agree with that.

morLogicThanU
Reply to  Roy W. Spencer
June 27, 2015 4:32 pm

Isn’t P only going to increase in a “closed” vessel? If the vessel is infinite (space)… V has the theoretical possibility to increase forever… well, until gravity loses…

Pierre DM
Reply to  Roy W. Spencer
June 27, 2015 7:42 pm

I suspected that I might be looking at that equation backwards, assuming that the increase in volume with T being constant keeps T constant because it’s a gas, instead of volume increasing as a response to T. That would make what John said make sense. With a bit of further thinking along those lines I think the upper troposphere hot spot makes more sense.
I am glad that I asked. I have a habit of looking at things backwards all the time because that is how I make my living. The golden rule for me is “strong correlations between parameters (especially in nature), does not necessarily equate to causation”.
I work on inventing new adhesive formulations. I often look in the rear view mirror at material I have already covered for answers. Many times the answer I had discarded as incorrect because of biases or knowledge at the time of experimentation turns out to be the right answer. That was the case for relativity with Einstein.
I always remember a lesson I learned from my father about hunting deer. His words: “Don’t worry about finding the deer, because as soon as you enter the woods making noise the deer find you and follow you to keep track of you. To find them, turn around and freeze still; they will walk up on you.” He was right, but I still couldn’t shoot-em.

Menicholas
Reply to  Roy W. Spencer
June 28, 2015 11:56 am

Have you looked into how mussels and barnacles attach themselves to rocks so tightly that they can barely be blasted off with great effort?
That they do so while immersed in salt water and while being pounded by waves makes it all the more incredible.
I had heard some years back that the military was looking into this.
Any insight?
(Part of the work I do involves repairing electrical machinery, and sometimes this equipment is located in conditions that allow these organisms to become a real problem. I have seen these things stuck with amazing tenacity on surfaces that are usually thought of as impossible to paint or glue because nothing will stick to them.)

Reply to  Roy W. Spencer
June 27, 2015 9:03 pm

Roy W. Spencer: If the entire system warmed by 1 deg., then (as you say) the increase in surface emission is much greater than the increase in emission of the whole system to outer space. But it is the latter that matters in climate sensitivity, since it is what is actually lost by the system.
What matters to human and other life is the climate change on and near the surface. What happens at the “effective radiating temperature” and “effective radiating altitude” matters only as it affects the near surface changes. As it happens, a reasonable estimate of surface sensitivity can be calculated from published data about surface flows, as I posted below (I have posted it earlier at RealClimate and ClimateEtc.) An earlier version has been downloaded a few dozen times from my ResearchGate Page.
It is at the surface where the changes produce losses and gains to human and other life. The deep oceans and upper atmosphere complicate the calculations and require some sort of “equilibrium” to get an approximation relevant to the surface.

Reply to  Roy W. Spencer
June 27, 2015 9:50 pm

Dr Trenberth admitted to Dr Noor Van Andel that the atmospheric window was actually 66 W/m2 and not 40. If that is correct then there is no need for the right hand side of the heat balance cartoon: evaporation 80 + convection 19 + radiation 66 then equals the incoming at the surface of 165 W/m2.
The cartoons are cartoons and do not reflect reality.

AlecM
June 27, 2015 8:25 am

To claim the net mean IR energy flux leaving the surface is given by the S-B equation for the mean surface temperature is puerile nonsense from people with very limited knowledge of physics. That’s because the S-B equation does not predict a real energy flux, rather the Potential energy flux from the emitter to a radiation sink at absolute zero; a radiant exitance.
The problem is that since Goody and Yung (1989) Atmospheric Science has believed in a bowdlerisation of the ‘two-stream approximation’ as a bidirectional photon gas diffusion argument. This breaches Maxwell’s Equations (you must use two S-B equations to get the vector sum of all the Poynting Vectors) and the 2nd Law of Thermodynamics (it claims a vast increase of radiation entropy production rate for the Open Thermodynamic System).
This bad mistake is about the same as adding Electromotive Force (Volts), another EM potential term, as a scalar. They assume that the Pyrgeometer instrument, which outputs mostly the S-B exitance, proves the point, but it’s an exitance, not real.
The real net mean surface IR is determined experimentally AND SHOWN in Trenberth’s cartoon as 63 W/m^2. 40 W/m^2 is from image analysis of satellites in the Atmospheric Window; 23 W/m^2 is the difference of 160 W/m^2 SW heating and (17 W/m^2 convection + 80 W/m^2 evapo-transpiration). It is also the vector sum of (396 W/m^2 surface exitance and 333 W/m^2 atmospheric exitance), which is 63 W/m^2 integral of net Poynting Vectors.
So they exaggerate surface IR 6.3 times. They then put in an imaginary Down |OLR|, 238.5 W/m^2, making the lower atmosphere’s energy balance = 1.4 times real SW thermalisation. They add the imaginary 94.5 W/m^2 to the real 63 W/m^2, which cannot heat the lower atmosphere, to give 157.5 W/m^2 ‘Clear Sky Atmospheric Greenhouse factor’.
The experimental proof that this does not exist as a heating energy is that it implies a mean ~20 m atmospheric temperature of ~0.5 deg C, lower than at any time in the past 444 million years. The Last Glacial Maximum was ~10 deg C. A decade ago, Hansen admitted NASA had set out to measure Surface Air temperature (he uses ~50 ft), but couldn’t do it, so model it. In my view since that time they have been bluffing it out hoping that no-one realises they were always wrong in copying Sagan and Pollock’s original mistakes, three of them. Sagan later messed up aerosol optical physics as well – the sign of the AIE is reversed.
Also in 1977, Houghton in ‘The Physics of Atmospheres’ Fig. 2.5 shows that lapse-rate convection, needed for the gravitational temperature gradient from a virtual work argument, keeps that temperature difference at zero. Later, when he co-founded the IPCC and accepted Hansen’s false science, Houghton gave up science to become a religiously-driven politician preaching false science.
This is a mess and you can always tell who was taught this incorrect radiative physics; they transpose ’emittance’, an old term for the SI Unit ‘exitance’, for ’emissivity’, so they cannot communicate with the rest of science. This course module at MIT and the next one, show how easy it is to mislead students: http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node134.html
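Whatever one makes of AlecM's interpretation, the net-IR arithmetic he takes from the Trenberth diagram can be checked directly (an editor's sketch using the comment's own round numbers):

```python
# Net surface IR computed two ways, using the round numbers cited above:
# (a) upward surface exitance minus downward atmospheric exitance, and
# (b) absorbed solar at the surface minus the non-radiative fluxes.
surface_up = 396.0         # W/m^2, surface LW exitance
atmos_down = 333.0         # W/m^2, "back radiation"
absorbed_sw = 160.0        # W/m^2, SW heating of the surface (comment's figure)
convection = 17.0          # W/m^2, sensible heat
evapotranspiration = 80.0  # W/m^2, latent heat

net_ir_a = surface_up - atmos_down
net_ir_b = absorbed_sw - (convection + evapotranspiration)
print(net_ir_a, net_ir_b)   # both come out at 63 W/m^2
```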

Anne Ominous
June 28, 2015 5:51 pm

AlecM
Your link does not support your argument. I rather think it is you who misunderstands.
A body at a given temperature (gray bodies, not black bodies, are usually used for a more realistic approximation) emits in a range of wavelengths. As your link points out, these wavelengths change with temperature.
Real materials (not black or gray bodies) also have preferred absorbance and emittance bands. These vary by material and also by temperature.
However, the relation ɛσT⁴ is not some kind of “theoretical emittance” as you appear to think; it is the integral of the total radiant output across all frequencies. As such, it is not just some theoretical mathematical construct, but rather a real effect: it gives the actual radiative flux density in W/m², and multiplying by area gives total radiative power output in watts. (I hope those symbols make it through WordPress.)
This total radiant output is dependent only on emissivity and temperature. Again for real materials you have to account for their actual radiation bands, but for most purposes a gray body is going to give you a decent real-world approximation at a given temperature. Also in the real world, emissivity can vary somewhat with actual temperature so nobody is claiming that it’s exact, but it’s a very good rule of thumb.
You appear to be saying that isn’t “real” output because the atmosphere interacts with that radiation. But that’s a misunderstanding. The output is the output. What it may interact with later might change the effective output at a distance, but that is the real output from the surface.
Heat transfer is another matter and depends on other factors. But saying the Stefan-Boltzmann equation is only some kind of theoretical beast divorced from reality is simply false. It’s a good approximation of the REAL radiative output. What you do above the surface can affect what that output does, but at a given emissivity the ACTUAL output at the surface is dependent only on temperature, according to that equation.
But to say “the S-B equation for the mean surface temperature is puerile nonsense from people with very limited knowledge of physics. That’s because the S-B equation does not predict a real energy flux, rather the Potential energy flux from the emitter to a radiation sink” indicates that it is you who does not understand how it actually works. There need be no “sink”… at a given temperature, the output is the output. It can be in a styrofoam box or in deep space; as long as the temperature is constant, so will be the output. Nor is it measured at absolute zero; in fact, other than emissivity, temperature is the sole variable on which everything depends.
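Anne Ominous's point can be checked against the surface figure in the budget diagrams (a minimal sketch, assuming ε = 1 and the conventional 288 K mean surface temperature):

```python
# Stefan-Boltzmann flux density F = eps*sigma*T^4 at the conventional 288 K
# mean surface temperature, for comparison with the 390-398 W/m^2 "consensus"
# surface radiation in the budget diagrams.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
EPSILON = 1.0     # emissivity assumed unity, as in the head post

def sb_flux(T, eps=EPSILON):
    """Radiant exitance of a gray body at temperature T (kelvin), W/m^2."""
    return eps * SIGMA * T ** 4

print(f"{sb_flux(288.0):.1f} W/m^2")   # ~390 W/m^2
```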

June 29, 2015 10:50 pm

AlecM: To claim the net mean IR energy flux leaving the surface is given by the S-B equation for the mean surface temperature is puerile nonsense from people with very limited knowledge of physics. That’s because the S-B equation does not predict a real energy flux, rather the Potential energy flux from the emitter to a radiation sink at absolute zero; a radiant exitance.
So far, so good, but how inaccurate in practice is the S-B equation when used this way? 10% (a figure in Pierrehumbert’s text)? 20%?
The rest of your post tries, I think, to cover too many topics. If at the surface the S-B equation is accurate to within 20% (as with barely adequate electrical components in days long past), then the calculations using it are accurate enough to guide research toward more accurate approximations, and maybe enough for practical considerations like planning for future climate change. A 20% error from using S-B to calculate radiant energy change at the surface is smaller than the error (prevalent in most of these discussions) from ignoring the changes in the non-radiative energy transfers.
This course module at MIT and the next one, show how easy it is to mislead students:

Editor
June 27, 2015 8:28 am

An interesting use of their own game rules against the Global Warming Theorists. Well done.
But they will just change their rubber rules…
BTW, the more basic problem with these energy budget graphs is that they are statically scored. Daily we have day/night cycles, so all those formulas need a sine-wave cycle (and at a fourth-power function this is a dramatic deal… what with sun in and IR out on different shifts). On a longer term, does anyone really think clouds and precipitation are a global average constant? Precipitation represents giant heat flow, via convection and phase change, to TOA, and has big decadal-scale changes (clearly visible in flood reports vs drought cycles). So what happens to a 2 W radiative flux calculation when there is a 20 W precipitation change?
In short, you play their model game well, but it is about as real as a game of bridge.
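The fourth-power point lends itself to a quick numerical sketch (mine, not part of the original comment): compare the temperature implied by the 24-hour mean flux with the mean of the temperatures implied by the instantaneous flux, for an idealized half-sine day and a dark night.

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Idealized day/night cycle: absorbed flux is a half-sine by day, zero at
# night. The peak is chosen so the 24-hour mean is 240 W m^-2.
steps = 1000
fluxes = [240 * math.pi * math.sin(2 * math.pi * i / steps)
          if i < steps / 2 else 0.0
          for i in range(steps)]

mean_flux = sum(fluxes) / steps

# Temperature from the mean flux -- what a static budget diagram implies:
t_static = (mean_flux / SIGMA) ** 0.25

# Mean of the instantaneous equilibrium temperatures (zero flux gives 0 K
# here, which exaggerates the effect, but the direction is the point):
t_dynamic = sum((f / SIGMA) ** 0.25 for f in fluxes) / steps

# Because T goes as the fourth root of F, averaging the flux first always
# gives a HIGHER temperature than averaging the temperatures.
assert t_dynamic < t_static
```

The static temperature comes out near 255 K; the day/night average is far lower, which is the direction of the error the comment is pointing at.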

Erik Magnuson
June 27, 2015 9:10 am

What’s even worse is the GCMs don’t model clouds very well, or local-scale events such as tropical thunderstorms.

Menicholas
Reply to  Erik Magnuson
June 27, 2015 9:50 am

Thunderstorms are likely to be the very largest movers of thermal energy on the planet, outside of the ocean currents.
And the hotter it is, the stronger the storms and the faster they transport energy to altitude.

David A
Reply to  Erik Magnuson
June 28, 2015 5:29 am

Indeed, I often ask, but have no answers. Do all energy inputs have to manifest as heat?
How much energy does it take to accelerate the hydrological cycle, and to grow 35 percent more bio-life? I only ask because I want to know.

Reply to  Erik Magnuson
June 28, 2015 11:22 am

@Menicholas I’m not a scientist, but as I recall it has only been since the long-term orbiting of the space shuttle and International Space Station that we discovered that the electrical energy involved in thunderstorms had been underestimated by orders of magnitude. I seem to recall there is also evidence that interaction of the solar wind with the Earth’s magnetic field induces at least some of the current in thunderstorms, which is an interesting addition to the atmosphere’s energy budget: induced instead of radiated. It seems to me there have been a number of astonishing discoveries in the last half-century (such as that the biomass of the “other” kingdom of life residing near deep-sea vents outweighs ALL of the photosynthesis-dependent animal kingdom) that haven’t yet been fully worked into the various “budgets” and “cycles”. In fact I find it curious that we live beneath a cascade of newly created energy and everyone wants to find equilibrium! It is really interesting to me that no one wants to assign the “perfect” equilibrium for the planet and say how we plan to engineer staying at that point. Talk about interfering with nature!

Menicholas
Reply to  Erik Magnuson
June 28, 2015 11:03 pm

Fossilsage,
I actually just commented on that very subject recently, but will have to look for it.
I recall a very interesting discussion here…let me look for the link:
http://wattsupwiththat.com/2015/04/22/using-cosmic-rays-to-reveal-earths-thunderstorm-processes/
But I see you were already there.

Richard G.
June 27, 2015 10:13 am

The diagrams need to incorporate an image of Bio-Spongebob who is out there busily sucking up energy and creating high energy chemical bonds.
http://www.lyricsmode.com/i/bpictures/4795.jpg

June 27, 2015 10:16 am

“In short, you play their model game well, but it is about as real as a game of bridge.”
I don’t think their model game is even as real as a nice game of bridge. One can look at Kiehl & Trenberth (1997) and see the stupid. It is amazing to me that the scientific community thinks that horror is science.

June 28, 2015 7:30 am

Like bridge, but it’s not played at a penny a point.

Kent Noonan
June 27, 2015 8:29 am

Well done. You should get this into a peer reviewed publication. Otherwise it will be ignored by those who should be paying attention.

RCS
June 27, 2015 8:30 am

I may be missing something here, but I don’t understand equation 2.
If F = σT^4, then T = (F/σ)^(1/4).
dT/dF = (1/σ)^(1/4) · F^(-3/4)/4 = (1/σ)^(1/4) · 1/(4 F^(3/4))
I think the mistake is inversion of the derivative. In general dx/dy does not = 1/(dy/dx).
For Example:
y=x^2, dy/dx=2x
then
x = y^(1/2), dx/dy = 1/(y^(1/2)) which does not = 1/(2x)

Editor
June 27, 2015 11:29 am

Lord Monckton, I’m sorry but RCS is correct. You’ve made a mathematical mistake in Equation 2. In general dx/dy does NOT equal 1/(dy/dx).
In the current case we have (per Mathematica):

In[101]:= eqn1 = sigma * epsilon * T^4
Out[101]= epsilon sigma T^4
In[102]:= eqn2 = (F/(sigma*epsilon))^(1/4)
Out[102]= (F/(epsilon sigma))^(1/4)

Those are the standard S-B equations going first one way and then the other. At that point I differentiate the two.

In[137]:= D[eqn1, T]
Out[137]= 4 epsilon sigma T^3
In[138]:= D[eqn2, F]
Out[138]= 1/(4 epsilon (F/(epsilon sigma))^(3/4) sigma)

I’ll put the Mathematica answers into Latex for easier reading;
$\frac{dF}{dT} = 4 \epsilon \sigma T^3$
$\frac{dT}{dF} = \frac{1}{4 \epsilon \sigma \left(\frac{F}{\epsilon \sigma }\right)^{3/4}}$
Note that dF/dT is NOT the inverse of dT/dF.
Best regards to you,
w.

Editor
Reply to  Willis Eschenbach
June 27, 2015 11:40 am

A final note. The second of the two derivatives shown above, dT/dF, can also be written as
$\frac{dT}{dF} = \frac{\sqrt[4]{\frac{F}{\epsilon \sigma }}}{4 F}$
which in turn is
$\frac{dT}{dF} = \frac{T}{4 F}$
Interesting … looks like I might be wrong and his Lordship was indeed correct … per Mathematica

In[142]:= Simplify[Expand[D[eqn2, F]]]
Out[142]= (F/(epsilon sigma))^(1/4)/(4 F)


w.
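For readers without Mathematica, the identity dT/dF = T/(4F) can also be checked with a plain finite difference; a minimal Python sketch (mine, taking ε = 1 as in Eq. (1) and a surface flux of 390 W m^-2):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 1.0         # emissivity, unity as in Eq. (1)

def temp(flux):
    """Invert the Stefan-Boltzmann law: T = (F / (eps * sigma))^(1/4)."""
    return (flux / (EPS * SIGMA)) ** 0.25

flux = 390.0                  # W m^-2, roughly the surface value at issue
t = temp(flux)                # about 288 K

# Central finite difference for dT/dF:
h = 1e-3
numeric = (temp(flux + h) - temp(flux - h)) / (2 * h)
analytic = t / (4 * flux)     # the closed form under dispute

# The two agree to many decimal places, confirming dT/dF = T/(4F).
assert abs(numeric - analytic) < 1e-9
```

At 390 W m^-2 this slope works out to roughly 0.18 K per W m^-2.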

treyg
Reply to  Willis Eschenbach
June 27, 2015 1:53 pm

Willis, I was about to reply to your comments, but you got there first. The math for eqn 2 is correct.

treyg
Reply to  Willis Eschenbach
June 27, 2015 1:58 pm

Reply to  Willis Eschenbach
June 28, 2015 9:48 pm

What happens if you substitute F = eps*sig*T^4 in dT/dF? It looks to me like you get 1/(dF/dT).

June 29, 2015 3:02 pm

oops! I missed that Willis had already corrected his error before I wrote my post. Sorry.

RCS
June 27, 2015 8:32 am

Sorry, the last line should be
x = y^(1/2), dx/dy = 1/(y^(1/2))

Trey
June 27, 2015 10:22 am

No. You have to use the chain rule. dx/dy =(1/2) y^(1/2-1)=(1/2) 1/(y^(1/2))=1/(2 x)
If you’re still in doubt, use wolfram alpha.
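A one-line check of the same point (mine, not Trey’s): for y = x² with x > 0, the inverse-function rule dx/dy = 1/(dy/dx) does hold.

```python
import math

x = 3.0
y = x ** 2                 # y = 9

dy_dx = 2 * x              # derivative of y = x^2, i.e. 6
dx_dy = 0.5 * y ** -0.5    # derivative of x = y^(1/2), i.e. 1/(2*sqrt(9))

# 1/(2*sqrt(y)) = 1/(2x), so the reciprocal relation holds exactly.
assert math.isclose(dx_dy, 1 / dy_dx)
```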

co2islife
June 27, 2015 8:32 am

BTW, the Oceans are the most significant factor driving the climate, not CO2.
“Ocean circulation is responsible for the immense global heat transport that affects every facet of world climate. And yet, the scientists remain perplexed as to the mechanics and intricacies of the entire system.
But what do the climate scientists know for sure? Well…that human CO2 emissions have absolutely nothing to do with this natural ocean circulation heat transport and weather system. That part is settled.”
Just look at the temperature graph and how El Ninos totally distort the charts.
BTW, how does atmospheric CO2 warm the oceans? It doesn’t. So we either have three forces in action: 1) one force to warm the atmosphere (i.e. man-made CO2), 2) one force to warm the oceans (i.e. visible light reaching the oceans), and 3) one force that is increasing all non-man-made GHGs other than CO2. Or we can have one natural force doing all three: the sun’s radiation reaching the earth warms the oceans and surface, the warming causes outgassing of the oceans and biosphere, and it is all natural. Or is a trace gas made by man the cause? I go with the single natural force, which explains all observations with a common-sense explanation.

wayne Job
June 28, 2015 2:45 am

We have a sun, our only heater, its cycles and moods controlled by the various perambulations of the giant planets. These planets are the final arbiters of our heating and cooling; not only is it all predictable, the forces involved also create our vulcanism, which can easily be tied to the various moods of our sun and planetary system. Solar science, warmist science and indeed standard-model science have been barking up the wrong trees for my entire life. Lord Monckton is trying valiantly to disprove their nonsense by playing in their backyard and using their faulty maths against them. Thank you sir.

Phillip Bratby
June 27, 2015 8:34 am

The energy budget diagrams are a load of nonsense. They aren’t in units of energy and they assume the earth is flat and doesn’t have a day and a night. How they ever got through a peer review process beggars belief – but this is climate “science”.

Mike M.
Reply to  Phillip Bratby
June 27, 2015 9:06 am

Phillip Bratby
You wrote: “The energy budget diagrams are a load of nonsense. They aren’t in units of energy and they assume the earth is flat and doesn’t have a day and a night.”
Your comment is nonsense. The top of atmosphere flux is given as 340 W/m^2; that is plainly for a sphere, not a flat surface. The numbers are global averages. You are aware that is always daytime over about half of the Earth’s surface?

Go Whitecaps!!
Reply to  Mike M.
June 27, 2015 10:58 am

Have to agree with you, Mike. It’s unfortunate that Phillip makes statements that make no sense. Incoming insolation is around 1366 W/m^2 and I’ll let him figure out why the diagram shows 1/4 of this amount. Also, what units would Phillip use? Maybe he doesn’t know that the first energy balance diagram was made in 1911 and the numbers are only being refined with more modern equipment.

Phillip Bratby
Reply to  Mike M.
June 27, 2015 11:37 am

Of course I am aware that it is daytime over half the surface. Since when has energy flux been in units of W/m^2? Are you not aware that global averages are totally meaningless (unless you think it makes sense to average day and night and the polar regions with the equatorial region)?

Mike M.
Reply to  Mike M.
June 27, 2015 1:06 pm

Phillip Bratby wrote: “Since when has energy flux been in units of W/m^2? ”
Since always. https://en.wikipedia.org/wiki/Flux#Flux_as_flow_rate_per_unit_area
“Are you not aware that global averages are totally meaningless”
So you think that the total amount of energy entering and leaving the planet is meaningless? Really?
I suppose you will respond that the issue is not “total” but “average”. If you are so innumerate that you don’t see that the difference is trivial, I don’t know what to say.

KevinK
Reply to  Phillip Bratby
June 27, 2015 9:57 am

Phillip you are totally correct. These “energy budget” cartoons are a bad joke. Start with the wrongs units, add error bars that are larger than the supposed “imbalance” and end up “proving” that CO2 “traps heat”.
Any further analysis based on these cartoons is also flawed.
It is a large macro system that is never ever in equilibrium anywhere. You cannot assign a global “energy flow” and do any useful analysis.
The whole climate science “energy budget” exercise is pseudo science.
Cheers, KevinK

June 27, 2015 10:26 am

Agreed in total.
I did see a nice 3D energy budget for the planet once, but this site forbids linking to where I saw it.

Go Whitecaps!!
June 27, 2015 4:42 pm

Kevin, you have no idea what you are talking about.

RCS
June 27, 2015 8:42 am

Sorry, I apologise. Eq 2 is correct. Ignore the above – I had a Brainf**t

Mike M.
June 27, 2015 8:44 am

Lord Monckton,
You wrote: “Two conclusions are possible. Either one ought not to use Eq. (1) at the surface, reserving it for the characteristic emission altitude … or … sensitivity is harmlessly low”.
It is neither.
There is another possibility, as already indicated by Roy Spencer: You have misunderstood something fundamental. Specifically, delta_F is the change in flux, it is only equal to the change in forcing at the emission altitude. So no, you have not discovered some silly error in computing the radiation budget.
I must say that I find your claim that “you only ask because you want to know” to be somewhat disingenuous. If your claim was honest, you’d have stopped with the question, rather than following it with extensive unfounded speculations based only on your misunderstanding.

Aphan
Reply to  Mike M.
June 27, 2015 10:20 am

It is entirely logical to present the information/understanding one currently has and ask a question that would help clarify what, if anything, is wrong with that information. LM stated that he had asked the question before and it resulted in more futility than productivity, so is it not likely that he presented his course of thinking here along with the question to make responses to him as efficient as possible?

June 27, 2015 8:44 am

Dear Lord Monckton,
My knowledge of radiation is too long ago, so no comment on that part. Only a comment on:
Professor Murry Salby has estimated that, after the exhaustion of all affordably recoverable fossil fuels at the end of the present century, an increase of no more than 50% on today’s CO2 concentration – from 0.4 to 0.6 mmol mol–1 – will have been achieved.
As we have used some 370 GtC up till now and the recoverable fossil fuels seem to be at least 10-fold that quantity (including coal), I am not sure that it will end at 500 ppmv. With “business as usual”, human emissions simply go up slightly quadratically over time, and as the increase in the atmosphere follows more or less in ratio, we may end at over 800 ppmv in 2100. Except for a massive switch to nuclear…

Sturgis Hooper
Reply to  Ferdinand Engelbeen
June 27, 2015 10:06 am

It seems improbable to me that over the next 85 years people will add an average of 4.7 ppm to the air when we’re currently only at 2 ppm.

Reply to  Sturgis Hooper
June 27, 2015 10:42 am

Sturgis,
Probably not in Western countries, but China is assumed to triple its CO2 emissions around 2030 and India may double in the same time frame… I suppose that it is also a matter of cost of fossil vs. cost of the alternatives where hydro and nuclear are the main large scale competitors.

Reply to  Sturgis Hooper
June 27, 2015 3:11 pm

Ferdinand,
Against China, I would set the US and other countries which have lowered their CO2 contribution by relying on natural gas instead of coal, and those benighted states which are trying to control their emissions at the cost of lives and treasure.
IMO it’s improbable that, even with China and India burning more coal and oil, the average annual increase in CO2 levels will climb from ~1.2 ppm in the 1958-2015 period (67 years) to 4.7 ppm during 2016-2100 (85 years), a near quadrupling. IMO the total should be less than for the most rapidly rising region, ie the forecast tripling in China.
IMO, 600 ppm by 2100 should be more like it. IIRC, Gosselin also computed that concentration as likely “peak CO2”. But of course no one knows the sink rate nor the future emission rate with any robustness, to put it mildly. IMO however, future CO2 growth rate is more likely to average 2-3 ppm per annum than the 4-5 ppm implied by a doubling by end of the century. The lower rate range yields CO2 levels for AD 2100 of 570 to 655 ppm rather than 800 ppm.
The 570 ppm level would of course be a doubling in 250 years from the 280-285 estimated for AD 1850.
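The arithmetic behind those ranges is simple enough to lay out; a back-of-envelope Python sketch (mine, starting from roughly 400 ppm in 2015):

```python
START_PPM = 400          # approximate CO2 level in 2015
YEARS = 2100 - 2015      # 85 years remaining in the century

def level_2100(ppm_per_year):
    """CO2 level in 2100 assuming a constant annual growth rate."""
    return START_PPM + ppm_per_year * YEARS

# 2-3 ppm/yr gives the 570-655 ppm range quoted above; ~4.7 ppm/yr is
# what a rise to ~800 ppm would require.
assert level_2100(2.0) == 570
assert level_2100(3.0) == 655
assert abs(level_2100(4.7) - 800) < 1
```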

Reply to  Sturgis Hooper
June 27, 2015 3:30 pm

Lance Wallace on WUWT in 2012 estimated a little more rapid CO2 growth than my simplistic arithmetic. I didn’t look at the actual decadal rates since 1958:
http://wattsupwiththat.files.wordpress.com/2012/06/image2.png

richard verney
Reply to  Sturgis Hooper
June 27, 2015 10:45 pm

The point is that with increasing amounts of manmade CO2 emissions, the carbon sinks are expanding and that is why the annual increase in CO2 is considerably less than the quantity of CO2 emitted by man on an annual basis.
The capacity of the carbon sinks in 1960 was less than their capacity today.
If that trend continues (i.e., the capacity of the carbon sinks today is likely to be less than it will be in 2050, etc.), and it has held for the past 50 years so there is no reason to presume it will suddenly stop, then even if China and India ramp up CO2 emissions by far more than the developed West cuts them, the increasing capacity of the sinks will mean that only a fraction ends up in the atmosphere.
It is difficult to see how a figure of more than 600 ppm can be reached by the end of this century, and it is likely to be less than that. It is probable that there will be about one full doubling from the 1800s level of CO2, ie., an increase from about 280ppm to about 560ppm.
Given that present observation suggests a sensitivity to CO2 so small that it cannot be measured using our best measuring devices, this increase in CO2 may add nothing to temperatures over and above that brought about by natural variation.

Reply to  Sturgis Hooper
June 28, 2015 12:55 am

Sturgis,
Indeed as usual, it is impossible to predict the future… Maybe you are right and fossil fuel use per capita in China and India will not increase as high as in the Western world, either by better efficiency and/or more alternatives…
The “air borne fraction”, the part of human emissions (in mass, not origin) remaining in the atmosphere, varies between 10-90% year by year and 40-60% decade by decade. Sink rates were highest 1975-1995 and 2000-current.
Richard,
CO2 sinks do expand with the increased CO2 pressure in the atmosphere. That behaves as a remarkably linear process: since 1960 (Mauna Loa), the increase in the atmosphere quadrupled and so did the net sink rate, of course with the usual temperature caused year-by-year and decadal variability.
The IPCC with their Bern model expects an increase in airborne fraction, as they assume that the deep oceans buffer gets saturated, but until now there is no sign of that and the uptake by vegetation doesn’t have a restriction at all.

Mike M.
Reply to  Ferdinand Engelbeen
June 27, 2015 11:39 am

Ferdinand,
What exactly is assumed by “business as usual”? Does it account for the fact that developed country emissions have leveled off? And that developing country emissions should eventually level off? And that population should level off, and perhaps even start falling after about mid-century?

Reply to  Mike M.
June 27, 2015 1:19 pm

The IPCC has different “scenarios”, which are based on growth and leveling off of population but an increase in energy use per capita in developing countries, where energy use is diverted – or not – from fossil fuels.
“Business as usual” is the scenario where there is practically no shift away from fossil fuel use.
The remarkable point is that until now the CO2 emissions – and the increase in the atmosphere – follow the IPCC scenario of maximum emissions, but temperatures follow the lowest scenario, where emissions leveled off after 2000.

Mike M.
Reply to  Mike M.
June 29, 2015 8:24 am

Ferdinand,
“Business as usual is the scenario where there is practically no shift away from fossil fuel use.”
I’ve just done a bit of checking and I think I must disagree with this. The IPCC does not seem to use the term “business as usual”. And whereas that term implies some sort of best estimate in the absence of concerted action, it seems that RCP8.5 is really a worst-case scenario, with a high population of 12 billion in 2100 and most additional energy coming from coal. See figures 12 and 14 at: http://www.skepticalscience.com/rcp.php?t=3#popgdp

gino
June 27, 2015 8:49 am

What is the physical mechanism that allows the atmosphere, which is assumed to be well mixed for this hypothesis, to radiate non uniformly?

Phillip Bratby
June 27, 2015 8:55 am

According to the K-T energy budget, only about a third of the energy absorbed by the earth’s surface comes from the sun. I have not found anybody who can explain what the source is of the other two-thirds of the energy absorbed by the earth’s surface. I sure can feel my skin absorbing energy from the sun when it is shining, but I have never noticed any other source of energy warming my skin.

Toneb
Reply to  Phillip Bratby
June 27, 2015 9:21 am

Philip:
All the energy comes from the Sun – just that some of it is back-radiated after being absorbed and re-emitted as IR by the ground. Like sitting next to a south facing wall in the sun LWIR comes back at you. So you have to factor in the fraction of that that is re-radiated back to the ground, and integrate as it does the same again, and again etc. There is also re-emission from direct absorption of insolation by the atmosphere.

Ian Macdonald
June 27, 2015 11:40 am

We don’t actually know that all the energy comes from the Sun. The Earth’s core is hot, and a certain percentage of surface heat must be from that source.

Phillip Bratby
June 27, 2015 11:49 am

What complete nonsense. I am heated directly by the sun. The wall is also heated directly by the sun. So the wall is not being additionally heated by something else again and again. It is heated once by the sun just as I am.
What a marvellous scheme it would be otherwise. Next time I light my woodburning stove I’ll see if I can get three times as much heat into the room by surrounding it with some magic compound. What would you suggest I should use?

jinghis
June 29, 2015 8:43 am

Phillip,
Just insulate the room and it is easy to get three times as much energy into the room. The better the insulator the more energy, simples.

July 2, 2015 8:12 am

Ian:
The heat flow from the earth through the surface is about 0.05 W/m^2. It varies, higher at hotspots like Hawaii and Iceland, lower in mid-continent. But that’s the ballpark. Quite a lot less than solar energy flux.

Reply to  Phillip Bratby
June 27, 2015 9:36 am

I agree with you, Phillip.
Not being a scientist I tend to listen and (hopefully) learn but there seem to be some very odd figures in here.
For a start the incoming radiation is 340 (watts, I assume) of which 180 – round figures – is either reflected at TOA or absorbed by the atmosphere. Yet 185 reaches the surface so there is a discrepancy to start with.
Then we’re told that 65 are “lost” to thermals and evaporation yet surface radiation accounts for 400.
It looks very much like pluck a number out of the air to support your hypothesis.
Like you I’m not aware of anything other than the sun that actually warms anything.

Mike M.
June 27, 2015 10:04 am

newminster,
The diagrams are not always as self-explanatory as they might be, but they do balance. For example, using the IPCC diagram (which I picked as the most legible), 340 in at TOA (units are W/m^2), 76 reflected by atmosphere, 79 absorbed by atmosphere, 185 reaches surface, 161 absorbed by surface, 24 reflected by surface making 100 total reflected. No discrepancies. For the surface, 503 total absorbed (161 from the sun, 342 from the atmosphere), 502 total up (398 emitted IR, 104 convection of sensible and latent heat). An imbalance of 0.6 (rounds to 1) due to net heating of the oceans. No discrepancies.
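Those sums can be audited mechanically; a bookkeeping sketch in Python (mine) of the IPCC (2013) figures as quoted:

```python
# Bookkeeping check of the IPCC (2013) budget figures quoted above
# (all values in W m^-2, global annual means).
toa_in = 340
reflected_atm = 76
absorbed_atm = 79
reaches_surface = 185
absorbed_surface = 161
reflected_surface = 24
back_radiation = 342
surface_ir_up = 398
convection_up = 104  # sensible plus latent heat

# Top of atmosphere: everything incoming is accounted for.
assert reflected_atm + absorbed_atm + reaches_surface == toa_in
# Of what reaches the surface, the remainder is reflected.
assert absorbed_surface + reflected_surface == reaches_surface
# Total reflected makes the 100 W m^-2 albedo term.
assert reflected_atm + reflected_surface == 100

# Surface balance: absorbed vs emitted plus convected.
surface_in = absorbed_surface + back_radiation   # 503
surface_out = surface_ir_up + convection_up      # 502
assert surface_in - surface_out == 1  # ~0.6 before rounding: ocean heating
```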

jinghis
June 27, 2015 9:10 am

I find all the energy budget cartoons rather amusing, as they should be.
Their basic mistake is averaging a square meter at the equator with a square meter at the pole. The square meter at the pole absorbs zero solar radiation and is continuously losing energy, whereas a square meter at the equator gains more energy than it loses, via radiation and evaporation.
The cartoons would be a lot more accurate and physically representative, if they showed the net energy flux by latitude.

MikeB
June 27, 2015 9:20 am

Well that might be more informative, maybe you could make it your project.
Then, when you have done that, you could average over all the square metres and summarise the results in a neat, easy to understand at a glance, cartoon.

Mike M.
June 27, 2015 10:11 am

jinghis,
Why stop there? You also have to distinguish by longitude, since emission, reflection, and cloud cover all vary with longitude. And of course, time-of-day and time-of-year. Oh, and absorption and emission in the atmosphere vary with height, so that should be included. All those details are in the models, so it is not like they don’t have the numbers.
I don’t see why they can’t put all that into a nice, easy to understand cartoon. Ridiculous.
Well, at least something is ridiculous here.

Go Whitecaps!!
Reply to  Mike M.
June 27, 2015 11:11 am

Hey Mike, wouldn’t it be easier to measure the energy coming directly from the sun and divide that by 4.😎

Reply to  Mike M.
June 27, 2015 12:03 pm

Mike M
“340 in at TOA (units are W/m^2), 76 reflected by atmosphere, 79 absorbed by atmosphere, 185 reaches surface, 161 absorbed by surface,” — Yep. With you so far.
“For the surface, 503 total absorbed (161 from the sun, 342 from the atmosphere),” — You just lost me.
161 absorbed by the surface from the sun. Since when was the atmosphere a heat generator? That’s my problem. How does the surface reflect (or radiate) more energy than it receives? Indeed more energy than the entire system receives.

jinghis
Reply to  Mike M.
June 27, 2015 12:57 pm

Mike M. – “Why stop there? You also have to distinguish by longitude, since emission, reflection, and cloud cover all vary with longitude.”
No, they don’t. One square meter on the equator is pretty much equal to any other square meter on the equator.
Perhaps you are confused by the difference between latitude and longitude?

Reply to  Mike M.
June 28, 2015 3:03 am

The proper distribution of spatial error in these estimates for our ovoid at varying inclination (not a sphere), with gridding mechanisms that date back to when detailed DEM/surface tomography was not available, is a significant problem. As is the mechanism for distributing and calculating clouds. None of this is going to be well represented in a cartoon.
I wish the various cartoons would simply represent the measured data differently from the calculated/hypothesized/modeled/guessed-at values.

Ian Macdonald
June 27, 2015 11:44 am

Or, averaging a square meter at night with one on the day side. We know from the Moon that day and night temperatures vary widely with no atmosphere and a slow rotation.

Frank
June 27, 2015 9:11 am

Lord Monckton wrote: “Two conclusions are possible. Either one ought not to use Eq. (1) at the surface, reserving it for the characteristic emission altitude, in which event the value for surface flux density FS may well be incorrect and no one has any idea of what the Earth’s energy budget is …”
One applies equation 1 to the temperature at the characteristic emission altitude, not the surface temperature. When applying equation 1, you are assuming that the earth behaves like a blackbody, so you need to use its blackbody equivalent temperature 255 degK, not its surface temperature. In doing so, we lump all of the non-blackbody behavior into “feedbacks”. Lapse-rate feedback controls whether warming at the surface will be greater or less than warming at the critical emission altitude. When we talk about the no-feedback climate sensitivity of 1 degK for a 3.7 W/m2 forcing from 2XCO2 (without your factor of 7/6), we then make the assumption that warming at both locations is the same. Studies with climate models give a no-feedbacks climate sensitivity of 1.15 degK, because the earth doesn’t have a uniform temperature and the average of T^4 is greater than (average T)^4.
When you do calculations that treat the earth as a blackbody, the most you SHOULD say is that after an instantaneous doubling of CO2, the planet will warm SOMEWHERE until the 3.7 W/m2 imbalance is eliminated. Nothing in a blackbody analysis says that all of the warming couldn’t occur only in the upper atmosphere or just in polar regions. The surface emits only about 10% of the photons that escape to space, so it certainly isn’t required to warm. However, if you ASSUME equal warming everywhere, the no-feedbacks climate sensitivity would be 1.0 degK at the surface. But by eliminating the possibility of lapse rate feedback, a disguised REQUIREMENT for equal warming everywhere has been imposed. This requirement doesn’t directly follow from the physics of blackbody radiation – it’s a function of what we choose to call a feedback.
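Frank’s point that the average of T⁴ exceeds (average T)⁴ on a non-uniform planet is easy to demonstrate; a toy two-box sketch (mine, with an arbitrary 220 K / 290 K split):

```python
# Two-box planet: a cold pole and a warm tropic, equal areas.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
temps = [220.0, 290.0]   # K

mean_t = sum(temps) / len(temps)              # 255 K
emission_from_mean = SIGMA * mean_t ** 4      # flux of a uniform planet
mean_emission = sum(SIGMA * t ** 4 for t in temps) / len(temps)

# The non-uniform planet emits more at the same mean temperature. This
# inequality is why model studies get ~1.15 K rather than 1.0 K for the
# no-feedbacks sensitivity, as noted above.
assert mean_emission > emission_from_mean
```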

June 28, 2015 9:46 pm

+1
The TOA is cold, and warming there does not dictate warming where we live. Warmistas talk about adding energy to “the system”, but they do not mention that it is the system’s five-miles-up region. I like your SOMEWHERE.

Frank
Reply to  Michael Moon
June 30, 2015 10:18 am

Michael: I’d like the “somewhere” that warming occurs to be mostly 5 miles above the surface, but I can’t convince myself that it will be. The drop off in temperature with altitude – the lapse rate – is controlled by the rate heat is convected upwards. A high lapse rate (rapid temperature drop with altitude) promotes more convection, but the convected heat increases the temperature of the upper atmosphere, reducing convection. So it is unlikely that all warming can occur high in the atmosphere, because that would reduce convection and leave the surface warmer. No simple calculation yields the average observed lapse rate, so we can’t predict from simple physics how the lapse rate will change.
Fortunately, increasing absolute humidity – a feedback – is certain to decrease the lapse rate, so the GHE from increased water vapor is expected to be partially offset at the surface by a falling lapse rate. In other words, there will be more warming higher in the atmosphere than lower.
Saying that warming must occur SOMEWHERE shouldn’t be taken to imply it won’t happen at the surface of the earth. It is meant to remind us that calculation of a no-feedbacks climate sensitivity at the SURFACE from blackbody considerations requires ASSUMING that warming will be equal everywhere.

Roy W. Spencer
June 27, 2015 9:13 am

Gino, the atmosphere is not well mixed in absolute terms of water vapor and CO2 content, the main gases which absorb and emit IR. Lower atmosphere contains a larger absolute amount of these gases than the upper atmosphere. THEN…the temperatures are vastly different, and the IR emission is a strong function of temperature (but the IR absorption much less so).

Sturgis Hooper
Reply to  Roy W. Spencer
June 27, 2015 10:09 am

Also, water vapor is not well mixed at all. It’s concentrated at lower latitudes.

Reply to  Roy W. Spencer
June 27, 2015 11:20 am

Dr. Spencer,
Water vapor is widely variable in the atmosphere and rapidly reduces with pressure, but CO2 is quite well mixed: +/- 2 % of full scale from the North Pole to the South Pole, everywhere over the oceans and above a few hundred meters over land up to over 30 km height, except for a lag from NH to SH and near ground level to high altitudes, as human emissions are mainly in the NH at ground level. See:
http://www.nature.com/nature/journal/v288/n5789/abs/288347a0.html
A difference of 7 ppmv between tropopause and 33 km height on a level of 400 ppmv…
The highest variability is in the first few hundred meters over land, where there are lots of near-ground sources and sinks at work. But even if the first 1,000 meters were at 1,000 ppmv, that would have hardly a measurable influence on the radiation balance (per Modtran)…

Roy W. Spencer
Reply to  Ferdinand Engelbeen
June 27, 2015 2:01 pm

yes, which is what I was alluding to.

June 27, 2015 9:30 am

‘Sins of emission’ is very good.

Mike Maguire
June 27, 2015 9:49 am

It’s great that we can do all these diagrams and math to represent the earths energy budget. They are definitely useful in understanding the overall climate system/radiation balance. Probably most of the guesstimates are very close, many are right on the money. However, the amount of certainty in some and the final estimate, represented as the climate sensitivity for instance, is greatly exaggerated vs the reality of uncertainty.
Wouldn’t it be great if we did have all the correct equations to accurately represent all processes and only had to plug in all the accurately measured values to come out with the precise solutions to tell us what we need to know………out to the year 2100.
I lost some of my math skills from 35 years ago, when learning atmospheric science. However, I have gained much more from observing the global atmosphere since then. What is clear is the false illusion, held by some (more educated and better at math than me), that representing the atmosphere with mathematical equations has provided them with the power to project beyond what is realistic… because you can’t have absolute certainty when several elements that contribute towards your product have (not clearly defined) uncertainty.

Menicholas
Reply to  Mike Maguire
June 27, 2015 10:11 am

I am with you here Mike. I got excellent grades in several semesters of engineering calculus, but have little ability to remember much of it now.
My take is a little different on why these equations seem unlikely, to me anyway, to settle much of anything… at least not at the present state of understanding.
At every single place I have ever witnessed discussions of radiation physics and the math involved in all of these calculations, I have never seen a single one which did not have various individuals, or factions of individuals from various disciplines, many of them apparently highly knowledgeable on both sides, arguing vociferously and with great acrimony over disagreements large and small about nearly every single aspect of every part of the process.
I find it unlikely, under these circumstances, that there is even one single person who has everything figured out correctly.
Maybe there is, but I have yet to see evidence of such a shining beacon of calculative and physical genius in our midst.
Murray Salby sure seems convincing, but so do many others.

June 27, 2015 11:42 am

“Like”

June 27, 2015 9:55 am

Christopher
I can’t answer your question. However, I’d like to mention something which I believe has a bearing on it. Using average TSI throws off the scientists’ inputs for the radiative imbalance. It relates to your equation (4).
Let me begin by stating something that I’m sure you and many others know: using the assumed average TSI value of 1366Wm^-2 is a short cut which is supposed to average out the NH and SH summer variations in TSI of 1323Wm^-2 and 1413Wm^-2 respectively. This difference is due to the elliptical nature of the Earth’s orbit. Everyone accepts this averaging as being a reasonable expedient thus saving the labour of doing 365 daily calculations of equation 4 with different TSI inputs.
However, by conflating the average TSI figure with an equation that is derived from the T^4 element in the SB laws, it ignores the greater emissivity of the Earth’s surface during the SH summer months due to experiencing a higher equilibrium temperature. This won’t be offset by the lower emissivity of the NH summer months because the equilibrium temperature at both points in the Earth’s orbit is calculated using a T^4 input. Therefore, the surplus emissivity in the SH summer (over and above the average using the average TSI value in equation (4)) more than cancels out the deficit in emissivity during the NH summer. This in turn means that there is a seepage of heat flux during the SH summer that isn’t being accounted for in equation (4). Indeed, it is also the case for that entire half of the orbital ellipse which contains the SH summer though it’s much less marked towards the equinoxes.
The daily TSI readings are adjusted to 1 AU, so that at the height of the SH summer the higher figure of around 1413 Wm^-2 (depending on what the sun is doing that day) is scaled down to 1366 Wm^-2 using the 1/r^2 orbital-radius ratio, and any daily fluctuations of a few tenths of a watt are carried through and mirrored in the final daily figure. That then gets averaged with the other daily values to 1366 Wm^-2. If I recall, the measurements are actually 6-hourly. This means that the raw data is not being used in equation (4), and consequently the surface emissivity, when averaged over the year, comes out too low, because the disproportionately higher SH summer emissivity is not accounted for, due to the T^4 element. The result is that the calculated average surface temperature is too high and should be commensurately less.
There is a confounding issue here, which is that the Earth travels faster round the sun during the SH summer and so experiences the higher TSI values for a shorter time. However, orbital speed varies as the square root of 1/r, giving rise to a circa 3.3% variation in orbital speed over the year, while TSI varies by circa 6.4% due to the 1/r^2 element. Since the total energy received by the Earth’s disc is the integration of TSI over time, and the time spent on any arc of the orbit is inversely proportional to speed, it can be seen that the 3.3% speed-up during the SH summer isn’t enough to offset the 6.4% increase in TSI at that time. So, although the speeding up of the Earth dampens the extent of the surplus emissivity that’s unaccounted for, it by no means cancels it out.
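Scute's point about the T^4 element can be illustrated with a toy single-cell calculation. This is a sketch only: it assumes an albedo of 0.3 and uses the two seasonal TSI extremes quoted above (it does not reproduce equation (4) of the head post). Because T varies as the fourth root of flux, the temperature of the mean flux is not the mean of the temperatures:

```python
SIGMA = 5.67e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
TSI_MIN, TSI_MAX = 1323.0, 1413.0    # aphelion/perihelion TSI quoted above, W m^-2

def eq_temp(tsi, albedo=0.3):
    """Effective radiating temperature for a given TSI in a single-cell model."""
    absorbed = tsi * (1.0 - albedo) / 4.0   # spread the incoming flux over the sphere
    return (absorbed / SIGMA) ** 0.25

# Temperature computed from the averaged flux...
t_of_mean = eq_temp(0.5 * (TSI_MIN + TSI_MAX))
# ...versus the average of the temperatures at the two orbital extremes
mean_of_t = 0.5 * (eq_temp(TSI_MIN) + eq_temp(TSI_MAX))

print(t_of_mean, mean_of_t, t_of_mean - mean_of_t)
```

In this crude model the gap (an instance of Jensen's inequality, since T is a concave function of flux) is only a few hundredths of a kelvin; whether such effects matter at the precision claimed by the budget diagrams is exactly the kind of question being asked here.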

David A
June 28, 2015 5:47 am

scute says…”However, by conflating the average TSI figure with an equation that is derived from the T^4 element in the SB laws, it ignores the greater emissivity of the Earth’s surface during the SH summer months due to experiencing a higher equilibrium temperature”
==================================================================
This is not in fact evident, as far as I know. The atmosphere cools in the SH summer, despite this greatly increased input. And yes, the albedo increases in the NH. But the solar input into the world’s oceans massively increases, and that energy is lost to the atmosphere for a time as well; unlike the NH albedo loss, though, it is still within the Earth. In my view there is much to be learned from the annual energy pulse.

June 27, 2015 10:04 am

Even if, for the sake of argument, one allows the IPCC warming forecast of 4degC by 2100 due to man-made CO2, then, thanks to Count Rumford (Benjamin Thompson) and his famous experiment helping to establish the identity Energy = Work = Quantity of Heat (all measured in joules, J), it is evident that in comparison to the solar power reaching the Earth, the CO2 effect is about as trivial as the effect a flea jumping off the equator in an easterly direction would have, according to Newton, in retarding the Earth’s rotation, when compared with solar and lunar gravitational effects. Unless, of course, anyone can show me where I may have gone wrong at http://tinyurl.com/ot2hlp4

Rex
June 27, 2015 10:37 am

“Should it be ‘I ask only because I want to know’ and not ‘I only ask because I want to know’?”
Yes: it should be “I ask only because I want to know.”
Whoever penned the other, like 99% of those writing today, has
no idea where to put the word ‘only’: like most submissions to WUWT.
They will say something like “The earth will only heat up when such
and such conditions obtain”, when what they really mean to say is
“The earth will heat up only when such and such conditions obtain.”
& etc ad nauseam.

June 27, 2015 10:49 am

Monckton: I just watched you, on YouTube, ask Salby a question about quantifying convective (evaporative) cooling at his March presentation. Here is an answer from Trenberth; see below Fig 2 at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
http://3.bp.blogspot.com/-ZBGetxdt0Xw/U8QyoqRJsWI/AAAAAAAAASM/ewt1U0mXdfA/s1600/TrenPPT.png

June 27, 2015 11:01 am

To quibble over the mathematics, one first has to accept the central premise of the diagrams, which is that the majority of the atmospheric surface temperature comes from heating of the surface itself. I reject this premise, so all the mathematics that follows is meaningless. It should be pointed out that the mathematics has been worked out on the assumption that the central premise is correct, with variables adjusted until the outcome matches observation. But there is no experimental evidence that can verify this premise in the first place. In short, the diagrams are nothing more than illogical assumptions with no basis in reality. There is no Greenhouse Effect.
It is the mass of the atmosphere itself that causes temperatures higher than those of a planetary body with no atmosphere at an equal distance from the sun. A conclusion that should be self-evident when one observes temperatures on other planetary bodies with gaseous atmospheres in our solar system.
Still, if you persist with the paradigm, then a simple graph with CO2 levels on the x axis and global mean temperature on the y axis, plotted once for 1850 and once for today, gives you three different types of mathematical trajectory connecting the two dots. All three, if you extend the line along each trajectory in both directions, will disprove some aspect of the Global Warming hypothesis. Try it for yourself, and then reread both the claimed CO2 contribution to the greenhouse effect as a whole and the future temperature predictions for the 560 ppm mark.

Leonard Weinstein
June 28, 2015 3:54 pm

wickedwenchfan, you (and some others) continue to show your lack of understanding of the difference between the lapse rate (due to gravity, the specific heat of the gases in the atmosphere, and possible phase change of vapour to liquid/solid), which produces a temperature GRADIENT, and the cause of the actual temperature levels along that gradient. The actual temperature level is set by the energy balance at some average effective altitude of radiation to space. The change in effective altitude of radiation to space, due to the change in the quantity of optically absorbing gases (or particles), is the basis for the change in the atmospheric greenhouse effect.

rgbatduke
June 27, 2015 11:03 am

I’ll tell you why answering your question (or trying to answer it) would give me a headache. Count the assumptions and approximations that go into each and every line of the algebra. Here is a partial list:
* Unit emissivity — the emissivity of the Earth’s surface (a) is not constant in either space or time; (b) varies with wavelength (one really has to do an integral to compute emissions/absorptions even for very simple systems); and (c) does not have an average value of 1 (averaged over space, time, and wavelength).
* Assuming some “average” value for top of atmosphere insolation. TOA insolation varies by 91 watts per square meter over the course of the year, peak to peak, as the Earth orbits with its current eccentricity. It spends longer out where it is farther away, so TOA insolation isn’t a nice, symmetric sine function with a mean in the middle of the peaks.
* Surface homogeneity of other parameters like albedo (which, like emissivity, is not constant in space, time or wavelength). Again, this almost certainly matters, as it is necessary to assume a peculiar inversion in order to explain why the Earth is coldest at perihelion (southern hemisphere summer) and warmest at aphelion (northern hemisphere summer). In southern hemisphere summer, TOA insolation is well over 1400 W/m^2 — a “forcing” of roughly 45 W/m^2 compared to your assumed mean, with NH summer insolation down in the ballpark of 1325 W/m^2 (being very generous with the subtraction). Note that albedo comes off the top in a single-layer model — if you want to look for a parameter that has a direct, immediate effect on the climate, albedo has to be close to number 1, as the total radiation that one has to balance is the TOA insolation times $(1 - \alpha)$. Increase $\alpha$ (which, again, is not constant in space, time or wavelength) by just a tiny amount and, because TOA insolation is LARGE, you drop it by rather a lot.
* Then there is the dazzling detail in these figures. They have energy going up, going down, and the numbers are always for the entire planet. Have we a plausible way of measuring latent heat transfer as a function of height, integrated over the entire planetary surface, over a long enough period of time, over a spanning set of the other hidden variables in the system (things like the decadal oscillations, where to average over them would properly take centuries of detailed observations in depth) to be able to pin it down to within a percent? Two percent? Ten percent? And what are the assumptions that permit one to make ANY choice between these (Bayes theorem requires us to weaken any conclusions we draw according to the certainty of these assumptions).
One cannot plausibly solve the Navier-Stokes equations for the Earth at a resolution of 100x100x1 km^3 and expect to get a good answer for the climate decades into the future. One cannot solve the NS equations better by turning the entire planet into a single cell with an assumed cross-section to the sun of $\pi R^2$, an assumed homogeneous radiating surface area of $4 \pi R^2$, an assumed uniform constant average TOA insolation, an assumed unit emissivity, and an assumed constant albedo, when the real albedo looks more like these graphs:
http://www.climateforcing.info/Forcing/Forcing/albedo.html
Even if you google it up, you see estimates that range from 0.31 to 0.39. Let’s see:
P_1 = 1366(1 – 0.31) = 942.5 W/m^2 (TOA)
P_2 = 1366(1 – 0.39) = 833.3 W/m^2 (TOA)
P_1 – P_2 = 109.3 W/m^2
This gives one a small idea of the silliness of the entire enterprise. A change of 0.01 in albedo — roughly 3% — is equivalent to a change of roughly 14 W/m^2 in “average” TOA insolation. CO_2, in contrast, is estimated to be on the order of 1-2 W/m^2. Albedo changes are local, not global, and would have an amplified effect in the tropics.
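The back-of-envelope numbers above check out; a few lines of Python reproduce them (the TSI of 1366 W/m^2 and the 0.31–0.39 albedo range are the values quoted in the comment, not measurements):

```python
# Reproducing rgb's albedo arithmetic with the values quoted in the comment.
TSI = 1366.0   # assumed mean TOA insolation, W m^-2

def absorbed(albedo):
    """TOA flux surviving reflection (rgb's P values: no division by 4)."""
    return TSI * (1.0 - albedo)

p1 = absorbed(0.31)                               # ~942.5 W/m^2
p2 = absorbed(0.39)                               # ~833.3 W/m^2
per_hundredth = absorbed(0.30) - absorbed(0.31)   # effect of 0.01 of albedo

print(round(p1, 1), round(p2, 1), round(p1 - p2, 1), round(per_hundredth, 2))
```

The last figure, 13.66 W/m^2 per 0.01 of albedo, is the "roughly 14 W/m^2" in the text.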
Some very good questions are then — how accurately can we measure “global average albedo”? How variable is it? Does it have long term variability on top of short term variability? Is it part of a general nonlinear feedback process? Does it vary, nonlinearly, the same way in response to changes in the climate system at different points in space and time? And the big one:
Can we predict the albedo one, ten, a hundred, a thousand months into the future? Can we even predict the variation in the average albedo, whatever that means?
If we don’t know the albedo within 1%, and if we cannot predict the future evolution of the albedo within 3%, then using any single value of it in a computation that also assumes a constant TOA insolation and a planet with unit emissivity etc is not going to make much sense, is it?
If your only purpose is to show that there is sufficient uncertainty in climate science that the total climate log sensitivity to CO_2 could be anywhere from barely positive to 2-3 C — mission accomplished. But I don’t think your question has an answer outside of that.
rgb

June 27, 2015 1:19 pm

Rgb, why do you think it has to be positive? How much is your “barely positive”?

mothcatcher
June 27, 2015 2:10 pm

Thanks, rgb.
I’ve been really struggling with Lord Monckton’s equations (never really mastered maths; I keep going back to remind myself of the terms, etc.) and trying to understand the consequences, and suddenly you put it into a proper perspective. A few earlier commenters, such as Joe Born and ‘Frank’, certainly have relevant things to say, but I’m really relieved you came in! Proof or disproof of CAGW by circuitous appeal to S-B is actually getting pretty tedious, and I don’t think we are going to learn anything by that route.
Please keep those posts coming – saves my brain hurting too much

June 27, 2015 2:12 pm

“does not have an average value of 1”
Trenberth says:
“The surface emissivity is not unity except perhaps in snow and ice regions, and it tends to be lowest in sand and desert regions, thereby slightly offsetting effects of the high temperatures on longwave (LW) upwelling radiation. It also varies with spectral band (see Chédin et al. 2004 for discussion). Wilber et al. (1999) estimate the broadband water emissivity as 0.9907 and compute emissions for their best estimated surface emissivity versus unity. Differences are up to 6 W m-2 in deserts, and can exceed 1.5 W m-2 in barren areas and shrublands. “
“TOA insolation varies by 91 watts per square meter over the course of the year, peak to peak”
They are calculating a “global annual mean energy budget”. They calculate the annual total. It adds up.
“Have we a plausible way of measuring latent heat transfer…”
Yes, and a very simple one. What goes up must come down. Total rainfall is quite well known.
“One cannot solve the NS equations better by turning the entire planet into a single cell “
They are not solving the Navier-Stokes equations.
‘how accurately can we measure “global average albedo”’
Global albedo is primarily measured using CERES and ERBE. This caused them to correct their 1997 figure from .313 to .298.

Reply to  Nick Stokes
June 28, 2015 10:07 pm

Nick Stokes continues to defend the indefensible. Let’s talk albedo: rocks, sand, dirt, vegetation in all its wondrous and beautiful variety, pavement, roofs, fresh water, salt water, ice, snow, waves, clouds, each with dozens to thousands of varieties. The satellites give us a value, which was wrong originally and is still wrong. The albedo of the Earth varies minute by minute and acre by acre. Sure, tell me you know what it is…

Frank
June 28, 2015 2:57 am

RGB: IMO, you exaggerate the difficulties. dF/F = 4*(dT/T) allows one to calculate small changes in F and T without worrying about emissivity. A 1% error in albedo, and therefore in F, is a 1.5% error in dT. The error caused by assuming a uniform temperature is real, but modest (15%). The elliptical orbit is worth 91 W/m2 in terms of irradiance, but only 23 W/m2 averaged over the whole surface of the earth, and 16 W/m2 post-albedo. So post-albedo radiation is about 240 +/- 8 W/m2 during the year. Using an annual average introduces negligible error. Temperature-dependent changes in albedo are cloud and ice-albedo feedbacks, and not relevant to no-feedback climate sensitivity. N-S is only needed if you want to know where warming will occur, not the average warming without feedbacks.
Most of the error in calculating a no-feedback CS of 1.0 K comes from a non-uniform temperature being raised to the fourth power. A reasonable estimate of the uncertainty might be +25% to -10%. All climate models agree with 1.15 K to within +/-1%, though that doesn’t include systematic errors. The biggest problem with treating the earth like a blackbody and calculating a no-feedback climate sensitivity does not come from uncertainty; it comes from a lack of understanding of the assumptions being made in the calculation. For example, Lord Monckton doesn’t understand why T should be 255 K instead of 288 K.
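Frank's relation dF/F = 4·(dT/T) is the same identity, rearranged, as the ΔT/ΔF = T/4F derivative in the head post. A quick numerical check (a sketch assuming unit emissivity and a surface flux of 390 W m^-2, the lower end of the "consensus" range quoted above):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def temp_of_flux(f):
    """Invert F = sigma * T^4 (unit emissivity assumed)."""
    return (f / SIGMA) ** 0.25

F = 390.0                      # W m^-2, lower end of the "consensus" range
T = temp_of_flux(F)            # comes out close to 288 K
analytic = T / (4.0 * F)       # the T/4F first derivative from the head post
h = 1e-4
numeric = (temp_of_flux(F + h) - temp_of_flux(F - h)) / (2.0 * h)  # central difference

print(T, analytic, numeric)
```

The central-difference estimate and T/4F agree to many decimal places, which is all the demonstration in equation (2) of the head post claims.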

cba
July 2, 2015 7:49 am

Hi Bob. It’s been almost a year since I had time to work on a little project on the subject, but a good example of what is going on with our system is that, presently, the SH receives an average of around 3-5 W/m^2 more power in a year than the NH, yet the NH is almost a degree C warmer on average (this is mostly from satellite database info). It should be obvious that all those essentially unknowable little details you mentioned give substantially different values, of the same order as the CAGW effect that people are attempting to measure. If one naively used the NH vs SH values from this, the conclusion would be that an increase of 4 W/m^2 causes a decrease in T of nearly 1 deg C. LOL.
One thing I cringe at is the attempt to use the feedback equation. It might be somewhat useful, but I think the system is way too messy to yield anything reliable. Breaking it down as a direct delta T effect from CO2, and then seeing what additional delta F can occur due to that delta T, can be instructive. An increase in absolute humidity is about the only positive feedback capable of causing a radical increase in T. It’s easily shown that CO2 by itself can increase T by less than 1 deg C for a doubling. With constant relative humidity, the increase in absolute humidity is far from a doubling for this (not even for a 5 deg C increase), which means the CAGW crowd’s major helper isn’t going to be nearly enough. Basically, their case actually hangs on the notion that an increase in water vapor and evaporation will somehow result in less cloud formation and lower albedo.

whiten
June 27, 2015 11:09 am

Very interesting point.
But, as far as I can tell, in principle it does not really matter what formula, equation or method is used to calculate or estimate what is called climate sensitivity: so long as the “scientific” definition of it is plainly wrong, and does not actually accord with any equation or method of calculation, any number assigned to the CS is wrong in principle.
According to the CS definition, whether CS ~3C as per the IPCC or CS ~0.7K as per Lord Monckton, there is an unavoidable accumulation of heat in the system.
So in both cases, which actually represent two different systems, there will be an ever-increasing store of energy in the system. If, for example, with CS ~3C there is ~1.6C of continuing heat accumulation for every CO2 “doubling”, whenever that happens, then in Lord Monckton’s case it is ~0.4C; and even in the Monckton case the heat accumulation poses much the same problem, of much the same relative magnitude, for the “energy balance” books of such a system as it does in the IPCC case.
A ~0.4C heat-accumulation anomaly for a system with a CS of ~0.7K is as problematic as a ~1.6C anomaly for a system with a CS of ~3C, or a ~0.8C anomaly for a system with a CS of ~1.5C.
So in principle they are either all wrong or all right, regardless of the actual CS number.
Or, to put it another way: in principle, all AGW scenarios, whether catastrophic, mild or benign, must all be right or all be wrong, in accordance with the one principle hammered into the CS definition, in a very AGW manner.
I don’t know whether that helps Lord Monckton with his question, but he may at least consider the above when deciding which of the AGW scenarios he favours.
I have a hard time seeing how anyone can have the courage to approach and try to understand or explain the intricacy of the Earth’s energy budget (in climatic terms) through the CS angle, when the very definition of the CS violates the very principles and essentials of the methods, formulas and equations used to calculate it. But maybe that is just me, probably missing something here! (hopefully not my mind 🙂 )
Lord Monckton, no disrespect meant, honestly; I still very much like your CS estimate. 🙂
Apart from all this, I do really appreciate and value your courage shown through years now, in this particular issue.
Cheers

Mike
June 27, 2015 11:24 am

Like any budget, the Earth’s energy budget is supposed to balance. If there is an imbalance, a change in mean temperature will restore equilibrium.

Ah, so you’ve finally accepted what I said when discussing your ‘scibull’ paper, that the Planck feedback is a feedback and should be treated like one. Since it is THE feedback that keeps the planet stable it must be viewed that way.
You have come up with a simple but effective article. The only defect I can see is that it relies heavily on gross assumptions and ballpark figures, for example about albedo. I also doubt that taking spectral emissivity to be 1 across the spectrum is realistic.
How would a somewhat lower figure affect the results?
Also, since I’m sure you’d want to get the terminology correct you should correct the following:
“where F is radiative flux density in W m–2”
That is the total flux leaving the body, not the flux density. There is no surface area, shape or dimension implicit in that formula.
Good article.

Monckton of Brenchley
June 28, 2015 9:07 am

No, the Planck “feedback” is not treated in the climate-sensitivity equation in the same way as the true feedbacks.

Mike M.
Reply to  Monckton of Brenchley
June 28, 2015 6:39 pm

“No, the Planck “feedback” is not treated in the climate-sensitivity equation in the same way as the true feedbacks.”
There are two ways to do it, either treating all the responses the same (in which case they are all called feedbacks, as in IPCC AR5) or by splitting out the Planck (Stefan-Boltzmann) response and calling the rest feedbacks. Some authors use one, and some the other. Mathematically identical, but semantically confusing.
Doing it the first way, the total feedback must be (and is) net negative for a stable system. Doing it the second way, the total feedback must be less than unity for a stable system. In that case positive or negative feedback means a temperature change greater than or less than the Planck response.
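Mike M.'s point that the two bookkeeping conventions are mathematically identical can be shown in a few lines. All the numbers below are illustrative assumptions, not measured values: a Planck response of 3.2 W m^-2 K^-1, the canonical CO2-doubling forcing of 3.7 W m^-2, and an arbitrary sum of non-Planck feedbacks.

```python
LAMBDA_PLANCK = 3.2      # W m^-2 K^-1, magnitude of the Planck response (assumed)
OTHER = 1.2              # W m^-2 K^-1, assumed sum of the non-Planck feedbacks
FORCING = 3.7            # W m^-2, CO2-doubling forcing (assumed)

# Convention 1 (AR5 style): the Planck response is just another feedback;
# stability requires the net feedback parameter to remain positive.
dT_all_feedbacks = FORCING / (LAMBDA_PLANCK - OTHER)

# Convention 2: split out the Planck response and express the rest as a
# dimensionless gain f; stability requires f < 1.
f = OTHER / LAMBDA_PLANCK
dT_split = (FORCING / LAMBDA_PLANCK) / (1.0 - f)

print(dT_all_feedbacks, dT_split)
```

The two conventions give the same ΔT to floating-point precision, which is exactly the "mathematically identical, but semantically confusing" point.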

Monckton of Brenchley
Reply to  Monckton of Brenchley
July 2, 2015 11:54 pm

In the climate sensitivity equation, the initial forcing and, separately, the sum of the true feedbacks are multiplied by the Planck parameter, and by that parameter alone, not by any of the true feedbacks. For a discussion, see Roe (2009).

TRBixler
June 27, 2015 11:33 am

While the math may be interesting, what is often left out are the assumptions. The primary assumption is the well-mixed assumption. The primary gases (nitrogen and oxygen) probably are well mixed. The others, my guess is, are not well mixed at all. This casts considerable doubt on the use of the log function for some of the “greenhouse” gases, when indeed we do not know their actual concentrations in each instance.

June 27, 2015 11:39 am

Whiten You say
“as far as I can tell, in principle, it does not really matter what formula equation or method used for the calculation or a mathematical estimation of what is called climate sensitivity, for as long as the “scientific” definition of it is plainly wrong, and actually does not even play right and in accordance with any equation or method of such a calculation, any number assigned to the CS is wrong, in principle.”
I agree – but surprisingly so does the IPCC which has itself now given up on estimating CS – the AR5 SPM says ( hidden away in a footnote)
“No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies”
but paradoxically they still claim that we can dial up a desired temperature by controlling CO2 levels. This is cognitive dissonance so extreme as to be crazy.
For a complete discussion of the inutility of the GCMs in forecasting anything or estimating ECS see Section 1 at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
The same post also provides estimates of the timing and amplitude of the coming cooling, based on the 60-year and especially the millennial quasi-periodicity so obvious in the temperature data, and using the neutron count and 10Be data as the most useful proxies for solar “activity”.

Menicholas
Reply to  Dr Norman Page
June 27, 2015 2:52 pm

Thank you Dr. Page, RGB at Duke, Whiten (and many others above) for these comments.
I too very much appreciate and support the courage and efforts of Lord Monckton, but the points you all bring up contain many ideas that float around in my head in some form; I am usually not concise or eloquent enough to verbalize them and jot them down coherently.
Reading this article and the comments makes me feel much better about what I think I know, and about what I am sure no one really “knows”.
Everything I read from the CAGW point of view, or from those who believe those who preach it, makes me feel very bad (at turns angry, dismayed, incredulous, fatalistic, mirthful, and sometimes physically ill), every time, and these feelings seem to be getting worse.
Where is it going to end? So many are actually doubling down on the lunacy, and so many of these have great influence and power.

poitsplace
June 27, 2015 11:41 am

RE:WARMING IN THE PIPE
If you do the math, you’ll find that it would take over 500 years at the current “energy imbalance” to raise the temperature of the oceans by just 1C. And even then, the difference between the ocean temperature and the average surface temperature would still be about 15C (making little difference to the heat absorbed). Because of this, with respect to anthropogenic emissions, the ocean is effectively a bottomless pit of thermal storage.
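The "over 500 years" figure is easy to sanity-check with round numbers. Every value below is an assumed approximation, not a measurement, and the whole imbalance is generously sent into the ocean:

```python
OCEAN_MASS = 1.4e21        # kg, rough total mass of the oceans (assumed)
CP_SEAWATER = 4.0e3        # J kg^-1 K^-1, rough specific heat of seawater (assumed)
IMBALANCE = 0.6            # W m^-2, an assumed TOA "energy imbalance"
EARTH_AREA = 5.1e14        # m^2, surface area of the Earth
SECONDS_PER_YEAR = 3.156e7

energy_per_kelvin = OCEAN_MASS * CP_SEAWATER   # joules to warm the whole ocean 1 K
power_in = IMBALANCE * EARTH_AREA              # watts, all of it into the sea
years_per_kelvin = energy_per_kelvin / power_in / SECONDS_PER_YEAR

print(round(years_per_kelvin))
```

With these inputs the answer lands in the several-centuries range, consistent with the comment's claim; a different assumed imbalance scales the result proportionally.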

mothcatcher
June 27, 2015 2:14 pm

Only if you assume we have to heat the whole volume of the ocean before atmospheric effects become significant

VikingExplorer
June 28, 2015 10:06 am

In the words of Darth Vader, “Nothing can stop that now.” The heat exchange between the warm surface and the rest of the ocean is far more effective than that between surface and air. To claim otherwise is to claim that the 2nd Law is violated (see here).
Even if hot surface-water temperatures did raise atmospheric temperatures, it would be temporary. As the surface energy spreads throughout the ocean, temperatures would drop. There is no getting around the fact that a steady-state increase is extremely difficult to pull off without many orders of magnitude more energy.

whiten
June 29, 2015 12:31 pm

VikingExplorer
June 28, 2015 at 10:06 am
Unless I misunderstand your point above, the only violation of the 2nd law you mention lies in the interpretation of the given event, as in your case.
There is actually no violation of the 2nd law, unless finding one would satisfy and please one’s beliefs and predetermined world view.
When energy (heat) moves from the oceans to the atmosphere, both the surface and the atmosphere are expected to show a warming signature, while in the meantime the CO2 emissions go up.
A significant discrepancy here, like the one in question lately, where the warming signature of the surface is considerably higher than that of the atmosphere, does not necessarily mean any violation of the 2nd law; indeed, by relying on that very law you may have the answer to it.
If the atmosphere is losing energy (heat) to outer space, then in this particular case the surface will show warming for a while, even as the atmosphere shows no warming at all.
That actually explains why the CO2 emissions keep going up with no observed atmospheric warming, and why the heat is, and must be, moving from the oceans into the atmosphere (and out to the deep freezing space).
One’s beliefs and world view cannot actually violate or break such a law.
cheers

Somebody
June 27, 2015 11:43 am

“If there is an imbalance, a change in mean temperature will restore equilibrium.” There is no equilibrium. The Earth is a system that is not at equilibrium (a rotating body with the Sun close by and cold space around it cannot be): a dynamical, complex, nonlinear system that evolves far from equilibrium. There is not even a dynamical equilibrium. And mean temperature has nothing to do with equilibrium anyway; it is an unphysical quantity. For a system that evolves towards equilibrium, other mechanisms drive it; mean temperature is not among them.

VikingExplorer
June 28, 2015 9:50 am

Exactly. poitsplace and Somebody are exactly correct.

Like any budget, the Earth’s energy budget is supposed to balance. If there is an imbalance, a change in mean temperature will restore equilibrium.

No comment on the overall point, but the statement above is incorrect or misleading because it is missing two things: (1) any hint that this is a transient phenomenon of unspecified time scale, and (2) any reference to the size of the reservoir.
An imbalance simply implies that the energy level of the Earth will change, and that change will be reflected in some internal temperature, phase change, or work performed. Mercury is still warming up, and Jupiter is still cooling down.
Consider Lake Erie, where the water level is determined by water coming in (Lake Huron, rain, rivers) minus water going out (Niagara Falls, evaporation). Most of these processes are not directly dependent on the water level.
The inflow from Huron depends on the level of Huron minus the height of the land blocking the flow. The outflow depends on the level of Lake Erie minus the height of the land blocking the flow. An increase in water also increases surface area, increasing evaporation.
Lake Erie may be slowly evolving from a lake to a river that flows from Lake Huron to Lake Ontario. However, there is no physical law of equilibrium that is striving to restore balance.
In this analogy, claiming that increasing CO2 would raise the temperature of Earth would be equivalent to claiming that blocking off a small part of the American Falls would raise the level of Lake Erie.
Even IF man were raising the radiative resistance slightly, there are several possible consequences:
a) The incoming radiative resistance is also raised (20 – 24% of TSI and a majority of near infrared radiation is absorbed by the troposphere).
b) The extra energy could cause additional phase change or work to be performed (additional emergent phenomena as Willis describes it).
c) It could simply result in a warmer troposphere. Any down-welling radiation from this warmer troposphere is simply reducing how much the troposphere is warmed. The TOA is thus warmer than it otherwise would be, and radiates more to space.

June 27, 2015 11:47 am

To me, at least, the matter seems uncertain. (res mihi quidem uidetur esse incerta.)

geran
Reply to  Steinar Midtskogen
June 27, 2015 2:46 pm

A certain certainty has been posited. (positum est certitudo quaedam.)

Menicholas
Reply to  Steinar Midtskogen
June 27, 2015 2:58 pm

Are we certain of the uncertainty, or can we take nothing as certain, and do you include the uncertainties?

Menicholas
June 27, 2015 2:59 pm

“Do you mean the uncertainty we are certain of, or do you include the uncertainties we can not be certain of yet?”

June 28, 2015 12:43 pm

It is difficult to say, because it also seems uncertain whether we really know the uncertainties or not. And of what we do not know, we can be neither certain nor uncertain.

Pierre R Latour
June 27, 2015 11:52 am

Good work L V M of B.
I see you have been reading my email exchanges on S-B Law and role of emissivity since April 12, 2015. You are closing in on the vanishingly small effect of CO2 on global T. Thanks again for editing http://www.principia-scientific.org/professor-singer-finds-co2-has-little-affect-global-temperature-v2.html
The forcing of interest is not F but [CO2], which affects emissivity ε. We want
dT/dCO2 = dT/dε * dε/dCO2
Rearranging S-B Law for
T = (F/εσ)^0.25
dT/dε = 0.25(F/εσ)^-0.75 * (-(F/σ)ε^-2) = -0.25(F/σ)^0.25 * ε^-1.25 < 0
Therefore,
dT/dCO2 < 0.
This means cooling, so long as assumption F independent of [CO2] is valid.
To estimate how much, all you have to do is integrate atmosphere ε(T, P, C) through altitude to find bulk atmosphere effective εa and dεa/d[CO2] and use laws of radiant energy transfer to quantify the change in atmosphere Fa and surface Fs = 239 – Fa to find global ε and corresponding T’s (T, Ta , Ts).
Better to think of the S-B law as giving F or I, intensity or irradiance, rather than flux. It is only a net flux when the surroundings are at T = 0 K. The driving force for radiant energy transfer from body 2 to body 1 is I2 – I1.
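As a quick check on the sign claimed above, the analytic derivative of T = (F/εσ)^0.25 with respect to ε can be compared against a central finite difference (a minimal sketch; the values of F and ε below are merely illustrative):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def T(F, eps):
    """Temperature from the rearranged S-B law, T = (F/(eps*sigma))^0.25."""
    return (F / (eps * SIGMA)) ** 0.25

def dT_deps(F, eps):
    """Analytic derivative: -0.25 * (F/sigma)^0.25 * eps^-1.25."""
    return -0.25 * (F / SIGMA) ** 0.25 * eps ** -1.25

F, eps, h = 390.0, 1.0, 1e-6  # illustrative surface flux and emissivity
numeric = (T(F, eps + h) - T(F, eps - h)) / (2 * h)  # central difference
print(numeric, dT_deps(F, eps))  # both about -72 K per unit emissivity
```

The two values agree and are negative, consistent with the sign of the derivation above, whatever one makes of its physical applicability.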

Monckton of Brenchley
Reply to  Pierre R Latour
July 3, 2015 12:24 am

Mr Latour, in the dishonest fashion of the “Slayers”, suggests that I am “closing in” on their position. He also suggests that I “edited” his nonsense, which dishonestly states that Professor Singer is also “closing in” on the “there is no greenhouse effect” rubbish. Professor Singer accepts, as does everyone who accepts the results of oft-repeated experiments, that there is a greenhouse effect. Accordingly, my alleged “edit” was confined to removing references to Professor Singer’s name.
For some years, the corrupt organisation that promotes the nonsensical notion that there is no greenhouse effect has fraudulently used the names of many eminent scientists to attract donations by falsely asserting that they support its bizarre belief system. That matter is now before the investigating authorities, who are reviewing the evidence and will decide in due course whether, and whom, to prosecute for obtaining a pecuniary advantage by deception.
It would be wiser, therefore, if Mr Latour were to stop using Dr Singer’s name and, for that matter, mine in any context that might in any way be interpreted as implying that we do not think there is a greenhouse effect.
Mr Latour is additionally and characteristically dishonest in his false implication that I am shifting to a position I have not held from the outset. In my first ever public statement on the greenhouse effect I said our enhancement of it could be expected to cause some warming, but on balance not very much. That remains my position and, so far at any rate, the last nine years have indicated that I am correct.
There is no incompatibility between recognising that there is a greenhouse effect and expecting an enhancement of that effect under modern conditions to be small.
The authorities have also been asked to investigate whether the true purpose of the corrupt organisation that thus makes free with the names of eminent researchers who do not in fact endorse its lunatic notions is precisely to discredit not only them but all skeptics, by creating confusion through its repeated false suggestions that various prominent skeptics do not believe there is a greenhouse effect.

Idiot of Village
June 27, 2015 12:00 pm

‘What do you all think?’
An itch for carping 😉

Stephan
Reply to  Pierre R Latour
June 28, 2015 9:35 pm

The article you included with your link has a logical error. In it, you claim that a body with higher emissivity is cooler than a body with lower emissivity under the same constant radiant flux. Consider the inside of a large sphere that is a perfect blackbody emitter (though it need not be). We then place another, smaller sphere at its centre. Independent of the emissivity of the inner body, it will reach the same temperature as the outer body; it merely reaches equilibrium faster the higher its emissivity is. Otherwise you could run a heat engine between these two bodies, and that violates the 2nd law of thermodynamics.
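The cavity argument can be sketched numerically. By Kirchhoff's law the inner grey body absorbs εσT_cav^4 per unit area and emits εσT^4, so the net flux vanishes at T = T_cav for any emissivity; only the approach rate changes (a minimal illustrative sketch):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_flux(T_body, T_cav, eps):
    """Net radiant flux into a grey body inside a blackbody cavity (W/m^2).
    Kirchhoff's law: absorptivity equals emissivity eps."""
    return eps * SIGMA * (T_cav ** 4 - T_body ** 4)

for eps in (0.1, 0.5, 1.0):
    # the net flux is zero at T_body = T_cav for every emissivity; a lower
    # eps only slows the approach to equilibrium, it does not shift it
    print(eps, net_flux(300.0, 300.0, eps))
```

A cooler body receives a positive net flux that scales with ε, which is exactly the "reaches equilibrium faster" point in the comment.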

Monckton of Brenchley
Reply to  Pierre R Latour
July 3, 2015 12:25 am

See my reply to Mr Latour’s drivel a little upthread.

June 27, 2015 12:19 pm

“Professor Murry Salby has estimated that, after the exhaustion of all affordably recoverable fossil fuels at the end of the present century, an increase of no more than 50% on today’s CO2 concentration – from 0.4 to 0.6 mmol mol–1 – will have been achieved.”
A 50% increase in CO2 is unlikely, as CO2 would partition into the oceans at 50 to 1. Because of this effect, if we burned everything we have, everything, our homes included, we might be able to raise atmospheric CO2 by 20%. The oceans work against us: for every CO2 molecule added to the atmosphere, 50 are added to the ocean as it tends toward equilibrium.

June 27, 2015 1:40 pm

higley7,
You forget the time factor: until now, about half of the human emissions per year (as mass) have been absorbed by the oceans and the biosphere. The whole carbon cycle seems to behave as a linear process in disequilibrium, where the sink rate is directly proportional to the extra quantity (= pressure) in the atmosphere above the steady state for the current (ocean) temperature, which is around 290 ppmv.
The past (1960) and current (2012) sink rates show a similar e-fold decay time of slightly over 50 years, or a half-life for the excess CO2 of ~40 years. That is not fast enough to remove all human CO2 in the same year as emitted. If emissions go on as they have until now, that is, slightly quadratic over time, we can easily reach far higher levels (~800 ppmv for “business as usual”). Only if human emissions level off or drop will the levels in the atmosphere level off too, once emissions and net sink rate are equal, or drop back to the steady state if all emissions ceased.
Finally, the human emissions will be redistributed over the atmosphere, biosphere and (deep) oceans, but that needs a lot of time, as the uptake speed of the oceans and vegetation is limited.
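The linear-disequilibrium picture described above can be sketched as a one-box model. The ~51-year e-fold time and 290 ppmv steady state are the figures quoted in the comment; the 2.12 GtC-per-ppmv conversion is a standard value assumed here:

```python
TAU = 51.0           # assumed e-fold decay time of the excess CO2, years
C0 = 290.0           # assumed steady-state concentration, ppmv
GTC_PER_PPMV = 2.12  # standard mass-to-mixing-ratio conversion (assumed)

def simulate(years, emissions_gtc_per_yr, c=400.0):
    """One-box model: the sink removes a fixed fraction of the excess above C0."""
    for _ in range(years):
        c += emissions_gtc_per_yr / GTC_PER_PPMV - (c - C0) / TAU
    return c

# Constant emissions: the level rises until the sink matches the source,
# near C0 + TAU * E / 2.12 (about 530 ppmv for 10 GtC/yr), then flattens.
print(simulate(500, 10.0))
# Zero emissions: the excess decays back toward C0.
print(simulate(200, 0.0))
```

The two runs illustrate the comment's two claims: steady emissions lead to a new steady state, and ceased emissions decay back toward the baseline over decades.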

geran
Reply to  Ferdinand Engelbeen
June 27, 2015 2:07 pm

Consider a REAL greenhouse in full sun. If the greenhouse has no ventilation, how long before the CO2 is effectively depleted?
Hint: it does not take days (“a lot of time”).

Reply to  Ferdinand Engelbeen
June 28, 2015 1:10 am

Geran,
Take the same greenhouse and add a lot of manure and/or plant debris: the CO2 levels will go up to 1,000 ppmv, only somewhat lower during the day for weeks to come…
The big greenhouse called earth has several carbon cycles, where the exchange between atmosphere and biosphere is about 60 GtC in and 61 GtC out over the seasons: an uptake of some 1 GtC extra into the biosphere caused by the 30% extra CO2 pressure in the atmosphere. See:
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf
Thus removing the current 230 GtC extra CO2 in the atmosphere above the steady state equilibrium for the current temperature (per Henry’s law: ~290 ppmv) will take a lot of time…

patmcguinness
Reply to  Ferdinand Engelbeen
June 28, 2015 3:24 pm

“If the emissions go on as was the case until now, that is slightly quadratic over time, we can easily reach far higher levels (~800 ppmv for “business as usual”).”
No we cannot. There is no credible scenario where we get to 800ppm this century.
Here’s why:
1. Carbon uptake has consistently increased, and ocean uptake will increase further, on a path that grows with the concentration of CO2. It’s about 5 GT of carbon, over 50% of the 9 GT of carbon emissions. As ppm of CO2 goes up, so does the rate of carbon uptake; when we get to 550 ppm, the carbon uptake will be 10 GT, equal to current emissions. So if all we do is simply keep emissions at current levels, we will never go above 550 ppm. The ocean sink is so massive (unlimited, actually, due to calcium carbonate sedimentation) that we could emit 10 GT of carbon for 280 years and the oceans would soak up 7/8ths of it.
2. It’s not credible to suppose emissions will increase quadratically in coming decades; in 2014 they did not increase AT ALL over 2013. If trends continue, my 8-year-old son will be 50 feet tall by age 35. OECD countries have flatlined and now even reduced emissions, and the rise of emissions due to China has now ended. The US doubled GDP since 1970, yet uses less oil per capita. With or without carbon taxes or limitations, we will not emit that much more carbon, because we can grow without increasing energy use.
3. There’s not enough economic growth to sustain much higher emissions. Even a doubling of emissions globally would require 4x – 5x increases in developing nations’ emissions, but that implies more growth than is actually happening *AND* a reversal of the trends towards renewables and energy efficiency.
4. We are increasing CO2 by 2 ppm per year. Given #1, even if we increase emissions, the carbon sinks will grow as well, and given #2 and #3 it’s very unlikely that emissions will grow that much. A 2% increase per year in emissions will lead to about 550 ppm by 2100.
5. Technology is moving forward at a pace that implies renewables will be cost competitive and displace fossil fuels by 2040.
6. Since the alarmists are hyping up this threat enough to force commitments to emit WELL BELOW the 2% increase per year, it’s likely we will not even hit 550 ppm.
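Point 1 can be tried with rough numbers: assuming uptake scales linearly with the excess over a 280 ppm baseline and is ~5 GtC/yr at today's ~400 ppm (both figures read off the comment, so this is a sketch, not the commenter's actual analysis), constant emissions stall where uptake balances them:

```python
# Uptake assumed linear in the CO2 excess over a 280 ppm baseline,
# calibrated so uptake is 5 GtC/yr at today's 400 ppm (per the comment).
K = 5.0 / (400.0 - 280.0)  # GtC/yr of uptake per ppm of excess

def cap_ppm(constant_emissions_gtc):
    """Concentration at which uptake balances constant annual emissions."""
    return 280.0 + constant_emissions_gtc / K

print(cap_ppm(10.0))  # roughly 520 ppm for constant ~10 GtC/yr emissions
```

With this calibration the cap comes out near 520 ppm, in the same ballpark as the 550 ppm figure claimed above.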

patmcguinness
Reply to  Ferdinand Engelbeen
June 28, 2015 3:40 pm

“A 2% increase/ year in emissions will lead to about 550 ppm by 2100.”
I should correct/clarify myself: I meant a 2% increase until 2040, not 2100. I did an analysis in which carbon uptake is estimated to increase as ppm rise, and emissions increase 2% per year until 2040, then flatline at around 16 GT (almost double current emissions); that leads to 585 ppm. If you project a 1% decline in emissions after 2050, bringing emissions from 16 GT to 10 GT, it goes to 550 ppm.
The uptake assumption is that it increases linearly with increasing ppm.
In short, realistic scenarios in which global energy use in the 21st century tracks what has happened in OECD nations over the past 40 years would lead to 550-580 ppm by 2100.
Given both technology trends and uptake trends, we could make 600 ppm the upper limit. And given the work that shows TCR is around 1.3-1.4C, the upper limit of temperature change from now until 2090 is about 0.7C.
A 2% increase ad infinitum (which does lead to 40 GT of output per year and higher CO2 levels) is, as I noted, unrealistic on many counts: resource constraints, and it defies economic models and contradicts historical consumption patterns and trends.

Menicholas
Reply to  Ferdinand Engelbeen
June 28, 2015 10:43 pm

Hey Pat, what are you feeding that kid?
You are going spend a fortune on clothes and shoes!

Reply to  Ferdinand Engelbeen
June 29, 2015 8:19 am

patmcguinness,
If, and only if, emissions don’t grow as “business as usual”, you may be right, but until now CO2 levels have grown along the worst scenario. One year of lower emissions isn’t a trend, the more so as a lot of Western countries (and even China) grew less in economic activity than expected, or not at all.
As the response of the oceans and biosphere to the increased pressure in the atmosphere seems to be quite linear, steady emissions indeed would lead to a new steady state where emissions and sinks are equal, but “business as usual” can lead to 800 ppmv and more in the atmosphere…
Thus everything depends on the future emissions…

bw
June 27, 2015 2:14 pm

Seawater dissolved inorganic carbon (DIC) is known to be about 2.2 millimolar at the surface. The air in contact with that surface is 0.4 millimolar CO2. So, 2.2/0.4 equals 5.5.
Basically, for 65 molecules of CO2 added to the atmosphere, about 55 will end up in the ocean and 10 will remain in the atmosphere.
That 50 to 1 number comes from using equal volume ratios.
For the change in CO2 from 280 to 400 ppm, the difference is 120 ppm. Basically the same ratio applies, with about 100 ppm entering (and staying in) the ocean and the remainder, about 20 ppm, staying in the atmosphere. This is consistent with many other approaches to estimating how much of the human CO2 is actually in the atmosphere today: about 20 ppm.
CO2 never “accumulates” in the atmosphere, it is part of a flowing biogeochemical river with huge abiotic (ocean) and biotic exchanges.

Menicholas
June 27, 2015 3:02 pm

Does any of this explain why CO2 in the atmosphere seems to be on a more or less linear trend, even while CO2 emissions have increased, and the rate of increase is also increasing?

June 28, 2015 4:59 am

Menicholas,
All three observations are slightly quadratic: human emissions, the increase in the atmosphere, and the increase in net sink rate. There are of course year-by-year (10-90%) and decadal (40-60%) variations in sink rate, mainly caused by temperature, but the average sink rate is 45-50% of emissions over the past 115 years, which makes the increase in the atmosphere 50-55% of emissions:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_emiss_increase.jpg

June 28, 2015 5:14 am

bw,
You forget a few points… The solubility of CO2 in fresh water is very low. In seawater the uptake is about a factor of 10 higher (the buffer/Revelle factor), but even so, including all buffering, a 100% change in the atmosphere results in a 100% change in free CO2 in seawater, per Henry’s law, but only a 10% change in DIC, as free CO2 is only 1% of all forms of carbon; the rest is 90% bicarbonate and 9% carbonate.
That means the 30% CO2 increase in the atmosphere is good for a 3% increase of DIC in the ocean surface, or from ~1000 GtC to ~1030 GtC. Far from your 50:1 ratio.
The 50:1 ratio may apply to the deep oceans, but the exchange between the atmosphere and the deep oceans is much more restricted, to polar sinking and equatorial upwelling, each only 5% of the ocean surface.
Thus, while the ultimate distribution may be 1:50, that needs a lot of time, and as human emissions are still increasing year by year, CO2 accumulates in the atmosphere, because the response of the sinks is not fast enough…

June 28, 2015 2:26 pm

FE, what about the dissolving of CO2 into fresh water as raindrops fall? Is it plausible that the pressure, friction, turbulence and mixing within a drop allow a significant amount of CO2 to be absorbed while the drop falls?

patmcguinness
June 28, 2015 3:52 pm

There’s another way of getting the ocean uptake amounts. The keys are to know the amount of DIC (dissolved inorganic carbon) and the Revelle factor, which determines how a change in atmospheric ppm changes the total DIC percentage. Basically, most of the DIC is in the form of carbonates, and adding CO2 changes the chemistry ratios of CO2, HCO3 and H2CO3 that make up the DIC. The amount of DIC in the whole ocean is MASSIVE: 37,000 gigatonnes of carbon. This is about 70x the amount of CO2 in the atmosphere (at about 560 GT or so). The Revelle factor is about 10.
What this means is that if atmospheric CO2 goes up by 30%, you divide by 10 and DIC in the ocean goes up by 3%. You multiply that by the ocean DIC. So what if we double CO2?
Then ocean DIC can go up by about 10% of 37,000 GT, or 3,700 GT. Since doubling CO2 means going from 560 GT to twice that, the oceans will take up 3,700 GT vs the atmosphere’s 560 GT.
Hence, over time (and this is a slow process, taking about 1.6 GT of CO2 from the surface ocean to the deep ocean per year), 7/8ths of emissions will end up in the ocean. It also means that there is some level of emissions at which CO2 ppm will not increase, and that zero emissions would actually cause CO2 ppm to go down.
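The arithmetic above can be written out directly, using the figures quoted in the comment (37,000 GtC of DIC, ~560 GtC in the atmosphere, a Revelle factor of ~10):

```python
# Revelle-factor partitioning, using the figures quoted above.
DIC_OCEAN = 37000.0  # GtC of dissolved inorganic carbon in the whole ocean
ATM = 560.0          # GtC in the atmosphere (the comment's figure)
REVELLE = 10.0       # fractional DIC change = fractional CO2 change / 10

def ocean_equilibrium_uptake(frac_atm_increase):
    """Eventual ocean DIC uptake (GtC) for a fractional atmospheric rise."""
    return DIC_OCEAN * frac_atm_increase / REVELLE

ocean_uptake = ocean_equilibrium_uptake(1.0)  # a doubling of atmospheric CO2
print(ocean_uptake)                           # -> 3700.0 GtC
print(ocean_uptake / (ocean_uptake + ATM))    # -> ~0.87, i.e. about 7/8ths
```

The 7/8ths figure in the comment falls straight out of these three inputs; note this is the eventual equilibrium split, reached only at the slow surface-to-deep exchange rate the comment mentions.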

Menicholas
June 28, 2015 10:31 pm

Mr. Engelbeen,
I appreciate the response, but I am not sure that you have explained anything.
BTW I dispute the temperature part of the graph you posted.
I do not think it represents objective reality.

Menicholas
June 28, 2015 10:36 pm

Curious: has the rate of diffusion of dissolved gases and ions into the water column ever been measured?
In other words, how long might it take for CO2/bicarbonate/carbonate in surface water to diffuse to the deep ocean?
Do we have a number for that? I understand it must be rather low, but not sure how low.

June 29, 2015 3:58 am

Aaron,
I once calculated the amounts of CO2 in rainwater. While that dissolves carbonate rocks (though even that needs millions of years to carve the beautiful caves…), what is dissolved where the raindrops form is at most 1.32 mg/l at 0°C. One litre of rainwater is formed out of some 400 m3 of air and takes time to form, so the water there is probably completely saturated with CO2. Even so, that hardly affects the CO2 levels at height. While the drops fall, the temperature in general increases, so that even less CO2 is retained…
1 l/m2 gives 1 mm of rain where the drops fall. If all that water evaporates, setting all its CO2 free, that gives less than 1 ppmv extra in the first metre of air, without wind mixing…
Thus, while the total water cycle is enormous and a lot of CO2 is moved along with it, that is hardly measurable in the concentrations, either in the high atmosphere or near the ground.
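The rain-out estimate can be checked with the figures given above (1.32 mg of CO2 per litre of rain at 0°C, 1 litre of rain per m², all of it released into the first metre of still air):

```python
# Worked check of the rain-out estimate above.
M_CO2 = 44.01     # g/mol, molar mass of CO2
MOLAR_VOL = 22.4  # l/mol for an ideal gas at 0°C and 1 atm

co2_mol = 1.32e-3 / M_CO2     # mol of CO2 dissolved in 1 l of rain (1.32 mg)
air_mol = 1000.0 / MOLAR_VOL  # mol of air in the first cubic metre
ppmv = co2_mol / air_mol * 1e6
print(ppmv)  # -> ~0.67 ppmv, i.e. "less than 1 ppmv extra"
```

This reproduces the "less than 1 ppmv" conclusion from the stated inputs.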

June 29, 2015 5:39 am

Menicholas,
The temperature trend is from the HadCRU ocean temperatures. No matter which other trend you take, the trends are similar, though one can doubt the slope. What is sure is that the impact of temperature (variability and trend) on CO2 levels is restricted: 4-5 ppmv/K for short-term variations, up to 8 ppmv/K for (very) long-term influence.
That is also what Henry’s law says for the solubility of CO2 in seawater (4-17 ppmv/K in the literature).
In the above graph it is clear that the huge variability in temperature has little influence on the CO2 increase in the atmosphere. Neither has the trend: the period 1945-1975 shows a cooling, and 2000 to the present is flat, while CO2 levels simply follow human emissions.
Diffusion of CO2 in water is very slow; only through wind and waves is the mixing between the ocean surface layer (the “mixed layer”) and the atmosphere fast. Besides some carbon particles (organic, and inorganic shells) from dead plankton, and fish excrements, there are only limited exchanges between the ocean surface/atmosphere and the deep oceans.
There are several works which have followed human CO2 into the deep oceans; one of them is here:
http://www.pmel.noaa.gov/pubs/outstand/sabi2683/sabi2683.shtml
As there is a slight difference in the 13C/12C ratio between fossil-fuel carbon and oceanic carbon, the changes can be traced back.
Other general exchanges are known from tracers like the 1950-1960 peak in 14C caused by the open-air nuclear bomb tests, and from millions of measurements over the years by research-ship cruises from the surface to depth…

KevinK
June 27, 2015 1:29 pm

Here is an example of the fatal flaw in the “energy budget” posited by the climate science community. And you can do this in your front yard (but not so easily in the winter).
Take two pieces of pipe of the same diameter and length, one made of steel (or copper) and the other of PVC (or another plastic). Paint them both gray to make the albedos identical (not that it matters). Place them both on your front lawn, exposed to bright summer sunlight. Wait a while.
Both pipes are receiving the same incoming energy flux (units of Watts per Surface Area), both will heat up to about the same temperature as the surrounding grass. Convective cooling will keep them both at about the same temperature.
Now pick one up in each hand, in “sunny” areas of the Earth you are likely to wince in pain and drop the steel pipe while the plastic pipe will feel comfortable to hold in your hand.
Why ? Thermal capacity and thermal diffusivity.
The metal pipe has absorbed and stored more thermal energy internally than the plastic pipe. The metal pipe has more thermal capacity, so it can hold, contain or “trap” more thermal energy than the plastic pipe.
The second part of the explanation has to do with thermal diffusivity. From a systems perspective, thermal diffusivity is essentially a measure of the velocity of heat flowing through a material. The velocity of heat flow through metals is faster than the velocity of heat flow through human skin and flesh.
When you pick up the metal pipe, the heat flows quickly from the interior of the pipe into your hand at the pipe/skin interface; this causes the pain you feel.
When you pick up the plastic pipe, there is less thermal energy in the pipe and it travels more slowly from the interior of the pipe to your hand. Thus it feels comfortable to hold.
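The two material properties at work can be put into numbers; the density, specific heat and conductivity values below are typical handbook figures used as illustrative assumptions:

```python
# Typical handbook values, used as illustrative assumptions:
#         density kg/m^3, specific heat J/(kg K), conductivity W/(m K)
materials = {
    "steel": (7850.0, 490.0, 50.0),
    "PVC":   (1400.0, 900.0, 0.19),
}

props = {}
for name, (rho, cp, k) in materials.items():
    vol_heat = rho * cp           # J/(m^3 K): thermal capacity per volume
    diffusivity = k / (rho * cp)  # m^2/s: how fast heat moves through it
    props[name] = (vol_heat, diffusivity)
    print(name, vol_heat, diffusivity)
# Steel stores roughly 3x more heat per unit volume and diffuses it to
# your skin far faster than PVC, hence the dropped pipe.
```

With these figures, the steel pipe both stores several times more energy per unit volume and delivers it to the skin orders of magnitude faster, matching the two-part explanation above.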
You cannot perform a correct energy-budget analysis of the thermal energy “trapped” in a material after absorption of radiation WITHOUT properly considering the thermal capacities of the materials involved in the system.
Yes, you can make a “budget” and put measured values into said “budget”, but it is a completely made-up version of what is happening, and it will not provide any useful or predictive information about what the temperatures will be at locations in the system.
The thermal capacity of the oceans of the Earth is huge; the thermal capacity of the gases in the atmosphere is much smaller. The thermal capacity of the “GHGs” in the atmosphere is minuscule in comparison. The GHGs are