By Christopher Monckton of Brenchley
I propose to raise a question about the Earth’s energy budget that has perplexed me for some years. Since further evidence in relation to my long-standing question is to hand, it is worth asking for answers from the expert community at WUWT.
A.E. Housman, in his immortal parody of the elegiac bromides often perpetrated by the choruses in the stage-plays of classical Greece, gives this line as an example:
I only ask because I want to know.
This sentiment is not as fatuous as it seems at first blush. Another chorus might say:
I ask because I want to make a point.
I begin by saying:
You say I aim to score a point. Not so:
I only ask because I want to know.
Last time I raised the question, in another blog, more heat than light was generated because the proprietrix had erroneously assumed that T / 4F, a differential essential to my argument, was too simple to be a correct form of the first derivative ΔT / ΔF of the fundamental equation (1) of radiative transfer:
F = εσT⁴ | Stefan-Boltzmann equation (1)
where F is radiative flux density in W m–2, ε is emissivity constant at unity, the Stefan-Boltzmann constant σ is 5.67 x 10–8 W m–2 K–4, and T is temperature in Kelvin. To avert similar misunderstandings (which I have found to be widespread), here is a demonstration that T / 4F, simple though it be, is indeed the first derivative ΔT / ΔF of Eq. (1):
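Differentiating Eq. (1) with respect to T and then inverting gives the result in one line:

```latex
F = \varepsilon\sigma T^{4}
\;\Longrightarrow\;
\frac{dF}{dT} = 4\,\varepsilon\sigma T^{3} = \frac{4F}{T}
\;\Longrightarrow\;
\frac{dT}{dF} = \frac{T}{4F}.
```

At TS = 288 K and FS = 390 W m–2 this gives T / 4F ≈ 288 / 1560 ≈ 0.185 K W–1 m2, the figure to which the 7/6 Hölder allowance is applied below.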
Like any budget, the Earth’s energy budget is supposed to balance. If there is an imbalance, a change in mean temperature will restore equilibrium.
My question relates to one of many curious features of the following energy-budget diagrams for the Earth:
Energy budget diagrams from (top left to bottom right) Kiehl & Trenberth (1997), Trenberth et al. (2008), IPCC (2013), Stephens et al. (2012), and NASA (2015).
Now for the curiosity:
“Consensus”: surface radiation FS falls on the interval [390, 398] W m–2.
There is a “consensus” that the radiative flux density leaving the Earth’s surface is 390-398 W m–2. The “consensus” would not be so happy if it saw the implications.
When I first saw FS = 390 W m–2 in Kiehl & Trenberth (1997), I deduced it was derived from observed global mean surface temperature 288 K using Eq. (1), assuming surface emissivity εS = 1. Similarly, TS = 289.5 K gives 398 W m–2.
The surface flux density cannot be reliably measured. So did the “consensus” use Eq. (1) to reach the flux densities shown in the five diagrams? Yes. Kiehl & Trenberth (1997) wrote: “Emission from the surface is assumed to follow Planck’s function, assuming a surface emissivity of 1.” Planck’s function gives flux density at a particular wavelength. Eq. (1) integrates that function across all wavelengths.
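As a quick check of those endpoints, a few lines of Python (purely illustrative) reproduce the 390 and 398 W m–2 figures from the quoted temperatures:

```python
# Stefan-Boltzmann flux F = epsilon * sigma * T^4, with surface emissivity taken as 1 (as in the text)
SIGMA = 5.67e-8  # W m^-2 K^-4

def surface_flux(T_kelvin, emissivity=1.0):
    """Radiative flux density (W m^-2) from a surface at temperature T."""
    return emissivity * SIGMA * T_kelvin ** 4

print(surface_flux(288.0))   # ~390 W m^-2
print(surface_flux(289.5))   # ~398 W m^-2
```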
Here (at last) is my question. Does not the use of Eq. (1) to determine the relationship between TS and FS at the surface necessarily imply that the Planck climate-sensitivity parameter λ0,S applicable to the surface (where the coefficient 7/6 ballparks allowance for the Hölder inequality) is given by
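(taking, as above, TS = 288 K and FS ≈ 390 W m–2, and applying the 7/6 coefficient just mentioned)

```latex
\lambda_{0,S} = \frac{7}{6}\,\frac{T_S}{4F_S}
             \approx \frac{7}{6}\times\frac{288}{4\times 390}
             \approx 0.215\ \mathrm{K\,W^{-1}\,m^{2}}. \qquad (3)
```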
The implications for climate sensitivity are profound. For the official method of determining λ0 is to apply Eq. (1) to the characteristic-emission altitude (~300 mb), where incoming and outgoing radiative fluxes are by definition equal, so that Eq. (4) gives incoming and hence outgoing radiative flux FE:
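(in the notation defined immediately below)

```latex
F_E = \frac{\pi r^{2}}{4\pi r^{2}}\,S\,(1-\alpha)
    = \frac{1366\times(1-0.3)}{4}
    \approx 239\ \mathrm{W\,m^{-2}}, \qquad (4)
```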
where FE is the product of the ratio πr²/4πr² of the surface area of the disk the Earth presents to the Sun to that of the rotating sphere; total solar irradiance S = 1366 W m–2; and (1 – α), where α = 0.3 is the Earth’s albedo. Then, from (1), mean effective temperature TE at the characteristic emission altitude is given by Eq. (5):
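(numerically, with FE ≈ 239 W m–2)

```latex
T_E = \left(\frac{F_E}{\sigma}\right)^{1/4}
    = \left(\frac{239}{5.67\times 10^{-8}}\right)^{1/4}
    \approx 255\ \mathrm{K}. \qquad (5)
```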
The characteristic emission altitude is ~5 km above ground level. Since mean surface temperature is 288 K and the mean tropospheric lapse rate is ~6.5 K km–1, Earth’s effective radiating temperature TE = 288 – 5(6.5) = 255 K, in agreement with Eq. (5). The Planck parameter λ0,E at that altitude is then given by (6):
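(again including the 7/6 Hölder allowance, as at the surface)

```latex
\lambda_{0,E} = \frac{7}{6}\,\frac{T_E}{4F_E}
             = \frac{7}{6}\times\frac{255}{4\times 239}
             \approx 0.31\ \mathrm{K\,W^{-1}\,m^{2}}. \qquad (6)
```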
Equilibrium climate sensitivity to a CO2 doubling is given by (7):
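(taking the CO2-doubling forcing as the usual 5.35 ln 2 ≈ 3.7 W m–2)

```latex
\Delta T_{S,\infty} = \frac{\Delta F_{2\times}}{\lambda_{0}^{-1} - f},
\qquad \Delta F_{2\times} = 5.35\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}, \qquad (7)
```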
where the numerator of the fraction is the CO2 radiative forcing, and f = 1.5 is the IPCC’s current best estimate of the temperature-feedback sum to equilibrium.
Where λ0,E = 0.313, equilibrium climate sensitivity is 2.2 K, down from the 3.3 K in IPCC (2007) because IPCC (2013) cut the feedback sum f from 2 to 1.5 W m–2 K–1 (though it did not reveal that climate sensitivity must then fall by a third).
However, if Eq. (1) is applied at the surface, the value λ0,S of the Planck sensitivity parameter is 0.215 (Eq. 3), and equilibrium climate sensitivity falls to only 1.2 K.
If f is no greater than zero, as a growing body of papers finds (see e.g. Lindzen & Choi, 2009, 2011; Spencer & Braswell, 2010, 2011), climate sensitivity falls again to just 0.8 K.
If f is net-negative, sensitivity falls still further. Monckton of Brenchley, 2015 (click “Most Read Articles” at www.scibull.com) suggests that the thermostasis of the climate over the past 810,000 years and the incompatibility of high net-positive feedback with the Bode system-gain relation indicate a net-negative feedback sum on the interval –0.64 [–1.60, +0.32] W m–2 K–1. In that event, applying Eq. (1) at the surface gives climate sensitivity on the interval 0.7 [0.6, 0.9] K.
Two conclusions are possible. Either one ought not to use Eq. (1) at the surface, reserving it for the characteristic emission altitude, in which event the value for surface flux density FS may well be incorrect, no one has any idea what the Earth’s energy budget is, and still less is known about whether there is any surface “radiative imbalance” at all. Or the flux density at the Earth’s surface is correctly determined from observed global mean surface temperature by Eq. (1), as all five sources cited above determined it, in which event sensitivity is harmlessly low even under the IPCC’s current assumption of strongly net-positive temperature feedbacks.
Table 1 summarizes the effect on equilibrium climate sensitivity of assuming that Eq. (1) defines the relationship between global mean surface temperature TS and mean outgoing surface radiative flux density FS.
Climate sensitivities to a CO2 doubling

| Source | Altitude | λ0 | f | ΔTS,100 | ΔTS,∞ |
|---|---|---|---|---|---|
| AR5 (2013) upper bound | 300 mb | 0.310 K W–1 m2 | +2.40 W m–2 K–1 | 2.3 K | 4.5 K |
| AR4 (2007) central estimate | 300 mb | 0.310 K W–1 m2 | +2.05 W m–2 K–1 | 1.6 K | 3.3 K |
| AR5 implicit central estimate | 300 mb | 0.310 K W–1 m2 | +1.50 W m–2 K–1 | 1.1 K | 2.2 K |
| AR5 lower bound | 300 mb | 0.310 K W–1 m2 | +0.75 W m–2 K–1 | 0.8 K | 1.5 K |
| M of B (2015) upper bound | 300 mb | 0.310 K W–1 m2 | +0.32 W m–2 K–1 | 0.7 K | 1.3 K |
| **AR5 central estimate** | **1013 mb** | **0.215 K W–1 m2** | **+1.50 W m–2 K–1** | **0.6 K** | **1.2 K** |
| M of B central estimate | 300 mb | 0.310 K W–1 m2 | –0.64 W m–2 K–1 | 0.5 K | 1.0 K |
| **M of B upper bound** | **1013 mb** | **0.215 K W–1 m2** | **+0.32 W m–2 K–1** | **0.5 K** | **0.9 K** |
| M of B lower bound | 300 mb | 0.310 K W–1 m2 | –1.60 W m–2 K–1 | 0.4 K | 0.8 K |
| **M of B central estimate** | **1013 mb** | **0.215 K W–1 m2** | **–0.64 W m–2 K–1** | **0.4 K** | **0.7 K** |
| Lindzen & Choi (2011) | 300 mb | 0.310 K W–1 m2 | –1.80 W m–2 K–1 | 0.4 K | 0.7 K |
| Spencer & Braswell (2011) | 300 mb | 0.310 K W–1 m2 | –1.80 W m–2 K–1 | 0.4 K | 0.7 K |
| **M of B lower bound** | **1013 mb** | **0.215 K W–1 m2** | **–1.60 W m–2 K–1** | **0.3 K** | **0.6 K** |
Table 1. 100-year (ΔTS,100) and equilibrium (ΔTS,∞) climate sensitivities to a doubling of CO2 concentration, applying Eq. (1) at the characteristic-emission altitude (300 mb) and, boldfaced, at the surface (1013 mb).
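For those who wish to verify the right-hand columns, a short illustrative Python sketch will serve. It assumes the conventional CO2-doubling forcing of 5.35 ln 2 ≈ 3.7 W m–2 and the form ΔT = ΔF / (λ0⁻¹ – f) of Eq. (7), with the century value taken as roughly half of equilibrium (as discussed below); it reproduces the tabulated values to within about 0.1 K of rounding:

```python
# Sketch: equilibrium sensitivity dT = dF / (1/lambda0 - f); century warming taken as ~half of equilibrium
import math

DF_2X = 5.35 * math.log(2)   # ~3.7 W m^-2, conventional CO2-doubling forcing (an assumption here)

def equilibrium_sensitivity(lambda0, f, forcing=DF_2X):
    """Equilibrium warming (K) for Planck parameter lambda0 (K W^-1 m^2) and feedback sum f (W m^-2 K^-1)."""
    return forcing / (1.0 / lambda0 - f)

rows = [
    ("AR5 (2013) upper bound, 300 mb",  0.310, +2.40),
    ("AR5 central estimate, 1013 mb",   0.215, +1.50),
    ("M of B central estimate, 300 mb", 0.310, -0.64),
]
for name, lambda0, f in rows:
    dT_inf = equilibrium_sensitivity(lambda0, f)
    print(f"{name}: {dT_inf:.1f} K equilibrium, ~{dT_inf / 2:.1f} K within a century")
```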
It is worth noting that, even before taking any account of the “consensus’” use of Eq. (1) to govern the relationship between TS and FS, the reduction in the feedback sum f between IPCC’s 2007 and 2013 assessment reports mandates a corresponding reduction in its central estimate of climate sensitivity from 3.3 to 2.2 K, of which only half, or about 1 K, would be expected to occur within a century of a CO2 doubling. The remainder would make itself slowly and harmlessly manifest over the next 1000-3000 years (Solomon et al., 2009).
Given that the Great Pause has endured for 18 years 6 months, the probability that there is no global warming in the pipeline as a result of our past sins of emission is increasing (Monckton of Brenchley et al., 2013). All warming that was likely to occur from emissions to date has already made itself manifest. Therefore, perhaps we start with a clean slate. Professor Murry Salby has estimated that, after the exhaustion of all affordably recoverable fossil fuels at the end of the present century, an increase of no more than 50% on today’s CO2 concentration – from 0.4 to 0.6 mmol mol–1 – will have been achieved.
In that event, replace Table 1 with Table 2:
Climate sensitivities to a 50% CO2 concentration growth

| Source | Altitude | λ0 | f | ΔTS,100 | ΔTS,∞ |
|---|---|---|---|---|---|
| AR5 (2013) upper bound | 300 mb | 0.310 K W–1 m2 | +2.40 W m–2 K–1 | 1.3 K | 2.6 K |
| AR4 (2007) central estimate | 300 mb | 0.310 K W–1 m2 | +2.05 W m–2 K–1 | 0.9 K | 1.8 K |
| AR5 implicit central estimate | 300 mb | 0.310 K W–1 m2 | +1.50 W m–2 K–1 | 0.6 K | 1.3 K |
| AR5 lower bound | 300 mb | 0.310 K W–1 m2 | +0.75 W m–2 K–1 | 0.4 K | 0.9 K |
| M of B (2015) upper bound | 300 mb | 0.310 K W–1 m2 | +0.32 W m–2 K–1 | 0.4 K | 0.7 K |
| **AR5 central estimate** | **1013 mb** | **0.215 K W–1 m2** | **+1.50 W m–2 K–1** | **0.3 K** | **0.7 K** |
| M of B central estimate | 300 mb | 0.310 K W–1 m2 | –0.64 W m–2 K–1 | 0.3 K | 0.6 K |
| **M of B upper bound** | **1013 mb** | **0.215 K W–1 m2** | **+0.32 W m–2 K–1** | **0.3 K** | **0.5 K** |
| M of B lower bound | 300 mb | 0.310 K W–1 m2 | –1.60 W m–2 K–1 | 0.2 K | 0.4 K |
| **M of B central estimate** | **1013 mb** | **0.215 K W–1 m2** | **–0.64 W m–2 K–1** | **0.2 K** | **0.4 K** |
| Lindzen & Choi (2011) | 300 mb | 0.310 K W–1 m2 | –1.80 W m–2 K–1 | 0.2 K | 0.4 K |
| Spencer & Braswell (2011) | 300 mb | 0.310 K W–1 m2 | –1.80 W m–2 K–1 | 0.2 K | 0.4 K |
| **M of B lower bound** | **1013 mb** | **0.215 K W–1 m2** | **–1.60 W m–2 K–1** | **0.2 K** | **0.3 K** |
Table 2. 100-year (ΔTS,100) and equilibrium (ΔTS,∞) climate sensitivities to a 50% increase in CO2 concentration, applying Eq. (1) at the characteristic-emission altitude (300 mb) and, boldfaced, at the surface (1013 mb).
Once allowance has been made not only for the IPCC’s reduction of the feedback sum f from 2.05 to 1.5 W m–2 K–1 and for the application of Eq. (1) to the relationship between TS and FS, but also for the probability that f is not strongly positive, for the possibility that a 50% increase in CO2 concentration is all that can occur before fossil-fuel exhaustion, for the IPCC’s estimate that only half of equilibrium sensitivity will occur within the century after the CO2 increase, and for the fact that the CO2 increase will not be complete until the end of this century, it is difficult, and arguably impossible, to maintain that Man can cause a dangerous warming of the planet by 2100.
Indeed, even if one ignores all of the considerations in the above paragraph except the first, the IPCC’s implicit central estimate of global warming this century would amount to only 1.1 K, just within the arbitrary 2-K-since-1750 limit, and any remaining warming would come through so slowly as to be harmless. It is no longer legitimate – if ever it was – to maintain that there is any need to fear runaway warming.
Quid vobis videtur? (What is your view?)
The other fly in the ointment of this “IR warming the oceans” claim is that the IR the CO2 would be “trapping” would have been emitted from the ocean itself. Considering that a CO2 molecule re-emits IR in all directions, only a small percentage of the radiation it absorbs would be re-radiated back to be re-absorbed by the ocean. I don’t see how heat emitted by the oceans can be re-radiated back to the oceans and cause them to warm. Even the IR-lamp experiment wouldn’t prove much, given that the IR lamp would be new energy introduced to the system, whereas the CO2 theory recycles emitted energy.
OK, here’s another experiment.
Place a closed gallon jug of water under the heat lamp and a gallon in an open pan. Which one will get hotter?
Place a gallon jug of water in the shade in 115 F Phoenix and a gallon in an open pan. Which one will get hotter?
The bottled up water gets hot, but the pan that’s free to evaporate stays relatively cool.
That’s why if you want to keep your pool at 85 F you still have to heat it even though it’s105 F air temp.
That’s how those canvas water bags work, a wet bandana, an evaporative cooler.
Here is another simple experiment:
1) Have an insulated bath in a closed glass container filled with pure CO2 or at least 7000 PPM
2) Have an insulated bath in a closed glass container filled with N2/O2
Heat the bath to 99 degrees C, turn out the lights to replicate night, and measure the decay in temperature of the two baths. If the bath in the container filled with CO2 cools more slowly than the other one, then CO2 may actually lead to warming of the oceans. My bet is that it won’t. Of course 100% pure CO2 isn’t representative of anything, and a chamber filled with 7,000 ppm CO2 might be better, considering that is the highest level reached in the past 600 million years.
I would love to see the results of this experiment if anyone out there can use their lab facilities to run it. The results should then be presented to congress. The simpler the experiment the better.
What I find so fascinating is that there are so many unanswered questions regarding this “settled” science. The very fact that “scientists” claim that something as complex as the climate can ever be settled pretty much proves that real scientists don’t occupy our climate science departments, especially since just 30 years ago they were screaming that an ice age was coming. The arrogance is of epic proportions. Given that a simple question like whether ocean-radiated IR can be re-radiated to cause the oceans to warm can’t be answered definitively, the climate scientists really aren’t interested in truly understanding the system. The most basic of experiments seem to have been overlooked.

Even their models fail miserably. Anyone with an ounce of knowledge about building multivariate models would know that they have systematically over-weighted an insignificant coefficient. Climate science is like a dietician who created a model about weight loss while trying to sell Under Armor shoes. The model would be weight loss = f(exercise, caloric intake, and a dummy variable for whether you own Under Armor shoes), weighted as Weight Loss = 0.2 × caloric intake + 0.2 × exercise + 0.6 × Under Armor shoes. That makes a great marketing piece, but even an idiotic model like the one outlined would have better results than the IPCC’s CO2-focused climate models. The only way you get a systematic overestimation of temperature is if you have a model biased toward a faulty variable. We see it in market models all the time. Climate scientists are learning lessons Wall Street has known for decades: if we can’t model the S&P 500, we certainly can’t model something infinitely more complex like the climate 100 years into the future.
” especially just 30 years ago they were screaming an ice age was coming”
…
I don’t think so.
..
http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2370.1
I do think so. I remember being scared to death as a kid about the coming ice age.
You can’t deny history. Here are the headlines:
https://stevengoddard.wordpress.com/2013/05/21/the-1970s-ice-age-scare/
https://stevengoddard.wordpress.com/2014/01/31/hiding-the-1970s-ice-age-scare/
http://wattsupwiththat.com/2013/03/01/global-cooling-compilation/
I see your problem.
You were listening to the media instead of looking at the scientific literature.
..
Got anything better than blog links?
This recent chart by Dr Box pretty much proves just how nonsensical the conclusions reached by climate “scientists” are. They clearly can’t see the forest for the trees. Just look at this chart.
Volcanic activity is blamed for the cooling of the period 1850 to 1900, as demonstrated by the chart labels.
Point #1) Dr Box doesn’t seem to grasp the simple concept that if the volcanoes were causing cooling, the temperatures were artificially depressed, meaning the actual warming from 1850 to 2000 would have been LESS were it not for the volcanoes. That means that, according to the graph, there would have been LESS than 1 degree C of warming since 1850.
Point #2) It cooled from 1940 to 1990 without any real volcanoes, during a time in which CO2 dramatically increased. Why not blame CO2 for the cooling, if we blamed the volcanoes for the cooling? What caused the cooling between 1940 and 1990?
Point #3) We aren’t even above the 1930 peak?
Would someone point to a period where the case can be made that CO2 is causing warming? How does this chart condemn CO2?
http://notrickszone.com/wp-content/uploads/2015/06/Box-Fig-11.jpg
More:
http://notrickszone.com/2015/06/29/greenland-temperatures-weaken-theory-co2-drives-climate/#sthash.uXe6WS2L.oItaVklW.dpbs
More:
http://agwunveiled.blogspot.com/
I suggest you look at global temperature anomalies instead of one geographical location. When did the top of the Greenland ice sheet become a proxy for the entire globe?
In addition, the sensitivity is higher at lower temperatures and we expect more variability. At 1 K, the sensitivity is about 64 C per W/m^2, while at 287 K it’s only about 0.2 C per W/m^2. A second factor with an additional effect on the sensitivity is that when ice covers the surface, the effective negative feedback from the difference in reflectivity between the surface and clouds disappears. Of course, the very definition of the IPCC metric of forcing excludes the negative feedback effect from cloud reflection, which is why this is often overlooked.
“I suggest you look at global temperature anomalies instead of one geographical location. When did the top of the Greenland ice sheet become a proxy for the entire globe?”
I would, but that data is so manipulated and inconsistent that it is hard to put any faith in it. Given that CO2 evenly blankets the globe, I find it hard to understand how the impact of CO2 can vary much; in any model its impact should be a constant, unless for some reason the physics of CO2 absorption varies by geographical region.
Anyway, here is another question to ponder. How does CO2 impact daytime temperatures? How can CO2 be causing the record high temperatures?
This chart clearly demonstrates that CO2 is transparent to visible light, and visible light is what warms the earth. So how can record high daytime temperatures be caused by CO2? Once again, this chart should be shown to Congress and a Warmist should be forced to answer that question. The very foundation of this climate change theory is pure nonsense. The real evidence of the fraud is that climate scientists know this chart well, and yet they remain silent when the press and the president promote this lie. All evil needs to prevail in society is for good men to do nothing. The silence of climate “scientists” in the face of such outright lies speaks volumes about the integrity of climate “science.”
http://www.ces.fau.edu/ces/nasa/images/Energy/GHGAbsoprtionSpectrum-690×776.jpg
Joel Jackson wrote:
“I see your problem.
You were listening to the media instead of looking at the scientific literature.
..
Got anything better than blog links?”
I read your post claiming there was no consensus on global cooling in the 1970s. That research has about as much credibility as the claim that there is a consensus today for global warming…err, sorry, climate change. BTW, just look at the ice core data: can anyone show me a period where climate change wasn’t the norm? It would be abnormal if climate weren’t changing. The very fact that they claim climate change is abnormal proves this “science” is nonsense, and to claim man can change a pattern that has existed since the beginning of time is even more nonsense. To make matters worse, global cooling threatens mankind, global warming doesn’t, and we are almost certain to have another ice age; and never in 600 million years has CO2 resulted in runaway warming, not even when it was at 7,000 ppm.
While we are trying to apply common sense to an extremely difficult-to-define atmospheric physics problem, let us see what nature has done to solve the problem for us. All received solar energy must be balanced by radiation leaving Earth. Water vapor is the mechanism that does conservatively 90% of this job, other than the IR radiation window which sends 33–40 watts/m^2 directly to space. We ‘know’ that water vapor does the balance (by some estimates ~104 watts/m^2) since there is no other competing mechanism to do the job. Mountains of studies of the complex atmospheric physics addressing convection, radiation, condensation, rain, etc. have not reached a consensus on exactly how; the fact remains that water vapor does it. Water vapor’s NET effect is cooling in response to solar warming, and as such it is a negative feedback.
How could any small increment (even a few watts) added to water vapor production, including positive feedback vaporization at the surface ever be construed to suddenly become a warming effect on climate? Such an illogical conjecture boggles the logical mind.
Ronald,
If you want to know why “only a few watts” – normally held to be up to 2 W/m^2 – can have such a warming effect long-term, then do some simple physics for yourself. Assume the atmosphere is equivalent to 10 m depth of water (it is, almost) and that the heat capacity of water is 4.2 J/g/°C (not such a good match). Then try percentages of 100%, 10% and 1% of the heat going into this 10 m depth of water. You will soon see why the climate scientists are worried.
Better to do the sums for yourself, because then you are more likely to believe the answer. You could always put the working up here and hopefully receive gentle corrections.
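A rough sketch of that sum in Python, using the 10 m water-equivalent depth and 4.2 J/g/°C stated above and an assumed sustained 2 W/m^2 forcing (the figure mentioned above); the percentage splits are the ones suggested:

```python
# Warming rate of a 10 m water-equivalent layer under a sustained 2 W/m^2 imbalance (illustrative)
RHO_WATER = 1000.0       # kg/m^3
CP_WATER = 4200.0        # J/(kg K), i.e. 4.2 J/g/C as stated
DEPTH = 10.0             # m of water-equivalent (the stated assumption)
SECONDS_PER_YEAR = 3.156e7

heat_capacity = RHO_WATER * CP_WATER * DEPTH   # J/(K m^2), about 4.2e7

for fraction in (1.0, 0.10, 0.01):             # share of the 2 W/m^2 absorbed by this layer
    rate = fraction * 2.0 / heat_capacity * SECONDS_PER_YEAR
    print(f"{fraction:.0%} of 2 W/m^2 -> {rate:.3f} K per year")
```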
Here’s a calculation for you.
At 100% positive feedback, 100% of surface emissions would be captured by the atmosphere; half would escape to space and half would be returned to the surface, since the atmosphere emits over twice the area it absorbs from. Of this half returned to the surface, all is re-emitted and subsequently absorbed by the atmosphere, half of which escapes and half of which returns, and so on. If Pi is the post-albedo input power, the power entering (and in LTE leaving) the surface is Pi*(1 + 1/2 + 1/4 + 1/8 + …) = 2*Pi W/m^2. One W/m^2 of Pi to the surface results in 2 W/m^2 of emissions by it at 100% positive feedback, corresponding to a sensitivity of less than the lower limit claimed by the IPCC of about 2.2 W/m^2 of incremental surface emissions (0.4 C) per W/m^2 of incremental input.
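Written out, the series is just a geometric sum:

```latex
P_{\mathrm{surface}} = P_i \sum_{n=0}^{\infty} \left(\tfrac{1}{2}\right)^{n}
                     = \frac{P_i}{1 - \tfrac{1}{2}} = 2P_i .
```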
It would be interesting to hear you actually answer the fundamental question raised by this article, which is: where is the origin of the roughly 16 W/m^2 of incremental surface emissions that must be emitted for a 3 C average surface increase from only 3.7 W/m^2 of forcing? This is a gain in excess of 4, while the theoretical upper limit at 100% feedback is only 2.
While you’re at it, explain why each of the 239 W/m^2 of post albedo input doesn’t result in the 4.3 W/m^2 of surface emissions predicted by the consensus sensitivity, yielding a surface temperature close to the boiling point of water. How does the incremental gain get this high when the T^4 relationship between Planck emissions by the surface and temperature dictate that the incremental sensitivity must be less than the average for all previous W/m^2 of forcing and the average for all 239 W/m^2 of input is only 1.6 W/m^2 of surface emissions per W/m^2 of forcing resulting in a temperature of about 287K?
Christopher,
Consensus climate science conflates the EM energy transported by photons (Planck emissions) with the non EM energy transported by matter (latent heat, thermals, etc.). They must do this in order to provide the wiggle room to support a higher sensitivity and claim that the effects of latent heat and clouds provide significant positive feedback to amplify the sensitivity above that dictated by the slope of the Stefan-Boltzmann relationship, which most should agree is the zero feedback sensitivity.
The magnitude of the feedback required to amplify 1 W/m^2 into the 4.3 W/m^2 necessary to sustain a 3 C rise from only 3.7 W/m^2 of forcing is staggering. They support this by incorrectly mapping Bode’s control theory onto the climate. The error is that Bode’s analysis assumes an amplifier element with an external source of power that measures the input plus feedback to decide how much to deliver to the output. The climate amplifier is mostly the atmosphere, which consumes input and feedback power to produce its output power, and this COE constraint has never been accommodated by consensus climate science. This actually sets the absolute upper limit on what 100% positive feedback can do at 2 W/m^2 of incremental surface emissions per W/m^2 of forcing, which is well below the lower limit set by the IPCC of 1.5 C per CO2 doubling, or about 2.2 W/m^2 of surface emissions per W/m^2 of forcing.
A hurricane also contradicts the claim of massive positive feedback from latent heat and clouds, for if this were true, a hurricane would leave a trail of warmer water in its wake, rather than the trail of colder water we observe, which indicates negative feedback. A hurricane is just a self-contained version of the global heat engine that drives the weather, and of course the second law tells us that a heat engine cannot warm its source of heat.
Another problem is that the metric of forcing is ambiguous. The IPCC would call 1 W/m^2 of instantaneous incremental post-albedo input power 1 W/m^2 of forcing, but would also call 1 W/m^2 of instantaneous decrease in the transparent window from increased GHG’s 1 W/m^2 of forcing. The difference is that 1 W/m^2 of incremental post-albedo solar power passes right through the atmosphere, while in the steady state only about half of the surface power absorbed by GHG’s reaches space and the remainder goes back to the surface to make up the difference between emissions at its higher temperature and the available post-albedo input power.
George
George,
Believe it or not, climate scientists have had a pretty good handle on how the sunlight, IR and mass transfer of the atmosphere work since 1964, when Manabe and Strickler showed you can only reproduce the atmospheric temperatures by height from a simple model if you include both greenhouse gas radiative transfer and convection in the model. And the results are pretty good, given it’s an “average latitude” calculation that does not take latitude into account. See http://www.gfdl.noaa.gov/bibliography/related_files/sm6401.pdf
And they understand most of the processes down to a fine level of detail, which generally is not understood here. The training is available in various free online courses, but few here choose to avail themselves of it.
Sure there are some things which need to be understood better by climate scientists, and progress is being made as time goes on. But you do not need to understand absolutely everything in order to know what is going to happen when CO2 levels go up – 1964 Manabe and Strickler level of understanding is adequate for a first stab, which would not include the amplifying effect of increased water vapour.
As far as absorption through the atmosphere of sunlight and infrared radiation is concerned, there is a very important finding from very simple physics. If, through the atmosphere, the sunlight absorption rate were higher than the infrared absorption rate for atmosphere of the same density, then the earth’s surface would be colder than you would expect from the outgoing IR radiation at the top of the atmosphere.
But because the atmosphere is pretty transparent to sunlight but pretty opaque to IR, then the surface is warmer than you would expect (called the greenhouse effect).
It probably wouldn’t come as a surprise now to you to learn that if the rate of absorption of sunlight and IR were equal, then the surface temperature would be exactly equal to what you would predict – no warming or cooling. This isn’t because there’s no greenhouse effect, but because it would be exactly countered by a “fridge” effect in stopping sunlight getting to the surface.
Climate Pete,
The temperature profile in the atmosphere is largely irrelevant and is mostly a function of a gravity induced lapse rate. All that really matters, relative to the LTE response of the system to change, is what physical laws are relevant at the 2 boundaries of the atmosphere. One is the boundary with space and the other is the boundary with the surface. Without an atmosphere both boundaries are the same and I expect you to agree that in LTE, Pi=Po and the Stefan-Boltzmann Law precisely predicts what the average temperature would be and the slope of this relationship at some temperature is the sensitivity at that temperature.
At the top boundary, only COE matters and that in LTE, the average energy flux entering the planet must be equal to the average energy flux leaving the planet. The same holds true for the boundary at the surface, although there is also non EM energy entering from the surface that must be returned to the surface in some form (believe it or not, most is returned as warmed liquid water and not as radiation). The surface, whose intrinsic emissivity is near unity must also obey the Stefan-Boltzmann Law and the processes driving the change from one LTE state to another must obey the second law of thermodynamics. All three of these laws must be violated in order to support the high sensitivity claimed by the IPCC. How do you reconcile this failed test of the CAGW hypothesis?
To clarify, the three tests of CAGW where it fails to conform to physical laws are:
1) COE -> the required positive feedback is > 100% implying that new energy must be coming from somewhere.
2) Stefan-Boltzmann -> The incremental sensitivity must be less than the average for all W/m^2 that preceded.
3) Second Law -> If water vapor and cloud feedback exhibited net positive feedback, hurricanes would leave a trail of warmer water and not the trail of colder water observed.
http://www.theozonehole.com/images/atmprofile.jpg
co2isnotevil said: “The temperature profile in the atmosphere is largely irrelevant and is mostly a function of a gravity induced lapse rate.”
This is just not true. A gravity-induced lapse rate causes a reduction of temperature with height, but the temperature profile shows as many regions where temperature increases with height as regions where it decreases. So surely, with such a profile, there must be some significantly complex processes going on which you must understand before looking at the two interfaces.
He also said: “All that really matters, relative to the LTE response of the system to change, is what physical laws are relevant at the 2 boundaries of the atmosphere.”
The same physical laws apply everywhere in the atmosphere (and everywhere else for that matter).
If the greenhouse effect becomes larger due to more CO2 and/or water vapour, then this changes the temperature distribution. This has to change the energy flows at the two interfaces, because the temperatures are not the same.
The GHG effect never breaks any physical laws. It does not conjure up any new energy. All it does is stop a proportion of the energy coming in as sunlight at the top of the atmosphere (TOA) from escaping as infrared from the TOA by reradiating it down towards the surface.
Think of it as a transistor. A transistor does not create energy, but it allows a small control signal (CO2) at the base / gate to have a much larger effect allowing or restricting an energy flow which comes from the power supply – in the case of AGW by the IR coming originally from the surface.
Since there’s plenty of energy leaving the TOA there’s plenty of scope for CO2 warming causing increased water vapour in the atmosphere, and the increased water vapour then causing an increased GHG effect and a further temperature rise. All without creating energy anywhere because plenty is already present.
But the feedback from increased water vapour is less than 100%, because otherwise a small increase in the CO2 GHG effect would soon cause the atmosphere to lock up into a high temperature state. That doesn’t stop you having a multiplication factor greater than one i.e. water vapour causes a doubling of the temperature rise for a given rise in temperature due to the CO2 GH effect. But that isn’t the same as a feedback factor.
Sorry, I don’t get your point reading this in conjunction with your other post above. Certainly there aren’t any surfaces with unity emissivity – that would be a black body surface.
Could you rephrase the point in some other way please.
This doesn’t sound right.
Hurricanes get the energy to whip up huge fast atmospheric vortices directly from a very warm sea surface. If they remove energy from the sea surface it is surely going to cool. Further, once formed, the fast winds will also cause increased evaporation, which you would expect to reduce the temperature.
Before the hurricane forms you would expect a warmer sea surface temperature than normal, and the GH effect from increased CO2 may or may not have been a cause of the increased sea surface temperatures. But during the actual hurricane lasting some days but not months, greenhouse warming is far too slow a process to have any effect on what happens.
Climate Pete,
You misunderstood some key points. First, I never said the GHG effect violates COE, but that the consensus sensitivity that presumes massive positive feedback violates COE by requiring positive feedback in excess of 100%, where anything above 100% requires an external source of power (i.e. powered gain). For a surface at 287 K, increasing this by 0.8 C increases surface emissions by 4.3 W/m^2, which is 430% of the initial forcing. Can you say where the 3.3 W/m^2 are coming from without invoking positive feedback in excess of 100%? Of course you miss this because you implicitly ignore the relevance of SB to the surface temperature and its Planck emissions, and will likely deny that increasing the surface temperature by 0.8 C increases Planck emissions by 4.3 W/m^2 per the SB relationship. BTW, you do understand that the EQUIVALENT average surface temperature extracted from weather satellite data assumes an ideal black body surface, and that this remotely sensed equivalent temperature is very close to the average temperature measured by surface thermometers, implying that assuming unit emissivity is a reasonable approximation.
Referring to your transistor analogy, a transistor amplifier requires an external power supply and effectively measures the input to decide how much power to deliver to its output from its external supply. The climate has no such external supply, and the input power (and feedback power) are consumed to supply the output power. This mistake was originally made by Hansen in a 1984 paper applying control theory to the climate, and when Schlesinger ‘fixed’ some other mistakes in the Hansen paper he actually made it worse by breaking the only 2 things Hansen had right, which were quantifying sensitivity as the dimensionless ratio of output power to input power and conceptualizing the effects of GHG’s as gain rather than as feedback. The COE violation supporting consensus feedback ‘theory’ has never been corrected, as Hansen’s and Schlesinger’s erroneous papers have been cited as foundation science as far back as AR1.
While the planet, relative to the surface, has an emissivity close to 0.6, the surface itself has an emissivity very close to 1. This reduction in emissivity as seen from space is a property of GHG’s and clouds in the atmosphere but is not intrinsic to the surface itself. This plot shows how the average of many 10’s of billions of remote sensed measurements from weather satellites confirms that from space, the planet appears to be very close to an ideal gray body whose temperature is the average surface temperature and whose emissivity is about 0.6.
http://www.palisad.com/co2/fb/Figure1.png
Again, I stand by my assertion that all that matters for the LTE RESPONSE TO FORCING is the behavior at the boundaries of the atmosphere. This is classic black box modelling or reverse engineering where the unknown atmosphere is replaced with the simplest structure that has the correct behavior at its boundaries. As I have continued to say, consensus climate science adds all kinds of complexity to work around the problem that first principles physical laws can not support a high sensitivity.
If you still believe that clouds and water vapor/latent heat exhibit the massive positive feedback required to support the consensus sensitivity, how can you explain the surface cooling that results from a hurricane and which is a clear signature of net negative feedback?
Addressing the Second Law issue, under what circumstances will feedback from clouds, water vapor and latent heat become massively positive enough to support the consensus sensitivity, when a hurricane is just a spatially condensed version of the heat engine that drives all weather and exhibits the clear signature of net negative feedback on the surface temperature? Of course, all heat engines are subject to the Second Law, so what physics do you propose allows violating this law outside the confines of a hurricane?
TO BE CLEAR, I’m not saying that the GHG effect violates any of these laws but only that the high sensitivity claimed by consensus climate science violates these laws.
Now, I will explain the graph. The Y axis is power emitted by the planet and the X axis is the surface temperature. Each of the roughly 23K small red dots is the average emissions vs. average surface temperature for one month of data across one 2.5-degree slice of latitude. Roughly 3 decades of satellite imagery across 72 2.5-degree slices of latitude are represented. The larger dots are the averages for each 2.5-degree slice across the entire data set. Blue dots are the N hemisphere slices and green dots are the S hemisphere slices, although they align closely with each other and mostly appear as black dots. The slope of the best fit to the data at 287 K is about 0.3 C per W/m^2 and represents an upper limit on the sensitivity. The radiative transfer model used for determining the surface temperature is the relatively simple one used by Rossow to produce the ISCCP data set. My tests using a more accurate HITRAN-driven line-by-line model don’t make much difference and, if anything, the results get even closer to that of an ideal gray body.
If we superimpose the post albedo input power vs. surface temperature across the same slices of latitude, the results get even more interesting.
http://www.palisad.com/co2/fb/Figure3.png
The slope of this relationship is only about 0.2 C per W/m^2 and is aligned with the theoretical behavior of an ideal gray body shown by the magenta line, which is the unit-emissivity SB relationship at the average surface temperature shifted up and to the left by the gain of 1.6, the reciprocal of the measured emissivity of about 0.62.
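As a quick numerical cross-check of that 0.62 figure:

```latex
0.62 \times \sigma T^{4}\big|_{T = 287\,\mathrm{K}}
  = 0.62 \times 5.67\times 10^{-8} \times 287^{4}
  \approx 239\ \mathrm{W\,m^{-2}},
```

which is essentially the post-albedo input power quoted elsewhere in this thread.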
Keep in mind that this is not the highly processed, homogenized sparse data whose speculative interpretations are all that can support CAGW, but real data with nearly 3 decades of continuous and complete coverage of the entire planet where planet emissions are directly measured by satellite sensors and the post albedo input is trivially calculated from visual imagery. Moreover; all of the processing of the data in these graphs was done by GISS. My plots only present their data in a more revealing form.
It’s illogical to deny that the data precludes a high sensitivity, as it unambiguously supports a low one, yet somehow this is consistently denied by consensus climate science.
co2isnotevil,
Let us just concentrate on the one key point, which is the source of the energy for increased surface IR emissions after water vapour GH feedback has kicked in. That is where the misunderstanding is.
You are correct that if the surface warms it will emit more IR. The Stefan-Boltzmann law will give a reasonable indication of the scale of the extra IR emissions, though ocean (non-ice) emissivities / absorptivities (which must be identical) are below 90% up to 20um, and there is a lot of ocean. But the best way to approach the problem is from cause and effect.
Firstly, the energy available to the surface to emit as IR (and other things) comes from two major sources:
– incoming sunlight, not changed much by increasing CO2 (maybe some by cloud changes)
– downwelling IR, of which most comes from the GH effect
When additional CO2 is added to the atmosphere, you hopefully agree it causes more downwelling IR, which eventually increases the surface temperature. Increasing surface temperature (from SB) increases the upwelling IR plus convection and sensible upwards heating. The increase in downwelling IR is at the expense of the IR which exits at the top of the atmosphere and is reduced.
This means that the increase in surface temperature due to CO2 is not the result of more energy being created. It is the same energy going around the surface emission / upwelling IR / GH effect reflection / downwelling IR / surface absorption route, except it is doing slightly more cycles around this route, which means the power emitted from the surface will be increased.
As the surface and atmospheric temperatures increase due to the CO2 GH effect, this enables more water vapour to persist in the atmosphere, because the saturated water vapour pressure is higher with the higher temperatures. Since water vapour is also a GHG, then the downwelling IR once more increases. You get exactly the same effects as for initial CO2 increase above.
The fraction of upwelling IR reflected increases due to the increased GHG atmospheric content, becoming downwelling IR, again at the expense of IR leaving the top of the atmosphere. So there is a further increase in both the upwelling IR power and the downwelling GH IR power. But because the outgoing TOA IR is being further reduced by the water vapour GH effect, there is less energy leaving the system. Therefore the increased surface temperatures and surface emission powers are due solely to retaining a higher fraction of the incoming sunlight energy in the system, and not due to any breaking of the first law.
So perhaps a better analogy than a transistor would be a microwave cavity. I apologise if you are not into microwave cavities and will think of another analogy in due course.
Assume a constant generator power, but initially the generated frequency does not match the cavity resonant frequency. The losses will be high and the microwave energy density in the cavity will be low. But if you change the frequency to bring it in line with the cavity resonant frequency, then the microwaves are reflected more times before being absorbed, and the microwave energy density increases. This increase has nothing to do with an increase in the power of the microwave generator, purely that the losses due to detuning are reduced.
So it is with the greenhouse effect. The incoming sunlight gets converted to IR at the surface, and can then go round the surface emission / upwelling IR / GH effect reflection / downwelling IR / surface absorption route more times with an increased GH effect.
Note that the above process does not depend on whether the water vapour amplification factor is less than one or more than one. It is purely a matter of how many times the IR can go round the loop, which is dictated by the GH effect reflection fraction of IR back downwards. And in turn the number of times the IR goes round the loop determines the energy density and therefore the surface temperature and emission rates.
So, in response to the question above, the answer is:
The number of times IR goes around the loop is not high. In fact from a previous post of mine it is (342 – 92)/ 398 = 250/398 which is around 0.63. The point is that any increase in this figure will inevitably cause an increased surface temperature and surface emissions without requiring any magic energy addition.
The only source of input power is the Sun. Yes, some fraction of the surface emissions absorbed by the atmosphere are returned to the surface, and this makes up the difference between the 239 W/m^2 of post-albedo input from the Sun and the 386 W/m^2 emitted by the surface. The remainder exits into space to make up the difference between the surface and cloud emissions passing through the transparent window of the atmosphere and the required 239 W/m^2 of LTE output emissions. For the combined EM energy of photons emitted by the surface and absorbed by GHG’s and the water in clouds, about half must be returned to the surface and the remaining half exits to space in order for LTE to be achieved. This is consistent with energy entering the atmosphere across half the area over which energy is leaving the atmosphere. This also indicates that non-EM power entering the atmosphere (latent heat, thermals, etc.) has no effect on LTE and must be exactly offset by additional power returned to the surface, mostly in the form of rain and weather. If it did have an effect on LTE, offsets would be required to make the system balance, and yet no offsets are required to fit the measured data when only EM energy is considered relative to the EM balance of the planet.
Yes, the extra power comes from the return of power entering the atmosphere from the surface, but as you point out, it only goes around 0.62 times, resulting in a sensitivity of 1.62 W/m^2 of surface emissions per W/m^2 of input forcing, and not the 4.3 W/m^2 of incremental surface emissions per W/m^2 of forcing claimed by the IPCC. The same W/m^2 going around and around assumes that all the power absorbed by GHG’s is returned to the surface, when the data tell us that only about half of what is absorbed is returned to the surface. Again, this is a consequence of the errors made when applying control theory to the climate and failing to account for the COE constraint arising from the lack of the powered gain that Bode’s model, on which it is based, otherwise assumes.
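As a consistency check on those figures:

```latex
239\ \mathrm{W\,m^{-2}} \times (1 + 0.62) \approx 387\ \mathrm{W\,m^{-2}},
```

which is close to the ~386 W/m^2 of surface emissions quoted above.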
Certainly, incremental CO2 has an effect on the amount of energy captured by the atmosphere and, according to line-by-line analysis, doubling CO2 decreases the power passing through the transparent window by about 3.7 W/m^2 – nowhere near the more than 16 W/m^2 that would need to be supplied to support a 3 C increase in the average surface temperature. Moreover, only half of this is available to be returned to the surface, as the remainder eventually escapes out into space.
Reading these comments makes me realize that I’m in a den of physics aficionados. To head off the need for endless mea culpa responses, I may need to state the obvious: I was taking the tack of LM of B in the opening conjecture, that of accepting the IPCC claims of CO2 forcing a positive water vapor feedback. And even if it did result in additional water vapor, it would ultimately result in negative climate feedback through the normal ongoing hydrologic-cycle processes.
First, I do not agree that CO2 is a forcing parameter in this surface boundary layer, since it has no additional source of energy to add to the system. It picks up its additional share of the 15 µm spectrum, where it is immediately thermalized within tens of meters of the surface owing to the short mean free path of IR/CO2 at this density. This results in IR capture and boundary warming at a slightly lower height than before CO2 doubling. To the extent that this warming might be transmitted to the water surface, it could contribute to additional water vaporization, but the flow of available energy through the surface mass convection/conduction resistance will limit the energy rate to the surface and prevent any additional total vaporization from taking place. I still believe in the law of conservation of energy, no matter how confusing the understanding of the physics may be.
After convection of the water vapor to altitudes where condensation, freezing and heat release can take place, there is the possibility of ‘greenhouse’ warming due to the much-described raising of the 15 µm spectrum radiation to higher altitudes along the negative lapse-rate temperature line. However, given the 15 µm spectrum radiation temperature measurement of 217 K, i.e. the tropopause temperature, one must ask just how much greenhouse effect is taking place. In any event, because of the aforementioned energy limits there will be no additional water vapor to enhance the effect.
This is another place where the conflation problem I mentioned adds confusion. The kinetic temperatures of atmospheric CO2, N2 and O2 are all the same, and this is a property of matter and its collisions. A thermometer will respond both to collisions (kinetics) and to absorbed photons. In principle, the power behind the two manifestations of temperature can perform equivalent amounts of work, but other than that, they are independent. Think of shining a laser through the atmosphere at a LWIR wavelength to which the atmosphere is completely transparent. While the laser is on, a thermometer in its path will read a higher temperature, but will immediately return to the kinetic temperature once the laser is turned off. If not for this property, laser weapons would be useless.
I’m sorry, has anyone been able to explain how CO2 can cause record high daytime temperatures? That to me seems to be a very very very basic question, but unless I’ve missed it, I don’t think anyone has an answer…yet.
CO2 doesn’t directly cause record high daytime temperatures. In fact CO2 actually warms the nightime temperatures more than the daytime, and the winter temperatures more than the summer, because the sunlight obviously builds up the temperatures faster than IR emissions lower then, so CO2’s most marked effect is to stop the heat from leaving the surface once the sun goes down or is less powerful in winter.
Record high maximum (daytime) temperatures are thus less common than record high minimum (nighttime) temperatures.
But obviously if you start the day with a higher minimum temperature, then you are more likely to go on to develop a record high maximum temperature, which, for some funny reason seems to be the only thing anyone cares about.
Incidentally we had one yesterday in London – 37 degrees C at Heathrow is a record for London in July, and I have to say it was unbearably hot walking to the tube (underground, subway) and I had to wear a hat.
People simply have to stop saying dumb things like “…CO2’s most marked effect is to stop heat from leaving the surface…”. Build your arguments on actual physics. And don’t say “You know what I meant.” If you meant to say “…added CO2 slows the cooling rate…”, then actually say that. The whole “humans are destroying the Earth” meme is built on inaccuracies like this and I’m tired of it.
Go see Goddard’s Real Science site. The frequency of plus 90, plus 100 events has dropped drastically the past few decades. The record highs are NOAA’s delusions.
I don’t think Goddard’s site is peer reviewed.
So? Peer reviewed, or pal reviewed like all the CAGW work? On blogs everybody takes a shot, not just co-authors and good buddies. Goddard is reviewing NOAA & NCDC data which has obviously been “adjusted.”
The point about peer review is to do with the scientific consensus on climate science. A consensus consists of three things.
1. A correspondence of published results identifying that AGW is happening, across a wide variety of methods used.
2. An agreement across climate scientists in different countries and different climate science disciplines as to what constitutes AGW and how it should be measured, e.g. radiative forcing, ocean heat content, surface and stratospheric temperatures etc.
3. A set of people working on it who agree about it and are from different politics, countries, races.
And that is why the national science academy of every country which has one (and this includes the Vatican which has the Pontifical Academy of Science) has put out a statement saying that AGW is for real and we should do something about it.
And here is a very telling point. Going back 10 or 15 years the weather forecasters were about 50:50 on whether AGW was real. Because weather forecasts were only good for limited times they had little formal training on certain long-term weather effects. However the advent of much faster supercomputers enabling ensemble weather forecasts has increased the time period over which forecasts are accurate. This required the weather forecasters to mug up on those longer-term weather effects. As part of the learning process they have generally looked at AGW too.
And guess what! Nowadays the vast majority of weather forecasters accept AGW, and some even work it into their forecasts to better inform the public. That includes those who did not accept AGW before.
In other words, the more a bunch of scientists understands the detail of climate and longer-term weather, the more likely they are to accept AGW.
In order to reject AGW you have to believe that a) the vast majority of the experts (those who have been working in the field for a long time and have a number of climate science publications to their name) are lying or stupid. Not likely. b) All the science academies of the world are wrong. Not likely. c) The more you know about climate science the more wrong you get. Not at all likely.
I only ask because “they” are full of crap & I want everybody to see it. Not that it matters. Crap sells.
“Could you rephrase the point in some other way please.”
“3) Second Law -> If water vapor and cloud feedback exhibited net positive feedback, hurricanes would leave a trail of warmer water and not the trail of colder water observed.”
“This doesn’t sound right.”
IPCC gives clouds a –20 W/m^2 RF. That’s cool & kewl.
I should point out that cloud feedback isn’t always negative. Below 0C when the surface is covered by ice and snow, increasing clouds warm the surface. Above 0C, increasing clouds cool the surface because incremental reflection is a larger effect than incremental trapping of surface heat by clouds. As a result, the sensitivity is higher at lower temperatures (not withstanding the the additional effect of the T^4 relationship between temperature and input/output power). There being far more surface area > 0C than < 0C, the average equivalent feedback from clouds is net negative.
http://www.palisad.com/co2/sens/st_ca.png
The cloud model I've found that most accurately predicts the measured cloud properties is one where cloud volume increases monotonically with the surface temperature and water vapor as the ratio between cloud height and cloud area dynamically adjusts to achieve the steady state equilibrium that results in the warmest possible surface temperature given the input power constraints and average atmospheric properties. CO2 concentrations affect the average atmospheric properties so doubling it is easily quantifiable and results in an equivalent forcing of less than 3.7 W/m^2 resulting in an even smaller LTE effect on the surface than even most skeptics will claim.
Many of the speculative estimates of a high sensitivity apply the same broken homogenization techniques to extrapolate polar sensitivities to the tropics. Keep in mind that at 1 K, the Stefan-Boltzmann sensitivity alone is about 65 C per W/m^2; at 255 K it’s about 0.3 C per W/m^2; and at 287 K it’s about 0.2 C per W/m^2. Adding the effects of clouds at temperatures below 273 K significantly increases the apparent local sensitivity to incremental solar input. At 0 C, the sensitivity is very high owing to melting/forming ice, but this also represents a tiny fraction of the total surface and cannot be extrapolated to represent the whole planet, though it often is.
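For the two temperatures most relevant here, the zero-feedback Stefan-Boltzmann slope dT/dF = 1/(4σT³) is easy to check (a sketch; unit emissivity assumed):

```python
# Zero-feedback sensitivity dT/dF = 1 / (4 * sigma * T^3) for a unit-emissivity surface
SIGMA = 5.67e-8  # W m^-2 K^-4

def planck_slope(T_kelvin):
    """Change in temperature (K) per W/m^2 of forcing at temperature T, with no feedbacks."""
    return 1.0 / (4.0 * SIGMA * T_kelvin ** 3)

for T in (255.0, 287.0):
    print(f"T = {T:.0f} K -> {planck_slope(T):.2f} K per W/m^2")   # ~0.27 and ~0.19
```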
Cloud feedback is probably very mildly positive
See http://www.skepticalscience.com/clouds-negative-feedback.htm
High level clouds provide positive feedback. Low level clouds provide negative feedback. There is some evidence that increasing temperatures means less low-level cloud, but the evidence is not conclusive at present.
Either way, the evidence is that the effect is not large.
Climate Pete,
Yes, the net effect from water vapor, clouds and latent heat is very small, and the net feedback acting on the climate system is close to zero, although it is both quantifiably and demonstrably slightly negative (sorry, but SS carries little credibility with me). At zero feedback, the climate sensitivity is the slope of the Stefan-Boltzmann relationship at the current average surface temperature, or about 0.2 C per W/m^2, which is why the measured sensitivity is about 0.2 C per W/m^2.
I expect you to agree that for an Earth-like planet with no atmosphere, or even an atmosphere with no GHG’s or clouds, the Stefan-Boltzmann LAW would exactly quantify the average temperature of the surface relative to total input forcing, and the slope of this relationship would be the sensitivity of the surface temperature to forcing. Thus the zero-feedback sensitivity is the slope of the SB relationship at the current average temperature.
co2isnotevil,
I said clouds. I did not say water vapour.
It’s pretty clear from physics that the warmer the atmosphere is the more (gaseous, invisible) water vapour can reside in it. This is hardly new science.
And since you and I both understand that all the water vapour in the atmosphere causes around 3x the GH effect that all the CO2 does, we both understand that more water vapour has to mean more GH effect.
Since this represents a very distinct mechanism for amplification of the AGW effect from CO2, the onus is on those who feel it can be ignored to justify why the mechanism just does not apply.
And it is never a matter of credibility. It is about physics and physical mechanisms.
While I agree with you on planets with no atmosphere, ours most certainly does have an atmosphere with a dynamic and continuous IR radiative energy flux both up and down which provably does change the temperature profile with height.
You accept that the relationship between forcing and the surface temperature for an Earth with no GHG’s or clouds in its atmosphere is completely specified by the SB Law and that this results in a sensitivity of less than 0.2 C per W/m^2. You seem to agree that the net feedback from clouds/weather is low, so the net feedback from water vapor, clouds and latent heat (all the things that manifest the heat engine driving weather) must also be low. You can’t separate the effect of water vapor without considering all of the other offsetting effects, for example, reflection by incremental clouds. Examining the heat engine that drives weather shows us the net of all of these effects combined.
You seem to be overwhelmed by the apparent complexity of multi-path fluxes traversing through the atmosphere. I surely understand as this confusion is purposeful based on how the consensus has framed the science. If you stick to the electromagnetic behaviors at the boundaries and extract a transfer function for how surface emissions/temperature change in response to solar input, this confusion quickly fades away and the reality of a low sensitivity is unavoidable.
You still haven’t explained why the high sensitivity claimed by the IPCC can violate so many first principles laws, other than by blindly claiming that they do not, even in the face of overwhelming evidence to the contrary.
co2isnotevil,
I’ve put together a very simple spreadsheet model of the flows, assuming every flow into the atmosphere is split by the GHG reflection factor: that fraction goes downwards as IR and (1 – that fraction) goes upwards. It shows that for a 6W initial RF at the TOA the rise in temperature at the surface (calculated from the SB surface output power required to restore the equilibrium) is 2.6 degrees C. This is 0.45 K / W / m^2 – twice your figure. If you allow for 90% emissivity (based on mostly ocean) then the figure becomes 0.5 K / W / m^2. This seems to be in line with a temperature rise of 2 degrees C.
The model input is the GHG reflection coefficient, which changes both RF and temperature rise.
I’ll change it to a more accurate spreadsheet model handling the surface emissivity and absorption coefficient properly, then release the model.
However, your claim that sensitivities above 0.2 K / W / m^2 have to break COE is not looking good at this point.
Without thinking about it too much it does seem as if your results are out by a factor of 1 / (1 – 0.6) = 2.5, which must have to do with the fraction of upwelling IR which is reflected back to the surface by the GH effect.
Are you among those who deny the conflict of interest at the IPCC, or are you among those who, more dangerously, consider it a necessary means to an end? Do you deny that if the physics is correct and mankind’s CO2 emissions have an effect somewhere between benign and beneficial, the IPCC has no reason to exist, or do you simply deny the immutable physical laws that conclusively support a sensitivity far less than needed to result in catastrophic consequences?
You seem to think that because the IPCC exists, CAGW must be important when the only reason consensus climate science foolishly believes that CAGW is important is because the IPCC exists. If it never existed, the fear of CAGW would be a distant memory and the science would have been settled long ago. It’s an abomination of science that we allowed a political body to interfere and in fact direct scientific progress. Once partisan politics chose sides, any sense of objectivity completely disappeared. Maybe you’re not a scientist and this doesn’t bother you. I am and it bothers me to no end and my sole motivation here is to free science from the corruption of politics.
I suggest you review the charter of the IPCC, which by design has driven the climate science consensus since its inception. They only support and summarise science consistent with their reason to exist and have systematically excluded or denigrated science that is not. The decades of bias have turned climate science into a joke, tainted peer review and publishing (who wants to do climate science that the IPCC doesn’t recognise?) and used lies and misstatements of fact to foment political divisiveness, all in an effort to justify redistributive economics through climate reparations. The IPCC is engaged in the crime of fraud against humanity and those behind this fraud should go to jail.
Regarding GCM’s, I’ve developed, debugged and validated more models of more kinds of systems than you can imagine, including feedback control systems that make the climate system look trivial by comparison. When a model doesn’t converge to the same answer from different initial conditions, the model is surely broken. The climate model I work from has no assumptions, no arbitrary coefficients, strictly follows physical laws and converges to the same final state regardless of initial conditions. Until you can point to a GCM with these properties, they have no interest to me. The typical GCM has so many dials and arbitrary coefficients that you can get any answer you want to see. GCM’s are barely good at predicting the weather a week out, and that’s what they were developed for. Do you really think the divergence problem doesn’t affect multi-decade simulations of the weather?
co2isnotevil,
Forget the emissivity – fourth root of 0.9 isn’t going to make any difference.
Here’s a published Google Drive version of the model. Hopefully you can update it.
https://docs.google.com/spreadsheets/d/1zgogjC8LUzP1jV1jM_slkhAYGLWTeURBqvSOgAP0Cso/pubhtml?gid=1870701629&single=true
It's not the fourth root of 0.9 that’s important, but the non-linearity of the T^4 relationship that is being ignored. Consensus climate science supports expressing a sensitivity as degrees per W/m^2 by claiming that the relationship around the current temperature and emissions/forcing is approximately linear. This is certainly true, but they made a novice mistake and assert a sensitivity based on the ratio between the current operating point and the origin, rather than one based on the slope of the relationship at the current operating point.
239 W/m^2 / 287K = 0.82 C per W/m^2
slope of SB at 287K = 0.2 C per W/m^2
This is the kind of mistake I would expect from a high school physics student and not from ostensibly intelligent scientists. Few believe that scientists could make this kind of silly mistake, which is why many have trouble believing how incredibly wrong consensus climate science really is.
The spreadsheet you sent is nonsense. Where does 450 W/m^2 of upwelling emissions come from when the surface temperature is nowhere near the 298K required to emit that much? Are you assuming that latent heat and other non-radiant forms of energy are subject to GHG absorption? Also, where does the 6 W/m^2 come from? Line-by-line simulations of the standard atmosphere tell us that doubling CO2 reduces the transparent window by about 3.7 W/m^2, resulting in only 1.85 W/m^2 of equivalent forcing from the Sun.
It really seems to me that you can’t get past the obfuscation perpetrated by Trenberth where he conflates the energy transported by photons (EM energy) with the energy transported by matter (non EM energy) relative to establishing the radiant (EM) balance of the planet.
If you consider only the photons,
The surface emits 385 W/m^2 of LWIR photons
The incident power is 239 W/m^2 of visible photons
The deficit entering the surface is 146 W/m^2
Accounting for emissions from clouds and the fraction of surface emissions that pass through clouds, the transparent window of the atmosphere is about 93 W/m^2 of LWIR photons.
The deficit exiting the planet is 239 – 93 = 146 W/m^2
For a transparent window of 93 W/m^2, 385 – 93 = 292 W/m^2 of surface emissions are being absorbed by GHG’s and clouds.
The 146 W/m^2 deficit to the surface plus the 146 W/m^2 deficit into space is 292 W/m^2 and exactly equal to the amount of surface emissions absorbed by the atmosphere where half escapes to space and half returns to the surface.
If the non EM flux between the surface and atmosphere made any difference at all, an offset would be required in order for balance to be achieved. No such offsets are required.
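To make that bookkeeping easy to check, here is a small Python sketch using only the figures quoted above (385, 239 and 93 W/m^2). The even up/down split of absorbed emissions is the comment's claim; the arithmetic is consistent with it but does not by itself prove it.

surface_emission = 385.0   # LWIR photons emitted by the surface, W/m^2
solar_input      = 239.0   # post-albedo solar input, W/m^2
window           = 93.0    # surface emissions escaping directly to space, W/m^2

absorbed_by_atmosphere = surface_emission - window        # 292 W/m^2
deficit_at_surface     = surface_emission - solar_input   # 146 W/m^2
deficit_to_space       = solar_input - window             # 146 W/m^2

# The two deficits sum to exactly the absorbed emissions, so a 50/50 split
# (146 W/m^2 back to the surface, 146 W/m^2 to space) closes both budgets.
assert abs(absorbed_by_atmosphere - (deficit_at_surface + deficit_to_space)) < 1e-9
print(absorbed_by_atmosphere / 2)   # 146.0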
Minor correction. The 93 W/m^2 transparent window is from surface emissions from cloudless skies and the fraction of surface emissions that pass through clouds. Cloud emissions that pass through the transparent window are accounted as part of the absorbed surface emissions that are eventually emitted into space. The calculation of the net transparent window is as follows:
fraction of surface covered by clouds = 0.67 (from ISCCP)
fractional width of the transparent window for the clear sky = 0.47 (from HITRAN based simulations)
average fraction of surface emissions that passes through clouds = 0.275 (from ISCCP)
net transparent window fraction
((1 – .67) + .67*.275) * .47 = .2417
385 * .2417 = 93 W/m^2
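The same arithmetic as a short Python sketch, using the ISCCP/HITRAN-derived fractions quoted above:

cloud_fraction   = 0.67    # fraction of surface covered by clouds (ISCCP)
clear_sky_window = 0.47    # fractional width of the clear-sky transparent window (HITRAN-based)
cloud_transmit   = 0.275   # average fraction of surface emissions passing through clouds (ISCCP)
surface_emission = 385.0   # W/m^2

net_window_fraction = ((1 - cloud_fraction) + cloud_fraction * cloud_transmit) * clear_sky_window
print(round(net_window_fraction, 4))                    # 0.2417
print(round(surface_emission * net_window_fraction))    # 93 W/m^2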
co2isnotevil,
OK, the Excel version of the spreadsheet did not start off in the right state, and did not contain the simple instructions which were in the Google Drive version, but that was not updateable.
Here is a correct version http://api.ning.com/files/wElhaJF9fXKRHZ-FY15ZBHqq78OgIptC0hihYW2WoaVr1BBF7svOsx11Os4Oc5lhAesu*imSVp5ugY3wOqExz3YydJdgk2Sl/SurfaceEmissionsByGHGReflectionCoefficientV3.xlsx
You have to use it by changing values to work out sensitivity. You can’t just read it off a single version.
The spreadsheet’s assumption is that the energy entering the atmosphere gets redistributed down with a GHG reflection coefficient, and up with one minus the GHG reflection coefficient. The calculation for more CO2 or water vapour is straightforward – you change the GHG reflection coefficient upwards to a value of your choosing. It doesn’t particularly matter what you choose as long as it is a small change. My choice was to increase GHG reflection from 0.6 to 0.61.
The first thing that then happens is that you can record the RF which that causes. This figure started off as zero with GHG reflect = 0.6, but on increasing it to 0.61 the RF becomes 6 W/m^2.
Now you have to change the spreadsheet to reach the equilibrium ECS state. You do this by changing the value in cell E9 until either it matches C16, or the RF has been reduced to zero.
Then the equilibrium IR upwelling is used as input to the inverse of the Stefan-Boltzmann law, which you know all about, to predict what the new surface temperature has to be to give the new upwelling IR energy. In the example the new surface temperature is 295.5 K. Helpfully the original temperature is in the cell below and is automatically subtracted from the new temperature to give the value in the red box. In this case the answer is 2.65 degrees. So the climate sensitivity is 2.65 / 6 = 0.44 (K / W / m^2).
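For readers following along, here is a minimal Python sketch of that inverse Stefan-Boltzmann step. It is not the spreadsheet itself: the 390 W/m^2 baseline and the 6 W/m^2 flux change below are placeholder inputs chosen only to illustrate the flux-to-temperature conversion, with the local-slope approximation shown for comparison.

SIGMA = 5.67e-8  # W m^-2 K^-4

def temp_from_flux(F):
    # Invert F = sigma*T^4 to get T in K (emissivity 1)
    return (F / SIGMA) ** 0.25

F_old = 390.0   # roughly the consensus surface emission, W/m^2
dF    = 6.0     # an assumed change in equilibrium surface upwelling, W/m^2

dT_exact = temp_from_flux(F_old + dF) - temp_from_flux(F_old)
dT_slope = dF / (4.0 * SIGMA * temp_from_flux(F_old) ** 3)   # local-slope approximation

print(round(dT_exact, 2), round(dT_slope, 2))   # ~1.11 K either way for a small change

A 6 W/m^2 change applied directly to the surface upwelling gives only about 1.1 K, so a 2.65 K result implies the spreadsheet turns the 6 W/m^2 TOA forcing into a considerably larger change in surface upwelling; whether that amplification is justified is exactly the point in dispute between the two commenters.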
This use of Stefan Boltzmann is exactly what you were requiring, so this is what happens here.
Since the spreadsheet implements both conservation of energy and the Stefan-Boltzmann calculation of surface temperature, it proves that a climate sensitivity of 0.44 K / W / m^2 can be achieved without breaking any laws of physics.
This conclusion is directly contradictory to your claim that the sensitivity has to be 0.2. It is a very simple spreadsheet which just implements the flows given in the pretty flow diagram posted many times by me above.
Ensure you have a play with the spreadsheet, check that it works, and take the time to understand how it works because it is the basis for my claim that 0.2 is not the relevant climate sensitivity given by applying SB properly to the radiation flows in the climate system.
Note too, that the climate sensitivity does not depend much on the change in the GHG reflection coefficient, provided the changes are small. Using 0.63 instead of 0.62 gives a sensitivity of 0.45 instead of 0.44, for instance.
Any approach to calculation of climate sensitivity that ignores the impact of the internal climate GH effect flows is not correct. The spreadsheet takes these flows into account.
Climate Pete,
I’ve downloaded your spreadsheet and will debug it for you when I get a chance. Clearly there’s an error if you think you are getting a sensitivity more than about 0.2C per W/m^2.
George
Climate Pete,
I’ve had a chance to look at your spreadsheet and have identified several errors.
When you apply the nonsense parameter ‘GHG REFLECTION’ to the total input to the atmosphere, you are inferring GHG effects on things like latent heat, thermals and other forms of energy that are not in the form of photons and not subject to GHG absorption or GHG reflection, whatever that is. The actual amount of surface emissions from GHG absorption that is returned to the surface is 1/2 of what was absorbed, and the remaining half exits to space. This is dictated by geometric properties and I verified this conclusively with real data in a previous posting. Why do you have such a problem understanding that only radiant energy matters for the radiant balance of the planet? Trenberth conflates these because if he didn’t, he would have no wiggle room to make a case for an otherwise impossibly high sensitivity.
You keep repeating the same errors over and over and I keep having to point them out.
Another error you made is to lump together power passing through the transparent window of the atmosphere with GHG captured power that eventually escapes to space. These are completely different from each other relative to GHG effects.
George
Another error in your spreadsheet is one commonly made by those pushing the CAGW POV. You are increasing the deficit at the TOA and decreasing the effective emissivity independently. This is counting the effect twice. Remember that the IPCC quantification of the effect of CO2 as equivalent forcing means that only forcing changes and all else remained constant, i.e. forcing is an equivalent change in post albedo input power keeping all else constant. You can either change the absorption (part of what you called GHG reflection) or increase the deficit at TOA keeping the absorption constant by increasing solar input. You can’t do both.
Here is a spreadsheet that works. It’s pretty self-explanatory.
http://www.palisad.com/co2/corrected.xlsx
george
co2isnotevil,
Google Drive version of climate sensitivity model is not updateable, so here is a download link to the raw Excel spreadsheet.
http://api.ning.com/files/oTw*v75BEJEs0sKW5eoGpcoMJXM93kkN0VqalfyIYN6xRDm5FxvV-E4CQb0jwFZ7eZ4S6bP8dNR9RzVn5d5vSMC4qL4Ldy5U/SurfaceEmissionsByGHGReflectionCoefficient.xlsx
Climate Pete,
I’ve updated the link at
http://www.palisad.com/co2/corrected.xlsx
to include reporting differences from nominal and added some comments specifying values to use for emulating 3.7 W/m^2 of solar forcing and a 3.7 W/m^2 decrease in the width of the transparent window (GHG forcing). Note that the IPCC considers both to be equivalent to forcing, yet they have significantly different effects!
I took the time to understand your spreadsheet well enough to find the errors, please see if you can identify errors in mine. You may need to consult with your friends at SS or RC, but be forewarned, once you understand my spreadsheet well enough to look for errors, you will have no choice but to believe that the IPCC’s consensus is wrong and the skeptics are right.
George
“These questions have been settled by science.” Surgeon General
IPCC AR5 TS.6 Key Uncertainties. IPCC doesn’t think the science is settled. There are a huge number of known and unknown unknowns.
According to IPCC AR5 industrialized mankind’s share of the increase in atmospheric CO2 between 1750 and 2011 is somewhere between 4% and 196%, i.e. IPCC hasn’t got a clue. IPCC “adjusted” the assumptions, estimates and wags until they got the desired mean.
At 2 W/m^2 CO2’s contribution to the global heat balance is insignificant compared to the heat handling power of the oceans and clouds. CO2’s nothing but a bee fart in a hurricane.
The hiatus/pause/lull/stasis (which IPCC acknowledges as fact) makes it pretty clear that IPCC’s GCM’s are not credible.
The APS workshop of Jan 2014 concluded the science is not settled. (Yes, I read it all.)
Getting through the 1/2014 APS workshop minutes is a 570-page tough slog. During this workshop some of the top climate change experts candidly spoke about IPCC AR5. Basically they expressed some rather serious doubts about the quality of the models, observational data, the hiatus/pause/lull/stasis, the breadth and depth of uncertainties, and the waning scientific credibility of the entire political and social CAGW hysteria. Both IPCC AR5 & the APS minutes are easy to find and download.
IPCC most certainly does not say mankind’s share of atmospheric CO2 is between 4% and 196%. It says mankind is responsible for virtually all the rise from 280 ppm to 400 ppm.
That does not mean there is not a rapid dynamic exchange of CO2 between atmosphere, biosphere and oceans. But generally it is a net zero exchange.
2W/m^2 sounds small until you start to work out how much surface and atmospheric temperatures would rise if only a few % of this went into heating them. Do the sums for yourself.
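As a back-of-envelope version of that sum in Python (the column heat capacities and the "few %" fraction below are illustrative assumptions, not figures from the comment or the IPCC):

SECONDS_PER_YEAR = 3.15e7

forcing       = 2.0     # W/m^2, the figure under discussion
fraction_kept = 0.03    # assume "a few %" of the forcing directly heats air/surface

atmos_heat_cap = 1.0e7  # J m^-2 K^-1, rough heat capacity of the whole air column
mixed_layer    = 4.2e8  # J m^-2 K^-1, roughly a 100 m ocean mixed layer

energy_per_year = forcing * fraction_kept * SECONDS_PER_YEAR   # J/m^2 per year
print(energy_per_year / atmos_heat_cap)   # ~0.19 K/yr if it all warmed the air column
print(energy_per_year / mixed_layer)      # ~0.005 K/yr if it all went into the mixed layer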
And sure, the heat handling of oceans is much bigger than this – most of the RF ends up heating oceans. But it is the tiny bit that doesn’t which matters.
And weather effects totally swamp 2 W/m^2. The warming predicted is much, much smaller than the daily range of temperatures. But in the long term a tiny movement of average temperatures is important, so you have to look at impacts carefully, not dismiss them on the basis that something is very small so must be unimportant.
The UAH 6.0 lower tropospheric data set has removed most of its dependence on surface temperatures, like RSS. UAH 5.6 showed significant warming and did include a significant weighting of surface temperatures.
So all you can actually say from RSS and UAH 6.0 is that it looks like a particular range of heights (lower troposphere) with varying contributions (but including far less of the surface than it once did) show static temperatures for a period. It doesn’t actually say much any more about the surface temperatures which cause drought and evaporation and other problems. There’s a good case now for saying only the surface temperature data sets, no longer the satellite data sets, give any meaningful data about surface temperatures.
However, it’s not the surface temperatures which prove AGW. It is the radiative forcing, which you seem to accept as 2W / m^2.
The thing the GCM’s have to get right to be correct is not surface or lower tropospheric temperatures (which are at the mercy of weather), but the radiative forcing. And this they appear to do. If this is correct then over the long term the random weather fluctuations will settle down to an average state and the temperature projections of the GCM’s will be validated.
You refer to the IPCC AR5 etc. as expressing doubt. This is not the case. It expresses uncertainty, which is what all good science is expected to do – you need to know the potential error bounds of your findings. But that is not at all the same as saying the conclusion that AGW is real and caused by humans is in doubt. The IPCC is pretty clear – humans cause AGW.
The uncertainty and scope for improvement is key to getting a more certain (less uncertain) range into the next IPCC report, which is pretty important for determining how much faster or slower efforts to reduce CO2 emissions should go.
But until those at the IPCC who drive the so-called consensus can wrap their collective heads around a sensitivity closer to the 0.2C per W/m^2 +/- 20% dictated by first-principles physics than to the 0.8C per W/m^2 +/- 50% dictated by political necessity, climate science will remain controversial.
BTW, you should also agree that science does not operate by consensus, but by the scientific method. A consensus is only required to ‘settle’ subjective controversies which still doesn’t imply universal acceptance. Science is, or should be, completely objective and driven only by the scientific method.
Science does indeed operate by the scientific method. And over a period of time the scientific method leads to a consensus. This will be after a body of knowledge has been probed by a variety of different techniques and by a diverse range of scientists. And that is where climate science is at. Not everything is known, but there is a large shared body of knowledge confirmed by multiple lines of evidence.
The IPCC do not drive any such consensus. All they do is to summarise all the research results available in different areas of climate science. Where the evidence all points in one direction that is easy. Where different analyses and different modelling point to a large possible range of values, it is the IPCC’s job to summarise that into a range and give an expert view as to the strengths and weaknesses of the different lines of research leading to a diverse outcome.
As far as the sensitivity goes, this article argues that the lower end of the IPCC AR5 1.5 to 4.5 K warming should now be increased – http://www.skepticalscience.com/challenges-constraining-climate-sensitivity.html .
Within the article one heading which is very relevant is “Nature As An Ensemble Member, Not An Ensemble Mean” which sums up the correct way to think about it very well for me. The section goes on to explain that the limited number of GCM runs, started with random slightly differing initial conditions, which approximately follow the actual temperatures since 1998 do not produce any lower sensitivity than those which do not follow actual temperatures.
Re: Christopher Monckton of Brenchley on Kiehl & Trenberth’s Budget
Kiehl & Trenberth’s Energy Budget (1997), above, is the cornerstone of IPCC’s climate modeling, featured prominently in both the TAR and the AR4. Kiehl’s name appears 68 times in the TAR; Trenberth’s 76 times. Both are contributing authors to the TAR, and Trenberth to AR4. The first time their names appear in the main body of the TAR is as a reference to their Budget. The Budget is the second figure in the TAR, preceded only by a cartoon of the climate system.
The Budget is a static, radiative-forcing model of the long-term equilibrium state of the climate system, which requires the net inflow of energy at both the top of the atmosphere and at the surface to be zero (K&T 1997, p. 198). IPCC’s paradigm for anything but minor climate models, also called Radiative Forcing (though defined differently), implicitly adds a few parameters to the K&T budget, animating it computationally into a scientific climate model, that is, a model that can predict (e.g., temperatures) in response to inputs (called forcings).
A point of clarification on equilibrium: IPCC uses the word equilibrium 418 times in its third and fourth assessment reports, including reference titles. The usage explicitly refers to thermodynamic equilibrium twice in reference titles, and again in a few applications to ocean chemistry which refer to the work of Zeebe & Wolf-Gladrow. There are 17 citations to thermodynamics, though never the Second Law, which would imply thermodynamic equilibrium. In one instance, IPCC refers to equilibrium thermodynamics, and to be fair, two of those 17 citations note that the work was without regard to thermodynamics. Thermodynamic equilibrium requires simultaneous mechanical, chemical, and thermal equilibrium. The latter requires a uniform temperature throughout the system, ruling out both thermodynamic equilibrium and thermal equilibrium from the climate system, all as those terms are used in thermodynamics.
IPCC routinely runs its climate models until they are fully adjusted to any change in radiative forcing, a process it calls running an equilibrium climate experiment. However, this definition does not specify which of the many state parameters undergo adjustment, though any such list certainly ought to include temperature. The problem is that the Budget schematic includes 15 explicit parameters (thermodynamics refers to all such parameters as coordinates), and a few other implicit parameters, such as greenhouse gas concentration (though not temperature). Consequently, IPCC’s definition of equilibrium is at best ostensible, that is, defined by its instant usage. Thus equilibrium inherited from the K&T energy budget (“K&T equilibrium”) is simply net zero energy at the surface and at the top of the atmosphere. Note that the energy balance is more than radiation, including thermals and evapotranspiration. Additionally, K&T balanced the energy in the atmosphere, the middle node of its three-node model.
The K&T model does not explicitly contain temperature, nor does it rely on emissivity, an essential parameter in equations for radiation and absorption spectra and in the Stefan-Boltzmann equation. The authors’ 1997 paper mentions temperature just once, and that is 288K (14.8ºC), cited to support its choice of 390 Wm-2 for longwave surface emission using the S-B equation. The budget schematic will balance not just for 390 Wm-2, but for any longwave surface emission corresponding to a temperature between 0K and at least 292K (18.9ºC). That is, the equilibrium specific to the energy budget and the radiative forcing models is not unique to 288K (14.8ºC), implied by 390 Wm-2; it also holds at 14.7 or 14.9ºC, at the nominal Equilibrium Climate Sensitivity (ECS) of 3K above that, i.e., 17.8ºC, or at any other temperature up to at least 18.9ºC (proof on request). Of the Budget’s 15 parameters, one is solar radiation, a constant, and another is the surface longwave emission, an independent parameter (equivalent to surface temperature). The Budget implements two constraint equations for balancing, leaving 11 degrees of freedom for an uncountable number of possible configurations.
The fact that other papers use surface radiation values other than 390 Wm-2, i.e., 396, 398, 398±5, and 398.2, is a matter of the free choice of the respective authors. Any of these values can be the basis for K&T Equilibrium. Even less important than this seemingly bothersome variance in stability is the conversion from the longwave surface emission to temperature, and whether the emissivity is correct, or whether an average surface and an average surface temperature have any factual basis (that is, whether those parameters are measurable). If the emissivity, for example, is inappropriate, the assigned temperature label changes, not the energy balance.
When IPCC introduced the K&T Budget as TAR Figure 1.2, it made a great, indelible, unwarranted presumption:
For a stable climate, a balance is required between incoming solar radiation and the outgoing radiation emitted by the climate system. Therefore the climate system itself must radiate on average 235 Wm−2 back into space. Details of this energy balance can be seen in Figure 1.2, … . TAR, ¶1.2.1 Natural Forcing of the Climate System, p. 89.
This assumption, that the budget represents a stable state, is not found in the K&T 1997 paper. When AR4 reintroduced the budget as FAQ 1.1 Figure 1, that presumption quietly vanished. Nevertheless, the notion persists in climatology and its literature. It underlies IPCC’s climate modeling paradigm that lets its models seek new higher temperatures in equilibrium climate experiments. The notion is repeated in the article above:
Like any budget, the Earth’s energy budget is supposed to balance. If there is an imbalance, a change in mean temperature will restore equilibrium.
Nothing in the K&T paper, nor implicit in its Energy Budget, comprises even an implicit restorative force, in any direction, as a result of a climate forcing. Nothing is present in the budget analogous to a state of least potential energy in mechanics, which would prefer one state of K&T equilibrium over other states. The only stability in thermodynamics is thermodynamic equilibrium, the state of maximum entropy, which does not exist in IPCC claims for its modeling, much less in the thermodynamics of the real world.
When IPCC animated the K&T budget, it found that the atmospheric CO2 it could arguably attribute to man was insufficient to show the presumed greenhouse catastrophe. So, among other things, IPCC modeled CO2 as a forcing to initiate warming, triggering additional water vapor as a positive feedback, a sound reliance on the Clausius-Clapeyron relationship. However, IPCC failed to report that a change in surface temperature would likely affect about a half dozen of the baseline budget parameters. For example, IPCC ignored Henry’s Law and the positive feedback of CO2 from a warmer ocean. Nor did IPCC model the effects of increased water vapor on cloud cover, the most powerful feedback in all of climate, positive with respect to the Sun and negative with respect to warming from any source.
In 1938, Guy Callendar published his pioneering calculations, including many of the features of today’s AGW model, but most significantly an ECS of 2ºC (AR4, ¶1.4.1, p. 105). That was 77 years ago, decades before most professional journals became house-organs for doctrines du jour instead of advocates for science. He was published, notwithstanding the opinion of his most senior reviewer, meteorologist and director of the Met Office from 1920 – 1938:
Sir George Simpson expressed his admiration of the amount of work which Mr. Callendar had put into this paper. It was excellent work. It was difficult to criticise it, but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, and it also received heat by transfer from one part to another. In the second place, one had to remember that the temperature distribution in the atmosphere was determined almost entirely by the movement of the air up and down. This forced the atmosphere into a temperature distribution which was quite out of balance with the radiation. One could not, therefore, calculate the effect of changing any one factor in the atmosphere, and he felt that the actual numerical results which Mr. Callendar had obtained could not be used to give a definite indication of the order of magnitude of the effect. Bold added, Callendar (1938), Discussion, p. 237.
Undeterred, IPCC, relying on radiative forcing and the K&T model, today provides several values for ECS, each with a confidence level. Its ECS numbers are 4.5ºC (83%), 3ºC (50%), 2ºC (17%), and 1.5ºC (10%). Logarithmically, these lie on a straight line (R^2 = 0.98; p ≈ 0.045·ECS^2), yielding a 95% confidence level of 1.05ºC. Today’s measurements by Lindzen and others are all below 1ºC. A scientist can be at least 95% confident that the AGW model is invalid, depriving the K&T Budget of its utility.
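For transparency, here is a short Python sketch of one reading of that fit: a log-log regression of the four quoted (ECS, confidence) pairs. Treating "p ≈ 0.045·ECS^2" as a power law is an interpretation, not something stated explicitly above, but it reproduces the quoted slope of two, the R^2 of 0.98 and the 1.05ºC figure.

import math

ecs  = [4.5, 3.0, 2.0, 1.5]     # ECS values quoted above, in C
conf = [0.83, 0.50, 0.17, 0.10] # corresponding confidence levels

x = [math.log(v) for v in ecs]
y = [math.log(v) for v in conf]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx

ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
ss_tot = sum((b - my) ** 2 for b in y)
r2 = 1 - ss_res / ss_tot

print(round(slope, 2), round(math.exp(intercept), 3), round(r2, 2))   # 2.0  0.045  0.98
# Solving 0.045*ECS^2 = 0.05 for ECS gives the 1.05 C quoted above:
print(round(math.sqrt(0.05 / math.exp(intercept)), 2))                # 1.05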
Except for the politics and emoluments, Kiehl, Trenberth, and friends at IPCC have jointly managed to retrace Callendar’s footsteps.
Jeff,
I wish to add that thermodynamic equilibrium consists of 2 parts: a kinetic equilibrium of matter obeying the Kinetic Theory of Gases, and fluxes of photons obeying the Laws of Quantum Mechanics. While joules are joules, a mistake often made is to assume that these 2 distinct manifestations of temperature must be in LTE with each other for the system itself to be in LTE. The counter example is a laser passing through an inert gas. A thermometer placed in the beam will register a higher temperature only while the beam is active, and the beam has no effect on the kinetic temperature of the inert gas molecules.
The only possibility for conversion between the two is by liquid water absorbing photons or by liquid water re-emitting photons as BB radiation. In LTE, these two processes must be equal and opposite, thus no net conversion between forms actually occurs.
George
Re: George (co2isnotevil), 7/4/15 at 12:13 pm
Very interesting. In researching your note, I found this:
Real atmospheres are not in local thermodynamic equilibrium [LTE] since their effective infrared, ultraviolet, and visible brightness temperatures are different. Scattering is another non-local thermodynamic equilibrium effect. http://scienceworld.wolfram.com/physics/LocalThermodynamicEquilibrium.html
The climate problem is to estimate the long term (three decades plus) statistics of weather, and first with regard to an unmeasurable, hypothetical, global, surface temperature. Candidate models must yield a prediction experimentally shown to be better than a chance prediction. Radiative Forcing and GCMs in their various forms have failed, and for a multitude of good causes.
However, I hold great hope for heat modeling (known vernacularly and redundantly as a heat-flow model) with only a few nodes in addition to the K&T model. These are the Sun, deep space, and the deep ocean with something akin to the K&T model in the middle. These models will also add heat capacity, heat resistance, and long term ocean circulation, so they will be dynamic and transient rather than imaginary and static equilibrium models. This means the trend should be toward the macro, not the micro; somewhat like GTE (Global Thermodynamic Equilibrium) but without the equilibrium, and not LTE, even before discovering the Wolfram Research warning sign.
CM of B
Any diagram of a complex process, such as the energy budgets presented here, always contains implicit assumptions and consequently has limitations of use. In my opinion the most obvious limitation in the diagrams is the assumption of a common base level for the land and ocean surfaces. The diagrams clearly fail to take any account of the significant variation in elevation that occurs at the Earth’s surface. The surface elevation of each of the major continents is obviously not sea level; Africa, for example, has an average elevation of 600m (~2,000ft).
In addition we know that mountains generally experience a different climate, with higher levels of rainfall and lower temperatures, than the surrounding plains, and that this rainfall is a consequence of moisture lifted from the ocean surface to altitude by convective and/or orographic atmospheric processes. Water lifted up onto a high-altitude land surface and deposited as rain or snow possesses potential energy. This potential energy is released during stream flow, allowing for both the natural erosion of the land and also the transport of sediment, as the water moves downslope to lower levels. The water’s potential energy can also be harnessed for power generation, and this available energy is a component of the overall energy budget that is missing from the diagrams.
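As a rough order-of-magnitude check in Python (the 1 m/yr rainfall figure below is an illustrative assumption; only the ~600 m elevation comes from the paragraph above):

g = 9.81                  # m/s^2
elevation = 600.0         # m, roughly Africa's average elevation
annual_rain_depth = 1.0   # m of water per year, assumed for illustration
water_density = 1000.0    # kg/m^3
seconds_per_year = 3.15e7

mass_per_m2 = annual_rain_depth * water_density   # kg of water per m^2 per year
pe_per_m2 = mass_per_m2 * g * elevation           # J/m^2 per year of potential energy
print(pe_per_m2 / seconds_per_year)               # ~0.19 W/m^2, small against the ~390 W/m^2 surface flux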
In addition to the budgetary issues of potential energy, there is also the issue of latent heat. The energy of latent heat is stored in matter not as kinetic energy of particle motion (heat) but as bonding energy of molecular separation (phase change). Consequently, because the storage of latent heat is not a thermal process, the transport of latent heat is not a radiative process. Latent heat transport can only occur as a component of the mass transport of a condensing fluid.
In addition to the vertical mass movement of water, which is implicit in the diagrams as latent heat transfer, we also have the mass movement of the non-condensing gases associated with the water cycle. The circulation of moist air aloft and its separation into dry air and condensed precipitation that falls back to the surface demands a vertical return, not just of the precipitated water, but also of the dry air. In both cases the descent of mass (water and dry air) in a gravitational field produces thermal energy, as the potential energy both of the descending water and of the descending dry air is converted into energy of motion. The descent of atmospheric fluids and gases returns heat energy to the ground, warming the surface air, as the Foehn wind, for example, clearly demonstrates.
As a footnote, the history of radiation budget diagrams goes back further than is generally thought. The earliest example I have seen is Figure 4, “Schematic representation of heat balance of earth and atmosphere”, in the 1954 paper by Henry G. Houghton, On the Annual Heat Balance of the Northern Hemisphere, Journal of Meteorology, Vol. 11, No. 1, 1-9.
Re: Philip Mulholland, 7/5/15 at 4:29 am:
Science imposes no requirement that a model have fidelity to the real world. Even observing that a model must link to the real world through facts (observations reduced by measurements and compared to standards), the facts can themselves be reductions via models. The Global Average Surface Temperature is just such a fact. It matters not that the calculation of GAST includes adjustments for altitude or local weather phenomena, or even for the fact that the average surface is some kind of an imaginary dilute mud of earth and water. The value of a scientific model is solely in its predictive power, not its fidelity.
Newton made models of the highest quality — laws, as it turns out — using parameters (previously) unknown to human senses and which some contemporaries even doubted existed, e.g., forces (especially at a distance), mass, momentum, and inertia. Models of physics routinely bear no resemblance to the physical world they predict. In addition to Newton’s Laws, consider the whole of thermodynamics and Henry’s Law of Solubility, a couple of compound laws that directly bear on climate. There is no premium on fidelity of emulation, on copying the real world.
Of course, such principles of science do not apply to climatology, the poster child for Post Modern Science. In PMS, the models don’t even have to work, so long as they are (1) peer-reviewed, (2) published in certified journals, and (3) lay claim to support by consensus of some narrow body of certified practitioners. The problem arises when the public and Modern Scientists get wind of the fact that these (post modern) scientific models, fully validated through the three intersubjective tests, are powerless to predict anything significant.
As to the history of radiation budgets going back to 1954, Kiehl and Trenberth (1997) note
“The first such budget was provided by Dines (1917)” (p. 197), the reference being Dines, The heat balance of the atmosphere, Quart. J. Roy. Meteor. Soc., 43, 151–158 (listed at p. 207).
Jeff Glassman: July 5, 2015 at 7:28 am
Jeff, You say “The value of a scientific model is solely in its predictive power” – I agree
You also say “not its fidelity” – I totally disagree.
The fidelity of a model is its ability to determine the present from prior data. We establish the fidelity of a model by conducting a blind test: the model is created without using all the available priors. We then run the model and “predict” the missing data. So you see, Jeff, fidelity is prediction after all. If the model cannot predict the present from the past, then it clearly cannot predict the future from the present, and so the model has no value.
BTW thanks for the additional reference to Dines 1917.
Re: Philip Mulholland, 7/5/2015 at 1:38 pm
You’ve defined fidelity differently, I think to incorporate predictive power. I meant fidelity in the sense of emulating the real world by copying the real world, feature by feature, in whole or in any part. The distinction I intended is that between a simulation and an emulation. To help Monckton, I was taking issue with passages such as this from your post at 4:29 am:
The diagrams clearly fail to take any account of the significant variation in elevation that occurs at the Earth’s surface.
You seemed to be asking that the budget diagrams explicitly show surface elevation variations. That would be the feature of an emulation, which science neither requires nor rewards.
When a model has demonstrated predictive power, it has a degree of validity. Everything in the real world is taken into account, as you say. Future generations of the model might strengthen that taking into account, one way or another, as its designer sees fit. Improvement in predictive power might include a higher degree of emulation, but need not. About the only other kind of improvement science rewards for the same predictive power is the principle of Occam’s Razor: simplification.
Jeff,
According to my dictionary emulation means “the effort or desire to equal or excel others”.
Works for me.
Re: Philip Mulholland, 7/5/15 at 3:58 pm:
Keep looking. To be relevant, you need a definition applicable to the discussion comparing simulation and emulation, one applicable to climate modeling. That’s not easy to find, and I regret not having a recommendation for everyone. A short form is that emulation means to model objects in the real world by copying them observable-by-observable, fact-by-fact.
By contrast, simulations, especially on a large scale, deal explicitly or implicitly with statistical objects, which are not directly observable. Temperature, for example, while spatially observable and measurable (microparameters), is not so globally — not for the lumped parameters representing the atmosphere or Earth’s surface. Nor are the temporal average temperatures directly observable. Still, local measurements do yield quite usable global estimates, both spatially and globally (macroparameters). And so long as the estimating rules remain unchanged, science looks for models to predict those estimates. Distinguishing simulation and emulation on the basis of scale can be helpful.
Climatologists are now engaged in changing the estimating rules for temperature to make their estimates fit their model predictions. That may not seem right to everyone, but it’s OK in the postnormal, academic world so long as the effort is (1) peer-reviewed, (2) published in certified journals, and (3) claimed to be supported by a certified consensus.
Climate Pete Said:
“CO2 doesn’t directly cause record high daytime temperatures. In fact CO2 actually warms the nighttime temperatures more than the daytime, and the winter temperatures more than the summer, because the sunlight obviously builds up the temperatures faster than IR emissions lower them, so CO2’s most marked effect is to stop the heat from leaving the surface once the sun goes down or is less powerful in winter.”
Bingo, if CO2 were the cause of any warming it would be at Night, and it wouldn’t be warming, it would be slower cooling. To implicate CO2 one would need to demonstrate that the spread between the Daytime Peak and the following Nighttime peak was narrowing. I have not found any data showing that the spread between day and nighttime temperatures has been narrowing. I also haven’t seen any data showing that the winters have been getting warmer relative to the summers.
Bottom line, increases in daytime temperatures are due to more radiation reaching the surface and oceans. That has nothing to do with CO2. To implicate CO2 one would need to show that nighttime temperatures in areas devoid of humidity show a narrowing of the spread, i.e. nighttime temps in the desert. I doubt anyone can produce a data set showing that desert nighttime temperatures have been narrowing the spread with daytime temperatures. Also, the best proxy for CO2’s impact would be the extreme S Pole, and that data station hasn’t shown any warming in 50 years.
It looks like nights have been warming, but it isn’t due to CO2. The very fact that there doesn’t seem to be any research on the spreads between daytime and nighttime temperatures in the dry deserts pretty much proves either that climate “scientists” aren’t looking, or that they don’t know what questions they should be asking. BTW, every ice core data set has temperature and CO2, clearly demonstrating that they are looking only at CO2. If all you have is a hammer, everything looks like a nail.
“I spoke with Phil Duffy, Climate Central’s chief scientist, about why nighttime lows are warming faster than the daytime highs. He replied that the answer isn’t straightforward, and then he referred me to research that has shown that an increase in cloudiness (as well as a few other factors) has warmed nights more than days. During the day, clouds both warm and cool, as they act like a blanket to reflect heat back to the surface (warming), but they also reflect sunlight back to space (cooling). At night, they only warm temperatures, acting like an insulating blanket. Thus, nights warm more than the days, and this is exactly what climate models predict. In fact, this is a good example of climate models making a prediction (warmer nights), and then having the prediction born out by the data.”
http://www.climatecentral.org/blogs/record-warm-nighttime-temperatures-a-closer-look
Other questions Climate Scientists Don’t seem to answer:
1) How many ice core data sets show that current temperatures are at peaks for the holocene?
2) How many ice core data sets show that the temperature variation over the past 50 and 150 years is statistically different (2 std dev) from the previous 15k years?
3) How many ice core data sets show similar ranges and variations? Basically, how good of a proxy is an ice core?