Feedback about feedbacks and suchlike fooleries

By Christopher Monckton of Brenchley

Responses to my post of December 28 about climate sensitivity have been particularly interesting. This further posting answers some of the feedback.

My earlier posting explained how the textbooks establish that if albedo and insolation were held constant but all greenhouse gases were removed from the air the Earth’s surface temperature would be 255 K. Since today’s temperature is 288 K, the presence as opposed to absence of all the greenhouse gases – including H2O, CO2, CH4, N2O and stratospheric O3 – causes 33 K warming.
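For readers who would like to check the arithmetic for themselves, here is a minimal sketch in Python. The inputs are assumed, standard textbook values (a total solar irradiance of about 1368 Watts per square meter and a Bond albedo of 0.3), not figures taken from this post:

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1368.0       # total solar irradiance, W m^-2 (assumed textbook value)
ALBEDO = 0.3      # Bond albedo (assumed textbook value, held constant)

# Insolation averaged over the rotating sphere: divide by 4 (disk-to-sphere area ratio)
absorbed = S0 * (1.0 - ALBEDO) / 4.0

# Characteristic-emission temperature with no greenhouse gases
T_eff = (absorbed / SIGMA) ** 0.25
print(round(T_eff))        # ~255 K
print(288 - round(T_eff))  # ~33 K of warming attributable to the greenhouse gases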

Kiehl and Trenberth say that the interval of total forcing from the five main greenhouse gases is 101[86, 125] Watts per square meter. Since just about all temperature feedbacks since the dawn of the Earth have acted by now, the post-feedback or equilibrium system climate sensitivity parameter is 33 K divided by the forcing interval – namely 0.33[0.27, 0.39] Kelvin per Watt per square meter.

Multiplying the system sensitivity parameter interval by any given radiative forcing yields the corresponding equilibrium temperature change. The IPCC takes the forcing from a doubling of CO2 concentration as 3.7 Watts per square meter, so the corresponding warming – the system climate sensitivity – is 1.2[1.0, 1.4] K, or about one-third of the IPCC’s 3.3[2.0, 4.5] K.
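Expressed as a calculation (a sketch only, re-using the figures quoted above):

warming_ghg = 33.0               # K: warming from the presence of the greenhouse gases
forcings = (86.0, 101.0, 125.0)  # W m^-2: Kiehl & Trenberth's interval of total GHG forcing
f_2xCO2 = 3.7                    # W m^-2: IPCC forcing from a doubling of CO2

for F in forcings:
    lam = warming_ghg / F   # system (equilibrium) sensitivity parameter, K per W m^-2
    print(round(lam, 2), round(lam * f_2xCO2, 1))
# Central value: ~0.33 K per W m^-2, giving ~1.2 K per CO2 doubling;
# the ends of the interval give roughly 1.0 K and 1.4 K.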

I also demonstrated that the officially-estimated 2 Watts per square meter of radiative forcings and consequent manmade temperature changes of 0.4-0.8 K since 1750 indicated a transient industrial-era sensitivity of 1.1[0.7, 1.5] K, very much in line with the independently-determined system sensitivity.
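Again as a sketch, using only the figures just quoted:

f_industrial = 2.0           # W m^-2: officially-estimated net forcing since 1750
dT_low, dT_high = 0.4, 0.8   # K: attributed manmade warming since 1750
f_2xCO2 = 3.7                # W m^-2: forcing from a CO2 doubling

for dT in (dT_low, (dT_low + dT_high) / 2.0, dT_high):
    lam_transient = dT / f_industrial         # transient sensitivity parameter, K per W m^-2
    print(round(lam_transient * f_2xCO2, 1))  # ~0.7, ~1.1, ~1.5 K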

Accordingly, transient and equilibrium sensitivities are so close to one another that temperature feedbacks – additional forcings that arise purely because temperature has changed in response to initial or base forcings – are very likely to be net-zero.

Indeed, with net-zero feedbacks the IPCC’s transient-sensitivity parameter is 0.31 Kelvin per Watt per square meter, close to the 0.33 that I had derived as the system equilibrium or post-feedback parameter.

I concluded that climate sensitivity to the doubling of CO2 concentration expected this century is low enough to be harmless.

One regular troll – one can tell he is a troll by his silly hate-speech about how I “continue to fool yourself and others” – claimed that Kiehl and Trenberth’s 86-125 Watts per square meter of total forcing from the presence of the top five greenhouse gases included the feedbacks consequent upon that forcing, asserting, without evidence, that I (and by implication the two authors) was confusing forcings and feedbacks.

No: Kiehl and Trenberth are quite specific in their paper: “We calculate the longwave radiative forcing of a given gas by sequentially removing atmospheric absorbers from the radiation model. We perform these calculations for clear and cloudy sky conditions to illustrate the role of clouds to a given absorber for the total radiative forcing. Table 3 lists the individual contribution of each absorber to the total clear-sky [and cloudy-sky] radiative forcing.” Forcing, not feedback. Indeed, the word “feedback” does not occur even once in Kiehl & Trenberth’s paper.

In particular, the troll thought we were treating the water-vapor feedback as though it were a forcing. We were not, of course, but let us pretend for a moment that we were. If we now add CO2 to the atmospheric mix and disturb what the IPCC assumes to have been a prior climatic equilibrium, then by the Clausius-Clapeyron relation the space occupied by the atmosphere is capable of holding near-exponentially more water vapor as it warms. This – to the extent that it occurred – would indeed be a feedback.
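By way of illustration only (this is a sketch, not part of the argument; the Magnus-type approximation and its constants are assumed, standard meteorological values), the Clausius-Clapeyron relation implies that the saturation vapor pressure of water rises by roughly 6-7% for each Kelvin of warming near today’s surface temperatures:

import math

def saturation_vapor_pressure(T_celsius):
    # Magnus-type approximation to the Clausius-Clapeyron relation (result in hPa)
    return 6.112 * math.exp(17.67 * T_celsius / (T_celsius + 243.5))

e15 = saturation_vapor_pressure(15.0)
e16 = saturation_vapor_pressure(16.0)
print(round(100.0 * (e16 / e15 - 1.0), 1))  # ~6-7 % more water-vapor capacity per 1 K of warming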

However, as Paltridge et al. (2009) have demonstrated, it is not clear that the water vapor feedback is anything like as strongly positive as the IPCC would like us to believe. Below the mid-troposphere, additional water vapor makes very little difference because its principal absorption bands are largely saturated. Above it, the additional water vapor tends to subside harmlessly to lower altitudes, again making very little difference to temperature. The authors conclude that feedbacks are somewhat net-negative, a conclusion supported by measurements given in papers such as Lindzen & Choi (2009, 2010), Spencer & Braswell (2010, 2011), and Shaviv (2011).

It is also worth recalling that Solomon et al. (2009) say equilibrium will not be reached for up to 3000 years after we perturb the climate. If so, it is only the transient climate change (one-third of the IPCC’s equilibrium estimate) that will occur in our lifetime and in that of our grandchildren. Whichever way you stack it, manmade warming in our own era will be small and, therefore, harmless.

A true-believer at the recent Los Alamos quinquennial climate conference at Santa Fe asked me, in a horrified voice, whether I was really willing to allow our grandchildren to pay for the consequences of our folly in emitting so much CO2. Since the warming we shall cause will be small and may well prove to be beneficial, one hopes future generations will be grateful to us.

Besides, as President Klaus of the Czech Republic has wisely pointed out, if we damage our grandchildren’s inheritance by blowing it on useless windmills, mercury-filled light-bulbs, solar panels, and a gallimaufry of suchlike costly, wasteful, environment-destroying fashion statements, our heirs will certainly not thank us.

Mr. Wingo and others wonder whether it is appropriate to assume that the mean of the various fourth powers of temperature over the entire surface of the Earth will be equal to the fourth power of the global mean temperature as determined by the fundamental equation of radiative transfer. By zonal calculation on several hundred zones of equal height and hence of equal spherical-surface area, making due allowance for the solar zenith angle applicable to each zone, I have determined that the equation does indeed provide a very-nearly-accurate mean surface temperature, varying from the area-weighted mean of the zonal values by just 0.5 K in total. In mathematical terms, the discrepancy arising from the Hölder inequality is in this instance near-vanishingly small.
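For those who wish to see how small the Hölder discrepancy is for an Earth-like distribution of temperature, here is a minimal sketch. The cosine profile of zonal-mean surface temperature (about 295 K at the equator falling to about 250 K at the poles) is an assumed illustration, not the several-hundred-zone dataset referred to above:

import math

n = 400  # latitude bands
lats = [math.radians(-90.0 + (i + 0.5) * 180.0 / n) for i in range(n)]
weights = [math.cos(lat) for lat in lats]               # area weighting of each band
temps = [250.0 + 45.0 * math.cos(lat) for lat in lats]  # assumed illustrative profile, K

wsum = sum(weights)
mean_T = sum(w * T for w, T in zip(weights, temps)) / wsum
mean_T4 = sum(w * T ** 4 for w, T in zip(weights, temps)) / wsum

print(round(mean_T, 2))                    # arithmetic (area-weighted) mean temperature
print(round(mean_T4 ** 0.25, 2))           # temperature implied by the mean of T^4
print(round(mean_T4 ** 0.25 - mean_T, 2))  # Hölder discrepancy: ~0.5 K for this profile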

Dr. Nikolov, however, considers that the textbooks and the literature are wrong in this respect: but I have deliberately confined my analysis to textbook methods and “mainstream-science” data precisely so as to minimize the scope for any disagreement on the part of those who – until now – have gone along with the IPCC’s assertion that climate sensitivity is high enough to be dangerous. Deploying their own methods and drawing proper conclusions from them is more likely to lead them to rethink their position than attempting to reinvent the wheel.

Mr. Martin asks whether I’d be willing to apply my calculations to Venus. However, I do not share the view of Al Gore, Dr. Nikolov, or Mr. Huffman that Venus is likely to give us the answers we need about climate sensitivity on Earth. A brief critique of Mr. Huffman’s analysis of the Venusian atmospheric soup and its implications for climate sensitivity is at Jo Nova’s ever-fragrant and always-eloquent website.

Brian H asks whether Dr. Nikolov is right in his finding that, for several astronomical bodies [including Venus] all that matters in the determination of surface temperature is the mass of the atmospheric overburden. Since I am not yet content that Dr. Nikolov is right in concluding that the Earth’s characteristic-emission temperature is 100 K less than the 255 K given in the textbooks, I am disinclined to enquire further into his theory until this rather large discrepancy is resolved.

Rosco is surprised by the notion of dividing the incoming solar irradiance by 4 to determine the Wattage per square meter of the Earth’s surface. I have taken this textbook step because the Earth intercepts a disk-sized area of insolation, which must be distributed over the rotating spherical surface, and the ratio of the surface area of a disk to that of a sphere of equal radius is 1:4.

Other commenters have asked whether the fact that the characteristic-emission sphere has a greater surface area than the Earth makes a difference. No, it doesn’t, because the ratio of the surface areas of disk and sphere is 1:4 regardless of the radius and hence surface area of the sphere.

Rosco also cites Kiehl and Trenberth’s notion that the radiation absorbed and emitted at the Earth’s surface is 390 Watts per square meter. The two authors indicate, in effect, that they derived that value by multiplying the fourth power of the Earth’s mean surface temperature of 288 K by the Stefan-Boltzmann constant (0.0000000567 Watts per square meter per Kelvin to the fourth power).
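The check takes one line (a sketch):

SIGMA = 5.67e-8                   # Stefan-Boltzmann constant, W m^-2 K^-4
print(round(SIGMA * 288.0 ** 4))  # ~390 W m^-2 at the surface, as Kiehl & Trenberth indicate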

If Kiehl & Trenberth were right to assume that a strict Stefan-Boltzmann relation holds at the surface in this way, then we might legitimately point out that the pre-feedback climate-sensitivity parameter – the first differential of the fundamental equation of radiative transfer at the above values for surface radiative flux and temperature – would be just 288/(390 x 4) = 0.18 Kelvin per Watt per square meter. If so, even if we were to assume the IPCC’s implicit central estimate of strongly net-positive feedbacks at 2.1 Watts per square meter per Kelvin the equilibrium climate sensitivity to a CO2 doubling would be 3.7 x 0.18 / (1 – 2.1 x 0.18) = 1.1 K. And where have we seen that value before?
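In symbols, and as a sketch of the arithmetic only (the 2.1 Watts per square meter per Kelvin is the IPCC’s implicit central feedback sum mentioned above):

T_surface = 288.0   # K: mean surface temperature
F_surface = 390.0   # W m^-2: surface radiative flux per Kiehl & Trenberth
f_2xCO2 = 3.7       # W m^-2: forcing from a CO2 doubling
feedback = 2.1      # W m^-2 K^-1: IPCC implicit central estimate of the feedback sum

lam0 = T_surface / (4.0 * F_surface)            # pre-feedback sensitivity parameter
ecs = f_2xCO2 * lam0 / (1.0 - feedback * lam0)  # post-feedback sensitivity to a CO2 doubling
print(round(lam0, 2), round(ecs, 1))            # ~0.18 K per W m^-2, ~1.1 K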

In all this, of course, I do not warrant any of the IPCC’s or Kiehl and Trenberth’s or the textbooks’ methods or data or results as correct: that would be well above my pay-grade. However, as Mr. Fernley-Jones has correctly noticed, I am quite happy to demonstrate that if their methods and values are correct then climate sensitivity – whichever way one does the calculation – is about one-third of what they would like us to believe it is.

All the contributors – even the trolls – have greatly helped me in clarifying what is in essence a simple but not simpliste argument. To those who have wanted to complicate the argument in various ways, I say that, as the splendid Willis Eschenbach has pointed out before in this column, one should keep firmly in mind the distinction between first-order effects that definitely change the outcome, second-order effects that may or may not change it but won’t change it much, and third-order effects that definitely won’t change it enough to make a difference. One should ruthlessly exclude third-order effects, however superficially interesting.

Given that the IPCC seems to be exaggerating climate sensitivity threefold, only the largest first-order influences are going to make a significant difference to the calculation. And it is the official or textbook treatment of these influences that I have used throughout.

My New Year’s resolution is to write a short book about the climate question, in which the outcome of the discussions here will be presented. The book will say that climate sensitivity is low; that, even if it were as high as the IPCC wants us to think, it would be at least an order of magnitude cheaper to adapt to the consequences of any warming that may occur than to try, Canute-like, to prevent it; that there are multiple lines of evidence for systematic and connected corruption and fraud on the part of the surprisingly small clique of politically-motivated “scientists” who have fabricated and driven the now-failing climate scare; and that too many who ought to know better have looked the other way as their academic, scientific, political, or journalistic colleagues have perpetrated and perpetuated their shoddy frauds, because silence in the face of official mendacity is socially convenient, politically expedient, and, above all, financially profitable.

The final chapter will add that there is a real danger that the UN, using advisors from the European Union, will succeed in exploiting the fraudulent science peddled by the climate/environment axis as a Trojan horse to extinguish democracy in those countries which, unlike the nations of Europe, are still fortunate enough to have it; that the world’s freedom is consequently at immediate and grave risk from the vaunting ambition of a grasping, talent-free, scientifically-illiterate ruling elite of world-government wannabes everywhere; but that – as the recent history of the bureaucratic-centralist and now-failed EU has demonstrated – the power-mad adidacts are doomed, and they will be brought low by the ineluctable futility of their attempts to tinker with the laws of physics and of economics.

The army of light and truth, however few we be, will quietly triumph over the forces of darkness in the end: for, whether they like it or not, the unalterable truth cannot indefinitely be confused, concealed, or contradicted. We did not make the laws of science: therefore, it is beyond our power to repeal them.

December 30, 2011 9:18 am

Well said sir, as ever.

Joel Shore
December 30, 2011 9:18 am

The working definition of “troll” that Monckton seems to be using is someone who actually injects science into his posts. I will respond to this latest piece of nonsense once I am back home with access to a reasonable computer.

Darkinbad the Brightdayler
December 30, 2011 9:20 am

You don’t see any contradiction between your penultimate paragraph and its predecessor?

Mydogsgotnonose
December 30, 2011 9:24 am

The 33 K claimed GHG warming is an elementary mistake. It’s obtained by imagining that if you remove all the atmosphere the surface temperature of the Earth would be the same -18°C it is at present for radiative equilibrium of the composite emitter at the top of the atmosphere with space.
Wrong: the albedo of the Earth would fall from 0.3 to 0.07 because there’d be no clouds or ice. Redo the radiation calculation and the equilibrium radiative solution is close to 0°C.
That means the maximum present GHG warming is 15°C. Redo the calculation for the remaining aerosols and realistic convection and it falls to ~9°C. So you scale all IPCC claims by 9/33. But there’s more. Because the net AIE is slightly positive [incorrect aerosol optical physics], you lose at least another 44% [median AR4]. So, the real scaling factor is 1/6.7 =~ 0.15.
That means maximum CO2 climate sensitivity =~0.45 K. I’ll leave the other major errors until later so as not to upset oxymoronic climate science too much before the end of 2011….;o)
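Whatever one makes of the argument, the albedo arithmetic itself is easy to check (a sketch, assuming a total solar irradiance of about 1368 W/m2 and emissivity 1):

SIGMA = 5.67e-8   # W m^-2 K^-4
S0 = 1368.0       # W m^-2 (assumed value)

def t_eff(albedo):
    # Effective radiating temperature for the stated albedo, quarter-insolation formula
    return (S0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(round(t_eff(0.30) - 273.15, 1))  # ~ -18 C with today's albedo of 0.3
print(round(t_eff(0.07) - 273.15, 1))  # ~ 0.5 C with an albedo of 0.07, i.e. close to 0 C as claimed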

Scott Covert
December 30, 2011 9:40 am

Awesome post!
Monckton is a nut case IMO but I think he’s right.
Thanks again for your well worded posts.

William
December 30, 2011 10:02 am

Lord Monckton of Brenchley,
Thank-you for the thoughtful succinct piece on feedbacks.
As the papers you quoted note, the observational data (top of the atmosphere radiation measurement vs planetary temperature changes) supports the assertion that the planet’s feedback response to a change in forcing is negative – planet resists a forcing change – as opposed to positive – planet amplifies a forcing change. There is scientific consensus that if the planet’s feedback response to a change in forcing is negative (planet resists rather than amplifies the forcing change) the expected warming due to a doubling of atmospheric carbon dioxide from 0.028% (280 ppm) to 0.056% (560 ppm) will be less than 1.2C.
As the warming due to a doubling of CO2 is modest (less than 1.2C) and primarily at higher latitudes (cloud cover in the tropics increases or decreases to resist forcing changes) where the growing season is currently limited by temperature, it appears there is no CO2 problem to solve. It seems in fact, as commercial greenhouses inject carbon dioxide into the greenhouse (1000 ppm to 1500 ppm) to increase yield and reduce growing times – plants eat CO2 – a strong argument could be made that increased atmospheric carbon dioxide is beneficial to the biosphere.
This is fortunate, as all Western countries are deeply in debt and there is no end of issues – including environmental protection – to address with tax payer funds.
I enjoyed your book on the scientific details concerning the Hockey stick. I look forward to your book on feedbacks.
Best wishes,
William

Moray Watson
December 30, 2011 10:04 am

[SNIP: This has absolutely nothing to do with the thread. If you have a concern, try e-mailing Anthony using the contact option on the about tab. If you are trying to hijack the thread, forget it. -REP]

John Mason
December 30, 2011 10:05 am

I am very interested to see how the Unified Climate Theory folks back up their 100 K greater figure since, as Christopher Monckton points out, till that little change in conventional wisdom is explained the rest of the UCT may just be added whimsy.
Yet, since this earth-with-no-atmosphere temp figure is so far different from conventional wisdom, I would think the authors would have realized this and not published unless they had a good basis for challenging that bit of CW.
Still, even if the UCT is wrong, their point that CO2 levels are an effect of temp rather than a cause will finally have to be addressed as the team and others critique their coming papers.
Personally I hope the lower 100 K figure is right. Wouldn’t that rock climate science to the core!

December 30, 2011 10:05 am

“The army of light and truth, however few we be, will quietly triumph over the forces of darkness in the end: for, whether they like it or not, the unalterable truth cannot indefinitely be confused, concealed, or contradicted. We did not make the laws of science: therefore, it is beyond our power to repeal them.”
Thank you Christopher. I’m wondering if we can apply this rationale to some other threads here?

Bill Illis
December 30, 2011 10:23 am

We do the math backwards and forwards and in various different forms and it always comes out the same. We look at the actual observations of the climate so far, backwards and forwards and in various different ways and it provides the same answer.
Either the low sensitivity value is right or the climate has magical properties that will be revealed later.

December 30, 2011 10:24 am

I think Monckton has done a good job by using the tools the IPCC uses to prove them wrong. That makes the article understandable even to me, an electronic engineer and a programmer.

December 30, 2011 10:29 am

The army of light and truth, however few we be, will quietly triumph over the forces of darkness in the end: for, whether they like it or not, the unalterable truth cannot indefinitely be confused, concealed, or contradicted. We did not make the laws of science: therefore, it is beyond our power to repeal them.
A real and literal “apocalypse”: A Revelation from above: The darkest hours of night are those which precede dawn.
This is why some “photophobic” creatures are so scared 🙂

Mike G
December 30, 2011 10:44 am

Please stop trashing compact fluorescent light bulbs. I completely agree windmills and solar panels for bulk power generation are completely useless and a major con, although not for many smaller-scale specialised applications.
CFL bulbs are a genuine improvement on those small glowing electric fires for most uses. Who would ever dream of using filament lighting for any commercial building or public space – yet somehow it is best for use at home? Fair enough, early CFLs were not that good, but now they are a huge improvement in many locations: long lasting, cool running, very economical, and even quite quick starting.
It is a shame that the politicians took a stand on this issue, but as Churchill observed, even a fool (and presumably many fools) can be right sometimes.
Talking of progress, it won’t be that long before LEDs rule the world.
M

J Martin
December 30, 2011 10:51 am

Lord Monckton,
Thank you for pointing me in the direction of the Joanne Nova post on the subject. I shall duly examine it.
May I take it that the use of the word “fragrant” perhaps heralds a change of career, into the Judiciary ?

David, UK
December 30, 2011 10:55 am

Scott Covert says:
December 30, 2011 at 9:40 am
Awesome post!
Monckton is a nut case IMO but I think he’s right.
Thanks again for your well worded posts.

Hey – he might be a nut case, but he’s our nut case! Seriously, Monck: thank you for your tireless efforts, and more power to you.

Rob Crawford
December 30, 2011 11:00 am

“CFL bulbs are a genuine improvement on those small glowing electric fires for most uses.”
Until they fail or break. Then you have a situation that, normally, would have Greens demanding government action.

December 30, 2011 11:12 am

Lord Monckton said:
“My earlier posting explained how the textbooks establish that if albedo and insolation were held constant but all greenhouse gases were removed from the air the Earth’s surface temperature would be 255 K. Since today’s temperature is 288 K, the presence as opposed to absence of all the greenhouse gases – including H2O, CO2, CH4, N2O and stratospheric O3 – causes 33 K warming.”
With all due respect: no! It would not, no matter how many textbooks say that. That’s what would happen to a grey body in a static state, being uniformly hit by radiation from all directions. But the earth happens to rotate every 24 hours and gets all its energy from a single source, instead of uniformly distributed. The difference gives a surprising result, and I would like to repeat something I posted a few days ago, the real null hypothesis:
I think there are a few essential elements missing in this generation of the null hypothesis,
1: The standard no-atmosphere “black body” model uses temperature of the surface, whereas the global temperature is the atmospheric temperature at 1.5 meters above the surface. Think about that.
2: Non radiating gasses cannot lose their heat by radiation, only conduction and convection and there is only the earth surface to conduct heat to. Nothing can be emitted to the atmosphere.
3: There is no negative convection.
This is how I wrapped the null hypothesis up in UKww:
“The null hypothesis is usually describing the situation which would it make different from the actual -or alternative- hypothesis. It’s used mostly in statistics, I think, but why not try and see what happens if we apply it to the GHG hypothesis.
We read all over internet that the black body temperature of the Earth would have been -18C, but the actual average temperature is +15C; consequently this 33 degrees difference is supposed to be the greenhouse effect. But is this true?
Is the blackbody situation the “null hypothesis”? I don’t think so. The black body calculation assumes a sphere with a constant flux of light energy, uniformly distributed over the surface, using the Stefan-Boltzmann equation to derive its temperature like this.
But the earth is nowhere near a blackbody and if we want to really look at the null hypothesis, we would have to look at an earth without greenhouse effect, but still with an (inert) atmosphere and still rotating in 24 hrs, with seasons and all.
Now, instead of using an average steady-state solar radiation, we need to realize that we have the diurnal cycle, with maximum insolation at noon and no incoming radiation when the sun is below the horizon. So during daytime the earth’s surface warms up, and much more than it would according to the average radiation. Equilibrium temperature at the equator in a steady state with the sun at zenith, using the full incoming 1365 W/m2 (albedo 30%), would be 360 K or 87 C. This follows from applying the Stefan-Boltzmann equation for the spot directly under the sun, instead of a uniformly distributed radiation.
So this much higher temperature of the earth’s surface is transmitted via conduction to the lowermost boundary layer of the atmosphere. This heated air gets less dense, and it becomes buoyant so it rises up; convection, the very basics of meteorology. So at daytime the atmosphere receives thermal energy from the earth. How can it lose this energy again? Remember we are in the null hypothesis, no radiation, no greenhouse effect, so the inert atmosphere cannot lose the energy by radiation.
Now, at night time the Earth does not receive radiation energy from the sun but it radiates energy out and cools quickly, obviously much more quickly in the null hypothesis even than with the greenhouse effect, which would have directed (“reflects”) some radiation back to earth. Now the cooler earth also cools the boundary layer of the atmosphere by conduction again, however there is no negative convection as the cool air gets more dense and tends to stay put; the inversion; also very basic meteorology.
So despite the cooling of the earth, the missing radiation from the atmosphere prevents it from cooling at night and the next day more conducted energy is convected into the atmosphere, that stays there again.
Obviously we have an imbalance. And equilibrium can only be reached, maybe after thousands of years, when the daytime convection has reduced enough to balance the heat lost at night via conduction back to the surface. For that, the lower-level atmosphere needs to be at the same temperature / density as the boundary layer would reach due to the conduction of heat from the surface.
Conclusion, in the null hypothesis, without greenhouse effect, the average temperature of the lower atmosphere would be considerably higher than the black body temperature of the surface. How much I don’t know. But the main point is that a certain part of the temperature difference between black body and actual atmospheric temperature is not due to greenhouse effect but due to the inability of the inert atmosphere to cool down by radiation. “
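A quick check of the 87 C sub-solar figure used above (a sketch, with the 1365 W/m2 and 30% albedo stated in the quote):

SIGMA = 5.67e-8   # W m^-2 K^-4
S0 = 1365.0       # W m^-2, as stated above
ALBEDO = 0.30

# Radiative-equilibrium temperature of the spot directly under the sun
T_subsolar = (S0 * (1.0 - ALBEDO) / SIGMA) ** 0.25
print(round(T_subsolar), round(T_subsolar - 273.15))  # ~360 K, ~87 C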

Neil
December 30, 2011 11:25 am

Thank you for your forceful feedback, Sir.
But I must ask you, what in your last paragraph but one do you mean by “adidact?”

December 30, 2011 11:28 am

Christopher Monckton of Brenchley says:
“…a real danger that the UN, using advisors from the European Union, will succeed in exploiting the fraudulent science peddled by the climate/environment axis as a Trojan horse to extinguish democracy…”
Sir, on this I stand in full agreement. This has never been about climate change; it has always been about getting more power into the hands of government. Freedom lost is not likely regained. First light bulbs, next what??? All other quibbles are tertiary after this.
If I control the price of, how much of, and what kind of energy you can have, I can control you.

Seasons Greetings
December 30, 2011 11:31 am

Mydogsgotnonose says:
December 30, 2011 at 9:24 am
“The 33 K claimed GHG warming is an elementary mistake. It’s obtained by imagining that if you remove all the atmosphere the surface temperature of the Earth would be the same -18°C it is at present for radiative equilibrium of the composite emitter at the top of the atmosphere with space.”
Fact: The earth and the moon are made of the same rocks.
Fact: The measured average temperature of the moon is 250K.
The above observations were obtained by Apollo missions to the moon.
So if we strip the earth of its atmosphere and ocean its albedo and distance from the sun will be the same as the moon and it should therefore have the same average temperature. This is pretty close to the calculated value of 255K for the earth sans greenhouse gases and ocean.
The problem I have is that the ocean is neglected as a source of greenhouse warming. Water is transparent to shortwave and opaque to longwave. It is that combination of transparency and opacity at incoming and outgoing radiative frequencies respectively that makes a greenhouse gas a greenhouse gas. Water is a greenhouse fluid. Moreover, unlike the combination of GHGs in the atmosphere, liquid water’s opacity to longwave infrared is complete across the LWIR spectrum. As well, just the first 10 meters of the ocean has as much mass and twice the heat capacity as the entire column of air above it. Furthermore the ocean’s albedo is far lower than rocks and when the sun is directly overhead and the water calm its albedo is nearly zero. All these factors combine to make the ocean, so long as it is liquid, the major source of greenhouse warming.
The greenhouse gases above the ocean have little effect because so-called back-radiation is completely absorbed in a skin layer just a few microns thick. This does little more than raise the evaporation rate. Indeed if one examines ocean heat budget studies in the literature one finds that fully 70% of ocean heat loss is latent (i.e. evaporation), 20% is radiative, and 5% is conductive. This is quite unlike how land surfaces heat and cool where greenhouse gases do have a significant effect on surface air temperature.
Once a person understands the great difference between how land and water heat and cool then all the observations start making perfect sense.
“Wrong: the albedo of the Earth would fall from 0.3 to 0.07 because there’d be no clouds or ice. Redo the radiation calculation and the equilibrium radiative solution is close to 0°C.”
Albedo would only fall that low if the ocean was still there but if you’re subtracting clouds and ice it seems you must also subtract surface water too otherwise you will have clouds and ice. Absent clouds, ice, and oceans the earth’s albedo would presumably be the same as the moon which is 0.16 since they’re made of the same materials.
“That means the maximum present GHG warming is 15°C. Redo the calculation for the remaining aerosols and realistic convection and it falls to ~9°C. So you scale all IPCC claims by 9/33. But there’s more. Because the net AIE is slightly positive [incorrect aerosol optical physics], you lose at least another 44% [median AR4]. So, the real scaling factor is 1/6.7 =~ 0.15.”
Actually I’d tend to agree that’s the amount of greenhouse warming that comes from greenhouse gases. The other 24C of greenhouse warming comes from the global ocean.
“That means maximum CO2 climate sensitivity =~0.45 K. I’ll leave the other major errors until later so as not to upset oxymoronic climate science too much before the end of 2011….;o)”
That’s about right for the mean value over the earth’s surface. Over land it’s still going to be the oft-cited 1.1 K, which is why, as Monckton points out, this value is both calculated from well-established 19th-century experimental physics and confirmed by 20th-century observation of land-based surface thermometers in conjunction with atmospheric CO2 partial-pressure history over that time. Everything makes sense once you understand, or at least accept, that the global ocean does most of the greenhouse warming over the 70% of the earth’s surface that it covers and greenhouse gases only have a large effect over land surfaces because of the difference in the way land heats and cools versus how water heats and cools.

David Walton
December 30, 2011 11:33 am

Re: “It is a shame that the politicians took a stand on this issue, but as Churchill observed, even a fool (and presumably many fools) can be right sometimes.”
Au contraire. Thanks to politicians and the legions of ignorant, feel-good eco-fools, GE’s breakthrough incandescent – which had nearly the efficiency of CFLs and none of the associated disposal hazards – was canned and never brought into production.
Fools, of course, are rarely ever right. If and when they are it is only by sheer coincidence. I believe Mr. Churchill was aware of that.
WUWT carried the story of GE’s incandescent breakthrough but I cannot find it now using the WUWT search engine. Long term readers might recall it.
Several years back GE announced that because of the political climate surrounding incandescent bulbs it was abandoning further research and manufacture of high efficiency incandescents. This was sad news because that product might have been a reasonable, non-polluting option and gap filler while the development of inexpensive and better LED lighting solutions progressed.
Thank you politicians, ignoramuses, and fools. You win again and the rest of us lose.

R. Gates
December 30, 2011 11:33 am

Another most interesting and even entertaining post by Lord Monckton. But even if CO2 levels froze at today’s ~390 ppm, we’ve not yet reached the equilibrium temperature for this 40% additional amount of CO2 since 1750, and wouldn’t until all feedbacks fast and slow, bio, ice, clouds, etc. have run their course. As the nature and interactions of these feedbacks are not entirely understood, anyone making a claim to know what even the current equilibrium temperature would be is, of course, simply making a guess, but one thing is certain – we’ve not reached it yet, even if CO2 levels stayed where they are today – which is of course unlikely. In regards to what the climate sensitivity is to a doubling of CO2, up to 560 ppm, that would be even more of an educated guess at best. But some confidence can be taken from the fact that both paleoclimate data and GCM’s seem to be converging on a reasonably consistent range in the area of 3C, with error bars of about 1C on either side at a 95% confidence level. Thus, by 2100, it is not at all unreasonable to suppose global temps could increase at least 2C. Whether or not these globally higher temps will present a problem or a benefit in feeding what is likely to be something in the order of 12+ billion humans is another issue entirely, but the high level of certainty that Lord Monckton states that the sensitivity to a doubling of CO2 by 2100 is “low enough to be harmless” is both scientifically and logically unsupportable.

wayne
December 30, 2011 11:44 am

“Brian H asks whether Dr. Nikolov is right in his finding that, for several astronomical bodies [including Venus] all that matters in the determination of surface temperature is the mass of the atmospheric overburden. Since I am not yet content that Dr. Nikolov is right in concluding that the Earth’s characteristic-emission temperature is 100 K less than the 255 K given in the textbooks, I am disinclined to enquire further into his theory until this rather large discrepancy is resolved.”
You can be assured Dr. Nikolov is correct. I have just taken the time to numerically integrate over a sphere 33 million evenly spaced points, as Dr. Nikolov describes in his article, calculating first the irradiance at each point and then computing the effective gray body temperature for each, then averaging the temperatures. This produces precisely the same figures he gave, 154.3K mean temperature for an atmosphere-less Earth. That means the boost in temperature from this figure to the mean temperature of today on Earth is ~133°C higher, varying on what figure you call today’s global mean temperature. That I now know is true.
I give the chart below since you seemed to miss it Christopher.
That chart also verifies part of what followed in his article. Here the three main bodies’ temperatures are calculated simply by the ideal gas law, which uses only base units of mass, length, time, moles (number of particles) and temperature, as you can see when you look at the unit components for pressure, density, molar mass, and the gas constant.
Because no radiative terms are needed to precisely calculate each mean temperature, all infrared-absorbing gases (IRAG), with absorption lines in the infrared, and formerly known as “greenhouse” gases, have precisely zero effect on the long-term temperatures. That is self evident. This information is going to Congress ASAP (being dressed up a bit☺).
Used: T = (P/ρ)·(M/R)

                    Venus      Earth       Mars
                 --------   --------   --------
P - pressure      9220000     101325        605  N/m2 (Pa)
ρ - density            65      1.217      0.015  kg/m3
M - molar mass     0.0434    0.02897    0.04334  kg/mol
R - gas constant  8.31451    8.31451    8.31451  J/K/mol
                 --------   --------   --------
T - temperature    740.40     290.09     210.24  K

It is also curious that this gives Earth’s ‘natural’ mean temperature to be 290.1 K, where we usually hear it stated as 288 or 288.15 or, by K&T, 289 K. Maybe 1) we have not fully recovered from the LIA yet, or 2) the mean sea-level atmospheric pressure is over-stated, for we know density, molar mass, and the gas constant to many decimal places – they are all measurable in labs – average pressure is not.
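The table’s arithmetic is easy to reproduce (a sketch only; the input figures are those quoted in the table above):

R = 8.31451  # gas constant, J K^-1 mol^-1

bodies = {
    #         pressure (Pa), density (kg/m3), molar mass (kg/mol)
    "Venus": (9220000.0, 65.0, 0.0434),
    "Earth": (101325.0, 1.217, 0.02897),
    "Mars": (605.0, 0.015, 0.04334),
}

for name, (P, rho, M) in bodies.items():
    T = P * M / (rho * R)     # ideal gas law rearranged for temperature
    print(name, round(T, 1))  # Venus ~740.4 K, Earth ~290.1 K, Mars ~210.2 K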

R. Gates
December 30, 2011 11:58 am

Wayne,
if an atmosphere-less earth would have an average temperature of 154 K, why does the atmosphere-less moon have one of 250 K? Both would receive about the same energy from the sun.

Luther Wu
December 30, 2011 12:01 pm

R. Gates says:
December 30, 2011 at 11:33 am
…”the high level of certainty that Lord Monckton states that the sensitivity to a doubling of CO2 by 2100 is “low enough to be harmless” is both scientifically and logically unsupportable.
_________________________
There are only two possible states: either the effects of a doubling of CO2 is ‘low enough to be harmless’, or a doubling of CO2 will cause harm.
Why don’t you prove how your point of advocacy dominates Lord Monckton’s assertion?
After all, isn’t the science settled?

Warren in Minnesota
December 30, 2011 12:16 pm

G
Here in Minnesota in the winter, I find that little electric fires heat my house and also give light. An incandescent light that offers heat in the winter works for me. Where do you live?

“adidact” took me aback at first. A person can be didactic and could be called a didact. One who is not didactic would be an adidact.

R. Gates
December 30, 2011 12:22 pm

Luther Wu says:
December 30, 2011 at 12:01 pm
R. Gates says:
December 30, 2011 at 11:33 am
…”the high level of certainty that Lord Monckton states that the sensitivity to a doubling of CO2 by 2100 is “low enough to be harmless” is both scientifically and logically unsupportable.”
_________________________
There are only two possible states: either the effects of a doubling of CO2 is ‘low enough to be harmless’, or a doubling of CO2 will cause harm.
Why don’t you prove how your point of advocacy dominates Lord Monckton’s assertion?
After all, isn’t the science settled?
________
The only science that has been “settled” is that to a high degree of confidence, the human fingerprint can be found on at least a portion of the warming during the past century. Far from settled is the issue of sensitivity…certainly not settled enough to justify Lord Monckton’s certainty that the sensitivity is “low enough to be harmless.” There is a range of potential effects from a doubling of CO2 that could lead to both positives and negatives. Suggest you read Judith Curry’s blog about this, here:
http://judithcurry.com/2011/12/28/evaluative-premises/
Regardless, Lord Monckton’s certainty that the sensitivity to a doubling of CO2 is “low enough to be harmless” is simply not founded on either science nor logic. Neither the best climate models nor the paleodata support his position. We’ve not yet fully seen what 390 ppm of CO2 will do, how can we know all the feedbacks fast and slow from 560 ppm?

Ralph
December 30, 2011 12:26 pm

There seem to be many people here who think that an atmosphere alone, whatever its gaseous mix, will be enough to create surface warmth (via a lapse rate).
In my humble opinion, I think this argument is in error.
An atmosphere that absorbs longwave radiation (with greenhouse gases) will be warmer than one that does not (no greenhouse gases). A warmer atmosphere will be thicker than one that is not. At present the median pressure in our atmosphere is at 18,000 feet, but with a colder atmosphere it might be at only, say, 12,000 feet. But I see no reason why the lapse rate in the shorter and denser atmosphere would be any different, as indeed it is not any different between our tropics and poles (where the tropopause is at very different levels).
Thus the non-greenhouse-gas (cooler and denser) atmosphere will result in lower surface temperatures, even though the total mass of the atmosphere is the same as the greenhouse-gassed (warmer and ‘taller’) atmosphere.
Any errors in this logic?

December 30, 2011 12:27 pm

@R. Gates :
[SNIP: This sort of repartee adds nothing to the discussion. Please. -REP]

Luther Wu
December 30, 2011 12:29 pm

R. Gates says:
December 30, 2011 at 12:22 pm
“Neither the best climate models nor the paleodata support his position.”
________________
LOL

Kohl P
December 30, 2011 12:30 pm

Mike G objects to the ‘trashing’ of CFL tubes. “Fair enough early CFL were not that good, but now they are a huge improvement in many locations, long lasting, cool running, very economical, and even quite quick starting.”
I continue to be dismayed by the casual way that you and many others pass off CFL tubes as being long-lasting etc.
My experience has been very much the opposite. I installed 4 units in my kitchen to replace the (admittedly bad) halogen spotlights etc. Within 12 months, 3 have had to have tubes replaced, and 1 has failed altogether. The said halogen spotlights had been in place for some 15 years without more than a couple of bulbs being changed!
My conclusion is that they are wonderful in theory, but in practice they are bloody awful. So I support Monckton’s comments in relation to same.
Kohl P

Ron
December 30, 2011 12:34 pm

“My earlier posting explained how the textbooks establish that if albedo and insolation were held constant but all greenhouse gases were removed from the air the Earth’s surface temperature would be 255 K. Since today’s temperature is 288 K, the presence as opposed to absence of all the greenhouse gases – including H2O, CO2, CH4, N2O and stratospheric O3 – causes 33 K warming.”
So, what if the calculation is actually wrong? The whole issue might be a chimera.

December 30, 2011 12:36 pm

R. Gates said @ December 30, 2011 at 11:33 am
“…but the high level of certainty that Lord Monckton states that the sensitivity to a doubling of CO2 by 2100 is “low enough to be harmless” is both scientifically and logically unsupportable.”
So why don’t you point out precisely where the Good Lord’s (actually Kiehl & Trenberth’s) science is incorrect and where the failure in logic occurs?

December 30, 2011 12:36 pm

“the vaunting ambition of a grasping, talent-free, scientifically-illiterate ruling elite of world-government wannabes”
Ouch, that’s got to hurt. Well said!

Kohl P
December 30, 2011 12:39 pm

R Gates – “Regardless, Lord Monckton’s certainty that the sensitivity to a doubling of CO2 is “low enough to be harmless” is simply not founded on either science nor logic. Neither the best climate models nor the paleodata support his position. We’ve not yet fully seen what 390 ppm of CO2 will do, how can we know all the feedbacks fast and slow from 560 ppm?”
OK, you disagree with Monckton. But how about giving us the benefit of your analysis of his argument? I would love to see exactly where his argument is wrong. Stating that it “is simply not founded on either science nor logic” etc just doesn’t do it for me. On the face of it, his logic appears good. But, as I say, if you know otherwise please set it all out. I for one am happy to be educated.
Kohl P

December 30, 2011 12:40 pm

Tsk Tsk, my Lord; my understanding from school is that Cnut did not try to stop the tide, he tried to demonstrate to his Court that even someone as powerful as himself was unable to rule the tide.

December 30, 2011 12:43 pm

Neil said @ December 30, 2011 at 11:25 am
“Thank you for your forceful feedback, Sir.
But I must ask you, what in your last paragraph but one do you mean by “adidact?””
The Good Lord made this up, but it’s not therefore meaningless. Didactic a. and n. Having the character or manner of a teacher or instructor; characterized by giving instruction; having the giving of instruction as its aim or object; instructive, preceptive. (from the OED). Hence adidact means uneducated.
The more fascinating word in this piece is gallimaufry, a dish better known to many as bubble and squeak 🙂

Dr Burns
December 30, 2011 12:47 pm

I would love to see how Christopher Monckton of Brenchley and other climate sensitivity experts respond to this excellent paper:
http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/#more-53850

December 30, 2011 12:53 pm

Kohl P said @ December 30, 2011 at 12:30 pm
“Mike G objects to the ‘trashing’ of CFL tubes. “Fair enough early CFL were not that good, but now they are a huge improvement in many locations, long lasting, cool running, very economical, and even quite quick starting.”
I continue to be dismayed by the casual way that you and many others pass off CFL tubes as being long-lasting etc.
My experience has been very much the opposite.”
Mine too. The room wherein I type has been typical. The incandescent I installed when I finished the House of Steel in 2003 died a few months ago. That is, it lasted 8 years. The CFL that replaced it lasted six months and cost 20 times as much; it wasn’t a Chickenfeed el cheapo! Not one of the half dozen CFLs I have purchased has managed to last even a year!

HankHenry
December 30, 2011 1:00 pm

My grandchildren? My grandchildren! For heavens sake what I wouldn’t give to be born into the world my grandchildren are being born into… and thank goodness I wasn’t born into the world of my grandparents – no telephone, no air conditioning, no indoor plumbing, but plenty of horses to feed and harness. Progress is real and solutions to any problems from global warming (if it be real) will be engineered.

R. Gates
December 30, 2011 1:00 pm

thepompousgit says:
December 30, 2011 at 12:36 pm
R. Gates said @ December 30, 2011 at 11:33 am
“…but the high level of certainty that Lord Monckton states that the sensitivity to a doubling of CO2 by 2100 is “low enough to be harmless” is both scientifically and logically unsupportable.”
So why don’t you point out precisely where the Good Lord’s (actually Kiehl & Trenberth’s) science is incorrect and where the failure in logic occurs?
______
Actually, Kiehl & Trenberth don’t make the same error of logic that Lord Monckton does, as they are not specifying sensitivity as the good Lord is, but simply the radiative forcing from greenhouse gases. On the other hand, Lord Monckton seems to want to pin down the current equilibrium temperature from the CO2 and other greenhouse gas increases we’ve seen so far to some specific number, and seems implicitly to assume that all the feedback processes, fast and slow, have already run their course for the temperature we currently have – yet this is patently not the case. How, logically, can we infer what future temperature increases will be from even higher levels of greenhouse gases, if we haven’t yet reached an equilibrium point from the current levels? The interactions from all the components in the climate system are so massively complex, with the full scale of feedbacks still uncertain, that only a computer model can even come close to figuring it out, and even then with levels of uncertainty. For Lord Monckton to express a high level of certainty that the sensitivity to a doubling of CO2 by 2100 is “low enough to be harmless” flies in the face of the complex science behind the climate, as well as the fact that we don’t even know what the sensitivity is to our current levels of CO2. In short, I’ll trust the uncertainty and range of estimates from the supercomputer models and paleodata before I’ll trust the implausible and illogical certainty of Lord Monckton.

Brian
December 30, 2011 1:01 pm

Lord Monckton,
Thank you for your calculation of climate sensitivity. I only want to add one thing you chose not to mention. Your statement that the lapse rate will not change significantly over the next few hundred years, with which I agree, carries the implication that no tropical tropospheric “hot spot” exists. That is, tropospheric temperatures will warm at the same rate as the surface. Most of the GCMs predict otherwise, of course, and the issue of WHY they predict otherwise cuts to the heart of the matter. High sensitivity is only physically possible if the effective blackbody surface of Earth (at ~5 km, as you say) is rapidly expanding. Earth’s surface would then be warmed by the greater distance of fall in the gravitational field by the molecules starting at the effective surface (i.e., exactly what the lapse rate describes). Fortunately, satellite and balloon data all agree that the hot spot is not in evidence, providing further confirmation that Earth’s climate sensitivity is nothing more than that of a blackbody, the usual 1.2K per doubling of CO2.

Richard M
December 30, 2011 1:03 pm

I think Lord Monckton has done a good job of demonstrating that the warmists’ claims of alarm fail even if you take their assumptions as gospel. However, we now have the UCT and we no longer have to take their assumptions as gospel. In addition, the alarmists continue to ignore the “cooling effect” of GHGs. I thank Brian H for pointing to a paper that attacks CO2 alarmism from first principles.
http://jinancaoblog.blogspot.com/2011/11/physical-analysis-shows-co2-is-coolant.html?showComment=1325277941758#c4933496089610088013
With this new evidence that CO2 can only cool our atmosphere and the UCT, which provides another methodology for explaining the existing temperature, at long last it appears we no longer need to accept anything from the alarmists.

Dave
December 30, 2011 1:15 pm

Lord Monckton.
As a keen observer with a thirst for the truth, I finally get it! I am a layman to the science, but I do want to thank you and WUWT for shining a light on a very dark period of science. I find that WUWT and most skeptical sites and blogs are filled with fact-laden science, information and discussion. As opposed to the warmist elitism: their panicked, breathless, never-ending alarmism, the nasty put-downs and their junk, manipulated science, their editing out of anything unfavorable to the CAGW/climate disruption cause on the warmist sites and blogs, and their glassy-eyed watermelon troll armies. No wonder they are losing the belief and interest of the people.
I wish to thank Anthony, Lord Monckton, and the fantastic contributors and commentators who keep me coming back to the greatest open university in the world.
A Happy New Year to All.
David.

J Martin
December 30, 2011 1:15 pm

R Gates.
You said, “But some confidence can be taken from the fact that both paleoclimate data and GCM’s seem to be converging on a reasonably consistent range in the area of 3C, with error bars of about 1C on either side at a 95% confidence level.”
Really? And yet current temperatures have not reached any of Hansen’s 3 scenarios and are currently below his estimate for a zero percentage increase in CO2, whereas the increase in CO2 has climbed relentlessly. It would be difficult to be more wrong. I can’t find the graph right now, but I think current temperatures are below even the lowest error bar and heading downwards.
/ Sarc But don’t worry, all the satellite measurements will be so obviously wrong that Hansen can simply correct them to suit his models. After all, how can an upstart multimillion-dollar satellite possibly measure the planet’s temperature more accurately than Hansen can model it? School textbooks will praise the Hansen coefficient, one of its primary virtues being the way it increases over time to correct those pesky satellites. Sarc/
What I want to know from you is what temperature drop would convince you that co2 caused global warming is a load of nonsense, and it’s the sun that is the major player. In say ten years. What would it take ? A Dalton size drop or perhaps a Maunder size drop ? Less perhaps. Do you have a view on where temperatures will be in ten years ? Go on, commit yourself, make a prediction.

Richard G
December 30, 2011 1:16 pm

Kudos Christopher Monckton for proving simultaneously that climate sensitivity is zero, that this produces positive feedback (in the comments), and creates instability (in the fear mongers) which is leading rapidly to a tipping point in the climate debate. You are without peer. Keep up the great work.
Happy New Year WUWTers.

R. Gates
December 30, 2011 1:19 pm

Richard M says:
“With this new evidence that CO2 can only cool our atmosphere and the UCT, which provides another methodology for explaining the existing temperature, at long last it appears we no longer need to accept anything from the alarmists.”
______
And a new age of “science” has been born! Of course, such beliefs as “CO2 can only cool our atmosphere” will not qualify as science, but certainly, in the minds of those believing such nonsense, a new “age” will have begun.

RockyRoad
December 30, 2011 1:19 pm

R. Gates says:
December 30, 2011 at 11:33 am

… but the high level of certainty that Lord Monckton states that the sensitivity to a doubling of CO2 by 2100 is “low enough to be harmless” is both scientifically and logically unsupportable.

I’m afraid I’ll be perhaps the 10th person to ask for scientific substantiation of your above claims, R., and I’m also afraid I’ll be just as disappointed as the other 10.
But let me add what I’m beginning to understand, which is that sensitivity is so low that it is indeed “low enough to be harmless”. We haven’t seen appreciable warming for a dozen years now even against increasing atmospheric CO2 levels, and somehow that’s supposed to keep hidden heat at bay and release it at some future date? Stay tuned for another 5 until we reach the magic “17 year” mark to determine if we’re really seeing something climatically significant or not.
So the bottom line is that your arguments are just a bunch of conjecture, R. They sound plausible, but so are a lot of things that don’t happen because the laws of nature prevent them from happening. It seems just as logical to blame warming on the lagging 800-year bounce of CO2 found after major changes in the earth’s temperature–it is tempting to apply “Back to the Future” concepts but we all know that’s not even true in Hollywood.
And “low enough to be harmless” completely destroys your political (and probably professional) objectives, doesn’t it, R, so obviously you’d push back. It just isn’t working.

AndyG55
December 30, 2011 1:20 pm

@RGates “GCM’s seem to be converging on a reasonably consistent range in the area of 3C, ”
DOH !! That’s because they make assumptions that bias them in that direction.
The GCM’s are UNvalidated models, and even people like Mr Jones from the CRU have admitted that they are ALL “not correct” (i.e. they do not pass ANY validation procedures, such as reality).
Now I know that in so-called “Climate Science”, they take all these “not correct” models and somehow use them for averages and somehow create error bars, but , I’m sorry, many wrongs DO NOT make a right !!!

December 30, 2011 1:29 pm

Lord Monckton – Truly appreciate your providing prompt and detailed feedback here. It is a testament to your intellect and drive to engage ‘the masses’.
But I implore you to not assume the absorbing sphere and emitting sphere are identical in radius.
If my assumptions are right, the primary energy absorption layer is the ocean’s surface – not the atmosphere! Given the incidence-angle issue, the solar flux per square meter would be much smaller than the total surface flux divided by 4. If you averaged the total inbound flux of one hemisphere across the total surface area of the globe it would be similar to comparing a sphere with half the surface area – would it not? With your quarter-sized equation it is a sphere one fourth in surface area. This is a dramatic difference when you compare inbound versus outbound fluxes.
I agree the emission layer is very high up in the atmosphere, and covers an area of 4πr². It is also uniform in emission capacity.
So, if you compute the emissivity rate per metre-squared at a high altitude and the absorption rate which is modeled by a sphere with much less than a 4th of the surface area of the emission layer – you realize altitude does matter.
Also, do not forget the fact the Earth’s core is also warming things up, and the flux from the core to the oceans is not steady either.

TerryC
December 30, 2011 1:29 pm

Sorry if this is somewhat off topic, but the statement that the temperature of the moon is -18C has appeared repeatedly in this and other recent threads. Where does this number come from and how was it calculated? My perusal (admittedly brief) of NASA sources indicates daytime temps of >100C, nighttime temps of < -150C and polar temps of -240C. These temperatures are also those of equilibrated objects since there is no effective atmosphere and therefore no possible atmospheric temperature.

Logan in AZ
December 30, 2011 1:30 pm

The feedback factors treated on WUWT are physical mechanisms. The dimethylsulfide feedback from the oceans is a major factor that is ignored by those who only study or think about physics. The Idso group has a nice summary of such research:
http://www.co2science.org/subject/d/summaries/dms.php
— and the concluding paragraph is:
In conclusion, it is unfortunate that in light of the overwhelming empirical evidence for both land- and ocean-based DMS-driven negative feedbacks to global warming, the effects of these processes have not been properly incorporated into today’s state-of-the-art climate models. Hence, the warming they predict in response to future anthropogenic CO2 emissions must be considerably larger than what could actually occur in the real world. In fact, it is very possible that these biologically-driven phenomena could totally compensate for the warming influence of all greenhouse gas emissions experienced to date, as well as all those that are anticipated to occur in the future.
As usual, one concludes that the so-called climate models are just political propaganda.

December 30, 2011 1:39 pm

R. Gates said @ December 30, 2011 at 1:00 pm
“I’ll trust the uncertainty and range of estimates from the supercomputer models and paleodata before I’ll trust the implausible and illogical certainty of Lord Monckton.”
The supercomputer models predict:
* Increasing water vapour in the atmosphere that cannot be observed
* Upper troposphere temperatures rising faster than surface temperatures that cannot be observed
* A tropospheric hotspot that cannot be observed
* A different rate of temperature change in the early 20th C than observed
and you trust them. You’re a strange sample of humanity, Mr Gates.

Roger Knights
December 30, 2011 1:41 pm

“The working definition of “troll” that Monckton seems to be using ”
“I give the chart below since you seemed to miss it Christopher.”
“Lord Monckton of Brenchley, …”

Hey, guys, here’s another way of referring to him: “Brenchley.” That’s how the kings in Shakespeare referred to each other: as “France” and “England.” If he were to adopt that style himself, it would really rattle his critics’ cages.

RockyRoad
December 30, 2011 1:42 pm

R. Gates says:
December 30, 2011 at 1:19 pm

Richard M says:
“With this new evidence that CO2 can only cool our atmosphere and the UTC which provides another methodology for explaining the existing temperature, at long last it appears we no longer need to accept anything from the alarmists.”
______
And a new age of “science” has been born! Of course, such beliefs as “CO2 can only cool our atmosphere” will not qualify as science, but certainly, in the minds of those believing such nonsense, a new “age” will have begun.

The above reply is submitted as “Exhibit A” that R. doesn’t reply with anything except belittling comments.
I’m waiting for your scientific substantiation, R. Still waiting.

R. Gates
December 30, 2011 1:43 pm

J Martin says: (to R. Gates)
“What I want to know from you is what temperature drop would convince you that co2 caused global warming is a load of nonsense, and it’s the sun that is the major player. In say ten years. What would it take ? A Dalton size drop or perhaps a Maunder size drop ? Less perhaps. Do you have a view on where temperatures will be in ten years ? Go on, commit yourself, make a prediction.”
_____
I would anticipate that we’ll see at least 2 record-setting warm years (warmer than any in the instrument record) in the next 5; however, the current quiet sun, extended La Nina periods, and higher aerosols certainly help to balance out the forcing from CO2.
I do expect at least a Dalton type solar minimum, and even think a Maunder type is possible. These will provide a very good opportunity to compare the strength of CO2 and other greenhouse forcing to the cooling caused by the solar slowdown. We could even see a 20 year period of flat temps, though I think this is highly unlikely.
What would it take for me to not believe that anthropogenic CO2 and other greenhouse gas increases are not affecting our climate? In addition to some huge revolution in science that refutes basic radiative theory, I suppose if by 2030 we see Arctic sea ice return to the levels we saw in the mid 20th century and long-term global temps continue in the downward pattern we saw generally after the Holocene climate optimum, I might begin to suspect that the feedbacks related to CO2 increases were not as strong as the models indicated.

peter_ga
December 30, 2011 1:45 pm

I have two problems with your analysis. Firstly, the lumped thermostat control system should be an integral rather than a proportional control system. Secondly, the small-signal negative feedbacks are much too low.
Physically, the surface of the earth has a certain heat capacity. There are various heating and cooling influences that are estimated in units of power per unit area. Temperature is proportional to the time integral rather than the sum of these influences. Feedbacks are proportional to the resulting temperatures. These feedbacks may easily be summed to determine their influence if expressed in units of power per unit area per unit of temperature.
If the system is an integral controller, then positive feedback is completely unstable. With a derivative controller, positive feedback may reach unity.
Concerning the size of the feedbacks, clearly the temperature is very stable, which indicates a large gain and large amount of negative feedback.
The water vapour GHG positive feedback effect saturates exponentially. The small signal effect is the differential of the power with respect to temperature. This value exponentially decays from the large signal value.
Average surface temperature is closely pegged to the saturated convective lapse rate from the altitude where the atmosphere is in approximate blackbody equilibrium. That level is about 20,000 feet, the lapse rate is about 1.5 K per thousand feet, and surface temperatures are about 30 K warmer. If this temperature differential were even slightly less, convection would stop, and that avenue of surface heat loss would cease. The differential of this heat loss mechanism with respect to temperature must be at least an order of magnitude greater than its large-signal value or any GHG positive feedbacks.
So if you could address these issues in your forthcoming book I would definitely consider buying it.
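As a rough illustration of the stability point raised here (an editorial sketch in Python, not peter_ga’s own formulation), the standard zero-dimensional energy-balance model makes the “positive feedback may reach unity” idea concrete: lambda_0 is the no-feedback (Planck) restoring parameter, roughly the 3.2 W/m2/K usually quoted, f is an added temperature feedback, and the heat capacity and feedback values below are purely illustrative assumptions.

def equilibrium_warming(forcing, lambda_0=3.2, f=0.0):
    """Equilibrium of dT/dt = (forcing - (lambda_0 - f)*T) / heat_capacity."""
    net = lambda_0 - f
    if net <= 0:
        return float("inf")  # runaway: the positive feedback matches or exceeds the restoring term
    return forcing / net

def step_response(forcing, lambda_0=3.2, f=0.0, heat_capacity=8.0, years=200):
    """Explicit one-year time-stepping of the same equation."""
    T = 0.0
    for _ in range(years):
        T += (forcing - (lambda_0 - f) * T) / heat_capacity
    return T

if __name__ == "__main__":
    F2x = 3.7  # forcing for doubled CO2, W/m^2
    for f in (0.0, 1.6, 3.2):
        print(f"f = {f} W/m^2/K: equilibrium {equilibrium_warming(F2x, f=f):.2f} K, "
              f"after 200 yr {step_response(F2x, f=f):.2f} K")

While f stays below lambda_0 the response remains finite; once f reaches lambda_0 the integrator runs away, which is the loop-gain-of-unity boundary in control-system language.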

Luther Wu
December 30, 2011 1:45 pm

R. Gates says:
December 30, 2011 at 1:00 pm
In short, I’ll trust the uncertainty and range of estimates from the supercomputer models and paleodata…
We all know that you are a spokesman for the CAGW camp, but in light of the complete ineptitude of all known climate models at predicting anything and the frequently dishonest, yet widely heralded inanities emanating from those studying paleodata, why do you even bother?
Games aren’t won by the noisiest cheer leaders, but those without proper underpinnings are certainly a distraction, just like you: that’s the extent of your success.

December 30, 2011 1:50 pm

Logan in AZ said @ December 30, 2011 at 1:30 pm
“The feedback factors treated on WUWT are physical mechanisms. The dimethylsulfide feedback from the oceans is a major factor that is ignored by those who only study or think about physics.”
But of course the biological effects must be left out, or else there’s nothing to be alarmed about. I was amused when someone decided to test the release-of-clathrates-from-permafrost idea in situ. The plant growth shaded the ground, enabling the permafrost and clathrates to persist under warmer conditions. And contra R Gates’ claim that paleoclimatology validates the models, we know that only three thousand years ago temperatures in the high latitudes supported trees where now there is tundra. Those temperatures were supposedly high enough to release the methane from the permafrost.

R. Gates
December 30, 2011 1:54 pm

TerryC says:
December 30, 2011 at 1:29 pm
Sorry if this is somewhat off topic, but the statement that the temperature of the moon is -18C has appeared repeatedly in this and other recent threads. Where does this number come from and how was it calculated. My perusal (admittedly brief) of NASA sources indicate daytime temps of >100C, nighttime temps of < -150C and polar temps of -240C. These temperatures are also those of equilibrated objects since there is no effective atmosphere and therefore no possible atmospheric temperature.
_____
If you were to integrate the moon's surface temperatures, broken down into regions as small as possible, at any given point in time the "average" of that integration would be about 250K, or about -23C. This comes from:
http://ode.rsl.wustl.edu/moon/

peter_ga
December 30, 2011 1:55 pm

Correction. Where I said “With a derivative controller, positive feedback may reach unity” I meant “With a proportional controller, positive feedback may reach unity”.

Neil
December 30, 2011 1:55 pm

in Minnesota
@thepompousgit
Thanks for your replies. You’re both right. But WUWT is about “puzzling things in life,” so here’s one for you.
“didactic” – Pocket Oxford Dictionary, 1928 – “meant or meaning to instruct.”
“didact” – dictionary.com, 2011 – “a didactic person; one overinclined to instruct others.”
I take the second definition, which is why I queried the good Viscount’s use of the negative of the word. Warmists are way over-inclined to “instruct” us, are they not?
How did it come about that the meaning of “didactic” has changed to almost its opposite in just 83 years?

Steve from Rockwood
December 30, 2011 2:01 pm

What kind of troll doesn’t travel with a reasonable computer? Even Shrek has a dual core Pentium.

Richard M
December 30, 2011 2:02 pm

R. Gates says:
December 30, 2011 at 1:19 pm
Richard M says:
“With this new evidence that CO2 can only cool our atmosphere and the UTC which provides another methodology for explaining the existing temperature, at long last it appears we no longer need to accept anything from the alarmists.”
______
And a new age of “science” has been born! Of course, such beliefs as “CO2 can only cool our atmosphere” will not qualify as science, but certainly, in the minds of those believing such nonsense, a new “age” will have begun.

Anyone want to make a bet as to whether Gates actually read the link? It’s all about science rather than the religion practiced by Gates. Let’s see if R. Gates can actually provide a scientific argument that refutes the link. The clock is ticking …

Roger Knights
December 30, 2011 2:04 pm

Luther Wu says:
December 30, 2011 at 1:45 pm
Games aren’t won by the noisiest cheer leaders,

I put it thusly: It isn’t the bull that wins the bullfight.

Babsy
December 30, 2011 2:22 pm

mkelly says:
December 30, 2011 at 11:28 am
“If I control the price of, how much, and what kind of energy you can have, I can control you.”
This is EXACTLY what drives this ‘scientific’ fiasco. It is all about control. Taking from the producers and giving to the parasites.

Bob Fernley-Jones
December 30, 2011 2:24 pm

R. Gates @ December 30, 11:58 am

[Wayne], if an atmosphere-less earth would have an average temperature of 154K, why, does the atmosphere-less moon have one of 250K? Both would receive about the same energy from the sun.

I’m curious to know how meaningful any calculation of the so-called average temperature of the moon is, and just how it is integrated. Can you advise please?
One source gives a T range of 260K, which is greater than your average. BTW, three major differences between the moon and an airless & ocean-less Earth (with current geology) are the regolith thermal factors, the rotation speed, and the albedo, which I guess would vary much more on Earth for very different regional geological reasons. It would be interesting to study the thermodynamics of heating and cooling on the moon (at the surface and at depth) over its 4-week cycle.
Wayne, sorry for interjecting, but I found it hard to resist.

Brian H
December 30, 2011 2:33 pm

Kohl P. et al;
Yes, stop trashing CFLs!! After all, once you put them in the trash they’re illegal hazardous waste.
And since they conk out so fast, the only way to not trash them is not to buy them. That’s the solution I’ve long since adopted.
LOL

Richard G
December 30, 2011 2:37 pm

As someone who was schooled in science BCE (before the computer era), I am always surprised by the blind faith of Digital Natives (born after computers) in the ability of supercomputers to produce valid results from flawed computer models.

AndyG55
December 30, 2011 2:54 pm


Yep, supercomputers do calculations REALLY FAST.
That means that any errors in the programming, inputs, assumptions etc. get reproduced REALLY FAST !!

AndyG55
December 30, 2011 2:58 pm

PS: because of this “Really Fast” stuff, it also gives anyone with the desire to do so (e.g. the AGW brethren) time to adjust their data, inputs and assumptions until they get the “answer” they were looking for.

mondo
December 30, 2011 3:00 pm

Lord Monckton. I hesitate to say this, but it seems to me that you may have made an error when you say: “Rosco is surprised by the notion of dividing the incoming solar irradiance by 4 to determine the Wattage per square meter of the Earth’s surface. I have taken this textbook step because the Earth intercepts a disk-sized area of insolation, which must be distributed over the rotating spherical surface, and the ratio of the surface area of a disk to that of a sphere of equal radius is 1:4.”
Your calculation would be correct if the disc-sized area of insolation were spread over the entire surface of the earth, but I think that insolation is actually being spread only over the hemisphere facing the sun (and not the other hemisphere), thus the ratio used should be 2, not 4.
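The two averaging conventions are easy to put side by side; here is a short Python sketch (an illustrative check only, using the round values S = 1367 W/m2 and albedo 0.3 that appear elsewhere in this thread). The intercepted beam S(1-a)·πR², spread over the sunlit hemisphere of area 2πR², gives S(1-a)/2 at any instant; spread over the whole rotating sphere of area 4πR², the area that radiates to space day and night, it gives S(1-a)/4, which is the figure the textbook balance uses.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
S, ALBEDO = 1367.0, 0.3  # round illustrative values

def effective_temp(flux):
    """Blackbody temperature that radiates the given flux."""
    return (flux / SIGMA) ** 0.25

absorbed_beam = S * (1 - ALBEDO)
for divisor, label in ((2, "sunlit hemisphere, instantaneous"),
                       (4, "whole sphere, time-averaged")):
    flux = absorbed_beam / divisor
    print(f"{label}: {flux:6.1f} W/m^2 -> {effective_temp(flux):5.1f} K")

This prints roughly 303 K for the divide-by-2 case and roughly 255 K for the divide-by-4 case; both are answers to different questions, and it is the divide-by-4 figure that balances the emission leaving the full sphere.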

Kohl P
December 30, 2011 3:03 pm

R Gates – I asked for your analysis of Monckton’s argument. I notice that I am not alone in asking you for more than generalities and statements amounting to ‘arm waving’. Why do you not give us your analysis? Can you not? Will you not? With all these negatives I am tempted to resort to the ‘D’ word! 😉
I noticed also that you asked: “What would it take for me to not believe that anthropogenic CO2 and other greenhouse gas increases are not affecting our climate?” Now, I know that you were merely replying to a question, but I can’t help thinking that you are most unfortunately undermining the very foundations of your own position(s) on global warming (and mine).
Surely, the warming effect of CO2 and the warming of the earth are not the issues? The issue is, was and will be how much warming is caused by CO2 (I suppose this is the sensitivity issue) and, secondarily, how much CO2 is caused by human activities (the anthropogenic issue).
All else is fluff and nonsense is it not?
Kohl P

thingadonta
December 30, 2011 3:09 pm

The bureaucratic-academic complex has never liked change; that is why they couldn’t see the obviousness of biological evolution until the second half of the 19th century, that is why they didn’t like the fact that the earth spun and moved and the sun didn’t, and that is also why they hate climate change and are attempting to stop it from happening. Their pathological need to control and supervise anything and everything, whether they understand it or not, knows no bounds.

December 30, 2011 3:12 pm

My wife thinks CFLs are a waste of space as they fail to adequately light her reading of many great works of literature, philosophy and political theory.
Her greatest complaints come in the winter months, when the old tungsten bulbs used to supplement our central heating as well as provide adequate lighting. Coincidentally, that is when the evenings are dark and cold and artificial light and heating are needed. In fact, when it is difficult to read by CFLs she has a tendency to watch more TV, which uses far more watts and so contributes even more to cutting our heating bills, although I’m sure it is not good for her mental health or the total CO2 content of the atmosphere.
I personally tell her that none of the mumbo jumbo about CFLs really matters, and I have wisely invested in a bulk purchase of 60w and 100w tungsten bulbs. The TV will last years longer now it gets less use, so reducing the need for a replacement at massive CO2 deficit by one of those plasma jobbies that use oodles of electricity; her mental health is not at risk, she is happy, I have brownie points, but my heating bill has increased.
I did consider solar panels, but the sun doesn’t shine very effectively in the winter and in the summer we don’t use much electricity, although the shadow cast by the solar panels would keep our loft cooler. That would be economically efficient if I were to use an air conditioning system to keep my loft cool in the summer months, but since I am not insane I don’t do that.
Even the UK government has realised that giving massive tariffs to UK users of solar panels is a bit naff, even though it does continue the process of filtering money to the wealthier among us.
They should stick to more traditional methods of trickle up like giving 125% mortgages to people who can’t afford them, so raising house prices so the benefits of such rises flow to those who don’t need the money.
Funny things feedbacks, and feedbacks of feedbacks, as the man in the other place says “keep your eye on the pea.”

gnomish
December 30, 2011 3:13 pm

hey richardM- nice that some practical engineering physics is allowed these days..
any cooling system is improved by a better conducting fluid, eh?
increasing the heat capacity of the working fluid in any way also improves efficiency of heat transport.
for serious refrigeration, though, a phase change is the ticket.

December 30, 2011 3:16 pm

If a block of ice is placed in a warm room, will the room cool slower or quicker? The Trenberths of this world, who believe that back radiation from CO2 in the troposphere can create heat on the surface or at least ‘slow down’ heat loss from the surface, tell us that the radiating ice block will help keep the room warmer for longer. The idea really is that dumb.

Doug Proctor
December 30, 2011 3:28 pm

Lord Monckton, or anyone else for that matter:
If the Earth were wrapped in a thermally low conductivity blanket, the loss of heat from the interior would create a cozy 18°C (or so, by my estimates) place to sleep on the bedrock. If it weren’t for the oceans drawing the heat away, the bottom of the oceans would be hotter than that, as the thin oceanic crust makes the seafloor closer to the mantle’s magma than the prairie grasses are above the continental crust.
As I sit here at my keyboard typing, 6 meters below my fingertips it is 14°C, increasing to 36°C at 1500m and the PreCambrian surface. The only reason the PreCambrian stays 36°C is because all the heat bleeding up through the sedimentary cover is being replaced from within the PreCambrian, and the only reason 6m below me it remains 14°C is that the surface keeps losing all the heat it gains from the rocks below.
In a time where Trenberth wrings his hands about a “missing” 0.85 W/m2, this portion of the planetary energy cycle seems important. I don’t know what the energy flux is, but it is enough to bring some subglacial surfaces to melting. It is enough to power a home heating unit with 100m of piping 16m deep. Perhaps I don’t see where it is included in the TOA/Surface/Atmosphere/Oceanic energy partitioning.
Is the planetary loss of heat a) significant, or b) hidden within other factors? What is the flux?
Anyone out there with thoughts on this?

Harriet Harridan
December 30, 2011 3:31 pm

Great post Monckton.
I’m no scientist, and claim no skill at maths, but I do have Excel..! Plugging in -19C at 0 ppm CO2, +14C at 280 ppm, and +14.7C at 390 ppm plots a very nice log curve, with the formula:
Temp = 2.11 × ln(CO2) + 2.09
So for a doubling of CO2, plugging in 580 ppm CO2 gives a temp of 15.5C: a rise of 1.5C. Slightly more than M’lord, but substantially less than the IPCC’s “super”computer.
Do I win a prize?
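The fit quoted above can be reproduced exactly from the two non-zero points (ln(0) is undefined, so the 0 ppm point cannot enter a logarithmic fit); a minimal Python sketch, using the points as given:

import math

(c1, t1), (c2, t2) = (280.0, 14.0), (390.0, 14.7)  # ppm, degrees C
a = (t2 - t1) / (math.log(c2) - math.log(c1))      # slope per unit ln(CO2)
b = t1 - a * math.log(c1)                          # intercept

print(f"Temp = {a:.2f}*ln(CO2) + {b:.2f}")             # ~ 2.11*ln(CO2) + 2.09
print(f"T at 580 ppm: {a * math.log(580) + b:.1f} C")  # ~ 15.5 C
print(f"Rise per doubling: {a * math.log(2):.2f} C")   # ~ 1.46 C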

Joel Shore
December 30, 2011 3:42 pm

Monckton of Brenchley says:

One regular troll – one can tell he is a troll by his silly hate-speech about how I “continue to fool yourself and others” – attempted to say that Kiehl and Trenberth’s 86-125 Watts per square meter of total forcing from the presence of the top five greenhouse gases included the feedbacks consequent upon the forcing, asserting, without evidence, that I (and by implication the two authors) was confusing forcings and feedbacks.

I have given not only evidence but analogies to help people understand. I will repeat the analogy for the benefit of the readers in a subsequent comment.

No: Kiehl and Trenberth are quite specific in their paper: “We calculate the longwave radiative forcing of a given gas by sequentially removing atmospheric absorbers from the radiation model. We perform these calculations for clear and cloudy sky conditions to illustrate the role of clouds to a given absorber for the total radiative forcing. Table 3 lists the individual contribution of each absorber to the total clear-sky [and cloudy-sky] radiative forcing.” Forcing, not feedback. Indeed, the word “feedback” does not occur even once in Kiehl & Trenberth’s paper.

It is irrelevant what words appear in their paper. What is relevant is that you need to understand that whether something is considered a “forcing” or a “feedback” depends on context. By considering the 86-125 W/m^2, you are by your own admission including the radiative effects of water vapor. However, the question is what would happen to water vapor if one removed all of the non-condensable greenhouse gases from the atmosphere (and hence decreased the forcing only by an amount equal to that for the non-condensable greenhouse gases). The answer is that the resulting cooling would cause much of the water vapor to condense out…and as a result you would lose much, if not most, of the radiative effects due to water vapor. It is clear that your calculation of the climate sensitivity is now in error, because you have assumed that one has to take the water vapor out of the atmosphere in order to lose its radiative effect, whereas in actual fact you lose most of the radiative effect just by taking the non-condensable greenhouse gases out.
So, your calculation that you claimed was giving a climate sensitivity that includes the water vapor feedback is clearly seen not to be…It is assuming there is no water vapor feedback.

However, as Paltridge et al. (2009) have demonstrated, it is not clear that the water vapor feedback is anything like as strongly positive as the IPCC would like us to believe. Below the mid-troposphere, additional water vapor makes very little difference because its principal absorption bands are largely saturated. Above it, the additional water vapor tends to subside harmlessly to lower altitudes, again making very little difference to temperature. The authors conclude that feedbacks are somewhat net-negative, a conclusion supported by measurements given in papers such as Lindzen & Choi (2009, 2010), Spencer & Braswell (2010, 2011), and Shaviv (2011).

You can cherrypick a few papers that support your point-of-view on the feedbacks while ignoring the mountain of other papers that do not. However, even if you think the water vapor feedback is not important, it does not mean that you are allowed to do a calculation that ASSUMES it doesn’t exist and then use this to demonstrate this assumption. That is a circular argument.

It is also worth recalling that Solomon et al. (2009) say equilibrium will not be reached for up to 3000 years after we perturb the climate. If so, it is only the transient climate change (one-third of the IPCC’s equilibrium estimate) that will occur in our lifetime and in that of our grandchildren. Whichever way you stack it, manmade warming in our own era will be small and, therefore, harmless.

What a silly argument! The correct way to look at things is to ask what percentage of the equilibrium is reached in a given amount of time, not to create a false dichotomy between the equilibrium and transient responses. And, it is also worth noting that nearly all of the IPCC projections show what the temperature will be in 2100, so they already have essentially ignored the longer term effects: The fact that the approach to equilibrium is fairly slow does not reduce the IPCC projections for temperature change by 2100; it merely means that these projections underestimate the eventual temperature change that will occur (even if greenhouse gas levels have completely stabilized by 2100).

Joel Shore
December 30, 2011 3:56 pm

Here is the analogy that makes it very clear how Monckton’s calculation of climate sensitivity is in error:
Suppose that Bill Gates makes an offer that for every dollar the public contributes to fight hunger, he’ll throw in a certain amount of money that he does not disclose. Now, let’s suppose that this program operates for one year: The public contributes a certain amount, Bill Gates does his matching and the total amount of money that goes to fight hunger is $100 million. Suppose that with this $100 million, 1 million people can be fed.
How much would the public have to contribute in order to feed 5 million hungry people the next year? What Monckton would say is the following: Since we have found it takes $100 million to feed 1 million people, it costs $100 to feed one person. Therefore, you should multiply the 5 million by the $100 and the conclusion is that the public has to contribute $500 million.
What I am arguing is that Monckton is ignoring the “Bill Gates” feedback. For example, let’s imagine that the actual fact is that Bill Gates is matching public contributions at a 4-to-1 match. This means the first year, of the $100 million dollars that was spent, $20 million came from the public and $80 million from Bill Gates’s match.
So, in fact, to feed 5 million people, the public only has to contribute $100 million because Bill Gates will throw in $400 million for a total of $500 million and hence 5 million people will be fed.
What Monckton seems to be thinking is that his calculation included the Bill Gates feedback because he calculated the result that it costs $100 per person to feed the poor using both the amount that the public had contributed ($20 million) and the amount from the Bill Gates feedback ($80 million). [In fact, he was not even able to separate them because he did not know how much Gates contributed vs. how much the public had.] However, by including the “Bill Gates feedback” as a “forcing” rather than a “feedback”, we see that his calculation fails miserably: the amount that it predicts the public must contribute is correct only if the Bill Gates feedback were completely absent.
Really, this “forcing” vs “feedback” stuff is not that hard to understand. The terms may seem unfamiliar to people, which is why mistakes such as Monckton’s can be made so easily. (Willis Eschenbach has made the same error.) But, I think if one works with analogies it becomes possible for anyone to understand it.
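The arithmetic of the analogy, with the numbers given above, fits in a few lines of Python (an illustrative sketch of the example in this comment, nothing more):

match_ratio = 4.0                        # Gates adds $4 for every public $1, as in the example
public_year1, people_fed = 20e6, 1e6
total_year1 = public_year1 * (1 + match_ratio)            # $100M spent in year 1

cost_per_person_from_total = total_year1 / people_fed     # $100, with the match folded in
cost_per_person_public_only = public_year1 / people_fed   # $20, the public share alone

target = 5e6  # people to feed in year 2
print(f"sizing from the total (match treated as public money): "
      f"${cost_per_person_from_total * target / 1e6:.0f}M needed from the public")
print(f"sizing from the public share (match treated as a feedback): "
      f"${cost_per_person_public_only * target / 1e6:.0f}M needed from the public")

The first figure is the $500 million of the flawed calculation; the second is the $100 million that is actually required once the 4-to-1 match is treated as a feedback on the public contribution.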

December 30, 2011 3:57 pm

Neil said @ December 30, 2011 at 1:55 pm
in Minnesota
@thepompousgit
Thanks for your replies. You’re both right. But WUWT is about “puzzling things in life,” so here’s one for you.
….
How did it come about that the meaning of “didactic” has changed to almost its opposite in just 83 years?”
Popular usage changes the meanings of words over time. That’s not much of an explanation; it’s merely an observation. The OED is particularly useful for tracing such shifts.
“Climate (from OED):
[… The meaning passed in Greek through the senses of ‘slope of ground, e.g. of a mountain range’, the supposed ‘slope or inclination of the earth and sky from the equator to the poles’, ‘the zone or region of the earth occupying a particular elevation on this slope, i.e. lying in the same parallel of latitude’, ‘a clime’, in which sense it was adopted in late L.]
A region considered with reference to its atmospheric conditions, or to its weather.
Condition (of a region or country) in relation to prevailing atmospheric phenomena, as temperature, dryness or humidity, wind, clearness or dullness of sky, etc., esp. as these affect human, animal, or vegetable life.”
None of these meanings capture what the warmists, MSM etc mean by climate…

December 30, 2011 4:12 pm

Bob Fernley-Jones said @ December 30, 2011 at 2:24 pm
“I’m curious to know how meaningful any calculation of the so-called average temperature of the moon is…”
Temperature is the measure of the average kinetic energy of the particles of a substance in Local Thermodynamic Equilibrium (LTE). Or at least it was when I studied physics these many long years ago. It’s far more likely that the moon is made of green cheese than that it is in LTE.

Philip Bradley
December 30, 2011 4:13 pm

TBH, I find Lord Monckton’s arguments no more persuasive than I do the IPCC’s.
The Earth’s climate is so complex and poorly understood that I need real-world measurements that unequivocally support any theoretical explanation before I find that explanation persuasive.
Although in fairness, LM’s point is to undermine the IPCC’s arguments rather than present a competing (theoretical) explanation.

Rosco
December 30, 2011 4:14 pm

“Rosco is surprised by the notion of dividing the incoming solar irradiance by 4 to determine the Wattage per square meter of the Earth’s surface. I have taken this textbook step because the Earth intercepts a disk-sized area of insolation, which must be distributed over the rotating spherical surface, and the ratio of the surface area of a disk to that of a sphere of equal radius is 1:4.”
I am not surprised in the slightest by this method of calculating the so-called “effective” temperature of the Earth – what I am surprised about is that the use of an average outgoing IR over the sphere of the Earth to calculate the maximum temperature the Sun can heat the Earth to is accepted by anyone as reasonable.
In the radiative balance equations I see all the time – (insolation) S(1-a) × πr² = σT⁴ × 4πr² (outgoing IR) – which, by removing π and r², reduces to
S(1-a) = 4σT⁴.
Surely this simply says:-
1. Radiative balance is achieved if the Earth radiates one quarter of the insolation. Incoming = outgoing x 4.
2. The temperature of 255 K is clearly associated with the outgoing radiation, not the insolation.
3. This construct does not justify this statement from Kiehl & Trenberth :-
“Here we assume a “solar constant” of 1367 W m-2 (Hartmann 1994), and because the incoming solar radiation is one-quarter of this, that is, 342 W m-2, a planetary albedo of 31% is implied.”
Clearly, if one uses S(1-a) to calculate the maximum temperature the Sun “could” heat the Earth to (minus all other considerations such as evaporation, convection etc etc) you arrive at 360 K or ~ 87 degrees C.
Is this reasonable ?
That this is so is easily verifiable – let’s use the moon, with an albedo of 0.12. We have 1367 × 0.88 = σT⁴, which gives T as about 381 K, or about 107 degrees C.
Is this reasonable? Especially as Wikipedia states: “If an ideal thermally conductive blackbody was the same distance from the Sun as the Earth is, it would have a temperature of about 5.3 °C.”
Well the moon is certainly less than “an ideal thermally conductive blackbody” therefore if this divide by four stuff is right it should be less than 5.3 degrees C.
In the spirit of “you can’t argue with verifiable facts” the moon does indeed reach temperatures in excess of 5.3 degrees C – 107 degrees C with a maximum of 123 degrees C quoted by NASA and numerous other sources.
So the paradox that climate “scientists” need to explain is why the Sun can heat the moon, sans magical greenhouse gases, to > 107 degrees C but can only manage a pitiful minus 18 degrees C on Earth.
If you really believe that when the temperature in places like Death Valley approaches 60 degrees C the Sun is responsible for minus 18 C while the remaining 70 plus degrees C is provided by greenhouse effect you have been completely conned.
There is evidence the theory of quartering the solar constant to determine the greenhouse effect is wrong (actually I believe it is a deliberate con dressed up as plausible).
If the radiation from the sun is capable of heating the Earth’s surface to temperatures approaching 87 degrees C, then the anomaly that needs examination is why is it so cool?
I think convective heat distribution and evaporation of water can help explain this.
So I do not buy the theory that Solar radiation is not responsible for heating Earth and that the cooling at night is moderated by warmed oceans, a warmed atmosphere and a fortunate coincidence that before it all goes to pieces and freezes the Earth spins and the warming begins again.
Call me simple but this seems far more likely than less than 2 % of the atmosphere – water vapour and CO2 et al creating energy to warm the Earth.
The whole thing becomes laughable when considering < 0.04% of the atmosphere for CO2 or even tinier fractions for other greenhouse gases.
And of course they ignore IR from Nitrogen and Oxygen, which MUST emit IR unless they are not at an equilibrium temperature with the “GHG’s”. Can anyone prove Nitrogen and Oxygen in the atmosphere are not at equilibrium temperature and are therefore not emitting IR?
If they can I will recant and donate heavily to all NGO's in attendance at Durban.
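The arithmetic above is easy to check; a brief Python sketch (the solar constant and albedos are the values quoted in the comment). Solving S(1-a) = σT⁴ with the full beam gives the equilibrium temperature of a surface facing the Sun square-on, the subsolar point, while dividing the absorbed beam by four gives the whole-sphere effective value.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
S = 1367.0       # solar constant, W/m^2

def eq_temp(flux):
    return (flux / SIGMA) ** 0.25

for body, albedo in (("Earth (albedo 0.31)", 0.31), ("Moon (albedo 0.12)", 0.12)):
    subsolar = eq_temp(S * (1 - albedo))       # full beam, surface facing the Sun
    effective = eq_temp(S * (1 - albedo) / 4)  # absorbed beam spread over the whole sphere
    print(f"{body}: subsolar {subsolar:5.1f} K ({subsolar - 273.15:5.1f} C), "
          f"effective {effective:5.1f} K ({effective - 273.15:6.1f} C)")

The first column reproduces, within a degree or two, the ~360 K and ~381 K figures quoted above; the second column is the whole-sphere value that the energy-balance argument actually constrains.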

Joel Shore
December 30, 2011 4:21 pm

Philip Foster says:

If a block of ice is placed in a warm room, will the room cool slower or quicker? The Trenberths of this world, who believe that back radiation from CO2 in the troposphere can create heat on the surface or at least ‘slow down’ heat loss from the surface, tell us that the radiating ice block will help keep the room warmer for longer. The idea really is that dumb.

What is really dumb is your misinterpretation of what Trenberth et al. say. You seem not to understand the concept of having the correct comparison case.
If you have a block of ice there in place of something warmer, then it will not keep the room warmer for longer. If you put a block of ice in place of a vat filled with liquid nitrogen then it would indeed cause the room to stay warmer for longer.
All that the greenhouse effect says is this: If you have the Earth emitting all its radiation out to space (at ~3 K), it will cool faster than if some of that radiation is being absorbed by the atmosphere, which then, because of its temperature, emits some radiation back to the Earth.

Rosco
December 30, 2011 4:23 pm

“So I do not buy the theory that Solar radiation is not responsible for heating Earth and that the cooling at night is moderated by warmed oceans, a warmed atmosphere and a fortunate coincidence that before it all goes to pieces and freezes the Earth spins and the warming begins again.” should be
So I do not buy the theory that Solar radiation is not responsible for heating the Earth more than climate “scientists” claim. The cooling at night is moderated by warmed oceans, to a lesser extent by a warmed atmosphere and a fortunate coincidence that before it all goes to pieces and freezes the Earth spins and the warming begins again.

Luther Wu
December 30, 2011 4:28 pm

Harriet Harridan says:
December 30, 2011 at 3:31 pm
Do I win a prize?
______________________
We shall avert our eyes when confronted with any implications of ill repute.

Bill Illis
December 30, 2011 4:34 pm

Here is the implied CO2 sensitivity over the last 50 million years. +/- 40C per doubling ?
http://img163.imageshack.us/img163/8312/co2sensitivitylast50my.png
Obviously, there are a huge number of factors that must be taken into account. The global temperature is most likely influenced by:
– Albedo which has changed over this timeframe from 27.6% to 34.0% (based on the amount of the highly reflective ice that develops).
– GHGs which strongly absorb/slow down long-wave energy from escaping from the Earth surface through the atmosphere to space in certain specific wave-lengths.
– Water vapour which has varied considerably from about 20 mms per given area to 30 mms per given area over this timeframe. This will have a large impact in terms of precipitation, cloud cover and the GHG impact of water vapour.
– Changing continental positions. Greenland would not have glaciers if it was just 200 kms farther south – like it was 20 million years ago. Antarctica would have few glaciers if South America was still attached to it (or if the Drake Passage was shallow).
– Changing ocean currents which depend mostly on the continental positions which can alternatively move warmth from the equator to the poles, move sea ice rapidly away from the poles to melt at lower latitude and/or isolate a region in a polar/equatorial climate.
– Milankovitch Cycles which can drop the summer solar insolation at 75N/75S for short periods so that the winter snow does not melt in the summer (just a 30 W/m2 drop in summer insolation can build up glaciers that take 100,000 years to be broken up).
– Atmospheric pressure. The higher the pressure, the slower is the escape of energy from the surface to space. This is simple physics and no one should dispute it.
– Solar variation. The Sun would need to vary by more than it currently appears to, but the fact that the land lags behind the solar irradiance in the seasons (the solstices) by 35 days, the freshwater lakes by 40 days and the ocean surface by 70 days means that solar energy accumulates in surface energy levels. Therefore any change in solar irradiance will be fully absorbed into surface temperatures within a short period of time. 4 W/m2 worth of variance in total solar irradiance would be required to make a measurable change.
GHGs make it to Number 2 on my list, but it is a very long list.

December 30, 2011 4:36 pm

R. Gates said @ December 30, 2011 at 1:00 pm
“I’ll trust the uncertainty and range of estimates from the supercomputer models and paleodata before I’ll trust the implausible and illogical certainty of Lord Monckton.”
Here’s some paleodata to chew on:
http://www.biocab.org/Holocene_Delta_T_and_Delta_CO2_Full.jpg
It looks to me like CO2 has been increasing gradually over the last few thousand years yet temperature during that period has been falling. You say you “trust the… paleodata” yet the paleodata contradicts your previous statements. Also you have yet to substantiate your claim that the Good Lord is illogical. It’s not enough to state a proposition; logic demands that you provide evidence for your proposition. Something like:
The Good Lord says if X then Y; not Y therefore not X.
You do know what logic is, don’t you?

AndyG55
December 30, 2011 4:47 pm

@jimbojinx from that link about Gingrich.
“Hayhoe, whose husband is an evangelical pastor, recently wrote a book about climate change from an evangelical perspective.”
I have often wondered how anyone who believes that God is in control can side with the AGW brethren.
If He created man, and the planet, then He also created the coal and the oil, and He made it so that it was available at just the right time for humankind to use it for their advancement. Therefore, without doubt, He intended us to be using it.
The fact that it is also highly beneficial for plant growth, thus making it easier for mankind to feed themselves, also backs up this argument… (so long as we don’t go against his wishes and start using the increased cropping for fuel instead of using the oil and coal as intended.)

December 30, 2011 4:48 pm

Harriet Harridan said @ December 30, 2011 at 3:31 pm
“Do I win a prize?”
A silver star 🙂 The gold star comes when you do the same in R because you are frustrated with Excel bugs.
Are you really “a decayed strumpet”? I don’t think I’ve ever met one of those before…

Arno Arrak
December 30, 2011 4:51 pm

As regards climate sensitivity, we need observations to show us how it is expressed in nature as more carbon dioxide is added to the atmosphere. We know, for example, that carbon dioxide has been increasing essentially linearly ever since the Mauna Loa observatory went on line in 1954. Theory would require that global temperature should follow suit, just like Al Gore told us. It is a fact, however, that temperature has behaved quite differently.
The best and most accurate temperature records come from satellites, and these data have been available since 1979. But NASA, NOAA and the Met Office have refused to use satellite data because they have their own ground-based data that can be manipulated. Let’s see how they differ. Starting with the eighties and nineties, the satellite record shows a series of ENSO oscillations consisting of El Nino peaks and La Nina valleys. (1) Their average is a straight horizontal line, meaning no warming. The ground-based data, on the other hand, show a steady warming in this time slot, which they call “late twentieth century warming.” But the only warming during the entire satellite era is a short spurt that began with the super El Nino of 1998, raised global temperature by a third of a degree in four years, and then stopped. It was oceanic, not greenhouse, in origin. This leaves us without any proof that greenhouse warming has even existed since 1979.
Undoubtedly greenhouse fans will now point to the existence of Arctic warming as proof of greenhouse warming. Arctic warming is certainly real and has existed since the turn of the twentieth century, when it suddenly began. Before this there was nothing but slow cooling for two thousand years. It took a break in mid-century, then resumed, and is still going strong. But a sudden warming requires an equally sudden cause. We know that the amount of carbon dioxide did not suddenly increase when the warming began, and this rules out the greenhouse effect as its cause. Laws of physics simply do not allow it. What started it was a rearrangement of the North Atlantic current system at the turn of the century that started bringing warm Gulf Stream water into the Arctic. Direct measurements of water temperature reaching the Arctic in 2010 indicate that it exceeds anything seen within the last 2000 years. (2)
All this leaves us without any proof whatsoever that greenhouse warming even exists. But this is exactly what should be expected from the work of Ferenc Miskolczi, a Hungarian scientist who worked for NASA. Using the NOAA weather balloon database that goes back to 1948, he was able to show that the transparency of the atmosphere in the infrared, where carbon dioxide absorbs, had been constant for 61 years. During this same period of time the amount of carbon dioxide in the air increased by 21.6 percent. This means that the addition of this amount of carbon dioxide to air had no effect whatsoever on the absorption of IR by the atmosphere. And no absorption means no greenhouse effect, case closed. This is in accord with the inability of satellites to detect any actual greenhouse warming in nature, and also in accord with the lack of greenhouse warming in the Arctic. It also follows that climate models using the greenhouse effect to predict dangerous warming ahead are all dead wrong. And it also tells us that the vaunted sensitivity of climate to doubling of CO2 concentration is exactly zero.
Ref.: (1) “What Warming?” available on Amazon; (2) Energy & Environment (2011) 11(8), pp. 1069-1084.

AndyG55
December 30, 2011 4:59 pm

A question I have always wanted to ask.
You cannot increase the concentration of CO2 in the atmosphere (in ppm) without reducing other constituents of the atmosphere by a total of the same number of ppm.
So what effect does this reduction in the concentration of other constituents of the atmosphere have, and are all other constituents affected equally?

gbaikie
December 30, 2011 5:03 pm

“an atmosphere-less earth would have an average temperature of 154K, why, does the atmosphere-less moon have one of 250K? Both would receive about the same energy from the sun.”
If this earth had a 28-day-long day, would it affect the blackbody temperature?
Would it warm, cool or have no effect upon the average temperature?

R. Gates
December 30, 2011 5:14 pm

Harriet Harridan says:
So for a doubling of CO2, plugging in 580 ppm CO2, gives a temp of 15.5c: a rise of a 1.5c. Slightly more than M’lord, but substantially less than the IPCC’s “super”computer.
Do I win a prize?
_____
Nope. Figure out all the fast and slow feedbacks with calculations that take into account the rate at which CO2 is accumulating in the atmosphere and come back with a number…oh, and we won’t hold our breath as you’ll need a supercomputer and several months of processing time…and of course, the correct formulas for the feedbacks. And as we haven’t quite got all those feedbacks completely figured out yet, we might just want to take an ensemble average of climate models, and then average that with what the paleodata tell us…and what do you know…about 3C is a good number for a doubling of CO2 from preindustrial levels.

R. Gates
December 30, 2011 5:23 pm

Luther Wu says:
December 30, 2011 at 1:45 pm
“Games aren’t won by the noisiest cheer leaders, but those without proper underpinnings are certainly a distraction, just like you: that’s the extent of your success.”
____
Yet I should believe Lord Monckton, who is certain that he has figured out that climate sensitivity is “low enough to be harmless”, when we haven’t even seen an equilibrium point reached from the current level of CO2, and have no idea exactly where that point is, as we are not sure of the exact nature of all the feedbacks? Case in point: Arctic sea ice, which is a huge factor in feedbacks to global warming, seems to be diminishing faster than models forecast just a few years ago. This alone shows that models are doing a poor job at understanding the dynamics of all the feedback processes, and without an understanding of them, projections of climate sensitivity are equally poor. So how can Lord Monckton be so certain of a sensitivity “low enough to be harmless”?

Ric Locke
December 30, 2011 5:24 pm

“Didact” is a teacher, though the word is seldom seen except in combination.
“Didactic” is “like a teacher”, i.e. giving instructions. However, the word Lord Monckton is riffing from is “autodidact”, “self-taught”. The prefix “a-” is a linguistic NOT operator, so an “adidact” is “not taught”, that is, what us rednecks refer to as “pig-ignernt.”
Regards,
Ric

DirkH
December 30, 2011 5:36 pm

R. Gates says:
December 30, 2011 at 5:23 pm
“This alone shows that models are doing a poor job at understanding the dynamics of all the feedback processes, and without an understanding of them, projections of climate sensitivity are equally poor.”
You seem to be the only warmist that can at the same time
– admit that the models are junk
– still believe in the IPCC’s climate sensitivity exaggerations (which they derived from model runs).

Joel Shore
December 30, 2011 5:55 pm

Rosco says:

If you really believe that when the temperature in places like Death Valley approaches 60 degrees C the Sun is responsible for minus 18 C while the remaining 70 plus degrees C is provided by greenhouse effect you have been completely conned.
There is evidence the theory of quartering the solar constant to determine the greenhouse effect is wrong (actually I believe it is a deliberate con dressed up as plausible).

You are misunderstanding the argument. The argument is one for GLOBAL ENERGY BALANCE. It does not allow one to make statements about the local temperature. What is true globally is that the amount of power coming in has to equal the amount going out.
The amount coming in is equal to the solar constant times pi*R^2, where R is the Earth’s radius; however, ~30% of this is reflected because of the Earth’s albedo, so the actual amount absorbed by the Earth and its atmosphere is S*(1-alpha)*pi*R^2, where S is the solar constant and alpha is the albedo. The amount being emitted is equal to the quantity (epsilon*sigma*T^4) integrated over the Earth’s surface, where epsilon is the emissivity, sigma is the Stefan-Boltzmann constant and T is the temperature. epsilon is very nearly equal to one (within about a percent) for most terrestrial surfaces in the infrared. Hence, to a good approximation, this integral is given by sigma*(ave(T^4))*(4*Pi*R^2), where ave(T^4) is the average of the quantity T^4 (absolute temperature to the fourth power) over the 4*Pi*R^2 of surface area of the Earth.
Hence, what global energy balance constrains is the average of T^4 over the surface of the Earth. It does not constrain the maximum temperature, or the minimum or anything like that. It constrains the average. [The last piece of the puzzle is to apply Holder’s Inequality, which tells you that the fourth root of ave(T^4) will be greater or equal to the average of T. Hence, when you get that the 4th root of the average of T^4 is about 255 K, this means the average temperature T is at most 255 K. In practice, for Earth-like temperature variations, the difference between the average of T and the 4th root of the average of T^4 is pretty small…and since the error due to that and the error due to assuming that the emissivity epsilon is exactly 1 are of similar magnitude and act in opposite directions, the 255 K estimate is a pretty reasonable value for what the average temperature would be in the absence of a greenhouse effect.]
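A small numerical illustration of those two steps (an editorial sketch in Python; the latitudinal temperature profile is invented purely to exhibit the inequality, and the constants are the usual round values):

import math

SIGMA = 5.67e-8
S, ALBEDO = 1367.0, 0.3

# Step 1: the global balance constrains the area-average of T^4.
mean_absorbed = S * (1 - ALBEDO) / 4.0
T_eff = (mean_absorbed / SIGMA) ** 0.25
print(f"(ave(T^4))^(1/4) fixed by the balance: {T_eff:.1f} K")

# Step 2: Holder's inequality -- for any distribution, ave(T) <= (ave(T^4))^(1/4).
n = 100_000
lats = [(-math.pi / 2) + (i + 0.5) * math.pi / n for i in range(n)]
w = [math.cos(lat) for lat in lats]                      # area weight on a sphere
T = [288.0 - 50.0 * math.sin(lat) ** 2 for lat in lats]  # invented profile: warm tropics, cold poles

ave_T = sum(wi * ti for wi, ti in zip(w, T)) / sum(w)
root4 = (sum(wi * ti ** 4 for wi, ti in zip(w, T)) / sum(w)) ** 0.25
print(f"invented profile: ave(T) = {ave_T:.2f} K  <=  (ave(T^4))^(1/4) = {root4:.2f} K")

For Earth-like temperature spreads the gap between ave(T) and the fourth root of ave(T^4) is only a kelvin or so, which is why the 255 K figure works as the no-greenhouse benchmark despite the inequality.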

December 30, 2011 5:58 pm

Once again, many thanks to everyone for your comments. Here are some answers.
First, the troll. Joel Shore says he thinks I define a “troll” as someone who injects science into his comments. Yet I had specifically stated that a “troll” is one who cannot keep his argument civil and polite, but resorts instead to hate-speech, of which Shore is serially guilty. Besides, the only “science” he “injected” was a repeated misstatement to the effect that Kiehl & Trenberth’s greenhouse-gas forcings total of 86-125 Watts per square meter included feedbacks consequent upon the forcings. It doesn’t. Get used to it.
R. Gates says the Earth has not yet reached its equilibrium temperature, in that not all of the feedbacks consequent upon the greenhouse-gas forcings of 2 Watts per square meter that we have added to the system since 1750 have acted. However, these feedbacks are a very small fraction of the total feedbacks generated by the presence as opposed to absence of all greenhouse gases in the atmosphere. In that context, very nearly all the feedbacks have acted, and the remaining feedbacks (even if they are as net-positive as the IPCC would wish) will have little influence on the determination of the equilibrium system sensitivity of 1.2 K per CO2 doubling.
Of course, R. Gates’ argument is correct as far as the transient sensitivity calculation since 1750 is concerned: but I had already pointed this out in my postings, drawing the legitimate conclusion that if the transient industrial-era sensitivity (1.1 K) is near-equal to the equilibrium system sensitivity (1.2 K) then it is likely that temperature feedbacks are net-zero or thereby. I submit, therefore, that I have not made an “error of logic” here.
R. Gates expresses a touching faith in models and paleoclimate data. He is entitled to his religion. However, Shaviv (2011), using models and a great deal of paleoclimate data, finds climate sensitivity to be around 1 K per doubling, not the IPCC’s 3 K. Douglass and Knox, in paper after paper, have also used modeling and have found climate sensitivity low. My own approach is to try to use empirical rather than numerical weather prediction for the determination of future climate states, because the uncertainties in modeling are too great, especially since the climate object is (or behaves as though it were) mathematically chaotic and hence inherently resistant to very-long-term prediction of its future states (IPCC, 2001, para. 14.2.2.2).
R. Gates also suggests I express a “high level of certainty” that climate sensitivity is low and harmless. However, I have repeatedly made it plain that I do not warrant the reliability of the values for radiative forcing or for pre-greenhouse-gas global surface temperature that I have used. But if those textbook/mainstream/IPCC values and methods are correct, then – like it or not – low climate sensitivity necessarily follows. Do the math.
AJStrata begs me to appreciate that the “absorbing and emitting sphere” at the characteristic-emission altitude are not of the same radius. However, Kirchhoff’s radiation law is entirely clear: absorption and emission of radiation from the characteristic-emission surface of an astronomical body are – and are treated by the textbooks and the IPCC as – simultaneous and identical. It is as simple as that.
Mydogsgotnonose says that the textbook value of 33 K for the warming effect of the presence as opposed to the absence of all greenhouse gases is an “elementary mistake”, on the ground that there would be no clouds or ice if there were no water vapor in the atmosphere, so that the albedo would be different. I don’t know how many more times I need to explain that in order to determine purely the warming effect of the greenhouse gases one must hold the albedo, emissivity, and insolation artificially constant. This is elementary climatology, and it makes perfect sense if the objective is to determine climate sensitivity robustly, rather than to determine the actual mean surface temperature that would subsist on the naked terrestrial lithosphere.
Wayne, supported by Dr. Burns and by Richard M, is attracted to Dr. Nikolov’s “Unified Climate Theory”. He says he has calculated the temperature of millions of points on the surface of the Earth and has concluded that the characteristic-emission temperature would indeed be 155 K, not the 255 K that the textbooks give. Well, I too have done that calculation, but I have not used points because that would introduce errors. I have done the calculation zonally, using up to a million zones of equal altitude and hence of equal spherical-surface area. I find the textbooks to be correct. If Dr. Nikolov wishes to convince the scientific community of his theory, he will have to deal with the alarmingly large discrepancy between his value and the currently-accepted value for the characteristic-emission temperature before proceeding further.
Wayne also produces a table of mean surface temperatures for three astronomical bodies in the solar system, apparently calculated by Dr. Nikolov solely from the respective atmospheric pressures, densities, and molar masses, and without reference to greenhouse gases at all. Perhaps I am misunderstanding the units used, but they do not cancel to yield Kelvin, as they should if they were to mean anything: instead, they seem to yield Kelvin Newton-meters per Joule. So I suspect that something may be amiss here. At present, my advice would be to treat Dr. Nikolov’s theory – however interesting and attractive it is at first blush – with caution.
Bill Illis has it absolutely right. If one starts empirically from commonly-accepted empirical data and methods, as I have tried to do, whichever way one attacks the numbers the climate sensitivity is about one-third of the IPCC’s central estimate. On the other hand, as Bill says, perhaps the climate possesses some “magical properties” that will be revealed later. Were it not for the ludicrous politicization of what should be a straightforward scientific question, it would by now be generally agreed that climate sensitivity is low enough to be harmless, and that, even if it were as high as the IPCC imagines, it would still be at least an order of magnitude more cost-effective to wait and adapt to any consequences of any warming that may occur than to try – futilely – to stop it happening by taxing, trading, regulating, reducing, or replacing CO2.

December 30, 2011 6:04 pm

With all due respect, Lord Monckton, you are indeed serving well in the cause. But I firmly believe you need to study more carefully and come to grips with Prof. Claes Johnson’s “Computational Blackbody Radiation” http://www.csc.kth.se/~cgjoh/blackbodyslayer.pdf in which he proves (backed up by Prof Nahle’s experiment in Sept 2011) that any back radiation simply does not have sufficient energy to ionise surface molecules and thus be converted to thermal energy. Only direct solar insolation does any warming.
This removes the possibility of any feedback what-so-ever. It removes the power source of any “greenhouse effect” and renders all models of sensitivity irrelevant.
PS. If you want data that really hits AGW on the head, use the Arctic trends on my site http://climate-change-theory.com

Larry Goldberg
December 30, 2011 6:08 pm

R Gates – you invoke Judith Curry on one hand, but cite that “paleoclimate data and GCM’s seem to be converging on a reasonably consistent range in the area of 3C, with error bars of about 1C on either side at a 95% confidence level.” Your (unsubstantiated) claim would not please Ms. Curry, who has a much clearer idea of the uncertainty inherent in the GCMs (regardless of whether they are run on a supercomputer or not), nor would she be impressed by any unsubstantiated claims of certainty (or “convergence”) of the paleoclimate data (particularly some paleoclimate “data” that even the team finds difficult to stomach). Monckton has proposed a theory: either expose the flaws in it, or stand back and let the adults deal with the issues.

Dave Wendt
December 30, 2011 6:21 pm

Doug Proctor says:
December 30, 2011 at 3:28 pm
“Is the planetary loss of heat a) significant, or b) hidden within other factors? What is the flux?
Anyone out there with thoughts on this?”
From what I've seen, the conventional wisdom is that the geothermal contribution to the global heat budget is about 88 milliwatts/m2, both on land and in the ocean abyss. Having looked at several of the supposedly canonical works on this topic, the only point that jumped out at me was that in every calculation the contribution from volcanically active regions was systematically excluded. This work suggests that, at least for the oceans, the geothermal contribution has been underestimated and is in fact not negligible:
http://www.ocean-sci.net/5/203/2009/os-5-203-2009.pdf
Given that in their analysis they continue the practice of excluding actual hot spots and use the most conservative estimates for the rest, it suggests, to me at least, that underestimating the geothermal component may have compromised the energy-budget calculations. In any case I suspect even the most thorough accounting is unlikely to bring the number up to much beyond half a Watt/m2 – but then we're talking about an imbalance in the area of only 3 W/m2, so half a watt is still significant.
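(For scale: a mean flux of 88 mW/m² over the Earth's roughly 5.1 × 10^14 m² of surface works out at about 45 TW, as the one-line check below shows; the surface area is a round-number assumption.)

#include <cstdio>

int main()
{
    const double flux = 0.088;      // W per square metre, the figure quoted above
    const double area = 5.1e14;     // square metres, approximate surface area of the Earth
    printf("global geothermal output ~ %.0f TW\n", flux * area / 1.0e12);   // ~45 TW
    return 0;
}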

wayne
December 30, 2011 6:34 pm

R. Gates @ December 30, 11:58 am
@ Bob Fernley-Jones says:
December 30, 2011 at 2:24 pm
R. Gates @ December 30, 11:58 am

[Wayne], if an atmosphere-less earth would have an average temperature of 154 K, why does the atmosphere-less moon have one of 250 K? Both would receive about the same energy from the sun.

I’m curious to know how meaningful any calculation of the so-called average temperature of the moon is, and just how it is integrated. Can you advise please?
One source gives a T range of 260 K, which is greater than your average. BTW, three major differences between the moon and an airless and ocean-less Earth (with current geology) are the regolith thermal factors and rotation speed, together with speculation on albedo, which I guess would vary much more on Earth for very different regional geological reasons. It would be interesting to study the thermodynamics of heating and cooling on the moon (at the surface and at depth) over its 4-week cycle.
Wayne, sorry for interjecting, but I found it hard to resist.
>>
Hey Bob, no problem, mate. Unlike some 'people' here I do have to sleep every day at some point. I say 'people' for it sure seems some must be entire sleepless organizations; I find no time period when they are not there, posting drivel.
R. Gates, do you not retain anything? This has been answered many times before. This is using KT97 and TFK09's radiative computations. In their case they used a perfect, mass-less, black-body flat disc within their calculations to get the 33 K GHE.
Nikolov’s case is staying parallel but using a perfect massless gray-body sphere. Neither have any thermal inertia properties.
You see, even the concept is not real: it is not calculating a real temperature, for there is no mass. You must have some mass to even define a temperature. These calculations in all three cases are computing an effective radiative temperature. Big difference.
But on the moon, you have real mass, real specific heats and conductivity, real absorption and dispersion of that energy, stored to release at a cooler time, so the temperatures from the lit side to dark side are modulated to a huge degree. (but I was wrong in assuming that all readers here knew that)
There R. Gates, that is why.
Also you are talking about the temperature on the moon of 250 K. That is assumed near the maximum, and on a real body with mass but no atmosphere this temperature must be taken in the top layer of the dust. Right? Well, in the Nikolov case that I integrated, there are also locations where the effective temperature is 250 K. In fact there is one location, directly under the sun, where it is receiving 1198 Wm-2, and at an emissivity of 0.955 the effective radiative temperature is 386 K… at that one maximum point, but there is no thermal inertia: it is a perfect massless radiator. It is just a fact that most of that sphere receives MUCH less on average due to pure geometry, and that's right, it averages to a mean of 154.3 K, giving the "GHE" of 133 K.

wayne
December 30, 2011 7:09 pm

Monckton of Brenchley says:
December 30, 2011 at 5:58 pm
Wayne also produces a table of mean surface temperatures for three astronomical bodies in the solar system, apparently calculated by Dr. Nikolov solely from the respective atmospheric pressures, densities, and molar masses, and without reference to greenhouse gases at all. Perhaps I am misunderstanding the units used, but they do not cancel to yield Kelvin
>>
Hi Lord Monckton, I know you are trying, and that you will accept the science if in the end it proves to be correct.
On the units in my equation. That equation came from a thermodynamics text and the units do, in fact, all cancel but Kelvin. Here’s their breakdown to erase that doubt.
Starting at T = (P/ρ) • (M/R),
using a little dimensional analysis you get
K = ( [kg • m • s^-2 • m^-2] / [kg • m^-3] ) • ( [kg • mol^-1] / [kg • m^2 • s^-2 • mol^-1 • K^-1] )
inverting the denominators gives
K = [kg • m^-1 • s^-2] • [kg^-1 • m^3] • [kg • mol^-1] • [kg^-1 • m^-2 • s^2 • mol • K]
the kg, s and mol terms all cancel, leaving
K = [m^-1] • [m^3] • [m^-2] • [K]
and the m^3 cancels the m^-1 • m^-2, so
K = [K]
and you do in fact end up in Kelvin.
Sorry Christopher. Once again, no radiation terms needed.
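A quick numerical check of the same cancellation, plugging round-number surface values for Earth into T = (P/ρ) • (M/R) – which is just the ideal-gas law rearranged – is sketched below; the inputs are illustrative assumptions, not measurements.

#include <cstdio>

int main()
{
    // Round-number values for air near the surface, assumed for illustration only.
    const double P   = 101325.0;   // Pa            = kg m^-1 s^-2
    const double rho = 1.225;      // kg m^-3
    const double M   = 0.0290;     // kg mol^-1
    const double R   = 8.314;      // J mol^-1 K^-1 = kg m^2 s^-2 mol^-1 K^-1
    const double T   = (P / rho) * (M / R);    // every unit cancels except K
    printf("T = %.1f K\n", T);                 // ~288 K for these inputs
    return 0;
}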

December 30, 2011 7:17 pm

The 33 K claimed GHG warming is an elementary mistake. It is obtained by imagining that, if you removed all the atmosphere, the surface of the Earth would sit at the same -18°C that the composite emitter at the top of the atmosphere now has in radiative equilibrium with space.
Wrong: the albedo of the Earth would fall from 0.3 to 0.07 because there’d be no clouds or ice. Redo the radiation calculation and the equilibrium radiative solution is close to 0°C.
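The sensitivity of the airless-Earth figure to the assumed albedo is easy to check; the sketch below uses S = 1362 W/m² and the albedos quoted in this thread (0.3, 0.12, 0.07) – an illustration only, not a claim about which albedo is right.

#include <cstdio>
#include <cmath>

int main()
{
    const double S = 1362.0, sigma = 5.6704e-8;
    const double albedos[] = { 0.30, 0.12, 0.07 };   // present Earth; Moon/Mercury-like; ice- and cloud-free claim
    for (double a : albedos)
        printf("albedo %.2f -> effective temperature %.1f K\n",
               a, pow(S * (1.0 - a) / (4.0 * sigma), 0.25));
    return 0;
}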

In the absence of an atmosphere the albedo of the Earth would range between that of Mercury and the Moon, from about 0.12 to 0.14, depending on the amount of iron on the surface. There is no such thing as an equilibrium temperature for an airless world: the surface temperature in sunlight varies with the cosine of the solar zenith angle and drops to below 100 K after dark.

davidmhoffer
December 30, 2011 7:48 pm

R. Gates;
And as we haven’t quite got all those feedbacks completely figured out yet, we might just want to take an ensemble average of climate models, and then average that with what the paleodata tell us…and what do you know…about 3C is a good number for a doubling of CO2 from preindustrial levels.>>>
Since you admit we don't have all the feedbacks figured out yet, how does averaging climate models accomplish anything? Do you propose that averaging various numbers we surmise to be wrong with each other somehow arrives at an accurate average? And why would you average those with the paleodata? And which paleodata? The paleodata like Briffa's that purport to represent the temperature of the earth over 1000 years based 50% on a single tree in Siberia? Mann's paleodata that actually aren't data – only the graph that Mann draws from the data, because he won't show us the data or the methods? Or perhaps the paleodata that Mann and Jones quietly threw out because they showed a decline in temps instead of a rise for exactly the time periods for which the models predicted a major rise?
Do you actually think your arguments through before you make them?

George E. Smith;
December 30, 2011 8:36 pm

Well, I believe the earth is warming, and has been, off and on, for the last 8-10 thousand years as we emerge from the last ice age; and I agree that anecdotal evidence of this or that glacial field or ice pack receding now and then indicates that long-term warming trend.
I also believe that CO2 (for example) captures a narrow band of LWIR radiation from the earth's surface (13.5 to 16.5 microns?) for the degenerate bending mode of oscillation, as well as the (3.25 to 4.75 microns?) band that excites the asymmetric stretch mode of CO2. Of course 98% of that earth-emitted LWIR – which Trenberth et al say is the 390 W/m^2 corresponding to a 288 K black-body spectrum – lies between about 5.0 and 80.0 microns, most of which is entirely oblivious of CO2.
But! I do NOT believe the claims that CO2 is "well mixed" in earth's atmosphere. My idea of "well mixed" would mean that ANY statistically valid sample of earth's atmosphere, taken from any ordinary place on earth at any ordinary time, would assay as having the same molecular abundance of the common atmospheric molecular species, at least down to the 390 ppm of present-day CO2.
It would not be statistically valid to take 2,500 molecules of atmosphere and declare it correct to have one CO2 molecule. But 25 million atmospheric molecules should yield about 10,000 CO2 molecules, and the shot noise in that should be about 100 molecules of CO2, or about 1% of the mean number. Well, I'm sure the professional statisticians would put a factor of three in there somewhere or other; and climatists seem to like that fudge factor. Well, one cubic mm of STP air contains about 3 x 10^16 molecules, so it would seem that a cube of STP air only a micron or so on a side would already be valid as a sample.
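(The shot-noise figure is just Poisson counting statistics – the standard deviation of a count is the square root of its mean – as the small check below illustrates; the 25-million-molecule sample size is the one assumed above.)

#include <cstdio>
#include <cmath>

int main()
{
    const double N = 25.0e6;        // molecules in the sample, as assumed above
    const double x = 390.0e-6;      // CO2 mole fraction
    const double mean  = N * x;     // expected number of CO2 molecules
    const double noise = sqrt(mean);// Poisson (shot) noise
    printf("mean %.0f CO2 molecules, shot noise %.0f (%.1f%% of the mean)\n",
           mean, noise, 100.0 * noise / mean);
    return 0;
}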
Now NOAA, in a famous three-dimensional pole-to-pole plot (which is now apparently hidden from view), showed that at Mauna Loa in Hawaii there is an annual CO2 abundance cycle of about 6 ppm peak to peak; but simultaneously at the North Pole, and over essentially all of the Arctic, that cycle is 18 ppm, while at the South Pole it is about 1 ppm, out of phase with Mauna Loa.
So the NP and the SP differ by about 19 ppm peak to peak over an annual cycle, out of a mean 390 ppm.
That’s a 5% difference in composition; or five times the shot noise RMS for a 25 million molecule sample. So no I don’t believe CO2 is even approximately well mixed in the atmosphere.
As to the effect of that CO2, or changes in it, I believe CO2 is a rapid, virtually instantaneous moderator of the atmospheric energy.
For example, let’s say a new CO2 molecule emerges from the tailpipe of an SUV, at say one third of a metre above the ground. Now a photon of that at risk LWIR radiation at say 15 microns wavelength, can travel 300 km in just one millisecond, and essentially escape from the atmosphere. Well it can go 300 metres or about 1,000 feet in one microsecond; so it can get to my new tailpipe CO2 molecule in just one nano-second, and be absorbed; well unless it got absorbed in one picosecond by an already existing CO2 molecule just 0.3 mm from the ground.
So yes I think it is fair to say, that in the climate scale of things, 30 years for example, CO2 is virtually instantaneous, in its effect on the earth emitted LWIR radiant energy.
Now at STP near my SUV tailpipe, that CO2 absorbed energy is rapidly thermalized, by molecular collisions with atmospheric gas molecules, and results in an almost instantaneous local warming of the atmosphere AKA a rise in atmospheric Temperature. Yes; I believe all of those things happen all the time.
What happens next, seems to be less certain. The air is warmer so it ought to radiate more energy, and in an isotropic distribution pattern, so about half of that radiation should be directed earthwards, from whence the energy came, and about half should be directed to space and eventually escape, to cool the planet.
There is nothing much up there that could reflect that upward radiation, heading to space, certainly not the atmospheric gas molecules. Well of course there are clouds occasionally, but they don’t reflect LWIR; they ABSORB it due to the water content of clouds. Those water molecules can of course radiate LWIR radiation themselves, in an H2O specific spectrum.
The upward atmospheric emission of LWIR is; according to some people, only at GHG specific wavelengths, many of which will not be absorbed by the H2O molecules of clouds, since not all GHG spectral lines, match for all GHG species.
But in any case, it seems to me that at least the earth's atmosphere should react instantaneously, in climate terms, to changes in CO2 molecular abundance. I've not seen or read many peer-reviewed research papers that show this immediate response of the atmospheric Temperature to the obvious and well-documented changes in CO2 in the atmosphere. In particular, the atmospheric Temperature seems to exhibit absolutely no response at all to the 5% cyclic change in the molecular abundance of CO2 that takes place annually on earth.
Apparently, something else far more significant, is controlling the earth’s atmospheric Temperature. I have no idea what that is; but it seems that CO2 is not cutting the mustard.

December 30, 2011 8:36 pm

Larry Goldberg said @ December 30, 2011 at 6:08 pm
“Monckton has proposed a theory: either expose the flaws in it, or stand back and let the adults deal with the issues.”
The Good Lord has done no such thing! He has taken currently accepted theory and numbers therefrom, and analysed the results of applying them. Here is what he says:
“I have deliberately confined my analysis to textbook methods and “mainstream-science” data precisely so as to minimize the scope for any disagreement on the part of those who – until now – have gone along with the IPCC’s assertion that climate sensitivity is high enough to be dangerous. Deploying their own methods and drawing proper conclusions from them is more likely to lead them to rethink their position than attempting to reinvent the wheel.”

wayne
December 30, 2011 8:56 pm

@ Christopher Monckton of Brenchley
Christopher, I started to say no, but you are right, I am attracted to Dr. Nikolov's "Unified Climate Theory". I am just trying as hard as possible to stay strictly within proper science. I just feel like a lightning bolt hit me, and I feel so foolish. And I can trace my misunderstanding back to, guess who, Joel Shore some years ago. I have been lied to and manipulated while getting acclimated to atmospheric physics. My forte has always been more in astronomy and astrophysics, and mainly there in gravitational effects, not thermodynamics, though I have taken courses in it.
I have not verified N&Z's work past the first column on the poster, but you know, everything is checking out perfectly so far. Your integration must have a problem in the particular way you set up your bands – I'm assuming latitudinal bands. Be sure to reduce the radiation field over each higher-latitude band for its ever-decreasing area. That might be your mistake if your figure is too high. I went to great effort to make sure the distribution was perfectly even, so no error from bad distribution here. Triple-checked, and it agrees with their figures to the fourth digit. So, until I can find some great flaw, I will give them the benefit of the doubt, as any good scientist deserves.
I also realize this negates many points and conjectures that I have commented on over the last few years, and it seems it will negate huge swathes of other people's understanding, maybe even yours. I keep asking how I could have discarded the aspects of pure thermodynamics so very easily, and I learned one good lesson: never, ever listen to a known troll.

George E. Smith;
December 30, 2011 9:00 pm

“”””” Mike G says:
December 30, 2011 at 10:44 am
Please stop trashing compact fluorescent light bulbs. I completely agree that windmills and solar panels for bulk power generation are completely useless and a major con, although not for many smaller-scale specialised applications.
CFL bulbs are a genuine improvement on those small glowing electric fires for most uses. Who would ever dream of using filament lighting for any commercial building or public space – yet somehow it is best for use at home? Fair enough, early CFLs were not that good, but now they are a huge improvement in many locations: long lasting, cool running, very economical, and even quite quick starting.
It is a shame that the politicians took a stand on this issue, but as Churchill observed, even a fool (and presumably many fools) can be right sometimes.
Talking of progress, it won’t be that long before LEDs rule the world. “””””
Well Mike; why NOT trash those compact fluorescent light bulbs – and the full-sized fluorescent ones as well?
The "greens" trash coal-fired power plants simply because (for one reason) they emit mercury into the atmosphere in totally uncontrolled amounts.
So the all-knowing gummint decrees that we shall replace our successful, more-than-100-year-old Edison light bulbs with compact fluorescents, every one of which is guaranteed to contain that poisonous mercury that is not supposed to be good for us when it comes from abundant coal energy.
Most CFLs; the small twisty kinds, are optically inefficient, since a lot of their white light emission is radiated right back into the light bulb, and lost.
Also, CFLs, and indeed ALL fluorescent lamps, are UV pumped phosphor systems, so essentially 100 percent of their light output, suffers from the quite unavoidable Stokes Shift energy loss, which results in waste heat.
So Fluorescents, and CFLs in particular are simply never going to be as efficient as LEDs, which contain no mercury at all. Even the cheapest highest market volume white light LED lamps are already inherently more efficient than the very best fluorescents, and are rapidly getting better.
And for niche markets, which demand, and can afford more expensive white light sources, with better color rendition indices, the multicolor (RYGB) types will be even higher efficiency.
So to mandate CFLs now, which are already obsolete, is totally insane.

R. Gates
December 30, 2011 9:41 pm

Larry Goldberg said:
” has proposed a theory: either expose the flaws in it, or stand back and let the adults deal with the issues.”

I would hardly call what Lord Monckton has proposed a "theory". A conjecture perhaps, but hardly a theory. But regardless of what you choose to call it, it fails at the very start in assumptions made about the equilibrium temperature from even the current amount of CO2 (~390 ppm). The Earth has not reached that even yet, as all feedback processes have not yet fully responded. Furthermore, it is unlikely that we will find out what the equilibrium temperature of the Earth at 390 ppm of CO2 is, as we are adding approximately 2 ppm of additional CO2 every year. These increases, along with increases in other greenhouse gases such as N2O and methane, mean we are chasing a currently moving target. In light of all this, it seems highly unlikely that Lord Monckton can possibly know what the sensitivity of the Earth is to a doubling of CO2 from preindustrial levels – certainly not well enough to assert that it will be "low enough to be harmless".

davidmhoffer
December 30, 2011 9:48 pm

R. Gates;
The Earth has not reached that even yet as all feedback processes have not yet fully responded.>>>
Since you keep referring to the many unknowns which you admit to regarding feedbacks, how can you assert with any degree of confidence that the feedback responses have not yet fully responded?

dalyplanet
December 30, 2011 10:09 pm

I would suggest that R. Gates proposed a reasonable benchmark prediction at 1:43 Dec 30
R Gates
~~I suppose if by 2030 if we see Arctic Sea ice return to the levels we saw in the mid 20th century and long-term global temps continue in the pattern downward we saw generally after the Holocene climate optimum, I might begin to suspect that the feedbacks related to CO2 increases were not as strong as the models indicated.~~

AndyG55
December 30, 2011 10:22 pm

@RGates
“certainly not well enough to assert that it will be “low enough to be harmless”.”
and certainly not well enough to assert that it will be high enough to cause any harm.
Face it.. WE DON'T KNOW WITH ANY SURETY AT ALL, EITHER WAY, so why the heck are we wasting so much money tackling a problem that we don't even know exists!! (except in the minds of those with the agenda to make it exist)
PLANTS LOVE CO2.. please don’t starve the plants !!!

December 30, 2011 10:36 pm

R. Gates said @ December 30, 2011 at 9:41 pm
“I would hardly call what Lord Monckton has proposed a “theory”. A conjecture perhaps, but hardly a theory. But regardless of what you choose to call it, it fails at the very start in assumptions blah, blah, blah…”
R Gates, it’s neither theory, nor conjecture. The Good Lord takes as his assumptions the very same assumptions your Lords and Masters promulgate and then deduces from them. It’s called *logic*. I highly commend you study the subject. It’s difficult and will keep you out of mischief for a while.

davidmhoffer
December 30, 2011 10:38 pm

dalyplanet says:
December 30, 2011 at 10:09 pm
I would suggest that R. Gates proposed a reasonable benchmark prediction at 1:43 Dec 30>>>
Really? What did he predict? I’ve read his comment in full at least four times and I have no idea.

December 30, 2011 10:41 pm

AndyG55 said @ December 30, 2011 at 10:22 pm
“PLANTS LOVE CO2.. please don’t starve the plants !!!”
I won’t, promise! Pompous Gits like making plant food. Don’t tell the greenie-weenies, but making compost entails converting lotsa cellulose into CO2 (carbon perlewshun). And Pompous Gits like burning firewood in the cookstove to make even more carbon perlewshun (plant food).

December 30, 2011 11:57 pm

Brian says, December 30, 2011 at 1:01 pm: High sensitivity is only physically possible if the effective blackbody surface of Earth (at ~5 km, as you say) is rapidly expanding. Earth's surface would then be warmed by the greater distance of fall in the gravitational field by the molecules starting at the effective surface (i.e., exactly what the lapse rate describes).
I agree completely with the general point you are making. However, you have made a significant error in your explanation.
The lapse rate is not created by a "fall in gravitational field" (which would be completely negligible) but by a fall in pressure with height due to the progressive reduction in the mass of the atmosphere above.

Mydogsgotnonose
December 31, 2011 12:35 am

Dennis Ray Wingo: ‘In the absence of atmosphere the albedo of the Earth would range between that of Mercury and the Moon, from about 0.12 to 0.14, depending on the amount of iron on the surface.’
Wrong: the oceans – 80% of the surface – would still remain in the ice- and cloud-free world, so the hypothetical albedo would be ~0.07!

December 31, 2011 1:53 am

Wayne,
Would it be possible for you to supply your program? My experience is that verbal descriptions of calculations–such as yours above–usually contain latent ambiguities readily dispelled by viewing the actual code.
Before you object that there is no ambiguity since you’ve done nothing more than numerically evaluate the expression in Nikolov’s Equation 2, I’ll confess that, being a layman, I’m confused by that equation. My no-doubt naive interpretation is that phi is longitude and mu is (at an equinox) cosine of latitude (call it theta), in which case I would have thought that a differential of area (for a unit-radius sphere) would be cos(theta) d theta d phi = -[mu/sqrt(1-mu^2)] d mu d phi, and I was not able to detect the bracketed factor in Nikolov’s equation. So I’m hoping that your code would help us laymen understand what’s going on.
Any help would be appreciated.

John Marshall
December 31, 2011 2:16 am

I commend the noble Lord to read:
Unified Theory of Climate, by Ned Nikolov PhD and Karl Zeller PhD (2011).
And forget about the theory of GHG’s.

Capell
December 31, 2011 2:57 am

Paragraphs 1-7 seem logical. But instead of calculating the linear average sensitivity, it should be remembered that the greenhouse gases exhibit logarithmic sensitivity. That would imply that the present-day sensitivity is a small fraction of the linear average.

Merrick
December 31, 2011 4:43 am

Some questions about what Lord Monckton meant by “adidact”.
It could just have been a typo, but I suspect he means a-didact, where a didact is a teacher. So, just as someone who is amoral lacks a moral sense, an a-didact lacks a teaching sense or ability.
Would love to know the actual intent for sure.

wayne
December 31, 2011 5:19 am

Joe Born says:
December 31, 2011 at 1:53 am
Wayne,
Would it be possible for you to supply your program? My experience is that verbal descriptions of calculations–such as yours above–usually contain latent ambiguities readily dispelled by viewing the actual code.

>>>
Of course I would. You are the first ever to ask for the actual science proof! Glory be! I must admit I had a bit of trouble at first, not allowing for the latitudes' decreasing area, and I probably took the harder of the two possible tacks, direct rather than Monte Carlo. The latter might have been even easier.
It's a bit messy and I have squished it, but here is the function; place it in a simple C++ console application and call it. It's quite simple. BTW, the unlit side is never calculated, for all points on it would be zero anyway. That is, it only integrates the lit side and divides by two to account for the unlit half. Make sense?

#include <cstdio>            // for printf
#define _USE_MATH_DEFINES    // so that <cmath> defines M_PI on some compilers
#include <cmath>             // for cos, sqrt, M_PI
void integrateMeanTperUTC()
{
    int divisions = 2048;
    double init = (90.0/2/divisions)*M_PI/180;
    double step = init*2;
    double accum = 0, weight = 0;
    for (double lon=-M_PI/2+init; lon<M_PI/2; lon+=step)
    {
        double cos_lon = cos(lon); // pre-calc
        for (double lat=-M_PI/2+init; lat<M_PI/2; lat+=step)
        {
            double cos_lat = cos(lat); // pre-calc
            double F = (1362.0+ 13.25e-05)*(1-0.12)/0.955 * cos_lon * cos_lat; // solar constant + background, albedo 0.12, divided by emissivity 0.955
            // cos_lat below is to correct for decreasing area with latitude
            accum += sqrt(sqrt(F/5.6704e-08)) * cos_lat;
            weight += cos_lat;
        }
    }
    printf("T:   %.1f K\n", accum/weight/2);
    printf("GHE: %.1f K\n", 288 - accum/weight/2);
}
Joel Shore
December 31, 2011 5:39 am

Monckton of Brenchley says:

First, the troll. Joel Shore says he thinks I define a "troll" as someone who injects science into his comments. Yet I had specifically stated that a "troll" is one who cannot keep his argument civil and polite, but resorts instead to hate-speech, of which Shore is serially guilty. Besides, the only "science" he "injected" was a repeated misstatement to the effect that Kiehl & Trenberth's greenhouse-gas forcings total of 86-125 Watts per square meter included feedbacks consequent upon the forcings. It doesn't. Get used to it.

You are mischaracterizing my argument. What your argument shows is that you don't understand what a "forcing" or a "feedback" even means, and the fact that whether something is a forcing or a feedback depends on context. Why don't you try answering my actual arguments ( http://wattsupwiththat.com/2011/12/30/feedback-about-feedbacks-and-suchlike-fooleries/#comment-848206 and http://wattsupwiththat.com/2011/12/30/feedback-about-feedbacks-and-suchlike-fooleries/#comment-848211 ) rather than just engaging in misdirection? (No, that's not hate speech; it is an accurate description of what you are doing.)

Joel Shore
December 31, 2011 5:46 am

But I firmly believe you need to study more carefully and come to grips with Prof. Claes Johnson’s “Computational Blackbody Radiation” http://www.csc.kth.se/~cgjoh/blackbodyslayer.pdf in which he proves (backed up by Prof Nahle’s experiment in Sept 2011) that any back radiation simply does not have sufficient energy to ionise surface molecules and thus be converted to thermal energy.

Utter and complete nonsense. Claes has not “proven” anything as I have discussed before: http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/#comment-847777 . You make additional claims that even he wouldn’t make since they violate Conservation of Energy.
I won’t even get into Nahle’s nonsense. He makes Claes look like a genius by comparison.

Joel Shore
December 31, 2011 5:50 am

thepompousgit says:

The Good Lord takes as his assumptions the very same assumptions your Lords and Masters promulgate and then deduces from them. It’s called *logic*.

No he doesn’t and no it isn’t. He just gets things wrong, pure and simple.

R. Gates
December 31, 2011 6:10 am

Lord Monckton said:
“R. Gates says the Earth has not yet reached its equilibrium temperature, in that not all of the feedbacks consequent upon the greenhouse-gas forcings of 2 Watts per square meter that we have added to the system since 1750 have acted. However, these feedbacks are a very small fraction of the total feedbacks generated by the presence as opposed to absence of all greenhouse gases in the atmosphere. In that context, very nearly all the feedbacks have acted, and the remaining feedbacks (even if they are as net-positive as the IPCC would wish) will have little influence on the determination of the equilibrium system sensitivity of 1.2 K per CO2 doubling.
Of course, R. Gates’ argument is correct as far as the transient sensitivity calculation since 1750 is concerned: but I had already pointed this out in my postings, drawing the legitimate conclusion that if the transient industrial-era sensitivity (1.1 K) is near-equal to the equilibrium system sensitivity (1.2 K) then it is likely that temperature feedbacks are net-zero or thereby. I submit, therefore, that I have not made an “error of logic” here.”
——-
First of all, I appreciate that we can have an actual exchange of ideas here, and once more tip my hat to Anthony for providing this opportunity. I have a few questions about your reply and some underlying assumptions that you seem to be making. Perhaps you have a good reason for making them, but as they form the cornerstone of much of the uncertainty surrounding sensitivity, and are far from settled even among the supposed experts, I am curious as to how you can be so certain of them.
1. We are concerned with the climate sensitivity to the change in greenhouse gases since 1750, and given that the changes, in percentage terms, of CO2, N2O, and methane have been between 30 and 40%, how can you be certain that the feedbacks generated by these changes are a "small fraction" of the total feedbacks, when in fact feedbacks are clearly not linear in nature at all – especially when dealing with a system as complex as climate, existing as it does at the edge of chaos, where a seemingly small nudge can send the system into a new state seeking an entirely new equilibrium? A perfect example of this is the melting of ice, where the small percentage change in energy added to ice just below 0C can initiate a change which was not predictable if one simply looked at the effects of adding similar amounts of energy from, say, -10C up to 0C. If we know anything for certain from our studies of paleoclimate, it is that it is not as stable as once imagined, and that there are tipping points where it can suddenly shift into a new regime, forced by the smallest of nudges (but not random in the least). So again, on what do you base your presupposition that the feedbacks initiated even by the changes in greenhouse gases that we've already seen since 1750 will be a small fraction of the total feedbacks generated by the existence of greenhouse gases, when feedbacks are not linear in nature?
2. On what do you base what you call a "legitimate conclusion" that the transient industrial-era sensitivity of 1.1 K is near-equal to the equilibrium sensitivity of 1.2 K? Making such a conclusion would require that you had complete knowledge of all fast and slow feedback processes, and given that these are the very crux of much on-going research, I must politely suggest that you can't possibly draw this conclusion. For example, before the rather surprising drop in arctic sea ice in summer 2007, which has caused all climate models to readjust their projections as to when the arctic will be ice-free in summer, did you predict such an acceleration downward to the already diminishing arctic sea ice? As this acceleration in the loss of arctic sea ice will have an impact on equilibrium sensitivity, you would have had to have foreseen this steepening of the curve downward in order to conclude that the transient industrial-era sensitivity is near-equal to the equilibrium sensitivity. Point of fact – no one is certain or can be certain that these two are anything close to being equal, as the system is still undergoing change, the feedbacks to the increases in greenhouse gases we have already had since 1750 are still active, and the system has not yet found a new equilibrium point. Moreover, of course, the amounts of these greenhouse gases continue to increase and likely will for several decades to come at the very least, pushing any future equilibrium point far into the future and, given the nonlinear nature of the feedbacks, making prediction of what that temperature will be quite difficult at best – which causes me great suspicion about the validity of your certainty that the sensitivity to a doubling of CO2 will be "low enough to be harmless", given the dynamic changes already underway.
Always respectful of your keen intellect,
R. Gates

December 31, 2011 6:13 am

Joel Shore says:
“If you have a block of ice there in place of something warmer, then it will not keep the room warmer for longer. If you put a block of ice in place of a vat filled with liquid nitrogen then it would indeed cause the room to stay warmer for longer.
“All that the greenhouse effect says is this: If you have the Earth emitting all its radiation out to space (at ~3 K), it will cool faster than if some of that radiation is being absorbed by the atmosphere, which is then because of its temperature then emitting some radiation back to the Earth.”
Granted, my ice block is not a very good analogy: it has considerable thermal capacity. CO2 et al. have negligible thermal capacity. They absorb LFIR and instantly re-radiate it. No heat loss from the ground is 'slowed' by their presence in the troposphere or elsewhere. This is why the computer models predict a warming troposphere, but reality shows no warming at all. The effect is purely imaginary: Cartesian rather than Newtonian scientific methodology.
Further, these same so-called 'GHGs' absorb LFIR from insolation and re-radiate it in all directions – i.e. just over half back out into space, cutting off more than half from reaching the ground in the first place. Any LFIR absorbed from the ground is also re-radiated in all directions, and the downward portion has zero heating effect on the ground, as the troposphere is usually between 50 and 100 deg C colder than the ground.

R. Gates
December 31, 2011 6:25 am

davidmhoffer says:
December 30, 2011 at 9:48 pm
R. Gates;
The Earth has not reached that even yet as all feedback processes have not yet fully responded.>>>
Since you keep referring to the many unknowns which you admit to regarding feedbacks, how can you assert with any degree of confidence that the feedback responses have not yet fully responded?
——–
We may not know all the feedbacks, nor where the tipping points are, but we are seeing some already in play that strongly indicate that feedbacks to even the current level of greenhouse gases are still responding and a new equilibrium point has not been reached. My response to Lord Monckton is specific on this point. He's made what he calls a "legitimate conclusion" that the transient industrial-era sensitivity is near-equal to the equilibrium industrial-era sensitivity, yet as we don't yet know what the industrial-era sensitivity is, the game being still afoot, this conclusion he makes is far from legitimate.

December 31, 2011 6:47 am

Wayne,
Thanks a lot; it is indeed clear–and it appears that, as I would have, you declined to integrate with respect to mu, as Nikolov did in his Equation (2).
Now that I’m sure what you (and, presumably, Nikolov) did, I’ll have to figure out whether it means anything.
By the way, here’s my attempt to place into a reply some code (R, in this case), which largely (but not quite) does the same thing. As you’ll see if this ends up being readable (I don’t know how to enter a scrollable window the way you did), the code emphasizes exposition over efficiency, and, since I’m using a scripting language, I only used about 68,000 points. The answer I get with the default parameter values is 158 Kelvins.
NikolovEqn2 = function(nLat = 180, nLong = 360, S_0 = 1366, c_s = 1.325e-4,
a_gb = 0.125, epsilon = 0.955, sigma = 5.670373e-8){
# A unity-radius sphere of emissivity epsilon is irradiated by (1) essentially
# planar radiation whose power density is S_0 (W/m^2) and (2) isotropic
# background radiation whose power density is c_s. The (imaginary)
# sphere has zero thermal conductivity and zero thermal inertia, so the
# temperature at each point on its surface at every instant assumes
# precisely the temperature that places it in equilibrium with the radiation
# it receives at that instant. Find the temperature of a uniform-temperature
# sphere that radiates the same total power
#
# Solar radiation at noon GMT on an equinox as a function of location on the
# earth’s surface :
S = function(phi, theta) {S_0 * max(0,cos(phi)) * cos(theta);}
# Positional increments:
dLat = 180 / nLat; dTheta = pi * dLat / 180;
dLong = 360 / nLong; dPhi = pi * dLong / 180;
# Grid-cell centers:
lats = -90 + dLat/2 + 0:(nLat - 1) * dLat; thetas = pi * lats / 180;
longs = -180 + dLong/2 + 0:(nLong - 1) * dLong; phis = pi * longs / 180;
# Initialize total area scanned and cumulative area-weighted temperatures
A = 0; sumT = 0;
# Turn the crank:
for(phi in phis){
for(theta in thetas){
dA = cos(theta) * dTheta * dPhi;
# Tot up the area for debugging purposes; it should add to 4 * pi at the end
A = A + dA;
sumT = sumT + ((S(phi, theta) + c_s) * (1 - a_gb)/epsilon / sigma)^(1/4) * dA;
}
}
T = sumT/A
#
return(c(T, sumT, A));
}

December 31, 2011 7:12 am

Arno Arrak says, Dec 30th 2011
Arno, What a breath of fresh air you bring to this debate.
Your book “What Warming” on the satellite temperature data record should be read by all balanced, non-doctrinaire people who want confirmation that there is absolutely no hint of significant man-induced global warming in the real-world scientific data. You have shown beyond reasonable doubt that the alarmism of the last decade has been fuelled entirely by smoke and mirrors.
In fact, even the highly suspect (and apparently easily ‘adjustable’) surface instrumental record, whilst showing more warming than the much more reliable satellite data, still does not give any cause for alarm if properly interpreted.
See: http://www.TheTruthAboutClimateChange.org/tempsworld.html
I have been updating this annually since around 2005. Year after year it has consistently shown the same linear upward trend since 1850 of a distinctly un-alarming 0.41 degC/century. Superimposed on this is a plus-or-minus 0.25 degC oscillation with a periodicity of around 65 years which, being oscillatory, is assumed by most authorities to be entirely due to natural ocean cycles. (It correlates particularly well with the Atlantic Multidecadal Oscillation.)
What I find astonishing is that even with this clearly un-alarming data record, plain as a pikestaff for all to see for at least the last 6 years, the warmist bandwagon still keeps rolling on and on.
Keep up the good work everyone (including, especially of course, the good LM) in 2012.
And a Happy New Year to all.

Joel Shore
December 31, 2011 7:38 am

Monckton of Brenchley says:

Besides, the only “science” he “injected” was a repeated misstatement to the effect that Kiehl & Trenberth’s greenhouse-gas forcings total of 86-125 Watts per square meter included feedbacks consequent upon the forcings.

To put it another way than I did in my last post, you have no idea what part of what you and they call the “forcing” due to water vapor is in fact a feedback. That is, this calculation of the “forcing” due to water vapor calculates the radiative effect of water vapor without dealing with the issue of how the water vapor got into the atmosphere. When you use this along with the 33 K temperature increase associated with the greenhouse effect, you are just making the assumption that none of the water vapor got into the atmosphere as the result of the feedback that occurred when the non-condensable greenhouse gases were added to the atmosphere. In other words, you are assuming that the water vapor feedback is zero. That is why your argument is completely circular, i.e., you are calculating the climate sensitivity under the assumption that the feedback is zero.

Joel Shore
December 31, 2011 8:00 am

Joe Born says:

My no-doubt naive interpretation is that phi is longitude and mu is (at an equinox) cosine of latitude (call it theta), in which case I would have thought that a differential of area (for a unit-radius sphere) would be cos(theta) d theta d phi = -[mu/sqrt(1-mu^2)] d mu d phi, and I was not able to detect the bracketed factor in Nikolov’s equation.

The differential area is sin(theta) d theta d phi and Nikolov's equation is "correct" in that sense. However, his equation calculates the Earth's average temperature under the assumption that (in the absence of a greenhouse effect) the temperature of the Earth's surface is simply determined by the local insolation. This is a very naive assumption that neglects any heat storage or transport. I think a more realistic assumption is what the climate science community generally makes: one calculates what the average temperature would be under the assumption that the Earth's surface temperature is uniform, and then notes that, to the extent that the surface temperature were non-uniform, Hölder's Inequality implies that the average surface temperature would be lower. This correction is actually pretty small for the sort of temperature distribution that occurs on the Earth in its current state. While it would probably be somewhat larger in the absence of a greenhouse effect (i.e., the absence of a greenhouse effect would result in larger surface temperature variation), I doubt it would be so large as to make the assumption that Nikolov and Zeller make anywhere close to accurate.

December 31, 2011 8:25 am

Wayne:
I don’t mean to be a pen pal here, but I realized (as I was taking my quarterly return to the post office) that the comments in my code above betray a misconception under which I briefly labored as I was commenting the code. What my code–and yours–computes is not, as I characterized it, “the temperature of a uniform-temperature sphere that radiates the same total power.” That value would be the 280 Kelvins or so everyone has been citing.
No doubt this comes as a surprise to very few people. (I heard you saying “Well, duh.”). I bring it up, though, because it makes me believe that Nikolov’s Equation 2 value really doesn’t mean much. Yes, under highly fictional conditions the average temperature could theoretically be much lower than what now prevails on earth, but, as you observe, those conditions are ones that would not prevail on earth even in the absence of an atmosphere. So I have difficulty in perceiving what it is Nikolov et al. can logically infer about the atmosphere’s effects from that equation’s result.
I’m therefore inclined like Lord M not to devote much more thought to Nikolov et al. until they have revised their presentation considerably.

Alan D McIntire
December 31, 2011 8:35 am

A couple of years ago I read an article stating that Earth had a roughly 10% denser atmosphere during the Permian Period than it does now, which helped make it warmer than it is now. From that article I thought I could apply the PV = nRT law to get the additional greenhouse effect from an Earth with a denser atmosphere, and Venus's surface temperature – in effect Dr Nikolov's calculation. I quickly found a snag in my reasoning.
M and R are determined constants, and P is an input for an atmosphere of a given density, but rho is not determined by the initial conditions.
If P is 10% larger and rho is ALSO 10% larger, you get the same temperature as before.
Venus has a P of roughly 90 atmospheres. What determines rho? If rho were 130, Venus would have a temperature of 370 K.
I am underwhelmed by Dr Nikolov's calculations.
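To see the point numerically: with P and M fixed, T = PM/(ρR) is set entirely by whatever ρ turns out to be. A sketch with round-number Venus values (P of about 92 bar of CO2; ρ of roughly 65 kg/m³ is close to the observed surface density, while 130 kg/m³ is the hypothetical value above):

#include <cstdio>

int main()
{
    const double P = 9.2e6;                  // Pa (~92 bar), illustrative
    const double M = 0.044;                  // kg mol^-1 for CO2
    const double R = 8.314;                  // J mol^-1 K^-1
    const double rhos[] = { 65.0, 130.0 };   // kg m^-3: roughly observed vs. hypothetical
    for (double rho : rhos)
        printf("rho = %5.1f kg/m^3 -> T = %.0f K\n", rho, P * M / (rho * R));
    return 0;
}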

lgl
December 31, 2011 9:02 am

Joel Shore
How much water vapor from solar forcing and how much from CO2 forcing?
You write “you would lose much, if not most, of the radiative effects due to water vapor.” (without non-condensable gases). Based on what?

Joel Shore
December 31, 2011 9:16 am

: Without the non-condensable greenhouse gases, the first-order effect would be a drop in temperature on the order of 10 C and because the saturation vapor pressure is a strong function of the temperature, this would cause a significant drop in water vapor, which would lead to more cooling and more water vapor condensing out, … In the end, according to the computer modeling of Lacis et al., the temperature drops by roughly the 30-35 K due to the greenhouse effect. (On the one hand, not all the water vapor is gone but on the other the albedo of the earth’s surface increases.)
Even if you do not believe this scenario, just assuming it doesn’t occur in order to derive your climate sensitivity and then claiming that this sensitivity includes the water vapor feedback is a completely circular argument. Nobody serious disagrees about the climate sensitivity in the absence of feedbacks…The question is how feedbacks modify the result. Monckton’s calculation sheds zero light on that question.

Alan D McIntire
December 31, 2011 10:24 am

Joel Shore says:
December 31, 2011 at 9:16 am
: Without the non-condensable greenhouse gases, the first-order effect would be a drop in temperature on the order of 10 C
Based on the gospel according to realclimate,
http://www.realclimate.org/index.php/archives/2005/04/water-vapour-feedback-or-forcing/
the removal of CO2 would decrease Earth's greenhouse effect by 9%. The net greenhouse effect is about 250 watts – 100 latent heat of vaporization and 150 sensible heat. Multiply by 0.91 and you get 91 watts latent heat and 136.5 watts sensible heat over the 240-watt no-greenhouse-effect baseline. That would reduce temperatures by a factor of (376.5/390)^0.25 = 0.99+, from about 288 K to about 285.5 K, a drop of 2.5 K, NOT your absurd 10 K. The actual sensible drop in temperatures would be less than 2.5 K, since a smaller fraction of the decreased wattage would go into the latent heat of vaporization, and there would be a drop in albedo due to fewer clouds.
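(The Stefan-Boltzmann scaling in that last step can be checked directly; the 390 and 376.5 W/m2 figures below are simply the ones used in the comment above, not independently derived values.)

#include <cstdio>
#include <cmath>

int main()
{
    const double T0 = 288.0;                       // K, starting temperature used above
    const double ratio = pow(376.5 / 390.0, 0.25); // fourth-root flux scaling
    printf("factor %.4f -> %.1f K (a drop of %.1f K)\n",
           ratio, T0 * ratio, T0 * (1.0 - ratio));
    return 0;
}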

gbaikie
December 31, 2011 10:24 am

“Venus has a P of roughly 90. What determines Rho? If Rho was 130, Venus would have a temperature of 370 K.”
It seems to me that if Venus had 92 atm of nitrogen it would be a lot cooler, but not as cold as 370 K.
I think it would be much cooler because my impression is that CO2 is more transparent than nitrogen. On Venus, with 92 atm of CO2, sunlight reaches the surface; if instead it were nitrogen, it seems the surface would be completely dark.
Pictures of the Venus surface:
http://www.mentallandscape.com/C_Venera_Perspective.jpg
http://www.mentallandscape.com/C_CatalogVenus.htm
Venus has 3.5% nitrogen – about 3 Earth atmospheres of nitrogen; if instead Venus had, say, 5 times more nitrogen, it seems that could cool the planet significantly.
The Sun emits a large component of its energy as visible light, and it seems to me the coolest a planet at a given distance from the Sun could be is if it had gases which blocked almost all of the Sun's spectrum from reaching the surface.
Ocean depths on Earth are hard for light to penetrate – you need sonar to find subs under the ocean. Therefore enough water vapor may function as liquid water does in an ocean. So instead of adding nitrogen to Venus to cool it, one could add lots of water, which would become water vapor.
10 atm of water vapor roughly equals 100 meters of liquid water. Visible light can penetrate 100 meters of water, but added to the existing Venus atmosphere it seems it would block all light from reaching the surface, and block other parts of the solar spectrum as well.
Of course, the energy needed to vaporize that much water would at least temporarily cool the atmosphere.

Dreadnought
December 31, 2011 10:38 am

Nice work, m’Lud!
I particularly appreciated your last paragraph:
“The army of light and truth, however few we be, will quietly triumph over the forces of darkness in the end: for, whether they like it or not, the unalterable truth cannot indefinitely be confused, concealed, or contradicted. We did not make the laws of science: therefore, it is beyond our power to repeal them.”
You are indeed a wordsmith of the first order, if I may say so.
Furthermore, it is truly heartening that your logical approach to the scientific method is impenetrable, even to the likes of the battle-hardened warmist RGates.
It will be with great interest that I observe and support the progress of your action against the climate crank shysters in 2012 and beyond. More power to your elbow!
Happy New Year!

Doug Proctor
December 31, 2011 10:51 am

Dave Wendt says:
December 30, 2011 at 6:21 pm
Dave,
Thanks for the reply! 0.088 W/m2 as the global contribution of planetary cooling – a non-intuitively low number. It means both the heating mechanisms and the thermal conductivity of rock are very low. The earth's rocks are like blankets, and the oceans and atmosphere like radiators.
I appreciate the feedback. There are many things that you don't think about, that you take for granted, that when you stop to consider them appear odd. All these things that are "settled" and "certain" that are then found to be undetermined, unsettled and, in their implications, uncertain.
Where have we heard that before?
The answer seems weirdly low if, as in Calgary, you can simply put 300′ of pipe 40′ in the ground and have some significant impact on heating/powering your house. I will check further with these Green Technology Guys about the power output of their pipes. If the recharge power is really 0.088 W/m2, my (again) intuitive sense is that their heating system will fail quickly as the energy is drained away.
Oh, and the new Google Earth map of 6km deep temperatures across the USA: only very rich corporations could get into geothermal energy at those depths. Makes you wonder if that is the purpose: if you could actually heat cities with depths of a few hundred meters, anyone could do it (called “water well” drilling).

lgl
December 31, 2011 11:00 am

Joel
I certainly do not believe that scenario. Most of the vapor originates in the tropics, and table 8 here http://www.cccma.ec.gc.ca/papers/jli/pdf/puckrin2004.pdf gives 295 W/m2 from H2O and 5 W/m2 from CO2, so clearly the great majority of the water vapor is solar-driven (even if you add 10 or 20 W/m2 to the 5 from the combined effect). Adding 400 W/m2 of solar we get at least 700 W/m2 non-CO2 in total, so removing the 10-20 or so W/m2 from CO2 would not change the temperature in the tropics significantly.

December 31, 2011 11:09 am

Wrong: the oceans, 80% of the surface would still remain in the ice and cloud free World so hypothetical albedo would be ~0.07!
Such confidence! Without an atmosphere, we would not have any oceans either. A kinda goes with B there.
To me this whole modeling thing is a farce, no matter how you apply it. This is the problem with the current generation of scientists in many respects: they have replaced actual experimentation and fact-finding with assumptions and computer models. The truth about GHGs is out there if someone would just put up the money to actually do the experiments rather than all this arm-waving.
I like Chris M and I know that his heart is in the right place here but without actual experimental evidence his arm waving is no more conclusive than Mike Mann’s or James Hansen’s.
As a physical scientist and test engineer this is what first has to be established.
1. What is the desaturation altitude of the principal absorption spectra of CO2 in the wavebands where it is saturated at the ground?
2. Has the Lorentzian broadening of the linewidths of the individual absorption lines changed as CO2 has increased from 0.028% of the atmosphere to 0.039%? If so, by how much?
3. What exactly is the effect of a 1 degree K increase in temperature on the absorption lines of CO2? It is well known that there is a temperature dependency with regard to the absorption lines. Measure the lines in the tropics, subtropics, and polar regions to see how this affects the desaturation altitude, not only for CO2 but for CH4, N2O and H2O.
All of the computer modeling that I have ever seen simplifies the characteristics of absorption and emission of CO2. In my opinion this is why you are not seeing the mid-troposphere warming predicted in the tropics by Hansen and others.
Quit arm waving and start measuring, we might learn something.

George E. Smith;
December 31, 2011 11:26 am

“”””” David Walton says:
December 30, 2011 at 11:33 am
Re: “It is a shame that the politicians took a stand on this issue, but as Churchill observed, even a fool (and presumably many fools) can be right sometimes.”
Au contraire. Thanks to politicians and the legions of ignorant, feel good eco-fools GE’s breakthrough incandescent that had nearly the efficiency of CFLs and none of the associated disposal hazards was canned and never brought into production. “””””
So how would that "efficient" incandescent bulb work, David?
Existing incandescents emit a lot of their radiant energy in the near infra-red, and those photons can't easily be up-converted to visible light; and ALL light, "by definition", IS visible.
So presumably the efficient incandescent would have to have its Temperature raised. A 3,000 K filament would have its spectral irradiance peak at about 1.0 micron, twice that of the 6,000 K sun, and about 98% of the radiant energy would lie between 0.5 microns and 8.0 microns; but only about 25% of it is below 1.0 micron – the rest is IR.
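(The peak-wavelength figures follow from Wien's displacement law, peak wavelength ~ 2898 micron-kelvins divided by the temperature; a quick check, with round-number temperatures assumed:)

#include <cstdio>

int main()
{
    const double b = 2898.0;                      // Wien's displacement constant, micron-kelvins
    const double temps[] = { 3000.0, 6000.0 };    // filament vs. solar photosphere, round numbers
    for (double T : temps)
        printf("T = %.0f K -> spectral peak near %.2f microns\n", T, b / T);
    return 0;
}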
Well you can try raising the Temperature to get more high energy photons, that you could then shift to other visible (but lower) frequencies, with various “phosphors” and the various “halogen” lamps are effectively in this category; but you still end up generating more heat (waste) than light.
Even at 6,000 K the sun doesn’t provide a spectrum at earth surface, which can be efficiently converted to a better white light source.
Down converting phosphors used in either fluorescents (or CFLs) , have their problems; and moreso with “warm white” lamps, than with “daylight” lamps that have 4-5,000 K color Temperatures.
Last night, I was drinking a glass of cranberry juice while reading in bed using a 5,000 K daylight LED lamp. The cranberry juice was dark chocolate brown; about the same color it would look outside at night under low-pressure sodium parking-lot or street lights. Neither of these lamps puts out any red light to speak of, so you get brown instead of red.
Warm white LEDs (2700-3,000 K) or fluorescents, (CFLs) do not contain a red phosphor, just a broad yellow phosphor or a mix of yellow and green with fluorescents, that are all UV pumped so they emit no visible light without the phosphors.
And the problem with all red phosphors, as Television makers know, is that they are quite inefficient in energy conversion to visible light, because red light has very low luminosity, and all red phosphors put out a very broad red spectrum, so too much radiant energy is put into the near IR, rather than visible red.
So warm white CFLs or LEDs will always be less efficient than daylight ones, at least until multicolor RYGB LED lamps are developed, and that will be too expensive for general lighting.

davidmhoffer
December 31, 2011 12:13 pm

R. Gates;
We may not know all the feedbacks, nor where the tipping points are, but we are seeing some already in play that strongly indicate that feedbacks to even the current level of greenhouse gases are still responding and a new equilibrium point has not been reached. My response to Lord Monckton is specific on this point. >>>>
You cannot assert that Monckton's sensitivity estimates are wrong on the basis that the feedbacks are unknown, while also asserting that your own sensitivity estimates are credible when you lack exactly the same data you criticise Monckton for not having.
You cannot both suck and blow.

December 31, 2011 12:26 pm

one should keep firmly in mind the distinction between first-order effects that definitely change the outcome, second-order effects that may or may not change it but won’t change it much, and third-order effects that definitely won’t change it enough to make a difference. One should ruthlessly exclude third-order effects, however superficially interesting.
I like this statement, but what are the first-order effects on the temperature of the Earth? This would be my list.
1. Solar Distance (inverse square law of the radiative flux from the sun)
2. Ocean (the big heat sink)
3. Atmospheric pressure
4. Gross Atmospheric Composition (Oxygen/Nitrogen/Inert gases/water vapor)
5. Albedo variations from plants, land use, ice, etc.
6. Earth Rotation
7. Altitude variations (mountains and other topography)
8. Internal heat and heat flow to the surface (volcanoes and standard heat flow)
Using the hierarchy that Chris lays out here, GHGs are all secondary effects.
Until you take into account 1-8 above, it is of little use to worry about GHGs.
For example, it is a known fact of paleoclimatology that our current ice age (we are in an interglacial period within a larger 3-5 million-year-old ice age) began pretty much with the uplift of the Hindu Kush mountains and the closure of the Isthmus of Panama. Before that, temperatures were considerably higher than today and oceans were tens of meters higher, with CO2 only slightly different from today. Therefore it should be quite clear that CO2 is of lesser import than variations in the altitude of the planet’s surface and differences in ocean circulation.
You can go farther back into the past to see different continental configurations coupled with CO2 as low as today’s but with global temperatures far different from what we see today.
Therefore, making wide-ranging statements about major effects of a secondary influence, without measuring it well enough to understand it within the context of the major influences (Chris, I am not claiming that you are doing this), obscures more than it illuminates.
This is why we need a program of actual spectral measurements around the world, so that we can accurately quantify the effects of CO2 as a secondary influence (CH4, N2O, and H2O as well) and then see what happens when more or less of them are put into the mix.

R. Gates
December 31, 2011 1:26 pm

Here’s a simple thought experiment regarding a doubling of CO2 that indicates a general range for climate sensitivity. Let’s assume that the general 33C boost the earth gets from greenhouse gases is correct (other theories notwithstanding). Let’s be generous on the conservative side, and suggest that about 25% of the 33C temperature boost comes from CO2 and related feedbacks. I suggest this is conservative because of the non-condensing nature of CO2 versus water vapor: without CO2 in the mix of greenhouse gases, as the earth cooled it might well continue to cool as more water vapor condensed out of the atmosphere, leading right back to an ice-house earth state, and thus losing the 25% part of the 33C rise might actually mean losing much more. But regardless, let’s be generous and suggest that about 8.25C of warming comes from CO2, and that this 8.25C includes any and all fast and slow feedback effects, such as might come from diminished ice and biosphere effects related to plankton etc.
As the Holocene climate has generally been pretty stable when compared to the very large oscillations of the last glacial period, the pre-industrial level of CO2 at 280 ppm has certainly played some role in helping to create the fairly stable Holocene climate that our species has so enjoyed and that has been the springboard of civilization. The common grain plants such as wheat, which have been so vital for our civilization, need a fairly stable climate to flourish.
Now a doubling of CO2, from the Holocene average of about 280 ppm to 560 ppm could mean up to an additional 8.25C of warming, but let’s be realistic and allow for the logarithmic nature of the purely radiative effects from CO2 (ignoring feedbacks in our conservative estimate), and be extra generous and suggest that we only get 25% of an 8.25C warming from doubling CO2. That would still mean about 2C of warming.
But let’s remember that two additional greenhouse gases are also increasing rapidly along with the rise in CO2…namely N2O and methane, and these certainly play some role in the 33C boost that the earth gets from greenhouse gases. Methane in particular could play an even bigger role, as its positive feedback effects could see the growth of methane on a percentage basis outstrip even that of CO2 in the coming decades; and of course methane eventually breaks down into CO2, meaning that its effects can be even harder to quantify over the long run, as we get several decades of warming from methane and then many more decades of warming from the CO2 that the methane reduces to. The point in bringing up methane and N2O is that if we add their effects to the generously conservative estimate of 2C warming from a doubling of CO2, we are easily in the range of the 3C warming per doubling that seems the most reasonable at the present time.
Again, these are all very conservative estimates, and they point strongly in the direction that 1.1C of warming per doubling is on the low side by at least 100% or more. A separate issue to all this, of course, is what harm might or might not come from a 3C warming. Certainly pronouncements that the effects from a doubling will be “low enough to be harmless” appear to be more guesswork than substance.
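Just to make the chain of assumptions in that thought experiment explicit, here is the arithmetic written out as a tiny Python sketch (every number below is an assumption taken from the comment above, not a measured quantity):

total_greenhouse_warming_c = 33.0   # assumed total greenhouse boost, in C
co2_share = 0.25                    # assumed fraction attributed to CO2 and related feedbacks
fraction_per_doubling = 0.25        # assumed fraction of that realized by one further doubling

co2_contribution_c = total_greenhouse_warming_c * co2_share          # 8.25 C
warming_per_doubling_c = co2_contribution_c * fraction_per_doubling  # about 2.1 C
print(co2_contribution_c, warming_per_doubling_c)

The result scales linearly with both assumed fractions, which is why much of the argument further down the thread turns on whether the 25% attribution is really conservative or generous.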

Myrrh
December 31, 2011 1:27 pm

Rosco says:
December 30, 2011 at 4:14 pm
Clearly, if one uses S(1-a) to calculate the maximum temperature the Sun “could” heat the Earth to (minus all other considerations such as evaporation, convection etc etc) you arrive at 360 K or ~ 87 degrees C.
Is this reasonable ?

Don’t know. But 67°C is given as the temp without the dynamics of the water cycle but with the rest of the atmospheric gases in place, practically 100% nitrogen and oxygen. So are you saying that with those gases in place but without convection – no rising and falling parcels of air – the temp would be 20°C higher?
I’m not sure what the ‘picture’ is here, but it does appear to make sense as part of the standard description, if the fluid gaseous atmosphere isn’t moving at all – so no cooling winds – and the water cycle is absent.
[The standard description I’ve seen is that the Earth without any atmosphere is -18°C, and with an atmosphere but without the water cycle 67°C.]
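For what it is worth, the 360 K (~87°C) figure drops straight out of the Stefan-Boltzmann law applied to a flat surface facing the Sun, with no factor-of-four disc-to-sphere averaging, while the familiar -18°C comes from spreading the same absorbed flux over the whole sphere. A minimal Python check, assuming a solar constant of about 1368 W/m2 and an albedo of 0.3:

S = 1368.0       # assumed solar constant, W/m^2
a = 0.3          # assumed Bond albedo
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

T_subsolar = (S * (1 - a) / sigma) ** 0.25         # flat surface square-on to the Sun
T_effective = (S * (1 - a) / (4 * sigma)) ** 0.25  # same absorbed flux spread over the sphere

print(f"{T_subsolar:.0f} K = {T_subsolar - 273.15:.0f} C")    # about 360 K, 87 C
print(f"{T_effective:.0f} K = {T_effective - 273.15:.0f} C")  # about 255 K, -18 C

The 67°C figure quoted above does not fall out of either of these simple limits, so it presumably rests on additional assumptions that are not stated here.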

R. Gates
December 31, 2011 1:45 pm

davidmhoffer says:
December 31, 2011 at 12:13 pm
R. Gates;
We may not know all the feedbacks, nor where tipping points are, but we are seeing some already in play that strongly indicate that feedbacks to even the current level of greenhouse gases are still responding and a new equilibrium point has not been reached. My response to Lord Monckton is specific on this point. >>>>
You cannot assert that Monckton’s sensitivity estimates are wrong on the basis of the feedbacks being unknown, while also asserting that your own sensitivity estimates are credible despite lacking the exact same data you criticise Monckton for not having.
You cannot both suck and blow.
_______
Again, Lord Monckton is asserting that the transient industrial-era sensitivity is nearly equal to the equilibrium sensitivity, yet this can’t possibly be a known quantity, as the system (even at current CO2 levels) has not yet reached an equilibrium temperature, and worse still, CO2 levels continue to rise at rates faster than previously seen in the past 800,000 years. The paleoclimate record combined with a consensus of global climate models point strongly to a sensitivity per doubling of CO2 certainly greater than 2C, thus making Lord Monckton’s estimate of 1.1C off by at least 100%. Yes, the climate models are imperfect (as all models are), and the paleodata is also incomplete, but taken as a whole, they provide far more weight of evidence pointing to a higher climate sensitivity than what Lord Monckton would allow for.

R. Gates
December 31, 2011 2:16 pm

Alan D McIntire says:
December 31, 2011 at 10:24 am
“The removal of CO2 would decrease Earth’s greenhouse effect by 9%. The net greenhouse effect is about 250 watts – 100 latent heat of vaporization and 150 sensible heat. Multiply by 0.91 and you get 91 watts latent heat and 136.5 sensible heat over the 240 watt no-greenhouse-effect baseline. That would reduce temperatures by a factor of about (376.5/390)^0.25 = 0.99+, from about 288 K to about 285.5 K, a drop of 2.5 K, NOT your absurd 10 K. The actual sensible drop in temperatures would be less than 2.5 K, since a smaller fraction of the decreased wattage would go into the latent heat of vaporization, and there would be a drop in albedo due to fewer clouds.”
_____
This is not taking into account the huge difference between condensing and non-condensing greenhouse gases, and the stabilizing effect that CO2 has on the overall greenhouse temperatures due to that non-condensing nature. Removing CO2 would remove that stabilizing effect, such that over a rather short period (certainly less than a century), the earth would gradually cool, more water vapor would be condensed out of the atmosphere and a return to an ice-house planet, globally very dry and very cold, would be the final result. This, by the way, doesn’t even take into account the biosphere effects of higher CO2, such that removing it would kill off plants, which would increase the planetary albedo by an even greater amount, and this, combined with increasing ice and snow coverage, and lower water vapor levels, would also lead inexorably to the ice-house earth of the past.

December 31, 2011 2:17 pm

Monckton of Brenchley says:
December 30, 2011 at 5:58 pm
[SNIP: Phil. – Let’s not go there…. and Joel Shore is more than capable of taking care of himself, although I’m sure he appreciates your support. Take the high road. -REP]

AJStrata begs me to appreciate that the “absorbing” and “emitting” spheres at the characteristic-emission altitude are not of the same radius. However, Kirchhoff’s radiation law is entirely clear: absorption and emission of radiation from the characteristic-emission surface of an astronomical body are – and are treated by the textbooks and the IPCC as – simultaneous and identical. It is as simple as that.

Kirchhoff’s law is clear, but unfortunately you have it wrong: ‘emissivity equals absorptivity’ is correct, not your version.

Bill Illis
December 31, 2011 2:30 pm

R. Gates says:
December 31, 2011 at 1:45 pm
.. and the paleodata is also incomplete, but taken as a whole, they provide far more weight of evidence pointing to a higher climate sensitivity
——————-
It is shocking how many pro-AGW people believe this. I’m assuming they have never checked whether that was true or not.
Take, for instance: CO2 was at 240 ppm 15 million years ago and the temperature was +4C – all CO2 estimates close to the period. What is the CO2 sensitivity in that case? Or 33 million years ago, when the temperature was about the same as today and CO2 was 1200 ppm – all CO2 estimates near the period. (Handy-dandy sensitivity formulas for checking climate-science math below.)
CO2 sensitivity = Temp AnomC * LN(2) / LN(CO2ppm/280)
Temp AnomC = CO2 sensitivity / LN(2) * LN(CO2ppm/280)
Climate science uses a different formula:
Any data whatsoever = 3C per doubling
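The check formula above is easy to wrap in a couple of lines of Python; plugging in the two paleo examples cited (the numbers as stated in the comment, not independently verified) shows why they imply no positive sensitivity:

import math

def sensitivity_per_doubling(temp_anom_c, co2_ppm, baseline_ppm=280.0):
    # Implied sensitivity (C per doubling) from one data point, using the
    # logarithmic relation given above: S = dT * ln(2) / ln(C / C0).
    return temp_anom_c * math.log(2.0) / math.log(co2_ppm / baseline_ppm)

def temp_anomaly(sensitivity, co2_ppm, baseline_ppm=280.0):
    return sensitivity / math.log(2.0) * math.log(co2_ppm / baseline_ppm)

print(sensitivity_per_doubling(4.0, 240.0))   # ~15 Mya case: comes out negative
print(sensitivity_per_doubling(0.0, 1200.0))  # ~33 Mya case: comes out zero
print(temp_anomaly(3.0, 560.0))               # sanity check: 3 C per doubling gives 3 C at 560 ppm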

Joel Shore
December 31, 2011 2:43 pm

Alan D McIntyre says:

the removal of CO2 would decrease Earth’s greenhouse effect by 9%. The net greenhouse effect is about 250 watts – 100 latent heat of vaporization and 150 sensible heat. Multiply by 0.91 and you get 91 watts latent heat and 136.5 sensible heat over the 240 watt no-greenhouse-effect baseline. That would reduce temperatures by a factor of about (376.5/390)^0.25 = 0.99+, from about 288 K to about 285.5 K, a drop of 2.5 K, NOT your absurd 10 K.

I said all the non-condensable greenhouse gases, not just water vapor. That said, a better estimate for the direct (pre-feedback) effect of their removal would probably be about 5 K, not 10 K.

Joel Shore
December 31, 2011 2:49 pm

lgl says:

I certainly do not believe that scenario. Most of the vapor originates in the tropics, and table 8 here http://www.cccma.ec.gc.ca/papers/jli/pdf/puckrin2004.pdf says 295 W/m2 from H2O and 5 W/m2 from CO2, so clearly the vast majority of the water vapor is solar-driven (even if you add 10 or 20 W/m2 to the 5 for the combined effect). Adding 400 W/m2 solar we get at least 700 W/m2 non-CO2 in total, so removing the 10-20 or so W/m2 from CO2 would not change the temperature in the tropics significantly.

I think your estimates are off-the-mark and it is also not really relevant what the fraction of the total W/m^2 is: The earth is poised at temperatures where a little bit of warming goes a long way toward increasing water vapor in the atmosphere (and, by the way, the most important parts of the atmosphere in terms of the water vapor feedback are not the warm regions but rather high in the troposphere where it is quite cold). I trust actual models that incorporate actual physics into them more than people throwing numbers around off the top of their head as if that proves something.
That being said, it is completely irrelevant to the issue of whether one can invoke a circular argument by ASSUMING the water vapor feedback doesn’t exist in order to derive a climate sensitivity that one claims includes the effects of the water vapor feedback.

R. Gates
December 31, 2011 3:01 pm

Bill Illis says:
December 31, 2011 at 2:30 pm
R. Gates says:
December 31, 2011 at 1:45 pm
.. and the paleodata is also incomplete, but taken as a whole, they provide far more weight of evidence pointing to a higher climate sensitivity
——————-
It is shocking how many pro-AGW people believe this. I’m assuming they have never checked whether that was true or not.
____
Actually, I have checked this, probably far more extensively than many skeptics. Of most interest in the paleoclimate data would be the most recent time period in which CO2 was around current or slightly higher levels, as this would also be the closest in terms of ocean and continent configurations as well as general solar output and insolation levels (the cooler sun of the most distant past creates issues, as do the greatly different continent configurations). In this regard, a great amount of interest is of course focused on the mid-Pliocene period, as this was the last and most recent period in which CO2 levels were approximately at this level or higher (3.0 to 3.3 mya). While much study of course remains to be done, and a great deal is on-going, a growing amount of paleoclimate data from this time period indicates a climate sensitivity of about 3C per doubling of CO2 from pre-industrial levels. Most important, of course, is the fact that this paleoclimate data would inherently include all fast and slow feedbacks from higher CO2 levels – something the current batch of climate models need continuing improvement on. As the paleoclimate data continues to come in from the mid-Pliocene, and the sum total of feedbacks is better understood, a tightening of the estimate of climate sensitivity will occur, reducing the uncertainty bands. Currently the 3C estimate of warming per doubling of CO2 is holding up quite well.

Dave Wendt
December 31, 2011 3:15 pm

R. Gates says:
December 31, 2011 at 1:26 pm
Although I’ve never found your arguments here very convincing, in the past I must admit I thought you did at least a workmanlike job of arguing your position. However, this comment is so lame that one hardly knows where to begin in addressing it, or even whether it is worth wasting my dwindling supply of life to do so.
“Let’s be generous on the conservative side, and suggest that about 25% of the 33C temperature boost comes from CO2 and related feedbacks.”
25% is “generous and conservative”? Even the notoriously skeptical folks at Wikipedia have 25% at the very top of their estimate. In the real world the percentage is more likely to be in single digits.
“Now a doubling of CO2, from the Holocene average of about 280 ppm to 560 ppm could mean up to an additional 8.25C of warming…”
8.25C? Really? This last doubling could double the GHE of all the other CO2 in the atmosphere? Got a link for that one?
“But let’s remember that two additional greenhouse gases are also increasing rapidly along with the rise in CO2…namely N2O and methane, and these certainly play some role in the 33C boost that the earth gets from greenhouse gases.”
At the molecular level methane is supposedly 30 times more radiatively effective than CO2, but it exists in the atmosphere, even after a big uptick in the 80s, at 1/200th the concentration of CO2, and its rate of growth has been fairly flat since the 90s. In the real atmosphere, what scant evidence exists suggests that neither CH4 nor N2O is more than a negligible contributor to the GHE.
http://ams.confex.com/ams/Annual2006/techprogram/paper_100737.htm
“Certainly pronouncements that the effects from a doubling will be “low enough to be harmless” appear to be more guesswork than substance.”
Lord Monckton’s statements may be guesswork, given the current state of the science most statements about the climate are, but his guesses bear at least some relation to reality. Something which can’t be said about your comment.

R. Gates
December 31, 2011 3:39 pm

Dave Wendt said (regarding R. Gates’ conservative estimate of the impact of CO2 in the mix of greenhouse-gas effects on the climate):
“25% is “generous and conservative”? Even the notoriously skeptical folks at Wikipedia have 25% at the very top of their estimate. In the real world the percentage is more likely to be in single digits.”
____
I would simply direct those who question this estimate to the actual measurements and science; a few excellent sources can be found at:
http://www.nature.com/nature/journal/v344/n6266/abs/344529a0.html
http://www.esrl.noaa.gov/gmd/aggi/
http://rsta.royalsocietypublishing.org/content/369/1943/1891.full
http://scienceofdoom.com/2009/11/28/co2-an-insignificant-trace-gas-part-one/
I stand quite solidly behind my generously conservative estimate of 25% of the 33C warming being related to CO2 – and this didn’t even really address the 15-micron issue (where the earth’s outgoing LW peaks right where CO2’s absorption is strongest and water vapor’s is much weaker), nor the non-condensing nature of CO2 and its relative stability compared to water vapor in the atmosphere. Quite simply: take away the CO2 from the atmosphere and we’d be back to an ice planet in fairly short order, such that the 25% conservative estimate of the contribution to the 33C of warming does not even begin to indicate the full measure and value of the stability that CO2 brings to temperatures through its non-condensing nature.

Alan D McIntire
December 31, 2011 3:48 pm

Joel Shore says:
December 31, 2011 at 2:43 pm
…..
“I said all the non-condensable greenhouse gases, not just water vapor.”
I wasn’t aware that water vapor was a non-condensable greenhouse gas.
The earth’s atmosphere has changed dramatically over its 4.6 billion year existence. Solar luminosity has increased roughly 40% over the last 4.6 billion years. Despite these drastic changes, earth has had stable enough temperatures to maintain liquid oceans over nearly the entire span of time. Obviously the water cycle has played a dominant part in maintaining that stability, and obviously the feedback must have been NEGATIVE over geological periods of time.
Monckton is right – these high-sensitivity calculations belong in “Creature Feature” type disaster movies, not in scientific papers.

Dave Wendt
December 31, 2011 4:25 pm

R. Gates says:
December 31, 2011 at 3:39 pm
LIFE IS TOO SHORT!

davidmhoffer
December 31, 2011 4:33 pm

R. Gates;
The paleoclimate record combined with a consensus of global climate models point strongly to a sensitivity per doubling of CO2 certainly greater than 2C, thus making Lord Monckton’s estimate of 1.1C off by at least 100%.>>>
Oh bullsh*t.
Global climate models are just that, models! In fact, they are models based on the assumptions of the people who made them! By definition, they presume feedback values that you have just admitted nobody has any way of credibly quantifying! So call your estimate a computer model and suddenly it is reasonably accurate? What utter and total bullsh*t!
Then let’s average the utter and total bullsh*t models, which can’t hindcast worth beans and have failed to predict ANYTHING accurately since they were first written, with the paleo record, which equates temperatures to…assumptions made by the exact same researchers in regard to their relationship with tree rings. For the few decades in which we have both temperature data and paleo data, the paleo data is totally and completely wrong for almost 1/2 the temperature record!
So you want to average computer models based on wild guesses that cannot be substantiated by measurements, hindcasting, or forecasting, with paleo models that can’t get the temperature trend right almost half the time, and then claim that the resulting number supports your sensitivity estimate?
Please invite as many of your warmist friends as possible to read what you wrote and the rebuttals in this thread. I dare you.

Joel Shore
December 31, 2011 4:54 pm

Alan: Obviously, I meant “carbon dioxide” when I typed “water vapor”.
As for paleoclimate: Those who have actually studied paleoclimate have a different view of the Earth’s history ( http://www.sciencemag.org/content/306/5697/821.summary ):

Climate models and efforts to explain global temperature changes over the past century suggest that the average global temperature will rise by between 1.5º and 4.5ºC if the atmospheric CO2 concentration doubles. In their Perspective, Schrag and Alley look at records of past climate change, from the last ice age to millions of years ago, to determine whether this climate sensitivity is realistic. They conclude that the climate system is very sensitive to small perturbations and that the climate sensitivity may be even higher than suggested by models.

It may be true that on long enough timescales, geochemical feedbacks (e.g., the buildup of CO2 that occurs during snowball or slushball earth events, the drawdown of CO2 that occurs during hotter epochs) tend to produce negative feedback. Unfortunately, such feedbacks are unlikely to save us on the centennial time scale.

Joel Shore
December 31, 2011 4:58 pm

Monckton is right – these high-sensitivity calculations belong in “Creature Feature” type disaster movies, not in scientific papers.

As I have demonstrated here and nobody has seriously challenged, Monckton’s current pet argument about climate sensitivity is completely and utterly bogus. But, I suppose that won’t stop people from believing what they want to believe.

Bill Illis
December 31, 2011 5:21 pm

R. Gates says:
December 31, 2011 at 3:01 pm
In this regard, a great amount of interest is of course focused on the mid-Pliocene period, as this was the last and most recent period in which CO2 levels were approximately at this level or higher (3.0 to 3.3 mya).
————————–
Well, one can cherry-pick a short period and make a comment about it, but we need to look at whether the sensitivity applies consistently. Have a look at the last 15 million years. CO2 is playing no role at all.
http://img542.imageshack.us/img542/4995/co2andtempover15mys.png
And here are all of your CO2 estimates from 3.3 Mya to 3.0 Mya, when temps were about 1.0C higher than today (excluding some recent CO2 estimates from Pagani, which contradict the Antarctic ice-core numbers in the period where they overlap and so should be discarded). These are pretty consistently below 280 ppm (indicating it should have been cooler).
Age (Mya)  CO2 (ppm)
3.000 184
3.000 208
3.008 215
3.034 236
3.194 243
3.266 211
3.310 220
3.310 248
3.317 254
3.322 267
3.327 229
3.338 247
3.343 266
3.348 237
3.354 243
3.363 279
3.368 271
3.373 289
3.383 312
3.388 302
3.393 308
3.396 277

wayne
December 31, 2011 5:27 pm

Joe Born says:
December 31, 2011 at 8:25 am

What my code–and yours–computes is not, as I characterized it, “the temperature of a uniform-temperature sphere that radiates the same total power.” That value would be the 280 Kelvins or so everyone has been citing.

I had to strike that out of your statement, for it is you that has mischaracterized what is in Dr. Nikolov & Zeller’s theory as I read it.
You say: “the temperature of a uniform-temperature sphere that radiates the same total power” and that is totally incorrect. What they are saying is that every point on such a sphere is not, I repeat not, at a uniform temperature. Since the relation between the radiation of a massless gray body and the temperature at every point is non-linear, due to the fourth root, you must first compute the effective radiative temperature at each point from the radiation at that point and then average the temperatures across the sphere.
Lord M should pay very close attention to this theory, for its physics is correct on the ‘mean effective radiative temperature’, and it corrects the slip in logic found everywhere within general “climate science” today. Most of the problem is that most of climate science on the radiation side keep insisting on a massless surface without any thermal inertia, they keep assuming all radiation hitting the Earth immediately and totally radiates back to space. This is wrong.
I can’t speak for your program, but I will stand by mine for correctly computing the ‘mean effective radiative temperature’ of a massless gray body as a perfect radiator. Remember, there is no real temperature in such an example, for there is no mass. It takes mass to even define temperature. (But most climate scientists have no problem with it, and therefore they are all wrong, sorry.)

davidmhoffer
December 31, 2011 5:44 pm

R. Gates says:
December 31, 2011 at 1:26 pm
Here’s a simple thought experiment regarding a doubling of CO2 that indicates a general range for climate sensitivity. Let’s assume that the general 33C boost the earth gets from greenhouse gases is correct (other theories notwithstanding). Let’s be generous on the conservative side, and suggest that about 25% of the 33C temperature boost comes from CO2 and related feedbacks>>>
Since any feedbacks you could attribute to CO2 would apply equally to water vapour, and water vapour DWARFS CO2 as a GHG, your numbers are as bogus as the rest of your argument. But hey, let’s put that aside for a moment and use your utterly ridiculous 6 degrees.
That would mean that if 280 ppm = 6 degrees, then 400 ppm which is close to what we have now would result in another +3 (CO2 being logarithmic, plus 40% is about a 50% increase). That in turn would be about 6 degrees for doubling of CO2, TWICE as much as you claim with your utterly ridiculous circular logic based on artificial models being averaged with totally discredited paleo records.
Nice. You’ve discredited your own theory!

davidmhoffer
December 31, 2011 6:03 pm

Joel Shore;
I doubt it would be so large as to make the assumption that Nikolov and Zeller make anywhere close to accurate.>>>
My first take on Nikolov and Zeller was that their -100K number was out of the realm of reality. Upon further consideration, I’m not so certain. If you calculate equilibrium temperature based on the SB law and an average of 235 w/m2, you get around 253K.
But there’s no such thing as “average” insolation, and P varies with T^4, not T. I did a quick back-of-the-envelope calculation based on 12 hours of zero insolation followed by insolation rising linearly from zero to 1000 w/m2 in six hours, and then dropping back to zero in the next six hours. Save for the fact that I used a linear rise and fall, that’s going to be pretty close for the tropics and good enough for a back-of-the-envelope calc. I got 150K as a result. Fudge factor for using a linear rise, and call it 160K. Almost bang on N&Z, and also bang on the rough average of the moon when you extrapolate from the polar data as well as the equatorial data, which results in 150K instead of 250K.
I’ve emailed my calcs to Ira, who also thinks I’m off my rocker. Happy to cc you if you’d like to discuss off line, but I think – if you catch my drift – that we need to calculate T point by point from the rise and fall of insolation from day to night, season to season, and across latitude zones, with albedo and peak insolation changing as one nears the poles…and THEN average. Do that and we’re going to see a number in the range of -100K.
Averaging T based on averaging insolation may well be the most colossal math error in human history.
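That back-of-the-envelope calculation is easy to reproduce. Here is a minimal Python sketch, assuming a massless surface with no heat capacity, the triangular insolation profile described above, and no albedo term:

import numpy as np

sigma = 5.67e-8
hours = np.linspace(0.0, 24.0, 100_001)

# 12 hours of darkness, then insolation ramps linearly 0 -> 1000 W/m^2 over
# six hours and back down to zero over the next six hours.
S = np.where(hours < 12.0, 0.0,
             np.where(hours < 18.0, 1000.0 * (hours - 12.0) / 6.0,
                      1000.0 * (24.0 - hours) / 6.0))

T_inst = (S / sigma) ** 0.25           # instantaneous radiative-equilibrium temperature
avg_of_T = T_inst.mean()               # average of the temperatures: roughly 146 K
T_of_avg = (S.mean() / sigma) ** 0.25  # temperature of the average flux: roughly 258 K
print(avg_of_T, T_of_avg)

Because T varies as the fourth root of the flux, the average of the temperatures always comes out at or below the temperature of the averaged flux (Hölder’s inequality); a real surface with thermal inertia sits somewhere between the two idealized extremes.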

December 31, 2011 6:33 pm

Joel Shore says:
“As I have demonstrated here and nobody has seriously challenged, Monckton’s current pet argument about climate sensitivity is completely and utterly bogus. But, I suppose that won’t stop people from believing what they want to believe.”
Aside from his usual psychological projection [Joel Shore believes what he wants to believe], Joel Shore’s ‘demonstration’ is debunked by the planet itself. If the global temperature were highly sensitive to changes in CO2, then ΔT would closely track ΔCO2. But it doesn’t.
So who should we believe… Joel Shore? Or Planet Earth, and our lyin’ eyes?

December 31, 2011 7:02 pm

To Joel Shore and R. Gates:
Agenda-driven wild guesses using “supercomputer” models to prove your point don’t impress me. I was feeding IBM paper cards to “supercomputers” in the ’70s, and I know how they work, right down to the assembler language, in which I am well versed. I can program any computer to offer proof of anything you’d like me to using a computer model, whether it is true or not. Let me know when you have something solid, as in real-world physics, to go on.
In the meantime have a HAPPY NEW YEAR!
Best,
J,

Joel Shore
December 31, 2011 9:34 pm

davidmhoffer says:

Averaging T based on averaging insolation may well be the most colossal math error in human history.

As I have already explained, the error is definitely quite small (on the order of a degree or two if I recall) for current Earth-like temperature distributions. What it would be like for a hypothetical earth with no greenhouse effect is a little more complex, since it would sort of depend on how the greenhouse gas “disappears”, but I think it is reasonable to calculate the average temperature in the absence of a greenhouse effect based on the assumption that the temperature distribution has similar properties to what it has now, which means the average temperature would be pretty close to 255 K.
At some point, these sorts of hypothetical things become like counting angels on the head of a pin… The important thing to note is that without a greenhouse effect the Earth’s surface would have to emit only ~240 W/m^2 instead of the current ~390 W/m^2, and the highest possible average temperature at which it could do so is 255 K. If the temperature distribution were much more extreme than currently, then the average temperature would work out to be considerably less than this.

Joel Shore
December 31, 2011 9:52 pm

davidmhoffer says:

That would mean that if 280 ppm = 6 degrees, then 400 ppm which is close to what we have now would result in another +3 (CO2 being logarithmic, plus 40% is about a 50% increase). That in turn would be about 6 degrees for doubling of CO2,

You’ve made a mistake here: You have somehow managed to figure out what the ratio of log(c)/log(c_0) is when c_0 = 0, which is a neat trick because log(0) is undefined (or negative infinity, if you will). [In reality, what happens is that at lower concentrations, the log dependence breaks down…At low concentrations the dependence is more like linear, but without knowing the details of how that happens, you can’t use the temperature rise from 0 to 280ppm to predict the temperature rise from 280 to 400ppm.]
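The objection can be made concrete in a few lines of Python; the 3 C per doubling used below is only a placeholder to show the shape of the logarithmic relation, not an endorsed value:

import math

def delta_t(c2_ppm, c1_ppm, sensitivity=3.0):
    # Warming for a change from c1 to c2 ppm under a purely logarithmic relation,
    # delta_T = sensitivity * log2(c2 / c1). Only meaningful for c1 > 0, and the
    # log form itself breaks down physically at very low concentrations.
    return sensitivity * math.log(c2_ppm / c1_ppm, 2)

print(delta_t(400.0, 280.0))       # about 1.5 C for 280 -> 400 ppm, whatever happened below 280
for c1 in (100.0, 10.0, 1.0, 0.1):
    print(c1, delta_t(280.0, c1))  # the "warming from c1 up to 280 ppm" grows without bound as c1 -> 0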

davidmhoffer
December 31, 2011 10:19 pm

Joel Shore;
but without knowing the details of how that happens, you can’t use the temperature rise from 0 to 280ppm to predict the temperature rise from 280 to 400ppm.]>>>>
OK fine. That being the case, please explain to R. Gates that his initial assumptions are bogus for the same reason. He’s either wrong for the reasons you’ve stated, or for the reasons I’ve stated. I’m happy to go with yours.

Joel Shore
January 1, 2012 6:32 am

davidmhoffer says:

OK fine. That being the case, please explain to R. Gates that his initial assumptions are bogus for the same reason. He’s either wrong for the reasons you’ve stated, or for the reasons I’ve stated. I’m happy to go with yours.

I don’t see the place where R. Gates has made the mistake that I pointed out that you made. Could you direct me to it?

January 1, 2012 6:39 am

Wayne:
You wouldn’t take yes for an answer.
If you parse what I said, namely, “What my code–and yours–computes is not, as I characterized it, “the temperature of a uniform-temperature sphere that radiates the same total power,” you will see that I was not criticizing the code (yours or mine) or the equation it implements; I was criticizing the mischaracterization made by my code’s comments. I was in fact saying exactly what you responded with by saying “You say: ‘the temperature of a uniform-temperature sphere that radiates the same total power’ and that is totally incorrect.”
Resolution aside, moreover, the only substantive difference I see between my R code and your C code is my code’s assumption–which makes a negligible difference in the result–that the background radiation is isotropic instead of all coming from the sun’s direction. So, yes, I do understand that your program does indeed “correctly [compute] the ‘mean effective radiative temperature’ of a massless gray body as a perfect radiator.” I therefore recognize that objections based, e.g., on the moon’s higher average temperature have no merit as far as the correctness of your and my results and, to the extent that our programs implement Equation 2, the authors’. (I still don’t understand their integrating with respect to mu, but that’s probably just a testimony to how little remains of my once-adequate mathematical ability.)
However, now that I have a firm grasp on what those results are, I find I’m not yet able to comprehend their relevance to the question before the house, and I certainly see no evidence that supports your contention that “most of climate science on the radiation side keep[s] insisting on a massless surface without any thermal inertia, they keep assuming all radiation hitting the Earth immediately and totally radiates back to space.”
And, the apparently contrary evidence in my mischaracterization above notwithstanding, I doubt that many reasonably numerate observers fail to grasp the fact that the total radiation emitted by a body can vary widely with surface-temperature distribution even if the spatial-average temperature is kept constant; many commenters in this blog and others have made that point implicitly – and occasionally explicitly – in criticizing the concept of global average temperature. So I would caution you not to jump to that conclusion – as there is some evidence that the authors have – from the fact that people resist Equation 2’s results. I believe it much more likely (1) that some are objecting not to its correctness but rather to its relevance and (2) that others misunderstand it because, in reading it over more quickly than you and now I have, they assume the authors are shooting for a different quantity, one that has more relevance than I’ve yet been able to perceive to the question of whether atmospheric density alone amplifies the surface’s temperature response to insolation. My guess is that the latter is the reason why Lord M argued past you in his response.
Now, I haven’t given the rest of Nikolov’s post a fair shake, and, given what he reports is a great deal of prior effort on it by him and others, there is some basis for hope that there’s more to it than smart guys like, e.g., Roy Spencer yet see. But I am pessimistic because the authors seem to base so much on what I think is an invalid assumption that they are among very few who understand the temperature-field implications of Hoelder’s inequality.

Alan D McIntire
January 1, 2012 8:02 am

Joel Shore says:
December 31, 2011 at 4:58 pm
“As I have demonstrated here and nobody has seriously challenged, Monckton’s current pet argument about climate sensitivity is completely and utterly bogus. ”
With an average temperature of 288 K, the earth would radiate about 390.7 watts per square meter. Supposedly a doubling of CO2 would increase that by an additional 3.8 watts, which would raise the temperature to 288 * (394.5/390.7)^0.25 = 288.7 K, an increase of 0.7 K – hardly catastrophic. The assumed 3 K warming is all smoke and mirrors.
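That arithmetic is just the Stefan-Boltzmann ratio applied at the surface – the step Joel Shore disputes in his reply, since the textbook no-feedback calculation is done at the effective emission level rather than at the surface. A minimal Python reproduction of the numbers as given:

sigma = 5.67e-8
T0 = 288.0
F0 = sigma * T0 ** 4        # about 390 W/m^2 emitted by a 288 K surface
F1 = F0 + 3.8               # add the assumed 3.8 W/m^2 at the surface
T1 = T0 * (F1 / F0) ** 0.25
print(F0, T1, T1 - T0)      # roughly 390 W/m^2, 288.7 K, and a 0.7 K rise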
You have asserted that Monckton is wrong, but you haven’t DEMONSTRATED anything.
I think you’re using Lewis Carroll logic:
The Hunting of the Snark
Lewis Carroll
Fit the First – The Landing
“Just the place for a Snark!” the Bellman cried,
As he landed his crew with care;
Supporting each man on the top of the tide
By a finger entwined in his hair.
“Just the place for a Snark! I have said it twice:
That alone should encourage the crew.
Just the place for a Snark! I have said it thrice:
What I tell you three times is true.”
Arguing about a non-greenhouse earth earlier, some have pointed out the average temperature of the moon, about 250 K. Note that the temperature over a lunar day swings over a range on the order of 100 to 300 K, as opposed to earth’s diurnal fluctuation of more like 10 K.
Even if average temperatures on earth went up by 4.5 K, about 2/3 of that would be increased nighttime and winter temperatures, making the planet much cozier for us creatures who originated in the tropics – and for life in general.

davidmhoffer
January 1, 2012 9:29 am

Joel Shore;
I don’t see the place where R. Gates has made the mistake that I pointed out that you made. Could you direct me to it?>>>
Sure, happy to oblige. Since you object to my extrapolation from 280 ppm because the logarithmic function breaks down at concentrations approaching zero, then by the same logic R. Gates cannot extrapolate any sensitivity calculations based on any assumptions about the net effect of CO2 at current concentrations, for the exact same reason. Since he doesn’t know how much of his 6 degrees is a product of CO2 in the non-logarithmic range, he cannot draw any conclusions about the remaining effect in the logarithmic range.

Bill Illis
January 1, 2012 9:40 am

R. Gates says:
December 31, 2011 at 3:39 pm
… such that the 25% conservative estimate of the contribution to the 33C of warming does not even begin to indicate the full measure and value of the stability that CO2 brings to temperatures from its non-condensing nature.
————————————-
I seem to be picking on you lately but it is not on purpose.
CO2 might be non-condensing, but it cycles through the atmosphere on the order of just 4 to 5 years. Its residence time is just 4 or 5 years.
Every 4.5 years or so, CO2 gets converted into C6H12O6 + 6O2 (vegetation) and H2CO3 (ocean), amongst hundreds of other molecules.
There is a balance of CO2 in the atmosphere, but it can turn around and get cut in half in short order by just more rainfall or more ice-sheet formation.
So let’s not have any more of this “CO2 controls 90% of the water vapour”. It could easily be the other way around.

wayne
January 1, 2012 10:22 am

Joe Born says:
January 1, 2012 at 6:39 am
Joe, I do apologize. I missed the word ‘not’ and totally misunderstood you. Guess I’ve been beaten over the head this week by some who like to make me always sound wrong, or maybe it’s just too little sleep. So you did get your code close enough to see it was correct – great. I do want to reply in more depth to your further comment, but I’ll have to get back later this eve. Just had to stop and write you this at the moment. Oh, check Dr. Spencer’s site and skip down to Dr. Nikolov’s statements on NASA’s new lunar-orbiter data that apparently backs his equations (1) and (2). I’m like you, still trying to understand some of the deeper portions.

Joel Shore
January 1, 2012 10:23 am

Alan D McIntyre says:

With an average temperature of 288 K, the earth would radiate about 390.7 watts per square meter. Supposedly a doubling of CO2 would increase that by an additional 3.8 watts, which would raise the temperature to 288 * (394.5/390.7)^0.25 = 288.7 K, an increase of 0.7 K – hardly catastrophic. The assumed 3 K warming is all smoke and mirrors.

You have omitted feedbacks…and done the no-feedback calculation incorrectly by looking at the surface energy balance (which is hard to work with because much more than radiation affects it) rather than the top-of-the-atmosphere energy balance.

You have asserted that Monckton is wrong, but you haven’t DEMONSTRATED anything.
I think you’re using Lewis Carroll logic.

I have provided detailed explanations and a very nice analogy here: http://wattsupwiththat.com/2011/12/30/feedback-about-feedbacks-and-suchlike-fooleries/#comment-848206 and http://wattsupwiththat.com/2011/12/30/feedback-about-feedbacks-and-suchlike-fooleries/#comment-848211
Nobody has made any coherent challenge to them because they are clearly correct.

January 1, 2012 2:43 pm

With regard to Gates, davidmhoffer says: “That would mean that if 280 ppm = 6 degrees, then 400 ppm which is close to what we have now would result in another +3 (CO2 being logarithmic, plus 40% is about a 50% increase). That in turn would be about 6 degrees for doubling of CO2, TWICE as much as you claim with your utterly ridiculous circular logic based on artificial models being averaged with totally discredited paleo records.”
Well said, as I was just conjuring the same thoughts while looking at the logarithmic heating effect of CO2 and CO2 at an atmospheric concentration of 0.04%. How so little can be so dynamic. Amazing little gaseous compound CO2 is.
http://wattsupwiththat.files.wordpress.com/2010/03/heating_effect_of_co2.png
Monckton, is of course, doing the right thing by and as you so did yourself in vetting out the nonsense.

Alan D McIntire
January 1, 2012 3:31 pm

Joel Shore says:
January 1, 2012 at 10:23 am
……
“You have omitted feedbacks…and done the no-feedback calculation incorrectly by looking at the surface energy balance (which is hard to work with because much more than radiation affects it) rather than the top-of-the-atmosphere energy balance.”
Yes, I omitted feedbacks. The original effect is small – about 0.7 K, as Monkton asserted, based on simple calculations. It’s up to you to DEMONSTRATE that there will be a positive feedback of a factor of 4 or greater – to increase temperatures by 3K as you assert. Over the aeons of earth’s history, feedback has generally been negative – else we wouldn’t have had oceans lasting for aeons.
As to surface energy balance see
http://www.geo.utexas.edu/courses/387H/Lectures/chap2.pdf
specifically sections 2.3 through 2.5 and figure 2.12
Ultimately, the amount of radiation leaving the earth has to equal the radiation reaching the earth for the situation to be in equilibrium. If an instant doubling of CO2 were to decrease the outgoing wattage from 240 to 236.2 watts, the earth would warm up until the outgoing wattage reached 240 again. From figure 2.12, you’ll see that once the earth is in balance again, the surface flux would also have increased by 3.8 watts (or less, since CO2 is not equally transparent at all frequencies). My assumed 3.8 watt increase for the surface was actually CONSERVATIVE.
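The gap between the two approaches can be put in numbers. A sketch of the linearized, Planck-only response to the same assumed 3.8 W/m2 forcing, evaluated once at the surface temperature and once at the effective emission temperature (the latter being the textbook no-feedback calculation):

sigma = 5.67e-8
dF = 3.8            # assumed forcing from a CO2 doubling, W/m^2

T_surface = 288.0   # evaluate at the surface, as in the comment above
T_emission = 255.0  # evaluate at the effective emission level

dT_surface = dF / (4.0 * sigma * T_surface ** 3)    # about 0.7 K
dT_emission = dF / (4.0 * sigma * T_emission ** 3)  # about 1.0 K
print(dT_surface, dT_emission)

Neither number includes any feedbacks; the disagreement here is over which baseline the 3.8 W/m2 should be applied to, and then, separately, over what the feedbacks do.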
You continue to misspell my name- so much for your powers of observation.

January 1, 2012 4:01 pm

Let it be made plain again: Kiehl & Trenberth’s paper of 1997, from which I took the radiative forcing interval for the presence as opposed to absence of all greenhouse gases, is not, repeat not, talking about feedbacks. The system forcing interval 101[86-125] Watts per square meter represents radiative forcings only, and does not include feedbacks. From this interval the system sensitivity follows, as explained in my posts. The troll who persists in asserting – without evidence or citation – that his inaccurate representation of Kiehl & Trenberth’s paper is correct and that I am not taking into account feedbacks that he wishes were a major constituent of the system forcing interval, and who draws futile analogies with the activities of Bill Gates, is not contributing constructively, as the generally unpleasant, sneering tone of his posts indicates.
In the opposite direction, I also have concerns about the so-called “slayers” who have long tried to pretend that there is no such thing as the greenhouse effect. One of the “slayers”, in a comment on this thread, has cited an experiment by Dr. Nasif Nahle. However, that experiment was not fit for its purpose, since the equipment used was inappropriate. Also, I have reason to know that Dr. Nahle has himself recently realized that the “slayers” are not pursuing honest or sensible science and has distanced himself from them.
The “slayers” do great harm to the objective of bringing the world’s ruling elite to its senses, because they make it easy for the climate-extremist faction to maintain that “skeptics” as a whole do not even believe in the greenhouse effect. Indeed, anyone wanting to discredit the questioners of the official storyline could hardly do better than to set up a spoof group to manufacture a bogus notion that there is no greenhouse effect, precisely so as to allow the alarmists to tar us all with the “slayers'” ludicrous brush.
To the “slayers”, and to Dr. Nikolov’s separate group, I say this. The warming effect of increasing the concentration of a greenhouse gas in an atmosphere such as that of the Earth was well established by experiment in 1859. The experiment has been frequently repeated since then. Unless and until a convincing contrary experiment be done, the matter is not in doubt.
Therefore, however superficially attractive it may be to invent new notions of how the atmosphere keeps the Earth warm, any paradigm that assumes – as Dr. Nikolov’s team do – that the temperature at the surface of an astronomical body possessing an atmosphere is dependent solely upon its pressure and density is overlooking the measured and readily-verifiable fact that adding a very small quantum of CO2 to the atmosphere will cause a far larger warming than the resultant minuscule alteration in atmospheric density and pressure would lead them to expect.
My recommendation is that one should either stick to established science, as I try in my fumbling, layman’s way to do, or publish in the peer-reviewed literature a proper, scientifically-credible refutation of the established science – established not by mere modeling but by a straightforward and well-grounded experiment.
Using the IPCC’s own data and methods, it is not difficult to establish that climate sensitivity is low. One can do it by numerous methods, of which these postings have outlined two: the equilibrium system sensitivity and the transient industrial-era sensitivity, which are near-identical, suggesting that temperature feedbacks may be net-zero or thereby.
Fortunately, some of the world’s leading politicians and bureaucrats are beginning to listen rather carefully now to the low-sensitivity case, not least because it can be established without complex and increasingly incomprehensible computer models, and because it is doing a better job of predicting future climate than the extremists’ models, and because the pseudo-scientific attacks on it by the usual suspects are so in-your-face intellectually dishonest. The trolls who have tried to play a shoddy public-relations game by making up false challenges to the low-sensitivity case should realize, therefore, that such attacks based on bogus science are no longer impressive to the world’s decision-makers, who are increasingly listening to the low-sensitivity case with attention and respect. The billions spent on public relations by watermelon groups and governments are no longer convincing anyone. If the climate extremists want anyone serious to heed them, they should keep their arguments scientific and abandon the childishly ad-hominem tone that marks out their contributions for what they are – expressions of sheer desperation as their miserable house of cards collapses around them.

Joel Shore
January 1, 2012 5:36 pm

Alan D McIntire said:

Yes, I omitted feedbacks.

Then your calculation is not relevant to the issue at hand.

You continue to misspell my name- so much for your powers of observation.

Sorry about the misspelling…but snide remarks like this about what it does or does not indicate might carry a little more weight if you hadn’t misspelled Monckton’s name in the very same comment.

Joel Shore
January 1, 2012 5:53 pm

Monckton of Brenchley says:

Let it be made plain again: Kiehl & Trenberth’s paper of 1997, from which I took the radiative forcing interval for the presence as opposed to absence of all greenhouse gases, is not, repeat not, talking about feedbacks. The system forcing interval 101[86-125] Watts per square meter represents radiative forcings only, and does not include feedbacks. From this interval the system sensitivity follows, as explained in my posts.

It continues to amaze me that someone who considers himself qualified to pronounce that the scientists who write the IPCC report are wrong about feedbacks has so little basic understanding of what a forcing or a feedback is that he would make such a nonsensical claim as the one you are making. Anyone who actually understood what forcing and feedback mean would know that they are defined by the context of the situation. Whether water vapor is considered a feedback or a forcing depends on whether the change in water vapor in the atmosphere comes about by some direct change or whether it comes about as a result of some other change, such as a change in non-condensable greenhouse gases that then leads to a change in temperature.
The whole question of relevance for the water vapor feedback is how much of the water vapor that is in the atmosphere is due to the other greenhouse gases in the atmosphere (and the resulting warming that they cause) and how much would be there anyway. You are assuming that it would all be there anyway.
The Kiehl and Trenberth paper is not addressing this question…They are merely calculating the total effect of all the greenhouse gases that are present in the atmosphere without regard to how they got there. The paper by Lacis et al addresses, using modeling, the question of what happens if you remove the non-condensable greenhouse gases from the atmosphere and concludes that most of the water vapor ends up getting removed too.
If you are really incapable of understanding this stuff, then read my analogy, which presents it in terms that even a non-scientist can understand. And, for heaven’s sake, refrain from commenting about feedbacks until you understand the very basics of what they are.

Joel Shore
January 1, 2012 6:09 pm

Monckton of Brenchley says:

Using the IPCC’s own data and methods, it is not difficult to establish that climate sensitivity is low. One can do it by numerous methods, of which these postings have outlined two: the equilibrium system sensitivity and the transient industrial-era sensitivity, which are near-identical, suggesting that temperature feedbacks may be net-zero or thereby.

Anybody can take data and methods and use them incorrectly to produce results that happen to be more in line with what they would like to think the truth is. The actual facts are these:
(1) Your “equilibrium system sensitivity” is simply wrong because you don’t understand the distinction between forcing and feedback and how it depends on context. In particular, you are assuming no feedbacks in computing your sensitivity because you are considering the radiative effect of all the water vapor that is in the atmosphere, without determining to what extent it is there only because of the additional warmth produced by the non-condensable greenhouse gases and to what extent it would be there anyway. It is trivially easy to see how it is wrong with an analogy as I have presented: http://wattsupwiththat.com/2011/12/30/feedback-about-feedbacks-and-suchlike-fooleries/#comment-848211
(2) The actual fact is that the industrial era data can be used to support a high sensitivity or a low sensitivity depending on assumptions made, even staying with assumptions that are reasonable. IN OTHER WORDS, THE INDUSTRIAL ERA DATA DOES NOT PROVIDE A TIGHT CONSTRAINT ON THE CLIMATE SENSITIVITY. This is why this piece of empirical data is used in concert with other empirical data in order to establish the best constraints on climate sensitivity (and, admittedly, even with all the data together the constraints are not as tight as one would ideally like them to be…but they are better).
P.S. – I agree with you about the “Slayers” and how they are hurting your cause more than helping it, at least among scientists. I think, however, that the Slayers may well have made the calculation that this is worth doing because they are not trying to appeal to scientists, but rather to the public. Your refusal to budge on your clearly wrong notions about point (1) above makes me wonder how seriously you are really trying to win over knowledgeable scientists… Your argument may fool a few more people than the Slayers’ arguments do, but ultimately it will fool nobody who really understands forcings and feedbacks.

January 1, 2012 6:21 pm

I applaud you, Lord Monckton, for focusing strictly on using their own ammunition against them and staying within those boundaries. That is the only way exposés work in the real world. My most sincere congratulations on a job very well done.
Happy New Year to you and your kin,
Best,
J.

wayne
January 1, 2012 8:24 pm

Monckton of Brenchley says:
January 1, 2012 at 4:01 pm
I read your comment, twice, and see your points very clearly, but if I were going to be any real help to you, it would take me a while to compile my thoughts carefully. It would have to do with the various ‘flavors’ of people here on WUWT, and there are many. You seem to be speaking to WUWT as a whole entity, but you are going to find you will never get the expected response, as a whole – and I’m not talking of the counter-commenters – if you don’t also pay attention to who each commenter is and what they are here to accomplish, or whether they are just here for the argumentative entertainment. See my point? It took me the first year to come to that realization and the next to try to sort the personalities so I myself did not go stark raving mad! It made no sense.
If you want my help, I will not name names, but I just might help you to know why certain groups, even though they wholly support your efforts, see exactly where you are coming from and try to aid when possible, might seem against you in some respects if you are not aware of exactly where their efforts lie. A very few, like myself, have moved way past where you might think we are and are moving on into the future, past the low sensitivity – you have convinced me, for that is where it is: low, maybe even so small as to be ignorable – and I am personally trying to answer the next layer of questions that already seem to present themselves, as Dr. Nikolov seems also to do. But keep up with your efforts, for yours matter hugely more than mine in the actual “today”, with the politics the way they are.
Maybe that gives you a little insight when you see one of my comments: it is not counter to you; your point has already been accepted and I’ve moved down the scientific road. I’m not very good at either arguments or politics.
If you want any deeper thoughts on the matter, let me know – either a comment back here, or Anthony or the mods have my email address and I give permission for you to have it. And please excuse my terrible handling of words; that is one more of my weak points.

gbaikie
January 2, 2012 2:35 am

“A very few, like myself, have moved way past where you might think we are and are moving on into the future, past the low sensitivity – you have convinced me, for that is where it is: low, maybe even so small as to be ignorable – and I am personally trying to answer the next layer of questions that already seem to present themselves, as Dr. Nikolov seems also to do.”
Yes, I agree – trying to get answers. And it’s very messy. But I hope people are learning as much as I imagine I am learning. I claim I don’t have the answer. I also claim no one has the answer – if we did, there would be fewer serious arguments.

richard verney
January 2, 2012 3:59 am

I agree with Lord Monckton that, for tactical reasons, it is sensible to use IPCC data whenever possible, even if that data is questionable.
I consider that observational data establishes what Lord Monckton has to say about low sensitivity UNLESS there is a significant delay in the effects of net feedbacks being felt in the climate system. If net feedback takes place on a century scale (or even 30 to 50 years), we have not yet seen the effects of net feedback, and therefore the conclusions drawn from more recent observational data may not be sound and may be overstated.
I should state that I am one of those who are sceptical of the claims of the IPCC, and I consider it likely that net feedbacks to CO2 are neutral or even negative. I merely point out that there are questions of certainty surrounding the point being made by Lord Monckton.

Robert Brown
January 2, 2012 4:54 am

Please stop trashing compact fluorescent light bulbs. I completely agree windmills and solar panels for bulk power generation are completely useless and a major con, although not for many smaller-scale specialised applications.
Agreed, CFLs are lovely and save a bunch of money! However, note that solar panels will not remain any sort of con at all for very long (and in some places aren’t much of a con now). At the current rate of improvement in the technology, large-scale solar will break even, or win a bit, within the decade without any sort of subsidy or stimulus, and there is no good reason to think that this decade will be “it” as far as solar cell technology is concerned. Also note that the cost/benefit of solar cells doesn’t improve “smoothly” — it is a series of more or less discrete discoveries and subsequent improvements, smeared out by many things, in several completely distinct approaches.
To put it another way, within 20 years there will be major solar plants being built “everywhere” no matter what if capitalism continues to work the way it should. Probably within ten years, actually, for a smaller subset of everywhere. Higher fuel prices make it happen sooner; fuel price drops postpone it, but solar is coming unless cost-effective thermonuclear fusion happens first.
This is actually important because it illustrates another fallacy in the IPCC arguments — that all things will remain equal. They won’t. They never have. Witness (as a metaphor, if you like) the SETI program, looking for radio emissions from distant civilizations — initiated back at a time when the world was “bright” with high power radio emissions, with radio stations and television and so on and we thought distant civilizations must be like us, radiating like crazy. Then optical fiber came along and it was proposed that advanced civilizations might not radiate at all! And now cell phones have come along, and we’re radiating like crazy again, although in a very different (incoherent) way, so that we are probably still invisible to our own SETI search tools.
Similarly, while we will probably use fossil fuels to move vehicles unless/until somebody comes up with a cost-effective way of achieving similar energy densities some other way — it is difficult to compete with the energy content of a gallon of gasoline or diesel fuel — we will very likely stop using them to generate electricity most if not all of the time, not to save the Earth from Evil CO_2 but because they are more expensive than (better) alternatives. At some point we may even get to where synthesizing gasoline and/or fuel oils from plants is economically more sensible than mining fossil oil, and we’ll basically start recycling the carbon with an energy assist from the Sun. No matter; in fifty years people will look back on the “panic” of the present and smile at our own naivety as we failed to foresee that (fill in a dozen technologies, some of which may not even have been conceived yet) would make the use of fossil fuels moot long before they became a problem, if using fossil fuels could ever become a problem.
Personally I think oil is far too useful a substance to burn, and I dislike some aspects of the direct effect it has had on the history of the world (political, economic and military) as nations have used every possible means to ensure access to and control over oil as a resource. But it won’t matter what you, or I, or anyone else think of oil, or solar cells, or coal, or even wind politically — what matters is what is cost effective in a continually changing field of supporting technologies.
If thermonuclear fusion were perfected and made commercially viable by (say) 2020, is there any doubt that anthropogenic CO_2 production would (and probably will!) plunge to a fraction of its current value within at most another decade, and continue to plunge until it returns to at or close to pre-industrial levels? If (when!) solar cells drop to $0.25/watt (and improved means of storing daytime energy for nighttime use are developed), will not the exact same thing happen, without any help? If thorium/salt fission plants turn out to be as meltdown-proof and cost-effective as they appear likely to be (and eliminate most of the risk of nuclear proliferation in their implementation) will this not make a huge difference in the CO_2 output of at least the countries with vast thorium reserves (China and India come to mind, 1/3 of the world’s population)?
So perhaps the one technology to whack on is wind. Even wind is a useful energy source — witness the use of windmills for at least centuries to do work in areas difficult to supply energy in any other way AND where the wind is reliable. However, the wind isn’t terribly reliable in most locations. Probably the best thing to do with wind is leave it alone — don’t subsidize it, don’t encourage it, don’t discourage it. That way it can be implemented as an economic decision where it makes cost-benefit sense. The technology itself is pretty stable and unlikely to fundamentally change. The problem isn’t that wind isn’t useful, it is that wind isn’t useful everywhere, and isn’t worth additional cost (and lowered benefits) if there is no CAGW scare to drive it even in most places where the wind does blow with some degree of regularity.
rgb

January 2, 2012 4:57 am

Joel Shore continues to misrepresent Kiehl & Trenberth’s 1997 paper, in which radiative forcings only, and not temperature feedbacks, are considered. The paper is quite specific and I have already cited one of the relevant passages. The evidential value of the paper seems greater than that of Mr. Shore’s oft-repeated claims, which do not seem to be supported by evidence and are impolitely expressed.
Radiative forcings are changes in the net (down-minus-up) flux of radiation at the tropopause caused by some external perturbation of the climate, such as a change in insolation, a Milankovitch cycle, or an anthropogenic increase in greenhouse-gas concentrations.
Temperature feedbacks are also radiative forcings, but they are consequent upon a temperature change triggered by the initial forcing that perturbed the presumed pre-existing equilibrium of the climate system. Feedbacks are consequently expressed not in Watts per square meter simpliciter but in Watts per square meter per Kelvin of the initial warming that triggered them.
The forcings in Kiehl and Trenberth’s paper are plainly and repeatedly expressed in Watts per square meter simpliciter, not in Watts per square meter per Kelvin.
If Mr. Shore would like to gain a better understanding of the distinction between radiative forcings and temperature feedbacks, he may like to read the comprehensive pedagogical study by Roe (2009). Roe was a student of the great Professor Richard Lindzen.
The IPCC’s case for alarm rests solely upon the assumption that temperature feedbacks are strongly net-positive. Reliance upon this assumption has the effect of multiplying any initial forcing approximately threefold.
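For concreteness, the standard zero-dimensional bookkeeping behind that multiplication may be written (a sketch in the usual notation of the feedback literature, not a quotation from any particular paper) as:
\Delta T = \lambda_0 \Delta F / (1 - \lambda_0 c),
where \Delta F is the forcing in Watts per square meter, \lambda_0 is the no-feedback (Planck) sensitivity parameter in Kelvin per Watt per square meter, and c is the sum of the temperature-feedback coefficients in Watts per square meter per Kelvin. With c = 0 the relation collapses to \Delta T = \lambda_0 \Delta F; a feedback sum large enough that \lambda_0 c is about two-thirds multiplies the response roughly threefold; and as \lambda_0 c approaches unity the response grows without limit.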
However, a growing body of empirical evidence in the peer-reviewed literature suggests that the IPCC – which unwisely relies too heavily on purely numerical as opposed to empirical or theoretical methods – is simply wrong in its assumption that feedbacks are strongly net-positive. For powerful reasons explained in earlier posts here last year, it is near-impossible for the climate to remain stable in the presence of temperature feedbacks as strongly net-positive as the IPCC imagines.
No temperature feedback can be definitively measured by empirical methods. A fortiori, no feedback can be reliably determined by numerical methods either. Yet the IPCC’s entire case rests upon assuming the feedbacks are strongly net-positive (amplifying the original warming triggered by the base forcing), rather than net-zero or even somewhat net-negative. Negative feedbacks (whose history is given at the beginning of Roe’s paper, and it is a rattling good yarn) are consistent with the remarkably temperature-stable climate of the past 60 million years, during which global temperatures have not varied by more than 3% either side of the mean. Though even these small changes in temperature can bring us ice ages at one moment and a hothouse Earth the next, they are too small to be consistent with high climate sensitivity.
If Mr. Shore would like to grasp the reasons why temperature feedbacks in the Earth’s climate are very much more likely to be net-zero or even net-negative than net-positive, he may like to consult a process engineer, such as Dr. David Evans. It was in the context of electrical circuitry that the modern analysis of feedbacks began. The classic textbook is by Bode (1945): but be warned, the book requires a good working knowledge of applied mathematics, and is 551 pages long.

Stephen Wilde
January 2, 2012 6:46 am

Nikolov and Zeller suggest that there is no back radiation, just the temperature of the air above the surface. I agree and this is why.
We do not need the GHGs at all in order to set the surface temperature of the atmosphere.
Atmospheric pressure dictates the energy value of the latent heat of vaporisation so it is atmospheric pressure that dictates the rate at which energy can leave the oceans. The more it costs in terms of energy to achieve evaporation the warmer the oceans must become before equilibrium is reached.
So the oceans will build up to whatever temperature is permitted by atmospheric pressure with or without any GHGs in the air at all.
Once that ocean temperature is achieved the energy for the baseline temperature of the air above the surface is then supplied to the air by energy leaving the oceans and NOT by energy coming in from the sun and especially NOT by energy flowing down from above as so called back radiation.
So the upshot is that the oceans accumulate solar energy until they radiate 390 W/m2 at current atmospheric pressure; at that point 170 W/m2 continues to be added by the sun, but to balance the budget the atmosphere, by virtue of its density, retains whatever energy is required to achieve balance.
A feature of GHGs is that they add to the temperature of the air proportionately more than other gases in the atmosphere, but in the end it is surface pressure that controls the energy value of the latent heat of vaporisation, which is the ultimate arbiter of what rate of energy transfer can be achieved from oceans to air.
So if GHGs add a surplus over and above that required by surface pressure for equilibrium, then the system has to make an adjustment; but what it cannot do is alter the energy value of the latent heat of vaporisation in the absence of any change in atmospheric mass or pressure. So instead it is the rate of evaporation that must change to balance the budget in the absence of a significant change in surface pressure. Thus a change in the size or speed of the water cycle removes, in latent form, any excess energy produced as a result of GHGs.
There is no back radiation, merely a temperature for the atmosphere just above the surface and it is wholly pressure dependent. That temperature is a consequence not of downward atmospheric scattering of outgoing longwave but simply a consequence of atmospheric density slowing down energy loss first from sea to air and then by separate mechanisms from the air above the sea surface to space.
So if one increases atmospheric pressure at the surface the amount of energy required to provoke evaporation at the sea surface rises and the equilibrium temperature of the whole system rises including the temperature of the air above the surface.
The opposite if one decreases atmospheric pressure at the surface.
We have been looking at back radiation from the wrong point of view. There is no such thing. What we see is simply the air temperature near the surface and it is pressure dependent and not GHG dependent.

Stephen Wilde
January 2, 2012 6:56 am

Wherever my previous post refers to GHGs I actually mean non-condensing GHGs.

Bill Illis
January 2, 2012 7:59 am

The feedbacks are only 2.0 W/m2/C anyway (including positive cloud feedback of close to 1.0 W/m2/C which is clearly in question now).
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-8-14.html
So, plug in the direct forcing from doubling of +3.7 W/m2 (might actually be +4.0 W/m2) and we get a direct temperature increase of about 1.0C at the tropopause. Okay, add 2.0 W/m2 of feedbacks and we get a rise of just 5.7 W/m2 to 6.0 W/m2 of total direct forcings and feedbacks.
That would result in an increase of temperatures at the tropopause of just +1.5C and, if the same wattage is applied to the surface, an increase in temperatures of just +1.0C.
– direct forcing +4.0 W/m2,
– feedbacks +2.0 W/m2
– Stefan Boltzmann says that results in just +1.0C (surface) to +1.5C (tropopause)
The math works backwards and forwards. The pro-AGW side likes to talk about feedbacks, but then they never check to see if that makes much difference. The IPCC says it doesn’t.

Robert Brown
January 2, 2012 8:21 am

I can’t speak for your program, but I will stand by mine for correctly computing the ‘mean effective radiative temperature’ of a massless gray body as a perfect radiator. Remember, there is no real temperature in such an example, for there is no mass. It takes mass even to define temperature. (But most climate scientists have no problem with it, and therefore they are all wrong, sorry.)
I’d like to chime in and support this statement, without necessarily endorsing the results of the computation (since I’d have to look at code and results directly to do that:-). Let’s just think about scaling for a moment. There are several equations involved here:
P = (4\pi R^2)\epsilon\sigma T^4
is the total power radiated from a sphere of radius R at uniform temperature T. \sigma is the Stefan-Boltzmann constant and can be ignored for the moment in a scaling discussion. \epsilon describes the emissivity of the body and is a constant of order unity (unity for a black body, less for a “grey” body, more generally still a function of wavelength and not a constant at all). Again, for scaling we will ignore \epsilon.
Now let’s assume that the temperature is not uniform. To make life simple, we will model a non-uniform temperature as a sphere with a uniform “hot side” at temperature T + dT and a “cold side” at uniform temperature T – dT. Half of the sphere will be hot, half cold. The spatial mean temperature, note well, is still T. Then:
P’ = (4\pi R^2)\epsilon\sigma ( 0.5(T + dT)^4 + 0.5(T - dT)^4 )
is the power radiated away now. We only care how this scales, so we: a) Do a binomial expansion of P’ to second order (the first order terms in dT cancel); and b) form the ratio P’/P to get:
P’/P = 1 + 6 (dT/T)^2
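Spelling out the binomial step, for anyone following along at home (plain algebra, nothing beyond what is already stated above):
(T + dT)^4 + (T - dT)^4 = 2 T^4 + 12 T^2 dT^2 + 2 dT^4   [the odd powers of dT cancel]
so that 0.5 (T + dT)^4 + 0.5 (T - dT)^4 = T^4 ( 1 + 6 (dT/T)^2 + (dT/T)^4 )
and hence P’/P = 1 + 6 (dT/T)^2 + (dT/T)^4 \approx 1 + 6 (dT/T)^2 for dT << T.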
This lets us make one observation and perform an estimate. The observation is that P’ is strictly larger than P — a non-uniform distribution of temperature on the sphere radiates energy away strictly faster than it is radiated away by a uniform sphere of the same radius with the same mean temperature. This is perfectly understandable — the fourth power of the hot side goes up much faster than the fourth power of the cold side goes down, never even mind that the cold side temperature is bounded from below at T_c = 0.
The estimate: dT/T \approx 0.03 for the Earth. This isn’t too important — it is an order of magnitude estimate, with T \approx 300K and dT \approx 10K. (0.03^2 = 0.0009 \approx 0.001, so that 6(0.03)^2 \approx 0.006.) Of course, if you use latitude instead of day/night side stratification for dT, it is much larger. Really, one should use both and integrate the real temperature distribution (snapshot) — or work even harder — but we’re just trying to get a feel for how things vary here, not produce a credible quantitative computation.
For the Earth to be in equilibrium, S/4 must equal P’ — as much heat as is incident must be radiated away. I’m not concerned with the model, only with the magnitude of the scaling ratio — 1375 * 0.006 = 8.25 W/m^2, divided by four, suggests that the fact that the temperature of the earth is not uniform increases the rate at which heat is lost (overall) by roughly 2 W/m^2. This is not a negligible amount in this game. It is even less negligible when one considers the difference not between mean daytime and mean nighttime temperatures but between equatorial and polar latitudes! There dT/T is more like 0.2, and the effect is far more pronounced!
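A quick numerical sanity check of that estimate (a minimal sketch in Python, using the round values quoted above; illustrative only, not a climate calculation):

# Minimal sketch: exact vs. approximate P'/P for a half-hot, half-cold sphere,
# and the extra radiative loss the inhomogeneity implies, scaled against S/4.
# Round illustrative numbers from the comment above, not a climate calculation.

T = 300.0     # mean temperature, K
dT = 10.0     # half the hot/cold split, K
S = 1375.0    # solar constant, W/m^2

exact = 0.5 * ((T + dT)**4 + (T - dT)**4) / T**4   # exact P'/P
approx = 1.0 + 6.0 * (dT / T)**2                   # second-order approximation

print(f"P'/P exact  = {exact:.5f}")
print(f"P'/P approx = {approx:.5f}")
print(f"extra loss  ~ {(exact - 1.0) * S / 4.0:.1f} W/m^2")

which is consistent with the roughly 2 W/m^2 figure above.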
The point is that as temperatures increase, the rate at which the Earth loses heat goes strictly up, all things being equal. Hot bodies lose heat (to radiation) much faster than cold bodies due to Stefan-Boltzmann’s T^4 straight up; then anything that increases the inhomogeneity of the temperature distribution around the (increased) mean tends to increase it further still. Note well that the former scales like:
P’/P = 1 + 4 dT/T + …
straight up! (This assumes T’ = T + dT, with dT << T the warming.) At the high end of the IPCC doom scale, a temperature increase of 5.6C is 5.6/280 \approx 0.02. That increases the rate of Stefan-Boltzmann radiative power loss by a fraction of 0.08, or nearly 10%. I would argue that this is absurd — there is basically no way in hell doubling CO_2 (to a concentration that is still < 0.1%) is going to alter the radiative energy balance of the Earth by 10%.
The beauty of considering P’/P in all of these discussions is that it loses all of the annoying (and often unknown!) factors such as \epsilon. All that they require is that \epsilon itself not vary in first order, faster than the relevant term in the scaling relation. They also give one a number of “sanity checks”. The sanity checks suggest that one simply cannot assume that the Earth is a ball at some uniform temperature without making important errors. They also suggest that changes of more than 1-2C around some geological-time mean temperature are nearly absurdly unlikely, given the fundamental T^4 in the Stefan-Boltzmann equation. Basically, given T = 288, every 1K increase in T corresponds to a 1.4% increase in total radiated power. If one wants a “smoking gun” to explain global temperature variation, it needs to be smoking at a level where net power is modulated at the same scale as the temperature in degrees Kelvin.
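The same kind of check for the uniform-warming scaling (again a minimal Python sketch of the two cases just discussed):

# Minimal sketch: fractional increase in T^4 emission for a uniform warming dT,
# exact versus the first-order 4*dT/T used above.

def frac_increase(T, dT):
    exact = ((T + dT) / T) ** 4 - 1.0
    first_order = 4.0 * dT / T
    return exact, first_order

for T, dT in [(288.0, 1.0), (280.0, 5.6)]:
    exact, approx = frac_increase(T, dT)
    print(f"T = {T:.0f} K, dT = {dT:.1f} K: exact {100 * exact:.2f}%, "
          f"first order {100 * approx:.2f}%")

which reproduces the 1.4% per Kelvin at T = 288 and the roughly 8% (call it nearly 10%) for a 5.6 K warming.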
Are there candidates for this sort of a gun? Sure. Albedo, for one. 1% changes in (absolute) albedo can modulate temperature by roughly 1K. An even better one is modulation of temperature distribution. If we learn anything from the decadal oscillations, it is that altering the way temperature is distributed on the surface of the planet has a profound and sometimes immediate effect on the net heating or cooling. This is especially true at the top of the troposphere. Alteration of greenhouse gas concentrations — especially water — has the right order of magnitude. Oceanic trapping and release and redistribution of heat is important — Europe isn’t cold, not because of CO_2, but because the Gulf Stream transports equatorial heat to warm it up! Interrupt the “global conveyor belt” and watch Europe freeze (and then North Asia freeze, and then North America freeze, and then…).
But best of all is a complex, nonlinear mix of all of the above! Albedo, global circulation (convection), Oceanic transport of heat, atmospheric water content, all change the way temperature is distributed (and hence lost to radiation) and all contribute, I’m quite certain, in nontrivial ways to the average global temperature. When heat is concentrated in the tropics, T_h is higher (and T_c is lower) compared to T and the world cools faster. When heat is distributed (convected) to the poles, T_h is closer to T_c and the world cools overall more slowly, closer to a baseline blackbody. When daytime temperatures are much higher than nighttime temperatures, the world cools relatively quickly; when they are more the same it is closer to baseline black/grey body. When dayside albedo is high less power is absorbed in the first place, and net cooling occurs; when nightside albedo is high there is less night cooling, less temperature differential, and so on.
The point is that this is a complex problem, not a simple one. When anyone claims that it is simple, they are probably trying to sell you something. It isn’t a simple physics problem, and it is nearly certain that we don’t yet know how all of the physics is laid out. The really annoying thing about the entire climate debate is the presumption by everyone that the science is settled. It is not. It is not even close to being settled. We will still be learning important things about the climate a decade from now. Until all of the physics is known, and there are no more watt/m^2 scale surprises, we won’t be able to build an accurate model, and until we can build an accurate model on a geological time scale, we won’t be able to answer the one simple question that must be answered before we can even estimate AGW:
What is the temperature that it would be outside right now, if CO_2 were still at its pre-industrial level?
I don’t think we can begin to answer this question based on what we know right now. We can’t explain why the MWP happened (without CO_2 modulation). We can’t explain why the LIA happened (without CO_2 modulation). We can’t explain all of the other significant climate changes all the way back to the Holocene Optimum (much warmer than today) or the Younger Dryas (much colder than today) even in just the Holocene. We can’t explain why there are ice ages 90,000 years out of every 100,000, why it was much warmer 15 million years ago, why geological time hot and cold periods come along and last for millions to hundreds of millions of years. We don’t know when the Holocene will end, or why it will end when it ends, or how long it will take to go from warm to cold conditions. We are pretty sure the Sun has a lot to do with all of this but we don’t know how, or whether or not it involves more than just the Sun. We cannot predict solar state decades in advance, let alone centuries, and don’t do that well predicting it on a timescale of merely years in advance. We cannot predict when or how strong the decadal oscillations will occur. We don’t know when continental drift will alter e.g. oceanic or atmospheric circulation patterns “enough” for new modes to emerge (modes which could lead to abrupt and violent changes in climate all over the world).
Finally, we don’t know how to build a faithful global climate model, in part because we need answers to many of these questions before we can do so! Until we can, we’re just building nonlinear function fitters that do OK at interpolation, and are lousy at extrapolation.
rgb

Joel Shore
January 2, 2012 8:42 am

Bill Illis says:

The feedbacks are only 2.0 W/m2/C anyway (including positive cloud feedback of close to 1.0 W/m2/C which is clearly in question now).
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-8-14.html

The math works backwards and forwards. The pro-AGW side likes to talk about feedbacks, but then they never check to see if that makes much difference. The IPCC says it doesn’t.

Wow, Bill. The IPCC made a trivial math error and you are the first to notice it? How about the more likely explanation: You are the one who has made the math error because you don’t know what you are doing.
You have used the wrong equation for including the feedbacks: The feedbacks act not only on the original temperature rise due to increase in CO2 but also on the subsequent temperature rise due to the feedbacks themselves. You have to solve self-consistently.
What you have done is mathematically equivalent to saying that the sum of the infinite geometric series 1+ (2/3) + (4/9) + (8/27) + … = 1.67 because you just take the first two terms in the series. (The actual sum is 3.)
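For anyone who wants to check this for themselves, here is a minimal sketch (my own illustration in Python, not anything taken from a climate model) of the difference between truncating the series and summing it to convergence:

# Minimal sketch of summing feedbacks self-consistently: each round of warming
# induces a further fraction r of extra warming. Truncating after two terms
# understates the converged result 1/(1 - r).

r = 2.0 / 3.0           # illustrative feedback fraction from the example above
total, term = 0.0, 1.0
for n in range(1, 51):  # 50 terms is plenty for convergence here
    total += term
    term *= r
    if n in (2, 5, 50):
        print(f"after {n:2d} terms: {total:.3f}")

print(f"closed form 1/(1-r) = {1.0 / (1.0 - r):.3f}")

Truncating after two terms gives 1.67; the converged sum is 3, exactly as the closed form 1/(1 - r) says.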

tallbloke
January 2, 2012 8:47 am

Robert Brown, that is an outstanding contribution. With your permission I’d like to post that on my blog, where it is relevant to our discussion of Nikolov and Zeller’s Unified Theory of Climate.
Thanks

Joel Shore
January 2, 2012 9:07 am

Monckton of Brenchley says:

Radiative forcings are changes in the net (down-minus-up) flux of radiation at the tropopause caused by some external perturbation of the climate, such as a change in insolation, a Milankovitch cycle, or an anthropogenic increase in greenhouse-gas concentrations.
Temperature feedbacks are also radiative forcings, but they are consequent upon a temperature change triggered by the initial forcing that perturbed the presumed pre-existing equilibrium of the climate system. Feedbacks are consequently expressed not in Watts per square meter simpliciter but in Watts per square meter per Kelvin of the initial warming that triggered them.
The forcings in Kiehl and Trenberth’s paper are plainly and repeatedly expressed in Watts per square meter simpliciter, not in Watts per square meter per Kelvin.

You are still missing the basic point: Anything can be a feedback or a forcing, depending on the context, i.e., the particular “experiment” you are running on the Earth’s climate system. In the context of determining the net radiative contribution of each greenhouse gas to the total greenhouse effect, Kiehl and Trenberth have considered all of the gases to be forcings because they were just trying to answer the question, “What is the total radiative effect of all of the greenhouse gases and what amount is directly attributable to each greenhouse gas?”
However, if we now consider a specific experiment, such as removing all of the non-condensable greenhouse gases from the atmosphere and asking what happens, what will happen in reality is that the resulting temperature drop will cause much of the water vapor to also be removed from the atmosphere. Hence, you do not need to directly reduce the forcing by 86-125 W/m^2 in order to get most of the 33 K temperature change. You just need to take out most of the non-condensable greenhouse gases; this then causes the water vapor to come out, and you lose most of the radiative effect due to it as well.
This is, once again, analogous to the “Bill Gates feedback” that I discussed http://wattsupwiththat.com/2011/12/30/feedback-about-feedbacks-and-suchlike-fooleries/#comment-848211 . If you just look at the amount of money (“forcing”) needed to feed 5 million people, you would conclude that you need $500 million. So, the question becomes, does the public (“CO2”) need to contribute an amount (“forcing”) of $500 million. And, the answer is no: The public only needs to contribute $100 million to cause the feeding of 5 million hungry people to occur (“the temperature to rise by 33K”) because the Bill Gates “feedback” supplies the other $400 million.
You claim that the 86-125 W/m^2 somehow does not include the water vapor feedback but that is nonsense. What Kiehl and Trenberth have provided you with is the total radiative effect of the water vapor that is in the atmosphere. Without doing some serious calculations, you and they have no way of knowing what fraction of that water vapor (and hence of its radiative effect) would be there even if the non-condensable greenhouse gases weren’t present and how much is there only because the non-condensable greenhouse gases have warmed the atmosphere enough for the water vapor to be there.
The mistake you are making here is a mistake that is made by a lot of skeptics. For example, people have often said basically, “How is it possible to even predict that the temperature increases by 3 K when the forcing increases by 4 W/m^2 when the Stefan-Boltzmann Equation clearly shows that the increase due to such a radiative effect is only about 1.1 K?” And, the answer to this is that, while the original radiative effect of the CO2 is only 4 W/m^2, the total radiative effect of everything that changes, the clouds, the water vapor, the ice albedo, etc., would be close to 12 W/m^2…and hence the temperature rise necessary to balance this would be about 3 K.
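To make that arithmetic concrete, here is a minimal sketch (my own, linearizing the Stefan-Boltzmann law about an assumed effective emission temperature of roughly 255 K, a value the paragraph above does not itself specify):

# Minimal sketch of the linearized Stefan-Boltzmann arithmetic above:
# dT ~ dF / (4*sigma*T^3), evaluated at an assumed ~255 K emission temperature.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0            # assumed effective emission temperature, K

planck = 4.0 * SIGMA * T_EFF**3   # W/m^2 of extra emission per K of warming
for dF in (4.0, 12.0):            # forcing alone vs. forcing plus feedbacks
    print(f"dF = {dF:4.1f} W/m^2  ->  dT ~ {dF / planck:.1f} K")

which gives roughly 1.1 K for the 4 W/m^2 alone and a bit over 3 K when the total radiative effect is closer to 12 W/m^2.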

Joel Shore
January 2, 2012 9:13 am

Monckton of Brenchley says:

Negative feedbacks (whose history is given at the beginning of Roe’s paper, and it is a rattling good yarn) are consistent with the remarkably temperature-stable climate of the past 60 million years, during which global temperatures have not varied by more than 3% either side of the mean. Though even these small changes in temperature can bring us ice ages at one moment and a hothouse Earth the next, they are too small to be consistent with high climate sensitivity.

Not according to the scientists who actually study this ( http://www.sciencemag.org/content/306/5697/821.summary ):

Climate models and efforts to explain global temperature changes over the past century suggest that the average global temperature will rise by between 1.5º and 4.5ºC if the atmospheric CO2 concentration doubles. In their Perspective, Schrag and Alley look at records of past climate change, from the last ice age to millions of years ago, to determine whether this climate sensitivity is realistic. They conclude that the climate system is very sensitive to small perturbations and that the climate sensitivity may be even higher than suggested by models.

Stephen Wilde
January 2, 2012 9:42 am

“Are there candidates for this sort of a gun? Sure. Albedo, for one. 1% changes in (absolute) albedo can modulate temperature by roughly 1K. An even better one is modulation of temperature distribution. If we learn anything from the decadal oscillations, it is that altering the way temperature is distributed on the surface of the planet has a profound and sometimes immediate effect on the net heating or cooling. This is especially true at the top of the troposphere. Alteration of greenhouse gas concentrations — especially water — has the right order of magnitude. Oceanic trapping and release and redistribution of heat is important — Europe isn’t cold, not because of CO_2, but because the Gulf Stream transports equatorial heat to warm it up! Interrupt the “global conveyor belt” and watch Europe freeze (and then North Asia freeze, and then North America freeze, and then…).
But best of all is a complex, nonlinear mix of all of the above! Albedo, global circulation (convection), Oceanic transport of heat, atmospheric water content, all change the way temperature is distributed (and hence lost to radiation) and all contribute, I’m quite certain, in nontrivial ways to the average global temperature. ”
Hallelujah.
Get some idea as to how all those factors vary and how the system responds in order to work back towards the pressure dominated baseline temperature and then you have some idea as to how the system works.
The secret underlying it all is that the total system energy content varies little if at all unless global atmospheric pressure or solar input changes.
In the meantime one can achieve wide variations in the energy flow from surface to space or across the surface via changes in the REGIONAL surface air pressure distribution which is where climate changes come in.

Robert Brown
January 2, 2012 10:05 am

Sure, no problem. Hopefully I did the algebra all correctly — it’s a pain to do algebra at a keyboard when you can’t run latex on the result to look at it. But then, it is pretty simple algebra. I think that this is one of NZ’s most interesting points — I’m not convinced that “pressure” per se is responsible for heat trapping as there is a bit of question begging in there that confuses cause and effect (as noted in a post a few days ago) but I’m very firmly convinced that neglecting effects on the order of dT/T or (dT/T)^2 in BOTH spatial AND temporal averaging is a capital mistake. I also do think that there is very probably a pressure effect, but I’m guessing that it has more to do with convection rates than with ideal gas laws. In any event, the correct rules should involve temperature dependent bulk moduli, not PV = NkT per se, or if you prefer, Navier-Stokes solutions of chaotic complexity.
What I think that they got right is that there is a strong and largely ignored coupling between contact cooling of the surface via convection (including evaporation and conduction) and radiative cooling and trapping. Convection moves heat up through the bulk of the greenhouse gas column to where it can be efficiently radiated; I suspect that it is favorably driven as long as the surface is differentially warmed and cooled (as it is) to create those colorful, often wet, convection rolls called “the weather”. The weather and the wind are evidence that bulk transport of energy is rather important in the Earth’s dynamic energy balance. Not to mention the fact that northern Europe isn’t one big ice-pack. You’d think that people would learn from the fact that Scandinavia isn’t a big ball of ice (but Greenland pretty much is, at about the same latitude). Good old Gulf Stream. Then there is the hypothesis that the Younger Dryas was caused by the interruption of the Gulf Stream when a huge freshwater ice dam broke during the original warming phase of the Holocene.
Here’s another one. I make beer. One part of making beer is boiling the wort for some hours to reduce the fluid volume of the barley-sugar-water to the right specific gravity to ferment to the desired target alcohol level (and do things to proteins and sugars, and at the right point to bitter and flavor it with the hops). Big pot, lots of fluid, hot on the bottom, cool on the top (even before the boil). The otherwise reasonably clear liquid is full of little chunkies of coagulated proteins as well, so the liquid has a clearly visible “texture” that lets you see the movement of the fluid.
As any good fluid physicist should understand, the heating on the bottom relative to the top creates instability. Warm water is much less dense than cold water, from 4C right up to 100+ C (beer/syrup boils a bit over 100C). Conduction is slow. Radiation is very slow. Rather than heating the water in a stratified way, bottom to top, as the bottom water heats, it expands and experiences a buoyancy force from the denser cooler water above and around it. It is pushed up. Of course at the top there is nowhere to go (it’s in a pot, bound by gravity) so it just displaces the cooler water there, which sinks, is heated at the bottom, rises to the top, gives off its heat via evaporation and conduction and radiation, cools a bit, sinks, picks up more heat, iterate indefinitely.
But the rising and falling are not uniform. The wort creates convection rolls of warmer lower pressure lower density rising liquid and cooler higher pressure higher density falling liquid, heating at the bottom, transporting the heat to the top, giving it off there, and going back to the bottom for another load all while the liquid itself gradually warms. When the convection is obstructed, one can quickly build up a much higher temperature differential, and it actually takes MUCH longer to reach equilibrium — you can actually boil off all of the liquid on the bottom in local patches and scorch things in contact there because the bottom of the pan isn’t COOLED by the convection rolls.
In a pot, the convection rolls are clearly manifest — usually it goes up on one half of my pot and down on the other, unless I stir it or have it really perfectly aligned on the heat. On the earth, the same process occurs in a very irregularly shaped, heated, and cooled “pot”. Heat is dumped in from the sun, but in a constantly varying pattern as clouds move around reflecting a large fraction of incident energy from some areas and not others. It is differentially absorbed by the ground and the water. Some of that heat is differentially released immediately into the air (which is itself also directly warmed by the light that passes through it). Some causes evaporation of water, cooling the surface of land or water but carrying away absorbed heat into the air. Air, too, rises when warm and falls as it cools and this creates huge masses of air that are just as trapped as the beer in my pot, rising over here, falling over there, and running along the ground or upper troposphere in between in both vertical rolls and in horizontal cycles as well. This air all moves in a rotating frame that causes it to deflect as it moves, creating large scale patterns of circulation around low and high pressure systems. All of this is driven by thermal differentials that move energy around, carrying it from where there is a lot to where there is less as an approximate rule, carrying it from where it is relatively hot (down low) up to where it is relatively cool (above) as an approximate rule.
Radiation is what ultimately removes the energy picked up from the sun, but it is not all radiation from the solid ground that does it, nor is it all, or even mostly, CO_2 in dry air responsible for obstructing that heat transport. This is why, in the desert where the humidity is very low, on a quiet night it can freeze by morning where the temperature rises to close to 100F during the day. Not much of a “greenhouse” effect there, I’d say (and the best possible measure of true greenhouse effect cooling, one that is unfortunately not generally directly studied).
When the heat is transported up by convection, it goes through the greenhouse reflector. The stronger the greenhouse trapping, the greater the thermal pressure differential, the comparatively stronger the convective transport process becomes and the more efficient the cooling becomes. The “stratified” reflective blanket is penetrated by the cooling holes of convective rolls, by the active transport of heat from where it is trapped to where it is not. All of this favors smaller sensitivities.
If there is a real lesson in this, this is it. It is a simple principle of very elementary thermodynamics that all perturbations away from the simple radiative model of cooling obstructed by greenhouse gases will increase the cooling rate compared to the purely radiative baseline. It will do this because the greater temperature differentials are a source of free energy that is begging to do work. The system will nearly invariably self-organize to do work, and in the process reduce the temperature differential between the lower and upper troposphere. This, in turn, will increase the efficiency of the radiative cooling by lifting the heat to be lost up above the greenhouse blanket. I don’t know why this simple argument is ignored so often in climate studies when it is the source of the very instability that produces the convective rolls in my heating beer, the wind in my hair, the rain on my garden, and the seasons. Heat trapping is never improved by a heat-differential instability — that would just make the system even more unstable!
Hence the expectation that the net climate feedback should be negative, cooling the earth compared to what one might expect from “pure” CO_2 greenhouse trapping, even before one looks at its details. This is just the “fluctuation/dissipation theorem”, and the climate models that postulate egregiously high climate sensitivity are egregious because they violate it. Given the existence of multiple modes (e.g. radiation and convection) for non-equilibrium energy transport, blocking one will increase the rate of the others, not vice versa. Sometimes so well that it is difficult to see any effect of the blockage.
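For what it is worth, here is a toy two-channel sketch of that claim (my own construction in Python, with made-up illustrative numbers, not a model of the real atmosphere): a surface absorbing a fixed flux loses heat radiatively, throttled by an effective transmission factor, and convectively, with the convective flux assumed to scale as the 4/3 power of the surface-air temperature difference, as in free convection. Thickening the radiative “blanket” then warms the surface much less when the convective channel is allowed to respond.

# Toy two-channel sketch (illustrative numbers only): the surface absorbs F and
# loses heat (a) radiatively, throttled by an effective factor eps, and
# (b) convectively, assumed to scale as the 4/3 power of the surface-air
# temperature difference (free-convection-like assumption).

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
F = 240.0                # absorbed flux, W/m^2
TA = 255.0               # fixed temperature of the overlying layer, K

def surface_temp(eps, k_conv):
    """Solve F = eps*SIGMA*Ts^4 + k_conv*(Ts - TA)^(4/3) for Ts by bisection."""
    def net_loss_minus_input(ts):
        conv = k_conv * max(ts - TA, 0.0) ** (4.0 / 3.0)
        return eps * SIGMA * ts ** 4 + conv - F
    lo, hi = 150.0, 500.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if net_loss_minus_input(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for k_conv, label in [(0.0, "radiation only "), (1.0, "with convection")]:
    t_thin = surface_temp(0.65, k_conv)    # thinner radiative "blanket"
    t_thick = surface_temp(0.60, k_conv)   # thicker radiative "blanket"
    print(f"{label}: {t_thin:.1f} K -> {t_thick:.1f} K "
          f"(warming {t_thick - t_thin:.1f} K)")

With these numbers the radiation-only case warms by several Kelvin, while the case with convection warms by less than half as much, simply because the convective channel carries a larger share of the load as the temperature differential grows.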
This argument doesn’t really depend much upon water, but the evaporative cycle in general is a perfect example. It cools down low and warms up high, with a very, very few exceptions brought about by peculiar geography (e.g. Santa Ana winds) because in general the warmer moist air rises, gives off its heat, condenses as (cooler) water, and falls. Yes, the clouds and water vapor modulate albedo and greenhouse trapping, but most of this modulation is random compared to improved transport as the temperature differential increases. Again one expects the overall effect of water to be negative to neutral, not positive feedback, because it will in general consume free energy to move water around relative to static stratified models, energy seeking equilibrium with outer space at 3K far overhead, energy that wants to move vertically (on average) from the warm surface to the cold overhead and horizontally from hot places to cooler places.
rgb

davidmhoffer
January 2, 2012 10:48 am

Reposted from Ira Glickstein’s thread on N&Z as I think it is pertinent to this thread also:
Folks, as I watch this discussion I keep seeing people get lost in the details. Stand back and look at the big picture.
N&Z have provided a formula that appears to have predictive skill. One CANNOT falsify it by arguing the details! Sure radiative absorption and re-emission happens in a certain way. Sure convection happens in a certain way. Sure lapse rate works in a certain way.
So What?
If there is one thing we’ve learned over the last few years of the climate debate it is (or should be) that our understanding of the mechanisms and how they interact with one another is woefully incomplete. If we were anywhere near to understanding all the pieces of the puzzle and how they fit together, we’d have climate models with predictive skills coming out the yin yang. But the fact is we don’t.
I liken this discussion to being given a pail full of gravel and being asked to determine the weight of the gravel. I could thoroughly mix the gravel, extract a representative sample, weigh each rock, pebble and grain of sand, extrapolate the expected change in distribution of the rocks, pebbles, and sand from the top of the bucket to the bottom of the bucket based on known parameters for the settling of gravel over time, and from there arrive at an estimate of the weight of the gravel in the pail.
Or I could weigh the gravel and the pail, then pour the gravel out, and weigh the pail.
What N&Z are purporting to do is the latter. One cannot falsify their results by arguing about what the proper distribution of grains of sand is or how gravel does or does not settle when poured into a pail. The only way to determine if they are on to something is to weigh the gravel.
What they have said is that for a given TOA radiance, and a given mean surface atmospheric pressure, they can calculate the average surface temperature of a planet. They’ve even published their predictions for no less than EIGHT planetary bodies!
The only question we should be interested in at this point (it seems to me) is this:
Did they get the surface temps of those planetary bodies right or not?
If no, then their formulas are wrong.
If yes, then it seems to me there are only two possibilities.
1. Their formulas are correct, we just don’t know exactly WHY they are correct.
or
2. They successfully predicted the surface temps of 8 celestial bodies by coincidence.
If the latter, that’s one awfully big coincidence!
So, would it not make sense to dispense with the arguments about the life time of a photon in earth atmosphere, how convection changes with pressure, what absorption bands various gases have and just answer the question:
Did they nail the temps of those planetary bodies? Or not?

Joel Shore
January 2, 2012 11:34 am

Robert Brown says:

The system will nearly invariably self-organize to do work, and in the process reduce the temperature differential between the lower and upper troposphere. This, in turn, will increase the efficiency of the radiative cooling by lifting the heat to be lost up above the greenhouse blanket. I don’t know why this simple argument is ignored so often in climate studies when it is the source of the very instability that produces the convective rolls in my heating beer, the wind in my hair, the rain on my garden, and the seasons.

It is not ignored. As near as I can tell, what you are describing is called the “lapse rate feedback” and it is a negative feedback included in all of the climate models that comes about because, indeed, the higher altitudes of the troposphere are expected to warm a little more than the surface…and, since it is these high altitudes that are mainly important in determining how much temperature rise has to occur to re-establish energy balance, the surface warms less.
[Your description is vague enough that one could also interpret your argument as being that the greenhouse effect is smaller once convection comes into play than before it. That would be a statement that corresponds not so much to the lapse rate feedback but to the fact that the greenhouse effect is indeed of a smaller magnitude in models that include convection than in purely radiative analyses. Again, this is why all quantitative calculations of the effect include the effect of convection…i.e., it is what essentially does not allow the lapse rate to exceed the adiabatic lapse rate.]
However, there are other feedbacks that come into play that are positive. In particular, much of the same physics that controls the lapse rate feedback also controls the water vapor feedback, i.e., the fact that more water vapor in the atmosphere leads to more absorption of terrestrial radiation. Because these two processes are controlled by similar physics, it turns out that the spread amongst the climate models that is seen for the combined effect of these two feedbacks is much smaller than the spread for each individually. (I.e., models with a positive water vapor feedback that is larger in magnitude also tend to have a negative lapse rate feedback that is larger in magnitude, and models with a positive water vapor feedback that is smaller in magnitude also tend to have a negative lapse rate feedback that is smaller in magnitude.)
The fact that the net feedback due to the water vapor and lapse rate feedbacks is reasonably well-simulated by the models is confirmed by satellite data for fluctuations that occur (and for the long-term multidecadal trends too, although here the data is a little less trustworthy because of artifacts that can affect these trends). See, for example, http://www.sciencemag.org/content/310/5749/841.abstract

tallbloke
January 2, 2012 11:42 am

Robert Brown:
many thanks, the article is posted here:
http://tallbloke.wordpress.com/2012/01/02/robert-brown-what-we-dont-know-about-energy-flow/
Feel free to join the Jelbring thread too, that’s where the action is today. Hans Jelbring himself has joined in the discussion.

Joel Shore
January 2, 2012 11:46 am

davidmhoffer says:

1. Their formulas are correct, we just don’t know exactly WHY they are correct.

Well, given that their formulas are based on a calculated value T_gb that appears to have been calculated incorrectly, their formula is probably not really correct. (See http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/#comment-850853 , although the credit should go to cba for arguing that they didn’t even do the math correctly, as I originally thought they had.)

2. They successfully predicted the surface temps of 8 celestial bodies by coincidence.
If the latter, that’s one awfull big coincidence!

Not when they have six free parameters, it isn’t! They didn’t predict anything. They simply did an empirical fit to some data using lots of free parameters!

Old Mike
January 2, 2012 1:06 pm

Robert Brown,
A breath of fresh air; as a retired chemical engineer I think your big-picture sensitivity assessment is a great approach.
(As far as I’m concerned, anyone contemplating trying to model or understand the physics of the atmosphere should first be skilled in “Transport Phenomena”. If anyone is interested, the seminal text by Bird, Stewart and Lightfoot is the reference text; I understand the first edition is better than the second, which contains some errors that escaped proofing. “Transport Phenomena” was not taught until the 3rd year of my 4-year degree programme. It demands a solid grasp of the principles of heat transfer, mass transfer, thermodynamics, physical and reaction chemistry, and calculus.)
Old Mike

davidmhoffer
January 2, 2012 1:19 pm

Joel Shore;
Not when they have six free parameters, it isn’t! They didn’t predict anything. They simply did an empirical fit to some data using lots of free parameters!>>>
Equation 8. 2 parameters.

tallbloke
January 2, 2012 2:29 pm

Robert, in addition to Joel’s misapprehensions and his inability to see that the major feedbacks are, as you said, negative, you have some further criticism on Tamino’s blog to deal with, from the comment linked below onward, if you feel like enlightening them.
Watch it though, Grant Foster plays dirty with snipping comments.
http://tamino.wordpress.com/2011/12/10/oh-pleeze/#comment-58042

Dennis Ray Wingo
January 2, 2012 2:34 pm

Not according to the scientists who actually study this…
I go with Chris on this. If temperatures were so sensitive to changing conditions, we would either have gone into thermal runaway long ago (don’t bring up Venus, it is a red herring) or the Earth would have iced over long ago.
This is where the faint sun paradox comes in. The sun was demonstrably weaker 3 billion years ago, the atmosphere was not that different from today’s, the continents were configured wildly differently than today, and yet temperatures were still temperate. CO2, maybe, but CO2 was 10x what it is today, suggesting yet again that temperature sensitivities are less than what has been advertised.

January 2, 2012 2:43 pm

Finally, we don’t know how to build a faithful global climate model, in part because we need answers to many of these questions before we can do so! Until we can, we’re just building nonlinear function fitters that do OK at interpolation, and are lousy at extrapolation.
Excellent post, and thus the underlying basis for my interest in actually going out and doing detailed measurements of the system as it is, not doing more fruitless computer modeling, unless it is to set hypotheses that the experimentalists then go out and get the data for in order to test the hypothesis!

Joel Shore
January 2, 2012 2:51 pm

davidmhoffer says:

Equation 8. 2 parameters.

Try again: Equation (8) has only two parameters but all it relates is T_S and N_TE. Equation (7) relating N_TE to the surface pressure P_S has another 4 parameters. So, to relate the physically-measured quantities of T_S and P_S, you have 6 free parameters.

Joel Shore
January 2, 2012 3:26 pm

Dennis Ray Wingo says:

This is where the faint sun paradox comes in. The sun was demonstrably weaker 3 billion years ago, the atmosphere was not that different from today’s, the continents were configured wildly differently than today, and yet temperatures were still temperate. CO2, maybe, but CO2 was 10x what it is today, suggesting yet again that temperature sensitivities are less than what has been advertised.

I can’t even follow your logic here. On the one hand, you ask, “How could the climate possibly be so temperate when the sun was that much weaker?” And, then on the other hand, you claim solving this with CO2 only requires very low temperature sensitivities! So, which is it? The fact is that there are huge uncertainties associated with 3 billion years ago, both in terms of temperatures and CO2 levels…but, yes, the best current understanding would be that there are geochemical feedbacks that operate on long time scales and add some stability, namely that cold temperatures tend to lead to a buildup of CO2 (because the processes that remove it from the atmosphere don’t operate when everything is frozen over) and likewise that these processes speed up when it warms. This would explain why the climate system is seen to be quite sensitive to perturbations on shorter timescales but that the climate has nonetheless remained within some quite broad limits (and it is unclear how broad, since there is still a lot of uncertainty about snowball/slushball events) over the long haul.

davidmhoffer
January 2, 2012 3:49 pm

Joel Shore says:
January 2, 2012 at 2:51 pm
davidmhoffer says:
Equation 8. 2 parameters.
Try again: Equation (8) has only two parameters but all it relates is T_S and N_TE. Equation (7) relating N_TE to the surface pressure P_S has another 4 parameters. So, to relate the physically-measured quantities of T_S and P_S, you have 6 free parameters>>>
8 is derived from 7, not the other way around.

tallbloke
January 2, 2012 4:05 pm

Joel: See Ned Nikolov’s reply to Alan McIntire on the other thread. Those parameters are not as free as you would love them to be.
“They simply did an empirical fit to some data using lots of free parameters!”
And found that the curve fitted to all the solar system bodies they tested comes out very similar in shape to the temperature / potential-temperature ratio as a function of atmospheric pressure, according to the Poisson formula based on the gas law.
Heh.
Wriggle wriggle.

Joel Shore
January 2, 2012 4:34 pm

tallbloke: I don’t see what you are talking about regarding Nikolov’s reply. And, since they admitted flat-out that they fit the 4 parameters in Eq. (7), I don’t see how those parameters are anything but very free. [It seems that one of the parameters in Eq. (8) is not really free because it represents the 2.7 K background, so maybe they only have 5 free parameters.]
And, the fact is that it is only vaguely similar in shape to the temperature / potential temperature ratio. Heck, that ratio goes to zero as pressure goes to zero while their ratio goes to 1.
Basically, this is a Rorschach test: Anybody who can ignore the wealth of empirical evidence that shows the surface temperature is elevated by the greenhouse effect, embracing instead a “theory” that doesn’t even satisfy conservation of energy, is simply demonstrating what pathetic extremes people can go to when their ideology drives them to reject science in favor of nonsense! It really exposes how much of the “skeptic” movement is not about science at all…It is about rationalizing what one’s ideology dictates one wants to believe.
Like I said, Young Earth creationists really don’t stack up all that badly against believers in Nikolov’s “theory”.

wayne
January 2, 2012 8:01 pm

Robert Brown :
January 2, 2012 at 8:21 am
Dr Brown, I feel I must at least write a temporary comment here at this very moment. I have been digging through your math and its implications ever since I read your comment in response to my paragraph, and I have been trying to write a long response back. I see so much underneath your equations.
That is an impressive mathematical breakdown of that so-simple paragraph. Thanks for the jump to the differentials. But you know, I think I understand why it cued you. That really is one of the very main, if not THE main (though incorrect), parameters that the justification of all of this radiative work in climate science seems to stand upon. It seems so un-physical.
When I can gather all of my thoughts and put them in proper words (I wish!), I’ll write back here. I’ve got quite a bit to say, and hopefully I’ll get some assurance that someone else sees where I am coming from on this subject.
Will read any responses over at tallbloke’s TalkShop.

wayne
January 2, 2012 8:57 pm

davidmhoffer says:
January 2, 2012 at 10:48 am

N&Z have provided a formula that appears to have predictive skill. One CANNOT falsify it by arguing the details! Sure radiative absorption and re-emission happens in a certain way. Sure convection happens in a certain way. Sure lapse rate works in a certain way.
So What?
If there is one thing we’ve learned over the last few years of the climate debate it is (or should be) that our understanding of the mechanisms and how they interact with one another is woefully incomplete. …
>>>>>>>>>>>>>>>>>>>>
Oh man, oh man, David!! I’ve been waiting a good year for you to say that. You saw me post on your site that I was going to fight the physics battle, right? Well, your statement above hits it right on the head.
I’m not good at getting the concepts through to others, but now you seem to be seeing it. Just read Dr Brown’s two long comments above, twice, for he has explained so well so many of the points I’ve been making in such a piecemeal manner spread across so many posts and comments. His ability to put it all together in just two comments is phenomenal to me. Now we are getting somewhere, maybe, if these thoughts can stay together and not be torn apart in the natural flow of four posts a day. (I call that flow ‘blog hell’.)

tallbloke
January 3, 2012 1:41 am

Joel Shore says:
January 2, 2012 at 4:34 pm
Basically, this is a Rorschach test: Anybody who can ignore the wealth of empirical evidence that shows the surface temperature is elevated by the greenhouse effect, embracing instead a “theory” that doesn’t even satisfy conservation of energy,

I’ve answered your misconception of what Nikolov and Zeller are saying on the UTC thread here:
http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/#comment-851393
Like I said, Young Earth creationists really don’t stack up all that badly against believers in Nikolov’s “theory”.
Repeating your stupidity doesn’t make you appear any cleverer.

gbaikie
January 3, 2012 3:13 am

“Pressure by itself is not a source of energy! Instead, it enhances (amplifies) the energy supplied by an external source such as the Sun through density-dependent rates of molecular collision. This relative enhancement only manifests as an actual energy in the presence of external heating.”
http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/#comment-851393
Why so complicated? With higher density at the surface, you improve convection.
This means the ground will be cooler and the air will be warmer.
Normally, in sunlight, the ground is much warmer than the air above it.
Normally, if you stop convection (in a greenhouse or inside a car) you get warmer air.
A greenhouse, or the inside of a car, prevents heat from going into the atmosphere [warming it].
So normally grass, dirt and sand in sunlight are warmer than the air above them; with more air density the surface will be cooled more [more energy will be removed from it].
If the surface were the same temperature as the surface air in sunlight, you would have a very high air temperature. A black roof in summer can be 180 F. If you stop convection it could be slightly hotter, maybe 200 F.
If you had better convection it might instead reach 170 F, with an air temperature of 160 F, higher than the hottest recorded air temperature.
“The biggest scorcher ever noted was on September 13, 1922, in El Azizia (also known as Al ‘Aziziyah), Libya, when the mercury hit 136 degrees Fahrenheit. El Azizia is near the Sahara desert …
California’s Death Valley had the second-highest temperature. This desert area hit 134 degrees Fahrenheit in 1913”
If we had less air density, the ground would get slightly warmer (less convection), and the air temperature would be lower, because less energy would be transported to the air.
What we are talking about is hardly unknown. But it's natural. Like water vapor, it's not unknown, it's simply ignored. And instead we have this obsession with CO2, a minor warming effect [or even possibly a net cooling effect].

Eli Rabett
January 3, 2012 6:21 am

The statement that “The point is that as temperatures increase, the rate at which the Earth loses heat goes strictly up, all things being equal.” is where the good Doctor Brown goes GIGO. It is correct that the rate at which the Earth’s SURFACE loses heat goes strictly up, but the surface is NOT where most of the thermal IR is emitted to space.
That is rather high up in the atmosphere, which can be seen by comparing the emission to space with the Planck distribution of thermal emission from the surface (here for example, but there are plenty of accurate measurements and models). The point at which the emission curve matches a blackbody curve tells you the temperature of the effective altitude from which emission to space is occurring. Raising the greenhouse gas concentration moves the level from which emission to space occurs to a colder altitude, and thus one where emission is slower. To make up for that, the surface has to warm in order to push more energy through the open window directly into space.
Arthur Smith handled the surface temperature distribution issue in some detail. To cut a long story there short, assumption of a uniform surface temperature UNDERESTIMATES the effect of greenhouse gases on surface temperature. To return to the first paragraph, DECREASING the temperature at which the Earth’s atmosphere radiates to space by increasing greenhouse gases, requires that the surface temperature INCREASE in order to restore radiative balance.
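The effective-emission-level point can be checked on the back of an envelope. A minimal sketch in Python, using standard round numbers (about 239 W/m^2 of outgoing longwave and a mean tropospheric lapse rate of about 6.5 K/km, neither of which is quoted in this thread):

# Sketch: effective emission temperature and the altitude it corresponds to.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
OLR = 239.0           # outgoing longwave radiation, W m^-2 (assumed round number)
T_SURFACE = 288.0     # mean surface temperature, K
LAPSE_RATE = 6.5      # mean tropospheric lapse rate, K per km (assumed)

t_eff = (OLR / SIGMA) ** 0.25              # ~255 K: the temperature the planet radiates at
z_eff = (T_SURFACE - t_eff) / LAPSE_RATE   # ~5 km: roughly where that temperature is found

print(f"Effective emission temperature: {t_eff:.1f} K")
print(f"Implied mean emission altitude: {z_eff:.1f} km")

Moving the emission level to a colder altitude, as described above, lowers the emission until the whole profile (surface included) warms enough to restore the outgoing 239 W/m^2.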

Joel Shore
January 3, 2012 6:35 am

tallbloke says:

I’ve answered your misconception of what Nikolov and Zeller are saying on the UTC thread here:
http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/#comment-851393

And I have explained why your “answer” is not an answer here: I’ve answered your misconception of what Nikolov and Zeller are saying on the UTC thread here:
http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/#comment-851393

Repeating your stupidity doesn’t make you appear any cleverer.

I am not trying to be clever. I am just stating the obvious. Any reasonably-competent physical scientist reading the nonsense being written in support of Nikolov’s theory would conclude the same thing. The problem is that you guys here talk among yourselves and get a false notion that you are actually saying sensible things, as you can dismiss the few of us who are trying to explain why what you are saying is nonsense. (And, to be fair, those few of us include some AGW skeptics like Spencer, Monckton, Ira, and Eschenbach on this particular issue.)

Joel Shore
January 3, 2012 7:58 am

: Sorry…My previous post had the wrong link in it to my response. Here it is: http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/#comment-851644

gbaikie
January 3, 2012 8:39 am

It is correct that the rate at which the Earth’s SURFACE loses heat goes strictly up, but the surface is NOT where most of the thermal IR is emitted to space.
That is rather high up in the atmosphere, which can be seen by comparing the emission to space with the Planck distribution of thermal emission from the surface (here for example).
Your graph seems to be measuring mostly the amount of energy coming from the surface or reflected from clouds, and it seems the vast majority is reflected sunlight, since it has a Planck curve similar to sunlight's. There are huge bites out of this Planck curve. But there are also huge chunks of solar energy missing before the sunlight can reach the Earth's surface.
And one would expect the section of IR that results from all the solar energy that has been absorbed and later radiated to follow a Planck curve at the temperature of the Earth's surface, or the IR spectrum emitted by the greenhouse gases. And it seems the best time to measure this would be at night.
Is there any measurement of that IR spectrum that can tell where it is coming from, such as the top of the atmosphere or the surface? And how much energy is this, quantitatively?
Can I assume the amount of energy emitted from greenhouse gases into space would be similar to what is called DLR, or back radiation, which is radiated down to the Earth's surface? Has this been measured from high-altitude balloons or from space?

Brian
January 3, 2012 9:07 am

David Socrates says, December 30, 2011 at 11:57 pm : “I agree completely with the general point you are making. However you have made a significant error in your explanation.
The lapse rate is not created by a “fall in gravitational field” (which would be completely negligible) but by a fall in pressure with height due to the progressive reduction in the mass of atmosphere above.”
David,
One may think of it in terms of pressure and the weight of the atmosphere, but that doesn’t make things clearer (in my view). For the adiabatic lapse rate, a packet of gas higher in the atmosphere has a lower kinetic energy due to its conversion to gravitational potential energy. The loss of kinetic energy for the packet of randomly moving gas molecules is the loss of temperature. That’s why the temperature falls off linearly. The difference in temperature between some altitude and the surface is exactly equal to the amount of kinetic energy gained by an object (i.e., a molecule) in free fall. The other properties of the gas, such as lower density and pressure, follow directly from this gravitational effect of lower temperature.
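That kinetic-to-potential-energy bookkeeping can be written out in a few lines. A sketch only, in Python; it gives the dry adiabatic rate, g/c_p, and ignores moisture:

# Dry adiabatic lapse rate from the energy trade described above:
# c_p * dT = -g * dz  =>  dT/dz = -g / c_p
G = 9.81              # gravitational acceleration, m s^-2
CP_DRY_AIR = 1005.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1

lapse_rate = G / CP_DRY_AIR                 # K per metre
print(f"Dry adiabatic lapse rate: {lapse_rate * 1000:.1f} K/km")   # ~9.8 K/km

dz = 5000.0                                 # lift a dry parcel 5 km
print(f"Cooling over {dz / 1000:.0f} km: {lapse_rate * dz:.0f} K")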

Mikel Mariñelarena
January 3, 2012 11:10 am

Hi Joel,
I only did some elementary physics at high school and that was a long time ago, so let's see if I'm getting this right:
Lord Monckton says that, when we go from a world without GHGs to the one we have now, the effect of the corresponding ~101 W/m2, including forcings and feedbacks, is ~33K. Therefore, the effect of a doubling of CO2 (~3.7 W/m2), including forcing and feedbacks, would be ~1.2K.
You say that this is wrong because part of the ~101 W/m2 that we are adding to the atmosphere to get at the current situation is in fact a feedback. In other words, we could arrive at the same ~101 W/m2 by simply adding a fraction of the GHGs we now have (the non-condensable ones) and letting feedbacks operate to get the water vapour increase at its current level.
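Put as arithmetic, the two readings of the same 33 K look like this. A minimal sketch in Python; the 30% feedback share used for the second reading is a made-up placeholder, not a figure anyone in this thread has given:

# Two readings of the 33 K / 101 W/m^2 bookkeeping summarized above.
DT_GREENHOUSE = 33.0   # K, warming attributed to the presence of greenhouse gases
F_TOTAL = 101.0        # W/m^2, Kiehl & Trenberth total for the five main GHGs
F_2XCO2 = 3.7          # W/m^2, canonical forcing from a doubling of CO2

# Reading 1: treat the whole 101 W/m^2 as the forcing behind the 33 K.
lam1 = DT_GREENHOUSE / F_TOTAL
print(f"Reading 1: {lam1:.2f} K per W/m^2 -> {lam1 * F_2XCO2:.1f} K per doubling")

# Reading 2: suppose some fraction of the 101 W/m^2 is itself water-vapour
# feedback, so only the remainder counts as the base forcing behind the 33 K.
FEEDBACK_SHARE = 0.30  # placeholder assumption only
lam2 = DT_GREENHOUSE / (F_TOTAL * (1.0 - FEEDBACK_SHARE))
print(f"Reading 2: {lam2:.2f} K per W/m^2 -> {lam2 * F_2XCO2:.1f} K per doubling")

The whole argument seems to turn on how much of the 101 W/m^2 is counted as feedback rather than forcing.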
If my summary of the debate above is correct, I see merit in your argument (although you would do better without all that derogatory language) and would like to see how LM or others reply to it, put in these terms. Perhaps they have already done so but I haven't been able to grasp their logic.
In any case, I remain quite skeptical about the IPCC central value of the climate sensitivity. I do believe that the instrumental temperature record imposes quite a tight limit on this value, unless one decides to believe in a huge impact of aerosols that is simply not observed in reality. But that is another matter and I've found this discussion very illuminating.
Best regards,
Mikel

Joel Shore
January 3, 2012 12:21 pm

Mikel says:

If my summary of the debate above is correct, I see merit in your argument (although you would do better without all that derogatory language) and would like to see how LM or others reply to it, put in these terms.

Yes, you have that right. Sorry about what you perceive to be derogatory language, but it becomes rather frustrating when one presents an argument again and again, and is met with no serious argument in response but rather something to the effect of “Kiehl and Trenberth never use the word ‘forcings’ so therefore I am right”.

In any case, I remain quite skeptical about the IPCC central value of the climate sensitivity. I do believe that the instrumental temperature record imposes quite a tight limit on this value, unless one decides to believe in a huge impact of aerosols that is simply not observed in reality.

Unfortunately, not very tight at all, given the uncertainties in the aerosol forcing, the difference between transient and equilibrium sensitivity, and internal variability of the climate system … And also, not centered around low sensitivities if one does the calculations using the best estimates available and not confusing equilibrium and transient sensitivities, etc.
I think the instrumental temperature record does make very high sensitivities (much above the top of the IPCC likely range) seem pretty unlikely, given that they would seem to require a quite fortuitous cancellation between greenhouse and aerosol effects. Such cancellation is not impossible…but it is reasonable to believe that it is rather unlikely.

tallbloke
January 3, 2012 1:47 pm

Joel Shore says:
January 3, 2012 at 7:58 am
: Sorry…My previous post had the wrong link in it to my response. Here it is: http://wattsupwiththat.com/2011/12/29/unified-theory-of-climate/#comment-851644

Joel, that isn't a reply to what I wrote; it's just you repeating your belief back to yourself.
I’m giving up on you for now.

Joel Shore
January 3, 2012 4:43 pm

tallbloke says:

I’m giving up on you for now.

Fine with me! I'll tell you honestly: if I ever need to convince a colleague in physics that the AGW skeptic movement is based on pseudoscientific nonsense, I would be happy to give them the link to your website! If nothing else, they'll get a good laugh out of it!

Myrrh
January 3, 2012 5:37 pm

R. Gates says:
December 31, 2011 at 3:39 pm
____
I would simply direct those who question this estimate to the actual measurements and science, and a few excellent sources can be found at:
http://www.nature.com/nature/journal/v344/n6266/abs/344529a0.html
http://www.esrl.noaa.gov/gmd/aggi/
http://rsta.royalsocietypublishing.org/content/369/1943/1891.full
http://scienceofdoom.com/2009/11/28/co2-an-insignificant-trace-gas-part-one/
I stand quite solidly behind my generously conservative estimate of 25% of the 33C warming being related to CO2, and this didn't really even address the 15 micron issue (where earth's LW peaks right where CO2's LW is strongest, and water vapor is much weaker) nor the non-condensing nature of CO2 and its relative stability when compared to water vapor in the atmosphere. Quite simply, take away the CO2 from the atmosphere and we'd be back to an ice planet in fairly short order, such that the 25% conservative estimate of the contribution to the 33C of warming does not even begin to indicate the full measure and value of the stability that CO2 brings to temperatures from its non-condensing nature.
===========
What the heck is that supposed to mean? I've asked before; it's simply repeated in reply, or I'm sent on wild goose chases where it's repeated again or not even mentioned. Does anyone here have any idea of what this means?
I've just been reading Tyndall on Faraday, so I am even more peeved at seeing this repeated without any explanation of what it means, in whole or in any of its parts:
” such that the 25% conservative estimate of the contribution to the 33C of warming does not even begin to indicate the full measure and value of the stability that CO2 brings to temperatures from its non-condensing nature.”
What does it mean? Where is the measure and value of stability of non-condensing gases on temperatures given?

January 4, 2012 12:29 am

>>
Robert Brown says:
January 2, 2012 at 8:21 am
Are there candidates for this sort of a gun? Sure. Albedo, for one. 1% changes in (absolute) albedo can modulate temperature by roughly 1K.
<<
I agree. I modeled Kiehl and Trenberth 1997's energy model. The KT97 model belongs to a class of climate models known as zero-dimension models; they are also called planet-average climate models. If you allow this model to warm the surface by increasing GHGs, the atmosphere warms faster than the surface, at 120% to 160% of the surface rate depending on how you treat latent and sensible heat fluxes.
Unfortunately, the observed atmosphere is only warming at 70% to 90% of the surface rate.
You get the correct atmospheric warming if you just alter the planet albedo. That means that GHGs are playing no part in the current surface warming, at least according to my KT97-based model.
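For reference, the simplest member of that zero-dimension class reproduces Robert Brown's albedo point. This is only a bare sketch, not the KT97-based model described above; the solar constant is taken as a round 1366 W/m^2:

# Bare zero-dimensional (planet-average) energy balance: emission temperature
# as a function of albedo. Roughly 1% of absolute albedo is worth ~1 K.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1366.0        # solar constant, W m^-2 (assumed round number)

def effective_temperature(albedo):
    """Emission temperature of a planet absorbing (1 - albedo) * S0 / 4."""
    absorbed = (1.0 - albedo) * S0 / 4.0
    return (absorbed / SIGMA) ** 0.25

t30 = effective_temperature(0.30)
t29 = effective_temperature(0.29)
print(f"albedo 0.30 -> {t30:.1f} K")
print(f"albedo 0.29 -> {t29:.1f} K  (difference {t29 - t30:.2f} K)")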
Jim

Dave Wendt
January 4, 2012 11:14 am

Myrrh says:
January 3, 2012 at 5:37 pm
“What the heck is that supposed to mean?”
R.Gates says
“Quite simply- take away the CO2 from the atmosphere and we’d be back to an ice planet in fairly short order, such that the 25% conservative estimate of the contribution to the 33C of warming does not even begin to indicate the full measure and value of the stability that CO2 brings to temperatures from its non-condensing nature.”
That suggests to me that our boy Gates may finally be coming to his senses and recognizing the overwhelmingly positive contribution of CO2 to our planet. Who knows, with a little encouragement he may take off his Darth Vader helmet and return entirely to the Light side of the Force.

Joel Shore
January 4, 2012 12:20 pm

Dave Wendt says:

That suggests to me that our boy Gates may finally be coming to his senses and recognizing the overwhelmingly positive contribution of CO2 to our planet. Who knows, with a little encouragement he may take off his Darth Vader helmet and return entirely to the Light side of the Force.

Everybody, except for AGW skeptics, believes that CO2 serves a vital role in making the planet’s temperature habitable. And, in fact, it is the concepts learned from this that allow us to extrapolate to what is likely to happen when we increase CO2 levels further.
It is not as simple as CO2 = GOOD or CO2 = BAD. After all, water is vital for you to survive, but that does not mean you would want to be thrown into a 10 ft deep pool of it with a cement block tied to your feet. (And perhaps a better analogy is the people who have died during marathons by hydrating too much: http://www.succeedscaps.com/overhydration.html )

Smokey
January 4, 2012 12:39 pm

Joel Shore says:
“Everybody, except for AGW skeptics, believes that CO2 serves a vital role in making the planet’s temperature habitable.”
Fool. Does he really believe that?

cba
January 4, 2012 12:43 pm

“Christopher Monckton of Brenchley states:
Kiehl and Trenberth say that the interval of total forcing from the five main greenhouse gases is 101[86, 125] Watts per square meter. Since just about all temperature feedbacks since the dawn of the Earth have acted by now, the post-feedback or equilibrium system climate sensitivity parameter is 33 K divided by the forcing interval – namely 0.33[0.27, 0.39] Kelvin per Watt per square meter.

Consider this instead:
Surface T causes an average surface emission of 390 W/m^2
Incoming solar has about 239 W/m^2 absorbed into the Earth system (surface +atmosphere).
For balance the incoming must equal the outgoing and that means it is close to 239 W/m^2 actually escaping to space. Therefore we have 390-239 = 151 W/m^2 of actual blockage, not just 5 atmospheric gases with 101 W/m^2 per K&T.
This 151 W/m^2 includes the greenhouse gases, clouds, aerosols, particulates, etc.: everything in the atmosphere that is preventing that amount of surface radiation from escaping. This is totally compatible with the GHGs causing 101 W/m^2 of absorption.
Note that the 33 deg C rise is due to all factors, not just the 5 GHGs. The result is 33/151 = 0.218 deg C of rise per W/m^2 of increase. That is less than a simple radiative estimate, which must account for the fact that, for an extra 1 W/m^2 to escape to space from an increase in surface T, the surface has to emit more than 1 W/m^2, since less than 2/3 of surface emission escapes to space (239/390 = 61%). The radiative-only estimate comes out to 0.3 deg C of rise per W/m^2. This means that there is a net negative feedback present in the atmosphere.
Note too that while all existing feedbacks are accounted for, this does not include additional feedbacks due to a rise in temperature that increase the total W/m^2. However, at 0.218 deg C of rise per W/m^2, one can check the IPCC's prime feedback factor, H2O vapor, and determine the effect caused by a 0.2 deg C rise in temperature using the standard constant-relative-humidity assumption and the corresponding increase in H2O vapor. Suffice it to say its effect is far smaller than a whole 1 W/m^2, and its contribution to increased T is only a small fraction of 0.218 deg C.
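The arithmetic above can be replayed in a few lines of Python, taking the quoted round numbers at face value; the radiative-only figure is read here as the 288 K Planck response divided by the 61% escape fraction:

# Replay of the bookkeeping above.
SIGMA = 5.670e-8
F_SURFACE = 390.0    # W/m^2, average surface emission (~288 K blackbody)
F_SPACE = 239.0      # W/m^2, absorbed solar = outgoing at balance
DT = 33.0            # K, greenhouse elevation of the surface temperature
T_SURFACE = 288.0    # K

blockage = F_SURFACE - F_SPACE            # 151 W/m^2 held back by the whole atmosphere
slope_all = DT / blockage                 # 0.218 K per W/m^2
escape_fraction = F_SPACE / F_SURFACE     # ~0.61 of surface emission escapes

# Radiative-only estimate: Planck response of a 288 K surface, scaled up because
# the surface must emit ~1/0.61 W/m^2 extra for 1 W/m^2 extra to reach space.
planck = 1.0 / (4.0 * SIGMA * T_SURFACE ** 3)   # ~0.185 K per W/m^2 of surface emission
radiative_only = planck / escape_fraction       # ~0.30 K per W/m^2 escaping

print(f"blockage         = {blockage:.0f} W/m^2")
print(f"all-factor slope = {slope_all:.3f} K per W/m^2")
print(f"radiative-only   = {radiative_only:.2f} K per W/m^2")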

Joel Shore
January 4, 2012 1:56 pm

cba says:

Note too that while all existing feedbacks are accounted for, it does not include additional feedbacks due to a rise in temperature that increase the total W/m^2.

The only feedback that I know of that does not fall into this latter category is the lapse rate feedback, a negative feedback. Can you think of any other one?
So, really, a simpler way of putting it is that the only feedback that you have included as a feedback is the one known negative feedback. The positive ones (water vapor, ice albedo) and the one that is of unknown sign (clouds) are not included.

Joel Shore
January 4, 2012 2:01 pm

Smokey says:

“Everybody, except for AGW skeptics, believes that CO2 serves a vital role in making the planet’s temperature habitable.”
Fool. Does he really believe that?

So, you believe that? I thought you would believe that the effect of CO2 is too small to make the difference between the current state of the planet and the planet being significantly enough colder that it is essentially uninhabitable (or at least a very, very different place to inhabit)!

Tonyb
Editor
January 4, 2012 2:13 pm

Joel
My word, you are busy on the various threads the last few days. A happy new year to you
Tonyb

Joel Shore
January 4, 2012 4:27 pm

Hi, Tonyb. What can I say, the New Year, like a full moon, seems to have brought out a lot of craziness that needs to be dispelled!
Happy New Year to you too!
Joel

cba
January 5, 2012 7:31 am

“Joel Shore says:
January 4, 2012 at 1:56 pm
cba says:
Note too that while all existing feedbacks are accounted for, it does not include additional feedbacks due to a rise in temperature that increase the total W/m^2.
The only feedback that I know of that does not fall into this latter category is the lapse rate feedback, a negative feedback. Can you think of any other one?
So, really, a simpler way of putting it is that the only feedback that you have included as a feedback is the one known negative feedback. The positive ones (water vapor, ice albedo) and the one that is of unknown sign (clouds) are not included.

Joel,
No, once a feedback or a forcing happens, it changes the temperature. My sensitivity value is the sensitivity of the planet to a change of 1 W/m^2. That includes both some initial forcing plus whatever feedbacks occur due to the rise in T caused by the 1 W/m^2 rise. Doubling the CO2 value results in increased blocking of 3.7 W/m^2, assuming the tropopause as the measuring point. If you go far higher in the atmosphere, that value drops closer to 2.7 W/m^2, assuming the distribution of the CO2 spreads to 70 km and beyond. This will account for about a 0.8 deg C rise in T to overcome the additional absorption. NOTE: this doesn't include your beloved feedbacks that will happen when there is an increase of T = 0.8 + ft deg C caused by the 3.7 + fb W/m^2, where ft is the feedback T increase and fb is the feedback absorbed power. HOWEVER, thanks to the IPCC politicians, we know that H2O vapor feedback is by far the largest contributor, and we know from meteorology that water vapor tends to maintain a constant RH.
From this, we can get a new value for absolute humidity given a total change in temperature T. As memory serves, a 5 deg C change in T causes a 30% change in H2O vapor content, assuming RH stays constant. For a 2 deg C change in T, that value drops to a 13% change in H2O vapor content, again assuming constant RH. Plugging these values into a 1-d model, one gets a 3.2 W/m^2 increase in H2O vapor blocking at the tropopause (and at 70 km) for the 30% case, and a meager 1.5 W/m^2 contribution from the 13% increase caused by a 2 deg C rise in T. 3.7 + 1.5 = 5.2 W/m^2 of forcing plus primary feedback, assuming a 2 deg C total rise in T. Note this is only sufficient to cause a 1.13 deg C rise, which leaves 0.87 deg C unaccounted for. That's an additional 4 W/m^2 of feedbacks above and beyond the IPCC's professed primary feedback of H2O vapor.
Also, as previously mentioned, that little problem is solved by Lacis and Hansen by assuming cloud cover is at its maximum and that a slight warming will result in significantly reduced cloud cover that provides the missing W/m^2 of feedback. Of course, the IPCC can't figure out whether the cloud cover variation will result in positive or negative feedback.
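The bookkeeping in the two paragraphs above can be replayed directly, simply taking the quoted W/m^2 figures and the 0.218 slope at face value:

# Replay of the feedback bookkeeping above.
SLOPE = 0.218     # K per W/m^2, from the earlier 33/151 estimate
F_CO2 = 3.7       # W/m^2, doubling of CO2 at the tropopause
F_H2O = 1.5       # W/m^2, extra H2O blocking for a 2 deg C rise at constant RH (quoted above)

dt_assumed = 2.0                             # the assumed total rise being tested
dt_from_terms = SLOPE * (F_CO2 + F_H2O)      # ~1.13 deg C
shortfall_K = dt_assumed - dt_from_terms     # ~0.87 deg C
shortfall_W = shortfall_K / SLOPE            # ~4 W/m^2 of unnamed feedbacks

print(f"CO2 + H2O vapour terms give {dt_from_terms:.2f} deg C")
print(f"Shortfall to reach {dt_assumed:.0f} deg C: {shortfall_K:.2f} deg C "
      f"(~{shortfall_W:.1f} W/m^2 of additional feedback needed)")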
So much for settled science, science, politics, and Catastrophic AGW.

Joel Shore
January 5, 2012 9:45 am

cba says:

Joel,
No, once a feedback or a forcing happens, it changes the temperature. My sensitivity value is the sensitivity to the planet for a change of 1 W/m^2. That includes both some initial forcing plus whatever feedbacks occur due to a rise in T caused by the 1w/m^2 rise. …
NOTE: this doesn’t include your beloved feedbacks that will happen when there is an increase of T= 0.8 + ft deg C caused by the 3.7 + fb W/m^2 where ft is the feedback T increase and fb is the feedback absorbed power.

I really can't make heads or tails of your statements here. They almost seem contradictory. My question to you is simple: do you now agree that your estimate of the climate sensitivity of 0.8 C per CO2 doubling does not include any feedbacks that come about because of the temperature rise (but does include the so-called "lapse rate feedback", which is really the odd man out as far as feedbacks are concerned)? If not, exactly why do you disagree?
I really don't feel like wading through your calculation of the water vapor feedback… So, I'll just ask you to cut to the chase: are you saying the water vapor feedback is less than the IPCC says it is?
I do agree that getting the central estimate for the IPCC climate sensitivity requires the cloud feedback to be somewhat positive. [According to the IPCC AR4, essentially all of the other feedbacks except the cloud feedback get you to the bottom end of the IPCC likely range, i.e., to ~2 C (the specific value they quote being 1.9 C).] I think your description of what they are saying clouds do, however, is too simplistic. In fact, the feedback involves both high clouds and low clouds, not simply what the total cloud fraction does. And do you have any evidence that they are assuming that the cloud cover is at a maximum in the current state? (Assuming that the cloud cover on net does reduce when you warm is not the same thing as saying the current cloud cover is at a maximum.)
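For what it's worth, the way those numbers usually hang together is the standard feedback-factor algebra, dT = dT0 / (1 - f), with dT0 the no-feedback (Planck) response of roughly 1.2 K to 3.7 W/m^2. A sketch only; the 1.2 K is a standard round number, not a figure taken from the AR4 tables:

# Standard feedback-factor algebra: dT = dT0 / (1 - f).
DT0 = 1.2   # K, no-feedback response to a CO2 doubling (assumed round number)

def equilibrium_warming(f):
    """Equilibrium warming for a total feedback factor f (valid for f < 1)."""
    return DT0 / (1.0 - f)

def feedback_factor(dt):
    """Total feedback factor implied by an equilibrium warming dt."""
    return 1.0 - DT0 / dt

for dt in (1.9, 3.0):
    print(f"{dt:.1f} K of warming implies f = {feedback_factor(dt):.2f}")
for f in (0.0, 0.4, 0.6):
    print(f"f = {f:.1f} gives {equilibrium_warming(f):.1f} K")

Getting from ~1.9 C to the IPCC central value is then a statement about how much additional feedback factor the clouds contribute.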

cba
January 5, 2012 2:54 pm

The 0.8 deg C rise due to the CO2 is the portion of the T rise that you would call the forcing. You'll have to 'wade' through the details concerning the H2O vapor feedback to see what the "primary" feedback contribution is or isn't.
Well, if dropping the temperature decreases cloud cover (due to a lack of H2O vapor available to form clouds), and if Lacis and Hansen's assumption is correct that a rise in temperature will decrease cloud cover and cause a positive feedback that enhances the warming, then it must be that we are at a maximum and the value cannot be greater than it is now. The evidence is their claim that the cloud-cover reduction is their amplification factor. Whether they have figured out that this means we're at the maximum value is not something I know, although I would expect that if they did know, they'd be in full back-pedaling mode. Then again, maybe they were hoping that no one would notice.
As for the IPCC's 1.9 deg C value, how many W/m^2 do they say exist in the feedback column, and how much of it is due to the 3.7 W/m^2 of CO2 forcing?
Note that you'll probably need the value of 8.2 W/m^2 of additional absorption for a doubling of H2O vapor to go along with the 3.7 W/m^2 of additional absorption for a doubling of CO2.