Feedback about feedbacks and suchlike fooleries

By Christopher Monckton of Brenchley

Responses to my post of December 28 about climate sensitivity have been particularly interesting. This further posting answers some of the feedback.

My earlier posting explained how the textbooks establish that if albedo and insolation were held constant but all greenhouse gases were removed from the air the Earth’s surface temperature would be 255 K. Since today’s temperature is 288 K, the presence as opposed to absence of all the greenhouse gases – including H2O, CO2, CH4, N2O and stratospheric O3 – causes 33 K warming.

Kiehl and Trenberth say that the interval of total forcing from the five main greenhouse gases is 101[86, 125] Watts per square meter. Since just about all temperature feedbacks since the dawn of the Earth have acted by now, the post-feedback or equilibrium system climate sensitivity parameter is 33 K divided by the forcing interval – namely 0.33[0.27, 0.39] Kelvin per Watt per square meter.

Multiplying the system sensitivity parameter interval by any given radiative forcing yields the corresponding equilibrium temperature change. The IPCC takes the forcing from a doubling of CO2 concentration as 3.7 Watts per square meter, so the corresponding warming – the system climate sensitivity – is 1.2[1.0, 1.4] K, or about one-third of the IPCC’s 3.3[2.0, 4.5] K.
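For readers who wish to check that arithmetic, here is a minimal sketch in Python (the interval bounds are handled term by term, which is a simplification on my part; the head post quotes 0.33[0.27, 0.39]):

# System climate-sensitivity parameter: 33 K of greenhouse warming divided by
# Kiehl & Trenberth's 101 [86, 125] W/m2 of total greenhouse-gas forcing,
# then scaled by the IPCC's 3.7 W/m2 for a CO2 doubling.
warming_K = 33.0
forcing_low, forcing_mid, forcing_high = 86.0, 101.0, 125.0

lam = [warming_K / f for f in (forcing_high, forcing_mid, forcing_low)]   # K per W/m2
sensitivity = [l * 3.7 for l in lam]                                      # K per CO2 doubling

print([round(l, 2) for l in lam])           # -> [0.26, 0.33, 0.38]
print([round(s, 1) for s in sensitivity])   # -> [1.0, 1.2, 1.4]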

I also demonstrated that the officially-estimated 2 Watts per square meter of radiative forcings and consequent manmade temperature changes of 0.4-0.8 K since 1750 indicated a transient industrial-era sensitivity of 1.1[0.7, 1.5] K, very much in line with the independently-determined system sensitivity.

Accordingly, transient and equilibrium sensitivities are so close to one another that temperature feedbacks – additional forcings that arise purely because temperature has changed in response to initial or base forcings – are very likely to be net-zero.

Indeed, with net-zero feedbacks the IPCC’s transient-sensitivity parameter is 0.31 Kelvin per Watt per square meter, close to the 0.33 that I had derived as the system equilibrium or post-feedback parameter.
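Again, purely as an arithmetical check on the industrial-era figures quoted above (a sketch; 0.6 K is simply the midpoint of the 0.4-0.8 K range):

# Transient industrial-era sensitivity: warming since 1750 divided by the
# officially-estimated 2 W/m2 of forcing, scaled to a 3.7 W/m2 CO2 doubling.
forcing_since_1750 = 2.0                    # W/m2
for warming in (0.4, 0.6, 0.8):             # K: low, central, high
    lam = warming / forcing_since_1750      # transient sensitivity parameter, K per W/m2
    print(round(lam, 2), round(lam * 3.7, 1))
# -> 0.2 0.7, then 0.3 1.1, then 0.4 1.5, bracketing the 0.31 transient parameter above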

I concluded that climate sensitivity to the doubling of CO2 concentration expected this century is low enough to be harmless.

One regular troll – one can tell he is a troll by his silly hate-speech about how I “continue to fool yourself and others” – claimed that Kiehl and Trenberth’s 86-125 Watts per square meter of total forcing from the presence of the top five greenhouse gases included the feedbacks consequent upon the forcings, asserting, without evidence, that I (and by implication the two authors) had confused forcings with feedbacks.

No: Kiehl and Trenberth are quite specific in their paper: “We calculate the longwave radiative forcing of a given gas by sequentially removing atmospheric absorbers from the radiation model. We perform these calculations for clear and cloudy sky conditions to illustrate the role of clouds to a given absorber for the total radiative forcing. Table 3 lists the individual contribution of each absorber to the total clear-sky [and cloudy-sky] radiative forcing.” Forcing, not feedback. Indeed, the word “feedback” does not occur even once in Kiehl & Trenberth’s paper.

In particular, the troll thought we were treating the water-vapor feedback as though it were a forcing. We were not, of course, but let us pretend for a moment that we were. If we now add CO2 to the atmospheric mix and disturb what the IPCC assumes to have been a prior climatic equilibrium, then by the Clausius-Clapeyron relation the space occupied by the atmosphere is capable of holding near-exponentially more water vapor as it warms. This – to the extent that it occurred – would indeed be a feedback.
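To put a number on the Clausius-Clapeyron point, here is a sketch using the common August-Roche-Magnus approximation to the saturation vapour pressure (the formula and its constants are standard; the 1 K warming step is arbitrary):

import math

def saturation_vapour_pressure(t_celsius):
    # August-Roche-Magnus approximation, in hPa
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

# Near-exponential behaviour: each additional kelvin raises the air's
# water-holding capacity by roughly 6 to 7.5 per cent between 0 and 30 C.
for t in (0, 10, 20, 30):
    gain = saturation_vapour_pressure(t + 1) / saturation_vapour_pressure(t) - 1
    print(t, round(saturation_vapour_pressure(t), 1), round(gain * 100, 1))

Whether specific humidity in the free troposphere actually rises in step with this capacity is precisely what the Paltridge et al. paper discussed in the next paragraph calls into question.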

However, as Paltridge et al. (2009) have demonstrated, it is not clear that the water vapor feedback is anything like as strongly positive as the IPCC would like us to believe. Below the mid-troposphere, additional water vapor makes very little difference because its principal absorption bands are largely saturated. Above it, the additional water vapor tends to subside harmlessly to lower altitudes, again making very little difference to temperature. The authors conclude that feedbacks are somewhat net-negative, a conclusion supported by measurements given in papers such as Lindzen & Choi (2009, 2010), Spencer & Braswell (2010, 2011), and Shaviv (2011).

It is also worth recalling that Solomon et al. (2009) say equilibrium will not be reached for up to 3000 years after we perturb the climate. If so, it is only the transient climate change (one-third of the IPCC’s equilibrium estimate) that will occur in our lifetime and in that of our grandchildren. Whichever way you stack it, manmade warming in our own era will be small and, therefore, harmless.

A true-believer at the recent Los Alamos quinquennial climate conference at Santa Fe asked me, in a horrified voice, whether I was really willing to allow our grandchildren to pay for the consequences of our folly in emitting so much CO2. Since the warming we shall cause will be small and may well prove to be beneficial, one hopes future generations will be grateful to us.

Besides, as President Klaus of the Czech Republic has wisely pointed out, if we damage our grandchildren’s inheritance by blowing it on useless windmills, mercury-filled light-bulbs, solar panels, and a gallimaufry of suchlike costly, wasteful, environment-destroying fashion statements, our heirs will certainly not thank us.

Mr. Wingo and others wonder whether it is appropriate to assume that the sum of various different fourth powers of temperature over the entire surface of the Earth will be equal to the fourth power of the global temperature as determined by the fundamental equation of radiative transfer. By zonal calculation on several hundred zones of equal height and hence of equal spherical-surface area, making due allowance for the solar zenith angle applicable to each zone, I have determined that the equation does indeed provide a very-nearly-accurate mean surface temperature, varying from the sum of the zonal means by just 0.5 K in total. In mathematical terms, the discrepancy arising from the Hölder inequality is in this instance near-vanishingly small.
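The equal-area property relied on here – zones of equal height on a sphere have equal surface area (Archimedes’ hat-box theorem) – is easily verified numerically; a minimal sketch:

import math

# Archimedes' hat-box theorem: slicing a sphere into horizontal zones of equal
# height gives zones of equal surface area, so equal-height zones carry equal weight.
R, n = 6371.0, 400                           # Earth radius in km, number of zones
h = 2 * R / n                                # common zone height

def area_below(z):
    # lateral surface area of the sphere between the south pole (z = -R) and height z
    return 2 * math.pi * R * (z + R)

zone_areas = [area_below(-R + (i + 1) * h) - area_below(-R + i * h) for i in range(n)]
print(round(min(zone_areas)), round(max(zone_areas)), round(4 * math.pi * R**2 / n))
# all three values agree: every zone has area equal to (sphere area) / n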

Dr. Nikolov, however, considers that the textbooks and the literature are wrong in this respect: but I have deliberately confined my analysis to textbook methods and “mainstream-science” data precisely so as to minimize the scope for any disagreement on the part of those who – until now – have gone along with the IPCC’s assertion that climate sensitivity is high enough to be dangerous. Deploying their own methods and drawing proper conclusions from them is more likely to lead them to rethink their position than attempting to reinvent the wheel.

Mr. Martin asks whether I’d be willing to apply my calculations to Venus. However, I do not share the view of Al Gore, Dr. Nikolov, or Mr. Huffman that Venus is likely to give us the answers we need about climate sensitivity on Earth. A brief critique of Mr. Huffman’s analysis of the Venusian atmospheric soup and its implications for climate sensitivity is at Jo Nova’s ever-fragrant and always-eloquent website.

Brian H asks whether Dr. Nikolov is right in his finding that, for several astronomical bodies [including Venus] all that matters in the determination of surface temperature is the mass of the atmospheric overburden. Since I am not yet content that Dr. Nikolov is right in concluding that the Earth’s characteristic-emission temperature is 100 K less than the 255 K given in the textbooks, I am disinclined to enquire further into his theory until this rather large discrepancy is resolved.

Rosco is surprised by the notion of dividing the incoming solar irradiance by 4 to determine the Wattage per square meter of the Earth’s surface. I have taken this textbook step because the Earth intercepts a disk-sized area of insolation, which must be distributed over the rotating spherical surface, and the ratio of the surface area of a disk to that of a sphere of equal radius is 1:4.

Other commenters have asked whether the fact that the characteristic-emission sphere has a greater surface area than the Earth makes a difference. No, it doesn’t, because the ratio of the surface areas of disk and sphere is 1:4 regardless of the radius and hence surface area of the sphere.
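For completeness, the textbook arithmetic behind the 255 K characteristic-emission temperature, as a sketch (the solar constant of about 1361 Watts per square meter and the albedo of 0.3 are the usual assumed values, not figures taken from this post):

# Effective (no-greenhouse) emission temperature: (1 - albedo) * S / 4 = sigma * T^4.
# The divisor 4 is the disk-to-sphere area ratio discussed above.
S, albedo, sigma = 1361.0, 0.3, 5.67e-8     # W/m2, dimensionless, W/m2/K^4

absorbed = (1 - albedo) * S / 4             # mean absorbed flux per square metre of sphere
T_effective = (absorbed / sigma) ** 0.25
print(round(absorbed), round(T_effective))  # -> 238 255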

Rosco also cites Kiehl and Trenberth’s notion that the radiation absorbed and emitted at the Earth’s surface is 390 Watts per square meter. The two authors indicate, in effect, that they derived that value by multiplying the fourth power of the Earth’s mean surface temperature of 288 K by the Stefan-Boltzmann constant (5.67 x 10^-8 Watts per square meter per Kelvin to the fourth power).

If Kiehl & Trenberth were right to assume that a strict Stefan-Boltzmann relation holds at the surface in this way, then we might legitimately point out that the pre-feedback climate-sensitivity parameter – the first differential of the fundamental equation of radiative transfer at the above values for surface radiative flux and temperature – would be just 288/(390 x 4) = 0.18 Kelvin per Watt per square meter. If so, even if we were to assume the IPCC’s implicit central estimate of strongly net-positive feedbacks at 2.1 Watts per square meter per Kelvin the equilibrium climate sensitivity to a CO2 doubling would be 3.7 x 0.18 / (1 – 2.1 x 0.18) = 1.1 K. And where have we seen that value before?
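The arithmetic of that last step, set out as a sketch (using the usual zero-dimensional feedback relation deltaT = lambda0 x deltaF / (1 – f x lambda0) with the values just quoted):

# Pre-feedback sensitivity parameter from a strict Stefan-Boltzmann relation at
# the surface, lambda0 = T / (4 F), then the standard feedback amplification.
T_surface, F_surface = 288.0, 390.0         # K, W/m2
lambda0 = T_surface / (4 * F_surface)       # ~0.185 K per W/m2

delta_F = 3.7                               # W/m2, forcing from a CO2 doubling
f = 2.1                                     # W/m2/K, IPCC's implicit central feedback sum

delta_T = delta_F * lambda0 / (1 - f * lambda0)
print(round(lambda0, 2), round(delta_T, 1)) # -> 0.18 1.1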

In all this, of course, I do not warrant any of the IPCC’s or Kiehl and Trenberth’s or the textbooks’ methods or data or results as correct: that would be well above my pay-grade. However, as Mr. Fernley-Jones has correctly noticed, I am quite happy to demonstrate that if their methods and values are correct then climate sensitivity – whichever way one does the calculation – is about one-third of what they would like us to believe it is.

All the contributors – even the trolls – have greatly helped me in clarifying what is in essence a simple but not simpliste argument. To those who have wanted to complicate the argument in various ways, I say that, as the splendid Willis Eschenbach has pointed out before in this column, one should keep firmly in mind the distinction between first-order effects that definitely change the outcome, second-order effects that may or may not change it but won’t change it much, and third-order effects that definitely won’t change it enough to make a difference. One should ruthlessly exclude third-order effects, however superficially interesting.

Given that the IPCC seems to be exaggerating climate sensitivity threefold, only the largest first-order influences are going to make a significant difference to the calculation. And it is the official or textbook treatment of these influences that I have used throughout.

My New Year’s resolution is to write a short book about the climate question, in which the outcome of the discussions here will be presented. The book will say that climate sensitivity is low; that, even if it were as high as the IPCC wants us to think, it would be at least an order of magnitude cheaper to adapt to the consequences of any warming that may occur than to try, Canute-like, to prevent it; that there are multiple lines of evidence for systematic and connected corruption and fraud on the part of the surprisingly small clique of politically-motivated “scientists” who have fabricated and driven the now-failing climate scare; and that too many who ought to know better have looked the other way as their academic, scientific, political, or journalistic colleagues have perpetrated and perpetuated their shoddy frauds, because silence in the face of official mendacity is socially convenient, politically expedient, and, above all, financially profitable.

The final chapter will add that there is a real danger that the UN, using advisors from the European Union, will succeed in exploiting the fraudulent science peddled by the climate/environment axis as a Trojan horse to extinguish democracy in those countries which, unlike the nations of Europe, are still fortunate enough to have it; that the world’s freedom is consequently at immediate and grave risk from the vaunting ambition of a grasping, talent-free, scientifically-illiterate ruling elite of world-government wannabes everywhere; but that – as the recent history of the bureaucratic-centralist and now-failed EU has demonstrated – the power-mad adidacts are doomed, and they will be brought low by the ineluctable futility of their attempts to tinker with the laws of physics and of economics.

The army of light and truth, however few we be, will quietly triumph over the forces of darkness in the end: for, whether they like it or not, the unalterable truth cannot indefinitely be confused, concealed, or contradicted. We did not make the laws of science: therefore, it is beyond our power to repeal them.

Joel Shore
December 30, 2011 5:55 pm

Rosco says:

If you really believe that when the temperature in places like Death Valley approaches 60 degrees C the Sun is responsible for minus 18 C while the remaining 70 plus degrees C is provided by greenhouse effect you have been completely conned.
There is evidence the theory of quartering the solar constant to determine the greenhouse effect is wrong (actually I believe it is a deliberate con dressed up as plausible).

You are misunderstanding the argument. The argument is one for GLOBAL ENERGY BALANCE. It does not allow one to make statements about the local temperature. What is true globally is that the amount of power coming in has to equal the amount going out.
The amount coming in is equal to the solar constant times pi*R^2 where R is the Earth’s radius; however, ~30% of this is reflected because of the Earth’s albedo, so the actual amount absorbed by the Earth and its atmosphere is (1-alpha)*S*pi*R^2 where S is the solar constant and alpha is the albedo. The amount being emitted is equal to the quantity (epsilon*sigma*T^4) integrated over the Earth’s surface where epsilon is the emissivity, sigma is the Stefan-Boltzmann constant and T is the temperature. epsilon is very nearly equal to one (within about a percent) for most terrestrial surfaces in the infrared. Hence, to a good approximation, this integral is given by sigma*(ave(T^4))*(4*Pi*R^2) where ave(T^4) is the average of the quantity T^4 (absolute temperature to the fourth power) over the 4*Pi*R^2 of surface area of the Earth.
Hence, what global energy balance constrains is the average of T^4 over the surface of the Earth. It does not constrain the maximum temperature, or the minimum, or anything like that. It constrains the average. [The last piece of the puzzle is to apply Hölder’s inequality, which tells you that the fourth root of ave(T^4) will be greater than or equal to the average of T. Hence, when you get that the 4th root of the average of T^4 is about 255 K, this means the average temperature T is at most 255 K. In practice, for Earth-like temperature variations, the difference between the average of T and the 4th root of the average of T^4 is pretty small…and since the error due to that and the error due to assuming that the emissivity epsilon is exactly 1 are of similar magnitude and act in opposite directions, the 255 K estimate is a pretty reasonable value for what the average temperature would be in the absence of a greenhouse effect.]
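A quick numerical illustration of the Hölder point (a sketch using an arbitrary synthetic spread of temperatures centred on 255 K, not real data):

# Hölder's inequality: the fourth root of ave(T^4) is always >= ave(T); for an
# Earth-like spread of temperatures the gap is only a degree or two.
temps = [225.0 + 0.6 * i for i in range(101)]        # synthetic 225-285 K spread

mean_T = sum(temps) / len(temps)
fourth_root_of_mean_T4 = (sum(t**4 for t in temps) / len(temps)) ** 0.25
print(round(mean_T, 1), round(fourth_root_of_mean_T4, 1))   # -> 255.0 256.8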

December 30, 2011 5:58 pm

Once again, many thanks to everyone for your comments. Here are some answers.
First, the troll. Joel Shore says he thinks I define a “troll” as someone who injects science into his comments. Yet I had specifically stated that a “troll” is one who cannot keep his argument civil and polite, but resorts instead to hate-speech, of which Shore is serially guilty. Besides, the only “science” he “injected” was a repeated misstatement to the effect that Kiehl & Trenberth’s greenhouse-gas forcings total of 86-125 Watts per square meter included feedbacks consequent upon the forcings. It doesn’t. Get used to it.
R. Gates says the Earth has not yet reached its equilibrium temperature, in that not all of the feedbacks consequent upon the greenhouse-gas forcings of 2 Watts per square meter that we have added to the system since 1750 have acted. However, these feedbacks are a very small fraction of the total feedbacks generated by the presence as opposed to absence of all greenhouse gases in the atmosphere. In that context, very nearly all the feedbacks have acted, and the remaining feedbacks (even if they are as net-positive as the IPCC would wish) will have little influence on the determination of the equilibrium system sensitivity of 1.2 K per CO2 doubling.
Of course, R. Gates’ argument is correct as far as the transient sensitivity calculation since 1750 is concerned: but I had already pointed this out in my postings, drawing the legitimate conclusion that if the transient industrial-era sensitivity (1.1 K) is near-equal to the equilibrium system sensitivity (1.2 K) then it is likely that temperature feedbacks are net-zero or thereby. I submit, therefore, that I have not made an “error of logic” here.
R. Gates expresses a touching faith in models and paleoclimate data. He is entitled to his religion. However, Shaviv (2011), using models and a great deal of paleoclimate data, finds climate sensitivity to be around 1 K per doubling, not the IPCC’s 3 K. Douglass and Knox, in paper after paper, have also used modeling and have found climate sensitivity low. My own approach is to try to use empirical rather than numerical weather prediction for the determination of future climate states, because the uncertainties in modeling are too great, especially since the climate object is (or behaves as though it were) mathematically chaotic and hence inherently resistant to very-long-term prediction of its future states (IPCC, 2001, para. 14.2.2.2).
R. Gates also suggests I express a “high level of certainty” that climate sensitivity is low and harmless. However, I have repeatedly made it plain that I do not warrant the reliability of the values for radiative forcing or for pre-greenhouse-gas global surface temperature that I have used. However, if those textbook/mainstream/IPCC values and methods are correct, then – like it or not – low climate sensitivity necessarily follows. Do the math.
AJStrata begs me to appreciate that the “absorbing and emitting sphere” at the characteristic-emission altitude are not of the same radius. However, Kirchhoff’s radiation law is entirely clear: absorption and emission of radiation from the characteristic-emission surface of an astronomical body are – and are treated by the textbooks and the IPCC as – simultaneous and identical. It is as simple as that.
Mydogsgotnonose says that the textbook value of 33 K for the warming effect of the presence as opposed to the absence of all greenhouse gases is an “elementary mistake”, on the ground that there would be no clouds or ice if there were no water vapor in the atmosphere, so that the albedo would be different. I don’t know how many more times I need to explain that in order to determine purely the warming effect of the greenhouse gases one must hold the albedo, emissivity, and insolation artificially constant. This is elementary climatology, and it makes perfect sense if the objective is to determine climate sensitivity robustly, rather than to determine the actual mean surface temperature that would subsist on the naked terrestrial lithosphere.
Wayne, supported by Dr. Burns and by Richard M, is attracted to Dr. Nikolov’s “Unified Climate Theory”. He says he has calculated the temperature of millions of points on the surface of the Earth and has concluded that the characteristic-emission temperature would indeed be 155 K, not the 255 K that the textbooks give. Well, I too have done that calculation, but I have not used points because that would introduce errors. I have done the calculation zonally, using up to a million zones of equal altitude and hence of equal spherical-surface area. I find the textbooks to be correct. If Dr. Nikolov wishes to convince the scientific community of his theory, he will have to deal with the alarmingly large discrepancy between his value and the currently-accepted value for the characteristic-emission temperature before proceeding further.
Wayne also produces a table of mean surface temperatures for three astronomical bodies in the solar system, apparently calculated by Dr. Nikolov solely from the respective atmospheric pressures, densities, and molar masses, and without reference to greenhouse gases at all. Perhaps I am misunderstanding the units used, but they do not cancel to yield Kelvin, as they should if they were to mean anything: instead, they seem to yield Kelvin Newton-meters per Joule. So I suspect that something may be amiss here. At present, my advice would be to treat Dr. Nikolov’s theory – however interesting and attractive it is at first blush – with caution.
Bill Illis has it absolutely right. If one starts empirically from commonly-accepted empirical data and methods, as I have tried to do, whichever way one attacks the numbers the climate sensitivity is about one-third of the IPCC’s central estimate. On the other hand, as Bill says, perhaps the climate possesses some “magical properties” that will be revealed later. Were it not for the ludicrous politicization of what should be a straightforward scientific question, it would by now be generally agreed that climate sensitivity is low enough to be harmless, and that, even if it were as high as the IPCC imagines, it would still be at least an order of magnitude more cost-effective to wait and adapt to any consequences of any warming that may occur than to try – futilely – to stop it happening by taxing, trading, regulating, reducing, or replacing CO2.

December 30, 2011 6:04 pm

With all due respect, Lord Monckton, you are indeed serving well in the cause. But I firmly believe you need to study more carefully and come to grips with Prof. Claes Johnson’s “Computational Blackbody Radiation” http://www.csc.kth.se/~cgjoh/blackbodyslayer.pdf in which he proves (backed up by Prof Nahle’s experiment in Sept 2011) that any back radiation simply does not have sufficient energy to ionise surface molecules and thus be converted to thermal energy. Only direct solar insolation does any warming.
This removes the possibility of any feedback what-so-ever. It removes the power source of any “greenhouse effect” and renders all models of sensitivity irrelevant.
PS. If you want data that really hits AGW on the head, use the Arctic trends on my site http://climate-change-theory.com

Larry Goldberg
December 30, 2011 6:08 pm

R Gates – you invoke Judith Curry on one hand, but cite that “paleoclimate data and GCM’s seem to be converging on a reasonably consistent range in the area of 3C, with error bars of about 1C on either side at a 95% confidence level.” Your (unsubstantiated) claim would not please Ms. Curry, who has a much clearer idea of the uncertainty inherent in the GCMs (regardless of whether they are run on a supercomputer or not), nor would she be impressed by any unsubstantiated claims of certainty (or “convergence”) of the paleoclimate data (particularly some paleoclimate “data” that even the team finds difficult to stomach). Monckton has proposed a theory: either expose the flaws in it, or stand back and let the adults deal with the issues.

Dave Wendt
December 30, 2011 6:21 pm

Doug Proctor says:
December 30, 2011 at 3:28 pm
“Is the planetary loss of heat a) significant, or b) hidden within other factors? What is the flux?
Anyone out there with thoughts on this?”
From what I’ve seen, the conventional wisdom is that the geothermal contribution to the global heat budget is about 88 milliwatts/m2, both terrestrially and in the ocean abyss. Having looked at several of the supposedly canonical works on this topic, the only point that jumped out at me was that in all of the calculations the contribution from any volcanically active regions was systematically excluded. This work suggests that, at least for the oceans, the geothermal contribution has been underestimated and is in fact not negligible
http://www.ocean-sci.net/5/203/2009/os-5-203-2009.pdf
Given that in doing their analysis they continue the practice of excluding any actual hot spots and use the most conservative estimates for the rest, it suggests, to me at least, that underestimating the geothermal component may have compromised the energy budget calculations. In any case I suspect even the most thorough accounting is unlikely to bring the number up to much beyond half a Watt/m2; but then we’re talking about an imbalance that is in the area of only 3 W/m2, so half a Watt is still significant.

wayne
December 30, 2011 6:34 pm

Bob Fernley-Jones says:
December 30, 2011 at 2:24 pm
R. Gates December 30, 11:58 am

[Wayne], if an atmosphere-less earth would have an average temperature of 154 K, why does the atmosphere-less moon have one of 250 K? Both would receive about the same energy from the sun.

I’m curious to know how meaningful any calculation of the so-called average temperature of the moon is, and just how it is integrated. Can you advise please?
One source gives a T range of 260K, which is greater than your average. BTW, three major differences between the moon and an airless & ocean-less Earth, (with current geology), are the regolith thermal factors, and rotation speed, together with speculation on albedo, which I guess would vary much more on Earth for very different regional geological reasons. It would be interesting to study the thermodynamics of heating and cooling on the moon, (at the surface and at depth), over its 4-week cycle.
Wayne, sorry for interjecting, but I found it hard to resist.
>>
Hey Bob, no problems, mate. Unlike some ‘people’ here I do have to sleep every day at some point. I say ‘people’ for it sure seems some must be entire sleepless organizations; I find no time period when they are not always there, posting dribble.
R. Gates, do you not retain anything? This has been answered many times before. This is using KT07 and TFK09’s radiative computations. In their case they used a perfect, mass-less, black-body flat disc within their calculations to get the 33 K GHE.
Nikolov’s case is staying parallel but using a perfect massless gray-body sphere. Neither have any thermal inertia properties.
You see, even the concept is not real; it is not calculating a real temperature, for there is no mass. You must have some mass to even define a temperature. These calculations in all three cases are computing an effective radiative temperature. Big difference.
But on the moon, you have real mass, real specific heats and conductivity, real absorption and dispersion of that energy, stored to release at a cooler time, so the temperatures from the lit side to dark side are modulated to a huge degree. (but I was wrong in assuming that all readers here knew that)
There R. Gates, that is why.
Also you are talking about the temperature on the moon of 250 K. That is assumed near the maximum and on a real body with mass, but since there is no atmosphere, this temperature must be taken in the top layer of the dust. Right? Well, in the Nikolov case that I integrated, there are also locations where the effective temperature is 250 K. In fact there is one location, directly under the sun, where it is receiving 1198 Wm-2, and at an emissivity of 0.955 the effective radiative temperature is 386 K… at that one maximum point, but there is no thermal inertia; it is a perfect massless radiator. It is just a fact that most of that sphere receives MUCH less on the average due to pure geometry and, that’s right, it averages to a mean of 154.3 K, giving the “GHE” of 133 K.
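That integration is easy to reproduce numerically; a sketch using the 1198 Wm-2 subsolar absorption and 0.955 emissivity quoted above, with zero thermal inertia and hence ~0 K on the dark hemisphere:

# Zero-inertia gray-body sphere: local radiative equilibrium on the lit side,
# nothing stored for the dark side, averaged over equal-area (equal-mu) strips.
sigma = 5.67e-8                          # Stefan-Boltzmann constant, W/m2/K^4
absorbed_subsolar = 1198.0               # W/m2 absorbed directly under the sun
emissivity = 0.955

n = 100000
mus = [(i + 0.5) / n for i in range(n)]  # mu = cosine of solar zenith angle, lit side only
T_lit = [(absorbed_subsolar * mu / (emissivity * sigma)) ** 0.25 for mu in mus]

T_mean = sum(T_lit) / (2 * n)            # the dark hemisphere contributes ~0 K
print(round(max(T_lit)), round(T_mean))  # -> 386 154, matching the figures above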

wayne
December 30, 2011 7:09 pm

Monckton of Brenchley says:
December 30, 2011 at 5:58 pm
Wayne also produces a table of mean surface temperatures for three astronomical bodies in the solar system, apparently calculated by Dr. Nikolov solely from the respective atmospheric pressures, densities, and molar masses, and without reference to greenhouse gases at all. Perhaps I am misunderstanding the units used, but they do not cancel to yield Kelvin
>>
Hi Lord Monckton, I know you are trying and will accept science if it comes to be in the end correct.
On the units in my equation. That equation came from a thermodynamics text and the units do, in fact, all cancel but Kelvin. Here’s their breakdown to erase that doubt.
Starting at T = (P/ ρ) • (M/R)
using a little dimensional analysis you get
K = ( [kg • (m/s2)/m2] / [kg/m3] ) • ( [kg/mol] / [kg•m2/s2/mol/K] )
inverting the denominators gives
K = [kg • m • s-2 • m-2] • [kg-1 • m3] • [kg • mol-1] • [kg-1 • m-2 • s2 • mol • K]
reducing the simple and obvious
K = [m • m-2] • [m3] • [] • [m-2 • K]
coalescing and cancelling m4 and m-4
K = [] • [] • [] • [K]
and you do in fact end up in Kelvin.
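A worked example of the same expression at sea level, as a sketch (standard-atmosphere values assumed):

# T = (P / rho) * (M / R): the ideal-gas law rearranged for temperature.
P   = 101325.0    # Pa (kg m^-1 s^-2), standard sea-level pressure
rho = 1.225       # kg/m3, standard sea-level air density
M   = 0.028964    # kg/mol, mean molar mass of dry air
R   = 8.314       # J/(mol K)

T = (P / rho) * (M / R)
print(round(T, 1))   # -> 288.2 K; every unit except the kelvin cancels, as set out above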
Sorry Christopher. Once again, no radiation terms needed.

December 30, 2011 7:17 pm

The 33 K claimed GHG warming is an elementary mistake. It’s obtained by imagining that if you remove all the atmosphere the surface temperature of the Earth would be the same -18°C it is at present for radiative equilibrium of the composite emitter at the top of the atmosphere with space.
Wrong: the albedo of the Earth would fall from 0.3 to 0.07 because there’d be no clouds or ice. Redo the radiation calculation and the equilibrium radiative solution is close to 0°C.

In the absence of atmosphere the albedo of the Earth would range between that of Mercury and the Moon, from about 0.12 to 0.14, depending on the amount of iron on the surface. There is no such thing as an equilibrium temperature for an airless world: it varies in the sunlight with the cosine angle of the sun and drops to below 100 K after dark.

davidmhoffer
December 30, 2011 7:48 pm

R. Gates;
And as we haven’t quite got all those feedbacks completely figured out yet, we might just want to take an ensemble average of climate models, and then average that with what the paleodata tell us…and what do you know…about 3C is a good number for a doubling of CO2 from preindustrial levels.>>>
Since you admit we don’t have all the feedbacks figured out yet, how does averaging climate models accomplish anything? Do you propose that averaging various numbers we surmise to be wrong with each other somehow arrives at an accurate average? And why would you average those with the paleodata? And which paleodata? The paleodata like Briffa’s that purports to represent the temperature of the earth over 1000 years based 50% on a single tree in Siberia? Mann’s paleodata that actually isn’t data, only the graph that Mann draws from the data because he won’t show us the data or the methods? Or perhaps the paleodata that Mann and Jones quietly threw out because it showed a decline in temps instead of a rise for exactly the time periods that the models predicted a major rise for?
Do you actually think your arguments through before you make them?

George E. Smith;
December 30, 2011 8:36 pm

Well I believe the earth is warming; and has, off and on for the last 8-10 thousand years as we emerge from the last ice age; and I agree that anecdotal evidence of this or that glacial field or ice back receding now and then, indicates that long term warming trend.
I also believe that CO2 (for example) captures a narrow band of LWIR radiation from the earth surface (13.5 to 16.5 microns?) for the degenerate bending mode of oscillation, as well as the band (3.25 to 4.75 microns?) that excites the asymmetric stretch mode of CO2. Of course 98% of that earth-emitted LWIR – which Trenberth et al say is the 390 W/m^2 corresponding to a 288 K black-body spectrum – lies between about 5.0 and 80.0 microns, most of which is entirely oblivious of CO2.
But! I do NOT believe the claims that CO2 is “well mixed” in earth’s atmosphere. My idea of “well mixed” would mean that ANY statistically valid sample of earth atmosphere, taken from any ordinary place on earth at any ordinary time, would assay as having the same molecular abundance of the common atmospheric molecular species; at least down to the 390 ppm of present day CO2.
It would not be statistically valid to take 2500 molecules of atmosphere, and declare it correct to have one CO2 molecule. But 25 million atmospheric molecules should yield 10,000 CO2 molecules, and the shot noise in that should be 100 molecules of CO2, or about 1% of the mean number. Well, I’m sure the professional statisticians would put a factor of three in there somewhere or other; and climatists seem to like that fudge factor. Well one cubic mm of STP air contains about 3 x 10^13 molecules, so it would seem that I only need a 10 micron cube of STP air to be valid as a sample.
Now NOAA in a famous three dimensional pole to pole plot (which is now apparently hidden from view) showed that at Mauna Loa in Hawaii, there is an annual CO2 abundance cycle of about 6 ppm peak to peak; but simultaneously at the North Pole, and over essentially all of the arctic, that cycle is 18 ppm, while at the south pole it is -1 ppm being out of phase with ML.
So the NP and the SP differ by about 19 ppm peak to peak over an annual cycle, out of a mean 390 ppm.
That’s a 5% difference in composition; or five times the shot noise RMS for a 25 million molecule sample. So no I don’t believe CO2 is even approximately well mixed in the atmosphere.
As to the effect of that CO2, or changes in it, I believe CO2 is a rapid, virtually instantaneous moderator of the atmospheric energy.
For example, let’s say a new CO2 molecule emerges from the tailpipe of an SUV, at say one third of a metre above the ground. Now a photon of that at risk LWIR radiation at say 15 microns wavelength, can travel 300 km in just one millisecond, and essentially escape from the atmosphere. Well it can go 300 metres or about 1,000 feet in one microsecond; so it can get to my new tailpipe CO2 molecule in just one nano-second, and be absorbed; well unless it got absorbed in one picosecond by an already existing CO2 molecule just 0.3 mm from the ground.
So yes I think it is fair to say, that in the climate scale of things, 30 years for example, CO2 is virtually instantaneous, in its effect on the earth emitted LWIR radiant energy.
Now at STP near my SUV tailpipe, that CO2 absorbed energy is rapidly thermalized, by molecular collisions with atmospheric gas molecules, and results in an almost instantaneous local warming of the atmosphere AKA a rise in atmospheric Temperature. Yes; I believe all of those things happen all the time.
What happens next, seems to be less certain. The air is warmer so it ought to radiate more energy, and in an isotropic distribution pattern, so about half of that radiation should be directed earthwards, from whence the energy came, and about half should be directed to space and eventually escape, to cool the planet.
There is nothing much up there that could reflect that upward radiation, heading to space, certainly not the atmospheric gas molecules. Well of course there are clouds occasionally, but they don’t reflect LWIR; they ABSORB it due to the water content of clouds. Those water molecules can of course radiate LWIR radiation themselves, in an H2O specific spectrum.
The upward atmospheric emission of LWIR is; according to some people, only at GHG specific wavelengths, many of which will not be absorbed by the H2O molecules of clouds, since not all GHG spectral lines, match for all GHG species.
But in any case, it seems to me, that at least the earth’s atmosphere should react instantaneously in climate terms, to changes in CO2 molecular abundance. I’ve not seen or read many peer-reviewed research papers that show this immediate response of the atmospheric Temperature to the obvious and well documented changes in CO2 in the atmosphere. In particular, the atmospheric Temperature seems to exhibit absolutely no response at all, to a 5% cyclic change in the molecular abundance of CO2, that takes place annually on earth.
Apparently, something else far more significant, is controlling the earth’s atmospheric Temperature. I have no idea what that is; but it seems that CO2 is not cutting the mustard.

December 30, 2011 8:36 pm

Larry Goldberg said December 30, 2011 at 6:08 pm
“Monckton has proposed a theory: either expose the flaws in it, or stand back and let the adults deal with the issues.”
The Good Lord has done no such thing! He has taken currently accepted theory and numbers therefrom, and analysed the results of applying them. Here is what he says:
“I have deliberately confined my analysis to textbook methods and “mainstream-science” data precisely so as to minimize the scope for any disagreement on the part of those who – until now – have gone along with the IPCC’s assertion that climate sensitivity is high enough to be dangerous. Deploying their own methods and drawing proper conclusions from them is more likely to lead them to rethink their position than attempting to reinvent the wheel.”

wayne
December 30, 2011 8:56 pm

Christopher Monckton of Brenchley
Christopher, I started to say no, but you are right, I am attracted to Dr. Nikolov’s “Unified Climate Theory”. I am just trying as hard as possible to stay strictly in proper science. I just feel like a lightning bolt hit me and I feel so foolish. And I can trace my misunderstanding back to, guess who, Joel Shore some years ago. I have been lied to and manipulated while getting acclimated to atmospheric physics. My forte has always been more in astronomy and astrophysics, and mainly there in gravitational effects, not thermodynamics, though I have taken courses in it.
I have not verified N&Z’s work past the first column on the poster but, you know, everything is checking out perfectly so far. Your integration must have a problem in the particular way you used your bands – I’m assuming latitudinal bands. Be sure to reduce the radiation field over each higher latitude band for their ever decreasing area at each latitude. That might be your mistake if your result is coming out too high. I went to great effort to make sure the distribution was perfectly even. So, no error from bad distribution here. Triple-checked, and it agrees with their figures to the fourth digit. So, until I can find some great flaw, I will give them the benefit of the doubt as any good scientist deserves.
I also realize this negates many points and conjectures that I have commented on over the last few years, and it seems will negate huge swathes of other people’s understanding, maybe even yours. I keep asking how I could have discarded the aspects of pure thermodynamics so very easily, and I learned one good lesson: never, ever listen to a known troll.

George E. Smith;
December 30, 2011 9:00 pm

“”””” Mike G says:
December 30, 2011 at 10:44 am
Please stop trashing compact flourescent light bulbs. I completely agree windmills and solar panels for bulk power generation are completely useless and a major con, although not for many smaller scale specialised applications
CFL bulbs are a genuine improvement on those small glowing electric fires for most uses. Who would ever dream of using filament lighting for any commercial building or public space, but somehow it is best for use at home? Fair enough early CFL were not that good, but now they are a huge improvement in many locations, long lasting, cool running, very economical, and even quite quick starting.
It is a shame that the politicians took a stand on this issue, but as Churchill observed, even a fool (and presumably many fools) can be right sometimes.
Talking of progress, it won’t be that long before LEDs rule the world. “””””
Well Mike; why NOT trash those compact flourescent light bulbs; and also the compact fluorescent ones as well.
The “greens” trash coal fired power plants simply because (for one reason) they emit mercury into the atmosphere in totally uncontrolled amounts.
So the all knowing gummint decrees that we shall replace our successful more than 100 year old Edison light bulbs with compact fluorescents; every one of which is guaranteed to contain that poisonous mercury that is not supposed to be good for us, when it comes from abundant coal energy.
Most CFLs; the small twisty kinds, are optically inefficient, since a lot of their white light emission is radiated right back into the light bulb, and lost.
Also, CFLs, and indeed ALL fluorescent lamps, are UV pumped phosphor systems, so essentially 100 percent of their light output, suffers from the quite unavoidable Stokes Shift energy loss, which results in waste heat.
So Fluorescents, and CFLs in particular are simply never going to be as efficient as LEDs, which contain no mercury at all. Even the cheapest highest market volume white light LED lamps are already inherently more efficient than the very best fluorescents, and are rapidly getting better.
And for niche markets, which demand, and can afford more expensive white light sources, with better color rendition indices, the multicolor (RYGB) types will be even higher efficiency.
So to mandate CFLs now, which are already obsolete, is totally insane.

R. Gates
December 30, 2011 9:41 pm

Larry Goldberg said:
” has proposed a theory: either expose the flaws in it, or stand back and let the adults deal with the issues.”

I would hardly call what Lord Monckton has proposed a “theory”. A conjecture perhaps, but hardly a theory. But regardless of what you choose to call it, it fails at the very start in assumptions made about the equilibrium temperature from even the current amount of CO2 (~390 ppm). The Earth has not reached that even yet as all feedback processes have not yet fully responded. Furthermore, it is unlikely that we will find out what the equilibrium temperature is of the Earth at 390 ppm of CO2 as we are adding approximately 2 ppm additional CO2 every year. These increases, along with increases in other greenhouse gases such as N2O and methane, mean we are chasing a currently moving target. In light of all this, it seems highly unlikely that Lord Monckton can possibly know what the sensitivity of the Earth is to a doubling of CO2 from preindustrial levels – certainly not well enough to assert that it will be “low enough to be harmless”.

davidmhoffer
December 30, 2011 9:48 pm

R. Gates;
The Earth has not reached that even yet as all feedback processes have not yet fully responded.>>>
Since you keep referring to the many unknowns which you admit to regarding feedbacks, how can you assert with any degree of confidence that the feedback responses have not yet fully responded?

dalyplanet
December 30, 2011 10:09 pm

I would suggest that R. Gates proposed a reasonable benchmark prediction at 1:43 Dec 30
R Gates
~~I suppose if by 2030 if we see Arctic Sea ice return to the levels we saw in the mid 20th century and long-term global temps continue in the pattern downward we saw generally after the Holocene climate optimum, I might begin to suspect that the feedbacks related to CO2 increases were not as strong as the models indicated.~~

AndyG55
December 30, 2011 10:22 pm

@RGates
“certainly not well enough to assert that it will be “low enough to be harmless”.”
and certainly not well enough to assert that it will be high enough to cause any harm.
Face it.. WE DON’T KNOW WITH ANY SURETY AT ALL, EITHER WAY, so why the heck are we wasting so much money tackling a problem that we don’t even know exists !! (except in the minds of those with the agenda to make it exist)
PLANTS LOVE CO2.. please don’t starve the plants !!!

December 30, 2011 10:36 pm

R. Gates said December 30, 2011 at 9:41 pm
“I would hardly call what Lord Monckton has proposed a “theory”. A conjecture perhaps, but hardly a theory. But regardless of what you choose to call it, it fails at the very start in assumptions blah, blah, blah…”
R Gates, it’s neither theory, nor conjecture. The Good Lord takes as his assumptions the very same assumptions your Lords and Masters promulgate and then deduces from them. It’s called *logic*. I highly commend you study the subject. It’s difficult and will keep you out of mischief for a while.

davidmhoffer
December 30, 2011 10:38 pm

dalyplanet says:
December 30, 2011 at 10:09 pm
I would suggest that R. Gates proposed a reasonable benchmark prediction at 1:43 Dec 30>>>
Really? What did he predict? I’ve read his comment in full at least four times and I have no idea.

December 30, 2011 10:41 pm

AndyG55 said December 30, 2011 at 10:22 pm
“PLANTS LOVE CO2.. please don’t starve the plants !!!”
I won’t, promise! Pompous Gits like making plant food. Don’t tell the greenie-weenies, but making compost entails converting lotsa cellulose into CO2 (carbon perlewshun). And Pompous Gits like burning firewood in the cookstove to make even more carbon perlewshun (plant food).

December 30, 2011 11:57 pm

Brian says, December 30, 2011 at 1:01 pm: High sensitivity is only physically possible if the effective blackbody surface of Earth (at ~5 km, as you say) is rapidly expanding. Earth’s surface would then be warmed by the greater distance of fall in the gravitational field by the molecules starting at the effective surface (i.e., exactly what the lapse rate describes).
I agree completely with the general point you are making. However you have made a significant error in your explanation.
The lapse rate is not created by a “fall in gravitational field” (which would be completely negligible) but by a fall in pressure with height due to the progressive reduction in the mass of atmosphere above.

Mydogsgotnonose
December 31, 2011 12:35 am

Dennis Ray Wingo: ‘In the absence of atmosphere the albedo of the Earth would range between that of Mercury and the Moon, from about 0.12 to 0.14, depending on the amount of iron on the surface.’
Wrong: the oceans, 80% of the surface, would still remain in the ice-free and cloud-free world, so the hypothetical albedo would be ~0.07!

December 31, 2011 1:53 am

Wayne,
Would it be possible for you to supply your program? My experience is that verbal descriptions of calculations–such as yours above–usually contain latent ambiguities readily dispelled by viewing the actual code.
Before you object that there is no ambiguity since you’ve done nothing more than numerically evaluate the expression in Nikolov’s Equation 2, I’ll confess that, being a layman, I’m confused by that equation. My no-doubt naive interpretation is that phi is longitude and mu is (at an equinox) cosine of latitude (call it theta), in which case I would have thought that a differential of area (for a unit-radius sphere) would be cos(theta) d theta d phi = -[mu/sqrt(1-mu^2)] d mu d phi, and I was not able to detect the bracketed factor in Nikolov’s equation. So I’m hoping that your code would help us laymen understand what’s going on.
Any help would be appreciated.

John Marshall
December 31, 2011 2:16 am

I commend the noble Lord to read:-
Unified Theory of Climate, by Ned Nikolov PhD and Karl Zeller PhD, 2011.
And forget about the theory of GHG’s.

Capell
December 31, 2011 2:57 am

Paragraphs 1-7 seem logical. But instead of calculating the linear average sensitivity, it should be remembered that the greenhouse gases exhibit logarithmic sensitivity. That would imply that the present-day sensitivity would be a small fraction of the linear average.
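For concreteness, the standard simplified logarithmic forcing expression (Myhre et al., 1998) is sketched below; it illustrates the logarithmic form only, and is not Capell’s own calculation:

import math

# Simplified logarithmic CO2 forcing: delta_F = 5.35 * ln(C / C0), in W/m2.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560.0), 2))   # doubling from 280 ppm -> 3.71, the 3.7 W/m2 used above
print(round(co2_forcing(390.0), 2))   # at ~390 ppm -> 1.77 W/m2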
