Guest essay by Christopher Monckton of Brenchley
The splendidly-titled Alberto Zaragoza Comendador, commenting on my recent posting taking apart Mr. Mann’s latest fantasia in Scientific American, was startled by my statement that only half of equilibrium global warming would emerge after a couple of hundred years, because –
“Equilibrium climate sensitivity is a measure of the global warming to be expected in 1000-3000 years’ time in response to a doubling of CO2 concentration, regardless of how that doubling came about. It has nothing to do with fossil-fuel emissions scenarios.”
El Comendador wrote:
“Whoa. Whoa whoa whoa. The effects of a CO2 doubling aren’t felt until 1000 years later? So if we hit 560 ppm we’ll in theory get 2.5°C of warming. But only 1.25°C will happen in the first 200 years? Am I getting this right? Can anybody please confirm?”
In fact, he had to write again, because I did not reply at once, for the fascinating answer to his entirely proper question needs a head posting to address it properly. He wrote:
“I have to ask the question again: is the literature certain (or as certain as climate science can be) about the time it takes for warming to kick in? At one point in the article Monckton says only half of the warming happens in the first 200 years. The rest may happen over the following 1000-3000 years. Politicians have set this nonsense 2 Cº limit, which, when compared to pre-industrial times, means we only have 1.1 Cº of warming left before mega-disaster happens. I always knew it was a matter of decades, but now it seems to be a matter of centuries. If true, this takes the absurdity of the whole dangerous-anthropogenic-global-warming bandwagon to another level. And I wonder how many in the public know this: 0.01%, maybe? Of course it’s extremely convenient for the usual suspects that it will take so much time for warming to kick in: they can always claim the thing hasn’t been disproved, therefore the money should keep flowing.”
El Comendador is quite right to press his excellent question, and I must begin by apologizing that I was not able to answer it sooner.
I must also issue an Equation Alert. We’re going to have to review – in the simplest fashion – the fundamental equation of climate sensitivity, and then go deep into the IPCC’s documents to work out what they have hidden by their now-traditional device of not making it explicit what their projections entail. So, hold on to your hats. Here goes.
Climate sensitivity: The global warming ΔTt to be expected in response to a given proportionate increase in CO2 concentration over a specified term of t years is for present purposes sufficiently described by the simplified climate-sensitivity relation (1),

ΔTt = (1/q) λt ΔFt, where λt = λ0 Gt, (1)

where ΔTt, denominated in Kelvin or Celsius degrees, is the product of three quantities: the reciprocal of the fraction q of total anthropogenic forcing that is driven by CO2; a time-dependent climate-sensitivity parameter λt, which is itself the product of the instantaneous or Planck sensitivity parameter λ0 and a time-dependent temperature-feedback gain factor Gt; and the CO2 radiative forcing ΔFt. Annex B provides a more detailed discussion of (1), and of the uncertainties to which it gives rise.
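As a check on the arithmetic, relation (1) can be sketched in a few lines of Python. The inputs below are the IPCC-derived figures quoted later in the posting (q = 0.7, λ0 = 0.31 K W–1 m2, the centennial gain 0.44/0.31, and the centennial CO2 forcing 5.35 ln(713/368)); the function name is mine, not the IPCC's.

```python
import math

def delta_T(q, lam_0, G_t, delta_F):
    """Simplified climate-sensitivity relation (1):
    warming = (1/q) * lambda_t * forcing, with lambda_t = lambda_0 * G_t."""
    lam_t = lam_0 * G_t              # time-dependent sensitivity parameter, K W^-1 m^2
    return (1.0 / q) * lam_t * delta_F

# Centennial case, using the figures derived later in the posting.
warming = delta_T(q=0.7, lam_0=0.31, G_t=0.44 / 0.31,
                  delta_F=5.35 * math.log(713 / 368))
print(round(warming, 1))             # ~2.2 K of new warming over the 21st century
```

The result reproduces the 2.2 K of new 21st-century warming that the posting attributes to the IPCC's mid-range case.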
Global warming ΔTt: On business as usual, without mitigation, global warming of 2.8 K from 2000-2100 is the mid-range projection in IPCC (2007, table SPM.3). Since the Earth has warmed at a rate well below those projected in all five IPCC Assessment Reports and there has been no global warming since 1996 (RSS, 2014), 2.8 K 21st-century warming will be taken as close to the upper bound.
CO2 concentration: On business as usual, unmitigated CO2 concentration over the 21st century will attain the annual values (in μatm) in Table 1, derived from the mid-range estimates in IPCC (2007).
CO2 forcing: According to the IPCC, a radiative forcing is an external perturbation in a presumed pre-existing climatic radiative equilibrium, leading to a transient radiative imbalance that will eventually settle toward a new equilibrium at a different global temperature. Experiment and line-by-line radiative transfer analysis have demonstrated that the CO2 radiative forcing ΔFt is reasonably approximated by the logarithmic relation (2),
ΔFt = k ln(Ct/C0), (2)

where Ct/C0 is the proportionate change in CO2 concentration over t years, with C0 the unperturbed value. Myhre et al. (1998), followed by IPCC (2001), give the coefficient k as 5.35 W m–2, so that, for example, the CO2 forcing that arises from a doubled concentration is 5.35 ln 2, or 3.708 W m–2.
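Relation (2) is simple enough to verify in one line; the function name is mine.

```python
import math

def co2_forcing(C_t, C_0, k=5.35):
    """CO2 radiative forcing per relation (2): k * ln(C_t / C_0), in W m^-2."""
    return k * math.log(C_t / C_0)

f2x = co2_forcing(560, 280)          # forcing from a doubling of concentration
print(round(f2x, 3))                 # 3.708 W m^-2
```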
Planck parameter λ0: Immediately after a perturbation by an external radiative forcing such as anthropogenically-increased CO2 concentration, the climate sensitivity parameter by which the forcing is multiplied to yield the global temperature response will take its instantaneous or Planck value λ0 = 0.31 K W–1 m2 (expressed reciprocally as 3.2 W m–2 K–1 in IPCC, 2007, p. 361 fn.).
The sensitivity parameter λn: To allow for the incremental operation of temperature feedbacks, considered by the IPCC to be strongly net-positive, λn is projected to increase over time. The IPCC implicitly takes λn as rising from the instantaneous value λ0 = 0.31 K W–1 m2 via the centennial value λ100 = 0.44 K W–1 m2 and the bicentennial value λ200 = 0.50 K W–1 m2 (derived in Table 2) to the equilibrium value λ∞ = 0.88 K W–1 m2. The equilibrium value is not attained for 1000-3000 years (Solomon et al., 2009).
Centennial parameter λ100: This and longer-term values of λn allow for longer-term mitigation benefit-cost appraisals. The IPCC projects CO2 concentration of 713 μatm in 2100 against 368 μatm in 2000, and a mid-range estimate of 2.8 K warming by 2100, of which 0.6 K is pre-committed (IPCC, 2007, table SPM.3), leaving 2.2 K of new warming, of which 70% (derived in Table 2), or 1.54 K, is CO2-driven. Therefore, the IPCC’s implicit centennial climate sensitivity parameter λ100 is 1.54 K divided by 5.35 ln(713/368) W m–2, or 0.44 K W–1 m2, representing an increase of 0.13 K W–1 m2 over a century against the Planck value λ0 = 0.31 K W–1 m2. This value is half of the equilibrium value λ∞, derived below.
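The λ100 derivation can be reproduced step by step; all figures are those quoted from IPCC (2007) in the paragraph above, and the variable names are mine.

```python
import math

warming_2100 = 2.8                   # K, IPCC mid-range 21st-century warming
precommitted = 0.6                   # K, warming already in the pipeline
co2_fraction = 0.7                   # fraction q of anthropogenic forcing from CO2

new_co2_warming = (warming_2100 - precommitted) * co2_fraction   # 1.54 K
forcing_100 = 5.35 * math.log(713 / 368)     # centennial CO2 forcing, W m^-2
lam_100 = new_co2_warming / forcing_100
print(round(lam_100, 2))             # 0.44 K W^-1 m^2
```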
Bicentennial parameter λ200: Examination of the six SRES emissions scenarios for 1900-2100 (Table 2) demonstrates the IPCC’s implicit bicentennial sensitivity parameter λ200 to be 0.50 K W–1 m2 on each scenario.
Equilibrium parameter λ∞: Dividing the IPCC’s 3.26 K central estimate of climate sensitivity to a CO2 doubling (IPCC, 2007, p. 798, box 10.2) by the 3.71 W m–2 radiative forcing in response to a CO2 doubling gives the implicit equilibrium sensitivity parameter λ∞ = 0.88 K W–1 m2, attained after 1000-3000 years.
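The equilibrium parameter follows from the same arithmetic, dividing the IPCC's central doubling sensitivity by the doubling forcing:

```python
import math

lam_inf = 3.26 / (5.35 * math.log(2))   # 3.26 K / ~3.71 W m^-2
print(round(lam_inf, 2))                # 0.88 K W^-1 m^2
```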
CO2 fraction: In Table 2, the fraction q = 0.7 of total anthropogenic forcing attributable to CO2 emissions is derived from each of the six SRES standard emissions scenarios.
Plotting the four values λ0 = 0.31 K W–1 m2, λ100 = 0.44 K W–1 m2, λ200 = 0.50 K W–1 m2, and λ∞ = 0.88 K W–1 m2 produces curve A in Fig. 1. As the inset panel A shows, the parameter rises quite sharply in the first century or two.
Figure 1. Two equally plausible evolutions of the climate-sensitivity parameter λn. Version A is implicit in IPCC (2007). However, version B, an epidemic curve, is equally plausible.
Now, the various values of the climate-sensitivity parameter arise over time because temperature feedbacks do not take effect instantaneously, particularly in the IPCC’s very high-sensitivity regime. They unfold on timescales of centuries to millennia.
One example of a millennial-scale feedback is the melting of the land-based ice in Greenland, which the IPCC says will only happen if global temperatures remain 2 Cº higher than today for several millennia. And even this is probably an exaggeration. Most of you are too young to remember, but 8000 years ago the mean temperature at the summit of the Greenland plateau was 2.5 Cº higher than it is today (Fig. 2), but the ice there did not melt. So the most one might expect, even after several millennia, is some further loss of ice around the coastal fringes of Greenland.
In passing, there is a characteristically hysterical recent piece (in The Guardian, inevitably) by the accident-prone Australian professional bed-wetter Graham Readfearn, saying that from 2002-2011 some 260 billion tons of ice a year melted from Greenland. Oo-er! Even if that were the case, sea level would have risen by just 0.7 mm a year, or little more than a quarter of an inch over the decade.
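The quarter-inch figure is easily checked. I assume here the standard global ocean surface area of about 3.62 × 10^14 m² and that 1 Gt of meltwater occupies 10^9 m³; neither value appears in the original text.

```python
MELT_GT_PER_YR = 260          # Gt of ice lost per year (the figure cited)
OCEAN_AREA_M2 = 3.62e14       # global ocean surface area, m^2 (assumed)

# 1 Gt of meltwater = 1e9 m^3 at a density of ~1000 kg m^-3
rise_mm_per_yr = MELT_GT_PER_YR * 1e9 / OCEAN_AREA_M2 * 1000
rise_inches_per_decade = rise_mm_per_yr * 10 / 25.4
print(round(rise_mm_per_yr, 2))           # ~0.72 mm/yr
print(round(rise_inches_per_decade, 2))   # ~0.28 inches over the decade
```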
Figure 2. Reconstructed temperatures at the summit of the Greenland ice cap, 6000 BC to date.
For reasons such as this, it is no less plausible that feedbacks will come into play slowly to start with, as in inset panel B, than that they will act near-instantaneously in the first century or two, as in the IPCC’s implicit regime (Fig. 1, inset panel A).
The literature is pointing ever more clearly towards only the smallest net-positive feedbacks even at equilibrium. In that event, the global warming from a doubling of CO2 concentration will not much exceed 1 Cº, and that will come about within a century or two rather than several millennia. But even on the IPCC’s high-sensitivity central case, after 100-200 years the warming in response to a CO2 doubling would not have reached much more than 1.5 Cº, because the feedbacks under a high-sensitivity regime take longer to come into full effect.
Under the IPCC’s imagined regime, of course, the warming would continue to increase all the way to equilibrium, though at a slower rate than in the first couple of centuries.
To be fair, one should also bear in mind that CO2 concentration on business as usual will continue to rise even beyond the doubling from the pre-industrial 280 μatm to 560 μatm in around 2080. However, CO2 concentration would have to double again, from 560 to 1120 μatm, to have the same warming effect as that of the previous doubling.
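The logarithmic form of relation (2) makes the point directly: each successive doubling contributes the same forcing, so it takes twice the CO2 increment to repeat the previous warming effect.

```python
import math

k = 5.35
first_doubling = k * math.log(560 / 280)     # pre-industrial 280 -> 560 uatm
second_doubling = k * math.log(1120 / 560)   # 560 -> 1120 uatm
print(round(first_doubling, 3), round(second_doubling, 3))   # both 3.708 W m^-2
```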
Finally, it is worth reiterating that there is no, repeat no, consensus in the scientific literature in support of the IPCC’s assertion that recent warming is mostly manmade. Legates et al. (2013) established that only 0.3% of abstracts of 11,944 climate science papers published in the 21 years 1991-2011 explicitly stated that we are responsible for more than half of the 0.69 Cº global warming since we began to have a theoretically-detectable effect on global temperature in 1950.
Suppose that 0.33 Cº – just under half of the observed 0.69 Cº – was our contribution to global warming since 1950. Suppose also that CO2 concentration in that year was 305 ppmv and is now 398 ppmv.
Then the radiative forcing from CO2 that contributed to that warming was 5.35 ln(398/305) = 1.42 Watts per square meter. Assume that the IPCC’s central estimate of 713 ppmv CO2 by 2100 (Table 1) is accurate. Assume also that the CO2 forcing from now to 2100 will be 5.35 ln(713/398), or 3.12 W m–2.
Assuming that the 0.7 ratio of CO2 forcing to that from other greenhouse gases (derived in Table 2) will remain broadly constant, and assuming that by 2100 temperature feedbacks will have exercised 0.44/0.31 of the warming effect seen to date, the manmade warming to be expected by 2100 on the basis of the 0.33 Cº warming since 1950 will be 3.12/1.42 x 0.33 x 0.44/0.31 = 1 Cº.
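The 1 Cº extrapolation in that last step can be checked end to end; every input is one given in the text, and only the variable names are mine.

```python
import math

observed = 0.33                        # Cº, assumed manmade warming since 1950
f_past = 5.35 * math.log(398 / 305)    # CO2 forcing 1950-present, ~1.42 W m^-2
f_future = 5.35 * math.log(713 / 398)  # CO2 forcing present-2100, ~3.12 W m^-2
feedback_scaling = 0.44 / 0.31         # centennial vs. instantaneous parameter

warming_2100 = (f_future / f_past) * observed * feedback_scaling
print(round(warming_2100, 1))          # ~1.0 Cº
```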
Broadly speaking, the IPCC expects this century’s warming to be equivalent to that from a doubling of CO2 concentration. In that event, 1 Cº is indeed all the warming we should expect from a CO2 doubling. And is that going to be a problem?
[No.]
@Lord Monckton 11:47.
My whole point is that there is near zero (0, nought, zilch, nada) NET CO2 15 micron IR energy emitted by the surface to be absorbed by the nearby (absorption depth ~10 m) atmosphere.
It’s because that band is self-absorbed and mutually annihilates the same wavelength range surface IR emission to the atmosphere. Therefore, there can be no atmospheric warming from this cause (even if it were possible because Tyndall’s Experiment has been badly misinterpreted, but that’s another story).
What would, in the absence of any other factor, cause up to 1.2 K no-feedback CO2-AGW, for doubled pCO2, is the reduction of NET surface IR in the expanded 15 micron band wavelength range. Because the sum of convection, evapo-transpiration and NET surface IR has to be equal, on average, to the 160 W/m^2 thermalised SW at the surface, the surface temperature will rise until increased convection and evapo-transpiration makes up the shortfall.
This is basic coupled convection and radiation, which I have measured ad nauseam in steel plants, aluminium plants and elsewhere for many decades. You can do your own experiment with a beach windbreak. Reduce convection and the sand temperature increases to increase NET surface IR. Take away the windbreak and the sand temperature falls as convective heat loss increases. There is no analytical solution for the separate heat transfer coefficients, so you get tables in handbooks: I used McAdam’s ‘Heat Transfer’. It’s in computers now.
So, we have near zero extra CO2 15 micron band GHG-IR absorption, no back radiation and the maximum CO2 climate sensitivity, in the absence of any other factor, is ~1.2 K. However, there is another factor I intend to publish reducing real CO2 climate sensitivity to near zero, as shown experimentally.
That means the real AGW we had in the 1980s and 1990s was from something else: I submitted a paper to a Nature journal two years ago and had it bounced back in 48 hours because I show in it that Sagan’s aerosol optical physics is wrong and the sign of the indirect AIE is reversed! To get it published I have to tell the physicists that much of their experimental astrophysics is also suspect – not easy!
That the equilibrium global surface air temperature rises proportionately with the rise in the logarithm of the CO2 concentration is a premise of Mr. Monckton’s argument. The proportionality constant is “the equilibrium climate sensitivity” (TECS). The IPCC and Monckton assume TECS to be a constant. Sometimes the two parties differ about the magnitude of this constant.
That TECS is a constant is an article of faith among the authors of IPCC assessment reports. They express this faith when they advise policy makers of their estimates of the value of TECS. Is it true that TECS is a constant? This cannot be determined, because the equilibrium global surface air temperature cannot be observed. Since that is so, arguments that assume TECS to be a constant are non-falsifiable and thus scientifically meaningless. Also, while policy makers act as though they have been provided with information about the outcomes of their policy decisions when given a value for TECS, they have been provided with no information. This conclusion follows from the mathematical definition of “information.”
BERNARD T CLARK says:
March 26, 2014 at 9:12 am
For the last week, I have not been able to properly receive WUWT. Is anyone else having the same problem? I get no article, just comments. And the comments are not left-justified, they’re center-justified. Additionally, there are lines overwritten.
HELP!
**********************
On Windows hit CONTROL-F5; you have corrupt CSS stored in temp files.
It’s either that or you set a magnification/font change just on the one site.
Considering that (1) there has been a net cooling since the 1930s despite a 40 percent increase in atmospheric CO2, (2) with the recent cooling (since 1997) all the warming since 1950 has already been reversed, (3) the ample historical and geological records pointing to high temps/low CO2 and low temps/high CO2, I fail utterly to be persuaded that there is ANY sensitivity of climate to CO2, that is statistically distinguishable from zero, WHATSOEVER.
Monckton of Brenchley says:
March 26, 2014 at 11:47 am
‘Theo Goodwin says: “Have you ever asked an Alarmist to state the relationship between a computer model and the observational evidence? Try asking them how the relationship differs from that between a scientific theory and its evidence. No attempt at communication will be forthcoming.” I recently did exactly that in a debate against the treasurer of the Royal Society at Oundle School. He was reduced to gasping incoherence and eventually spluttered that I had been “dishonest”. To ask a question, however, is not dishonest. In the immortal words of Housman’s Greek Chorus, “I only ask because I want to know”.’
Bravo for you, Sir. Your heroic efforts in behalf of science and humanity are greatly appreciated.
I’d like to address the question that the treasurer of the Royal Society didn’t want to address. The relationship between a scientific theory and its evidence is manifested in the comparison of the predicted to the observed relative frequencies of the outcomes of events by which this theory was statistically validated. The IPCC’s computer models do not reference events or their outcomes hence being insusceptible to being validated. The computer models are susceptible to being “evaluated,” a scientifically meaningless concept that can be confused with “validated” because the two words sound similar.
In an “evaluation,” projections (NOT predictions) of the computer models are plotted on X-Y coordinates alongside a global temperature time series. Skeptics who believe, with Mr. Monckton, in the constancy of the equilibrium climate sensitivity are duped into thinking they are looking at an attempt at validation of the computer models. This deception works on most skeptics.
Phew. This has to be the best response I’ve ever gotten. It’s refreshing to see somebody with technical knowledge explaining everything step by step, and I can’t imagine the effort it took you to uncover what exactly the IPCC was saying. (I suppose you’ve read Montford’s book on the hockey stick, so this seems to be the norm in climate science!)
Of course I don’t understand a thing about the equations, but it seems the biggest difference in all the climate-sensitivity scenarios is in the long-term effects. So if somebody says the warming resulting from a CO2 doubling is 1ºC, and the IPCC says it’s 3ºC, they obviously disagree on how much warming will take place this century and the following. But the real disagreement is in the warming beyond that, as the IPCC predicts Greenland will melt down, which will create further feedbacks and so on.
Something tells me their Greenland prediction would look as smart as the one about the Himalayas. I might not be able to watch Greenland melt in the year 2200, but we can always get Ray Kurzweil to check it out.
So thanks a lot. I’ve bookmarked the post so if/when my math moves beyond caveman level I will come back.
Moderator: Only the Lambda(sub, infinity) values (λ∞) are 0.88
I generally agree with Monckton’s summary at 11:47 but take exception to the discussion of acidification as a fallback. Granted, ocean acidification is a conclusion sometimes offered by climatologists with little to no training or experience in ocean systems, so I comfortably consider the claims they proffer about the oceans with strong skepticism. That being said, the study of the oceans per se in the context of AGW is comparatively recent.
At any rate, my point is more that, since most study of the oceans is done by scientists outside of climatology as used here, I am quite comfortable examining and accepting their claims and processes, which are in no way characterized as “settled” or complete.
For instance, the claim that ocean pH has dropped 25% invites a host of points and conversations about the processes and implications.
an individual CO2 molecule has no temperature
As well, an individual CO2 molecule cannot produce sound. Some properties are macroscopic and others are microscopic.
I’m delighted that El Comendador is happy with my perhaps unexpectedly detailed response to his interesting question about when the warming predicted by the IPCC is predicted to occur. As he will have appreciated, the relevant data are in table 2 of the head posting, which demonstrates that the climate-sensitivity parameter rises from an instantaneous 0.31 Kelvin per Watt per square meter to 0.5 K/W/m2 after 200 years, on its way to an eventual 0.88 K/W/m2. Multiply this parameter by the 3.7 W/m2 radiative forcing and you get the warming after 200 years – about 1.8 Cº. Not a lot to worry about. The rest will happen much more gradually.
And El Comendador need not worry about Greenland’s icecap anytime soon. As the head posting shows, Greenland was 2.5 K warmer 8000 years ago than it is today, and the ice (except at the low-lying coastal fringes) did not melt. This is not the place to talk about the trade-off between latitude, altitude, ambient temperature and the thermal inertia of large ice masses, but the notion that Greenland’s icecap is going to disappear anytime soon can be dismissed as mere rodomontade.
Mr Oldberg, as usual, wilfully misunderstands the head posting, redefines several terms confusingly and inaccurately, and fails to appreciate that when I present the IPCC’s methodology, including its determination of equilibrium climate sensitivity, I do not necessarily agree with it. Indeed, the purpose of the head posting was merely to show that, even if one did agree with it, there is very little to worry about.
And, whether Mr Oldberg likes it or not, one way to validate a computer model is to test its predictions (a projection is simply a form of prediction, and the two words may be used interchangeably without loss of clarity or rigor) against what has actually happened. Models that examine the question how fast the world will warm in response to our forcing of the climate can best be validated, therefore, by repeated comparison between the warming they predict over a given period and the warming that is observed and measured over that period. It is really as simple as that. And the models have failed, relentlessly, on the side of extravagant exaggeration.
RMF says he does not like me calling “acidification” a fallback argument of the true-believers. And he seems to suggest that most researchers into ocean pH are Olympian in their detachment from the debate about climate change. Alas, that is not so: just as with warming, so with “acidification”, a small band of politicized extremists led by one Hoegh-Guldberg in Australia are busy pushing the argument well beyond where true science would allow it to run. To take one example, the notion adumbrated by RMF that ocean pH has fallen by 25% since Man first began to influence climate is entirely absurd and without foundation. There is no systematic measurement of ocean pH worldwide. Local measurements are unduly affected by coastal influences such as rainfall runoff. And a prize is still available for anyone who can devise an automated method of measuring pH throughout the world’s oceans. There is no sound empirical evidence that ocean pH has changed at all, and no sound theoretical reason for supposing that it would do much harm to the calcifying organisms if it did.
Mr. Monckton:
Thank you for taking the time to respond. Among the claims that you make in your response is that “…one way to validate a computer model is to test its predictions (a projection is simply a form of prediction, and the two words may be used interchangeably without loss of clarity or rigor) against what has actually happened.” This happens to be a topic on which I have conducted a study and reported my finding in a peer-reviewed article. This article is available to you at http://wmbriggs.com/blog/?p=7923 .
In brief, my finding is contrary to your claim. When “prediction” and “projection” are used as synonyms in making a global warming argument the effect is to make of this argument an equivocation. An equivocation is an argument in which a term changes meaning in the midst of this argument. By logical rule, one cannot draw a proper conclusion from an equivocation. To draw an IMPROPER conclusion is the equivocation fallacy.
One need not be “Olympian” to do basic research and studies of the ocean; one merely need not have reached the conclusion prior to the work. Monckton, you’re not suggesting it’s not useful to theorize about ocean pH, or to theorize and carry out studies on how pH affects marine forms, are you?
Monckton of Brenchley – you have not responded to the issue I raised March 26 5:52am about how it is totally inappropriate to apply Stefan-Boltzmann equations to the transparent thin surface layer of the oceans. As more than 99% of solar radiation passes through the first 1cm, the absorptivity is less than 0.01. The models are wrong in assuming this surface layer receives most of its thermal energy from radiation. It in fact receives over 99% by conduction and convection.
(continued)
Considering the mean free path of molecules, there are ample molecules in that 1cm layer to determine the temperature of the first 1cm layer of air above the surface, because molecules collide at the interface and exchange kinetic energy. Hence the temperatures of these two layers are very close and should be reflected in about 70% of the temperature records that are used to determine global means. How can carbon dioxide back radiation (which is pseudo scattered anyway) affect such temperatures when the main energy input is non-radiative?
RMF misunderstands me. I am not consigning legitimate bioresearch in the oceans to the dustbin. No doubt there are armies of diligent marine biologists beavering away with an impartiality and indifference to the over-politicized debate on the climate that is as commendably detached from mere politics as the Gods of Olympus were said to be.
However, it is true that a small band of trouble-makers, just like the small band of trouble-makers who have peddled the global warming nonsense, have been whipping up a scare about supposed “acidification” of the oceans that is largely nonsense. The very term “acidification” is a give-away. The oceans are pronouncedly alkaline. They were last acid, for a brief and largely unexplained period, some 55 million years ago (you’re too young to remember). And when we talk of “acid”, it need not be particularly strong acid. The mean pH of the oceans is 7.8-8.4, with wide variations along the coasts, particularly in South America. That interval is decidedly alkaline. The pH of rainwater, however, is pronouncedly acid at 5.4-5.6. Yet calcifying organisms thrive along coastlines prone to flooding (think Brisbane River & Great Barrier Reef).
Dr Ha’n justifiably complains that I had not replied to his earlier posting to the effect that one cannot apply the Stefan-Boltzmann equation to the ocean surface. However, after a diligent rereading of the head posting I can find nothing in it that makes any such suggestion, so I fear there has been a breach of Eschenbach’s Rule.
True, the instantaneous or Planck value of the climate-sensitivity parameter lambda, at 0.31 Kelvin per Watt per square meter, to which I did refer in the head posting, is indeed the first derivative of the Stefan-Boltzmann equation. However, this parameter is derived not at or anywhere near the surface at all but at the characteristic-emission altitude, about six miles above us halfway up the troposphere, at which incoming and outgoing fluxes of radiation are by definition identical. I once spent a happy weekend with some 30 years of mid-troposphere temperature change kindly supplied by John Christy, determining for myself that the value 0.31 K/W/m2 used by the models is correct. At least they got something right. But it was my realization that just about none of the true-believers who were saying The Science Is Settled had the faintest idea how this parameter was actually determined that made me realize we were dealing not with science but with Communistic politics.
As it becomes ever more apparent to all that the claims of the totalitarian Left about the climate are in all material respects exaggerated, people will perhaps look more closely at the habit of routine and egregious mendacity that is a consequence of the enormous campaign of disinformation by a million agents of Soviet propaganda, that infected our media, our academe and our other institutions for decades. Though the evil empire that promoted that vicious campaign of lies was eventually flung into oblivion, today’s hard Left, having learned how to dissemble on the grand scale, have now largely lost the ability to tell the difference between that which is true and that which is not. To them, as to the Soviets who trained them so well and often without their knowledge, it is not the truth but the Party Line that matters. On the climate, the Party Line is now being daily demonstrated to have been in substance false. As more and more people come to realize this, they will begin to question everything they are told by the left/Green inheritors of the Communist/fascist mantle, and the world will be a merrier place for that.
I’ve been appeased, but that probably has more to do with dinner.
Thanks, Christopher, Lord Monckton. Very good article.
Thanks for pointing to the alternate solution for the evolution of the climate-sensitivity parameter.
This might partially explain the overshoot in the GCMs that use the faster-temperature-rise solution.
[SNIP – Doug Cottonesque language from Sydney]
Monckton of Brenchley: Implicit in your article is the assumption that carbon dioxide warms the surface by the process of back radiation as “explained” in IPCC documentation. That also is then supported by Stefan-Boltzmann calculations based on the sum of insolation and back radiation. You have not answered my point, and thus your whole argument is as fictitious as the greenhouse conjecture itself.
I have studied the physics of radiative heat transfer and thermodynamics extensively, so I suggest you have your response reviewed by a physicist with similar knowledge.
Monckton – continued. You must be aware of the claim regarding 33 degrees of warming. This is obviously calculated using S-B applied to the surface, which acts nothing like a black or gray body because 70% of it at least is a thin transparent water surface, and black and gray bodies do not transmit any radiation by definition. Yet your sensitivity calculations are based on similar assumptions. Does water vapor cause about 25 degrees of that warming – say 10 degrees for every 1% WV? If so, show me data indicating that locations with 4% WV above them are 30 degrees hotter than those with 1% above them at similar altitudes and latitudes.
Alex Ha’n: there are few physicists with the appropriate knowledge.
I suggest that Lord Monckton contact Will Happer of Princeton, who warned ‘the team’ 21 years ago that they had got the IR physics wrong.
Alternatively, I could be contacted, but I am just a humble Metallurgical Engineer with a PhD in Applied Physics and a seriously bad reputation for iconoclasm!
Thanks AlecM – I may ask Will Happer and yourself to review a comprehensive paper I’m writing which ought to be the last nail in the greenhouse coffin, but of course Monckton will continue pulling out the nails with complete disregard for physics and reality throughout the Solar System, where, at least on other planets it seems, physics works altogether differently.
Good point. 15 microns corresponds to a temperature of -80 degrees C; definitely upper-atmosphere territory (temperature-to-wavelength conversion).
“””””…..AlecM says:
March 26, 2014 at 10:11 am
@george e smith: I am simply applying the Law of Conservation of Energy and Maxwell’s Laws (via Poynting Vectors = monochromatic Irradiance) to the vector interaction of the surface and the (opposite direction) atmospheric Radiation Fields (the collection of Poynting Vectors over all possible IR wavelengths).
This is simple physics at 2nd year undergraduate level. The Trenberth Energy budget with ‘back radiation’ aka ‘forcing’ bouncing back as part of the surface RF, now cast in the role of a real energy flux, breaches Conservation of Energy so can’t happen……”””””
Well, I don’t have a very good feel for what “2nd year undergraduate level simple physics” would be.
I had five full years of physics in high school, long before I ever set foot on any University campus, starting at around age 13. Well we did some dabblings in “mechanics ” (statics & dynamics) during the two years before that.
They didn’t call it physics, in high school, didn’t want to stampede the sheep, I suppose. So it was just H, L, S, E&M , plus mechanics. No I didn’t go to any ordinary kind of high school; it was called a Technical High School, before they changed the name eventually, (after I left), to “College” which is just a $5 word for “high school.”
But I understand conservation of energy, and also Maxwell’s equations, since I did take Radiophysics as one of my three physics majors at the University level (that was third year).
Don’t quite see how to apply that (simply) to CO2 changes.
I would think that monochromatic irradiance would be nearly zero, except for perhaps today’s lasers, since in thermal radiation spectra, at ordinary Temperatures, the energy at any single wavelength would be miniscule.
But maybe RichardsCourtney can explain your idea to me, once he figures it out.
And the numbers in Kevin Trenberth’s “energy budget” are actually power numbers, not energy numbers; well more rigorously power areal density numbers.
If he gave say the number of terajoules incident on earth over a typical 30 year climate interval, then I would call that an energy budget; but he doesn’t, he uses power units, and we have:
P = dE / dt which is an instantaneous differential quantity, which is the slope of an energy versus time graph.
Well this is a bit far off Christopher’s thread subject.
Lord Monckton wrote;
“Though it is clear on paleoclimate timescales that it is temperature that changed first and CO2 concentration change that followed, the CO2 concentration change was – and is – capable of reinforcing and amplifying the temperature change.”
Go on, pull my other leg while you are at it. That is not only a bad example of circular logic, it isn’t even a good example of Möbius-strip logic.
So to state it another way; temperature drives CO2 levels AND CO2 levels drive temperature, UM KAY….. If you say so.
Surely you are joking….. (Ok, apparently you are serious and I’ll refrain from calling you Shirley).
It has to be ONE or the OTHER, not BOTH.
CO2 levels could conceivably affect the response time of the gases in the atmosphere (causing them to warm/cool more quickly after sunrise, for example), but they cannot be controlled by AND ALSO control the average temperature.
How, one would reasonably ask, can this mythical molecule (CO2) know when to “obey” the temperature and when to “command” the temperature ?????
Your logic would lead to a runaway train…….
Cheers, Kevin.
Monckton of Brenchley and others: This comment has greater significance than many realise …
http://wattsupwiththat.com/2012/01/24/refutation-of-stable-thermal-equilibrium-lapse-rates/#comment-909809
In that post, Robert Brown was incorrect in assuming that perpetual motion occurs. What does happen is that a state of thermodynamic equilibrium evolves with maximum entropy in any system or combination of connected systems; since this state has no unbalanced energy potentials, it is isentropic, with a thermal gradient such that the sum of mean molecular kinetic energy and gravitational potential energy is homogeneous.