Guest essay by Nic Lewis
The Otto et al. paper has received a great deal of attention in recent days. While the paper’s estimate of transient climate response was low, the equilibrium/effective climate sensitivity figure was actually slightly higher than that in some other recent studies based on instrumental observations. Here, Nic Lewis notes that this is largely due to the paper’s use of the Domingues et al. upper ocean (0–700 m) dataset, which assesses recent ocean warming to be faster than other studies in the field do. He examines the effects of updating the Otto et al. results from 2000–09 to 2003–12 using different upper ocean (0–700 m) datasets, with surprising results.
Last December I published an article here entitled ‘Why doesn’t the AR5 SOD’s climate sensitivity range reflect its new aerosol estimates?‘ (Lewis, 2012). In it I used a heat-balance (energy-budget) approach based on changes in mean global temperature, forcing and Earth system heat uptake (ΔT, ΔF and ΔQ) between 1871–80 and 2002–11. I used the RCP 4.5 radiative forcings dataset (Meinshausen et al, 2011), which is available in .xls format here, conformed it with solar forcing and volcanic observations post 2006 and adjusted its aerosol forcing to reflect purely satellite-observation-based estimates of recent aerosol forcing.
I estimated equilibrium climate sensitivity (ECS) at 1.6°C, with a 5–95% uncertainty range of 1.0–2.8°C. I did not state any estimate for transient climate response (TCR), which is based on the change in temperature over a 70-year period of linearly increasing forcing and takes no account of heat uptake. However, a TCR estimate was implicit in the data I gave, if one makes the assumption that the evolution of forcing over the long period involved approximates a 70-year ramp. This is reasonable since the net forcing has grown substantially faster from the mid-twentieth century on than previously. On that basis, my best estimate for TCR was 1.3°C. Repeating the calculations in Appendix 3 of my original article without the heat uptake term gives a 5–95% range for TCR of 0.9–2.0°C.
The ECS and TCR estimates are based on the formulae:
(1) ECS = F2× · ΔT / (ΔF − ΔQ)
(2) TCR = F2× · ΔT / ΔF
where F2× is the radiative forcing corresponding to a doubling of atmospheric CO2 concentrations.
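To make the arithmetic concrete, here is a minimal Python sketch of equations (1) and (2). The ΔF and ΔQ values are assumptions for illustration only, not the study’s actual inputs:

```python
# Minimal sketch of equations (1) and (2). dF and dQ are assumed
# for illustration only; they are not the study's inputs.
F2X = 3.71   # W/m2, forcing from a doubling of CO2 (RCP4.5 basis)
dT = 0.76    # C, change in decadal-mean global temperature
dF = 2.0     # W/m2, change in forcing (assumed)
dQ = 0.6     # W/m2, change in system heat uptake (assumed)

ecs = F2X * dT / (dF - dQ)   # equation (1)
tcr = F2X * dT / dF          # equation (2)
print(f"ECS = {ecs:.2f} C, TCR = {tcr:.2f} C")
```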
A short while ago I drew attention, here, to an energy-budget climate study, Otto et al. (2013), that has just been published in Nature Geoscience, here. Its author list includes fourteen lead/coordinating lead authors of relevant AR5 WG1 chapters, and myself. That study uses the same equations (1) and (2) as above to estimate ECS and TCR. It uses a CMIP5-RCP4.5 multimodel mean of forcings as estimated by general circulation models (GCMs) (Forster et al, 2013), likewise adjusting the aerosol forcing to reflect recent satellite-observation-based estimates – see Supplementary Information (SI) Section S1. Although the CMIP5 forcing estimates embody a lower figure for F2× (3.44 W/m2) than do those per the RCP4.5 database (F2×: 3.71 W/m2), TCR estimates using the two different sets of forcing estimates are almost identical, whilst ECS estimates are marginally higher using the CMIP5 forcing estimates[i].
Although the Otto et al. (2013) Nature Geoscience study illustrates estimates based on changes in global mean temperature, forcing and heat uptake between 1860–79 and various recent periods, it states that the estimates based on changes to the decade 2000–09 are arguably the most reliable, since that decade has the strongest forcing and is little affected by the eruption of Mount Pinatubo. Its TCR best estimate and 5–95% range based on changes to 2000–09 are identical to what is implicit in my December study: 1.3°C (uncertainty range 0.9–2.0°C).
While the Otto et al. (2013) TCR best estimate is identical to that implicit in my December study, its ECS best estimate and 5–95% range based on changes between 1860–79 and 2000–09 are 2.0°C (1.2–3.9°C), somewhat higher than the 1.6°C (1.0–2.9°C) per my study, which was based on changes between 1871–80 and 2002–11. About 0.1°C of the difference is probably accounted for by rounding and the difference in F2× factors due to the different forcing bases. But, given the identical TCR estimates, differences in the heat-uptake estimates used must account for most of the remaining 0.3°C difference between the two ECS estimates.
Both my study and Otto et al. (2013) used the pentadal estimates of 0–2000-m deep-layer ocean heat content (OHC) updated from Levitus et al. (2012), and made allowances, in line with recent studies, for heat uptake in the deeper ocean and elsewhere. The two studies’ heat uptake estimates differed mainly in the treatment of the 0–700-m layer of the ocean. I used the estimate included in the Levitus 0–2000-m pentadal data, whereas Otto et al. (2013) subtracted the Levitus 0–700-m pentadal estimates from that data and then added 3-year running mean estimates of 0–700-m OHC updated from Domingues et al. (2008).
Since 2000–09, the most recent decade used in Otto et al. (2013), ended more than three years ago, I will instead investigate the effect of differing heat uptake estimates using data for the decade 2003–12 rather than for 2000–09. Doing so has two advantages. First, forcing was stronger during the 2003–12 decade, so a better-constrained estimate should be obtained. Secondly, by basing the 0–700-m OHC change on the difference between the 3-year means for 2003–05 and for 2010–12, the influence of the period of switchover to Argo – with its larger measurement uncertainties – is reduced.
In this study, I will present results using four alternative estimates of total Earth system heat uptake over the most recent decade. Three of the estimates adopt exactly the same approach as in Otto et al. (2013), updating estimates appropriately, and differ only in the source of data used for the 3-year running mean 0–700-m OHC. In the first case, I calculate it from the updated Levitus annual data, available from NOAA/NODC here. In the second case I calculate it from updated Lyman et al. (2010) data, available here. In the third case I use the updated Domingues et al. (2008) data archived at the CSIRO Sea Level Rise page in relation to Church et al. (2011), here. Since that data only extends to the mean for 2008–10, I have extended it for two years at a conservative (high) rate of 0.33 W/m2 – which over that period is nearly double the rate of increase per the Levitus dataset, and nearly treble that per the Lyman dataset. The final estimate uses total system heat uptake estimates from Loeb et al. (2012) and Stephens et al. (2012). Those studies melded satellite-based estimates of top-of-atmosphere radiative imbalance with ocean heat content estimates, primarily updated from the Lyman et al. (2010) study. They estimated average total Earth system heat uptake/radiative imbalance at respectively 0.5 W/m2 over 2000–10 and 0.6 W/m2 over 2005–10. I take the mean of these two figures as applying throughout the 2003–12 period.
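To illustrate the mechanics of the heat-uptake calculation, here is a minimal sketch: the annual 0–700-m OHC values are placeholders, not data from any of the linked sources, and the conversion simply spreads the OHC change over the seven years between the midpoints of the two 3-year means.

```python
import numpy as np

# Sketch of the 0-700 m heat-uptake arithmetic: difference the 3-year
# means for 2003-05 and 2010-12, then convert to W/m2 over the Earth's
# surface. The annual OHC values below are placeholders (10^22 J).
EARTH_AREA = 5.10e14      # m2, Earth's surface area
SEC_PER_YEAR = 3.156e7

ohc = np.array([8.0, 8.6, 9.1, 9.5, 9.8,
                10.1, 10.4, 10.9, 11.3, 11.6])   # 2003..2012, illustrative
delta_ohc = ohc[7:10].mean() - ohc[0:3].mean()   # 10^22 J, 3-yr mean difference
years = 7.0                                      # midpoint 2004 -> midpoint 2011
uptake = delta_ohc * 1e22 / (years * SEC_PER_YEAR * EARTH_AREA)
print(f"0-700 m heat uptake ~ {uptake:.2f} W/m2")
```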
I use the same adjusted CMIP5-RCP4.5 forcings dataset as used in the Otto et al. (2013) study, updating them from 2000–09 to 2003–12, to achieve consistency with that study (data kindly supplied by Piers Forster). Likewise, the uncertainty estimates I use are derived on the same basis as those in Otto et al. (2013).
I am also retaining the 1860–79 base reference period used in Otto et al. (2013). That study followed my December study in deducting 50% of the 0.16 W/m2 estimate of ocean heat uptake (OHU) in the second half of the nineteenth century per Gregory et al. (2002), the best-known of the earlier energy budget studies. The 0.16 W/m2 estimate – half natural, half anthropogenic – seemed reasonable to me, given the low volcanic activity between 1820 and 1880. However, I deducted only 50% of it to compensate for my Levitus 2012-derived estimate of 0–2000-m ocean heat uptake being somewhat lower than that per some other estimates. Although the main reason for making the 50% reduction in the Gregory (2002) OHU estimate for 1861–1900 disappears when considering 0–700-m ocean heat uptake datasets with significantly higher trends than per Levitus 2012, in the present calculations I nevertheless apply the 50% reduction in all cases.
Table 1, below, shows comparisons of ECS and TCR estimates using data for the periods 2000–09 (Otto et al., 2013), 2002–11 (Lewis, 2012 – my December study) and 2003–12 (this study) using the relevant forcings and 0–700 m OHC datasets.
Table 1: ECS and TCR estimates based on last decade and 0.08 W/m2 ocean heat uptake in 1860–79.
Whichever periods and forcings dataset are used, the best estimate of TCR remains 1.3°C. The 5–95% uncertainty range narrows marginally, to 0.9–1.95°C, when using changes to 2003–12 – which involve slightly higher forcing increases – rather than to 2000–09 or 2002–11. The ‘likely’ range (17–83%) is 1.05–1.65°C. (These figures are all rounded to the nearest 0.05°C.) The TCR estimate is unaffected by the choice of OHC dataset.
The ECS estimates using data for 2003–12 reveal the significant effect of using different heat uptake estimates. Lower system heat uptake estimates and the higher forcing estimates resulting from the 3-year roll-forward of the period used both contribute to the ECS estimates being lower than the Otto et al. (2013) ECS estimate, the first factor being the more important.
Although stating that estimates based on 2000–09 are arguably most reliable, Otto et al. (2013) also gives estimates based on changes to 1970–79, 1980–89, 1990–99 and 1970–2009. Forcings during the first two of those periods are too low to provide reasonably well-constrained estimates of ECS or TCR, and estimates based on 1990–99 may be unreliable since this period was affected both by the eruption of Mount Pinatubo and by the exceptionally large 1997–98 El Niño. However, the 1970–2009 period, although having a considerably lower mean forcing than 2000–09 and being more impacted by volcanic activity, should – being much longer – be less affected by internal variability than any single decade. I have therefore repeated the exercise carried out in relation to the final decade, in order to obtain estimates based on the long period 1973–2012.
Table 2, below, shows comparisons of ECS and TCR estimates using data for the periods 1970–2009 (Otto et al., 2013) and 1973–2012 (this study) using the relevant forcings and 0–700-m OHC datasets. The estimates of system heat uptake from two of the sources used for 2003–12 do not cover the longer period. I have replaced them with an estimate based on data, here, updated from Ishii and Kimoto (2009). Using 2003–12 data, the Ishii and Kimoto dataset gives an almost identical ECS best estimate and uncertainty range to the Lyman 2010 dataset, so no separate estimate for it is shown for that period. Accordingly, only three ECS estimates are given for 1973–2012. Again, the TCR estimates are unaffected by the choice of system heat uptake estimate.
Table 2: ECS and TCR estimates based on last four decades and 0.08 W/m2 ocean heat uptake in 1860–79.
The first thing to note is that the TCR best estimate is almost unchanged from that per Otto et al. (2013): just marginally lower, at 1.35°C. That is very close to the TCR best estimate based on data for 2003–12. The 5–95% uncertainty range for TCR is slightly narrower when using data for 1973–2012 rather than 1970–2009, due to the higher mean forcing.
Table 2 shows that ECS estimates over this longer period vary considerably less between the different OHC datasets (two of which do not cover this period) than do estimates using data for 2003–12. As in Table 1, all the 1973–2012 based ECS estimates come in below the Otto et al. (2013) one, both as to best estimate and 95% bound. Giving all three estimates equal weight, a best estimate for ECS of 1.75°C looks reasonable, which compares to 1.9°C per Otto et al. (2013). On a judgemental basis, a 5–95% uncertainty range of 0.9–4.0°C looks sufficiently wide, and represents a reduction of 1.0°C in the 95% bound from that per Otto et al. (2013).
If one applied a similar approach to the four, arguably more reliable, ECS estimates from the 2003–12 data, the overall best estimate would come out at 1.65°C, considerably below the 2.0°C per Otto et al. (2013). The 5–95% uncertainty range calculated from the unweighted average of the PDFs for the four estimates is 1.0–3.1°C, and the 17–83% ‘likely’ range is 1.3–2.3°C. The corresponding ranges for the Otto et al. (2013) study are 1.2–3.9°C and 1.5–2.8°C. The important 95% bound on ECS is therefore reduced by nearly 1°C.
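For readers who want to reproduce this kind of combination, here is a minimal sketch of averaging several estimate PDFs with equal weight and reading percentiles off the combined CDF. The four normal PDFs are placeholders; real energy-budget ECS PDFs are right-skewed, since (ΔF − ΔQ) sits in the denominator of equation (1):

```python
import numpy as np

# Sketch: combine ECS estimates by averaging their PDFs with equal
# weight, then read off percentiles from the combined CDF. The four
# normal PDFs below are placeholders, not the dataset-specific ones.
grid = np.linspace(0.0, 8.0, 4001)   # ECS grid, C

def pdf_normal(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

params = [(1.6, 0.45), (1.6, 0.5), (1.7, 0.55), (1.75, 0.55)]  # assumed
pdf = np.mean([pdf_normal(grid, m, s) for m, s in params], axis=0)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]
p05, p17, p83, p95 = np.interp([0.05, 0.17, 0.83, 0.95], cdf, grid)
print(f"5-95%: {p05:.2f}-{p95:.2f} C, 17-83%: {p17:.2f}-{p83:.2f} C")
```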
References
Church, J. A. et al. (2011): Revisiting the Earth’s sea-level and energy budgets from 1961 to 2008. Geophysical Research Letters 38, L18601, doi:10.1029/2011gl048794.
Domingues, C. M. et al. (2008): Improved estimates of upper-ocean warming and multi-decadal sea-level rise. Nature, 453, 1090–1093, doi:10.1038/nature07080.
Forster, P. M., T. Andrews, P. Good, J. M. Gregory, L. S. Jackson, and M. Zelinka (2013): Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models, J. Geophys. Res. Atmos., 118, doi:10.1002/jgrd.50174
Ishii, M. and M. Kimoto (2009): Reevaluation of historical ocean heat content variations with time-varying XBT and MBT depth bias corrections. J. Oceanogr., 65, 287 – 299.
Levitus, S. et al. (2012): World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010. Geophysical Research Letters, 39, L10603, doi:10.1029/2012gl051106.
Loeb, N. G. et al. (2012): Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty. Nature Geoscience, 5, 110–113.
Lyman, J. M. et al. (2010): Robust warming of the global upper ocean. Nature, 465, 334–337. http://www.nature.com/nature/journal/v465/n7296/full/nature09043.html
Meinshausen, M., S. Smith et al. (2011): The RCP greenhouse gas concentrations and their extension from 1765 to 2500. Climatic Change, Special RCP Issue.
Otto, A. et al. (2013): Energy budget constraints on climate response. Nature Geoscience, doi:10.1038/ngeo1836
Stephens, G. L. et al. (2012): An update on Earth’s energy balance in light of the latest global observations. Nature Geoscience, 5, 691–696.
[i] Total forcing after adjusting the aerosol forcing to match observational estimates is not far short of total long-lived greenhouse gas (GHG) forcing. Therefore, differing estimates of GHG forcing – assuming that they differ broadly proportionately between the main GHGs – change both the numerator and denominator in equation (2) by roughly the same proportion. Accordingly, differing GHG forcing estimates do not matter very much when estimating TCR, provided that the corresponding F2× is used to calculate the ECS and TCR estimates, as was the case for both my December study and Otto et al. (2013). ECS estimates will be more sensitive than TCR estimates to differences in F2× values, since the unvarying deduction for heat uptake means that the (ΔF − ΔQ) factor in equation (1) will be affected proportionately more than the F2× factor. All other things being equal, the lower CMIP5 F2× value will lead to ECS estimates based on CMIP5 multimodel mean forcings being nearly 5% higher than those based on RCP4.5 forcings.
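A quick numeric check of this footnote’s argument, reusing the illustrative (assumed) values from the sketch above: scaling the GHG forcing basis by a common factor k leaves TCR unchanged while shifting ECS, because ΔQ does not scale.

```python
# Numeric check of the footnote: scale the forcing basis (both F2x and
# dF) by a common factor k. TCR is invariant; ECS shifts because dQ
# does not scale. Values are illustrative, not the study's inputs.
F2X, dT, dF, dQ = 3.71, 0.76, 2.0, 0.6
for k in (1.0, 3.44 / 3.71):              # k ~ 0.93 mimics the CMIP5 F2x
    tcr = (k * F2X) * dT / (k * dF)       # numerator and denominator both scale
    ecs = (k * F2X) * dT / (k * dF - dQ)  # dQ fixed, so ECS changes with k
    print(f"k = {k:.2f}: TCR = {tcr:.2f} C, ECS = {ecs:.2f} C")
```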


HR, you write “you need to moderate your skepticism appropriately.”
I have absolutely no intention whatsoever of moderating my skepticism. There is no empirical data whatsoever to support the hypothesis of CAGW, and until we get such empirical data, I will continue to believe that CAGW is a hoax. The warmists have been conducting pseudo-science for years, trying to pretend that the estimates they have made on climate sensitivity have a meaning in physics. IMHO, as I have noted, I think these estimates are completely worthless.
Measurements since before 1900 demonstrate that sensitivity is between zero and insignificant.
Natural Climate change has been hiding in plain sight
http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html
bobl says:
May 24, 2013 at 6:47 am
“I am struggling with a not so related issue that came to me just yesterday. The theory has it that N2 and O2 lack vibrational modes in the infrared, making them incapable of reradiating heat. To me this implies that all IR radiation to space from the atmosphere must be from a greenhouse gas? So if the concentration of greenhouse gases increases then the number of photons released to space must necessarily increase, given that the non-radiating gases transfer their energy by collisions.
Surely this has to increase losses to space overall.
What am I missing?”
Bob, what you are missing is an understanding of the fundamental difference between atomic or molecular line/band spectra emission/absorption radiation, which is entirely a consequence of the atomic and molecular structure of SPECIFIC materials, and THERMAL RADIATION, which is a continuum spectrum of EM radiation that is NOT material specific and depends (spectrally) ONLY on the Temperature of the material. Of course, the level of such emission or absorption depends on the density of the material (atoms/molecules per m^3).
Spectroscopists have known since pre-Cambrian times that the sun emits a broad spectrum of continuum thermal radiation, on top of which, as discovered by Fraunhofer and others, there is a whole flock of narrow atomic or molecular line spectra at very specific frequencies that are characteristic of specific elements or charged ions in the sun.
So-called “Black Body Radiation ” is an example of a thermal continuum spectrum.
I deliberately said “so-called”, because nobody ever observed black body radiation, since the laws of Physics prohibit the existence of any such object.
Well some folks think a black hole might be a black body.
By definition, a black body absorbs 100% of electromagnetic radiation of ANY frequency or wavelength down to, but not including zero; and up to, but not including infinity.
Yet no physical object (sans a black hole) is able to absorb 100% of even ONE single frequency, or wavelength; let alone All frequencies or wavelengths. To do that, the body would have to have a surface refractive index of exactly 1.0, the same as the refractive index of empty space. That would require that the velocity of light in the material be exactly (c).
Now c = 1/sqrt(μ0 × ε0), where μ0 and ε0 are the permeability and permittivity of free space.
μ0 = 4π × 10^-7 volt-seconds per amp-metre.
ε0 = 8.85418781762 × 10^-12 amp-seconds per volt-metre.
Both of these, and c = 2.99792458 × 10^8 metres per second, are exact values – the only such fundamental physical constants that are exact.
So a material with a product of permeability and permittivity = 1 / c^2 would have a velocity of EM radiation also equal to (c). But that is not sufficient.
Free space vacuum also has a characteristic impedance = sqrt(μ0 / ε0), which is approximately 120π Ohms, or 377 Ohms.
And when a wave travelling in a medium of 377 Ohms, such as free space, encounters a medium of different impedance, there is a partially transmitted wave, and a partially reflected wave; so no total absorption.
So any real physical medium must have a permeability of μ0 and a permittivity of ε0, at all frequencies and wavelengths, in order to qualify as a black body. It would be indistinguishable from the vacuum of free space.
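As a quick check of the constants quoted above (pre-2019 SI values, when μ0 was exactly 4π × 10^-7), one can verify c and the free-space impedance directly:

```python
import math

# Verify c = 1/sqrt(mu0*eps0) and the free-space impedance Z0,
# using the pre-2019 exact SI values quoted in the comment above.
mu0 = 4 * math.pi * 1e-7       # V*s/(A*m)
eps0 = 8.85418781762e-12       # A*s/(V*m); note the exponent is -12
c = 1 / math.sqrt(mu0 * eps0)  # metres per second
Z0 = math.sqrt(mu0 / eps0)     # ohms
print(f"c  = {c:.8e} m/s")     # ~2.99792458e8
print(f"Z0 = {Z0:.2f} ohm")    # ~376.73, i.e. ~120*pi
```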
The point of all this, is that real bodies only approximate what a black body might do, and only do so over narrow ranges of frequency or wavelength, depending on their Temperature.
And in the case of gases like atmospheric nitrogen and oxygen, the molecular density is extremely low, so the EM absorption doesn’t come anywhere near 100%, even for huge thicknesses of atmosphere. But the absorption per molecule is not zero, as some people assume, so even non-IR-active, non-GHG gases do absorb and emit a continuum spectrum of thermal radiation based on the gas Temperature.
Experimental practical near black bodies, operate as anechoic cavities, where radiation can enter a small aperture, and then gets bounced around in the cavity and never escapes. Some derivations of the Planck radiation law are based on such cavity radiation.
In the case of a “black body cavity”, the required conditions are that the walls be perfectly reflecting of ALL EM radiation, and also have zero thermal conductivity so that heat energy cannot leak out through the walls.
Once again, such conditions are a myth, and no ideal black body cavity can exist either.
So we have the weird circumstance that blackbody radiation has never been observed by anybody and simply cannot exist, yet all kinds of effort went into theoretical models of a non-existent non-phenomenon, and gave us one of the crown jewels of modern physics: the Planck radiation formula.
Has anyone looked at/challenged this?
http://www.naturalnews.com/040448_solar_radiation_global_warming_debunked.html
Climate sensitivity may be irrelevant or wrong
Steven Mosher says:
May 24, 2013 at 8:47 am
“You’ve shown that the consensus is broader and more uncertain than people think, not by questioning the existence of the consensus but by working with others to demonstrate that some of the core beliefs ( how much will it warm) admit many answers.”
So it is not a consensus after all. Good to see that the 3C consensus is breaking up. We will all benefit from that (other than the rent seekers).
I also applaud the fact that Steven Mosher has transformed into something less cryptic than usual. Long may it continue as he often has something valuable to add when the notion takes him.
Clearly estimates of climate sensitivity have had to fall because models based on higher numbers have tracked so poorly they have reached the point of falsification. The greatest pressure is on the TCR value since sufficient time has now passed without significant warming to rule out a high value for this number. The ECS on the other hand makes predictions that cannot be fully falsified for hundreds of years so I expect we’ll see people continuing to defend high numbers here for some time. I expect estimates of TCR and ECS will continue to fall if we see cooling over the next decade. These numbers in any case are still based on a simple forcing model with feedback which I don’t think is at all realistic.
I expect the immediate response of the most alarmed will be to start talking up the ECS and downplaying the TCR. However these ECS values are not really alarming. Over the longer term we are staring down the barrel of the next ice age. I find it reassuring to think that our influence on the planet might allow us to dodge this calamity. In fact I am more concerned that ECS might not be big enough to allow this to happen.
The problem is that ECS is bigger than TCR because of long term feedbacks to warming that depend on slow processes like the melting of ice sheets or warming of the deep oceans. But in the context of a planet that should be heading into an ice age the effect of added CO2 may not be to warm but merely to offset the expected natural cooling. If the greenhouse effect is not actually warming the planet but simply staving off the descent into the next ice age then none of these feedback effects will come into play.
Steven Mosher wrote:
“I think it might be instructive for WUWT readers to understand how Anthony’s claims about microsite bias would play into your calculations. For example, if one assumed that the land warming was biased by .1C per decade from 1979-current, what would that do to the sensitivity calculation?”
Good point, Steve. That assumption would reduce the increase in global temperature between the 1860–79 mean and the 2003–12 mean from 0.76 C to about 0.68 C. All the climate sensitivity estimates, and their uncertainty ranges, would then reduce by about 11%. So a sensitivity of 1.7 C would change to just over 1.5 C, for example.
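A rough reconstruction of that arithmetic, under assumptions not stated in the comment (the bias accumulating from 1979 to the midpoint of 2003–12, and land weighted at about 29% of the global mean); small changes in these assumptions shift the result:

```python
# Rough reconstruction of the ~0.08 C reduction quoted above. Assumed:
# 0.1 C/decade land-only bias from 1979 to the 2003-12 midpoint
# (~28.5 years), and a land fraction of ~0.29 in the global mean.
bias_land = 0.1 * 2.85             # C, accumulated land-only bias
bias_global = bias_land * 0.29     # ~0.08 C on the global mean
dT_adj = 0.76 - bias_global
print(f"dT: 0.76 -> {dT_adj:.2f} C ({bias_global / 0.76:.0%} lower)")
```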
Clive Menzies, I too have seen that; then I ran across this, and it makes one wonder if we’re not overcomplicating this: http://www.crh.noaa.gov/dtx/march.php
What I would like to see is negative (below 1 deg C) ECS/TCR, i.e., at what minimums over the next decades / century would it take for both estimates to get tickets to the Lindzen and Choi ball game (0.7 deg C)?
“One thing we do know is that the human response to climate sensitivity is very high. The positive feedbacks are much stronger than what is going on in the atmosphere.”
That’s because of the renewables subsidy forcing, which will result in runaway inflation-level rise and economies going under if the propaganda levels exceed 400ppm.
In the previous article, Willis questioned why the volcanic forcings were being spread back in time by a running mean filter. It was confirmed by Nic that this was the case but he stated that it was immaterial to the findings of Otto et al 2013. This is probably true.
Now that Nic has kindly linked to a source of the forcings used, I have plotted it up against UAH TLT and TLS and marked in the dates of the two major eruptions.
I chose the SH extra-tropical region since this shows no visible impact from El Chichon and allows us to see the background variation in temperatures that was happening at that time. (Note stratospheric temps tend to vary in the opposite sense, so I have inverted and scaled to give a ‘second opinion’ on the background variations.)
http://climategrog.wordpress.com/?attachment_id=273
Now we see that the effects of the back-spreading of the forcing data produce a totally false correlation with natural variations of temperature that preceded the eruption. This has nothing to do with forcing or the model and is entirely a result of improper processing. The distorted form of the forcing data just happens to correlate with the natural temperature background around the time of the event.
Incidentally, I remain even more convinced now of my initial assessment that this is a five-year running mean, not a three-year one as suggested by Willis and confirmed by Nic. I would ask Nic to check his source of information, because it seems pretty incontrovertible from this that the filter is affecting two points either side, not one; hence it is a 5-point filter kernel.
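A minimal sketch of that diagnostic: a centred n-point running mean spreads an impulse (n−1)/2 points either side, so a 3-point filter leaks one year before an eruption and a 5-point filter leaks two. The impulse value is illustrative only:

```python
import numpy as np

# A centred n-point running mean spreads an impulse (n-1)/2 points
# either side: the test described in the comment above.
impulse = np.zeros(9)
impulse[4] = -3.0     # idealised eruption-year forcing spike, W/m2
for n in (3, 5):
    smoothed = np.convolve(impulse, np.ones(n) / n, mode="same")
    print(n, np.round(smoothed, 2))   # n=3 leaks 1 yr back; n=5 leaks 2
```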
So why was this done? There is no valid reason, and it has to be an intentional act – you can’t accidentally run a filter on one of your primary inputs.
Whoever had the idea to “smooth” the volcanic forcings – are they also introducing this practice elsewhere than in Otto et al., where it may be falsely improving the ability of the hindcasts to reproduce key features of the temperature record?
What I love about science are the necessary assumptions that are made in order to carry out a calculation – you know the kind of thing I mean: ‘let’s assume a value for such and such’, or let’s invent a concept like a ‘Black Body’, which of course cannot exist but is nonetheless useful in carrying out a calculation. Well, here are a couple of observations from ‘real life’ which in my opinion seem to render ‘sensitivity’ calculations almost completely irrelevant…
Let’s assume (see what I did there?) that the increase of CO2 concentration from 350 to 400 ppm does indeed capture sufficient energy to raise the overall temperature of the atmosphere by, say, 1 degree C. Let’s then assume that the excess heat is eventually transported by ocean currents towards the polar regions. In the case of the Arctic Ocean in winter, sea ice cover is reduced, thereby allowing larger volumes of warmer water to come into contact with the atmosphere at a time when there is no solar input (indeed, conditions are ideal for heat loss to space).
Could it not then be argued that a slight heating of the atmosphere would cause, and be balanced by, a subsequent polar cooling effect?
Indeed, could it be further argued that Arctic Ocean heat loss could be a self-amplifying effect (a bit like the Warmist ‘feedbacks’), subsequently causing ‘runaway cooling’?
Phil. says:
May 24, 2013 at 8:04 am
No, because the atmosphere is optically thick at the GHG wavelengths, i.e. lower in the atmosphere it absorbs more than it emits. Emission to space only occurs above a certain height and therefore at a certain temperature, as the concentration increases then that height increases and the temperature decreases and hence emission to space goes down.
You are oversimplifying the situation.
First, the GHE is real and works off of radiation from the surface. Bobl wasn’t referring to this process.
Second, thermalization and radiation of atmospheric energy (not surface energy) is basic physics. This works in parallel to the GHE, and this is what Bobl was asking about. Since the density of the atmosphere is reduced the higher you go, the average distance the radiation travels until re-absorption (or loss to space) is computable; let’s assume X meters upwards. It looks like any flow through a pipe. Now, if you add more CO2 you increase the probability of these events occurring, which increases the flow of energy at all levels of the atmosphere towards space. Essentially you create a wider pipe. If climate models ignore this process it’s not surprising they get the wrong answer.
Nic Lewis’s work (a significant contribution) and its implications need to be put into perspective. His work doesn’t seem to take into account the paleo record, nor should it necessarily do so. But the extremely short sample period needs to be recognized.
Additionally, from my reading of his results (as well as Dr. Otto’s, apparently), at most we may have a reprieve of ten or fifteen years before the same effects are upon us.
Not exactly a ‘Hallelujah’. JP
1) “… if one makes the assumption that the evolution of forcing over the long period involved approximates a 70-year ramp. This is reasonable [based on another assumption that] the net forcing has grown substantially faster from the mid-twentieth century on than previously.”
***
2) “… estimates based on changes to the decade 2000–09 are arguably the most reliable, since that decade has the strongest forcing… .” [assumes the forcing is of any significance at all]
***
3) “…forcing was stronger during the 2003–12 decade…” [assumes significant forcing causation]
***
4) “… Since that data only extends to the mean for 2008–10, I have extended it for two years at a conservative (high) rate of 0.33 W/m2… ”
***
From statements like those quoted above, this well-executed paper appears to be a careful attempt to both: 1) deprogram genuinely brainwashed AGW cult members by gingerly casting doubt upon their core beliefs; and 2) provide a face-saving way for AGW crooks who know better to back down from their lies.
It is not, nevertheless, robust, open debate.
When a debate opponent has no evidence to back up their conjectures, when that opponent offers only assumptions and speculation, then, no matter how complicated their math, it adds up to no more than “I simply believe this.” There is nothing to debate. The above is only playing their imaginary game. It may get them to change their behavior slightly, but not significantly. It’s like going along with a person having a psychotic episode just enough to get them out of the middle of the road and onto the shoulder. “Yes, yes, my good fellow, those tiny green men most likely do want you to go with them, but, I know that they want you to walk on the shoulder, not down the centerline. There’s a good lad. Just keep to the right (or left, in the U.K.) of that solid white line there. Good luck!”
While it is shrewd not to try to tell them “TINY GREEN MEN DO NOT EXIST,” the above really isn’t a debate.
Conclusion: While scientific discussion is very important, the main goal is to save our economies, thus we must win over the voters. And that debate needs to be simply and powerfully stated. In terms such as:
“All people are actually doing is just taking another guess.” [Jim Cripwell]
“Climate science attempts to model this as a simple scalar average, without even knowing if the combination of all the feedbacks represents a stationary function. That is, they don’t even know if the mean of the sensitivity is a constant.” [bobl]
“Clearly estimates of climate sensitivity have had to fall because models based on higher numbers have tracked so poorly they have reached the point of falsification. ” [IanH]
GO, you wonderful WUWT SCHOLARS — argue with vigor! TRUTH IS ON YOUR SIDE.
AAAAAaaaack! Please forgive me. I messed up my first “[end bold]” after the “]” in the first paragraph. Sigh.
Thank you Richard, that’s exactly what I was trying to say. I was thinking about how energy lost from the surface by convection is radiated to space, and whether CO2 partial pressure plays into the efficiency of that process.
1. CO2 molecule takes up energy through collision with non-radiating gas
2. CO2 molecule emits photon
It seems to me that increasing the CO2 concentration increases the probability of such an interaction, and therefore must increase the emission to space. Does this component, for example, form part of the increased IR emission in the CO2 emission bands seen in the satellite record?
This isn’t much more than a thought at the moment, but it seems to me that this is just a question of conservation, i.e. energy in vs. energy out; anything that increases energy out must result in an overall cooling – granted, it could be stratified, cooling in the upper atmosphere only, but given the convection processes at play… Increasing the efficiency of radiation must increase the temperature difference, increasing the rate of convective and conductive heat transport to match.
This question has rocked my world, so to speak. I can’t reconcile this with a warming effect, and to date I have been firmly of the opinion that CO2 warms. That’s still true if one only considers radiation; in that case radiation to space should decrease as GHGs rise, because radiation from the surface never reaches emission height. But likely not if convective heat is radiated to space by GHGs. In that case there is always plenty of energy drawn from the thermal energy of the surrounding N2 and O2 to feed into the pipe…
Thoughts on this are welcome.
left should be right and right left above (I really need a vacation… !!!)
Nic // Why not do a meta-analysis to collapse those wide C.I. values? The consistency between the various results suggests that the C.I. is too large.
bobl says:
May 24, 2013 at 5:30 pm
Thank you Richard, that’s exactly what I was trying to say. I was thinking about how energy lost from the surface by convection is radiated to space, and whether CO2 partial pressure plays into the efficiency of that process.
1. CO2 molecule takes up energy through collision with non-radiating gas
2. CO2 molecule emits photon
It seems to me that increasing the CO2 concentration increases the probability of such an interaction, and therefore must increase the emission to space.
———————————————————-
I think you’re also assuming that the radiation always has to be outwards (are you?). The reality is that the CO2 molecule has basically a 50/50 chance of radiating up and out or down and in. The net effect is to increase the transit time of the photon and increase the energy content of the atmosphere and the surface as a result. Of course this is happening at all levels of the atmosphere just to make it more complicated. Finally, it can be directly observed just by measuring the radiation from a dark sky at night.
So Mosh,
Where is the fine line between denialism and lukewarmerism?
1.2 per doubling of CO2?
Nic:
Could you tell us something about the journey from your first interest, your first calculations, your first paper, and the collaboration towards this paper?
Would be interesting to hear.
‘Climate’ and ‘Climate Change’ are interpretations, in part based on the psychological state of the ‘observer’ at any particular time and therefore not physical in any way or form, i.e. fantasies or phantasms.
Fantasies and phantasms have no sensitivity, not even memory; they are only apparitions.
bobl 6.47 am: ‘Surely this has to increase losses to space overall.’
The fundamental problem with Climate Alchemy is that it starts from the premise that the ~15 µm CO2 IR band, emitting at ~220 K to space, controls IR energy flux to space, because if you double CO2 it reduces that band’s emitted flux by ~3 W/m^2.
However, at the present CO2 level, that band is ~8% of OLR. The other 92% of the OLR comes from cloud level, the H2O bands, and, in the atmospheric window, from near the surface temperature.
The premise has to shift to accepting that the Earth self-regulates OLR equal to SW energy IN, and that the variations about the set point are oscillations as long-time-constant parts of the system adapt.
In other words, CO2-AGW is by definition zero on average.