Guest essay by Nic Lewis
The Otto et al. paper has received a great deal of attention in recent days. While the paper’s estimate of transient climate response was low, the equilibrium/effective climate sensitivity figure was actually slightly higher than that in some other recent studies based on instrumental observations. Here, Nic Lewis notes that this is largely due to the paper’s use of the Domingues et al. upper ocean (0–700 m) dataset, which assesses recent ocean warming to be faster than other studies in the field. He examines the effects of updating the Otto et al. results from 2009 to 2012 using different upper ocean (0–700 m) datasets, with surprising results.
Last December I published an article here entitled ‘Why doesn’t the AR5 SOD’s climate sensitivity range reflect its new aerosol estimates?‘ (Lewis, 2012). In it I used a heat-balance (energy-budget) approach based on changes in mean global temperature, forcing and Earth system heat uptake (ΔT, ΔF and ΔQ) between 1871–80 and 2002–11. I used the RCP 4.5 radiative forcings dataset (Meinshausen et al, 2011), which is available in .xls format here, conformed it with solar forcing and volcanic observations post 2006 and adjusted its aerosol forcing to reflect purely satellite-observation-based estimates of recent aerosol forcing.
I estimated equilibrium climate sensitivity (ECS) at 1.6°C, with a 5–95% uncertainty range of 1.0–2.8°C. I did not state any estimate for transient climate response (TCR), which is based on the change in temperature over a 70-year period of linearly increasing forcing and takes no account of heat uptake. However, a TCR estimate was implicit in the data I gave, if one makes the assumption that the evolution of forcing over the long period involved approximates a 70-year ramp. This is reasonable since the net forcing has grown substantially faster from the mid-twentieth century on than previously. On that basis, my best estimate for TCR was 1.3°C. Repeating the calculations in Appendix 3 of my original article without the heat uptake term gives a 5–95% range for TCR of 0.9–2.0°C.
The ECS and TCR estimates are based on the formulae:
(1) ECS = F2× ΔT / (ΔF − ΔQ) and (2) TCR = F2× ΔT / ΔF
where F2× is the radiative forcing corresponding to a doubling of atmospheric CO2 concentrations.
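For readers who want to check the arithmetic, here is a minimal Python sketch of equations (1) and (2). The input values are illustrative assumptions of roughly the right magnitude, not the exact figures used in either study.

```python
# Minimal sketch of equations (1) and (2). All numerical inputs are
# illustrative assumptions, not the exact values used in the studies.

F2X = 3.44   # W/m2: CMIP5-based forcing from a doubling of CO2 (3.71 W/m2 on the RCP4.5 basis)

def ecs(dT, dF, dQ, f2x=F2X):
    """Equation (1): equilibrium/effective climate sensitivity, degC."""
    return f2x * dT / (dF - dQ)

def tcr(dT, dF, f2x=F2X):
    """Equation (2): transient climate response, degC."""
    return f2x * dT / dF

# Assumed example inputs: dT = 0.76 degC, dF = 2.0 W/m2, dQ = 0.45 W/m2
print(round(tcr(0.76, 2.0), 2))        # roughly 1.3 degC
print(round(ecs(0.76, 2.0, 0.45), 2))  # roughly 1.7 degC
```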
A short while ago I drew attention, here, to an energy-budget climate study, Otto et al. (2013), that has just been published in Nature Geoscience, here. Its author list includes fourteen lead/coordinating lead authors of relevant AR5 WG1 chapters, and myself. That study uses the same equations (1) and (2) as above to estimate ECS and TCR. It uses a CMIP5-RCP4.5 multimodel mean of forcings as estimated by general circulation models (GCMs) (Forster et al., 2013), likewise adjusting the aerosol forcing to reflect recent satellite-observation-based estimates – see Supplementary Information (SI) Section S1. Although the CMIP5 forcing estimates embody a lower figure for F2× (3.44 W/m2) than do those per the RCP4.5 database (F2×: 3.71 W/m2), TCR estimates from using the two different sets of forcing estimates are almost identical, whilst ECS estimates are marginally higher using the CMIP5 forcing estimates[i].
Although the Otto et al. (2013) Nature Geoscience study illustrates estimates based on changes in global mean temperature, forcing and heat uptake between 1860–79 and various recent periods, it states that the estimates based on changes to the decade 2000–09 are arguably the most reliable, since that decade has the strongest forcing and is little affected by the eruption of Mount Pinatubo. Its TCR best estimate and 5–95% range based on changes to 2000–09 are identical to what is implicit in my December study: 1.3°C (uncertainty range 0.9–2.0°C).
While the Otto et al. (2013) TCR best estimate is identical to that implicit in my December study, its ECS best estimate and 5–95% range based on changes between 1860–79 and 2000–09 is 2.0°C (1.2–3.9°C), somewhat higher than the 1.6°C (1.0–2.9°C) per my study, which was based on changes between 1871–80 and 2002–11. About 0.1°C of the difference is probably accounted for by roundings and the difference in F2× factors due to the different forcing bases. But, given the identical TCR estimates, differences in the heat-uptake estimates used must account for most of the remaining 0.3°C difference between the two ECS estimates.
Both my study and Otto et al. (2013) used the pentadal estimates of 0–2000-m deep-layer ocean heat content (OHC) updated from Levitus et al. (2012), and made allowances, in line with recent studies, for heat uptake in the deeper ocean and elsewhere. The two studies’ heat uptake estimates differed mainly due to the treatment of the 0–700-m layer of the ocean. I used the estimate included in the Levitus 0–2000-m pentadal data, whereas Otto et al. (2013) subtracted the Levitus 0–700-m pentadal estimates from that data and then added 3-year running mean estimates of 0–700-m OHC updated from Domingues et al. (2008).
Since 2000–09, the most recent decade used in Otto et al. (2013), ended more than three years ago, I will instead investigate the effect of differing heat uptake estimates using data for the decade 2003–12 rather than for 2000–09. Doing so has two advantages. First, forcing was stronger during the 2003–12 decade, so a better constrained estimate should be obtained. Secondly, by basing the 0–700-m OHC change on the difference between the 3-year means for 2003–05 and for 2010–12, the influence of the period of switchover to Argo – with its higher error uncertainties – is reduced.
In this study, I will present results using four alternative estimates of total Earth system heat uptake over the most recent decade. Three of the estimates adopt exactly the same approach as in Otto et al. (2013), updating estimates appropriately, and differ only in the source of data used for the 3-year running mean 0–700-m OHC. In one case, I calculate it from the updated Levitus annual data, available from NOAA/NODC here. In the second case I calculate it from updated Lyman et al. (2010) data, available here. In the third case I use the updated Domingues et al. (2008) data archived at the CSIRO Sea Level Rise page in relation to Church et al. (2011), here. Since that data only extends to the mean for 2008–10, I have extended it for two years at a conservative (high) rate of 0.33 W/m2 – which over that period is nearly double the rate of increase per the Levitus dataset, and nearly treble that per the Lyman dataset. The final estimate uses total system heat uptake estimates from Loeb et al. (2012) and Stephens et al. (2012). Those studies melded satellite-based estimates of top-of-atmosphere radiative imbalance with ocean heat content estimates, primarily updated from the Lyman et al. (2010) study. The Loeb 2012 and Stephens 2012 studies estimated average total Earth system heat uptake/radiative imbalance at 0.5 W/m2 over 2000–10 and 0.6 W/m2 over 2005–10 respectively. I take the mean of these two figures as applying throughout the 2003–12 period.
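The sketch below shows how a change in 0–700 m OHC between two 3-year means translates into a heat-uptake rate in W/m2; the OHC change used is a placeholder, not a value taken from any of the datasets listed above.

```python
# Sketch: converting an ocean heat content change into a heat-uptake rate.
# The 5e22 J rise below is a placeholder, not taken from any dataset.

SECONDS_PER_YEAR = 3.156e7
EARTH_SURFACE_AREA = 5.10e14   # m2; uptake is conventionally quoted per m2 of the Earth's surface

def uptake_rate(delta_ohc_joules, years):
    """Mean heat-uptake rate (W/m2) implied by an OHC change over `years` years."""
    return delta_ohc_joules / (years * SECONDS_PER_YEAR * EARTH_SURFACE_AREA)

# e.g. an assumed rise of 5e22 J between the 2003-05 and 2010-12 means (7 years apart):
print(round(uptake_rate(5e22, 7.0), 2))   # about 0.44 W/m2
```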
I use the same adjusted CMIP5-RCP4.5 forcings dataset as used in the Otto et al. (2013) study, updating them from 2000–09 to 2003–12, to achieve consistency with that study (data kindly supplied by Piers Forster). Likewise, the uncertainty estimates I use are derived on the same basis as those in Otto et al. (2013).
I am also retaining the 1860–79 base reference period used in Otto et al. (2013). That study followed my December study in deducting 50% of the 0.16 W/m2 estimate of ocean heat uptake (OHU) in the second half of the nineteenth century per Gregory et al. (2002), the best-known of the earlier energy budget studies. The 0.16 W/m2 estimate – half natural, half anthropogenic – seemed reasonable to me, given the low volcanic activity between 1820 and 1880. However, I deducted only 50% of it to compensate for my Levitus 2012-derived estimate of 0–2000-m ocean heat uptake being somewhat lower than that per some other estimates. Although the main reason for making the 50% reduction in the Gregory (2002) OHU estimate for 1861–1900 disappears when considering 0–700-m ocean heat uptake datasets with significantly higher trends than per Levitus 2012, in the present calculations I nevertheless apply the 50% reduction in all cases.
Table 1, below, shows comparisons of ECS and TCR estimates using data for the periods 2000–09 (Otto et al., 2013), 2002–11 (Lewis, 2012 – my December study) and 2003–12 (this study) using the relevant forcings and 0–700 m OHC datasets.
Table 1: ECS and TCR estimates based on last decade and 0.08 W/m2 ocean heat uptake in 1860–79.
Whichever periods and forcings dataset are used, the best estimate of TCR remains 1.3°C. The 5–95% uncertainty range narrows marginally when using changes to 2003–12 – which give slightly higher forcing increases – rather than to 2000–09 or 2002–11, rounding to 0.9–1.95°C. The ‘likely’ range (17–83%) is 1.05–1.65°C. (These figures are all rounded to the nearest 0.05°C.) The TCR estimate is unaffected by the choice of OHC dataset.
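Percentile ranges like these come from propagating the uncertainties in ΔT and ΔF through equation (2). A simple Monte Carlo version of that propagation is sketched below, assuming independent Gaussian uncertainties with invented standard deviations; the error treatment actually used here and in Otto et al. (2013) differs in detail.

```python
# Monte Carlo sketch of how percentile ranges for TCR follow from assumed
# Gaussian uncertainties in dT and dF. The means and standard deviations
# below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
F2X = 3.44                        # W/m2

dT = rng.normal(0.76, 0.09, N)    # degC
dF = rng.normal(2.00, 0.35, N)    # W/m2

tcr_samples = F2X * dT / dF
print(np.percentile(tcr_samples, [5, 17, 50, 83, 95]))
```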
The ECS estimates using data for 2003–12 reveal the significant effect of using different heat uptake estimates. Lower system heat uptake estimates and the higher forcing estimates resulting from the 3-year roll-forward of the period used both contribute to the ECS estimates being lower than the Otto et al. (2013) ECS estimate, the first factor being the more important.
Although stating that estimates based on 2000–09 are arguably most reliable, Otto et al. (2013) also gives estimates based on changes to 1970–79, 1980–89, 1990–99 and 1970–2009. Forcings during the first two of those periods are too low to provide reasonably well-constrained estimates of ECS or TCR, and estimates based on 1990–99 may be unreliable since this period was affected both by the eruption of Mount Pinatubo and by the exceptionally large 1997–98 El Niño. However, the 1970–2009 period, although having a considerably lower mean forcing than 2000–09 and being more impacted by volcanic activity, should – being much longer – be less affected by internal variability than any single decade. I have therefore repeated the exercise carried out in relation to the final decade, in order to obtain estimates based on the long period 1973–2012.
Table 2, below, shows comparisons of ECS and TCR estimates using data for the periods 1970–2009 (Otto et al., 2013) and 1973–2012 (this study) using the relevant forcings and 0–700-m OHC datasets. The estimates of system heat uptake from two of the sources used for 2003–12 do not cover the longer period. I have replaced them with an estimate based on data, here, updated from Ishii and Kimoto (2009). Using 2003–12 data, the Ishii and Kimoto dataset gives an almost identical ECS best estimate and uncertainty range to the Lyman 2010 dataset, so no separate estimate for it is shown for that period. Accordingly, there are only three ECS estimates given for 1973–2012. Again, the TCR estimates are unaffected by the choice of system heat uptake estimate.
Table 2: ECS and TCR estimates based on last four decades and 0.08 W/m2 ocean heat uptake in 1860–79.
The first thing to note is that the TCR best estimate is almost unchanged from that per Otto et al. (2013): just marginally lower at 1.35°C. That is very close to the TCR best estimate based on data for 2003–12. The 5–95% uncertainty range for TCR is slightly narrower when using data for 1973–2012 rather than 1970–2009, due to higher mean forcing.
Table 2 shows that ECS estimates over this longer period vary considerably less between the different OHC datasets (two of which do not cover this period) than do estimates using data for 2003–12. As in Table 1, all the 1973–2012 based ECS estimates come in below the Otto et al. (2013) one, both as to best estimate and 95% bound. Giving all three estimates equal weight, a best estimate for ECS of 1.75°C looks reasonable, which compares to 1.9°C per Otto et al. (2013). On a judgemental basis, a 5–95% uncertainty range of 0.9–4.0°C looks sufficiently wide, and represents a reduction of 1.0°C in the 95% bound from that per Otto et al. (2013).
If one applied a similar approach to the four, arguably more reliable, ECS estimates from the 2003–12 data, the overall best estimate would come out at 1.65°C, considerably below the 2.0°C per Otto et al. (2013). The 5–95% uncertainty range calculated from the unweighted average of the PDFs for the four estimates is 1.0–3.1°C, and the 17–83%, ‘likely’, range is 1.3–2.3°C. The corresponding ranges for the Otto et al. (2013) study are 1.2–3.9°C and 1.5–2.8°C. The important 95% bound on ECS is therefore reduced by getting on for 1°C.
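The ‘unweighted average of the PDFs’ referred to above is easy to implement if each estimate is represented by an equal-sized set of Monte Carlo samples, since pooling such sample sets is equivalent to averaging the underlying PDFs. A sketch, with the per-dataset sample arrays left hypothetical:

```python
# Sketch: combining several ECS estimates by an unweighted average of their
# PDFs. Pooling equal-sized Monte Carlo sample sets is equivalent to
# averaging the PDFs; the per-dataset sample arrays here are hypothetical.
import numpy as np

def combined_percentiles(sample_sets, probs=(5, 17, 50, 83, 95)):
    """Pool equal-sized sample sets (one per OHC dataset) and return percentiles."""
    pooled = np.concatenate(sample_sets)
    return np.percentile(pooled, probs)

# usage with four hypothetical per-dataset ECS sample arrays:
# ranges = combined_percentiles([ecs_levitus, ecs_lyman, ecs_domingues, ecs_loeb_stephens])
```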
References
Church, J. A. et al. (2011): Revisiting the Earth’s sea-level and energy budgets from 1961 to 2008. Geophysical Research Letters 38, L18601, doi:10.1029/2011gl048794.
Domingues, C. M. et al. (2008): Improved estimates of upper-ocean warming and multi-decadal sea-level rise. Nature, 453, 1090–1093, doi:10.1038/nature07080.
Forster, P. M., T. Andrews, P. Good, J. M. Gregory, L. S. Jackson, and M. Zelinka (2013): Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models, J. Geophys. Res. Atmos., 118, doi:10.1002/jgrd.50174
Ishii, M. and M. Kimoto (2009): Reevaluation of historical ocean heat content variations with time-varying XBT and MBT depth bias corrections. J. Oceanogr., 65, 287 – 299.
Levitus, S. et al. (2012): World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010. Geophysical Research Letters, 39, L10603, doi:10.1029/2012gl051106.
Loeb, NG et al. (2012): Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty. Nature Geoscience, 5, 110-113.
Lyman, JM et al. (2010): Robust warming of the global upper ocean. Nature, 465, 334–337. http://www.nature.com/nature/journal/v465/n7296/full/nature09043.html
Meinshausen M., S. Smith et al. (2011): The RCP greenhouse gas concentrations and their extension from 1765 to 2500. Climatic Change, Special RCP Issue.
Otto, A. et al. (2013): Energy budget constraints on climate response. Nature Geoscience, doi:10.1038/ngeo1836
Stephens, GL et al (2012): An update on Earth’s energy balance in light of the latest global observations. Nature Geoscience, 5, 691-696
[i] Total forcing after adjusting the aerosol forcing to match observational estimates is not far short of total long-lived greenhouse gas (GHG) forcing. Therefore, differing estimates of GHG forcing – assuming that they differ broadly proportionately between the main GHGs – change both the numerator and denominator in Equation (2) by roughly the same proportion. Accordingly, differing GHG forcing estimates do not matter very much when estimating TCR, provided that the corresponding F2× is used to calculate the ECS and TCR estimates, as was the case for both my December study and Otto et al. (2013). ECS estimates will be more sensitive than TCR estimates to differences in F2× values, since the unvarying deduction for heat uptake means that the (ΔF − ΔQ) factor in equation (1) will be affected proportionately more than the F2× factor. All other things being equal, the lower CMIP5 F2× value will lead to ECS estimates based on CMIP5 multimodel mean forcings being nearly 5% higher than those based on RCP4.5 forcings.
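A quick numerical check of this footnote, using assumed round numbers rather than the studies’ actual ΔF and ΔQ: if ΔF is scaled in proportion to F2×, TCR is essentially unchanged, while ECS comes out a few per cent higher on the lower CMIP5 F2× basis, the size of the effect growing as ΔQ becomes a larger fraction of ΔF.

```python
# Numerical check of the footnote. All inputs are assumed round numbers,
# not the values actually used in either study.

def ecs(f2x, dF, dT=0.76, dQ=0.70):
    return f2x * dT / (dF - dQ)

def tcr(f2x, dF, dT=0.76):
    return f2x * dT / dF

# RCP4.5 basis: F2x = 3.71 W/m2 with an assumed dF of 2.16 W/m2.
# CMIP5 basis:  F2x = 3.44 W/m2 with dF scaled in proportion (about 2.00 W/m2).
print(tcr(3.71, 2.16), tcr(3.44, 2.00))   # nearly identical TCR values
print(ecs(3.71, 2.16), ecs(3.44, 2.00))   # CMIP5-based ECS a few per cent higher
```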


Greg Goodman says:
May 24, 2013 at 3:22 pm
////////////////////////////////////////////
Well done.
I (and many others) have been suggesting for years that volcanoes are used as a fudge factor to give the impression that models and their projections (predictions) are more capable than they truly are. Now we see the role of smoothing in bringing about this impression.
You are right to observe that there ought to have been no smoothing of volcano forcing.
Greg (Greg Goodman says: May 25, 2013 at 3:31 am) also points out the questionable adjustment made to the ARGO data. It is important to bear in mind that this makes the ARGO data set potentially unreliable, at least as far as its tuning to pre-ARGO data sets is concerned. The full implications of this questionable adjustment should not be overlooked whenever one discusses OHC or ARGO data.
Steve Mosher says, “Let me put the importance of this metric into perspective: every degree of C in uncertainty is worth about 1 trillion dollars a year if you are planning to mitigate.”
Are you saying that the “C” in CAGW will be a trillion dollars in damage per 1 degree rise, or are you claiming it will cost one trillion dollars to change the T one degree C? Either way I call B.S. on your numbers, so please show me your power. (I actually think a drop of one C would be far more costly, and reduced CO2 means less food, and more land and water required to produce said food.)
bobl says:
May 24, 2013 at 5:30 pm
Thank you Richard, that’s exactly what I was trying to say, I was thinking about how energy lost from the surface, by convection is radiated to space, and whether CO2 partial pressure plays into the efficiency of that process.
1. CO2 molecule takes up energy through collision with a non-radiating gas
2. CO2 molecule emits photon
————————————————————————————————–
I see all of this as a function of the residence time of the energy involved. So a GHG decreases the residence time of energy received via collision from a non-GHG, but can, 50% of the time, increase the residence time of outgoing energy received as OLWIR from the surface, by directing said energy back towards the surface. Clearly, if the GHG cools the upper atmosphere relative to a non-GHG molecule, then conduction from below, as well as convection, accelerates upwards.
An interesting thought experiment is what would happen in an atmosphere with zero GHG. According to radiation theory the atmosphere would be far cooler (some say 30 degrees) than the surface. However, the hotter surface would then continually net conduct energy to the atmosphere just above the surface; the atmosphere above the surface would then cool by conducting energy to ever higher elevations within the atmosphere, and the lower atmosphere would then continually receive ever more energy via conduction from the surface. Eventually, as energy is never lost, the atmosphere would establish an equilibrium with the surface; the lapse rate would be set via the molecules per sq m, with the T established not by different vibrational rates of each molecule, as they would equalise, but by the number of molecules hitting the measuring instrument (the more mass per m2, the higher the specific heat per m2). Eventually, in this non-GHG world, you would not have back radiation to the surface, but “back conduction” to the surface, thereby increasing the specific heat above the S-B equation.
So this is my assertion, based on David’s Law of physics, which reads, “Only two things can affect the energy content of any system in a radiative balance. Either a change in the input, or a change in the “residence time” of some aspect of those energies within the system.”
26 May: UK Telegraph: Louise Gray: Hay Festival 2013: global warming is ‘fairly flat’, admits Lord Stern
Lord Stern, who originally warned the Government about climate change, has admitted that global warming has been “fairly flat” for the last decade.
“I note this last decade or so has been fairly flat,” he told the Telegraph Hay Festival audience.
He said the reasons were because of quieter solar activity, aerosol pollution in certain parts of the world blocking sunshine and heat being absorbed by the deep oceans.
Lord Stern pointed out that all these effects run in cycles or are random so warming could accelerate again soon.
“In the next five to ten years it is likely we will see the acceleration because these things go in cycles,” he warned…
He said it was an “illusion” to claim that the short term flat line in global warming means that global warming is no longer a threat.
“It is a dangerous extrapolation of the short term phenomenon into a long term trend when the underlying responses for long term trends in terms of rising greenhouse gases are well understood and clear.”
Lord Stern also said he has written to the Prime Minister urging him to introduce a target to decarbonise electricity by 2030 as part of the Energy Bill, currently going through Parliament.
***He said investors need the policy clarity in order to build the infrastructure Britain needs in future…
http://www.telegraph.co.uk/culture/hay-festival/10081250/Hay-Festival-2013-global-warming-is-fairly-flat-admits-Lord-Stern.html
***yes, it’s all about those “investors” Lord Stern.
Richard M says:
May 25, 2013 at 7:04 am
The drop in sensitivity by simply factoring in a microsite bias also applies to any other temperature record adjustments. For example, by removing the OHC adjustments Greg Goodman mentioned, or all the historic adjustments that lower previous temperatures, I would expect to see a much, much smaller sensitivity. IOW, their calculations are only as good as the data itself.
I wonder what the number would be if only raw temperature data was used …
########################
Raw data is garbage. You obviously haven’t looked at it.
ruvfsy says:
May 24, 2013 at 7:50 pm
So Mosh,
Where is the fine line between denialism and lukewarmerism?
1.2 per doubling of CO2?
##################################
first principles will get you to 1.2C. I think we put the line of demarcation at 1C. But as folks get more evidence that might be pushed down.
Here’s the difference between a lukewarmer and a CAGWer: looking at the same PDF for sensitivity that ranges from 1 to 6, the lukewarmer will note that over half of the PDF falls below 3C. The CAGWer will talk about everything above 3C.
Simple point. The range is uncertain. There is room for many beliefs. But, I discount people who say they KNOW the value is low. I discount people who say they FEAR the value is high.
Just look at the best knowledge we have, don’t whine that this knowledge is imperfect. Look at the best understanding and describe it without over-reaching. The PDF runs from about 1 to 6. It’s a good bet that the true value is less than 3C. If you can say that, you are a lukewarmer.
A. Humans add GHGs
B. GHGs warm, they do not cool the planet.
C. How much? Over time periods of 100 years or less, doubling is more likely to create less than 3C warming than it is to create more than 3C warming.
Steven,
“B. GHGs warm, they do not cool the planet.”
According to the data from NASA last year: “Carbon dioxide and nitric oxide are natural thermostats,” explains James Russell of Hampton University, SABER’s principal investigator. “When the upper atmosphere (or ‘thermosphere’) heats up, these molecules try as hard as they can to shed that heat back into space.”
That’s what happened on March 8th when a coronal mass ejection (CME) propelled in our direction by an X5-class solar flare hit Earth’s magnetic field. (On the “Richter Scale of Solar Flares,” X-class flares are the most powerful kind.) Energetic particles rained down on the upper atmosphere, depositing their energy where they hit. The action produced spectacular auroras around the poles and significant upper atmospheric heating all around the globe.
“The thermosphere lit up like a Christmas tree,” says Russell. “It began to glow intensely at infrared wavelengths as the thermostat effect kicked in.”
http://science.nasa.gov/science-news/science-at-nasa/2012/22mar_saber/
Nic Lewis says:
May 24, 2013 at 2:11 pm
Steven Mosher wrote:
“I think it might be instructive for WUWT readers to understand how Anthony’s claims about microsite bias would play into your calculations. For example, if one assumed that the land warming was biased by .1C per decade from 1979-current, what would that do to the sensitivity calculation?”
Good point, Steve. That assumption would reduce the increase in global temperature between the 1860-79 mean and the 2003-12 mean from 0.76 C to about 0.68 C. All the climate sensitivity estimates, and their uncertainty ranges, would then reduce by about 11%. So a sensitivity of 1.7 C would change to just over 1.5 C, for example.
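For anyone wanting to check the arithmetic in that reply, a minimal sketch follows. The land fraction and the centring of the bias period are assumptions, chosen only to illustrate how a 0.1 C/decade land-only bias from 1979 could translate into roughly 0.08 C in the global mean and an ~11% reduction in the sensitivity estimates.

```python
# Sketch of the adjustment described above, under stated assumptions:
# a land-only bias of 0.1 degC/decade from 1979, ~29% land fraction,
# applied to the 2003-12 mean (centred around 2007.5).
bias_per_decade = 0.10           # degC per decade over land (assumed)
land_fraction = 0.29             # assumed share of the global mean affected
decades = (2007.5 - 1979) / 10   # about 2.85 decades

global_bias = bias_per_decade * decades * land_fraction   # about 0.08 degC
dT_original = 0.76
dT_adjusted = dT_original - global_bias                    # about 0.68 degC

print(round(global_bias, 2), round(dT_adjusted, 2))
print(round(100 * (1 - dT_adjusted / dT_original), 1))     # about an 11% reduction in dT,
                                                           # and hence in the sensitivity estimates
```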
I hope Anthony and others take note of this. What this does is allow people to frame their concerns about temperature accuracy in the LARGER PICTURE of the scientific debate over sensitivity.
Look at the equations for ECS and TCR. People need to relate their skepticism to these equations.
That way they are part of the debate.
HR says:
May 24, 2013 at 9:08 am
Jim Cripwell
In order to understand science you need a healthy dose of caution. The limits of our data and understanding mean we must pepper our conclusions with appropriate caveats and/or uncertainty ranges. You seem to completely misunderstand this and instead favour the idea of perfection or nothing. The unfortunate truth is that most of the time science is about being less wrong rather than about being right; you need to moderate your skepticism appropriately.
#####################################
Yes, Many have pointed out to Jim that he needs to be skeptical about his skepticism.
Skepticism in science is a TOOL, it is not a philosophy. By Jim’s definitions we can never know anything, which is fine for philosophical skepticism, but death to science, which operates on incomplete, inconclusive data and induction. There is a debate in science. People can join that debate by following Nic’s path. So there is a seat for everybody at that table. Crying that the table doesn’t exist will get you ignored. Complaining that you want to eat at a different table will get you ignored. There is a debate. It’s a scientific debate. It’s about climate science and the most important question we can ask: how much warmer? Arm waving will get you ignored.
Attacking fundamental physics will get you ignored. Screaming fraud will get you ignored. There is a debate. There is a question on the table and open seats. Do like Nic and take a seat.
John Day says:
May 25, 2013 at 8:14 am
@richard M
> around 30% of the sun’s energy is absorbed directly in the atmosphere just to start with…
I think you’re confusing absorption with albedo. 30% of the sun’s energy is reflected by the clouds, 3% is absorbed by the clouds. The atmosphere absorbs only 13% of solar irradiation directly (not counting clouds). Of the non-reflected energy, 84% (the ‘elephant’) reaches the surface and oceans, 4% of that is reflected back. The remainder of that (~80%, aka the ‘elephant’) is re-radiated as IR heat. Yes, 90% of this is absorbed by the atmosphere, but its source was the surface. And yes, that heat absorbed by the atmosphere, mostly by water vapor, is significant (as I said in my post) as GHE warming. CO2 plays a minor role, again as I said in my post.
I am using the KT energy budget diagram as my source. Where did you get your numbers? In addition, my 30% was computed after you subtract out albedo and includes clouds, which may have led to some confusion. If you add the atmospheric sources you have 78+80+17 = 175 W/m2, and that does not include thermalized energy. That’s a lot of energy.
http://rabett.blogspot.com/2012/08/one-kt-diagram.html
Clearly CO2 can interact with any molecule in the atmosphere so I didn’t limit it. I agree the effect is small but so is the greenhouse contribution of CO2 compared to water vapor and clouds. We have two small effects and we don’t know exactly how they work in total. That was my point.
Steven Mosher says:
May 26, 2013 at 7:59 am
Raw data is garbage. You obviously haven’t looked at it.
I happen to agree. The problem is very biased people think they know how to adjust the data. As has been proven time and again in medical research, researcher bias always affects the results towards the bias. We can be 99+% assured that the data is biased on the warm side. Being mathematically inclined, I’m more of the opinion that the errors are likely to cancel out and it’s probable we don’t even know all of the factors that make it bad. That makes using the raw data as good as (and probably better than) anything else.
Steven Mosher says:
May 26, 2013 at 8:08 am
____________________________
Attempts at appearances of reasonableness only go so far.
There isn’t any compelling argument that a doubling of CO2 produces anywhere near as much as 3C.
All this middle- road stuff only puts half your headlights in the opposing lane.
Steven Mosher. You have made this personal. Let me reply in the same spirit. You misquote what I write, and give a completely false impression of what I believe. I have stated over and over again that CAGW is a perfectly viable and reasonable hypothesis. But that is all it is; just a hypothesis with no empirical data to back it up, and prove it is correct. You accuse me of bringing nothing to the table. That is false. You claim that by bringing hypothetical estimates of a numeric value of climate sensitivity to the table, you are somehow proving that CAGW is correct. You insist that there is no categorical difference between estimates and measurements. All I try to point out is that, until you have actual measurements of climate sensitivity, you cannot make CAGW any more than a hypothesis. I don’t bring nothing to the table. I point out that what you and the warmists bring to the table is not proper physics.
The fundamental issue, which you refuse to discuss, is whether the IPCC statements in the SPMs that there is a 95% or 90% probability of something being correct have any basis in science. I maintain that while CAGW is a viable hypothesis, that is all it is, and the IPCC claims of high probabilities that things are correct are scientific garbage.
Steven’s claim that you must “sit at the table” (i.e. accept the basic paradigm) to have an impact on a scientific discussion is nonsense. Both ulcers and plate tectonics are examples that prove his assertion is wrong (and there are many others). Sorry Steven, making erroneous assertions to try and force people to your way of thinking only detracts from your credibility.
@RobertM
> …my 30% was computed after you subtract out albedo and includes clouds
> which may have led to some confusion.
What confusion? I was not confused. Your statement was very clear and was simply wrong:
“…around 30% of the sun’s energy is absorbed directly in the atmosphere just to start with…”
>Where did you get your numbers?
http://education.gsfc.nasa.gov/ess/units/unit2/u2l5a.html (see budget diagram under Resources)
Steven Mosher says:
May 26, 2013 at 8:08 am
ruvfsy says:
May 24, 2013 at 7:50 pm
So Mosh,
Where is the fine line between denialism and lukewarmerism?
1.2 per doubling of CO2?
##################################
first principles will get you to 1.2C. I think we put the line of demarcation at 1C. But as folks get more evidence that might be pushed down.
————————————————————————
Let it be recorded here, Steven Mosher considers Richard Lindzen to be a denier.
Also Mr Mosher, I notice you avoided an answer to my questions with regard to your trillion dollar per degree C statement.
I’m a believer in the standard assertion, that GHGs like H2O, CO2, and O3 can and do absorb small sections of the LWIR surface-emitted radiation, and thereby raise the Temperature of that atmosphere, at whatever altitude the absorption occurs; quite high, in the case of O3.
Now whatever means, by which, the Temperature of the atmosphere is increased; the result of such a higher Temperature, has to be an increase in the rate of radiative cooling of that atmosphere to outer space. Higher Temperature things radiate faster; all else being equal.
Well increasing atmospheric Temperature, also should enhance convective transport of heat energy to higher altitudes, to be lost to space.
So increasing the atmospheric Temperature, beyond what the natural altitude lapse rate would dictate, should permit faster cooling to space, with a lower surface Temperature; because of the rapid radiative transfer of energy from the surface to higher altitude GHGs, bypassing the slower conduction/convection mechanisms.
For the life of me, I can’t even imagine by what mechanism a higher atmospheric Temperature can transport heat energy down to the surface, contrary to what the second law mandates for actual heat energy transport. The net heat flow must be from the warmer surface to the cooler upper atmosphere. True; the smaller Temperature gradient between a lowered-Temperature surface and a Temperature-enhanced upper atmosphere (via LWIR radiation) must diminish the rate of conductive heat energy transport upwards in the atmosphere.
And of course, we know that downward LWIR radiation from the warmed atmosphere is strongly absorbed in a thin layer of mostly ocean surface, leading to accelerated evaporation, and transfer of latent heat energy back into the atmosphere.
So yes I believe (more) GHGs increase atmospheric Temperatures; but no, I don’t believe that increases surface Temperatures; it just increases the upper atmosphere radiative cooling rate.
Cold things don’t cool faster than hot things; there’s that T^4 problem there.
Readers of WUWT, and all friends of “Blackbody Radiation” (including moi) might be interested in the following narrative, from the June 2013 issue of Optics & Photonics News; a regular publication of The Optical Society of America; one of the foundation bodies of the American Institute of Physics; in their “Optical Feedback” (letters) section, from one Edward Collett of New Jersey.
He comments on an article in the April 2013 OPN issue regarding a less than happy episode between Albert Einstein, and S.N. Bose, of Bose-Einstein Statistics renown.
Bose had published a paper on this statistics, and Einstein followed with a paper he couldn’t have written without seeing the Bose paper. Einstein never thought to include Bose by invite, as a co-author of his paper.
It seems that in 1905, Einstein had recognized the inconsistency of Max Planck’s derivation of the Black Body Radiation formula.
“it was part quantum and part classical”.
Einstein spent the next 20 years trying to formulate Planck’s radiation law in completely Quantum Mechanical terms. He failed and by 1925, Einstein had not developed any part of modern Quantum Mechanics. No wonder Einstein didn’t like the whole idea of quantum theory.
The above freely excerpts from Collett’s letter, and he is relating from the literature on Bose and Einstein.
Bottom line is, neither Einstein himself nor anyone else has derived BB radiation laws from Quantum Mechanical theory. It is purely classical physics, and quite fictional at that.
Yes Max Planck suggested that EM radiation came in integral chunks; same as pumpkins on a vine.
You can pick two pumpkins, or 17 pumpkins, but not 2.71828 pumpkins; nor photons either.
Planck placed no restrictions on the size (energy) of either photons, or pumpkins. He simply said that the photon energy, and the associated wave frequency were related by E = h (nu)
That makes h = energy times time (action), so h is the action in each cycle of the associated wave frequency of the photon.
(nu) can range from zero to infinity, sans both ends, without restriction, so the photon energies are in no way quantized; just as the mass and size of pumpkins are not quantized.
Quantum theory deals with the actual physical energy levels of electrons et al in real physical materials. BB radiation theory, incorporates the real physical properties of no material of any kind; and is in every way non-existent; yet such an important step in the evolution of modern physical understanding of our universe.
By all accounts, S.N. Bose was one very smart guy, and apparently a very nice guy too. One of the great Indian physicists, like Raman and Chandrasekhar. Forgive me if I have left out any of the well known Indian Physics “biggies”.
george e. smith says:
May 26, 2013 at 7:34 pm
Good analysis as usual, George.
@george e. smith
> Bose had published a paper on this statistics, and Einstein followed with a
> paper he couldn’t have written without seeing the Bose paper. Einstein
> never thought to include Bose by invite, as a co-author of his paper.
Are you referring to Bose’s watershed paper “Plancks Gesetz und Lichtquantenhypothese”, Zeitschrift für Physik, 1924? (If you were referring to another Bose paper, please elaborate.)
If so, you’ve got the historical details completely wrong here.
Einstein had indeed seen Bose’s paper, because Einstein wrote it! Bose (the ‘bos’ in boson) had requested Einstein to translate it into German for him. He had unsuccessfully tried to get it published elsewhere, but was rejected because his startling new theory about the distribution of energy in a photon gas didn’t coincide with the ‘consensus’ theory (classical Maxwell-Boltzmann distribution of ordinary gases).
http://en.wikipedia.org/wiki/Satyendra_Nath_Bose
As for Planck’s Black-Body Radiation Law being “part quantum and part classical” and Einstein’s involvement, you’ve got that twisted too. Planck formulated the law in 1900 using only empirically derived constants, under classical assumptions. It wasn’t until 1914 that he further expressed it as a statistical distribution:
http://en.wikipedia.org/wiki/Planck's_law#CITEREFPlanck1914
So that statement about “in 1905, Einstein had recognized the inconsistency of Max Planck’s derivation of the Black Body Radiation formula” is BS. (What inconsistency?) What Einstein did in 1905 (besides Relativity) was to discover the photoelectric effect (scattering of photons as light). It was for this photoelectric work that Einstein received his only Nobel Prize in 1921.
Planck rightfully deserves credit as the ‘father of quantum theory” because the Planck Relation (with its famous constant ‘nu’: e=h*nu) and the radiation law are fundamental “planks” (sorry) in Quantum Theory. Planck’s Relation (like Einstein’s e=mc2 and Boltzmann’s S=k*logW) is amazing because it reveals a remarkable relationship between two worlds that previously seemed unrelated.
Einstein gets supporting credit too, for his photoelectric effect, which was one of the earliest portrayals of photons as particles.
@me>…. famous constant nu…..
Oops, I meant Planck’s famous constant ‘h’ of course. Nu is the photon’s frequency.
“””””……John Day says:
May 27, 2013 at 4:29 am
@george e. smith
> Bose had published a paper on this statistics, and Einstein followed with a
> paper he couldn’t have written without seeing the Bose paper. Einstein
> never thought to include Bose by invite, as a co-author of his paper……”””””
Well John, if you read my post, you would see I merely relayed the essence of a letter published in OPN for June 2013.
I suggest that the best place for your learned rebuttal of that “BS”, is for you to send it to the OPN feedback column, for them to publish; they always seek to learn the truth, so they would be most appreciative of Wiki’s definitive reporting on the matter.
As for Einstein having “written” Bose’s circa 1924 paper, your Wiki source merely says that Einstein simply translated into German; he did not “write it”.
Yes Einstein belatedly received his Nobel prize in physics for the work on the photo-electric effect.
And for your information, the photo-electric effect has nothing whatsoever to do with “the scattering of photons as light”.
It relates to the emission of electrons from certain metals, when irradiated by EM radiation.
Classical physics had no explanation for the PE effect (and still doesn’t).
The emission of electrons (or not) is quite unrelated to the intensity of the EM radiation; other than the number of electrons emitted (if any).
What determines the emission (or not) is the frequency or wavelength of the radiation. No matter how weak the irradiance – even down to a single photon – if the photon energy [h.(nu)] exceeds a certain threshold, electron emission can occur (and with quite high quantum efficiency).
But even a kilowatt of power from a CO2 laser, at 10.6 microns wavelength, will not release a single photo-electron from a material that will emit an electron with as high as 90% QE, from a single 2 eV photon.
So if you rely on wiki for your source of factual scientific information; don’t be surprised if they sometimes feed you “BS”.
And do send your rebuttal to OPN feedback column; I can’t wait to see it in print.
@george e. smith
> … if you read my post, you would see I merely relayed the
> essence of a letter published in OPN for June 2013…
Yet, you had no problem in letting dbstealey and the rest of us think it was your analysis too. George, no offense, but you occasionally pontificate a bit too much on topics you haven’t quite mastered. So, as remediation, I suggest that _you_ write the rebuttal to OPN.
> And for your information, the photo-electric effect has nothing whatsoever
> to do with “the scattering of photons as light”.
Oops, typo, I meant to say “as electrons”. Mea culpa, peccavi. [And perhaps I should have put “scattering” in quotes.] But, PE _is_ a kind of scattering, in a broad sense (if you squint a little, so you can’t distinguish between photons and electrons). So, arriving particles (photons) arrive, interact with atoms and leave in different directions (as electrons). That is ‘scattering’, which is what I was trying to convey (by slightly abusing the term).
In fact, the photo-electric effect is just one of five well-known ‘scattering’ interactions of photons and atomic matter:
1. Coherent Scattering
2. Photo-disintegration
3. Pair Production
4. Compton Effect or scattering
5. Photoelectric Effect
http://whs.wsd.wednet.edu/faculty/busse/mathhomepage/busseclasses/radiationphysics/lecturenotes/chapter12/chapter12.html
(Squint a little when you look at the diagrams and you’ll see that they all result in a kind of ‘scattering’ effect)
>… Einstein simply translated into German; he did not “write it”.
Now you seem to be trying to hide your previous endorsement of this allegation that Einstein had somehow plagiarized Bose’s idea:
” Bose had published a paper on this statistics, and Einstein followed with a paper he couldn’t have written without seeing the Bose paper. ”
The historical facts are: 1) Bose wrote a paper but could _not_ get his paper published 2) he sent the paper to Einstein and asked for his help to get it published.
So the above allegation is clearly false and misleading. Of course Einstein had seen the paper! That was the point I was making! So Einstein translated (i.e. ‘re-wrote’) the paper in German. Do you disagree with Wikipedia and other references on the historical accuracy of this? [BTW, Wikipedia is a collection of peer-reviewed documents (not that that guarantees total accuracy of course)]
Einstein was no saint, but it is well known that he did enthusiastically endorse Bose and his work to the international scientific community. Subsequently, Bose was promoted and given a 2-year paid sabbatical to visit Europe to collaborate with his “peers” (even though he did not have a PhD). Dirac named the ‘boson’ in his honor.
But now I am pontificating. So, pen down.
😐
Well John, I just lost an hour’s worth of typing, when PG&E decided to put a four way stop sign on my local power grid. My LED reading lamp, was only out for sixty seconds; but my internet was out and the loss was total, when I tried to post it in the dark.
So just a short comment. Wiki of course does NOT publish peer reviewed papers. They publish what someone unknown wrote. Now they certainly list peer reviewed papers, such as the German Bose paper you mentioned.
But did you look up that paper itself to be certain that the wiki author correctly quoted from it, or from any of the other references?
Some of the Wiki authors, are English language impaired, so what they write versus what they cite in bibliographies, are two quite different things.
As for what I personally write, 95% of it I simply type from memory; so yes, sometimes I disremember it. The other 5%, is typically data directly excerpted from reference handbooks, and other widely available texts. I almost always cite my sources, when I do that.
As for Wiki, I NEVER consult them for information. I do sometimes check them when others, such as you, give specific links to them. Mostly, that is a waste of time.
And Stealey evidently had no trouble discerning the difference between what I excerpted from Collett’s letter, and what was subsequently my own personal input. So what was your problem with that?
You should really start thinking for yourself; there is little of this that any WUWT reader can’t understand. So stop citing Wiki references, unless you first read the peer reviewed papers they list, to ensure they quoted them correctly.
@george e. smith
>Wiki of course does NOT publish peer reviewed papers.
Wikipedia is not a system for publishing scholarly papers (yet). It is an on-line encyclopedia that may be reviewed, collaborated on, or edited by anyone, including experts and yourself. In theory, such a ‘collaborative public encyclopedia’ cannot possibly work, but in practice it works remarkably well. It’s not perfect, but, like a living thing, Wikipedia is evolving and getting better.
George, I know you want to change the subject (which is already far off-topic from ‘aerosol adjusted forcings’) but you really need to face the music and apologize for retweeting those errors in that OPN letter. Just say you’re sorry for any misinformation that you may have inadvertently said or repeated in regard to the 1924 Bose-Einstein paper. There, I said it for you.
😐
@george e. smith
>But did you look up that paper itself to be certain that the wiki author correctly quoted from it;
Most of these early Zeitschrift manuscripts are locked up behind Springer paywalls ($40 to unlock). But the Bose paper is an exception:
http://tu-dresden.de/die_tu_dresden/fakultaeten/fakultaet_mathematik_und_naturwissenschaften/fachrichtung_physik/itp/tp/lehre_dir/vorlesungen_dir/ws_2012_2013/folder.2012-12-21.5820555304/bose_1924.pdf
You’ll see that Bose received full credit as the only author of the paper (“Bose, Dacca University, India”). Einstein’s name doesn’t appear until the end, in the Translator’s Note, where he warmly praises the work as “important progress in my opinion”. Einstein was already famous in 1924, so that tiny endorsement gave Bose the huge opportunity he was seeking.