By Patrick J. Michaels and Paul C. “Chip” Knappenberger
We have two new entries to the long (and growing) list of papers appearing in the recent scientific literature that argue that the earth’s climate sensitivity—the ultimate rise in the earth’s average surface temperature from a doubling of the atmospheric carbon dioxide content—is close to 2°C, or near the low end of the range of possible values presented by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). With a low-end warming comes low-end impacts and an overall lack of urgency for federal rules and regulations (such as those outlined in the President’s Climate Action Plan) to limit carbon dioxide emissions and limit our energy choices.
The first is the result of a research effort conducted by Craig Loehle and published in the journal Ecological Modelling. The paper is a pretty straightforward determination of the climate sensitivity. Loehle first uses a model of natural modulations to remove the influence of natural variability (such as solar activity and ocean circulation cycles) from the observed temperature history since 1850. The linear trend in the post-1950 residuals from Loehle’s natural variability model was then assumed to be largely the result, in net, of human carbon dioxide emissions. By dividing the total temperature change (as indicated by the best-fit linear trend) by the observed rise in atmospheric carbon dioxide content, and then applying that relationship to a doubling of the carbon dioxide content, Loehle arrives at an estimate of the earth’s transient climate sensitivity—transient, in the sense that at the time of CO2 doubling, the earth has yet to reach a state of equilibrium and some warming is still to come.
Loehle estimated the equilibrium climate sensitivity from his transient calculation based on the average transient:equilibrium ratio projected by the collection of climate models used in the IPCC’s most recent Assessment Report. In doing so, he arrived at an equilibrium climate sensitivity estimate of 1.99°C with a 95% confidence range of it being between 1.75°C and 2.23°C.
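Loehle’s two-step procedure—scale an observed trend to a CO2 doubling, then convert transient to equilibrium—can be sketched in a few lines. The figures below are illustrative stand-ins chosen to land near his published result, not his actual inputs, and the transient:equilibrium ratio is likewise assumed:

```python
import math

# Illustrative stand-ins, NOT Loehle's actual inputs: ~0.44 C of
# residual post-1950 warming while CO2 rose from ~310 to ~390 ppm,
# and an assumed model-mean transient:equilibrium ratio of 0.67.
delta_t = 0.44                   # deg C, trend in natural-variability residuals
c_start, c_end = 310.0, 390.0    # ppm CO2

# Rescale the observed change to a doubling, assuming warming ~ log(CO2):
tcs = delta_t * math.log(2) / math.log(c_end / c_start)

# Convert transient to equilibrium sensitivity:
ratio = 0.67
ecs = tcs / ratio
```

The rescaling step assumes temperature responds to the logarithm of the CO2 concentration, which is the conventional form for CO2 forcing.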
Compare Loehle’s estimate to the IPCC’s latest assessment of the earth’s equilibrium climate sensitivity, which assigns a 66 percent or greater likelihood that it lies somewhere in the range from 1.5°C to 4.5°C. Loehle’s determination is more precise and decidedly toward the low end of that range.
The second entry to our list of low climate sensitivity estimates comes from Roy Spencer and William Braswell, published in the Asia-Pacific Journal of Atmospheric Sciences. Spencer and Braswell used a very simple climate model to simulate the global temperature variations averaged over the top 2000 meters of the global ocean during the period 1955-2011. They first ran the simulation using only volcanic and anthropogenic influences on the climate. They ran the simulation again adding a simple take on the natural variability contributed by the El Niño/La Niña process. And they ran the simulation a final time adding in a more complex situation involving a feedback from El Niño/La Niña onto natural cloud characteristics. They then compared their model results with the set of real-world observations.
What they found was that the complex situation involving El Niño/La Niña feedbacks onto cloud properties produced the best match to the observations. And this situation also produced the lowest estimate for the earth’s climate sensitivity to carbon dioxide emissions—a value of 1.3°C.
Spencer and Braswell freely admit that using their simple model is just the first step in a complicated diagnosis, but also point out that the results from simple models provide insight that should help guide the development of more complex models, and ultimately could help unravel some of the mystery as to why full climate models produce high estimates of the earth’s equilibrium climate sensitivity, while estimates based in real-world observations are much lower.
Our Figure below helps to illustrate the discrepancy between climate model estimates and real-world estimates of the earth’s equilibrium climate sensitivity. It shows Loehle’s determination as well as that of Spencer and Braswell, along with 16 other estimates reported in the scientific literature beginning in 2011. Also included in our Figure are both the IPCC’s latest assessment of the literature and the characteristics of the equilibrium climate sensitivity from the collection of climate models on which the IPCC bases its impacts assessment.
Figure 1. Climate sensitivity estimates from new research beginning in 2011 (colored), compared with the assessed range given in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the collection of climate models used in the IPCC AR5. The “likely” (greater than a 66% likelihood of occurrence) range in the IPCC Assessment is indicated by the gray bar. The arrows indicate the 5 to 95 percent confidence bounds for each estimate along with the best estimate (median of each probability density function; or the mean of multiple estimates; colored vertical line). Ring et al. (2012) present four estimates of the climate sensitivity and the red box encompasses those estimates. The right-hand side of the IPCC AR5 range is actually the 90% upper bound (the IPCC does not actually state the value for the upper 95 percent confidence bound of their estimate). Spencer and Braswell (2013) produce a single ECS value best-matched to ocean heat content observations and internal radiative forcing.
Quite obviously, the IPCC is rapidly losing its credibility.
As a result, the Obama Administration would do better to come to grips with this fact and stop deferring to the IPCC findings when trying to justify increasingly burdensome federal regulation of carbon dioxide emissions, regulation that both manipulates markets and restricts energy choices.
References:
Loehle, C., 2014. A minimal model for estimating climate sensitivity. Ecological Modelling, 276, 80-84.
Spencer, R.W., and W. D. Braswell, 2013. The role of ENSO in global ocean temperature changes during 1955-2011 simulated with a 1D climate model. Asia-Pacific Journal of Atmospheric Sciences, doi:10.1007/s13143-014-0011-z.
=========================================================
Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”
Mac the Knife says:
February 28, 2014 at 11:19 am
Gary Pearse says:
February 28, 2014 at 10:28 am
>>>>>>>>>>>>>>>>>>>>
I am with you. The Holocene has been very stable. If CO2 does have any effect, it is minor and quickly countered by negative feedbacks.
The Super El Niño was 1997-1998. You can see the inflection point in the % change in Earthshine albedo measurements as the climate switched gears: Graph
========================================================
If CO2 doesn’t do what they claimed it would (Harm Ma’ Nature) then there is no basis for the USEPA or any other nation’s equivalent to regulate it.
The greedy fingers reaching for cap and trade etc are cut off.
I am having a hard time separating Anthony’s and others’ comments from the referenced studies or papers.
Am I the only one?
Gotta be a way to use colors or fonts or another way to read the referenced papers versus the moderators and such, ya think?
Gums whines…
Valid physics tells us there is no warming caused by water vapor or any other greenhouse gas. A planet’s surface may be partly warmed by direct solar radiation, but even that is not found to be necessary on some other planets. Nor is any radiation from a colder atmosphere able to raise the temperature of a surface because that would be a process in which entropy had decreased. Radiation from the atmosphere plays a part in slowing surface cooling, but what does most of the slowing are nitrogen and oxygen molecules which slow conduction from the surface.
But none of these processes is what plays the main role in setting and controlling surface temperatures. The amount of solar radiation absorbed by the atmosphere and the thermal gradient that forms autonomously in a gravitational field according to the laws of physics are the main factors determining these temperatures. This is very obvious on other planets, but as we stand in the nice warm sunshine on Earth we get somewhat confused as to what’s warming what. Just remember that there is absolutely no evidence in temperature records that the greenhouse gas water vapor increases mean surface temperatures. That fact is a bit of a bother for those who try to imagine that the temperature trends show sensitivity to carbon dioxide, when in fact they are mostly just showing the main 1,000-year and 60-year natural cycles regulated by the planets.
You can easily calculate the climate sensitivity from cloud forcing observations.
There was a 5% decrease in cloud cover in the 1990s (0.9 W/m^2), which caused 0.3 deg of warming, or 0.06 deg per % cloud change.
Climate sensitivity is therefore near neutral, or about 1.2 deg C for the proposed CO2 doubling forcing of 3.7 W/m^2 (assuming this is real in the first place).
Very similar to Spencer and Braswell.
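Taking the comment’s round numbers at face value (they are the commenter’s figures, not established values), the implied arithmetic is:

```python
# All inputs are the comment's figures, taken at face value.
cloud_forcing = 0.9      # W/m^2, from a ~5% cloud-cover decrease in the 1990s
warming = 0.3            # deg C attributed to that forcing
f_2xco2 = 3.7            # W/m^2, proposed forcing for a CO2 doubling

per_percent_cloud = warming / 5.0    # ~0.06 deg C per % cloud change
lam = warming / cloud_forcing        # ~0.33 deg C per (W/m^2)
sensitivity = lam * f_2xco2          # ~1.2 deg C per doubling
```

The whole argument rides on attributing the 0.3 deg of warming entirely to the cloud change, which is the step a critic would contest.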
It’s the relevant concern if you buy into their notion that carbon dioxide is the most significant driver of climate. I don’t think there’s any evidence of that yet. Other drivers’ sensitivities and forcings could be much more relevant.
Possibly so. But we need to find out as much as we can about CO2 sensitivity in any case.
Not to Obama or the UN.
It is the “driving” concern overall. Obama won’t be with us forever. Meanwhile, we are defended by (and counting on) his outstanding failure of leadership to tide us over.
Where is this theory that ONLY man-made CO2 causes the problem coming from? Someone replied to one of my comments on a blog that their professor stated it is ONLY man-made CO2 … then went on about how the other “natural” CO2 was not harmful. HOW? What is different, IR-wise, about man-made CO2?
If we focus on the fact that we burn fossil fuels for the heat they produce, and that CO2 is a by-product (a minor greenhouse gas), we can apply common sense and a few calculations to show that the heat emissions from our energy use are four times the amount necessary to account for the actual measured rise in atmospheric temperature. We can then deduce that the effect of CO2 must be minor. We can then argue that the international push for CCS, carbon capture and storage, makes no sense. To reduce the CO2 concentration by 1 ppm, 9,000,000 tons must be removed: at what cost, and for what benefit? We can also argue that since nuclear power emits more than twice as much total heat as its electrical output, we should not permit or license any more nuclear plants, but that is what we are now doing. What kind of scientists do we have that cannot realize that heat is what causes temperatures to rise? The heat we emit can account for most of the things we are experiencing, i.e. rising water and land temperatures and the melting of glaciers. In the past century annual energy usage has increased tenfold. Sure, CO2 has increased 25%, but it is HEAT that should guide us. Will our elected officials respond? Try even getting an acknowledgement of receipt. If I sound bitter and frustrated, it’s because I am.
Craig Loehle, I have a question on the difference between transient and ‘equilibrium’ sensitivity, with respect to atmospheric CO2.
The splice of the Law Dome ice core and the Keeling curve shows that log(CO2) is essentially biphasic: linear from 1800-1957 and from 1958-2014. The ratio of the slopes is about 4.27. This kink in the atmospheric CO2 ‘forcing’ should surely be evident in the temperature record. If there is a decade or so between transient and ‘equilibrium’, then there should be a smoother kink in the temperature record around 1967; a 20-year tau would place the kink in the late ’70s or early ’80s.
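The commenter’s question—how far a response lag displaces the apparent kink—can be explored with a toy first-order relaxation model. Every number here (slopes, dates, tau) is made up for illustration; the point is that for a first-order lag the asymptotes of the response intersect at the kink date plus roughly tau:

```python
# Toy model: "forcing" ramps at one slope until 1958, then a steeper one,
# and the response relaxes toward it with time constant tau (hypothetical).
tau = 10.0  # years
years = list(range(1900, 2015))
forcing = [0.002 * (y - 1900) if y < 1958
           else 0.002 * 58 + 0.008 * (y - 1958)
           for y in years]

resp, t = [], 0.0
for f in forcing:
    t += (f - t) / tau   # one-year explicit step of dT/dt = (F - T)/tau
    resp.append(t)

# Long after the kink the response tracks the forcing with a lag of
# roughly tau years, so the apparent kink sits near 1958 + tau:
i = years.index(2010)
lag_years = (forcing[i] - resp[i]) / 0.008
apparent_kink = 1958 + lag_years
```

With a 10-year tau the apparent kink lands in the late 1960s, consistent with the comment’s reasoning; a 20-year tau would push it toward the late ’70s.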
Philip Haddad “we can apply common sense and a few calculations to show that the heat emissions from our energy use are four times the amount necessary to account for the actual measured rise in atmospheric temperature.”
Care to show us those calculations?
I’ll give a specific example. In 2008 energy use was 16 terawatts, which is equivalent to 50×10^16 BTU for that year. The mass of the atmosphere is 1166×10^16 pounds and it has a specific heat of 0.24. dH = M × Cp × dT: 50×10^16 = 1166×10^16 × 0.24 × dT. Solving for dT, the change in temperature is
50/(1166 × 0.24) = 0.17°F, the potential temperature rise if all the heat went there. The actual measured rise was 0.04-0.05°F (the slope of the line tangent at 2008 for temperature versus time). I hope this is adequate, but I would be happy to discuss it further.
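The arithmetic itself checks out as stated (whether atmospheric heat content is the right target is a separate question). In code, using the commenter’s own figures:

```python
# Commenter's figures, imperial units (BTU, lb, deg F), taken at face value.
annual_heat = 50e16          # BTU: ~16 TW of global energy use over one year
atmosphere_mass = 1166e16    # lb, mass of the atmosphere
cp_air = 0.24                # BTU/(lb*degF), specific heat of air

# dH = M * Cp * dT  =>  dT = dH / (M * Cp)
dT = annual_heat / (atmosphere_mass * cp_air)   # ~0.18 degF potential rise
```

The division gives about 0.179°F, which the comment truncates to 0.17°F.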
evanmjones says:
February 28, 2014 at 11:22 am
All this talk about climate sensitivity is playing their game.
“It’s the irrelevant concern.”
FIFY
evanmjones says: February 28, 2014 at 11:22 am
All this talk about climate sensitivity is playing their game.
It’s the relevant concern.
Perhaps the more relevant concern is whether warming will do more good than harm. Particularly since at the current rate we have over a century and a half before CO2 doubles.
“If anyone is familiar with London then this IR image clearly shows a large warm rectangle with a cool blob within. The cool blob is a lake in the Fairlop Waters Country park and the large yellow block on the right hand side is farmland. It is not what I expected. Notice that trees and water are ‘cool’.”
That’s not a picture of UHI. It’s a picture of SUHI. UHI has to do with the air temperature below the canopy layer. SUHI is the surface (think dirt) temperature. UHI and SUHI are related, but not in any simple way.
Here is an example of some of the biases in making IR images of LST and a comparison of SUHI and UHI
http://www.uv.es/juy/Doc/sobrino_et_al_2013_IJRS_UHI.pdf
Craig.
The first sentence of the paper is wrong.
Climate sensitivity is the response to any radiative forcing: lambda.
Say your lambda is, for example, 0.75 C per W/m^2.
The sensitivity to CO2 doubling (no feedbacks) is then 3.71 W/m^2 × 0.75.
Looking only at CO2 forcing (which is maybe 75% of the forcing) will give you the wrong answer.
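Mosher’s point in numbers, using his illustrative lambda and the standard 3.71 W/m^2 doubling figure (the 75% share of total forcing is his rough guess, carried through here as-is):

```python
lam = 0.75        # deg C per (W/m^2), the comment's example value
f_2xco2 = 3.71    # W/m^2, forcing from a CO2 doubling

co2_doubling_response = lam * f_2xco2   # ~2.8 deg C

# If CO2 supplies only ~75% of the total forcing, attributing all the
# observed warming to CO2 alone inflates the inferred sensitivity:
inflation_factor = 1 / 0.75             # ~1.33x overestimate
```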
@Jordan at 12:33 pm
Model statistics describe the models and nothing more. There is no justification to make the leap to average model behaviour somehow being a reliable indicator of climate.
1:43 pm
The main gripe in my earlier comment is the practice of assuming average model response has meaning, whereas individual “realisations” don’t.
I think that sums it up nicely. If the mean model response has any meaning, why should adding poorer models to the ensemble have greater effect on the meaning than good models?
But let me propose that while the mean of the ensemble has little meaning, the standard deviation of the ensemble has GREAT meaning. It is a direct measure of how unsettled the science is. Eliminate poorer models from the ensemble and the envelope narrows, the standard deviation shrinks, and the science becomes more settled. Not necessarily more correct, just more settled.
The article employs the phrase “the equilibrium climate sensitivity” (TECS). The “the” in this phrase implies TECS to be a constant e.g. 3 Celsius per doubling of the CO2 concentration.
TECS is the ratio between the change in the global surface air temperature at equilibrium and the change in the logarithm of the CO2 concentration. Information that TECS is a constant is not a product of scientific research conducted thus far. Thus, to assume TECS to be a constant is to fabricate this information.
I have raised the issue of sensitivity to water vapor on the SkS site in comments #20 and #25 after this comment pointed out that nowhere is there any discussion of the autonomous thermal gradient in any atmosphere. In case they delete comment #25, it reads …
Moderator: As Tom Dayton pointed out, there is no thread discussing the autonomous thermal gradient that evolves at the molecular level as the isentropic state of maximum entropy in a gravitational field – a now proven fact of thermodynamic physics which happens to have been the subject of my postgraduate research for several years. That is understandable, of course, because there is no need for any extra “33 degrees of warming” if Loschmidt was right. Seeing that no one has proved Loschmidt wrong, and modern physics has been used to prove him right, I’ll do occasional searches on SkS for the word “Loschmidt” (which does not appear anywhere on the site at the moment) and then perhaps respond to any post or comment thereon. Meanwhile you might like to search for any study which uses real world temperature and precipitation data to confirm that the sensitivity to a 1% increase in water vapor above a region is several degrees of warming. I happen to have reviewed a study (to be published in April) which shows the sensitivity is negative, which of course is what is to be expected because the Loschmidt effect causes even warmer surface temperatures which are then reduced because the wet lapse rate is less steep.
philohaddad
I see how you got there. I was thinking of comparing to a forcing like I was describing above. So taking:
5.101E+14 m^2 earth area
143,851 terawatt hours total energy use per Wikipedia 2008
1.43851E+17 watt hours
5.17864E+20 joules
1.64213E+13 watts
16.42134703 terawatts
0.032192407 W/m^2
The forcing would be only 0.032 W/m^2 for all that energy we used. The direct forcing from CO2 at half a doubling is about 1.9 W/m^2, so it is pretty small compared to that (1.7%), and it is 6.4% of the 0.5 W/m^2 I was describing above. I suppose taken all at once it could heat the atmosphere, but taken over a year the energy escapes so fast it really can’t make much of an impact. Good to know the relative amount, though. Thanks.
Another interesting question would be the relative concentration of that energy use. 70% of the area could be ignored (ocean), and probably 99% of land too (rural, guessing). So maybe the urban forcing is 0.032/(1-0.70)/(1-0.99) = 10.7 W/m^2. That is certainly not insignificant at a local level (it is also concentrated near the ground, which is not considered). That’s going to make a dent.
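The unit chain above can be reproduced directly; the 2008 energy figure and the urban-fraction guesses are the commenter’s:

```python
seconds_per_year = 365.25 * 24 * 3600
earth_area = 5.101e14                # m^2

energy_twh = 143_851                 # global energy use in 2008, TWh (per Wikipedia)
energy_j = energy_twh * 1e12 * 3600  # TWh -> Wh -> J
mean_power = energy_j / seconds_per_year   # ~1.64e13 W, i.e. ~16.4 TW
forcing = mean_power / earth_area          # ~0.032 W/m^2 globally

# Concentrated on the ~1% of land (30% of the surface) that is urban,
# as the comment speculates:
urban_forcing = forcing / 0.30 / 0.01      # ~10.7 W/m^2
```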
Steven Mosher: You have a good background in sensitivity, forcing / feedback, would you care to critique my comment at 11:21am? Am I making a sensible argument for low sensitivity given the pause in atmospheric temperatures? If not, why not? Thanks!
The sensitivity to carbon dioxide (even if it were positive) could not then be multiplied by extra water vapor because, as I explained in a comment on SkS, the assumption that, if the ocean were warmed then more evaporation would occur, is not correct. The rate of evaporation depends more on the temperature gap at the boundary and so, if the atmosphere supposedly warms first then that gap would narrow and evaporation decrease.
Then, even if they were right about water vapor warming, the sensitivity of water vapor would have to be about 5 to 8 degrees for each 1% of water vapor, as the level varies between about 1% and 4%. So they would have to show that a desert with only 1% water vapor above it would be about 15 to 24 degrees cooler than a rainforest with 4% above it at a similar latitude and altitude. Obviously no such real world data exists, and what data does exist demonstrates a cooling effect correlated with extra precipitation.
But of course the main problem is the incorrect SkS assumption that temperatures would be homogeneous throughout the troposphere in the absence of water vapor and other radiating molecules, because molecules in free flight after a collision cannot generate gravitational potential energy out of nothing. Modern physics can be used to explain why Loschmidt was correct about autonomous thermal gradients in solids, liquids and gases. Running a wire up the outside of a cylinder does not, however, bring about any perpetual motion of energy, because the wire also develops a thermal gradient and the combined system comes to a new and stable state of thermodynamic equilibrium. So the validity of the Loschmidt effect is all that is needed to show the greenhouse guesswork was incorrect, even though Loschmidt was wrong about perpetual motion.
Well, as expected, SkS true to form deleted all three of my detailed comments within one to two hours because they obviously had no valid counter arguments, and nowhere on their site will you find the word “Loschmidt” because that is the weak link (now a broken link) in their chain of deception.
I would like to suggest another paper by Stephen Schwartz of Brookhaven National Laboratory: Schwartz, S. E., 2007. Heat capacity, time constant, and sensitivity of Earth’s climate system. J. Geophys. Res., 112, D24S05, doi:10.1029/2007JD008746.
As I understand the paper, Dr. Schwartz estimates the climate sensitivity to a doubling of CO2 as 1.1 ± 0.5 K, basing his estimate on ocean heat content.
This version of the paper was attacked by a team of climatologists/modelers, and Dr. Schwartz subsequently revised his estimate upward by a small amount. His result appears not much different from that of Spencer and Braswell. Since the heat capacity of the oceans is orders of magnitude greater than that of the atmosphere, I believe that the estimates based on ocean models are more convincing.
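Schwartz’s method reduces to a ratio: the sensitivity parameter is the system’s time constant divided by its effective heat capacity. Using round numbers of roughly the size reported in the 2007 paper (a ~5-year time constant and ~17 W·yr per m^2·K of effective, mostly oceanic, heat capacity):

```python
tau = 5.0              # yr, climate system time constant (approx., per Schwartz 2007)
heat_capacity = 17.0   # W*yr/(m^2*K), effective heat capacity (approx.)
f_2xco2 = 3.7          # W/m^2, forcing from a CO2 doubling

lam = tau / heat_capacity    # ~0.29 K per (W/m^2)
ecs = lam * f_2xco2          # ~1.1 K, the scale of the paper's 1.1 +/- 0.5 K
```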
@PMHinSC
“Perhaps the more relevant concern is whether warming will do more good than harm. Particularly since at the current rate we have over a century and a half before CO2 doubles.”
Says the crab, when you warm the water from 20C to 25C.
If that is your argument, our beloved warmists will say: maybe you’re right for 2 degrees, but what if it is 4 or 6 degrees, blah, blah, blah, and then you are on the defensive. If you argue with this “even if the temperature…”, the “even if” will not be heard, and for them and for neutral listeners you have already admitted 2 degrees.
Please mind my english, but I hope you get the point anyway.
“Sensitivity to CO2 is …a constant”
Actually the physics tells us that the logarithm of CO2 is the relevant variable. That is why we use “doubling”. The process is multiplicative, not arithmetic.
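The “multiplicative, not arithmetic” point is usually written as the standard logarithmic approximation for CO2 forcing (the Myhre et al. form): each doubling contributes the same increment of forcing, whatever the starting level.

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Simplified logarithmic CO2 forcing (Myhre et al. form), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

first_doubling = co2_forcing(560, 280)    # ~3.7 W/m^2
second_doubling = co2_forcing(1120, 560)  # the same ~3.7 W/m^2 again
```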
The curve is probably exponential as the following analogy demonstrates. During WWII some blackout curtains were flimsy. So the people used two layers and maybe even three. But there came a point when adding more layers had no measurable effect. However, we cannot explore this curtain analogy using only CO2, because the infrared windows are open and closed in different parts of the spectrum depending on the molecule (CO2, CH4, H2O), and the windows overlap.
In any event, too much attention is paid to the GHG effect on land. The oceans make up 70% or so of the Earth’s surface and considerably more in the tropics as you can see by eyeballing a map. While the oceans do reflect a portion of the energy depending on roughness and the angle of the sun, that is not the relevant point. What counts is that most of the energy normal to the surface is absorbed by the oceans. That’s the key to the Earth’s climate.
You can easily determine this for yourself by looking at the infrared bands of a satellite image. The oceans show up as black. In fact you can use the infrared band to generate a precise and accurate map showing coastlines, middle infrared is best.
***
Gail Combs says:
February 28, 2014 at 2:24 pm
arxiv.org/pdf/0906.3625.pdf (A Bayesian prediction of the next glacial inception)
***
Thanks. An interesting paper, though the statistics are beyond me. It “predicts” the current interglacial lasting about 50,000 more years, a prediction similar to what our resident solar expert thinks is plausible.
Resolved: There are a nearly infinite number of at least semi-serious ways of looking at CO2 sensitivity under the single assumption that climate revolves around the CO2 molecule. Thanks to the newer ones that try to better match models with observation. It leans more to the serious and away from the permanent human condition of promising control of the weather by control of people. But it labors under that assumption nevertheless.