Still Another Low Climate Sensitivity Estimate
Guest post by Patrick J. Michaels and Paul C. “Chip” Knappenberger
Global Science Report is a weekly feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”
As promised, we report here on yet another published estimate of the earth’s equilibrium climate sensitivity that is towards the low end of the United Nations’ Intergovernmental Panel on Climate Change (IPCC) range of possibilities.
Recall that the equilibrium climate sensitivity is the amount that the earth’s surface temperature will rise from a doubling of the pre-industrial atmospheric concentration of carbon dioxide. As such, it is probably the most important factor in determining whether or not we need to “do something” to mitigate future climate change. Lower sensitivity means low urgency, and, if low enough, carbon dioxide emissions confer a net benefit.
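The definition above translates into simple arithmetic: warming scales with the logarithm of the CO2 concentration, so each doubling adds the same increment. A minimal sketch of that relationship, assuming the conventional 280 ppm pre-industrial baseline (the sensitivity values below are illustrative examples, not results from the post):

```python
# Illustrative arithmetic only: given a sensitivity value, the implied
# equilibrium warming for a CO2 level C relative to a pre-industrial
# baseline C0 scales with log2(C / C0), since each doubling adds the
# same warming. The 280 ppm baseline is the conventional figure; the
# sensitivity values are examples, not findings from the post.
import math

C0 = 280.0  # assumed pre-industrial CO2 concentration, ppm

def equilibrium_warming(sensitivity, concentration):
    """Equilibrium warming (deg C) at a given CO2 concentration (ppm)."""
    return sensitivity * math.log2(concentration / C0)

for ecs in (1.7, 3.0):  # a low estimate vs. a higher, model-like one
    print(f"ECS={ecs}: doubling -> {equilibrium_warming(ecs, 560):.1f} C, "
          f"quadrupling -> {equilibrium_warming(ecs, 1120):.1f} C")
```

With a sensitivity of 1.7°C, a doubling (280 to 560 ppm) gives 1.7°C and a quadrupling gives 3.4°C; at 3.0°C per doubling the same concentrations give 3.0°C and 6.0°C.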
And despite common claims that the “science is settled” when it comes to global warming, we are still learning more and more about the earth’s complex climate system—and the more we learn, the less responsive the earth’s average temperature seems to be to human carbon dioxide emissions.
The latest study to document a low climate sensitivity is authored by independent scientist Nic Lewis and is scheduled for publication in the Journal of Climate. Lewis’ study is a rather mathematically complicated reanalysis of an earlier, similarly complicated analysis that matches the observed global temperature change to the temperature change produced by a simple climate model with a configurable set of parameters, whose actual values are largely unknown but can be assigned in the model simulations. By varying the values of these parameters and seeing how well the resulting temperature output matches the observations, you can get some idea as to what the real-world values of these parameters are. The main parameter of interest is the equilibrium climate sensitivity. Lewis’ study also includes additional model years and additional years of observations, including several years from the current global warming “hiatus” (i.e., the lack of a statistically significant rise in global temperature that extends for about 16 years, starting in early 1997).
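The parameter-matching procedure described above can be sketched in miniature: vary a candidate sensitivity in a toy zero-dimensional model and keep the value whose output best matches the observations. Everything below (the model, the forcing series, the “observed” warming) is a hypothetical illustration of the idea, not Lewis’ actual method:

```python
# Hypothetical sketch of the parameter-matching idea: vary the
# equilibrium climate sensitivity (ECS) of a toy zero-dimensional
# energy-balance model and keep the value that best reproduces an
# "observed" temperature series. All numbers are illustrative.
import math

F_2X = 3.7  # W/m^2 forcing from a CO2 doubling (standard approximation)

def model_warming(ecs, forcing):
    """Equilibrium warming of the toy model: dT = ecs * F / F_2X."""
    return [ecs * f / F_2X for f in forcing]

def sum_sq_error(simulated, observed):
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

# Illustrative forcing history (W/m^2) and "observed" warming (deg C),
# constructed to be consistent with an ECS near 1.7.
forcing = [0.5, 1.0, 1.5, 2.0]
observed = [0.23, 0.46, 0.69, 0.92]

# Grid search over candidate sensitivities from 1.0 to 4.0 deg C
candidates = [x / 10 for x in range(10, 41)]
best = min(candidates,
           key=lambda ecs: sum_sq_error(model_warming(ecs, forcing), observed))
print(f"best-fit ECS: {best:.1f} deg C")
```

Real studies of this kind fit many parameters at once (ocean heat uptake, aerosol forcing, etc.) against noisy data, which is where the Bayesian machinery comes in; the grid search above only illustrates the basic matching logic.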
We actually did something in a similar vein—in English—and published it back in 2002. We found the same thing that Lewis did: substantially reduced warming. We were handsomely rewarded for our efforts by the climategate mafia, who tried to get 1) the paper withdrawn, 2) the editor fired—not just from the journal, but from Auckland University—and 3) my (Michaels’) 1979 PhD “reopened” by the University of Wisconsin.
Lewis concludes that the median estimate of the equilibrium climate sensitivity is ~1.7°C, with a 90% range extending from 1.0°C to 3.0°C. (That’s almost exactly what we found 11 years ago.)
Based on this result, we welcome Lewis (2013) to the growing list of results published in the scientific literature since 2010 which find the climate sensitivity to be on the low side of the IPCC’s range. God knows what the climategaters are emailing today.
Figure 1 illustrates all the new results as well as the IPCC’s take.

Take special note of the new findings (and their mean) in relation to the black bar at the top labeled “IPCC AR5 Climate Models.” Of the 19 state-of-the-art climate models used in the IPCC’s newest Assessment Report (which is still in its draft form) exactly zero have an equilibrium climate sensitivity that is as low as the mean value of estimates from the recent literature included in our Figure.
Based on the collection of results illustrated in our Figure, the future climate change projections about to be issued by the IPCC are off by an average of a whopping 70 percent.
No wonder the IPCC is reluctant to lower its best estimate of the actual value of the earth’s equilibrium climate sensitivity. If it did, it would be admitting that the collection of climate models it has chosen (there is choice involved here) to project the earth’s future climate is, well, how should we put this, wrong!…which would mean that so too is the rate at which the sky is falling, according to the USGCRP and the US EPA.
We, at Cato’s Center for the Study of Science, will continue our efforts to portray the evolving state of climate science and to convince the powers-that-be that national and international assessments upon which EPA regulations are founded (and loony proposals for a carbon tax are based) are fatally flawed. Or as we put it, in our recent (April 12) review of the USGCRP’s draft “National Assessment,” in its current form, “the NCA [National Climate Assessment] will be obsolete on the day of its official release.”
References:
Aldrin, M., et al., 2012. Bayesian estimation of climate sensitivity based on a simple climate model fitted to observations of hemispheric temperature and global ocean heat content. Environmetrics, doi: 10.1002/env.2140.
Annan, J.D., and J.C. Hargreaves, 2011. On the generation and interpretation of probabilistic estimates of climate sensitivity. Climatic Change, 104, 423-436.
Hargreaves, J.C., et al., 2012. Can the Last Glacial Maximum constrain climate sensitivity? Geophysical Research Letters, 39, L24702, doi: 10.1029/2012GL053872.
Intergovernmental Panel on Climate Change, 2007. Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Solomon, S., et al. (eds). Cambridge University Press, Cambridge, 996pp.
Lewis, N., 2013. An objective Bayesian, improved approach for applying optimal fingerprint techniques to estimate climate sensitivity. Journal of Climate, doi: 10.1175/JCLI-D-12-00473.1.
Lindzen, R.S., and Y-S. Choi, 2011. On the observational determination of climate sensitivity and its implications. Asia-Pacific Journal of Atmospheric Sciences, 47, 377-390.
Michaels, P.J., et al., 2002. Revised 21st century temperature projections. Climate Research, 23, 1-9.
Ring, M.J., et al., 2012. Causes of the global warming observed since the 19th century. Atmospheric and Climate Sciences, 2, 401-415, doi: 10.4236/acs.2012.24035.
Schmittner, A., et al., 2011. Climate sensitivity estimated from temperature reconstructions of the Last Glacial Maximum. Science, 334, 1385-1388, doi: 10.1126/science.1203513.
van Hateren, J.H., 2012. A fractal climate response function can simulate global average temperature trends of the modern era and the past millennium. Climate Dynamics, doi: 10.1007/s00382-012-1375-3.
To summarize, of the studies since 2010, six find CS under 2°C, six over, and two at exactly that amount. Not one finds CS as high as AR4 or AR5. Will the IPCC ignore these papers? What about CACCA (catastrophic anthropogenic climate change alarmists) in general?
Low sensitivity is what the Earth’s climate system appears to have now.
This may be because it contains a controlling thermostat: the water cycle.
Mainly negative feedback from clouds appears to be what acts.
In my climate pages I link to many interesting reference papers on this.
Could someone tell me why an increase of 2°C would cause such devastation as is forecast (without, oddly, any specifications as to what devastation) to this planet?
Also, from what point is this 2°C measured from – now, or at some point in the past which was, or nearly was, 2°C cooler than now, or even some point in the future that is warmer (by an unknown amount) than now?
Finally, could it be possible that this 2°C actually be of benefit to our life on this planet?
Not all the studies cited are of equivalent quality. By which is meant not academic content, but rather underlying methodological uncertainty. The higher-quality studies all seem to come in below 2, but above 1.5. Close enough for ‘government work’.
Great to see the wealth of recent posts on sensitivities, which is the crux of the argument, as the authors point out.
From a previous comment, note that in looking at observed temperature and CO2 data over the industrial era, we have an observed temperature change of ~1.5°C/doubling, regardless of whether CO2 is the cause of this change or not (a correlation based on the pertinent math). See:
http://wattsupwiththat.com/2013/04/24/some-sense-about-sensitivity/#comment-1286094
Note on figure 1 of this post that nearly every estimate of sensitivity is greater than 1.5. There is a profound implication here which I want to expand upon.
For any estimate greater than 1.5, it is absolutely implied that there are other forcings at work which are trying to push global temps down – it’s the only way you can have the observed change in industrial-era temps AND be consistent with these estimates of sensitivity. If the CAGWers truly believe these higher sensitivity estimates, one has to conclude that our anthropogenic CO2 output is having a demonstrable net positive effect – without it, we would be seeing considerable global cooling (with the net cooling being greatest for the greatest sensitivity estimates). In all periods of history, colder has been bad for humanity.
Of course, the alternative to this is that other forcings are net neutral or potentially positive. If this is the case, then CO2 sensitivity has to be even less than 1.5 (so that we can get the net observed 1.5°C/doubling). In this case, the CAGWers are not just wrong, but wildly wrong. With such a low sensitivity, the implication of this case is that CO2 will have little meaningful impact on temps.
So, given the observation based on data that we have seen a net 1.5°C/doubling trend over the last 120+ years, we can conclude one of two things:
1) If we have a sensitivity higher than 1.5 (as CAGWers suggest), we are doing ourselves a huge favor by continuing to emit CO2 and keeping global temps from plunging.
2) If we have a sensitivity lower than 1.5, increased CO2 will produce irrelevant temperature changes.
I hope everyone can appreciate the implication – in either case, no matter how you look at it, there is absolutely no argument that we should reduce CO2 – AND this is based on data only, no models required.
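The commenter’s back-of-the-envelope number can be reproduced directly: under the stated assumption that all industrial-era warming is attributed to CO2, the implied per-doubling sensitivity is the observed warming divided by the number of doublings that occurred. The concentration and warming figures below are illustrative round numbers (roughly pre-industrial to recent), not values taken from the post:

```python
# Back-of-the-envelope version of the "observed sensitivity" arithmetic,
# under the stated assumption that ALL industrial-era warming is due to
# CO2. Inputs are illustrative round figures, not data from the post.
import math

c0 = 280.0  # assumed CO2 concentration (ppm) at the start of the period
c1 = 400.0  # assumed recent CO2 concentration (ppm)
dT = 0.8    # assumed observed warming over the period (deg C)

# Warming scales with log2 of concentration, so divide the observed
# warming by the fraction of a doubling that actually occurred.
doublings = math.log2(c1 / c0)
implied_sensitivity = dT / doublings
print(f"implied sensitivity: {implied_sensitivity:.2f} deg C per doubling")
```

With these round inputs the implied value comes out a little above 1.5°C per doubling, in line with the figure in the comment; different (equally defensible) choices of endpoints shift it somewhat.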
You are missing the new Masters paper in your compilation: http://link.springer.com/article/10.1007/s00382-013-1770-4
“By varying the values of these parameters… you can get some idea as to what the real-world value of these parameters are.”
No, you actually cannot do so to any useful degree of precision.
The models are way too complex and have way too many tunable parameters to be able to declare any subset of them to be valid in the real world based on how well the model output matches the observations.
[snip – call to action – mod]
I got spanked again. Good one mods. You could have just snipped the “call to action” and left the rest undisturbed???
Drs. Michaels and Knappenberger:
By definition, the equilibrium climate sensitivity (TECS) is the change in the equilibrium temperature at Earth’s surface from a change in the logarithm of the atmospheric CO2 concentration. As the equilibrium temperature is not an observable, when claims are made about the magnitude of TECS, these claims are insusceptible to being tested. Thus, TECS is a scientifically untenable concept.
The sensitivity of 1.1℃ is like the Schwarzschild radius of climate. You can get closer and closer to it but you can’t pass through, or your grants will vanish.
Dr Gray, you Sir are a true New Zealander. Thank you for your contribution in unravelling this climate nonsense.
I am struggling to understand how the AR5 can give the modelled estimate and ignore the scientific papers. Isn’t their mandate supposed to be to review the scientific literature, not promote somebody’s model? Is there no shame, accountability or scrutiny? Will the MSM even notice?
Two posts in a row by the old guard. Let’s raise a toast to the old guard!
Pat Michaels does not mention that he was fighting on the same issue well before 2002. In the early 1990s Michaels (and others) had been pointing out that the high CO2 sensitivity factored into the climate models was not matching the recent temperature trends. Then in 1992-3 the modellers started introducing a damper to simulate the impact of sulphate emissions. This meant that only a minimal reduction in the CO2 sensitivity range was required to give a much better match with the temp record — including the difficult pause in warming during the 1970s.
When, in April 1995, such CO2+sulphate model-run results by Mitchell (at the Met Office) appeared in Chapter 3 of the review draft of SAR, Michaels was suspicious that these simulations were really doing what they claimed. He wanted to look at polar and southern hemisphere results — where the sulphate impact should be almost zero — and see how these zones tracked in the new CO2+sulphate runs. He was suspicious because he had seen model results where the dampening was much more widely distributed (and therefore had a greater impact on the overall result) than the limited impact sulphate should have over the industrial mid-latitude NH.
As an expert reviewer for the IPCC he requested these model results from the IPCC and then directly from Mitchell. Mitchell refused to provide Michaels with the zonal breakdown. After a number of refusals, Mike MacCracken got involved on behalf of the US delegation. Mitchell refused MacCracken’s requests as well. One reason given for refusal was that the results had not yet been published. After the results were peer-review published, Mitchell still refused. Eventually this dispute hit the science press and was criticized not only in a US House of Representatives hearing but also in a Nature editorial. Mitchell never provided the zonal breakdown of the model data, and perhaps no one will ever know whether Michaels’ suspicions were correct.
Meanwhile, Mitchell’s graph went on to be championed as one of the key findings in SAR. In a silent acknowledgement of the previous skeptical critique, it appeared below a chart of a CO2-only model run projecting way above the temp record after the 1940s. This pair of charts appeared not only in Chapter 3, but was also inserted into the final draft of Chapter 8 and placed up front in the Policymakers’ Summary.
Jeff L says:
April 25, 2013 at 3:18 pm
“…we have an observed temp change of ~ 1.5 C/doubling, regardless of if [sic] CO2 is the cause of this change or not…”
“Regardless…if CO2 is the cause”. If CO2 isn’t the cause, what the heck are you doubling?
JP
GlynnMhor, above, is right: Lewis (2013) merely “reconstructs” the temperature record, as it were, using a highly-tunable model (that is assumed to be based upon “settled science”, behind all of its tunable parameters, but is not). Just another example of modelling portrayed as real-world experiment (the naive use of “objective Bayesian probability” notwithstanding). This is surely precisely why peer-review is not to be trusted–because the “peer” reviewers have not learned, and have no intention of learning, that even the most fundamental “science” in the models is NOT settled (specifically, the mixing together of directed radiation and diffusive heat radiation–which has not been disentangled from heat convection and conduction, as its use implies–in the radiative transfer theory; the idea of “radiative forcing”, by any and all variables, as the governing physics; and the greenhouse effect–of increasing temperature with increasing CO2–concocted from that incorrect physics).
As for this “compilation”, Michaels and Knappenberger need to know about the definitive experimental calculation of the true CO2 climate sensitivity (which is zero), from the amazing results of the (first and only) proper Venus/Earth temperatures comparison, in
CO2 Climate Sensitivity Vs. Reality.
Niff says:
April 25, 2013 at 5:05 pm
“I am struggling to understand why the AR5 can give the modelled estimate and ignore the scientific papers.”
Struggle no more. AR5 isn’t out yet. About a year to go. JP
So how does a rotating body like a planet, in the presence of a radiant energy source such as the sun, ever reach thermal equilibrium?
Just thought I would ask.
Niff says:
April 25, 2013 at 5:05 pm
I am struggling to understand why the AR5 can give the modelled estimate and ignore the scientific papers.
=========
Because the very last thing the IPCC will publish is a report that says “everything is fine, nothing to worry about, we can all go home.” Without a crisis the IPCC would have no reason to exist. No more lavish conferences with jet-setters and limousines, no more grants to save the world. A lot of unemployable academics and bureaucrats would have to go out and try to find jobs.
In every climate conference the first order of business is to decide the location of the next conference. The second order of business is to party like the world is about to end. Having drunk your fill and shagged your brains out, the last order of business is to get your picture in the paper, either posing with some high-up mucky-muck, or in flagrante on the dance floor with his wife. Then off to the airport, publish a paper on your vast accomplishments, and sit back and enjoy life. You’ve earned it.
George, you know it can’t in any instantaneous sense be at equilibrium, but it can be at a quasi-equilibrium state in which it oscillates about an equilibrium mean (whatever that means!).
About ten feet below the surface of the moon, the temperature is pretty much constant. With no pesky atmosphere and ocean, the moon behaves exactly as the fairly simple physics equations predict.
commieBob:
You claim that: “About ten feet below the surface of the moon, the temperature is pretty much constant.” Can you provide empirical substantiation of this conclusion? If not, your argument that “With no pesky atmosphere and ocean, the moon behaves exactly as the fairly simple physics equations predict” is circular.
I feel obliged to mention in all of these threads about climate sensitivity (“CS”), is that CS is itself likely a function of global temperature.
CS is not a single number, like the gravitational constant, that applies at all times and temperature ranges. CS is a function of the net impact of a large number of feedbacks, some positive, some negative, some fast and some slow. None of these feedbacks are themselves constant over a wide range of earth’s temperatures.
It is more likely that at the depths of an ice-age, CS is low, it rises with temperature and peaks in the mid-range between ice-age and interglacial. CS then falls again as temperatures approach today’s relatively high levels when glaciation is near its minimum. This is the only pattern of CS that can explain the planet’s history of tipping back and forth between two stable thermal equilibria, while spending virtually no time in between.
So, if a study to estimate CS focuses on either ice-age conditions or modern conditions, it will likely arrive at a low value for CS. However, studies of the intermediary periods of glacial advance or retreat, will arrive at high values of CS.
What ultimately matters is today’s value, which appears to be low, and suggests that temperatures can’t get much higher than they are now. Which is in agreement with several million years of history where no tipping points or runaway warming was observed.
Russ R.:
The post of Michaels and Knappenberger is not about your “climate sensitivity” (CS) but rather is about “the equilibrium climate sensitivity” (TECS). Though CS is a variable, TECS is a constant.
Earth’s climate and “mean temperature” are dependent on many variables, including the position of the continents, deep ocean currents, tilt wobble, tilt precession and orbital parameter variations (all long-term influences)! It DOES NOT depend on CO2 concentration levels! From ice core analysis, CO2 concentrations lag Earth’s temperature changes: rising 200-800 years AFTER the Earth warms and falling 600-2,000 years AFTER the Earth cools! CO2 concentrations are a lagging indicator, NOT a forcing! Take CO2 concentrations out of the models and the real dominant forcing becomes obvious: solar variability. We have wasted billions, if not a couple of trillion, dollars chasing our tail. Process control engineers have been laughing at AGW for three decades!
Take CO2 concentrations out of the models and see what happens. Then you can truly derail the AGW religion and gravy train!
Bill Yarber
John Parsons says:
April 25, 2013 at 5:33 pm
“Regardless…if CO2 is the cause”. If CO2 isn’t the cause, what the heck are you doubling?”
——————————
Maybe I didn’t state this in a way that was fully understandable. Basically, if you ASSUME that all temperature change over the industrial era is due to CO2 and all other forcing is net zero, then correlating temperatures and CO2 over that time range yields ~1.5°C/doubling sensitivity. As for the assumption, I am not saying whether I think it is correct or not – I am not presenting any data which either supports or refutes it. Thus my statement “Regardless…if CO2 is the cause” – because I am not saying CO2 is or is not the cause of the temp change, only what the implied sensitivity is if you make the above assumption. I hope that clarifies things for you.
Interesting comedown. But not far enough down for me. I suggest that sensitivity is zero – no warming at all from doubling. That is because Ferenc Miskolczi determined that the absorption of infrared radiation by the atmosphere has been constant for 61 years. At the same time, carbon dioxide went up by 21.6 percent. This substantial increase of carbon dioxide had no influence at all on absorption of IR by the atmosphere. And no added absorption means no added greenhouse effect, case closed. Put as much CO2 as you want into the atmosphere, you get no warming. That is exactly what we have now. There is more CO2 than ever in the atmosphere and more is added daily, but there is no warming and there has been none for 17 years, as even Pachauri the railroad engineer has to admit.