New energy-budget-derived estimates of climate sensitivity and transient response in Nature Geoscience
Guest post by Nic Lewis
Readers may recall that last December I published an informal climate sensitivity study at WUWT, here. The study adopted a heat-balance (energy budget) approach and used recent data, including satellite-observation-derived aerosol forcing estimates. I would like now to draw attention to a new peer-reviewed climate sensitivity study published as a Letter in Nature Geoscience, “Energy budget constraints on climate response”, here. This study uses the same approach as mine, based on changes in global mean temperature, forcing and heat uptake over 100+ year periods, with aerosol forcing adjusted to reflect satellite observations. Headline best estimates of 2.0°C for equilibrium climate sensitivity (ECS) and 1.3°C for the – arguably more policy-relevant – transient climate response (TCR) are obtained, based on changes to the decade 2000–09, which provide the best constrained, and probably most reliable, estimates.
The 5–95% uncertainty ranges are 1.2–3.9°C for ECS and 0.9–2.0°C for TCR. I should declare an interest in this study: you will find my name included in the extensive list of authors: Alexander Otto, Friederike E. L. Otto, Olivier Boucher, John Church, Gabi Hegerl, Piers M. Forster, Nathan P. Gillett, Jonathan Gregory, Gregory C. Johnson, Reto Knutti, Nicholas Lewis, Ulrike Lohmann, Jochem Marotzke, Gunnar Myhre, Drew Shindell, Bjorn Stevens, and Myles R. Allen. I am writing this article in my personal capacity, not as a representative of the author team.
The Nature Geoscience paper, although short, is in my view significant for two particular reasons.
First, using what is probably the most robust method available, it establishes a well-constrained best estimate for TCR that is nearly 30% below the CMIP5 multimodel mean TCR of 1.8°C (per Forster et al. (2013), here). The 95% confidence bound for the Nature Geoscience paper’s 1.3°C TCR best estimate indicates that some of the highest-response general circulation models (GCMs) have TCRs that are inconsistent with recent observed changes. Some two-thirds of the CMIP5 models analysed in Forster et al. (2013) have TCRs that lie above the top of the ‘likely’ range for that best estimate, and all of the CMIP5 models analysed have an ECS that exceeds the Nature Geoscience paper’s 2.0°C best estimate of ECS. The CMIP5 GCM with the highest TCR, per the Forster et al. (2013) analysis, is the UK Met Office’s flagship HadGEM2-ES model. It has a TCR of 2.5°C, nearly double the Nature Geoscience paper’s best estimate of 1.3°C and 0.5°C beyond the top of the 5–95% uncertainty range. The paper obtains similar, albeit less well constrained, best estimates using data for earlier periods than 2000–09.
Secondly, the authors include fourteen climate scientists, well known in their fields, who are lead or coordinating lead authors of IPCC AR5 WG1 chapters that are relevant to estimating climate sensitivity. Two of them, professors Myles Allen and Gabi Hegerl, are lead authors for Chapter 10, which deals with estimates of ECS and TCR constrained by observational evidence. The study was principally carried out by a researcher, Alex Otto, who works in Myles Allen’s group.
Very helpfully, Nature’s editors have agreed to make the paper’s main text freely available for a limited period. I would encourage people to read the paper, which is quite short. The details given in the supplementary information (SI) enable the study to be fully understood, and its results replicated. The method used is essentially the same as that employed in my December study, being a more sophisticated version of that used in the Gregory et al. (2002) heat-balance-based climate sensitivity study, here. The approach is to draw sets of samples from the estimated probability distributions applicable to the radiative forcing produced by a doubling of CO2-equivalent greenhouse gas atmospheric concentrations (F2×) and those applicable to the changes in mean global temperature, radiative forcing and Earth system heat uptake (ΔT, ΔF and ΔQ), taking into account that ΔF is closely correlated with F2×. Gaussian (normal) error and internal climate variability distributions are assumed. ECS and TCR values are computed from each set of samples using the equations:
(1) ECS = F2× ΔT / (ΔF − ΔQ) and (2) TCR = F2× ΔT / ΔF .
With sufficient sets of samples, probability density functions (PDFs) for ECS and TCR can then be obtained from narrow-bin histograms, by counting the number of times the computed ECS and TCR values fall in each bin. Care is needed in dealing with samples where any of the factors in the equations are negative, to ensure that each is correctly included at the low or high end when calculating confidence intervals (CIs). Negative factors occur in a modest, but significant, proportion of samples when estimating ECS using data from the 1970s or the 1980s.
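The sampling-and-histogram procedure can be sketched in a few lines. The actual calculations were done in R; the Python sketch below is only a minimal illustration of the approach, the means and standard deviations in it are placeholders rather than the paper’s values, and the draws are made independently, which ignores the ΔF–F2× correlation noted above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # number of sample sets

# Gaussian draws for each quantity. The means and standard deviations here
# are illustrative placeholders, NOT the values used in the paper, and the
# draws are independent, ignoring the dF-F2x correlation the paper allows for.
dT  = rng.normal(0.75, 0.10, n)   # change in global mean temperature (K)
dF  = rng.normal(1.95, 0.30, n)   # change in radiative forcing (W/m^2)
dQ  = rng.normal(0.65, 0.25, n)   # change in system heat uptake (W/m^2)
F2x = rng.normal(3.44, 0.20, n)   # forcing from doubled CO2 (W/m^2)

# Equations (1) and (2)
ecs = F2x * dT / (dF - dQ)
tcr = F2x * dT / dF

# Crude 5%/50%/95% points; the paper's careful handling of samples with
# negative factors would be needed before quoting real confidence intervals
print(np.percentile(tcr, [5, 50, 95]).round(2))
```

Counting the computed values into narrow bins (e.g. with np.histogram) then yields the PDFs; as noted, a serious implementation must first deal correctly with the samples in which a factor goes negative.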
Estimates are made for ECS and TCR using ΔT, ΔF and ΔQ derived from data for the 1970s, 1980s, 1990s, 2000s and 1970–2009, relative to that for 1860–79. The estimates from the 2000s data are probably the most reliable, since that decade had the strongest forcing and, unlike the 1990s, was not affected by any major volcanic eruptions. However, although the method used makes allowance for internal climate system variability, the extent to which confidence should be placed in the results from a single decade depends on how well they are corroborated by results from a longer period. It is therefore reassuring that, although somewhat less well constrained, the best estimates of ECS and TCR using data for 1970–2009 are closely in line with those using data for the 2000s. Note that the validity of the TCR estimate depends on the historical evolution of forcing approximating the 70-year linear ramp that the TCR definition involves. Since from the mid-twentieth century onwards greenhouse gas levels rose much faster than previously, that appears to be a reasonable approximation, particularly for changes to the 2000s.
I have modified the R-code I used for my December study so that it computes and plots PDFs for each of the five periods used in the Nature Geoscience study for estimating ECS and TCR. The resulting ECS and TCR graphs, below, are not as elegant as the confidence region graphs in the Nature Geoscience paper, but are in a more familiar form. For presentation purposes, the PDFs (but not the accompanying box-and-whisker plots) have been truncated at zero and the upper limit of the graph and then normalised to unit total probability. Obviously, these charts do not come from the Nature Geoscience paper and are not to be regarded as associated with it. Any errors in them are entirely my own.
The box-and-whisker plots near the bottom of the charts are perhaps more important than the PDF curves. The vertical whisker-end bars and box-ends show (providing they are within the plot boundaries) respectively 5–95% and 17–83% CIs – ‘very likely’ and ‘likely’ uncertainty ranges in IPCC terminology – whilst the vertical bars inside the boxes show the median (50% probability point). For ECS and TCR, whose PDFs are skewed, the median is arguably in general a better central estimate than the mode of the PDF (the location of its peak), which varies according to how skewed and badly-constrained the PDF is. The TCR PDFs (note the halved x-axis scaling), which are unaffected by ΔQ and uncertainty therein, are all better constrained than the ECS PDFs.
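The median-versus-mode point can be illustrated with a short self-contained sketch (again in Python rather than the R code used for the actual calculations; the lognormal shape and its parameters are purely illustrative stand-ins for a skewed ECS-like PDF, not derived from the study’s data):

```python
import numpy as np

rng = np.random.default_rng(1)
# A right-skewed, ECS-like sample; the lognormal shape and its parameters
# are purely illustrative stand-ins, not derived from the study's data
samples = rng.lognormal(mean=np.log(2.0), sigma=0.35, size=200_000)

# 'Very likely' (5-95%) and 'likely' (17-83%) ranges, plus the median
p5, p17, p50, p83, p95 = np.percentile(samples, [5, 17, 50, 83, 95])

# The mode (peak of a narrow-bin histogram) sits below the median
# for a right-skewed PDF
counts, edges = np.histogram(samples, bins=200)
peak = np.argmax(counts)
mode = 0.5 * (edges[peak] + edges[peak + 1])
print(f"mode {mode:.2f} < median {p50:.2f} < 95th percentile {p95:.2f}")
```

For a symmetric PDF the two coincide; the more skewed or poorly constrained the PDF, the further the histogram peak drifts below the median, which is why the median is the more stable central estimate here.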
The Nature Geoscience ECS estimate based on the most recent data (best estimate 2.0°C, with a 5–95% CI of 1.2–3.9°C) is a little different from that per my very similar December study (best estimate 1.6°C, with a 5–95% CI of 1.0–2.9°C, rounding outwards). The (unstated) TCR estimate implicit in my study, using Equation (2), was 1.3°C, with a 5–95% range of 0.9–2.0°C, precisely in line with the Nature Geoscience paper. In the light of these comparisons, I should perhaps explain the main differences in the data and methodology used in the two studies:
1) The main difference of principle is that the Nature Geoscience study uses GCM-derived estimates of ΔF and F2×. Multimodel means from CMIP5 runs per Forster et al. (2013) can thus be used as a peer-reviewed source of forcings data. ΔF is accordingly based on simulations reflecting the modelled effects of RCP 4.5 scenario greenhouse gas concentrations, aerosol abundances, etc. My study instead used the RCP 4.5 forcings dataset and the F2× figure of 3.71 Wm−2 reflected in that dataset; I adjusted the projected post-2006 solar and volcanic forcings to conform them with estimated actuals. Use of CMIP5-based forcing data results in modestly lower estimates for both ΔF and F2× (3.44 Wm−2 for F2×). Since CO2 is the dominant forcing agent, and its concentration is accurately known, the value of ΔF is closely related to the value of F2×. The overall effect of the difference in F2× on the estimates of ECS and TCR is therefore small. As set out in the SI, an adjustment of +0.3 Wm−2 to 2010 forcing was made in the Nature Geoscience study in the light of recent satellite-observation-constrained estimates of aerosol forcing. On the face of it, the resulting aerosol forcing is slightly more negative than that used in my December study.
2) The Nature Geoscience study derives ΔQ using the change in estimated 0–2000 m ocean heat content (OHC) – which accounts for most of the Earth system heat uptake – from the start to the end of the relevant decade (or 1970–2009), whereas I computed a linear regression slope estimate using data for all years in the period I took (2002–11). Whilst I used the NODC/NOAA OHC data, which corresponds to Levitus et al. (2012), here, for the entire 0–2000 m ocean layer, the Nature Geoscience study splits that layer between 0–700 m and 700–2000 m. It retains the NODC/NOAA Levitus OHC data for the 700–2000 m layer but uses a different dataset for 0–700 m OHC – an update from Domingues et al. (2008), here.
3) The periods used for the headline results differ slightly. I used changes from 1871–80 to 2002–11, whilst the Nature Geoscience study uses changes from 1860–79 to 2000–09. The effects are very small if the CMIP5 GCM-derived forcing estimates are used, but when employing the RCP 4.5 forcings, switching to using changes from 1860–79 to 2000–09 increases the ECS and TCR estimates by around 0.05°C.
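The ΔQ difference described in point 2 is easy to see on synthetic data. In the Python sketch below (mine, not from either study), the trend and noise values are made up for illustration and are not taken from the Levitus or Domingues datasets:

```python
import numpy as np

# Synthetic annual 0-2000 m OHC series in 10^22 J: a linear trend plus noise.
# The trend (1.2 x 10^22 J/yr) and the noise level are made-up illustrative
# values, not taken from the Levitus or Domingues datasets.
years = np.arange(2000, 2010)
rng = np.random.default_rng(2)
ohc = 5.0 + 1.2 * (years - years[0]) + rng.normal(0.0, 0.8, years.size)

# Endpoint difference (the Nature Geoscience approach): start-to-end change
endpoint_rate = (ohc[-1] - ohc[0]) / (years[-1] - years[0])

# Linear regression slope (the December-study approach): uses every year
slope = np.polyfit(years, ohc, 1)[0]

# Converting 10^22 J/yr to a global-mean flux in W/m^2
# (Earth surface area ~5.1e14 m^2, ~3.156e7 s/yr)
to_wm2 = 1e22 / (5.1e14 * 3.156e7)
print(endpoint_rate * to_wm2, slope * to_wm2)
```

The endpoint difference inherits the full noise of the two end years, whereas the regression slope averages over every year in the period, so the two estimates of ΔQ can differ noticeably even for the same underlying series.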
Since the Nature Geoscience study and my December study give identical estimates of TCR, which are unaffected by ΔQ, the difference in their estimates of ECS must come primarily from use of different ΔQ figures. The difference between the ECS uncertainty ranges of the two studies likewise almost entirely reflects the different central estimates for ΔQ they use. The ECS central estimate and 5–95% uncertainty range per my December heat-balance/energy budget study were closely in line with the preferred main results estimate for ECS, allowing for additional forcing etc. uncertainties, per my recent Journal of Climate paper, of 1.6°C with a 5–95% uncertainty range of 1.0–3.0°C. That paper used a more complex method which, although less robust, avoided reliance on external estimates of aerosol forcing.
The take-home message from this study, like several other recent ones, is that the ‘very likely’ 5–95% ranges for ECS and TCR in Chapter 12 of the leaked IPCC AR5 second draft scientific report, of 1.5–6/7°C for ECS and 1–3°C for TCR, and the most likely values of near 3°C for ECS and near 1.8°C for TCR, are out of line with instrumental-period observational evidence.
===============================================================
Here’s a figure of interest from the SI file – Anthony
Fig. S3| Sensitivity of 95th percentile of TCR to the best estimate and standard error of the change in forcing from the 2000s to the 1860-1879 reference period. The shaded contours show the 95th percentile boundary of the TCR confidence interval, the triangles show cases (black and blue) from the sensitivity Table S2, and a smaller adjustment to aerosol forcing for comparison (red).
This whole thing is playing out like a Douglas Adams sub-plot. Not meaning to detract from Nic’s fine work, but really an analysis by two lesser known figures based on existing data shouldn’t have global policy implications. But lo and behold…
joeldshore says:
May 19, 2013 at 7:34 pm
————————-
Hey Joel, how come you weren’t bullshitting about deep ocean warming five years ago ??
…. I suppose that’s another kettle of f**king fish too.
Give it up dude.
Yes, “indistinguishable from zero” is less than 2 degrees, but please do carry on with the mental masturbation.
William Astley says:
May 19, 2013 at 6:47 pm
You are confusing what you are repeating with what happened physically in the past and what is happening currently.
Here we go again:
http://www.leif.org/EOS/muscheler05nat_nature04045.pdf :
“our extended analysis of the radiocarbon record reveals several periods during past centuries in which the strength of the magnetic field in the solar wind was similar to, or even higher than, that of today. …
Solanki et al. combine radiocarbon (14C) data, visually observed sunspot numbers and models to extend the historical sunspot record over the Holocene. They exclude the most recent 100 years of the 14C record, which are influenced by 14C-depleted fossil-fuel emissions and atomic-bomb tests conducted since AD 1950. …
irrespective of the data set applied, the recent solar activity is not exceptionally high (Fig. 2). The 14C results are broadly consistent with earlier reconstructions based on 10Be data from the South Pole, which show that production rates around AD 1780 and in the twelfth century were comparable to those observed today. …
our reconstruction indicates that solar activity around AD 1150 and 1600 and in the late eighteenth century was probably comparable to the recent satellite-based observations. In any case, as noted by Solanki et al., solar activity reconstructions tell us that only a minor fraction of the recent global warming can be explained by the variable Sun.”
http://www.leif.org/EOS/2009GL038004-Berggren.pdf :
“A comparison with sunspot and neutron records confirms that ice core 10Be reflects solar Schwabe cycle variations, and continued 10Be variability suggests cyclic solar activity throughout the Maunder and Spoerer grand solar activity minima. Recent 10Be values are low; however, they do not indicate unusually high recent solar activity compared to the last 600 years. …
Periodicity in 10Be during the Maunder minimum reconfirms that the solar dynamo retains cyclic behavior even during grand solar minima. We observe that although recent 10Be flux in NGRIP is low, there is no indication of unusually high recent solar activity in relation to other parts of the investigated period.”
As I said the whole question has recently been re-examined by a panel of experts at a workshop dedicated to this problem: Leif Svalgaard, Mike Lockwood, Jürg Beer, Andre Balogh, Paul Charbonneau, Ed Cliver, Nancy Crooker, Marc DeRosa, Ken McCracken, Matt Owens, Pete Riley, George Siscoe, Sami Solanki, Friedhelm Steinhilber, Ilya Usoskin, and Yi-Ming Wang. The conclusion is that recent solar activity was not exceptionally high. Around 1780, activity seems to have been even higher than today.
Since the Sun has not behaved in a way compatible with your other references, they are now moot and irrelevant. I think I have pointed all this out several times, but you have a hard time coming to grips with reality.
One more: http://www.leif.org/EOS/muscheler07qsr.pdf :
“The solar modulation maximum around 1780 AD indicated by the 14C and 10Be data was on the level of the second part of the 20th century or even higher. …
“The cosmogenic radio-nuclide records indicate that the current solar activity is relatively high compared to the period [just] before 1950 AD. However, as the mean value during the last 55 yr was reached or exceeded several times during the past 1000 yr the current level of solar activity can be regarded as relatively common”
So, it is time to bury the wrong notion of recent exceptionally high solar activity. This is, of course, difficult to do because once people have locked on to ‘findings’ that confirm their agenda and beliefs, they get stuck on the wrong science and can’t give it up. You are a good example of someone afflicted with that syndrome.
MSM has moved on! No doubt this is based on older studies:
20 May: Sydney Morning Herald Bloomberg: Rising heat to increase NY deaths: Nature study
Manhattan may see deaths from heat rise by as much as 20 per cent in the 2020s and 90 per cent by the 2080s in a worst-case scenario, a study found.
The study, published this week in the journal Nature Climate Change, was done by Columbia University’s Earth Institute and the Mailman School of Public Health. Higher winter temperatures may cut cold-related mortality, though net temperature-related deaths may still climb by a third by the 2080s, according to a statement detailing the findings.
“This serves as reminder that heat events are one of the greatest hazards faced by urban populations around the globe,” said Radley Horton, a climate scientist at the Earth Institute’s Center for Climate Systems Research and a co-author…
Daily records from Central Park in New York show that average monthly temperatures increased 3.6 degrees Fahrenheit from 1901 to 2000, the statement said. Last year was the warmest on record in Manhattan, it said. In cities, heat is concentrated by buildings and pavement. The temperature in New York is expected to climb by as much as 4.2 degrees Fahrenheit by the 2050s, the statement said…
http://www.smh.com.au/environment/climate-change/rising-heat-to-increase-ny-deaths-nature-study-20130520-2jvbb.html
I am curious how the retraction will unfold as the Northern Hemisphere, particularly the high-latitude northern regions, cools.
The following are the first baby steps. There will likely be some heated discussion to see if this new result can be excluded from AR-5.
http://www.bbc.co.uk/news/science-environment-22567023
Climate slowdown means extreme rates of warming ‘not as likely’
The Intergovernmental Panel on Climate Change reported in 2007 that the short-term temperature rise would most likely be 1-3C (1.8-5.4F).
But in this new analysis, by only including the temperatures from the last decade, the projected range would be 0.9-2.0C.
“The most extreme projections are looking less likely than before.”
This latest research, including the decade of stalled temperature rises, produces a range of 0.9-5.0C.
“It is a bigger range of uncertainty,” said Dr Otto.
“But it still includes the old range. We would all like climate sensitivity to be lower but it isn’t.”
It appears the IPCC have not read Lindzen and Choi (2011, 2009) or Idso (1998), which both find that the planet resists forcing changes rather than amplifying them.
http://www.johnstonanalytics.com/yahoo_site_admin/assets/docs/LindzenChoi2011.235213033.pdf
On the Observational Determination of Climate Sensitivity and Its Implications
Richard S. Lindzen1 and Yong-Sang Choi2
We estimate climate sensitivity from observations, using the deseasonalized fluctuations in sea surface temperatures (SSTs) and the concurrent fluctuations in the top-of-atmosphere (TOA) outgoing radiation from the ERBE (1985–1999) and CERES (2000–2008) satellite instruments. Distinct periods of warming and cooling in the SSTs were used to evaluate feedbacks. An earlier study (Lindzen and Choi, 2009) was subject to significant criticisms. The present paper is an expansion of the earlier paper where the various criticisms are taken into account. … We again find that the outgoing radiation resulting from SST fluctuations exceeds the zero-feedback response, thus implying negative feedback. In contrast to this, the calculated TOA outgoing radiation fluxes from 11 atmospheric models forced by the observed SST are less than the zero-feedback response, consistent with the positive feedbacks that characterize these models. …
http://www.int-res.com/articles/cr/10//c010p069.pdf
Over the course of the past 2 decades, I have analyzed a number of natural phenomena that reveal how Earth’s near-surface air temperature responds to surface radiative perturbations. These studies all suggest that a 300 to 600 ppm doubling of the atmosphere’s CO2 concentration could raise the planet’s mean surface air temperature by only about 0.4°C. Even this modicum of warming may never be realized, however, for it could be negated by a number of planetary cooling forces that are intensified by warmer temperatures and by the strengthening of biological processes that are enhanced by the same rise in atmospheric CO2 concentration that drives the warming. Several of these cooling forces have individually been estimated to be of equivalent magnitude, but of opposite sign, to the typically predicted greenhouse effect of a doubling of the air’s CO2 content, which suggests to me that little net temperature change will ultimately result from the ongoing buildup of CO2 in Earth’s atmosphere. Consequently, I am skeptical of the predictions of significant CO2-induced global warming that are being made by state-of-the-art climate models and believe that much more work on a wide variety of research fronts will be required to properly resolve the issue.
In reply to:
lsvalgaard says:
May 19, 2013 at 8:45 pm
William Astley says:
May 19, 2013 at 6:47 pm
You are confusing what you are repeating with what happened physically in the past and what is happening currently.
Here we go again:
William:
What will your response be to cooling?
As I said, there is observational evidence that the Northern Hemisphere, in particular the northern high-latitude regions, has started to cool.
There are cycles of warming and cooling in the paleo climate record which correlate with cosmogenic isotope changes.
Comment:
An example that retraction pressure does not invalidate a result is the gradual acceptance that there are cyclic, frequent, and rapid geomagnetic excursions in the geomagnetic field. It took roughly a decade for that observational anomaly to be accepted.
Which of the following papers need to be retracted?
Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper. See the Dansgaard-Oeschger cycles in the data.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
http://www.essc.psu.edu/essc_web/seminars/spring2006/Mar1/Bond%20et%20al%202001.pdf
Persistent Solar Influence on North Atlantic Climate During the Holocene (William: Holocene is the name for this interglacial period)
http://www.agu.org/pubs/crossref/2003/2003GL017115.shtml
Timing of abrupt climate change: A precise clock by Stefan Rahmstorf
Many paleoclimatic data reveal a approx. 1,500 year cyclicity of unknown origin. A crucial question is how stable and regular this cycle is. An analysis of the GISP2 ice core record from Greenland reveals that abrupt climate events appear to be paced by a 1,470-year cycle with a period that is probably stable to within a few percent; with 95% confidence the period is maintained to better than 12% over at least 23 cycles. This highly precise clock points to an origin outside the Earth system; oscillatory modes within the Earth system can be expected to be far more irregular in period.
https://ams.confex.com/ams/pdfpapers/74103.pdf
The Sun-Climate Connection by John A. Eddy, National Solar Observatory
Solar Influence on North Atlantic Climate during the Holocene
A more recent oceanographic study, based on reconstructions of the North Atlantic climate during the Holocene epoch, has found what may be the most compelling link between climate and the changing Sun: in this case an apparent regional climatic response to a series of prolonged episodes of suppressed solar activity, like the Maunder Minimum, each lasting from 50 to 150 years8.
The paleoclimatic data, covering the full span of the present interglacial epoch, are a record of the concentration of identifiable mineral tracers in layered sediments on the sea floor of the northern North Atlantic Ocean. The tracers originate on the land and are carried out to sea in drift ice. Their presence in seafloor samples at different locations in the surrounding ocean reflects the southward expansion of cooler, ice-bearing water: thus serving as indicators of changing climatic conditions at high Northern latitudes. The study demonstrates that the sub-polar North Atlantic Ocean has experienced nine distinctive expansions of cooler water in the past 11,000 years, occurring roughly every 1000 to 2000 years, with a mean spacing of about 1350 years.
William Astley says:
May 19, 2013 at 9:32 pm
As I said, there is observational evidence that the Northern Hemisphere, in particular the northern high-latitude regions, has started to cool.
So what? The climate warms and cools all the time.
Timing of abrupt climate change: A precise clock by Stefan Rahmstorf …
“This highly precise clock points to an origin outside the Earth system; oscillatory modes within the Earth system can be expected to be far more irregular in period.”
There is no such precise clock.
http://www.leif.org/EOS/Obrochta2012.pdf :
“Our new results suggest that the “1500-year cycle” may be a transient phenomenon whose origin could be due, for example, to ice sheet boundary conditions for the interval in which it is observed. We therefore question whether it is necessary to invoke such exotic explanations as heterodyne frequencies or combination tones to explain a phenomenon of such fleeting occurrence that is potentially an artifact of arithmetic averaging.” …
Therefore, HSG provides relatively little data supporting actual 1500-year intervals of climate variability in either the Holocene or last glacial. The number is likely an artifact of averaging and seems to have little statistical justification.”
Ken Gregory says:
May 19, 2013 at 4:45 pm
The helio-magnetic field strength has increased by a factor of 9 from 1895 to 1991.
lsvalgaard says at 6:42 pm: No, it has not: see figure 10 of http://www.leif.org/research/2009JA015069.pdf
My comment included a link to a graph of helio-magnetic field (HMF) strength, published as Fig. 1 of R.U. Rao’s January 2011 paper, at: http://www.friendsofscience.org/assets/documents/Rao-GCR_GW.pdf
The 1895 value was 1 nT and the 1991 value was 9 nT, giving the factor of 9. The paper says the HMF data were reproduced from two of McCracken’s papers. Figure 10 in your paper roughly matches the Rao data from 1910 onward. Your 1991 value of 9.1 nT agrees with the Rao value. However, the Rao graph shows values under 2 nT from 1890 to 1895, whereas your values are 4.7 and above. Why the large discrepancy between the two papers in this time period?
Using your numbers, the HMF strength increased from 4.06 nT in 1901 to 9.07 nT in 1991, or by a factor of 2.23.
Ken Gregory says:
May 19, 2013 at 10:29 pm
The paper says the HMF data was reproduced from two McCracken’s papers….Why the large discrepancy between the two papers in this time period?
I just returned from a workshop where McCracken noted that his earlier values were incorrect. His re-examination of the data shows substantial agreement with my paper.
Using your numbers, the HMF strength increased from 4.06 nT in 1901 to 9.07 nT in 1991, or by a factor of 2.23.
That is somewhat meaningless, as you pick a solar-cycle minimum point and compare it with a solar-cycle maximum point. In every cycle there is a variation by a factor of two from minimum to maximum.
James Annan mentioned a new paper by Stott et al (open access) that concludes the upper 95% bounds on temperature increase of the climate models are too high:
http://iopscience.iop.org/1748-9326/8/1/014024
I was unable to work out a transient climate response from their estimates, but they seem to be implying a lower estimate than the current IPCC one.
William Astley – Thanks for your long and informative comment (19 May 4:10pm). I won’t argue with the idea that Nic Lewis’ paper is important, but I wonder why … presumably it is politically important (pity it’s too late for AR5), because the science is, to my mind, very dubious. Later in your comment, you say “increased solar luminosity and reduced CRF over the previous century should have contributed a warming of 0.47 +/-0.19C, while the rest should be mainly attributed to anthropogenic causes“. I have to disagree with that last part – there is a very visible cyclical (or apparently cyclical) effect in the temperature record of the last 150 or so years. This appears to have contributed a lot to the 20thC temperature, because 2 of its upward phases occurred in the 20thC and only one downward phase.
http://members.iinet.net.au/~jonas1@westnet.com.au/GlobalTemperature_PDOPhaseTrends.JPG
[In this graph I called the phases “PDO” because they seemed to correlate, but I only used a multi-segment least-squares linear trend fit for the trend lines. (Someone posted a much better graph than this one recently, but I can’t find it).]
We don’t know how much this apparent cycle contributed to 20thC temperature because we don’t know enough about it, but eyeballing the graph would suggest that it could have contributed all of the balance after your “0.47 +/-0.19C”.
Two papers on line provide some eye-opening insight into possible causes of change to average global temperature.
The first one is ‘Global warming made simple’ at http://lowaltitudeclouds.blogspot.com/. It shows, with simple calculations, how a tiny change in low altitude clouds could account for half of the average global temperature change in the 20th century, and what could have caused that tiny change. (The other half of the temperature change is from natural ocean oscillation which is dominated by the PDO)
The second paper is ‘Natural Climate change has been hiding in plain sight’ at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html . This paper presents a simple equation that calculates average global temperatures since they have been accurately measured world wide with an accuracy of 90%, irrespective of whether the influence of CO2 is included or not. The equation uses a proxy of the time-integral of sunspot numbers. A graph is included which shows the calculated trajectory overlaid on measurements.
No disrespect intended to Nic Lewis, but I consider all the claims regarding the ability to assess climate sensitivity disingenuous, even bordering on the dishonest.
It may be possible to calculate how CO2 behaves in laboratory conditions and hence to calculate a theoretical warming in relation to increasing CO2 levels in laboratory conditions. But that is not the real world.
In the real world, increased concentrations of CO2 would theoretically block a certain proportion of incoming solar insolation so that less solar radiance is absorbed by the ground and oceans, and it would also increase the rate of out going radiation at TOA. Both of these are potentially cooling factors. Thus the first issue is whether in real world conditions the theoretical laboratory ‘heat trapping’ effect of CO2 exceeds the ‘cooling’ effects of CO2 blocking incoming solar irradiance and increasing radiation at TOA and if so, by how much? The second issue is far more complex, namely the inter-relationship with other gases in the atmosphere, whether it is swamped by the hydrological cycle, and what effect it may have on the rate of convection at various altitudes and/or whether convection effectively outstrips any ‘heat trapping’ effect of CO2 carrying the warmer air away and upwards to the upper atmosphere where the ‘heat’ is radiated to space. None of those issues can be assessed in the laboratory, and can only be considered in real world conditions by way of empirical observational data.
The problem with making an assessment based upon observational data is that it is a hapless task: the data sets are either too short and/or have been horribly bastardised by endless adjustments, siting issues, station drop-outs, and pollution by UHI, and/or we do not have accurate data on aerosol emissions or on clouds. Quite simply, data sets of sufficiently high quality do not exist, and therefore, as a matter of fact, no worthwhile assessment can be made.
The nub of the issue is that it is simply impossible to determine a value for climate sensitivity from observational data until absolutely everything is known and understood about natural variation: what its various constituent components are, the forcing of each and every individual component, whether each component operates positively or negatively, and the upper and lower bounds of the forcings associated with each and every one of them.
This is logically and necessarily the position, since until one can look at the data set (thermometer or proxy) and identify the extent of each change in the data set and say with certainty to what extent, if any, that change was (or was not) brought about by natural variation, one cannot extract the signal of climate sensitivity from the noise of natural variation.
I seem to recall that one of the Team recognised the problem and at one time observed: “Quantifying climate sensitivity from real world data cannot even be done using present-day data, including satellite data. If you think that one could do better with paleo data, then you’re fooling yourself. This is fine, but there is no need to try to fool others by making extravagant claims.”
We do not know whether, at this stage of the Holocene, adding more CO2 does anything, or, if it does, whether it warms or cools the atmosphere (or, for that matter, the oceans). Anyone who claims that they know and/or can properly assess the effect of CO2 in real-world conditions is being disingenuous.
For what it is worth, 33 years’ worth of satellite data (which shows that temperatures were essentially flat between 1979 and 1997, and from 1999 to date, and demonstrates no correlation between CO2 and temperature) suggests that the climate sensitivity to CO2 is so low that it is indistinguishable from zero. But that observation should be viewed with caution, since it is based upon a very short data set, and we do not have sufficient data on aerosols or clouds to enable a firm conclusion to be drawn.
Richard Verney said
“For what it is worth, 33 years worth of satellite data (which shows that temperatures were essentially flat between 1979 and 1997 and between 1999 to date and demonstrates no correlation between CO2 and temperature) suggests that the climate sensitivity to CO2 is so low that it is indistinguishable from zero. But that observation should be viewed with caution since it is based upon a very short data set, and we do not have sufficient data on aerosols or clouds to enable a firm conclusion to be drawn”
Reconstructed CET back to 1538, overlaid with official CO2 data, seems to support your assertion.
http://wattsupwiththat.com/2013/05/08/the-curious-case-of-rising-co2-and-falling-temperatures/
I will be at the Met Office archives in Exeter within the next hour, as I try to push the CET reconstruction back a couple more centuries and determine the transition from the MWP to the LIA.
In the meantime, I am continually struck by the extremes in the past (much more so than today) and by the very heavy rainfall episodes experienced here in the UK.
tonyb
‘… assuming that we do work pretty diligently to gradually wean ourselves off of fossil fuels …’ Joel Shore.
================================
Relative price to the consumer, compared to other forms of energy without government subsidies and other free-market distortions, will determine the future of fossil fuels — nothing else. Everyone knows that.
I have an article in the Times (London) on this new Otto et al paper and its implications for policy.
http://www.thetimes.co.uk/tto/opinion/columnists/article3769210.ece
quote from my article: “The most likely estimate is 1.3C. Even if we reach doubled carbon dioxide in just 50 years, we can expect the world to be about two-thirds of a degree warmer than it is now, maybe a bit more if other greenhouse gases increase too. That is to say, up until my teenage children reach retirement age, they will have experienced further warming at about the same rate as I have experienced since I was at school.”
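The arithmetic behind that sentence can be checked directly, on the standard assumption that transient warming scales with the base-2 logarithm of the concentration ratio. The ~395 ppm current level and the 560 ppm doubling of the 280 ppm pre-industrial level are illustrative round figures:

```python
import math

def warming_to_doubling(tcr, c_now_ppm, c_double_ppm=560.0):
    """Transient warming expected between today's CO2 level and a doubling,
    assuming warming scales with log2 of the concentration ratio."""
    return tcr * math.log2(c_double_ppm / c_now_ppm)

# TCR = 1.3C; ~395 ppm today vs 560 ppm (double the 280 ppm pre-industrial)
print(round(warming_to_doubling(1.3, 395.0), 2))  # -> 0.65
```

That is the “about two-thirds of a degree” in the quoted passage: the remaining fraction of a doubling, log2(560/395) ≈ 0.50, times the 1.3°C TCR.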
Further to my post at 12:09 am, in which I suggest that it is impossible to assess climate sensitivity until we fully understand natural variation, consider a few examples from the thermometer record (Hadcrut 4) before the rapid increase in manmade CO2 emissions.
1. Between 1877 and 1878, the temperature anomaly change is positive 1.1degC (from -0.7C to +0.4C). To produce that change requires a massive forcing. That change was not caused by increased CO2 emissions, nor by reduced aerosol emissions. Maybe it was an El Nino year (I have not checked, but no doubt Bob may clarify), but we need to be able to explain what forcings were in play that brought about that change, because those forcings may operate at other times (to a greater or lesser extent).
2. Between 1853 and 1862 temperatures fell by about 0.6degC. What caused this change? Presumably it was not an increase in aerosol emissions. So what natural forcings were in play? Again one can see a similar cooling trend between about 1880 and about 1890, which may to some extent have been caused by Krakatoa, but if so what would the temperature have been but for Krakatoa?
3. Between about 1861 and 1869 there was an increase in temperatures of about 0.4degC. What caused this warming in that decade? It is unlikely to be related to any significant increase in CO2 emissions and/or reduction in aerosol emissions. How do we know that the forcings that brought about that change were not in play (perhaps to an even greater extent, since we do not know the upper bounds of those forcings) during the late 1970s to late 1990s?
4. Between about 1908 and 1915 there is again about 0.4degC of warming. What caused the warming during this period? Are they the same forcings that were in play during 1861 to 1869, or different ones? It is unlikely to be related to any significant increase in CO2 emissions and/or reduction in aerosol emissions. How do we know that the forcings that brought about this change were not in play (perhaps to an even greater extent, since we do not know the upper bounds of those forcings) during the late 1970s to late 1990s? And if the forcings operative during 1908 to 1915 were different from those operative during 1861 to 1869, can all these forcings operate at the same time, and if so, how do we know they were not collectively in play during the late 1970s to late 1990s?
One could go through the entire thermometer record and make similar observations about each and every change in it. But my point is that until one fully understands natural variability (all its forcings, their upper and lower bounds, and their inter-relationships with one another), it is impossible to attribute any change in the record to CO2 emissions (or, for that matter, to manmade aerosol emissions). Until one can completely eliminate natural variability, the signal of climate sensitivity to CO2 cannot be extracted from the noise of natural variation. Period!
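The kind of screening described in the numbered examples above can be sketched as a scan of an annual anomaly series for the largest short-window rises and falls. The helper below uses made-up placeholder values, not real HadCRUT4 data:

```python
def largest_swings(years, anoms, max_window=10):
    """Return (delta, start_year, end_year) tuples for every pair of years
    within max_window, sorted by the size of the temperature change."""
    swings = []
    for i in range(len(years)):
        for j in range(i + 1, min(i + max_window + 1, len(years))):
            swings.append((anoms[j] - anoms[i], years[i], years[j]))
    return sorted(swings, key=lambda s: abs(s[0]), reverse=True)

# Placeholder anomaly series, for illustration only (not HadCRUT4)
years = list(range(1850, 1880))
anoms = [0.1 * ((y * 7919) % 11 - 5) for y in years]
for delta, y0, y1 in largest_swings(years, anoms)[:3]:
    print(f"{y0}-{y1}: {delta:+.2f} C")
```

Run against a real annual series, the top entries of such a list are exactly the episodes the examples above ask about: large swings in the pre-CO2 part of the record that some natural forcing must explain.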
“The take-home message from this study, like several other recent ones, is that the ‘very likely’ 5–95% ranges for ECS and TCR in Chapter 12 of the leaked IPCC AR5 second draft scientific report, of 1.5–6/7°C for ECS and 1–3°C for TCR, and the most likely values of near 3°C for ECS and near 1.8°C for TCR, are out of line with instrumental-period observational evidence.”
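For reference, headline numbers of that kind follow from the standard energy-budget relations used in such studies: ECS = F2x·ΔT/(ΔF − ΔQ) and TCR = F2x·ΔT/ΔF. The inputs below are illustrative round figures consistent with the 2000–09 best estimates quoted in the post, not the paper’s exact values:

```python
# Energy-budget relations: ECS = F2x*dT/(dF - dQ), TCR = F2x*dT/dF
# Inputs are illustrative round figures, not the paper's exact values.
F2X = 3.44  # forcing from a doubling of CO2, W/m^2
dT = 0.75   # change in decadal-mean global temperature, K
dF = 1.95   # change in radiative forcing, W/m^2
dQ = 0.65   # change in Earth-system heat uptake, W/m^2

ecs = F2X * dT / (dF - dQ)  # equilibrium climate sensitivity
tcr = F2X * dT / dF         # transient climate response
print(f"ECS ~ {ecs:.1f} C, TCR ~ {tcr:.1f} C")  # -> ECS ~ 2.0 C, TCR ~ 1.3 C
```

Because heat uptake ΔQ only appears in the ECS denominator, TCR is the better-constrained of the two, which is why the post calls it the more robust estimate.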
Many thanks to Nic Lewis for his thorough analysis. He seems to cut through a lot of the bias and manipulation.
What I don’t follow here is that figure S3, which Anthony added at the end, seems to be in accord with the leaked AR5 values.
Is Nic saying he has a difference of opinion with what is shown in the paper ?
Good comments from Richard Verney – thank you.
http://wattsupwiththat.com/2013/04/25/a-compilation-of-lower-climate-sensitivities-plus-a-new-one/#comment-1289118
I have a serious problem with the entire concept of “climate sensitivity”. I think it could actually be more ”cargo cult” than atmospheric physics.
Here is my problem:
Atmospheric CO2 LAGS temperature T at ALL time scales, from the 9 month delay for ~ENSO cycles to the ~~600 year delay inferred in the ice core data for much longer cycles.
When I studied this subject in 2007-2008, the only signal I was able to derive from the modern data was that [dCO2/dt varies ~contemporaneously with T and CO2 lags T by 9 months].
This physical reality has since been widely accepted, but dismissed as a “feedback effect”.
This is like saying you cannot hear the orchestra, but you can clearly hear the piccolo.
I say you ARE hearing the orchestra – atmospheric CO2 lags temperature because temperature drives CO2.
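That phase relationship can be illustrated with a toy integrator (synthetic data, not the real CO2 or temperature records): if dC/dt is proportional to T and T oscillates, then dC/dt is in phase with T while C itself lags by a quarter cycle, analogous to the claimed ~9-month lag for ENSO-scale cycles:

```python
import math

# Toy illustration: T is a sinusoid and CO2 is its crude time-integral
# (dC/dt = T), so dC/dt is in phase with T while C lags by a quarter cycle.
n, period = 1000, 200
T = [math.sin(2 * math.pi * t / period) for t in range(n)]
C = [0.0]
for t in range(1, n):
    C.append(C[-1] + T[t])  # crude integration step

def corr(a, b):
    """Pearson correlation of two equal-length series."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    da = math.sqrt(sum((u - ma) ** 2 for u in a))
    db = math.sqrt(sum((v - mb) ** 2 for v in b))
    return num / (da * db)

def best_lag(x, y, max_lag=100):
    """Lag (in steps) at which y, shifted earlier by k, best matches x."""
    m = len(x) - max_lag
    return max(range(max_lag), key=lambda k: corr(x[:m], y[k:k + m]))

print(best_lag(T, C))  # ~ period/4 = 50 steps: C lags T by a quarter cycle
```

The point of the toy is that the lag alone cannot distinguish “T drives C” from “C feeds back on T”; it only shows that a pure integrator reproduces the observed phase pattern.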
The observed rise in CO2 may indeed have a significant human-made component, but it is probably driven much more by deforestation than by fossil fuel combustion.
Regards, Allan
http://wattsupwiththat.com/2013/03/30/the-pitfalls-of-data-smoothing/#comment-1265693
When I first pointed out this relationship in January 2008 (dCO2/dt varies with T and CO2 lags T by 9 months), it was deemed incorrect.
Then it was accepted as valid by some on the warmist side of this debate, but dismissed as a “feedback”.
This “feedback argument” appears to be a “cargo cult” rationalization, derived as follows:
“We KNOW that CO2 drives Temperature, therefore it MUST BE a feedback.”
More below from 2009:
__________________
http://wattsupwiththat.com/2009/01/21/antarctica-warming-an-evolution-of-viewpoint/#comment-77000
Time is limited so I can only provide some more general answers to your questions:
My paper was posted Jan.31/08 with a spreadsheet at
http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/
The paper is located at
http://icecap.us/images/uploads/CO2vsTMacRae.pdf
The relevant spreadsheet is
http://icecap.us/images/uploads/CO2vsTMacRaeFig5b.xls
There are many correlations calculated in the spreadsheet.
In my Figures 1 and 2, global dCO2/dt closely coincides with global Lower Tropospheric Temperature (LT) and Surface Temperature (ST). I believe that the temperature and CO2 datasets are collected completely independently, and yet there is this clear correlation.
After publishing this paper, I also demonstrated the same correlation with different datasets – using Mauna Loa CO2 and Hadcrut3 ST going back to 1958. More recently I examined the close correlation of LT measurements taken by satellite and those taken by radiosonde.
Further, I found (actually I was given by Richard Courtney) earlier papers by Kuo (1990) and Keeling (1995) that discussed the delay of CO2 after temperature, although neither appeared to notice the even closer correlation of dCO2/dt with temperature. This correlation is noted in my Figures 3 and 4.
See also Roy Spencer’s (U of Alabama, Huntsville) take on this subject at
http://wattsupwiththat.wordpress.com/2008/01/25/double-whammy-friday-roy-spencer-on-how-oceans-are-driving-co2/
and
http://wattsupwiththat.wordpress.com/2008/01/28/spencer-pt2-more-co2-peculiarities-the-c13c12-isotope-ratio/
This subject has generated much discussion among serious scientists, and this discussion continues. Almost no one doubts the dCO2/dt versus LT (and ST) correlation. Some go so far as to say that humankind is not even the primary cause of the current increase in atmospheric CO2 – that it is natural. Others rely on a “material balance argument” (mass balance argument) to refute this claim – I think these would be in the majority. I am an agnostic on this question, to date.
The warmist side has also noted this ~9 month delay, but tries to explain it as a “feedback effect” – this argument seems more consistent with AGW religious dogma than with science (“ASSUMING AGW is true, then it MUST be feedback”). 🙂
It is interesting to note, however, that the natural seasonal variation in atmospheric CO2 ranges up to ~16ppm in the far North, whereas the annual increase in atmospheric CO2 is only ~2ppm. This reality tends to weaken the “material balance argument”. This seasonal “sawtooth” of CO2 is primarily driven by the Northern Hemisphere landmass, which is much greater in area than that of the Southern Hemisphere. CO2 falls during the NH summer due primarily to land-based photosynthesis, and rises in the late fall, winter and early spring as biomass degrades.
There is also likely to be significant CO2 solution and exsolution from the oceans.
See the excellent animation at http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4
It is also interesting to note that the detailed signals we derive from the data show that CO2 lags temperature at all time scales, from the 9 month delay for ~ENSO cycles to the ~~600 year delay inferred in the ice core data for much longer cycles.
Regards, Allan
“Wolfram Alpha tells me that puts the amount of ibuprofen in my body at the time that it’s working is 0.00006299%, or 62.9 ppm.”
Josh, the Earth is not experiencing a sudden addition of 62.9ppm of CO2, so the comparison is not relevant, regardless of whether CO2 has a temperature effect or not.
An ECS of ~2C also agrees with a fit I once made modelling the dependence of the HADCRUT4 temperature data on CO2 data, with feedbacks and including the heat inertia of the oceans. See posting here.
A calculation of annual warming due to measured Mauna Loa CO2 increases, taking time lags into account, favours a climate feedback value of 2 W/m2/K if post-1958 warming is entirely due to CO2. If instead a 60-year oscillation is superimposed, the feedback value is likely 1.5 W/m2/K or less.
A prediction can then be made for future long-term temperatures calculated using IPCC emissions scenario A1B for future CO2 levels. Let’s assume that after 2100 emissions fall as non-carbon energy sources (e.g. nuclear fusion) are adopted, leading to a peak CO2 concentration of 800ppm, followed by a gradual fall in CO2 levels as the oceans re-absorb excess CO2. The peak rise in temperature would then occur around 2250, with a maximum increase above pre-industrial levels of 3.4(2.0) degrees C. Thereafter CO2 levels and temperatures slowly revert back to natural levels over another 1000 years. By that time we will probably have the onset of another major glaciation to worry about!
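The kind of fit described above can be sketched as a one-box energy-balance model, C_eff·dT/dt = F(t) − λ·T. The heat capacity, feedback, and CO2 values below are illustrative assumptions, not fitted numbers:

```python
import math

# One-box energy-balance sketch: C_eff * dT/dt = F(t) - lambda_fb * T
# All parameter values are illustrative assumptions.
C_EFF     = 8.0   # effective ocean heat capacity, W yr m^-2 K^-1
LAMBDA_FB = 2.0   # climate feedback parameter, W m^-2 K^-1
F2X       = 3.7   # forcing for doubled CO2, W m^-2

def forcing(co2_ppm, co2_preindustrial=280.0):
    """Logarithmic CO2 forcing relative to pre-industrial."""
    return F2X * math.log2(co2_ppm / co2_preindustrial)

def step_temperature(temp, co2_ppm, dt_years=1.0):
    """Advance the box temperature by one Euler step."""
    return temp + dt_years * (forcing(co2_ppm) - LAMBDA_FB * temp) / C_EFF

# Hold CO2 at 560 ppm: temperature relaxes toward F2X / LAMBDA_FB = 1.85 K
temp = 0.0
for _ in range(200):
    temp = step_temperature(temp, 560.0)
print(round(temp, 2))  # -> 1.85
```

Feeding a time-varying CO2 pathway (such as A1B followed by a decline) through `step_temperature` year by year gives the lagged, peaking temperature trajectory the comment describes.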
climatereason says:
May 20, 2013 at 12:16 am
/////////////////////////////////////////////////
Tony
You are undertaking an extremely worthwhile exercise with CET, I am very impressed.
I consider that many fail to appreciate the extent of natural variability (and the underlying strength of forcings that have brought about that change).
I have often posted to the effect that the holy grail of climate science is a proper appreciation and understanding of natural variability. We need to fully know and understand it and its bounds. It is only once we possess a full understanding of natural variability that we can begin to eliminate it from the various data sets and thence extract a response signal (if any) to CO2.
joeldshore says:
May 19, 2013 at 7:34 pm
There are also other greenhouse gases, like CH4, which contribute warming…So, I think, roughly speaking, the effect of the aerosols and of the non-CO2 greenhouse gases may about cancel. (Aerosols and CH4 also have a shorter perturbation time in the atmosphere…whereas a perturbation in CO2 levels lasts a long time.)
Joel, that is not how I understood the article, and that is why I asked my specific question. The article looks at all warming; hence, any warming created by other non-CO2 GHGs is also included in this estimate.
That is also why some people have objected to the article. The analysis only tells us what amount of warming we can expect given an estimated cooling effect of aerosol emissions, and given that ocean oscillations were generally in their warm modes for many of the periods examined. It is possible that none of the warming is due to GHGs at all.
It should be possible to take account of ocean oscillations, at least the primary ones (AMO and PDO/ENSO). Of the time periods studied, only one is somewhat neutral: 1980-1989 had a positive PDO and a negative AMO. All the others were biased in one direction or the other. Of course, this assumes both oscillations are equal in strength, and I suspect the PDO/ENSO is actually the stronger. However, using 1980-1989 would appear to give the best estimate versus any of the other periods.
Allan MacRae:It is also interesting to note that the detailed signals we derive from the data show that CO2 lags temperature at all time scales, from the 9 month delay for ~ENSO cycles to the ~~600 year delay inferred in the ice core data for much longer cycles.
I recently started to have another look at the d/dt CO2 question. There is no lag in the rate of change, and this is in accordance with basic water/air equilibration, which takes a matter of hours.
http://climategrog.wordpress.com/?attachment_id=233
There is a lag in the longer response that is likely due to deeper water turnover.
http://climategrog.wordpress.com/?attachment_id=254
Now, if there is a lag on the decadal time-scale, it becomes hard to support CO2 driving temperature.