New energy-budget-derived estimates of climate sensitivity and transient response in Nature Geoscience
Guest post by Nic Lewis
Readers may recall that last December I published an informal climate sensitivity study at WUWT, here. The study adopted a heat-balance (energy budget) approach and used recent data, including satellite-observation-derived aerosol forcing estimates. I would like now to draw attention to a new peer-reviewed climate sensitivity study published as a Letter in Nature Geoscience, “Energy budget constraints on climate response”, here. This study uses the same approach as mine, based on changes in global mean temperature, forcing and heat uptake over 100+ year periods, with aerosol forcing adjusted to reflect satellite observations. Headline best estimates of 2.0°C for equilibrium climate sensitivity (ECS) and 1.3°C for the – arguably more policy-relevant – transient climate response (TCR) are obtained, based on changes to the decade 2000–09, which provide the best constrained, and probably most reliable, estimates.
The 5–95% uncertainty ranges are 1.2–3.9°C for ECS and 0.9–2.0°C for TCR. I should declare an interest in this study: you will find my name included in the extensive list of authors: Alexander Otto, Friederike E. L. Otto, Olivier Boucher, John Church, Gabi Hegerl, Piers M. Forster, Nathan P. Gillett, Jonathan Gregory, Gregory C. Johnson, Reto Knutti, Nicholas Lewis, Ulrike Lohmann, Jochem Marotzke, Gunnar Myhre, Drew Shindell, Bjorn Stevens, and Myles R. Allen. I am writing this article in my personal capacity, not as a representative of the author team.
The Nature Geoscience paper, although short, is in my view significant for two particular reasons.
First, using what is probably the most robust method available, it establishes a well-constrained best estimate for TCR that is nearly 30% below the CMIP5 multimodel mean TCR of 1.8°C (per Forster et al. (2013), here). The 95% confidence bound for the Nature Geoscience paper’s 1.3°C TCR best estimate indicates that some of the highest-response general circulation models (GCMs) have TCRs that are inconsistent with recent observed changes. Some two-thirds of the CMIP5 models analysed in Forster et al. (2013) have TCRs that lie above the top of the ‘likely’ range for that best estimate, and all the CMIP5 models analysed have an ECS that exceeds the Nature Geoscience paper’s 2.0°C best estimate of ECS. The CMIP5 GCM with the highest TCR, per the Forster et al. (2013) analysis, is the UK Met Office’s flagship HadGEM2-ES model. It has a TCR of 2.5°C, nearly double the Nature Geoscience paper’s best estimate of 1.3°C and 0.5°C beyond the top of the 5–95% uncertainty range. The paper obtains similar, albeit less well constrained, best estimates using data for earlier periods than 2000–09.
Second, the authors include fourteen climate scientists, well known in their fields, who are lead or coordinating lead authors of IPCC AR5 WG1 chapters that are relevant to estimating climate sensitivity. Two of them, professors Myles Allen and Gabi Hegerl, are lead authors for Chapter 10, which deals with estimates of ECS and TCR constrained by observational evidence. The study was principally carried out by a researcher, Alex Otto, who works in Myles Allen’s group.
Very helpfully, Nature’s editors have agreed to make the paper’s main text freely available for a limited period. I would encourage people to read the paper, which is quite short. The details given in the supplementary information (SI) enable the study to be fully understood, and its results replicated. The method used is essentially the same as that employed in my December study, being a more sophisticated version of that used in the Gregory et al. (2002) heat-balance-based climate sensitivity study, here. The approach is to draw sets of samples from the estimated probability distributions applicable to the radiative forcing produced by a doubling of CO2-equivalent greenhouse gas atmospheric concentrations (F2×) and those applicable to the changes in mean global temperature, radiative forcing and Earth system heat uptake (ΔT, ΔF and ΔQ), taking into account that ΔF is closely correlated with F2×. Gaussian (normal) error and internal climate variability distributions are assumed. ECS and TCR values are computed from each set of samples using the equations:
(1) ECS = F2× ΔT / (ΔF − ΔQ) and (2) TCR = F2× ΔT / ΔF .
With sufficient sets of samples, probability density functions (PDFs) for ECS and TCR can then be obtained from narrow-bin histograms, by counting the number of times the computed ECS and TCR values fall in each bin. Care is needed in dealing with samples where any of the factors in the equations are negative, to ensure that each is correctly included at the low or high end when calculating confidence intervals (CIs). Negative factors occur in a modest, but significant, proportion of samples when estimating ECS using data from the 1970s or the 1980s.
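The sampling-and-histogram procedure described above can be sketched in a few lines of Python. Every numerical value below is an illustrative stand-in, not an input used in the paper, and the ΔF–F2× correlation is imposed in a deliberately crude way:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000  # number of sample sets

# Illustrative central estimates and 1-sigma errors (NOT the paper's inputs):
# forcing from doubled CO2 (W m^-2), and changes in global mean temperature (K),
# forcing (W m^-2) and system heat uptake (W m^-2) relative to the base period.
F2x = rng.normal(3.44, 0.20, n)
dT = rng.normal(0.75, 0.10, n)
dF = 1.95 + 0.5 * (F2x - 3.44) + rng.normal(0.0, 0.30, n)  # crude dF-F2x correlation
dQ = rng.normal(0.65, 0.25, n)

ECS = F2x * dT / (dF - dQ)  # equation (1)
TCR = F2x * dT / dF         # equation (2)

# PDF from a narrow-bin histogram over a finite range; samples falling outside
# the range (including any with negative denominators) still count toward the
# confidence intervals, which are computed from the full sample set.
bins = np.linspace(0.0, 10.0, 201)
pdf, edges = np.histogram(ECS[(ECS > 0) & (ECS < 10)], bins=bins, density=True)

print(np.median(TCR), np.percentile(TCR, [5, 95]))
```

With realistic inputs this reproduces the paper’s approach; here the printed TCR median lands near 1.3°C only because the stand-in values were chosen in that neighbourhood.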
Estimates are made for ECS and TCR using ΔT, ΔF and ΔQ derived from data for the 1970s, 1980s, 1990s, 2000s and 1970–2009, relative to that for 1860–79. The estimates from the 2000s data are probably the most reliable, since that decade had the strongest forcing and, unlike the 1990s, was not affected by any major volcanic eruptions. However, although the method used makes allowance for internal climate system variability, the extent to which confidence should be placed in the results from a single decade depends on how well they are corroborated by results from a longer period. It is therefore reassuring that, although somewhat less well constrained, the best estimates of ECS and TCR using data for 1970–2009 are closely in line with those using data for the 2000s. Note that the validity of the TCR estimate depends on the historical evolution of forcing approximating the 70-year linear ramp that the TCR definition involves. Since from the mid-twentieth century onwards greenhouse gas levels rose much faster than previously, that appears to be a reasonable approximation, particularly for changes to the 2000s.
I have modified the R-code I used for my December study so that it computes and plots PDFs for each of the five periods used in the Nature Geoscience study for estimating ECS and TCR. The resulting ECS and TCR graphs, below, are not as elegant as the confidence region graphs in the Nature Geoscience paper, but are in a more familiar form. For presentation purposes, the PDFs (but not the accompanying box-and-whisker plots) have been truncated at zero and the upper limit of the graph and then normalised to unit total probability. Obviously, these charts do not come from the Nature Geoscience paper and are not to be regarded as associated with it. Any errors in them are entirely my own.
The box-and-whisker plots near the bottom of the charts are perhaps more important than the PDF curves. The vertical whisker-end bars and box-ends show (providing they are within the plot boundaries) respectively 5–95% and 17–83% CIs – ‘very likely’ and ‘likely’ uncertainty ranges in IPCC terminology – whilst the vertical bars inside the boxes show the median (50% probability point). For ECS and TCR, whose PDFs are skewed, the median is arguably in general a better central estimate than the mode of the PDF (the location of its peak), which varies according to how skewed and badly-constrained the PDF is. The TCR PDFs (note the halved x-axis scaling), which are unaffected by ΔQ and uncertainty therein, are all better constrained than the ECS PDFs.
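The median-versus-mode point can be made concrete with a toy right-skewed distribution (a lognormal stand-in chosen purely for illustration; it is not the actual ECS PDF):

```python
import numpy as np

rng = np.random.default_rng(0)
# Right-skewed stand-in distribution with median 2.0
samples = rng.lognormal(mean=np.log(2.0), sigma=0.45, size=300_000)

# Box-and-whisker values in IPCC terminology: 5-95% ('very likely'),
# 17-83% ('likely'), and the median as the central estimate.
p5, p17, p50, p83, p95 = np.percentile(samples, [5, 17, 50, 83, 95])

# Mode estimated as the peak of a narrow-bin histogram; for a skewed PDF
# it falls below the median, and its location shifts with the skew.
hist, edges = np.histogram(samples, bins=400, range=(0.0, 10.0))
i = np.argmax(hist)
mode = 0.5 * (edges[i] + edges[i + 1])

print(mode, p50)  # mode sits below the median for a right-skewed PDF
```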
The Nature Geoscience ECS estimate based on the most recent data (best estimate 2.0°C, with a 5–95% CI of 1.2–3.9°C) is a little different from that per my very similar December study (best estimate 1.6°C, with a 5–95% CI of 1.0–2.9°C, rounding outwards). The (unstated) TCR estimate implicit in my study, using Equation (2), was 1.3°C, with a 5–95% range of 0.9–2.0°C, precisely in line with the Nature Geoscience paper. In the light of these comparisons, I should perhaps explain the main differences in the data and methodology used in the two studies:
1) The main difference of principle is that the Nature Geoscience study uses GCM-derived estimates of ΔF and F2×. Multimodel means from CMIP5 runs per Forster et al. (2013) can thus be used as a peer-reviewed source of forcings data. ΔF is accordingly based on simulations reflecting the modelled effects of RCP 4.5 scenario greenhouse gas concentrations, aerosol abundances, etc. My study instead used the RCP 4.5 forcings dataset and the F2× figure of 3.71 Wm−2 reflected in that dataset; I adjusted the projected post-2006 solar and volcanic forcings to conform them with estimated actuals. Use of CMIP5-based forcing data results in modestly lower estimates for both ΔF and F2× (3.44 Wm−2 for F2×). Since CO2 is the dominant forcing agent, and its concentration is accurately known, the value of ΔF is closely related to the value of F2×. The overall effect of the difference in F2× on the estimates of ECS and TCR is therefore small. As set out in the SI, an adjustment of +0.3 Wm−2 to 2010 forcing was made in the Nature Geoscience study in the light of recent satellite-observation constrained estimates of aerosol forcing. On the face of it, the resulting aerosol forcing is slightly more negative than that used in my December study.
2) The Nature Geoscience study derives ΔQ using the change in estimated 0–2000 m ocean heat content (OHC) – which accounts for most of the Earth system heat uptake – from the start to the end of the relevant decade (or 1970–2009), whereas I computed a linear regression slope estimate using data for all years in the period I took (2002–11). Whilst I used the NODC/NOAA OHC data, which corresponds to Levitus et al. (2012), here, for the entire 0–2000 m ocean layer, the Nature Geoscience study splits that layer between 0–700 m and 700–2000 m. It retains the NODC/NOAA Levitus OHC data for the 700–2000 m layer but uses a different dataset for 0–700 m OHC – an update from Domingues et al. (2008), here.
3) The periods used for the headline results differ slightly. I used changes from 1871–80 to 2002–11, whilst the Nature Geoscience study uses changes from 1860–79 to 2000–09. The effects are very small if the CMIP5 GCM-derived forcing estimates are used, but when employing the RCP 4.5 forcings, switching to using changes from 1860–79 to 2000–09 increases the ECS and TCR estimates by around 0.05°C.
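The two ways of estimating the heat uptake rate mentioned in point 2, an endpoint difference versus a regression slope fitted to all years, can be compared on synthetic OHC data. The series below is made up for illustration; it is not the Levitus or Domingues data:

```python
import numpy as np

# Hypothetical annual 0-2000 m OHC anomalies in 10^22 J (illustrative only)
years = np.arange(2002, 2012)
noise = np.array([0.0, -0.3, 0.2, 0.1, -0.2, 0.3, -0.1, 0.2, 0.0, 0.1])
ohc = 0.8 * (years - 2002) + noise

# Endpoint difference between the first and last year (Nature Geoscience approach)
rate_endpoints = (ohc[-1] - ohc[0]) / (years[-1] - years[0])

# Least-squares slope using every year in the period (December-study approach)
rate_slope = np.polyfit(years, ohc, 1)[0]

# Convert 10^22 J per year to a global mean heat uptake in W m^-2
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.1e14
to_wm2 = 1e22 / (SECONDS_PER_YEAR * EARTH_AREA_M2)

print(rate_endpoints * to_wm2, rate_slope * to_wm2)
```

The regression slope uses all the data and is therefore less sensitive to noise in the endpoint years, which is one reason the two approaches can arrive at slightly different ΔQ values from similar datasets.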
Since the Nature Geoscience study and my December study give identical estimates of TCR, which are unaffected by ΔQ, the difference in their estimates of ECS must come primarily from use of different ΔQ figures. The difference between the ECS uncertainty ranges of the two studies likewise almost entirely reflects the different central estimates for ΔQ they use. The ECS central estimate and 5–95% uncertainty range per my December heat-balance/energy budget study were closely in line with the preferred main results estimate for ECS, allowing for additional forcing etc. uncertainties, per my recent Journal of Climate paper, of 1.6°C with a 5–95% uncertainty range of 1.0–3.0°C. That paper used a more complex method which, although less robust, avoided reliance on external estimates of aerosol forcing.
The take-home message from this study, like several other recent ones, is that the ‘very likely’ 5–95% ranges for ECS and TCR in Chapter 12 of the leaked IPCC AR5 second draft scientific report, of 1.5–6/7°C for ECS and 1–3°C for TCR, and the most likely values of near 3°C for ECS and near 1.8°C for TCR, are out of line with instrumental-period observational evidence.
===============================================================
Here’s a figure of interest from the SI file – Anthony
Fig. S3| Sensitivity of 95th percentile of TCR to the best estimate and standard error of the change in forcing from the 2000s to the 1860-1879 reference period. The shaded contours show the 95th percentile boundary of the TCR confidence interval, the triangles show cases (black and blue) from the sensitivity Table S2, and a smaller adjustment to aerosol forcing for comparison (red).
If I understand correctly, they assume all the warming since 1860 is due to CO2, and this is all they come up with.
It is still good for the skeptics, as they have long pointed out that climate models have lost any contact with reality, but it is still wrong.
Considering the low quality of the temperature data, the lack of data for the reference period (1860-1879), the warming that was in the first half of the century, the millions of adjustments we saw happening in the temperature data I am very skeptical towards any significance of the results.
In the last decade we had zero temperature increase despite an increase of about 25% in human CO2 emissions, and very low warming in the ARGO data after adjustment.
Where is the study that calculates sensitivity based on this?
“Estimates are made for ECS and TCR using ΔT, ΔF and ΔQ derived from data for the 1970s, 1980s, 1990s, 2000s and 1970–2009, relative to that for 1860–79”
Actually, CO2 is pretty much transparent at visible wavelengths. It stops almost none of the sun’s energy from reaching the ground. Consider the first graph in the following link: link It shows strong absorption by CO2 between 10 and 20 um (microns). (For reference, blackbody radiation occurs at about 10 um for a temperature of 30 deg. C. so most radiation from the surface of the planet happens at longer wavelengths. link) The other thing to notice is that this particular absorption band overlaps with that of water. The question isn’t whether the energy will be absorbed, it is how high in the atmosphere it will be absorbed. 🙂
Anthony –
From the BBC. I just burst out laughing when I started reading this one.
http://www.bbc.co.uk/news/science-environment-22567023
First they need to prove that the Earth’s temperature variations are driven, whether in part, mostly, or entirely, by CO2. After that they can start to show by how much.
“Chris Schoneveld says:
May 20, 2013 at 5:02 am
I agree with Kristian. Indeed, most commenters ignore or forget the arguments put forward so convincingly by Bob Tisdale. It is not solar nor CO2, it’s the oceans, stupid.”
and it’s oceans all the way down.
Steven Mosher says:
May 20, 2013 at 8:23 am
“It is not solar nor CO2, it’s the oceans, stupid.”
and it’s oceans all the way down.
Amen!
izen says:
May 20, 2013 at 7:04 am
97% of scientists are unpersuaded of Bob’s ENSO hypothesis.
—————————————
You’ve asked them all, then? Or is this another clownish Izen conclusion with no basis?
Just because you define something like climate sensitivity does not mean it really exists.
The implication of climate sensitivity is rising CO2 levels cause rising global temperatures.
But for the last 15 years at least, CO2 levels have been rising and temperatures have not.
This is an existential problem for climate sensitivity. In other words trying to measure it is just a waste of time.
http://www.woodfortrees.org/plot/esrl-co2/isolate:60/mean:12/scale:0.25/plot/hadcrut3vgl/isolate:60/mean:12/from:1958
is my current favorite graph
It clearly shows temperature changes lead CO2 changes, hence CO2 cannot be causing temperatures to rise.
Temperature changes causing CO2 levels to change (or at least changing the rate of CO2 level change) is not a new idea
http://wattsupwiththat.com/2012/08/30/important-paper-strongly-suggests-man-made-co2-is-not-the-driver-of-global-warming/
http://suyts.wordpress.com/2012/03/18/a-review-of-the-co2-correlation-and-a-discussion-of-warming-abatement/
I guess once ideas get sticky they take a while to die.
Shame, really.
The Box-Whisker plots in the first chart are simply not credible, especially for the cyan 1970-1979 range. By eye-ball integration, about 75% of the cyan curve is below 2.0, 85% is below 3.0. Yet the high end of the box (83%) hangs out at 9.3. I suspect there is an error in the calculation of the box parameters from the truncation of the density curve at zero.
As for truncating at zero, I see little justification for that except “it doesn’t fit my mental model.”
If the data is telling you there is a 5% chance of less than zero, don’t truncate it, don’t hide it. Honor it and leave it for yourself and others to analyze and explain later.
Stephen Rasey says:
“The Box-Whisker plots in the first chart are simply not credible,”
The calculation of the box-whisker values is correct, although at an ECS value as high as 9.3 C the denominator is so low that sampling and other uncertainties make the exact value imprecise. As stated, the box-whisker values take account of probability area that falls outside the 0-10 C range of the PDF plot, but the PDF is normalised to unit probability over that range. This follows the treatment in Figure 9.20 of IPCC AR4 WG1. So you can’t directly infer the box-whisker values from integrating under the PDF curve for the 1970s or 1980s, where the probability lying outside the 0-10 C range is, as stated, significant.
ECS values below zero are inconsistent with the physics, not a mental model. The probability area outside the plot boundaries is not hidden, since it is reflected in the box and whisker plots. As for not truncating the plots, perhaps you have an infinite-width computer screen?
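The distinction being drawn here, confidence intervals computed from the full sample set versus a PDF renormalised to unit area over the plotted 0–10°C range, can be demonstrated on a toy heavy-tailed distribution (purely illustrative; none of these numbers come from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
# Heavy-tailed stand-in for a poorly constrained ECS estimate: a ratio with a
# Gaussian denominator occasionally yields negative and very large values.
samples = 3.0 / rng.normal(1.0, 0.45, 500_000)

# Percentiles for the box-whisker come from ALL samples. (A full treatment
# would also reassign negative-denominator samples to the correct tail, as
# described in the post; that refinement is omitted in this sketch.)
p83 = np.percentile(samples, 83)

# The plotted PDF, by contrast, is renormalised over 0-10 only, so areas
# under the plotted curve do not match the quoted percentiles.
inside = samples[(samples > 0) & (samples < 10)]
frac_inside = inside.size / samples.size
frac_below_p83_in_plot = np.mean(inside < p83)

print(frac_inside, frac_below_p83_in_plot)  # the second is not 0.83
```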
commieBob says:
May 20, 2013 at 7:58 am
//////////////////////////////////
I don’t dispute the measured absorption characteristics. It absorbs some, maybe not a lot, but it is a factor on one side of the equation. That is all that I am saying.
You state: “The other thing to notice is that this particular absorption band overlaps with that of water. The question isn’t whether the energy will be absorbed, it is how high in the atmosphere it will be absorbed.” And that is why I mentioned the inter-relation between CO2 and other gases in the atmosphere, and also why I mentioned altitude.
As for not truncating the plots, perhaps you have an infinite-width computer screen?
Did you clip the distribution for plotting purposes or truncate and renormalize?
Only the former has anything to do with the size of the computer screen.
As stated, the box-whisker values take account of probability area that falls outside the 0-10 C range of the PDF plot, but the PDF is normalised to unit probability over that range.
Well then, what’s the point of putting up a PDF at all?
If we are to believe the box-whisker, at least 5 percent and likely over 10% of the distribution must be larger than 10, because 17% of the distribution is larger than 9.3. That would be a hefty part of the distribution to renormalize out of existence. Why show any PDF if over 15% of the tails are missing? The cyan PDF and box-whisker are just not credible as plotted. I do not have to believe them and I don’t.
inconsistent with the physics, not a mental model.
Physics is not a mental model of how the universe works?
In reply, again, to:
lsvalgaard says:
May 19, 2013 at 10:08 pm
William Astley says:
May 19, 2013 at 9:32 pm
As I said, there is observational evidence that the Northern hemisphere, in particular northern high latitudes regions have started to cool.
So what? The climate warms and cools all the time.
William:
Yes, we are in agreement that ‘the climate warms and cools’. It does not, however, warm and cool ‘all the time’, as you state.
The questions which we (or at least I) are trying to answer are:
1) Why did the ‘Dansgaard-Oeschger’ cyclic warming and cooling occur in the past? Michael Mann is focusing on removing the cyclic warming to help with the message.
2) Is the 20th century warming the warming phase of a Dansgaard-Oeschger cycle?
3) As the solar magnetic cycle has abruptly and anomalously changed will this change result in planetary cooling?
Comment: Feel free to answer the above questions, providing logic, observational data, and peer-reviewed papers to support your answer.
Have you looked at the Greenland Ice sheet temperature data for the last 11,000 years?
I am making a testable prediction: the high northern regions of the planet will cool due to the sudden change to the solar magnetic cycle. (See the paper excerpt and link below to ‘Solar activity and Svalbard temperatures’, which predicts cooling in the high Northern latitudes based on analysis of past cooling that correlates with solar magnetic cycle changes and the currently known solar magnetic cycle change; also see the paper ‘Synchronized Northern Hemisphere climate change and solar magnetic cycles during the Maunder Minimum’, which notes there was synchronous cooling during the Maunder Minimum; and see the paper below ‘Long-term Evolution of Sunspot Magnetic Fields’, which predicts solar cycle 25 will be a Maunder-like minimum.)
You are making no prediction. You appear to have some other agenda. I am not sure what your point or motivation is. You appear to be trying to convince people that the solar magnetic cycle is not variable and that there is no sun-climate connection.
Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
Note the warming that occurs on the Greenland ice sheet, and the concurrent, synchronous slight cooling of the Antarctic ice sheet, which is called by the paleoclimate specialists the ‘polar see-saw’.
There are nine warming and cooling periods in the current interglacial. The warming and cooling phases are called Dansgaard-Oeschger cycles. Each and every warming and cooling period has an increase in solar magnetic cycle activity during the warming phase and a decrease during the cooling phase.
This is from the paper you so confidently quote, alleging that it provides data and analysis to challenge the assertion that the Dansgaard-Oeschger cycles and the Heinrich events are caused by solar magnetic cycle changes.
The authors of the paper you quote appear to believe the use of a thesaurus can substitute for data, analysis, researching other papers, and logic. I am curious why this joke of a paper was published and why you are quoting it.
http://www.leif.org/EOS/Obrochta2012.pdf
Re-examination of evidence for the North Atlantic “1500-year cycle” at Site 609
In Holocene sections these variations are coherent with 14C and 10Be estimates of solar variability.
Our new results suggest (William: The authors looked at one site. Other authors looked at multiple sites and found that disconnected ice sheets during the glacial phase warm and cool synchronously, i.e. the entire Northern hemisphere high-latitude regions are warming and cooling synchronously. That is not possible with a chaotic ice sheet mechanism, as the ice sheets are physically disconnected, unless there is a forcing function that is capable of affecting the entire Northern hemisphere) that the “1500-year cycle” may be a transient phenomenon whose origin could be due, for example, to ice sheet boundary conditions for the interval in which it is observed. We therefore question whether it is necessary to invoke such exotic explanations as heterodyne frequencies or combination tones to explain a phenomenon of such fleeting occurrence (The Dansgaard-Oeschger cycle is roughly 500 years long. The more severe version of the D-O cycle is called a Heinrich event. The Heinrich events are roughly 1000 years in duration. These are not fleeting planetary temperature changes. Adjectives and the liberal use of a thesaurus do not substitute for a scientific argument. It is a fact that solar magnetic cycle changes occur at the same time as the D-O cycles and the Heinrich events.) that is potentially an artifact of arithmetic averaging.
William:
The above paper is a joke! If we were not in the middle of a ‘climate war’, I would assume the authors had intended it as a joke.
Manipulation of the data and analysis, calling people names, blocking the publication of papers that disprove the extreme AGW theory, media specials, and endless hype about extreme weather does not change the mechanisms.
It appears the 20th century observed warming was the warming phase of a Dansgaard-Oeschger (D-O) cycle. Based on the paleoclimatic record and the fact that there has been an anomalous change to the solar magnetic cycle, which appears likely to lead to a Maunder-like minimum, the same regions that warmed during the 20th century will now cool.
Are the authors of the above joke paper trying to convince us that the D-O cycle did not happen?
There is absolutely no evidence that the D-O cycles are an artifact of arithmetic averaging.
What the heck does the phrase ‘an artifact of arithmetic averaging’ mean in the context of explaining what caused the D-O cycles? How in the world did this joke paper get published? Why was it written?
Look at this graph, which shows Greenland ice sheet temperature data for the last 11,000 years. There are clearly nine (9) D-O cycles evident in the data. There are historical documents that record the effects the D-O cycle warming phases had on civilization in the affected regions (warm: beneficially increased food production, less disease, healthy happy people) and the effects the cold phases had on the affected regions (cold: starvation, reduced food production, increased disease, unhappy people).
Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
http://www.climate4you.com/
http://www.solen.info/solar/images/comparison_similar_cycles.png
http://arxiv.org/abs/1009.0784v1
William: When the solar magnetic cycle slows down there is 10 to 12 year delay before there is cooling in the high Arctic regions.
http://arxiv.org/abs/1112.3256
Solar activity and Svalbard temperatures
The long temperature series at Svalbard (Longyearbyen) show large variations, and a positive trend since its start in 1912. During this period solar activity has increased, as indicated by shorter solar cycles. …. ….The temperature at Svalbard is negatively correlated with the length of the solar cycle. The strongest negative correlation is found with lags 10 to 12 years. These models show that 60 per cent of the annual and winter temperature variations are explained by solar activity. For the spring, summer and fall temperatures autocorrelations in the residuals exists, and additional variables may contribute to the variations. These models can be applied as forecasting models. …. ….We predict an annual mean temperature decrease for Svalbard of 3.5 ±2C from solar cycle 23 to solar cycle 24 (2009 to 2020) and a decrease in the winter temperature of ≈6 C. … … A systematic study by Solheim, Stordahl and Humlum [15] (called SSH11 in the following) of the correlation between SCL and temperature lags in 11 years intervals, for 16 data sets (William: solar cycles), revealed that the strongest correlation took place 10 to 12 years after the mid-time of a solar cycle, for most of the locations included. In this study the temperature series from Svalbard (Longyearbyen) was included, and a relation between the previous sunspot cycle length (PSCL) and the temperature in the following cycle was determined. This relation was used to predict that the yearly average temperature, which was -4.2 C in sunspot cycle (SC) 23, was estimated to decrease to -7.8 C in SC24, with a 95% confidence interval of -6.0 to -9.6C [15]. SSH11[15] found that stations in the North Atlantic (Torshavn, Akureyri and Svalbard), had the highest correlations.
William: Latitude and longitude of Svalbard (Longyearbyen)
78.2167° N, 15.6333° E Svalbard Longyearbyen, Coordinates
http://www.pnas.org/content/early/2010/11/08/1000113107.abstract
Synchronized Northern Hemisphere climate change and solar magnetic cycles during the Maunder Minimum
The Maunder Minimum (A.D. 1645–1715) is a useful period to investigate possible sun–climate linkages as sunspots became exceedingly rare and the characteristics of solar cycles were different from those of today. Here, we report annual variations in the oxygen isotopic composition (δ18O) of tree-ring cellulose in central Japan during the Maunder Minimum. We were able to explore possible sun–climate connections through high-temporal resolution solar activity (radiocarbon contents; Δ14C) and climate (δ18O) isotope records derived from annual tree rings. The tree-ring δ18O record in Japan shows distinct negative δ18O spikes (wetter rainy seasons) coinciding with rapid cooling in Greenland and with decreases in Northern Hemisphere mean temperature at around minima of decadal solar cycles. We have determined that the climate signals in all three records strongly correlate with changes in the polarity of solar dipole magnetic field, suggesting a causal link to galactic cosmic rays (GCRs). These findings are further supported by a comparison between the interannual patterns of tree-ring δ18O record and the GCR flux reconstructed by an ice-core 10Be record. Therefore, the variation of GCR flux associated with the multidecadal cycles of solar magnetic field seem to be causally related to the significant and widespread climate changes at least during the Maunder Minimum.
http://arxiv.org/abs/1009.0784v1
Long-term Evolution of Sunspot Magnetic Fields
Independent of the normal solar cycle, a decrease in the sunspot magnetic field strength has been observed using the Zeeman-split 1564.8nm Fe I spectral line at the NSO Kitt Peak McMath-Pierce telescope. Corresponding changes in sunspot brightness and the strength of molecular absorption lines were also seen. This trend was seen to continue in observations of the first sunspots of the new solar Cycle 24, and extrapolating a linear fit to this trend would lead to only half the number of spots in Cycle 24 compared to Cycle 23, and imply virtually no sunspots in Cycle 25.
[snip – off topic rant – mod]
inconsistent with physics
Suppose I perform a low-temperature-physics experiment.
I collect my data. When compiling it, I find that 72 values out of 1000 are giving a temperature that is below absolute zero.
What should I do? (select all that apply)
A). temperatures below absolute zero are inconsistent with physics. So throw out these data and proceed with the 928 values that are “good”.
B). Work with the 1000 data points as collected.
C). Work with the 1000 data points as collected, and widen my error bars.
D). Burn the journal and start the experiment over.
E). Write up a paper on the set-up, methodology, and surprising results. Then start over.
F). Other….
Steven Mosher: Anthony, that is the big bottom line here. You had Cook and company trashing Nic, and it appears that 14 IPCC authors think differently than Cook and company.
I think that is an important observation.
This is the best estimation of its kind to date. The derivation makes use of what I might call the “consensus assumptions” (sun and clouds are correctly accounted for, “equilibrium” is a relevant concept, TCS is a constant independent of starting temperature, etc.) and comes to a conclusion that what we have been told is the “consensus result” is an exaggeration.
The results are heavily dependent on the priors chosen, and on the subsets of the data included in the analysis. For those who accept the “consensus assumptions”, I think the conclusion is that there are not enough data for a result to inspire any confidence, so the best result is the pdf with the widest spread. It is informative that the pdf with the narrowest spread has the most recent data and highest CO2 concentration. The calculations can be repeated annually as the data accumulate, and it will be interesting to see how the pdfs respond if the present “seemingly reduced rate of warming” continues as CO2 continues to accumulate.
Those of us who have criticized the “consensus assumptions” will most likely be unmoved by this derivation, but I think that it is becoming harder and harder for anyone to believe that there is anything like a “consensus” around the claims of James Hansen and Al Gore.
lsvalgaard says:
May 20, 2013 at 8:25 am
Steven Mosher says:
May 20, 2013 at 8:23 am
“It is not solar nor CO2, it’s the oceans, stupid.”
and it’s oceans all the way down.
Amen!
………………
Not the end of the story, just end of one chapter to draw you to the next.
I’ve read parts of the draft of this intriguing story. It is not ‘Sinking of the Titanic’ but ‘Sinking in the North Atlantic’; the location, however, is the same. The IPCC’s got it, but as usual can’t get it quite right.
http://www.vukcevic.talktalk.net/CB.htm
Svalgaard and Mosher, you are on the list for the preprint; however, you may like it not a lot.
William Astley says:
May 20, 2013 at 10:44 am
1) Why did the ‘Dansgaard-Oeschger’ cyclic warming and cooling occur in the past? Michael Mann is focusing on removing the cyclic warming to help with the message.
There is no such precise ‘cycle’; see http://www.leif.org/EOS/Obrochta2012.pdf by very respected authors.
2) Is the 20th century warming the warming phase of a Dansgaard-Oeschger cycle?
Therefore 2) is moot.
3) As the solar magnetic cycle has abruptly and anomalously changed will this change result in planetary cooling?
Actually, it is very likely that it will have the opposite effect: without dark spots to lower TSI, we may get even more irradiance during a Maunder-like minimum.
I am making a testable prediction. The high northern regions of the planet will cool due to the sudden change to the solar magnetic cycle.
This is not a prediction, but just an assertion.
http://arxiv.org/abs/1009.0784v1
Long-term Evolution of Sunspot Magnetic Fields
Here is an update of that paper: http://www.leif.org/research/apjl2012-Liv-Penn-Svalg.pdf [note the authors].
An article in BBC News seems to say that not much has really changed. Look at the quote by Otto:
http://www.bbc.co.uk/news/science-environment-22567023
“We would expect a single decade to jump around a bit but the overall trend is independent of it, and people should be exactly as concerned as before about what climate change is doing,” said Dr Otto.
Is there any succour in these findings for climate sceptics who say the slowdown over the past 14 years means the global warming is not real?
“None. No comfort whatsoever,” he said.
Steven Mosher: and it’s oceans all the way down.
Good. Very good. 😎
lsvalgaard says:
May 20, 2013 at 8:25 am
Steven Mosher says:
May 20, 2013 at 8:23 am
“It is not solar nor CO2, it’s the oceans, stupid.”
and it’s oceans all the way down.
Amen!
>>>>>>>>>
It is good to have you guys around.
lgl says, May 20, 2013 at 7:42 am:
“Do you have a better proxy for ENSO going back to 1880?”
I already gave you one:
http://i1172.photobucket.com/albums/r565/Keyell/ENSOampAMOb_zps2f9f8129.png
This is ENSO (bottom graphs). Once again, NINO3.4 is not ENSO. NINO3.4 represents the eastern half of the ENSO phenomenon. It has the strongest amplitudes; that is why its imprint on the global curve is so evident. But the upward shifts are generated in the western half of the ENSO region (the warm pool), not the eastern. So NINO3.4 does not incorporate all the oceanic (and atmospheric) processes that together constitute the ENSO phenomenon. You need to include both sectors.
A climate sensitivity lower than 2 K was already demonstrated here:
Scafetta N., 2008. Comment on ‘Heat capacity, time constant, and sensitivity of Earth’s climate system’ by Schwartz. Journal of Geophysical Research 113, D15104.
http://onlinelibrary.wiley.com/doi/10.1029/2007JD009586/abstract
http://people.duke.edu/~ns2002/pdf/2007JD009586.pdf
The real transient climate sensitivity (at the decadal-multidecadal scale) is very likely about 1.0–1.5 K, as demonstrated by geometrical constraints based on the 60-year natural oscillations. See here:
Scafetta N., 2012. Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models. Journal of Atmospheric and Solar-Terrestrial Physics 80, 124-137.
http://www.sciencedirect.com/science/article/pii/S1364682611003385
http://people.duke.edu/~ns2002/pdf/ATP3533.pdf
where the model using the climate sensitivity correction plus the natural oscillations is shown to forecast climate change well. See the update of the model to March 2013 at the bottom of my website here:
Astronomical Climate model forecast vs. IPCC
http://people.duke.edu/~ns2002/#astronomical_model-1
A climate sensitivity at most half of what is claimed by the IPCC is also demonstrated in section 3 of my latest publication:
Scafetta N., 2013. Discussion on common errors in analyzing sea level accelerations, solar trends and global warming. Pattern Recognition in Physics, 1, 37–57.
http://www.pattern-recogn-phys.net/1/37/2013/prp-1-37-2013.html
http://www.pattern-recogn-phys.net/1/37/2013/prp-1-37-2013.pdf
Kristian
“This is ENSO (bottom graphs)”
No it isn’t.