Assessment of Equilibrium Climate Sensitivity and Catastrophic Global Warming Potential Based on the Historical Data Record

This exercise in data analysis pins down a value of 1.8 °C for ECS.

Guest essay by Jeff L.

Introduction:

If the global climate debate between skeptics and alarmists were cooked down to one topic, it would be Equilibrium Climate Sensitivity to CO2 (ECS), or how much the atmosphere will warm for a given increase in CO2.

Temperature change as a function of CO2 concentration is logarithmic, so ECS is commonly expressed as X °C per doubling of CO2. Estimates vary widely, from less than 1 °C per doubling to over 5 °C per doubling. Alarmists would suggest sensitivity is on the high end and that catastrophic effects are inevitable. Skeptics would say sensitivity is on the low end and that any changes will be non-catastrophic and easily adapted to.

All potential “catastrophic” consequences are based on one key assumption: high ECS (generally > 3.0 °C/doubling of CO2). Without high sensitivity, there will not be large temperature changes and there will not be catastrophic consequences. As such, this is essentially the crux of the argument: if sensitivity is not high, all the “catastrophic” and destructive effects hypothesized will not happen. One could argue this makes ECS the most fundamental quantity to be understood.

In general, those who are supportive of the catastrophic hypothesis reach their conclusion based on global climate model output. As many who follow the climate debate have observed, there has been a “pause” in global warming over the last 15+ years, illustrating that there are significant uncertainties in the validity of global climate models and the ECS values associated with them.

There is a better alternative to using models to test the hypothesis of high ECS. We have temperature and CO2 data from pre-industrial times to the present day. According to the catastrophic theory, CO2 is the driver of all longer-term trends in modern temperature change. As such, the catastrophic hypothesis is easily tested with the available data: we can use the CO2 record to calculate a series of synthetic temperature records using different assumed sensitivities and see which sensitivity best matches the observed temperature record.

The rest of this paper will test the hypothesis of high ECS against the observed data. I want to reiterate the assumption of this hypothesis, which is also the assumption of the catastrophists’ position: that all longer-term temperature change is driven by changes in CO2. I do not want to imply that I necessarily endorse this assumption, but I do want to illustrate its implications. This is important to keep in mind, as I will attribute all longer-term temperature change to CO2 in this analysis. I will comment at the end of this paper on the implications if this assumption is violated.

Data:

There are several potential datasets that could be used for the global temperature record. One of the longer and more commonly referenced datasets is HADCRUT4, which I have used for this study (plotted in fig. 1). The data may be found at the following weblink:

http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT4-gl.dat

I have used the global average annual temperature anomaly from this data set. The record starts in 1850 and runs to the present, so we have 163 years of data. For the purposes of this analysis, the various adjustments that have been made to the data over the years make very little difference to the best-fit ECS. I will calculate which ECS best fits this temperature record, given the CO2 record.

Figure 1: HADCRUT4 Global Average Annual Temperature Anomaly

The CO2 data set is from two sources. From 1959 to the present, the Mauna Loa annual mean CO2 concentration is used. The data may be found at the following weblink:

ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_annmean_mlo.txt

For pre-1959, ice core data from Law Dome is used. The data may be found at the following weblink:

ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/law/law_co2.txt

The Law Dome data record runs from 1832 to 1978. This is important for two reasons. First, and most importantly, it overlaps the Mauna Loa data set. It can easily be seen in figure 2 that it is consistent with the Mauna Loa data set, thus providing higher confidence in the pre-Mauna Loa portion of the record. Second, the start of the data record pre-dates the start of the HADCRUT4 temperature record, allowing estimates of ECS to be tested against the entire HADCRUT4 temperature record. For the calculations that follow, the pre-1959 Law Dome data were simply spliced onto the Mauna Loa data, as the two data sets tie with little offset.
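
As an illustration of that splice, a minimal Python sketch is given below. It assumes the two records have already been read into simple {year: ppm} dictionaries; the file parsing itself is not shown, and none of this comes from the author’s spreadsheet.

```python
def splice_co2(law_dome, mauna_loa):
    """Splice the CO2 records: Law Dome values before 1959, Mauna Loa from 1959 on.

    Both inputs are assumed to be dicts of {year: annual mean CO2 in ppm}.
    """
    spliced = {yr: ppm for yr, ppm in law_dome.items() if yr < 1959}
    spliced.update({yr: ppm for yr, ppm in mauna_loa.items() if yr >= 1959})
    return dict(sorted(spliced.items()))
```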

Figure 2: Modern CO2 concentration record from Mauna Loa and the Law Dome ice core.

Calculations:

From the above CO2 record, a set of synthetic temperature records can be constructed with various assumed ECS values. The synthetic records can then be compared to the observed data (HADCRUT4) and a determination of the best fit ECS can be made.

The equation needed for the calculation of the synthetic temperature record is as follows:

∆T = ECS * ln(C2/C1) / ln(2)

where:

∆T = Change in temperature, °C

ECS = Equilibrium Climate Sensitivity, °C/doubling

C1 = CO2 concentration (ppm) at time 1

C2 = CO2 concentration (ppm) at time 2

For the purposes of this test of sensitivity, I set time 1 to 1850, the start of the HADCRUT4 temperature dataset. C1 at that time, from the Law Dome data set, is 284.7 ppm. For each year from 1850 to 2013, I use the appropriate C2 value for that year and calculate ∆T with the formula above. To tie back to the HADCRUT4 data set, I take the HADCRUT4 temperature anomaly in 1850 (-0.374 °C) and add on the calculated ∆T value to create a synthetic temperature record.
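
A minimal sketch of this step in Python is shown below. The author’s calculations were done in a spreadsheet; this simply restates the formula and anchoring described above, with the spliced CO2 series assumed to be a {year: ppm} dictionary.

```python
import math

def synthetic_record(co2, ecs, base_year=1850, base_anomaly=-0.374):
    """Build a synthetic temperature record for one assumed ECS.

    Applies dT = ECS * ln(C2/C1) / ln(2) relative to the base year's CO2
    level, then adds the HADCRUT4 anomaly in the base year (-0.374 degC).
    """
    c1 = co2[base_year]
    return {yr: base_anomaly + ecs * math.log(c2 / c1) / math.log(2)
            for yr, c2 in sorted(co2.items()) if yr >= base_year}
```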

ECS values ranging from 0.0 to 5.0 °C/doubling were used to create a series of synthetic temperature records. Figure 3 shows the calculated synthetic records, labeled by their input ECS, as well as the observed HADCRUT4 data.

Figure 3: HADCRUT4 observed data and synthetic temperature records for ECS values between 0.0 and 5.0 °C/doubling. Where not labeled, synthetic records are at increments of 0.2 °C/doubling. Warmer colors are warmer synthetic records.

From Figure 3, it is visually apparent that an ECS value somewhere close to 2.0 °C/doubling is a reasonable match to the observed data. This can be more specifically quantified by calculating the Mean Squared Error (MSE) of each synthetic record against the observed data. This is a goodness-of-fit measure, with the minimum MSE identifying the best-fit ECS value. Figure 4 is a plot of the MSE value for each ECS synthetic record.
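
A sketch of that goodness-of-fit scan, reusing synthetic_record from the sketch above; the observed series is assumed to be a {year: anomaly} dictionary built from HADCRUT4.

```python
def ecs_scan(observed, co2, ecs_values):
    """Return (ecs, mse) pairs; the minimum-MSE entry is the best-fit ECS."""
    results = []
    for ecs in ecs_values:
        synth = synthetic_record(co2, ecs)
        years = [yr for yr in observed if yr in synth]
        mse = sum((observed[yr] - synth[yr]) ** 2 for yr in years) / len(years)
        results.append((ecs, mse))
    return results

# e.g. scan 0.0 to 5.0 in 0.1 steps:
# results = ecs_scan(hadcrut4, co2, [i / 10 for i in range(51)])
# best_ecs, best_mse = min(results, key=lambda pair: pair[1])
```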

Figure 4: Mean Squared Error vs. ECS value. A few ECS values of interest are labeled for further discussion.

Plotting the MSE values, an ECS of 1.8 °C/doubling is found to have the minimum MSE and is thus determined to be the best estimate of ECS based on the observed data over the last 163 years.

Discussion :

A comparison with various past estimates of ECS is made in figure 5. The base for figure 5 comes from the following weblink:

http://www.cato.org/sites/cato.org/files/wp-content/uploads/gsr_042513_fig1.jpg

See link for the original figure.

Figure 5: Comparison of the result of this study (1.8 °C/doubling) to other recent ECS estimates.

The estimate derived from this study agrees closely with other recent studies. The gray line on figure 5 at a value of 2.0 represents the mean of 14 recent studies. Looking at the MSE curve in figure 4, the MSE at 2.0 is essentially as low as at 1.8, so it would have a similar probability. This study further reinforces the conclusions of other recent studies which suggest climate sensitivity to CO2 is low relative to IPCC estimates.

The big difference with this study is that it is based strictly on the observed data. There are no models involved and only one assumption: that the longer-period variation in temperature is driven by CO2 only. Given that the most likely sensitivity of 1.8 °C/doubling is derived from 163 years of observed data, the conclusion is likely to be quite robust.

A brief discussion of the assumption will now be made in light of this conclusion. The question to be asked is: if there are other factors affecting the long-period trend of the observed temperature record (there are many other potential factors, none of which will be discussed in this paper), what does that mean for this best-fit ECS?

There are two options. If the true ECS is higher than 1.8, then by definition, to match the observed data, there has to be some sort of negative forcing in the climate system pushing the temperature down from where it would otherwise be expected to be. In this scenario, CO2 forcing would be preventing the temperature trend from falling and would be providing a net benefit.

The second option is that the true ECS is lower than 1.8. In this scenario, also by definition, there has to be another positive forcing in the climate system pushing the temperature up to match the observed data. In this case CO2 forcing is smaller and poses no concern for detrimental effects.

For both of these options, it is hard to paint a picture where CO2 is going to be significantly detrimental to human welfare. The observed temperature and CO2 data over the last 163 years simply do not allow for it.

Conclusion :

Based on data sets over the last 163 years, a most likely ECS of 1.8 °C/doubling has been determined. This is a simple calculation based only on data, with no complicated computer models needed.

An ECS value of 1.8 is not consistent with any catastrophic warming estimates but is consistent with skeptical arguments that warming will be mild and non-catastrophic. At the current rate of increase of atmospheric CO2 (about 2.1 ppm/yr), and an ECS of 1.8, we should expect about 1.0 °C of warming by 2100. By comparison, we have experienced 0.86 °C of warming since the start of the HADCRUT4 data set. This warming is similar to what would be expected over the next ~100 years and has not been catastrophic by any measure.
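
A quick check of that 2100 figure; the ~396 ppm starting concentration assumed here for 2013 is an illustrative value, not one quoted in the paper.

```python
import math

ecs = 1.8
c_2013 = 396.0                               # assumed 2013 annual mean, ppm
c_2100 = c_2013 + 2.1 * (2100 - 2013)        # ~579 ppm at 2.1 ppm/yr
warming = ecs * math.log(c_2100 / c_2013) / math.log(2)
print(round(warming, 2))                     # ~0.99 degC, i.e. about 1.0 degC
```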

For a sense of how unlikely the catastrophic scenario is, the IPCC AR5 estimate of 3.4 has an MSE nearly as large as assuming that CO2 has zero effect on atmospheric temperature (see fig. 4).

There has been much discussion lately of how the climate models have diverged from the observed record over the last 15 years due to “the pause”. All sorts of explanations have been posited by those supporting a high ECS value. The most obvious resolution is that the true ECS is lower, as concluded in this paper. Note how “the pause” brings the observed temperature curve right to the 1.8 ECS synthetic record (see fig. 3). Given an ECS of 1.8, the global temperature is right where one would predict it should be. No convoluted explanations for “the pause” are needed with a lower ECS.

The high sensitivity values used by the IPCC , with their assumption that long term temperature trends are driven by CO2, are completely unsupportable based on the observed data. Along with that, all conclusions of “climate change” catastrophes are also completely unsupportable because they have the high ECS values the IPCC uses built into them (high ECS to get large temperature changes to get catastrophic effects).

Furthermore, and most importantly, any policy changes designed to curb “climate change” are also unsupportable based on the data. The need for these policies is assumed to arise from potential future catastrophic effects of CO2, but that is predicated on the high ECS values of the IPCC.

Files:

I have also attached a spreadsheet with all my raw data and calculations so anyone can easily replicate the work.

ECS Data (xlsx)

=============================================================

About Jeff:

I have followed the climate debate since the 90s. I was an early “skeptic” based on my geologic background: having knowledge of how climate had varied over geologic time, the fact that no one was talking about natural variation and natural cycles was an immediate red flag. The further I dug into the subject, the more I realized there were substantial scientific problems. The paper I am submitting is one I have wanted to write for years, as I did the basic calculations several years ago and realized there was no support in the observed data for high climate sensitivity.


This simplistic and faulty analysis assumes that the Hadcrut temperature record is a true representation of world temperatures over the period shown. In fact, the Hadcrut graph, like all similar ones, is the result of continuous ‘adjustment’ to increase recent temperatures and reduce past temperature records. Most recent temperature increases are solely the result of beneficial ‘adjustments’.
Basing a comparison and sensitivity between CO2 levels and temperature also assumes that there are no other influences at all (like the Sun) on Earth’s average temperature. Coming up with a figure of 1.8, or the IPCC 2.1, or the alarmist 4, is just folly.
First one has to establish what the real influences on Earth’s temperature are and then work back to their likely effects, not assume that it is just CO2 and attribute ‘adjusted’ temperature rise to that.
REPLY: Then go do it, but in the meantime your comment is little more than whining – Anthony

Camburn

Simple and elegant.

vboring

Thermal mass seems to be ignored.
You can’t calculate an equilibrium value without a thermal mass unless you assume the thermal mass is negligible – making the instantaneous value the equilibrium value. Considering the amount of water on the planet, it seems unlikely that the thermal mass of the planet is negligible.
And, of course, the oceans move heat spatially and temporally. The simplest acceptably accurate solution to a problem is definitely the best one, but this solution is too simple, too inaccurate.

Alex Hamilton

The assumption relating to climate sensitivity to carbon dioxide is dependent upon an assumption that there would be uniform temperatures in the troposphere in the absence of moisture and so-called greenhouse gases. GH gases are assumed to establish a “lapse rate” by radiative forcing and subsequent upward convection.
In physics “convection” can be diffusion at the molecular level or advection or both. It is important to understand that the so-called “lapse rate” (which is a thermal gradient) evolves spontaneously at the molecular level, because the laws of physics tell us such a state is one with maximum entropy and no unbalanced energy potentials. In effect, for individual molecules the mean sum of kinetic energy and gravitational potential energy is constant.
So this thermal gradient is in fact a state of thermodynamic equilibrium. If it is already formed in any particular region then indeed extra thermal energy absorbed at the bottom of a column of air will give the impression of warm air rising. But that may not be the case if the thermal gradient is not in thermodynamic equilibrium and is initially not as steep as it normally would be. In such a case thermal energy can actually flow downwards in order to restore thermodynamic equilibrium with the correct thermal gradient.
What then is the “correct” thermal gradient? The equation (PE+KE)=constant amounts to MgH+MCpT=constant (where M is the mass, H is the height differential and T the temperature differential and Cp the specific heat.) So the theoretical gradient for a pure non-radiating gas is -g/Cp as is well known to be the so-called dry adiabatic lapse rate. However, thermodynamic equilibrium must also take into account the fact that radiation could be transferring energy between any radiating molecules (such as water vapour or carbon dioxide) and this has a propensity to reduce the net result for the thermal gradient. Hence we get the environmental lapse rate representing the overall state of thermodynamic equilibrium.
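
For reference, a quick numerical check of the -g/Cp figure mentioned above, using standard textbook values for g and the specific heat of dry air; this is an illustrative sketch, not part of the comment.

```python
g = 9.81        # gravitational acceleration, m/s^2
cp = 1004.0     # specific heat of dry air at constant pressure, J/(kg*K)

# Dry adiabatic lapse rate -g/Cp, expressed in K per km
lapse_rate_per_km = -(g / cp) * 1000.0
print(round(lapse_rate_per_km, 1))   # ~ -9.8 K/km
```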

Gordon Ford

This can’t be! The data agrees with the conclusions! (sarc off)

cnxtim

The simple fact is this: warmists believe that traces of CO2 generated at ground level by the burning of so-called “fossil fuels” make the implausible journey to the upper atmosphere and cause CAGW – they have NO other position whatsoever; and since, despite their mantra and models, recent GW has ceased for 17.5 years, they have NO position whatsoever.
Case proven and closed, time to get a real job and stop wasting the taxpayers’ money!

Good work. But the assumption that I find almost universal and, to my mind, the most unlikely, is that CO2 emissions will continue at the current rate. Look at the full range of assumptions that simple assumption requires: that electric cars will not replace current ICE vehicles for many decades; that electricity will not be increasingly produced by non-CO2-emitting generators (especially nuclear, which is experiencing unprecedented adoption in India, China, the Middle East, South America, Britain, etc., places where a large portion of the CO2 emission sources are located); that natural gas will not continue to replace most coal generation; or, alternatively, that the non-emitting coal combustion process developed at Ohio State will not become commercialized.
That CO2 emissions will remain the same for the extended future I find utterly implausible and practically impossible. Time and technology march on. Always have, always will.

TRG

ntesdorf: I suppose if you find the analysis simplistic and faulty, you might as well criticize it on the same basis, which you seem to have done quite nicely.

Robertv

[snip – waaaaaaaaaaayyy off topic – Anthony]

albertalad

I always suspect calculations based on 100-plus years, which leave out the historical earth climate: the other warm periods in time, plus the various ice ages. However, I do understand that in the AGW camp CO2 is THE factor. What I don’t get is why we always fall into the AGW trap and only concentrate on what the AGW camp wants us to talk about, CO2. Something melted each ice age long before man ever existed. I know, I know – trying to prove man is entirely responsible is the buzzword. With respect – I don’t trust any temperature massaged so many times that none of us know what real temperatures were or are supposed to be anymore. Even the data collected by different devices cannot agree with each other and have to be massaged.

Pat Kelly

Well, setting aside the raging debate on the credibility of the data set being used, this is assuming that ALL influences on global temperature are solely attributable to carbon dioxide concentrations, which I presume most sincere people would doubt. However, as a tack to take while in a bar debating the severity of anthropogenic global warming, I fully support its simplicity in pointing out the flaws of an alarmist’s argument for catastrophe.

Willis Eschenbach

I hate to be the guy throwing cold water, but that method needs to be tested on out-of-sample data. All you’ve done up there is a simple fit of CO2 to temperature. I can do the same thing with the cost of US postage stamps, and get the same level of significance. Or I can do it with population, or with the cumulative inflation rate … so what?
As a first test of your results, you need to do an “out-of-sample” test by doing the following:
1. Divide your data into 3 periods of ~ 50 years each.
2. Fit the CO2 increase to the temperature in each of the periods separately.
3. Apply the “climate sensitivity” you found in step 2 to the other two segments of the data and note how poorly they fit.
Give that a shot, report back your results …
w.
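
A minimal sketch of the out-of-sample test Willis describes above, reusing synthetic_record from the earlier sketch; the ~50-year period boundaries are illustrative assumptions.

```python
def fit_ecs_on_period(observed, co2, start, end, ecs_grid):
    """Grid-search ECS over one period, anchoring the synthetic record at the
    period's first year and its observed anomaly. Returns (best_ecs, best_mse)."""
    best = None
    for ecs in ecs_grid:
        synth = synthetic_record(co2, ecs, base_year=start,
                                 base_anomaly=observed[start])
        yrs = [yr for yr in range(start, end + 1) if yr in observed and yr in synth]
        mse = sum((observed[yr] - synth[yr]) ** 2 for yr in yrs) / len(yrs)
        if best is None or mse < best[1]:
            best = (ecs, mse)
    return best

def out_of_sample_test(observed, co2,
                       periods=((1850, 1903), (1904, 1957), (1958, 2013))):
    """Fit ECS in each period, then report how that ECS scores on the other periods."""
    grid = [i / 10 for i in range(51)]       # 0.0 .. 5.0 degC/doubling
    for start, end in periods:
        ecs_fit, _ = fit_ecs_on_period(observed, co2, start, end, grid)
        for s2, e2 in periods:
            if (s2, e2) == (start, end):
                continue
            _, mse = fit_ecs_on_period(observed, co2, s2, e2, [ecs_fit])
            print(f"ECS {ecs_fit:.1f} fit on {start}-{end} -> "
                  f"MSE on {s2}-{e2}: {mse:.4f}")
```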

‘In general, those who are supportive of the catastrophic hypothesis reach their conclusion based on global climate model output.”
Wrong. Hansen for example relies on Paleo data.
REPLY: and I call BS on your “wrong”

“Models based on the business-as-usual scenarios of the Intergovernmental Panel on Climate Change (IPCC) predict a global warming of at least 3 °C by the end of this century.”

From Hansen’s “paper”:
Hansen, J., 2007: Climate catastrophe. New Scientist, 195, no. 2614 (July 28), 30-34.
A sea level rise of several metres will be a near certainty if greenhouse gas emissions keep increasing unchecked. Why are scientists reluctant to speak out? http://pubs.giss.nasa.gov/docs/2007/2007_Hansen_2.pdf
– Anthony

“Hansen for example relies on Paleo data.”
But who relies on Hansen? Anyone? [I mean, anyone rational.]

Dr Burns

Here’s Siple vs Mauna Loa. I wouldn’t be surprised if Law Dome has also been faked.
http://www.ferdinand-engelbeen.be/klimaat/klim_img/siple1a.jpg

jhborn

I saw no formulas in the “modeled temps” tab of your spreadsheet, but I infer from your discussion that you assumed no delay. My understanding of “equilibrium climate sensitivity” is the temperature increase that results after the CO2 concentration has reached double and remained there indefinitely.
In other words, proponents of high equilibrium climate sensitivity would say that temperatures would continue to climb even if CO2 concentration remained fixed; the temperature would approach the equilibrium value asymptotically.
This means you need at least two parameters (only two if you assume a first-order linear system): equilibrium climate sensitivity and time constant, or, as vboring put it, thermal mass.

Dr Burns

The article ignores the fact that CO2 changes are a result of warming rather than a cause.

rgbatduke

A perfectly reasonable analysis as far as it goes. It suffers from the usual — the assumption that CO_2 is the only knob is almost certainly false. For example, would anyone care to take the model and hindcast the Little Ice Age from it? How about the Medieval Warm Period? We know that the climate varies naturally by order of 1 C or more on a century time scale. Indeed, if one looks even at HADCRUT4:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1800/to:2013/plot/hadcrut4gl/trend
The rule rather than the exception is for the climate to vary by 0.1C or more over a decade. Furthermore, the rule rather than the exception is for the climate to vary by 0.1/decade or more over multiple decades in a row in a single direction.
How anyone could call the stretch from 1970 to 2000 “unusual” is beyond me, when the stretch from 1910 to 1940 is almost identical in structure and span.
Note well that CO_2 did not descend or remain neutral from 1855 to 1910, or from 1950 to 1970, or from 2000 to the present, but the climate did.
Basically, one simply cannot look at the temperature record anywhere and ascertain how much of any given stretch of temperature or its variation occurs due to “natural” causes and how much occurs due to variations in atmospheric GHG chemistry. No simple model fits (even when reasonably well done, as this one is) can accomplish it. Neither, apparently, can predictive models.
rgb

JamesS

It seems to me there is a fundamental mental error being made if one considers the ECS as anything but the net end result. The author discusses the possibilities of a high ECS with negative forcings keeping temperatures lower, and a low ECS with positive forcings keeping temperatures elevated. I see the ECS as the final result of all the forcings, negative and positive, on the global temperature. If the temperature and CO2 observations suggest a value of 1.8, then that is the true value. Period. The end.
In a complex system like the climate, wouldn’t all of the myriad variables and forcings mingle to determine how the climate reacts to increased CO2, and then that would BE the ECS? Maybe I’m seeing it wrong, and I would certainly appreciate seeing where I’ve made my error.

Greig

The problem with this analysis is that it assumes that all warming is caused by CO2, which is obviously wrong when viewing the HADCRUT plot. There is clearly a natural component which caused warming in the early 1900s, and some cooling from 1940-1970. It is faulty logic to make an assumption that is wrong, and then declare that if there is a natural component then the situation is even better. In fact, the addition of a natural component that suggests a higher ECS is not a good thing, because we don’t know what the future natural drivers of the climate are. There may in the future be natural warming added to CO2 forcing.
So the error here is that we are suggesting we know something when in fact we don’t. We face an unquantified future risk (it may be bad, it may not be). When we acknowledge this uncertainty, it is wrong to be panicking and declaring that we face certain doom unless we dismantle our energy technology base, but fooling ourselves into believing everything is OK is also self-deceit.
I would encourage the author of this essay to do the analysis again for a range of assumptions about natural climate change vs CO2 forcing and see what results. Instead of pinning down a value for ECS, I suggest it would probably result in a wide range of values. But I believe it would be a worthwhile venture, if only to show the impact of assumptions about natural vs human-induced change in this debate.

RichardLH

“The big difference with this study is that it is strictly based on the observed data. There are no models involved and only one assumption – that the longer period variation in temperature is driven by CO2 only.”
Well there MUST be some natural variability in there as well. So the figure could be lower (or higher) than that quoted.
http://i29.photobucket.com/albums/c274/richardlinsleyhood/Fig8HadCrutGISSRSSandUAHGlobalAnnualAnomalies-Aligned1979-2013withGaussianlowpassandSavitzky-Golay15yearfilters_zps670ad950.png

1. To calculate ECS you have to include OHC (or assume that OHC is zero). In other words, ECS implies that the system has reached equilibrium. Since the system has not, you need an estimate for delta OHC.
2. If you do not include delta OHC, then you are estimating something closer to TCR. TCR is roughly 1.3 to 2.2, so your estimate is in line with this.
Next, to do the estimate properly you need all forcings. Use all forcings to give you lambda, the system response to all forcing; from lambda you can calculate the sensitivity to CO2 doubling.
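
A minimal sketch of that last step: converting a fitted climate response lambda (°C per W/m²) into a CO2-doubling sensitivity, using the commonly cited forcing of about 3.7 W/m² per doubling (5.35·ln 2). The lambda value shown is purely illustrative.

```python
import math

F_2X = 5.35 * math.log(2)          # ~3.71 W/m^2 forcing per CO2 doubling

def sensitivity_from_lambda(lam):
    """Convert a system response lam (degC per W/m^2) to degC per CO2 doubling."""
    return lam * F_2X

print(round(sensitivity_from_lambda(0.5), 2))   # lam = 0.5 -> ~1.85 degC/doubling
```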

DonV

I concur with what Willis stated, but I propose you take your simple analysis a little further. Show us a graph of the deviation of the “actual” temperature record from the “1.8” calculated record, then calculate the simple statistical values that determine the “cause/effect” certainty of your “model”. Second, you need to turn the whole concept on its head and ask the question: does the temperature record cause the CO2 change instead of vice versa? Others have proposed and documented with a fair degree of certainty that the integral of the actual temperature trends predicts the current CO2 values.
IMHO temperature drives the CO2 increase, and the burning of fossil fuels is dwarfed by simple outgassing from the integral effect of accumulated ocean warming. More importantly than that, you are falling into the trap of arguing about the “noise” when in fact on any given day/month/year temperature changes by far more than 1 or 2 degrees in spite of a relatively CONSTANT CO2. There is NO temperature signal at Mauna Loa that matches the annual cyclical CO2 concentration signal!

timetochooseagain

There are a number of factors of unknown magnitude that render any attempt to derive the right value of sensitivity in the manner done here essentially impossible.
To begin with, a proper model must recognize that the real Earth has thermal inertia. This means, at the very least, one needs to use a differential equation of the form:
T = sensitivity*Forcing - response_time*dT/dt
The second problem is that one needs to recognize that more forcings other than CO2 act on the temperature record. These include volcanic eruptions, variations in solar brightness, other greenhouse gases (such as methane, CFCs, N2O, etc), dynamically induced non feedback variations in cloud cover, sulphates, black carbon, land use change, and many, many more factors, most of which are highly uncertain.
The third problem is uncertainty of the temperature record itself-how much of the change is real versus due to data biases?
The fourth problem is non linearity of sensitivity-that is, df/dT, where f represents the rate of radiative heat loss, is not a constant, as various effects can increase or decrease the change of rate with temperature at higher temperatures.
All of these problems make attempting to estimate the sensitivity in this way pretty much a pointless exercise. Something that would make more sense would be to attempt to estimate the magnitude of the feedback response, which overcomes every problem except the fourth.
Of course, if you do that, you’re going to get an answer that’s like a third what you’re getting here. But given all the problems with this approach, that’s not that surprising.
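
A minimal sketch of the kind of lagged, one-box model described above, using a simple forward-Euler step of one year; the time constant and anchoring values are illustrative assumptions, and a single time constant is itself a simplification.

```python
import math

def lagged_response(co2, ecs, tau_years, base_year=1850, base_anomaly=-0.374):
    """One-box model: T relaxes toward ECS*log2(C/C0) with time constant tau_years.

    Equivalent to T = sensitivity*forcing - tau*dT/dt, integrated with a
    1-year Euler step over a {year: ppm} CO2 series (tau of a few years or more).
    """
    c0 = co2[base_year]
    t = 0.0
    record = {}
    for yr in sorted(y for y in co2 if y >= base_year):
        equilibrium = ecs * math.log(co2[yr] / c0) / math.log(2)
        t += (equilibrium - t) / tau_years        # relax toward equilibrium
        record[yr] = base_anomaly + t
    return record
```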

Stevek

The most significant evidence we have of low sensitivity is the pause. The alarmists need the heat to be in the ocean. They need it, or they know it is game over.
If we have perfect measurements from satellites of the input and output heat radiation budget, then we will know whether the heat is in the ocean.
My guess is that there is a big negative feedback mechanism we do not fully understand. There is some type of release valve, or throttle as Willis says. It has to do with the water cycle or wind, in my opinion.

Willis Eschenbach

Here you go, it’s the secret of climate that we’ve searched for so long …

SO … I can use that to calculate the climate sensitivity of the relationship. Just like the head post, I’ve used the log of the underlying data (to base 2, as in the head post).
Only problem?
The red line is not CO2. It’s the CPI, the Consumer Price Index, since 1850 …
Jeff, I hope you can see that this type of “match up the curves” is not all that diagnostic …
w.
PS: If you truly want to do this kind of analysis, you need to use a lagging equation on your forcings, and you need to include all known forcings. The problem is, most forcings we have no clue about for the 1800’s … so we end up with error bars (which you’ve neglected) from floor to ceiling.

Jeff, what is your uncertainty estimate, plus/minus degrees C ?
Also what you are estimating is the transient climate response since it will take additional time for the oceans to adjust to changes in temperature.

Alex Hamilton

Continuing from my comment at 2:16pm, the inevitable conclusion is that it is not greenhouse gases that are raising the surface temperature by 33 degrees or whatever, but the fact that the thermal profile is already established by the force of gravity acting at the molecular level on all solids, liquids and gases. So the “lapse rate” is already there, and indeed we see it in the atmospheres of other planets as well, even where no significant solar radiation penetrates.
In fact, because the “dry” lapse rate is steeper, and that is what would evolve spontaneously in a pure nitrogen and oxygen atmosphere, and because we know that the wet adiabatic lapse rate is less steep than the dry one, it is obvious that the surface temperature is not as high because of these greenhouse gases. Carbon dioxide (being one molecule in about 2,500 other molecules) has very little effect, but whatever effect it does have would thus be very minor cooling.
I don’t care what you think you can deduce from whatever apparent correlation you think you can demonstrate from historical data, there is no valid physics which points to carbon dioxide warming.

You are looking at transient climate response (TCR), not equilibrium climate sensitivity (ECS). The IPCC AR4 TCR numbers are 1.5 to 2.8 C per doubling of CO2, with a mean estimate of 2.1. In the AR5, the average TCR across CMIP5 models was 1.8 C per doubling. So technically your result ends up being exactly the same as the models :-p

Mosher says
“‘In general, those who are supportive of the catastrophic hypothesis reach their conclusion based on global climate model output.”
Wrong. Hansen for example relies on Paleo data.”
Really Mosh? No one has ever shown any DATA that demonstrates that CAGW is real. That is the problem; if there were DATA that demonstrated that, there would be no issue whatsoever with us skeptics, except perhaps on how to address/solve it. CAGW currently exists ONLY in the models. Hansen does not rely on Paleo data to prove CAGW. Perhaps he uses paleo data, sticks it into his MODEL, and voila! … CAGW.
Also, if you are mentioning Paleo data because of the recent paper: I scanned through the paper and saw no graphical presentation of any of the temperature data they were fitting the models to. Is that because all of the paleo temperature data series being investigated were “hockey sticks” and they are trying to “hide” that fact as best they can?

Pat Frank

Granting the analysis, the result is that 1.8 C is the upper limit of climate sensitivity, not the most likely value. The reason is that the analysis assumes that all the warming since 1850 is due to CO2. Enter any other source of warming, the fraction of warming due to CO2 decreases below 1.0, and climate sensitivity is less than 1.8 C.
It’s not the equilibrium climate sensitivity you’re working here with, by the way, but the transient climate sensitivity. Equilibrium climate sensitivity is determined by the final temperature state reached after the GHG emissions stop and atmospheric [CO2] has become constant (no longer increasing). Transient CS is the immediate increase in air temperature in response to steadily increasing CO2.
That all said, there is still zero evidence that any of the warming since 1880 is due to increased atmospheric CO2.

“The problem with this analysis is that it assumes that all warming is caused by CO2, which is obviously wrong when viewing the HADCRUT plot.”
its also wrong given what we know about other GHG forcings.
all that said, he almost has all of the pieces.. many others have done similar efforts.
they are used by the IPCC.
There are 3 sources of estimates
A) Paleo estimates ( LGM typically)
B) Observation estimates (like this one and Nic Lewis)
C) Models.
Note: most folks put higher weights on A and B. Hansen, for example, argues that C is the least reliable.
This effort falls in the B class.
It’s a start, but the author would do well to read all the papers that have done similar estimates. Once upon a time Nic Lewis did this. Rather than working from ignorance, he read the science. He found some areas that needed improvement. He improved known approaches. He came up with lowered estimates. He testified in front of parliament.
Here is a lesson. If you want to argue that sensitivity as a metric makes no sense or makes unwarranted assumptions, nobody is going to listen to you. That’s voice-in-the-wilderness stuff. You are outside the tent pissing in.
If you read the science, find assumptions, mistakes, etc., and come up with improvements, then you can make a difference.
Your choice: stay on the outside and make no difference, or work from the inside and improve. Simple choice; you are free to do either.

timetochooseagain

@Zeke Hausfather – No, he’s not looking at that either. He’s not looking at anything. There are a number of reasons this isn’t even an estimate of the “transient response”.
But as usual the “let’s push the number higher” crowd wants people to listen only to their argument that whenever anyone gets an answer they don’t like, the truth can only be higher. It’s got to be higher. Because the real answer is ~3 and you just know it.
Guess what. You’re wrong. Really badly wrong.

DocMartyn

I got 1.71 using a similar approach.
http://judithcurry.com/2013/05/16/docmartyns-estimate-of-climate-sensitivity-and-forecast-of-future-global-temperatures/
I actually think that this methodology is rather good at resolving the lag between transient and ‘equilibrium’ climate sensitivity. The inflection around the ’50s in the Keeling Curve should be reflected in the line shape of the temperature record, which it is if you assume no lag. You can play with any lag you like, but the warming and the pause screw up any lag >12 months.

MarkW

Haven’t read it all the way through yet, but the analysis seems to be assuming that all of the warming in recent decades is the result of CO2.

RichardLH

Jeff: Do not despair. There are wriggles in the data (mustn’t call them cycles) that cannot be CO2 related, so we need to take that into account as well.
http://i29.photobucket.com/albums/c274/richardlinsleyhood/Fig8HadCrutGISSRSSandUAHGlobalAnnualAnomalies-Aligned1979-2013withGaussianlowpassandSavitzky-Golay15yearfilters_zps670ad950.png
And the future (LOWESS style) looks downwards!

RichardLH

MarkW says:
February 13, 2014 at 3:30 pm
“Haven’t read it all the way through yet, but the analysis seems to be assuming that all of the warming in recent decades is the result of CO2.”
You’re not suggesting that we need to flatten the CO2 line still further because there might be some natural variability in there as well are you? You know, like what the IPCC says there is?

RichardLH

Zeke Hausfather says:
February 13, 2014 at 3:14 pm
“You are looking at transient climate response (TCR), not equilibrium climate sensitivity (ECS). The IPCC AR4 TCR numbers are 1.5 to 2.8 C per doubling of CO2, with a mean estimate of 2.1”
Remind me again why we are tiptoeing down the dotted line drawn by Scenario C?
http://i29.photobucket.com/albums/c274/richardlinsleyhood/HansenUpdated_zpsb8693b6e.png

RichardLH

Pat Frank says:
February 13, 2014 at 3:16 pm
“That all said, there is still zero [hard] evidence that any of the warming since 1880 is due to increased atmospheric CO2.”
+1

Jim Cripwell

Let me support what Pat Frank writes. You write: “I want to re-iterate the assumption of this hypothesis, which is also the assumption of the catastrophists position, that all longer term temperature change is driven by changes in CO2.”
There is nothing wrong with this as long as you state, explicitly, that what you are estimating is the maximum value for climate sensitivity. If all of the observed rise in temperature is due to natural causes, and not CO2, then the value of climate sensitivity is 0 C

RichardLH

Jim Cripwell says:
February 13, 2014 at 4:00 pm
“There is nothing wrong with this as long as you state, explicitly, that what you are estimating is the maximum value for climate sensitivity. If all of the observed rise in temperature is due to natural causes, and not CO2, then the value of climate sensitivity is 0 C”
But that’s heresy! CO2 MUST be the cause. I mean…….

DocMartyn

“Steven Mosher
its also wrong given what we know about other GHG forcings.
all that said, he almost has all of the pieces.. many others have done similar efforts.
they are used by the IPCC.
There are 3 sources of estimates
A) Paleo estimates ( LGM typically)”
I am quite happy for them to use the ice core CO2/temperature record to calculate ECS, as long as they use the atmospheric dust levels as a proxy for aerosol forcing. Given that dust levels change by three orders of magnitude from warm to cold ages, with 1000 times more atmospheric dust in the ice ages than in the warmest parts of the record, they don’t use them.
Any reconstruction that ignores the dust levels is completely and utterly bogus; but then you knew that, Mosher.

Robert of Texas

Seems to me the point is to put an upper limit on sensitivity – not that the value is likely. If the upper limit (using ridiculous assumptions all in favor of CAGW) is less than 2C, then the models are falsified (again).
So if the point was to actually compute the sensitivity, the approach is too simple and inaccurate and would have to identify and categorize all of the forcings. If the point was merely to show that 2C+ sensitivity is not supported by the data, the approach seems to work (at least for me).
I do like the simple approach. I also understand it isn’t the same as computing the sensitivity; it’s a way of putting a boundary on it.

Bill Illis

Mosher says “Hansen for example relies on Paleo data.”
Let’s take the last glacial maximum and Hansen’s estimates based on that (and he actually wrote a paper on it). Temps were -5.0C lower, CO2 was at 185 ppm. Hansen should have calculated climate sensitivity of 8.3C per doubling based on those numbers. But he came up with 3.0C per doubling. How did he manage that? Only two possible explanations. He is very bad at the math of global warming or he faked up the numbers.
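
For reference, the arithmetic behind that 8.3 figure, assuming a pre-industrial level near 280 ppm (a value the comment does not state):

```python
import math

delta_t = 5.0                                        # stated LGM cooling, degC
doublings = math.log(280.0 / 185.0) / math.log(2)    # ~0.60 doublings of CO2
print(round(delta_t / doublings, 1))                 # ~8.4 degC/doubling, close to the 8.3 quoted
```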

RichardLH

Steven Mosher:
“Hansen for example relies on Paleo data.”
And he was SO right about how all this would play out wasn’t he?
http://i29.photobucket.com/albums/c274/richardlinsleyhood/HansenUpdated_zpsb8693b6e.png
What was Scenario C again? Why would we follow that dotted line?

Steve Case

Here’s one
http://i56.tinypic.com/f3tlb6.jpg
I did a few years back.
You can also compare CO2 to major league home runs and get a nice fit.
And I’ve done the CO2 to HADCRUT4 comparison; it goes something like this:
1850 – 1878: CO2 up less than 1 ppm, temp up 0.4 deg (28 yrs)
1878 – 1911: CO2 up 5 ppm, temp down -0.4 deg (33 yrs)
1911 – 1944: CO2 up 15 ppm, temp up 0.7 deg (33 yrs)
1944 – 1977: CO2 up 30 ppm, temp down -0.4 deg (33 yrs)
1977 – 2010: CO2 up 60 ppm, temp up 0.8 deg (33 yrs)
Looks like CO2 doesn’t have much to do with it, and some sort of 66-year cycle seems to have a lot more.
Personally, I think the feedbacks are negative and climate sensitivity is less than 1.2 deg Celsius per doubling of CO2.

Leonard Lane

Thank you for your research and publication.
I feel that the HADCRUT and other temperature records, including the satellite data, have been so grossly adjusted (reducing warming before CO2 started its rapid increase and increasing warming as atmospheric CO2 levels rose) that they are false. This dishonest and criminal tampering with the data, to suggest global warming that just never happened, means that accurate, true measured temperatures with which to compute global temperature simply do not exist.
Thus, we have no measured data to compare with modeling results. Unless something like another little ice age (which I hope does not occur) cools to the extent that future data cannot be adjusted upward without being obviously and criminally altered, we are stuck with adjusted data.
The solution to the problem seems to be to elect politicians who will disavow global warming, defund those who claim it is real (as happened in Australia, at least at the federal level), and refocus our research on real problems. If we did not fund any more dishonest “global warming, climate change, …” scientists, that might partially solve the problem.
Engineers must often seek professional registration before they engineer projects, and registered engineers are accountable for their work. MDs are also held accountable for their work via malpractice lawsuits, fines, etc., and they too must be certified as professionals. Perhaps it is time to make scientists and lawyers accountable for the honesty and quality of their work.

RichardLH

Steve Case says:
February 13, 2014 at 4:25 pm
“Looks like CO2 doesn’t have much to do with it and some sort of 66 year cycle seems to have a lot more.”
The data says you’re right.
http://i29.photobucket.com/albums/c274/richardlinsleyhood/Fig8HadCrutGISSRSSandUAHGlobalAnnualAnomalies-Aligned1979-2013withGaussianlowpassandSavitzky-Golay15yearfilters_zps670ad950.png

Alex Hamilton says: February 13, 2014 at 3:13 pm
In fact, because the “dry” lapse rate is steeper, and that is what would evolve spontaneously in a pure nitrogen and oxygen atmosphere, and because we know that the wet adiabatic lapse rate [with water vapor] is less steep than the dry one, it is obvious that the surface temperature is not as high because of these greenhouse gases. Carbon dioxide (being one molecule in about 2,500 other molecules) has very little effect, but whatever effect it does have would thus be very minor cooling.
Absolutely, great comment.
If only Mosher et al would read & understand Dr. Hans Jelbring’s paper
http://ruby.fgcu.edu/courses/twimberley/EnviroPhilo/FunctionOfMass.pdf

Paul_K

Jeff L,
Please don’t take my comments as too negative, because I think it is always good for people to test stuff for themselves, and I am honestly not trying to discourage you, but…
That said, your analysis is flawed in a number of different ways: (1) you cannot fit a memoryless formula, which relates equilibrium temperature change to forcing, to real-world data that should reflect transient temperature change against forcing; (2) you are assuming that all of the temperature change is due to CO2; and (3) you are ignoring the spurious nature of your final correlation/prediction, which should be evident to you if you examine the cyclic nature of your error function.