Claim: How the IPCC arrived at climate sensitivity of about 3 deg C instead of 1.2 deg C.

UPDATE from Girma: “My title should have been ‘How to arrive at IPCC’s climate sensitivity estimate’ instead of the original”

Guest essay by Girma Orssengo, PhD

1) IPCC’s 0.2 deg C/decade warming rate gives a change in temperature of dT = 0.6 deg C in 30 years

IPCC:

“Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.”

Source: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html

2) The HadCRUT4 global mean surface temperature dataset shows a warming of 0.6 deg C from 1974 to 2004, as plotted in the following graph.

[Figure: HadCRUT4 global mean surface temperature, 1974–2004, with linear trend]

Source: http://www.woodfortrees.org/plot/hadcrut4gl/from:1974/to:2004/trend/plot/hadcrut4gl/from:1974/to:2005/compress:12

3) From the following Mauna Loa record of atmospheric CO2 concentration, we have C1 = 330 ppm for 1974 and C2 = 378 ppm for 2004.

[Figure: Mauna Loa atmospheric CO2 concentration]

Source: http://www.woodfortrees.org/plot/esrl-co2/compress:12

Using the above data, the climate sensitivity (CS) can be calculated for the period from 1974 to 2004 using the following proportionality formula:

CS = (ln(2)/ln(C2/C1))*dT = (0.693/ln(378/330))*dT = (0.693/0.136)*dT = 5.1*dT

For change in temperature of dT = 0.6 deg C from 1974 to 2004, the above relation gives

CS = 5.1 * 0.6 = 3.1 deg C, which is IPCC’s estimate of climate sensitivity; it requires a warming rate of 0.2 deg C/decade.
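For readers who want to check the arithmetic, here is a minimal Python sketch of the proportionality formula above (the function and variable names are my own, not from the post):

```python
import math

def climate_sensitivity(dT, c1, c2):
    """Scale an observed temperature change dT (deg C), over a period in
    which CO2 rose from c1 to c2 (ppm), up to a full doubling of CO2,
    assuming temperature varies in proportion to ln(CO2)."""
    return math.log(2) / math.log(c2 / c1) * dT

# Steps 1-3: dT = 0.6 deg C between 1974 (330 ppm) and 2004 (378 ppm)
print(climate_sensitivity(0.6, 330, 378))  # 3.06, i.e. the article's ~3.1 deg C
```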

IPCC’s warming rate of 0.2 deg C/decade is not the underlying climate signal, because it includes the contribution from the warming phase of the multidecadal oscillation.

To remove the contribution of the roughly 60-year multidecadal oscillation, a least-squares trend over the full 60-year period from 1945 to 2004 is calculated, as shown in the following graph:

[Figure: HadCRUT4 with 60-year least-squares trend, 1945–2004]

Source: http://www.woodfortrees.org/plot/hadcrut4gl/from:1945/to:2004/trend/plot/hadcrut4gl/from:1945/to:2005/compress:12

This result gives a long-term warming rate of 0.08 deg C/decade. From this, for the three decades from 1974 to 2004, dT = 0.08 * 3 = 0.24 deg C.

Substituting dT = 0.24 deg C in the climate sensitivity equation for the period from 1974 to 2004 gives

CS = 5.1 * dT = 5.1 * 0.24 = 1.2 deg C.
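Reusing the sketch above, the trend-only figure follows from the same call:

```python
# Trend-only case: dT = 0.08 deg C/decade * 3 decades = 0.24 deg C
print(climate_sensitivity(0.24, 330, 378))  # ~1.2 deg C
```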

IPCC’s climate sensitivity of about 3 deg C is incorrect because it includes the warming rate due to the warming phase of the multidecadal oscillation. The true climate sensitivity is only about 1.2 deg C, which is identical to the climate sensitivity with net zero feedback, where the positive and negative climate feedbacks cancel each other.

Net positive climate feedback is not supported by the data.

UPDATE:

To respond to the comments, I have included the following graph

[Figure: long-term smoothed HadCRUT4 GMST with 1949–2005 least-squares trend]

Source: http://www.woodfortrees.org/plot/hadcrut4gl/mean:756/plot/hadcrut4gl/compress:12/from:1870/plot/hadcrut4gl/from:1974/to:2004/trend/plot/esrl-co2/scale:0.005/offset:-1.62/detrend:-0.1/plot/esrl-co2/scale:0.005/offset:-1.35/detrend:-0.1/plot/esrl-co2/scale:0.005/offset:-1.89/detrend:-0.1/plot/hadcrut4gl/mean:756/offset:-0.27/plot/hadcrut4gl/mean:756/offset:0.27/plot/hadcrut3sh/scale:0.00001/offset:2/from:1870/plot/hadcrut4gl/from:1949/to:2005/trend/offset:0.025/plot/hadcrut4gl/from:1949/to:2005/trend/offset:0.01

I have obtained a better estimate of the warming of the long-term smoothed GMST by using a least-squares trend from 1949 to 2005, as shown in the above graph; the least-squares trend coincides with the Secular GMST curve for the period from 1974 to 2005. In this case, the warming rate of the 1949–2005 least-squares trend is 0.09 deg C/decade.

This gives dT = 0.09 * 3 = 0.27 deg C, and the improved climate sensitivity estimate is

CS = 5.1 * 0.27 = 1.4 deg C.

That is an increase in Secular GMST of 1.4 deg C for doubling of CO2 based on the instrumental records.
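The updated figure again follows from the same sketch:

```python
# Updated case: dT = 0.09 deg C/decade * 3 decades = 0.27 deg C
print(climate_sensitivity(0.27, 330, 378))  # ~1.4 deg C
```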

172 Comments
Greg Goodman
May 19, 2013 12:53 pm

“Did we have some global reduction of CO2 leading into the LIA? I don’t think we did.”
We have trouble even getting it to exist in temperature by the time they’ve finished massaging the data but it would be worth looking at. I don’t see why the laws of physics would not have applied during that period.

Greg Goodman
May 19, 2013 1:33 pm

There are concerted efforts to censor the CO2 history to maintain the theme. Mainstream sources variously claim CO2 remained lower than recent levels for 20 ka, 200 ka, or 20 million years.
There probably is some data if you dig, but I’d expect it to be scrappy, with very poor time resolution, and noisy as hell.

blueice2hotsea
May 19, 2013 1:52 pm

Greg Goodman
May 19, 2013 at 12:47 pm
Why would I use annual means as data points? Because it doesn’t shift the graph by two years and it saves me pushing all those buttons for a three-year running mean.
So I re-did the graph anyway with the triple RM and guess what? It looks like this. It still has those 1990 peaks and 1992 dips.
I would like to take your word for it that something big happened in 1990 and Pinatubo was inconsequential. But I still need more.

Girma
May 19, 2013 2:25 pm

Willis
your statistical claims are … well … let me call them charmingly naive at best, and unintentionally misleading at worst,
So should I sit still because we don’t have enough data?
What counts is that the analysis gives you an EXCELLENT description of the observed data, as shown:
http://orssengo.com/GlobalWarming/GmstPatternOf20thCentury.png
Here is the equation:
T = 1.871*ln(CO2/320.09)
T is the simple fit to the 63-year moving average GMST and CO2 is the annual CO2 concentration in the atmosphere.
The equation for T since 1869 is
T = 0.5*t1*(year-1895)^2 + t2*(year-1895) + t3
where
t1 = 5.477*10^(-5) deg C/year^2
t2 = 2.990*10^(-3) deg C/year
t3 = -0.344 deg C
Here is the graph again
http://orssengo.com/GlobalWarming/GmstPatternOf20thCentury.png
If you cannot see that the above graph is an EXCELLENT description of the climate of the 20th century, I cannot help you. Let others judge whether the above graph is an accurate description of the climate of the 20th century.
It works. That is what counts.
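For readers who want to probe Girma’s claim, his two fitted expressions can be evaluated side by side. A minimal sketch, with parameter values copied verbatim from the comment (the 369 ppm figure for the year 2000 is my approximation of the Mauna Loa annual mean, not Girma’s number):

```python
import math

def T_from_co2(co2_ppm):
    # Girma's log fit to the 63-year moving average GMST
    return 1.871 * math.log(co2_ppm / 320.09)

def T_from_year(year):
    # Girma's quadratic fit in time
    t1 = 5.477e-5   # deg C/year^2
    t2 = 2.990e-3   # deg C/year
    t3 = -0.344     # deg C
    return 0.5 * t1 * (year - 1895)**2 + t2 * (year - 1895) + t3

# Compare the two parameterisations at the year 2000 (~369 ppm CO2)
print(T_from_year(2000), T_from_co2(369))  # both come out near 0.27 deg C
```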

Greg Goodman
May 19, 2013 2:39 pm

“Because it doesn’t shift the graph by two years.” Well, it probably logs it at 1990 instead of 1990.5, so you have a 6-month shift. Not sure why you are talking about a 2-year shift.
http://climategrog.wordpress.com/?attachment_id=233
I really don’t see a huge difference between what you reproduced at WTF.org and my plot, except that you don’t have ICOADS and HadSST2 seems to be offset slightly lower.
“I would like to take your word for it that something big happened in 1990 and Pinatubo was inconsequential. But I still need more.”
Err, what I said was:
This shows there’s no significant drop from El Chicon or Mt Pinatubo. Nothing that stands out from the usual ups and downs.
I’ve just zoomed in on my plot and the CO2 drop starts at 1990.55. ICOADS SST is somewhat smoother in form but seems about the same.
Pinatubo erupted in June 1991.
The following dip is remarkable only in being smaller than average in the record. HadSST2 goes a bit deeper but still nothing more than average.
I’m not sure what you are seeing that you think I am missing.
I would invite you to take Willis’ volcano test. Imagine someone gave you that data and asked you to point out when a major volcanic eruption happened. Would you be pointing to 1991?

Girma
May 19, 2013 4:37 pm

Willis
I agree with you that 25 years of data is not long enough for a strong conclusion.
However, what else can I do?
Don’t you think it is possible to estimate annual GMST within +/- 0.2 deg C for the next 20 years?

blueice2hotsea
May 19, 2013 4:57 pm

Greg Goodman
Yes. Pinatubo erupted at 1991.5. However, your graph dips dramatically in 1989. I am unable to reproduce that precursor dip at WFT. Can you?
re Willis’ volcano test: if the volcanic event released significant SO2, I might look for a strong negative acceleration in temperature. Like this.
Note the two strongest candidates are the troughs which occur in 1982 & 1992 – years which are coincident with El Chichón and Pinatubo, respectively. It may be 1991 or 1992. Close enough; there is a lag.

george e. smith
May 19, 2013 8:00 pm

“As a result of these two things, the fact that your R^2 is greater than 99% is MEANINGLESS. There’s not even enough data to calculate the statistical significance of the relationship between the two tiny datasets, much less determine its meaning.”
Well, long before you assign any significance to any R^2 or other artifact of statistical manipulation, there is the much more fundamental question of whether or not you actually have ANY valid data to apply your statistry to.
So unless your data sampling regimen (in two variables, space and time) conforms to the Nyquist criterion for sampling of band-limited continuous functions, you don’t even have data to masticate; it is simply noise. That, of course, is in-band noise, so no filtering process can remove it to recover a signal.
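A toy illustration of that point (mine, not george’s): a signal sampled below the Nyquist rate aliases to a lower frequency that is numerically indistinguishable from a genuine in-band signal.

```python
import numpy as np

# A 0.9 cycles/sample signal (Nyquist limit is 0.5 cycles/sample)
# aliases exactly onto a 0.1 cycles/sample signal.
n = np.arange(32)
aliased = np.cos(2 * np.pi * 0.9 * n)
genuine = np.cos(2 * np.pi * 0.1 * n)
print(np.allclose(aliased, genuine))  # True: in-band, no filter can separate them
```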

Greg Goodman
May 20, 2013 12:42 am

blueice2hotsea, GISTEMP does seem to dip later and deeper than SST and CRUTEM3.
Do the same thing with CRUTEM3 and there’s nothing for Mt Pinatubo, and the 1980 dip clearly precedes the eruption.
http://www.woodfortrees.org/plot/crutem3vsh/mean:12/mean:9/mean:7/from:1978/derivative/normalise
Since GISS have made several retrospective adjustments to their data, and Hansen also exaggerates volcanic cooling estimates, it may well be interesting to look into why.

Editor
May 20, 2013 12:45 am

Girma says:
May 19, 2013 at 2:25 pm

Willis

your statistical claims are … well … let me call them charmingly naive at best, and unintentionally misleading at worst,

So should I sit still because we don’t have enough data?

Well, yes, you should. Not exactly “sit still”, but “stop making unsupportable claims because you don’t have enough data”, because that’s what real scientists do.
They either wait for the data to accumulate until it reaches statistical significance, or they figure out some other analysis method that does give statistically significant results with the existing data.
What they don’t do is make unsustainable claims based on data plus analysis which shows NO STATISTICAL SIGNIFICANCE.
You go on to say:
Girma says:
May 19, 2013 at 4:37 pm

Willis
I agree with you that 25 years data is not long enough for strong conclusion.
However, what else can I do?

What you can do is try another analysis method. Using a sixty-three-year average on your data puts the autocorrelation through the roof, making your results statistically insignificant. But that’s a result of your method, not of the size of the dataset (163 years of monthly data, N = almost 2,000). That’s reasonable data. So use another method to establish what you are trying to show. You may notice, for example, that many of my graphs are scatter charts of some kind. And many of them use just the raw data itself, no processing of any kind. So you might experiment along those lines.
But each of the results must be treated with caution, and tested for significance allowing for autocorrelation. Because no matter how good it looks, the numbers have to pencil out.
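One standard way to “allow for autocorrelation”, for anyone who wants to try Willis’s suggestion: shrink the sample size by the lag-1 autocorrelation before testing significance. A sketch under an AR(1) assumption (this is the textbook Quenouille/Bartlett adjustment, not Willis’s own code):

```python
import numpy as np

def effective_n(x):
    """Effective sample size under AR(1): n_eff = n * (1 - r1) / (1 + r1),
    where r1 is the lag-1 autocorrelation of the series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return len(x) * (1 - r1) / (1 + r1)

# Heavy smoothing drives r1 toward 1 and n_eff toward zero, which is
# the point about the 63-year (756-month) moving average.
monthly = np.random.randn(1956)                     # ~163 years of monthly noise
smoothed = np.convolve(monthly, np.ones(756) / 756, mode='valid')
print(effective_n(monthly), effective_n(smoothed))  # ~1956 vs. close to zero
```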

Don’t you think it is possible to estimate annual GMST within +/- 0.2 deg C for the next 20 years?

I’d say not at present, not with any certainty. Oh, I suppose you could estimate it at something like 0.05° ± 0.2°C and you’d have a good chance of being right, but that’s just a crap shoot.
The problem is that climate is chaotic. This means that absent some very, very clever method we haven’t invented yet, it is inherently not predictable. Now it is possible that long-term climate prediction is a “boundary problem” as some have claimed … but I’ve never seen any evidence that that is the case.
In addition, we have evidence that the climate models, whose programmers do think it is a boundary problem, can’t predict 20 years out, they’ve been quite bad at projecting the future ever since climate stopped warming. This, of course, is because they are merely incrementing machines, reading the inputs (forcings) and doing a linear transform with a lag … and as a result, none of them predicted the current hiatus in the warming.
Regards,
w.

Greg Goodman
May 20, 2013 1:00 am

Girma: What counts is the analysis gives you an EXCELLENT description of the observed data as shown
Yes, cos+quadratic does give a reasonably good fit to that part of the data. That’s why I suggested it. However, there are also circa 21-year and 11-year cycles that will affect the residual you are fitting to CO2, as well as a circa 160-year component that Hadley processing removes.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/
I don’t follow what you are actually fitting to what, but it seems you are now comparing the quadratic to a linear fit (a two-point approximation to ln CO2).
As I pointed out above, ln CO2 is an additional radiative forcing; therefore you need to integrate it (or differentiate your T model to get dT/dt). Since ln CO2 is almost a straight line, its integral will be a quadratic. At least you will have something similar to fit.
I’m not endorsing that as a correct evaluation of CS, but if you want to go for a direct attribution as you intended in this exercise, that would seem to be the appropriate way to do it.
What you are currently doing does not make sense unless you assume that the climate system and the oceans are adjusting almost instantly to the new “forcing” and that dT is the change in equilibrium state. I don’t think you’ll get many backers for that idea on either side of the debate.
The simplest way would be to fit cos+lin to dT/dt and then regress that with ln CO2.
To do it accurately you should do the cos+lin OLS fit to unfiltered data.
That will affect your CS but I can’t guess in which direction.
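A sketch of the procedure Greg describes, using scipy; the 60-year period, the initial guesses, the centering year, and the array names are my assumptions for illustration, not part of his comment:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def cos_plus_lin(t, amp, phase, slope, const):
    # ~60-year oscillation plus a linear term
    return amp * np.cos(2 * np.pi * (t - phase) / 60.0) + slope * (t - 1950.0) + const

def fit_and_regress(t, temp, co2):
    """Fit cos+lin to dT/dt of the unfiltered data, then regress the
    fitted curve against ln(CO2), per Greg's suggested procedure."""
    dTdt = np.gradient(temp, t)   # derivative of the unfiltered temperature
    params, _ = curve_fit(cos_plus_lin, t, dTdt, p0=[0.01, 1940.0, 0.0, 0.0])
    return linregress(np.log(co2), cos_plus_lin(t, *params))

# t, temp, co2 would be aligned annual arrays, e.g. from HadCRUT4 and Mauna Loa.
```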

Editor
May 20, 2013 1:36 am

blueice2hotsea says:
May 19, 2013 at 4:57 pm

re Willis’ volcano test: if the volcanic event released significant SO2, I might look for a strong negative acceleration in temperature. Like this.
Note the two strongest candidates are the troughs which occur in 1982 & 1992 – years which are coincident with El Chichón and Pinatubo, respectively. It may be 1991 or 1992. Close enough; there is a lag.

I gotta say, that is tortured data. You’ve done a kind of pseudo-Gaussian averaging, then differentiated the resulting average.
It seems if anything you’re doing it backwards, that you should differentiate the actual data, rather than the smoothed data, and then smooth that. But the difference is fairly small.
In any case, here’s the underlying data, from the WFT site here. I’ve first differentiated the data, then smoothed it. You can see the result is the same as yours.

Now, I’ve marked the month of the El Chichon eruption in blue, and that of Pinatubo in red. After El Chichon, the dT/dt continued to increase. After six months, the dT/dt begins to fall … but not all that much before turning up again.
Regarding Pinatubo, it erupted near the end of a long, large, 18-month decline in dT/dt. After the eruption, dT/dt continued to decline somewhat, but not strongly, for another eight months, and then started to increase.
In neither case is there a change in trend from before to after the eruption.
So I’m sorry, Blue, but I don’t accept your claim that that is “close enough” to be evidence for a volcano effect.
w.

Greg Goodman
May 20, 2013 1:38 am

Willis: “In addition, we have evidence that the climate models, whose programmers do think it is a boundary problem, can’t predict 20 years out, they’ve been quite bad at projecting the future ever since climate stopped warming. This, of course, is because they are merely incrementing machines, reading the inputs (forcings) and doing a linear transform with a lag … and as a result, none of them predicted the current hiatus in the warming.”
I think the failure of current models is due to preconceived ideas being allowed to affect the models, not the modelling process itself.
Volcanic effects have been exaggerated, as I’ve discussed above, and you have said many times there is little evidence that the volcanic effect is anything near what the models produce. Either the volcanic input is wrong or the models fail to reproduce climate _insensitivity_ to changes in radiative forcing.
This allows them to put in hypothetical +ve feedbacks to CO2 and it ends up as garbage.
I don’t think the problem is inherent in the modelling process; it is more to do with personal biases (groupthink) being allowed to rig the models to produce “expected” outcomes.

Greg Goodman
May 20, 2013 3:59 am

Willis: It seems if anything you’re doing it backwards, that you should differentiate the actual data, rather than the smoothed data, and then smooth that. But the difference is fairly small.
Both kernel convolution filters and differentiation are linear operations. The result should be mathematically identical.
Tortured? Not really. The filter response is very similar to the Gaussian, but it is not “pseudo-Gaussian”; it is actually slightly better than the Gaussian at removing a fixed frequency like the annual cycle.
http://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/
Since we are looking in this case for a difference produced by a volcano, plotting the difference (i.e. what is done here) seems to be the most appropriate way to view it.
Far from being tortured, I would say that every step was justified and correctly executed, and nothing was superfluous or over-processed.
Anyway we are both agreed about what it shows.
Do the same with ICOADS SST and most of that trough disappears too.
http://climategrog.wordpress.com/?attachment_id=233
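For anyone who wants to reproduce the “triple RM” filter discussed in this thread, a minimal sketch: three cascaded running means, each window shrunk by a factor of about 1.2067 so the stages cancel each other’s negative side lobes (the factor is taken from Greg’s linked post; treat the details here as my reading of it):

```python
import numpy as np

def running_mean(x, window):
    # Centred boxcar average via convolution
    w = max(int(round(window)), 1)
    return np.convolve(x, np.ones(w) / w, mode='valid')

def triple_running_mean(x, window):
    """Cascade three running means, shrinking the window by ~1.2067 per
    stage, to suppress the side lobes of a single running mean."""
    r = 1.2067
    y = running_mean(x, window)
    y = running_mean(y, window / r)
    return running_mean(y, window / r ** 2)

# e.g. removing the annual cycle from monthly data:
# smoothed = triple_running_mean(monthly_series, 12)
```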

Girma
May 20, 2013 4:38 am

Thanks Willis for your response.

Girma
May 20, 2013 5:20 am

blueice2hotsea
You are open to suggestions and progressive improvement. Therefore your “back-of-the-envelope” does not offend me. (I think the title is a stretch.)
I agree now the title is a stretch.
A more appropriate title would have been:
“How to arrive at IPCC’s climate sensitivity estimate of about 3 deg C and a contrasting estimate of 1.2 deg C”

herkimer
May 20, 2013 5:34 am

I agree with Don Easterbrook’s comments. One cannot make a year’s forecast based on the trend of the summer months only. Similarly, one cannot make a century forecast by only looking at the last 20-30 warm years of a climate cycle that may be 100-120 years long. There are 60-year climate cycles and 100-120-year cycles. An analysis of the CET records going back to 1538 by Tony Brown in a previous thread showed regular temperature dips every 100-120 years. We are into a dip similar to those around 1890, 1780, 1670 and 1560. Don has been right all along in predicting global temperatures to drop, because he looked at the longer-term cycles that are apparent in the ice core records. If we want to predict 100 years ahead, we need to look at least 100 years back too. Natural variables that shape these longer-term cycles clearly seem to override the effects of CO2.

May 20, 2013 7:02 am

Y’all are wasting your time, running around in ever-decreasing circles, unless you include the millennial temperature cycle in any calculations; see the first post at
http://climatesense-norpag.blogspot.com
There is no consistent empirical relation between CO2 and temperature; you can select a time frame that will show robustly that CO2 is an Ice House Gas if you want to. To forecast future temperatures you will be more successful if you forget CO2 completely – it is an effect, not a cause.

Editor
May 20, 2013 9:13 am

Greg Goodman says:
May 20, 2013 at 3:59 am

Willis:

It seems if anything you’re doing it backwards, that you should differentiate the actual data, rather than the smoothed data, and then smooth that. But the difference is fairly small.

Both kernel convolution filters and differentiation are linear operations. The result should be mathematically identical.

Yes, you’re 100% right, my error. I made the assumption that since one of them was invertible and one was not, the order was important … but it’s not, the results are identical.
w.

John Tillman
May 20, 2013 10:30 am

No surprise that Arrhenius was also a proponent of ethnically-based eugenics & racism.

Greg Goodman
May 20, 2013 1:17 pm

Willis: Yes, you’re 100% right, my error. I made the assumption that since one of them was invertible and one was not, the order was important … but it’s not, the results are identical.
Your comment was right in suggesting filtering is best done last. Good principle. Just in this case it did not matter.
Interestingly, both the first difference (which is a trivial kernel convolution) and a Gaussian can be done in one hit. Just use the analytical derivative of the Gaussian to make the weighting kernel.
The two-point difference is an approximation to the real derivative of the data. By doing an analytical diff of the Gaussian, the operation is the same as doing a Gaussian on the true diff of the data, not on the two-point approximation.
Yeah, I know, I was as sceptical as hell when I first read it, but you can check it out. It’s a neat technique.
How much difference it makes will depend upon the nature of the data.

Greg Goodman
May 20, 2013 1:21 pm

The difference between the two methods is subtle. If you use the two-point difference of each point in the Gaussian kernel to build the derivative kernel, it is identical to the two-step method. The gain comes from doing the analytical diff of the Gaussian and sampling that to make the kernel.
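A sketch of the technique Greg describes: sample the analytical derivative of a Gaussian to build the convolution kernel, so smoothing and differentiation happen in a single pass (my implementation of the idea, not Greg’s code):

```python
import numpy as np

def gaussian_derivative_kernel(sigma):
    """Sample g'(t) = -t / sigma^2 * g(t) for a unit-area Gaussian g,
    over +/- 4 sigma."""
    half = int(np.ceil(4 * sigma))
    t = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-t ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return -t / sigma ** 2 * g

def smoothed_derivative(x, sigma):
    # One convolution gives the Gaussian-smoothed derivative of x (per sample
    # step): equivalent to smoothing the true derivative, not a two-point diff.
    return np.convolve(x, gaussian_derivative_kernel(sigma), mode='valid')

# Sanity check: a ramp of slope 1 should give ~1 everywhere
print(smoothed_derivative(np.arange(100.0), sigma=3))
```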
