Claim: How the IPCC arrived at climate sensitivity of about 3 deg C instead of 1.2 deg C.

UPDATE from Girma: “My title should have been ‘How to arrive at IPCC’s climate sensitivity estimate’ instead of the original”

Guest essay by Girma Orssengo, PhD

1) IPCC’s 0.2 deg C/decade warming rate gives a change in temperature of dT = 0.6 deg C in 30 years

IPCC:

“Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.”

Source: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html

2) The HadCRUT4 global mean surface temperature dataset shows a warming of 0.6 deg C from 1974 to 2004, as the following graph illustrates.

[Graph: HadCRUT4 with least-squares trend, 1974–2004 – see source link below]

Source: http://www.woodfortrees.org/plot/hadcrut4gl/from:1974/to:2004/trend/plot/hadcrut4gl/from:1974/to:2005/compress:12

3) From the following Mauna Loa data for CO2 concentration in the atmosphere, we have C1 = 330 ppm in 1974 and C2 = 378 ppm in 2004.

[Graph: Mauna Loa atmospheric CO2 concentration – see source link below]

Source: http://www.woodfortrees.org/plot/esrl-co2/compress:12

Using the above data, the climate sensitivity (CS) for the period from 1974 to 2004 can be calculated using the following proportionality formula:

CS = (ln(2)/ln(C2/C1)) * dT = (0.693/ln(378/330)) * dT = (0.693/0.136) * dT = 5.1 * dT

For change in temperature of dT = 0.6 deg C from 1974 to 2004, the above relation gives

CS = 5.1 * 0.6 = 3.1 deg C, which matches IPCC’s estimate of climate sensitivity and corresponds to a warming rate of 0.2 deg C/decade.
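For readers who want to check the arithmetic, here is a minimal Python sketch of the proportionality formula above (the function name and default concentrations are mine, taken from the values quoted in this post):

import math

def climate_sensitivity(dT, c1=330.0, c2=378.0):
    # Scale the observed warming dT over a period to a doubling of CO2:
    # CS = (ln(2) / ln(C2/C1)) * dT
    return math.log(2.0) / math.log(c2 / c1) * dT

print(round(climate_sensitivity(0.6), 1))  # 3.1, the value derived above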

IPCC’s warming rate of 0.2 deg C/decade is not the underlying climate signal, because it includes the warming due to the warming phase of the multidecadal oscillation.

To remove the warming contribution of the multidecadal oscillation, which has a cycle of about 60 years, the least-squares trend over the 60-year period from 1945 to 2004 is calculated, as shown in the following graph:

[Graph: HadCRUT4 with least-squares trend, 1945–2004 – see source link below]

Source: http://www.woodfortrees.org/plot/hadcrut4gl/from:1945/to:2004/trend/plot/hadcrut4gl/from:1945/to:2005/compress:12

This result gives a long-term warming rate of 0.08 deg C/decade. From this, for the three decades from 1974 to 2004, dT = 0.08 * 3 = 0.24 deg C.

Substituting dT = 0.24 deg C into the climate sensitivity equation for the period from 1974 to 2004 gives

CS = 5.1 * dT = 5.1 * 0.24 = 1.2 deg C.
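Reusing the climate_sensitivity sketch above reproduces this detrended figure; it is the same formula re-evaluated, not a separate method:

print(round(climate_sensitivity(0.24), 1))  # 1.2, using the 60-year trend’s dT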

IPCC’s climate sensitivity of about 3 deg C is incorrect because it includes the warming due to the warming phase of the multidecadal oscillation. The true climate sensitivity is only about 1.2 deg C, which is identical to the climate sensitivity with net-zero feedback, where the positive and negative climate feedbacks cancel each other.

Positive feedback of the climate is not supported by the data.

UPDATE:

To respond to the comments, I have included the following graph:

[Graph: long-term smoothed GMST (secular trend) with the 1949–2005 least-squares trend – see source link below]

Source: http://www.woodfortrees.org/plot/hadcrut4gl/mean:756/plot/hadcrut4gl/compress:12/from:1870/plot/hadcrut4gl/from:1974/to:2004/trend/plot/esrl-co2/scale:0.005/offset:-1.62/detrend:-0.1/plot/esrl-co2/scale:0.005/offset:-1.35/detrend:-0.1/plot/esrl-co2/scale:0.005/offset:-1.89/detrend:-0.1/plot/hadcrut4gl/mean:756/offset:-0.27/plot/hadcrut4gl/mean:756/offset:0.27/plot/hadcrut3sh/scale:0.00001/offset:2/from:1870/plot/hadcrut4gl/from:1949/to:2005/trend/offset:0.025/plot/hadcrut4gl/from:1949/to:2005/trend/offset:0.01

I have obtained a better estimate of the warming of the long-term smoothed GMST using the least-squares trend from 1949 to 2005, as shown in the above graph; the least-squares trend coincides with the secular GMST curve over the period from 1974 to 2005. In this case, the warming rate of the least-squares trend for the period from 1949 to 2005 is 0.09 deg C/decade.

This gives dT = 0.09 * 3 = 0.27 deg C, and the improved climate sensitivity estimate is

CS = 5.1 * 0.27 = 1.4 deg C.

That is, an increase in secular GMST of 1.4 deg C for a doubling of CO2, based on the instrumental record.

richard telford
May 18, 2013 5:35 am

This is not “How the IPCC arrived at climate sensitivity of about 3 deg C instead of 1.2 deg C.”
If you want to know how the IPCC arrived at climate sensitivity of about 3 deg C, I recommend reading the IPCC report: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6.html

Dodgy Geezer
May 18, 2013 5:51 am

Don’t think of publishing this as a paper, if you want to keep your academic career….

Girma
May 18, 2013 5:56 am

The multi-decadal oscillation of the GMST has been described by Swanson et al [1]:
“Temperatures reached a relative maximum around 1940, cooled until the mid 1970s, and have warmed from that point to the present. Radiative forcings due to solar variations, volcanoes, and aerosols have often been invoked as explanations for this non-monotonic variation (4). However, it is possible that long-term natural variability, rooted in changes in the ocean circulation, underlies much of this variability over multiple decades (8–12).”
After removing the multi-decadal oscillation, Wu et al have reported their result for the long-term warming rate [2]:
“…the rapidity of the warming in the late twentieth century was a result of concurrence of a secular warming trend and the warming phase of a multidecadal (~65-year period) oscillatory variation and we estimated the contribution of the former to be about 0.08 deg C per decade since ~1980.”
This long-term warming rate result of 0.08 deg C/decade by Wu et al has been confirmed by Tung and Zhou [3]:
“The underlying net anthropogenic warming rate in the industrial era is found to have been steady since 1910 at 0.07–0.08 °C/decade, with superimposed AMO-related ups and downs that included the early 20th century warming, the cooling of the 1960s and 1970s, the accelerated warming of the 1980s and 1990s, and the recent slowing of the warming rates.”
References
[1] Swanson et al. (2009)
Long-term natural variability and 20th century climate change
http://www.pnas.org/content/106/38/16120.full.pdf+html
[2] Wu et al. (2011)
On the time-varying trend in global-mean surface temperature
http://bit.ly/10ry70o
[3] Tung and Zhou (2012)
Using data to attribute episodes of warming and cooling in instrumental records
http://www.pnas.org/content/110/6/2058

Graham
May 18, 2013 5:57 am

All sensitivity figures are wrong, because there is no linkage between CO2 and temperature.
REPLY: John O’Sullivan, leader of the Principia cult, there’s no need to hide behind a fake email address. We always know who you are here.
MX record about cutestudio.net exists.
Connection succeeded to postoffice.omnis.com SMTP.
220 postoffice.omnis.com ESMTP Postfix – (mail-hub-e.omnis.com)
> HELO verify-email.org
250 mail-hub-e.omnis.com
> MAIL FROM:
=250 2.1.0 Ok
> RCPT TO:
=550 5.1.7 : Sender address rejected: undeliverable address: host mail.verify-email.org[66.36.231.122] said: 553 5.3.0 … No such user here (in reply to RCPT TO command)
– Anthony

May 18, 2013 6:01 am

“Positive feedback of the climate is not supported by the data.” Which is why it has taken more than $100 billion in research grants, and over two decades of overt government pressure, to “balance” the bad data with the reluctant physical reality. OUR ‘outcome based education’ system has achieved the desired low information voter, with near total scientific illiteracy. OUR advocacy media has marketed the fable. OUR grant system has bribed the more than willing university system and sustainable product industry. OUR EPA extortion network has provided the muscle, combining to create the BIGGEST BUNCO OPERATION of all time. Though far from the founders’ intentions, you must admit OUR crime syndicate government is the most elaborate in all history. Unfortunately for the public, you are not the beneficiaries of OUR racketeering.

Kelvin Vaughan
May 18, 2013 6:06 am

The IPCC are trying to conserve the world’s natural resources.
See who is exploiting the world’s natural resources in the fourth paragraph:
http://en.wikipedia.org/wiki/Natural_resource

Baa Humbug
May 18, 2013 6:11 am

Well, at least you’re heading in the right direction. Alert me when you get to a climate sensitivity of zero and I’ll pay attention.

lgl
May 18, 2013 6:14 am

Then add some ocean heat and ice sheet adjustment and you probably end up around 1.5 C.

PMHinSC
May 18, 2013 6:22 am

While interesting, this analysis assumes a cause and effect which has not been empirically established. “Correlation does not prove causation”, and feedback from water vapor, clouds, etc. is ignored. Although the physics is solid, it is incomplete, and anthropogenic vs natural is a long way from being resolved. This is more information to present to the jury, but we are a long way from final summation, much less a verdict.

Ed_B
May 18, 2013 6:32 am

“Although the physics is solid”
—————–
The world is not a simple physics equation. It is dynamic, not static. Thunderheads take heat and dump it above 90% of the CO2 “blanket”. Clouds form to reflect heat from the sun.
There is no way that current models capture the complexity of the heat trapping/dispersal of the earth. That’s why 100% of them failed to predict the current lack of warming.

Jim Cripwell
May 18, 2013 6:33 am

Baa Humbug says”Alert me when you get to climate sensitivity of zero and I’ll pay attention”
You may be interested in my extremely simplistic approach to this issue. Since no-one has measured a CO2 signal in any modern temperature/time graph, from standard singal to nosie ratio physics, there is a strong indication that the climate sensitivity of CO2 is indistinguishable from zero.

Eliza
May 18, 2013 6:33 am

Unfortunately the 97% climate scientist consensus claim by crazy Cook is being promoted all over the MSM, so big fail for skeptic sites I’m afraid (just type global warming into Google News). Also Anthony, I think you have a case of blatant misrepresentation/passing off as you/copyright breach at the following site and should take them to court very quick http://vvattsupwiththat.blogspot.com/

REPLY:
Oh I’m familiar with Dr. Russell Seitz and his parodies, but he’s not worth pursuing (much less paying attention to) for two reasons.
1. Streisand effect
2. Fair use law allows for use of materials specifically for the purpose of parody/criticism.
Waste of time, really, to pay any attention to him. – Anthony

Mike jarosz
May 18, 2013 6:35 am

Teddy Roosevelt, a true progressive. Big business bad, big government good.

Girma
May 18, 2013 6:44 am

Latif and Keenlyside (2010)
A Perspective on Decadal Climate Variability and Predictability
The global surface air temperature record of the last 150 years is characterized by a long-term warming trend, with strong multidecadal variability superimposed. Similar multidecadal variability is also seen in other (societal important) parameters such as Sahel rainfall or Atlantic hurricane activity.
http://oceanrep.geomar.de/8744/1/Latif.pdf
Latif
Uncertainty in climate change projections
To some extent, we need to “ignore” the natural fluctuations, if we want to “see” the human influence on climate. Had forecasters extrapolated the mid-century warming into the future, they would have predicted far more warming than actually occurred. Likewise, the subsequent cooling trend, if used as the basis for a long-range forecast, could have erroneously supported the idea of a rapidly approaching ice age.
The scientific challenge is to quantify the anthropogenic signal in the presence of the background climate noise. The detection of the anthropogenic climate signal thus requires at least the analysis of long records, because we can be easily fooled by the short-term natural fluctuations, and we need to understand their dynamics to better estimate the noise level.

http://oceanrep.geomar.de/9199/1/JGE.pdf

May 18, 2013 6:45 am

To: Kelvin Vaughan
OK, I read the 4th paragraph, which says:
“There is much debate worldwide over natural resource allocations, this is partly due to increasing scarcity (depletion of resources) but also because the exportation of natural resources is the basis for many economies (particularly for developed nations such as Australia).”
We sometimes forget that all of the world’s natural resources are controlled by national governments, except for privately owned mineral rights on privately owned land in the United States. Ironically, it is the individual owners of the small portion of private mineral interests in the United States who have turned the energy market upside down with fracked natural gas, drastically reducing energy costs for everyone on the globe.
If you want to change the way natural resources are used globally, look to leadership in national governments. If they are elected they have the power to do what you want, even though they would like for you to think it is private energy sector interests who are to blame for everything. If they are not elected by you, you might want to think about changing your government to some form of elected representative democracy. The last place you want to look to for solutions is the United Nations and the IPCC, which is about as far as you can get from a democracy. Think of the UN as a World Chamber of Commerce, originally designed and still controlled by the world’s dictators and the world’s financial elite — the 1% of the 1% of the 1%.

Don Easterbrook
May 18, 2013 6:51 am

What ever happened to ’cause-and-effect’ in science? Just because temperature went up and CO2 also went up over the same period doesn’t make a basis for calculating how much temperature will go up as CO2 increases! This whole analysis is based on the false premise that temperature is a function of CO2. Why don’t we do the same analysis for the period 1945 to 1977 and calculate how much COOLING occurs with increase in CO2? And why don’t we calculate for the period 1880 to 1915 how much COOLING occurs with increase in CO2? And why don’t we calculate for the Maunder Minimum how much COOLING occurs with increase in CO2? You get the idea–the notion that temperature is a function of CO2 is invalid until you first show a cause-and-effect relationship between the two!

Russ R.
May 18, 2013 6:57 am

“To remove the warming rate due to the multidecadal oscillation of about 60 years cycle, least squares trend of 60 years period from 1945 to 2004 is calculated.”
Shouldn’t the trend over a complete 60 year cycle be zero?

May 18, 2013 6:57 am

“CS = 5.1 * 0.6 = 3.1 deg C, which is IPCC’s estimate of climate sensitivity and requires a warming rate of 0.2 deg C/decade.”
Is this actually how the IPCC came up with 3.1 °C / doubling ?????
If it is, all I have to say is: Wow, that is some fine cherry picking!
How about we cherry pick from the 50’s to late 70’s ?? – then we’ll get a negative sensitivity :))
This short period is the steepest period of warming & obviously will give the highest sensitivity (and cause the greatest alarm). If you take a similar approach, with no de-trending, over the industrial age, you come up with 1.8° C / doubling

May 18, 2013 7:04 am

It’s pretty easy to see by looking at the two earlier rises (1850–1880) (1910–1945), and comparing them to the (1975–2000) rise, that the MOST you get is 1C per doubling. I don’t really like the analysis method presented here, though.

Richard111
May 18, 2013 7:09 am

Okay, just posting a link puts you in the spam box. Trying again.
New Discovery: NASA Study Proves Carbon Dioxide Cools Atmosphere

Ryan
May 18, 2013 7:23 am

This is not how the IPCC arrived at 3C. They give the reasons on their website for anyone who wants to look them up.

May 18, 2013 7:24 am

The SSTs peaked in about 2003 and the earth has been in a cooling trend since then. The peak was not only a peak in the 60 year cycle but also quite likely a peak in the solar millennial cycle, which must be included in any calculations. It is not possible to calculate the effect of anthropogenic CO2 until we know within closer limits what the natural variation is. For example, if you look at the ice core data for the Holocene, and if you believe that CO2 is the climate driver, you would have to conclude that on a scale of thousands of years CO2 was an Ice House gas, not a greenhouse gas. For the data and an estimate of the coming cooling for the next few hundred years check the post “Climate Forecasting for Britain’s Seven Alarmist Scientists and for UK Politicians” at
http://climatesense-norpag.blogspot.com

Dr. Deanster
May 18, 2013 7:29 am

The reality is, we can’t really know what the climate sensitivity is, nor can we test it anywhere outside of a lab, which is not reality!!! The REASON: the temperature metrics are all screwed up with adjustments.
NOTE … in this post, and every other post, this or that model or theory is compared to HadCRUT, or GISS, or NOAA, all of which are contaminated and adjusted. Occasionally they will throw in the satellite metrics, but they don’t go back far enough to measure anything outside of the late 20th century warming, which is only data collected over a single phase of climate forcings: high solar input, a single phase of the PDO, AMO, etc. I think that Anthony, and any other person versed in science, would agree that a mere 35 years is not a sufficient sample to determine the climate sensitivity for longer periods of time, like centuries.
Because of this very real fact, the real climate sensitivity to CO2 is whatever it is in the lab, which I think I’ve seen put at about 1C. ALL other changes in the earth’s temperature, either up or down, are due to nature. PERIOD. Further, as stated many times by others, CO2 sensitivity is logarithmic; thus, we will have to reach 1000 ppm to increase the temp by 2C, and even there, the probability of the actual temp metric reaching that point is, IMO, very low, because nature does have feedbacks.

Editor
May 18, 2013 7:49 am

Girma, thanks for the post. You say:

IPCC’s warming rate of 0.2 deg C/decade is not the climate signal as it includes the warming rate due to the warming phase of the multidecadal oscillation.
To remove the warming rate due to the multidecadal oscillation of about 60 years cycle, least squares trend of 60 years period from 1945 to 2004 is calculated as shown in the following link:

This result gives a long-term warming rate of 0.08 deg C/decade. From this, for the three decades from 1974 to 2004, dT = 0.08* 3 = 0.24 deg C.

And just what is the “multidecadal oscillation” when it is at home? How are you “removing the warming rate due to the multidecadal oscillation” by calculating its linear trend? And how have you determined that we are in the “warming phase of the multidecadal oscillation”? Exactly when did said warming phase start, and when will it end?
I’m not sure whether the problem is in your descriptions or in your procedures, Girma, but I can’t follow your logic there. Like someone said above … don’t quit your day job quite yet …
w.

Louis LeBlanc
May 18, 2013 8:02 am

Re Jim Cripwell — shouldn’t that be “singal to nosie raito” ??

Pamela Gray
May 18, 2013 8:10 am

Two comments:
1. The assumption that a complete quasi-60 year oscillation just happens to be within the temperature range being studied is silly. Did you capture one? And on what observational data do you stand regarding that oscillation capture?
2. Furthermore, the measurement error related to any OLS trendline through such highly variable data must be calculated and reported. Easily done: Excel it.

Pamela Gray
May 18, 2013 8:11 am

Willis you beat me to it. Damn.

May 18, 2013 8:12 am

In 1896 Arrhenius estimated that a halving of CO2 would decrease temperatures by 4-5°C and a doubling of CO2 would cause a temperature rise of 5-6°C. In his 1906 publication, Arrhenius adjusted the value downwards to 1.6°C (including water vapor feedback: 2.1°C). Recent estimates from IPCC (2007) say this value (the Climate Sensitivity) is likely to be between 2 and 4.5°C. But Sherwood Idso in 1998 calculated the Climate Sensitivity to be 0.4°C, and more recently Richard Lindzen at 0.5°C. Roy Spencer calculated 1.3°C in 2011.

Greg Goodman
May 18, 2013 8:14 am

Willis, I think what he’s doing is assuming that the period of natural variation is (exactly) 60 years and thus by looking at (any) 60-year period you should average out the cycle and get a more representative slope.
It’s a pretty crude way of fitting a 60y cosine + linear trend model to the data.
Crude but effective.
I would not say it was that rigorous and it has a lot of obvious weaknesses but it is a way to show what I have called “cosine warming”:
http://climategrog.wordpress.com/?attachment_id=209

Richard M
May 18, 2013 8:14 am

Climate sensitivity is not a constant. It is variable and dependent upon other factors. That is due to the chaotic nature of climate. When near an attractor state it will be small. The further away it gets the higher it will be for any forcing.

Gary Pearse
May 18, 2013 8:16 am

Don Easterbrook says:
May 18, 2013 at 6:51 am
“…..Why don’t we do the same analysis for the period 1945 to 1977 and calculate how much COOLING occurs with increase in CO2? And why don’t we calculate for the period 1880 to 1915 how much COOLING occurs with increase in CO2?…”
Don, a good point. I believe if there is a CO2 signal and it is concentrated in the last 60 years, then the downslopes of cooling periods should be reduced by the effect and the upslopes increased. Nowhere have I seen even this simple analysis, because it looks like the slopes before and after the main CO2 effect remain about the same.

Girma
May 18, 2013 8:20 am

Willis
“And just what is the “multidecadal oscillation” when it is at home? How are you “removing the warming rate due to the multidecadal oscillation” by calculating its linear trend? And how have you determined that we are in the “warming phase of the multidecadal oscillation”? Exactly when did said warming phase start, and when will it end?”
Willis
Here is the 21-years moving average GMST showing the multidecadal oscillation since 1880.
http://bit.ly/15WbhXW
The above result shows during the period from 1975 to 2005 the multidecadal oscillation (the 21-years moving average) was during its warming phase. It also shows the period for one complete oscillation is about 60 deg C. As this oscillation is natural, it must be removed from climate sensitivity calculation. This is done by calculating the trend for the secular long-term trend that is monotonic, not based on the 30-years least squares trend. For this, there are published results:
“…the rapidity of the warming in the late twentieth century was a result of concurrence of a secular warming trend and the warming phase of a multidecadal (~65-year period) oscillatory variation and we estimated the contribution of the former to be about 0.08 deg C per decade since ~1980.”
http://bit.ly/10ry70o
The secular (long-term, non-periodic) warming rate of 0.08 deg C/decade above is identical to my approximate estimate given in my essay.
When the warming phase starts and ends can be seen as the 21-years moving average GMST crosses the secular GMST curve.

May 18, 2013 8:21 am

According to Nicola Scafetta:
“The climate system is clearly characterized by a 60-year cycle. We have seen statistically compatible periods of cooling during 1880-1910, 1940-1970, 2000-(2030 ?) and warming during 1850-1880, 1910-1940, 1970-2000.”
[See HadCRUT4 Global monthly mean temperature anomalies 1850-2013.09 (°C) + linear trends + 13 months mean (WoodForTrees – Observatorio ARVAL)]:
http://www.woodfortrees.org/plot/hadcrut3gl/from:1850/to:2012.84/plot/hadcrut3gl/from:1850/to:1880/trend/plot/hadcrut3gl/from:1880/to:1910/trend/plot/hadcrut3gl/from:1910/to:1940/trend/plot/hadcrut3gl/from:1940/to:1970/trend/plot/hadcrut3gl/from:1970/to:2001/trend/plot/hadcrut3gl/from:2001/to:2012.84/trend/plot/hadcrut3gl/mean:13

Greg Goodman
May 18, 2013 8:25 am

“Shouldn’t the trend over a complete 60 year cycle be zero?” Unfortunately not. The mean, yes; the “trend”, no. Again see the warming cosine.
http://climategrog.wordpress.com/?attachment_id=209
Girma’s calculation certainly will not give the correct result, and it is a misrepresentation to say that is how the IPCC does it, though their focusing on the last half of the 20th c. is not far from the same thing.
It could be said that this is a parody of IPCC rather than a correction.
I would certainly hesitate to present such a trivial and dubious calculation with a PhD nailed to the masthead.
BTW, I’ve never found out what Girma has a PhD in, maybe he would like to tell us.

Greg Goodman
May 18, 2013 8:39 am

Looking at derivatives of SST and CO2 is much more informative.
http://climategrog.wordpress.com/?attachment_id=223
Most of the inter-annual change is explained by temperature causing CO2 out-gassing, not CO2 changing temperature.
http://climategrog.wordpress.com/?attachment_id=233
The short-term proportionality between temperature and the rate of change of CO2 shows that 1 degree of deviation from equilibrium conditions causes 8 ppm/year of CO2 out-gassing from the oceans.
It also shows that there is a very rapid response time, which pretty much destroys the IPCC’s claim of a 100–1000 year residency of anthropogenic CO2.
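For illustration only, here is a toy numpy integration of the relationship Greg describes. The 8 ppm/year per deg C figure is his; the temperature anomaly path is invented purely to show how such a proportionality accumulates into a CO2 rise:

import numpy as np

k = 8.0                                            # ppm/year per deg C above equilibrium (Greg’s figure)
years = np.arange(30)
dT = 0.3 + 0.5 * np.sin(2 * np.pi * years / 10.0)  # hypothetical anomaly path, deg C
co2 = 330.0 + np.cumsum(k * dT)                    # yearly Euler step, ppm
print(round(co2[-1], 1))                           # ~402 ppm; the net rise is set by the mean anomaly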

Girma
May 18, 2013 8:41 am

Greg Goodman
Girma’s calculation certainly will not give the correct result
The fact is IPCC’s climate sensitivity estimate is 3 deg C and the no-feedback estimate is 1.2 deg C, and you can calculate these values directly from the data as described in my essay.

Matthew R Marler
May 18, 2013 8:46 am

Girma,
In your third figure you display a straight line fit to curvy data; why not add the model fit on which you base your calculation, that is, the linear trend plus the 60 yr. oscillation?
Your title is misleading. What you have is a different estimate of climate sensitivity to CO2, not “how” the IPCC derived their estimate.

kadaka (KD Knoebel)
May 18, 2013 8:47 am

From richard telford on May 18, 2013 at 5:35 am:

If you want to know how the IPCC arrived at climate sensitivity of about 3 deg C, I recommend reading the IPCC report: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6.html

But that section speaks of equilibrium climate sensitivity, while Dr. Orssengo is calculating dynamic values.
If you’re going to clutter up our comments with your theoretical debunkings, please do our host Anthony the courtesy of at least addressing the same topic.

May 18, 2013 8:52 am

Girma Orssengo rightly demonstrates that one cannot determine climate sensitivity empirically from observed changes in CO2 concentration and in global mean surface temperature unless one either studies periods that are multiples of ~60 years to cancel the transient effects of the warming and cooling phases of the Pacific and related ocean oscillations or studies periods centered on a phase-transition in the ocean oscillations.
However, one should hesitate to draw conclusions empirically over as short a period as 60 years: for it is the cry-babies’ contention that temperature feedbacks operate over various timescales out to 1000-3000 years (Solomon et al., 2009). This was one of the reasons why they sneered at Lindzen & Choi (2009, 2011), who studied sensitivity empirically over about 25 years.

Girma
May 18, 2013 8:53 am

You all are asking me the same question.
My result is based on the following climate pattern of the 20th century:
http://bit.ly/15FKX0n

Greg Goodman
May 18, 2013 8:54 am

Another interesting feature of the rate of change plot is that d/dt(CO2) has also taken a “pause” since 1997:
http://climategrog.wordpress.com/?attachment_id=223
i.e. CO2 is still rising but no longer accelerating. So if CO2 were driving temperature we would still be seeing rising temps.
On the other hand, since it is the rate of change of CO2 that is affected by temperature, this period confirms it is temp driving CO2 and not the opposite.
There is more to be gained from the long term relationship that shows a ratio of 4 rather than 8 between dT/dt and d2/dt2(CO2) .
That should tell us more about the proportion of emissions that do/don’t get absorbed and possible inverse effect on temps. This is probably a proportional+differential relationship that needs more thought and study.
I don’t think this is going to be explained by hacking about on WTF.org.

Don Easterbrook
May 18, 2013 8:55 am

The problem with ‘averaging out’ the 60 year cycles is that it still doesn’t validate temperature as a function of CO2. We’ve been thawing out from the Little Ice Age for about 5 centuries, and most of that warming occurred BEFORE the sharp rise in CO2 emissions after 1945. The GISP2 Greenland ice core shows that temperatures in Greenland from 10,000 years ago to 1500 years ago were virtually all 1 to 2.5 degrees C warmer than present with ‘normal’ CO2 levels. These temperatures are for Greenland, not global, but they correlate very well with fluctuation of the world’s glaciers so they reflect global temperatures. How can anyone possibly defend calculating a global temp/CO2 function when temperature is clearly NOT a function of CO2 for 8,500 years? Such calculations are total nonsense!

Girma
May 18, 2013 8:59 am

Don Easterbrook
I am not saying CO2 is causing the warming. I believe it is the warming that is causing the increase in CO2 concentration, as the Vostok ice cores show. The CO2 concentration will drop when the temperature falls.

Master_Of_Puppets
May 18, 2013 9:04 am

It’s the drugs !

Yvan Dutil
May 18, 2013 9:05 am

This is a fundamentally wrong calculation. You must take account of the thermal inertia. Otherwise, you largely underestimate the sensitivity. This is first-year heat transfer physics, by the way.
In addition, fitting a long-period oscillation will remove some energy from the linear component even if the true periodic signal is zero, because both functions are not necessarily orthogonal over such a period of time. Basic Fourier analysis. First year again.

Kelvin Vaughan
May 18, 2013 9:11 am

plakat1 says:
May 18, 2013 at 6:45 am
The point was that the IPCC are making things look bad to stop us ordinary people using up the world’s resources. The super rich want it all for themselves.

Greg Goodman
May 18, 2013 9:13 am

Monckton of Brenchley says: “This was one of the reasons why they sneered at Lindzen & Choi (2009, 2011), who studied sensitivity empirically over about 25 years.”
LC2011 looked at very short snippets of time within that period. That probably tells us something useful but should not be read as being representative of longer periods.
I think the method was rigorous but care is needed in interpreting what it shows.

Greg Goodman
May 18, 2013 9:15 am

Girma says:
May 18, 2013 at 8:59 am
Don Easterbrook
I am not saying CO2 is causing the warming. I believe it is the warming that is causing the increase in CO2 concentration, as the Vostok ice cores show. The CO2 concentration will drop when the temperature falls.
… then you should be looking at the derivative of CO2 not plain ppm 😉

Greg Goodman
May 18, 2013 9:21 am

Girma says:
Greg Goodman
Girma’s calculation certainly will not give the correct result
The fact is IPCC’s climate sensitivity estimate is 3 deg C and the non-feed back estimate is 1.2 deg C, and you can calculate these values directly from the data as described in my essay.
===
That does not mean that is how the IPCC got there, neither does it mean the second result gives the correct sensitivity.
You are still including ‘cosine warming’.

Editor
May 18, 2013 9:29 am

Pamela Gray says:
May 18, 2013 at 8:11 am

Willis you beat me to it. Damn.

It’s the time zone …
w.

Girma
May 18, 2013 9:32 am

Greg Goodman
That does not mean that is how IPCC got there , neither does it mean the second result gives the correct sensitivity.
The data and the formulas I used are the same ones used by the IPCC. The chance of arriving at the same value using the same formula and data but a different method is almost nil.

Greg Goodman
May 18, 2013 9:47 am

The formula you used for CO2 ‘forcing’, yes. That is not how they calculate sensitivity. That comes from a number of sources, mainly climate models. I don’t think they are right, just that it is incorrect to say what you presented is how they do it.
Though what you present is a fairly good caricature.
You have not commented on the fact that your method is still including “cosine warming”:
http://climategrog.wordpress.com/?attachment_id=209
If you go from peak to peak you may get what you were intending to get, and it will be a lot less.
That may be interesting.

Greg Goodman
May 18, 2013 9:49 am

Sorry, you’re not far off peak to peak in your third plot, I was seeing the dates on the early ones.

Greg Goodman
May 18, 2013 9:55 am

Girma says:
My result is based the following climate pattern of the 20th century:
http://bit.ly/15FKX0n
What result does your calculation give for the previous cycle 1875-1940?

Girma
May 18, 2013 9:55 am

Greg Goodman
Here are published results for the long-term warming rate:
“…the rapidity of the warming in the late twentieth century was a result of concurrence of a secular warming trend and the warming phase of a multidecadal (~65-year period) oscillatory variation and we estimated the contribution of the former to be about 0.08 deg C per decade since ~1980.”
http://bit.ly/10ry70o
It is IDENTICAL to my value of 0.08 deg C/decade long-term warming trend.

James Evans
May 18, 2013 10:19 am

I’m with Girma. I don’t know why everyone else has got their panties in a bunch.
The Team thought that they had the whole “CO2 causes terrifying warming” thing sorted because the warming from about 1970 to about 2000 matched their calculations of sensitivity (assuming that temps would be flatlining if it wasn’t for the increase in CO2.)
But if you look at the whole pattern from about 1880, there’s a very obvious oscillation:
http://www.woodfortrees.org/plot/gistemp/plot/hadcrut4gl/from:1880
If you look at the overall trend then clearly the Team’s sensitivity sums were nonsense.

Editor
May 18, 2013 10:29 am

Girma says:
May 18, 2013 at 8:20 am

Willis

“And just what is the “multidecadal oscillation” when it is at home? How are you “removing the warming rate due to the multidecadal oscillation” by calculating its linear trend? And how have you determined that we are in the “warming phase of the multidecadal oscillation”? Exactly when did said warming phase start, and when will it end?”

Willis
Here is the 21-years moving average GMST showing the multidecadal oscillation since 1880.
http://bit.ly/15WbhXW
The above result shows during the period from 1975 to 2005 the multidecadal oscillation (the 21-years moving average) was during its warming phase. It also shows the period for one complete oscillation is about 60 deg C [presumably you meant “years”]. As this oscillation is natural, it must be removed from climate sensitivity calculation. This is done by calculating the trend for the secular long-term trend that is monotonic, not based on the 30-years least squares trend. For this, there are published results:

First, let me start with the fact that if you have one peak and two troughs, you don’t have enough data to say whether there is or is not any “natural oscillation”. You’d need several cycles, perhaps a large number, to establish that.
Next, you claim that the oscillation has a 60 year period. Your graph in support of your claim shows the bottom of one “cycle” at about 1905, the top of the “cycle” at about 1950, and the next bottom at about 1965 …
So the first upwards half of your “cycle” seems to last about 45 years, and the next half of the cycle seems to last 15 years … say what? Yes, those add up to 60 years, but it hardly looks like an “oscillation” …
Also, you’re using an averaging period (756 months) that is half of the length of your dataset … not recommended.
Next, you’ve spliced in CO2 data on top of the temperature data, and in a very similar color, which is misleading at best. Plus, for unknown reasons, you’ve left the first 19 years of HadCRUT data off your graph. Here’s your graph, back to the start of the HadCRUT data …

The early data is already beginning to show problems with your theory, so let’s look at a longer dataset …
Here’s the BEST data, treated the same way. You can see that from 1850 on the two are quite close … but the earlier data illustrates why your method is … well, let me call it far less than optimal …

This is a totally typical situation in climate science. You have what looks like a solid cycle, it lasts for a couple of complete swings … and then it fades out and is replaced by something entirely different. You’ve been suckered by one of mother nature’s oldest tricks, my friend …
Overall, I’d say you have not even come close to establishing either of your two claims in the title. You haven’t said a word about how the IPCC got 3°C for the sensitivity, and you haven’t provided any justification, either theoretical or observational, for removing what you admit to be part of the natural signal. Handwaving about a hypothetical 60 year cycle won’t do it …
w.

Greg Goodman
May 18, 2013 10:38 am

Like I said, crude but effective. If you can back it up with a more rigorous study, that makes it more credible. That backs up your 0.08 K/decade.
The IPCC also bring in all sorts of exaggerated volcanic cooling, parametrised water vapour feedbacks and suchlike as well, and it all goes wrong.
You are assuming no such effects in the way you calculate your sensitivity. So they are not doing the same thing but ignoring the cyclic variation. Saying they do is inaccurate, as a few people have pointed out.
You also stated you think temp causes CO2 but carry on calculating CO2 causing temps.
Since the mechanism is very different and depends on d/dt CO2, these are not interchangeable.
I think you need to look at that.

kadaka (KD Knoebel)
May 18, 2013 10:50 am

Greg Goodman on May 18, 2013 at 9:15 am posted:
http://www.woodfortrees.org/plot/hadsst2gl/from:1958/mean:12/mean:9/mean:7/derivative/normalise/plot/esrl-co2/derivative:0.003/derivative:-1.03/mean:12/mean:9/mean:7/normalise
Greg, what the heck are you trying to show there?
You have two “derivative” calls for the CO₂, but the function takes no value, read the WFT Help page. Leaving off the values gets the same result, which is “subtract each sample from one before it”, which you do twice:
http://www.woodfortrees.org/plot/hadsst2gl/from:1958/mean:12/mean:9/mean:7/derivative/normalise/plot/esrl-co2/derivative/derivative/mean:12/mean:9/mean:7/normalise
Then for both, you convert the months into 12 month running means, then convert those running means to 9 month running means, then convert the 9 month running means of the 12 month running means into 7 month running means.
WHY?

Girma
May 18, 2013 10:51 am

Willis
I have not claimed a constant pattern before 1869. As the climate forcing changes, the pattern also changes.
If you include the time before 1869, the residual will have a trend.
Here is a simple fit to the 21-years and 63-years moving average since 1869 with R^2 = 0.998 and R^2 = 1, respectively.
http://orssengo.com/GlobalWarming/ClimateSensitivityOfOnePointThreeDegC.png
The above graph shows clearly the climate pattern since 1869.
144 years is sufficient to establish climate relationship during that time.

Chad Wozniak
May 18, 2013 10:59 am

@Joseph Olson –
Yes, the AGW scam is the biggest fraud in history, making Bernie Madoff’s peculations look like pocket change by comparison.
When the bubble bursts, I’d like to see the AGWers forced to reimburse the taxpayers for all the money paid to them in so-called “research” (translate: propaganda) grants and all the monies spent subsidizing environmentally as well as economically disastrous “renewable energy” schemes like bird-killing wind turbines and habitat-destroying solar arrays.

Greg Goodman
May 18, 2013 11:35 am

Girma, look at the rate of change of HadCRUT and BEST.
http://www.woodfortrees.org/plot/hadcrut3vgl/mean:252/mean:188/mean:141/derivative/plot/best/mean:252/mean:188/mean:141/derivative/plot/esrl-co2/scale:0.003/offset:-1.03/detrend:-0.22/from:1982/derivative/mean:12/mean:9/mean:7
The HadCRUT that you chose to use has an acceleration that would not be explained by the ln(CO2) formula.
You have to have a credible model for your data. What you are trying to do is fit a cos+linear and use the linear for your sensitivity. There is a significant upward curve in your plot and that is the acceleration we see here. Your cos plus linear would have a level cosine as its derivative.
BEST shows a deceleration and a strong 30y as well as a 60y, then goes sky high (probably residual UHI).
The whole idea of linear CO2 AGW does not fit the data, not even with a 60 year cosine. I agree having one is a whole lot better than not, but the model is just wrong. No sense in trying attribution until there is a reasonable model.
Now have a look at CO2 (I’ve filtered out the 12m seasonal). There’s a lot of action in there too and guess what … when you look, it actually shows it’s caused by temperature and not the other way around.
You actually suggested this was the case in one of your comments but it’s not what your post is about. I again suggest you try to take a look.
http://climategrog.wordpress.com/?attachment_id=233

Greg Goodman
May 18, 2013 11:40 am

kadaka, don’t worry about the params to diff (like you said, they have no effect); I trimmed down Girma’s plot and forgot to remove them.
The triple running mean is to provide a filter that does not mess up the data.
Each step is reduced by a factor of 1.3371, or as near as you can get.
If you don’t, this sort of thing can happen, where the runny mean inverts peaks in the data.
http://www.woodfortrees.org/plot/rss/from:1980/plot/rss/from:1980/mean:60/plot/rss/from:1980/mean:30/mean:22/mean:17
Running means are a disaster; the triple is quite a good low-pass filter.
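To make the filter concrete, here is a minimal numpy sketch of such a triple running mean (the function names and test signal are mine; the 30/22/17 windows follow Greg’s WoodForTrees plots, each reduced by roughly his 1.3371 factor):

import numpy as np

def running_mean(x, w):
    # Centered running mean with window w; shortens the series by w-1 points.
    return np.convolve(x, np.ones(w) / w, mode="valid")

def triple_running_mean(x, w1=30, w2=22, w3=17):
    # Three successive running means, each window about 1.3371x smaller.
    return running_mean(running_mean(running_mean(x, w1), w2), w3)

# A 12-month cycle leaks through (with inverted peaks) in a single running mean,
# but is almost entirely suppressed by the triple pass:
t = np.arange(360)
cycle = np.sin(2 * np.pi * t / 12.0)
print(np.abs(running_mean(cycle, 30)).max())     # ~0.13 leaks through
print(np.abs(triple_running_mean(cycle)).max())  # ~0.0025, strongly suppressed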

Greg Goodman
May 18, 2013 11:42 am

BTW, it’s not “converting” anything, it is three successive filters.

May 18, 2013 11:45 am

Natural variability doesn’t appear to have a fixed periodicity. As far as one can see from the CET and the N. Atlantic geological records, oscillations vary in length between 46 and 65 years; however, the average since 1650 happens to be ~60 years: http://www.vukcevic.talktalk.net/NVb.htm
From 1890 I found periods of 52, 62, and 65 years, so in a short-ish stretch of data FT analysis may well identify something around 60 yr.

Bill Illis
May 18, 2013 11:48 am

It’s the feedback assumptions/theories that take the 1.1C/1.2C to a total of 3.0C per doubling.
Here is a little table showing how it is actually calculated. The feedbacks are over 2.2 W/m2 per 1.0C increase in temperatures. 1.1C of doubled CO2/GHG causes a feedback increase of 8% in water vapor, a 2% decline in cloud optical thickness, and reduced albedo as ice melts. The first round of these feedbacks produces another increase in temperatures of 0.7C.
That 0.7C increase in temperature from feedbacks produces another round of feedbacks increasing the temperature another 0.45C. Then there is round #3 of feedbacks on the feedbacks, and so on and so on. Pretty soon, the tropopause has increased its forcing by 11.7 W/m2 and temps here are 3.0C higher. The theory assumes the surface will increase by something like the tropopause (although there is a small increase in the lapse rate counted in the feedback numbers already).
http://s2.postimg.org/xkjw426dl/Stefan_Boltzmann_3_0_C_doubling.png
However, if the feedbacks are much less than this 2.2 W/m2/K, we get much less warming. And here is something rarely talked about: if the feedbacks are much more than this 2.2 W/m2/K, there is actually a runaway greenhouse effect. The actual feedbacks only need to be something like 3.0 W/m2/K and we are at 11C of warming per doubling. Bump them up to 4.0 W/m2/K or so and the oceans boil off eventually.
The feedback numbers have been carefully chosen to keep the total at 3.0C per doubling.
So far, water vapor is coming in at less than 50% of that predicted, and cloud optical thickness is probably zero as far as we can tell.
http://s13.postimg.org/7dk4nfh6f/Hadcrut4_vs_TCWV_Scatter.png
http://s9.postimg.org/y8o23z2rz/UAH_RSS_vs_TCWV_Scatter.png
Feedbacks strength versus how much warming we get.
http://s24.postimg.org/7jjj2kcgl/Feedback_Strength.png
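Bill’s round-by-round arithmetic is a geometric series, and a short hedged Python sketch makes the convergence (and the runaway case) explicit. The 1.1C no-feedback response and the per-round gain implied by his 0.7C and 0.45C figures are his numbers, not an independent calculation:

def total_warming(dT0=1.1, gain=0.7 / 1.1, rounds=60):
    # Sum dT0 * (1 + g + g^2 + ...): each feedback round rescales the previous one.
    return sum(dT0 * gain ** n for n in range(rounds))

print(total_warming())        # ~3.0 C per doubling, matching Bill’s total
print(1.1 / (1 - 0.7 / 1.1))  # the same series in closed form
# With gain >= 1 the series diverges: the runaway greenhouse case Bill mentions.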

Editor
May 18, 2013 11:49 am

A couple of commenters have said that isn’t how the IPCC arrived at its ECS, and to read the IPCC report to see how they really did it. Well, I have read the IPCC report (AR4) – some parts of it many times – and it is clear that Girma Orssengo’s article is basically correct in this respect. For example, in TS.4.5 it says “Large ensembles of climate model simulations have shown that the ability of models to simulate present climate has value in constraining climate sensitivity.” In other words, they have matched climate sensitivity to observed temperature in the way Girma describes. Since the models are clearly driven mainly by the warming period of the late 20thC, Girma’s actual calculation is pretty reasonable, though AR4 is opaque in areas like this so it is difficult to be certain.
IPCC’s big problem arose when it couldn’t find enough forcing in CO2 to match their value for ECS. The (well-known) calculations for CO2 gave them an ECS of only 1.2. What they came up with was “feedbacks”. Water vapour and albedo were easy to explain, but that could only bring ECS up to 1.9 (AR4 8.6.2.3 page 633). Ignore at this point the fact that there was no actual evidence for this “feedback”. Clouds were and still are one of the big unknowns in climate science, so that is where they created a “feedback” to complete the picture. There is no known mechanism for cloud “feedback” and no empirical observation, but by coding suitable parameters into the models [yes, they say that is what they did, see AR4 8.2.1.3] they could bring ECS up to the required 3.2.
So, in summary, temperature increase of late 20thC ==> ECS 3.2 ==> feedbacks …
… which brings me to Willis Eschenbach’s (“w”) comment:-
w says “How are you “removing the warming rate due to the multidecadal oscillation” by calculating its linear trend?“. What Girma did was to take the linear trend over 60 years, which is the length of a cycle that is very visible in the temperature record of the last 120+ years. His unspoken argument is that you have to take a temperature trend over complete cycle(s) in order to remove the cyclical effect: “To remove the warming rate due to the multidecadal oscillation of about 60 years cycle, least squares trend of 60 years period from 1945 to 2004 is calculated“. This is entirely logical as far as it goes – to check, simply calculate some linear trends in a sinewave.
I say “as far as it goes”, because although it is probably the best and only way to remove the cyclical effect, nevertheless w is correct to point out that we don’t know much at all about this cycle. We can’t assume that its amplitude is constant, that its shape is symmetrical, that it is the only factor to be removed in order to find CO2-forced temperature change, etc, etc.
So, although Girma’s explanation is a good one, and it does correctly show that the IPCC’s ECS is totally unreliable, nevertheless it cannot be used as a basis for further calculations. Before climate science can move ahead, it is necessary to understand much more about climate’s major components.
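Mike’s “calculate some linear trends in a sinewave” check takes only a few lines. A hedged numpy sketch (the 60-year period and the start phases are illustrative, not fitted to anything):

import numpy as np

t = np.linspace(0.0, 60.0, 721)     # one complete 60-year cycle, monthly steps
for start in (0.0, 7.5, 15.0):      # begin at a peak, on a slope, at a zero crossing
    y = np.cos(2 * np.pi * (t + start) / 60.0)
    slope = np.polyfit(t, y, 1)[0]  # least-squares linear trend over the full cycle
    print(start, round(slope, 4))
# Only the peak-aligned window gives a ~zero trend; the others print a spurious
# slope (~0.022 and ~0.032 per year here) even over exactly one complete cycle.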

May 18, 2013 11:50 am

Correction: last sentence in my post above is misleading and should be ignored. Sorry, my bad!

Don Easterbrook
May 18, 2013 11:52 am

Girma
“I am not saying CO2 is causing the warming. I believe it is the warming that is causing the increase in CO2 concentration, as the vostok ice cores show. The CO2 concentration will drop when the temperature falls.”
Although I agree with this statement, that isn’t the way the IPCC is using temp as a function of CO2. They are saying that if CO2 goes up by X amount, the temp will rise by Y amount, and that is simply not a valid conclusion. My point is that temp is NOT a function of CO2 when you look at the past history (long term or short term) of temp and CO2. You can calculate virtually any CS you want (warming or cooling) just by selecting a different time span as the basis for your calculation. So what use is CS at all?
Not all of the 60 year cycle temps are the same; there are big differences, so you can’t just use one number to subtract them out of your equation.

Girma
May 18, 2013 11:59 am

Mike Jonas
Thank you for your reasoned comments.
Here is a published paper that arrives at a recent warming rate of 0.08 deg C/decade:
“The underlying net anthropogenic warming rate in the industrial era is found to have been steady since 1910 at 0.07–0.08 °C/decade, with superimposed AMO-related ups and downs that included the early 20th century warming, the cooling of the 1960s and 1970s, the accelerated warming of the 1980s and 1990s, and the recent slowing of the warming rates.”
Tung and Zhou (2012)
Using data to attribute episodes of warming and cooling in instrumental records
http://www.pnas.org/content/110/6/2058

May 18, 2013 12:05 pm

Gentlemen, look at Fig 2 in the top post at http://climatesense-norpag.blogspot.com
If you think that CO2 drives temperatures you are forced to admit that CO2 cooled rather than warmed the world for thousands of years during the Holocene. The 60 and 1000 year cycles are the key ones for the present discussion. They are clearly the controlling factors. For the 1000 year cycle see later figures and earlier posts on my blog.

Editor
May 18, 2013 12:28 pm

Girma says:
May 18, 2013 at 10:51 am

Willis
I have not claimed a constant pattern before 1869. As the climate forcing changes, the pattern also changes.

Huh? It’s not a “pattern” if it is constantly changing.
What you have claimed is that there is a “multidecadal oscillation”, one which you have not identified with any physical phenomenon.
Now you are saying that it is really a temporary “multidecadal oscillation” which will change in some unspecified manner with some unidentified quantity called the “climate forcing”.
Gotta say, my friend, you are not making things clearer …

… The above graph shows clearly the climate pattern since 1869.
144 years is sufficient to establish climate relationship during that time.

Again, I fear your meaning is totally unclear. What does it mean to “establish [a] climate relationship”? Because from your graph, you have one peak and two troughs in the data. It sounded for a while like you were claiming a regular cycle, one that could legitimately be removed from the data in the manner in which we deal with say the cyclical annual variations.
But no, now that I’ve shown that your “multidecadal oscillation” pattern doesn’t exist further back than 1869, now you are saying it’s just a temporary pattern, might disappear tomorrow … but if so, what is the justification for removing it?
And despite not knowing what we’re looking for, just some ephemeral “multidecadal oscillation” that appears and disappears, you claim that 144 years of data (two troughs and one peak) are enough to “establish [a] climate relationship” … ‘fraid I can’t help you with that one …
My friend, I fear you are pursuing a blind alley. You can’t just arbitrarily decide to filter out some parts of the data because you like the result …
w.

May 18, 2013 12:29 pm

Here’s a quote from the blog post linked earlier.
Having some passing acquaintance with the above literature, I would suggest that the currently most useful compilation for thinking about the record of the last 2000 years is
Christiansen and Ljungqvist 2012
http://www.clim-past.net/8/765/2012/cp-8-765-2012.pdf
Fig. 3. I’m not sure how to import figures into WUWT; check the original post at http://climatesense-norpag.blogspot.co
The point of most interest in Fig 3 is the present temperature peak and the MWP peak at 1000 AD, which correlate approximately with the solar millennial cycle seen in Fig 2. The various minima of the Little Ice Age and the Dalton minimum of the early 19th century also show up well.
The general principle is to perform spectral and wavelet analysis on the temperature and any possibly useful driver-associated time series to find any quasi-cyclic patterns which can be cross-correlated (possibly with appropriate time lags).
For a general review of this approach see several Scafetta papers eg
http://www.fel.duke.edu/~scafetta/pdf/scafetta-JSTP2.pdf
For decadal scale variations a 60 year cycle, which seems to correlate temperatures and the PDO, is well established; see the post “Global Cooling – Methods and Testable Decadal Predictions” at
http://climatesense-norpag.blogspot.com.

May 18, 2013 1:10 pm

Willis, is Girma really your “friend”? If not, the repeated use of that title may be construed as somewhat condescending and, with your stature, that is really not necessary.

Greg Goodman
May 18, 2013 1:20 pm

“Guest essay by Girma Orssengo, PhD”
Dr Orssengo, since you are using your title in a way that would tend to lend authority to your views, would you be so kind as to reply to the enquiry as to the field of study in which you gained your doctorate?
That is not intended to be provocative, so please do not take it the wrong way, but since you make a point of waving your qualification, it would seem proper to state your area of competence.

Clay Marley
May 18, 2013 1:40 pm

Mike sez: “His unspoken argument is that you have to take a temperature trend over complete cycle(s) in order to remove the cyclical effect”
Just a caution here; the Cosine Warming effect can occur with a single full cycle. It is mainly dependent on the starting and end points. If they aren’t the same, an apparent linear slope occurs.

Alexej Buergin
May 18, 2013 1:43 pm

Greg Goodman
just google him and you find this:
http://www.geocities.ws/girmao/resume.htm

Greg Goodman
May 18, 2013 1:55 pm

Thanks Alexej. I tried googling his name some time last year and drew a blank.

Janice Moore
May 18, 2013 2:01 pm

Chris Schoneveld, are you Mr. Eschenbach’s grandmother? You are, at least, on a first name basis with him. Hm?

Dr Burns
May 18, 2013 2:09 pm

Correlation is not causation.

kadaka (KD Knoebel)
May 18, 2013 2:15 pm

From Greg Goodman on May 18, 2013 at 11:40 am:

kadaka, don’t worry about the params to diff ( like you said they have no effect) I trimmed down Girma’s plot and forgot to remove them.

But what purpose can the dual “derivative” subtractions serve?
Start: A, B, C, D, E, F
Step1: A, B-A, C-B, D-C, E-D, F-E
Step2: A, B-A-A, C-B-(B-A), D-C-(C-B), E-D-(D-C), F-E-(E-D)
-equaling: A, B-2A, C-2B+A, D-2C+B, E-2D+C, F-2E+D
What’s the justification?

The triple running mean is to provide a filter that does not mess up the data.
Each step is reduced by a factor of 1.3371 or as near as you can.
If you don’t, this sort of thing can happen where the runny mean inverts peaks in the data.
http://www.woodfortrees.org/plot/rss/from:1980/plot/rss/from:1980/mean:60/plot/rss/from:1980/mean:30/mean:22/mean:17
Running means are a disaster , the triple is quite a good low pass filter.

O RLY?
Stick the trends on it.
http://www.woodfortrees.org/plot/rss/from:1980/trend/plot/rss/from:1980/mean:60/trend/plot/rss/from:1980/mean:30/mean:22/mean:17/trend
Original data: 0.0131384°C per year
60-mo running mean: 0.0160371°C per year, a 22% increase.
Your 30, 22, then 17 sample “filters”: 0.0167072°C per year, a 27% increase. You keep making the warming worse!
Of course that comes from shortening the dataset. The 60-mo “filter” takes off 30 months on each end, you drop from currently 400 data points to 340. Your ‘triple filter’ is down to 332, you lost 34 per end, int(30/2) + int(22/2) + int(17/2). Same rise, but you keep shortening the run.
Greg Goodman said on May 18, 2013 at 11:42 am:

BTW, its not “converting” anything, it is three successive filters

It’s three sequential running means. One running mean smooths things out, is called a filter. Three in a row is data distortion. You don’t take a running mean of a running mean, you certainly don’t do it twice, and you never pretend afterward that you still have data.
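kadaka’s Step1/Step2 expansion is easy to verify numerically: leaving aside WFT’s retention of the first element, two passes of “subtract the previous sample” are plain second differences. A quick numpy check with made-up values:

import numpy as np

x = np.array([3.0, 5.0, 4.0, 7.0, 6.0, 9.0])  # stand-ins for A..F
print(np.diff(x, n=2))  # [C-2B+A, D-2C+B, E-2D+C, F-2E+D] = [-3.  4. -4.  4.]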

Other_Andy
May 18, 2013 2:18 pm

Darn, Alexej beat me to it….

Janice Moore
May 18, 2013 2:30 pm

Recalling that I characterized “Girma” as a “Snake Oil Salesman” in my Jane Austen parody post in the “Sense about Sensitivity Thread” in April, 2013, I looked up why I did and discovered why I still DO think Girma is a slick operator:
Girma says [citing the following article by M. Latif favorably]:
April 24, 2013 at 7:44 pm
“Communicating Climate Science
***
There is a broad scientific consensus that the climate of the 21st century will warm in response to the anthropogenic emission of greenhouse gases (GHGs) into the atmosphere, but by how much remains highly uncertain. This is due to three factors: natural variability, model error, and emission scenario uncertainty.
***There is overwhelming scientific evidence that a significant share of 20th century warming is driven by the increase of GHGs. They will continue to accumulate in the atmosphere over the next years and possibly even decades, which together with the inertia of the climate system will support further warming. But what else do we really know about the climate of the 20th and 21st century?
Surface air temperature (SAT) during the 20th century displays a gradual warming and superimposed short-term fluctuations (the figure shows observed annual Northern Hemisphere and Arctic SAT as red lines). The upward trend contains the climate response to enhanced atmospheric GHG levels but also a natural component.
***
To some extent, we need to “ignore” the natural fluctuations, if we want to “see” the human influence on climate. ***
The uncertainty in climate sensitivity itself is in my opinion a good reason to demand reductions of global GHG emissions,
because the possibility of ‘a dangerous interference with the climate system’ cannot be ruled out with high confidence.
To predict the future climate we have to consider both natural variability and anthropogenic forcing. The latter is taken into account by assuming scenarios about future GHG and aerosol emissions. The scenarios cover a wide range of the main driving forces of future emissions, from demographic to technological and economic developments. IPCC AR4 published only climate projections based on such scenarios with no attempt to take account of the likely evolution of the natural variability. *** In the real world, the natural variations will introduce a large degree of irregularity, and even short-term cooling may occur during the next years .
This could have been explained better to the public, as in some media reports the existence of Global Warming has been questioned after for more than ten years no global SAT record has been observed. Had we emphasized more the uncertainty, that debate which confused many people could have been avoided.
Mojib Latif is a Professor of Climate Physics at Kiel University and Head of the Ocean Circulation and Climate Dynamics Division of the Helmholtz Centre for Ocean Research, Germany. He is Contributing Author of the IPCC Reports 2001 (TAR) and 2007 (AR4).”
[End Girma’s post.]

kadaka (KD Knoebel)
May 18, 2013 2:39 pm

Geocities still exists? And from Western Samoa?

May 18, 2013 3:54 pm

Your end point is almost a decade old.
Determining climate sensitivity to a given forcing via observed temperature change over the past 60 yrs is silly for obvious reasons.
You need to look at spectral damping through the TOA boundary (since all radiative forcings are derived at the TOA). The data in that regard (CERES) make clear that there has indeed been a minor damping in the CO2 spectrum, yet total OLWR has actually increased from 1979 to present, by over 1 W/m^2. It can be determined that at least 94% of the warming observed since 1979 was naturally forced via a slight reduction in low-level cloud cover, of the cumulus/cumulonimbus type, and depleted upper tropospheric/stratospheric H2O/O3, possibly stemming from a weakening magnetic field or an increase in the ferocity of the solar wind up until solar cycle 23.

Greg Goodman
May 18, 2013 3:56 pm

kadaka (KD Knoebel) says: You keep making the warming worse!
No I don’t because I did not fit trends. That would be a stupid thing to do and is certainly not a test of quality of a frequency filter. No cookie. Try again.
“It’s three sequential running means. One running mean smooths things out, is called a filter. Three in a row is data distortion. You don’t take a running mean of a running mean, you certainly don’t do it twice, and you never pretend afterward that you still have data.”
You argue from ignorance. Go and inform yourself about filter design, frequency response, and phase distortion. Look at the response of a single running mean, a gaussian and the triple running mean.
Then you may be able to explain to me why the running mean gets nearly all the peaks and troughs perfectly upside down in the RSS data that I posted earlier and that you chose to ignore.
http://www.woodfortrees.org/plot/rss/from:1980/plot/rss/from:1980/mean:60/plot/rss/from:1980/mean:30/mean:22/mean:17
Once you have learnt enough to understand that, come back and admit you were talking rather too loudly about something of which you have little understanding or knowledge.

Girma
May 18, 2013 4:50 pm

Willis
But no, now that I’ve shown that your “multidecadal oscillation” pattern doesn’t exist further back than 1869, now you are saying it’s just a temporary pattern, might disappear tomorrow … but if so, what is the justification for removing it?
The quality of the instrumental temperature records is poor before 1880s as stated by Phil Jones here:
Temperature data for the period 1860-1880 are more uncertain, because of sparser coverage, than for later periods in the 20th Century.
http://news.bbc.co.uk/2/hi/8511670.stm
However, here is a published paper that shows the multidecadal oscillation extends back 1400 years:
A signature of persistent natural thermohaline circulation cycles in observed climate
Knight et al.
Analyses of global climate from measurements dating back to the nineteenth century show an ‘Atlantic Multidecadal Oscillation’ (AMO) as a leading large-scale pattern of multidecadal variability in surface temperature. Yet it is not possible to determine whether these fluctuations are genuinely oscillatory from the relatively short observational record alone. Using a 1400 year climate model calculation, we are able to simulate the observed pattern and amplitude of the AMO. The results imply the AMO is a genuine quasi-periodic cycle of internal climate variability persisting for many centuries, and is related to variability in the oceanic thermohaline circulation (THC).

This relationship suggests we can attempt to reconstruct past THC changes, and we infer an increase in THC strength over the last 25 years. Potential predictability associated with the mode implies natural THC and AMO decreases over the next few decades independent of anthropogenic climate change.
….
The quasi-periodic nature of the model’s AMO suggests that in the absence of external forcings at least, there is some predictability of the THC, AMO and global and Northern Hemisphere mean temperatures for several decades into the future. We utilise this to forecast decreasing THC strength in the next few decades. This natural reduction would accelerate anticipated anthropogenic THC weakening, and the associated AMO change would partially offset expected Northern Hemisphere warming. This effect needs to be taken into account in producing more realistic predictions of future climate change.

May 18, 2013 5:01 pm

Been saying it for years and will say it again.
What climate science has re-discovered is the PDO/AMDO. Period. Since we already discovered it in 1996, I perceive no value in re-discovering it at a cost of hundreds of billions of dollars.

Stephen
May 18, 2013 5:10 pm

Maybe I just misunderstood something in the math here, but something seems amiss:
The 5.1 in the climate-sensitivity calculation comes from using CO2 concentrations from 1974 and 2004, but it is used with the temperature change from 1945 to 2004 at the bottom. When looking at the response of temperature to CO2, shouldn’t the changes in each over the same time-period be used?

Greg Goodman
May 18, 2013 5:26 pm

“This result gives a long-term warming rate of 0.08 deg C/decade. From this, for the three decades from 1974 to 2004, dT = 0.08* 3 = 0.24 deg C.”
No, it seems he is using the dT of three decades in each case.

Girma
May 18, 2013 5:32 pm

Stephen
When looking at the response of temperature to CO2, shouldn’t the changes in each over the same time-period be used?
The problem is the CO2 concentration is a smooth monotonic curve. However, the Annual GMST data is not. It has a clear oscillation of about 60 years. To remove this oscillation, you need to smooth the Annual GMST curve with a longer period of about 60 years. Here is the annual global mean temperature smoothed with a 63-year moving average showing the pattern.
http://bit.ly/1109qyb
This graph shows that the warming based on the 30-year least squares trend (0.6 deg C) is greater than that based on the long-term trend (0.24 deg C).
My main point is that the long-term trend should be used to estimate climate sensitivity, not the 30-year least squares trend.

Girma
May 18, 2013 5:42 pm

Willis
I am estimating climate sensitivity for the period 1974 to 2004. Is not the following pattern sufficient to do that?
http://bit.ly/1109qyb
Why not?

Greg Goodman
May 18, 2013 5:45 pm

“My main point is that the long-term trend should be used to estimate climate sensitivity, not the 30-year least squares trend.”
That is a very valid point.
Now if you want to propose a cos+linear model, why not fit one directly using OLS, rather than doing a questionable filter?

Greg Goodman
May 18, 2013 5:51 pm

In fact quadratic + cosine would be a lot closer; the linear part isn’t linear. That is what Scafetta does (but with multiple cosines).

Girma
May 18, 2013 5:55 pm

Greg
I agree the method is an approximate one, but I have checked it agrees with complicated published results.
Here is my argument.
I have shown below how the 60-year least squares trend relates to the long-term secular GMST. It shows the 60-year least squares trend is tangent to the secular GMST.
http://bit.ly/110bM02
As a result, the change in temperature from 1974 to 2004 may be estimated from the 0.08 deg C/decade warming rate of the 60-year least squares trend.

Greg Goodman
May 18, 2013 5:57 pm

Mathematical Softwares: Mathematica, MatLab, MathCAD, and Microsoft Excel
Programming Languages: Visual Basic, ASP, HTML, Crystal Report, Fortran, C, and Pascal
With that kind of baggage you should be able to fit a simple model without relying upon trivial detrending and runny mean filters available at WTF.org.

Greg Goodman
May 18, 2013 6:14 pm

before calling CO2 “monotonic” you ought to have a look at:
http://climategrog.wordpress.com/?attachment_id=233

May 18, 2013 6:34 pm

The formula shown, does that work with actual temperature, in Kelvin?

Girma
May 18, 2013 6:36 pm

UPDATE
To respond to the comments, I have included the following graph
http://bit.ly/16GPCm3
I have got a better estimate of the warming of the long-term smoothed GMST using a least squares trend from 1949 to 2005 as shown in the above graph, which shows the least squares trend coincides with the Secular GMST curve for the period from 1974 to 2005. For this case, the warming rate of the least squares trend for the period from 1949 to 2005 is 0.09 deg C/decade.
This gives dT = 0.09 * 3 = 0.27 deg C, and the improved climate sensitivity estimate is
CS = 5.1*0.27 = 1.4 deg C.
That is an increase in Secular GMST of 1.4 deg C for doubling of CO2 based on the instrumental records.

Girma
May 18, 2013 6:40 pm

Greg
before calling CO2 “monotonic” you ought to have a look at:
I mean the annual CO2 concentration:
http://www.woodfortrees.org/plot/esrl-co2/compress:12

Girma
May 18, 2013 6:44 pm

Greg
With that kind of baggage you should be able to fit a simple model without relying upon trivial detrending and runny mean filters available at WTF.org.
I want others to do it for themselves with available data and software online.
How many can determine for themselves a climate sensitivity of 1.4 deg C from the data and software easily available online as described in my essay?

Greg Goodman
May 18, 2013 7:00 pm

“I mean the annual CO2 concentration:”
I’m working with the annual conc too. But once you look at the rate of change you start to realise that it’s not just random noise or “stochastic” variation. It’s highly correlated to temperature.
Now if you are supposedly investigating the relationship of CO2 and temperature that is not the sort of thing you can ignore.

Greg Goodman
May 18, 2013 7:18 pm

“I want others to do it for themselves with available data and software online.”
Well that would be nice. But that is really not an acceptable excuse for not doing it properly. WTF.org is so basic you just can’t do this sort of analysis. Maybe there are better online tools or you could use a spreadsheet.
You’re taking the log of two points on the CO2 curve; that gives you the increased “forcing” at the end of the period. You seem to be applying that across the whole period.
What you need is the integral of the instantaneous forcing over the period. It’s not correct.

Girma
May 18, 2013 7:50 pm

Greg
I have done the analysis as you suggested. Here is my final result, which shows the linear relationship between Secular GMST (after the multidecdal oscillation has been removed) and the logarithm of the CO2 concentration.
http://orssengo.com/GlobalWarming/ClimateSensitivityOfOnePointThreeDegC.png
From the above graph, the slope of the linear relationship for the period from 1974 to 2004 is
k = dT/(ln C2 – ln C1) = (0.31 – 0.06)/(ln 377 – ln 331) = 0.25 / ln(377/331) = 0.25 / 0.130 = 1.923
From k = dT/(ln(C2/C1)), for doubling of CO2 we have
k = CS/ln 2
Therefore CS = k * ln 2 = 1.923 * ln 2 = 1.923 * 0.693 = 1.3 deg C for doubling of CO2
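For anyone following along, the arithmetic above can be checked in a few lines of Python (values copied from the comment itself):

```python
import math

dT = 0.31 - 0.06        # deg C, secular warming 1974-2004 read off the graph
C1, C2 = 331.0, 377.0   # ppm, CO2 in 1974 and 2004

k = dT / math.log(C2 / C1)   # slope of T against ln(CO2)
CS = k * math.log(2)         # warming for a doubling of CO2
print(round(k, 2), round(CS, 2))   # 1.92 1.33, matching the ~1.3 deg C above
```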

Greg Goodman
May 18, 2013 8:34 pm

“I have done the analysis as you suggested. ”
Well you haven’t. But it’s not my job to try and force you to do something you don’t want or are not capable of doing.
I’m tired. Bedtime.

Editor
May 18, 2013 8:52 pm

Girma says:
May 18, 2013 at 6:36 pm

UPDATE
To respond to the comments, I have included the following graph
http://bit.ly/16GPCm3

It looks like you’ve established that CO2 and temperature are running in parallel … but that’s not supposed to happen. Instead, temperature is supposed to vary with Log2(CO2). So already you are far afield from the IPCC (not that that is a bad thing.)
But in any case, what you have demonstrated is mere correlation. It’s been known for a while that in a vague hand-waving kind of fashion temperature is correlated with Log2(CO2) … but unfortunately, over the last fifteen years or so, they seem to have decoupled. Log2(CO2) continued to rise apace, but temperature has not cooperated.
That is, of course, if they were ever coupled in the first place …
w.

Girma
May 18, 2013 9:26 pm

Willis
The relationship between temperature and CO2 is between the Secular GMST (the monotonically increasing long-term GMST that has a similar shape to the CO2 concentration and the sea level rise) and the CO2 concentration. The multidecadal oscillations of about 63 years period should not be considered, as they are transient.
Willis, plotting the Secular GMST and ln (CO2) gives you a linear relationship given by
T = 1.871*ln(CO2/320.09)
That is the relationship between HadCRUT4 and the Mauna Loa datasets. T is the simple fit to the 63-year moving average of the annual GMST, and CO2 is the annual CO2 concentration.
Please try it and see if it works. It has worked for me.
The equation for T since 1869 is
T = 0.5*t1*(year-1895)^2 + t2*(year-1895) + t3
where
t1 = 5.477*10^(-5) deg C/year^2
t2 = 2.990*10^(-3) deg C/year
t3 = -0.344 deg C
Here is the graph for the relationship between the model and the annual GMST:
http://orssengo.com/GlobalWarming/GmstPatternOf20thCentury.png
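The two fits Girma quotes are easy to evaluate side by side; a minimal sketch with the coefficients exactly as given above:

```python
import math

t1 = 5.477e-5   # deg C/year^2
t2 = 2.990e-3   # deg C/year
t3 = -0.344     # deg C

def T_secular(year):
    # Girma's quadratic fit to the 63-year smoothed GMST
    x = year - 1895
    return 0.5 * t1 * x ** 2 + t2 * x + t3

def T_from_co2(ppm):
    # Girma's log fit: T = 1.871 * ln(CO2 / 320.09)
    return 1.871 * math.log(ppm / 320.09)

print(round(T_secular(2004), 2))    # ~0.31
print(round(T_from_co2(377.0), 2))  # ~0.31, consistent with the slope calculation earlier
```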

May 18, 2013 9:33 pm

Using the Hadcrut4 data and extrapolating the Keeling curve back to 1850 I find:
From 1850 to 1878 CO2 went up less than 1 ppm and temperature went up 0.4°C
From 1878 to 1911 CO2 went up nearly 6 ppm and temperatures dropped about -0.6°C.
From 1911 to 1944 CO2 went up over 16 ppm and temperatures went up 0.7°C.
From 1944 to 1976 CO2 went up over 29 ppm and temperatures dropped almost -0.4°C.
From 1976 to 2012 CO2 went up about 48 ppm and temperatures are up nearly 0.8°C.
Looks like this:
http://oi44.tinypic.com/4ikn78.jpg
You can take any two functions that increase over time and make it look like one causes the other but it isn’t necessarily the case.

Girma
May 18, 2013 10:00 pm

stacase
The annual CO2 is related to the 63-year moving average GMST with an R^2 = 0.99
Here is the correlation:
http://www.woodfortrees.org/plot/hadcrut4gl/mean:732/from:1901/normalise/plot/esrl-co2/compress:12/normalise/offset:0.615/detrend:-0.125

Brian H
May 18, 2013 10:06 pm

Jim;
Can you elaborate on the fascinating “singal to nosie” physics? Does it require handkerchiefs?
😀

george e. smith
May 18, 2013 10:09 pm

“””””…..IPCC:
“Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.”……””””””
So how does that jibe with the CRU public declaration that there has been NO statistically significant warming in the last 17 (now 18) years?
So if there was all of 0.6 deg. C rise since 1974, it must all have happened before 1995, because there has been none since.
Why do people keep on insisting on a link to CO2, when the data (real observed measurements) show there is no cause/effect connection (either way) whatsoever?
Keeping on repeating that old mantra under the authority of a PhD shingle doesn’t make it any more believable. The data tells the story; CO2 and Temperature can and do move in either the same or opposite directions, or both, and there is no link.

Girma
May 18, 2013 10:29 pm

george e. smith
CO2 and Temperature can and do move in either the same or opposite directions, or both, and there is no link.
As the temperature increases, more CO2 is released from the oceans (where there is about 50 times more than in the atmosphere), increasing the CO2 concentration in the atmosphere.
As the temperature decreases, more CO2 is dissolved in the oceans decreasing the CO2 concentration in the atmosphere.
That is what the Vostok ice core data shows for thousands of years as shown below:
http://www.climatedata.info/Proxy/Proxy/icecores.html

george e. smith
May 18, 2013 10:37 pm

“””””……Girma says:
May 18, 2013 at 7:50 pm
Greg
I have done the analysis as you suggested. Here is my final result, which shows the linear relationship between Secular GMST (after the multidecdal oscillation has been removed) and the logarithm of the CO2 concentration.
http://orssengo.com/GlobalWarming/ClimateSensitivityOfOnePointThreeDegC.png
From the above graph, the slope of the linear relationship for the period from 1974 to 2004 is
k = dT/(ln C2 – ln C1) = (0.31 – 0.06)/(ln 377 – ln 331) = 0.25 / ln(377/331) = 0.25 / 0.130 = 1.923
From k = dT/(ln(C2/C1)), for doubling of CO2 we have
k = CS/ln 2
Therefore CS = k * ln 2 = 1.923 * ln 2 = 1.923 * 0.693 = 1.3 deg C for doubling of CO2…..”””””
So Girma, have you tried to fit the data, since say IGY in 1957/8 into a mathematical relationship of the form:-
y = mx + c, where y and x represent mean global surface Temperature, and Mauna Loa atmospheric CO2 abundance, either in that order, or in the reverse order (flip T and CO2), and show that it is any less likely to be true than your logarithmic form. For that matter, have you tried to fit the T and CO2 data to a formula of the form:-
(CO2)2 – (CO2)1 = a log (T2/T1) and show it is any less plausible than your formula.
And for the pièce de résistance, try fitting the GMST and MLCO2 to an equation of the form:-
y = a.exp (-1/mx^2) +y0 since 1957/8, and of course x and y represent GMST and MLCO2 in either order (or both).
And show that is any less plausible than your logarithmic relationship.
Remember, only real measured data between 1957 and 2013; no computer simulated estimates, running averages, filtration residues or any such crap; just the actual data.
For a bonus, try fitting either GMST or CO2, or both since 1957, to the average Telephone number in the Manhattan Telephone directories published between 1957, and 2013.
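George’s challenge is straightforward to set up; here is a minimal sketch, with synthetic placeholder data standing in for the real GMST and Mauna Loa series:

```python
import numpy as np

rng = np.random.default_rng(1)
co2 = np.linspace(315.0, 395.0, 56)   # hypothetical ppm values, 1957-2012
temp = 1.9 * np.log(co2 / 320.0) + 0.05 * rng.standard_normal(co2.size)

def rss(pred):
    # residual sum of squares: smaller means a better fit
    return float(np.sum((temp - pred) ** 2))

m, c = np.polyfit(co2, temp, 1)           # linear: T = m*CO2 + c
a, b = np.polyfit(np.log(co2), temp, 1)   # logarithmic: T = a*ln(CO2) + b

print("linear RSS:", round(rss(m * co2 + c), 4))
print("log RSS:   ", round(rss(a * np.log(co2) + b), 4))
```

Over so small a fractional range of CO2, ln(CO2) is very nearly linear in CO2, so the two fits come out almost identical, which is essentially the point about distinguishability.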

RACookPE1978
Editor
May 19, 2013 12:04 am

stacase says:
May 18, 2013 at 9:33 pm

Using the Hadcrut4 data and extrapolating the Keeling curve back to 1850 I find:
From 1850 to 1878 CO2 went up less than 1 ppm and temperature went up 0.4°C
From 1878 to 1911 CO2 went up nearly 6 ppm and temperatures dropped about -0.6°C.
From 1911 to 1944 CO2 went up over 16 ppm and temperatures went up 0.7°C.
From 1944 to 1976 CO2 went up over 29 ppm and temperatures dropped almost -0.4°C.
From 1976 to 2012 CO2 went up about 48 ppm and temperatures are up nearly 0.8°C.

Excellent! (But you knew this was coming, didn’t you?)
But, I would amend this summary slightly.

Using the Hadcrut4 data and extrapolating the Keeling curve back to 1850 I find:
From 1850 to 1878, 28 yrs, CO2 went up less than 1 ppm and temperature went up 0.4°C
From 1878 to 1911, 33 yrs, CO2 went up nearly 6 ppm and temperatures dropped about -0.6°C.
From 1911 to 1944, 33 yrs, CO2 went up over 16 ppm and temperatures went up 0.7°C.
From 1944 to 1976, 32 yrs, CO2 went up over 29 ppm and temperatures dropped almost -0.4°C.
From 1976 to 1996, 20 yrs, CO2 went up about 32 ppm and temperatures went up nearly 0.3°C.
From 1996 to 2013, 17 yrs, CO2 went up about 15 ppm and temperatures went up nearly 0.0°C.

What this change does is highlight the very short duration of any possible rising-CO2=rising-temperature effect, and highlight even more the length of the rising-CO2=no-change-in-temperature relationship we see right now in today’s climate.
(1) Your value of +0.8 C between 1976 and 2011 is off: we were about 0.3 above the baseline of the mid-70’s in 2011, and we have been at 0.0 change (or declining slightly) since 1996. Right now, at May 2013, we are only 0.1 degree higher than the baseline of the mid-1970’s, so going up 0.8 between 1976 and 2011 isn’t possible.
(2) Please, verify my (approximated) value of CO2 change at the 1996 and 2013 dates also.

John Parsons
May 19, 2013 12:42 am

Don Easterbrook says:
May 18, 2013 at 6:51 am
“…the notion that temperature is a function of CO2 is invalided [sic] until you first show a cause-and-effect relationship between the two!”
The cause/effect of CO2 and temperature has been known for 150 years. What is it about variation around a secular trend that seems to elude you? JP

atarsinc
May 19, 2013 1:04 am

Don Easterbrook says:
May 18, 2013 at 11:52 am
Girma
“I am not saying CO2 is causing the warming. I believe it is the warming that is causing the increase in CO2 concentration, as the vostok ice cores show. The CO2 concentration will drop when the temperature falls.”
You, Girma and many others in the skeptic community seem to have a problem discerning when CO2 increase acts as a radiative forcing and when it acts as a feedback. Of course the Vostok cores show CO2 increasing after warming. It’s a feedback. JP

kadaka (KD Knoebel)
May 19, 2013 1:39 am

@ Greg Goodman on May 18, 2013 at 3:56 pm:
Okay, you’re agreeing you screwed up on the “doubled differential” part. That’s a start. Now to the rest.

You argue from ignorance . Go and inform yourself about filter design ,frequency response, and phase distortion. Look at the response of a single running mean, a gaussian and the triple running mean.

From A Handbook of Numerical and Statistical Techniques: With examples mainly from the life sciences by J.H. Pollard, 1977, starting at pg 26, Simple methods of smoothing crude data:

There is a difficulty, however: as well as smoothing a set of data, running averages distort values that are already smooth. (…)

We therefore note the following three points concerning running averages:
1. They reduce irregularities and fluctuations.
2. They distort values already smooth.
3. They do not provide values at the beginning and end of a table.
The smoothed values in fig. 4.2.1 can be made even smoother by repeating the running-average process, but further values will be lost from both ends of the table and the danger of distortion becomes greater.

From Leif Svalgaard to Willis Eschenbach to Roy Spencer, the dangers of repeated averaging are known and warned against. I have known such for a long time. You have already smoothed the data once, and want to do it again, twice. I will need better justification than “You’re stupid, shut up until you learn something.”

Then you may be able to explain to me why the running mean gets nearly all the peaks and troughs perfectly upside down in the RSS data that I posted earlier and that you chose to ignore.

I did not ignore it, but you are speaking from arrogance, thus it is understandable that you will make all sorts of assumptions about me, which will invariably be in whatever direction further inflates your self-aggrandizement.
Let’s look at running means in smaller bites:
http://www.woodfortrees.org/plot/rss/from:1980/plot/rss/from:1980/mean:15/plot/rss/from:1980/mean:30/plot/rss/from:1980/mean:45/plot/rss/from:1980/mean:60/plot/rss/from:1980/mean:30/mean:22/mean:17
We’ll get rid of the noisy main signal for clarity; the 15-mo running mean shows its shape well enough:
http://www.woodfortrees.org/plot/rss/from:1980/mean:15/plot/rss/from:1980/mean:30/plot/rss/from:1980/mean:45/plot/rss/from:1980/mean:60/plot/rss/from:1980/mean:30/mean:22/mean:17
Your wonderful “triple filter” hugs no better than a simple 30-mo running average, said value being the largest of the ones you specified, while it loses much detail, for no benefit.
As seen in the 15-mo running average, there appears to be an underlying pattern of about five years, often just shorter. Thus 60-mo is a bad choice for a running average, as for example at the trough at approx. 1989.5, it’s averaging the 1987 and 1992 peaks.
Thus the usual sage advice is to only use a running average big enough to “smooth” noisy graph data, as for visual clarity, but stop before you introduce artifacts.
You are trumpeting the marvels of your miraculous “triple filter”, using that graph you said I ignored, when all you’ve basically shown is it’s better to stop at a 30 month running average than to enlarge to 60 months.

Once you have learnt enough to understand that, come back and admit you were talking rather too loudly about something of which you have little understanding or knowledge.

I must assume you are very new to this site, to think a blanket appeal to your authority should mean anything. Especially with the obvious errors you’ve made.
There are authorities on this site I do respect, because they do not have your “shut up and accept my authority” attitude, and are willing to explain things. They have warned against repeated averages as you are advocating, as I have been warned decades ago. You are discussing “filter design ,frequency response, and phase distortion” as if fixed frequencies were involved. Willis Eschenbach warned us about people like you.
If you can get one of them to sign off on your particular number-mangling, I will consider it to have worth, within its obvious demonstrated limited usefulness. But your appeal to your own authority? Your “Cease your ignorant whining, learn something elsewhere, then come apologize to me!” attitude? This site wouldn’t exist if Anthony Watts had accepted that. Neither would Climate Audit, or Jo Nova’s site, or many others.
Climate skepticism wouldn’t have gotten this far if we had adopted the attitude of the opposition as you have done, assuming you are “on our side”. And from the obvious mistakes you are making, I’ll need more assurances that you’re an authority, on much of anything, than your imperious decree.

Steve
May 19, 2013 1:39 am

With regard to the formulated relationship between Temperature and CO2, I believe I have seen both the symbols LN and LOG used, where the base has not been made explicit, at least to the extent I have examined. I typed both as CAPS simply to make clear the letters I used; the two variants are shown in formulas as lower case.
Thus, I think the natural log (base e) is correct with regard to this formulated relationship.
Yes, or no?

kadaka (KD Knoebel)
May 19, 2013 2:08 am

Whoops, I said on May 19, 2013 at 1:39 am:

Okay, you’re agreeing you screwed up on the “doubled differential” part.

I pulled the same mistake as you, that should be “doubled derivative“, not “differential”. Sorry about that, Greg!

kadaka (KD Knoebel)
May 19, 2013 2:35 am

From Steve on May 19, 2013 at 1:39 am:

with regard to the formulated relationship between Temperature and CO2 I believe I have seen both the symbols LN and LOG used, where the base has not been made explicit, at least to the extent I have examined. (…)
thus, I think the natural log ( base e ) is correct with regard to this formulated relationship.
yes, or no ?

ln(2)/ln(10) = 0.301029996
log(2)/log(10) = 0.301029996
ln(15)/ln(8) = 1.302296865
log(15)/log(8) = 1.302296865
ln(1)/ln(2) = 0
log(1)/log(2) = 0
(trivial case)
As Dr. Orssengo’s equation used a log over a log of the same base, although I’d agree “ln” should indicate base e, it doesn’t matter. You could use logs of base 7 or base pi, you’d get the same result, when the base is the same it “cancels out”.
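A few lines of Python confirm the base-independence with the essay’s own numbers:

```python
import math

# ln(2)/ln(378/330) is the 5.1 multiplier in the essay's CS formula;
# any base gives the same answer because the base cancels in the ratio.
for base in (math.e, 10, 7, math.pi):
    print(round(math.log(2, base) / math.log(378 / 330, base), 3))
# prints ~5.104 four times
```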

Ian W
May 19, 2013 3:20 am

Trying to fit a linear change or a cyclic variation/oscillation to a chaotic system is a nonsense. There may be brief periods where there is ‘a fit’; then the system will change due to multiple unknown non-linear, non-cyclic interactions and the pattern will not exist any more as expected. Just because we humans like to see a simple pattern doesn’t mean that there is one. Even doing Fourier analyses just obfuscates, by presuming that there must be standard repeating patterns that make up the apparent random noise – well, you may find some, but they will be dependent on the algorithm used and the end points; they won’t describe the chaotic system, because by definition they expect repeating patterns at various scales from a chaotic system.

Greg Goodman
May 19, 2013 3:26 am

kadaka “From A Handbook of Numerical and Statistical Techniques: With examples mainly from the life sciences by J.H. Pollard, 1977, starting at pg 26, Simple methods of smoothing crude data:”
Well if your understanding of data processing and filter design is based on “life sciences” text books I am not surprised you do not understand the subject.
I try to avoid the use of the word arrogant; it can easily backfire.
“Thus the usual sage advice is to only use a running average big enough to “smooth” noisy graph data, as for visual clarity, but stop before you introduce artifacts.”
“Your wonderful “triple filter” hugs no better than a simple 30-mo running average, said value being the largest of the ones you specified, while it loses much detail, for no benefit.”
Filters are not designed to “hug”, neither is their only function “visual clarity”. Your whole language and attitude show you have no understanding of data processing or filtering, yet you continue to get excited and call me arrogant because I do.
I told you why the running mean was bad but it obviously was beyond your comprehension, so you just ignored what I explained and carry on shouting even louder.
This subject was one I have wanted to write a post about for a while, so here is a brief explanation:
http://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/
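For those without WFT to hand, here is a minimal Python sketch of the triple running mean as Greg describes it (the 30/22/17 windows and the ~1.3371 ratio are taken from his comments; the data are synthetic):

```python
import numpy as np

def running_mean(x, w):
    # centered running mean, keeping only the fully covered region,
    # so roughly w/2 points are lost at each end (as kadaka notes)
    return np.convolve(x, np.ones(w) / w, mode="valid")

def triple_running_mean(x, windows=(30, 22, 17)):
    # successive windows shrink by ~1.3371 so each stage's frequency-response
    # zero falls on the inverted side lobe a single running mean lets through
    for w in windows:
        x = running_mean(x, w)
    return x

# synthetic monthly-like series: a slow cycle plus noise
rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 60.0) + 0.5 * rng.standard_normal(t.size)

smoothed = triple_running_mean(series)
print(series.size, smoothed.size)   # note the points lost at the ends
```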

May 19, 2013 3:57 am

Re: RACookPE1978 – May 19, 2013 at 12:04 am
You added the span of years for each epoch in my post. They would have ranged from 28 to 37 years except that you broke the last and most recent span of 37 years down to 20 and 17 respectively. Obviously you wanted to point out the lull that’s been going on for a long time now. But yes, adding the span of years is a good idea.
Hadcrut4 has 2010 as the high point which is what I used. So I should have said 2010 instead of 2012. The 1976 value is -0.24 and 2010 is 0.54. The difference is 0.78 which I rounded off to 0.8. Had I used the very last year in the time series, 2012, I would have gotten 0.7, but I was picking off high & low points.
Yes, I see I have a typo! CO2 for 1976 is 332 ppm and for 2010 it’s 390 ppm for a difference of 58 ppm not 48.
Thanks for your comments, I’ll correct the ppm error since 1976 and stew about what to use as an end point if I post this elsewhere.

May 19, 2013 4:10 am

RE: Ian W says: – May 19, 2013 at 3:20 am
Trying to fit a linear change or a cyclic variation/oscillation to a chaotic system is a nonsense. There may be brief periods where there is ‘a fit’; then the system will change due to multiple unknown non-linear, non-cyclic interactions and the pattern will not exist any more as expected. Just because we humans like to see a simple pattern doesn’t mean that there is one. Even doing Fourier analyses just obfuscates, by presuming that there must be standard repeating patterns that make up the apparent random noise – well, you may find some, but they will be dependent on the algorithm used and the end points; they won’t describe the chaotic system, because by definition they expect repeating patterns at various scales from a chaotic system.
Random Walk from Wikipedia:
http://en.wikipedia.org/wiki/Random_walk
I’ve fooled around with random pattern generators on Excel and it’s surprising how often they nearly match those Hadcrut, GISS, RSS, UAH graphs we’ve all stared at.

Greg Goodman
May 19, 2013 4:15 am

Ian W says:
Trying to fit a linear change or a cyclic variation/oscillation to a chaotic system is a nonsense. There may be brief periods where there is ‘a fit’; then the system will change due to multiple unknown non-linear, non-cyclic interactions and the pattern will not exist any more as expected. Just because we humans like to see a simple pattern doesn’t mean that there is one. Even doing Fourier analyses just obfuscates, by presuming that there must be standard repeating patterns that make up the apparent random noise – well, you may find some, but they will be dependent on the algorithm used and the end points; they won’t describe the chaotic system, because by definition they expect repeating patterns at various scales from a chaotic system.
===
What you caution is valid as a caution, but it is incorrect to go to the other extreme and say that any order or pattern in a chaotic system is illusory.
Calling a system chaotic is simply a statement of the lack of understanding we have of how it works.
That is not a reason for not looking, or for calling everything “stochastic”.
The climate system has many physical feedbacks which lend themselves to the creation of possible oscillatory behaviour. There are also many periodic drivers, of which the daily and annual changes are the most obvious.

kadaka (KD Knoebel)
May 19, 2013 4:17 am

From Greg Goodman on May 19, 2013 at 3:26 am:

I told you why running mean was bad but it obviously was beyond your comprehension…

Yes, running means are so bad that you have repeatedly recommended a 30 month running mean, followed by a 22 month running mean, followed by a 17 month running mean, all on the same data. You say running means are bad, then heartily recommend three times the badness.
You’ve also complained about the simple WFT tools, then repeatedly used them. Even after it was demonstrated you were using them incorrectly.
But because you call your triple-badness 3x running means “filters”, somehow they are great and wonderful, and if someone familiar with WFT recognizes your “filters” are a crappy unneeded triple running mean, they are obviously ignorant and in need of your “brief explanation”?
Fine, I’ll go read your “explanation”.

Nope, you’re still stupid. You are referring to the sinc(x) function, used for Fourier transforms, talking about frequencies. For example, bold added:

The triple running mean has the advantage that it has a zero in the frequency response that will totally remove a precise frequency as well letting very little of higher frequencies through. If there is a fixed, known frequency to be eliminated, this can be an improvement on a gaussian filter of similar period.

These are measurements of climate, an inherently chaotic system. You want to use tools suitable for perfect precise fixed frequencies. If you would have read Willis Eschenbach’s post that I linked to, or to what others have said, or even made an honest open-minded examination of that data, you would know that you are neck-deep in folly.
Now, feel free to insist I’m the idiot while you’re the expert, while I laugh at you like you were a freshly-graduated engineer who expects a 15mm hex nut to fit perfectly into the slot they precisely spec’d at 15.000mm.

Greg Goodman
May 19, 2013 4:29 am

Girma says “Willis, plotting the Secular GMST and ln (CO2) gives you a linear relationship given by T = 1.871*ln(CO2/320.09) ”
No, plotting does not give any relationship. What does that is doing a linear regression. Here you have done a linear regression of T on log of co2 ratio.
The thing is that log CO2 represents an additional “forcing” (W/m2) that will produce a rate of change of temperature (temperature is a measure of energy, not power!)
That is why, yesterday, I said you need to integrate. Then you need to account for how much of today’s dT is because of feedbacks to yesterday’s CO2, etc., etc.
Sorry, if it was as trivial as you are trying to make it, it would have been solved in the 19th century on the back of an envelope.

richard verney
May 19, 2013 4:38 am

I consider all the claims regarding the ability to assess climate sensitivity disingenuous, even bordering on the dishonest.
It may be possible to calculate how CO2 behaves in laboratory conditions and hence to calculate a theoretical warming in relation to increasing CO2 levels in laboratory conditions. But that is not the real world.
In the real world, increased concentrations of CO2 would theoretically block a certain proportion of incoming solar insolation so that less solar radiance is absorbed by the ground and oceans, and it would also increase the rate of outgoing radiation at TOA. Both of these are potentially cooling factors. Thus the first issue is whether in real world conditions the theoretical laboratory ‘heat trapping’ effect of CO2 exceeds the ‘cooling’ effects of CO2 blocking incoming solar irradiance and increasing radiation at TOA and, if so, by how much?
The second issue is far more complex, namely the inter-relationship with other gases in the atmosphere and what effect it may have on the rate of convection at various altitudes, and/or whether convection effectively outstrips any ‘heat trapping’ effect of CO2, carrying the warmer air away and upwards to the upper atmosphere where the ‘heat’ is radiated to space.
None of those issues can be assessed in the laboratory, and they can only be considered in real world conditions by way of empirical observational data. This is a hopeless task since the data sets are either too short and/or have been horribly bastardised by endless adjustments, siting issues, station drop-outs and pollution by UHI. Quite simply, data sets of sufficiently high quality do not exist.
It is simply impossible to determine a value for climate sensitivity from observation data until absolutely everything is known and understood about natural variation, what its various constituent components are, the forcings of each and every individual component and whether the individual component concerned operates positively or negatively, and the upper and lower bounds of the forcings associated with each and every one of its constituent components.
This is logically and necessarily the position, since until one can look at the data set (thermometer or proxy) and identify the extent of each change in the data set and say with certainty to what extent, if any, that change was (or was not) brought about by natural variation, one cannot extract the signal of climate sensitivity from the noise of natural variation.
I seem to recall that one of the Team recognised the problem and at one time observed: “Quantifying climate sensitivity from real world data cannot even be done using present-day data, including satellite data. If you think that one could do better with paleo data, then you’re fooling yourself. This is fine, but there is no need to try to fool others by making extravagant claims.”
We do not know whether at this stage of the Holocene adding more CO2 does anything, or, if it does, whether it warms or cools the atmosphere (or for that matter the oceans). Anyone who claims that they know and/or can properly assess the effect of CO2 in real world conditions is being disingenuous.
Jim Cripwell says: May 18, 2013 at 6:33 am
“Baa Humbug says: ‘Alert me when you get to climate sensitivity of zero and I’ll pay attention.’
You may be interested in my extremely simplistic approach to this issue. Since no-one has measured a CO2 signal in any modern temperature/time graph, from standard signal to noise ratio physics, there is a strong indication that the climate sensitivity of CO2 is indistinguishable from zero.”
As noted above, we do not know enough about natural variation to extract the so called climate sensitivity from the noise.
For what it is worth, 33 years worth of satellite data (which shows that temperatures were essentially flat between 1979 and 1997 and between 1999 to date and demonstrates no correlation between CO2 and temperature) suggests that the climate sensitivity of CO2 is so low that it is indistinguishable from zero.

Chris Wright
May 19, 2013 4:48 am

Surely the best evidence we have comes from the ice cores, which record the temperature and CO2 cycles over nearly the last million years.
As far as I’m aware, the ice cores show no trace of CO2 changes causing corresponding temperature changes. But they consistently show that CO2 changes are driven by temperature changes.
If so, then it strongly suggests that the warming effect of CO2 is close to zero. As the greenhouse effect does – presumably – work in the laboratory, then it also suggests that strong negative feedbacks are dominant in the climate system.
Chris

Greg Goodman
May 19, 2013 5:01 am

“But because you call your triple-badness 3x running means “filters”, somehow they are great and wonderful, and if someone familiar with WFT recognizes your “filters” are a crappy unneeded triple running means, they are obviously ignorant and needing of your “brief explanation”?”
No, if someone who is not familiar with filters wades in telling me “you don’t do this … you don’t do that … you never do blah” simply because they do not understand squat, I will point out that they are shouting about something they do not understand.
Running means have serious deficiencies, but with a little knowledge these can be overcome. I have explained the origin of the problem, and detailed how to correct it. That may be useful to others who are willing to learn.
“Nope, you’re still stupid. You are referring to the sinc(x) function, used for Fourier transforms, talking about frequencies.”
You have not understood a word of it have you? You are clearly unable and unwilling to learn anything and will keep ignoring everything I show you so that you can keep waving your arms and shouting insults.
Enough with the insults. I’m bored.

Greg Goodman
May 19, 2013 5:27 am

richard verney says: It is simply impossible to determine a value for climate sensitivity from observation data until absolutely everything is known and understood about natural variation, what its various constituent components are, the forcings of each and every individual component and whether the individual component concerned operates positively or negatively, and the upper and lower bounds of the forcings associated with each and every one of its constituent components.
===
Interesting thoughts. That is certainly true if you hit everything with a 60-year filter.
However, I do not think it is as black and white as that. It must be possible to rule out certain extreme values. From there the task is to narrow it down by closer inspection. Until we have a much better understanding that will likely remain very large.
Geologically CO2 has always lagged; this does not preclude it acting as a positive feedback. This could be one reason climate tends to flip between glacial and interglacial states.
On a short time-scale temperature seems to determine atmospheric CO2 by out-gassing from oceans:
http://climategrog.wordpress.com/?attachment_id=233
The key question remains at the interdecadal scale, and that is where data reliability and politically motivated adjustments make an honest scientific enquiry a lot more difficult.

Greg Goodman
May 19, 2013 6:51 am

stacase: I’ve fooled around with random pattern generators on Excel and it’s surprising how often they nearly match those Hadcrut, GISS, RSS, UAH graphs we’ve all stared at.
Yes, that’s worth doing to get a feel for what it can look like. Roy Spencer gave out an xls a couple of years back on his site if anyone wants to look at that.

Girma
May 19, 2013 6:52 am

Greg
I have reproduced your result:
http://www.woodfortrees.org/plot/rss/compress:12/derivative/normalise/plot/esrl-co2/compress:12/derivative/derivative/from:1979/normalise
Excellent correlation between global mean temperature and CO2 concentration.
Amazing.
But they are telling us the increase is due to human emission of CO2.
What a ……?
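The WFT recipe in that link can be reproduced offline; a sketch with placeholder arrays (the real inputs would be the monthly RSS and Mauna Loa series, and WFT’s normalise is assumed here to rescale to the 0–1 range):

```python
import numpy as np

def normalise(x):
    # rescale to the 0..1 range (assumed similar in spirit to WFT's normalise)
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

rng = np.random.default_rng(3)
temp = np.cumsum(rng.standard_normal(300)) * 0.01   # placeholder for monthly RSS
co2 = 340.0 + 0.15 * np.arange(300) + 0.5 * rng.standard_normal(300)  # placeholder CO2

d_temp = np.diff(temp)       # WFT "derivative" of temperature
dd_co2 = np.diff(co2, n=2)   # WFT "derivative/derivative" of CO2

r = np.corrcoef(normalise(d_temp[1:]), normalise(dd_co2))[0, 1]
print(round(r, 3))   # correlation of the two normalised series
```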

Ian W
May 19, 2013 7:17 am

Greg Goodman says:
May 19, 2013 at 4:15 am
The climate system has many physical feedbacks which will lend to the creation of possible oscillatory behaviour. There are also many periodic drivers of which the daily and annual changes are the most obvious.

I would be interested in how you find a diurnal or even annual pattern after 3 multimonth smoothings looking at a 60 year oscillation. 😉 Especially as the inputs may not correlate at all with the outputs due to multiple different feedback lags and the chaotic interactions between those feedbacks and the inputs.

Ian W
May 19, 2013 7:18 am

Girma says:
May 19, 2013 at 6:52 am
Greg
I have reproduced your result:
http://www.woodfortrees.org/plot/rss/compress:12/derivative/normalise/plot/esrl-co2/compress:12/derivative/derivative/from:1979/normalise
Excellent correlation between global mean temperature and CO2 concentration.
Amazing.
But they are telling us the increase is due to human emission of CO2.
What a ……?

Perhaps it is a good empirical proof of Henry’s Law.

Greg Goodman
May 19, 2013 7:47 am

G: But they are telling us the increase is due to human emission of CO2.
Ian: Perhaps it is a good empirical proof of Henry’s Law.
yes, I think this relationship is fairly clearly temp driving d/dt(CO2) with no discernible lag. That is basic chemistry/physics.
That is telling us that the short term response is ocean out-gassing and little to do with emissions.
That had a ratio of 8 ppm/K
It should be remembered that this is a dynamic response where T represents a temporary deviation from the temperature that would be at equilibrium with the instantaneous ocean pCO2 level.
The other factor was the mean dT/dt over that period of 0.7K/century (this is SST; interesting to compare to HadCRUT). CO2 shows a mean acceleration of 2.8 ppm/year/century.
That is a ratio of 4 ppm/K for the 50 year means, most of that was a warming period so this gives an estimation of the long term response.
If this is still “dynamic” it must be a very deep water response, and it is likely we need to take a variation in temp gradient into account. If the deep temperature change is, for example, half that seen at the surface, that could correspond to the same 8 ppm/K seen in the short term.
There may be another way to interpret the 50 year means.
There is also an interesting repetition in d/dt CO2: 1998 is a perfect replay of 1974
http://climategrog.wordpress.com/?attachment_id=232
That plot gives a slightly different acceleration of CO2 but not far off.

Greg Goodman
May 19, 2013 7:52 am

The finite, positive dT/dt means something is warming the ocean system and causing it to be generally slightly ahead of equilibrium at least during this warming segment.

Greg Goodman
May 19, 2013 8:04 am

BTW it takes less than an hour to re-equilibrate to a change in temp/CO2 in agitated water. In this context it is instantaneous at the surface.
Deeper there will be delays associated with mixing of water volumes but not CO2 concentration itself.

Greg Goodman
May 19, 2013 8:38 am

http://climategrog.wordpress.com/?attachment_id=254
Same data with four year low pass filter applied.
Here the decadal scale variations show CO2 acceleration lagging rate of change of temperature .
The lag is larger at the beginning and end of the record. Varying between about 0.5 years around 1990-95 to >1.5 years at the end.
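One way to put a number on such a lag is a sliding correlation; a sketch using synthetic series with an 18-sample lag built in (a stand-in for the filtered dT/dt and d2/dt2 CO2 series):

```python
import numpy as np

rng = np.random.default_rng(4)
sig = np.convolve(rng.standard_normal(600), np.ones(48) / 48.0, mode="same")
x, y = sig[18:], sig[:-18]   # y is x delayed by 18 samples

def estimate_lag(x, y, max_lag=60):
    # return the offset (in samples) where y best matches x; positive = y lags x
    def corr_at(k):
        if k >= 0:
            return np.corrcoef(x[:len(x) - k], y[k:])[0, 1]
        return np.corrcoef(x[-k:], y[:len(y) + k])[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr_at)

print(estimate_lag(x, y))   # 18
```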

Greg Goodman
May 19, 2013 8:51 am

It seems that as we move into “the pause” the lag increased, similar to what it was pre-1975 at the end of the last cooling period.
Observation => WAG: greater lag in deep water exchanges during cooling periods.
This should give clues as to the origin of the warming. More on that once I’ve thought it through…

blueice2hotsea
May 19, 2013 10:14 am

Greg Goodman at May 19, 2013 at 8:38 am
Around 1990 the graph reveals a dramatic negative acceleration in CO2 and a corresponding negative SST trend. I would have expected to see this dip in 1992 due to the Pinatubo eruption. Does the smoothing cause a shift in dating?
Also – and it may be only me – but I strongly associate CO2 with green and temp with red. For a time, the graph’s reversed expected colors were somewhat disconcerting, a mental illusion which flipped the meaning of the colors back and forth between actual and conditioned.

blueice2hotsea
May 19, 2013 10:38 am

Girma.
IMO, your attempt to isolate secular variance by removing the 60 yr pseudo-cycle noise is far more honest (and IMO more accurate) than leaving it in and claiming that the trend from the mid-70’s onward is “mostly” due to anthro CO2. We have had far too much of the latter from media and activists over the past 17 years.
You are open to suggestions and progressive improvement. Therefore your “back-of-the-envelope” does not offend me. (I think the title is a stretch.)
Thanks and good luck.

Greg Goodman
May 19, 2013 10:52 am

blueice2hotsea
No, the filter will not introduce a spurious shift; it is centred correctly. There will be a slight spreading in width of the peak. However, bear in mind that rate of change drops before the actual temperature.
Max negative rate of change is about when temp goes through zero on the way down.
Having said that, even dT/dt can’t drop before the eruption, if it is the cause. This shows there’s no significant drop from El Chichón or Mt Pinatubo. Nothing that stands out from the usual ups and downs.
However, there is a very noticeable negative acceleration in CO2 at about the right time for Mt Agung around 1963/64. This stands out as one of the few deviations of the two datasets. A paper was published looking at that in detail which concluded the drop in CO2 could probably be attributed to the eruption.
That is also an explicit recognition of the effect of sea temperature on atmospheric CO2.
CO2 is colourless and temps are only red in alarmist literature. Seems they’ve got you trained with that preconception. 😉
If I had thought about attributing colours, I would have made the sea blue.

Steve
May 19, 2013 11:15 am

to KD, above:
thanks, and I should have been more ‘clear’ – yes, I do understand the cancel-out in this particular equation – I was trying to indicate “log” vs “ln” as I have read in other discussions and articles as well as this one – it’s, for me, a matter of there being so much “to forget” that I need to be remembering.

Editor
May 19, 2013 11:54 am

Girma says:
May 18, 2013 at 10:00 pm

stacase
The annual CO2 is related to the 63-year moving average GMST with an R^2 = 0.99
Here is the correlation:
http://www.woodfortrees.org/plot/hadcrut4gl/mean:732/from:1901/normalise/plot/esrl-co2/compress:12/normalise/offset:0.615/detrend:-0.125

Girma, you have not considered a couple of things. One is the pernicious side of smoothing. This is the increasing autocorrelation that occurs when you smooth a dataset. You need to, have to, absolutely must take autocorrelation into account when considering whether a given R^2 is significant or not. I use the method of Quenouille, which is as follows:
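A common statement of Quenouille’s (1952) correction, sketched here in its standard form (which may differ in detail from the exact variant intended), reduces the effective sample size according to the lag-1 autocorrelations r1 and r2 of the two series:

```python
import numpy as np

def lag1_autocorr(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.sum(x[1:] * x[:-1]) / np.sum(x * x))

def effective_n(x, y):
    # Quenouille-style adjustment: N_eff = N * (1 - r1*r2) / (1 + r1*r2)
    r1, r2 = lag1_autocorr(x), lag1_autocorr(y)
    n = min(len(x), len(y))
    return n * (1.0 - r1 * r2) / (1.0 + r1 * r2)

# illustration: smoothing white noise drives r1 toward 1 and N_eff toward 0
rng = np.random.default_rng(2)
raw = rng.standard_normal(100)
smooth = np.convolve(raw, np.ones(25) / 25.0, mode="valid")
print(round(effective_n(smooth, smooth), 1), "effective points out of", smooth.size)
```

With heavily smoothed series the lag-1 autocorrelation approaches 1 and the effective N collapses, which is exactly the point being made here about a 63-year moving average.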

The other is the small size of your dataset. You are using annual CO2 figures, and you show an overlap in your two datasets of 25 years … which means that N is only equal to 25, an absurdly small dataset to analyze.
As a result of these two things, the fact that your R^2 is greater than 99% is MEANINGLESS. There’s not even enough data to calculate the statistical significance of the relationship between the two tiny datasets, much less determine its meaning.
I am somewhat surprised, given your PhD and your list of programs you use (Mathematica, Excel, etc.), that you seem to be so woefully unaware of even these rudimentary issues of sample size and autocorrelation when analyzing climate data … truly, you have demonstrated beyond question that you are out of your depth here.
You need to read up on the handling of autocorrelated datasets, because at present, unfortunately, your statistical claims are … well … let me call them charmingly naive at best, and unintentionally misleading at worst.
w.

May 19, 2013 12:03 pm

Why do we constantly assume that all warming post LIA is due to CO2 and not due to the end of whatever condition caused the LIA in the first place and a recovery from that event? Why are all climate graphs starting with the end of the LIA and do not go back to around 1100 or so and show the drop into the LIA in context with the recovery out of it?
Did we have some global reduction of CO2 leading into the LIA? I don’t think we did.

blueice2hotsea
May 19, 2013 12:18 pm

Greg Goodman –
When I attempt a similar graph to yours, there is a sharp inflection at 1992. This does not square with 1990 in your graph.
Is there something wrong at WFT? Or what am I doing wrong? I must say that the closer alignment with Pinatubo event is interesting.

Greg Goodman
May 19, 2013 12:47 pm

Or what am I doing wrong?
WTF.org says: “Compress: Reduces the number of samples by averaging across the given number of months and replacing with the average. Use this to simplify a dataset before doing complex operations on it.”
You also seem to have reduced a monthly series to annual. Why would you want that?
http://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/
Try again using a 12, 9, 7 triple running mean to remove the annual cycle instead of “isolate”.

Greg Goodman
May 19, 2013 12:53 pm

“Did we have some global reduction of CO2 leading into the LIA? I don’t think we did.”
We have trouble even getting it to exist in temperature by the time they’ve finished massaging the data but it would be worth looking at. I don’t see why the laws of physics would not have applied during that period.

Greg Goodman
May 19, 2013 1:33 pm

There are concerted efforts to censor the CO2 history to maintain the theme. The mainstream variously claims CO2 remained lower than recent levels for 20 ka, 200 ka or 20 million years.
There probably is some data if you dig, but I’d expect it to be scrappy, with very poor time resolution, and noisy as hell.

blueice2hotsea
May 19, 2013 1:52 pm

Greg Goodman
May 19, 2013 at 12:47 pm
Why would I use annual means as data points? Because it doesn’t shift the graph by two years and it saves me pushing all those buttons for a three year running mean.
So I re-did the graph anyway with a triple RM and guess what? It looks like this. It still has those 1990 peaks and 1992 dips.
I would like to take your word for it that something big happened in 1990 and Pinatubo was inconsequential. But I still need more.

Girma
May 19, 2013 2:25 pm

Willis
your statistical claims are … well … let me call them charmingly naive at best, and unintentionally misleading at worst.
So should I sit still because we don’t have enough data?
What counts is that the analysis gives you an EXCELLENT description of the observed data, as shown:
http://orssengo.com/GlobalWarming/GmstPatternOf20thCentury.png
Here is the equation:
T = 1.871*ln(CO2/320.09)
T is the simple fit to the 63-year moving average GMST, and CO2 is the annual CO2 concentration in the atmosphere.
The equation for T since 1869 is
T = 0.5*t1*(year-1895)^2 + t2*(year-1895) + t3
where
t1 = 5.477*10^(-5) deg C/year^2
t2 = 2.990*10^(-3) deg C/year
t3 = -0.344 deg C
Here is the graph again
http://orssengo.com/GlobalWarming/GmstPatternOf20thCentury.png
If you cannot see that the above graph is an EXCELLENT description of the climate of the 20th century, I cannot help you. Let others judge whether the above graph is an accurate description of the climate of the 20th century.
It works. That is what counts.

Greg Goodman
May 19, 2013 2:39 pm

“Because it doesn’t shift the graph by two years” Well, it probably logs it at 1990 instead of 1990.5, so you have a 6-month shift. Not sure why you are talking about a 2-year shift.
http://climategrog.wordpress.com/?attachment_id=233
I really don’t see a huge difference between what you reproduced at WTF.org and my plot, except that you don’t have ICOADS and HadSST2 seems to be offset slightly lower.
“I would like to take your word for it that something big happened in 1990 and Pinatubo was inconsequential. But I still need more.”
Err, what I said was:
This shows there’s no significant drop from El Chichón or Mt Pinatubo. Nothing that stands out from the usual ups and downs.
I’ve just zoomed in on my plot and the CO2 drop starts at 1990.55. ICOADS SST is somewhat smoother in form but seems about the same.
Pinatubo was June 1991.
The following dip is remarkable only in being smaller than average in the record. HadSST2 goes a bit deeper, but still nothing more than average.
I’m not sure what you are seeing that you think I am missing.
I would invite you to take Willis’ volcano test. Imagine someone gave you that data and asked you to point out when a major volcanic eruption happened. Would you be pointing to 1991?

Girma
May 19, 2013 4:37 pm

Willis
I agree with you that 25 years of data is not long enough for a strong conclusion.
However, what else can I do?
Don’t you think it is possible to estimate annual GMST within +/- 0.2 deg C for the next 20 years?

blueice2hotsea
May 19, 2013 4:57 pm

Greg Goodman
Yes. Pinatubo erupted at 1991.5. However, your graph dips dramatically in 1989. I am unable to reproduce that precursor dip at WFT. Can you?
re Willis’ volcano test. If the volcanic event released significant SO2, I might look for a strong negative acceleration in temperature. Like this
Note the two strongest candidates are the troughs which occur in 1982 & 1992 – years coincident with El Chichón and Pinatubo, respectively. It may be 1991 or 1992. Close enough; there is a lag.

george e. smith
May 19, 2013 8:00 pm

“””””…..As a result of these two things, the fact that your R^2 is greater than 99% is MEANINGLESS. There’s not even enough data to calculate the statistical significance of the relationship between the two tiny datasets, much less determine its meaning……”””””
Well, long before you assign any significance to any R^2 or other artifact of statistical manipulation, there is the much more fundamental question of whether or not you actually have ANY valid data to apply your statistry to.
So unless your data sampling regimen (in two variables: space and time) conforms to the Nyquist criterion for sampling of band-limited continuous functions, you don’t even have data to masticate; it is simply noise. That, of course, is in-band noise, so no filtering process can remove it to recover a signal.
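George’s Nyquist point can be shown in a few lines; a minimal sketch of aliasing (the frequencies and sampling rate here are arbitrary choices for illustration):

import numpy as np

# A 0.9 cycle/day signal sampled once per day (below the Nyquist rate of
# 1.8 samples/day) folds into the band as a spurious 0.1 cycle/day signal.
t = np.arange(0.0, 40.0, 1.0)              # one sample per day
sampled = np.sin(2 * np.pi * 0.9 * t)      # the true, under-sampled signal
alias = -np.sin(2 * np.pi * 0.1 * t)       # the in-band impostor
print(np.allclose(sampled, alias))         # True: no filter can tell them apart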

Greg Goodman
May 20, 2013 12:42 am

blueice2hotsea, GIStemp does seem to dip later and deeper than SST and CRUtem3.
Do the same thing with CRUtem3 and there’s nothing for Mt Pinatubo, and the 1980 dip clearly precedes the eruption.
http://www.woodfortrees.org/plot/crutem3vsh/mean:12/mean:9/mean:7/from:1978/derivative/normalise
Since GISS have made several retrospective adjustments to their data, and Hansen also exaggerates volcanic cooling estimates, it may well be interesting to look into why.

Editor
May 20, 2013 12:45 am

Girma says:
May 19, 2013 at 2:25 pm

Willis

your statistical claims are … well … let me call them charmingly naive at best, and unintentionally misleading at worst,

So should I sit still because we don’t have enough data?

Well, yes, you should. Not exactly “sit still”, but “stop making unsupportable claims because you don’t have enough data”, because that’s what real scientists do.
They either wait for the data to accumulate until it reaches statistical significance, or they figure out some other analysis method that does give statistically significant results with the existing data.
What they don’t do is make unsustainable claims based on data plus analysis which shows NO STATISTICAL SIGNIFICANCE.
You go on to say:
Girma says:
May 19, 2013 at 4:37 pm

Willis
I agree with you that 25 years of data is not long enough for a strong conclusion.
However, what else can I do?

What you can do is try another analysis method. Using a sixty-three-year average on your data puts the autocorrelation through the roof, making your results statistically insignificant. But that’s a result of your method, not of the size of the dataset (163 years of monthly data, N = almost 2,000). That’s reasonable data. So use another method to establish what you are trying to show. You may notice, for example, that many of my graphs are scatter charts of some kind. And many of them use just the raw data itself, no processing of any kind. So you might experiment along those lines.
But each of the results must be treated with caution, and tested for significance allowing for autocorrelation. Because no matter how good it looks, the numbers have to pencil out.

Don’t you think it is possible to estimate annual GMST within +/- 0.2 deg C for the next 20 years?

I’d say not at present, not with any certainty. Oh, I suppose you could estimate it at something like 0.05° ± 0.2°C and you’d have a good chance of being right, but that’s just a crap shoot.
The problem is that climate is chaotic. This means that absent some very, very clever method we haven’t invented yet, it is inherently not predictable. Now it is possible that long-term climate prediction is a “boundary problem” as some have claimed … but I’ve never seen any evidence that that is the case.
In addition, we have evidence that the climate models, whose programmers do think it is a boundary problem, can’t predict 20 years out, they’ve been quite bad at projecting the future ever since climate stopped warming. This, of course, is because they are merely incrementing machines, reading the inputs (forcings) and doing a linear transform with a lag … and as a result, none of them predicted the current hiatus in the warming.
Regards,
w.
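To make the autocorrelation point concrete, here is a minimal Python sketch using Quenouille’s first-order effective-sample-size correction (the choice of this particular correction and the illustrative numbers are assumptions, not necessarily the test Willis has in mind):

import numpy as np

def effective_n(x):
    # Quenouille's first-order correction: N_eff = N * (1 - r1) / (1 + r1),
    # where r1 is the lag-1 autocorrelation of the series.
    x = x - x.mean()
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return len(x) * (1.0 - r1) / (1.0 + r1)

rng = np.random.default_rng(0)
raw = rng.standard_normal(1956)                         # ~163 years of months
smooth = np.convolve(raw, np.ones(756) / 756, "valid")  # 63-year running mean
print(effective_n(raw))     # close to 1956: nearly independent samples
print(effective_n(smooth))  # less than 1: almost no independent information left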

Greg Goodman
May 20, 2013 1:00 am

Girma: What counts is the analysis gives you an EXCELLENT description of the observed data as shown
Yes, cos+quadratic does give a reasonably good fit to that part of the data. That’s why I suggested it. However, there are also circa 21-year and 11-year cycles that will affect the residual you are fitting to CO2, as well as a circa 160-year component that Hadley processing removes.
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/
I don’t follow what you are actually fitting to what, but it seems you are now comparing the quadratic to a linear fit (a two-point approximation to ln CO2).
As I pointed out above, ln CO2 is an additional radiative forcing, therefore you need to integrate it (or differentiate your T model to get dT/dt). Since ln CO2 is almost a straight line, its integral will be a quadratic. At least you will have something similar to fit.
I’m not endorsing that as a correct evaluation of CS, but if you want to go for a direct attribution as you intended in this exercise, that would seem to be the appropriate way to do it.
What you are currently doing does not make sense unless you assume that the climate system and the oceans are adjusting almost instantly to the new “forcing” and that dT is the change in equilibrium state. I don’t think you’ll get many backers for that idea on either side of the debate.
The simplest way would be to fit cos+lin to dT/dt and then regress that with ln CO2.
To do it accurately you should do the cos+lin OLS fit to unfiltered data.
That will affect your CS but I can’t guess in which direction.
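A minimal Python sketch of the recipe Greg outlines, as read here (the 60-year period, the variable names and the final regression step are assumptions):

import numpy as np

def fit_cos_lin(t, y, period=60.0):
    # OLS fit of y = a + b*t + c*cos(w*t) + d*sin(w*t), w = 2*pi/period.
    # Fitting cos and sin together keeps the unknown phase linear.
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), t, np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X[:, 2:] @ coef[2:]    # coefficients, ~60-year component

# Illustrative use, with `years`, `temp` (deg C) and `co2` (ppm) arrays:
# dTdt = np.gradient(temp, years)       # rate of change of temperature
# _, osc = fit_cos_lin(years, dTdt)     # estimate the multidecadal cycle ...
# slope = np.polyfit(np.log(co2), dTdt - osc, 1)[0]  # ... then regress on ln CO2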

Editor
May 20, 2013 1:36 am

blueice2hotsea says:
May 19, 2013 at 4:57 pm

re Willis’ volcano test. If the volcanic event released significant SO2, I might look for a strong negative acceleration in temperature. Like this
Note the two strongest candidates are the troughs which occur in 1982 & 1992 – years coincident with El Chichón and Pinatubo, respectively. It may be 1991 or 1992. Close enough; there is a lag.

I gotta say, that is tortured data. You’ve done a kind of pseudo-gaussian averaging, then differentiated the resulting average.
It seems if anything you’re doing it backwards, that you should differentiate the actual data, rather than the smoothed data, and then smooth that. But the difference is fairly small.
In any case, here’s the underlying data, from the WFT site here. I’ve first differentiated the data, then smoothed it. You can see the result is the same as yours.

Now, I’ve marked the month of the El Chichón eruption in blue, and that of Pinatubo in red. After El Chichón, dT/dt continued to increase. After six months, dT/dt begins to fall … but not all that much before turning up again.
Regarding Pinatubo, it erupted near the end of a long, large, 18-month decline in dT/dt. After the eruption, dT/dt continued to decline somewhat, but not strongly, for another eight months, and then started to increase.
In neither case is there a change in trend from before to after the eruption.
So I’m sorry, Blue, but I don’t accept your claim that that is “close enough” to be evidence for a volcano effect.
w.

Greg Goodman
May 20, 2013 1:38 am

Willis: “In addition, we have evidence that the climate models, whose programmers do think it is a boundary problem, can’t predict 20 years out, they’ve been quite bad at projecting the future ever since climate stopped warming. This, of course, is because they are merely incrementing machines, reading the inputs (forcings) and doing a linear transform with a lag … and as a result, none of them predicted the current hiatus in the warming.”
I think the failure of current models is due to preconceived ideas being allowed to affect the model, not to the modelling process itself.
Volcanic effects have been exaggerated, as I’ve discussed above, and you have said many times there is little evidence that the volcanic effect is anything near what the models produce. Either the volcanic input is wrong or the models fail to reproduce climate _insensitivity_ to changes in radiative forcing.
This allows them to put in hypothetical +ve feedbacks to CO2 and it ends up as garbage.
I don’t think the problem is inherent in the modelling process; it is more to do with personal biases (group thinking) being allowed to rig the model to produce “expected” outcomes.

Greg Goodman
May 20, 2013 3:59 am

Willis: It seems if anything you’re doing it backwards, that you should differentiate the actual data, rather than the smoothed data, and then smooth that. But the difference is fairly small.
Both kernel convolution filters and differentiation are linear operations. The result should be mathematically identical.
Tortured? Not really. The filter response is very similar to the Gaussian, but it is not “pseudo-gaussian”; it is actually slightly better than the Gaussian at removing a fixed frequency like the annual cycle.
http://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/
Since we are looking in this case for a difference produced by a volcano, plotting the difference (i.e. what is done here) seems to be the most appropriate way to view it.
Far from being tortured, I would say that every step was justified and correctly executed, and nothing was superfluous or over-processed.
Anyway we are both agreed about what it shows.
Doing the same with ICOADS SST makes most of that trough disappear too.
http://climategrog.wordpress.com/?attachment_id=233
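Greg’s commutativity claim is easy to verify numerically; a minimal sketch with an arbitrary test series and kernel:

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(500).cumsum()    # an arbitrary random-walk series
kernel = np.ones(12) / 12                # any convolution kernel will do

smooth_then_diff = np.diff(np.convolve(x, kernel, mode="valid"))
diff_then_smooth = np.convolve(np.diff(x), kernel, mode="valid")

print(np.allclose(smooth_then_diff, diff_then_smooth))   # True: order is irrelevant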

Girma
May 20, 2013 4:38 am

Thanks Willis for your response.

Girma
May 20, 2013 5:20 am

blueice2hotsea
You are open to suggestions and progressive improvement. Therefore your “back-of-the-envelope” does not offend me. (I think the title is a stretch.)
I agree now the title is a stretch.
A more appropriate title would have been:
“How to arrive at IPCC’s climate sensitivity estimate of about 3 deg C and a contrasting estimate of 1.2 deg C”

herkimer
May 20, 2013 5:34 am

I agree with Don Easterbrook’s comments. One cannot make a forecast for the year based on the trend of the summer months only. Similarly, one cannot make a century forecast by only looking at the last 20-30 warm years of a climate cycle that may be 100-120 years long. There are 60-year climate cycles and 100-120-year cycles. An analysis of the CET records going back to 1538 by Tony Brown in a previous thread showed regular temperature dips every 100-120 years. We are heading into a dip similar to those we had around 1890, 1780, 1670 and 1560. Don has been right all along in predicting the global temperatures to drop, because he looked at longer-term cycles that are apparent in the ice core records. If we want to predict 100 years ahead, we need to look at least 100 years back too. Natural variables that shape these longer-term cycles clearly seem to override the effects of CO2.

May 20, 2013 7:02 am

Y’all are wasting your time, running around in ever-decreasing circles, unless you include the millennial temperature cycle in any calculations – see the first post at
http://climatesense-norpag.blogspot.com
There is no consistent empirical relation between CO2 and temperature; you can select a time frame that will show robustly that CO2 is an Ice House Gas if you want to. To forecast future temperatures you will be more successful if you forget CO2 completely – it is an effect, not a cause.

Editor
May 20, 2013 9:13 am

Greg Goodman says:
May 20, 2013 at 3:59 am

Willis:

It seems if anything you’re doing it backwards, that you should differentiate the actual data, rather than the smoothed data, and then smooth that. But the difference is fairly small.

Both kernel convolution filters and differentiation are linear operations. The result should be mathematically identical.

Yes, you’re 100% right, my error. I made the assumption that since one of the operations was invertible and one was not, the order was important … but it’s not; the results are identical.
w.

John Tillman
May 20, 2013 10:30 am

No surprise that Arrhenius was also a proponent of ethnically-based eugenics & racism.

Greg Goodman
May 20, 2013 1:17 pm

Willis: Yes, you’re 100% right, my error. I made the assumption that since one of the operations was invertible and one was not, the order was important … but it’s not; the results are identical.
Your comment was right in suggesting filtering is best done last. Good principle. Just in this case it did not matter.
Interestingly, both the first difference (which is a trivial kernel convolution) and a Gaussian can be done in one hit. Just use the analytical derivative of the Gaussian to make the weighting kernel.
The two-point difference is an approximation to the real derivative of the data. By taking the analytical derivative of the Gaussian, the operation is the same as applying a Gaussian to the true derivative of the data, not to the two-point approximation.
Yeah, I know, I was sceptical as hell when I first read it, but you can check it out. It’s a neat technique.
How much difference it makes will depend upon the nature of the data.

Greg Goodman
May 20, 2013 1:21 pm

The difference between the two methods is subtle. If you use the two-point difference of each point in the Gaussian kernel to build the derivative kernel, it is identical to the two-step method. The gain comes from doing the analytical derivative of the Gaussian and sampling that to make the kernel.
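For the curious, a minimal Python sketch of the one-pass kernel Greg describes (the sigma and truncation width are illustrative assumptions):

import numpy as np

def gaussian_derivative_kernel(sigma, half_width=None):
    # Sample the analytical derivative of a Gaussian, g'(t) = -t/sigma^2 * g(t),
    # so a single convolution both smooths and differentiates the data.
    if half_width is None:
        half_width = int(4 * sigma)
    t = np.arange(-half_width, half_width + 1, dtype=float)
    g = np.exp(-t**2 / (2.0 * sigma**2))
    g /= g.sum()                 # normalise the underlying Gaussian
    return -t / sigma**2 * g

# One pass replaces "two-point difference, then Gaussian smooth":
# dx_smooth = np.convolve(x, gaussian_derivative_kernel(6.0), mode="valid")
# Quick sanity check: on a straight ramp x = np.arange(100.0) the result
# is ~1.0 everywhere, i.e. the slope of the smoothed data.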