Spencer: Natural variability unexplained in IPCC models

Evidence for Natural Climate Cycles in the IPCC Climate Models’ 20th Century Temperature Reconstructions

by Roy W. Spencer, Ph.D.

What can we learn from the IPCC climate models based upon their ability to reconstruct the global average surface temperature variations during the 20th Century?

While the title of this article suggests I’ve found evidence of natural climate cycles in the IPCC models, it’s actually the temperature variability the models CANNOT explain that ends up being related to known climate cycles. After an empirical adjustment for that unexplained temperature variability, the models are shown to produce too much global warming since 1970, the period of most rapid growth in atmospheric carbon dioxide. This suggests that the models are too sensitive, in which case they are forecasting too much future warming, too.

Climate Models’ 20th Century Runs

We begin with the IPCC’s best estimate of observed global average surface temperature variations over the 20th Century, from the “HadCRUT3” dataset. (Monthly running 3-year averages are shown throughout.) Of course, there are some serious concerns over the validity of this observed temperature record, especially over the strength of the long-term warming trend, but for the time being let’s assume it is correct.
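
For readers who want to reproduce the smoothing, here is a minimal Python sketch, assuming a monthly HadCRUT3 anomaly series; the file and column names are placeholders, not the actual dataset layout:

```python
import pandas as pd

# Hypothetical monthly anomaly series indexed by date; "hadcrut3_monthly.csv"
# and the "anomaly" column are illustrative names, not the real file format.
anom = pd.read_csv("hadcrut3_monthly.csv", index_col=0, parse_dates=True)["anomaly"]

# Centered 3-year (36-month) running mean, as plotted throughout this article.
smoothed = anom.rolling(window=36, center=True).mean()
```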

[Figure: 17-model IPCC average temperature reconstruction vs. HadCRUT3 observations, 20th Century]

Also shown in the above graph is the climate model temperature reconstruction for the 20th Century, averaged across 17 of the 21 climate models the IPCC tracks. For the 20th Century runs included in the PCMDI archive of climate model experiments, each modeling group was asked to use whatever forcings it believed were involved in producing the observed temperature record. Those forcings generally include increasing carbon dioxide, various estimates of aerosol (particulate) pollution, and, for some of the models, volcanoes. (Also shown are polynomial fits to the curves, to allow a better visualization of the decadal-scale variations.)

There are a couple of notable features in the above chart. First, the average warming trend across all 17 climate models (+0.64 deg C per century) exactly matches the observed trend; I didn’t plot the trend lines because they lie on top of each other. This agreement might be expected, since the models have been adjusted by the various modeling groups to best explain the 20th Century climate.

The more interesting feature, though, is the models’ inability to mimic the rapid warming before 1940 and the lack of warming from the 1940s to the 1970s. These two periods of inconvenient temperature variability are well known: (1) the pre-1940 warming occurred before atmospheric CO2 had increased very much; and (2) the lack of warming from the 1940s to the 1970s occurred during a time of rapid growth in CO2. In other words, based upon the CO2 warming effect alone, the stronger warming period should have come after 1940, not before.

Natural Climate Variability as an Explanation for What the Models Cannot Mimic

The next chart shows the difference between the two curves in the previous chart, that is, the 20th Century temperature variability the models have not, in an average sense, been able to explain. Also shown are three known modes of natural variability: the Pacific Decadal Oscillation (PDO, in blue); the Atlantic Multidecadal Oscillation (AMO, in green); and the negative of the Southern Oscillation Index (SOI, in red). The SOI is a measure of El Niño and La Niña activity. All three climate indices have been scaled so that their net amount of variability (standard deviation) matches that of the “unexplained temperature” curve.
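
That scaling step is straightforward to express in code. A minimal sketch, assuming the residual and each index are aligned monthly arrays (the function and variable names are mine, not anything from the article):

```python
import numpy as np

def scale_to_residual(index, residual):
    """Rescale a climate index (PDO, AMO, or negative SOI) so its standard
    deviation matches that of the 'unexplained temperature' residual."""
    x = np.asarray(index, dtype=float)
    x = x - x.mean()
    return x * (np.std(residual) / np.std(x))

# e.g. pdo_scaled = scale_to_residual(pdo, residual)
```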

[Figure: 20th Century temperature variability unexplained by the models, with scaled PDO, AMO, and negative SOI]

As can be seen, the three climate indices all bear some level of resemblance to the unexplained temperature variability in the 20th Century.

An optimum linear combination of the PDO, AMO, and SOI that best matches the models’ “unexplained temperature variability” is shown as the dashed magenta line in the next graph. There are some time lags included in this combination, with the PDO preceding temperature by 8 months, the SOI preceding temperature by 4 months, and the AMO having no time lag.
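
As a rough illustration of such a lagged fit (a sketch, not Spencer’s actual code; the helper and array names are assumptions, with each index taken as an aligned monthly numpy array):

```python
import numpy as np

def lagged_fit(residual, pdo, neg_soi, amo, lag_pdo=8, lag_soi=4):
    """Least-squares fit of the unexplained-temperature residual to a linear
    combination of lagged indices: PDO leading by 8 months, negative SOI
    leading by 4 months, and AMO with no lag (the lags quoted in the text)."""
    n = len(residual)
    X = np.column_stack([
        pdo[0 : n - lag_pdo],                      # PDO(t - 8)
        neg_soi[lag_pdo - lag_soi : n - lag_soi],  # -SOI(t - 4)
        amo[lag_pdo : n],                          # AMO(t)
    ])
    y = residual[lag_pdo : n]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs, X @ coeffs  # fitted "natural component" for t >= lag_pdo
```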

[Figure: unexplained temperature variability and best-fit linear combinations of PDO, AMO, and SOI]

This demonstrates that, at least from an empirical standpoint, there are known natural modes of climate variability that might explain at least some portion of the temperature variability seen during the 20th Century. If we exclude the post-1970 data from the above analysis, the best combination of the PDO, AMO, and SOI results in the solid magenta curve. Note that it does a somewhat better job of capturing the warmth around 1940.

Now, let’s add this natural component to the original model curve we saw in the first graph, first based upon the full 100 years of overlap:

[Figure: model average plus fitted natural-variability component (full-period fit) vs. HadCRUT3]

We now find a much better match with the observed temperature record. But we see that the post-1970 warming produced by the combined physical-statistical model tends to be overstated, by about 40%. If we instead use only the 1900–1970 overlap to derive the natural variability component, the following graph shows that the post-1970 warming is overstated by even more: 74%.

[Figure: model average plus natural-variability component fitted to 1900–1970 only, vs. HadCRUT3]
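
The overstatement percentages quoted above (40% and 74%) amount to comparing post-1970 least-squares trends. A minimal sketch, assuming the post-1970 portions of the curves are available as numpy arrays (names are illustrative):

```python
import numpy as np

def overstatement_pct(t_years, modeled, observed):
    """Percent by which the modeled post-1970 warming trend exceeds the
    observed trend, from ordinary least-squares slopes; 40 would mean the
    modeled trend is 1.4x the observed one."""
    m_slope = np.polyfit(t_years, modeled, 1)[0]
    o_slope = np.polyfit(t_years, observed, 1)[0]
    return 100.0 * (m_slope / o_slope - 1.0)
```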

Interpretation

What I believe this demonstrates is that after known, natural modes of climate variability are taken into account, the primary period of supposed CO2-induced warming during the 20th Century – that from about 1970 onward – does not need as strong a CO2-warming effect as is programmed into the average IPCC climate model. This is because the natural variability seen BEFORE 1970 suggests that part of the warming AFTER 1970 is natural! Note that I have deduced this from the IPCC’s implicit admission that it cannot explain all of the temperature variability seen during the 20th Century.

The Logical Absurdity of Some Climate Sensitivity Arguments

This demonstrates one of the absurdities (Dick Lindzen’s term, as I recall) in the way current climate change theory works: for a given observed temperature change, the smaller the forcing that caused it, the greater the inferred sensitivity of the climate system. This is why Jim Hansen believes in catastrophic global warming: since he thinks he knows for sure that a relatively tiny forcing caused the Ice Ages, the greater forcing produced by our CO2 emissions must result in even more dramatic climate change!

But taken to its logical conclusion, this relationship between the strength of the forcing and the inferred sensitivity of the climate system leads to the absurd notion that an infinitesimally small forcing implies a nearly infinite climate sensitivity(!) As I have mentioned before, this is analogous to an ancient tribe of people thinking their moral shortcomings were responsible for lightning, storms, and other whims of nature.
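
In symbols (the notation is mine, not from the article): if sensitivity is inferred as the ratio of an observed temperature change to the forcing assumed to have caused it, then

```latex
\lambda \;=\; \frac{\Delta T}{\Delta F}
\qquad\Longrightarrow\qquad
\lambda \to \infty \quad \text{as } \Delta F \to 0^{+} \text{ for fixed } \Delta T > 0,
```

which is exactly the divergence described above: holding the observed warming fixed, a vanishing assumed forcing implies an unbounded inferred sensitivity.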

This absurdity is avoided if we simply admit that we do not know all of the natural forcings involved in climate change. And the greater the number of natural forcings involved, the less we have to worry about human-caused global warming.

The IPCC, though, never points out this inherent source of bias in its reports. But the IPCC cannot admit to scientific uncertainty; that would reduce the chance of getting the energy policy changes it so desires.

Comments

February 1, 2010 1:32 pm

What I’d like to see is a solar climate model. Something that can run on a computer and explain past temperature changes and predict future temperature changes. This is an area where the believers have done a much better job than the skeptics.
And, with all due respect Dr. Spencer, I’d also like you to release to the public your satellite computer code for processing UAH temperatures.

Richard Tyndall
February 1, 2010 1:35 pm

Sorry to go off topic straight away, but the Guardian newspaper in the UK is claiming an exclusive showing that Jones at the CRU covered up problems with data from Chinese weather stations:
http://www.guardian.co.uk/environment/2010/feb/01/leaked-emails-climate-jones-chinese
“Today the Guardian reveals how Jones withheld the information requested under freedom of information laws. Subsequently a senior colleague told him he feared that Jones’s collaborator, Wei-Chyung Wang of the University at Albany, had “screwed up”.
The apparent attempts to cover up problems with temperature data from the Chinese weather stations provide the first link between the email scandal and the UN’s embattled climate science body, the Intergovernmental Panel on Climate Change, as a paper based on the measurements was used to bolster IPCC statements about rapid global warming in recent decades.
Wang was cleared of scientific fraud by his university, but new information brought to light today indicates widespread concern about the affair among scientists.
In particular, it emerges that documents which Wang claimed would exonerate him and Jones did not exist.”

Steve Goddard
February 1, 2010 1:43 pm

How true.

From: Kevin Trenberth
To: Michael Mann
Subject: Re: BBC U-turn on climate
Date: Mon, 12 Oct 2009 08:57:37 -0600
The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.

John Galt
February 1, 2010 1:44 pm

I respectfully disagree that natural variability is unexplained by the IPCC. It’s rather easy to see the IPCC believes natural variability no longer exists.
It’s something of the past, like the saber-toothed tiger. Besides, human influence is so strong that natural variability doesn’t matter — or so they would have you believe.
BTW: I believe it shows extreme hubris when the various and assorted climatologists and climate modelers assert they can separate the natural warming signal from the anthropogenic influence.

James Sexton
February 1, 2010 1:44 pm

Lol, very nice. One has to admire the clarity in Dr. Spencer’s interpretation and conclusion.

Onion
February 1, 2010 1:44 pm

Sounds like homeopathy!

February 1, 2010 1:48 pm

This absurdity is avoided if we simply admit that we do not know all of the natural forcings involved in climate change. And the greater the number of natural forcings involved, then the less we have to worry about human-caused global warming.
I find it interesting [and possibly significant] that the ‘natural forcings’ only cause the observed climate to oscillate about the model climate, but not to progressively deviate from it over a long enough time. Usually, when you fail a prediction, the failure is cumulative and it gets worse and worse as time goes on [e.g. calculating predicted positions of a minor planet based on inaccurate original positions], but the climate seems to ‘pull itself’ back to ‘where it should be’ according to the models. I assume that the models are NOT constantly updated [assimilated] with the newest observations, but are allowed to ‘run free’ based only on the initial conditions and the processes being modeled.

Steve in SC
February 1, 2010 1:48 pm

Well hello, since the models were programmed to forecast increasing temperatures, that is precisely what they do. The extensive use of ADAFFs (arbitrarily determined artificial fudge factors) to cover for unknown phenomena and their effects almost guarantees a cluster[snip].
(preemptive self snip)

DirkH
February 1, 2010 1:50 pm

Rule of thumb: In the warm phase of the PDO, mankind fears to be fried. In the cool phase of the PDO, they fear an ice age. Is http://www.globalcooling.com already taken? Yes. From the website:
“Global Cooling develops innovative refrigeration solutions.”

Spector
February 1, 2010 1:58 pm

I believe the lump in many of these curves from 1939 to 1946 has been attributed to abnormal maritime data obtained from ships at sea during World War II. During this period, traffic on the ‘normal’ shipping lanes was often diverted to other routes.

February 1, 2010 1:59 pm

Earth’s climate has changed in the past, is changing now, and will continue to change because Earth’s heat source – the Sun – is a variable star.
The geologic record of that fact is clear.
Many studies have shown that changes in Earth’s climate are linked to changes in the Sun – Earth’s heat source.
Other studies have shown that changes in the Sun are induced by oscillations of the Sun about the center-of-mass of the solar system, induced mostly by ever changing positions of planets around the Sun.
Although modern astronomy discarded astrology as voodoo science, it now appears that astrology may have had a better scientific foundation than the Standard Solar Model of a Hydrogen-filled Sun!
See: “Earth’s heat source – the Sun”, Energy and Environment 20 (2009) 131-144: http://arxiv.org/pdf/0905.0704
With kind regards,
Oliver K. Manuel
Former NASA PI for Apollo

February 1, 2010 2:03 pm

Why spend time with a global HadCRUT of dubious quality, trying to fit something to it? An increased “GH” effect should manifest mostly in the polar areas, where cold air holds only a little humidity and a +40% change in CO2 should deliver the biggest increase in “forcing”.
Antarctic: no warming during the last 30 years
http://climexp.knmi.nl/data/itlt_0-360E_-66–90N_na.png
Arctic: no net warming compared to the 1940s, just natural oscillations up and down
http://climexp.knmi.nl/data/icrutem3_hadsst2_0-360E_70-90N_na.png

February 1, 2010 2:04 pm

Oliver K. Manuel (13:59:06) :
it now appears that astrology may have had a better scientific foundation than the Standard Solar Model of a Hydrogen-filled Sun!
It is statements like that that make some people not take WUWT seriously. Let us at least try to preserve a modicum of science.

Eric (skeptic)
February 1, 2010 2:04 pm

Do any GCMs ever converge to the types of patterns seen in real-world measurements of the major oscillations? It seems to me that having model runs that produce cycles correlating at least somewhat with reality would be somewhat validating. That would seem to require the incorporation of exogenous forcings such as volcanoes and solar. Without model runs that can duplicate natural oscillations, we are left with a hopelessly oversimplified prediction of some sort of warming sometime in the future.
But displaying all the model runs as a single average precludes that possibility. I would like to see the analysis in this article performed against hundreds or thousands of individual model runs, to see what cyclical behaviors emerge that match parts of reality. The typical, predictable argument against my request will invoke the uncertainty of initial conditions when in fact the reality is that initial conditions don’t matter in the long run. If most or all model runs fail to match reality over the long run, that is due to model error, not chaotic effects propagating from initial conditions.

February 1, 2010 2:07 pm

Moreover, the decline in HadCRUT between 1945 and 1978 should really be a decline. By fudging the sea-water sampling correction factor, “they” managed to create a sudden step down in 1945 and a subsequent flat period/mild rise, instead of the clear decline that is visible in all NH temperature station records.

jack mosevich
February 1, 2010 2:15 pm

This is an issue about which I have been wondering: how often are the models calibrated? There are undoubtedly many parameters which are updated through time to better fit observations with a hopefully corresponding better ability to project into the future. I know that there is confusion about this as some people accuse modelers of curve fitting, which is certainly not true. Can anyone, e.g. Dr. Spencer, please elucidate?
So, where does a best fit end and a projection begin?

Brian D Finch
February 1, 2010 2:17 pm

@Leif Svalgaard
Leif, Oliver is taking the piss (i.e., he is being satirical).

RichieP
February 1, 2010 2:21 pm

Richard Tyndall (13:35:26)
There is either some severe form of cognitive dissonance/sheer schizophrenia going on at the Guardian or I’m dreaming or I’m just missing the plot …
Fred Pearce, Monday 1 February 2010 18.04 GMT:
http://www.guardian.co.uk/environment/2010/feb/01/climate-emails-sceptics
Fred Pearce, Monday 1 February 2010 21.00 GMT:
http://www.guardian.co.uk/environment/2010/feb/01/dispute-weather-fraud

Henry chance
February 1, 2010 2:21 pm

It is all about carbon. If we send the carbon to another planet, utopia will surely come.
We can name sites where even the slightest mention of warming in the early 1940s is forbidden.

DirkH
February 1, 2010 2:23 pm

“Eric (skeptic) (14:04:51) :
[…]
The typical, predictable argument against my request will invoke the uncertainty of initial conditions when in fact the reality is that initial conditions don’t matter in the long run.”
You would be right for a stable, negatively fed-back oscillating system. One could watch it to analyze the nature of its oscillations.
But the existing GCMs are tuned to incorporate assumed positive feedbacks; otherwise they would not produce alarming results. They are inherently unstable by design. For such a system, the initial conditions lead to ever-amplifying oscillations or a push over the brink straight ahead, so for them, initial conditions can make all the difference between the end of the world in 2026 or 2100, take your pick. That’s why they do large numbers of runs and average them (simply saying “we ran our model and the world will end in March 2099” would sound too silly), and that’s also why they have this huge span of possible outcomes.
The instability is built into these systems because it is a requirement. The use case is predicting the end of the world.

RichieP
February 1, 2010 2:24 pm

… or is it just a way to offer up Jones as scapegoat and save the rest of the myth?

February 1, 2010 2:27 pm

Brian D Finch (14:17:39) :
Leif, Oliver is taking the piss (ie: he is being satirical).
I think not.

View from the Solent
February 1, 2010 2:30 pm

re Richard Tyndall 13:35
More from The Guardian at http://www.guardian.co.uk/environment/2010/feb/01/dispute-weather-fraud .
This is significant. The Guardian is the bible of UK socialism and as far from sceptical as you can get. They still plug the hokum, but are beginning to question the background.

February 1, 2010 2:32 pm

This sounds very similar to omitted-variable bias in statistics. If you run a regression Y=BX and omit some variable(s) from the right-hand side, and the omitted variable(s) are positively correlated with the included X variable(s) (where all have a positive effect on Y), then your estimate of the “sensitivity” coefficient B will be biased upward.
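
The bias being described is easy to demonstrate with a quick simulation (the numbers are illustrative, not climate data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two positively correlated regressors, both with a positive effect on y.
x = rng.normal(size=n)
z = 0.7 * x + rng.normal(scale=0.7, size=n)  # the omitted variable; corr(x, z) > 0
y = 1.0 * x + 1.0 * z + rng.normal(size=n)

# Regressing on both recovers the true coefficient on x (about 1.0) ...
b_full = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]
# ... but omitting z biases the estimated "sensitivity" upward (about 1.7 here).
b_omit = np.polyfit(x, y, 1)[0]

print(f"with z: {b_full:.2f}   without z: {b_omit:.2f}")
```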

John Galt
February 1, 2010 2:35 pm

@Leif Svalgaard (13:48:17) :
When the source code for a climate model is updated, it’s like changing a hypothesis. This would not be an issue if the source code were properly archived after each release.
However, we have seen that GISS does not control or archive its source code any better than it maintains its raw data.
What really needs to be done for each climate model is to archive the source code for each release, along with the raw data, the adjustments, and the output for each run. The data (only the inputs are data) and the output should be put into a database that can be queried and analyzed using standard tools.
Since the data and output are not national secrets, and since the agencies are funded with public monies, they should be published on the internet and made downloadable as XML.
