Why models can't predict climate accurately

 By Christopher Monckton of Brenchley

Dr Gavin Cawley, a computer modeler at the University of East Anglia, who posts as “dikranmarsupial”, is uncomfortable with my regular feature articles here at WUWT demonstrating the growing discrepancy between the rapid global warming predicted by the models and the far less exciting changes that actually happen in the real world.

He brings forward the following indictments, which I shall summarize and answer as I go:

 

1. The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K]. He says: “Cherry picking the interval to maximise the strength of the evidence in favour of your argument is bad statistics.”

The question I ask when compiling the monthly graph is this: “What is the earliest month from which the least-squares linear-regression temperature trend to the present does not exceed zero?” The answer, therefore, is not cherry-picked but calculated. It is currently September 1996 – a period of 17 years 6 months. Dr Pachauri, the IPCC’s climate-science chairman, admitted the 17-year Pause in Melbourne in February 2013 (though he has more recently got with the Party Line and has become a Pause Denier).
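The calculation is mechanical rather than judgemental, and a short sketch makes that plain. This is a toy illustration on synthetic data, not the actual RSS series; the 24-month minimum window is my own assumption, chosen only so that very short, meaningless trends are excluded.

```python
import numpy as np

# Synthetic anomaly series standing in for RSS: a rising segment followed
# by a gently declining "pause". Substitute the real monthly anomalies to
# reproduce the figure discussed in the article.
months = np.arange(420)
temps = np.where(months < 200, 0.002 * months, 0.4 - 0.0001 * (months - 200))

def earliest_flat_start(t, y, min_months=24):
    """Earliest index i such that the OLS trend of y[i:] does not exceed zero."""
    for i in range(t.size - min_months):
        slope = np.polyfit(t[i:], y[i:], 1)[0]
        if slope <= 0:
            return i
    return None

start = earliest_flat_start(months, temps)
```

The answer falls out of the data; the only free choice is the minimum window length.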

2. “In the case of the ‘Pause’, the statistical test is straightforward. You just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades.”

No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996-date is that they tend to depress the long-run trend, which, on the entire dataset from 1979-date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century. That was two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.
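For readers who wish to run the test Dr Cawley describes, a minimal sketch follows: estimate the trend over a recent window with a 95% confidence interval and ask whether the earlier trend falls inside it. The data are synthetic, `trend_with_ci` is a helper written for this illustration, the 1.96 multiplier approximates the large-sample 95% t-value, and a serious analysis must also correct the standard error for autocorrelation in the residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(240)                             # 20 years of monthly data
y = 0.0015 * t + rng.normal(0, 0.08, t.size)   # toy series, one true trend

def trend_with_ci(t, y, z=1.96):
    """OLS slope with an approximate 95% confidence interval."""
    tc = t - t.mean()
    slope = (tc @ (y - y.mean())) / (tc @ tc)
    resid = y - y.mean() - slope * tc
    se = np.sqrt((resid @ resid) / (t.size - 2) / (tc @ tc))
    return slope, slope - z * se, slope + z * se

s_early, _, _ = trend_with_ci(t[:120], y[:120])          # first decade
s_late, lo_ci, hi_ci = trend_with_ci(t[120:], y[120:])   # second decade
consistent = lo_ci <= s_early <= hi_ci    # no significant change detected?
```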

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.

4. The evidence for an inconsistency between models and data is stronger than that for the existence of a pause, but neither is yet statistically significant.

Dr Hansen used to say one would need five years without warming to falsify his model. Five years without warming came and went. He said one would really need ten years. Ten years without warming came and went. The NOAA, in its State of the Climate report for 2008, said one would need 15 years. Fifteen years came and went. Ben Santer said, “Make that 17 years.” Seventeen years came and went. Now we’re told that even though the Pause has pushed the trend below the 95% significance threshold for very nearly all the models’ near-term projections, it is “not statistically significant”. Sorry – not buying.

5. If the models underestimate the magnitude of the ‘weather’ (e.g. by not predicting the Pause), the significance of the difference between the model mean and the observations is falsely inflated.

In Mark Twain’s words, “Climate is what you expect. Weather is what you get.” Strictly speaking, one needs 60 years’ data to cancel the naturally-occurring influence of the cycles of the Pacific Decadal Oscillation. Let us take East Anglia’s own dataset: HadCRUT4. In the 60 years March 1954-February 2014 the warming trend was 0.7 K, equivalent to a little under 1.2 K/century. CO2 has been rising at the business-as-usual rate.

The IPCC’s mid-range business-as-usual projection, on its RCP 8.5 scenario, is for warming at 3.7 K/century from 2000-2100. The Pause means we won’t get 3.7 K warming this century unless the warming rate is 4.3 K/century from now to 2100. That is almost four times the observed trend of the past 60 years. One might well expect some growth in the so-far lacklustre warming rate as CO2 emissions continue to increase. But one needs a fanciful imagination (or a GCM) to pretend that we’re likely to see a near-quadrupling of the past 60 years’ warming rate over the next 86 years.
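The arithmetic in that paragraph is easy to check. The figure for warming already delivered since 2000 is taken as roughly zero here, as an assumption reflecting the Pause; refine it as you please.

```python
# Verify the required-rate arithmetic: if RCP 8.5 implies 3.7 K of warming
# from 2000-2100 and little has arrived by 2014, the remainder must come fast.
target_warming = 3.7           # K, 2000-2100 mid-range projection (RCP 8.5)
warming_so_far = 0.0           # K since 2000, roughly, given the Pause (assumption)
years_left = 2100 - 2014       # 86 years remaining

required_rate = (target_warming - warming_so_far) / years_left * 100
observed_rate = 0.7 / 60 * 100   # K/century over the past 60 years

print(round(required_rate, 1))   # 4.3 K/century
print(round(observed_rate, 1))   # 1.2 K/century
```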

6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.

No one is “rejecting” the models. However, they have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date. And Dr Cawley’s argument at this point is a common variant of the logical fallacy of arguing from ignorance. The correct question is not whether the models are the best method we have but whether, given their inherent limitations, they are – or can ever be – an adequate method of making predictions (and, so far, extravagantly excessive ones at that) on the basis of which the West is squandering $1 billion a day to no useful effect.

The answer to that question is No. Our knowledge of key processes – notably the behavior of clouds and aerosols – remains entirely insufficient. For example, a naturally-recurring (and unpredicted) reduction in cloud cover in just 18 years from 1983-2001 caused 2.9 W/m² of radiative forcing. That natural forcing exceeded by more than a quarter the entire 2.3 W/m² anthropogenic forcing in the 262 years from 1750-2011 as published in the IPCC’s Fifth Assessment Report. Yet the models cannot correctly represent cloud forcings.

Then there are temperature feedbacks, which the models use to multiply the direct warming from greenhouse gases by 3. By this artifice, they contrive a problem out of a non-problem: for without strongly net-positive feedbacks the direct warming even from a quadrupling of today’s CO2 concentration would be a harmless 2.3 Cº.
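The no-feedback figure can be reproduced from two textbook values: the CO2 forcing coefficient (about 5.35 W/m² per natural-log unit of concentration change, after Myhre et al.) and the Planck, or zero-feedback, sensitivity parameter (about 0.31 K per W/m²). Both constants are standard approximations I am supplying, not figures drawn from the article itself.

```python
import math

# Direct (no-feedback) warming from a quadrupling of CO2 concentration
forcing = 5.35 * math.log(4)     # W/m², approx. 7.4
planck = 0.31                    # K per W/m², zero-feedback sensitivity
direct_warming = planck * forcing
print(round(direct_warming, 1))  # 2.3 K
```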

But no feedback’s value can be directly measured, or theoretically inferred, or distinguished from that of any other feedback, or even distinguished from the forcing that triggered it. Yet the models pretend otherwise. They assume, for instance, that because the Clausius-Clapeyron relation establishes that the atmosphere can carry near-exponentially more water vapor as it warms it must do so. Yet some records, such as the ISCCP measurements, show water vapor declining. The models are also underestimating the cooling effect of evaporation threefold. And they are unable to account sufficiently for the heteroskedasticity evident even in the noise that overlies the signal.

But the key reason why the models will never be able to make policy-relevant predictions of future global temperature trends is that, mathematically speaking, the climate behaves as a chaotic object. A chaotic object has the following characteristics:

1. It is not random but deterministic. Every change in the climate happens for a reason.

2. It is aperiodic. Appearances of periodicity will occur in various elements of the climate, but closer inspection reveals that often the periods are not of equal length (Fig. 1).

3. It exhibits self-similarity at different scales. One can see this scalar self-similarity in the global temperature record (Fig. 1).

4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).

5. Its evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.
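Point 4 can be demonstrated with the simplest chaotic system there is: the logistic map at r = 4. It is not a climate model, merely a one-line illustration of sensitive dependence, but it is fully deterministic, and two trajectories whose starting points differ by one part in a trillion soon bear no resemblance to one another.

```python
def trajectory(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), recording each value."""
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

ta = trajectory(0.3, 60)
tb = trajectory(0.3 + 1e-12, 60)          # minutely perturbed start

early_gap = abs(ta[4] - tb[4])            # still tiny after 5 steps
late_gap = max(abs(p - q) for p, q in zip(ta[40:], tb[40:]))  # large later
```

Double the precision of the starting value and you merely postpone the divergence by a few steps: that is the butterfly effect in miniature.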

Figure 1. Quasi-periodicity at 100,000,000-year, 100,000-year, 1000-year, and 100-year timescales, all showing cycles of lengths and magnitudes that vary unpredictably.

Not every variable in a chaotic object will behave chaotically: nor will the object as a whole behave chaotically under all conditions. I had great difficulty explaining this to the vice-chancellor of East Anglia and his head of research when I visited them a couple of years ago. When I mentioned the aperiodicity that is a characteristic of a chaotic object, the head of research sneered that it was possible to predict reliably that summer would be warmer than winter. So it is: but that fact does not render the climate object predictable.

By the same token, it would not be right to pray in aid the manifest chaoticity with which the climate object behaves as a pretext for denying that we can expect or predict that any warming will occur if we add greenhouse gases to the atmosphere. Some warming is to be expected. However, it is by now self-evident that trying to determine how much warming we can expect on the basis of outputs from general-circulation models is futile. They have gotten it too wrong for too long, and at unacceptable cost.

The simplest way to determine climate sensitivity is to run the experiment. We have been doing that since 1950. The answer to date is a warming trend so far below what the models have predicted that the probability of major warming diminishes by the month. The real world exists, and we who live in it will not indefinitely throw money at modelers to model what the models have failed to model: for models cannot predict future warming trends to anything like a sufficient resolution or accuracy to justify shutting down the West.

AGW_Skeptic
April 2, 2014 1:38 pm

Bravo!

F. Ross
April 2, 2014 1:44 pm

Good post.
Especially nice once again to see enumerated the characteristics of a(-n?) chaotic object.

AnonyMoose
April 2, 2014 1:46 pm

“Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability.”
There is a pair of temperature graphs from the 20th century which show nearly indistinguishable rates of temperature change. One is from before 1950, so must be natural variability. Anyone remember where those graphs are available? (Sure, I can recreate them, but prefer to give credit to the original article.)

April 2, 2014 1:47 pm

A brilliant, concise and precise answer. Thanks

April 2, 2014 1:55 pm

As I understand it, supposedly 95% of the greenhouse effect is due to water vapor, the remaining 5% from methane, CO2, ozone and other chemicals. If this is true, why would anyone be surprised that models which primarily attempt to explain warming by examining CO2 and methane (and aerosols) are not working? Seems to me the slightest changes in water vapor behavior would completely overwhelm any effects of changes in CO2 and methane concentrations.

April 2, 2014 1:56 pm

Thank you for your lucid explanations.

April 2, 2014 1:56 pm

You forgot to mention non-stationarity. In statistics, stationarity has to do with the underlying probability distribution being the same over time, so non-stationary means that the distribution is changing. When analyzing time-series data, it is a problem if the underlying system changes, in effect changing the data distribution. Earth’s climate system does change over time, with respect to albedo, circulation, particulates, etc. These changes modify the climate engine, changing how it works, making accurate modeling practically impossible.

juan slayton
April 2, 2014 1:59 pm

heteroskedasticity ??!!
Aw, com’on m’Lord. First you guys produce the OED, then you pull rank….
: > )

Latitude
April 2, 2014 2:02 pm

excellent….
“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
..and no believer can make a GCM based on a fraudulent temperature history

Coin Lurker
April 2, 2014 2:07 pm

Where did Cawley say these things? I’ve googled for several of the quotes in vain.

April 2, 2014 2:07 pm


I don’t know of a graph, but there was this exchange between the greenest of greens (in both senses of the word) the BBC’s Roger Harrabin and Professor Phil Jones of Climategate ‘fame’ from 2010:
http://news.bbc.co.uk/1/hi/8511670.stm
RH – Do you agree that according to the global temperature record used by the IPCC, the rates of global warming from 1860-1880, 1910-1940 and 1975-1998 were identical?
PJ – (after prevarication) So, in answer to the question, the warming rates for all 4 periods are similar and not statistically significantly different from each other.
RH – Do you agree that from 1995 to the present there has been no statistically-significant global warming
PJ – Yes, but only just.
PJ also says about natural variability ‘This area is slightly outside my area of expertise.’ which is honest, at least. Of course, one could argue that ‘so is everything else’.

Alan Robertson
April 2, 2014 2:10 pm

AnonyMoose says:
April 2, 2014 at 1:46 pm
“Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability.”
There is a pair of temperature graphs from the 20th century which show nearly indistinguishable rates of temperature change. One is from before 1950, so must be natural variability. Anyone remember where those graphs are available? (Sure, I can recreate them, but prefer to give credit to the original article.)
___________________________
Do you remember where I parked my car?
🙂
Lots of good graphs in these links:
http://wattsupwiththat.com/2014/03/29/when-did-anthropogenic-global-warming-begin/#more-106553
http://wattsupwiththat.com/2014/03/25/study-many-us-weather-stations-show-cooling-maximum-temperatures-flat/#more-106253

JJ
April 2, 2014 2:16 pm

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

I’m sorry, but the proper response to this nonsense is “So #$%^ing what?”
Alarmist GCMs do not verify. The relative performance or even existence of any other GCM is absolutely irrelevant to that question. Claiming otherwise is a Tu Quoque fallacy, and a decidedly unscientific thing to say.

6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.

LOL. It is understanding science, specifically the science of modeling and the epistemology of science generally, that causes one to reject these models.

April 2, 2014 2:17 pm

Christopher Monckton of Brenchley:
Off topic, but, just in case you missed it, your name came up in Wall Street Journal today in the context of “The Unpersuadables” by Will Storr: http://online.wsj.com/news/articles/SB10001424052702304418404579467702052177982?mod=WSJ_LifeStyle_Lifestyle_5&mg=reno64-wsj.
A hatchet job, of course, but it does keep one’s name before the public.

ShrNfr
April 2, 2014 2:18 pm

“Truth is what works.” – William James
No matter what the alternative models might be, it is evident that the models from the IPCC have produced significant and biased errors when their forecasts and the future are compared. Given that these models are being used to justify extraordinary costs and efforts, it should be required that they at least do not produce biased predictions. End of topic.

son of mulder
April 2, 2014 2:21 pm

“5. Its evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available.”
Even with perfect knowledge of the initial conditions, as soon as a computer starts to calculate it makes rounding errors, and so the track diverges from the actual one.
Sorry to be a pedant

Alexander K
April 2, 2014 2:23 pm

Thanks for yet another clear and lucid exposition of those pesky things called ‘demonstrable facts’ which the warmist brethren prefer to obfuscate.

Angech
April 2, 2014 2:23 pm

Dikranmarsupial cannot be Dr Gavin Cawley. I have never met the good mild-mannered doctor with his refined manners and steel-trap mind. I have however encountered the marsupial in its native habitats at Skeptical Science and Tamino’s, where its behaviour and language leads the bunch. It might be a big brother to the little Australian island marsupial with bad manners. Thank you for tanning its Hyde.

Txomin
April 2, 2014 2:24 pm

Excellent, Monckton, excellent. And thank you for letting your manners be nearly as good as your argumentation.

u.k.(us)
April 2, 2014 2:28 pm

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
==============
It ain’t from a lack of trying.
Maybe the variables need better constraints?

Anything is possible
April 2, 2014 2:29 pm

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
==================================
Which begs the question : Has any skeptic been given the opportunity to try?

Dodgy Geezer
April 2, 2014 2:30 pm

…This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again)…
With all the high technology we have access to in the UK, CAN’T SOMEONE CATCH THAT DEMMED BUTTERFLY??!!!

M seward
April 2, 2014 2:32 pm

This drivel regarding linear trends just gets my goat and for me epitomises the stupidity of so much of the so-called research regarding climate change.
Fitting linear trends to a set of data is fine if you have no knowledge of any higher-order trend behaviour of the data, i.e. you do not understand the mechanism. If you do have higher-order insight into the mechanism and know or reasonably suspect it contains cyclical or other non-linear elements, then using a linear fit is simply puerile beyond a certain simplistic point.
I once saw a paper that purported to show an uptrend in sea level. It had a fairly long data set of local sea level oscillating, and when compared to the PDO there was a correlation. The PDO oscillation was sine-wave-like and the pattern was plain as day when you looked at the graph. So what did this paper do? It ran a linear regression and discerned an uptrend. Ergo global warming was causing sea-level rise. The problem? The data set started in a “trough” and finished near a “crest”, and so the “uptrend” was a construct not of global warming but of mismatching linear mathematics with sinusoidal data. They would have got essentially the same result if the data conformed to a pure sine wave, i.e. by definition had zero uptrend.
This sort of stuff just goes to the calibre of people involved in the work.
Please spare me the linear-trend crap, because it is as vulnerable to the cherry-picking argument as cherry-picking sinusoidal data.
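The sine-wave point is easy to reproduce numerically. A pure sine wave has zero long-run trend, yet a straight line fitted from a trough to the following crest shows a confident “uptrend”. The 60-year period below is an arbitrary stand-in for a PDO-like oscillation, not a value from the paper described.

```python
import numpy as np

t = np.linspace(0, 1000, 2001)
sea_level = np.sin(2 * np.pi * t / 60)          # zero-trend oscillation

window = (t >= 45) & (t <= 75)                  # trough (t=45) to crest (t=75)
window_slope = np.polyfit(t[window], sea_level[window], 1)[0]  # spuriously positive
full_slope = np.polyfit(t, sea_level, 1)[0]     # near zero, as it should be
```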

Dodgy Geezer
April 2, 2014 2:34 pm

@juan slayton
…heteroskedasticity ??!!
Aw, com’on m’Lord. First you guys produce the OED, then you pull rank….

The word is not difficult to understand once you realise that a proper education includes Classical Greek…

j ferguson
April 2, 2014 2:35 pm

“pray in aid” = “argue” ?
