Why models can't predict climate accurately

 By Christopher Monckton of Brenchley

Dr Gavin Cawley, a computer modeler at the University of East Anglia, who posts as “dikranmarsupial”, is uncomfortable with my regular feature articles here at WUWT demonstrating the growing discrepancy between the rapid global warming predicted by the models and the far less exciting changes that actually happen in the real world.

He brings forward the following indictments, which I shall summarize and answer as I go:

 

1. The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K]. He says: “Cherry picking the interval to maximise the strength of the evidence in favour of your argument is bad statistics.”

The question I ask when compiling the monthly graph is this: “What is the earliest month from which the least-squares linear-regression temperature trend to the present does not exceed zero?” The answer, therefore, is not cherry-picked but calculated. It is currently September 1996, a period of 17 years 6 months. Dr Pachauri, the IPCC’s chairman, admitted the 17-year Pause in Melbourne in February 2013 (though he has more recently got with the Party Line and has become a Pause Denier).
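
For anyone wishing to reproduce the calculation, here is a minimal sketch in Python (assuming anoms is a NumPy array of monthly RSS anomalies, oldest first; the 24-month floor is a guard against trivially short tails, not part of the stated method):

    import numpy as np

    def earliest_flat_start(anoms, min_len=24):
        """Index of the earliest month from which the least-squares
        linear trend to the present does not exceed zero."""
        for i in range(len(anoms) - min_len):
            tail = anoms[i:]
            slope = np.polyfit(np.arange(len(tail)), tail, 1)[0]  # K per month
            if slope <= 0.0:
                return i
        return None  # every candidate start month gives a positive trend

Applied to the RSS series through February 2014, this should return the index corresponding to September 1996 if the figure quoted above is right.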

2. “In the case of the ‘Pause’, the statistical test is straightforward. You just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades.”

No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996 to date is that they tend to depress the long-run trend, which, on the entire dataset from 1979 to date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century: two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.
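
What such an experiment might look like is left unstated; one simple reading, offered here only as a sketch, is a running sum of TSI anomalies about a baseline, for comparison against the temperature record:

    import numpy as np

    def tsi_time_integral(tsi, baseline=None):
        """Cumulative time-integral of total solar irradiance anomalies.
        tsi      : 1-D array of TSI in W/m^2 at equal time steps.
        baseline : level to integrate about; defaults to the series mean.
        """
        if baseline is None:
            baseline = tsi.mean()
        return np.cumsum(tsi - baseline)

The result depends heavily on the chosen baseline and on the integration start date, so any “surprise” should be checked for robustness to both.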

4. The evidence for an inconsistency between models and data is stronger than that for the existence of a pause, but neither is yet statistically significant.

Dr Hansen used to say one would need five years without warming to falsify his model. Five years without warming came and went. He said one would really need ten years. Ten years without warming came and went. NOAA, in its State of the Climate report for 2008, said one would need 15 years. Fifteen years came and went. Ben Santer said, “Make that 17 years.” Seventeen years came and went. Now we’re told that even though the Pause has pushed the trend below the 95% significance threshold for very nearly all the models’ near-term projections, it is “not statistically significant”. Sorry – not buying.

5. If the models underestimate the magnitude of the ‘weather’ (e.g. by not predicting the Pause), the significance of the difference between the model mean and the observations is falsely inflated.

In Mark Twain’s words, “Climate is what you expect. Weather is what you get.” Strictly speaking, one needs 60 years’ data to cancel the naturally-occurring influence of the cycles of the Pacific Decadal Oscillation. Let us take East Anglia’s own dataset: HadCRUT4. In the 60 years March 1954-February 2014 the warming was 0.7 K, equivalent to little more than 1.1 K/century. CO2 has been rising at the business-as-usual rate.

The IPCC’s mid-range business-as-usual projection, on its RCP 8.5 scenario, is for warming at 3.7 K/century from 2000-2100. The Pause means we won’t get 3.7 K warming this century unless the warming rate is 4.3 K/century from now to 2100. That is almost four times the observed trend of the past 60 years. One might well expect some growth in the so-far lacklustre warming rate as CO2 emissions continue to increase. But one needs a fanciful imagination (or a GCM) to pretend that we’re likely to see a near-quadrupling of the past 60 years’ warming rate over the next 86 years.
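
The arithmetic behind the 4.3 K/century figure is simple enough to check (taking 2014 as “now” and, per the Pause, crediting 2000-2014 with no warming):

    target = 3.7               # K, RCP 8.5 mid-range warming for 2000-2100
    warming_so_far = 0.0       # K, assumed contribution of 2000-2014 (the Pause)
    years_left = 2100 - 2014   # 86 years

    required = (target - warming_so_far) / years_left * 100  # K/century
    print(f"{required:.1f} K/century")  # prints 4.3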

6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.

No one is “rejecting” the models. However, the models have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date. And Dr Cawley’s argument at this point is a common variant of the logical fallacy of arguing from ignorance. The correct question is not whether the models are the best method we have but whether, given their inherent limitations, they are, or can ever be, an adequate method of making predictions (and, so far, extravagantly excessive ones at that) on the basis of which the West is squandering $1 billion a day to no useful effect.

The answer to that question is No. Our knowledge of key processes, notably the behavior of clouds and aerosols, remains entirely insufficient. For example, a naturally-recurring (and unpredicted) reduction in cloud cover in just the 18 years from 1983-2001 caused 2.9 W m⁻² of radiative forcing. That natural forcing exceeded by more than a quarter the entire 2.3 W m⁻² of anthropogenic forcing in the 262 years from 1750-2011, as published in the IPCC’s Fifth Assessment Report. Yet the models cannot correctly represent cloud forcings.

Then there are temperature feedbacks, which the models use to multiply the direct warming from greenhouse gases by 3. By this artifice, they contrive a problem out of a non-problem: for without strongly net-positive feedbacks the direct warming even from a quadrupling of today’s CO2 concentration would be a harmless 2.3 Cº.
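
That 2.3 Cº figure is reproducible from two conventional values the article does not state: the 5.35 W m⁻² logarithmic CO2 forcing coefficient and a no-feedback (Planck) response of about 3.2 W m⁻² per kelvin; both are textbook assumptions here, not the author’s own numbers:

    import math

    forcing_coeff = 5.35  # W/m^2 (Myhre et al. 1998 value; an assumption here)
    planck = 3.2          # W/m^2 per K, no-feedback response (an assumption here)

    forcing_4x = forcing_coeff * math.log(4.0)  # two doublings: ~7.4 W/m^2
    print(f"{forcing_4x / planck:.1f} K")       # prints 2.3, before feedbacks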

But no feedback’s value can be directly measured, or theoretically inferred, or distinguished from that of any other feedback, or even distinguished from the forcing that triggered it. Yet the models pretend otherwise. They assume, for instance, that because the Clausius-Clapeyron relation establishes that the atmosphere can carry near-exponentially more water vapor as it warms, it must do so. Yet some records, such as the ISCCP measurements, show water vapor declining. The models also underestimate the cooling effect of evaporation threefold. And they are unable to account sufficiently for the heteroskedasticity evident even in the noise that overlies the signal.
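
The Clausius-Clapeyron point is easy to illustrate numerically: near surface temperatures, saturation vapor pressure rises by roughly 7% per kelvin. A sketch using the standard Magnus approximation (the formula and constants are textbook values, not taken from the article):

    import math

    def sat_vapor_pressure(t_c):
        """Magnus approximation: saturation vapor pressure (hPa) at t_c Celsius."""
        return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

    growth = sat_vapor_pressure(16.0) / sat_vapor_pressure(15.0) - 1.0
    print(f"{100 * growth:.1f}% more capacity per kelvin near 15 C")  # ~6.6%

Whether the free atmosphere actually follows that capacity curve is precisely what is in dispute here.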

But the key reason why the models will never be able to make policy-relevant predictions of future global temperature trends is that, mathematically speaking, the climate behaves as a chaotic object. A chaotic object has the following characteristics:

1. It is not random but deterministic. Every change in the climate happens for a reason.

2. It is aperiodic. Appearances of periodicity will occur in various elements of the climate, but closer inspection reveals that often the periods are not of equal length (Fig. 1).

3. It exhibits self-similarity at different scales. One can see this self-similarity across scales in the global temperature record (Fig. 1).

4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again). A toy demonstration follows this list.

5. Its evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.
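
To see how fast tiny initial differences grow in even a simple chaotic system, here is the promised toy sketch using the Lorenz (1963) equations; this illustrates sensitive dependence only, it is not a climate model, and crude forward-Euler integration is used for brevity:

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz 1963 system."""
        x, y, z = state
        return np.array([x + dt * sigma * (y - x),
                         y + dt * (x * (rho - z) - y),
                         z + dt * (x * y - beta * z)])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])  # perturb x by one part in a billion
    for _ in range(3000):               # integrate 30 time units
        a, b = lorenz_step(a), lorenz_step(b)
    print(np.abs(a - b))  # separation is now of the order of the attractor itself

Halving the perturbation barely extends the forecast horizon, which is why better initial data buy predictability only logarithmically.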


Figure 1. Quasi-periodicity at 100,000,000-year, 100,000-year, 1000-year, and 100-year timescales, all showing cycles of lengths and magnitudes that vary unpredictably.

Not every variable in a chaotic object will behave chaotically: nor will the object as a whole behave chaotically under all conditions. I had great difficulty explaining this to the vice-chancellor of the University of East Anglia and his head of research when I visited them a couple of years ago. When I mentioned the aperiodicity that is a characteristic of a chaotic object, the head of research sneered that it was possible to predict reliably that summer would be warmer than winter. So it is: but that fact does not render the climate object predictable.

By the same token, it would not be right to pray in aid the manifest chaoticity of the climate object as a pretext for denying that we can expect or predict that any warming will occur if we add greenhouse gases to the atmosphere. Some warming is to be expected. However, it is by now self-evident that trying to determine how much warming we can expect on the basis of outputs from general-circulation models is futile. They have gotten it too wrong for too long, and at unacceptable cost.

The simplest way to determine climate sensitivity is to run the experiment. We have been doing that since 1950. The answer to date is a warming trend so far below what the models have predicted that the probability of major warming diminishes by the month. The real world exists, and we who live in it will not indefinitely throw money at modelers to model what the models have failed to model: for models cannot predict future warming trends to anything like a sufficient resolution or accuracy to justify shutting down the West.

187 Comments
Berényi Péter
April 2, 2014 3:44 pm

No one is “rejecting” the models.

I do.
To be accurate, I am rejecting reductionist computational general-circulation climate models. I am rejecting them because they are trying to simulate a single run of a unique entity (the terrestrial climate) with no adequate physical understanding of the general class to which said entity belongs.
The class is that of irreproducible quasi-stationary non-equilibrium thermodynamic systems. Some members of this class could be studied experimentally in the lab, but that has never been done. Nor has any entity belonging to this class ever had a successful computational model.
A system is irreproducible if microstates belonging to the same macrostate can evolve into different macrostates in a short time, which is certainly the case with chaos. For such systems not even a straightforward definition of the Jaynes entropy is known, so doing theoretical thermodynamics on them is premature.
However, all is not lost: there is tantalizing evidence of unexplained symmetries in the climate system. One could do experiments in the lab to see whether this is a general property of such systems and whether it is related to some variational principle. That is how science is supposed to work.
Until then, saying that models built along the current paradigm are “the best method we currently have for reasoning about the effects of our (in)actions on future climate” is plain silly. If the best we have is inadequate, we have nothing to work with.

Non Nomen
April 2, 2014 3:44 pm

Thank you, Mylord, for forging another nail for the coffin of the warmistas. Although someone mentioned lately they prefer cremation…

norah4you
April 2, 2014 3:51 pm

92-94% of all CO2 comes from volcanoes, active and dead. All readings close to volcanoes, such as those in Hawaii, come from instruments placed there by volcano experts who use the figures to calculate the next eruption. Almost all the rest of the CO2 comes from natural sources.
As for computer models (I am a trained systems programmer as well as a teacher of geography (including geology), history, and some other subjects): a computer model can never be better than the skill of the systems programmer, and then only if all the factors/variables present in real life (at least 43 to take into account), together with correct (not “corrected”) figures, are used in the program/model. (I used 43 variables, including underwater currents etc., when I wrote a program in the early 1990s to establish correct sea levels in the oceans from the Stone Age up to 1000 AD.) As for today’s so-called models: well, none of them would have passed an exam 30-40 years back. They have forgotten all theory of science… as we said in the old days: bad input, bad output.

garymount
April 2, 2014 4:09 pm

Coin Lurker says:
April 2, 2014 at 2:07 pm
Where did Cawley say these things? I’ve googled for several of the quotes in vain.
– – –
He made these comments in the comments section here:
http://wattsupwiththat.com/2014/03/31/dataset-of-datasets-shows-no-warming-this-millennium/

Henrik Sørensen
April 2, 2014 4:19 pm

heteroskedasticity
I spent five minutes trying to figure out how to pronounce it .. and failed

Konrad
April 2, 2014 4:29 pm

Stupendus says:
April 2, 2014 at 3:24 pm
“Just take a model, any model and remove the CO2 fudge factor, I reckon the accuracy of the model will improve somewhat, maybe by about 97%”
——————————————————–
I believe this would be workable. Of course pressing “Delete” on the whole file would also improve accuracy and have the added benefit of reducing cost to the taxpayer by over 97%.

April 2, 2014 4:33 pm

Still, it looks like the old ones of Chaco Canyon, New Mexico, had a better handle on knowing the weather long term than this Cawley or Mike Mann et al. All they had was some curved stones, life out in the weather, and the talkers from the past.
Sun comes up, sun goes down.
Rain comes, snow comes, hot comes, cold comes, sometimes more, sometimes less.
Repeat long term, short term, very long term; then the very, very long terms come and it’s all new once more.

April 2, 2014 4:44 pm

Oh, and John F. Kerry is a true believer, and he says it’s all a fact.

David L. Hagen
April 2, 2014 4:47 pm
george e. smith
April 2, 2014 4:50 pm

I can’t fathom why the rules of the Monckton flat-earth climate game are incomprehensible to these “computer medelers”; excuse me, that’s “computer modelers”.
Rule #1…..Obtain the most recently released (current month) global anomaly report from GISSTemp / HadCrud / UAH / RSS / whatever; and record that as “Final Month Datum.”
Rule #2……Obtain the same data source report, from the next previous month, and record that as “Initial Month Datum”.
Rule #3…..CALCULATE, by standard statistical protocol, the value of the trend between the Initial Month Datum and the Final Month Datum, and the standard deviation for that trend value.
Rule #4…..If the calculated value for the trend is statistically different from zero, as indicated by the standard deviation value, go to END.
Rule #5…..If the calculated value for the trend is statistically equal to zero, based on the calculated standard deviation, jump to Rule #2.
END… subtract the month number for Initial Month Datum, from the month number for Final Month Datum.
Report the result at END to WUWT, and assert identity with Monckton of Brenchley.
QED !

banjo
April 2, 2014 4:53 pm

Dikran Marsupial?
That would be a Prattypus.

JimF
April 2, 2014 4:53 pm

Great stuff, m’lord.

rogerknights
April 2, 2014 4:53 pm

In Mark Twain’s words, “Climate is what you expect. Weather is what you get.”

Twain made the first rough approximation of that saying, but it was polished into its current form by Robert Heinlein, who deserves credit for it.
PS: Bravo!

April 2, 2014 4:59 pm

Looks like our friend Lord Monckton may be headed to the Tower of London soon….
http://www.thetimes.co.uk/tto/environment/article4051905.ece
Is the gibbet next?

george e. smith
April 2, 2014 5:00 pm

To ensure agreement between experimentally observed values for climate warming data, and computer generated simulations, the following software patch is to be installed, and the simulations rerun.
Patch: Change the value of the variable, “Earth Rotation frequency” , from 0.0000000 to 1.1574074E-5 Hertz.

April 2, 2014 5:14 pm

It is better to understand the science than to reject the models

It is better to understand the science. And the science says to reject the models. So I disagree with Christopher Monckton: some do indeed reject the models, because they are useless. Now, that does not mean that “models” will never be useful. However, the ones in use today suffer from an extreme bias of a political nature that renders them useless.

April 2, 2014 5:15 pm

BTW: Thanks for the tip about the kangaroo. I do not understand why some think that hiding their identity somehow makes them wiser.

April 2, 2014 5:29 pm

“The significance of the long Pauses from 1979-1994…”
UAH globe, Dec 1978 to March 1995: http://snag.gy/jF8Gh.jpg

MarkB
April 2, 2014 5:40 pm

Christopher Monckton of Brenchley
. . .
3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.

It would be interesting if the author could expand upon how this time-integral process works. To support the claim “no explanation beyond natural variability is needed” one has to do some credible attribution; otherwise the proper claim is “we don’t know what’s going on”, and in that case one can’t rule out a significant anthropogenic driver.
It would also be interesting if the author could comment on how to reconcile this with the results of Kosaka and Xie (2013), which suggest an attribution comprising both a specifically identified natural-variation component and an anthropogenic component over the relevant period.

KevinK
April 2, 2014 5:45 pm

“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
I’ve never tried to model the exact number of angels that will fit on the head of a pin either.
The observed climate can be explained simply; it’s awful dang chaotic, changes slowly (except when it changes abruptly), and there is so much noise (weather) superimposed on top of the climate that observing it is the only sensible thing that folks should be doing.
Cheers, Kevin

bw
April 2, 2014 5:47 pm

The full RSS MSU plots of anomaly from 1979 to 2014 show zero anomaly around 1980.
The current anomaly is about 0.2 degrees higher, 34 years later.
Note that anomaly values show month-to-month variability that can easily reach 0.2 degrees; that is not to say the real Earth actually changes by 0.2 degrees in a month.
Annual values can change by more. The point here is that an anomaly change of 0.2 degrees over 34 years is not significant. UAH shows the same.
It does not matter what the models claim: actual global temperatures, measured with scientific integrity over 34 years, show no significant change.

Greg
April 2, 2014 5:48 pm

Svend Ferdinandsen says:
I believe the models get it so wrong, because they are tuned and constructed to replicate the warming from 1980 to 2000, which they do very well
Well, they don’t do it that well. There are many different ways to combine the many inputs and frig factors to reproduce the general wiggles of a short period. The problem is that the result does not project backwards or forwards into anything resembling the real world.
I’ve recently discovered that Lacis et al. (part of Hansen’s team at GISS) published a quite thorough paper in 1993 that had volcanic forcing considerably stronger than they now attribute to it, based on simple direct physics. More recently they’ve watered it down to try to get the data to fit their models!
http://climategrog.files.wordpress.com/2014/03/erbe_vs_aerosol_forcing.png?w=814
In fact the earlier figures fit Mt. Pinatubo much better, but they require recognition of the strong negative feedback in the tropics.
That also reveals a strong warming climate reaction that runs at least until the 1998 El Niño.
http://climategrog.files.wordpress.com/2014/03/tropical-feedback_resp.png?w=814
The warming they are trying to attribute to CO2 is a climate kickback recovering the energy deficit caused by major volcanoes.
To fix the models they just need to play with all the frig factors:
Put volcanic aerosol forcing back to what they properly calculated it to be in 1993 (optical density * 31 W/m2).
Add an 8-month exponential-decay reaction (relaxation to equilibrium).
Apply 90% negative feedback to radiative changes by tweaking the tropical cloud parametrisations.
That matches the top-of-atmosphere energy budget measured by ERBE. Then temperatures stop rising and the models work. The CO2 problem disappears in a puff of colourless, odourless, non-toxic gas, since it too is reduced by 90% by cloud feedbacks.
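
A minimal sketch of that recipe, assuming a monthly series of aerosol optical depth (the unit-area exponential kernel is an assumed form of the “relaxation to equilibrium”):

    import numpy as np

    def volcanic_response(aod, scale=31.0, tau=8.0):
        """Volcanic forcing from aerosol optical depth, lagged by an
        exponential relaxation with an e-folding time of tau months.
        aod   : 1-D array of monthly mean aerosol optical depth.
        scale : W/m^2 per unit optical depth (the 1993-era value cited above).
        """
        forcing = scale * aod                  # instantaneous forcing, W/m^2
        t = np.arange(len(aod))
        kernel = np.exp(-t / tau)
        kernel /= kernel.sum()                 # unit area preserves total forcing
        return np.convolve(forcing, kernel)[:len(aod)]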

April 2, 2014 5:48 pm

That third graph is most remarkable. Not only does it show that what we are experiencing now is nothing new, I am amazed that the Earth’s temperature has varied by only one degree (Celsius) over the last 2,000 years. The Earth’s climate is remarkably stable, all things considered.

Arno Arrak
April 2, 2014 5:50 pm

It is clear that these pseudo-scientists doing the modeling have not the slightest idea what to do with scientific measurements. If you get obviously inaccurate results, what you do is find out the reason, make a correction, and try again. There is no evidence that anything of this sort has been done in the 24 years since Hansen first tried to model future “business as usual” temperatures. We know that his predictions have been way off, but year after year new predictions come out, more money is spent on supercomputers, the number of predictions skyrockets, and yet none of them work. After 24 years, their predictions are no better than Hansen’s first. As an administrator I would decide that after 24 years of trying it is not working, cut my losses, and close down the enterprise. As a scientist I would decide that after 24 years of trying it is clear either that it is impossible to make it work or that the personnel are simply incompetent for the task. In either case my decision would be the same: shut it down and stop the flow of erroneous predictions into global climate forecasts. As a neutral outside observer I seriously suggest shutting down the climate modeling arm, selling the hardware, and letting the personnel go. That is common practice in business, and in government as well. When Nixon canceled the last three moon shots, the prime contractor for the Apollo Lunar Module was forced to lay off ten thousand men within a month. That was unjust; laying off those non-performing modelers would serve the cause of justice and improve climate forecasts.

April 2, 2014 5:50 pm

To answer the question “Why can’t models predict climate accurately?”: models (fashion models) are supposed to be attractive, not smart.