Why models can't predict climate accurately

 By Christopher Monckton of Brenchley

Dr Gavin Cawley, a computer modeler at the University of East Anglia, who posts as “dikranmarsupial”, is uncomfortable with my regular feature articles here at WUWT demonstrating the growing discrepancy between the rapid global warming predicted by the models and the far less exciting changes that actually happen in the real world.

He brings forward the following indictments, which I shall summarize and answer as I go:

 

1. The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K]. He says: “Cherry picking the interval to maximise the strength of the evidence in favour of your argument is bad statistics.”

The question I ask when compiling the monthly graph is this: “What is the earliest month from which the least-squares linear-regression temperature trend to the present does not exceed zero?” The answer, therefore, is not cherry-picked but calculated. It is currently September 1996 – a period of 17 years 6 months. Dr Pachauri, the IPCC’s climate-science chairman, admitted the 17-year Pause in Melbourne in February 2013 (though he has more recently got with the Party Line and has become a Pause Denier).
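For anyone who wishes to reproduce the calculation, it is a simple scan. A minimal sketch in Python follows; the synthetic series stands in for the RSS monthly anomalies, and the two-year minimum window is an arbitrary choice of mine, not part of the published method:

```python
# Sketch of the calculation described above: find the earliest month from
# which the least-squares linear-regression trend to the present does not
# exceed zero. The anomaly series here is synthetic; real use would load
# the RSS monthly global anomalies instead.
import numpy as np

def earliest_zero_trend_start(anomalies, min_months=24):
    """Earliest start index whose OLS trend to the end of the series is <= 0."""
    for start in range(anomalies.size - min_months):
        window = anomalies[start:]
        x = np.arange(window.size)
        slope = np.polyfit(x, window, 1)[0]   # K per month
        if slope <= 0:
            return start
    return None

rng = np.random.default_rng(42)
warming = np.linspace(0.0, 0.4, 200)     # a warming segment
pause = rng.normal(0.4, 0.1, 210)        # a trendless segment
idx = earliest_zero_trend_start(np.concatenate([warming, pause]))
print(f"Earliest zero-trend start: month index {idx}")
```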

2. “In the case of the ‘Pause’, the statistical test is straightforward. You just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades.”

No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996-date is that they tend to depress the long-run trend, which, on the entire dataset from 1979-date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century. That was two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.
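For those who wish to experiment, a minimal sketch of such a time-integral model follows. The baseline, the scale factor and the series itself are illustrative placeholders, not a fitted or published model; a real test would substitute a published TSI or sunspot reconstruction:

```python
# Sketch of a total-solar-irradiance time-integral model. All constants
# (baseline, scale) and the synthetic TSI series are illustrative only.
import numpy as np

def tsi_integral_model(tsi, baseline, scale):
    """Scale the cumulative sum of TSI departures from an assumed baseline."""
    departures = tsi - baseline            # W/m^2 above/below equilibrium
    return scale * np.cumsum(departures)   # one step per month

rng = np.random.default_rng(0)
months = np.arange(12 * 60)                # 60 years of monthly values
tsi = (1361.0 + 0.5 * np.sin(2 * np.pi * months / 132)
       + rng.normal(0.0, 0.1, months.size))   # ~11-year cycle plus noise
proxy = tsi_integral_model(tsi, baseline=1361.0, scale=1e-3)
print(f"Final value of the integral proxy: {proxy[-1]:.3f}")
```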

4. The evidence for an inconsistency between models and data is stronger than that for the existence of a pause, but neither is yet statistically significant.

Dr Hansen used to say one would need five years without warming to falsify his model. Five years without warming came and went. He said one would really need ten years. Ten years without warming came and went. The NOAA, in its State of the Climate report for 2008, said one would need 15 years. Fifteen years came and went. Ben Santer said, “Make that 17 years.” Seventeen years came and went. Now we’re told that even though the Pause has pushed the trend below the 95% significance threshold for very nearly all the models’ near-term projections, it is “not statistically significant”. Sorry – not buying.

5. If the models underestimate the magnitude of the ‘weather’ (e.g. by not predicting the Pause), the significance of the difference between the model mean and the observations is falsely inflated.

In Mark Twain’s words, “Climate is what you expect. Weather is what you get.” Strictly speaking, one needs 60 years’ data to cancel the naturally-occurring influence of the cycles of the Pacific Decadal Oscillation. Let us take East Anglia’s own dataset: HadCRUT4. In the 60 years March 1954-February 2014 the warming trend was 0.7 K, equivalent to just 1.1 K/century. CO2 has been rising at the business-as-usual rate.

The IPCC’s mid-range business-as-usual projection, on its RCP 8.5 scenario, is for warming at 3.7 K/century from 2000-2100. The Pause means we won’t get 3.7 K warming this century unless the warming rate is 4.3 K/century from now to 2100. That is almost four times the observed trend of the past 60 years. One might well expect some growth in the so-far lacklustre warming rate as CO2 emissions continue to increase. But one needs a fanciful imagination (or a GCM) to pretend that we’re likely to see a near-quadrupling of the past 60 years’ warming rate over the next 86 years.
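The arithmetic is easily checked; the figures below are simply those quoted above, with the remaining span taken as 2014-2100:

```python
# Check of the required-rate arithmetic above. All inputs are the figures
# quoted in the text; none is an independent estimate.
target_warming_k = 3.7          # IPCC RCP 8.5 mid-range, 2000-2100
warming_so_far_k = 0.0          # approximate change 2000-2014 (the Pause)
years_remaining = 2100 - 2014   # 86 years

required_rate = (target_warming_k - warming_so_far_k) / years_remaining * 100
observed_rate = 1.1             # K/century, the 60-year trend cited above

print(f"Required rate: {required_rate:.1f} K/century")               # ~4.3
print(f"Multiple of observed: {required_rate / observed_rate:.1f}")  # ~3.9
```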

6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.

No one is “rejecting” the models. However, they have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date. And Dr Cawley’s argument at this point is a common variant of the logical fallacy of arguing from ignorance. The correct question is not whether the models are the best method we have but whether, given their inherent limitations, they are – or can ever be – an adequate method of making predictions (and, so far, extravagantly excessive ones at that) on the basis of which the West is squandering $1 billion a day to no useful effect.

The answer to that question is No. Our knowledge of key processes – notably the behavior of clouds and aerosols – remains entirely insufficient. For example, a naturally-recurring (and unpredicted) reduction in cloud cover in just 18 years from 1983-2001 caused 2.9 watts per square meter of radiative forcing. That natural forcing exceeded by more than a quarter the entire 2.3 W m⁻² anthropogenic forcing in the 262 years from 1750-2011 as published in the IPCC’s Fifth Assessment Report. Yet the models cannot correctly represent cloud forcings.

Then there are temperature feedbacks, which the models use to multiply the direct warming from greenhouse gases by 3. By this artifice, they contrive a problem out of a non-problem: for without strongly net-positive feedbacks the direct warming even from a quadrupling of today’s CO2 concentration would be a harmless 2.3 Cº.
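The multiplication works through the standard feedback-gain relation, in which equilibrium warming is the direct warming divided by (1 − f), f being the net feedback fraction. A short sketch, using the 2.3 Cº direct-warming figure from the text and purely illustrative values of f:

```python
# The standard zero-dimensional feedback-gain relation referred to above:
# equilibrium warming = direct warming / (1 - f). The direct-warming figure
# is the one in the text; the feedback fractions are illustrative.
def equilibrium_warming_k(direct_k, f):
    if f >= 1.0:
        raise ValueError("f >= 1 would imply a runaway response")
    return direct_k / (1.0 - f)

direct_k = 2.3                      # direct warming from quadrupled CO2
for f in (0.0, 1.0 / 3.0, 2.0 / 3.0):
    print(f"f = {f:.2f}: {equilibrium_warming_k(direct_k, f):.1f} K")
# f = 2/3 gives 6.9 K, i.e. the direct warming multiplied by 3.
```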

But no feedback’s value can be directly measured, or theoretically inferred, or distinguished from that of any other feedback, or even distinguished from the forcing that triggered it. Yet the models pretend otherwise. They assume, for instance, that because the Clausius-Clapeyron relation establishes that the atmosphere can carry near-exponentially more water vapor as it warms, it must do so. Yet some records, such as the ISCCP measurements, show water vapor declining. The models are also underestimating the cooling effect of evaporation threefold. And they are unable to account sufficiently for the heteroskedasticity evident even in the noise that overlies the signal.
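The Clausius-Clapeyron point is easily illustrated. The sketch below uses the standard August-Roche-Magnus approximation to the saturation curve; whether the real atmosphere actually follows that curve is, of course, precisely what is in dispute:

```python
# Saturation vapor pressure over water via the August-Roche-Magnus
# approximation: a near-exponential rise of roughly 7% per kelvin. This
# shows what the atmosphere *can* carry, not what it necessarily does.
import math

def saturation_vapor_pressure_hpa(t_celsius):
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

for t in (0, 10, 20, 30):
    print(f"{t:2d} C: {saturation_vapor_pressure_hpa(t):6.2f} hPa")
# Output roughly doubles with each 10 C step.
```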

But the key reason why the models will never be able to make policy-relevant predictions of future global temperature trends is that, mathematically speaking, the climate behaves as a chaotic object. A chaotic object has the following characteristics:

1. It is not random but deterministic. Every change in the climate happens for a reason.

2. It is aperiodic. Appearances of periodicity will occur in various elements of the climate, but closer inspection reveals that often the periods are not of equal length (Fig. 1).

3. It exhibits self-similarity at different scales. One can see this scalar self-similarity in the global temperature record (Fig. 1).

4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again). A short numerical sketch after this list illustrates the point.

5. Its evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.
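Here is the sketch promised above: the Lorenz (1963) equations, with his classic parameters, run twice from initial states differing by one part in a billion. The integrator and step size are crude choices of mine, but sufficient to show the divergence:

```python
# Sensitive dependence on initial conditions in the Lorenz (1963) system.
# Two trajectories that start one part in a billion apart end up on
# entirely different parts of the attractor. Forward-Euler stepping with
# a small step is crude but adequate for illustration.
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-9, 0.0, 0.0])     # the "butterfly": a 1e-9 nudge in x
dt = 0.001
for step in range(1, 30001):
    a = lorenz_step(a, dt)
    b = lorenz_step(b, dt)
    if step % 10000 == 0:
        print(f"t = {step * dt:4.0f}: separation = {np.linalg.norm(a - b):.3e}")
# The separation grows from 1e-9 to order 10: the prediction horizon is gone.
```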


Figure 1. Quasi-periodicity at 100,000,000-year, 100,000-year, 1000-year, and 100-year timescales, all showing cycles of lengths and magnitudes that vary unpredictably.

Not every variable in a chaotic object will behave chaotically: nor will the object as a whole behave chaotically under all conditions. I had great difficulty explaining this to the vice-chancellor of East Anglia and his head of research when I visited them a couple of years ago. When I mentioned the aperiodicity that is a characteristic of a chaotic object, the head of research sneered that it was possible to predict reliably that summer would be warmer than winter. So it is: but that fact does not render the climate object predictable.

By the same token, it would not be right to pray in aid the manifest chaoticity with which the climate object behaves as a pretext for denying that we can expect or predict that any warming will occur if we add greenhouse gases to the atmosphere. Some warming is to be expected. However, it is by now self-evident that trying to determine how much warming we can expect on the basis of outputs from general-circulation models is futile. They have gotten it too wrong for too long, and at unacceptable cost.

The simplest way to determine climate sensitivity is to run the experiment. We have been doing that since 1950. The answer, so far, is a warming trend so far below what the models have predicted that the probability of major warming diminishes by the month. The real world exists, and we who live in it will not indefinitely throw money at modelers to model what the models have failed to model: for models cannot predict future warming trends to anything like a sufficient resolution or accuracy to justify shutting down the West.

MattS
April 2, 2014 10:34 pm

” The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K].”
Is this accurate? If it is, doesn’t this mean that something like 80% of all the warming in the RSS data set took place in just 3 years (1994-1996)? Can you say “step change”?

Leonard Lane
April 2, 2014 10:44 pm

juan slayton says:
April 2, 2014 at 1:59 pm
heteroskedasticity ??!!
Aw, com’on m’Lord. First you guys produce the OED, then you pull rank….
: > )
Here is a simple explanation of the term.
http://www.statsmakemecry.com/smmctheblog/confusing-stats-terms-explained-heteroscedasticity-heteroske.html
It is also a basic term in most statistics books that discuss linear regression.

Bruce
April 2, 2014 10:52 pm

Again the good Lord uses big scientific/mathematical words that he doesn’t fully understand.
No doubt they give him a warm feeling but they don’t lift the Highland fog.
The fact that Cawley doesn’t know what he is talking about makes the Lord’s verbosity especially meretricious.

Admin
April 2, 2014 11:16 pm

Here’s a nice simple solar integral model, to help those poor CRU modellers get started.
http://woodfortrees.org/plot/hadcrut4gl/from:1850/mean:50/normalise/plot/sidc-ssn/from:1850/mean:50/offset:-40/integral/normalise

juan slayton
April 2, 2014 11:20 pm

Leonard, Bruce: Only English speakers can perpetrate these lexical preposterosities. No, wait, there’s German….

JCR
April 2, 2014 11:33 pm

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
Hang on – why SHOULD skeptics have to produce such a GCM? Skeptics aren’t the ones demanding massive social, political and economic disruption. If someone wants me to take such a hit to my standard of living, they had better have all their scientific ducks in a row. Computer-assisted guessing isn’t going to cut it, especially when the guesses are diverging further and further from real-world observations.
– what makes you think Monckton DOESN’T understand the “big scientific/mathematical words” he uses?

phlogiston
April 3, 2014 12:10 am

Dr. Doug L. Hoffman says:
April 2, 2014 at 1:56 pm
You forgot to mention non-stationarity. In statistics, stationarity has to do with the underlying probability distribution being the same over time. So non-stationary means that the distribution is changing over time.
The last section on chaotic nonlinearity in climate is a golden nugget hidden at the end of Brenchley’s fine post. His point about quasi-periodicity is hugely important in the context of tireless attempts to attribute climate “oscillations” to direct astrophysical forcing. Equally important is Doug’s point about non-stationarity – the Lorenz climate runs clearly showed that the evolving chaotic system moved from one apparent plateau or baseline to another with no outside forcing – just its internal attractors. Bob Tisdale has educated us about the climate shifts – moments at which global ocean-driven temperatures have apparently shifted up to a different level at certain timepoints. This is classic behavior of Lorenz type deterministic nonlinear systems.
The foundational paper for climate modeling is of course Deterministic Nonperiodic Flow by Lorenz (1963):
http://www.astro.puc.cl/~rparra/tools/PAPERS/lorenz1962.pdf
Just think for a moment what computer modeling meant in 1963. Computations in the hundreds rather than hundreds of millions per second, iterative runs lasting several days which today would take only a second on a cell phone. Despite the advance in computer technology, however, this paper has not been and may never be surpassed in terms of its significance to climate science. Without an understanding of this paper there is no climate science, any more than there is a science of gravity without Newton’s Principia and Einstein’s general relativity, or chemistry without Mendeleev’s periodic table, etc.
When will folks get the point – climate temperature shifts, or “climate change” does not need to be explained by outside forcing. Not by CO2, soot, small particles, big particles, ozone, nor by astrophysical cycles of whatever exotic flavor or harmonic.
Climate changes itself.

Editor
April 3, 2014 12:15 am

Christopher Monckton has written an accurate, relevant and forceful article as usual. Thank you, CM. However I would not quite agree with you on two points:
Re point 2: I don’t buy the idea that there was a step rise in temperature around 1994-6. Yes, you can fit a flat linear trend up to 1994 and again at a higher level from 1996 on. But a flat linear trend over a period doesn’t mean the temperature was flat. If you take a simple sine wave and tilt it to a rising trend, you can get the same result.
Re point 6: CM says “No one is ‘rejecting’ the models”, and then gives a long, detailed and accurate description of the models which demonstrates very clearly why the models should indeed be rejected. To CM’s analysis I would add that the method they use is incapable of remaining accurate more than a few weeks into the future, if that. The reason is that they use similar logic and processes to weather models, which will always magnify inaccuracies exponentially over time. It is for this reason that weather models are incapable of predicting weather more than a few days ahead. The same applies to climate models. [CM covers this obliquely in 6-4, 6-5, but I would contend that prediction is manifestly impossible even with perfect knowledge of the initial conditions.] A few weeks is totally useless for climate prediction, so the climate models should indeed be rejected.

Hot under the collar
April 3, 2014 12:19 am

The UEA has had its own, more recent problems with “accuracy”:
http://www.bbc.co.uk/news/uk-england-norfolk-26764883

Dr. Strangelove
April 3, 2014 12:28 am

Dr. Gavin Cawley (a.k.a. dikranmarsupial)
Can you answer the criticism of Dr. Patrick Frank that GCMs have no predictive power?
http://www.skeptic.com/wordpress/wp-content/uploads/v14n01resources/climate_of_belief.pdf

Evan Jones
Editor
April 3, 2014 1:07 am

Frank says:
April 2, 2014 at 9:35 pm
Thanks.
Basically, what I am saying is that you need to look at the entire picture, writ large, and create your input from there. And to hell with machinegun barrels. There is no way you can ever know if that mattered a tenth as much as the bad pastry Napoleon ate before the battle of Waterloo (Ligny, actually.) We are talking CHAOS, here.
Someone once asked Michelangelo how to make a sculpture of an elephant. He is said to have replied, by taking a block of marble and cutting away all the parts that don’t look like an elephant. There’s your man — he got it.
But these model bozos start out by trying to construct a mouse out of grains of sand (and sure enough, they wind up with a freakin’ elephant — then they cry wolf).
I designed a Civil War game (Blue Vs Gray) that storyboarded down to the last relevant detail. Every card pick. Every die roll. Every step loss. No one else, in all the hundreds of ACW games out there, has even come close to that. (At least one college professor actually used it to teach the Civil War to his class, and that’s the best level of accuracy a wargame can ever cut. And I’m an old hand.)
How did I get there? FROM THE TOP. That’s how I got there.
It had a supremely simple combat results table. How did I design that? I took the 35 or so largest battles of the war (I can still rattle them off). I then noted down how many troops on both sides, what the losses were, and who “won” and why (outnumbering the enemy or superior generalship).
Nothing more.
Here is what I wound up with:
Red Die:
1: Attacker Routed
2: Attacker Defeated
3: General’s Battle (Commander with highest initiative wins. Stalemate on ties.)
4: Soldier’s Battle (If one force is 5 points stronger, stronger force wins. Otherwise, stalemate.)
5: Defender Defeated
6: Defender Routed
If one side outnumbers the other by 10, alter the die one in their favor. (Two for 20 (etc.))
White Die:
1-3: Light Casualties
4-5: Normal Casualties
6: Heavy Casualties
Ten pages of morale rules? Didn’t need ’em. The morale factor was the players themselves. Best “simulation” of that — evah.
Losses based on the size of your own force, NOT largest force. (Yes, I know — but that’s what produced the historically accurate results and plausible possibilities. And that’s the point. I didn’t let “civil war consensus” destroy the accuracy of my game.)
And that’s it. And it gave plausible results for every single one of those battles (including a few two-turn jobs, such as Gettysburg, Shiloh, Chickamauga). I even worked out a simple metric for commanders getting killed, wounded, or sacked that fit seamlessly (Shiloh, Wilderness, etc.).
Now, I have seen incredibly complex Combat Results Tables for strategic-level Civil War games. And not one of them came close to accurately covering the actual results of any given battle, far less all of them.
Why? And why not? Because I let the war write the rules. Top-down. If the war didn’t fit, I altered the rules. But them and their thousands of “accurate” parameters and massively complex sub-systems? What did they do? They tried to make the rules write the war. And they would never alter one of their beloved subsystems because then it “wouldn’t be accurate”.
“Accurate”, hell! They couldn’t make one lousy battle work out right, once the dust cleared. (But, by Jimminy, they got that canister round rate right! The fools.)
And even the outcomes were virtually predetermined: (The infamous “Lee always wins” syndrome.) But Lee didn’t always win. He got clean wins in only 3 of the 7 major attacks he made. Not only did my system reflect that, but the results Lee actually got were more likely — while preserving the — accurate — unpredictability of the results.
I won’t go into how I made the rest of the game work — systems, replayability, “fun factor”. But you get the idea.
http://boardgamegeek.com/collection/items/boardgame/89?rated=1
But the point is that when you are confronted by controlled chaotic subject matter, you design from the top-down. That way you can control your path. But if you do it from the bottom up, you won’t ever come close. Not ever. And anyone who thinks they can do otherwise is just another garden-variety high-IQ fool — crippled by his own intelligence.
If I’m sounding brash and arrogant here, please understand, it’s only because I am.

Greg Goodman
April 3, 2014 1:15 am

Joe Johnson :”…about the sacred models since. Oh yeah, it [climate model] was obvious from the history comments that it was originally written by James Hansen — which probably explains why he was always so enamored of it.”
That probably also explains why, as head of GISS, he was able to ditch the physics-based estimates developed by the team (Lacis et al., 1992) and substitute a lower value that helped the model to better reproduce late-20th-century climate.
What he effectively did was to change the data to fit the model rather than to change the model to fit the data.
http://climategrog.files.wordpress.com/2014/03/erbe_vs_aerosol_forcing.png?w=814
What they tried to do was to match the volcanic forcing directly to the short-term change, ignoring the fact that the initial reaction was much stronger and in close agreement with the Lacis figure. The problem is that this implies a strong negative feedback to changes in radiative forcing.
That did not fit the agenda, so they ditched the solid science of Lacis et al and arbitrarily rescaled to input instead of correcting the model to have a negative feedback.
And that, fundamentally, is why GCMs don’t work: they are not allowed to.
Hansen was co-author on the Lacis paper, so knew full-well that what they were doing was contrary to “the science” said about aerosol forcings.
Once they stop trying to rig the results, there’s a chance GCMs may get a lot closer to reality.

Crispin in Waterloo but really in Cape Point
April 3, 2014 1:38 am

A well presented discussion of the good Doctor’s points raised in objection.
All points were satisfactorily refuted. Models of the climate have absorbed far too much money for far too little in benefit. If they can’t predict the general climate in terms of a temperature rise for which they claim the physics is well understood then the models should be assumed non-physical until proven otherwise.

April 3, 2014 1:38 am

If there are ice-age cycles, then to decontextualise from those is to present a meaningless snapshot. Taking 30-year snapshots and extending 100-year prediction lines is weak. If that is the basis for betting money, then the charts should be shown to financial traders [who would laugh at them].
If this is an interglacial warming period, why look for ‘a villain’ for natural warming and create a show trial of CO2?
Why are modern BSc climate-science courses 50% the study of sustainability? The subject looks misnamed.
Why does the IPCC prefer those who got their PhDs in the last 8 years? Because they would be soaked in the misnamed ‘climate science’?
I did ask on RealClimate whether the code and design for the models were open source, so that anyone could examine the formulas used, and was told only 1 of the models [out of the 50 or so] was open. So it’s basically a ‘black box’. If they are publicly funded, all the designs and code should be online for anyone to inspect.

April 3, 2014 1:59 am

Does anyone understand the Jet Stream? What does the model suggest for that little demon, a number eight in its bed? I just love math.

DirkH
April 3, 2014 3:24 am

Konrad says:
April 2, 2014 at 3:13 pm
“When Callendar tried to revive the idea that adding radiative gases to the atmosphere would reduce the atmospheres radiative cooling ability in 1938, Sir George Simpson had this to say -”
But even Callendar 1938 outperforms today’s unverified, unvalidated computer models.
http://climateaudit.org/2013/07/26/guy-callendar-vs-the-gcms/
So, when Dikranmarsupial calls today’s “climate science” the best there ever was, he is mistaken; climate science has regressed over the last 70 years, and we would have achieved MORE by not doing any “research” at all.

April 3, 2014 3:41 am

Mr Turner asks whether anyone understands the jet streams. The polar jet streams were discovered by accident in the Second World War. They have been much studied. Recently the displacement of the circumpolar vortex caused eddies in the northern polar jet stream, giving many places in eastern North America their coldest winter on record.
Mr Jonas queries whether the Singer Event – a step-change in global temperature in the late 1990s – occurred. Yes, it did, beginning with the rebound in global temperatures following the Pinatubo eruption, and ending with the Great El Niño of 1998. The Singer Event was self-evidently not caused by CO2. One would not necessarily expect CO2-driven warming to be entirely smooth. However – and this goes some way to answering a point by MattS – the profile of temperature change since the late 1970s does not fit well with the notion that CO2 was the main driver of warming.
Mr Jonas and others say I have, in effect “rejected” the models. No, I haven’t. They are not at all valuable in predicting CO2-driven global temperature change, for the empirical and theoretical reasons outlined in the head posting. However, they are useful in short-term weather forecasting because unpredictable chaos-driven bifurcations in the evolution of the climate object are less likely to occur in the short term than in the long. They are also useful in assisting with the understanding of climatic processes. The IPCC, however, abuses them.
Many have taken Dr Cawley to task for having stated that “no skeptic has made a GCM that can explain the observed climate using only natural forcings”. Actually, there are several simple models that can reproduce the temperature change of the instrumental era solely on the basis of the time-integral of solar activity, though – as far as I know – none has yet been peer-reviewed and published in a journal. I received a draft from a Norwegian group last year, for instance. I rewrote it for them to strengthen the English and to clarify the mathematics. There is also a TSI-integral model at woodfortrees.org.
“Bruce” lowers the tone by saying I have used big scientific and mathematical words that I do not fully understand. This is a breach of the Eschenbach Rule. What words did I use that he did not understand? And on what evidence, if any, does he consider that I did not understand them? If he is thinking of “heteroskedasticity”, for instance, he may like to read Dr Cawley’s published papers on the subject.
Frank and Evan M Jones discuss the question whether one should model from the top down or from the bottom up. The usual approach in modeling processes over time – such as climate – is to start at the beginning, known as t0, and include as much information on all scales as the model can handle. This information is called the “initial conditions”. And one should not dismiss the potential influence of apparently minor events. It is in the nature of chaotic objects that even the smallest perturbation in the initial conditions can cause drastic bifurcations in the evolution of the object. It is also worth recalling historical events. The Hyksos people had chariots and the Egyptians didn’t. Guess who won the war.
Mr Webb asserts, on no evidence, that the heteroskedasticity of the climate “ignores the laws of thermodynamics”. What I had actually written was that even the noise overlying the data is heteroskedastic [and particularly with respect to variations in the inputs]. Again, Mr Webb might like to read some of Dr Cawley’s papers before maundering on about these matters.
The untastefully pseudonymous “gymnosperm” says that “trend matching is foolishness”. Another breach of the invaluable Eschenbach Rule. I did not attempt to “match” the trends on different timescales: I merely pointed out, correctly, that at all timescales temperature exhibits aperiodic behavior. Nor is “gymnosperm” correct in saying “we have no meaningful way to evaluate … trends”. The IPCC has made specific predictions about the near-term trend in global temperature. It backdates those predictions to 2005, the last year for data included in the previous Fourth Assessment Report. The predicted trend can, therefore, be evaluated by comparison with the observed trend. The former is rising; the latter is not.
Mr Chang rightly says that GCMs have trouble predicting weather more than a few days out. That is an ineluctable consequence of the chaoticity of the climate. The longer one waits after making a prediction, the more likely it is that a bifurcation will take the climate object off in an unpredicted direction.
“Angech” asks whether a trend 20-50 years long has only a 10% probability of being correct. That is not how statisticians would look at a trend. They would be more concerned with the number of data points (which is why I use monthly rather than annual data in compiling my temperature graphs: the more data points, the more reliable the analysis). And they would be concerned with the measurement uncertainties. In the temperature datasets, the measurement uncertainties to two standard deviations sum to about 0.15 K, so that any trend less than this (up or down) over a given period cannot be statistically distinguished from a zero trend with 95% confidence.
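The test is easily sketched: fit a least-squares line to the monthly anomalies and compare the total change it implies with the ±0.15 K two-sigma measurement uncertainty. The series below is synthetic, standing in for a published monthly record:

```python
# Sketch of the trend test described above. Synthetic monthly anomalies
# stand in for a real dataset; 0.15 K is the two-sigma measurement
# uncertainty figure given in the text.
import numpy as np

def trend_change_k(anomalies):
    """Total change implied by the OLS trend across the whole period."""
    x = np.arange(anomalies.size)
    slope = np.polyfit(x, anomalies, 1)[0]    # K per month
    return slope * (anomalies.size - 1)

rng = np.random.default_rng(1)
pause_like = rng.normal(0.0, 0.1, 17 * 12)    # 17 trendless years
change = trend_change_k(pause_like)
verdict = ("indistinguishable from zero" if abs(change) < 0.15
           else "distinguishable from zero")
print(f"Trend change {change:+.3f} K: {verdict} at 95% confidence")
```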
Mr Oldberg says the models don’t make predictions. Yes, they do, and the predictions are wrong. Get over it.
Mr Marler says there is no intrinsic reason why a chaotic model cannot reasonably predict the weather 200 years into the future. Yes, there is. It’s the Lorenz constraint. In his 1963 paper, in which he founded what later came to be called chaos theory, he wrote: “In view of the inevitable inaccuracy and incompleteness of weather observations, precise, very-long-range weather forecasting would seem to be non-existent.” And “very-long-range” means more than about 10 days out. See also Giorgi (2005); IPCC (2001, para. 14.2.2.2).
Fred Berple says “The IPCC spaghetti graph clearly has some model runs that show no temperature increase, consistent with the pause.” Not the latest one. See Fig. 11.25a of the Fifth Assessment Report (2013). The trend is now below all models’ outputs in the spaghetti graph.
Mr Price says running a climate model again and again with the same initial conditions and getting different results each time and averaging them to get a “projection” is unsound. It is also impossible. Models are deterministic: if two runs of a model have the same initial conditions and the same algorithms, they will produce identical outputs.
The paleozoically pseudonymous “thingadonta” says temperatures may fall. Good point. Dick Lindzen says there is an approximately equal chance of warming or cooling to 2050.
The electrostatically pseudonymous “Sparks” wanders off the reservation, asking why we have a monarchy which, he thinks, interferes with democracy. We keep our monarchy not only because we are proud of our history but also because we are proud of our Queen. The net profit in tourism from having a proper, old-fashioned monarchy greatly exceeds the cost of the monarchy itself. Our Queen is a lot less costly to run than your President. And she does not interfere in democracy: she is a constitutional monarch.
Mr Newton makes the profound point that “The Earth’s climate is remarkably stable, all things considered”. He notices that temperatures have varied little over the past 2000 years. In fact, absolute temperatures have varied by only 1% either side of the long-run average in 420,000 years. That is enough to take us in and out of ice ages, but it is not enough to allow us to imagine that strongly net-positive feedbacks are operating.
The acronymically pseudonymous “bw” says the current RSS anomaly is only 0.2 degrees higher than 34 years ago. That’s not how it’s done. One takes the trend on the data. That shows 0.44 degrees’ warming since 1979.
Mr Whitman kindly says Mr Storr did not persuade him I was “unpersuadable”. In a future posting I hope to answer the question I get a great deal from true-believers: “What would it take to convince you we must shut down the West to Save The Planet?”

ddpalmer
April 3, 2014 4:24 am

“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
So I can’t complain that there is a problem with the US health care system unless I first make my own system? Or complain about the tax system without first making my own tax structure?
What an asinine claim.

April 3, 2014 5:25 am

“What would it take to convince you we must shut down the West to Save The Planet?”
That is their focus. The social ecologists see everything in terms of ecology. They say ‘ego-centric man’ [with his industrialisation ‘ecology’] must be transformed into an eco-centric man whose ecology will be one of joy. So their focus is one of transformation from ego-centric to eco-centric [which sounds very ego-centric to start with :)]. What does it take to turn one into the other? This is the subject of a recent research project:
‘Psychology could hold the key to tackling climate change… “Funded by a €1.5M grant from the European Research Council, Dr Lorraine Whitmarsh from the University’s School of Psychology will for the next five years lead an international team tasked with providing evidence to support this theory.”’ http://phys.org/news/2014-04-psychology-key-tackling-climate.html#jCp
This transformation, they say, should happen even if there were NO global warming, and one hears this echo in responses like ‘isn’t sustainability a good thing we should do anyway?’ and many more phrases along those lines.
For them the climate is just another tool in their toolbox of transformation to the ‘new man’ [a concept used in the 1930s to justify the collectivisation that resulted in massive famines, or the famous ‘killing of the sparrows’ under Mao].
So is it ego-centric to want to have clean water? Is it ego-centric not to want to live in mud huts? What is an eco-centric man? We are told it means anything from not eating meat and ‘not washing to save the planet’ [social ecology] to mass extermination of humans [deep ecology], whose numbers need to be reduced through famines and pestilence [thus reductions in CO2]. Social and deep ecologists hate each other [like the People’s Front and the Popular People’s Front in Monty Python’s Life of Brian].
So they will keep on asking ‘What would it take to convince you we must shut down the West to Save The Planet?’, and if you agree to that then they will ask ‘What would it take to convince you you must stop eating meat to Save The Planet?’, and so on until you are a good eco man. Which is a Taliban-style narrative.
They call on us to sacrifice everything in the name of ‘saving the planet’ except stupidity. Saving means something is under threat. It is no accident that in AR5 the term vulnerability [a term used to measure threat to ecosystems] has, compared to AR4, been totally decoupled from anything to do with climate. So they are not even pretending they need climate reasons any more to promote eco-man sustainability. It is now a good in itself. No need for climate reasons or proof.

Philip Marsh
April 3, 2014 5:32 am

Heteroskedasticity is misspelled. It has a c, not a k. To American eyes it looks similar to skedaddle, as in hetero-skedaddle-away.

steverichards1984
April 3, 2014 6:04 am

It’s a shame that they deny the fact that basic flaws in their thinking lead to gross errors within their computer models.
Their models do not need to be totally accurate.
They need to be good enough.
For example, if you were to write an aeroplane simulator (to practice flying) you would not need to program ‘whole wing’ aerodynamics into the system. To give an accurate representation of how a wing gives more lift at a greater angle of attack, you would just need an equation that lets the system produce the ‘correct’ amount of lift for the airspeed and angle of attack (a toy version is sketched below). Other equations deal with stall speed.
However, if you were designing a new wing for manufacture and you wished to simulate the performance of the wing under all phases of flight, then you would need a very accurate model of how a wing generated lift and drag.
It appears that climate modellers are trying to do a detailed simulation (GCM) without understanding all (or many) of the interactions between key variables.
If you do not understand all of the mechanisms, then increasing the area resolution of your simulation from 1000 km square down to 5 km square will not help you at all.
You have just dramatically increased the number of calculations with errors and / or unknowns.
The model needs to be ‘good enough’ for the job in hand.
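A toy version of that ‘good enough’ lift law, using the standard lift equation with a crude linear lift coefficient below stall; every constant here is invented for illustration:

```python
# Toy "good enough" flight-simulator lift law: the standard lift equation
# L = 0.5 * rho * v^2 * S * CL(alpha), with a linear CL below stall.
# All constants are invented for illustration.
def lift_newtons(v_ms, alpha_deg, wing_area_m2=16.0, rho_kg_m3=1.225,
                 cl_per_deg=0.1, stall_deg=15.0):
    cl = cl_per_deg * min(alpha_deg, stall_deg)   # crude stall cap
    return 0.5 * rho_kg_m3 * v_ms ** 2 * wing_area_m2 * cl

for alpha in (2, 5, 10, 20):
    print(f"alpha = {alpha:2d} deg: lift = {lift_newtons(60.0, alpha):9,.0f} N")
# Good enough to fly the simulator; useless for designing a real wing.
```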

prjindigo
April 3, 2014 6:06 am

Honestly I think the use of the “forcing” approach is completely wrong. It assumes that there are only a FEW actions that affect temperature and nearly completely ignores convection and atmospheric tidal forces. It also neglects electromagnetic induction as a heat source… which is stupid since you can see it almost every night up in Finland.

bobl
April 3, 2014 6:12 am

I would make the point to Lord Monckton and Cawley that there is a significant problem in that the models are “non-physical”. That is, the models assume or predict situations that are energetically impossible and ignore the energy costs of effects. For example, one paper forecasts a 20% increase in hydrological cycling (rainfall) when the actual imbalance is only capable of increasing cycling by 0.8% before all the imbalance energy is consumed in the additional hydrological cycling and further temperature increase is damped by the negative feedback of evaporation. Cawley also fails to understand that feedback is not a scalar: treating feedbacks as a resultant scalar sum is non-physical. Real feedbacks have an amplitude and a lag. One needs to introduce the square root of negative 1 (a sketch follows at the end of this comment).
I have a huge problem with models that presume to say something about what happens in the climate but ignores the need to establish a physical mechanism, and then prove that the mechanism is actually possible by looking at the energy expenditures within that mechanism in comparison with the available driving energy.
For example, an average ocean wave of 3 m contains almost 30 kW per square meter of wavefront. If we were to assume that waves are driven by wind, which is driven by temperature, then we would have the situation that the effect contains more energy than the cause. Wave energy must therefore predominantly come from somewhere else.
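The amplitude-and-lag point can be illustrated by treating the feedback as a complex gain rather than a scalar; the numbers below are purely illustrative:

```python
# Feedback with amplitude and lag, represented as a complex gain
# 1 / (1 - f) where f = amplitude * exp(i * lag). Values illustrative.
import cmath

def closed_loop_gain(amplitude, lag_rad):
    f = amplitude * cmath.exp(1j * lag_rad)
    return 1.0 / (1.0 - f)

for lag_deg in (0, 45, 90):
    g = closed_loop_gain(0.65, cmath.pi * lag_deg / 180.0)
    print(f"lag = {lag_deg:3d} deg: |gain| = {abs(g):.2f}")
# The same 0.65 amplitude nearly trebles the response with no lag, but
# the amplification collapses once a phase lag is introduced.
```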

Angech
April 3, 2014 6:20 am

Thank you for taking the time to respond to each and every one of us from most of every one of us. I do prefer the more science and less show aspect of this article which was excellent.