By Christopher Monckton of Brenchley
Dr Gavin Cawley, a computer modeler at the University of East Anglia, who posts as “dikranmarsupial”, is uncomfortable with my regular feature articles here at WUWT demonstrating the growing discrepancy between the rapid global warming predicted by the models and the far less exciting changes that actually happen in the real world.
He brings forward the following indictments, which I shall summarize and answer as I go:
1. The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K]. He says: “Cherry picking the interval to maximise the strength of the evidence in favour of your argument is bad statistics.”
The question I ask when compiling the monthly graph is this: “What is the earliest month from which the least-squares linear-regression temperature trend to the present does not exceed zero?” The answer, therefore, is not cherry-picked but calculated. It is currently September 1996 – a period of 17 years 6 months. Dr Pachauri, the IPCC’s climate-science chairman, admitted the 17-year Pause in Melbourne in February 2013 (though he has more recently got with the Party Line and has become a Pause Denier).
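For anyone who wishes to replicate the calculation, here is a minimal sketch in Python. The variable names and data handling are purely illustrative (one loads the monthly RSS anomalies and their decimal dates however one pleases); this is not anyone’s actual code.

import numpy as np

def earliest_flat_start(dates, anomalies, min_months=24):
    # Scan forward from the start of the record and return the index of the
    # earliest month from which the least-squares trend to the present does
    # not exceed zero.  `dates` are decimal years; `anomalies` are monthly
    # temperature anomalies (illustrative names only).
    for start in range(len(dates) - min_months):
        slope = np.polyfit(dates[start:], anomalies[start:], 1)[0]
        if slope <= 0.0:
            return start
    return None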
2. “In the case of the ‘Pause’, the statistical test is straightforward. You just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades.”
No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996 to date is that they tend to depress the long-run trend, which, on the entire dataset from 1979 to date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century. That was two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.
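For what it is worth, the test Dr Cawley describes is simple enough to sketch. The simplification below is mine, not his: it ignores the autocorrelation in the data, which in practice widens the confidence interval considerably, and the function names are invented for illustration.

from scipy import stats

def trends_inconsistent(t_early, y_early, t_recent, y_recent, alpha=0.05):
    # Fit a trend to the earlier period and ask whether it falls outside the
    # (1 - alpha) confidence interval on the recent trend.  A naive sketch:
    # real practice must allow for autocorrelated residuals.
    slope_early = stats.linregress(t_early, y_early).slope
    recent = stats.linregress(t_recent, y_recent)
    halfwidth = stats.t.ppf(1 - alpha / 2, len(t_recent) - 2) * recent.stderr
    return abs(recent.slope - slope_early) > halfwidth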
3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.
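For readers who wish to try that experiment, the exercise amounts to something like the following sketch. The baseline irradiance and the correlation check are illustrative assumptions made here for convenience, not a published reconstruction.

import numpy as np

def tsi_time_integral(monthly_tsi, baseline=1361.0):
    # Accumulate departures of total solar irradiance from an assumed
    # equilibrium baseline (W/m^2), month by month.  Both the baseline and
    # the monthly resolution are assumptions made purely for illustration.
    return np.cumsum(np.asarray(monthly_tsi) - baseline)

# One would then compare the running integral with a temperature series of
# the same length, e.g. r = np.corrcoef(tsi_time_integral(tsi), temps)[0, 1].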
4. The evidence for an inconsistency between models and data is stronger than that for the existence of a pause, but neither is yet statistically significant.
Dr Hansen used to say one would need five years without warming to falsify his model. Five years without warming came and went. He said one would really need ten years. Ten years without warming came and went. The NOAA, in its State of the Climate report for 2008, said one would need 15 years. Fifteen years came and went. Ben Santer said, “Make that 17 years.” Seventeen years came and went. Now we’re told that even though the Pause has pushed the trend below the 95% significance threshold for very nearly all the models’ near-term projections, it is “not statistically significant”. Sorry – not buying.
5. If the models underestimate the magnitude of the ‘weather’ (e.g. by not predicting the Pause), the significance of the difference between the model mean and the observations is falsely inflated.
In Mark Twain’s words, “Climate is what you expect. Weather is what you get.” Strictly speaking one needs 60 years’ data to cancel the naturally-occurring influence of the cycles of the Pacific Decadal Oscillation. Let us take East Anglia’s own dataset: HadCRUT4. In the 60 years to February 2014 the warming trend was 0.7 K, equivalent to just 1.1 K/century. CO2 has been rising at the business-as-usual rate.
The IPCC’s mid-range business-as-usual projection, on its RCP 8.5 scenario, is for warming at 3.7 K/century from 2000-2100. The Pause means we won’t get 3.7 K warming this century unless the warming rate is 4.3 K/century from now to 2100. That is almost four times the observed trend of the past 60 years. One might well expect some growth in the so-far lacklustre warming rate as CO2 emissions continue to increase. But one needs a fanciful imagination (or a GCM) to pretend that we’re likely to see a near-quadrupling of the past 60 years’ warming rate over the next 86 years.
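The arithmetic can be checked on the back of an envelope. In the sketch below the only assumptions added to the figures in the text are the 2014 vantage point and roughly zero warming since 2000.

# Back-of-envelope check of the figures in the paragraph above.
total_needed = 3.7              # K, RCP 8.5 mid-range warming for 2000-2100
warming_since_2000 = 0.0        # K, roughly, given the Pause (assumption)
years_remaining = 2100 - 2014   # 86 years (assumes a 2014 vantage point)
rate_required = (total_needed - warming_since_2000) / years_remaining * 100
observed_rate = 1.1             # K/century over the past 60 years (HadCRUT4, from the text)
print(rate_required)                   # about 4.3 K/century
print(rate_required / observed_rate)   # about 3.9, i.e. almost four times the observed rate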
6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.
No one is “rejecting” the models. However, they have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date. And Dr Cawley’s argument at this point is a common variant of the logical fallacy of arguing from ignorance. The correct question is not whether the models are the best method we have but whether, given their inherent limitations, they are – or can ever be – an adequate method of making predictions (and, so far, extravagantly excessive ones at that) on the basis of which the West is squandering $1 billion a day to no useful effect.
The answer to that question is No. Our knowledge of key processes – notably the behavior of clouds and aerosols – remains entirely insufficient. For example, a naturally-recurring (and unpredicted) reduction in cloud cover in just 18 years from 1983-2001 caused 2.9 W m⁻² of radiative forcing. That natural forcing exceeded by more than a quarter the entire 2.3 W m⁻² anthropogenic forcing in the 262 years from 1750-2011 as published in the IPCC’s Fifth Assessment Report. Yet the models cannot correctly represent cloud forcings.
Then there are temperature feedbacks, which the models use to multiply the direct warming from greenhouse gases by 3. By this artifice, they contrive a problem out of a non-problem: for without strongly net-positive feedbacks the direct warming even from a quadrupling of today’s CO2 concentration would be a harmless 2.3 Cº.
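Readers can check the 2.3 Cº figure for themselves using the standard simplified expression for CO2 forcing and the commonly quoted no-feedback (Planck) response. Both values in the sketch are textbook approximations, not anything taken from the models.

import math

# Direct (no-feedback) warming from a quadrupling of CO2, using the standard
# simplified forcing F = 5.35 ln(C/C0) and a Planck response of ~0.31 K per
# W/m^2.  Both numbers are textbook approximations used here for illustration.
forcing_4x = 5.35 * math.log(4.0)     # ~7.4 W/m^2
planck_response = 0.31                # K per W/m^2, no feedbacks
print(forcing_4x * planck_response)   # ~2.3 K of direct warming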
But no feedback’s value can be directly measured, or theoretically inferred, or distinguished from that of any other feedback, or even distinguished from the forcing that triggered it. Yet the models pretend otherwise. They assume, for instance, that because the Clausius-Clapeyron relation establishes that the atmosphere can carry near-exponentially more water vapor as it warms, it must do so. Yet some records, such as the ISCCP measurements, show water vapor declining. The models also underestimate the cooling effect of evaporation threefold. And they are unable to account sufficiently for the heteroskedasticity evident even in the noise that overlies the signal.
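The Clausius-Clapeyron point can be illustrated with the Magnus approximation to the saturation vapour pressure; the familiar result is an increase of roughly 6-7% in water-holding capacity per kelvin near surface temperatures. A sketch, for illustration only:

import math

def saturation_vapour_pressure(t_celsius):
    # Magnus approximation (standard WMO-style constants), result in hPa, T in deg C.
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Fractional increase in capacity for one extra kelvin near 15 C:
print(saturation_vapour_pressure(16.0) / saturation_vapour_pressure(15.0) - 1.0)  # ~0.065, i.e. 6-7 %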
But the key reason why the models will never be able to make policy-relevant predictions of future global temperature trends is that, mathematically speaking, the climate behaves as a chaotic object. A chaotic object has the following characteristics:
1. It is not random but deterministic. Every change in the climate happens for a reason.
2. It is aperiodic. Appearances of periodicity will occur in various elements of the climate, but closer inspection reveals that often the periods are not of equal length (Fig. 1).
3. It exhibits self-similarity at different scales. One can see this scalar self-similarity in the global temperature record (Fig. 1).
4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).
5. Its evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available (see the numerical sketch below Figure 1).
Figure 1. Quasi-periodicity at 100,000,000-year, 100,000-year, 1000-year, and 100-year timescales, all showing cycles of lengths and magnitudes that vary unpredictably.
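To make points 4 and 5 concrete, here is a minimal numerical sketch – a toy system, not a climate model: two runs of the Lorenz (1963) equations that begin one part in a hundred million apart end up bearing no resemblance to one another.

import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude Euler step of the Lorenz 1963 system (toy illustration only).
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # the minuscule perturbation of point 4
for _ in range(5000):                # 50 model time units
    a, b = lorenz_step(a), lorenz_step(b)
print(np.linalg.norm(a - b))         # separation of order 10: the runs no longer resemble each other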
Not every variable in a chaotic object will behave chaotically: nor will the object as a whole behave chaotically under all conditions. I had great difficulty explaining this to the vice-chancellor of East Anglia and his head of research when I visited them a couple of years ago. When I mentioned the aperiodicity that is a characteristic of a chaotic object, the head of research sneered that it was possible to predict reliably that summer would be warmer than winter. So it is: but that fact does not render the climate object predictable.
By the same token, it would not be right to pray in aid the manifest chaoticity with which the climate object behaves as a pretext for denying that we can expect or predict that any warming will occur if we add greenhouse gases to the atmosphere. Some warming is to be expected. However, it is by now self-evident that trying to determine how much warming we can expect on the basis of outputs from general-circulation models is futile. They have gotten it too wrong for too long, and at unacceptable cost.
The simplest way to determine climate sensitivity is to run the experiment. We have been doing that since 1950. The answer to date is a warming trend so far below what the models have predicted that the probability of major warming diminishes by the month. The real world exists, and we who live in it will not indefinitely throw money at modelers to model what the models have failed to model: for models cannot predict future warming trends to anything like a sufficient resolution or accuracy to justify shutting down the West.
True physics-based volcanic forcing, and the climate’s reaction to it:
http://climategrog.files.wordpress.com/2014/03/tropical-feedback_resp-fcos.png?w=814
If the climate can wipe out Mt Pinatubo’s effects, we can forget CO2.
Henrik Sørensen says:
April 2, 2014 at 4:19 pm
Is “skedasis” cognate with “skedaddle”? Might go back to the same root.
I too build predictive models as a career. I need to predict the chemical and physical stability of new drug products and set product specifications and expiration dates. We conduct lengthy scientific studies as well as required “formal stability studies” and all data and models are freely available to any agency where we are filing the product.
And in my professional opinion as a modeler of chemical reactions, climate models are a giant failure. They explain nothing and predict nothing. You don’t need a degree to see they fail. I don’t see what the AGW crowd is going on about. Your models are junk, get over it, it happens. They look foolish trying to defend them.
I would like to ask a question.
How many people are actually responsible for climate modeling code? For that matter, how many actually have access to view that code?
Who are they?
If you think about it, they are currently driving energy policy, “right now”.
Am I wrong?
This statement presumes that all natural forcings are known. I do not think this is true.
Why is there a state-funded monarchy active in military objectives and interfering with democracy in the so-called United Kingdom?
3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
No one has made a model that explains observed climate.
“The simplest way to determine climate sensitivity is to run the experiment. ”
Yes, I think the only way that the IPCC and other alarmists will tone down their models and alarmist predictions is to give the climate another few decades or so to run the experiment. If temperatures don’t rise much in the next few decades, or even cool (which is my take on the data), this will show what effects things like high solar activity in the 20th century, clouds and the PDO had on the warming in the late 20th century (and indeed since the LIA). Then they will finally come around, the various paradigms will be replaced, and to save face we can thank them for stimulating debate, their ‘excellent’ research that led to the advance of science, etc., etc.
It’s classic Thomas Kuhn.
How boring that these alarmists are grasping at straws. Keep at ’em. Thank you, Lord Monckton.
Repeating the same action again and again, while expecting different results, is stupidity.
Running a climate model again and again with the same initial conditions, getting different results each time, then averaging the results to get a “projection” is both mathematically and scientifically unsound.
Averaging the results of an ensemble of models whose output has already been averaged, to get a “more reliable” final projection is much more than unsound – it’s scientific fraud.
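A toy illustration of the point, with a logistic map standing in for a “model” (purely illustrative, not any actual GCM): runs started from imperceptibly different initial conditions decorrelate completely, and their ensemble mean is a value that no individual run is obliged ever to take.

import numpy as np

def run(x0, steps=200, r=3.9):
    # Iterate the chaotic logistic map from initial condition x0 (toy "model").
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

ensemble = np.array([run(0.5 + 1e-9 * k) for k in range(20)])   # 20 near-identical starts
print(ensemble[:, -1])            # end states scattered right across (0, 1)
print(ensemble[:, -1].mean())     # the "ensemble mean": a value no single run need ever visit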
No … GCM that can explain the observed climate using only natural forcings.
==============
This is completely false and Dr Gavin Cawley should know this.
The IPCC spaghetti graph clearly has some model runs that show no temperature increase, consistent with the pause. Thus the IPCC models themselves are telling us that the pause is within the natural variability predicted by the models.
Look at the IPCC spaghetti graph. The spread between the top and bottom model runs is the models themselves predicting natural variability. They are telling us that climate may follow the lowest result or the highest, without any change in forcings.
The IPCC is being dishonest in saying that climate will follow the mean. The spaghetti graph is telling us that even the models think climate is highly variable without the slightest change in forcings.
But rather than listen to the models, the IPCC constructs an artificial model mean, and tries to sell this as future climate.
Simple: “…winter cool to colder, spring a bit warmer, summer a bit warmer, autumn cooling. Unfortunately we cannot give exact dates when these changes will happen, but they are approximate, with vast variations expected below or above the norm.”
No one can exactly predict climate, but weather can be predicted to a point, with cloud cover and expected rainfall observed by radar and satellite, and with cyclone, hurricane and tsunami warnings and volcanic eruptions. Earthquakes? Yet when I regularly check the weather forecast with BOM or Essential Energy – Stormtracker, we often miss out on expected storms.
Climate is what we expect, weather is what we get! Keep repeating this, and yell it aloud.
“Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability.”
Where was the decision made that the anthropogenic effects of land-use changes on local climate were less important than CO2 production?
4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).
That’s fun, but usually when a butterfly flaps its wings the wave has been overwhelmed by turbulence within 10 inches or so of the butterfly. Chaotic models generally produce oscillations within a range for long periods into the future; they are simply not perfectly periodic, and seldom explosive (the name for the effect quoted in italics). The large Lyapunov exponents make a chaotic model amplify the unknown error in the initial conditions and parameter estimates into unpredictable turbulence much more quickly than it would with a periodic function.
There are models of chaotic phenomena, heartbeat and breathing for example, where forecasts are reasonably accurate several cycles in advance. If the climate has “cycles” of about 60 years, there is no intrinsic reason why a chaotic model cannot reasonably accurately predict the distribution of the weather (mean, variance, quartiles, 5% and 95% quantiles) 200 years into the future. That they don’t do so yet is evidence that they don’t do so yet, not that they can’t ever do so.
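The distinction can be seen in a toy system (a chaotic logistic map; illustrative only, not a climate model): two long runs from quite different starting points disagree completely step by step, yet their long-run quantiles agree closely.

import numpy as np

def trajectory(x0, steps=200_000, r=3.9):
    # Long run of the chaotic logistic map from initial condition x0.
    xs = np.empty(steps)
    x = x0
    for i in range(steps):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

a, b = trajectory(0.123), trajectory(0.789)
for q in (5, 25, 50, 75, 95):
    print(q, np.percentile(a, q), np.percentile(b, q))   # the distributions agree closely
print(a[-3:], b[-3:])                                    # the individual values do not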
Matthew Marler:
Attempts at modeling the climate at long range are hampered by the severe shortage of independent observed events; for example, there are no such events going back 200 years. I imagine that this is not a factor in studies of heartbeat or breathing.
“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
Conversely, no one has made a GCM that can explain the observed climate using CO2 as a major factor either. Is this a joke?
“3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.” ”
So, one can only refute the failed GCMs … with a GCM?
Circular . . . nay, Pretzel Logic, befitting of Bureautard.
They model, therefore They think. We observe, therefore We SEE.
Posted on ATTP re RP JUN, but relevant to time frames with computer models, so I thought I would add it here:
“There is a difference between trending and truth, and a difference between probability and truth.
You may well be able to point out a trend in 20-50 years; heck, there will always be a trend up, down or flat. But imputing significance to it is another matter.
A discernible upward trend in that time interval is only 10% likely to be correct, 90% likely to be wrong.
Given 98 years, on your figures you are 50% likely to be right; at 247 years you are 95% likely to be right. Your words.
If we extrapolate this to surface temps for Marko, we could say that the IPCC is 90% likely to be wrong to advocate action on climate change based on a small 20-50 year trend in temperature changes, particularly when the trend is now flattening rapidly due to the pause.”
Christopher, is this concept right – that a 20-50 year trend is only 10% likely to be correct, i.e. 90% likely to be wrong – and can you use it in your forays?
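I cannot vouch for the exact percentages quoted above, but the underlying point – that a trend fitted to a short, autocorrelated record carries a large uncertainty which shrinks only slowly as the record lengthens – is easy to illustrate. All the parameters in the sketch below (noise size, AR(1) persistence) are assumptions chosen purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

def trend_spread(n_years, sigma=0.1, phi=0.6, trials=2000):
    # Spread of least-squares trends fitted to pure AR(1) noise of length
    # n_years, i.e. how large a spurious "trend" random variability alone
    # can produce.  sigma and phi are illustrative assumptions.
    t = np.arange(n_years)
    slopes = []
    for _ in range(trials):
        noise = np.zeros(n_years)
        for i in range(1, n_years):
            noise[i] = phi * noise[i - 1] + rng.normal(0.0, sigma)
        slopes.append(np.polyfit(t, noise, 1)[0])
    return np.std(slopes)

for n in (20, 50, 100, 250):
    print(n, trend_spread(n))   # the spurious-trend spread falls off only slowly with record length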
Arno Arrak wrote;
“When Nixon canceled the last three moon shots the prime contractor for the Apollo Lunar Lander Module was forced to lay off ten thousand men within a month.”
I could be mistaken, but I believe it was the US House of Representatives that withdrew the funding for the last three moon shots. Back then they controlled the “purse strings”.
After a while one of the top NASA officials observed: “It was probably a good thing we stopped when we did, before some more people (re: Apollo 1) got killed” (paraphrased by myself).
The Apollo missions were indeed amazing. But they also benefited from a string of relatively good luck. The more times you try to do something incredibly complex and risky, the more likely it is to fail in a spectacular manner; just simple statistics. Yes, the Apollo 13 crew made it back safely, but just barely.
My father ran trains and locomotives for a railroad, as a youngster starting out he came up to a “stop sign” (red STOP SIGNAL in RR parlance) quite fast and managed to stop the many thousand ton train JUST before the signal. The “old head” training him got out, walked up to the signal, looked around and said; “Wow, that’s pretty impressive, how many times do you think you can do that ???” Dad gave up his “hot-rodding” ways and went on to a safe 50 year career on the railroad without causing a single fatality.
Regarding climate modeling, I agree, it’s time to “SHUT ER DOWN”.
Cheers, Kevin.
GCMs have trouble predicting weather more than a few days out. What kind of fool would try to argue that a GCM should be able to model climate over decades?
The entire exercise of trend matching is foolishness. There is a downward trend from the Eocene, an upward trend from the LGM. There have been so many ups and downs that we have no meaningful way to evaluate these trends. We really have to get beyond trends and start digging into processes. The problem for my old SKS buddy Dikran et al is that the deeper one digs into processes, the more trends become the only game in town.
The statement that climate variation is heteroskedastic will be as difficult to observationally disprove as the Lambda Cold Dark Matter theory, though both theories ignore the laws of thermodynamics.
Great post! Nature [w.r.t. climate] is not yet represented in the models; therefore the models are useless for predicting the future and determining policy.
evanmjones writes: “You (most emphatically) do NOT start out with a man-to-man simulation, where the design of a machine-gun barrel winds up (spuriously) turning defeat into victory.” I don’t have any particular expertise in wargame modeling – so if I’m completely wrong here, just say so and no hard feelings. What strikes me about your interesting post is whether that barrel design can actually turn defeat into victory over a wide range of top-down inputs (for example, if one side has a huge, well-equipped modern army and the other is on horses with swords, we wouldn’t expect the design of a barrel to affect things much; but for relatively evenly matched armies, is there a bottom-up effect in the real world?). If barrel design can have that effect, that means, I think, that reality acts like a bottom-up model and is intractable from a computational viewpoint over a wide range of top-down inputs. And for things like big wars, we really don’t have the controlled data to tell us whether bottom-up effects are meaningful and over what range of top-down inputs. How would one even go about answering that question? The same questions apply to climate models, I think.
ossqs says (5:58 pm)
“… For that matter, how many actually have access to view that code.”
Well everyone, I think. I downloaded a copy of one of the GCMs a couple of years ago and looked through it. It was a big pile of Fortran code with lots of changes made (with the old code left in, but commented out). When I saw that one of the parameters controlling an equation had THE SIGN CHANGED (not just the value) I decided that it was all a bunch of crap and haven’t wasted much time worrying about the sacred models since. Oh yeah, it was obvious from the history comments that it was originally written by James Hansen — which probably explains why he was always so enamored of it.
Joe Johnson says:
Oh, I forgot to say that at the time I looked at the code, I thought to myself, “If anyone working for me had written such unprofessional crap, I probably would have fired him”…