By Christopher Monckton of Brenchley
Dr Gavin Cawley, a computer modeler at the University of East Anglia, who posts as “dikranmarsupial”, is uncomfortable with my regular feature articles here at WUWT demonstrating the growing discrepancy between the rapid global warming predicted by the models and the far less exciting changes that actually happen in the real world.
He brings forward the following indictments, which I shall summarize and answer as I go:
1. The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K]. He says: “Cherry picking the interval to maximise the strength of the evidence in favour of your argument is bad statistics.”
The question I ask when compiling the monthly graph is this: “What is the earliest month from which the least-squares linear-regression temperature trend to the present does not exceed zero?” The answer, therefore, is not cherry-picked but calculated. It is currently September 1996 – a period of 17 years 6 months. Dr Pachauri, the IPCC’s climate-science chairman, admitted the 17-year Pause in Melbourne in February 2013 (though he has more recently got with the Party Line and has become a Pause Denier).
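For those who want to check the procedure, the search described above can be sketched in a few lines of Python. The function names and the demo series are mine and purely illustrative, not drawn from the RSS data or any published code:

```python
# Sketch of the "earliest month with no positive trend" search described
# above. Names are illustrative; the demo series is synthetic, NOT RSS.

def ols_slope(y):
    """Least-squares linear-regression slope of y against its index."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def earliest_flat_start(series, min_len=24):
    """Earliest start index from which the trend to the end of the
    series does not exceed zero; None if every such trend is positive."""
    for start in range(len(series) - min_len + 1):
        if ols_slope(series[start:]) <= 0:
            return start
    return None

# Synthetic demo: a steady rise followed by a plateau.
demo = [0.005 * i for i in range(120)] + [0.6] * 120
print(earliest_flat_start(demo))
```

The point of the sketch is that the answer is calculated, not chosen: the loop simply scans forward until it finds the first start month whose trend to date is non-positive.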
2. “In the case of the ‘Pause’, the statistical test is straightforward. You just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades.”
No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996-date is that they tend to depress the long-run trend, which, on the entire dataset from 1979-date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century. That was two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.
3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.
4. The evidence for an inconsistency between models and data is stronger than that for the existence of a pause, but neither is yet statistically significant.
Dr Hansen used to say one would need five years without warming to falsify his model. Five years without warming came and went. He said one would really need ten years. Ten years without warming came and went. The NOAA, in its State of the Climate report for 2008, said one would need 15 years. Fifteen years came and went. Ben Santer said, “Make that 17 years.” Seventeen years came and went. Now we’re told that even though the Pause has pushed the trend below the 95% significance threshold for very nearly all the models’ near-term projections, it is “not statistically significant”. Sorry – not buying.
5. If the models underestimate the magnitude of the ‘weather’ (e.g. by not predicting the Pause), the significance of the difference between the model mean and the observations is falsely inflated.
In Mark Twain’s words, “Climate is what you expect. Weather is what you get.” Strictly speaking one needs 60 years’ data to cancel the naturally-occurring influence of the cycles of the Pacific Decadal Oscillation. Let us take East Anglia’s own dataset: HadCRUT4. In the 60 years March 1954-February 2014 the warming trend was 0.7 K, equivalent to just 1.1 K/century. CO2 has been rising at the business-as-usual rate.
The IPCC’s mid-range business-as-usual projection, on its RCP 8.5 scenario, is for warming at 3.7 K/century from 2000-2100. The Pause means we won’t get 3.7 K warming this century unless the warming rate is 4.3 K/century from now to 2100. That is almost four times the observed trend of the past 60 years. One might well expect some growth in the so-far lacklustre warming rate as CO2 emissions continue to increase. But one needs a fanciful imagination (or a GCM) to pretend that we’re likely to see a near-quadrupling of the past 60 years’ warming rate over the next 86 years.
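The back-of-envelope arithmetic in the last two paragraphs is easy to verify. The inputs below are taken from the text; the 86-year remainder is my assumption that the sum is done in 2014:

```python
# Check of the arithmetic above. All inputs come from the article;
# the 86-year remainder assumes the calculation is done in 2014.
projected_total = 3.7            # K projected for 2000-2100 (RCP 8.5)
years_left = 86                  # 2014 to 2100
required_rate = projected_total / years_left * 100   # K/century from now on
observed_rate = 0.7 / 60 * 100   # K/century over the past 60 years (HadCRUT4)

print(round(required_rate, 1))                   # 4.3
print(round(required_rate / observed_rate, 1))   # 3.7, i.e. nearly fourfold
```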
6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.
No one is “rejecting” the models. However, they have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date. And Dr Cawley’s argument at this point is a common variant of the logical fallacy of arguing from ignorance. The correct question is not whether the models are the best method we have but whether, given their inherent limitations, they are – or can ever be – an adequate method of making predictions (and, so far, extravagantly excessive ones at that) on the basis of which the West is squandering $1 billion a day to no useful effect.
The answer to that question is No. Our knowledge of key processes – notably the behavior of clouds and aerosols – remains entirely insufficient. For example, a naturally-recurring (and unpredicted) reduction in cloud cover in just 18 years from 1983-2001 caused 2.9 Watts per square meter of radiative forcing. That natural forcing exceeded by more than a quarter the entire 2.3 W m–2 anthropogenic forcing in the 262 years from 1750-2011 as published in the IPCC’s Fifth Assessment Report. Yet the models cannot correctly represent cloud forcings.
Then there are temperature feedbacks, which the models use to multiply the direct warming from greenhouse gases by 3. By this artifice, they contrive a problem out of a non-problem: for without strongly net-positive feedbacks the direct warming even from a quadrupling of today’s CO2 concentration would be a harmless 2.3 Cº.
But no feedback’s value can be directly measured, or theoretically inferred, or distinguished from that of any other feedback, or even distinguished from the forcing that triggered it. Yet the models pretend otherwise. They assume, for instance, that because the Clausius-Clapeyron relation establishes that the atmosphere can carry near-exponentially more water vapor as it warms it must do so. Yet some records, such as the ISCCP measurements, show water vapor declining. The models are also underestimating the cooling effect of evaporation threefold. And they are unable to account sufficiently for the heteroskedasticity evident even in the noise that overlies the signal.
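For reference, the Clausius-Clapeyron behaviour mentioned above is usually evaluated with a Magnus-form empirical fit. The coefficients below are Bolton’s (1980) commonly quoted values; the snippet merely illustrates the near-exponential carrying capacity of warmer air, not whether the real atmosphere follows it:

```python
from math import exp

def sat_vapor_pressure_hPa(t_celsius):
    """Saturation vapour pressure over water, in hPa.
    Magnus-form approximation with Bolton (1980) coefficients."""
    return 6.112 * exp(17.67 * t_celsius / (t_celsius + 243.5))

# Near-exponential growth: roughly 6-7% more capacity per kelvin near 15 C.
ratio = sat_vapor_pressure_hPa(16.0) / sat_vapor_pressure_hPa(15.0)
print(round((ratio - 1) * 100, 1))   # percent increase for a 1 C step
```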
But the key reason why the models will never be able to make policy-relevant predictions of future global temperature trends is that, mathematically speaking, the climate behaves as a chaotic object. A chaotic object has the following characteristics:
1. It is not random but deterministic. Every change in the climate happens for a reason.
2. It is aperiodic. Appearances of periodicity will occur in various elements of the climate, but closer inspection reveals that often the periods are not of equal length (Fig. 1).
3. It exhibits self-similarity at different scales. One can see this scalar self-similarity in the global temperature record (Fig. 1).
4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).
5. Its evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.
Figure 1. Quasi-periodicity at 100,000,000-year, 100,000-year, 1000-year, and 100-year timescales, all showing cycles of lengths and magnitudes that vary unpredictably.
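The extreme sensitivity described in point 4 can be demonstrated with the simplest chaotic system going, the logistic map. This is a textbook illustration, not a climate model; the starting values and step count are arbitrary:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x)
# at r = 4, the textbook example of deterministic chaos. Illustrative
# only: this is NOT a climate model.

def max_divergence(x0, y0, r=4.0, steps=60):
    """Largest separation two orbits reach over `steps` iterations."""
    x, y, d = x0, y0, abs(x0 - y0)
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        d = max(d, abs(x - y))
    return d

# Perturb the tenth decimal place of the starting point: the two
# deterministic trajectories nonetheless end up wholly unalike.
print(max_divergence(0.2, 0.2 + 1e-10))
```

A perturbation of one part in ten billion grows to order one within a few dozen steps: perfect determinism, yet no practical predictability without perfect initial data.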
Not every variable in a chaotic object will behave chaotically: nor will the object as a whole behave chaotically under all conditions. I had great difficulty explaining this to the vice-chancellor of East Anglia and his head of research when I visited them a couple of years ago. When I mentioned the aperiodicity that is a characteristic of a chaotic object, the head of research sneered that it was possible to predict reliably that summer would be warmer than winter. So it is: but that fact does not render the climate object predictable.
By the same token, it would not be right to pray in aid the manifest chaoticity with which the climate object behaves as a pretext for denying that we can expect or predict that any warming will occur if we add greenhouse gases to the atmosphere. Some warming is to be expected. However, it is by now self-evident that trying to determine how much warming we can expect on the basis of outputs from general-circulation models is futile. They have gotten it too wrong for too long, and at unacceptable cost.
The simplest way to determine climate sensitivity is to run the experiment. We have been doing that since 1950. The answer to date is a warming trend so far below what the models have predicted that the probability of major warming diminishes by the month. The real world exists, and we who live in it will not indefinitely throw money at modelers to model what the models have failed to model: for models cannot predict future warming trends to anything like a sufficient resolution or accuracy to justify shutting down the West.
I believe the models get it so wrong because they are tuned and constructed to replicate the warming from 1980 to 2000, which they do very well.
Unfortunately the modellers have made them so complicated that no one dares to start over again, so they cling to their own bad assumptions, and the only thing they do is build ever more layers on top to disguise the failings.
There are a lot of simulations in use for a lot of physical phenomena, but in every case that I know of where simulations produce useful results, the underlying physics is understood and the equations are merely too difficult to solve given the boundary conditions. Climate science does not understand the underlying physics (any pretense to the contrary is laughable), nor are the boundary conditions known in anything like the detail required. We are many years away from the happy state in which the equations are known but too hard to solve. Modelers are wandering in the dark. It’s pretty hard to incorporate what you don’t know into your models.
I was particularly amused by the assertion that “it’s the best thing we have”. Bleeding was the best way we had of treating most everything in the 16th century. That hardly made it right.
What the climate establishment really doesn’t want to say is what the real state of the science is. Their funders think it is entirely different. Eventually, the truth will out, despite everyone’s best efforts. Our task is to keep them from doing something stupid before that happens.
As soon as I saw the name “Cawley”, the association melan-cawley, or perhaps watermelon-cawley, sprang to mind (sorry).
As son of mulder says, erm, pedants 🙂: once a computer starts processing the (probably kludged; see ?readme from CG1) code, the numbers acquire rounding errors. I would go even further and say that the initial state is itself an approximation, because of the differences between binary and decimal representation.
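The binary-versus-decimal point is standard IEEE-754 behaviour and easy to demonstrate. The snippet below is generic floating-point arithmetic, not taken from any climate model’s code:

```python
# Decimal constants like 0.1 have no exact binary representation, so
# rounding error exists before the first timestep is even computed.
a = 0.1 + 0.2
print(a == 0.3)        # False on IEEE-754 doubles
print(f"{a:.17f}")     # 0.30000000000000004

# Summation order changes the answer too -- the kind of error a long
# iterative integration can accumulate or hide:
small = [1e-16] * 1_000_000
big_first = sum(small, 1.0)       # start from 1.0, add the small terms
small_first = sum(small) + 1.0    # add the small terms first
print(big_first == small_first)   # False: each tiny addend is lost
```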
Further to that, with all of the infilling of missing data points, real accuracy (pardon the pun) is impossible. Many years ago I had a rather picky, but good professor who insisted on “working the units” (doing a reasonableness analysis) before even starting work on a problem (shows that I’m from the slide-rule era…). He was also very careful about accuracy and precision, and the misuse thereof (yep, right, show me a thermometer with four digits….).
Ends up as lies, darned lies, and Climate Models…
Great analysis – I just wish they’d listen (and learn)….
Dodgy Geezer: “The word is not difficult to understand once you realise that a proper education includes Classical Greek…”
I’m pretty sure that most people who know what “heteroskedastic” means didn’t take Greek–and vice versa. (And knowing Latin wouldn’t have given me a clue to what, e.g., “nisi prius” means as a legal term.)
I consider knowing how to say “faithful companions” or “wine-dark sea” in Homeric to be a relic of a misspent youth; don’t let your grandchildren take dead languages.
Prof Christopher English destroys Climate models.
UnCommonly good.
Melord quoth: It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).
I think there is a better analogy. Think back to your childhood and the games we played in playground:
What those models do is play crack the whip with your inputs.
The first kid in line makes a small change in course. By the time the last kid reaches that point, he is flying through the air. And that’s what happens when employing the bottom-to-top type of model we have seen applied: a small error goes in one end and comes out the other magnified manyfold. This is an inherent problem with all such models.
As an erstwhile wargame designer I have seen this play out many a time.
A top-down model, while far more rude and crude, does not go off the rails in such a manner.
The lesson here is if you want to design a reasonable simulation of controlled chaos, say, the Eastern Front, you start with armies and army groups and work your way on down (if at all). You (most emphatically) do NOT start out with a man to man simulation, where the design of a machine-gun barrel winds up (spuriously) turning defeat into victory.
And all that leads directly into your well stated point 5 . . .
Richard, it is Christopher “Essex”, not “English”, but it is still very good.
“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
Maybe that’s because no sceptic would be foolish enough to try with current technology, which cannot run the CFD in the vertical dimension at the global resolution required?
When Callendar tried in 1938 to revive the idea that adding radiative gases to the atmosphere would reduce the atmosphere’s radiative cooling ability, Sir George Simpson had this to say –
“..but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, and it also received heat by transfer from one part to another. In the second place, one had to remember that the temperature distribution in the atmosphere was determined almost entirely by the movement of the air up and down. This forced the atmosphere into a temperature distribution which was quite out of balance with the radiation. One could not, therefore, calculate the effect of changing any one factor in the atmosphere..”
Still as true today as in 1938. If the models cannot properly model all non-radiative transports, then they cannot work. But climastrologists would not dare try to model non-radiative transports properly, because doing so would reveal that the net effect of our radiative atmosphere over the oceans is to cool them. That would defeat the true purpose of the models, which is to serve as propaganda tools.
The IPCC models beg the question. They have coded in that adding CO2 to the atmosphere causes warming so that is what their computer predictions show but Nature shows otherwise.
Let us reason within the context of the greenhouse effect theory.
AGW is based on the idea that adding CO2 to the atmosphere causes its radiative thermal insulation properties to increase because of CO2’s LWIR absorption bands. The insulation causes a restriction in radiative heat flow which results in warming in the lower atmosphere and cooling in the upper atmosphere where earth radiates to space in the LWIR. The warming in the lower atmosphere causes more H2O to enter the atmosphere which results in more warming because H2O is also a greenhouse gas with LWIR absorption bands. This mechanism provides a positive feedback. The results of added insulation and positive H2O feedback is modeled as if it were another heat source but that is not what really happens in the Earth’s atmosphere.
Besides being a greenhouse gas, H2O is a primary coolant in the Earth’s atmosphere, moving heat from the surface to where clouds form via the heat of vaporization. More heat is moved in this manner than by LWIR absorption-band radiation from the surface and convection combined. So more H2O means that more heat is moved, which is a negative feedback that is not factored into the IPCC’s models.
More H2O means that more clouds form. Clouds not only reflect incoming solar energy but provide a more efficient LWIR radiator to space than the clear atmosphere they replace. Clouds thus provide another negative feedback that the IPCC models have ignored.
As the increased insulation warms the lower atmosphere, it cools the upper atmosphere. According to greenhouse effect theory, from space the earth looks like a 0 degree F black body radiating at an equivalent altitude of 17,000 feet. But there is no black body radiating to space at 17,000 feet. Because of the low emissivity of the atmosphere, we are really talking about grey bodies radiating at higher temperatures and hence lower altitudes. It is these lower altitudes, where the actual radiation takes place, that form the cold end of the radiative thermal insulation, so the upper atmosphere I speak of is well within the troposphere. The cooling in the upper atmosphere causes less H2O to appear, which counteracts the addition of more CO2 and provides still another negative feedback.
H2O provides negative feedbacks to the addition of greenhouse gases, which mitigates their possible effect. Negative feedback is inherently stabilizing. The Earth’s climate has been inherently stable to changes in greenhouse gases for long enough for life to evolve. We are here. The IPCC models do not include the negative feedbacks, so they are wrong and hence their results have been wrong. It is all that simple.
The butterfly just jumped the shark.
@AnonyMoose
A number of graphs showing what you want can be found on p28 or 29 of the technical summary of AR5. Yes, straight from the IPCC, would you believe it.
http://www.climatechange2013.org/images/report/WG1AR5_TS_FINAL.pdf
A short answer to the question of why the models can’t predict accurately is that they don’t predict at all. As I’m using the term a “prediction” is an extrapolation across a specified time interval between an observed state of nature and an unobserved but observable state of nature. For example, it is an extrapolation from the state “cloudy” to the state “rain in the next 24 hours.” Observation of the observed state provides the user of the associated model with information about the unobserved state. It is this information that makes it possible to control the associated system.
Each of the two states belongs to a collection of mutually exclusive collectively exhaustive states that is called a “state-space.” A pairing of a state from each state-spaces describes an event. For the global warming models of today, there are no states, state-spaces, events or specified time intervals. The user of the model is provided with no information. Thus, using existing climate models, control of the climate system is not possible.
AnonyMoose says:
April 2, 2014 at 1:46 pm
There is a pair of temperature graphs from the 20th century which show nearly indistinguishable rates of temperature change. One is from before 1950, so must be natural variability.
Is this what you were thinking of:
http://wattsupwiththat.com/2014/03/29/when-did-anthropogenic-global-warming-begin/#comment-1601068
Coin Lurker says:
April 2, 2014 at 2:07 pm
Where did Cawley say these things? I’ve googled for several of the quotes in vain.”
Try dikran marsupial on Google. Quite a lot there. Only the Lord’s word for this being a synonym for Dr Cawley.
Christopher Monckton and Joe Born (April 2, 2014 at 2:17 pm),
I spent about 4 hours yesterday in my local Barnes and Noble Booksellers perusing a copy of the book “The Unpersuadables” by Will Storr while sipping Starbucks triple venti cappuccinos.
There are a dozen or so pages in the book dedicated to Christopher Monckton as a “famous skeptic” about climate. Those pages include mostly background on Monckton and then a brief account of Storr’s interview with Monckton.
Storr casts the discussion of Monckton in a socio-political, not a scientific, way. No climate science was formally addressed.
Storr basically claims to show that Monckton had to be the way he was acting / thinking and was “unpersuadable”. I was not persuaded by Storr that Monckton was “unpersuadable”. : )
John
In my previous post, please change the phrase “from each state-spaces” to the phrase “from each state-space.”
Coyote’s model is pretty good. Better than the so-called professionals.
http://www.coyoteblog.com/coyote_blog/tag/coyote-climate-model
Hear, hear!
According to the 2009 Trenberth ‘Energy Budget’, the IPCC modellers exaggerate the real GHE by a factor of 3 and the real surface mean heat transfer to the atmosphere by the same factor.
To offset this excess warming, they apply incorrect physics at the top of the atmosphere to cool the upper atmosphere. Then in ‘hind-casting’, they claim about 25% more low level cloud ‘reflection’ of solar energy than reality.
These shenanigans have the effect of making the sunlit part of the oceans much warmer and the cloudy bits colder, hence no average temperature rise compared with measured data. However, because the water evaporation rate increases exponentially with temperature, the result is to create the imaginary ‘positive feedback’ needed to give the 3x real GHE.
It’s a clever fraud designed to meet the demands of the politicians and the Mafia who own renewables and carbon trading, for a way to con the Public.
http://citeseerx.ist.psu.edu/viewdoc/download?rep=rep1&type=pdf&doi=10.1.1.210.2513
3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
=============================
Just take a model, any model and remove the CO2 fudge factor, I reckon the accuracy of the model will improve somewhat, maybe by about 97%
“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
What an incredibly stupid statement.
I would comment – No climate scientist has made a GCM that can yet explain the observed climate! No matter what smoke and mirrors they put up, the climate modelling does not match observations! I.e. they do not work!
The significance of the long Pauses from 1979-1994 and again from 1996-date
In other words, during the period 1979 – present, a period of ~ 35 years, there has been 15 + 17 = 32 years of pause!!!
Christopher Monckton of Brenchley: Greatly appreciate your posts here on WUWT.
“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
or rather, “nobody can make a GCM that can explain the observed climate.”
The CSIRO would be lost if you took their modeling toys away, it’s what they do. Here they used models to try telling us that cyclones move coral species around the islands of NW Australia. http://pindanpost.com/2014/04/02/cyclones-are-an-environmental-benefit/
They actually get paid to play with their toys!