Why models can't predict climate accurately

 By Christopher Monckton of Brenchley

Dr Gavin Cawley, a computer modeler at the University of East Anglia, who posts as “dikranmarsupial”, is uncomfortable with my regular feature articles here at WUWT demonstrating the growing discrepancy between the rapid global warming predicted by the models and the far less exciting changes that actually happen in the real world.

He brings forward the following indictments, which I shall summarize and answer as I go:

 

1. The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K]. He says: “Cherry picking the interval to maximise the strength of the evidence in favour of your argument is bad statistics.”

The question I ask when compiling the monthly graph is this: “What is the earliest month from which the least-squares linear-regression temperature trend to the present does not exceed zero?” The answer, therefore, is not cherry-picked but calculated. It is currently September 1996 – a period of 17 years 6 months. Dr Pachauri, the IPCC’s climate-science chairman, admitted the 17-year Pause in Melbourne in February 2013 (though he has more recently got with the Party Line and has become a Pause Denier).
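For readers who want to reproduce the calculation described above, here is a minimal sketch of it in Python. The RSS anomaly file is not reproduced here, so a synthetic monthly series stands in for it; the function itself simply scans forward for the earliest start month whose least-squares trend to the final month does not exceed zero.

```python
# Minimal sketch of the "earliest month with a non-positive trend to present"
# calculation described above. A synthetic monthly anomaly series stands in
# for the RSS record; substitute the real data to reproduce the figure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = pd.date_range("1979-01", "2014-03", freq="MS")
# Hypothetical stand-in: mild warming plus noise (replace with the RSS anomalies).
anoms = pd.Series(0.001 * np.arange(len(months)) + rng.normal(0, 0.1, len(months)),
                  index=months)

def earliest_zero_trend_start(series):
    """Return the earliest start month from which the least-squares linear
    trend to the final month does not exceed zero, or None if there is none."""
    for start in series.index:
        window = series.loc[start:]
        if len(window) < 2:
            break
        x = np.arange(len(window))
        slope = np.polyfit(x, window.values, 1)[0]  # K per month
        if slope <= 0:
            return start
    return None

print("Pause start (synthetic data):", earliest_zero_trend_start(anoms))
```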

2. “In the case of the ‘Pause’, the statistical test is straightforward. You just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades.”

No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996-date is that they tend to depress the long-run trend, which, on the entire dataset from 1979-date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century. That was two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.

3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”

One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.
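For anyone who wants to try the suggestion, a rough sketch of such a time-integral is below. The series is synthetic (a real TSI or sunspot reconstruction would replace it), and the exercise also shows the point a commenter makes later in the thread: the result of an integrating model is very sensitive to the baseline about which one integrates.

```python
# Sketch of a time-integral (cumulative sum) of a TSI-like series about a
# baseline. The series here is synthetic; a real TSI or sunspot reconstruction
# would replace it. Note how strongly the result depends on the baseline chosen.
import numpy as np

years = np.arange(1850, 2014)
rng = np.random.default_rng(1)
# Hypothetical TSI-like series: ~11-year cycle about 1361 W/m^2 plus noise.
tsi = 1361.0 + 0.5 * np.sin(2 * np.pi * (years - 1850) / 11.0) \
      + rng.normal(0, 0.1, years.size)

for baseline in (1360.9, 1361.0, 1361.1):
    integral = np.cumsum(tsi - baseline)   # W yr / m^2, up to an arbitrary constant
    print(f"baseline {baseline}: integral ends at {integral[-1]:+.1f}")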

4. The evidence for an inconsistency between models and data is stronger than that for the existence of a pause, but neither is yet statistically significant.

Dr Hansen used to say one would need five years without warming to falsify his model. Five years without warming came and went. He said one would really need ten years. Ten years without warming came and went. The NOAA, in its State of the Climate report for 2008, said one would need 15 years. Fifteen years came and went. Ben Santer said, “Make that 17 years.” Seventeen years came and went. Now we’re told that even though the Pause has pushed the trend below the 95% significance threshold for very nearly all the models’ near-term projections, it is “not statistically significant”. Sorry – not buying.

5. If the models underestimate the magnitude of the ‘weather’ (e.g. by not predicting the Pause), the significance of the difference between the model mean and the observations is falsely inflated.

In Mark Twain’s words, “Climate is what you expect. Weather is what you get.” Strictly speaking, one needs 60 years’ data to cancel the naturally-occurring influence of the cycles of the Pacific Decadal Oscillation. Let us take East Anglia’s own dataset: HadCRUT4. In the 60 years March 1954-February 2014 the warming trend was 0.7 K, equivalent to just 1.1 K/century. CO2 has been rising at the business-as-usual rate.

The IPCC’s mid-range business-as-usual projection, on its RCP 8.5 scenario, is for warming at 3.7 K/century from 2000-2100. The Pause means we won’t get 3.7 K warming this century unless the warming rate is 4.3 K/century from now to 2100. That is almost four times the observed trend of the past 60 years. One might well expect some growth in the so-far lacklustre warming rate as CO2 emissions continue to increase. But one needs a fanciful imagination (or a GCM) to pretend that we’re likely to see a near-quadrupling of the past 60 years’ warming rate over the next 86 years.
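The arithmetic in the paragraph above can be checked in a few lines; the figures are the ones quoted in the text, and the remaining-years value is taken from the date of writing.

```python
# Check of the arithmetic in the paragraph above, using the figures as quoted.
target_warming = 3.7           # K projected for 2000-2100 under RCP 8.5 (mid-range)
warming_so_far = 0.0           # K assumed realised since 2000, given the Pause
years_remaining = 2100 - 2014  # from the time of writing
required_rate = (target_warming - warming_so_far) / years_remaining * 100
observed_rate = 1.1            # K/century, the 60-year HadCRUT4 trend quoted above
print(f"Required rate: {required_rate:.1f} K/century "
      f"({required_rate / observed_rate:.1f}x the observed 60-year rate)")
```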

6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.

No one is “rejecting” the models. However, they have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date. And Dr Cawley’s argument at this point is a common variant of the logical fallacy of arguing from ignorance. The correct question is not whether the models are the best method we have but whether, given their inherent limitations, they are – or can ever be – an adequate method of making predictions (and, so far, extravagantly excessive ones at that) on the basis of which the West is squandering $1 billion a day to no useful effect.

The answer to that question is No. Our knowledge of key processes – notably the behavior of clouds and aerosols – remains entirely insufficient. For example, a naturally-recurring (and unpredicted) reduction in cloud cover in just 18 years from 1983-2001 caused 2.9 W/m² of radiative forcing. That natural forcing exceeded by more than a quarter the entire 2.3 W/m² anthropogenic forcing in the 262 years from 1750-2011 as published in the IPCC’s Fifth Assessment Report. Yet the models cannot correctly represent cloud forcings.

Then there are temperature feedbacks, which the models use to multiply the direct warming from greenhouse gases by 3. By this artifice, they contrive a problem out of a non-problem: for without strongly net-positive feedbacks the direct warming even from a quadrupling of today’s CO2 concentration would be a harmless 2.3 Cº.

But no feedback’s value can be directly measured, or theoretically inferred, or distinguished from that of any other feedback, or even distinguished from the forcing that triggered it. Yet the models pretend otherwise. They assume, for instance, that because the Clausius-Clapeyron relation establishes that the atmosphere can carry near-exponentially more water vapor as it warms, it must do so. Yet some records, such as the ISCCP measurements, show water vapor declining. The models also underestimate the cooling effect of evaporation threefold. And they are unable to account sufficiently for the heteroskedasticity evident even in the noise that overlies the signal.

But the key reason why the models will never be able to make policy-relevant predictions of future global temperature trends is that, mathematically speaking, the climate behaves as a chaotic object. A chaotic object has the following characteristics:

1. It is not random but deterministic. Every change in the climate happens for a reason.

2. It is aperiodic. Appearances of periodicity will occur in various elements of the climate, but closer inspection reveals that often the periods are not of equal length (Fig. 1).

3. It exhibits self-similarity at different scales. One can see this scalar self-similarity in the global temperature record (Fig. 1).

4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).

5. Its evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.

[Figure 1: four panels]

Figure 1. Quasi-periodicity at 100,000,000-year, 100,000-year, 1000-year, and 100-year timescales, all showing cycles of lengths and magnitudes that vary unpredictably.
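Points 4 and 5 can be illustrated numerically with the Lorenz (1963) system discussed later in the thread. The sketch below is a crude forward-Euler integration, not a climate model: two runs whose initial conditions differ by one part in a million end up on entirely different parts of the attractor within a few tens of model time units.

```python
# Sketch: sensitivity to initial conditions in the Lorenz (1963) system.
# Two trajectories starting a millionth apart diverge to order-one differences.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations (crude but sufficient here)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-6, 0.0, 0.0])   # perturb one coordinate by one part in a million
for step in range(3001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.6f}")
    a, b = lorenz_step(a), lorenz_step(b)

# The separation grows roughly exponentially until it saturates at the size of
# the attractor: the "butterfly effect" described in point 4 above.
```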

Not every variable in a chaotic object will behave chaotically: nor will the object as a whole behave chaotically under all conditions. I had great difficulty explaining this to the vice-chancellor of East Anglia and his head of research when I visited them a couple of years ago. When I mentioned the aperiodicity that is a characteristic of a chaotic object, the head of research sneered that it was possible to predict reliably that summer would be warmer than winter. So it is: but that fact does not render the climate object predictable.

By the same token, it would not be right to pray in aid the manifest chaoticity with which the climate object behaves as a pretext for denying that we can expect or predict that any warming will occur if we add greenhouse gases to the atmosphere. Some warming is to be expected. However, it is by now self-evident that trying to determine how much warming we can expect on the basis of outputs from general-circulation models is futile. They have gotten it too wrong for too long, and at unacceptable cost.

The simplest way to determine climate sensitivity is to run the experiment. We have been doing that since 1950. The answer to date is a warming trend so far below what the models have predicted that the probability of major warming diminishes by the month. The real world exists, and we who live in it will not indefinitely throw money at modelers to model what the models have failed to model: for models cannot predict future warming trends to anything like a sufficient resolution or accuracy to justify shutting down the West.

April 3, 2014 6:21 am

Why the models are not worth considering:
1) The models are composed of formulas, each of which has NOT been proven. The formulas used in the models are conjectures, postulates, unproven theories. Each individual assumption needs to undergo rigorous testing and backtesting. People in this business try to conflate what is proven, that CO2 absorbs radiation from the sun, with the theories in the models, which are UNPROVEN. A model of unproven formulas is unlikely to produce correct results. There are so many formulas in these models, many of which may not even have the signs correct in terms of how they affect things. The IPCC admits that large numbers of things are very uncertain, but then says that the results are certain to 95%. That is simply an unsustainable assertion.
2) The models are gridded approximations of the earth’s surface that are iterated over millions or billions of times. The initial error in the data, which is large, is only magnified millions of times, and the possibility that the result is at all meaningful is zero. The analogy to chaotic-system simulations such as wind tunnels is not sufficient, because there is not enough evidence the formulas are at all correct. Even if they were correct, the initial data are not known well enough to trust the results. The errors in the results are greater than all possible outcomes. For instance, the error bars on the 2100 temperature are more than 30 degrees wide. Any 2100 temperature could be said to fit the models.
3) None of the models is any more predictive than the others. A model which works better than the others for one time period does not work any better in other time periods. They choose to average the models to take out this random effect, but this is indicative of a problem. If any of the models actually had correct physics in them, we would expect one to outperform the others. There would be evidence of efficacy. There isn’t, so they choose to average the models, because the models are really “fits” to the data. If you have 20 fits to the data, it would make sense that averaging the fits would produce a better fit. However, if some models really were better, then averaging would produce a poorer fit, because you would be averaging poor models with good ones. Since that isn’t the case, we can safely say the models are all just expensive fits. Much cheaper fits can be generated without all the machinery they put into these models.
4) Fits to the data mean that there is no proof the models actually represent the physics. Therefore, there is no proof that the models will predict anything outside the backtested and backfitted data used to fit them. The only way to test the models is to take NEW data which have not been incorporated into the models’ fitting and see if the models can predict them. Since the models were created in 1979 and after, the only relevant data are recent data. Recent data do not match the models. That is disproof, because backtested and fitted data cannot be used to “prove” models that were constructed with those data. That is circular logic.
5) They claim the data in some cases are not “fitted”, but experimenter bias is evident in all these results. The modelers all have a bias, and they do not consider all the reasons the models, or the data, could be in error. They literally change the data to match the models and vice versa. For instance, temperature data for the US for the last 120 years have been adjusted by algorithms that are unpublished and not proven. The adjustments modify the historical record significantly, showing that the temperatures of the past 120 years were significantly cooler than we measured at the time. Yet they have not proven these adjustments actually make sense. They have not gone to specific locations and shown why those locations were reporting erroneous data, to demonstrate the efficacy of the modifications they are making to the historical record. If the historical record is off significantly, then the models based on and fitted to this record, which are said to be good matches with these data, would be erroneous. In any case there is simply not enough good data to calibrate the models, given the large uncertainty in most of the data except the most recent (the last 30 years or so).
6) Until the last 30 years, large portions of the earth’s surface and oceans were simply not known with enough accuracy to construct models; the oceans only in the last 13 years or so that ARGO has been in operation. We do not have enough data to construct models. This should be evident. That is not to say we can’t eventually figure this out, just that it is genuinely evil to say you know something you don’t. The modelers and climate scientists simply don’t know, and should admit that this science is still in its infancy and needs time to prove and refine its theories. There is nothing wrong with that. There is something wrong in saying you know something you don’t.
7) In 2007 the IPCC used the fact that its models showed a high correlation with the historical record (which was circular logic, as pointed out above) to say that it had therefore accounted for most if not all natural variability. Based on this analysis it concluded that the warming seen in 1979-1998 must, with 95% certainty, be because of CO2. Since 1998 temperatures have been flat. This means natural variability was not accounted for as they presumed. The models did not account for natural variability. Therefore their assertion of 95% certainty was an unjustified and erroneous conclusion based on poor thinking and poor mathematics. Now they say “likely” and refrain from giving a solid certainty, but it is more severe than that. The level of variability is such that it is not at all clear that any of the warming in 1979-1998 was caused by CO2, or perhaps only a small part of it was, so their ability to predict is zilch.
8) As Lord Monckton has pointed out, and as I have been saying for a long time, the historical rate of change is lower than they predict for the remaining period, requiring a sudden, unproven, nonlinear increase in the rate of temperature change higher than we have ever seen, sustained for an unbelievably long period (i.e. a 4x increase in the rate of change for 80 consecutive years without pause). This assertion needs to be proved; as it is beyond the experience and data we have, it is unlikely that we will see this sudden change in the rate, or that it would be sustained for such a long period. Belief in a sudden change like this is more akin to a religious belief than a scientific one. They cannot show how or why it will happen, other than by pointing to models as if they were magic. We need to see how this sudden massive rate of increase in temperatures is possible, because it seems ridiculous on its face.
9) The CO2 output of humans has really only been significant since 1945. They must admit that any temperature increases from 1880-1945 are natural variability, which actually weakens the argument for CO2. If temperatures between 1880 and 1945 went up as much as between 1945 and 2013, then, since the changes before 1945 were not from CO2, it is possible that the changes, or most of the changes, after 1945 could be from things other than CO2 as well.
Since 1945 the record is confusing, because from 1945 to 1975 temperatures DECLINED during major CO2 production, and now, between 1996 and 2013, temperatures show zero trend even with a massive CO2 increase. So during 1945-2013, while CO2 production has been consistent and rising, 47 of the 68 years have shown no increase, or even a decrease, in temperatures. Yet we are to believe that temperatures will now suddenly spike at a rate 2x or more that of 1979-1998, for 80 years continuously without pause, when the evidence seems to point to CO2 actually being a minor effect on temperature: it was increasing massively during this entire period, and for the vast majority of this time there was no increase in temperature. Something else is clearly at work. Why can they not admit this? It’s obvious to all but the stupidest person. It is certainly possible that CO2 has some effect, but clearly there are other things that have a huge impact, and until those are accounted for it is impossible to make the predictions they claim. Why is this not obvious to everyone?
10) Whatever increase in temperatures is asserted, it is not at all proven that the consequences of an increase are negative. For the last 400 years temperatures have been increasing, and for that entire period humans and animals have generally benefited from the increase. It is extremely unlikely we have just reached the exact inflection point where rising temperatures cause a problem. In fact the IPCC does say, if you look closely, that there is actual benefit from temperature increases of up to a degree more, or even 2 degrees. Therefore, even by their own statements, the net result of all this CO2 may be positive, depending on the level of temperature increase. However, even the 2 degrees is uncertain. Predictions such as that in 80 years, when temperatures hit 2 degrees, food production will decrease are so ridiculous it is impossible to understand how anyone takes them seriously. We have no idea what food production in 2080 will be, but given our growth in knowledge there is zero probability that it will be lower because of 2 degrees. These kinds of things in their models and computations show that the entire thing is complete hogwash.
I think it is worth studying climate. I think it is worth studying many of these things. I am simply saying we don’t know and to say we know and make the assertions they do is academic criminality in my mind because it is so clearly not proven, not known.

Robany
April 3, 2014 6:45 am

“No one is “rejecting” the models.”
I am, and so should everyone else. All the reasons Lord Monckton gives for chaotic objects being unmodellable are correct, but another problem with models is that they are computer programs. When software is developed, it must be tested and validated. Fundamentally this means making sure the software does what its author expects it to do. These expectations may be based on empirical data (GCMs clearly fail to match empirical data) or simply on what the author wants. If a GCM author expects CO2 to be the control knob, then guess what: CO2 is the control knob. As I think Willis has observed, the rest is just tuning the model internals so that the output does the desired thing when the CO2 knob is twiddled.
Computer models, even ones which closely match the data (no known GCM does), do not necessarily tell you anything about the underlying physical mechanisms. I could probably use an n-th order polynomial to give a decent fit to a temperature series, but the coefficients would contain no useful information on the reasons for the variation in that series.
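That point is easy to demonstrate. The sketch below fits an eighth-order polynomial to a synthetic temperature-like series (standing in for a real anomaly record): the fit can be made as close as one likes, yet the coefficients say nothing about the physics generating the series.

```python
# Sketch of the curve-fitting point above: a high-order polynomial can track a
# temperature-like series closely, yet its coefficients carry no physical meaning.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1850, 2014)
# Synthetic "anomaly" series: slow trend + multidecadal wiggle + noise.
anoms = (0.005 * (years - 1850)
         + 0.1 * np.sin(2 * np.pi * (years - 1850) / 60.0)
         + rng.normal(0, 0.1, years.size))

t = (years - years.mean()) / years.std()   # rescale to keep the fit well-conditioned
coeffs = np.polyfit(t, anoms, 8)           # 8th-order polynomial "model"
fitted = np.polyval(coeffs, t)

rms_residual = np.sqrt(np.mean((anoms - fitted) ** 2))
print("RMS residual of the polynomial 'model':", round(rms_residual, 3))
print("Leading coefficients (physically meaningless):", np.round(coeffs[:3], 4))
```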

Reply to  Robany
April 3, 2014 10:30 am

“No one is “rejecting” the models.”
I am and so should everyone else.

@Robany – I agree. Perhaps what CM was saying is that models in general should not be dismissed. That I would agree with. Models do serve a purpose. Even bad ones tell you that you have to go back to the drawing board.

Jason Calley
April 3, 2014 6:52 am

“We keep our monarchy not only because we are proud of our history but also because we are proud of our Queen.”
We have a monarchy here in the United States as well; we subjects just do not know who the monarch (he, she or they) is. In remembrance of the old, lost Constitutional Republic we still hold a traditional mock election every four years, where we choose who shall live in the White House and become the publicity officer for the royals.

April 3, 2014 7:00 am

In answer to Philip Marsh, “heteroskedasticity” is usually spelled “heteroscedasticity” in the UK, but, in deference to the majority of readers here, who are from the United States, I spell it, as they do, with a “k”, like the “k” in “skeptic”. OK?

Philip Marsh
April 3, 2014 7:11 am

Monckton of Brenchley says:
April 3, 2014 at 7:00 am
In answer to Philip Marsh, “heteroskedasticity” is usually spelled “heteroscedasticity” in the UK, but, in deference to the majority of readers here, who are from the United States, I spell it, as they do, with a “k”, like the “k” in “skeptic”. OK?
I stand corrected.

ferd berple
April 3, 2014 7:22 am

Monckton of Brenchley says:
April 3, 2014 at 3:41 am
Not the latest one. See Fig. 11.25a of the Fifth Assessment Report (2013). The trend is now below all models’ outputs in the spaghetti graph.
============
Thank you for the reply. Question:
Is there a graph/table that shows the raw model runs? From what I can see, the IPCC spaghetti graph includes only one ensemble mean per model. Thus, by the time the model runs appear in the graph, they have already been averaged, which hides the variability in each model. All we see is the variability across models.
My point is that the IPCC report itself is hiding the variability in the individual models, and then further hides the variability across models by the use of the ensemble mean. So in effect the IPCC uses an average of averages.
I believe a very informative article for WUWT would be to plot the raw model data for all models, then draw a min-max boundary on the data. This would show the full variance the models are predicting, which is a reasonable measure of natural variability as predicted by the models.
The reason is to show that the models themselves are actually telling us that natural variability is high: that, based on a similar set of assumptions, the models are predicting a very large range of results. This range is not a result of forcings, because the models are drawing their forcing estimates from the same data, so the range must be a result of natural variability as predicted by the models.
This doesn’t mean that the models are correct in their measure of natural variability, rather that the models are telling us that natural variability is high; but by the process of averaging, not once but twice, the IPCC is hiding the variability, which may well be why scientists such as Dr Gavin Cawley believe variability is low.
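The “average of averages” point can be made concrete with entirely synthetic model runs, as in the sketch below. Nothing here uses real CMIP output; it simply shows that the spread across individual runs is wider than the spread across per-model ensemble means, which in turn is hidden entirely by a single multi-model mean.

```python
# Illustration of the point above: averaging runs into per-model ensemble means,
# then averaging across models, progressively hides run-to-run variability.
# All numbers are synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n_models, n_runs, n_years = 20, 5, 30
# Each run: a model-specific trend (K/decade) plus year-to-year "weather" noise.
trends = rng.normal(0.2, 0.05, size=(n_models, 1, 1))
years = np.arange(n_years)
runs = trends * years / 10.0 + rng.normal(0, 0.15, size=(n_models, n_runs, n_years))

final = runs[:, :, -1]              # warming at the end of the period, per run
model_means = final.mean(axis=1)    # one ensemble mean per model (what gets plotted)
multi_model_mean = model_means.mean()

print("Spread across all individual runs :", round(final.std(), 3), "K")
print("Spread across per-model means     :", round(model_means.std(), 3), "K")
print("Multi-model mean (a single number):", round(multi_model_mean, 3), "K")
```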

April 3, 2014 7:35 am

I am in the business of computer ‘modelling’. I can assure Lord Monckton that you cannot reasonably model something as complex and unknown as ‘climate’, whether for one day, one year, or 1000 years; it matters not. Sun and cosmic influences by themselves are impossible to correlate into code. The many-to-many variables in climate [about 1 million] make modelling both the global and local [thermodynamic physics] aspects of climate impossible.
So this ‘computer modeller’ from state-funded East Anglia is just another troll who likely would not be hired in the private market if he is this ignorant, or is just another propagandist with a business card bearing a ‘scientific’ job title. There is no computer-science validity to the cult of warm. See Mann et al. for computing model / stats fraud.

Mark Bofill
April 3, 2014 8:05 am

Lots of good material here. I’d like to comment on a few points of interest.
On Pauses:

No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996-date is that they tend to depress the long-run trend, which, on the entire dataset from 1979-date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century. That was two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.

emphasis added
When we talk about a Pause, there’s an implicit assumption associated, specifically, that there is some trend that has paused. What trend are we talking about? As I understand it, Gavin correctly notes that there is no significant pause in observed trends. Lord Monckton correctly notes that there is a significant pause in IPCC projected trends. What do you mean when you say Pause? Pause in what, projected or observed trends?
For my part, I’m interested in this in the first place because climate science is the singular exception and curiosity I’ve encountered in my life where scientific predictions appear to be failing spectacularly. At least it’s the only exception I’m aware of. I care about IPCC projected trends. That’s what I mean when I refer to a Pause.
On Rejecting Models:
I think sloppy speech can be confusing. Rejecting the models and rejecting the idea that the models are currently good enough for a particular purpose may not be exactly the same thing. I don’t reject the models, I’m not even quite sure what that means. Would that mean I think we should throw them in the trash and quit trying to improve them? I certainly don’t think that. Would that mean I think they have no value whatsoever, in any context? No, I’d seriously doubt that.
All this being said, for any given model it’s critical to understand what the model is good for. What aspects of the system does the model model? What are the model’s capabilities and limitations? A VM (virtual machine), for example, can be a model that very exactly emulates a microprocessor’s execution of any program over its instruction set. A VM can be a very good model of a deterministic system within its scope, such a good model in fact that programs natively compiled for that processor will often execute on a VM without the slightest modification. Are VMs perfect models? Emphatically not! For example, often no effort is made to emulate the timing of the target processor; the VM might run much more quickly or much more slowly than the actual machine. So even a very good model of a predictable, deterministic system like a microprocessor need not be accurate in every metric to have value.
What are the GCMs good for? I don’t know for sure. I expect they are useful for some things. Analyses like those done by Lucia at the Blackboard, however, tell me that GCMs aren’t useful for projecting atmospheric temperature trends on decadal timescales, and that the models probably aren’t good for projecting atmospheric temperature trends, period. Can we improve them? Well, that’d be great if we can. If there are people who are trying, I say more power to them. But let’s not kid ourselves about the capabilities of GCMs as they stand today. I reject using these models for projecting atmospheric temperature trends when it’s been demonstrated that they model this poorly.
A final thought. My opinion is that a good model is the pinnacle of a thorough understanding. Not being able to model certain aspects of a system well does not demonstrate that we know nothing about that system. But being able to accurately model the aspects of a system we are interested in does demonstrate mastery, to my mind. I think that the failure of the GCMs to project atmospheric temperature trends accurately should be an alarm, a wake-up call to those who believe the science is settled and that we understand the Earth’s climate well enough to predict and control it via policy.

Damian
April 3, 2014 8:12 am

Good job. My favorite combination. Low verbiage, high yield. Thanks.

Solomon Green
April 3, 2014 8:50 am

Lord Monckton says:
‘No one is “rejecting” the models. However, they have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date.’
Actually many of us are rejecting the models insofar as they are supposed to provide predictions of future climate.
Forget about the possible fudging of the raw data and the dubious homogenisation of that data. Even if we accept the shaky foundations on which these models have been built, there are far too many variables (parameters, if you prefer) for any linear model, be it deterministic or stochastic, to be valid.
Has anyone ever counted the “forcings” that might be involved in climate change?
In a peer-reviewed paper in a professional journal, I once alleged that there were at least forty and, of the many letters and emails that were received after publication, none disputed this figure.
For those who might be tempted to argue, please note that there are at least six “greenhouse gases” alone. Start counting from there.

Mark Hladik
April 3, 2014 8:53 am

Old’un:
My apologies if this has been brought up, but I clicked your link to the WG1 report, and scanned through it.
It is very interesting!
MOST interesting is a small figure (blowing it up helped these old eyes some … … ) on Page 46 (pagination from the .pdf file, not the ‘listed’ text page).
The second graph on that page is titled “Reconstructed (grey) and Simulated (red) NH Temperature”, right below a reconstruction of TSI. Unless I am mistaken, does this graph not contradict Mikey’s hockey stick? I see a MWP, and a LIA, quite distinctly, somehow correlating to changes in TSI (above).
Could some young eyes take a look at that, and get back to me, ASAP?
Thanks,
Mark H.

April 3, 2014 9:03 am

I think sloppy speech can be confusing. Rejecting the models and rejecting the idea that the models are currently good enough for a particular purpose may not be exactly the same thing. I don’t reject the models, I’m not even quite sure what that means. Would that mean I think we should throw them in the trash and quit trying to improve them? I certainly don’t think that. Would that mean I think they have no value whatsoever, in any context? No, I’d seriously doubt that.
In the world of science, simplifications are used far more often than not, principally because the analytic workload is too high, or it is impossible to accurately qualify all of the significant variables involved.
Both are the case with climate simulations. As Chris accurately states, climate is inordinately susceptible in the short term to the butterfly effect, and in the longer term a qualitative misunderstanding of how the variables applied in the simulations actually work in the real world of climate. Some variables are easy to understand, such as the Milankovitch variables associated with isolation. The variable of CO2 is really not well understood, nor is its relationship to other variables such as water vapor. We do understand the physics at the quantum level of CO2 emission and absorption, but how those fundamental underlying energy transfer mechanisms interact with convection, conduction, variable radiation from other sources is beyond where we are at today.
I have yet to see a qualitative comparison between CO2 absorption/emission spectra as recorded several decades ago by the U.S. military and equivalent spectra today. That would give us a qualitative means to understand exactly what that variable’s impact on the total energy budget of the planet is. This is real science: measuring variables and then estimating the resulting impact on climate. However, we would much rather use computer models based upon flawed assumptions about these variables’ behaviors. These flawed assumptions spring at least somewhat from the destructive circle of the demands that come from federal funding versus what those who fund expect to see.
Models can only tell us so much, and when measurements conflict with models, it is ALWAYS the models that must be modified; the conflict must not be arm-waved away, as with the “missing heat” fallacy, or met with outright denial that the observed data conflict with the models.

PeterinMD
April 3, 2014 9:08 am

Tony Price says:
April 2, 2014 at 6:13 pm
“Repeating the same action again and again, while expecting different results, is stupidity.”
No, it’s insanity, and that pretty much explains this whole Climate Change fiasco!

April 3, 2014 9:09 am

that should be “insolation” above.

MarkB
April 3, 2014 9:14 am

Eric Worrall says:
April 2, 2014 at 11:16 pm
Here’s a nice simple solar integral model, to help those poor CRU modellers get started.
http://woodfortrees.org/plot/hadcrut4gl/from:1850/mean:50/normalise/plot/sidc-ssn/from:1850/mean:50/offset:-40/integral/normalise

Let me be the first to point out that this model predicts a non-existent 30 year hiatus starting ~1910 and misses one starting ~1945.
On a more general note, an integrating model is going to be very sensitive to thresholds and prone to runaway (unless the integration time is limited, in which case this little parlor trick doesn’t work). Constrained by Stefan-Boltzmann it won’t actually blow up, but it wouldn’t be pretty. It would be far preferable if the merely-logarithmic-forcing AGW guys are closer to the truth.

Matthew R Marler
April 3, 2014 9:40 am

Terry Oldberg: Matthew Marler:
Attempts at modeling the climate at long range are hampered by the severe shortage of independent observed events; for example, there are no such events going back 200 years. I imagine that this is not a factor in studies of heartbeat or breathing.

That does not imply, as Lord Monckton wrote, that a chaotic model of a chaotic phenomenon can have no predictive value. It does imply that there is no realistic hope of basing model parameters on multiple cycles of the phenomenon, so the parameter estimates, if based on data, are necessarily more uncertain than with a periodic model observed over multiple cycles.

Matthew R Marler
April 3, 2014 9:45 am

Mike Webb: The statement that climate variation is heteroskedastic will be as difficult to observationally disprove as the Lambda Cold Dark Matter theory, though both theories ignore the laws of thermodynamics.
Heteroskedastic means that the variance is not constant across time and location; e.g., temperatures near the Equator might be less variable than temperatures in Central Missouri at the same day of year and time of day. It is certainly subject to empirical verification or rejection.
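As a sketch of how such an empirical check might look, one could compare the variances of two anomaly series (two locations, or two periods of one series) with a standard variance-equality test such as Levene’s. The series below are synthetic stand-ins, not real station data.

```python
# Sketch of an empirical check for heteroskedasticity: test whether the variance
# of monthly anomalies differs between two locations (or two periods).
# The two series here are synthetic stand-ins, not real observations.
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(4)
tropical = rng.normal(0.0, 0.3, 720)   # hypothetical low-variance tropical anomalies
midlat   = rng.normal(0.0, 1.2, 720)   # hypothetical high-variance mid-latitude anomalies

stat, p = levene(tropical, midlat)
print(f"Levene statistic = {stat:.1f}, p = {p:.2g}")
# A small p-value rejects equal variances, i.e. the series are heteroskedastic
# in the sense used above; a large p-value fails to reject it.
```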

Jim G
April 3, 2014 9:55 am

Excellent piece! Is there very much work out there which looks at the interplay between water vapor/humidity/cloud formation and the TSI reaching various levels within the atmosphere, vis-à-vis temperature? There would, of course, from what analysis I have seen, be significant multicollinearity among water vapor/humidity/clouds and TSI if they were used as independent (causal) variables for temperature.

Matthew R Marler
April 3, 2014 10:04 am

Monckton of Brenchley: Mr Marler says there is no intrinsic reason why a chaotic model cannot reasonably predict the weather 200 years into the future. Yes, there is. It’s the Lorenz constraint. In his 1963 paper, in which he founded what later came to be called chaos theory, he wrote: “In view of the inevitable inaccuracy and incompleteness of weather observations, precise, very-long-range weather forecasting would seem to be non-existent.” And “very-long-range” means more than about 10 days out. See also Giorgi (2005); IPCC (2001, para. 14.2.2.2).
Lorenz, 1963, 51 years ago in an active research field, is not the last word. Note the vague “would seem to be”. A prediction of a functional, like the mean, over 3 periods is not impossible. They are not trying to predict the temperature of Central Missouri on June 11, 2025 at 2:30 pm; they are trying to predict the June 2025 afternoon mean temperature. Clearly they cannot do that now, but merely citing the “butterfly effect” and “chaos” is not sufficient to show that the goal is unachievable. A thorough overview of one field of modeling with dynamic models, including chaotic models, is the book “Dynamical Systems in Neuroscience” by Eugene Izhikevich. Of course, reasonable success in heartbeat, breathing rhythms, and neuronal modeling is no guarantee that the problems of modeling the climate will necessarily be overcome, but it is evidence that universal claims of the non-predictability of chaotic models are false.

April 3, 2014 11:28 am

“No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
To which I reply with a quote from Dr. Saul Perlmutter: “Science isn’t a matter of trying to prove something – it is a matter of trying to figure out how you are wrong and trying to find your mistakes.”
Seems climate science doesn’t realize this.

April 3, 2014 12:05 pm

Patch: Change the value of the variable, “Earth Rotation frequency” , from 0.0000000 to 1.1574074E-5 Hertz.
Your number is not only in error, it is a variable and not a constant.

April 3, 2014 1:39 pm

Mr Marler has breached Eschenbach’s Rule by not quoting me accurately and completely. He says I wrote that “a chaotic model of a chaotic phenomenon can have no predictive value”. What I wrote was that modeling a chaotic object prevented the making of “policy-relevant” predictions – in other words, predictions accurate enough to be acted upon sensibly. And I explained why: the climate object’s evolution is “inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.”
I cited Lorenz on the inaccuracy and incompleteness of weather observations. In the absence of sufficiently precise and well-resolved data, one cannot predict the evolution of a chaotic object more than a few days out. And the science has indeed moved on in the half-century since Lorenz’s paper: it has confirmed the need for precise, well-resolved data before a chaotic object can be modeled reliably in the very long term.
It is startlingly evident that the models are not correctly predicting the one thing everyone wants them to predict: global temperature change. At present, they are not fit for their purpose, and chaos is one of the reasons. A chaotic object, being deterministic, is completely predictable under the condition that the modeler possesses perfect knowledge of both the initial conditions and the evolutionary processes. In the climate, we have neither, and can never acquire the first.

Matthew R Marler
April 3, 2014 2:56 pm

Monckton of Brenchley: Mr Marler has breached Eschenbach’s Rule by not quoting me accurately and completely. He says I wrote that “a chaotic model of a chaotic phenomenon can have no predictive value”. What I wrote was that modeling a chaotic object prevented the making of “policy-relevant” predictions – in other words, predictions accurate enough to be acted upon sensibly. And I explained why: the climate object’s evolution is “inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.”
I cited Lorenz on the inaccuracy and incompleteness of weather observations. In the absence of sufficiently precise and well-resolved data, one cannot predict the evolution of a chaotic object more than a few days out. And the science has indeed moved on in the half-century since Lorenz’s paper: it has confirmed the need for precise, well-resolved data before a chaotic object can be modeled reliably in the very long term.
It is startlingly evident that the models are not correctly predicting the one thing everyone wants them to predict: global temperature change. At present, they are not fit for their purpose, and chaos is one of the reasons. A chaotic object, being deterministic, is completely predictable under the condition that the modeler possesses perfect knowledge of both the initial conditions and the evolutionary processes. In the climate, we have neither, and can never acquire the first.

Here is the second paragraph of my first post: There are models of chaotic phenomena, heartbeat and breathing for example, where forecasts are reasonably accurate several cycles in advance. If the climate has “cycles” of about 60 years, there is no intrinsic reason why a chaotic model can not reasonably accurately predict the distribution of the weather (mean, variance, quartiles, 5% and 95% quantiles) 200 years into the future. That they don’t do so yet is evidence that they don’t do so yet, not that they can’t ever do so.
I acknowledged that the current climate models are not sufficiently accurate, and I directed attention to successful modeling of chaotic processes with chaotic models to show that universal assertions of the impossibility of usefully modeling chaotic processes are not true. What is a “universal assertion of the impossibility of modeling chaotic processes”?
Will this do?
Monckton of Brenchley: At present, they are not fit for their purpose, and chaos is one of the reasons. A chaotic object, being deterministic, is completely predictable under the condition that the modeler possesses perfect knowledge of both the initial conditions and the evolutionary processes.
All models predict only within a range of uncertainty, and only up to a point. The difference between chaotic models and non-chaotic models is that, with estimates of parameters and initial conditions instead of exact values (what we always have), chaotic models become useless faster. However, there is no reason that GCMs of necessity will never be accurate enough for useful predictions of the functionals of the weather distribution (means, variances, quartiles, other percentiles).
For the record, other chaotic models of chaotic systems are the multi-body gravitational problems that are addressed by the programs that guide satellites and space probes. In those cases, parameters and initial conditions are known with sufficient accuracy that the computations produce sufficiently accurate results (the more so because course corrections are possible).
There is no guarantee that GCMs or other models will ever be accurate enough, but there is also no guarantee that they won’t be. On this I disagree with Lord Monckton of Brenchley, as much as I admire most of his work, and his extraordinary dedication.

Matthew R Marler
April 3, 2014 3:02 pm

Another example of a “universal denial” is the implicit assumption of this title: Why models can’t predict climate accurately
Unless there is also an implicit “yet” at the end, I think the implicit assumption is not demonstrated to be true. I think that is a little like predicting that polio (malaria, whooping cough, measles) will never be eradicated because of the difficulties encountered so far.

catweazle666
April 3, 2014 3:27 pm

Oh dear, here we go again.
“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
IPCC Working Group I: The Scientific Basis, Third Assessment Report (TAR), Chapter 14 (final para., 14.2.2.2), p774.
As it was in the beginning, is now, and ever shall be.
Anyone who claims that it is possible to model such a system now or at any point in the future for more than a matter of days – possibly a few weeks at the most – is either utterly misinformed or a confidence trickster.
And there is no “yet”.